Are Social Media Sites Bullies or Protectors?

Young man using mobile phone while working on laptop at the table by freerangestock, licensed under Creative Commons 1.0 Universal

Social media has created a new avenue for vulnerable individuals to be exploited and discriminated against in the shared economy. 

TikTok, a popular short-video-sharing platform, came under fire after it was caught discriminating against people and settings that did not fit 'societal' standards. Perpetuating archaic social standards that reward only conformity can have serious consequences for the mental health and self-esteem of those deemed "misfits." The Intercept obtained documents from TikTok's moderation policy, criteria that heavily outlined physical and environmental features that did not fit ideal beauty or societal standards, such as "abnormal body shape", "chubby" and "beer belly". Ugliness, visible signs of aging, and facial and congenital deformities were also flagged. In addition, videos filmed in slums, rural areas, and underdeveloped locations were among the unacceptable environmental factors the documents outlined. TikTok encouraged moderators to suppress videos that fell under these criteria, reasoning that new users would find such videos unattractive and unappealing, and that they accordingly did not deserve to be recommended on the 'For You' page. In TikTok's defense, spokesperson Josh Gartner argued that the policy was created as an early attempt to prevent bullying (Biddle, Ribeiro, & Dias, 2020). Ironically, however, TikTok's biased treatment in favor of creators it deems "conventionally attractive" suggests the policy was less about protection than about curating an appealing platform, and such double standards are not unique to TikTok among social networking sites.

Social media’s double standards

TikTok, by Solen Feyissa, licensed under Creative Commons 2.0

Social media, as an extension of technology, enables the expression of human behavior (John, 2016). Thus, negative human behaviors, such as discrimination and exploitation, also exist on these platforms. TikTok clearly prioritizes user engagement over protecting its users. This is further supported by Kato (2020), who reported in the New York Post that Mikayla Zazon, a TikTok influencer focused on body positivity and self-love, had over 40 of her videos showing her "cellulite, stretch marks, and body-fat rolls" removed for violating community guidelines, while her conventionally attractive counterparts who post videos in bikinis are not subjected to any violations (Owens, 2021). Discrimination on social media can further be seen in Instagram's hypocrisy: Danielle Cohn, an underage Instagram influencer who regularly posts adult content to an audience aged roughly 12 to 16, was allowed to keep her account and even to post sponsored ads, while Belle Delphine, an adult Instagram influencer who caters to an adult audience, had her account terminated for violating community guidelines with lewd and suggestive content (Gonzalez, 2021).

Moderation in social networking sites 

Moderation was initiated to protect individuals from each other and from content that could be deemed harmful (Gillespie, 2018). However, TikTok's alleged attempt to protect certain individuals from bullying instead subjected them to direct discrimination by the platform itself, which suppressed their videos. TikTok is not the only platform benefitting from such double standards; other social networking sites, such as Instagram, have shown the same behavior. Instagram readily holds adults accountable for violating community guidelines, but hesitates to ban accounts run by minors. This suggests it values profit over protection, exploiting minors by allowing them to continue participating in adult content despite their audiences being young and vulnerable children. It also exposes minor content creators to even more harmful ideas and individuals, such as pedophilia and underage sexual activity.

Trust in social media… or not 

The internet is able to induce trust through a "convenient credibility check": the digital footprint left by online profiles (Gillespie, 2018). When the internet produces trust this easily, it leaves users vulnerable precisely because they trust the platform. In the case of TikTok and Instagram, children and other vulnerable individuals are constantly being exposed to harm. The apparent double standard that the big players in social media continuously allow is slowly setting the standard for what is acceptable when handling user-generated content. The result is that vulnerable individuals learn that body-shaming is okay, that minors posting sexually suggestive content is acceptable as long as they are attractive, that being rich and attractive is the only normalcy, and that any diverse content or appearance is unacceptable and taboo.

Content moderation

On the other hand, moderation can at times be a necessary evil in order to limit the spread of misinformation and provide a safe and secure online community. A prime example of this is TikTok's content moderation focus following the outbreak of the Israel-Hamas war (TikTok, 2023). As times of crisis often breed a heightened degree of vitriolic content that can be harmful to social media users, TikTok's content moderation efforts are crucial in ensuring that hate speech and violence are not perpetuated in the online community. TikTok specifically reworked its automated systems to detect content depicting violence and graphic imagery. It is also adding content moderators who are fluent in Arabic and Hebrew, so that content flowing from the conflict can be moderated with a greater degree of accuracy against its hate-speech and violence policies. Since the enforcement of these measures, TikTok has struck down 500,000 videos and over 8,000 livestreams. As the Israel-Hamas conflict is particularly polarizing, limiting the spread of content that may incite copycat acts of terror or hate speech in other countries is extremely important. For example, a recent protest at the Sydney Opera House saw some members of the crowd reciting antisemitic chants like "F… the Jews" (ABC, 2023). Such rhetoric is clearly unacceptable, and social media companies will thus play a crucial role going forward in content moderation and the protection of marginalized groups.

Indigenous communities and social media

Global panorama by Mark Roy, licensed under Creative Commons 2.0

Further, the potential for social media to act as a "bully" against marginalized groups was also evidenced by the misinformation spread in relation to the Voice referendum (Besser, 2023). The No campaign relied heavily on social media activity to sow seeds of doubt about the Voice among the Australian people. For example, a video circulated on social media used clips of PM Anthony Albanese to suggest that a successful referendum would see "the UN take your house" (Remeikis & Butler, 2023). Without a constitutional Voice, Indigenous peoples will continue to face disadvantage, and closing the gap will remain out of reach. Yet, reflecting the double-edged sword that social media can be for Indigenous peoples, it has also traditionally played a crucial role in promoting and protecting intergenerational relationships. Private social media groups form the basis for Indigenous peoples to share traditional culture with each other in an environment free of hate speech. In fact, 81% of Indigenous survey respondents reported that participating in online Indigenous communities boosted their confidence in their identity (Carlson, 2017). Overall, it falls on social media companies to effectively moderate the content on their platforms so the benefits of social media can be fully harnessed for our shared economy.


In conclusion, moderation is an essential tool in determining whether social media harms or protects its users. Moderators who allow double standards to discriminate against and exploit minors, diverse bodies, and unglamorous environments in pursuit of profit and user engagement do more harm than good. However, if social media sites' only agenda is to protect their users, social media can create a safe place for the shared economy to communicate and interact.

Reference list 

Gillespie, T. (2018). All platforms moderate. In Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 1–23). Yale University Press.

Besser, L. (2023, October 16). With the lies and disinformation of Voice referendum exposed, Australia is sleepwalking towards the future. ABC News.

Biddle, S., Ribeiro, P. V., & Dias, T. (2020, March 16). Invisible Censorship: TikTok Told Moderators to Suppress Posts by “Ugly” People and the Poor to Attract New Users. The Intercept.

Carlson, B. (2017, April 27). Why are Indigenous people such avid users of social media? | IndigenousX. The Guardian.

Gonzalez, I. (2021, October 12). Social media’s exploitation of young users continues to negatively impact children and teenagers. Highlander.

John, N. A. (2016). The age of sharing. Polity Press.

Kato, B. (2020, July 29). Curvy influencers say TikTok banned their body-centric videos. New York Post.

TikTok. (2023). Our continued actions to protect the TikTok community during the Israel-Hamas war. TikTok Newsroom.


Premier condemns “horrific” comments at pro-Palestinian Sydney Opera House rally. (2023, October 9). ABC News.

Remeikis, A., & Butler, J. (2023, October 11). Voice referendum: factchecking the seven biggest pieces of misinformation pushed by the no side. The Guardian.