This article discusses how stakeholders currently prevent the spread of violent content online and what they should do to maintain a safe and protected experience for all users. Hate speech and harassment are now such common online experiences that many people anticipate them as part of participating on the Internet. Stakeholders are the groups affected by this content, which raises the question: who should be responsible for preventing its continued spread, and how should they go about it?
Throughout the development of internet platforms, the ability to share subjective opinions has flourished and has been tied to ideals of free speech. However, some individuals have taken advantage of this freedom and produced content that is harmful to certain people or groups. According to the Cyberbullying Research Center, 60 percent of teenagers have experienced cyberbullying in their lifetime (Patchin, 2022). This figure reveals the extent of the problem, even allowing that the data is limited to a sample of American teenagers.
Violent content takes many forms, such as indirect Tweets, Facebook posts, TikTok videos and Instagram reels, and its exposure increases with the post's engagement. This has become a growth strategy for accounts, as controversial or hateful content is likely to attract comments disagreeing with the posted idea. This form of online toxicity, in which an individual targets other people or groups to gain internet fame, or the now-popularised 'clout', harms other users and should not be supported by a platform's algorithm. But how should a platform monitor all these accounts?
TikTok alone has reported over 1 billion monthly active users (Bursztynsky, 2021), which shows the scale at which companies would have to regulate their own platforms to maintain user security and protection. It is in users' best interest to choose platforms that not only enrich their lives but also protect their data and their experience.
Major social media platforms such as Facebook, Twitter, YouTube and TikTok have been able to enforce rules that protect the communities on their respective sites and help create a friendly environment. This regulation serves the public interest by keeping users safe and exposed only to harmless content, ultimately allowing loyal users to continue their experience on the platform (Napoli, 2019).
Regulating online content also benefits the companies themselves, as it promotes a positive, family-friendly image that suggests suitability for most age groups. This results in more users and, subsequently, more income, whether through advertising, data selling, or other means. However, earning income by selling data is unfair to users, who expect the company to protect that data; the temptation of this income has evidently overridden fair judgement in the past (Brown, 2015). For example, in 2018 Facebook was found to be sharing user data with other tech companies such as “Amazon, Apple, Microsoft, Netflix, Spotify and Yandex” (BBC News, 2022). This lack of care for a platform's users calls into question how seriously companies take censoring or regulating content on behalf of those users.
Regulation on platforms today occurs on both the user and company sides. A company will typically take down violent content through an algorithm that flags material containing certain phrases, sounds, or images that do not follow its guidelines (Brannon, 2017). Censoring negative content in this way provides a reasonably quick response and prevents the content from being exposed to more users. Platforms can monitor content to a satisfactory standard, as strikes and removals usually occur within the first 24 hours of an upload. However, as video becomes a larger share of our online consumption, it is becoming harder to regulate the speech and images of all users. This slows the company's response and consequently gives a post more time to spread among viewers.
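To illustrate the phrase-based flagging described above, consider the following minimal sketch. This is a hypothetical example, not any real platform's system: the phrase list, function name, and posts are all invented for illustration, and production moderation pipelines rely on machine-learning classifiers for text, audio, and images rather than simple keyword matching.

```python
# Hypothetical sketch of phrase-based content flagging.
# Real platforms use far more sophisticated classifiers;
# this only shows the basic idea of matching banned phrases.

BANNED_PHRASES = ["example slur", "example threat"]  # placeholder list


def flag_post(text: str, banned=BANNED_PHRASES) -> bool:
    """Return True if the post contains a banned phrase and
    should be queued for review or removal."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in banned)


posts = [
    "Have a great day everyone!",
    "This post contains an EXAMPLE THREAT against a user.",
]
flagged = [p for p in posts if flag_post(p)]  # only the second post is flagged
```

A real system would also weigh context, user reports, and repeat offences before issuing a strike, which is part of why moderation at the scale of a billion users is slow and imperfect.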
A recent example is Andrew Tate, who created content promoting misogynistic values and showcasing wealth (Das, 2022), convincing younger, more impressionable boys that this was the way to attain the life they wanted, with wealth and women. For a while, most people engaged with this content for entertainment or information. However, as more information surfaced about Tate, the sexual offence and human trafficking allegations against him let people see the extent of his misogyny (Rifkind, 2022). Only then were platforms forced to take his content seriously, and he was finally deplatformed from YouTube, TikTok, Facebook and Instagram (Sung, 2022). This has heavily reduced his exposure on the internet and brought him closer to irrelevancy.
Deplatforming and content removal are effective ways to prevent the spread of violent and harmful content online. They have limitations, however, such as the time it takes for content to be deleted or for users to be banned. Users' ability to report a piece of content or an account helps with this, but smaller issues will not be resolved immediately, again allowing the content to gain greater exposure.
Regulation by platforms is relatively fast and effective, but with certain content spreading across platforms, it is imperative that these companies share similar standards and impose penalties consistent with violations of their guidelines.
With companies holding almost ultimate power over content on their respective platforms, government intervention is not necessary to maintain order over harmful content online. Much of this problematic content is not punishable by law (Cyberbullying, 2022), but when issues extend past the virtual world into the offline one, laws can take effect. This is evident in the Tate case, where the human trafficking and sexual offence allegations are what allowed companies and police to take greater action.
Governments therefore have little influence over the regulation of problematic content online, and companies retain the greater responsibility to enforce rules on their platforms.
Lastly, users are the stakeholders affected most. Through their contributions to all platforms and their consumption of media, they are the first to witness such content. Platforms allow them to report users or posts and to block certain content from their timeline or feed, so that they need not see content they do not wish to see.
Overall, social media companies should hold the most responsibility for preventing the spread of problematic content online. Users can be held accountable for sharing such content, and they can also report or block harmful posts. Governments hold little influence online, particularly over cyberbullying and problematic content.
Ultimately, following community guidelines as a user will create a harmonious atmosphere on these platforms and provide everyone with the best experience online.
Adamic, L., & Huberman, B. (1999). Internet: Growth dynamics of the World-Wide Web. Nature, 401(6749), 131-132. https://doi.org/10.1038/43604
Brannon, V. C. (2017). Free Speech and the Regulation of Social Media Content. Congressional Research Service.
Brown, H. (2015). Does globalization drive interest group strategy? A cross-national study of outside lobbying and social media. Journal Of Public Affairs, 16(3), 294-302. https://doi.org/10.1002/pa.1590
Bursztynsky, J. (2021). TikTok says 1 billion people use the app each month. CNBC. Retrieved 5 October 2022, from https://www.cnbc.com/2021/09/27/tiktok-reaches-1-billion-monthly-users.html.
Cyberbullying. NSW Government. (2022). Retrieved 9 October 2022, from https://www.police.nsw.gov.au/safety_and_prevention/crime_prevention/online_safety/online_safety_accordian/cyberbullying.
Das, S. (2022). Inside the violent, misogynistic world of TikTok’s new star, Andrew Tate. The Guardian. Retrieved 7 October 2022, from https://www.theguardian.com/technology/2022/aug/06/andrew-tate-violent-misogynistic-world-of-tiktok-new-star.
Dijck, J., Poell, T., & Waal, M. (2018). The platform society (pp. 5-32). Oxford University Press.
Facebook’s data-sharing deals exposed. BBC News. (2022). Retrieved 7 October 2022, from https://www.bbc.com/news/technology-46618582.
Help Centre: Community Guidelines. Instagram. (2022). Retrieved 6 October 2022, from https://help.instagram.com/477434105621119/?helpref=uf_share.
Napoli, P. (2019). User Data as Public Resource: Implications for Social Media Regulation. SSRN Electronic Journal, 11(4), 439-459. https://doi.org/10.2139/ssrn.3399017
Patchin, J. (2022). Summary of Our Cyberbullying Research (2007-2021). Cyberbullying Research Center. Retrieved 5 October 2022, from https://cyberbullying.org/summary-of-our-cyberbullying-research.
Rifkind, H. (2022). ‘I’m not a f..king rapist, but I like the idea of doing what I want’. The OZ. Retrieved 8 October 2022, from https://www.theaustralian.com.au/the-oz/news/im-not-a-fcking-rapist-but-i-like-the-idea-of-doing-what-i-want/news-story/f91fc7bd07bf45b5706c8687a7040903.
Sung, M. (2022). Andrew Tate banned from YouTube, TikTok, Facebook and Instagram. NBC News. Retrieved 7 October 2022, from https://www.nbcnews.com/pop-culture/viral/andrew-tate-facebook-instagram-ban-meta-rcna43998.
Vogels, E. (2021). The State of Online Harassment. Pew Research Center. Retrieved 5 October 2022, from https://www.pewresearch.org/internet/2021/01/13/the-state-of-online-harassment/.