Who is responsible for problematic content, and how do we stop it?

An investigation into problematic content circulating on the internet.

The internet has grown into a fundamental aspect of modern society. The rise of Web 2.0 brought a large online economy, turning the internet into a commercial platform, and with this boom came more users and businesses. Social media platforms soon took over the internet: Myspace became the first social media site to reach a million monthly active users in 2004, which is arguably the beginning of social media as we know it (Ortiz-Ospina, 2019). Early studies of “cyberspace” and “virtual community” highlighted how individuals and communities would use the internet as a medium for social interaction and individual expression (Rheingold, 1993; Turkle, 1995). Can you imagine a world without the internet and social media? Not really, right? The internet is completely intertwined with almost everything we do; Apple Watches, online wallets, phones and much more have been integrated into our lives. The case of Myspace demonstrates how central social media platforms are to internet use: in 2006 it temporarily surpassed Google as the most visited website in the US (Ortiz-Ospina, 2019).


As the graph from Our World in Data shows, in 2004 Myspace, then the only major social media platform, had around 900 thousand users; by 2019, platforms such as Facebook and YouTube each had around 2 billion users (Ortiz-Ospina, 2019).

The internet is an ever-growing space, and it is important that it be used effectively and safely. Yes, the internet provides accessible content regardless of what you’re looking for. Yet, sadly, the good comes with the bad: bullying, hate speech and other problematic content circulate widely online. Section 230 of the Communications Decency Act is one of the most valuable tools for protecting free speech on the internet and has played a key role in shaping the global internet. It states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”, effectively shielding platforms from legal liability for content their users post.

The internet allows for free speech and limitless individual creativity. Clucas (2020) states that the comment sections of self-regulating online communities develop their own cultural norms and established rules regarding tone, format, and practices. However, individuals sometimes take such freedom for granted, especially when afforded anonymity. The conventional rules of conversation and human interaction are often disregarded or modified online, and unethical users frequently interpret freedom of speech as a “right to hate” (Clucas, 2020). Examples of unethical content circulation include revenge porn, cyberbullying and harassment. Data from the Pew Research Center shows that 41% of American adults have personally experienced some form of online harassment, a figure that continues to grow (Vogels, 2021). Additionally, researchers Karsten Müller and Carlo Schwarz found that increases in violent crimes like assault and arson paralleled increases in posts that incite hatred.

So who is responsible?

If we look at the internet, there are three main stakeholders involved in content creation and circulation: firstly the platforms, secondly the government, and lastly the content creators.


The Platforms

Over the past few years, social media platforms have faced scrutiny over their lack of action on hateful speech, and each has gradually implemented different methods to demonstrate involvement. With an estimated 500 million tweets posted each day, Twitter has established a combination of human- and machine-driven editorial systems to promote or filter content (Yaraghi, 2019). Twitter has also released a statement on its hateful conduct policy, which states: “We believe that everyone should have the power to create and share ideas and information instantly…. there are some limitations on the type of content and behaviour that we allow” (Twitter’s Policy on Hateful Conduct, n.d.). Almost every social media platform now uses algorithms and artificial intelligence to filter and remove content that does not align with its regulations. However, how do these platforms decide what is right and wrong?

For example, as Facebook tried to maintain access to national markets or to shield itself from legal culpability, activists and journalists found themselves subject to censorship in various nations and disputed territories, including the Palestinian territories, Kashmir, and Crimea (Laub, 2019). The company’s hate-speech policies, according to ProPublica, “tend to favour elites and governments over grassroots activists and racial minorities.”

Platforms have developed strategies and policies to address hateful content, but these are insufficient on their own to halt its spread.

The Government

Governments have come to understand the seriousness of technology as it has advanced over time, putting laws like Section 230, copyright law, and obscenity law into effect. However, different governments have implemented different rules. In countries like the United States, social media companies are granted wide authority to control their own content and uphold laws against hate speech. Others, like Germany, have the authority to require companies to remove offending posts within a specific time frame (Laub, 2019). Some governments, like China’s, have gone further and prohibited significant social media sites like Facebook and TikTok altogether. Governments hold a special power to decide which platforms should be available to users (Barry, 2022), and government censorship and surveillance shape the types of content consumers can access. Yet while governments hold large power over platform regulation, some can be seen favouring certain platforms and creating laws that benefit an agenda, even harvesting user data to advance a campaign, as in the Cambridge Analytica scandal. Social media use therefore cannot be regulated by government legislation alone.

Content Creators


Consumers wield considerable influence. Users and platforms coexist symbiotically, with platforms needing a constant stream of new content to remain current and engaging. Because of this, platforms have lately made it possible for users to customise their pages to filter out particular trigger phrases or content, meaning users can restrict their own exposure to abusive sites or accounts. Additionally, a phenomenon known as “cancel culture” has emerged in recent years: withdrawing support so that someone loses a large public platform or career as a result of a mistake they made in public. The effectiveness of cancel culture remains a topic of public debate.


A small step towards better

The internet is a networked community that requires all parties to work cohesively: platforms, users, and governments must all put effort into limiting the creation and circulation of problematic content. Firstly, platforms hold responsibility and should be diligent in moderating and removing violent remarks; they are required to guarantee that the platform is safe for marginalised users and adequate for productive engagement (Gillespie, 2018). Secondly, the internet has changed the way laws are traditionally applied, necessitating new legislation suited to the culture of digital media. For instance, new regulations against image-based abuse, such as revenge porn, have been passed in Tasmania, but given the rising prevalence of revenge porn, they appear to have had minimal social impact (Bowes, 2022). As a result, online rules need to be enforced more strictly and with greater care. Lastly, content creators and users should have a louder voice in shaping the legislation and punishments regarding harmful content.



Abbate, J. (2017). What and where is the Internet? (Re)defining Internet histories. Internet Histories, 1(1–2), 8–14. https://doi.org/10.1080/24701475.2017.1305836


Clucas, T. (2020). Don’t feed the trolls: Social Media and the Limits of Free Speech. In S. Polak (Ed.), Violence and Trolling on Social Media: History, Affect, and Effects of Online Vitriol. Amsterdam University Press. https://doi.org/10.1515/9789048542048-004


Gillespie, T. (2018). All platforms moderate. In Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press. https://www.degruyter.com/document/doi/10.12987/9780300235029-001/html


Laub, Z. (2019). Hate speech on social media: Global comparisons. Council on Foreign Relations. https://www.cfr.org/backgrounder/hate-speech-social-media-global-comparisons#chapter-title-0-4


Ortiz-Ospina, E. (2019). The rise of social media. Our World in Data. https://ourworldindata.org/rise-of-social-media#:~:text=Social%20media%20started%20in%20the,media%20as%20we%20know%20it


Section 230 of the Communications Decency Act. (2022). Electronic Frontier Foundation. Retrieved 12 October 2022, from https://www.eff.org/issues/cda230


Twitter’s policy on hateful conduct. (2022). Twitter Help Center. https://help.twitter.com/en/rules-and-policies/hateful-conduct-policy


Vogels, E. A. (2021). The state of online harassment. Pew Research Center. https://www.pewresearch.org/internet/2021/01/13/the-state-of-online-harassment/


Yaraghi, N. (2019). How should social media platforms combat misinformation and hate speech? Brookings Institution. https://www.brookings.edu/blog/techtank/2019/04/09/how-should-social-media-platforms-combat-misinformation-and-hate-speech/