Who Should Be Responsible for Stopping the Spread of Problematic Content on Digital Platforms?


"Personal social media landscape" by Anne Helmond is licensed under CC BY 2.0.
Personal social media landscape” by Anne Helmond is licensed under CC BY 2.0.

Browsing, sharing and spreading information online has become the norm as the digital age evolves. Digital platforms organize people into networked publics and give them the opportunity to interact with a far broader range of people. The benefits of digital platforms are obvious, but so are the perils: pornography, obscenity, violence, illegality, abuse, hate and other problematic content (Gillespie, 2019). We must therefore revisit who should be responsible for these growing problems. Network users post content; digital platforms host, organize, and share that content; and platforms operate only with the local government’s permission. The platforms, users, and the government are therefore all responsible for poor internet behaviour. This essay examines why and how each of these three parties should be responsible for stopping the spread of problematic content.


Digital Platform’s Responsibilities

"Me and my 542 bestest friends (on Facebook)" by Terry Chay is licensed under CC BY 2.0.
Me and my 542 bestest friends (on Facebook)” by Terry Chay is licensed under CC BY 2.0.

Digital platforms should be the first to take responsibility for stopping the spread of problematic content. Platforms sit in the middle: between users, between users and the public, between citizens and law enforcement, and between policymakers and regulators (Gillespie, 2018). Since Facebook was founded in 2004, it has brought friends and family closer, making communication barrier-free and effortless. Yet Facebook also carries problematic content. In the three months leading up to the 2016 US presidential election, fake news on Facebook, such as claims that the Pope endorsed Trump or that Hillary Clinton sold weapons to ISIS, attracted more engagement than real news from reliable sources (Ritchie, 2016).

“Fake News – Computer Screen Reading Fake News” by Mike Mackenzie is licensed under CC BY 2.0.

This was not the first time fake news had appeared on Facebook; such fabricated stories often originate on websites masquerading as legitimate news sources (Big Think, 2016). Zuckerberg responded, “No one in our community wants fake information; we are also victims of this and we do not want it on our service”, a response criticized as an attempt by Facebook to avoid responsibility (Crook, 2017). Facebook should be responsible because it accelerates the spread of such information and amplifies fear among the public. Digital platforms are not just tools for transmitting information; they also shape public discourse.


“Why Facebook Needs to Take Responsibility for Fake News” by Big Think. All rights reserved. Retrieved from: https://www.youtube.com/watch?v=nCHSOTnXpB8&t=9s

Nowadays, digital platforms have penetrated our social and economic lives so comprehensively that our online and offline lives are highly integrated. So when inappropriate content appears, platforms should be the first to step up and take responsibility rather than running away or turning a blind eye. Platforms currently use systems that automatically filter out inappropriate content, supplemented by people employed to review content manually. However, automated systems still miss some inappropriate content, and manual review can be psychologically damaging because employees must view disturbing material (Lee, 2017). Digital platforms should therefore invest in better moderation systems whose algorithms monitor content continuously. Platforms should also foster a positive internet atmosphere, but this requires working together with internet users.
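The limits of automatic filtering described above can be seen even in a minimal sketch. The snippet below is purely illustrative, not how any real platform works: a simple keyword blocklist (the terms here are hypothetical placeholders) catches exact matches but is easily evaded by obfuscation, which is one reason platforms still rely on human reviewers.

```python
# Minimal, illustrative sketch of keyword-based content filtering.
# Real platform moderation combines machine-learning classifiers with
# human review; the blocklist below is a hypothetical placeholder.

BLOCKLIST = {"threat", "slur"}  # hypothetical blocked terms


def flag_post(text: str) -> bool:
    """Return True if the post contains a blocklisted word (exact match only)."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return not BLOCKLIST.isdisjoint(words)


print(flag_post("This is a threat"))   # caught: exact match
print(flag_post("This is a thr3at"))   # missed: trivial obfuscation evades the filter
```

The second call shows the core weakness: a single character substitution slips past the filter, so purely automated blocklists can never catch everything.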


Network Users’ Responsibilities

Network users should be responsible for stopping problematic content because they are its most fundamental source. In 2011, the video “Am I pretty or ugly?” went viral and gained much attention on YouTube (wickedlemons1, 2011). The girl in the video asked users to comment truthfully on her appearance.

“Am I Pretty/ Am I Ugly?” by wickedlemons1. All rights reserved. Retrieved from: https://www.youtube.com/watch?v=P2IESvDMBbw

“Cyberbullying” by Diari La Veu is licensed under CC BY 2.0.

Rossie (2015) noted that the video attracted many comments using eroticized language or expressing violent intent, such as “ugly as shit u look like a dying walrus”, “your lips are best for BJ’s”, and “You’re a narcissist and an attention whore”. (Rossie presented this feedback directly and without censorship to reflect the informal tone of YouTube as a space.)

Problematic content is increasing as people’s lives move further online. Vogels (2021) found that 41% of American adults have experienced online harassment. Swansea University (2018) found that young victims of cyberbullying are twice as likely to attempt suicide and self-harm.

Although algorithms can automatically issue warnings and try to stop inappropriate content, the world’s 4.7 billion social media users (Statista Research Department, 2022) generate an enormous amount of problematic content daily. Filtering out all inappropriate content cannot rely on platforms alone. Most people would prefer platforms to take responsibility for the spread of problematic content, but the best way to stop it is for users not to post such content in the first place (Gillespie, 2019). Users need to remember to “do the compliment, do not slander”; only if users monitor their own behaviour can the online atmosphere gradually change for the better.


The Government’s Responsibilities

“Australia Flag and Country” by Global Panorama is licensed under CC BY 2.0.

The government should be responsible for stopping problematic content because it has a legal obligation and greater power to control the spread of content. Cyberbullying is becoming a significant legal issue (Rademeyer & Mitry, 2018). The eSafety Commissioner (2021) reported that 44% of young Australians had a negative online experience in the last six months, including 15% who were threatened or abused online. However, it remains unclear which forms of cyberbullying are illegal in Australia: under the Criminal Code Act 1995, only serious online harassment and bullying is a crime (Ly Lawyers, 2020).

This is unreasonable and irresponsible. Firstly, people are far more active on digital platforms today than in 1995, yet the Australian government still judges current cyberbullying by 27-year-old legislation. Secondly, the criterion of “serious” is confusingly vague; such an ambiguous word cannot capture the harm done to victims of cyberbullying. The Australian government therefore needs to update its laws on cyberbullying.

“Anonymous” by Matt Westervelt is licensed under CC BY 2.0.

On top of this, some users post inappropriate content recklessly because the internet lets them hide behind a mask of anonymity, which undermines efforts to create a healthy network environment. The Australian government could learn from the UK, which announced in 2022 that it was considering a new rule allowing online users to block anonymous accounts that have not completed identity verification (Milmo, 2022). Such a rule would let users shut out as many abusive strangers as possible, and once identity verification is in place, users’ acts can be traced to individuals, so that abusive users are no longer lawless on the internet. The government thus has a higher level of regulatory power than digital platforms, and this power plays a vital role in stopping bad behaviour and problematic content.



Problematic content on digital platforms is not the responsibility of one party: platforms, users and the government should all be responsible for stopping the spread of bullying, harassment, violent content, hate, porn, and other problematic content. Although platforms can intervene or remove inappropriate content when it occurs, regulation alone cannot stop all inappropriate content from spreading. Users must also take responsibility for their words and actions on digital platforms; the atmosphere will become friendly and healthy only if users stop posting inappropriate content. Meanwhile, the government should strengthen regulation of the internet by enacting corresponding legal provisions and enhancing the monitoring of abusive behaviour online. Digital platforms, users and the government are all responsible for stopping the spread of problematic content, and a healthy network environment must be maintained by all three parties.



Big Think. (2016, November 20). Why Facebook Needs to Take Responsibility for Fake News. [Video]. YouTube. https://www.youtube.com/watch?v=nCHSOTnXpB8&t=9s

Crook, J. (2017, March 19). Facebook will never take responsibility for fake news. TechCrunch. https://techcrunch.com/2017/03/19/facebook-will-never-take-responsibility-for-fake-news/

eSafety. (2021). Cyberbullying. https://www.esafety.gov.au/key-issues/cyberbullying

Gillespie, T. (2018). Governance by and through platforms. In J. Burgess, A. Marwick, & T. Poell (Eds.), The SAGE Handbook of Social Media (pp. 254-278). SAGE Publications.

Gillespie, T. (2019). All Platforms Moderate. In Custodians of the Internet (pp. 1-23). Yale University Press. https://doi.org/10.12987/9780300235029-001

Lee, D. (2017, January 12). Microsoft staff ‘suffering from PTSD’. BBC News. https://www.bbc.com/news/technology-38592089

Ly Lawyers. (2020, March 11). Cyberbullying Laws in Australia. https://lylawyers.com.au/crime-cyber-bullying-australia/

Milmo, D. (2022, February 25). UK social media users could get power to block anonymous accounts. The Guardian. https://www.theguardian.com/media/2022/feb/25/uk-social-media-users-could-get-power-to-block-unverified-accounts

Morse, S. (n.d.). How Facebook Helps Us Communicate. CHRON. https://smallbusiness.chron.com/facebook-helps-communicate-66432.html

Rademeyer, N., & Mitry, R. (2018, August 16). Cyberbullying laws in Australia. Mitry Lawyers. https://www.lexology.com/library/detail.aspx?g=ff372a76-3f36-4b8e-9f30-08ab4cdd9a87

Ritchie, H. (2016, December 30). Read all about it: The biggest fake news stories of 2016. CNBC. https://www.cnbc.com/2016/12/30/read-all-about-it-the-biggest-fake-news-stories-of-2016.html

Rossie, A. (2015). Moving beyond “Am I pretty or ugly?”: Disciplining girls through YouTube feedback. Continuum (Mount Lawley, W.A.), 29(2), 230–240. https://doi.org/10.1080/10304312.2015.1022953

Statista Research Department. (2022, September 20). Worldwide digital population July 2022. Statista. https://www.statista.com/statistics/617136/digital-population-worldwide/

Swansea University. (2018, April 19). Young victims of cyberbullying twice as likely to attempt suicide and self-harm, study finds. Science Daily. https://www.sciencedaily.com/releases/2018/04/180419130923.htm

Vogels, E. A. (2021, January 13). The State of Online Harassment. Pew Research Center. https://www.pewresearch.org/internet/2021/01/13/the-state-of-online-harassment/

wickedlemons1. (2011, October 7). Am I Pretty/ Am I Ugly?. [Video]. YouTube. https://www.youtube.com/watch?v=P2IESvDMBbw