
Bullying, harassment, violent content, hate, porn and other problematic content circulates on digital platforms. Who should be responsible for stopping the spread of this content, and how?
THE BRIEF:

“Bullying, harassment, violent content, hate, porn and other problematic content circulates on digital platforms.” This essay will discuss who is responsible for stopping this spread and how it can be stopped. Tim Cook, Apple’s CEO, stated in 2018 that “the system as it stands isn’t working” (Owens, 2020). Certain groups must therefore take responsibility, and certain processes must be put in place, to stop the spread of problematic content on digital platforms. Two groups should be at the forefront of stopping this spread: the technical developers of digital platforms and government agencies. Technical developers created the digital platforms and therefore have the ability to remove problematic content, while government agencies hold the rights and responsibilities of protecting their citizens from hateful or harmful content. As for how the content should be stopped, artificial intelligence and content moderation are the two major processes that should be enacted to stop the spread of controversial content: artificial intelligence paired with the human judgement of content moderators is a valuable and efficient way of dealing with inappropriate content online. This essay will therefore discuss why technical developers and government agencies should be held accountable, and how they can stop the spread.
WHO SHOULD BE STOPPING THE SPREAD?
-
IS IT TECHNICAL DEVELOPERS?
Once problematic content such as bullying, harassment, violent content, hate and porn spreads on digital platforms, the technical developers of each platform should be responsible for stopping it. Technical developers span a range of specialisms, including software, web, security, graphics, data, front-end and back-end development (Tornqvist, 2022). All of these custodians have the access and power to put an end to problematic content at the micro and meso levels of society, which include individuals, relationships, families, organisations, groups and communities; the problematic content occurring at these levels includes bullying, harassment and porn. The Australian Government believes “it’s a job for the platforms themselves” (Owens, 2020). This reinforces the argument that technical developers should take full authority: they created and developed the platform and can therefore remove any inappropriate content. This is evident in the 2019 New Zealand mosque shooting, which was live-streamed on Facebook; the platform’s technical developers removed 1.5 million videos of the gunman’s massacre, the majority of which were blocked at upload and therefore never seen by the public (Snider, 2019). The incident triggered new legislation holding social media platforms accountable for the dissemination of ‘hate content’ on their platforms (aus.ConsumerCommission.gov). However, this legislation has caused complications in the technology industry, as technical developers are concerned it may deter technology companies from operating in Australia. This demonstrates why government agencies also need to be involved in stopping problematic content across digital platforms.
-
OR IS IT GOVERNMENT AGENCIES? OR IS IT BOTH?
Government agencies, including the departments and organisations that work for the Australian Government, should also be responsible for stopping the spread of problematic content circulating across digital platforms. Government agencies have a responsibility to protect their citizens from external threats and to provide guidelines on everyday behaviour (Parliament of New South Wales, 2016). Those same responsibilities apply to the online world, which is why such agencies should be policing the internet alongside artificial and human intelligence. One effort the government has already made to help end the spread is the internet police: in a number of countries, government agencies and departments police the internet to fight cybercrime, censorship breaches and graphic or violent content (Jenkins, 2018). The Australian Government has stated that the internet police are “responsible for detecting, preventing, disrupting and enforcing offences such as harassment, and hate and violent content” (AFP, 2022). This demonstrates that government agencies have already taken it upon themselves to help stop the spread of problematic content on digital platforms, standing alongside technical developers in taking responsibility for putting an end to harmful acts on the internet. Their involvement is not without limits, however: in the platform governance triangle, “governments were not seen as knowing enough about digital technologies, and their involvement would inhibit innovation” (Gorwa, 2019). Government agencies are needed for internet regulation because of their authority, but they do not have enough knowledge of digital platforms to stop the spread on their own. Hence, technical developers need to collaborate with government agencies to remove content, which can be achieved by using artificial intelligence.
HOW SHOULD THE SPREAD BE STOPPED?
-
IS IT ARTIFICIAL INTELLIGENCE?
Currently, the procedure for removing inappropriate content relies on humans manually observing the data produced by digital platforms, seeking out problematic content and eliminating it themselves. This procedure cannot keep pace with the volume of content: one study of Twitter estimated that “500 million tweets are sent per day and assuming these tweets contain 20 words on average, the volume of content published on Twitter in one day is equivalent to New York Times in 182 years” (Yaraghi, 2019). Human intelligence alone is therefore not enough to stop the spread of inappropriate content. Artificial intelligence, by contrast, can detect content such as nudity, violence, terrorist material, spam and other problematic posts using computer vision software. This software recognises fixed elements in an image, allowing matching content to be taken down automatically. Facebook’s CEO, Mark Zuckerberg, has argued that it is more efficient to build an artificial intelligence system that can detect a nipple than to rely on a human who takes hours to find the post, judge whether the image is inappropriate and then delete it (Milmo, 2021). Even this is not enough, however: hate speech, bullying and harassment are much harder for artificial intelligence to police, as there can be many different intents behind the same speech (Nieva, 2018). Hence, technical developers should reorient their focus towards artificial intelligence while still choosing to “utilise artificial and human intelligence together” (Internet Society, 2014).
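To make the “fixed elements” idea concrete, the sketch below shows one simple form of automated screening: blocking exact re-uploads of media already confirmed as prohibited, as happened with the mosque shooting videos. It is a minimal illustration under stated assumptions, not any platform’s actual system; the function names are hypothetical, and the exact SHA-256 match stands in for the perceptual hashing and trained classifiers real platforms use to also catch edited copies.

```python
# Minimal sketch of "block at upload" screening, assuming a shared
# blocklist of hashes of media already confirmed as prohibited.
# All names here are illustrative, not any platform's real API.
import hashlib

KNOWN_PROHIBITED_HASHES = set()

def register_prohibited(media_bytes):
    """Record media confirmed as prohibited (e.g. a massacre livestream)."""
    KNOWN_PROHIBITED_HASHES.add(hashlib.sha256(media_bytes).hexdigest())

def screen_upload(media_bytes):
    """Return True if the upload matches known prohibited media
    and should be blocked before anyone can view it."""
    return hashlib.sha256(media_bytes).hexdigest() in KNOWN_PROHIBITED_HASHES

# Once one copy of a video is identified and registered, every
# identical re-upload is rejected at upload time.
video = b"...bytes of a confirmed prohibited video..."
register_prohibited(video)
assert screen_upload(video)
assert not screen_upload(b"...bytes of an unrelated video...")
```

This kind of matching works only for content the platform has already seen, which is why it suits re-uploads of a known video far better than novel hate speech, where intent and context matter.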

-
OR IS IT CONTENT MODERATION? OR IS IT BOTH?
Furthermore, content moderation is “intrinsic to platforms”, as all the information on digital platforms is meant to be curated (Gillespie, 2018). Indeed, “governance is as central to platforms as are data, algorithms and interfaces” (Massanari, 2017), which demonstrates how important content moderation is for removing and restricting problematic content. Content moderation works by removing illegal content, avoiding breaches of copyright and defusing public controversies (Gillespie, 2018), all of which aims to make the digital platform appropriate for its users. Different digital platforms employ differing levels of content moderation: Facebook and Instagram, for example, screen content more heavily than Twitter and Reddit. This depends on each platform’s identity, as what is considered offensive on one platform may not be on another; content moderators therefore work to a set of rules, laid down by their platform, on what to remove and what to leave. Moreover, content moderators work with artificial intelligence programs to identify potentially problematic content more efficiently. Facebook has hired 20,000 content moderators to work alongside artificial intelligence in patrolling its platform (Nieva, 2018). This exemplifies the magnitude of policing content on digital platforms: the task is so large that it needs content moderators and artificial intelligence working together to stop the spread of problematic content.
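The division of labour described above can be sketched as a simple triage rule: the AI handles clear-cut cases automatically and routes ambiguous ones to human moderators. The sketch below is a hypothetical illustration only; the thresholds and the classifier score it assumes are made up for clarity and do not describe how any named platform actually implements moderation.

```python
# Hypothetical AI-plus-human triage, assuming a classifier that
# returns a probability that a post violates platform rules.
# Thresholds are illustrative and would vary with platform identity
# (e.g. stricter screening on Facebook than on Reddit).
from dataclasses import dataclass, field

AUTO_REMOVE = 0.95   # confident violations: removed automatically
AUTO_ALLOW = 0.10    # confident non-violations: published

@dataclass
class ModerationQueue:
    pending_human_review: list = field(default_factory=list)

    def triage(self, post, violation_score):
        if violation_score >= AUTO_REMOVE:
            return "removed"      # AI handles the clear-cut cases
        if violation_score <= AUTO_ALLOW:
            return "published"
        # Ambiguous cases (intent, sarcasm, context) go to humans.
        self.pending_human_review.append(post)
        return "queued for human review"

queue = ModerationQueue()
print(queue.triage("clearly abusive post", 0.99))  # removed
print(queue.triage("borderline joke", 0.50))       # queued for human review
print(queue.triage("holiday photo", 0.01))         # published
```

The design choice here mirrors the essay’s argument: automation absorbs the volume problem, while human judgement is reserved for exactly the cases, such as hate speech with ambiguous intent, that artificial intelligence polices poorly.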
THE WRAP UP:
To conclude, this essay has set out who should stop problematic content such as bullying, harassment, violent content, hate and porn from spreading across platforms, and how. It argued that the technical developers of digital platforms should take responsibility, as they have the access needed to eradicate controversial content from their respective platforms. However, this is not a job for technical developers alone: government agencies should also be involved, as they have the authority to compel the removal of problematic content. The essay then explained how these groups can stop the spread, namely by utilising artificial intelligence and content moderation, which work together to facilitate the fast and efficient removal of inappropriate content online. This essay therefore demonstrates who should stop problematic content from circulating on digital platforms, and how.
REFERENCE LIST:
AFP. (2022). Cyber crime. Australian Federal Police. Retrieved September 19, 2022, from https://www.afp.gov.au/what-we-do/crime-types/cyber-crime
Burgess, J., Marwick, A. E., & Poell, T. (Eds.). (2018). The SAGE handbook of social media. SAGE Publications.
Gillespie, T. (2018). All platforms moderate. In Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 1-23). Yale University Press.
Gorwa, R. (2019). The platform governance triangle: Conceptualising the informal regulation of online content. Internet Policy Review, 8(2).
Internet Society. (2014). Who makes the Internet work: The Internet ecosystem. https://www.internetsociety.org/internet/who-makes-it-work/
Jenkins, L. (2018, November 9). Internet police. Wikipedia. Retrieved September 25, 2022, from https://en.wikipedia.org/wiki/Internet_police
Massanari, A. (2017). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329-346.
Milmo, D. (2021, October 29). Enter the metaverse: The digital future Mark Zuckerberg is steering us toward. The Guardian. Retrieved September 21, 2022, from https://www.theguardian.com/technology/2021/oct/28/facebook-mark-zuckerberg-meta-metaverse
Nieva, R. (2018, May 2). How Facebook uses artificial intelligence to take down abusive posts. CNET. Retrieved September 25, 2022, from https://www.cnet.com/tech/tech-industry/heres-how-facebook-uses-artificial-intelligence-to-take-down-abusive-posts-f8/
Owens, H. (2020, February 5). Who should be responsible for policing social media? Open Universities Australia. Retrieved September 21, 2022, from https://www.open.edu.au/advice/insights/who-should-be-responsible-for-policing-social-media
Parliament of New South Wales. (2016). The roles and responsibilities of Federal, State and Local governments. Retrieved September 21, 2022, from https://www.parliament.nsw.gov.au/about/Pages/The-Roles-and-Responsibilities-of-Federal-State-a.aspx
Snider, M. (2019, March 19). No one reported New Zealand mosque shooting livestream as it happened, Facebook says. USA Today. Retrieved September 24, 2022, from https://www.usatoday.com/story/tech/news/2019/03/19/facebook-new-zealand-shooter-livestream-not-reported-during-massacre/3209751002/
Tornqvist, A. (2022, April 15). 11 types of developers (plus tips for becoming one). Indeed. Retrieved September 19, 2022, from https://www.indeed.com/career-advice/finding-a-job/types-of-developers
Yaraghi, N. (2019, April 9). How should social media platforms combat misinformation and hate speech? Brookings TechTank. Retrieved September 21, 2022, from https://www.brookings.edu/blog/techtank/2019/04/09/how-should-social-media-platforms-combat-misinformation-and-hate-speech/
By Olivia Hayes