Bullying, harassment, violent content, hate speech, pornography and other problematic content circulates on digital platforms. Who should be responsible for stopping the spread of this content, and how?

"iPhone 4" by Brian Wilkins is licensed under CC BY-NC 2.0.

The Internet is a free-speech platform.

The culture of the internet was originally built from four layers: the techno-meritocratic culture, the hacker culture, the virtual communitarian culture, and the entrepreneurial culture (Castells, 2002). These cultures brought people together and gave them the freedom to speak.

 

Who should be responsible for stopping the spread of this problematic content, and how?

Platforms themselves should be responsible for stopping the spread of problematic content by monitoring what users post and moderating it. Because user-generated content can spread globally and fall outside any single country's jurisdiction, it is necessary for platforms to moderate content themselves. Platform governance has tended to be preferred over state regulation because the enormous volume of real-time user-generated material makes the internet intrinsically harder to regulate than traditional media (Gray, 2022).

An estimated 14% of young people in the United Kingdom have engaged in some form of self-harm, the highest rate among European countries (Perowne & Gutman, 2022). This self-inflicted harm includes self-poisoning, cutting, using illegal drugs and drinking excessive amounts of alcohol (Perowne & Gutman, 2022). There is evidence that young people who self-harm have a great need for support but find it hard to seek it from the people around them in real life (Perowne & Gutman, 2022). Many of them therefore turn to the internet for mental-health support. However, digital platforms also carry a great deal of problematic content, such as harassment, violent content, pornography and hate speech. Research conducted by the Pew Research Center found that 40% of internet users have been the target of harassment (Rubin, 2015).

 

“Instagram App” by Brett Jordan is licensed under CC BY 2.0.

According to research by Arendt et al. (2019), a large number of Instagram users are exposed, either unintentionally or deliberately, to content related to self-harm, and seeing such content can cause emotional disturbance. This exposure is statistically associated with self-harm and suicidality (Arendt et al., 2019). Even users who never search for self-harm content can encounter it by accident: someone looking for photographs of cats, for example, may be shown cutting-related content that has been tagged with #cats (Arendt et al., 2019). In 2017, a teenager named Molly Russell took her own life while suffering from depression and after viewing harmful content online (The Conversation, 2022). According to the inquest, Molly had used social media platforms such as Instagram and Pinterest to view graphic content about self-harm and suicide (The Conversation, 2022).
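To make this exposure mechanism concrete, the toy Python sketch below uses invented posts, tags and a purely illustrative denylist (none of which come from Instagram or the cited studies) to show how naive tag-based retrieval can surface a harmful post that has been co-tagged with an unrelated hashtag such as #cats, and how screening results against a list of harm-related tags could filter it out.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    tags: set[str] = field(default_factory=set)

# Toy "index": in this sketch every post simply lives in one list.
posts = [
    Post("look at my sleepy cat", {"cats", "pets"}),
    Post("graphic self-harm image", {"selfharm", "cutting", "cats"}),  # mis-tagged with #cats
]

HARM_TAGS = {"selfharm", "cutting", "suicide"}  # illustrative denylist, not any platform's real list

def search(tag: str, filter_harmful: bool = True) -> list[Post]:
    """Return posts carrying `tag`, optionally dropping posts that also carry denylisted tags."""
    results = [p for p in posts if tag in p.tags]
    if filter_harmful:
        results = [p for p in results if not (p.tags & HARM_TAGS)]
    return results

print([p.text for p in search("cats", filter_harmful=False)])  # both posts surface
print([p.text for p in search("cats")])                        # the mis-tagged post is screened out
```

In practice, platforms cannot rely on tags alone, because harmful posts are not reliably labelled; the content itself also has to be assessed, which is where the moderation workflows discussed below come in.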

 

It is argued that largely user-generated material is not subject to established publication norms and therefore cannot be governed by the same laws and regulations as traditional media (Gray, 2022). While the obligations of platform operators may differ from those of conventional publishers, these companies are increasingly monitoring, controlling and removing content, as well as restricting and blocking specific individuals, functions that closely resemble editorial choices (Gray, 2022). Even Twitter, a platform that takes a relatively lax attitude toward content regulation, has adopted automatic filtering as well as stricter language in its policies governing unlawful posts (Rubin, 2015). Many platform companies monitor what their users post, and when they find illegal or harmful content, such as terrorist imagery, bullying, hate speech or pornography, they remove those posts (Bellanova & de Goede, 2022). Companies such as YouTube, Twitter and smaller internet services have found themselves at the forefront of the fight against problematic content online (Bellanova & de Goede, 2022). They are responsible for identifying, sorting and searching potentially malicious material, and they monitor and restrict users and user groups. Humans and machines work together on content moderation: although some successful companies are promoting the use of artificial intelligence for this purpose, the technical removal procedures are not entirely automated (Bellanova & de Goede, 2022).
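As a rough illustration of this hybrid human-and-machine workflow, the Python sketch below uses a stand-in scoring function and assumed thresholds (not any real platform's classifier or policy values) to show the general pattern: clear violations are removed automatically, borderline posts are escalated to a human review queue, and everything else is published.

```python
from collections import deque

BANNED_PHRASES = {"terrorist propaganda", "example hate phrase"}  # illustrative only

def violation_score(text: str) -> float:
    """Stand-in for a machine-learning classifier: returns a probability-like score."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in BANNED_PHRASES):
        return 1.0
    if "borderline" in lowered:  # crude stand-in for an uncertain prediction
        return 0.6
    return 0.1

AUTO_REMOVE_THRESHOLD = 0.9   # assumed policy values, not any platform's real settings
HUMAN_REVIEW_THRESHOLD = 0.5

human_review_queue: deque = deque()  # posts waiting for a human moderator

def moderate(post: str) -> str:
    """Automated first pass; borderline cases are escalated rather than auto-decided."""
    score = violation_score(post)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "removed"                 # automated takedown
    if score >= HUMAN_REVIEW_THRESHOLD:
        human_review_queue.append(post)  # escalate to human review
        return "pending review"
    return "published"

print(moderate("a photo of my breakfast"))               # published
print(moderate("a borderline insult"))                   # pending review
print(moderate("post containing terrorist propaganda"))  # removed
```

In a real system the scoring function would be a trained classifier and the review queue would feed large teams of human moderators, which is consistent with the observation that removal procedures are never entirely automated (Bellanova & de Goede, 2022).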

 

Who else should be responsible for stopping the spread of problematic content on digital platforms, and how?

Although platform regulation can stop the spread of online content more readily than state regulation, platforms may treat self-regulation mainly as a strategy for preserving their reputation, which does not necessarily correspond with adherence to legal requirements (van der Hof, 2014). Governments therefore still need to make policies to prevent the spread of problematic content online.

 

“Children” by aka Quique is licensed under CC BY-NC-ND 2.0.

With the development of the internet, more and more children are going online at a young age. If children post personal information online, they put themselves at risk of sexual abuse, cyberbullying and other harms (van der Hof, 2014). Parents can supervise their children online or teach them how to deal with problematic content, but children also need privacy, and as they grow up parents should give them more freedom and responsibility to manage their own personal data. It is therefore important for governments to make policies that protect children's safety online. The LIBE Committee of the European Parliament published a proposal for a new data protection regulation that would replace the existing European data protection rules (van der Hof, 2014). The proposal addresses age verification and the deletion of personal data. First, regarding age verification: when digital platforms provide online services, they must obtain verifiable parental consent in order to lawfully process the personal data of children under 13 (van der Hof, 2014). Children under 13 are less likely than adults to understand the dangers of disclosing personal information online, so age verification allows parents to help their children judge whether they risk exposing their privacy (van der Hof, 2014). Second, under the rule proposed in Article 17, users have the right to have their data deleted and to start over with a clean slate (van der Hof, 2014). As noted above, children may lack the ability to distinguish right from wrong and to foresee the consequences of their behavior, so they are more likely to do risky things on digital platforms and put themselves in dangerous situations. When they grow up, they may forget or regret what they did online (van der Hof, 2014). Deleting personal data therefore allows people to start again. Under Article 17, when users erase their personal data and stop its spread, digital platforms are responsible for removing that data (van der Hof, 2014).
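The Python sketch below is a simplified illustration, with hypothetical names and data rather than any platform's actual system, of the two measures just discussed: refusing to process an under-13 user's personal data without verifiable parental consent, and handling an Article 17-style request to erase personal data.

```python
from dataclasses import dataclass

PARENTAL_CONSENT_AGE = 13  # threshold discussed in the proposal (van der Hof, 2014)

@dataclass
class User:
    user_id: str
    age: int
    parental_consent_verified: bool = False

personal_data_store: dict[str, dict] = {}  # stand-in for a platform's user-data storage

def may_process_personal_data(user: User) -> bool:
    """Under-13 users need verifiable parental consent before their data is processed."""
    return user.age >= PARENTAL_CONSENT_AGE or user.parental_consent_verified

def store_profile(user: User, profile: dict) -> None:
    if not may_process_personal_data(user):
        raise PermissionError("verifiable parental consent required")
    personal_data_store[user.user_id] = profile

def erase_personal_data(user: User) -> None:
    """Article 17-style 'clean slate': remove everything the platform holds about the user."""
    personal_data_store.pop(user.user_id, None)

child = User("u1", age=12)
# store_profile(child, {"hometown": "..."})   # would raise PermissionError
child.parental_consent_verified = True
store_profile(child, {"hometown": "example"})
erase_personal_data(child)                     # later, the grown-up user starts over
print(personal_data_store)                     # {}
```

A real implementation would also have to verify the parent's identity and propagate the erasure to backups and downstream services, which is part of what makes these obligations demanding for platforms.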

 

Although the cultures and political systems of different countries may result in different rules, it is still possible to deduce some of the best regulatory methods from these cases (Pillalamarri & Stanley, 2021).

 

In conclusion, both platforms and governments should be responsible for regulating digital platforms. Platforms can moderate content and remove posts that break their rules, while governments can limit the spread of harmful and illegal content through measures such as age verification and the right to delete personal data.

Reference list

Arendt, F., Scherr, S., & Romer, D. (2019). Effects of exposure to self-harm on social media: Evidence from a two-wave panel study among young adults. New Media & Society, 21(11–12), 2422–2442. https://doi.org/10.1177/1461444819850106

Bellanova, R., & de Goede, M. (2022). Co-Producing Security: Platform Content Moderation and European Security Integration. Journal of Common Market Studies, 60(5), 1316–1334. https://doi.org/10.1111/jcms.13306

Castells, M. (2002). The Culture of the Internet. In The Internet Galaxy: Reflections on the Internet, Business, and Society, Clarendon Lectures in Management Studies (pp. 36–63). Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199255771.001.0001

Gray, J. (2022). Week 7: Governing the Internet – Policy & Regulation [PowerPoint]. University of Sydney Canvas. https://canvas.sydney.edu.au/courses/42984/pages/week-7-governing-the-internet-policy-and-regulation?module_item_id=1628062

Rubin, A. (2015, September 1). The Top Social Media Platforms’ Efforts To Control Cyber-Harassment. JD Supra. https://www.jdsupra.com/legalnews/the-top-social-media-platforms-efforts-87698/

Perowne, R., & Gutman, L. M. (2022). Barriers and enablers to the moderation of self-harm content for a young person’s online forum. Journal of Mental Health (Abingdon, England), ahead-of-print(ahead-of-print), 1–9. https://doi.org/10.1080/09638237.2022.2069721

Pillalamarri, A., & Stanley, C. (2021). Online content regulation: An international comparison. International Law and Policy Brief. https://studentbriefs.law.gwu.edu/ilpb/2021/12/08/online-content-regulation-an-international-comparison/

The Conversation. (2022, October 4). Coroner finds social media contributed to 14-year-old Molly Russell’s death. How should parents and platforms react? https://theconversation.com/coroner-finds-social-media-contributed-to-14-year-old-molly-russells-death-how-should-parents-and-platforms-react-191757

van der Hof, S. (2014). No Child’s Play: Online Data Protection for Children. In Minding Minors Wandering the Web: Regulating Online Child Safety (pp. 127–141). T.M.C. Asser Press. https://doi.org/10.1007/978-94-6265-005-3_7