Bullying, harassment, violent content, hate speech, pornography, and other problematic content circulate on digital platforms. Who should be responsible for stopping the spread of this content, and how?

Figure 1: 'Social media' by dole777, licensed under CC BY-NC-SA 2.0


The Problematic Content

In this digital era, life is mediated by the internet: global internet users have grown to more than five billion, or 63.1 per cent of the world's population (Statista, 2022). Concerns regarding problematic internet use (PIU) have grown in tandem with this increase. PIU refers to internet usage that causes a person to experience troubles in their life on a psychological, social, academic, or professional level (Beard, 2001, p. 378). One study suggested that internet fraud, internet pornography use, and cyberbullying all contribute significantly to PIU, with cyberbullying exhibiting the strongest impact (Chao et al., 2020). Social media outlets enable users to reach and explore the outside world, yet they also increase the chances of exposure to these harms. Adolescents are at risk from social media use more often than most adults believe: almost half of Australian teenagers had at least one negative online experience within a six-month period, such as being contacted by strangers or being sent inappropriate content (Statista, 2021). Their well-being is consequently a concern, as researchers have found a general correlation between social media use and mental health problems; time spent on social media was among the main risk factors associated with depression, anxiety, and psychological distress (Keles et al., 2019, pp. 79-93). Hence, who should be responsible for stopping the spread of this content?

Social Media Platforms

Figure 2: 'Hacker binary attack code' by Markus Spiske, licensed under CC BY-NC-SA 2.0

Firstly, social media companies play a major role in regulating the content that spreads on their platforms. Content filtering is a crucial component of the internet, but it is intrinsically difficult to do perfectly, for several reasons. One is sheer scale: platforms with large user bases process enormous amounts of information, which cannot be reviewed without hiring a sizable number of moderators (Langvardt, 2018). Another is the complexity and nuance of language: irony, sarcasm, humour, and idiom are frequently used in human communication to convey various meanings and intentions, and are sensitive to context and culture. Some social media platforms are therefore increasingly employing algorithmic methods to help detect and remove prohibited content (Harrison, 2019). Although platforms still rely heavily on user reporting and human assessment, automated content moderation allows far greater quantities of material to be examined than manual moderation alone. In particular, Facebook reported that its systems removed 99.5 per cent of terrorist content, 96 per cent of nudity and sexual content, and 86 per cent of violent content in 2018 (Koebler, 2018). This shows that social media platforms are at least attempting to censor inappropriate content. However, the debate about the power of social media platforms intensified after the former American president, Donald Trump, was banned from multiple online platforms. Facebook and Twitter suspended Trump's accounts for spreading threats and hate speech. Although this was arguably a reasonable decision, it shows how much power platforms hold over the flow of information; if that power were used maliciously, freedom of speech and democracy could be damaged. Even so, the consequences of leaving content entirely unregulated could be far more destructive. Hence, social media companies should at minimum set and enforce rules that protect their communities.
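To make the moderation pipeline described above concrete, the sketch below shows one common pattern: an automated scorer handles the bulk of content, clear-cut cases are removed automatically, and ambiguous cases are escalated to human reviewers. This is a minimal, hypothetical illustration; the thresholds, the toy blocklist scorer, and all names here are assumptions for demonstration, not any platform's actual system (real moderation relies on trained classifiers, not word lists).

```python
# Minimal, hypothetical sketch of an automated moderation pipeline.
# All thresholds, names, and rules are illustrative assumptions.
from dataclasses import dataclass

REMOVE_THRESHOLD = 0.90   # scores at/above this are removed automatically
REVIEW_THRESHOLD = 0.50   # grey-zone scores are queued for human review

# Toy stand-in for a real lexicon of prohibited terms.
BLOCKLIST = {"badword1", "badword2"}

@dataclass
class Decision:
    action: str   # "remove", "human_review", or "allow"
    reason: str

def score(text: str) -> float:
    """Stand-in for a trained classifier: the fraction of blocklisted
    tokens. Real systems use models that are sensitive to context."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return sum(t in BLOCKLIST for t in tokens) / len(tokens)

def moderate(text: str) -> Decision:
    """Route a post to automatic removal, human review, or publication."""
    s = score(text)
    if s >= REMOVE_THRESHOLD:
        return Decision("remove", f"score {s:.2f} >= removal threshold")
    if s >= REVIEW_THRESHOLD:
        return Decision("human_review", f"score {s:.2f} in grey zone")
    return Decision("allow", f"score {s:.2f} below thresholds")

if __name__ == "__main__":
    for post in ("hello world",
                 "badword1 badword2",
                 "badword1 badword2 maybe"):
        print(f"{post!r} -> {moderate(post)}")
```

The grey zone is where the paragraph's point about irony and context bites: a naïve scorer cannot distinguish sarcasm from abuse, which is why borderline items are routed to human reviewers rather than removed outright.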

Government

While regulation can only prevent some misuse of the internet, educating the public is one of the crucial ways to tackle the issue at its root. Cyber safety education allows netizens to recognise the advantages and opportunities the web provides, to understand its risks and avoid potential harm, and to fulfil their responsibilities when using these platforms. To deliver this information effectively, the government, as an authorised and influential stakeholder, should be responsible for educating the public. For example, the New Zealand government organised a campaign, Keep It Real Online, divided into three parts to target different age groups. One of its successful tactics was a series of advertisements teaching parents and caregivers to talk with their children about the dangers of the internet. The campaign aimed to raise awareness and reduce risk for children and young people as they spent more time online, especially during the COVID-19 lockdown. According to its website, more than 80 per cent of parents reported that the campaign prompted a conversation about online or digital safety, and the number of attempts to access pornography blocked by schools' safety and security filters gradually decreased while the campaign was active (Department of Internal Affairs, 2020). This campaign demonstrates that quality safety education is needed and can successfully reduce fear and uncertainty in the face of common online threats.

That said, it has been argued that the government should not be overly involved in social media regulation. It is neither desirable nor practical for the government to impose regulations to stop the dissemination of false and misleading information: undesirable because no method created to address the issue can be protected against political appropriation, and impractical without significant deviations from First Amendment law and clear definitions of misinformation and deception (Lin, 2022). Moreover, economic regulation may also violate freedom of speech. The companies that operate social platforms appear to depend on ordinary business transactions, the regulation of which is presumed constitutional; however, the exchange that underlies social media is not an ordinary business deal. Individuals use social media for speech and for exchanging information, and their freedom of speech would be limited if the government prohibited those conversations (Samples, 2019).

Online Users

Figure 3: 'Work on notebook' by Sergey, licensed under CC BY-NC-SA 2.0

Nevertheless, the actions above are not aimed solely at regulating the platforms themselves; they also remind and compel online users to fulfil their responsibilities as digital citizens. Users should participate safely, ethically, and respectfully in online community life. It is essential to behave lawfully, especially since the internet allows people to act anonymously: hacking, theft, and unauthorised downloading that damage another person's work, identity, or property online are illegal. Users should also consider how their online activities could affect themselves, the people around them, and the broader online community. Most importantly, they should stay sceptical and protect themselves while using the internet. Data show that only a tiny portion of cyberbullying victims report it or seek help (Statista, 2018). Bullying must be recognised for it to end, so reporting it is crucial, and showing bullies that their actions are unacceptable can help prevent future victims. Users are therefore responsible for keeping the community peaceful while protecting themselves online.

Unexpectedly, many users remain naïve about safety despite widespread warnings about the risks of poor online safety practices. Giving naïve users vicarious exposure to safe online behaviours, while emphasising personal responsibility, had a more substantial impact on their intention to stay safe than simply telling them that staying safe online was as easy as following safety advice. Stressing personal responsibility rather than the duty of others was more successful when combined with vicarious experience. The partial reversal of these effects among users already educated about online safety suggests that merely reminding knowledgeable people that their safety is a shared responsibility may be counterproductive. Giving users of all ability levels appropriately tailored vicarious experiences can improve compliance with online safety requirements (Shillair et al., 2015, pp. 199-207).


Conclusion

Ultimately, to make the online community healthier and safer for everyone, every internet user should take responsibility for stopping the spread of problematic content. While it is impossible to eliminate all inappropriate content, users should be aware of the risks and apply critical thinking when using social media.

References

Beard, K. (2001). Modification in the proposed diagnostic criteria for Internet addiction. CyberPsychology & Behavior. Retrieved October 14, 2022, from https://www.liebertpub.com/doi/10.1089/109493101300210286

Chao, C.-M., Kao, K.-Y., & Yu, T.-K. (2020, January 1). Reactions to problematic internet use among adolescents: Inappropriate physical and mental health perspectives. Frontiers in Psychology. Retrieved October 14, 2022, from https://www.frontiersin.org/articles/10.3389/fpsyg.2020.01782/full

Harrison, S. (2019, July 11). Twitter and Instagram unveil new ways to combat hate-again. Wired. Retrieved October 14, 2022, from https://www.wired.com/story/twitter-instagram-unveil-new-ways-combat-hate-again/

Keles, B., McCrae, N., & Grealish, A. (2019, January 17). A systematic review: The influence of social media on depression, anxiety and psychological distress in adolescents. International Journal of Adolescence and Youth. Retrieved October 14, 2022, from https://www.tandfonline.com/doi/full/10.1080/02673843.2019.1590851

Koebler, J. (2018, August 23). Here's how Facebook is trying to moderate its two billion users. VICE. Retrieved October 14, 2022, from https://www.vice.com/en/article/xwk9zd/how-facebook-content-moderation-works

Langvardt, K. (2018). Regulating online content moderation. Georgetown Law Journal, 106(5).

Lin, H., & Van Alstyne, M. (2022, May 18). Should the government regulate social media? Divided We Fall. Retrieved October 14, 2022, from https://dividedwefall.org/should-the-government-regulate-social-media/

Samples, J. (2020, January 3). Why the government should not regulate content moderation of social media. SSRN. Retrieved October 14, 2022, from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3502843

Shillair, R., Cotten, S. R., Tsai, H.-Y. S., Alhabash, S., LaRose, R., & Rifon, N. J. (2015, February 16). Online safety begins with you and me: Convincing internet users to protect themselves. Computers in Human Behavior. Retrieved October 14, 2022, from https://www.sciencedirect.com/science/article/pii/S0747563215000606?casa_token=9U1rFv5A9fUAAAAA%3AH1Pk3uozTwhXj54VIykvV0gHbwXEIcAd2u-VpiZsjcLtRYfXszpd8T7vGhbAUTTAVvvP0U9nPp8#b0135

Statista Research Department. (2022, July 7). UK teens: Actions taken in response to bullying on Twitter 2016. Statista. Retrieved October 14, 2022, from https://www.statista.com/statistics/549048/uk-leading-actions-taken-by-teens-in-response-to-bullying-on-twitter/

Statista Research Department. (2022, September 20). Internet and social media users in the world 2022. Statista. Retrieved October 14, 2022, from https://www.statista.com/statistics/617136/digital-population-worldwide/

Statista Research Department. (2022, September 23). Australia: Most common actions against cyberbullying by teenagers 2020. Statista. Retrieved October 14, 2022, from https://www.statista.com/statistics/1231302/australia-most-common-actions-against-cyberbullying-by-teenagers/
