Bullying, harassment, violent content, hate speech, pornography and other problematic content circulates on digital platforms. Who should be responsible for stopping the spread of this content, and how?

Social platforms have expanded the scale of information dissemination into a global internet culture. Through the nature of digital platforms, a diverse range of information is transmitted to different users quickly and in a wide variety of ways. There is no shortage of harmful content circulating on the Internet, with violence, hatred and pornography appearing frequently, and this situation is not easy to control. In the past, laws were established to regulate traditional media, and distribution was monitored and maintained by major publishers. The anonymity and immediacy of the digital age have made it difficult to carry out internet auditing and control smoothly, and harmful information frequently appears online. This paper discusses why internet censorship is needed and who should be responsible for censoring content such as graphic violence and pornography on digital platforms. It argues that platforms, governments and individual users should all take on this responsibility and work together to stop the spread of such information on the internet.

“Instagram and other Social Media Apps” by Jason A. Howie is licensed under CC BY 2.0.

Why is it important to stop the spread of problematic content?

“Cyberbullying, would you do it?” by kid-josh is licensed under CC BY-NC-SA 2.0.

With the rise of new media, frequent use of the Internet has made social media a significant part of how people interact. Teenagers are among the most active internet users and are skilled at using social platforms to communicate. At the same time, prolonged exposure to unfiltered and uncontrolled speech on social media can affect adolescents’ physical and mental health, causing significant impairment and trauma. Consistent with social learning theory, adolescents may imitate what they see and reproduce the negative behaviours spread on the Internet (Bae, 2021). In addition, the Internet’s freedom of expression and speed of dissemination allow people to spread negativity and post extreme statements with impunity, causing severe harm. Some users create online identities that hide their true selves and treat social platforms as an outlet for their emotions. Once this spreads uncontrollably, cyberbullying can occur, such as threatening or sexual messages delivered through social media (Peterson & Densley, 2017). According to 2014 research, one in five young Australians had experienced cyberbullying (Butler, 2018). Many adolescents suffer from mental health issues such as anxiety and depression because of cyberbullying, and some victims of cyber violence are even at risk of attempting suicide or losing their lives. These problems are not limited to a particular country or region but are a serious global issue. Therefore, it is crucial that people work to stop the spread of inappropriate content on the Internet.

Who should be responsible for stopping the spread of problematic content?

Stopping the spread of problematic content on the Internet cannot rely on the efforts of one party alone; multi-stakeholder regulatory schemes are needed.

First, internet audiences themselves must make an effort, which means that those who publish content on digital platforms need to take responsibility for their online speech. The relatively free atmosphere of the Internet allows people to express themselves, and its ‘anonymity’ offers users the possibility of establishing different online identities. Nevertheless, it is also the responsibility of online participants to stop the spread of harmful content. Gorwa (2019) points out that online users can actively participate in regulating platform content through their own actions. Internet users should also improve their digital literacy and learn to distinguish true information from false. Schools can offer tutorials that educate young people on the proper use of the Internet and support the mental health of students who have been exposed to online violence. These measures will have a positive effect on online participants and reduce the likelihood of harmful content being distributed on the Internet.


Platforms should balance public and private interests and refuse to let harmful content spread on the Internet. Content censorship is a method often used by social platforms to filter out harmful information. Platforms regulate user activity and remind online users to adjust what they post promptly so as to avoid spreading problematic content (Gillespie, 2018). For users who chronically violate the rules, platforms may apply punishment mechanisms, such as suspending an account’s right to post. Twitter, for example, has worked to reduce the reach of tweets the platform judges to spread undesirable information or constitute abuse (Kantrowitz, 2017). However, not everyone accepts these methods of censorship, and they can even lead to conflict. Many believe that such stifling control of speech erodes freedom of expression, or that algorithmic misjudgements lead to accounts being wrongly banned. These factors prevent some users from building a trusting relationship with the platform, which in turn hinders the progress of content censorship.

“network” by michael.heiss is licensed under CC BY-NC-SA 2.0.

In fact, some internet companies hire staff specifically to screen flagged content a second time and catch algorithmic misjudgements. The hope is that combining algorithmic filtering with human review will reduce both the spread of problematic content and the number of disputes over wrongful removals.
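To illustrate how such a combined pipeline might work, here is a minimal sketch in Python. It is purely hypothetical: the thresholds, the `toxicity_score` stub and the three outcomes are illustrative assumptions, not a description of any real platform’s system.

```python
from dataclasses import dataclass

# Thresholds are illustrative assumptions, not any real platform's policy.
REMOVE_THRESHOLD = 0.9   # confident enough to act automatically
REVIEW_THRESHOLD = 0.5   # uncertain: route to a human moderator


@dataclass
class Post:
    author: str
    text: str


def toxicity_score(post: Post) -> float:
    """Stand-in for a real machine-learning classifier.

    It just counts matches against a tiny word list so the sketch runs;
    a production system would use a trained model instead.
    """
    flagged = {"kill", "hate", "threat"}
    words = post.text.lower().split()
    hits = sum(1 for w in words if w in flagged)
    return min(1.0, hits / max(len(words), 1) * 5)


def moderate(post: Post) -> str:
    score = toxicity_score(post)
    if score >= REMOVE_THRESHOLD:
        return "removed"        # clear violation: act immediately
    if score >= REVIEW_THRESHOLD:
        return "human_review"   # borderline: a person decides
    return "published"          # no action taken


if __name__ == "__main__":
    for text in ["have a nice day", "i hate you and will kill you"]:
        print(text, "->", moderate(Post("user", text)))
```

The key design point is the middle band: posts the classifier is unsure about go to a human moderator rather than being acted on automatically, which is precisely how human review is meant to absorb algorithmic misjudgements.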

However, platform companies are accustomed to invoking the words ‘freedom of expression’ to attract the attention of media audiences. Controversial content generates engagement, which means the platforms receive more benefits, and there is a risk that their governance mechanisms exploit the public interest for private gain. Many people have complained that some social platforms take too long to deal with problematic advertisements or messages, while videos with copyright issues are blocked and removed promptly. The governance triangle model proposed by Abbott and Snidal (2009), in which multiple parties jointly oversee the internet, seems more acceptable to the public. The objectivity of platforms and the transparency of their regulation should therefore be monitored externally.


Governments have an obligation to manage the negative aspects of the Internet in order to protect the public interest and individuals’ digital rights. Government oversight reduces the frequency with which inappropriate content is distributed online. The approaches governments choose are based on the situation in their own country, so the criteria used to manage the Internet vary. In 2016, the EU signed an agreement with several internet platforms to curb the dissemination of hate speech and other inappropriate content and decided to strengthen censorship of internet platforms (Aswad, 2016). In addition to self-policing by digital platform companies, governments should develop policies to control and manage the spread of harmful information. The ‘Blue Whale’ game, which caused a global sensation by driving vulnerable young people towards self-harm, used social media to inflame users’ emotions and ultimately led to the suicide of young participants (Mukhra et al., 2017).
“Social Media Logos” by BrickinNick is licensed under CC BY-NC 2.0.

Harmful content on social networks not only harms online users but can also undermine national security, as the spread of false information can lead to conflict. Governments often enlist a large technology or telecommunications body to act as an intermediary, coordinating and communicating with platform companies to keep the internet in order. The Australian Code of Practice on Disinformation and Misinformation is a regulation set up to suppress the spread of disinformation, and it is managed by DIGI, the Digital Industry Group. Eight technology companies have signed the code, which provides annual transparency reports to improve the effectiveness of its interventions (Barrett, 2022). The combined efforts of government and platforms can thus reduce some of the proliferation of fake news. Therefore, the government should intervene in social media and monitor the transparency of platforms’ censorship of user content.

Conclusion:

The Internet has become part of people’s lives, and the dissemination of information mixed with a large amount of harmful content has become a serious social problem. Many online participants have experienced cyberbullying, and some have become online abusers after viewing violent content. Stopping the spread of harmful content is therefore a matter of urgency. This requires not only technical maintenance by platforms but also online users themselves recognising the dangers of online speech and taking responsibility for the information they post. Governments should also establish laws and regulations while working with platforms to maintain a harmonious online environment.


Reference List:

Abbott, K. W., & Snidal, D. (2009). The governance triangle: Regulatory standards institutions and the shadow of the state. In W. Mattli & N. Woods (Eds.), The politics of global regulation (pp. 44–88). Princeton University Press. https://doi.org/10.1515/9781400830732.44

Aswad, E. (2016). The role of U.S. technology companies as enforcers of Europe’s new internet hate speech ban. Columbia Human Rights Law Review Online. https://hrlr.law.columbia.edu/hrlr-online/the-role-of-u-s-technology-companies-as-enforcers-of-europes-new-internet-hate-speech-ban/

Bae, S.-M. (2021). The moderating effect of the perception of cyber violence on the influence of exposure to violent online media on cyber offending in Korean adolescents. School Psychology International, 42(4), 450–461. https://doi.org/10.1177/01430343211006766

Barrett, A. (2022). What are digital platforms doing to tackle misinformation and disinformation? Croakey Health Media. https://www.croakey.org/what-are-digital-platforms-doing-to-tackle-misinformation-and-disinformation

Butler, D. (2018). Cyberbullying and the Law: Parameters for Effective Interventions? Reducing Cyberbullying in Schools, 49–60. https://doi.org/10.1016/C2016-0-01087-2

Gillespie, T. (2018). All platforms moderate. In Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 1–23). Yale University Press. https://doi.org/10.12987/9780300235029-001

Gorwa, R. (2019). The platform governance triangle: Conceptualising the informal regulation of online content. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1407

Kantrowitz, A. (2017). Twitter Is Now Temporarily Throttling Reach of Abusive Accounts. BuzzFeed News. https://www.buzzfeednews.com/article/alexkantrowitz/twitter-is-now-temporarily-throttling-reach-of-abusive-accou#.cxvKQyKVz

Mukhra, R., Baryah, N., Krishan, K., & Kanchan, T. (2017). “Blue Whale Challenge”: A Game or Crime? Science and Engineering Ethics, 25(1), 285–291. https://doi.org/10.1007/s11948-017-0004-2


Peterson, J., & Densley, J. (2017). Cyber violence: What do we know and where do we go from here? Aggression and Violent Behavior, 34, 193–200. https://doi.org/10.1016/j.avb.2017.01.012