Bullying, harassment, violent content, hate speech, pornography and other problematic content circulates on digital platforms. Who should be responsible for stopping the spread of this content, and how?

“Social Media Tubes – iPhone Background” by Rosaura Ochoa is licensed under CC BY-NC 2.0.

Introduction

As the Internet has grown in popularity and influence, we no longer live merely with media but in media, and anyone online can be both a sender and a receiver of information (Carey, 2008). This same openness, however, has multiplied the dangers of misuse. Undesirable content, such as pornography, terrorist material, and violence, increasingly disrupts the order of online communities and undermines the Internet's value as a platform where the public can publish, read, and discuss information. As a result, the regulation of social media has become a significant concern, and many countries have begun to focus on controlling harmful information online. To cope with these problems, governments and social media platforms should work together to limit the spread of such content on digital platforms. Below, I describe the background to the various types of harmful content, the damage they cause, and how governments and platforms can stop their spread.

 

Background

In the age of social media, the "true narrative" online has become more complex and is no longer determined solely by traditional media or authorities. As communication technology lowers the threshold for publishing, ordinary people now produce, disseminate, consume, and transform information in real time. Because they do so according to their own positions, emotions, and beliefs, knowledge becomes more fragmented, the number of competing versions of the truth in public discourse multiplies, and the outcome of truth construction grows uncertain. The crowding effect of social media also intensifies social divisions and polarizes opinion. On the one hand, the nature of social media makes emotional and simplistic expression easier to spread, amplifying negative social sentiment. On the other hand, phenomena such as "echo chambers" and "filter bubbles" lead most people to cling stubbornly to their existing opinions (Arguedas et al., 2022). Against this background, problems such as violent content, hate speech, and cyberbullying have flourished.

 

Negative effects

The spread of violent content and hate speech harms users, and children in particular. Many children believe the Internet is a virtual world in which they can hide, so some of the boundaries that regulate behavior offline become blurred or disappear online. Offensive and harmful content impedes children's healthy physical and mental development: many studies have found a correlation between violent online games and violent tendencies among young people (Huesmann, 2007). Such content can also spill over into minors' real-life interactions, hindering their socialization and their transition into adult social roles.

 

Social platforms

The online ecology of the social media age is highly complex: many parties are involved, information spreads rapidly, and there is significant uncertainty in how information is interpreted and what social impact it causes (Flew et al., 2019). Social media companies tend to be sensitive to ideological and current-affairs content, but often react slowly to other kinds of harmful material. When unexpected events occur, platforms typically resort to control measures such as deleting posts and blocking accounts, an approach with clear limitations. Many social media giants, including YouTube and Facebook, now also use artificial intelligence to identify undesirable content and have committed to creating a better online environment. They rely on two mechanisms. The first is overt filtering, such as automatic comment filters that screen out hateful and abusive content, including messages that promote or glorify violence, in line with the platform's content rules. The second is covert internal review criteria, such as moderation guidelines containing tens of thousands of sensitive terms (Perez, 2020). Together, these mechanisms aim to remove most harmful content as soon as it appears, shielding users from material such as violence and pornography and keeping the spread of harmful information to a minimum.
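To make the filtering mechanism concrete, the sketch below shows, in Python, the basic logic of a blocklist-based comment filter of the kind described above. It is a minimal illustration under stated assumptions: the term list, function names, and "hold for review" policy are hypothetical, and real platforms combine far larger term lists with machine-learning classifiers rather than simple word matching.

```python
import re

# Hypothetical blocklist; real platforms maintain internal lists with tens of
# thousands of sensitive terms (Perez, 2020). These placeholders stand in for
# actual slurs and threats.
BLOCKED_TERMS = {"exampleslur", "examplethreat"}


def normalize(text: str) -> str:
    """Lowercase the text and strip punctuation so trivial variants still match."""
    return re.sub(r"[^\w\s]", "", text.lower())


def should_hold_for_review(comment: str) -> bool:
    """Return True if the comment contains any blocked term (hypothetical policy)."""
    words = set(normalize(comment).split())
    return not words.isdisjoint(BLOCKED_TERMS)


if __name__ == "__main__":
    comments = [
        "Great video, thanks for sharing!",
        "You are an ExampleSlur.",
    ]
    for comment in comments:
        status = "held for review" if should_hold_for_review(comment) else "published"
        print(f"{status}: {comment}")
```

Even this toy version suggests why platforms keep their full term lists internal rather than public: a published blocklist would be trivial to evade through rewording, which is one reason the covert review criteria complement the overt filters.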

“flickr and facebook” by ansik is licensed under CC BY-NC 2.0.

 

Government

When social media platforms cannot remove undesirable information on their own, they should work with national governments to find ways of limiting the spread of violent content and cyberbullying. Through its legislative powers, a government can create regulations governing online safety and thereby curb the mass spread of harmful information. Of course, the administrative power that governments hold over online public opinion, especially negative public opinion, must be used with caution: if the measures taken are not rigorous and appropriate, or run against the way online opinion develops, they will not resolve the problem and may well prove counterproductive (Flew et al., 2019). Australia, for example, has moved to restrict objectionable content such as cyberbullying, harassment, and pornography. In April 2019, following the Christchurch shooting in New Zealand that was broadcast live on Facebook, Australia passed the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act. Earlier, in 2015, Australia implemented the Enhancing Online Safety Act, which requires social media companies to remove harassing and bullying content and exposes platforms to stiff fines if they fail to do so within 48 hours (BBC News, 2020). In addition, Chinese government regulations require social networking platforms to take technical and administrative measures to manage users and their posts: to link usernames to real identities, to review information that users post publicly, and to strengthen the management of re-posted information.

 

Conclusion

In general, harmful information significantly degrades the experience of using the Internet, and more and more countries have begun to take the issue seriously, working to clean up their networks and to find technologically effective ways of controlling harmful content. Because dangerous information is everywhere, it does particular harm to young people. Governments and platform companies should therefore work together: governments should press platforms to strengthen supervision and management by making relevant legal rules, while platforms should develop technology that removes harmful content more accurately. The hope is that managing platform safety and the forwarding of information will strengthen information governance on social networks. The actions of social media platforms are, however, only one of the factors driving polarization. Resolving this dilemma requires the joint efforts of all parties in society, including self-regulation by companies, industry self-governance, government regulation, and the active advocacy and collective participation of other social forces, to protect the positive values of democracy. Platform governance in the era of artificial intelligence demands both scientific wisdom and institutional rationality.

“Scoble’s Social Media Starfish” by DBarefoot is licensed under CC BY-NC 2.0.

Reference list

 

Arguedas, A. R., Robertson, C. T., Fletcher, R., & Nielsen, R. K. (2022). Echo chambers, filter bubbles, and polarisation: A literature review. Reuters Institute for the Study of Journalism. https://reutersinstitute.politics.ox.ac.uk/echo-chambers-filter-bubbles-and-polarisation-literature-review

 

BBC News. (2020). Social media: How do other governments regulate it? https://www.bbc.com/news/technology-47135058

 

Carey, J. W. (2008). A cultural approach to communication. In Communication as culture: Essays on media and society (2nd ed., pp. 11–28). Routledge.

 

Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50. https://doi.org/10.1386/jdmp.10.1.33_1

 

Huesmann, L. R. (2007). The impact of electronic media violence: Scientific theory and research. Journal of Adolescent Health, 41(6), S6–S13. https://doi.org/10.1016/j.jadohealth.2007.09.005

 

Perez, S. (2020). YouTube introduces new features to address toxic comments. TechCrunch. https://techcrunch.com/2020/12/03/youtube-introduces-new-features-to-address-toxic-comments/