The Internet and mobile devices have accelerated the development of new media communication technologies, immersing individuals in the information age and vastly expanding their access to sources of information. Yet while digital media improves public communication channels and integrates virtual platforms with the real world, the proliferation of harassment, violence, and other objectionable content amplifies distress among individuals and across society. Effective measures should therefore be taken to control the spread of undesirable content on digital platforms, and this requires concentrating the efforts of multiple actors, including the government, platforms, and online audiences, to achieve systematic governance of digital platforms.
Bullying on the Internet
Bullying is a severe problem in today’s culture; it can take many forms and manifest at various ages. Olweus (1991) defined bullying as the repeated infliction of unfavorable behavior on a defenceless person by one or more others, with the intent of causing the victim physical or mental harm. As internet technology has developed, the problem of cyberbullying has become much worse. Traditional bullying typically ends when the victim and the bully are physically separated, but in the internet era bullying has surpassed this physical boundary.
(“Anti-bullying postcard: ‘Stop Bullying Everywhere’” by Ken Whytock is licensed under CC BY-NC 2.0.)
Cyberbullying refers to using the Internet to inflict malicious harm on individuals or groups, including abuse, harassment, and online threats. Like traditional bullying, cyberbullying often involves repetitive behaviors and a power imbalance between the attacker and the victim, with studies reporting victimization rates among adolescents and youth ranging from 13.8% to 57.7% (Ronis & Slaunwhite, 2017). Cyberbullying has consequently become a significant health risk for adolescents (Nixon, 2014).
Who should be responsible for stopping the spread of undesirable content? How?
YouTube’s Transparency Report provides data on the removal of objectionable content: more than 3.9 million videos were removed between April and June 2022, mostly for violating YouTube’s community guidelines, which prohibit pornography, incitement to violence, harassment, and hate speech.

The government should improve the legal foundation for social media and act to stop the spread of harmful content as early as possible. To foster a positive atmosphere for online engagement, activities such as publishing unlawful speech, spreading rumours, and breaching others’ privacy should carry legal accountability, and technical tools such as real-name registration should be used to discourage abusive expression. The government should also strengthen the monitoring of platform content and use online communication technology to screen and collect valuable information. For example, the Australian government’s regulatory approach to Internet platforms is designed to force media companies to do more to police online speech: companies must ensure that harmful content is removed quickly and, where possible, prevented from appearing in the first place. Likewise, the Global Internet Forum to Counter Terrorism, created by YouTube, Facebook, Twitter, and Microsoft (Murthy, 2021), serves as a model of platforms taking greater responsibility to combat extremist content and prevent terrorists and violent extremists from exploiting digital platforms. Finally, the rights and obligations of the public should be clearly explained, and citizens’ legal awareness should be raised so that they consciously abide by relevant laws and regulations when participating in online activities; the public should also be encouraged to monitor and report online wrongdoing.
On the Internet, platform enterprises play two distinct roles: competitors in the market and keepers of market order. Platform companies thus form a dual regulatory pattern: they are the regulated, adhering to governmental rules, and at the same time the regulators, enforcing obligations on the firms operating within their platforms. While online media platforms profit from their services, they also bear primary responsibility for managing information content. Government regulation of platform content is a maximum intervention, whereas platform self-regulation is a minimal public governance strategy (Kozyreva et al., 2020). Government regulation undoubtedly plays a significant role in how Internet platforms are governed, and more focused laws and regulations can bring the content regulation of media platforms into the legal framework and promote healthy development. However, self-regulation by Internet platforms can be more comprehensive and timely than government action: media platforms, as interactive spaces connecting participants, have direct access to message content and can, for example, promptly detect false and erroneous information. Because legal constraints cannot fully reach platform operators and users in virtual space, platforms need to target high-risk content directly, for instance by building artificial-intelligence content-audit and monitoring systems to strengthen their review capacity. During the COVID-19 pandemic, Twitter and many other social platforms came to rely more heavily on algorithmic systems for content review.
Artificial intelligence works alongside human moderators to improve review efficiency while preserving the effectiveness of the review, identifying offending content such as hate speech, pornography, or threats more consistently and equitably.
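This division of labour between algorithms and human moderators can be illustrated with a minimal sketch. The thresholds, category names, and keyword scorer below are purely illustrative assumptions, standing in for the trained classifiers that real platforms use; the point is the triage logic, where high-confidence violations are removed automatically and borderline content is routed to humans.

```python
# Minimal sketch of a hybrid human/AI moderation pipeline.
# All thresholds and term weights are illustrative assumptions,
# standing in for a trained classifier's confidence scores.

from dataclasses import dataclass

BLOCK_THRESHOLD = 0.9   # assumed: auto-remove at or above this score
REVIEW_THRESHOLD = 0.5  # assumed: queue for human review at or above this

# Toy term weights standing in for a trained model's output.
TERM_WEIGHTS = {
    "kill yourself": 0.95,
    "idiot": 0.6,
}

@dataclass
class ModerationResult:
    action: str   # "remove", "human_review", or "allow"
    score: float

def score_text(text: str) -> float:
    """Return the highest matching term weight (toy scorer)."""
    lowered = text.lower()
    return max(
        (weight for term, weight in TERM_WEIGHTS.items() if term in lowered),
        default=0.0,
    )

def moderate(text: str) -> ModerationResult:
    """Triage content: auto-remove, escalate to a human, or allow."""
    score = score_text(text)
    if score >= BLOCK_THRESHOLD:
        return ModerationResult("remove", score)
    if score >= REVIEW_THRESHOLD:
        return ModerationResult("human_review", score)
    return ModerationResult("allow", score)
```

The design choice mirrors the paragraph above: automation handles the clear-cut volume, while ambiguous cases, where algorithmic judgments are least equitable, are reserved for human reviewers.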
The fundamental freedoms and rights of the public to participate in public discussion are protected on digital platforms, and the Internet expands the avenues for citizens to express themselves fully. Yet while users have the right to freedom of expression, the darker side of social media content emerges in objectionable forms (Gongane et al., 2022). The quality of speech and the value of perspectives on platforms remain largely unregulated: young users in particular report frequent exposure to impolite language in online conversations, a typical example being the vicious comments that abound under many YouTube videos, alongside the promotion of political extremism and the spread of disinformation (Kozyreva et al., 2020). Hate speech, for instance, targets group characteristics rather than individuals; most such comments are directed at minorities and focus on race, religion, and sexual orientation, among other attributes, and women are often the victims of sexist or stereotypical speech (Schmid et al., 2022). The Internet’s virtual anonymity was once a significant management challenge, but today it is no longer a free and unrestricted cyberspace: whether users find information through search engines or social media, their actions are governed by algorithms created by businesses seeking to maximize profits (Kozyreva et al., 2020). Under Australian law, for example, overseas students in Australia may express their personal views and opinions to the extent permitted by law, and may be politically active in the community.
Therefore, when using the Internet to acquire and share information, users must take responsibility for the content they publish, choose platforms suited to their needs, and, recognizing how media platforms differ from real-life settings, participate in network self-regulatory organizations.
The ethical standards of network users’ speech, conduct, and ideas are governed by cyberethics. Cyberethics provides the conditions for the Internet to play a positive role; the platform has become a new kind of power that offers many new possibilities but also challenges traditional notions of ethics and morality. Governments, media platforms, and users themselves should work together to stop the generation and dissemination of undesirable content and firmly curb the dark side of the Internet, in order to improve the quality of people’s use of the Internet and promote economic development.
References

Gongane, V. U., Munot, M. V., & Anuse, A. D. (2022). Detection and moderation of detrimental content on social media platforms: current status and future directions. Social Network Analysis and Mining, 12(1). https://doi.org/10.1007/s13278-022-00951-3
Hofmann, J., Katzenbach, C., & Gollatz, K. (2016). Between coordination and regulation: Finding the governance in Internet governance. New Media & Society, 19(9), 1406–1423. https://doi.org/10.1177/1461444816639975
Kozyreva, A., Lewandowsky, S., & Hertwig, R. (2020). Citizens Versus the Internet: Confronting Digital Challenges With Cognitive Tools. Psychological Science in the Public Interest, 21(3), 103–156. https://doi.org/10.1177/1529100620946707
Murthy, D. (2021). Evaluating Platform Accountability: Terrorist Content on YouTube. American Behavioral Scientist. Advance online publication. https://doi.org/10.1177/0002764221989774
Nixon, C. (2014). Current perspectives: the impact of cyberbullying on adolescent health. Adolescent Health, Medicine and Therapeutics, 5(5), 143–158. https://doi.org/10.2147/ahmt.s36456
Olweus History. (n.d.). Violence Prevention Works. https://www.violencepreventionworks.org/public/olweus_history.page
Ronis, S., & Slaunwhite, A. (2017). Gender and Geographic Predictors of Cyberbullying Victimization, Perpetration, and Coping Modalities Among Youth. Canadian Journal of School Psychology, 34(1), 3–21. https://doi.org/10.1177/0829573517734029
Schmid, U. K., Kümpel, A. S., & Rieger, D. (2022). How social media users perceive different forms of online hate speech: A qualitative multi-method study. New Media & Society. Advance online publication. https://doi.org/10.1177/14614448221091185