Bullying, harassment, violent content, hate, pornography and other problematic content circulates on digital platforms. Who should be responsible for stopping the spread of this content, and how?

Strategies for regulating inappropriate online content

“New Federal Legislation and WikiLeaks Attacks Frame University of Maryland Cybersecurity Center Launch” by Merrill College of Journalism Press Releases is licensed under CC BY-NC 2.0.

The internet emerged to serve people's needs, and as it has grown, public life has become inseparable from it. It is constantly updated with new things to learn and new ways to be entertained. As the number of internet users has increased, problematic content such as bullying, harassment, violent content, hate and pornography has gradually appeared. These harmful forms of content continue to cause severe problems for children and adults alike (Milosevic et al., 2022). To avoid further harm, the relevant authorities need to take immediate action. This paper therefore analyses who should be responsible for preventing the spread of harmful content on the internet and how it can be prevented.

First, the government is a necessary force in preventing the spread of inappropriate content on the internet. It must be clear that the harms posed by the internet are not easily solved and cannot be managed by moral or social norms alone; they require government guidance and discipline. However, creating laws to reform social media is complicated and involves difficult trade-offs (Douek, 2019). When it first began regulating the internet, the Australian government approached the issue too simplistically, even encouraging technology companies to over-censor and restrict the distribution of undesirable content, incidentally shielding the government itself from accountability. Its proposed bill not only failed to address the distribution of such content but also introduced serious flaws, such as a lack of transparency and accountability. Australia's Abhorrent Violent Material (AVM) law offers a lesson in how governments can fail to deal with online issues, yet not all countries have taken that lesson on board, and this neglect needs to be addressed urgently. So what should governments do to deal effectively with the spread of inappropriate content online? First, they should confront the problem of undesirable online content directly and take the initiative to assume responsibility, rather than leaving it entirely to technology companies to solve. They should reaffirm the role of national leaders and of national and international law, since only such laws will be binding and effective. Second, governments need to hold meaningful discussions, weigh competing interests and make trade-offs in combating the spread of violent content online (Douek, 2019). Only after serious discussion can valuable solutions be developed and more specific legislative restrictions be enacted.
Finally, after serious discussion and research, the government should immediately put the implementation of the law on the agenda: legislation that remains on the books without being enforced sets a troubling example, serving more as public messaging than as a driver of real change. Today, the spread of undesirable content on the internet is worsening. Only if governments stop avoiding the issue and take the initiative to assume their responsibilities, first and foremost through laws and regulations, can they deal effectively with the spread of inappropriate content and provide a safeguard for other sectors and groups to play their roles.

In addition, smart cities will become an increasingly important factor in regulating the dissemination of undesirable content on the internet. The internet has had a mixed impact on society, and the development of smart cities will help make the content that spreads online healthier and more positive (Poletti & Michieli, 2018). The role of cities as safe spaces has changed since the attack on the offices of Charlie Hebdo. Smart cities face not only the opportunities but also the threats posed by the internet. Here, not only must the state supervise network security and health; the role of smart cities is also crucial. They need to assert their centrality by taking greater responsibility for the content on their platforms and by focusing on technology's role in addressing the cyber threats posed by private companies.

Moreover, online intermediaries, including social media services and digital platform providers, need to be responsible for the content users post on their platforms (Irwin & Selvadurai, 2021). This responsibility has been regulated and defined from the government's perspective. Online intermediaries should understand that they have the right to remove undesirable content in order to maintain a healthy and safe online environment. They should not allow a post containing undesirable content to keep circulating simply because it attracts high traffic and is profitable for the platform. Furthermore, changes to the legislation make it harder for online intermediaries to collude with powerful interests, ensuring that they address problems on the internet more fairly and transparently. The dominant technology companies, meanwhile, have disrupted and revolutionised almost every area of the economy, and they too bear responsibility for the spread of undesirable content on the internet (Stramm, 2021). These companies collect people's data and use recommendation and prediction systems to personalise services, but in doing so they can also expose personal information and make it unsafe. For example, some cyber-violence content includes the personal details of victims and incitements to physical attacks, a result of technology companies collecting and protecting people's information poorly. Addressing this problem requires updated legislation and stronger regulation, which inevitably involve trade-offs and difficult choices. There is no doubt that technology has become an indispensable part of people's lives, so technology companies should take responsibility for protecting people's privacy and security and hold their ground amid the rapid development of the internet. Second, technology companies can tackle many kinds of online infringement and cyberbullying through artificial intelligence.
As artificial intelligence has become increasingly accurate and useful, it can also be applied to online problems. AI can recognise the nuances of language and classify content effectively in large-scale data analysis, where human moderation alone cannot keep up. As algorithms learn more about the online activity of bullies, they can adapt and improve the accuracy with which they identify cyberbullying. In short, technology companies can draw on their technical capabilities to take responsibility for issues such as cyberbullying.
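To make the idea of language classification above concrete, the sketch below trains a minimal naive Bayes text classifier on a handful of hypothetical labelled messages. Everything here (the example messages, the labels, and the function names) is invented for illustration; real moderation systems rely on far larger datasets and far more sophisticated models.

```python
import math
from collections import Counter

def train(messages):
    """Count word frequencies per label from (text, label) pairs. Toy example only."""
    counts = {"bullying": Counter(), "ok": Counter()}
    totals = Counter()
    for text, label in messages:
        for word in text.lower().split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def classify(model, text):
    """Pick the label with the highest smoothed log-likelihood (uniform prior)."""
    counts, totals = model
    vocab = set(counts["bullying"]) | set(counts["ok"])
    scores = {}
    for label in counts:
        score = 0.0
        for word in text.lower().split():
            # Laplace smoothing so unseen words do not zero out the score
            score += math.log((counts[label][word] + 1) / (totals[label] + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

# Hypothetical training data, purely for illustration
data = [
    ("you are worthless and everyone hates you", "bullying"),
    ("nobody likes you just leave", "bullying"),
    ("great game last night well played", "ok"),
    ("thanks for sharing this article", "ok"),
]
model = train(data)
print(classify(model, "everyone hates you leave"))  # prints "bullying"
```

The point of the sketch is the mechanism: word statistics learned from labelled examples let the classifier flag new messages it has never seen, and retraining on fresh examples is how such systems "adapt" as bullying language shifts.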

Finally, internet companies also need to contribute to protecting cybersecurity. Multi-stakeholder regulatory standard-setting programmes are an essential part of the regulatory toolbox for businesses (Gorwa, 2019). More and more voluntary, non-binding and informal governance initiatives are now being implemented in online life, with good results. Although the measures and conventions developed by companies and other parties lack the force of law and are not mandatory, they are recognised across many parts of society. Companies should pay attention to the demands of various interest groups and organise informal governance successfully. Laws and regulations sometimes fail to resolve every problem of internet life precisely; this is where informal governance and oversight come into play, and internet businesses are well placed to provide them.

In conclusion, the spread of inappropriate content such as cyberbullying, harassment, violent content, hate and pornography is an issue that needs to be addressed urgently. Governments, smart cities, online intermediaries, leading technology companies and internet companies all need to contribute their share and take responsibility for online health and safety. A healthy and safe online environment can only be achieved if all parties unite and use their abilities to keep internet content distribution healthy, without being distracted by profit.


References

Milosevic, T., Van Royen, K., & Davis, B. (2022). Artificial intelligence to address cyberbullying, harassment and abuse: New directions in the midst of complexity. International Journal of Bullying Prevention, 4(1), 1–5. https://doi.org/10.1007/s42380-022-00117-x

Douek, E. (2019). Australia's 'Abhorrent Violent Material' law: Shouting 'nerd harder' and drowning out speech. Australian Law Journal, 94, 41. https://ssrn.com/abstract=3443220

Poletti, C., & Michieli, M. (2018). Smart cities, social media platforms and security: Online content regulation as a site of controversy and conflict. City, Territory and Architecture, 5(1), 1–14. https://doi.org/10.1186/s40410-018-0096-2

Irwin, E., & Selvadurai, N. (2021). Imposing liability on online intermediaries for violent user-generated content: An Australian perspective. Richmond Journal of Law & Technology, 28(1), 1–. https://heinonline.org/HOL/P?h=hein.journals/jolt28&i=1

Stramm, J. (2021). Responding to the digital health revolution. Richmond Journal of Law & Technology, 28(1), 86–. http://ezproxy.library.usyd.edu.au/login?url=https://search.ebscohost.com/login.aspx?direct=true&db=lgs&AN=153399434&site=ehost-live

Gorwa, R. (2019). The platform governance triangle: Conceptualising the informal regulation of online content. Internet Policy Review, 8(2), 1–22. https://doi.org/10.14763/2019.2.1407