Bullying, harassment, violent content, hate, pornography and other problematic content circulate on digital platforms. Who should be responsible for stopping the spread of this content, and how?

INTRODUCTION

As humanity moves into the digital era, the cost of access to information has fallen dramatically. Online communities give people a great deal of freedom of expression. However, when people can voice their opinions behind a mask, with few barriers to entry, preventing offensive content, hate speech and a wide range of other problematic behaviours becomes challenging (Jason, 2017). In countries whose constitutions broadly protect freedom of expression, that protection has become an "umbrella" for some extreme users, making them even more brazen. Can freedom of expression really justify the amount of discrimination, bullying, harassment and violence currently on the internet? This essay discusses who is responsible for the vast amount of sensitive and problematic media online, at both a macro and a micro level, and offers constructive ideas on how to stop the spread of this content.

MACRO VIEWS: SHOULD GOVERNMENTS INTERVENE AND REGULATE?

  • Freedom of speech has been misinterpreted

On social platforms, people can speak out for anything they support regarding religion, gender, race and so on. This is what freedom of expression is all about. Nevertheless, users who are used to venting hate and spreading harmful content online misinterpret the value of freedom of expression and invoke it to defend their conduct. Is this what true freedom of expression means? In 2011, the United Nations defined hate speech as any communication that uses derogatory or discriminatory language against individuals on the basis of colour, gender, gender identity, nationality and similar attributes. Despite the undeniable contribution of social media to upholding people's right to express and receive information, as the then German Chancellor put it in a 2019 speech, "freedom of expression has its limits." Yet in some countries where freedom of expression is written into the constitution, users still practise what they vaguely consider freedom of expression. This is essentially due to the lack of adequate government regulation of that "freedom" and the lack of punitive measures against the toxic speech that is increasingly common. Because of the progress of the internet and economic globalisation, influence is now always global, and content and the desire for expression are becoming homogenised worldwide, allowing hatred and discrimination to proliferate. As Benkler argues in his account of the networked information economy, a particular convergence of technological and economic change, beginning in the most advanced economies, is transforming the global production and exchange of information, knowledge and culture (Benkler, 2006). Therefore, education at the national level still needs to be strengthened and a firm stance against hate speech must be taken; at the same time, the interpretation of "freedom" in the context of freedom of expression needs to be refined in both national and global contexts.

  • The intensity of government regulation matters

On the other hand, governments still need to strengthen the regulation of big companies and increase the severity of sanctions as a deterrent. As Mansell and Steinmueller (2020) argue, the dominance of private ownership, advertiser-based business models, exploitative labour processes and opaque algorithms limits or excludes the possibility of operating commercial datafication processes in a non-exploitative manner, allowing platforms to extract residual value from the activities of those they attract. This leaves large corporations firmly in control of the rules of their social platforms and makes them cautious about rule changes, because those rules are closely linked to their financial gains. This may explain the lack of adequate regulation of questionable content on platforms.

Owing to differences in political systems, China's model of government-business cooperation has worked well in curbing online problems (Deng, 2022). However, government involvement carries the risk of overreach: harsh moderation regimes and laws may undermine users' freedoms in other ways, such as the freedom to discuss politics. In contrast to China, the large internet companies of Europe and the US are largely free from direct government control. They set the data and usage rules of their platforms, which is why different standards of acceptable content appear on different platforms. In such an environment, if governments want to stamp out problematic content on the internet effectively, they will need to put more pressure on technology companies, conducting stricter investigations into their internal and behavioural practices and using the law as an instrument of sanction.

MICRO VIEWS: SHAPING THE ONLINE ENVIRONMENT WITH DIFFERENT TYPES OF MODERATION

Beyond inter-state policies and other macro forces that steer public opinion in significant directions, for social platforms themselves the question of finding effective measures to detect and control the content users post has become central to platform governance. How to moderate social media content efficiently and accurately, so as to clean up the online environment, has therefore become an important issue in recent years (Langvardt, 2018). There are currently two main types of content moderation: manual moderation and AI moderation.

  • Manual moderation: a huge system

Firstly, manual moderation was the most traditional model at the time the internet industry arose. It includes manual pre-moderation, in which content is reviewed before it is published, and manual post-moderation, in which content goes live first and is reviewed afterwards. These are the most straightforward ways to control and reduce online hate, discrimination and other toxic behaviour, and to keep the quality of a site's content as high as possible for the company's benefit. The problems with manual moderation, however, are not only the loss of users when moderation is slow and inefficient (5 Moderation Methods You Should Understand, 2016) but also a more serious one rooted in human nature: when a moderator tries to take control of a thread that is about to turn ugly, the subjective element of the intervention may provoke rebellion from users, pushing the situation further out of control (Prince & Davies, 2001). Nevertheless, as online content grew ever larger and more complex, moderation emerged as a formal profession within internet companies. With the increasing popularity of the internet, the volume of material to moderate grew, and large internet companies and their outsourcing partners provided a large number of positions. A Google search for "content moderation employment" still returns a large amount of information about this line of work, and LinkedIn alone lists over 67,000 relevant jobs. Thus, despite its unavoidable shortcomings, manual moderation remains a dominant form of moderation, reaching all corners of cyberspace and easing the pressure on society for employment; companies seeking manual moderators are still the norm.
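
To make the distinction concrete, the two manual workflows can be sketched in a few lines of Python (a purely illustrative sketch, not any platform's actual system; the Post type, the function names and the trivial approval rule are all hypothetical):

    # Minimal sketch of manual pre- vs post-moderation.
    # All names are hypothetical, for illustration only.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Post:
        author: str
        text: str
        visible: bool = False

    def pre_moderate(post: Post, approve: Callable[[Post], bool]) -> None:
        # Pre-moderation: nothing is shown until a moderator approves it.
        post.visible = approve(post)

    def post_moderate(post: Post, approve: Callable[[Post], bool],
                      takedown_log: List[Post]) -> None:
        # Post-moderation: content goes live immediately and a moderator
        # reviews it afterwards, removing it if it breaks the rules.
        post.visible = True
        if not approve(post):
            post.visible = False
            takedown_log.append(post)

    # Example: a trivial stand-in for a human moderator's decision.
    log: List[Post] = []
    post_moderate(Post("user1", "hello world"),
                  lambda p: "slur" not in p.text.lower(), log)

Pre-moderation maximises control at the cost of speed and labour; post-moderation keeps conversation flowing but lets harmful content be seen before it is caught, which is where the tensions described above arise.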

  • Are algorithms really in charge?

The other type is broadly known as automated moderation, which has become popular with the development of sophisticated filters and tools: back-office staff continually tune the filters and rules that the algorithms apply, making it a faster, more efficient and more cost-effective model. However, as government pressure on major tech companies grows, the opacity of automated moderation tools on crucial political and ethical questions has raised public concern. For example, footage of shootings streamed on Facebook Live has gone viral on the internet after some videos escaped the pre-moderation mechanisms set up by the algorithms (Gorwa et al., 2020). At the same time, whether platforms can create mechanisms that address both extremes of a content review system, over-removal and under-removal, at once has become a crucial problem: a Pulitzer Prize-winning photograph posted on Facebook was removed because the degree of graphic imagery tolerated differs from country to country (Gillespie, 2018). Such cases have not been fully solved even by manual moderation, and will require still more time and technology before algorithms can handle them.
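
The trade-off between the two extremes can be made concrete with a rough sketch of threshold-based filtering (a minimal illustration under stated assumptions: the keyword score below is a naive stand-in for a trained classifier, and the blocklist, function names and thresholds are hypothetical):

    # Hedged sketch of threshold-based automated moderation.
    # The scoring function is a naive keyword ratio standing in for a
    # real classifier; all values here are illustrative only.
    BLOCKLIST = {"slur", "threat"}  # hypothetical terms

    def toxicity_score(text: str) -> float:
        # Fraction of words that match the blocklist.
        words = text.lower().split()
        if not words:
            return 0.0
        return sum(1 for w in words if w in BLOCKLIST) / len(words)

    def route(text: str, remove_at: float = 0.5, review_at: float = 0.1) -> str:
        score = toxicity_score(text)
        if score >= remove_at:
            return "remove"        # automatic takedown
        if score >= review_at:
            return "human_review"  # borderline cases escalate to people
        return "publish"

Where the two thresholds sit determines which extreme the system errs towards: lower them and legitimate material such as award-winning war photography is removed; raise them and live-streamed violence slips through. The grey zone between them is precisely where platforms still fall back on human review.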

CONCLUSION

Overall, this essay has examined the causes of problematic content such as bullying, hate, violence and pornography on the internet, and ways to combat it, from both a macro and a micro perspective. At the macro level, the causes lie in a lack of education about, and legal constraints on, the exercise of freedom of expression, as well as insufficient government regulation of businesses; governments therefore need to clarify the definition of freedom in a timely manner, refine the legal assessment of online behaviour and strengthen the investigation of businesses. At the micro level, both manual and algorithmic moderation are flawed and cannot fully cover, let alone block and punish, every type of content on the internet; social platforms therefore need to strengthen their expertise in manual moderation and continue to develop algorithmic techniques in order to optimise the online environment more economically and efficiently.

REFERENCES

5 Moderation Methods You Should Understand. (2016, March 15). Besedo. Retrieved October 14, 2022, from https://besedo.com/knowledge-hub/blog/5-moderation-methods-you-should-understand/

Benkler, Y. (2006). The wealth of networks: How social production transforms markets and freedom. Yale University Press. https://ebookcentral-proquest-com.ezproxy.library.sydney.edu.au/lib/usyd/detail.action?docID=3419996&pq-origsite=primo#

Deng, I. (2022, March 12). China’s internet watchdog pushes for deeper engagement with internet platforms in 2022 to clean, control. . . South China Morning Post. Retrieved October 14, 2022, from https://www.scmp.com/tech/policy/article/3170153/chinas-internet-watchdog-pushes-deeper-engagement-internet-platforms?module=perpetual_scroll_0

Gillespie, T. (2018). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press. https://doi-org.ezproxy.library.sydney.edu.au/10.12987/9780300235029

Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 7(1). https://doi.org/10.1177/2053951719897945

Jason, R. (2017, March 1). All too easy: Spreading information through social media. The Arkansas Journal of Social Change and Public Service. https://ualr.edu/socialchange/2017/03/01/blog-riddle-social-media/

Langvardt, K. (2018). Regulating online content moderation. Georgetown Law Journal, 106(5), 1353+. https://link.gale.com/apps/doc/A548321177/AONE?u=usyd&sid=bookmark-AONE&xid=637c7dec

Mansell, R., & Steinmueller, W. E. (2020). Economic analysis of platforms. In R. Mansell & W. E. Steinmueller, Advanced introduction to platform economics (pp. 35–54). Edward Elgar Publishing.

Prince, M., & Davies, M. (2001). Moderator teams: An extension to focus group methodology. Qualitative Market Research: An International Journal, 4(4), 207–216. https://doi.org/10.1108/eum0000000005902

Singh, S. (2019, July 22). Everything in Moderation. New America. Retrieved October 14, 2022, from https://www.newamerica.org/oti/reports/everything-moderation-analysis-how-internet-platforms-are-using-artificial-intelligence-moderate-user-generated-content/