
Background & Introduction
With the advent of Web 2.0, users can participate in the information dissemination process rather than remain passive recipients of information. Compared with the Web 1.0 era, today’s Internet users are better described as participants who can express their opinions and create content on social platforms (Flew, Martin, & Suzor, 2019). But this high level of engagement creates a new problem: the constant presence of problematic content online. In Australia, media and regulators are increasingly focused on issues surrounding Internet censorship; the Australian Human Rights Commission (2022), for instance, puts forward four current issues: bullying, discrimination, harassment, and freedom of expression. In addition, a range of problematic content, such as violence, hate, and pornography, spreads across media platforms and has drawn the attention of Internet users and governments worldwide. Therefore, who should supervise Internet content, and how, has become an urgent problem to be solved.
Why is Internet content censorship important?
The dissemination of problematic content on the Internet not only threatens citizens’ human rights but also causes psychological harm to teenagers. Because most users post under false names on Internet platforms, they are more reckless in what they publish and say, which leads to ever more harassment, bullying, and violent content online. Take the online harassment of Leslie Jones as an example: many people attacked her character, her appearance, and even her ethnicity on the Internet (Howard, 2016). Misogynists and racists posted insulting remarks on social media that damaged her dignity and human rights, yet the platforms could not effectively protect her. Women, people of color, and the LGBTQ community are especially likely to be attacked online.

Besides harassment, cyberbullying can have serious social consequences and negative impacts on youth. In Australia, cyberbullying affects at least one in ten students (Australian Communications and Media Authority, 2010). Teenagers who suffer cyberbullying are more likely to experience depression and suicidal thoughts, which can lead to lasting psychological problems (Hamm et al., 2015). Supervising Internet content and maintaining a healthy online environment therefore helps preserve social order and protect both human rights and the mental health of teenagers.
Who should stop the spread of problematic content, and how?
Platforms, governments, and individuals are the three central actors in supervising problematic Internet content. Social media platforms are content distribution networks, and they rely on content to keep users on the platform. However, when platform companies decline to regulate content in pursuit of profit, they accumulate excessive social power and cause a variety of social problems. Governments need to formulate appropriate policies to ensure that platforms regulate content, and mutual monitoring and reporting among users is also a significant way to supervise Internet content.
(1) Platform
Platforms are the most critical subject of content review and supervision. Much of the content users post to social platforms must pass through human review to be appropriately screened, a practice driven by the companies’ business needs (Roberts, 2019). Social platforms need to protect personal privacy while moderating user content, balance freedom of speech against the harm of problematic content, and adopt technological means to improve the efficiency of content review.
First, platforms must not harvest users’ personal information to build algorithmic models under the guise of reviewing content. Shoshana Zuboff coined the term surveillance capitalism, arguing that technology companies obtain and commodify users’ personal information for profit (Zuboff, 2014). Digital platforms can monitor even private chats. In the process of reviewing content, platforms may segment user groups by analyzing their behavior and recommend personalized content to them in order to cultivate the attention economy. Content review should therefore not become a pretext for collecting personal information and extracting commercial value. Internet companies should instead assume greater social responsibility for preventing the spread of problematic content and maintaining a healthy network ecology.

Second, it is important for digital platforms to define clear criteria for problematic content while protecting users’ freedom of speech. The most direct way for platforms to stop the spread of questionable content is to delete it and suspend the users who post it. However, this approach is likely to anger users who feel it infringes on their freedom of speech. Removal is not the only form of content moderation: platforms can use algorithms to reduce the exposure of problematic content so that most users never see it (Gillespie, 2022). For pornography, violence, and other seriously harmful content, deletion remains the most effective response. Platforms therefore need to classify problematic content, delete what is sufficiently harmful, and reduce the visibility of other radical content.
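The tiered approach described above, removing the worst content while merely demoting borderline material in ranking, can be illustrated with a short sketch. The category names, thresholds, and demotion factor below are illustrative assumptions, not any platform’s actual policy:

```python
# A minimal sketch of a tiered moderation policy: delete clearly harmful
# content, demote borderline content in ranking, and leave the rest alone.
# All category names and the demotion factor are assumed for illustration.
from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    REMOVE = "remove"             # take the post down entirely
    REDUCE = "reduce_visibility"  # keep it up, but demote it in feeds/search
    ALLOW = "allow"               # no intervention


@dataclass
class Post:
    post_id: str
    category: str       # assumed label from an upstream review step
    ranking_score: float


SEVERE_CATEGORIES = {"graphic_violence", "pornography", "credible_threat"}
BORDERLINE_CATEGORIES = {"misinformation", "borderline_hate"}
DEMOTION_FACTOR = 0.1  # assumed: demoted posts keep 10% of their ranking score


def moderate(post: Post) -> tuple[Action, float]:
    """Return the enforcement action and the post's adjusted ranking score."""
    if post.category in SEVERE_CATEGORIES:
        return Action.REMOVE, 0.0
    if post.category in BORDERLINE_CATEGORIES:
        return Action.REDUCE, post.ranking_score * DEMOTION_FACTOR
    return Action.ALLOW, post.ranking_score


if __name__ == "__main__":
    for post in [
        Post("p1", "graphic_violence", 0.9),
        Post("p2", "misinformation", 0.8),
        Post("p3", "everyday_chat", 0.7),
    ]:
        action, score = moderate(post)
        print(post.post_id, action.value, round(score, 2))
```

The key design point is that the visibility reduction Gillespie (2022) discusses is implemented as a change to a ranking score rather than a removal, so the content remains technically accessible while losing most of its reach.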
Third, digital platforms should accelerate the development of artificial intelligence and use machines to identify problematic content, which improves review efficiency and avoids wasting human labor. Content moderation has attracted growing attention since 2010; because it requires large amounts of human labor, Internet companies often outsource it or move it to low-wage countries (Roberts, 2019). If platforms use more advanced machine-learning classifiers to screen content, they can flag posts that are sufficiently misleading, risky, or offensive and automatically remove them or reduce their visibility, which can significantly improve moderation efficiency.
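As a rough illustration of machine-assisted screening, the sketch below trains a tiny text classifier and routes posts by its estimated risk. It assumes scikit-learn, a toy four-example training set, and made-up thresholds; a real moderation system would use far larger datasets, stronger models, and human review of uncertain cases:

```python
# A minimal sketch of classifier-based triage, assuming scikit-learn and a
# toy labeled dataset. Thresholds and labels are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: 1 = problematic, 0 = acceptable (assumed labels).
texts = [
    "I will hurt you if you post again",
    "you are worthless and everyone hates you",
    "great photo, thanks for sharing",
    "looking forward to the concert next week",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)


def triage(post: str) -> str:
    """Route a post based on the classifier's estimated risk (assumed thresholds)."""
    risk = model.predict_proba([post])[0][1]  # probability of "problematic"
    if risk > 0.9:
        return "remove"             # confident enough to act automatically
    if risk > 0.6:
        return "reduce_visibility"  # demote and queue for human review
    return "allow"


for post in ["you are worthless", "see you at the concert next week"]:
    risk = model.predict_proba([post])[0][1]
    print(f"{post!r}: risk={risk:.2f} -> {triage(post)}")
```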
“Why Content Moderation Costs Social Media Companies Billions” by CNBC. All rights reserved. Retrieved from: https://www.youtube.com/watch?v=OBZoVpmbwPk
(2) Government
Today’s social media affects social stability and national politics to a certain extent, so countries are paying more and more attention to the content supervision of digital platforms. Philip Schlesinger argues that the accelerated transition from mass media to the Internet era has created a precarious “post-public sphere” that bears directly on many agendas in the regulatory realm (Schlesinger, 2020). Once content is released on a social platform, it is disseminated through the public sphere and can affect a country’s social and political stability. Under a democratic system, platform regulation can even become a force that runs counter to national politics. Regulating platform content therefore requires governments to enact sound laws that platforms are obliged to enforce.
China adopts a social credit model, collecting user data and analyzing users’ behavior and speech in order to strengthen the political power of the central government. In October 2021, China’s Internet regulator released a regulatory document requiring social platforms to display users’ locations. Accordingly, in April 2022, China’s major social platforms, such as Weibo, Douyin (the Chinese version of TikTok), Zhihu, and Xiaohongshu, began displaying users’ locations based on their IP addresses (Yip, 2022). The measure is intended to deter users from spreading rumors and posting problematic content on social media.
“How China censors the internet.” by South China Morning Post. All rights reserved. Retrieved from: https://www.youtube.com/watch?v=ajR9J9eoq34
(3) Individual
Internet users should also take responsibility for monitoring Internet content and maintaining online order. Michel Foucault put forward the theory of panopticism, a disciplinary mechanism that leads citizens to regulate themselves (Foucault, 1995). Applied to today’s social media, panopticism suggests that because users believe their behavior and speech are always being supervised by the platform, they will consciously restrain themselves. Social media also enables mutual supervision among users: people who see problematic content can take the initiative to report it to the platform.
Conclusion
The dissemination of problematic content can cause serious social problems, harm users’ mental health, and even affect national politics. Preventing its spread on social platforms requires the joint efforts of platforms, governments, and individuals: platforms should strengthen content review, governments should improve regulation, and users should regulate their own behavior and actively report problematic content. Working together, the three parties can maintain order on the network.
References
Australian Communications and Media Authority. (2010). Australia in the digital economy, shift to the online environment. Communications Report 2009-10 Series.
Australian Human Rights Commission. (2022). 5 current issues of ‘Internet censorship’: Bullying, discrimination, harassment and freedom of expression. https://humanrights.gov.au/our-work/5-current-issues-internet-censorship-bullying-discrimination-harassment-and-freedom
CNBC. (2021, February 28). Why content moderation costs social media companies billions. [Video]. YouTube. https://www.youtube.com/watch?v=OBZoVpmbwPk
Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50. https://doi.org/10.1386/jdmp.10.1.33_1
Foucault, M. (1995). Discipline and punish: The birth of the prison (2nd Vintage Books ed.). Vintage Books.
Gillespie, T. (2022). Do not recommend? Reduction as a form of content moderation. Social Media + Society, 8(3), 20563051221117552.
Hamm, M. P., Newton, A. S., Chisholm, A., Shulhan, J., Milne, A., Sundar, P., & Hartling, L. (2015). Prevalence and effect of cyberbullying on children and young people: A scoping review of social media studies. JAMA Pediatrics, 169(8), 770–777.
Howard, A. (2016). Why was Leslie Jones targeted by trolls? NBC News. https://www.nbcnews.com/news/us-news/why-was-leslie-jones-targeted-trolls-n638291
Roberts, S. T. (2019). Behind the screen: Content moderation in the shadows of social media. Yale University Press. https://doi.org/10.12987/9780300245318
Schlesinger, P. (2020). After the post-public sphere. Media, Culture & Society, 42(7–8), 1545–1563. https://doi.org/10.1177/0163443720948003
South China Morning Post. (2019, April 25). How China censors the internet. [Video]. YouTube. https://www.youtube.com/watch?v=ajR9J9eoq34
Yip, W. (2022). TikTok and its version of Instagram are set to show users’ locations based on their IP addresses. Insider. https://www.insider.com/china-social-platforms-to-make-user-locations-visible-ip-addresses-2022-4
Zuboff, S. (2014). A digital declaration. Frankfurter Allgemeine Zeitung, 15(9).