Internet content moderation: is sensitive information that can cause online harm to people effectively blocked?

“Social media” by Mike MacKenzie is licensed under CC BY 2.0.

Introduction

The effectiveness of online content censorship in blocking sensitive information that exposes citizens to internet harms has long been controversial, because that effectiveness depends on a variety of factors, including technology, law, policy, and regulatory agencies (Gillespie, 2018). As the internet proliferates, more and more hazardous information is posted publicly, and because content moderation struggles to keep pace, such information and propaganda can alter public opinion and exacerbate conflict. Several theories support the central argument of this blog. First, the “Splinternet” phenomenon, in which the web gradually fragments into separate, nationally regulated ecosystems, poses a challenge and a threat to content censorship (Lemley, 2021). Second, the false-positive rates of automated moderation techniques on digital platforms, such as machine learning and artificial intelligence, shape which content is ultimately filtered (Kloet, 2019). Finally, public engagement in regulation emphasizes the importance of civil society’s collaboration with digital platforms, raising the question of how to balance social participation and moderation technology so that regulation is effective while individual rights are protected.

“Internet” by asawin is licensed under CC0.

The Dangers of Splinternet

‘Splinternet’ refers to the gradual fragmentation of the Internet into multiple isolated ecosystems that are regulated, censored, and controlled by different countries or regions, challenging and threatening the openness and content moderation of the global Internet (Flew, 2019). The trend is pronounced in a number of countries and regions, and recent instances highlight its presence and impact. China, most prominently, is known for strict Internet censorship policies, including the censorship and blocking of social media, search engines, and news sites. The Chinese government requires domestic Internet companies to comply with specific laws and regulations while blocking many international Internet services, leaving China’s Internet ecosystem relatively isolated from the global Internet (Beina, 2017). Through censorship and blocking, the government has cut off some international services and external information sources, enabling it to control and restrict the spread of politically sensitive information as well as certain kinds of harmful content, such as obscene material and false information (Nina, 2022). China’s restrictions on foreign media platforms and its censorship of non-government material have been described as the Great Firewall of China (Barry, 2018). Meta’s instant messaging platform WhatsApp and its photo and video sharing application Instagram are also blocked.

(South China Morning Post, 2019)

This simultaneously poses a challenge within China’s Internet environment: the Splinternet phenomenon severely limits Chinese citizens’ access to global information. While some users can bypass censorship, most can access only censored information, which leads to a one-sided and limited picture of the world (Barry, 2018). At the same time, the Splinternet may drive some users to seek ways around censorship, which can involve insecure tools or services and thus increases privacy and security risks. These challenges are part of a broader regional divide in the Asia-Pacific in terms of regulation, geopolitics, ethics, and culture (Nina, 2022). China’s technology regulations isolate Chinese cyberspace from the rest of the world.
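To make the blocking itself less abstract, the short Python sketch below probes whether a handful of hosts accept a connection at all. It is purely illustrative and assumes nothing about any particular censorship system: the host names are just examples, and real measurement projects such as OONI use far more careful methods to separate deliberate blocking from ordinary network failures.

```python
# A minimal, purely illustrative sketch of how blocking manifests at the
# network level: a connection attempt that fails to resolve or times out.
# Host names are examples only; this is not a censorship-detection tool.
import socket

def reachable(host: str, port: int = 443, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # DNS failure, reset, or timeout all look "blocked" to a user
        return False

for host in ("www.whatsapp.com", "www.instagram.com"):
    print(f"{host} reachable: {reachable(host)}")
```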

“Censorship” by Mike MacKenzie is licensed under CC BY 2.0.

How the Misjudgement Rate of AI Moderation Techniques Affects Content Publication

Automated moderation technology checks and evaluates content on digital platforms automatically, using machine learning and artificial intelligence techniques. It is widely used across the internet, on social media, e-commerce sites, online forums, and video-sharing platforms, to ensure the compliance, security, and quality of content (Oweis, 2022), and automation of this kind appears in almost every discipline and profession, whether business, engineering, medicine, or industry. However, AI moderation techniques may incorrectly flag legitimate content as non-compliant or inappropriate. Such misclassification stems from the content and context of text, images, audio, or video: complexity and polysemy make it difficult for algorithms to understand content correctly. Conversely, alongside this high misjudgement rate, certain users may deliberately exploit the loopholes in automated review, posting offending content precisely because the algorithms are likely to misjudge it, which can increase the volume of undesirable content (Flew, 2019). One episode of the television series Silicon Valley illustrates, albeit dramatically, the link between the misjudgement rate of AI vetting techniques and the publication of harmful content on the web. Silicon Valley is a comedy-drama that sheds light on issues within the tech industry through humor (Marantz, 2016). The tech companies and entrepreneurs in the show often display a disregard for social responsibility, pursue short-term profits, and are overly profit-oriented in their interactions with others.
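To see why misjudgement is so easy, consider a deliberately naive keyword filter. The Python sketch below is a toy of my own construction, not any platform’s actual system: its blocklist, threshold, and example posts are all invented, and it misflags a post that merely quotes suspicious phrases while debunking them, exactly the kind of false positive described above.

```python
# A toy, context-blind keyword filter -- an invented sketch, not any
# platform's real system. It shows how a debunking post that quotes
# suspicious phrases gets misflagged: a false positive.

BLOCKLIST = ("miracle cure", "scam", "click here")

def risk_score(text: str) -> float:
    """Toy scorer: fraction of blocklist phrases appearing in the text."""
    lowered = text.lower()
    return sum(phrase in lowered for phrase in BLOCKLIST) / len(BLOCKLIST)

THRESHOLD = 0.3  # lower catches more harm but misflags more legitimate posts

# (post, actually_harmful) -- hypothetical hand labels
posts = [
    ("This miracle cure fixes everything, click here!", True),
    ("Doctors explain why 'miracle cure' ads are a scam", False),  # debunking
    ("Lovely weather in Sydney today", False),
]

for text, harmful in posts:
    flagged = risk_score(text) >= THRESHOLD
    if flagged and not harmful:
        verdict = "FALSE POSITIVE"
    elif not flagged and harmful:
        verdict = "missed harm"
    else:
        verdict = "correct"
    print(f"{verdict}: {text!r}")
```

A real system replaces the keyword scorer with a trained model, but the trade-off survives: lowering the threshold catches more harm while misflagging more legitimate speech.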

“Artificial intelligence” by Mike MacKenzie is licensed under CC BY 2.0.

This reflects some realities of tech industry culture, such as how the pressure to chase rapid growth and profitability can lead to irresponsible behavior (Flew, 2019). The show depicts the root cause of this industry dysfunction as the high misjudgement rate of AI vetting technology, which produces disorder on social media and digital platforms. When vetting technologies fail to accurately identify and address objectionable content, social media becomes chaotic and filled with harmful speech and disinformation. Platforms such as Twitter and Facebook, once considered reliable sources of news and information, are now viewed with skepticism, and users have begun to doubt their credibility because of the volume of bad content and false information (Clayton, 2022). Fake and automated accounts, for example, can be used for malicious purposes, including spreading false information, abusive behavior, and manipulating public opinion. Critics have highlighted, through different approaches, the challenges this poses to the quality and credibility of content on digital platforms.

Public Participation & Content Moderation

“Fake news” by Mike MacKenzie is licensed under CC BY 2.0.

Public participation plays a crucial role in online content moderation: it helps improve content quality, reduce the spread of disinformation and harmful content, and maintain a healthy ecosystem on social media platforms (Aji, 2023). Users are the direct audience of content, so they can quickly identify and report issues, and platforms need to respond positively to those reports, reviewing and addressing the reported content to maintain the security and credibility of the platform. Group surveillance mechanisms matter as well: when users discover objectionable content, they tend to share and discuss it, drawing broader attention (Aji, 2023). This collective action helps push platforms to act on problematic content, as the sketch below illustrates, and may force them to improve their moderation techniques and policies. Although the information, photos, and videos that social media presents to viewers are sometimes inaccurate, these participatory features play a key role in combating harmful information online.
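As a concrete illustration of such a reporting mechanism, the sketch below implements one plausible rule: a post is escalated to human review once enough distinct users report it. The threshold and names are hypothetical assumptions, not any platform’s real policy or API.

```python
# A minimal sketch of community reporting under one assumed rule: content
# enters a human review queue once enough *distinct* users report it.
# REVIEW_THRESHOLD and all names are hypothetical.
from collections import defaultdict

REVIEW_THRESHOLD = 3  # distinct reporters required before escalation

reports: defaultdict[str, set[str]] = defaultdict(set)  # post id -> reporter ids
review_queue: list[str] = []

def report(post_id: str, reporter_id: str) -> None:
    """Record a report; escalate when distinct reporters reach the threshold."""
    reports[post_id].add(reporter_id)  # a set ignores duplicate reports by one user
    if len(reports[post_id]) >= REVIEW_THRESHOLD and post_id not in review_queue:
        review_queue.append(post_id)

for user in ("alice", "bob", "bob", "carol"):  # bob's second report is ignored
    report("post-42", user)

print(review_queue)  # ['post-42'] -- three distinct reporters triggered review
```

Counting distinct reporters rather than raw reports is one simple defence against a single user spamming the button, though it does not stop coordinated mass reporting.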

(Demartini, 2022)

During the COVID-19 pandemic in 2020, there were 19 million mentions of the coronavirus on social media and news sites around the world within a single 24-hour period. As a result, governments asked major social media companies, such as Facebook and Twitter, to stop the posting of misinformation, since it can create panic: people saw posts of empty stores on social media, for instance, which triggered panic about food shortages (Ahmad, 2020). Such falsehoods can pose a serious threat to public health and safety, and here Facebook’s user reporting system plays a key role. Users of the platform are encouraged to participate actively by reporting false information and offensive content. The platform’s review team responds by combining manual and automated review techniques to identify false information and flag hate speech, and once objectionable content is identified, the platform acts swiftly to remove it or restrict its distribution. This effort plays an important role in safeguarding public health and reducing the spread of disinformation.
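The sketch below illustrates only the general “automated plus manual” pattern the paragraph above describes; Facebook’s actual pipeline is proprietary, and the stand-in scorer and thresholds here are invented for illustration. Confident cases are handled automatically, while the uncertain middle band is routed to human moderators.

```python
# A sketch of hybrid triage: auto-remove clear violations, auto-allow
# clearly benign posts, and send ambiguous cases to human review.
# The scorer and both thresholds are invented assumptions.

def model_score(text: str) -> float:
    """Stand-in for a real misinformation classifier (0 = benign, 1 = harmful)."""
    lowered = text.lower()
    if "cures covid overnight" in lowered:
        return 0.9
    if "food shortage" in lowered:
        return 0.5  # plausible but alarming claims are genuinely ambiguous
    return 0.1

AUTO_REMOVE = 0.8  # confident enough to act without a human
AUTO_ALLOW = 0.2   # confident enough to leave alone

def triage(text: str) -> str:
    score = model_score(text)
    if score >= AUTO_REMOVE:
        return "remove"
    if score <= AUTO_ALLOW:
        return "allow"
    return "human review"  # ambiguous cases go to moderators

for post in ("Garlic cures COVID overnight!",
             "Photos of empty shelves: is a food shortage coming?",
             "Stay home and wash your hands."):
    print(f"{triage(post):>12} -> {post}")
```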

“Facebook” by Mike MacKenzie is licensed under CC BY 2.0.

Conclusion

This blog has discussed the effectiveness and challenges of online content regulation and ways of addressing those challenges. Content regulation is influenced by a number of factors, including technology, law, policy, and regulators, and its effectiveness remains controversial. The Splinternet phenomenon poses a threat to content censorship, and automated content review techniques can be problematic, including their high rates of false positives, which can allow undesirable content to spread. Public engagement is crucial in content review: reporting systems, social monitoring, and civic initiatives can help improve the quality of content and the health of social media platforms. While some initiatives have protected people to a certain extent, online content review remains a complex issue shaped by a combination of factors, and effective moderation requires a balance between automated technology and public participation to ensure public safety and content quality.

References

Ahmad, A. R. (2020). The impact of social media on panic during the COVID-19 pandemic in Iraqi Kurdistan: Online questionnaire study. Journal of Medical Internet Research, 22(5), e19556. https://doi.org/10.2196/19556

Aji, G. G. (2023). Public participation in social media: Content analysis on comments section of @Surabaya. In S. Setiawan et al. (Eds.), IJCAH 2022, ASSEHR 724 (pp. 244–253). https://doi.org/10.2991/978-2-38476-008-4_28

Barry, E. (2018). These are the countries where Twitter, Facebook and TikTok are banned. Time. https://time.com/6139988/countries-where-twitter-facebook-tiktok-banned/

Beina, X. (2017). Media censorship in China. Council on Foreign Relations. https://www.cfr.org/backgrounder/media-censorship-china

Clayton, J. (2022). Doubts cast over Elon Musk’s Twitter bot claims. BBC News. https://www.bbc.com/news/technology-62571733

Demartini, G. (2022). Content moderators: The gatekeepers of social media | Gianluca Demartini | TEDxUQ [Video]. YouTube. https://youtu.be/ajjov8Ve4Ik?si=ANKhhUFIh5Hgq8yN

Flew, T. (2019). Guarding the gatekeepers: Trust, truth and digital platforms. Griffith Review, 64, 94–103. https://eprints.qut.edu.au/128266/

Gillespie, T. (2018). All platforms moderate. In Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 1–23). Yale University Press. https://doi.org/10.12987/9780300235029-001

Kloet, J. de. (2019). The platformization of Chinese society: Infrastructure, governance, and practice. Chinese Journal of Communication, 12(3), 249–256. https://doi.org/10.1080/17544750.2019.1644008

Lemley, M. A. (2021). The Splinternet. Duke Law Journal. https://scholarship.law.duke.edu/cgi/viewcontent.cgi?article=4066&context=dlj

Marantz, A. (2016). How “Silicon Valley” nails Silicon Valley. The New Yorker. https://www.newyorker.com/culture/culture-desk/how-silicon-valley-nails-silicon-valley

Nina, X. (2022). Metaverse — the latest chapter of the Splinternet? East Asia Forum. https://www.eastasiaforum.org/2022/07/06/metaverse-the-latest-chapter-of-the-splinternet/

Oweis, K. A. (2022). Automation of audit processes, and what to expect in the future. Journal of Management Information and Decision Sciences, 25(S4), 1–9. https://www.researchgate.net/publication/363892140_AUTOMATION_OF_AUDIT_PROCESSES_AND_WHAT_TO_EXPECT_IN_THE_FUTURE

South China Morning Post. (2019). How China censors the internet [Video]. YouTube. https://youtu.be/ajR9J9eoq34?si=uWMUzQce_SAdRdWX