In the digital era, tens of thousands of messages are posted online every day, many of them destructive. Content moderation is therefore crucial to preventing online harm and social panic. This report examines the impact of online harm and the resulting need for content moderation.
- To protect the physical and mental health of internet users
To begin with, content moderation protects the physical and mental health of internet users. Cyberbullying is a major form of online harm and causes psychological problems in a large number of young people. According to one survey, more than one-third of young people in 30 countries have been victims of cyberbullying, and one in five has dropped out of school as a result (Moonshot News, 2023). Verbal abuse, harassment, and other forms of online harm can have a significant negative impact on teenagers' mental health, leading to symptoms such as anxiety, despair, and irritability. Online harm also encompasses self-harm and suicide: there is an independent association between young people's problematic internet use and suicide attempts (Lavis & Winter, 2020). Negative content on the internet, including games that encourage suicide, can wear down young people's spirits, damage their mental health, and ultimately lead them to take their own lives. Content moderation is therefore needed, first and foremost, to reduce negative content on the internet and limit the likelihood that teenagers are exposed to harmful information. Second, it enables timely attention to the help-seeking posts that young people may publish on social platforms after being cyberbullied (Lavis & Winter, 2020). Adolescents who feel ignored or unable to communicate in their offline lives often seek assistance from strangers online. When a teenager posts anonymously for help, that may be the best moment to reach them, and content moderators may be the first to discover such posts and arrange timely assistance. In short, online harm can cause irreversible psychological damage to many young people, as well as varying degrees of threat and injury to other online users.
As a result, internet content moderation can, to a certain extent, preserve the physical and mental health of the online public while also providing timely and appropriate assistance to people in need.
- To eliminate incorrect information and prevent social panic
Secondly, content moderation can promptly eliminate false information and prevent social panic. In recent years, social media platforms have developed rapidly and the number of online users keeps growing. At the same time, a large number of users who exploit public opinion for profit have appeared on the internet, sowing confusion and controversy. For instance, during the 2016 U.S. presidential election, some people spread large amounts of fake news while harassing the commentators and journalists who challenged them; they used fake news to mislead voters and to attract clicks for profit (Gillespie, 2018). This is a direct result of the lack of content moderation on the internet. In addition, terrorist groups spread violent propaganda videos online, causing widespread fear. Mass panic arises when alarming news, amplified by widespread media coverage, spreads unchecked among the public, and a lack of content moderation invariably fuels such anxiety: the wide dissemination of false information, terrorist videos, and similar material has already caused social unrest. For these reasons, internet content moderation is truly necessary. According to one survey, 65% of Americans support technology companies restricting false information online, and 55% support the government taking such measures (Moonshot News, 2023). First of all, when fake news is discovered, content moderation makes it possible to remove the relevant material. Second, content moderation can help spread accurate, reassuring news to reduce social panic and stabilize the public mood. Last but not least, it allows the general public to assess the news more dispassionately, without being distracted by fake stories, so that they can form their own opinions. Content moderation also safeguards the fairness of elections to a certain extent.
In order to create a harmonious and stable online community, content moderation is essential.
- To reduce cultural conflicts and protect the commercial interests of businesses
Thirdly, content moderation can reduce cultural conflicts and protect the commercial interests of businesses. Content moderation must take into account not only each platform's own rules, which reflect the cultural environment of each location, but also the social, cultural, commercial, and legal systems and orders of the wider world (Roberts, 2019). Because the internet is used by vast numbers of people from many nations and cultural backgrounds, cultural differences online are inevitable, and they can expose users from different backgrounds to online harm. Some platforms' guidelines therefore make respect for culture mandatory; for example, Instagram's community guidelines state clearly that users must respect the cultural backgrounds of other users. By enforcing respect for each country's culture and history, content moderation can reduce the emergence of culture clashes. At the same time, some intolerant users may attack other cultures, producing cultural disputes and causing online harm to local users. As for enterprises, a small number of people disclose business secrets on the internet, handing an advantage to competitors and causing substantial harm to the companies involved (Roberts, 2019). Content moderation is thus required to preserve commercial interests and market equilibrium. In fact, content moderators are frequently required to sign confidentiality agreements about the nature of their work, and moderation is needed to protect corporate privacy and to prevent the market imbalances that the disclosure of trade secrets would bring about (Roberts, 2019).
All in all, content moderation is vital to safeguarding users from all cultural backgrounds and upholding cultural diversity, as well as to protecting the rights and interests of businesses and maintaining market equilibrium.
However, some may wonder whether there is a conflict between freedom of speech and content moderation. In my opinion, the two need to strike a balance, and some minimal form of content moderation is required: there is no platform that operates without rules (Gillespie, 2018). We recognize the right to freedom of speech, but that does not mean we should tolerate people using the internet to hurt other users or other cultures. The challenge for content moderation is therefore to draw up appropriate guidelines for intervening in online material; manual review is also required to stop content that violates those rules from appearing. Platforms must respect the rights of the users they moderate while ensuring that the legitimate rights of other users are protected. The biggest challenge facing content moderation is to develop and implement a moderation regime that satisfies both requirements, with rules that address the major online harms while still tolerating content that is problematic but justifiable (Gillespie, 2018).
To sum up, the impact of online harm is enormous: it not only affects the physical and mental health of internet users but also seriously harms society. Content moderation is therefore indispensable, as it can, to a certain extent, reduce the damage caused by harmful information on the internet. At the same time, online content moderation must safeguard the legitimate rights and interests of everyone, and striking that balance is one of the central considerations for content moderation today.
A global definition of what is online harm. (2023, August 7). Moonshot News. https://moonshot.news/prime/in-focus/global-coalition-for-digital-safety-defining-online-abuse/
Gillespie, T. (2018). All platforms moderate. In Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 1–23). Yale University Press. https://doi.org/10.12987/9780300235029-001
Increasing support for limitations of online violence and false information. (2023, July 26). Moonshot News. https://moonshot.news/news/media-news/increasing-support-for-limitations-of-online-violence-and-false-information/
Lavis, A., & Winter, R. (2020). Online harms or benefits? An ethnographic analysis of the positives and negatives of peer-support around self-harm on social media. Journal of Child Psychology and Psychiatry, 61(8), 842–854. https://doi.org/10.1111/jcpp.13245
Roberts, S. T. (2019). Understanding commercial content moderation. In Behind the screen (pp. 33–72). Yale University Press. https://doi.org/10.12987/9780300245318-003