
Who Will Govern the Internet's Inappropriate Content?
Today's Internet gives users more and more voices: people express their opinions through posts or seek to join identity groups. While the benefits of this transformation are obvious, the dangers are becoming more apparent in the growing presence of pornographic, bullying, violent, illegal, abusive, and deceptive content (Gillespie, 2018). Once such content appears on a platform, it develops a life of its own and multiplies rapidly. Platforms, as hosts of users and entire communities with very different value systems, are expected to police content and resolve disputes. Amid this ever-changing and complex flood of bad content, platforms are the actors that can most quickly discover it and block its spread at the source, and they can do so in a more flexible and inclusive way than individuals or governments. In this process, a platform can apply different blocking schemes to different levels of incidents through computer algorithms and specialized departments.
First, platforms command more resources, technology, and authority than other actors, so they can act at the source to prevent the spread of bad content. They give people broad access to their right to free speech. As a result, whether they like it or not, they must serve as norm-setters, law interpreters, arbiters of taste, arbitrators of disputes, and enforcers of whatever rules they decide to establish (Gillespie, 2018). When bullying, harassment, violent content, hate, pornography, and other problematic material is published, the platform's reviewers can check and delete it with the help of computers and artificial intelligence. Platforms spend enormous time and resources on this work: Facebook alone has pledged to spend 5% of its yearly revenue ($3.7 billion) on content moderation, which is more than Twitter makes in a year (Yildirim & Zhang, 2022). Platform ecosystems are also constantly being updated and improved; papers in computer science and software engineering frequently claim to have studied the shortcomings of earlier technologies and finally found a new solution. Yet understanding human culture, racial history, gender relations, power dynamics, and similar topics is precisely what is being asked of automated systems (Munn, 2020), which is why human judgment remains part of the process. Objectively speaking, social platforms themselves are the most efficient and capable actors for preventing the spread of toxic content.
How Platforms Govern Bad Content
Platforms can also "clean up" the Internet by reducing anonymity. China's microblogging platform Weibo has added IP-based location tags to crack down on bad conduct and ban users who post maliciously, particularly after the COVID-19 pandemic dramatically altered the Internet ecosystem and people's lives (Pollard, 2022). These measures are intended to curb inappropriate conduct such as impersonation, the stirring of contentious topics, harmful misinformation, and traffic scraping, and to guarantee the veracity and transparency of published material. For a platform, continuously strengthening and adjusting its rules to keep the community stable and peaceful is the long-term way to develop both itself and the wider Internet.
Platforms are also more inclusive, adjustable, and flexible than individuals or governments when content control becomes complex. Beyond basic tools, such as algorithms that cut out dangerous content directly or a "safe search" mode for search engines, there is much toxic material that is hard to identify, such as content steeped in a particular subculture or fake news, which machines cannot judge; it requires staff to assess and stop it using unbiased personal judgment and a moral bottom line. The "Pizzagate" incident, for example, was once the source of mass panic online, and the spread of fake news of this kind can be seen as a bigger threat than violent crime, illegal immigration, and even terrorism (Center for Information Technology and Society, 2022). Platforms are evaluated according to a variety of standards, including conflicting psychological impact theories and competing cultural politics (Gillespie, 2018). This is where a platform's flexibility shows: rather than relying on technology alone to review problematic content, its dedicated review teams are better suited to judging such material and stopping its spread at the key points.
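To make this division of labor concrete, the following is a minimal Python sketch of the tiered approach described above: clearly dangerous posts are removed automatically, while context-dependent material such as suspected fake news is routed to human reviewers. All label sets, names, and decisions here are hypothetical illustrations, not any platform's actual rules.

from dataclasses import dataclass

# Hypothetical label sets standing in for a real upstream classifier.
CLEAR_VIOLATIONS = {"graphic violence", "child abuse", "terrorist propaganda"}
NEEDS_HUMAN_JUDGEMENT = {"unverified claim", "political rumor", "satire"}

@dataclass
class Post:
    post_id: int
    text: str
    labels: set  # labels attached by a (hypothetical) automated classifier

def moderate(post):
    """Return 'remove', 'human_review', or 'allow' for a single post."""
    if post.labels & CLEAR_VIOLATIONS:
        return "remove"        # machine-decidable: blocked at the source
    if post.labels & NEEDS_HUMAN_JUDGEMENT:
        return "human_review"  # context-dependent: routed to staff
    return "allow"

if __name__ == "__main__":
    queue = [
        Post(1, "shocking footage ...", {"graphic violence"}),
        Post(2, "politician X secretly ...", {"political rumor"}),
        Post(3, "my cat this morning", set()),
    ]
    for p in queue:
        print(p.post_id, moderate(p))

The point of the sketch is only that the automated layer handles the unambiguous cases, while the ambiguous ones are deliberately passed to people rather than decided by machines.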
Special Cases and Platform Governance
The Internet empowers individual users, but it also renders everyone a potential victim (Chang et al., 2020). Beyond violent, pornographic, bullying, and other toxic content that can be ruled out at a glance, platforms need to shift more of their review effort toward news whose truth is hard to establish, because such content can trigger mass panic and social conflict at any moment, and by the time a government notices and intervenes it may be too late. In the "Pizzagate" incident, a man armed with a semi-automatic rifle walked into a modest pizza parlor called Comet Ping Pong in Washington, D.C., and opened fire, because a baseless conspiracy theory circulating before the 2016 presidential election claimed that Hillary Clinton and Democratic elites were running a child sex trafficking ring out of the restaurant. No one was hurt, but the man acting on the fake news could easily have killed innocent people, and PizzaGate became an online hashtag and a convenient way to stir up discontent. In such cases, the first thing a platform can do is watch the information at the start of its dissemination, review the authenticity of the content, and keep reminding readers, through the platform's own rules and notices, how to identify false content. Questionable stories can be flagged with a warning that they have been disputed by third-party fact-checkers. In this way the platform lets users see what is wrong with bad content and works from the source to reduce its spread.
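As a rough illustration of this "flag and reduce spread" idea, the sketch below assumes a hypothetical ranking system in which a story disputed by fact-checkers receives a warning label and a reduced distribution score. The field names and the dampening factor are invented for illustration and do not describe any platform's real mechanism.

from dataclasses import dataclass, field

@dataclass
class Story:
    title: str
    base_reach: float  # reach the ranking system would normally give the story
    warnings: list = field(default_factory=list)
    disputed: bool = False

def apply_fact_check(story, verdict):
    """Record a fact-checker verdict; 'disputed' adds a label and down-ranks the story."""
    if verdict == "disputed":
        story.disputed = True
        story.warnings.append("Disputed by third-party fact-checkers")

def effective_reach(story, dampening=0.2):
    """A disputed story keeps only a fraction of its normal distribution (assumed factor)."""
    return story.base_reach * (dampening if story.disputed else 1.0)

if __name__ == "__main__":
    s = Story("Pizzeria runs secret ring", base_reach=100_000)
    apply_fact_check(s, "disputed")
    print(s.warnings, effective_reach(s))  # label attached; reach cut from 100,000 to 20,000

The design choice the sketch reflects is the one argued for in the text: rather than deleting a questionable story outright, the platform labels it and limits its circulation while its accuracy is assessed.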
It is worth noting that preventing the spread of this type of content is a real challenge for platforms. Even though they have invested heavily in governance, the results still do not satisfy most people. Establishing the authenticity of such content in the first place is complex, and the burden is substantial because each story has to be checked individually rather than processed in bulk. Some of the moderation labor can be outsourced to improve the effectiveness of stopping dangerous information; for instance, corporations like Facebook and Google contribute to an independent fund that is then used to support independent fact-checking. The amount of money and attention put into review already shows how much importance platforms attach to this work.

In another case, a father in Thailand used the social network's live video service to stream the killing of his 11-month-old daughter before committing suicide. Since it took the platform almost 24 hours to remove the video, the impact was undoubtedly enormous. However, given the volume of content posted daily and the speed at which it spreads, it is unlikely that even thousands of investigators could deal with such a violent video quickly, so platforms are generally not to blame for these failures (Tongo, 2022). Facebook's manual review process for this type of content is somewhat inefficient. By building systems around newer artificial intelligence techniques such as text, image, and video mining, potentially offensive content can be held back long enough for users to report it, giving Facebook employees time to determine whether it needs to be taken down (Tongo, 2022). In some simple or general sense, platforms simply cannot "do it right": too many varied ideals, expectations, and competing commitments between user demands and profit needs must be taken into account. That is not to say platforms are blameless, or that their efforts should escape criticism (Gillespie, 2018).
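The hold-and-report idea attributed to Tongo (2022) can be sketched roughly as follows: a stream that automated text, image, or video mining scores as risky is held rather than freely circulated, and accumulated user reports escalate it to staff. The risk cutoff, report threshold, and status names below are assumptions made purely for illustration, not a description of Facebook's actual system.

from dataclasses import dataclass

REPORT_THRESHOLD = 3   # number of user reports that triggers escalation (assumed value)
RISK_CUTOFF = 0.8      # mining-model score above which a stream is held (assumed value)

@dataclass
class LiveStream:
    stream_id: int
    risk_score: float   # output of a hypothetical text/image/video mining model
    reports: int = 0
    status: str = "live"

def ingest(stream):
    """Hold high-risk streams for review instead of letting them circulate freely."""
    if stream.risk_score >= RISK_CUTOFF:
        stream.status = "held_for_review"

def add_report(stream):
    """User reports push a held stream to human reviewers for a takedown decision."""
    stream.reports += 1
    if stream.status == "held_for_review" and stream.reports >= REPORT_THRESHOLD:
        stream.status = "escalated_to_staff"

if __name__ == "__main__":
    s = LiveStream(stream_id=42, risk_score=0.93)
    ingest(s)
    for _ in range(REPORT_THRESHOLD):
        add_report(s)
    print(s.status)  # -> escalated_to_staff

The trade-off the sketch makes visible is the one the cited article raises: holding risky streams buys reviewers time, but it also delays legitimate broadcasts, which is part of the cost such a system carries.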
Conclusion
Finally, now that platforms such as Facebook, Twitter, and Weibo have become our main sources of information and news, the public space they provide and the moderation effort they make may be motivated by a sincere desire to foster a friendly community, by the purely economic need not to lose users driven away by explicit content or ruthless abuse, or by fear of legal intervention (Gillespie, 2018). Whatever the motive, platforms remain more efficient than individuals and governments: they rely on technology and dedicated human teams to review content jointly, and they weigh toxic content flexibly against a range of evaluations. Although maintaining public order on the Internet requires the joint efforts of all parties, platforms can certainly be the most powerful and efficient link.
References
Chang, L., Mukherjee, S., & Coppel, N. (2020). We Are All Victims: Questionable Content and Collective Victimisation in the Digital Age. Asian Journal Of Criminology, 16(1), 37-50. https://doi.org/10.1007/s11417-020-09331-2.
Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (pp. 1-23). Yale University Press.
Munn, L. (2020). Angry by design: toxic communication and technical architectures. Humanities And Social Sciences Communications, 7(1). https://doi.org/10.1057/s41599-020-00550-7.
Pollard, J. (2022). China’s Weibo to Fight ‘Bad Behaviour’ by Adding User Location. Asia Financial. Retrieved 13 October 2022, from https://www.asiafinancial.com/chinas-weibo-to-fight-bad-behaviour-by-showing-user-location.
Tongo, R. (2022). There’s a technology that could stop Facebook Live being used to stream murders – but it has a cost. The Conversation. Retrieved 12 October 2022, from https://theconversation.com/theres-a-technology-that-could-stop-facebook-live-being-used-to-stream-murders-but-it-has-a-cost-76221.
Center for Information Technology and Society, UC Santa Barbara. (2022). The Danger of Fake News in Inflaming or Suppressing Social Conflict. Retrieved 13 October 2022, from https://www.cits.ucsb.edu/fake-news/danger-social.
Yildirim, P., & Zhang, J. (2022). How Social Media Firms Moderate Their Content. Knowledge at Wharton. Retrieved 11 October 2022, from https://knowledge.Wharton.upenn.edu/article/social-media-firms-moderate-content/.