Digital Platforms Are Struggling with Content Moderation

"Automotive Social Media Marketing" by socialautomotive is licensed under CC BY 2.0.

The rise of digital platforms has brought a rapid increase in users and has gradually woven these platforms into people’s daily lives. However, the increasingly large and diverse user bases of global platforms, especially digital giants such as Facebook and Twitter, have also become a stage for conflict and controversial content. Bullying, harassment, violent content, hate speech, pornography, and other problematic material appear frequently on digital platforms, creating a serious social problem and a real threat to building a harmonious and healthy Internet. Because this affects users, platforms, and governments alike, preventing the spread of such content has become an issue that must be put on the agenda, and digital platforms themselves must shoulder most of the responsibility and explore more standardized and reasonable models of regulation.

Content moderation on digital platforms is deeply controversial. On one hand, top-down moderation initiated by platforms can largely keep content free of pornography, violence, harassment, discrimination, and other problematic material: the platform environment is maintained by retaining unobjectionable content and removing or adjusting content that is controversial or questionable. Unlike the traditional media of the past, today anyone can publish almost anything on the Internet, so content moderation is a necessity, and nearly all digital platforms moderate content according to some set of rules. On the other hand, platforms’ content moderation is often questioned as a violation of users’ freedom of speech. The perception of digital social platforms as utopias of unrestricted expression is unrealistic, and it is often used as an excuse to publish questionable content. Platforms must tread carefully: they should not interfere too strongly with users’ freedom of speech, yet they must not allow one user’s content to harm other users (Gillespie, 2018). The debate between these two positions never stops.

On January 6, 2021, after the US Capitol was stormed by Trump supporters, Twitter and Facebook announced a “permanent ban” on Trump (Denham, 2021). With more than 36,000 tweets and 88 million followers over four years, Trump had used Twitter to announce policies, fire officials, and bluntly express his personal likes and dislikes about people and events. The platform giants’ ban on Trump sparked discussion about free speech, content regulation, and more. Twitter’s action against Trump was partly justified: some of his tweets violated Twitter’s rules, which prohibit misinformation about elections and the electoral process, and his tweets helped incite supporters to storm Capitol Hill, disrupting public order and fueling social unrest. Even so, the decision generated enormous controversy. The conflict between free speech and content censorship arises precisely because balancing the two remains a highly subjective matter; people differ in culture, ideology, and many other ways. For digital platforms, designing a content moderation mechanism that balances freedom of speech with a healthy platform environment is a huge challenge (Gillespie, 2018). Digital platforms need an effective content moderation mechanism.

“reddit sticker – 3” by Eva Blue is licensed under CC BY 2.0.

On April 18 this year, Yishan Wong, the former CEO of the American social networking forum Reddit, said that Elon Musk did not fully understand the challenges of content moderation and of enforcing freedom of speech on the Internet, and would find himself in a “painful situation” (Fujita, 2022). In his view, politics is not what drives these dynamics; rather, ideas are inherently “powerful and dangerous,” and the big social media platforms have a responsibility to keep them from spiraling out of control. He added that running a social media platform is far more difficult than Musk imagined. Reddit is one of the largest anonymous social platforms in the world, consisting of over a million individual communities, each dedicated to a specific type of content, such as politics, news, gaming, and even some genuinely radical material. Alongside 4chan, Reddit is also considered a breeding ground for toxic culture (Massanari, 2017). Because participation is anonymous, users are rarely held responsible for what they post, so they feel free to express violent, harassing, bullying, discriminatory, or even more extreme content. The growing number of users and their anonymity make moderation and tracking difficult. The forums are full of disturbing content, and because the moderation mechanism is imperfect, a toxic culture has taken shape; communication between users is formed and reinforced through the network itself (Dutton, 2009), and to a certain extent this has even become a selling point of Reddit for trolls and other fans of toxic culture. Even when Reddit has tried to change and increase its scrutiny, the efforts have been a drop in the bucket. Incomplete content moderation makes it difficult to build a platform environment that satisfies most people, and for some users it even arouses a rebellious mentality, prompting them to publish more and more problematic content.

“Content regulation on social media platform” by Sanskriti IAS is licensed under CC BY 2.0.

How to build an efficient, effective, and reasonably balanced content moderation mechanism is the challenge faced by every platform today. Some platforms have taken action: they enforce policies and rules, explain those rules to users, and try to create systems that operate like quasi-judicial forums to assess whether a particular account or piece of content should be kept. Facebook, for example, has established an Oversight Board; when the platform blocks an account or takes other negative action against it, the case can be referred to the board, whose members have extensive expertise in speech issues, for an impartial re-examination.

Relying solely on human labor for content moderation is unrealistic, because the massive volume of content published by a huge user base cannot be handled by hiring reviewers alone. AI technology is becoming more mature, and its use in content review is a foreseeable part of the future of moderation on digital platforms. Trained on massive amounts of data, AI can recognize sensitive information in text, images, audio, and video, such as graphic violence, nudity, and discriminatory language, and can review content immediately after it is posted, stopping the spread of harmful information at the source (a minimal sketch of such a filter appears at the end of this section).

Real-name accounts, or accounts bound to mobile phone numbers, are also controversial to some degree. Many users believe they expose privacy and erode freedom under the surveillance of platforms and governments. Such risks may well exist, but from the perspective of content moderation, traceable accounts can effectively deter some users from publishing problematic content: after all, no one wants to face real-world accusations over their own problematic posts. And if a user does publish content that harms others or even breaks the law, that user can be identified and held accountable.
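To make the AI-review idea concrete, here is a minimal, hypothetical sketch of an automated pre-publication filter. It is illustrative only: the blocklist, the thresholds, and the score_toxicity stub are invented placeholders, not any platform’s real system; in practice the scoring would come from trained classifiers covering text, images, audio, and video.

```python
import re
from dataclasses import dataclass

# Hypothetical blocklist: a real system would rely on trained
# classifiers, not a handful of hand-written patterns.
BANNED_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (r"\bkill yourself\b", r"\bgo die\b")
]

REMOVE_THRESHOLD = 0.9  # assumed cutoff: auto-remove above this score
REVIEW_THRESHOLD = 0.6  # assumed cutoff: queue for human review

@dataclass
class Decision:
    action: str  # "allow", "review", or "remove"
    reason: str

def score_toxicity(text: str) -> float:
    """Placeholder for an ML toxicity classifier (keyword heuristic)."""
    hits = sum(1 for pattern in BANNED_PATTERNS if pattern.search(text))
    return min(1.0, hits * 0.95)

def moderate(text: str) -> Decision:
    """Screen a post at publication time, before it reaches other users."""
    score = score_toxicity(text)
    if score >= REMOVE_THRESHOLD:
        return Decision("remove", f"toxicity score {score:.2f}")
    if score >= REVIEW_THRESHOLD:
        return Decision("review", f"toxicity score {score:.2f}")
    return Decision("allow", "no rule triggered")

if __name__ == "__main__":
    for post in ("Nice photo!", "go die, nobody likes you"):
        decision = moderate(post)
        print(f"{decision.action:>6}: {post!r} ({decision.reason})")
```

The three-tier outcome (allow, human review, remove) mirrors the division of labor described above: automated screening handles the bulk of content at the source, while ambiguous cases are routed to human reviewers, much as contested decisions can ultimately be escalated to a body like Facebook’s Oversight Board.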

In summary, digital platforms need to build content moderation mechanisms robust enough to review large amounts of content in a short time, with rules that the vast majority of users can accept and that balance freedom of speech with a healthy platform environment. Freedom of speech is a human right, but it should not be used to hurt others. While platforms carry out content review, users should themselves avoid posting problematic content; that, too, is part of building a harmonious social environment.

References

Denham, H. (2021, January 11). These are the platforms that have banned Trump and his allies. The Washington Post. https://www.washingtonpost.com/technology/2021/01/11/trump-banned-social-media/

Dutton, W. H. (2009). The fifth estate emerging through the network of networks. Prometheus, 27(1), 1-15. https://doi.org/10.1080/08109020802657453

Fujita, A. (2022, April 25). Former Reddit CEO: Elon Musk shouldn’t take over Twitter. Yahoo Finance. https://finance.yahoo.com/news/why-the-former-reddit-ceo-says-elon-musk-should-back-off-twitter-105349428.html

Gillespie, T. (2018). All Platforms Moderate. In Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 1-23). Yale University Press. https://doi.org/10.12987/9780300235029

Massanari, A. (2017). #Gamergate and the Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329-346.