Content Moderation on Social Media

Feature image: “Social Media Logos” by BrickinNick is licensed under CC BY-NC 2.0.

Figure 1: “File:Social-Media-1.webp” by Wikimedia Commons is licensed under CC BY-SA 4.0.

Social media moderation refers to the administration and review of user-generated content (UGC) and user behavior on online social networks. The most essential component of managing social media successfully is a precise set of rules, standards, limits, and regulations on each platform that all users must follow (Langvardt, 2017). Posting guidelines are designed to enhance each user’s welfare through positive interactions and activities, and to prevent objectionable material from being posted or uploaded. The two main types of moderation are pre-moderation and post-moderation. In pre-moderation, content submitted by a user is reviewed before it appears, to determine whether it is safe, appropriate, and harmless to the page’s audience; this stops undesirable UGC from spreading any further online. Post-moderation is the opposite: users post or upload their content in real time, and inappropriate content is filtered out once it is discovered to violate the community’s rules and policies (Langvardt, 2017).
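The two moderation modes differ only in when the review happens relative to publication. A minimal Python sketch of that difference (all names hypothetical; a real system would use trained classifiers and human reviewers, not a keyword list):

```python
# Toy review check standing in for a real classifier or human reviewer.
BANNED_TERMS = {"spam-link", "slur-example"}

def is_acceptable(text: str) -> bool:
    """Return True if the post passes the (toy) content check."""
    return not any(term in text.lower() for term in BANNED_TERMS)

def pre_moderate(submission: str, feed: list[str]) -> bool:
    """Pre-moderation: review BEFORE publishing; reject on failure."""
    if is_acceptable(submission):
        feed.append(submission)
        return True
    return False

def post_moderate(submission: str, feed: list[str]) -> None:
    """Post-moderation: publish immediately; violations are removed later."""
    feed.append(submission)  # goes live in real time

def sweep(feed: list[str]) -> list[str]:
    """Later pass that removes content found to violate the rules."""
    return [post for post in feed if is_acceptable(post)]
```

The trade-off is visible in the sketch: pre-moderation never lets a violation reach the audience but delays every post, while post-moderation publishes instantly and leaves violations visible until a sweep runs.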

Figure 2: “Social Media Mixed Icons” by Blogtrepreneur Flickr Photostream is licensed under CC BY 2.0.

Issues arising from content moderation on digital platforms

Social media platforms regularly engage in “content moderation”: the removal, downgrading, and sometimes outright blocking of user content and/or user accounts on social media and other digital platforms, typically based on an alleged violation of a “community standards” policy. In recent years, this practice has become a subject of intense public interest (Samples, 2019), not least because of increasing pressure from governments and certain segments of the public to restrict various types of discourse. It has also become more intrusive and aggressive, as companies struggle to self-regulate in the hope of evading legal mandates (Langvardt, 2017).

The rules, regulations, policies, and limitations described above can prevent aggression, discrimination, bigotry, bias, vulgarity, stereotyping, pornography, and other content that contributes to users’ emotional and psychological distress. A healthy community will always be every user’s greatest asset, and in the virtually toxic corners of the web, social media censorship is the remedy (Veglis, 2014).

Social media censorship

Figure 3: “Donald Trump” by Public Domain Pictures is licensed under CC0 1.0.

The topic of content moderation is gaining traction throughout the world. As governments push commercial players to take tangible steps to limit the spread of specific types of information on the internet, public policy experts and academics dispute whether such measures harm freedom of speech. Even before US President Donald Trump’s posts prompted Facebook and other platforms to deactivate his accounts earlier this year, the difficulty of regulating the ocean of information that billions of people upload to social media every day was evident (Samples, 2019). Differentiating genuine harassment or abuse from friendly banter, identifying harmful visual content among the millions of items posted online every day, and distinguishing genuine political messages from organized trolling campaigns are difficult enough for English-speaking audiences in North America; these challenges are exacerbated when multiple languages and cultural practices are involved (Langvardt, 2017).

Issues surrounding content moderation continue to raise debate over whether the system is broken and whether it silences people and denies them freedom of expression. Moderating content is dangerous work. In practice, content moderation relies on people in remote (and almost always less economically wealthy) places to clean our online spaces of the worst humanity has to offer so that we don’t have to see it, and some of these workers report traumatic working conditions (Veglis, 2014). Content moderation is also inconsistent and confusing. To begin with, consider resources (Langvardt, 2017). Companies like Facebook and YouTube devote significant resources to content moderation, employing thousands of workers and using sophisticated automation tools to flag or remove unwanted content. But one thing is very clear: those resources are not evenly distributed. Policing copyright is a top priority, and because automation can detect nipples more reliably than it can recognize hate speech, users often complain that more attention is paid to policing women’s bodies than to speech that could be genuinely harmful (Samples, 2019).
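The unevenness described above is easy to reproduce with the simplest automated tool, a keyword flagger: it reliably catches exact matches but cannot read context, so it flags innocuous or reclaimed uses of a word while missing abuse phrased without listed terms. A toy sketch (the word list is hypothetical, not any platform’s real rules):

```python
# Hypothetical block list; includes a term some communities have reclaimed.
FLAGGED_WORDS = {"dyke"}

def naive_flag(post: str) -> bool:
    """Flag a post if it contains any listed word, ignoring all context."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & FLAGGED_WORDS)
```

Run against real posts, this flags a proud self-description and a slur identically, while hostile speech that avoids the listed words passes untouched, which is exactly the pattern of over- and under-enforcement users complain about.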

Figure 4: “Invite-only social app Clubhouse thrives in LGBTQ+ community” by Marco Verch is licensed under CC BY 2.0

Another problem is that content moderation decisions can harm real-world users and workers (Schaake, 2021). Companies’ attempts to moderate content they deem undesirable have too often had a disproportionate effect on groups that are already marginalized. Consider corporate attempts to eradicate homophobic and transphobic discourse: while this appears to be a laudable goal, such policies have led to the censorship of LGBTQ users for pushing back against abuse or for using reclaimed terms such as “dyke.”

Figure 5: “social media” by Sean MacEntee is licensed under CC BY 2.0

Restrictions on government regulation

The government should not regulate content moderation on social media. Many conservatives argue that Facebook and Google are monopolies trying to limit conservative speech (Veglis, 2014); Donald Trump recently complained that Google’s searches are biased against Republicans and conservatives (Langvardt, 2017). Conversely, many on the left complain that the big social media platforms promoted both the election of Trump in 2016 and the violence in Charlottesville in 2017. Many on both sides believe the government should actively regulate platforms’ moderation to achieve equity, balance, or other values, and some activists support new government efforts to regulate social media. Yet although some platforms are large and dominant, their market power can crumble, and speakers excluded from one platform have alternatives; regulation risks entrenching monopolies instead of weakening them. Others fear that social media creates “filter bubbles” that impede democratic deliberation (Veglis, 2014), but the evidence for filter bubbles is not strong, and there are few constitutionally compatible remedies.

Preventing the damage caused by “fake news” or “hate speech” is far beyond the purview of government, and tech companies already appear determined to remedy this damage themselves, leaving little for the government to do. Social networks are platforms, not publishers (Spohr, 2017). They allow a large number of people to produce and consume information, and they are open to both producers and consumers. Social media managers regulate content on a platform, but the platform does not vet everything posted there (Langvardt, 2017). Moderation at this scale is necessarily imprecise: the number of users and their expectation of immediate publication rule out exact regulation (Samples, 2019).

Figure 6: “Artificial Intelligence Archives” by Technovation Girls is licensed under CC BY-NC-ND 4.0.

Social media therefore involves four groups: users who generate content, users who consume content, users who generate commercial speech (advertising), and social media managers who host speech. Each group is engaged in speech: users generate and consume information, and social media services create the forum in which that speech takes place (Spohr, 2017). Individual speech is highly protected in the United States, and the online activities of social media companies also enjoy significant protection from government regulation; economic regulation can likewise violate an individual’s right to freedom of expression (Schaake, 2021). Social media companies appear to depend on ordinary business transactions, the regulation of which is presumed constitutional. But the exchange behind social media is no ordinary business transaction. People use social media to express themselves (Veglis, 2014), and they gain access to social networks in exchange for data about themselves. If the government prohibited such an exchange, individual speech would be limited: prohibiting the economic transaction would be tantamount to prohibiting speech (Samples, 2019).

An idea of the public interest often underlies government policies, including regulation of private companies (Veglis, 2014); in other words, politicians and others expect government to enforce the public interest through regulation. A public interest argument has two parts. First, it must be shown that government action is required to secure some widely shared value, private activity being assumed unable or insufficient to achieve it. Second, it must be argued that government action will achieve the value in question without significant costs to other important values. As we have seen with social media, core values are at stake, so the second part of the public interest argument faces a steep uphill climb (Schaake, 2021).


In conclusion, a team of in-house content moderators would ensure that nothing offensive or disconcerting reaches your site. It would also defend your audience from possible bullying or trolling by irrational users, and moderating user-generated content may offer a valuable opportunity for pattern recognition.


References

Langvardt, K. (2017). Regulating online content moderation. Georgetown Law Journal, 106, 1353.

Veglis, A. (2014, June). Moderation techniques for social media content. In International Conference on Social Computing and Social Media (pp. 137-148).

Schaake, M., & Reich, R. (2021). Election 2020: Content Moderation and Accountability.

Samples, J. (2019). Why the government should not regulate content moderation of social media. Cato Institute Policy Analysis, (865).

Spohr, D. (2017). Fake news and ideological polarization: Filter bubbles and selective exposure on social media. Business Information Review, 34(3), 150-160.