“CENSORSHIP..girls on girls in Brussels” by ::..Lk..:: is licensed under CC BY-NC 2.0.
Bullying, harassment, violent content, hate, porn and other problematic content circulate on digital platforms. Who should be responsible for stopping the spread of this content, and how?
Platforms, governments and individual users should all share responsibility for stopping the spread of inappropriate content. Such content can be treated in different ways: banned, limited, removed or restricted on the platform. What triggers moderation depends on current internet values, business interests, political ideology and the desire for self-discipline. However, flaws will occur, as moderation mechanisms are not yet perfect.
The new concept of Freedom of Speech
Platforms themselves should be responsible for stopping the spread of any form of problematic content by restricting, limiting or removing it. Without content moderation, a platform is no longer a platform (Gillespie, 2018); it turns into a free but chaotic and aggressive zone. The Californian ideology of freedom of speech is ubiquitous in the public mind, and users take it for granted on every platform because the developers and contributors of the early internet held ‘freedom’ as their core value (Castells, 2002). That historical ideology has been passed down to the present day. However, platforms have their own role to play, because traditional regulation cannot handle today’s changes effectively or easily (Gorwa, 2019).
“youtube logo” by redsoul300 is licensed under CC BY-NC 2.0.
Take YouTube as an example: the platform’s moderation actions include demonetisation, banning, restriction and removal of content. Comments containing a significant number of swear words indicate a greater likelihood of hateful speech, and videos with more flagged indicators attract higher dislike rates and therefore trigger more moderation (Jiang et al., 2020). Moderation triggers are flagged both by machine-automated detection programs and by human censors, because automated detection alone remains incomplete (Gary, 2022). Human censors bring social issues of their own: they are often traumatised by the content they must review, which has led to industrial action against the companies and demands for compensation for the harm of that exposure (Gary, 2022). Beyond industrial action, human censors also bring their own values and beliefs to flagging decisions, so moderation can be biased and unfair to some users (Jiang et al., 2020). Platforms should therefore be responsible for stopping the spread of problematic content with the combined assistance of machine detection programs and human censors; both have flaws, but together they can carry out content moderation successfully.
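To make that trigger logic concrete, here is a minimal, hypothetical sketch of how signals such as automated toxicity scores, user flags and dislike ratios might be combined into a moderation decision. The thresholds, field names and actions are my own illustrative assumptions, not YouTube’s actual system.

```python
# Hypothetical sketch of a moderation trigger pipeline, loosely based on the
# signals discussed above (flag indicators, dislike ratio, automated detection).
# Thresholds, field names and actions are illustrative assumptions, not
# YouTube's actual rules.
from dataclasses import dataclass

@dataclass
class VideoSignals:
    toxicity_score: float   # 0.0-1.0, from an automated classifier
    user_flags: int         # number of times users reported the video
    dislike_ratio: float    # dislikes / (likes + dislikes)

def moderation_action(signals: VideoSignals) -> str:
    """Return a coarse moderation decision for one video."""
    # Clear-cut automated decisions: very high-confidence content is removed.
    if signals.toxicity_score > 0.95:
        return "remove"
    # Ambiguous cases are escalated to human censors, since automated
    # detection alone is known to be incomplete.
    if signals.toxicity_score > 0.7 or signals.user_flags >= 10:
        return "send_to_human_review"
    # Heavily disliked but not clearly harmful content may be demonetised
    # or age-restricted rather than removed.
    if signals.dislike_ratio > 0.6:
        return "demonetise_or_restrict"
    return "no_action"

if __name__ == "__main__":
    print(moderation_action(VideoSignals(0.8, 3, 0.4)))  # -> send_to_human_review
```

The sketch also shows why neither layer works alone: the automated score handles the obvious cases, while everything in the grey zone falls to human reviewers.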
Hello, business partners!
“Business partners” by Tsahi Levent-Levi is licensed under CC BY 2.0.
Viewed through an economic lens, content moderation is also necessary. Platforms should be responsible for the content available on their sites because they are visited by enormous numbers of people with very different intentions. The optimistic view of the internet held that content creators take part in a cultural democracy: platforms are easy to navigate, need no governance, and let users reach a global audience to share whatever they like (Gary, 2022). That cultural democracy was rejected when Noble (2018) observed that Google’s search engine could be discriminatory towards people of colour and hyperlink them to porn and other problematic content, revealing algorithmic bias and prejudice against people of colour, especially women. Platforms must therefore moderate content to protect users and business partners, and such moderation must balance the varied values and demands of users against the platform’s demand for profit (Gillespie, 2018). The intention of moderation is not only to keep as many users on the platform as possible (Gillespie, 2018) but also to earn as much profit as possible with business partners without scaring them away with chaotic content (Myers West, 2018). In 2019, for example, Google removed more than 2.7 billion pieces of problematic content from its platforms (Google, 2020). While the platform carries the wider community, the community also has power over the platform (Gillespie, 2018). The content we access on platforms is always more or less moderated; having no governance and no moderation is impossible (Gary, 2022). Hence, platforms are responsible for terminating the spread of problematic content for the good of users, business partners and themselves.
Political regimes
Because political regimes differ, governments and platforms together can act as better content moderators through strategies of removing, banning and blocking problematic content. Distinct political value systems have effectively split the internet into four internets (O’Hara & Hall, 2018), and different practices can be observed under different laws and political circumstances. Each internet and its platforms therefore have their own rules, management and level of content moderation, and platform rules and regulations are subject to government-based regulations and standards (Gorwa, 2019). As a result, the level of regulation and the intensity of content moderation vary markedly from place to place.
“Politics and chess at the EP” by European Parliament is licensed under CC BY-NC-ND 2.0.
TikTok is a globally popular short-video platform owned by the Chinese company ByteDance, and its content moderation standards and intensity differ between countries (Jia & Liang, 2021). On the Chinese server, TikTok tends to be sensitive to violence and to speech critical of China; such content may be taken down, and once it is flagged the account has a higher chance of being banned, in line with the Chinese government’s standards and regulations (Jia & Liang, 2021). On the US server, TikTok tends to be sensitive to political content: right-leaning content has been found to contain more swear words, problematic material and dislikes than left-leaning content, so it attracts heavier moderation such as banning and restriction (Jiang et al., 2020).
Western-Centric Content Moderators Miss Regional Contexts by Centre for International Governance Innovation. Retrieved from: https://www.youtube.com/watch?v=kmMqm9Y1_BM&ab_channel=CentreforInternationalGovernanceInnovation
Government-monitored content moderation therefore often reflects the current discourse and phenomena within a country; however, such moderation applies only to the server built for that country. A server can be accessed by anyone and may carry content that does not suit every user’s standard of moderation (Gillespie, 2018), so content is moderated only on a specific server rather than on all of them. In short, governments and platforms should share responsibility for halting the dissemination of problematic content through their power to remove, ban and block it, although this cannot entirely stop such content from spreading on other servers.
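A minimal sketch may help illustrate this server-by-server logic. The rule sets and content categories below are invented for illustration and do not reproduce TikTok’s actual policies; the point is simply that the same post can be blocked on one regional server and remain visible on another.

```python
# Hypothetical sketch of server-specific moderation rules. The categories
# and rule sets are invented for illustration only.
REGION_RULES = {
    "cn": {"violence", "speech_critical_of_government"},
    "us": {"violence", "extreme_political_content"},
}

def is_blocked(region: str, content_categories: set[str]) -> bool:
    """A post is blocked only on servers whose banned categories match it."""
    banned = REGION_RULES.get(region, set())
    return bool(banned & content_categories)

post = {"speech_critical_of_government"}
print(is_blocked("cn", post))  # True: removed on the Chinese server
print(is_blocked("us", post))  # False: still visible on the US server
```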
Self-Discipline
Although governments and platforms have together played a well-coordinated role in managing the spread of problematic content through multiple strategies, each user should, most importantly, be responsible for their own speech and actions both online and offline, as punishments and penalties will apply. The culture of freedom of speech assumes that people will behave themselves on the internet and that there are more good people than bad; that assumption is now being challenged and criticised (Gary, 2022). In the contemporary media landscape, as platforms have grown, chaos, violent content, illegal and hateful speech and explicit content have followed (Gillespie, 2018). Such a culture is therefore no longer an appropriate description of the current media landscape.

Since 1 January 2020, the Chinese government has required compulsory real-name authentication on almost every Chinese social media platform to combat the spread of problematic content and promote the concept of self-discipline. This power (government surveillance) can steer user actions and speech, and people tend to become docile bodies under the threat of potential surveillance (Roberts, 2005); self-moderation is therefore maintained on the platform by each individual. Facebook has also announced a real-name policy, which requires every user to register an account under their real, legal name; however, it is not strictly enforced and is easy to get around. The intention of such moderation is to stop problematic content from spreading, and this potential surveillance does benefit the internet ecosystem (Gillespie, 2018). The Chinese government’s mandatory real-name authentication works efficiently in deterring individuals from posting and spreading problematic content because punishments and penalties are clearly set out, whereas Western social media platforms may not regulate their platforms with the same methods or intensity. The spread of problematic content therefore remains a problem.
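The contrast between the two policies can be sketched as a simple check, purely for illustration; the function and policy names below are assumptions, not either platform’s real system. Under a compulsory policy, only identity-verified accounts can post, whereas an optional policy leaves posting open to any registered account.

```python
# Hypothetical sketch contrasting a compulsory real-name gate with an
# optional one. Names and logic are assumptions for illustration, not the
# platforms' actual systems.
def can_post(user: dict, policy: str) -> bool:
    """Decide whether a user may post under a given identity policy."""
    if policy == "compulsory_real_name":
        # Posting requires a verified legal identity (e.g. a national ID check).
        return user.get("identity_verified", False)
    if policy == "optional_real_name":
        # The platform asks for a real name but does not verify it,
        # so any registered account can post.
        return True
    return False

anonymous_user = {"identity_verified": False}
print(can_post(anonymous_user, "compulsory_real_name"))  # False
print(can_post(anonymous_user, "optional_real_name"))    # True
```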
Summary
In summary, every entity that participates in the networked public has a responsibility to stop the spread of problematic content, assisted by punishments and penalties such as banning, limiting, removing and restricting such content on the platform, for the sake of a wholesome internet ecosystem and of one another.
Reference List
Australian Human Rights Commission. (n.d.). 3 Freedom of expression and the Internet. https://humanrights.gov.au/our-work/3-freedom-expression-and-internet
Castells, M. (2002). The culture of the Internet. In M. Castells (Ed.), The Internet galaxy: Reflections on the Internet, business, and society (pp. 36-63). Oxford University Press.
Centre for International Governance Innovation. (2020, June 13). Western-Centric Content Moderators Miss Regional Contexts [Video]. YouTube. https://www.youtube.com/watch?v=kmMqm9Y1_BM&ab_channel=CentreforInternationalGovernanceInnovation
Facebook. (n.d.). Names allowed on Facebook. https://www.facebook.com/help/229715077154790/
Fung, F. (2021, September 13). The social media chaos driven by the brainwashed society. Medium. https://medium.com/predict/the-social-media-chaos-eab3133ab820
Gary, J. (2022). ARIN2610 Internet Transformations, Lecture 6, Week 6, Module 6: Governing the Internet: Content Moderation and Community Management [PowerPoint Slides]. School of Literature, Art and Media, University of Sydney Canvas. https://canvas.sydney.edu.au/courses/42984/pages/week-6-governing-the-internet-content-moderation-and-community-management?module_item_id=1628061
Gary, J. (2022). ARIN2610 Internet Transformations, Lecture 7, Week 7, Module 7: Governing the Internet: Policy and Regulation [PowerPoint Slides]. School of Literature, Art and Media, University of Sydney Canvas.
Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. New Haven: Yale University Press. https://doi-org.ezproxy.library.sydney.edu.au/10.12987/9780300235029
Google. (2020). Our approach to information quality and content moderation. https://blog.google/documents/84/information_quality_content_moderation_summary.pdf/
Gorwa, R. (2019). The platform governance triangle: conceptualising the informal regulation of online content. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1407
Jia, L., & Liang, F. (2021). The globalization of TikTok: Strategies, governance and geopolitics. Journal of Digital Media & Policy, 12(2), 273–292. https://doi.org/10.1386/jdmp_00062_1
Jiang, S., Robertson, R. E., & Wilson, C. (2020). Reasoning about Political Bias in Content Moderation. Proceedings of the AAAI Conference on Artificial Intelligence, 34(09), 13669-13672. https://doi.org/10.1609/aaai.v34i09.7117
Kraus, R. (2020, September 20). YouTube puts human content moderators back to work: Actual humans are tagging in. Mashable. https://mashable.com/article/youtube-human-content-moderation
Myers West, S. (2018). Censored, suspended, shadowbanned: User interpretations of content moderation on social media platforms. New Media & Society, 20(11), 4366–4383. https://doi.org/10.1177/1461444818773059
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.
O’Hara, K., & Hall, W. (2018). Four Internets: The Geopolitics of Digital Governance (No. 206). Centre for International Governance Innovation.
Pibriefupdate. (n.d.). How Do Legal Systems Differ From One Country to Another?. https://www.pibriefupdate.com/content/pibulj-sec/2065-how-do-legal-systems-differ-from-one-country-to-another
Qu, T. (2021, October 27). China updates rules on real-name registration online in crackdown on schemes to revive banned user accounts. South China Morning Post. https://www.scmp.com/tech/policy/article/3153887/china-updates-rules-real-name-registration-online-crackdown-schemes
Roberts, M. (2005). The production of the psychiatric subject: power, knowledge and Michel Foucault. Nursing Philosophy, 6(1), 33-42. https://doi.org/10.1111/j.1466-769X.2004.00196.x