
As the Internet has developed and media platforms have flourished, people’s lives have become increasingly intertwined with these digital communities. This integration, however, has raised many questions. Over the years, inappropriate content has been banned yet continues to appear on various media platforms: users can easily post to the public, and others can just as easily access what they post. Examples include, but are not limited to, the ubiquitous availability of hateful and discriminatory speech, easily accessible pornography, and the spread of false rumors. Who should be responsible for regulating inappropriate speech, what content should be regulated, and how?
The history of platform regulation
In 1996, Section 230 of the Communications Decency Act was enacted. In just a few words, it became the cornerstone of the American Internet: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” In other words, Internet companies are not responsible for content posted by third parties or users on their platforms. Digital media and communications platform companies generally tend to present themselves as communications intermediaries rather than media companies (Gillespie, 2018, p. 256); they see themselves as hosts of content, not its creators. At a deeper level, Section 230 carries two meanings: Internet companies are not responsible for third-party information on their platforms, and they are not responsible for removing content from their platforms in good faith. Today this section can look like a shield for some Internet platform companies, since they are clearly capable of monitoring what users say and choosing what content appears on their platforms. When the law was enacted, however, the Internet industry was in its infancy, and platform filtering systems were far less mature than they are now. Holding Internet companies liable for their users’ misconduct would have greatly restricted the development of the emerging industry.
To this day, Section 230 remains highly controversial (Stern, 2022). The exemptions it provides to Internet companies have been widely criticized. Mark Zuckerberg, the founder of Facebook, for example, has argued that platforms should not be granted blanket immunity, but should instead be required to have systems in place to identify illegal content and the ability to remove it (Tracy, 2021). These controversies, however, have not been enough to overturn the section, so legally it is individuals who are responsible for the illegal information they post. Yet even though platforms bear no legal responsibility for their users’ speech, they still regulate it, because an excess of disturbing content causes real problems, such as the loss of users. Despite some geographical differences, most platforms draw the same boundaries when it comes to regulation, prohibiting or restricting the following: pornographic content, violence, hate speech, the promotion of suicide, harassment of other users, and the introduction or promotion of illegal activities such as drug use (Gillespie, 2018, p. 263).
Moderation before deletion

Compared with removing offending content after the public has already encountered it, the more effective approach is to stop such content from being released at all through moderation. Moderation means screening, evaluating, categorizing, approving, or removing/hiding online content in accordance with relevant dissemination and publication policies (Flew et al., 2019). A lack of moderation allows inappropriate content such as fake news or hate speech to spread widely. Elon Musk, who plans to change content moderation on Twitter, claims that moderation fundamentally undermines democracy because it violates the principle of free speech (Low, 2022). Experts, however, believe that abolishing content moderation would let unfiltered content flood the platform, gradually legitimizing more and more harmful material (Low, 2022). In other words, if the moderation system were removed, offending content would reach the public more easily; and once the public grows accustomed to illegal information being readily available on the Internet, more such content comes to seem acceptable, with serious consequences. Objectively, then, it is necessary to moderate content before users publish it. Moderation constrains users’ behavior on the platform, preventing people from hiding behind virtual identities to deliberately harm others without taking any responsibility. To protect vulnerable groups such as children, this pre-publication step is necessary.
Moderation differences
Beyond the widely regulated categories of content discussed above, governments in different countries hold different views on how platforms and the Internet should be regulated. In Thailand, Facebook yielded to the government by blocking some content that opposed the monarchy. The move was criticized on the grounds that Facebook was working with an authoritarian regime to thwart democracy and foster authoritarianism in Thailand; a company spokesperson stated that the government’s demand “contravenes international human rights law” (Ruiz, 2020).

Despite the controversy, the Thai government held firm and threatened to prosecute Facebook, and to this day insulting the monarchy remains a serious crime in Thailand. Similarly, the Chinese government requires media platforms to strictly moderate what users post in order to prevent speech against the Communist Party. The best-known case is Google, which refused to comply with the Chinese government’s policies and consequently lost the Chinese market (Bradsher & Mozur, 2014). China’s tight Internet controls pose a major challenge for social platforms around the world that try to enter the Chinese market. For LinkedIn, a foreign platform that had successfully integrated into China, the most difficult part was finding an acceptable balance between freedom of expression and Chinese law (Mozur & Goel, 2014). Within a few years, however, LinkedIn announced it was shutting down its platform in China, explaining that the Chinese version faced a “more challenging operating environment and higher compliance requirements” (Shroff, 2021).
Conclusion
In conclusion, because of Section 230, American media platforms are under no legal obligation to take responsibility for inappropriate content on their platforms; instead, whoever posts restricted information is responsible for their own words. Platforms nevertheless retain the right to delete illegal content for the sake of other users’ browsing experience, and most moderate content before it is posted in order to filter out inappropriate material. What counts as illegal, however, varies with each country’s laws. Moderation is a particularly testing task for countries that uphold free speech: finding the right balance between free expression and appropriate content remains a challenge.
Reference List
Bradsher, K., & Mozur, P. (2014, September 22). China web clampdown pinching companies like Google. CNBC. Retrieved October 11, 2022, from https://www.cnbc.com/2014/09/21/china-web-clampdown-pinching-companies-like-google.html
Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50. https://doi.org/10.1386/jdmp.10.1.33_1
Gillespie, T. (2018). Regulation of and by platforms. In J. Burgess, A. Marwick, & T. Poell (Eds.), The SAGE Handbook of Social Media (pp. 254–278). SAGE.
Legal Information Institute. (n.d.). 47 U.S. Code § 230 – Protection for private blocking and screening of offensive material. Retrieved from https://www.law.cornell.edu/uscode/text/47/230
Low, D. (2022, May 3). Twitter users may see more fake news, hate speech after takeover: Experts flag possible outcome of Musk’s plans to relax platform’s content moderation policies. The Straits Times. Retrieved from http://ezproxy.library.usyd.edu.au/login?url=https://www.proquest.com/newspapers/twitter-users-may-see-more-fake-news-hate-speech/docview/2658562165/se-2
Mozur, P., & Goel, V. (2014, October 6). To reach China, LinkedIn plays by local rules. CNBC. Retrieved October 11, 2022, from https://www.cnbc.com/2014/10/05/to-reach-china-linkedin-plays-by-local-rules.html
Ruiz, T. (2020, August 25). Facebook to sue Thai government over demand to block anti-monarchy group. Coconuts. Retrieved from https://coconuts.co/bangkok/news/facebook-to-sue-thai-government-over-demand-to-block-anti-monarchy-group/
Shroff, M. (2021, October 14). China: Sunset of localized version of LinkedIn and launch of new InJobs app later this year. LinkedIn Official Blog. Retrieved from https://blog.linkedin.com/2021/october/14/china-sunset-of-localized-version-of-linkedin-and-launch-of-new-injobs-app
Stern, S. A. (2022). Section 230 of the Communications Decency Act: Potential reforms and implications. The Brief, 51(3), 20–24. http://ezproxy.library.usyd.edu.au/login?url=https://www.proquest.com/scholarly-journals/section-230-communications-decency-act-potential/docview/2703523962/se-2
Tracy, R. (2021, March 24). Facebook’s Zuckerberg proposes raising bar for Section 230. MarketScreener. Retrieved from https://www.marketscreener.com/quote/stock/META-PLATFORMS-INC-10547141/news/Facebook-s-Zuckerberg-Proposes-Raising-Bar-for-Section-230-32777975/