How Does the Regulation of Platforms Influence Publics?

Introduction

In its early days, the Internet was regarded as a free platform. Because it belonged to no single organization or individual, and remained open to all visitors in the spirit of its founding ideals, it was seen as a decentralized and free online community. Regulatory measures on social media platforms had not yet become widespread, and imperfect rules and management loopholes gave users considerable room for free speech. With the rapid development of technology, however, the Internet plays an increasingly important role in everyday life, and more and more people participate on increasingly powerful digital media platforms. Questions of free speech and diversity online have been amplified accordingly. Issues of platform management must be re-examined, and the responsibilities and rights that platforms bear and are granted need renewed emphasis. Today's Internet is an online community built on big data, algorithms, and other advanced technologies; the content presented to users has been carefully calculated and designed. It is therefore unsurprising that platforms are steadily refining their regulatory measures, and these measures have, in turn, transformed the Internet from a platform for free speech into one that restricts freedom: content moderation limits free speech to a great extent. This article discusses and re-examines the impact of platform regulation on users, focusing on content moderation and platform supervision.

Definition

For the individual, freedom of expression means protection from tyranny: each citizen has the right to be wrong, and the government may not prevent or interfere with an individual's right to express his or her opinions. For digital media, the platform itself acts as a supervisor: it serves as a public online space that hosts speech, and it holds a legal right to oversee that speech.

“Unleashing the Power of Generative AI: Exploring the Pros and Cons in the Context of Input Control, Regulation, and Privacy Protection” by CollideDataSolutions licensed under CC0 1.0

Free speech

The Internet is a platform for free speech, and it has the potential to liberate speech. At the same time, social media platforms host, store, block, and disseminate other people's content (Gillespie, 2018). Platforms act not only as a bridge between users, and between users and the public, but also as a bridge between citizens and law enforcement agencies, policymakers, and regulators (Lesson). Users communicate and interact on the platform, playing the role of creators as well as consumers, and citizens can freely express their opinions and beliefs online.

Yet although the opportunities for free expression that commenting creates may seem ideal, even utopian, excessive freedom in an unregulated and unconstrained community inevitably brings foreseeable negative consequences. Unregulated speech can harm both individuals and society. For individuals, it may take the form of defamation, rumor, and slander directed at other users. For society, hate speech raises the likelihood of crime or violence, can disturb public order, and can harm national security; some comments are antithetical to peace and may incite rebellion against the state, while racist comments create the potential for racial discrimination. What is clear is that the dangers posed by inadequate platform regulation are rising significantly: pornography, obscenity, violence, abuse, illegality, and hatred may bring indelible and irreversible consequences for society (Gillespie, 2018). Platforms now face calls from policymakers, users, foreign governments, and the media to regulate and control controversial speech (Gillespie, 2018). Social media platforms are increasingly taking on the responsibility of curating content and policing users, not only to satisfy legal requirements or forestall additional regulation, but also to avoid losing users who are offended or harassed, to appease advertisers eager to associate their brands with the platform, to protect their corporate image, and to honor their own personal and institutional ethics (Gillespie, 2018). It is therefore both necessary and imperative for platforms to take regulatory measures.

Main directions of regulation

As a regulator, a platform's main regulatory directions can be divided into the following points. First, the purpose of supervision is to crack down on violent and criminal behavior that seriously endangers social security. At the same time, exploitation, bullying, harassment, invasion of privacy, hate speech, threats of violence, pornography, fake news, media manipulation, racism, extremism, self-harm, misogyny, and other harmful speech also fall within the scope of supervision (Gillespie, 2018). Content moderation applies not only to the content published on the platform but also to the feedback that published content generates, such as comments and shares. The main form of platform supervision is content moderation, as the sketch below illustrates.
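To make this scope concrete, here is a minimal sketch in Python of rule-based moderation applied to both a post and the comments it attracts. The post structure, blocklist categories, keywords, and function names are hypothetical illustrations, not any real platform's system:

```python
# A minimal sketch of rule-based content moderation. The post structure,
# categories, and keywords below are hypothetical illustrations, not any
# real platform's API.

# Categories of harmful speech a platform might police (Gillespie, 2018).
BLOCKLIST = {
    "harassment": ["bully", "harass"],
    "hate_speech": ["hate"],
    "violence": ["threat", "attack"],
}


def flag_text(text: str) -> list[str]:
    """Return the blocklist categories a piece of text appears to violate."""
    lowered = text.lower()
    return [
        category
        for category, keywords in BLOCKLIST.items()
        if any(keyword in lowered for keyword in keywords)
    ]


def moderate_post(post: dict) -> dict:
    """Review a post and the feedback it generated, not just the post itself."""
    return {
        "post": flag_text(post["text"]),
        "comments": [flag_text(comment) for comment in post.get("comments", [])],
    }


post = {
    "text": "Sharing an ordinary update about my day.",
    "comments": ["I will harass you until you quit.", "Nice photo!"],
}
print(moderate_post(post))
# {'post': [], 'comments': [['harassment'], []]}
```

In this toy pipeline the original post passes, while one of the responses is flagged, which is exactly why moderation must cover feedback as well as published content.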

“Viewpoint: Digital Platform Regulation | Prof Saharsh Agarwal” by ISB Executive Education

What are the impacts of platform regulation?

Supervision and governance are the core and essence of online platforms, and necessary platform regulation brings many benefits. On the Internet, platforms have a social responsibility to delete illegal content and to provide users with a safe online environment and experience. Regulation also sustains advertising: for financial reasons, offensive content and the bad press it attracts may drive advertisers and their loyal customer groups away, so content moderation helps platforms retain advertisers and avoid financial losses. Regulation likewise reduces the likelihood of infringement and public disputes.

At the same time, regulation faces real obstacles. Traditional communication policies are difficult to apply, adhere to, and enforce online (Gillespie, 2018). Illegal activity is hard to detect on the Internet, since users can exploit the anonymity and encryption a site provides to post inappropriate comments (Lesson). Furthermore, illegal content easily crosses regional jurisdictions (Gillespie, 2018), which creates enormous difficulties for content review. Platforms must therefore develop their own measures based on their own characteristics; some of these interventions are popular with users, while others are controversial (Lesson). For human moderators, long-term exposure to negative content in the course of review can create mental health problems (Karabulut et al., 2022). For platforms, the standards and intensity of content moderation are hard to define: excessive censorship and deletion of content may provoke revolt among users and citizens, and different moderators may judge the same speech differently, since bias and subjectivity about content are unavoidable. When content moderation faces ambiguity, reviewers need to verify the author's intent and authenticity to determine whether the speech is acceptable (Stewart, 2021), yet speech censored by artificial intelligence through algorithms often yields inflexible results. In addition, because a platform can only moderate content after users publish it, harmful comments are not discovered before moderation; harmful speech remains on the platform for at least a short time until it is reviewed (Gillespie, 2018).
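A minimal sketch can make this inflexibility concrete. The following hypothetical keyword filter (the banned list and example sentences are invented for illustration) flags a benign sentence as readily as a harmful one, because a substring rule cannot see the author's intent:

```python
# A hypothetical keyword filter illustrating why purely algorithmic
# moderation is inflexible: a substring rule cannot see the author's
# intent, so benign text gets flagged alongside harmful text.

BANNED_WORDS = ["attack"]


def naive_filter(text: str) -> bool:
    """Flag any text containing a banned word, ignoring all context."""
    lowered = text.lower()
    return any(word in lowered for word in BANNED_WORDS)


print(naive_filter("We should attack them tonight."))             # True: correctly flagged
print(naive_filter("The study analyses heart attack patients."))  # True: false positive
```

Because the rule returns the same verdict in both cases, a human reviewer is still needed to weigh context and intent, which is precisely the ambiguity Stewart (2021) describes.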

“New S’pore-style regulatory framework for Sri Lanka websites; activists concerned” by IMESH RANASINGHE licensed under CC BY-SA 3.0

Conclusion

Platforms are not only media but also components of public discourse, and they face a regulatory framework (Gillespie, 2018). Platform supervision can effectively filter out harmful speech, including pornography, threats of violence, hatred, racism, extremism, self-harm, and misogyny. However, the standards of content moderation need to be further clarified in order to avoid the negative impacts of regulation.

References

Gillespie, T. (2018). Governance by and through platforms. In The SAGE handbook of social media (pp. 254–278). https://ebookcentral-proquest-com.ezproxy.library.sydney.edu.au/lib/usyd/detail.action?docID=5151795

Karabulut, D., Ozcinar, C., & Anbarjafari, G. (2022). Automatic content moderation on social media. Multimedia Tools and Applications. https://doi.org/10.1007/s11042-022-11968-3

Stewart, E. (2021). Detecting fake news: Two problems for content moderation. Philosophy & Technology. https://doi.org/10.1007/s13347-021-00442-x