The importance of, and measures used by, platforms to self-regulate information that may have a negative impact

Ziqi Qiu

"UNAMA FEATURED PHOTO: 22 June 2014" by UN Assistance Mission in Afghanistan is licensed under CC BY-NC 2.0.

With the advent of the Web 2.0 era, monopoly or oligopoly in the internet industry is reflected in the fact that more and more internet content is published on proprietary platforms. At the same time, misleading information and radical ideas circulate online, and problematic content such as bullying, harassment, violence, hate speech and pornography spreads across digital media platforms. Who should be responsible for stopping the spread of this content in the current global environment, and whether existing forms of media policy and regulation can address these issues, have therefore become the focus of research (Flew & Suzor, 2019). This essay argues that digital platforms are responsible for stopping problematic content from circulating on their platforms. Although some governments claim to have laws regulating extreme posts circulating online, it is large technology and telecommunications companies that actually review content and manage the review process. In addition to basic platform auditing, combining algorithmic and manual screening of information can also effectively prevent the dissemination of problematic content (Price, 2021).

“bad-cyberbully” by J_O_I_D is licensed under CC BY 2.0.

Gillespie (2018) defines platforms as online sites and services that host, organize and distribute shared content or social interactions for users, but are not involved in or commissioned to produce that content. The communication of information and the interaction among users on digital platforms, especially on social media, is made possible by the interactive functions the platforms provide. Hence, digital platforms are able to directly determine what content can be published and spread on their services by setting rules and reaching agreements with their users. This could be considered the key reason why digital platforms should be responsible for the problematic content that spreads on their platforms (DeNardis, 2014).

 

Platform self-regulation:

Digital platforms tend to define themselves as technology companies that merely act as mediators, responsible mainly for distributing and presenting content and information generated by their users; in practice, however, they also recommend personalized content to their social media users through algorithms (Gillespie, 2018). Price’s (2021) research likewise suggests that digital platforms increasingly determine what users publish and to whom they publish it, and so play an essential role in the overall process of information dissemination. In order to regulate information and content, digital platforms have set up various rules and functions to govern users’ behavior. For example, YouTube, Facebook video and other platforms provide a ‘report’ button that allows users to flag infringing or illegal content and thereby assist the platforms in supervising content (Flew & Suzor, 2019).
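To make the ‘report’ mechanism concrete, the minimal Python sketch below shows how a platform might tally user reports and escalate a piece of content for closer review once enough independent flags accumulate. The function names, in-memory data structures and the escalation threshold are illustrative assumptions, not the actual implementation used by YouTube or Facebook.

```python
from collections import defaultdict
from typing import DefaultDict, List

# Hypothetical in-memory stores; a real platform would use a database.
flag_counts: DefaultDict[str, int] = defaultdict(int)
review_queue: List[str] = []

# Assumed number of independent reports before an item is escalated.
ESCALATION_THRESHOLD = 3


def report_content(content_id: str, reason: str) -> None:
    """Handle a click on the 'report' button for a piece of content.

    Each report is tallied; once enough users flag the same item,
    it is pushed onto a queue for closer (automated or human) review.
    """
    flag_counts[content_id] += 1
    print(f"Report received for {content_id}: {reason}")
    if flag_counts[content_id] >= ESCALATION_THRESHOLD and content_id not in review_queue:
        review_queue.append(content_id)


if __name__ == "__main__":
    for _ in range(3):
        report_content("video-42", "harassment")
    print("Queued for review:", review_queue)
```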

“Social Media Mix 3D Icons – Mix #2” by Visual Content is licensed under CC BY 2.0.

Social media platforms are increasingly responsible for curating content and monitoring users’ activities (Price, 2021). This is not only because the operation of these digital platforms needs to meet local legal requirements in different countries and regions, but also because they are afraid of losing users who are offended or hurt by certain statements, as well as advertisers who care about the safety and stability of their online communities. Therefore, to protect the company’s image, maintain its position in the industry and ensure the smooth operation of the platform, platforms have to create a healthy online environment by stopping the spread of problematic content (Flew & Suzor, 2019).

Public discourse:

In terms of mediating public discourse, digital platforms play a significant role. Through vetting, platforms filter, evaluate, categorize, approve, remove or hide online content; that is, positive communication behavior is supported and enforced in order to minimize offensive and anti-social behavior, for example by protecting vulnerable groups or individuals from undesirable, illegal, obscene and harmful content (Langvardt, 2018). However, platform agreements and policies have deficits. Although they are adjusted for markets in different countries and can accommodate most national regulatory contexts, this can also lead to problems such as inconsistent content review and contradictory policies issued by the same platform (Flew & Suzor, 2019). For example, YouTube begins with a risk-based machine review. This check has placed certain small independent content producers and material depicting ‘controversial’ and ‘sensitive’ issues, including content about gays and lesbians, off-limits (Bardo, 2017). By removing such content outright, YouTube defeats the platform’s original commitment to freedom of expression (Bardo, 2017).

“Youtube” by Esther Vargas is licensed under CC BY 2.0.

The above dilemma of ‘mis-censorship’ can be addressed by designing a content review system that handles both extremes. For instance, YouTube already has a review mechanism in which a deep-learning algorithm performs the first round of review of risky content, combined with a complaint-reporting mechanism that lets users do a significant portion of the screening and identify negative information the AI misses (Crawford & Gillespie, 2014). If the platform then conducts a further manual re-examination of the AI’s decisions, this will help prevent a large proportion of negative information from reaching social and public platforms and address the omissions of machine review (Flew & Suzor, 2019).
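As an illustration of this hybrid workflow, here is a minimal Python sketch of a triage pipeline in which an automated classifier makes the first pass, user flags and borderline scores escalate items to human reviewers, and only clear-cut cases are handled automatically. The keyword-based classifier, the thresholds and the class names are hypothetical stand-ins for illustration only, not YouTube’s actual system (which relies on trained deep-learning models).

```python
from dataclasses import dataclass
from enum import Enum
from typing import List


class Decision(Enum):
    PUBLISH = "publish"
    REMOVE = "remove"
    HUMAN_REVIEW = "human_review"


@dataclass
class Post:
    post_id: str
    text: str
    user_flags: int = 0      # number of user reports ("flags")
    risk_score: float = 0.0  # filled in by the automated classifier


def classify_risk(post: Post) -> float:
    """Stand-in for a machine-learning classifier.

    A real system would use a trained model; here we simply score
    posts by the presence of a few placeholder keywords.
    """
    risky_terms = {"harassment", "hate", "violence"}
    hits = sum(term in post.text.lower() for term in risky_terms)
    return min(1.0, hits / len(risky_terms))


def moderate(post: Post,
             remove_threshold: float = 0.8,
             review_threshold: float = 0.4,
             flag_threshold: int = 3) -> Decision:
    """First-pass automated triage, escalating uncertain cases to humans."""
    post.risk_score = classify_risk(post)

    # Clear-cut high-risk content is removed automatically.
    if post.risk_score >= remove_threshold:
        return Decision.REMOVE

    # Borderline scores or repeated user flags go to a human reviewer,
    # which is what catches the content the automated pass misses.
    if post.risk_score >= review_threshold or post.user_flags >= flag_threshold:
        return Decision.HUMAN_REVIEW

    return Decision.PUBLISH


if __name__ == "__main__":
    queue: List[Post] = [
        Post("a1", "A harmless holiday video"),
        Post("a2", "Targeted harassment and hate speech", user_flags=5),
        Post("a3", "Heated but borderline political argument", user_flags=4),
    ]
    for post in queue:
        print(post.post_id, moderate(post).value)
```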

“Artificial Intelligence & AI & Machine Learning” by Mike Mackett is licensed under CC BY 2.0.

 

Government regulation:

However, some scholars still believe that it would be more efficient for the government to take charge of information containing bullying, harassment, violent content, hate and pornography (Edwards, 2019). The reasons for this argument include the following:

  • Platform algorithms intervene in social spaces to increase interaction, often without users’ knowledge. For example, Twitter uses its algorithm to place ‘socially relevant’ and ‘authoritative’ content on the front page, so what users see is shaped by the algorithm (Price, 2021).
  • Such algorithmic intervention also makes harmful content more likely to spread, because interventions that incentivize interaction can amplify the harm it causes, as the sketch after this list illustrates.
“Why so much harmful content has proliferated online – and what we can do about it” by Farah Lalani and Cathy Li is licensed under CC 2.0.
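To see why engagement-driven ranking can amplify harmful material, consider the simplified Python sketch of a feed-scoring rule below; the weights, field names and scoring formula are invented for illustration and do not reflect Twitter’s actual algorithm. Because accumulated engagement multiplies the score, an inflammatory post with many interactions can outrank more authoritative content.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Post:
    post_id: str
    relevance: float   # how "socially relevant" the system judges the post (0-1)
    authority: float   # how "authoritative" the source is judged to be (0-1)
    engagement: int    # likes, replies, shares accumulated so far


def feed_score(post: Post) -> float:
    """Illustrative scoring rule: relevance and authority set a baseline,
    and accumulated engagement multiplies it, so highly engaging posts
    (including inflammatory ones) climb the feed."""
    baseline = 0.6 * post.relevance + 0.4 * post.authority
    return baseline * (1 + post.engagement / 100)


def rank_feed(posts: List[Post]) -> List[Post]:
    """Order posts for the front page by descending score."""
    return sorted(posts, key=feed_score, reverse=True)


if __name__ == "__main__":
    posts = [
        Post("calm-report", relevance=0.9, authority=0.9, engagement=20),
        Post("outrage-bait", relevance=0.5, authority=0.3, engagement=400),
    ]
    for post in rank_feed(posts):
        print(post.post_id, round(feed_score(post), 2))
```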

Hence, on this view, the government should be primarily responsible for stopping the spread of problematic content circulating on digital platforms. However, this arrangement may cause public concern about restrictions on freedom of expression. For example, in China the state has blocked users’ access to Google and Twitter, and people can only publish content through Chinese social media, represented by Weibo. When the government directly controls these platforms, any content the state considers harmful can be blocked outright, which prevents the dissemination of information about, for instance, miscarriages of justice and government scandals (Shen, 2016). By contrast, this problem can be addressed by digital platforms’ own algorithms, which are able to filter out problematic content while preserving the public’s freedom of speech on digital platforms (Price, 2021).

At the same time, some may argue that governments can regulate internet platforms through international cooperation. However, a central challenge of contemporary media and communications policy is that “the global nature of communications has somewhat reduced the ability of nation states to control and direct media and communications internally significantly” (Picard & Pickard, 2017, p. 23). States aim to regulate digital media platforms to achieve their desired digital communication outcomes, which can make the online environment more divisive. These regulatory policies involve direct state and national interests, and each country regulates internet platforms differently (Flew & Suzor, 2019). For example:

  • At the ITU World Telecommunication Congress in 2012, it was suggested that the regulation of domain name registration be moved from a non-state body such as ICANN to a more internationally representative organization such as the International Telecommunication Union (ITU), but the proposal was vehemently opposed by the US government and US digital platforms (Schemeil, 2012).
  • China’s proposal for a global internet governance framework with “national cyber sovereignty” as a core principle has struggled to gain the endorsement of the US (Shen, 2016).

 

Conclusion:

Regulating the internet is inherently a complex and challenging task: given the decentralized nature of the online network and the freedom of content creation and distribution, no system or regulator can completely control what is produced and circulated (Flew & Suzor, 2019). Digital platforms should nevertheless be responsible for stopping the spread of bullying, harassment, violent content, hate, pornography and other problematic content circulating on their services, and self-regulation by platforms is currently the more feasible approach. Platforms can adopt various regulations and policies to manage users and information, and continuously improve and upgrade the algorithms that review content. At the same time, they should formulate user agreements that respect the laws and regulations of the local country or region, which will help keep the platform objective and the online environment healthy (Gillespie, 2018).

 

 

References

Bardo, S. (2017). YouTube continues to restrict LGBTQ content. Huffington Post, 16 January. https://www.huffingtonpost.com/entry/youtube-continues-to-restrict-lgbtq-content_us_5a5e6628e4b03ed177016e90

 

Crawford, K., & Gillespie, T. (2014). What is a flag for? Social media reporting tools and the vocabulary of complaint. New Media & Society, 18(3), 410-428.

 

DeNardis, L. (2014). The Global War for Internet Governance, Yale University Press.

Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33-50. https://doi.org/10.1386/jdmp.10.1.33_1

 

Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. Yale University Press.

 

Exploring History. (2018). How China Controls the Internet [Video]. Retrieved from https://www.youtube.com/watch?v=_gxFhmsHNMc&t=23s.

 

Langvardt, K. (2018). Regulating online content moderation. The Georgetown Law Journal, 106(1), 1353-1388.

 

Picard, R., & Pickard, V. (2017). Essential Principles for Contemporary Media and Communications Policymaking. Oxford University Press.

 

Price, L. (2021). Platform responsibility for online harms: towards a duty of care for online hazards. The Journal of Media Law, 13(2), 238-261. https://doi.org/10.1080/17577632.2021.2022331

 

Schemeil, Y. (2012). Global governance: Evolution and innovation in international relations. In E. Brousseau, M. Merzouki & C. Meadel (Eds.), Governance, Regulations and Powers on the Internet (pp. 186-208). Cambridge University Press.

 

Shen, H. (2016). China and global internet governance: Toward an alternative analytical framework. Chinese Journal of Communication, 9(3), 304-324.