Are the harms of digital platforms the negligence of the platforms or the failure of government? How should online platforms optimize governance?

 

"Personal social media landscape" by Anne Helmond is licensed under CC BY-NC-ND 2.0.
“Personal social media landscape” by Anne Helmond is licensed under CC BY-NC-ND 2.0.

Background of digital platform development

Digital platforms evolved with the underlying technology and formally entered the Web 2.0 era. Early digital platforms were dominated by a libertarian ethos of free information exchange, and their distributed network architecture interconnected personal ideologies and fostered the rise of virtual communities (Burgess et al., 2018). This openness, enabling fast communication without geographical restrictions, allows users to express their opinions fully and achieve freedom of expression; it also means users can receive massive amounts of information in a short period of time. However, the explosion of content and the lack of regulation of digital platforms have led to the spread of harmful material such as terrorist violence, disinformation, and hate speech (Jones, 2022). The regulation of digital platforms has therefore become a social issue of importance to many stakeholders. This paper analyzes the current state of platform governance through two cases, the spread of terrorist organizations and the proliferation of disinformation and hate speech on digital platforms, and discusses who should stop the spread of negative content and how they should approach it.

The current state of digital platform management and problems

The global Internet is currently divided among the US-led open Internet, the EU-led “bourgeois” Internet, the Chinese-led authoritarian Internet, and the commercial Internet of the platforms themselves (Burgess et al., 2018). Although digital platforms make it easy to interact across these spheres, a completely open online environment raises a series of information security risks, and inadequate regulatory oversight of digital platforms leads to public opinion crises and social panic.

  • The massive spread of terrorism on digital platforms
“Terrorism definition” by Jagz Mario is licensed under CC BY-SA 2.0.

The main purpose of terrorist organizations on digital platforms is to spread fear and lead people to join the organization. On platforms such as Twitter, Facebook, and YouTube, videos of their threats, torture, and political or monetary demands can easily be found, all ultimately serving political goals (Ammar & Xu, 2017). ISIS, as the most prominent terrorist organization, has a well-established recruitment system to attract new recruits and guide them to spread violent and bloody content on digital platforms. In March 2019, a shooting driven by extremist terrorism occurred in New Zealand. The perpetrator, a white nationalist, live-streamed his mass killing of worshippers at two Christchurch mosques for nearly 20 minutes via Facebook (Abbott, 2019). The investigation found that he had not only made white-supremacist statements on Twitter before the broadcast but had also watched numerous pro-Islamic State and “violent jihad” videos to learn from them. A total of 51 people died in the attack, whose footage spread rapidly on social media, and the victims included citizens of countries such as Afghanistan, Bangladesh, and Saudi Arabia (Abbott, 2019). This suggests that extremist groups use digital platforms to radicalize their rhetoric and propagate violence and bloodshed in order to incite racial antagonism and socio-political and economic unrest. “Islamic terrorism”, for example, uses Islam to justify its motives and actions through the anonymity and openness of digital platforms, and people in turn develop stereotypes of the religions and ethnicities involved, namely “Islamophobia” (Abbott, 2019). The collective perceptions formed by the efficient dissemination of digital media thus drive racially framed perceptions of risk.

"Chitral 15 days training for health workers by Relief International" by groundreporter is licensed under CC BY-NC 2.0.
“Chitral 15 days training for health workers by Relief International” by groundreporter is licensed under CC BY-NC 2.0.

It follows that it is difficult for digital platforms to achieve timely regulation through algorithmic sorting alone. First, platform algorithms have limitations such as a narrow range of recognized scenarios, low audit efficiency, and poorly defined semantic boundaries, which make it difficult to distinguish harmful material accurately and promptly within the platform’s enormous content output. In the New Zealand live-streaming attack, Facebook’s AI systems were unable to identify the live scenario and react quickly enough to stop it (Ammar & Xu, 2017). Second, the anonymity and geopolitical character of digital platforms produce divergent censorship standards, and unclear definitions of authority further increase the difficulty of regulation. The asymmetry of platform algorithms and information rights across countries leaves digital platforms without the power to regulate (Popiel, 2018). For example, some NGOs help to legitimize terrorism through digital platforms: the Southeast Asian branch of the Saudi-based International Islamic Relief Organization (IIRO) was placed on US and UN terrorist lists in 2006 for funding terrorist organizations through digital platforms under the cover of transnational cooperation (Ammar & Xu, 2017). When terrorist organizations use aid organizations as a cover for transnational cooperation, digital platforms have no right to intervene and stop them. In summary, the limitations of technology and of authority make it difficult for digital platforms to regulate themselves.
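
To make the semantic-boundary problem concrete, here is a minimal, hypothetical Python sketch of a naive keyword filter of the kind early moderation pipelines relied on; the blocklist and example posts are invented for illustration. It flags a news report condemning an attack (a false positive) while passing coded incitement that uses no listed word (a false negative), which is exactly the gap described above.

# Minimal sketch of a naive keyword-based moderation filter, illustrating
# the "poorly defined semantic boundaries" problem. The blocklist and the
# example posts are hypothetical; real systems use trained classifiers.

BLOCKLIST = {"attack", "jihad", "execution"}

def flag_post(text: str) -> bool:
    """Flag a post if it contains any blocklisted keyword."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & BLOCKLIST)

# False positive: a news report condemning terrorism is flagged.
print(flag_post("Reporters condemned the attack on the mosque."))      # True

# False negative: coded or euphemistic incitement passes the filter.
print(flag_post("Brothers, the hour has come. You know what to do."))  # False

A production system replaces the blocklist with trained classifiers, but the boundary problem persists: context, irony, and coded language shift faster than any fixed decision rule, which is why live video proved so hard to catch.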

  • Disinformation and hate speech are prevalent on digital platforms

In the post-truth era, people live in a vortex of fake news and hate speech, and digital platforms, as the main channel of dissemination, are the first problem to be solved. Disinformation means misinformation and deception deployed deliberately, falsifying facts for the purpose of dissemination. Such stories are fabricated and propagated on digital platforms to deceive the public and gain ideological, political, or economic benefits (McGonagle, 2017). Anonymity on digital platforms leads to the proliferation of fake accounts, leaving users in a dilemma in which true and false news are difficult to distinguish. This illustrates the importance of platform regulation to a stable society.

Politically active communities have vulnerabilities that disinformation agents exploit for political ends. Twitter data show that Russian operators used ethnic and political identities to infiltrate diverse groups of authentic users, including liberals, conservatives, and Black Americans, to stir up anger against opposing outgroups (Freelon & Lokot, 2020). By spreading disinformation, they pushed these groups across ideological boundaries. The Russian government-sponsored Internet Research Agency (IRA) used false identities to post Kremlin-friendly disinformation on digital platforms and infiltrate real user communities for political purposes. An analysis of IRA tweets and their more than two million authentic responses found that different online communities had been successfully infiltrated by disinformation: roughly 70% of the tweets denigrated political and social opponents, while 30% supported core political in-groups (Freelon & Lokot, 2020). The campaigns exploited interracial mistrust and animosity to manufacture political events. Russia also vandalized websites and promoted false and misleading information around the 2020 election, sometimes accompanied by offensive rhetoric or malicious online activity, and it has run multiple disinformation campaigns to sway public opinion in support of its war in Ukraine (Jones, 2022).

Digital platforms have taken a number of measures to control the impact of fake news, hate speech, and abusive messages. However, the vast amount of information and inconsistent vetting norms make it almost impossible to control all content. In 2019, Facebook allowed political actors to run false ads, holding them to different standards of truth and speech than ordinary users (McGonagle, 2017). This asymmetry of content regulation and belief exacerbates public distrust of the platform environment: people accuse digital platforms of damaging the outcomes of elections and referendums, and the flood of disinformation distorts democratic public debate (Allcott & Gentzkow, 2017). Divergent censorship rules across digital platforms therefore make comprehensive content regulation difficult to achieve.

Looking ahead: who should regulate digital platforms, and how?

The current self-regulation of digital media has significant limitations. Good order on digital platforms cannot be achieved by the platforms alone; it requires joint regulation by multiple stakeholders. First, digital platforms should cooperate with governments to set unified, clear audit standards for network regulation, using algorithms to tag platform content, enriching the range of recognized scenarios, and improving review efficiency. Second, platforms can attach evidence links to those tags so that users can report and give feedback on flagged information (Freelon & Lokot, 2020). Third, platforms should display user reports and track statistically the content and reach of questionable information. Finally, sharing and reply functions can be suspended for suspicious messages pending user complaints and human review, to prevent misjudgment.
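
As a rough illustration only, the Python sketch below models the tag-report-restrict workflow just outlined; the Post data model, the report threshold, and the function names are hypothetical assumptions for this post, not any platform’s actual system.

# A minimal sketch of the tag-report-restrict workflow described above.
# Everything here (Post, REPORT_THRESHOLD, function names) is hypothetical.

from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    text: str
    tags: list = field(default_factory=list)            # e.g. "disputed"
    evidence_links: list = field(default_factory=list)  # shown to users
    reports: int = 0
    sharing_disabled: bool = False

REPORT_THRESHOLD = 5  # hypothetical: reports before sharing is suspended

def tag_post(post: Post, tag: str, evidence_url: str) -> None:
    """Label a post and attach an evidence link users can inspect."""
    post.tags.append(tag)
    post.evidence_links.append(evidence_url)

def record_report(post: Post) -> None:
    """Count a user report; past the threshold, suspend sharing (not the
    post itself) so human review can confirm or reverse the decision."""
    post.reports += 1
    if post.reports >= REPORT_THRESHOLD:
        post.sharing_disabled = True

post = Post("p1", "Example claim circulating widely.")
tag_post(post, "disputed", "https://example.org/fact-check")
for _ in range(REPORT_THRESHOLD):
    record_report(post)
print(post.tags, post.sharing_disabled)  # ['disputed'] True

Note that reporting suspends only the sharing and reply functions, not the post itself, so that human review can confirm or reverse the automated judgment and limit misclassification.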

Second, governments can regulate digital platforms through legislation, monitoring, fines, and hearings. The UK’s Online Harms White Paper clarifies the government’s monitoring powers over digital platforms and opposes the publication of extreme speech, footage of terrorist attacks, and cyberbullying on online platforms (Shanapinda, 2020). US legislation on terrorist use of social media likewise requires the evaluation of posted content to promote the safe development of digital platforms (Shanapinda, 2020). Although national governments legislate differently, common oversight of digital platforms can be achieved through international cooperation.

 

Reference list

Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. The Journal of Economic Perspectives, 31(2), 211–236. https://doi.org/10.1257/jep.31.2.211

Ammar, J., & Xu, S. (2017). Technology to the rescue: A software-based approach to tackle extreme speech. In When jihadi ideology meets social media (pp. 91–143). Springer International Publishing. https://doi.org/10.1007/978-3-319-60116-8_5

Burgess, J., Marwick, A., & Poell, T. (Eds.). (2018). The SAGE handbook of social media. SAGE Publications.

Freelon, D., & Lokot, T. (2020). Russian Twitter disinformation campaigns reach across the American political spectrum. Harvard Kennedy School Misinformation Review, 1(1). https://doi.org/10.37016/mr-2020-003

Jones, D. (2022). Russian disinformation campaigns disrupt Ukraine narrative. Cybersecurity Dive.

Abbott, M. (2019, March 15). Jacinda Ardern consoles families after New Zealand shooting. The New York Times. https://www.nytimes.com/2019/03/15/world/asia/new-zealand-shooting.html

McGonagle, T. (2017). “Fake News”: False fears or real concerns? Netherlands Quarterly of Human Rights, 35(4), 203–209. https://doi.org/10.1177/0924051917738685

Popiel, P. (2018). The Tech Lobby: Tracing the Contours of New Media Elite Lobbying Power. Communication, Culture & Critique, 11(4), 566–585. https://doi.org/10.1093/ccc/tcy027

Shanapinda, S. (2020). Oversight Exercised Over the Powers of the Agencies. In Advance Metadata Fair (pp. 161–183). Springer International Publishing. https://doi.org/10.1007/978-3-030-50255-3_7