Toxic content circulating on digital platforms: Who is to blame? How to stop it?

The Global "Techlash"

“Improving the speed and quality of research via shared algorithm implementations” by opensourceway is licensed under CC BY-SA 2.0.

Digital platforms empower people to connect with one another across geographical barriers and to organise into 'networked publics' (boyd, 2011). However, a fully open platform is a utopian fantasy built on democratic and communal ideals (Gillespie, 2018). In reality, each platform has its own system of regulations, and necessarily so, because digital platforms simultaneously bring dangers such as obscenity, violence, terrorism, hatred, and abuse.

Governance issues with digital platforms

No violence no hate speech by John S. Quarterman is licensed under CC BY 2.0

This draws attention to the problem of insufficient self-regulation in the industry: because there is no clear consensus on digital companies' corporate social responsibility, no real accountability can be brought to platform governance.

These tensions are also historically evident: Alexis de Tocqueville argued that censorship has prevented full social participation in democracies (Softness, 2016), and before him, Benjamin Franklin harshly criticised those who would give up basic liberties in order to purchase temporary security. In the United States, for example, this debate between security and freedom was raging long before the Civil War.

Practices of digital enterprise in a post-capitalist society 

In the modern era, digital platform companies are not only profiting from outdated laws and imprecise rules in the free market of a post-capitalist society; they are also using online behavioural advertising to monetise user data, build user profiles, and package users as commodities for advertisers (Green & Tho Le, 2022).

Meanwhile, in the face of regulatory pressure from government agencies, tech companies have created departments dedicated to lobbying government officials to support laws and policies that benefit them (Popiel, 2018).

The appropriate level of moderation is an equally contentious topic: given socio-cultural and other differences, should digital companies adhere to a common global framework? And is the current situation, in which countries attempt to impose their own content regulation rules on a global scale, producing a global 'Splinternet' (Flew, Martin, & Suzor, 2019)?

 

How should the internet be regulated? – BBC Newsnight

“Internet freedom has always been controversial but now there’s more pressure than ever to reform this law with profound perhaps even chilling consequences for all of us.” – David Grossman

 

Illegal content: Terrorist communication

In the case of the United States, the internet's open architecture gave terrorists the opportunity to direct the heinous violence of the 9/11 attacks (Softness, 2016). Yet it was not until 2004 that the US government made its first attempt to regulate online speech, demonstrating not only that government has rarely been able to enact timely legislation governing online activity, but also that the responsibility for regulation has fallen more heavily on public and private organisations, as well as on the platform owners themselves.

Fast forward more than a decade: Reynaldo Gonzalez accused Twitter, Google, and Facebook of providing material support for the terrorist activities that led to the deaths of 130 people, including his daughter, in the Islamic State attacks in Paris in November 2015. His accusation challenges several established legal norms and precedents regarding the scope of platform liability, since he argued that it was the platforms that failed to flag and remove dangerous content that could reasonably be expected to lead to terrorist violence.

 

“Without Twitter, Facebook, and the Google-owned YouTube, the explosive growth of ISIS over the last few years into the most-feared terrorist group in the world would not have been possible” – Reynaldo Gonzalez

 

“Social Media Mix 3D Icons – Mix #2” by Visual Content is licensed under CC BY 2.0.

In 2016, four of the world's largest internet companies (Facebook, Microsoft, Twitter, and YouTube) committed to the European Union Code of Conduct on Countering Illegal Hate Speech Online, pledging to monitor and combat the use of their platforms for hateful and terrorist activity.

Now, the question of how to regulate digital platforms and hold them accountable relies on public-private partnerships among multiple stakeholders. Yet how such public-private governance should be implemented becomes a new question, especially when a handful of digital companies control the world's most influential platforms: to what extent can government restrictions serve as a genuine deterrent?

‘Lawful but harmful’ content

Social networks such as Instagram, Twitter, and Facebook are popular with people of diverse social and cultural backgrounds across the globe. While this enables users to share and connect with others across geographic boundaries, conflicts such as offensive speech, hate speech, and online violence constantly arise amid the exchange of vast amounts of nearly uncontrollable information (Watanabe, Bouazizi, & Ohtsuki, 2018).

“Lt. Governor Host MBE_Small Business Stakeholders Roundtable Discussion” by MDGovpics is licensed under CC BY 2.0.

Existing detection methods include building hate-keyword dictionaries and binary classification of text as hate speech or non-hate speech. However, sarcastic and metaphorical hate speech can be difficult to filter out. Watanabe and his team (2018) therefore devised an approach that extracts sentiment-based features alongside semantic and pattern features, including expressions of hatred and offence, and uses the combination to classify tweets into three categories: clean, offensive, and hateful. Although the approach proved effective, digital platforms still need to take stronger measures to regulate 'lawful but harmful' content.
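
To make this concrete, below is a minimal Python sketch of a ternary (clean / offensive / hateful) tweet classifier. It is not Watanabe et al.'s implementation: the training tweets and labels are invented for demonstration, and simple TF-IDF word n-grams stand in for the richer sentiment- and pattern-based features used in the original study.

```python
# Minimal sketch of ternary hate-speech classification (assumes
# scikit-learn is installed). NOT the method of Watanabe et al. (2018):
# the tiny training set is invented, and TF-IDF n-grams stand in for
# their sentiment- and pattern-based features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: each tweet is labelled clean, offensive, or hateful.
tweets = [
    "have a great day everyone",
    "you are such an idiot",
    "people like them do not deserve rights",
    "lovely weather in sydney today",
    "nobody asked for your stupid opinion",
    "that group should be driven out of the country",
]
labels = ["clean", "offensive", "hateful",
          "clean", "offensive", "hateful"]

# Word unigrams and bigrams feed a logistic-regression classifier,
# which predicts one of the three classes for each new tweet.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(tweets, labels)

# With such a tiny dataset the predictions are only illustrative.
print(model.predict(["what a beautiful morning"]))
print(model.predict(["they should all be driven out"]))
```

A production system would train on thousands of labelled tweets and combine such lexical features with hate-keyword dictionaries and sentiment scores, which is precisely where sarcasm and metaphor still cause misclassifications.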

Public-private platform governance: multi-stakeholderism

  • Government penalties to boost corporate social responsibility
  • Co-regulation
  • Multi-stakeholder cooperation
  • User boycotts

Digital platforms must be held more accountable for the power they exercise over democratic deliberation and debate, and legislation should begin with corporate social responsibility, benefiting individual citizens and their societies by holding companies accountable for the harmful content they spread. The European Union has been impressive in this regard.

The following video shows Google’s ongoing confrontation with the EU.

The EU uses penalties to remind platforms of their responsibilities, with the goals of promoting transparency, strengthening users' control over their personal data, and preventing digital platforms from exploiting them (Green & Tho Le, 2022). Between 2017 and 2019, Google, one of the Big Five, was fined approximately $10 billion in total for anticompetitive conduct in the European Union.

“Government” by Nick Youngson is licensed under CC BY-SA 3.0. Retrieved from https://www.picpedia.org/highway-signs/g/government.html

Furthermore, governments and platforms appear to have agreed on a shared governance solution for platform security issues. It is a form of platform-led state intervention in which the government benefits from dealing with a few large companies to address the regulation of a variety of problematic content, while the platforms simultaneously reduce the likelihood of antitrust intervention (Green & Tho Le, 2022); small companies have no voice in this arrangement. This frictionless governance upholds the conventional democratic model by decentralising power and establishing a system of multi-stakeholderism (Popiel, 2022).

Simultaneously, multiple stakeholders, whether public regulators or private advocacy organisations and platforms, agree on courses of action through informal approaches that sit alongside formal legal rules. For instance, instead of going through a cumbersome legal process, law enforcement officials submit information to NGOs, which report it to platforms; in some cases this effectively requires platforms to remove illegal content (Suzor & Gillett, 2022). However, some NGO personnel view their relationship with platforms as largely symbolic, with platforms using them primarily to boost their reputation rather than to actively address problematic content.

Lastly, the increasing purchasing power and commercial value of modern consumers give them the ability to call for and participate in mass boycotts, an effective method of social protest that compels platforms to take greater corporate social responsibility (Green & Tho Le, 2022).

Conclusion

In the Web 2.0 era, the role of digital platform companies in disseminating fake news, hate speech, privacy invasion, and data misuse has fuelled a growing global 'techlash' (Flew, Martin, & Suzor, 2019). At the centre of this public backlash, platforms are increasingly questioned about how they apply standards to various types of problematic content. Media regulation in the digital age is complicated: national policies frequently conflict, and platforms struggle to strike a balance between social responsibility and profitability. Different national policies have created a global 'Splinternet', even though co-regulation appears to be the best option at this time. Regulation of problematic content on digital platforms therefore remains an issue that requires constant improvement.


References:

BBC News. (2016, June 16). Twitter, Facebook and Google “aided Paris attacks.” https://www.bbc.com/news/technology-36548798

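boyd, d. (2011). Social network sites as networked publics: Affordances, dynamics, and implications. In Z. Papacharissi (Ed.), A networked self: Identity, community, and culture on social network sites (pp. 39–58). Routledge.
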
Chee, F. Y. (2022, September 14). Google loses challenge against EU antitrust decision, other probes loom. Reuters. https://www.reuters.com/technology/eu-courts-wed-ruling-record-44-bln-google-fine-may-set-precedent-2022-09-14/

European Commission. (n.d.). The EU Code of conduct on countering illegal hate speech online. https://ec.europa.eu/info/policies/justice-and-fundamental-rights/combatting-discrimination/racism-and-xenophobia/eu-code-conduct-countering-illegal-hate-speech-online_en

Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50. https://doi.org/10.1386/jdmp.10.1.33_1

Gillespie, T. (2018). All platforms moderate. In Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 1–23). Yale University Press. https://doi.org/10.12987/9780300235029-001

Green, L., & Tho Le, V. (2022). Holding the line: Responsibility, digital citizenship and the platforms. In T. Flew & F. R. Martin (Eds.), Digital platform regulation (pp. 85–109). Palgrave Macmillan.

Grist. (2021). Big Tech says it wants to solve climate change. Its lobbying dollars say otherwise. https://grist.org/politics/big-tech-says-it-wants-to-solve-climate-change-its-lobbying-dollars-say-otherwise/

Karanikioti, T. V. (2020, May 6). BEUC argues that self-regulation of digital platforms is not enough. The Platform Law Blog. https://theplatformlaw.blog/2020/05/06/beuc-argues-that-self-regulation-of-digital-platforms-is-not-enough/

Leetaru, K. (2018, December 15). What does it mean for social media platforms to “sell” our data? Forbes. https://www.forbes.com/sites/kalevleetaru/2018/12/15/what-does-it-mean-for-social-media-platforms-to-sell-our-data/?sh=77f10fc12d6c

MBJ [@MBJuzi]. (2022, October 13). Twitter, one toxic place. [Twitter]. https://twitter.com/MBJuzi/status/1580483109799178241?s=20

BBC Newsnight. (2018). How should the internet be regulated? – BBC Newsnight [Video]. YouTube. https://www.youtube.com/watch?v=8EjfqM1ka1Y

Popiel, P. (2018). The tech lobby: Tracing the contours of new media elite lobbying power. Communication, Culture & Critique, 11(4), 566–585. https://doi.org/10.1093/ccc/tcy027

Popiel, P. (2022). Digital platforms as policy actors. In T. Flew & F. R. Martin (Eds.), Digital platform regulation (pp. 131–150). Palgrave Macmillan.

Reuters. (2022). Google hit with new EU antitrust complaint [Video]. YouTube. https://www.youtube.com/watch?v=lnqmBJyU8aE

Softness, N. (2016). Terrorist communications: Are Facebook, Twitter, and Google responsible for the Islamic state’s actions? Journal of International Affairs (New York), 70(1), 201–215.

Suzor, N., & Gillett, R. (2022). Self-regulation and discretion. In T. Flew & F. R. Martin (Eds.), Digital platform regulation (pp. 259–279). Palgrave Macmillan.

Watanabe, H., Bouazizi, M., & Ohtsuki, T. (2018). Hate speech on Twitter: A pragmatic approach to collect hateful and offensive expressions and perform hate speech detection. IEEE Access, 6, 13825–13835. https://doi.org/10.1109/ACCESS.2018.2806394

Zakrzewski, C. (2022, January 21). Tech companies spent almost $70 million lobbying Washington in 2021 as Congress sought to rein in their power. The Washington Post. https://www.washingtonpost.com/technology/2022/01/21/tech-lobbying-in-washington/