Problematic content is ubiquitous on digital platforms: violence, hatred and bullying. Who should stop its spread, and how?

“Instagram and other Social Media Apps” by Jason A. Howie is licensed under CC BY 2.0.

Traditional private information providers such as publishers and broadcasters have established legal obligations for the speech they facilitate (Gillespie, 2018). Digital platforms, by contrast, offer a fast track to publication, and as their reach expands, more and more unethical content is posted. Public and policy concerns about illegal content extend from pornographic and violent material to hate speech and extremism, along with the expectation that harmful actions by users against other users will be addressed (Gillespie, 2018). The following analysis identifies the obligations and responsibilities of individuals, platforms, governments and society in blocking the distribution of illegal content and proposes measures to address it.

Who should be responsible for stopping the spread of problematic content?

  • Individual and platform obligations

The starting point of the controversy is the producers of content on digital platforms. Problematic content can only be effectively controlled when individual users are held accountable for what they say. Digital literacy, a new form of human capital, is one of the essential qualities expected of users in the Internet age (Quiggin & Potts, 2008). It requires users to evaluate Internet content and to produce content of public value, implying that users are responsible for what they publish. However, this obligation is not regulated or enforced externally and often depends on individual behaviour. Digital platforms therefore need to take responsibility for curating content and regulating user activity when users post inappropriate content deliberately or fail to realise that it is inappropriate (Gillespie, 2018). This is not only because of the enormous impact platforms have on society but also a response to user needs. Platforms are essentially intermediaries between producers and users, and as a combination of search engine and traditional media they intervene in and reshape social values and norms (Reuber & Fischer, 2022). Platforms carry high economic and public value and, through the support they provide for freedom of expression, significantly affect democratic processes. Moreover, platforms need to moderate and remove inappropriate content because users expect them to. Public concern about problematic content is gradually shifting from targeting private behaviour to targeting platforms (Ardia, 2010). If platforms do not enforce rules and intervene, users will leave because they believe the platform cannot protect them (Gillespie, 2018). Platforms therefore need to adapt their policies and review content in response to public needs.

 

  • Interventions from outside – society and the state

One reason individuals and platforms alone cannot be relied upon to stop the spread of inappropriate speech is that platforms, as for-profit organisations, will sacrifice the public interest to gain power for themselves (Popiel, 2018). Incidents involving fake news show that platforms may sell out the public interest for political gain (Napoli, 2018), so the objectivity and impartiality of platforms need to be monitored from outside. Society and government need to intervene in the platform society rather than allowing its development to depend entirely on capital (de Kloet et al., 2019). As the platform society grows, digital platforms become entangled with traditional systems. This hybrid state of affairs blurs the distinction between infrastructure and individual sectors, allowing platform operators to evade regulation under most existing laws (van Dijck et al., 2018). Society needs to construct new laws specific to digital platforms so that both government and society can intervene more effectively (Gorwa, 2019). Because the ideology of platform builders may not fit the social context of a particular country or region (Gillespie, 2018), governments must censor and intercept content on platforms according to the national context to avoid social panic.

 

Thus, stopping the dissemination of problematic content is not the sole responsibility of its carriers, the digital platforms. Users, society and governments also have good reason to share that responsibility.

“Panneau stop” by zigazou76 is licensed under CC BY 2.0.

How to stop problematic content

  • Non-governmental organisations

The following analysis draws on Abbott and Snidal’s (2009) governance triangle model, as adapted to platform governance by Gorwa (2019), which considers the roles of NGOs, firms and the state in preventing the spread of problematic content. The most intuitive option for platform users is to report offending content to the platform. However, relying on the crowd to regulate the crowd requires a tiered model for handling complaints, comprising the users who submit reports, the reviewers who assess flagged content and the platform’s adjudication team (Gillespie, 2018). This could be followed by creating an independent industry association that calls on digital citizens to participate in regulating platform content through broader initiatives (Gorwa, 2019). However, this form of regulation depends on the digital literacy of individual users, and how the value of user reports should be judged requires further consideration.

 

  • Platforms

Platforms’ measures are more technical. Firstly, Gillespie (2018) states that platforms need programmed internal processes, including responding to legal obligations by translating them into platform rules and creating new rules as new kinds of problematic content emerge. Because this treatment operates at scale, it cannot completely prevent unethical content from appearing, but it can effectively stop it from spreading. Secondly, platforms need to apply particular scrutiny to content published by users with significant social status, as what they publish has a greater impact on society; platforms should not rely on the regular post-publication filtering system for them (Gillespie, 2018). This can be reinforced by having such users sign an agreement that spells out their values and obligations.

 

  • Government and the state

The most effective measure the state and society can take against problematic content is to create regulations specific to online media platforms rather than simply extending the rules that govern the traditional internet (Gillespie, 2018). It is also essential to counter illegal speech with counter-speech wherever possible. Collaboration between countries is needed to ensure that international platforms comply with local laws and act responsibly; for example, private organisations operating under transnational norms can be established to review platform content and corporate regulatory models (Gorwa, 2019). Because of legal and socio-cultural differences between countries, regulatory approaches must be adapted to local conditions while still preventing the spread of transnational illegal content, such as inflammatory political advertisements placed by foreign governments (Napoli, 2018). This is one of the primary purposes of establishing transnational regulatory organisations.

 

  • Combined impact

Working together, all three parties can construct rules that allow stakeholders in the platform ecosystem, including users, technology companies and governments, to interact (Gorwa, 2019). Such multidimensional collaboration is the most effective way to prevent the spread of problematic content on platforms. However, the line between necessary intervention and excessive moderation needs to be clarified. For example, a pawnshop owner in the US posted videos on TikTok claiming he had acquired an album of photographs from the Nanjing massacre; he could not show all of the photos because the content was too bloody and violent, and the videos caused widespread controversy (Sung, 2022). Following Gillespie (2018), such photographs are not only a record of the war but also an unauthorised disclosure of the suffering of the people they depict. Despite their high historical value, it is essential to consider whether their dissemination should be regulated as strictly as problematic content. Gillespie (2018) addresses this by suggesting that platforms must establish and implement a content review system capable of handling both extremes.

Conclusion

In conclusion, stopping the spread of problematic content is not the platform’s responsibility alone; society and users must also act to reduce its presence on platforms. Systems for reviewing and removing content need to be well established, as the norms and values of platform societies have an undeniable and significant impact on actual democratic societies (van Dijck et al., 2018). Platforms must also vet content rigorously while preserving users’ right to freedom of expression as far as possible.

References

Abbott, K. W., & Snidal, D. (2009). The Governance Triangle: Regulatory Standards Institutions and the Shadow of the State. In W. Mattli & N. Woods (Eds.), The Politics of Global Regulation (pp. 44–88). Princeton University Press. https://doi.org/10.1515/9781400830732.44

 

Bridging News. (2022, November 02). An American blogger claims to have found new evidence of the Nanjing Massacre. [Video]. YouTube. https://www.youtube.com/watch?v=VsSwyGE_H-A

 

de Kloet, J., Poell, T., Guohua, Z., & Yiu Fai, C. (2019). The platformization of Chinese Society: infrastructure, governance, and practice. Chinese Journal of Communication, 12(3), 249–256. https://doi.org/10.1080/17544750.2019.1644008

 

Gillespie, T. (2018). All Platforms Moderate. In Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (pp. 1–23). Yale University Press. https://doi.org/10.12987/9780300235029-001

 

Gillespie, T. (2018). Regulation of and by Platforms. In The SAGE Handbook of Social Media (pp. 254–278). SAGE Publications.

 

Gorwa, R. (2019). The platform governance triangle: Conceptualising the informal regulation of online content. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1407

 

Napoli, P. M. (2018). What If More Speech Is No Longer the Solution? First Amendment Theory Meets Fake News and the Filter Bubble. Federal Communications Law Journal, 70(1), 55–107.

 

Popiel, P. (2018). The Tech Lobby: Tracing the Contours of New Media Elite Lobbying Power. Communication, Culture & Critique, 11(4), 566–585. https://doi.org/10.1093/ccc/tcy027

 

Quiggin, J., & Potts, J. (2008). Economics of Non-Market Innovation and Digital Literacy. Media International Australia Incorporating Culture & Policy, 128(1), 144-150. https://doi.org/10.1177/1329878X0812800118

 

Reuber, A. R., & Fischer, E. (2022). Relying on the engagement of others: A review of the governance choices facing social media platform start-ups. International Small Business Journal, 40(1), 3–22. https://doi.org/10.1177/02662426211050509

 

Sung, M. (2022, September 08). A pawnshop owner thought he discovered unseen images of horrors from the Nanjing massacre. Historians disagree. NBC News. https://www.nbcnews.com/pop-culture/viral/pawnshop-owner-nanjing-massacre-photos-historians-dispute-tiktok-rcna46531

 

van Dijck, J., Poell, T., & de Waal, M. (2018). The Platform Society as a Contested Concept. In The Platform Society: Public Values in a Connective World (pp. 5–32). Oxford University Press. https://doi.org/10.1093/oso/9780190889760.003.0002