Who should be responsible for controlling the spread of harmful content on digital platforms? And how?

Bullying, harassment, violent content, hate speech, pornography and other problematic content circulate on digital platforms. Who should be responsible for stopping the spread of this content, and how?

Digital Platform
"Social Media Mix 3D Icons - Mix #2" by Visual Content is licensed under CC BY 2.0.

Harmful content spreads widely on today’s digital platforms

The continuous development of the Internet has created an environment in which people can participate in discussions freely (Hosseini et al., 2017). Talk of freedom and free speech runs throughout the past and present development of digital computers and the Internet (Digwatch, 2022). However, as digital platforms grow and the number of users increases, many problems have arisen. Content involving bullying, harassment, violence, hatred and pornography is disseminated on digital platforms. For example:

  • Terrorist organisations have spread videos of civilians being beheaded on digital platforms (BBC News, 2020).
  • Leslie Jones became the target of online harassment (Gillespie, 2018, p. 9).
Harmful content spreads on digital platforms
“avoiding-internet-scams1-caution-laptop.s600x600” by robertsrr08 is licensed under CC BY-NC-SA 2.0.

Platforms, the government and platform users are all responsible. But how?

Digital platforms, the government and platform users should be jointly responsible for preventing the spread of harmful content on the Internet, with digital platforms bearing the most significant responsibility.

  • Digital platforms must build better algorithms to prevent the spread of harmful content and improve their content moderation mechanisms.
  • The government exerts pressure on platform governance by formulating rules and laws for platform content review.
  • Platform users need to take conscious responsibility for maintaining the platform’s content and actively “mark” and report harmful content.
Improve the content moderation algorithm of the digital platform
“Innovation & Research Symposium Cisco and Ecole Polytechnique 9-10 April 2018 Artificial Intelligence & Cybersecurity” by Ecole Polytechnique/Paris/France is licensed under CC BY-SA 2.0.

 

How are digital platforms responsible?

Platforms should take primary responsibility, focusing on improving algorithms and content moderation mechanisms.

Major digital platforms
“Social Media Logos” by BrickinNick is licensed under CC BY-NC 2.0.

 

 

Digital platforms should take primary responsibility for preventing the spread of harmful content. Platforms need to change the algorithms that promote the spread of harmful content and improve their content moderation mechanisms. As more and more public discourse, cultural production and social activity moves onto the Internet, the platform’s supervision of harmful content becomes crucial (Gillespie, 2018, p. 254). First, platforms need to change unreasonable algorithms to prevent the further propagation of harmful content. An algorithm that takes popularity as its essential measure to some extent promotes the spread of hate, harassment, violence, pornography and other harmful content.

“I Got My Reddit Halloween Exchange Gift Today From Australia!!” by A.Currell is licensed under CC BY-NC 2.0.

For example, during the #Gamergate controversy, Reddit’s algorithm provided fertile soil for misogyny (Massanari, 2017). When misogynistic content appears on Reddit, users who hold misogynistic views like and share it, the algorithm recognises it as popular content, and it is pushed to even more users (Gillespie, 2018, p. 206). When designing their algorithms, platforms need to build in measures that find and counter harmful content in time, rather than letting an unreasonable algorithm encourage its spread for economic benefit (Gillespie, 2018, p. 206), as sketched below.
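To make the point concrete, here is a minimal Python sketch. It is not any platform’s real ranking code; the post fields, engagement weights and penalty value are illustrative assumptions. It shows how a purely popularity-driven score amplifies an engaging but harmful post, and how demoting posts by a harm score from a moderation classifier changes the ordering.

```python
# A minimal sketch, not any platform's real ranking code: the Post fields,
# the engagement weights and the harm penalty are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    harm_score: float  # 0.0 (benign) to 1.0 (clearly harmful), from a moderation classifier

def popularity_only(post: Post) -> float:
    # Popularity as the essential measure: engagement is all that counts.
    return post.likes + 2 * post.shares

def popularity_with_harm_penalty(post: Post, penalty: float = 100.0) -> float:
    # Same engagement signal, but content the classifier considers harmful is demoted.
    return popularity_only(post) - penalty * post.harm_score

feed = [
    Post("harassing pile-on", likes=120, shares=40, harm_score=0.9),
    Post("harmless meme", likes=90, shares=30, harm_score=0.05),
]

print(max(feed, key=popularity_only).text)               # harassing pile-on
print(max(feed, key=popularity_with_harm_penalty).text)  # harmless meme
```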

“Una Code” by Shinji Nikaru is licensed under CC BY-NC 2.0.

Second, before users officially publish content, platforms can use algorithmic prompts to urge users to moderate their own content. The platform uses AI to evaluate whether the content a user is about to post may violate the community’s rules on harassment, violence, pornography, hatred and other harmful content (Paul & Reininger, 2021). If the draft is identified as containing harmful material, the user is given a “warning” before publishing. The platform needs to show users a prompt like the following (Paul & Reininger, 2021):

 

“Warning: The algorithm has marked this content as content that may violate community norms. Select ‘Confirm’ to post anyway or ‘Edit’ to modify it” (Paul & Reininger, 2021).

After the Christchurch mosque attack in New Zealand, Facebook’s algorithmic checks blocked about 80% of attempted uploads of the terrorist video (Gorwa et al., 2020). This approach reduces harmful content as far as possible before it is ever published.
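A rough sketch of this pre-publication check is shown below. The stand-in classifier, the 0.8 threshold and the prompt wording are assumptions made for illustration; this is not Facebook’s or any other platform’s actual implementation.

```python
# Rough sketch of the "warn before you post" flow described above.
# The stand-in classifier, threshold and prompt wording are illustrative
# assumptions, not any platform's real implementation.

def toxicity_score(text: str) -> float:
    """Stand-in for a trained moderation model returning a 0.0-1.0 harm score."""
    blocklist = {"kill yourself", "worthless idiot"}  # toy rule for the example
    return 1.0 if any(phrase in text.lower() for phrase in blocklist) else 0.1

def submit_post(draft: str, threshold: float = 0.8) -> str:
    if toxicity_score(draft) < threshold:
        return "published"
    # The draft looks harmful: warn the user and let them confirm or edit.
    choice = input(
        "Warning: the algorithm has marked this content as possibly violating "
        "community norms. Type 'confirm' to post anyway or 'edit' to revise: "
    ).strip().lower()
    if choice == "confirm":
        return "published (queued for moderator review)"
    return "returned to editor"

if __name__ == "__main__":
    print(submit_post("You are a worthless idiot"))
```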

Manual content moderation by YouTube
“YouTube down” by Andrew is licensed under CC BY-SA 2.0.

At the same time, it is also important for platforms to take measures against harmful content that has already been published. Platforms need to employ more human moderators and better detection software to improve their content moderation mechanisms. By increasing its number of moderators, YouTube was able to manually review nearly two million videos containing harmful content (Wojcicki, 2017). Digital platforms should also use these moderators to help train machine-learning systems to identify similar videos in the future (Wojcicki, 2017). The Perspective project launched by Google and Jigsaw uses human experience to help machine learning detect inappropriate content more accurately (Hosseini et al., 2017). By employing more moderators and using machine-learning technology, digital platforms can help prevent the spread of harmful content (Gillespie, 2018, p. 198).
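As a simple illustration of how human moderation decisions can feed machine learning, the sketch below trains a toy text classifier on labels produced by human reviewers. The tiny dataset and the model choice are assumptions for the example; this is not YouTube’s or Jigsaw’s actual pipeline.

```python
# Human moderators' decisions become labelled training data for a simple
# text classifier; new content can then be scored automatically and routed
# to human review when the predicted risk is high. Dataset and model choice
# are illustrative assumptions only.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Labels from human reviewers: 1 = removed as harmful, 0 = kept.
texts = [
    "I will find you and hurt you",
    "you people are subhuman filth",
    "great video, thanks for sharing",
    "does anyone have the recipe from this clip?",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score a new comment; high-probability cases go to a human moderator.
prob_harmful = model.predict_proba(["you are subhuman and I will hurt you"])[0][1]
print(f"estimated probability of harm: {prob_harmful:.2f}")
```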

How is the government responsible?

The government formulates rules and laws to assist in supervising the platforms.

The government participates in the legislative supervision of digital platforms.
“Cyber Security at MoD” by Defence Images is licensed under CC BY-NC 2.0.

 

The government needs to assist platforms in taking responsibility for preventing the spread of harmful content. The government must formulate provisions for controlling the spread of harmful content on digital platforms and impose fines on platforms that violate them. Because of their commercial nature, digital platforms are essentially driven by economic interests (Gillespie, 2018). Different platforms set different rules for content moderation, and to a large extent they set these rules themselves (Gillespie, 2018, p. 255). Platforms want as many users as possible to stay on the platform (Gillespie, 2018, p. 255), which limits how far self-supervision and self-moderation can go. Because of this limitation, the government needs to exert regulatory pressure on platforms by formulating rules and regulations on harmful Internet content (Medzini, 2022). Platforms that fail to comply with the relevant rules or laws should face corresponding punishment. For example, the German government introduced the NetzDG (Medzini, 2022). Under it, platforms must develop and follow effective and transparent procedures for reviewing harmful content and must remove harmful posts within a specified time (Medzini, 2022). Platforms must also provide training and psychological counselling services for content reviewers (Medzini, 2022). Companies that do not comply face fines of up to 5 million euros (Medzini, 2022). Government rules and laws on the dissemination of harmful content on digital platforms are an excellent supplement to platforms’ self-governance and provide third-party supervision.

How are platform users responsible?

Platform users should exercise their right to supervise by “marking” and reporting harmful content, consciously maintaining the platform’s content.

“Public Media” by Free Press Pics is licensed under CC BY-NC-SA 2.0.

As the main participants on digital platforms, platform users should also be responsible for preventing the spread of harmful content. Beyond not publishing or spreading harmful content themselves, users should spontaneously maintain the platform’s order, consciously exercise their right of supervision, and “mark” and report harmful content they encounter. Twitter and other digital platforms allow users to report harmful content and provide a text box in which to describe the report in detail (Crawford & Gillespie, 2016). However, according to one survey, fewer than one in five people take action to report potentially harmful content when they see it (Doyle, 2022). As YouTube has noted, more than 100 hours of video are uploaded every minute (Crawford & Gillespie, 2016). A mechanism in which users actively mark and report harmful content helps the platform cope with this vast and complex moderation workload (Crawford & Gillespie, 2016). Although users occasionally “mark” legitimate content out of mischief, these cases are a minority. Users must consciously avoid becoming publishers, participants or disseminators of harmful content, and must actively exercise the supervision rights the platform grants them. By consciously and actively “marking” and reporting harmful content, users can become part of a ubiquitous governance mechanism (Crawford & Gillespie, 2016).
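As an illustration of how such user reports can feed a moderation queue, here is a small sketch. The flag threshold and data structures are assumptions for the example, not Twitter’s or YouTube’s actual reporting pipeline.

```python
# Illustrative sketch of a user-flagging mechanism: reports accumulate per
# post, and once enough users have flagged it the post is escalated to a
# human moderator. The threshold and structures are assumptions only.

from collections import defaultdict

FLAG_THRESHOLD = 3                 # flags needed before human review
flags: dict[str, list[str]] = defaultdict(list)
review_queue: list[str] = []

def flag_post(post_id: str, reason: str) -> None:
    """Record one user's report and escalate the post once enough arrive."""
    flags[post_id].append(reason)
    if len(flags[post_id]) == FLAG_THRESHOLD:
        review_queue.append(post_id)

flag_post("post-42", "targeted harassment")
flag_post("post-42", "hate speech")
flag_post("post-42", "threatening language")

print(review_queue)                # ['post-42'] -> waiting for a human moderator
```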

Mechanism of joint responsibility and governance

In conclusion, while the development of digital platforms has given people more freedom of expression, problems have also emerged (Hosseini et al., 2017). Harmful content spreads on digital platforms, and platforms, the government and platform users should all respond. Platforms need to build algorithms that prevent harmful content from spreading and improve their content moderation mechanisms. The government should apply the rules and laws it formulates on platform content moderation to put pressure on platforms. Platform users should consciously supervise, “mark” and report harmful Internet content and should not participate in publishing or disseminating it. These three central bodies collaborate to form a shared mechanism of responsibility and governance.

 

References:

 

BBC News. (2020). Christchurch mosque attack: Brenton Tarrant sentenced to life without parole. https://www.bbc.com/news/world-asia-53919624

 

Crawford, K., & Gillespie, T. (2016). What is a flag for? Social media reporting tools and the vocabulary of complaint. New Media & Society, 18(3), 410–428. https://doi.org/10.1177/1461444814543163

 

Doyle, C. (2022, July 1). One in six young people flag harmful content online. Research Live. https://www.research-live.com/article/news/one-in-six-young-people-flag-harmful-content-online/id/5100612

 

Digwatch. (2022). Freedom of expression. Digwatch. https://dig.watch/topics/freedom-expression

 

Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 7(1). https://doi.org/10.1177/2053951719897945

 

Gillespie, T. (2018). What platforms are, and what they should be. In Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 197–215). Yale University Press. https://doi-org.ezproxy.library.sydney.edu.au/10.12987/9780300235029

 

Gillespie, T. (2018). Regulation of and by platforms. In The SAGE handbook of social media. SAGE Reference. https://doi.org/10.4135/9781473984066

 

Gillespie, T. (2018). All platforms moderate. In Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 1–23). Yale University Press. https://doi.org/10.12987/9780300235029

 

Hosseini, H., Kannan, S., Zhang, B., & Poovendran, R. (2017). Deceiving Google’s Perspective API built for detecting toxic comments. arXiv. http://arxiv.org/abs/1702.08138

 

Medzini, R. (2022). Enhanced self-regulation: The case of Facebook’s content governance. New Media & Society, 24(10), 2227–2251. https://doi.org/10.1177/1461444821989352

 

Massanari, A. (2017). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. https://doi.org/10.1177/1461444815608807

 

Paul, C., & Reininger, H. (2021, July 20). Platforms should use algorithms to help users help themselves. Carnegie Endowment for International Peace. https://carnegieendowment.org/2021/07/20/platforms-should-use-algorithms-to-help-users-help-themselves-pub-84994

 

Wojcicki, S. (2017). Expanding our work against abuse of our platform. YouTube Official Blog. https://blog.youtube/news-and-events/expanding-our-work-against-abuse-of-our/