Bullying, harassment, violent content, hate speech, pornography, and other problematic content circulate on digital platforms. Who should be responsible for stopping the spread of this content, and how?

"Bombs in the Dark" by premasagar is licensed under CC BY-NC 2.0.

Why stop it

Violent extremism is now widely recognized as a major threat to global security, and terrorists use digital platforms to attract young people, who are less guarded and can help them achieve their aims (Amit et al., 2021). The number of image-based sexual abuse incidents has also risen dramatically in recent years, with damaging effects on society (Henry & Witt, 2021). Meanwhile, the posting and re-posting features of media platforms give harmful information a rapid path across the Internet (Webb, Burnap, et al., 2016), allowing such content to be copied and spread within a short period and to shape the public’s behavior and attitudes. For example, 82% of the militants apprehended in Bangladesh became involved in terrorist acts after being exposed to content on social media (Amit et al., 2021).

 

How it spreads

Radicals use social media and the Internet as a weapon to attract populations they can easily influence, and then to organize and plan terrorist attacks around the world (Amit et al., 2021). For example, they find information about terrorist violence through platforms such as Twitter and then spread it through YouTube to recruit people and raise money for their operations (Amit et al., 2021).

  1. In the 2019 Christchurch massacre, the perpetrator live-streamed the attack on Facebook using a helmet-mounted camera, and the footage was widely redistributed worldwide (Douek, 2019).
  2. In the same year in Sri Lanka, terrorist plotters used social media platforms to organize the Easter suicide bombings (Amit et al., 2021).

In other words, digital platforms such as Facebook become a bridge, or rather an intermediary, between terrorists and the general public: terrorists disseminate information through the platforms, and audiences receive it and then act on it.

 

Why it spreads

  • The large number of users of digital platforms.

Social media has been integrated into the daily lives of the public and has become an integral part of basic communication (Dragiewicz et al., 2018).

  1. The huge number of users and the progress of globalization allow information to spread more quickly and widely on digital platforms. The Global Digital Report 2019 states that 4.39 billion people worldwide use the Internet and 3.48 billion use social media (Amit et al., 2021).
  2. These users’ high usage rates provide the basis for the dissemination of information: 85% of teenagers use social media applications such as Facebook and YouTube every day, and on average 510,000 comments and 136,000 images are shared on Facebook every minute (Amit et al., 2021).

    “Me and my 542 bestest friends (on Facebook)” by tychay is licensed under CC BY-NC-ND 2.0.

However, many of these users are teenagers whose worldviews and values are not yet fully formed. They are therefore easily influenced by such messages and may develop prejudices or misunderstandings about particular religions and cultures, leading them to endorse the actions of violent actors or even spread violent content themselves (Amit et al., 2021).

  • The communication characteristics of digital platforms.

The main characteristics are freedom of speech, the ease of re-sharing content, and anonymity.

  1. One of the main drivers of social media platforms’ rapid growth is that they promote freedom of speech: the public can express themselves on the platforms with relatively few restrictions.
  2. Because content on social platforms can be reposted by other users (Webb, Jirotka, et al., 2016), posts made by radicals can quickly spread across the Internet.
  3. Anonymous registration allows users to create many accounts (Dragiewicz et al., 2018), and this, combined with anonymity measures such as proxy servers, makes terrorist messages difficult to trace (Henry & Witt, 2021). As a result, radicals can post violent messages without fear of having their identities exposed, and even if one account is censored, many others can post the content again.
  • The economic benefit platforms derive from traffic.

Although social platforms restrict the posting of violent content, they sometimes invoke other, seemingly positive, justifications for keeping it up, which increases traffic to the platform and converts into revenue for the platform.

  1. Facebook, for example, keeps violent posts on the network on the grounds of public interest, especially when the content is closely tied to the news of the moment (Ibrahim, 2017), and it does not have to take responsibility for this in any real sense (Douek, 2019).

 

Who should stop it and how

  • Government 
  1. Create laws that define the responsibilities of platforms (Dragiewicz et al., 2018) and impose legal penalties on the person who posted the content or on the platform hosting it. Even though violent posts are uploaded by other people, the platform still bears responsibility, because it is the one providing the venue.
  2. Establish strong online monitoring mechanisms to find violent content circulating on digital platforms and bring it under control.

For example, Australian law passed in 2019 states that platform service providers who fail to remove violent content may be prosecuted (Henry & Witt, 2021), while German law sets deadlines for removing illegal content and creates reporting obligations for platforms (Douek, 2019).

“Delete!” by Mrs TeePot is licensed under CC BY-NC-SA 2.0.
  • Social Media Platforms
  1. Enhance the filtering and classification capabilities of their own software. When radicals post violent or hateful content, platforms need to identify and classify it quickly and accurately, then suspend the user’s account and halt the post’s dissemination (Amit et al., 2021); a simplified sketch of this flow follows the list below. For example, YouTube removed 2.8 million channel accounts and 8.29 million videos in 2019, 6.37 million of which were automatically identified and flagged by its systems (Douek, 2019).
  2. Set community rules that let users know which content is allowed and the consequences of violating the rules. For example, Tumblr states that if a user posts pornographic content, it will restrict the user’s use of the platform or even permanently suspend the account (Henry & Witt, 2021).
  3. Set up user reporting functions. Pornhub, for example, allows users to flag harassing videos, which the platform then takes down (Henry & Witt, 2021). Because it takes time for a platform to identify harmful content on its own, drawing on the crowd lets such content be identified more efficiently.
  • Individuals
  1. Equip the public with basic knowledge about violent and hateful content on digital platforms (Amit et al., 2021). Once users themselves see such messages as harmful, they are less likely to spread them or take part in them, which reduces the circulation of violent content on digital platforms.
  2. Make users understand the responsibilities they bear for their actions on digital platforms. For example, when users register an account on Instagram, they need to read and agree to the relevant usage rules (Webb, Jirotka, et al., 2016).
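To make the filtering step in point 1 under "Social Media Platforms" more concrete, here is a minimal, purely illustrative Python sketch of an automated flagging flow: scan an incoming post against a blocklist of phrases and suspend the posting account when a match is found. The blocklist, the Post structure, and the suspension step are hypothetical simplifications of my own; real platforms such as YouTube rely on machine-learning classifiers and human reviewers rather than simple keyword matching.

```python
# Toy illustration of an automated flagging pipeline (not any platform's real system).
from dataclasses import dataclass, field

# Hypothetical blocklist of phrases; real systems use trained classifiers, not keywords.
BLOCKLIST = {"attack plan", "join our cause", "bomb-making"}


@dataclass
class Post:
    user_id: str
    text: str


@dataclass
class ModerationResult:
    flagged: bool
    matched_terms: list = field(default_factory=list)


def review_post(post: Post) -> ModerationResult:
    """Flag a post if it contains any blocklisted phrase (case-insensitive)."""
    text = post.text.lower()
    matches = [term for term in BLOCKLIST if term in text]
    return ModerationResult(flagged=bool(matches), matched_terms=matches)


def moderate(post: Post, suspended_users: set) -> None:
    """Withhold the post and suspend the posting account if it is flagged."""
    result = review_post(post)
    if result.flagged:
        suspended_users.add(post.user_id)
        print(f"Post withheld; account {post.user_id} suspended "
              f"(matched: {result.matched_terms})")
    else:
        print("Post published.")


if __name__ == "__main__":
    suspended: set = set()
    moderate(Post("user123", "Here is our attack plan for Friday"), suspended)
    moderate(Post("user456", "Photos from my holiday"), suspended)
```

The point of the sketch is only the order of operations the essay describes, identify, classify, then suspend, rather than any particular detection technique.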

 

Limitations

  • Governments and social platforms have mostly focused on reducing the impact of violent content or prohibiting its distribution. Far less attention has been paid to the hidden causes of radicalization and to why radicals organize and plan terrorist acts (Amit et al., 2021).
“Breastfeeding on a park bench” by space-man is licensed under CC BY-NC-SA 2.0.
  • Social platforms have no precise answer to how violence, hate, nudity, and other content should be defined.

For example, before 2015, breastfeeding images on Facebook were categorized as nudity, which Facebook’s community standards did not allow to be posted, and such images were therefore taken down (Ibrahim, 2017).

However, this practice caused widespread public concern, because many people consider breastfeeding a natural and normal act of parenting that should not be labeled as nudity (Ibrahim, 2017). Although social media platforms identify and filter posts, the classification process is therefore sometimes controversial.

  • Social media can be a source of evidence for certain atrocities (Douek, 2019).

While the vast majority of violent and hateful content on digital platforms is radicalizing and harmful to public safety, a small portion may be the only evidence that certain vulnerable groups have to assert their rights; YouTube footage of human rights violations in Syria is one example (Douek, 2019). This is therefore one of the reasons such content cannot simply be banned outright from digital platforms.

In general, though, it is up to governments, social media platforms, and individuals together to stop the spread of violent, hateful, and other harmful content.

 

Reference List

Amit, S., Barua, L., & Kafy, A.-A. (2021). Countering violent extremism using social media and preventing implementable strategies for Bangladesh. Heliyon, 7(5), e07121. https://doi.org/10.1016/j.heliyon.2021.e07121

Douek, E. (2019, August 26). Australia’s “Abhorrent Violent Material” Law: Shouting “Nerd Harder” and Drowning Out Speech. Papers.ssrn.com. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3443220

Dragiewicz, M., Burgess, J., Matamoros-Fernández, A., Salter, M., Suzor, N. P., Woodlock, D., & Harris, B. (2018). Technology facilitated coercive control: domestic violence and the competing roles of digital media platforms. Feminist Media Studies, 18(4), 609–625. https://doi.org/10.1080/14680777.2018.1447341

Henry, N., & Witt, A. (2021). Governing Image-Based Sexual Abuse: Digital Platform Policies, Tools, and Practices. The Emerald International Handbook of Technology Facilitated Violence and Abuse, 749–768. https://doi.org/10.1108/978-1-83982-848-520211054

Ibrahim, Y. (2017). Facebook and the Napalm Girl: Reframing the Iconic as Pornographic. Social Media + Society, 3(4), 205630511774314. https://doi.org/10.1177/2056305117743140

Webb, H., Burnap, P., Procter, R., Rana, O., Stahl, B. C., Williams, M., Housley, W., Edwards, A., & Jirotka, M. (2016). Digital Wildfires: Propagation, Verification, Regulation, and Responsible Innovation. ACM Transactions on Information Systems, 34(3), 1–23. https://doi.org/10.1145/2893478

Webb, H., Jirotka, M., Stahl, B. C., Housley, W., Edwards, A., Williams, M., Procter, R., Rana, O., & Burnap, P. (2016). Digital wildfires: hyper-connectivity, havoc and a global ethos to govern social media. ACM SIGCAS Computers and Society, 45(3), 193–201. https://doi.org/10.1145/2874239.2874267