The Internet was created in the 1960s, primarily at the behest of the military, for the purpose of communication between computers (Living Internet, n.d.). As time progressed and private investment helped the Internet grow, it was no longer limited to government and military use but began to explode worldwide. This rapid growth has been accompanied by a proliferation of digital platforms for social media, which anyone can now use to find information, express opinions, and explore new knowledge. Digital platforms are meant to bring together people who share the same interests and hobbies and to let users exchange their views and opinions. Platforms aspire to a peaceful, enjoyable, and energetic atmosphere for their users. However, with the light comes darkness: violent content, harassment, hate speech, and pornography are rampant on these otherwise vibrant platforms.
Because of the freedom and anonymity of digital platforms, some users promote hateful and violent content on them, and because digital platforms are more of a gray area than the real world, laws and enforcement mechanisms are not as effective as they could be. This, coupled with the fact that users in different countries are subject to different legal restrictions, means platforms struggle to regulate the content of users' speech. Hate content is not confined to a single country; users post it on platforms essentially all over the world. And as more and more people use social media to communicate, experts say, some racist, discriminatory, misogynistic, and homophobic people are finding a niche in this free zone to incite the susceptible to commit violence. Social media platforms also give violent actors the opportunity to publicize their actions (Laub, 2019).
The proliferation of hate speech, violent content, and pornography is not only detrimental to the physical and mental health of users and the development of the platforms but also contributes to growing violence in society. One American mass shooter, for example, described a "self-learning process" on social media that led him to believe that white supremacy at times requires violent action to eliminate non-white races; his story circulated widely among racist communities, and his actions were publicized (Laub, 2019). In India, since the Hindu nationalist Bharatiya Janata Party came to power in 2014, much of the increase in mob and communal violence such as lynchings has been attributed to rumors circulating in WhatsApp groups (Laub, 2019). In Myanmar, military leaders and Buddhist nationalists used social media to spread rumors about the Rohingya Muslim minority during the ethnic cleansing campaign, distorting the fact that the Rohingya make up only about 2 percent of the population into claims that they would replace the Buddhist majority. A UN fact-finding mission concluded that those spreading hate speech were using Facebook as their most effective tool for controlling people's minds, and that for most users in the country Facebook effectively was the Internet (Laub, 2019). It is thus clear that violent content and hate speech on social media platforms deeply threaten the stability and order of society and endanger the lives of ordinary people. People who are brainwashed by violent content on social media become tools of those who disseminate it, carrying out the work of harming society. Some violent content is even welcomed by governments, helping them eliminate competitors and build their own prestige. Violent content has become a tool that can harm people at will.
A decade or so ago, platforms only had to answer to their own users when incidents of harassment, violence, and phishing occurred (Dibbell, 1994). But as social media has grown, platforms face more and more such issues, which pose both logistical and public-relations challenges (Gillespie, 2020). It is the platforms' responsibility to address the spread of negative content on the web. Social media companies have mainly adopted artificial intelligence as their solution. This seems to be the most practical approach at the moment, but AI has drawbacks. Although AI can identify some hate speech and ease the burden of human moderation, it is not capable of identifying and quickly responding to all negative content. Essentially all mainstream social media companies want their AI to identify hate speech, violent content, pornography, bullying, and harassment correctly, fairly, and as quickly as possible, eliminating the negative content before users see it (Gillespie, 2020). Humans can clearly identify negative content such as violence and harassment, but they lack AI's processing speed, and each person understands the same words differently. During the 2020 coronavirus pandemic, many mainstream social media platforms were forced to shift almost entirely to automated AI moderation as human moderators were sent home to quarantine (Gillespie, 2020). Human moderators are susceptible to external factors that can cause them to miss negative content, and while AI is not yet as effective as platforms would like, it remains the best available solution in the current situation.
In addition to the artificial intelligence often used to moderate social media, the United Nations has developed an action plan against hate content and defined its scope. Xenophobia, racism, and intolerance are occurring globally, and social media is being used as a tool to publish inflammatory content and language for political gain (United Nations, n.d.). In May 2020, in an effort to prevent hate content on social media from undermining the stability and security of societies, the United Nations and UNESCO discussed why users' emotions were running unchecked and why people were behaving in incomprehensible ways in the face of the coronavirus pandemic. In June of the same year, a webinar was organized to explore how education can support youth participation in the online world (United Nations, n.d.). As early as 2011, human rights experts were sent to various regions to discuss and seek solutions to problems of hatred and racism. The UN is calling for, and working toward, reducing and stopping hate speech among the younger generation, who are the mainstay of social media platforms.
States face a real challenge in regulating the platforms: courts must weigh regulation against the public's freedom of expression, and the debate about freedom on the Internet has been going on for a long time. Different countries handle it in different ways under different laws and regulations, but it remains a challenge given the rapid growth of platforms and today's explosion of information (Laub, 2019). A 1996 U.S. law left magazines, television, and other publishers subject to prosecution for publishing knowingly false information but exempted social media platforms from liability for content their users post. This has given social media in the United States wide latitude, and each platform sets its own rules and standards (Laub, 2019). Although the law grants everyone the right to free speech, that does not mean everyone has the right to harm others at will. The 1948 UN Genocide Convention criminalized incitement to genocide, and the International Criminal Tribunal for Rwanda convicted two media executives for their role in the Rwandan genocide (Laub, 2019). While social platforms grow rapidly with technology and time, laws, regulations, and state controls are following suit to stop the spread of negative speech. In the future, digital platforms will no longer be outside the law, and everyone will have to answer for their words and actions.
Dibbell, J. (1994). A Rape in Cyberspace; or, How an Evil Clown, a Haitian Trickster
Spirit, Two Wizards, and a Cast of Dozens Turned a Database into a Society. In M. Dery (Ed.), Flame Wars: The Discourse of Cyberculture (pp. 237-262). Duke University Press. https://doi.org/10.1515/9780822396765-012
Gillespie, T. (2020). Content moderation, AI, and the question of scale. Big Data &
Society, 7(2). https://doi.org/10.1177/2053951720943234
Laub, Z. (2019, June 7). Hate Speech on Social Media: Global Comparisons. Council
on Foreign Relations. https://www.cfr.org/backgrounder/hate-speech-social-media-global-comparisons
Living Internet. (n.d.). IPTO – Information Processing Techniques Office.
United Nations. (n.d.). Hate Speech and Incitement to Hatred or Violence.