“HATE speech” by scottmontreal is licensed under CC BY-NC 2.0.
In the last few years, hateful and harmful content has spread widely across social media platforms. This content takes many forms, including bullying, harassment, violent material, hate speech, and pornography. Today there are more than 4.7 billion users on various social media platforms, and their widespread use around the world is fueling the spread of hateful and harmful content (Banko et al., 2020). In a world where concerns about mental health, genocide, mass shootings, and terrorism are becoming rampant, creating a tolerant society is perhaps today’s greatest need. The biggest question is who is responsible for managing and ending the bile propagated on the various social media platforms.
One of the major negative impacts of the poor regulation of the internet and digital platforms has been the increase in hateful content, a force with visible consequences for society (Mathew et al., 2019). Some forms of online hate speech have turned into violent acts, such as the Rohingya genocide, anti-Muslim violence in Sri Lanka, and the synagogue shooting in Pittsburgh, among others. Many of these acts arise from the failure of the people tasked with keeping harmful content off these platforms. So how does harmful content find its way onto social media platforms?
The casual approach taken by social media companies has been the major factor enabling the easy spread of hateful, bullying, pornographic, and otherwise harmful content across digital platforms (Yar, 2018). Today anyone can create an account anonymously and post any type of content without revealing their identity. Studies show that much of the hateful content and bullying is propagated by individuals hiding behind fake or new accounts. Many people misuse these platforms because responses to such hateful content are slow.
The owners of the digital platforms should be the first to stop hateful content. Companies such as Facebook have found themselves at loggerheads with various governments and pressure groups over their failure to regulate and control harmful content on their platforms (Ciftci et al., 2017). Given the ease of opening and operating an account, digital and social media companies should take the initiative to end harmful content.
“I hate you COVID” by Matthew Almon Roth is licensed under CC BY-NC 2.0.
In recent times, social media companies have been accused of laxity in keeping harmful content off their platforms. In most cases, they tolerate racist, homophobic, and hateful content in order to protect their business. One of the biggest battlefronts has been the need to strike a balance between free speech and hate speech. Despite this tension, social media companies should adopt measures to ensure that harmful content does not reach the users on their platforms.
The biggest responsibility for curbing harmful content on the internet lies with digital companies, because theirs are the platforms where the content is freely shared and they have the tools to tackle it firsthand. According to Kalsnes and Ihlebæk (2021), various social media platforms such as Facebook and Twitter have in recent years been accused of laxity in tackling harmful content. This is despite the various tools at their disposal that could effectively stop such content from being published on their platforms. Tools such as monitoring and censoring have proved effective in controlling harmful content.
To demonstrate that digital platform owners can implement effective strategies to curb harmful content, consider the example of Germany. Facebook had consistently maintained that it had put in place the strategies and personnel needed to monitor its platform in the country for potential crimes. Yet when Germany passed hate speech laws that opened the door to legislative action against non-compliant companies, Facebook quickly changed its stance: it rapidly began hiring more moderation staff and rolling out technical features it had previously declined to implement. Similarly, in Myanmar, serious questions can be raised about the company’s failure to act against the misuse of its platform, which fueled one of the greatest atrocities in recent times. Why did the company not immediately hire moderators to stop its platform from spreading the hate that became a recipe for the conflict?
“Hate & Anger” by Pro-Zak is licensed under CC BY-NC 2.0.
The above two illustrations show that the biggest responsibility for tackling harmful content lies in the hands of social media companies. Moderating content by flagging and deleting harmful material should be a core responsibility, and their actions should be based on the accepted ethical and legal principles of the internet. In recent times, social media platforms such as Twitter and Facebook have come up with strategies to counter the spread of false information, and the incorporation of fact-checking is a step in the right direction (Bakalis & Hornle, 2021). Digital platform owners should be able to recognize, monitor, and analyze trends in harmful content and take the necessary actions before things get out of hand. Hiring moderators and incorporating technology such as AI can help monitor these trends and alert the relevant authorities to take action. Finally, digital platforms should offer security features that make it easy to authenticate users: all social media users should have a verifiable identity, making it easier to isolate offenders and stop them before they commit a crime.
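To make the flag-and-review workflow concrete, the sketch below shows, in Python, how an automated filter might route suspect posts to human moderators. It is purely illustrative: the Post type, the blocklist, and the scoring heuristic are hypothetical stand-ins for the trained classifiers real platforms use, not a description of any company’s actual system.

```python
# Illustrative only: a toy moderation filter. The names below
# (Post, BLOCKLIST, toxicity_score, flag_post) are hypothetical
# and do not describe any real platform's implementation.
from dataclasses import dataclass

BLOCKLIST = {"slur1", "slur2"}  # placeholder terms; real lists are carefully curated


@dataclass
class Post:
    author_id: str
    text: str


def toxicity_score(text: str) -> float:
    """Crude stand-in for an ML toxicity classifier: the fraction
    of words in the post that appear on the blocklist."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(1 for w in words if w in BLOCKLIST) / len(words)


def flag_post(post: Post, threshold: float = 0.1) -> bool:
    """Route a post to the human review queue if its score crosses
    the threshold; below it, the post is published normally."""
    return toxicity_score(post.text) >= threshold


# Posts above the threshold are queued for human moderators,
# mirroring the flag-then-review approach described above.
posts = [Post("u1", "hello world"), Post("u2", "slur1 you slur2")]
review_queue = [p for p in posts if flag_post(p)]
```

A threshold-based design like this reflects the hybrid approach described above: automation surfaces likely violations at scale, while the final removal decision stays with human moderators.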
“Emotion” by vinylmeister is licensed under CC BY-NC 2.0.
Leaving the responsibility for handling harmful content on digital platforms entirely to the individual companies is a tall order, because digital platforms are driven by profit and most will trade free speech concerns for profit. Governments should step in and regulate the various digital platforms (Müller & Schwarz, 2021), although such regulation raises issues of its own. Free speech is a value protected by many democratic countries around the world; however, the misuse of digital platforms poses dangers that outweigh the free speech concerns raised by regulation. The anonymity of social media users makes it hard for society to penalize the individuals who post harmful and hateful content, and the lack of punitive measures across digital platforms encourages these vices. In most cases, transgressors easily find other people with similar opinions, and the lack of control unleashes chaos on innocent people.
Security is a key component of a free society. Terrorist, hateful, and otherwise harmful content regularly finds its way onto social media platforms, putting millions of people in danger. The enormous size of these platforms, and the fact that some members of society use them to harm others, is one of the reasons why governments should implement regulations. Without government regulation, humanity is effectively surrendering its freedoms and values to uncontrollable and unpredictable dark forces (Patoria, 2022), many of which have bad intentions for society.
Through legislation, governments can impose strategies to tame the presence of harmful content on these platforms. Setting the rules would compel digital platform companies to monitor and censor harmful content. The case of Facebook in Germany shows that there are hardly any technical challenges holding the company back from acquiring enough staff to review content (Ahn et al., 2022). China’s government has demonstrated that it is possible to moderate online content at scale and thereby tackle harmful content; the successful implementation of its moderation program indicates that governments can achieve moderation across entire digital platforms.
References
Yar, M. (2018). A failure to regulate? The demands and dilemmas of tackling illegal content and behavior on social media. International Journal of Cybersecurity Intelligence & Cybercrime, 1(1), 5-20.
Siapera, E., & Viejo-Otero, P. (2021). Governing hate: Facebook and digital racism. Television & New Media, 22(2), 112-130.
Ciftci, T., Gashi, L., Hoffmann, R., Bahr, D., Ilhan, A., & Fietkiewicz, K. (2017, July). Hate speech on Facebook. In Proceedings of the 4th European Conference on Social Media, ECSM 2017 (pp. 425-433).
Kalsnes, B., & Ihlebæk, K. A. (2021). Hiding hate speech: Political moderation on Facebook. Media, Culture & Society, 43(2), 326-342.
Bakalis, C., & Hornle, J. (2021). The role of social media companies in the regulation of online hate speech. In Studies in Law, Politics, and Society. Emerald Publishing Limited.
Müller, K., & Schwarz, C. (2021). Fanning the flames of hate: Social media and hate crime. Journal of the European Economic Association, 19(4), 2131-2167.
Patoria, S. (2022). Right to free hate speech: A Dworkinian analysis. Indian Journal of Law and Legal Research, 4(3), 1-9.
Ahn, S., Baik, J., & Krause, C. S. (2022). Splintering and centralizing platform governance: How Facebook adapted its content moderation practices to the political and legal contexts in the United States, Germany, and South Korea. Information, Communication & Society, 1-20.
Mathew, B., Dutt, R., Goyal, P., & Mukherjee, A. (2019, June). Spread of hate speech in online social media. In Proceedings of the 10th ACM Conference on Web Science (pp. 173-182).
Banko, M., MacKeen, B., & Ray, L. (2020, November). A unified taxonomy of harmful content. In Proceedings of the Fourth Workshop on Online Abuse and Harms (pp. 125-137).