What is social media and what is happening on it?
Social media refers to websites and applications on which people can communicate, engage, and share their ideas and messages (Carr & Hayes, 2015). Currently, people use it to maintain relationships with family and friends, while some enterprises use social media as a tool to sell and promote their products and attract more customers. Utilization of social media varies based on sociocultural background, age, and other psychological factors. People talk online, share thoughts and images, and feel that their interactions with the groups they have joined satisfy their need for a sense of community. Although social media offers a secure platform for people to express their thoughts openly or anonymously, it is also fertile ground for violent beliefs and actions (Miller et al., 2016). Because social media is an interactive medium with a large user base, the problem has grown and become more complicated.
The negative effects of the spread of harmful content on social media
Statistics show that teens are more likely than adults to experience cyberbullying:
- 59% of teens who use the internet have encountered cyberbullying, versus 33% of adults.
- 15.7% of high school students were cyberbullied in 2018, according to the CDC.
- 37% of children have been cyberbullied, and 30% have been repeatedly targeted.
- 34% of US youth have been cyberbullied, and 68% report mental health difficulties.
Figure 1. Source: https://www.datasciencecentral.com/social-media-cyber-bullying-and-need-for-content-moderation/
The use of digital technology improves people’s ability to communicate with one another, allowing more voices from a variety of participants to be heard and engaging more and more people on digital platforms to exchange information (Craig et al., 2020). However, it also enables certain harmful material, such as pornographic content, cyberbullying, and violent content, to be spread in a manner that is described as “viral.” The anonymous dissemination of harmful material, such as cyberbullying, has been shown to have a far more damaging effect on adolescents than traditional forms of bullying in schools.
Figure 2. Source: https://gdhr.wa.gov.au/-/cyberworld-cyber-bullying
A poll conducted by the United Nations in 2019 across 31 nations found that 33% of young people had been targets of cyberbullying, and 20% of those individuals had missed school because of the bullying. Cyberbullying may irreparably damage teenagers’ physical and mental health: it not only increases their risk of mental health problems such as depression, anxiety, and loneliness but also contributes to a significant number of teen deaths by suicide (Amedie, 2015). Young people’s identities are not yet fully established, and they still have a great deal to learn about the world. People, especially youth, are particularly concerned about being protected with respect to certain characteristics, such as culture, gender, religion, and disability. Since in the majority of cases people identify strongly with others who share the same identity, they are very concerned about these protected characteristics (Xu et al., 2012). In addition, these characteristics are often used as instruments of cyberbullying. People tend to develop a deep-seated hatred of themselves when they are bullied or harassed online. For instance, individuals who are subjected to racial harassment or bullying online are more likely to believe that their skin color is a problem and to experiment with changing their skin color in an effort to stop the harassment and bullying.
Why should social media companies be responsible for stopping the spread?
As a public platform where individuals may exercise their right to freedom of speech, social media contributes to the protection of that right. Meanwhile, the rapid development and impact of digital platforms have given a tremendous boost to cultural interaction and economic growth in contemporary society. However, because digital platforms contain enormous amounts of data and are difficult to control and manage, some hateful people use them to harm others (Rambaran, Dijkstra & Veenstra, 2020). Recommendation algorithms help users find content that interests them, and haters exploit this advantage to boost the visibility of their posts so that more people see harmful content on social media; in this sense, the platforms themselves sometimes contribute to the occurrence of hate speech. Although digital platform companies do not actively encourage hate groups to produce such speech, platforms like Facebook, YouTube, Twitter, and Instagram are key tools for spreading it. Therefore, they should be responsible for managing and eliminating it. By continuously revising their standards and reviewing user-posted material to remove hate speech, digital platform firms can mitigate its harmful effects. However, platforms have limited censorship capabilities, and it is hard for them to pre-screen all new material given the rate at which users publish content and the vast amount of information that flows through them (Field & Chelliah, 2012). Many firms have taken measures to prevent hate speech from spreading on their platforms, but there is still a long way to go, and more effort is needed, to balance preserving freedom of expression against reducing hate speech.
Governments should develop strategies to protect people
The damage caused by the spread of online hate speech is not only a network problem but, more importantly, a social problem. Besides digital platform companies, governments should also formulate appropriate legislation and policies to control and govern online hate messages so that the environment of digital platforms can be clean and safe (Carter, 2013). For example, during the conflict between Russia and Ukraine, many haters spread fake news and harmful messages on digital platforms, which can escalate violence. This shows that hate speech on social networks can have negative impacts that undermine national security and exacerbate international conflicts. Therefore, each country should make its own laws and regulations to limit the spread of online hate speech. For instance, since 2018 Germany has mandated that large social media platforms delete “manifestly illegal” content within 24 hours if it violates German law. National censorship enables a more precise definition of prohibited internet hate speech (Papakyriakopoulos et al., 2018). In addition to legislative restrictions and punishments for lawbreaking, fear of the repercussions that often accompany breaking the law may effectively deter the spread of hate speech.
Figure 3. Source: https://www.stopbullying.gov/resources/laws
Although governments apply different legal and policy standards to limiting hate speech, which may lead to some inconsistencies, governments of all countries have taken action at various levels to reduce the growth of online hate speech. In addition, with the help of governments, digital platform companies gain more authority to regulate users’ speech on their platforms according to laws and rules, so that governments and digital platform companies together can provide an appropriate environment for all users (Williams & Pearson, 2016). Finally, digital platforms have enabled hate speech to move beyond national borders and have globalized hate crimes, which makes them quite difficult for governments to control and manage. In practice, governments face not only gaps in coordination between different courts and law enforcement systems but also the high cost of enforcement. Controlling hate speech on a global scale therefore remains a major challenge at this moment, and preventing hate speech on digital platforms remains a long-term responsibility of governments.
References
Amedie, J. (2015). The impact of social media on society.
Carr, C. T., & Hayes, R. A. (2015). Social media: Defining, developing, and divining. Atlantic Journal of Communication, 23(1), 46-65.
Carter, M. A. (2013). Third party observers witnessing cyber bullying on social media sites. Procedia-Social and Behavioral Sciences, 84, 1296-1309.
Craig, W., Boniel-Nissim, M., King, N., Walsh, S. D., Boer, M., Donnelly, P. D., … & Pickett, W. (2020). Social media use and cyber-bullying: A cross-national analysis of young people in 42 countries. Journal of Adolescent Health, 66(6), S100-S108.
Field, J., & Chelliah, J. (2012). Social‐media misuse a ticking time‐bomb for employers: Robust policies and procedures needed to reduce the risks. Human Resource Management International Digest.
Miller, D., Sinanan, J., Wang, X., McDonald, T., Haynes, N., Costa, E., … & Nicolescu, R. (2016). How the world changed social media (p. 286). UCL Press.
Papakyriakopoulos, O., Hegelich, S., Shahrezaye, M., & Serrano, J. C. M. (2018). Social media and microtargeting: Political data processing and the consequences for Germany. Big Data & Society, 5(2), 205395171881184.
Rambaran, J. A., Dijkstra, J. K., & Veenstra, R. (2020). Bullying as a group process in childhood: A longitudinal social network analysis. Child Development, 91(4), 1336-1352.
Williams, M. L., & Pearson, O. (2016). Hate crime and bullying in the age of social media.
Xu, J. M., Jun, K. S., Zhu, X., & Bellmore, A. (2012, June). Learning from bullying traces in social media. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (pp. 656-666).