Abuse on the Internet: who is responsible for stopping the spread of harmful content?

 

“Bullying” by Yulissa Lanchi is licensed under CC BY-NC-ND 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc-nd/2.0/?ref=openverse.

Free speech is a precious right for all individuals, whether in the real world or on the Internet. However, the Internet’s decentralised model lets users interact more directly, and online injury can be just as direct. For example, “Am I pretty or ugly” was a popular YouTube trend in which many teenage girls asked the audience for feedback on their appearance. Much of that feedback was pornographic or abusive, leaving these girls depressed and trapped in cyberbullying (Solon, 2013). It is a warning to the public, because everyone online has a chance of becoming a victim. Stopping the spread of bullying, harassment, violence, hate and pornographic content is an issue that cannot be ignored. Because a large number of users and influential digital platforms are the two essential parts that form the Internet, both users and platforms should take responsibility for stopping the spread of problematic content such as bullying, harassment, violence and hate.

Online users

The decentralised model, which brings more diverse voices from around the world and gives every user the chance to be both publisher and audience, makes individual users an essential part of the network. However, a user’s right to free speech is no justification for publishing hateful, violent, pornographic or bullying content that harms other users. During Gamergate, a man posted a complaint about his ex-girlfriend, a game developer, which started an anti-feminist movement on digital platforms such as Twitter and Reddit. Pornography, hate speech, harassment and even threats of violence filled these platforms, and many educated women were hurt by the anti-feminists and left living in fear (Lee, 2016). Users have the right to express diverse opinions, even ones that differ from the majority view, but inappropriate content that harms others should not be published and spread on the public network. This harmful content was all published by users, not by the digital platforms or giant internet companies, so online users also have a responsibility to stop publishing and spreading it.

“Inspirations” by nightsavior is marked with CC0 1.0. To view the terms, visit https://creativecommons.org/publicdomain/zero/1.0/?ref=openverse.

Many users believe they have no way to stop harmful content circulating on a digital platform, so their efforts go unconsidered. However, users are the first to interact with harmful content, and they can influence the technological systems themselves. On Reddit, the promotion system is driven by users’ votes: the more votes a topic receives, the higher it appears on the platform. For example, when Jennifer Lawrence’s nude pictures were leaked on the subreddit “The Fappening”, more users clicked and voted for the subreddit, which kept the pornographic and sexually abusive material spreading and turned this destructive episode into a popular topic (Massanari, 2017). Conversely, when such sexual bullying and pornographic content appears on a digital platform, users can take responsibility by refusing to spread and discuss it and by reporting it to the platform. This effectively reduces the harm to victims of online pornography, violence and bullying and keeps the platform safe.
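The vote-driven promotion described above can be illustrated with a minimal sketch. The scoring rule and the example posts here are simplified assumptions for illustration, not Reddit’s actual ranking algorithm:

```python
# Minimal sketch of vote-driven promotion (an assumption, not Reddit's real algorithm).
# Whatever users vote up the most rises to the top of the front page,
# regardless of whether the content is harmful.
posts = [
    {"title": "cute cat pictures", "upvotes": 120, "downvotes": 10},
    {"title": "leaked celebrity photos", "upvotes": 9500, "downvotes": 300},
    {"title": "local news discussion", "upvotes": 40, "downvotes": 5},
]

def score(post):
    # Simplified net-vote score: more upvotes means higher placement.
    return post["upvotes"] - post["downvotes"]

# The front page simply lists the highest-scoring posts first,
# so heavily upvoted harmful content gains the most visibility.
front_page = sorted(posts, key=score, reverse=True)
print([p["title"] for p in front_page])
```

The sketch shows why user behaviour matters: if users decline to upvote (or actively report) the harmful post, its score drops and it loses prominence.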

However, individual users are deeply affected by the digital platform, and their ability to report inappropriate content is limited. Users taking responsibility on their own is therefore not an effective measure against bullying, harassment, violence, pornography and hate. The platform, which has the power to stop the spread of inappropriate content, has an unavoidable responsibility to do so.

“Web 2.0 Digitage 2012” by ocean.flynn is licensed under CC BY-NC-SA 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc-sa/2.0/?ref=openverse.

Digital Platform

Because governance is the central role of the digital platform, the platform is the most crucial actor in online moderation to stop the spread of bullying, harassment, violence and other inappropriate content. Google CEO Eric Schmidt argued that high technology on the Internet runs faster than government, so government regulation hinders the Internet’s innovative development (Zuboff, 2019). This implies that the platform can react faster than the government when harmful content appears. Moreover, in 1996 US communications law established a “safe harbour” allowing internet platforms to self-regulate their content (Flew, Martin & Suzor, 2019). Platforms therefore have no reason to shirk the responsibility of stopping bullying, harassment and other problematic content.

On the other side, stopping bullying, harassment, violence and hate is also a practical way for a digital platform to increase its economic profit. For example, Reddit’s technical system is heavily influenced by vote counts. A new user who first logged in to Reddit while Jennifer Lawrence’s nude photos were being leaked would have seen “The Fappening” spreading and discussing those photos. They might conclude that Reddit is a platform obsessed with discussing celebrity nude pictures without restriction, and choose not to use it in the future (Massanari, 2017).
Because of the decentralised model, every user is vital to a digital platform’s profit, so stopping the spread of bullying, harassment and violent content is not a way to control people’s speech or infringe the right to free speech. It is an adaptive approach for a platform that gains many diverse voices every day; it also provides a peaceful environment that attracts more users and increases economic profit. Stopping this harmful content is therefore beneficial for the digital platform itself.

What should the platform do to prevent this? Firstly, setting out basic rules is a common approach for a digital platform to moderate users and content. For example, the Twitter Rules state that violence, harassment and cyberbullying are not allowed on the platform, while also protecting users so they can share their views freely and safely. However, although Twitter has rules forbidding harmful and problematic content, many examples show that the harm persists. Allis Morris found that harassment of her moved from online into the real world, and she received many death threats. When Alex Runswick posted a statement on Twitter, her name led people to assume she was a man; the tweet left her facing sexual bullying, and anti-feminist comments such as ‘you should not be doing politics’, ‘you should be in the kitchen’ and ‘go make me a cup of tea love’ flooded her replies (Valenti, 2018).

All these real-world examples show that rules alone cannot forbid such violence and bullying; technology and moderation labour must also be introduced to stop the spread of problematic content. Because white middle-class men dominate the design of network systems, they sometimes ignore female and minority perspectives (Gillespie, 2018). So, when building the technology system, workers from all groups should be involved, such as gay people, feminists, Black people and others, to build an evaluation system that stops harassment, sexual or racial bullying and other problematic content on the digital platform. When moderation decisions require human judgement, content moderators must also be brought in to analyse whether content is harmful. They are an indispensable part of the moderation process, because they face violent and horrifying content directly, and their judgements are more nuanced than those of the technology system. Moderators are crucial to keeping bullying, harassing and violent content off the digital platform, keeping the digital environment clean, and improving the platform’s brand to increase economic profit (Chotiner, 2019). Protecting moderators’ interests, such as increasing their salaries, is an innovative way to decrease harmful and problematic content.

Conclusion

In conclusion, online users and digital platforms are the two essential parties in reducing and stopping the spread of harmful content on the network, and both should take measures and responsibility to protect the digital environment’s peace and safety. Stopping the spread of bullying, harassment, violence, hate and pornographic content is a long process, and individual users and platforms must keep searching for more effective ways to solve this severe problem at its root. It will take time, technological development and users’ efforts.

References

Gillespie, T. (2018). All platforms moderate. In Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 1–23). Yale University Press. https://doi.org/10.12987/9780300235029

Chotiner, I. (2019). The underworld of online content moderation. The New Yorker. https://www.newyorker.com/news/q-and-a/the-underworld-of-online-content-moderation

Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50. https://doi.org/10.1386/jdmp.10.1.33_1

Lee, M. (2016). What Gamergate should have taught us about the ‘alt-right’. The Guardian. https://www.theguardian.com/technology/2016/dec/01/gamergate-alt-right-hate-trump

Massanari, A. (2017). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. https://doi.org/10.1177/1461444815608807

Solon, O. (2013). Am I pretty or ugly? Louise Orwin explores this YouTube phenomenon. Wired UK. https://www.wired.co.uk/article/pretty-ugly

Valenti, J. (2018). Toxic Twitter: Women’s experiences of violence and abuse on Twitter. Amnesty International. https://www.amnesty.org/en/latest/news/2018/03/online-violence-against-women-chapter-3-2/

Zuboff, S. (2019). It’s not that we’ve failed to rein in Facebook and Google. We’ve not even tried. The Guardian. https://www.theguardian.com/commentisfree/2019/jul/02/facebook-google-data-change-our-behaviour-democracy