Deleting All the Problematic Content? Who Is Responsible for It?

“Internet Splat Map” by jurvetson is licensed under CC BY 2.0.

With the rapid development of the digital cultural industry and the expansion of the Internet into every corner of people’s lives, many problems have gradually attracted attention. As the influence of the Internet has grown, issues such as privacy, media regulation, market monopoly, commercialization, copyright protection, false information, and harmful information have emerged.

Among these, the dissemination of harmful information on the Internet causes direct and visible harm to users. This harmful content includes bullying, harassment, violent content, hate speech, pornography, and other problematic material in circulation. In this blog, we analyse who is responsible for these negative impacts and how they can reduce and contain them, focusing on how to limit the spread of harmful speech. The self-regulation of platforms and the introduction of government policies have significantly reduced discriminatory, sexually abusive, and hateful content on social networks.

Government & Policy

The Enhancing Online Safety Act 2015 protects Australians from abuse and from exposure to harmful content on social networks (Enhancing Online Safety Bill 2015). The policy was prompted by the 2014 death of Charlotte Dawson, a well-known public figure who had worked to make the Internet safer and to protect people from bullying. Her death caused a stir in Australia, bringing greater attention to the issue of verbal abuse on social networks (O’Brien & Ralston, 2014).

“Charlotte Dawson” by Eva Rinaldi Celebrity Photographer is licensed under CC BY-SA 2.0.

Charlotte Dawson suffered a long period of online abuse before her death; it was a source of her depression, and in the end she took her own life (Nimmons, 2014). Both the government and the social networking platforms should be held responsible for her death. Every country’s government is formed to protect its people’s interests, so the government should be responsible for reducing discrimination and hate speech on the Internet. The Enhancing Online Safety Act 2015 was introduced quickly after Charlotte’s death to protect the safety of the country’s Internet users and to respond to similar situations in the future. The Australian government did not simply issue a bill: the laws and policies it promulgates are a powerful guarantee that protects people from harmful information on the Internet. The policy chiefly orders service platforms to respond to harassment, bullying, hate, and similar content, which is an efficient approach.

“New Zealand parliament building” by Like_the_Grand_Canyon is licensed under CC BY-NC 2.0.

In 2015, New Zealand passed the Harmful Digital Communications Act (Harmful Digital Communications Act 2015). The Act underscores the need for a tighter regulatory regime for social media companies, tightening penalties for harmful online behaviour, including prison terms and heavy fines. The policy mainly restricts platforms, and its purpose is to push them to establish better and more protective regulatory systems. With better self-regulation by platforms, content such as bullying becomes harder to spread on social networks. Vigorous enforcement of laws and regulations makes this a very effective way for the government to participate in management, and these policies effectively reduce the spread of harmful speech on the Internet. Yet even with such protective legislation in place, people still face challenges.

The Internet has developed into a platform for worldwide communication, regulation has become difficult, and a country’s policies can only constrain the Internet within that country (Picard & Pickard, 2017).

For policies to be effective, they must be established as national and international standards, not merely local ones. Existing regulations, for example, affect only domestic social media providers, and non-domestic companies may be unable, unwilling, or simply not required to comply.

“terrorists at Mumbai with AK 47” by dotcompals is licensed under CC BY 2.0.

This allows terrorists to publish illegal content using platforms not regulated by the US government (Clifford, 2021). Domestic laws cannot regulate foreign platforms, yet because of the interoperability of the Internet, people can publish content on any platform and receive content from any platform at will. This situation aggravates the difficulty of regulating the Internet. It is the government’s responsibility to reduce harassment, violence, and hate online, and governments reduce the production of objectionable content by enacting legal policies that constrain individual content publishers and direct platform regulation. Enforcement of these policies is vigorous, and the government has had a positive impact on removing such content. However, owing to the uniqueness and limitations of the law, government regulation of the Internet still has deficiencies.

Platform & Moderation

In its report covering April to June 2022, YouTube stated that it removed nearly 4 million channels (YouTube transparency report). Over the same period it removed more than 4 million videos, the majority detected by automated flagging, and most of this removed content had been viewed no more than ten times. Of the deleted channels, 1.2% were removed for hate, discrimination, and bullying; the figures for removed videos are even more alarming, with more than 11% of content removed because it was dangerous, horrific, or harmful (Reality Check Team, 2020). Beyond the government’s responsibility for blocking harmful information on social networks, the platforms themselves, as the carriers through which information circulates and spreads, should also be responsible for content. People send all kinds of content through social platforms, and the platforms can root out the vast majority of content that should not be sent. Platforms set up their own moderation systems to manage outgoing content, limiting the spread of objectionable material by removing it.

“Artificial Intelligence & AI & Machine Learning” by mikemacmarketing is licensed under CC BY 2.0.

“Due to advances in AI, ‘intelligent agents’ or bots will begin to more thoroughly scour forums for toxic commentary” (Rainie et al., 2017, p. 57).

This suggests that platform auto-moderation is capable of stopping toxic content and has a promising outlook. Automated moderation is fast: most large social platforms can act before many users have read or even noticed the harmful information. Speed is the advantage of platform supervision, since it can keep pace with the development of the Internet, whose rapid growth is precisely what makes it hard to regulate. The government oversees the platforms’ supervision, and platforms follow government policies to avoid punishment; at the same time, platforms cannot restrict users too severely because of free speech protections. Moderating content on the Internet is difficult because the rules defining objectionable information are flexible.
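To make the idea of automated flagging concrete, here is a minimal sketch of how such a pipeline might route content. It is purely illustrative: the toxicity_score function, the threshold values, and the review queue are hypothetical stand-ins, not any real platform’s system.

```python
from dataclasses import dataclass, field

# Hypothetical thresholds: scores above REMOVE_AT are removed automatically;
# scores in the grey zone go to human review. Real platforms tune such values
# continuously; the numbers here are illustrative only.
REMOVE_AT = 0.9
REVIEW_AT = 0.6

@dataclass
class ModerationQueue:
    removed: list = field(default_factory=list)
    needs_human_review: list = field(default_factory=list)
    published: list = field(default_factory=list)

def toxicity_score(text: str) -> float:
    """Placeholder for a trained classifier (e.g. a toxic-comment model).

    Faked here with a simple keyword count so the sketch is runnable."""
    toxic_words = {"hate", "kill", "abuse"}
    words = text.lower().split()
    return min(1.0, sum(w in toxic_words for w in words) / max(len(words), 1) * 5)

def moderate(post: str, queue: ModerationQueue) -> None:
    """Route a post based on the classifier's confidence."""
    score = toxicity_score(post)
    if score >= REMOVE_AT:
        queue.removed.append(post)             # clear violation: auto-remove
    elif score >= REVIEW_AT:
        queue.needs_human_review.append(post)  # borderline: a person decides
    else:
        queue.published.append(post)           # allowed through

queue = ModerationQueue()
for post in ["I hate you and will kill you", "Lovely weather today"]:
    moderate(post, queue)
print(len(queue.removed), "removed,", len(queue.published), "published")
```

The grey zone between the two thresholds, where the classifier cannot decide confidently and a person must, is exactly where the definitional problems discussed below arise.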

Regarding pornography: at what degree of explicitness does an image become pornographic, and should a system automatically delete photos of people in swimsuits? Then there are photos of war. These are essentially images of horror, yet they have value and social significance, so should such photos be classified as inappropriate and blocked from being sent? It is challenging to balance the news value of these photos against their horror, and content like this makes moderation difficult for platforms to control (Gillespie, 2018, p. 10).

“Bikini Open 57” by The Bikini Open is licensed under CC BY 2.0.

“Free Speech” by mellowbox is licensed under CC BY-SA 2.0.

Deleting all related content would violate people’s right to free speech, which the government does not allow. Platforms rule out the most serious offences and should uphold some basic ethical principles. Their efforts may be driven by a wish to retain users who would otherwise be driven away by explicit content or continued abuse, or by fear of legal action if they cannot protect users on their own. Platforms follow the provisions of the law, so they remove only the most flagrant content rather than all related content; this is why it is difficult to delete hate speech from the Internet completely. Platforms cannot entirely stop inappropriate content from spreading online, but they respond positively to content that damages the online environment. Ultimately, platforms are responsible for addressing inappropriate speech online and should stop it, and they do so mainly through automated moderation.


Platforms and governments are the two parties primarily responsible for addressing hate, harassment, bullying, violence, pornography, and other problematic content online. By passing laws that impose restrictions on both platforms and content, the government reduces the dissemination of harmful information. To uphold legal requirements and safeguard users’ rights, platforms moderate content. Despite the difficulties governments and platforms confront, the overall effort to safeguard those harmed online by inappropriate content is positive.

References

Clifford, B. (2021). Moderating extremism: The state of online terrorist content removal policy in the United States. https://www.voxpol.eu/download/report/Moderating20Extremism20The20State20of20Online20Terrorist20Content20Removal20Policy20in20the20United20States.pdf

Enhancing Online Safety Bill 2015 – Parliament of Australia.

Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. New Haven: Yale University Press. https://doi-org.ezproxy.library.sydney.edu.au/10.12987/9780300235029

Google. (n.d.). Google Transparency Report. https://transparencyreport.google.com/youtube-policy/removals?hl=en

Harmful Digital Communications Act 2015 No 63, Public Act – New Zealand Legislation.

Nimmons. (2014, March 2). Cyber Bullying: Death of Charlotte Dawson Leads to Calls for Charlotte’s Law. Nimmons Consulting. https://www.nimmonsconsulting.com/security/cyber-bullying-death-charlotte-dawson-leads-calls-charlottes-law/

O’Brien, N., & Ralston, N. (2014). Charlotte Dawson found dead. The Sydney Morning Herald. https://www.smh.com.au/entertainment/celebrity/charlotte-dawson-found-dead-20140222-338j6.html

Picard, R., & Pickard, V. (2017). Essential principles for contemporary media and communications policymaking. https://reutersinstitute.politics.ox.ac.uk/our-research/essential-principles-contemporary-media-and-communications-policymaking

Reality Check Team. (2020, February 12). Social media: How do other governments regulate it? BBC News. www.bbc.com/news/technology-47135058

Rainie, L., Anderson, J., & Albright, J. (2017). The future of free speech, trolls, anonymity and fake news online. https://eloncdn.blob.core.windows.net/eu3/sites/964/2019/07/Pew-and-Elon-University-Trolls-Fake-News-Report-Future-of-Internet-3.29.17.pdf