Bullying, harassment, violent content, hate speech, pornography and other problematic content circulate on digital platforms. Who should be responsible for stopping the spread of this content, and how?

Introduction

As digitalisation develops, almost everything is closely linked to electronic information. Digital platforms have become venues for distributing all kinds of content, allowing people to express their opinions at will. However, much of the problematic content circulating on digital platforms is deceptive, virtual, global and difficult to control, and it is easy for media to become a vehicle for spreading illegal, violent, pornographic and other harmful material. Governments, platforms and individuals are all responsible for stopping the spread of problematic content. Gillespie (2018a) argues that we must re-examine how we organise speech, social activity, power, and responsibility in these growing digital media. These platform problems need a solution because social media is critical to modern society.

What is problematic content?

Bullying, violent and pornographic content are frequent problems on the internet. Cyberbullying differs from conventional bullying because it can happen anywhere and at any time through social media or websites. Its consequences can be traumatic or emotionally scarring and, in severe cases, can lead to self-harm or even suicide. According to the Australian Human Rights Commission (n.d.), cyberbullying has several unique features: it is often anonymous, because the perpetrator can hide behind a false identity, and it is persistent, because information put online may be archived. Bullying on the internet can take the form of offensive or threatening messages sent through social networking sites, the revealing of personal details, the spreading of rumours about individuals, circulated intimidation or harassment, and discriminatory communication (Australian Human Rights Commission, n.d.). Violence and pornography also need to be moderated in a timely manner, as they spread quickly and widely on digital platforms.

“Data Security Breach” by Visual Content is licensed under CC BY 2.0.

How can governments stop the spread of problematic content?

The government should strengthen laws and regulations on network security and the technical governance of online content. The Online Safety Act 2021 establishes a range of mechanisms to keep Australians safe online, including schemes for removing seriously abusive and harmful content (Australian Government, n.d.). However, as Gillespie (2018a) points out, spotting illegal activity online is challenging because users enjoy some of the anonymity the internet provides and the privacy of encryption. The Christchurch terror attack of 15 March 2019 was the deadliest in New Zealand’s history. Brenton Tarrant broadcast the attack as a live stream on the web, demonstrating how virally extremely violent content can spread; he filmed the atrocity with a head-mounted camera, giving the footage a first-person-shooter feel (Macklin, 2019). The 45th Parliament swiftly passed legislation introduced by the Morrison government to ensure that violent, offensive material is promptly removed by internet service providers (Biddington, n.d.). With these measures, Australian lawmakers have introduced some of the most stringent legal means of holding social media companies accountable for the content they host. Digital platforms must then respond to state or court requests to remove illegal third-party content (Gillespie, 2018a).

Even if the law requires inappropriate content such as cyberbullying or violent material to be deleted promptly, how should the psychological trauma it leaves behind be dealt with? That trauma affects not only viewers but also the content moderators who must handle such material. The government should therefore strengthen the governance of online content and develop more detailed, specific laws to protect people on digital platforms. It can also raise the public’s online safety awareness through education, cultivate people’s moral literacy and sense of social responsibility, and provide psychological counselling.

How can platforms stop the spread of problematic content?

Platforms need to balance their own interests with those of users. As Gillespie (2018a) notes, social media platforms have to tailor their policies in response to government demands to remove content, which suggests that platforms do not do this voluntarily. Social media platforms use openness to attract viewers’ attention, but they must operate within policy. Gillespie (2018b) observes that with too little moderation, users find themselves in a toxic environment they want to avoid; with too many restrictions, the platform feels too antiseptic. In other words, if a platform carries too much violent or unsafe content, such as pornography, it will breach policy and lose fearful users to competitors. However, Palfrey (as cited in Gillespie, 2018) argues that overfiltering also has a cost in terms of public sentiment. Regulation is thus a double-edged sword.

Each platform should also have its own review team so that malicious speech can be blocked. Gillespie (2018a) recounts how two content moderators sued Microsoft after reviewing extremely violent images and content on the job left them suffering from post-traumatic stress disorder, illustrating the psychological harm moderation work can cause. If platforms add artificial intelligence to identify sensitive words and pictures in advance, a large part of the obviously illegal content can be filtered out before humans see it. In addition, a platform’s content review obligations include identifying infringing information and giving users pop-up risk warnings. Real-name authentication, such as verifying a user’s ID and IP address, can also reduce the security risks created by anonymity, and user authorisation is the first step in verifying a user’s information. Deloitte (n.d.) argues that Privacy by Design (PbD) reconciles the need for solid data protection with the desire for data-driven innovation: social media platforms can embed privacy directly into the design of business practices and network infrastructure, providing a middle way. In this way, the platform’s competitive interests are balanced against the need to protect privacy.
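As a rough illustration of the automated pre-filtering described above, the sketch below shows a minimal keyword-screening step that routes flagged posts to a human review queue. The term list, function name, and routing logic are invented for illustration and are not any platform’s actual system; production moderation pipelines combine machine-learning classifiers, image hashing, and large human teams.

```python
import re

# Hypothetical flagged-term list; real systems maintain large,
# frequently updated lexicons alongside machine-learning classifiers.
FLAGGED_TERMS = {"violence-term", "slur-term", "porn-term"}


def screen_post(text: str) -> tuple[bool, list[str]]:
    """Return (needs_human_review, matched_terms) for a post.

    A match does not remove the post automatically; it only routes
    the post into a human moderation queue, so a large part of
    obviously problematic content is caught before other users see it.
    """
    words = set(re.findall(r"[a-z'-]+", text.lower()))
    matches = sorted(words & FLAGGED_TERMS)
    return bool(matches), matches


if __name__ == "__main__":
    flagged, terms = screen_post("An example post containing slur-term.")
    if flagged:
        print(f"Routed to human review (matched: {', '.join(terms)})")
    else:
        print("Published with no pre-screening flags")
```

A pre-screen of this kind also spares human moderators some of the most traumatic material, which speaks to the Microsoft case above.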

How can individuals stop the spread of problematic content?

In addition to government and platform regulation, individual resistance is essential to prevent the circulation of problematic content. Users fall into two categories, publishers and viewers, and the two roles usually overlap and are interchangeable. As publishers, users must not attack others arbitrarily on the internet or create violent and discriminatory content, let alone disseminate or deliberately seek out illegal content such as violence and pornography. As viewers, users must consciously cultivate a sense of social responsibility and resist harmful information.

“Big data la gi” by KamiPhuc is licensed under CC BY 2.0.

Individuals can take many steps themselves: verifying payments on platforms and social software, regularly changing or strengthening passwords, not clicking on suspicious links, and keeping antivirus software up to date. Even regulated and monitored users remain exposed on social media platforms: in March 2018, Facebook was hit by a scandal in which Cambridge Analytica improperly used Facebook user data to build tools that helped President Trump’s 2016 campaign (Dance et al., 2018). This shows how completely users’ privacy can be laid bare, so self-protection awareness is vital. Relying on the platform alone cannot filter out all inappropriate content, even though most people would prefer to leave responsibility for problematic content to the social media platform (Gillespie, 2018b).
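To make the advice about suspicious links concrete, here is a small, hypothetical sketch of the kind of check a cautious user (or a browser extension) might apply before following a link. The blocklist and heuristics are invented for illustration; real protection comes from maintained services such as browser safe-browsing databases and up-to-date antivirus software.

```python
from urllib.parse import urlparse

# Hypothetical blocklist for illustration only; in practice this role
# is played by maintained safe-browsing databases, not a local set.
KNOWN_BAD_DOMAINS = {"phishy-example.test", "malware-example.test"}


def looks_suspicious(url: str) -> bool:
    """Apply rough red-flag heuristics to a link before following it."""
    host = (urlparse(url).hostname or "").lower()
    if host in KNOWN_BAD_DOMAINS:
        return True
    # Crude heuristics: raw IP addresses or unusually long hostnames.
    if host.replace(".", "").isdigit() or len(host) > 60:
        return True
    return False


if __name__ == "__main__":
    for link in ("https://phishy-example.test/login",
                 "https://www.example.com/article"):
        verdict = "suspicious" if looks_suspicious(link) else "no obvious flags"
        print(f"{link}: {verdict}")
```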

“Why Care About Internet Privacy?” by Epipheo. All rights reserved. Retrieved from: https://youtu.be/85mu9PLWCuI

Conclusion

Governments are responsible for building strong network security laws and regulations and for the technical governance of online content, and they can curb the spread of inappropriate content by raising the public’s awareness of online safety through education. Platforms must balance their own interests with those of users and guarantee effective content review. Finally, protecting personal privacy on the internet is essential for individuals to keep themselves safe on digital platforms.

Reference list

Australian Government. (n.d.). Current legislation. https://www.infrastructure.gov.au/media-technology-communications/internet/online-safety/current-legislation

Australian Human Rights Commission. (n.d.). Cyberbullying, human rights and bystanders. https://humanrights.gov.au/our-work/commission-general/cyberbullying-human-rights-and-bystanders

Biddington, M. (n.d.). Regulation of Australian online content: Cybersafety and harm. Parliament of Australia. https://www.aph.gov.au/About_Parliament/Parliamentary_Departments/Parliamentary_Library/pubs/BriefingBook46p/Cybersafety

Dance, G. J. X., LaForgia, M., & Confessore, N. (2018, December 18). As Facebook raised a privacy wall, it carved an opening for tech giants. The New York Times. https://www.nytimes.com/2018/12/18/technology/facebook-privacy.html

Deloitte. (n.d.). Protecting privacy in the age of big data and analytics. https://www2.deloitte.com/th/en/pages/risk/articles/privacy-big-data-analytics.html

Epipheo. (2013, June 10). Why care about internet privacy? [Video]. YouTube. https://youtu.be/85mu9PLWCuI

Gillespie, T. (2018a). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press. https://doi.org/10.12987/9780300235029

Gillespie, T. (2018b). Governance of and by platforms. In J. Burgess, A. Marwick, & T. Poell (Eds.), The SAGE handbook of social media (pp. 254-278). SAGE Publications.

Macklin, G. (2019, July). The Christchurch attacks: Livestream terror in the viral video age. Combating Terrorism Center. https://ctc.westpoint.edu/christchurch-attacks-livestream-terror-viral-video-age/

PbD. (n.d.). What is Privacy by Design. https://www.pcpd.org.hk/pbdconference/privacy-by-design.html