Mourners in Christchurch at a vigil for those killed in the mosque attacks, on March 24, 2019 (Photo: Carl Court/Getty Images)
The Internet's power as a communication medium has always been a double-edged sword. While enjoying the convenience it brings, people also have to consider how to prevent the spread of bullying, harassment, violent content, hatred, pornography, and other problematic material. Built on relative freedom of speech, social media differs from every earlier form of communication: its modes of information exchange, online interaction, and re-sharing let content travel faster, further, and with greater reach. In such an environment, extremist and terrorist material brings all kinds of harm to individuals and to society. At the individual level, readers who encounter discriminatory or hateful words feel distressed, and their resentment toward the group the author belongs to deepens. At the social level, such content not only sharpens ethnic conflict but can also cause unrest. Teenagers and children are especially impressionable when exposed to problematic content, which creates instability for education and for society's future.
To prevent such harms, some countries have erected national firewalls to fully control what the media can publish. This can indeed curb speech that fuels racial division, but it also forfeits many opportunities to participate in global communication. Countries such as Australia, Germany, and Singapore, which have advanced communication technologies and value freedom of expression, therefore need regulations that govern how content is published and disseminated on a globalised Internet. Robert Gorwa (2019) describes the plans that various governments and institutions have devised to deal with the spread of problematic content. For example, New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron unveiled the Christchurch Call, under which national governments jointly fight terrorist content (Gorwa, 2019). States have also signed agreements with non-governmental organisations to take on responsibility for content control, while at the same time exempting platforms from some legal responsibilities that should instead be borne by publishers (Gorwa, 2019). All of this shows that countries and institutions clearly understand the necessity of controlling problematic content.
Examples of problematic content and its adverse effects
Masjid An-Nur mosque in Christchurch on the first anniversary of the terrorist attack, in 2020. (Photo: Lisa Maree Williams/Getty Images)
A horrific terrorist attack on a mosque in Christchurch, New Zealand, left 50 people dead and at least 47 injured, making it the deadliest attack in New Zealand's modern history (Deutsche Welle, 2019). Yet the continuing impact of the event online may have been even worse than the offline attack itself, drawing attention from people of all kinds. The incident triggered "multiple effects" online, in which violence expressed online fed back into real-world violence (Manhire, 2022). Repeated online commentary about the attack, together with deliberate incitement by terrorists, led more and more people to declare themselves "anti-Muslim" and fuelled Islamophobia. Far-right terrorists engaged actively on online platforms before and after the attack, triggering a series of offline and online hate crimes and leaving the targeted community in fear and anxiety across the blurred boundary between the offline and online worlds (Manhire, 2022). Amplified by media coverage and other online information, the hatred generated by this event far exceeded what the offline attack alone would have caused; to a large extent it emboldened terrorists and produced violent effects such as demonstrations and protests.
Girls performing in a play on child sex trafficking and abuse in a Manila slum. File image. Credit: Kristian Buus/Corbis via Getty Images
Another example is the live-streaming on Facebook of child sexual abuse in the Philippines. Hundreds of Australians participated online, scripting the abuse and paying for it; the report found that a single Australian spent nearly $300,000 on live-streamed material (Burke, 2020). In an environment where such high profits can be made, these crimes in the Philippines will only become more rampant. More innocent children will suffer inhuman abuse through this criminal trade, which damages their physical and mental health and destroys their families. The reach of the Internet has made such incidents more and more serious, spreading a corrosive atmosphere and harmful influence through society.
Platform management and the causes of problematic content's spread
An online platform is, in principle, a communication tool serving the public, yet controversial topics and problematic content often attract more attention and discussion. Once problematic content has drawn attention, it becomes difficult to control thoroughly or to prevent similar incidents. When a large number of people use a platform, managing it becomes harder still, and the situation can easily spiral out of control and create social problems. Current government regulations are imperfect, and platforms are reluctant to moderate strictly when it conflicts with their commercial interests, especially in Western countries where freedom of speech is protected.
As network technology has developed, there is effectively no barrier to publishing content on the Internet: a single mobile phone is enough to post to a large number of platforms, and many self-media ("we media") creators have emerged as a result. Lacking capital backing, these creators must find other ways to attract attention and interest, and some choose to publish controversial content in order to generate interaction and discussion. Discriminatory, hateful, and pornographic content can often earn more traffic than ordinary content.
Identifying problematic content and preventing its spread
For today's media platforms, identifying problematic content is sometimes more important than preventing its spread, because review technology is still at an early stage. Media information passes through production, circulation, reception, and other stages, but current platform review covers only two of them: review at publication, the "first review", and review during propagation, the "after review". The first review is mostly carried out by platform algorithms, which check whether text or images meet the definition of problematic content; if they do, the content cannot be published. Some strictly managed platforms also review derivative content spawned by certain events, but this can inconvenience ordinary users. Once content has been published and its circulation has caused harm, the platform conducts the after review, which is mostly done manually. Human reviewers assess the harm done, examine content the algorithms could not handle, and decide whether to stop it from spreading further.
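To make the "first review" stage concrete, below is a minimal, hypothetical sketch of an algorithmic pre-publication check: a post is scanned against a small blocklist of banned phrases before it is allowed to go live. The blocklist, function name, and matching rule are illustrative assumptions only; real platforms rely on trained machine-learning classifiers for text and images rather than simple keyword matching, and this is not any platform's actual API.

```python
# Illustrative sketch of an algorithmic "first review" (pre-publication check).
# The blocklist and function name are hypothetical placeholders, not a real
# platform's moderation system, which would use trained classifiers instead.
import re

BLOCKED_TERMS = {"terrorist propaganda", "child abuse material"}  # placeholder examples

def passes_first_review(post_text: str) -> bool:
    """Return True if the post may be published, False if it is held back."""
    normalised = post_text.lower()
    for term in BLOCKED_TERMS:
        # \b anchors ensure whole-phrase matches rather than fragments of other words
        if re.search(r"\b" + re.escape(term) + r"\b", normalised):
            return False
    return True

if __name__ == "__main__":
    print(passes_first_review("A photo of my breakfast"))       # True: published
    print(passes_first_review("Link to terrorist propaganda"))  # False: blocked
```

A sketch like this also makes the weakness discussed below visible: a fixed list of phrases cannot recognise coded or obliquely worded content, which is why platforms supplement algorithmic review with human "after review".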
At present, both approaches have many shortcomings and room for improvement. Algorithmic review struggles to understand oblique or coded human language, which often produces blunders in which normal content is blocked while disguised problematic content slips through. One can only hope that, as the technology advances, future artificial-intelligence algorithms will handle this better. Manual review, meanwhile, is highly subjective, and some employees may even leak unpublished information for profit, so ensuring reviewers' fairness and confidentiality is another obligation for every platform company.
A mature online platform must also help maintain social stability. For users, it must respect and protect online speech while shielding them from attack and abuse; for advertisers, it must preserve an environment that keeps the platform commercially attractive; and for legislators, it must appear diligent enough that no further regulation is required (Gillespie, 2018).
How to preserve online freedom at the same time
Controlling and preventing the spread of problematic content is very likely to restrict Internet users' freedom of expression as well. Network regulation can be highly subjective, and that subjectivity can embed discrimination and other harms into the whole environment. For example, online platforms steeped in geek and masculine culture automatically marginalise women's speech in order to consolidate a hegemonic masculinity (Massanari, 2017). LGBT people are likewise suppressed online, which has largely cost them their freedom of speech on these platforms. This atmosphere can even invert right and wrong, as when Rana Ayyub, a journalist subjected to sexualised harassment, was effectively silenced on social media. Balancing the fight against hate speech with freedom of expression is therefore a crucial task. It is to be hoped that regulators will accept public scrutiny more fairly and actively, optimise the platforms' moderation algorithms, and improve manual review, bringing the public a better and healthier online environment.
References
Burke, K. (2020). Housewife among Australian predators paying to see child porn live-streamed. 7NEWS. https://7news.com.au/news/crime/child-pornography-livestreamed-from-philippines-accessed-by-hundreds-of-australians-c-705273
Gorwa, R. (2019). The platform governance triangle: conceptualising the informal regulation of online content. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1407
Gillespie, T. (2018). CHAPTER 3. Community Guidelines, or the Sound of No. In Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (pp. 45-73). New Haven: Yale University Press. https://doi-org.ezproxy.library.sydney.edu.au/10.12987/9780300235029-003
Massanari, A. (2017). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. https://doi-org.ezproxy.library.sydney.edu.au/10.1177/146144481560
Manhire, T. (2022, March 15). How the Christchurch terror attacks “sparked a wave of hatred” in Australia. The Spinoff. https://thespinoff.co.nz/politics/15-03-2022/how-the-christchurch-terror-attacks-sparked-a-wave-of-hatred-against-muslims
Deutsche Welle. (2019, March 16). Christchurch terror attacks: What you need to know. DW. https://www.dw.com/en/christchurch-terror-attacks-what-you-need-to-know/a-47942310