
Introduction: The Content of the Internet
The era of decentralized Web 3.0 is an era of user-centric, complex content. Following the principle of “#Free-Speech”, people freely post whatever words, pictures or videos they want on digital platforms (The Two Sides of Free Speech in Social Media, 2022). This creates an invisible “double-edged sword”, with overwhelming opinions coming from all sides. Some content may create a positive digital atmosphere for digital actors; other content, on the contrary, is riddled with problems, including bullying, harassment, violence, hate, pornography and other types of negative material.
“Jon Leland speaking at Web 3.0 Asia at Hong Kong
Cyberport” by jonleland is licensed under CC BY-NC 2.0.
To limit the influence of such material on the ideology of digital actors and on society’s mainstream values, various groups must work together to confront the problem. Gillespie et al. (2020) note that content moderation is receiving more and more outside attention, and how the issue should be solved is now widely debated. The core tension is how to balance citizens’ right to free speech, recognized since the early days of the Internet, against digital platforms’ obligation to respond to problematic content (Sukriti, 2022). A further question is whether the government has the right to intervene at all, and to what extent governments, platforms, intermediaries, individuals and communities should cooperate to maintain a harmonious digital space.
“Freedom of Speech, Freedom of Religion” by Sally
Frederick Tudor is licensed under CC BY-NC 2.0.
The Solution Behind the Digital Dilemma
Government and Law
Governments should act on legislation and accountability to address the digital-space dilemma. Firstly, the speed and scope with which content spreads on global digital platforms cannot be underestimated: once malicious content goes viral (#devil-spread), the situation quickly gets out of control (Sukriti, 2022). Only through #legal-mechanisms and #intermediary-accountability can the problem of content moderation be improved to some extent. For instance, the US passed the Communications Decency Act in 1996, and the UK has promulgated the Online Safety Bill, both aiming to reduce the dissemination of negative content through moderation decisions (Sukriti, 2022).

However, a tricky problem remains: such regulation erodes individual rights and interests to a certain extent, and governments do not have strong control over intermediaries and digital actors, so resistance may bring political and social chaos (Flew et al., 2019). Government control of the digital space is therefore only one part of the funnel of global content regulation; control from the platforms themselves is also essential.
“Jump on the social media bandwagon“
by Matt Hamm is licensed under CC BY-NC 2.0.
Digital Platforms
Digital platforms should also step up to the plate in the controversy surrounding content moderation, addressing the issue with algorithms for “#visibility moderation” and “#information recommendation”. First of all, new media orients society towards global, real-time information synchronization, which is quite different from old media (Flew et al., 2019). When content on a platform disturbs the social environment, the platform can adopt visibility moderation: by controlling how traffic is distributed (also known as a “#Shadow-Ban”), it reduces the reach of content that does not meet the platform’s tonality and requirements, and thus its impact (Zeng & Kaye, 2022). For example, Cotter (2021) reports that Chambers has 600,000 followers, yet when she inadvertently posted harmful content her video traffic began to drop; even her fans could only find her videos by visiting her profile page directly, meaning she was no longer given a place in the platform’s recommendation stream.
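To make the mechanism concrete, here is a minimal, hypothetical sketch in Python of how a visibility-moderation step might sit inside a recommendation pipeline. It is not any platform’s actual code: the `Post` fields, the `flagged` marker and the scoring rule are assumptions made purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Post:
    creator: str
    relevance: float       # score from the usual ranking model
    flagged: bool = False  # marked by moderation as off-tonality content

def feed_score(post: Post, shadow_ban_factor: float = 0.0) -> float:
    """Downweight flagged posts so they rarely (or never) enter the feed."""
    if post.flagged:
        # A factor of 0.0 removes the post from recommendations entirely;
        # it remains reachable from the creator's own profile page.
        return post.relevance * shadow_ban_factor
    return post.relevance

def build_feed(candidates: list[Post], size: int = 10) -> list[Post]:
    """Rank candidates and keep only posts with a positive visibility score."""
    ranked = sorted(candidates, key=feed_score, reverse=True)
    return [p for p in ranked if feed_score(p) > 0][:size]

posts = [Post("chambers", 0.92, flagged=True), Post("other_creator", 0.35)]
print([p.creator for p in build_feed(posts)])  # the flagged creator is absent
```

The key design point in such a scheme is that nothing is deleted: a flagged post stays on the creator’s profile, it simply stops being distributed through the recommendation stream.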
In addition, by using accurate, positive information to counter false and negative information, the platform can not only preserve the diversity of ideas but also protect its ecological environment and digital-space culture; this is called information recommendation (Yaraghi, 2019). For instance, when malicious false information such as “#vaccines make people sick” circulates, YouTube’s solution is to place a link to authoritative information next to it, allowing digital actors to identify the source of the information and improving the cultural atmosphere of the digital space from the perspective of information distribution (Yaraghi, 2019).
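A toy sketch of the information-recommendation idea might look like the following. The topic keyword and the WHO link are assumptions made for illustration; real systems rely on trained classifiers rather than simple keyword matching.

```python
# Pair posts that match known misinformation topics with a link to an
# authoritative source, shown alongside the post rather than replacing it.
AUTHORITATIVE_SOURCES = {
    "vaccine": "https://www.who.int/health-topics/vaccines-and-immunization",
}

def annotate(post_text: str) -> dict:
    """Return the post plus any context links to display beside it."""
    links = [url for topic, url in AUTHORITATIVE_SOURCES.items()
             if topic in post_text.lower()]
    return {"text": post_text, "context_links": links}

print(annotate("Vaccines make people sick"))
# {'text': 'Vaccines make people sick',
#  'context_links': ['https://www.who.int/health-topics/vaccines-and-immunization']}
```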
“University of Maryland and Sourcefire Announce New Cybersecurity Partnership“
by Merrill College of Journalism Press Releases is licensed under CC BY-NC 2.0.
Nevertheless, the algorithms of digital platforms carry human bias, which raises doubts and concerns about their effectiveness. When Black creators mention the “#n-word” they face heavy moderation, yet white users who mention it frequently remain popular and largely untouched (Jones & Hall, 2019). Content moderation on digital platforms therefore depends heavily on algorithms, which highlights the importance of multi-party joint action.
“Black Men, Black Male, Made in the U.S.A.“
by Thomas Hawk is licensed under CC BY-NC 2.0.
Third Party: Content Moderation Intermediaries
The challenge of policing the Internet is also shared by third-party intermediaries, who optimize AI moderation algorithms and supply outsourced human moderators. As the network has evolved, digital actors on platforms can contact one another far more directly than in the old media era, so greater emphasis is now placed on protecting affected parties from harm in interpersonal communication (Gillespie et al., 2020). This means platforms must devote enormous effort to moderation work, a cost they struggle to bear alone, so third-party intermediaries have gradually become indispensable, if largely invisible, partners of digital platforms (The Hustle, 2021). Because the arrangement also serves the intermediaries’ economic interests, the relationship with the platform is two-way.
‘It’s the worst job and no-one cares’ by BBC NEWS. All rights reserved. Retrieved from: https://www.youtube.com/watch?v=yK86Rzo_iHA
For example, after complying with the laws and requirements of various regions, #Accenture continuously refined its AI moderation features to help Facebook accurately moderate roughly 90% of content (possibly downgrading or deleting it) (The Hustle, 2021). The remaining 10% is processed by outsourced human employees, who filter out the content the AI failed to moderate (The Hustle, 2021). Because these outsourced employees come from different regions, they perceive words and expressions differently, which in turn shapes their moderation decisions (Roberts, 2019). Consequently, this approach can safeguard the public interest, serve the intermediaries’ interests and address surface problems, but it has an obvious drawback: it does not address the root cause.
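The division of labour between AI and outsourced human moderators can be pictured as a confidence-threshold pipeline. The sketch below is hypothetical: the thresholds, the `classify` stand-in and the routing logic are assumptions for illustration, not a description of Accenture’s or Facebook’s actual tooling.

```python
import random

def classify(item: str) -> float:
    """Stand-in for an AI model returning the probability that the item violates policy."""
    return random.random()

def moderate(items: list[str], remove_above: float = 0.9, allow_below: float = 0.1):
    """Act automatically on confident predictions; escalate uncertain items to humans."""
    human_queue = []
    for item in items:
        score = classify(item)
        if score >= remove_above:
            print(f"removed automatically: {item!r}")
        elif score <= allow_below:
            print(f"allowed automatically: {item!r}")
        else:
            human_queue.append(item)  # uncertain cases go to outsourced human review
    return human_queue

leftover = moderate([f"post {i}" for i in range(5)])
print("sent to human review:", leftover)
```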
“Social Media Brasil 2010 – Dia 24 – #2010“
by Alexandre.Formagio is licensed under CC BY-NC 2.0.
Individuals and Communities
Individuals and communities are also part of the funnel that prevents problematic content from spreading on digital platforms: through likes and clicks, individuals feed metadata back to the platform and help create a warm, harmonious digital environment. The media originally acted as a #watchdog, yet the problem it faces is that users publish content contrary to society’s mainstream culture (Massanari, 2017). To preserve public order while upholding the harm principle, we can therefore start from the individual perspective. For instance, with relevant training, people develop a subjective sense of content moderation that shapes Internet users’ habits, blurring the boundary between individual rights and public rights and helping to address existing problems (Abbate, 2017). The longer-term vision is that people actively contribute to online community management, realizing genuine individual self-regulation and community self-regulation (Massanari, 2017).

This method has its limits: it is territorial (consensus and coordination are difficult to reach under globalization), ingrained habits of free thinking are hard to change, and many people feel it deprives them of freedom of speech and tries to control their behaviour on the Internet (Abbate, 2017). Even so, it is worthwhile for individuals and communities to make some degree of change in response to the current problem of junk content, which obviously cannot be solved by individuals alone.
Conclusion
In summary, as time has passed digital platforms carry ever more information, which brings people convenience but also certain threats. Given the problematic content circulating on digital platforms discussed in this blog, we advocate integrating the regulatory plans and rules of governments, platforms, intermediaries, and individuals and communities, and maintaining a balance of each party’s rights through self-regulation and external supervision, to create a harmonious and safe Internet environment.
How to Solve the Dilemma of the Disharmonious Environment of Digital Platforms? © by Zhanhua Han is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
References:
Abbate, J. (2017). What and where is the Internet? (Re)defining Internet histories. Internet Histories, 1(1–2), 8–14. https://doi-org.ezproxy.library.sydney.edu.au/10.1080/24701475.2017.1305836
Cotter, K. (2021). ‘Shadowbanning is not a thing’: Black box gaslighting and the power to independently know and credibly critique algorithms. Information, Communication & Society, Online First, 1–18. https://doi.org/10.1080/1369118X.2021.1994624
Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50. https://doi-org.ezproxy.library.sydney.edu.au/10.1386/jdmp.10.1.33_1
Gillespie, T., Aufderheide, P., Carmi, E., Gerrard, Y., Gorwa, R., Matamoros-Fernández, A., Roberts, S. T., Sinnreich, A., & Myers West, S. (2020). Expanding the debate about content moderation: Scholarly research agendas for the coming policy debates. Internet Policy Review, 9(4). https://doi.org/10.14763/2020.4.1512
Jones, T., & Hall, C. (2019). Grammatical reanalysis and the multiple N-words in African American English. American Speech, 94(4), 478–512. https://doi.org/10.1215/00031283-7611213
Massanari, A. (2017). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. https://doi-org.ezproxy.library.sydney.edu.au/10.1177/1461444815608807
Roberts, S. T. (2019). Behind the screen: Content moderation in the shadows of social media. Yale University Press.
Sukriti, S. (2022). Social media, content moderation and free speech: A tussle. The Leaflet. https://theleaflet.in/social-media-content-moderation-and-free-speech-a-tussle/
The Hustle. (2021, September 7). Who’s doing FB’s dirty work? The Hustle. https://thehustle.co/%f0%9f%91%a8%f0%9f%8f%bb%e2%80%8d%f0%9f%92%bb-whos-doing-fbs-dirty-work/
The two sides of free speech in social media. (2022, February 2). Voices of Youth. https://www.voicesofyouth.org/blog/two-sides-free-speech-social-media
Yaraghi, N. (2019, April 9). How should social media platforms combat misinformation and hate speech? Brookings. https://www.brookings.edu/blog/techtank/2019/04/09/how-should-social-media-platforms-combat-misinformation-and-hate-speech/
Zeng, J., & Kaye, D. B. V. (2022). From content moderation to visibility moderation: A case study of platform governance on TikTok. Policy & Internet, 14(1), 79–95. https://doi.org/10.1002/poi3.287