
Introduction

More and more aspects of daily life are mediated by digital platforms, which have generated staggering wealth, social status, and political influence for investors while also offering benefits and convenience to consumers. This essay discusses what digital platforms and governments should do when harmful content such as cyberbullying, harassment, violent material, hate speech, and pornography spreads online. The focus is on actions meant to stop the spread of violent content. The essay also compares how governments regulated media before and during the Internet era.
Digital platforms should take responsibility for moderating inappropriate content and self-policing

When inappropriate content is posted on a digital platform, it is the platform’s job and responsibility to enforce the rules and police itself. In recent years, digital platforms have grown rapidly. These platforms have been used to spread false information about politics and public health, fake news, and counterfeit products, among other things. Some of these services have also been involved in large-scale, covert surveillance of users and in questionable business dealings (Cusumano et al., 2021). People increasingly demand more control over information on digital platforms because they worry that exposure to harmful content can damage their mental and physical health in the real world. For instance, the UK Intelligence and Security Committee (ISC) reported that one of the murderers of Fusilier Lee Rigby had stated his intent to kill a soldier on Facebook. The ISC argued that Facebook and other platforms provided a “safe haven” for terrorists and asked that they share more information with law enforcement authorities about any terrorist threats (Cusumano et al., 2021).

Platforms have developed systems to sort through and categorize content. When automated classification is unclear, the decision is left to human reviewers. Roberts argues that discussions about content review often focus on human labor (Gerrard, 2018). Platforms rely on a large human workforce to review content. However, users worry about whether reviewers from different cultures can accurately judge whether content is appropriate for public consumption. In many cases, there is no clear line between nude or violent images that are globally and historically significant and those that are not. Some images may be offensive in one part of the world and acceptable in another. Even if there were a clear standard, it would not be easy to sift through millions of weekly posts on a case-by-case basis (Gillespie, 2018). Arrow claims that, without clear rules, operations such as those of digital platforms are prone to “moral hazard” (Arrow, 1978). The standards created and applied during review are also highly contentious, as some digital platforms rely on low-paid workers to process content in order to maximize revenue (Block, 2018). Workers tasked with screening content experience burnout, and some companies have recognized that they need to offer counseling to employees exposed to obscenity, hate speech, or abusive videos and images for long periods (Roberts, 2019). This shows how much damage the uncontrolled and indiscriminate distribution of harmful content on the Web can cause.

A study at George Washington University explored the impact of intervening in online terrorist activity. The study tracked suspicious accounts for 30 days, noting when “similar” accounts returned after being removed or suspended. The research suggests that suspending accounts suspected of terrorist activity disrupts “terrorist communication” and that repeated suspensions eventually reduce the number of followers of any suspected user. In the long run, this limits the dissemination of harmful information online (Softness, 2016). One of the most challenging tasks for a platform is to develop and implement a content moderation mechanism that balances both extremes: removing harmful content while preserving open expression. However, Twitter’s community guidelines state that “everyone should have the ability to create and share ideas and information without barriers” (Gillespie, 2018). The role of government action is also significant.
Government should take responsibility for oversight and legislative deterrence
Governments must act as the platforms’ guardians when false information is shared on the Internet. To be more precise, governments around the world have finally started to pay attention to the issue of platform abuse after years of neglect. The European Commission pursued antitrust cases against Google beginning in 2010. In the fall of 2020, the United States government also took firm antitrust action against Google and Facebook. In 2021, the European Union, Germany, Australia, Japan, Russia, and China adopted more stringent regulatory measures (Mozur et al., 2021). Digital platforms and governments clash when regulation begins. Nicole Wong notes that when content violates local laws but not community norms, a platform may choose to restrict it only for local IP addresses (Block, 2018). An individual’s right to know could be compromised by excessive state interference. Tristan Harris, for his part, points out that it is commonly assumed that human nature is fixed and that technology is a neutral product that accentuates nothing. In reality, technology is biased toward attracting and keeping the attention of as many people as possible, even if that means giving more weight to content about atrocities in search results (Block, 2018). Ideally, platforms and governments should work together as equals. Governments have not built effective tools for detecting inappropriate content, but they can pass laws that constrain what platforms may do (Cusumano et al., 2021). Digital platforms, for their part, are better at controlling content distribution, which makes full government management of platforms unnecessary.
It is easy to see why enterprises and new industries resist self-regulation when its perceived short-term costs are significant and when changes threaten to dampen network effects and reduce revenue or profits. However, ignoring the need for self-regulation can also harm companies. If governments impose rules on digital platforms that are too hard to follow, or if people abuse or misuse platforms in ways that erode customer trust and demand, business for these platforms is likely to decline. Companies are more likely to take self-regulation seriously when they believe government regulation is a real possibility, even if self-regulation hurts short-term sales and profits. Governments can encourage self-regulation by digital platforms by helping them see the value of looking past the near term. Governments have also tried to manage the media in other ways, not just on digital platforms.
Comparing past and present government-controlled media and digital platforms

Efforts by firms and industries to self-regulate existed before the Internet era, although there were fewer of them. Here, we consider the examples of movies and of computer and video games. When businesses and industries feared that the government would step in to control unregulated practices, they began to self-regulate to avoid that outcome. Despite their apparent differences, the connections between these past instances and today’s Internet platforms are significant and illuminating. For decades, governments and other organizations have used censorship to safeguard citizens, particularly minors, from exposure to potentially harmful information and media. The film industry in the United States is an excellent example of the interplay between government regulation and self-regulation. In 1907, Chicago passed the first law in the U.S. to censor movies, giving the police chief the power to stop “obscene or immoral” films from being shown in theatres and penny arcades across the city. Film producers attempted to fight the law but were unsuccessful (Cusumano et al., 2021). The difference between that era and today is clear enough: the company’s owner is no longer the sole arbiter of information. Nevertheless, digital platforms can clearly play a more active role in limiting harmful or ethically questionable content. Social media companies, in particular, seem well suited to curating content, as delivering and amplifying information is a core activity and capability (Cusumano et al., 2021).
Conclusion
In conclusion, social media and digital content platforms have worked with regulators to identify and remove harmful or false information. Restricting the spread of harmful content on the Internet requires a combination of platform self-monitoring and government-assisted regulation. Vera Jourová, vice president of the European Commission, said in September 2020 that the Commission planned to focus on how to limit the spread of this kind of content online (Cusumano et al., 2021). Such cooperation can create a peaceful and healthy online society and achieve a win-win outcome.
References
Arrow, K. J. (1978). Uncertainty and the welfare economics of medical care. In P. Diamond & M. Rothschild (Eds.), Uncertainty in economics. Academic Press. https://www.sciencedirect.com/science/article/pii/B9780122148507500280
Cusumano, M. A., Gawer, A., & Yoffie, D. B. (2021). Can self-regulation save digital platforms? Industrial and Corporate Change, 30(5), 1259–1285. https://academic.oup.com/icc/article/30/5/1259/6355574
Gerrard, Y. (2018). Beyond the hashtag: Circumventing content moderation on social media. New Media & Society, 20(12), 4492–4511. https://doi.org/10.1177/1461444818776611
Gillespie, T. (2018). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press.
Block, H. (2018). The Cleaners [Documentary]. Dailymotion. https://www.dailymotion.com/video/x6hv4w9
Mozur, P., Kang, C., Satariano, A., & McCabe, D. (2021). A global tipping point for reining in tech has arrived. The New York Times. http://ezproxy.library.usyd.edu.au/login?url=https://www.proquest.com/newspapers/global-tipping-point-reining-tech-has-arrived/docview/2515156738/se-2
Roberts, S. T. (2019). Behind the Screen. Yale University Press.
Softness, N. (2016). Terrorist communications: Are Facebook, Twitter, and Google responsible for the Islamic State’s actions? Journal of International Affairs, 70(1).