It’s time for cooperation: Government, users, and platform companies

Wed 10 RE (Sulagna)

“Social Media Scripting Framework” by Carlos Veira Lorenzo is licensed under CC BY-NC-SA 2.0.

The development of the Internet has given people enormous freedom. This means that information that brings people happiness – and harmful information that causes distress, such as hate speech, pornographic and violent images, and fake news – spreads faster and more easily than ever (Gillespie, 2018). The mediums of the Internet, especially the digital platforms we use daily such as Facebook, Twitter, and Instagram, accelerate the spread of this harmful content. As a result, there is a growing need for sanctions against content that harms others. So who should be responsible for this, and who should impose those sanctions? The answer is that platform companies, governments, and even users must cooperate.

“3D Judges Gavel” is licensed under CC BY 2.0.


First, in the case of the government, rather than directly censoring individual users, its role is to hold platform companies accountable by enacting legislation that regulates them. So far, platform companies – most of which are based in the U.S. – have evaded responsibility for inappropriate online behaviour under provisions such as Section 230. Specifically, the platforms merely give users access to the network, are not responsible for what users write, and do not lose their safe-harbor protection even if they impose sanctions (Mueller, 2015). As a result, the giant companies that now dominate the Internet are not responsible for harmful content or hate speech posted by users, and such harmful material is left floating online without sanction. This, in turn, leads to ongoing negative consequences, such as illegal activities like gun sales (Kohrman, 2020) or damage to an individual’s reputation (Citron & Franks, 2020). Moreover, brutal content, such as posts about suicide spreading through online platforms, has had tragic consequences for teenagers who imitate it (June & Fairall, 2012).

“Tragedy of Cyberbullying” by Bernie Goldbach is licensed under CC BY-NC-ND 2.0.

In addition, cyberbullying conducted through social media also drives adolescents toward tragic choices. However, as mentioned earlier, platform companies avoid responsibility for these outcomes by blaming the individuals who write specific posts, under the protection of Section 230. Therefore, to reduce the uploading of harmful content that not only discomforts other users but eventually leads to social problems, the government sanctions against platform companies that have so far been avoided are needed. Governments must enact appropriate laws that push platform companies to deal with harmful content more easily and quickly.



On the other hand, there is a fear that such government regulation of platform companies could eventually turn into government censorship of individuals’ expressions of opinion. Therefore, a joint effort by users and platform companies against government overreach is also needed. If a government over-censors personal ideas expressed through platforms in the name of censorship for a better Internet, users must in turn scrutinise that government. Platform users must openly express their displeasure with a government that suppresses individual freedom (Carnegie Council for Ethics in International Affairs, 2020). Of course, this is not easy in the face of the enormous power of government. However, individual users must make this effort so that the Internet’s ideal of bringing more freedom to people is not broken down. In addition, it would be best if users simply did not post harmful content, such as cyberbullying and hate speech, on the digital platforms everyone uses. However, this rarely happens either. Therefore, individuals need to make more active use of the user-censorship tools that already exist within platforms, such as flagging, so that harmful posts are censored even a little faster. In fact, according to Mena’s (2019) research, the flag warnings seen by Facebook users reduced the perceived credibility of fake news and, in turn, users’ intention to share it. This shows that the connection between users and platform flagging helps prevent the spread of harmful content.
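The flagging mechanism described above can be sketched as a simple threshold rule: once enough distinct users flag a post, it is queued for moderator review. The following is a minimal illustration only – the class name, threshold, and queue design are assumptions, not any real platform’s system:

```python
# Hypothetical sketch of a user-flagging pipeline: a post flagged by
# enough distinct users is queued for human moderator review.

REVIEW_THRESHOLD = 3  # assumed value; real platforms tune this


class FlagQueue:
    def __init__(self, threshold=REVIEW_THRESHOLD):
        self.threshold = threshold
        self.flags = {}          # post_id -> set of user_ids who flagged it
        self.review_queue = []   # posts awaiting moderator review

    def flag(self, post_id, user_id):
        """Record a flag; repeat flags from the same user are ignored."""
        flaggers = self.flags.setdefault(post_id, set())
        flaggers.add(user_id)
        if len(flaggers) >= self.threshold and post_id not in self.review_queue:
            self.review_queue.append(post_id)


q = FlagQueue()
for user in ["alice", "bob", "bob", "carol"]:
    q.flag("post-42", user)
print(q.review_queue)  # ['post-42'] -- three distinct users flagged it
```

Counting distinct users rather than raw flags matters here: it stops one person from forcing a post into review by flagging it repeatedly, though, as discussed next, it does not stop a coordinated group from doing so.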

Meanwhile, we need to recognise that these users’ choices and actions are not unconditionally right or fair judgments, and that the system requires further development. The flagging system on social media is sometimes used to benefit a particular – usually dominant – user group. As Reddit’s #Gamergate case on that overly male-centric platform shows, flags do not always play their original role (Massanari, 2016). From the dominant group’s point of view, the flagging system does nothing to alleviate harassment of the platform’s more vulnerable users; instead, dominant user groups strategically leverage the flag system, ultimately serving their own interests and turning the bullying of minorities into a game (Crawford & Gillespie, 2014). In other words, flagging may even become a cause of the endless cycle of cyberbullying that is considered a problem for platforms.


Platform Company

“File:Shell sorting algorithm color bars.svg” by Balu Ertl is licensed under CC BY-SA 4.0.

Hence, cooperation between platform companies and the government (and, of course, users) is necessary to prevent the problems with the flagging system mentioned above. Platform companies need to enhance the accuracy and neutrality of their algorithmic screening systems rather than relying unconditionally on users’ flags. Today, algorithmic systems are also criticised as biased rather than neutral (Fatemi, 2020), and there are many cases where they produce effectively sexist results. One way to prevent this is to diversify the people who develop and supervise algorithms. Most algorithmic development teams – and most of the tech industry – have been dominated by men, especially white men, from the beginning of the Internet’s development to today. Therefore, we can expect fairer and more neutral algorithmic results by promoting diversity in employment, with government intervention at this stage (Lee et al., 2019).
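Auditing an algorithmic screening system for the kind of bias discussed above can begin with a simple check: compare the rate at which content from different user groups is removed. The sketch below uses fabricated toy data and a made-up function name purely for illustration; it is not any company’s actual audit process:

```python
# Toy bias audit: compare content-removal rates across user groups.
# A large gap between groups suggests the screening algorithm may be
# treating them unequally and deserves closer investigation.

decisions = [  # (user_group, was_removed) -- fabricated example data
    ("group_a", True), ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", False),
]


def removal_rates(records):
    """Return the fraction of posts removed for each user group."""
    totals, removed = {}, {}
    for group, was_removed in records:
        totals[group] = totals.get(group, 0) + 1
        removed[group] = removed.get(group, 0) + int(was_removed)
    return {g: removed[g] / totals[g] for g in totals}


rates = removal_rates(decisions)
print(rates)  # group_a ~0.33 vs group_b ~0.67 -- a gap worth investigating
```

A disparity like this does not by itself prove discrimination – the groups may post different content – but it is the kind of measurable signal a diverse oversight team would be expected to flag and examine.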

“Noun Project ID icon 2961224 (Wikiarabia 2022)”, coloured by Reda Kerbouche, is licensed under CC BY-SA 3.0.

In addition, platforms should make individual users responsible for the harmful content they post by introducing personal authentication and identification when users sign up or post. Anonymity within social media makes people more open to inappropriate behaviour (Rainie et al., 2017). The easy-to-create nature of social media accounts (for instance, you can easily create an account without entering accurate information) makes people freer to continuously upload and spread harmful content (Nicas, 2020). The video below shows how easy it is to create an account with fake information (CNN Business, 2020). Similarly, if one account is banned, people simply continue uploading by creating other accounts; therefore, platform companies should create a more organised identity-verification system. For example, when people sign up, platform companies could verify each person’s identity, prevent them from creating duplicate accounts, refuse account creation without identification, or restrict unverified users’ access to the platform’s services.
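One way such an identity-verification system could block duplicate accounts is to keep only a one-way fingerprint of each verified ID and reject any sign-up whose fingerprint has been seen before, so the same person cannot register twice while the raw ID number is never stored. This is a minimal sketch under stated assumptions – the salt, hash choice, and `Registry` interface are illustrative, not a real platform’s design:

```python
# Hypothetical sketch of duplicate-account prevention: store only a
# salted hash of each verified ID, so a repeat sign-up by the same
# person is detected without keeping the raw ID number itself.
import hashlib

SALT = b"example-salt"  # assumed per-deployment secret, for illustration


class Registry:
    def __init__(self):
        self._seen = set()  # fingerprints of IDs already registered

    def _fingerprint(self, id_number: str) -> str:
        return hashlib.sha256(SALT + id_number.encode()).hexdigest()

    def register(self, id_number: str) -> bool:
        """Return True for a new identity, False for a duplicate."""
        fp = self._fingerprint(id_number)
        if fp in self._seen:
            return False
        self._seen.add(fp)
        return True


r = Registry()
print(r.register("AB123456"))  # True  -- first sign-up succeeds
print(r.register("AB123456"))  # False -- duplicate is blocked
```

Hashing rather than storing the ID directly is one possible compromise between the accountability the paragraph above calls for and the privacy concerns raised in the next one; a production system would need far stronger protections than this sketch.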


Some might believe that this suppresses individuals’ freedom to express their opinions. However, given that platforms have become an essential part of daily life for most people, including children, harmful content that offends and damages others requires some level of sanction through the cooperation of governments, platform companies, and users. The violent, sensational, and reproachful content constantly shown on social media keeps entering our awareness; eventually, we are psychologically harmed by it, share it with others, and even try to emulate it (Abu-Sharida, 2021). Indeed, we can easily find various harmful crimes and consequences that imitate social media content. Furthermore, as Prieto Curiel et al.’s (2020) Twitter findings show, the continual posting of harmful media such as violent content is limited to specific accounts. This suggests that identifying and stopping such users would not suppress all users but only certain inappropriate ones, creating a better platform environment.



In conclusion, it is not just certain groups that must work to prevent harmful content on digital platforms – sensational and violent material, cyberbullying, and hate speech – from spreading and being shared quickly. Instead, governments that protect people, including Internet users; platform companies that provide the space in which users communicate; and free-speaking users themselves should all work together to prevent this spread. These three actors must monitor one another and cooperate to create a better platform culture.




Abu-Sharida, M. (2021, December). Why Harmful Posts on Social Media should be regulated. International Review of Law, 10(2), 241–258.


Carnegie Council for Ethics in International Affairs. (2020, January 31). Internet Regulation: The Responsibility of the People. Retrieved September 20, 2022, from


Citron, D. K., & Franks, M. A. (2020). The Internet As a Speech Machine and Other Myths Confounding Section 230 Speech Reform. SSRN Electronic Journal.


[CNN Business]. (2020, February 28). This 17-year-old tricked Twitter into verifying his fake candidate [Video]. YouTube. Retrieved October 5, 2022, from


Crawford, K., & Gillespie, T. (2014, July 15). What is a flag for? Social media reporting tools and the vocabulary of complaint. New Media & Society, 18(3), 410–428.


Fatemi, F. (2020, April 16). Three Platforms Where AI Bias Lives. Forbes. Retrieved September 20, 2022, from


Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. Amsterdam University Press.


June, J. D., & Fairall, J. M. (2012, May). Social Media and Suicide: A Public Health Perspective. American Journal of Public Health, 102(S2), S195–S200.


Kohrman, M. (2020, September 9). Millions of Guns For Sale. Few Questions Asked. The Trace. Retrieved September 22, 2022, from


Lee, N. T., Resnick, P., & Barton, G. (2019, May 12). Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms. In Brookings. Brookings.


Massanari, A. (2016, July 9). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346.


Mena, P. (2019, July 28). Cleaning Up Social Media: The Effect of Warning Labels on Likelihood of Sharing False News on Facebook. Policy & Internet, 12(2), 165–183.


Mueller, M. (2015, October). Hyper-transparency and social control: Social media as magnets for regulation. Telecommunications Policy, 39(9), 804–810.


Nicas, J. (2020, December 8). Why can’t the social networks stop fake accounts? The New York Times. Retrieved September 20, 2022, from


Prieto Curiel, R., Cresci, S., Muntean, C. I., & Bishop, S. R. (2020, April 2). Crime and its fear in social media. Palgrave Communications, 6(1).


Rainie, L., Anderson, J., & Albright, J. (2017, March 29). The Future of Free Speech, Trolls, Anonymity and Fake News Online. In Pew Research Center. Pew Research Center.