Regulating internet content: who should be responsible and why it’s not as straightforward as you may think

"Social Media Keyboard" by Shahid Abdullah is marked with CC01.0. Retrieved from: https://creativecommons.org/publicdomain/zero/1.0/?ref=openverse.

There is no doubt that problematic content circulates on digital platforms; the trouble is deciding what should be done about it and who should do it. Unfortunately, there is no simple or straightforward answer. Regulation of the internet and the content shared on its platforms has always been a controversial and convoluted issue, often framed as a choice between the lesser of two evils. The answer here, however, lies in a combination of all of them. The regulation of problematic content on digital platforms should be the shared responsibility of platforms, governments, and users themselves, with an overall focus on increasing transparency. Each of these parties should play a role in maintaining the safety and integrity of the internet for all users.

The active role of online platforms is key to successfully regulating problematic online content, including bullying, harassment, violent content, hate speech, and pornography. While digital platforms are most frequently the hosts of such content, they are currently not doing enough to protect their users. Most social media platforms have some sort of algorithm in place to screen for inappropriate content, but these algorithms are often slow, inconsistent, and opaque. Online platforms are able to get away with this sloppy oversight of user content because of the safe harbor provisions in Section 230 of the Communications Decency Act.
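To make "screening algorithm" a little more concrete, the minimal Python sketch below shows the kind of automated triage platforms describe: score a post, remove clear violations, and route uncertain cases to human review. The blocklist, scoring function, and thresholds here are hypothetical placeholders, not any platform's actual system.

```python
# Illustrative sketch only: a toy content-screening pipeline.
# Real systems combine trained classifiers, user reports, and human review.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str

# Hypothetical blocklist; real platforms rely on far richer signals than keywords.
BLOCKED_TERMS = {"example-slur", "example-threat"}

def toxicity_score(text: str) -> float:
    """Placeholder for a trained classifier; here, a crude keyword ratio."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in BLOCKED_TERMS)
    return hits / len(words)

def screen(post: Post, threshold: float = 0.1) -> str:
    """Return a moderation decision: 'remove', 'review', or 'allow'."""
    score = toxicity_score(post.text)
    if score >= threshold * 2:
        return "remove"   # confident violation: take down automatically
    if score >= threshold:
        return "review"   # uncertain: queue for human moderators
    return "allow"

print(screen(Post("1", "a perfectly ordinary post")))  # -> allow
```

Even this toy version makes the essay's point visible: automated screening depends on thresholds and signals that users never see, which is exactly where transparency is lacking.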

Safe harbor laws were originally put in place to protect platforms and ISPs from being held responsible for the content their users post, even when they do regulate that content. While Section 230 has done a great deal of good in allowing the internet to flourish since its enactment in 1996, it is in dire need of reform. A Harvard Business Review article by Michael D. Smith and Marshall Van Alstyne addresses this issue well; they explain that, at the time of Section 230's enactment, it seemed that platforms would still regulate content to the best of their ability to protect their economic self-interest, despite the lack of legal repercussions. While this may have been true over two decades ago, it is clear that there is no longer enough incentive for platforms to self-regulate to a meaningful extent, and the consequences of this lack of regulation have only grown more severe (Smith & Van Alstyne, 2021).

Social media platforms have played host to the planning of riots, the incitement of violence, forums encouraging mass shootings, the illegal sale of firearms, and the sexual exploitation of children, to name a few. A Fordham Law Review article by Danielle Citron and Benjamin Wittes proposes a crucial revision to Section 230: "No provider or user of an interactive computer service that takes reasonable steps to address known unlawful uses of its services that create serious harm to others shall be treated as the publisher or speaker of any information provided by another information content provider in any action arising out of the publication of content provided by that information content provider" (Citron & Wittes, 2017).

A reform of Section 230 that requires more accountability and transparency from social media platforms and ISPs would do a tremendous amount to curb harmful content on digital platforms. Such a reform would force platforms to make serious upgrades to their algorithms and other forms of content moderation, producing greater transparency by requiring them to prove that they are indeed taking "reasonable steps to address known unlawful uses of its services."
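One way "proving reasonable steps" could work in practice is a regular, auditable transparency report. The sketch below aggregates a hypothetical moderation log into such a report; the categories, fields, and metrics are illustrative assumptions, not requirements drawn from Section 230 or from any specific reform proposal.

```python
# Illustrative sketch: summarizing moderation decisions into a transparency report.
from collections import Counter

# Hypothetical log entries; a real log would come from the moderation pipeline.
moderation_log = [
    {"category": "harassment", "action": "removed", "hours_to_action": 3},
    {"category": "violent_threat", "action": "removed", "hours_to_action": 1},
    {"category": "harassment", "action": "no_violation", "hours_to_action": 12},
]

def transparency_report(log):
    """Count actions taken per category and report the median response time."""
    actions = Counter((entry["category"], entry["action"]) for entry in log)
    times = sorted(entry["hours_to_action"] for entry in log)
    median_hours = times[len(times) // 2]
    return {"actions": dict(actions), "median_hours_to_action": median_hours}

print(transparency_report(moderation_log))
```

The point of the sketch is not the code itself but the principle: if platforms had to publish figures like these, regulators and users could check whether "reasonable steps" are actually being taken rather than relying on the platforms' word.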

The government's role in content regulation on digital platforms is quite complicated because of the diverse range of policies regarding online content in different countries. As a result, the role that government can play in online content regulation varies vastly depending on where you are in the world. For example, an article about social media regulation in The Conversation notes that "the UK minister for technology and digital economy, recently gave a speech about the government's plan for digital regulation through its proposed online safety bill, promising a 'light-touch' approach," while "the Russian communications regulator Roskomnadzor blocked access to the platforms (Facebook and Twitter), citing discrimination" (Cruft & Ashton, 2022).

While governments should strive to regulate platforms as minimally as possible in order to allow for the free and open flow of information and ideas, it is crucial that they be able to step in to preserve the safety of users. In an era when violent actions and threats continue to be organized via digital platforms, it is essential that meaningful steps be taken, and legal consequences imposed, to combat this.

A prime example of this was the Christchurch Call to Action to "combat violent extremism online," which was signed by world leaders in Paris just months after the Christchurch massacre in New Zealand (Leitch, 2022). The countries that have signed the Christchurch Call have promised to follow 24 commitments "covering everything from applying appropriate laws and regulation to specific technical measures, to efforts to address the underlying drivers of terrorism… they commit to deliver transparently and in a way that respects and promotes human rights and a free, open, secure internet" (Christchurch Call, 2019).

Vigil for victims of Christchurch shooting in New Zealand

In her article "Clickbait extremism, mass shootings, and the assault on democracy – time for a rethink of social media?", Shirley Leitch draws eerie comparisons between the Christchurch massacre and the January 6 insurrection in the US during Donald Trump's presidency (Leitch, 2022). The Christchurch Call, which Trump refused to sign, was written less than two years before the January 6 insurrection. Both the Capitol riot and the Christchurch massacre were violent attacks that were planned, incited, and organized using social media platforms, and they are prime examples of how dangerous content on digital platforms can become when it goes unchecked and unregulated. All governments should be required to commit to the Christchurch Call or a similar document dedicated to preventing and punishing violent action organized on social media platforms.

January 6 Capitol riot in the US, organized via social media channels.

While it is important for platforms and governments to maintain a safe and enjoyable user experience on digital platforms, users themselves should not be exempt from accountability for the content they produce and consume. However, this can only be achieved through increased transparency across the board from digital platforms. While most digital platforms require users to agree to a set of terms and conditions and community guidelines, these often consist of unclear or contradictory rules that are inconsistently enforced. Greater clarity about what is and is not allowed on each platform, and greater transparency about how algorithms both police user content and expose users to content, would allow users to take more control of their experience (Watson & Nations, 2019).

In a Harvard Business Review article, Dipayan Ghosh explains the difference in user experience between consuming traditional news media and social media content: "viewers and readers of traditional news media must proactively choose the content they consume — whether that’s a show they choose to watch or a column they choose to subscribe to. Social media users, on the other hand, have almost no control over the content they see. Instead, platforms use complex algorithms to serve content they think will keep users scrolling, often exposing them to more radical posts that they may never have sought out on their own" (Ghosh, 2021). Because of the convoluted nature of social media algorithms, it would be extremely difficult to hold users accountable for the content they consume, or to resolve complaints users have about the content they are exposed to.

Increased transparency about how these algorithms work would allow digital platforms to shift more accountability onto users instead of bearing its full weight themselves. Centering users in their own experience by increasing transparency online would let them know why they are being exposed to certain content and how to avoid it if they choose to do so. This would make it easier for platforms to hold users responsible for the content they choose to interact with, and easier for users to avoid content they do not want themselves or their children exposed to.
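As a rough illustration of that kind of transparency, the sketch below shows a toy recommender that attaches human-readable reasons to each item it surfaces, so a user could see why something appeared in their feed. The ranking signals used here (a followed topic, popularity with similar accounts) are hypothetical placeholders, not how any real platform actually ranks content.

```python
# Illustrative sketch: a recommender that explains *why* each item was shown.
from typing import NamedTuple

class Recommendation(NamedTuple):
    item_id: str
    score: float
    reasons: list  # human-readable explanations surfaced to the user

def recommend(candidate_items, user_topics):
    recs = []
    for item in candidate_items:
        reasons, score = [], 0.0
        if item["topic"] in user_topics:
            score += 1.0
            reasons.append(f"You follow the topic '{item['topic']}'")
        if item.get("engagement", 0) > 0.8:
            score += 0.5
            reasons.append("Popular with accounts similar to yours")
        if score > 0:
            recs.append(Recommendation(item["id"], score, reasons))
    return sorted(recs, key=lambda r: r.score, reverse=True)

items = [{"id": "a", "topic": "cycling", "engagement": 0.9},
         {"id": "b", "topic": "politics", "engagement": 0.95}]
for rec in recommend(items, user_topics={"cycling"}):
    print(rec.item_id, rec.reasons)
```

Exposing the "reasons" field, rather than just the ranked feed, is the design choice that would let users understand and, if they wish, avoid the content being pushed to them.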

Regulation of content on digital platforms is an extremely complicated issue, and it will almost certainly become even more complex as the internet continues to develop and expand into new areas. While there is no single solution to the problem of content regulation, the best choice is a multi-stakeholder approach, so as not to give any one entity too much control. A combination of platforms, governments, and individual users working to increase transparency online would allow for the safest and most enjoyable version of the internet for all parties.

References

Christchurch Call. (2019, May 15). Christchurch Call Text. Retrieved October 6, 2022, from https://www.christchurchcall.com/about/christchurch-call-text/

Citron, D. K., & Wittes, B. (2017). The Internet Will Not Break: Denying Bad Samaritans § 230 Immunity. Fordham Law Review. Retrieved October 6, 2022, from https://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=5435&context=flr

Cruft, R., & Ashton, N. A. (2022, September 13). Social Media Regulation: Why we must ensure it is democratic and inclusive. The Conversation. Retrieved October 6, 2022, from https://theconversation.com/social-media-regulation-why-we-must-ensure-it-is-democratic-and-inclusive-179819

Department of Justice, U. S. (2021, January 20). Department of Justice’s review of Section 230 of the communications decency act of 1996. The United States Department of Justice. Retrieved October 6, 2022, from https://www.justice.gov/archives/ag/department-justice-s-review-section-230-communications-decency-act-1996

Ghosh, D. (2021, December 13). Are We Entering a New Era of Social Media Regulation? Harvard Business Review. Retrieved October 6, 2022, from https://hbr.org/2021/01/are-we-entering-a-new-era-of-social-media-regulation

Leitch, S. (2022, September 26). Clickbait extremism, mass shootings, and the assault on democracy – time for a rethink of social media? The Conversation. Retrieved October 6, 2022, from https://theconversation.com/clickbait-extremism-mass-shootings-and-the-assault-on-democracy-time-for-a-rethink-of-social-media-187176

Smith, M. D., & Van Alstyne, M. (2021, August 16). It’s time to update Section 230. Harvard Business Review. Retrieved October 6, 2022, from https://hbr.org/2021/08/its-time-to-update-section-230

Watson, H. J., & Nations, C. (2019). Addressing the growing need for algorithmic transparency. Communications of the Association for Information Systems, 488–510. https://doi.org/10.17705/1cais.04526