Digital Platforms and Their Responsibility in Modern Internet Usage

Cyber Bullies

Section 230 of the Communications Decency Act of 1996 established that digital platforms are distributors, not publishers, of content (Goodman & Whittington, 2019). Thus, providers of computer services cannot be held accountable for content posted by their users, granting immunity to ISPs. The reasoning was that government intervention would limit the sharing economy of the Internet, and that most media moderation policies were outdated, applicable only to physical news. However, the recent boom of social media in everyday life has put more pressure on companies to move away from self-regulation and moderate harmful content such as hate speech, cyberbullying, and pornography.

“Online marketing secrets” by Internet marketing secrets is licensed under CC BY-NC 2.0.

This increased sense of responsibility among media companies can be attributed to the growth of real-time user-generated content and polarizing websites such as Reddit and 4chan that allow users to anonymously post racist or sexist content to discussion boards. The development of the Internet has led to emotional echo chambers that magnify dangerous attitudes such as misogyny or racism. These problems are perpetuated by digital platforms, and it has now become their responsibility to moderate the content posted to their sites in order to keep the Internet safe, especially for younger users.

Current Problems

A quick scan of any social media platform’s terms and policies will show a clear stance against online abuse and bullying. Twitter’s help center takes care to distinguish a difference of opinion from online abuse. Its first recommended step for staying safe online is to end communication with accounts a user disagrees with by unfollowing or blocking them (Twitter, 2022). This is a clear example of user self-regulation, one of the implicit responsibilities that comes with being an Internet user. If the situation escalates toward online abuse and physical danger, Twitter urges users to report the behavior to the platform or to local law enforcement. Cyberbullying has been shown to correlate with suicidal thoughts, especially among adolescents (Hinduja & Patchin, 2010). One study found that 19% of Internet users aged 10–17 have participated in cyberbullying as either victim or offender (Ybarra & Mitchell, 2004). Social media sites must take accountability for the spread of hateful messages that actively endanger young people’s mental health.

“Cyber Bullys” by Adam Clarke is licensed under CC BY-NC-SA 2.0.

Gamergate and Sexism in Emotional Echo Chambers

Another growing issue is how easily the Internet allows users to aggregate and spread harmful material such as anti-feminist rhetoric or even pornography. The spread of Jennifer Lawrence’s stolen nude photos across 4chan and Reddit was facilitated by the anonymity those platforms provide, as well as by Reddit’s upvote system, which essentially rewarded users for sharing provocative content. The spread of “The Fappening” and Gamergate can be attributed to Reddit’s ability to promote deep engagement with niche interests (Massanari, 2017), and to the way this confirmation bias amplifies a white, male masculinity. This highly toxic, hypermasculine culture has normalized ethically dubious actions by Internet users such as doxxing, spreading private information and pictures, and harassing other users for their gender or sexual identity.

“reddit sticker – 3” by Eva Blue is licensed under CC BY 2.0.

Example: Discord

Discord, a digital communication platform, has become a central pillar of cyberspace. Since launching in 2015 with just 10 users on its first day, Discord has grown to over 350 million registered users (Curry, 2022). Discord’s server system was particularly innovative because it allowed users to categorize themselves, creating emotional echo chambers. User self-selection has become increasingly important as people seek like-minded communities in digital spaces. Discord’s focus on environments where users have complete control over their interactions has created new problems of online abuse. The platform’s anonymity is one of its biggest strengths, but it has also attracted alt-right and neo-Nazi sympathizers. In 2017, the Unite the Right rally, a white supremacist gathering that resulted in one death and 34 injuries (Brown, 2020), was found to have been planned on Discord. The company faced severe backlash for failing to regulate its servers and responded by shutting down several alt-right groups and the AltRight server (Brown, 2020). However, this response also raised questions about how Discord should regulate hate speech while still maintaining user privacy.

Prior to the Unite the Right rally, Discord had no official safety team. The platform advertised itself as a hands-off digital platform, expecting users to keep radical ideologies within their own private circles. Sean Li, head of the new safety team, describes Discord as “a country with 100 million inhabitants, living in different states and towns… We make the rules on what is allowed to help shape the society at large” (Pierce, 2020). The dilemma of balancing free speech with safety was addressed through moderators and bots, and as of May 2020 the company dedicates 15% of its labor force to monitoring (Hatmaker, 2021). Complications remain, including a massive network of pornography servers ranging from furry content to revenge pornography and material involving minors. Although the platform explicitly states that it does not allow this type of harmful content, the spread of pornography is exacerbated by the privacy protections and decentralized system that restrict Discord from reading personal messages.

Discord has recognized its role in facilitating safe online interactions. Its 2021 acquisition of Sentropy, an AI-powered software designed to detect online harassment (Hatmaker, 2021), demonstrates the company’s commitment to evolving with the Internet.

Solutions for Stopping the Spread

Digital platforms must implement new moderation policies instead of relying on user self-regulation. This can take the form of content moderation bots such as Sentropy, or of expanded moderation teams. Facebook CEO Mark Zuckerberg stated that Facebook would spend over $3.7 billion on platform safety. However, compared to Facebook’s annual revenue of over $70 billion (Edelman, 2020), it is clear that tech companies have not devoted enough of their resources to Internet safety.
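To make the difference between user self-regulation and automated moderation concrete, the sketch below shows a minimal rule-based message filter. This is purely illustrative: production tools such as Sentropy rely on trained machine-learning classifiers rather than fixed word lists, and the flagged terms and escalation rule here are invented for the example.

```python
import re

# Hypothetical flagged terms for demonstration only; a real system
# would use a trained classifier, not a static blocklist.
BLOCKLIST = {"idiot", "loser"}

def moderate(message: str) -> str:
    """Return a moderation action: 'allow', 'flag', or 'remove'."""
    words = set(re.findall(r"[a-z']+", message.lower()))
    hits = words & BLOCKLIST
    if not hits:
        return "allow"
    # Toy escalation rule: treat abuse aimed at a person ("you ...")
    # as more severe than generic negative language.
    if "you" in words:
        return "remove"
    return "flag"
```

Even this toy version illustrates why moderation at platform scale is hard: a keyword match cannot distinguish banter from harassment, which is why companies are moving toward context-aware AI systems and larger human review teams.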

Next, victims should be able to bring legal action against their online harassers. Under tort law, harassers can be sued for publicizing private, “non-newsworthy” information without legitimate reason if it would be considered offensive to a reasonable person (Citron, 2014, p. 121). A clear example is personal nude photos, as in The Fappening. While these cases may be pursued under defamation standards, legislative change is needed to account specifically for Internet dangers. Most cases of cyberbullying are never pursued in court and are dismissed by both the public and the judicial system. There must be efforts at the federal level to criminalize online stalking and harassment to keep pace with technological change (Citron, 2014, p. 124). Providing users more protection under civil and criminal law would deter Internet users from overstepping boundaries while also giving victims a way to seek reparations.

The platformisation of the Internet began in the late 2000s with the boom in mobile media. However, technological advances and increased access to cyberspace have created new dangers online, such as threats to personal privacy, sexual harassment, and cyberbullying. It is no longer feasible for digital platforms to place regulation responsibilities on their users’ shoulders and expect immunity from content posted on their sites. While government legislation is important to securing the safety of citizens online, the distribution of this content must also be addressed through internal change at tech companies.



About online abuse. (n.d.). Twitter Help Center.

Brown, A. (2020). Discord Was Once The Alt-Right’s Favorite Chat App. Now It’s Gone Mainstream And Scored A New $3.5 Billion Valuation. Forbes. Retrieved October 9, 2022.

Citron, D. K. (2014). Hate Crimes in Cyberspace. Harvard University Press.

Curry, D. (2022). Discord Revenue and Usage Statistics. Business of Apps.

Dewey, C. (2014, October 14). The only guide to Gamergate you will ever need to read. The Washington Post.

Edelman, G. (2020). Stop Saying Facebook Is “Too Big to Moderate.” Wired. Retrieved October 9, 2022.

Goodman, E. P., & Whittington, R. (2019). Section 230 of the Communications Decency Act and the Future of Online Speech. SSRN Electronic Journal.

Hatmaker, T. (2021, July 13). Discord buys Sentropy, which makes AI software that fights online harassment. TechCrunch.

Hinduja, S., & Patchin, J. W. (2010). Bullying, Cyberbullying, and Suicide. Archives of Suicide Research, 14(3), 206–221.

Katz, A. (2017). Scenes From the Deadly Unrest in Charlottesville. TIME.

Massanari, A. (2017). Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346.

Pierce, D. (2020, October 29). How Discord (somewhat accidentally) invented the future of the internet. Protocol.

Safety Principles and Policies. (n.d.). Discord.

Ybarra, M. L., & Mitchell, K. J. (2004). Online aggressor/targets, aggressors and targets: A comparison of associated youth characteristics. Journal of Child Psychology and Psychiatry, 45, 1308–1316.

4chan: The “shock post” site that hosted the private Jennifer Lawrence photos. (n.d.). Washington Post. Retrieved October 9, 2022.