Bullying, harassment, violent content, hate, porn and other problematic content circulates on digital platforms. Who should be responsible for stopping the spread of this content and how?

"Tragedy of Cyberbullying" by Bernie Goldbach is licensed under CC BY-NC-ND 2.0.

What is problematic content circulation on digital platforms?

Everyone encourages the act of sharing, yet in the digitized world people frequently express their thoughts on subjects they are unfamiliar with. They may be insensitive in more than one way, and online aggression of this kind is often tolerated because there are rarely any lasting repercussions (Dieterle & Edwards, 2019).

In addition, because of the anonymous and viral nature of the internet, even one person posting objectionable content that is ableist, homophobic, racist, or otherwise hateful can start a domino effect across online communities. Social networks like Reddit and Twitter make this easy, since straightforward mechanisms such as resharing and retweeting let information spread with almost no friction. Few people investigate the origins of a particular news item before passing it on, which adds to the chaos (Dieterle & Edwards, 2019).

According to one study, almost three billion people communicate via social media sites. If nearly half of humanity is using digital platforms, disagreements and crimes are bound to follow. Hacking, trolling, online stalking, and other forms of harassment are among the many cybercrimes on the rise.

Because of the widespread use of mobile devices and social networks by children and teenagers, the proportion of people who have encountered cyber-harassment in their lifetime rose from 18% in 2007 to 36% in 2019 (Abarna et al., 2022).

Furthermore, although the internet’s drawbacks affect everyone, research reveals that women are more likely than men to become victims of cyberstalking, sexual harassment, and derogatory remarks.

Take an example: with the release of the female-led Ghostbusters reboot, a coordinated attempt was made to harass actress and comedian Leslie Jones. Millions of viewers, particularly men, judged the trailer through an extremely racist and misogynistic lens. Instead of being appreciated for the filmmakers’ efforts, the female-led film attracted widespread condemnation.


Ghostbusters trailer:

The Guardian reported that, as of 2016, the Ghostbusters trailer was the most disliked YouTube video in the platform’s history (Woolf, 2016).

Harassers created phony Twitter accounts for Jones, assumed her identity, and tweeted false information, using hashtags like #freemilo to coordinate the distribution of hate-filled messages. Jones briefly left Twitter to escape the hostility. In a case that many have described as “revenge porn”, Jones’s website was hacked following months of harassment, and nude pictures of her were posted (Dieterle & Edwards, 2019).

Leslie Jones took to Twitter to express her disappointment:

This makes us realize that violence is not only physical; it can also be mental, systemic, or symbolic (Henry & Powell, 2015). However, relatively few academics have investigated how modern technologies enable sexual aggression and harassment toward adult women. Some academics argue that virtual harms cannot be compared to those occurring in the “real world” and are quick to dismiss the trauma and anguish they create. On a brighter note, current Australian and American legislation prohibits the creation, possession, and dissemination of “sexual exploitation material” depicting a child or child-like person, and recognizes it as the equivalent of sexual exploitation of a real child (Henry & Powell, 2015).

Should digital platforms be censored?

Social media platforms sprang from the beautiful anarchy of the web. These new enterprises recognized that an open web gives users a sense of freedom: freedom to express themselves, to talk to anyone they like, and to shape communities. Even though the advantages of this are evident and occasionally seem utopian, the risks are brutally obvious and becoming more so every day: pornographic, obscene, violent, illegal, abusive, and hateful content. The idea of a truly “open” platform is compelling because it resonates with deep and profound ideals of democracy and community, but it is merely a fantasy. There is no platform that does not, in some way, enforce regulations; to do otherwise would simply be impossible (Gillespie, 2018). Because of this antagonizing quality of social media, platforms must intervene and self-regulate, and protecting users from hostile and offensive content must be a key concern.

“The Internet” by tipl is licensed under CC BY-NC 2.0.

Regulation is hard, and that cannot be ignored

The main problem with moderation is that these companies often don’t know on what basis they should set norms, what exactly is right, and where they should draw the line of public expression. “The challenge for platforms, then, is exactly when, how, and why to intervene” (Gillespie, 2018, p. 5). These questions are valid, as regulation by the platforms opposes the very principle and promise upon which they were founded: free expression. Furthermore, implementing regulations takes time, since it depends on the meticulous judgments of many people, and it turns out to be highly expensive (Gillespie, 2018). A substantial part of what most platforms do is handling complaints, reviewing problematic material or behavior, enforcing punishments, and addressing appeals.

Social media platforms are already playing a role in how the metaverse is being defined and developed. “Social Media Mix 3D Icons” by Blogtrepreneur is licensed under CC BY 2.0.

Should social media self-regulate?

Social media is not the first industry in history to confront control over the appropriateness of the information it spreads; in many respects, it mirrors the challenges once faced by film, television, and advertising. To keep regulators at bay, the film and video game industries adopted self-imposed and/or self-monitored rating systems, which are still in use today (Cusumano et al., 2021). It is therefore only appropriate to take inspiration from history: social media companies should start self-regulating before the government intervenes.

Section 230 stipulates that as long as a company is not creating or curating content, it is safe from liability for what users post. It also protects platforms that intervene and/or remove offensive content in “good faith”. This exception to the rule is extensively contested, since we cannot know whether moderation really is in good faith or whether prejudice is at work (Cusumano et al., 2021). Before the government repeals all Section 230 safeguards, platform businesses must implement their own limitations on behavior and usage. Technology that uses big data, artificial intelligence, and machine learning, along with some human editing, will increasingly give digital platforms the capacity to filter what happens on their platforms (Cusumano et al., 2021).

Twitter’s community guidelines, for example, first emphasize its vast range of information and then its evenhanded administration of it: “We believe that everyone should have the power to create and share ideas and information instantly, without barriers. In order to protect the experience and safety of people who use Twitter, there are some limitations on the type of content and behavior that we allow.” (Gillespie, 2018, p. 7)

Proactive self-regulation was frequently more successful when coalitions of enterprises in the same industry collaborated. Moreover, enterprises or industry associations become serious about self-regulation only when they face a genuine threat of government regulation, even if it means sacrificing short-term sales and earnings. This tendency was observed in tobacco and cigarette advertising, airline bookings, social media advertisements for terrorist recruiting, and pornographic content. In 2021, that threat should be plain and obvious to digital platforms (Cusumano et al., 2021).

To conclude:

In a world where everything is being questioned and society is becoming increasingly individualistic, a little compassion from both individuals and businesses is essential. It would be too simple to argue that platforms are either unaware of these difficulties or too self-interested to handle them effectively. But it is also too naive to let them remain silent in the face of horrors (Gillespie, 2018). To summarize, history implies that modern digital platforms should act boldly and proactively today, rather than waiting for governments to impose limitations (Cusumano et al., 2021).


References

Abarna, S., Sheeba, J. I., Jayasrilakshmi, S., & Devaneyan, S. P. (2022). Identification of cyber harassment and intention of target users on social media platforms. Engineering Applications of Artificial Intelligence, 115, 105283.

Cusumano, M. A., Gawer, A., & Yoffie, D. B. (2021). Social media companies should self-regulate. Now. Harvard Business Review, 15.

Dieterle, B., & Edwards, D. (2019). Confronting digital aggression with an ethics of circulation. In Digital Ethics (pp. 227-243). Routledge.

Feig, P. (2016, March 4). GHOSTBUSTERS – Official Trailer (HD) [Video]. Sony Pictures Entertainment. https://www.youtube.com/watch?v=w3ugHP-yZXw&ab_channel=SonyPicturesEntertainment.

Gillespie, T. (2018). All platforms moderate. In Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media. (pp. 1-24). Yale University Press.

Henry, N., & Powell, A. (2015). Embodied harms: Gender, shame, and technology-facilitated sexual violence. Violence Against Women, 21(6), 758-779. https://doi.org/10.1177/1077801215576581

Jones, L. [@Lesdoggg]. (2016, July 19). I’m more human and real than you fucking think. I work my ass off. I’m not different that any of you who has a dream to do what they love. [Tweet]. Twitter. https://twitter.com/Lesdoggg/status/755207297471741955?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E755207297471741955%7Ctwgr%5E87d62cdf8d39fe438892070c4d50b48d1f880c8b%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Fwww.theguardian.com%2Fculture%2F2016%2Fjul%2F18%2Fleslie-jones-racist-tweets-ghostbusters

Woolf, N. (2016, July 19). Leslie Jones bombarded with racist tweets after Ghostbusters opens. The Guardian. https://www.theguardian.com/culture/2016/jul/18/leslie-jones-racist-tweets-ghostbusters