Online harm and content moderation

Group: Selena Nguyen, Stephanie

In the age of digitisation, the internet has become a double-edged sword. On one hand, it serves as a great equaliser, offering unprecedented opportunities for communication, commerce, and information sharing. On the other hand, it is a breeding ground for various forms of online harm, including misinformation, cyberbullying, and exploitation. Content moderation has emerged as a focal point in this ongoing debate, often bringing tech giants like Facebook, Twitter, and YouTube under scrutiny. How effective are these platforms in striking a balance between freedom of speech and the prevention of online harm? This blog post aims to delve into this question by examining three real-world examples that illustrate the complexities and challenges in the realm of online content moderation. 

Addressing the Challenge of Suicide and Self-Harm Content on TikTok

“A girl using her cell phone on social networks without being aware of the dangers they can have” by “Javidestock” is marked with CC0 1.0.

Content moderation is a complex issue that defies one-size-fits-all solutions. On TikTok, there have been instances of videos promoting dangerous behaviour, including live-streamed acts of self-harm, which have raised serious concerns about suicide and self-harm content on the platform. After their son took his own life, the parents of an American teenager discovered numerous videos about death, despair, and sadness in his TikTok feed (Carville, 2023). In response to these worrying developments, TikTok and other social media sites have introduced stricter content moderation systems designed to quickly identify and remove such distressing material.

The Spread of COVID-19 Misinformation on Facebook

“As the coronavirus spread, an experiment showed Facebook was struggling to keep up with fake news” by “ABC News Breakfast” is marked with CC0 1.0.

During the first year of the COVID-19 pandemic, false and misleading posts, including some from anti-vaccination groups, spread widely on Facebook and gained significant traction before the platform could act (ABC News Breakfast, 2020). In response, social media companies launched fact-checking programmes and began actively flagging or removing posts containing false or misleading information. However, these efforts also sparked significant debate about the delicate balance between addressing misinformation and protecting free speech, raising concerns about censorship in the digital age.

Twitter’s Efforts in Combating Misinformation

“Introducing our crisis misinformation policy” by “Yoel Roth” is marked with CC0 1.0.

Twitter has been at the forefront of tackling misinformation, but its strategies have not been without controversy. It has employed methods such as flagging tweets, adding fact-check labels, and even removing posts that spread misinformation. While these efforts aim to create a more informed public sphere, they have also drawn criticism from those who argue that such tactics can amount to censorship, infringing on the right to free speech. This example epitomises the intricacies of content moderation: it is a constant juggling act between mitigating harm and preserving freedom.

The struggle for effective content moderation is fraught with dilemmas. It involves a multi-faceted interplay between technological capabilities, ethical considerations, and social responsibilities. While platforms like TikTok, Facebook, and Twitter are making strides to balance freedom and safety, the real-world examples examined here reveal there is no easy solution. Continued dialogue and innovative approaches are essential for navigating the complicated terrain of online harm and content moderation. By studying real-world attempts and failures, we can move toward creating a safer, more inclusive online environment for everyone. 

References

Carville, O. (2023, April 20). TikTok’s Algorithm Keeps Pushing Suicide to Vulnerable Kids. Bloomberg. https://www.bloomberg.com/news/features/2023-04-20/tiktok-effects-on-mental-health-in-focus-after-teen-suicide

ABC News Breakfast. (2020, April 24). As the coronavirus spread, an experiment showed Facebook was struggling to keep up with fake news. ABC News. https://www.abc.net.au/news/2020-04-24/facebook-approves-ads-with-covid-19-misinformation/12172168