The Role of Content Moderation in Protecting Social Media Users

The Internet and Social Media Platforms

The rise of the internet has changed the way people live. Over the past two decades, the internet has become the world’s communication tool (Levy & Strombeck, 2002). It enables people from around the world to communicate, share information, and access educational tools. The development of the internet has led to the emergence of social media platforms. Social media platforms include a variety of technologies that enable users to share ideas and information (Dollarhide, 2023). Some examples of social media platforms are Facebook, Instagram, X (formerly Twitter), YouTube, and TikTok. Social media is used for several different purposes, such as social networking, entertainment, business, and as a source of information; some people also use it for political and economic purposes (Dollarhide, 2023). This openness means social media is accessible to virtually everyone.

“Automotive Social Media Marketing” by socialautomotive is licensed under CC BY 2.0.

Social media is a sharing platform that many people can access easily. Individuals use social media to communicate with family and friends, join online communities, and carry out other activities as part of daily life. The average time spent on social media is 2.3 hours per day (Zsila & Reyes, 2023). As individuals are exposed to social media as part of their daily routines, there is a possibility that the content they encounter can affect their wellbeing. Research shows that the quality of content has a greater effect on users’ mental health than the quantity (e.g., time spent) of social media use (Zsila & Reyes, 2023). Although the internet can educate, entertain, and connect individuals, including young people, there is also a potential risk of online abuse.

Online Harms and Content Moderation

Online harms, or online abuse, refer to content shared on social media that can damage an individual’s social, emotional, psychological, financial, and physical safety. Some examples of online harms are cyberbullying, trolling, defamatory comments, impersonation accounts, sexual extortion, emotional abuse, and grooming. Individuals affected by these actions can experience a decline in their wellbeing, and may develop anxiety, eating disorders, self-harm behaviours, and even suicidal thoughts (NSPCC, n.d.). Therefore, content moderation is needed. Content moderation is the process, usually carried out by the social media platform, of monitoring and regulating the posts that are shared on the internet. However, does content moderation really protect individuals? Are there other steps individuals can take to prevent and avoid online harms? This essay will explore online harms and content moderation further.

Online Harms

Cyberbullying

Social media platforms have become the most common tool for cyberbullying. Cyberbullying can be described as intentional and antagonistic actions by an individual or group, directed at a victim repeatedly through online platforms (Giumetti & Kowalski, 2022). Approximately 44% of young people in Australia have encountered a negative online experience within the past six months, with 15% having been subjected to online threats and harassment (eSafety Commissioner, n.d.-a). Cyberbullying can spread in many different forms, such as messages, emails, and videos (Giumetti & Kowalski, 2022). As mentioned above, content shared on a platform can affect an individual’s mental health, especially when it is negative and directed at them.

Some cases of cyberbullying have ended in tragedy. One example is the case of a 15-year-old boy who died by suicide as a result of cyberbullying. Nate Bronstein was a transfer student at one of the most prestigious schools in Chicago. He was a bright student, and his death came as a shock, especially to his parents. It emerged that his classmates had been bullying him via text messages and Snapchat, which led him to take his own life. Allegedly, the school knew about the bullying, but the parents were not informed, which leads the parents to believe that the school, especially such a prestigious one, could have acted on the issue with more responsibility (CBS Interactive, n.d.).

For more details, see this video:

Source: https://www.youtube.com/watch?v=Z9DDeAUHr7g

The quote below is from another teenager, a 16-year-old victim of cyberbullying, describing how she felt during that period.

“It was pretty terrible. I ended up with anxiety, depression. I had panic attacks all the time.”

– Caitlyn, 16 years old.

Victims will rarely say explicitly that they are being bullied; they often feel helpless and alone. Parents and those around them have a responsibility to make them feel seen and understood, and to remind them repeatedly that help is available. This is not easy, because, as the previous case shows, the problem is often not directly visible to parents: the bullying happens online, and the victim can hide it.

Online Impersonation

Another example of online harms is impersonation, which can also lead to other harms, including cyberbullying. Impersonation is a form of identity theft and a serious issue, as it can affect an individual’s privacy and safety (Bitdefender, n.d.). On social media, individuals are often asked to provide personal information to log into their accounts; once hackers breach this data, they have the power to use it. The perpetrators can create a fake account impersonating a trusted person, or even the individual themselves. The fake account can then be used to scam, cyberbully, and spread fake news. The culprit could also lure people into meeting in person to commit crimes. This can lead to financial loss, loss of trust, privacy breaches, mental health issues, and even death. There are many more types of online harms that need to be addressed.

So, What Do Social Media Platforms Do?

In this respect, content moderators play a huge part: content on social media is reviewed and monitored to make sure it meets certain standards. Most social media platforms also have their own community guidelines that help address these issues. For example, X (formerly Twitter) has safety guidelines requiring users not to share violent speech, abuse or harassment, glorification of perpetrators of violent attacks, suicide-related content, or other sensitive media on the platform (Twitter, n.d.). Some platforms also provide report and block functions that users can use to flag harmful and inappropriate content. Certain sensitive content may also be blocked immediately by the platform, or made accessible only with the user’s permission. These community guidelines can help address cyberbullying by enabling users to block and report hateful comments or posts. However, they help only to a certain extent; action from individuals themselves is also important: take a break from social media if needed, and seek support.
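The report-and-block workflow described above can be sketched as a simple pipeline: a post is screened against a list of disallowed terms when it is created, and user reports escalate a post for human review once they pass a threshold. Everything below (term list, threshold, status names) is an illustrative assumption, not any real platform’s implementation.

```python
# Illustrative sketch of a report-and-review moderation pipeline.
# The blocked terms, threshold, and status labels are hypothetical,
# not taken from any real platform's rules.

BLOCKED_TERMS = {"threat", "slur"}   # placeholder list of disallowed terms
REPORT_THRESHOLD = 3                 # reports needed before human review

class Post:
    def __init__(self, text):
        self.text = text
        self.reports = 0
        self.status = "visible"

def screen(post):
    """Automatic check at posting time: hide posts containing blocked terms."""
    words = post.text.lower().split()
    if any(term in words for term in BLOCKED_TERMS):
        post.status = "hidden_pending_review"
    return post.status

def report(post):
    """User report: escalate to human review once enough reports accumulate."""
    post.reports += 1
    if post.reports >= REPORT_THRESHOLD and post.status == "visible":
        post.status = "queued_for_review"
    return post.status

p = Post("this is a threat")
print(screen(p))   # hidden_pending_review

q = Post("a normal holiday photo caption")
screen(q)
for _ in range(3):
    report(q)
print(q.status)    # queued_for_review
```

Real systems combine such automated screening with large human review teams and machine-learned classifiers; the point here is only the two complementary routes a post can take into review.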

“social media” by Sean MacEntee is licensed under CC BY 2.0.

Addressing impersonation is slightly more complex. Some platforms, such as dating apps, offer a photo verification feature: if users choose to verify their account, they cannot use fake images. According to Anand (n.d.), Facebook uses AI to detect fake accounts. The system combines machine-learning algorithms with hand-coded rules to evaluate whether an account is genuine. There are also many fake-account detectors available to social media users. However, since these are human-made systems, errors can still occur.

In the end,

Individuals still need to be careful when using social media. Content moderation and community guidelines from the platforms can protect users, but only to a certain extent. Individual action should also be taken: be aware of online harms, be careful, and think critically about shared information. Do not trust everything on social media too easily, and remember that mental health support is always available for those who need it.

References

Anand, A. (n.d.). Fake account detection using AI in facebook. Analytics Steps. https://www.analyticssteps.com/blogs/fake-account-detection-using-ai-facebook

CBS Interactive. (n.d.). A 15-year-old boy died by suicide after relentless cyberbullying, and his parents say the Latin school could have done more to stop it. CBS News. https://www.cbsnews.com/chicago/news/15-year-old-boy-cyberbullying-suicide-latin-school-chicago-lawsuit/

eSafety Commissioner. (n.d.-a). Cyberbullying. https://www.esafety.gov.au/key-topics/cyberbullying#:~:text=Sadly%2C%20cyberbullying%20happens%20a%20lot,received%20threats%20or%20abuse%20online.

eSafety Commissioner. (n.d.-b). What is online abuse? https://www.esafety.gov.au/women/women-in-the-spotlight/online-abuse

Giumetti, G. W., & Kowalski, R. M. (2022). Cyberbullying via social media and well-being. Current Opinion in Psychology, 45, 101314. https://doi.org/10.1016/j.copsyc.2022.101314

Levy, J. A., & Strombeck, R. (2002). Health benefits and risks of the Internet. Journal of Medical Systems, 26(6), 495–510. https://doi.org/10.1023/A:1020288508362

NSPCC. (n.d.). Online abuse. https://www.nspcc.org.uk/what-is-child-abuse/types-of-abuse/online-abuse/

Twitter. (n.d.). The X rules: Safety, privacy, authenticity, and more. Twitter. https://help.twitter.com/en/rules-and-policies/x-rules

Zsila, Á., & Reyes, M. E. S. (2023). Pros & cons: Impacts of social media on mental health. BMC Psychology, 11(1), 201. https://doi.org/10.1186/s40359-023-01243-x