Who owns the responsibility for stopping the distribution of offending content?

"Automotive Social Media Marketing" by socialautomotive is licensed under CC BY 2.0.

As the Internet has grown, more people have come online, technology has improved, and online life has become more engaging. Social media and personalized recommendation platforms have replaced traditional portals and search engines as the primary avenues for communication, and socialized information transmission is now the norm. But this shift has drawbacks: violent and other undesirable content is inevitably scattered across digital channels. This essay argues that platforms, governments, and individual users should all work together to stop the dissemination of such content.

“Social Media apps” by Jason A. Howie is licensed under CC BY 2.0.

First, platforms themselves must control offensive content. We tend to support platforms as unrestricted channels of expression, right up until we are troubled by the content that circulates freely through them. In return for the right to gather, aggregate, and analyze users’ data and to republish their content, social networks and social media firms give users “free” access to their networks, content management and archiving tools, and entertainment content and services. These rights are detailed in the lengthy Terms of Service (ToS) that we must accept when we register, but because the ToS are so long and complicated, few people actually read them. As a result, we tend to place too much blame on the platforms. Even so, social media content delivery is driven by advertising-based business interests that maximize traffic, which in turn favors emotional content that keeps more people more engaged (Roberts, 2019). Given the rewards platforms reap from their users, I believe they should foster a positive social climate rather than allow unlawful content to circulate widely.

 

Secondly, the government is also accountable. As citizens, our most fundamental duty is to uphold the laws of the nation; in turn, the state is responsible for ensuring that residents can lead normal lives. When an incident arises from illegal content on the internet, people therefore look to the government to handle it. For example, in April 2021, 16-year-old Elle Trowbridge from County Tyrone took her own life after being targeted by cyberbullies, and her family asked the police to become involved once they learned of the cyber-violence (“The shock and pain of cyber-bullying”, 2022). People also want the government to press platforms to take down harmful content, using laws ostensibly in place to fight cybercrime, safeguard children, or outlaw terrorist materials. The limits of platform self-governance became visible when Instagram’s owner Meta said it would allow users in some nations to call for violence against Russian President Vladimir Putin and Russian servicemen, so that violent posts which would ordinarily break the rules, such as “death to the Russian invaders,” could temporarily be permitted. The Kremlin, holding that Instagram’s “false news” about the invasion of Ukraine had stoked public sentiment, banned access to the platform in Russia (“War in Ukraine: Instagram banned in Russia over ‘calls to violence’”, 2022). It therefore seems to me that the government has a full legal obligation to control the spread of such offensive information, which is all the more reason for it not to abdicate this duty.

“How false news can spread – Noah Tavlin” by TED-Ed. All rights reserved. Retrieved from: https://www.youtube.com/watch?v=cSKGa_7XJkg&t=1s

Finally, individual users are also obliged to prevent the spread of offensive material. Most people want the community to self-police, or better still, to refrain from posting offensive material in the first place. Whether they like it or not, platforms have found that they must serve as norm setters, legal interpreters, dispute arbitrators, and enforcers of whatever rules they decide to impose (Gillespie, 2018). Nevertheless, individuals should exercise their own judgment about information, since as citizens they are already subject to legal accountability.

“Kim Phuc – The Napalm Girl In Vietnam” by e-strategyblog.com is licensed under CC BY 2.0.

In “Napalm Girl,” a 1972 photograph by Nick Ut, children flee a napalm attack along a desolate road, pursued by Vietnamese soldiers in the background. The most striking figure is a naked girl with napalm burns on her arms, back, and neck. It is perhaps the most enduring depiction of the horrors of the Vietnam War. Underage nudity is strictly prohibited, both culturally and legally, in almost all countries, and the children’s suffering is plainly horrific. Yet while the image flouts one set of rules, it upholds another: questions of appropriateness are set aside for moral reasons, because it records an event that should not be ignored (Gillespie, 2018). Whether by self-policing or by reporting others, then, it seems to me that citizens also have a responsibility to take part in controlling content on platforms.

 

Platforms have numerous strategies available. They can proactively identify violent content, warn about it, and filter it out before it is posted. For instance, Instagram’s “hide offensive comments” filter, enabled by default, hides comments that its AI recognizes as potentially inflammatory or intended to harass others, and users are prompted to edit posts that have been flagged as potentially objectionable. Platforms may also act against violent content and law-breaking accounts under their own community guidelines (Marwick, 2018).
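As a rough illustration of how such pre-publication filtering can work, consider the following minimal sketch. The real Instagram system uses machine-learning classifiers whose details are not public; the word list, thresholds, and function name below are purely hypothetical placeholders.

```python
# Hypothetical sketch of a pre-publication comment filter.
# Real platforms use trained ML classifiers; this keyword list
# and these thresholds are invented for illustration only.

OFFENSIVE_TERMS = {"idiot", "loser", "trash"}  # placeholder word list

def moderate_comment(text: str) -> str:
    """Return 'hide', 'warn', or 'allow' for a draft comment."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    hits = len(words & OFFENSIVE_TERMS)
    if hits >= 2:
        return "hide"   # filtered out before it is posted
    if hits == 1:
        return "warn"   # user is prompted to edit the comment
    return "allow"

print(moderate_comment("You are an idiot and a loser"))  # hide
print(moderate_comment("What an idiot"))                 # warn
print(moderate_comment("Great photo!"))                  # allow
```

The key point the sketch captures is that moderation happens before publication and can either block content outright or nudge the author to revise it, which is the behavior the essay describes.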

 

Governments, for their part, are well placed to intervene on platforms. They can impose strict rules on social media businesses and pass legislation to safeguard individuals, with lawbreakers subject to heavy fines or other penalties. As an illustration of government regulation, China maintains a dedicated force known as the cyber police, tasked with monitoring computers and social media sites for anything deemed politically sensitive; sensitive words are removed automatically (Marwick, 2018).

 

Individual users also have tools at hand. Most social media platforms offer dedicated mechanisms for reporting abusive content: Twitter encourages users to report accounts that may be breaking the rules, and on Facebook or Instagram users can anonymously report posts, comments, or stories to the moderation team at any time. Permanently deleting or blacklisting offending accounts can effectively stop the spread of improper content. Fake news is also a prominent concern; to ensure they encounter a range of viewpoints, individual users should set aside their prejudices and read news from a variety of sources (Marwick, 2018).

 

In conclusion, from BBS boards and blogs to the feedback channels of news websites and portals, and even dedicated opinion-monitoring sites, Internet monitoring offers distinct modern strengths. For platform supervision, governmental oversight, and individual user supervision alike, it is crucial to identify trends in online public opinion in real time and stop them from escalating into serious, unanticipated public-opinion events.

 

 

 

References

 

BBC News. (2022). The shock and pain of cyber-bullying. Retrieved 3 October 2022, from https://www.bbc.com/news/uk-northern-ireland-41687882.

 

BBC News. (2022). War in Ukraine: Instagram banned in Russia over ‘calls to violence’. Retrieved 3 October 2022, from https://www.bbc.com/news/technology-60709208.

 

Gillespie, T. (2018). Regulation of and by platforms. Academia.edu. Retrieved 3 October 2022, from https://www.academia.edu/26896839/Regulation_of_and_by_Platforms.

 

Burgess, J., Marwick, A. E., & Poell, T. (Eds.). (2018). The SAGE handbook of social media (pp. 255-278). SAGE Publications.

 

Gillespie, T. (2018). Chapter 1. All platforms moderate. In Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 1-23). New Haven: Yale University Press. https://doi-org.ezproxy.library.sydney.edu.au/10.12987/9780300235029-001

 

Roberts, S. (2019). Chapter 2. Understanding commercial content moderation. In Behind the screen: Content moderation in the shadows of social media (pp. 33-72). New Haven: Yale University Press. https://doi-org.ezproxy.library.sydney.edu.au/10.12987/9780300245318-003