Harmful material, including pornography, violent content, bullying, and harassment, is shared on digital networks. Who should be in charge of preventing the dissemination of this content, and how?

"Facebook Burnout" by mkhmarketing is licensed under CC BY 2.0.

This work is licensed under a Creative Commons 2.0 License.

Introduction

The internet has become increasingly important for connectivity, creating spaces in which people can participate, express themselves and interact as a community. Social media platforms put more individuals in direct contact with one another, giving them additional opportunities to communicate and engage with a greater variety of people and grouping them into networked publics (Gillespie, 2018, p. 5). Unfortunately, despite these apparent benefits, the emergence of inappropriate content such as obscenity, bullying, harassment, violence, hate and pornography counteracts these positive attributes. The following article analyses the impact of problematic content on digital platforms, examines the conduct of the stakeholders behind it in order to attribute responsibility, and suggests measures to address it.

Who should be responsible?

User

Users have an obligation to stop the spread of problematic content, as they play an essential role in creating and sharing content on social media. Meserole (2018) argues that "despite the prevalence of the problem, the opportunity to find misinformation in action is quite rare". Because users who spread misinformation also share accurate information, it can be challenging to isolate the impact of the misinformation itself. For example, when President Trump shared misinformation on Twitter, it spread at a viral rate. Yet the misinformation did not spread because it was wrong; it was promoted because of Trump's popularity and the associated political conversation. This shows that the subjective preferences of individual users influence the redistribution of content, and that even when the content itself is objectionable, some avid seekers will blindly promote it. Because content is shaped by the values of the individual, inappropriate content will inevitably be produced and distributed by some bad actors. As Gillespie (2018, p. 5) says, it would be better if users did not post objectionable things in the first place. Individuals who consume and share information on the internet should therefore take responsibility for their actions and be accountable for the content they post.


"Trump tweets" by LittleRoamingChief is licensed under CC BY 2.0.

Digital platform

Digital platforms, meanwhile, need to moderate when individuals blindly or deliberately disseminate problematic content. Gillespie (2018, p. 5) notes, "Platforms discover that they must act as norm-setters, law-interpreters, arbiters of taste, dispute-judgers, and enforcers of whatever rules they decide to impose, whether they like it or not." Platforms provide a vast space for freedom of expression, but they also have a responsibility to restrain the spread of inappropriate content and prevent more severe consequences. For example, Facebook decided to suspend the social media accounts of former US President Donald Trump after he posted inflammatory comments that contributed to the riots at the Capitol. This action was a direct response to a serious incident; while it did not stop the incident at its source, the platform did take responsibility for managing a specific account. As Gillespie (2018, p. 21) puts it, platforms face a potentially irreconcilable contradiction: they are portrayed as mere conduits, yet they are built on making decisions about what users see and say. He adds that platform moderators present social media products through a selection process, and that excluding violence, threats, pornography, or terrorism is just one of the ways platforms shape those products for their audience. However, moderation is a challenge for platforms. Too much intervention can prompt complaints from users about freedom of expression, while too little can drive away users who fear the platform will not protect them. Therefore, digital platforms must be more actively involved in maintaining healthy communities and must moderate online content judiciously.

Should social media companies be more accountable for their content?

Government

Maintaining the public health of social media platforms is also an unshirkable responsibility and obligation of the government. As Gillespie (2018, p. 11) states, corporate values and interests determine what content stays or goes, and almost all social media platforms are commercial enterprises that must find ways to make a profit, keep advertisers happy, and comply with the law. The government's role is to clarify the rules of conduct for platforms; the opaque mechanisms of media companies cannot be separated from judicial oversight. Ghosh (2021) suggests that the removal of Trump from mainstream social media platforms such as Facebook and Twitter has led to a broader social debate about whether industry self-regulation can drive effective change. He emphasises that achieving real change requires government support, and that business and government leaders cannot simply use it as a partisan opportunity to unseat a single actor or advance a single political cause. Because corporate self-awareness alone is not enough, the power of government may be needed to support the implementation of such self-regulation.

Therefore, the responsibility for stopping the spread of problematic content lies with the individual, the platform and the government. All three parties must take appropriate measures when confronted with such content.

How to prevent those contents?

Social media has evolved into an essential platform for public discourse. Nevertheless, it is rarely seen as a communication tool conducive to democracy and is used more as a vehicle for fake news, conspiracy theories and hateful ideas. At the same time, there is growing concern about the role private online technology companies play on social media and the growing phenomenon of unpopular speech being removed. The information that emerges on social media becomes a driver of polarisation. How can social communication on the internet become a positive element of democracy again?

User and Platform

For users themselves, the first step is to exercise their own judgement when faced with problematic content. It is essential to be aware of such content's negative implications and to report offending content to the relevant platform. However, Meserole (2018) argues that human bias plays an important role in the rapid spread of misinformation. A tweet will be shown to additional users via the newsfeed algorithm if enough initial viewers retweet, like, or reply to it. At that point, it also taps into the biases of those users, leading to even more interaction, and so on. At its worst, this loop can transform social media into a confirmation-bias machine, making it ideal for disseminating false information. This suggests that platforms should improve their algorithms and push mechanisms, increase their scrutiny of inappropriate content, and limit the visibility of trending misinformation. At the same time, users need to become more digitally literate and aware of how biased information is generated and disseminated so that they can participate in the resolution of misinformation.

Government/State  

The government can address this issue by creating relevant laws and regulations. Germany has led the way with the Network Enforcement Act (NetzDG), which currently regulates platforms with more than 2 million users in Germany: they must ensure that online complaints are thoroughly investigated, and all clearly illegal content must be removed within 24 hours. In 2019, for example, Facebook was fined €2 million for failing to comply with this regulation. The German law is a success story that could inspire other countries around the world. According to the Danish think tank Justitia, 25 countries had discussed or implemented similar laws inspired by Germany's NetzDG as of October 2020. However, Justitia reports that while the NetzDG contains guarantees of the rule of law and protection of freedom of expression, not all countries have adopted these guarantees to the same extent. Therefore, specific legislation must be tailored to each national context, and countries can enhance the effectiveness of their legal systems through increased communication with one another.

Conclusion

In short, it is the responsibility of users, platforms and governments to stop the spread of problematic content; a healthy online community cannot be maintained without the joint efforts of all these stakeholders.

References

Etherington, D. (2021, January 7). Mark Zuckerberg announces Trump banned from Facebook and Instagram for ‘at least the next two weeks’. TechCrunch. https://techcrunch.com/2021/01/07/mark-zuckerberg-announces-trump-banned-from-facebook-and-instagram-for-at-least-the-next-two-weeks/

Google Transparency Report. (n.d.). Removals under the Network Enforcement Law – Google Transparency Report. Retrieved 14 October 2022, from https://transparencyreport.google.com/netzdg/youtube

Martinez, N. (n.d.). The misinformation Trump tweeted in his first year as president. Media Matters for America. Retrieved 14 October 2022, from https://www.mediamatters.org/fox-friends/misinformation-trump-tweeted-his-first-year-president

Ghosh, D. (2021, January 14). Are We Entering a New Era of Social Media Regulation? Harvard Business Review. https://hbr.org/2021/01/are-we-entering-a-new-era-of-social-media-regulation

Gillespie, T. (2018). All platforms moderate. In Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 1–23). Yale University Press. https://doi.org/10.12987/9780300235029-001

Meserole, C. (2018, May 9). How misinformation spreads on social media—And what to do about it. Brookings. https://www.brookings.edu/blog/order-from-chaos/2018/05/09/how-misinformation-spreads-on-social-media-and-what-to-do-about-it/