Who should be responsible for preventing the spread of harmful content on digital platforms?

Introduction

The spread of uncontrolled harmful content on today's Internet was predictable: the promise of the early Internet lay in its indiscriminate nature as an information-sharing platform, a unique space with no geographic location. This independence of cyberspace evades government control, legislation, and jurisdiction (Roberts, 2019a), allowing bullying, harassment, violent content, hate, pornography, and other problematic content to proliferate on digital platforms. In social spaces, users do create behavioral norms and other forms of self-governance, and governments can also curb inappropriate content by regulating digital platforms. Nevertheless, the parties best placed to stop the spread of harmful content quickly, and with the greatest influence over it, are the major Internet media companies that provide these platforms to the public.

“Social Media Mix 3D Icons” by Blogtrepreneur is licensed under CC BY 2.0.

Platform

It is directly in the platform's interest for Internet media companies to prevent the spread of objectionable content: moderating and screening harmful content protects the company's and the platform's brand reputation. Major platforms such as YouTube, Facebook, and Twitter usually employ content moderators, whose job is to review content uploaded to social media sites, evaluate user-generated material, and decide whether it can be retained or must be removed (Roberts, 2019a). However, manual review is not instantaneous. It generally takes place only after content has been published, and some platforms only initiate a review after receiving complaints from users who have already viewed the harmful content (Roberts, 2019b). Because moderators must evaluate harmful content day after day, the work also causes psychological harm: one moderator hired by Facebook described the stress of the job, saying the sheer volume of violent content she reviewed daily drove her to take antidepressant medication. Manual labor is also expensive, so platforms turn to algorithms to automate content moderation when they consider human moderators unavailable or too costly (Gillespie et al., 2020). Some platforms therefore build mitigation mechanisms that intervene before harmful content is released. Twitter, for example, uses algorithms to detect mean and offensive posts and prompts users to reconsider before sending them, reducing the number of harmful tweets that are ultimately published.
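As a concrete illustration of this kind of pre-publication nudge, the short Python sketch below scores a draft post and prompts the author to reconsider when the score crosses a threshold. The scoring function, threshold, and function names are assumptions made for illustration, not Twitter's actual implementation.

```python
# Illustrative sketch only: a pre-publication "reconsider" prompt in the
# spirit of the nudge described above. The scoring function, threshold,
# and names are hypothetical and do not represent Twitter's real system.

def toxicity_score(text: str) -> float:
    """Placeholder classifier; a real platform would use a trained model."""
    offensive_terms = {"idiot", "stupid", "hate you"}
    hits = sum(term in text.lower() for term in offensive_terms)
    return min(1.0, hits / 3)

def submit_post(text: str, confirm_anyway) -> bool:
    """Ask the author to reconsider before publishing a likely-offensive post."""
    if toxicity_score(text) >= 0.3:
        # Prompt the author instead of blocking the post outright.
        if not confirm_anyway(text):
            return False  # the author chose to revise or discard the post
    return True  # the post is published

# Example: the author is prompted and decides not to send the post.
print(submit_post("you are an idiot", confirm_anyway=lambda t: False))  # False
```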

“Social Media Logos” by BrickinNick is licensed under CC BY-NC 2.0.

However, human review of user-uploaded social media content is more context-sensitive and accurate than algorithmic review. Reviewers draw on their cognitive and cultural abilities to decide whether a piece of content is suitable for a particular site or platform. On popular platforms the volume of user posts grows constantly, with new material published almost every second, and the complicated task of classifying all of it as acceptable or unacceptable is far beyond the capabilities of software or algorithms alone (Roberts, 2019b). Algorithms can also review with bias, because they are built on human data that may itself be biased. Yet human review can produce prejudice too: TikTok suppressed videos from users deemed ugly, poor, or disabled, judging those videos less attractive and not worth recommending to new users. Such biased practices were quickly stopped, and platforms tend to update their review policies promptly to protect their reputation; after the TikTok incident, the company established a trust and safety center to oversee the formulation and implementation of its daily review policies. Moreover, different users interpret the same content differently (Jhaver et al., 2018); some videos posted on YouTube are not pornographic, yet some viewers perceive their content as sexually suggestive (Tarvin & Stanfill, 2022).
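The division of labor sketched in this paragraph can be pictured as a simple hybrid pipeline: software resolves the clear-cut cases and queues everything ambiguous for human review. The following Python sketch uses invented thresholds and a placeholder scoring function; it illustrates the general idea rather than any platform's real policy.

```python
# A minimal sketch of the hybrid approach discussed above: an automated
# classifier handles clear-cut cases, while ambiguous items are queued for
# human moderators. Thresholds and scoring are invented for illustration
# and do not reflect any platform's real policy.
from collections import deque

REMOVE_THRESHOLD = 0.9   # confident violation: remove automatically
APPROVE_THRESHOLD = 0.2  # confidently safe: publish automatically

human_review_queue = deque()

def risk_score(text: str) -> float:
    """Placeholder risk score in [0, 1]; real systems use trained models."""
    flagged_terms = {"graphic violence", "harassment"}
    return 0.95 if any(t in text.lower() for t in flagged_terms) else 0.5

def moderate(post: str) -> str:
    score = risk_score(post)
    if score >= REMOVE_THRESHOLD:
        return "removed"
    if score <= APPROVE_THRESHOLD:
        return "published"
    # Neither clearly harmful nor clearly safe: defer to a human moderator,
    # whose cultural and contextual judgment the algorithm lacks.
    human_review_queue.append(post)
    return "pending human review"

print(moderate("a clip showing graphic violence"))           # removed
print(moderate("a borderline joke about a public figure"))   # pending human review
```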

Government

Although governments do not control platforms directly, their attention to and regulation of social media content moderation is increasing, because governments need to safeguard the interests of their citizens. Some platforms, however, sidestep content moderation altogether by offering users encrypted services. Encrypted platforms are undoubtedly a convenient channel for users who deliberately seek out harmful material, and the information circulated there may involve criminal activity that harms the interests of the state and the public. Platforms that offer encrypted messaging and calling, such as WhatsApp, Signal, and Telegram, are embroiled in high-stakes political battles over fundamental rights around the world. In response, Australia passed a law giving the government new powers over encrypted data, requiring digital intermediaries to provide “technical assistance” to federal police and intelligence agencies seeking access to the encrypted conversations of criminal suspects (Gillespie et al., 2020).

In regulating platforms, governments are concerned not only with the excessive power that Internet technology companies hold; they are also under pressure to act against the spread of harmful content such as bullying, harassment, violence, hatred, and pornography. Rather than intervening directly in the management of harmful content, governments oversee the platforms, typically through independent regulators that may be funded by taxes on media companies. Formulating and implementing such measures requires joint effort among platforms, society, and governments (Gillespie et al., 2020).

“Jump on the social media bandwagon” by Matt Hamm is licensed under CC BY-NC 2.0.

User

Although users occupy a passive position in the dissemination of harmful content on digital platforms, they still influence whether that content spreads. Harmful content is posted by some of a platform's users, and its victims are also the platform's users. To maintain the platform environment, users should therefore regulate their own behavior and actively report harmful content to the platform. According to Ofcom, 67% of 13 to 24-year-olds have seen harmful content on social media, but only 17% have reported it. Media companies would ideally like users to manage themselves, and user-led content moderation does occur in online communities: on Twitter, community owners act as moderators whose responsibilities include reviewing member reports, hiding members' tweets, and removing disruptive members from the community.

Users are not limited to the powers the platform confers on them; they can also use the platform to fight for their rights through a personalized form of citizen journalism, creating content by interacting with other members of the public through commenting, linking, sharing, and so on (Wall, 2015). Citizen journalism is influential on Internet platforms because information can be received and disseminated by anyone; Iran's Green Movement, for example, used digital platforms to upend a public image of Iran that had stood for decades. Using citizen journalism to improve the environment of a platform's community is therefore another way to counteract harmful content.

Conclusion

Bullying, harassment, and the spread of violent content, hate, pornography, and other questionable material have always been problems on digital platforms, and regulation of harmful content continues to improve. Responsibility for blocking harmful content falls mainly on platforms, governments, and users. Governments, which hold the public's trust and seek to protect the interests of the country and its citizens, can formulate relevant laws and regulate platforms, thereby indirectly preventing the spread of harmful content. Users facing harmful content can, in addition to regulating their own behavior, use the platform's functions and features to protect their own interests. And as the party providing the platform, an Internet media company can directly influence what is published; maintaining a positive Internet environment also serves the company's interests.

Reference List:

Roberts, S. T. (2019a). Behind the screen. In Behind the Screen: Content Moderation in the Shadows of Social Media (pp. 1–19). Yale University Press. http://dx.doi.org/10.12987/9780300245318-001

Roberts, S. T. (2019b). Understanding commercial content moderation. In Behind the Screen: Content Moderation in the Shadows of Social Media (pp. 33–72). Yale University Press. http://dx.doi.org/10.2307/j.ctvhrcz0v.5

Jhaver, S., Ghoshal, S., Bruckman, A., & Gilbert, E. (2018). Online harassment and content moderation. ACM Transactions on Computer-Human Interaction, 25(2), 1–33. https://doi.org/10.1145/3185593

Gillespie, T., Aufderheide, P., Carmi, E., Gerrard, Y., Gorwa, R., Matamoros-Fernández, A., Roberts, S. T., Sinnreich, A., & Myers West, S. (2020). Expanding the debate about content moderation: Scholarly research agendas for the coming policy debates. Internet Policy Review, 9(4). https://doi.org/10.14763/2020.4.1512

Wall, M. (2015). Citizen journalism. Digital Journalism, 3(6), 797–813. https://doi.org/10.1080/21670811.2014.1002513

Tarvin, E., & Stanfill, M. (2022). “YouTube’s predator problem”: Platform moderation as governance-washing, and user resistance. Convergence: The International Journal of Research into New Media Technologies, 28(3), 822–837. https://doi.org/10.1177/13548565211066490

Dwoskin, E., Whalen, J., & Cabato, R. (2019, July 25). Content moderators at YouTube, Facebook and Twitter see the worst of the web — and suffer silently. The Washington Post. https://www.washingtonpost.com/technology/2019/07/25/social-media-companies-are-outsourcing-their-dirty-work-philippines-generation-workers-is-paying-price/

Criddle, B. C. (2021, May 12). Facebook moderator: ‘Every day was a nightmare.’ BBC News. https://www.bbc.com/news/technology-57088382

Hern, A. (2022, June 16). ‘Want to review this?’: Twitter’s niceness prompts do alter behaviour, study finds. The Guardian. https://www.theguardian.com/technology/2022/jun/16/twitter-niceness-prompts-behaviour-study

Hern, A. (2020, March 17). TikTok “tried to filter out videos from ugly, poor or disabled users.” The Guardian. https://www.theguardian.com/technology/2020/mar/17/tiktok-tried-to-filter-out-videos-from-ugly-poor-or-disabled-users

Kapetas, A. (2021, July 19). WhatsApp and the right to encrypt. The Strategist. https://www.aspistrategist.org.au/whatsapp-and-the-right-to-encrypt/

Stewart, H., & Hern, A. (2019, April 4). Social media bosses could be liable for harmful content, leaked UK plan reveals. The Guardian. https://www.theguardian.com/technology/2019/apr/04/social-media-bosses-could-be-liable-for-harmful-content-leaked-uk-plan-reveals

Milmo, D. (2022, June 26). Young people must report harmful online content, says UK watchdog. The Guardian. https://www.theguardian.com/media/2022/jun/27/young-people-must-report-harmful-online-content-says-uk-watchdog

Twitter. (2022, March 17). Communities moderator playbook. Twitter Help. https://help.twitter.com/en/using-twitter/communities-moderator-playbook#mod-faqs

Dangerfield, M. B. (2022). Power to the People. Tate. https://www.tate.org.uk/art/art-terms/p/photojournalism/power-people