Governments, Internet companies and users should be responsible for stopping the spread of problematic content

BY: Kaidi Cui, Qiyu Wang, Yi Zhong, Fiona Bai

Government regulation:

Social media is flooded with bullying, harassment, violent content, hate speech, pornography and other problematic content, so government regulation to curb these problems might be helpful. Sites like Facebook and Twitter currently do some of their own moderation, but they have not done it well and have faced accusations of poorly drafted policies.

The best remedy might be for the government to enact targeted regulations combined with leadership in valuing and rewarding truth and evidence-based reasoning.

Not everyone agrees that regulation is the answer: the Cato Institute, for example, has argued that the government should not regulate content moderation of social media.

In practice, governments around the world have attempted to address such content through a variety of means and to varying degrees. These include mandatory filtering at the Internet Service Provider (ISP) level and selective filtering at the computer level. In Australia, there has been considerable debate about what level of filtering, if any, should be mandatory. The Howard government favoured an approach that emphasised self-regulation by ISPs and a combination of legislation, education and families' freedom to choose between computer-level or ISP-level filtering based on a list of unacceptable content. The Rudd and Gillard governments preferred mandatory ISP-level filtering, although this was also based on "blacklists" of prohibited content.
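To make the mechanism concrete, here is a minimal sketch of how "blacklist"-based filtering works, whether applied at the ISP level or by software on an individual computer. The blocked hostnames are invented placeholders, not entries from any real government list, and actual ISP-level filters run inside network infrastructure rather than in application code.

```python
from urllib.parse import urlparse

# Hypothetical "blacklist" of prohibited hosts; real filters use
# government-supplied lists, which are usually kept secret.
BLACKLIST = {"prohibited.example", "banned.example"}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host appears on the blacklist."""
    host = urlparse(url).hostname or ""
    return host in BLACKLIST

print(is_blocked("https://prohibited.example/page"))  # True: on the list
print(is_blocked("https://news.example/article"))     # False: allowed through
```

The policy debate described above is essentially about where this check runs: mandatory ISP-level filtering applies it to everyone's traffic, while computer-level filtering leaves families free to opt in.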

Social media platforms:

Different countries have their own approaches to the issue of Internet regulation. As Tarleton Gillespie notes in "Governance of and by Platforms", China and the Middle East impose "strict liability" on Internet intermediaries to prevent the distribution of illegal or illicit content. This usually means proactive removal or censorship, often in direct cooperation with the government (Gillespie, 2017). A relevant example in China is that when TikTok users attempt to publish videos on the platform, they must wait for the videos to be reviewed, and only videos that pass the review can be promoted. The review criteria in 2022 include prohibitions on content that harms the interests of minors, pornography, fraud, inducing users into excessive consumption, and so on. If the platform finds such problematic videos in a user's account, they are taken down, and in serious cases the account is blocked.
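As an illustration of this review-before-publish workflow, the sketch below models a video as a set of content labels and applies the kind of rules described above. The label names and the three-strike threshold are illustrative assumptions, not TikTok's actual policy.

```python
# Hypothetical prohibited categories, loosely based on the 2022 rules
# described above; the strike limit is an invented assumption.
PROHIBITED = {"harm to minors", "pornography", "fraud", "excessive consumption"}
STRIKE_LIMIT = 3

def review(labels: set[str], strikes: int) -> tuple[str, int]:
    """Return the moderation decision and the updated violation count."""
    if labels & PROHIBITED:          # video matched a prohibited category
        strikes += 1
        if strikes >= STRIKE_LIMIT:  # serious/repeated violations: block account
            return "account blocked", strikes
        return "taken down", strikes
    return "approved for promotion", strikes  # passed review, can be promoted

decision, strikes = review({"cooking"}, strikes=0)
print(decision)  # approved for promotion
decision, strikes = review({"fraud"}, strikes=2)
print(decision)  # account blocked (third violation)
```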

Users:

Users are also responsible for preventing the distribution of such content. First, users should avoid bullying, harassment, violent content, hate speech, pornography and other problematic content when creating their own homepages and posting. Second, they should moderate the comment sections of their posts to avoid starting arguments and bad discussions. Third, when they find bad content in their comment sections or message inboxes, they should deal with it promptly to prevent wider dissemination and report it to the platform so that those spreading it are sanctioned.

According to Weimann and Masri (2020), TikTok's distinctive feature, as a video platform that lets users watch content from strangers rather than just a selected network of friends, makes it a new way to spread abuse and avoid detection. Because videos contain little searchable text, there are fewer keywords to monitor, making it more difficult for computers to flag posts automatically. This increases the importance of users maintaining and monitoring their own account content in order to prevent unwanted content from spreading through them.
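A minimal sketch makes the keyword problem visible. The blocklist below is an invented example, not any platform's real word list; the point is simply that a text post hands the filter words to match, while a video with little or no caption does not.

```python
# Hypothetical keyword blocklist; real systems are far more sophisticated.
BLOCKLIST = {"hate", "harassment", "violence"}

def flag_text(text: str) -> bool:
    """Return True if any blocklisted keyword appears in the text."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return not BLOCKLIST.isdisjoint(words)

print(flag_text("so much hate in this thread"))  # True: the text exposes keywords
print(flag_text("check this out lol"))  # False: a bare video caption gives the filter nothing
```

This is why, on a video-first platform, users watching and reporting content carry more of the moderation burden.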

References:

Australian governments and dilemmas in filtering the Internet: Juggling freedoms against potential for harm. (n.d.). Parliament of Australia. Retrieved from https://www.aph.gov.au/About_Parliament/Parliamentary_Departments/Parliamentary_Library/pubs/rp/rp1415/InternetFiltering

Hall, W., & O’Hara, K. (2018, December 7). Four internets: The geopolitics of digital governance. Centre for International Governance Innovation. Retrieved September 4, 2022, from https://www.cigionline.org/publications/four-internets-geopolitics-digital-governance/

Weimann, G., & Masri, N. (2020). Research note: Spreading hate on TikTok. Studies in Conflict & Terrorism. Retrieved from https://www.tandfonline.com/doi/full/10.1080/1057610X.2020.1780027

TikTok introduce new rules to protect younger users following safety fears. (n.d.). Cybersmile. Retrieved September 4, 2022, from https://www.cybersmile.org/news/tiktok-introduce-new-rules-to-protect-younger-users-following-safety-fears