Bullying, harassment, violent content, hate, porn, and other problematic content circulates on digital platforms. Who should be responsible for stopping the spread of this content, and how?

"Stop" by kevin dooley is licensed under CC BY 2.0.

In the web 2.0 era, the rapid growth of the internet has enlarged digital platforms and expanded the number of private users. Mosco (2017) notes that internet companies such as Google and Facebook are gradually overtaking financial companies as the world's largest companies by market capitalization. This essay argues that internet companies, governments, and schools should all be responsible for stopping the spread of harmful content on social media platforms, and that the internet companies behind those platforms, such as Facebook and Instagram, bear the main responsibility.

Why it’s essential to stop the spread of harmful content on the internet

The rapid development of the internet has made people accustomed to the constant presence of media in their lives. Deuze (2007) points out that we do not live with the media; we live in it. The anonymity, scale, and speed of communication on social media platforms allow users to express their discontent and show their true selves more freely. On the other hand, these same characteristics of virtual platforms also accelerate the spread of false information, violence, and pornography.


"Fake News – Computer Screen Reading Fake News" by mikemacmarketing is licensed under CC BY 2.0.

Some argue that stopping harmful internet content is unnecessary; for example, Burgess (2018) argues that fake news can be valuable to read, as it can help users learn about other perspectives. However, most fake news is designed to gain views through exaggerated content that benefits companies or individuals. If such content is not restricted, it will only increase internet companies' exploitation of vulnerable people, such as users or employees. Fake news can even create psychological fear among users and endanger the social order.

Therefore, this essay argues that it is necessary to stop this problematic content through a concerted effort in order to create a healthy internet environment.

Why Internet companies are mainly responsible

In the web 2.0 era, social media platforms are no longer just virtual spaces for social and knowledge exchange; they have also become sizeable economic platforms. Mansell (2020) develops the idea of multi-sided markets, describing the current internet environment as a multi-party marketplace in which no user is independent of the internet. For example, the digital traces users leave while browsing the web can be automatically stored in internet companies' large databases and exploited for profit. As direct beneficiaries of the internet's development, internet companies may therefore be tempted to ignore the public interest in pursuit of greater financial profit.

Furthermore, as the direct rule makers of internet usage, internet companies are best placed to manage the dissemination of harmful content at its source through technologies such as algorithms. Indeed, one direct cause of the emergence of harmful content is the anonymity permitted by the rules these companies set. Internet companies should therefore bear the most responsibility for stopping the continued distribution of harmful content.

Measures that the company can take

Internet companies should reduce the spread of harmful content by tightening the scrutiny of content uploaded to their platforms. Firstly, companies should hire more reviewers. Although most internet companies already employ content reviewers, their numbers are far too small, relative to the volume of videos and articles uploaded every day, to stop the most harmful content at the point of upload. Using YouTube as an example, Flew (2019) states that the platform receives over 400 hours of video content every minute. Such intense work is not only hazardous to reviewers' mental health but also makes them more prone to errors, which some users then exploit. For example, publishers who wish to disrupt society maliciously register many virtual accounts and repeatedly post the same non-compliant content to increase the chance that it slips through review. Internet companies should therefore hire more reviewers. They should also provide reviewers with free psychological counseling services, which protects employees' mental health and helps make content review a sustainable industry.
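One way to blunt the repeat-upload tactic described above, sketched here purely as an illustration (the function names and data are invented, and real platforms use far more sophisticated perceptual matching), is to fingerprint content that reviewers have already rejected so that byte-identical re-uploads are blocked without consuming a second review:

```python
import hashlib

# Fingerprints of content a human reviewer has already rejected.
rejected_fingerprints = set()

def fingerprint(content: bytes) -> str:
    """Return a stable hash identifying this exact piece of content."""
    return hashlib.sha256(content).hexdigest()

def record_rejection(content: bytes) -> None:
    """Called once a human reviewer rejects a piece of content."""
    rejected_fingerprints.add(fingerprint(content))

def is_known_rejected(content: bytes) -> bool:
    """True if identical content was already rejected before."""
    return fingerprint(content) in rejected_fingerprints

# A reviewer rejects a post once...
record_rejection(b"the same non-compliant post")
# ...so identical re-uploads are caught automatically, even from new accounts.
print(is_known_rejected(b"the same non-compliant post"))  # True
print(is_known_rejected(b"a different, unseen post"))     # False
```

An exact hash only catches identical bytes, so this complements rather than replaces human review, but it removes the cheapest form of abuse from the queue.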


"YouTube video Brandweer Nederweert" by mauritsonline is licensed under CC BY 2.0.

Secondly, companies should improve their review management systems and reduce the oversights inherent in manual review by adopting a hierarchical audit system. Although each company has established uniform review standards, individual reviewers may not interpret those standards identically. Companies can appoint junior, intermediate, and senior reviewers who check uploaded content one level at a time. Multiple reviews by different staff minimize human error and reduce the spread of harmful content.
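The escalation logic of such a tiered system can be sketched as follows. This is a minimal illustration, not any platform's actual workflow: each tier returns a verdict or "unsure", and "unsure" escalates the item to the next, more experienced tier, so borderline cases are seen by several people. The per-tier rules here are invented placeholders.

```python
def junior_review(text: str) -> str:
    # Junior tier: rules only on obvious cases, escalates the rest.
    if "obvious violation" in text:
        return "reject"
    if "clearly fine" in text:
        return "approve"
    return "unsure"

def intermediate_review(text: str) -> str:
    # Intermediate tier: handles some harder cases, still may escalate.
    return "reject" if "borderline slur" in text else "unsure"

def senior_review(text: str) -> str:
    # Senior tier: must always reach a final verdict.
    return "reject" if "harmful" in text else "approve"

def review_pipeline(text: str) -> tuple:
    """Escalate through tiers; return (final verdict, deciding tier)."""
    tiers = [("junior", junior_review),
             ("intermediate", intermediate_review),
             ("senior", senior_review)]
    for tier_name, tier in tiers:
        verdict = tier(text)
        if verdict != "unsure":
            return (verdict, tier_name)
    return ("approve", "senior")  # unreachable: senior never returns "unsure"

print(review_pipeline("clearly fine holiday video"))    # ('approve', 'junior')
print(review_pipeline("something harmful and subtle"))  # ('reject', 'senior')
```

The design point is that easy cases are resolved cheaply at the bottom tier, while only genuinely ambiguous content reaches senior staff.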

Additionally, beyond the current combination of algorithmic and manual review, internet companies can reduce the spread of harmful content through technological innovation, such as improved algorithm design. Better algorithms can also reduce reviewers' workload and, to a certain extent, improve their efficiency.
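The workload-reduction idea can be illustrated with a deliberately crude pre-screening sketch: an automated scorer auto-handles the confident cases and sends only ambiguous items to human reviewers. Real platforms use trained classifiers; the word list, scoring rule, and thresholds below are made up for illustration.

```python
# Placeholder list of disallowed terms (invented tokens, not real slurs).
BANNED_TERMS = {"slur_a", "slur_b", "graphic_violence"}

def risk_score(text: str) -> float:
    """Fraction of words that are banned terms (a toy stand-in for a classifier)."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in BANNED_TERMS)
    return hits / len(words)

def triage(text: str) -> str:
    """Route content: act automatically when confident, else queue for a human."""
    score = risk_score(text)
    if score >= 0.5:
        return "auto-remove"    # confident violation
    if score == 0.0:
        return "auto-approve"   # no risky terms at all
    return "human-review"       # ambiguous: escalate to a person

print(triage("a friendly cooking video"))       # auto-approve
print(triage("slur_a slur_b"))                  # auto-remove
print(triage("quote discussing slur_a usage"))  # human-review
```

Only the middle band reaches the human queue, which is the mechanism by which better algorithms shrink reviewers' workload.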

Furthermore, it is internet companies' responsibility to inform users of the rules on content restrictions, including which specific content is restricted, how the platform reports review results, and how violations are penalized. Given the accelerated pace of life, users may lack the patience to read the full rules, so companies can help them become familiar with community guidelines through engaging videos, for example in cartoon form. YouTube, for instance, uses animations to help users quickly understand why hate speech is not allowed and what the community's specific rules are.

Measures that government can take

Harmful speech on social media platforms degrades the internet environment and harms the public interest and social order. Therefore, while the platforms should bear most of the responsibility, the government, as a representative of the public interest, should also take part of the responsibility for regulating internet content.

Governments can discourage the spread of harmful content by creating and improving laws and regulations. For example, China has created a new role, the internet police, which builds more precisely on existing laws and regulations to regulate content such as violent pornography, and through which China hopes to increase the penalties for disseminating harmful content. However, because of cultural, economic, and other differences between countries, global uniformity in internet regulation is impossible; the exact definition of harmful content varies from government to government. Individual countries should therefore reduce the conflicts caused by differing internet norms and improve their own rules by learning from other countries' regulations.

On the other hand, governments should not interfere excessively in the internet market. Moderate freedom of expression encourages technological innovation and market dynamism on the internet. Governments should instead gradually improve the regulatory system so that the internet industry can still develop sustainably under oversight.

Measures that schools can take

Given the current state of social media platforms, creating a completely healthy environment for speech will take time. As more young people begin to use the internet, the involvement of schools can lay a good foundation for shaping the online environment. Schools can invite content reviewers to give talks about their work and the benefits of regulated internet use. Alternatively, schools could introduce additional courses that help students understand the rules of internet use through assessments.

Conclusion

In summary, the spread of harmful content on the internet is a social issue that requires coordinated action by internet companies, governments, schools, and other parties. It is equally essential to protect freedom of expression while regulating the spread of harmful speech. Only by balancing the public interest with the interests of individuals can we ensure the sustainable development of the internet industry and create a healthy internet environment.

References:

Burgess, J., Marwick, A. E., & Poell, T. (Eds.). (2018). The SAGE handbook of social media (pp. 255–278). SAGE Publications.

Deuze, M. (2007). Media Life. Media, Culture & Society, 33(1), 137–148. https://doi.org/10.1177/0163443710386518

Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50. https://doi.org/10.1386/jdmp.10.1.33_1

Mansell, R. (2020). Advanced introduction to platform economics. Edward Elgar Publishing.

Mosco, V. (2017). Becoming digital: Toward a post-Internet society. Emerald Publishing.

YouTube Creators. (2019, May 24). Hate speech policy: YouTube Community Guidelines [Video]. YouTube. https://youtu.be/45suVEYFCyc