Who is responsible for negative, hateful, offensive speech on social media? Are the existing solutions good enough?

Presence of inappropriate content on the Internet

Ever since the introduction of social media and digital platforms, negative content has inevitably appeared before the public’s eyes. Digital platforms advertise themselves as venues for free speech, where people have the right to speak and where a public sphere is constructed; in this space negative speech cannot be ignored, and the covert, relatively anonymous nature of online communication makes it easier and more direct to harm others. The ban on hate speech is not just about protecting sensitive groups or minorities from attack; it is about articulating the community’s commitment to inclusion, diversity, and equality (Kohl, 2022). As technology companies exert an ever-increasing influence on society, self-regulation of hate speech has become an important part of platform governance (Gillespie, 2019, pp. 1–23). In addition, government regulation can be an effective tool, most commonly by setting specific laws and regulations to govern user speech; however, its possibilities go beyond this. This article focuses on the effectiveness of platform regulation, the possibilities of government regulation, and possible future approaches to co-regulation.

“No violence no hate speech” by faul is licensed under CC BY 2.0.

Platform Regulation

In 1996, the United States passed Section 230 of the Communications Decency Act, a safe harbour provision which meant that platforms were not required to meet effective regulatory standards and were not held accountable for the speech of their users (Flew et al., 2019). However, as the reality of online harm has grown more complex, platforms are now expected to take responsibility for curating content and regulating user activity, which includes the hate speech discussed here. Doing so firstly satisfies existing legal requirements; secondly, users will not choose a platform full of offensive, harassing and hateful speech (Flew et al., 2019). Platforms need to ensure that their communities are healthy and to project a respectable corporate image so that they attract more users and, in turn, advertisers’ investment. This required consistency between platforms and media companies means, by default, that platforms cannot continue to avoid responsibility (Flew et al., 2019). One possible measure is to attach provenance information to specific content, such as displaying the user’s IP-based location under their comments, so that this visibility acts as a self-regulatory prompt. This initiative is already in use in China but would need to be adapted to the circumstances of different countries (VyprVPN, 2022).

(China Revealed, 2022)

For a more standardized process, the first thing a platform can do is use algorithms to detect whether users have posted content that violates community rules and, where there is such an indication, remove it or take other governing measures. However, there will always be content the algorithm misses, and human moderation is essential if content is to be regulated effectively. Before moderation can take place, the platform needs to define the scope of negative speech; the definition must not be vague, as vagueness can cloud the judgement of the moderator (Kalsnes & Ihlebæk, 2020). Secondly, when judging the content of texts, internal staff need to discuss it first and, if necessary, consider the views of the government as well as the public (Kalsnes & Ihlebæk, 2020). Finally, platforms need to impose penalties that reflect the severity of the problem, such as removing posts, restricting discussion threads, or even suspending users’ accounts (Kalsnes & Ihlebæk, 2020).
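To make this workflow concrete, the minimal sketch below illustrates how algorithmic flagging, human review, and graduated penalties might fit together. It is purely illustrative: the rule names, placeholder phrases, and thresholds are invented for this example and are not drawn from any platform’s actual system, which would rely on trained classifiers and far more detailed policy definitions.

```python
from dataclasses import dataclass, field
from enum import Enum


class Penalty(Enum):
    NONE = "no action"
    REMOVE_POST = "remove post"
    RESTRICT_THREAD = "restrict discussion thread"
    SUSPEND_ACCOUNT = "suspend account"


@dataclass
class Post:
    user: str
    text: str
    flags: list = field(default_factory=list)


# Hypothetical keyword rules standing in for a real classifier;
# the categories must be clearly scoped so moderators are not left guessing.
RULES = {
    "harassment": ["go kill yourself"],
    "hate_speech": ["<slur placeholder>"],  # placeholder terms only
}


def algorithmic_flag(post: Post) -> None:
    """First pass: flag posts that appear to violate defined categories."""
    for category, phrases in RULES.items():
        if any(phrase in post.text.lower() for phrase in phrases):
            post.flags.append(category)


def human_review(post: Post, prior_violations: int) -> Penalty:
    """Second pass: a human moderator confirms the flag and chooses a
    penalty proportionate to severity and the user's history."""
    if not post.flags:
        return Penalty.NONE
    if prior_violations >= 3:
        return Penalty.SUSPEND_ACCOUNT
    if "hate_speech" in post.flags:
        return Penalty.RESTRICT_THREAD
    return Penalty.REMOVE_POST


if __name__ == "__main__":
    post = Post(user="example_user", text="Go kill yourself")
    algorithmic_flag(post)
    decision = human_review(post, prior_violations=0)
    print(post.flags, decision.value)
```

The point of the structure, as in the paragraph above, is that the algorithm only surfaces candidates; the decision and the graduated penalty remain with a human reviewer working from a clearly defined policy.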

However, the drawbacks of platform regulation are also very apparent. The solutions described above are not perfect; for example, most community rules have to be designed by the platform itself, as there is little prior law or regulation to draw on, so the values and interests of the company determine what is acceptable or should be de-emphasized (Gillespie, 2019, pp. 1–23). In addition, the people who do the moderating need to be more diverse: because full-time moderation staff at most social media platforms are currently white and male, minority perspectives risk being overlooked owing to different social contexts (Gillespie, 2019, pp. 1–23). As user communities grow more diverse, this problem becomes impossible to ignore, and platforms should be more comprehensive and careful in choosing their moderators.

Government intervention

There is no denying that government intervention is equally influential. Governments can curb the spread of inaccurate and negative content on social media through legislation, introducing stricter laws and regulations that users can invoke to protect themselves when offensive content harms minority groups (Stockmann, 2022). This approach is the most direct, but platforms should also be held accountable for large-scale online abuse where the specific perpetrators cannot be traced. Government oversight of content review on digital platforms should be tightened, and countries such as France, Poland and the UK are considering legislation seeking wider oversight and imposing new censorship obligations on tech companies (Stockmann, 2022). Germany’s NetzDG law, which came into force in 2017, requires social media platforms with more than two million users in Germany to remove hate speech and illegal content (Kohl, 2022). The law provides a public framework for private censorship, a form of regulated self-regulation (Kohl, 2022). It defines the gatekeeping role of the platform, which must consider local public content standards and basic due process when reviewing content (Kohl, 2022). NetzDG thus reinforces the legitimacy of private regulation.

“FREE SPEECH” by Newtown grafitti is licensed under CC BY 2.0.

However, the main tension between the views of European countries and those of the United States is that the two regions have different concepts of freedom of expression and different concerns: Europe focuses on how such laws can better protect the rights and dignity of citizens, whereas the US is more concerned with the consequences of government interference with freedom of expression (Kohl, 2022). From this example, it can be concluded that the laws governments can implement vary from region to region and country to country, that no single provision can be borrowed wholesale, and that the best legal provisions are tailored to the characteristics of the media and the perceptions of the population in different regions (Stockmann, 2022). Even so, Germany’s NetzDG provides a useful example and is likely to serve as a blueprint for more extensive regulation of platforms in the future.

Co-Regulation in Australia

As mentioned above, both platform regulation and government intervention have their own drawbacks and imperfections. In this context, co-regulation by platforms and government becomes particularly important. In the Australian context, platforms should maintain channel independence, objectivity and diversity of media content, and promote social solidarity and cohesion, while the Australian Competition and Consumer Commission (ACCC) ensures that there is competition in the domestic market (Cunningham & Turnbull, 2014, pp. 74–91). Australian media ownership is highly concentrated, and investment in the emerging media and communications sector is needed so that monopolies can be prevented and greater pressure can be placed on the big tech platforms (Cunningham & Turnbull, 2014, pp. 74–91). Regulation must be collaborative, led by a government agency such as the ACCC.

Conclusion

In conclusion, platforms should have stronger content controls and set clear standards for negative, hateful, and offensive speech on the internet. In addition, national governments should establish different regulations depending on the situation and push for transparency in private regulatory systems. New models of regulation, such as co-regulation, need to be introduced to better combat negative content.

Reference List

China Revealed. (2022, June 14). “Watch your words”: Chinese social networks now publish a user’s IP on every comment made [Video]. YouTube. https://www.youtube.com/watch?v=UIKtLtDw_70

Cunningham, S., & Turnbull, S. (2014). The media & communications in Australia (4th ed., pp. 74–91). Allen & Unwin.

Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50. https://doi.org/10.1386/jdmp.10.1.33_1

Gillespie, T. (2019). Custodians of the Internet (pp. 1–23). Yale University Press. https://doi.org/10.12987/9780300235029

Kalsnes, B., & Ihlebæk, K. A. (2020). Hiding hate speech: Political moderation on Facebook. Media, Culture & Society, 43(2). https://doi.org/10.1177/0163443720957562

Kohl, U. (2022). Platform regulation of hate speech – a transatlantic speech compromise? Journal of Media Law, 1–25. https://doi.org/10.1080/17577632.2022.2082520

Stockmann, D. (2022). Tech companies and the public interest: the role of the state in governing social media platforms. Information, Communication & Society, 1–15. https://doi.org/10.1080/1369118x.2022.2032796

VyprVPN. (2022, May 12). China’s social media displays user’s location. VyprVPN. https://www.vyprvpn.com/blog/china-social-media-ip-address