An overview of existing research on online problematic discourse
The genuine hazards posed by the Internet are becoming more prevalent in everyday life and have attracted growing scholarly attention. Addressing them is widely regarded as urgent because, in the Web 2.0 era, when enormous numbers of people engage in online activity daily, anyone can be exposed to these harms. Scholars have proposed answers to this significant societal issue, and the debate over who should be held responsible has been broadly divided between individual end-users, internet platforms, and government intervention (Shepherd et al., 2015). In recent years the research has progressed, if modestly, from single-platform studies to cross-platform investigations (Pater et al., 2016) and longitudinal comparative studies (Ben-David & Matamoros-Fernández, 2016). While some scholars have expressed confidence in the governance of the Internet ecosystem and offered generous proposals for solutions, such views have been challenged by researchers with more pessimistic perspectives. This essay addresses how responsibility for the dissemination of problematic speech on digital platforms should be divided, and what governance proposals might follow, while maintaining a relatively anti-utopian viewpoint.
Cyberbullying
‘What’s Cyberbullying?’ by Common Sense Education.
Cyberbullying is a widespread and persistent social issue, with 41% of American adults reporting that they have personally experienced online harassment (Vogels, 2021). Furthermore, according to Vogels’ research, social media is by far the most common venue for such abuse: 75% of targets said their most recent experience occurred there. A devastating example is the trolling of a father who lost his child in a shooting in the United States, yet was accused of taking part in a government plot to prohibit guns.
The responsibility of individuals
Shepherd et al. (2015) offer a different perspective on cyberbullying: hate speech has existed since long before the internet and originates in people themselves rather than in the technology. Rainie et al. (2017) support this view from a psychological standpoint, noting that while the venue for speech has shifted online, the people publishing the content have not changed. On this account, the problem lies with the individuals who use the internet rather than with the platforms.
The responsibility of platforms
This essay broadly agrees that online users are a major contributor to online hate speech. That, however, does not absolve platforms of accountability. Shepherd et al. (2015) and Rainie et al. (2017) go on to emphasise platform responsibility, noting how the structural features of the internet provide permissive spaces for hate speech, such as hashtags, which can help normalise certain undesirable speech. Meme culture, including widely used emoji, illustrates even more clearly than hashtags how platform affordances can foster online verbal bullying (Matamoros-Fernández, 2018). According to Matamoros-Fernández, bullies may even deploy emoji to greater effect than explicitly harsh language. This becomes plausible when one recalls the viral nature of memes and the fact that, despite being framed as benign and entertaining, emoji can carry an implicit sense of prejudice: the ‘Tears of Joy’ emoji, for instance, is sometimes used to signal sarcasm and gloating. This in turn indicates the influence a platform’s mechanisms, such as how it distributes emoji, can have on platformed racism, and hints at why platforms should be held accountable.
“Brick-moji: Face with tears of joy” by Ochre Jelly is marked with Public Domain Mark 1.0.
Facebook as a case study
While platforms have long advertised their “open, fair, and non-interventionist” character (Gillespie, 2018b, p. 256) to suggest that they are not accountable for harmful online speech, critics argue that this claim becomes unsustainable once the platforms’ algorithms and for-profit motives are examined (Gillespie, 2018a). To begin, Matamoros-Fernández (2018) contends that a platform’s policies and affordances can encourage cyberbullying. Platformed racism can be regarded as a form of cyberbullying, and one example of a platform enabling discrimination is Facebook, which allowed advertisers to exclude users by race (Angwin & Parris Jr., 2016). Although this practice may have violated anti-discrimination law, the potential financial gains appear to have enticed Facebook to offer such a service.
“dinero facebook” by Esther Vargas is licensed under CC BY-SA 2.0.
The non-neutral and commercial nature of platforms
This example contradicts the belief that platforms are merely neutral online intermediaries (Gillespie, 2018b) and supports the idea that platforms are not neutral channels: they pursue profit and they shape public conversation (Gillespie, 2018a). Search engines, for example, are essentially advertising corporations and may bias search results in line with their financial interests (Gray, 2019). While platforms do not create content, they influence the shape of public discourse: how it is presented, by whom, and to whom (Gillespie, 2018a). The history of platforms’ shifting roles helps explain this, as their functions have developed from simply offering search and hosting contributions to exercising far more control over what content is allowed to circulate (Gillespie, 2018b). All of these arguments and examples suggest that platforms manage content and its dissemination, which makes Gillespie’s proposal that platforms administer harmful speech seem feasible. However, in contrast to more optimistic accounts of platform governance, such as the claim that platforms are motivated to provide a healthy communication environment in order to retain users, this essay maintains a more pessimistic stance because of platforms’ commercial incentive to profit through controversial advertising and opaque algorithms.
The responsibility of the governments
In light of the limitations of platform governance, complementary support seems necessary to improve the online speech environment. Gorwa (2019) emphasises the importance of shifting away from pure platform self-regulation, with increased government intervention expected to make platforms more transparent and thus hold them accountable for online speech. A combination of platform self-governance and legal regulation therefore appears necessary.
The potential limitations and challenges
Shepherd et al. (2015), on the other hand, point to the limits of this argument: such a combination may only make problematic speech superficially invisible online without addressing the root of problems such as bullying and discrimination, because the problem is essentially cultural and the internet merely amplifies it. A related challenge is the limited power of technology to block problematic speech or user accounts. According to Stocking et al.’s (2022) analysis of seven alternative social networking sites, whose audiences tend to prize free expression, 15% of the prominent accounts on these sites had previously been banned from larger social media platforms. Even if these users’ perspectives are partial and self-interested, they highlight one of the significant dilemmas of platform governance: the potentially murky line between under- and over-regulation.
Users who reject rigorous regulation typically invoke freedom of expression to defend their behaviour, as in the case of an award-winning photograph intended to depict the horrors of war that was removed because it contained a minor’s nudity (Gillespie, 2018a).
“Kim Phuc – The Napalm Girl In Vietnam” by David Erickson is licensed under CC BY 2.0.
This suggests that grey, borderline scenarios exist and that it can be very difficult for platforms to handle problematic content precisely. At the same time, large platforms, particularly global ones, may be difficult to govern because judging whether content is offensive is highly subjective, and judgments can differ across geographical and cultural contexts (Gorwa, 2019). China, for instance, is thought to exercise heavy censorship, with gay-themed content discouraged from appearing on screen and on social media (Huang, 2016), whereas this may differ in other nations. This echoes the argument that the internet’s decentralised nature (Flew et al., 2019) presents unique regulatory challenges, making it difficult to devise a uniform set of standards applicable to every regulatory regime.
“同性恋 – Homosexual in Chinese” by Jonathan is licensed under CC BY-NC 2.0.
Conclusion and prospect
Finally, this essay suggests that while platforms should bear primary responsibility for regulating online discourse, individuals and governments should play supporting roles, given challenges such as platforms’ potentially insufficient regulatory incentives. Meanwhile, this essay urges further research into how digital platforms reproduce and amplify harmful online speech. Furthermore, because these problematic discourses are generally rooted in long-standing cultural problems (Shepherd et al., 2015), more research taking a critical approach to the origins of problematic discourse, and to ways of ameliorating it at its root, may be required.
Reference List
Ben-David, A., & Matamoros-Fernández, A. (2016). Hate Speech and Covert Discrimination on Social Media: Monitoring the Facebook Pages of Extreme-Right Political Parties in Spain. International Journal of Communication, 10. https://ijoc.org/index.php/ijoc/article/view/3697
Brodner, S. (2016). How Trolls Are Ruining the Internet. Time. https://time.com/4457110/internet-trolls/
Chemaly, S. (2016). Fake news and online harassment are more than social media byproducts—They’re powerful profit drivers. Salon. https://www.salon.com/2016/12/17/fake-news-and-online-harassment-are-more-than-social-media-byproducts-theyre-powerful-profit-drivers/
Matamoros-Fernández, A. (2018). Inciting anger through Facebook reactions in Belgium: The use of emoji and related vernacular expressions in racist discourse. First Monday, 23(9). https://eprints.qut.edu.au/122413/
Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50. https://doi.org/10.1386/jdmp.10.1.33_1
Gillespie, T. (2018a). All Platforms Moderate. In Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (pp. 1–23). Yale University Press.
Gillespie, T. (2018b). Regulation of and by Platforms. In The SAGE Handbook of Social Media (pp. 254–278). SAGE Publications, Limited. http://ebookcentral.proquest.com/lib/usyd/detail.action?docID=5151795
Gorwa, R. (2019). The platform governance triangle: Conceptualising the informal regulation of online content. Internet Policy Review, 8(2). https://policyreview.info/articles/analysis/platform-governance-triangle-conceptualising-informal-regulation-online-content
Gray, K. L. (2019). Algorithms of oppression: How search engines reinforce racism. Feminist Media Studies, 19(2), 308–310. https://doi.org/10.1080/14680777.2019.1579984
Hagan, N. (2018). Why the Face with Tears of Joy emoji is the symbol of our age. Medium. https://medium.com/@nickjameshagan_14702/why-the-face-with-tears-of-joy-emoji-is-the-symbol-of-our-age-902a8543baa4
Huang, Z. (2016). China’s new television rules ban homosexuality, drinking, and vengeance. Quartz. https://qz.com/630159/chinas-new-television-rules-ban-homosexuality-drinking-and-vengeance/
Angwin, J., & Parris Jr., T. (2016). Facebook Lets Advertisers Exclude Users by Race. ProPublica. https://www.propublica.org/article/facebook-lets-advertisers-exclude-users-by-race
Pater, J. A., Kim, M. K., Mynatt, E. D., & Fiesler, C. (2016). Characterizations of Online Harassment: Comparing Policies Across Social Media Platforms. Proceedings of the 19th International Conference on Supporting Group Work, 369–374. https://doi.org/10.1145/2957276.2957297
Rainie, L., Anderson, J., & Albright, J. (2017). The Future of Free Speech, Trolls, Anonymity and Fake News Online. Pew Research Center: Internet, Science & Tech. https://www.pewresearch.org/internet/2017/03/29/the-future-of-free-speech-trolls-anonymity-and-fake-news-online/
Shepherd, T., Harvey, A., Jordan, T., Srauy, S., & Miltner, K. (2015). Histories of Hating. Social Media + Society, 1(2), 2056305115603997. https://doi.org/10.1177/2056305115603997
Stocking, G., Mitchell, A., Widjaya, R., & Smith, A. (2022). The Role of Alternative Social Media in the News and Information Environment. Pew Research Center’s Journalism Project. https://www.pewresearch.org/journalism/2022/10/06/the-role-of-alternative-social-media-in-the-news-and-information-environment/
Vogels, E. (2021). The State of Online Harassment. Pew Research Center: Internet, Science & Tech. https://www.pewresearch.org/internet/2021/01/13/the-state-of-online-harassment/