
Background

Social media has become the primary source of information for people in modern life. At the same time, harmful images and remarks, such as racism and misogyny, also circulate on social media. Someone must take responsibility for regulating this potentially disruptive content and preventing it from undermining a peaceful online society. Users, platforms, and the law must work together to keep social networking from becoming an extrajudicial arena. The central argument of this paper is that online users, platform mediums, and government legislation should all be held accountable for preventing discriminatory information on digital platforms. Because multi-stakeholder governance standards are frequently the best option in a series of bad choices (Gorwa, 2019), this article will examine: 1. how Internet users can voluntarily keep digital platforms safe; 2. how platforms manage and prevent harmful content from appearing; and 3. how governments and laws regulate digital platforms.
How are the users responsible?
As the consumers of the network, users form a large base. Compared with the offline world, the anonymity and freedom of social media allow users to speak freely. Rather than passively receiving information as audiences once did, today’s users are more active: they have become engaged, thinking participants with the right to provide feedback (Durham & Kellner, 2006). Users therefore serve as both encoders and decoders in the network; they are not only receivers of information but also initiators. When members of disadvantaged groups face discrimination, they can use social media to speak up for themselves while also seeking justice for others who have had similar experiences. At the same time, the web has given groups such as digital civil society, academics, and journalists a vital role in corporate responsibility and oversight through their advocacy, research, and investigation of platform practices (Gorwa, 2019). This popular power can monitor and limit the appearance of sexism and racism on digital platforms, because, whether in society at large or in the network society, people (or users) are always an essential part of it. Even if upper-level policymakers pass restrictive legislation, the problem is only contained, not eradicated, without users’ participation.

Actress Alyssa Milano used Tarana Burke’s #MeToo hashtag to respond to the allegations of sexual assault against Hollywood producer Harvey Weinstein. Milano encouraged public participation and demonstrated the gravity of sexual violence, and the incident and the hashtag #MeToo received widespread attention (Mendes et al., 2018). With the rise of the Internet, feminists are increasingly turning to digital technologies and social media platforms to talk, network, and organize against contemporary sexism, misogyny, and rape culture, and a growing number of people are willing to participate in the boycott (Mendes et al., 2018). Because the barriers to participating online are low and the Internet’s anonymity can protect women’s safety, involving users (and the general public) in preventing discrimination on the web is beneficial. Such user actions also encourage more people to use social media to combat discrimination across the network.
How is the digital platform (medium) responsible?
Digital platforms also play a vital role as mediums. A medium is a channel of communication, but it can also act as a superintendent when offensive material is shared. Platforms equip their users with various technical tools for managing, reporting, mediating, and preventing discriminatory information, such as sensitive media filters, disabled harmful links, blacklists, “dislike” buttons, notifications, and resets. These mechanisms are limited, however, in that they leave little room for transparent and open discussion about why something is considered offensive (Matamoros-Fernández, 2017). The platform seeks to support and implement positive online communication behavior and to minimize aggressive and anti-social behavior. In this process, platform moderation has paralleled historical forms of negative media regulation aimed at protecting individuals, especially vulnerable audiences such as children, from inappropriate, illegal, obscene, or potentially harmful material (Flew et al., 2019). Matamoros-Fernández (2017) argues that while sensitive media filters can help remove some racist information and give consumers more targeted content, they are ultimately only a band-aid solution. A disabled link is one that the platform makes unavailable to users; however, if the content of the prohibited page is shared as a screenshot, it can circulate without consequence. Users can express their disapproval of prejudice by clicking the “dislike” button, but this action could also prompt the algorithm to surface content that runs counter to platform advertisers’ interests or that promotes discriminatory behavior.

For example, a user documented the onslaught of chimpanzee photos on Adam Goodes’ Wikipedia page by posting a screenshot, but Twitter did not consider screenshots sufficient proof of harassment (Matamoros-Fernández, 2017). Medium management therefore plays a part, albeit one fraught with difficulties, because digital platforms and firms are tied to matters of revenue such as advertising, marketing, and attention. Relying entirely on platforms to regulate themselves has several drawbacks: the market for communication systems is substantial and nuanced because of its central role in the economy (Flew et al., 2019). Government and legal regulation is therefore necessary for the administration of digital platform mediums.
How does the law regulate digital platforms (mediums)?
The previous section shows that preventing discriminatory content by depending solely on platforms’ self-regulation is difficult and insufficient. This section will therefore examine the legal frameworks that govern the administration of online services. Since Web 2.0, consumers have increasingly relied on specialized social media websites and apps to access online material, and governments are looking for new ways to regulate the industry to curb the growing influence and power of global digital media conglomerates (Flew et al., 2019). Given that users have no way of knowing whether a platform is willingly preserving network peace or is being forced to do so by the rules, the platform’s self-management alone is clearly insufficient to prevent discriminatory content from emerging. Rather than responding to social justice or advocacy-related goals, this “invisible work” typically supports platforms’ profit-seeking and legal demands (Matamoros-Fernández, 2017). The world’s policymakers have identified this problem, and governments are working to update platform-specific legislation for the converged media environment of the twenty-first century. However, the question of what kind of regulation is necessary to achieve desirable communications policy outcomes is still up for debate (Flew, 2016).

Consider, for example, the safe harbor provisions of the US Communications Decency Act of 1996. Because they provide access to the Internet and other network services but do not act as publishers of user content in the legal sense, mediums are immune from liability for the expression of their users (Flew, 2016). This law incentivizes platforms to adopt their own regulatory measures and grants them expanded management authority. In this way, the law has an indirect but crucial effect on platform oversight, and the law and platform management play complementary roles.
Conclusion

In conclusion, ensuring that digital spaces remain free of discriminatory material is the joint obligation of users, platforms, and regulation. Gorwa’s (2019) “platform governance triangle” describes the checks and balances among the state, non-governmental organizations (NGOs), and firms. Like offline society, the network society cannot function without cooperation across many different fields. Modern digital platforms remain at peace because of the checks and balances among these three parties, and none of them is complete without the others. Legal support for platform regulation exists, but obstacles remain that researchers must continue to consider, because the problem is far from fully resolved.
References:
Gorwa, R. (2019). The platform governance triangle: Conceptualising the informal regulation of online content. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1407
Durham, M. G., & Kellner, D. M. (2006). Media and cultural studies: Keyworks (Rev. ed., Vol. 2). Wiley.
Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50. https://doi.org/10.1386/jdmp.10.1.33_1
Matamoros-Fernández, A. (2017). Platformed racism: The mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930–946. https://doi.org/10.1080/1369118X.2017.1293130
Mendes, K., Ringrose, J., & Keller, J. (2018). #MeToo and the promise and pitfalls of challenging rape culture through digital feminist activism. European Journal of Women’s Studies, 25(2), 236–246. https://doi.org/10.1177/1350506818765318