Digital platforms must be held responsible for toxic content

Assignment 2

(‘facebook data security’ by Stock Catalog is licensed with CC BY 2.0)

The challenge of toxic content on digital platforms

Almost every digital social platform is awash with misogynistic, violent, pornographic, harassing, and other toxic content. This content degrades the user experience and undermines the credibility of digital platforms. Twitter's U.S. user base, for example, has been shrinking year after year. Twitter and its analysts attribute the decline to competition, and the company has rolled out many new features designed to capture users' imagination and interest and win them back. Yet the numbers show that these features have not stemmed the losses; instead, Twitter has come to resemble a flashing Las Vegas marquee filled with spam. The real reason for the decline is that Twitter is saturated with toxic content that users will not tolerate, so they simply leave. If digital platforms want to retain users and stay profitable, moderating their content is therefore critical.

Actions taken by digital platforms to stop toxic content

To stop the spread of toxic content, digital platforms use computational tools to filter the content users upload: automatic banning of certain words, "skin filters" that detect whether videos and images show bare flesh, and the flagging and removal of copyrighted material. While computational tools can quickly filter out large amounts of toxic content, they still require human intervention, especially for content flagged by users. Moderation workers remain necessary because an emotionless machine can misjudge content; bare flesh, for instance, does not always mean pornography (Roberts, 2019).
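As a rough illustration of the kind of first-pass automated filtering described above, the short Python sketch below combines a banned-word check with a "skin filter" threshold and routes borderline cases to human review. The word list, the threshold value, and the classify_upload function are hypothetical placeholders for this essay, not any platform's actual system.

# Minimal, hypothetical sketch of first-pass automated moderation.
# BANNED_WORDS and SKIN_SCORE_THRESHOLD are illustrative assumptions.

BANNED_WORDS = {"slur1", "slur2"}        # placeholder terms a platform might block
SKIN_SCORE_THRESHOLD = 0.8               # assumed cut-off for a "skin filter" score

def classify_upload(text: str, skin_score: float) -> str:
    """Return 'block', 'review', or 'allow' for an uploaded post."""
    words = set(text.lower().split())
    if words & BANNED_WORDS:
        return "block"                   # automatic word ban
    if skin_score >= SKIN_SCORE_THRESHOLD:
        return "review"                  # bare flesh is not always pornography,
                                         # so route to a human moderator instead
    return "allow"

print(classify_upload("hello world", skin_score=0.9))  # -> review

The point of the sketch is that the automated rules only sort content into coarse buckets; anything ambiguous still ends up in front of a human moderator.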

(‘Youtube’ by Esther Vargas is licensed with CC BY 2.0)

Another key issue with platform content moderation is the sheer volume of content. Users upload almost 400 hours of video to YouTube every minute, a workload that is enormous even with the help of AI. To cope, the platform relies on users to flag video content they find objectionable: only when a video is flagged by the AI as problematic, or receives many user reports, is it passed to human reviewers (Flew et al., 2019). Furthermore, to keep human review available around the clock, platforms outsource the work and run it in shifts, with a team in India coming online when the US content reviewers go off duty (Roberts, 2019).
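The triage rule just described, where a video reaches human reviewers only if the automated system flags it or enough users report it, might look roughly like the following Python sketch. The report threshold and the Video structure are assumptions made purely for illustration.

# Rough sketch of flag-based triage into a human review queue.
# REPORT_THRESHOLD and the Video fields are illustrative assumptions.

from dataclasses import dataclass

REPORT_THRESHOLD = 10   # assumed number of user reports that triggers review

@dataclass
class Video:
    video_id: str
    ai_flagged: bool
    user_reports: int

def needs_human_review(video: Video) -> bool:
    # Escalate if the AI flagged it OR enough users reported it
    return video.ai_flagged or video.user_reports >= REPORT_THRESHOLD

videos = [Video("a1", False, 3), Video("b2", True, 0), Video("c3", False, 25)]
review_queue = [v.video_id for v in videos if needs_human_review(v)]
print(review_queue)  # -> ['b2', 'c3']

A rule like this keeps the human workload manageable, but it also means harmful content that is neither machine-flagged nor widely reported can stay online.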

Hate, harassment, and other toxic content continue to spread across digital platforms, and platforms must be held accountable for this because their self-regulation fails in several ways.

1. Over the past 20 years, social platforms have used Section 230 as a shield against regulation, which has contributed to the proliferation of problematic content.

2. Social platforms are self-regulating, so each platform issues community guidelines for its users. However, some of the problems on platforms may arise because self-regulation follows the logic of hegemony. An important strand of Internet culture is its entrepreneurial culture, which is profit-oriented and money-chasing. Social platforms therefore choose to ignore marginal viewpoints in order to defend dominant interests, and subjective content moderation can be used to promote patriarchal social interests to the detriment of women who demand equal access to platforms (Nurik, 2019).

In 2017, the American feminist comedian Alana, fed up with harassment and sexism from male users, posted "Men Are Scum"; a user subsequently reported the post as hate speech, and Alana's account was banned for 30 days. In support of Alana, 500 comedians posted "Men Are Scum" on Facebook, and their accounts were all banned as well. Meanwhile, a harasser threatened Alana's friend with a photoshopped image of her face after the assault, and she could not get the website to remove the horrible photo until the media intervened (Nurik, 2019). Facebook's content moderation standard is thus asymmetrical and unfair: women are blocked for defending their rights, while men are protected when they harass and threaten women.

(‘Cyberbullying, would you do it?’ by kid-josh is licensed with CC BY 2.0)

3. Globally, most of the platforms' technical employees are educated white males, and many are libertarians, which can lead to a homogeneity of views and opinions (Gillespie, 2018). Moderation work on social platforms is also mostly outsourced: platforms can hire moderation workers at low cost, but this creates a serious problem in that staff must enforce the language and cultural norms of the platforms' customers, norms which may run counter to their own (Roberts, 2019).

How can toxic content be stopped?

Platforms

Data show that in 2017 men made up 65% of Facebook's global workforce, and 89% of its U.S. workforce was white or Asian (Nurik, 2019). Facebook therefore has a racial and gender imbalance, and it should hire people from different countries, backgrounds, cultures, religions, beliefs, genders, and races around the world.

Because platforms may sacrifice marginal interests to dominant ones, government involvement becomes especially important.

 Government intervention

(‘Bundestag’ by malditofriki is licensed with CC BY 2.0)

In 2017, the Bundestag passed the Network Enforcement Act (NetzDG), which requires social platforms to remove hate speech and extremist content or face hefty fines of up to 50 million euros. Under this law, Facebook was fined $2.3 million in 2019 for filing an incomplete report on hate speech complaints. Although NetzDG can push platforms toward fairer content moderation, it has been criticized by experts, who argue that the law seriously undermines free speech.

User backlash

Users who experience asymmetrical treatment on a platform can put pressure on it through activism and demand fair content moderation. Women, for example, carry significant weight with platforms because they bring in substantial advertising revenue, and platforms will overhaul content regulation when that influence is exercised: women can join together to protest against a site's sexism, boycott Facebook, or leave the platform altogether. In 2013, female Facebook protesters launched a petition demanding that Facebook take stricter action against the harassment of women. At first they tried to negotiate with Facebook politely but got nowhere, so they sent advertisers screenshots showing their ads placed next to abusive images of women. As ad revenue declined, Facebook became willing to negotiate, which ultimately allowed the protesters to participate in Facebook's review process and led to increased training and frequent updates to the community guidelines (Nurik, 2019).

Conclusion 

Platforms often invoke Section 230 to avoid regulation, but it is not a good excuse. When bullying, harassment, violent content, hate, and other problematic content appear on a platform, the platform must take responsibility and act to stop it, not simply to meet legal requirements but also to avoid losing the users who trust it. Platforms should hire more diverse staff to carry out moderation and avoid asymmetrical enforcement. Government involvement in content moderation, however, is not a good solution, as it limits free speech. Users faced with asymmetrical moderation standards can instead apply pressure through ongoing activism.

 

References:

Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33-50. https://doi.org/10.1386/jdmp.10.1.33_1

Gillespie, T. (2018). All platforms moderate. In Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 1-23). Yale University Press.

Nurik, C. (2019). "Men Are Scum": Self-regulation, hate speech, and gender-based censorship on Facebook. International Journal of Communication, 13(2019), 2878-2898.

Roberts, S. T. (2019). Understanding commercial content moderation. In Behind the screen (pp. 33-72). Yale University Press.