Who should be regulating problematic online content?

Problematic Online Content

With the advent of the digital age, people spend much of their time on online platforms. On popular social platforms such as Facebook, Twitter and YouTube, users often upload (or try to upload) material such as child pornography, gratuitous violence and disturbing, hate-filled messages. This harmful or dangerous material is described as Problematic Online Content (Massanari, 2017).

“Anti-Social Media?” by Intersection Digital is licensed under CC BY-NC 2.0.

 

“tumblr meme faces_poker face” by guzm4n is licensed under CC BY-NC 2.0.

 

The overlooked problem of memes

In the era of Web 2.0, everyday mimetic practices have become highly visible in the public sphere (Shifman, 2014). In many online communities, memes create a sense of solidarity, allowing community members to share common ground on a larger scale. In an era marked by “network individualism,” people use memes to express both their uniqueness and their connectivity at the same time (Shifman, 2014). This has allowed “meme culture” to take hold in our society and, with it, the ridicule of topics that were once treated as sensitive. A written statement submitted to the British Parliament stated plainly that memes can normalise bad behaviour such as trolling, humiliation and bullying.

Hateful memes are a common sight on these platforms. In 2017, one Twitter user posted an obviously fabricated story about meeting the actress Millie Bobby Brown at an airport, with a meme attached; another user followed with a story claiming that Brown had discriminated against him for being gay. Although people knew these stories were fake, they went “viral”: copycat memes were retweeted widely under the troll hashtag #TakeDownMillieBobbyBrown, alongside fabricated racist remarks attributed to Brown, and the memes were remixed and mocked, largely by LGBTQ users, “for the fun of it.” The target of this pile-on was a 14-year-old girl, and the ridiculous “joke” drove Brown to quit Twitter.

What makes memes distinctive is that the text and the image cannot be read independently; their meaning emerges only from the combination, which makes automated detection on platforms challenging. Existing research on the automatic detection of hateful content has focused on text-based content, ignoring multimodal content such as memes; these hateful memes frequently target particular groups and people based on their race, religion, gender, or physical features and dehumanise them in the process (Lee et al., 2021). To curb the spread of hateful memes, Facebook has released a sizeable dataset of such memes as part of a contest, the “Hateful Memes Challenge,” which rewards researchers for proposing hateful-meme classification methods.
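As a rough illustration of why multimodality matters, the sketch below shows a minimal “late fusion” classifier in PyTorch that combines a text embedding with an image embedding before classifying. The encoders, feature dimensions and thresholds are placeholders for illustration only, not the models actually used in the Hateful Memes Challenge.

```python
# Minimal sketch of late-fusion hateful-meme classification.
# All dimensions and encoders here are illustrative stand-ins.
import torch
import torch.nn as nn

class MemeClassifier(nn.Module):
    def __init__(self, text_dim=256, image_dim=512, hidden_dim=128):
        super().__init__()
        # Stand-ins for real encoders (e.g. a fine-tuned language model and a CNN).
        self.text_encoder = nn.Sequential(nn.Linear(300, text_dim), nn.ReLU())
        self.image_encoder = nn.Sequential(nn.Linear(2048, image_dim), nn.ReLU())
        # The fusion head is what makes the model multimodal: it sees both
        # modalities together, so benign text over a hateful image (or the
        # reverse) can still be flagged.
        self.fusion = nn.Sequential(
            nn.Linear(text_dim + image_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 2),  # hateful / not hateful
        )

    def forward(self, text_features, image_features):
        t = self.text_encoder(text_features)
        v = self.image_encoder(image_features)
        return self.fusion(torch.cat([t, v], dim=-1))

# Toy usage: pretend we already extracted 300-d text and 2048-d image features.
model = MemeClassifier()
logits = model(torch.randn(4, 300), torch.randn(4, 2048))
print(logits.softmax(dim=-1))  # per-meme probabilities for the two classes
```

The point of the fused head is simply that a text-only or image-only classifier would miss memes whose hateful meaning arises from the juxtaposition of the two.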

 

“FWIW I would actually read conservative social platforms today but the algorithm gods don’t think I’d be interested.” by kate.gardiner is licensed under CC BY-NC 2.0.

Who should regulate Problematic Online Content?

In the world of the Internet, it is difficult to pursue online “publishers” directly for illegal or unlawful acts, including the proliferation of prohibited content such as pornography and pirated material. In the United States, lawsuits over defamation, public access to private documents and hate speech are increasingly being brought not against individual users but against the Internet service providers that distribute the content (Gillespie, 2018). Social media companies are criticised both for failing to remove harmful content and for removing it. As their capabilities develop, the major platforms are getting better at finding and assessing offensive content for removal: for the most prominent platforms (Facebook, YouTube and Twitter), only a small portion of suspensions and removals come through user, law enforcement and government referrals, while the great majority of offensive content is discovered by sophisticated algorithms (Bishop & Macdonald, 2019). A small number of powerful private technology corporations thus hold most of the power online, which gives them the authority to decide who may express their views and who may not.

To enforce their rules about appropriate content, social media platforms have for the most part relied on a combination of user reports, teams of human content moderators and artificial intelligence to detect violations. However, reviewers suffer the trauma of sifting through huge volumes of disturbing posts, and social media companies do not invest resources evenly across the many markets they serve. Because Facebook seeks to maintain its access to domestic markets, and because it believes that content criticising religion, public figures or political ideology could carry legal liability and provoke destructive conflict, activists and journalists have found their content censored and removed in many countries and in disputed areas such as the Palestinian territories, Kashmir and Crimea. According to a ProPublica report, Facebook’s policies are unclear to users and “the company’s hate speech policies tend to favour elites and governments over grassroots activists and minority groups.”
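The mix of signals described above can be pictured as a simple triage pipeline: an automated score and a count of user reports decide whether a post is removed, sent to human moderators, or left up. The sketch below is purely illustrative; the thresholds and field names are invented, and no platform publishes its actual rules.

```python
# Illustrative content-moderation triage, not any platform's real policy.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    ai_hate_score: float   # 0.0-1.0 score from an automated classifier
    user_reports: int      # number of times users have flagged the post

def triage(post: Post) -> str:
    """Route a post to one of three outcomes: remove, human review, or keep."""
    if post.ai_hate_score >= 0.95:
        return "remove"            # high-confidence automated removal
    if post.ai_hate_score >= 0.6 or post.user_reports >= 3:
        return "human_review"      # ambiguous cases go to human moderators
    return "keep"                  # low risk: leave the content up

# Toy usage
queue = [Post("a", 0.97, 0), Post("b", 0.40, 5), Post("c", 0.10, 0)]
for p in queue:
    print(p.post_id, triage(p))
```

Even in this toy form, the design choice is visible: the harder the automated classifier is to trust, the more work falls on the human-review queue, which is exactly where the trauma and uneven investment described above become problems.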

 

“Scoble’s Social Media Starfish” by DBarefoot is licensed under CC BY-NC 2.0.

Conceptualising the informal regulation of online content

Other important stakeholders are involved in a range of legal, political and economic interactions, initiated by major international organisations such as the World Health Organization (Gorwa, 2019) and, after years of consumer boycotts and global activist campaigns (Sikkink, 1986), by non-governmental and industry groups that have increasingly developed their own initiatives and transnational regulatory schemes. The principal novelty of these new regulatory arrangements is the extensive participation of private actors, operating independently or through ongoing cooperation, and the correspondingly modest and mostly indirect role of the “state”: most of the arrangements are managed by companies and industry associations whose own conduct, or that of their suppliers, is the subject of regulation (Abbott & Snidal, 2009). Participants fall into three main categories: “companies,” including individual businesses as well as trade associations and other groups of businesses; “NGOs,” including a variety of social organisations, international NGOs, academic researchers, activist investors and private whistleblowers; and “states,” including national governments and supranational governmental organisations such as the EU and the UN (Gorwa, 2019).

Some governments have also made efforts to reduce the accessibility of terrorist content. In 2010, the Netherlands, the UK, Germany, Belgium and Spain sponsored the European Commission’s “Clean IT” project, which was to develop a strategy for combating terrorist content online; the project was abandoned, however, after the real-name policy it developed met with public opposition (Gorwa, 2019). Regulation also faces many potential obstacles: the relative novelty of platforms’ business models, the significant threats to free speech posed by particular governmental and authoritarian interventions, the absence of significant policy experiments, and the lack of demonstrable precedents. The Chinese government, for example, severely restricts freedom of expression and access to information online (Moynihan & Patel, 2021): content posted on social media platforms is stringently censored, and during Covid-19 platforms were especially strict, removing any sensitive content.

 

Conclusion

Content evaluation is carried out by tens of thousands of online content reviewers. Nevertheless, there is growing pressure on the most popular platforms to improve their review tools and make their review procedures more transparent. One of the biggest challenges facing platforms is designing and implementing a content review mechanism that serves one group while taking the other into account; preventing problematic content from being posted and distributed requires not only self-regulation but also governmental and multifaceted oversight.

 

 

References

Abbott, K. W., & Snidal, D. (2009). Strengthening international regulation through transnational new governance: Overcoming the orchestration deficit. Vanderbilt Journal of Transnational Law, 42, 501–578. https://wp0.its.vanderbilt.edu/wpcontent/uploads/sites/78/abbott-cr_final.pdf

 

Bishop, P., & Macdonald, S. (2019). Terrorist content and the social media ecosystem: The role of regulation. In Digital Jihad: Online communication and violent extremism (pp. 135–152). https://cronfa.swan.ac.uk/Record/cronfa52902

Gillespie, T. (2018). Regulation of and by platforms. In The SAGE handbook of social media (pp. 254–278). SAGE. https://doi.org/10.4135/9781473984066.n15

 

Gorwa, R. (2019). The platform governance triangle: Conceptualising the informal regulation of online content. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1407

 

Lee, R. K., Cao, R., Fan, Z., Jiang, J., & Chong, W. (2021). Disentangling hate in online memes. Proceedings of the 29th ACM International Conference on Multimedia. https://doi.org/10.1145/3474085.3475625

 

Massanari, A. (2017). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. https://doi.org/10.1177/1461444815608807

 

Moynihan, H., & Patel, C. (2021). Restrictions on online freedom of expression in China. Chatham House – International Affairs Think Tank. https://www.chathamhouse.org/2021/03/restrictions-online-freedom-expression-china

 

Shifman, L. (2014). Memes in digital culture (pp. 17–36). The MIT Press. http://www.jstor.org/stable/j.ctt14bs14s

 

Sikkink, K. (1986). Codes of conduct for transnational corporations: The case of the WHO/UNICEF code. International Organization, 40(4), 815–840. https://doi.org/10.1017/s0020818300027387