Who’s Responsible for Problematic Content?

Thurs 9am CC

“bad-cyberbully” by J_O_I_D under CC BY 2.0

Bullying, harassment, violent content, hate speech, pornography and other problematic content circulates on digital platforms. Who should be responsible for stopping the spread of this content, and how?

Responsibility for moderating problematic content should not fall on any one group or party; rather, the level of responsibility fluctuates case by case and across different platforms. The three main actors are individuals and groups, as the users of platforms whose content is moderated; the digital platforms and companies, as distributors and curators of the user experience; and governments, which control the creation and enforcement of law. These parties bear differing levels of responsibility depending on the situation and their involvement in the creation, distribution and proliferation of problematic content, but all are involved to varying degrees, as each has different means of exerting power and influence to reduce or prevent its spread. Drawing on the existing practices and relations between platforms and users (Gillespie, 2017; 2018), the use of automation in content moderation (Gillespie, 2020), and the shift towards government intervention (Gorwa, 2019), this essay analyses practices of moderating problematic content, showing the different levels of involvement of each acting party. It also examines whether the actions each has undertaken amount to reactionary responses or preventative measures.

“Social Media 01” by Rosaura Ochoa under CC BY 2.0

Digital platforms, as distributors and curators of the content users experience, have a responsibility to their users, as well as to stakeholders, to address problematic content. A balance must be struck between moderation that is too strict and moderation that is too lenient, which platforms must find in order to keep the majority of their audience on and using their site (Gillespie, 2017, pp. 262-263). This balance concerns users feeling safe and able to express themselves freely on the site without being targeted, either by other users or by the platform itself. Platforms maintain written rules and agreements covering conduct and the types of content prohibited on their site, with the expectation that users follow them, and as a working guideline for identifying offending content. Platforms also develop rules in response to inappropriate content that had not previously been considered (Gillespie, 2017, p. 264). Currently, platforms allow all content to be posted before it is reviewed, as “in the West, we accept that platforms cannot review content before it is posted, reject the delay that would impose, and yet also demand that platforms respond quickly and consistently to our complaints” (Gillespie, 2017, p. 266). This is an issue, as the task of identifying problematic content, whether before or after it has been posted, has been carried out by outsourced human labour, often without the original social and cultural context that would determine whether it is truly offending (Gillespie, 2017, pp. 266-267). Automated and machine learning systems designed to help manage this content and reduce the harm to labourers have also been inconsistent without the initial input of those labourers, while lacking the ability to adapt to and understand new contexts (Gillespie, 2020).
The next step forward would involve machine learning algorithms supporting human moderators, while further entrenching the values of users, particularly the most vulnerable, in the process of moderation, as some sites already do.

“The IT-professional” by cindayra under CC BY-NC-SA 2.0

The users of platforms are the ones whose content is moderated, and who are on the receiving end of negative experiences when problematic content circulates, but they are able to exert influence through the platform. Platforms enlist users to help moderate content through flagging systems, or through tools for rating and filtering their own and others’ content. Flagging allows users to report problematic content they have come across to the platform, which then reviews it and deals with it accordingly. However, Gillespie (2017, p. 268) notes that flagging is optional and that only the most motivated or compelled will flag, allowing offensive content to slip through and biases to arise. This can lead to platforms “either attending too much to those who complain, or too easily dismissing them as a noisy minority [or assuming] a ‘silent majority’ who have no complaints” (Gillespie, 2018, p. 12). Alternatively, tools for rating and filtering content involve all users, who can label their own content when posting and decide what content to avoid. This is considered better practice, as it enables all users to moderate their own experiences, but it may also allow inappropriate content to proliferate in communities that seek out this type of content. Practices of traditional online community moderation have also been employed in smaller forms within platforms, such as Reddit threads, Facebook groups and Discord servers, but the size and volume of major platforms make these practices harder to apply at wider reach (Gillespie, 2017, p. 265). This is because such practices rely on volunteer moderators with in-depth knowledge of the community, its people and its codes of conduct, who are better able to judge what is acceptable and to settle disagreements on an individual basis.
Users of platforms thus have avenues for helping moderate content and for controlling aspects of their online experience, but these hinge on platforms being responsive to, and providing the affordances for, such systems of user feedback; those most affected instead turn to external governance for adjudication.

“Legal Gavel & Open Law Book” by Blogtrepreneur under CC BY 2.0

Governments can influence the actions of platform companies and individuals by providing an external, neutral perspective on, and authority over, inappropriate behaviour on social media sites. Gorwa (2019) argues that there is a shift towards increased government intervention in platform governance, through a series of informal and formal measures at different levels of government and private regulatory bodies. Platform companies are currently governed through laws and regulations, which governments enforce through legal and financial penalty, but passing and enforcing these regulations is difficult, as major platform companies lobby for arrangements more beneficial to themselves (Gorwa, 2019, p. 4). For example, the Sharing of Abhorrent Violent Material bill was passed in Australia in 2019 in response to the Christchurch shooting, penalising platforms for failing to alert police to violent content within a reasonable timeframe of its posting (Karp, 2019). Companies responded by criticising its vagueness about who would be penalised, and the impossibility of being aware of all violent content posted by users. Private organisations have also been established to govern platforms through codes of conduct, with a multistakeholder governance model arising when other actors such as state governments, non-governmental organisations and citizens are involved. However, Gorwa (2019, p. 14) notes that “the governance of online content on platforms is a far less multistakeholder undertaking” compared to the internet as a whole, arguing that citizens and users should be an integral part of the process. Individuals affected by targeted, inappropriate content will inevitably turn to external judicial processes, particularly when the platform itself has less of a role in the constitution of the content, as with private messaging services.
A case of cyberbullying through Snapchat in North Queensland demonstrates how the platform’s transient messages could be used to harass another person while removing the evidence; the sender was nonetheless charged, as Snapchat constitutes a carriage service or electronic communication method under Australian law (Rigby, 2018). Governments are therefore also responsible for helping reduce the spread of problematic content, addressing both platforms and individuals through legal measures.

“Monitoring of school teaching online.” by Nenad Stojkovic under CC BY 2.0

Platforms, users and governments all have different levels of interactivity with, and therefore responsibility for addressing, problematic content and inappropriate behaviour on digital networking sites. Platforms have a responsibility to their users and stakeholders to create environments that encourage continued usage, upholding values of free speech and safety, yet they rely on exploited human labour or ineffective automated moderation systems to do so. Users are responsible for their own conduct, and are able to help platforms by moderating their own and others’ content through flagging and rating systems, or community moderation practices, but these measures are inconsistent and unsustainable on their own. Governments and external regulatory bodies provide codes and standards that both platforms and users should adhere to, but involve the other two actors less in that process due to competing interests. Each of these groups has its own shortcomings individually, but collaboration, and the further inclusion of civil society and the values of users in decision-making, would help create a cohesive effort towards reducing the circulation of problematic content.

Reference List

Gillespie, T. (2017). Regulation of and by platforms. In J. Burgess, A. Marwick & T. Poell (Eds.), The SAGE handbook of social media (pp. 254-278). SAGE.

Gillespie, T. (2018). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press. https://doi-org.ezproxy.library.sydney.edu.au/10.12987/9780300235029 

Gillespie, T. (2020). Content moderation, AI, and the question of scale. Big Data & Society, 7(2). https://doi.org/10.1177/2053951720943234 

Gorwa, R. (2019). The platform governance triangle: Conceptualising the informal regulation of online content. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1407 

Karp, P. (2019, April 4). Australia passes social media law penalising platforms for violent content. The Guardian. https://www.theguardian.com/media/2019/apr/04/australia-passes-social-media-law-penalising-platforms-for-violent-content 

Rigby, M. (2018, April 5). Cairns teen charged for using Snapchat to repeatedly ‘threaten’ another girl. ABC News. https://www.abc.net.au/news/2018-04-05/cairns-girl-charged-for-using-snapchat-to-threaten-teen/9622698