Content moderation on digital platforms

Content moderation on digital platforms is an ongoing and evolving practice of actively monitoring and managing user-generated content across online services. It is carried out by removing problematic content, labelling it with warnings, or allowing users to block and filter content themselves (Grygiel & Brown, 2019). It therefore plays a crucial role in maintaining a safe and respectful online environment. Content moderation now takes place across many digital media platforms, as I describe below.

On Facebook, users appear to have found ways to circumvent content moderation policies. Many users frame this circumvention as a technical challenge rather than a policy violation, which suggests that, in the absence of a strong compliance culture, the deterrence model of content moderation employed by Facebook may be ineffective (Gillett et al., 2023). Platforms should therefore cultivate users’ sense of social compliance online in order to create safer and more inclusive digital environments.

“facebook” on Facebook offices on University Avenue by pshab, CC BY-NC 2.0

References:

Grygiel, J., & Brown, N. (2019). Are social media companies motivated to be good corporate citizens? Examination of the connection between corporate social responsibility and social media safety. Telecommunications Policy, 43(5), 445-460. https://doi.org/10.1016/j.telpol.2018.12.003

Gillett, R., Gray, J. E., & Valdovinos Kaye, D. B. (2023). “Just a little hack”: Investigating cultures of content moderation circumvention by Facebook users. New Media & Society. Advance online publication. https://doi.org/10.1177/14614448221147661