Navigating Digital Platforms: Internet Governance and Content Moderation on Facebook

Introduction

In today’s society, Internet governance encompasses the oversight of online activity on social media platforms and among their users, while content moderation ensures that the content posted on those platforms is policy-compliant, appropriate, and of high quality. Internet governance in the digital age is essential to protect individuals, groups, and critical infrastructure from online threats and to ensure that networked technologies are used responsibly and securely. Internet governance and content moderation are therefore two key concepts in the evolving field of social media and online communication. Born out of the chaos of the web, social media platforms offer unprecedented opportunities for global interaction but also introduce new risks (Gillespie, 2018). To establish order in this chaotic online environment, platforms must engage in content moderation so that the Internet can function smoothly. The following sections take an in-depth look at the importance of content moderation and Internet governance on digital platforms, and then turn to Facebook as a specific case to explore its content moderation challenges and issues.

Content Moderation on Digital Platforms

Let us first consider why content moderation matters on digital platforms. It is critical to maintaining Internet security and user privacy, and it helps ensure that the content posted on platforms is legal, appropriate, and of high quality. Content moderation is an ongoing and evolving practice of actively monitoring and managing user-generated content across online platforms. It is carried out by removing problematic content, labeling it with warnings, or allowing users to block and filter content themselves (Grygiel & Brown, 2019). In this way, it plays a crucial role in maintaining a safe and respectful online environment. Fundamentally, social media platforms rest on content moderation and its related policies, which shape each platform’s identity, influence, and social impact. Content moderation strikes a complex balance between ensuring responsible online behavior and promoting users’ free expression, making it an important aspect of Internet governance in the digital age. Numerous digital platforms now conduct content moderation, and I analyze this phenomenon and specific cases of it below.

Importance of Digital Platform Regulation

In the Web 2.0 era, a small number of dominant social media platforms control the majority of search advertising and social media traffic, creating an information monopoly. This concentration can have deeply adverse effects on users, including compromised privacy, distorted market competition, entrenched monopoly power, and an erosion of trust. Because digital platforms’ business models extend beyond providing services and selling goods on the Internet, they affect competition with traditional business models and create new paths to monopoly (Flew et al., 2019). In other words, social media monopolies block competition, which can limit users’ choices, constrain free expression, and allow user data to be exploited. The particular ways in which digital platforms operate therefore require special regulation. Internet governance of digital platforms is essential to ensure fair competition, protect user privacy, and safeguard user speech.

How should the internet be regulated? - BBC Newsnight by BBC Newsnight, via YouTube

Content Moderation on Facebook

“facebook” on Facebook offices on University Avenue by pshab is licensed under CC BY-NC 2.0.

When discussing challenges in content moderation, Facebook is a useful specific case. The following is a detailed analysis of Facebook’s content moderation issues, along with some suggestions for improvement. On Facebook, users have found ways to circumvent the platform’s content moderation policies, and many describe this circumvention as a technical challenge rather than a policy violation. This suggests that, in the absence of a strong compliance culture, the deterrence model of content moderation employed by Facebook may be ineffective (Gillett et al., 2023). Platforms should therefore cultivate users’ sense of online social compliance in order to create safer and more inclusive digital environments.

Terms – Facebook by Auntie P is licensed under CC BY-NC-SA 2.0.

This means that Facebook’s existing policy is ineffective: if Facebook cannot respond when users circumvent its content moderation policy, it may lose the trust of compliant users, who will feel that their privacy and security on the platform are at risk and that its content moderation falls short. I therefore believe Facebook could improve its content moderation technology so that the platform can better identify and detect non-compliant content, reducing the opportunities for users to circumvent the policy. At the same time, the platform should develop content creators’ and users’ awareness of online social compliance, helping those who create content to understand the platform’s policies and regulations and making it easier for them to follow the rules, thereby creating a safer and more inclusive digital environment.

The Case of Pam Moore’s Hijacked Facebook Account

Facebook ‘friends’ hijacked in scam by Simon Willison is licensed under CC BY-NC 2.0.

What follows is an in-depth look at Internet regulation and content moderation on social media platforms through a specific Facebook case study. Pam Moore’s story prompts reflection on user account security, content moderation, and Internet safety education. The article “Facebook hijack victim shares experience to help other users recognize scam” recounts the case of Pam Moore, whose Facebook page was hijacked by scammers who then posted scam posts to rip off her friends (Serrano, 2023). The scammer changed her password and contact information, leaving Moore with no way to regain access to her account, and then posted scams from it in an attempt to defraud her friends. Having lost her Facebook account, Moore now warns others about hackers’ evolving tactics and urges people not to share their whereabouts online.

Analyzing the hijacking of Pam Moore’s Facebook account and the subsequent scam posts reveals several important issues about Internet regulation and content moderation on social media platforms. First, regarding account security and privacy, Pam Moore’s experience highlights how vulnerable social media accounts are to hacking and hijacking. As users use their accounts, they routinely hand over personal information to platforms such as Facebook, and when accounts are attacked and hijacked, their security and privacy are compromised. This raises questions about platform responsibility: Facebook and other social media platforms need to address the risks posed by potential weaknesses in their security policies, and they have a responsibility to ensure the security of user accounts. Platforms therefore need to prioritize the protection of user privacy and account security.

Anonymous computer hacker over abstract digital background. Obscured dark face in mask and hood. Data thief, internet attack, darknet fraud, dangerous viruses and cyber security. by focusonmore.com is licensed under CC BY 2.0.

Second, the fraudulent posts made after Moore’s account was hijacked exemplify the challenges of content moderation. These scam posts are fraudulent in nature and put users at risk of being defrauded of their money, so Facebook and other social media platforms need to take them seriously by putting effective content moderation systems in place. This ensures that platforms can identify and remove such harmful content and regulate and moderate it promptly.

On the other hand, Moore’s experience in this case underscores the need to educate users about Internet safety, especially older users who may not be aware of the risks of phishing on social media platforms. Platforms should therefore expand their user education campaigns to achieve better Internet regulation. In conclusion, Moore’s case reminds us that the risks of social media platforms must be addressed through Internet regulation and content moderation, and through active cooperation between platform operators and law enforcement agencies, to create a safer Internet environment for users.

Conclusion

In conclusion, after examining content moderation and Internet governance on digital platforms and analyzing the Pam Moore case, it is clear that social media platforms need to take positive steps to ensure user safety and content compliance. Meeting these challenges requires platforms to improve their content moderation techniques, strengthen user security and privacy protection, and enhance user education in order to create a safer and more inclusive digital environment.

Bibliography:

Gillespie, T. (2018). All Platforms Moderate. In Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (pp. 1–23). Yale University Press. https://doi.org/10.12987/9780300235029

BBC Newsnight. (2018, February 7). How should the internet be regulated? - BBC Newsnight [Video]. YouTube. https://youtu.be/8EjfqM1ka1Y?si=cmI2Zq0_RjpyGSpW

Grygiel, J., & Brown, N. (2019). Are social media companies motivated to be good corporate citizens? Examination of the connection between corporate social responsibility and social media safety. Telecommunications Policy, 43(5), 445-460. https://doi.org/10.1016/j.telpol.2018.12.003

Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50. https://doi.org/10.1386/jdmp.10.1.33_1

Gillett, R., Gray, J. E., & Valdovinos Kaye, D. B. (2023). “Just a little hack”: Investigating cultures of content moderation circumvention by Facebook users. New Media & Society. https://doi.org/10.1177/14614448221147661

Serrano, S. (2023, May 25). Facebook hijack victim shares experience to help other users recognize scam. Tahlequah Daily Press. https://www.tahlequahdailypress.com/news/facebook-hijack-victim-shares-experience-to-help-other-users-recognize-scam/article_2e8b0170-fb49-11ed-a3f1-8b9b5c7846a1.html

FraudWatch. (2023, September 11). Phishing protection and prevention services. https://fraudwatch.com/services/phishing/