Should digital platforms be regulated? How should it be done?

“Cyberbullying Law and the New School Year” by CT Senate Democrats is licensed under CC BY-NC-ND 2.0.

Introduction

Today’s problems with harmful speech on digital platforms persist because the structural biases of the early Internet continue to shape modern digital culture, which remains predominantly white, male, and middle class. The Internet’s original, altruistic purpose has given way to a more self-interested outlook that is not alert to structural inequality. Moreover, a “bro culture” of indifference to minorities has emerged in Silicon Valley, exposing the inequalities within the Valley itself. One myth the tech community promotes is that the state is evil and does not represent the people, and that only the tech community represents them, because it is busy aggregating their voices through search engines and social media (Lusoli and Turner, 2021). As a result, the tech community is hostile to state regulation, which it frames as limiting users’ freedom of expression.

 

What is wrong with digital platforms today that allows harmful content to spread?

 

At this stage, citizens support freedom of speech and believe that the media can speak on behalf of the people without government interference. This contributes to the problem: most Internet users choose to remain anonymous. Anonymity can support a healthier communication environment by protecting individual privacy and allowing users to express themselves more freely, but in practice anonymous commenting also lets users post hate speech with almost no accountability or review. As a result, bullying, harassment, violent content, hate, pornography and other problematic content spread across digital platforms. Censorship and regulation of content on digital platforms should therefore be strengthened (Lusoli and Turner, 2021). These harms can be addressed by technology companies, governments and civic groups.

“Bullying” by Yulissa Lanchi is licensed under CC BY-NC-ND 2.0.

Why should tech companies regulate? How should they do it?

Self-regulation by technology companies (businesses)

Because businesses have a social responsibility, they are well placed to protect their users. Platform self-regulation can also be more efficient: companies know their own problems and can design specific solutions for specific cases. In addition, company-level regulation can be implemented quickly, so users are protected sooner, whereas state regulation requires detailed rules to be drafted, which takes considerable time.

 

Social media platforms are rapidly expanding their use of artificial intelligence, using computer code to moderate the content users post. Using algorithms to find and delete offending content from users’ news feeds, rather than having human moderators review each item individually, is a form of pre-moderation (Young, 2021). The purpose of algorithmic review is to apply the platform’s policies to content uploaded to the site and to delete prohibited content before other users can view, interact with, or share it. Automated moderation systems can prevent banned content from reaching or affecting many users, and may considerably reduce the workload of human moderators (Young, 2021). Marina Gorbis, executive director of the Institute for the Future, adds: “I hope we will develop more social bots and algorithmic filters to take out some of the trolls and hate speech. I hope we will create bots to facilitate helpful connections and possibly insert context-specific data/facts/stories that will be conducive to more positive discourse. Of course, any filter and algorithm will create questions about what is filtered out and the values embedded in the algorithm” (Rainie, Anderson and Albright, 2017).
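To make the pre-moderation idea concrete, here is a minimal sketch in Python. The score_toxicity function, the thresholds, and the placeholder word list are all hypothetical, not any platform’s real system; actual pre-moderation pipelines combine many models, detailed policy rules, and human escalation.

```python
# Hypothetical pre-moderation sketch: content is scored by an automated
# classifier BEFORE it becomes visible to other users. The classifier,
# thresholds, and word list below are illustrative placeholders only.

BLOCK_THRESHOLD = 0.9   # assumed confidence above which content is auto-removed
REVIEW_THRESHOLD = 0.6  # assumed band that is held for human review instead

def score_toxicity(text: str) -> float:
    """Stand-in for a trained classifier; returns a score in [0, 1]."""
    banned_terms = {"slur_a", "slur_b"}  # placeholder term list
    words = text.lower().split()
    hits = sum(1 for w in words if w in banned_terms)
    return min(1.0, hits / max(len(words), 1) * 5)

def pre_moderate(post: str) -> str:
    """Decide what happens to a post before it is published."""
    score = score_toxicity(post)
    if score >= BLOCK_THRESHOLD:
        return "blocked"          # never shown to other users
    if score >= REVIEW_THRESHOLD:
        return "held_for_review"  # queued for a human moderator
    return "published"

print(pre_moderate("hello everyone"))        # -> published
print(pre_moderate("slur_a slur_a slur_b"))  # -> blocked
```

The key design point this sketch illustrates is the ordering: the policy check runs before publication, so other users never see content the system blocks, which is exactly what distinguishes pre-moderation from the reactive approach described next.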

 

In addition, media platforms often take a post-hoc approach to moderating content. Under this approach, content deemed inappropriate or in violation of platform policies is first reported by platform users, and the reported content is then reviewed by a team of moderators and removed as appropriate (Young, 2021). Many social networks also use some form of algorithm to detect and block bad content as quickly as possible, before many users are likely to encounter it. It is usually the larger and better-resourced companies, such as Facebook, Instagram and YouTube, that run their own algorithms to automatically find suspicious content, while other platforms depend on human review of flagged content (Young, 2021).
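The reactive workflow can be sketched the same way: nothing is checked up front, and an item only enters the review pipeline once a user reports it. The review_queue, the auto_detector hook, and the routing rule below are assumptions for illustration, not a description of any specific platform’s system.

```python
from collections import deque

# Hypothetical post-hoc (reactive) moderation sketch: content is published
# immediately and is only reviewed after a user reports it. The names
# auto_detector and review_queue are illustrative assumptions.

review_queue = deque()  # holds (content_id, reason) pairs awaiting human review

def auto_detector(content: str) -> bool:
    """Stand-in for an automated classifier that larger platforms may run."""
    return "attack" in content.lower()

def handle_report(content_id: str, content: str, reason: str,
                  has_automation: bool) -> str:
    """Route a user report: auto-remove if a detector flags the item,
    otherwise queue it for a human moderator."""
    if has_automation and auto_detector(content):
        return "removed_automatically"
    review_queue.append((content_id, reason))
    return "queued_for_human_review"

print(handle_report("p1", "a personal attack on someone", "harassment", True))
print(handle_report("p2", "a borderline post", "harassment", False))
print(list(review_queue))
```

The has_automation flag stands in for the resource gap the paragraph describes: well-resourced platforms can short-circuit some reports automatically, while smaller ones send everything to a human queue.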

“Creative Director at Buck and Digital Design Grad Ryan Honey Visits VFS” by vancouverfilmschool is licensed under CC BY 2.0.

 

 

Why should the state regulate? How should it do so?

Platform self-regulation is not strong enough

Indeed, many platforms may want to avoid moderation altogether, seeking headlines as bastions of free expression. Platforms with this philosophy have attracted users who are frustrated with the stricter moderation policies of well-known platforms such as Twitter, Reddit and Tumblr, and who look for platforms with lax or non-existent content policies (Young, 2021). Frank Pasquale, the author of The Black Box Society, commented: “The goal of every major Internet platform is profit. Many times, hatred, anxiety and anger drive people to engage with the platform. Anything that adds advertising revenue will not only be allowed but it will also be encouraged, except in some egregious cases.” Without strong incentives or supervision mechanisms, therefore, profit-driven companies may not regulate themselves vigorously.

 

Government regulation, by contrast, carries stronger enforcement power, and supervision backed by law can therefore be more effective.

 

Platforms seem eager to demonstrate their willingness and ability to enforce government priorities through private regulation (Bloch-Wehba, 2021). At the same time, governments (especially outside the US) have worked to push platforms to tackle illegal content more aggressively. After the massacres at two Christchurch mosques in March 2019, governments introduced and passed new legislation to impose penalties on online platforms that fail to remove illegal content. Both law enforcement and platforms see the potential for AI and other automated technologies to enhance compliance with these measures and accelerate takedowns (Bloch-Wehba, 2021). In Australia, for example, the law now imposes criminal penalties on online service providers who do not “promptly” remove “abhorrent violent material.” In Germany, the Network Enforcement Act (sometimes called the “Facebook Law”) came into effect on January 1, 2018, imposing hefty fines of up to €50 million (C$75 million) on social media firms that fail to delete “manifestly illegal” content, including defamation, incitement to violence and hate speech, within 24 hours of the content being reported. If it is not immediately clear whether content is illegal, the company has seven days to investigate and decide (Elghawaby, 2018). Because the formal decision is issued only after the company has already blocked the spread of the content, with a secondary screening carried out afterwards, the sequence can be inefficient.

“dinero facebook” by clasesdeperiodismo is licensed under CC BY-SA 2.0.

Why should individuals regulate? How can they do so?

Raising civic awareness

Individual monitoring of harmful content on digital platforms can raise civic awareness and build public support for regulation. In practice, however, individuals are less effective at blocking offending content. For one thing, effective moderation requires professional moderators who know the platform well. They must be experts in the tastes of the site’s presumed audience and have cultural knowledge of the platform’s location and audience. Both the headquarters of a technology platform and its audience can be geographically and culturally remote from where staff view and moderate user-generated content. Moderators must have linguistic competence in the content’s language (which may be a learned or second language for screeners), be familiar with relevant laws in the site’s country of origin, and have expertise in the user guidelines and other extremely detailed platform-level rules about what is and is not allowed (Gillespie, 2018).

 

Second, user complaints about content such as harassment can also indirectly prompt social platform companies to screen content. When site moderators, other users, external parties (for example, a company claiming its material has been misappropriated), or anyone who finds material uncomfortable or concerning files a complaint about a user’s content, the site’s “flagging” mechanism is triggered, prompting professional moderators to review the material and potentially filter it out (Gillespie, 2018). Platforms rely on users to report new or original content that violates community guidelines. In theory, user reporting compensates for the limited resources available for manual tagging, and allowing flagging and removal also serves the public interest. Individual users therefore mainly report offending content when they see it, rather than using artificial intelligence or algorithmic systems to block content, as companies do.
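As a rough illustration of this flagging mechanism, the sketch below counts user reports against a piece of content and escalates it to the professional moderation team once an assumed threshold is reached. The threshold and data structures are hypothetical; real platforms weigh reports by reporter reliability, content type, and severity rather than simply counting them.

```python
from collections import defaultdict

# Hypothetical user-flagging sketch: individual users report content they find
# harmful, and enough reports escalate it to the professional moderation team.
# The threshold of three reports is an arbitrary illustrative choice.

ESCALATION_THRESHOLD = 3
report_counts = defaultdict(int)  # content_id -> number of user reports so far

def flag_content(content_id: str, reporter_id: str, reason: str) -> str:
    """Record one user report and escalate once enough reports accumulate."""
    report_counts[content_id] += 1
    if report_counts[content_id] >= ESCALATION_THRESHOLD:
        return f"{content_id}: escalated to human moderators ({reason})"
    return (f"{content_id}: report from {reporter_id} logged "
            f"({report_counts[content_id]} so far)")

for user in ("u1", "u2", "u3"):
    print(flag_content("post_42", user, "harassment"))
```

This mirrors the division of labour described above: individuals supply the reports, while the decision to remove content still rests with the platform’s moderators.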

 

Conclusion

Regulation by states, platforms, and individuals each has advantages and disadvantages when it comes to preventing bullying, harassment, violent content, hate, pornography, and other harmful material from spreading on digital platforms, but ultimately effective control depends on laws and regulations or on the rules of industry organizations.

 

 

Reference List:

Lusoli & Turner. (2021). “It’s an Ongoing Bromance”: Counterculture and Cyberculture in Silicon Valley—An Interview with Fred Turner. Journal of Management Inquiry, 235–242. https://doi-org.ezproxy.library.sydney.edu.au/10.1177/1056492620941075

Gillespie. (2018). All Platforms Moderate. In Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (pp. 1–23). https://doi-org.ezproxy.library.sydney.edu.au/10.12987/9780300235029

Young. (2021, January). How Much is Too Much: The Difficulties of Social Media Content Moderation. http://dx.doi.org/10.2139/ssrn.3792647

Bloch-Wehba. (2021, June). Content Moderation as Surveillance. Berkeley Technology Law Journal, Vol. 36, Forthcoming; Texas A&M University School of Law Legal Studies Research Paper No. 21-37. https://ssrn.com/abstract=3872915

Elghawaby. (2018). Social Media’s Self-regulation Isn’t Enough. https://www.cigionline.org/articles/social-medias-self-regulation-isnt-enough/?utm_source=google_ads&utm_medium=grant&gclid=Cj0KCQjwnP-ZBhDiARIsAH3FSRe2nK-7ESbqHWqcOidBKPEm-qfQ9G8017kYTd7mLM7k03jDO06hOvwaAk4cEALw_wcB

Rainie, Anderson & Albright. (2017). The Future of Free Speech, Trolls, Anonymity and Fake News Online. https://www.pewresearch.org/internet/2017/03/29/the-future-of-free-speech-trolls-anonymity-and-fake-news-online/