Digital Platform Regulation: Who Should Be Responsible for Cyber Violence?

 

With the development of the Internet, network regulation has become a hot topic of discussion and research. Because of the Internet's complexity, regulating digital platforms is no easy task. While social platforms of all kinds flourish, the darker side of society also benefits from the hidden, virtual and anonymous character of the Internet and grows unchecked (Content Regulation in the Digital Age, 2018). Harmful content, including online violence, pornography and hate speech, spreads freely. This article discusses who is responsible for preventing such content from spreading on digital platforms, and how that responsibility has been handled over the platforms' history.

 

How negative content began to spread on digital platforms

The first thing to understand is the history of negative content on digital platforms. The earliest cyber violence dates back to the 1990s, but for a long time, owing to the anonymity of the Internet among other reasons, negative content published online was neither effectively stopped nor taken seriously, and its harmful effect on people was ignored. It was not until a teenager named Megan Meier committed suicide in 2006 as a result of cyberbullying that her home state of Missouri was prompted to pass an anti-harassment law covering cyberbullying (Ruedy, 2007, p. 327). The victim's mother also set up a foundation dedicated to eliminating cyberbullying. This can be said to be the first time cyberbullying had a galvanising impact on society, and people gradually began to recognise the seriousness of the problem.

The development of the times and of technology is unstoppable. As the Internet has come into ever wider use, digital platforms have, almost since their earliest days, quietly become hotbeds for the propagation of negative information, violent content and pornography. In modern society, negative information on the Internet has become commonplace. According to research, 20 to 30 percent of women have been sexually harassed on social media platforms, and many deaths have been attributed to online violence.

Who is responsible for this content?

At the top of the list, of course, are the operators of digital platforms, such as Facebook and Twitter, which have global audiences. As managers of their platforms, they have a responsibility and an obligation to monitor and address the harmful events that may occur there.

The United States introduced Section 230 of the Communications Decency Act in 1996. It provides that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”. Framed as a legal provision supporting free speech, it became one of the foundations of the modern digital platform landscape, because in effect it means that digital platform companies act only as intermediaries of communication and do not bear primary responsibility for the content published on them. The provision also extends this protection to users of digital platforms: the author of a blog post, for example, is not responsible for the comments left under it.

 

Section 230 has been called “the law that gave us the modern internet” (Byrd & Strandburg, 2019, p. 406). On the other hand, it also makes CDA 230 a safe haven for websites that want to host controversial or political speech; while this creates a legal environment conducive to free speech, it also gives Internet companies an excuse to shirk their responsibilities.

To sum up, faced with increasingly commonplace negative content on digital platforms, it is feasible and appropriate for the public and the government to encourage digital platform companies to regulate themselves (Cusumano et al., 2021). Platforms like Facebook and Twitter, for example, already run big data algorithms that automatically block and delete sensitive content and content that matches certain keywords. In practice, however, they fall short on two counts: harmful content still slips through, while legitimate content is sometimes wrongly removed.
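To make the idea concrete, the sketch below shows in Python how a simple keyword-based filter of this kind might work. It is purely illustrative: the blocklist and the function name are invented for this example, and real platform systems are far more sophisticated, combining machine learning, image analysis and human review.

    # A minimal, hypothetical sketch of keyword-based moderation.
    # The blocklist and function name are invented for illustration and do not
    # reflect any real platform's system.
    BLOCKED_KEYWORDS = {"exampleslur", "examplehateterm"}  # placeholder terms

    def should_block(post_text: str) -> bool:
        """Return True if the post contains any blocked keyword."""
        words = (w.strip(".,!?\"'") for w in post_text.lower().split())
        return any(w in BLOCKED_KEYWORDS for w in words)

    # Posts flagged by this check would be hidden or queued for human review.
    print(should_block("this post contains exampleslur"))              # True
    print(should_block("an article raising breast cancer awareness"))  # False

The weakness of such matching is exactly what the following examples illustrate: it misses harmful content that avoids the listed words, and it can wrongly flag legitimate posts that happen to contain them.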

The UK has criticised digital platforms for not taking responsibility for online content. The Home Affairs Select Committee said in its report that a great deal of hateful and illegal far-right material spreads freely online, and that if technology companies such as Google have the ability to block and regulate it but do not actively deal with it, this can only be interpreted as highly irresponsible (House of Commons Home Affairs Committee, 2017). Such criticism is widespread: almost 75 percent of Americans think that social media content moderation in the United States is unfair and unreasonable.

On the other hand, when it comes to sensitive topics such as hate speech and race, different countries hold different social values, and Facebook's content moderation has itself become a source of controversy. Facebook's Oversight Board, for example, raised concerns about an algorithm intended to curb the spread of pornography that automatically deleted a post about breast cancer. As it turns out, AI algorithms still have many problems with content moderation, not least that they sometimes automatically remove perfectly legitimate content, which has drawn fierce reactions from users. When Elon Musk bought Twitter, he said he wanted to restore it to its pre-eminent position as a platform for free speech, and put a team of 1,500 people in charge of manual review. However, many argue that this requires a content-moderation model that genuinely reflects Musk's stated commitment to transparency and freedom of speech, because human reviewers can also be biased, and that prospect is worrying.

Governments and public authorities, as the core powers of the state, are naturally responsible for steering the country's social development and preventing the spread of vicious incidents. Content rating is one common approach: in 1984, for example, the Motion Picture Association of America introduced the PG-13 (“Parents Strongly Cautioned”) rating through its review board, a model that has since been widely adopted around the world.

Conclusion

We have seen where responsibility for the spread of negative content on digital platforms lies and what actions these agencies and authorities have taken. It must be admitted that the virtual and anonymous nature of the network makes judicial proof difficult, which places higher demands on the system of legal supervision. More relevant laws therefore need to be enacted to counter cyber violence and to strengthen the supervision and management of online speech. At the same time, legal constraint is a very complex issue, and a complete legal system should be built to prevent cyber violence, restrain violent acts as they happen and provide remedies afterwards. Companies such as Facebook and Twitter use big data algorithms to block sensitive words and keywords, yet pornographic and violent content still exists on digital platforms, some of which the authorities cannot easily block for reasons such as freedom of speech. Despite age ratings, people still find ways to reach the content they want to watch; given modern teenagers' proficiency with the Internet in particular, it is not hard for them to bypass these barriers and access content intended for older age groups.

Second, an online real-name system could be implemented, with accountability pursued after the fact. A real-name system psychologically weakens the sense of impunity of those who engage in online violence. After an incident occurs, perpetrators can be technically traced and held responsible, and the platforms on which the violence took place can be investigated and penalised, thereby reducing the incidence of online violence.

In addition, we can draw on expert opinion. Marsden's research, for example, supports co-regulation: companies and digital platforms adopt or create mechanisms to regulate their users, while democratic and legitimate government institutions recognise those mechanisms and supervise their effectiveness (Marsden et al., 2020). This not only means the mechanisms are recognised by both sides, but also avoids potential conflict and friction between the authorities and the platforms.


Reference list:

 

Byrd, M., & Strandburg, K. J. (2019). CDA 230 for a smart Internet. Fordham Law Review, 88, 405.

 

Cusumano, M. A., Gawer, A., & Yoffie, D. B. (2021). Can self-regulation save digital platforms? Industrial and Corporate Change, 30(5), 1259–1285. https://doi.org/10.1093/icc/dtab052

 

Content Regulation in the Digital Age. (2018). Association for Progressive Communications (APC). https://www.ohchr.org/sites/default/files/Documents/Issues/Opinion/ContentRegulation/APC.pdf

 

Electronic Frontier Foundation. (2022). Retrieved from https://www.eff.org/issues/cda230

 

Patel, F., & Hecht-Felella, L. (2021, February 22). Facebook’s Content Moderation Rules Are a Mess. Retrieved from https://www.brennancenter.org/our-work/analysis-opinion/facebooks-content-moderation-rules-are-mess

 

House of Commons Home Affairs Committee. (2017). Hate crime: Abuse, hate and extremism online (Fourteenth Report of Session 2016–17). https://www.parliament.uk/globalassets/documents/commons-committees/home-affairs/Correspondence-17-19/Hate-crime-abuse-hate-and-extremism-online-Government-Response-to-Fourteenth-Committee-Report-16-17.pdf

 

eSafety Commissioner. (2020). Know the facts about women online. Retrieved from https://www.esafety.gov.au/women/know-facts-about-women-online

 

Marsden, C., Meyer, T., & Brown, I. (2020). Platform values and democratic elections: How can the law regulate digital disinformation? The Computer Law and Security Report, 36, 105373. https://doi.org/10.1016/j.clsr.2019.105373

 

Megan Meier Foundation. (2022). Retrieved from https://www.meganmeierfoundation.org/

 

Zafft, R. (2022, October 9). Twitter, Facebook, Et Al: The Case For Freelance Content Moderation. Retrieved from https://www.forbes.com/sites/robertzafft/2022/10/09/twitter-facebook-et-al-the-case-for-freelance-content-moderation/?sh=29d1e744de62

 

Ruedy, M. C. (2007). Repercussions of a MySpace teen suicide: Should anti-cyberbullying laws be created? North Carolina Journal of Law & Technology, 9, 323.

 

The Bark Team. (2017, March 22). The History of Cyberbullying. Retrieved from https://www.bark.us/blog/the-history-of-cyberbullying/