How to Manage Internet Spam and Bad Information

‘Spam’ by Zduncan is licensed under CC BY-SA 2.0

In recent years, the spread of pornography, false information, and harmful content such as depictions of violence in cyberspace has become a major problem plaguing network development worldwide.

The United States, European Union member states, and other developed economies have taken measures to strengthen the regulation of online social platforms, crack down on false information online, and target extremist ideas and content in order to manage harmful information on the Internet.

Drawing on these practices and experiences, we can take away lessons in four areas: continuously improving the legal system governing harmful online information, focusing on rectifying extremism and other harmful content, strengthening the primary responsibility of Internet enterprises, and actively promoting transnational cooperation.

  1. Classification of harmful information on the Internet

Harmful information on the Internet falls mainly into two categories. The first is reactionary, pornographic, superstitious, abusive, or leaked confidential information that offends social morality and harms the public; the second is malicious code that threatens the Internet itself and the security of users’ computers, such as malicious controls and computer viruses.
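
To make this two-part taxonomy concrete, here is a minimal Python sketch of how a governance system might represent the two categories; the enum and member names are illustrative, not drawn from any particular law or system.

```python
from enum import Enum

class HarmfulInformation(Enum):
    """Category 1: content that offends social morality and harms the public."""
    REACTIONARY = "reactionary"
    PORNOGRAPHIC = "pornographic"
    SUPERSTITIOUS = "superstitious"
    ABUSIVE = "abusive"
    CONFIDENTIAL_LEAK = "confidential_leak"

class MaliciousCode(Enum):
    """Category 2: code that endangers the network or users' computers."""
    MALICIOUS_CONTROL = "malicious_control"  # e.g., a trojanized browser control
    COMPUTER_VIRUS = "computer_virus"
```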

  2. Subjects involved in network information management

    ‘Information Management II’ by Acellus is licensed under CC BY-SA 2.0

The current governance of harmful information on the Internet suffers from an insufficient legal basis, a fragmented governance system, and governance measures that have yet to form a synergy. Governance should therefore adhere, in a targeted manner, to the basic principle of comprehensive governance according to law.

Realizing this principle requires not only legal, technical, and institutional infrastructure, but also, within the framework of the rule of law, the integrated application of a variety of legal means, bringing into play the synergy of state control, market self-regulation, social supervision, and international cooperation. These specific measures for comprehensive governance according to law are a powerful response to the current problems in governing harmful information.

  3. Strengthening the regulation of online social networking platforms

On January 1, 2018, Germany’s Network Enforcement Act (NetzDG, often translated as the Social Media Management Act) came into force, imposing strict regulatory requirements on social networking platforms that provide content services, including Facebook and Twitter. Under the law, social networking platforms with more than 2 million users in Germany are responsible for cleaning up content involving defamation, slander, and incitement to violence: clearly illegal statements must be removed or blocked within 24 hours of being reported, while controversial statements must be dealt with within 7 days of reporting; otherwise, platforms face fines of up to 50 million euros (Human Rights Watch, 2018). In April 2019, the UK released the Code of Practice for Providers of Online Social Media Platforms and the Online Harms White Paper, which call for enhanced self-regulation of social networking platforms and advocate that social media platforms take responsibility for regulating undesirable speech posted on their platforms. The White Paper focuses on online platforms that allow users to create and share content on their own, including Twitter, Facebook, Instagram, forums, webpages, and search engines, and mandates that social media sites remove harmful content including violence, false information, and cyberbullying. The White Paper also states that the British government will legislate the legal obligations of social media companies while establishing an independent regulator for social media, with the power to issue large fines to companies that violate the law, require them to shut down their pages, and even hold company executives personally responsible (LegalZoom, 2011).
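
A minimal sketch of how a platform might track these statutory deadlines, assuming the 24-hour/7-day rules summarized above; the function and classification names are hypothetical, not taken from the law itself.

```python
from datetime import datetime, timedelta

# Takedown windows as summarized above: clearly illegal content within
# 24 hours of a report, controversial content within 7 days. The
# classification labels here are illustrative.
DEADLINES = {
    "clearly_illegal": timedelta(hours=24),
    "controversial": timedelta(days=7),
}

def takedown_deadline(reported_at: datetime, classification: str) -> datetime:
    """Latest time by which the platform must act on a user report."""
    return reported_at + DEADLINES[classification]

reported = datetime(2018, 1, 1, 9, 0)
print(takedown_deadline(reported, "clearly_illegal"))  # 2018-01-02 09:00:00
print(takedown_deadline(reported, "controversial"))    # 2018-01-08 09:00:00
```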

  4. Cracking down on online disinformation
‘France’s anti-fake news’ by Euro topics is licensed under CC BY-SA 2.0

In July 2018, France adopted the Anti-Fake News Act, which aims to stop the spread of fake news more effectively. On September 26, 2018, Europe’s major online platforms, social media giants, advertisers, and advertising operators jointly issued the EU’s first Code of Practice on Disinformation to tackle fake news and online disinformation. The Code defines “disinformation” as verifiably false or misleading information that is created, presented, and disseminated for financial gain or to deliberately deceive the public, and that may cause public harm, including threats to democratic political and decision-making processes and to public interests such as the health, environment, or safety of EU citizens (European Commission, 2018). In December 2018, US Senators introduced the Malicious Deep Fake Prohibition Act of 2018, which aims to regulate individuals who produce and distribute deeply falsified content that facilitates criminal or unlawful conduct, as well as platforms that continue to distribute content knowing that it is deeply falsified.

On June 28, 2019, a bipartisan bill called the Deepfake Report Act (DRA) was introduced in the U.S. House and Senate, aiming to reduce the harm of “deepfake” videos that use artificial intelligence (AI) to manipulate original video content. In September 2019, U.S. lawmakers introduced the Identifying Outputs of Generative Adversarial Networks Act (IOGAN Act), which directs the National Science Foundation (NSF) and the National Institute of Standards and Technology (NIST) to accelerate research on detecting manipulated and harmful videos in order to combat the growing prevalence of deepfake techniques.

  5. Focus on extremist ideas and content online

In May 2017, the Group of Seven (G7) Summit was held in Taormina, Italy. The G7 issued a statement calling on Internet service providers and social media giants to take action against extremist online content, and encouraged the industry to immediately develop and share new technologies and tools to improve the automatic detection of content inciting violence. In April 2019, the European Parliament adopted rules on addressing the dissemination of terrorist content online, which provide that if online and social media companies such as Facebook, Google, and Twitter fail to remove extremist content within one hour of a request from regulators, they can be fined up to 4% of global turnover (EU, 2022). On May 15, 2019, heads of government, department heads, and representatives of technology companies met in Paris for a special conference to discuss preventing the spread of extremism on social media.
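
As a back-of-the-envelope illustration of how that penalty scales, here is a one-line calculation assuming a hypothetical annual global turnover:

```python
# The rule summarized above: a fine of up to 4% of global turnover for
# failing to remove flagged extremist content within one hour. The
# turnover figure is hypothetical, purely to show the scale.
global_turnover_eur = 50_000_000_000  # hypothetical annual global turnover
max_fine_eur = 0.04 * global_turnover_eur
print(f"Maximum fine: {max_fine_eur:,.0f} EUR")  # Maximum fine: 2,000,000,000 EUR
```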

The conference issued the “Christchurch Call,” which calls for a collective and voluntary commitment by governments and online service providers to eliminate terrorist, extremist, and violent content and prevent the misuse of the Internet. It was announced that governments including Canada, Australia, the UK, New Zealand, Germany, France, Indonesia, India, Ireland, Italy, Japan, Jordan, the Netherlands, Norway, Senegal, Spain, and Sweden, along with the European Commission and eight technology companies – Facebook, Google, Twitter, Amazon, Microsoft, Dailymotion, Qwant, and YouTube – had signed on to the call (Wang Yu, 2019).

  6. Technical protection

Current technical legislation on network governance centres on the prevention and control of computer viruses. Yet the spread of harmful information is itself highly technical, and whether harmful information is governed through state control, market self-regulation, social supervision, or other means, technical security is an important basic condition.

‘Cyber Security Data’ by Alexandersikov is licensed under CC BY-SA 2.0

The management of harmful information must therefore be given the corresponding technical support at the legislative level.

Discovery technology includes active discovery and passive defense: active discovery mainly refers to proactively monitoring harmful information using search engines, while passive defense is based on filtering and blocking network content.
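
A minimal sketch of the two approaches in Python; the patterns and function names are illustrative only, and real systems rely on far larger curated lists and more sophisticated matching.

```python
import re

# Illustrative patterns only; real deployments use large curated lists
# and more sophisticated classifiers.
BLOCKED_PATTERNS = [
    re.compile(r"\bincitement to violence\b", re.IGNORECASE),
    re.compile(r"\bexample-banned-term\b", re.IGNORECASE),
]

def passive_filter(content: str) -> bool:
    """Passive defense: decide whether a piece of content should be blocked."""
    return any(pattern.search(content) for pattern in BLOCKED_PATTERNS)

def active_scan(search_results: list[str]) -> list[str]:
    """Active discovery: sweep search-engine results and flag matches."""
    return [text for text in search_results if passive_filter(text)]

flagged = active_scan(["a normal article", "post urging incitement to violence"])
print(flagged)  # ['post urging incitement to violence']
```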

With the help of such discovery mechanisms, harmful information can be blocked by effective technical means so that it becomes difficult to spread on the network, thereby purifying the online environment. At present, many countries have implemented content-filtering policies: the European Union, for example, has taken technical measures to deal with harmful content, enhance the practical effect of filtering software and services, and safeguard users’ right to receive information; Japan’s Ministry of Internal Affairs and Communications and NEC have jointly developed a filtering system to prevent the dissemination of information about crime, pornography, and other illegal content.

In Singapore and other countries with strict media restrictions, websites and keywords to be filtered are publicly listed, and Internet service providers (ISPs) are required to block them. The use of technical means to support the governance of harmful online information is thus a common experience across many countries.
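
A sketch of how an ISP-level filter of this kind might consult such a public blocklist; the domains and keywords below are placeholders, not actual blocked entries.

```python
from urllib.parse import urlparse

# Placeholder blocklists standing in for the publicly listed sites and
# keywords the article describes; these are not real blocked entries.
BLOCKED_DOMAINS = {"blocked-example.test"}
BLOCKED_KEYWORDS = {"banned-keyword"}

def isp_should_block(url: str) -> bool:
    """Return True if an ISP-level filter would refuse to serve this URL."""
    host = urlparse(url).hostname or ""
    if host in BLOCKED_DOMAINS:
        return True
    return any(keyword in url.lower() for keyword in BLOCKED_KEYWORDS)

print(isp_should_block("https://blocked-example.test/page"))   # True
print(isp_should_block("https://news.example/some-article"))   # False
```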

The rapid development of the Internet has made it easy for people to obtain all kinds of information; at the same time, resisting the ideological and cultural infiltration of harmful online information, such as reactionary, obscene, or pornographic content, has become an urgent problem to solve.

 

Reference list:

[1] Acellus. (2020). Information management II. https://www.science.edu/acellus/course/information-management-ii/

[2] Alexandersikov. (2017, October 7). Cyber security data protection business technology privacy concept [Image]. Dreamstime. https://www.dreamstime.com/cyber-security-data-protection-business-technology-privacy-concept-cyber-security-data-protection-business-technology-privacy-image101254585

[3] Euro topics. (2018, June 8). France’s anti-fake news initiative under fire. Bundeszentrale für politische Bildung. https://www.eurotopics.net/en/200825/france-s-anti-fake-news-initiative-under-fire

[4] Human Rights Watch. (2018, February 14). Germany: Flawed social media law. https://www.hrw.org/news/2018/02/14/germany-flawed-social-media-law

[5] Kaiser, B. (2011, June 22). Social media’s new intellectual property challenges. LegalZoom. https://www.legalzoom.com/articles/social-medias-new-intellectual-property-challenges

[6] European Union. (2022, June 16). 2018 code of practice on disinformation. Shaping Europe’s Digital Future. https://digital-strategy.ec.europa.eu/en/library/2018-code-practice-disinformation

[7] EU: Social media platforms can be fined 4% of global revenue if extremist content is not removed within one hour. (n.d.). The Paper. https://www.thepaper.cn/newsDetail_forward_3314894

[8] Wang Yu. (2019, May 22). Multi-national consultation to eliminate social media terrorism. China National Defense News.

[9] Zduncan. (2012, February 22). Malware increases while spam decreases. Computer Service Now Blog. https://blog.computerservicenow.com/2012/02/22/malware-increases-while-spam-decreases/