As one of humanity's greatest inventions, the Internet has made information dissemination vastly more convenient and has fundamentally changed how people share information, but it has also created a series of social governance problems. Chief among these is fake news: problematic content that spreads across digital platforms and harms social development (Zannettou et al., 2019). The rapid spread of fake news can sway the opinion of a social group and even trigger social anxiety and panic, damaging people's daily lives and social harmony and stability (Chadwick et al., 2022). For example, after the 2011 earthquake in Japan led to radiation leaks at the Fukushima nuclear power plant, rumors such as "iodized salt can prevent nuclear radiation" and "Chinese salt will be contaminated by nuclear radiation" appeared on Sina Weibo and other Chinese social media. Some merchants saw a business opportunity, maliciously hoarded salt, and raised prices, triggering a nationwide rush to buy iodized salt that briefly caused a "salt shortage" in the market and seriously disturbed the normal social order.
How false news can spread - Noah Tavlin, TED-Ed. All rights reserved. Retrieved from: https://www.youtube.com/watch?v=cSKGa_7XJkg&t=11s
Who should be responsible?
Social media platforms should be responsible for stopping online fake news. As an information environment in which new and traditional media intermingle, social media platforms interlock at the nodes of information production, content distribution, and mutual diffusion, creating a highly emotionally contagious dissemination space (Schwanholz et al., 2018). At the level of audience participation in producing and spreading information, emotions are driven by representational material such as text, images, and sound; when content is geographically and psychologically close to the audience, the emotions the audience wants to engage with are more easily awakened (Schwanholz et al., 2018). This emotionally charged environment fuels the audience's urge to reproduce and retransmit content, drawing in large numbers of new producers and distributors of information and raising the risk of distortion accordingly. False information on social media has three prerequisites: a false motive, content distortion, and negative impact. In practice, users actively spread misinformation and deliberately create false content, through either wholesale or partial misrepresentation, producing information that does not match the facts and making false information harder to identify (Zannettou et al., 2019). At present, platforms audit their massive volumes of data mainly through automated checks on titles, keywords, and tags; fake videos and deepfake news spread on social media are difficult to identify quickly, and the potential or actual harms of such content challenge the current regulatory rules (Schwanholz et al., 2018).
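The automated, keyword-and-tag-based screening described above can be sketched as follows. This is a minimal illustration: the watch list, threshold, and function names are assumptions for demonstration, not any platform's actual rules.

```python
# Illustrative sketch of automated screening by title keywords and tags.
# The term list and threshold are assumptions, not real platform policy.
SUSPICIOUS_TERMS = {"miracle cure", "shocking truth", "prevents radiation"}

def flag_for_review(title: str, tags: list[str]) -> bool:
    """Queue a post for human review if its title or tags match a watch list."""
    text = title.lower()
    hits = sum(term in text for term in SUSPICIOUS_TERMS)
    hits += sum(tag.lower() in SUSPICIOUS_TERMS for tag in tags)
    return hits > 0

print(flag_for_review("Iodized salt prevents radiation!", ["health"]))  # True
print(flag_for_review("Earthquake relief update", ["news"]))            # False
```

Real systems combine many such weak signals; a single keyword match would merely queue content for review, not remove it, which is exactly why fake videos and deepfakes slip past this kind of check.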

In addition, social media users are also responsible for stopping the spread of false information. According to the 2015 China Social Application User Behavior Research Report, low- and middle-education groups make up a high proportion of social media users, and users' income levels and occupational structure show grassroots or civilian characteristics; among Internet users, students, corporate employees, and freelancers account for a large share of occupations. Users respond differently to false information depending on their knowledge, life experience, and social environment. They express their opinions freely online and enjoy the initiative in spreading information, but because of limits in personal literacy and life experience, many users cannot judge false information or rumors well and easily become potential producers and spreaders of false information.
How to regulate

1: Strong regulation of laws and rules
Algorithmic recommendation is now widely used in content distribution, and personalized recommendation can easily fall into a "manipulation trap"; fundamentally resolving this algorithmic ethics problem requires both adhering to mainstream value guidance and optimizing the recommendation mechanism itself (Lilian, 2022). On August 27, 2021, the Cyberspace Administration of China issued the "Regulations on the Administration of Algorithmic Recommendation of Internet Information Services (Draft for Comments)," which makes clear that algorithmic recommendation service providers may not use algorithms to falsify or hijack traffic, nor use them to block information, over-recommend, manipulate rankings or search results, or control trending topics or selections in ways that interfere with the presentation of information, nor engage in self-preferencing or unfair competition, influence online public opinion, or evade supervision (Guo et al., 2020). In particular, providing users with the option to turn off algorithmic recommendation requires greater attention to protecting users' right to choose and their privacy, and to continually improving mechanisms for independent user choice. At the same time, the relevant regulatory departments should step up supervision and strengthen legal regulation to ensure that network ideological security remains manageable and controllable. Notably, effective governance of social media should still be premised on the effective and safe dissemination of information.
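The opt-out requirement above implies that a platform must fall back to a non-personalized ordering whenever a user disables recommendation. A minimal sketch of that fallback, with assumed field names and preference keys:

```python
def build_feed(posts, user_prefs, score):
    """Order a user's feed, honoring an opt-out of algorithmic recommendation.

    posts: dicts with an assumed "published" timestamp field;
    score: the platform's personalization function, used only when allowed.
    """
    if not user_prefs.get("algorithmic_recommendation", True):
        # User has switched recommendation off: plain reverse-chronological feed.
        return sorted(posts, key=lambda p: p["published"], reverse=True)
    return sorted(posts, key=score, reverse=True)

posts = [{"id": 1, "published": 1}, {"id": 2, "published": 2}]
print(build_feed(posts, {"algorithmic_recommendation": False}, lambda p: 0))
```

The design point is that the opt-out path must not consult the personalization function at all, so that the user's choice cannot be silently overridden by ranking tweaks.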
2: Strengthen information verification efforts
Social media platforms publish unverified information to seize timeliness, earn clicks, and drive traffic (Chadwick et al., 2022). This practice of ignoring the platform's credibility not only blurs and misleads public perception, but also tears at already fragile social trust and makes the whole society less tolerant (Chadwick et al., 2022). Publishing exclusive news is undeniably still an important way for social media to attract traffic, but improving the timeliness of information dissemination should not come at the expense of authenticity. Platforms now embed artificial intelligence: through natural language processing and filtering, and by deeply exploiting the potential of machine learning, they can statistically analyze and strictly verify platform information (Guo et al., 2020). For information that cannot be confirmed true or false, "objectivity" and "prudence" are the basic attitude, and seeking evidence should become a "required action" for the platform (Guo et al., 2020).
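The statistical, machine-learning style of verification mentioned above can be illustrated with a tiny Naive Bayes text classifier (standard library only). The training examples are invented for demonstration; production systems use far richer features and models.

```python
import math
from collections import Counter

def train(labeled_docs):
    """labeled_docs: iterable of (text, label). Returns word and document counts."""
    word_counts, doc_counts = {}, Counter()
    for text, label in labeled_docs:
        word_counts.setdefault(label, Counter()).update(text.lower().split())
        doc_counts[label] += 1
    return word_counts, doc_counts

def classify(text, word_counts, doc_counts):
    """Pick the label with the highest log-posterior, with Laplace smoothing."""
    vocab = {w for c in word_counts.values() for w in c}
    total_docs = sum(doc_counts.values())
    best_label, best_logp = None, -math.inf
    for label, wc in word_counts.items():
        logp = math.log(doc_counts[label] / total_docs)   # class prior
        denom = sum(wc.values()) + len(vocab)             # smoothed denominator
        for w in text.lower().split():
            logp += math.log((wc[w] + 1) / denom)         # word likelihood
        if logp > best_logp:
            best_label, best_logp = label, logp
    return best_label

docs = [("salt prevents nuclear radiation miracle", "fake"),
        ("miracle cure shocking secret", "fake"),
        ("government issues official earthquake report", "real"),
        ("official statement confirms safety", "real")]
wc, dc = train(docs)
print(classify("miracle salt cure", wc, dc))       # fake
print(classify("official earthquake report", wc, dc))  # real
```

Even this toy model shows why platforms pair statistical screening with human verification: the classifier only counts words and has no notion of whether a claim is actually true.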
3: Introduction of third-party verification tracking
As social media has boomed, Western societies have begun to rectify false information: professional verification companies or non-profit organizations label false information for verification, further establishing and improving filtering and intervention mechanisms, optimizing and adjusting platform algorithms, and eliminating false data created by bots, such as likes, follows, and retweets, so that social media platforms return to natural traffic (Lilian, 2022). At the same time, because machines cannot promptly recognize some deeply faked information, manual verification of information sources, publishing platforms, key details, and publishers' subjective intentions is needed to balance quality content against users' preferences, so that more users focus on public issues and mainstream values (Lilian, 2022).
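Eliminating bot-generated likes, follows, and retweets, as described above, amounts to discounting engagement from accounts that look automated. A minimal sketch follows; the heuristic thresholds and record fields are assumptions, not any verifier's real criteria.

```python
from collections import Counter

def looks_like_bot(account: dict) -> bool:
    """Crude heuristic: very new account, very high activity, few followers.
    Thresholds are illustrative, not real platform policy."""
    return (account["age_days"] < 7
            and account["actions_per_day"] > 500
            and account["followers"] < 10)

def natural_counts(events, accounts):
    """Recount likes/retweets keeping only engagement from non-bot-like accounts."""
    counts = Counter()
    for ev in events:
        if not looks_like_bot(accounts[ev["account_id"]]):
            counts[ev["type"]] += 1
    return counts

accounts = {"a": {"age_days": 400, "actions_per_day": 3, "followers": 120},
            "b": {"age_days": 1, "actions_per_day": 2000, "followers": 0}}
events = [{"account_id": "a", "type": "like"},
          {"account_id": "b", "type": "like"},
          {"account_id": "b", "type": "retweet"}]
print(natural_counts(events, accounts))  # only account "a"'s like survives
```

Rule-based heuristics like this catch only crude automation, which is why the text pairs algorithmic cleanup with manual verification of sources and intent.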
4: Cultivating public media literacy
In China, social media information flow is characterized by universal participation in communication, autonomy of content, and diverse methods of dissemination, which lets information ferment and escalate rapidly in a relatively open environment (Schwanholz et al., 2018). Because many audiences have difficulty screening false information, and a complete, sound, and standardized media literacy education system is lacking, irrational online public opinion easily sways the public (Schwanholz et al., 2018). In particular, the current wide spread of various social trends on social media platforms not only challenges the mainstream ideology; the infiltration of diversified values, such as the consumerist culture propagated from Western societies, also puts mainstream values in China under pressure. The state can shape public emotional preferences through rituals, education, and other means, and thus construct "emotional institutions."
Conclusion
The management of false information on social media platforms is not only a process of regulating and reshaping the communication order, but also a game among interest groups in the online discourse space. In an algorithmically recommended information environment, it is necessary to understand and observe society rationally, neither confined to low-level sensory stimulation nor manipulated by the information cocoon that technology creates. A social media platform flooded with false information not only destroys the underlying ecology of information dissemination, but also provides a breeding ground for rumors and deception, making everyone a potential victim. As rational people surrounded by emotion, by disbelieving rumors, not spreading them, and forwarding prudently, we can not only minimize the negative impact of false information but also defend our common pursuit of public value.
Reference list:
Chadwick, A., Vaccari, C., & Kaiser, J. (2022). The Amplification of Exaggerated and False News on Social Media: The Roles of Platform Use, Motivations, Affect, and Ideology. The American Behavioral Scientist (Beverly Hills). https://doi.org/10.1177/00027642221118264
Guo, B., Ding, Y., Yao, L., Liang, Y., & Yu, Z. (2020). The Future of False Information Detection on Social Media: New Perspectives and Trends. ACM Computing Surveys, 53(4), 1–36. https://doi.org/10.1145/3393880
Lilian, E. (2022, January 25). How to regulate misinformation. The Royal Society. https://royalsociety.org/blog/2022/01/how-to-regulate-misinformation/
Schwanholz, J., Graham, T., & Stoll, P.-T. (Eds.). (2018). Managing Democracy in the Digital Age: Internet Regulation, Social Media Use, and Online Civic Engagement (1st ed.). Springer International Publishing. https://doi.org/10.1007/978-3-319-61708-4
Zannettou, S., Sirivianos, M., Blackburn, J., & Kourtellis, N. (2019). The Web of False Information: Rumors, Fake News, Hoaxes, Clickbait, and Various Other Shenanigans. ACM Journal of Data and Information Quality, 11(3), 1–37. https://doi.org/10.1145/3309699