
Introduction
The emergence of social media platforms such as Facebook and Instagram has enriched people’s daily lives. Users speak freely on these platforms, share their everyday experiences with friends, and record the details of their lives; social media has made the transmission of information efficient and convenient. However, racial hatred, gender antagonism, harassment, rumors, pornography and other harmful content also spread widely on social media, because freedom of speech and the high speed of information transmission give such content a way to circulate and survive. The producers and disseminators of this content rarely receive corresponding punishment; even when they offend repeatedly, they bear little responsibility. More troubling still, the people in charge of social media platforms often ignore this behavior, and even when it causes serious incidents they evade responsibility or remain silent afterwards. Therefore, the key questions are who should shoulder the responsibility for the spread of these problems on digital platforms, and how their spread can be prevented. The following sections discuss these two questions in detail.

Terrorism
The spread of terrorist speech and videos on social platforms is nothing new. The most shocking example is the attack on two mosques near Christchurch, New Zealand, on March 15, 2019, in which a gunman killed 50 people (Flynn, 2019). Crucially, the gunman live-streamed the massacre on Facebook, and Facebook removed the video only after the broadcast had run for 29 minutes and been watched thousands of times (Flynn, 2019). This shows how easily terrorists can broadcast their acts in real time. According to Facebook’s own statistics, 1.5 million copies of the video were deleted within 24 hours of the massacre, 1.2 million of which were blocked automatically by the system (Flynn, 2019). Beyond the Christchurch attack, the terrorist organization ISIS has released videos of the beheading of journalists and civilians on various digital platforms. Terrorists can clearly transmit videos of their acts through digital platforms with ease; even if such material stays online only briefly, its goal is achieved as long as it spreads panic.
Fake news and rumors
Fake news and rumors are also common on digital platforms, and they are produced and disseminated far more easily than terrorist content. Fake news requires neither strict fact-checking nor professional staff, so its production cost is much lower than that of original reporting (Napoli, 2018). Social media, as digital platforms with free speech and huge user bases, give fake news and rumors both an audience and a breeding ground; indeed, social media is an important channel for the spread of fake news (Napoli, 2018). Medical news attracts particular attention because it concerns people’s physical and mental health, yet it is also an area flooded with fakes. According to one pilot study, roughly 40% of the most frequently shared links on medical topics contained fake news (Waszak et al., 2018). Among them, a fake article claiming that cancer can be cured within 42 days was shared about 65,000 times (Waszak et al., 2018). A 2016 survey by BuzzFeed News found that 75 percent of American adults believed certain fake news headlines were accurate, and 23 percent said they had shared such stories with others (Waszak et al., 2018). The reach of fake news, and people’s trust in it, exceed imagination.

Who is responsible for the spread of these problems?
As the two sections above show, the spread of these problems on digital platforms is highly harmful: it threatens not only the platforms’ many users but also social stability. Who should be held responsible is therefore a crucial question. Firstly, the state, as a regulator, has enormous power to manage the relationships between users, digital platforms and governments through law, so its law enforcement departments are qualified to participate in monitoring and supervising these issues (Riedl et al., 2021). The law is also the ultimate weapon for sanctioning such behavior. Secondly, social media and news organizations such as Facebook should also shoulder responsibility. These platforms have enormous user bases, yet the protection of the Communications Decency Act allows them to moderate in their own way and shirk responsibility when incidents occur (Medeiros, 2017). If these platforms took on the duty of monitoring and managing such content, they could limit its spread to the greatest extent. In addition, users should bear part of the responsibility. They have the right to report problematic content on social platforms and news sites, and they should cultivate the ability to distinguish true from false information rather than blindly follow the majority. When the state, social platforms, news organizations and users jointly shoulder their respective responsibilities, the digital platform environment will improve, because problematic content will no longer spread so rampantly.

How can the dissemination of these problems be prevented and stopped?
Traffic is a crucial metric on digital platforms. If users refuse to engage with problematic content rather than amplifying it, its traffic falls sharply, which itself helps curb its spread. Users should also be aware that even unintentional speech or sharing may affect and hurt others; individual freedom of expression can infringe on the rights and interests of others (Weckert, 2000). Secondly, national regulators and social media companies need to improve public education so that users can correctly identify and handle these problems. Twitter, a platform with a large user base, attaches warning labels to misleading information to alert users (Ortutay, 2021). This is undoubtedly a good preventive measure: raising users’ awareness is clearly more effective than reacting after problems occur. Finally, platforms can improve their handling of terrorism, fake news, pornography and related issues by exchanging and sharing information with one another. Facebook and Twitter, for example, joined a network set up to tackle fake news (Moon, 2016). Such cooperation improves the authenticity of news and reduces the burden on each individual platform.

Conclusion
In conclusion, the digital platform environment needs to be maintained and managed jointly by everyone. Regulators and users should not avoid problems or stay silent when they occur, but should actively take responsibility for solving them before they escalate. Information on digital platforms is more complex, and spreads faster, than most people imagine, so it is necessary to remain vigilant, exercise judgment, and avoid blindly following the crowd. Pornography, terrorism, racism and similar content should be reported promptly to stop it from spreading. Digital platforms are not above the law.
Reference list
Flynn, M. (2019, March 19). No one who watched New Zealand shooter’s video live reported it to Facebook, company says. Washington Post. Retrieved October 7, 2022, from https://www.washingtonpost.com/nation/2019/03/19/new-zealand-mosque-shooters-facebook-live-stream-was-viewed-thousands-times-before-being-removed/
Medeiros, B. (2017). Platform (non-)intervention and the ‘marketplace’ paradigm for speech regulation. Social Media + Society, 3(1), 1–10.
Moon, A. (2016, September 13). Facebook, Twitter join network to tackle fake news – VnExpress International. VnExpress International – Latest News, Business, Travel and Analysis From Vietnam. Retrieved October 8, 2022, from https://e.vnexpress.net/news/world/facebook-twitter-join-network-to-tackle-fake-news-3467776.html
Ortutay, B. (2021, November 17). Twitter rolls out redesigned misinformation warning labels. ABC News. Retrieved October 8, 2022, from https://abcnews.go.com/Technology/wireStory/twitter-rolls-redesigned-misinformation-warning-labels-81212172
Riedl, M. J., Naab, T. K., Masullo, G. M., Jost, P., & Ziegele, M. (2021). Who is responsible for interventions against problematic comments? Comparing user attitudes in Germany and the United States. Policy & Internet, 13(3), 433–451. https://doi.org/10.1002/poi3.257
ABP NEWS. (2014, September 4). ISIS, beheadings and brutality: The how and why [Video]. YouTube. Retrieved October 7, 2022, from https://www.youtube.com/watch?v=JUxVvI6iCEY
Waszak, P. M., Kasprzycka-Waszak, W., & Kubanek, A. (2018). The spread of medical fake news in social media – The pilot quantitative study. Health Policy and Technology, 7(2), 115–118. https://doi.org/10.1016/j.hlpt.2018.03.002
Weckert, J. (2000). What is so bad about Internet content regulation? Ethics and Information Technology, 2(2), 105–111.