
Introduction
In recent years, digital platforms and social media have become part of daily life, and the era of big data has arrived. While the Internet provides convenience and connection, it has also created problems, many of which manifest on digital platforms. In this era of participatory culture, anyone can take part in online activities and disseminate information and images, which has led to the emergence of problematic content. The spread of online violence, bullying, pornography, and hate speech has jeopardized what would otherwise be a harmonious online environment.
A large-scale European study showed that one in six women had experienced some form of digital harassment since the age of 15, including cyberbullying, cyber violence, cyberstalking, and pornographic images distributed without consent (Al-Alosi, 2017). In addition, the spread of vulgar information, pornographic content, and hate speech on platforms negatively impacts Internet users. To create a healthy and harmonious online environment, national governments, digital platforms, external organizations, and users all share responsibility.
Government Regulation
The state and government bear important responsibilities in Internet content regulation. With the advent of the self-media age, people are free to speak and disseminate information on digital platforms, and the influence of user-generated content is growing. However, because Internet users vary widely in education, audiences are easily swayed by vulgar and violent information, and in such cases government regulation is essential.

Effective government legislation can help stop the spread of violent content online. For example, a man from New South Wales, Australia, carried out two brutal mass shootings at mosques in Christchurch, New Zealand, and live-streamed the attack on Facebook; 51 people were ultimately killed. The platform did not limit the spread of the live stream in time, leaving many users exposed to the violent footage. Australia has since passed a sweeping new law obligating internet companies to stop the spread of violent material; if they fail to do so, executives could face up to three years in prison, and platforms fines of up to 10% of their annual turnover (Griffiths, 2019). This law pushes digital platforms and Internet providers to curb the spread of violent content in a timely manner.
Video: “49 killed in terror attack at New Zealand mosques” by ABC News. All rights reserved. Retrieved from https://www.youtube.com/watch?v=TPPeCtO3EPo
While some data show that both men and women can be victims of cyberbullying, research indicates that women are the main victims of online abuse and sexual harassment (Al-Alosi, 2017). Since 2007, the Harper government has provided over $62 million in funding for projects to end violence against women through Status of Women Canada’s programs. In addition, the Canadian government announced that it would introduce legislation creating a new criminal offence prohibiting the distribution of intimate, pornographic images without consent (Targeted News Service, 2013).
In addition to issuing civil and criminal laws to stop the distribution of objectionable content online, governments and Internet regulators can take direct action. The Cyberspace Administration of China announced a massive month-long “purification” campaign whose purpose was to ensure that the home pages of key media platforms, trending search lists, and important news pages were carefully managed to present “positive information” (He, 2022). The campaign targeted negative material such as violent content and pornographic and obscene images in order to provide a healthier online experience for users. In practice, direct control of platform content by governments and online regulators has removed much undesirable information and created a relatively good online environment.

In extreme incidents, governments can also curb the spread of objectionable content by blocking web pages and digital platforms. A hardline group was allegedly involved in doctoring images and spreading them on social networking sites such as Facebook, Twitter, and YouTube to incite Muslims and create fear among people from India’s Northeast. This led to a flood of inflammatory comments and hate speech. In response to this online disruption, the Indian government ordered intermediaries, including national and international social networking sites, to block 156 pages (The Economic Times, 2012).
Moderation by Digital Platforms
As information-sharing venues with vast influence, digital platforms bear primary responsibility for content management. Creating a good online environment requires both external supervision by government departments and the platforms’ own control over how content spreads. In the era of big data, social media platforms such as Facebook, Twitter, and YouTube hold enormous power and influence, so strict oversight is needed to prevent online violence.
In April 2018, Facebook publicly released the internal guidelines behind its Community Standards, which govern what Facebook’s more than 2.2 billion monthly active users can post on the site (Gorwa, 2019). Users’ bad online behaviour is largely regulated through these platform-specific rules. Facebook has also launched a project in collaboration with academia to create a reputable mechanism for third-party data access and independent research (Gorwa, 2019). Self-regulation by digital platforms and websites is critical because it counters the spread of undesirable content, such as obscene pornography and violent abuse, at its root.

In the era of big data, algorithms can also serve as an enabling technology for platform content moderation. Most social media platforms use algorithms and web technologies to restrict the posting of undesirable content, preventing its further spread among users. For example, YouTube’s home page is populated by algorithms that assess popularity while keeping obscene and pornographic material from appearing (Gillespie, 2018). Platforms’ content review mechanisms also limit the circulation of sensitive terms; Twitter, for instance, has added terms such as “gooddick” and “gorillapenis” to its list of blocked terms (Gillespie, 2018). Blocking sensitive terms can prevent the spread of certain pornographic and obscene material, which in a sense protects the safety of young people online.
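To make the idea concrete, the sketch below shows, in simplified form, how a blocked-terms filter of the kind Gillespie describes might work. It is a minimal illustration, not Twitter’s or YouTube’s actual implementation: the term list, normalization rule, and function names are hypothetical, and real platforms combine such lists with human review and machine-learning classifiers.

```python
# Illustrative sketch of a blocked-terms filter; not any platform's real code.
import re

# Hypothetical blocklist; the two example terms are those cited in Gillespie (2018).
BLOCKED_TERMS = {"gooddick", "gorillapenis"}

def normalize(text: str) -> str:
    """Lowercase and strip non-alphanumeric characters so simple
    evasions like 'Gorilla-Penis' still match the blocklist."""
    return re.sub(r"[^a-z0-9]", "", text.lower())

def is_blocked(post: str) -> bool:
    """Return True if any blocked term appears in the normalized post."""
    flattened = normalize(post)
    return any(term in flattened for term in BLOCKED_TERMS)

if __name__ == "__main__":
    for sample in ["A perfectly ordinary post", "spam containing Gorilla-Penis"]:
        print(sample, "->", "blocked" if is_blocked(sample) else "allowed")
```

Even this toy version shows the central design trade-off: aggressive normalization catches more evasions but also risks false positives, which is one reason platforms pair keyword lists with human reviewers.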
Internet Users
Many online platforms have established review departments and online blocking teams that specialize in dealing with objectionable content, so people who find themselves caught up in cyberbullying or harassment can immediately report it to the appropriate department or call the platform’s hotline. For example, when a user files a report with Facebook, the platform reviews and removes any content that violates its Community Standards without revealing any information about the person reporting. Through such user feedback, the overall online environment can be remediated, which in turn encourages better conduct among users, creating a positive cycle. Similarly, according to TikTok, a user who experiences harassment or bullying can report it to the platform so the review team can take appropriate action (Mason, 2020). Finally, people should continually improve their moral and media literacy, consciously resist online pornography and violent content, and report objectionable information to the relevant government authorities as soon as they find it.

Conclusion
In brief, government departments, digital platforms, and people all share responsibility for monitoring and managing the spread of undesirable content on digital platforms. Only the combination of external monitoring and internal governance can create a healthy and positive online environment. Governments and Internet regulators should fulfill their regulatory responsibilities and pass legislation to limit the spread of online violence, hate, and pornographic content. Digital platforms and social media should exercise strict control through algorithms and specific community norms. And users should improve their media literacy so as to resist undesirable online content.
References
Al-Alosi, H. (2017). Cyber-violence: digital abuse in the context of domestic violence. University of New South Wales Law Journal, 40(4), 1573-1603.
Gillespie, T. (2018). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press.
Gorwa, R. (2019). What is platform governance? Information, Communication & Society, 22(6), 854–871. https://doi.org/10.1080/1369118x.2019.1573914
Griffiths, J. (2019, April 4). Australia passes law to stop spread of violent content online after Christchurch massacre. CNN. https://edition.cnn.com/2019/04/04/australia/australia-violent-video-social-media-law-intl/index.html
He, L. (2022, January 26). China pledges “purification” of the internet ahead of the Beijing Winter Olympics and the Lunar New Year | CNN Business. CNN. https://edition.cnn.com/2022/01/26/tech/china-clean-up-internet-beijing-olympics-intl-hnk/index.html
Mason, E. G. (2020). 5 Steps You Can Take To Protect Yourself From Online Harassment. Verywell Health. https://www.verywellhealth.com/covid-19-online-harassment-5084544
Targeted News Service. (2013). ProQuest. https://www-proquest-com.ezproxy.library.sydney.edu.au/docview/1448679877?pq-origsite=primo
The Economic Times. (2012). Assam violence: Cyber war continues, government. ProQuest. https://www.proquest.com/docview/1034536135?pq-origsite=primo
This work is licensed under CC BY-NC 2.0.