
Violence in the transmission of information during social interaction has been a problem since human society began. With the advent of the Internet and online communication, it has become more prevalent still. The growing use of social media has given rise to numerous societal problems: its speed, convenience, and reach have turned it into a “distribution centre” for violent ideas, fake news, online rumours, incitement to online violence, bullying, harassment, pornography, and other illegal content, causing significant harm to national security, public interests, and social stability. I believe it is the duty of platform companies and governments, as well as of individuals, to control the content of social media platforms and prevent inappropriate content from being shared.
Self-regulation of social platform companies
- Why they should be responsible:
Social media platforms seek to project an image that supports freedom of expression while promoting a pleasant user experience. They now occupy a dominant position in the media landscape, yet they do not want to be regulated or to shoulder the burden of truth-finding that conventional media carried for decades. Mainstream platforms such as Facebook perform the function of creating an information environment, and they should take responsibility for it: influence carries a corresponding responsibility. Over the years, social media platforms have rarely removed hate speech, often accommodating racist, homophobic, and anti-Semitic posts and comments and allowing other problematic content to spread. A 14-year-old girl filed a lawsuit in Belfast High Court, claiming that Facebook was responsible for a nude image of her that was repeatedly posted on a “shame” page (BBC News, 2016). The case alarmed the technology world: if such explicit, coercive, and abusive images are not managed properly, they can do significant social harm. In the past, social media companies have tried to place the responsibility on individuals, but this case has been called a “landmark” because the teenager argued that Facebook has a duty to regulate posted information even when it is personal, suing the company for misuse of private information, negligence, and breach of data protection law. Facebook subsequently took action against such retaliatory intimate images, launching new safety tools in March 2017. In addition, trained specialists on the site’s community operations team can now remove explicit photos from Facebook, Messenger, and Instagram without the user’s permission.
- Measures taken by social platforms:
- Measures already in place: Toward the end of November 2016, Google, Facebook, and Twitter announced measures to combat misinformation, hate speech, and abuse on social media. Google said it would bar pages carrying fake news from its AdSense advertising network, cutting those websites off from ad income. Facebook said that advertisements containing unlawful, false, or deceptive information, including “fake” news that is deliberately factually inaccurate, would not be permitted on its app or website. Twitter had already taken steps to help users “block” such content and report violations. Why, then, do victims of abuse from inappropriate content on social media keep appearing even after these policies launch? First, the platforms’ policies are incomplete. Second, states have not introduced suitable laws to intervene in regulation. A third, more important point is that many users lack strict self-control and ethical standards, leading some of them to post such content.
- Theoretical measures that have been proposed: Social platform companies should draw on outside advice and lessons learned to introduce better rules governing what users post. For example, the three imperfect solutions Gillespie describes in “Custodians of the Internet” are editorial review, community flagging, and automatic detection (Gillespie, 2021). Meanwhile, Shen Qinlan at Carnegie Mellon University has applied natural language processing techniques to understand how people behave on social media, informing the design of detection and retrieval procedures aimed at users who exhibit specific undesirable behaviours. The aim is to capture sentence-level discourse intent in context rather than merely match offensive words; the sketch below illustrates why isolated word matching is too coarse.
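As a rough illustration of the gap between bare keyword matching and context-aware automatic detection, consider the following minimal sketch. It is a toy example, not any platform’s actual moderation pipeline; the term list, mitigating phrases, window size, and review policy are all hypothetical.

```python
# Toy automatic-detection sketch: keyword matching plus a crude context
# check. All lists and thresholds here are hypothetical illustrations.
import re

OFFENSIVE_TERMS = {"idiot", "loser"}          # hypothetical flag list
MITIGATING_PHRASES = {"not a", "don't call"}  # naive context cues

def flag_post(text: str) -> bool:
    """Return True if the post should be queued for human review."""
    tokens = re.findall(r"[a-z']+", text.lower())
    hits = [i for i, tok in enumerate(tokens) if tok in OFFENSIVE_TERMS]
    for i in hits:
        # Inspect a short window before each hit: a mitigating phrase
        # ("you are not a loser") suggests the sentence is not an attack.
        window = " ".join(tokens[max(0, i - 2):i])
        if not any(phrase in window for phrase in MITIGATING_PHRASES):
            return True
    return False

if __name__ == "__main__":
    print(flag_post("You are such a loser"))        # True: queued for review
    print(flag_post("You are not a loser at all"))  # False: context rescues it
```

Even this two-token window shows why researchers try to model discourse intent: the same word can be an attack in one sentence and a reassurance in another, and pure word lists cannot tell them apart.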
Regulation of government involvement
- Why they should be responsible:
Should the government intervene strongly in social networks to create a stronger line of defence? Regulation and laws are helpful, but the challenge is enforcing them. While companies’ efforts to introduce policies are laudable, the platforms still carry the responsibility for decision-making and its consequences, which becomes fraught when such decisions are politicized. As a result, companies have publicly stated their support for government regulation, shifting a degree of responsibility from the platform to the state. When the government gets involved, two regulatory models emerge.
- Two models of government regulation:
- One is self-regulation, as under Section 230 of the U.S. Communications Decency Act, which unconditionally exempts platforms from liability for third-party content, giving companies a high degree of autonomy and flexibility (Wright, 2021). National legislation plays a role, but it takes longer to come into effect, and the social platform companies remain dominant.
- The other is co-regulation: the state provides a broader legal framework to ensure that platforms, users, and public institutions share responsibility for achieving public value, as under Germany’s Network Enforcement Act, in force since 2018 (Jenny, 2021). The state can also curb the power of technology monopolies through competition regulation, so that platform companies can no longer limit user choice, thereby promoting future innovation and multi-party regulation of platform content.
- Examples of government legislative involvement in content management:
As an example of legislative involvement in content regulation, Australia announced in 2021 that it would introduce legislation holding social media companies liable for defamatory comments posted on their platforms (Mcguirk, 2021). The legislation would give complainants tools to compel companies such as Facebook to disclose the details of anonymous commenters. Meanwhile, mainland China blocks access to foreign social networking sites including Twitter, Google, and the instant messaging service WhatsApp; comparable domestic services include Weibo, Baidu, and WeChat. The Chinese government has also had some success in stopping internet users from employing circumvention (“wall-climbing”) software to reach those blocked foreign sites. Chinese internet regulators said in late January that over the previous six months they had “cleaned up” 9,382 mobile applications and closed 733 websites (BBC News, 2019).
Self-awareness of users
In addition to external law, internal self-regulation is likewise a critical means of controlling social media networks. Beyond state and corporate regulation, users themselves must exercise restraint to reduce the posting of harmful information on social media platforms. Relevant departments should continue to raise awareness and educate both companies and users, develop a social media ethics system, encourage users to build environments of conscious self-discipline, and guide proper information-sharing practices. For such conscious restraint to be effective, however, it must be backed by specific laws and rules; otherwise it remains toothless. The role of public monitoring should also be stressed: to maintain order and safeguard discourse online, social media platforms should provide prominent, easy-to-use reporting gateways so that users can report inappropriate speech as soon as they discover it, as sketched below.
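To make the idea of a reporting gateway concrete, here is a minimal sketch of how user reports might be collected and escalated to human moderators. The class, method names, and escalation threshold are hypothetical illustrations, not any real platform’s API.

```python
# Minimal sketch of a user-reporting gateway: reports accumulate per
# post, and enough distinct reporters escalate the post for review.
from collections import defaultdict

ESCALATION_THRESHOLD = 3  # hypothetical: three distinct reporters

class ReportQueue:
    """Collects user reports and escalates posts for human review."""

    def __init__(self) -> None:
        self._reporters: defaultdict[str, set[str]] = defaultdict(set)
        self.escalated: set[str] = set()

    def report(self, post_id: str, reporter_id: str) -> None:
        # Count each reporting user only once, so a single person
        # cannot escalate a post alone by reporting it repeatedly.
        self._reporters[post_id].add(reporter_id)
        if len(self._reporters[post_id]) >= ESCALATION_THRESHOLD:
            self.escalated.add(post_id)  # hand off to human moderators

queue = ReportQueue()
for user in ("u1", "u2", "u3"):
    queue.report("post-42", user)
print(queue.escalated)  # {'post-42'}
```

Deduplicating reporters is the key design choice here: it keeps public monitoring meaningful while blunting attempts to weaponize the reporting channel against legitimate speech.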
Companies, governments and individuals are mutually binding and complementary
Combating objectionable content on social media platforms is a complex and multifaceted process. In regulating harmful information online, the roles played by platform companies, the state, and individual users are therefore mutually binding and mutually reinforcing.
References
BBC News. (2016, September 8). Northern Ireland teenager sues Facebook over nude photo. BBC News. Retrieved October 14, 2022, from https://www.bbc.co.uk/news/uk-northern-ireland-37312890
BBC News. (2019, April 18). BBC fact check: How do governments regulate the many varieties of social networking sites? [In Chinese]. BBC News Chinese. Retrieved October 14, 2022, from https://www.bbc.com/zhongwen/trad/world-47910500
Facebook. (2016). Unrealistic outcomes. Transparency Center. Retrieved October 14, 2022, from https://transparency.fb.com/zh-cn//policies/ad-standards/deceptive-content/unrealistic-outcomes/?source=https%3A%2F%2Fwww.facebook.com%2Fpolicies_center%2Fads%2Fprohibited_content%2Fmisleading_claims
Gillespie, T. (2021). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press.
Google. (2016). AdSense guide to allowing and blocking ads on your site. Google AdSense Help. Retrieved October 14, 2022, from https://support.google.com/adsense/answer/180609?hl=en
Jenny, G. (2021). Germany: Network enforcement act amended to better fight online hate speech. The Library of Congress. Retrieved October 14, 2022, from https://www.loc.gov/item/global-legal-monitor/2021-07-06/germany-network-enforcement-act-amended-to-better-fight-online-hate-speech/
Mcguirk, R. (2021, October 7). Australia wants Facebook held liable for anonymous comments. AP NEWS. Retrieved October 14, 2022, from https://apnews.com/article/technology-business-scott-morrison-australia-social-media-e68fe4edc2bcdb71f33a9392bb9d5471
Twitter. (2016). How to report abusive behavior on Twitter | Twitter help. Twitter. Retrieved October 14, 2022, from https://help.twitter.com/en/safety-and-security/report-abusive-behavior
Wright, E. (Ed.). (2021). Telecoms, Media and Internet Laws and Regulations 2022. Global Legal Group.