A cross-cultural platform comparison of Chinese media services based on the Christchurch incident

TOPICS: Christchurch Call, Content Moderation, Livestreaming, Social Media, Content Distribution, Regulation, Privacy and Safety, Chinese Media Services, Comparison.

In recent years, the rise of livestreaming platforms has attracted wide attention around the world. This rise has not only changed the landscape of the media and entertainment industry but has also had a profound impact on social interactions, economic models and the way content is distributed. China’s livestreaming industry has grown explosively in recent years, becoming one of the largest markets in the world, a success due in part to China’s huge online population and payment ecosystem. Major Chinese livestreaming platforms such as Douyin, Kuaishou, Douyu and Huya offer a wide variety of content, from entertainment to education to e-commerce.

XIAOTINGWANG, CC BY-SA 4.0 https://creativecommons.org/licenses/by-sa/4.0, via Wikimedia Commons

However, as livestreaming platforms have grown in popularity, regulatory issues have become increasingly complex. The violent-content policies of cross-cultural platforms have multiple impacts on society and users, touching on user safety, controversy over freedom of speech, and social stability (Zotos et al., 2014).

"China" by tomasdev is licensed under CC BY 2.0.

On the regulatory front, the Chinese government has taken tough measures to ensure that livestreaming platforms comply with laws, regulations and social ethical standards. These measures include a real-name registration system, content censorship, time limits and tax requirements. Such regulation has helped maintain social stability and moral values to a certain extent, but it has also caused controversy, particularly around censorship and freedom of speech.

In other countries, platforms like Facebook, Twitch, YouTube, and Instagram have also experienced rapid growth in live content.

"Facebook at Mozcon - Alex" by Thos003 is licensed under CC BY 2.0.

These platforms allow users to share their lives, skills and interests, interact with audiences, and earn money. However, they take a relatively lax approach to regulation, relying more on self-policing and user reporting, and therefore face challenges such as cyberbullying, privacy violations and copyright infringement. The spread of disinformation and inappropriate content also poses risks.

Christchurch mosque shootings: 19 minutes of terror

Take, for example, the tragic events in Christchurch, New Zealand, on March 15, 2019. A gunman armed with heavy weapons attacked two mosques in the city, the Al Noor Mosque and the Linwood Islamic Centre, and livestreamed the terrorist attack on social media, attracting global attention (Battersby and Ball, 2019). The livestream drew a large audience and went viral on social media, sparking widespread attention and outrage. Facebook quickly removed the live video and shut down the attacker’s account, but the content had already been downloaded and shared on other platforms, making its spread difficult to fully control. The incident triggered a re-examination of social media platforms’ real-time regulation of live content, content dissemination and counter-terrorism policies.

Facebook Live allows users to broadcast video content in real time, which in some cases can be a valuable tool for sharing life moments, event reports, and interactive content. However, in the Christchurch incident, real-time regulatory issues for Facebook Live became apparent, with specific issues including:

1. Insufficient automatic audit

"Livestreaming" by Social Enterprise UK is licensed under CC BY 2.0.

Facebook Live relies on automated moderation algorithms to detect and remove offending content, but these algorithms may not be sufficient for complex situations. In the Christchurch incident, the attacker’s livestream was not immediately recognized as offending content, allowing the video to circulate on the platform for hours. By contrast, China’s livestreaming platforms enforce a mandatory real-name system: users must register an account with their real identity information, which improves traceability and makes it easier to trace violations back to specific individuals. Facebook has no such strict real-name requirement, and users can use the platform in relative anonymity, which can make it harder to track down the perpetrators of malicious acts. Chinese streaming platforms have also developed detailed community guidelines to help users understand what constitutes violent content and how to avoid posting it (Akar et al., 2019).
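To make the gap in automated auditing concrete, here is a minimal, purely illustrative sketch of a first-pass metadata filter. The function name, the keyword list and the thresholds are all hypothetical; real platforms combine machine-learning classifiers over video and audio with rules like these, but the failure mode is the same: content whose metadata avoids known signals passes through until enough viewers report it.

```python
# Hypothetical first-pass auto-moderation check for a livestream.
# Real systems are far more sophisticated; this only illustrates
# why rule-based screening can miss novel violent content.

FLAGGED_TERMS = {"attack", "shooting", "massacre"}  # hypothetical blocklist

def auto_review(title: str, viewer_reports: int, report_threshold: int = 10) -> str:
    """Return 'remove', 'escalate', or 'allow' for a livestream."""
    words = {w.strip(".,!?").lower() for w in title.split()}
    if words & FLAGGED_TERMS:
        return "remove"    # metadata matches known violent terms
    if viewer_reports >= report_threshold:
        return "escalate"  # ambiguous: hand off to human review
    return "allow"         # nothing matched -- novel content slips through

# A stream whose title avoids flagged terms and draws few early reports
# is allowed, which is exactly the gap the Christchurch livestream exploited.
print(auto_review("live from christchurch", viewer_reports=3))  # allow
print(auto_review("mosque shooting live", viewer_reports=0))    # remove
print(auto_review("watch this now", viewer_reports=25))         # escalate
```

The sketch also shows why response speed matters: the `viewer_reports` path only triggers after harm is already under way.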

2. Lack of human intervention

"Delete key" by Ervins Strauhmanis is licensed under CC BY 2.0.

Even when automated auditing fails, a platform should have human intervention mechanisms in place to respond to emergencies. In the Christchurch incident, Facebook Live appears to have responded too slowly and did not take timely steps to stop the spread of the content. By contrast, Chinese livestreaming platforms typically offer reporting mechanisms through which users can flag inappropriate content, and some have established emergency response teams that can quickly deal with incidents such as malicious livestreams or content that endangers social stability. The Chinese government also imposes strict regulations on livestreaming platforms, requiring them to comply with national laws and regulations and to cooperate with the government in dealing with malicious content and speech; such regulation pushes platforms to take more aggressive steps to avoid breaches. Chinese streaming services apply strict punitive measures, such as warnings, bans and legal proceedings, to offending users (Wang and Lobato, 2019). Under pressure from the international community, Facebook has also made improvements, but because it is subject to the laws of many different countries, its regulation is not as strict as that of China’s livestreaming platforms.
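The reporting-and-escalation workflow described above can be sketched as a severity-ordered review queue. Everything here is an illustrative assumption, not any platform's actual design: the severity levels, the class names and the escalation rule are invented to show one way user reports about violence could jump ahead of routine complaints and reach a human team first.

```python
import heapq
from dataclasses import dataclass, field

# Hypothetical severity levels for user reports (higher = more urgent).
SEVERITY = {"spam": 1, "harassment": 2, "violence": 3}

@dataclass(order=True)
class Report:
    priority: int                           # only this field is compared
    stream_id: str = field(compare=False)
    reason: str = field(compare=False)

class ReportQueue:
    """Min-heap of reports; most severe reports are reviewed first."""

    def __init__(self):
        self._heap = []

    def submit(self, stream_id: str, reason: str) -> None:
        # Negate severity so the most severe report pops first from a min-heap.
        heapq.heappush(self._heap, Report(-SEVERITY[reason], stream_id, reason))

    def next_for_review(self) -> Report:
        return heapq.heappop(self._heap)

q = ReportQueue()
q.submit("stream-42", "spam")
q.submit("stream-99", "violence")
q.submit("stream-17", "harassment")
print(q.next_for_review().stream_id)  # stream-99: violence reports jump the queue
```

The design choice matters for incidents like Christchurch: a plain first-in-first-out queue would let a flood of minor reports delay review of a livestreamed attack.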

"Automotive Social Media Marketing" by socialautomotive is licensed under CC BY 2.0.

In addition to the aforementioned issues with Facebook’s real-time policing, the speed at which Facebook Live content spreads is astounding. Once a video of a terrorist attack is uploaded via Facebook Live, it can be continually shared, copied and re-posted to other social media platforms such as YouTube and Twitter. This complicates regulation, as the content is soon backed up across multiple platforms and difficult to control thoroughly. Moreover, social media platforms exhibit network effects, so content can spread extremely quickly. Social media companies such as Facebook also bear legal and ethical responsibility for the content on their platforms, yet in this case Facebook did not act quickly enough to police and stop the spread of malicious content. Users, for their part, should abide by their platforms’ rules of use and refrain from spreading violent, hateful or terrorist content. In addition, these cross-cultural platforms should strengthen international cooperation, working with national law enforcement agencies and international organizations to combat terrorism and hate crimes (Brailovskaia and Bierhoff, 2016).

"united nations flag" by sanjitbakshi is licensed under CC BY 2.0.

After the Christchurch incident, Facebook took a number of steps to improve the security and content policing of its platform. One is content moderation: the process of screening and reviewing information published on the Internet to ensure its legality, authenticity and appropriateness (Roberts, 2017). Disinformation can cause panic, misdirection and confusion, and moderation helps identify and remove false information, maintaining the accuracy and credibility of what circulates. The Internet also carries a great deal of hate speech, violent content and pornography; through moderation, platforms can reduce the impact of this harmful content on users and provide a safer online environment. Some Internet platforms may expose users’ personal information, leading to privacy violations and identity theft, and content moderation can help monitor and prevent such incidents (Stewart, 2021).

Also important is community management: the process of monitoring and maintaining order and rules in online communities. Social media and online communities are places where people communicate and share their views, and community management ensures that this communication takes place in a civil and orderly environment, free of malicious attacks and abuse. Some users abuse these platforms by posting malicious content or engaging in harassment, and community management exists to detect and address such behaviour (Thompson and Weldon, 2022).

Overall, the rise of live-streaming platforms has generated widespread interest around the world, but it has also raised regulatory issues. There is a clear difference in the regulatory approach between China and other countries, with China taking tougher regulatory measures to ensure social stability and ethical standards. In contrast, other countries are more focused on self-regulation and community guidelines, relying on cooperation between users and platforms to police content. However, there is a need to continue to discuss how to balance freedom of expression, user privacy and social responsibility in order to meet the evolving challenges of live streaming platforms. The goal of regulation should be to ensure that these platforms can provide users with a rich experience while maintaining public safety and social values.

