Bullying, harassment, violent content, hate speech, pornography and other problematic material circulate on digital platforms. Who should be responsible for stopping the spread of this content, and how?

The Internet has enabled people of different countries, languages and ethnicities to come together. As the Internet has become more prevalent, ever more users have flocked to digital platforms. The widespread use of the Internet and related digital technologies has created an online space where individuals can express their views and ideas, and connect and communicate with others as they wish. Digital platforms have broken down geographical and temporal distance, allowing users to publish and share content anywhere, at any time and instantly, while the anonymity afforded by social software lets users express themselves more freely. However, this freedom of expression has also given rise to a large amount of harmful content, such as verbal abuse, discrimination and sexual harassment, which has hurt a large number of users.

‘The horse walked in the middle of the road’ by Sensual Shadows Photography is licensed under CC BY-NC 2.0.

 

 

Social media

Social media is widely used today, with an unprecedented number of users, an enormous volume of posted content, and a relentless speed of dissemination (Gillespie, 2018). The influx of users of all races, ages and walks of life has turned social platforms from youth-oriented software into services for a much wider demographic. Because users of every kind are present on the platforms, the content is equally varied, and it includes a great deal of vulgarity, pornography and violence spreading across the internet at high speed. Since the Internet is liberal by nature, privacy protections and anonymity form a shield for some users, allowing them to spread harmful content without fear. The speed at which material travels online is staggering, and the spread of large amounts of undesirable content harms society as well as individuals.

 

Undesirable phenomena emerging from digital platforms

Because social software lets users edit their nicknames, use virtual avatars, shape a persona and even present as a different gender at will, users effectively hold a second, virtual identity on social media. This identity gives users a high degree of anonymity, and that anonymity is key to why some post harmful content freely. Such users do not worry about their real identity being discovered: as long as they post nothing connected to themselves, or steal someone else’s photos and profile outright and portray themselves as another person, ordinary users will find it difficult to identify them, so they feel carefree and take pleasure in posting abusive, pornographic and violent content through social media platforms. This behaviour is particularly harmful to women, who are the most likely to experience sexual harassment; such users humiliate female users through comments and private messages, or send them unsolicited explicit images. A large number of women have experienced sexual harassment online without knowing how to avoid it, because the anonymity of the software makes harassers difficult to punish (Hanson, 2020); even when a harassed user reports the behaviour, the offender rarely faces consequences.

 

It is precisely because of the freedom and anonymity of social platforms that some users feel free to abuse others online, using inflammatory language to stir up emotions and offend other users, believing that they bear no responsibility for what they say on the internet and can attack a person effortlessly. This behaviour can trigger extreme incidents and leave victims with severe psychological damage, yet some users who enjoy abusing others online do not even consider their behaviour wrong; instead they conclude that the victim deserved the abuse, and that anyone who cannot take it is simply weak-minded.

 

Online platforms allow these users to do vicious things with impunity and thereby satisfy their darker impulses. Some malicious users can even assemble accurate, real information about a person from what they share on social media platforms, such as photos and location data. When malicious users collect this information, the consequences for victims, especially women, can include violence, rape or murder, and can be fatal (Carlson, 2020).

‘Here’s What Online Harassment Looks Like’ by AJ+

 

It is worth noting that users’ posts are currently public the instant they are made: there is no pre-publication review, and a platform can only remove questionable content after the fact, meaning anything can be posted and remains available until it is discovered and taken down (Gillespie, 2018). As a result, harmful content floods the platforms and is difficult to stop.

 

Content control on digital platforms

The control of undesirable content on digital platforms relies heavily on the platforms themselves and on government control.

‘Social Media Logos’ by BrickinNick is licensed under CC BY-NC 2.0.

Platforms: need to intervene directly in content management, using technical means to stop the spread of undesirable content.

Content from traditional media has already undergone vetting by the time it reaches the audience, for example when television programmes are commissioned or news articles are assigned and edited (Gillespie, 2018). This editorial review keeps the content that appears in the public eye under control and stops offensive or harmful material from appearing (Gillespie, 2018). Online media also experiment with delays today, which give reviewers a short window in which to monitor and assess content. At present only a small lag is feasible: a seven-second delay is brief, but it allows a reviewer to stop some undesirable content from appearing (Gillespie, 2018). Continued improvements in technology might allow longer dissemination lags, giving reviewers time to address problems before they spread.
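To make the idea of a dissemination lag concrete, here is a minimal sketch of a delayed-publication queue. The seven-second figure mirrors the broadcast-style delay mentioned above; the function names are hypothetical illustrations, not any platform’s actual API, and a real system would operate at vastly larger scale.

```python
from collections import deque

REVIEW_DELAY = 7.0  # seconds: a broadcast-style lag before a post goes live

pending = deque()   # (release_time, post) pairs waiting out the delay

def submit(post, now):
    """Queue a post; it becomes public only after the delay elapses."""
    pending.append((now + REVIEW_DELAY, post))

def withdraw(post):
    """A reviewer pulls a flagged post before its delay expires."""
    for item in list(pending):
        if item[1] == post:
            pending.remove(item)

def release_due(now):
    """Publish every post whose delay has elapsed and was not withdrawn."""
    published = []
    while pending and pending[0][0] <= now:
        published.append(pending.popleft()[1])
    return published
```

The point of the sketch is simply that a delay turns publication from an instantaneous act into a window during which a reviewer can intervene.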

Platforms could also develop automated detection, letting systems identify content such as nudity and hate speech and ‘flag’ it for subsequent review (Gillespie, 2018). Algorithms that recognise pornography or hate speech can immediately remove such content or block it from being posted (Gillespie, 2018). Furthermore, automated detection does not display human bias and can identify and remove content in any language (Gillespie, 2018). However, the technology still needs continuous improvement: automated detection struggles to accurately identify offensive content or behaviour when content lacks context, when users adopt evasion tactics, or when attacks keep shifting form, and it sometimes mistakenly removes content that contains nothing objectionable (Gillespie, 2018). Controlling undesirable content on platforms could perhaps combine automated detection with editorial oversight.

‘Big Tech icons, including Apple, Facebook, Amazon and Google, smashed by a hammer’ is licensed under CC BY-NC 2.0.

Government: controlling digital platforms requires government oversight in addition to the platforms’ own efforts. Although the internet promotes freedom of expression, it also needs state regulation. Effective regulation can significantly reduce radical speech, illegal speech and incidents of pornography and violence online. Only government measures with real coercive and deterrent force can effectively stop these behaviours; relying on platform controls alone will not make users wary. In real life it is laws and regulations that restrain public behaviour, and online, too, only the restraint of the law can genuinely reduce harmful conduct.

 

Conclusion

The use of the Internet has continued to increase worldwide (Dutton, 2009). This has led to a proliferation of users and content on digital platforms, and with it an influx of violent, pornographic, discriminatory and other objectionable content whose harm is not confined to the internet and can spill over into society. It is therefore necessary to strengthen the regulation of digital platforms. The platforms themselves need to improve content through better technology, while governments need strong legal measures to make users wary of what they publish online, so that internet users no longer believe the internet is a lawless place where they can spread malicious content at will. As the world fully enters the digital age, the user base of digital platforms will keep growing with time and technology, and life may even become fully digital in the future, so it is necessary to improve the content of digital platforms and reduce the amount of harmful material on them.

‘Cyberbullying, would you do it?’ by kid-josh is licensed under CC BY-NC 2.0.

 

References

Carlson, B. (2020). Love and hate at the Cultural Interface: Indigenous Australians and dating apps. Journal of Sociology, 56(2), 133–150. https://doi.org/10.1177/1440783319833181

 

Dutton, W. H. (2009). The Fifth Estate emerging through the network of networks. Prometheus, 27(1), 1–15. https://doi.org/10.1080/08109020802657453

 

Gillespie, T. (2018). Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press. https://doi.org/10.12987/9780300235029

 

 

Hanson, K. R. (2020). Becoming a (gendered) dating app user: An analysis of how heterosexual college students navigate deception and interactional ambiguity on dating apps. Sexuality & Culture, 25(1), 75–92. https://doi.org/10.1007/s12119-020-09758-w

 

IGI Global. (n.d.). What is traditional media. https://www.igi-global.com/dictionary/traditional-media/47688

 

Wikipedia. (n.d.). Social media. https://en.wikipedia.org/wiki/Social_media