As technology has developed, smartphones and social media have become near-universal fixtures in the lives of a new generation of teenagers and children. This has raised public concern, especially among parents, about the vast amount of information minors are exposed to on the Internet; many parents do not fully trust social platforms to protect their children's privacy, personal safety, and health. Using Twitter and TikTok as examples, this article explores the safety rules each platform uses and how they operate, parents' principal safety concerns, and user feedback on these measures.
As globally popular social media platforms, Twitter and TikTok both maintain relatively well-established protections for minors, built on international organizations' guidelines for children's online safety. Detailed rules and policies can be found in each platform's user help center. Because the two platforms share information differently, with Twitter relying more on text content and TikTok mainly on short videos, there are clear differences in how they protect minors.
Unfortunately, the only prominent child-related policy on Twitter's official help pages is its policy against child sexual exploitation. The policy describes in detail which behaviors violate it and the penalties for doing so, namely banning the offending account and reporting it to the appropriate child protection authorities. The page also encourages Twitter users to actively report such content. In practice, Twitter reviews and restricts all user-posted content for prohibited material, such as violence, private information, sensitive topics, and misinformation. The platform also asks users for their age and lets them block and report offending content.
TikTok's safety rules and solutions for minors can easily be found in its user center. Once users submit their age information, TikTok automatically recognizes underage accounts and matches them with age-appropriate video content; it can even identify underage users who have lied about their age and adjust the content it delivers accordingly. As on traditional social media, violent and extreme hateful content is prohibited or age-restricted. Parents can also link their children's accounts directly to their own, controlling how much time teens can spend on the app and limiting the content they see.
Parents are able to exercise some limited control over minors' safety under these platform policies. Twitter offers a range of privacy and security settings that parents can use to keep their teens safe on the platform: users can block, report, or mute other accounts; limit who can see their tweets, who can contact them, and who can tag them; and curate the type of content they see, hiding material flagged as sensitive. Notably, Twitter introduced Safety Mode in September 2021, which automatically blocks abusive or spammy accounts that contact a user for a short period; at the end of that period, the user receives a summary of which accounts were blocked (Internet Matters Team, 2022). TikTok, for its part, offers a Family Pairing mode that lets parents and teens customize safety settings to their needs. Users under 18 are limited to 60 minutes of screen time per day, and a Restricted Mode feature helps limit content that may be inappropriate for underage users. All of these controls must be set up with a parent's help.
Filtering and blocking software is one of the most commonly touted prevention tools. Although such programs are advertised as a way to prevent teens from encountering pornography and other inappropriate sexual material online, research suggests that many families do not use them (Mitchell et al., 2005). On one hand, parents are skeptical about the effectiveness of platforms' restrictive features and prefer instruction or direct supervision to limit minors' use of social software; on the other hand, parents may worry that overly strict control will make adolescents rebellious, or believe that such content poses no serious psychological risk to their children. Sample surveys of roughly 1,000 U.S. parents of teenagers, conducted by scholars and research centers at different times, illustrate this gap. According to the data, more than two-thirds of parents worry that their children spend too much time in front of phone screens, and most limit usage time and check or control what their children watch (Anderson, 2019). Yet while the vast majority of parents (84%) believe adults should be extremely concerned about teens' exposure to sexual content on the Internet, only a minority (33%) report using any type of filtering or blocking feature (Mitchell et al., 2005). Although these data sample only U.S. parents of teens, online safety for teens is an international issue, and similar patterns may well hold elsewhere. Moreover, teens' delinquent behavior offline and risky behavior online appear unrelated to whether filtering features are used, as does their exposure to harmful content. Filtering alone does not serve as an effective shield.
Both platforms have given users assurances through official policy statements and rule-making, but in actual use they have not received positive feedback. Twitter has strict rules and proposes serious penalties, yet users report that they do not experience the protection Twitter describes, and that violations are not punished promptly. In recent years, media outlets have reported that Twitter has failed to delete material involving minors who were sexually abused or trafficked, and has instead profited from it; in one case, only the intervention of law enforcement led Twitter to remove the content, after it had already circulated countless times on the platform. Even though TikTok has worked to eliminate child sexual exploitation material from its platform, it cannot be denied that a large number of unsafe images still spread on TikTok (Levine, 2022). Most parents, when asked about this, also say they lack the technical knowledge to adequately track their children's internet usage and do not know which internet tools to use (Stewart et al., 2021). These cases confirm parents' concerns: although social media platforms do have appropriate safety policies and protection rules in place, many unscrupulous people still use the platforms to disseminate the very content parents do not want their children to access. This also means that while both platforms and users want to protect the safety of minors, there is currently no direct solution to the spread and control of harmful information.
To summarize, parents remain deeply worried about minors' exposure to harmful information on social media platforms, yet there is still no effective solution. Platform administrators have tried their best to formulate protection policies and control mechanisms, but they are still unable to stop the proliferation of harmful content. Parents, in turn, are skeptical of these platforms' restrictive features because they have not seen effective results. However, as the public pays more attention to such incidents, more effective protections for minors may emerge in the future.
Anderson, M. (2019, March 22). How parents feel about – and manage – their teens' online behavior and screen time. Pew Research Center. https://www.pewresearch.org/short-reads/2019/03/22/how-parents-feel-about-and-manage-their-teens-online-behavior-and-screen-time/
Internet Matters Team. (2022, September 20). Twitter’s parental controls and privacy settings for parents. Internet Matters. https://www.internetmatters.org/hub/news-blogs/twitters-parental-controls-and-privacy-settings-what-parents-need-to-know/
Levine, A. S. (2022, August 4). TikTok moderators are being trained using graphic images of child sexual abuse. Forbes. https://www.forbes.com/sites/alexandralevine/2022/08/04/tiktok-is-storing-uncensored-images-of-child-sexual-abuse-and-using-them-to-train-moderators/?sh=46e33f65acb8
Mitchell, K. J., Finkelhor, D., & Wolak, J. (2005). Protecting youth online: Family use of filtering and blocking software. Child Abuse & Neglect, 29(7), 753–765. https://doi.org/10.1016/j.chiabu.2004.05.008
Stewart, K., Brodowsky, G., & Sciglimpaglia, D. (2021). Parental supervision and control of adolescents’ problematic internet use: Understanding and predicting adoption of parental control software. Young Consumers, 23(2), 213–232. https://doi.org/10.1108/yc-04-2021-1307
Internet Matters. (2023). TikTok parental controls and safety settings. https://www.internetmatters.org/parental-controls/social-media/tiktok-privacy-and-safety-settings/
National Center on Sexual Exploitation. (n.d.). Twitter has violated federal sex trafficking laws, lawsuit alleges. https://ncose.salsalabs.org/twitter/index.html