The Popular Video Platform Allegedly Directs Children's Profiles to Explicit Material In Just a Few Taps

According to findings from a fresh inquiry, the widely used social media app has been observed directing children's accounts to explicit material in just a couple of steps.

Testing Approach

A campaign organization created test accounts using the birthdate of a 13-year-old and enabled the "restricted mode" setting, which is designed to limit exposure to sexually suggestive content.

Study authors found that TikTok suggested inappropriate and adult-themed search terms to multiple test profiles that were set up on new devices with no prior browsing data.

Alarming Recommendation Features

Keywords proposed under the "suggested searches" feature included "very very rude skimpy outfits" and "very rude babes", and then escalated to terms such as "graphic sexual content".

For three of the accounts, the inappropriate search terms were suggested immediately.

Fast Track to Adult Material

After minimal interaction, the researchers found explicit material ranging from revealing content to penetrative sex.

The research group stated that the content sought to avoid detection, usually by being embedded within a benign image or video.

For one profile, the process took two taps after signing in: one on the search bar and another on the suggested search.

Regulatory Context

The research entity, whose mandate includes investigating digital platforms' effect on human rights, said it conducted several experimental rounds.

The first round took place before the enforcement of child protection rules under the United Kingdom's digital protection law on July 25th, and a second set after the rules took effect.

Alarming Results

The organization noted that several pieces of content featured someone who appeared to be under 16 years old, and that these had been reported to the online safety group, which monitors online child sexual abuse material.

The research organization claimed that the social media app was in breach of the UK safety legislation, which requires social media firms to prevent children from viewing harmful material such as explicit content.

Regulatory Response

An official representative for the UK communications regulator, which is responsible for overseeing the legislation, commented: "We appreciate the research behind this report and will examine its results."

The regulator's guidelines for complying with the act specify that tech companies posing a substantial risk of showing harmful content must "adjust their systems to remove harmful content from children's feeds".

TikTok's content guidelines forbid pornographic content.

Company Reaction

The social media company stated that after being contacted by the organization, it had deleted the violating content and introduced modifications to its suggestion feature.

"Immediately after notification" of these assertions, we responded quickly to investigate them, delete material that violated our policies, and introduce upgrades to our recommendation system," commented a company representative.

Brian Curry

A seasoned journalist with a passion for digital media and storytelling, bringing fresh perspectives to global events.