A study conducted in the United Kingdom has found that minors as young as 13 may be exposed to explicit sexual content on the social media platform X.
The research, carried out by the Center for Countering Digital Hate (CCDH), examined how underage users interact with the platform and what type of content is recommended to them.
Researchers created two test accounts posing as 13-year-old users — the minimum age permitted on the platform — and searched commonly used terms such as “sex” and “porn” to assess how effectively content moderation systems restrict explicit material.
According to the findings, eight out of ten searches returned posts classified as explicit under X’s own policies. These included material depicting sexual acts, nudity, and other adult content.
Algorithm Continued Recommending Explicit Content
The study also found that X’s recommendation system continued to surface similar content even without additional searches.
In the “For You” feed — the platform’s personalised content stream — researchers reported that 30.5% of posts shown to the test accounts contained explicit material. This included images and videos of sexual acts and masturbation, as well as content that appeared to involve minors.
Researchers suggested that such recommendations increase the risk of prolonged exposure to harmful material, particularly when curiosity leads young users to search sensitive keywords even once.
Messaging Features Also Raised Concerns
The report highlighted potential risks associated with direct messaging settings on accounts belonging to teenage users.
Although users under 18 are set by default to receive messages only from accounts they follow, the researchers found that these settings can be easily changed.
After adjusting the settings on the test accounts, researchers reported receiving unsolicited messages from adult accounts. In one case, an explicit video was reportedly sent to an account posing as a 13-year-old girl. Other messages included sexually suggestive images and offers of further adult content.
Access To Explicit Communities And Content
The study found that test accounts were able to join communities dedicated to sharing adult content.
Some of the communities identified included groups focused on pornography sharing. Researchers also reported exposure to posts promoting adult websites and content featuring the image of Jeffrey Epstein, along with material appearing to depict minors in sexual contexts.
These findings raised concerns about whether platform safeguards are sufficient to prevent underage users from accessing harmful content.
Researchers Flag Weak Safeguards And Legal Risks
The Center for Countering Digital Hate stated that exposure to explicit material was driven partly by the platform’s recommendation algorithm and described current safety controls as inadequate.
Callum Hood of the CCDH said even brief curiosity by young users could lead to harmful exposure.
“Even short-lived curiosity could expose children to explicit sexual material and risks of grooming, proving the platform’s safeguards simply do not work,” he said.
The organisation warned that the findings may indicate potential breaches of the Online Safety Act, which requires digital platforms to protect children from harmful content.
Wider Scrutiny Of X And Its AI Tools
The study adds to ongoing scrutiny of X, which was acquired by Elon Musk in 2022.
The platform has also faced criticism over its artificial intelligence tool Grok, which allows users to edit images of real individuals.
UK Prime Minister Keir Starmer previously criticised such AI-generated imagery as “disgusting” and “shameful,” prompting the introduction of some restrictions.
Meanwhile, the UK regulator Ofcom has launched a formal investigation into the platform’s safety systems, including Grok and broader content moderation policies.
Timeline Of The Study
The research testing was conducted in the United Kingdom between February and March this year. The findings have intensified debate over child safety, online regulation, and the responsibility of social media platforms to protect younger users.
