Warning: Mentions of child sexual abuse material.
Australia’s online safety regulator has issued a stark warning about Elon Musk’s social media platform X, describing it as the worst-performing major platform for tackling child sexual abuse material. The assessment intensifies scrutiny of the company’s safety practices and artificial intelligence tools.
According to reporting by Crikey, the eSafety Commissioner has raised serious concerns about the prevalence and accessibility of child exploitation content on X, nearly three years after Musk pledged that eliminating such material would be the platform’s “priority number one.”
The regulator’s assessment follows ongoing investigations into X and its artificial intelligence chatbot, Grok, which critics say has amplified risks by enabling the creation and spread of sexualised and manipulated images online.
“Systemic” concerns
Australia’s eSafety Commissioner warned that child sexual exploitation material appeared “particularly systemic” on X compared with other mainstream social media services. Officials said harmful content could still be found using seemingly ordinary hashtags, raising questions about the effectiveness of the platform’s moderation.
The regulator has sought detailed information from X about safeguards designed to detect and remove illegal content, including measures addressing generative AI tools integrated into the platform. Under Australia’s Online Safety framework, platforms are required to actively identify and remove child exploitation material and demonstrate compliance through transparency reporting.
X has disputed the characterisation, stating it proactively removes more than 99 per cent of accounts associated with child sexual abuse material and works with law enforcement agencies globally. The company has also criticised regulators for what it says is insufficient evidence supporting some claims.
AI tool under scrutiny
Much of the renewed concern centres on Grok, X’s AI chatbot developed by Musk’s company xAI. Investigations and lawsuits allege the technology has been used to generate sexualised images, including manipulated depictions of minors.
A recent US lawsuit filed by three teenage girls claims Grok-enabled tools were used to create explicit images derived from real photographs, which were then circulated online without consent. The plaintiffs argue inadequate safeguards allowed the abuse to occur and caused significant psychological harm.
Separate analyses have found large volumes of sexualised images produced by the system within short timeframes, prompting regulatory attention across multiple countries.
Growing regulatory pressure
The controversy adds to mounting global scrutiny of Musk’s platform following earlier disputes with Australia’s eSafety Commissioner over compliance with online safety laws and transparency notices related to harmful content. Courts have previously affirmed the regulator’s authority to require platforms to report on child exploitation prevention measures.
Online safety advocates argue the situation highlights broader challenges facing technology companies as generative AI tools rapidly expand the scale and speed at which harmful content can be produced.
Australia’s regulator has indicated that new industry codes coming into force in 2026 will impose stronger obligations on digital platforms and AI services to limit children’s exposure to explicit material and improve preventative safeguards.
For now, the dispute underscores an unresolved tension between innovation and responsibility in social media governance — and raises renewed questions about whether platforms can effectively police harmful content in an era increasingly shaped by artificial intelligence.
The Uniting Church in Australia has developed a Child Safe Commitment Statement which is available here.
The Assembly of the Uniting Church in Australia, its NSW and ACT Synod and this masthead announced in February 2025 that they were leaving X (formerly Twitter), citing concerns over the platform’s governance, its content moderation in allowing explicit material, and overall safety.