eSafety raises concerns about misuse of Grok to generate sexualised content
eSafety Commissioner
eSafety remains concerned about the use of the generative AI system Grok on X to generate content that may sexualise or exploit people, particularly children.
While the number of reports eSafety has received remains small, it has risen over the past couple of weeks from almost none to several, all relating to the use of Grok to generate sexualised or exploitative imagery. eSafety will use its powers, including removal notices, where appropriate and where material meets the relevant thresholds defined in the Online Safety Act.
X, Grok, and a wide range of other services are also subject to systemic safety obligations to detect and remove child sexual exploitation material and other unlawful material as part of Australia’s world-leading industry codes and standards.
eSafety has written to X seeking further information about the safeguards in place to prevent Grok’s misuse on its service and to comply with these obligations.
The safety of generative AI services and features is a key regulatory priority for eSafety. eSafety has already taken enforcement action in 2025 in relation to some of the “nudify” services most widely used to create AI child sexual exploitation material, leading to their withdrawal from Australia (eSafety action prevents services “nudifying” Australian school children).
Additional mandatory codes will commence on 9 March 2026, creating new obligations for AI services, among others, to limit children’s access to sexually explicit content, as well as violent material and themes related to self-harm and suicide.
In addition, eSafety expects all covered services to take reasonable steps to comply with the Basic Online Safety Expectations, including the expectation to proactively minimise the extent to which material or activity on the service is unlawful or harmful to children.
X has previously been issued transparency notices requiring it to report on its compliance with those expectations in relation to child sexual exploitation and abuse material, including matters relating to the use of generative AI features such as Grok. eSafety is also engaging closely with international child protection organisations and online safety regulators, who have identified similar emerging patterns involving Grok and other generative AI tools.
These developments reinforce the importance of strong safeguards and Safety by Design measures to prevent the misuse of generative AI, particularly where children are involved.