eSafety report shows AI companions are putting children at risk
eSafety Commissioner
Popular AI companion chatbots are failing to protect Australian children from exposure to sexually explicit content and are not doing enough to prevent users from generating child sexual exploitation and abuse material, according to eSafety’s latest transparency report.
The report summarises responses from four AI companion services – Character.AI, Nomi, Chai, and Chub AI – to questions asked by the regulator about how they are tackling these and other issues.
The report also revealed that when users engaged in chats related to suicide or self-harm, most of the AI companions featured failed to refer them to appropriate support services, and did not warn users of the potential risk and criminality of accessing or creating child sexual exploitation and abuse material through their service.
eSafety Commissioner Julie Inman Grant said AI companion services, marketed as sources of friendship, emotional support or romantic companionship, are becoming increasingly popular with Australian children, but pose significant risks if safety guardrails are not put in place.
“We are riding a new wave of AI companions that are entrapping and entrancing impressionable young minds, with human-like, sycophantic and often sexually explicit conversations, some even going as far as encouraging self-harm and suicide,” Ms Inman Grant said.
“As this report shows, none of these four AI companions had any meaningful age checks in place to protect children from age-inappropriate content that many of these chatbots are capable of producing, primarily relying instead on self-declaration of age at sign up. In Australia, this is no longer good enough.
“In addition to this report, our recent survey of 1,950 children aged 10 to 17 in Australia shows AI companions and AI assistants are already a common part of their lives. 79% of children told us they had used either an AI companion or AI assistant. While the majority of these children had used an AI assistant, 8% said they had used an AI companion, which we estimate represents around 200,000 children in Australia.
“But we’re just at the beginning of this, and we’re already starting to see the lines blur between the AI assistant chatbots kids might use to help them with their homework and these AI companions, in terms of their features and functionality.
“While AI companions can feel personal and supportive, they really are not designed for children, and they are not mental health experts either, which is why I’m concerned that most of the companion services we put questions to did not automatically refer users to appropriate support when self-harm or suicide was detected in chats.
“It’s also extremely troubling to discover that a number of these services were not checking all the AI models they used to provide their service for inputs (or prompts) relating to child sexual exploitation and abuse material.
“And many didn’t check outputs either for the potential generation of child sexual exploitation and abuse material, or use proven deterrent measures like advising users of the criminality of engaging in conduct related to child sexual exploitation and abuse.”
The report also showed that some AI companion chatbots employed insufficient numbers of trust and safety personnel. Nomi and Chub AI reported they had no dedicated trust and safety staff or moderators.
The report follows the recent commencement of Age-Restricted Material Codes in Australia designed to protect children from exposure to a range of age-inappropriate content. Among other service types, these new codes also apply to the growing number of AI chatbots.
These codes complement the existing Unlawful Material Codes and Standards, which require industry to take system-wide action to prevent child sexual exploitation material, as well as pro-terror and extreme crime and violence material.
“The Age-Restricted Material Codes are now law and require AI companion chatbots to protect children from age-inappropriate content such as sexually explicit material by preventing the service from generating this content, or through implementing appropriate age assurance,” Ms Inman Grant said. “And they also require them to provide appropriate crisis and mental health information and services.”
The codes and standards are legally enforceable and breach of a direction to comply may result in civil penalties of up to $49.5 million.
Since the four companies received transparency notices from eSafety in October 2025, some have improved their age assurance measures, while one company has removed its service from Australia. Given the safety gaps revealed through the transparency process, eSafety considers these moves to be positive developments.
Following the transparency notice process, Character.AI introduced age assurance measures for Australian users in early 2026 and removed the chat function from its under-18s experience, while Chub AI decided to geo-block, or withdraw, its service from Australia.
Chai has now restricted free access to chat with AI companions, instead requiring users to pay for a subscription, while Nomi has committed to ‘implementing further age assurance functionality’.
The findings reveal serious gaps in basic safeguards for children:
- Children were able to access adult features: None of the providers had robust age verification measures, relying instead on app store ratings or self-declaration at signup.
- Self-harm support lacking: Chai, Chub AI, and Nomi did not direct users to mental health or crisis support when self-harm was detected in user-prompts.
- Failure to check for harmful content: Chub AI and Nomi did not monitor inputs or outputs across all relevant text, image and video AI models used to provide their service for unlawful or potentially harmful material. Chai failed to check outputs across all relevant models.
- Limited trust and safety staffing: Nomi and Chub AI had no staff dedicated to trust and safety or moderation.
- No reporting of CSEA attempts: Chai and Nomi did not advise users of the criminality of prompting for child sexual exploitation and abuse (CSEA) material, nor did they report CSEA material to law enforcement authorities or to child protection organisations like the US National Center for Missing and Exploited Children (NCMEC).
- Failure to red-team: Chub AI and Nomi did not conduct red-teaming (i.e., testing for vulnerabilities, limitations or potential for misuse) across all models used to provide their service. Not red-teaming across all models can mean that services are more exposed to the risk of illegal or harmful material being produced.