Australia Considers Blocking Non-Compliant AI Services
- tech360.tv

Australia’s internet regulator is considering action against search engines and app stores that provide access to artificial intelligence services failing to verify user ages. This move comes as more than half of surveyed AI services have not publicly outlined compliance steps ahead of an upcoming deadline.

The warning signals one of the most assertive global efforts to regulate AI companies, which face mounting lawsuits for allegedly failing to prevent, or even encouraging, self-harm and violence. Researchers also warn that these platforms may harm young people's mental health more than social media does.
Australia became the first country to ban social media for teenagers in December over mental health concerns, inspiring similar intentions from world leaders. The nation is now leading a comparable crackdown on AI by imposing age restrictions on content accessed through the technology.
From March 9, internet services in Australia, including prominent chatbots like OpenAI’s ChatGPT and lesser-known companion chatbots, must prevent users under 18 from accessing pornography, extreme violence, self-harm and eating disorder content. Non-compliance could result in fines of up to A$49.5 million (USD 35 million).
An eSafety commissioner spokesperson stated the organisation will "use the full range of our powers where there is non-compliance," including "action in respect of gatekeeper services such as search engines and app stores that provide key points of access to particular services."
OpenAI and Character.AI have faced wrongful death lawsuits over their chatbots' interactions with young users. OpenAI acknowledged this week that it had deactivated the ChatGPT account of a Canadian teenager suspected of a mass shooting months before the attack, without informing authorities.
While Australia has not reported chatbot-linked violence or self-harm, the regulator has reported being told about children as young as 10 talking to AI-powered interactive tools for up to six hours daily. The eSafety spokesperson expressed concern that "AI companies are leveraging emotional manipulation, anthropomorphism, and other advanced techniques to entice, entrance, and entrench young people into excessive chatbot usage."
Apple, the top app store operator, did not respond to a request for comment but stated on its website last week that it would employ "reasonable methods" to stop minors from downloading 18+ apps in Australia and other jurisdictions introducing age restrictions, without detailing those methods. Google, Australia’s dominant search engine provider and No.2 app store operator, declined to comment.
Jennifer Duxbury, head of policy at internet industry group DIGI, who led the drafting of the AI code before it was signed off by the regulator, noted that eSafety was trying to inform chatbot services about the new rules. Duxbury added that "ultimately any service operating in Australia is responsible for understanding its legal obligations and ensuring it meets them."
A review conducted a week before Australia's deadline found that only nine of the 50 most popular text-based AI products had implemented or announced plans for age assurance systems. This review was based on each platform's response to prompts asking for restricted content and moderation policies, published statements including terms of service, and statements to Reuters.
Another 11 platforms had universal content filters or planned to block all Australian users, which would comply by keeping restricted content from everyone. This left 30 platforms with no apparent steps taken to adhere to the new regulations.
Most large chat-based search assistants, including ChatGPT, Replika, and Anthropic’s Claude, had begun rolling out age assurance systems or blanket filters. Chatbot provider Character.AI restricted open-ended chat for users under 18.
Companion chatbot providers Candy AI, Pi, Kindroid, and Nomi told Reuters they planned to comply without elaborating. HammerAI stated it would initially block its services from Australia to meet the code’s requirements.
However, these compliant services were a minority among companion chatbots. Three-quarters had no functional or planned filtering or age assurance, and one-sixth lacked a published email address for reporting suspected breaches, which the code also requires.
Elon Musk’s chat-based search tool, Grok, which is under investigation globally for suspected failure to stop production of synthetic sexualised imagery of children, had no age assurance measures or text-based content filters, Reuters found. Grok’s parent company, xAI, did not respond to a request for comment.
Lisa Given, director of RMIT University’s Centre for Human-AI Information Environments, said the Reuters findings were unsurprising because "most of these tools are being designed without a view to potential harms and the need for those kinds of safety controls." Given added, "It feels as though ... we're beta testing all of these things for these companies and they're trying to see how far society is willing to be pushed."
- Australia may compel app stores and search engines to block AI services that fail to verify user ages.
- New regulations taking effect March 9 require AI services to restrict under-18 access to content such as pornography and self-harm material.
- Non-compliance could lead to fines of up to A$49.5 million (USD 35 million).
Source: REUTERS