
Australia's AI Safety is at Risk

Social Cyber Group

Australia’s AI Week, held from 24 to 30 November, came with a significant ministerial announcement on 25 November: the federal government plans to create a national AI Safety Institute. The country now faces a very hard slog to help this institute deliver AI security.

The Social Cyber Group (SCG) welcomes this overdue step by the Australian government but warns that the emerging global picture of AI use is one of increasing threats and escalating risks. At the very moment governments are racing to build AI safety regimes, the firms at the centre of the ecosystem are not consistently turning safety principles into practice.

A co-founder of SCG, Professor Greg Austin, pointed to new research from the United States to underline the scale of the challenge. Work by US AI Safety Institute researcher Kevin Klyman has examined how well 15 major tech companies implemented the voluntary AI safety commitments they made to the White House in 2023. The findings are sobering: most companies did not fully honour their commitments, with particularly weak follow‑through on testing for extreme risks, strengthening security, and enabling independent scrutiny.

“This gap between rhetoric and implementation is now central to Australia’s own ‘technology crisis’, which the new AI Safety Institute must address”, Austin said.

Australia was a member of a small group of close allies supporting the launch of the International Network of AI Safety Institutes in San Francisco on 21 November 2024. Other countries and entities involved were Canada, the European Commission, France, Japan, Kenya, the Republic of Korea, Singapore, the United Kingdom, and the United States.

On 20 November, the US chipmaker NVIDIA enjoined all staff globally to use AI tools as part of their work. The CEO, Jensen Huang, called for all employees to treat AI as the starting point for every task, suggesting it would be “insane” not to do so. His comments provoked some public disquiet and employee concern.

Yet in Australia, public policy has only just begun to catch up with such radically expanding use. The new AI Safety Institute will be tasked with evaluating emerging capabilities and advising on risk. It will have to swim against strong tides of commercial momentum, technical complexity and limited domestic oversight capacity.

A particular weak point is Australia’s under‑investment in AI education. Since 2019, the country has issued high‑level AI ethics principles and voluntary safety standards, but it has not matched these with a serious national program to build AI literacy and technical capability at scale. Universities, vocational providers and professional‑education organisations are only now starting to design short, intensive programs that could help public servants, regulators and industry leaders quickly understand and manage AI risks. Without a major uplift in education and training, the formal architecture of an AI safety regime will sit on modest foundations.

“The business commitment to AI safety in Australia is largely rhetorical”, Austin said. Most Australian businesses are experimenting with AI, but few have put in place robust internal governance, testing and audit processes that match the pace at which tools are being rolled out. The new AI Safety Institute has been framed as an “expert hub” and an adviser to regulators, rather than an enforcement body armed with strong statutory powers.

One tool that could help close this gap is the Technology Impact Assessment (TIA) – a structured assessment that examines how digital systems, including AI, affect people, institutions and critical infrastructure. The Australian Government has begun to promote AI impact assessments in its own operations, requiring agencies to use an AI Impact Assessment tool and related assurance frameworks when deploying sensitive new systems. New South Wales has gone further with a mandatory AI assessment framework for state projects. However, these efforts remain fragmented and limited largely to government. There is, as yet, no economy‑wide requirement for high‑risk private‑sector AI systems to undergo rigorous, independent assessment before or during deployment.

Researchers associated with the Social Cyber Group and its related Institute argue that this must change. Professor Glenn Withers AO of the Crawford School at the Australian National University was co‑leader of a government‑funded project on TIA with Indian partners. “Our research with Indian colleagues on TIA, funded by the Australian government, suggests that governments need to commit more heavily to this tool as a necessary response to AI pressures”, Withers said. “In my view, systematic and regular assessment according to rigorous principles is the only way to connect abstract safety principles with the messy realities of AI systems embedded in workplaces, markets and public services.” Withers is a co-founder of SCG.

Withers also points to the slowness of the Australian government in following up on overseas lessons. SCG hosted a delegation of UK and US experts in Australia in March this year; that group is now promoting a joint AUKUS cyber and AI education initiative, so far with little effective response from the government.

For Lisa Materano, the CEO of Blended Learning International and a member of the same research team from the Social Cyber Institute (SCI), the priority is workforce capability. She is calling for “rapid escalation by Australia of investment in AI education, especially through professional education, such as one-day or week-long courses”.

“Short, targeted programs for managers, regulators and technical staff could give Australia’s institutions a fighting chance to keep pace with AI advances and to use tools like TIA effectively”, Materano said. She is also a co-founder of SCG and SCI, and of the related Social Cyber and Tech Academy.

The Social Cyber Group very much welcomes the announcement of the AI Safety Institute as a necessary first step but emphasises that such institutions alone will not solve Australia’s AI safety problem. Without enforceable obligations on high‑risk systems, strong incentives for firms to follow through on their commitments, and a substantial uplift in AI education, the new Institute risks becoming symbolic rather than a driver of real change.


About us:

The mission of SCG is to help businesses, governments and community organisations avoid or minimise the potentially high costs of cyber crises. Its work is based on the principle that each organisation has unique social characteristics that shape its security outcomes. SCG helps enterprise leaders understand the social DNA of their information technology and rewire it for sustainable and superior risk management.


Contact details:

Greg Austin +61 450 190 323 (available in Delhi until 30 November, 3pm)

Glenn Withers +61 416 249 350

Lisa Materano +61 438 134 558