Author: Dr Steve Nuttall | Posted On: 26 Nov 2025

In October this year, I attended the CEDA AI Leadership Summit 2025 at Brisbane’s Royal International Convention Centre, Australia’s most influential AI conference of the year. Sponsored by the National AI Centre, the sold-out event brought together government ministers, industry leaders, academics, and international experts including senior representatives from OpenAI, AWS and NVIDIA.
Fifth Quadrant was there in Research Alley, showcasing our Responsible AI Index with a digital poster and encouraging attendees to use our free self-assessment tool. But mostly, I was there to listen. And what I heard was a fascinating mix of ambition, pragmatism, and honest acknowledgement of the challenges ahead.
Australia’s Framework For Responsible AI Adoption
The summit’s most significant announcement came from NAIC CEO Lee Hickin during the opening and welcome session. After extensive consultation with hundreds of organisations across sectors, NAIC released new Guidance for AI Adoption – streamlining the previous 10-point Voluntary AI Safety Standard into six essential practices now known as ‘AI6’.
The new framework considers value creation, as well as risk mitigation. Take transparency, for example. ‘It’s not just because you need to do it to avoid the risk,’ Hickin noted. ‘You do it because customers will buy into that idea of transparency. If you’re transparent with them, they will trust your business.’
The six essential practices provide a foundation that organisations can build upon as their AI maturity grows. The guidance includes practical tools such as AI policy templates, risk assessment guides, and register frameworks, making responsible AI adoption accessible for organisations of all sizes.
Fifth Quadrant’s Responsible AI Self-Assessment Tool complements the AI6 practices by helping Australian organisations evaluate their RAI maturity.
The Productivity Prize
The Hon Dr Andrew Charlton MP, Assistant Minister for Science, Technology and the Digital Economy, reinforced why practical adoption frameworks matter, delivering a direct message that AI is an economic imperative for Australia: ‘We will be a poorer nation if we do not do this.’
The numbers are substantial. According to the Productivity Commission, AI could improve Australia’s multi-factor productivity by 2.3% over the next decade: an additional $116 billion of GDP. Other speakers cited $142 billion in productivity gains by 2030 from deploying existing models.
The optimism is grounded in real advantages. Australia ranks third globally in AI usage per capita, ahead of the UK and United States. ChatGPT has roughly 10 million weekly users, about 50% of our internet-enabled population, according to Jake Wilczynski, Head of Communications APAC for OpenAI.
Australian Organisations Leading Adoption
Craig Lawton, Head of AI and Data Strategy at AWS, provided compelling evidence of this adoption acceleration during the ‘AI: The New Arms Race’ panel. He noted that Australian businesses are adopting AI ‘every three minutes,’ with recent research showing 41% of Australian organisations have already adopted agentic AI, and another 50% planning implementation within the next six months.
Our SME AI Pulse tracker, conducted on behalf of NAIC, confirms this momentum: 47% of Australian small and medium businesses are now actively adopting AI, with the number using AI broadly across operations doubling in recent months.
The Workforce Question
Economics professor Daniel Susskind from Oxford University delivered the most thought-provoking session. His research documents centuries of automation anxiety, from the Luddites to Geoffrey Hinton’s failed 2016 prediction that radiologists would be obsolete within five years.
Susskind’s historical analysis is broadly optimistic. Technologies create ‘substituting forces’ that displace workers from tasks, but also ‘complementing forces’ that increase demand for labour through productivity gains, economic growth, and entirely new industries.
But Susskind offered a provocative question: ‘Given that these generative systems are so good at performing many tasks that white-collar workers do… where is it that displaced workers can retreat? Where in the economy are there going to be all these unautomated tasks that will provide a refuge for human workers?’
It was a moment of intellectual honesty in a conference otherwise focused on opportunity. The question isn’t whether AI creates productivity gains; the real question is how those gains will be distributed.
What Lies Ahead
Jake Wilczynski from OpenAI framed AI sovereignty as an important but often misunderstood part of Australia’s AI agenda. He argued that while it is positive for Australia to invest in sovereign large language models and local infrastructure, the national discussion risks becoming too narrowly focused on building domestic LLMs at the expense of much larger and more immediate opportunities.
Australia already has widespread uptake of existing models, tens of thousands of local developers building tools on top of global platforms, and a substantial productivity upside available today. The real sovereignty question, in his view, is broader: ensuring Australia captures economic value, builds a skilled workforce, supports local R&D, and invests meaningfully in infrastructure so it can compete globally.
Dr Tomasz Bednarz from NVIDIA introduced ‘the next ChatGPT moment’: the Physical AI Era. Physical AI applies multimodal inputs such as images, video, sensor data and text to generate real-world actions, enabling robotics, autonomous vehicles, warehouse automation and smart infrastructure. He argued that this could transform industries worth tens of trillions of dollars, given the scale of global cameras, factories, warehouses and vehicles that could be automated. His conceptual endpoint is the ‘city as a robot’, where an entire urban environment becomes an interconnected, intelligent system.
Final Thoughts
Australia’s opportunity is substantial and grounded in real advantages. Technical barriers are falling faster than cultural and organisational ones. Workforce implications remain genuinely uncertain. And sovereignty should be understood broadly: it is about capabilities, not just model development.
Fundamentally, leadership matters. Whether in government setting frameworks like the AI6, businesses driving cultural change, or organisations supporting SME adoption, human decisions about how we deploy AI matter as much as the technology itself.
