
On 17 October 2025, the National AI Centre (NAIC) unveiled the Guidance for AI Adoption (the “Guidance”), a new national framework for the responsible adoption of artificial intelligence (AI). This comprehensive update to the 2024 Voluntary AI Safety Standard (VAISS) reinforces Australia's commitment to a principles-based, globally aligned approach to AI governance. The framework is intended to support both AI developers and deployers in embedding responsible AI practices throughout the lifecycle of AI systems.
The Guidance consolidates and expands upon the VAISS, introducing six essential practices that organisations are encouraged to adopt:
i. decide who is accountable;
ii. understand impacts and plan accordingly;
iii. measure and manage risks;
iv. share essential information;
v. test and monitor; and
vi. maintain human control.
These practices are intended to be adaptable across sectors and organisation sizes, offering a scalable approach to AI governance.
The Guidance is published in two versions, allowing organisations to engage with it at a level of depth suited to their circumstances.
To support adoption, the NAIC has also released a suite of practical tools, including: (i) an AI screening tool; (ii) a policy guide and template; (iii) an AI register template; and (iv) a glossary of terms and definitions. These resources aim to lower the barrier to responsible AI use, particularly for small and medium-sized enterprises.
The release of the Guidance confirms Australia’s preference for a principles-led, advisory model of AI oversight, favouring practical guidance over immediate legislative intervention. Rather than introducing new laws, the framework complements existing regulatory instruments such as the Privacy Act 1988, the Australian Consumer Law, and sector-specific regimes, including those governing medical devices, critical infrastructure, and financial services, as well as APRA prudential standards. This measured approach enables organisations to strengthen internal governance and demonstrate accountability, while retaining the agility needed to innovate responsibly.
For businesses operating in or engaging with the Australian market, the release of this guidance signals a clear direction: responsible AI is no longer a future consideration but rather a present imperative. While the framework remains voluntary, it is poised to become a de facto benchmark for demonstrating accountability and maintaining public trust. Organisations that proactively align with these practices will be better positioned to navigate stakeholder expectations and regulatory scrutiny.
Crucially, the Guidance’s alignment with international standards such as ISO/IEC 42001 and the NIST AI Risk Management Framework ensures that Australian businesses are not operating in isolation. Instead, they are participating in a global conversation about trustworthy AI, with the tools and frameworks to support cross-border compliance and collaboration.
As jurisdictions around the world continue to explore regulatory models for AI, Australia’s approach offers a pragmatic alternative: one that emphasises capability-building, transparency, and ethical design. For legal and compliance teams, this presents an opportunity to embed responsible AI principles early, shaping innovation in a way that is both commercially viable and socially sustainable.
Authored by Charmian Aw, Ciara O'Leary, and Rosen Chen.