
AI Governance for Business Leaders
AI governance is the strategic framework that ensures AI adoption aligns with ethical standards, regulatory requirements, and business objectives. For C-suite executives, it provides the structure to mitigate risks, drive compliance, and build trust in AI-driven decisions.
Staying ahead of evolving AI regulations is essential to unlocking innovation while safeguarding your company’s reputation and operations.
Effective AI Governance
Explore these AI governance frameworks and the benefits they can offer your organization.
The Hiroshima AI Process
The Hiroshima AI Process, initiated by the G7, aims to create an international framework with guiding principles and a code of conduct for AI development. The resulting Hiroshima Process International Code of Conduct provides voluntary guidance for organizations developing advanced AI systems, focusing on risk management, transparency, and ethical considerations. To monitor adoption of the Code of Conduct, the OECD has launched a reporting framework for organizations developing advanced AI systems. Australia's membership in the Hiroshima AI Process Friends Group signals a global effort to promote the safe and responsible use of AI, while Japan is leading efforts to ensure AI safety and to balance regulation with innovation. A CSIS report analyzes the Hiroshima AI Process Comprehensive Framework, examining its potential to enhance international cooperation and regulatory interoperability, especially among G7 nations, and recommends further enhancements.
The EU AI Act
One of the most significant developments in AI regulation is the European Union Artificial Intelligence Act (EU AI Act). Key takeaways of this Act include:
- Risk-based approach: AI systems are categorized by risk level, with different requirements for each category (a brief sketch of this triage follows the list).
- Prohibited AI practices: Certain AI applications, such as government social scoring, are prohibited.
- High-risk AI systems: Strict requirements are set for AI systems used in critical areas like healthcare, education, and law enforcement.
- Transparency requirements: Obligations to inform users when and in what form they are interacting with AI systems.
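
To make the risk-based approach concrete, here is a minimal sketch of how an organization might triage entries in its AI inventory against these tiers. It is purely illustrative: the tier names, example systems, and one-line obligations are simplified assumptions, not the Act's legal definitions.

```python
from enum import Enum

# Illustrative only: hypothetical tier names loosely mirroring the EU AI Act's
# risk categories. The Act's own legal definitions are the authoritative source.
class RiskTier(Enum):
    PROHIBITED = "prohibited"   # e.g. government social scoring
    HIGH = "high"               # e.g. healthcare, education, law enforcement uses
    LIMITED = "limited"         # transparency obligations apply
    MINIMAL = "minimal"         # no specific obligations under the Act

# Hypothetical one-line summaries of the obligations each tier triggers.
OBLIGATIONS = {
    RiskTier.PROHIBITED: "Do not deploy.",
    RiskTier.HIGH: "Meet strict requirements before and during deployment.",
    RiskTier.LIMITED: "Inform users they are interacting with an AI system.",
    RiskTier.MINIMAL: "No specific obligations; voluntary codes may still apply.",
}

def triage(system_name: str, tier: RiskTier) -> str:
    """Return the governance action for an AI system assigned to a risk tier."""
    return f"{system_name}: {OBLIGATIONS[tier]}"

# Example: triaging two entries from a hypothetical AI inventory.
print(triage("resume-screening tool", RiskTier.HIGH))
print(triage("customer-service chatbot", RiskTier.LIMITED))
```

In practice, classification depends on a system's intended purpose and context of use, so any such mapping should be reviewed against the Act's actual provisions.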
Australia's Voluntary AI Safety Standard
The Voluntary AI Safety Standard gives practical guidance to all Australian organisations on how to use and innovate with artificial intelligence (AI) safely and responsibly. Through its Safe and Responsible AI agenda, the Australian Government is acting to ensure that the development and deployment of AI systems in legitimate but high-risk settings in Australia is safe and reliable.
The standard consists of 10 voluntary guardrails that apply to all organisations across the AI supply chain, setting transparency and accountability requirements and spelling out what developers and deployers of AI systems must do. The guardrails help organisations benefit from AI while mitigating and managing the risks it may pose to organisations, people and groups. They have been intentionally aligned with prominent international standards, particularly AS ISO/IEC 42001:2023, which focuses on AI management systems, and the US-developed NIST AI RMF 1.0 (see below), which focuses on AI risk management.
Australia’s AI Ethics Principles
The principles are entirely voluntary. They are designed to prompt organisations to consider the impact of using AI-enabled systems, and they are intended to be aspirational, complementing rather than substituting for existing AI regulations and practices.
Not every principle will be relevant to your use of AI. Not every business uses AI, and not every use of AI requires comprehensive analysis against the principles. For example, many businesses use systems that incorporate AI, such as email or accounting software, and such use is unlikely to have sufficient impact to warrant applying the principles. If your use of AI doesn't involve or affect human beings, you may not need to consider all of the principles.
USA - NIST AI Risk Management Framework (AI RMF)
Released by the US National Institute of Standards and Technology (NIST) on January 26, 2023, the Framework was developed through a consensus-driven, open, transparent, and collaborative process that included a Request for Information, several draft versions for public comment, multiple workshops, and other opportunities to provide input. It is intended to build on, align with, and support AI risk management efforts by others.