Governance arrangements in the face of AI innovation in Oz

07.04.25 09:56 PM

Beware of the Gaps

ASIC's review of 23 financial services and credit licensees revealed a "rapid acceleration" in AI adoption, accompanied by a shift towards "more complex and opaque" AI techniques. While licensees generally adopted a cautious approach to AI deployment, ASIC identified significant "weaknesses that create the potential for gaps as AI use accelerates", raising concerns about a widening governance gap and increased consumer harm.


The survey categorized licensees along a spectrum of AI governance maturity, from "latent" to "strategic and centralised". Weaknesses were observed across all but the most mature category, indicating systemic challenges in adapting existing governance frameworks to the unique risks and complexities of AI.


Here's a breakdown of the key governance weaknesses identified by ASIC, with a comparative lens across the maturity spectrum:


1. Lack of Clear Visibility of AI Use:

  • Description: Several licensees struggled to provide a comprehensive inventory of their AI use cases, suggesting a lack of centralized tracking and oversight. This was attributed to the absence of a dedicated AI inventory or the recording of models in dispersed registers. A case study highlighted instances of models missing from a central register despite policy requirements.
  • Implications: Hinders effective board and management oversight, impeding risk assessment, accountability, and strategic planning for AI deployment. Without a clear understanding of where AI is being used, organizations cannot effectively manage associated risks or ensure compliance.
  • Maturity Comparison:
    • Latent: Complete lack of visibility as AI risks and governance haven't been considered.
    • Leveraged and Decentralised: Visibility is fragmented, often residing within business units, leading to incomplete central records.
    • Strategic and Centralised: Characterized by a maintained AI inventory, providing a clear understanding of AI usage across the organization.
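The maintained AI inventory that distinguishes the most mature licensees can be sketched as a simple central register. A minimal sketch follows; the record fields and the sample entry are illustrative assumptions, not fields prescribed by ASIC.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative record for a central AI inventory; the field names are
# assumptions for the sketch, not ASIC requirements.
@dataclass
class AIUseCase:
    model_id: str
    business_unit: str
    purpose: str
    third_party: bool      # sourced from an external provider?
    risk_rating: str       # e.g. "low" / "medium" / "high"
    last_reviewed: date
    consumer_facing: bool  # does the model affect consumer outcomes?

# The central register is simply the complete collection of such records,
# rather than entries dispersed across business-unit registers.
inventory: list[AIUseCase] = [
    AIUseCase("credit-score-v2", "Lending", "Credit risk scoring",
              third_party=True, risk_rating="high",
              last_reviewed=date(2024, 11, 1), consumer_facing=True),
]

# Once the register exists, oversight queries become straightforward,
# e.g. all high-risk, consumer-facing models:
flagged = [u.model_id for u in inventory
           if u.risk_rating == "high" and u.consumer_facing]
```

The value is less in the data structure than in the discipline: every model must have exactly one authoritative entry, which is what the case study of models missing from the central register shows was lacking.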


2. Complexity and Fragmentation of Governance Frameworks:

  • Description: Some licensees developed AI governance iteratively, resulting in policies and procedures spread across numerous documents. This fragmented approach creates a risk of inconsistencies and gaps, making comprehensive oversight challenging.
  • Implications: Increases the difficulty of ensuring consistent application of standards, identifying and mitigating cross-functional risks, and adapting to the evolving AI landscape. Compliance becomes harder to manage within a complex web of documents.
  • Maturity Comparison:
    • Latent: Reliance on existing frameworks without AI-specific considerations, leading to potential gaps.
    • Leveraged and Decentralised: Frameworks evolve ad-hoc, contributing to complexity and fragmentation.
    • Strategic and Centralised: Establish AI-specific policies and procedures that are integrated and reflect a holistic, risk-based approach across the AI lifecycle.


3. Failure to Apply Evolving Expectations to Existing Models:

  • Description: Licensees sometimes failed to retrospectively apply updated AI policies (e.g., on ethics or disclosure) to models already in use. This lag in applying evolving standards can lead to outdated governance of existing AI deployments.
  • Implications: Creates a mismatch between current best practices and the operational reality of deployed AI, potentially exposing consumers to risks that newer policies aim to address. Undermines the intended impact of updated governance standards.
  • Maturity Comparison:
    • Latent: No consideration of evolving AI expectations.
    • Leveraged and Decentralised: Inconsistent application of new standards to existing models due to decentralized control and potentially less rigorous central oversight.
    • Strategic and Centralised: Implement processes to ensure that evolving policies and ethical considerations are systematically applied to both new and existing AI models.


4. Weaknesses in Board Reporting:

  • Description: Poorer practices involved ad-hoc reporting on a subset of AI risks or a complete absence of board-level reporting on AI strategy and risk. Better practices included periodic, holistic reporting on AI risk.
  • Implications: Insufficient board oversight can lead to a lack of strategic direction, inadequate resource allocation for AI governance, and a failure to hold management accountable for AI-related risks and outcomes.
  • Maturity Comparison:
    • Latent: No board-level consideration of AI.
    • Leveraged and Decentralised: Reporting is often ad-hoc and may not provide the board with a comprehensive view of AI risks and strategy.
    • Strategic and Centralised: Ensure periodic and comprehensive reporting to the board on AI strategy, risks, and performance.


5. Immature Oversight Mechanisms:

  • Description: While some licensees established committees for AI oversight, their effectiveness varied. Poorer practices included infrequent meetings and poorly defined mandates, limiting their ability to provide effective oversight. Better practices involved cross-functional, executive-level committees with clear responsibility and decision-making authority.
  • Implications: Weak oversight can result in a lack of proactive risk management, delayed identification and resolution of AI-related issues, and insufficient accountability for AI outcomes.
  • Maturity Comparison:
    • Latent: No specific oversight mechanisms for AI.
    • Leveraged and Decentralised: Oversight may be distributed and lack clear central coordination and authority, leading to inconsistencies.
    • Strategic and Centralised: Establish well-defined, cross-functional AI oversight bodies with executive-level representation and clear mandates.


6. Inconsistent Application of AI Ethics Principles:

  • Description: While some licensees referenced the Australian AI Ethics Principles, their application was often high-level and unclear in practice. Weaknesses were noted in considering the disclosure of AI outputs and contestability. Some relied on general codes of conduct rather than explicit AI ethics principles.
  • Implications: Increases the risk of unfair or discriminatory outcomes, erodes consumer trust due to a lack of transparency and contestability, and potentially leads to regulatory breaches.
  • Maturity Comparison:
    • Latent: No consideration of AI ethics.
    • Leveraged and Decentralised: Ethical considerations may be documented but inconsistently applied and operationalized across the AI lifecycle.
    • Strategic and Centralised: Integrate AI ethics principles into policies, procedures, and decision-making processes across the entire AI lifecycle, with specific attention to disclosure and contestability.


7. Misalignment Between Governance Maturity and AI Use:

  • Description: The maturity of governance and risk management did not always align with the scale and complexity of AI deployment. Some licensees with significant AI use had lagging governance frameworks, posing the "greatest immediate risk of consumer harm".
  • Implications: Exposes organizations and consumers to heightened risks as AI capabilities outpace the ability to manage them effectively. Undermines the safe and responsible adoption of AI.
  • Maturity Comparison:
    • Latent: Low AI use with low governance maturity; risk emerges if AI adoption increases without a corresponding governance uplift.
    • Leveraged and Decentralised: Governance may struggle to keep pace with rapidly expanding or increasingly complex AI deployments.
    • Strategic and Centralised: Proactively develop and update governance frameworks to lead and guide AI adoption, ensuring alignment between AI use and management capabilities.


8. Inadequate Governance of Third-Party AI Models:

  • Description: Many licensees relied on third-party AI models but lacked appropriate governance for managing associated risks like transparency and control. Poorer practices included the absence of dedicated third-party supplier policies for AI models.
  • Implications: Reduces the ability to understand model operation and potential biases, complicates risk assessment and monitoring, and creates dependencies on external entities with potentially different risk appetites and standards.
  • Maturity Comparison:
    • Latent: Third-party AI governance likely not considered.
    • Leveraged and Decentralised: Inconsistent application of governance principles to third-party models, potentially lacking dedicated policies and validation processes.
    • Strategic and Centralised: Establish clear policies and processes for the governance of third-party AI models, including due diligence, ongoing monitoring, and contractual requirements regarding transparency and control.
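The due diligence and ongoing monitoring described for third-party models can be operationalized as a simple assurance gate. The checklist items below are assumptions drawn from the weaknesses discussed above (transparency, validation, monitoring, contractual control), not a prescribed ASIC checklist.

```python
# Illustrative due-diligence gate for third-party AI models. The required
# items are assumptions for this sketch, not regulatory requirements.
REQUIRED_ASSURANCES = {
    "transparency_docs",         # model documentation / explainability material
    "validation_report",         # independent validation before deployment
    "monitoring_plan",           # ongoing performance monitoring
    "contractual_audit_rights",  # right to audit the supplier
}

def missing_assurances(supplied: set[str]) -> set[str]:
    """Return the governance items a third-party model still lacks."""
    return REQUIRED_ASSURANCES - supplied

# Example: a supplier has provided documentation and a validation report,
# but no monitoring plan or audit rights, so the gate should not pass.
gaps = missing_assurances({"transparency_docs", "validation_report"})
```

A deployment decision would then be conditioned on `gaps` being empty, making the absence of a dedicated third-party policy visible rather than implicit.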


Commonalities in Weaknesses:


Across ASIC's findings, several common threads emerge:

  • Reactive vs. Proactive Governance: Many licensees were updating governance in response to AI adoption rather than proactively establishing frameworks that guide and lead AI deployment.
  • Business-Centric vs. Consumer-Centric Risk Assessment: Some licensees framed risk assessments around business impact and regulatory compliance rather than potential harm to consumers arising from AI use, such as algorithmic bias and unfair outcomes.
  • Immature Consideration of Transparency and Contestability: Licensees generally showed a lack of maturity in addressing how and when to disclose AI use to consumers and in establishing mechanisms for consumers to contest AI-driven outcomes.
  • Operationalization Gaps: Even where policies existed, their practical implementation and consistent application across the AI lifecycle often presented weaknesses.


Table: Comparative Analysis of AI Governance Maturity and Weaknesses


| Feature | Latent | Leveraged and Decentralised | Strategic and Centralised |
| --- | --- | --- | --- |
| AI Strategy | Not considered | Decentralised, potentially lacking clear articulation | Clearly articulated, aligned with business objectives |
| Risk Appetite | AI not explicitly included | May not explicitly include AI | AI explicitly included |
| Ownership & Accountability | Not defined for AI specifically | Model/business unit level; senior executive owner may not exist | Clear organizational level, AI-specific committee |
| Policies & Procedures | Reliance on existing, no AI-specific ones | Iterative, fragmented, gaps possible | AI-specific, risk-based, spanning AI lifecycle |
| Ethics Principles | Not considered | Documented but inconsistently applied | Integrated into policies and operationalized |
| Board Reporting | None or ad-hoc, subset of risks | Often ad-hoc, may lack holistic view | Periodic, holistic AI risk reporting |
| Oversight Mechanisms | None | Decentralised, mandates may be unclear | Cross-functional, executive-level, clear mandate |
| AI Inventory | Lack of visibility | Fragmented records | Centralized and maintained |
| Third-Party Governance | Likely not considered | May lack dedicated policies | Clear policies and processes for validation & monitoring |
| Alignment (Gov & Use) | Low use, low maturity (potential future risk) | Broadly aligned but can lag with increased complexity | Governance leads AI use |



Advice for Drafting and Implementing Future AI Governance Frameworks:


Drawing from ASIC's findings, C-suite and senior executives should consider the following when drafting and implementing future AI governance frameworks:


  1. Establish a Clear and Articulated AI Strategy: Define the organization's objectives for AI adoption, its risk appetite, and the ethical principles that will guide its use. This strategy should inform all aspects of the AI governance framework.
  2. Implement Centralized Oversight and Accountability: Designate clear ownership and accountability for AI at a senior executive level and establish a cross-functional AI governance body with the authority to oversee AI strategy, risk management, and ethical considerations.
  3. Develop Comprehensive and Integrated AI-Specific Policies and Procedures: Translate the AI strategy and ethical principles into clear, actionable policies and procedures that span the entire AI lifecycle – from design and data acquisition to deployment, monitoring, and decommissioning. Ensure these policies are integrated with existing risk and compliance frameworks but address the unique challenges of AI.
  4. Prioritize Proactive Risk Management with a Consumer Lens: Develop processes for identifying, assessing, mitigating, and monitoring both business and consumer-specific risks associated with AI, including algorithmic bias, lack of explainability, and potential for unfair outcomes. Risk assessments should be conducted throughout the AI lifecycle and consider the impact on regulatory obligations.
  5. Embed AI Ethics and Fairness Principles: Go beyond high-level statements and ensure that AI ethics principles, including fairness, transparency, and contestability, are practically embedded into AI development and deployment processes. Establish clear guidelines on disclosure of AI use to consumers and mechanisms for addressing their concerns.
  6. Ensure Robust Governance of AI Models, Including Third-Party Solutions: Implement rigorous processes for the validation, monitoring, and review of all AI models, whether developed internally or by third parties. Establish clear contractual requirements for transparency and auditability with third-party providers.
  7. Foster Clear Visibility and Inventory Management: Implement and maintain a centralized AI inventory to track all AI use cases across the organization. This is crucial for effective oversight, risk management, and compliance.
  8. Establish Continuous Monitoring and Adaptation: Regularly review and update the AI governance framework to ensure it remains aligned with the evolving nature of AI, increasing adoption, and regulatory expectations. Implement mechanisms for ongoing monitoring of AI performance and unexpected outputs, with clear protocols for investigation and remediation.
  9. Invest in Skills and Resources: Ensure that the organization has the necessary technological and human resources with the skills and expertise to develop, deploy, govern, and oversee AI effectively, including compliance and internal audit functions.
  10. Promote Board Engagement and Reporting: Establish clear channels for regular and comprehensive reporting to the board on AI strategy, risks, performance, and ethical considerations to ensure informed oversight and accountability.
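The ongoing monitoring of AI performance and unexpected outputs recommended in point 8 can be sketched as a drift check against a baseline. The metric (shift in mean output) and the tolerance are illustrative assumptions chosen for simplicity, not a regulatory standard.

```python
import statistics

# Minimal sketch of ongoing output monitoring: flag a model for
# investigation when its recent outputs drift beyond a tolerance from a
# baseline. Threshold and metric are illustrative assumptions.
def needs_investigation(baseline: list[float],
                        recent: list[float],
                        tolerance: float = 0.1) -> bool:
    """Flag when the mean model output shifts by more than `tolerance`."""
    return abs(statistics.mean(recent) - statistics.mean(baseline)) > tolerance

# Example: a score distribution drifting from a mean of ~0.50 to ~0.65
# exceeds the 0.1 tolerance and trips the check.
baseline_scores = [0.48, 0.52, 0.50, 0.49, 0.51]
recent_scores = [0.66, 0.64, 0.65, 0.67, 0.63]
flagged = needs_investigation(baseline_scores, recent_scores)
```

In practice a flag like this would feed the investigation and remediation protocols, and the periodic board reporting, described in the recommendations above.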


By addressing these considerations, C-suite and senior executives can build robust AI governance frameworks that not only mitigate risks and ensure compliance but also foster consumer trust and enable the safe and responsible realization of AI's potential benefits within their organizations.



Harold Lucero