AI Transparency in the Australian Government

10.03.25 09:54 PM

Navigating the New Landscape

In this newsletter, we provide a comprehensive overview of the Australian Government's Artificial Intelligence (AI) Transparency Statement initiative. This mandatory requirement for non-corporate Commonwealth entities (NCEs) marks a significant step towards fostering public trust and ensuring the responsible adoption of AI across government.


Understanding the key components, obligations, and timelines associated with these statements is crucial for your agency's compliance and strategic AI planning. We will outline what these statements entail, their mandated components, the critical information they must disclose, and the recent compliance figures following the initial filing deadline.


The Imperative of AI Transparency:

The Australian Government is actively promoting the development and adoption of trusted, secure, and responsible Artificial Intelligence (AI). Recognizing the transformative potential of AI, while also acknowledging public concerns surrounding its use, the government has introduced measures to enhance transparency and accountability. A cornerstone of this approach is the requirement for specific government agencies to publish AI transparency statements.

These statements are not merely bureaucratic exercises; they serve a vital purpose in bridging the gap between the opportunities presented by AI in public service delivery and the imperative to maintain and build public confidence. By providing clear and accessible information about how agencies are using and managing AI, the government aims to demonstrate its commitment to ethical and responsible AI deployment. This initiative aligns with broader principles of transparency and integrity within the Australian Public Service (APS).


Key Mandated Components of AI Transparency Statements:

As mandated by the Digital Transformation Agency (DTA) under its Policy for the responsible use of AI in government, and further detailed in the Standard for AI transparency statements, NCEs (excluding Defence and intelligence agencies) are required to publish these statements.


Corporate Commonwealth entities are strongly encouraged to follow suit. These statements, which had an initial publication deadline of February 28, 2025, must follow a consistent format so that the public can understand and compare them across agencies.


The key mandated components that your agency's AI transparency statement must include are:

  • Intentions Behind AI Use: Clearly articulate the reasons why the agency is currently utilizing AI or is considering its adoption. This includes detailing the anticipated benefits of AI implementation, such as improvements in efficiency, accuracy, and consistency in service delivery. Agencies should explain how AI systems improve upon previous methods and why AI was chosen over non-AI alternatives. Both current and planned AI applications should be addressed.
  • Classification of AI Use: Categorize all AI applications within the agency according to the DTA's defined usage patterns and domains.
    • Usage Patterns encompass:
      • Decision making and administrative action: AI used to support or make decisions or administrative actions.
      • Analytics for insights: AI employed to identify patterns and generate insights from data.
      • Workplace productivity: AI tools used to automate tasks, manage workflows, and improve communication.
      • Image processing: AI systems that analyze images for pattern and object recognition.
    • Domains include:
      • Service delivery: AI enhancing the efficiency and accuracy of government services.
      • Compliance and fraud detection: AI identifying anomalies and patterns to detect fraud and ensure regulatory compliance.
      • Law enforcement, intelligence and security: AI supporting these functions through data analysis and prediction.
      • Policy and legal: AI analyzing legal and policy documents and aiding in policy development.
      • Scientific: AI leveraged for complex data processing, simulations, and predictions in scientific endeavors.
      • Corporate and enabling: AI supporting internal functions like HR, finance, and IT.
    • Each AI application should be classified under at least one usage pattern and one domain. Agencies are encouraged to consult and link to the DTA's resource on use classification.
  • Classification of Public-Facing AI: Specifically identify and classify instances where the public directly interacts with or is significantly impacted by AI without human intervention. This includes chatbots and automated decision-making systems. Given the sensitivity of such applications, a thorough explanation and justification for their use are required.
  • Measures to Monitor Effectiveness: Detail the governance structures and processes in place to monitor the effectiveness of deployed AI systems. This demonstrates ongoing oversight and commitment to ensuring AI achieves its intended outcomes.
  • Compliance with Legislation and Regulation: Outline how the agency ensures its AI use complies with all relevant legislation and regulations.
  • Efforts to Protect Against Negative Impacts: Describe the measures implemented to identify and mitigate potential negative impacts of AI systems on the public. This should include:
    • Processes for conducting AI impact and assurance assessments before deployment.
    • Strategies for ensuring data privacy and security, including safeguards around the use of public ("open") AI systems.
    • The role of oversight bodies and implemented review processes. For example, the Department of Industry, Science and Resources established an AI Governance Committee (AIGC) for central oversight.
    • Methods for ensuring understanding of AI systems and mitigating bias and errors.
    • Practices for monitoring and evaluating AI performance.
    • Mechanisms for controlling AI used by service providers.
    • Identification of any residual risks accepted by the agency.
  • Compliance with the Policy for Responsible Use of AI in Government: Detail how the agency is meeting each requirement stipulated in the overarching DTA policy. This includes information on staff AI training, the establishment of internal AI registers, the integration of AI considerations into existing governance frameworks (privacy, security, record keeping, etc.), participation in government-wide AI initiatives (e.g., assurance framework pilots, Microsoft Copilot trials), and the implementation of monitoring and reporting mechanisms.
  • Identification of the AI Accountable Official: Clearly state the title and contact details of the agency's accountable official responsible for the implementation of the AI policy. For instance, at the Department of Industry, Science and Resources, the Chief Information Officer holds this role.
  • Public Contact Information: Provide or direct to a dedicated public contact email address for inquiries regarding the transparency statement. For example, the Department of Industry, Science and Resources provides info@industry.gov.au.
  • Date of Last Update: Clearly indicate the date when the transparency statement was last reviewed and updated. These are living documents and require regular review.
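The classification rule above (each AI application must carry at least one usage pattern and one domain) lends itself to a simple automated check inside an agency's internal AI register. The sketch below is purely illustrative: the register structure, field names, and the chatbot example are hypothetical, not part of the DTA standard.

```python
from dataclasses import dataclass, field

# The DTA's usage patterns and domains, as listed in the section above.
USAGE_PATTERNS = {
    "decision_making_and_administrative_action",
    "analytics_for_insights",
    "workplace_productivity",
    "image_processing",
}
DOMAINS = {
    "service_delivery",
    "compliance_and_fraud_detection",
    "law_enforcement_intelligence_and_security",
    "policy_and_legal",
    "scientific",
    "corporate_and_enabling",
}

@dataclass
class AIRegisterEntry:
    """One AI application in a hypothetical internal AI register."""
    name: str
    usage_patterns: set = field(default_factory=set)
    domains: set = field(default_factory=set)
    public_facing: bool = False  # flags entries needing extra justification

def validate(entry: AIRegisterEntry) -> list:
    """Return a list of classification problems (empty if compliant)."""
    problems = []
    if not entry.usage_patterns & USAGE_PATTERNS:
        problems.append(f"{entry.name}: needs at least one valid usage pattern")
    if not entry.domains & DOMAINS:
        problems.append(f"{entry.name}: needs at least one valid domain")
    return problems

# Example: a public-facing chatbot, classified under one pattern and one domain.
chatbot = AIRegisterEntry(
    name="Website enquiry chatbot",
    usage_patterns={"workplace_productivity"},
    domains={"service_delivery"},
    public_facing=True,
)
assert validate(chatbot) == []
```

A check of this kind, run whenever the register changes, helps keep the published transparency statement consistent with the agency's actual AI inventory.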


Key Information to Disclose:

In essence, AI transparency statements must disclose how your agency is using and managing AI, your agency's commitment to safe and responsible use, and your agency's compliance with the DTA's policy. This includes providing context on the intentions behind AI adoption, detailed classifications of its use, measures for ensuring effectiveness and mitigating risks, and clear accountability mechanisms.


Agencies are encouraged to go beyond the minimum requirements and provide real-world examples of AI applications, the implemented safeguards, and the tangible public benefits derived from their use. This level of detail enhances the meaningfulness and impact of the transparency statement. Remember, the target audience is the general public, so the use of clear, plain language is paramount.


The February 2025 Filing Deadline and Compliance Status

The deadline for NCEs to publish their AI transparency statements was February 28, 2025.

By this date, these agencies were required to publish a statement on their public-facing websites outlining their approach to AI adoption, adhering to the requirements set forth by the DTA. This included all the key mandated components detailed above.


As of March 2025, six months after the Digital Transformation Agency's Policy for the responsible use of AI in government came into effect (September 1, 2024), it was reported that more than 50 NCEs had published their statements. However, approximately 40 percent of the nearly 100 agencies obligated to produce a statement missed the February deadline, leaving a significant portion of NCEs non-compliant.


Moving Forward: Ensuring Ongoing Transparency and Compliance

The publication of the initial transparency statement is not the end of the process. These are "living documents" that must be actively managed, reviewed, and updated. The Standard for AI transparency statements mandates reviews and updates at least annually, whenever significant changes occur in the agency's AI approach, or if any new factor materially impacts the accuracy of the existing statement. Accountable officials are responsible for providing the DTA with a link to the statement upon initial publication and each subsequent update.


Agencies must also establish internal mechanisms for ongoing monitoring of AI use, ensuring that the transparency statement accurately reflects all AI applications, including those embedded in common commercial products. Comprehensive governance arrangements and the establishment of internal AI registers are crucial for maintaining accurate and up-to-date transparency statements.


In Summary

The Australian Government's AI Transparency Statement initiative represents a critical step towards responsible AI adoption and building public trust. While many agencies met the initial deadline, the non-compliance of a substantial portion underscores the ongoing need for focus and effort in this area. Senior executives must ensure their agencies not only prioritize the timely publication of these statements but also establish robust processes for their ongoing review and maintenance. By embracing transparency, we can collectively foster public trust and confidence in the government's use of artificial intelligence for the benefit of all Australians.


We encourage all senior executives to familiarize themselves with the DTA's Policy for the responsible use of AI in government and the Standard for AI transparency statements to ensure full understanding and compliance.


Harold Lucero