A Deep Dive

Accelerate AI Adoption

Assessing the Right AI Governance Framework for Your Business


AI Governance for Business Leaders


AI governance is the strategic framework that ensures AI adoption aligns with ethical standards, regulatory requirements, and business objectives. For C-suite executives, it provides the structure to mitigate risks, drive compliance, and build trust in AI-driven decisions. 


Staying ahead of evolving AI regulations is essential to unlocking innovation while safeguarding your company’s reputation and operations.


Effective AI Governance

Explore AI Governance Frameworks & Their Benefits for Your Organization


The Hiroshima Protocol

Australia's membership in the Hiroshima AI Process Friends Group signals a global effort to promote the safe and responsible use of AI. The Hiroshima AI Process, initiated by the G7 under Japan's leadership, aims to create an international framework with guiding principles and a code of conduct for AI development. The Hiroshima Process International Code of Conduct provides voluntary guidance for organizations developing advanced AI systems, focusing on risk management, transparency, and ethical considerations, and the OECD has launched a reporting framework for monitoring its adoption. A CSIS report analyzes the Hiroshima AI Process Comprehensive Policy Framework, examining its potential to enhance international cooperation and regulatory interoperability, especially among G7 nations, and recommending enhancements. Japan is leading efforts to ensure AI safety while balancing regulation and innovation.


The EU AI Act

One of the most significant developments in AI regulation is the European Union Artificial Intelligence Act (EU AI Act). Key takeaways of this Act include (a short sketch of the risk tiers follows the list):

  1. Risk-based approach: AI systems are categorized based on their level of risk, with different requirements for each category.
  2. Prohibited AI practices: Certain AI applications, such as government social scoring, are prohibited.
  3. High-risk AI systems: Strict requirements are set for AI systems used in critical areas like healthcare, education, and law enforcement.
  4. Transparency requirements: Obligations to inform users when and in what form they are interacting with AI systems.
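
Because the Act's obligations flow entirely from this tiering, it can help to treat the categories as data. Below is a minimal Python sketch of how a team might triage its AI use cases against the four tiers; the tier names follow the Act's structure, but the `EXAMPLE_TRIAGE` mapping and the example systems are illustrative assumptions, not the Act's legal tests.

```python
from enum import Enum

class EUAIActRiskTier(Enum):
    """The EU AI Act's four risk tiers, from most to least restricted."""
    UNACCEPTABLE = "prohibited practice"    # e.g. government social scoring
    HIGH = "strict requirements apply"      # e.g. healthcare, education, law enforcement
    LIMITED = "transparency obligations"    # e.g. users must be told they face an AI system
    MINIMAL = "no additional obligations"   # e.g. spam filters

# Illustrative triage only -- a real classification requires legal analysis
# of the Act's annexes and prohibited-practices list, not a lookup table.
EXAMPLE_TRIAGE = {
    "social-scoring system": EUAIActRiskTier.UNACCEPTABLE,
    "medical-diagnosis assistant": EUAIActRiskTier.HIGH,
    "customer-service chatbot": EUAIActRiskTier.LIMITED,
    "email spam filter": EUAIActRiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_TRIAGE.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```

A triage table like this is only a starting point; its real value is forcing an inventory of AI use cases before each one is assessed against the Act's legal tests.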


Australia's Voluntary AI Safety Standard

The Voluntary AI Safety Standard gives practical guidance to all Australian organisations on how to use and innovate with artificial intelligence (AI) safely and responsibly. Through its Safe and Responsible AI agenda, the Australian Government is acting to ensure that AI systems developed and deployed in Australia in legitimate but high-risk settings are safe and can be relied on.

The standard consists of 10 voluntary guardrails that apply to all organisations across the AI supply chain. They include transparency and accountability requirements across that chain, and they explain what developers and deployers of AI systems must do. The guardrails help organisations benefit from AI while mitigating and managing the risks that AI may pose to organisations, people, and groups. The 10 voluntary guardrails are intentionally aligned with prominent international standards, particularly AS ISO/IEC 42001:2023, which covers AI management systems, and the US-developed NIST AI RMF 1.0 (covered in a later section), which focuses on AI risk management.
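
As a rough sketch of how an organisation might track the guardrails internally, the Python fragment below pairs a guardrail with the evidence kept for it. The `Guardrail` dataclass and its field names are our own illustrative assumptions, and the guardrail summaries are paraphrases of the standard's themes, not its official wording.

```python
from dataclasses import dataclass, field

@dataclass
class Guardrail:
    """One voluntary guardrail and the evidence an organisation keeps for it."""
    number: int
    summary: str               # paraphrase, not the standard's official text
    applies_to: list[str]      # roles in the AI supply chain
    evidence: list[str] = field(default_factory=list)

# Two examples drawn from the themes the standard names (accountability and
# transparency across the supply chain); the wording is a paraphrase.
register = [
    Guardrail(1, "Establish accountability processes, including governance and ownership of AI risk",
              applies_to=["developer", "deployer"]),
    Guardrail(6, "Inform end-users when they are interacting with, or affected by, an AI system",
              applies_to=["deployer"]),
]

for g in register:
    status = "evidence on file" if g.evidence else "no evidence yet"
    print(f"Guardrail {g.number} ({', '.join(g.applies_to)}): {g.summary} -- {status}")
```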


Australia’s AI Ethics Principles

The principles are entirely voluntary. They are designed to prompt organisations to consider the impact of using AI-enabled systems. They are intended to be aspirational and to complement, not replace, existing AI regulations and practices.

Not every principle will be relevant to your use of AI. Not every business uses AI, and not every use of AI requires comprehensive analysis against the principles. For example, many businesses use systems that may incorporate AI such as email or accounting software. This use is unlikely to be of sufficient impact to require the use of the principles. If your AI use doesn’t involve or affect human beings, you may not need to consider all of the principles.


USA - AI Risk Management Framework (AI RMF)

In collaboration with the private and public sectors, NIST has developed a framework to better manage risks to individuals, organizations, and society associated with artificial intelligence (AI). The NIST AI Risk Management Framework (AI RMF) is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
Released on January 26, 2023, the Framework was developed through a consensus-driven, open, transparent, and collaborative process that included a Request for Information, several draft versions for public comments, multiple workshops, and other opportunities to provide input. It is intended to build on, align with, and support AI risk management efforts by others.
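
The AI RMF Core organizes its guidance into four functions: Govern, Map, Measure, and Manage. As a minimal sketch of how a team might record risk work against those functions, consider the following; the `RiskEntry` structure and the sample log entries are illustrative assumptions, not part of the framework itself.

```python
from enum import Enum
from dataclasses import dataclass

class RMFFunction(Enum):
    """The four functions of the NIST AI RMF Core."""
    GOVERN = "cultivate a risk-management culture"
    MAP = "establish context and identify risks"
    MEASURE = "assess, analyze, and track risks"
    MANAGE = "prioritize and act on risks"

@dataclass
class RiskEntry:
    system: str
    function: RMFFunction
    note: str

# Illustrative log entries for a hypothetical hiring-screening model.
log = [
    RiskEntry("resume-screener-v2", RMFFunction.MAP, "identified bias risk in training data"),
    RiskEntry("resume-screener-v2", RMFFunction.MEASURE, "ran disparate-impact tests quarterly"),
    RiskEntry("resume-screener-v2", RMFFunction.MANAGE, "added human review for borderline scores"),
]

for entry in log:
    print(f"[{entry.function.name}] {entry.system}: {entry.note}")
```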

Hiroshima AI Process: Governance, Principles, and Benefits

Hiroshima Process - MindMap
The Bulletin Brief: Hiroshima AI Process

Summary

The Hiroshima AI Process (HAIP) is a G7-led initiative launched in May 2023 to foster international cooperation on the safe, secure, and trustworthy development and use of advanced Artificial Intelligence (AI) systems, particularly generative AI. It aims to establish a common global governance framework built on guiding principles and a code of conduct for organizations developing advanced AI. The initiative has expanded beyond the G7 to include a broader "Friends Group" of nations, including Australia, signaling a growing international consensus on responsible AI development.

A key element of the HAIP is the "Hiroshima AI Process Comprehensive Policy Framework," which includes guiding principles for all AI actors and a code of conduct for AI developers. The OECD is playing a crucial role in supporting the HAIP, including launching a reporting framework to monitor the application of the International Code of Conduct. While the HAIP represents a significant step forward, challenges remain in achieving interoperability between different national AI governance frameworks and ensuring the code of conduct is sufficiently specific to provide practical guidance.


Key Themes and Ideas:

  • International Cooperation and Governance: The central theme is the urgent need for international cooperation to govern the development and deployment of advanced AI. The HAIP is positioned as a leading effort in this space. "It is expected that the Hiroshima AI Process will be gathering broader support from a diverse range of actors...and will facilitate building up inclusive global governance on AI for the common good of the world." The initiative's expansion into the "Friends Group" demonstrates a commitment to inclusivity.

  • Safe, Secure, and Trustworthy AI: The overarching goal is to promote AI that is safe, secure, and trustworthy. This involves addressing potential risks and challenges associated with AI, such as bias, disinformation, and threats to human rights and democratic values. The Hiroshima AI Process International Code of Conduct for Organizations Developing Advanced AI Systems aims to promote safe, secure, and trustworthy AI worldwide and provides voluntary guidance for actions by organizations developing the most advanced AI systems.


Comprehensive Policy Framework

The HAIP Comprehensive Policy Framework is the tangible output of the process. It consists of:

  • Guiding Principles for All AI Actors: General principles for designing, developing, deploying, providing, and using advanced AI systems.
  • Code of Conduct for Organizations Developing Advanced AI Systems: Detailed guidance for AI developers focusing on risk management, stakeholder engagement, and ethical considerations. It includes 11 core principles that serve as a foundation for responsible AI governance.

Beyond the framework itself, several themes stand out:

  • OECD's Role: The OECD is a key partner in the HAIP, providing analysis, developing monitoring tools, and facilitating discussions among stakeholders. It launched the reporting framework for monitoring the application of the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems.
  • Voluntary vs. Binding Measures: The HAIP currently relies on voluntary commitments and "soft law" such as guidelines and principles. There is, however, a recognition that more legally binding regulations may become necessary, particularly for high-risk AI systems. "While compliance with these documents helps companies with risk prevention strategies and forward-looking accountability measures, there are no guarantees or enforceability measures to ensure adherence to these standards."
  • Interoperability Challenges: Achieving interoperability between different national AI governance frameworks is a significant challenge. Regulatory inconsistency forces global businesses to navigate complex legal landscapes and varying rights and obligations across key markets. The HCOC holds promise as a unifying mechanism, bridging these disparities and promoting interoperability.
  • The Importance of Specificity: The HCOC is seen as lacking the specificity needed to provide truly effective guidance for practical implementation. Future discussions among G7 leaders should focus on how the HCOC may be updated to ensure interoperability of rules for advanced AI systems across not only G7 countries but also the global community.
  • Risk Management and Governance: There is a strong emphasis on risk management throughout the AI lifecycle. Organizations should take appropriate measures throughout the development of advanced AI systems, including prior to and throughout their deployment and placement on the market, to identify, evaluate, and mitigate risks.
  • Stakeholder Engagement and Transparency: Openness and collaboration are seen as crucial for building trust in AI. Organizations should work towards responsible information sharing and reporting of incidents among organizations developing advanced AI systems, including with industry, governments, civil society, and academia.
  • Ethical and Societal Considerations: AI development should be aligned with ethical standards, human rights, and democratic values. Organizations should also prioritize the development of advanced AI systems to address the world's greatest challenges, notably but not limited to the climate crisis, global health, and education.
  • The Need for Updates: The Code of Conduct will be reviewed and updated as necessary, including through ongoing inclusive multistakeholder consultations, to ensure it remains fit for purpose and responsive to this rapidly evolving technology.

Key Quotes:

  • "the Hiroshima AI Process was launched in May 2023, following the Leaders’ direction at the G7 Hiroshima Summit, with the objective of discussing the opportunities and risks of the technology."
  • "the Hiroshima AI Process Comprehensive Policy Framework...was the first policy package the democratic leaders of the G7 have agreed upon to effectively steward the principles of human-centered AI design, safeguard individual rights, and enhance systems of trust."
  • "The G7 is a group of democratic nations... This shared commitment to democratic principles facilitates a focus on common values, equipping the HAIP to serve as a key foundation not just for safety, but also for realizing fundamental values such as human rights, democracy, and the rule of law in the development and implementation of advanced AI systems."
  • "The HCOC serves as a central reference point in the evolving global landscape of AI governance."
  • "Future discussions among G7 leaders should focus on how the HCOC may be updated to ensure interoperability of rules for advanced AI systems across not only G7 countries but also the global community."

Implications:

  • The HAIP has the potential to shape the future of global AI governance.
  • Companies developing advanced AI systems need to be aware of the HAIP's principles and code of conduct.
  • Governments will likely use the HAIP as a reference point for developing their own AI regulations.
  • Further work is needed to address the challenges of interoperability and ensure the HAIP's effectiveness.

Next Steps:

  • Continue to expand the HAIP Friends Group to include more countries.
  • Refine the HCOC to make it more specific and practical.
  • Develop mechanisms for monitoring and enforcing compliance with the HCOC.
  • Promote international collaboration on AI safety research and standards.
  • Address the ethical and societal implications of AI.




The Protocol's Benefits to Your Organisation

Below we outline several benefits for organizations that adopt the Hiroshima AI Process, particularly the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems (HCOC). 

These benefits include:

  • Enhanced Risk Management and Governance: Organizations can leverage the HCOC to implement comprehensive risk management practices throughout the AI lifecycle. This includes identifying and mitigating vulnerabilities, managing misuse, and establishing transparency regarding governance and risk management policies. By adhering to the HCOC, organizations can proactively address potential safety, security, and trustworthiness issues associated with their AI systems.
  • Improved Stakeholder Engagement: The HCOC emphasizes the importance of transparency and multistakeholder engagement. Organizations are encouraged to publicly report their AI systems' capabilities, limitations, and appropriate and inappropriate uses. Furthermore, the HCOC promotes responsible information sharing among organizations, industry, governments, civil society, and academia regarding potential risks, incidents, and best practices. This fosters trust and collaboration among stakeholders.
  • Alignment with Ethical and Societal Considerations: By following the HCOC, organizations can ensure that their AI development and deployment align with human rights, democratic values, and efforts to address global challenges. The HCOC encourages prioritizing research for societal safety, focusing on key risks such as upholding democratic values, respecting human rights, and protecting vulnerable groups. Additionally, it promotes the development of AI systems to address global challenges like climate change, health, and education, aligning with the UN Sustainable Development Goals.
  • Contribution to International Standards: The HCOC encourages organizations to contribute to the development and adoption of international technical standards. This includes practices to promote transparency by allowing users to identify AI-generated content (e.g., watermarking), testing methodologies, and cybersecurity policies. By participating in the development of these standards, organizations can help shape the future of AI governance and ensure interoperability across different frameworks.
  • Proactive Risk Mitigation: Compliance with the HCOC can serve as a forward-looking risk mitigation strategy, anticipating further regulation and demonstrating a commitment to responsible AI development. This can be particularly valuable for organizations operating in jurisdictions with evolving AI governance frameworks.
  • Competitive Advantage: As the HCOC gains recognition and adoption, organizations that adhere to its principles may gain a competitive advantage. Demonstrating a commitment to safe, secure, and trustworthy AI can enhance an organization's reputation, attract customers and investors, and facilitate partnerships.
  • Access to a Network of Expertise: The Hiroshima AI Process involves various stakeholders, including governments, private sector entities, academia, and civil society. By participating in the process and adopting the HCOC, organizations gain access to a network of expertise and can collaborate with others to advance responsible AI development.
  • Opportunity to Shape Future Updates: The HCOC is designed to be a living document that is reviewed and updated as necessary. Organizations that adopt the HCOC have the opportunity to contribute to its evolution and ensure that it remains relevant and effective in addressing the challenges of advanced AI systems.
  • Harmonized AI Governance Framework: The HCOC serves as a pivotal instrument for enhancing interoperability between various AI governance frameworks. There is already significant overlap between the core elements of G7 nations' regulatory documents and the HCOC. The HCOC can be integrated into a jurisdiction's regulatory framework, and G7 nations are poised to either introduce new regulations or revise existing structures on AI governance, opening a window to integrate HCOC principles into new regulatory waves.

Hiroshima AI Code of Conduct: Actionable Items

The Hiroshima AI Process International Code of Conduct (HCOC) aims to promote safe, secure, and trustworthy AI. It provides voluntary guidance for organizations developing advanced AI systems, including the most advanced foundation models and generative AI systems. The HCOC is a living document that builds on the existing OECD AI Principles.


The 11 guiding principles, or actionable items, below can be divided into three groups:

  • Risk management and governance: Focuses on actions to assess and mitigate risks associated with AI systems to a level deemed acceptable by stakeholders.
  • Stakeholder engagement: Focuses on ensuring clear communication and accountability to all relevant stakeholders.
  • Ethical and societal considerations: Focuses on ensuring the development, deployment, and usage of AI align with ethical standards and societal values.


The HCOC's 11 actionable items for organizations developing advanced AI systems are as follows (a sketch that tags each item with its group follows the list):

  • Risk identification and mitigation: Implement rigorous testing throughout the AI lifecycle, such as red-teaming, to identify and address potential safety, security, and trustworthiness issues. Testing should take place in secure environments and be performed at several checkpoints throughout the AI lifecycle, in particular before deployment and placement on the market. Testing measures should devote attention to chemical, biological, radiological, and nuclear risks; offensive cyber capabilities; risks to health and safety; risks from models of self-replication; societal risks; and threats to democratic values and human rights.
  • Vulnerability and misuse management after deployment: Monitor post-deployment for vulnerabilities, incidents, emerging risks, and misuse, and take appropriate action to address these. Organizations are encouraged to consider facilitating third-party and user discovery and reporting of issues and vulnerabilities after deployment such as through bounty systems, contests, or prizes to incentivize the responsible disclosure of weaknesses.
  • Transparency and accountability: Publicly report advanced AI systems’ capabilities, limitations, and domains of appropriate and inappropriate use to support ensuring sufficient transparency, thereby contributing to increase accountability. This includes publishing transparency reports containing meaningful information for all new significant releases of advanced AI systems.
  • Responsible information sharing: Encourage organizations to share information on potential risks, incidents, and best practices with each other, including industry, governments, academia, and the public.
  • AI governance and risk management policies: Develop, implement, and disclose AI governance and risk management policies, grounded in a risk-based approach, including privacy policies and mitigation measures.
  • Security investments: Invest in and implement robust security controls, including physical security, cybersecurity, and insider threat safeguards across the AI lifecycle.
  • Content authentication: Develop and deploy reliable content authentication and provenance mechanisms, where technically feasible, such as watermarking or other techniques to enable users to identify AI-generated content.
  • Research prioritization for societal safety: Prioritize research to mitigate societal, safety, and security risks and prioritize investment in effective mitigation measures.
  • AI for global challenges: Prioritize the development of advanced AI systems to address the world’s greatest challenges, notably but not limited to the climate crisis, global health, and education.
  • International technical standards: Advance the development of, and where appropriate the adoption of, international technical standards.
  • Data input measures and protections: Implement appropriate data input measures and protections for personal data and intellectual property.
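
To make the three groupings actionable, the sketch below tags each of the 11 items with its group so a compliance team could filter its backlog. The short labels and the group assignments reflect our reading of the list above, not an official HCOC mapping.

```python
HCOC_ITEMS = {
    # item (short label): group -- grouping is our reading, not an official mapping
    "risk identification and mitigation": "risk management and governance",
    "vulnerability and misuse management": "risk management and governance",
    "transparency and accountability": "stakeholder engagement",
    "responsible information sharing": "stakeholder engagement",
    "governance and risk management policies": "risk management and governance",
    "security investments": "risk management and governance",
    "content authentication": "stakeholder engagement",
    "research prioritization for societal safety": "ethical and societal considerations",
    "AI for global challenges": "ethical and societal considerations",
    "international technical standards": "ethical and societal considerations",
    "data input measures and protections": "risk management and governance",
}

def items_in_group(group: str) -> list[str]:
    """Return the actionable items belonging to one of the three groups."""
    return [item for item, g in HCOC_ITEMS.items() if g == group]

for group in sorted(set(HCOC_ITEMS.values())):
    print(f"{group}: {len(items_in_group(group))} items")
```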

G7 AI Governance as Common Guidance

The Hiroshima AI Process International Code of Conduct (HCOC) serves as a central reference point in the evolving global landscape of AI governance. It aims to promote safe, secure, and trustworthy AI, providing voluntary guidance for organizations developing advanced AI systems.


Key developments and perspectives on AI governance, using the HCOC as a reference:

  • Canada: Formulating a comprehensive regulatory framework for AI through the Artificial Intelligence and Data Act (AIDA), part of Bill C-27, which prioritizes risk mitigation for "high-impact" AI systems. Canada has also published non-binding guidelines in a Voluntary Code of Conduct for Responsible Development and Management of Advanced Generative AI Systems. AIDA has the potential to translate the HCOC's principles into enforceable regulations.
  • European Union: Has been at the forefront of AI regulation with the AI Act, passed in March 2024. The AI Act sets a framework for trustworthy AI development and implementation, emphasizing a risk-based regulatory approach and mandating the development of codes of practice that align with international standards. The European Union acknowledges the influence of international standards in shaping these codes of practice, presenting an opportunity to integrate or reference the HCOC in the EU AI governance framework.
  • Japan: Emphasizes maximizing the positive societal impacts of AI through a risk-based and agile governance model. Japan takes a sector-specific approach, promoting AI implementation through regulatory reforms tailored to specific industries and markets. It has also launched the AI Guidelines for Business, a voluntary AI risk management tool that already integrates the HIGP and HCOC principles.
  • United Kingdom: Is developing a decentralized regulatory approach focusing on sector-specific guidelines, a pro-innovation stance, and public-private collaboration through specialized AI institutions. While the United Kingdom is not enforcing a comprehensive AI law or drafting a central code of conduct, it emphasizes traditional AI governance principles such as safety, security, transparency, and fairness to inform its sector-driven regulations. The United Kingdom might leverage the HCOC and its international scope to inform potential regulatory initiatives.
  • United States: Has adopted a decentralized, multi-tiered regulatory strategy for AI governance, with specialized agencies overseeing sector-specific regulations. Key initiatives include the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The HCOC’s emphasis on responsible risk management and governance aligns with the United States’ principles-based trajectory and could fit into proposed risk mitigation legislation, positioning the HCOC as a crucial reference in shaping AI regulatory policy in the United States.


Despite sharing common principles and core values, the AI governance landscape across the G7 is complex and multifaceted. The HCOC holds promise as a unifying mechanism, bridging regulatory disparities and promoting interoperability.

Timeline of Hiroshima AI Process
  • Early 2010s: Deep learning breakthroughs lead to increased AI adoption across various industries and sectors.
  • Mid-2010s: Growing awareness of potential AI risks leads to publications of policy and principle documents by governments, international organizations, tech companies, and nonprofits.
  • 2019: The European Union publishes the "Ethics Guidelines for Trustworthy AI". The OECD publishes the Recommendation of the Council on Artificial Intelligence.
  • 2021: The European Commission introduces the draft AI Act. UNESCO publishes the Recommendation on the Ethics of Artificial Intelligence.
  • 2022: Canada presents the legislative proposal, the Artificial Intelligence and Data Act (AIDA).
  • January 2023: The National Institute of Standards and Technology (NIST) in the US publishes the AI Risk Management Framework (RMF).
  • May 2023: The G7 Hiroshima Summit is held under Japan's presidency. Leaders confirm the need for generative AI governance and agree to establish the Hiroshima AI Process. The Hiroshima AI Process is launched.
  • August 2023: The Chinese government implements the Interim Measures for the Administration of Generative Artificial Intelligence Services.
  • November 2023: The AI Safety Summit is held in the UK. The Bletchley Declaration is endorsed by 29 countries and regions.
  • December 2023: The G7 Digital and Tech Ministers agree on the "Hiroshima AI Process Comprehensive Policy Framework," which is endorsed by the G7 Leaders. The International Organization for Standardization (ISO) publishes the AI management system standard ISO/IEC 42001. The UN AI Advisory Body issues the interim report Governing AI for Humanity.
  • February 9, 2024: The Government of Japan publishes an article on the Hiroshima AI Process, emphasizing its role in shaping inclusive governance for generative AI.
  • February 2024: Japan's Liberal Democratic Party proposes the concept note for the Basic Law for the Promotion of Responsible AI.
  • March 2024: The European Parliament approves the AI Act. The Council of Europe’s Ad Hoc Committee on Artificial Intelligence introduces the Draft Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law (AI Treaty).
  • April 2024: NIST releases a draft Generative AI Profile for the AI RMF, developed by its Generative AI Public Working Group, for public review.
  • May 2, 2024: Japanese Prime Minister Kishida Fumio announces the launch of the Hiroshima AI Process Friends Group at the OECD Ministerial Council Meeting in Paris.
  • May 3, 2024: Australia joins the Hiroshima AI Process Friends Group.
  • May 24, 2024: The Center for Strategic and International Studies (CSIS) publishes the report "Shaping Global AI Governance: Enhancements and Next Steps for the G7 Hiroshima AI Process".
  • September 6, 2024: Cambodia joins the Hiroshima AI Process Friends Group.
  • October 15, 2024: The G7 Digital and Tech Ministerial Meeting is held.
  • December 2, 2024: Viet Nam joins the Hiroshima AI Process Friends Group.
  • December 26, 2024: The G7 finalizes the “Reporting Framework” for the International Code of Conduct, with operationalization planned for early 2025.
  • February 6, 2025: The OECD publishes a mapping of AI safety solutions.
  • February 7, 2025: The OECD launches the reporting framework for monitoring the application of the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems.
  • February 14, 2025: The OECD publishes information on artificial intelligence and intellectual property.
  • April 15, 2025: Deadline for organizations to submit initial reports using the Reporting Framework.

Key People Involved

  • Kishida Fumio: Japanese Prime Minister. Announced the launch of the Hiroshima AI Process Friends Group.
  • MATSUO Yutaka: Professor of the Graduate School of Engineering at the University of Tokyo, and Chair of the Government of Japan’s AI Strategy Council. Emphasizes the importance of speed in addressing AI governance.
  • Audrey Plonk: Directorate for Science, Technology and Innovation, OECD.
  • Karine Perset: OECD, Acting Head of the OECD Division on AI and Emerging Digital Technologies.
  • Sara Fialho Esposito: OECD, Policy Analyst.
  • Hiroki Habuka: Senior associate (non-resident) of the Wadhwani Center for AI and Advanced Technologies at the Center for Strategic and International Studies (CSIS) in Washington, D.C., and a research professor at Kyoto University Graduate School of Law. Co-author of the CSIS report.
  • David U. Socol de la Osa: Assistant professor at the Hitotsubashi Institute for Advanced Study and Graduate School of Law at Hitotsubashi University. Co-author of the CSIS report.

FAQ - Hiroshima AI Process

What is the Hiroshima AI Process?

The Hiroshima AI Process is an international initiative launched by the G7 in May 2023, under Japan's presidency, aimed at fostering safe, secure, and trustworthy AI. It seeks to establish common ground for the responsible development and use of AI, particularly advanced AI systems like generative AI. It's designed to maximize AI's innovative opportunities while mitigating risks and challenges. It is the first international framework of its kind, intended to drive inclusive global governance on AI.

What is the Hiroshima AI Process Comprehensive Policy Framework?

The Comprehensive Policy Framework is the primary output of the Hiroshima AI Process, agreed upon in December 2023. It serves as the world's first international framework aimed at promoting safe, secure, and trustworthy advanced AI systems. It includes the "Hiroshima Process International Guiding Principles for All AI Actors" and the "Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems."

What are the Hiroshima Process International Guiding Principles for All AI Actors (HIGP)?

The HIGP are a set of 12 general principles applicable to all participants involved in the AI lifecycle, from developers to users. They emphasize responsible AI development and use, covering aspects such as risk management, stakeholder engagement, and ethical and societal considerations. The principles aim to ensure that AI development aligns with human rights, democracy, and sustainability.

What is the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems (HCOC)?

The HCOC translates the HIGP into a more specific code of practice for organizations developing and deploying advanced AI systems. It outlines actionable items on risk management, stakeholder engagement, and ethical considerations. It is designed to provide a comprehensive roadmap for AI processes and risk mitigation, emphasizing transparency, accountability, and the need to address societal, safety, and security risks.

What is the purpose of the HCOC reporting framework?

The reporting framework, launched by the OECD, provides a standardized approach for organizations to demonstrate their alignment with the HCOC actions. It allows companies to offer transparent and comparable information about their AI risk management practices, risk assessment, incident reporting, and information-sharing mechanisms. This initiative aims to build global trust through standardized reporting and continuous improvement in AI development practices.
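
As a sketch of what standardized, comparable reporting could look like in practice, the structure below captures one organization's submission as machine-readable data. The field names are hypothetical; the OECD's actual reporting template defines its own questions and format.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class HCOCReport:
    """Hypothetical shape of one organization's HCOC transparency report.

    Field names are illustrative; the OECD reporting framework defines
    the real template and questions.
    """
    organization: str
    reporting_period: str
    risk_assessment_summary: str
    incidents_reported: int
    information_sharing_channels: list[str]

report = HCOCReport(
    organization="Example AI Labs",
    reporting_period="2025-H1",
    risk_assessment_summary="Red-teaming before each major release; external audit annually.",
    incidents_reported=2,
    information_sharing_channels=["industry ISAC", "national AI safety institute"],
)

# Serializing to JSON gives stakeholders a comparable, machine-readable record.
print(json.dumps(asdict(report), indent=2))
```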

How does the Hiroshima AI Process promote international cooperation on AI governance?

The Hiroshima AI Process promotes international cooperation by providing a platform for G7 nations, and now an expanded group through the Hiroshima AI Process Friends Group, to align on AI governance principles and best practices. It encourages collaboration across multilateral forums like the OECD and the UN. By establishing common ground and fostering dialogue, it aims to harmonize international rules and reduce redundancy in reporting requirements.

What are some key actions organizations should take to adhere to the HCOC?

Organizations should take measures including identifying and mitigating risks across the AI lifecycle, publicly reporting advanced AI systems' capabilities and limitations, sharing information responsibly on incidents, implementing AI governance and risk management policies, investing in robust security controls, developing content authentication mechanisms, prioritizing research to mitigate societal risks, developing AI to address global challenges, and implementing data protection measures.

What is the Hiroshima AI Process Friends Group?

The Hiroshima AI Process Friends Group broadens the Hiroshima AI Process beyond G7 members. It includes 49 countries and regions (primarily OECD members) and aims to promote global access to safe, secure, and trustworthy generative AI. This group supports the implementation of international guidelines and codes of conduct outlined in the Comprehensive Policy Framework, ensuring that diverse voices, particularly from the Indo-Pacific region, contribute to the global conversation on AI.