Capital Markets AI Navigator: An Executive Briefing

24 March 2025, 6:58 PM

The AI Imperative in Capital Markets

Artificial intelligence is rapidly transforming capital markets, presenting both significant opportunities and critical challenges that demand executive attention.

Recent advancements, particularly in large language models (LLMs) and generative AI, have expanded AI applications beyond traditional areas, impacting everything from client communication to algorithmic trading and internal operations.

This newsletter summarizes IOSCO's latest findings on these developments, highlighting key use cases; the evolving risks to investor protection, market integrity, and financial stability; and the nascent steps market participants are taking to manage those risks.

Strategic leaders must understand these dynamics to navigate the changing regulatory environment, capitalize on AI's potential, and mitigate its inherent risks to ensure the long-term success and stability of their organizations.

IOSCO's ongoing work signals an increasing regulatory focus in this area, necessitating proactive engagement and strategic planning by capital market participants.


Below is a comprehensive review of AI's evolving role, inherent risks, and emerging governance in global capital markets, drawing insights from IOSCO's latest consultation report.


Introduction: Setting the Stage for AI in Finance

  • Building upon its 2021 report, IOSCO's latest consultation report addresses the significant developments in AI technologies and their expanding use in financial products and services.
  • The report underscores the potential of AI to enhance investor access, engagement, and overall market efficiency, while recognizing that AI can amplify existing risks and introduce new ones.
  • The objective of the latest report, stemming from the work of IOSCO's Fintech Task Force (FTF) and its AI Working Group (AIWG), is to foster a shared understanding among regulators regarding the issues, risks, and challenges posed by AI, viewed through the lens of investor protection, market integrity, and financial stability.
  • The findings are based on extensive research, including surveys of IOSCO members and Self-Regulatory Organizations (SROs), stakeholder engagement roundtables, and literature reviews.
  • This newsletter leverages these insights to provide an executive-level overview of the key considerations for capital market leaders.


AI Use Cases in Capital Markets: A Rapidly Expanding Horizon

AI adoption in capital markets is no longer nascent, with firms increasingly integrating these technologies across various functions.

  • Decision-Making Support: AI is prevalent in robo-advising, algorithmic trading, investment research, and sentiment analysis, aiding in more data-driven strategies. For example, AI algorithms analyze vast datasets to identify trading opportunities that human traders might miss.
  • Operational Efficiency: Recent AI advancements, particularly GenAI, are being deployed for internal process automation, including coding, information extraction, text summarization, and enhancing internal communications through chatbots. For instance, LLMs can automate the summarization of lengthy internal reports, freeing up executive time.
  • Surveillance and Compliance: Regulated firms utilize AI to enhance surveillance and compliance functions, particularly in anti-money laundering (AML) and countering the financing of terrorism (CFT) systems, as well as for fraud detection. AI can analyze transaction patterns to identify suspicious activities more effectively than traditional rule-based systems.
  • Client Interactions: Communication with clients is a significant area of AI use, including client inquiry management through chatbots and personalized marketing. AI-powered chatbots can provide instant responses to common client queries, improving efficiency and client satisfaction.
  • Specific Use Cases Highlighted by IOSCO Surveys:
    • Broker-Dealers: Predominantly use AI for communication with clients, algorithmic trading, and surveillance/fraud detection. Larger firms also leverage AI for coding and internal chatbots.
    • Asset Managers: Frequently employ AI for robo-advising/asset management and investment research, with larger firms also using it for coding, internal productivity support, and internal chatbots. AI assists in portfolio construction, risk-return assessment, and personalized investment advice generation.
    • Financial Exchanges: Primarily utilize AI for transaction processing and automation, including optimizing trade settlement. An example is Nasdaq's introduction of an AI-driven dynamic timer for order execution.
    • SROs: Integrate AI in regulatory processes to enhance data-driven applications and support compliance efforts, including document processing and advertising regulation. Future potential uses include advanced market surveillance and automated report generation.
  • Emerging Applications of Advanced AI: Firms are exploring the use of GenAI for streamlining trading strategy development, analyzing financial reports for deeper insights, creating specialized LLM platforms for financial data, and even automating the publication of investment research.


Risks, Issues, and Challenges: Navigating the Perils of AI in Finance

  • The increasing sophistication and pervasiveness of AI in capital markets introduce a complex web of risks that demand careful consideration at the highest levels.
  • Malicious Uses:
    • Cybersecurity Threats: AI can be leveraged by malicious actors to plan and execute more sophisticated cyberattacks, including enhanced phishing scams, malware generation, and the creation of manipulated identification documents. Deepfakes pose a growing threat in business compromise attacks.
      • Example: Deepfakes could be used to impersonate executives in video conferences to authorize fraudulent wire transfers.
    • Misinformation and Market Manipulation: GenAI can create and disseminate highly believable misinformation to manipulate markets and negatively impact investors.
      • Example: AI could generate fake news articles designed to artificially inflate or deflate stock prices.
  • AI Model and Data Considerations:
    • Explainability and Complexity: The "black box" nature of many advanced AI models, particularly LLMs, makes it difficult to understand and explain how they arrive at specific outputs, posing challenges for disclosure, suitability assessments, and regulatory oversight.
    • Limitations and Errors: AI models trained on historical data may not adapt to rapidly changing market conditions, leading to performance degradation. Probabilistic outputs can be inconsistent, and models can generate factually incorrect information ("hallucinations").
      • Example: An AI trading algorithm might fail to recognize and react appropriately to a sudden geopolitical event not reflected in its training data.
    • Bias: Biases inherent in training data can be perpetuated or amplified by AI models, leading to discriminatory outcomes in financial services, such as favoring certain investor groups or promoting specific products unfairly.
  • Concentration, Outsourcing, and Third-Party Dependency:
    • Reliance on a small number of technology infrastructure providers, data aggregators, and model providers creates concentration risks and potential single points of failure.
    • Outsourcing AI development and deployment introduces third-party dependencies and challenges in regulatory oversight, as most technology providers are not directly regulated. Obtaining sufficient information from vendors to assess AI risks can be difficult.
  • Insufficient Oversight and Talent Scarcity:
    • Firms may lack the in-house expertise to effectively supervise the development, implementation, and monitoring of complex AI systems.
    • Risk management and governance frameworks may struggle to keep pace with the rapid evolution of AI technologies.
  • Interconnectedness:
    • The increasing interconnectedness of financial institutions through shared AI technologies and infrastructure can amplify risks, leading to cascading failures and potential systemic instability.
    • Vulnerabilities in one AI system could potentially compromise the security of many others.
  • Herding:
    • The widespread use of common AI models and datasets by a large number of market participants could lead to homogeneous decision-making, potentially exacerbating market volatility and reducing liquidity during stress events.


How Market Participants Are Managing Risks and Governing the Development, Deployment, and Maintenance of AI Systems

Recognizing the novel challenges posed by AI, some financial institutions are actively developing and implementing risk management and governance frameworks tailored to these technologies. Key approaches include:

  • Integration into Existing Frameworks: Many firms are adapting their existing risk management structures for data, model, technology, compliance, and third-party risks to encompass AI.
  • Bespoke AI Governance: Some institutions are establishing separate AI risk management and governance frameworks with specific policies, procedures, and controls.
  • Key Features of Emerging Governance Practices:
    • Holistic Controls: Implementing controls across the organization, recognizing that AI is no longer confined to specialist teams and requires broader employee education on responsible use.
    • Interdisciplinary Teams: Forming risk management and governance groups with expertise from various organizational lines, including technical, business, legal, compliance, cybersecurity, and data privacy.
    • "Tone from the Top": Ensuring strong senior leadership involvement, often with the appointment of a "Chief AI Officer".
    • Domain Expertise: Emphasizing the need for domain experts throughout the AI lifecycle.
    • Focus on Data and Cybersecurity: Paying close attention to the quality and provenance of training data and addressing cybersecurity risks associated with AI models and their deployment.
    • Outcome-Based Analysis: Shifting towards mitigating potential negative outcomes, particularly for non-deterministic AI technologies, rather than solely focusing on meeting pre-defined requirements.
  • Risk Management Principles: Larger firms are incorporating principles such as transparency, reliability, investor protection, fairness, security, accountability, risk management and governance, and human oversight into their AI strategies.
  • Third-Party Risk Management: Firms are adapting existing third-party risk management frameworks to address the unique aspects of outsourcing AI technologies, including vendor risk assessments and contractual safeguards. However, obtaining sufficient information from vendors remains a challenge.
  • Human Oversight: The concept of "human-in-the-loop" is prevalent, with the view that AI should augment, not replace, human judgment and responsibility. However, practical challenges and risks associated with this concept are being recognized.


Responses by IOSCO Members: A Global Regulatory Landscape in Formation

IOSCO members are employing various approaches to understand, monitor, and respond to the use of AI in the financial sector.

  • Applying Existing Regulatory Frameworks: Many regulators are applying their current laws and regulations to AI activities, including those related to market conduct, consumer protection, and cybersecurity.
  • Issuing Guidance: Several jurisdictions have issued or are consulting on guidance to clarify how existing regulations apply to AI use in areas like governance, risk management, data protection, and transparency. Examples include guidance from ESMA in the EU on the use of AI in retail investment services and the CSA in Canada on the applicability of securities laws to AI systems.
  • Developing Bespoke/AI-Specific Frameworks: Some jurisdictions are implementing or considering new laws and regulations specifically to address the unique challenges of AI in finance. Japan's "AI Guidelines for Business" and Australia's consideration of whole-of-economy AI regulation are examples.
  • Regulatory Engagement: Most regulators are actively engaging with market participants through surveys, market studies, innovation hubs, and roundtables to gather information and foster dialogue. Singapore's "Project MindForge" is an example of a collaborative initiative to examine GenAI risks and opportunities.
  • Collaboration Among Authorities: Collaboration between financial regulators, central banks, and data protection agencies on AI-related issues is widespread.
  • Assessing Resources and Expertise: Many regulators are evaluating and increasing their internal resources and expertise to effectively supervise AI use in the financial sector.
  • Information Gathering & Factfinding: Numerous jurisdictions have undertaken initiatives to gather data and understand the extent and nature of AI adoption in their markets.
  • Investor Alerts and Education: Regulators are increasingly issuing investor alerts to raise awareness about AI-related investment fraud and emphasizing the importance of due diligence.


The Ongoing Evolution of AI in Capital Markets

The rapid pace of AI development and adoption necessitates continuous monitoring and adaptation by both market participants and regulators.

  • IOSCO's next phase of work will explore developing additional tools, recommendations, or considerations to assist its members in addressing the identified issues, risks, and challenges.
  • Given the diverse implications of AI across various use cases, a nuanced and potentially non-uniform regulatory approach may be required.
  • Ongoing dialogue and collaboration between regulators, industry, and other stakeholders will be crucial in navigating this evolving landscape and ensuring the responsible and beneficial use of AI in capital markets.

Harold Lucero