AI Risk and Accountability: Who Is Responsible When AI Acts?

When AI makes a decision that causes harm, who is accountable? A practical framework for risk ownership, liability, and governance in AI-driven banking.

The regulatory response to AI in financial services is taking shape - unevenly, but with increasing urgency. From the EU AI Act to sector-specific guidance from the Bank of England, the Federal Reserve, and the Monetary Authority of Singapore (MAS), banks face a patchwork of emerging frameworks that will fundamentally shape how they develop, deploy, and govern AI systems. This 65-page report maps the global regulatory landscape and analyses its practical implications for banking institutions.

The report examines regulatory developments across 12 jurisdictions and assesses how banks are adapting their AI governance, model risk management, and compliance architectures to meet requirements that are still being defined.

Key Findings

  • Regulatory fragmentation is creating significant compliance complexity - banks operating across multiple jurisdictions face overlapping and occasionally contradictory requirements, with no globally harmonised framework in sight and little coordination among regulators on AI-specific rules.
  • The EU AI Act is setting the de facto global standard - its risk-based classification approach is influencing regulatory thinking worldwide, and banks with EU operations are building compliance frameworks that will likely exceed requirements in other markets.
  • Model governance requirements are expanding beyond traditional model risk - regulators are pushing banks to extend governance frameworks to cover foundation models, third-party AI services, and embedded AI components that fall outside conventional model risk management perimeters.
  • Explainability requirements are creating genuine technical challenges - regulatory expectations for AI transparency are colliding with the inherent opacity of large language models and deep learning systems, forcing banks to invest in interpretability tools and alternative model architectures.
  • Ethical AI frameworks are moving from voluntary to mandatory - fairness testing, bias monitoring, and impact assessments are shifting from corporate responsibility initiatives to hard regulatory requirements, with increasingly specific enforcement mechanisms attached.

What the Report Covers

  1. Executive Summary - The accountability gap in AI-driven banking
  2. Legal Landscape - Liability issues and evolving case law
  3. Governance Models - Responsibility frameworks and decision documentation
  4. Risk Ownership - Allocating accountability between business and technology
  5. Case Studies - Incidents, failures, and institutional responses
  6. Strategic Implications - Building accountability into AI deployment

Who Should Read This

This report is essential for chief risk officers, heads of compliance, and legal counsel navigating the emerging AI regulatory landscape. It is equally relevant for CIOs and heads of AI whose deployment strategies must account for regulatory constraints, for government affairs teams engaging with policymakers, and for industry bodies seeking to shape proportionate regulatory outcomes for the financial services sector.

For enquiries about accessing this report, contact [email protected]