A report from Zango AI, a compliance technology firm, has found that senior leaders across UK and European financial services are warning of a critical gap in AI governance standards that is leaving the sector exposed to systemic risk.
According to Financial Reporter, the report, titled The Future of AI Governance & Compliance in Financial Services, draws on interviews with 27 C-suite and senior leaders across risk, compliance and AI governance, as well as four industry roundtables involving 60 additional senior practitioners.
Contributors include senior figures from Santander, St James’s Place, Stripe, Standard Chartered, Lloyds Banking Group, Monzo, Allica Bank, Commerzbank, Revolut, and Ecommpay, alongside John Glen MP, a member of the Treasury Committee.
The research highlights a fundamental shift in the types of AI systems being adopted by UK financial institutions. Firms have moved away from tools producing predictable outputs towards generative and agentic systems whose outputs are context-dependent and cannot be fully validated ahead of deployment.
This transition is widening the gap between AI adoption and the governance structures designed to oversee it, with business and technology teams deploying AI at a considerably faster rate than risk and compliance functions can manage. Several institutions were found to be unable to account for all AI tools in use across their own organisations.
The scale of the threat is significant. Global fraud losses reached $579bn in 2025, with 90% of financial professionals reporting a rise in AI-enabled attacks, as criminal networks exploit the oversight gap.
The report points to a notable disparity between the UK and other jurisdictions. In February 2026, the US published a practical Financial Services AI Risk Management Framework, developed through a Treasury-led public-private collaboration involving 108 financial institutions, with input from bodies including the National Institute of Standards and Technology. Singapore’s Monetary Authority followed with its own equivalent in March. No comparable operational standard exists in the UK or EU. Without shared guidance, firms are independently resolving identical governance challenges, producing inconsistent controls and creating systemic vulnerabilities that can be exploited at scale.
The findings also coincide with the Bank of England preparing to convene the Treasury, FCA and National Cyber Security Centre to assess the risks posed by Anthropic’s Mythos model.
The report calls for practitioner-built, sector-specific implementation guidance developed with regulatory engagement, modelled on the precedent set by the Joint Money Laundering Steering Group — the industry-developed standard for financial crime compliance that carries government endorsement without being formally mandated. No equivalent framework currently exists for AI governance.
In the foreword, Lord Clement-Jones, Liberal Democrat spokesperson for science, innovation and technology in the House of Lords and co-chair of the All-Party Parliamentary Group on AI, said: “What is immediately missing is the translation of high-level regulatory principles into day-to-day operational practice. We cannot simply wait for the aftermath of the first major AI-fuelled financial scandal to force us into action.”
Zango AI CEO Ritesh Singhania said: “Compliance teams are trying to keep pace with AI systems their own colleagues have deployed, and with criminal networks scaling faster than anyone’s defences. Weak governance doesn’t just create individual risk; it creates systemic vulnerability across the entire sector. What’s missing is a shared implementation standard that gives firms a consistent basis for governing AI as they adopt it.”
Santander global chief operating officer (legal) and Zango adviser Dean Nash said: “The kinds of AI systems now being adopted across financial services don’t behave the way the systems we built our governance frameworks around behaved. They make judgements, produce different outputs in different contexts, and cannot be fully tested in advance. This poses a significant accountability problem. Right now, most firms are trying to solve it alone, without a shared standard to work from.”
Copyright © 2026 RegTech Analyst