FCA launches AI Live Testing to boost AML innovation

Napier AI: preparing for FCA AI Live Testing

Artificial intelligence is rapidly shifting from concept to application across financial services, with anti-money laundering (AML) compliance among the areas set to benefit most.

To support safe adoption, the UK’s Financial Conduct Authority (FCA) will launch AI Live Testing in September 2025. According to Napier AI, the initiative provides a unique opportunity for firms to trial new solutions in a secure, regulated environment while building trust with supervisors.

AI Live Testing is designed to go beyond traditional sandbox exercises. Instead of focusing on approvals or compliance box-ticking, it offers a collaborative framework where firms and regulators can jointly evaluate AI systems under real-world conditions. This includes identifying risks early, adapting controls dynamically, and sharing insights that can inform industry-wide best practices.

Napier AI noted that the programme complements the FCA’s Supercharged Sandbox, creating a complete innovation journey from early experimentation through to deployment. For firms already working on technologies such as transaction monitoring, sanctions screening, or behavioural analytics, this provides a pathway to accelerate adoption with greater clarity on regulatory expectations.

At the same time, Napier AI emphasised that firms must also strengthen internal readiness to make the most of AI Live Testing. The first step is assessing data maturity. Since AI depends on high-quality inputs, organisations need to evaluate whether their datasets are accurate, complete, and consistent. Data validation and auditability are crucial to ensure outputs remain reliable. Napier AI’s own work with the FCA on synthetic data demonstrates how privacy-compliant datasets can help address these challenges.
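The kind of data-quality checks involved can be illustrated with a short script. The sketch below, using pandas, flags missing values, duplicate identifiers, and implausible amounts in a transaction dataset; the column names and reference values are hypothetical and are not drawn from Napier AI's or the FCA's work.

```python
# Minimal sketch of pre-adoption data-quality checks, assuming a pandas DataFrame
# of transactions; the column names (txn_id, amount, currency, timestamp,
# counterparty_id) are illustrative, not a Napier AI or FCA specification.
import pandas as pd

REQUIRED_COLUMNS = ["txn_id", "amount", "currency", "timestamp", "counterparty_id"]

def assess_data_maturity(df: pd.DataFrame) -> dict:
    """Return simple accuracy, completeness and consistency indicators."""
    report = {}
    # Completeness: share of missing values per required field that is present.
    report["missing_ratio"] = {c: float(df[c].isna().mean()) for c in REQUIRED_COLUMNS if c in df}
    report["missing_columns"] = [c for c in REQUIRED_COLUMNS if c not in df.columns]
    # Consistency: duplicate transaction identifiers and unrecognised currency codes.
    report["duplicate_txn_ids"] = int(df["txn_id"].duplicated().sum())
    report["unknown_currencies"] = sorted(set(df["currency"]) - {"GBP", "EUR", "USD"})
    # Accuracy proxy: non-positive amounts that should never occur in this feed.
    report["non_positive_amounts"] = int((df["amount"] <= 0).sum())
    return report

if __name__ == "__main__":
    sample = pd.DataFrame({
        "txn_id": ["t1", "t2", "t2"],
        "amount": [120.0, -5.0, 300.0],
        "currency": ["GBP", "XXX", "EUR"],
        "timestamp": pd.to_datetime(["2025-01-01", "2025-01-02", None]),
        "counterparty_id": ["c1", None, "c3"],
    })
    print(assess_data_maturity(sample))
```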

The second step is conducting a comprehensive financial crime risk assessment. This helps firms identify where AI can have the greatest impact, from improving transaction monitoring efficiency to enabling real-time sanctions checks. Understanding the organisation’s threat landscape also guides vendor selection and technology implementation.
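As a simple illustration of how such an assessment might feed prioritisation, the snippet below ranks hypothetical AI use cases by a combined threat-exposure and impact score; both the use cases and the scores are placeholders rather than a prescribed methodology.

```python
# Illustrative prioritisation of candidate AI use cases from a financial crime
# risk assessment; names and scores are hypothetical examples only.
use_cases = [
    # (use case, threat exposure 1-5, expected efficiency impact 1-5)
    ("transaction monitoring triage", 5, 4),
    ("real-time sanctions screening", 4, 5),
    ("behavioural analytics for mule accounts", 3, 3),
]

# Rank by the product of exposure and impact so the highest-risk, highest-gain
# areas are considered first for piloting.
ranked = sorted(use_cases, key=lambda uc: uc[1] * uc[2], reverse=True)
for name, exposure, impact in ranked:
    print(f"{name}: priority score {exposure * impact}")
```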

Integration does not require replacing legacy infrastructure. Napier AI highlighted that modern AI solutions are typically modular, designed to sit on top of existing systems. This approach allows firms to enhance detection and efficiency through machine learning, natural language processing, or behavioural analytics without major system overhauls.
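One way to picture this modular approach is an AI scoring layer that re-prioritises alerts already produced by a legacy rules engine rather than replacing it. The sketch below assumes a simplified Alert record and a hand-weighted stand-in for a trained model; it illustrates the integration pattern, not any vendor's actual API.

```python
# Hypothetical sketch of a modular AI layer sitting on top of an existing
# rules-based transaction monitoring system. The Alert structure and the
# scoring weights are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class Alert:
    alert_id: str
    amount: float
    country_risk: float   # 0.0 (low) .. 1.0 (high), from existing reference data
    rule_hits: int        # number of legacy rules triggered

def ml_risk_score(alert: Alert) -> float:
    """Stand-in for a trained model; weights here are purely illustrative."""
    score = 0.4 * min(alert.amount / 10_000, 1.0)
    score += 0.4 * alert.country_risk
    score += 0.2 * min(alert.rule_hits / 5, 1.0)
    return round(score, 3)

def triage(alerts: list[Alert], threshold: float = 0.7) -> dict:
    """Split legacy alerts into escalate/review queues using the AI score."""
    escalate = [a.alert_id for a in alerts if ml_risk_score(a) >= threshold]
    review = [a.alert_id for a in alerts if ml_risk_score(a) < threshold]
    return {"escalate": escalate, "review": review}

print(triage([Alert("A1", 12_500, 0.9, 3), Alert("A2", 200, 0.1, 1)]))
```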

Robust governance is another key consideration. In the absence of AI-specific FCA rules, Napier AI recommended leaning on established model risk management frameworks. This includes independent validation, clear audit trails, explainability, and policies addressing fairness, bias mitigation, and human oversight. Documented risk assessments should set out the AI system’s purpose, risks, mitigations, and acceptable residual risks.
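Such documentation lends itself to a structured, auditable form. The sketch below captures the elements listed above, purpose, risks, mitigations, residual risk, validation and oversight, in a simple record; the field names are illustrative and do not reflect an FCA or Napier AI template.

```python
# Minimal sketch of a documented AI risk assessment held as structured data.
# Field names and example values are hypothetical, not a regulatory template.
from dataclasses import dataclass, field, asdict
from datetime import date
from typing import Optional
import json

@dataclass
class AIModelRiskRecord:
    model_name: str
    purpose: str
    owner: str
    risks: list = field(default_factory=list)
    mitigations: list = field(default_factory=list)
    residual_risk: str = "not assessed"
    independent_validation_date: Optional[date] = None
    human_oversight: str = "all escalations reviewed by an analyst"

record = AIModelRiskRecord(
    model_name="transaction-monitoring-scorer",
    purpose="Prioritise alerts from the existing rules engine",
    owner="Financial Crime Compliance",
    risks=["false negatives on novel typologies", "bias against customer segments"],
    mitigations=["parallel run against legacy rules", "quarterly bias testing"],
    residual_risk="low, accepted by model risk committee",
    independent_validation_date=date(2025, 6, 30),
)
print(json.dumps(asdict(record), default=str, indent=2))
```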

Training is equally important. Napier AI pointed out that AI reshapes workflows and roles, meaning compliance teams must be prepared to interpret and challenge AI outputs. Human judgment remains central to ensuring accountability.

Finally, firms need to measure the return on investment. According to Napier AI, efficiency gains, stronger risk mitigation, and scalability are key benefits of responsible adoption. Costs can be managed by piloting AI in high-impact areas, using parallel testing alongside existing systems, and working with vendors offering phased integration.
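Parallel testing in particular can be summarised with a straightforward comparison of the two systems' outputs over the same period. The sketch below compares alert volumes and confirmed-case coverage between the legacy system and an AI pilot; all identifiers and counts are placeholder data, not reported results.

```python
# Hedged sketch of a parallel test: run the AI pilot alongside the existing
# system on the same alerts and compare workload and coverage. All data below
# is placeholder material for illustration only.
def parallel_test_summary(legacy_alerts: set, ai_alerts: set, confirmed_cases: set) -> dict:
    """Compare alert volume and how many confirmed cases each approach caught."""
    return {
        "legacy_alert_volume": len(legacy_alerts),
        "ai_alert_volume": len(ai_alerts),
        "legacy_cases_caught": len(legacy_alerts & confirmed_cases),
        "ai_cases_caught": len(ai_alerts & confirmed_cases),
        "alerts_only_in_ai": len(ai_alerts - legacy_alerts),
        "alerts_only_in_legacy": len(legacy_alerts - ai_alerts),
    }

legacy = {"a1", "a2", "a3", "a4", "a5"}
ai = {"a2", "a3", "a6"}
confirmed = {"a3", "a6"}
print(parallel_test_summary(legacy, ai, confirmed))
```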
