By the end of 2025, artificial intelligence had moved decisively from theory to practice across financial crime functions.
According to Quantifind, at major industry gatherings including Transform Finance, ACAMS chapter meetings, 1LoD forums, the FRC Leaders Convention and Money20/20, the message from regulators, practitioners and analysts was consistent: AI is no longer experimental.
It is becoming operational. This shift has prompted a more pressing question for financial institutions as they look toward the year ahead – not whether to adopt AI, but whether they are truly ready for it.
Insights shared across the 2025 conference circuit point to a clear conclusion. AI adoption is already delivering tangible value in areas such as operational efficiency, alert prioritisation and risk identification, supported by increasingly explicit regulatory guidance. At the same time, the biggest obstacles have little to do with model performance. Organisational readiness – spanning people, processes and governance – has emerged as the defining challenge for 2026.
Encouragingly, momentum behind AI adoption has accelerated throughout the year. Industry analysts have consistently reinforced this trend. McKinsey observed in 2025 that “risk and compliance teams are accelerating deployment of domain-specific AI models, supported by clearer regulatory expectations.” Deloitte’s 2025 Financial Crime Trends report found that more institutions had “moved beyond experimentation into structured AI-enabled workflows.” Forrester identified explainable AI as “a top priority for financial crime platforms in 2025,” while Gartner highlighted growing demand for “AI with transparent, traceable logic” in AML procurement decisions.
Regulators have played a significant role in legitimising this shift. FATF’s updated guidance on responsible AI in AML reaffirmed that explainable models can enhance effectiveness. The OCC’s 2025 supervisory priorities encouraged responsible AI adoption under strong oversight frameworks, and the FCA’s AI & Innovation Review stressed that explainability is essential within regulated financial services. Collectively, these signals mark 2025 as a turning point, with AI increasingly viewed as core infrastructure for modern financial intelligence units (FIUs).
Yet readiness, rather than technology, remains the principal constraint. Across multiple events involving Quantifind, the same issues surfaced repeatedly: gaps in skills, inconsistent data, weak evidence lineage, immature governance and misaligned workflows. One panellist captured the challenge succinctly: “AI governance starts with user governance.” Another warned, “You cannot operationalize what you do not understand.” As investigators transition from information gatherers to interpreters and explainers of AI-generated intelligence, many institutions have yet to equip their teams for this shift.
This readiness gap matters because the role of investigators is evolving rapidly. Tasks once focused on manual review and data collection now require judgement, validation and explanation. Regulatory expectations have followed suit. FATF’s 2025 guidance emphasised documented human oversight, the OCC highlighted training and override documentation, and the FCA focused explicitly on user competency. Basel Committee and BIS guidance similarly underscored transparency and human review as prerequisites for responsible AI deployment.
As FIUs prepare for 2026, five lessons have emerged from industry dialogue. First, AI is not synonymous with generative tools. As one speaker put it, “ChatGPT cannot run your investigations. Purpose-built models can.” Second, explainability is non-negotiable. “If you cannot show how you got the answer with AI, you will not pass an exam.” Third, legacy infrastructure remains a major blocker. “Most banks do not have an AI problem. They have a plumbing problem.” Fourth, model risk management now extends to users themselves. “Human oversight is part of the system. Train the human, not just the model.” Finally, AI literacy is becoming a source of competitive advantage. “The institutions that understand AI will outpace those that simply deploy it.”
Looking ahead, institutions that invest now in skills, governance and explainability will be best positioned to scale AI safely and effectively. If 2024 brought AI into the conversation and 2025 defined what responsible deployment looks like, 2026 will reward those that are truly ready.
Copyright © 2026 RegTech Analyst