Regulators warm to AI in financial crime compliance

Attitudes towards artificial intelligence in financial regulation appear to be shifting, according to new research examining how banking leaders expect regulatory approaches to evolve.

A recent report from Hawk, produced in collaboration with Chartis, suggests a growing belief across the sector that regulators will become more receptive to the use of AI, particularly in financial crime and compliance functions, over the next few years.

The report, AI in Financial Crime and Compliance: Charting the Path from Pilot to Maturity, provides a detailed snapshot of how FCC professionals are currently deploying AI and how those strategies are expected to mature. It is published in two editions, one for banking and one for payments and FinTech.

When compliance and risk leaders from banks around the world were asked how they expected regulatory attitudes to AI to change over the next two to three years, the majority anticipated a more favourable environment. Some 38% of respondents said regulators would become cautiously more supportive, signalling gradual acceptance paired with clear safeguards. A further 22% believed regulators would become significantly more supportive, actively encouraging AI adoption across regulated financial services.

Regional differences reveal where optimism is strongest. Banking leaders in Latin America expressed the highest expectations of change, with 30% predicting that regulators would become significantly more supportive of AI. North America followed closely, with 25% of respondents sharing that view.

Regulatory attitudes matter because uncertainty remains one of the biggest obstacles to broader AI adoption in financial crime and compliance. When asked to identify the main business challenges preventing further AI deployment, 73% of respondents ranked regulatory concerns or a lack of clarity around supervisory expectations as either their first or second biggest barrier.

Notably, regulatory concerns do not disappear once AI initiatives move beyond the pilot stage. Instead, they often intensify: more than a third of respondents said their regulatory concerns increased during the early stages of AI adoption, even as other challenges, such as limited skills or internal alignment, became less pressing.

As expectations of regulatory support rise, the importance of transparency and accountability in AI systems becomes increasingly clear. Explainable AI is emerging as a critical bridge between innovation and regulation, enabling banks to demonstrate how automated decisions are made and ensuring those decisions can withstand regulatory scrutiny.

With regulatory uncertainty cited by 73% of banking leaders as a major barrier to AI adoption, explainability is becoming more than a technical enhancement. It is increasingly viewed as a foundational requirement for confident, compliant AI deployment, helping financial institutions balance innovation with regulatory trust as supervisory expectations continue to evolve.

Copyright © 2025 FinTech Global
