Building better AI: a governance framework for finance

Financial institutions racing to adopt artificial intelligence risk replicating other organisations’ mistakes rather than solving their own problems — that was one of the central warnings to emerge from a recent webinar hosted by RegTech firm Hawk in partnership with ACAMS.

The panel brought together senior voices from across compliance, risk advisory, and financial crime prevention, including ING global head of financial crime compliance for investment banking Adrianna Fabijanska, Wintrust Financial Corporation VP of compliance technology product management Michael Morrison, and Grant Thornton (US) partner in risk advisory services Kyle Daddio, moderated by Hawk senior product marketing manager Erica Brackman.

What good AI actually looks like

Morrison reframed what it means for an AI model to be “good,” arguing it goes far beyond accuracy. He said, “It’s a lifecycle with clear ownership. If you can’t explain how your model is performing today versus six months ago, it’s not good, no matter how sophisticated it is. Good AI isn’t just accurate, it’s operationally embedded and defensible. This starts at the point of selecting the right AI model by establishing what problems you’re trying to solve with it.”

Fabijanska stressed that data quality underpins everything. She said, “I strongly believe that data quality is the bedrock for deploying AI solutions in an organization. Poor data equals poor AI. If you want to successfully deploy AI, structure your data and work on your data lineage. That will save you from having to explain false positives that wouldn’t have occurred had the data been cleansed at the start of the process.”

Avoiding the ‘copycat league’

Daddio raised a concern that has become increasingly prevalent — organisations blindly mimicking peers rather than assessing their own needs. He said, “Everybody is talking about what good AI looks like, but we’re also seeing what bad AI looks like. It has almost become a copycat league. People hear that another institution has implemented AI for transaction monitoring or sanctions screening, and suddenly they need to figure out what that institution did and replicate it. What really ends up happening is you’re doing what was good for somebody else, not what’s good for your organization.”

His prescription was a more deliberate, strategy-led approach. Daddio said, “Take a step back, set your goals, get the board involved, and understand where you want to be in three or five years’ time. It’s far more valuable than reacting out of fear that other institutions are implementing AI and you’ll miss the boat.”

Building a defensible governance framework

Morrison challenged the perception that rigorous oversight slows progress. He said, “Governance often gives the impression that it slows things down. However, it also makes sure things are sustainable and manageable long term so you’re not flying too close to the sun. One typical piece of documentation to include is a clear purpose statement for the model: why are we doing it and what problem is it solving?”

Fabijanska warned that concentrating model knowledge in a single individual creates a critical vulnerability during regulatory scrutiny. She said, “Just as much as the person who designed the model knows how it works, if an analyst can’t explain why they’re making the decision they are — or if an examiner comes and asks a question and there’s only one person who can answer it — the AI you’ve designed is flawed. It lacks the right explainability and documentation to effectively communicate that organizational literacy.”

Morrison suggested starting small to build regulator confidence. He said, “One of the things to pursue is identifying AI solutions with specific, smaller business cases. So, solving a specific challenge at lower complexity and lower risk. I feel that working through smaller cases first is where you’ll find all the landmines, and find them on a smaller scale, so you can apply those lessons as you take on larger initiatives. It helps to build confidence with regulators and auditors since you’re being cautious and reasonable, rather than diving in headfirst.”

Hawk argues that bridging the gap between data science and compliance strategy requires technology that automates documentation, provides explainable alert samples, and enables financial crime teams to manage the full model lifecycle more independently.

For more insights from the discussion, read the full report here.

Copyright © 2026 FinTech Global
