Generative AI’s role in FinCrime compliance

As financial institutions move from experimentation to practical implementation of generative AI, a critical question emerges: how can this technology be responsibly integrated into financial crime compliance frameworks without creating new risks?

This was the focus of a recent “Hype vs. Reality” fireside series episode, hosted by World Salon and Columbia University’s Global Dialogue. Featuring SymphonyAI’s financial crime and sanctions subject matter expert, Elizabeth Callan, the discussion explored how large language models (LLMs) are reshaping the risk and regulatory landscape in financial services.

With over 25 years of experience in financial crime policy and enforcement, Callan laid out a practical roadmap for adopting generative AI while maintaining compliance and reducing risk.

Governance remains the core of any responsible AI strategy, ensuring institutions do not rush adoption without proper accountability. “Most institutions now have AI risk committees,” Callan explained, noting that these structures alone are not sufficient. Clear, documented policies are vital to outline roles, responsibilities and accountability across internal teams and external partners. “It sounds basic, but documented policies and procedures are absolutely critical.”

Transparency with stakeholders and regulators is equally essential. Organisations need to maintain shareable documentation on model governance, training cycles, explainability standards, incident response, data integrity and control frameworks. This transparency can strengthen regulator confidence and protect firms as they scale AI across compliance functions, Callan said.

Proactive regulatory engagement was highlighted as a priority, particularly given uneven regulatory landscapes globally. While the US lacks federal AI regulation, state-level laws around AI bias are emerging. Engaging early with regulators can help institutions contribute to shaping future frameworks that support innovation while maintaining compliance.

Data quality underpins effective AI-led AML systems, but data management is not only about maintaining clean datasets. Institutions need to “understand your data and continuously test for bias”, Callan warned, stressing the growing requirement to demonstrate fairness and effectiveness as legislation evolves.

Explainability, once a buzzword, is now an operational necessity. Institutions need full visibility into how models function, which inputs are used, and how outputs can be audited and defended. “If you can’t explain it to a regulator, you probably shouldn’t be using it,” Callan stated.

Human oversight remains critical even as AI drives efficiencies in compliance workflows. Controlled pilots in sandbox environments were recommended for testing AI systems, ensuring risks are identified before they are scaled more broadly.

Vendor relationships were also positioned as a cornerstone of responsible AI use. Institutions should expect transparency, lifecycle partnership and commitment to responsible AI from vendors, ensuring technology adoption aligns with compliance and risk frameworks.

Ultimately, the discussion underscored that AI is transforming not only the processes within financial crime compliance but also redefining what it means to operate responsibly and transparently in an automated landscape.

Copyright © 2025 FinTech Global
