The future of smarter KYC with AI and ML

Generative AI (GenAI) has become one of the most talked-about technologies of recent years, driven by its ability to handle tasks that once required specialist skills. Its popularity has soared thanks to how easily it can support activities ranging from content creation and translation to writing and debugging code.

While GenAI appears to have risen suddenly, the foundations for its development have been in place for some time, claims Moody’s.

Earlier models were capable of performing many of the same functions, but the latest wave stands out for its ability to understand natural language and follow instructions with remarkable precision. That shift in usability has been the major catalyst behind its global traction.

One of the clearest examples of this leap forward can be seen in ChatGPT. The model can interpret intent, maintain the thread of a conversation, and adapt responses based on a user’s preferences. A person can ask it to summarise earlier exchanges, refine outputs, or adopt a specific tone, and the system can respond intelligently. These conversational abilities have become a core part of how people learn, produce content, and organise tasks. They have also paved the way for a new application: using chat-based interfaces to support know your customer (KYC) workflows with a more natural, human-centric experience.

Incorporating GenAI into KYC opens the door to interactive investigations and smarter screening of entities. Rather than relying solely on rules and static inputs, KYC analysts can engage with the system to ask targeted questions, explore risk factors, and test hypotheses. GenAI excels at supporting human reasoning, helping compliance teams frame queries more effectively and access information faster. Yet despite its strengths, GenAI also comes with limitations that compliance professionals must keep firmly in mind.

Most public GenAI models only know what exists within their training data, meaning they may not reflect the latest regulatory updates or offer verified sources. Without fact-checking capabilities, any assessment that relies solely on GenAI risks inaccuracies, hallucinations, or conclusions unsupported by evidence. The global regulatory landscape remains fragmented, with countries taking varied approaches to governing AI use. Differences between frameworks in the EU, United Kingdom, Singapore, and the Middle East mean organisations must carefully evaluate how AI fits into their compliance controls. Ethical questions also persist, particularly around bias and fairness when models are trained on incomplete or skewed datasets.

Moody’s argues that the best path forward lies in combining GenAI with trusted proprietary data and workflow technology. Public tools such as ChatGPT can only draw from information available on the web, whereas datasets like Moody’s Grid, Orbis, and official registry information from Kompany provide verified insights essential for risk assessment. Integrating these datasets into GenAI-enhanced KYC platforms could support analysts with more complete, reliable responses. To enable this, Moody’s is developing an AI-driven chat interface within its KYC products to provide structured outputs supported by large language models (LLMs) running behind the scenes.
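To make the pattern concrete, the minimal sketch below shows the general idea of grounding a KYC chat query on verified records before it reaches a language model. Every name, record, and function here is a hypothetical placeholder for illustration; none of it reflects Moody’s actual products, datasets, or APIs.

```python
# Minimal sketch of grounding a KYC chat question on verified records before
# it reaches an LLM. The registry contents, the retrieval logic, and call_llm
# are hypothetical placeholders, not any vendor's real API.

from dataclasses import dataclass

@dataclass
class RegistryRecord:
    entity: str
    jurisdiction: str
    status: str
    risk_flags: list

# Stand-in for a verified proprietary dataset (e.g. registry or watchlist data).
VERIFIED_RECORDS = [
    RegistryRecord("Acme Trading Ltd", "GB", "active", ["adverse media (2023)"]),
    RegistryRecord("Acme Trading FZE", "AE", "dissolved", []),
]

def retrieve_records(query: str) -> list:
    """Naive retrieval: return records whose entity name appears in the query."""
    return [r for r in VERIFIED_RECORDS if r.entity.lower() in query.lower()]

def build_grounded_prompt(question: str) -> str:
    """Attach only verified records to the prompt so the model is asked to
    answer from evidence rather than from its training data alone."""
    evidence = retrieve_records(question)
    context = "\n".join(
        f"- {r.entity} ({r.jurisdiction}): {r.status}; flags: {r.risk_flags or 'none'}"
        for r in evidence
    ) or "- no verified records found"
    return (
        "Answer the KYC question using ONLY the verified records below. "
        "Cite the record you relied on and say if evidence is missing.\n"
        f"Verified records:\n{context}\n\nQuestion: {question}"
    )

def call_llm(prompt: str) -> str:
    # Placeholder for a real LLM call; here we simply echo the grounded prompt.
    return f"[LLM response would be generated from]\n{prompt}"

if __name__ == "__main__":
    print(call_llm(build_grounded_prompt("Is Acme Trading Ltd safe to onboard?")))
```

The value of the pattern is that the model is instructed to answer only from the evidence it is shown, which speaks directly to the fact-checking and hallucination concerns raised above.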

AI and machine learning (ML) are already reshaping KYC processes beyond GenAI alone. Digital transformation and RegTech adoption have helped financial institutions move away from manual checks that were once slow, inconsistent, and vulnerable to human error. Today’s automated platforms can support onboarding, ongoing monitoring, rules-based risk management, and transaction screening, while maintaining the human oversight required for sensitive judgments.

Through advances in AI and ML, organisations can authenticate identities, analyse suspicious patterns, detect document fraud, and apply liveness verification. Continuous monitoring tools can identify changes in a customer’s risk profile in real time.
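As a rough illustration of how such a change might surface, the sketch below compares two snapshots of a customer’s profile and raises an alert for any field that differs. The field names, values, and structure are assumptions made purely for illustration.

```python
# Illustrative sketch of continuous monitoring: compare two snapshots of a
# customer's risk profile and flag material changes between them.

def detect_profile_changes(previous: dict, current: dict) -> list:
    """Return human-readable alerts for monitored fields that changed."""
    alerts = []
    for field in ("pep_status", "sanctions_hit", "jurisdiction", "adverse_media_count"):
        before, after = previous.get(field), current.get(field)
        if before != after:
            alerts.append(f"{field} changed: {before!r} -> {after!r}")
    return alerts

previous = {"pep_status": False, "sanctions_hit": False,
            "jurisdiction": "GB", "adverse_media_count": 0}
current = {"pep_status": True, "sanctions_hit": False,
           "jurisdiction": "GB", "adverse_media_count": 2}

for alert in detect_profile_changes(previous, current):
    print("ALERT:", alert)  # e.g. pep_status changed: False -> True
```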

Moody’s notes that AI-driven screening and risk scoring help firms reduce false positives through calibrated alert thresholds, enabling compliance teams to focus on genuinely high-risk cases. Importantly, these models remain static by design, preventing uncontrolled learning that could expose firms to regulatory issues. Instead, Moody’s monitors performance to ensure each model remains fit for purpose.
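A simplified sketch of threshold-based scoring is shown below. The weights and the alert threshold are illustrative assumptions; in a real deployment they would be set and reviewed by the compliance team rather than learned automatically, consistent with the static-model approach described above.

```python
# Simplified sketch of threshold-based risk scoring: only scores above the
# alert threshold generate a case for review, reducing false positives.

RISK_WEIGHTS = {
    "sanctions_match": 0.9,
    "pep_match": 0.6,
    "high_risk_jurisdiction": 0.4,
    "adverse_media": 0.3,
}
ALERT_THRESHOLD = 0.7  # alerts fire only at or above this score

def risk_score(signals: dict) -> float:
    """Combine boolean screening signals into a score capped at 1.0."""
    score = sum(RISK_WEIGHTS[s] for s, hit in signals.items() if hit)
    return min(score, 1.0)

def should_alert(signals: dict) -> bool:
    return risk_score(signals) >= ALERT_THRESHOLD

# A lone adverse-media hit stays below the threshold (no alert),
# while a sanctions match triggers review.
print(should_alert({"sanctions_match": False, "pep_match": False,
                    "high_risk_jurisdiction": False, "adverse_media": True}))   # False
print(should_alert({"sanctions_match": True, "pep_match": False,
                    "high_risk_jurisdiction": False, "adverse_media": False}))  # True
```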

Despite the benefits, AI adoption in KYC is still challenged by integration costs, data quality issues, regulatory expectations, and the need for explainability. A recent Moody’s study revealed that 84% of risk and compliance professionals see clear advantages in using AI, and 62% expect it to become mainstream within three years. However, firms recognise that responsible deployment requires transparency, fairness, and effective governance. Many professionals expect their roles to evolve, taking on more strategic decision-making and collaborating closely with technology teams as AI capabilities expand.

Looking ahead, combining AI, ML, GenAI, and proprietary datasets is expected to create more intelligent, efficient KYC systems that support smarter screening and reduce friction for customers. Agentic AI, in particular, is viewed as a major next step. With AI able to automate verification, enhance risk assessment, and assist investigations during enhanced due diligence (EDD), compliance teams may soon rely on a new generation of interactive, data-driven tools to navigate increasingly complex financial crime risks.

As these technologies continue to mature, the future of KYC is likely to feature AI-powered, data-integrated platforms capable of providing faster decisions, reduced costs, and a more seamless customer experience—all while maintaining the human judgement essential to safe and ethical compliance practices.
