As AI reshapes financial services, a paradox is emerging. RegTech, built to manage regulatory compliance, now struggles to regulate AI itself. The tools designed to ensure oversight are falling behind the very technology they’re meant to monitor. AI models driving credit decisions and fraud detection carry risks – bias, opacity, data drift – that traditional compliance frameworks can’t capture. A question lingers: is RegTech ready for the age of AI model risk management?
From the standpoint of South African RegTech RelyComply, the longevity of AI solutions depends heavily on how easily compliance teams can manage their models’ risk factors, all as part of the comprehensive lifecycle management plan that comes with the territory of maintaining complex systems.
The firm remarked, “Given the level of technical expertise needed for AI maintenance, it’s usually not enough for compliance teams to do this alone. Luckily, RegTech platforms are already adept at finding and mitigating potential drawbacks in an algorithmic setup, as implemented by their human counterparts. These include discriminatory biases, deviations, and the omission of results, all of which introduce errors that undermine the ethical standards required of today’s firms and jeopardise system integrity, with disastrous knock-on effects for their AML efforts and end users too.”
RelyComply stated that AI is gaining so much traction that poorly maintained models and legacy systems – including those that operate as black boxes – can repeatedly reproduce poor, unfair and non-compliant decisions in such high-risk, critical use cases as transaction monitoring.
“They will then become obsolete as the need grows for explainable AI and transparency for regulatory audits. One major factor that will set an institution apart in its model risk management is how easily a RegTech solution can be integrated. Speedy adaptability is all about finding a modern platform that fits with core infrastructure and can run using the same data,” said the firm.
For the South African firm, adopting practical AI starts with clean data and comprehensive standards – whereby a model can learn over time and accommodate any changing rules around data storage or privacy.
The company said, “Static solutions are not flexible enough, especially in light of the flexibility that many modular or cloud-based RegTech providers offer. AI models are organically shifting systems themselves, which must be retrained in order to effectively filter out criminal techniques that are similarly (or, rather, worryingly) growing in sophistication.
“To be truly innovative, RegTech partnerships enable AI models that can be consistently updated to be ethical, explainable and sustainable – not just to appease a compliance officer’s rulebook, but to enhance the battle against launderers and other criminals that understand AI’s smart capabilities for the dark arts.”
Need for demonstration
AI model governance is increasingly becoming a requirement in regulatory compliance, stressed Supradeep Appikonda, COO and co-founder at 4CRisk.ai.
He said, “Organisations need to be able to demonstrate to regulators how they ensure that AI systems they use are supported by AI models and algorithms with a training process that involves meticulous curation of data, utilising a robust data governance framework that encompasses steps to evaluate datasets against governance criteria, conduct data clearance, and perform document quality checks.”
For Appikonda, these should be reflected in a firm’s obligations, rule book, policies and controls. RegTech needs to monitor gaps in the regulatory framework and raise alerts, not only on changes in laws, rules and standards in AI model governance, but critically, in gaps in controls coverage.
Furthermore, Appikonda stressed that RegTech processes are beginning to integrate with risk management beyond the risk of being out of compliance. One of the key capabilities is being able to track and provide context around emerging or systemic risks, including cyber risks, and to use AI to analyse gaps between an organisation’s controls and these risks.
A key question being asked in the industry is whether RegTech has the scalability to handle AI model compliance. In the opinion of Appikonda, RegTech can scale core AI compliance functions across industries, but there’s no one-size-fits-all solution yet.
He went on, “This is mainly because each industry has specific regulatory requirements unique to that industry’s processes. For example, the manufacturing industry is more focused on health, safety and quality regulations, while the health services industry is focused on privacy of patient records. Horizontal capabilities like drift monitoring, explainability, and logging scale effectively across customers, while vertical regulatory rule engines need customisation to account for industry- or client-specific nuances.”
This hybrid approach, Appikonda remarks, allows broad coverage while addressing specialised compliance requirements.
What are the gaps in current RegTech solutions that hinder robust AI model risk management? Robust AI model risk management, Appikonda answers, refers to a comprehensive approach to identifying, monitoring, and mitigating risks associated with AI models throughout their lifecycle. It ensures that models are accurate, fair, reliable, and compliant with regulations.
He detailed, “RegTech providers themselves, as they start to incorporate AI, need to attest that their own systems use AI models and algorithms that adhere to model risk management processes, including the meticulous curation of data and a robust data governance framework that encompasses steps to evaluate datasets against governance criteria, conduct data clearance, and perform document quality checks.
“Beyond RegTech systems attesting to their own AI model risk management programs, their software needs to extend to provide regulatory compliance for their customers’ systems by monitoring, alerting and providing suggestions on how to close gaps between an organisation’s obligations, rule book, policies and controls,” Appikonda finished.
Starting with measurement
For Madhu Nadig, co-founder at Flagright, AI risk model management readiness starts with measurement.
He explained, “Effective tools monitor performance, drift, and fairness continuously, not just at deployment. They compare outcomes across cohorts, run challenger models on the same stream, and use red‑team scenarios to probe blind spots. This is how you detect false negatives early, especially the rare, adaptive patterns that matter most in financial crime.”
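The continuous checks Nadig describes can be made concrete. Below is a minimal, illustrative Python sketch – not Flagright’s implementation – of two of them: score drift measured with the Population Stability Index (PSI), and a cohort-level comparison of flag rates for fairness review. The bin edges and any thresholds are assumptions for demonstration only.

```python
# Illustrative monitoring checks: PSI drift and cohort flag rates.
# Bin edges and thresholds are example values, not regulatory guidance.
import math
from collections import Counter

def _bucket_shares(scores, bins):
    """Share of scores falling in each [bins[i], bins[i+1]) bucket."""
    counts = [0] * (len(bins) - 1)
    for s in scores:
        for i in range(len(bins) - 1):
            if bins[i] <= s < bins[i + 1] or (i == len(bins) - 2 and s == bins[-1]):
                counts[i] += 1
                break
    total = max(len(scores), 1)
    # Floor at a tiny value so the log in the PSI never sees zero.
    return [max(c / total, 1e-6) for c in counts]

def psi(baseline, live, bins=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Population Stability Index between baseline and live score sets.
    Roughly: < 0.1 stable, 0.1-0.25 investigate, > 0.25 significant drift."""
    b = _bucket_shares(baseline, bins)
    l = _bucket_shares(live, bins)
    return sum((li - bi) * math.log(li / bi) for bi, li in zip(b, l))

def cohort_flag_rates(decisions):
    """decisions: iterable of (cohort, flagged_bool).
    Returns the flag rate per cohort, for side-by-side fairness review."""
    totals, flagged = Counter(), Counter()
    for cohort, hit in decisions:
        totals[cohort] += 1
        flagged[cohort] += int(hit)
    return {c: flagged[c] / totals[c] for c in totals}
```

Run continuously against production scores rather than once at deployment, a check like this surfaces the drift and disparate outcomes the quote warns about before they reach an audit.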
The second pillar for the Flagright co-founder is human oversight. Analysts need clear escalation paths, the authority to override a model and, Nadig underlines, training that focuses on interpreting explanations rather than re-running raw data.
He went on, “Regulators are moving toward this continuous assurance model, but many frameworks still emphasize point‑in‑time validation. RegTech needs to fill that gap with real‑time evidence and controls that are easy to review. Scalability across industries depends on two capabilities.”
The first of these capabilities is a common risk and control taxonomy that maps different regulations and data shapes into a single policy engine. Second, an MLOps backbone that treats models as living products: registries, versioning, lineage, and policy‑as‑code so changes move safely from experiment to production.
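An MLOps backbone of the kind described – models as living products with registries, versioning, lineage and policy-as-code – might be sketched as follows. The class names, lifecycle stages and policy checks here are hypothetical illustrations, not any vendor’s API.

```python
# Hypothetical sketch of a model registry with versioning, lineage,
# and a policy-as-code gate that must pass before promotion.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelVersion:
    name: str
    version: int
    parent: Optional[int]       # lineage: version this was retrained from
    metadata: dict              # e.g. drift stats, explainability evidence
    stage: str = "experiment"   # experiment -> staging -> production

class Registry:
    def __init__(self, policies):
        # policies: list of (policy_name, check_fn) pairs over metadata
        self.policies = policies
        self.versions = {}      # (name, version) -> ModelVersion

    def register(self, name, metadata, parent=None):
        version = 1 + max((v for (n, v) in self.versions if n == name), default=0)
        mv = ModelVersion(name, version, parent, metadata)
        self.versions[(name, version)] = mv
        return mv

    def promote(self, name, version, stage):
        """Move a version to a new stage only if every policy passes."""
        mv = self.versions[(name, version)]
        failures = [p for p, check in self.policies if not check(mv.metadata)]
        if failures:
            raise ValueError(f"policy gate failed: {failures}")
        mv.stage = stage
        return mv
```

Because the gate is expressed as code, a change to the rule book becomes a change to the `policies` list, and every promotion decision is reproducible from the registry’s records.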
“Privacy‑preserving techniques help when data cannot be centralized, including federated learning, differential privacy for analytics, and strict data‑residency controls. The biggest gaps today are fragmented evidence, limited explainability for complex deep models, and weak supply‑chain governance around third‑party AI components,” said Nadig.
He finished by outlining that Flagright’s approach is outcomes first. “We log every feature used, track drift and fairness in production, keep a standing challenger, and tie each automated action to a human‑review path with a full audit trail. That is what turns AI from a clever component into a governable control.”
Centralise and prepare
Whilst AI is broadly seen as increasingly competent at managing risk management workloads, there needs to be vigilance and a certain level of centralisation to stay on top of it.
“It’s ready to use, not to switch off your brain,” said Ryan Swann, founder of RiskSmart. “We can track which models you run, spot when they drift off course, check for unfair outcomes, and keep tidy records. The gaps are outside tools you didn’t build and newer AI risks like prompt tricks and data leaks.”
For Swann, the fix is to centralise – one register for all models, clear checks before anything goes live, live monitoring and an easy rollback if something misbehaves.
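The “easy rollback” part of that fix can be illustrated with a small sketch: an active-version pointer per model plus a history, so a misbehaving release can be reverted in one call. The names below are invented for illustration, not RiskSmart’s product.

```python
# Minimal sketch of one-call rollback: track the active version of each
# model and the versions it replaced, so reverting is a single operation.
class ModelRouter:
    def __init__(self):
        self._active = {}    # model name -> currently serving version
        self._history = {}   # model name -> prior versions, most recent last

    def deploy(self, name, version):
        """Make `version` live, remembering what it replaced."""
        if name in self._active:
            self._history.setdefault(name, []).append(self._active[name])
        self._active[name] = version

    def rollback(self, name):
        """Revert to the most recent prior version, if one exists."""
        prior = self._history.get(name)
        if not prior:
            raise RuntimeError(f"no earlier version of {name} to roll back to")
        self._active[name] = prior.pop()
        return self._active[name]

    def active(self, name):
        return self._active[name]
```

Kept alongside a central register and pre-live checks, this is the last of Swann’s four pieces: if live monitoring flags a problem, the previous known-good version is one call away.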
Meanwhile, Rick Grashel, co-founder and CTO at Red Oak, stressed that the vast majority of RegTech solutions do not maintain detailed records of model usage or AI algorithm changes as part of their regulatory documentation.
He said, “Financial records, emails, and legal documents are already required to be retained under regulations such as SOX and SEC Rules 17a-3 and 17a-4, and we anticipate that similar requirements for AI governance and recordkeeping will be introduced soon.
“In preparation, Red Oak already treats every AI review—its inputs, outputs, and model activity—as a regulatory record. This proactive approach ensures that when such regulations take effect, our clients will already be audit-ready,” Grashel concluded.
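Treating every AI review as a regulatory record, as Grashel describes, can be illustrated with a simple append-only log of inputs, outputs and model activity. This is a generic sketch, not a description of Red Oak’s system; the SHA-256 digest for tamper-evidence is an added assumption of the example.

```python
# Illustrative append-only log treating each AI review as a record.
# The content digest (an assumption of this sketch) makes later
# alteration of a stored entry detectable.
import json
import hashlib
import datetime

class AuditLog:
    def __init__(self):
        self._entries = []

    def record(self, model_id, inputs, outputs):
        """Append one review: what went in, what came out, which model."""
        entry = {
            "model_id": model_id,
            "inputs": inputs,
            "outputs": outputs,
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        }
        payload = {k: entry[k] for k in ("model_id", "inputs", "outputs")}
        entry["digest"] = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        self._entries.append(entry)
        return entry

    def entries(self):
        return list(self._entries)  # copy: callers cannot mutate the log
```

Retained under the same regime as the SOX and SEC 17a-3/17a-4 records mentioned above, entries like these are what would make a firm audit-ready before AI recordkeeping rules arrive.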
Copyright © 2025 RegTech Analyst