How many of your high-risk clients are actually high risk? It is a deceptively straightforward question, but one that strikes at the heart of how many compliance functions operate today.
According to Muinmos, across the industry a significant share of client books sits in the high-risk category — not because of anything those clients have done, but because of assumptions embedded in frameworks built for a different regulatory era.
Muinmos recently published a write-up of a webinar on rethinking AML client risk categorisation.
This was the central challenge explored at a recent Compliance Café Connect session, hosted in partnership with FAI Comply, which brought together compliance and risk professionals from multiple jurisdictions. The discussion centred on how financial institutions can move away from static, assumption-heavy risk classification towards approaches that are more dynamic, defensible and genuinely fit for purpose.
Remonda Kirketerp-Møller, chief executive of Muinmos, was among the voices sharing what is working — and what is not — across the institutions she works with globally.
The problem with periodic reviews
Traditional anti-money laundering risk categorisation was designed for a world of slower regulatory change, more face-to-face client relationships and far less available data. Risk was assessed at onboarding, revisited at fixed intervals and managed through predefined scoring rules. That model is now struggling to keep pace.
The consequences are practical. An inflated high-risk population dilutes the attention of compliance teams and, paradoxically, makes it harder to identify genuinely suspicious behaviour. It also introduces commercial friction: additional document requests, longer onboarding timelines and service restrictions frustrate legitimate clients to the point that some walk away entirely.
The most effective AML frameworks are not the ones with the most controls. They are the ones that apply the right controls, to the right clients, at the right time. Regulators — including FATF, the Basel Committee and supervisory bodies across the UK and EU — have converged on this view. The question has shifted from “how many controls do you have?” to “can you demonstrate that your decisions were reasonable, proportionate and consistently applied?”
Risk is not static, and neither should your framework be
One of the most significant ideas to emerge from the session was the concept of temporal risk: a client’s risk profile can shift even when the client’s own circumstances have not changed. New sanctions regimes are introduced overnight. Geopolitical developments create indirect exposure. Ownership structures can evolve without triggering a scheduled review.
Kirketerp-Møller was direct on this point: the periodic refresh model is becoming obsolete. The data required to assess risk in real time is increasingly accessible, and the industry must move towards continuous monitoring as the norm rather than the exception.
Kirketerp-Møller said, “If you work with providers that enable continuous monitoring, you don’t need to think about it again. Focus your attention on the areas that require genuine human judgement — and let technology handle what it can handle reliably.”
Sanctions screening illustrates this clearly. In many jurisdictions, the legal obligation is not simply to screen at onboarding, but to monitor on an ongoing basis. A name that was clean at the point of entry may not remain so. And a client who intended to deceive at onboarding would have known exactly what was being checked for — the real picture often only emerges later, through patterns in behaviour and transactions.
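The point-in-time versus ongoing distinction can be sketched in a few lines. This is an illustrative toy, not any vendor's screening API: the list contents, client name and function names are all hypothetical, and real screening uses fuzzy matching rather than exact lookups.

```python
# Toy illustration: a name that passes screening at onboarding can be
# designated later, so only re-screening against the updated list catches it.
def is_flagged(name: str, sanctions: set[str]) -> bool:
    """Return True if the (normalised) name appears on the sanctions list."""
    return name.strip().lower() in sanctions

sanctions_v1 = {"known bad actor"}                 # list at onboarding
sanctions_v2 = sanctions_v1 | {"acme shell co"}    # new designation overnight

client = "Acme Shell Co"
onboarding_result = is_flagged(client, sanctions_v1)  # clean at entry
rescreen_result = is_flagged(client, sanctions_v2)    # flagged on re-screen
```

The behaviour mirrors the obligation described above: the onboarding check alone would never surface the later designation.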
The copy-and-paste problem
Kirketerp-Møller identified a recurring pattern in the market: institutions importing a compliance framework from another organisation and expecting it to function effectively for them. It rarely does. A framework’s value is entirely dependent on how well it maps to the specific client base, product set, jurisdictional footprint and counterparty ecosystem of the institution using it.
The same principle applies to technology. Her advice to compliance leaders looking to modernise is to understand their own logic first, before seeking a provider to support it. Know your clients. Know your risks. Value your data. Build a framework you can observe, test and explain — and then find technology that gives that framework operational scale. A RegTech platform is a medium, not a solution. The logic must originate with the institution.
Automating the routine to elevate the human
Technology’s role in this evolution is not to displace compliance professionals. It is to free them from work that machines can perform reliably, so that they can concentrate on the judgement calls that genuinely require human expertise.
Manual document checks, spreadsheet-based sanctions screening and tick-box onboarding flows are not simply inefficient — they are inherently risky, because they introduce human error into processes where consistency matters most. Automating straight-through processing for routine, rules-based decisions is not a threat to compliance teams. It is what enables them to function as genuine analysts rather than process handlers.
Reducing over-classification: a practical starting point
A common question raised in the session was how firms with disproportionately large high-risk client populations can reduce over-classification without taking on additional regulatory risk.
The answer begins with understanding what is driving those numbers. In many cases, a single default attribute — operating in the crypto sector, or conducting business entirely remotely — is pushing every client into the same risk band. When everyone is high risk, the classification loses all meaning.
The recommended approach is to identify baseline default attributes, define the mandatory responses to them and then direct risk management energy towards the factors that are genuinely variable. That is where real risk intelligence — and meaningful differentiation — resides. A regulator reviewing a firm’s framework would expect exactly this: not a uniform high-risk population, but a calibrated approach that reflects the actual distribution of risk across the client book.
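The over-classification problem described above can be shown with a toy scorer. This is a hedged sketch only: the attribute names, weights and threshold are invented for illustration and do not reflect Muinmos' product or any regulatory scoring standard.

```python
# Illustrative contrast: a single default attribute forcing everyone into
# "high", versus a calibrated score where variable factors differentiate.
WEIGHTS = {
    "crypto_sector": 2,            # baseline default attribute
    "remote_onboarding": 1,        # baseline default attribute
    "high_risk_jurisdiction": 3,   # genuinely variable factor
    "complex_ownership": 3,        # genuinely variable factor
}

def naive_band(client: dict) -> str:
    # Any single trigger attribute forces the top band.
    return "high" if client.get("crypto_sector") else "standard"

def calibrated_band(client: dict, high_threshold: int = 5) -> str:
    # Sum weighted attributes so defaults alone do not reach the top band.
    score = sum(w for attr, w in WEIGHTS.items() if client.get(attr))
    return "high" if score >= high_threshold else "standard"

book = [
    {"crypto_sector": True},                        # default attribute only
    {"crypto_sector": True,
     "high_risk_jurisdiction": True,
     "complex_ownership": True},                    # genuinely elevated risk
]

naive = [naive_band(c) for c in book]
calibrated = [calibrated_band(c) for c in book]
```

Under the naive rule both clients land in the same band; under the calibrated rule only the client with variable risk factors does, which is the kind of differentiation a regulator would expect to see.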
Auditability: the new non-negotiable
A theme that ran consistently through the session was that any framework an institution builds must be fully explainable. Regulators increasingly expect compliance decisions to be traceable end to end — what information was used, what logic was applied, and who approved the outcome.
This expectation extends to the technology supporting those decisions. It is not sufficient for institutions to point to a piece of software as their answer. They must be able to articulate how it is configured, why those parameters were selected, and how its outputs align with the institution’s own risk appetite and client profile. Regulators are not concerned about whether a firm uses ten tools or one — but they are very concerned if risk decisions cannot be traced from start to finish.
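One way to picture end-to-end traceability is a structured, append-only decision record capturing the three things regulators ask about: the information used, the logic applied and who approved the outcome. The field names and rule-set label below are hypothetical, not a regulatory schema.

```python
# Hedged sketch of an auditable decision record, serialised as one
# append-only log line per decision.
import json
from datetime import datetime, timezone

def record_decision(client_id, inputs, ruleset_version, outcome, approver):
    """Build a traceable record of a single risk decision."""
    return {
        "client_id": client_id,
        "inputs": inputs,                    # what information was used
        "ruleset_version": ruleset_version,  # what configured logic ran
        "outcome": outcome,
        "approved_by": approver,             # who approved the outcome
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

entry = record_decision(
    client_id="C-1042",
    inputs={"jurisdiction": "DK", "sector": "payments", "screening_hit": False},
    ruleset_version="risk-logic-v3.2",
    outcome="standard",
    approver="j.smith",
)
audit_log_line = json.dumps(entry)
```

Pinning a version label to the rule set is the detail that lets a firm later explain not just what was decided, but which configuration of its logic produced the decision.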
The compliance leaders navigating this landscape most effectively are those who treat technology as an enabler of explainable, auditable, consistently applied decisions — not as a substitute for having a coherent framework in the first place.
The road ahead
AML compliance is not becoming simpler. Sanctions regimes are expanding in scope and pace. Digital onboarding is now the standard. Regulatory expectations around data are rising. The institutions best placed to manage what comes next will not be those with the greatest number of controls, but those with the clearest thinking — and the frameworks to match.
Read the full Muinmos post here.
Copyright © 2026 RegTech Analyst