Regulation used to sit at the edges of the enterprise, tracked by specialist teams, interpreted periodically and pushed downstream into compliance, legal and risk functions. That model is starting to break. As regulatory change grows faster, broader and more complex, firms can no longer afford to treat regulatory intelligence as a background task or a periodic monitoring exercise.
Instead, it is becoming something far more central. Regulatory intelligence is increasingly shaping how firms understand their obligations, make operational decisions and adapt at speed across markets. In that sense, it is moving beyond a support role and becoming part of the infrastructure that helps the enterprise see, interpret and respond to change.
According to Roseanne Spagnuolo, chief research and data officer at Vixio, regulatory compliance was for years viewed by many boards as a necessary friction: a defensive posture designed to keep the company out of the headlines and out of the crosshairs of regulators.
However, the landscape has shifted fundamentally. In today’s uncertain and dynamic global market, regulatory intelligence is no longer just a line item in the compliance budget; it has evolved into critical infrastructure, and a modern business cannot survive without a defensible, tech-enabled regulatory compliance program.
“There is a hard truth facing our industry: manual regulatory tracking is not just inefficient and unsustainable, it is a threat to growth,” said Spagnuolo. “In the past, a talented team of compliance officers using spreadsheets, emails and various project tools might have been ‘good enough’. Today, that approach is a liability.”
For the Vixio chief, manual processes are inherently disjointed and slow, creating a lag between a regulatory shift and a business response. “That lag represents more than just a compliance risk; it represents missed opportunities. While you are manually parsing a regulatory update, your competitor, armed with real-time intelligence, has already adjusted their roadmap and captured the market,” she said.
There is a growing realisation among many that we are entering an era of unprecedented scrutiny – one in which, as Spagnuolo puts it, ‘we tried our best’ is not a valid defence.
She said, “RCM must be defensible. Boards are increasingly looking at their compliance teams and asking: ‘What is coming three years down the line, and what controls and mechanisms are in place to evidence that we are ready?’ If the answer relies on a fragmented manual process, you cannot provide the evidence trail required to satisfy governance standards.”
Spagnuolo also made clear that there is currently a lot of noise about machine-readable regulation and the total automation of compliance, but her assessment is blunt: we are not there yet, and we are not even close.
“The stakes in our industry are too high to outsource change management entirely to an algorithm,” she remarked. She explained that Vixio sees two primary hurdles that require a human-hybrid tech philosophy. The first is the reliability gap. “Current AI models are still being trained and remain prone to hallucinations – confidently stating a fact that isn’t true, or misinterpreting information. In a world of high-consequence risk, relying on unproven tech without a safety net is a gamble no serious business would take,” said Spagnuolo.
The second is the nuance of expertise. As she put it, regulation is rarely binary – it is written by humans, for humans, and is often shaded in grey. Machines excel at processing volume, but they lack the ability to read those nuances the way humans can.
Until technology is proven over years of running parallel to existing processes, Spagnuolo believes it must serve to complement human expertise, not replace it. “Our philosophy is: use the machine to cut through the noise, but rely on experts to provide the signal.”
She also believes that regulatory intelligence is a competitive advantage, but argues it is actually much more than that – it is a business lifeline, something firms need in order to thrive.
“In a volatile geo-political era, the ability to shift and pivot is the difference between an industry leader and a cautionary tale. Boards and senior management are increasingly held accountable not just for what they knew, but for what they should have known. The risks that damage companies rarely come from what they’re watching; they come from what they never saw coming,” said Spagnuolo.
Regulatory intelligence provides the early warning signals that enable a firm to proactively pre-empt market shifts. “When you know a change is coming twelve months out, you can build it into your development sprint rather than scrambling at the deadline, and that same foresight enables first-mover advantage in new markets, while less informed competitors remain paralysed by the uncertainty of new regulations,” said Spagnuolo. “When your regulatory compliance program is treated as critical infrastructure, it stops being a cost center and starts being a growth engine, allowing the business to move faster, with more confidence.”
According to the Vixio CR&DO, the work done by compliance teams typically only becomes visible when something goes wrong. They are rarely celebrated for risks avoided, but are quickly blamed when risks are missed.
“This is the problem Vixio exists to solve,” remarked Spagnuolo. “This is what we do and have been doing for 20 years. Every feature we build, every conversation we have with clients, every piece of intelligence that we write, it’s all in service of helping compliance teams answer one question with confidence: Could we defend what we knew and when we knew it? And then translate that into growth and competitive advantage. Because delivering only one side of the coin is no longer good enough.”
For Spagnuolo, the transition from manual to automated, and from reactive to proactive, isn’t optional. To stay ahead, firms must stop viewing compliance as a peripheral function and start treating it as the foundational data layer of their entire global operation.
Why manual regulatory tracking is faltering
Robb Verna, vice president of marketing and alliances at AscentAI, said manual regulatory tracking is becoming harder to sustain because firms are dealing with a growing volume of change through processes that remain slow and labour-intensive.
He said teams still have to “spend significant amounts of time curating content, assessing it for impact to their business, and figuring out what they need to do to comply”. In Verna’s view, that burden is becoming heavier as regulatory information becomes more dynamic and more complex.
He said manual approaches are now “incredibly arduous” because they force firms to work through large amounts of material by hand, while also increasing the risk of delay. That matters, Verna said, because it can take “weeks or even months to fully operationalize the updates across the enterprise”, leaving firms slower to respond to change.
Verna also argued that the issue now extends beyond compliance operations alone. As regulation expands in areas such as crypto, buy now pay later and payments, he said manual tracking is “impacting company growth and innovation agendas too”.
Still, Verna made clear that firms do not have to remain stuck in those processes. He said AI-powered automation can help teams move away from repetitive manual work and focus instead on “higher value tasks like making decisions and taking action instead of searching websites, downloading PDFs, and reading hundreds of pages of complex text to identify impacts to their business”.
Meanwhile, Charmian Simmons, financial crime and compliance expert and strategy leader at SymphonyAI, said manual regulatory tracking is becoming unsustainable because the pace and scale of change now far exceed what compliance teams can realistically manage by hand.
She said that the volume, velocity and geographic reach of regulatory change has fundamentally outpaced any team’s capacity to track it manually. In her view, that challenge is especially acute in AML, sanctions and fraud, where requirements are shifting across multiple jurisdictions at the same time, often under tight implementation deadlines.
Simmons said teams are now being pulled in too many directions at once, covering “horizon scanning, impact assessment, control mapping and evidence gathering, all at once, all the time”. That makes manual monitoring not just inefficient, but increasingly difficult to sustain without creating operational strain and regulatory risk.
For Simmons, this is exactly why continuous compliance capabilities are becoming more important. She said SymphonyAI’s idea of “Always-on Compliance” is now foundational to a more proactive model, because static and periodic reviews can leave “dangerous blind spots between update cycles”.
What compliance teams need instead, she said, is “continuous, embedded regulatory intelligence that monitors change in real time, interprets its implications, and feeds directly into detection models and workflows”. In practice, Simmons argued, when a sanctions designation is issued or a regulatory guideline changes, “the system should identify and respond, not wait for a quarterly review to pick it up”.
Vall Herard, CEO of Saifr, said manual regulatory tracking is no longer just resource-intensive. In his view, it is increasingly out of step with the structure and pace of modern regulation.
He said the problem is not simply that regulatory volume continues to rise. It is also that today’s rules contain “layers of conditions, exceptions, carve-outs, interdependencies, and jurisdiction-specific nuances that cannot be reliably captured through human review alone”. That, Herard suggested, makes manual tracking less dependable as regulatory frameworks become more detailed and more interconnected.
He also argued that firms are not only dealing with the question of what a rule says, but with how its implications ripple across the business. As he put it, operational teams must consider “how a rule’s implications cascade across business processes, systems, products, and supervisory expectations”. In some cases, Herard said, “a single regulatory document may require hundreds of discrete analytical checkpoints to interpret correctly”, making consistency difficult to maintain across teams, product lines and time.
For Herard, the pace of change only intensifies that burden. He said teams are caught in a continuous cycle of “monitoring, interpretation, cross-functional review, and re-interpretation”, with every regulatory shift triggering further downstream work, from product changes and policy revisions to control updates and supervisory dialogue. Maintaining the required level of subject-matter expertise across that landscape, he suggested, demands more resources than many institutions can realistically sustain.
Herard also said manual processes introduce delay at a time when business stakeholders expect answers in real time. He argued that interpretation gaps are too often resolved through meetings, while “information chasing replaces structured analysis”. As more institutions move towards automated workflows and real-time decision-making, he said the mismatch between regulatory complexity and manual operations is becoming even more visible.
In the end, Herard’s argument was not that people are incapable of doing the work. Rather, he said “the regulatory environment has outgrown manual capability” and that gap, in his view, is widening year by year.
Ryan Swann, CRO of RiskSmart, said manual regulatory tracking is becoming unsustainable because the scale and speed of change now exceed what teams can manage effectively by hand.
He said “the volume, velocity, and fragmentation of regulatory change have outpaced human capacity”, particularly for organisations operating across multiple jurisdictions. In those environments, Swann argued, firms are dealing with a steady flow of updates, more nuanced interpretation demands and overlapping obligations that are increasingly difficult to track through manual processes alone.
Swann also said manual tracking is “slow, error-prone, and reactive”, which in his view makes it poorly suited to an environment where firms are expected to manage risk and demonstrate accountability in real time.
Will regulatory intelligence become an advantage?
Herard believes that regulatory intelligence is currently both a competitive advantage and a cost of doing business, but argued that the balance is already starting to shift.
In the near term, he said it offers a clear advantage. Institutions that invest in structured regulatory intelligence can “move faster, operate with more transparency, and reduce internal friction”. In Herard’s view, automated monitoring reduces latency, while structured interpretation cuts down on unnecessary meetings and gives firms greater clarity around their obligations, helping them make product and business decisions more quickly.
He also argued that regulatory intelligence improves defensibility. Herard said firms with consistent interpretation frameworks are less likely to end up with conflicting judgments across teams or business lines. When supervisors ask why a decision was made, he said those institutions can point to “traceable logic rather than reconstructed reasoning”.
That advantage, in his view, has direct operational consequences. Herard said it can lead to shorter review cycles, fewer bottlenecks and more predictable interactions between compliance, legal and operational teams. For firms managing complex product sets or multi-jurisdictional obligations, he suggested, those benefits can build on one another over time.
Still, Herard made clear that he sees this advantage narrowing in the longer run. As regulatory complexity increases and supervisory expectations rise, he said engineered regulatory intelligence will stop looking like a differentiator and start looking more like essential infrastructure. He argued that firms will no longer be able to justify “manual or inconsistent interpretation practices”, particularly as boards demand more transparency and regulators expect structured, auditable decision-making.
For that reason, Herard compared the shift to the way cybersecurity evolved over the past decade. What begins as a competitive edge, he said, eventually becomes a basic requirement. In the end, Herard’s view was that firms with strong regulatory intelligence will be faster, more efficient and less exposed to scrutiny, while those without it will become slower, more error-prone and more vulnerable. Over time, he said, what starts as an advantage becomes simply part of the cost of doing business.
Swann added that regulatory intelligence will become both a baseline requirement and a source of competitive advantage, depending on how well firms put it to work.
At a basic level, he said “baseline regulatory intelligence will become a cost of entry” and something firms need in order to “operate safely and compliantly”. In that sense, Swann suggested it will increasingly be treated as part of the minimum infrastructure required to function in a complex regulatory environment.
At the same time, he argued that firms which go further and embed regulatory intelligence into decision-making and strategy will still be able to pull ahead. Swann said organisations that operationalise it effectively can gain “a clear competitive edge through faster adaptation, reduced risk exposure, and better-informed growth”.
Verna, for his part, argued that tech-driven regulatory intelligence is already a competitive advantage, particularly for firms trying to build and launch new financial products at speed.
He said “it’s not a question of ‘will’ it become a competitive advantage, tech driven regulatory intelligence absolutely is a competitive advantage”. In Verna’s view, the ability to gather, analyse and interpret regulatory information quickly and accurately is becoming essential for firms that need to anticipate change, adapt quickly and stay compliant as requirements evolve.
He suggested this matters most for digitally native firms, where compliance is increasingly embedded in the product development process rather than treated as a function that steps in later. Verna said these businesses are building the next generation of financial services products and, in that environment, compliance has “a core responsibility in the product development process”.
Put simply, he argued, the longer it takes risk and compliance teams to define requirements, establish compliance routines and confirm that a new offering meets regulatory standards, the longer it takes to launch. For that reason, Verna said firms are looking for tools that allow them to “work at the speed of the business” and support growth rather than slow it down.
AscentAI says its solutions provide the automation and the granular, tailored, fully processed and enriched regulatory information that firms require to work faster, more accurately and more confidently. The company positions itself as helping businesses that are hyper-focused on growth and innovation define and manage the regulatory foundation that powers that growth.
Simmons also explained that regulatory intelligence will remain a genuine competitive advantage for the next two to three years, although she argued that window is starting to narrow.
She said firms are increasingly embedding “always-on” regulatory and threat intelligence into their compliance platforms so they can respond to change faster, improve detection accuracy and give regulators a more coherent and auditable control environment. In Simmons’ view, the real point of differentiation is not just speed, but the quality of the response that follows.
She said “Always-on Compliance” allows firms to act proactively when FATF updates its guidance, when a new sanctions regime is introduced or when an emerging fraud typology appears. Rather than waiting for a person to identify the change and trigger a process, Simmons said the system can surface it, assess the impact and adjust more quickly.
For that reason, she argued that firms operating this way are beginning to set the standard. By contrast, Simmons said those that still treat regulatory intelligence as a manual back-office task will remain stuck in a reactive position, facing higher remediation costs and greater regulatory scrutiny.
Are we close to machine-readable regulation?
Simmons said that machine-readable regulation is closer in practice than many compliance leaders may think, although she stressed that important barriers remain.
She said regulators are increasingly publishing more structured, digital-first guidance, while AI models are showing “a genuine ability to interpret, classify, and contextualise regulatory text at scale”. In Simmons’ view, the technology itself is advancing quickly. The more difficult challenge now is integration.
For Simmons, machine-readable regulation only becomes valuable when it is connected to live compliance infrastructure and supported by the wider components needed for “Always-on Compliance”. She said that means feeding regulatory updates directly into detection engines, risk models and investigation workflows, so controls can evolve in step with the rules and models they are meant to enforce.
She also argued that an ontology-driven architecture is what makes that possible in practice. By maintaining a continuously updated knowledge graph of entities, behaviours and regulatory context, Simmons said firms can move beyond simply reading regulation and instead begin to understand it and act on it automatically. In her view, “Always-on Compliance” depends on always-current intelligence, and that requires much more than monitoring text alone.
Saifr CEO Herard said the idea of machine-readable regulation is widely discussed, but in practice the industry is still some distance from it becoming a reality.
He said many in the sector imagine a future in which rules are published in standardised, machine-readable formats that can be automatically ingested and operationalised. That vision, often framed as “regulation-as-code” or regulatory digitisation, offers clear benefits, including greater consistency, transparency and a closer link between regulatory intent and industry implementation.
Still, Herard made clear that he sees that future as some way off. He said progress on the regulatory side remains uneven, with many authorities still publishing rules in “text-heavy formats”, including PDFs, guidance documents, speeches and rulebooks with inconsistent structures. Even where taxonomies or data standards exist, he said they are often narrow in scope or slow to develop. In his view, most regulators do not yet have either the infrastructure or the mandate to produce fully structured, machine-readable rules.
He also argued that the challenge does not end there. Even if regulators began publishing machine-readable rules tomorrow, Herard said firms would still need to interpret them through the lens of their own business models, risk appetites and supervisory expectations. As he put it, machine-readable rules “do not eliminate interpretation — they shift where interpretation starts”.
Herard was also cautious about the current role of AI in closing that gap. He said large language models are powerful, but “probabilistic”, which makes them poorly suited to parsing rules that depend on precise exceptions, conditions and firm-specific context. At this stage, he said, they can help accelerate analysis, but they cannot replace structured logic or authoritative interpretive frameworks.
For that reason, Herard argued that regulatory intelligence still has to operate in a world where regulation is “human-designed and human-delivered”. Rather than waiting for fully digital rulebooks to emerge, he said firms need to build systems that can translate regulation as it exists today into something operationally usable, deterministically interpretable and auditable over time.
Swann added that the industry is moving towards machine-readable regulation in practice, but is not there yet.
He said “we’re partway there”, with some regulators beginning to experiment with more structured, digital-first approaches to rulemaking. Even so, Swann noted that most regulation is still published in formats designed for human readers rather than machine ingestion.
In his view, the gap is starting to narrow through AI and natural language processing, which can help translate unstructured rules into more usable intelligence. Still, Swann made clear that full standardisation remains a work in progress.
Why regulatory intelligence is critical
Herard finished by stating that regulatory intelligence is beginning to function less like a supporting capability and more like critical infrastructure for financial institutions.
He argued that this shift is not being driven primarily by the rise of AI, even if AI is intensifying the need for change. Instead, Herard said it reflects a more basic problem in the regulatory environment itself. In his view, the ecosystem is not built for the speed, scale or operational demands of modern financial services.
He said regulators continue to produce rules that are long, complex and often structurally inconsistent across jurisdictions. Even where there has been some pushback against adding further rules in certain markets, Herard suggested the overall framework remains a dense web that firms still have to navigate in order to stay compliant. Publication formats vary widely, he said, digitisation remains uneven, and regulation is still fundamentally designed to be read by people rather than machines.
As a result, Herard argued that firms have little choice but to build the connective layer themselves. He said institutions need infrastructure that can translate regulatory intent into operational interpretation and action, and in that context regulatory intelligence is becoming indispensable.
Herard said regulatory intelligence is now critical infrastructure because the regulatory system has not yet evolved to meet the structural needs of modern financial institutions. Rules remain complex, unstructured and difficult to operationalise consistently, while digitisation and standardisation are still some way from maturity.
For that reason, he said firms cannot afford to wait for a more machine-readable future. They need systems that can bridge the gap between regulatory intent and implementation, handle continuous change, maintain consistency in interpretation and provide auditability over time.
In the end, Herard’s view was that as regulatory complexity rises faster than industry infrastructure can keep up, regulatory intelligence is becoming the backbone that allows firms to operate safely, efficiently and defensibly at scale. As he put it, it is no longer optional, but “the infrastructure that makes everything else possible.”
Why data-sharing infrastructure is key
RelyComply argued that having an AML strategy on paper is no longer enough on its own. In today’s regulatory environment, the company said firms also need a data-sharing infrastructure that can connect financial institutions, regulators, financial intelligence units and law enforcement.
In its view, that kind of infrastructure is important not only for building a stronger compliance culture, but also for making sure suspicious activity alerts can lead to meaningful enforcement action. RelyComply said better-connected data can help turn detection into prosecution and limit the spread of increasingly complex global criminal networks.
The company also stressed the urgency of the problem. Criminals do not pause, it said, and are constantly looking for weaknesses in AML controls that allow them to avoid detection and keep funding activities ranging from trafficking to terrorist financing. Yet many firms still rely on manual document checks during onboarding and slow reviews of payment flows, with processes that can take days or even weeks to complete.
RelyComply warned that where cross-referenced data is not readily available, teams end up stuck doing repetitive work, onboarding experiences deteriorate and the risk of human error rises. That, it said, can increase false positives and create costs that become harder to sustain as customer volumes and digital transaction levels continue to grow.
The company also argued that rule-based systems alone are no longer sufficient. In its view, AML platforms need to reflect regulatory nuance, which means combining explainable AI with human oversight. RelyComply said firms need models that can screen documents against watchlists and detect anomalous behaviour across large datasets, while still relying on analysts to approve high-stakes decisions.
At the same time, it said firms must be able to justify how investigations are carried out, particularly while machine-readable regulation is still developing. For that reason, RelyComply argued that data quality should be a priority now, so that high-risk indicators can be audited, reviewed and shared in real time, giving both institutions and regulators a clearer view of genuine financial crime activity.
Ultimately, RelyComply said an effective anti-financial crime response depends on the strength of customer and transactional data, as well as the automated systems that can capture, store and track it across each stage of risk management. As criminal threats continue to evolve, the company argued, AML infrastructure must evolve with them, placing greater emphasis on centralised data, stronger investigative capability and platforms that can scale with both regulatory change and operational demand. In that sense, it said next-generation AML infrastructure can become both a competitive advantage for firms and a foundation for a stronger anti-financial crime ecosystem.
Copyright © 2026 RegTech Analyst