Launched in 2021, Massachusetts-based Rhino Federated Computing is focused on one of AI’s biggest challenges: activating siloed data through federated computing.
Over the past three decades, financial institutions have invested heavily in making data available to enable better decisions for the business. Now the focus is on enabling advanced AI models and agents on top of that data. While the move toward data centralization has delivered enormous value, it was always doomed to hit a wall: some data simply cannot be moved due to regulatory, sovereignty, or intellectual property concerns. That’s where Rhino comes in.
Rhino’s Federated Computing Platform is an AI collaboration stack that sits inside and across enterprise firewalls, enabling computing resources, data preparation and discoverability, and model development, deployment, and monitoring within secure, privacy-enhanced environments. With Rhino’s flexible architecture, companies can securely deploy data pipelines, queries, models, agents, and third-party applications wherever data lives.
How FIs can collaborate on AML efforts
When asked how financial institutions can collaborate on anti-money laundering (AML) efforts without sharing sensitive customer data, Dr. Ittai Dayan, CEO of Rhino Federated Computing, said financial crime does not respect institutional boundaries, but data privacy rules must.
He said, “Criminal networks move funds across institutions and across borders, knowing that no single bank can see the full picture. Most institutions are fighting financial crime in isolation, and that isolation is a strategic advantage for criminals. Current estimates suggest less than one percent of illicit flows are intercepted. That number reflects not a lack of data, but a failure to connect data that already exists to form a broader context.”
For Dr. Dayan, Federated Computing changes the equation for cross-institutional collaboration. Instead of pooling transaction records in a central repository – something that creates both a high-value breach target and a major compliance risk – banks keep data exactly where it lives and bring the analytical workload to it.
“Models train locally on each bank’s own data. Insights aggregate across the network. No raw transaction records, no customer PII, and no intellectual property ever leaves the bank’s environment. We like to think of it as ‘compliance-by-architecture,’” said Dayan.
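The pattern Dayan describes is commonly known as federated averaging: each participant trains on its own data and only model updates are aggregated. A minimal sketch, using an invented toy linear model and hypothetical per-bank data (not Rhino's actual protocol), might look like this:

```python
# Minimal sketch of federated averaging: each bank trains on its own data
# and shares only model weights, never raw records. All names and data
# below are hypothetical illustrations.

def local_update(weights, local_data, lr=0.1):
    """One gradient step of a toy linear model y ~ w*x, run in-house."""
    grad = 0.0
    for x, y in local_data:
        pred = weights * x
        grad += 2 * (pred - y) * x  # dL/dw for squared error
    grad /= len(local_data)
    return weights - lr * grad  # updated weights stay local until aggregation

def federated_round(global_w, banks):
    """Aggregate only the banks' weight updates; raw transactions never move."""
    local_ws = [local_update(global_w, data) for data in banks]
    return sum(local_ws) / len(local_ws)  # simple unweighted average

# Three hypothetical banks, each holding its own (feature, label) pairs.
banks = [
    [(1.0, 2.0), (2.0, 4.1)],
    [(1.5, 3.0), (3.0, 6.2)],
    [(0.5, 1.0), (2.5, 5.0)],
]

w = 0.0
for _ in range(50):
    w = federated_round(w, banks)
# w converges near the shared slope (~2) without any bank's records moving
```

The coordinator only ever sees averaged weights, which is what allows each participant to retain sovereignty over its own data.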
Rhino is now testing this model at scale with SWIFT through a proof of concept involving multiple banks in a consortium.
One project is focused on cross-border payment fraud detection models that continuously improve by running locally on each bank’s data, allowing every participant to retain full sovereignty over its information. “What travels across the network is learning, not records,” he said.
Learning from SWIFT
With Rhino engaged in a large-scale project with SWIFT, the company has been able to learn more about the real barriers that exist to multi-bank AI collaboration.
For Rhino, the technology was the straightforward part. Dayan remarked that the harder problems were trust and governance – and the question every compliance officer asked before the word ‘models’ ever crossed anyone’s lips: how do I know my data never leaves?
“That question sounds simple, but it has deep implications for how you architect everything,” said Dayan. “Not just the data pipeline, but the audit trail, the access controls, and the contractual framework between participants. Banks are familiar with consortium arrangements on the payment and settlement side of the business, but a shared AI system around AML and Know Your Customer (KYC) is a different conversation. The liability questions are different. The regulatory questions are different. And the instinct to protect intellectual property, which includes transaction patterns and fraud typologies, is strong.”
Overall, what Rhino learned from the collaboration is that you have to solve for institutional trust while you solve for model accuracy.
Dayan concluded, “The federated architecture addresses the technical side — data stays in place by design, not by policy. But participants also need governance structures that give them visibility into how the shared model is being used, what it has learned, and how updates are applied to their local systems.”
Avoiding data exposure risks
Once shared models are running, a common question is how banks can contribute feedback – confirming fraud and flagging false positives – without that feedback itself becoming a data exposure risk.
In the opinion of Dayan, this is one of the ‘subtler’ compliance questions in federated AML, and it matters because a shared model compounds in value over time.
He explained, “It improves as more institutions contribute their investigative judgments to it. But those judgments are sensitive — a SAR disposition, a mule account confirmation, a false positive determination — and they cannot travel in plain text across a network.”
The feedback mechanism, Dayan stated, has to carry the same privacy guarantees as the initial model training. When a compliance officer closes an alert, that label stays within the bank’s environment. What gets shared, he explained, is not the disposition but the mathematical update the disposition generates in the model.
“That model gradient represents what the model learned from the feedback, but it has been abstracted to the point where the underlying case cannot be reconstructed from it,” the Rhino CEO remarked.
“We layer additional protections on top of that,” said Dayan. During aggregation, Rhino uses technologies such as trusted execution environments and secure multi-party computation so that the network coordinator can benefit from overall improvement across participants without attributing any specific update to any specific institution. When updated model parameters are distributed back to individual banks, the company uses differential privacy, adding calibrated mathematical noise that prevents anyone from reconstructing the model’s inputs, even with advanced techniques. Rhino also takes advantage of confidential computing to ensure the bank’s data always remains confidential, and federated homomorphic encryption can add further protection for the model’s intellectual property.
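The differential privacy step described here typically combines update clipping with calibrated noise. A minimal sketch under those standard assumptions (the clip norm, noise multiplier, and seed are illustrative, not Rhino's actual parameters):

```python
# Sketch of a differentially private aggregation step: clip each bank's
# update to bound its influence, average, then add Gaussian noise
# calibrated to the clipping bound. Parameters are illustrative only.
import random

def clip(update, max_norm=1.0):
    """Bound any single participant's influence on the aggregate."""
    norm = abs(update)
    return update if norm <= max_norm else update * (max_norm / norm)

def dp_aggregate(updates, max_norm=1.0, noise_multiplier=0.5, seed=42):
    """Average clipped updates and add noise so that no single bank's
    contribution can be reconstructed from the released aggregate."""
    rng = random.Random(seed)
    clipped = [clip(u, max_norm) for u in updates]
    avg = sum(clipped) / len(clipped)
    sigma = noise_multiplier * max_norm / len(clipped)
    return avg + rng.gauss(0.0, sigma)

# Hypothetical weight updates from three banks; the outlier gets clipped.
updates = [0.12, -0.05, 3.0]
noisy_avg = dp_aggregate(updates)
```

Only `noisy_avg` leaves the aggregation environment, which is why an individual SAR disposition cannot be recovered from what travels across the network.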
“The outcome for a compliance team is that their investigative judgments improve the network without ever constituting a disclosure under applicable data-sharing restrictions,” said Dayan.
Demonstrating traceability and defensibility
Dayan was also posed a critical question about federated models: if a compliance team relies on a shared federated model to support SAR decisions, how can it demonstrate to an examiner that those recommendations are traceable and defensible?
Dayan explained that this is the question that has to be answered before any chief compliance officer will operationalize a shared model – and it is the right question to be asking.
He said, “The federated architecture works essentially the same as traditional centralized AI. Because processing happens locally at each institution, the audit trail lives within your own environment. Every input the model processed, every recommendation it produced, every model version deployed — all of that happens inside your virtual four walls, where your existing data and model governance are enforced. You are not dependent on a third-party vendor to produce audit evidence on your behalf.”
The Rhino CEO made clear that in practice, a compliance officer reviewing a flagged transaction can trace the recommendation back through the local model’s logic, document what features drove the output, and include that in the case file. The aggregated learning from the consortium is documented as a governance-approved enhancement to the local model, with version history and update logs maintained within the institution’s own environment.
Dayan also pointed to the model risk management frameworks across jurisdictions – SR 11-7 in the US, the EU AI Act, SS1/23 in the UK, and the EBA’s guidelines across the EU – and stated that this amounts to a clear model risk management story.
“You are not asking an examiner to trust a black box sitting outside your perimeter. You are showing them a governed, auditable system that benefits from consortium intelligence,” he said.
The future of agentic AI
What is the future of agentic AI in Alert-to-SAR workflows? In the view of Dayan, agentic AI is going to completely reshape how institutions handle the Alert-to-SAR process end to end over the next few years.
He commented, “Right now, most of an analyst’s time gets burned on false positive triage and data gathering rather than exercising their professional judgment. AI agents can automate the routine work: culling false positives, pulling case evidence across systems, drafting narratives, and routing completed packages for review. This will free analysts to do what they are best at: deciding whether a case clears the SAR threshold and what it means.”
Once banks get over the initial adoption hurdle, Dayan believes they will find building agents to be ‘wonderfully easy’ since they have strong existing practices to inform the design of agentic systems.
He remarked, “The fun part is that the tools themselves help you create stronger tools. The hard part is architecture for scale: getting those agents to work across disparate systems. Alert-to-SAR success depends on reasoning across siloed data: transaction records in one system, customer identity in another, watchlists in a third, correspondent bank data in a fourth. Each silo has its own owners, data residency rules, and PII constraints.”
Dayan continued, explaining how centralizing that data to feed an AI system creates regulatory and governance risk that many institutions are not willing to take. “Many banks simply don’t have the IT resources needed for big data centralization projects like this,” he said.
For the Rhino CEO, this is exactly the challenge a federated computing architecture alleviates.
“It’s a lot easier in most cases to install a Rhino ‘client’ where the data lives than it is to nail a massive data migration project. Agents can query and reason across distributed data silos without moving the underlying data or sharing sensitive data with unauthorized people. Meanwhile, agent performance and updates can be governed across data silos so that you see a continuous improvement loop. The agent stays at the data, not the other way around. Without these capabilities, agents will not be able to coordinate, and you will be stuck in the same shuffle between systems, doing the ‘swivel chair’ routine of pasting agentic outputs from one screen to another, just like you do today.”
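The "agent stays at the data" pattern can be sketched as a federated query: each silo evaluates the question locally and returns only an aggregate answer. The silo contents, account IDs, and threshold below are invented for illustration; this is not Rhino's API.

```python
# Hypothetical sketch of an agent querying across data silos: each silo
# evaluates the check locally and returns only a flag and a count, so no
# record-level data crosses the silo boundary. All data here is invented.

def silo_query(records, account_id, threshold=10_000):
    """Runs inside the silo; only a boolean and a count leave it."""
    hits = [r for r in records
            if r["account"] == account_id and r["amount"] > threshold]
    return {"flagged": bool(hits), "hit_count": len(hits)}

def federated_check(silos, account_id):
    """The coordinating agent combines answers without seeing raw records."""
    answers = [silo_query(s, account_id) for s in silos]
    return {
        "flagged_in": sum(a["flagged"] for a in answers),
        "total_hits": sum(a["hit_count"] for a in answers),
    }

# Three hypothetical silos (e.g. transactions, KYC, correspondent data).
silos = [
    [{"account": "A1", "amount": 15_000}, {"account": "B2", "amount": 200}],
    [{"account": "A1", "amount": 50}],
    [{"account": "A1", "amount": 12_500}, {"account": "A1", "amount": 9_000}],
]
result = federated_check(silos, "A1")
print(result)  # {'flagged_in': 2, 'total_hits': 2}
```

The agent learns that account A1 tripped the threshold in two silos, which is the kind of cross-silo reasoning the quote describes, without any transaction record leaving its home system.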
A message for regulators
Dayan was also asked how regulators should think about their role as cross-bank AI consortia mature.
He stated that regulatory frameworks for shared AI systems in financial services are still forming, and the right answer will likely vary across different markets.
He said, “What has been established – the FCA, the Federal Reserve, and the EBA have each been explicit on this – is that existing accountability principles apply. Institutions remain responsible for AI-informed decisions under frameworks already in force, and AI-specific regulation isn’t required to drive enforcement.”
Dayan hopes to see the evolution of the regulatory oversight model itself, stating, “Right now, regulators oversee institutions individually, with limited visibility into the shared intelligence layer that increasingly connects them. As federated consortia scale, that gap becomes a systemic risk – not because the technology is opaque, but because the governance frameworks are lagging.”
Dayan advocated for enabling regulators to participate in these systems, and not just observe them from the outside.
“A regulatory authority could, in principle, be a node in a federated consortium, receiving aggregate insights and pattern-level intelligence without accessing raw transaction data from any individual institution. That would give the regulators the network-level visibility they need to identify emerging typologies and systemic risk, without requiring any institution to hand over customer records or compromise data sovereignty,” he stated.
Dayan made it clear that the market is not there yet in terms of regulatory appetite or legal frameworks, but argued that the technical foundations already exist.
He concluded, “Institutions that build federated infrastructure now will be positioned to extend access to regulators as those frameworks evolve, rather than retrofitting a centralized architecture that was never designed for that kind of governed, privacy-preserving transparency.”
Copyright © 2026 RegTech Analyst