In 2025, synthetic identity fraud has surged to become one of the fastest-growing financial crimes, with lenders facing an estimated $3.3bn in exposure from suspected synthetic identities in the first half of the year alone. Fueled by GenAI, which enables fraudsters to craft convincing hybrid identities from real and fabricated data, this threat is outpacing traditional defenses and escalating global losses into the billions. The pressing question: Can overburdened risk teams evolve quickly enough to stem the tide?
In the view of Fraser Mitchell, chief technology officer at SmartSearch, the financial and regulatory landscape has long been a game of cat and mouse, with risk and compliance teams often working tirelessly to stay one step ahead of bad actors.
He added, “But in the age of generative AI, the rules of engagement have changed. The rise of sophisticated tools has given criminals the power to create hyper-realistic ‘synthetic identities’ and deepfakes, posing a new and complex threat that requires an equally advanced response.”
For Mitchell, the uncomfortable truth is that traditional onboarding processes – which rely on document review and a limited set of data checks – are proving to be no match for the new generation of AI-driven fraud.
“Synthetic identities are not stolen from a single, real person. Instead, they are meticulously fabricated using a blend of real and fake data points – a stolen driving licence with a fake address and an AI-generated photograph. These ‘identities’ can be used to open accounts, apply for credit, and perpetrate fraud, all without a single red flag on legacy systems,” he said.
An even more insidious threat, for the SmartSearch CTO, is deepfakes. Mitchell notes the technology has advanced to the point where a criminal can create convincing video or audio of a person, mimicking their appearance, voice and even mannerisms.
“This allows them to bypass biometric liveness checks that are not designed to detect such sophisticated attacks, impersonating a legitimate person in a video-based KYC check,” he said.
Asked whether today’s tools can possibly be strong enough to detect this new wave of AI-driven fraud, Mitchell answers with a resounding yes – but only if they evolve to meet the threat.
He said, “Legacy systems are no longer sufficient; firms must adopt a multi-layered approach that combines multiple technologies. Leading solutions, such as those from SmartSearch, are already on the front lines of this battle.”
Recently, SmartSearch partnered with Daon to integrate its AI-powered biometric identity technology directly into the SmartDoc solution. This integration, Mitchell explained, is designed to enhance the identity verification experience for customers by enabling faster onboarding with less manual intervention. Daon’s technology provides a more intuitive user interface for ID checks, which guides users on how to capture clear and accurate images of their documents and selfies. This helps detect and flag issues like glare or blur, thereby reducing user error and customer drop-off rates.
He remarked, “This enhanced SmartDoc solution goes far beyond a simple photo match. It uses a combination of machine learning and human expertise. An initial check uses Optical Character Recognition (OCR) and facial recognition to verify documents like passports and driving licenses, followed by passive liveness detection to ensure the user is a real person and not a photograph or video.
“Any documents flagged are then reviewed by border security-trained experts who can spot subtle signs of forgery that automated systems might miss. SmartSearch’s unique triple bureau approach, leveraging data from Equifax, Experian, and TransUnion, provides an unparalleled level of accuracy in electronic identity verification, making it far more difficult for synthetic identities to be established.”
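To make that layered approach concrete, the sketch below orders the checks Mitchell describes as successive gates: OCR, face matching, passive liveness, then escalation to human review. It is a minimal illustration in Python; the function names, thresholds and stubbed results are assumptions for the example, not SmartSearch’s or Daon’s actual APIs.

```python
from dataclasses import dataclass

@dataclass
class VerificationResult:
    passed: bool
    reason: str

# Placeholder layers: a real deployment would call vendor SDKs here.
def ocr_extract(document_image: bytes) -> dict:
    """Extract document fields (name, number, expiry) via OCR. Stubbed."""
    return {"name": "A N OTHER", "flagged": False}

def face_match(document_image: bytes, selfie_image: bytes) -> float:
    """Similarity between the document photo and the selfie. Stubbed."""
    return 0.95

def passive_liveness_score(selfie_image: bytes) -> float:
    """Likelihood the selfie is a live capture, not a photo or replay. Stubbed."""
    return 0.90

def verify_identity(document_image: bytes, selfie_image: bytes) -> VerificationResult:
    # Layer 1: read and sanity-check the document.
    fields = ocr_extract(document_image)
    if not fields:
        return VerificationResult(False, "unreadable document")
    # Layer 2: match the document photo against the live selfie.
    if face_match(document_image, selfie_image) < 0.90:
        return VerificationResult(False, "face mismatch")
    # Layer 3: passive liveness to rule out photos and replayed video.
    if passive_liveness_score(selfie_image) < 0.80:
        return VerificationResult(False, "liveness check failed")
    # Layer 4: anything flagged goes to trained human reviewers.
    if fields.get("flagged"):
        return VerificationResult(False, "escalated to human review")
    return VerificationResult(True, "verified")

print(verify_identity(b"doc-bytes", b"selfie-bytes"))
```

The point of the ordering is that each layer only sees candidates that survived the previous one, so the expensive human step handles a small, pre-filtered queue.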
For Mitchell, adversarial AI is not just a tool for criminals; it is a critical weapon for defence. By deliberately trying to trick a system with false data during training, developers can proactively identify vulnerabilities in their fraud detection models and strengthen them against future attacks. This continuous process of testing and hardening, Mitchell believes, ensures that identity verification systems remain robust and adaptable to new and evolving threats.
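As a rough illustration of that hardening loop, the sketch below trains a toy classifier on synthetic-identity features, perturbs known-fraud records toward the legitimate profile until some evade detection, then retrains on the discovered evasions. The features, distributions and thresholds are invented for the example; no vendor’s real training process is implied.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy features: [address_age_years, credit_file_depth, data_consistency_score]
X_legit = rng.normal([8.0, 10.0, 0.9], 0.5, size=(500, 3))
X_fraud = rng.normal([0.5, 1.0, 0.4], 0.5, size=(500, 3))
X = np.vstack([X_legit, X_fraud])
y = np.array([0] * 500 + [1] * 500)  # 1 = synthetic identity

model = LogisticRegression(max_iter=1000).fit(X, y)

# Adversarial step: nudge fraud samples toward the legitimate profile and
# keep the ones the current model now misclassifies as genuine.
perturbed = X_fraud + rng.normal([4.0, 5.0, 0.3], 0.5, size=X_fraud.shape)
evasions = perturbed[model.predict(perturbed) == 0]
print(f"{len(evasions)} perturbed fraud samples evaded detection")

# Hardening step: retrain with the evasions correctly labelled as fraud.
if len(evasions):
    X_hard = np.vstack([X, evasions])
    y_hard = np.concatenate([y, np.ones(len(evasions), dtype=int)])
    model = LogisticRegression(max_iter=1000).fit(X_hard, y_hard)
    print(f"{(model.predict(evasions) == 1).mean():.0%} of evasions now caught")
```

In practice the ‘attacker’ is a generative model or red team rather than Gaussian noise, but the test-then-retrain cycle is the same idea.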
With this in mind, Mitchell believes that regulators are beginning to catch up with the pace of technological change. “The UK’s Online Safety Act, for example, is a significant step forward. While its primary focus is on protecting children from harmful content online, it signals a broader regulatory intent to hold platforms accountable for the content they host and the identities of their users,” he stated.
However, the challenge, Mitchell remarks, is that AI has made it ‘frighteningly’ easy for bad actors to bypass these measures.
He said, “Platforms now must grapple with high-quality, AI-generated fake IDs that are nearly indistinguishable from real ones, as well as the widespread use of Virtual Private Networks (VPNs). The use of a VPN can make a user’s location appear to be in a different country, allowing them to circumvent regional age-gating and identity verification requirements. This highlights a critical flaw in regulatory frameworks that rely on geography and traditional ID verification methods.”
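The flaw Mitchell describes is easy to see in a toy geolocation check, sketched below: once a VPN exit node places the user in the ‘right’ country, an IP-versus-document comparison passes unless the address range is already known to be a VPN. The range list and helper are invented placeholders, not a real VPN-detection service.

```python
# Placeholder prefixes standing in for a commercial VPN/proxy range feed.
KNOWN_VPN_PREFIXES = ("203.0.113.", "198.51.100.")

def is_known_vpn(ip: str) -> bool:
    return ip.startswith(KNOWN_VPN_PREFIXES)

def geo_consistency_check(ip_country: str, document_country: str, ip: str) -> str:
    if is_known_vpn(ip):
        return "step-up verification"  # IP geolocation cannot be trusted
    if ip_country != document_country:
        return "flag for review"
    return "pass"

# A known VPN range triggers step-up verification...
print(geo_consistency_check("GB", "GB", "203.0.113.7"))
# ...but an unlisted VPN exit node in the 'right' country simply passes.
print(geo_consistency_check("GB", "GB", "192.0.2.55"))
```

The weakness is not the code but the premise: a check built on IP geography can only ever be as reliable as the assumption that the IP reflects the user’s true location.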
Mitchell concluded, “Despite these hurdles, the industry is responding with new initiatives and partnerships, moving towards a consensus that a combination of robust technology, layered security, and ongoing vigilance is the only way to protect client data, our businesses, and our teams from these new and sophisticated threats.”
A blended persona
Jason Lee, senior director, industry practice lead at Moody’s, outlined that synthetic identities are built by blending real and fabricated information to create a new, seemingly credible persona.
He stated, “Fraudsters employ advanced ‘backstopping’ techniques to construct detailed backstories, supported by false digital footprints across social media, public records, and even fabricated historical data. These measures can make synthetic IDs appear legitimate to both automated and manual checks.”
The rise of deepfake video and audio technology has further complicated detection, said Lee. “Generative AI now enables the creation of hyper-realistic images, voices, and even ‘liveness’ test responses, giving bad actors the tools to bypass biometric verification. Combined with the fact that many onboarding processes still rely heavily on matching discrete data points, synthetic identities can evade detection and gain fraudulent access.”
On the question of whether today’s tools are strong enough to detect AI-driven fraud, Lee believes that whilst current onboarding and due diligence tools have evolved with automation and AI, they are not foolproof against today’s AI-driven threats.
“Machine-led processes can struggle to distinguish between genuine and artificially generated identities, particularly when fraudsters exploit global data gaps and regulatory inconsistencies. Detection technology has made significant advancements, but it still lacks the nuanced sensitivity that humans possess, sometimes making judgments that are too binary,” said Lee.
He finished by remarking that AI-powered analytics can detect subtle ‘tells’ in digital footprints, but results are only as strong as the underlying datasets. Without unified, global data coverage and a hybrid approach that combines machine efficiency with human intuition, organisations risk missing nuanced red flags, he said.
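One way to read that hybrid prescription in code: let a model score the digital-footprint signals, auto-decide only the confident extremes, and route the ambiguous middle band to a human analyst instead of forcing a binary call. The signals, weights and thresholds below are illustrative assumptions, not Moody’s methodology.

```python
def footprint_risk_score(signals: dict) -> float:
    """Naive weighted score over illustrative digital-footprint 'tells'."""
    weights = {
        "email_age_days": -0.002,            # long-lived address lowers risk
        "social_profiles_found": -0.1,       # organic footprint lowers risk
        "records_first_seen_together": 0.3,  # data points that only co-occur recently
    }
    raw = 0.5 + sum(w * signals.get(k, 0) for k, w in weights.items())
    return min(1.0, max(0.0, raw))

def decide(signals: dict, low: float = 0.3, high: float = 0.7) -> str:
    score = footprint_risk_score(signals)
    if score < low:
        return f"auto-approve ({score:.2f})"
    if score > high:
        return f"auto-decline ({score:.2f})"
    return f"human review ({score:.2f})"  # the nuanced middle band

print(decide({"email_age_days": 3000, "social_profiles_found": 4}))       # auto-approve
print(decide({"email_age_days": 200, "records_first_seen_together": 1}))  # human review
```

The middle band is where Lee’s point about binary judgments bites: rather than tuning a single cut-off, the ambiguous cases are deliberately handed to human intuition.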
AI-dominated discussions
For Saifr strategic advisor Jon Elvin, the conversation among today’s crime-fighting community is dominated by discussion of AI, with hopeful projections that it will make a significant impact in thwarting fraud and financial crime.
Elvin added, “While there is a positive outlook and some examples of gains, the reality is that individual bad actors, fraud networks, and organized criminal entities also use AI as an effective tool to professionalize and enhance their tradecraft. This manifests in several ways, and recent industry focus groups predict ongoing major concerns and challenges across the risk spectrum related to countermeasures involving deepfakes, synthetic identities, fraudulent documents, facial recognition and interactive AI avatars.”
Elvin said he expected the challenge – and the cat-and-mouse moves and countermoves – would continue as it always has. “When AI is used effectively by law enforcement and compliance professionals, it can help reduce the breadth, depth and duration of harmful exposures and close windows of vulnerability,” he said.
Despite this, Elvin stressed that AI is also used for nefarious acts. “Bad actors benefit from the speed and adaptability of criminal tradecraft, which always has first-mover advantage in finding weaknesses and gaps in control frameworks and technology vulnerabilities, and in schemes capitalizing on human and victim emotions, particularly mass-marketing fraud in financial communication channels.”
Elvin concluded that the crime-fighting community, including public and private sectors and regulatory entities, is routinely posting alerts and warnings of these risks.
He added, “We have noted much stronger collaboration on emerging threats and the right balance of controls and safeguards. Perhaps one of the best keys to limit this is promoting awareness to consumers and sharing information between investigatory agencies.”