Generative AI set to supercharge scams in 2026


Generative AI is expected to help drive a sharp rise in fraud and impersonation attacks in 2026, according to new research published by the World Economic Forum (WEF), as cybersecurity leaders warn that convincing deepfakes and localised scams are becoming easier to produce at scale.

The report found cyber-enabled fraud has become pervasive enough that 73% of surveyed CEOs said they, or someone in their professional or home network, had been affected in 2025.

The most commonly observed attacks were phishing, vishing and smishing, cited by 62% of respondents, while 37% said they had encountered invoice or payment fraud and 32% reported identity theft.

The WEF said CEO concerns have shifted over the past year, with AI vulnerabilities and cyber-enabled fraud now taking priority over ransomware, which had previously dominated board-level worries.

Konstantin Levinzon, co-founder of Planet VPN, argued that consumers are increasingly exposed as attackers use generative AI to make scams more credible and cheaper to run. "As businesses face challenges in protecting their networks, individual consumers are also seeing an increase in personal cybersecurity risks," he said. "Recent developments in generative AI are lowering the barriers to executing all kinds of attacks, while at the same time increasing their sophistication and making them appear more credible."

US data underscores the scale of the problem. The Federal Trade Commission (FTC) reported that consumers said they lost more than $12.5bn to fraud in 2024, a 25% year-on-year increase, and Levinzon expects losses could climb further in 2026 as criminals adopt AI-enabled techniques.

Separate research from Experian suggests anxiety is rising alongside the threat, with 68% of consumers identifying identity theft as their top concern and 61% worried about stolen credit card data.

The WEF also warned that generative AI can amplify digital safety risks for groups including children and women, particularly through impersonation and synthetic image abuse, while fraudsters can now translate and localise social engineering campaigns far more effectively. "Criminal networks that previously focused on a limited range of languages can now target populations all over the world with local languages," Levinzon added. "This expansion also speeds up the spread of AI-driven disinformation and makes it harder for platforms and regulators to protect users from coordinated manipulation."

Businesses, meanwhile, are contending with persistent cybersecurity skills shortages. The report said 33% of firms in Europe and Central Asia and 35% in North America reported needing stronger expertise, with shortages rising as high as 70% in parts of Latin America and Africa. Levinzon said AI tools may ease the pressure if deployed well, but warned that poor implementation can introduce fresh risks, including misconfiguration, bias, over-reliance on automation and exposure to adversarial manipulation.

"The most effective remedy for both consumers and businesses is education," Levinzon concluded. "Well-informed employees and users are less likely to fall for scams, more likely to use unique passwords, and more likely to enable multi-factor authentication. Additionally, using a VPN should be part of daily internet hygiene."


Copyright © 2026 RegTech Analyst
