59% of employees use unapproved AI tools at work

A new report by Cybernews has revealed that a majority of employees in the U.S. are using AI tools that have not been approved by their employers — and many are sharing sensitive company information with them.

The findings highlight the growing risks associated with “shadow AI”, where workers adopt unvetted tools to enhance productivity, often without understanding the security implications.

According to the study, 59% of employees admitted to using AI tools not authorised by their organisations. Of these, 75% said they had shared potentially sensitive data, including customer information, internal documents, and employee details. While 89% of respondents acknowledged being aware of the risks linked to AI, this awareness has not deterred the widespread use of such tools.

Executives and senior managers were found to be among the biggest offenders, with 93% admitting to using unapproved AI tools in their daily work. This trend suggests that those responsible for setting cybersecurity standards are, paradoxically, also the most likely to break them.

Cybernews security researcher Mantas Sabeckis said, “If employees use unapproved AI tools for work, there’s no way to know what kind of information is shared with them. Since tools like ChatGPT feel like you’re chatting with a friend, people forget that this data is actually shared with the company behind the chatbot. As it turns out, many managers quietly give a thumbs-up to using these tools, even if they’re not officially approved. That creates a gray zone where employees feel encouraged to use AI, but companies lose oversight of how and where sensitive information is being shared.”

The survey also uncovered that nearly a quarter (23%) of employers lack any official policy governing AI use in the workplace, allowing shadow AI to flourish unchecked. IBM data referenced in the report shows that unauthorised AI usage can increase the average cost of a data breach by $670,000 — a figure that underscores the financial danger of this growing problem.

Žilvinas Girėnas, head of product at nexos.ai, warned, “When employees paste sensitive data into unapproved AI tools, there’s no guarantee of where that data will end up. It might be stored, used to train someone else’s model, exposed in logs, or even sold to third parties. That means customer details, contracts, or internal documents can quietly leak outside the company without anyone noticing.”

Girėnas added, “Once sensitive data enters an unsecured AI tool, you lose control. It can be stored, reused, or exposed in ways you’ll never know about. That’s why companies need secure, approved tools to keep critical information protected and traceable.”

The report found that while 57% of those using unapproved tools said they would stop if a data breach occurred, experts believe this reactive stance is dangerous. “While awareness of the risks of irresponsible AI use does exist, employees still need more knowledge. It would be a shame if the only actual way to stop employees from using unapproved AI tools at work were an actual data breach,” said Sabeckis. “For many companies, even a single data breach can be impossible to recover from.”

Cybernews concluded that the rise of shadow AI is a sign of deeper organisational issues. Employees often turn to unapproved tools because official alternatives do not meet their needs. Only one-third of employees using company-approved tools said they were satisfied with them.

Sabeckis said, “It’s time for companies to stop pretending they can keep up with the market without using AI tools. Those times have passed. Now, if your company isn’t using AI, it’s already behind the competition. Of course, this doesn’t mean employers should blindly approve all tools for the sake of efficiency. But it does mean that companies should look into ways to incorporate AI into their processes securely, efficiently, and responsibly.”

Cybernews warns that without clear policies and leadership, shadow AI will continue to expand, posing a serious threat to business data security.
