How investment advisers should manage AI risk

Artificial intelligence is reshaping financial services at pace. A June–July 2025 McKinsey survey of nearly 2,000 companies across 105 countries found that 88% were piloting AI in at least one business function, up from 72% in 2024 and 55% in 2023.

Meanwhile, 79% reported using generative AI — a sharp rise from 33% just two years prior. Despite the momentum, the majority of organisations remain in an experimentation phase, with nearly two-thirds yet to begin scaling AI enterprise-wide, noted ACA Group in a recent post.

The wealth management and investment advisory sector has been more measured in its response. Robo-advisers have led the charge, deploying AI to build and manage low-cost, personalised portfolios aligned to client goals and risk profiles. However, most registered investment advisers (RIAs) have moved deliberately, carefully weighing how AI fits within their fiduciary and regulatory obligations before committing.

Where advisers stand today

The 2025 Investment Management Compliance Testing (IMCT) Survey of 577 investment advisers provides a clear snapshot of the industry’s posture. Some 40% have adopted AI for internal use only — covering areas such as investment research, portfolio testing and monitoring, and IT support — while 25% are developing AI use cases without having deployed tools yet.

A further 18% allow staff to use AI informally, 8% have banned or restricted use outright, and just 4% are using AI externally for basic client interactions such as chatbots. Only 1% have deployed AI to support complex client interactions including investment advice.

More recent data from ACA’s 2025 AI Benchmarking Report suggests the picture is shifting. Internal AI usage has climbed from 37% in 2024 to 60% in 2025, while combined internal and external use has grown from 8% to 11%.

The share of firms still in exploratory mode has fallen from 38% to 23%, and the proportion maintaining an outright ban has dropped from 15% to just 4%. The direction of travel is clear: compliance teams and business leaders are increasingly alive to both the opportunity AI presents and the urgency of putting proper governance in place.

Why adoption remains gradual

The slower pace of AI uptake among US investment advisers is both understandable and appropriate. Operating under a fiduciary duty to act in clients’ best interests, advisers must evaluate new technology rigorously before deployment.

A core consideration is the cost-benefit analysis. Advisers must assess whether AI tools genuinely improve client outcomes relative to their risks and costs, including tool accuracy, potential performance implications, and suitability across different client segments. Equally important is how AI integrates into existing supervisory and compliance frameworks. Generative AI and predictive technologies require thoughtful incorporation into supervisory procedures, compliance testing, oversight workflows, and recordkeeping and disclosure requirements. Firms must understand how the technology functions and its limitations — and have credible means of monitoring it.

Conflicts of interest represent another concern. AI tools may carry embedded biases or commercial incentives that could inadvertently favour the adviser over the client. Firms need to scrutinise tools for data conflicts, algorithmic bias, vendor incentives, and any misalignment between adviser and client objectives. Finally, the pace of AI development itself creates risk. Rapid evolution poses operational, cybersecurity, and compliance challenges, making it essential that governance frameworks can keep up with a landscape where tools, rules, and risks are in constant flux.

The road ahead

AI is no longer an emerging trend in financial services — it is fast becoming an operational reality. While investment advisers have historically been more cautious technology adopters, recent survey data points to a clear acceleration. More firms are now actively exploring how AI can support research, portfolio management, internal operations, and client engagement.

But opportunity carries responsibility. Firms must balance innovation against their compliance obligations, ensuring AI deployment aligns with fiduciary duties, regulatory expectations, and sound governance. Those who approach AI thoughtfully — anchored in strong risk management and compliance oversight — will be best placed to realise its potential without compromising the trust clients place in them.

Copyright © 2026 RegTech Analyst
