The debate over whether enterprises should buy or build AI capabilities took centre stage at the 2025 Central Park AI Forum, hosted by Norm AI.
Across multiple sessions, leaders explored how talent constraints, long-term ownership and regulatory risk are reshaping enterprise AI strategy. A fireside chat between Henry Moniz, chief compliance officer at Meta, and John Nay, founder and CEO of Norm AI, set the tone, challenging the assumption that building internally always delivers control or competitive advantage.
One recurring theme was the gap between building software and sustaining it. Prototypes are relatively easy to assemble, particularly with modern AI tooling, but maintaining production-grade systems that can operate in mission-critical environments is a different challenge entirely. This distinction is especially important in legal and regulatory use cases, where systems must be continuously updated, audited and defended over time. “If you’re an engineer, you might be onto the next sexy thing you want to build. You might not necessarily be pushing […] updates and improving. It’s not as fun to maintain as it is to build.” — Henry Moniz, CPAIF 2025.
That reality feeds directly into how enterprises should think about incentives and talent allocation. Engineering capacity capable of delivering reliable, compliant, production-ready systems is scarce. Organisations must decide whether that resource is best spent on initiatives that directly differentiate the business or on internal tools that, while necessary, rarely move the revenue needle. “There’s a cost benefit, right? Like if you have engineers, we have a lot of engineers. All these companies have a lot of engineers. But do you want them working on products that are accretive and commercial and they can really build the bottom line?” — Henry Moniz, CPAIF 2025.
Another emerging risk, participants noted, lies in vendors that offer little more than a thin interface on top of foundation models such as those provided by OpenAI or Anthropic. As these underlying models continue to improve rapidly, enterprises may gain little from intermediaries that do not invest deeply in specific workflows or domain expertise. In some cases, going direct to the model provider can be more effective than buying a lightly customised wrapper.
The most resilient approach, according to forum participants, is to work with partners that combine deep category focus with the flexibility to evolve alongside their clients. In areas such as legal and compliance AI, this means vendors that understand regulatory nuance, invest in long-term product roadmaps and are open to co-developing functionality as enterprise needs change. This structure allows organisations to influence product direction without absorbing the cost and risk of building outside their core strengths.
Experience suggests that companies insisting on building everything in-house often overspend and still fall short of their goals, Norm AI said. By contrast, enterprises that collaborate with established vendors tend to deploy faster and with fewer long-term surprises. The do-it-yourself route hides familiar traps: star engineers may lose interest in maintenance, and exceptional talent is always vulnerable to being hired away once the initial build is complete.
As AI systems move closer to mission-critical functions, the buy versus build calculation shifts further toward buying. While enterprises may reasonably build internal tools for areas like marketing or customer support, regulated touchpoints demand broader industry visibility and regulatory awareness. Just as large organisations continue to rely on external law firms and consultants, enterprise AI for high-stakes compliance use cases increasingly favours partners that see across the wider market.
Copyright © 2026 FinTech Global