For those working at the cutting edge of artificial intelligence adoption, the most technically demanding challenge is perhaps not what you might expect.
According to the head of the innovation lab at Zeidler Group, the hardest part of building with generative AI is not the architecture, the data pipelines, or even the regulatory compliance — it is simply saying no.
Speaking during a recent Q&A session, the Zeidler Group head of innovation lab reflected on how the nature of questions from industry peers has shifted over time. Where conversations once centred on how to leverage generative AI or what strategic advantages it could offer an organisation, practitioners are now beginning to grapple with the deeper mechanics of implementation — including knowing when not to proceed.
The enthusiasm surrounding generative AI shows little sign of abating. New product launches and capability announcements continue to dominate professional discourse, with platforms like LinkedIn flooded daily by pitches promising transformative outcomes for organisations willing to embrace the technology. This feverish optimism, the Zeidler Group executive argues, is precisely what makes saying no so difficult. Declining to proceed with a project can feel confrontational — like being the only person not dancing at a party, they said.
Yet the ability to pump the brakes is, in practice, one of the most valuable skills a technologist can develop. “The hardest part about building with generative AI is saying no,” the Zeidler Group head of innovation lab said.
Deciding when to refuse a project is, admittedly, more art than science. There is no purely analytical framework that produces a clean answer. Instead, practitioners must weigh up several overlapping factors: the inherent technological limitations of large language models (LLMs) as they relate to the specific use case; the end requirements of a project in terms of accuracy, cost, and time; and the underlying rationale — the “why” — behind the initiative in the first place. Two projects that appear almost identical on the surface can carry vastly different feasibility profiles once these variables are examined closely.
A practical example illustrates this well. Automating the review of a first draft of a document is a fundamentally different proposition to automating the sign-off review of a final draft. Similarly, a workflow where a human remains in the loop throughout differs enormously from one designed for full automation. These distinctions matter, and failing to interrogate them carefully can lead organisations to invest in solutions that are simply not ready — or not appropriate — for the task at hand.
Critically, not every “no” is permanent, they said. Technology evolves rapidly, and a decision that was the right call twelve months ago may warrant revisiting today. Before multimodal LLMs became widely available, the state of the art for graph and chart detection sat at around 50% accuracy — a threshold that rendered many projects unfeasible. Those same projects are now entirely viable.
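The weighing-up described above could be caricatured as a simple triage sketch. All names, inputs and thresholds here are hypothetical illustrations of the reasoning, not Zeidler Group's actual process:

```python
from dataclasses import dataclass

@dataclass
class ProjectProposal:
    """Hypothetical inputs to a go/no-go triage (illustrative only)."""
    required_accuracy: float   # accuracy the use case demands, 0-1
    model_accuracy: float      # accuracy current LLMs achieve on the task
    human_in_loop: bool        # does a reviewer check every output?
    clear_rationale: bool      # is there a concrete "why" beyond hype?

def triage(p: ProjectProposal) -> str:
    """Return 'go', 'no (for now)', or 'no' for a proposal."""
    if not p.clear_rationale:
        return "no"                 # hype alone is not a reason to build
    if p.model_accuracy >= p.required_accuracy:
        return "go"                 # the technology already meets the bar
    if p.human_in_loop:
        return "go"                 # a reviewer can absorb the error rate
    return "no (for now)"           # revisit as model capability improves

# e.g. pre-multimodal chart detection at ~50% accuracy, fully automated:
print(triage(ProjectProposal(0.95, 0.50, False, True)))  # no (for now)
```

The point of the third branch is the article's own: two near-identical proposals (same task, same model) diverge entirely on whether a human stays in the loop, and a “no (for now)” is an invitation to re-run the check once the technology moves.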
The broader implication, the Zeidler Group executive argues, is that generative AI is not going away. Pandora’s box is open. The question facing organisations is no longer whether to engage with the technology, but how to engage with it wisely — and that means developing a clear, considered sense of when it genuinely adds value, and when it does not.
Copyright © 2026 FinTech Global