87% of CFOs say AI will be critical to their finance operations in 2026. Yet only 12% of organisations report gains on both cost and revenue. The problem isn't the technology—it's who's steering it and how.
The conversation about enterprise AI has shifted rooms. It used to happen in engineering. Now it happens in finance.
In Deloitte’s Q4 2025 CFO Signals survey, 87% of CFOs said AI would be extremely or very important to their finance operations in 2026. More than half said integrating AI agents into their departments was a transformation priority. These are not passive observers hedging their answers to a survey. These are stewards of corporate capital deciding, actively, where the money goes and whether it comes back.
The AI decision has become a finance decision. Understanding why — and what it means for how AI programmes succeed or fail — matters more now than most organisations have acknowledged.
01 — The CFO Did Not Choose This Role
No CFO in 2022 planned to become the arbiter of enterprise AI strategy. That shift happened for structural reasons, not because finance leaders campaigned for it.
AI investment has scaled. According to Goldman Sachs, companies worldwide will invest more than $500 billion in AI in 2026. At that capital intensity, technology spend is no longer an IT discretionary line — it requires board-level justification. Boards, in turn, have placed that justification with the person accountable for financial results. That person is the CFO.
A second force is the ROI gap. PwC’s 2026 CEO Survey found that only 12% of executives report AI delivering measurable gains on both cost and revenue, while 56% report no significant financial benefit at all. When capital outlays are this large and returns are this inconsistent, the finance function does not wait to be invited into the conversation. It arrives.
The third factor is accountability. As KPMG research found, 59% of CFOs now claim primary responsibility for AI and technology investments. The CIO, unsurprisingly, also claims it — 61% of CIOs say the same. That overlap is not coincidence. It is a collision.
02 — The CFO-CIO Fault Line
The tension between CFOs and CIOs over AI is not a personality conflict. It is a structural one, rooted in how each role defines value and measures time.
KPMG’s survey of 102 CFOs and CIOs found that 39% of CFOs and 49% of CIOs consider the definition of technology ROI to be a contested area between them. They disagree on what return means, when it should arrive, and who bears responsibility when it does not.
CFOs typically expect returns within 12 months. Deloitte’s research suggests most AI projects need two to four years to generate measurable payback. That gap creates predictable friction: finance cuts funding on timelines that technology leaders consider premature. Technology teams, meanwhile, continue building on timelines that finance considers speculative.
The ownership question makes this worse. When two leaders both believe they hold primary accountability for a programme, neither holds it fully. Escalations slow down. Governance becomes political. Pilots multiply without anyone with the authority — or incentive — to kill the weakest ones.
L.E.K.’s 2025 Office of the CFO survey captured the operational version of this tension clearly. Finance leaders cited integration failure as the single largest blocker — not model capability, not workforce resistance, but the inability to connect AI tooling into legacy systems that were never designed to be connected. The CIO’s problem, CFOs might argue. The CFO’s problem, CIOs would say. In practice, it is nobody’s problem until it becomes everyone’s crisis.
03 — What the 12% Are Doing Differently
PwC’s finding that only 12% of organisations report AI gains across both cost and revenue deserves more attention than it typically receives. The gap between that 12% and the majority is not primarily technical. It is structural and operational.
The Cisco AI Readiness Index found that roughly 13% of organisations qualify as AI “pacesetters” — those with the infrastructure, data integration, and governance maturity to scale AI effectively. Those organisations are approximately four times more likely to move AI from pilot into production and 50% more likely to report measurable business value. The link between governance maturity and production readiness is not coincidental: the governance is what makes production at scale possible.
Three patterns distinguish the high performers.
They treat AI as workflow redesign, not tool deployment. Deloitte’s research is explicit on this: organisations trying to automate existing processes designed for humans, rather than redesigning those processes for AI-first operations, consistently underdeliver. The failure is not in the model. It is in assuming the model slots into a process architecture that was never designed for it.
They set outcome gates before scale. High-performing organisations require quantified impact evidence before moving from pilot to production. This is not bureaucratic caution — it is the same discipline applied to any capital allocation decision. CFOs who have been doing this for decades recognise it. The organisations that skip it are the ones sitting in the 56% reporting no financial benefit.
They distribute decision rights deliberately. Deloitte’s predictive modelling on C-suite collaboration found that AI-driven business outcomes peak when decision rights are shared among the CIO or CTO, the CFO, and the chief strategy officer — not owned by any single role. The model that works is not CFO-led or CIO-led. It is CFO and CIO operating on a unified framework, with clear ownership of different parts of the value equation.
04 — The Questions the CFO Is Now Asking
The shift in decision-making authority has changed the questions that enterprise AI programmes must answer before they receive funding or scale.
CFOs are no longer asking whether AI works in principle. They are asking: what does this cost fully loaded — including data preparation, governance setup, change management, and the ongoing cost of operating the model at production scale? When does it pay back? What is the residual risk if the programme fails to deliver, and who owns that risk?
These are not hostile questions. They are the same questions applied to any significant capital outlay. Enterprise AI programmes that have not been designed to answer them are not well-designed programmes.
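The payback question can be made concrete. The sketch below is a minimal illustration in Python: the class name, fields, and any figures a reader plugs in are hypothetical placeholders of our own choosing, not a methodology from any of the surveys cited here. It simply totals the fully loaded one-off costs the CFO asks about and divides by net annual benefit to estimate payback.

```python
from dataclasses import dataclass

@dataclass
class AIProgrammeCase:
    """Illustrative fully loaded business case for an AI programme.
    All fields and figures are hypothetical, not benchmarks."""
    licence_and_compute: float   # annual model / platform cost
    data_preparation: float      # one-off data and integration work
    governance_setup: float      # one-off audit and controls build-out
    change_management: float     # one-off training and rollout
    annual_run_cost: float       # ongoing ops, monitoring, retraining
    annual_benefit: float        # quantified cost savings plus revenue lift

    def upfront_cost(self) -> float:
        # The one-off outlays that must be recovered before payback
        return self.data_preparation + self.governance_setup + self.change_management

    def payback_years(self) -> float:
        """Years until cumulative net benefit covers the upfront cost."""
        net_annual = self.annual_benefit - self.annual_run_cost - self.licence_and_compute
        if net_annual <= 0:
            return float("inf")  # the programme never pays back
        return self.upfront_cost() / net_annual
```

A case with, say, £2m of one-off cost and £1.5m of net annual benefit pays back in roughly 16 months, outside the typical 12-month expectation noted above. That is exactly the conversation this framing forces: the gap surfaces before funding, not at the budget review.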
The emergence of AI agents as a finance priority makes this more acute. Deloitte found that 54% of CFOs see integrating AI agents into their departments as a 2026 transformation priority. Agentic systems that act autonomously on financial data, generate forecasts, and flag anomalies carry a different risk profile from analytical tools that present information to a human. CFOs comfortable with the latter are applying considerably more scrutiny to the former — and rightly so.
The L.E.K. survey captured the CFO’s position precisely. For AI to deliver value in finance, it needs to be accurate and explainable. Black-box systems making decisions that affect reported financials are not a governance problem that can be deferred. They are a liability.
05 — The Implication for How AI Is Delivered
For any organisation building or procuring enterprise AI, the practical implication of this shift is direct: the CFO is now a primary stakeholder, not a downstream approver.
Programmes designed with the CIO as sole sponsor and the CFO as budget gatekeeper are structurally misaligned with how enterprise AI decisions are now made. The CFO needs to be in the room when the problem is being scoped, not when the invoice arrives.
This changes what good delivery looks like. It means defining success in financial terms — not model accuracy, but margin impact, working capital improvement, or time recovered from high-value roles. It means building ROI measurement into the programme architecture from day one, not retrofitting it when a budget review demands it. And it means governance that finance leaders can stand behind: audit trails, explainability, and decision-class frameworks that distinguish where autonomous action is acceptable and where it is not.
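A decision-class framework of the kind described above can be sketched as a simple policy function. Everything in this example is an illustrative assumption: the class names, the reversibility test, and the monetary threshold are placeholders, not an established standard. The point is the shape of the rule, namely that anything touching reported financials stays advisory.

```python
from enum import Enum

class DecisionClass(Enum):
    """Hypothetical oversight levels for agentic finance systems."""
    ADVISORY = "advisory"        # agent recommends; a human decides
    SUPERVISED = "supervised"    # agent acts; a human reviews before posting
    AUTONOMOUS = "autonomous"    # agent acts unattended within set limits

def required_class(affects_reported_financials: bool,
                   reversible: bool,
                   value_at_risk: float,
                   autonomy_limit: float = 10_000.0) -> DecisionClass:
    """Map a decision's risk profile to the minimum oversight level.
    Thresholds and rules are illustrative only."""
    if affects_reported_financials:
        # Autonomous action on reported numbers is the liability
        # described above, so it never drops below advisory.
        return DecisionClass.ADVISORY
    if not reversible or value_at_risk > autonomy_limit:
        return DecisionClass.SUPERVISED
    return DecisionClass.AUTONOMOUS
```

Encoding the policy this way also gives finance the audit trail it asks for: every agent action can log which class it was assigned and why.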
The organisations navigating this well are not the ones with the most sophisticated models. They are the ones with the clearest answer to the CFO’s questions — and the programme architecture to prove it.
The honeymoon period for enterprise AI is over. The boardrooms that once celebrated pilot announcements are now demanding production results. The CFO did not take over this conversation by choice. They took it over because the capital is real, the accountability is theirs, and the results, for most organisations, have not yet matched the investment.
Getting this right requires more than technology capability. It requires the kind of financial discipline, governance architecture, and organisational alignment that most AI programmes were never built to deliver. That is the gap worth closing.

