The following is a guest post from Alexander D. Hilton, MBA, a digital strategy and AI consultant at Agile New England, an ACM Chapter. Opinions are the author’s own.
A National Bureau of Economic Research study published in February surveyed nearly 6,000 CFOs, CEOs and executives across the U.S., U.K., Germany and Australia. The finding: Over 80% of firms reported zero measurable impact from artificial intelligence on either employment or productivity over the past three years. Yet those same executives forecast AI will boost productivity by 1.4% and output by 0.8% over the next three years. Economists are already drawing parallels to Solow’s paradox — the same disconnect between investment and results that defined the early computer age.
The standard explanation is patience: Transformative technologies take time. That’s true, but insufficient. The deeper problem is that most enterprises are evaluating AI investments using the wrong accounting logic — and until CFOs fix that, the numbers won’t move.
The cost accounting trap
Most AI business cases are built on cost accounting: Hours saved, headcount avoided, cycle time reduced. This logic treats every local efficiency gain as a contribution to the bottom line. It's the same logic that told manufacturers for decades to run every machine at full utilization, a practice that Eliyahu Goldratt's Theory of Constraints showed actually generated excess inventory, longer lead times and hidden costs.
The same error is now playing out with AI. Forrester predicts that enterprises will defer 25% of planned AI spend into 2027, noting that fewer than one-third of decision-makers can tie AI's value to financial growth. The PwC 2026 Global CEO Survey found that 56% of CEOs report getting nothing from their AI investments. And when Klarna replaced 700 customer service agents with an AI chatbot in 2024, cost accounting called it a win, until customer satisfaction collapsed and the company quietly began rehiring humans. Forrester separately found that 55% of employers who laid off workers in favor of AI now regret it.
Worse, some AI deployments don't just fail to generate value; they actively destroy it. Air Canada deployed a chatbot to reduce customer service costs; it fabricated a bereavement fare refund policy, and a tribunal ordered the airline to pay compensation. Deloitte Australia used AI to accelerate a government report; the deliverable contained fabricated quotes and fictitious references, forcing a partial reimbursement of a $290,000 contract. In each case, the cost accounting business case showed reduced operating expense. What it didn't capture was the increase in investment and operating expense from legal exposure, remediation and reputational damage: costs that dwarfed the original savings.
In every case, the initiative reduced local cost or accelerated a local task. In every case, it failed to improve the rate at which the organization generated value. That distinction is precisely what throughput accounting is designed to capture.
A better lens: Throughput accounting
As Goldratt wrote in “Beyond the Goal,” “Technology can bring benefits if, and only if, it diminishes a limitation.” Through this lens, the question for CFOs is not whether AI is powerful, but whether it diminishes the specific limitation that constrains financial performance. If it doesn’t, the investment creates activity without value.
Throughput accounting operationalizes this insight through three measures. Throughput (T) is the rate at which the organization generates money through sales — revenue minus truly variable costs. Investment (I) is capital tied up in the system: not just AI licenses and compute, but retraining costs, validation infrastructure and compliance overhead. Operating Expense (OE) is the ongoing cost to sustain the system, including human oversight, error correction and governance.
The decision rule is straightforward: A good investment increases T, reduces I or reduces OE, but only if it does so at the system's constraint. If an AI initiative accelerates a step that isn't the bottleneck, the system's output doesn't change. The bottleneck hasn't moved. The impressive speed gain never reaches the P&L.
I have seen this firsthand. In a large-scale enterprise transformation program, an AI tool accelerated document generation by a factor of 240. Cost accounting would score that as a transformational gain. But the system constraint was subject-matter expert validation — a manual, human-dependent review step that couldn’t be parallelized. The result: Throughput reached only 89% of the target despite the 240-fold speed improvement upstream. The AI made the non-bottleneck faster. The constraint didn’t move. The system barely changed.
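The dynamic above can be sketched in a few lines. This is an illustration, not the program described: the stage names and rates below are invented for the example, and only the 240-fold multiplier comes from the case.

```python
# A pipeline's sustainable output rate is set by its slowest stage (the constraint),
# so accelerating any other stage leaves system throughput unchanged.

def system_throughput(stage_rates):
    """Units per hour the whole pipeline can sustain: the minimum stage rate."""
    return min(stage_rates.values())

# Hypothetical two-stage pipeline: document generation feeds expert validation.
stages = {"doc_generation": 10, "expert_validation": 8}  # units/hour (assumed)

before = system_throughput(stages)  # limited by validation: 8 units/hour

stages["doc_generation"] *= 240     # AI accelerates the non-constraint 240-fold
after = system_throughput(stages)   # still 8 units/hour

print(before, after)  # 8 8
```

However dramatic the upstream speedup, the `min` does not move until the validation stage itself changes, which is exactly why the 240-fold gain never reached the program's throughput target.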
What CFOs should ask instead
The NBER study offers a revealing detail: Executives who use AI themselves spend only 1.5 hours per week with it, yet they are approving millions in enterprise-wide deployments. The gap between executive experience and investment scale suggests decisions are being driven by vendor narratives and peer pressure rather than constraint analysis.
Before approving the next AI initiative, CFOs should require answers to three questions. First, what is the system constraint this initiative addresses? If the team cannot name the specific bottleneck limiting throughput, the project is optimizing a non-constraint. Second, does this increase T, or only reduce local OE? A process that runs faster but feeds into the same downstream bottleneck hasn't increased the organization's rate of generating money. It has cut local costs while throughput stays flat. Third, what new I and OE does this create? Every AI deployment carries hidden Investment in the form of retraining, integration and validation workflows, and ongoing OE in the form of human oversight and error correction. If the gain in T, net of the new I and OE, is negative, the initiative destroys value regardless of how impressive the demo looks.
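The third question reduces to simple arithmetic. A hedged sketch follows; every figure is an illustrative assumption, as is the choice to convert the one-time I into an annual cost via a carrying rate:

```python
def net_value(delta_T, delta_I, delta_OE, carrying_rate=0.10):
    """Annual net effect of an initiative under throughput accounting.

    delta_T:       change in throughput (revenue minus truly variable costs), $/yr
    delta_I:       new capital tied up (licenses, validation infrastructure), $
    delta_OE:      change in operating expense (oversight, error correction), $/yr
    carrying_rate: assumed annual cost of the capital in delta_I
    """
    return delta_T - carrying_rate * delta_I - delta_OE

# A deployment that only speeds a non-constraint: T is flat, new I and OE appear.
print(net_value(delta_T=0, delta_I=500_000, delta_OE=120_000))        # -170000.0

# The same spend aimed at the constraint, where it actually lifts T.
print(net_value(delta_T=400_000, delta_I=500_000, delta_OE=120_000))  # 230000.0
```

The first case is the pattern the NBER respondents describe: visible activity, negative net value. The second is the only configuration in which the spend reaches the P&L.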
When the hype fades, the math remains
This is not an anti-AI argument. AI applied at the constraint — where it genuinely diminishes the limitation that restricts throughput — can be transformational. But applied indiscriminately, evaluated on local speed rather than system throughput and approved without constraint analysis, it becomes the most expensive way to achieve nothing. The firms reporting zero productivity impact are not failing because AI doesn’t work. They are failing because they are measuring the wrong thing. Throughput accounting gives CFOs a discipline for telling the difference — and for ensuring that AI investments land where they actually move the needle.