By Lynn Räbsamen, CFA | COO, Global Swiss Learning | Advisory Board Member, CFA Institute | Author, Artificial Stupelligence
There is a metric quietly circulating in corporate boardrooms right now. It does not measure revenue. It does not measure cost savings. It measures the number of AI agents your workforce has built.
Let that sink in.
Not whether those agents work. Not whether they replace something that needed replacing. Just: how many did you build? Welcome to the current state of enterprise AI adoption — where the KPI is the effort, not the outcome.
When the KPI Is the Effort
There is a phenomenon underway in enterprise AI adoption that has no polite name. Boards are pressuring leadership to demonstrate progress. Leadership is pressuring teams to show adoption. Teams, being rational, are responding by building something. And because the metric is volume rather than value, they build agents. For everything.
I have heard firsthand accounts of agents being deployed for tasks that an Excel formula would handle in milliseconds. Deterministic, rule-based processes are being re-engineered as probabilistic LLM workflows because the output is a thing you can count on a slide.
The business case is not “this is the best tool for the job.” The business case is “this contributes to the metric.”
A language model is a powerful, expensive, non-deterministic tool. Using it to add two numbers — or to execute logic that a simple, auditable, deterministic system already handles perfectly — is not innovation. It is theatre. And unlike theatre, it has infrastructure costs, latency, and an occasional habit of hallucinating the wrong answer.
The spreadsheet, meanwhile, just works.
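To make the contrast concrete, here is an illustrative sketch (in Python, with hypothetical names — not any company's actual workflow) of the "Excel formula" version of a task that often gets re-engineered as an agent:

```python
# Illustrative sketch only. The function below is the code equivalent of a
# one-cell spreadsheet formula: deterministic, auditable, millisecond-fast.

def flag_over_budget(spend: float, budget: float) -> bool:
    """The spreadsheet's =IF(spend > budget, "FLAG", "") in one line."""
    return spend > budget

# The "agent" version of the same check would wrap a network call to a
# language model, roughly (pseudocode, not a real API):
#
#   answer = llm.complete(f"Is {spend} greater than {budget}? Answer yes or no.")
#   return answer.strip().lower() == "yes"
#
# On a good day it gives the same answer. It is also probabilistic, slower,
# and billed per token on every day.

if __name__ == "__main__":
    print(flag_over_budget(120_000.0, 100_000.0))  # True
    print(flag_over_budget(80_000.0, 100_000.0))   # False
```

The deterministic version can be unit-tested exhaustively and will behave identically forever; the agent version cannot make either promise, which is the entire point of the preceding paragraphs.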
What an Agent Actually Is (And Mostly Isn’t)
To be precise: a real AI agent is not a chatbot with ambition. It is a system that perceives its environment, reasons about a goal, plans a multi-step path, calls external tools, and executes — often without step-by-step prompting. The key word is autonomous.
Most things called “AI agents” inside companies are not this. Gartner has a name for the phenomenon — “agent washing” — and estimates that of the thousands of vendors claiming agentic capabilities, only about 130 are real. The rest are repackaged chatbots, RPA tools, and assistants wearing fresh marketing.
Inside enterprises, the same dynamic plays out one layer down. Genuine agentic problems — those requiring reasoning across ambiguous inputs — are rare. Most workflows that get the agent treatment do not need it. As Gartner’s Anushree Verma put it: “Many use cases positioned as agentic today don’t require agentic implementations.”
That sentence deserves to be printed and taped to every CIO’s monitor.
Vanity Metrics, Predictable Outcomes
The numbers behind the hype are unflattering. Gartner projects that more than 40% of agentic AI projects will be cancelled by the end of 2027, citing escalating costs, unclear value, and inadequate risk controls. MIT’s analysis of enterprise deployments found that the vast majority of generative AI pilots produce no measurable bottom-line impact. McKinsey calls the gap between universal adoption and absent results the gen AI paradox.
These are three different research firms describing the same problem from three angles.
Companies are doing a great deal of AI. They are getting comparatively little out of it.
The cause is rarely the technology. It is the framing. When the success metric is “agents deployed,” teams optimize for that metric. They produce agents. Whether those agents solve real problems becomes a secondary concern, then a tertiary one, then a footnote in the post-mortem when the project gets quietly shelved.
The Worklytics 2025 ROI study coined the right term for what gets reported up the chain in the meantime: vanity metrics. Total interactions. Raw adoption numbers. Counts of agents built. Numbers that look impressive in a quarterly review and tell you nothing about whether the work got better, faster, or cheaper.
The Question Nobody Is Asking
The right question before deploying an agent is not can we build one? The right question is whether the task requires reasoning, adaptability, and judgment across genuinely ambiguous inputs — or whether it requires a formula. If it is the latter, the spreadsheet wins. Always. It is faster, cheaper, more reliable, and considerably better at not hallucinating numbers under pressure.
The workflows where agents do earn their keep share a profile: high volume, repeatable, well-defined, with a clear stopping point at which a human takes over. Document triage. Routine compliance review. Code review assistance. Customer support for predictable queries. None of these are glamorous. All of them are where the actual ROI is hiding.
The unglamorous use cases are the productive ones. The glamorous ones — the agent that writes the email, the agent that summarizes the meeting, the agent built to satisfy a board metric — tend to be where the budget evaporates and the spreadsheet quietly goes back to doing the real work.
The Honest Verdict
AI agents, deployed thoughtfully in the right workflows with proper governance, deliver genuine value. That is true. It is also true that most agents currently being deployed are not those agents. They are the artefact of a counting exercise — built to populate a metric, not to solve a problem.
The solution is not more agents. It is better judgment about which problems agents are the right tool for.
Which, somewhat ironically, is exactly the kind of judgment that still requires a human.
Counting agents is not a strategy. Solving problems is.
If you are seeing this play out inside your own firm — agents being built to satisfy a metric rather than solve a problem — I would genuinely like to hear about it. Contact me here. Anything you share will be treated in the strictest confidence and never attributed without your explicit permission. You are welcome to remain entirely anonymous if you prefer.
For more insights about what AI can or cannot do, check out my book Artificial Stupelligence: The Hilarious Truth About AI.
Subscribe here to be the first to receive my insights.