By Lynn Räbsamen, CFA | COO, Global Swiss Learning | Advisory Board Member, CFA Institute | Author, Artificial Stupelligence
“Banks are mandating AI training that takes ten minutes. Then asking why their employees can’t use AI properly.”
Citi just rolled out mandatory AI prompt training for 175,000 employees. The module takes ten minutes. Thirty if you’re a beginner.
That is the entire AI curriculum for one of the largest banks on the planet.
JPMorgan made AI training a mandatory part of onboarding back in 2024. Bank of America reports that more than 90% of its 213,000-strong workforce now uses AI tools in daily work. Wells Fargo sent 4,000 people through Stanford’s Human-Centered AI program. The numbers are impressive. The outcomes, less so.
There is a particular kind of corporate theatre unfolding inside financial services right now, and it deserves a closer look.
The Ten-Minute Transformation
Citi’s program is called “Asking Smart Questions.” It uses adaptive learning, which means experts can finish it in under ten minutes and beginners in about thirty. The bank is treating this as a foundational competency rollout.
A reasonable person might ask what kind of foundational skill can be conferred in the time it takes to make a sandwich.
“The honest answer is that the training was never really designed to teach anything. It was designed to be completed. Those are not the same goal.”
The L&D Industry Quietly Admits the Problem
Gary Lamach, SVP at ELB Learning, put it plainly in Fortune: too many organizations are treating AI as a box to check, launching a tool or rolling out a one-time training, and calling it transformation. That, he said, is when implementation fails.
This is not a fringe view. The phrase “checkbox training” appears across the learning and development literature with the regularity of a liturgical chant. Traditional compliance training focuses more on proving training happened than on making sure it works.
The EU AI Act, effective August 2024, requires that employees working with AI systems be “AI literate” starting February 2025. The threshold is literacy, not competence. The bar was set by lawyers, and lawyers set bars they can defensibly clear.
The Punchline Writes Itself
Here is where the story turns from absurd to genuinely funny.
“Employees are now using ChatGPT’s Agent Mode to complete their mandatory training. They are using AI to skip the AI training designed to teach them how to use AI.”
Read that again slowly. Companies may no longer be able to trust that a module was “completed” by a human at all.
And here’s the irony: if you are technically fluent enough to configure ChatGPT’s Agent Mode to complete an AI module on your behalf, you have already demonstrated more practical AI competency than the module was designed to test. The people gaming the training have, in a narrow but real sense, passed it.
If your training is so disconnected from actual work that the most efficient way to handle it is to delegate it to a bot, the training has answered its own question about whether it was worth doing.
Why None of This Is Working
The pattern across the sources is consistent. Mandatory AI trainings in finance are designed by Learning & Development and compliance teams. They are not designed by the people actually using AI in real workflows.
The people designing the curriculum have never sat in the seats where the work actually happens.
The curriculum is built for legal defensibility, not for capability. Proof that training happened, not proof that an employee can deploy AI usefully. The result is generic prompt-writing modules detached from the messy, role-specific judgment calls that actually determine whether AI helps a piece of work or quietly degrades it.
There is a deeper problem underneath. AI adoption at banks is triggering a compliance nightmare because the technology changes faster than review cycles can possibly accommodate. Lawyers and compliance teams are being asked to sign off on models that evolve mid-quarter. The process moves at a regulatory crawl while the technology runs laps around it.
What Good Would Even Look Like
“Useful AI training in a bank would not be a ten-minute module. It would look like apprenticeship.”
A senior credit analyst sitting beside a junior one, working through an AI-generated risk summary against the actual loan file. Catching together where the model sounded certain and was not. Feeding those findings back to the AI teams so the tool improves. It would be slow, expensive, role-specific, and almost impossible to scale through a learning management system.
Which is precisely why it isn’t happening.
What is happening instead is a global financial industry checking a box, photographing the box, filing the photograph, and announcing transformation. The employees know. The L&D consultants know. The regulators are starting to suspect.
The only place where none of this registers is the official quarterly report to management.
The Real Curriculum
If you want to know how much a bank’s employees actually understand about AI, look at how much time they were given to learn it. Ten minutes is not a strategy. Ten minutes is a position on liability.
The technology is moving faster than any institution can train for.
The honest response would be to deploy AI where it actually makes sense — in the specific workflows where it adds value, for the specific people who need it.
“When a tool is genuinely useful, employees learn it because they want to.”
Nobody mandated spreadsheet training when Excel displaced the calculator. The people whose jobs got easier picked it up. The rest didn’t need to.
The convenient response is to mandate a ten-minute module, log the completion, and move on.
One of those is harder than the other. Guess which one is winning.
Have You Lived This?
If any of this sounds familiar — if you have sat through a ten-minute AI module, watched a colleague delegate their compliance training to a chatbot, or tried to explain to your L&D team why the curriculum bears no resemblance to your actual work — I would love to hear about it. Your anonymity is fully protected.
Reach out here and tell me what is actually happening inside your organization.
For more insights about what AI can or cannot do, check out my book “Artificial Stupelligence: The Hilarious Truth About AI”.
Subscribe here to be the first to receive my insights.