The Machine Told The Truth
What Klarna's $60M AI mistake reveals about every business
Published on February 24, 2026 • 5 min read
In January, Klarna reported that its AI customer service agent was doing the work of 853 full-time employees. Resolution times dropped from eleven minutes to two. The company projected $60 million in savings.
Then customers started leaving.
Generic answers. Robotic tone. No ability to handle anything requiring judgment. By mid-2025, the CEO admitted publicly that while cost savings were real, the result was "lower quality." Klarna began frantically rehiring the human agents it had let go.
Most people tell this story as proof that AI can't handle nuance.
I think the more interesting reading is that the AI told the truth.
Klarna didn't have an AI problem. They had a clarity problem that AI made impossible to ignore.
The agent was given a goal: resolve tickets fast. It did that brilliantly. The problem was that "resolve tickets fast" was never actually what Klarna needed. What they needed was to build lasting customer relationships in a competitive market. Those are profoundly different goals — and they require profoundly different decisions at the point of interaction.
A human agent with five years at the company knows this difference intuitively. She knows when to bend a policy. When to spend three extra minutes because the customer's tone says they're about to churn. When efficiency is the right move and when generosity is.
She knows this not because anyone wrote it down, but because she absorbed the company's real values — the unwritten ones — through years of proximity.
The AI knew none of it. It had instructions. It had data. It didn't have intent.
So it did exactly what it was told. And in doing so, it revealed what the company actually prioritized — which was not what the company said it prioritized.
The machine told the truth. They just didn't like what it said.
Here's the part that has nothing to do with AI.
Every founder I've watched has a version of this problem. Not with agents and algorithms — with people, with teams, with their own execution.
You hire someone. They're talented. They work hard. And something's off. The output doesn't feel like what you meant. The decisions they make on your behalf aren't the ones you would make. You say "they don't get it."
But you've never actually articulated what "it" is.
You scale. Revenue grows. More people, more systems, more moving parts. And the thing that made the business work in the first place — that instinct, that coherence, that particular way of seeing the problem — starts to dilute. You feel it before the numbers show it. Something is drifting. You can't point to a single decision that's wrong. The aggregate just doesn't feel right.
This is the same gap. The same failure mode. The same invisible problem.
Your team is doing what Klarna's AI did. They're optimizing for what they can measure — because you never made the other thing explicit. The thing that actually matters. The intent underneath the instructions.
There's a principle in control systems that isn't reassuring.
When you have a powerful system pointed at the wrong target, it doesn't just miss the right one. It actively destroys value on its way to the wrong one. A strong engine with a misaligned compass is more dangerous than a weak engine — not less.
More capability without better intent is worse than less capability.
This is the Klarna story. A supremely capable system optimizing for the wrong objective caused more damage than a mediocre one ever would have. The $60 million in savings didn't cover the reputational cost.
But it's also the scaling story. The founder who raises money and hires fast without clarifying what the business actually is. The operator who adds tools and systems without resolving the architecture underneath. The builder who gets more capable every year while the foundation goes unexamined.
The capability isn't the problem. The missing intent is.
For decades, "humans just know" was enough. New hires absorbed the culture through osmosis. Teams approximated the founder's intent through proximity — hallway conversations, watching how decisions got made over months and years.
It worked. Sort of.
The truth is, "humans just know" was never quite true. Humans approximate. And the approximation holds until the system gets complex enough, fast enough, or big enough that the drift becomes visible.
AI agents made it visible in months. For most businesses, the same drift takes years — which means it's slower, but also harder to catch.
Architecture is what makes intent explicit before the system outruns it. Before the team approximates what you meant. Before the growth reveals what you never resolved. Before the capability gets ahead of the clarity.
Every cathedral had this. Every city that lasted had this. The architect's first question was never "what should we build?" It was "what is this for?" And the answer to that question constrained, organized, and gave coherence to everything that followed.
When you skip that question — when you let capability run ahead of intent — you get Klarna's AI. Fast, efficient, impressive output. Destroying the thing that actually mattered.
This is what I think about when people ask what I do.
I don't fix marketing. I don't optimize funnels. I don't coach people through mindset blocks.
I design the layer underneath — the one that makes what matters explicit before the system gets powerful enough to optimize for the wrong thing.
Because by then, it's expensive to fix. And the people who carried the intent in their heads? Sometimes they've already walked out the door.

Trenton Jackson
Trenton Jackson builds and writes at the intersection of human systems, business architecture, and design.