Field Notes · Chapter II
Chapter 2: What kind of productivity does AI actually unlock?
AI is the largest productivity revolution in history. True, I agree. But the devil hides in the details, even in slogans like this one. The devil at the heart of the AI revolution is that the productivity gain does not translate into a fall in development cost.
There is already a lot of evidence pointing this way. An article on AI being more expensive than the employees it supposedly replaces (in Korean) makes the point cleanly. It argues two things at once: AI is more expensive than the workers it would replace, and AI adoption hasn’t actually been driving headcount down. Both observations matter, and they pull in the same direction.
The market for sensational headlines doesn’t help. “AI boosts productivity, so developers are no longer needed.” “Even high-wage knowledge workers will be replaced.” Stacked on top of that is the narrative I touched on in Chapter 1 — vibe coding and the click-to-code stories that suggest AI has already absorbed Code Work. Read all of that wrong and it’s easy to walk away believing AI is replacing people, full stop. In most cases, that reading is just wrong. Today I want to take that misreading apart, partly through my own experience as a developer and partly through the actual mechanics of how LLMs work.
The most aggressive case I can make from personal experience is about vibe coding itself. Two reasons for picking it: I'm a developer, and vibe coding is, to my eye, the single most visible and most powerful productivity boost any pattern of AI tool usage has produced so far.
Does vibe coding actually replace developers?
No. Let me start with the public-record evidence.
The layoffs at Meta, the ones that became the symbol of this cycle, were actually concentrated in Reality Labs (in Korean), the metaverse research org, and Meta's own statements pointed to over-hiring during COVID as the cause, not AI-driven productivity. Marc Benioff at Salesforce (in Korean) said roughly the same about his round of cuts. And a year later, Salesforce is hiring again (in Korean), a fairly direct refutation of the "we let those people go because AI is doing their work" framing. If AI had genuinely replaced those roles, the seats wouldn't have come back.
What dozens of articles and research notes converge on is simple, almost boring: AI may be a convenient excuse for layoffs, but it is not the actual reason for them. The reason is straightforward — AI is not cheaper than a human, and AI cannot replace a human.
The fundamental cause is even more straightforward: today's LLMs are not, in fact, intelligence. They are an enormous, fast, interactive knowledge store. Research on this point (in Korean) underscores it, though honestly, you barely need the research: by construction and in operation, an LLM is a sophisticated probability calculator dressed up as intelligence. When GPT or Gemini or Claude returns what feels like an intelligent answer, it is returning the most probabilistically plausible arrangement of tokens. Code generation is no different. The model isn't judging and deciding; it's placing the code that, by training distribution, has the highest probability of belonging in that slot. The mechanic is closer to AlphaGo's move prediction in Go than to logical reasoning: predicting the highest-probability next move toward victory, not deriving a move from first principles.
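The "probability calculator" claim can be made concrete with a toy sketch of next-token selection. Everything here is illustrative: the candidate tokens and their scores are invented numbers, and a real model scores tens of thousands of tokens with a neural network rather than a hand-written dictionary. But the final step, turning scores into probabilities and picking from them, really is this mechanical:

```python
import math
import random

def softmax(logits):
    """Turn raw scores into a probability distribution over tokens."""
    m = max(logits.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Invented scores a model might assign to candidate next tokens
# after a prompt like "def add(a, b): return a + ".
logits = {"b": 6.0, "a": 2.5, "1": 1.0, "c": 0.3}
probs = softmax(logits)

# Greedy decoding: emit the single most probable token.
greedy = max(probs, key=probs.get)

# Sampled decoding: draw a token in proportion to its probability.
sampled = random.choices(list(probs), weights=list(probs.values()))[0]
```

Nothing in this loop judges whether "b" is the *right* completion; "b" wins only because, across the training data, it is the most statistically plausible filler for that slot.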
(By the way, this is exactly why LLMs are far more efficient when used for probabilistic success than for creativity or generation. I’ve seen this play out repeatedly in code work, and it deserves a longer treatment of its own — maybe a future post.)
You might push back here. If LLMs can produce the highest-probability-correct code for a given system, doesn’t that mean the answer exists without a person being needed? Doesn’t that mean developers are replaceable?
False. That reading misunderstands the nature of coding and software engineering: it confuses "building a new application or webpage" with the entirety of development. The essence of development is problem-solving. Five years ago, when we argued about who counted as a top-tier developer, the two attributes that kept showing up were (1) algorithmic reasoning and (2) experience. Why those two? Because development, properly done, isn't about rearranging code from a manual; it's about translating a problem you've encountered into something a computer can solve. That's where the cost goes.
Take an example. With enough YouTube tutorials, anyone can build an Instagram-like platform. You could have built it long before AI or Claude Code existed, and my family using my version would have no problem with it. Tailwind exists for design reuse, an entire ecosystem of libraries exists for logic reuse, and even top-tier developers leaned heavily on Stack Overflow for most of their day-to-day work — that’s just true. The real problems start after. What happens when 100 or 1,000 users hit your Instagram-clone at the same time? What happens when the AWS storage you provisioned suddenly explodes past the S3 capacity you had planned and paid for? What happens when two people in the U.S. and Israel try to edit the same post at the exact same moment, with a real physical timing gap between them? How do you preserve consistency in that environment?
That, mostly, is what development is — finding answers to those questions. Which is exactly why people have argued for years that we should distinguish “coders” from “software engineers.”
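The last of those questions, two users editing the same post at the same moment, is a good example of a problem no tutorial hands you. One common answer is optimistic concurrency control: every write must name the version it read, and stale writes are rejected. The sketch below is a minimal in-process illustration of that idea; the class name, fields, and two-user scenario are all invented for this example, and a real system would enforce the same check in the database or API layer:

```python
import threading

class Post:
    """A post guarded by optimistic concurrency: a write succeeds only
    if it names the version the writer actually read."""

    def __init__(self, body):
        self.body = body
        self.version = 0
        self._lock = threading.Lock()

    def read(self):
        with self._lock:
            return self.body, self.version

    def write(self, new_body, expected_version):
        with self._lock:
            if expected_version != self.version:
                return False  # someone else edited since this writer read
            self.body = new_body
            self.version += 1
            return True

post = Post("original caption")
_, v_us = post.read()   # the user in the U.S. reads version 0
_, v_il = post.read()   # the user in Israel also reads version 0
ok_us = post.write("edit from the U.S.", v_us)  # first write wins
ok_il = post.write("edit from Israel", v_il)    # stale version: rejected
```

The rejected writer gets told to re-read and retry instead of silently overwriting the other edit, which is the consistency guarantee the question was really asking about.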
A natural follow-up: those problems already happened to someone — and if AI is a probabilistic calculator and a giant knowledge store, then there must be data on those situations too, right?
True. There is. And that’s exactly the most efficient way to use LLMs.
Then doesn’t it follow that most of the world’s problems are solvable, and developers can be replaced?
False. Because the world only pays serious money for problems no one has solved yet. Simple test: would you pay a subscription for a personal-YouTube I built, with a smaller video pool than YouTube’s? Of course not — that problem has already been solved by YouTube and Netflix. Pushed to the extreme: you can’t make money from problems whose solutions are already inside AI’s stored knowledge. The fastest way to feel this is to launch a vibe-coded app and try to charge for it. So the conclusion is direct: someone is still needed to solve the problems that haven’t been solved yet.
If that lands, the next question is: then why is AX (AI transformation) important? If AI isn't even cheaper, why should anyone be in such a hurry to roll it out? The answer is that how you adopt AX, and with what philosophy, produces wildly different outcomes. Over time the gap will be large enough to reshape industries, reorder corporate rankings, and redirect where money flows.
If we agree that AI is a giant, interactive, fast knowledge store, then its real use case is surprisingly simple. It lives in the domain of radically compressing learning time.
I noted earlier that top-tier developers are already, at heart, people skilled at finding how others have solved similar problems, through Stack Overflow, through GitHub issues, and adapting those solutions to whatever sits in front of them. The same logic applies to learning a new programming language or stepping into a new domain: understand the new context's particularities, transfer success patterns from other languages or domains, and build a new success on top. That's what software engineering looked like, and the productivity question was always: how fast, accurately, and rationally can you select among those options? It's why we obsess over readability, why we make code reusable, why npm and PyPI and Tailwind became some of the largest reusable-code platforms in human history.
But for an engineer who actually understands this engineering logic, Stack Overflow is no longer necessary. Pulling the highest-probability success patterns and the proven success cases out of historical data — and implementing them — is something AI does much faster and with fewer mistakes. Engineers can now design adaptive paths, test them, and decide between them at a dramatically higher speed. And on top of that, the time it used to take to learn — to adapt to a new programming language, to a new domain — has compressed by something like 1,000x, exaggerating only a little. That is the real face of the productivity boost AI has delivered.
To summarize: AI has made it possible for individual developers — including developers whose learning speed had previously been on the slow side — to operate the way top-tier developers used to operate. Not everyone will use AI that way, of course, but the path to operating that way is now overwhelmingly easier than it used to be.
So the real question to ask is: if every individual on your developer team can now potentially become top-tier, then as the leader of that team, is the move to cut headcount?
Sometimes yes, sometimes no.
Yes — in industries that are saturated, where the business cannot meaningfully expand and the ceiling on revenue has been hit. Internal-combustion-engine car businesses might be there. Netflix could be there too — though, ironically, my own read is that Netflix has chosen to start a new war against YouTube rather than accept the ceiling. That’s a difference of judgment, not of fact.
But for most growth industries, the answer is no. Think about what bringing in a top-tier developer used to cost: far more than just money. The value of the people who can still do what AI cannot ("the people building AI itself") is so high that even at $300M or $1B in compensation, finding one is hard; what Tesla's board pays to keep Elon Musk in the CEO seat is genuinely staggering. To recruit, retain, and lock in top-tier developers, U.S. big tech has thrown astronomical salaries, frankly absurd working conditions, conditional RSUs, and rising stock prices at the problem. The entire tower of perks exists because finding such people was the binding constraint.
Now imagine that the same population of developers, the ones below the top tier, suddenly has a much easier path to operating like top-tier developers.
In a growth industry, the obvious decision is to invest, not to cut. Train the people who could become that kind of engineer; keep recruiting people who look like they could. Bluntly, the talent pool that used to be hoarded inside a few U.S. companies — the pool that let those companies dominate frontier industries and global-scale platforms — has just opened up enough that companies outside that handful, and startups, suddenly have a lane to compete in. Was a top-tier developer ever realistically going to walk away from Google or Meta — career, status, peers, location — to join a startup? Now picture the U.S. AI gurus who already noticed this and are leaving big tech to start their own companies. That’s an explicit bet on exactly this fact.
So if AI is read correctly, the chance that most growth industries will choose a cost-efficiency framing of AX is close to zero. The space that money alone couldn’t bridge has just become a space money can bridge. That is the real essence of AI’s productivity gain.
AX, then, isn’t about cutting people. It’s about training more people to use and understand AI well, and using that to start ventures you couldn’t have attempted before. That is AX, and that is productivity.
Which is why I expect most AX programs framed as cost-efficiency to fail in the end. And the companies that understand this and use it to challenge the existing big incumbents are worth watching very carefully. (This article (in Korean) is a useful complement on that thread.)
A confession in closing: I’m one of the people whose career directly benefits from this prediction being right. I want to flag that bias openly rather than hide it inside the argument.
Originally written in Korean.
The Korean version lives on my Naver blog.