Anyone Can Prompt. Not Everyone Can Engineer.

What the AI coding revolution actually changes, and what it doesn’t.

The Translation Layer Is Gone

I recently watched an IBM video that made a point I’ve been thinking about ever since.

The speaker walked through the entire history of programming languages, from machine code and assembler, through COBOL and FORTRAN, to object-oriented, web, and scripting languages, and made a simple observation: every generation moved a little closer to the way humans actually think and speak.

Then he said this: we used to go from intent to instructions to results. The programmer was the translation layer. Now, with LLMs, we go straight from intent to results. The translation layer is gone. And his conclusion followed: not everyone who wants to write code has to be a programmer. Because you already know the most important programming language of the AI era. You’ve been speaking it your entire life.

I think that’s right.

But it’s incomplete. And the missing piece is the one that matters most.

Every Abstraction Shift Moved the Challenge Upward

Here’s what software history actually shows: every time the abstraction layer rose, more people could build, and the real engineering challenge moved upward, not away.

Assembly gave way to higher-level languages. Memory management got abstracted away. Frameworks made common patterns fast. Cloud services removed entire categories of operational work. Low-code tools let non-developers wire together functional systems.

Each step made entry easier. Each step also compressed execution. And at every step, the value shifted, from syntax knowledge toward structural thinking. From memorizing APIs toward understanding tradeoffs.

Natural language is the next compression. Not a break from this pattern. The continuation of it.

I Know This From the Other Direction

I grew up with BASIC, learned Pascal in high school, and wrote Java at university. Later, I picked up C#, JavaScript, and Python on my own. Enough to build things. Enough to automate. Enough to brute-force a solution until it did what I wanted.

But not enough to make me a developer.

My default mode was: make it work. For personal scripts and prototypes, that’s often fine. The moment software is meant to serve real users, real edge cases, and real consequences, that mode becomes risk, not resourcefulness.

In other words, I was the person who knew the language. I just didn’t know what I was doing with it. That distinction didn’t disappear when the language changed to English.

What the IBM Model Leaves Out

The IBM framing is built around the intent → results model. And it’s correct that AI has collapsed the translation step. What it doesn’t address is what happens before the intent and after the results.

Before the intent: someone has to know what problem they’re actually solving. Not what they think they’re solving, what the system actually needs to do, for real users, under real conditions, with real constraints.

After the results: someone has to evaluate whether the output is correct, secure, maintainable, scalable, and appropriate. Whether it handles failure. Whether it makes the right tradeoffs. Whether it survives contact with reality, not just the happy path, but the messy one.

Neither of those is a translation problem. Neither of them goes away when the interface changes.

What We Actually Paid Developers For

We never paid developers because they knew C, Java, or Python.

We paid them because they could look at a system and see what would break before it broke. Because they understood that the happy path is the easy part; the real work is designing for failure. What happens when the network drops? When two users modify the same record simultaneously? When the input is malformed, or malicious, or just unexpected in a way no one anticipated?
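The concurrent-modification case can be sketched in a few lines. This is a hypothetical in-memory store, not any particular system: a naive read-modify-write silently loses one user's change, while an optimistic version check surfaces the conflict instead of hiding it.

```python
class StaleWriteError(Exception):
    """Raised when a write is based on an outdated read of the record."""

# Hypothetical shared record, standing in for a row in a database.
record = {"balance": 100, "version": 1}

def naive_update(snapshot, delta):
    # Happy-path code: write back whatever we computed from our snapshot,
    # ignoring anything that happened to the record in the meantime.
    record["balance"] = snapshot["balance"] + delta

def guarded_update(snapshot, delta):
    # Designing for failure: refuse the write if the record changed since
    # we read it (optimistic concurrency control via a version number).
    if record["version"] != snapshot["version"]:
        raise StaleWriteError("record changed since it was read")
    record["balance"] = snapshot["balance"] + delta
    record["version"] += 1

# Two "users" read the same snapshot, then both write back.
a, b = dict(record), dict(record)
naive_update(a, +50)
naive_update(b, -30)
print(record["balance"])  # 70: user A's +50 was silently lost

# Same scenario with the version check: the second write is rejected.
record = {"balance": 100, "version": 1}
a, b = dict(record), dict(record)
guarded_update(a, +50)
try:
    guarded_update(b, -30)
except StaleWriteError:
    print("conflict detected")  # the conflict is surfaced, not hidden
```

Both functions pass any test that exercises one user at a time. The difference only shows up under concurrent access, which is exactly the kind of consequence the paragraph above is pointing at.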

We paid them because they knew how to make tradeoffs that have no clean answer. Speed versus correctness. Simplicity versus flexibility. Build it now versus build it right. Ship fast versus sleep at night. Every production system is a graveyard of decisions made under uncertainty, and good engineers make those decisions deliberately, with awareness of the consequences.

We paid them because they understood that software doesn’t exist in isolation. It runs on infrastructure, depends on other systems, gets maintained by people who weren’t there when it was built, and evolves in ways nobody fully predicted. Designing for that reality, for change, for scale, for the next engineer who inherits the codebase, is a different skill entirely from making something work once.

None of that is a language problem. None of it gets solved by a better interface.

“I just prompt” is the new “I just use Python.” Fluency in the interface was never the hard part.

Lower Execution Cost Amplifies Bad Judgment

Prompting lowers the cost of execution dramatically. That matters. More ideas get tried. More prototypes reach a state where they can be challenged. More people get to participate in building.

But lower execution cost also means faster amplification of bad judgment.

A weak engineering decision can now be implemented more quickly and more confidently. A poorly thought-through architecture can be scaffolded in minutes. A misunderstood requirement can be turned into working code before anyone realizes the requirement was wrong.

AI reduces the cost of doing. It does not reduce the cost of doing the wrong thing.

In fact, it may raise it.

Where JDD Comes In: Two Layers, Neither Optional

This is where Judgment-Driven Development becomes relevant.

JDD operates on the premise that execution is not the bottleneck. Judgment is. And judgment operates on two distinct layers that AI cannot collapse.

The first is product judgment: what problem are we actually solving, for whom, under what constraints, and what does success look like? This is the layer of intent, but not naive intent. Real intent, shaped by understanding the user, the market, the tradeoffs, and the consequences of getting it wrong. No prompt produces this. A product leader has to own it.

The second is engineering judgment: given what we’ve decided to build, how do we build it so it holds? This is the layer of consequences, what breaks, what scales, what’s secure, what’s maintainable, and what the next engineer will face when they inherit the system. A model can generate code. It cannot yet be held accountable for what that code does in production.

What AI changes is the distance between these two layers. Execution used to buffer them. A team had days or weeks between a product decision and a working implementation, time to catch misalignments, ask questions, push back, and reconsider. Now the buffer is gone. A product decision can become running code in an afternoon.

That’s powerful. It’s also dangerous.

When judgment is weak at either layer, AI doesn’t slow the damage down. It speeds it up.

The answer isn’t to slow down execution. It’s to sharpen judgment before you start.

The Line That Actually Matters

The line isn’t between people who can build and people who can’t.

The line is between people who can generate output and people who understand the consequences of that output.

Anyone can prompt. Not everyone can engineer. And increasingly, the difference between them isn’t visible until something breaks.