When Vibe Coding Meets Reality

I didn’t start thinking about this because I’m excited about AI writing code. I started thinking about it because I kept seeing the same patterns repeat over the years.

This time, they come under a new name: vibe coding, the habit of describing what you want in natural language and letting an AI scaffold the system for you, often without deeply understanding the code it produces.

Vibe coding feels almost magical. You describe what you want, and something real starts to take shape. At first glance, it genuinely works. You get something running shockingly fast, faster than any team I’ve worked with could have done a few years ago, and it often looks surprisingly good.

But once the initial excitement fades, I find myself asking a different question.

Not just can we build it, but what happens next?

The moment speed stops being impressive

In small projects, vibe coding is intoxicating. You don’t need specs. You don’t need long discussions or careful coordination. You just build.

And that freedom feels incredible.

But enterprises don’t fail because they can’t build fast enough. They fail because they build the wrong thing, or because they build the right thing in a way they can’t live with six months later. A prototype that took a weekend to ship can easily turn into a quarter of painful rewrites once it’s entangled with real customers, real data, and real constraints.

That’s the point where speed stops being impressive and starts becoming a maintenance tax.

The problem isn’t velocity itself. It’s velocity without judgment. AI is extremely good at continuing. It will happily move forward, generate more, and refine further. What it’s bad at is stopping, reflecting, and questioning the direction it’s already taken.

And in real systems, stopping is often the most important decision you make.

Why enterprises feel uneasy about vibe coding

When people explain why vibe coding “doesn’t work in enterprises,” they usually point to the obvious things: security, compliance, regulation, legacy systems.

Those concerns are real, but they’re not the root of the discomfort.

The deeper issue is accountability over time. Decisions echo. Context accumulates. Mistakes don’t disappear after a demo. When you accelerate execution without a way to preserve judgment, you’re not innovating; you’re simply generating future problems faster.

That’s the part that should make us uneasy.

The uncomfortable shift nobody is naming

For most of my career, execution was expensive. Writing code took time and people, testing was slow, and deployments were painful. That reality shaped how we worked and how we organized teams.

We built entire processes around protecting execution. Specs, reviews, handoffs, and ceremonies weren’t bureaucracy for its own sake; they were survival mechanisms. They existed to reduce the cost of being wrong in a world where change was slow and mistakes were expensive.

Time to code used to be a natural filter for bad ideas.

That filter is gone.

AI quietly broke the assumption that execution cost would keep us honest. Today, turning an idea into something runnable can be extremely cheap. And when execution stops being the hard part, the bottleneck shifts elsewhere.

Not to tooling.
Not to frameworks.
But to deciding what deserves to exist at all, and what we’re willing to support over time.

We’re moving from an era of creation to an era of curation. The scarce resource is no longer developer capacity, but judgment.

This is where many conversations go wrong. We argue about prompts, models, or copilots, as if the core challenge were choosing the right tool. But the real shift is deeper than that. We now live in a world where building is no longer the hardest part; living with what we build is.

And living with it requires something AI fundamentally lacks: accountability.

What we’re actually missing

Vibe coding quietly assumes that if you can build fast enough, the right thing will eventually emerge. That assumption holds in environments where the blast radius is small, context is local, and failure is cheap.

It breaks everywhere else.

Enterprises don’t just need faster builders; they need better decision systems: systems that make it clear who is allowed to decide, who has the authority to say no, when something should stop, and what must be remembered once the work moves on.

That’s not a prompt problem. It’s a modus operandi problem.

A different framing

So if execution is cheap, the question is no longer how we make AI write better code. The real question becomes: how do we design systems that protect judgment?

Once you look at the problem through that lens, a lot of things start to feel off. Specs, roles, even some of the debates we’re having about AI suddenly seem like they’re solving the wrong problem.

What I kept coming back to was the same boundary. Humans need to own intent and accountability. AI should accelerate everything below that line. Speed is still valuable, but only when it’s used deliberately, not blindly.

The role shift is from author to editor-in-chief. We’re no longer checking syntax; we’re validating outcomes against business objectives and long-term viability.

I don’t think there’s an established name for this yet. I’ve been calling it Judgment Driven Development.

Because in the end, the future of software won’t be decided by how fast we can build. It will be decided by how well we decide what not to build, and when to stop.

In the next post, I’ll unpack what “judgment” actually looks like in practice, and what it means to maintain meaningful human oversight when AI makes moving fast feel effortless.