A recent Sequoia piece made an argument that immediately felt familiar to me, not because it repeated the phrase “Judgment Driven Development” word for word, but because it pointed at the same shift from a different direction.
The article is here: Services: The New Software.
Its claim is simple: the next generation of great companies may not look like software vendors at all. They may look like software and AI powered service providers. Instead of selling tools to people who do the work, they will sell the work itself. Instead of selling accounting software, they will close the books. Instead of selling legal tooling, they will draft the NDA. Instead of helping a team perform a process faster, they will increasingly perform the process directly.
That is an important argument about business models, but underneath it sits something even more important.
A change in what is scarce.
For years, software development was built around the assumption that execution was expensive. Writing code was expensive. Testing was expensive. Integrating systems was expensive. Changing direction was expensive. Most of our methods, rituals, processes, and organizational habits were built to protect that scarce resource.
That world is changing.
AI is driving down the cost of execution at a pace that still feels unreal. Prototypes that once took weeks can now be built in hours. Features can be scaffolded almost instantly. Boilerplate is increasingly irrelevant. Translation from intent to implementation is becoming cheaper, faster, and more fluid.
When the cost of execution drops, the bottleneck shifts, and that is the core idea behind Judgment Driven Development.
JDD starts with a simple observation: if building becomes dramatically cheaper, the value of human contribution rises. The scarce resource is no longer the ability to produce artifacts. It is the ability to decide what should exist, what is worth pursuing, what tradeoffs are acceptable, what quality means in context, what risks are tolerable, and when something is good enough to ship.
Do not get me wrong: execution still matters. But judgment matters more.
The Intelligence Layer and the Judgment Layer
One of the sharpest ideas in the Sequoia article is the distinction between intelligence and judgment.
There is a category of work that is structured, rule based, pattern heavy, and actionable. It may be complicated, but it is bounded enough for AI systems to handle an increasing share of it. This is the intelligence layer. In software, it includes writing code from a spec, debugging certain classes of issues, generating tests, refactoring repetitive structures, and handling well formed implementation tasks.
Here is the thing: when execution was expensive, poor ideas often died early. Friction acted as a filter. Teams had to think harder before building because building itself consumed so much time and energy.
Now the filter is weaker.
Bad judgment becomes more dangerous when execution is cheap. You can produce more with less effort, which sounds great until you realize that organizations can now also produce more waste, more inconsistency, more shallow features, more security problems, more architectural drift, and more product noise, with less effort.
The judgment layer is different. Judgment is deciding which problem actually matters. It is deciding whether the spec itself is wrong. It is recognizing that the obvious feature should not be built. It is choosing where quality matters more than speed and where speed matters more than elegance. It is understanding the user, the market, the timing, the political reality inside the organization, the hidden cost of a shortcut, and the long term consequences of today’s local optimization.
AI does not remove the need for judgment. It amplifies the consequences of having it or lacking it, and that is why JDD matters.
From Copilots to Autopilots
The Sequoia piece frames a shift from copilots to autopilots.
- A copilot helps the professional do the work.
- An autopilot sells the completed work as the product.
That distinction matters far beyond startup positioning. It helps explain where software itself is going.
For years, enterprise software often stopped at enablement. It gave a human user better tools to do a job. Better dashboards. Better workflow automation. Better reporting. Better visibility.
Now, a different model is emerging in which the buyer may care less about the tool and more about the outcome.
- Do not help me manage billing. Resolve the billing workflow.
- Do not help me review contracts. Produce acceptable contracts.
- Do not help me monitor infrastructure. Keep the system healthy.
This is where JDD becomes bigger than software development. If AI keeps compressing the cost of execution across domains, then many markets will shift from software that assists work to systems that perform work. But those systems still need constraints, oversight, policy, escalation rules, exception handling, and accountability.
In other words, they still need judgment.
Even the most autonomous future does not eliminate judgment. It shifts judgment away from low level execution and toward system design, governance, goal definition, risk policy, and exception management.
That is exactly the move JDD is trying to describe inside product and engineering organizations.
What This Means in Practice
Here is a concrete version of what this shift actually looks like.
A PM under delivery pressure gets a request for a new filter on a reporting screen. In the old world, she writes a ticket, a developer builds it, QA tests it, and it ships. Two weeks, maybe three. The friction of the process forces a few implicit questions: Is this worth a sprint slot? Are we sure this is what users need?
In the new world, the feature can be prototyped in an afternoon. The friction is gone, and with it, the implicit questions that friction used to force. Nobody stops to ask if the placement makes sense. Nobody checks the security implications. Nobody puts the prototype in front of a real user before it ships. The filter lands in production. Users ignore it.
The execution was fast. The judgment was absent.
Now run it again, but with JDD applied. The PM still moves fast on execution. But before anything ships, judgment enters from multiple directions.

The PM asks: Is this the right filter, or just the one someone requested? What does success look like? Who is accountable for this decision?

UX asks: Is the placement right? Does this fit the existing flow, or does it create confusion? Does the prototype actually make sense to someone seeing it for the first time?

The developer asks: Does this introduce any security implications? Are we creating debt we will regret?

And because a working prototype now exists in hours rather than weeks, the team can put it in front of a solutions engineer, a customer, a skeptic, and get a real signal before a line of production code is written.
That is the JDD layer. Not one person asking better questions. A distributed judgment practice, running in parallel with fast execution, that treats the prototype as a thinking tool rather than a done thing.
The Risk of Oversimplifying
It is tempting to reduce judgment to technical taste or intuition. Judgment, properly understood, includes ethics, trust, accountability, timing, politics, customer empathy, business context, and the willingness to own consequences. It is not only knowing what good looks like. It is deciding what is acceptable when tradeoffs are painful and information is incomplete.
That matters because the biggest failures in AI enabled organizations will not come primarily from technical errors.
They will come from judgment failures.
Teams will automate the wrong work. Leaders will measure the wrong outcomes. Companies will optimize local efficiency while undermining long term trust. Organizations will mistake activity for value, because activity is now so easy to generate.
This is why JDD is not just a slogan about humans staying in the loop. It is a recognition that as execution becomes abundant, judgment becomes the core management discipline. Not a soft skill. Not a philosophical luxury. A hard operational requirement.
Services, Software, and What Comes Next
The most interesting thing about the Sequoia thesis is not the headline that services may become the new software.
It is the deeper implication.
When AI gets good enough, the boundary between tool and service starts to blur. Software no longer just enables action. It increasingly becomes the mechanism through which action is delivered. That changes product strategy, pricing, defensibility, and organizational design.
And inside companies, it changes how we should build.
The old development worldview assumed that execution was the primary scarce asset and optimized around protecting it. The new world demands something else: clearer intent, stronger judgment, better constraints, tighter governance, more disciplined prioritization, a sharper understanding of quality, and a deeper appreciation for consequences.
That is the world JDD is trying to describe: not one where humans become irrelevant, but one where the dramatic drop in execution costs raises the premium on human judgment. Not because judgment is fashionable or philosophically appealing, but because without it, cheap execution does not produce more value. It produces more mistakes, faster.
Final Thought
If services really are becoming the new software, then judgment may be becoming the new engine of value creation.
That is true for startups deciding what to sell. It is true for enterprises deciding what to automate. And it is true for software teams deciding how to work in the age of AI.
The more execution becomes abundant, the less impressive execution alone becomes.
What matters more is what we choose to do with it.
And that is exactly where judgment begins.