How Do You Grow a Senior Engineer When AI Does the Grunt Work?

For decades, the path was obvious. A junior engineer joined a team, got handed a bug nobody else wanted, fixed it, broke something else, fixed that too, and over a few years accumulated the scar tissue that turned into judgment. Senior engineers were not trained; they were grown, slowly, by the system itself. That system has stopped working at both ends. At the entry point, juniors aren’t being hired. AI makes a senior dramatically more productive and a junior only marginally so, which makes it rational, for any single team in any single quarter, to skip the junior hire and let one experienced engineer with AI do the work two juniors used to do. Industry-wide, the pipeline is being shut off before it starts. ...

May 9, 2026 · 12 min · Rami Pinku

The Abstraction Layer Severed the Natural Learning Path

Every senior engineer I respect has a similar war story. They wrote a Python script that was too slow, and someone told them to learn what a list comprehension actually does under the hood. They built a React app that re-rendered itself into a coma, and they had to crawl back into the DOM to figure out why. They shipped a service that fell over the first time real traffic hit it, and they spent a weekend learning what a connection pool is. ...

May 2, 2026 · 7 min · Rami Pinku

What Senior Engineers Know That AI Doesn't

Working with AI to generate code is extremely satisfying. In a matter of minutes, you get something that looks great and, in most cases, does what you wanted and often more. But many times, what looks ready for production is far from production-safe. A large-scale study conducted by two researchers at FernUniversität in Hagen analyzed 7,703 files from public GitHub repositories explicitly attributed to AI tools. Using CodeQL, the researchers identified 4,241 CWE instances across 77 different vulnerability types. While 87.9% of the analyzed AI-generated code contained no identifiable CWE-mapped vulnerabilities, the risk came from code that appeared to work fine. It compiled, it solved the visible task, but it still carried hidden assumptions, unsafe patterns, and security debt. ...
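A minimal sketch of what "compiles and solves the visible task, but carries a CWE" can look like in practice. This is an illustrative example I constructed, not code from the study: a query builder using string interpolation, the classic CWE-89 (SQL injection) pattern that tools like CodeQL flag even though the function passes a quick manual test.

```python
import sqlite3

def find_user(conn, username):
    # Looks correct and returns the right row for benign input, but builds
    # SQL by string interpolation: a CWE-89 (SQL injection) pattern.
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchone()

def find_user_safe(conn, username):
    # Same visible behavior, but parameterized: the driver escapes the input.
    return conn.execute("SELECT id FROM users WHERE name = ?", (username,)).fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# Both versions agree on benign input, so the bug is invisible in a demo...
assert find_user(conn, "alice") == find_user_safe(conn, "alice") == (1,)

# ...but a crafted input subverts only the interpolated version.
assert find_user_safe(conn, "' OR '1'='1") is None   # safe: no such user
assert find_user(conn, "' OR '1'='1") == (1,)        # unsafe: matches every row
```

Both functions ship, both pass the happy-path test; only one survives contact with hostile input, which is exactly the gap between "works" and "production-safe" described above.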

April 25, 2026 · 6 min · Rami Pinku

You Can't Govern What Nobody Owns

I recently argued on the JFrog blog that trusted AI requires more than model quality. It requires visibility, provenance, governance, and a real system of control around the things models consume, build, and ship. That is the foundation. This post is about what you build on top of it. Visibility is necessary: without it, you cannot govern anything. If you cannot see which models are running, where they came from, how they behave, and what they touch, you do not have a governance posture. You have hope dressed up as architecture. ...

April 18, 2026 · 7 min · Rami Pinku

Your Job Isn't to Write the Code. It's to Own the Decision.

A developer recently gave Claude Code write access to a live Meta Ads account. The agent’s read-only analysis was genuinely valuable; it correctly identified the cheapest campaign as having the worst ROI. The insight was good. The judgment about what to do next was absent. The agent executed autonomously, triggered API rate limits through automated publishing, and got the account permanently banned. The read was right; the write destroyed the business relationship. ...

April 11, 2026 · 6 min · Rami Pinku

Decision Boundaries: Where Judgment Actually Lives

In the previous posts, I argued that execution is no longer scarce and that judgment, not effort, has become the limiting factor. I also argued that judgment without memory slowly degrades into improvisation. But even if you solve memory, even if context is accessible and history is visible, something can still go wrong. You can have full information, experienced people, and AI assisting at every step, and still end up making decisions that gradually weaken the system. ...

February 28, 2026 · 4 min · Rami Pinku

Memory Is the Missing Layer in AI-Assisted Development

In the previous posts, I argued that execution is no longer the bottleneck and that judgment isn’t intuition, it’s accountability. Those two shifts already change how we should think about building software. But there is a third layer that matters just as much and is far less visible. Memory. Not model memory. Not guardrails. Not rule engines. Institutional memory. The living history of why things exist. ...

February 21, 2026 · 5 min · Rami Pinku

The Stages of Judgment-Driven Development

Most of the pain I’ve seen in software wasn’t caused by bad code. It was caused by bad decisions that were never treated explicitly as such. In the last two posts, I argued that execution is no longer the bottleneck. AI made building cheap. Judgment is now the scarce resource. If that’s true, the way we develop software must change, not in terminology or ceremonies, but in where and how we place judgment. ...

February 14, 2026 · 5 min · Rami Pinku

What "Human Judgment" Actually Means in the Age of AI

We often talk about judgment as if it were intuition, taste, or seniority, something vague that people either have or don’t. That framing is wrong. Judgment is not intuition. It’s accountability. In real systems, judgment isn’t about gut feeling or instinct. It’s about being accountable for decisions made under uncertainty. Judgment shows up in moments like deciding something is good enough to ship, deciding not to ship even though it technically works, deciding to stop a direction after weeks of investment, or deciding that a shortcut today will become an unacceptable liability six months from now. ...

February 7, 2026 · 6 min · Rami Pinku

When Vibe Coding Meets Reality

I didn’t start thinking about this because I’m excited about AI writing code. I started thinking about it because I kept seeing the same patterns repeat over the years. This time, they come under a new name: vibe coding, the habit of describing what you want in natural language and letting an AI scaffold the system for you, often without deeply understanding the code it produces. Vibe coding feels almost magical. You describe what you want, and something real starts to take shape. At first glance, it genuinely works. You get something running shockingly fast, faster than any team I’ve worked with could have done a few years ago, and it often looks surprisingly good. ...

January 31, 2026 · 5 min · Rami Pinku