This week on the All-In podcast, I heard Satya Nadella describe how LinkedIn has been collapsing product management, UX, and engineering roles into a single “full-stack builder” role. The framing was optimistic: AI reduces construction costs and speeds up delivery.
At a surface level, this makes sense. AI dramatically accelerates prototyping, implementation, and iteration. In many environments, the traditional separation between PM, design, and engineering really does introduce friction that slows learning and execution.
The key question is not whether the idea is good.
It is where and when it is good.
Where the full-stack builder model works well
For startups, small teams, and point-solution companies, this model can be a real advantage.
When you are early, the scope is narrow, the systems are small, and the cost of mistakes is relatively low. Speed matters more than polish. Learning matters more than optimization. One person who can think about the problem, design a reasonable interface, and use AI to build a working solution can remove enormous overhead.
This is not new. Startups have always relied on people wearing multiple hats. AI simply expands how much one person can reasonably do before hitting a wall.
In these environments, collapsing roles often increases clarity rather than reducing it.
Where the model becomes risky
Problems appear when this model is treated as a default for large companies building complex, long-lived products.
As products scale, three things change:
- First, systems stop being understandable end-to-end by a single person. Architecture becomes layered, dependencies multiply, and edge cases matter.
- Second, the cost of mistakes becomes asymmetric. A subtle security issue, a misunderstood data contract, or an accessibility failure can affect millions of users or create regulatory and reputational damage.
- Third, organizations stop just building software and start maintaining ecosystems. Code lives for years. Teams change. Decisions need to be explainable long after they were made.
These are the conditions in which specialization emerged.
AI does not remove this complexity. It changes who is exposed to it.
AI is an accelerator, not a substitute for understanding
I use AI extensively in my day-to-day work and in my personal projects: for mocks, rough POCs, research synthesis, and early drafts of product documents. It is an extraordinary accelerator.
But AI does not replace the work of understanding the problem, the user, or the system.
I still read deeply. I still talk to people. I still run numbers. I still iterate manually and make tradeoffs explicit. AI helps me move faster once I understand the terrain. It does not map the terrain for me.
In complex environments, the most valuable work is not generating artifacts. It is preserving coherence and shared understanding over time.
The hidden risk: speed without shared understanding
One of the under-discussed risks of AI-driven role collapse is not just technical debt, but what some researchers call epistemic debt: the gap between how complex a system becomes and how well the people building it actually understand how it works.
When AI generation outpaces human comprehension, organizations can end up owning systems they cannot reason about. Debugging becomes harder. Reviews become forensic. Institutional memory erodes.
This is acceptable when it happens to me in a small-scale POC or demo.
It is untenable in large production systems.
There is growing evidence pointing in this direction:
- GitClear analysis shows AI-assisted workflows correlate with higher code churn, more duplication, and less refactoring: https://www.gitclear.com/coding_on_copilot_data_shows_ais_downward_pressure_on_code_quality
- Research on epistemic debt explains how time saved on writing is often lost in review, debugging, and coordination: https://failingfast.io/ai-epistemic-debt/
- A Stanford study found developers using AI assistants wrote less secure code while being more confident in it: https://arxiv.org/abs/2211.03622
- Veracode reports AI-generated code introduces security flaws at meaningful rates, amplified by higher velocity: https://www.veracode.com/resources/analyst-reports/2025-genai-code-security-report/
The same limits appear in UX. AI can generate interfaces that look complete but miss context, accessibility, and real user behavior. Nielsen Norman Group outlines why AI struggles with true user understanding:
https://www.nngroup.com/articles/ai-tools-ux-research/
A more pragmatic framing
The mistake is not experimenting with the full-stack builder model.
The mistake is assuming it scales cleanly everywhere.
For startups, small teams, internal tools, and focused products, collapsing roles can unlock speed and learning.
For large organizations building complex, long-lived systems, collapsing roles without compensating structures risks trading short-term velocity for long-term fragility.
A more robust approach is AI-augmented specialization:
- Use AI to reduce friction within roles, not to eliminate the roles themselves
- Let product managers focus on intent and tradeoffs
- Let designers focus on empathy and experience
- Let engineers focus on architecture, security, and reliability
- Let all of them move faster with AI support
In complex systems, the collaboration tax is not just overhead.
It is also quality insurance.
AI changes how we build.
It does not remove the need for human judgment.
It simply raises the cost of getting that judgment wrong.