The Limits of Spec-Driven Development
Nov 25, 2025
In the 1990s, developers wrote long functional specifications before coding. By 2010, agile had displaced the idea that you should define everything upfront. Today, as AI coding tools struggle with quality, the old playbook is returning: writing detailed specs in hopes of getting reliable outcomes.
On paper, spec-driven development (SDD) feels like the perfect solution: write a detailed spec first, then let the model generate “correct” code from it.
But reality hits hard.
It's a pattern we've seen before: whenever we try to "solve unpredictability" by writing more down upfront, development fails, and always for the same reason: reality changes faster than specs do.
What Is Spec-Driven Development?
Spec-driven development (SDD) is the practice of writing detailed specifications upfront, then using AI to generate code from them. These specs aim to define a system's behavior, requirements, constraints, and interfaces precisely enough for an AI model to produce code reliably.
But it overlooks the fact that static artifacts can't carry all the context, no matter how precise your specs are.
Let’s break this down.
Where Spec-Driven Development Fails
SDD fails for four reasons that no amount of prompting or model improvement has fixed yet:
1. Specs Are Expensive to Maintain
Writing comprehensive specs takes significant time. Software development is also an iterative process: with so many variables in play (requirements changing, constraints shifting, and new insights emerging during implementation), keeping specs in sync with the code creates a maintenance tax that grows with system complexity. Instead of reducing overhead, SDD often doubles it.
Suppose you’re building a subscription invoicing system. You write a spec describing billing cycles, proration rules, tax conditions, and grace periods. A week later, finance says, “We need European VAT handling.”
Updating the code directly is far easier than updating the spec first. But that shortcut leaves the code, the spec, and the team’s mental model out of sync.
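To make the drift concrete, here is a minimal sketch (all names, rates, and rules are hypothetical) of how an implementation quietly outgrows its spec:

```python
# Spec (v1): "Prorate the charge by days remaining in the billing cycle."
def prorated_charge(amount: float, days_remaining: int, cycle_days: int = 30) -> float:
    # This is all the spec describes.
    return round(amount * days_remaining / cycle_days, 2)

# Added after launch when finance asked for EU VAT handling.
# The rates below are illustrative; the spec still only describes
# proration, so this logic exists in code and in the team's heads,
# but nowhere in the document.
VAT_RATES = {"DE": 0.19, "FR": 0.20}

def prorated_charge_with_vat(amount: float, days_remaining: int, country: str) -> float:
    base = prorated_charge(amount, days_remaining)
    return round(base * (1 + VAT_RATES.get(country, 0.0)), 2)
```

The spec and the code now answer "what does billing do?" differently, and every further change widens the gap.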
As a result, every update becomes documentation debt disguised as engineering discipline.
2. Specs Don't Reflect All Context
Specs describe what a system should do, but they can't explain why it works that way. And the "why" carries the real context:
Why certain assumptions were made
Why specific tradeoffs were chosen
What the team learned while iterating
What real-world constraints shaped the solution
These things never make it into the spec. And the missing context is where the real problems show up:
Edge cases only appear when the system is used.
Performance issues only appear under load.
User behavior only appears after launch.
So LLMs don’t struggle because the spec is “wrong.” They struggle because the spec can never capture all the context they need.
3. Over-specification creates the illusion of completeness
A detailed spec feels like control. It gives teams a sense that all cases are covered. But this confidence is often false.
Software development is exploratory. The most important insights come after you begin building. Clinging too tightly to a static spec means less iteration, less creativity, and fewer emergent solutions. It turns development into a brittle, waterfall-like process, just with AI in the loop.
4. The wrong level of abstraction
SDD tools today are optimized for parsing specs, not interpreting intent.
Most SDD approaches focus on implementation details, the hows:
Field definitions
Enums
Request/response schemas
Function signatures
But what matters more are the whys behind them:
Intent
Constraints
Context
Most current SDD tools (including systems like Kiro) generate code directly from these low-level specs. They can produce accurate scaffolding, but they miss the context needed for resilient behavior. The result is code that is structurally correct but misaligned with the actual intent of the system.
What Actually Matters — Context Engineering
The missing piece in AI coding isn't more detailed specs but better-preserved context. This means AI-native development should:
1. Start with intent
Instead of jumping straight into writing specs, the workflow should begin by defining the core context: the problem you're solving and why, the non-negotiable constraints, and the assumptions you're working under.
2. Keep context up to date
AI-led development should be just as iterative as traditional software development. When requirements change or new insights emerge, the context the model uses needs to be refreshed so the team and the AI stay aligned.
3. Specs should follow the codebase
Specs should be living artifacts that stay aligned with the actual implementation.
4. Preserve the whys, not just the requirements
Code shouldn't just show what it does; it should also explain why it was built that way.
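As a sketch of what preserving the "why" can look like in practice (the scenario, numbers, and rationale below are all illustrative, not from any real system):

```python
from datetime import timedelta

# Why 3 days: in this hypothetical scenario, most failed payments
# recover within 72 hours, and longer grace periods increased
# involuntary-churn write-offs. A bare `GRACE_PERIOD = 3` preserves
# the decision but loses the reasoning behind it.
GRACE_PERIOD = timedelta(days=3)

def should_suspend(days_overdue: int) -> bool:
    """Suspend an account only after the grace period.

    Why not suspend immediately: card retries usually succeed within
    the window above, and suspending early generated support tickets.
    """
    return days_overdue > GRACE_PERIOD.days
```

The constant and the branch are trivial; the comments are what let a teammate, or a model, change them safely later.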
The Path Forward
For stable contracts and well-understood domains, spec-driven approaches can work great. But for exploratory development that comes with evolving requirements, context-driven approaches adapt better.
Most real-world projects have both: stable contracts at system boundaries, and adaptive iteration within them. This is the principle that shaped Yansu, our AI-led coding platform, originally built for internal use to serve PE firms and mid-market engineering teams. That philosophy translates into a dynamic software development lifecycle (SDLC) in Yansu that:
Captures intent and constraints from discussions, examples, and tribal knowledge
Updates context and specs as understanding evolves
Simulates scenarios that reflect real system behavior before writing any code
Explicitly embeds the "whys" in the code, so the team can trace each line back to the reasoning behind it
If you’d like to explore our work, check out yansu.isoform.ai