# Production Readiness Gap
## Definition
The Production Readiness Gap is the distance between “demo works” and “runs securely in production at scale.” This gap represents the validation work required when transitioning Vibe Coded prototypes to production systems.
The gap encompasses (see the code sketch after this list):
- Correctness: From “90% correct” (probabilistic generation) to “always correct” (authentication, data integrity)
- Performance: From seconds (LLM latency) to milliseconds (business logic)
- Cost: From acceptable demo spend to sustainable unit economics
- Maintainability: From “I understand it” to “the team understands it in 2 years”
- Compliance: From “works” to “auditable, secure, and legally defensible”
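
To make the first two dimensions concrete, here is a minimal sketch in Python. The names (`create_invoice`, `InMemoryStore`) are hypothetical, invented for this illustration; the point is how much of the gap lives outside the happy path.

```python
import logging

logger = logging.getLogger("billing")


class InvoiceError(Exception):
    """Raised when an invoice write is rejected or fails."""


class InMemoryStore:
    """Stand-in for a real database, so the sketch is self-contained."""

    def __init__(self) -> None:
        self.rows: dict[str, tuple[str, int]] = {}

    def exists(self, invoice_id: str) -> bool:
        return invoice_id in self.rows

    def save(self, invoice_id: str, customer_id: str, amount_cents: int) -> None:
        self.rows[invoice_id] = (customer_id, amount_cents)


# Demo version: works when everything goes right.
def create_invoice_demo(store, invoice_id, customer_id, amount_cents):
    store.save(invoice_id, customer_id, amount_cents)


# Production version: the gap is everything added beyond the one-line body.
def create_invoice(store: InMemoryStore, invoice_id: str,
                   customer_id: str, amount_cents: int) -> None:
    # Correctness: never trust caller input.
    if not invoice_id or not customer_id:
        raise InvoiceError("invoice_id and customer_id are required")
    if not isinstance(amount_cents, int) or amount_cents <= 0:
        raise InvoiceError(f"invalid amount: {amount_cents!r}")
    # Data integrity: the write must be idempotent so retries are safe.
    if store.exists(invoice_id):
        logger.info("invoice %s already recorded, skipping", invoice_id)
        return
    try:
        store.save(invoice_id, customer_id, amount_cents)
    except Exception:
        # Compliance/maintainability: failures must be auditable, not silent.
        logger.exception("invoice write failed for %s", invoice_id)
        raise
```

Everything in the production version beyond the demo's one-line body is gap-crossing work: input validation, idempotency, and auditable failure handling.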
## The Fundamental Asymmetry
Crossing the Production Readiness Gap requires capabilities that LLMs currently lack without structural support; one row of this table is sketched in code below:
| Demo Requirements | Production Requirements |
|---|---|
| Local correctness (this function works) | Global correctness (system behaves consistently) |
| Happy path | All edge cases, error states, failure modes |
| Works once | Works reliably under load, over time |
| Developer understands it | Team maintains it for years |
| Acceptable cost for testing | Sustainable unit economics at scale |
> “You can’t ship ‘90% correct’ to enterprise customers. You can’t have authentication that works ‘most of the time’ or data integrity that’s ‘pretty good.’” — Dan Cripe
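
The “works once” versus “works reliably under load, over time” row follows the same pattern. A minimal sketch, assuming a flaky network dependency; the `flaky_dependency` function and the retry budget here are invented for illustration:

```python
import random
import time


def flaky_dependency() -> str:
    # Stands in for a network call that fails intermittently under load.
    if random.random() < 0.3:
        raise ConnectionError("transient upstream failure")
    return "ok"


def call_reliably(max_attempts: int = 4, base_delay_s: float = 0.1) -> str:
    # "Works once" code calls flaky_dependency() directly and passes a demo;
    # this wrapper is the extra work production demands.
    for attempt in range(1, max_attempts + 1):
        try:
            return flaky_dependency()
        except ConnectionError:
            if attempt == max_attempts:
                raise  # Retry budget exhausted: fail loudly, not silently.
            # Exponential backoff with jitter avoids retry stampedes at scale.
            time.sleep(base_delay_s * (2 ** (attempt - 1)) * random.uniform(0.5, 1.5))
    raise AssertionError("unreachable")


if __name__ == "__main__":
    print(call_reliably())
```

The direct call succeeds often enough to survive a demo; the retry budget, backoff, and jitter are what keep it working under sustained load.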
## The “Missing Incentive” Test
A useful heuristic for evaluating AI capability claims: Are domain experts doing it?
If autonomous agents could spin up production SaaS with small teams, experienced engineers would be doing it en masse. They’re not. The people claiming it’s possible are typically:
- Building personal productivity tools (valid, but not enterprise SaaS)
- Running demos that haven’t hit production
- Not disclosing how much human intervention (L2/L3) is actually happening
## Observability as a Production Requirement
The Production Readiness Gap isn’t just about security, performance, and maintainability—it’s about verifiability in production. If you can’t observe what your code is doing after deployment, you can’t validate that it works.
> “The bottleneck shifts from, ‘How fast can I write code?’ to, ‘How fast can I understand what’s happening and make good decisions about it?’” — Charity Majors
AI has made code generation nearly free. The constraint has shifted to understanding and validating what that code does in production. This reframes production readiness:
| Old Constraint | New Constraint |
|---|---|
| Writing code | Understanding code |
| Testing before deploy | Validating after deploy |
| Hope it works | Observe that it works |
Without observability, you’re “shipping blind”—deploying code that nobody fully understands, with no feedback loop to validate success. See Feedback Loop Compression for how AI enables tighter observe → validate → learn cycles.
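
A minimal sketch of what that feedback loop looks like in code, using only the Python standard library. The `observed` decorator, the `checkout` handler, and the event field names are hypothetical; a real system would ship these events to a metrics or tracing backend rather than stdout:

```python
import json
import logging
import time
from functools import wraps

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("obs")


def observed(handler):
    # Wraps a handler so every call emits outcome + latency: the raw
    # material for "observe that it works" instead of "hope it works".
    @wraps(handler)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        outcome = "ok"
        try:
            return handler(*args, **kwargs)
        except Exception as exc:
            outcome = type(exc).__name__
            raise
        finally:
            logger.info(json.dumps({
                "event": "request",
                "handler": handler.__name__,
                "outcome": outcome,
                "latency_ms": round((time.perf_counter() - start) * 1000, 2),
            }))
    return wrapper


@observed
def checkout(cart_total_cents: int) -> str:
    if cart_total_cents <= 0:
        raise ValueError("empty cart")
    return "charged"


if __name__ == "__main__":
    checkout(1999)  # emits {"event": "request", "outcome": "ok", ...}
```

Every call now produces evidence (outcome plus latency) that can confirm or refute “it works” after deploy, rather than only before.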
## ASDLC Usage
Applied in:
## References
- (2026). The Mythical LLM: Why Rumors of the Death of Software are Premature. Accessed January 24, 2026. Defines the gap and the 'missing incentive' test, differentiating between demo capabilities and enterprise production requirements.
- (2025). Vibe Coding Fails Enterprise Reality Check. Accessed January 24, 2026. James Gosling commentary on complexity scaling failures in vibe-coded projects.
- (2026). "You Had One Job": Why Twenty Years of DevOps Has Failed to Do it. Accessed January 26, 2026. Defines observability as the key constraint in AI-accelerated development: code is cheap, validation is expensive.