Further Reading

This feed tracks external thought leadership, articles, and frameworks that align with ASDLC principles but don't necessarily constitute structural evidence. We log convergent thinking from engineers, researchers, and organizations navigating agentic development as we encounter and synthesize it.

Built by agents, tested by agents, trusted by whom?

A legal and regulatory critique of the "Dark Factory" non-interactive software development model. It highlights that traditional software accountability mechanisms—product liability, professional licensing, and algorithmic audits—break down when human code review is entirely eliminated. It introduces the critical concepts of the Liability Gap, Disclosure Gap, and Contractual Gap inherent in sustained L4 Autonomy operations.

The Agentic Software Factory

An empirical case study detailing a multi-agent factory building security features into an enterprise identity server. It provides structural validation for Adversarial Code Review by using three distinct models in parallel, demonstrating that "models are better adversaries than collaborators." It also highlights the critical operational need for Identity Separation (giving distinct models their own API credentials) for auditability.
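The parallel-reviewer pattern with per-model identity separation could look roughly like the sketch below. The model names, credential environment variables, and the trivial string check standing in for a real model call are all illustrative assumptions, not details from the case study; the point is that each reviewer runs under its own credential so every finding is attributable.

```python
import os
from dataclasses import dataclass

@dataclass
class Finding:
    reviewer: str   # distinct identity recorded for auditability
    issue: str

def make_reviewer(name: str, credential_env: str):
    """Build a reviewer bound to its own API credential, so findings
    from different models are never conflated under one identity."""
    api_key = os.environ.get(credential_env, "<unset>")
    def review(diff: str) -> list[Finding]:
        # Placeholder for a real model call using `api_key`; here we
        # just flag one obviously risky pattern to keep the sketch runnable.
        issues = []
        if "eval(" in diff:
            issues.append(Finding(name, "use of eval() in diff"))
        return issues
    return review

# Three distinct models reviewing the same change in parallel roles.
reviewers = [
    make_reviewer("model-a", "MODEL_A_API_KEY"),
    make_reviewer("model-b", "MODEL_B_API_KEY"),
    make_reviewer("model-c", "MODEL_C_API_KEY"),
]

def adversarial_review(diff: str) -> list[Finding]:
    """Run every reviewer independently and merge their findings."""
    return [f for review in reviewers for f in review(diff)]

findings = adversarial_review("result = eval(user_input)")
for f in findings:
    print(f"{f.reviewer}: {f.issue}")
```

Because each finding carries the reviewer's identity, disagreement between models is visible rather than averaged away, which is what makes them useful adversaries.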

Microsoft's Agent Factory

An interview with Jay Parikh, Microsoft's EVP of Core AI, that validates the shift toward an Agent Factory model. Parikh notes that tracking lines of AI-generated code is the wrong metric; the true industrial goal is eliminating "run the business" technical debt to free humans for creative architecture. He also underscores the limitations of one-dimensional evaluations versus empirical, multi-dimensional "lived experience" in tracking agent value.

Software Factories and the Agentic Moment

Provides a look into a "Dark Factory" where AI teams build software without any human code review. It details replacing boolean test success with Probabilistic Satisfaction across thousands of Holdout Scenarios, enabled by Digital Twins of external services. This serves as critical validation for how an AI Software Factory can safely operate at L4 Autonomy without drifting.

"Because much of the software we grow itself has an agentic component, we transitioned from boolean definitions of success (“the test suite is green”) to a probabilistic and empirical one... We use the term satisfaction to quantify this validation: of all the observed trajectories through all the scenarios, what fraction of them likely satisfy the user?"
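As a rough illustration of the satisfaction metric described in the quote: assume each holdout scenario yields several judged trajectories, and the metric is simply the fraction of all trajectories, across all scenarios, that satisfy the user. The scenario names, data, and threshold below are invented.

```python
def satisfaction(scenario_results: dict[str, list[bool]]) -> float:
    """Fraction of all observed trajectories, across all scenarios,
    judged to satisfy the user's intent."""
    trajectories = [ok for results in scenario_results.values() for ok in results]
    return sum(trajectories) / len(trajectories)

# Each scenario produces multiple observed trajectories; a judge marks
# each one as satisfying the user or not.
holdout = {
    "password-reset": [True, True, False, True],
    "billing-dispute": [True, False, True, True],
}

score = satisfaction(holdout)          # 6 of 8 trajectories satisfy -> 0.75
release_gate_passes = score >= 0.95    # gate on a threshold, not a boolean
```

The shift this models is the interesting part: "the suite is green" becomes "satisfaction cleared the bar," a number that can be tracked across releases and tightened over time.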

Spec-Driven Development / Shifting Responsibility

Source: Dylan Brown

Provides a profound human-centric critique of naive "spec-as-source" development. It highlights that separating engineers from implementation details leads to a degraded mental model ("Context Rot") and removes the inherent "joy of engineering." This serves as strong qualitative validation for the ASDLC spec-anchored philosophy—specs provide intent and boundaries, but maintaining interaction with the deterministic code is crucial for sustainable development.

"The tools can of course take in the context from the whole solution, but as that grows over time the quality of the model’s work will degrade. We see this with the context rot problem that many people are facing... there’s still so much more that you’re not realising by not actually getting into the weeds and tackling the problem yourself."

The AI Triangle: The Bottleneck Nobody Priced In

Source: Alex Bond

Provides excellent qualitative validation of the ASDLC's core thesis that the bottleneck in software engineering is shifting from generation to verification. It identifies that as the effort of coding goes down, the cognitive load of verification (checking architectural integrity, reviewing PRs from AI agents) goes up. This thoroughly supports our emphasis on deterministic enforcement using Context Gates.

"The bottleneck shifts from doing to checking, and checking doesn’t get faster just because the doing did."
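A minimal sketch of what deterministic enforcement might look like in practice, cheap mechanical checks run before any expensive human or agent verification effort is spent. The gate names and rules here are hypothetical illustrations, not the ASDLC Context Gate specification.

```python
# Each gate is a named, deterministic predicate over a proposed change.
GATES = [
    ("spec reference", lambda pr: "Spec:" in pr["description"]),
    ("tests included", lambda pr: any(p.startswith("tests/") for p in pr["files"])),
    ("bounded size",   lambda pr: pr["lines_changed"] <= 400),
]

def run_gates(pr: dict) -> list[str]:
    """Return the names of failed gates; an empty list means the change
    may proceed to (more expensive) verification."""
    return [name for name, check in GATES if not check(pr)]

pr = {
    "description": "Spec: AUTH-142. Adds token rotation.",
    "files": ["src/auth.py", "tests/test_auth.py"],
    "lines_changed": 180,
}
failures = run_gates(pr)   # [] -> all gates pass
```

Because the checks are deterministic, they absorb part of the verification load without adding to the human reviewer's cognitive budget.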

Is software engineering still a craft?

Source: Swarmia Blog

Explores the human impact of the shift from craft-based development to industrial-scale agentic production. It acts as qualitative validation of the ASDLC "industrialization" thesis, mirroring the assertion that AI creates a "factory production line" feeling versus a traditional "craft." It explicitly warns against vibe-coding production systems and emphasizes that rigorous engineering practices still apply.

Boris Cherny: Plan-and-Iterate Discipline

Source: The Peterman Podcast
Role: Principal Engineer, Anthropic; Creator of Claude Code

Advocates for a disciplined AI workflow: Ask the model to generate a plan first, implement in small iterative steps, and write by hand where you have strong technical opinions. This closely mirrors ASDLC's Spec-Driven Development and Context Gates.

"Speed is seductive. Maintainability is survival."

Matt Watson: Product Thinking as Core Competency

Source: LinkedIn and Product Driven
Role: 5x Founder & CTO, CEO of Full Scale

Argues that "vibe coders" outperform average engineers because they focus on product outcomes, not just implementation. In an AI world where "just build this" work is automated, human engineers must decide what matters.

"For years, we rewarded engineers for staying in their lane, closing tickets, and not rocking the boat. Then we act surprised when they don't think like owners... Product thinking isn't a bonus skill anymore. In an AI world, it's the job."

Rasmus Widing: Product Requirement Prompts (PRPs)

Source: GitHub
Role: Engineering Leader; Creator of PRP methodology

Defines the minimum viable specification an AI agent needs to ship production-ready code in one pass: "PRD + curated codebase intelligence + agent runbook." His principles—plan before you prompt, context is everything, and scope to what the model can do reliably—mirror ASDLC's Spec-Driven Development philosophy.
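The "PRD + curated codebase intelligence + agent runbook" formula might be modeled as a simple structure like the following; the field names and example content are assumptions for illustration, not Widing's actual schema.

```python
from dataclasses import dataclass

@dataclass
class PRP:
    prd: str                          # what to build and why (requirements)
    codebase_intelligence: list[str]  # curated files, patterns, gotchas
    runbook: list[str]                # ordered steps the agent should follow

    def is_shippable(self) -> bool:
        """A PRP is only complete when all three parts are present,
        matching the one-pass, production-ready intent."""
        return bool(self.prd and self.codebase_intelligence and self.runbook)

prp = PRP(
    prd="Add rate limiting to the public API (100 req/min per key).",
    codebase_intelligence=[
        "src/middleware.py uses a decorator chain",
        "Redis client is configured in src/cache.py",
    ],
    runbook=["write failing test", "implement middleware", "run full suite"],
)
```

Treating the spec as a structured object rather than freeform prose makes "scope to what the model can do reliably" checkable before the agent ever runs.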

Industry Data Points: The Technical Debt Warning

Sources: Google, Anthropic, Forrester Research

Various data points highlighting the risks of agentic development without discipline:

  • Google (2024): Approximately 30% of code is AI-generated; the same year, copy-pasted code exceeded refactored code for the first time.
  • Anthropic: Claude Code adoption led to a 70% productivity increase, validating agentic power when structured properly.
  • Forrester: Predicts that 75% of tech leaders will face moderate-to-severe technical debt by 2026 as AI-assisted development prioritizes speed over maintainability.