March 31, 2026
8 mins

You Don’t Have a Code Problem. You Have a Requirements Problem.

Whiteboard sketch showing requirements documents flowing into code, with garbage in garbage out amplified by AI speed

AI can generate thousands of lines of code in minutes. That makes vague requirements more dangerous than ever — because now you build the wrong thing at 10x speed.

The Most Expensive Conversation You Never Had

We chose to write 19 PRDs — Product Requirements Documents — before writing a single line of production code. The product owner wanted to start coding on day one. We said no — and spent the first two days just talking.

This was a recent migration we led — an AI hiring decisioning platform, rebuilt from Bubble.io to Next.js on AWS. The kind of project where every instinct says start building. The platform is painful. The new stack is ready. The AI tools are fast. Why are we still in a meeting room?

Because the meeting room is where you prevent the six-figure mistake. Not the code. Not the deployment. Not the infrastructure. The conversation you didn’t have — about what the system actually needs to do, for whom, and why — is the single most expensive omission in software development. It always has been. AI just made it worse.

Garbage In, Garbage Out — At 10x Speed

The oldest rule in computing has a new twist.

When a traditional development team receives a vague brief, they build slowly. The ambiguity surfaces gradually — in standups, in code reviews, in QA. Someone asks a question. The team course-corrects. It’s inefficient, but the friction creates natural checkpoints where bad assumptions get caught.

When an AI coding tool receives a vague brief, it builds fast. It doesn’t ask questions. It doesn’t flag ambiguity. It makes decisions — dozens of them, silently, confidently — and produces code that compiles, runs, and looks correct. Every assumption it makes is internally consistent. The data model makes sense. The API routes are clean. The tests pass.

And the whole thing is wrong in ways you won’t discover until your customers do.

This is the GIGO amplification. A brief like “build an interview tool” sounds clear enough. But it embeds silent questions: What’s a candidate? What’s an application? Can a candidate apply to multiple jobs? Who sees what? What happens when there are zero applicants? What does “in progress” mean for an interview — scheduled, started, or halfway through? Every one of those questions becomes a decision in the code. If you didn’t make the decision explicitly, the AI made it for you. Its guesses will be plausible. They will be internally consistent. And they will be wrong for your business.
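To make that concrete, here is a minimal sketch of what answering those silent questions looks like in code. Every name here (Candidate, Application, the status values) is illustrative, not the platform's actual model — the point is that each comment records a decision a human made, rather than one the AI guessed.

```typescript
// Each silent question from the brief becomes an explicit, reviewable decision.

// Decision: a candidate is a person, identified across jobs by email.
interface Candidate {
  id: string;
  email: string;
}

// Decision: a candidate CAN apply to multiple jobs — one Application per job.
// Decision: exactly four application states, none implicit.
interface Application {
  id: string;
  candidateId: string;
  jobId: string;
  status: "RECEIVED" | "IN_REVIEW" | "REJECTED" | "HIRED";
}

// Decision: "zero applicants" is a valid, renderable state — not an error.
function describeApplicants(apps: Application[]): string {
  return apps.length === 0 ? "No applicants yet" : `${apps.length} applicant(s)`;
}
```

None of this is clever code. That is the point: the value is in the decisions the types force you to write down, not in the implementation.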

The discipline isn’t new. It’s requirements engineering — the oldest, most undervalued practice in software. What’s new is the cost of skipping it. When your developers were writing code by hand, a vague requirement produced a slow build of the wrong thing. When AI writes the code, a vague requirement produces a fast build of the wrong thing. Same mistake. Ten times the waste.

Nineteen Documents Before a Single Line of Code

Here’s what the first two days of the migration looked like: no code. No prototypes. No technical decisions. Just two people in a room — the product owner and the architect — having a conversation.

Ninety minutes, recorded. The product owner talked about what “clunky” actually meant. Not slow page loads — though those too — but invisible workflows where users couldn’t see where they were in a process. Buried features that existed but nobody found. Terminology that made sense to the builder but confused paying customers. A UI designed for the person who built it, not the person who uses it.

From that transcript, we produced 19 Product Requirements Documents. Not because we love paperwork. Because each PRD forced an explicit decision about something the old system had left implicit. The candidate data model. The permission structure. The interview workflow states. The evaluation framework hierarchy. The integration points. The edge cases.

Here’s the part that surprises people: AI is excellent at generating structured specifications. Feed Claude Code a 90-minute conversation transcript and ask for a comprehensive PRD set, and it will produce something genuinely useful — numbered, cross-referenced, covering architecture, data models, authentication, and feature-by-feature specifications. The output is good. Sometimes it’s very good.

But someone has to have the conversation first.

The AI can structure what you know. It cannot know what you know. The product owner’s insight that “clunky” meant six different things — that only emerged because someone asked, and someone listened, and someone pushed back with “what do you mean by that, specifically?” No AI tool in existence can substitute for a domain expert and a technical architect sitting in a room, working through what the product actually needs to do.

The PRD Isn’t Bureaucracy. It’s Insurance.

There’s a persistent myth in startup culture that documentation slows you down. That real builders ship code, not specs. That PRDs are a relic of waterfall methodology, incompatible with modern agile development.

This has always been wrong, but it used to be forgivable. When code was expensive to write, you could argue that the cost of extensive documentation exceeded the cost of rework. You’d build, learn, and iterate. The waste was tolerable because the iteration speed was slow enough to absorb it.

That argument collapsed the moment AI made code cheap to write. When you can generate thousands of lines in minutes, the cost of rework isn’t an afternoon — it’s an architecture. A wrong assumption about your data model doesn’t mean fixing a few database columns. It means the AI built an internally consistent system around that wrong assumption — routes, components, validation logic, tests — all perfectly implementing the wrong thing. Unwinding that is harder than starting again.

The PRD is your insurance against this. Each document is a set of explicit decisions. Not “the system should handle interviews” but “an interview moves through five states: SCHEDULED, IN_PROGRESS, PAUSED, COMPLETED, CANCELLED. Only the assigned interviewer or an admin can transition states. A completed interview cannot be reopened. An in-progress interview must have at least one note before it can be marked complete.”

That level of specificity isn’t bureaucracy. It’s the difference between an AI that builds what you need and an AI that builds what it guessed you meant.
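A spec written at that level of precision translates almost mechanically into code. Here is a minimal sketch of those interview rules as an explicit state machine — the shapes of Interview and the actor object are assumptions for illustration, but every rule in the code maps to a sentence in the PRD:

```typescript
type InterviewState = "SCHEDULED" | "IN_PROGRESS" | "PAUSED" | "COMPLETED" | "CANCELLED";

// The only legal transitions, written down once.
const TRANSITIONS: Record<InterviewState, InterviewState[]> = {
  SCHEDULED: ["IN_PROGRESS", "CANCELLED"],
  IN_PROGRESS: ["PAUSED", "COMPLETED", "CANCELLED"],
  PAUSED: ["IN_PROGRESS", "CANCELLED"],
  COMPLETED: [], // PRD rule: a completed interview cannot be reopened.
  CANCELLED: [],
};

interface Interview {
  state: InterviewState;
  assignedInterviewerId: string;
  noteCount: number;
}

function canTransition(
  interview: Interview,
  to: InterviewState,
  actor: { id: string; isAdmin: boolean }
): boolean {
  // PRD rule: only the assigned interviewer or an admin can transition states.
  if (!actor.isAdmin && actor.id !== interview.assignedInterviewerId) return false;
  // PRD rule: an in-progress interview needs at least one note before completion.
  if (to === "COMPLETED" && interview.noteCount === 0) return false;
  return TRANSITIONS[interview.state].includes(to);
}
```

Hand an AI tool the paragraph above and it can produce something like this reliably. Hand it "the system should handle interviews" and it will invent its own table — one you never reviewed.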

“Let the AI Figure It Out” Is the New “Let the Intern Decide”

Every engineering team has a version of this story: a critical architectural decision was made by whoever happened to be writing the code that day. Nobody explicitly decided on the permission model — the developer just picked something reasonable. Nobody explicitly decided on the data model for multi-tenancy — the first feature that needed it set the pattern, and everything else followed.

These implicit decisions compound. Each one is small. Each one is “reasonable.” And six months later, you’re looking at a system where the permission model is spread across fourteen different files, each implementing a slightly different interpretation of who should see what, because nobody ever wrote down the rules.
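Writing the rules down can be as small as one module. Here is a hypothetical sketch — the roles, resources, and actions are invented for illustration — of a permission model that lives in one place instead of fourteen files:

```typescript
type Role = "ADMIN" | "INTERVIEWER" | "RECRUITER";
type Action = "view" | "edit" | "delete";
type Resource = "candidate" | "interview" | "evaluation";

// The single source of truth: every (role, resource, action) triple
// is an explicit decision. Anything not listed here is denied.
const POLICY: Record<Role, Partial<Record<Resource, Action[]>>> = {
  ADMIN: {
    candidate: ["view", "edit", "delete"],
    interview: ["view", "edit", "delete"],
    evaluation: ["view", "edit", "delete"],
  },
  INTERVIEWER: {
    interview: ["view", "edit"],
    evaluation: ["view", "edit"],
  },
  RECRUITER: {
    candidate: ["view", "edit"],
    interview: ["view"],
  },
};

function can(role: Role, action: Action, resource: Resource): boolean {
  return POLICY[role][resource]?.includes(action) ?? false;
}
```

Fourteen files each implementing "a slightly different interpretation of who should see what" collapse into one table that the product owner can actually read and veto.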

AI amplifies this pattern dramatically. When you tell an AI to “build the admin dashboard,” it will make decisions about role hierarchy, data visibility, action permissions, and audit logging. Every decision will be competent. None of them will be yours. And when you discover — three months and forty features later — that the AI’s interpretation of “admin” doesn’t match your business’s interpretation of “admin,” you’re not fixing a bug. You’re redesigning an architecture.

The fallacy of the almost-done software project is exactly this: a system that looks 90% complete but embeds dozens of unexamined assumptions, each of which can unravel the whole thing when it meets reality. AI didn’t create this problem. AI made it faster to arrive at.

Don’t Port. Re-evaluate.

There’s a second trap hiding in migrations, and it’s subtler than vague requirements. It’s the assumption that you’re rebuilding what you had.

You’re not. Or at least, you shouldn’t be.

The product owner described the existing app as “clunky.” When we unpacked that word across 90 minutes, it meant: users couldn’t tell where they were in a workflow. Features existed but were buried three clicks deep. The terminology on screen — labels, buttons, section headers — used internal jargon that meant nothing to customers. The navigation was designed for the person who built the app, not the person paying for it.

None of this was a technology problem. It was a product problem. And a feature-for-feature port to a new tech stack would have faithfully reproduced every one of those issues in faster, shinier code.

The migration is the moment to fix what you couldn’t fix on the old platform. Not just the technology — the product itself. What do your paying customers actually need? What do they struggle with? What features do they never use, and why? These are requirements questions, not code questions. And they belong in your PRDs before anyone touches a keyboard.

The Bolt-On Trap

Here’s the version of this that every business leader will recognise.

In V1 of your product, you built features as customers requested them. Chat widget here. Dashboard there. Export button there. Each feature made sense in isolation. Together, they created something held together with duct tape and good intentions.

Now the feature request is “AI.” And the instinct is the same: bolt it on. Add a chatbot. Add an AI summary button. Add “powered by AI” to the marketing page.

But bolting AI onto a bolted-together product just makes the mess more expensive to run. The chat widget now has an AI layer that doesn’t understand your data model. The dashboard has AI summaries that hallucinate because they’re pulling from inconsistent data sources. The export has AI-generated insights that contradict the numbers on the previous page.

The platform we rebuilt didn’t add “an AI feature.” We redesigned the interview workflow so that AI coaching, real-time transcription, and structured evaluation were native to the process — woven into every step, not bolted onto the sidebar. That decision came from the requirements phase. From asking “where does intelligence genuinely reduce friction?” rather than “where can we put a chatbot?”

That question — where does AI fit into what users actually need — is a requirements question. The math has changed. Cheaper execution demands better planning, not less of it.

The 80/20 That Nobody Talks About

The engineering disciplines that make software work — requirements, architecture, security, QA, deployment, operations — haven’t changed. These are decision-making disciplines, not typing disciplines. What AI changed is the typing: the mechanical act of turning decisions into code.

That’s a genuine revolution. But it’s the last 20% of the work.

The first 80% — figuring out what to build, who it’s for, what “done” looks like, where the edge cases hide — is still human judgement. AI will happily write your data model, your API routes, your test suite, your deployment scripts. It’ll do it fast, and it’ll do it well. But it writes them based on your inputs. Rubbish requirements produce perfectly passing tests for rubbish features. A green test suite isn’t proof of quality; it’s proof that AI is good at implementing whatever you told it.

The discipline was always in the decisions. Now the cost of bad decisions is amplified, because they propagate at 10x speed through a codebase that looks, by every automated measure, like it’s working perfectly.

What This Means for Your Next Project

If you’re planning a migration, a rebuild, or a new product — and you’re counting on AI to accelerate development — start here:

Have the conversation. Get the product owner and someone with technical architecture experience in a room for 90 minutes. Record it. Talk about what the system needs to do, for whom, and why. Talk about what’s wrong with the current system — not the technology, but the product.

Write it down. Turn that conversation into structured specifications. PRDs, user stories, data models, workflow diagrams — the format matters less than the act of making every decision explicit. If it’s not written down, it’s not a decision. It’s a hope.

Don’t port. Re-evaluate. The migration is your chance to fix what was wrong, not just what was slow. Question every feature, every workflow, every label through the eyes of your paying customers.

Then — and only then — let the AI write the code. With explicit requirements, AI development tools are genuinely extraordinary. The hiring platform went from 19 PRDs to production deployment in weeks, not months. The constraint wasn’t the coding speed. It was never the coding speed. The constraint was getting the decisions right.

Your developers aren’t slow. They’re building the wrong thing because nobody told them the right thing. AI just lets them build it faster.