March 31, 2026
9 mins

From Bubble to Production: The 8-Day Migration Recipe

Whiteboard sketch of an 8-day timeline with phases from planning session to audit, showing parallel workstreams and handoff artefacts

We compressed what should have been a 12-month rebuild into 8 working days. Here’s the phase-by-phase recipe — and why AI made the economics viable.

Lessons from a Bubble Migration  —  Part 3 of 6

This article is part of a 6-part series exploring the engineering decisions behind rebuilding an AI hiring decisioning platform from Bubble.io to production.

The Recipe Exists Because the First Attempt Failed

Every methodology is a scar from a previous failure. This one is no exception.

The 8-day recipe didn’t emerge from theory. It emerged from watching earlier migration attempts stall — where teams jumped straight to code, discovered gaps three weeks in, and spent more time unwinding bad assumptions than building features. The recipe is a sequencing discipline: it forces you to do the right work in the right order, so AI can do what it’s good at (writing code fast) without doing what it’s bad at (deciding what to build).

The recipe is a sequencing discipline: it forces you to do the right work in the right order, so AI can do what it’s good at without doing what it’s bad at.

The project was an AI hiring decisioning platform — 70 data models, 30 production pages, 200 workflows, 8 external integrations — moving from Bubble.io to Next.js on AWS. The kind of migration that, with a traditional team, takes 12 to 18 months. We did the specification and design in 8 working days. Not the entire build — the complete handoff package that makes the build predictable.

Day 1: The Strategic Planning Session

Ninety minutes. Two people. A recording.

The product owner and the architect sit in a room and talk about pain. Not features. Not technology. Pain. What’s slow. What’s confusing. What customers complain about. What the team avoids touching because it might break something else.

This conversation is the highest-leverage activity in the entire migration. Every minute spent here saves hours later. The product owner knows things about the business that exist nowhere in the codebase — customer workarounds, features that exist but nobody uses, terminology that confuses new users, workflows that made sense three years ago but no longer match how the business operates.

The output isn’t a document. It’s a transcript. The transcript becomes the raw material for everything that follows.

We covered: current pain points and their root causes. The vision for what “good” looks like. Who the users are and what they need. Every major feature and its actual usage pattern. Non-functional requirements — performance, security, compliance. Risks the product owner loses sleep over.

One question from this session reshaped the entire architecture: “What happens when the interviewer loses internet connectivity mid-interview?” The answer — “they lose everything they’ve typed” — made offline-first architecture a non-negotiable requirement. That single requirement influenced the tech stack, the data model, and the deployment architecture. It would never have surfaced from reading the existing code.

Days 1–2: PRD Generation

Feed the transcript to Claude Code. Ask for a comprehensive, numbered set of Product Requirements Documents.

The AI is genuinely excellent at this. Given a detailed conversation transcript and a clear prompt, it produces structured specifications covering architecture, data models, authentication, and feature-by-feature requirements. The hiring platform generated 19 PRDs in two iterations — the first pass produced a solid foundation, the second pass filled gaps we identified during review.

The PRD set followed a consistent structure: an overview and architecture document, a core data model specification, an authentication and authorisation document, then one PRD per major feature area, an integrations document, and non-functional requirements.

The critical step is iteration. The first draft is good but incomplete. Review it with the product owner. Identify gaps. Feed the gaps back. Repeat until the product owner says “yes, that’s everything.” Two to three iterations is typical.

This is where explicit decisions prevent implicit disasters. Each PRD forces a decision about something the old system left vague. The interview workflow states. The permission model. The evaluation hierarchy. The candidate lifecycle. Every decision written down is a decision the AI won’t have to guess at later.

Days 2–3: Data Extraction

Four workstreams, all running in parallel.

Option sets. Every enum, every constant, every dropdown value in the existing application. The hiring platform had 72 option sets — role types, application stages, assessment products, interview statuses. These become TypeScript enums in the new system. Extract them programmatically from the Bubble API, document what each one means, group them by functional area.
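As a sketch, here is how one extracted option set might land in the new codebase. The `ApplicationStage` values are illustrative, not the platform's actual set; the point is that each Bubble option set becomes a typed enum plus its display labels, instead of a dropdown only the editor knows about.

```typescript
// Illustrative only: "ApplicationStage" and its values stand in for
// one of the 72 extracted option sets.
export enum ApplicationStage {
  Applied = "applied",
  Screening = "screening",
  Interviewing = "interviewing",
  OfferExtended = "offer_extended",
  Rejected = "rejected",
}

// Human-readable labels travel with the enum instead of living in a dropdown.
export const APPLICATION_STAGE_LABELS: Record<ApplicationStage, string> = {
  [ApplicationStage.Applied]: "Applied",
  [ApplicationStage.Screening]: "Screening",
  [ApplicationStage.Interviewing]: "Interviewing",
  [ApplicationStage.OfferExtended]: "Offer extended",
  [ApplicationStage.Rejected]: "Rejected",
};
```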

AI configuration. Every AI assistant, every prompt, every model configuration. The existing platform used OpenAI Assistants — stateful threads with per-application vector stores. Over 2,000 individual assistant instances. All of that needed to be captured, understood, and re-architected as stateless chat completions. Document the system prompts, the function schemas, the polling patterns, the data flows.
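A minimal sketch of the re-architecture, assuming a simplified application context (the `ApplicationContext` shape and system prompt are hypothetical): where the old design kept one stateful Assistant thread per application, the stateless design rebuilds the full message list on every call, so no server-side conversation state survives between requests.

```typescript
interface ApplicationContext {
  candidateName: string;
  roleTitle: string;
  transcriptSoFar: string[]; // prior exchanges, stored in our own database
}

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Builds the payload for a stateless chat-completion call: all prior
// context is replayed inline instead of living in a stored thread.
function buildStatelessRequest(
  ctx: ApplicationContext,
  userQuestion: string,
): ChatMessage[] {
  return [
    {
      role: "system",
      content: `You are an interview copilot for the ${ctx.roleTitle} role.`,
    },
    ...ctx.transcriptSoFar.map(
      (line): ChatMessage => ({ role: "user", content: line }),
    ),
    { role: "user", content: userQuestion },
  ];
}
```

Because the request is a pure function of stored data, there is nothing to poll and nothing to garbage-collect when an interview ends.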

Workflow documentation. The business logic. In Bubble, this lives in visual workflows — conditional rules, API sequences, page behaviours. Use extraction tools to generate detailed documentation of each workflow: what triggers it, what conditions it checks, what actions it takes, what data it modifies. Query in priority tiers — core flows first, supporting features second, infrastructure patterns last.
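One way to give the extracted workflows a consistent shape; the field names here are hypothetical, not Bubble's export format, but each record answers the four questions above.

```typescript
// Hypothetical schema for one extracted workflow record.
interface WorkflowDoc {
  name: string;
  trigger: string;         // e.g. "Button 'Submit application' clicked"
  conditions: string[];    // guard conditions checked before the actions run
  actions: string[];       // ordered side effects
  dataTouched: string[];   // data types the workflow reads or modifies
  priorityTier: 1 | 2 | 3; // 1 = core flows, 2 = supporting, 3 = infrastructure
}

// Sorting by tier reproduces the "core flows first" extraction order.
function byPriority(docs: WorkflowDoc[]): WorkflowDoc[] {
  return [...docs].sort((a, b) => a.priorityTier - b.priorityTier);
}
```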


Screenshots. Every page, every state, every modal. Automated browser capture of the entire existing application. This creates a visual reference for mockups and a validation checklist for the rebuild. The hiring platform produced 50 screenshots organised by module.
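The capture itself can be driven by a manifest. Here is a sketch, with hypothetical routes, of the naming scheme that keeps the screenshots organised by module; a browser-automation tool (for example Playwright's `page.screenshot({ fullPage: true })`) would iterate the list and write each file.

```typescript
// Hypothetical manifest entry for one page/state to capture.
interface CaptureTarget {
  module: string; // folder the screenshot is filed under
  route: string;  // app route to visit
  state?: string; // optional UI state, e.g. "modal-open"
}

// Maps a capture target to a stable, greppable file path.
function screenshotPath(t: CaptureTarget): string {
  const state = t.state ? `--${t.state}` : "";
  const slug = t.route.replace(/^\//, "").replace(/\//g, "_") || "home";
  return `screenshots/${t.module}/${slug}${state}.png`;
}
```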

These workstreams are independent and can run in parallel. Together they produce the complete extraction of institutional knowledge from the existing platform — knowledge that lives nowhere except in the running application and the product owner’s head.

Days 2–3 (Parallel): The CLAUDE.md File

This is the single most important artefact in the entire migration.

CLAUDE.md is the file that AI development tools read at the start of every conversation. It’s your project’s ground truth — the document that prevents the AI from making wrong assumptions about your architecture, your terminology, your conventions, and your constraints.

A good CLAUDE.md contains: a two-paragraph project overview. A reference to the PRD set. The complete tech stack with rationale for each choice. Infrastructure details. Architecture decisions — multi-tenancy approach, role hierarchy, key routes, real-time patterns. Domain terminology — the specific words your business uses, defined precisely so the AI doesn’t normalise them to generic alternatives. Key design decisions with their rationale.
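Sketched as a skeleton, with placeholder content, that structure might look like:

```markdown
# Project: <name>

## Overview
Two paragraphs: what the product does and who uses it.

## Specifications
PRDs live in /docs/prds; read the relevant PRD before changing a feature.

## Tech stack
Next.js, TypeScript, Tailwind CSS. Rationale: mockups built in this stack
transfer to production unchanged.

## Architecture decisions
- Multi-tenancy approach: ...
- Role hierarchy: ...
- Offline-first: interviewers must never lose typed notes on connection loss.

## Domain terminology
- "Application": one candidate pursuing one job. Never normalise to
  "submission" or other generic alternatives.
```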

The 30 minutes you spend writing CLAUDE.md saves hours across every future conversation with every AI tool that touches your codebase. It’s the difference between an AI that understands your project and an AI that treats every conversation as a cold start.

Days 3–5: Mockup Construction

Build mockups in the production tech stack. Not Figma. Not wireframes. Actual Next.js components with TypeScript interfaces, Tailwind CSS, and realistic sample data.

This is the critical decision that most teams get wrong. The conventional approach — design in a design tool, then translate to code — introduces a translation tax on every screen. The alternative: build directly in the stack. Components transfer to production unchanged. The product owner reviews something that runs in a browser.

The build order matters: TypeScript interfaces from the data model PRD first. Realistic sample data second — data that exercises edge cases, not happy paths. Shared components third — navigation, layout shells, data tables, form patterns. Then individual pages, one at a time, each built against its PRD specification.
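The first two steps of that order might look like this, with a hypothetical `Candidate` shape and sample rows deliberately chosen to exercise edge cases: a missing score, a long hyphenated name, a non-Latin name.

```typescript
// Step 1: interface derived from the data model PRD (shape is illustrative).
interface Candidate {
  id: string;
  fullName: string;
  stage: "applied" | "interviewing" | "offer" | "rejected";
  score: number | null; // null = not yet evaluated, an edge case the UI must render
}

// Step 2: realistic sample data that exercises edge cases, not happy paths.
export const SAMPLE_CANDIDATES: Candidate[] = [
  { id: "c1", fullName: "Amara Okafor", stage: "interviewing", score: 82 },
  { id: "c2", fullName: "Jo Müller-van der Berg", stage: "applied", score: null },
  { id: "c3", fullName: "李雨桐", stage: "offer", score: 91 },
];

// A helper a pipeline page might use to group candidates by stage.
export function byStage(stage: Candidate["stage"]): Candidate[] {
  return SAMPLE_CANDIDATES.filter((c) => c.stage === stage);
}
```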

The hiring platform produced 23 production-quality pages in this phase. Dashboard. Job management. Candidate pipeline. Interview conductor. Evaluation framework editor. Analytics. Admin panels. Notifications. Each one responsive, interactive, and populated with realistic data.

Days 5–7: Product Review and Iteration

Two recorded review sessions with the product owner. Walk through every mockup. Capture reactions, corrections, and new requirements.

The first session always produces the biggest changes. The product owner sees their product reimagined and has strong opinions. The interview conductor — the highest-stakes screen — needed a fundamentally different layout. AI assistance needed to be contextual, not hidden behind a sidebar. The navigation structure needed to reflect how customers work, not how the builder organised features internally.

Extract changes from the transcript. Categorise them: PRD changes, mockup changes, confirmed decisions. Apply changes the same day. This velocity is the point — when a change goes from “identified” to “implemented” in four hours instead of four sprints, the review cycles tighten dramatically.

When a change goes from “identified” to “implemented” in four hours instead of four sprints, the review cycles tighten dramatically.

The second session is refinement. Terminology adjustments. Layout polish. Edge case discussions. By this point, the product owner is looking at something that feels like the real product, and their feedback shifts from “this is wrong” to “this could be better.”

Days 7–8: Audit and Readiness Check

The final gate before implementation begins.

Cross-reference every feature mentioned in every PRD against the mockup pages. Classify each as: ready (PRD and mockup exist, product owner approved), needs clarification (PRD mentions it but key details are missing), not specified (discovered during mockups but not in any PRD), or deferred (explicitly out of scope for V1).
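The classification is mechanical enough to express as code. A sketch, with the four statuses from above; the `AuditRow` fields are hypothetical, but each maps to a question the audit answers per feature.

```typescript
type Readiness = "ready" | "needs-clarification" | "not-specified" | "deferred";

// Hypothetical per-feature audit record.
interface AuditRow {
  feature: string;
  hasPrd: boolean;        // is it specified in any PRD?
  hasMockup: boolean;     // does an approved mockup page cover it?
  ownerApproved: boolean; // did the product owner sign off?
  deferred: boolean;      // explicitly out of scope for V1?
}

function classify(row: AuditRow): Readiness {
  if (row.deferred) return "deferred";
  if (row.hasPrd && row.hasMockup && row.ownerApproved) return "ready";
  if (row.hasPrd) return "needs-clarification";
  return "not-specified"; // surfaced during mockups but absent from every PRD
}
```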

The hiring platform audit covered 23 pages and approximately 50 features. The result: a clear picture of what’s ready to build, what needs one more conversation, and what’s been consciously deferred. No ambiguity. No surprises.

The Handoff Package

Eight days of intensive work produces 13 artefacts:

CLAUDE.md. The PRD set with dependency graph. Migration goals. AI configuration documentation. Option sets — both raw data and descriptive documentation. Workflow documentation from extraction tools. Screenshots of the existing application. The design brief. The design system specification. TypeScript interfaces. All mockup pages, approved. The audit report.

This package is what makes the implementation phase predictable. Every decision has been made. Every screen has been designed. Every data model has been specified. Every edge case has been discussed. The AI coding tools receive explicit, comprehensive instructions instead of vague briefs.

Why 8 Days and Not 8 Months

Three factors compressed the timeline.

AI as analyst. Claude Code didn’t just write code in this phase — it generated PRDs from transcripts, extracted data from APIs, produced documentation, and built mockups. The same tool that will eventually write the production code was the primary tool for specification and design. This eliminates the handoff between “planning team” and “development team” — it’s all one continuous process.

Parallel workstreams. Data extraction, documentation, and CLAUDE.md creation all happen simultaneously. The recipe is sequenced where dependencies exist and parallel where they don’t.

Same-day iteration. When a review session identifies a change, it’s implemented in hours. No ticket queue. No sprint boundary. No “we’ll get to that next week.” The feedback loop is tight enough that two review sessions in a week produce more refinement than six sessions spread across six weeks.

The economics work because AI made the labour cheap. A 90-minute conversation produces 19 PRDs in a day. A design brief produces 23 mockup pages in three days.

The economics work because AI made the labour cheap. A 90-minute conversation produces 19 PRDs in a day. A design brief produces 23 mockup pages in three days. A review session produces corrected mockups by end of day. These timelines were impossible two years ago. They’re routine now.

What the Recipe Doesn’t Include

The 8-day recipe covers specification and design — not implementation. The production build happens after the handoff package is complete, and it follows a different rhythm: milestone-based, with the highest-risk integrations first, core application second, and polish last.

But the recipe is what makes the build fast. When every decision has been made explicitly, when every screen has been designed and approved, when every data model has been specified — implementation becomes largely mechanical. The AI writes code against clear specifications instead of vague briefs. The product owner reviews working features against approved mockups instead of guessing whether the output matches their intent.

The maths has changed. Cheaper execution demands better planning. The 8-day recipe is how you do the planning at AI speed without sacrificing the discipline that makes AI-generated code actually work.