How do I run a revision round on a project with AI?

v1.0 · Updated 2026-04-27 · Source: crystopa-forge-sop-revision-workflow.md
Direct answer

Take the unstructured dump of changes verbatim, group it into sprints by dependency and blast radius, save the plan as a versioned revision doc, then have the AI execute one sprint at a time — smallest effective change per task, verify before moving on. Add proactive suggestions you noticed during execution. Each new round of feedback gets a new doc, never appended to the current one.

The Spec

What this is

A seven-step workflow for executing a revision round on any project — client website, SaaS product, internal tool — with an AI coding agent. The point is to organize chaos before touching code so the agent has a clear plan to follow.

1. Revision Dump

The process starts with an unstructured list of changes. Sources: client review, internal QA, stakeholder feedback in chat, developer self-audit.

Capture rules

  • Write everything down verbatim — don't filter or prioritize yet.
  • Include exact wording for copy changes.
  • Note any references — screenshots, URLs, competitor examples.
  • Flag items needing client decisions vs. items you can just do.

2. Task Segmenting

Organize the dump into logical groups:

| Grouping criteria | Why |
| --- | --- |
| Dependency | Backend changes that the frontend depends on go first |
| Blast radius | Frontend-only (low risk) before backend/DB (higher risk) |
| File locality | Group changes that touch the same files together |
| Feature area | Cart changes together, checkout changes together, etc. |

Output: a numbered sprint list where each sprint is a coherent batch that can be tested independently.
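The grouping pass above can be sketched in code. This is an illustrative sketch only, not part of the SOP: the `Task` fields, the risk labels, and `group_into_sprints` are all assumed names. It batches tasks whose dependencies are met, then orders each sprint by blast radius and feature area.

```python
from dataclasses import dataclass, field

# Risk ordering from the blast-radius criterion: frontend-only first,
# backend/DB last. These labels are illustrative, not from the SOP.
RISK = {"frontend": 0, "backend": 1, "database": 2}

@dataclass
class Task:
    name: str
    area: str          # feature area, e.g. "cart", "checkout"
    risk: str          # "frontend" | "backend" | "database"
    files: set = field(default_factory=set)
    depends_on: set = field(default_factory=set)  # names of prerequisite tasks

def group_into_sprints(tasks):
    """Batch tasks into ordered sprints: dependencies first, then by
    ascending blast radius, keeping each feature area together."""
    done, sprints = set(), []
    remaining = list(tasks)
    while remaining:
        # A task is ready once all its prerequisites are completed.
        ready = [t for t in remaining if t.depends_on <= done]
        if not ready:
            raise ValueError("circular dependency in revision dump")
        # Within the ready set, order by risk, then feature area.
        ready.sort(key=lambda t: (RISK[t.risk], t.area))
        sprints.append(ready)
        done |= {t.name for t in ready}
        remaining = [t for t in remaining if t.name not in done]
    return sprints
```

Note that when dependency and blast radius conflict (a backend change the frontend needs), dependency wins: the backend task lands in an earlier sprint.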

3. Revision Doc

  • Filename: {PROJECT}-REVISION-R{N}.md — e.g., ECOMM-SITE-REVISION-R1.md for an online store project
  • Location: project's plans and docs/ folder
  • Format: sprints → tasks → file-level implementation notes

Each task should include:

  • What to change (specific enough to execute without re-reading the dump)
  • Which files are affected
  • Any gotchas or dependencies
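As a sketch of what generating that skeleton might look like, here is a hypothetical `revision_doc` helper. The function name and placeholder wording are assumptions; only the filename pattern and the sprints → tasks → notes structure come from the spec above.

```python
def revision_doc(project: str, round_no: int, sprints):
    """Render a revision-doc skeleton: sprints -> tasks -> notes.
    `sprints` is a list of (description, [task descriptions]) pairs."""
    filename = f"{project}-REVISION-R{round_no}.md"
    lines = [f"# {filename}", ""]
    for s_no, (desc, tasks) in enumerate(sprints, start=1):
        lines.append(f"## Sprint {s_no}: {desc}")
        for t_no, task in enumerate(tasks):
            letter = chr(ord("A") + t_no)  # tasks lettered A, B, C...
            lines += [
                f"### {s_no}{letter}: {task}",
                "- What to change:",
                "- Files affected:",
                "- Gotchas / dependencies:",
                "",
            ]
    return filename, "\n".join(lines)
```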

4. Sprint Execution

Work one sprint at a time, in order.

Start

  1. Create trackable tasks for every item in the sprint
  2. Read all affected files before making changes
  3. If multiple changes touch the same file, batch them

Execute

  • Mark each task in-progress before starting, completed when done
  • Make the smallest effective change — don't refactor adjacent code
  • Verify each change works before moving to the next task

Checkpoint

  • After each sprint is complete, pause for visual/functional QA
  • Fix anything broken before starting the next sprint
  • Note scope changes or new items discovered during execution
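The status transitions above (pending → in-progress → completed, checkpoint only once everything is done) can be modeled as a small tracker. `SprintRun` is a hypothetical name for illustration; the SOP doesn't prescribe any particular tool.

```python
class SprintRun:
    """Minimal task tracker for one sprint: every task moves
    pending -> in-progress -> completed; the sprint checkpoint
    only passes once all tasks are completed."""

    def __init__(self, task_names):
        self.status = {name: "pending" for name in task_names}

    def start(self, name):
        # Mark in-progress before touching any code.
        assert self.status[name] == "pending"
        self.status[name] = "in-progress"

    def complete(self, name):
        # Verify-before-moving-on happens outside this tracker;
        # here we only enforce the legal status transition.
        assert self.status[name] == "in-progress"
        self.status[name] = "completed"

    def checkpoint_ready(self):
        return all(s == "completed" for s in self.status.values())
```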

5. Naming Conventions

| Artifact | Convention | Example |
| --- | --- | --- |
| Revision doc | {PROJECT}-REVISION-R{N}.md | ECOMM-SITE-REVISION-R1.md |
| Sprint label | Sprint {N}: {Description} | Sprint 3: Cart Drawer Enhancements |
| Task label | {Sprint}{Letter}: {Description} | 3A: Add product thumbnail to cart items |
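A quick way to keep these conventions honest is to validate names against them. The regexes below are one strict reading of the table above, not part of the SOP:

```python
import re

# One strict interpretation of the conventions table.
PATTERNS = {
    "revision_doc": re.compile(r"[A-Z0-9-]+-REVISION-R\d+\.md"),
    "sprint_label": re.compile(r"Sprint \d+: .+"),
    "task_label":   re.compile(r"\d+[A-Z]: .+"),
}

def check_name(kind: str, name: str) -> bool:
    """Return True if `name` matches the convention for `kind`."""
    return bool(PATTERNS[kind].fullmatch(name))
```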

6. Proactive Suggestions

Every revision delivery includes a proactive suggestions list — improvements noticed during execution that weren't in the original dump.

Qualifies

  • UX inconsistencies spotted while working in the same files
  • Missing states (error, empty, loading) that should exist
  • Copy/labeling mismatches across pages
  • Quick wins that would take minutes but the client wouldn't think to ask for
  • Security or performance issues encountered during the revision

Doesn't qualify

  • Feature requests or scope expansions — those go in a new revision round
  • Opinions about design direction — stay in your lane

Format: numbered list, plain language, no jargon. Each item should be understandable without dev context. Include enough detail that the client can say yes or no without a follow-up conversation. Present with the delivery — never hold suggestions for a separate conversation.

7. Closing the Round

  • Update the project's ROADMAP.md to reflect completed items
  • Write session notes covering what was built/changed
  • If new items were discovered during execution, start a new revision doc (R2, R3, etc.) — don't append to the current one
  • Deploy to dev/staging for full QA before production

Key Principles

  • Don't mix revision rounds. One round = one dump. New feedback = new round.
  • Sprints are sequential. Don't skip ahead — dependencies matter.
  • Tasks within a sprint can parallelize if they don't touch the same files.
  • Always read before writing. Understand existing code before modifying it.
  • Preserve, don't delete. When removing features, comment out or hide — don't delete files unless explicitly told to.
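The parallelization principle (tasks can run together only if they don't touch the same files) can be made concrete with a greedy batching sketch. `parallel_batches` is a hypothetical helper, not from the SOP:

```python
def parallel_batches(tasks):
    """Greedily batch sprint tasks so no two tasks in a batch touch
    the same file. `tasks` is a list of (name, files) pairs."""
    batches = []
    for name, files in tasks:
        files = set(files)
        for batch in batches:
            # Join the first batch with no file overlap.
            if all(files.isdisjoint(other) for _, other in batch):
                batch.append((name, files))
                break
        else:
            # Conflicts with every existing batch: start a new one.
            batches.append([(name, files)])
    return batches
```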

Conditions

When this works

  • The project has version control (git) and a discoverable file structure
  • The AI agent can read files, edit them, and run verification commands
  • You have a way to test changes — local dev server, staging environment, etc.
  • The dump is from a single source — one client review, one QA pass, etc. Mixed dumps from multiple rounds get tangled

When it doesn't

  • You're trying to redesign mid-revision — that's a new project, not a revision round
  • The codebase has no working tests AND no quick visual QA path — you can't verify changes
  • The "revision" is actually a re-spec — go back to the spec doc, don't run revision rounds against a moving target

Outcome

Output: Versioned revision doc + completed sprints
Typical timing: 2–8 hours per round
Bonus: Proactive suggestions list

The revision doc itself is the proof. A completed round produces a permanent record showing what was asked, what was done, and what was suggested but not requested. Future rounds reference back to it.

Specs provided as-is. chadworks isn't responsible for how you use these prompts or any effects they may have on your code, content, infrastructure, or business. Review and test before applying.

chadworks — Chad · Last updated 2026-04-27