Stop Letting Your AI Agent Wing It: The Wave Plan Protocol for Organized Execution
You gave your AI assistant a complex task. It started coding immediately. Three files in, it realized the approach was wrong. It backtracked, broke something that was working, tried to fix it, introduced a new bug, and then proudly announced "Done!" while half the system was on fire.
Sound familiar?
The problem isn't intelligence. It's discipline. AI coding assistants are extraordinary at generating code but have zero instinct for project management. They don't plan. They don't delegate. They don't verify. They just... do things. Fast. Often wrong.
Here's how to fix that with a single system prompt.
The Core Problem: Your AI Is Doing Too Many Jobs
When you give an AI assistant a complex task, it tries to be everything at once: architect, planner, developer, tester, and project manager. This is like asking your CEO to also write the code, run the tests, and deploy to production. In the same afternoon.
The result is predictable:
- Context thrashing — it loses the big picture while fixing a semicolon
- No rollback plan — when something breaks, there's no way back
- Unverified work — it says "done" but nobody checked
- Cascading failures — one bad change ripples through everything
The fix is simple in concept, radical in practice: make your AI an orchestrator that never touches code.
The Orchestrator Pattern: Brain, Not Hands
The most effective AI workflow separates thinking from doing. Your primary AI agent becomes a project manager — it reads, plans, delegates, and verifies. It never writes a single line of code.
This feels counterintuitive. Why would you stop your AI from coding? Because an AI that's coding is an AI that's not thinking. The moment it starts implementing, it loses the ability to see the whole board.
The Bright Line Test
Before every action, the AI asks itself one question: "Am I doing work, or am I directing work?"
- About to write code → spawn a sub-agent
- About to run a build command → spawn a sub-agent
- Thinking "this is so small I'll just do it myself" → that thought is the failure mode → spawn a sub-agent
There is no task small enough to justify doing it directly. A one-character fix gets a sub-agent. A missing comma gets a sub-agent. The conductor does not pick up an instrument.
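The bright line test collapses to a single predicate. Here's a minimal sketch in Python — the intent names and return values are illustrative, not part of any real agent framework:

```python
def next_action(intent: str) -> str:
    """Apply the bright-line test: directing work proceeds, doing work is delegated.

    There is deliberately no size threshold below which the
    orchestrator does the work itself.
    """
    directing = {"read", "plan", "delegate", "verify"}
    if intent in directing:
        return "proceed"           # directing work: allowed
    return "spawn_sub_agent"       # doing work: always delegated

# Even a one-character fix is delegated.
print(next_action("edit_file"))    # spawn_sub_agent
print(next_action("plan"))         # proceed
```

Note there is no `elif intent == "tiny_fix"` branch. That branch is the failure mode.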
Wave Planning: The Secret to Complex Task Execution
A wave plan is a phased execution strategy created before any work begins. Think of it like a military operation order — you don't just say "take the hill." You define phases, dependencies, rollback points, and review gates.
Anatomy of a Wave Plan
1. **Current State Assessment.** Document what exists, what works, and what will be affected. This prevents the classic AI mistake of "improving" something that was fine.
2. **Wave Definition.** Break the work into sequential waves. Each wave has:
- What changes
- What it depends on from previous waves
- What risks it introduces
- How to verify it worked
3. **Review Gates.** Explicit points where execution pauses for human approval. Not every wave needs one, but architectural decisions and destructive operations always do.
4. **Rollback Strategy.** For every wave: how do you undo it? If you can't answer that before starting, you're not ready to start.
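The anatomy above can be captured as a data structure. This is a sketch, not a schema any tool requires — the field names and the example waves (borrowed from the auth/JWT example later in this article) are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Wave:
    name: str
    changes: list[str]          # what changes
    depends_on: list[str]       # what it needs from previous waves
    risks: list[str]            # what risks it introduces
    verify: str                 # how to confirm it worked
    rollback: str               # how to undo it
    review_gate: bool = False   # pause here for human approval?

@dataclass
class WavePlan:
    current_state: str          # assessment before any work begins
    waves: list[Wave] = field(default_factory=list)

plan = WavePlan(
    current_state="Auth uses session tokens; JWT library present but unused.",
    waves=[
        Wave("Add JWT helpers", ["src/lib/jwt.ts"], [],
             ["new dependency"], "unit tests pass", "delete the new file"),
        Wave("Swap auth middleware", ["src/middleware/auth.ts"],
             ["Add JWT helpers"], ["breaks live sessions"],
             "integration tests pass", "git revert",
             review_gate=True),  # destructive change: always gated
    ],
)
```

The forcing function is that `rollback` is a required field: you literally cannot construct a wave without answering "how do I undo this?"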
Execution Rules
Once a wave plan is approved:
- Execute all waves automatically until completion or a review gate
- Never ask "should I continue?" on an already-approved plan
- If a wave fails, invoke the rollback strategy, re-plan that wave, and present the revised plan
This eliminates the maddening pattern where AI assistants stop after every small step to ask permission. Approve the plan once. Let it run.
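These execution rules amount to a short loop. A minimal sketch, with waves as plain dicts and the runner and approval hooks passed in as callables (all names are illustrative):

```python
def execute(waves, run_wave, human_approves):
    """Run an approved plan to completion without re-asking permission.

    run_wave(wave) -> bool success; human_approves(wave) -> bool.
    On failure: invoke the wave's rollback, then hand back for re-planning.
    """
    for wave in waves:
        if wave.get("review_gate") and not human_approves(wave):
            return "paused_at_gate"      # explicit gates are the only stops
        if not run_wave(wave):
            print(f"rolling back: {wave['rollback']}")
            return "replan_required"     # revised plan goes back to the human
    return "complete"

waves = [
    {"name": "add helpers", "rollback": "delete the new file"},
    {"name": "swap middleware", "rollback": "git revert", "review_gate": True},
]
status = execute(waves, run_wave=lambda w: True, human_approves=lambda w: True)
```

Notice what's absent: there is no "should I continue?" branch between waves. The only pauses are the gates the plan itself declared.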
Agent Classification: Strategic vs. Tactical
Not all sub-agents are created equal. Classifying them before spawning prevents the most common delegation failures.
| Type | Purpose | Context Given | Model Tier |
|---|---|---|---|
| Strategic | Analysis, architecture, research | Full project context | Higher capability |
| Tactical | Implementation, single-file edits, builds | Only what's needed | Lower cost, faster |
A strategic agent gets the whole picture because it needs to make judgment calls. A tactical agent gets surgical scope because it needs to execute precisely without being distracted by the broader codebase.
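The routing decision in the table is mechanical enough to sketch. Tier labels and task-kind names here are illustrative placeholders, not real model identifiers:

```python
STRATEGIC_WORK = {"analysis", "architecture", "research"}

def route(task_kind: str, full_context: str, scoped_context: str) -> dict:
    """Classify a sub-agent before spawning it.

    Strategic agents get the whole picture and a higher-capability tier;
    tactical agents get surgical scope and a cheaper, faster tier.
    """
    if task_kind in STRATEGIC_WORK:
        return {"tier": "higher-capability", "context": full_context}
    # tactical: implementation, single-file edits, builds
    return {"tier": "lower-cost", "context": scoped_context}
```

The point of encoding it this way is that context is assigned by classification, never by habit: a build agent should never receive the whole repository just because it was convenient to paste.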
The Spawning Protocol
Every sub-agent gets a spec before it's created:
- What to do — specific, measurable deliverable
- What files to touch — explicit scope boundaries
- What success looks like — how to verify the work is correct
Vague specs produce vague work. "Fix the auth" is a bad spec. "In /src/middleware/auth.ts, replace the session token validation on line 47 with a JWT verification using the jose library, matching the pattern in /src/lib/jwt.ts" is a good spec.
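One way to enforce this is to lint the spec before spawning. A sketch, with hypothetical field names mirroring the three bullets above:

```python
def validate_spec(spec: dict) -> list[str]:
    """Reject vague specs before a sub-agent is ever created."""
    problems = []
    if not spec.get("deliverable"):
        problems.append("missing deliverable: what, specifically, and measurable how?")
    if not spec.get("files"):
        problems.append("missing scope: which files may be touched?")
    if not spec.get("success_check"):
        problems.append("missing verification: how do we know it worked?")
    return problems

bad = {"deliverable": "Fix the auth"}   # passes the first check, fails the scope and verification checks
good = {
    "deliverable": "Replace session token validation with JWT verification",
    "files": ["src/middleware/auth.ts"],
    "success_check": "matches the pattern in src/lib/jwt.ts; tests pass",
}
```

"Fix the auth" survives the first check but has no scope and no success criterion, which is exactly why it produces vague work.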
The Verification Mandate
This is where most AI workflows fall apart. The AI says "done." You trust it. Three days later you discover it broke something.
The rule is absolute: never trust completion reports without verification.
Before reporting any task as complete:
- Read the actual output files
- Run read-only verification checks
- Confirm deliverables match the original spec
- If anything doesn't match, spawn fix agents before reporting success
The AI orchestrator should never say "done" until it has personally inspected every change its sub-agents made. Not "I told the sub-agent to test it." Not "the sub-agent reported success." Actually look at the files. Actually verify the behavior.
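The verification mandate is a loop, not a vibe. A sketch with the file reader, spec matcher, and fix-agent spawner injected as callables (all illustrative names):

```python
def report_complete(deliverables, read_file, matches_spec, spawn_fix):
    """Return True only after personally inspecting every deliverable.

    deliverables maps output path -> the spec it must satisfy.
    Mismatches get fix agents; "the sub-agent said so" is not evidence.
    """
    done = True
    for path, spec in deliverables.items():
        content = read_file(path)            # read the actual output file
        if not matches_spec(content, spec):  # read-only check against the spec
            spawn_fix(path, spec)            # fix agents before any success report
            done = False
    return done
```

The key property: the only path to `True` runs through `read_file` on every deliverable. There is no branch that trusts a sub-agent's self-report.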
Sub-Agent Failure Recovery
Sub-agents will fail. That's fine. What's not fine is the orchestrator trying to "quickly fix it" by doing the work itself.
The failure protocol:
- Read the sub-agent's output to understand what went wrong
- Write a corrected spec that addresses the specific failure
- Spawn a new targeted fix agent with the corrected spec
- Do NOT fix it yourself
This maintains the separation between thinking and doing. The orchestrator stays in command mode, maintaining the big picture, while specialists handle the implementation details.
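The four-step protocol fits in a few lines. A sketch — `spawn_agent` stands in for whatever your setup uses to create a sub-agent, and the spec fields are illustrative:

```python
def recover(failed_report: str, original_spec: dict, spawn_agent):
    """Failure protocol: read the output, correct the spec, respawn.

    Deliberately has no branch where the orchestrator edits code itself.
    """
    diagnosis = f"previous attempt failed: {failed_report}"  # 1. understand what went wrong
    corrected = {**original_spec, "avoid": diagnosis}        # 2. spec addressing that failure
    return spawn_agent(corrected)                            # 3. new targeted fix agent
    # 4. (there is no step where we fix it ourselves)
```

The corrected spec carries the diagnosis forward, so the fix agent knows what the last attempt got wrong instead of repeating it.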
Putting It Together
The prompt at the top and bottom of this article encodes all of these patterns into a single system prompt. Drop it into your AI assistant's configuration and it will:
- Plan before acting — wave plans for every multi-step task
- Delegate everything — sub-agents for all implementation work
- Classify intelligently — strategic vs. tactical agent routing
- Verify obsessively — never report done without inspection
- Recover gracefully — structured failure handling without panic
The result is an AI that behaves less like an eager junior developer and more like a seasoned technical lead — one who knows that the most valuable thing they can do is think clearly and direct precisely.
Try It Yourself
Copy the prompt below, paste it into your ~/.claude/CLAUDE.md (for Claude Code) or your system prompt configuration, and watch your AI stop winging it and start orchestrating.
The difference between a good AI assistant and a great one isn't intelligence — it's structure.