Initial commit

Zhongwei Li
2025-11-29 18:28:37 +08:00
commit ccc65b3f07
180 changed files with 53970 additions and 0 deletions


@@ -0,0 +1,366 @@
# Workflow: Complete Milestone
<required_reading>
**Read these files NOW:**
1. templates/milestone.md
2. `.planning/ROADMAP.md`
3. `.planning/BRIEF.md`
</required_reading>
<purpose>
Mark a shipped version (v1.0, v1.1, v2.0) as complete. This creates a historical record in MILESTONES.md, updates BRIEF.md with current state, reorganizes ROADMAP.md with milestone groupings, and tags the release in git.
This is the ritual that separates "development" from "shipped."
</purpose>
<process>
<step name="verify_readiness">
Check if milestone is truly complete:
```bash
cat .planning/ROADMAP.md
ls .planning/phases/*/*-SUMMARY.md 2>/dev/null | wc -l
```
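A per-phase completeness check, as a sketch (assumes the `{phase}-{plan}` naming convention used throughout):
```bash
# Compare PLAN vs SUMMARY counts in each phase directory
for dir in .planning/phases/*/; do
  plans=$(ls "$dir"*-PLAN.md 2>/dev/null | wc -l)
  summaries=$(ls "$dir"*-SUMMARY.md 2>/dev/null | wc -l)
  echo "$dir: $summaries/$plans plans complete"
done
```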
**Questions to ask:**
- Which phases belong to this milestone?
- Are all those phases complete (all plans have summaries)?
- Has the work been tested/validated?
- Is this ready to ship/tag?
Present:
```
Milestone: [Name from user, e.g., "v1.0 MVP"]
Appears to include:
- Phase 1: Foundation (2/2 plans complete)
- Phase 2: Authentication (2/2 plans complete)
- Phase 3: Core Features (3/3 plans complete)
- Phase 4: Polish (1/1 plan complete)
Total: 4 phases, 8 plans, all complete
Ready to mark this milestone as shipped?
(yes / wait / adjust scope)
```
Wait for confirmation.
If "adjust scope": Ask which phases should be included.
If "wait": Stop, user will return when ready.
</step>
<step name="gather_stats">
Calculate milestone statistics:
```bash
# Count phases and plans in milestone
# (user specified or detected from roadmap)
# Find git range
git log --oneline --grep="feat(" | head -20
# Count files modified in range
git diff --stat FIRST_COMMIT..LAST_COMMIT | tail -1
# Count LOC (adapt to language)
find . -name "*.swift" -o -name "*.ts" -o -name "*.py" | xargs wc -l 2>/dev/null
# Calculate timeline
git log --format="%ai" FIRST_COMMIT | tail -1 # Start date
git log --format="%ai" LAST_COMMIT | head -1 # End date
```
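If a day count is needed for the timeline, one option is commit timestamps (a sketch; FIRST_COMMIT/LAST_COMMIT are the same placeholders as above):
```bash
# Days between the milestone's first and last commits
start=$(git show -s --format=%ct FIRST_COMMIT)
end=$(git show -s --format=%ct LAST_COMMIT)
echo "$(( (end - start) / 86400 )) days"
```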
Present summary:
```
Milestone Stats:
- Phases: [X-Y]
- Plans: [Z] total
- Tasks: [N] total (estimated from phase summaries)
- Files modified: [M]
- Lines of code: [LOC] [language]
- Timeline: [Days] days ([Start] → [End])
- Git range: feat(XX-XX) → feat(YY-YY)
```
Confirm before proceeding.
</step>
<step name="extract_accomplishments">
Read all phase SUMMARY.md files in milestone range:
```bash
cat .planning/phases/01-*/01-*-SUMMARY.md
cat .planning/phases/02-*/02-*-SUMMARY.md
# ... for each phase in milestone
```
From summaries, extract 4-6 key accomplishments.
Present:
```
Key accomplishments for this milestone:
1. [Achievement from phase 1]
2. [Achievement from phase 2]
3. [Achievement from phase 3]
4. [Achievement from phase 4]
5. [Achievement from phase 5]
Does this capture the milestone? (yes / adjust)
```
If "adjust": User can add/remove/edit accomplishments.
</step>
<step name="create_milestone_entry">
Create or update `.planning/MILESTONES.md`.
If file doesn't exist:
```markdown
# Project Milestones: [Project Name from BRIEF]
[New entry]
```
If it exists, prepend the new entry (entries stay in reverse-chronological order).
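Prepending can be scripted; a minimal sketch (assumes the drafted entry sits in a temp file and line 1 of MILESTONES.md is the title):
```bash
# Insert the new entry directly under the title (hypothetical /tmp/new-entry.md holds the draft)
{ head -1 .planning/MILESTONES.md; echo; cat /tmp/new-entry.md; tail -n +2 .planning/MILESTONES.md; } > /tmp/MILESTONES.md
mv /tmp/MILESTONES.md .planning/MILESTONES.md
```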
Use template from `templates/milestone.md`:
```markdown
## v[Version] [Name] (Shipped: YYYY-MM-DD)
**Delivered:** [One sentence from user]
**Phases completed:** [X-Y] ([Z] plans total)
**Key accomplishments:**
- [List from previous step]
**Stats:**
- [Files] files created/modified
- [LOC] lines of [language]
- [Phases] phases, [Plans] plans, [Tasks] tasks
- [Days] days from [start milestone or start project] to ship
**Git range:** `feat(XX-XX)` → `feat(YY-YY)`
**What's next:** [Ask user: what's the next goal?]
---
```
Confirm entry looks correct.
</step>
<step name="update_brief">
Update `.planning/BRIEF.md` to reflect current state.
Add/update "Current State" section at top (after YAML if present):
```markdown
# Project Brief: [Name]
## Current State (Updated: YYYY-MM-DD)
**Shipped:** v[X.Y] [Name] (YYYY-MM-DD)
**Status:** [Production / Beta / Internal]
**Users:** [If known, e.g., "~500 downloads, 50 DAU" or "Internal use only"]
**Feedback:** [Key themes from users, or "Initial release, gathering feedback"]
**Codebase:** [LOC] [language], [key tech stack], [platform/deployment target]
## [Next Milestone] Goals
**Vision:** [What's the goal for next version?]
**Motivation:**
- [Why this next work matters]
- [User feedback driving it]
- [Technical debt or improvements needed]
**Scope (v[X.Y]):**
- [Feature/improvement 1]
- [Feature/improvement 2]
- [Feature/improvement 3]
---
<details>
<summary>Original Vision (v1.0 - Archived for reference)</summary>
[Move original brief content here]
</details>
```
**If this is v1.0 (first milestone):**
Just add "Current State" section, no need to archive original vision yet.
**If this is v1.1+:**
Collapse previous version's content into `<details>` section.
Show diff, confirm changes.
</step>
<step name="reorganize_roadmap">
Update `.planning/ROADMAP.md` to group completed milestone phases.
Add milestone headers and collapse completed work:
```markdown
# Roadmap: [Project Name]
## Milestones
- ✅ **v1.0 MVP** - Phases 1-4 (shipped YYYY-MM-DD)
- 🚧 **v1.1 Security** - Phases 5-6 (in progress)
- 📋 **v2.0 Redesign** - Phases 7-10 (planned)
## Phases
<details>
<summary>✅ v1.0 MVP (Phases 1-4) - SHIPPED YYYY-MM-DD</summary>
- [x] Phase 1: Foundation (2/2 plans) - completed YYYY-MM-DD
- [x] Phase 2: Authentication (2/2 plans) - completed YYYY-MM-DD
- [x] Phase 3: Core Features (3/3 plans) - completed YYYY-MM-DD
- [x] Phase 4: Polish (1/1 plan) - completed YYYY-MM-DD
</details>
### 🚧 v[Next] [Name] (In Progress / Planned)
- [ ] Phase 5: [Name] ([N] plans)
- [ ] Phase 6: [Name] ([N] plans)
## Progress
| Phase | Milestone | Plans Complete | Status | Completed |
|-------|-----------|----------------|--------|-----------|
| 1. Foundation | v1.0 | 2/2 | Complete | YYYY-MM-DD |
| 2. Authentication | v1.0 | 2/2 | Complete | YYYY-MM-DD |
| 3. Core Features | v1.0 | 3/3 | Complete | YYYY-MM-DD |
| 4. Polish | v1.0 | 1/1 | Complete | YYYY-MM-DD |
| 5. Security Audit | v1.1 | 0/1 | Not started | - |
| 6. Hardening | v1.1 | 0/2 | Not started | - |
```
Show diff, confirm changes.
</step>
<step name="git_tag">
Create git tag for milestone:
```bash
git tag -a v[X.Y] -m "$(cat <<'EOF'
v[X.Y] [Name]
Delivered: [One sentence]
Key accomplishments:
- [Item 1]
- [Item 2]
- [Item 3]
See .planning/MILESTONES.md for full details.
EOF
)"
```
Confirm: "Tagged: v[X.Y]"
Ask: "Push tag to remote? (y/n)"
If yes:
```bash
git push origin v[X.Y]
```
</step>
<step name="git_commit_milestone">
Commit milestone completion (MILESTONES.md + BRIEF.md + ROADMAP.md updates):
```bash
git add .planning/MILESTONES.md
git add .planning/BRIEF.md
git add .planning/ROADMAP.md
git commit -m "$(cat <<'EOF'
chore: milestone v[X.Y] [Name] shipped
- Added MILESTONES.md entry
- Updated BRIEF.md current state
- Reorganized ROADMAP.md with milestone grouping
- Tagged v[X.Y]
EOF
)"
```
Confirm: "Committed: chore: milestone v[X.Y] shipped"
</step>
<step name="offer_next">
```
✅ Milestone v[X.Y] [Name] complete
Shipped:
- [N] phases ([M] plans, [P] tasks)
- [One sentence of what shipped]
Summary: .planning/MILESTONES.md
Tag: v[X.Y]
Next steps:
1. Plan next milestone work (add phases to roadmap)
2. Archive and start fresh (for major rewrite/new codebase)
3. Take a break (done for now)
```
Wait for user decision.
If "1": Route to workflows/plan-phase.md (but ask about milestone scope first)
If "2": Route to workflows/archive-planning.md (to be created)
</step>
</process>
<milestone_naming>
**Version conventions:**
- **v1.0** - Initial MVP
- **v1.1, v1.2, v1.3** - Minor updates, new features, fixes
- **v2.0, v3.0** - Major rewrites, breaking changes, significant new direction
**Name conventions:**
- v1.0 MVP
- v1.1 Security
- v1.2 Performance
- v2.0 Redesign
- v2.0 iOS Launch
Keep names short (1-2 words describing the focus).
</milestone_naming>
<what_qualifies>
**Create milestones for:**
- Initial release (v1.0)
- Public releases
- Major feature sets shipped
- Before archiving planning
**Don't create milestones for:**
- Every phase completion (too granular)
- Work in progress (wait until shipped)
- Internal dev iterations (unless truly shipped internally)
If uncertain, ask: "Is this deployed/usable/shipped in some form?"
If yes → milestone. If no → keep working.
</what_qualifies>
<success_criteria>
Milestone completion is successful when:
- [ ] MILESTONES.md entry created with stats and accomplishments
- [ ] BRIEF.md updated with current state
- [ ] ROADMAP.md reorganized with milestone grouping
- [ ] Git tag created (v[X.Y])
- [ ] Milestone commit made
- [ ] User knows next steps
</success_criteria>


@@ -0,0 +1,95 @@
# Workflow: Create Brief
<required_reading>
**Read these files NOW:**
1. templates/brief.md
</required_reading>
<purpose>
Create a project vision document that captures what we're building and why.
This is the ONLY human-focused document - everything else is for Claude.
</purpose>
<process>
<step name="gather_vision">
Ask the user (conversationally, not AskUserQuestion):
1. **What are we building?** (one sentence)
2. **Why does this need to exist?** (the problem it solves)
3. **What does success look like?** (how we know it worked)
4. **Any constraints?** (tech stack, timeline, budget, etc.)
Keep it conversational. Don't ask all at once - let it flow naturally.
</step>
<step name="decision_gate">
After gathering context:
Use AskUserQuestion:
- header: "Ready"
- question: "Ready to create the brief, or would you like me to ask more questions?"
- options:
- "Create brief" - I have enough context
- "Ask more questions" - There are details to clarify
- "Let me add context" - I want to provide more information
Loop until "Create brief" selected.
</step>
<step name="create_structure">
Create the planning directory:
```bash
mkdir -p .planning
```
</step>
<step name="write_brief">
Use the template from `templates/brief.md`.
Write to `.planning/BRIEF.md` with:
- Project name
- One-line description
- Problem statement (why this exists)
- Success criteria (measurable outcomes)
- Constraints (if any)
- Out of scope (what we're NOT building)
**Keep it SHORT.** Under 50 lines. This is a reference, not a novel.
</step>
<step name="offer_next">
After creating brief, present options:
```
Brief created: .planning/BRIEF.md
NOTE: Brief is NOT committed yet. It will be committed with the roadmap as project initialization.
What's next?
1. Create roadmap now (recommended - commits brief + roadmap together)
2. Review/edit brief
3. Done for now (brief will remain uncommitted)
```
</step>
</process>
<anti_patterns>
- Don't write a business plan
- Don't include market analysis
- Don't add stakeholder sections
- Don't create executive summaries
- Don't add timelines (that's roadmap's job)
Keep it focused: What, Why, Success, Constraints.
</anti_patterns>
<success_criteria>
Brief is complete when:
- [ ] `.planning/BRIEF.md` exists
- [ ] Contains: name, description, problem, success criteria
- [ ] Under 50 lines
- [ ] User knows what's next
</success_criteria>


@@ -0,0 +1,158 @@
# Workflow: Create Roadmap
<required_reading>
**Read these files NOW:**
1. templates/roadmap.md
2. Read `.planning/BRIEF.md` if it exists
</required_reading>
<purpose>
Define the phases of implementation. Each phase is a coherent chunk of work
that delivers value. The roadmap provides structure, not detailed tasks.
</purpose>
<process>
<step name="check_brief">
```bash
cat .planning/BRIEF.md 2>/dev/null || echo "No brief found"
```
**If no brief exists:**
Ask: "No brief found. Want to create one first, or proceed with roadmap?"
If proceeding without brief, gather quick context:
- What are we building?
- What's the rough scope?
</step>
<step name="identify_phases">
Based on the brief/context, identify 3-6 phases.
Good phases are:
- **Coherent**: Each delivers something complete
- **Sequential**: Later phases build on earlier
- **Sized right**: 1-3 days of work each (for solo + Claude)
Common phase patterns:
- Foundation → Core Feature → Enhancement → Polish
- Setup → MVP → Iteration → Launch
- Infrastructure → Backend → Frontend → Integration
</step>
<step name="confirm_phases">
Present the phase breakdown inline:
"Here's how I'd break this down:
1. [Phase name] - [goal]
2. [Phase name] - [goal]
3. [Phase name] - [goal]
...
Does this feel right? (yes / adjust)"
If "adjust": Ask what to change, revise, present again.
</step>
<step name="decision_gate">
After phases confirmed:
Use AskUserQuestion:
- header: "Ready"
- question: "Ready to create the roadmap, or would you like me to ask more questions?"
- options:
- "Create roadmap" - I have enough context
- "Ask more questions" - There are details to clarify
- "Let me add context" - I want to provide more information
Loop until "Create roadmap" selected.
</step>
<step name="create_structure">
```bash
mkdir -p .planning/phases
```
</step>
<step name="write_roadmap">
Use template from `templates/roadmap.md`.
Write to `.planning/ROADMAP.md` with:
- Phase list with names and one-line descriptions
- Dependencies (what must complete before what)
- Status tracking (all start as "not started")
Create phase directories:
```bash
mkdir -p .planning/phases/01-{phase-name}
mkdir -p .planning/phases/02-{phase-name}
# etc.
```
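Directory creation can also be scripted once phase names are settled (a sketch; the names are illustrative):
```bash
# Create numbered phase directories from kebab-case names
phases=(foundation authentication core-features polish)
for i in "${!phases[@]}"; do
  mkdir -p "$(printf '.planning/phases/%02d-%s' "$((i + 1))" "${phases[$i]}")"
done
```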
</step>
<step name="git_commit_initialization">
Commit project initialization (brief + roadmap together):
```bash
git add .planning/
git commit -m "$(cat <<'EOF'
docs: initialize [project-name] ([N] phases)
[One-liner from BRIEF.md]
Phases:
1. [phase-name]: [goal]
2. [phase-name]: [goal]
3. [phase-name]: [goal]
EOF
)"
```
Confirm: "Committed: docs: initialize [project] ([N] phases)"
</step>
<step name="offer_next">
```
Project initialized:
- Brief: .planning/BRIEF.md
- Roadmap: .planning/ROADMAP.md
- Committed as: docs: initialize [project] ([N] phases)
What's next?
1. Plan Phase 1 in detail
2. Review/adjust phases
3. Done for now
```
</step>
</process>
<phase_naming>
Use `XX-kebab-case-name` format:
- `01-foundation`
- `02-authentication`
- `03-core-features`
- `04-polish`
Numbers ensure ordering. Names describe content.
</phase_naming>
<anti_patterns>
- Don't add time estimates
- Don't create Gantt charts
- Don't add resource allocation
- Don't include risk matrices
- Don't plan more than 6 phases (scope creep)
Phases are buckets of work, not project management artifacts.
</anti_patterns>
<success_criteria>
Roadmap is complete when:
- [ ] `.planning/ROADMAP.md` exists
- [ ] 3-6 phases defined with clear names
- [ ] Phase directories created
- [ ] Dependencies noted if any
- [ ] Status tracking in place
</success_criteria>


@@ -0,0 +1,982 @@
# Workflow: Execute Phase
<purpose>
Execute a phase prompt (PLAN.md) and create the outcome summary (SUMMARY.md).
</purpose>
<process>
<step name="identify_plan">
Find the next plan to execute:
- Check ROADMAP.md for "In progress" phase
- Find plans in that phase directory
- Identify first plan without corresponding SUMMARY
```bash
cat .planning/ROADMAP.md
# Look for phase with "In progress" status
# Then find plans in that phase
ls .planning/phases/XX-name/*-PLAN.md 2>/dev/null | sort
ls .planning/phases/XX-name/*-SUMMARY.md 2>/dev/null | sort
```
**Logic:**
- If `01-01-PLAN.md` exists but `01-01-SUMMARY.md` doesn't → execute 01-01
- If `01-01-SUMMARY.md` exists but `01-02-SUMMARY.md` doesn't → execute 01-02
- Pattern: Find first PLAN file without matching SUMMARY file
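A minimal sketch of that matching logic (assumes the `{phase}-{plan}-PLAN.md` naming used throughout):
```bash
# Print the first PLAN file that has no matching SUMMARY file
for plan in .planning/phases/XX-name/*-PLAN.md; do
  if [ ! -f "${plan%-PLAN.md}-SUMMARY.md" ]; then
    echo "Next plan to execute: $plan"
    break
  fi
done
```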
Confirm with user if ambiguous.
Present:
```
Found plan to execute: {phase}-{plan}-PLAN.md
[Plan X of Y for Phase Z]
Proceed with execution?
```
</step>
<step name="parse_segments">
**Intelligent segmentation: Parse plan into execution segments.**
Plans are divided into segments by checkpoints. Each segment is routed to optimal execution context (subagent or main).
**1. Check for checkpoints:**
```bash
# Find all checkpoints and their types
grep -n "type=\"checkpoint" .planning/phases/XX-name/{phase}-{plan}-PLAN.md
```
**2. Analyze execution strategy:**
**If NO checkpoints found:**
- **Fully autonomous plan** - spawn single subagent for entire plan
- Subagent gets fresh 200k context, executes all tasks, creates SUMMARY, commits
- Main context: Just orchestration (~5% usage)
**If checkpoints found, parse into segments:**
Segment = tasks between checkpoints (or start→first checkpoint, or last checkpoint→end)
**For each segment, determine routing:**
```
Segment routing rules:
IF segment has no prior checkpoint:
→ SUBAGENT (first segment, nothing to depend on)
IF segment follows checkpoint:human-verify:
→ SUBAGENT (verification is just confirmation, doesn't affect next work)
IF segment follows checkpoint:decision OR checkpoint:human-action:
→ MAIN CONTEXT (next tasks need the decision/result)
```
**3. Execution pattern:**
**Pattern A: Fully autonomous (no checkpoints)**
```
Spawn subagent → execute all tasks → SUMMARY → commit → report back
```
**Pattern B: Segmented with verify-only checkpoints**
```
Segment 1 (tasks 1-3): Spawn subagent → execute → report back
Checkpoint 4 (human-verify): Main context → you verify → continue
Segment 2 (tasks 5-6): Spawn NEW subagent → execute → report back
Checkpoint 7 (human-verify): Main context → you verify → continue
Aggregate results → SUMMARY → commit
```
**Pattern C: Decision-dependent (must stay in main)**
```
Checkpoint 1 (decision): Main context → you decide → continue in main
Tasks 2-5: Main context (need decision from checkpoint 1)
No segmentation benefit - execute entirely in main
```
**4. Why this works:**
**Segmentation benefits:**
- Fresh context for each autonomous segment (0% start every time)
- Main context only for checkpoints (~10-20% total)
- Can handle 10+ task plans if properly segmented
- Quality stays high in autonomous segments (each starts from a fresh context)
**When segmentation provides no benefit:**
- Checkpoint is decision/human-action and following tasks depend on outcome
- Better to execute sequentially in main than break flow
**5. Implementation:**
**For fully autonomous plans:**
```
Use Task tool with subagent_type="general-purpose":
Prompt: "Execute plan at .planning/phases/{phase}-{plan}-PLAN.md
This is an autonomous plan (no checkpoints). Execute all tasks, create SUMMARY.md in phase directory, commit with message following plan's commit guidance.
Follow all deviation rules and authentication gate protocols from the plan.
When complete, report: plan name, tasks completed, SUMMARY path, commit hash."
```
**For segmented plans (has verify-only checkpoints):**
**For segmented plans (has verify-only checkpoints):**
```
Execute segment-by-segment:
For each autonomous segment:
Spawn subagent with prompt: "Execute tasks [X-Y] from plan at .planning/phases/{phase}-{plan}-PLAN.md. Read the plan for full context and deviation rules. Do NOT create SUMMARY or commit - just execute these tasks and report results."
Wait for subagent completion
For each checkpoint:
Execute in main context
Wait for user interaction
Continue to next segment
After all segments complete:
Aggregate all results
Create SUMMARY.md
Commit with all changes
```
**For decision-dependent plans:**
```
Execute in main context (standard flow below)
No subagent routing
Quality maintained through small scope (2-3 tasks per plan)
```
See step name="segment_execution" for detailed segment execution loop.
</step>
<step name="segment_execution">
**Detailed segment execution loop for segmented plans.**
**This step applies ONLY to segmented plans (Pattern B: has checkpoints, but they're verify-only).**
For Pattern A (fully autonomous) and Pattern C (decision-dependent), skip this step.
**Execution flow:**
```
1. Parse plan to identify segments:
- Read plan file
- Find checkpoint locations: grep -n "type=\"checkpoint" PLAN.md
- Identify checkpoint types: grep "type=\"checkpoint" PLAN.md | grep -o 'checkpoint:[^"]*'
- Build segment map:
* Segment 1: Start → first checkpoint (tasks 1-X)
* Checkpoint 1: Type and location
* Segment 2: After checkpoint 1 → next checkpoint (tasks X+1 to Y)
* Checkpoint 2: Type and location
* ... continue for all segments
2. For each segment in order:
A. Determine routing (apply rules from parse_segments):
- No prior checkpoint? → Subagent
- Prior checkpoint was human-verify? → Subagent
- Prior checkpoint was decision/human-action? → Main context
B. If routing = Subagent:
```
Spawn Task tool with subagent_type="general-purpose":
Prompt: "Execute tasks [task numbers/names] from plan at [plan path].
**Context:**
- Read the full plan for objective, context files, and deviation rules
- You are executing a SEGMENT of this plan (not the full plan)
- Other segments will be executed separately
**Your responsibilities:**
- Execute only the tasks assigned to you
- Follow all deviation rules and authentication gate protocols
- Track deviations for later Summary
- DO NOT create SUMMARY.md (will be created after all segments complete)
- DO NOT commit (will be done after all segments complete)
**Report back:**
- Tasks completed
- Files created/modified
- Deviations encountered
- Any issues or blockers"
Wait for subagent to complete
Capture results (files changed, deviations, etc.)
```
C. If routing = Main context:
Execute tasks in main using standard execution flow (step name="execute")
Track results locally
D. After segment completes (whether subagent or main):
Continue to next checkpoint/segment
3. After ALL segments complete:
A. Aggregate results from all segments:
- Collect files created/modified from all segments
- Collect deviations from all segments
- Collect decisions from all checkpoints
- Merge into complete picture
B. Create SUMMARY.md:
- Use aggregated results
- Document all work from all segments
- Include deviations from all segments
- Note which segments were subagented
C. Commit:
- Stage all files from all segments
- Stage SUMMARY.md
- Commit with message following plan guidance
- Include note about segmented execution if relevant
D. Report completion
**Example execution trace:**
```
Plan: 01-02-PLAN.md (8 tasks, 2 verify checkpoints)
Parsing segments...
- Segment 1: Tasks 1-3 (autonomous)
- Checkpoint 4: human-verify
- Segment 2: Tasks 5-6 (autonomous)
- Checkpoint 7: human-verify
- Segment 3: Task 8 (autonomous)
Routing analysis:
- Segment 1: No prior checkpoint → SUBAGENT ✓
- Checkpoint 4: Verify only → MAIN (required)
- Segment 2: After verify → SUBAGENT ✓
- Checkpoint 7: Verify only → MAIN (required)
- Segment 3: After verify → SUBAGENT ✓
Execution:
[1] Spawning subagent for tasks 1-3...
→ Subagent completes: 3 files modified, 0 deviations
[2] Executing checkpoint 4 (human-verify)...
════════════════════════════════════════
CHECKPOINT: Verification Required
Task 4 of 8: Verify database schema
I built: User and Session tables with relations
How to verify: Check src/db/schema.ts for correct types
════════════════════════════════════════
User: "approved"
[3] Spawning subagent for tasks 5-6...
→ Subagent completes: 2 files modified, 1 deviation (added error handling)
[4] Executing checkpoint 7 (human-verify)...
User: "approved"
[5] Spawning subagent for task 8...
→ Subagent completes: 1 file modified, 0 deviations
Aggregating results...
- Total files: 6 modified
- Total deviations: 1
- Segmented execution: 3 subagents, 2 checkpoints
Creating SUMMARY.md...
Committing...
✓ Complete
```
**Benefits of this pattern:**
- Main context usage: ~20% (just orchestration + checkpoints)
- Subagent 1: Fresh 0-30% (tasks 1-3)
- Subagent 2: Fresh 0-30% (tasks 5-6)
- Subagent 3: Fresh 0-20% (task 8)
- All autonomous work: Peak quality
- Can handle large plans with many tasks if properly segmented
**When NOT to use segmentation:**
- Plan has decision/human-action checkpoints that affect following tasks
- Following tasks depend on checkpoint outcome
- Better to execute in main sequentially in those cases
</step>
<step name="load_prompt">
Read the plan prompt:
```bash
cat .planning/phases/XX-name/{phase}-{plan}-PLAN.md
```
This IS the execution instructions. Follow it exactly.
</step>
<step name="previous_phase_check">
Before executing, check if previous phase had issues:
```bash
# Find previous phase summary
ls .planning/phases/*/*-SUMMARY.md 2>/dev/null | sort -r | head -2 | tail -1
```
If previous phase SUMMARY.md has "Issues Encountered" != "None" or "Next Phase Readiness" mentions blockers:
Use AskUserQuestion:
- header: "Previous Issues"
- question: "Previous phase had unresolved items: [summary]. How to proceed?"
- options:
- "Proceed anyway" - Issues won't block this phase
- "Address first" - Let's resolve before continuing
- "Review previous" - Show me the full summary
</step>
<step name="execute">
Execute each task in the prompt. **Deviations are normal** - handle them automatically using embedded rules below.
1. Read the @context files listed in the prompt
2. For each task:
**If `type="auto"`:**
- Work toward task completion
- **If CLI/API returns authentication error:** Handle as authentication gate (see below)
- **When you discover additional work not in plan:** Apply deviation rules (see below) automatically
- Continue implementing, applying rules as needed
- Run the verification
- Confirm done criteria met
- Track any deviations for Summary documentation
- Continue to next task
**If `type="checkpoint:*"`:**
- STOP immediately (do not continue to next task)
- Execute checkpoint_protocol (see below)
- Wait for user response
- Verify if possible (check files, env vars, etc.)
- Only after user confirmation: continue to next task
3. Run overall verification checks from `<verification>` section
4. Confirm all success criteria from `<success_criteria>` section met
5. Document all deviations in Summary (automatic - see deviation_documentation below)
</step>
<authentication_gates>
## Handling Authentication Errors During Execution
**When you encounter authentication errors during `type="auto"` task execution:**
This is NOT a failure. Authentication gates are expected and normal. Handle them dynamically:
**Authentication error indicators:**
- CLI returns: "Error: Not authenticated", "Not logged in", "Unauthorized", "401", "403"
- API returns: "Authentication required", "Invalid API key", "Missing credentials"
- Command fails with: "Please run {tool} login" or "Set {ENV_VAR} environment variable"
**Authentication gate protocol:**
1. **Recognize it's an auth gate** - Not a bug, just needs credentials
2. **STOP current task execution** - Don't retry repeatedly
3. **Create dynamic checkpoint:human-action** - Present it to user immediately
4. **Provide exact authentication steps** - CLI commands, where to get keys
5. **Wait for user to authenticate** - Let them complete auth flow
6. **Verify authentication works** - Test that credentials are valid
7. **Retry the original task** - Resume automation where you left off
8. **Continue normally** - Don't treat this as an error in Summary
**Example: Vercel deployment hits auth error**
```
Task 3: Deploy to Vercel
Running: vercel --yes
Error: Not authenticated. Please run 'vercel login'
[Create checkpoint dynamically]
════════════════════════════════════════
CHECKPOINT: Authentication Required
════════════════════════════════════════
Task 3 of 8: Authenticate Vercel CLI
I tried to deploy but got authentication error.
What you need to do:
Run: vercel login
This will open your browser - complete the authentication flow.
I'll verify after: vercel whoami returns your account
Type "done" when authenticated
════════════════════════════════════════
[Wait for user response]
[User types "done"]
Verifying authentication...
Running: vercel whoami
✓ Authenticated as: user@example.com
Retrying deployment...
Running: vercel --yes
✓ Deployed to: https://myapp-abc123.vercel.app
Task 3 complete. Continuing to task 4...
```
**Example: Stripe API needs key**
```
Task 5: Create Stripe webhook endpoint
Using Stripe API...
Error: 401 Unauthorized - No API key provided
[Create checkpoint dynamically]
════════════════════════════════════════
CHECKPOINT: Credentials Required
════════════════════════════════════════
Task 5 of 8: Provide Stripe API key
I tried to create webhook but need your Stripe API key.
What you need to do:
1. Visit dashboard.stripe.com/apikeys
2. Copy your "Secret key" (starts with sk_test_ or sk_live_)
3. Paste it here, or run: export STRIPE_SECRET_KEY=sk_...
I'll verify after: Stripe API call succeeds
Type "done" when ready, or paste the key
════════════════════════════════════════
[Wait for user response]
[User pastes key or exports env var]
Saving key to .env...
Verifying Stripe API access...
✓ Stripe API authenticated
Retrying webhook creation...
✓ Webhook endpoint created: whsec_abc123
Task 5 complete. Continuing to task 6...
```
**In Summary documentation:**
Document authentication gates as normal flow, not deviations:
```markdown
## Authentication Gates
During execution, I encountered authentication requirements:
1. Task 3: Vercel CLI required authentication
- Paused for `vercel login`
- Resumed after authentication
- Deployed successfully
2. Task 5: Stripe API required API key
- Paused for API key input
- Saved to .env
- Resumed webhook creation
These are normal gates, not errors.
```
**Key principles:**
- Authentication gates are NOT failures or bugs
- They're expected interaction points during first-time setup
- Handle them gracefully and resume automation once unblocked
- Don't mark tasks as "failed" or "incomplete" due to auth gates
- Document them as normal flow, separate from deviations
See references/cli-automation.md "Authentication Gates" section for complete examples.
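In shell form, the verify-then-retry pattern from the Vercel example above looks roughly like this (a sketch):
```bash
# Verify authentication before retrying the original command
if vercel whoami >/dev/null 2>&1; then
  vercel --yes   # retry the deployment
else
  echo "Still not authenticated - please run: vercel login"
fi
```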
</authentication_gates>
<step name="execute">
<deviation_rules>
## Automatic Deviation Handling
**While executing tasks, you WILL discover work not in the plan.** This is normal.
Apply these rules automatically. Track all deviations for Summary documentation.
---
**RULE 1: Auto-fix bugs**
**Trigger:** Code doesn't work as intended (broken behavior, incorrect output, errors)
**Action:** Fix immediately, track for Summary
**Examples:**
- Wrong SQL query returning incorrect data
- Logic errors (inverted condition, off-by-one, infinite loop)
- Type errors, null pointer exceptions, undefined references
- Broken validation (accepts invalid input, rejects valid input)
- Security vulnerabilities (SQL injection, XSS, CSRF, insecure auth)
- Race conditions, deadlocks
- Memory leaks, resource leaks
**Process:**
1. Fix the bug inline
2. Add/update tests to prevent regression
3. Verify fix works
4. Continue task
5. Track in deviations list: `[Rule 1 - Bug] [description]`
**No user permission needed.** Bugs must be fixed for correct operation.
---
**RULE 2: Auto-add missing critical functionality**
**Trigger:** Code is missing essential features for correctness, security, or basic operation
**Action:** Add immediately, track for Summary
**Examples:**
- Missing error handling (no try/catch, unhandled promise rejections)
- No input validation (accepts malicious data, type coercion issues)
- Missing null/undefined checks (crashes on edge cases)
- No authentication on protected routes
- Missing authorization checks (users can access others' data)
- No CSRF protection, missing CORS configuration
- No rate limiting on public APIs
- Missing required database indexes (causes timeouts)
- No logging for errors (can't debug production)
**Process:**
1. Add the missing functionality inline
2. Add tests for the new functionality
3. Verify it works
4. Continue task
5. Track in deviations list: `[Rule 2 - Missing Critical] [description]`
**Critical = required for correct/secure/performant operation**
**No user permission needed.** These are not "features" - they're requirements for basic correctness.
---
**RULE 3: Auto-fix blocking issues**
**Trigger:** Something prevents you from completing current task
**Action:** Fix immediately to unblock, track for Summary
**Examples:**
- Missing dependency (package not installed, import fails)
- Wrong types blocking compilation
- Broken import paths (file moved, wrong relative path)
- Missing environment variable (app won't start)
- Database connection config error
- Build configuration error (webpack, tsconfig, etc.)
- Missing file referenced in code
- Circular dependency blocking module resolution
**Process:**
1. Fix the blocking issue
2. Verify task can now proceed
3. Continue task
4. Track in deviations list: `[Rule 3 - Blocking] [description]`
**No user permission needed.** Can't complete task without fixing blocker.
---
**RULE 4: Ask about architectural changes**
**Trigger:** Fix/addition requires significant structural modification
**Action:** STOP, present to user, wait for decision
**Examples:**
- Adding new database table (not just column)
- Major schema changes (changing primary key, splitting tables)
- Introducing new service layer or architectural pattern
- Switching libraries/frameworks (React → Vue, REST → GraphQL)
- Changing authentication approach (sessions → JWT)
- Adding new infrastructure (message queue, cache layer, CDN)
- Changing API contracts (breaking changes to endpoints)
- Adding new deployment environment
**Process:**
1. STOP current task
2. Present clearly:
```
⚠️ Architectural Decision Needed
Current task: [task name]
Discovery: [what you found that prompted this]
Proposed change: [architectural modification]
Why needed: [rationale]
Impact: [what this affects - APIs, deployment, dependencies, etc.]
Alternatives: [other approaches, or "none apparent"]
Proceed with proposed change? (yes / different approach / defer)
```
3. WAIT for user response
4. If approved: implement, track as `[Rule 4 - Architectural] [description]`
5. If different approach: discuss and implement
6. If deferred: log to ISSUES.md, continue without change
**User decision required.** These changes affect system design.
---
**RULE 5: Log non-critical enhancements**
**Trigger:** Improvement that would enhance code but isn't essential now
**Action:** Add to .planning/ISSUES.md automatically, continue task
**Examples:**
- Performance optimization (works correctly, just slower than ideal)
- Code refactoring (works, but could be cleaner/DRY-er)
- Better naming (works, but variables could be clearer)
- Organizational improvements (works, but file structure could be better)
- Nice-to-have UX improvements (works, but could be smoother)
- Additional test coverage beyond basics (basics exist, could be more thorough)
- Documentation improvements (code works, docs could be better)
- Accessibility enhancements beyond minimum
**Process:**
1. Create .planning/ISSUES.md if doesn't exist (use template)
2. Add entry with ISS-XXX number (auto-increment; see the sketch after this list)
3. Brief notification: `📋 Logged enhancement: [brief] (ISS-XXX)`
4. Continue task without implementing
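Auto-incrementing can be a one-liner (a sketch; assumes ISS-XXX identifiers appear in `.planning/ISSUES.md`):
```bash
# Compute the next ISS number from existing entries (ISS-001 if the file is new)
last=$(grep -oE 'ISS-[0-9]+' .planning/ISSUES.md 2>/dev/null | sort -V | tail -1 | cut -d- -f2)
printf 'ISS-%03d\n' $(( 10#${last:-0} + 1 ))
```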
**Template for ISSUES.md:**
```markdown
# Project Issues Log
Enhancements discovered during execution. Not critical - address in future phases.
## Open Enhancements
### ISS-001: [Brief description]
- **Discovered:** Phase [X] Plan [Y] Task [Z] (YYYY-MM-DD)
- **Type:** [Performance / Refactoring / UX / Testing / Documentation / Accessibility]
- **Description:** [What could be improved and why it would help]
- **Impact:** Low (works correctly, this would enhance)
- **Effort:** [Quick / Medium / Substantial]
- **Suggested phase:** [Phase number or "Future"]
## Closed Enhancements
[Moved here when addressed]
```
**No user permission needed.** Logging for future consideration.
---
**RULE PRIORITY (when multiple could apply):**
1. **If Rule 4 applies** → STOP and ask (architectural decision)
2. **If Rules 1-3 apply** → Fix automatically, track for Summary
3. **If Rule 5 applies** → Log to ISSUES.md, continue
4. **If genuinely unsure which rule** → Apply Rule 4 (ask user)
**Edge case guidance:**
- "This validation is missing" → Rule 2 (critical for security)
- "This validation could be better" → Rule 5 (enhancement)
- "This crashes on null" → Rule 1 (bug)
- "This could be faster" → Rule 5 (enhancement) UNLESS actually timing out → Rule 2 (critical)
- "Need to add table" → Rule 4 (architectural)
- "Need to add column" → Rule 1 or 2 (depends: fixing bug or adding critical field)
**When in doubt:** Ask yourself "Does this affect correctness, security, or ability to complete task?"
- YES → Rules 1-3 (fix automatically)
- NO → Rule 5 (log it)
- MAYBE → Rule 4 (ask user)
</deviation_rules>
<deviation_documentation>
## Documenting Deviations in Summary
After all tasks complete, Summary MUST include deviations section.
**If no deviations:**
```markdown
## Deviations from Plan
None - plan executed exactly as written.
```
**If deviations occurred:**
```markdown
## Deviations from Plan
### Auto-fixed Issues
**1. [Rule 1 - Bug] Fixed case-sensitive email uniqueness constraint**
- **Found during:** Task 4 (Follow/unfollow API implementation)
- **Issue:** User.email unique constraint was case-sensitive - Test@example.com and test@example.com were both allowed, causing duplicate accounts
- **Fix:** Changed to `CREATE UNIQUE INDEX users_email_unique ON users (LOWER(email))`
- **Files modified:** src/models/User.ts, migrations/003_fix_email_unique.sql
- **Verification:** Unique constraint test passes - duplicate emails properly rejected
- **Commit:** abc123f
**2. [Rule 2 - Missing Critical] Added JWT expiry validation to auth middleware**
- **Found during:** Task 3 (Protected route implementation)
- **Issue:** Auth middleware wasn't checking token expiry - expired tokens were being accepted
- **Fix:** Added exp claim validation in middleware, reject with 401 if expired
- **Files modified:** src/middleware/auth.ts, src/middleware/auth.test.ts
- **Verification:** Expired token test passes - properly rejects with 401
- **Commit:** def456g
**3. [Rule 3 - Blocking] Fixed broken import path for UserService**
- **Found during:** Task 5 (Profile endpoint)
- **Issue:** Import path referenced old location (src/services/User.ts) but file was moved to src/services/users/UserService.ts in previous plan
- **Fix:** Updated import path
- **Files modified:** src/api/profile.ts
- **Verification:** Build succeeds, imports resolve
- **Commit:** ghi789h
**4. [Rule 4 - Architectural] Added Redis caching layer (APPROVED BY USER)**
- **Found during:** Task 6 (Feed endpoint)
- **Issue:** Feed queries hitting database on every request, causing 2-3 second response times under load
- **Proposed:** Add Redis cache with 5-minute TTL for feed data
- **User decision:** Approved
- **Fix:** Implemented Redis caching with ioredis client, cache invalidation on new posts
- **Files created:** src/cache/RedisCache.ts, src/cache/CacheKeys.ts, docker-compose.yml (added Redis)
- **Verification:** Feed response time reduced to <200ms, cache hit rate >80% in testing
- **Commit:** jkl012m
### Deferred Enhancements
Logged to .planning/ISSUES.md for future consideration:
- ISS-001: Refactor UserService into smaller modules (discovered in Task 3)
- ISS-002: Add connection pooling for Redis (discovered in Task 6)
- ISS-003: Improve error messages for validation failures (discovered in Task 2)
---
**Total deviations:** 4 auto-fixed (1 bug, 1 missing critical, 1 blocking, 1 architectural with approval), 3 deferred
**Impact on plan:** All auto-fixes necessary for correctness/security/performance. No scope creep.
```
**This provides complete transparency:**
- Every deviation documented
- Why it was needed
- What rule applied
- What was done
- User can see exactly what happened beyond the plan
</deviation_documentation>
<step name="checkpoint_protocol">
When encountering `type="checkpoint:*"`:
**Critical: Claude automates everything with CLI/API before checkpoints.** Checkpoints are for verification and decisions, not manual work.
**Display checkpoint clearly:**
```
════════════════════════════════════════
CHECKPOINT: [Type]
════════════════════════════════════════
Task [X] of [Y]: [Action/What-Built/Decision]
[Display task-specific content based on type]
[Resume signal instruction]
════════════════════════════════════════
```
**For checkpoint:human-verify (90% of checkpoints):**
```
I automated: [what was automated - deployed, built, configured]
How to verify:
1. [Step 1 - exact command/URL]
2. [Step 2 - what to check]
3. [Step 3 - expected behavior]
[Resume signal - e.g., "Type 'approved' or describe issues"]
```
**For checkpoint:decision (9% of checkpoints):**
```
Decision needed: [decision]
Context: [why this matters]
Options:
1. [option-id]: [name]
Pros: [pros]
Cons: [cons]
2. [option-id]: [name]
Pros: [pros]
Cons: [cons]
[Resume signal - e.g., "Select: option-id"]
```
**For checkpoint:human-action (1% - rare, only for truly unavoidable manual steps):**
```
I automated: [what Claude already did via CLI/API]
Need your help with: [the ONE thing with no CLI/API - email link, 2FA code]
Instructions:
[Single unavoidable step]
I'll verify after: [verification]
[Resume signal - e.g., "Type 'done' when complete"]
```
**After displaying:** WAIT for user response. Do NOT hallucinate completion. Do NOT continue to next task.
**After user responds:**
- Run verification if specified (file exists, env var set, tests pass, etc.)
- If verification passes or N/A: continue to next task
- If verification fails: inform user, wait for resolution
See references/checkpoints.md and references/cli-automation.md for complete checkpoint guidance.
</step>
<step name="verification_failure_gate">
If any task verification fails:
STOP. Do not continue to next task.
Present inline:
"Verification failed for Task [X]: [task name]
Expected: [verification criteria]
Actual: [what happened]
How to proceed?
1. Retry - Try the task again
2. Skip - Mark as incomplete, continue
3. Stop - Pause execution, investigate"
Wait for user decision.
If user chose "Skip", note it in SUMMARY.md under "Issues Encountered".
</step>
<step name="create_summary">
Create `{phase}-{plan}-SUMMARY.md` as specified in the prompt's `<output>` section.
Use templates/summary.md for structure.
**File location:** `.planning/phases/XX-name/{phase}-{plan}-SUMMARY.md`
**Title format:** `# Phase [X] Plan [Y]: [Name] Summary`
The one-liner must be SUBSTANTIVE:
- Good: "JWT auth with refresh rotation using jose library"
- Bad: "Authentication implemented"
**Next Step section:**
- If more plans exist in this phase: "Ready for {phase}-{next-plan}-PLAN.md"
- If this is the last plan: "Phase complete, ready for transition"
</step>
<step name="issues_review_gate">
Before proceeding, check SUMMARY.md content:
If "Issues Encountered" is NOT "None":
Present inline:
"Phase complete, but issues were encountered:
- [Issue 1]
- [Issue 2]
Please review before proceeding. Acknowledged?"
Wait for acknowledgment.
If "Next Phase Readiness" mentions blockers or concerns:
Present inline:
"Note for next phase:
[concerns from Next Phase Readiness]
Acknowledged?"
Wait for acknowledgment.
</step>
<step name="update_roadmap">
Update ROADMAP.md:
**If more plans remain in this phase:**
- Update plan count: "2/3 plans complete"
- Keep phase status as "In progress"
**If this was the last plan in the phase:**
- Mark phase complete: status → "Complete"
- Add completion date
- Update plan count: "3/3 plans complete"
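The count bump itself is a one-line edit, for example (a sketch; the roadmap line format shown is hypothetical):
```bash
# Bump the plan count for the finished phase (GNU sed; on macOS use `sed -i ''`)
sed -i 's|Phase 2: Authentication (2/3|Phase 2: Authentication (3/3|' .planning/ROADMAP.md
```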
</step>
<step name="git_commit_plan">
Commit plan completion (PLAN + SUMMARY + code):
```bash
git add .planning/phases/XX-name/{phase}-{plan}-PLAN.md
git add .planning/phases/XX-name/{phase}-{plan}-SUMMARY.md
git add .planning/ROADMAP.md
git add src/ # or relevant code directories
git commit -m "$(cat <<'EOF'
feat({phase}-{plan}): [one-liner from SUMMARY.md]
- [Key accomplishment 1]
- [Key accomplishment 2]
- [Key accomplishment 3]
EOF
)"
```
Confirm: "Committed: feat({phase}-{plan}): [what shipped]"
**Commit scope pattern:**
- `feat(01-01):` for phase 1 plan 1
- `feat(02-03):` for phase 2 plan 3
- Creates clear, chronological git history
</step>
<step name="offer_next">
**If more plans in this phase:**
```
Plan {phase}-{plan} complete.
Summary: .planning/phases/XX-name/{phase}-{plan}-SUMMARY.md
[X] of [Y] plans complete for Phase Z.
What's next?
1. Execute next plan ({phase}-{next-plan})
2. Review what was built
3. Done for now
```
**If phase complete (last plan done):**
```
Plan {phase}-{plan} complete.
Summary: .planning/phases/XX-name/{phase}-{plan}-SUMMARY.md
Phase [Z]: [Name] COMPLETE - all [Y] plans finished.
What's next?
1. Transition to next phase
2. Review phase accomplishments
3. Done for now
```
</step>
</process>
<success_criteria>
- All tasks from PLAN.md completed
- All verifications pass
- SUMMARY.md created with substantive content
- ROADMAP.md updated
</success_criteria>


@@ -0,0 +1,84 @@
# Workflow: Get Planning Guidance
<purpose>
Help decide the right planning approach based on project state and goals.
</purpose>
<process>
<step name="understand_situation">
Ask conversationally:
- What's the project/idea?
- How far along are you? (idea, started, mid-project, almost done)
- What feels unclear?
</step>
<step name="recommend_approach">
Based on situation:
**Just an idea:**
→ Start with Brief. Capture vision before diving in.
**Know what to build, unclear how:**
→ Create Roadmap. Break into phases first.
**Have phases, need specifics:**
→ Plan Phase. Get Claude-executable tasks.
**Mid-project, lost track:**
→ Audit current state. What exists? What's left?
**Project feels stuck:**
→ Identify the blocker. Is it planning or execution?
</step>
<step name="offer_next_action">
```
Recommendation: [approach]
Because: [one sentence why]
Start now?
1. Yes, proceed with [recommended workflow]
2. Different approach
3. More questions first
```
</step>
</process>
<decision_tree>
```
Is there a brief?
├─ No → Create Brief
└─ Yes → Is there a roadmap?
├─ No → Create Roadmap
└─ Yes → Is current phase planned?
├─ No → Plan Phase
└─ Yes → Plan Chunk or Generate Prompts
```
</decision_tree>
<common_situations>
**"I have an idea but don't know where to start"**
→ Brief first. 5 minutes to capture vision.
**"I know what to build but it feels overwhelming"**
→ Roadmap. Break it into 3-5 phases.
**"I have a phase but tasks are vague"**
→ Plan Phase with Claude-executable specificity.
**"I have a plan but Claude keeps going off track"**
→ Tasks aren't specific enough. Add Files/Action/Verification.
**"Context keeps running out mid-task"**
→ Tasks are too big. Break into smaller chunks + use handoff.
</common_situations>
<success_criteria>
Guidance is complete when:
- [ ] User's situation understood
- [ ] Appropriate approach recommended
- [ ] User knows next step
</success_criteria>


@@ -0,0 +1,134 @@
# Workflow: Create Handoff
<required_reading>
**Read these files NOW:**
1. templates/continue-here.md
</required_reading>
<purpose>
Create a context handoff file when pausing work. This preserves full context
so a fresh Claude session can pick up exactly where you left off.
**Handoff is a parking lot, not a journal.** Create when leaving, delete when returning.
</purpose>
<when_to_create>
- User says "pack it up", "stopping for now", "save my place"
- Context window at 15% or below (offer to create)
- Context window at 10% (auto-create)
- Switching to different project
</when_to_create>
<process>
<step name="identify_location">
Determine which phase we're in:
```bash
# Find current phase (most recently modified PLAN.md)
ls -lt .planning/phases/*/*-PLAN.md 2>/dev/null | head -1
```
Handoff goes in the current phase directory.
</step>
<step name="gather_context">
Collect everything needed for seamless resumption:
1. **Current position**: Which phase, which task
2. **Work completed**: What's done this session
3. **Work remaining**: What's left
4. **Decisions made**: Why things were done this way
5. **Blockers/issues**: Anything stuck
6. **Mental context**: The "vibe" - what you were thinking
</step>
<step name="write_handoff">
Use template from `templates/continue-here.md`.
Write to `.planning/phases/XX-name/.continue-here.md`:
```yaml
---
phase: XX-name
task: 3
total_tasks: 7
status: in_progress
last_updated: [ISO timestamp]
---
```
Then markdown body with full context.
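For `last_updated`, an ISO-8601 UTC timestamp can come straight from `date`:
```bash
date -u +"%Y-%m-%dT%H:%M:%SZ"   # e.g. 2025-11-29T10:28:37Z
```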
</step>
<step name="git_commit_wip">
Commit handoff as WIP:
```bash
git add .planning/
git commit -m "$(cat <<'EOF'
wip: [phase-name] paused at task [X]/[Y]
Current: [task name]
[If blocked:] Blocked: [reason]
EOF
)"
```
Confirm: "Committed: wip: [phase] paused at task [X]/[Y]"
</step>
<step name="handoff_confirmation">
Require acknowledgment:
"Handoff created: .planning/phases/[XX]/.continue-here.md
Current state:
- Phase: [XX-name]
- Task: [X] of [Y]
- Status: [in_progress/blocked/etc]
- Committed as WIP
To resume: Invoke this skill in a new session.
Confirmed?"
Wait for acknowledgment before ending.
</step>
</process>
<context_trigger>
**Auto-handoff at 10% context:**
When system warning shows ~20k tokens remaining:
1. Complete current atomic operation (don't leave broken state)
2. Create handoff automatically
3. Tell user: "Context limit reached. Handoff created at [location]."
4. Stop working - don't start new tasks
**Warning at 15%:**
"Context getting low (~30k remaining). Create handoff now or push through?"
</context_trigger>
<handoff_lifecycle>
```
Working → No handoff exists
"Pack it up" → CREATE .continue-here.md
[Session ends]
[New session]
"Resume" → READ handoff, then DELETE it
Working → No handoff (context is fresh)
Phase complete → Ensure no stale handoff exists
```
Handoff is temporary. If it persists after resuming, it's stale.
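A quick staleness check at session start (a sketch):
```bash
# Any leftover handoff files should have been deleted on resume
find .planning/phases -name ".continue-here.md" -print 2>/dev/null
```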
</handoff_lifecycle>
<success_criteria>
Handoff is complete when:
- [ ] .continue-here.md exists in current phase
- [ ] YAML frontmatter has phase, task, status, timestamp
- [ ] Body has: completed work, remaining work, decisions, context
- [ ] User knows how to resume
</success_criteria>


@@ -0,0 +1,70 @@
# Workflow: Plan Next Chunk
<required_reading>
**Read the current phase's PLAN.md**
</required_reading>
<purpose>
Identify the immediate next 1-3 tasks to work on. This is for when you want
to focus on "what's next" without replanning the whole phase.
</purpose>
<process>
<step name="find_current_position">
Read the phase plan:
```bash
cat .planning/phases/XX-current/*-PLAN.md
```
Identify:
- Which tasks are complete (marked or inferred)
- Which task is next
- Dependencies between tasks
</step>
<step name="identify_chunk">
Select 1-3 tasks that:
- Are next in sequence
- Have dependencies met
- Form a coherent chunk of work
Present:
```
Current phase: [Phase Name]
Progress: [X] of [Y] tasks complete
Next chunk:
1. Task [N]: [Name] - [Brief description]
2. Task [N+1]: [Name] - [Brief description]
Ready to work on these?
```
</step>
<step name="offer_execution">
Options:
1. **Start working** - Begin with Task N
2. **Generate prompt** - Create meta-prompt for this chunk
3. **See full plan** - Review all remaining tasks
4. **Different chunk** - Pick different tasks
</step>
</process>
<chunk_sizing>
Good chunks:
- 1-3 tasks
- Can complete in one session
- Deliver something testable
If user asks "what's next" - give them ONE task.
If user asks "plan my session" - give them 2-3 tasks.
</chunk_sizing>
<success_criteria>
Chunk planning is complete when:
- [ ] Current position identified
- [ ] Next 1-3 tasks selected
- [ ] User knows what to work on
</success_criteria>


@@ -0,0 +1,334 @@
# Workflow: Plan Phase
<required_reading>
**Read these files NOW:**
1. templates/phase-prompt.md
2. references/plan-format.md
3. references/scope-estimation.md
4. references/checkpoints.md
5. Read `.planning/ROADMAP.md`
6. Read `.planning/BRIEF.md`
**If domain expertise should be loaded (determined by intake):**
7. Read domain SKILL.md: `~/.claude/skills/expertise/[domain]/SKILL.md`
8. Determine phase type from ROADMAP (UI, database, API, etc.)
9. Read ONLY relevant references from domain's `<references_index>` section
</required_reading>
<purpose>
Create an executable phase prompt (PLAN.md). This is where we get specific:
objective, context, tasks, verification, success criteria, and output specification.
**Key insight:** PLAN.md IS the prompt that Claude executes. Not a document that
gets transformed into a prompt.
</purpose>
<process>
<step name="identify_phase">
Check roadmap for phases:
```bash
cat .planning/ROADMAP.md
ls .planning/phases/
```
If multiple phases available, ask which one to plan.
If obvious (first incomplete phase), proceed.
Read any existing PLAN.md or FINDINGS.md in the phase directory.
</step>
<step name="check_research_needed">
For this phase, assess:
- Are there technology choices to make?
- Are there unknowns about the approach?
- Do we need to investigate APIs or libraries?
If yes: Route to workflows/research-phase.md first.
Research produces FINDINGS.md, then return here.
If no: Proceed with planning.
</step>
<step name="gather_phase_context">
For this specific phase, understand:
- What's the phase goal? (from roadmap)
- What exists already? (scan codebase if mid-project)
- What dependencies are met? (previous phases complete?)
- Any research findings? (FINDINGS.md)
```bash
# If mid-project, understand current state
ls -la src/ 2>/dev/null
cat package.json 2>/dev/null | head -20
```
</step>
<step name="break_into_tasks">
Decompose the phase into tasks.
Each task must have:
- **Type**: auto, checkpoint:human-verify, checkpoint:decision (human-action rarely needed)
- **Task name**: Clear, action-oriented
- **Files**: Which files created/modified (for auto tasks)
- **Action**: Specific implementation (including what to avoid and WHY)
- **Verify**: How to prove it worked
- **Done**: Acceptance criteria
**Identify checkpoints:**
- Claude automated work needing visual/functional verification? → checkpoint:human-verify
- Implementation choices to make? → checkpoint:decision
- Truly unavoidable manual action (email link, 2FA)? → checkpoint:human-action (rare)
**Critical:** If external resource has CLI/API (Vercel, Stripe, Upstash, GitHub, etc.), use type="auto" to automate it. Only checkpoint for verification AFTER automation.
See references/checkpoints.md and references/cli-automation.md for checkpoint structure and automation guidance.
</step>
<step name="estimate_scope">
After breaking into tasks, assess scope against the **quality degradation curve**.
**ALWAYS split if:**
- >3 tasks total
- Multiple subsystems (DB + API + UI = separate plans)
- >5 files modified in any single task
- Complex domains (auth, payments, data modeling)
**Aggressive atomicity principle:** Better to have 10 small, high-quality plans than 3 large, degraded plans.
**If scope is appropriate (2-3 tasks, single subsystem, <5 files per task):**
Proceed to confirm_breakdown for a single plan.
**If scope is large (>3 tasks):**
Split into multiple plans by:
- Subsystem (01-01: Database, 01-02: API, 01-03: UI, 01-04: Frontend)
- Dependency (01-01: Setup, 01-02: Core, 01-03: Features, 01-04: Testing)
- Complexity (01-01: Layout, 01-02: Data fetch, 01-03: Visualization)
- Autonomous vs Interactive (group auto tasks for subagent execution)
**Each plan must be:**
- 2-3 tasks maximum
- ~50% context target (not 80%)
- Independently committable
**Autonomous plan optimization:**
- Plans with NO checkpoints → will execute via subagent (fresh context)
- Plans with checkpoints → execute in main context (user interaction required)
- Try to group autonomous work together for maximum fresh contexts
See references/scope-estimation.md for complete splitting guidance and quality degradation analysis.
</step>
<step name="confirm_breakdown">
Present the breakdown inline:
**If single plan (2-3 tasks):**
```
Here's the proposed breakdown for Phase [X]:
### Tasks (single plan: {phase}-01-PLAN.md)
1. [Task name] - [brief description] [type: auto/checkpoint]
2. [Task name] - [brief description] [type: auto/checkpoint]
[3. [Task name] - [brief description] [type: auto/checkpoint]] (optional 3rd task if small)
Autonomous: [yes/no] (no checkpoints = subagent execution with fresh context)
Does this breakdown look right? (yes / adjust / start over)
```
**If multiple plans (>3 tasks or multiple subsystems):**
```
Here's the proposed breakdown for Phase [X]:
This phase requires 3 plans to maintain quality:
### Plan 1: {phase}-01-PLAN.md - [Subsystem/Component Name]
1. [Task name] - [brief description] [type]
2. [Task name] - [brief description] [type]
3. [Task name] - [brief description] [type]
### Plan 2: {phase}-02-PLAN.md - [Subsystem/Component Name]
1. [Task name] - [brief description] [type]
2. [Task name] - [brief description] [type]
### Plan 3: {phase}-03-PLAN.md - [Subsystem/Component Name]
1. [Task name] - [brief description] [type]
2. [Task name] - [brief description] [type]
Each plan is independently executable and scoped to ~50% context.
Does this breakdown look right? (yes / adjust / start over)
```
Wait for confirmation before proceeding.
If "adjust": Ask what to change, revise, present again.
If "start over": Return to gather_phase_context step.
</step>
<step name="approach_ambiguity">
If multiple valid approaches exist for any task:
Use AskUserQuestion:
- header: "Approach"
- question: "For [task], there are multiple valid approaches:"
- options:
- "[Approach A]" - [tradeoff description]
- "[Approach B]" - [tradeoff description]
- "Decide for me" - Use your best judgment
Only ask if genuinely ambiguous. Don't ask about obvious choices.
</step>
<step name="decision_gate">
After breakdown confirmed:
Use AskUserQuestion:
- header: "Ready"
- question: "Ready to create the phase prompt, or would you like me to ask more questions?"
- options:
- "Create phase prompt" - I have enough context
- "Ask more questions" - There are details to clarify
- "Let me add context" - I want to provide more information
Loop until "Create phase prompt" selected.
</step>
<step name="write_phase_prompt">
Use template from `templates/phase-prompt.md`.
**If single plan:**
Write to `.planning/phases/XX-name/{phase}-01-PLAN.md`
**If multiple plans:**
Write multiple files:
- `.planning/phases/XX-name/{phase}-01-PLAN.md`
- `.planning/phases/XX-name/{phase}-02-PLAN.md`
- `.planning/phases/XX-name/{phase}-03-PLAN.md`
Each file follows the template structure:
```markdown
---
phase: XX-name
plan: {plan-number}
type: execute
domain: [if domain expertise loaded]
---
<objective>
[Plan-specific goal - what this plan accomplishes]
Purpose: [Why this plan matters for the phase]
Output: [What artifacts will be created by this plan]
</objective>
<execution_context>
@~/.claude/skills/create-plans/workflows/execute-phase.md
@~/.claude/skills/create-plans/templates/summary.md
[If plan has ANY checkpoint tasks (type="checkpoint:*"), add:]
@~/.claude/skills/create-plans/references/checkpoints.md
</execution_context>
<context>
@.planning/BRIEF.md
@.planning/ROADMAP.md
[If research done:]
@.planning/phases/XX-name/FINDINGS.md
[If continuing from previous plan:]
@.planning/phases/XX-name/{phase}-{prev}-SUMMARY.md
[Relevant source files:]
@src/path/to/relevant.ts
</context>
<tasks>
[Tasks in XML format with type attribute]
[Mix of type="auto" and type="checkpoint:*" as needed]
</tasks>
<verification>
[Overall plan verification checks]
</verification>
<success_criteria>
[Measurable completion criteria for this plan]
</success_criteria>
<output>
After completion, create `.planning/phases/XX-name/{phase}-{plan}-SUMMARY.md`
[Include summary structure from template]
</output>
```
**For multi-plan phases:**
- Each plan has focused scope (2-3 tasks)
- Plans reference previous plan summaries in context
- Last plan's success criteria includes "Phase X complete"
</step>
<step name="offer_next">
**If single plan:**
```
Phase plan created: .planning/phases/XX-name/{phase}-01-PLAN.md
[X] tasks defined.
What's next?
1. Execute plan
2. Review/adjust tasks
3. Done for now
```
**If multiple plans:**
```
Phase plans created:
- {phase}-01-PLAN.md ([X] tasks) - [Subsystem name]
- {phase}-02-PLAN.md ([X] tasks) - [Subsystem name]
- {phase}-03-PLAN.md ([X] tasks) - [Subsystem name]
Total: [X] tasks across [Y] focused plans.
What's next?
1. Execute first plan ({phase}-01)
2. Review/adjust tasks
3. Done for now
```
</step>
</process>
<task_quality>
Good tasks:
- "Add User model to Prisma schema with email, passwordHash, createdAt"
- "Create POST /api/auth/login endpoint with bcrypt validation"
- "Add protected route middleware checking JWT in cookies"
Bad tasks:
- "Set up authentication" (too vague)
- "Make it secure" (not actionable)
- "Handle edge cases" (which ones?)
If you can't specify Files + Action + Verify + Done, the task is too vague.
</task_quality>
<anti_patterns>
- Don't add story points
- Don't estimate hours
- Don't assign to team members
- Don't add acceptance criteria committees
- Don't create sub-sub-sub tasks
Tasks are instructions for Claude, not Jira tickets.
</anti_patterns>
<success_criteria>
Phase planning is complete when:
- [ ] One or more PLAN files exist with XML structure ({phase}-{plan}-PLAN.md)
- [ ] Each plan has: Objective, context, tasks, verification, success criteria, output
- [ ] @context references included
- [ ] Each plan has 2-3 tasks (scoped to ~50% context)
- [ ] Each task has: Type, Files (if auto), Action, Verify, Done
- [ ] Checkpoints identified and properly structured
- [ ] Tasks are specific enough for Claude to execute
- [ ] If multiple plans: logical split by subsystem/dependency/complexity
- [ ] User knows next steps
</success_criteria>

# Workflow: Research Phase
<purpose>
Create and execute a research prompt for phases with unknowns.
Produces FINDINGS.md that informs PLAN.md creation.
</purpose>
<when_to_use>
- Technology choice unclear
- Best practices needed
- API/library investigation required
- Architecture decision pending
</when_to_use>
<process>
<step name="identify_unknowns">
Ask: What do we need to learn before we can plan this phase?
- Technology choices?
- Best practices?
- API patterns?
- Architecture approach?
</step>
<step name="create_research_prompt">
Use templates/research-prompt.md.
Write to `.planning/phases/XX-name/RESEARCH.md`
Include:
- Clear research objective
- Scoped include/exclude lists
- Source preferences (official docs, Context7, 2024-2025)
- Output structure for FINDINGS.md
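A minimal RESEARCH.md sketch, assuming the template's structure (see templates/research-prompt.md for the authoritative version):
```markdown
# Research: [Phase name]

**Objective:** [The decision this research must enable]

**Include:** [topics in scope]
**Exclude:** [explicitly out of scope]

**Sources:** Official docs first, Context7 for libraries, prefer 2024-2025 material

**Output:** FINDINGS.md with summary, findings + sources, code examples, metadata
```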
</step>
<step name="execute_research">
Run the research prompt:
- Use web search for current info
- Use Context7 MCP for library docs
- Prefer 2024-2025 sources
- Structure findings per template
</step>
<step name="create_findings">
Write `.planning/phases/XX-name/FINDINGS.md`:
- Summary with recommendation
- Key findings with sources
- Code examples if applicable
- Metadata (confidence, dependencies, open questions, assumptions)
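The metadata block might be sketched like this (field names mirror the list above; the exact format is defined by the template):
```markdown
## Metadata
- Confidence: medium - [why]
- Dependencies: [libraries/services the recommendation assumes]
- Open questions: [unresolved items that could affect implementation]
- Assumptions: [what was taken as given without verification]
```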
</step>
<step name="confidence_gate">
After creating FINDINGS.md, check confidence level.
If confidence is LOW:
Use AskUserQuestion:
- header: "Low Confidence"
- question: "Research confidence is LOW: [reason]. How would you like to proceed?"
- options:
- "Dig deeper" - Do more research before planning
- "Proceed anyway" - Accept uncertainty, plan with caveats
- "Pause" - I need to think about this
If confidence is MEDIUM:
Inline: "Research complete (medium confidence). [brief reason]. Proceed to planning?"
If confidence is HIGH:
Proceed directly, just note: "Research complete (high confidence)."
</step>
<step name="open_questions_gate">
If FINDINGS.md has open_questions:
Present them inline:
"Open questions from research:
- [Question 1]
- [Question 2]
These may affect implementation. Acknowledge and proceed? (yes / address first)"
If "address first": Gather user input on questions, update findings.
</step>
<step name="offer_next">
```
Research complete: .planning/phases/XX-name/FINDINGS.md
Recommendation: [one-liner]
Confidence: [level]
What's next?
1. Create phase plan (PLAN.md) using findings
2. Refine research (dig deeper)
3. Review findings
```
NOTE: FINDINGS.md is NOT committed separately. It will be committed with phase completion.
</step>
</process>
<success_criteria>
- RESEARCH.md exists with clear scope
- FINDINGS.md created with structured recommendations
- Confidence level and metadata included
- Ready to inform PLAN.md creation
</success_criteria>

# Workflow: Resume from Handoff
<required_reading>
**Read the handoff file found by context scan.**
</required_reading>
<purpose>
Load context from a handoff file and restore working state.
After loading, DELETE the handoff - it's a parking lot, not permanent storage.
</purpose>
<process>
<step name="locate_handoff">
Context scan already found handoff. Read it:
```bash
cat .planning/phases/*/.continue-here.md 2>/dev/null
```
Parse YAML frontmatter for: phase, task, status, last_updated
Parse markdown body for: context, completed work, remaining work
</step>
<step name="calculate_time_ago">
Convert `last_updated` to human-readable:
- "3 hours ago"
- "Yesterday"
- "5 days ago"
If > 2 weeks, warn: "This handoff is [X days] old. Code may have changed."
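One way to compute the age, assuming GNU date and an ISO timestamp in the frontmatter (BSD/macOS date needs different flags):
```bash
# Extract last_updated and compute age in days
last=$(grep -m1 '^last_updated:' .planning/phases/02-auth/.continue-here.md | awk '{print $2}')
days=$(( ($(date +%s) - $(date -d "$last" +%s)) / 86400 ))
[ "$days" -gt 14 ] && echo "Warning: handoff is $days days old"
```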
</step>
<step name="present_summary">
Display to user:
```
Resuming: Phase [X] - [Name]
Last updated: [time ago]
Task [N] of [Total]: [Task name]
Status: [in_progress/blocked/etc]
Completed this phase:
- [task 1]
- [task 2]
Remaining:
- [task 3] ← You are here
- [task 4]
Context notes:
[Key decisions, blockers, mental state from handoff]
Ready to continue? (1) Yes (2) See full handoff (3) Different action
```
</step>
<step name="user_confirms">
**WAIT for user confirmation.** Do not auto-proceed.
On confirmation:
1. Load relevant files mentioned in handoff
2. Delete the handoff file
3. Continue from where we left off
</step>
<step name="delete_handoff">
After user confirms and context is loaded:
```bash
rm .planning/phases/XX-name/.continue-here.md
```
Tell user: "Handoff loaded and cleared. Let's continue."
</step>
<step name="continue_work">
Based on handoff state:
- If mid-task: Continue that task
- If between tasks: Start next task
- If blocked: Address blocker first
Offer: "Continue with [next action]?"
</step>
</process>
<stale_handoff>
If handoff is > 2 weeks old:
```
Warning: This handoff is [X days] old.
The codebase may have changed. Recommend:
1. Review what's changed (git log)
2. Discard handoff, reassess from PLAN.md
3. Continue anyway (risky)
```
</stale_handoff>
<multiple_handoffs>
If multiple `.continue-here.md` files found:
```
Found multiple handoffs:
1. phases/02-auth/.continue-here.md (3 hours ago)
2. phases/01-setup/.continue-here.md (2 days ago)
Which one? (likely want #1, the most recent)
```
Most recent is usually correct. Older ones may be stale/forgotten.
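Sorting by modification time surfaces the newest first:
```bash
ls -t .planning/phases/*/.continue-here*.md 2>/dev/null
```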
</multiple_handoffs>
<success_criteria>
Resume is complete when:
- [ ] Handoff located and parsed
- [ ] Time-ago displayed
- [ ] Summary presented to user
- [ ] User explicitly confirmed
- [ ] Handoff file deleted
- [ ] Context loaded, ready to continue
</success_criteria>

# Workflow: Transition to Next Phase
<required_reading>
**Read these files NOW:**
1. `.planning/ROADMAP.md`
2. Current phase's plan files (`*-PLAN.md`)
3. Current phase's summary files (`*-SUMMARY.md`)
</required_reading>
<purpose>
Mark current phase complete and advance to next. This is the natural point
where progress tracking happens - implicit via forward motion.
"Planning next phase" = "current phase is done"
</purpose>
<process>
<step name="verify_completion">
Check current phase has all plan summaries:
```bash
ls .planning/phases/XX-current/*-PLAN.md 2>/dev/null | sort
ls .planning/phases/XX-current/*-SUMMARY.md 2>/dev/null | sort
```
**Verification logic:**
- Count PLAN files
- Count SUMMARY files
- If counts match: all plans complete
- If counts don't match: incomplete
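A minimal sketch of that check (phase directory is illustrative):
```bash
# Compare plan count against summary count
plans=$(ls .planning/phases/XX-current/*-PLAN.md 2>/dev/null | wc -l)
sums=$(ls .planning/phases/XX-current/*-SUMMARY.md 2>/dev/null | wc -l)
if [ "$plans" -gt 0 ] && [ "$plans" -eq "$sums" ]; then
  echo "All $plans plans complete"
else
  echo "$sums of $plans plans complete"
fi
```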
**If all plans complete:**
Ask: "Phase [X] complete - all [Y] plans finished. Ready to mark done and move to Phase [X+1]?"
**If plans incomplete:**
Present:
```
Phase [X] has incomplete plans:
- {phase}-01-SUMMARY.md ✓ Complete
- {phase}-02-SUMMARY.md ✗ Missing
- {phase}-03-SUMMARY.md ✗ Missing
Options:
1. Continue current phase (execute remaining plans)
2. Mark complete anyway (skip remaining plans)
3. Review what's left
```
Wait for user decision.
</step>
<step name="cleanup_handoff">
Check for lingering handoffs:
```bash
ls .planning/phases/XX-current/.continue-here*.md 2>/dev/null
```
If found, delete them - phase is complete, handoffs are stale.
Pattern matches:
- `.continue-here.md` (legacy)
- `.continue-here-01-02.md` (plan-specific)
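If any are found, clear them with the same glob:
```bash
rm -f .planning/phases/XX-current/.continue-here*.md
```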
</step>
<step name="update_roadmap">
Update `.planning/ROADMAP.md`:
- Mark current phase: `[x] Complete`
- Add completion date
- Update plan count to final (e.g., "3/3 plans complete")
- Update Progress table
- Keep next phase as `[ ] Not started`
**Example:**
```markdown
## Phases
- [x] Phase 1: Foundation (completed 2025-01-15)
- [ ] Phase 2: Authentication ← Next
- [ ] Phase 3: Core Features
## Progress
| Phase | Plans Complete | Status | Completed |
|-------|----------------|--------|-----------|
| 1. Foundation | 3/3 | Complete | 2025-01-15 |
| 2. Authentication | 0/2 | Not started | - |
| 3. Core Features | 0/1 | Not started | - |
```
</step>
<step name="archive_prompts">
If prompts were generated for the phase, they stay in place.
The `completed/` subfolder pattern from create-meta-prompts handles archival.
</step>
<step name="offer_next_phase">
```
Phase [X] marked complete.
Next: Phase [X+1] - [Name]
What would you like to do?
1. Plan Phase [X+1] in detail
2. Review roadmap
3. Take a break (done for now)
```
</step>
</process>
<implicit_tracking>
Progress tracking is IMPLICIT:
- "Plan phase 2" → Phase 1 must be done (or ask)
- "Plan phase 3" → Phases 1-2 must be done (or ask)
- Transition workflow makes it explicit in ROADMAP.md
No separate "update progress" step. Forward motion IS progress.
</implicit_tracking>
<partial_completion>
If user wants to move on but phase isn't fully complete:
```
Phase [X] has incomplete plans:
- {phase}-02-PLAN.md (not executed)
- {phase}-03-PLAN.md (not executed)
Options:
1. Mark complete anyway (plans weren't needed)
2. Defer work to later phase
3. Stay and finish current phase
```
Respect user judgment - they know whether the work matters.
**If marking complete with incomplete plans:**
- Update ROADMAP: "2/3 plans complete" (not "3/3")
- Note in transition message which plans were skipped
</partial_completion>
<success_criteria>
Transition is complete when:
- [ ] Current phase plan summaries verified (all exist or user chose to skip)
- [ ] Any stale handoffs deleted
- [ ] ROADMAP.md updated with completion status and plan count
- [ ] Progress table updated
- [ ] User knows next steps
</success_criteria>