Initial commit

Zhongwei Li
2025-11-29 17:54:56 +08:00
commit 5f5aff96e9
26 changed files with 3398 additions and 0 deletions

@@ -0,0 +1,313 @@
---
name: dev-workflow
description: Complete development workflow from specification to implementation to review. Use this skill for any substantial development work (features, bugfixes, hotfixes) that requires planning, isolated implementation, and review. Orchestrates the entire process using specialized subagents.
---
# Development Workflow
## Overview
Orchestrate a complete, structured development workflow from initial design through final delivery. This skill manages the entire process: specification generation, detailed planning, task breakdown, isolated implementation, and thorough review.
## When to Use This Skill
Use this skill when:
- You are starting substantial development work (a feature, bugfix, or hotfix)
- You need a structured approach with clear planning and review
- You want isolated implementation sessions that minimize context pollution
- You are working on changes that benefit from systematic tracking
**Don't use for:** Trivial changes, exploratory work, or when a quick iteration is more appropriate.
## Architecture: Interactive vs Automated Phases
This workflow has two types of phases:
**Interactive phases (run in main conversation):**
- Specification - Requires back-and-forth Q&A with user
- Review - Requires routing feedback and getting sign-off
**Automated phases (run as subagents):**
- Planning - Transforms spec into detailed plan
- Task extraction - Breaks plan into trackable tasks
- Implementation - Executes one task per subagent
**Why this split:** Subagents cannot interact with the user. They run autonomously and return a single result. Interactive phases must run in the main conversation.
## Workflow Phases
### Phase 1: Specification (Interactive - Main Conversation)
Generate a comprehensive design specification through iterative questioning.
**How to run:** Follow the instructions in `references/spec_phase.md` directly in the main conversation.
**Process:**
1. Examine the project to understand current state
2. Ask ONE question at a time (preferring multiple choice)
3. Refine understanding through Q&A until certain
4. Present specification in 200-300 word sections
5. Get approval for each section before proceeding
6. Write final spec to `docs/development/NNN-<name>/spec.md`
**User involvement:** Answer questions, approve spec sections
### Phase 2: Planning (Automated - Subagent)
Create a detailed implementation plan that assumes the implementer has minimal context.
**How to run:** Spawn a subagent with rich context embedded in the prompt.
**Subagent prompt must include:**
1. Full content of the approved spec (not just the path)
2. Project structure summary (key directories, technologies)
3. Relevant existing code patterns to follow
4. The complete instructions from `references/plan_phase.md`
5. Output path: `docs/development/NNN-<name>/plan.md`
**User involvement:** Review and approve the plan
### Phase 3: Task Extraction (Automated - Subagent)
Break down the plan into trackable tasks.
**How to run:** Spawn a subagent with rich context embedded in the prompt.
**Subagent prompt must include:**
1. Full content of the plan (not just the path)
2. The complete instructions from `references/tasks_phase.md`
3. Output path: `docs/development/NNN-<name>/tasks.md`
**User involvement:** Review task list
### Phase 4: Implementation (Automated - Subagent per Task)
Implement tasks one at a time in fresh, isolated sessions.
**Implementation Options:**
**Option A: Claude Subagent (Recommended)**
For each task, spawn a fresh subagent with rich context.
**Subagent prompt must include:**
1. The specific task description and number
2. Relevant sections from the spec (not the whole spec unless needed)
3. The specific plan section for this task (extract by line numbers)
4. The complete instructions from `references/impl_phase.md`
5. Key project context: file locations, patterns, test commands
The subagent implements according to plan (NO deviation), follows TDD/DRY/YAGNI, completes one task, then stops.
**Option B: External Coding Agent**
1. Provide the external agent with:
- Task list file path: `docs/development/NNN-<name>/tasks.md`
- Spec file path: `docs/development/NNN-<name>/spec.md`
- Plan file path: `docs/development/NNN-<name>/plan.md`
- Implementation instructions: `references/impl_phase.md` (from this skill)
2. Direct the agent to implement the next uncompleted task
3. Agent reads task → spec → relevant plan section → implements
4. Agent reports completion (but does NOT mark complete or commit yet)
**Option C: Human Implementation**
1. Human reads the next task from tasks.md
2. Human reads corresponding spec and plan sections
3. Human implements according to plan
4. Human reports completion for review
**User involvement:** Choose implementation method, trigger each task, provide clarifications if needed
### Phase 5: Review (Interactive - Main Conversation)
Review completed work before marking complete.
**How to run:** Follow the instructions in `references/review_phase.md` directly in the main conversation.
**Process:**
1. Read the spec, plan section, and implementation
2. Check against requirements, code quality, tests
3. Provide specific feedback OR sign-off
4. Route feedback to implementer if issues found
5. After sign-off, authorize task completion and commit
**User involvement:**
- Trigger review after implementation completes
- Authorize task completion and commit after sign-off
- Decide when to do final review
## Using This Skill
### Quick Start
Invoke this skill and say:
> "I want to implement [feature/fix description]"
The skill will guide you through each phase, running interactive phases in the main conversation and spawning subagents for automated phases.
### Manual Phase Control
You can also invoke specific phases:
- **"Start spec phase"** - Begin specification (runs in main conversation)
- **"Generate plan"** - Create plan from existing spec (spawns subagent)
- **"Extract tasks"** - Break plan into tasks (spawns subagent)
- **"Implement next task"** - Implement next uncompleted task (spawns subagent)
- **"Review last implementation"** - Review completed work (runs in main conversation)
- **"Final review"** - Check all completed work together (runs in main conversation)
## Subagent Context Requirements
**Critical:** Subagents start with NO context. They cannot read files unless you tell them to, and even then they may not find the right information. For reliable results, **embed all necessary content directly in the subagent prompt**.
### Planning Subagent Prompt Template
```
You are a planning agent. Create a comprehensive implementation plan.
## Instructions
[Paste full content of references/plan_phase.md]
## Approved Specification
[Paste full content of docs/development/NNN-<name>/spec.md]
## Project Context
- Technologies: [list]
- Key directories: [list with descriptions]
- Relevant patterns: [describe existing patterns to follow]
- Test command: [command]
## Output
Write the plan to: docs/development/NNN-<name>/plan.md
```
### Task Extraction Subagent Prompt Template
```
You are a task extraction agent. Break down the plan into trackable tasks.
## Instructions
[Paste full content of references/tasks_phase.md]
## Implementation Plan
[Paste full content of docs/development/NNN-<name>/plan.md]
## Output
Write the task list to: docs/development/NNN-<name>/tasks.md
```
### Implementation Subagent Prompt Template
```
You are an implementation agent. Implement ONE task exactly as specified.
## Instructions
[Paste full content of references/impl_phase.md]
## Your Task
Task [N]: [description]
## Relevant Specification
[Paste relevant sections from spec.md]
## Plan Section for This Task
[Paste lines XX-YY from plan.md]
## Project Context
- Test command: [command]
- Files to modify: [list from plan]
- Patterns to follow: [describe]
## Constraints
- DO NOT DEVIATE FROM THE PLAN
- Implement this ONE task only
- Report completion but do NOT mark complete or commit
```
## File Structure
All workflow artifacts are stored in:
```
docs/development/NNN-<name>/
├── spec.md # Design specification
├── plan.md # Implementation plan
└── tasks.md # Task tracking list
```
The `NNN-<name>` directory is created in Phase 1.
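Creating this layout by hand is a one-liner. A sketch using a hypothetical feature name (`007-user-auth`); the three files are actually produced by their respective phases and are touched here only to show the shape:

```shell
# Scaffold the artifact directory for a hypothetical feature "007-user-auth"
dir=docs/development/007-user-auth
mkdir -p "$dir"
touch "$dir/spec.md" "$dir/plan.md" "$dir/tasks.md"
ls "$dir"
```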
## Key Principles
1. **Separation of concerns** - Design, implementation, and review are distinct phases
2. **Interactive vs automated** - User-facing phases run in main conversation; automated phases use subagents
3. **Rich subagent context** - Embed all necessary content in subagent prompts, not just file paths
4. **Plan adherence** - Implementers follow the plan strictly
5. **Incremental commits** - Each task gets its own commit after review
6. **Quality gates** - Nothing is marked complete without review sign-off
## Workflow State Management
Track workflow progress through:
- Task list checkboxes (`- [ ]` → `- [x]`)
- Commit history (one commit per completed task)
- File timestamps in docs/development/NNN-<name>/
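Because state lives in plain-text checkboxes and git history, progress checks are trivially scriptable. A minimal sketch (file contents are illustrative):

```shell
# Derive progress from the checkboxes alone; task titles are illustrative
printf -- '- [x] Task 1: Create User model\n- [ ] Task 2: Add validation\n- [ ] Task 3: Wire up API\n' > tasks.md

done_count=$(grep -c '^- \[x\]' tasks.md)
todo_count=$(grep -c '^- \[ \]' tasks.md)
echo "Progress: $done_count done, $todo_count remaining"   # prints: Progress: 1 done, 2 remaining
```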
## Advanced Usage
### Resuming After Interruption
If interrupted mid-workflow:
1. Read the task list to see completed tasks
2. Use "Implement next task" to continue
3. The subagent will pick up from the first uncompleted task
### Modifying the Plan
If the plan needs adjustment:
1. Edit `docs/development/NNN-<name>/plan.md`
2. Update affected tasks in `tasks.md`
3. Continue implementation with updated plan
### Parallel Implementation
For independent tasks, spawn multiple implementation agents in parallel:
> "Implement tasks 1, 3, and 5 in parallel"
Review each independently before marking complete.
### Using External Coding Agents
To use external agents (Cursor, Windsurf, Aider, etc.) for implementation:
1. **Complete spec, plan, and task phases** in Claude (phases 1-3)
2. **Export the context** to the external agent:
- Task list: `docs/development/NNN-<name>/tasks.md`
- Spec: `docs/development/NNN-<name>/spec.md`
- Plan: `docs/development/NNN-<name>/plan.md`
- Implementation guide: Extract `references/impl_phase.md` from this skill
3. **Instruct the external agent:**
```
Read the task list at [path]. Implement the next uncompleted task.
Follow the implementation guide in impl_phase.md strictly.
Read the spec and the plan section referenced in the task.
DO NOT mark the task complete or commit - report completion only.
```
4. **After implementation**, return to Claude for review (phase 5)
5. **Route feedback** between Claude reviewer and external implementer
6. **After sign-off**, external agent (or human) marks complete and commits
This approach allows you to:
- Use Claude for planning and review (its strength)
- Use external agents for implementation (potentially faster or with different capabilities)
- Maintain the structured workflow and quality gates
## References
Detailed instructions for each phase are in `references/`:
- `spec_phase.md` - Specification generation guidance (used directly in main conversation)
- `plan_phase.md` - Planning requirements and format (embedded in subagent prompt)
- `tasks_phase.md` - Task extraction rules (embedded in subagent prompt)
- `impl_phase.md` - Implementation constraints and workflow (embedded in subagent prompt)
- `review_phase.md` - Review checklist and standards (used directly in main conversation)
**For interactive phases:** Read the reference file and follow its instructions directly.
**For automated phases:** Read the reference file and embed its content in the subagent prompt.

@@ -0,0 +1,244 @@
# Implementation Phase Instructions
You are an implementation agent executing a single task from a development plan.
> **Note:** These instructions are agent-agnostic. Whether you are a Claude subagent, an external coding agent (Cursor, Windsurf, Aider, etc.), or a human developer, follow these instructions exactly.
## Your Mission
Implement ONE task according to the detailed plan. Your session has isolated context to avoid pollution. Follow the plan exactly - do not deviate.
## Core Principle
**DO NOT DEVIATE FROM THE PLAN.**
The planning phase has already determined the approach. Your job is execution, not strategy.
## Process
### Step 1: Identify Your Task
1. Ask the user for the task list path (if not provided)
2. Read `docs/development/NNN-<name>/tasks.md`
3. Find the first uncompleted task (first `- [ ]` item)
4. Note the plan line numbers for this task
**Output:**
```
I will implement: Task [N]: [description]
Plan reference: lines XX-YY
```
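Finding the first `- [ ]` item can be done mechanically rather than by eyeballing the file. A sketch with hypothetical task titles:

```shell
# Find the first uncompleted task: the first line matching "- [ ]"
printf -- '- [x] Task 1: Create User model\n- [ ] Task 2: Add validation\n' > tasks.md
grep -m1 -n '^- \[ \]' tasks.md   # prints: 2:- [ ] Task 2: Add validation
```

The line number that `grep -n` reports is also handy for cross-referencing the plan section noted next to the task.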
### Step 2: Load Minimal Context
Read ONLY what you need:
1. **Task list** - To identify the task (already read)
2. **Spec file** - To understand the overall goal (path in task list)
3. **Plan section** - For detailed instructions (specific lines only)
**DO NOT:**
- Read unrelated code
- Explore beyond what the plan specifies
- Load unnecessary context
- "Get familiar" with the codebase
Keep your context focused and minimal.
### Step 3: Implement Following TDD
Use Test-Driven Development:
1. **Red** - Write a failing test first
- Test what the plan specifies
- Use real data, not mocks (especially for E2E)
- Follow test structure from the plan
2. **Green** - Implement the minimal code to pass
- Follow file structure from the plan
- Use patterns specified in the plan
- Touch only files mentioned in the plan
3. **Refactor** - Clean up if needed
- Eliminate duplication (DRY)
- But only within scope of this task
- Don't refactor beyond what the plan specifies
4. **Repeat** - For each requirement in this task
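The red-green rhythm above can be compressed into a tiny runnable sketch. `slugify` is a hypothetical helper invented for illustration, not part of the workflow:

```shell
# The test exists before the implementation (TDD): run it, watch it fail,
# then write the minimal code and watch it pass.
check() { [ "$(slugify 'Hello World' 2>/dev/null)" = 'hello-world' ] && echo PASS || echo FAIL; }

check   # Red: prints FAIL because slugify does not exist yet
slugify() { echo "$1" | tr 'A-Z ' 'a-z-'; }
check   # Green: prints PASS with the minimal implementation in place
```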
### Step 4: Follow Key Principles
Throughout implementation:
- **DRY** - Don't repeat yourself (within this task's scope)
- **YAGNI** - Only implement what's in the plan, nothing more
- **TDD** - Tests first, always
- **Small changes** - Incremental progress
- **Plan adherence** - The plan is your guide
### Step 5: Self-Review
Before declaring complete:
1. ✅ All requirements from the plan section are met
2. ✅ Tests are written and passing
3. ✅ Code follows project conventions
4. ✅ No errors or warnings
5. ✅ Implementation matches plan's intent
Run the tests:
```bash
[Use test command from plan]
```
### Step 6: Report Completion
When done:
```
Task [N] implementation complete.
Changes made:
- [File 1]: [what changed]
- [File 2]: [what changed]
Tests: [N] passing
Ready for review.
```
**DO NOT:**
- Mark the task as complete in tasks.md (reviewer does this after sign-off)
- Commit (happens after sign-off)
- Proceed to the next task
### Step 7: Handle Review Feedback
After the user runs dev-review:
**If reviewer has issues:**
- User will provide specific feedback
- Address the feedback
- Return to Step 5 (self-review)
- Report completion again
**If reviewer signs off:**
- User will confirm sign-off
- Proceed to Step 8
### Step 8: Mark Complete and Commit
Only after explicit sign-off:
1. **Update task list:**
- Change `- [ ]` to `- [x]` for this task
- Update progress counts
- Save the file
2. **Create commit:**
```bash
git add [files]
git commit -m "$(cat <<'EOF'
Implement [task description]
- [Summary of what was done]
- [Key changes made]
Related to: docs/development/NNN-<name>/tasks.md
Task: [N]
EOF
)"
```
3. **Confirm:**
```
Task [N] marked complete and committed.
[Commit hash]
```
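Flipping the checkbox is scriptable as well. A sketch assuming GNU sed syntax, with illustrative task titles:

```shell
# After sign-off, flip only the first unchecked box from "- [ ]" to "- [x]"
# (0,/re/ addressing and in-place -i are GNU sed features)
printf -- '- [ ] Task 1: Create User model\n- [ ] Task 2: Add validation\n' > tasks.md
sed -i '0,/^- \[ \]/s//- [x]/' tasks.md
cat tasks.md
```

Only the first match is replaced, so later tasks stay unchecked until their own reviews pass.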
## Handling Problems
**Plan is unclear:**
- Stop and ask the user for clarification
- May need to update the plan
**Plan approach doesn't work:**
- Stop and inform the user with specifics
- Explain what's not working and why
- May need to update the plan
- DO NOT improvise a solution
**Tests are failing:**
- This is your responsibility
- Debug and fix before completing
- Don't blame the plan - make it work
**Unexpected conflicts or issues:**
- Stop and ask for guidance
- Describe the specific problem
- Wait for direction
## What NOT to Do
❌ **Don't implement features not in the plan** - Even if they seem like good ideas
❌ **Don't refactor beyond the task** - Stick to the scope
❌ **Don't skip tests** - TDD is mandatory
❌ **Don't mark complete without sign-off** - Review is required
❌ **Don't proceed to next task** - One task per session
❌ **Don't mock in E2E tests** - Use real data and APIs
❌ **Don't deviate from the plan** - Follow it exactly
## Example Implementation Flow
```
User: "Implement the next task"
Agent: [Reads task list]
"I will implement Task 1: Create User model with validation
Plan reference: lines 15-34"
Agent: [Reads spec and plan lines 15-34]
"Starting TDD implementation..."
Agent: [Writes test]
"Test written for User model validation. Running tests... FAIL (expected)"
Agent: [Implements User model]
"User model implemented. Running tests... PASS"
Agent: [Self-reviews]
"Task 1 implementation complete.
Changes made:
- src/models/User.ts: Created User model with Zod schema
- tests/models/User.test.ts: Added validation tests (8 test cases)
Tests: 8 passing
Ready for review."
[User runs review in separate session]
User: "Reviewer signed off"
Agent: [Updates task list, commits]
"Task 1 marked complete and committed.
Commit: abc123f"
```
## Context Awareness
You are part of a larger workflow:
- **Spec phase** created the design
- **Plan phase** created your instructions
- **Task phase** created your task list
- **Implementation phase (YOU)** executes one task
- **Review phase** validates your work
Your role is deliberately constrained. Strategic decisions were made during planning. Execute faithfully.

@@ -0,0 +1,230 @@
# Planning Phase Instructions
You are a planning agent creating a comprehensive implementation plan from an approved specification.
## Your Mission
Transform the specification into an actionable implementation plan that answers: **"In what order do I build this and where does each piece go?"**
The spec defines WHAT to build. The plan defines HOW to build it structurally—not the code itself.
## CRITICAL: No Code in the Plan
**DO NOT write any code in the plan phase.** This includes:
- ❌ Code snippets or examples
- ❌ Pseudocode
- ❌ Implementation sketches
- ❌ Function signatures with bodies
- ❌ "Code structure outlines"
Code belongs in dev-impl where it can be written, tested, and iterated immediately. Code in plans becomes stale artifacts that mislead implementers.
**DO include:**
- ✅ File paths and module names
- ✅ Function/class names that need to exist
- ✅ Data structures (as descriptions, not code)
- ✅ API contracts (as descriptions)
## Key Assumptions About the Implementer
The engineer who will follow this plan:
- ✅ Is a skilled developer
- ❌ Has zero context about this codebase
- ❌ Has questionable taste
- ❌ Doesn't know your toolset well
- ❌ Doesn't understand good test design
Therefore, your plan must be **extremely detailed and explicit**—but in prose, not code.
## Process
### Step 1: Analyze the Specification
Read and understand:
- What needs to be built
- Why it's needed
- Success criteria
- Constraints and requirements
### Step 2: Explore the Codebase
Understand:
- Where this functionality should live
- Existing patterns to follow
- Related code to modify or reference
- Testing infrastructure available
- Dependencies and imports needed
### Step 3: Tech Stack Selection (When Needed)
If the feature requires new libraries, frameworks, or tools not already in the project:
1. **Identify what's needed** - What capabilities does this feature require?
2. **Research options** - What libraries/tools could provide this?
3. **Evaluate against criteria:**
- Maintenance status and community health
- Bundle size / performance impact
- API ergonomics and learning curve
- Compatibility with existing stack
4. **Make a recommendation** - Document the choice and rationale
5. **Get approval** - Tech stack decisions should be confirmed before planning proceeds
**Document in plan:**
- What's being added and why
- Alternatives considered
- Any configuration or setup required
Skip this step if the feature uses only existing project dependencies.
### Step 4: Create the Plan
Write a comprehensive plan with these sections:
#### Overview
- Brief summary of what will be implemented
- High-level approach
#### Prerequisites
- Dependencies to install
- Configuration needed
- Knowledge to review first
#### Implementation Tasks
For EACH task, provide:
**Task Number and Title**
- Clear, specific title that describes the outcome
**Files to Touch**
- Exact file paths
- What will be added/modified in each (in prose)
**Dependency Order**
- What must exist before this task can start
- What this task enables for later tasks
**Integration Points**
- Where new code connects to existing code
- Existing patterns or interfaces to follow
**Testing Approach**
- What tests to write FIRST (TDD)
- Test file location
- What scenarios to cover (described, not coded)
**Verification**
- Commands to run
- Expected outcomes
**Risk Flags**
- Parts that might be tricky
- Areas needing investigation
- Potential blockers
### Step 5: Apply Key Principles
Emphasize throughout the plan:
- **TDD (Test-Driven Development)** - Write tests first
- **DRY (Don't Repeat Yourself)** - Avoid duplication
- **YAGNI (You Aren't Gonna Need It)** - Only build what's specified
- **Frequent commits** - Commit after each task
- **Small changes** - Break work into minimal increments
### Step 6: Test Guidance
Because the implementer doesn't know good test design, be explicit:
- **What to test** - Specific functionality and edge cases
- **What NOT to test** - Don't test mocks; test real behavior
- **Test structure** - Arrange/Act/Assert pattern
- **Test data** - Use real data, not mocks in E2E tests
- **Coverage** - What level of coverage is appropriate
## Plan Quality Standards
A good plan:
- **Self-contained** - No external context needed
- **Specific** - Exact files, clear steps
- **Sequenced** - Tasks in logical order
- **Testable** - Each task has clear verification
- **Realistic** - Tasks are achievable units of work
## Output Format
Write plan to `docs/development/NNN-<name>/plan.md`:
```markdown
# Implementation Plan: [Feature/Fix Name]
**Spec:** docs/development/NNN-<name>/spec.md
**Created:** [Date]
## Overview
[Summary of implementation approach - what we're building and the high-level strategy]
## Tech Stack (if applicable)
**New dependencies:**
- [library-name] - [why needed, alternatives considered]
**Setup required:**
- [Any configuration or installation steps]
## Task 1: [Title]
**Files:**
- `path/to/file1.ts` - Add function to handle X
- `path/to/file2.ts` - Modify existing Y to support Z
**Depends on:** Nothing (first task) / Task N
**Enables:** Task M, Task P
**Integration points:**
- Connects to existing FooService via the process() method
- Follows the pattern established in `path/to/similar.ts`
**Testing:**
- Test file: `path/to/test.ts`
- Scenarios: successful case, error handling, edge case X
**Verification:** Run `npm test -- --grep "feature name"`
**Risks:** The FooService API may need extension - investigate first
---
## Task 2: [Title]
[Same structure...]
---
## Final Integration
[How all tasks come together - what the implementer should verify at the end]
```
## Tone and Style
- **Imperative** - "Create X", not "You should create X"
- **Specific** - Exact paths and names
- **Explanatory** - WHY things are done this way
- **Encouraging** - Assume competence but provide guidance
## Common Mistakes to Avoid
❌ **Writing code** - No snippets, pseudocode, or implementation sketches
❌ **Vague instructions** - "Update the handler" without specifying which handler or what change
❌ **Assuming knowledge** - Expecting familiarity with project conventions
❌ **Tasks too large** - Each task should be completable in one focused session
❌ **Tasks too granular** - "Create directory", "Create file", "Add import" are implementation details, not tasks. A task delivers a coherent piece of functionality.
❌ **Missing file paths** - Every file to touch must be explicitly named
❌ **No verification steps** - Every task needs a way to confirm it's done
❌ **Ignoring dependencies** - Tasks must be ordered so foundations exist before dependent work
## Handoff
After writing the plan, inform the orchestrator that the planning phase is complete and provide the path to the plan file.

@@ -0,0 +1,324 @@
# Review Phase Instructions
You are a review agent validating completed implementation work.
## Your Mission
Review implementation work with fresh eyes to ensure it meets the specification and plan requirements before sign-off.
## Review Types
### Task-Level Review
Review a single completed task before it gets marked complete and committed.
**When:** After an implementer completes one task
### Final Review
Review all completed work at the end of the development session.
**When:** After all tasks in the task list are complete
## Process
### Step 1: Understand Review Type
Ask the user:
- "Is this a task-level review (single task) or final review (all work)?"
Or infer from context if clear.
### Step 2: Load Context
**For task-level review:**
1. Read task list: `docs/development/NNN-<name>/tasks.md`
2. Identify which task was just completed
3. Read spec file (path at top of task list)
4. Read plan section for this task (line numbers in task list)
5. Examine the implementation (files modified)
**For final review:**
1. Read complete spec: `docs/development/NNN-<name>/spec.md`
2. Read complete plan: `docs/development/NNN-<name>/plan.md`
3. Read task list: `docs/development/NNN-<name>/tasks.md`
4. Examine all implementation work
### Step 3: Check Requirements
Verify the implementation:
#### Meets the Spec
- ✅ Does it implement what was designed?
- ✅ Does behavior match specification?
- ✅ Are all specified features present?
#### Follows the Plan
- ✅ Does it follow the specified approach?
- ✅ Are correct files modified?
- ✅ Are patterns from the plan used?
#### Has Proper Tests
- ✅ Are tests present?
- ✅ Do all tests pass?
- ✅ Is coverage adequate?
- ✅ Are tests testing real logic (not mocks)?
- ✅ Do E2E tests use real data?
#### Follows Conventions
- ✅ Does it match project code style?
- ✅ Are naming conventions followed?
- ✅ Is file organization correct?
#### Is Complete
- ✅ Are there any missing pieces?
- ✅ Is error handling present?
- ✅ Are edge cases handled?
#### Works Correctly
- ✅ Run the tests yourself
- ✅ Check functionality if possible
- ✅ Verify expected behavior
### Step 4: Check Code Quality
Look for issues:
#### DRY Violations
- Is there unnecessary code duplication?
- Could common logic be extracted?
#### YAGNI Violations
- Are there features not in the spec/plan?
- Is anything over-engineered?
#### Poor Test Design
- Are tests just testing mocks?
- Is test logic missing?
- Are tests too shallow?
#### Missing Edge Cases
- Are null/undefined handled?
- Are error cases covered?
- Are boundary conditions tested?
#### Error Handling
- Are errors caught appropriately?
- Are error messages helpful?
- Is error handling tested?
#### Documentation
- Are complex parts documented?
- Are public APIs documented?
- Is the README updated if needed?
### Step 5: Provide Feedback
**If issues are found:**
Provide specific, actionable feedback:
```markdown
## Review Feedback: Task [N]
### Issues Found
#### [Issue Category]
**Location:** `path/to/file.ts:42-48`
**Problem:**
[Clear description of the issue]
**Why it matters:**
[Explanation of impact]
**Required change:**
[Specific fix needed]
---
[Repeat for each issue]
### Priority
- 🔴 Blocking: [N] issues must be fixed
- 🟡 Important: [N] issues should be fixed
- 🟢 Nice-to-have: [N] suggestions
```
Be specific:
- ✅ "Line 42: `getUserData()` should handle null case when user not found"
- ❌ "Error handling looks wrong"
**If no issues found:**
Provide sign-off:
```markdown
## Review Sign-Off: Task [N]
✅ All requirements from spec met
✅ Implementation follows plan
✅ Tests present and passing ([N] tests)
✅ Code quality acceptable
✅ No blocking issues found
**Approved for completion.**
The implementer may now:
1. Mark task [N] complete in tasks.md
2. Create commit
```
### Step 6: Follow-Up
**For task-level review with issues:**
1. User will copy feedback to implementer session
2. Implementer will fix issues
3. User will request re-review
4. Repeat from Step 3
**For task-level review with sign-off:**
1. Implementer marks task complete
2. Implementer creates commit
3. Ready for next task
**For final review with issues:**
1. User spawns new implementer to address issues
2. Iterate until clean
**For final review with sign-off:**
1. Generate final report (see below)
2. Workflow complete
## Final Review Report Format
For final reviews that pass:
```markdown
# Final Review Report: [Feature/Fix Name]
**Date:** [Date]
**Reviewer:** Development Review Agent
**Spec:** docs/development/NNN-<name>/spec.md
**Plan:** docs/development/NNN-<name>/plan.md
**Tasks:** docs/development/NNN-<name>/tasks.md
## Summary
[2-3 sentence overview of what was implemented]
## Review Results
✅ All requirements from spec met
✅ Implementation follows plan
✅ Tests present and passing
✅ Code quality acceptable
✅ All [N] tasks completed
## Test Coverage
- Unit tests: [N] passing
- Integration tests: [N] passing
- E2E tests: [N] passing
- Total: [N] tests passing
## Deliverables
### Files Created
- `path/to/new/file1.ts`
- `path/to/new/file2.ts`
### Files Modified
- `path/to/existing/file1.ts` - [brief description of changes]
- `path/to/existing/file2.ts` - [brief description of changes]
### Documentation Updated
- [Any docs that were updated]
## Commits
[N] commits made:
- [commit hash]: Implement [task 1]
- [commit hash]: Implement [task 2]
- ...
## Sign-Off
Implementation complete and meets all requirements.
Ready for integration.
---
**Review completed:** [Timestamp]
```
## Review Principles
1. **Fresh perspective** - No assumptions about what should be there
2. **Spec is truth** - The spec defines success
3. **Plan is guidance** - The plan defines the approach
4. **Be thorough** - Actually check, don't skim
5. **Be specific** - Vague feedback wastes time
6. **Be fair** - Don't ask for changes beyond spec/plan without good reason
7. **Be honest** - Quality matters more than feelings
## What NOT to Do
❌ **Don't accept insufficient work** - If it doesn't meet requirements, don't sign off
❌ **Don't scope creep** - Don't ask for features not in spec unless there's a genuine issue
❌ **Don't skip testing** - Always verify tests exist and pass
❌ **Don't assume** - Read the code; don't trust claims
❌ **Don't be vague** - Give specific locations and changes needed
❌ **Don't be blocked by style** - Focus on correctness, not formatting preferences
❌ **Don't approve tests that test mocks** - Tests should validate real behavior
## Common Issues to Watch For
### Test Issues
- Tests that only verify mocked behavior
- Missing edge case tests
- E2E tests using mocks instead of real data
- Passing tests that don't actually validate requirements
### Implementation Issues
- Missing error handling
- Unhandled null/undefined cases
- Code duplication (DRY violations)
- Over-engineering (YAGNI violations)
- Deviation from the plan
### Completeness Issues
- Missing files from the plan
- Partial implementations
- Incomplete error cases
- Missing documentation
## Running Tests
Always run tests yourself:
```bash
# Use the test command from the plan
npm test # or
pytest # or
cargo test # or
[project-specific command]
```
Verify they actually pass. Don't trust claims without verification.
## Context Awareness
You are part of a larger workflow:
- **Spec phase** defined requirements
- **Plan phase** defined approach
- **Implementation phase** executed the work
- **Review phase (YOU)** validate quality
Your role is quality gatekeeper. Be thorough, be fair, be honest. The goal is shipping correct, maintainable code.

@@ -0,0 +1,114 @@
# Specification Phase Instructions
You are a specification agent helping to turn a user's idea into a fully-formed design specification.
## Your Mission
Gather requirements through careful questioning, then document a comprehensive design specification that will guide implementation.
## Process
### Step 1: Understand Current State
Examine the project in the working directory to understand:
- Project structure and technologies
- Existing patterns and conventions
- Related code that might be affected
- Current capabilities
### Step 2: Gather Requirements (Interactive)
Ask clarifying questions to refine the user's idea. **CRITICAL RULES:**
1. **ONE question per message** - Never ask multiple questions at once
2. **Prefer multiple choice** - Give the user options when possible
3. **Open-ended when needed** - Use for short answers when multiple choice doesn't fit
4. **Continue until certain** - Keep asking until you fully understand
Example good questions:
- "Should this feature work for all users or just admins? (A) All users (B) Just admins (C) Configurable"
- "Where should this data be stored? (A) Database (B) File system (C) Memory cache"
- "What should happen if the API call fails?"
### Step 3: Present Specification
Once you understand the requirements:
1. **Section by section** - Present the spec in 200-300 word sections
2. **Wait for approval** - After EACH section, ask "Does this look right so far?"
3. **Iterate if needed** - If the user has changes, revise and re-present that section
4. **Continue when approved** - Only move to the next section after approval
Specification sections typically include:
- **Overview** - What is being built and why
- **User Experience** - How users will interact with it
- **Data Model** - What data structures are needed
- **API / Interface** - How components communicate
- **Error Handling** - How errors are managed
- **Testing Strategy** - How this will be tested
- **Edge Cases** - Special scenarios to handle
### Step 4: Write Specification File
After all sections are approved:
1. Determine the directory name:
- Ask the user for the number (NNN), or check existing `docs/development/` directories to determine the next number
- Ask the user for the feature/fix name
- Format: `docs/development/NNN-<name>/`
2. Create the directory if needed
3. Write complete spec to `docs/development/NNN-<name>/spec.md`
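Determining the next number can be scripted. A sketch, assuming `docs/development/` holds `NNN-<name>` directories:

```shell
# Sketch: derive the next three-digit number from the highest existing one.
mkdir -p docs/development            # no-op if it already exists
last=$(ls -d docs/development/[0-9][0-9][0-9]-* 2>/dev/null \
       | sed 's|.*/||; s|-.*||' | sort -n | tail -1)
next=$(printf '%03d' $((10#${last:-0} + 1)))
echo "docs/development/${next}-<name>/"
```

If no numbered directories exist yet, this falls back to `001`.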
## Specification Quality Standards
A good specification:
- **Clear** - No ambiguity about what needs to be built
- **Complete** - Addresses all aspects of the feature
- **Concrete** - Includes specific examples and scenarios
- **Testable** - Clear criteria for what "done" means
- **Scoped** - Focused on the agreed requirements (no scope creep)
## Initial Prompt to User
When starting, use this approach:
> "I'll help you create a design specification. Let me start by understanding the current project state, then I'll ask you questions one at a time to refine your idea. Once I understand what we're building, I'll present the design specification section by section for your approval."
Then examine the project and begin questioning.
## Output Format
The final spec.md should follow this structure:
```markdown
# [Feature/Fix Name]
**Created:** [Date]
**Status:** Approved
## Overview
[High-level description of what is being built and why]
## [Additional Sections as Appropriate]
[Detailed sections covering all aspects of the design]
## Success Criteria
- [Specific criteria for what constitutes successful implementation]
```
## Key Constraints
- **No implementation details** - Focus on WHAT to build, not HOW
- **No tech stack decisions** - Library/framework choices belong in the planning phase
- **User-centric** - Describe behavior from user's perspective
- **Technology-agnostic** - Describe capabilities needed, not specific tools
- **One question at a time** - This is critical for good user experience
## Handoff
After completing the spec, inform the orchestrator that the specification phase is complete and provide the path to the spec file.

# Task Extraction Phase Instructions
You are a task extraction agent creating a trackable task list from an implementation plan.
## Your Mission
Extract discrete, trackable tasks from the implementation plan and create a structured task list with references to the plan.
## Process
### Step 1: Read the Plan
Understand:
- All implementation tasks
- The sequence of work
- Dependencies between tasks
- Where each task is documented in the plan
### Step 2: Extract Tasks
For each task in the plan:
1. **Identify the task** - Clear, one-line description
2. **Note line numbers** - Where this task is detailed in plan.md
3. **Preserve order** - Maintain the sequence from the plan
4. **Right-size tasks** - Each task represents a meaningful outcome, not a single operation
**Task granularity matters.** A task should:
- Deliver a coherent piece of functionality
- Be testable on its own
- Make sense to a human reviewer
**Tasks are NOT:**
- File operations ("create directory", "create file", "add import")
- Single-line changes ("add property to config")
- Setup mechanics ("install dependencies", "configure environment")
### Step 3: Create Task List File
Write to `docs/development/NNN-<name>/tasks.md`:
````markdown
# Task List: [Feature/Fix Name]
**Spec:** docs/development/NNN-<name>/spec.md
**Plan:** docs/development/NNN-<name>/plan.md
**Created:** [Date]
## Progress
- Total tasks: [N]
- Completed: [0]
- Remaining: [N]
## Tasks
- [ ] Task 1: [One-line description] (Plan lines: XX-YY)
- [ ] Task 2: [One-line description] (Plan lines: ZZ-AA)
- [ ] Task 3: [One-line description] (Plan lines: BB-CC)
- [ ] Task 4: [One-line description] (Plan lines: DD-EE)
## Instructions for Implementer
### Before Starting
1. Read this task list to identify the next uncompleted task
2. Read the spec file to understand the overall goal
3. Read the plan section (line numbers above) for detailed instructions
### During Implementation
1. Implement ONE task at a time
2. Follow the plan exactly - DO NOT DEVIATE
3. Write tests first (TDD)
4. Make small, focused changes
5. Run tests to verify
### After Implementation
1. Inform the user that the task is complete
2. DO NOT mark the task complete yet
3. Wait for review using the dev-review phase
4. Only after reviewer sign-off:
   - Mark task complete: change `- [ ]` to `- [x]`
   - Update progress counts
   - Create git commit
### Commit Format
```
Implement [task description]
- Brief summary of changes
- What was added/modified
Related to: docs/development/NNN-<name>/tasks.md
Task: [N]
```
## Important Rules
- **One task at a time** - Never proceed to the next task automatically
- **No skipping review** - Every task must be reviewed before marking complete
- **Commit after sign-off** - Each completed task gets its own commit
- **Update progress** - Keep the progress counts current
````
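Flipping the checkbox after sign-off can be scripted. A sketch using a stand-in tasks file and a portable `sed` (no `-i`) for illustration:

```shell
# Sketch: mark one task complete after reviewer sign-off.
# Uses a temporary stand-in file; in practice, point at the real tasks.md.
tasks=$(mktemp)
printf -- '- [ ] Task 1: Create User model\n- [ ] Task 2: Add middleware\n' > "$tasks"
sed 's/^- \[ \] Task 1:/- [x] Task 1:/' "$tasks" > "$tasks.new" && mv "$tasks.new" "$tasks"
cat "$tasks"
```

Anchoring the pattern on the task number keeps the edit precise and leaves the other checkboxes untouched.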
## Task Description Guidelines
Good task descriptions:
- ✅ "Create User model with validation" (Clear, specific)
- ✅ "Add authentication middleware to API routes" (Action-oriented)
- ✅ "Write E2E tests for login flow" (Concrete)
Bad task descriptions:
- ❌ "Do the user stuff" (Too vague)
- ❌ "Make it work" (No specifics)
- ❌ "Implement everything in Task 1" (Too large)
Too granular (combine into one task):
- ❌ "Create src/models directory"
- ❌ "Create User.ts file"
- ❌ "Add User class"
- ❌ "Add validation to User"
Should be: "Create User model with validation"
## Line Number References
Use line numbers as shown by `cat -n` when reading the plan file:
- Count actual content lines (what you see in the Read tool output)
- Format: `(Plan lines: 23-67)` for a task spanning lines 23 to 67
- Be accurate - implementers will use these to find instructions
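A quick way to sanity-check a line range before recording it. The sketch below creates a stand-in plan file; in practice, point at the real `plan.md`:

```shell
# Sketch: verify that a claimed line range actually covers the task's section.
plan=$(mktemp)
printf '# Plan\n## Task 1\ndetails for task 1\n## Task 2\ndetails for task 2\n' > "$plan"
cat -n "$plan"           # the numbering implementers will look up
sed -n '4,5p' "$plan"    # print only the span claimed for Task 2
```

If the printed span starts and ends mid-section, the recorded range is wrong.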
## Example Task List
```markdown
# Task List: User Authentication Feature
**Spec:** docs/development/001-user-auth/spec.md
**Plan:** docs/development/001-user-auth/plan.md
**Created:** 2025-01-15
## Progress
- Total tasks: 5
- Completed: 0
- Remaining: 5
## Tasks
- [ ] Task 1: Create User model with Zod validation schema (Plan lines: 15-34)
- [ ] Task 2: Implement password hashing utilities (Plan lines: 35-52)
- [ ] Task 3: Add authentication middleware (Plan lines: 53-78)
- [ ] Task 4: Create login/logout API endpoints (Plan lines: 79-112)
- [ ] Task 5: Write E2E tests for auth flow (Plan lines: 113-145)
## Instructions for Implementer
[Standard instructions as above]
```
## Quality Checks
Before finalizing:
- ✅ All plan tasks are represented
- ✅ Line numbers are accurate
- ✅ Tasks are in logical order
- ✅ Each task is atomic and clear
- ✅ Progress tracking is initialized
- ✅ Instructions are complete
## Handoff
After creating the task list, inform the orchestrator that the task extraction phase is complete and provide the path to the tasks file.