---
name: feature:bugfix
description: Systematic bug investigation and fixing workflow for existing features. Uses code-reviewer for deep analysis, code-consolidator for context, parallel web research for solutions, and atomic phase implementation with review gates.
---
# Bug Fix Command: Systematic Bug Resolution Workflow
This command provides a structured approach to investigating and fixing bugs in existing features through deep code analysis, solution research, and phased implementation with review gates.
## Command Structure
The bugfix workflow follows these phases:
- Phase 0: Bug Investigation (Parallel Analysis)
- Phase 1: Bug Clarification (User Confirmation)
- Phase 2: Solution Research (4 Parallel Web Searches)
- Phase 3: Solution Design (User Approval Required)
- Phase 3.5: Design Review (code-reviewer)
- Phase 4: Phased Implementation (Review Gates)
- Phase 5: Regression Testing (User Verification)
- Phase 6: Documentation Update (After Fix Confirmed)
## Core Principle
⚠️ CRITICAL: Read the code carefully before designing new solutions - the existing implementation often already handles edge cases. Understanding what's already there prevents duplicate solutions and unnecessary complexity.
## Spec File Structure
All specifications are stored in:
```
.claude/specs/bugfix-{feature-name}/
├── 00-investigation.md            # Bug investigation findings
├── 01-bug-report.md               # User-confirmed bug details
├── 02-research.md                 # Solution research findings
├── 03-solution-design.md          # Fix strategy and phases
├── 03-design-review.md            # Design review feedback
├── phase-{X}-{name}.md            # Atomic fix phase specs
└── phase-{X}-{name}-review.md     # Phase implementation reviews
```
## Phase Execution
### Phase 0: Bug Investigation (Parallel Analysis)
CRITICAL: This phase uses PARALLEL execution for efficiency.
Execute in a SINGLE message with TWO Task tool calls:
**Task 1: Deep Code Analysis**
Task tool with subagent_type: "code-reviewer"
Prompt: "Analyze the {feature-name} feature to identify potential bugs.
ANALYSIS REQUIREMENTS:
1. Code Design Understanding:
- Break down code into atomic components
- Understand intended design and architecture
- Map dependencies and data flow
- Identify critical execution paths
2. Line-by-Line Investigation:
- Analyze each function for logic errors
- Check error handling and edge cases
- Verify type safety and null checks
- Look for race conditions and timing issues
- Check resource management (memory, files, connections)
3. Common Bug Patterns:
- Off-by-one errors
- Null/undefined references
- Type mismatches
- Incorrect conditionals
- Missing error handling
- Resource leaks
- Concurrency issues
4. Potential Bug Locations:
- Identify suspicious code sections
- Flag code smells and anti-patterns
- Note areas with high complexity
- Highlight recent changes (if in git history)
Return DETAILED findings with:
- File paths and line numbers
- Code snippets of suspicious sections
- Explanation of potential issues
- Severity rating (Critical/High/Medium/Low)"
**Task 2: Documentation Context (Parallel)**
Task tool with subagent_type: "code-consolidator"
Prompt: "Find and read all documentation related to {feature-name}.
SEARCH LOCATIONS:
- /docs/ (all subdirectories)
- Feature-specific README files
- API documentation
- Architecture documentation
- Related system documentation
EXTRACT:
1. Feature purpose and expected behavior
2. Known limitations or caveats
3. Configuration requirements
4. Dependencies and prerequisites
5. Common issues or troubleshooting notes
6. Recent changes or migration notes
Return:
- List of relevant documentation files
- Summary of expected behavior
- Any documented known issues
- Context that might explain the bug"
After BOTH agents complete:
Create spec directory and investigation report:
```bash
mkdir -p .claude/specs/bugfix-{feature-name}
```
Write 00-investigation.md:
# Bug Investigation: {feature-name}
## Investigation Date
{current-date}
## Code Analysis Findings
{code-reviewer findings}
### Potential Bug Locations
{list of suspicious code sections with file:line references}
### Code Smells Detected
{anti-patterns and design issues found}
### Critical Execution Paths
{map of important code flows}
## Documentation Context
{code-consolidator findings}
### Expected Behavior
{what the feature should do according to docs}
### Known Issues
{any documented problems or limitations}
### Dependencies
{external dependencies that might affect behavior}
## Investigation Summary
{consolidated understanding of the code and potential issues}
### Phase 1: Bug Clarification (User Confirmation)
Present investigation findings to user:
=== Bug Investigation Complete ===
I've analyzed the {feature-name} feature and found {N} potential issues:
{summary of findings from 00-investigation.md}
CRITICAL QUESTIONS:
1. What specific bugs are you experiencing?
- What action triggers the bug?
- What is the expected behavior?
- What actually happens?
- Error messages or symptoms?
2. When did this start?
- After a specific change?
- Intermittent or consistent?
- Environment-specific (dev/prod)?
3. Reproduction steps:
- Minimal steps to reproduce
- Required conditions or data
- Success rate (always/sometimes/rarely)
4. Impact assessment:
- Who is affected?
- Severity level (blocker/critical/major/minor)
- Workarounds available?
Please provide details for each atomic component affected.
STOP and wait for user response.
After receiving user input, create 01-bug-report.md:
# Bug Report: {feature-name}
## Report Date
{current-date}
## User-Reported Issues
### Bug 1: {bug-title}
**Trigger**: {what causes the bug}
**Expected**: {correct behavior}
**Actual**: {observed behavior}
**Error Messages**: {any errors}
**Severity**: {Critical/High/Medium/Low}
**Frequency**: {Always/Often/Sometimes/Rarely}
### Bug 2: {bug-title}
{repeat structure}
## Reproduction Steps
1. {step 1}
2. {step 2}
3. {observe bug}
## Environment
- Environment: {dev/staging/prod}
- OS: {operating system}
- Dependencies: {relevant versions}
- Configuration: {relevant settings}
## Impact Assessment
- Users Affected: {number or description}
- Business Impact: {how this affects operations}
- Workarounds: {temporary solutions if any}
## Root Cause Hypothesis
{initial theory based on investigation}
## Investigation Mapping
{link bug reports to code sections from 00-investigation.md}
### Phase 2: Solution Research (4 Parallel Web Searches)
CRITICAL: Execute all 4 web searches in a SINGLE message for efficiency.
Execute these searches in parallel:
**Search 1: Language-Specific Bug Patterns (Simple & SOLID)**
WebSearch or mcp__exa__web_search_exa
Query: "{language} {bug-category} simple fixes SOLID principles {year}"
Example: "Python async race condition simple fixes SOLID principles 2025"
EXTRACT:
- Simplest fix patterns that follow SOLID
- Minimal change approaches
- Anti-patterns to avoid (over-engineering)
- Language-specific gotchas
**Search 2: Framework/Library-Specific Issues**
WebSearch or mcp__exa__web_search_exa
Query: "{framework} {feature-type} known issues and solutions {year}"
Example: "FastAPI WebSocket connection issues and solutions 2025"
EXTRACT:
- Framework-specific bug patterns
- Official issue tracker results
- Recommended fixes from maintainers
- Version-specific considerations
**Search 3: Similar Bug Reports and Solutions**
WebSearch or mcp__exa__web_search_exa
Query: "{specific-error-message} OR {bug-symptoms} solutions"
Example: "connection reset by peer WebSocket solutions"
EXTRACT:
- Similar reported issues
- Community-validated solutions
- Root cause explanations
- Success stories and outcomes
**Search 4: Simple Prevention Patterns (KISS Principle)**
WebSearch or mcp__exa__web_search_exa
Query: "{bug-category} prevention simple patterns KISS principle {language} {year}"
Example: "race condition prevention simple patterns KISS principle Python 2025"
EXTRACT:
- Simplest preventive patterns (avoid complexity)
- Minimal testing strategies that catch this bug
- Basic monitoring without over-engineering
- SOLID principles that prevent recurrence
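To make the placeholder substitution concrete, here is a small illustrative sketch that assembles the four query strings from the example values used above (the variable names are only for illustration):
```bash
# Illustrative only: building the four search queries from the bug context
LANGUAGE="Python"
FRAMEWORK="FastAPI"
FEATURE_TYPE="WebSocket"
BUG_CATEGORY="race condition"
ERROR_MESSAGE="connection reset by peer"
YEAR="2025"

QUERY_1="$LANGUAGE $BUG_CATEGORY simple fixes SOLID principles $YEAR"
QUERY_2="$FRAMEWORK $FEATURE_TYPE known issues and solutions $YEAR"
QUERY_3="$ERROR_MESSAGE solutions"
QUERY_4="$BUG_CATEGORY prevention simple patterns KISS principle $LANGUAGE $YEAR"

printf '%s\n' "$QUERY_1" "$QUERY_2" "$QUERY_3" "$QUERY_4"
```
All four queries are then issued in the same message, one WebSearch (or mcp__exa__web_search_exa) call each.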
After ALL searches complete, consolidate findings into 02-research.md:
# Solution Research: {feature-name}
## Research Date
{current-date}
## Bug Category
{type of bug: race condition, memory leak, logic error, etc.}
## Research Findings
### Language-Specific Patterns
{findings from search 1}
#### Common Causes
- {cause 1}
- {cause 2}
#### Proven Fixes
- {fix pattern 1}
- {fix pattern 2}
#### Anti-Patterns
- {what to avoid}
### Framework/Library Issues
{findings from search 2}
#### Known Issues
- {issue 1 with link}
- {issue 2 with link}
#### Official Recommendations
- {recommendation 1}
- {recommendation 2}
### Community Solutions
{findings from search 3}
#### Similar Cases
- {case 1 with link}
- {case 2 with link}
#### Validated Solutions
- {solution 1}
- {solution 2}
#### Success Stories
- {outcome 1}
- {outcome 2}
### Prevention Best Practices
{findings from search 4}
#### Preventive Patterns
- {pattern 1}
- {pattern 2}
#### Testing Strategies
- {strategy 1}
- {strategy 2}
#### Monitoring Approaches
- {approach 1}
- {approach 2}
## Solution Strategy Recommendation
{consolidated recommendation based on all research}
### Primary Approach
{main fix strategy}
### Alternative Approaches
{backup options if primary fails}
### Testing Approach
{how to verify the fix}
### Prevention Measures
{how to prevent recurrence}
### Phase 3: Solution Design (User Approval Required)
Analyze all gathered information and design the fix:
- Review 00-investigation.md (code analysis)
- Review 01-bug-report.md (user-confirmed bugs)
- Review 02-research.md (solution research)
Design the fix strategy:
Design Principles (SOLID & Simple):
- Minimal change scope (fix the bug, don't refactor unnecessarily)
- Apply SOLID only where it simplifies the fix (avoid over-engineering)
- Preserve existing functionality (no regressions)
- Address root cause with simplest solution (not just symptoms)
- Include verification steps
- Consider edge cases
- Plan for testing
- KISS principle: Keep It Simple, Stupid
Create Atomic Fix Phases: Break the fix into independently implementable phases:
- Each phase should be testable
- Each phase should have clear success criteria
- Phases should be ordered by dependency
- Each phase should be small enough to review thoroughly
Create 03-solution-design.md:
# Solution Design: {feature-name}
## Design Date
{current-date}
## Bug Summary
{concise summary of the bug from 01-bug-report.md}
## Root Cause Analysis
{detailed explanation of why the bug occurs}
### Code Location
- File: {file-path}
- Function: {function-name}
- Lines: {line-range}
### Root Cause
{technical explanation of the underlying issue}
### Why This Happened
{contributing factors: design flaw, edge case, race condition, etc.}
## Solution Strategy
### Approach
{chosen fix approach from research}
### Rationale
{why this approach over alternatives}
### Risks and Mitigations
- Risk: {potential issue}
- Mitigation: {how to prevent}
### Scope of Changes
- Files to modify: {list}
- New files needed: {list or "None"}
- Tests to update: {list}
## Atomic Fix Phases
### Overview
Total phases: {N}
Estimated complexity: {Low/Medium/High}
### Phase 1: {phase-name}
**Objective**: {what this phase fixes}
**Changes**: {specific modifications}
**Success Criteria**: {how to verify this phase works}
**Dependencies**: {none or list previous phases}
**Files**: {files modified in this phase}
### Phase 2: {phase-name}
{repeat structure}
## Testing Strategy
### Unit Tests
- {test 1}
- {test 2}
### Integration Tests
- {test 1}
- {test 2}
### Regression Tests
- {verify feature 1 still works}
- {verify feature 2 still works}
### Manual Testing
1. {step 1}
2. {step 2}
3. {verify bug is fixed}
## Rollback Plan
{how to undo changes if fix causes issues}
## Prevention Measures
{changes to prevent recurrence: monitoring, tests, validation, etc.}
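For the Rollback Plan section, a short git-based sketch is often enough; the commit reference below is a placeholder and assumes the fix lands as a dedicated commit:
```bash
# Illustrative rollback: undo the fix without rewriting shared history
git log --oneline -n 5           # identify the commit(s) that introduced the fix
git revert --no-edit <fix-sha>   # replace <fix-sha> with the actual fix commit hash
```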
Present design to user:
=== Solution Design Complete ===
BUG: {bug summary}
ROOT CAUSE: {explanation}
PROPOSED FIX:
{solution strategy summary}
ATOMIC FIX PHASES ({N} phases):
1. {phase 1 name}: {objective}
2. {phase 2 name}: {objective}
...
TESTING PLAN:
{testing strategy summary}
RISKS:
{identified risks and mitigations}
QUESTION: Does this solution align with your desired outcome?
Options:
- "Yes, proceed with implementation"
- "No, I want a different approach" (explain what you'd prefer)
- "Modify the design" (specify changes needed)
STOP and wait for user approval.
If user says NO or requests changes:
- Ask for specific feedback
- Update 03-solution-design.md
- Present revised design
- Loop until approved
### Phase 3.5: Design Review (Before Implementation)
CRITICAL: Review the DESIGN before implementing.
Execute:
Task tool with subagent_type: "code-reviewer"
Prompt: "Review the fix design in .claude/specs/bugfix-{feature-name}/03-solution-design.md
REVIEW CRITERIA:
1. Root Cause Analysis:
- Is the root cause correctly identified?
- Are all contributing factors considered?
- Could there be deeper underlying issues?
2. Solution Approach:
- Does the fix address root cause (not just symptoms)?
- Is the approach consistent with codebase patterns?
- Are there simpler/better alternatives?
- Does it follow SOLID principles?
3. Scope and Impact:
- Is the change scope minimal?
- Are all affected areas identified?
- Could this introduce regressions?
- Are dependencies properly considered?
4. Atomic Phase Design:
- Are phases truly atomic and independent?
- Is the phase order correct?
- Are success criteria clear and testable?
- Can each phase be reviewed independently?
5. Testing Strategy:
- Is testing comprehensive?
- Are edge cases covered?
- Is regression testing adequate?
- Are manual test steps clear?
6. Risk Assessment:
- Are all risks identified?
- Are mitigations adequate?
- Is rollback plan feasible?
7. Prevention Measures:
- Will prevention measures work?
- Are they maintainable?
- Do they cover future similar bugs?
Return DETAILED review with:
- Design strengths
- Potential issues or concerns
- Recommendations for improvement
- Approval status (Approved/Changes Required)"
Check if review file exists:
**⚠️ CRITICAL - File Existence Checking:**
- ✅ **ONLY use bash test**: `[ -f filepath ]`
- ❌ **NEVER use**: `ls`, `ls -la`, `ls -l`, or any `ls` variant
- ❌ **NEVER use**: `stat`, `find`, or other file inspection commands
- **Reason**: `ls` returns errors to stderr when file doesn't exist, causing noise
```bash
# CORRECT METHOD - Use bash test [ -f ... ]
if [ -f .claude/specs/bugfix-{feature-name}/03-design-review.md ]; then
  : # File exists, UPDATE it
else
  : # File doesn't exist, CREATE it
fi
```
If file exists, UPDATE with review history:
# Design Review: {feature-name}
## Review History
### Review {N}: {date}
{new review content}
---
### Review {N-1}: {previous date}
{previous review content}
---
{earlier reviews...}
If file doesn't exist, CREATE new review:
# Design Review: {feature-name}
## Review Date
{current-date}
## Design Analysis
{code-reviewer findings}
### Strengths
- {strength 1}
- {strength 2}
### Concerns
- {concern 1}
- {concern 2}
### Recommendations
- {recommendation 1}
- {recommendation 2}
## Review Decision
{Approved / Changes Required}
### Required Changes (if any)
- {change 1}
- {change 2}
## Reviewer Notes
{additional observations}
If review requires changes:
- Update 03-solution-design.md based on feedback
- Run code-reviewer again
- UPDATE review file with new review entry
- Loop until design is approved
Proceed only when design review shows "Approved".
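A minimal sketch of that gate check, assuming the newest review entry sits at the top of the file and records its decision literally (the agent normally reads the file directly; this only expresses the intent in shell form):
```bash
# Illustrative gate check on the design review decision
REVIEW=".claude/specs/bugfix-{feature-name}/03-design-review.md"
DECISION=$(grep -m1 -E "Approved|Changes Required" "$REVIEW")
case "$DECISION" in
  *"Changes Required"*) echo "Revise 03-solution-design.md and re-run the design review" ;;
  *"Approved"*)         echo "Design approved - proceed to Phase 4" ;;
  *)                    echo "No review decision recorded yet" ;;
esac
```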
### Phase 4: Phased Implementation (With MANDATORY Review Gates)
CRITICAL ENFORCEMENT: This phase follows the MANDATORY pattern for EACH atomic phase:
```
For each atomic fix phase:
├── Implement changes (from phase spec)
├── MANDATORY review gate (code-reviewer) - NEVER SKIP
└── Fix issues in loop until approved (NO PROCEEDING WITH ISSUES)
```
IMPORTANT: You MUST NOT proceed to the next phase until current phase is APPROVED.
For EACH atomic fix phase in 03-solution-design.md:
**Step 1: Read Phase Spec**
Read .claude/specs/bugfix-{feature-name}/03-solution-design.md
Extract Phase {N} details:
- Objective
- Changes required
- Success criteria
- Files to modify
**Step 2: Create Atomic Phase Spec**
Create detailed specification for this phase:
# Phase {N}: {phase-name}
## Objective
{what this phase fixes}
## Bug Context
{relevant bug information for this phase}
## Current Code Analysis
{analyze current code that needs fixing}
### File: {file-path}
```{language}
{current code snippet with line numbers}
```
### Issue
{explain what's wrong with current code}
## Proposed Fix
### File: {file-path}
```{language}
{new code with fix applied}
```
### Changes Explained
- {change 1 explanation}
- {change 2 explanation}
### Why This Fix Works
{technical explanation}
## Success Criteria
- {criterion 1}
- {criterion 2}
## Testing
{how to test this phase}
## Dependencies
{any dependencies on previous phases}
## Rollback
{how to undo this phase if needed}
Write to: `.claude/specs/bugfix-{feature-name}/phase-{X}-{phase-name}.md`
**Step 3: Implement Phase**
Implement the changes specified in the phase:
- Use Edit tool for existing files
- Use Write tool only if new files are required (rare for bug fixes)
- Make changes exactly as specified
- Preserve all existing functionality
- Add comments explaining the fix if non-obvious
**Step 4: Review Gate (MANDATORY)**
STOP after implementation and execute review:
**First, check if review file already exists:**
**⚠️ CRITICAL - File Existence Checking:**
- ✅ **ONLY use bash test**: `[ -f filepath ]`
- ❌ **NEVER use**: `ls`, `ls -la`, `ls -l`, or any `ls` variant
- ❌ **NEVER use**: `stat`, `find`, or other file inspection commands
- **Reason**: `ls` returns errors to stderr when file doesn't exist, causing noise
```bash
# CORRECT METHOD - Use bash test [ -f ... ]
if [ -f .claude/specs/bugfix-{feature-name}/phase-{X}-{phase-name}-review.md ]; then
# Review file exists, we'll UPDATE it
REVIEW_EXISTS=true
else
# No review file, we'll CREATE it
REVIEW_EXISTS=false
fi
```
Execute code review:
Task tool with subagent_type: "code-reviewer"
Prompt: "Review Phase {N} implementation for bugfix-{feature-name}.
PHASE SPEC: .claude/specs/bugfix-{feature-name}/phase-{X}-{phase-name}.md
REVIEW FOCUS:
1. Fix Correctness:
- Does implementation match phase spec?
- Is the bug actually fixed?
- Are edge cases handled?
- Could this fix introduce new bugs?
2. Code Quality:
- Is the code clean and readable?
- Are variable names clear?
- Is error handling adequate?
- Are types correct and complete?
3. Minimal Change Principle:
- Is change scope minimal?
- Are only necessary lines modified?
- Is existing code preserved?
- No unnecessary refactoring?
4. Testing:
- Can this phase be tested independently?
- Are success criteria met?
- Are edge cases covered?
5. Integration:
- Does this integrate with previous phases?
- Does this prepare for next phases?
- Are dependencies respected?
6. Regression Risk:
- Could this break existing functionality?
- Are all code paths still valid?
- Is backward compatibility maintained?
Return DETAILED review with:
- What was done correctly
- Issues found (with severity)
- Required fixes (if any)
- Approval status (Approved/Changes Required)"
If REVIEW_EXISTS is true, UPDATE the review file:
# Phase {N} Review: {phase-name}
## Review History
### Review {M}: {current-date}
{new review content from code-reviewer}
#### Issues Found
- {issue 1}
- {issue 2}
#### Required Fixes
- {fix 1}
- {fix 2}
#### Approval Status
{Approved / Changes Required}
---
### Review {M-1}: {previous-date}
{previous review content}
---
{earlier reviews...}
If REVIEW_EXISTS is false, CREATE new review file:
# Phase {N} Review: {phase-name}
## Review Date
{current-date}
## Phase Objective
{from phase spec}
## Implementation Review
{code-reviewer findings}
### Correctly Implemented
- {item 1}
- {item 2}
### Issues Found
- {issue 1 with severity}
- {issue 2 with severity}
### Required Fixes
- {fix 1}
- {fix 2}
## Approval Status
{Approved / Changes Required}
## Reviewer Notes
{additional observations}
**Step 5: Fix Issues Loop**
If review shows "Changes Required":
- Read the review feedback
- Make necessary corrections
- Run code-reviewer again
- UPDATE review file with new review entry
- Repeat until "Approved"
**Step 6: Proceed to Next Phase**
Only when phase review is "Approved":
- Confirm phase completion to user
- Move to next phase
- Repeat Steps 1-6 for each remaining phase
### Phase 5: Regression Testing (User Verification)
After ALL phases implemented and reviewed:
Present completion status to user:
=== Bug Fix Implementation Complete ===
FIXED BUGS:
- {bug 1}: {what was fixed}
- {bug 2}: {what was fixed}
PHASES COMPLETED:
- Phase 1: {phase-name} - Approved
- Phase 2: {phase-name} - Approved
...
FILES MODIFIED:
- {file 1}
- {file 2}
TESTING REQUIRED:
1. Bug Verification:
{steps to verify each bug is fixed}
2. Regression Testing:
{steps to verify existing functionality still works}
3. Edge Cases:
{specific edge cases to test}
CRITICAL: Please test the fix thoroughly.
TEST CHECKLIST:
- [ ] Original bug is resolved
- [ ] No new bugs introduced
- [ ] Existing functionality works
- [ ] Edge cases handled correctly
- [ ] Performance not degraded
STOP - Test the feature. Report results:
- "Fix works, all tests passed"
- "Bug still exists" (describe what's still broken)
- "New issue appeared" (describe new problem)
- "Partial fix" (describe what works and what doesn't)
STOP and wait for user testing feedback.
If user reports issues:
- Return to Phase 3 (Solution Design)
- Analyze why fix didn't work
- Update solution design
- Re-implement with review gates
- Test again
Loop until user confirms: "Fix works, all tests passed"
### Phase 6: Documentation Update (After Fix Confirmed)
CRITICAL: Only execute this phase AFTER user confirms fix works.
Execute:
Task tool with subagent_type: "code-consolidator"
Prompt: "Update documentation for {feature-name} to reflect bug fix.
CONTEXT:
- Investigation: .claude/specs/bugfix-{feature-name}/00-investigation.md
- Bug Report: .claude/specs/bugfix-{feature-name}/01-bug-report.md
- Solution: .claude/specs/bugfix-{feature-name}/03-solution-design.md
UPDATE TASKS:
1. Find Existing Documentation:
- Feature documentation
- API documentation
- Troubleshooting guides
- Known issues sections
2. Documentation Updates:
- Update feature description if behavior changed
- Add troubleshooting section for this bug type
- Document any new configuration options
- Update examples if they were affected
- Add known limitations if discovered
3. Create/Update Troubleshooting Section:
- Bug symptoms
- Diagnosis steps
- Solution
- Prevention measures
4. Changelog Entry:
- Bug description
- Fix summary
- Affected versions
- Breaking changes (if any)
REQUIREMENTS:
- Follow existing documentation patterns
- Use project's markdown standards
- Add cross-references to related docs
- Update navigation/index files
- Keep changelog format consistent
Return:
- List of documentation files updated
- Summary of changes made
- Any new sections added"
After code-consolidator completes:
Create bug fix summary document:
# Bug Fix Summary: {feature-name}
## Fix Date
{current-date}
## Bug Summary
{brief description of bug}
## Root Cause
{technical explanation}
## Solution Implemented
{description of fix}
## Files Modified
- {file 1}
- {file 2}
## Testing Performed
{summary of testing done}
## Documentation Updated
{list of docs updated}
## Prevention Measures
{measures added to prevent recurrence}
## Lessons Learned
{insights from this bug fix}
Write to: `.claude/specs/bugfix-{feature-name}/fix-summary.md`
Present completion to user:
=== Bug Fix Complete ===
BUG: {bug summary}
STATUS: Fixed and Verified
DATE: {current-date}
CHANGES:
- {change 1}
- {change 2}
DOCUMENTATION UPDATED:
- {doc 1}
- {doc 2}
PREVENTION MEASURES:
- {measure 1}
- {measure 2}
All specifications saved in: .claude/specs/bugfix-{feature-name}/
Next steps (optional):
- Run full test suite
- Deploy to staging for further testing
- Monitor for related issues
- Review prevention measures effectiveness
## Critical Rules
### Review Update Protocol
- ALWAYS check if the review file exists before creating it, using `[ -f filepath ]`
- ❌ NEVER use `ls` or `ls -la` to check file existence (causes errors)
- ✅ ONLY use bash test: `[ -f filepath ]`
- If exists: UPDATE with new review entry at top, preserve history
- If not exists: CREATE new review file
- NEVER replace or delete previous review entries
- Maintain chronological review history
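A minimal sketch of this update pattern, assuming the new review entry has already been written to a temporary file (in practice the Read and Write tools perform the rewrite; the temp file name is illustrative):
```bash
# Illustrative history-preserving update of a review file
REVIEW=".claude/specs/bugfix-{feature-name}/phase-{X}-{phase-name}-review.md"
NEW_ENTRY="/tmp/new-review-entry.md"   # assumed to contain the latest review section

if [ -f "$REVIEW" ]; then
  # UPDATE: new entry goes on top, earlier reviews are kept below a separator
  { cat "$NEW_ENTRY"; echo "---"; cat "$REVIEW"; } > "$REVIEW.tmp" && mv "$REVIEW.tmp" "$REVIEW"
else
  # CREATE: the first review becomes the initial file content
  cp "$NEW_ENTRY" "$REVIEW"
fi
```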
### User Approval Gates
- Phase 1: User must confirm bug details
- Phase 3: User must approve solution design
- Phase 5: User must verify fix works
- NEVER skip these approval gates
### Parallel Execution
- Phase 0: 2 agents in parallel (code-reviewer + code-consolidator)
- Phase 2: 4 web searches in parallel
- Use a SINGLE message with multiple Task/WebSearch calls
### Minimal Change Principle
- Fix the bug, don't refactor unnecessarily
- Preserve existing code structure
- Maintain backward compatibility
- No scope creep
### Review Gates
- MANDATORY after each phase implementation
- Must achieve "Approved" status before proceeding
- Fix issues in loop until approved
- Preserve review history
## Error Handling
If any phase fails:
- Document the failure in appropriate spec file
- Analyze why it failed
- Ask user for guidance
- Update approach if needed
- Resume from failed phase
If bug persists after fix:
- Return to Phase 1 (Bug Clarification)
- Gather more details from user
- Update investigation
- Research alternative solutions
- Design new fix approach
## Usage Example
```bash
# User invokes command
/feature:bugfix camera-connection
# System creates spec directory and runs Phase 0
# After investigation, system asks for bug details (Phase 1)
# User provides bug information
# System researches solutions (Phase 2)
# System designs fix and asks approval (Phase 3)
# User approves
# System implements with review gates (Phase 4)
# System asks user to test (Phase 5)
# User confirms fix works
# System updates documentation (Phase 6)
# Complete
```
## Notes
- This command focuses on FIXING bugs, not refactoring features
- Use /feature:refactor for broader improvements
- Each phase is documented for future reference
- Review gates ensure quality and prevent regressions
- User involvement ensures fixes solve actual problems
- Documentation updates help prevent future issues