Design and implement a new feature with codebase exploration, parallel best-practice research, architectural planning, atomic phase specs, phase-by-phase code review, and automated documentation

You are orchestrating a comprehensive feature development workflow with multi-agent collaboration. Follow this structured process:

Phase 0: Codebase Exploration (Pre-Design Analysis)

IMPORTANT: This phase runs FIRST to understand the codebase context before researching solutions.

Step 0.1: Invoke Code Explorer

Use the Task tool with subagent_type=code-explorer to analyze the existing codebase for features that might connect with: $ARGUMENTS

The code-explorer should:

  • Identify related features and existing implementations that might be affected
  • Find integration points, shared components, and dependencies
  • Trace execution paths of similar features to understand patterns
  • Map architecture layers and design patterns currently in use
  • Document any features that will interface with the new feature
  • Search the /docs folder for relevant documentation that relates to or connects with this feature:
    • Check /docs/backend/systems/ for system documentation
    • Check /docs/backend/guides/ for implementation guides
    • Check /docs/frontend/ for frontend-related docs
    • Check /docs/architecture.md for architectural patterns
    • Note any documentation that should be updated or referenced
  • List all essential files (code AND docs) for understanding related functionality

Step 0.2: Create Exploration Report

After receiving the code-explorer's analysis:

  1. Extract the feature name from "$ARGUMENTS" and create a kebab-case folder name (a bash sketch follows this list)
  2. Create the spec folder structure: .claude/specs/new-{feature-name}/
  3. Use the Write tool to save the exploration report to: .claude/specs/new-{feature-name}/01-exploration.md
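
A minimal bash sketch of steps 1-3, assuming a POSIX shell and that $ARGUMENTS holds the raw feature description (the exact transformation is illustrative, not prescribed):

# Derive a kebab-case feature name and create the spec folder (illustrative only)
feature_name=$(printf '%s' "$ARGUMENTS" \
  | tr '[:upper:]' '[:lower:]' \
  | sed -E 's/[^a-z0-9]+/-/g; s/^-+//; s/-+$//')
spec_dir=".claude/specs/new-${feature_name}"
mkdir -p "$spec_dir"
# The exploration report is then written to "$spec_dir/01-exploration.md"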

The exploration report should include:

  • Related Features: Existing features that connect or interact with the new feature
  • Integration Points: Where the new feature will interface with existing code (file:line references)
  • Shared Components: Reusable components that should be leveraged
  • Architecture Patterns: Design patterns found in similar features
  • Dependencies: Internal and external dependencies to consider
  • Relevant Documentation: Documentation in /docs that relates to this feature (with file paths)
  • Documentation to Update: Existing docs that should be updated when feature is complete
  • Essential Files: Critical files (code AND docs) to understand for this feature domain

Step 0.3: Extract Domain Context

After exploration complete, analyze the feature requirements and codebase context to extract:

  • Feature Domain: Determine the technical domain (e.g., "WebSocket", "ML Pipeline", "Authentication", "Data Processing")
  • Similar Features: Identify existing features in the codebase that are architecturally similar
  • Technology Stack: Note the specific frameworks/libraries used in related features
  • Codebase Patterns: Document the design patterns already in use

Append domain context to 01-exploration.md:

## Domain Analysis

**Feature Domain**: {extracted domain from $ARGUMENTS and codebase}
**Technology Stack**: {relevant stack from exploration}
**Existing Patterns**: {patterns found in similar features}
**Integration Complexity**: {Low/Medium/High based on integration points}

Phase 1: Best Practices Research (Context-Aware)

IMPORTANT: This phase runs AFTER exploration to ensure research is informed by codebase context.

Step 1.1: Launch Context-Aware Parallel Research

Use multiple web search tools IN PARALLEL (single message with 4 tool calls) to research solutions based on the domain and codebase context from exploration:

  1. Language Best Practices (Python + TypeScript for this project):

    • Search: "Python best practices 2025 FastAPI async patterns type hints"
    • Search: "TypeScript React best practices 2025 hooks patterns"
    • Focus: Modern idioms, async/await patterns, type safety, error handling
    • Context: Apply to the specific technology stack found in exploration
  2. SOLID Design Principles (Keep It Simple):

    • Search: "SOLID principles practical examples Python TypeScript 2025 simple implementation"
    • Focus: Single Responsibility, Open/Closed, Liskov Substitution, Interface Segregation, Dependency Inversion
    • Look for: Simple, clear implementations (avoid over-engineering), minimal complexity patterns
    • Emphasize: Simplest solution that follows SOLID, avoid premature abstraction
    • Context: Consider existing patterns from codebase
  3. Feature Design Patterns (Simplicity First):

    • Search: "simple software architecture patterns minimal complexity 2025"
    • Search: "KISS principle feature design patterns avoid over-engineering"
    • Focus: Minimal viable architecture, feature isolation, testability without complexity
    • Emphasize: Start simple, evolve as needed, avoid premature optimization
    • Context: Align with architecture patterns found in exploration
  4. Domain-Specific Patterns (based on domain analysis from exploration):

    • Use the Feature Domain extracted in Step 0.3
    • Search: "{domain} design patterns best practices 2025"
    • Example: "WebSocket design patterns best practices 2025" or "ML pipeline design patterns best practices 2025"
    • Focus: Domain-specific anti-patterns, proven solutions, performance considerations
    • Context: Consider integration points and similar features from exploration

Step 1.2: Synthesize Research Findings with Codebase Context

After ALL 4 parallel searches complete, synthesize the findings WITH codebase context:

Create: .claude/specs/new-{feature-name}/02-research.md

# Best Practices Research: {feature-name}

**Research Date**: {date}
**Feature Domain**: {domain from exploration Step 0.3}
**Codebase Context**: Based on exploration of related features

## Codebase Integration Context

**Related Features**: {from exploration}
**Existing Patterns**: {patterns from codebase}
**Technology Stack**: {from exploration}
**Integration Points**: {from exploration}

## Language Best Practices

### Python
- [Key practices found from search 1]
- [Relevant async patterns]
- [Type hint best practices]
- **How this applies to our codebase**: {specific to existing patterns}

### TypeScript/React
- [Key practices found from search 1]
- [Modern React patterns]
- [TypeScript strict mode practices]
- **How this applies to our codebase**: {specific to existing patterns}

## SOLID Design Principles Application

### Single Responsibility
- [How to apply to this feature]
- [Examples from search]
- **Codebase alignment**: {how this aligns with existing code}

### Open/Closed Principle
- [How to design for extension]
- [Plugin/strategy patterns]
- **Codebase alignment**: {how this aligns with existing code}

### [Other SOLID principles...]

## Feature Design Patterns

### Applicable Patterns
- [Pattern 1: description and when to use]
- [Pattern 2: description and when to use]
- [Anti-patterns to avoid]
- **Codebase consistency**: {ensure consistency with existing patterns}

### Modularity Considerations
- [How to isolate this feature]
- [Dependency injection points]
- [Testing strategies]

## Domain-Specific Patterns

### {Domain} Best Practices
- [Domain-specific patterns from search 4]
- [Performance considerations]
- [Security considerations]
- [Common pitfalls in this domain]
- **Integration with existing {domain} features**: {specific to our codebase}

## Recommendations for {feature-name}

Based on research AND codebase context:
1. [Specific recommendation 1 based on SOLID + language practices + existing patterns]
2. [Specific recommendation 2 based on feature patterns + integration points]
3. [Specific recommendation 3 based on domain patterns + similar features]
4. [Anti-patterns to explicitly avoid based on both research and codebase]
5. [How to maintain consistency with existing features from exploration]

## References
- [Links to key resources found]

Step 1.3: Present Research Summary

Present a brief summary to the user:

★ CONTEXT-AWARE RESEARCH COMPLETE

Codebase Context (from exploration):
✓ {N} related features analyzed
✓ Existing patterns: {list patterns}
✓ Integration points: {N} identified
✓ Domain: {domain}

Web Research (informed by codebase):
✓ Python + TypeScript best practices (2025)
✓ SOLID design principles application
✓ Feature design patterns (modularity, testability)
✓ {Domain}-specific patterns and anti-patterns

Key Recommendations:
- [Top 3 recommendations that integrate with existing codebase]

Full research saved to: .claude/specs/new-{feature-name}/02-research.md

This context-aware research will inform the architecture design in the next phase.

Phase 2: Architecture & Design

Step 2.1: Invoke Code Architect

Use the Task tool with subagent_type=code-architect to design the feature architecture for: $ARGUMENTS

IMPORTANT: Provide the code-architect with:

  • Exploration report: .claude/specs/new-{feature-name}/01-exploration.md (related features, integration points, codebase context)
  • Research report: .claude/specs/new-{feature-name}/02-research.md (best practices, SOLID, patterns - informed by codebase)

The code-architect should design with full awareness of:

  • Codebase context from exploration (existing features, patterns, integration points)
  • Best practices from research (language-specific patterns, SOLID principles)
  • Domain-specific patterns from research (proven solutions, anti-patterns to avoid)
  • Existing patterns and conventions from exploration (ensure consistency)
  • Shared components to reuse (from exploration)
  • Dependencies to consider (from exploration)
  • Context-aware recommendations from research report (apply to architecture while maintaining codebase consistency)

The code-architect should:

  • Design a complete feature architecture that integrates with existing features found by code-explorer
  • Apply research best practices while maintaining codebase consistency
  • Provide component design, data flows, and integration points
  • Specify files to create/modify with detailed change descriptions
  • Break implementation into clear, atomic phases (each phase should be independently reviewable)
  • Ensure each phase has clear acceptance criteria

Step 2.2: Create Architecture Overview

After receiving initial analysis from code-architect:

  1. Use the Write tool to save the overall architecture to: .claude/specs/new-{feature-name}/03-overview.md

The overview should include:

  • Feature Overview: What the feature does and why
  • Architecture Decision: Chosen approach with rationale and trade-offs
  • Component Design: Each component with file paths, responsibilities, dependencies, interfaces
  • Data Flow: Complete flow diagram from entry to output
  • Integration Strategy: How this feature integrates with related features from exploration
  • Build Sequence Overview: List of phases (names only, not detailed specs yet)
  • Quality Requirements: Error handling, state management, testing, performance, security
  • Acceptance Criteria: How to verify the complete feature works correctly

Step 2.3: Design Atomic Phases Incrementally (With User Approval Per Phase)

IMPORTANT: Design phases ONE AT A TIME with user approval after each phase design.

For EACH phase in the Build Sequence:

2.3.1 - Invoke code-architect for Single Phase Design

Use the Task tool with subagent_type=code-architect to design ONLY the current phase.

Provide context:

  • Overview spec: .claude/specs/new-{feature-name}/03-overview.md
  • Exploration report: .claude/specs/new-{feature-name}/01-exploration.md (codebase context)
  • Research report: .claude/specs/new-{feature-name}/02-research.md (best practices)
  • Previously approved phase specs (if any)
  • Instruction: "Design ONLY Phase X: {phase-name}. Create a detailed atomic phase spec with objectives, dependencies, files to modify/create, implementation details, integration points, acceptance criteria, and testing requirements. Apply research best practices while maintaining codebase consistency. Be specific and actionable."

2.3.2 - Create Atomic Phase Spec

After receiving the phase design from code-architect:

Use the Write tool to save the atomic phase spec to: .claude/specs/new-{feature-name}/0{X}-phase-{phase-name}.md (file numbers continue after 03-overview.md, so Phase 1 becomes 04-phase-{name}.md, Phase 2 becomes 05-phase-{name}.md, and so on)

Each atomic phase spec should include:

  • Phase Name: Clear, descriptive name (e.g., "Add WebSocket Message Types")
  • Phase Number: Sequential number (Phase 1, Phase 2, etc.)
  • Objective: What this phase accomplishes
  • Dependencies: Which previous phases must be complete
  • Files to Modify/Create: Specific file paths with change descriptions
  • Implementation Details: Exact changes needed with code examples
  • Integration Points: How this phase connects to existing code
  • Acceptance Criteria: How to verify this phase is complete and correct
  • Testing Requirements: Unit tests or validation steps for this phase

2.3.3 - Present Phase Design for Approval (CHECKPOINT)

Present the phase design to the user:

 PHASE {X} DESIGN: {phase-name}

Objective: [What this phase accomplishes]

Files to Modify/Create:
- [List of files with brief descriptions]

Key Implementation Points:
- [Bullet points of critical changes]

Dependencies:
- [Previous phases this depends on]

Acceptance Criteria:
- [How to verify this phase works]

Spec saved to: .claude/specs/new-{feature-name}/0{X}-phase-{phase-name}.md

Does this phase design look correct? Should I proceed to design the next phase?

STOP and WAIT for user approval.

2.3.4 - Handle User Response

  • If APPROVED: Continue to next phase (repeat from Step 2.3.1)
  • If NEEDS CHANGES: Re-invoke code-architect with feedback, update the phase spec, re-present
  • If ALL PHASES DESIGNED: Proceed to Step 2.4

Step 2.4: Present Complete Architecture for Final Approval

After ALL atomic phase specs are designed and individually approved:

Present the complete architecture summary:

COMPLETE ARCHITECTURE DESIGN READY

Specifications Created:
- Exploration: .claude/specs/new-{feature-name}/01-exploration.md
- Research: .claude/specs/new-{feature-name}/02-research.md
- Overview: .claude/specs/new-{feature-name}/03-overview.md
- Phase 1 spec: 04-phase-{name}.md (approved)
- Phase 2 spec: 05-phase-{name}.md (approved)
- Phase 3 spec: 06-phase-{name}.md (approved)
- ... (all phases listed)

Implementation Plan:
[Summary of all phases in order]

All individual phases have been approved. Ready to proceed to design review phase.

WAIT FOR USER CONFIRMATION BEFORE CONTINUING TO DESIGN REVIEW.


Phase 2.5: Design Review (Architecture Quality Assurance)

Note: This phase reviews the DESIGN of each phase before implementation begins. code-reviewer checks if the architecture is sound, complete, and follows best practices.

Step 2.5.1: Check for Existing Design Review

Before starting design review, check if a design review already exists for this feature:

⚠️ CRITICAL - File Existence Checking:

  • ONLY use bash test: [ -f filepath ]
  • NEVER use: ls, ls -la, ls -l, or any ls variant
  • NEVER use: stat, find, or other file inspection commands
  • Reason: ls returns errors to stderr when file doesn't exist, causing noise

Check for file: .claude/specs/new-{feature-name}/03-design-review.md

# CORRECT METHOD - Use bash test [ -f ... ]
if [ -f .claude/specs/new-{feature-name}/03-design-review.md ]; then
  # Design review EXISTS
else
  # Design review DOES NOT EXIST
fi

If design review EXISTS:

  1. Read the existing review
  2. Note any previous issues or concerns
  3. Prepare to UPDATE the review based on current phase designs
  4. Check if phase designs have changed since last review

If design review DOES NOT EXIST:

  1. Proceed to create a new design review

Step 2.5.2: Invoke code-reviewer for Phase-by-Phase Design Review

IMPORTANT: Review the DESIGN of each phase sequentially, not the implementation.

For EACH atomic phase spec (in order):

Use the Task tool with subagent_type=code-reviewer to review the phase DESIGN.

Provide the code-reviewer with:

  • The phase spec path: .claude/specs/new-{feature-name}/0{X}-phase-{phase-name}.md

  • The overview spec: .claude/specs/new-{feature-name}/03-overview.md

  • The exploration report: .claude/specs/new-{feature-name}/01-exploration.md (codebase context)

  • The research report: .claude/specs/new-{feature-name}/02-research.md (best practices)

  • Previously reviewed phase specs (if any)

  • Existing design review (if updating)

  • Instruction: "Review the DESIGN of this phase specification (not implementation). Check for:

    1. Design Completeness: Are all necessary files, components, and changes specified?
    2. Architecture Alignment: Does this phase align with the overall architecture in 03-overview.md?
    3. Integration Soundness: Are integration points with existing code clearly defined and correct?
    4. Dependency Correctness: Are phase dependencies accurate and complete?
    5. Acceptance Criteria Quality: Are acceptance criteria clear, testable, and sufficient?
    6. Implementation Clarity: Can a developer implement this phase from the spec alone?
    7. Missing Considerations: Any error handling, edge cases, or patterns not addressed?
    8. Pattern Compliance: Does the design follow patterns found in exploration report?
    9. Best Practices Application: Does the design apply best practices from research report?
    10. Codebase Consistency: Does the design maintain consistency with existing codebase patterns?

    Only report high-confidence design issues (≥ 80). Focus on design flaws that would cause implementation problems."

Step 2.5.3: Aggregate Phase Design Reviews

After reviewing ALL phase designs, aggregate the findings:

Create or UPDATE: .claude/specs/new-{feature-name}/03-design-review.md

The design review should include:

# Design Review: {feature-name}

**Review Date**: {current-date}
**Reviewer**: code-reviewer
**Architecture Reviewed**: All phases (Phase 1 through Phase {N})

## Overall Architecture Assessment

[Summary of architecture quality, strengths, and concerns]

## Phase-by-Phase Review

### Phase 1: {phase-name}
**Spec**: 04-phase-{name}.md
**Status**: Approved | Minor Issues | Critical Issues

**Findings**:
- [Issue 1 with confidence score]
- [Issue 2 with confidence score]
- ...

**Recommendations**:
- [Specific actions to address issues]

---

### Phase 2: {phase-name}
[Same structure as Phase 1]

---

[Continue for all phases]

## Cross-Phase Analysis

[Any issues that span multiple phases or affect overall integration]

## High-Confidence Issues Summary (≥ 80)

[List of all issues with confidence ≥ 80 that MUST be addressed]

## Design Quality Score

- **Completeness**: [Score/10]
- **Architecture Alignment**: [Score/10]
- **Integration Clarity**: [Score/10]
- **Implementation Readiness**: [Score/10]

## Recommendation

- **APPROVED**: Design is sound, ready for implementation
- **APPROVED WITH NOTES**: Minor issues noted, but can proceed
- **REVISIONS REQUIRED**: Critical issues must be addressed before implementation

Step 2.5.4: Handle Design Review Feedback (Update Loop)

If high-confidence issues found (≥ 80):

  1. Present issues to user:

     DESIGN REVIEW: Critical Issues Found
    
    The code-reviewer found {N} high-confidence design issues:
    
    Phase {X}: {phase-name}
    - Issue: {description} (Confidence: {score})
    - Impact: {why this matters}
    - Fix: {recommended action}
    
    [List all issues]
    
    These design flaws should be fixed before implementation.
    Should I update the affected phase specs?
    
  2. WAIT FOR USER DECISION:

    • YES, FIX: Re-invoke code-architect to redesign affected phases, update specs, re-run design review (LOOP)
    • NO, PROCEED ANYWAY: Document the decision to proceed with noted issues
    • STOP: User will manually fix the design issues
  3. Update Loop:

    • If user approves fixes, update the phase specs
    • Re-run design review on updated phases
    • Update 03-design-review.md with new findings
    • Repeat until no high-confidence issues OR user decides to proceed

If no high-confidence issues (design approved):

  1. Update 03-design-review.md with approval status
  2. Present design review summary to user:
     DESIGN REVIEW COMPLETE
    
    code-reviewer has reviewed all phase designs.
    
     Review Summary:
    - All phases reviewed: {N} phases
    - Design quality scores: [scores]
    - High-confidence issues: 0
    - Status: APPROVED
    
    Design review saved to: .claude/specs/new-{feature-name}/03-design-review.md
    
    Ready to proceed with implementation!
    

Step 2.5.5: Final Design Approval

STOP and ASK:

ARCHITECTURE & DESIGN PHASE COMPLETE

All Specifications:
- Exploration report
- Research report
- Architecture overview
- Atomic phase specs (all {N} phases individually approved)
- Design review (code-reviewer approved)

The design has been validated and is ready for implementation.
Should we proceed to phased implementation?

WAIT FOR USER APPROVAL BEFORE CONTINUING TO IMPLEMENTATION.


Phase 3: Phased Implementation with Incremental Review (Only after design approval)

Step 3.1: Initialize Phase Tracking

Use the TodoWrite tool to create a detailed task list for ALL phases from the atomic spec files. Each todo should be:

  • Named after the phase (e.g., "Phase 1: Add WebSocket Message Types")
  • Linked to its atomic spec file path (e.g., .claude/specs/new-{feature-name}/0{X}-phase-{name}.md)
  • Status: pending (only first phase starts as in_progress)
  • Ordered by dependencies (foundations first, integrations last)

Step 3.2: Implement Each Phase with Review Gate

For EACH phase in sequence:

3.2.1 - Implementation

  1. Mark the current phase task as in_progress in the todo list
  2. Read the atomic phase spec: .claude/specs/new-{feature-name}/0X-phase-{phase-name}.md
  3. Implement ALL changes specified in that phase spec:
    • Follow the exact file paths and change descriptions
    • Implement according to the integration points documented
    • Use the existing patterns found by code-explorer (from exploration)
    • Apply best practices from research report
    • Follow CLAUDE.md conventions and FIX-FIRST principle
  4. Verify implementation matches the phase's acceptance criteria

3.2.2 - Phase Review Gate (MANDATORY - NEVER SKIP)

CRITICAL ENFORCEMENT: This is a MANDATORY gate. The pattern MUST be:

For each phase:
├── Implement changes (from phase spec)
├── MANDATORY review gate (code-reviewer)
└── Fix issues in loop until approved (NO PROCEEDING WITH ISSUES)

Step A: Check for Existing Phase Implementation Review

⚠️ CRITICAL - File Existence Checking:

  • ONLY use bash test: [ -f filepath ]
  • NEVER use: ls, ls -la, ls -l, or any ls variant
  • NEVER use: stat, find, or other file inspection commands
  • Reason: ls returns errors to stderr when file doesn't exist, causing noise

Check for file: .claude/specs/new-{feature-name}/0X-phase-{phase-name}-review.md

# CORRECT METHOD - Use bash test [ -f ... ]
if [ -f .claude/specs/new-{feature-name}/0X-phase-{phase-name}-review.md ]; then
  # Implementation review EXISTS
else
  # Implementation review DOES NOT EXIST
fi

If implementation review EXISTS for this phase:

  1. Read the existing review
  2. Note any previously identified issues
  3. Check if they were addressed in current implementation
  4. Prepare to UPDATE the review

If implementation review DOES NOT EXIST:

  1. Proceed to create a new implementation review

Step B: Invoke code-reviewer for Phase Implementation

Use the Task tool with subagent_type=code-reviewer to review ONLY the changes made in this phase.

Provide the code-reviewer with:

  • The phase spec path: .claude/specs/new-{feature-name}/0X-phase-{phase-name}.md

  • The design review: .claude/specs/new-{feature-name}/03-design-review.md (section for this phase)

  • The research report: .claude/specs/new-{feature-name}/02-research.md (best practices to check against)

  • List of files modified/created in THIS phase only

  • Existing phase implementation review (if updating)

  • Instruction: "Review this phase IMPLEMENTATION against the phase spec at .claude/specs/new-{feature-name}/0X-phase-{phase-name}.md. Check for:

    1. Spec Compliance: Does implementation match the phase spec exactly?
    2. Acceptance Criteria: Are all phase acceptance criteria fulfilled?
    3. CLAUDE.md Conventions: Adherence to project standards (types, no print(), file size, etc.)
    4. Integration Correctness: Integration with existing code as specified in design
    5. Code Quality: Bugs, security issues, or quality problems (confidence ≥ 80 only)
    6. Error Handling: Proper error handling and type safety
    7. Design Review Issues: Were any issues from design review addressed?
    8. Best Practices Application: Are best practices from research report applied?

    Only report high-confidence issues (≥ 80). If updating an existing review, note what changed."

Step C: Create or Update Phase Implementation Review

Save the review to: .claude/specs/new-{feature-name}/0X-phase-{phase-name}-review.md

# Phase {X} Implementation Review: {phase-name}

**Review Date**: {current-date}
**Reviewer**: code-reviewer
**Spec**: 0X-phase-{phase-name}.md
**Files Modified/Created**:
- [List of files]

## Implementation Assessment

[Summary of implementation quality and compliance with spec]

## Spec Compliance Check

- Objective achieved: [yes/no with details]
- All files created/modified: [yes/no]
- Integration points implemented: [yes/no]
- Acceptance criteria met: [yes/no for each criterion]

## Issues Found (Confidence ≥ 80)

[List of high-confidence issues with confidence scores, or "None" if clean]

### Issue 1: {description}
- **Confidence**: {score}
- **Location**: {file:line}
- **Impact**: {why this matters}
- **Fix**: {specific action to resolve}

[More issues if found]

## CLAUDE.md Compliance

- Type hints: [pass/fail]
- No print() statements: [pass/fail]
- File size < 300 lines: [pass/fail]
- Function size < 30 lines: [pass/fail]
- Logging usage: [pass/fail]

## Review History

[If updating: note changes from previous review]

## Recommendation

- **APPROVED**: Implementation passes all checks
- **REQUIRES FIXES**: {N} high-confidence issues must be addressed
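
Optionally, before invoking the reviewer, a rough bash spot-check of the CLAUDE.md items above can catch obvious violations early (the glob, the thresholds, and the use of git here are assumptions, not the project's official tooling):

# Optional, rough spot-check of CLAUDE.md conventions on this phase's uncommitted Python changes
for f in $(git diff --name-only HEAD -- '*.py'); do
  lines=$(wc -l < "$f")
  [ "$lines" -gt 300 ] && echo "WARN: $f has $lines lines (limit 300)"
  grep -Hn 'print(' "$f"    # flags stray print() calls; CLAUDE.md expects logging instead
done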

3.2.3 - Handle Phase Review Feedback (Update Loop)

If code-reviewer finds high-confidence issues (≥ 80):

  1. DO NOT PROCEED to the next phase
  2. Present issues to user (brief summary)
  3. Fix each issue according to the reviewer's suggestions
  4. Re-run code-reviewer on the fixed code for this phase (LOOP)
  5. Update the phase implementation review (0X-phase-{phase-name}-review.md)
  6. Repeat until no high-confidence issues remain

If code-reviewer approves (no issues ≥ 80 confidence):

  1. Update the phase implementation review with APPROVED status
  2. Mark the phase task as completed in the todo list
  3. Proceed to the next phase (return to Step 3.2.1)

3.2.4 - Phase Completion Verification

After completing each phase, briefly confirm to the user:

  • " Phase X completed: {phase-name}"
  • " Code review passed (no issues ≥ 80 confidence)"
  • "→ Moving to Phase X+1: {next-phase-name}"

CRITICAL:

  • Follow the existing codebase patterns found by code-explorer
  • Do NOT create workarounds or "simpler" alternatives
  • Do NOT skip to the next phase if code review finds issues
  • Each phase must pass review before proceeding

Phase 4: Final Integration Review & Verification

Note: Individual phases have already been reviewed incrementally. This phase verifies the complete feature integration.

Step 4.1: Final Comprehensive Review

After ALL implementation phases are complete, use the Task tool with subagent_type=code-reviewer to perform a final comprehensive review of the COMPLETE feature.

Provide the code-reviewer with:

  • The overview spec path: .claude/specs/new-{feature-name}/03-overview.md
  • The research report: .claude/specs/new-{feature-name}/02-research.md (best practices)
  • List of ALL modified/created files across all phases (one way to gather this list is sketched after this block)
  • Instruction: "Perform a comprehensive review of the complete feature implementation against the architecture overview at .claude/specs/new-{feature-name}/03-overview.md. Focus on:
    1. Feature-level integration and data flow correctness
    2. Overall architecture alignment with the design
    3. Complete acceptance criteria fulfillment
    4. Cross-phase integration issues or inconsistencies
    5. End-to-end functionality verification
    6. Best practices application from research report
    7. Any high-confidence issues (≥ 80) missed in phase reviews"
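
A minimal way to gather that complete file list, assuming all phases were implemented on a feature branch cut from main (the branch names and the three-dot range are assumptions about the local workflow):

# List every file touched across all implementation phases (assumes a feature branch off main)
git diff --name-only main...HEAD | sort -u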

Step 4.2: Handle Final Review Feedback

If code-reviewer identifies high-confidence issues (≥ 80):

  1. Create new todos for each issue
  2. Fix each issue according to the reviewer's suggestions
  3. Re-run code-reviewer on the complete feature
  4. Repeat until no high-confidence issues remain

Step 4.3: Final Verification Summary

Present a comprehensive summary to the user:

FEATURE IMPLEMENTATION COMPLETE: {feature-name}

Specifications:
   - Exploration: .claude/specs/new-{feature-name}/01-exploration.md
   - Research: .claude/specs/new-{feature-name}/02-research.md
   - Overview: .claude/specs/new-{feature-name}/03-overview.md
   - Phase specs: .claude/specs/new-{feature-name}/0X-phase-*.md

Phase Reviews: All {N} phases passed incremental review
Integration Review: Complete feature review passed
Architecture Compliance: Matches design blueprint
Code Quality: No high-confidence issues (≥ 80)
Acceptance Criteria: All criteria met

📄 Files Modified/Created:
   - [List all files]

Next Steps:
   - Run tests: [suggest test commands if applicable]
   - Manual testing: [suggest verification steps]
   - Integration testing: [suggest integration points to verify]
   - Documentation: [suggest docs to update if needed]

Confirm the feature is ready for testing and integration.


Phase 5: Documentation Consolidation (Only after USER confirms feature works)

IMPORTANT: This phase ONLY runs after the USER (not Claude) confirms the feature is working correctly in production/testing.

Step 5.1: User Confirmation Checkpoint

After Phase 4 completion, present the following to the user:

 FEATURE IMPLEMENTATION COMPLETE

The feature has been implemented and passed all code reviews.

 TESTING REQUIRED

Before we proceed to documentation, please test the feature to ensure it works correctly:

Testing Steps:
[List relevant testing steps from acceptance criteria]

Once you've tested and confirmed the feature is working correctly, let me know and I'll update the documentation.

 Have you tested the feature and confirmed it's working?

STOP and WAIT for user confirmation.

If user says NO or needs fixes:

  • Return to Phase 3 for fixes
  • Do NOT proceed to documentation

If user confirms YES (feature is working):

  • Proceed to Step 5.2

Step 5.2: Invoke code-consolidator for Documentation

Use the Task tool with subagent_type=code-consolidator to consolidate or create documentation for the feature.

Provide the code-consolidator with:

  • The exploration report: .claude/specs/new-{feature-name}/01-exploration.md (especially "Relevant Documentation" and "Documentation to Update" sections)

  • The research report: .claude/specs/new-{feature-name}/02-research.md (to document applied best practices)

  • The overview spec: .claude/specs/new-{feature-name}/03-overview.md

  • List of all files created/modified

  • Feature description: $ARGUMENTS

  • Instruction: "Analyze the feature implementation and existing documentation. Determine if this feature requires:

    1. New documentation (if new feature/system)
    2. Documentation update (if extending existing feature)
    3. Documentation consolidation (if documentation is scattered)

    Based on the exploration report's 'Relevant Documentation' section, identify which docs in /docs should be updated.

    Follow the project's documentation standards:

    • Use lowercase filenames (e.g., readme.md)
    • Follow existing patterns from similar docs
    • Update /docs/readme.md and appropriate index files
    • Eliminate duplication
    • Cross-reference related documentation

    Specific actions:

    • If NEW feature: Create documentation in appropriate location (/docs/backend/systems/, /docs/frontend/, etc.)
    • If REFACTOR: Update existing documentation identified in exploration
    • If EXTENDING: Add to existing docs and update examples
    • Always update index files and main readme

    Document the following:

    • Feature overview and purpose
    • Architecture and component design
    • Integration points with existing systems
    • Usage examples with code
    • Configuration options (if any)
    • Testing approach
    • Links to relevant existing docs"

Step 5.3: Review Documentation Changes

After code-consolidator completes:

  1. Verify documentation changes follow project standards:

    • Lowercase filenames
    • Proper integration with existing docs
    • Index files updated
    • No duplication
    • Clear and concise
  2. Present documentation summary to user:

 DOCUMENTATION UPDATED

The code-consolidator has completed documentation for this feature.

Documentation Changes:
- [List of docs created/updated with file paths]

Updated Index Files:
- [List of index files updated]

Cross-References Added:
- [List of cross-references to related docs]

Summary:
[Brief description of documentation changes]

Documentation follows project standards:
✓ Lowercase filenames
✓ Integrated with existing documentation structure
✓ Zero duplication
✓ Cross-referenced with related docs
✓ Updated index files

Would you like me to make any adjustments to the documentation?

Step 5.4: Final Completion Summary

Present the complete feature summary including documentation:

FEATURE FULLY COMPLETE: {feature-name}

Implementation Specifications:
   - Exploration: .claude/specs/new-{feature-name}/01-exploration.md
   - Research: .claude/specs/new-{feature-name}/02-research.md
   - Overview: .claude/specs/new-{feature-name}/03-overview.md
   - Phase specs: .claude/specs/new-{feature-name}/0X-phase-*.md
   - Design review: .claude/specs/new-{feature-name}/03-design-review.md
   - Implementation reviews: .claude/specs/new-{feature-name}/0X-phase-*-review.md

Completion Checklist:
   ✓ Exploration completed (related features identified)
   ✓ Research completed (context-aware best practices)
   ✓ Architecture designed (all phases individually approved)
   ✓ Design review passed (no high-confidence issues)
   ✓ Implementation completed (all phases reviewed)
   ✓ Integration review passed (complete feature verified)
   ✓ User testing passed (feature confirmed working)
   ✓ Documentation updated (consolidated and integrated)

Code Files Modified/Created:
   [List all code files]

Documentation Files Created/Updated:
   [List all documentation files]

Feature Status: COMPLETE & DOCUMENTED

The feature is fully implemented, tested, reviewed, and documented according to project standards.

Important Guidelines

Exploration Phase (Phase 0):

  • RUNS FIRST to understand codebase context before research
  • Use Task tool with subagent_type=code-explorer (never call agents directly with "@agent-name")
  • code-explorer identifies related features, integration points, and dependencies
  • code-explorer MUST search /docs folder for relevant documentation
  • Extract domain context from feature requirements and codebase
  • Exploration report includes: related docs, docs to update, essential files (code + docs), domain analysis
  • Save exploration report to .claude/specs/new-{feature-name}/01-exploration.md
  • This codebase context informs the research phase

Research Phase (Phase 1):

  • RUNS AFTER exploration to ensure research is context-aware
  • Execute 4 parallel web searches based on domain extracted from exploration
  • Research: Language best practices, SOLID principles, feature patterns, domain-specific patterns
  • Synthesize findings WITH codebase context from exploration
  • Save to .claude/specs/new-{feature-name}/02-research.md
  • Research recommendations consider both best practices AND existing codebase patterns

Architecture Phase (Phase 2):

  • Use Task tool with subagent_type=code-architect
  • Provide code-architect with BOTH exploration report (context) AND research report (best practices)
  • Create a folder structure: .claude/specs/new-{feature-name}/
  • Save overview to 03-overview.md
  • Design atomic phases ONE AT A TIME with user approval after each phase design
  • After each phase design: STOP and ASK user if design is correct before next phase
  • Create atomic spec files for EACH phase: 04-phase-{name}.md, 05-phase-{name}.md, etc.
  • Each atomic phase spec must be independently implementable and reviewable
  • If user requests changes: Re-invoke code-architect, update spec, re-present
  • ALL phases must be individually approved before proceeding to design review

Design Review Phase (Phase 2.5):

  • Check for existing design review using [ -f 03-design-review.md ] (NEVER use ls)
  • Use Task tool with subagent_type=code-reviewer to review phase DESIGNS (not implementation)
  • Review each atomic phase spec for completeness, alignment, integration soundness, AND best practices application
  • Create or UPDATE 03-design-review.md with phase-by-phase findings
  • If high-confidence design issues found (≥ 80): Present to user, fix specs, re-review (LOOP)
  • Update loop continues until design is approved or user decides to proceed
  • Final design approval required before implementation starts

Implementation Phase (Phase 3):

  • Use TodoWrite to track ALL phases from the atomic specs
  • Implement ONE phase at a time, following its atomic spec exactly
  • Apply best practices from research report while maintaining codebase consistency
  • After each phase: MANDATORY review gate using code-reviewer
  • Do NOT proceed to next phase until current phase passes review (no issues ≥ 80)
  • One phase in_progress at a time, mark completed only after review passes
  • Follow existing codebase patterns found by code-explorer (FIX-FIRST PRINCIPLE)
  • No deviations from specs without user approval

Phase Review Gates (Critical):

  • Check for existing phase implementation review using [ -f 0X-phase-{name}-review.md ] (NEVER use ls)
  • After implementing each phase, invoke code-reviewer for THAT phase only
  • Provide the phase spec path, design review, research report, and list of files changed in that phase
  • Create or UPDATE phase implementation review with findings
  • If existing review: Note what changed and if previous issues were addressed
  • Verify best practices from research report were applied
  • Fix all high-confidence issues (≥ 80) before proceeding
  • Re-run review and update review file until the phase is clean (LOOP)
  • Brief confirmation to user after each phase passes

Final Review Phase (Phase 4):

  • Use Task tool with subagent_type=code-reviewer for comprehensive feature review
  • Focus on integration, data flow, cross-phase consistency, AND best practices application
  • Fix any issues found and re-review until clean
  • Present final verification summary with all specs and files
  • STOP and ASK user to test the feature before proceeding to documentation

Documentation Phase (Phase 5):

  • ONLY runs after USER confirms feature works (not Claude's assessment)
  • STOP and WAIT for user confirmation of feature working
  • If user says feature NOT working: Return to Phase 3 for fixes
  • If user confirms feature IS working: Proceed to documentation
  • Use Task tool with subagent_type=code-consolidator
  • Provide exploration report (relevant docs + docs to update sections) AND research report (best practices applied)
  • code-consolidator determines: New docs / Update existing / Consolidate scattered
  • Creates/updates documentation following project standards (lowercase, no duplication)
  • Updates index files and cross-references
  • Distinguishes between new features (create docs) and refactors (update docs)
  • Present documentation changes summary to user

Tool Usage:

  • Task tool with subagent_type=code-explorer, code-architect, code-reviewer, or code-consolidator
  • TodoWrite for phase tracking (all phases listed, one in_progress at a time)
  • Write to create spec folder and atomic spec files
  • Read to read atomic phase specs before implementing each phase
  • Edit for code changes (prefer over Write for existing files)

Critical Rules:

  • NEVER skip phase design approval checkpoints (must approve each phase design)
  • NEVER skip phase implementation review gates
  • NEVER proceed to next phase with unresolved high-confidence issues (≥ 80)
  • NEVER call agents with text mentions like "@agent-name"
  • NEVER skip checking for existing reviews (always check with [ -f ... ] and update if present)
  • NEVER use ls or ls -la to check file existence (use [ -f ... ] only)
  • NEVER skip documentation phase (Phase 5) for features that work
  • NEVER proceed to documentation without USER confirmation feature works
  • ALWAYS use Task tool with subagent_type parameter
  • ALWAYS design atomic phases ONE AT A TIME with user approval
  • ALWAYS create atomic phase specs (one file per phase)
  • ALWAYS run design review (Phase 2.5) before implementation starts
  • ALWAYS check for existing review files using [ -f filepath ] and UPDATE them (not replace)
  • ALWAYS use bash test [ -f ... ] for file existence checks (NEVER ls, ls -la, or stat)
  • ALWAYS run code-reviewer after each phase implementation before moving forward
  • ALWAYS update review files in loop until approved or user decides to proceed
  • ALWAYS wait for user approval at each checkpoint
  • ALWAYS have code-explorer search /docs folder for relevant documentation
  • ALWAYS ask user to test feature before documentation phase
  • ALWAYS use code-consolidator after user confirms feature works

Execution Flow Summary

Phase 0: Exploration (UNDERSTAND FIRST) ⭐
├── code-explorer analyzes codebase
├── SEARCHES /docs folder for relevant documentation
├── Extracts domain context (feature domain, tech stack, patterns)
└── Output: 01-exploration.md (codebase context + domain analysis)

Phase 1: Research (CONTEXT-AWARE SOLUTION SEARCH) ⭐
├── Uses domain context FROM exploration
├── 4 parallel web searches (single message):
│   ├── Language best practices (Python + TypeScript)
│   ├── SOLID design principles
│   ├── Feature design patterns
│   └── Domain-specific patterns (based on domain FROM exploration)
├── Synthesize findings WITH codebase context
└── Output: 02-research.md (context-aware best practices, SOLID, patterns)

Phase 2: Architecture (Incremental Design)
├── code-architect (with exploration + research context) → 03-overview.md
├── FOR EACH PHASE:
│   ├── code-architect designs Phase X → 0X-phase-{name}.md
│   ├── STOP → Ask user: "Phase X design correct?"
│   ├── User approves → Next phase OR User requests changes → Redesign → Re-present
│   └── Repeat until all phases designed and approved
└── Present complete architecture → Final approval checkpoint

Phase 2.5: Design Review (Quality Assurance)
├── Check for existing: 03-design-review.md
├── FOR EACH PHASE:
│   └── code-reviewer reviews phase DESIGN (not implementation)
│       Checks: Best practices application + codebase consistency
├── Aggregate findings → Create/UPDATE 03-design-review.md
├── IF high-confidence issues (≥ 80):
│   ├── Present issues → User decides: Fix/Proceed/Stop
│   ├── IF Fix: Update phase specs → Re-review (LOOP)
│   └── Repeat until approved or user proceeds anyway
└── STOP → Ask user: "Design validated. Proceed to implementation?"

Phase 3: Implementation (per phase with review gates)
├── FOR EACH PHASE:
│   ├── Read phase spec: 0X-phase-{name}.md
│   ├── Implement phase (applying research best practices + existing patterns)
│   ├── Check for existing: 0X-phase-{name}-review.md
│   ├── code-reviewer reviews implementation (phase review gate)
│   │   Checks: Spec compliance + best practices + codebase consistency
│   ├── Create/UPDATE 0X-phase-{name}-review.md
│   ├── IF high-confidence issues (≥ 80):
│   │   ├── Fix issues → Re-review (LOOP)
│   │   └── Update review file until approved
│   └── Mark complete → Next phase
└── All phases complete

Phase 4: Final Integration Review
├── code-reviewer (comprehensive feature review)
│   Checks: Integration + best practices application
├── Fix issues if any
├── Present completion summary
└── STOP → Ask user: "Test the feature. Does it work?"

Phase 5: Documentation (ONLY after user confirms feature works)
├── WAIT for user confirmation: "Feature is working"
├── IF user says NOT working → Return to Phase 3 for fixes
├── IF user confirms IS working:
│   ├── code-consolidator analyzes feature + exploration + research reports
│   ├── Determines: New docs / Update existing / Consolidate
│   ├── Creates/updates docs in /docs (following project standards)
│   ├── Updates index files and cross-references
│   └── Present documentation changes summary
└── Final completion: Feature fully implemented AND documented

⭐ KEY IMPROVEMENT: Exploration (Phase 0) runs BEFORE Research (Phase 1)
   This ensures web research is informed by actual codebase context!

Complete Example: WebSocket Trajectory Visualization

This example demonstrates the full workflow with all phases and agents.

Phase 0: Exploration

code-explorer finds:
- WebSocket system (app/api/websocket/detection.py)
- Tracking system (providers/tracking/tracker.py)
- Frontend hooks (frontend/src/hooks/UseWebSocket.ts)
- DOCS: /docs/backend/systems/tracking.md
- DOCS: /docs/frontend/guides/websocket-detection.md

Saved to: .claude/specs/new-trajectory-viz/01-exploration.md

Phase 1: Best Practices Research (Parallel, Context-Aware)

[4 parallel searches execute in a single message]

Search 1: "Python FastAPI WebSocket best practices async 2025"
Search 2: "SOLID principles practical Python TypeScript"
Search 3: "real-time feature design patterns pub/sub"
Search 4: "WebSocket architecture patterns scalability 2025"

RESEARCH COMPLETE

Key Findings:
- WebSocket lifecycle management (connect, disconnect, heartbeat)
- Pub/sub pattern for broadcasting (SOLID: Single Responsibility)
- Message type discrimination (type field for routing)
- Async/await patterns for WebSocket handlers
- React hooks patterns for WebSocket state management

Saved to: .claude/specs/new-trajectory-viz/02-research.md

Phase 2: Architecture (Incremental)

code-architect (with exploration + research context):

Overview → 03-overview.md (references SOLID principles from research)

Phase 1 Design: Add TrajectoryMessage
  Uses: Type discrimination pattern (from research)
  Follows: Existing DetectionMessage pattern (from exploration)
  → User approves

Phase 2 Design: TrajectoryBroadcaster Service
  Uses: Pub/sub pattern (from research)
  Applies: Single Responsibility (SOLID from research)
  → User approves

[Phases 3-5 continue...]

Phase 2.5: Design Review

code-reviewer reviews designs:
- All phases follow SOLID principles
- Applies WebSocket best practices from research
- No high-confidence issues

Saved to: 03-design-review.md

Phase 3: Implementation

For each phase:
- Implement following research recommendations
- Review gate catches issues
- Update review files

Example: Phase 2 review finds missing heartbeat (from research best practices)
→ Fixed → Re-reviewed → Approved

Phase 4: Final Review

Feature complete
Ask user: "Test trajectory visualization. Does it work?"

Phase 5: Documentation

User: "Yes, trajectories display correctly!"

code-consolidator:
- Updates /docs/backend/systems/tracking.md (trajectory section)
- Creates /docs/frontend/guides/trajectory-canvas.md
- Updates index files
- Cross-references

 Documentation complete!

Spec Folder Structure

After completion, .claude/specs/new-{feature-name}/ contains:

.claude/specs/new-trajectory-viz/
├── 01-exploration.md                   # Codebase exploration
├── 02-research.md                      # Best practices research
├── 03-overview.md                      # Architecture overview
├── 03-design-review.md                 # Design review
│
├── 04-phase-message-type.md            # Phase 1 spec
├── 05-phase-broadcaster.md             # Phase 2 spec
├── 06-phase-endpoint.md                # Phase 3 spec
├── 07-phase-frontend-hook.md           # Phase 4 spec
├── 08-phase-visualization.md           # Phase 5 spec
│
├── 04-phase-message-type-review.md     # Phase 1 implementation review
├── 05-phase-broadcaster-review.md      # Phase 2 implementation review
├── 06-phase-endpoint-review.md         # Phase 3 implementation review
├── 07-phase-frontend-hook-review.md    # Phase 4 implementation review
└── 08-phase-visualization-review.md    # Phase 5 implementation review

Total: 14 files providing a complete development audit trail with best practices research


Key Benefits Summary

Phase 0: Exploration

  • Finds related code AND documentation
  • Identifies integration points
  • Lists docs to update

Phase 1: Research

  • Researches best practices BEFORE designing (informed by exploration)
  • Applies SOLID principles from research
  • Uses domain-specific patterns
  • Avoids known anti-patterns
  • 4 parallel searches for efficiency

Phase 2: Architecture

  • Uses research recommendations
  • Incremental phase design with approvals
  • Each phase independently approved

Phase 2.5: Design Review

  • Verifies SOLID compliance
  • Checks best practices application
  • Update loop until approved

Phase 3: Implementation

  • Follows research patterns
  • Phase-by-phase review gates
  • Update loop per phase

Phase 4: Final Review

  • Integration verification
  • User testing confirmation

Phase 5: Documentation

  • Automatic after user confirms
  • New features vs refactors
  • Complete integration

Agent Collaboration

Agent              Phase  Input                      Output              Enhancement
code-explorer      0      Feature description        01-exploration.md   Maps codebase + docs context
(web search)       1      Feature + exploration      02-research.md      Context-aware best practices
code-architect     2      Research + exploration     Phase specs         Applies SOLID
code-reviewer      2.5    Phase designs + research   Design review       Verifies patterns
(implementation)   3      Phase specs + research     Code                Follows practices
code-reviewer      3      Implementation + spec      Phase reviews       Checks patterns
code-reviewer      4      Complete feature           Final review        Integration check
code-consolidator  5      Feature + exploration      Documentation       Auto-docs

Time/Quality Impact

Without Context-Aware Workflow:

Research (generic) → Design → Implement → Find pattern violations → Refactor
Time: ~14 hours
Quality: Pattern inconsistencies, missed best practices, codebase conflicts

With Context-Aware Workflow (Explore → Research):

Explore (understand) → Research (context-aware) → Design → Implement correctly
Time: ~11 hours (exploration adds 20min, research 30min, saves 3+ hours rework)
Quality: Consistent patterns, SOLID principles, domain best practices, codebase harmony

Savings: ~20% faster + higher quality + educational + codebase consistency

Why This Matters:

  • Generic research may suggest patterns that conflict with existing code
  • Context-aware research suggests patterns that integrate seamlessly
  • Exploration first ensures research is relevant to actual codebase context

Quick Reference

Usage

/feature:new <feature description>
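
For example (hypothetical feature description):

/feature:new Add real-time trajectory visualization to the tracking dashboard over WebSocket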

Phases

  1. Explore - Code + docs + domain context FIRST
  2. Research (context-aware) - Best practices, SOLID, patterns SECOND
  3. Design - Incremental with approvals
  4. Review Design - Verify patterns + consistency
  5. Implement - Phase-by-phase with review gates
  6. Review Final - Integration check + practices
  7. Document - After user confirms works

Checkpoints

  • Each phase design (user approval)
  • Complete architecture (user approval)
  • Design review (quality check)
  • Each phase implementation (review gate)
  • Feature complete (integration review)
  • Feature works (user testing)
  • Documentation (auto-consolidated)

Output

  • Exploration report (codebase context + domain) FIRST
  • Research report (context-aware best practices) SECOND
  • Architecture specs (atomic phases)
  • Design review (pattern verification + consistency)
  • Implementation reviews (per phase)
  • Documentation (auto-generated)

Begin with Phase 0: Codebase Exploration for the feature: $ARGUMENTS