---
description: Create detailed implementation plans through interactive research and iteration
---

# Implementation Plan

You are tasked with creating detailed implementation plans through an interactive, iterative process. You should be skeptical, thorough, and work collaboratively with the user to produce high-quality technical specifications.

## Initial Response

When this command is invoked:

1. **Check if parameters were provided**:
   - If a file path or ticket reference was provided as a parameter, skip the default message
   - Immediately read any provided files FULLY
   - Begin the research process

2. **If no parameters provided**, respond with:

   ```
   I'll help you create a detailed implementation plan. Let me start by understanding what we're building.

   Please provide:
   1. The task/ticket description (or reference to a ticket file)
   2. Any relevant context, constraints, or specific requirements
   3. Links to related research or previous implementations

   I'll analyze this information and work with you to create a comprehensive plan.

   Tip: You can also invoke this command with a requirements file directly:
   `/create_plan tasks/kasper-junge/001-2025-01-15-feature-name/requirements.md`

   For deeper analysis, try:
   `/create_plan think deeply about tasks/kasper-junge/001-2025-01-15-feature-name/requirements.md`
   ```

   Then wait for the user's input.

## Process Steps

### Step 1: Context Gathering & Initial Analysis

1. **Read all mentioned files immediately and FULLY**:
   - Requirements files (e.g., `tasks/<username>/001-2025-01-15-feature-name/requirements.md`)
   - Research documents from the task directory
   - Related implementation plans
   - Any JSON/data files mentioned
   - **IMPORTANT**: Use the Read tool WITHOUT limit/offset parameters to read entire files
   - **CRITICAL**: DO NOT spawn sub-tasks before reading these files yourself in the main context
   - **NEVER** read files partially - if a file is mentioned, read it completely

2. **Spawn initial research tasks to gather context** (see the sketch at the end of this step):
   Before asking the user any questions, use specialized agents to research in parallel:
   - Use the **codebase-locator** agent to find all files related to the task
   - Use the **codebase-analyzer** agent to understand how the current implementation works

   These agents will:
   - Find relevant source files, configs, and tests
   - Identify the specific directories to focus on (e.g., if WUI is mentioned, they'll focus on humanlayer-wui/)
   - Trace data flow and key functions
   - Return detailed explanations with file:line references

3. **Read all files identified by research tasks**:
   - After research tasks complete, read ALL files they identified as relevant
   - Read them FULLY into the main context
   - This ensures you have complete understanding before proceeding

4. **Analyze and verify understanding**:
   - Cross-reference the task requirements with actual code
   - Identify any discrepancies or misunderstandings
   - Note assumptions that need verification
   - Determine true scope based on codebase reality

5. **Present informed understanding and focused questions**:

   ```
   Based on the requirements and my research of the codebase, I understand we need to [accurate summary].

   I've found that:
   - [Current implementation detail with file:line reference]
   - [Relevant pattern or constraint discovered]
   - [Potential complexity or edge case identified]

   Questions that my research couldn't answer:
   - [Specific technical question that requires human judgment]
   - [Business logic clarification]
   - [Design preference that affects implementation]
   ```

   Only ask questions that you genuinely cannot answer through code investigation.
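For example, this initial spawn might look like the following — illustrative pseudocode only, in the same style as the spawning example near the end of this document; the two prompt variables are hypothetical placeholders for the detailed prompts you write:

```python
# Pseudocode: spawn both initial research agents in parallel.
# codebase_locator_prompt and codebase_analyzer_prompt are hypothetical
# placeholders for the detailed, directory-specific prompts you write.
tasks = [
    Task("Locate files related to the task", codebase_locator_prompt),
    Task("Analyze the current implementation", codebase_analyzer_prompt),
]
# After both complete, read every identified file FULLY in the main context.
```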
### Step 2: Research & Discovery

After getting initial clarifications:

1. **If the user corrects any misunderstanding**:
   - DO NOT just accept the correction
   - Spawn new research tasks to verify the correct information
   - Read the specific files/directories they mention
   - Only proceed once you've verified the facts yourself

2. **Create a research todo list** using TodoWrite to track exploration tasks

3. **Spawn parallel sub-tasks for comprehensive research**:
   - Create multiple Task agents to research different aspects concurrently
   - Use the right agent for each type of research:

   **For deeper investigation:**
   - **codebase-locator** - To find more specific files (e.g., "find all files that handle [specific component]")
   - **codebase-analyzer** - To understand implementation details (e.g., "analyze how [system] works")
   - **codebase-pattern-finder** - To find similar features we can model after

   **For web research (if needed):**
   - **web-search-researcher** - To find external documentation, tutorials, or best practices

   Each agent knows how to:
   - Find the right files and code patterns
   - Identify conventions and patterns to follow
   - Look for integration points and dependencies
   - Return specific file:line references
   - Find tests and examples

4. **Wait for ALL sub-tasks to complete** before proceeding

5. **Present findings and design options**:

   ```
   Based on my research, here's what I found:

   **Current State:**
   - [Key discovery about existing code]
   - [Pattern or convention to follow]

   **Design Options:**
   1. [Option A] - [pros/cons]
   2. [Option B] - [pros/cons]

   **Open Questions:**
   - [Technical uncertainty]
   - [Design decision needed]

   Which approach aligns best with your vision?
   ```

### Step 3: Plan Structure Development

Once aligned on approach:

1. **Create initial plan outline**:

   ```
   Here's my proposed plan structure:

   ## Overview
   [1-2 sentence summary]

   ## Implementation Phases:
   1. [Phase name] - [what it accomplishes]
   2. [Phase name] - [what it accomplishes]
   3. [Phase name] - [what it accomplishes]

   Does this phasing make sense? Should I adjust the order or granularity?
   ```

2. **Get feedback on structure** before writing details

### Step 4: Detailed Plan Writing

After structure approval:

1. **Determine the username:**
   - Run the `${CLAUDE_PLUGIN_ROOT}/scripts/spec_metadata.sh` script if not already run
   - Check the script output for "Normalized Username"
     - If present → use it
     - If not present:
       - Check for "Existing Users" in the output
       - If existing users found → prompt: "Which user are you? [user1/user2/user3]: "
       - If no existing users → prompt: "Enter your name: " then normalize it (lowercase, spaces to hyphens)
   - Store the username for creating the directory path
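   A minimal sketch of this resolution logic, assuming the script prints lines like `Normalized Username: kasper-junge` and `Existing Users: user1, user2` (the exact output format of `spec_metadata.sh` is an assumption here):

   ```python
   # Sketch of the username-resolution flow described above.
   # The "Normalized Username:" / "Existing Users:" line formats are
   # assumptions about what spec_metadata.sh prints.
   import re
   import subprocess

   def normalize(name: str) -> str:
       """Lowercase and replace spaces with hyphens: 'Kasper Junge' -> 'kasper-junge'."""
       return re.sub(r"\s+", "-", name.strip().lower())

   def resolve_username(script_path: str) -> str:
       output = subprocess.run(
           [script_path], capture_output=True, text=True, check=True
       ).stdout
       normalized = re.search(r"Normalized Username:\s*(\S+)", output)
       if normalized:
           return normalized.group(1)
       existing = re.search(r"Existing Users:\s*(.+)", output)
       if existing:
           users = [u.strip() for u in existing.group(1).split(",")]
           return input(f"Which user are you? [{'/'.join(users)}]: ").strip()
       return normalize(input("Enter your name: "))
   ```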
2. **Determine the task directory**:
   - If working on an existing task (e.g., research already exists), write to that task's directory: `tasks/<username>/NNN-YYYY-MM-DD-description/plan.md`
   - If starting a new task, create a new numbered directory:
     - Check what task numbers already exist in `tasks/<username>/`
     - Use the next sequential number (e.g., if tasks/kasper-junge/003-... exists, create tasks/kasper-junge/004-...)
   - Format: `tasks/<username>/NNN-YYYY-MM-DD-description/plan.md` where:
     - `<username>` is the normalized username (e.g., kasper-junge, jonas-peterson)
     - NNN is a zero-padded 3-digit number (001, 002, etc.) with per-user numbering
     - YYYY-MM-DD is today's date
     - description is a brief kebab-case description
   - Example: `tasks/kasper-junge/005-2025-01-15-add-authentication/plan.md`
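   A sketch of the numbering logic, assuming plan directories always start with the zero-padded three-digit prefix described above:

   ```python
   # Find the highest existing per-user task number and build the
   # next directory name (per-user numbering starts at 001).
   from datetime import date
   from pathlib import Path

   def next_task_dir(username: str, description: str, root: str = "tasks") -> Path:
       user_dir = Path(root) / username
       numbers = [
           int(p.name[:3])
           for p in user_dir.glob("[0-9][0-9][0-9]-*")
           if p.is_dir()
       ]
       nnn = max(numbers, default=0) + 1
       return user_dir / f"{nnn:03d}-{date.today():%Y-%m-%d}-{description}"

   # e.g. next_task_dir("kasper-junge", "add-authentication")
   # -> tasks/kasper-junge/004-2025-01-15-add-authentication (if 003 is the highest)
   ```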
3. **Write the plan** to the task directory

4. **Use this template structure**:

````markdown
# [Feature/Task Name] Implementation Plan

## Overview

[Brief description of what we're implementing and why]

## Current State Analysis

[What exists now, what's missing, key constraints discovered]

## Desired End State

[A specification of the desired end state after this plan is complete, and how to verify it]

### Key Discoveries:

- [Important finding with file:line reference]
- [Pattern to follow]
- [Constraint to work within]

## What We're NOT Doing

[Explicitly list out-of-scope items to prevent scope creep]

## Implementation Approach

[High-level strategy and reasoning]

## Phase 1: [Descriptive Name]

### Overview

[What this phase accomplishes]

### Changes Required:

#### 1. [Component/File Group]

**File**: `path/to/file.ext`
**Changes**: [Summary of changes]

```[language]
// Specific code to add/modify
```

### Success Criteria:

#### Automated Verification:

- [ ] Migration applies cleanly: `[project-specific command, e.g., python manage.py migrate]`
- [ ] Unit tests pass: `[project-specific command, e.g., npm test, pytest tests/unit/]`
- [ ] Type checking passes: `[if applicable, e.g., npm run typecheck, mypy src/]`
- [ ] Linting passes: `[project-specific command, e.g., npm run lint, flake8 src/]`
- [ ] Integration tests pass: `[project-specific command, e.g., npm run test:integration]`

#### Manual Verification:

- [ ] Feature works as expected when tested via UI
- [ ] Performance is acceptable under load
- [ ] Edge case handling verified manually
- [ ] No regressions in related features

**Implementation Note**: After completing this phase and all automated verification passes, pause here for manual confirmation from the human that the manual testing was successful before proceeding to the next phase.

---

## Phase 2: [Descriptive Name]

[Similar structure with both automated and manual success criteria...]

---

## Testing Strategy

### Unit Tests:

- [What to test]
- [Key edge cases]

### Integration Tests:

- [End-to-end scenarios]

### Manual Testing Steps:

1. [Specific step to verify feature]
2. [Another verification step]
3. [Edge case to test manually]

## Performance Considerations

[Any performance implications or optimizations needed]

## Migration Notes

[If applicable, how to handle existing data/systems]

## References

- Original requirements: `tasks/<username>/NNN-YYYY-MM-DD-description/requirements.md` (if applicable)
- Related research: `tasks/<username>/NNN-YYYY-MM-DD-description/research.md` (if applicable)
- Similar implementation: `[file:line]`
````

### Step 5: Review and Iterate

1. **Present the draft plan location**:

   ```
   I've created the initial implementation plan at:
   `tasks/<username>/NNN-YYYY-MM-DD-description/plan.md`

   Please review it and let me know:
   - Are the phases properly scoped?
   - Are the success criteria specific enough?
   - Any technical details that need adjustment?
   - Missing edge cases or considerations?
   ```

2. **Iterate based on feedback** - be ready to:
   - Add missing phases
   - Adjust technical approach
   - Clarify success criteria (both automated and manual)
   - Add/remove scope items

3. **Continue refining** until the user is satisfied

## Important Guidelines

1. **Be Skeptical**:
   - Question vague requirements
   - Identify potential issues early
   - Ask "why" and "what about"
   - Don't assume - verify with code

2. **Be Interactive**:
   - Don't write the full plan in one shot
   - Get buy-in at each major step
   - Allow course corrections
   - Work collaboratively

3. **Be Thorough**:
   - Read all context files COMPLETELY before planning
   - Research actual code patterns using parallel sub-tasks
   - Include specific file paths and line numbers
   - Write measurable success criteria with a clear automated vs. manual distinction
   - Use the project's standard testing/verification commands in success criteria

4. **Be Practical**:
   - Focus on incremental, testable changes
   - Consider migration and rollback
   - Think about edge cases
   - Include "what we're NOT doing"

5. **Track Progress**:
   - Use TodoWrite to track planning tasks
   - Update todos as you complete research
   - Mark planning tasks complete when done

6. **No Open Questions in Final Plan**:
   - If you encounter open questions during planning, STOP
   - Research or ask for clarification immediately
   - Do NOT write the plan with unresolved questions
   - The implementation plan must be complete and actionable
   - Every decision must be made before finalizing the plan

## Success Criteria Guidelines

**Always separate success criteria into two categories:**

1. **Automated Verification** (can be run by execution agents):
   - Commands that can be run: `make test`, `npm run lint`, etc.
   - Specific files that should exist
   - Code compilation/type checking
   - Automated test suites

2. **Manual Verification** (requires human testing):
   - UI/UX functionality
   - Performance under real conditions
   - Edge cases that are hard to automate
   - User acceptance criteria

**Format example:**

```markdown
### Success Criteria:

#### Automated Verification:
- [ ] Database migration runs successfully: `python manage.py migrate`
- [ ] All unit tests pass: `pytest tests/`
- [ ] No linting errors: `flake8 src/`
- [ ] API endpoint returns 200: `curl localhost:8080/api/new-endpoint`

#### Manual Verification:
- [ ] New feature appears correctly in the UI
- [ ] Performance is acceptable with 1000+ items
- [ ] Error messages are user-friendly
- [ ] Feature works correctly on mobile devices
```

## Common Patterns

### For Database Changes:
- Start with schema/migration
- Add store methods
- Update business logic
- Expose via API
- Update clients

### For New Features:
- Research existing patterns first
- Start with data model
- Build backend logic
- Add API endpoints
- Implement UI last

### For Refactoring:
- Document current behavior
- Plan incremental changes
- Maintain backwards compatibility
- Include migration strategy

## Sub-task Spawning Best Practices

When spawning research sub-tasks:

1. **Spawn multiple tasks in parallel** for efficiency
2. **Each task should be focused** on a specific area
3. **Provide detailed instructions** including:
   - Exactly what to search for
   - Which directories to focus on
   - What information to extract
   - Expected output format
4. **Be EXTREMELY specific about directories**:
   - If the requirements mention specific subsystems, specify those directories
   - Be explicit about directory paths in your research prompts
   - Include the full path context in your prompts
5. **Specify read-only tools** to use
6. **Request specific file:line references** in responses
7. **Wait for all tasks to complete** before synthesizing
8. **Verify sub-task results**:
   - If a sub-task returns unexpected results, spawn follow-up tasks
   - Cross-check findings against the actual codebase
   - Don't accept results that seem incorrect

Example of spawning multiple tasks:

```python
# Spawn these tasks concurrently:
tasks = [
    Task("Research database schema", db_research_prompt),
    Task("Find API patterns", api_research_prompt),
    Task("Investigate UI components", ui_research_prompt),
    Task("Check test patterns", test_research_prompt),
]
```
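The `*_research_prompt` variables above are placeholders. One of them, written with the specificity recommended in this section, might look like this sketch (the directory paths are hypothetical):

```python
# A hypothetical research prompt illustrating points 3-6 above:
# explicit directories, read-only tools, what to extract, and an
# expected output format with file:line references.
api_research_prompt = """
Research the existing API patterns in this codebase.

Focus ONLY on these directories (hypothetical paths - use the real ones):
- src/api/routes/
- src/api/middleware/

Use read-only tools (Read, Grep, Glob); do not modify any files.

Extract:
1. How routes are registered and grouped
2. How request validation and error handling are done
3. Any existing authentication/authorization middleware

Output format: a bulleted summary where every claim includes a
file:line reference, plus a list of tests covering these paths.
"""
```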
## Example Interaction Flow

```
User: /create_plan
Assistant: I'll help you create a detailed implementation plan...

User: We need to add authentication to the API. See tasks/kasper-junge/003-2025-01-10-api-auth/requirements.md
Assistant: Let me read that requirements file completely first...
[Reads file fully]

Based on the requirements, I understand we need to add JWT-based authentication to the API endpoints. Before I start planning, I have some questions...

[Interactive process continues...]
```