Initial commit

Zhongwei Li
2025-11-30 08:30:41 +08:00
commit 1c9802ae09
10 changed files with 1421 additions and 0 deletions

commands/create_plan.md Normal file

@@ -0,0 +1,451 @@
---
description: Create detailed implementation plans through interactive research and iteration
---
# Implementation Plan
You are tasked with creating detailed implementation plans through an interactive, iterative process. You should be skeptical, thorough, and work collaboratively with the user to produce high-quality technical specifications.
## Initial Response
When this command is invoked:
1. **Check if parameters were provided**:
- If a file path or ticket reference was provided as a parameter, skip the default message
- Immediately read any provided files FULLY
- Begin the research process
2. **If no parameters provided**, respond with:
```
I'll help you create a detailed implementation plan. Let me start by understanding what we're building.
Please provide:
1. The task/ticket description (or reference to a ticket file)
2. Any relevant context, constraints, or specific requirements
3. Links to related research or previous implementations
I'll analyze this information and work with you to create a comprehensive plan.
Tip: You can also invoke this command with a requirements file directly: `/create_plan tasks/kasper-junge/001-2025-01-15-feature-name/requirements.md`
For deeper analysis, try: `/create_plan think deeply about tasks/kasper-junge/001-2025-01-15-feature-name/requirements.md`
```
Then wait for the user's input.
## Process Steps
### Step 1: Context Gathering & Initial Analysis
1. **Read all mentioned files immediately and FULLY**:
- Requirements files (e.g., `tasks/<username>/001-2025-01-15-feature-name/requirements.md`)
- Research documents from the task directory
- Related implementation plans
- Any JSON/data files mentioned
- **IMPORTANT**: Use the Read tool WITHOUT limit/offset parameters to read entire files
- **CRITICAL**: DO NOT spawn sub-tasks before reading these files yourself in the main context
- **NEVER** read files partially - if a file is mentioned, read it completely
2. **Spawn initial research tasks to gather context**:
Before asking the user any questions, use specialized agents to research in parallel:
- Use the **codebase-locator** agent to find all files related to the task
- Use the **codebase-analyzer** agent to understand how the current implementation works
These agents will:
- Find relevant source files, configs, and tests
- Identify the specific directories to focus on (e.g., if WUI is mentioned, they'll focus on humanlayer-wui/)
- Trace data flow and key functions
- Return detailed explanations with file:line references
3. **Read all files identified by research tasks**:
- After research tasks complete, read ALL files they identified as relevant
- Read them FULLY into the main context
- This ensures you have complete understanding before proceeding
4. **Analyze and verify understanding**:
- Cross-reference the task requirements with actual code
- Identify any discrepancies or misunderstandings
- Note assumptions that need verification
- Determine true scope based on codebase reality
5. **Present informed understanding and focused questions**:
```
Based on the requirements and my research of the codebase, I understand we need to [accurate summary].
I've found that:
- [Current implementation detail with file:line reference]
- [Relevant pattern or constraint discovered]
- [Potential complexity or edge case identified]
Questions that my research couldn't answer:
- [Specific technical question that requires human judgment]
- [Business logic clarification]
- [Design preference that affects implementation]
```
Only ask questions that you genuinely cannot answer through code investigation.
### Step 2: Research & Discovery
After getting initial clarifications:
1. **If the user corrects any misunderstanding**:
- DO NOT just accept the correction
- Spawn new research tasks to verify the correct information
- Read the specific files/directories they mention
- Only proceed once you've verified the facts yourself
2. **Create a research todo list** using TodoWrite to track exploration tasks
3. **Spawn parallel sub-tasks for comprehensive research**:
- Create multiple Task agents to research different aspects concurrently
- Use the right agent for each type of research:
**For deeper investigation:**
- **codebase-locator** - To find more specific files (e.g., "find all files that handle [specific component]")
- **codebase-analyzer** - To understand implementation details (e.g., "analyze how [system] works")
- **codebase-pattern-finder** - To find similar features we can model after
**For web research (if needed):**
- **web-search-researcher** - To find external documentation, tutorials, or best practices
Each agent knows how to:
- Find the right files and code patterns
- Identify conventions and patterns to follow
- Look for integration points and dependencies
- Return specific file:line references
- Find tests and examples
4. **Wait for ALL sub-tasks to complete** before proceeding
5. **Present findings and design options**:
```
Based on my research, here's what I found:
**Current State:**
- [Key discovery about existing code]
- [Pattern or convention to follow]
**Design Options:**
1. [Option A] - [pros/cons]
2. [Option B] - [pros/cons]
**Open Questions:**
- [Technical uncertainty]
- [Design decision needed]
Which approach aligns best with your vision?
```
### Step 3: Plan Structure Development
Once aligned on approach:
1. **Create initial plan outline**:
```
Here's my proposed plan structure:
## Overview
[1-2 sentence summary]
## Implementation Phases:
1. [Phase name] - [what it accomplishes]
2. [Phase name] - [what it accomplishes]
3. [Phase name] - [what it accomplishes]
Does this phasing make sense? Should I adjust the order or granularity?
```
2. **Get feedback on structure** before writing details
### Step 4: Detailed Plan Writing
After structure approval:
1. **Determine username:**
- Run the `${CLAUDE_PLUGIN_ROOT}/scripts/spec_metadata.sh` script if not already run
- Check the script output for "Normalized Username"
- If present → use it
- If not present:
- Check for "Existing Users" in the output
- If existing users found → prompt: "Which user are you? [user1/user2/user3]: "
- If no existing users → prompt: "Enter your name: " then normalize it (lowercase, spaces to hyphens)
- Store the username for creating the directory path
2. **Determine the task directory**:
- If working on an existing task (e.g., research already exists), write to that task's directory: `tasks/<username>/NNN-YYYY-MM-DD-description/plan.md`
- If starting a new task, create a new numbered directory:
- Check what task numbers already exist in `tasks/<username>/`
- Use the next sequential number (e.g., if tasks/kasper-junge/003-... exists, create tasks/kasper-junge/004-...)
- Format: `tasks/<username>/NNN-YYYY-MM-DD-description/plan.md` where:
- `<username>` is the normalized username (e.g., kasper-junge, jonas-peterson)
- NNN is a zero-padded 3-digit number (001, 002, etc.) - per-user numbering
- YYYY-MM-DD is today's date
- description is a brief kebab-case description
- Example: `tasks/kasper-junge/005-2025-01-15-add-authentication/plan.md` (a path-construction sketch appears after the template below)
3. **Write the plan** to the task directory
4. **Use this template structure**:
````markdown
# [Feature/Task Name] Implementation Plan
## Overview
[Brief description of what we're implementing and why]
## Current State Analysis
[What exists now, what's missing, key constraints discovered]
## Desired End State
[A specification of the desired end state after this plan is complete, and how to verify it]
### Key Discoveries:
- [Important finding with file:line reference]
- [Pattern to follow]
- [Constraint to work within]
## What We're NOT Doing
[Explicitly list out-of-scope items to prevent scope creep]
## Implementation Approach
[High-level strategy and reasoning]
## Phase 1: [Descriptive Name]
### Overview
[What this phase accomplishes]
### Changes Required:
#### 1. [Component/File Group]
**File**: `path/to/file.ext`
**Changes**: [Summary of changes]
```[language]
// Specific code to add/modify
```
### Success Criteria:
#### Automated Verification:
- [ ] Migration applies cleanly: `[project-specific command, e.g., python manage.py migrate]`
- [ ] Unit tests pass: `[project-specific command, e.g., npm test, pytest tests/unit/]`
- [ ] Type checking passes: `[if applicable, e.g., npm run typecheck, mypy src/]`
- [ ] Linting passes: `[project-specific command, e.g., npm run lint, flake8 src/]`
- [ ] Integration tests pass: `[project-specific command, e.g., npm run test:integration]`
#### Manual Verification:
- [ ] Feature works as expected when tested via UI
- [ ] Performance is acceptable under load
- [ ] Edge case handling verified manually
- [ ] No regressions in related features
**Implementation Note**: After completing this phase and all automated verification passes, pause here for manual confirmation from the human that the manual testing was successful before proceeding to the next phase.
---
## Phase 2: [Descriptive Name]
[Similar structure with both automated and manual success criteria...]
---
## Testing Strategy
### Unit Tests:
- [What to test]
- [Key edge cases]
### Integration Tests:
- [End-to-end scenarios]
### Manual Testing Steps:
1. [Specific step to verify feature]
2. [Another verification step]
3. [Edge case to test manually]
## Performance Considerations
[Any performance implications or optimizations needed]
## Migration Notes
[If applicable, how to handle existing data/systems]
## References
- Original requirements: `tasks/<username>/NNN-YYYY-MM-DD-description/requirements.md` (if applicable)
- Related research: `tasks/<username>/NNN-YYYY-MM-DD-description/research.md` (if applicable)
- Similar implementation: `[file:line]`
````
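The username and task-numbering mechanics above can be sketched in plain shell. The snippet below is illustrative only: it assumes `spec_metadata.sh` prints a line of the form `Normalized Username: <name>`, that task directories follow the `NNN-YYYY-MM-DD-description` layout described above, and it uses `add-authentication` as a stand-in description. Adapt it to the script's actual output.
```bash
# Hypothetical sketch of the username + next-task-number logic described above.
meta="$("${CLAUDE_PLUGIN_ROOT}/scripts/spec_metadata.sh")"
username="$(printf '%s\n' "$meta" | sed -n 's/^Normalized Username: //p')"

if [ -z "$username" ]; then
  # Fall back to asking the user, then normalize: lowercase, spaces to hyphens.
  read -r -p "Enter your name: " raw_name
  username="$(printf '%s' "$raw_name" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')"
fi

# Highest existing task number for this user (empty if none exist yet).
last="$(ls -d "tasks/${username}"/[0-9][0-9][0-9]-* 2>/dev/null \
  | sed 's|.*/\([0-9][0-9][0-9]\)-.*|\1|' | sort -n | tail -1)"
next="$(printf '%03d' $((10#${last:-0} + 1)))"

plan_dir="tasks/${username}/${next}-$(date +%Y-%m-%d)-add-authentication"
mkdir -p "$plan_dir"
echo "Plan path: ${plan_dir}/plan.md"
```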
### Step 5: Review and Iterate
1. **Present the draft plan location**:
```
I've created the initial implementation plan at:
`tasks/<username>/NNN-YYYY-MM-DD-description/plan.md`
Please review it and let me know:
- Are the phases properly scoped?
- Are the success criteria specific enough?
- Any technical details that need adjustment?
- Missing edge cases or considerations?
```
2. **Iterate based on feedback** - be ready to:
- Add missing phases
- Adjust technical approach
- Clarify success criteria (both automated and manual)
- Add/remove scope items
3. **Continue refining** until the user is satisfied
## Important Guidelines
1. **Be Skeptical**:
- Question vague requirements
- Identify potential issues early
- Ask "why" and "what about"
- Don't assume - verify with code
2. **Be Interactive**:
- Don't write the full plan in one shot
- Get buy-in at each major step
- Allow course corrections
- Work collaboratively
3. **Be Thorough**:
- Read all context files COMPLETELY before planning
- Research actual code patterns using parallel sub-tasks
- Include specific file paths and line numbers
- Write measurable success criteria with clear automated vs manual distinction
- Use the project's standard testing/verification commands in success criteria
4. **Be Practical**:
- Focus on incremental, testable changes
- Consider migration and rollback
- Think about edge cases
- Include "what we're NOT doing"
5. **Track Progress**:
- Use TodoWrite to track planning tasks
- Update todos as you complete research
- Mark planning tasks complete when done
6. **No Open Questions in Final Plan**:
- If you encounter open questions during planning, STOP
- Research or ask for clarification immediately
- Do NOT write the plan with unresolved questions
- The implementation plan must be complete and actionable
- Every decision must be made before finalizing the plan
## Success Criteria Guidelines
**Always separate success criteria into two categories:**
1. **Automated Verification** (can be run by execution agents):
- Commands that can be run: `make test`, `npm run lint`, etc.
- Specific files that should exist
- Code compilation/type checking
- Automated test suites
2. **Manual Verification** (requires human testing):
- UI/UX functionality
- Performance under real conditions
- Edge cases that are hard to automate
- User acceptance criteria
**Format example:**
```markdown
### Success Criteria:
#### Automated Verification:
- [ ] Database migration runs successfully: `python manage.py migrate`
- [ ] All unit tests pass: `pytest tests/`
- [ ] No linting errors: `flake8 src/`
- [ ] API endpoint returns 200: `curl localhost:8080/api/new-endpoint`
#### Manual Verification:
- [ ] New feature appears correctly in the UI
- [ ] Performance is acceptable with 1000+ items
- [ ] Error messages are user-friendly
- [ ] Feature works correctly on mobile devices
```
## Common Patterns
### For Database Changes:
- Start with schema/migration
- Add store methods
- Update business logic
- Expose via API
- Update clients
### For New Features:
- Research existing patterns first
- Start with data model
- Build backend logic
- Add API endpoints
- Implement UI last
### For Refactoring:
- Document current behavior
- Plan incremental changes
- Maintain backwards compatibility
- Include migration strategy
## Sub-task Spawning Best Practices
When spawning research sub-tasks:
1. **Spawn multiple tasks in parallel** for efficiency
2. **Each task should be focused** on a specific area
3. **Provide detailed instructions** including:
- Exactly what to search for
- Which directories to focus on
- What information to extract
- Expected output format
4. **Be EXTREMELY specific about directories**:
- If the requirements mention specific subsystems, specify those directories
- Be explicit about directory paths in your research prompts
- Include the full path context in your prompts
5. **Specify read-only tools** to use
6. **Request specific file:line references** in responses
7. **Wait for all tasks to complete** before synthesizing
8. **Verify sub-task results**:
- If a sub-task returns unexpected results, spawn follow-up tasks
- Cross-check findings against the actual codebase
- Don't accept results that seem incorrect
Example of spawning multiple tasks:
```python
# Spawn these tasks concurrently:
tasks = [
Task("Research database schema", db_research_prompt),
Task("Find API patterns", api_research_prompt),
Task("Investigate UI components", ui_research_prompt),
Task("Check test patterns", test_research_prompt)
]
```
## Example Interaction Flow
```
User: /create_plan
Assistant: I'll help you create a detailed implementation plan...
User: We need to add authentication to the API. See tasks/kasper-junge/003-2025-01-10-api-auth/requirements.md
Assistant: Let me read that requirements file completely first...
[Reads file fully]
Based on the requirements, I understand we need to add JWT-based authentication to the API endpoints. Before I start planning, I have some questions...
[Interactive process continues...]
```


@@ -0,0 +1,87 @@
---
description: Implement technical plans from tasks/<username>/ with verification
---
# Implement Plan
You are tasked with implementing an approved technical plan from `tasks/<username>/`. These plans contain phases with specific changes and success criteria.
Plans are organized by username in the tasks directory structure: `tasks/<username>/NNN-YYYY-MM-DD-description/plan.md`
## Getting Started
When given a plan path (e.g., `tasks/kasper-junge/003-2025-01-15-feature-name/plan.md`):
- Read the plan completely and check for any existing checkmarks (- [x])
- Read the original requirements file and all files mentioned in the plan
- Check if there's a research.md in the same directory for additional context
- **Read files fully** - never use limit/offset parameters, you need complete context
- Think deeply about how the pieces fit together
- Create a todo list to track your progress
- Start implementing if you understand what needs to be done
If no plan path provided, ask for one with the format: `tasks/<username>/NNN-YYYY-MM-DD-description/plan.md`
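The checkmark scan mentioned above can be approximated with a plain search, assuming the plan uses standard markdown task-list syntax. This is a sketch, not a required step, and the path is a placeholder:
```bash
# Hypothetical sketch: list completed and pending items in a plan file.
plan="tasks/kasper-junge/003-2025-01-15-feature-name/plan.md"
grep -n -e "- \[x\]" "$plan"    # items already checked off
grep -n -e "- \[ \]" "$plan"    # items still open
```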
## Implementation Philosophy
Plans are carefully designed, but reality can be messy. Your job is to:
- Follow the plan's intent while adapting to what you find
- Implement each phase fully before moving to the next
- Verify your work makes sense in the broader codebase context
- Update checkboxes in the plan as you complete sections
When things don't match the plan exactly, think about why and communicate clearly. The plan is your guide, but your judgment matters too.
If you encounter a mismatch:
- STOP and think deeply about why the plan can't be followed
- Present the issue clearly:
```
Issue in Phase [N]:
Expected: [what the plan says]
Found: [actual situation]
Why this matters: [explanation]
How should I proceed?
```
## Verification Approach
After implementing a phase:
- Run the success criteria checks listed in the plan (these will be project-specific commands)
- Fix any issues before proceeding
- Update your progress in both the plan and your todos
- Check off completed items in the plan file itself using Edit
- **Pause for human verification**: After completing all automated verification for a phase, pause and inform the human that the phase is ready for manual testing. Use this format:
```
Phase [N] Complete - Ready for Manual Verification
Automated verification passed:
- [List automated checks that passed]
Please perform the manual verification steps listed in the plan:
- [List manual verification items from the plan]
Let me know when manual testing is complete so I can proceed to Phase [N+1].
```
If instructed to execute multiple phases consecutively, skip the pause until the last phase. Otherwise, assume you are just doing one phase.
Do not check off items in the manual testing steps until the user confirms they are complete.
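For illustration only, checking off a plan item amounts to flipping `- [ ]` to `- [x]` in the plan file. The Edit tool is the normal mechanism; the equivalent change looks roughly like this (GNU sed shown, with a placeholder path and criterion):
```bash
# Hypothetical sketch: mark one automated-verification item complete in the plan.
sed -i 's/- \[ \] Unit tests pass/- [x] Unit tests pass/' \
  "tasks/kasper-junge/003-2025-01-15-feature-name/plan.md"
```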
## If You Get Stuck
When something isn't working as expected:
- First, make sure you've read and understood all the relevant code
- Consider if the codebase has evolved since the plan was written
- Present the mismatch clearly and ask for guidance
Use sub-tasks sparingly - mainly for targeted debugging or exploring unfamiliar territory.
## Resuming Work
If the plan has existing checkmarks:
- Trust that completed work is done
- Pick up from the first unchecked item
- Verify previous work only if something seems off
Remember: You're implementing a solution, not just checking boxes. Keep the end goal in mind and maintain forward momentum.


@@ -0,0 +1,198 @@
---
description: Document the codebase as-is, saving research to the tasks directory for historical context
---
# Research Codebase
You are tasked with conducting comprehensive research across the codebase to answer user questions by spawning parallel sub-agents and synthesizing their findings.
## CRITICAL: YOUR ONLY JOB IS TO DOCUMENT AND EXPLAIN THE CODEBASE AS IT EXISTS TODAY
- DO NOT suggest improvements or changes unless the user explicitly asks for them
- DO NOT perform root cause analysis unless the user explicitly asks for it
- DO NOT propose future enhancements unless the user explicitly asks for them
- DO NOT critique the implementation or identify problems
- DO NOT recommend refactoring, optimization, or architectural changes
- ONLY describe what exists, where it exists, how it works, and how components interact
- You are creating a technical map/documentation of the existing system
## Initial Setup:
When this command is invoked, respond with:
```
I'm ready to research the codebase. Please provide your research question or area of interest, and I'll analyze it thoroughly by exploring relevant components and connections.
```
Then wait for the user's research query.
## Steps to follow after receiving the research query:
1. **Read any directly mentioned files first:**
- If the user mentions specific files (tickets, docs, JSON), read them FULLY first
- **IMPORTANT**: Use the Read tool WITHOUT limit/offset parameters to read entire files
- **CRITICAL**: Read these files yourself in the main context before spawning any sub-tasks
- This ensures you have full context before decomposing the research
2. **Analyze and decompose the research question:**
- Break down the user's query into composable research areas
- Take time to ultrathink about the underlying patterns, connections, and architectural implications the user might be seeking
- Identify specific components, patterns, or concepts to investigate
- Create a research plan using TodoWrite to track all subtasks
- Consider which directories, files, or architectural patterns are relevant
3. **Spawn parallel sub-agent tasks for comprehensive research:**
- Create multiple Task agents to research different aspects concurrently
- We now have specialized agents that know how to do specific research tasks:
**For codebase research:**
- Use the **codebase-locator** agent to find WHERE files and components live
- Use the **codebase-analyzer** agent to understand HOW specific code works (without critiquing it)
- Use the **codebase-pattern-finder** agent to find examples of existing patterns (without evaluating them)
**IMPORTANT**: All agents are documentarians, not critics. They will describe what exists without suggesting improvements or identifying issues.
**For web research (only if user explicitly asks):**
- Use the **web-search-researcher** agent for external documentation and resources
- IF you use web-research agents, instruct them to return LINKS with their findings, and please INCLUDE those links in your final report
The key is to use these agents intelligently:
- Start with locator agents to find what exists
- Then use analyzer agents on the most promising findings to document how they work
- Run multiple agents in parallel when they're searching for different things
- Each agent knows its job - just tell it what you're looking for
- Don't write detailed prompts about HOW to search - the agents already know
- Remind agents they are documenting, not evaluating or improving
4. **Wait for all sub-agents to complete and synthesize findings:**
- IMPORTANT: Wait for ALL sub-agent tasks to complete before proceeding
- Compile all sub-agent results
- Connect findings across different components
- Include specific file paths and line numbers for reference
- Highlight patterns, connections, and architectural decisions
- Answer the user's specific questions with concrete evidence
5. **Gather metadata and determine research document location:**
- Run the `${CLAUDE_PLUGIN_ROOT}/scripts/spec_metadata.sh` script to generate all relevant metadata
- **Determine username:**
- Check the script output for "Normalized Username"
- If present → use it
- If not present:
- Check for "Existing Users" in the output
- If existing users found → prompt: "Which user are you? [user1/user2/user3]: "
- If no existing users → prompt: "Enter your name: " then normalize it (lowercase, spaces to hyphens)
- Store the username for creating the directory path
- **Determine task directory:**
- If research is for an existing task, write to that task's directory: `tasks/<username>/NNN-YYYY-MM-DD-description/research.md`
- If starting new research (not tied to existing task), create a new task directory:
- Check what task numbers already exist in `tasks/<username>/`
- Use the next sequential number (e.g., if tasks/kasper-junge/003-... exists, create tasks/kasper-junge/004-...)
- Format: `tasks/<username>/NNN-YYYY-MM-DD-description/research.md` where:
- `<username>` is the normalized username (e.g., kasper-junge, jonas-peterson)
- NNN is a zero-padded 3-digit number (001, 002, etc.) - per-user numbering
- YYYY-MM-DD is today's date
- description is a brief kebab-case description of the research topic
- Example: `tasks/kasper-junge/005-2025-01-15-authentication-flow/research.md`
6. **Generate research document:**
- Use the metadata gathered in step 5
- Structure the document with YAML frontmatter followed by content:
```markdown
---
date: [Current date and time with timezone in ISO format]
researcher: [Researcher name from git config]
git_commit: [Current commit hash]
branch: [Current branch name]
repository: [Repository name]
topic: "[User's Question/Topic]"
tags: [research, codebase, relevant-component-names]
status: complete
last_updated: [Current date in YYYY-MM-DD format]
last_updated_by: [Researcher name]
---
# Research: [User's Question/Topic]
**Date**: [Current date and time with timezone from step 5]
**Researcher**: [Researcher name from git config]
**Git Commit**: [Current commit hash from step 5]
**Branch**: [Current branch name from step 5]
**Repository**: [Repository name]
## Research Question
[Original user query]
## Summary
[High-level documentation of what was found, answering the user's question by describing what exists]
## Detailed Findings
### [Component/Area 1]
- Description of what exists ([file.ext:line](link))
- How it connects to other components
- Current implementation details (without evaluation)
### [Component/Area 2]
...
## Code References
- `path/to/file.py:123` - Description of what's there
- `another/file.ts:45-67` - Description of the code block
## Architecture Documentation
[Current patterns, conventions, and design implementations found in the codebase]
## Related Research
[Links to other research documents in tasks/<username>/]
## Open Questions
[Any areas that need further investigation]
```
7. **Add GitHub permalinks (if applicable):**
- Check if on main branch or if commit is pushed: `git branch --show-current` and `git status`
- If on main/master or pushed, generate GitHub permalinks:
- Get repo info: `gh repo view --json owner,name`
- Create permalinks: `https://github.com/{owner}/{repo}/blob/{commit}/{file}#L{line}` (a sketch of this construction appears after these numbered steps)
- Replace local file references with permalinks in the document
8. **Present findings:**
- Present a concise summary of findings to the user
- Include key file references for easy navigation
- Note the location of the research document
- Ask if they have follow-up questions or need clarification
9. **Handle follow-up questions:**
- If the user has follow-up questions, append to the same research document
- Update the frontmatter fields `last_updated` and `last_updated_by` to reflect the update
- Add `last_updated_note: "Added follow-up research for [brief description]"` to frontmatter
- Add a new section: `## Follow-up Research [timestamp]`
- Spawn new sub-agents as needed for additional investigation
- Continue updating the document and syncing
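As a rough sketch of the permalink construction in step 7, assuming the commit has been pushed and the `gh` CLI is authenticated (the file and line below are placeholders):
```bash
# Hypothetical sketch: turn a file:line research reference into a GitHub permalink.
commit="$(git rev-parse HEAD)"
slug="$(gh repo view --json owner,name --jq '.owner.login + "/" + .name')"  # e.g. "acme/api"

file="src/auth/jwt.py"   # placeholder reference from the research findings
line=123

echo "https://github.com/${slug}/blob/${commit}/${file}#L${line}"
```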
## Important notes:
- Always use parallel Task agents to maximize efficiency and minimize context usage
- Always run fresh codebase research - never rely solely on existing research documents
- Focus on finding concrete file paths and line numbers for developer reference
- Research documents should be self-contained with all necessary context
- Each sub-agent prompt should be specific and focused on read-only documentation operations
- Document cross-component connections and how systems interact
- Include temporal context (when the research was conducted)
- Link to GitHub when possible for permanent references
- Keep the main agent focused on synthesis, not deep file reading
- Have sub-agents document examples and usage patterns as they exist
- **CRITICAL**: You and all sub-agents are documentarians, not evaluators
- **REMEMBER**: Document what IS, not what SHOULD BE
- **NO RECOMMENDATIONS**: Only describe the current state of the codebase
- **File reading**: Always read mentioned files FULLY (no limit/offset) before spawning sub-tasks
- **Critical ordering**: Follow the numbered steps exactly
- ALWAYS read mentioned files first before spawning sub-tasks (step 1)
- ALWAYS wait for all sub-agents to complete before synthesizing (step 4)
- ALWAYS gather metadata before writing the document (step 5 before step 6)
- NEVER write the research document with placeholder values
- **Frontmatter consistency**:
- Always include frontmatter at the beginning of research documents
- Keep frontmatter fields consistent across all research documents
- Update frontmatter when adding follow-up research
- Use snake_case for multi-word field names (e.g., `last_updated`, `git_commit`)
- Tags should be relevant to the research topic and components studied