Initial commit

Zhongwei Li
2025-11-29 18:22:50 +08:00
commit e6ac7edfc0
14 changed files with 2164 additions and 0 deletions

commands/create-plan.md

@@ -0,0 +1,465 @@
---
name: create-plan
description: Create detailed implementation plans through an interactive, iterative process with thorough research and collaboration
---
# Create Plan
You are tasked with creating detailed implementation plans through an interactive, iterative process. You should be skeptical, thorough, and work collaboratively with the user to produce high-quality technical specifications.
## Initial Response
When this command is invoked:
1. **Evaluate if any available skill is relevant and a fit for the task**:
- If so, respond with:
```
I believe the [skill name] skill is well-suited to assist with this implementation plan. It provides expertise in [skill description]. I will leverage this skill throughout the planning process to ensure we follow best practices and produce a high-quality plan.
```
2. **Check if parameters were provided**:
- If a file path or ticket reference was provided as a parameter, skip the default message
- Immediately read any provided files FULLY
- Begin the research process
3. **If no parameters provided**, respond with:
```
I'll help you create a detailed implementation plan. Let me start by understanding what we're building.
Please provide:
1. The task/ticket description (or reference to a ticket file)
2. Any relevant context, constraints, or specific requirements
3. Links to related research or previous implementations
I'll analyze this information and work with you to create a comprehensive plan.
Tip: You can also invoke this command with a ticket file directly: `/create-plan thoughts/allison/tickets/eng_1234.md`
For deeper analysis, try: `/create-plan think deeply about thoughts/allison/tickets/eng_1234.md`
```
Then wait for the user's input.
## Process
### Step 1: Context Gathering & Initial Analysis
1. **Read all mentioned files immediately and FULLY**:
- Ticket files (e.g., `thoughts/allison/tickets/eng_1234.md`)
- Research documents
- Related implementation plans
- Any JSON/data files mentioned
- **IMPORTANT**: Use the Read tool WITHOUT limit/offset parameters to read entire files
- **CRITICAL**: DO NOT spawn sub-tasks before reading these files yourself in the main context
- **NEVER** read files partially - if a file is mentioned, read it completely
2. **Spawn initial research tasks to gather context**:
Before asking the user any questions, use specialized agents to research in parallel:
- Use the **codebase-locator** agent to find all files related to the ticket/task
- Use the **codebase-analyzer** agent to understand how the current implementation works
- If relevant, use the **thoughts-locator** agent to find any existing thoughts documents about this feature
These agents will:
- Find relevant source files, configs, and tests
- Trace data flow and key functions
- Return detailed explanations with file:line references
3. **Read all files identified by research tasks**:
- After research tasks complete, read ALL files they identified as relevant
- Read them FULLY into the main context
- This ensures you have complete understanding before proceeding
4. **Analyze and verify understanding**:
- Cross-reference the ticket requirements with actual code
- Identify any discrepancies or misunderstandings
- Note assumptions that need verification
- Determine true scope based on codebase reality
5. **Present informed understanding and focused questions**:
```
Based on the ticket and my research of the codebase, I understand we need to [accurate summary].
I've found that:
- [Current implementation detail with file:line reference]
- [Relevant pattern or constraint discovered]
- [Potential complexity or edge case identified]
Questions that my research couldn't answer:
- [Specific technical question that requires human judgment]
- [Business logic clarification]
- [Design preference that affects implementation]
```
Only ask questions that you genuinely cannot answer through code investigation.
### Step 2: Research & Discovery
After getting initial clarifications:
1. **If the user corrects any misunderstanding**:
- DO NOT just accept the correction
- Spawn new research tasks to verify the correct information
- Read the specific files/directories they mention
- Only proceed once you've verified the facts yourself
2. **Create a research todo list** using TodoWrite to track exploration tasks
3. **Spawn parallel sub-tasks for comprehensive research**:
- Create multiple Task agents to research different aspects concurrently
- Use the right agent for each type of research:
**For deeper investigation:**
- **codebase-locator** - To find more specific files (e.g., "find all files that handle [specific component]")
- **codebase-analyzer** - To understand implementation details (e.g., "analyze how [system] works")
- **codebase-pattern-finder** - To find similar features we can model after
**For historical context:**
- **thoughts-locator** - To find any research, plans, or decisions about this area
- **thoughts-analyzer** - To extract key insights from the most relevant documents
**For related tickets:**
- **linear-searcher** - To find similar issues or past implementations
Each agent knows how to:
- Find the right files and code patterns
- Identify conventions and patterns to follow
- Look for integration points and dependencies
- Return specific file:line references
- Find tests and examples
4. **Wait for ALL sub-tasks to complete** before proceeding
5. **Present findings and design options**:
```
Based on my research, here's what I found:
**Current State:**
- [Key discovery about existing code]
- [Pattern or convention to follow]
**Design Options:**
1. [Option A] - [pros/cons]
2. [Option B] - [pros/cons]
**Open Questions:**
- [Technical uncertainty]
- [Design decision needed]
Which approach aligns best with your vision?
```
### Step 3: Plan Structure Development
Once aligned on approach:
1. **Create initial plan outline**:
```
Here's my proposed plan structure:
## Overview
[1-2 sentence summary]
## Implementation Phases:
1. [Phase name] - [what it accomplishes]
2. [Phase name] - [what it accomplishes]
3. [Phase name] - [what it accomplishes]
Does this phasing make sense? Should I adjust the order or granularity?
```
2. **Get feedback on structure** before writing details
### Step 4: Detailed Plan Writing
After structure approval:
1. **Write the plan** to `docs/claude/plans/YYYY-MM-DD-ENG-XXXX-description.md`
- Format: `YYYY-MM-DD-ENG-XXXX-description.md` where:
- `YYYY-MM-DD` is today's date
- `ENG-XXXX` is the ticket number (omit if no ticket)
- `description` is a brief kebab-case description
- Examples:
- With ticket: `2025-01-08-ENG-1478-parent-child-tracking.md`
- Without ticket: `2025-01-08-improve-error-handling.md`
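As a minimal sketch of this naming convention (assuming Python's standard library; `plan_filename` is a hypothetical helper, not part of this command):
```python
from datetime import date

def plan_filename(description: str, ticket: str | None = None) -> str:
    """Compose the plan filename: today's date, optional ticket, kebab-case description."""
    parts = [date.today().isoformat()]  # YYYY-MM-DD
    if ticket:                          # e.g. "ENG-1478"; omitted when there is no ticket
        parts.append(ticket)
    parts.append(description)           # e.g. "parent-child-tracking"
    return "-".join(parts) + ".md"

# plan_filename("parent-child-tracking", "ENG-1478")
# -> "2025-01-08-ENG-1478-parent-child-tracking.md" (when run on 2025-01-08)
```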
2. **Use this template structure**:
````markdown
# [Feature/Task Name] Implementation Plan
## Overview
[Brief description of what we're implementing and why]
## Current State Analysis
[What exists now, what's missing, key constraints discovered]
## Desired End State
[A specification of the desired end state after this plan is complete, and how to verify it]
### Key Discoveries:
- [Important finding with file:line reference]
- [Pattern to follow]
- [Constraint to work within]
## What We're NOT Doing
[Explicitly list out-of-scope items to prevent scope creep]
## Implementation Approach
[High-level strategy and reasoning]
## Phase 1: [Descriptive Name]
### Overview
[What this phase accomplishes]
### Changes Required:
#### 1. [Component/File group]
**File**: `path/to/file.ext`
**Changes**: [Summary of changes]
#### 2. [Another component/File group]
**File**: `path/to/file.ext`
**Changes**: [Summary of another change]
### Success Criteria:
#### Automated Verification:
- [ ] Migration applies cleanly: `make migrate`
- [ ] Unit tests pass: `make test-component`
- [ ] Type checking passes: `npm run typecheck`
- [ ] Linting passes: `make lint`
- [ ] Integration tests pass: `make test-integration`
#### Manual Verification:
- [ ] Feature works as expected when tested via UI
- [ ] Performance is acceptable under load
- [ ] Edge case handling verified manually
- [ ] No regressions in related features
---
## Phase 2: [Descriptive Name]
[Similar structure as in phase 1...]
---
## Testing Strategy
### Unit Tests:
- [What to test]
- [Key edge cases]
### Integration Tests:
- [End-to-end scenarios]
### Manual Testing Steps:
1. [Specific step to verify feature]
2. [Another verification step]
3. [Edge case to test manually]
## Performance Considerations
[Any performance implications or optimizations needed]
## Migration Notes
[If applicable, how to handle existing data/systems]
## References
- Original ticket: `thoughts/allison/tickets/eng_XXXX.md`
- Related research: `thoughts/shared/research/[relevant].md`
- Similar implementation: `[file:line]`
## Code Changes Appendix
[Specific code changes, where each item should reference a previously identified change in `Changes Required`]
### 1. [Component/File group]
**File**: `path/to/file.ext`
```[language]
// Specific code to add/modify/delete
```
### 2. [Another component/File group]
**File**: `path/to/file.ext`
```[language]
// Specific code to add/modify/delete
```
````
### Step 5: Sync and Review
1. **Write the plan to a file**:
- This ensures the plan is available for review by the user
2. **Present the draft plan location**:
```
I've created the initial implementation plan at:
`<plan-file-path>`
Please review it and let me know:
- Are the phases properly scoped?
- Are the success criteria specific enough?
- Any technical details that need adjustment?
- Missing edge cases or considerations?
```
3. **Iterate based on feedback** - be ready to:
- Add missing phases
- Adjust technical approach
- Clarify success criteria (both automated and manual)
- Add/remove scope items
4. **Continue refining** until the user is satisfied
## Success Criteria Guidelines
**Always separate success criteria into two categories:**
1. **Automated Verification** (can be run by execution agents):
- Commands that can be run: `make test`, `make build`, etc.
- Specific files that should exist
- Code compilation/type checking
- Automated test suites
2. **Manual Verification** (requires human testing):
- UI/UX functionality
- Performance under real conditions
- Edge cases that are hard to automate
- User acceptance criteria
**Format example:**
```markdown
### Success Criteria:
#### Automated Verification:
- [ ] Database migration runs successfully: `make migrate`
- [ ] All unit tests pass: `go test ./...`
- [ ] No linting errors: `golangci-lint run`
- [ ] API endpoint returns 200: `curl localhost:8080/api/new-endpoint`
#### Manual Verification:
- [ ] New feature appears correctly in the UI
- [ ] Performance is acceptable with 1000+ items
- [ ] Error messages are user-friendly
- [ ] Feature works correctly on mobile devices
```
## Important Guidelines
1. **Be Skeptical**:
- Question vague requirements
- Identify potential issues early
- Ask "why" and "what about"
- Don't assume - verify with code
2. **Be Interactive**:
- Don't write the full plan in one shot
- Get buy-in at each major step
- Allow course corrections
- Work collaboratively
3. **Be Thorough**:
- Read all context files COMPLETELY before planning
- Research actual code patterns using parallel sub-tasks
- Include specific file paths and line numbers
- Write measurable success criteria with clear automated vs manual distinction
4. **Be Practical**:
- Focus on incremental, testable changes
- Consider migration and rollback
- Think about edge cases
- Include "what we're NOT doing"
5. **Track Progress**:
- Use TodoWrite to track planning tasks
- Update todos as you complete research
- Mark planning tasks complete when done
6. **No Open Questions in Final Plan**:
- If you encounter open questions during planning, STOP
- Research or ask for clarification immediately
- Do NOT write the plan with unresolved questions
- The implementation plan must be complete and actionable
- Every decision must be made before finalizing the plan
7. **Reuse existing build and testing tools**:
- Assess if the project contains build and testing scripts (e.g., Makefiles)
- Identify which scripts (e.g., Makefile goals) are appropriate to be used on automated verification steps
- Favor reusing existing scripts to run automated verifications (e.g., `make test`, `make build`)
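As a rough sketch of how such scripts could be discovered (assuming a GNU-make-style `Makefile` at the project root; `list_make_targets` is a hypothetical helper):
```python
import re
from pathlib import Path

def list_make_targets(makefile: str = "Makefile") -> list[str]:
    """List plain target names defined in a Makefile, skipping special '.'-prefixed targets."""
    path = Path(makefile)
    if not path.exists():
        return []
    # Match "name:" at line start, but not variable assignments like "FOO := bar"
    targets = re.findall(r"^([A-Za-z0-9_.-]+)\s*:(?!=)", path.read_text(), flags=re.MULTILINE)
    return [t for t in targets if not t.startswith(".")]

# e.g. ["build", "test", "lint", "migrate"] -> candidates for automated verification steps
```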
## Common Patterns
### For Database Changes:
- Start with schema/migration
- Add store methods
- Update business logic
- Expose via API
- Update clients
### For New Features:
- Research existing patterns first
- Start with data model
- Build backend logic
- Add API endpoints
- Implement UI last
### For Refactoring:
- Document current behavior
- Plan incremental changes
- Maintain backwards compatibility
- Include migration strategy
## Sub-task Spawning Best Practices
When spawning research sub-tasks:
1. **Spawn multiple tasks in parallel** for efficiency
2. **Each task should be focused** on a specific area
3. **Provide detailed instructions** including:
- Exactly what to search for
- Which directories to focus on
- What information to extract
- Expected output format
4. **Be EXTREMELY specific about directories**:
- Include the full path context in your prompts
5. **Specify read-only tools** to use
6. **Request specific file:line references** in responses
7. **Wait for all tasks to complete** before synthesizing
8. **Verify sub-task results**:
- If a sub-task returns unexpected results, spawn follow-up tasks
- Cross-check findings against the actual codebase
- Don't accept results that seem incorrect
Example of spawning multiple tasks:
```python
# Spawn these tasks concurrently; each *_prompt is a focused,
# detailed prompt string prepared beforehand (see the practices above):
tasks = [
    Task("Research database schema", db_research_prompt),
    Task("Find API patterns", api_research_prompt),
    Task("Investigate UI components", ui_research_prompt),
    Task("Check test patterns", test_research_prompt),
]
```
## Example Interaction Flow
```
User: /create-plan
Assistant: I'll help you create a detailed implementation plan...
User: We need to add parent-child tracking for Claude sub-tasks. See thoughts/allison/tickets/eng_1478.md
Assistant: Let me read that ticket file completely first...
[Reads file fully]
Based on the ticket, I understand we need to track parent-child relationships for Claude sub-task events in the daemon. Before I start planning, I have some questions...
[Interactive process continues...]
```

commands/implement-plan.md

@@ -0,0 +1,91 @@
---
name: implement-plan
description: Implement approved technical plans phase by phase with automated and manual verification at each step
---
# Implement Plan
You are tasked with implementing an approved technical plan. These plans contain phases with specific changes and success criteria.
## Process
### Step 1: Context Gathering
1. **Read all relevant files**:
- Read the plan completely
- Read the original ticket (if any) and all files mentioned in the plan
- **Read files fully** - never use limit/offset parameters, you need complete context
2. **Identify the current implementation status**:
- Check for any existing checkmarks (- [x]) to assess which phases are already implemented (if any)
- Identify the phase that needs to be implemented next
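As a minimal sketch of that checkbox scan (assuming the plan uses standard markdown task checkboxes; `phase_progress` is a hypothetical helper):
```python
import re

def phase_progress(plan_text: str) -> tuple[int, int]:
    """Return (completed, total) checkbox counts from a plan's markdown."""
    done = len(re.findall(r"^\s*- \[x\]", plan_text, flags=re.MULTILINE))
    total = len(re.findall(r"^\s*- \[[ x]\]", plan_text, flags=re.MULTILINE))
    return done, total

# done, total = phase_progress(plan_text)  # e.g. (4, 9) -> resume at the first unchecked phase
```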
### Step 2: Implementation
1. **Think deeply to ensure that you have the needed context to successfully implement the next phase**
2. **Create a todo list to track your progress**
3. **Start implementing**
- Execute the changes identified in the plan document
- ALWAYS implement unit tests
- Tests are part of the implementation phase and must NEVER be postponed
### Step 3: Validation
1. **Verify your implementation**
- Execute the success criteria automated checks
- Fix any issues before proceeding
- Verify your work makes sense in the broader codebase context
- Update your progress in both the plan file and your todos
2. **Pause for manual validations**
- Inform the human that the phase is ready for manual testing. Use this format:
```
Phase [N] Complete - Ready for Manual Verification
Automated verification passed:
- [List automated checks that passed]
Please perform the manual verification steps listed in the plan:
- [List manual verification items from the plan]
Let me know when manual testing is complete so I can proceed to Phase [N+1].
```
- If instructed to execute multiple phases consecutively, skip the pause until the last phase. Otherwise, assume you are just doing one phase.
- If no manual steps are required, proceed.
- Do not check off items in the manual testing steps until confirmed by the user.
3. **Wrap up**
- After a phase is fully implemented and tested, ensure the plan progress is updated before moving to the next phase
- NEVER proceed to the next phase if automated checks are not validated, unless explicitly allowed by the user
- NEVER proceed to the next phase if unit tests were not implemented, unless explicitly allowed by the user
## Implementation Philosophy
Plans are carefully designed, but reality can be messy. Your job is to:
- Follow the plan's intent while adapting to what you find
- Implement each phase fully before moving to the next
- Update checkboxes in the plan as you complete sections/phases
When things don't match the plan exactly, think about why and communicate clearly. The plan is your guide, but your judgment matters too.
If you encounter a mismatch:
- STOP and think deeply about why the plan can't be followed
- Present the issue clearly:
```
Issue in Phase [N]:
Expected: [what the plan says]
Found: [actual situation]
Why this matters: [explanation]
How should I proceed?
```
## If You Get Stuck
When something isn't working as expected:
- First, make sure you've read and understood all the relevant code
- Consider if the codebase has evolved since the plan was written
- Present the mismatch clearly and ask for guidance
Use sub-tasks sparingly - mainly for targeted debugging or exploring unfamiliar territory.

commands/research-codebase.md

@@ -0,0 +1,185 @@
---
name: research-codebase
description: Conduct comprehensive codebase research using parallel sub-agents to answer questions and document findings
---
# Research Codebase
You are tasked with conducting comprehensive research across the codebase to answer user questions by spawning parallel sub-agents and synthesizing their findings.
## Initial Setup
When this command is invoked, respond with:
```
I'm ready to research the codebase. Please provide your research question or area of interest, and I'll analyze it thoroughly by exploring relevant components and connections.
```
Then wait for the user's research query.
## Process
1. **Evaluate if any available skill is relevant and a fit for the task**:
- If so, respond with:
```
I believe the [skill name] skill is well-suited to assist with this research. It provides expertise in [skill description]. I will leverage this skill throughout the process to ensure we follow best practices and produce high-quality research.
```
2. **Read any directly mentioned files first:**
- If the user mentions specific files (tickets, docs, JSON), read them FULLY first
- **IMPORTANT**: Use the Read tool WITHOUT limit/offset parameters to read entire files
- **CRITICAL**: Read these files yourself in the main context before spawning any sub-tasks
- This ensures you have full context before decomposing the research
3. **Analyze and decompose the research question:**
- Break down the user's query into composable research areas
- Take time to ultrathink about the underlying patterns, connections, and architectural implications the user might be seeking
- Identify specific components, patterns, or concepts to investigate
- Create a research plan using TodoWrite to track all subtasks
- Consider which directories, files, or architectural patterns are relevant
4. **Spawn parallel sub-agent tasks for comprehensive research:**
- Create multiple Task agents to research different aspects concurrently
The key is to use these agents intelligently:
- Start with locator agents to find what exists
- Then use analyzer agents on the most promising findings
- Run multiple agents in parallel when they're searching for different things
- Each agent knows its job - just tell it what you're looking for
- Don't write detailed prompts about HOW to search - the agents already know
5. **Wait for all sub-agents to complete and synthesize findings:**
- IMPORTANT: Wait for ALL sub-agent tasks to complete before proceeding
- Compile all sub-agent results (both codebase and thoughts findings)
- Prioritize live codebase findings as the primary source of truth
- Use thoughts/ findings as supplementary historical context
- Connect findings across different components
- Include specific file paths and line numbers for reference
- Verify all thoughts/ paths are correct (e.g., thoughts/allison/ not thoughts/shared/ for personal files)
- Highlight patterns, connections, and architectural decisions
- Answer the user's specific questions with concrete evidence
6. **Gather metadata for the research document:**
- Generate all relevant metadata (current date/time, researcher, git commit, branch, repository name)
- Filename: `thoughts/shared/research/YYYY-MM-DD-ENG-XXXX-description.md`
- Format: `YYYY-MM-DD-ENG-XXXX-description.md` where:
- YYYY-MM-DD is today's date
- ENG-XXXX is the ticket number (omit if no ticket)
- description is a brief kebab-case description of the research topic
- Examples:
- With ticket: `2025-01-08-ENG-1478-parent-child-tracking.md`
- Without ticket: `2025-01-08-authentication-flow.md`
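As a minimal sketch of gathering that metadata (assuming `git` is on PATH; `git_out` is a hypothetical helper):
```python
import subprocess

def git_out(*args: str) -> str:
    """Run a git command and return its trimmed stdout."""
    return subprocess.run(["git", *args], capture_output=True, text=True, check=True).stdout.strip()

metadata = {
    "git_commit": git_out("rev-parse", "HEAD"),
    "branch": git_out("branch", "--show-current"),
    # Repository name taken from the working tree's top-level directory
    "repository": git_out("rev-parse", "--show-toplevel").rsplit("/", 1)[-1],
}
```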
7. **Generate research document:**
- Use the metadata gathered in step 6
- Structure the document with YAML frontmatter followed by content:
```markdown
---
date: [Current date and time with timezone in ISO format]
researcher: [Researcher name]
git_commit: [Current commit hash]
branch: [Current branch name]
repository: [Repository name]
topic: "[User's Question/Topic]"
tags: [research, codebase, relevant-component-names]
status: complete
last_updated: [Current date in YYYY-MM-DD format]
last_updated_by: [Researcher name]
---
# Research: [User's Question/Topic]
**Date**: [Current date and time with timezone from step 6]
**Researcher**: [Researcher name]
**Git Commit**: [Current commit hash from step 6]
**Branch**: [Current branch name from step 6]
**Repository**: [Repository name]
## Research Question
[Original user query]
## Summary
[High-level findings answering the user's question]
## Detailed Findings
### [Component/Area 1]
- Finding with reference ([file.ext:line](link))
- Connection to other components
- Implementation details
### [Component/Area 2]
...
## Code References
- `path/to/file.py:123` - Description of what's there
- `another/file.ts:45-67` - Description of the code block
## Architecture Insights
[Patterns, conventions, and design decisions discovered]
## Historical Context (from thoughts/)
[Relevant insights from thoughts/ directory with references]
- `thoughts/shared/something.md` - Historical decision about X
- `thoughts/local/notes.md` - Past exploration of Y
Note: Paths exclude "searchable/" even if found there
## Related Research
[Links to other research documents in thoughts/shared/research/]
## Open Questions
[Any areas that need further investigation]
```
8. **Add GitHub permalinks (if applicable):**
- Check if on main branch or if commit is pushed: `git branch --show-current` and `git status`
- If on main/master or pushed, generate GitHub permalinks:
- Get repo info: `gh repo view --json owner,name`
- Create permalinks: `https://github.com/{owner}/{repo}/blob/{commit}/{file}#L{line}`
- Replace local file references with permalinks in the document
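As a minimal sketch of the permalink construction (assuming `owner`, `repo`, and `commit` were already captured from the commands above; `permalink` is a hypothetical helper):
```python
def permalink(owner: str, repo: str, commit: str, path: str, line: int) -> str:
    """Build a stable GitHub permalink pinned to a specific commit."""
    return f"https://github.com/{owner}/{repo}/blob/{commit}/{path}#L{line}"

# Illustrative values only:
# permalink("acme", "api", "abc1234", "src/auth/session.py", 42)
# -> "https://github.com/acme/api/blob/abc1234/src/auth/session.py#L42"
```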
9. **Sync and present findings:**
- Present a concise summary of findings to the user
- Include key file references for easy navigation
- Ask if they have follow-up questions or need clarification
10. **Handle follow-up questions:**
- If the user has follow-up questions, append to the same research document
- Update the frontmatter fields `last_updated` and `last_updated_by` to reflect the update
- Add `last_updated_note: "Added follow-up research for [brief description]"` to frontmatter
- Add a new section: `## Follow-up Research [timestamp]`
- Spawn new sub-agents as needed for additional investigation
- Continue updating the document and syncing
## Important notes:
- Always use parallel Task agents to maximize efficiency and minimize context usage
- Always run fresh codebase research - never rely solely on existing research documents
- The thoughts/ directory provides historical context to supplement live findings
- Focus on finding concrete file paths and line numbers for developer reference
- Research documents should be self-contained with all necessary context
- Each sub-agent prompt should be specific and focused on read-only operations
- Consider cross-component connections and architectural patterns
- Include temporal context (when the research was conducted)
- Link to GitHub when possible for permanent references
- Keep the main agent focused on synthesis, not deep file reading
- Encourage sub-agents to find examples and usage patterns, not just definitions
- Explore all of thoughts/ directory, not just research subdirectory
- **File reading**: Always read mentioned files FULLY (no limit/offset) before spawning sub-tasks
- **Critical ordering**: Follow the numbered steps exactly
- ALWAYS read mentioned files first before spawning sub-tasks (step 2)
- ALWAYS wait for all sub-agents to complete before synthesizing (step 5)
- ALWAYS gather metadata before writing the document (step 6 before step 7)
- NEVER write the research document with placeholder values
- **Path handling**: The thoughts/searchable/ directory contains hard links for searching
- Always document paths by removing ONLY "searchable/" - preserve all other subdirectories
- Examples of correct transformations:
- `thoughts/searchable/allison/old_stuff/notes.md` → `thoughts/allison/old_stuff/notes.md`
- `thoughts/searchable/shared/prs/123.md` → `thoughts/shared/prs/123.md`
- `thoughts/searchable/global/shared/templates.md` → `thoughts/global/shared/templates.md`
- NEVER change allison/ to shared/ or vice versa - preserve the exact directory structure
- This ensures paths are correct for editing and navigation
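As a minimal sketch of this transformation (`display_path` is a hypothetical helper name):
```python
def display_path(found_path: str) -> str:
    """Strip only the 'searchable/' segment, preserving every other subdirectory."""
    return found_path.replace("thoughts/searchable/", "thoughts/", 1)

assert display_path("thoughts/searchable/allison/old_stuff/notes.md") == "thoughts/allison/old_stuff/notes.md"
assert display_path("thoughts/searchable/shared/prs/123.md") == "thoughts/shared/prs/123.md"
```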
- **Frontmatter consistency**:
- Always include frontmatter at the beginning of research documents
- Keep frontmatter fields consistent across all research documents
- Update frontmatter when adding follow-up research
- Use snake_case for multi-word field names (e.g., `last_updated`, `git_commit`)
- Tags should be relevant to the research topic and components studied
- Format the research document for 80-character line width for readability