Initial commit

This commit is contained in:
Zhongwei Li
2025-11-30 08:39:44 +08:00
commit 287a80287f
18 changed files with 2362 additions and 0 deletions

commands/create-plan-doc.md Normal file

@@ -0,0 +1,511 @@
# Implementation Plan
You are tasked with creating detailed implementation plans through an interactive, iterative process. You should be skeptical, thorough, and work collaboratively with the user to produce high-quality technical specifications.
When this command is invoked:
1. **Check if arguments were provided**:
- If the user provided file paths, ticket references, or task descriptions, skip the default message below
- Look for:
- File paths (e.g., `working-notes/...`, `notes/...`)
- @-mentions of files (e.g., `@working-notes/...`)
- Ticket references (e.g., `ABC-1234`, `PROJ-567`)
- Task descriptions or requirements text
- Immediately read any provided files FULLY (without using limit/offset)
- Begin the research process
2. **If no arguments were provided**, first check for existing documents:
a. **Find recent documents**:
- Use Bash to find the 2 most recently edited documents: `ls -t working-notes/*.md 2>/dev/null | head -2`
- Extract just the filenames (without path) for display
- Calculate the relative path from current working directory for descriptions
b. **Present options to the user**:
- Use the AskUserQuestion tool to present documents as options
- Question: "What would you like to create a plan for?"
- Header: "Source"
- Options: Show up to 2 most recent documents from working-notes/
- Label: Filename only (e.g., `2025-01-15_research_auth-flow.md` or `2025-01-14_plan_feature-x.md`)
- Description: Relative path from current working directory (e.g., `working-notes/2025-01-15_research_auth-flow.md`)
- If 2+ docs found: Show 2 most recent
- If 1 doc found: Show that single document
- If 0 docs found: Skip this step and go directly to step c
- The automatic "Other" option will handle users who want to describe a new task
c. **Handle the user's selection**:
**If a document was selected**:
- Read the document FULLY (without limit/offset) into context
- Respond with:
```
I'll create an implementation plan based on [filename].
Let me read through the document to understand what we're building...
```
- After reading, extract key information:
- The topic/feature being discussed
- Key findings, discoveries, or decisions
- Any constraints or requirements identified
- Open questions or decisions needed
- Skip to Step 1 (Context Gathering) using the document as primary context
- When spawning research tasks in Step 1, reference the document's findings
**If "Other" was selected (or no docs found)**:
- Respond with:
```
I'll help you create a detailed implementation plan. Let me start by understanding what we're building.
Please provide:
1. The task/ticket description (or reference to a ticket file)
2. Any relevant context, constraints, or specific requirements
3. Links to related research or previous implementations
I'll analyze this information and work with you to create a comprehensive plan.
```
- Wait for the user's input before proceeding
If a Jira ticket number is given, use the `workflow-tools:jira-searcher` agent to get information about the ticket.
## Process Steps
### Step 1: Context Gathering & Initial Analysis
1. **Read all mentioned files immediately and FULLY**:
- Ticket files
- Research documents
- Related implementation plans
- Any JSON/data files mentioned
- **IMPORTANT**: Use the Read tool WITHOUT limit/offset parameters to read entire files
- **CRITICAL**: DO NOT spawn sub-tasks before reading these files yourself in the main context
- **NEVER** read files partially - if a file is mentioned, read it completely
2. **Spawn initial research tasks to gather context**:
Before asking the user any questions, use specialized agents to research in parallel:
- Use the `workflow-tools:codebase-locator` agent to find all files related to the ticket/task
- Use the `workflow-tools:codebase-analyzer` agent to understand how the current implementation works
- If relevant, use the `workflow-tools:notes-locator` agent to find any existing notes documents about this feature
These agents will:
- Find relevant source files, configs, and tests
- Trace data flow and key functions
- Return detailed explanations with file:line references
3. **Read all files identified by research tasks**:
- After research tasks complete, read ALL files they identified as relevant
- Read them FULLY into the main context
- This ensures you have complete understanding before proceeding
4. **Analyze and verify understanding**:
- Cross-reference the ticket requirements with actual code
- Identify any discrepancies or misunderstandings
- Note assumptions that need verification
- Determine true scope based on codebase reality
5. **Present informed understanding and focused questions**:
```
Based on the ticket and my research of the codebase, I understand we need to [accurate summary].
I've found that:
- [Current implementation detail with file:line reference]
- [Relevant pattern or constraint discovered]
- [Potential complexity or edge case identified]
Questions that my research couldn't answer:
- [Specific technical question that requires human judgment]
- [Business logic clarification]
- [Design preference that affects implementation]
```
Only ask questions that you cannot answer through code investigation. Use the AskUserQuestion tool to ask the user questions.
### Step 2: Research & Discovery
After getting initial clarifications:
1. **If the user corrects any misunderstanding**:
- DO NOT just accept the correction
- Spawn new research tasks to verify the correct information
- Read the specific files/directories they mention
- Only proceed once you've verified the facts yourself
- Keep the research file (if there is one) up-to-date with any new findings and decisions
2. **Create a research todo list** using TodoWrite to track exploration tasks
3. **Spawn parallel sub-tasks for comprehensive research**:
- Create multiple Task agents to research different aspects concurrently
- Use the right agent for each type of research:
**For deeper investigation:**
- Use the `workflow-tools:codebase-locator` agent to find more specific files (e.g., "find all files that handle [specific component]")
- Use the `workflow-tools:codebase-analyzer` agent to understand implementation details (e.g., "analyze how [system] works")
- Use the `workflow-tools:codebase-pattern-finder` agent to find similar features we can model after
**For historical context:**
- Use the `workflow-tools:notes-locator` agent to find any research, plans, or decisions about this area
- Use the `workflow-tools:notes-analyzer` agent to extract key insights from the most relevant documents
Each agent knows how to:
- Find the right files and code patterns
- Identify conventions and patterns to follow
- Look for integration points and dependencies
- Return specific file:line references
- Find tests and examples
4. **Wait for ALL sub-tasks to complete** before proceeding
5. **Present findings and design options**:
```
Based on my research, here's what I found:
**Current State:**
- [Key discovery about existing code]
- [Pattern or convention to follow]
**Design Options:**
1. [Option A] - [pros/cons]
2. [Option B] - [pros/cons]
**Open Questions:**
- [Technical uncertainty]
- [Design decision needed]
Which approach aligns best with your vision?
```
### Step 3: Plan Structure Development
Once aligned on approach:
1. **Create initial plan outline**:
```
Here's my proposed plan structure:
## Overview
[1-2 sentence summary]
## Implementation Phases:
1. [Phase name] - [what it accomplishes]
2. [Phase name] - [what it accomplishes]
3. [Phase name] - [what it accomplishes]
Does this phasing make sense? Should I adjust the order or granularity?
```
2. **Share this plan outline with the user and get approval** before writing details
### Step 4: Detailed Plan Writing
After structure approval:
1. Use the `workflow-tools:frontmatter-generator` agent to collect metadata. Wait for the agent to return metadata before proceeding.
2. **Write the plan** to `working-notes/{YYYY-MM-DD}_plan_[descriptive-name].md`. Use `date '+%Y-%m-%d'` for the timestamp in the filename
3. **Use this template structure**:
````markdown
---
date: [Current date and time with timezone in ISO format]
git_commit: [Current commit hash]
branch: [Current branch name]
repository: [Repository name]
topic: "[Feature/Task Name]"
tags: [plans, relevant-component-names]
status: complete
last_updated: [Current date in YYYY-MM-DD format]
---
# [Feature/Task Name] Implementation Plan
## Overview
[Brief description of what we're implementing and why]
## Current State Analysis
[What exists now, what's missing, key constraints discovered]
## Desired End State
[A specification of the desired end state after this plan is complete, and how to verify it]
### Key Discoveries:
- [Important finding with file:line reference]
- [Pattern to follow]
- [Constraint to work within]
## What We're NOT Doing
[Explicitly list out-of-scope items to prevent scope creep]
## Implementation Approach
[High-level strategy and reasoning]
## Phase 1: [Descriptive Name]
### Overview
[What this phase accomplishes]
### Changes Required:
#### 1. [Component/File Group]
**File**: `path/to/file.ext`
**Changes**: [Summary of changes]
```[language]
// Specific code to add/modify
```
### Success Criteria:
#### Automated Verification:
- [ ] Migration applies cleanly: `make migrate`
- [ ] Unit tests pass: `make test-component`
- [ ] Type checking passes: `npm run typecheck`
- [ ] Linting passes: `make lint`
- [ ] Integration tests pass: `make test-integration`
#### Manual Verification:
- [ ] Feature works as expected when tested via UI
- [ ] Performance is acceptable under load
- [ ] Edge case handling verified manually
- [ ] No regressions in related features
---
## Phase 2: [Descriptive Name]
[Similar structure with both automated and manual success criteria...]
---
## Testing Strategy
### Unit Tests:
- [What to test]
- [Key edge cases]
### Integration Tests:
- [End-to-end scenarios]
### Manual Testing Steps:
1. [Specific step to verify feature]
2. [Another verification step]
3. [Edge case to test manually]
## Performance Considerations
[Any performance implications or optimizations needed]
## Migration Notes
[If applicable, how to handle existing data/systems]
## References
- Original ticket: Jira ticket [ABC-####]
- Related research: `working-notes/[relevant].md`
- Similar implementation: `[file:line]`
````
### Step 5: Automatic External Review (Internal Quality Check)
1. **Check for external review environment variable:**
- Use Bash to check if `CLAUDE_EXTERNAL_REVIEW_COMMAND` environment variable is set: `bash -c 'echo ${CLAUDE_EXTERNAL_REVIEW_COMMAND:-NOT_SET}'`
- If the output is "NOT_SET", skip to Step 6 (User Review)
- If the output contains a command, proceed with internal review
2. **If environment variable IS set - Execute internal quality review:**
- Invoke the external review command with the comprehensive review prompt (from review-doc.md step 4) for the just-created plan document
- **Critically analyze the feedback with a VERY skeptical lens**:
- Dismiss theoretical concerns that don't apply to this specific plan
- Ignore feedback that adds unnecessary complexity
- Only identify feedback that reveals genuine gaps, errors, or missing critical considerations
- **Silently address ONLY critical issues**:
- Fix any technical errors or missing critical implementation details
- Add only truly important missing considerations
- Make minimal, focused updates - do NOT implement every suggestion
- **Do NOT present the review to the user** - this is an internal quality check
### Step 6: User Review
1. **Present the draft plan location**:
```
I've created the initial implementation plan at:
`working-notes/[filename].md`
Please review it and let me know:
- Are the phases properly scoped?
- Are the success criteria specific enough?
- Any technical details that need adjustment?
- Missing edge cases or considerations?
```
2. **Iterate based on feedback** - be ready to:
- Add missing phases
- Adjust technical approach
- Clarify success criteria (both automated and manual)
- Add/remove scope items
3. **Continue refining** until the user is satisfied
## Important Guidelines
1. **Be Skeptical**:
- Question vague requirements
- Identify potential issues early
- Ask "why" and "what about"
- Don't assume - verify with code
2. **Be Interactive**:
- Don't write the full plan in one shot
- Get buy-in at each major step
- Allow course corrections
- Work collaboratively
3. **Be Thorough**:
- Read all context files COMPLETELY before planning
- Research actual code patterns using parallel sub-tasks
- Include specific file paths and line numbers
- Write measurable success criteria with clear automated vs manual distinction
- Automated steps should use `make`/`yarn`/`just` whenever possible
4. **Be Practical**:
- Focus on incremental, testable changes
- Consider migration and rollback
- Think about edge cases
- Include "what we're NOT doing"
5. **Track Progress**:
- Use TodoWrite to track planning tasks
- Update todos as you complete research
- Mark planning tasks complete when done
6. **No Open Questions in Final Plan**:
- If you encounter open questions during planning, STOP
- Research or ask for clarification immediately
- Do NOT write the plan with unresolved questions
- The implementation plan must be complete and actionable
- Every decision must be made before finalizing the plan
## Success Criteria Guidelines
**Always separate success criteria into two categories:**
1. **Automated Verification** (can be run by execution agents):
- Commands that can be run: `make test`, `npm run lint`, etc.
- Specific files that should exist
- Code compilation/type checking
- Automated test suites
2. **Manual Verification** (requires human testing):
- UI/UX functionality
- Performance under real conditions
- Edge cases that are hard to automate
- User acceptance criteria
**Format example:**
```markdown
### Success Criteria:
#### Automated Verification:
- [ ] Database migration runs successfully: `make migrate`
- [ ] All unit tests pass: `go test ./...`
- [ ] No linting errors: `golangci-lint run`
- [ ] API endpoint returns 200: `curl localhost:8080/api/new-endpoint`
#### Manual Verification:
- [ ] New feature appears correctly in the UI
- [ ] Performance is acceptable with 1000+ items
- [ ] Error messages are user-friendly
- [ ] Feature works correctly on mobile devices
```
## Common Patterns
### For Database Changes:
- Start with schema/migration
- Add store methods
- Update business logic
- Expose via API
- Update clients
### For New Features:
- Research existing patterns first
- Start with data model
- Build backend logic
- Add API endpoints
- Implement UI last
### For Refactoring:
- Document current behavior
- Plan incremental changes
- Maintain backwards compatibility
- Include migration strategy
## Sub-task Spawning Best Practices
When spawning research sub-tasks:
1. **Spawn multiple tasks in parallel** for efficiency
2. **Each task should be focused** on a specific area
3. **Provide detailed instructions** including:
- Exactly what to search for
- Which directories to focus on
- What information to extract
- Expected output format
4. **Be EXTREMELY specific about directories**:
- Never use generic terms like "UI" when you mean "WUI"
- Include the full path context in your prompts
5. **Specify read-only tools** to use
6. **Request specific file:line references** in responses
7. **Wait for all tasks to complete** before synthesizing
8. **Verify sub-task results**:
- If a sub-task returns unexpected results, spawn follow-up tasks
- Cross-check findings against the actual codebase
- Don't accept results that seem incorrect
Example of spawning multiple tasks:
```python
# Spawn these tasks concurrently:
tasks = [
Task("Research database schema", db_research_prompt),
Task("Find API patterns", api_research_prompt),
Task("Investigate UI components", ui_research_prompt),
Task("Check test patterns", test_research_prompt)
]
```
## Example Interaction Flow
```
User: /create-plan
Assistant: I'll help you create a detailed implementation plan...
User: We need to add parent-child tracking for Claude sub-tasks. See Jira ABC-1234
Assistant: Let me read that Jira work item completely using the Jira subagent first...
Based on the work item I understand we need to track parent-child relationships for Claude sub-task events in the old daemon. Before I start planning, I have some questions...
[Interactive process continues...]
```


@@ -0,0 +1,200 @@
# Research Codebase
You are tasked with conducting comprehensive research across the codebase to answer user questions by spawning parallel sub-agents and synthesizing their findings.
## Initial Setup:
When this command is invoked, if you already think you know what the user wants to research, confirm that with the user. If you do not know, respond with:
```
I'm ready to research the codebase. Please provide your research question or area of interest, and I'll analyze it thoroughly by exploring relevant components and connections.
```
Then wait for the user's research query.
## Steps to follow after receiving the research query:
1. **Read any directly mentioned files first:**
- If the user mentions specific files (tickets, docs, JSON), read them FULLY first
- **IMPORTANT**: Use the Read tool WITHOUT limit/offset parameters to read entire files
- **CRITICAL**: Read these files yourself in the main context before spawning any sub-tasks
- This ensures you have full context before decomposing the research
2. **Analyze and decompose the research question:**
- Break down the user's query into composable research areas
- Take time to ultrathink about the underlying patterns, connections, and architectural implications the user might be seeking
- Identify specific components, patterns, or concepts to investigate
- Create a research plan using TodoWrite to track all subtasks
- Consider which directories, files, or architectural patterns are relevant
3. **Spawn parallel sub-agent tasks for comprehensive research:**
- Create multiple Task agents to research different aspects concurrently
- We now have specialized agents that know how to do specific research tasks:
**For codebase research:**
- Use the `workflow-tools:codebase-locator` agent to find WHERE files and components live
- Use the `workflow-tools:codebase-analyzer` agent to understand HOW specific code works
- Use the `workflow-tools:codebase-pattern-finder` agent if you need examples of similar implementations
**For `working-notes/` directory:**
- Use the `workflow-tools:notes-locator` agent to discover what documents exist about the topic
- Use the `workflow-tools:notes-analyzer` agent to extract key insights from specific documents (only the most relevant ones)
**For web research:**
- Use the `workflow-tools:web-search-researcher` agent for external documentation and resources
- Instruct the agent to return LINKS with their findings, and please INCLUDE those links in your final report
**For historical context:**
- Use the `workflow-tools:jira-searcher` agent to search for relevant Jira issues that may provide business context
- Use the `workflow-tools:git-history` agent to search git history, PRs, and PR comments for implementation context and technical decisions
The key is to use these agents intelligently:
- Start with locator agents to find what exists
- Then use analyzer agents on the most promising findings
- Run multiple agents in parallel when they're searching for different things
- Each agent knows its job - just tell it what you're looking for
- Do NOT write detailed prompts about HOW to search - the agents already know
4. **Wait for all sub-agents to complete and synthesize findings:**
- IMPORTANT: Wait for ALL sub-agent tasks to complete before proceeding
- Compile all sub-agent results (codebase, `working-notes/` findings, and web research)
- Prioritize live codebase findings as primary source of truth
- Use `working-notes/` findings as supplementary historical context
- Connect findings across different components
- Include specific file paths and line numbers for reference
- Highlight patterns, connections, and architectural decisions
- Answer the user's specific questions with concrete evidence
5. **Gather metadata for the research document:**
- Filename: `working-notes/{YYYY-MM-DD}_research_[descriptive-name].md`. Use `date '+%Y-%m-%d'` for the timestamp in the filename.
- Use the `workflow-tools:frontmatter-generator` agent to collect metadata.
- Wait for the agent to return metadata before proceeding.
6. **Generate research document:**
- Use the metadata gathered in the previous step
- Structure the document with YAML frontmatter followed by content:
```markdown
---
date: [Current date and time with timezone in ISO format]
git_commit: [Current commit hash]
branch: [Current branch name]
repository: [Repository name]
topic: "[User's Question/Topic]"
tags: [research, codebase, relevant-component-names]
last_updated: [Current date in YYYY-MM-DD format]
---
# Research: [User's Question/Topic]
**Date**: [Current date and time with timezone from step 5]
**Git Commit**: [Current commit hash from step 5]
**Branch**: [Current branch name from step 5]
**Repository**: [Repository name]
## Research Question
[Original user query]
## Summary
[High-level findings answering the user's question]
## Detailed Findings
### [Component/Area 1]
- Finding with reference ([file.ext:line](link))
- Connection to other components
- Implementation details
### [Component/Area 2]
...
## Code References
- `path/to/file.py:123` - Description of what's there
- `another/file.ts:45-67` - Description of the code block
## Architecture Insights
[Patterns, conventions, and design decisions discovered]
## Historical Context
[Relevant insights from `working-notes/` directory and any relevant Jira issues. Include references for all insights.]
## Related Research
[Links to past research documents in `working-notes/`]
## Open Questions
[Any areas that need further investigation]
```
7. **Automatic External Review (Internal Quality Check):**
- Check for external review environment variable: `bash -c 'echo ${CLAUDE_EXTERNAL_REVIEW_COMMAND:-NOT_SET}'`
- **If environment variable IS set:**
- Invoke the external review command with the comprehensive review prompt (from review-doc.md step 4) for the just-created research document
- **Critically analyze the feedback with a VERY skeptical lens**:
- Dismiss theoretical concerns that don't apply to this specific research
- Ignore feedback that adds unnecessary complexity
- Only identify feedback that reveals genuine gaps or errors
- **Silently address ONLY critical issues**:
- Fix any factual errors or missing critical information in the document
- Add only truly important missing considerations
- Make minimal, focused updates - do NOT implement every suggestion
- **Do NOT present the review to the user** - this is an internal quality check
- **If environment variable is NOT set:**
- Skip this step and proceed to adding GitHub permalinks
8. **Add GitHub permalinks (if applicable):**
- Check if on main branch or if commit is pushed: `git branch --show-current` and `git status`
- If on main/master or pushed, generate GitHub permalinks:
- Get repo info: `gh repo view --json owner,name`
- Create permalinks: `https://github.com/{owner}/{repo}/blob/{commit}/{file}#L{line}`
- Replace local file references with permalinks in the document
9. **Present findings:**
- Present a concise summary of findings to the user
- Include key file references for easy navigation
- Ask if they have follow-up questions or need clarification
10. **Handle follow-up questions:**
- If the user has follow-up questions, append to the same research document
- Update the frontmatter fields `last_updated` and `last_updated_by` to reflect the update
- Add `last_updated_note: "Added follow-up research for [brief description]"` to frontmatter
- Add a new section: `## Follow-up Research [timestamp]`
- Spawn new sub-agents as needed for additional investigation
- Continue updating the document
## Important notes:
- Always use parallel Task agents to maximize efficiency and minimize context usage
- Always run fresh codebase research - never rely solely on existing research documents
- The `working-notes/` directory provides historical context to supplement live findings
- Focus on finding concrete file paths and line numbers for developer reference
- The research document should NOT include any references to how long things will take (e.g., Phase 1 will take 2 days)
- Research documents should be self-contained with all necessary context
- Each sub-agent prompt should be specific and focused on read-only operations
- Consider cross-component connections and architectural patterns
- Include temporal context (when the research was conducted)
- Link to GitHub when possible for permanent references
- Keep the main agent focused on synthesis, not deep file reading. Use subagents for any deep file reading.
- Encourage sub-agents to find examples and usage patterns, not just definitions
- Explore all of `working-notes/` directory, not just research subdirectory
- **File reading**: Always read mentioned files FULLY (no limit/offset) before spawning sub-tasks
- **Critical ordering**: Follow the numbered steps exactly
- ALWAYS read mentioned files first before spawning sub-tasks (step 1)
- ALWAYS wait for all sub-agents to complete before synthesizing (step 4)
- ALWAYS gather metadata before writing the document (step 5 before step 6)
- NEVER write the research document with placeholder values
- This ensures paths are correct for editing and navigation
- **Frontmatter consistency**:
- Always include frontmatter at the beginning of research documents
- Keep frontmatter fields consistent across all research documents
- Update frontmatter when adding follow-up research
- Use snake_case for multi-word field names (e.g., `last_updated`, `git_commit`)
- Tags should be relevant to the research topic and components studied


@@ -0,0 +1,290 @@
# Summarize Work
You are tasked with creating comprehensive implementation summaries that document completed work. These summaries capture what was changed and why, serving as permanent documentation for future developers and AI coding agent instances.
## Process Steps
### Step 1: Check for Uncommitted Code
**Check for uncommitted code changes:**
- Run `git status` to check for uncommitted changes
- Filter out documentation files (files in `working-notes/`, `notes/`, or ending in `.md`)
- If there are uncommitted CODE changes:
```
You have uncommitted code changes. Consider committing your work before generating implementation documentation.
Uncommitted changes:
[list the uncommitted code files]
```
- STOP and wait for the user to prompt you to proceed
### Step 2: Present Initial Prompt
Respond with:
```
I'll help you document the implementation work. This will create a comprehensive summary explaining what was changed and why.
Please provide any research or plan documents that were used, a brief description of the work, and/or the relevant Jira ticket.
With this context plus the git diff, I'll generate an implementation summary.
```
Then wait for the user's input.
### Step 3: Check for Jira Ticket Number
1. **Check if Jira ticket is mentioned:**
- Review the user's response and any referenced documents
- Look for Jira ticket numbers (e.g., ABC-1234, PROJ-567)
- If a Jira ticket number is found, note it and move to the next step
- If NO Jira reference is found, ask: "Is there a Jira ticket associated with this work? If so, please provide the ticket number."
- Wait for the user's response
- Note: Do not fetch the actual Jira ticket details now. We'll do that later in Step 5
### Step 4: Determine Default Branch and Select Git Diff
1. **Determine the default branch:**
- Run: `git symbolic-ref refs/remotes/origin/HEAD | sed 's@^refs/remotes/origin/@@'`
- This will return the default branch name (e.g., "main", "master", "carbon_ubuntu")
- Use this as the base branch for all subsequent git commands
- Store this in a variable: `DEFAULT_BRANCH`
2. **Prompt user to select the git diff scope:**
Use the AskUserQuestion tool to present these options:
```
Which changes should be documented?
```
Options:
- **Changes from `[DEFAULT_BRANCH]` branch** - All changes on this branch since it diverged from `[DEFAULT_BRANCH]`
- **Most recent commit** - Only the changes in the latest commit
- **Uncommitted changes** - Current uncommitted changes (not recommended)
- **[OTHER]** - User provides custom changes that should be considered
3. **Execute the appropriate git diff command:**
- Diff from default branch: `git diff [DEFAULT_BRANCH]...HEAD`
- Most recent commit: `git diff HEAD~1 HEAD`
- Uncommitted: `git diff HEAD`
- Custom: Determine an appropriate git diff command based on the user's request
### Step 5: Gather Context
1. **Fetch Jira ticket details (if applicable):**
- If a Jira ticket number was identified in Step 3:
- Use the `workflow-tools:jira-searcher` agent to fetch ticket details: "Get details for Jira ticket [TICKET-NUMBER]"
- Extract key information: summary, description, acceptance criteria, comments
- Use this as additional context for understanding what was implemented and why
2. **Read provided documentation fully:**
- If research documents provided: Read them FULLY (no limit/offset parameters)
- If plan documents provided: Read them FULLY (no limit/offset parameters)
- Extract key context about what was being implemented and why
### Step 6: Gather Git Metadata
1. **Collect comprehensive git metadata:**
Run these commands to gather commit information:
- Current branch: `git branch --show-current`
- Commit history for the range: `git log --oneline --no-decorate <range>`
- Detailed commit info: `git log --format="%H%n%an%n%ae%n%aI%n%s%n%b" <range>`
- Check if PR exists: `gh pr view --json number,url` (may not exist yet)
- Get base commit: `git merge-base [DEFAULT_BRANCH] HEAD`
- Repository info: `gh repo view --json owner,name`
- Jira ticket info (if provided earlier)
2. **Determine commit range context:**
- Identify the base commit (where branch diverged)
- Identify the head commit (current or latest)
- Note the branch name
- Capture all commit hashes in the range (they may change on force-push, but provide context)
### Step 7: Analyze Changes
1. **Analyze the git diff:**
- Understand what files changed
- Identify the key changes and their purposes
- Connect changes to the context from research/plan docs (if provided)
- Focus on understanding WHY these changes accomplish the goals
### Step 8: Find GitHub Permalinks (if applicable)
1. **Obtain GitHub permalinks:**
- Check if commits are pushed: `git branch -r --contains HEAD`
- If pushed, or if on main branch:
- Get repo info: `gh repo view --json owner,name`
- Get GitHub permalinks for all commits (i.e., `https://github.com/{owner}/{repo}/blob/{commit}`)
### Step 9: Generate Implementation Summary
1. **Gather metadata for the document:**
- Use the `workflow-tools:frontmatter-generator` agent to collect metadata. Wait for the agent to return metadata before proceeding.
- Use `date '+%Y-%m-%d'` for the filename timestamp
- Create descriptive filename: `notes/YYYY-MM-DD_descriptive-name.md`.
2. **Write the implementation summary using this strict template:**
````markdown
---
date: [Current date and time with timezone in ISO format]
git_commit: [Current commit hash]
branch: [Current branch name]
repository: [Repository name]
jira_ticket: "[TICKET-NUMBER]" # Optional - include if applicable
topic: "[Feature/Task Name]"
tags: [implementation, relevant-component-names]
last_updated: [Current date in YYYY-MM-DD format]
---
# [Feature/Task Name]
## Summary
[1-3 sentence high-level summary of what was accomplished]
## Overview
[High-level description of the changes, written for developers to quickly understand what was done and why. This should be readable in a few minutes. Minimal code citations and quotations - only include them if central to understanding the change. Focus on the business/technical goals and how they were achieved.]
## Technical Details
[Comprehensive explanation of the changes with focus on WHY. This is NOT just a recitation of what changed (that's available in the git commits). Instead, explain:
- What the purpose was behind the different changes
- Why these specific changes were chosen to accomplish those goals
- Key design decisions and their rationale
- How different pieces fit together
For the most important changes, include code quotations to illustrate the implementation. For moderately important changes, include code references (file:line). Small changes like name changes should not be referenced at all.]
### [Component/Area 1]
[Explain what was changed in this component and why these changes accomplish the goals. Include code quotations for the most important changes. There should almost always be at least one code change quotation for each component/area:]
```[language]
// Most important code change
function criticalFunction() {
// ...
}
```
[For moderately important changes, use code references like `path/to/file.ext:123`]
### [Component/Area 2]
[Similar structure...]
[Add additional sections as necessary for additional Components/Areas]
## Git References
**Branch**: `[branch-name]`
**Commit Range**: `[base-commit-hash]...[head-commit-hash]`
**Commits Documented**:
**[commit-hash]** ([date])
[Full commit message including body]
[If on main branch or commits are pushed, include GitHub permalink to files]
**[commit-hash]** ([date])
[Full commit message including body]
[Continue for all commits in the range...]
**Pull Request**: [#123](https://github.com/owner/repo/pull/123) _(if available)_
````
### Step 10: Present Summary to User
1. **Present the implementation summary:**
```
I've created the implementation summary at: `notes/YYYY-MM-DD_descriptive-name.md`
```
## Important Guidelines
1. **Document Standalone Nature**:
- The implementation summary is a standalone document
- Do NOT reference research or plan documents in the summary itself
- All necessary context should be incorporated into the summary
- Research/plan docs are only used as input to understand what to write
2. **Focus on WHY, Not WHAT**:
- The git diff shows WHAT changed
- The summary explains WHY those changes accomplish the goals
- Focus on intent, design decisions, and rationale
- Explain how the changes achieve the desired outcome
3. **Three-Level Structure is Mandatory**:
- **Summary**: Always exactly 1-3 sentences
- **Overview**: Always high-level, readable by any developer quickly
- **Technical Details**: Always comprehensive with WHY focus
4. **Git Metadata Must Be Complete**:
- Include all commit hashes in the range
- Include full commit messages (subject and body)
- Include dates and times
- Include branch name and commit range
- This metadata helps locate commits even after force-pushes
5. **Uncommitted Code Warning**:
- Always check for uncommitted code FIRST
- Only check for uncommitted CODE files, not documentation files
- Stop immediately if uncommitted code exists
- Advise committing before proceeding
6. **Read Documentation Fully**:
- Never use limit/offset when reading research or plan docs
- Read the entire document to understand full context
- Extract relevant information to inform the summary
7. **Jira Context**:
- Always check if a Jira ticket is mentioned or exists
- Use the `workflow-tools:jira-searcher` agent to fetch ticket details when available
- Include Jira ticket reference in the document header
- Use Jira information as context for understanding requirements and goals
8. **Dynamic Default Branch**:
- Always determine the default branch dynamically
- Never assume it's "main" - could be "master", "carbon_ubuntu", etc.
- Use the determined default branch for all git diff and merge-base commands
9. **Use Objective Language**:
- Use objective technical language only.
- Avoid subjective quality judgments like 'clever', 'elegant', 'nice', 'beautiful', 'clean', 'simple', 'pragmatic', or similar terms that evaluate.
- Focus on facts and mechanisms, not value judgments.
## Success Criteria
The implementation summary is complete when:
- [ ] Jira ticket checked for and fetched (if applicable)
- [ ] Default branch determined dynamically
- [ ] All relevant research/plan documents have been read fully
- [ ] Git diff has been analyzed thoroughly
- [ ] All git metadata collected (commits, messages, branch, range, PR if available, Jira ticket)
- [ ] Document follows strict three-level template
- [ ] Summary section is 1-3 sentences
- [ ] Overview section is high-level and readable
- [ ] Technical Details explain WHY, not just WHAT
- [ ] Git References section includes all commits with full messages
- [ ] GitHub permalinks included (if applicable)
- [ ] Frontmatter generated via frontmatter-generator agent
- [ ] File saved to `notes/YYYY-MM-DD_descriptive-name.md`
- [ ] Document is standalone (no references to research/plan docs)

commands/implement-plan.md Normal file

@@ -0,0 +1,103 @@
# Implement Plan
You are tasked with implementing an approved technical plan from `working-notes/`. These plans contain phases with specific changes and success criteria.
## Getting Started
When given a plan path:
- Read the plan completely and check for any existing checkmarks (- [x])
- Read the original ticket and all files mentioned in the plan
- **Read files fully** - never use limit/offset parameters, you need complete context
- Think deeply about how the pieces fit together
- Create a todo list to track your progress
- Start implementing if you understand what needs to be done
If no plan path provided:
1. Find the 2 most recently edited plan documents:
```bash
ls -t working-notes/*.md 2>/dev/null | head -2
```
2. Extract just the filenames (without path) from the results
3. Use the AskUserQuestion tool to present them as options:
- If 2+ plans found: Show the 2 most recent as options
- If 1 plan found: Show that single plan as an option
- If 0 plans found: Fall back to simple text prompt "What plan file do you want to implement?"
4. The question should be: "Which plan do you want to implement?"
- Header: "Plan"
- Options: The filenames only (e.g., "implement-auth.md")
- Each option description should be the path from the current working directory (e.g., "working-notes/implement-auth.md")
## Implementation Philosophy
Plans are carefully designed, but reality can be messy. Your job is to:
- Follow the plan's intent while adapting to what you find
- Implement each phase fully before moving to the next
- Verify your work makes sense in the broader codebase context
- Update checkboxes in the plan as you complete sections
When things don't match the plan exactly, think about why and communicate clearly. The plan is your guide, but your judgment matters too.
If you encounter a mismatch:
- STOP and think deeply about why the plan can't be followed
- Present the issue clearly:
```
Issue in Phase [N]:
Expected: [what the plan says]
Found: [actual situation]
Why this matters: [explanation]
How should I proceed?
```
### Use Test-Driven Development
Write tests before doing implementation. Keep the tests focused on behavior not implementation. Describe the tests you intend to write to the user.
When writing tests follow this process:
1. Determine the scenarios you are going to test. These should roughly correspond to the individual tests you plan to write.
2. Get the user's approval for the scenarios you are testing so that we can course-correct early in the process.
3. Once you have obtained the user's approval, proceed to implement the tests.
## Verification Approach
After implementing a phase:
- Run the success criteria checks (often running all the tests will cover everything)
- Fix any issues before proceeding
- Update your progress in both the plan and your todos
- Check off completed items in the plan file itself using Edit
Don't let verification interrupt your flow - batch it at natural stopping points.
## If You Get Stuck
When something isn't working as expected:
- First, make sure you've read and understood all the relevant code
- Consider if the codebase has evolved since the plan was written
- Present the mismatch clearly and ask for guidance
Use sub-tasks sparingly - mainly for targeted debugging or exploring unfamiliar territory.
## Resuming Work
If the plan has existing checkmarks:
- Trust that completed work is done
- Pick up from the first unchecked item
- Verify previous work only if something seems off
Remember: You're implementing a solution, not just checking boxes. Keep the end goal in mind and maintain forward momentum.
## Keep the Plan Document Updated
As you make progress, update the plan document with what you have done. The plan document is a living document that should always be kept up-to-date.


@@ -0,0 +1,90 @@
Every bug tells a story. Your job is to uncover the true root cause of the bug and identify why the root cause happened. You are not interested in band-aids or workarounds that only address the symptoms. We will use the scientific method to systematically isolate and identify the root cause. Ultrathink
**CRITICAL:** Keep a record of your hypotheses and test results in `working-notes/YYYY-MM-DD_bug-investigation_[descriptive name].md`. This should include each hypothesis, what specifically you did to test it, what the result of the test was, and any proposed fixes for the bug.
# Phase 1: Root Cause Investigation (BEFORE attempting fixes)
- **Gather Information**: Gather all symptoms and evidence about the bug you can.
- **Read Error Messages Carefully**: Don't skip past errors or warnings - they often contain the exact solution
- **Reproduce Consistently**: Ensure you can reliably reproduce the issue before investigating
- **Check Recent Changes**: Are there recent changes that could have caused this? Git diff, recent commits, etc.
**Spawn parallel sub-agent tasks for comprehensive research:**
- Create multiple Task agents to research different aspects concurrently
- We now have specialized agents that know how to do specific research tasks:
**For codebase research:**
- Use the `workflow-tools:codebase-locator` agent to find WHERE files and components live
- Use the `workflow-tools:codebase-analyzer` agent to understand HOW specific code works
- Use the `workflow-tools:codebase-pattern-finder` agent to find examples of similar implementations. Look for similar working code in the codebase.
**For `working-notes/` directory:**
- Use the `workflow-tools:notes-locator` agent to discover what documents exist about the topic
- Use the `workflow-tools:notes-analyzer` agent to extract key insights from specific documents (only the most relevant ones)
**For web research:**
- Use the `workflow-tools:web-search-researcher` agent for external documentation and resources
- Instruct the agent to return LINKS with its findings, INCLUDE those links in your final report, and have it read any reference implementation completely
**For historical context:**
- Use the `workflow-tools:jira-searcher` agent to search for relevant Jira issues that may provide business context
- Use the `workflow-tools:git-history` agent to search git history, PRs, and PR comments for implementation context and technical decisions
The key is to use these agents intelligently:
- Start with locator agents to find what exists
- Then use analyzer agents on the most promising findings
- Run multiple agents in parallel when they're searching for different things
- Each agent knows its job - just tell it what you're looking for
- Do NOT write detailed prompts about HOW to search - the agents already know
# Phase 2: Pattern Analysis
- **Find Working Examples**: Locate similar working code in the same codebase
- **Compare Against References**: If implementing a pattern, read the reference implementation completely
- **Identify Differences**: What's different between working and broken code?
- **Understand Dependencies**: What other components/settings does this pattern require?
# Phase 3: Record Findings and Hypotheses
Record your hypotheses and test results in `working-notes/YYYY-MM-DD_bug-investigation_[descriptive name].md`.
When you don't know, admit you don't understand something. Do not pretend to know. It is much better to admit uncertainty and I will trust you more if you do.
# Phase 4: Hypothesis and Testing
One by one, select the most important unconfirmed hypothesis and test it using these steps.
1. **Form Single Hypothesis**: What do you think is the root cause? State it clearly
2. **Test Minimally**: Make the smallest possible change to test your hypothesis
- **Generate Data**: If appropriate, add log statements or create helper scripts to give more insight. Use CLI tools, screenshots, and other tools if they would be helpful.
3. **Record Results**: Update the file we are tracking this work in with:
- the hypothesis we tested,
- how we tested the hypothesis,
- the results of that test,
- any conclusions and new hypotheses that follow from those results
4. **Repeat**: If there are remaining unconfirmed hypotheses, repeat this testing process with the next most important unconfirmed hypothesis.
## Creative problem-solving techniques
- UI bugs: Create temporary visual elements to understand layout/rendering issues
- State bugs: Log state changes at every mutation point
- Async bugs: Trace the timeline of operations with timestamps
- Integration bugs: Test each component in isolation
# Phase 5: Generate Proposed Fix Implementation
Update the bug investigation document with the proposed fix implementation.
# Important Notes
- ALWAYS update the research file with the test that you ran, and the results that were observed from that test.
- NEVER update the research file to say that a test worked unless the user has confirmed the results of the test.
- If a hypothesis is found to be incorrect, STOP and re-consider the data to determine if we should modify or add to our remaining hypotheses.
- NEVER try to fix the bug without first testing the simplest hypothesis possible. Our goal is not to fix the bug as quickly as possible, but instead to slowly and systematically PROVE what the bug is.

commands/review-doc.md Normal file

@@ -0,0 +1,81 @@
You are tasked with obtaining external review of a document.
Follow these specific steps:
1. **Check if arguments were provided**:
- If the user provided a reference to a specific document to be reviewed, skip the default message below.
2. **If no arguments were provided**, use the AskUserQuestion tool to present these options:
```
What document would you like me to review?
```
For Options, present at most 2 documents: prioritize documents you created in the current session (most recent first), then fall back to the most recent documents in the `working-notes/` directory.
3. **Check for the external review command environment variable**
Look for the environment variable `CLAUDE_EXTERNAL_REVIEW_COMMAND`. If that variable exists, move to the next step. If it does not exist, give the user the following prompt:
```
To use this slash command you must set up the terminal command to use for external review and store it as the environment variable `CLAUDE_EXTERNAL_REVIEW_COMMAND`. This command should include everything other than the prompt that is needed to access another model.
For example, if you want to use opencode to obtain the external review, you could use something like:
"opencode --model github-copilot/gpt-5 run"
```
4. **Obtain external review of the document**
Invoke the provided external review command by appending the following prompt to the command in the following form:
```
${CLAUDE_EXTERNAL_REVIEW_COMMAND} "Review the document at
RELEVANT_DOC_PATH and
provide detailed feedback. Evaluate:
1. Technical accuracy and completeness of the implementation approach
2. Alignment with project standards (check project documentation like CLAUDE.md,
package.json, configuration files, and existing patterns)
3. Missing technical considerations (error handling, rollback procedures, monitoring,
security)
4. Missing behavioral considerations (user experience, edge cases, backward
compatibility)
5. Missing strategic considerations (deployment strategy, maintenance burden,
alternative timing)
6. Conflicts with established patterns in the codebase
7. Risk analysis completeness
8. Testing strategy thoroughness
Be specific about what's missing or incorrect. Cite file paths and line numbers where
relevant. Focus on actionable improvements that would reduce implementation risk."
```
Feel free to tweak the prompt to be more applicable to this document and codebase.
5. **Critically analyze the external review feedback**
Apply a skeptical lens to the feedback received from the external review. Your job is to identify which feedback items are truly critical and actionable. Consider:
- Is this feedback technically sound?
- Does this feedback identify real risks or just theoretical concerns?
- Would implementing this feedback provide meaningful value, or is it unnecessary complexity?
- Does this feedback align with the project's constraints and priorities?
- Is the feedback making assumptions?
Dismiss feedback that doesn't meet a high bar for quality and relevance. It's possible that none of the feedback is valuable - if that's the case, clearly state that and explain why.
6. **Present summary to the user**
Provide a concise summary of the external review feedback with your recommendations. For each significant piece of feedback, include:
- **Summary**: Brief description of the feedback point
- **Recommended action**: One of:
- **Implement**: Critical feedback that should be addressed
- **Consider**: Potentially valuable feedback worth discussing with the user
- **Discard**: Feedback that is not valuable or applicable
- **Reasoning**: Clear explanation for your recommendation
Format your response to be scannable and actionable. Group similar feedback items together where appropriate.