Initial commit

Author: Zhongwei Li
Date: 2025-11-29 18:20:33 +08:00
Commit: 977fbf5872
27 changed files with 5714 additions and 0 deletions

commands/breakdown.md

@@ -0,0 +1,110 @@
---
argument-hint: [SPEC DOCUMENT]
description: Create a task breakdown from a design document
---
Document: $1
You are my project-planning assistant.
Given a high-level feature or milestone description in the above document, produce an **agile task breakdown** following the **Last Responsible Moment** principle. Update the above document with this breakdown.
## Last Responsible Moment Principle
**Defer decisions until you have enough information, but not so late that it blocks progress.**
Tasks should provide:
- **Clear outcomes**: What needs to be achieved
- **Constraints**: Known requirements and limitations
- **Dependencies**: What this task relies on
- **Guidance**: Enough context to understand the problem
Tasks should NOT specify:
- **Implementation details**: Specific functions, classes, or algorithms (unless critical)
- **Step-by-step instructions**: How to write the code
- **Premature optimization**: Performance tuning before validating approach
- **Tool choices**: Specific libraries or patterns (unless required by existing architecture)
**Why**: Implementation details become clearer as you work. Early decisions lock you into approaches that may not fit the reality you discover during development.
## Task Format
For every task you generate, include:
1. **Iteration header** `### 🔄 **Iteration <n>: <Theme>**`
2. **Task header** `#### Task <n>: <Concise Task Name>`
3. **Status** always starts as `Status: **Pending**`
4. **Goal** 1-2 sentences describing the purpose and outcome
5. **Working Result** what is concretely "done" at the end (working code, passing test, validated integration)
6. **Constraints** technical limitations, performance requirements, compatibility needs, existing patterns to follow
7. **Dependencies** what this task assumes exists or depends on
8. **Implementation Guidance** a `<guidance></guidance>` block with:
- Context about the problem domain
- Key considerations and trade-offs
- Risks to watch for
- Questions to resolve during implementation
- Reference to relevant existing code patterns (if applicable)
**NOT**: step-by-step instructions or specific implementation choices
9. **Validation** a checklist (`- [ ]`) of objective pass/fail checks (tests, scripts, CI runs, manual verifications)
10. Separate tasks and iterations with `---`
## Example Task Structure
```markdown
#### Task 3: User Authentication
Status: **Pending**
**Goal**: Enable users to securely authenticate and maintain sessions across requests.
**Working Result**: Users can log in, their session persists, and protected routes verify authentication.
**Constraints**:
- Must integrate with existing Express middleware pattern (see src/middleware/)
- Session data should not exceed 4KB
- Authentication must work with existing PostgreSQL user table
**Dependencies**:
- User registration system (Task 2)
- Database connection pool configured
<guidance>
**Context**: The application uses session-based auth (not JWT) following the pattern in src/middleware/. Refer to existing middleware for consistency.
**Key Considerations**:
- Session storage: Choose between in-memory (simple, dev-only) vs. Redis (production). Decision can be deferred until Task 8 (deployment planning)
- Password hashing: Use established library (bcrypt/argon2), but specific choice depends on performance testing in Task 7
- Session duration: Start with reasonable default (24h), tune based on user feedback
**Risks**:
- Session fixation vulnerabilities
- Timing attacks on password comparison
- CSRF if not using proper token validation
**Questions to Resolve**:
- Should "remember me" functionality be included in this task or deferred?
- What's the session renewal strategy?
**Existing Patterns**: Review src/middleware/requestLogger.js for middleware structure.
</guidance>
**Validation**:
- [ ] Users can log in with valid credentials
- [ ] Invalid credentials are rejected
- [ ] Sessions persist across page reloads
- [ ] Protected routes redirect unauthenticated users
- [ ] Tests cover authentication success and failure cases
- [ ] No timing vulnerabilities in password comparison
```
## Constraints & Conventions
- Each task must be a single atomic unit of work that results in running, testable code
- Favor incremental progress over perfection; every task should leave the repo in a working state
- Validation should prefer automated tests/scripts but may include human review items
- Use **bold** for filenames, routes, commands, entities to improve readability
- Keep the entire answer pure Markdown; do not embed explanatory prose outside of the required structure
- Output token limits may truncate long responses, so write one iteration into the document at a time, then append the next
- Focus on **what** needs to be achieved and **why**, not **how** to implement it
- When you must be specific (e.g., "use existing auth middleware pattern"), provide context about where to find examples
- Encourage learning and discovery during implementation rather than prescribing all decisions upfront
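## Usage
Example invocations (the spec paths are hypothetical):
```bash
/breakdown specs/user-auth.md      # break a design doc into an agile task breakdown
/breakdown docs/milestone-2.md     # same flow for a milestone description
```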

commands/catchup.md

@@ -0,0 +1,38 @@
---
model: haiku
description: "Review all changes between current branch and main"
allowed-tools: Bash(git:*), Read
---
# Catch Up on Branch Changes
Analyzing this branch vs main...
## Branch Info
Current branch: !`git rev-parse --abbrev-ref HEAD`
!`git fetch origin main 2>/dev/null; echo "Commits ahead of main:" && git log main..HEAD --oneline | wc -l && echo "" && echo "Files changed:" && git diff main...HEAD --name-only | wc -l`
## All Changed Files
!`git diff main...HEAD --name-only | sort`
## Change Summary
!`git diff main...HEAD --stat`
## Recent Commits
!`git log main..HEAD --oneline`
---
## Analysis
Read the key files that were changed (prioritize manifest files, commands, skills, and config), understand the changes, and provide a concise summary including:
- What was built or changed (purpose and scope)
- Why it matters (new capabilities, workflow improvements, config changes)
- Key impact areas and potential testing focus
- Any breaking changes or architectural shifts
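The commands above deliberately mix git's two range syntaxes; a minimal sketch of the difference, assuming `main` and the feature branch both exist locally:
```bash
# Two-dot log: commits reachable from HEAD but not from main
git log main..HEAD --oneline
# Three-dot diff: changes since the merge base of main and HEAD,
# ignoring anything that landed on main after the branch point
git diff main...HEAD --stat
```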

commands/commit.md

@@ -0,0 +1,134 @@
---
model: haiku
allowed-tools: Bash(git:*)
argument-hint: "[optional: description or specific files]"
description: "Smart commit workflow - adapts to change size"
---
# Commit Assistant
Quick overview of changes:
!`git status --short && echo "" && echo "=== STATS ===" && git diff HEAD --numstat | awk '{add+=$1; del+=$2; files++} END {print "Files changed: " files " | Lines added: " add " | Lines deleted: " del}'`
## Analysis Strategy
I will analyze changes intelligently based on scope:
**1. Small changesets** (<10 files, <500 lines):
- Show full diffs with `git diff HEAD -- <files>`
- Review all files in detail
- Single atomic commit
**2. Medium changesets** (10-50 files, 500-1500 lines):
- Show summary of all files
- Detailed diffs for source code only (`src/`, `lib/`, etc)
- Suggest atomic groupings for 1-2 commits
**3. Large changesets** (>50 files or >1500 lines):
- Show file list grouped by type
- Ask which specific files to review in detail
- Suggest commit strategy by file grouping
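As a rough sketch, the size tiers above could be computed mechanically from `git diff --numstat` (the cutoffs mirror the tiers and are illustrative, not normative):
```bash
#!/usr/bin/env bash
# Classify the current changeset as small / medium / large.
read -r files lines <<<"$(git diff HEAD --numstat |
  awk '{add+=$1; del+=$2; files++} END {print files+0, add+del+0}')"
if   [ "$files" -lt 10 ] && [ "$lines" -lt 500 ];  then echo "small"
elif [ "$files" -le 50 ] && [ "$lines" -le 1500 ]; then echo "medium"
else echo "large"
fi
```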
### Automatic Exclusions
Files skipped from detailed review:
- **Lockfiles**: `*lock*`, `*lock.json`, `*.lock` (dependency updates)
- **Generated**: `dist/`, `build/`, `.next/`, `out/` (build artifacts)
- **Large changes**: >500 lines modified (likely auto-generated)
- **Binary/compiled**: `*.min.js`, `*.map`, images, `.wasm`
These are mentioned in stats but not reviewed line-by-line.
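One way to honor these exclusions when generating diffs is git's `:(exclude)` pathspec; a sketch (patterns mirror the list above and may need per-repo tuning):
```bash
git diff HEAD --stat -- . \
  ':(exclude)*lock*' \
  ':(exclude)dist' ':(exclude)build' ':(exclude).next' ':(exclude)out' \
  ':(exclude)*.min.js' ':(exclude)*.map'
```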
## Commit Message Format
```
<type>(<scope>): <description>

[Optional body: explain what and why]
```
**Types**: `feat` | `fix` | `docs` | `refactor` | `test` | `chore` | `style` | `perf` | `ci`
**Rules**:
- Imperative mood ("add" not "added")
- ~50 character subject line
- No period at end of subject
- Body lines wrapped at 72 characters
- Blank line between subject and body
- Body explains *what* and *why*, not *how*
## Examples
```
feat(auth): add JWT token refresh mechanism

Refresh tokens now expire after 7 days instead of never expiring.
Reduces security risk for long-lived tokens. Implements automatic
refresh logic when token is within 1 hour of expiration.

fix(api): handle missing Content-Type header gracefully

Previously crashed with 500 if Content-Type missing. Now defaults
to application/json and validates structure separately.
```
## Your Task
Commit message context: "$ARGUMENTS"
### Step 1: Analyze Changes
**If the changeset is large** (per the thresholds above):
- Run: `git diff HEAD --numstat | head -50`
- Group files by type (source vs config vs tests vs docs)
- Report file counts and total size

**Otherwise:**
- Examine full diffs for all files
- Identify logical groupings
- Flag any unexpected changes
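A sketch of one way to produce the type grouping, counting changed files by extension with POSIX tools (the bucketing is deliberately crude):
```bash
git diff HEAD --name-only |
  awk -F/ '{print $NF}' | awk -F. 'NF>1 {print $NF; next} {print "(no ext)"}' |
  sort | uniq -c | sort -rn
```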
### Step 2: Plan Commits
**If single logical change:**
- Create one commit with all files
- Clear message describing the change
**If multiple logical groupings:**
- Suggest 2-3 atomic commits
- Each commit should pass tests independently
- Group related changes together (e.g., feature + tests, not feature + unrelated refactor)
**Staging approach:**
- For single commit: `git add .` (if clean working directory)
- For selective: `git add <files>` for each atomic grouping
- For partial files: `git add -p` for manual hunk selection
### Step 3: Verify
Before committing:
- `git diff --staged` - review what will be committed
- `git status` - confirm all intended files are staged
- Consider: does each commit pass tests? can it be reverted cleanly?
### Step 4: Commit
Craft clear commit message with context provided above. Use the format guidelines.
```bash
git commit -m "$(cat <<'EOF'
<type>(<scope>): <subject>

[body explaining what and why]
EOF
)"
```
Then push or continue with additional commits.
## Context
The user mentioned: "$ARGUMENTS"
Is there a specific aspect of the changes you'd like help with (staging strategy, message clarity, commit grouping)?

commands/do.md

@@ -0,0 +1,57 @@
---
description: Do the actual task
argument-hint: [SPEC DOCUMENT] [TASK NUMBER | --resume] [ADDITIONAL CONTEXT] [--auto]
allowed-tools: Bash(git add:*), Bash(git commit:*), Bash(git diff:*), Bash(git status:*)
---
**Flags:**
- `--resume`: If present, the agent starts from the first incomplete task instead of the specified task number
- `--auto`: If present, the agent will automatically:
1. Perform the task
2. Commit changes with a descriptive message describing what was done
3. Move to the next task
4. If unable to complete a task, update the document with current progress and stop for user feedback
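Example invocations (spec path and task number are hypothetical):
```bash
/do specs/user-auth.md 3                  # run task 3 from the spec
/do specs/user-auth.md --resume           # start at the first incomplete task
/do specs/user-auth.md --resume --auto    # work through remaining tasks, committing each
```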
# Instructions
1. Update the document when you start by changing the task status to **In Progress**
2. Read the full task including:
- **Goal**: What outcome needs to be achieved
- **Working Result**: Concrete definition of "done"
- **Constraints**: Technical limitations and requirements
- **Dependencies**: What this task relies on
- **Implementation Guidance** (`<guidance>` block): Context, considerations, and trade-offs
- **Validation**: Pass/fail checks
3. Perform the actual task:
- Use the guidance to understand the problem domain and key considerations
- Make implementation decisions based on what you learn during development
- Aim to meet the "Working Result" criteria
- Ensure the implementation passes the "Validation" checklist
- **Important**: The guidance provides context, not step-by-step instructions. Use your judgment to choose the best approach as you work.
4. After completion:
- Update the document with your progress
- Change task status to **Complete** if finished successfully
- If using `--auto`, create a commit whose message describes what was done (e.g., "Initialize Repository", "Set up API endpoints", not "Complete task 1"); a sketch follows this list
- Move to the next task if `--auto` is enabled
5. If unable to complete the task:
- Document what was attempted and any blockers
- Update task status to **Blocked** or **Pending Review**
- If `--auto` is enabled, stop and request user feedback before proceeding
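Under `--auto`, the commit step might look like this sketch (the message is a hypothetical example following the rule in step 4):
```bash
git add -A
git commit -m "Set up API endpoints"
```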
## Working with Implementation Guidance
The `<guidance>` blocks provide context and considerations, not prescriptive instructions:
- **Context**: Understanding of the problem domain
- **Key Considerations**: Trade-offs and options to evaluate
- **Risks**: What to watch out for
- **Questions to Resolve**: Decisions to make during implementation
- **Existing Patterns**: References to similar code in the codebase
You should:
- Read and understand the guidance before starting
- Make informed decisions based on what you discover during implementation
- Follow existing code patterns where referenced
- Resolve open questions using your best judgment
- Document significant decisions if they differ from what the guidance suggested
Context: $1
Implement task $2
$ARGUMENTS

commands/fix-quality.md

@@ -0,0 +1,89 @@
---
description: Fix linting and quality issues following root-cause-first philosophy
argument-hint: [FILES OR PATTERN]
---
# Fix Quality Issues
Fix quality issues in `$ARGUMENTS` (files, glob pattern, or `.` for entire project).
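Example invocations (paths are hypothetical):
```bash
/fix-quality src/auth/session.ts   # a single file
/fix-quality "src/**/*.ts"         # a glob pattern
/fix-quality .                     # the entire project
```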
## Priority Order
**1. Fix Root Cause** (ALWAYS first)
- Remove unused imports/variables
- Fix type errors properly
- Add missing return types
- Fix naming violations
**2. Maintain/Improve Safety & Reliability** (when #1 proves impractical)
- Find alternative approach that keeps code at least as safe as before
- Don't add hacks or workarounds that deteriorate quality
- Refactor to simpler patterns if needed
- Add proper validation/checks
**3. Local Ignores** (ONLY when #1 and #2 are both impractical)
- Use most LOCAL scope: inline > file > pattern > global
- Document WHY (comment above ignore)
- Use specific rule name (not blanket disable)
## Process
**1. Gather & Categorize**
Run linting and type checking tools to collect all issues. Group by: root cause fixable, needs alternative approach, legitimate ignore.
**For large codebases (>20 issues)**: Use TodoWrite to track fixes by category:
```
☐ Root cause fixes (X issues)
☐ Remove unused imports (N files)
☐ Fix type errors (N issues)
☐ Add missing return types (N functions)
☐ Safety alternatives (Y issues)
☐ Legitimate ignores (Z issues)
☐ Validate all checks pass
☐ Run full test suite
```
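To gather the raw counts, a sketch assuming an ESLint + TypeScript project (substitute your project's own linters):
```bash
# Count lint findings and type errors before categorizing them
npx eslint . -f json | node -e '
  const results = JSON.parse(require("fs").readFileSync(0, "utf8"));
  console.log("eslint issues:", results.reduce((n, f) => n + f.messages.length, 0));
'
npx tsc --noEmit 2>&1 | grep -c "error TS" || true
```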
**2. Fix in Priority Order**
- Root causes: Remove unused, fix types, add return types
- Safety alternatives: Refactor without deteriorating quality
- Ignores: Document reason, choose local scope, specific rule
Mark todos completed as you fix each category. This prevents losing track when interrupted.
**3. Validate**
All linting checks + full test suite must pass.
**4. Report**
```
✅ Quality Fixed: X root cause, Y alternatives, Z ignores
All checks ✓ | Tests X/X ✓
```
## Locality Hierarchy (for ignores)
1. **Inline**: Single line ignore
2. **Block**: Section/function ignore
3. **File**: Entire file ignore
4. **Pattern**: Glob pattern ignore (e.g., test files, generated code)
5. **Global**: Config-level disable (LAST RESORT)
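A minimal sketch of the inline level, using shellcheck as the example linter (any linter with targeted suppressions follows the same shape):
```bash
# Word-splitting of $FLAGS is intentional: it carries multiple grep options.
# shellcheck disable=SC2086
grep $FLAGS "$pattern" "$file"
```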
## Anti-Patterns
❌ Blanket disable without justification
❌ Global disable for local issue
❌ Ignoring without understanding why
❌ Fixing symptoms instead of root cause
❌ Making code less safe to silence warnings
## Valid Ignore Reasons
- Test code needs flexibility for mocking third-party APIs
- Generated code shouldn't be modified
- Performance-critical code needs specific optimization
- Third-party contract requires specific implementation
## Notes
- Run full test suite after fixes
- Review ignores periodically (some may become fixable)
- If >10% needs ignores, reconsider the rule

commands/optimize-doc.md

@@ -0,0 +1,199 @@
---
description: Optimize documentation for conciseness and clarity by strengthening vague instructions and removing redundancy
source: https://www.reddit.com/r/ClaudeCode/comments/1o3ku9t/hack_and_slash_your_md_files_to_reduce_context_use/?share_id=gJBjUdlUApY73VB0TANvU&utm_medium=android_app&utm_name=androidcss&utm_source=share&utm_term=2
---
# Optimize Documentation
**Task**: Optimize `$ARGUMENTS`
## Objective
Make docs more concise and clear without vagueness or misinterpretation.
**Goals** (priority order):
1. Eliminate vagueness - add explicit criteria and measurable steps
2. Increase conciseness - remove redundancy, preserve necessary info
3. Preserve clarity AND meaning - never sacrifice understanding for brevity
**Idempotent**: Run multiple times safely - first pass strengthens and removes redundancy, subsequent passes only act if improvements found.
## Analysis Methodology
For each instruction section:
### Step 1: Evaluate Clarity
**Can instruction be executed correctly WITHOUT examples?**
- Cover examples, read instruction only
- Contains subjective terms without definition?
- Has measurable criteria or explicit steps?
Decision: Clear → Step 2 | Vague → Step 3
### Step 2: If Clear - Evaluate Examples
Check if examples serve operational purpose:
| Keep If | Remove If |
|---------|-----------|
| Defines what "correct" looks like | Explains WHY (educational) |
| Shows exact commands/success criteria | Restates clear instruction |
| Sequential workflow (order matters) | Obvious application of clear rule |
| Resolves ambiguity | Duplicate template |
| Data structures (JSON, schemas) | Verbose walkthrough when numbered steps exist |
| Boundary demos (wrong vs right) | |
| Pattern extraction rules | |
### Step 3: If Vague - Strengthen First
**DO NOT remove examples yet.**
1. Identify vagueness source: subjective terms, missing criteria, unclear boundaries, narrative vs explicit steps
2. Strengthen instruction: replace subjective terms, convert to numbered steps, add thresholds, define success
3. Keep all examples - needed until strengthened
4. Mark for next pass - re-evaluate after strengthening
## Execution-Critical Content (Never Condense)
Preserve these even if instructions are clear:
### 1. Concrete Examples Defining "Correct"
Examples showing EXACT correct vs incorrect when instruction uses abstract terms.
**Test**: Does example define something ambiguous in instruction?
### 2. Sequential Steps for State Machines
Numbered workflows where order matters for correctness.
**Test**: Can steps be executed in different order and still work? If NO → Keep sequence
### 3. Inline Comments Specifying Verification
Comments explaining what output to expect or success criteria.
**Test**: Does comment specify criteria not in instruction? If YES → Keep
### 4. Disambiguation Examples
Examples resolving ambiguity when rule uses subjective terms.
**Test**: Can instruction be misinterpreted without this? If YES → Keep
### 5. Pattern Extraction Rules
Annotations generalizing specific examples into reusable decision principles (e.g., "→ Shows that 'delete' means remove lines").
**Test**: If removed, would Claude lose ability to apply reasoning to NEW examples? If YES → Keep
## Reference-Based Consolidation Rules
### Never Replace with References
- Content within sequential workflows (breaks flow)
- Quick-reference lists (serve different purpose than detailed sections)
- Success criteria at decision points (needed inline)
### OK to Replace with References
- Explanatory content appearing in multiple places
- Content at document boundaries (intro/conclusion)
- Cross-referencing related but distinct concepts
### Semantic Equivalence Test
Before replacing with reference:
1. ✅ Referenced section contains EXACT same information
2. ✅ Referenced section serves same purpose
3. ✅ No precision lost in referenced content
**If ANY fails → Keep duplicate inline**
## The Execution Test
Before removing ANY content:
1. **Can Claude execute correctly without this?**
- NO → KEEP (execution-critical)
- YES → Continue
2. **Does this explain WHY (rationale/educational)?**
- YES → REMOVE
- NO → KEEP (operational)
3. **Does this show WHAT "correct" looks like?**
- YES → KEEP (execution-critical)
- NO → Continue
4. **Does this extract general decision rule from example?**
- YES → KEEP (pattern extraction)
- NO → May remove if redundant
### Examples
**Remove** (explains WHY):
```
RATIONALE: Git history rewriting can silently drop commits...
Manual verification is the only reliable way to ensure no data loss.
```
**Keep** (defines WHAT "correct" means):
```
SUCCESS CRITERIA:
- git diff shows ONLY deletions in todo.md
- git diff shows ONLY additions in changelog.md
- Both files in SAME commit
```
## Conciseness Strategies
1. **Eliminate redundancy**: Remove repeated info, consolidate overlapping instructions
2. **Tighten language**: "execute" not "you MUST execute", "to" not "in order to", remove filler
3. **Structure over prose**: Bullets not paragraphs, tables for multi-dimensional info, numbered steps for sequences
4. **Preserve essentials**: Keep executable commands, data formats, boundaries, criteria, patterns
**Never sacrifice**:
- Scannability (vertical lists > comma-separated)
- Pattern recognition (checkmarks/bullets > prose)
- Explicit criteria ("ALL", "NEVER", exact counts/strings)
- Prevention patterns (prohibited vs required)
## Execution Instructions
1. Read `$ARGUMENTS`
2. **For large documents (>100 lines)**: Use TodoWrite to track sections:
```
☐ Section: [name] - analyze clarity
☐ Section: [name] - analyze clarity
...
☐ Apply all optimizations
☐ Verify quality standards met
```
3. Analyze each section using methodology above
4. Optimize directly: strengthen vague instructions, remove redundancy, apply conciseness strategies
5. Report changes to user
6. Commit with descriptive message
## Quality Standards
Every change must satisfy:
- ✅ Meaning preserved
- ✅ Executability preserved
- ✅ Success criteria intact
- ✅ Ambiguity resolved
- ✅ Conciseness increased
## Change Summary Format
```
## Optimization Summary
**Changes Made**:
1. [Section] (Lines X-Y): [Change description]
- Before: [Issue - vagueness/redundancy/verbosity]
- After: [Improvement]
**Metrics**:
- Lines removed: N
- Sections strengthened: M
- Redundancy eliminated: [examples]
**Next Steps**: [Further optimization possible?]
```

commands/research.md

@@ -0,0 +1,145 @@
---
description: Research blockers or questions using specialized research agents
---
# Research
Research specific blocker or question using specialized research agents and MCP tools.
## Usage
```bash
/research experimental/.plans/user-auth/implementation/003-jwt.md # Stuck task
/research "How to implement rate limiting with Redis?" # General question
/research "Best practices for writing technical blog posts" # Writing research
```
## Your Task
Research: "$ARGUMENTS"
### Step 1: Analyze & Select Agents
If the argument is a task file, read it to understand the blocker context; otherwise, analyze the question to determine the approach.
| Research Need | Agent Combination |
|--------------|-------------------|
| **New technology/patterns** | breadth + technical |
| **Specific error/issue** | depth + technical |
| **API/library integration** | technical + depth |
| **Best practices comparison** | breadth + depth |
**Agents available:**
- **research-breadth** (haiku) - WebSearch → Parallel Search → Perplexity: industry trends, consensus, multiple perspectives
- **research-depth** (haiku) - WebFetch → Parallel Search: specific URLs, implementations, case studies, gotchas
- **research-technical** (haiku) - Context7: official docs, API signatures, types, configs
### Step 2: Launch Agents in Parallel
Launch 2-3 agents in parallel; the pseudo-TypeScript sketch below illustrates the pattern:
```typescript
await Promise.all([
Task({
subagent_type: 'research-breadth', // or 'research-depth' or 'research-technical'
model: 'haiku',
description: 'Brief agent description',
prompt: `Research: "$ARGUMENTS"
Focus areas and guidance for this agent.
Specify which MCP tool to use.
Expected output format.`
}),
Task({
subagent_type: 'research-technical',
model: 'haiku',
description: 'Brief agent description',
prompt: `Research official docs for: "$ARGUMENTS"
Focus areas and guidance for this agent.`
})
]);
```
### Step 3: Synthesize Findings
Use **research-synthesis skill** to:
- Consolidate findings by theme, identify consensus, note contradictions
- Narrativize into story (not bullet dumps): "Industry uses X (breadth), via Y API (technical), as shown by Z (depth)"
- Maintain source attribution (note which agent provided insights)
- Identify gaps (unanswered questions, disagreements)
- Extract actions (implementation path, code/configs, risks)
### Step 4: Update Task File (task files only)
Append research findings to task file:
```bash
cat >> "$task_file" <<EOF
**research findings:**
- [Agent]: [key insights with sources]
- [Agent]: [key insights with sources]
**resolution:**
[Concrete path forward]
**next steps:**
[Specific actions]
EOF
```
Update status from STUCK to Pending if the blocker is resolved.
## Output Format
### For Stuck Tasks
```markdown
✅ Research Complete
Task: 003-jwt.md
Blocker: [Description]
Agents Used: breadth (industry patterns), technical (official docs)
Key Findings:
1. **Agent 1**: [Key insight with source]
2. **Agent 2**: [Key insight with source]
Resolution: [Concrete recommendation]
Updated task: Findings in Notes, LLM Prompt updated, Status: STUCK → Pending
Next: Resume implementation with /implement-plan <project>
```
### For General Questions
```markdown
✅ Research Complete
Question: [Original question]
Agents Used: [List with focus areas]
Synthesis:
[Narrative combining insights from all agents with source attribution]
Recommendation: [What to do with rationale]
Alternative: [If applicable]
Sources: [Links with descriptions]
```
## Key Points
- Launch agents **in parallel** for speed
- Use **research-synthesis skill** to consolidate (narrative, not lists)
- Maintain **source attribution** (link claims to agents/sources)
- For tasks: update file with findings and change status if resolved
- See `essentials/skills/research-synthesis/reference/multi-agent-invocation.md` for detailed patterns