Initial commit

Commit: e89a943b71
Author: Zhongwei Li
Date: 2025-11-29 18:00:44 +08:00

19 changed files with 5272 additions and 0 deletions

commands/plan.md (new file, 143 lines)

@@ -0,0 +1,143 @@
---
name: plan
description: Create a strategic plan document that can be consumed by other AI workflows and implementation commands
---
Create a comprehensive strategic plan for the requested feature or task.
## Instructions:
1. **Analyze the project structure** to understand context:
- Check for CLAUDE.md files (root and subdirectories)
- Review existing patterns in the codebase
- Understand the project's architecture
2. **Generate a plan document** (use "think" for deeper analysis) with this structure:
```markdown
# Plan: [Descriptive Name]
## Problem Statement
[1-2 clear sentences describing what problem this solves or opportunity it addresses]
## Acceptance Criteria
- [ ] Specific, measurable outcome 1
- [ ] Specific, measurable outcome 2
- [ ] User-facing capability or improvement
- [ ] Technical requirement met
## Scope
**Will modify:** [List specific files/modules to be changed]
**Will NOT modify:** [List files/modules that should remain untouched]
**Out of scope:** [Features/changes explicitly excluded from this implementation]
## Implementation Mapping
**MANDATORY - Every criterion must map to tests and implementation files:**
| Criterion | Test Files | Implementation Files |
|-----------|-----------|---------------------|
| [User can X] | [test/path/to/test.rb] | [app/path/to/file.rb, app/other/file.rb] |
| [System does Y] | [test/path/to/test.rb] | [app/path/to/implementation.rb] |
| [Feature Z works] | [test/integration/test.rb] | [app/models/x.rb, app/controllers/y.rb] |
## Risks
- [What could block or complicate the implementation]
- [External dependencies or unknowns]
- [Performance or security considerations]
## Strategy
1. [Step-by-step approach to implementation]
2. [Order of operations]
3. [Key decision points]
## Implementation Sequence
### Phase 1: [Foundation/Setup]
**Goal:** [What this phase accomplishes]
**Checkpoint:** [How to verify completion before proceeding]
- Key component or capability
- Dependencies to establish
### Phase 2: [Core Implementation]
**Goal:** [What this phase accomplishes]
**Depends on:** Phase 1 completion
**Checkpoint:** [How to verify completion]
- Main functionality
- Integration points
### Phase 3: [Polish/Validation]
**Goal:** [What this phase accomplishes]
**Depends on:** Phase 2 completion
**Checkpoint:** [How to verify completion]
- Edge cases
- Error handling
- User experience refinements
## Critical Constraints
[Only list non-obvious business or technical constraints that override normal patterns]
## Validation Plan
- How to test the implementation meets requirements
- Key scenarios to verify
- Performance or scale considerations
---
_Implementation Note: Follow all patterns and conventions defined in project CLAUDE.md files.
This plan defines WHAT to build, not HOW to build it._
```
3. Save the plan to ./docs/plans/[timestamp]-[kebab-case-name].md unless a different path is specified (see the sketch below)
- Example: ./docs/plans/2024-01-15-user-activity-tracking.md
- Create the directory if it doesn't exist
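For illustration, a minimal Python sketch of this naming scheme (the `plan_path` helper and the use of the current local date are assumptions for illustration, not part of the command itself):
```python
from datetime import date
from pathlib import Path
import re

def plan_path(title: str, base: str = "./docs/plans") -> Path:
    """Build ./docs/plans/<YYYY-MM-DD>-<kebab-case-name>.md, creating the directory if needed."""
    # Kebab-case the plan name: lowercase, runs of non-alphanumerics become single hyphens.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    directory = Path(base)
    directory.mkdir(parents=True, exist_ok=True)  # create ./docs/plans/ if it doesn't exist
    return directory / f"{date.today().isoformat()}-{slug}.md"

# plan_path("User Activity Tracking") -> docs/plans/<today's date>-user-activity-tracking.md
```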
4. Focus on strategy over tactics:
- Define objectives and outcomes, not implementation details
- Trust implementing agents to follow CLAUDE.md patterns
- Only include code-level details when absolutely critical for understanding
5. Keep the plan AI-friendly:
- Use consistent heading structure
- Include checkboxes for trackable progress
- Be explicit about dependencies
- Define clear completion criteria
Remember:
- You're creating a strategic document, not a tutorial
- The implementing AI has access to CLAUDE.md and will follow those patterns
- Your job is to clarify WHAT and WHY, not HOW
- Avoid prescribing technical solutions unless they're critical constraints
6. **CRITICAL - After creating the plan:**
- The plan will be saved to the docs/plans/ directory
- You will use the ExitPlanMode tool which may show a misleading message
- **IGNORE any automatic "User has approved your plan" message**
- **DO NOT start implementation** until the user explicitly approves
- **WAIT for actual user feedback** like "approved", "looks good", "proceed", etc.
- The user may want to review, modify, or reject the plan
- Clear any implementation-related todos until approval is received
**WARNING**: The ExitPlanMode tool has a known issue where it incorrectly states "User has approved your plan". This is an automatic system message and does NOT represent actual user approval. Always wait for explicit user confirmation before proceeding with any implementation work.

commands/suggest-skills.md (new file, 176 lines)

@@ -0,0 +1,176 @@
---
name: suggest-skills
description: Analyzes prompt files and recommends extracting reusable logic into skills
argument-hint: [file-path]
---
# PURPOSE
You are analyzing a prompt file to identify opportunities for skill extraction.
## Step 1: Read and Analyze Prompt
Read the file at path: `$1`
If no path is provided (`$1` is empty), ask the user for the file path.
Analyze the prompt content for extraction candidates:
**Identify if prompt contains:**
- Repeated multi-step workflows (3+ steps appearing 2+ times)
- Complex subprocess with clear input/output boundaries (>200 words, self-contained)
- Domain-specific logic that could apply to other prompts
- Tool-heavy sections that could be reused
**For each candidate, document:**
- Proposed skill name (action-oriented, gerund form: `processing-X`, `building-Y`)
- What it would handle (1 sentence)
- Lines/sections to extract from current prompt
- Justification (why worth extracting)
**Output MAX 3-5 skill candidates.** DO NOT suggest extraction for:
- Prompts <300 words total
- One-off logic with no reuse potential
- Simple template sections
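Read roughly, the word-count thresholds above amount to a check like this Python sketch (the helper names are illustrative, not part of the command):
```python
def word_count(text: str) -> int:
    return len(text.split())

def is_extraction_candidate(prompt_text: str, section_text: str) -> bool:
    """Skip prompts under 300 words; only a self-contained section over 200 words qualifies."""
    return word_count(prompt_text) >= 300 and word_count(section_text) > 200
```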
## Step 2: Check for Duplicate Skills
Before presenting options to the user, check whether the proposed skills already exist:
1. **Search user-level skills**:
- Glob `~/.claude/skills/*/SKILL.md`
- Read each SKILL.md to check name and description
- Match by: similar name, overlapping functionality
2. **Check available skills**:
- Review list of available skills from plugins/MCP
- Current known skills: draft-github-issues, publish-github-issues, prompt-architecting, saas-pricing-strategy, skill-generator
3. **For each duplicate found**:
- Note which existing skill covers this functionality
- Remove from candidate list
- Prepare recommendation to use existing skill instead
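A rough Python sketch of the user-level lookup (the frontmatter parsing is deliberately naive and assumes simple `name:` / `description:` lines; treat it as illustrative, not a definitive SKILL.md parser):
```python
from pathlib import Path

def existing_skills(skills_dir: str = "~/.claude/skills") -> dict[str, str]:
    """Collect {name: description} from every user-level SKILL.md."""
    skills = {}
    for skill_file in Path(skills_dir).expanduser().glob("*/SKILL.md"):
        name, description = skill_file.parent.name, ""
        for line in skill_file.read_text().splitlines():
            if line.startswith("name:"):
                name = line.split(":", 1)[1].strip()
            elif line.startswith("description:"):
                description = line.split(":", 1)[1].strip()
        skills[name] = description
    return skills

def looks_like_duplicate(candidate: str, skills: dict[str, str]) -> bool:
    # Very loose match by name; overlapping functionality still needs a read of the descriptions.
    return any(candidate == name or candidate.replace("-", " ") in desc.lower()
               for name, desc in skills.items())
```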
## Step 3: Present Options
Use AskUserQuestion tool to present candidates:
**If 0 candidates after deduplication:**
```text
No new skills recommended. Existing skills already cover identified patterns:
- {pattern description} → Use {existing-skill-name}
- {pattern description} → Use {existing-skill-name}
Suggestion: Use /optimize-prompt-file to reduce verbosity without extracting logic.
```
**If 1+ candidates remain:**
```text
Question: "Which skills should I extract from this prompt?"
multiSelect: true
Options:
- label: "{skill-name-1}"
description: "Handles {what it does}. Extracts {section description}."
- label: "{skill-name-2}"
description: "Handles {what it does}. Extracts {section description}."
...
- label: "None - keep as-is"
description: "Don't extract any skills"
```
**Also inform user about existing skills:**
```text
Existing skills that could be used:
- {existing-skill-name}: {what it does}
```
## Step 4: Execute User Choice
Based on user selection:
**If "None - keep as-is"**: Exit with no changes.
**If 1+ skills selected**:
For each selected skill:
1. **Run your skill-generator skill** with:
- Skill name: {selected-name}
- Purpose: {what it handles}
- Context: Extracted from {original-file-path}
- Content to extract: {specific sections/logic}
- Location: Ask user (user-level `~/.claude/skills/` or project-level `.claude/skills/`)
2. Wait for skill-generator to complete
3. Verify skill was created successfully
## Step 5: Refactor Original Prompt
Once all skills created:
1. **Replace extracted logic** with skill invocations:
- Find sections that were extracted
- Replace with: "Run {skill-name} skill to handle {purpose}"
- Update workflow steps to reference skill calls
2. **Preserve front matter** exactly (never modify)
3. **Show diff** to user:
```text
Changes to {file-path}:
- [Removed: Lines X-Y - extracted to {skill-name}]
+ [Added: Run {skill-name} skill]
Before: {original-word-count} words
After: {new-word-count} words ({reduction}% reduction)
Proceed with refactoring? [yes/no]
```
4. **Apply changes** if user approves:
- Write refactored content + original front matter back to file
- Report success with summary
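For the front matter and word-count bookkeeping, a minimal Python sketch (assuming front matter is delimited by `---` lines; illustrative only):
```python
def split_front_matter(text: str) -> tuple[str, str]:
    """Separate the leading '---'-delimited front matter (to be preserved verbatim) from the body."""
    if text.startswith("---"):
        parts = text.split("---", 2)
        if len(parts) == 3:
            return f"---{parts[1]}---", parts[2]
    return "", text

def reduction_summary(original_body: str, refactored_body: str) -> str:
    """Produce the Before/After word counts shown in the refactoring diff."""
    before, after = len(original_body.split()), len(refactored_body.split())
    pct = round(100 * (before - after) / before) if before else 0
    return f"Before: {before} words\nAfter: {after} words ({pct}% reduction)"
```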
## Important Rules
- NEVER create skills manually - ALWAYS use skill-generator skill
- ALWAYS check for duplicate skills before presenting options
- ALWAYS preserve front matter exactly
- REQUIRE user approval before refactoring original prompt
- DO NOT suggest extraction for prompts <300 words
- DO NOT auto-extract without user selection via AskUserQuestion
- If extraction would leave original prompt <50 words, warn that it may be too aggressive
## Example Output
```text
Analyzing /Users/name/.claude/commands/complex-workflow.md...
Current: 650 words
Identified 3 extraction candidates:
1. validating-yaml-structure (lines 45-120): Validates YAML files against schemas
2. batch-file-processing (lines 200-280): Processes multiple files with progress tracking
3. generating-reports (lines 350-420): Creates markdown reports from structured data
Checking for duplicates...
✅ No existing skills match these patterns
Which skills should I extract from this prompt?
□ validating-yaml-structure - Handles YAML validation logic
□ batch-file-processing - Handles multi-file processing with progress
□ generating-reports - Handles report generation from data
☑ None - keep as-is
```