Initial commit

Zhongwei Li
2025-11-29 18:28:37 +08:00
commit ccc65b3f07
180 changed files with 53970 additions and 0 deletions

# Workflow: Add a Reference to Existing Skill
<required_reading>
**Read these reference files NOW:**
1. references/recommended-structure.md
2. references/skill-structure.md
</required_reading>
<process>
## Step 1: Select the Skill
```bash
ls ~/.claude/skills/
```
Present numbered list, ask: "Which skill needs a new reference?"
## Step 2: Analyze Current Structure
```bash
cat ~/.claude/skills/{skill-name}/SKILL.md
ls ~/.claude/skills/{skill-name}/references/ 2>/dev/null
```
Determine:
- **Has references/ folder?** → Good, can add directly
- **Simple skill?** → May need to create references/ first
- **What references exist?** → Understand the knowledge landscape
Report current references to user.
## Step 3: Gather Reference Requirements
Ask:
- What knowledge should this reference contain?
- Which workflows will use it?
- Is this reusable across workflows or specific to one?
**If specific to one workflow** → Consider putting it inline in that workflow instead.
## Step 4: Create the Reference File
Create `references/{reference-name}.md`:
Use semantic XML tags to structure the content:
```xml
<overview>
Brief description of what this reference covers
</overview>
<patterns>
## Common Patterns
[Reusable patterns, examples, code snippets]
</patterns>
<guidelines>
## Guidelines
[Best practices, rules, constraints]
</guidelines>
<examples>
## Examples
[Concrete examples with explanation]
</examples>
```
## Step 5: Update SKILL.md
Add the new reference to `<reference_index>`:
```markdown
**Category:** existing.md, new-reference.md
```
## Step 6: Update Workflows That Need It
For each workflow that should use this reference:
1. Read the workflow file
2. Add to its `<required_reading>` section
3. Verify the workflow still makes sense with this addition
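To locate candidate workflows quickly, a grep across the skill can help; the skill name and search term below are hypothetical:
```bash
# Hypothetical example: list workflows that mention the new reference's topic
# (skill name and search term are placeholders).
grep -l "error handling" ~/.claude/skills/my-skill/workflows/*.md
```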
## Step 7: Verify
- [ ] Reference file exists and is well-structured
- [ ] Reference is in SKILL.md reference_index
- [ ] Relevant workflows have it in required_reading
- [ ] No broken references
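The last item lends itself to a scripted check. A minimal sketch, assuming the skill lives at the usual path (the name is a placeholder):
```bash
# Sketch: flag references that are mentioned anywhere in the skill
# but missing on disk (skill name is a placeholder).
skill=~/.claude/skills/my-skill
grep -rhoE 'references/[A-Za-z0-9_-]+\.md' "$skill" | sort -u | while read -r ref; do
  [ -f "$skill/$ref" ] || echo "Missing: $ref"
done
```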
</process>
<success_criteria>
Reference addition is complete when:
- [ ] Reference file created with useful content
- [ ] Added to reference_index in SKILL.md
- [ ] Relevant workflows updated to read it
- [ ] Content is reusable (not workflow-specific)
</success_criteria>

# Workflow: Add a Script to a Skill
<required_reading>
**Read these reference files NOW:**
1. references/using-scripts.md
</required_reading>
<process>
## Step 1: Identify the Skill
Ask (if not already provided):
- Which skill needs a script?
- What operation should the script perform?
## Step 2: Analyze Script Need
Confirm this is a good script candidate:
- [ ] Same code runs across multiple invocations
- [ ] Operation is error-prone when rewritten
- [ ] Consistency matters more than flexibility
If not a good fit, suggest alternatives (inline code in the workflow, or examples in a reference file).
## Step 3: Create Scripts Directory
```bash
mkdir -p ~/.claude/skills/{skill-name}/scripts
```
## Step 4: Design Script
Gather requirements:
- What inputs does the script need?
- What should it output or accomplish?
- What errors might occur?
- Should it be idempotent?
Choose language:
- **bash** - Shell operations, file manipulation, CLI tools
- **python** - Data processing, API calls, complex logic
- **node/ts** - JavaScript ecosystem, async operations
## Step 5: Write Script File
Create `scripts/{script-name}.{ext}` with:
- Purpose comment at top
- Usage instructions
- Input validation
- Error handling
- Clear output/feedback
For bash scripts:
```bash
#!/bin/bash
set -euo pipefail
```
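A minimal skeleton putting these elements together might look like this; the script name, argument, and messages are placeholders, not a prescribed format:
```bash
#!/bin/bash
# Purpose: <one line on what this script does>
# Usage:   scripts/process-input.sh <input-file>
set -euo pipefail

# Input validation
if [ $# -ne 1 ]; then
  echo "Usage: $0 <input-file>" >&2
  exit 1
fi
input="$1"
[ -f "$input" ] || { echo "Error: '$input' not found" >&2; exit 1; }

# ... the actual operation goes here ...

# Clear feedback on success
echo "Done: processed $input"
```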
## Step 6: Make Executable (if bash)
```bash
chmod +x ~/.claude/skills/{skill-name}/scripts/{script-name}.sh
```
## Step 7: Update Workflow to Use Script
Find the workflow that needs this operation. Add:
```xml
<process>
...
N. Run `scripts/{script-name}.sh [arguments]`
N+1. Verify operation succeeded
...
</process>
```
## Step 8: Test
Invoke the skill workflow and verify:
- Script runs at the right step
- Inputs are passed correctly
- Errors are handled gracefully
- Output matches expectations
</process>
<success_criteria>
Script is complete when:
- [ ] scripts/ directory exists
- [ ] Script file has proper structure (comments, validation, error handling)
- [ ] Script is executable (if bash)
- [ ] At least one workflow references the script
- [ ] No hardcoded secrets or credentials
- [ ] Tested with real invocation
</success_criteria>

# Workflow: Add a Template to a Skill
<required_reading>
**Read these reference files NOW:**
1. references/using-templates.md
</required_reading>
<process>
## Step 1: Identify the Skill
Ask (if not already provided):
- Which skill needs a template?
- What output does this template structure?
## Step 2: Analyze Template Need
Confirm this is a good template candidate:
- [ ] Output has consistent structure across uses
- [ ] Structure matters more than creative generation
- [ ] Filling placeholders is more reliable than blank-page generation
If not a good fit, suggest alternatives (guidance in the workflow, or examples in a reference file).
## Step 3: Create Templates Directory
```bash
mkdir -p ~/.claude/skills/{skill-name}/templates
```
## Step 4: Design Template Structure
Gather requirements:
- What sections does the output need?
- What information varies between uses? (→ placeholders)
- What stays constant? (→ static structure)
## Step 5: Write Template File
Create `templates/{template-name}.md` with:
- Clear section markers
- `{{PLACEHOLDER}}` syntax for variable content
- Brief inline guidance where helpful
- Minimal example content
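As an illustration, a hypothetical status-report template could be written with a heredoc (quoting 'EOF' keeps the `{{PLACEHOLDER}}` lines verbatim, with no shell expansion):
```bash
# Hypothetical example: skill and template names are placeholders.
cat > ~/.claude/skills/my-skill/templates/status-report.md << 'EOF'
# Status Report: {{PROJECT_NAME}}

## Summary
{{ONE_PARAGRAPH_SUMMARY}}

## Risks
<!-- One bullet per risk, with likelihood and mitigation -->
{{RISKS}}
EOF
```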
## Step 6: Update Workflow to Use Template
Find the workflow that produces this output. Add:
```xml
<process>
...
N. Read `templates/{template-name}.md`
N+1. Copy template structure
N+2. Fill each placeholder based on gathered context
...
</process>
```
## Step 7: Test
Invoke the skill workflow and verify:
- Template is read at the right step
- All placeholders get filled appropriately
- Output structure matches template
- No placeholders left unfilled
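The last check is easy to script. A sketch, assuming the generated file's path (hypothetical here) is known:
```bash
# Sketch: surface any {{PLACEHOLDER}} that survived into the output
# (output path is a placeholder).
grep -nE '\{\{[A-Z0-9_]+\}\}' generated-output.md && echo "Unfilled placeholders found" >&2
```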
</process>
<success_criteria>
Template is complete when:
- [ ] templates/ directory exists
- [ ] Template file has clear structure with placeholders
- [ ] At least one workflow references the template
- [ ] Workflow instructions explain when/how to use template
- [ ] Tested with real invocation
</success_criteria>

# Workflow: Add a Workflow to Existing Skill
<required_reading>
**Read these reference files NOW:**
1. references/recommended-structure.md
2. references/workflows-and-validation.md
</required_reading>
<process>
## Step 1: Select the Skill
**DO NOT use AskUserQuestion** - there may be many skills.
```bash
ls ~/.claude/skills/
```
Present numbered list, ask: "Which skill needs a new workflow?"
## Step 2: Analyze Current Structure
Read the skill:
```bash
cat ~/.claude/skills/{skill-name}/SKILL.md
ls ~/.claude/skills/{skill-name}/workflows/ 2>/dev/null
```
Determine:
- **Simple skill?** → May need to upgrade to router pattern first
- **Already has workflows/?** → Good, can add directly
- **What workflows exist?** → Avoid duplication
Report current structure to user.
## Step 3: Gather Workflow Requirements
Ask, using AskUserQuestion or a direct question:
- What should this workflow do?
- When would someone use it vs existing workflows?
- What references would it need?
## Step 4: Upgrade to Router Pattern (if needed)
**If skill is currently simple (no workflows/):**
Ask: "This skill needs to be upgraded to the router pattern first. Should I restructure it?"
If yes:
1. Create workflows/ directory
2. Move existing process content to workflows/main.md
3. Rewrite SKILL.md as router with intake + routing
4. Verify structure works before proceeding
## Step 5: Create the Workflow File
Create `workflows/{workflow-name}.md`:
```markdown
# Workflow: {Workflow Name}
<required_reading>
**Read these reference files NOW:**
1. references/{relevant-file}.md
</required_reading>
<process>
## Step 1: {First Step}
[What to do]
## Step 2: {Second Step}
[What to do]
## Step 3: {Third Step}
[What to do]
</process>
<success_criteria>
This workflow is complete when:
- [ ] Criterion 1
- [ ] Criterion 2
- [ ] Criterion 3
</success_criteria>
```
## Step 6: Update SKILL.md
Add the new workflow to:
1. **Intake question** - Add new option
2. **Routing table** - Map option to workflow file
3. **Workflows index** - Add to the list
## Step 7: Create References (if needed)
If the workflow needs domain knowledge that doesn't exist:
1. Create `references/{reference-name}.md`
2. Add to reference_index in SKILL.md
3. Reference it in the workflow's required_reading
## Step 8: Test
Invoke the skill:
- Does the new option appear in intake?
- Does selecting it route to the correct workflow?
- Does the workflow load the right references?
- Does the workflow execute correctly?
Report results to user.
</process>
<success_criteria>
Workflow addition is complete when:
- [ ] Skill upgraded to router pattern (if needed)
- [ ] Workflow file created with required_reading, process, success_criteria
- [ ] SKILL.md intake updated with new option
- [ ] SKILL.md routing updated
- [ ] SKILL.md workflows_index updated
- [ ] Any needed references created
- [ ] Tested and working
</success_criteria>

# Workflow: Audit a Skill
<required_reading>
**Read these reference files NOW:**
1. references/recommended-structure.md
2. references/skill-structure.md
3. references/use-xml-tags.md
</required_reading>
<process>
## Step 1: List Available Skills
**DO NOT use AskUserQuestion** - there may be many skills.
Enumerate skills in chat as numbered list:
```bash
ls ~/.claude/skills/
```
Present as:
```
Available skills:
1. create-agent-skills
2. build-macos-apps
3. manage-stripe
...
```
Ask: "Which skill would you like to audit? (enter number or name)"
## Step 2: Read the Skill
After user selects, read the full skill structure:
```bash
# Read main file
cat ~/.claude/skills/{skill-name}/SKILL.md
# Check for workflows and references
ls ~/.claude/skills/{skill-name}/
ls ~/.claude/skills/{skill-name}/workflows/ 2>/dev/null
ls ~/.claude/skills/{skill-name}/references/ 2>/dev/null
```
## Step 3: Run Audit Checklist
Evaluate against each criterion:
### YAML Frontmatter
- [ ] Has `name:` field (lowercase-with-hyphens)
- [ ] Name matches directory name
- [ ] Has `description:` field
- [ ] Description says what it does AND when to use it
- [ ] Description is third person ("Use when...")
### Structure
- [ ] SKILL.md under 500 lines
- [ ] Pure XML structure (no markdown headings # in body)
- [ ] All XML tags properly closed
- [ ] Has required tags: objective OR essential_principles
- [ ] Has success_criteria
### Router Pattern (if complex skill)
- [ ] Essential principles inline in SKILL.md (not in separate file)
- [ ] Has intake question
- [ ] Has routing table
- [ ] All referenced workflow files exist
- [ ] All referenced reference files exist
### Workflows (if present)
- [ ] Each has required_reading section
- [ ] Each has process section
- [ ] Each has success_criteria section
- [ ] Required reading references exist
### Content Quality
- [ ] Principles are actionable (not vague platitudes)
- [ ] Steps are specific (not "do the thing")
- [ ] Success criteria are verifiable
- [ ] No redundant content across files
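Several of the mechanical items can be pre-checked with a quick script before the manual review; a rough sketch (skill name is a placeholder, and the heading grep will also hit `#` lines inside code fences):
```bash
# Sketch: scripted pass over a few mechanical audit criteria.
skill=~/.claude/skills/my-skill/SKILL.md

wc -l < "$skill"              # under 500 lines?
grep -n '^#' "$skill" | head  # markdown headings in body? (approximate)

# Balanced open/close counts for key tags (counts lines, not occurrences)
for tag in objective intake routing success_criteria; do
  o=$(grep -c "<$tag>" "$skill"); c=$(grep -c "</$tag>" "$skill")
  [ "$o" -eq "$c" ] || echo "Unbalanced <$tag>: $o open, $c close"
done
```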
## Step 4: Generate Report
Present findings as:
```
## Audit Report: {skill-name}
### ✅ Passing
- [list passing items]
### ⚠️ Issues Found
1. **[Issue name]**: [Description]
→ Fix: [Specific action]
2. **[Issue name]**: [Description]
→ Fix: [Specific action]
### 📊 Score: X/Y criteria passing
```
## Step 5: Offer Fixes
If issues found, ask:
"Would you like me to fix these issues?"
Options:
1. **Fix all** - Apply all recommended fixes
2. **Fix one by one** - Review each fix before applying
3. **Just the report** - No changes needed
If fixing:
- Make each change
- Verify file validity after each change
- Report what was fixed
</process>
<audit_anti_patterns>
## Common Anti-Patterns to Flag
**Skippable principles**: Essential principles in separate file instead of inline
**Monolithic skill**: Single file over 500 lines
**Mixed concerns**: Procedures and knowledge in same file
**Vague steps**: "Handle the error appropriately"
**Untestable criteria**: "User is satisfied"
**Markdown headings in body**: Using # instead of XML tags
**Missing routing**: Complex skill without intake/routing
**Broken references**: Files mentioned but don't exist
**Redundant content**: Same information in multiple places
</audit_anti_patterns>
<success_criteria>
Audit is complete when:
- [ ] Skill fully read and analyzed
- [ ] All checklist items evaluated
- [ ] Report presented to user
- [ ] Fixes applied (if requested)
- [ ] User has clear picture of skill health
</success_criteria>

# Workflow: Create Exhaustive Domain Expertise Skill
<objective>
Build a comprehensive execution skill that does real work in a specific domain. Domain expertise skills are full-featured build skills: they carry exhaustive domain knowledge in references, provide complete workflows for the full lifecycle (build → debug → optimize → ship), and can be both invoked directly by users AND loaded by other skills (like create-plans) for domain knowledge.
</objective>
<critical_distinction>
**Regular skill:** "Do one specific task"
**Domain expertise skill:** "Do EVERYTHING in this domain, with complete practitioner knowledge"
Examples:
- `expertise/macos-apps` - Build macOS apps from scratch through shipping
- `expertise/python-games` - Build complete Python games with full game dev lifecycle
- `expertise/rust-systems` - Build Rust systems programs with exhaustive systems knowledge
- `expertise/web-scraping` - Build scrapers, handle all edge cases, deploy at scale
Domain expertise skills:
- ✅ Execute tasks (build, debug, optimize, ship)
- ✅ Have comprehensive domain knowledge in references
- ✅ Are invoked directly by users ("build a macOS app")
- ✅ Can be loaded by other skills (create-plans reads references for planning)
- ✅ Cover the FULL lifecycle, not just getting started
</critical_distinction>
<required_reading>
**Read these reference files NOW:**
1. references/recommended-structure.md
2. references/core-principles.md
3. references/use-xml-tags.md
</required_reading>
<process>
## Step 1: Identify Domain
Ask user what domain expertise to build:
**Example domains:**
- macOS/iOS app development
- Python game development
- Rust systems programming
- Machine learning / AI
- Web scraping and automation
- Data engineering pipelines
- Audio processing / DSP
- 3D graphics / shaders
- Unity/Unreal game development
- Embedded systems
Get specific: "Python games" or "Python games with Pygame specifically"?
## Step 2: Confirm Target Location
Explain:
```
Domain expertise skills go in: ~/.claude/skills/expertise/{domain-name}/
These are comprehensive BUILD skills that:
- Execute tasks (build, debug, optimize, ship)
- Contain exhaustive domain knowledge
- Can be invoked directly by users
- Can be loaded by other skills for domain knowledge
Name suggestion: {suggested-name}
Location: ~/.claude/skills/expertise/{suggested-name}/
```
Confirm or adjust name.
## Step 3: Identify Workflows
Domain expertise skills cover the FULL lifecycle. Identify what workflows are needed.
**Common workflows for most domains:**
1. **build-new-{thing}.md** - Create from scratch
2. **add-feature.md** - Extend existing {thing}
3. **debug-{thing}.md** - Find and fix bugs
4. **write-tests.md** - Test for correctness
5. **optimize-performance.md** - Profile and speed up
6. **ship-{thing}.md** - Deploy/distribute
**Domain-specific workflows:**
- Games: `implement-game-mechanic.md`, `add-audio.md`, `polish-ui.md`
- Web apps: `setup-auth.md`, `add-api-endpoint.md`, `setup-database.md`
- Systems: `optimize-memory.md`, `profile-cpu.md`, `cross-compile.md`
Each workflow = one complete task type that users actually do.
## Step 4: Exhaustive Research Phase
**CRITICAL:** This research must be comprehensive, not superficial.
### Research Strategy
Run multiple web searches to ensure coverage:
**Search 1: Current ecosystem**
- "best {domain} libraries 2024 2025"
- "popular {domain} frameworks comparison"
- "{domain} tech stack recommendations"
**Search 2: Architecture patterns**
- "{domain} architecture patterns"
- "{domain} best practices design patterns"
- "how to structure {domain} projects"
**Search 3: Lifecycle and tooling**
- "{domain} development workflow"
- "{domain} testing debugging best practices"
- "{domain} deployment distribution"
**Search 4: Common pitfalls**
- "{domain} common mistakes avoid"
- "{domain} anti-patterns"
- "what not to do {domain}"
**Search 5: Real-world usage**
- "{domain} production examples GitHub"
- "{domain} case studies"
- "successful {domain} projects"
### Verification Requirements
For EACH major library/tool/pattern found:
- **Check recency:** When was it last updated?
- **Check adoption:** Is it actively maintained? Community size?
- **Check alternatives:** What else exists? When to use each?
- **Check deprecation:** Is anything being replaced?
**Red flags for outdated content:**
- Articles from before 2023 (unless fundamental concepts)
- Abandoned libraries (no commits in 12+ months)
- Deprecated APIs or patterns
- "This used to be popular but..."
### Documentation Sources
Use Context7 MCP when available:
```
mcp__context7__resolve-library-id: {library-name}
mcp__context7__get-library-docs: {library-id}
```
Focus on official docs, not tutorials.
## Step 5: Organize Knowledge Into Domain Areas
Structure references by domain concerns, NOT by arbitrary categories.
**For game development example:**
```
references/
├── architecture.md # ECS, component-based, state machines
├── libraries.md # Pygame, Arcade, Panda3D (when to use each)
├── graphics-rendering.md # 2D/3D rendering, sprites, shaders
├── physics.md # Collision, physics engines
├── audio.md # Sound effects, music, spatial audio
├── input.md # Keyboard, mouse, gamepad, touch
├── ui-menus.md # HUD, menus, dialogs
├── game-loop.md # Update/render loop, fixed timestep
├── state-management.md # Game states, scene management
├── networking.md # Multiplayer, client-server, P2P
├── asset-pipeline.md # Loading, caching, optimization
├── testing-debugging.md # Unit tests, profiling, debugging tools
├── performance.md # Optimization, profiling, benchmarking
├── packaging.md # Building executables, installers
├── distribution.md # Steam, itch.io, app stores
└── anti-patterns.md # Common mistakes, what NOT to do
```
**For macOS app development example:**
```
references/
├── app-architecture.md # State management, dependency injection
├── swiftui-patterns.md # Declarative UI patterns
├── appkit-integration.md # Using AppKit with SwiftUI
├── concurrency-patterns.md # Async/await, actors, structured concurrency
├── data-persistence.md # Storage strategies
├── networking.md # URLSession, async networking
├── system-apis.md # macOS-specific frameworks
├── testing-tdd.md # Testing patterns
├── testing-debugging.md # Debugging tools and techniques
├── performance.md # Profiling, optimization
├── design-system.md # Platform conventions
├── macos-polish.md # Native feel, accessibility
├── security-code-signing.md # Signing, notarization
└── project-scaffolding.md # CLI-based setup
```
**For each reference file:**
- Pure XML structure
- Decision trees: "If X, use Y. If Z, use A instead."
- Comparison tables: Library vs Library (speed, features, learning curve)
- Code examples showing patterns
- "When to use" guidance
- Platform-specific considerations
- Current versions and compatibility
## Step 6: Create SKILL.md
Domain expertise skills use router pattern with essential principles:
```yaml
---
name: build-{domain-name}
description: Build {domain things} from scratch through shipping. Full lifecycle - build, debug, test, optimize, ship. {Any specific constraints like "CLI-only, no IDE"}.
---
<essential_principles>
## How {This Domain} Works
{Domain-specific principles that ALWAYS apply}
### 1. {First Principle}
{Critical practice that can't be skipped}
### 2. {Second Principle}
{Another fundamental practice}
### 3. {Third Principle}
{Core workflow pattern}
</essential_principles>
<intake>
**Ask the user:**
What would you like to do?
1. Build a new {thing}
2. Debug an existing {thing}
3. Add a feature
4. Write/run tests
5. Optimize performance
6. Ship/release
7. Something else
**Then read the matching workflow from `workflows/` and follow it.**
</intake>
<routing>
| Response | Workflow |
|----------|----------|
| 1, "new", "create", "build", "start" | `workflows/build-new-{thing}.md` |
| 2, "broken", "fix", "debug", "crash", "bug" | `workflows/debug-{thing}.md` |
| 3, "add", "feature", "implement", "change" | `workflows/add-feature.md` |
| 4, "test", "tests", "TDD", "coverage" | `workflows/write-tests.md` |
| 5, "slow", "optimize", "performance", "fast" | `workflows/optimize-performance.md` |
| 6, "ship", "release", "deploy", "publish" | `workflows/ship-{thing}.md` |
| 7, other | Clarify, then select workflow or references |
</routing>
<verification_loop>
## After Every Change
{Domain-specific verification steps}
Example for compiled languages:
```bash
# 1. Does it build?
{build command}
# 2. Do tests pass?
{test command}
# 3. Does it run?
{run command}
```
Report to the user:
- "Build: ✓"
- "Tests: X pass, Y fail"
- "Ready for you to check [specific thing]"
</verification_loop>
<reference_index>
## Domain Knowledge
All in `references/`:
**Architecture:** {list files}
**{Domain Area}:** {list files}
**{Domain Area}:** {list files}
**Development:** {list files}
**Shipping:** {list files}
</reference_index>
<workflows_index>
## Workflows
All in `workflows/`:
| File | Purpose |
|------|---------|
| build-new-{thing}.md | Create new {thing} from scratch |
| debug-{thing}.md | Find and fix bugs |
| add-feature.md | Add to existing {thing} |
| write-tests.md | Write and run tests |
| optimize-performance.md | Profile and speed up |
| ship-{thing}.md | Deploy/distribute |
</workflows_index>
```
## Step 7: Write Workflows
For EACH workflow identified in Step 3:
### Workflow Template
```markdown
# Workflow: {Workflow Name}
<required_reading>
**Read these reference files NOW before {doing the task}:**
1. references/{relevant-file}.md
2. references/{another-relevant-file}.md
3. references/{third-relevant-file}.md
</required_reading>
<process>
## Step 1: {First Action}
{What to do}
## Step 2: {Second Action}
{What to do - actual implementation steps}
## Step 3: {Third Action}
{What to do}
## Step 4: Verify
{How to prove it works}
```bash
{verification commands}
```
</process>
<anti_patterns>
Avoid:
- {Common mistake 1}
- {Common mistake 2}
- {Common mistake 3}
</anti_patterns>
<success_criteria>
A well-{completed task}:
- {Criterion 1}
- {Criterion 2}
- {Criterion 3}
- Builds/runs without errors
- Tests pass
- Feels {native/professional/correct}
</success_criteria>
```
**Key workflow characteristics:**
- Starts with required_reading (which references to load)
- Contains actual implementation steps (not just "read references")
- Includes verification steps
- Has success criteria
- Documents anti-patterns
## Step 8: Write Comprehensive References
For EACH reference file identified in Step 5:
### Structure Template
```xml
<overview>
Brief introduction to this domain area
</overview>
<options>
## Available Approaches/Libraries
<option name="Library A">
**When to use:** [specific scenarios]
**Strengths:** [what it's best at]
**Weaknesses:** [what it's not good for]
**Current status:** v{version}, actively maintained
**Learning curve:** [easy/medium/hard]
```code
# Example usage
```
</option>
<option name="Library B">
[Same structure]
</option>
</options>
<decision_tree>
## Choosing the Right Approach
**If you need [X]:** Use [Library A]
**If you need [Y]:** Use [Library B]
**If you have [constraint Z]:** Use [Library C]
**Avoid [Library D] if:** [specific scenarios]
</decision_tree>
<patterns>
## Common Patterns
<pattern name="Pattern Name">
**Use when:** [scenario]
**Implementation:** [code example]
**Considerations:** [trade-offs]
</pattern>
</patterns>
<anti_patterns>
## What NOT to Do
<anti_pattern name="Common Mistake">
**Problem:** [what people do wrong]
**Why it's bad:** [consequences]
**Instead:** [correct approach]
</anti_pattern>
</anti_patterns>
<platform_considerations>
## Platform-Specific Notes
**Windows:** [considerations]
**macOS:** [considerations]
**Linux:** [considerations]
**Mobile:** [if applicable]
</platform_considerations>
```
### Quality Standards
Each reference must include:
- **Current information** (verify dates)
- **Multiple options** (not just one library)
- **Decision guidance** (when to use each)
- **Real examples** (working code, not pseudocode)
- **Trade-offs** (no silver bullets)
- **Anti-patterns** (what NOT to do)
### Common Reference Files
Most domains need:
- **architecture.md** - How to structure projects
- **libraries.md** - Ecosystem overview with comparisons
- **patterns.md** - Design patterns specific to domain
- **testing-debugging.md** - How to verify correctness
- **performance.md** - Optimization strategies
- **deployment.md** - How to ship/distribute
- **anti-patterns.md** - Common mistakes consolidated
## Step 9: Validate Completeness
### Completeness Checklist
Ask: "Could a user build a professional {domain thing} from scratch through shipping using just this skill?"
**Must answer YES to:**
- [ ] All major libraries/frameworks covered?
- [ ] All architectural approaches documented?
- [ ] Complete lifecycle addressed (build → debug → test → optimize → ship)?
- [ ] Platform-specific considerations included?
- [ ] "When to use X vs Y" guidance provided?
- [ ] Common pitfalls documented?
- [ ] Current as of 2024-2025?
- [ ] Workflows actually execute tasks (not just reference knowledge)?
- [ ] Each workflow specifies which references to read?
**Specific gaps to check:**
- [ ] Testing strategy covered?
- [ ] Debugging/profiling tools listed?
- [ ] Deployment/distribution methods documented?
- [ ] Performance optimization addressed?
- [ ] Security considerations (if applicable)?
- [ ] Asset/resource management (if applicable)?
- [ ] Networking (if applicable)?
### Dual-Purpose Test
Test both use cases:
**Direct invocation:** "Can a user invoke this skill and build something?"
- Intake routes to appropriate workflow
- Workflow loads relevant references
- Workflow provides implementation steps
- Success criteria are clear
**Knowledge reference:** "Can create-plans load references to plan a project?"
- References contain decision guidance
- All options compared
- Complete lifecycle covered
- Architecture patterns documented
## Step 10: Create Directory and Files
```bash
# Create structure
mkdir -p ~/.claude/skills/expertise/{domain-name}
mkdir -p ~/.claude/skills/expertise/{domain-name}/workflows
mkdir -p ~/.claude/skills/expertise/{domain-name}/references
# Write SKILL.md
# Write all workflow files
# Write all reference files
# Verify structure
ls -R ~/.claude/skills/expertise/{domain-name}
```
## Step 11: Document in create-plans
Update `~/.claude/skills/create-plans/SKILL.md` to reference this new domain:
Add to the domain inference table:
```markdown
| "{keyword}", "{domain term}" | expertise/{domain-name} |
```
So create-plans can auto-detect and offer to load it.
## Step 12: Final Quality Check
Review entire skill:
**SKILL.md:**
- [ ] Name matches directory (build-{domain-name})
- [ ] Description explains it builds things from scratch through shipping
- [ ] Essential principles inline (always loaded)
- [ ] Intake asks what user wants to do
- [ ] Routing maps to workflows
- [ ] Reference index complete and organized
- [ ] Workflows index complete
**Workflows:**
- [ ] Each workflow starts with required_reading
- [ ] Each workflow has actual implementation steps
- [ ] Each workflow has verification steps
- [ ] Each workflow has success criteria
- [ ] Workflows cover full lifecycle (build, debug, test, optimize, ship)
**References:**
- [ ] Pure XML structure (no markdown headings)
- [ ] Decision guidance in every file
- [ ] Current versions verified
- [ ] Code examples work
- [ ] Anti-patterns documented
- [ ] Platform considerations included
**Completeness:**
- [ ] A professional practitioner would find this comprehensive
- [ ] No major libraries/patterns missing
- [ ] Full lifecycle covered
- [ ] Passes the "build from scratch through shipping" test
- [ ] Can be invoked directly by users
- [ ] Can be loaded by create-plans for knowledge
</process>
<success_criteria>
Domain expertise skill is complete when:
- [ ] Comprehensive research completed (5+ web searches)
- [ ] All sources verified for currency (2024-2025)
- [ ] Knowledge organized by domain areas (not arbitrary)
- [ ] Essential principles in SKILL.md (always loaded)
- [ ] Intake routes to appropriate workflows
- [ ] Each workflow has required_reading + implementation steps + verification
- [ ] Each reference has decision trees and comparisons
- [ ] Anti-patterns documented throughout
- [ ] Full lifecycle covered (build → debug → test → optimize → ship)
- [ ] Platform-specific considerations included
- [ ] Located in ~/.claude/skills/expertise/{domain-name}/
- [ ] Referenced in create-plans domain inference table
- [ ] Passes dual-purpose test: Can be invoked directly AND loaded for knowledge
- [ ] User can build something professional from scratch through shipping
</success_criteria>
<anti_patterns>
**DON'T:**
- Copy tutorial content without verification
- Include only "getting started" material
- Skip the "when NOT to use" guidance
- Forget to check if libraries are still maintained
- Organize by document type instead of domain concerns
- Make it knowledge-only with no execution workflows
- Skip verification steps in workflows
- Include outdated content from old blog posts
- Skip decision trees and comparisons
- Create workflows that just say "read the references"
**DO:**
- Verify everything is current
- Include complete lifecycle (build → ship)
- Provide decision guidance
- Document anti-patterns
- Make workflows execute real tasks
- Start workflows with required_reading
- Include verification in every workflow
- Make it exhaustive, not minimal
- Test both direct invocation and knowledge reference use cases
</anti_patterns>

# Workflow: Create a New Skill
<required_reading>
**Read these reference files NOW:**
1. references/recommended-structure.md
2. references/skill-structure.md
3. references/core-principles.md
4. references/use-xml-tags.md
</required_reading>
<process>
## Step 1: Adaptive Requirements Gathering
**If user provided context** (e.g., "build a skill for X"):
→ Analyze what's stated, what can be inferred, what's unclear
→ Skip to asking about genuine gaps only
**If user just invoked skill without context:**
→ Ask what they want to build
### Using AskUserQuestion
Ask 2-4 domain-specific questions based on actual gaps. Each question should:
- Have specific options with descriptions
- Focus on scope, complexity, outputs, boundaries
- NOT ask things obvious from context
Example questions:
- "What specific operations should this skill handle?" (with options based on domain)
- "Should this also handle [related thing] or stay focused on [core thing]?"
- "What should the user see when successful?"
### Decision Gate
After initial questions, ask:
"Ready to proceed with building, or would you like me to ask more questions?"
Options:
1. **Proceed to building** - I have enough context
2. **Ask more questions** - There are more details to clarify
3. **Let me add details** - I want to provide additional context
## Step 2: Research Trigger (If External API)
**When external service detected**, ask using AskUserQuestion:
"This involves [service name] API. Would you like me to research current endpoints and patterns before building?"
Options:
1. **Yes, research first** - Fetch current documentation for accurate implementation
2. **No, proceed with general patterns** - Use common patterns without specific API research
If research requested:
- Use Context7 MCP to fetch current library documentation
- Or use WebSearch for recent API documentation
- Focus on 2024-2025 sources
- Store findings for use in content generation
## Step 3: Decide Structure
**Simple skill (single workflow, <200 lines):**
→ Single SKILL.md file with all content
**Complex skill (multiple workflows OR domain knowledge):**
→ Router pattern:
```
skill-name/
├── SKILL.md (router + principles)
├── workflows/ (procedures - FOLLOW)
├── references/ (knowledge - READ)
├── templates/ (output structures - COPY + FILL)
└── scripts/ (reusable code - EXECUTE)
```
Factors favoring router pattern:
- Multiple distinct user intents (create vs debug vs ship)
- Shared domain knowledge across workflows
- Essential principles that must not be skipped
- Skill likely to grow over time
**Consider templates/ when:**
- Skill produces consistent output structures (plans, specs, reports)
- Structure matters more than creative generation
**Consider scripts/ when:**
- Same code runs across invocations (deploy, setup, API calls)
- Operations are error-prone when rewritten each time
See references/recommended-structure.md for templates.
## Step 4: Create Directory
```bash
mkdir -p ~/.claude/skills/{skill-name}
# If complex:
mkdir -p ~/.claude/skills/{skill-name}/workflows
mkdir -p ~/.claude/skills/{skill-name}/references
# If needed:
mkdir -p ~/.claude/skills/{skill-name}/templates # for output structures
mkdir -p ~/.claude/skills/{skill-name}/scripts # for reusable code
```
## Step 5: Write SKILL.md
**Simple skill:** Write complete skill file with:
- YAML frontmatter (name, description)
- `<objective>`
- `<quick_start>`
- Content sections with pure XML
- `<success_criteria>`
**Complex skill:** Write router with:
- YAML frontmatter
- `<essential_principles>` (inline, unavoidable)
- `<intake>` (question to ask user)
- `<routing>` (maps answers to workflows)
- `<reference_index>` and `<workflows_index>`
## Step 6: Write Workflows (if complex)
For each workflow:
```xml
<required_reading>
Which references to load for this workflow
</required_reading>
<process>
Step-by-step procedure
</process>
<success_criteria>
How to know this workflow is done
</success_criteria>
```
## Step 7: Write References (if needed)
Domain knowledge that:
- Multiple workflows might need
- Doesn't change based on workflow
- Contains patterns, examples, technical details
## Step 8: Validate Structure
Check:
- [ ] YAML frontmatter valid
- [ ] Name matches directory (lowercase-with-hyphens)
- [ ] Description says what it does AND when to use it (third person)
- [ ] No markdown headings (#) in body - use XML tags
- [ ] Required tags present: objective, quick_start, success_criteria
- [ ] All referenced files exist
- [ ] SKILL.md under 500 lines
- [ ] XML tags properly closed
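The name-matches-directory check is scriptable; a sketch with a placeholder skill path:
```bash
# Sketch: confirm the frontmatter name matches the directory name.
dir=~/.claude/skills/my-skill
name=$(sed -n 's/^name:[[:space:]]*//p' "$dir/SKILL.md" | head -1)
[ "$name" = "$(basename "$dir")" ] || echo "Name mismatch: '$name' vs directory"
```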
## Step 9: Create Slash Command
```bash
cat > ~/.claude/commands/{skill-name}.md << 'EOF'
---
description: {Brief description}
argument-hint: [{argument hint}]
allowed-tools: Skill({skill-name})
---
Invoke the {skill-name} skill for: $ARGUMENTS
EOF
```
## Step 10: Test
Invoke the skill and observe:
- Does it ask the right intake question?
- Does it load the right workflow?
- Does the workflow load the right references?
- Does output match expectations?
Iterate based on real usage, not assumptions.
</process>
<success_criteria>
Skill is complete when:
- [ ] Requirements gathered with appropriate questions
- [ ] API research done if external service involved
- [ ] Directory structure correct
- [ ] SKILL.md has valid frontmatter
- [ ] Essential principles inline (if complex skill)
- [ ] Intake question routes to correct workflow
- [ ] All workflows have required_reading + process + success_criteria
- [ ] References contain reusable domain knowledge
- [ ] Slash command exists and works
- [ ] Tested with real invocation
</success_criteria>

# Workflow: Get Guidance on Skill Design
<required_reading>
**Read these reference files NOW:**
1. references/core-principles.md
2. references/recommended-structure.md
</required_reading>
<process>
## Step 1: Understand the Problem Space
Ask the user:
- What task or domain are you trying to support?
- Is this something you do repeatedly?
- What makes it complex enough to need a skill?
## Step 2: Determine If a Skill Is Right
**Create a skill when:**
- Task is repeated across multiple sessions
- Domain knowledge doesn't change frequently
- Complex enough to benefit from structure
- Would save significant time if automated
**Don't create a skill when:**
- One-off task (just do it directly)
- Changes constantly (will be outdated quickly)
- Too simple (overhead isn't worth it)
- Better as a slash command (user-triggered, no context needed)
Share this assessment with user.
## Step 3: Map the Workflows
Ask: "What are the different things someone might want to do with this skill?"
Common patterns:
- Create / Read / Update / Delete
- Build / Debug / Ship
- Setup / Use / Troubleshoot
- Import / Process / Export
Each distinct workflow = potential workflow file.
## Step 4: Identify Domain Knowledge
Ask: "What knowledge is needed regardless of which workflow?"
This becomes references:
- API patterns
- Best practices
- Common examples
- Configuration details
## Step 5: Draft the Structure
Based on answers, recommend structure:
**If 1 workflow, simple knowledge:**
```
skill-name/
└── SKILL.md (everything in one file)
```
**If 2+ workflows, shared knowledge:**
```
skill-name/
├── SKILL.md (router)
├── workflows/
│ ├── workflow-a.md
│ └── workflow-b.md
└── references/
└── shared-knowledge.md
```
## Step 6: Identify Essential Principles
Ask: "What rules should ALWAYS apply, no matter which workflow?"
These become `<essential_principles>` in SKILL.md.
Examples:
- "Always verify before reporting success"
- "Never store credentials in code"
- "Ask before making destructive changes"
## Step 7: Present Recommendation
Summarize:
- Recommended structure (simple vs router pattern)
- List of workflows
- List of references
- Essential principles
Ask: "Does this structure make sense? Ready to build it?"
If yes → offer to switch to "Create a new skill" workflow
If no → clarify and iterate
</process>
<decision_framework>
## Quick Decision Framework
| Situation | Recommendation |
|-----------|----------------|
| Single task, repeat often | Simple skill |
| Multiple related tasks | Router + workflows |
| Complex domain, many patterns | Router + workflows + references |
| User-triggered, fresh context | Slash command, not skill |
| One-off task | No skill needed |
</decision_framework>
<success_criteria>
Guidance is complete when:
- [ ] User understands if they need a skill
- [ ] Structure is recommended and explained
- [ ] Workflows are identified
- [ ] References are identified
- [ ] Essential principles are identified
- [ ] User is ready to build (or decided not to)
</success_criteria>

# Workflow: Upgrade Skill to Router Pattern
<required_reading>
**Read these reference files NOW:**
1. references/recommended-structure.md
2. references/skill-structure.md
</required_reading>
<process>
## Step 1: Select the Skill
```bash
ls ~/.claude/skills/
```
Present numbered list, ask: "Which skill should be upgraded to the router pattern?"
## Step 2: Verify It Needs Upgrading
Read the skill:
```bash
cat ~/.claude/skills/{skill-name}/SKILL.md
ls ~/.claude/skills/{skill-name}/
```
**Already a router?** (has workflows/ and intake question)
→ Tell user it's already using router pattern, offer to add workflows instead
**Simple skill that should stay simple?** (under 200 lines, single workflow)
→ Explain that router pattern may be overkill, ask if they want to proceed anyway
**Good candidate for upgrade:**
- Over 200 lines
- Multiple distinct use cases
- Essential principles that shouldn't be skipped
- Growing complexity
## Step 3: Identify Components
Analyze the current skill and identify:
1. **Essential principles** - Rules that apply to ALL use cases
2. **Distinct workflows** - Different things a user might want to do
3. **Reusable knowledge** - Patterns, examples, technical details
Present findings:
```
## Analysis
**Essential principles I found:**
- [Principle 1]
- [Principle 2]
**Distinct workflows I identified:**
- [Workflow A]: [description]
- [Workflow B]: [description]
**Knowledge that could be references:**
- [Reference topic 1]
- [Reference topic 2]
```
Ask: "Does this breakdown look right? Any adjustments?"
## Step 4: Create Directory Structure
```bash
mkdir -p ~/.claude/skills/{skill-name}/workflows
mkdir -p ~/.claude/skills/{skill-name}/references
```
## Step 5: Extract Workflows
For each identified workflow:
1. Create `workflows/{workflow-name}.md`
2. Add required_reading section (references it needs)
3. Add process section (steps from original skill)
4. Add success_criteria section
## Step 6: Extract References
For each identified reference topic:
1. Create `references/{reference-name}.md`
2. Move relevant content from original skill
3. Structure with semantic XML tags
## Step 7: Rewrite SKILL.md as Router
Replace SKILL.md with router structure:
```markdown
---
name: {skill-name}
description: {existing description}
---
<essential_principles>
[Extracted principles - inline, cannot be skipped]
</essential_principles>
<intake>
**Ask the user:**
What would you like to do?
1. [Workflow A option]
2. [Workflow B option]
...
**Wait for response before proceeding.**
</intake>
<routing>
| Response | Workflow |
|----------|----------|
| 1, "keywords" | `workflows/workflow-a.md` |
| 2, "keywords" | `workflows/workflow-b.md` |
</routing>
<reference_index>
[List all references by category]
</reference_index>
<workflows_index>
| Workflow | Purpose |
|----------|---------|
| workflow-a.md | [What it does] |
| workflow-b.md | [What it does] |
</workflows_index>
```
## Step 8: Verify Nothing Was Lost
Compare original skill content against new structure:
- [ ] All principles preserved (now inline)
- [ ] All procedures preserved (now in workflows)
- [ ] All knowledge preserved (now in references)
- [ ] No orphaned content
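A rough scripted comparison can back up the manual pass, assuming a copy of the original SKILL.md was kept before Step 7 (the `.bak` name is an assumption, not part of this workflow):
```bash
# Sketch: a large word-count drop between the backup and the new structure
# hints at lost content.
skill=~/.claude/skills/my-skill
wc -w "$skill/SKILL.md.bak"
cat "$skill/SKILL.md" "$skill"/workflows/*.md "$skill"/references/*.md | wc -w
```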
## Step 9: Test
Invoke the upgraded skill:
- Does intake question appear?
- Does each routing option work?
- Do workflows load correct references?
- Does behavior match original skill?
Report any issues.
</process>
<success_criteria>
Upgrade is complete when:
- [ ] workflows/ directory created with workflow files
- [ ] references/ directory created (if needed)
- [ ] SKILL.md rewritten as router
- [ ] Essential principles inline in SKILL.md
- [ ] All original content preserved
- [ ] Intake question routes correctly
- [ ] Tested and working
</success_criteria>

# Workflow: Verify Skill Content Accuracy
<required_reading>
**Read these reference files NOW:**
1. references/skill-structure.md
</required_reading>
<purpose>
Audit checks structure. **Verify checks truth.**
Skills contain claims about external things: APIs, CLI tools, frameworks, services. These change over time. This workflow checks if a skill's content is still accurate.
</purpose>
<process>
## Step 1: Select the Skill
```bash
ls ~/.claude/skills/
```
Present numbered list, ask: "Which skill should I verify for accuracy?"
## Step 2: Read and Categorize
Read the entire skill (SKILL.md + workflows/ + references/):
```bash
cat ~/.claude/skills/{skill-name}/SKILL.md
cat ~/.claude/skills/{skill-name}/workflows/*.md 2>/dev/null
cat ~/.claude/skills/{skill-name}/references/*.md 2>/dev/null
```
Categorize by primary dependency type:
| Type | Examples | Verification Method |
|------|----------|---------------------|
| **API/Service** | manage-stripe, manage-gohighlevel | Context7 + WebSearch |
| **CLI Tools** | build-macos-apps (xcodebuild, swift) | Run commands |
| **Framework** | build-iphone-apps (SwiftUI, UIKit) | Context7 for docs |
| **Integration** | setup-stripe-payments | WebFetch + Context7 |
| **Pure Process** | create-agent-skills | No external deps |
Report: "This skill is primarily [type]-based. I'll verify using [method]."
## Step 3: Extract Verifiable Claims
Scan skill content and extract:
**CLI Tools mentioned:**
- Tool names (xcodebuild, swift, npm, etc.)
- Specific flags/options documented
- Expected output patterns
**API Endpoints:**
- Service names (Stripe, Meta, etc.)
- Specific endpoints documented
- Authentication methods
- SDK versions
**Framework Patterns:**
- Framework names (SwiftUI, React, etc.)
- Specific APIs/patterns documented
- Version-specific features
**File Paths/Structures:**
- Expected project structures
- Config file locations
Present: "Found X verifiable claims to check."
## Step 4: Verify by Type
### For CLI Tools
```bash
# Check tool exists
which {tool-name}
# Check version
{tool-name} --version
# Verify documented flags work
{tool-name} --help | grep "{documented-flag}"
```
### For API/Service Skills
Use Context7 to fetch current documentation:
```
mcp__context7__resolve-library-id: {service-name}
mcp__context7__get-library-docs: {library-id}, topic: {relevant-topic}
```
Compare skill's documented patterns against current docs:
- Are endpoints still valid?
- Has authentication changed?
- Are there deprecated methods being used?
### For Framework Skills
Use Context7:
```
mcp__context7__resolve-library-id: {framework-name}
mcp__context7__get-library-docs: {library-id}, topic: {specific-api}
```
Check:
- Are documented APIs still current?
- Have patterns changed?
- Are there newer recommended approaches?
### For Integration Skills
WebSearch for recent changes:
```
"[service name] API changes 2025"
"[service name] breaking changes"
"[service name] deprecated endpoints"
```
Then Context7 for current SDK patterns.
### For Services with Status Pages
WebFetch official docs/changelog if available.
## Step 5: Generate Freshness Report
Present findings:
```
## Verification Report: {skill-name}
### ✅ Verified Current
- [Claim]: [Evidence it's still accurate]
### ⚠️ May Be Outdated
- [Claim]: [What changed / newer info found]
→ Current: [what docs now say]
### ❌ Broken / Invalid
- [Claim]: [Why it's wrong]
→ Fix: [What it should be]
### Could Not Verify
- [Claim]: [Why verification wasn't possible]
---
**Overall Status:** [Fresh / Needs Updates / Significantly Stale]
**Last Verified:** [Today's date]
```
## Step 6: Offer Updates
If issues found:
"Found [N] items that need updating. Would you like me to:"
1. **Update all** - Apply all corrections
2. **Review each** - Show each change before applying
3. **Just the report** - No changes
If updating:
- Make changes based on verified current information
- Add verification date comment if appropriate
- Report what was updated
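If a verification date is added, a simple stamp works; a sketch (path is a placeholder, and whether to stamp at all is the judgment call above):
```bash
# Sketch: append a machine-readable verification date to the skill.
echo "<!-- last verified: $(date +%F) -->" >> ~/.claude/skills/my-skill/SKILL.md
```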
## Step 7: Suggest Verification Schedule
Based on skill type, recommend:
| Skill Type | Recommended Frequency |
|------------|----------------------|
| API/Service | Every 1-2 months |
| Framework | Every 3-6 months |
| CLI Tools | Every 6 months |
| Pure Process | Annually |
"This skill should be re-verified in approximately [timeframe]."
</process>
<verification_shortcuts>
## Quick Verification Commands
**Check if CLI tool exists and get version:**
```bash
which {tool} && {tool} --version
```
**Context7 pattern for any library:**
```
1. resolve-library-id: "{library-name}"
2. get-library-docs: "{id}", topic: "{specific-feature}"
```
**WebSearch patterns:**
- Breaking changes: "{service} breaking changes 2025"
- Deprecations: "{service} deprecated API"
- Current best practices: "{framework} best practices 2025"
</verification_shortcuts>
<success_criteria>
Verification is complete when:
- [ ] Skill categorized by dependency type
- [ ] Verifiable claims extracted
- [ ] Each claim checked with appropriate method
- [ ] Freshness report generated
- [ ] Updates applied (if requested)
- [ ] User knows when to re-verify
</success_criteria>