Initial commit

commands/skill/create.md

---
description: "Create a skill using the proven skill-creator workflow with research context"
allowed-tools: Bash(command:*)
argument-hint: [skill-name] [research-dir] [output-dir]
---

# Skill Create

Create a skill using the proven skill-creator workflow with research context.

## Arguments

- `$1` - Name of the skill to create (e.g., docker-master)
- `$2` - Path to directory containing research materials
- `$3` - (Optional) Custom output directory path (defaults to plugins/meta/meta-claude/skills/)

## Usage

```bash
/meta-claude:skill:create $1 $2 [$3]
```

## Your Task

Your task is to invoke the skill-creator skill, which guides you through the complete creation workflow:

1. Understanding (uses research as context)
2. Planning skill contents
3. Initializing structure (init_skill.py)
4. Editing SKILL.md and resources
5. Packaging (package_skill.py)
6. Iteration

## Instructions

Invoke the skill-creator skill using the Skill tool. Use the following syntax:

```text
Skill(skill: "example-skills:skill-creator")
```

Then provide this instruction to the skill:

```text
I need to create a new skill called $1.

Research materials are available at: $2

Please guide me through the skill creation process using this research as context for:
- Understanding the skill's purpose and use cases
- Planning reusable resources (scripts, references, assets)
- Implementing SKILL.md with proper structure
- Creating supporting files if needed

Output location: plugins/meta/meta-claude/skills/$1/
```

If `$3` is provided (custom output location), replace the output location in the instruction with `$3/$1/` instead of the default path.
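
For example, with the hypothetical invocation `/meta-claude:skill:create docker-master docs/research/skills/docker-master/ plugins/custom/skills/` (the custom output path here is illustrative only), the last line of the instruction would read:

```text
Output location: plugins/custom/skills/docker-master/
```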

## Expected Workflow

Your task is to guide the user through these phases using skill-creator:

1. **Understanding:** Review research to identify use cases
2. **Planning:** Determine scripts, references, assets needed
3. **Initializing:** Run init_skill.py to create structure
4. **Editing:** Implement SKILL.md and bundled resources
5. **Packaging:** Run package_skill.py for distribution
6. **Iteration:** Refine based on testing

## Error Handling

**If research directory missing:**

- Report error: "Research directory not found at `$2`"
- Suggest: Run `/meta-claude:skill:research` first or provide correct path
- Exit with failure

**If skill-creator errors:**

- Report the specific error from skill-creator
- Preserve research materials
- Exit with failure

## Examples

**Create skill from research:**

```bash
/meta-claude:skill:create docker-master docs/research/skills/docker-master/
# Output: Skill created at plugins/meta/meta-claude/skills/docker-master/
```

**With custom output location:**

```bash
/meta-claude:skill:create coderabbit plugins/meta/claude-dev-sandbox/skills/coderabbit/ plugins/code-review/claude-dev-sandbox/skills/
# Note: When custom output path is provided as third argument ($3), use it instead of default
# Output: Skill created at plugins/code-review/claude-dev-sandbox/skills/coderabbit/
```

commands/skill/format.md

---
description: "Light cleanup of research materials - remove UI artifacts and basic formatting"
allowed-tools: Bash(command:*)
argument-hint: [research-dir]
---

# Skill Format

Light cleanup of research materials - remove UI artifacts and apply basic formatting.

## Usage

```bash
/meta-claude:skill:format <research-dir>
```

## What This Does

Runs `format_skill_research.py` to:

- Remove GitHub UI elements (navigation, headers, footers)
- Remove redundant blank lines
- Ensure code fences have proper spacing
- Ensure lists have proper spacing

**Philosophy:** "Car wash" approach - remove chunks of mud before detail work.

Does NOT restructure content for skill format.
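
The actual logic lives in `format_skill_research.py`; as a rough illustration of the kind of light pass it performs (a sketch, not the real script - the directory path is a placeholder):

```python
from pathlib import Path
import re

FENCE = "`" * 3  # avoid writing a literal fence inside this example

def light_clean(text: str) -> str:
    """Illustrative cleanup: collapse blank-line runs and pad code fences."""
    # Collapse runs of blank lines down to a single blank line.
    text = re.sub(r"\n{3,}", "\n\n", text)
    # Ensure a blank line before any code fence that directly follows text.
    text = re.sub(r"([^\n])\n" + re.escape(FENCE), r"\1\n\n" + FENCE, text)
    return text

# Hypothetical directory; the real command takes it from $ARGUMENTS.
for md_file in Path("docs/research/skills/example-skill").rglob("*.md"):
    md_file.write_text(light_clean(md_file.read_text()))
```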

## Instructions

Your task is to format and clean markdown research files.

1. Extract the research directory path from `$ARGUMENTS` (provided by the user)
1. Verify the directory exists using a test command
1. If directory doesn't exist, show the error from the Error Handling section
1. Run the cleanup script with the provided directory path:

   ```bash
   ${CLAUDE_PLUGIN_ROOT}/../../scripts/format_skill_research.py "$ARGUMENTS"
   ```

1. Display the script output showing which files were processed

The script processes all `.md` files recursively in the directory.

## Expected Output

```text
Found N markdown files to clean
Processing: file1.md
✓ Cleaned: file1.md
Processing: file2.md
✓ Cleaned: file2.md

✓ Formatted N files in <research-dir>
```

## Error Handling

**If directory not found:**

```text
Error: Directory not found: <research-dir>
```

Suggest: Check path or run `/meta-claude:skill:research` first

**If no markdown files:**

```text
No markdown files found in <research-dir>
```

Action: Skip formatting, continue workflow

## Examples

**Format research:**

```bash
/meta-claude:skill:format docs/research/skills/docker-master/
# Output: ✓ Formatted 5 files in docs/research/skills/docker-master/
```

**Already clean:**

```bash
/meta-claude:skill:format docs/research/skills/clean-skill/
# Output: No markdown files found (or no changes needed)
```

commands/skill/research.md

---
description: Fully automated research gathering for skill creation
argument-hint: [skill-name] [sources]
allowed-tools: Bash(python:*), Bash(mkdir:*), Bash(ls:*), Bash(echo:*), Read, Write, AskUserQuestion
---

# Skill Research

Your task is to gather research materials for skill creation with intelligent automation.

## Purpose

Automate the research phase of skill creation by:

- Selecting appropriate research tools based on context
- Executing research scripts with correct parameters
- Organizing research into skill-specific directories
- Providing clean, attributed source materials for skill authoring

## Inputs

Your task is to parse arguments from `$ARGUMENTS`:

- **Required:** `skill-name` - Name of skill being researched (kebab-case)
- **Optional:** `sources` - URLs, keywords, or categories to research

## Research Script Selection

Choose the appropriate research script based on input context:

### 1. If User Provides Specific URLs

When `sources` contains one or more URLs (http/https):

```bash
scripts/firecrawl_scrape_url.py "<url>" --output "docs/research/skills/<skill-name>/<filename>.md"
```

Run for each URL provided.

### 2. If Researching Claude Code Patterns

When the skill relates to Claude Code functionality (skills, commands, agents, hooks, plugins, MCP):

Ask the user to confirm whether this is about Claude Code:

```text
This appears to be related to Claude Code functionality.
Use official Claude Code documentation? [Yes/No]
```

If yes:

```bash
scripts/jina_reader_docs.py --output-dir "docs/research/skills/<skill-name>"
```

### 3. General Topic Research (Default)

For all other cases, use Firecrawl web search with intelligent category selection.

First, conduct a mini brainstorm with the user to refine scope:

```text
Let's refine the research scope for "<skill-name>":

1. What specific aspects should we focus on?
2. Which categories are most relevant? (choose one or multiple)
   - github (code examples, repositories)
   - research (academic papers, technical articles)
   - pdf (documentation, guides)
   - web (general web content - default, omit flag)

3. Any specific keywords or search terms to include?
```

Then execute:

```bash
# Single category (most common)
scripts/firecrawl_sdk_research.py "<query>" \
  --limit <num-results> \
  --category <category> \
  --output "docs/research/skills/<skill-name>/research.md"

# Multiple categories (advanced)
scripts/firecrawl_sdk_research.py "<query>" \
  --limit <num-results> \
  --categories github,research,pdf \
  --output "docs/research/skills/<skill-name>/research.md"
```

**Default parameters:**

- `limit`: 10 (adjustable based on scope)
- `category`: Based on user input, or use `--categories` for multiple, or omit for general web search
- `query`: Skill name + refined keywords from brainstorm

## Output Directory Management

All research saves to: `docs/research/skills/<skill-name>/`

## Execution Process

### Step 1: Parse Arguments

Your task is to extract the skill name and sources from `$ARGUMENTS`:

- Split arguments by space
- First argument: skill name (required)
- Remaining arguments: sources (optional)

**Validation:**

- Skill name must be kebab-case (lowercase with hyphens)
- Skill name cannot be empty
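
A minimal sketch of this validation (the regex mirrors the rules listed in the Invalid Skill Name section below; treat it as an assumption, not the canonical check):

```python
import re
import sys

# No uppercase, no underscores, no leading/trailing or consecutive hyphens.
KEBAB_CASE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

def validate_skill_name(name: str) -> None:
    """Exit with an error message if the skill name is not valid kebab-case."""
    if not name or not KEBAB_CASE.match(name):
        sys.exit(f"Error: Invalid skill name format: {name}")

validate_skill_name("docker-compose-helper")  # passes silently
```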

### Step 2: Determine Research Strategy

Your task is to analyze the sources to select a script:

```text
If sources contain URLs (starts with http:// or https://):
  → Use firecrawl_scrape_url.py for each URL

Else if skill-name matches Claude Code patterns:
  (Contains: skill, command, agent, hook, plugin, mcp, slash, subagent)
  → Ask user if they want official Claude Code docs
  → If yes: Use jina_reader_docs.py

Else:
  → Use firecrawl_sdk_research.py with brainstorm
```
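
As a rough sketch of this decision logic (the keyword list is taken from the block above; everything else is illustrative):

```python
CLAUDE_CODE_KEYWORDS = ("skill", "command", "agent", "hook", "plugin", "mcp", "slash", "subagent")

def pick_strategy(skill_name: str, sources: list[str]) -> str:
    """Return which research script to use, mirroring the decision tree above."""
    if any(s.startswith(("http://", "https://")) for s in sources):
        return "firecrawl_scrape_url.py"
    if any(keyword in skill_name for keyword in CLAUDE_CODE_KEYWORDS):
        return "jina_reader_docs.py"  # after asking the user to confirm
    return "firecrawl_sdk_research.py"

print(pick_strategy("skill-creator-advanced", []))  # -> jina_reader_docs.py
```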

### Step 3: Create Output Directory

```bash
mkdir -p "docs/research/skills/<skill-name>"
```

### Step 4: Execute Research Script

Run the selected script with appropriate parameters based on the selection logic.

**Environment check:**

Before running Firecrawl scripts, verify the API key:

```bash
if [ -z "$FIRECRAWL_API_KEY" ]; then
  echo "Error: FIRECRAWL_API_KEY environment variable not set"
  echo "Set it with: export FIRECRAWL_API_KEY='fc-your-api-key'"
  exit 1
fi
```

**Script execution patterns:**

**For URL scraping:**

```bash
for url in $urls; do
  filename=$(echo "$url" | sed 's|https\?://||' | sed 's|/|-|g' | cut -c1-50)
  scripts/firecrawl_scrape_url.py "$url" \
    --output "docs/research/skills/<skill-name>/${filename}.md"
done
```

**For Claude Code docs:**

```bash
scripts/jina_reader_docs.py \
  --output-dir "docs/research/skills/<skill-name>"
```

**For general research:**

```bash
# Single category
scripts/firecrawl_sdk_research.py "$query" \
  --limit $limit \
  --category $category \
  --output "docs/research/skills/<skill-name>/research.md"

# Or multiple categories
scripts/firecrawl_sdk_research.py "$query" \
  --limit $limit \
  --categories github,research,pdf \
  --output "docs/research/skills/<skill-name>/research.md"
```

### Step 5: Verify Research Output

Check that research files were created:

```bash
ls -lh "docs/research/skills/<skill-name>/"
```
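
If it helps to fill in the file count and total size for the summary below, a small sketch (the directory name is a placeholder for the actual `<skill-name>`):

```python
from pathlib import Path

research_dir = Path("docs/research/skills/example-skill")  # hypothetical skill name
files = list(research_dir.rglob("*.md"))
total_kb = sum(f.stat().st_size for f in files) / 1024
print(f"Files created: {len(files)} files")
print(f"Total size: {total_kb:.0f} KB")
```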

Display summary:

```text
✓ Research completed for <skill-name>

Output directory: docs/research/skills/<skill-name>/
Files created: X files
Total size: Y KB

Research materials ready for formatting and skill creation.

Next steps:
1. Review research materials
2. Run: /meta-claude:skill:format docs/research/skills/<skill-name>
3. Run: /meta-claude:skill:create <skill-name> docs/research/skills/<skill-name>
```

## Error Handling

### Missing FIRECRAWL_API_KEY

```text
Error: FIRECRAWL_API_KEY environment variable not set.

Firecrawl research scripts require an API key.

Set it with:
  export FIRECRAWL_API_KEY='fc-your-api-key'

Get your API key from: https://firecrawl.dev

Alternative: Use manual research and skip this step.
```

Exit with error code 1.

### Script Execution Failures

If a research script fails:

```text
Error: Research script failed with exit code X

Script: <script-name>
Command: <full-command>
Error output: <stderr>

Troubleshooting:
- Verify API key is valid
- Check network connectivity
- Verify script permissions (chmod +x)
- Review script output above for specific errors

Research failed. Fix the error and try again.
```

Exit with error code 1.

### Invalid Skill Name

```text
Error: Invalid skill name format: <skill-name>

Skill names must:
- Use kebab-case (lowercase with hyphens)
- Contain only letters, numbers, and hyphens
- Not start or end with hyphens
- Not contain consecutive hyphens

Examples:
✓ docker-compose-helper
✓ git-workflow-automation
✗ DockerHelper (use docker-helper)
✗ git__workflow (underscores are not allowed; use git-workflow)

Please provide a valid skill name.
```

Exit with error code 1.

### No Sources Provided for URL Scraping

If the user provides no sources but you detect they want URL scraping:

```text
No URLs provided for research.

Usage:
  /meta-claude:skill:research <skill-name> <url1> [url2] [url3]

Example:
  /meta-claude:skill:research docker-best-practices https://docs.docker.com/develop/dev-best-practices/

Or run without URLs for general web research:
  /meta-claude:skill:research docker-best-practices
```

Exit with error code 1.

## Examples

### Example 1: General Research with Defaults

User invocation:

```bash
/meta-claude:skill:research ansible-vault-security
```

Process:

1. Detect no URLs, not Claude Code specific
2. Mini brainstorm with user about scope
3. Execute firecrawl_sdk_research.py:

   ```bash
   scripts/firecrawl_sdk_research.py \
     "ansible vault security best practices" \
     --limit 10 \
     --output docs/research/skills/ansible-vault-security/research.md
   ```

4. Display summary with next steps

### Example 2: Scraping Specific URLs

User invocation:

```bash
/meta-claude:skill:research terraform-best-practices \
  https://developer.hashicorp.com/terraform/tutorials \
  https://spacelift.io/blog/terraform-best-practices
```

Process:

1. Detect URLs in arguments
2. Create output directory: `docs/research/skills/terraform-best-practices/`
3. Scrape each URL:
   - `developer-hashicorp-com-terraform-tutorials.md`
   - `spacelift-io-blog-terraform-best-practices.md`
4. Display summary with file list

### Example 3: Claude Code Documentation

User invocation:

```bash
/meta-claude:skill:research skill-creator-advanced
```

Process:

1. Detect "skill" in name, matches Claude Code pattern
2. Ask: "This appears to be related to Claude Code functionality. Use official Claude Code documentation? [Yes/No]"
3. User: Yes
4. Execute: `scripts/jina_reader_docs.py --output-dir docs/research/skills/skill-creator-advanced`
5. Display summary with downloaded docs list

### Example 4: Research with Category Filtering

User invocation:

```bash
/meta-claude:skill:research machine-learning-pipelines
```

Process:

1. Mini brainstorm reveals focus on academic research papers
2. User selects category: `research`
3. Execute firecrawl_sdk_research.py:

   ```bash
   scripts/firecrawl_sdk_research.py \
     "machine learning pipelines" \
     --limit 10 \
     --category research \
     --output docs/research/skills/machine-learning-pipelines/research.md
   ```

4. Display summary

## Success Criteria

Research is successful when:

1. Research scripts execute without errors
2. Output directory contains research files
3. Files are non-empty and contain markdown content
4. Summary displays file count and total size
5. Next steps guide user to formatting and creation phases

## Exit Codes

- **0:** Success - research completed and saved
- **1:** Failure - invalid input, missing API key, script errors, or execution failures

commands/skill/review-compliance.md

---
description: Run technical compliance validation on a skill using quick_validate.py
argument-hint: [skill-path]
---

# Skill Review Compliance

Your task is to run technical compliance validation on a skill using quick_validate.py and report the results.

## Usage

```bash
/meta-claude:skill:review-compliance /path/to/skill
```

## What This Does

Validates:

- SKILL.md file exists
- YAML frontmatter is valid
- Required fields present (name, description)
- Name follows hyphen-case convention (max 64 chars)
- Description has no angle brackets (max 1024 chars)
- No unexpected frontmatter properties
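
The checks live in quick_validate.py; as an illustrative sketch of roughly what such a validation does (not the actual script - it assumes PyYAML is available and the set of allowed frontmatter keys is a guess):

```python
import re
import sys
import yaml  # assumes PyYAML is installed
from pathlib import Path

def validate(skill_dir: str) -> str:
    skill_md = Path(skill_dir) / "SKILL.md"
    if not skill_md.is_file():
        return "SKILL.md file not found"
    match = re.match(r"^---\n(.*?)\n---\n", skill_md.read_text(), re.DOTALL)
    if not match:
        return "Missing YAML frontmatter"
    meta = yaml.safe_load(match.group(1)) or {}
    if not meta.get("name") or not meta.get("description"):
        return "Missing 'name' or 'description' in frontmatter"
    if not re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", meta["name"]) or len(meta["name"]) > 64:
        return "Name must be hyphen-case and at most 64 characters"
    if len(meta["description"]) > 1024 or re.search(r"[<>]", meta["description"]):
        return "Description must be at most 1024 characters with no angle brackets"
    # Allowed key set below is an assumption, not the official list.
    unexpected = set(meta) - {"name", "description", "allowed-tools", "license"}
    if unexpected:
        return f"Unexpected frontmatter properties: {sorted(unexpected)}"
    return "Skill is valid!"

print(validate(sys.argv[1]))
```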

## Instructions

Run the quick_validate.py script from skill-creator with the skill path provided by the user:

```bash
${CLAUDE_PLUGIN_ROOT}/skills/skill-creator/scripts/quick_validate.py $ARGUMENTS
```

Where `$ARGUMENTS` is the skill path provided by the user.

**Expected output if valid:**

```text
Skill is valid!
```

**Expected output if invalid:**

```text
[Specific error message describing the violation]
```

## Error Handling

**If validation passes:**

- Report: "✅ Compliance validation passed"
- Exit with success

**If validation fails:**

- Report the specific violation
- Include categorization in output: "This is a Tier 1 (auto-fix)" or "This is a Tier 3 (manual fix)"
- Exit with failure

**Tier 1 (Auto-fix) examples:**

- Missing description → Add generic description
- Invalid YAML → Fix YAML syntax
- Name formatting → Convert to hyphen-case

**Tier 3 (Manual fix) examples:**

- Invalid name characters
- Description too long
- Unexpected frontmatter keys

## Examples

**Valid skill:**

```bash
/meta-claude:skill:review-compliance plugins/meta/meta-claude/skills/skill-creator
# Output: Skill is valid!
```

**Invalid skill:**

```bash
/meta-claude:skill:review-compliance plugins/example/broken-skill
# Output: Missing 'description' in frontmatter. This is a Tier 1 (auto-fix)
```

commands/skill/review-content.md

# Skill Review Content

Review content quality (clarity, completeness, usefulness) of a skill's SKILL.md file.

## Usage

```bash
/meta-claude:skill:review-content [skill-path]
```

## What This Does

Analyzes SKILL.md content for:

- **Clarity:** Instructions are clear, well-structured, and easy to follow
- **Completeness:** All necessary information is present and organized
- **Usefulness:** Content provides value for Claude's context
- **Examples:** Practical, accurate, and helpful examples are included
- **Actionability:** Instructions are specific and executable

## Instructions

Read and analyze the SKILL.md file at the provided path. Evaluate content across five quality dimensions:

### 1. Clarity Assessment

Check for:

- Clear, concise writing without unnecessary verbosity
- Logical structure with appropriate headings and sections
- Technical terms explained when necessary
- Consistent terminology throughout
- Proper markdown formatting (lists, code blocks, emphasis)

**Issues to flag:**

- Ambiguous or vague instructions
- Confusing organization or missing structure
- Unexplained jargon or acronyms
- Inconsistent terminology
- Poor markdown formatting

### 2. Completeness Assessment

Verify:

- Purpose and use cases clearly stated
- All necessary context provided
- Prerequisites or dependencies documented
- Edge cases and error handling covered
- Table of contents (if content is long)

**Issues to flag:**

- Missing purpose statement or unclear use cases
- Incomplete workflows or partial instructions
- Undocumented dependencies
- No error handling guidance
- Long content without navigation aids

### 3. Examples Assessment

Evaluate:

- Examples are practical and relevant
- Code examples have correct syntax
- Examples demonstrate real use cases
- Sufficient variety to cover common scenarios
- Examples are accurate and tested

**Issues to flag:**

- Missing examples for key workflows
- Incorrect or broken code examples
- Trivial examples that don't demonstrate value
- Insufficient coverage of use cases
- Outdated or inaccurate examples

### 4. Actionability Assessment

Confirm:

- Instructions are specific and executable
- Clear steps for workflows
- Commands or code ready to use
- Expected outcomes documented
- Success criteria defined

**Issues to flag:**

- Vague instructions without concrete steps
- Missing command syntax or parameters
- No expected output examples
- Unclear success criteria
- Abstract guidance without implementation details

### 5. Usefulness for Claude

Assess:

- Description triggers skill appropriately
- Content enhances Claude's capabilities
- Follows progressive disclosure principle
- Avoids redundant general knowledge
- Provides specialized context Claude lacks

**Issues to flag:**

- Description too narrow or too broad
- Content duplicates Claude's general knowledge
- Excessive verbosity wasting tokens
- Missing specialized knowledge
- Poor description for skill triggering

## Scoring Guidelines

Each quality dimension receives PASS or FAIL based on these criteria:

**PASS if:**

- No issues flagged in that dimension, OR
- Only Tier 1 (auto-fixable) issues found

**FAIL if:**

- 2+ issues of any tier in that dimension, OR
- Any Tier 2 (guided fix) issue found, OR
- Any Tier 3 (complex) issue found

**Rationale:** A dimension with only simple auto-fixable issues should not fail the review, but substantive problems should be flagged for remediation.
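
One hedged way to operationalize the per-dimension scoring (this resolves the overlap between the PASS and FAIL bullets by letting a dimension with only Tier 1 findings pass, which is one reading of the rationale above):

```python
def score_dimension(issues: list[dict]) -> str:
    """issues: e.g. [{"tier": 1, "note": "missing blank line around code block"}, ...]"""
    tiers = [issue["tier"] for issue in issues]
    if not tiers or all(tier == 1 for tier in tiers):
        return "PASS"  # no issues, or only auto-fixable Tier 1 issues
    return "FAIL"      # any Tier 2/3 issue fails the dimension

scores = {
    "Clarity": score_dimension([]),
    "Examples": score_dimension([{"tier": 2, "note": "no error-handling example"}]),
}
# The full pass criteria below also require no Tier 3 issues and adequate critical sections.
overall = "PASS" if list(scores.values()).count("FAIL") <= 1 else "FAIL"
print(scores, overall)
```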

## Quality Report Format

Generate a structured report with the following sections:

```text
## Content Quality Review: <skill-name>

**Overall Status:** PASS | FAIL

### Summary

[1-2 sentence overview of quality assessment]

### Quality Scores

- Clarity: PASS | FAIL
- Completeness: PASS | FAIL
- Examples: PASS | FAIL
- Actionability: PASS | FAIL
- Usefulness: PASS | FAIL

### Issues Found

#### Tier 1 (Simple - Auto-fixable)
[Issues that can be automatically corrected]
- [ ] Issue description (e.g., "Missing blank lines around code blocks")

#### Tier 2 (Medium - Guided fixes)
[Issues requiring judgment but clear remediation]
- [ ] Issue description (e.g., "Add examples for error handling workflow")

#### Tier 3 (Complex - Manual review)
[Issues requiring significant rework or design decisions]
- [ ] Issue description (e.g., "Restructure content for better progressive disclosure")

### Recommendations

[Specific, actionable suggestions for improvement]

### Strengths

[Positive aspects worth highlighting]
```

## Error Handling

**If SKILL.md not found:**

```text
Error: SKILL.md not found at <skill-path>
```

Action: Verify path is correct or run `/meta-claude:skill:create` first

**If content passes review:**

- Report: "Content Quality Review: PASS"
- Highlight strengths
- Note minor suggestions if any
- Exit with success

**If content has issues:**

- Report: "Content Quality Review: FAIL"
- List all issues categorized by tier
- Provide specific recommendations
- Exit with failure

**Issue Categorization:**

- **Tier 1 (Auto-fix):** Formatting, spacing, markdown syntax
- **Tier 2 (Guided fix):** Missing sections, incomplete examples, unclear descriptions
- **Tier 3 (Complex):** Structural problems, fundamental clarity issues, major rewrites

## Pass Criteria

Content PASSES if:

- At least 4 of 5 quality dimensions pass
- No Tier 3 (complex) issues found
- Critical sections (description, purpose, examples) are adequate

Content FAILS if:

- 2 or more quality dimensions fail
- Any Tier 3 (complex) issues found
- Critical sections are missing or fundamentally flawed

## Examples

**High-quality skill:**

```bash
/meta-claude:skill:review-content @plugins/meta/meta-claude/skills/skill-creator
# Output: Content Quality Review: PASS
# - All quality dimensions passed
# - Clear structure with progressive disclosure
# - Comprehensive examples and actionable guidance
```

**Skill needing improvement:**

```bash
/meta-claude:skill:review-content @/path/to/draft-skill
# Output: Content Quality Review: FAIL
#
# Issues Found:
# Tier 2:
# - Add examples for error handling workflow
# - Clarify success criteria for validation step
# Tier 3:
# - Restructure content for better progressive disclosure
# - Description too broad, needs refinement for triggering
```

## Notes

- This review focuses on **content quality**, not technical compliance
- Technical validation (frontmatter, naming) is handled by `/meta-claude:skill:review-compliance`
- Be constructive: highlight strengths and provide actionable suggestions
- Content quality is subjective: use judgment and consider the skill's purpose
- Focus on whether Claude can effectively use the skill, not perfection

commands/skill/validate-audit.md

---
argument-hint: [skill-path]
description: Run comprehensive skill audit using skill-auditor agent (non-blocking)
---

# Skill Validate Audit

Run a comprehensive skill audit using the skill-auditor agent (non-blocking).

## Usage

```bash
/meta-claude:skill:validate-audit <skill-path>
```

## What This Does

Invokes the skill-auditor agent to perform comprehensive analysis against official Anthropic specifications:

- Structure validation
- Content quality assessment
- Best practice compliance
- Progressive disclosure design
- Frontmatter completeness

**Note:** This is non-blocking validation - it provides recommendations even if prior validation failed.

## Instructions

Your task is to invoke the skill-auditor agent using the Agent tool to audit the skill at `$ARGUMENTS`.

Call the Agent tool with the following prompt:

```text
I need to audit the skill at $ARGUMENTS for compliance with official Claude Code specifications.

Please review:
- SKILL.md structure and organization
- Frontmatter quality and completeness
- Progressive disclosure patterns
- Content clarity and usefulness
- Adherence to best practices

Provide a detailed audit report with recommendations.
```

## Expected Output

The agent will provide:

- Overall assessment (compliant/needs improvement)
- Specific recommendations by category
- Best practice suggestions
- Priority levels for improvements

## Error Handling

**Always succeeds** - the audit is purely informational.

Even if the skill has validation failures, the audit provides debugging feedback.

## Examples

**Audit a new skill:**

```bash
/meta-claude:skill:validate-audit plugins/meta/meta-claude/skills/docker-master
# Output: Comprehensive audit report with recommendations
```

**Audit after fixes:**

```bash
/meta-claude:skill:validate-audit plugins/meta/meta-claude/skills/docker-master
# Output: Updated audit showing improvements
```

commands/skill/validate-integration.md

---
description: Test skill integration with Claude Code ecosystem (conflict detection and compatibility)
allowed-tools: Bash(test:*), Bash(find:*), Bash(rg:*), Bash(git:*), Read
---

# Skill Validate Integration

Test skill integration with the Claude Code ecosystem (conflict detection and compatibility).

## Usage

```bash
/meta-claude:skill:validate-integration $ARGUMENTS
```

**Arguments:**

- `$1`: Path to the skill directory to validate (required)

## What This Does

Validates that the skill integrates cleanly with the Claude Code ecosystem:

- **Naming Conflicts:** Verifies no duplicate skill names exist
- **Functionality Overlap:** Checks for overlapping descriptions/purposes
- **Command Composition:** Tests compatibility with slash commands
- **Component Integration:** Validates interaction with other meta-claude components
- **Ecosystem Fit:** Ensures skill complements existing capabilities

**Note:** This is integration validation - it tests ecosystem compatibility, not runtime behavior.

## Instructions

Your task is to perform integration validation checks on the skill at the provided path.

### Step 1: Verify Skill Exists

Verify that the skill directory contains a valid SKILL.md file:

!`test -f "$1/SKILL.md" && echo "SKILL.md exists" || echo "Error: SKILL.md not found"`

If SKILL.md does not exist, report error and exit.

### Step 2: Extract Skill Metadata

Read the SKILL.md frontmatter to extract key metadata. The frontmatter format is:

```yaml
---
name: skill-name
description: Skill description text
---
```

**Extract the following from the skill at `$1/SKILL.md`:**

- Skill name
- Skill description
- Any additional metadata fields

Use this metadata for conflict detection and overlap analysis.

### Step 3: Check for Naming Conflicts

Search for existing skills with the same name across the marketplace.

**Find all SKILL.md files:**

!`find "$(git rev-parse --show-toplevel)/plugins" -type f -name "SKILL.md"`

**For each existing skill, perform the following analysis:**

1. Extract the skill name from frontmatter
2. Compare with the new skill's name (from `$1/SKILL.md`)
3. If names match, record as a naming conflict

**Detect these types of conflicts** (a small normalization sketch follows the list):

- Exact name matches (case-insensitive comparison)
- Similar names that differ only by pluralization
- Names that differ only in separators (hyphens vs underscores)
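
A minimal sketch of the kind of normalization that catches these near-matches (the singular/plural handling is deliberately naive and only an assumption about how to compare names):

```python
def normalize(name: str) -> str:
    """Reduce a skill name to a comparable form: lowercase, unified separators, naive singular."""
    name = name.lower().replace("_", "-")
    return name[:-1] if name.endswith("s") else name  # crude plural stripping

def conflicts(new_name: str, existing_name: str) -> bool:
    return normalize(new_name) == normalize(existing_name)

print(conflicts("Ansible_Best_Practices", "ansible-best-practice"))  # True
```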

### Step 4: Detect Functionality Overlap

Analyze skill descriptions to identify overlapping functionality:

**Compare:**

- Skill descriptions for semantic similarity
- Key phrases and trigger words
- Domain overlap (e.g., both handle "Docker containers")
- Use case overlap (e.g., both create infrastructure as code)

**Overlap categories:**

- **Duplicate:** Essentially the same functionality (HIGH concern)
- **Overlapping:** Significant overlap but different focus (MEDIUM concern)
- **Complementary:** Related but distinct functionality (LOW concern)
- **Independent:** No overlap (PASS)

**Analysis approach** (a Jaccard sketch follows the list):

1. **Extract key terms** from both descriptions:
   - Tokenize descriptions (split on whitespace, punctuation)
   - Remove stopwords (the, a, an, is, are, for, etc.)
   - Identify domain keywords (docker, kubernetes, terraform, ansible, etc.)

2. **Calculate semantic overlap score** using Jaccard similarity:
   - Intersection: Terms present in both descriptions
   - Union: All unique terms from both descriptions
   - Score = (Intersection size / Union size) * 100
   - Example: If 7 of 10 unique terms overlap → 70% score

3. **Categorize overlap based on score**:
   - **Duplicate:** >70% term overlap or identical purpose
   - **Overlapping:** 50-70% term overlap with different focus
   - **Complementary:** 30-50% overlap, related domain
   - **Independent:** <30% overlap

4. **Flag if overlap exceeds threshold** (>70% duplicate, >50% overlapping)
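
A compact sketch of this scoring (the stopword list is a small assumption, and the second example description is hypothetical):

```python
import re

STOPWORDS = {"the", "a", "an", "is", "are", "for", "and", "to", "of", "with"}

def key_terms(description: str) -> set[str]:
    tokens = re.findall(r"[a-z0-9]+", description.lower())
    return {token for token in tokens if token not in STOPWORDS}

def overlap_score(desc_a: str, desc_b: str) -> float:
    terms_a, terms_b = key_terms(desc_a), key_terms(desc_b)
    union = terms_a | terms_b
    return 100 * len(terms_a & terms_b) / len(union) if union else 0.0

score = overlap_score(
    "Analyze Docker container security and generate compliance reports",
    "Scan Docker containers for security issues and report findings",  # hypothetical
)
category = ("Duplicate" if score > 70 else "Overlapping" if score > 50
            else "Complementary" if score > 30 else "Independent")
print(f"{score:.0f}% -> {category}")
```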

### Step 5: Test Slash Command Composition

Verify the skill can work with existing slash commands by checking for:

- References to non-existent commands
- Circular dependencies between skills and commands
- Incompatible parameter expectations
- Missing prerequisite commands

**Find all slash command references:**

!`rg -o '/[a-z][a-z0-9:-]+\b' "$1/SKILL.md"`

**For each referenced command, perform these checks:**

1. Verify command exists in marketplace
2. Check parameter compatibility
3. Ensure no circular dependencies
4. Validate execution order makes sense

### Step 6: Validate Component Integration

Verify compatibility with other meta-claude components.

**Analyze integration with these meta-claude components:**

- **Skills:** Other skills in meta-claude plugin
- **Agents:** Agent definitions that might invoke this skill
- **Hooks:** Automation hooks that trigger skills
- **Commands:** Slash commands in the same plugin

**Perform these integration checks:**

1. Verify skill doesn't conflict with existing meta-claude workflows
2. Check if skill references valid agent definitions
3. Ensure skill uses correct hook event names
4. Validate skill works with plugin architecture

**Examples of what to verify:**

- If skill references `/meta-claude:skill:create`, verify it exists
- If skill mentions "skill-auditor agent", verify agent exists
- If skill uses hooks, verify hook events are valid
- If skill depends on scripts, verify they exist

### Step 7: Assess Ecosystem Fit

Evaluate whether the skill complements existing capabilities:

**Assessment criteria:**

- **Fills a gap:** Provides functionality not currently available
- **Enhances existing:** Improves or extends current capabilities
- **Consolidates:** Combines fragmented functionality
- **Replaces:** Better alternative to existing skill (requires justification)

**Red flags:**

- Skill provides identical functionality to existing skill
- Skill conflicts with established patterns
- Skill introduces breaking changes
- Skill duplicates without clear improvement

### Generate Integration Report

Generate a structured report in the following format:

```markdown
## Integration Validation Report: <skill-name>

**Overall Status:** PASS | FAIL

### Summary

[1-2 sentence overview of integration validation results]

### Validation Results

- Naming Conflicts: PASS | FAIL
- Functionality Overlap: PASS | FAIL
- Command Composition: PASS | FAIL
- Component Integration: PASS | FAIL
- Ecosystem Fit: PASS | FAIL

### Conflicts Found

#### Critical (Must Resolve)
[Conflicts that prevent integration]
- [ ] Conflict description with affected component

#### Warning (Should Review)
[Potential issues that need attention]
- [ ] Issue description with recommendation

#### Info (Consider)
[Minor considerations or enhancements]
- [ ] Suggestion for better integration

### Integration Details

**Existing Skills Analyzed:** [count]
**Commands Referenced:** [list]
**Agents Referenced:** [list]
**Conflicts Detected:** [count]
**Overlap Analysis:**
- Duplicate functionality: [list]
- Overlapping functionality: [list]
- Complementary skills: [list]

### Recommendations

[Specific, actionable suggestions for improving integration]

### Next Steps

[What to do based on validation results]
```

## Error Handling

**If SKILL.md not found:**

```text
Error: SKILL.md not found at $1
```

Report this error and advise verifying the path is correct or running `/meta-claude:skill:create` first.

**If integration validation passes:**

Report the following:

- Status: "Integration Validation: PASS"
- Validation results summary
- Any complementary skills found
- Success confirmation

**If integration validation fails:**

Report the following:

- Status: "Integration Validation: FAIL"
- All conflicts categorized by severity
- Specific resolution recommendations
- Failure indication

**Conflict Severity Levels:**

- **Critical:** Must resolve before deployment (exact name conflict, duplicate functionality)
- **Warning:** Should address before deployment (high overlap, missing commands)
- **Info:** Consider for better integration (similar names, potential consolidation)

## Pass Criteria

Integration validation PASSES if:

- No exact naming conflicts found
- Functionality overlap below threshold (<50%)
- All referenced commands exist
- Compatible with meta-claude architecture
- Complements existing ecosystem

Integration validation FAILS if:

- Exact skill name already exists
- Duplicate functionality without improvement
- References non-existent commands
- Breaks existing component integration
- Conflicts with established patterns

## Examples

**Skill with clean integration:**

```bash
/meta-claude:skill:validate-integration plugins/meta/meta-claude/skills/docker-security
# Output: Integration Validation: PASS
# - No naming conflicts detected
# - Functionality is complementary to existing skills
# - All referenced commands exist
# - Compatible with meta-claude components
# - Fills gap in security analysis domain
```

**Skill with naming conflict:**

```bash
/meta-claude:skill:validate-integration /path/to/duplicate-skill
# Output: Integration Validation: FAIL
#
# Conflicts Found:
# Critical:
# - Skill name 'skill-creator' already exists in plugins/meta/meta-claude/skills/skill-creator
# - Exact duplicate functionality: both create SKILL.md files
# Recommendation: Choose different name or consolidate with existing skill
```

**Skill with functionality overlap:**

```bash
/meta-claude:skill:validate-integration /path/to/overlapping-skill
# Output: Integration Validation: FAIL
#
# Conflicts Found:
# Warning:
# - 75% description overlap with 'python-code-quality' skill
# - Both skills handle Python linting and formatting
# - Referenced command '/run-pytest' does not exist
# Recommendation: Consider consolidating or clearly differentiating scope
```

**Skill with minor concerns:**

```bash
/meta-claude:skill:validate-integration /path/to/new-skill
# Output: Integration Validation: PASS
#
# Info:
# - Skill name similar to 'ansible-best-practice' (note singular vs plural)
# - Could complement 'ansible-best-practices' skill with cross-references
# - Consider mentioning relationship in description
```

## Notes

- This validation tests **ecosystem integration**, not runtime or compliance
- Run after `/meta-claude:skill:validate-runtime` passes
- Focuses on conflicts with existing skills, commands, and components
- Sequential dependency: requires runtime validation to pass first
- Integration issues often require human judgment to resolve
- Consider both technical conflicts and strategic fit
- Some overlap may be acceptable if skills serve different use cases
- Clear differentiation in descriptions helps avoid false positives

commands/skill/validate-runtime.md

---
description: Test skill by attempting to load it in Claude Code context (runtime validation)
argument-hint: [skill-path]
allowed-tools: Bash(test:*), Read
---

# Skill Validate Runtime

Test a skill by attempting to load it in the Claude Code context (runtime validation).

## Usage

```bash
/meta-claude:skill:validate-runtime <skill-path>
```

The skill path is available as `$ARGUMENTS` in the command context.

## What This Does

Validates that the skill actually loads and functions in Claude Code runtime:

- **Syntax Check:** Verifies SKILL.md markdown syntax is valid
- **Frontmatter Parsing:** Ensures YAML frontmatter parses correctly
- **Description Triggering:** Tests if description would trigger the skill appropriately
- **Progressive Disclosure:** Confirms skill content loads without errors
- **Context Loading:** Verifies skill can be loaded into Claude's context

**Note:** This is runtime validation - it tests actual loading behavior, not just static analysis.

## Instructions

Your task is to perform runtime validation checks on the skill at the provided path.

### Step 1: Verify Skill Structure

Your task is to check that the skill directory contains a valid SKILL.md file:

!`test -f "$ARGUMENTS/SKILL.md" && echo "SKILL.md exists" || echo "Error: SKILL.md not found"`

If SKILL.md does not exist, report error and exit.

### Step 2: Validate Markdown Syntax

Your task is to read the SKILL.md file and check for markdown syntax issues:

- Verify all code blocks are properly fenced with language identifiers
- Check for balanced heading levels (no missing hierarchy)
- Ensure lists are properly formatted
- Look for malformed markdown that could break rendering

**Common syntax issues to detect** (a minimal checking sketch follows the list):

- Unclosed code blocks (odd number of triple backticks)
- Code blocks without language specifiers
- Broken links with malformed syntax
- Improperly nested lists
- Invalid YAML frontmatter structure
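
As a minimal illustration of two of these checks, fence balance and frontmatter delimiters (a sketch only; the remaining checks are better done by reading the file directly):

```python
from pathlib import Path

def quick_syntax_checks(skill_md: str) -> list[str]:
    lines = Path(skill_md).read_text().splitlines()
    problems = []
    # Fence balance: every opening fence needs a closing fence.
    fence_lines = sum(1 for line in lines if line.lstrip().startswith("```"))
    if fence_lines % 2:
        problems.append("Unclosed code block (odd number of fence lines)")
    # Frontmatter: file should start with '---' and have a matching closing '---'.
    if not lines or lines[0].strip() != "---" or "---" not in [l.strip() for l in lines[1:]]:
        problems.append("Frontmatter missing or not delimited by --- lines")
    return problems

print(quick_syntax_checks("plugins/meta/meta-claude/skills/skill-creator/SKILL.md"))
```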

### Step 3: Parse Frontmatter

Your task is to extract and parse the YAML frontmatter block:

```yaml
---
name: skill-name
description: Skill description text
---
```

**Validation checks:**

- Frontmatter block exists and is properly delimited with `---`
- YAML syntax is valid (no parsing errors)
- Required fields present: `name`, `description`
- Field values are non-empty strings
- No special characters that could break parsing

### Step 4: Test Description Triggering

Your task is to evaluate whether the skill description would trigger appropriately:

**Check for:**

- Description is specific enough to trigger the skill (not too broad)
- Description is general enough to be useful (not too narrow)
- Description clearly indicates when to use the skill
- Description uses action-oriented language
- No misleading or ambiguous phrasing

**Example good descriptions:**

- "Deploy applications to Kubernetes clusters using Helm charts"
- "Analyze Docker container security and generate compliance reports"

**Example poor descriptions:**

- "General Kubernetes tasks" (too broad)
- "Deploy myapp version 2.3.1 to production cluster" (too narrow)
- "Kubernetes" (unclear when to trigger)

### Step 5: Verify Progressive Disclosure

Your task is to check that the skill content follows progressive disclosure principles:

**Key aspects:**

- Essential information is presented first
- Content is organized in logical sections
- Detailed information is nested under appropriate headings
- Examples and advanced usage are clearly separated
- Skill doesn't front-load unnecessary details

**Warning signs:**

- Giant wall of text at the top
- No clear section structure
- Examples mixed with core instructions
- Edge cases presented before main workflow
- Overwhelming amount of information at once

### Step 6: Test Context Loading

Your task is to simulate loading the skill content to verify it would work in Claude's context:

**Checks:**

- Total skill content size is reasonable (not exceeding token limits)
- Content can be parsed and structured properly
- No circular references or broken internal links
- Skill references only valid tools/commands
- File paths and code examples are well-formed

### Generate Runtime Test Report

Your task is to create a structured report with the following format:

```plaintext
## Runtime Validation Report: <skill-name>

**Overall Status:** PASS | FAIL

### Summary

[1-2 sentence overview of runtime validation results]

### Test Results

- Markdown Syntax: PASS | FAIL
- Frontmatter Parsing: PASS | FAIL
- Description Triggering: PASS | FAIL
- Progressive Disclosure: PASS | FAIL
- Context Loading: PASS | FAIL

### Issues Found

#### Critical (Must Fix)
[Issues that prevent skill from loading]
- [ ] Issue description with specific location

#### Warning (Should Fix)
[Issues that could cause problems]
- [ ] Issue description with specific location

#### Info (Consider Fixing)
[Minor issues or suggestions]
- [ ] Issue description

### Details

[Specific details about each test, including:]
- File size: X KB
- Frontmatter fields: name, description, [other fields]
- Content sections: [list of main sections]
- Code blocks: [count and languages]

### Recommendations

[Specific, actionable suggestions for fixing issues]
```

## Error Handling

**If SKILL.md not found:**

```plaintext
Error: SKILL.md not found at $ARGUMENTS
```

Action: Verify path is correct or run `/meta-claude:skill:create` first

**If runtime validation passes:**

- Report: "Runtime Validation: PASS"
- Show test results summary
- Note any info-level suggestions
- Exit with success

**If runtime validation fails:**

- Report: "Runtime Validation: FAIL"
- List all issues categorized by severity
- Provide specific fix recommendations
- Exit with failure

**Issue Severity Levels:**

- **Critical:** Skill cannot load (syntax errors, invalid YAML, missing required fields)
- **Warning:** Skill loads but may have problems (poor description, unclear structure)
- **Info:** Skill works but could be improved (minor formatting, suggestions)

## Pass Criteria

Runtime validation PASSES if:

- All critical tests pass (syntax, frontmatter, context loading)
- No critical issues found
- Skill can be loaded into Claude's context without errors

Runtime validation FAILS if:

- Any critical test fails
- Skill cannot be loaded due to syntax or parsing errors
- Description is fundamentally unusable for triggering
- Content structure breaks progressive disclosure

## Examples

**Valid skill with good runtime characteristics:**

```bash
/meta-claude:skill:validate-runtime plugins/meta/meta-claude/skills/skill-creator
# Output: Runtime Validation: PASS
# - All syntax checks passed
# - Frontmatter parsed successfully
# - Description triggers appropriately
# - Progressive disclosure structure confirmed
# - Skill loads into context successfully
```

**Skill with runtime issues:**

```bash
/meta-claude:skill:validate-runtime /path/to/broken-skill
# Output: Runtime Validation: FAIL
#
# Issues Found:
# Critical:
# - YAML frontmatter has syntax error on line 3 (unclosed string)
# - Code block at line 47 is unclosed (missing closing backticks)
# Warning:
# - Description is too broad: "General development tasks"
# - Progressive disclosure violated: 200 lines before first heading
```

**Skill with minor warnings:**

```bash
/meta-claude:skill:validate-runtime /path/to/working-skill
# Output: Runtime Validation: PASS
#
# Info:
# - Consider adding language identifier to code block at line 89
# - Description could be more specific about trigger conditions
```

## Notes

- This validation tests **runtime behavior**, not just static compliance
- Focus on whether the skill actually loads and functions in Claude Code
- Unlike `/meta-claude:skill:review-compliance` (static validation), this tests live loading
- Runtime validation should be run after compliance validation passes
- Tests simulate how Claude Code will interact with the skill at runtime
- Check for issues that only appear when trying to load the skill