Skill Research
Your task is to gather research materials for skill creation with intelligent automation.
Purpose
Automate the research phase of skill creation by:
- Selecting appropriate research tools based on context
- Executing research scripts with correct parameters
- Organizing research into skill-specific directories
- Providing clean, attributed source materials for skill authoring
Inputs
Your task is to parse arguments from $ARGUMENTS:
- Required:
  - skill-name - Name of skill being researched (kebab-case)
- Optional:
  - sources - URLs, keywords, or categories to research
Research Script Selection
Choose the appropriate research script based on input context:
1. If User Provides Specific URLs
When sources contains one or more URLs (http/https):
scripts/firecrawl_scrape_url.py "<url>" --output "docs/research/skills/<skill-name>/<filename>.md"
Run for each URL provided.
2. If Researching Claude Code Patterns
When skill relates to Claude Code functionality (skills, commands, agents, hooks, plugins, MCP):
Ask user to confirm if this is about Claude Code:
This appears to be related to Claude Code functionality.
Use official Claude Code documentation? [Yes/No]
If yes:
scripts/jina_reader_docs.py --output-dir "docs/research/skills/<skill-name>"
3. General Topic Research (Default)
For all other cases, use Firecrawl web search with intelligent category selection.
First, conduct a mini brainstorm with the user to refine scope:
Let's refine the research scope for "<skill-name>":
1. What specific aspects should we focus on?
2. Which categories are most relevant? (choose one or multiple)
- github (code examples, repositories)
- research (academic papers, technical articles)
- pdf (documentation, guides)
- web (general web content - default, omit flag)
3. Any specific keywords or search terms to include?
Then execute:
# Single category (most common)
scripts/firecrawl_sdk_research.py "<query>" \
--limit <num-results> \
--category <category> \
--output "docs/research/skills/<skill-name>/research.md"
# Multiple categories (advanced)
scripts/firecrawl_sdk_research.py "<query>" \
--limit <num-results> \
--categories github,research,pdf \
--output "docs/research/skills/<skill-name>/research.md"
Default parameters:
- limit: 10 (adjustable based on scope)
- category: Based on user input, or use --categories for multiple, or omit for general web search
- query: Skill name + refined keywords from brainstorm
Output Directory Management
All research saves to: docs/research/skills/<skill-name>/
Execution Process
Step 1: Parse Arguments
Your task is to extract skill name and sources from $ARGUMENTS:
- Split arguments by space
- First argument: skill name (required)
- Remaining arguments: sources (optional)
Validation:
- Skill name must be kebab-case (lowercase with hyphens)
- Skill name cannot be empty
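The parsing and validation steps above can be sketched in shell. This is a minimal sketch, assuming $ARGUMENTS is a plain space-separated string; the example value is illustrative:

```shell
# Word-split $ARGUMENTS: first token is the skill name, the rest are sources
ARGUMENTS="docker-best-practices https://docs.docker.com/develop/"
set -- $ARGUMENTS
skill_name="$1"
shift
sources="$*"

# kebab-case: lowercase words of letters/digits joined by single hyphens
# (rejects empty names, leading/trailing hyphens, and consecutive hyphens)
if [ -z "$skill_name" ] || ! echo "$skill_name" | grep -Eq '^[a-z0-9]+(-[a-z0-9]+)*$'; then
  echo "Error: Invalid skill name format: $skill_name" >&2
  exit 1
fi

echo "skill=$skill_name"
echo "sources=$sources"
```

The single anchored regex covers all four validation rules at once, so no separate checks for consecutive or trailing hyphens are needed.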
Step 2: Determine Research Strategy
Your task is to analyze sources to select script:
If sources contain URLs (starting with http:// or https://):
→ Use firecrawl_scrape_url.py for each URL
Else if skill-name matches Claude Code patterns:
(Contains: skill, command, agent, hook, plugin, mcp, slash, subagent)
→ Ask user if they want official Claude Code docs
→ If yes: Use jina_reader_docs.py
Else:
→ Use firecrawl_sdk_research.py with brainstorm
Step 3: Create Output Directory
mkdir -p "docs/research/skills/<skill-name>"
Step 4: Execute Research Script
Run selected script with appropriate parameters based on selection logic.
Environment check:
Before running Firecrawl scripts, verify API key:
if [ -z "$FIRECRAWL_API_KEY" ]; then
echo "Error: FIRECRAWL_API_KEY environment variable not set"
echo "Set it with: export FIRECRAWL_API_KEY='fc-your-api-key'"
exit 1
fi
Script execution patterns:
For URL scraping:
for url in $urls; do
  # Derive a safe filename: strip the scheme, replace slashes and dots
  # with hyphens, and truncate to 50 characters
  filename=$(echo "$url" | sed 's|https\?://||' | sed 's|[/.]|-|g' | cut -c1-50)
  scripts/firecrawl_scrape_url.py "$url" \
    --output "docs/research/skills/<skill-name>/${filename}.md"
done
For Claude Code docs:
scripts/jina_reader_docs.py \
--output-dir "docs/research/skills/<skill-name>"
For general research:
# Single category
scripts/firecrawl_sdk_research.py "$query" \
--limit $limit \
--category $category \
--output "docs/research/skills/<skill-name>/research.md"
# Or multiple categories
scripts/firecrawl_sdk_research.py "$query" \
--limit $limit \
--categories github,research,pdf \
--output "docs/research/skills/<skill-name>/research.md"
Step 5: Verify Research Output
Check that research files were created:
ls -lh "docs/research/skills/<skill-name>/"
Display summary:
✓ Research completed for <skill-name>
Output directory: docs/research/skills/<skill-name>/
Files created: X files
Total size: Y KB
Research materials ready for formatting and skill creation.
Next steps:
1. Review research materials
2. Run: /meta-claude:skill:format docs/research/skills/<skill-name>
3. Run: /meta-claude:skill:create <skill-name> docs/research/skills/<skill-name>
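The file count and total size in the summary above can be computed with standard tools. This is a sketch; the output directory path is illustrative:

```shell
out_dir="docs/research/skills/ansible-vault-security"

file_count=$(find "$out_dir" -type f | wc -l | tr -d ' ')
total_size=$(du -sk "$out_dir" | cut -f1)   # size in KB

echo "✓ Research completed"
echo "Output directory: $out_dir/"
echo "Files created: $file_count files"
echo "Total size: ${total_size} KB"
```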
Error Handling
Missing FIRECRAWL_API_KEY
Error: FIRECRAWL_API_KEY environment variable not set.
Firecrawl research scripts require an API key.
Set it with:
export FIRECRAWL_API_KEY='fc-your-api-key'
Get your API key from: https://firecrawl.dev
Alternative: Use manual research and skip this step.
Exit with error code 1.
Script Execution Failures
If research script fails:
Error: Research script failed with exit code X
Script: <script-name>
Command: <full-command>
Error output: <stderr>
Troubleshooting:
- Verify API key is valid
- Check network connectivity
- Verify script permissions (chmod +x)
- Review script output above for specific errors
Research failed. Fix the error and try again.
Exit with error code 1.
Invalid Skill Name
Error: Invalid skill name format: <skill-name>
Skill names must:
- Use kebab-case (lowercase with hyphens)
- Contain only letters, numbers, and hyphens
- Not start or end with hyphens
- Not contain consecutive hyphens
Examples:
✓ docker-compose-helper
✓ git-workflow-automation
✗ DockerHelper (use docker-helper)
✗ git--workflow (no consecutive hyphens)
Please provide a valid skill name.
Exit with error code 1.
No Sources Provided for URL Scraping
If user provides no sources but you detect they want URL scraping:
No URLs provided for research.
Usage:
/meta-claude:skill:research <skill-name> <url1> [url2] [url3]
Example:
/meta-claude:skill:research docker-best-practices https://docs.docker.com/develop/dev-best-practices/
Or run without URLs for general web research:
/meta-claude:skill:research docker-best-practices
Exit with error code 1.
Examples
Example 1: General Research with Defaults
User invocation:
/meta-claude:skill:research ansible-vault-security
Process:
- Detect no URLs, not Claude Code specific
- Mini brainstorm with user about scope
- Execute firecrawl_sdk_research.py:
  scripts/firecrawl_sdk_research.py \
    "ansible vault security best practices" \
    --limit 10 \
    --output docs/research/skills/ansible-vault-security/research.md
- Display summary with next steps
Example 2: Scraping Specific URLs
User invocation:
/meta-claude:skill:research terraform-best-practices \
https://developer.hashicorp.com/terraform/tutorials \
https://spacelift.io/blog/terraform-best-practices
Process:
- Detect URLs in arguments
- Create output directory: docs/research/skills/terraform-best-practices/
- Scrape each URL:
  - developer-hashicorp-com-terraform-tutorials.md
  - spacelift-io-blog-terraform-best-practices.md
- Display summary with file list
Example 3: Claude Code Documentation
User invocation:
/meta-claude:skill:research skill-creator-advanced
Process:
- Detect "skill" in name, matches Claude Code pattern
- Ask: "This appears to be related to Claude Code functionality. Use official Claude Code documentation? [Yes/No]"
- User: Yes
- Execute:
  scripts/jina_reader_docs.py --output-dir docs/research/skills/skill-creator-advanced
- Display summary with downloaded docs list
Example 4: Research with Category Filtering
User invocation:
/meta-claude:skill:research machine-learning-pipelines
Process:
- Mini brainstorm reveals focus on academic research papers
- User selects category: research
- Execute firecrawl_sdk_research.py:
  scripts/firecrawl_sdk_research.py \
    "machine learning pipelines" \
    --limit 10 \
    --category research \
    --output docs/research/skills/machine-learning-pipelines/research.md
- Display summary
Success Criteria
Research is successful when:
- Research scripts execute without errors
- Output directory contains research files
- Files are non-empty and contain markdown content
- Summary displays file count and total size
- Next steps guide user to formatting and creation phases
Exit Codes
- 0: Success - research completed and saved
- 1: Failure - invalid input, missing API key, script errors, or execution failures