Initial commit

Zhongwei Li
2025-11-30 09:01:22 +08:00
commit 4cbc82fbd4
9 changed files with 2670 additions and 0 deletions

12
.claude-plugin/plugin.json Normal file

@@ -0,0 +1,12 @@
{
"name": "skill-finder",
"description": "Skill: Find and evaluate Claude skills using semantic search and quality assessment",
"version": "0.0.0-2025.11.28",
"author": {
"name": "Misha Kolesnik",
"email": "misha@kolesnik.io"
},
"skills": [
"./skills/skill"
]
}

3
README.md Normal file

@@ -0,0 +1,3 @@
# skill-finder
Skill: Find and evaluate Claude skills using semantic search and quality assessment

64
plugin.lock.json Normal file

@@ -0,0 +1,64 @@
{
"$schema": "internal://schemas/plugin.lock.v1.json",
"pluginId": "gh:tenequm/claude-plugins:skill-finder",
"normalized": {
"repo": null,
"ref": "refs/tags/v20251128.0",
"commit": "efb66c3f54600ce350d1b59295c8776171cb858f",
"treeHash": "267a2e7a01ac22d21e891419f49a49698217cf9feb64672322ac64ec72f426e8",
"generatedAt": "2025-11-28T10:28:37.672867Z",
"toolVersion": "publish_plugins.py@0.2.0"
},
"origin": {
"remote": "git@github.com:zhongweili/42plugin-data.git",
"branch": "master",
"commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
"repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
},
"manifest": {
"name": "skill-finder",
"description": "Skill: Find and evaluate Claude skills using semantic search and quality assessment"
},
"content": {
"files": [
{
"path": "README.md",
"sha256": "87685b1c58e4a0268e06113aad66a4a42c46e96f2f0290ee5c239b64dfda9024"
},
{
"path": ".claude-plugin/plugin.json",
"sha256": "f23d41eb01d7e4066362fc3a494160c3d8914d3c88e44795d7eee802df393b8a"
},
{
"path": "skills/skill/SKILL.md",
"sha256": "195c829bcf55918348890947fda850a6ea1db2d543386b66898a11542770b49c"
},
{
"path": "skills/skill/references/ranking-algorithm.md",
"sha256": "1cb1daebb5df8dafbf61d1d0b8982dedc728b483b17322f0a2ae5b0a2a1556c7"
},
{
"path": "skills/skill/references/installation-workflow.md",
"sha256": "a294ee88ed66d5b2496d61ce0820efcf3545a0749e0c5adbd76838f39d761e47"
},
{
"path": "skills/skill/references/best-practices-checklist.md",
"sha256": "c06f36afa05bbbd1ddab1558885e377e839edee1f026384702abc9d5845da1a3"
},
{
"path": "skills/skill/references/search-strategies.md",
"sha256": "00f518d15ba28300644ecac3467058369c86e0373014501cfd82bd6bf40e8f80"
},
{
"path": "skills/skill/examples/sample-output.md",
"sha256": "8033affd13ed553833c284f32a0fd0aeb90a57626c10940b237ac77cd36f52d9"
}
],
"dirSha256": "267a2e7a01ac22d21e891419f49a49698217cf9feb64672322ac64ec72f426e8"
},
"security": {
"scannedAt": null,
"scannerVersion": null,
"flags": []
}
}

529
skills/skill/SKILL.md Normal file

@@ -0,0 +1,529 @@
---
name: skill-finder
description: Find and evaluate Claude skills for specific use cases using semantic search, Anthropic best practices assessment, and fitness scoring. Use when the user asks to find skills for a particular task (e.g., "find me a skill for pitch decks"), not for generic "show all skills" requests.
---
# Skill Finder
Find and evaluate Claude skills for your specific needs with intelligent semantic search, quality assessment, and fitness scoring.
## What This Skill Does
Skill-finder is a query-driven evaluation engine that:
- Searches GitHub for skills matching your specific use case
- Fetches and reads actual SKILL.md content
- Evaluates skills against Anthropic's best practices
- Scores fitness to your exact request
- Provides actionable quality assessments and recommendations
This is NOT a "show me popular skills" tool - it's a semantic matcher that finds the RIGHT skill for YOUR specific need.
## When to Use
- User asks to find skills for a **specific purpose**: "find me a skill for creating pitch decks"
- User needs help choosing between similar skills
- User wants quality-assessed recommendations, not just popularity rankings
- User asks "what's the best skill for [specific task]"
## Quick Start Examples
```bash
# Find skills for specific use case
"Find me a skill for creating pitch decks"
"What's the best skill for automated data analysis"
"Find skills that help with git commit messages"
# NOT: "Show me popular skills" (too generic)
# NOT: "List all skills" (use skill list command instead)
```
## Core Workflow
### Phase 1: Query Understanding
**Extract semantic terms from user query:**
User: "Find me a skill for creating pitch decks"
Extract terms:
- Primary: "pitch deck", "presentation"
- Secondary: "slides", "powerpoint", "keynote"
- Related: "business", "template"
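Turning these terms into the single OR query used in Phase 2 could look like the following minimal sketch (the variable contents are illustrative, not fixed):
```bash
# Sketch: build one OR query string from the extracted terms (illustrative values)
primary="pitch deck"
secondary=("presentation" "slides" "powerpoint" "keynote")
query="$primary"
for term in "${secondary[@]}"; do
    query="$query OR $term"
done
echo "$query"   # pitch deck OR presentation OR slides OR powerpoint OR keynote
```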
### Phase 2: Multi-Source Search
**Search Strategy:**
```bash
# 1. Repository search with semantic terms
gh search repos "claude skills pitch deck OR presentation OR slides" \
--sort stars --limit 20 --json name,stargazersCount,description,url,pushedAt,owner
# 2. Code search for SKILL.md with keywords
gh search code "pitch deck OR presentation" "filename:SKILL.md" \
--limit 20 --json repository,path,url
# 3. Search awesome-lists separately
gh search repos "awesome-claude-skills" --sort stars --limit 5 \
--json name,url,owner
```
**Deduplication:**
Collect all unique repositories from search results.
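One way to collect that unique set, assuming the Phase 2 searches were saved to repos.json and awesome.json and that fullName was included via --json (adjust field names to your actual output):
```bash
# Sketch: dedupe candidate repositories across the repo searches
jq -s '[.[][] | .fullName] | unique' repos.json awesome.json
```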
### Phase 3: Content Fetching
**For each candidate skill:**
```bash
# 1. Find SKILL.md location
gh api repos/OWNER/REPO/git/trees/main?recursive=1 | \
jq -r '.tree[] | select(.path | contains("SKILL.md")) | .path'
# 2. Fetch full SKILL.md content
gh api repos/OWNER/REPO/contents/PATH/TO/SKILL.md | \
jq -r '.content' | base64 -d > temp_skill.md
# 3. Fetch repository metadata
gh api repos/OWNER/REPO --jq '{
stars: .stargazers_count,
updated: .pushed_at,
description: .description
}'
```
**IMPORTANT:** Actually READ the SKILL.md content. Don't just use metadata.
### Phase 4: Quality Evaluation
**Use [best-practices-checklist.md](references/best-practices-checklist.md) to evaluate:**
For each skill, assess:
1. **Description Quality (2.0 points)**
- Specific vs vague?
- Includes what + when to use?
- Third person?
2. **Name Convention (0.5 points)**
- Follows naming rules?
- Descriptive?
3. **Conciseness (1.5 points)**
- Under 500 lines?
- No fluff?
4. **Progressive Disclosure (1.0 points)**
- Uses reference files?
- Good organization?
5. **Examples and Workflows (1.0 points)**
- Has concrete examples?
- Clear workflows?
6. **Appropriate Degree of Freedom (0.5 points)**
- Matches task complexity?
7. **Dependencies (0.5 points)**
- Documented?
- Verified available?
8. **Structure (1.0 points)**
- Well organized?
- Clear sections?
9. **Error Handling (0.5 points)**
- Scripts handle errors?
- Validation loops?
10. **Avoids Anti-Patterns (1.0 points)**
- No time-sensitive info?
- Consistent terminology?
- Unix paths?
11. **Testing (0.5 points)**
- Evidence of testing?
**Calculate quality_score (0-10):**
See [best-practices-checklist.md](references/best-practices-checklist.md) for detailed scoring.
### Phase 5: Fitness Scoring
**Semantic match calculation:**
```python
# Pseudo-code for semantic matching
user_query_terms = ["pitch", "deck", "presentation"]
skill_content = read_skill_md(skill_path)
# Check occurrences of user terms in skill
matches = []
for term in user_query_terms:
if term.lower() in skill_content.lower():
matches.append(term)
semantic_match_score = len(matches) / len(user_query_terms) * 10
```
**Fitness formula:**
```
fitness_score = (
semantic_match * 0.4 + # How well does it solve the problem?
quality_score * 0.3 + # Follows best practices?
(stars/100) * 0.2 + # Community validation
freshness_multiplier * 0.1 # Recent updates
)
Where:
- semantic_match: 0-10 (keyword matching in SKILL.md content)
- quality_score: 0-10 (from evaluation checklist)
- stars: repository star count
- freshness_multiplier: 0-10 based on days since update
```
**Freshness multiplier:**
```bash
# BSD/macOS date shown; on Linux use: date -d "$pushed_at" +%s
days_old=$(( ($(date +%s) - $(date -j -f "%Y-%m-%dT%H:%M:%SZ" "$pushed_at" +%s)) / 86400 ))
if [ $days_old -lt 30 ]; then
freshness_score=10
freshness_badge="🔥"
elif [ $days_old -lt 90 ]; then
freshness_score=7
freshness_badge="📅"
elif [ $days_old -lt 180 ]; then
freshness_score=5
freshness_badge="📆"
else
freshness_score=2
freshness_badge="⏰"
fi
```
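Putting the components together, a minimal sketch of the final calculation with illustrative values (note the stars term is uncapped in the formula above, so repositories with well over 1000 stars can push the total past 10):
```bash
# Sketch: combine the four components into a fitness score (illustrative values)
semantic_match=9.2    # 0-10, keyword matching in SKILL.md
quality_score=8.5     # 0-10, best-practices checklist
stars=245             # repository star count
freshness_score=10    # 0-10, from the tiers above
fitness=$(echo "scale=2; $semantic_match*0.4 + $quality_score*0.3 + ($stars/100)*0.2 + $freshness_score*0.1" | bc)
echo "FITNESS: $fitness/10"   # 3.68 + 2.55 + .49 + 1.0 = 7.72
```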
### Phase 6: Awesome-List Processing
**Extract skills from awesome-lists:**
```bash
# For each awesome-list found
for repo in "${awesome_lists[@]}"; do   # array of awesome-list repos found in Phase 2
# Fetch README or main content
gh api "repos/$repo/readme" | jq -r '.content' | base64 -d > readme.md
# Extract GitHub links to potential skills
grep -oE 'https://github.com/[^/]+/[^/)]+' readme.md | sort -u
# For each linked repo, check if it contains SKILL.md
# If yes, evaluate same as other skills
done
```
**Display awesome-list skills separately** in results for comparison.
### Phase 7: Result Ranking and Display
**Sort by fitness_score (descending)**
**Output format:**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🎯 Skills for: "[USER QUERY]"
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🏆 #1 skill-name ⭐ STARS FRESHNESS | FITNESS: X.X/10
Quality Assessment:
✅ Description: Excellent (2.0/2.0)
✅ Structure: Well organized (0.9/1.0)
⚠️ Length: 520 lines (over recommended 500)
✅ Examples: Clear workflows included
Overall Quality: 8.5/10 (Excellent)
Why it fits your request:
• Specifically designed for [relevant aspect]
• Mentions [user's key terms] 3 times
• Has [relevant feature]
• Includes [useful capability]
Why it's high quality:
• Follows Anthropic best practices
• Has comprehensive examples
• Clear workflows and validation
• Well-tested and maintained
📎 https://github.com/OWNER/REPO/blob/main/PATH/SKILL.md
[Preview Full Analysis] [Install]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🏆 #2 another-skill ⭐ STARS FRESHNESS | FITNESS: Y.Y/10
Quality Assessment:
✅ Good description and examples
⚠️ Some best practices not followed
❌ No progressive disclosure
Overall Quality: 6.2/10 (Good)
Why it fits your request:
• Partially addresses [need]
• Has [some relevant feature]
Why it's not ideal:
• Not specifically focused on [user's goal]
• Quality could be better
• Missing [important feature]
📎 https://github.com/OWNER/REPO/blob/main/SKILL.md
[Preview] [Install]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📚 From Awesome Lists:
Found in awesome-claude-skills (BehiSecc):
• related-skill-1 (FITNESS: 7.5/10) - Good match
• related-skill-2 (FITNESS: 5.2/10) - Partial match
Found in awesome-claude-skills (travisvn):
• another-option (FITNESS: 6.8/10) - Consider this
[Evaluate All] [Show Details]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
💡 Recommendation: skill-name (FITNESS: 8.7/10)
Best match for your needs. High quality, well-maintained,
and specifically designed for [user's goal].
Next best: another-skill (FITNESS: 7.2/10) if you need [alternative approach]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
## Key Differences from Generic Search
**Generic/Bad approach:**
- "Show me top 10 popular skills"
- Ranks only by stars
- No evaluation of actual content
- No fitness to user's specific need
**Query-Driven/Good approach:**
- "Find skills for [specific use case]"
- Reads actual SKILL.md content
- Evaluates against best practices
- Scores fitness to user's query
- Explains WHY it's a good match
## Evaluation Workflow
### Quick Evaluation (per skill ~3-4 min)
1. **Fetch SKILL.md** (30 sec)
2. **Read frontmatter** (30 sec)
- Check description quality
- Check name convention
3. **Scan body** (1-2 min)
- Check length
- Look for examples
- Check for references
- Note anti-patterns
4. **Check structure** (30 sec)
- Reference files?
- Scripts/utilities?
5. **Calculate scores** (30 sec)
- Quality score
- Semantic match
- Fitness score
### Full Evaluation (for top candidates)
For the top 3-5 candidates by fitness score, provide detailed analysis:
```
Full Analysis for: [skill-name]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 Quality Breakdown
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Description Quality: 2.0/2.0 ✅
• Specific and clear
• Includes what and when to use
• Written in third person
Name Convention: 0.5/0.5 ✅
• Follows naming rules
• Descriptive gerund form
Conciseness: 1.3/1.5 ⚠️
• 520 lines (over 500 recommended)
• Could be more concise
Progressive Disclosure: 1.0/1.0 ✅
• Excellent use of reference files
• Well-organized structure
Examples & Workflows: 1.0/1.0 ✅
• Clear concrete examples
• Step-by-step workflows
Degree of Freedom: 0.5/0.5 ✅
• Appropriate for task type
Dependencies: 0.5/0.5 ✅
• All documented
• Verified available
Structure: 0.9/1.0 ✅
• Well organized
• Minor heading inconsistencies
Error Handling: 0.4/0.5 ⚠️
• Good scripts
• Could improve validation
Anti-Patterns: 0.9/1.0 ✅
• Mostly clean
• One instance of inconsistent terminology
Testing: 0.5/0.5 ✅
• Clear testing approach
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Total Quality Score: 8.5/10 (Excellent)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🎯 Semantic Match Analysis
User Query: "pitch deck creation"
Skill Content Analysis:
✅ "pitch deck" mentioned 5 times
✅ "presentation" mentioned 12 times
✅ "slides" mentioned 8 times
✅ Has templates section
✅ Has business presentation examples
Semantic Match Score: 9.2/10
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Final FITNESS Score: 8.8/10
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Recommendation: Highly Recommended ⭐⭐⭐⭐⭐
```
## Reference Files
- [best-practices-checklist.md](references/best-practices-checklist.md) - Anthropic's best practices evaluation criteria
- [search-strategies.md](references/search-strategies.md) - Advanced search patterns
- [ranking-algorithm.md](references/ranking-algorithm.md) - Detailed scoring algorithms
- [installation-workflow.md](references/installation-workflow.md) - Installation process
## Example Usage
See [examples/sample-output.md](examples/sample-output.md) for complete output examples.
## Error Handling
**No results found:**
```
No skills found for: "[user query]"
Suggestions:
• Try broader search terms
• Check if query is too specific
• Search awesome-lists directly
• Consider creating a custom skill
```
**Low fitness scores (all < 5.0):**
```
⚠️ Found skills but none are a strong match.
Best partial matches:
1. [skill-name] (FITNESS: 4.2/10) - Missing [key feature]
2. [skill-name] (FITNESS: 3.8/10) - Different focus
Consider:
• Combine multiple skills
• Request skill from awesome-list curators
• Create custom skill for your specific need
```
**GitHub API rate limit:**
```
⚠️ GitHub API rate limit reached.
Current: 0/60 requests remaining (unauthenticated)
Resets: in 42 minutes
Solution:
export GH_TOKEN="your_github_token"
This increases limit to 5000/hour.
```
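Checking the remaining quota up front avoids failing mid-search; gh exposes the standard /rate_limit endpoint:
```bash
# Show remaining core-API requests and the reset time before starting a search run
gh api rate_limit --jq '.resources.core | "remaining: \(.remaining)/\(.limit), resets: \(.reset | todate)"'
```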
## Performance Optimization
**Parallel execution:**
```bash
# Run searches in parallel
{
gh search repos "claude skills $QUERY" --json name,fullName,stargazersCount,url,pushedAt > repos.json &
gh search code "$QUERY" "filename:SKILL.md" --json repository,path,url > code.json &
gh search repos "awesome-claude-skills" --json name,fullName,url,owner > awesome.json &
wait
}
```
**Caching:**
```bash
# Cache skill evaluations for 1 hour
mkdir -p .skill-eval-cache
cache_file=".skill-eval-cache/$repo_owner-$repo_name.json"
# stat -f %m is BSD/macOS; on Linux use: stat -c %Y "$cache_file"
if [ -f "$cache_file" ] && [ $(($(date +%s) - $(stat -f %m "$cache_file"))) -lt 3600 ]; then
cat "$cache_file"
else
evaluate_skill | tee "$cache_file"
fi
```
## Quality Tiers
Based on fitness score:
- **9.0-10.0:** Perfect match - Highly Recommended ⭐⭐⭐⭐⭐
- **7.0-8.9:** Excellent match - Recommended ⭐⭐⭐⭐
- **5.0-6.9:** Good match - Consider ⭐⭐⭐
- **3.0-4.9:** Partial match - Review carefully ⭐⭐
- **0.0-2.9:** Poor match - Not recommended ⭐
## Important Notes
### This is NOT:
- A "show popular skills" tool
- A generic ranking by stars
- A list of all skills
### This IS:
- A query-driven semantic matcher
- A quality evaluator against Anthropic best practices
- A fitness scorer for your specific need
- A recommendation engine
### Always:
- Read actual SKILL.md content (don't just use metadata)
- Evaluate against best practices checklist
- Score fitness to user's specific query
- Explain WHY a skill fits or doesn't fit
- Show quality assessment, not just stars
---
**Remember:** The goal is to find the RIGHT skill for the user's SPECIFIC need, not just show what's popular.

530
skills/skill/examples/sample-output.md Normal file

@@ -0,0 +1,530 @@
# Sample Output Examples
Expected output format for skill-finder searches with fitness-based evaluation.
## Example 1: Specific Use Case Query
**User Query:** "Find me a skill for creating pitch decks"
**Output:**
```
🔍 Searching for skills matching: "pitch deck creation"
Semantic terms: pitch deck, presentation, slides, powerpoint, keynote
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🎯 Skills for: "creating pitch decks"
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🏆 #1 presentation-builder ⭐ 245 🔥 | FITNESS: 9.2/10
Quality Assessment:
✅ Description: Excellent (2.0/2.0)
"Create presentations and pitch decks with templates, charts, and data visualization"
✅ Structure: Well organized (0.9/1.0)
✅ Examples: Comprehensive workflows (1.0/1.0)
⚠️ Length: 520 lines (slightly over 500 recommended)
✅ Progressive disclosure: Excellent use of reference files
Overall Quality: 8.7/10 (Excellent)
Why it fits your request:
• Specifically designed for pitch deck creation
• Mentions "pitch deck" 8 times in SKILL.md
• Has pitch deck templates and examples
• Includes business presentation workflows
• Supports PowerPoint and Google Slides
• Has data visualization helpers
Why it's high quality:
• Follows all Anthropic best practices
• Clear, concise instructions
• Comprehensive examples with workflows
• Well-tested and actively maintained
• Good error handling and validation
📎 https://github.com/user/presentation-builder/blob/main/SKILL.md
[Preview Full Analysis] [Install]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🏆 #2 office-automation ⭐ 450 📅 | FITNESS: 6.5/10
Quality Assessment:
✅ Well-documented with examples
✅ Good structure and organization
⚠️ Description too broad ("office automation")
⚠️ Not focused on presentations specifically
❌ Missing pitch deck specific features
Overall Quality: 7.2/10 (Good)
Why it partially fits:
• Includes PowerPoint/Slides capabilities
• Has slide creation examples
• Can handle basic presentations
Why it's not ideal:
• Generic "office automation" scope
• No pitch deck templates
• Lacks business presentation focus
• No data visualization specific to decks
• Lower semantic match (only 2 relevant mentions)
📎 https://github.com/user/office-automation/blob/main/SKILL.md
[Preview] [Install]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🏆 #3 document-creator ⭐ 180 🔥 | FITNESS: 4.8/10
Quality Assessment:
✅ Clear description and examples
⚠️ Mixed focus (documents + presentations)
❌ No progressive disclosure
❌ Very long SKILL.md (850 lines)
Overall Quality: 5.5/10 (Fair)
Why it's a weak match:
• Primarily focused on documents, not presentations
• Mentions "presentation" only once
• No pitch deck specific content
• Would need significant adaptation
📎 https://github.com/user/document-creator/blob/main/SKILL.md
[Review] [Skip]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📚 From Awesome Lists:
Found in awesome-claude-skills (BehiSecc):
• pptx-generator (FITNESS: 7.2/10) - Good for basic slides
• slide-master (FITNESS: 8.1/10) - Strong presentation focus
Found in awesome-claude-skills (travisvn):
• deck-builder (FITNESS: 7.8/10) - Pitch deck oriented
[Evaluate These] [Show All]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
💡 Recommendation: presentation-builder (FITNESS: 9.2/10) ⭐⭐⭐⭐⭐
Best match for your needs. Specifically designed for pitch decks,
high quality, follows best practices, and actively maintained.
Alternative: slide-master from awesome-list (FITNESS: 8.1/10)
if you need different templates or workflow.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
## Example 2: Data Analysis Query
**User Query:** "What's the best skill for automated data analysis?"
**Output:**
```
🔍 Searching for skills matching: "automated data analysis"
Semantic terms: data analysis, automation, analytics, statistics, visualization
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🎯 Skills for: "automated data analysis"
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🏆 #1 data-analyzer ⭐ 312 🔥 | FITNESS: 9.5/10
Quality Assessment:
✅ Description: Perfect specificity (2.0/2.0)
✅ Concise: 380 lines with excellent references
✅ Examples: Multiple workflows with validation
✅ Progressive disclosure: Exemplary
✅ Error handling: Comprehensive
Overall Quality: 9.3/10 (Excellent)
Why it's perfect for your need:
• Explicitly designed for automated data analysis
• Mentions "automation" and "analysis" throughout
• Has complete analysis workflows
• Includes validation and feedback loops
• Supports multiple data formats
• Has statistical analysis helpers
Key Features:
• CSV, Excel, JSON analysis
• Automated statistical tests
• Visualization generation
• Report automation
• Quality checks and validation
📎 https://github.com/user/data-analyzer/blob/main/SKILL.md
[Preview Full Analysis] [Install]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🏆 #2 excel-master ⭐ 456 📅 | FITNESS: 7.8/10
Quality Assessment:
✅ Good quality, well-maintained
⚠️ Excel-specific (limited to one format)
⚠️ Less automation focus
Overall Quality: 7.5/10 (Good)
Why it's good but not perfect:
• Strong Excel analysis capabilities
• Has some automation features
• But limited to Excel format only
• Less comprehensive than data-analyzer
📎 https://github.com/user/excel-master/blob/main/SKILL.md
[Preview] [Install]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
💡 Recommendation: data-analyzer (FITNESS: 9.5/10) ⭐⭐⭐⭐⭐
Exactly what you need. Purpose-built for automated data analysis,
excellent quality, comprehensive features.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
## Example 3: No Strong Match Found
**User Query:** "Find me a skill for quantum computing simulations"
**Output:**
```
🔍 Searching for skills matching: "quantum computing simulations"
Semantic terms: quantum, computing, simulation, qubit, circuit
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
⚠️ No Strong Matches Found
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Searched 47 skills, but none strongly match your query.
Best partial matches:
🏆 #1 scientific-computing ⭐ 123 📅 | FITNESS: 4.2/10
Quality: 7.0/10 (Good)
Partial match because:
• General scientific computing
• Mentions "simulation" a few times
• No quantum-specific content
• Would need significant adaptation
📎 https://github.com/user/scientific-computing/blob/main/SKILL.md
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🏆 #2 python-automation ⭐ 89 🔥 | FITNESS: 3.1/10
Quality: 6.5/10 (Good)
Weak match:
• Python scripting focus
• Could theoretically run quantum libraries
• But no quantum-specific guidance
📎 https://github.com/user/python-automation/blob/main/SKILL.md
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
💡 Recommendations:
None of these skills are strong matches for quantum computing.
Consider:
• Searching awesome-lists directly for quantum skills
• Requesting a quantum skill from curators
• Creating a custom skill for your specific need
• Broadening the search to "scientific computing" or "physics simulations"
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
## Example 4: Full Analysis Details
**User Action:** Clicks [Preview Full Analysis] on presentation-builder
**Output:**
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 Full Analysis: presentation-builder
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🏢 Repository Info:
Owner: presentation-tools
Stars: ⭐ 245
Updated: 🔥 2 days ago (very active)
URL: https://github.com/presentation-tools/presentation-builder
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 Quality Breakdown (Anthropic Best Practices)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Description Quality: 2.0/2.0 ✅
✅ Highly specific: "Create presentations and pitch decks"
✅ Includes what: presentation/pitch deck creation
✅ Includes when: "Use when creating business presentations"
✅ Written in third person
✅ Contains key trigger terms
✅ Under 1024 characters
Name Convention: 0.5/0.5 ✅
✅ Follows naming rules (lowercase, hyphens)
✅ Descriptive gerund form
✅ Clear and specific
✅ No reserved words
Conciseness: 1.3/1.5 ⚠️
⚠️ 520 lines (slightly over 500 recommended)
✅ No unnecessary fluff
✅ Gets to the point quickly
✅ Additional details in separate files
Progressive Disclosure: 1.0/1.0 ✅
✅ SKILL.md serves as excellent overview
✅ References 4 additional files appropriately:
• templates.md (pitch deck templates)
• charts.md (data visualization guide)
• workflows.md (presentation creation flows)
• examples.md (real-world examples)
✅ All references are 1 level deep
✅ Well-organized by feature
Examples & Workflows: 1.0/1.0 ✅
✅ Concrete pitch deck example
✅ Step-by-step workflow
✅ Input/output pairs shown
✅ Code snippets included
✅ Real patterns, not placeholders
Degree of Freedom: 0.5/0.5 ✅
✅ Appropriate for task type
✅ Flexible for creative tasks
✅ Structured for technical steps
✅ Good balance
Dependencies: 0.5/0.5 ✅
✅ All dependencies listed (python-pptx, matplotlib)
✅ Installation instructions clear
✅ Verified available in environment
Structure: 0.9/1.0 ✅
✅ Excellent organization
✅ Clear section headings
✅ Logical flow
⚠️ Minor: One inconsistent heading style
Error Handling: 0.5/0.5 ✅
✅ Scripts handle errors explicitly
✅ Validation loops for quality
✅ Clear error messages
✅ Feedback loops implemented
Anti-Patterns: 0.9/1.0 ✅
✅ No time-sensitive information
✅ Consistent terminology
✅ Unix-style paths throughout
⚠️ One instance: offers 2 template choices (minor)
Testing: 0.5/0.5 ✅
✅ Clear testing approach documented
✅ Example evaluations included
✅ Success criteria defined
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Total Quality Score: 8.7/10 (Excellent Tier)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🎯 Semantic Match Analysis
User Query: "creating pitch decks"
Extracted terms: pitch deck, presentation, slides, powerpoint, keynote
Skill Content Analysis:
✅ "pitch deck" - 8 occurrences
✅ "presentation" - 15 occurrences
✅ "slides" - 12 occurrences
✅ "powerpoint" - 4 occurrences
✅ "business" - 6 occurrences (contextually relevant)
✅ Has "templates" section with pitch deck examples
✅ Has "data visualization" section for charts
✅ Has complete workflow for deck creation
Key Features Found:
• Pitch deck template library
• Business presentation workflows
• Data chart integration
• Multi-format export (PPTX, Google Slides)
• Automated layout suggestions
Semantic Match Score: 9.5/10
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Fitness Score Calculation
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
semantic_match * 0.4 = 9.5 * 0.4 = 3.8
quality_score * 0.3 = 8.7 * 0.3 = 2.61
(stars/100) * 0.2 = 2.45 * 0.2 = 0.49
freshness * 0.1 = 10 * 0.1 = 1.0
------
Final FITNESS Score: 7.9/10
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ Recommendation: Highly Recommended ⭐⭐⭐⭐⭐
This skill is an excellent match for your needs:
Strengths:
• Purpose-built for pitch deck creation
• Follows Anthropic best practices closely
• Well-maintained and actively developed
• Comprehensive features and workflows
• High semantic alignment with your query
Minor considerations:
• Slightly over 500 lines (520) - still very good
• One minor heading inconsistency
Overall: This is the best available skill for creating pitch decks.
Install with confidence.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
[Install Now] [View SKILL.md] [Back to Results]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
## Example 5: Multiple Good Matches
**User Query:** "Find skills for git commit messages"
**Output:**
```
🔍 Searching for skills matching: "git commit messages"
Semantic terms: git, commit, message, changelog, version control
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🎯 Skills for: "git commit messages"
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Found 3 excellent matches!
🏆 #1 commit-helper ⭐ 89 🔥 | FITNESS: 9.1/10
Quality: 8.9/10 (Excellent)
Perfect for:
• Generating commit messages from diffs
• Following conventional commit format
• Analyzing changes semantically
📎 https://github.com/user/commit-helper/blob/main/SKILL.md
[Preview] [Install]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🏆 #2 git-automation ⭐ 156 🔥 | FITNESS: 8.7/10
Quality: 8.2/10 (Excellent)
Good for:
• Broader git workflows
• Includes commit message generation
• Plus branching and PR helpers
📎 https://github.com/user/git-automation/blob/main/SKILL.md
[Preview] [Install]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🏆 #3 changelog-generator ⭐ 67 📅 | FITNESS: 7.4/10
Quality: 7.8/10 (Good)
Alternative approach:
• Focused on changelog generation
• Can help with commit message consistency
• Different workflow than #1 and #2
📎 https://github.com/user/changelog-generator/blob/main/SKILL.md
[Preview] [Install]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
💡 Recommendation:
All three are high quality! Choose based on your workflow:
• commit-helper (FITNESS: 9.1/10) - Best if you ONLY need commit messages
• git-automation (FITNESS: 8.7/10) - Best if you want broader git help
• changelog-generator (FITNESS: 7.4/10) - Best if you maintain changelogs
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
## Key Differences from Old Output
### Old Approach (Popularity-Based):
```
🏆 #1 awesome-claude-skills ⭐ 1703 🔥
BehiSecc/awesome-claude-skills • Updated 6 days ago
A curated list of Claude Skills.
📎 https://github.com/BehiSecc/awesome-claude-skills
```
Problems:
- Just shows star count
- No quality assessment
- No fitness to user query
- Awesome-lists mixed with actual skills
- No explanation of WHY it's ranked #1
### New Approach (Fitness-Based):
```
🏆 #1 presentation-builder ⭐ 245 🔥 | FITNESS: 9.2/10
Quality Assessment: 8.7/10 (Excellent)
Why it fits your request:
• Specifically designed for pitch decks
• Mentions your key terms 8 times
• Has templates and workflows
Why it's high quality:
• Follows Anthropic best practices
• Well-tested and maintained
```
Benefits:
- Shows FITNESS to specific query
- Explains WHY it's a good match
- Evaluates against best practices
- Separates awesome-lists
- Actionable quality assessment
---
**Remember:** The goal is finding the RIGHT skill for the SPECIFIC need, not just what's popular.

292
skills/skill/references/best-practices-checklist.md Normal file

@@ -0,0 +1,292 @@
# Anthropic Best Practices Checklist
Evaluation criteria for assessing Claude Skill quality based on official Anthropic guidelines.
## Purpose
Use this checklist to evaluate skills found on GitHub. Each criterion contributes to the overall quality score (0-10).
## Evaluation Criteria
### 1. Description Quality (Weight: 2.0)
**What to check:**
- [ ] Description is specific, not vague
- [ ] Includes what the skill does
- [ ] Includes when to use it (trigger conditions)
- [ ] Contains key terms users would mention
- [ ] Written in third person
- [ ] Under 1024 characters
- [ ] No XML tags
**Scoring:**
- 2.0: All criteria met, very clear and specific
- 1.5: Most criteria met, good clarity
- 1.0: Basic description, somewhat vague
- 0.5: Very vague or generic
- 0.0: Missing or completely unclear
**Examples:**
**Good (2.0):**
```yaml
description: Analyze Excel spreadsheets, create pivot tables, generate charts. Use when working with Excel files, spreadsheets, tabular data, or .xlsx files.
```
**Bad (0.5):**
```yaml
description: Helps with documents
```
### 2. Name Convention (Weight: 0.5)
**What to check:**
- [ ] Uses lowercase letters, numbers, hyphens only
- [ ] Under 64 characters
- [ ] Follows naming pattern (gerund form preferred)
- [ ] Descriptive, not vague
- [ ] No reserved words ("anthropic", "claude")
**Scoring:**
- 0.5: Follows all conventions
- 0.25: Minor issues (e.g., not gerund but still clear)
- 0.0: Violates conventions or very vague
**Good:** `processing-pdfs`, `analyzing-spreadsheets`
**Bad:** `helper`, `utils`, `claude-tool`
### 3. Conciseness (Weight: 1.5)
**What to check:**
- [ ] SKILL.md body under 500 lines
- [ ] No unnecessary explanations
- [ ] Assumes Claude's intelligence
- [ ] Gets to the point quickly
- [ ] Additional content in separate files if needed
**Scoring:**
- 1.5: Very concise, well-edited, <300 lines
- 1.0: Reasonable length, <500 lines
- 0.5: Long but not excessive, 500-800 lines
- 0.0: Very verbose, >800 lines
### 4. Progressive Disclosure (Weight: 1.0)
**What to check:**
- [ ] SKILL.md serves as overview/table of contents
- [ ] Additional details in separate files
- [ ] Clear references to other files
- [ ] Files organized by domain/feature
- [ ] No deeply nested references (max 1 level deep)
**Scoring:**
- 1.0: Excellent use of progressive disclosure
- 0.75: Good organization with some references
- 0.5: Some separation, could be better
- 0.25: All content in SKILL.md, no references
- 0.0: Poorly organized or deeply nested
### 5. Examples and Workflows (Weight: 1.0)
**What to check:**
- [ ] Has concrete examples (not abstract)
- [ ] Includes code snippets
- [ ] Shows input/output pairs
- [ ] Has clear workflows for complex tasks
- [ ] Examples use real patterns, not placeholders
**Scoring:**
- 1.0: Excellent examples and clear workflows
- 0.75: Good examples, some workflows
- 0.5: Basic examples, no workflows
- 0.25: Few or abstract examples
- 0.0: No examples
### 6. Appropriate Degree of Freedom (Weight: 0.5)
**What to check:**
- [ ] Instructions match task fragility
- [ ] High freedom for flexible tasks (text instructions)
- [ ] Low freedom for fragile tasks (specific scripts)
- [ ] Clear when to use exact commands vs adapt
**Scoring:**
- 0.5: Perfect match of freedom to task type
- 0.25: Reasonable but could be better
- 0.0: Inappropriate level (too rigid or too loose)
### 7. Dependencies Documentation (Weight: 0.5)
**What to check:**
- [ ] Required packages listed
- [ ] Installation instructions provided
- [ ] Dependencies verified as available
- [ ] No assumption of pre-installed packages
**Scoring:**
- 0.5: All dependencies documented and verified
- 0.25: Dependencies mentioned but not fully documented
- 0.0: Dependencies assumed or not mentioned
### 8. Structure and Organization (Weight: 1.0)
**What to check:**
- [ ] Clear section headings
- [ ] Logical flow of information
- [ ] Table of contents for long files
- [ ] Consistent formatting
- [ ] Unix-style paths (forward slashes)
**Scoring:**
- 1.0: Excellently organized
- 0.75: Well organized with minor issues
- 0.5: Basic organization
- 0.25: Poor organization
- 0.0: No clear structure
### 9. Error Handling (Weight: 0.5)
**What to check (for skills with scripts):**
- [ ] Scripts handle errors explicitly
- [ ] Clear error messages
- [ ] Fallback strategies provided
- [ ] Validation loops for critical operations
- [ ] No "voodoo constants"
**Scoring:**
- 0.5: Excellent error handling
- 0.25: Basic error handling
- 0.0: No error handling or punts to Claude
### 10. Avoids Anti-Patterns (Weight: 1.0)
**What to avoid:**
- [ ] Time-sensitive information
- [ ] Inconsistent terminology
- [ ] Windows-style paths
- [ ] Offering too many options without guidance
- [ ] Deeply nested references
- [ ] Vague or generic content
**Scoring:**
- 1.0: No anti-patterns
- 0.75: 1-2 minor anti-patterns
- 0.5: Multiple anti-patterns
- 0.0: Severe anti-patterns
### 11. Testing and Validation (Weight: 0.5)
**What to check:**
- [ ] Evidence of testing mentioned
- [ ] Evaluation examples provided
- [ ] Clear success criteria
- [ ] Feedback loops for quality
**Scoring:**
- 0.5: Clear testing approach
- 0.25: Some testing mentioned
- 0.0: No testing mentioned
## Scoring System
**Total possible: 10.0 points**
Each criterion is scored directly in its weighted points above, so the total is a straight sum (the per-criterion maximums add up to 10.0):
```
quality_score = (
    description_score +             # 0-2.0
    name_score +                    # 0-0.5
    conciseness_score +             # 0-1.5
    progressive_disclosure_score +  # 0-1.0
    examples_score +                # 0-1.0
    freedom_score +                 # 0-0.5
    dependencies_score +            # 0-0.5
    structure_score +               # 0-1.0
    error_handling_score +          # 0-0.5
    anti_patterns_score +           # 0-1.0
    testing_score                   # 0-0.5
)
```
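For example, a skill scoring 1.5 + 0.5 + 1.0 + 0.75 + 0.75 + 0.5 + 0.25 + 0.75 + 0.25 + 0.75 + 0.25 across the eleven criteria totals 7.25, which lands in the Good tier below (the individual values here are illustrative).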
## Quality Tiers
**Excellent (8.0-10.0):**
- Follows all best practices
- Clearly professional
- Ready for production use
- **Recommendation:** Strongly recommended
**Good (6.0-7.9):**
- Follows most best practices
- Minor improvements needed
- Usable but not perfect
- **Recommendation:** Recommended with minor notes
**Fair (4.0-5.9):**
- Follows some best practices
- Several improvements needed
- May work but needs review
- **Recommendation:** Consider with caution
**Poor (0.0-3.9):**
- Violates many best practices
- Significant issues
- High risk of problems
- **Recommendation:** Not recommended
## Quick Evaluation Process
For rapid assessment during search:
1. **Read SKILL.md frontmatter** (30 sec)
- Check description quality (most important)
- Check name convention
2. **Scan SKILL.md body** (1-2 min)
- Check length (<500 lines?)
- Look for examples
- Check for references to other files
- Note any obvious anti-patterns
3. **Check file structure** (30 sec)
- Look for reference files
- Check for scripts/utilities
- Verify organization
4. **Calculate quick score** (30 sec)
- Focus on weighted criteria
- Estimate tier (Excellent/Good/Fair/Poor)
**Total time per skill: ~3-4 minutes**
## Automation Tips
When evaluating multiple skills:
```bash
# Check SKILL.md length
wc -l SKILL.md
# Count reference files
find . -name "*.md" -not -name "SKILL.md" | wc -l
# Check for common anti-patterns
grep -i "claude can help\|I can help\|you can use" SKILL.md
# Verify Unix paths
grep -E '\\\|\\\\' SKILL.md
# Check description length
head -10 SKILL.md | grep "description:" | wc -c
```
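The head/grep length check above is approximate; a slightly more robust sketch for pulling the frontmatter description before counting characters (assumes a single-line description inside standard `---` delimited frontmatter):
```bash
# Extract the frontmatter description and report its length against the 1024-character limit
desc=$(awk '/^---$/{n++; next} n==1 && /^description:/ {sub(/^description:[ ]*/, ""); print}' SKILL.md)
echo "${#desc} characters (limit: 1024)"
```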
## Reference
Based on official Anthropic documentation:
- [Agent Skills Overview](https://docs.anthropic.com/en/docs/agents-and-tools/agent-skills/overview)
- [Best Practices Guide](https://docs.anthropic.com/en/docs/agents-and-tools/agent-skills/best-practices)
- [Claude Code Skills](https://docs.anthropic.com/en/docs/claude-code/skills)
---
**Usage:** Use this checklist when evaluating skills found through skill-finder to provide quality scores and recommendations to users.

547
skills/skill/references/installation-workflow.md Normal file

@@ -0,0 +1,547 @@
# Installation Workflow for Claude Skills
Complete guide to previewing, downloading, and installing Claude skills from GitHub.
## Installation Process Overview
1. **Preview** - Show skill details and requirements
2. **Confirm** - Get user approval
3. **Download** - Fetch skill files from GitHub
4. **Install** - Place in correct directory structure
5. **Verify** - Confirm installation success
6. **Setup** - Run any required setup steps
## Step 1: Preview Skill
### Fetch SKILL.md Content
```bash
# Get direct link to SKILL.md
skill_url="https://github.com/OWNER/REPO/blob/main/PATH/SKILL.md"
skill_path="PATH/SKILL.md"
# Fetch content (first 50 lines for preview)
gh api repos/OWNER/REPO/contents/$skill_path | \
jq -r '.content' | base64 -d | head -50
```
### Extract Key Information
```bash
# Parse SKILL.md for important details
skill_content=$(gh api repos/OWNER/REPO/contents/$skill_path | jq -r '.content' | base64 -d)
# Extract name (first # heading)
skill_name=$(echo "$skill_content" | grep -m1 '^# ' | sed 's/^# //')
# Extract description (first paragraph after title)
description=$(echo "$skill_content" | sed -n '/^# /,/^$/p' | grep -v '^#' | head -1)
# Extract dependencies
dependencies=$(echo "$skill_content" | grep -A10 -i "dependencies\|requirements\|prerequisites" | head -10)
# Extract usage examples
examples=$(echo "$skill_content" | grep -A10 -i "usage\|example\|quick start" | head -15)
```
### Display Preview
```bash
cat <<EOF
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📦 Skill Preview: $skill_name
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📝 Description:
$description
⭐ Repository: $repo_full_name
🌟 Stars: $stars
🔄 Last Updated: $days_ago days ago
📋 Dependencies:
$dependencies
💡 Usage Example:
$examples
📎 Full Documentation:
$skill_url
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
EOF
```
## Step 2: Confirm Installation
### Check Existing Installation
```bash
# Determine skill directory name
skill_dir_name=$(echo "$skill_name" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')
skill_dir=".claude/skills/$skill_dir_name"
# Check if already installed
if [ -d "$skill_dir" ]; then
echo "⚠️ Skill '$skill_name' is already installed at: $skill_dir"
echo ""
echo "Options:"
echo " [U] Update (overwrite existing)"
echo " [K] Keep existing (cancel)"
echo " [B] Backup and install new"
echo ""
read -p "Choose option [U/K/B]: " choice
case $choice in
[Uu])
echo "Overwriting existing installation..."
;;
[Kk])
echo "Keeping existing installation. Cancelled."
exit 0
;;
[Bb])
backup_dir="${skill_dir}.backup.$(date +%s)"
mv "$skill_dir" "$backup_dir"
echo "✅ Backed up to: $backup_dir"
;;
*)
echo "Invalid option. Cancelled."
exit 1
;;
esac
fi
```
### Get User Confirmation
```bash
echo ""
echo "Install '$skill_name' to $skill_dir?"
echo ""
read -p "Continue? [y/N]: " confirm
if [[ ! "$confirm" =~ ^[Yy] ]]; then
echo "Installation cancelled."
exit 0
fi
```
## Step 3: Download Skill Files
### Determine Skill Structure
Skills can have different structures:
1. **Simple** - Single SKILL.md file
2. **Standard** - SKILL.md + reference files
3. **Plugin** - Nested in `skills/` subdirectory
4. **Complex** - Multiple files, scripts, dependencies
```bash
# Detect structure type
structure_type=$(detect_skill_structure "$repo" "$skill_path")
case $structure_type in
"simple")
download_simple_skill "$repo" "$skill_path" "$skill_dir"
;;
"standard")
download_standard_skill "$repo" "$skill_path" "$skill_dir"
;;
"plugin")
download_plugin_skill "$repo" "$skill_path" "$skill_dir"
;;
"complex")
download_complex_skill "$repo" "$skill_path" "$skill_dir"
;;
esac
```
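The detect_skill_structure helper is not defined in this workflow; one possible sketch is below (the heuristics are illustrative assumptions, not part of the original process):
```bash
# Sketch: classify a skill's layout from its SKILL.md path and sibling files
detect_skill_structure() {
    local repo=$1
    local skill_path=$2
    local dir endpoint siblings
    dir=$(dirname "$skill_path")
    endpoint="repos/$repo/contents"
    [ "$dir" != "." ] && endpoint="$endpoint/$dir"   # SKILL.md may sit at the repo root

    # Names of the files/directories that sit next to SKILL.md
    siblings=$(gh api "$endpoint" | jq -r '.[].name')

    if echo "$siblings" | grep -qxE 'package\.json|requirements\.txt|setup\.sh'; then
        echo "complex"      # has dependencies or a setup script
    elif [[ "$skill_path" == skills/* || "$skill_path" == */skills/* ]]; then
        echo "plugin"       # nested under a skills/ subdirectory
    elif [ "$(echo "$siblings" | grep -cv '^SKILL\.md$')" -gt 0 ]; then
        echo "standard"     # SKILL.md plus reference files
    else
        echo "simple"       # single SKILL.md file
    fi
}
```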
### Download Simple Skill (SKILL.md only)
```bash
download_simple_skill() {
local repo=$1
local skill_path=$2
local dest_dir=$3
echo "📥 Downloading simple skill..."
# Create destination directory
mkdir -p "$dest_dir"
# Download SKILL.md
gh api "repos/$repo/contents/$skill_path" | \
jq -r '.content' | base64 -d > "$dest_dir/SKILL.md"
if [ -f "$dest_dir/SKILL.md" ]; then
echo "✅ Downloaded SKILL.md"
else
echo "❌ Failed to download SKILL.md"
return 1
fi
}
```
### Download Standard Skill (with references)
```bash
download_standard_skill() {
local repo=$1
local skill_path=$2
local dest_dir=$3
echo "📥 Downloading standard skill..."
# Get skill directory path from SKILL.md path
skill_dir_path=$(dirname "$skill_path")
# List every file under the skill directory (the contents API is not recursive,
# so use the git tree API and filter by path prefix; assumes default branch "main")
gh api "repos/$repo/git/trees/main?recursive=1" | \
jq -r --arg prefix "$skill_dir_path/" '.tree[] | select(.type == "blob" and (.path | startswith($prefix))) | .path' | \
while read -r file_path; do
# Calculate relative path
rel_path=${file_path#$skill_dir_path/}
dest_file="$dest_dir/$rel_path"
# Create subdirectories
mkdir -p "$(dirname "$dest_file")"
# Download file
gh api "repos/$repo/contents/$file_path" | \
jq -r '.content' | base64 -d > "$dest_file"
echo "$rel_path"
done
echo "✅ Downloaded all skill files"
}
```
### Download Plugin Skill (nested structure)
```bash
download_plugin_skill() {
local repo=$1
local skill_path=$2
local dest_dir=$3
echo "📥 Downloading plugin skill..."
echo " (This may take a moment...)"
# Clone repository to temporary location
temp_dir=$(mktemp -d)
gh repo clone "$repo" "$temp_dir" -- --depth 1 --quiet
# Extract skill directory from SKILL.md path
# Example: skills/playwright-skill/SKILL.md -> skills/playwright-skill
skill_subdir=$(dirname "$skill_path")
# Copy skill directory to destination
if [ -d "$temp_dir/$skill_subdir" ]; then
mkdir -p "$dest_dir"
cp -r "$temp_dir/$skill_subdir/"* "$dest_dir/"
echo "✅ Copied skill from $skill_subdir"
else
echo "❌ Skill directory not found: $skill_subdir"
rm -rf "$temp_dir"
return 1
fi
# Cleanup
rm -rf "$temp_dir"
}
```
### Download Complex Skill (with setup)
```bash
download_complex_skill() {
local repo=$1
local skill_path=$2
local dest_dir=$3
echo "📥 Downloading complex skill..."
# Use plugin download method
download_plugin_skill "$repo" "$skill_path" "$dest_dir"
# Check for dependencies
if [ -f "$dest_dir/package.json" ]; then
echo ""
echo "📦 This skill has npm dependencies."
echo " Run: cd $dest_dir && npm install"
fi
if [ -f "$dest_dir/requirements.txt" ]; then
echo ""
echo "🐍 This skill has Python dependencies."
echo " Run: cd $dest_dir && pip install -r requirements.txt"
fi
if [ -f "$dest_dir/setup.sh" ]; then
echo ""
echo "🔧 This skill has a setup script."
read -p " Run setup.sh now? [y/N]: " run_setup
if [[ "$run_setup" =~ ^[Yy] ]]; then
(cd "$dest_dir" && bash setup.sh)
fi
fi
}
```
## Step 4: Verify Installation
### Check Required Files
```bash
verify_installation() {
local skill_dir=$1
local errors=0
echo ""
echo "🔍 Verifying installation..."
# Check SKILL.md exists
if [ ! -f "$skill_dir/SKILL.md" ]; then
echo " ❌ Missing SKILL.md"
((errors++))
else
echo " ✅ SKILL.md present"
fi
# Check file permissions
if [ ! -r "$skill_dir/SKILL.md" ]; then
echo " ❌ SKILL.md not readable"
((errors++))
else
echo " ✅ File permissions OK"
fi
# Check for reference files (optional but good)
if [ -d "$skill_dir/references" ]; then
ref_count=$(find "$skill_dir/references" -type f | wc -l)
echo " ✅ Found $ref_count reference files"
fi
# Check for examples (optional)
if [ -d "$skill_dir/examples" ]; then
example_count=$(find "$skill_dir/examples" -type f | wc -l)
echo " ✅ Found $example_count example files"
fi
return $errors
}
```
### Validate SKILL.md Content
```bash
validate_skill_content() {
local skill_file=$1
# Check for required sections
local has_title=$(grep -q '^# ' "$skill_file" && echo "yes" || echo "no")
local has_description=$(grep -qi 'description\|what.*does' "$skill_file" && echo "yes" || echo "no")
local has_usage=$(grep -qi 'usage\|example\|how.*use' "$skill_file" && echo "yes" || echo "no")
if [ "$has_title" = "yes" ] && [ "$has_description" = "yes" ]; then
echo " ✅ SKILL.md structure valid"
return 0
else
echo " ⚠️ SKILL.md may be incomplete (missing title or description)"
return 1
fi
}
```
## Step 5: Post-Installation
### Run Setup Scripts
```bash
# Check for and run setup
if [ -f "$skill_dir/setup.sh" ]; then
echo ""
echo "🔧 Running setup script..."
(cd "$skill_dir" && bash setup.sh)
if [ $? -eq 0 ]; then
echo "✅ Setup completed successfully"
else
echo "⚠️ Setup script had warnings (check above)"
fi
fi
```
### Install Dependencies
```bash
# npm dependencies
if [ -f "$skill_dir/package.json" ]; then
echo ""
echo "📦 Installing npm dependencies..."
(cd "$skill_dir" && npm install --silent)
echo "✅ npm dependencies installed"
fi
# Python dependencies
if [ -f "$skill_dir/requirements.txt" ]; then
echo ""
echo "🐍 Installing Python dependencies..."
pip install -q -r "$skill_dir/requirements.txt"
echo "✅ Python dependencies installed"
fi
```
### Create Usage Instructions
```bash
cat <<EOF
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ Installation Complete!
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📁 Installed to: $skill_dir
🚀 Usage:
Invoke the skill by typing: /$skill_dir_name
Or let Claude auto-invoke when relevant
📖 Documentation:
Read: $skill_dir/SKILL.md
Examples: $skill_dir/examples/ (if available)
🔄 Update:
Re-run installation to update to latest version
❌ Uninstall:
rm -rf $skill_dir
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
EOF
```
## Complete Installation Script
```bash
#!/bin/bash
install_skill() {
local repo=$1
local skill_path=$2
local skill_name=$3
# 1. Determine destination
skill_dir_name=$(echo "$skill_name" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')
skill_dir=".claude/skills/$skill_dir_name"
# 2. Preview
echo "Fetching skill preview..."
skill_content=$(gh api "repos/$repo/contents/$skill_path" | jq -r '.content' | base64 -d)
description=$(echo "$skill_content" | sed -n '/^# /,/^$/p' | grep -v '^#' | head -1)
echo ""
echo "📦 $skill_name"
echo "📝 $description"
echo "📁 Will install to: $skill_dir"
echo ""
# 3. Confirm
read -p "Install this skill? [y/N]: " confirm
[[ ! "$confirm" =~ ^[Yy] ]] && { echo "Cancelled."; return 1; }
# 4. Check existing
if [ -d "$skill_dir" ]; then
read -p "Skill exists. Overwrite? [y/N]: " overwrite
[[ ! "$overwrite" =~ ^[Yy] ]] && { echo "Cancelled."; return 1; }
rm -rf "$skill_dir"
fi
# 5. Download
mkdir -p "$skill_dir"
# Detect if plugin format (SKILL.md nested under a skills/ directory) or simple
if [[ "$skill_path" == skills/* || "$skill_path" == */skills/* ]]; then
# Plugin format - clone and extract
temp_dir=$(mktemp -d)
gh repo clone "$repo" "$temp_dir" -- --depth 1 --quiet
skill_subdir=$(dirname "$skill_path")
cp -r "$temp_dir/$skill_subdir/"* "$skill_dir/"
rm -rf "$temp_dir"
else
# Simple format - direct download
gh api "repos/$repo/contents/$skill_path" | \
jq -r '.content' | base64 -d > "$skill_dir/SKILL.md"
fi
# 6. Verify
if [ -f "$skill_dir/SKILL.md" ]; then
echo "✅ Installation successful!"
echo "📁 Location: $skill_dir"
echo "🚀 Use: /$skill_dir_name"
else
echo "❌ Installation failed"
return 1
fi
}
# Usage:
# install_skill "lackeyjb/playwright-skill" "skills/playwright-skill/SKILL.md" "playwright-skill"
```
## Error Handling
### Common Issues
**Issue: Repository not found**
```bash
if ! gh api "repos/$repo" &>/dev/null; then
echo "❌ Repository not found or not accessible: $repo"
echo " Check if the repository exists and is public"
exit 1
fi
```
**Issue: SKILL.md not found**
```bash
if ! gh api "repos/$repo/contents/$skill_path" &>/dev/null; then
echo "❌ SKILL.md not found at: $skill_path"
echo " Searching for SKILL.md in repository..."
# Try to find it
found_paths=$(gh api "repos/$repo/git/trees/main?recursive=1" | \
jq -r '.tree[] | select(.path | contains("SKILL.md")) | .path')
if [ -n "$found_paths" ]; then
echo " Found SKILL.md at:"
echo "$found_paths" | sed 's/^/ /'
else
echo " No SKILL.md found in repository"
fi
exit 1
fi
```
**Issue: Permission denied**
```bash
if [ ! -w ".claude/skills" ]; then
echo "❌ Cannot write to .claude/skills directory"
echo " Check permissions: ls -la .claude/skills"
exit 1
fi
```
**Issue: Network/API error**
```bash
# ICMP may be blocked on some networks; `gh api rate_limit` is a more reliable check
if ! ping -c 1 api.github.com &>/dev/null; then
echo "❌ Cannot reach GitHub API"
echo " Check your internet connection"
exit 1
fi
```
---
**Summary:** The installation workflow ensures safe, verified installation of Claude skills with proper error handling, user confirmation, and post-installation setup.

359
skills/skill/references/ranking-algorithm.md Normal file

@@ -0,0 +1,359 @@
# Ranking Algorithm for Claude Skills
Comprehensive scoring system to rank skills by popularity, freshness, and quality.
## Core Ranking Formula
```
final_score = (base_stars * freshness_multiplier * quality_bonus)
```
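For example, a repository with 245 stars that was last pushed 10 days ago (1.5x freshness) and earns a 1.2x quality bonus scores 245 × 1.5 × 1.2 = 441 (illustrative values; each component is defined below).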
## Component 1: Base Stars
Direct indicator of community validation and popularity.
```bash
base_stars=$(gh api "repos/$repo" --jq '.stargazers_count')
# Minimum threshold: 10 stars (filter out experiments)
if [ "$base_stars" -lt 10 ]; then
skip_result=true
fi
```
## Component 2: Freshness Multiplier
Recent updates indicate active maintenance and modern practices.
### Calculate Days Since Last Update
```bash
# Get pushed_at timestamp from repository
pushed_at="2025-10-28T12:00:00Z"
# Calculate days old (macOS)
days_old=$(( ($(date +%s) - $(date -j -f "%Y-%m-%dT%H:%M:%SZ" "$pushed_at" +%s)) / 86400 ))
# Calculate days old (Linux)
days_old=$(( ($(date +%s) - $(date -d "$pushed_at" +%s)) / 86400 ))
```
### Apply Freshness Multiplier
```bash
if [ $days_old -lt 30 ]; then
freshness_multiplier=1.5
freshness_badge="🔥"
freshness_label="Hot"
elif [ $days_old -lt 90 ]; then
freshness_multiplier=1.2
freshness_badge="📅"
freshness_label="Recent"
elif [ $days_old -lt 180 ]; then
freshness_multiplier=1.0
freshness_badge="📆"
freshness_label="Active"
else
freshness_multiplier=0.5
freshness_badge="⏰"
freshness_label="Older"
fi
```
### Freshness Tiers
| Age Range | Multiplier | Badge | Label | Reasoning |
|-----------|------------|-------|-------|-----------|
| < 30 days | 1.5x | 🔥 | Hot | Very active, likely works with latest Claude |
| 30-90 days | 1.2x | 📅 | Recent | Active maintenance |
| 90-180 days | 1.0x | 📆 | Active | Stable, still maintained |
| > 180 days | 0.5x | ⏰ | Older | May be outdated or abandoned |
## Component 3: Quality Bonus (Optional)
Additional signals of skill quality beyond stars and freshness.
### Quality Signals
```bash
quality_bonus=1.0 # Start at neutral
# Has comprehensive description
if [ ${#description} -gt 100 ]; then
quality_bonus=$(echo "$quality_bonus + 0.1" | bc)
fi
# Has reference files
reference_count=$(gh api "repos/$repo/contents" | jq '[.[] | select(.name | test("references|docs|examples"))] | length')
if [ $reference_count -gt 0 ]; then
quality_bonus=$(echo "$quality_bonus + 0.1" | bc)
fi
# Has dependencies documentation
if gh api "repos/$repo/contents" | jq -e '.[] | select(.name == "package.json")' > /dev/null; then
quality_bonus=$(echo "$quality_bonus + 0.05" | bc)
fi
# Active issues/PRs indicate engagement
open_issues=$(gh api "repos/$repo" | jq '.open_issues_count')
if [ $open_issues -gt 0 ] && [ $open_issues -lt 20 ]; then
quality_bonus=$(echo "$quality_bonus + 0.05" | bc)
fi
# Has recent commits (beyond just pushed_at)
recent_commits=$(gh api "repos/$repo/commits?per_page=10" | jq 'length')
if [ $recent_commits -ge 5 ]; then
quality_bonus=$(echo "$quality_bonus + 0.1" | bc)
fi
# Cap quality bonus at 1.5x
if (( $(echo "$quality_bonus > 1.5" | bc -l) )); then
quality_bonus=1.5
fi
```
### Quality Tier Examples
| Quality Bonus | Characteristics |
|---------------|-----------------|
| 1.0x (baseline) | Basic SKILL.md only |
| 1.1x | + Good description |
| 1.2x | + Reference files |
| 1.3x | + Dependencies documented |
| 1.4x | + Active community (issues/PRs) |
| 1.5x (max) | All of the above |
## Complete Scoring Implementation
### Bash Implementation
```bash
#!/bin/bash
calculate_score() {
local repo=$1
local stars=$2
local pushed_at=$3
# Calculate days since last update
days_old=$(( ($(date +%s) - $(date -j -f "%Y-%m-%dT%H:%M:%SZ" "$pushed_at" +%s)) / 86400 ))
# Freshness multiplier
if [ $days_old -lt 30 ]; then
freshness=1.5
badge="🔥"
elif [ $days_old -lt 90 ]; then
freshness=1.2
badge="📅"
elif [ $days_old -lt 180 ]; then
freshness=1.0
badge="📆"
else
freshness=0.5
badge="⏰"
fi
# Calculate final score
score=$(echo "$stars * $freshness" | bc)
echo "$score|$badge|$days_old"
}
# Example usage
result=$(calculate_score "lackeyjb/playwright-skill" 612 "2025-10-28T12:00:00Z")
score=$(echo "$result" | cut -d'|' -f1)
badge=$(echo "$result" | cut -d'|' -f2)
days=$(echo "$result" | cut -d'|' -f3)
echo "Score: $score, Badge: $badge, Days: $days"
# Output: Score: 918.0, Badge: 🔥, Days: 2
```
### JQ Implementation (More Portable)
```bash
gh search repos "claude skills" --json name,stargazersCount,pushedAt --limit 20 | \
jq -r --arg now "$(date -u +%s)" '.[] |
. as $repo |
($repo.pushedAt | fromdateiso8601) as $pushed |
(($now | tonumber) - $pushed) / 86400 | floor as $days |
(if $days < 30 then 1.5 elif $days < 90 then 1.2 elif $days < 180 then 1.0 else 0.5 end) as $multiplier |
(if $days < 30 then "🔥" elif $days < 90 then "📅" elif $days < 180 then "📆" else "⏰" end) as $badge |
($repo.stargazersCount * $multiplier) as $score |
"\($score)|\($repo.name)|\($repo.stargazersCount)|\($badge)|\($days)"
' | sort -t'|' -k1 -nr | head -10
```
## Ranking Examples
### Real-World Scores
| Skill | Stars | Days Old | Freshness | Multiplier | Final Score | Rank |
|-------|-------|----------|-----------|------------|-------------|------|
| playwright-skill | 612 | 2 | 🔥 | 1.5x | **918** | #1 |
| skill-codex | 153 | 9 | 🔥 | 1.5x | **229.5** | #2 |
| agent-skill-creator | 96 | 5 | 🔥 | 1.5x | **144** | #3 |
| claude-skills-mcp | 85 | 2 | 🔥 | 1.5x | **127.5** | #4 |
| ios-simulator-skill | 77 | 2 | 🔥 | 1.5x | **115.5** | #5 |
*With the hot multiplier, 77 fresh stars (115.5) still edge out 96 stars on a repo last pushed 30+ days ago (96 × 1.2 = 115.2)*
### Score Comparison Scenarios
**Scenario 1: Fresh skill vs. Popular old skill**
- Skill A: 50 stars, 10 days old → 50 × 1.5 = **75 points** 🔥
- Skill B: 100 stars, 200 days old → 100 × 0.5 = **50 points**
- **Winner: Skill A** (freshness wins)
**Scenario 2: Very popular but older skill**
- Skill A: 1000 stars, 365 days old → 1000 × 0.5 = **500 points**
- Skill B: 200 stars, 15 days old → 200 × 1.5 = **300 points** 🔥
- **Winner: Skill A** (massive popularity overcomes age)
**Scenario 3: Moderate skill, regularly updated**
- Skill A: 50 stars, 85 days old → 50 × 1.2 = **60 points** 📅
- Skill B: 60 stars, 95 days old → 60 × 1.0 = **60 points** 📆
- **Tie** (freshness threshold matters)
## Handling Edge Cases
### Newly Created Skills (< 7 days old)
```bash
# New skills may have inflated scores, add small penalty
if [ $days_old -lt 7 ]; then
new_skill_penalty=0.9
score=$(echo "$score * $new_skill_penalty" | bc)
note="⚠️ Very new skill - limited validation"
fi
```
### Archived or Deprecated Skills
```bash
# Check if repository is archived
is_archived=$(gh api "repos/$repo" | jq -r '.archived')
if [ "$is_archived" = "true" ]; then
score=0 # Exclude from results
note="🔒 Archived - no longer maintained"
fi
```
### Fork vs. Original
```bash
# Check if repository is a fork
is_fork=$(gh api "repos/$repo" | jq -r '.fork')
if [ "$is_fork" = "true" ]; then
# Check if fork has more stars than parent
parent_stars=$(gh api "repos/$repo" | jq -r '.parent.stargazers_count')
if [ $stars -gt $parent_stars ]; then
# Fork is more popular, keep it
note="🍴 Popular fork"
else
# Prefer original
score=$(echo "$score * 0.8" | bc)
note="🍴 Fork - see original"
fi
fi
```
## Sorting and Display
### Final Sort Order
```bash
# Sort by score (descending), then by stars (descending).
# "results.txt" is a placeholder for the score|name|stars|badge|days lines produced above.
sort -t'|' -k1,1nr -k3,3nr results.txt | head -10
```
### Ties
When scores are identical:
1. Sort by stars (higher first)
2. If still tied, sort by freshness (newer first)
3. If still tied, sort alphabetically
```bash
# Multi-level sort
jq -r '.[] | "\(.score)|\(.stars)|\(.days_old)|\(.name)"' | \
sort -t'|' -k1,1nr -k2,2nr -k3,3n -k4,4
```
## Performance Optimization
### Bulk Scoring
```bash
# Score all repos in one pass
gh search repos "claude skills" --json name,stargazersCount,pushedAt,fullName --limit 50 | \
jq -r --arg now "$(date -u +%s)" '
.[] |
. as $repo |
($repo.pushedAt | fromdateiso8601) as $pushed |
(($now | tonumber) - $pushed) / 86400 | floor as $days |
(if $days < 30 then 1.5 elif $days < 90 then 1.2 elif $days < 180 then 1.0 else 0.5 end) as $mult |
($repo.stargazersCount * $mult) as $score |
{
name: $repo.name,
full_name: $repo.fullName,
stars: $repo.stargazersCount,
days_old: $days,
score: $score
}
' | jq -s 'sort_by(-.score) | .[0:10]'
```
### Caching Scores
```bash
# Cache scores to avoid recalculation
score_cache=".skill-scores.json"
# Generate or load cache
if [ ! -f "$score_cache" ] || [ $(($(date +%s) - $(stat -f %m "$score_cache"))) -gt 3600 ]; then
# Regenerate cache (older than 1 hour)
calculate_all_scores > "$score_cache"
fi
# Use cached scores
cat "$score_cache" | jq '.[] | select(.score > 50)'
```
## Customization
### User-Defined Weights
Allow users to adjust ranking preferences:
```bash
# Default weights
STAR_WEIGHT=1.0
FRESHNESS_WEIGHT=1.0
QUALITY_WEIGHT=0.5
# Calculate weighted score
weighted_score=$(echo "$stars * $STAR_WEIGHT + $freshness_score * $FRESHNESS_WEIGHT + $quality_score * $QUALITY_WEIGHT" | bc)
```
### Category-Specific Ranking
Different categories may value different factors:
```bash
# For automation skills: prioritize quality (working code)
if [ "$category" = "automation" ]; then
QUALITY_WEIGHT=1.5
fi
# For research skills: prioritize freshness (up-to-date sources)
if [ "$category" = "research" ]; then
FRESHNESS_WEIGHT=1.5
fi
```
---
**Summary:** The ranking algorithm balances popularity (stars) with recency (freshness), ensuring users see both well-established skills and cutting-edge new capabilities. Quality signals provide additional nuance for edge cases.

View File

@@ -0,0 +1,334 @@
# Search Strategies for Finding Claude Skills
Comprehensive guide to searching GitHub for Claude skills using multiple approaches.
## Overview
Use a **multi-method approach** combining repository search, code search, and pattern matching to find the maximum number of relevant skills.
## Method 1: Repository Search
Find repositories dedicated to Claude skills.
### Basic Repository Search
```bash
gh search repos "claude skills" --sort stars --order desc --limit 20 \
--json name,stargazersCount,description,url,createdAt,pushedAt,owner
```
### Advanced Repository Queries
```bash
# Skills specifically for Claude Code
gh search repos "claude code skills" --sort stars --limit 20 --json name,stargazersCount,url,pushedAt
# Recent skills (last 30 days)
gh search repos "claude skills" "created:>$(date -v-30d +%Y-%m-%d)" --sort stars --limit 20
# Highly starred skills
gh search repos "claude skills" "stars:>50" --sort stars --limit 20
# Active repositories (recently updated)
gh search repos "claude skills" --sort updated --limit 20
# Repositories with specific topics
gh search repos "topic:claude topic:skills" --sort stars --limit 20
```
### Filter Out Noise
```bash
# Exclude awesome-lists and collections
gh search repos "claude skills" --sort stars --limit 30 | \
jq '[.[] | select(.name | test("awesome|collection|curated") | not)]'
```
## Method 2: Code Search for SKILL.md
Directly find SKILL.md files across all repositories.
### Basic Code Search
```bash
gh search code "filename:SKILL.md" --limit 30 \
--json repository,path,url,sha
```
### Path-Specific Searches
```bash
# Skills in .claude/skills directory
gh search code "path:.claude/skills" "filename:SKILL.md" --limit 30
# Skills in skills/ subdirectory (plugin format)
gh search code "path:skills/" "filename:SKILL.md" --limit 30
# Root-level SKILL.md files
gh search code "filename:SKILL.md" "NOT path:/" --limit 30
```
### Content-Based Search
```bash
# Find skills mentioning specific capabilities
gh search code "browser automation" "filename:SKILL.md" --limit 20
# Find MCP-related skills
gh search code "MCP server" "filename:SKILL.md" --limit 20
# Find research/analysis skills
gh search code "web search OR data analysis" "filename:SKILL.md" --limit 20
```
## Method 3: Skill-Specific Pattern Matching
Search for known skill patterns and structures.
### Known Skill Repositories
```bash
# Check popular skill collections
repos=(
"BehiSecc/awesome-claude-skills"
"travisvn/awesome-claude-skills"
"simonw/claude-skills"
"mrgoonie/claudekit-skills"
)
for repo in "${repos[@]}"; do
gh api "repos/$repo/git/trees/main?recursive=1" | \
jq -r '.tree[] | select(.path | contains("SKILL.md")) | .path'
done
```
### Plugin Format Detection
```bash
# Repositories following .claude-plugin structure
gh search code "filename:.claude-plugin" --limit 20 | \
jq -r '.[] | .repository.full_name' | \
while read repo; do
# Check for skills subdirectory
gh api "repos/$repo/contents/skills" 2>/dev/null | \
jq -r '.[].name'
done
```
## Method 4: Organization/User-Based Search
Find skills from known skill creators.
### Popular Skill Authors
```bash
# Search by user
users=(
"lackeyjb" # playwright-skill
"FrancyJGLisboa" # agent-skill-creator
"alirezarezvani" # skill factory
)
for user in "${users[@]}"; do
gh search repos "user:$user" "SKILL.md" --limit 10
done
```
### Organization Search
```bash
# Search within organizations
gh search repos "org:anthropics" "skills" --limit 20
gh search repos "org:skills-directory" --limit 20
```
## Combining Results
### Deduplication Strategy
```bash
# Collect all results
all_repos=()
# From repository search
repos1=$(gh search repos "claude skills" --json name,owner | jq -r '.[] | "\(.owner.login)/\(.name)"')
# From code search
repos2=$(gh search code "filename:SKILL.md" --json repository | jq -r '.[].repository.full_name' | sort -u)
# Combine and deduplicate
all_repos=($(echo "$repos1 $repos2" | tr ' ' '\n' | sort -u))
# Fetch metadata for unique repos
for repo in "${all_repos[@]}"; do
gh api "repos/$repo" --jq '{
name: .name,
full_name: .full_name,
stars: .stargazers_count,
updated: .pushed_at,
description: .description
}'
done
```
## Search Optimization
### Parallel Execution
```bash
# Run all searches in parallel for speed
{
gh search repos "claude skills" --limit 20 --json fullName > repos.json &
gh search code "filename:SKILL.md" --limit 30 --json repository > code.json &
gh search code "path:.claude/skills" --limit 20 --json repository > paths.json &
wait
}
# Merge into one deduplicated list of repo names (the repo and code result shapes differ)
jq -s '[.[0][].fullName] + [.[1][].repository.full_name] + [.[2][].repository.full_name] | unique' repos.json code.json paths.json
```
### Caching Results
```bash
# Cache results to avoid hitting rate limits
cache_file=".skill-finder-cache.json"
cache_ttl=3600 # 1 hour
if [ -f "$cache_file" ] && [ $(($(date +%s) - $(stat -f %m "$cache_file"))) -lt $cache_ttl ]; then
# Use cached results
cat "$cache_file"
else
# Fetch fresh results and cache
gh search repos "claude skills" --limit 50 > "$cache_file"
cat "$cache_file"
fi
```
## Category-Specific Searches
### Automation Skills
```bash
gh search code "playwright OR selenium OR puppeteer" "filename:SKILL.md"
gh search code "browser automation OR web automation" "filename:SKILL.md"
```
### Research Skills
```bash
gh search code "web search OR research OR analysis" "filename:SKILL.md"
gh search code "data collection OR scraping" "filename:SKILL.md"
```
### Development Skills
```bash
gh search code "git OR github OR code review" "filename:SKILL.md"
gh search code "testing OR linting OR formatting" "filename:SKILL.md"
```
### Integration Skills
```bash
gh search code "MCP server OR API integration" "filename:SKILL.md"
gh search code "webhook OR external service" "filename:SKILL.md"
```
## Quality Filters
### Star-Based Filtering
```bash
# Only repos with 10+ stars
gh search repos "claude skills" "stars:>=10" --limit 20
# Trending (many stars, recently created)
gh search repos "claude skills" "stars:>50" "created:>2025-01-01" --limit 20
```
### Activity-Based Filtering
```bash
# Updated in last 30 days
gh search repos "claude skills" "pushed:>$(date -v-30d +%Y-%m-%d)" --limit 20
# Active development (multiple commits recently)
gh api graphql -f query='
{
search(query: "claude skills sort:updated", type: REPOSITORY, first: 20) {
nodes {
... on Repository {
name
stargazerCount
pushedAt
defaultBranchRef {
target {
... on Commit {
history(first: 10) {
totalCount
}
}
}
}
}
}
}
}'
```
## Error Handling
### Rate Limit Checking
```bash
# Check remaining API calls
gh api rate_limit | jq '.rate.remaining'
# If low, wait or authenticate
if [ $(gh api rate_limit | jq '.rate.remaining') -lt 10 ]; then
echo "⚠️ Low on API calls. Waiting..."
sleep 60
fi
```
### Fallback Strategies
```bash
# If authenticated search fails, try unauthenticated
gh search repos "claude skills" --limit 10 2>/dev/null || \
curl -s "https://api.github.com/search/repositories?q=claude+skills&per_page=10"
```
## Performance Benchmarks
| Method | API Calls | Results | Speed | Best For |
|--------|-----------|---------|-------|----------|
| Repository search | 1 | 20-30 | Fast | Popular skills |
| Code search | 1 | 30-50 | Medium | All skills |
| Recursive tree | N repos | 50+ | Slow | Completeness |
| Combined | 3-5 | 100+ | Medium | Best coverage |
## Recommended Workflow
1. **Quick search** (1 API call, <5 sec):
```bash
gh search repos "claude skills" --limit 20
```
2. **Comprehensive search** (3 API calls, ~15 sec):
```bash
# Parallel execution
gh search repos "claude skills" &
gh search code "filename:SKILL.md" &
gh search code "path:.claude/skills" &
wait
```
3. **Deep search** (10+ API calls, ~60 sec; see the sketch after this list):
- All of the above
- Repository tree traversal
- Organization searches
- Known author searches
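A minimal sketch of such a deep pass, reusing the collections and authors listed earlier (the repo and user names are examples, and `main` is assumed as the default branch):
```bash
# Deep pass: tree traversal of known collections, then org and author searches
known_repos=("BehiSecc/awesome-claude-skills" "simonw/claude-skills")
known_users=("lackeyjb" "FrancyJGLisboa")

# 1. Walk known collections for SKILL.md files (assumes default branch "main")
for repo in "${known_repos[@]}"; do
  gh api "repos/$repo/git/trees/main?recursive=1" 2>/dev/null | \
    jq -r --arg repo "$repo" '.tree[] | select(.path | endswith("SKILL.md")) | "\($repo): \(.path)"'
done

# 2. Organization search
gh search repos "org:anthropics" "skills" --limit 20 --json fullName --jq '.[].fullName'

# 3. Known-author search
for user in "${known_users[@]}"; do
  gh search repos "user:$user" "SKILL.md" --limit 10 --json fullName --jq '.[].fullName'
done
```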
Choose based on user needs and time constraints.