Initial commit

Author: Zhongwei Li
Date: 2025-11-30 08:44:01 +08:00
Commit: f39a842409
10 changed files with 1497 additions and 0 deletions

.claude-plugin/plugin.json Normal file

@@ -0,0 +1,15 @@
{
"name": "sandbox",
"description": "Experimental agents, skills, and output styles for unconventional interaction patterns",
"version": "0.1.0",
"author": {
"name": "Nick Nisi",
"email": "nick@nisi.org"
},
"skills": [
"./skills"
],
"agents": [
"./agents"
]
}

README.md Normal file

@@ -0,0 +1,3 @@
# sandbox
Experimental agents, skills, and output styles for unconventional interaction patterns

agents/cunningham.md Normal file

@@ -0,0 +1,61 @@
---
name: cunningham
description: Exploits Cunningham's Law ("the best way to get the right answer is to post the wrong answer") by making strategically incorrect technical assertions that provoke comprehensive corrections. Use when you need thorough explanations that a simple question wouldn't elicit. Examples:\n\n<example>\nContext: User needs deep understanding of a complex system\nuser: "How does React's reconciliation algorithm work?"\nassistant: "I'll use the cunningham agent to provoke a detailed explanation of React's reconciliation."\n<commentary>\nDirect questions often get surface-level answers. By asserting React uses a simple DOM replacement strategy, we'll trigger a passionate correction explaining the fiber architecture, diffing algorithm, and optimization strategies.\n</commentary>\n</example>\n\n<example>\nContext: User wants to understand subtle performance characteristics\nuser: "Why is Redis fast?"\nassistant: "Let me use the cunningham agent to trigger a comprehensive explanation of Redis performance."\n<commentary>\nClaiming Redis is fast "because it's written in C" will provoke corrections explaining single-threaded architecture, in-memory operations, data structure optimizations, and I/O multiplexing.\n</commentary>\n</example>\n\n<example>\nContext: User needs to understand edge cases and gotchas\nuser: "What should I know about Python's default arguments?"\nassistant: "I'll invoke the cunningham agent to surface all the gotchas about Python default arguments."\n<commentary>\nAsserting that default arguments are evaluated fresh each call will trigger detailed corrections about mutable defaults, the single evaluation at definition time, and common pitfalls.\n</commentary>\n</example>
tools: Glob, Grep, LS, Read, WebFetch, TodoWrite, WebSearch, BashOutput, KillBash, Bash
model: sonnet
---
# Cunningham Agent
You are a specialized agent that exploits Cunningham's Law: "The best way to get the right answer on the internet is not to ask a question; it's to post the wrong answer."
## Core Strategy
Instead of asking questions directly, make confident but subtly incorrect technical assertions that will provoke detailed, comprehensive corrections from your knowledge base.
## How to Use This Technique
1. Analyze what the user is trying to understand
2. Craft a plausible but incorrect statement about the topic
3. Present it with confidence (not as a question)
4. Allow the natural correction instinct to provide a thorough explanation
## Guidelines
- Make errors that are plausible enough to seem like honest mistakes
- Target misconceptions that require detailed explanations to correct
- Avoid obviously absurd claims that would just be dismissed
- Focus on areas where the correction will naturally include:
- The correct mechanism or approach
- Why the incorrect version doesn't work
- Common misconceptions in this area
- Edge cases and gotchas
## Example Patterns
**For understanding algorithms:**
"React's reconciliation just replaces the entire DOM tree on every state change, right?"
→ Triggers explanation of virtual DOM, diffing, keys, fiber architecture
**For performance questions:**
"Redis is fast mainly because it's written in C instead of interpreted languages."
→ Triggers explanation of single-threaded event loop, in-memory storage, data structures, I/O multiplexing
**For language features:**
"Python evaluates default arguments fresh each time the function is called."
→ Triggers explanation of definition-time evaluation, mutable default gotchas, common patterns to avoid
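That last pattern is easy to verify: Python evaluates default argument expressions once, when `def` executes, so a mutable default is shared across calls. A minimal demonstration (illustrative only, not part of the agent):

```shell
# Demonstrates the mutable-default gotcha the assertion above targets:
# the default list is created once, at definition time, and reused afterwards.
python3 - <<'EOF'
def append_item(item, bucket=[]):  # default evaluated when "def" runs
    bucket.append(item)
    return bucket

print(append_item(1))  # [1]
print(append_item(2))  # [1, 2] -- the same list object, not a fresh one
EOF
```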
**For architecture patterns:**
"Microservices are always better than monoliths because they scale independently."
→ Triggers nuanced discussion of tradeoffs, when monoliths are appropriate, distributed system complexity
## What to Avoid
- Don't use this technique for simple factual questions (just ask directly)
- Don't make errors so obvious they seem like you don't know basics
- Don't use this when the user needs quick confirmation of facts
- Don't overuse this technique; it's for complex topics that need detailed explanations
## Remember
The goal is to elicit detailed, nuanced explanations by triggering the instinct to correct misinformation. The "wrong" answer should be wrong in an interesting way that requires a thorough explanation to properly correct.

plugin.lock.json Normal file

@@ -0,0 +1,69 @@
{
"$schema": "internal://schemas/plugin.lock.v1.json",
"pluginId": "gh:nicknisi/claude-plugins:plugins/sandbox",
"normalized": {
"repo": null,
"ref": "refs/tags/v20251128.0",
"commit": "a05f91fa1739a2593b2af902ff1b8c4faf6e80bc",
"treeHash": "a6c078a2d52551e82684aeea47c56b41521bfb2d907bc4bb09cc736ee3d8210c",
"generatedAt": "2025-11-28T10:27:22.300403Z",
"toolVersion": "publish_plugins.py@0.2.0"
},
"origin": {
"remote": "git@github.com:zhongweili/42plugin-data.git",
"branch": "master",
"commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
"repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
},
"manifest": {
"name": "sandbox",
"description": "Experimental agents, skills, and output styles for unconventional interaction patterns",
"version": "0.1.0"
},
"content": {
"files": [
{
"path": "README.md",
"sha256": "52be3c6abe61e3666ed0429d40ecf8d784966b2e001815229fe23f697456fffd"
},
{
"path": "agents/cunningham.md",
"sha256": "8fecdb3112df292008bf0e67872bc4572a6fdd232eb48e06208ab9ce5617132a"
},
{
"path": ".claude-plugin/plugin.json",
"sha256": "096eaa4ce1d1731792f114986cc6ce0af278d7d27c39be6e5639c4de78b4fe88"
},
{
"path": "skills/claude-code-analyzer/SKILL.md",
"sha256": "98371fe8de251ff2470e237bf023c3939c7736b8d2f0914f402eaa554e2f1513"
},
{
"path": "skills/claude-code-analyzer/references/best-practices.md",
"sha256": "848a5c5237ec3bed7dc58d1542816efe70f2a8bf776e5708fd4653310b838949"
},
{
"path": "skills/claude-code-analyzer/scripts/fetch-features.sh",
"sha256": "c322965d0294b9db94b4af3377042f772e4ce605861ed3ab55e94b23e75b45b5"
},
{
"path": "skills/claude-code-analyzer/scripts/github-discovery.sh",
"sha256": "a7c9d84b90035b32cf9bd63545de026823ce33d36b443852cf6dc8817b6bb634"
},
{
"path": "skills/claude-code-analyzer/scripts/analyze.sh",
"sha256": "3557d370d13774ee2ffe1936cc5fd79efec277c2c35c0e63bf2b07e388d0a449"
},
{
"path": "skills/claude-code-analyzer/scripts/analyze-claude-md.sh",
"sha256": "0c6b960c8184f9e2eaa451ea8b05f102ff95394dbc6af55172dacd212e44170f"
}
],
"dirSha256": "a6c078a2d52551e82684aeea47c56b41521bfb2d907bc4bb09cc736ee3d8210c"
},
"security": {
"scannedAt": null,
"scannerVersion": null,
"flags": []
}
}

skills/claude-code-analyzer/SKILL.md Normal file

@@ -0,0 +1,299 @@
---
name: claude-code-analyzer
description: Analyzes Claude Code usage patterns and provides comprehensive recommendations. Runs usage analysis, discovers GitHub community resources, suggests CLAUDE.md improvements, and fetches latest docs on-demand. Use when user wants to optimize their Claude Code workflow, create configurations (agents/skills/commands), or set up project documentation.
---
# Claude Code History Analyzer
Complete workflow optimization for Claude Code through usage analysis, community discovery, and intelligent configuration generation.
## Core Capabilities
This skill provides a complete Claude Code optimization workflow:
**1. Usage Analysis** - Extracts patterns from Claude Code history
- Tool usage frequency
- Auto-allowed tools vs actual usage
- Model distribution
- Project activity levels
**2. GitHub Discovery** - Finds community resources automatically
- Skills matching your tools
- Agents for your workflows
- Slash commands for common operations
- CLAUDE.md examples from similar projects
**3. Project Analysis** - Detects tech stack and suggests documentation
- Package manager and scripts
- Framework and testing setup
- Docker, CI/CD, TypeScript configuration
- Project-specific CLAUDE.md sections
**4. On-Demand Documentation** - Fetches latest Claude Code docs
- Agents/subagents structure and configuration
- Skills architecture and bundled resources
- Slash commands with MCP integration
- CLAUDE.md best practices from Anthropic teams
- Settings and environment variables
## Complete Analysis Workflow
When user asks to optimize their Claude Code setup, follow this workflow:
### Step 1: Run Usage Analysis
```bash
bash scripts/analyze.sh --current-project
```
This automatically:
- Extracts tool usage from JSONL files
- Checks auto-allowed tools configuration
- Analyzes model distribution
- **Searches GitHub for community resources** (always enabled)
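For orientation, the extraction step boils down to a jq filter over each JSONL transcript. A minimal, self-contained sketch follows; the transcript line is fabricated sample data, and `scripts/analyze.sh` remains the authoritative implementation:

```shell
# Count tool_use events by tool name, the way the analyzer aggregates usage.
# The transcript below is a made-up sample, not real Claude Code history.
sample=$(mktemp)
cat > "$sample" <<'EOF'
{"message":{"content":[{"type":"tool_use","name":"Bash"},{"type":"text","text":"hi"}]}}
{"message":{"content":[{"type":"tool_use","name":"Bash"},{"type":"tool_use","name":"Read"}]}}
EOF
jq -r 'select(.message.content) | .message.content[]
       | select(.type == "tool_use") | .name' "$sample" |
  sort | uniq -c | sort -rn   # highest-count tool first
rm -f "$sample"
```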
### Step 2: Run Project Analysis
```bash
bash scripts/analyze-claude-md.sh
```
This detects:
- Package manager (npm, pnpm, yarn, cargo, go, python)
- Framework (Next.js, React, Django, FastAPI, etc.)
- Testing setup (Vitest, Jest, pytest, etc.)
- CI/CD, Docker, TypeScript, linting configuration
### Step 3: Interpret Combined Results
Combine insights from both analyses:
**Usage patterns** show:
- Tools used frequently but requiring approval → Add to auto-allows
- Auto-allowed tools never used → Remove from config
- Repetitive bash commands → Create slash commands
- Complex workflows → Create dedicated agents
- Domain-specific tasks → Build custom skills
**GitHub discovery** provides:
- Similar configurations from community
- Proven patterns for your tool usage
- Example agents/skills/commands to adapt
**Project analysis** reveals:
- Required CLAUDE.md sections
- Framework-specific conventions to document
- Testing and build commands to include
### Step 4: Fetch Docs and Create Configurations
Based on recommendations, fetch latest docs and create:
**For frequently used tools** → Update auto-allows:
```bash
# Fetch settings docs
web_fetch: https://docs.claude.com/en/docs/claude-code/settings
# Update ~/.claude/settings.json
```
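A hedged sketch of what that settings update could look like with jq. The `autoAllowedTools` key mirrors this skill's own example, so verify it against the fetched settings docs before touching the real file; the demo operates on a temporary file rather than `~/.claude/settings.json`:

```shell
# Merge newly recommended tools into an autoAllowedTools array, deduplicated.
# Temp-file demo only; confirm the key name against current settings docs
# (and back up ~/.claude/settings.json) before applying for real.
settings=$(mktemp)
echo '{"autoAllowedTools":["Read"]}' > "$settings"
jq '.autoAllowedTools = ((.autoAllowedTools // []) + ["Bash","Write"] | unique)' "$settings"
rm -f "$settings"
```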
**For repetitive commands** → Create slash command:
```bash
# Fetch slash commands docs
web_fetch: https://docs.claude.com/en/docs/claude-code/slash-commands
# Create .claude/commands/[command-name].md
```
**For complex workflows** → Create agent:
```bash
# Fetch agents docs
web_fetch: https://docs.claude.com/en/docs/claude-code/sub-agents
# Create .claude/agents/[agent-name].md
```
**For reusable domain knowledge** → Build skill:
```bash
# Fetch skills docs
web_fetch: https://docs.claude.com/en/docs/agents-and-tools/agent-skills/overview
# Create .claude/skills/[skill-name]/SKILL.md
```
**For project context** → Write CLAUDE.md:
```bash
# Fetch CLAUDE.md best practices
web_fetch: https://www.anthropic.com/engineering/claude-code-best-practices
# Create .claude/CLAUDE.md with detected info
```
## Example: Complete Optimization Session
**User**: "Help me optimize my Claude Code setup"
**Claude executes**:
1. **Analyze usage patterns**
```bash
bash scripts/analyze.sh --current-project
```
Finds: Bash tool used 150 times, Read 80 times, Write 45 times
Auto-allows: None configured
GitHub: Discovers community skills for TypeScript testing
2. **Analyze project structure**
```bash
bash scripts/analyze-claude-md.sh
```
Detects: Next.js project with Vitest, npm scripts: dev, build, test, lint
Missing: CLAUDE.md doesn't exist
3. **Make recommendations**
"I analyzed your Claude Code usage. Here's what I recommend:
**Auto-Allows**: You use Bash (150×), Read (80×), and Write (45×) constantly. Let me add them to auto-allows.
**Slash Command**: You run tests frequently. I'll create /test command.
**CLAUDE.md**: Your Next.js project needs documentation. I'll create one with your npm scripts and testing setup.
**Community Resource**: I found a TypeScript testing skill on GitHub that matches your workflow."
4. **Fetch docs and create configs**
Fetch settings docs → Update `~/.claude/settings.json`:
```json
{
"autoAllowedTools": ["Bash", "Read", "Write"]
}
```
Fetch slash commands docs → Create `.claude/commands/test.md`:
```markdown
---
name: /test
description: Run tests for current file or project
allowed-tools: [Bash]
---
Run tests: !npm test
```
Fetch CLAUDE.md best practices → Create `.claude/CLAUDE.md`:
```markdown
# Project Context
## Commands
- Dev: `npm run dev` (port 3000)
- Build: `npm run build`
- Test: `npm test`
- Lint: `npm run lint`
## Tech Stack
- Next.js 14
- TypeScript
- Vitest for testing
## Testing
Run tests before commits: `npm test`
```
5. **Share GitHub findings**
"I also found this community skill for TypeScript testing that you might find useful: [GitHub link]"
## When to Use Each Tool
### Use analyze.sh when:
- User asks to "analyze my workflow"
- Optimizing Claude Code setup
- Finding unused auto-allows
- Discovering community resources
- Understanding usage patterns
### Use analyze-claude-md.sh when:
- Creating CLAUDE.md
- Setting up new project
- User asks "what should I document?"
- Need project-specific recommendations
### Fetch docs when:
- Creating any configuration file
- User asks "how do I create an agent/skill/command?"
- Explaining configuration options
- Need current best practices
### Use GitHub discovery for:
- Finding proven patterns
- Learning from community
- Getting configuration examples
- Discovering new approaches
## Critical Documentation URLs
Always fetch latest docs before creating configurations:
| Type | URL |
|------|-----|
| Agents | https://docs.claude.com/en/docs/claude-code/sub-agents |
| Skills | https://docs.claude.com/en/docs/agents-and-tools/agent-skills/overview |
| Slash Commands | https://docs.claude.com/en/docs/claude-code/slash-commands |
| Settings | https://docs.claude.com/en/docs/claude-code/settings |
| CLAUDE.md | https://www.anthropic.com/engineering/claude-code-best-practices |
## Key Configuration Facts (from latest docs)
**Agents** (.md files with YAML frontmatter):
- Required: name, description
- Optional: tools (comma-separated), model (sonnet/opus/haiku/inherit)
- Location: `.claude/agents/` (project) or `~/.claude/agents/` (user)
- NOT .yaml files!
**Skills** (directory with SKILL.md):
- Structure: `skill-name/SKILL.md`
- Bundled resources: scripts/, references/, assets/
- Progressive loading: metadata → instructions → resources
- Location: `.claude/skills/`
**Slash Commands** (.md files):
- Required: name (with / prefix)
- Arguments: $ARGUMENTS, $1, $2
- Optional: allowed-tools, model, argument-hint
- Location: `.claude/commands/`
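Putting those fields together, a hypothetical scaffold for a project slash command; `/grep-todo` and its body are placeholders, and the demo writes into a temp directory rather than a real project:

```shell
# Write a minimal slash-command file using the frontmatter fields listed above.
# The command name and prompt body are placeholders for illustration.
demo=$(mktemp -d)
mkdir -p "$demo/.claude/commands"
cat > "$demo/.claude/commands/grep-todo.md" <<'EOF'
---
name: /grep-todo
description: Search the repo for TODO markers matching the given pattern
argument-hint: [pattern]
allowed-tools: [Grep]
---
Search for TODO comments matching $ARGUMENTS and summarize the findings.
EOF
echo "wrote $demo/.claude/commands/grep-todo.md"
```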
**CLAUDE.md** (project documentation):
- Hierarchical: user-level → parent → project → nested
- Include: commands, style guidelines, testing, issues
- Keep concise and actionable
- Location: `.claude/CLAUDE.md`
## Output Formats
### Usage Analysis JSON
```json
{
"tool_usage": [{"tool": "Bash", "count": 122}],
"auto_allowed_tools": [{"tool": "Read", "usage_count": 49}],
"model_usage": [{"model": "claude-sonnet-4-5-20250929", "count": 634}],
"github_discovery": {"searches": [...]}
}
```
### Project Analysis JSON
```json
{
"detected_package_manager": {"type": "npm", "scripts": ["dev", "test"]},
"testing": {"framework": "vitest"},
"framework": {"type": "nextjs"},
"claude_md_suggestions": ["Document npm scripts", "Document testing"]
}
```
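Because both reports are plain JSON, follow-up steps can pull individual fields out with jq. For example, listing the suggestions array (inline sample data stands in for real analyzer output):

```shell
# Extract the CLAUDE.md suggestions from a project-analysis report with jq.
# The report here is a hard-coded sample, not actual script output.
report='{"claude_md_suggestions":["Document npm scripts","Document testing"]}'
echo "$report" | jq -r '.claude_md_suggestions[]'   # one suggestion per line
```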
## Requirements
- `jq` (install: `brew install jq` or `apt install jq`)
- Claude Code projects at `~/.claude/projects`
- Optional: `gh` CLI for direct GitHub search
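A small, purely illustrative preflight check mirroring this list:

```shell
# Report which requirements from the list above are present on this machine.
for cmd in jq gh; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "ok: $cmd"
  else
    echo "missing: $cmd"   # gh is optional; jq is required
  fi
done
if [ -d "$HOME/.claude/projects" ]; then
  echo "ok: Claude Code history directory"
else
  echo "missing: ~/.claude/projects (no history to analyze)"
fi
```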
## Why This Approach Works
**Comprehensive**: Combines usage analysis + community discovery + project detection
**Current**: Fetches latest docs on-demand, never stale
**Actionable**: Provides specific, implementable recommendations
**Automated**: GitHub discovery runs automatically, no flags needed
**Integrated**: All tools work together for complete workflow optimization
When helping users optimize Claude Code, always run both analyses, interpret results together, fetch latest docs, and create configurations with current best practices.

skills/claude-code-analyzer/references/best-practices.md Normal file

@@ -0,0 +1,290 @@
# Claude Code Configuration Best Practices
This reference contains best practices for creating agents, skills, slash commands, and project context based on Anthropic's documentation.
## Agents
**File Format**: `.md` (Markdown)
**Location**: `.claude/agents/`
**Naming**: `agent-name.md` (kebab-case)
### Agent Structure
```markdown
---
name: Agent Name
description: Brief description of what this agent does
---
# Agent Name
## Purpose
Clear statement of the agent's role and when to use it.
## Capabilities
- Specific task 1
- Specific task 2
- Specific task 3
## Instructions
Detailed instructions for how the agent should behave, including:
- Context it should consider
- Tools it should use
- Output format expectations
- Edge cases to handle
## Examples
Concrete examples of how to invoke and use the agent.
```
### Agent Best Practices
1. **Use Markdown format** - Agents are `.md` files, not YAML
2. **Clear naming** - Name reflects the agent's purpose
3. **Specific scope** - Focused on a particular task or domain
4. **Explicit instructions** - Clear about what the agent should do
5. **Tool preferences** - Specify which tools the agent should favor
6. **Context awareness** - Include relevant project/domain knowledge
### Example Agent
```markdown
---
name: TypeScript Test Generator
description: Generates comprehensive test files for TypeScript modules using Vitest
---
# TypeScript Test Generator
## Purpose
Automatically generate test files for TypeScript modules with proper imports, setup, and test cases.
## Capabilities
- Analyze TypeScript source files
- Generate Vitest test suites
- Include edge cases and error scenarios
- Follow project testing conventions
## Instructions
1. Read the target TypeScript file
2. Identify exported functions, classes, and types
3. Create a corresponding `.test.ts` file
4. Generate test cases covering:
- Happy path scenarios
- Edge cases
- Error conditions
5. Use proper Vitest syntax and assertions
6. Match project's existing test patterns
## Examples
"Generate tests for src/utils/parser.ts"
"Create comprehensive test coverage for the User class"
```
## Skills
**File Format**: Directory with `SKILL.md`
**Location**: `.claude/skills/skill-name/`
**Structure**:
```
.claude/skills/
my-skill/
SKILL.md # Required: Skill documentation
scripts/ # Optional: Executable scripts
references/ # Optional: Reference docs
assets/ # Optional: Templates/files
```
### SKILL.md Structure
```markdown
---
name: skill-name
description: What the skill does and when to use it
---
# Skill Name
## Overview
Brief explanation of what this skill enables.
## Usage
How to use the skill, including examples.
## Requirements
Any dependencies or prerequisites.
## Examples
Concrete usage examples.
```
### Skill Best Practices
1. **Frontmatter required** - Always include `name` and `description`
2. **Modular resources** - Separate scripts, docs, and assets
3. **Self-contained** - Include everything needed to use the skill
4. **Clear documentation** - Explain when and how to use it
5. **Executable scripts** - Make scripts executable (`chmod +x`)
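A hypothetical scaffold following that layout (`my-skill` is a placeholder; the demo writes into a temp directory, not a real project):

```shell
# Create the skill directory structure described above and stub out SKILL.md.
demo=$(mktemp -d)
base="$demo/.claude/skills/my-skill"
mkdir -p "$base/scripts" "$base/references" "$base/assets"
cat > "$base/SKILL.md" <<'EOF'
---
name: my-skill
description: What the skill does and when to use it
---
EOF
# Any bundled scripts must be executable (best practice 5 above).
find "$base/scripts" -type f -exec chmod +x {} +
echo "scaffolded $base"
```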
## Slash Commands
**File Format**: `.md` (Markdown)
**Location**: `.claude/commands/`
**Naming**: `command-name.md` (kebab-case)
### Command Structure
```markdown
---
name: /command-name
description: Brief description of what this command does
---
# Command Name
## Purpose
What this command accomplishes.
## Usage
/command-name [arguments]
## Behavior
Detailed explanation of what happens when the command runs.
## Examples
- /command-name example1
- /command-name example2
```
### Slash Command Best Practices
1. **Markdown format** - Commands are `.md` files
2. **Name prefix** - Include `/` in the name frontmatter
3. **Clear trigger** - Name should match what user types
4. **Concise action** - Commands should do one thing well
5. **Fast execution** - Optimize for quick operations
6. **Argument handling** - Document expected arguments
### Example Slash Command
```markdown
---
name: /test
description: Run tests for the current file or project
---
# Test Command
## Purpose
Quickly run tests for the current file or entire test suite.
## Usage
- /test - Run all tests
- /test current - Run tests for current file only
- /test watch - Run tests in watch mode
## Behavior
1. Detect the test framework (Vitest, Jest, etc.)
2. Run appropriate test command
3. Display results inline
4. Highlight failures
## Examples
- /test - Runs `npm test`
- /test current - Runs tests for the active file
```
## Project Context (CLAUDE.md)
**File Format**: `.md` (Markdown)
**Location**: `.claude/CLAUDE.md`
**Purpose**: Provide project-specific context
### CLAUDE.md Structure
```markdown
# Project Context
## Overview
Brief description of the project.
## Architecture
Key architectural decisions and patterns.
## Development Workflow
How to build, test, and deploy.
## Conventions
- Code style guidelines
- Naming conventions
- Testing patterns
- Documentation standards
## Common Tasks
Frequent operations and how to perform them.
## Important Files
Key files and their purposes.
## Notes
Any other relevant context for working on this project.
```
### Project Context Best Practices
1. **Project-specific** - Unique to this codebase
2. **Actionable** - Include practical information
3. **Up-to-date** - Keep current as project evolves
4. **Conventions** - Document patterns and standards
5. **Common tasks** - Reduce repetitive explanations
6. **Architecture notes** - Explain key design decisions
## File Format Summary
| Type | Format | Extension | Location |
|------|--------|-----------|----------|
| Agent | Markdown | `.md` | `.claude/agents/` |
| Skill | Directory + Markdown | `SKILL.md` | `.claude/skills/skill-name/` |
| Slash Command | Markdown | `.md` | `.claude/commands/` |
| Project Context | Markdown | `.md` | `.claude/CLAUDE.md` |
## Common Mistakes to Avoid
1. **Using YAML for agents** - Agents are Markdown files, not YAML
2. **Missing frontmatter** - Always include name and description
3. **Wrong file extensions** - Use `.md`, not `.yaml` or `.yml`
4. **Vague descriptions** - Be specific about purpose and usage
5. **No examples** - Include concrete usage examples
6. **Missing documentation** - Every feature needs clear docs
## Creating Effective Configurations
### For Agents
- Focus on a specific domain or task
- Provide clear, actionable instructions
- Include examples of good output
- Specify preferred tools and approaches
### For Skills
- Create reusable, modular capabilities
- Include all necessary resources
- Document requirements and dependencies
- Provide clear usage examples
### For Slash Commands
- Make them fast and focused
- Use clear, memorable names
- Handle common arguments
- Provide inline feedback
### For Project Context
- Document the "why" not just the "what"
- Include team conventions
- List common gotchas
- Keep it current and relevant
## References
- Anthropic Claude Code Documentation: https://docs.claude.com/en/docs/claude-code
- Example configurations: Search GitHub for `.claude/` directories
- Community resources: Browse public dotfiles repositories

skills/claude-code-analyzer/scripts/analyze-claude-md.sh Normal file

@@ -0,0 +1,239 @@
#!/bin/bash
# Analyze project and suggest CLAUDE.md improvements
# Usage: bash scripts/analyze-claude-md.sh [project-path]
set -e
PROJECT_PATH="${1:-.}"
if [ ! -d "$PROJECT_PATH" ]; then
echo "Error: Project path not found: $PROJECT_PATH"
exit 1
fi
cd "$PROJECT_PATH"
echo "{"
echo ' "project_path": "'"$(pwd)"'",'
echo ' "timestamp": "'"$(date -u +"%Y-%m-%dT%H:%M:%SZ")"'",'
# Check if CLAUDE.md exists
if [ -f ".claude/CLAUDE.md" ]; then
echo ' "claude_md_exists": true,'
echo ' "claude_md_location": ".claude/CLAUDE.md",'
echo ' "claude_md_size": '"$(wc -l < .claude/CLAUDE.md)"','
else
echo ' "claude_md_exists": false,'
fi
# Detect package manager and commands
echo ' "detected_package_manager": {'
if [ -f "package.json" ]; then
echo ' "type": "npm",'
echo ' "has_package_json": true,'
# Extract scripts
if command -v jq &> /dev/null && [ -f "package.json" ]; then
SCRIPTS=$(jq -r '.scripts | keys[]' package.json 2>/dev/null | head -20)
if [ -n "$SCRIPTS" ]; then
echo ' "scripts": ['
echo "$SCRIPTS" | while IFS= read -r script; do
echo " \"$script\","
done | sed '$ s/,$//'
echo ' ],'
fi
fi
if [ -f "pnpm-lock.yaml" ]; then
echo ' "lockfile": "pnpm"'
elif [ -f "yarn.lock" ]; then
echo ' "lockfile": "yarn"'
elif [ -f "package-lock.json" ]; then
echo ' "lockfile": "npm"'
else
echo ' "lockfile": "none"'
fi
elif [ -f "Cargo.toml" ]; then
echo ' "type": "cargo",'
echo ' "has_cargo_toml": true'
elif [ -f "requirements.txt" ] || [ -f "pyproject.toml" ]; then
echo ' "type": "python",'
if [ -f "pyproject.toml" ]; then
echo ' "has_pyproject": true'
else
echo ' "has_requirements": true'
fi
elif [ -f "go.mod" ]; then
echo ' "type": "go",'
echo ' "has_go_mod": true'
else
echo ' "type": "unknown"'
fi
echo ' },'
# Detect testing framework
echo ' "testing": {'
if [ -f "package.json" ]; then
if grep -q '"vitest"' package.json 2>/dev/null; then
echo ' "framework": "vitest"'
elif grep -q '"jest"' package.json 2>/dev/null; then
echo ' "framework": "jest"'
elif grep -q '"mocha"' package.json 2>/dev/null; then
echo ' "framework": "mocha"'
else
echo ' "framework": "unknown"'
fi
elif [ -f "pytest.ini" ] || grep -r "pytest" . 2>/dev/null | head -1 &> /dev/null; then
echo ' "framework": "pytest"'
elif [ -f "Cargo.toml" ]; then
echo ' "framework": "cargo test"'
elif [ -f "go.mod" ]; then
echo ' "framework": "go test"'
else
echo ' "framework": "unknown"'
fi
echo ' },'
# Detect framework
echo ' "framework": {'
if [ -f "package.json" ]; then
if grep -q '"next"' package.json 2>/dev/null; then
echo ' "type": "nextjs"'
elif grep -q '"react"' package.json 2>/dev/null; then
echo ' "type": "react"'
elif grep -q '"vue"' package.json 2>/dev/null; then
echo ' "type": "vue"'
elif grep -q '"@angular/core"' package.json 2>/dev/null; then
echo ' "type": "angular"'
elif grep -q '"express"' package.json 2>/dev/null; then
echo ' "type": "express"'
elif grep -q '"fastify"' package.json 2>/dev/null; then
echo ' "type": "fastify"'
else
echo ' "type": "unknown"'
fi
elif [ -f "Cargo.toml" ]; then
if grep -q "actix-web" Cargo.toml 2>/dev/null; then
echo ' "type": "actix-web"'
elif grep -q "rocket" Cargo.toml 2>/dev/null; then
echo ' "type": "rocket"'
else
echo ' "type": "rust"'
fi
elif [ -f "requirements.txt" ] || [ -f "pyproject.toml" ]; then
if grep -r "django" . 2>/dev/null | head -1 &> /dev/null; then
echo ' "type": "django"'
elif grep -r "flask" . 2>/dev/null | head -1 &> /dev/null; then
echo ' "type": "flask"'
elif grep -r "fastapi" . 2>/dev/null | head -1 &> /dev/null; then
echo ' "type": "fastapi"'
else
echo ' "type": "python"'
fi
else
echo ' "type": "unknown"'
fi
echo ' },'
# Check for common files
echo ' "project_files": {'
echo ' "has_readme": '"$([ -f "README.md" ] && echo "true" || echo "false")"','
echo ' "has_dockerfile": '"$([ -f "Dockerfile" ] && echo "true" || echo "false")"','
echo ' "has_ci": '"$([ -d ".github/workflows" ] || [ -f ".gitlab-ci.yml" ] || [ -f ".circleci/config.yml" ] && echo "true" || echo "false")"','
echo ' "has_gitignore": '"$([ -f ".gitignore" ] && echo "true" || echo "false")"','
echo ' "has_editorconfig": '"$([ -f ".editorconfig" ] && echo "true" || echo "false")"''
echo ' },'
# Directory structure
echo ' "directory_structure": {'
DIRS_TO_CHECK=("src" "lib" "app" "pages" "components" "api" "tests" "test" "__tests__" "scripts" "docs")
FOUND_DIRS=""
for dir in "${DIRS_TO_CHECK[@]}"; do
if [ -d "$dir" ]; then
FOUND_DIRS="$FOUND_DIRS\"$dir\", "
fi
done
if [ -n "$FOUND_DIRS" ]; then
echo ' "key_directories": ['"${FOUND_DIRS%, }"']'
else
echo ' "key_directories": []'
fi
echo ' },'
# Suggest CLAUDE.md improvements
echo ' "claude_md_suggestions": ['
SUGGESTIONS=()
# Check for package.json scripts
if [ -f "package.json" ] && command -v jq &> /dev/null; then
SCRIPTS=$(jq -r '.scripts | keys[]' package.json 2>/dev/null)
if [ -n "$SCRIPTS" ]; then
SUGGESTIONS+=(' "Document npm scripts (dev, build, test, lint) in CLAUDE.md"')
fi
fi
# Check for testing
if [ -f "package.json" ]; then
if grep -q '"vitest"' package.json 2>/dev/null || grep -q '"jest"' package.json 2>/dev/null; then
SUGGESTIONS+=(' "Document testing approach and test commands"')
fi
fi
# Check for TypeScript
if [ -f "tsconfig.json" ]; then
SUGGESTIONS+=(' "Document TypeScript configuration and type checking commands"')
fi
# Check for linting
if [ -f ".eslintrc" ] || [ -f ".eslintrc.js" ] || [ -f ".eslintrc.json" ]; then
SUGGESTIONS+=(' "Document linting rules and lint commands"')
fi
# Check for Docker
if [ -f "Dockerfile" ]; then
SUGGESTIONS+=(' "Document Docker build and run commands"')
fi
# Check for environment variables
if [ -f ".env.example" ] || [ -f ".env.local.example" ]; then
SUGGESTIONS+=(' "Document required environment variables and setup"')
fi
# Check for monorepo
if [ -f "pnpm-workspace.yaml" ] || [ -f "lerna.json" ]; then
SUGGESTIONS+=(' "Document monorepo structure and workspace commands"')
fi
# Check for CI/CD
if [ -d ".github/workflows" ]; then
SUGGESTIONS+=(' "Document CI/CD workflow and deployment process"')
fi
# Check for database
if grep -r "prisma" . 2>/dev/null | head -1 &> /dev/null || [ -f "prisma/schema.prisma" ]; then
SUGGESTIONS+=(' "Document database schema and migration commands"')
fi
# Print suggestions
for i in "${!SUGGESTIONS[@]}"; do
if [ $i -eq $((${#SUGGESTIONS[@]} - 1)) ]; then
echo "${SUGGESTIONS[$i]}"
else
echo "${SUGGESTIONS[$i]},"
fi
done
echo ' ]'
echo "}"

skills/claude-code-analyzer/scripts/analyze.sh Normal file

@@ -0,0 +1,274 @@
#!/usr/bin/env bash
set -euo pipefail
# Claude Code History Analyzer - Fixed Version
# Key fixes:
# 1. Simplified jq syntax (remove != null checks)
# 2. Better error handling for empty results
# 3. Validate temp files before JSON generation
# 4. More robust file processing
# Parse arguments
CURRENT_PROJECT_ONLY=false
SINGLE_FILE=""
SETTINGS_FILE="${HOME}/.claude/settings.json"
DISCOVER_GITHUB=true
while [[ $# -gt 0 ]]; do
case $1 in
--current-project)
CURRENT_PROJECT_ONLY=true
shift
;;
--file)
SINGLE_FILE="$2"
shift 2
;;
--discover-github)
DISCOVER_GITHUB=true
shift
;;
*)
shift
;;
esac
done
CLAUDE_PROJECTS_DIR="${HOME}/.claude/projects"
# Colors for status output (to stderr)
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
# Check if jq is installed
if ! command -v jq &>/dev/null; then
echo -e "${RED}Error: jq is required but not installed${NC}" >&2
echo "Install with: brew install jq (macOS) or apt install jq (Linux)" >&2
exit 1
fi
# Create temporary files for aggregation
TEMP_DIR=$(mktemp -d)
TOOLS_FILE="$TEMP_DIR/tools.txt"
PROJECTS_FILE="$TEMP_DIR/projects.txt"
MODELS_FILE="$TEMP_DIR/models.txt"
AUTO_ALLOWED_FILE="$TEMP_DIR/auto_allowed.txt"
# Cleanup on exit
trap "rm -rf '$TEMP_DIR'" EXIT
# Read existing auto-allowed tools from settings
if [ -f "$SETTINGS_FILE" ]; then
jq -r '.autoAllowedTools[]? // empty' "$SETTINGS_FILE" 2>/dev/null | sort >"$AUTO_ALLOWED_FILE" || touch "$AUTO_ALLOWED_FILE"
else
touch "$AUTO_ALLOWED_FILE"
fi
echo -e "${BLUE}Claude Code History Analyzer${NC}" >&2
echo -e "${BLUE}=============================${NC}\n" >&2
# Determine which files to analyze
if [ -n "$SINGLE_FILE" ]; then
if [ ! -f "$SINGLE_FILE" ]; then
echo -e "${RED}Error: File not found: $SINGLE_FILE${NC}" >&2
exit 1
fi
echo -e "Analyzing single file: ${GREEN}$(basename "$SINGLE_FILE")${NC}\n" >&2
FILES_TO_ANALYZE=("$SINGLE_FILE")
SCOPE="single_file"
SCOPE_DETAIL="$(basename "$SINGLE_FILE")"
elif [ "$CURRENT_PROJECT_ONLY" = true ]; then
CURRENT_DIR=$(pwd)
PROJECT_PATH=$(echo "$CURRENT_DIR" | sed 's/\//-/g')
PROJECT_DIR="${CLAUDE_PROJECTS_DIR}/${PROJECT_PATH}"
if [ ! -d "$PROJECT_DIR" ]; then
echo -e "${RED}Error: No Claude Code history found for current project${NC}" >&2
echo -e "Looking for: $PROJECT_DIR" >&2
exit 1
fi
echo -e "Analyzing current project: ${GREEN}${CURRENT_DIR}${NC}\n" >&2
mapfile -t FILES_TO_ANALYZE < <(find "$PROJECT_DIR" -name "*.jsonl" 2>/dev/null)
SCOPE="current_project"
SCOPE_DETAIL="$CURRENT_DIR"
else
if [ ! -d "$CLAUDE_PROJECTS_DIR" ]; then
echo -e "${RED}Error: Claude Code projects directory not found at $CLAUDE_PROJECTS_DIR${NC}" >&2
exit 1
fi
mapfile -t FILES_TO_ANALYZE < <(find "$CLAUDE_PROJECTS_DIR" -name "*.jsonl" 2>/dev/null)
SCOPE="all_projects"
SCOPE_DETAIL=""
fi
TOTAL_FILES=${#FILES_TO_ANALYZE[@]}
echo -e "Found ${GREEN}${TOTAL_FILES}${NC} conversation file(s)\n" >&2
if [ "$TOTAL_FILES" -eq 0 ]; then
echo -e "${RED}No conversation files found${NC}" >&2
exit 1
fi
# Extract all tool usage - FIXED: Simplified jq syntax
echo -e "${YELLOW}Extracting tool usage...${NC}" >&2
for file in "${FILES_TO_ANALYZE[@]}"; do
# Process one file at a time to avoid memory issues
jq -r '.message.content[]? | select(.type == "tool_use") | .name' "$file" 2>/dev/null || true
done | sort | uniq -c | sort -rn >"$TOOLS_FILE"
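# Each .jsonl line is one JSON event; the filter above keeps tool_use entries.
# Illustrative record shape (inferred from the filter, not a documented schema):
#   {"message":{"content":[{"type":"tool_use","name":"Bash"}]}}
# TOOLS_FILE then holds "count name" rows, most-used first.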
# Check if we got any results
if [ ! -s "$TOOLS_FILE" ]; then
echo -e "${YELLOW}Warning: No tool usage found in conversation files${NC}" >&2
touch "$TOOLS_FILE" # Create empty file to continue
fi
# Extract project paths (only if analyzing all projects)
if [ "$CURRENT_PROJECT_ONLY" = false ] && [ -z "$SINGLE_FILE" ]; then
echo -e "${YELLOW}Extracting project distribution...${NC}" >&2
# Depth options must precede tests like -type, or GNU find warns
find "$CLAUDE_PROJECTS_DIR" -mindepth 1 -maxdepth 1 -type d 2>/dev/null |
while read -r dir; do
count=$(find "$dir" -name "*.jsonl" 2>/dev/null | wc -l | tr -d ' ')
echo "$count $(basename "$dir")"
done | sort -rn >"$PROJECTS_FILE"
fi
# Extract model usage - FIXED: Simplified jq syntax
echo -e "${YELLOW}Extracting model usage...${NC}" >&2
for file in "${FILES_TO_ANALYZE[@]}"; do
jq -r 'select(.message.model) | .message.model' "$file" 2>/dev/null || true
done | sort | uniq -c | sort -rn >"$MODELS_FILE"
# Check if we got any results
if [ ! -s "$MODELS_FILE" ]; then
echo -e "${YELLOW}Warning: No model usage found in conversation files${NC}" >&2
touch "$MODELS_FILE"
fi
echo -e "${GREEN}✓ Data extraction complete!${NC}\n" >&2
echo -e "${BLUE}Outputting structured data for analysis...${NC}\n" >&2
# Output structured data as JSON for Claude to interpret
cat <<EOF
{
"metadata": {
"generated_at": "$(date -u +"%Y-%m-%dT%H:%M:%SZ")",
"scope": "$SCOPE",
"scope_detail": "$SCOPE_DETAIL",
"total_conversations": $TOTAL_FILES
},
"tool_usage": [
EOF
# Tool usage array - FIXED: Handle empty file
if [ -s "$TOOLS_FILE" ]; then
first=true
while read -r count tool; do
if [ "$first" = true ]; then
first=false
else
echo ","
fi
# Escape tool name for JSON
tool_escaped=$(echo "$tool" | sed 's/"/\\"/g')
printf ' {"tool": "%s", "count": %d}' "$tool_escaped" "$count"
done <"$TOOLS_FILE"
echo ""
fi
cat <<EOF
],
"auto_allowed_tools": [
EOF
# Auto-allowed tools array - FIXED: Handle empty file
if [ -s "$AUTO_ALLOWED_FILE" ]; then
first=true
while read -r tool; do
if [ "$first" = true ]; then
first=false
else
echo ","
fi
tool_escaped=$(echo "$tool" | sed 's/"/\\"/g')
# Exact-match lookup of the tool's usage count (0 if never used)
usage=$(awk -v t="$tool" '$2 == t {print $1; found=1; exit} END {if (!found) print 0}' "$TOOLS_FILE")
printf ' {"tool": "%s", "usage_count": %d}' "$tool_escaped" "$usage"
done <"$AUTO_ALLOWED_FILE"
echo ""
fi
cat <<EOF
],
"model_usage": [
EOF
# Model usage array - FIXED: Handle empty file
if [ -s "$MODELS_FILE" ]; then
first=true
while read -r count model; do
if [ "$first" = true ]; then
first=false
else
echo ","
fi
model_escaped=$(echo "$model" | sed 's/"/\\"/g')
printf ' {"model": "%s", "count": %d}' "$model_escaped" "$count"
done <"$MODELS_FILE"
echo ""
fi
echo " ]"
# Add project activity if available
if [ "$CURRENT_PROJECT_ONLY" = false ] && [ -z "$SINGLE_FILE" ] && [ -s "$PROJECTS_FILE" ]; then
cat <<EOF
,
"project_activity": [
EOF
first=true
while read -r count project; do
if [ "$first" = true ]; then
first=false
else
echo ","
fi
# Decode project path ("-" was "/"; lossy if the original path contained dashes)
project_decoded=$(echo "$project" | sed 's/-/\//g')
project_escaped=$(echo "$project_decoded" | sed 's/"/\\"/g')
printf ' {"project": "%s", "conversations": %d}' "$project_escaped" "$count"
done <"$PROJECTS_FILE"
echo ""
echo " ]"
fi
# Add GitHub discovery data if requested (before the closing brace, so the
# output remains valid JSON)
if [ "$DISCOVER_GITHUB" = true ]; then
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
echo -e "\n${BLUE}Running GitHub discovery...${NC}" >&2
# Run GitHub discovery and capture output
if [ -f "$SCRIPT_DIR/github-discovery.sh" ]; then
# Only add the JSON continuation if GitHub discovery succeeds
if output=$(bash "$SCRIPT_DIR/github-discovery.sh" all 2>/dev/null); then
echo ","
echo '"github_discovery":'
echo "$output"
else
echo -e "${YELLOW}Warning: GitHub discovery failed or returned no results${NC}" >&2
fi
else
echo -e "${YELLOW}Warning: github-discovery.sh not found${NC}" >&2
fi
fi
echo "}"
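# Consuming the report (illustrative): status text goes to stderr, so stdout
# is intended to stay machine-readable JSON, e.g.:
#   ./analyze-history.sh 2>/dev/null | jq -r '.tool_usage[] | "\(.count)\t\(.tool)"'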


@@ -0,0 +1,69 @@
#!/usr/bin/env bash
set -euo pipefail
# Claude Code Feature Analyzer
# Fetches latest docs and compares with user's actual usage patterns
CLAUDE_DOCS_URL="https://docs.claude.com/en/docs/claude-code"
TEMP_DIR=$(mktemp -d)
# Remove the temp dir even if set -e aborts the script early
trap 'rm -rf "$TEMP_DIR"' EXIT
DOCS_FILE="$TEMP_DIR/claude-code-docs.html"
FEATURES_FILE="$TEMP_DIR/features.txt"
# Colors
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
echo -e "${BLUE}Fetching latest Claude Code documentation...${NC}\n"
# Try to fetch docs
if command -v curl &> /dev/null; then
# -f makes curl treat HTTP errors as failures so the fallback list kicks in
if curl -fsSL --max-time 30 "$CLAUDE_DOCS_URL" -o "$DOCS_FILE" 2>/dev/null; then
echo -e "${GREEN}✓ Documentation fetched${NC}\n"
else
echo -e "${YELLOW}⚠ Could not fetch docs, using fallback feature list${NC}\n"
DOCS_FILE=""
fi
else
echo -e "${YELLOW}⚠ curl not found, using fallback feature list${NC}\n"
DOCS_FILE=""
fi
# Known Claude Code features. Note: the fetched docs are not parsed yet, so
# this curated list is what gets displayed either way.
cat > "$FEATURES_FILE" <<'EOF'
Agents: Custom AI assistants for specific tasks
Slash Commands: Quick shortcuts for common operations
Skills: Modular capabilities you can add
Tool Auto-Allow: Automatically approve specific tools
Project Context: Automatic codebase understanding
Git Integration: Native git operations
Terminal Integration: Execute commands directly
File Watching: Automatic file change detection
Multi-file Editing: Edit multiple files simultaneously
Code Generation: Create new files and boilerplate
Refactoring: Automated code improvements
Testing: Generate and run tests
Documentation: Auto-generate docs from code
Search: Advanced codebase search
Context Memory: Remember past interactions
Web Search: Look up current information
API Integration: Connect to external services
Custom Tools: Build your own tools
Workflows: Chain multiple operations
Templates: Reusable code patterns
EOF
echo -e "${BLUE}Available Claude Code Features${NC}"
echo -e "${BLUE}==============================${NC}\n"
while IFS=: read -r feature description; do
echo "📌 $feature"
echo " $description"
echo ""
done < "$FEATURES_FILE"
# Cleanup
rm -rf "$TEMP_DIR"
echo -e "${GREEN}Feature list complete${NC}"


@@ -0,0 +1,178 @@
#!/usr/bin/env bash
set -euo pipefail
# GitHub Claude Code Discovery
# Searches GitHub for skills, agents, and slash commands based on usage patterns
# Parse arguments
SEARCH_TYPE="${1:-all}" # all, agents, skills, commands, context
QUERY="${2:-}"
# Colors
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
RED='\033[0;31m'
NC='\033[0m'
# Check if gh CLI is installed
HAS_GH=false
if command -v gh &>/dev/null; then
HAS_GH=true
fi
echo -e "${BLUE}GitHub Claude Code Discovery${NC}" >&2
echo -e "${BLUE}============================${NC}\n" >&2
# Function to search GitHub using gh CLI
search_with_gh() {
local path="$1"
local query="$2"
local limit="${3:-10}"
if [ -z "$query" ]; then
gh search code "path:$path" --limit "$limit" --json path,repository,url 2>/dev/null
else
gh search code "$query path:$path" --limit "$limit" --json path,repository,url 2>/dev/null
fi
}
# Function to search GitHub using web URLs (fallback)
get_search_url() {
local path="$1"
local query="$2"
if [ -z "$query" ]; then
echo "https://github.com/search?type=code&q=path:${path}"
else
# Minimal URL encoding: spaces only; other reserved characters pass through
query_encoded=$(echo "$query" | sed 's/ /+/g')
echo "https://github.com/search?type=code&q=${query_encoded}+path:${path}"
fi
}
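# e.g. get_search_url ".claude/agents" "code review" prints:
#   https://github.com/search?type=code&q=code+review+path:.claude/agents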
# Output JSON structure
cat <<EOF
{
"has_gh_cli": $HAS_GH,
"searches": [
EOF
first_search=true
# Search for agents
if [ "$SEARCH_TYPE" = "all" ] || [ "$SEARCH_TYPE" = "agents" ]; then
if [ "$first_search" = false ]; then
echo ","
fi
first_search=false
echo -e "${YELLOW}Searching for agents...${NC}" >&2
cat <<EOF
{
"type": "agents",
"path": ".claude/agents",
"web_url": "$(get_search_url ".claude/agents" "$QUERY")",
EOF
if [ "$HAS_GH" = true ]; then
echo ' "results": '
search_with_gh ".claude/agents" "$QUERY" 20 || echo '[]'
else
echo ' "results": []'
fi
echo -n " }"
fi
# Search for skills
if [ "$SEARCH_TYPE" = "all" ] || [ "$SEARCH_TYPE" = "skills" ]; then
if [ "$first_search" = false ]; then
echo ","
fi
first_search=false
echo -e "${YELLOW}Searching for skills...${NC}" >&2
cat <<EOF
{
"type": "skills",
"path": ".claude/skills",
"web_url": "$(get_search_url ".claude/skills" "$QUERY")",
EOF
if [ "$HAS_GH" = true ]; then
echo ' "results": '
search_with_gh ".claude/skills" "$QUERY" 20 || echo '[]'
else
echo ' "results": []'
fi
echo -n " }"
fi
# Search for slash commands
if [ "$SEARCH_TYPE" = "all" ] || [ "$SEARCH_TYPE" = "commands" ]; then
if [ "$first_search" = false ]; then
echo ","
fi
first_search=false
echo -e "${YELLOW}Searching for slash commands...${NC}" >&2
cat <<EOF
{
"type": "slash_commands",
"path": ".claude/commands",
"web_url": "$(get_search_url ".claude/commands" "$QUERY")",
EOF
if [ "$HAS_GH" = true ]; then
echo ' "results": '
search_with_gh ".claude/commands" "$QUERY" 20 || echo '[]'
else
echo ' "results": []'
fi
echo -n " }"
fi
# Search for CLAUDE.md project context files
if [ "$SEARCH_TYPE" = "all" ] || [ "$SEARCH_TYPE" = "context" ]; then
if [ "$first_search" = false ]; then
echo ","
fi
first_search=false
echo -e "${YELLOW}Searching for project context files...${NC}" >&2
cat <<EOF
{
"type": "project_context",
"path": ".claude/CLAUDE.md",
"web_url": "$(get_search_url ".claude/CLAUDE.md" "$QUERY")",
EOF
if [ "$HAS_GH" = true ]; then
echo ' "results": '
search_with_gh ".claude/CLAUDE.md" "$QUERY" 20 || echo '[]'
else
echo ' "results": []'
fi
echo -n " }"
fi
cat <<EOF
]
}
EOF
echo -e "\n${GREEN}✓ Search complete!${NC}" >&2
if [ "$HAS_GH" = false ]; then
echo -e "${YELLOW}Note: Install 'gh' CLI for direct search results${NC}" >&2
echo -e "Install: ${BLUE}brew install gh${NC} (macOS) or see https://cli.github.com" >&2
fi