Initial commit

Zhongwei Li
2025-11-29 18:16:25 +08:00
commit c50a3be78a
28 changed files with 9866 additions and 0 deletions


@@ -0,0 +1,12 @@
{
"name": "meta-automation-architect",
"description": "Intelligent project analysis and custom automation generation. Analyzes projects using AI agents, discovers existing tools, and generates tailored skills, commands, agents, and hooks with cost transparency and preference learning.",
"version": "2.0.0",
"author": {
"name": "Tobias Weber",
"email": "comzine@gmail.com"
},
"skills": [
"./skills/meta-automation-architect"
]
}

3
README.md Normal file

@@ -0,0 +1,3 @@
# meta-automation-architect
Intelligent project analysis and custom automation generation. Analyzes projects using AI agents, discovers existing tools, and generates tailored skills, commands, agents, and hooks with cost transparency and preference learning.

141
plugin.lock.json Normal file

@@ -0,0 +1,141 @@
{
"$schema": "internal://schemas/plugin.lock.v1.json",
"pluginId": "gh:comzine/claude-code-marketplace:meta-automation-architect",
"normalized": {
"repo": null,
"ref": "refs/tags/v20251128.0",
"commit": "3d097fe68fa432c732488d63e4a4b8f24bacbd2c",
"treeHash": "b46ad5ff7d6fef0eb50218a3d07e395a91c0c9903defadb844c387e0524ed849",
"generatedAt": "2025-11-28T10:15:45.879584Z",
"toolVersion": "publish_plugins.py@0.2.0"
},
"origin": {
"remote": "git@github.com:zhongweili/42plugin-data.git",
"branch": "master",
"commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
"repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
},
"manifest": {
"name": "meta-automation-architect",
"description": "Intelligent project analysis and custom automation generation. Analyzes projects using AI agents, discovers existing tools, and generates tailored skills, commands, agents, and hooks with cost transparency and preference learning.",
"version": "2.0.0"
},
"content": {
"files": [
{
"path": "README.md",
"sha256": "84eff59dd24546ee52533d93262e84aee3f8f45d589eeb8b820e4584c40ccbba"
},
{
"path": ".claude-plugin/plugin.json",
"sha256": "0a3e9e0447b341726c2c2adcb67ecd5a56bcf8a77fbd93d4bf54a011b97ef50f"
},
{
"path": "skills/meta-automation-architect/OVERVIEW.md",
"sha256": "5d99cd6ef28c54b72f7dcaa7f4d6bb177a6fe30e6d82184a1b7f29821df000d6"
},
{
"path": "skills/meta-automation-architect/CHANGELOG.md",
"sha256": "e5aa3ac0a21e6a6236faf4d550a3032b8350467e552f157355c1fae93a3a49fb"
},
{
"path": "skills/meta-automation-architect/README.md",
"sha256": "facd3bf8e3d5535a36fe02ba22e3ac1550f8dd1c4ae48787a8be133aaaf7b01c"
},
{
"path": "skills/meta-automation-architect/SKILL.md",
"sha256": "83edbdd334318b14d3876188143fa81b3b73a745e3d675803e223d1f8120b901"
},
{
"path": "skills/meta-automation-architect/references/COMMUNICATION_PROTOCOL.md",
"sha256": "7f93cc2cff7fe99daaa77eef877e98d8fb40a6d0eeb4d97b4d509011fc20ed47"
},
{
"path": "skills/meta-automation-architect/examples/EXAMPLE_WEB_APP.md",
"sha256": "d87185939ee0bc53da4b5b1c7165976ed4d6e0dfdafd9376bdadbc0a8d2e31ff"
},
{
"path": "skills/meta-automation-architect/examples/EXAMPLE_PYTHON_CLI.md",
"sha256": "03509adf2af8f02d87b6a920ce02b139f93b0eea194e118ca4d5ea5b76cdce8c"
},
{
"path": "skills/meta-automation-architect/examples/EXAMPLE_FILE_ORGANIZATION.md",
"sha256": "98d34356d541b5492cc160699af09f24221829c1cc643e2c65906d4ce92ea3f5"
},
{
"path": "skills/meta-automation-architect/examples/EXAMPLE_RESEARCH_PAPER.md",
"sha256": "33b7952cbe044b312ece7ac4fe05545c856fd31d3dabf372dcb184e28a66733c"
},
{
"path": "skills/meta-automation-architect/examples/EXAMPLE_EDUCATIONAL_COURSE.md",
"sha256": "959bf591eaba7e99ac7ebd064c9ab95f347f76c4ad739d8dcb6d1cc9f277ce6a"
},
{
"path": "skills/meta-automation-architect/examples/EXAMPLE_PROJECT_MANAGEMENT.md",
"sha256": "5721dd239be29de427e042723d30b7966188df0ceff8c098ee11d56bf89b2543"
},
{
"path": "skills/meta-automation-architect/scripts/agent_reuse.py",
"sha256": "ea446425456c06a723612a75aeef6e1ebe73d99c282d01f414c3eb101d0f5e9d"
},
{
"path": "skills/meta-automation-architect/scripts/generate_agents.py",
"sha256": "976a181e99c8a4d65fbea887f4fbc97c0f9ae7db4fc0709826e509da8c5c2070"
},
{
"path": "skills/meta-automation-architect/scripts/user_preferences.py",
"sha256": "bf836d3c8806b30d43e790a6cdb788ceb7219046caa115e7c629fc7059542dbe"
},
{
"path": "skills/meta-automation-architect/scripts/generate_coordinator.py",
"sha256": "a93b008c68c9a162a78c054d3ca1281f08b7ffdf49f1bf07a2331fcf2eea664a"
},
{
"path": "skills/meta-automation-architect/scripts/metrics_tracker.py",
"sha256": "2048d47c40fbf801316eab8a2f71339da783de7fd007afe4b8e82ac94b5122bd"
},
{
"path": "skills/meta-automation-architect/scripts/collect_project_metrics.py",
"sha256": "3a85fa3d44b7eecc7b7a7a503f33617a226cf1136d8e3f7c89d06827116c4de1"
},
{
"path": "skills/meta-automation-architect/scripts/discover_existing_tools.py",
"sha256": "6d54365b54b455c9e8f6a28f9c99ea37060e4c0c3581e6943f771edb006cf358"
},
{
"path": "skills/meta-automation-architect/scripts/cost_estimator.py",
"sha256": "6a8da02c8a07efd98b03d40391bd958555334256d9dbdcfb53865514d195c017"
},
{
"path": "skills/meta-automation-architect/scripts/template_renderer.py",
"sha256": "824d4b2b59341b8ac452b14a0bf4d4afb14a6d69d8fe24e80b8eceab24bd5f10"
},
{
"path": "skills/meta-automation-architect/scripts/rollback_manager.py",
"sha256": "f59f6291933d9fde51a7dc41430a98512646e1e7650478d1b15aedefe02dc8aa"
},
{
"path": "skills/meta-automation-architect/templates/project-analyzer.md",
"sha256": "f4d2a173068cc33594013471b3179cbeaa86a7cfc5277926d6d8679e6a610e8e"
},
{
"path": "skills/meta-automation-architect/templates/agent-base.md.template",
"sha256": "839c760f50aa809bba3e5ce1756529b7b9c053c323bfb96c3d7bb011508f4003"
},
{
"path": "skills/meta-automation-architect/templates/command-base.md.template",
"sha256": "07246cab6718b847d86c269b511159248814b82d7981cee6b5110959360edf74"
},
{
"path": "skills/meta-automation-architect/templates/skill-base.md.template",
"sha256": "cf647581178b8324dccab265b63795da9ccc6a2728061704783138b2e4471cc7"
}
],
"dirSha256": "b46ad5ff7d6fef0eb50218a3d07e395a91c0c9903defadb844c387e0524ed849"
},
"security": {
"scannedAt": null,
"scannerVersion": null,
"flags": []
}
}


@@ -0,0 +1,125 @@
# Changelog
## [2.0.0] - 2025-11-23 - Major Architecture Overhaul
### 🚀 Major Changes
**Agent-Based Detection (vs Python Pattern Matching)**
- Replaced 727-line `detect_project.py` Python script with intelligent `project-analyzer` agent
- Agent reads key files, understands context, and asks clarifying questions
- More accurate, context-aware project understanding
**Interactive Workflow**
- Added mode selection: Quick (⚡ $0.03, 10 min) → Focused (🔧 $0.10, 20 min) → Comprehensive (🏗️ $0.15, 30 min)
- No more guessing - system asks questions with recommendations
- Simple mode first for new users
**Template-Based Generation**
- Replaced Python string building with `.template` files using `{{variable}}` syntax
- Easier to customize and maintain
- Cleaner separation of structure from logic
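For illustration only (a minimal sketch, not the actual `template_renderer.py` implementation), `{{variable}}` substitution can be done with a single regex pass:
```python
import re

def render_template(template: str, variables: dict) -> str:
    """Replace {{name}} placeholders with values from `variables`."""
    def substitute(match: re.Match) -> str:
        key = match.group(1)
        if key not in variables:
            raise KeyError(f"Missing template variable: {key}")
        return str(variables[key])
    return re.sub(r"\{\{\s*([\w.-]+)\s*\}\}", substitute, template)

# Example: render a tiny agent template stub
print(render_template("# {{agent_name}}\nmodel: {{model}}",
                      {"agent_name": "security-analyzer", "model": "sonnet"}))
```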
**Tool Discovery**
- Automatically detects existing automation (linting, testing, CI/CD, git hooks, etc.)
- Prevents duplication and integration conflicts
- Recommends: fill gaps, enhance existing, or create independent
**Cost Transparency**
- Shows token estimates, time estimates, and costs BEFORE execution
- No surprises - users see exactly what they're getting
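A rough sketch of how such an estimate might be assembled (the token counts and per-token price below are illustrative placeholders, not the figures used by `scripts/cost_estimator.py`):
```python
def estimate_run(agent_count: int,
                 tokens_per_agent: int = 10_000,          # illustrative placeholder
                 usd_per_1k_tokens: float = 0.001) -> dict:  # illustrative placeholder
    """Return the token/cost/time summary shown to the user before execution."""
    total_tokens = agent_count * tokens_per_agent
    return {
        "agents": agent_count,
        "estimated_tokens": total_tokens,
        "estimated_cost_usd": round(total_tokens * usd_per_1k_tokens / 1000, 2),
        "estimated_minutes": 5 + 2 * agent_count,          # illustrative heuristic
    }

print(estimate_run(3))  # roughly the Quick Analysis tier
```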
### 🎯 New Features
**User Preference Learning** (`scripts/user_preferences.py`)
- Tracks mode preferences, agent usage, satisfaction ratings
- Provides personalized recommendations based on history
- Calculates ROI: actual time saved / setup time
**Metrics Tracking** (`scripts/metrics_tracker.py`)
- Records ACTUAL time saved (not just estimates)
- Tracks effectiveness: which automation is actually used
- Proves value with real data
**Rollback Capability** (`scripts/rollback_manager.py`)
- Creates automatic backups before making changes
- Manifest-based tracking of all changes
- One-command rollback to pre-automation state
**Configuration Reuse** (`scripts/agent_reuse.py`)
- Saves successful automation configurations
- Finds similar projects using similarity matching
- Recommends reuse to save 5-10 minutes
### 📁 New Scripts
- `scripts/collect_project_metrics.py` - Simple metrics collection (150 lines vs 586)
- `scripts/template_renderer.py` - Template rendering engine
- `scripts/discover_existing_tools.py` - Existing automation detection
- `scripts/cost_estimator.py` - Cost/time estimation
- `scripts/user_preferences.py` - User learning and recommendations
- `scripts/metrics_tracker.py` - Real usage tracking
- `scripts/rollback_manager.py` - Backup and restore
- `scripts/agent_reuse.py` - Configuration reuse
### 📝 New Templates
- `templates/agent-base.md.template` - Base template for agents
- `templates/skill-base.md.template` - Base template for skills
- `templates/command-base.md.template` - Base template for commands
- `templates/project-analyzer.md` - Intelligent project analyzer agent
### 🗑️ Removed
**Obsolete Code:**
- `scripts/detect_project.py` (727 lines) - Replaced by agent-based detection
**Test Data:**
- `.claude/meta-automation/` directories with test runs
**Obsolete Documentation:**
- `IMPROVEMENT_ROADMAP.md`
- `PHASE1_IMPLEMENTATION.md`
- `PHASE2_IMPLEMENTATION.md`
- `PHASE3_IMPLEMENTATION.md`
- `IMPLEMENTATION_COMPLETE.md`
- `UNIVERSAL_UPGRADE.md`
- `DOCUMENT_FORMATS_EXPANSION.md`
**Old Templates:**
- `templates/example-skill-template.md`
- `templates/example-command-template.md`
- `templates/example-hook-template.py`
### 📊 Impact
- **Lines Removed:** ~3,500 lines
- **Files Removed:** ~18 files
- **Code Quality:** Production-ready, maintainable structure
- **User Experience:** Interactive, transparent, learns from usage
- **Efficiency:** Simple mode first, progressive enhancement
### 🔄 Migration Notes
**Breaking Changes:**
- `detect_project.py` removed - use `project-analyzer` agent instead
- Old template format `{variable}` replaced with `{{variable}}`
**New Workflow:**
1. Skill asks: Quick, Focused, or Comprehensive?
2. Collects project metrics
3. Launches project-analyzer agent
4. Agent analyzes and asks questions
5. Discovers existing tools
6. Shows cost/time estimates
7. Generates automation
8. Tracks usage and learns
---
## [1.0.0] - Initial Release
- Universal project detection (8 project types)
- 37 specialized agents
- Skill, command, and hook generation
- Coordinator architecture
- Communication protocol


@@ -0,0 +1,444 @@
# Meta-Automation Architect - System Overview
A comprehensive skill that analyzes projects and generates tailored automation systems with parallel subagents, custom skills, commands, and hooks.
## Quick Links
- **[README.md](README.md)** - Main usage guide
- **[SKILL.md](SKILL.md)** - Full skill definition
- **[Communication Protocol](references/COMMUNICATION_PROTOCOL.md)** - Agent Communication Protocol (ACP) specification
- **[Examples](examples/)** - Complete examples for different project types
- **[Templates](templates/)** - Templates for generated artifacts
## Directory Structure
```
.claude/skills/meta-automation-architect/
├── SKILL.md # Main skill definition
├── README.md # Usage guide
├── OVERVIEW.md # This file
├── scripts/ # Generation scripts
│ ├── detect_project.py # Project analysis
│ ├── generate_agents.py # Agent generation (11 templates)
│ └── generate_coordinator.py # Coordinator generation
├── templates/ # Output templates
│ ├── example-skill-template.md # Skill template structure
│ ├── example-command-template.md # Command template structure
│ └── example-hook-template.py # Hook template structure
├── examples/ # Complete examples
│ ├── EXAMPLE_WEB_APP.md # Next.js web app automation
│ └── EXAMPLE_PYTHON_CLI.md # Python CLI tool automation
└── references/ # Technical docs
└── COMMUNICATION_PROTOCOL.md # ACP specification
```
## What This Meta-Skill Does
### 1. Interactive Discovery
- Analyzes project structure and tech stack
- Provides data-driven recommendations
- Asks targeted questions with smart defaults
- Never guesses - always validates with user
### 2. Generates Parallel Subagent System
- **Analysis Agents** - Run in parallel to analyze different domains
- **Implementation Agents** - Generate automation artifacts
- **Validation Agents** - Test and validate the system
- **Coordinator Agent** - Orchestrates the entire workflow
### 3. Creates Complete Automation
- **Custom Agents** - Specialized for project patterns
- **Skills** - Auto-invoked capabilities
- **Commands** - Slash commands for workflows
- **Hooks** - Event-driven automation
- **MCP Integrations** - External service connections
### 4. Enables Agent Communication
Uses **Agent Communication Protocol (ACP)** for coordination:
- File-based communication at `.claude/agents/context/{session-id}/`
- Coordination file for status tracking
- Message bus for event transparency
- Standardized reports for findings
- Data artifacts for detailed exchange
## Available Agent Templates
### Analysis Agents (Run in Parallel)
1. **security-analyzer** - Security vulnerabilities, auth flaws, secret exposure
2. **performance-analyzer** - Bottlenecks, inefficient algorithms, optimization opportunities
3. **code-quality-analyzer** - Code complexity, duplication, maintainability
4. **dependency-analyzer** - Outdated packages, vulnerabilities, conflicts
5. **documentation-analyzer** - Documentation completeness and quality
### Implementation Agents
6. **skill-generator** - Creates custom skills from findings
7. **command-generator** - Creates slash commands for workflows
8. **hook-generator** - Creates automation hooks
9. **mcp-configurator** - Configures external integrations
### Validation Agents
10. **integration-tester** - Validates all components work together
11. **documentation-validator** - Ensures comprehensive documentation
## Agent Communication Protocol (ACP)
### Core Concept
Parallel agents with isolated contexts communicate via structured files:
```
.claude/agents/context/{session-id}/
├── coordination.json # Status tracking
├── messages.jsonl # Event log (append-only)
├── reports/ # Agent outputs
│ └── {agent-name}.json
└── data/ # Shared artifacts
```
### Key Features
- **Asynchronous** - Agents don't block each other
- **Discoverable** - Any agent can read any report
- **Persistent** - Survives crashes
- **Transparent** - Complete audit trail
- **Orchestratable** - Coordinator manages dependencies
See [COMMUNICATION_PROTOCOL.md](references/COMMUNICATION_PROTOCOL.md) for full specification.
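As a minimal sketch of the agent side of this protocol (assuming the layout above; the helper names and the `security-analyzer` agent are illustrative, not part of the specification):
```python
import json
import os
from datetime import datetime, timezone
from pathlib import Path

SESSION_DIR = Path(".claude/agents/context") / os.environ["CLAUDE_SESSION_ID"]
AGENT_NAME = "security-analyzer"  # illustrative agent name

def log_event(event_type: str, message: str) -> None:
    """Append one event to the shared, append-only message bus."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "from": AGENT_NAME,
        "type": event_type,
        "message": message,
    }
    with open(SESSION_DIR / "messages.jsonl", "a") as f:
        f.write(json.dumps(event) + "\n")

def write_report(findings: list) -> None:
    """Write this agent's standardized report where other agents can read it."""
    report = {"agent_name": AGENT_NAME, "status": "completed", "findings": findings}
    (SESSION_DIR / "reports").mkdir(parents=True, exist_ok=True)
    with open(SESSION_DIR / "reports" / f"{AGENT_NAME}.json", "w") as f:
        json.dump(report, f, indent=2)

log_event("status", "Starting analysis")
write_report([{"severity": "high", "title": "Example finding"}])
log_event("status", "Report written")
```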
## Usage Patterns
### Basic Invocation
```
"Set up automation for my project"
```
### Specific Project Type
```
"Create automation for my Next.js web app"
"Generate automation for my Python CLI tool"
"Set up automation for my data science workflow"
```
### With Priorities
```
"Focus automation on testing and security"
"Prioritize documentation and code quality"
```
### With Scope
```
"Create comprehensive automation with 8 agents"
"Generate basic automation (3-4 agents)"
```
## Example Output
For a typical web application, generates:
```
.claude/
├── agents/
│ ├── security-analyzer.md
│ ├── performance-analyzer.md
│ ├── code-quality-analyzer.md
│ ├── skill-generator.md
│ ├── command-generator.md
│ └── automation-coordinator.md
├── skills/
│ ├── tdd-workflow/
│ ├── api-doc-generator/
│ └── security-checker/
├── commands/
│ ├── test-fix.md
│ ├── security-scan.md
│ └── perf-check.md
├── hooks/
│ ├── security_validation.py
│ └── run_tests.py
├── settings.json (updated)
├── AUTOMATION_README.md
└── QUICK_REFERENCE.md
```
Plus complete session data:
```
.claude/agents/context/{session-id}/
├── coordination.json
├── messages.jsonl
├── reports/
│ ├── security-analyzer.json
│ ├── performance-analyzer.json
│ └── ...
└── data/
└── ...
```
## Workflow Phases
### Phase 1: Discovery (Interactive)
- Project type detection with confidence scores
- Tech stack analysis
- Team size and workflow questions
- Pain point identification
- Priority setting
- Agent count recommendation
### Phase 2: Setup
- Generate unique session ID
- Create communication directory structure
- Initialize coordination file
- Export environment variables
### Phase 3: Analysis (Parallel)
- Launch analysis agents concurrently
- Each agent analyzes specific domain
- Agents log progress to message bus
- Generate standardized reports
- Update coordination status
### Phase 4: Synthesis
- Coordinator reads all reports
- Aggregates findings
- Identifies patterns
- Makes decisions on what to generate
### Phase 5: Implementation (Parallel)
- Launch implementation agents
- Generate skills, commands, hooks
- Configure MCP servers
- Create artifacts
### Phase 6: Validation (Sequential)
- Test all components
- Validate documentation
- Ensure everything works
### Phase 7: Delivery
- Generate documentation
- Create usage guides
- Report to user
## Key Scripts
### `detect_project.py`
```bash
# Analyzes project to determine:
# - Project type (web app, CLI, data science, etc.)
# - Tech stack (frameworks, languages)
# - Pain points (testing, docs, dependencies)
# - Statistics (file counts, test coverage)
python scripts/detect_project.py
```
### `generate_agents.py`
```bash
# Generates specialized agents with communication protocol
# Available types: security-analyzer, performance-analyzer, etc.
python scripts/generate_agents.py \
--session-id "abc-123" \
--agent-type "security-analyzer" \
--output ".claude/agents/security-analyzer.md"
```
### `generate_coordinator.py`
```bash
# Creates coordinator agent that orchestrates workflow
python scripts/generate_coordinator.py \
--session-id "abc-123" \
--agents "security,performance,quality" \
--output ".claude/agents/coordinator.md"
```
## Benefits
### For Solo Developers
- Automates tedious documentation and testing
- Provides instant code quality feedback
- Reduces context switching
- Focuses on writing code, not boilerplate
### For Small Teams
- Standardizes workflows across team
- Ensures consistent code quality
- Automates code reviews
- Improves onboarding with documentation
### For Large Projects
- Comprehensive analysis across domains
- Identifies technical debt systematically
- Provides actionable recommendations
- Scales with multiple parallel agents
## Customization
All generated artifacts can be customized:
- **Agents** - Edit `.claude/agents/{agent-name}.md`
- **Skills** - Modify `.claude/skills/{skill-name}/SKILL.md`
- **Commands** - Update `.claude/commands/{command-name}.md`
- **Hooks** - Change `.claude/hooks/{hook-name}.py`
- **Settings** - Adjust `.claude/settings.json`
## Monitoring & Debugging
### Watch Agent Progress
```bash
watch -n 2 'cat .claude/agents/context/*/coordination.json | jq ".agents"'
```
### Follow Live Events
```bash
tail -f .claude/agents/context/*/messages.jsonl | jq
```
### Check Reports
```bash
ls .claude/agents/context/*/reports/
cat .claude/agents/context/*/reports/security-analyzer.json | jq
```
### Aggregate Findings
```bash
jq -s 'map(.findings[]) | map(select(.severity == "high"))' \
.claude/agents/context/*/reports/*.json
```
## Best Practices
### When Invoking
1. Let the skill analyze your project first
2. Answer questions honestly
3. Use recommendations when unsure
4. Start with moderate agent count
5. Review generated automation
### After Generation
1. Read AUTOMATION_README.md
2. Try example invocations
3. Customize for your needs
4. Review session logs to understand decisions
5. Iterate based on usage
### For Maintenance
1. Review agent reports periodically
2. Update skills as patterns evolve
3. Add new commands for new workflows
4. Adjust hooks as needed
5. Keep documentation current
## Technical Details
### Requirements
- Python 3.8+
- Claude Code with Task tool support
- Write access to `.claude/` directory
### Dependencies
Scripts use only Python standard library:
- `json` - JSON parsing
- `subprocess` - Git analysis
- `pathlib` - File operations
- `argparse` - CLI parsing
### Performance
- Analysis phase: 3-5 minutes (parallel execution)
- Implementation phase: 2-3 minutes (parallel execution)
- Validation phase: 1-2 minutes (sequential)
- **Total: ~10-15 minutes** for complete automation system
### Scalability
- 2-3 agents: Basic projects, solo developers
- 4-6 agents: Medium projects, small teams
- 7-10 agents: Large projects, comprehensive coverage
- 10+ agents: Enterprise projects, all domains
## Examples
### Web Application (Next.js)
See [EXAMPLE_WEB_APP.md](examples/EXAMPLE_WEB_APP.md)
- 6 agents (4 analysis, 2 implementation)
- 3 skills (TDD workflow, API docs, security checker)
- 3 commands (test-fix, security-scan, perf-check)
- 2 hooks (security validation, run tests)
- GitHub MCP integration
### Python CLI Tool
See [EXAMPLE_PYTHON_CLI.md](examples/EXAMPLE_PYTHON_CLI.md)
- 4 agents (2 analysis, 2 implementation)
- 2 skills (docstring generator, CLI test helper)
- 2 commands (test-cov, release-prep)
- 1 hook (auto-lint Python)
- Focused on documentation and testing
## Related Claude Code Features
This meta-skill leverages:
- **Task Tool** - For parallel agent execution
- **Skills System** - Creates auto-invoked capabilities
- **Commands** - Creates user-invoked shortcuts
- **Hooks** - Enables event-driven automation
- **MCP** - Connects to external services
## Support & Troubleshooting
### Check Session Logs
```bash
# Review what happened
cat .claude/agents/context/{session-id}/messages.jsonl | jq
# Find errors
jq 'select(.type == "error")' .claude/agents/context/{session-id}/messages.jsonl
```
### Agent Failed
```bash
# Check status
jq '.agents | to_entries | map(select(.value.status == "failed"))' \
.claude/agents/context/{session-id}/coordination.json
# Options:
# 1. Retry the agent
# 2. Continue without it
# 3. Manual intervention
```
### Missing Reports
```bash
# List what was generated
ls .claude/agents/context/{session-id}/reports/
# Check if agent completed
jq '.agents["agent-name"]' \
.claude/agents/context/{session-id}/coordination.json
```
## Future Enhancements
Potential additions:
- Language-specific analyzers (Go, Rust, Java)
- CI/CD integration agents
- Database optimization agent
- API design analyzer
- Accessibility checker
- Performance profiling agent
- Machine learning workflow agent
## License & Attribution
Part of the Claude Code ecosystem.
Generated with Meta-Automation Architect skill.
---
**Ready to use?** Simply say: `"Set up automation for my project"`
The meta-skill will guide you through the entire process with smart recommendations and generate a complete, customized automation system!


@@ -0,0 +1,397 @@
# Meta-Automation Architect
A meta-skill that analyzes your project and generates a comprehensive automation system with custom subagents, skills, commands, and hooks.
## What It Creates
The meta-skill generates:
1. **Custom Subagents** - Specialized analysis and implementation agents that run in parallel
2. **Skills** - Auto-invoked capabilities for common patterns in your project
3. **Commands** - Slash commands for frequent workflows
4. **Hooks** - Event-driven automation at lifecycle points
5. **MCP Configurations** - External service integrations
6. **Complete Documentation** - Usage guides and quick references
## How to Use
### Basic Invocation
Simply describe what you want:
```
"Set up automation for my project"
```
Or be more specific:
```
"Create comprehensive automation for my Next.js e-commerce project"
```
```
"Generate a custom automation system for my Python data science workflow"
```
### What Happens
1. **Interactive Discovery** - You'll be asked questions about:
- Project type (with smart detection and recommendations)
- Tech stack and frameworks
- Team size and workflow
- Pain points and priorities
- Desired automation scope
2. **Smart Recommendations** - Every question includes:
- Data-driven analysis of your project
- Confidence scores and reasoning
- Recommended options based on evidence
- Clear trade-offs and explanations
3. **Multi-Agent Generation** - The system creates:
- A coordinator agent that orchestrates everything
- Specialized analysis agents (security, performance, quality, etc.)
- Implementation agents (skill/command/hook generators)
- Validation agents (testing and documentation)
4. **Parallel Execution** - Agents run concurrently and communicate via the Agent Communication Protocol (ACP)
5. **Complete Delivery** - You receive:
- All automation artifacts
- Comprehensive documentation
- Usage examples
- Customization guides
## Example Sessions
### Web Application Project
```
User: "Set up automation for my React TypeScript project"
Meta-Skill:
1. Detects: Web application (95% confidence)
- Found package.json with React dependencies
- Found src/App.tsx and TypeScript config
- Detected testing with Jest and React Testing Library
2. Asks: "What are your main pain points?"
- Recommends: Testing automation (detected low test coverage)
- Recommends: Code quality checks (found 47 bug-fix commits recently)
3. Recommends: 6 agents for comprehensive coverage
- Analysis: Security, Performance, Code Quality, Dependencies
- Implementation: Skill Generator, Command Generator
- Validation: Integration Tester
4. Generates automation system with:
- /test-fix command for TDD workflow
- PostToolUse hook for auto-formatting
- GitHub MCP integration for PR automation
- Custom skills for common React patterns
```
### Python Data Science Project
```
User: "Create automation for my machine learning project"
Meta-Skill:
1. Detects: Data Science (88% confidence)
- Found notebooks/ directory with 15 .ipynb files
- Found requirements.txt with pandas, scikit-learn, tensorflow
- Found data/ and models/ directories
2. Asks: "What would you like to automate first?"
- Recommends: Experiment tracking (detected many model versions)
- Recommends: Documentation generation (missing architecture docs)
- Recommends: Data validation (found data pipeline code)
3. Generates automation system with:
- /run-experiment command for standardized ML runs
- Custom skill for model comparison and analysis
- Hooks for auto-documenting experiments
- MCP integration for MLflow or Weights & Biases
```
## Agent Communication Protocol (ACP)
The generated subagents communicate via a file-based protocol:
### Directory Structure
```
.claude/agents/context/{session-id}/
├── coordination.json # Tracks agent status and dependencies
├── messages.jsonl # Append-only event log
├── reports/ # Standardized agent outputs
│ ├── security-analyzer.json
│ ├── performance-analyzer.json
│ └── ...
└── data/ # Shared data artifacts
├── vulnerabilities.json
├── performance-metrics.json
└── ...
```
### How Agents Communicate
1. **Check Dependencies** - Read `coordination.json` to see which agents have completed
2. **Read Context** - Review reports from other agents
3. **Log Progress** - Write events to `messages.jsonl`
4. **Share Findings** - Create standardized report in `reports/`
5. **Share Data** - Store detailed artifacts in `data/`
6. **Update Status** - Mark completion in `coordination.json`
### Report Format
Every agent writes a standardized JSON report:
```json
{
  "agent_name": "security-analyzer",
  "timestamp": "2025-01-23T10:00:00Z",
  "status": "completed",
  "summary": "Found 5 security vulnerabilities requiring immediate attention",
  "findings": [
    {
      "type": "issue",
      "severity": "high",
      "title": "SQL Injection Risk",
      "description": "User input not sanitized in query builder",
      "location": "src/db/queries.ts:42",
      "recommendation": "Use parameterized queries",
      "example": "db.query('SELECT * FROM users WHERE id = ?', [userId])"
    }
  ],
  "metrics": {
    "items_analyzed": 150,
    "issues_found": 5,
    "time_taken": "2m 34s"
  },
  "recommendations_for_automation": [
    "Skill: SQL injection checker",
    "Hook: Validate queries on PreToolUse",
    "Command: /security-scan for quick checks"
  ]
}
```
## What Gets Generated
### 1. Custom Subagents
Specialized agents tailored to your project:
- **Analysis Agents** - Security, performance, code quality, dependencies, documentation
- **Implementation Agents** - Generate skills, commands, hooks, MCP configs
- **Validation Agents** - Test integration, validate documentation
Each agent:
- Has communication protocol built-in
- Knows how to coordinate with others
- Writes standardized reports
- Suggests automation opportunities
### 2. Skills
Auto-invoked capabilities for your specific patterns:
```
.claude/skills/
├── api-doc-generator/ # Generate API docs from code
├── tdd-enforcer/ # Test-driven development workflow
├── security-checker/ # Quick security validation
└── ...
```
### 3. Commands
Slash commands for frequent tasks:
```
.claude/commands/
├── test-fix.md # Run tests and fix failures
├── deploy-check.md # Pre-deployment validation
├── security-scan.md # Quick security audit
└── ...
```
### 4. Hooks
Event-driven automation:
```
.claude/hooks/
├── format_on_save.py # PostToolUse: Auto-format code
├── security_check.py # PreToolUse: Validate operations
└── run_tests.py # Stop: Execute test suite
```
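A minimal sketch of such a hook, assuming the generated script receives the tool call as JSON on stdin with the touched file's path under `tool_input.file_path` (the formatter choice is illustrative, and this is not the generated `format_on_save.py`):
```python
#!/usr/bin/env python3
"""Sketch of a PostToolUse formatting hook."""
import json
import subprocess
import sys

payload = json.load(sys.stdin)  # assumed: tool-call JSON arrives on stdin
file_path = payload.get("tool_input", {}).get("file_path", "")

# Only act on Python files that were just written or edited
if file_path.endswith(".py"):
    subprocess.run(["black", "--quiet", file_path], check=False)

sys.exit(0)  # never block the original tool call
```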
### 5. Documentation
Complete usage guides:
- `.claude/AUTOMATION_README.md` - Main system documentation
- `.claude/QUICK_REFERENCE.md` - Cheat sheet for all features
- `.claude/agents/context/{session-id}/` - Generation session details
## Monitoring the Generation Process
While agents work, you can monitor progress:
```bash
# Watch agent status
watch -n 2 'cat .claude/agents/context/*/coordination.json | jq ".agents"'
# Follow live events
tail -f .claude/agents/context/*/messages.jsonl | jq
# Check completion
cat .claude/agents/context/*/coordination.json | \
jq '.agents | to_entries | map(select(.value.status == "completed")) | map(.key)'
```
## Customizing Generated Automation
All generated artifacts can be customized:
### Modify Agents
```bash
# Edit agent behavior
vim .claude/agents/security-analyzer.md
# Adjust analysis focus, tools, or process
```
### Customize Skills
```bash
# Update skill behavior
vim .claude/skills/api-doc-generator/SKILL.md
# Modify when skill triggers or what it does
```
### Update Commands
```bash
# Change command behavior
vim .claude/commands/test-fix.md
# Adjust workflow or add arguments
```
### Adjust Hooks
```bash
# Modify hook logic
vim .claude/hooks/format_on_save.py
# Change trigger conditions or actions
```
## Troubleshooting
### Agent Failed
```bash
# Check status
jq '.agents | to_entries | map(select(.value.status == "failed"))' \
.claude/agents/context/{session-id}/coordination.json
# Find error
jq 'select(.from == "failed-agent") | select(.type == "error")' \
.claude/agents/context/{session-id}/messages.jsonl | tail -1
# Options:
# 1. Retry the agent
# 2. Continue without it
# 3. Manual intervention
```
### Missing Reports
```bash
# List generated reports
ls .claude/agents/context/{session-id}/reports/
# Check if agent completed
jq '.agents["agent-name"]' \
.claude/agents/context/{session-id}/coordination.json
```
### Review What Happened
```bash
# Full event log
cat .claude/agents/context/{session-id}/messages.jsonl | jq
# Agent-specific events
jq 'select(.from == "agent-name")' \
.claude/agents/context/{session-id}/messages.jsonl
# Events by type
jq -s 'group_by(.type) | map({type: .[0].type, count: length})' \
.claude/agents/context/{session-id}/messages.jsonl
```
## Advanced Usage
### Specify Agent Count
```
"Create automation with 8 parallel agents for comprehensive coverage"
```
### Target Specific Areas
```
"Focus automation on security and testing"
```
### Prioritize Implementation
```
"Generate skills and commands first, hooks later"
```
### Re-run Analysis
```bash
# Generate new session with different configuration
# Previous sessions remain in .claude/agents/context/
```
## Architecture
The meta-skill uses a multi-phase architecture:
1. **Discovery Phase** - Interactive questioning with recommendations
2. **Setup Phase** - Initialize communication infrastructure
3. **Analysis Phase** - Parallel agent execution for deep analysis
4. **Synthesis Phase** - Coordinator reads all reports and makes decisions
5. **Implementation Phase** - Parallel generation of automation artifacts
6. **Validation Phase** - Sequential testing and documentation checks
7. **Delivery Phase** - Complete documentation and user report
## Benefits
- **Parallel Execution** - Multiple agents work concurrently
- **Isolated Contexts** - Each agent has focused responsibility
- **Communication Protocol** - Agents share findings reliably
- **Data-Driven** - Recommendations based on actual project analysis
- **Comprehensive** - Covers security, performance, quality, testing, docs
- **Customizable** - All generated artifacts can be modified
- **Transparent** - Full event log shows what happened
- **Reusable** - Generated automation works immediately
## Support
For issues or questions:
1. Review agent reports in `reports/`
2. Check message log in `messages.jsonl`
3. Consult individual documentation
4. Review session details in context directory
---
*Generated automation is project-specific but follows Claude Code best practices for skills, commands, hooks, and MCP integration.*


@@ -0,0 +1,656 @@
---
name: meta-automation-architect
description: Use when user wants to set up comprehensive automation for their project. Generates custom subagents, skills, commands, and hooks tailored to project needs. Creates a multi-agent system with robust communication protocol.
allowed-tools: ["Bash", "Read", "Write", "Glob", "Grep", "Task", "AskUserQuestion"]
---
# Meta-Automation Architect
You are the Meta-Automation Architect, responsible for analyzing projects and generating comprehensive, subagent-based automation systems.
## Core Philosophy
**Communication is Everything**. You create systems where:
- Subagents run in parallel with isolated contexts
- Agents communicate via structured file system protocol
- All findings are discoverable and actionable
- Coordination happens through explicit status tracking
- The primary coordinator orchestrates the entire workflow
## Your Mission
1. **Understand** the project through interactive questioning
2. **Analyze** project structure and identify automation opportunities
3. **Design** a custom subagent team with communication protocol
4. **Generate** all automation artifacts (agents, skills, commands, hooks)
5. **Validate** the system works correctly
6. **Document** everything comprehensively
## Execution Workflow
### Phase 0: Choose Automation Mode
**CRITICAL FIRST STEP**: Ask user what level of automation they want.
Use `AskUserQuestion`:
```
"What level of automation would you like?
a) ⚡ Quick Analysis (RECOMMENDED for first time)
- Launch 2-3 smart agents to analyze your project
- See findings in 5-10 minutes
- Then decide if you want full automation
- Cost: ~$0.03, Time: ~10 min
b) 🔧 Focused Automation
- Tell me specific pain points
- I'll create targeted automation
- Cost: ~$0.10, Time: ~20 min
c) 🏗️ Comprehensive System
- Full agent suite, skills, commands, hooks
- Complete automation infrastructure
- Cost: ~$0.15, Time: ~30 min
I recommend (a) to start - you can always expand later."
```
If user chooses **Quick Analysis**, go to "Simple Mode Workflow" below.
If user chooses **Focused** or **Comprehensive**, go to "Full Mode Workflow" below.
---
## Simple Mode Workflow (Quick Analysis)
This is the **default recommended path** for first-time users.
### Phase 1: Intelligent Project Analysis
**Step 1: Collect Basic Metrics**
```bash
# Quick structural scan (no decision-making)
python scripts/collect_project_metrics.py > /tmp/project-metrics.json
```
This just collects data:
- File counts by type
- Directory structure
- Key files found (package.json, .tex, etc.)
- Basic stats (size, depth)
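A minimal sketch of this kind of collector (not the shipped `collect_project_metrics.py`; the key-file list is illustrative):
```python
import json
from collections import Counter
from pathlib import Path

def collect_metrics(root: str = ".") -> dict:
    """Gather raw structural facts only; no interpretation or decision-making."""
    root_path = Path(root)
    files = [p for p in root_path.rglob("*")
             if p.is_file() and ".git" not in p.parts]
    key_files = ["package.json", "pyproject.toml", "requirements.txt", "README.md"]
    return {
        "total_files": len(files),
        "files_by_extension": dict(Counter(p.suffix or "<none>" for p in files)),
        "key_files_found": [k for k in key_files if (root_path / k).exists()],
        "max_depth": max((len(p.relative_to(root_path).parts) for p in files), default=0),
    }

print(json.dumps(collect_metrics(), indent=2))
```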
**Step 2: Launch Project Analyzer Agent**
```bash
# Generate session ID
SESSION_ID=$(python3 -c "import uuid; print(str(uuid.uuid4()))")
# Create minimal context directory
mkdir -p ".claude/agents/context/${SESSION_ID}"
# Launch intelligent project analyzer
```
Use the `Task` tool to launch the project-analyzer agent:
```markdown
Launch "project-analyzer" agent with these instructions:
"Analyze this project intelligently. I've collected basic metrics (see /tmp/project-metrics.json),
but I need you to:
1. Read key files (README, package.json, main files) to UNDERSTAND the project
2. Identify the real project type (not just pattern matching)
3. Find actual pain points (not guessed ones)
4. Check what automation already exists (don't duplicate)
5. Recommend 2-3 high-value automations
6. ASK clarifying questions if needed
Be interactive. Don't guess. Ask the user to clarify anything unclear.
Write your analysis to: .claude/agents/context/${SESSION_ID}/project-analysis.json
Session ID: ${SESSION_ID}
Project root: ${PWD}"
```
**Step 3: Review Analysis with User**
After the project-analyzer agent completes, read its analysis and present to user:
```bash
# Read the analysis
cat ".claude/agents/context/${SESSION_ID}/project-analysis.json"
```
Present findings:
```
The project-analyzer found:
📊 Project Type: [type]
🔧 Tech Stack: [stack]
⚠️ Top Pain Points:
1. [Issue] - Could save [X hours]
2. [Issue] - Could improve [quality]
💡 Recommended Next Steps:
Option A: Run deeper analysis
- Launch [agent-1], [agent-2] to validate findings
- Time: ~10 min
- Then get detailed automation plan
Option B: Go straight to full automation
- Generate complete system based on these findings
- Time: ~30 min
Option C: Stop here
- You have the analysis, implement manually
What would you like to do?
```
**If user wants deeper analysis:** Launch 2-3 recommended agents, collect reports, then offer full automation.
**If user wants full automation now:** Switch to Full Mode Workflow.
---
## Full Mode Workflow (Comprehensive Automation)
This creates the complete multi-agent automation system.
### Phase 1: Interactive Discovery
**CRITICAL**: Never guess. Always ask with intelligent recommendations.
**Step 1: Load Previous Analysis** (if coming from Simple Mode)
```bash
# Check if we already have analysis
if [ -f ".claude/agents/context/${SESSION_ID}/project-analysis.json" ]; then
# Use existing analysis
cat ".claude/agents/context/${SESSION_ID}/project-analysis.json"
else
# Run project-analyzer first (same as Simple Mode)
# [Launch project-analyzer agent]
fi
```
**Step 2: Confirm Key Details**
Based on the intelligent analysis, confirm with user:
1. **Project Type Confirmation**
```
"The analyzer believes this is a [primary type] project with [secondary aspects].
Is this accurate, or should I adjust my understanding?"
```
2. **Pain Points Confirmation**
```
"The top issues identified are:
- [Issue 1] - [impact]
- [Issue 2] - [impact]
Do these match your experience? Any others I should know about?"
```
3. **Automation Scope**
```
"I can create automation for:
⭐ [High-value item 1]
⭐ [High-value item 2]
- [Medium-value item 3]
Should I focus on the starred items, or include everything?"
```
4. **Integration with Existing Tools**
```
"I see you already have [existing tools].
Should I:
a) Focus on gaps (RECOMMENDED)
b) Enhance existing tools
c) Create independent automation"
```
### Phase 2: Initialize Communication Infrastructure
```bash
# Generate session ID
SESSION_ID=$(uuidgen | tr '[:upper:]' '[:lower:]')
# Create communication directory structure
mkdir -p ".claude/agents/context/${SESSION_ID}"/{reports,data}
touch ".claude/agents/context/${SESSION_ID}/messages.jsonl"
# Initialize coordination file
cat > ".claude/agents/context/${SESSION_ID}/coordination.json" << EOF
{
  "session_id": "${SESSION_ID}",
  "started_at": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
  "project_type": "...",
  "agents": {}
}
EOF
# Export for agents to use
export CLAUDE_SESSION_ID="${SESSION_ID}"
```
### Phase 3: Generate Custom Subagent Team
Based on user responses, generate specialized agents.
**Analysis Agents** (Run in parallel):
- Security Analyzer
- Performance Analyzer
- Code Quality Analyzer
- Dependency Analyzer
- Documentation Analyzer
**Implementation Agents** (Run after analysis):
- Skill Generator Agent
- Command Generator Agent
- Hook Generator Agent
- MCP Configuration Agent
**Validation Agents** (Run last):
- Integration Test Agent
- Documentation Validator Agent
**For each agent:**
```bash
# Use template
python scripts/generate_agents.py \
--session-id "${SESSION_ID}" \
--agent-type "security-analyzer" \
--output ".claude/agents/security-analyzer.md"
```
**Template ensures each agent:**
1. Knows how to read context directory
2. Writes standardized reports
3. Logs events to message bus
4. Updates coordination status
5. Shares data via artifacts
### Phase 4: Generate Coordinator Agent
The coordinator orchestrates the entire workflow:
```bash
python scripts/generate_coordinator.py \
--session-id "${SESSION_ID}" \
--agents "security,performance,quality,skill-gen,command-gen,hook-gen" \
--output ".claude/agents/automation-coordinator.md"
```
Coordinator responsibilities:
- Launch agents in correct order (parallel where possible)
- Monitor progress via coordination.json
- Read all reports when complete
- Synthesize findings
- Make final decisions
- Generate artifacts
- Report to user
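For the monitoring step, a minimal sketch of how the coordinator could wait on `coordination.json` (the helper name and polling interval are illustrative):
```python
import json
import time
from pathlib import Path

def wait_for_agents(session_id: str, expected: list, poll_seconds: int = 10) -> dict:
    """Poll coordination.json until every expected agent has completed or failed."""
    coord_path = Path(f".claude/agents/context/{session_id}/coordination.json")
    while True:
        agents = json.loads(coord_path.read_text()).get("agents", {})
        done = {name: info for name, info in agents.items()
                if info.get("status") in ("completed", "failed")}
        if all(name in done for name in expected):
            return done
        time.sleep(poll_seconds)
```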
### Phase 5: Launch Multi-Agent Workflow
**IMPORTANT**: Use Task tool to launch agents in parallel.
```markdown
Launch the automation-coordinator agent:
"Use the automation-coordinator agent to set up the automation system for this ${PROJECT_TYPE} project"
```
The coordinator will:
1. Launch analysis agents in parallel
2. Wait for all to complete
3. Synthesize findings
4. Launch implementation agents
5. Create all automation files
6. Validate the system
7. Generate documentation
### Phase 6: Monitor & Report
While agents work, monitor progress:
```bash
# Watch coordination status
watch -n 2 "cat .claude/agents/context/${SESSION_ID}/coordination.json | jq '.agents'"
# Follow message log
tail -f .claude/agents/context/${SESSION_ID}/messages.jsonl
```
When coordinator finishes, it will have created:
- `.claude/agents/` - Custom agents
- `.claude/commands/` - Custom commands
- `.claude/skills/` - Custom skills
- `.claude/hooks/` - Hook scripts
- `.claude/settings.json` - Updated configuration
- `.claude/AUTOMATION_README.md` - Complete documentation
## Agent Communication Protocol (ACP)
All generated agents follow this protocol:
### Directory Structure
```
.claude/agents/context/{session-id}/
├── coordination.json # Status tracking
├── messages.jsonl # Event log (append-only)
├── reports/ # Agent outputs
│ ├── security-agent.json
│ ├── performance-agent.json
│ └── ...
└── data/ # Shared artifacts
├── vulnerabilities.json
├── performance-metrics.json
└── ...
```
### Reading from Other Agents
```bash
# List available reports
ls .claude/agents/context/${SESSION_ID}/reports/
# Read specific agent's report
cat .claude/agents/context/${SESSION_ID}/reports/security-agent.json
# Read all reports
for report in .claude/agents/context/${SESSION_ID}/reports/*.json; do
echo "=== $(basename $report) ==="
cat "$report" | jq
done
```
### Writing Your Report
```bash
# Create standardized report
cat > ".claude/agents/context/${SESSION_ID}/reports/${AGENT_NAME}.json" << 'EOF'
{
  "agent_name": "your-agent-name",
  "timestamp": "2025-01-23T10:00:00Z",
  "status": "completed",
  "summary": "Brief overview of findings",
  "findings": [
    {
      "type": "issue|recommendation|info",
      "severity": "high|medium|low",
      "title": "Finding title",
      "description": "Detailed description",
      "location": "file:line or component",
      "recommendation": "What to do about it"
    }
  ],
  "data_artifacts": [
    "data/vulnerabilities.json",
    "data/test-coverage.json"
  ],
  "metrics": {
    "key": "value"
  },
  "next_actions": [
    "Suggested follow-up action 1",
    "Suggested follow-up action 2"
  ]
}
EOF
```
### Logging Events
```bash
# Log progress, findings, or status updates
echo "{\"timestamp\":\"$(date -u +%Y-%m-%dT%H:%M:%SZ)\",\"from\":\"${AGENT_NAME}\",\"type\":\"status\",\"message\":\"Starting analysis\"}" >> \
.claude/agents/context/${SESSION_ID}/messages.jsonl
echo "{\"timestamp\":\"$(date -u +%Y-%m-%dT%H:%M:%SZ)\",\"from\":\"${AGENT_NAME}\",\"type\":\"finding\",\"severity\":\"high\",\"data\":{\"issue\":\"SQL injection risk\"}}" >> \
.claude/agents/context/${SESSION_ID}/messages.jsonl
```
### Updating Coordination Status
```python
import json
import os
from pathlib import Path

# Session ID is exported by the coordinator as CLAUDE_SESSION_ID
session_id = os.environ["CLAUDE_SESSION_ID"]
agent_name = "security-agent"  # this agent's own name

# Read current coordination
coord_path = Path(f".claude/agents/context/{session_id}/coordination.json")
with open(coord_path) as f:
    coord = json.load(f)

# Update your status
coord['agents'][agent_name] = {
    "status": "in_progress",  # or "completed", "failed"
    "started_at": "2025-01-23T10:00:00Z",
    "progress": "Analyzing authentication module",
    "reports": ["reports/security-agent.json"]
}

# Write back
with open(coord_path, 'w') as f:
    json.dump(coord, f, indent=2)
```
## Agent Templates
Each generated agent includes:
```markdown
---
name: {agent-name}
description: {specific-purpose}
tools: Read, Write, Bash, Grep, Glob
color: {color}
model: sonnet
---
# {Agent Title}
You are a {specialization} in a multi-agent automation system.
## Communication Setup
**Session ID**: Available as `$CLAUDE_SESSION_ID` environment variable
**Context Directory**: `.claude/agents/context/$CLAUDE_SESSION_ID/`
## Your Mission
{Specific analysis or generation task}
## Before You Start
1. Read coordination file to check dependencies
2. Review relevant reports from other agents
3. Log your startup to message bus
## Process
{Step-by-step instructions}
## Output Requirements
1. Write comprehensive report to `reports/{your-name}.json`
2. Create data artifacts in `data/` if needed
3. Log significant findings to message bus
4. Update coordination status to "completed"
## Report Format
Use the standardized JSON structure...
```
## Recommendations Engine
When asking questions, provide smart recommendations:
### Project Type Detection
```python
indicators = {
    'web_app': {
        'files': ['package.json', 'public/', 'src/App.*'],
        'patterns': ['react', 'vue', 'angular', 'svelte']
    },
    'api': {
        'files': ['routes/', 'controllers/', 'openapi.yaml'],
        'patterns': ['express', 'fastapi', 'gin', 'actix']
    },
    'cli': {
        'files': ['bin/', 'cmd/', 'cli.py'],
        'patterns': ['argparse', 'click', 'cobra', 'clap']
    }
}
```
Show confidence: "Based on finding React components in src/ and package.json with react dependencies, this appears to be a **Web Application** (92% confidence)"
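One way such a confidence figure could be derived from those indicators (a sketch that only checks marker files and ignores the dependency `patterns`; the helper name is illustrative):
```python
from pathlib import Path

def score_project_types(indicators: dict, root: str = ".") -> dict:
    """Rough 0-100 confidence per project type from matching marker files."""
    scores = {}
    for project_type, rules in indicators.items():
        patterns = [p.rstrip("/") for p in rules["files"]]
        hits = sum(1 for pattern in patterns if list(Path(root).glob(pattern)))
        scores[project_type] = round(100 * hits / len(patterns))
    return dict(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))

# The highest-scoring type becomes the recommendation presented to the user.
```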
### Pain Point Analysis
```bash
# Analyze git history
git log --since="1 month ago" --pretty=format:"%s" | grep -i "fix\|bug" | wc -l
# High count suggests testing/quality issues
# Check test coverage
find . -name "*test*" -o -name "*spec*" | wc -l
# Low count suggests need for test automation
# Check documentation
ls README.md docs/ | wc -l
# Missing suggests documentation automation
```
Recommend based on data: "Git history shows 47 bug-fix commits in the last month, suggesting **Testing Automation** should be high priority"
### Agent Count Recommendation
| Project Size | Complexity | Recommended Agents | Rationale |
|--------------|-----------|-------------------|-----------|
| Small (< 10 files) | Low | 2-3 | Basic analysis + implementation |
| Medium (10-100 files) | Moderate | 4-6 | Multi-domain coverage |
| Large (> 100 files) | High | 7-10 | Comprehensive automation |
| Enterprise | Very High | 10+ | Full lifecycle coverage |
## Output & Documentation
After all agents complete, create:
### 1. Automation Summary
```markdown
# Automation System for {Project Name}
## What Was Created
### Custom Agents (7)
- **security-analyzer**: Scans for vulnerabilities
- **performance-analyzer**: Identifies bottlenecks
- [etc...]
### Skills (4)
- **tdd-enforcer**: Test-driven development workflow
- **api-doc-generator**: Auto-generate API docs
- [etc...]
### Commands (6)
- `/test-fix`: Run tests and fix failures
- `/deploy-check`: Pre-deployment validation
- [etc...]
### Hooks (3)
- **PreToolUse**: Validate dangerous operations
- **PostToolUse**: Auto-format and lint
- **Stop**: Run test suite
### MCP Integrations (2)
- **GitHub**: PR automation, issue tracking
- **Database**: Query optimization insights
## How to Use
[Detailed usage instructions]
## Customization Guide
[How to modify and extend]
```
### 2. Quick Start Guide
```markdown
# Quick Start
## Test the System
1. Test an agent:
```bash
"Use the security-analyzer agent to check src/auth.js"
```
2. Try a command:
```bash
/test-fix src/
```
3. Trigger a skill:
```bash
"Analyze the API documentation for completeness"
# (api-doc-generator skill auto-invokes)
```
## Next Steps
1. Review `.claude/agents/` for custom agents
2. Explore `.claude/commands/` for shortcuts
3. Check `.claude/settings.json` for hooks
4. Read individual agent documentation
## Support
- See `.claude/agents/context/{session-id}/` for generation details
- Check messages.jsonl for what happened
- Review agent reports for findings
```
## Success Criteria
The meta-skill succeeds when:
✅ User's questions were answered with data-driven recommendations
✅ Custom subagent team was generated for their specific needs
✅ Agents can communicate via established protocol
✅ All automation artifacts were created
✅ System was validated and documented
✅ User can immediately start using the automation
## Error Handling
If anything fails:
1. Check coordination.json for agent status
2. Review messages.jsonl for errors
3. Read agent reports for details
4. Offer to regenerate specific agents
5. Provide debugging guidance
## Example Invocation
User: "Set up automation for my Next.js e-commerce project"
You:
1. Detect it's a web app (Next.js, TypeScript)
2. Ask about team size, pain points, priorities
3. Recommend 6 agents for comprehensive coverage
4. Generate coordinator + specialized agents
5. Launch multi-agent workflow
6. Deliver complete automation system
7. Provide usage documentation
The user now has a custom automation system where agents work together through the communication protocol!


@@ -0,0 +1,863 @@
# Example: Educational Course Automation
This example shows what the meta-automation-architect generates for an online programming course project.
## Project Context
- **Type**: Educational (primary) + Programming (secondary)
- **Content**: "Learn Python Programming" online course
- **Structure**: 12 modules, 48 lessons, exercises and quizzes
- **Pain Points**: Incomplete assessments, inconsistent formatting, manual progress tracking
- **Priority**: Assessment coverage and content consistency
## User Conversation
```
User: "Set up automation for my Python programming course"
Meta-Skill: Analyzing project...
- Detected: Educational (72% confidence)
- Found lessons/ directory with 48 markdown files
- Found quizzes/ with 8 assessment files
- Found exercises/ with 32 exercise files
- Sequential numbering detected (01_intro.md, 02_variables.md)
- Secondary: Programming (28% confidence)
- Found Python code examples in lessons
- Found exercise solutions in .py files
Meta-Skill: What are your main pain points?
Options:
- Incomplete assessment coverage (RECOMMENDED: 8 quizzes for 48 lessons = 17% coverage)
- Inconsistent lesson formatting
- Manual progress tracking
- Missing prerequisites validation
User: [Selects: Incomplete assessment coverage, Inconsistent formatting]
Meta-Skill: How many agents should we create?
- Few (2-3): Basic analysis
- Moderate (4-6): Multi-domain coverage (RECOMMENDED)
- Many (7+): Comprehensive automation
User: [Selects: Moderate]
Meta-Skill: Generating automation system...
Session ID: ed-course-abc123
```
## Project Structure
```
python-course/
├── modules/
│ ├── 01-introduction/
│ │ ├── 01_what-is-python.md
│ │ ├── 02_setup-environment.md
│ │ ├── 03_first-program.md
│ │ └── 04_variables.md
│ ├── 02-data-types/
│ └── ... (10 more modules)
├── quizzes/
│ ├── module-01-quiz.md
│ └── ... (7 more quizzes)
├── exercises/
│ ├── 01_hello_world.py
│ ├── 02_variables_practice.py
│ └── ... (30 more exercises)
├── solutions/
│ └── ... (exercise solutions)
├── syllabus.md
└── README.md
```
## Generated Automation System
### 1. Custom Subagents (6)
All agents created in `.claude/agents/`:
#### Universal Analysis Agents
**structure-analyzer.md**
- Analyzes course directory organization
- Checks module/lesson hierarchy
- Validates naming conventions
- Ensures consistent structure
**workflow-analyzer.md**
- Identifies repetitive content creation patterns
- Finds bottlenecks in course development
- Maps content creation workflow
- Suggests automation opportunities
#### Educational Domain Agents
**learning-path-analyzer.md**
- Maps lesson dependencies and prerequisites
- Analyzes difficulty progression curve
- Validates learning objective coverage
- Checks skill development sequence
**assessment-analyzer.md**
- Maps quizzes to modules (found only 17% coverage!)
- Analyzes quiz difficulty distribution
- Checks learning objective alignment
- Reviews question quality and variety
#### Implementation Agents
**skill-generator.md**
- Creates custom skills for course automation
- Generated: `quiz-generator`, `lesson-formatter`, `prerequisite-validator`
**command-generator.md**
- Creates commands for common workflows
- Generated: `/generate-quiz`, `/check-progression`, `/export-course`
### 2. Custom Skills (3)
**`.claude/skills/quiz-generator/SKILL.md`**
```markdown
---
name: quiz-generator
description: Automatically generates quiz questions from lesson content
allowed-tools: ["Read", "Write", "Grep", "Glob"]
---
# Quiz Generator
Automatically generates comprehensive quiz questions from lesson content.
## When This Activates
- User requests "generate quiz for module X"
- User says "create assessment for lessons"
- User asks "add quiz questions"
## Process
1. **Read Lesson Content**
- Parse lesson markdown files
- Extract key concepts and terms
- Identify code examples
- Note learning objectives
2. **Generate Question Types**
- Multiple choice (concept understanding)
- Fill-in-the-blank (terminology)
- Code completion (practical skills)
- True/false (misconception checking)
- Short answer (deeper understanding)
3. **Create Quiz File**
- Standard format with frontmatter
- Varied question types
- Progressive difficulty
- Aligned with learning objectives
4. **Validate Quality**
- Check question clarity
- Ensure correct answers
- Verify difficulty appropriateness
- Test completeness
## Example
**Input Lesson** (02_variables.md):
```markdown
# Variables in Python
Variables are containers for storing data values. In Python, you don't need to declare a variable type.
```python
x = 5
name = "Alice"
```
Variables can change type:
```python
x = 5 # int
x = "text" # now string
```
```
**Generated Quiz** (module-01-quiz.md):
```markdown
---
module: 1
lessons_covered: [1, 2, 3, 4]
difficulty: beginner
time_estimate: 10 minutes
---
# Module 1 Quiz: Introduction to Python
## Question 1 (Multiple Choice)
What is a variable in Python?
a) A fixed value that cannot change
b) A container for storing data values
c) A type of function
d) A Python keyword
**Answer:** b
## Question 2 (Fill in the Blank)
In Python, you _____ need to declare a variable's type explicitly.
**Answer:** don't / do not
## Question 3 (Code Completion)
Complete this code to create a variable named `age` with value 25:
```python
___ = ___
```
**Answer:** age = 25
## Question 4 (True/False)
A Python variable can change its type during program execution.
**Answer:** True
## Question 5 (Short Answer)
Explain in one sentence why Python is considered "dynamically typed".
**Sample Answer:** Python determines variable types at runtime rather than requiring explicit type declarations.
```
[... full skill implementation ...]
```
**`.claude/skills/lesson-formatter/SKILL.md`**
```markdown
---
name: lesson-formatter
description: Enforces consistent lesson structure and formatting across all course content
allowed-tools: ["Read", "Write", "Edit", "Grep", "Glob"]
---
# Lesson Formatter
Automatically formats lessons to maintain consistency across the course.
## Standard Lesson Format
Every lesson should have:
1. **Frontmatter** (metadata)
2. **Title** (# heading)
3. **Learning Objectives** (bullet list)
4. **Prerequisites** (if any)
5. **Content Sections** (## headings)
6. **Code Examples** (with syntax highlighting)
7. **Key Takeaways** (bullet list)
8. **Practice Exercise** (link)
9. **Next Lesson** (link)
## Process
1. **Scan Lesson**
- Check for required sections
- Validate frontmatter
- Verify code block formatting
2. **Add Missing Sections**
- Generate learning objectives from content
- Add takeaways summary
- Create exercise links
3. **Format Consistently**
- Standardize heading levels
- Fix code block languages
- Normalize spacing
4. **Validate Links**
- Check prerequisite links
- Verify exercise references
- Validate next lesson
## Example Transformation
**Before:**
```markdown
# Variables
Let's learn about variables.
x = 5
That's a variable.
```
**After:**
```markdown
---
module: 1
lesson: 4
title: Variables in Python
duration: 15 minutes
difficulty: beginner
prerequisites: [03_first-program]
---
# Variables in Python
## Learning Objectives
By the end of this lesson, you will be able to:
- Define what a variable is in Python
- Create variables with different data types
- Understand Python's dynamic typing
- Follow variable naming conventions
## Prerequisites
- Completed: [First Python Program](03_first-program.md)
## What are Variables?
Variables are containers for storing data values. In Python, you don't need to declare a variable type explicitly.
## Creating Variables
```python
x = 5
name = "Alice"
is_student = True
```
## Dynamic Typing
Python is dynamically typed, meaning variables can change type:
```python
x = 5 # int
x = "text" # now string (valid in Python!)
```
## Key Takeaways
- Variables store data values
- No type declaration needed
- Can change type during execution
- Use descriptive names
## Practice
Complete [Exercise 02: Variables Practice](../../exercises/02_variables_practice.py)
## Next
Continue to [Data Types](../02-data-types/01_numbers.md)
```
[... full skill implementation ...]
```
**`.claude/skills/prerequisite-validator/SKILL.md`**
```markdown
---
name: prerequisite-validator
description: Validates that lesson prerequisites form a valid learning path
allowed-tools: ["Read", "Grep", "Glob"]
---
# Prerequisite Validator
Ensures lessons have valid prerequisites and creates a coherent learning path.
## What It Checks
1. **Prerequisite Existence**
- Referenced lessons exist
- Paths are correct
2. **No Circular Dependencies**
- Lesson A → B → A is invalid
- Detects cycles in prerequisite graph
3. **Logical Progression**
- Prerequisites come before lesson
- Difficulty increases appropriately
4. **Completeness**
- All lessons reachable from start
- No orphaned lessons
## Process
1. **Parse Prerequisites**
```python
# Extract from frontmatter
prerequisites: [01_intro, 02_variables]
```
2. **Build Dependency Graph**
```
01_intro
├─ 02_variables
│ ├─ 03_data_types
│ └─ 04_operators
└─ 05_strings
```
3. **Validate**
- Check cycles
- Verify order
- Find orphans
4. **Generate Report**
- Issues found
- Suggested fixes
- Visualization of learning path
## Example Output
```
✅ Prerequisite Validation Complete
📊 Learning Path Statistics:
- Total lessons: 48
- Entry points: 1 (01_what-is-python)
- Maximum depth: 6 levels
- Average prerequisites per lesson: 1.4
❌ Issues Found: 3
1. Circular dependency detected:
15_functions → 16_scope → 17_recursion → 15_functions
Recommendation: Remove prerequisite from 17_recursion
2. Orphaned lesson:
advanced/99_metaprogramming.md
No lesson links to this. Add to module 12.
3. Missing prerequisite:
Lesson 23_list_comprehensions uses concepts from 20_loops
but doesn't list it as prerequisite.
Recommendation: Add 20_loops to prerequisites
📈 Learning Path Diagram saved to: docs/learning-path.mmd
```
[... full skill implementation ...]
```
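The cycle check behind step 3 of the validator is a depth-first search over the prerequisite graph. A minimal sketch, assuming prerequisites have already been parsed from frontmatter into a dict (the lesson names are the hypothetical ones from the report above):

```python
def find_cycle(prereqs: dict) -> list | None:
    """Return one prerequisite cycle as a list of lessons, or None if the graph is acyclic."""
    WHITE, GRAY, BLACK = 0, 1, 2            # unvisited / on current DFS path / finished
    color = {lesson: WHITE for lesson in prereqs}
    path = []

    def dfs(lesson):
        color[lesson] = GRAY
        path.append(lesson)
        for dep in prereqs.get(lesson, []):
            if color.get(dep, WHITE) == GRAY:            # back edge: cycle found
                return path[path.index(dep):] + [dep]
            if color.get(dep, WHITE) == WHITE:
                cycle = dfs(dep)
                if cycle:
                    return cycle
        path.pop()
        color[lesson] = BLACK
        return None

    for lesson in list(prereqs):
        if color[lesson] == WHITE:
            cycle = dfs(lesson)
            if cycle:
                return cycle
    return None

# The circular dependency from the report above (hypothetical lesson names):
graph = {
    "15_functions": ["16_scope"],
    "16_scope": ["17_recursion"],
    "17_recursion": ["15_functions"],
}
print(find_cycle(graph))
# ['15_functions', '16_scope', '17_recursion', '15_functions']
```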
### 3. Custom Commands (3)
**`.claude/commands/generate-quiz.md`**
```markdown
---
description: Generate quiz for a module or lesson
allowed-tools: ["Read", "Write", "Grep", "Glob"]
---
# Generate Quiz
Creates comprehensive quiz from lesson content.
## Usage
```bash
/generate-quiz module-01 # Generate quiz for module 1
/generate-quiz 15_functions # Generate quiz for specific lesson
/generate-quiz --all # Generate missing quizzes for all modules
```
## What It Does
1. Reads lesson content from specified module/lesson
2. Extracts key concepts and learning objectives
3. Generates varied question types
4. Creates quiz file in standard format
5. Updates quiz index
## Example
```bash
/generate-quiz module-02
```
Output:
```
📝 Generating quiz for Module 02: Data Types...
✅ Analyzed 4 lessons:
- 05_numbers.md
- 06_strings.md
- 07_lists.md
- 08_dictionaries.md
✅ Generated 15 questions:
- 6 multiple choice
- 3 fill-in-blank
- 4 code completion
- 2 short answer
✅ Quiz saved to: quizzes/module-02-quiz.md
📊 Estimated completion time: 12 minutes
💡 Difficulty: Beginner
Next: Review and adjust questions in quizzes/module-02-quiz.md
```
[... full command implementation ...]
```
**`.claude/commands/check-progression.md`**
```markdown
---
description: Check learning path and prerequisite validity
allowed-tools: ["Read", "Grep", "Glob"]
---
# Check Progression
Validates course structure and learning path.
## Usage
```bash
/check-progression # Full validation
/check-progression --module 3 # Check specific module
/check-progression --visual # Generate visual diagram
```
## Checks Performed
1. **Structure Validation**
- All modules present
- Lessons numbered correctly
- No gaps in sequence
2. **Prerequisite Validation**
- No circular dependencies
- Prerequisites exist
- Logical progression
3. **Assessment Coverage**
- Quiz per module
- Exercises per lesson
- Coverage percentage
4. **Content Consistency**
- Standard lesson format
- Required sections present
- Code examples formatted
[... full command implementation ...]
```
**`.claude/commands/export-course.md`**
```markdown
---
description: Export course to various formats (PDF, HTML, SCORM)
allowed-tools: ["Read", "Bash", "Write", "Glob"]
---
# Export Course
Exports course content to distributable formats.
## Usage
```bash
/export-course pdf # Export to PDF
/export-course html # Export to static website
/export-course scorm # Export to SCORM package
/export-course --module 3 pdf # Export specific module
```
[... full command implementation ...]
```
### 4. Hooks (1)
**`.claude/hooks/validate_lesson_format.py`**
```python
#!/usr/bin/env python3
"""
Lesson Format Validation Hook
Type: PostToolUse
Validates lesson format after editing
"""
import sys
import json
import re
from pathlib import Path
def main():
    context = json.load(sys.stdin)
    tool = context.get('tool')
    params = context.get('parameters', {})

    # Only trigger on Write/Edit to lesson files
    if tool not in ['Write', 'Edit']:
        sys.exit(0)

    file_path = params.get('file_path', '')
    if '/lessons/' not in file_path or not file_path.endswith('.md'):
        sys.exit(0)

    print(f"📋 Validating lesson format: {Path(file_path).name}", file=sys.stderr)

    try:
        with open(file_path) as f:
            content = f.read()

        issues = []

        # Check frontmatter
        if not content.startswith('---'):
            issues.append("Missing frontmatter")

        # Check required sections
        required_sections = [
            '# ',  # Title
            '## Learning Objectives',
            '## Key Takeaways'
        ]
        for section in required_sections:
            if section not in content:
                issues.append(f"Missing section: {section}")

        # Check that opening code fences declare a language
        # (fences alternate open/close, so only even-indexed matches are openings)
        fences = re.findall(r'^```(\w*)', content, flags=re.MULTILINE)
        if any(lang == '' for lang in fences[::2]):
            issues.append("Code blocks missing language specification")

        # Check for exercise link
        if '../../exercises/' not in content and '/exercises/' not in content:
            issues.append("Missing practice exercise link")

        if issues:
            print("⚠️ Format issues found:", file=sys.stderr)
            for issue in issues:
                print(f" - {issue}", file=sys.stderr)
            print("\n💡 Tip: Use the lesson-formatter skill to auto-fix", file=sys.stderr)
        else:
            print("✅ Lesson format valid", file=sys.stderr)
    except Exception as e:
        print(f"❌ Validation error: {e}", file=sys.stderr)

    sys.exit(0)

if __name__ == '__main__':
    main()
```
### 5. Settings Configuration
**`.claude/settings.json`** (updated)
```json
{
"hooks": {
"PostToolUse": {
"commands": [".claude/hooks/validate_lesson_format.py"]
}
}
}
```
### 6. Documentation
**`.claude/AUTOMATION_README.md`**
```markdown
# Automation System for Python Programming Course
## Generated On
2025-01-23
## Session ID
ed-course-abc123
## What Was Created
### Analysis Phase
- **structure-analyzer**: Course well-organized, but inconsistent lesson numbering in module 5
- **workflow-analyzer**: Identified repetitive quiz creation as major time sink
- **learning-path-analyzer**: Clear progression, but module 8 prerequisites need clarification
- **assessment-analyzer**: LOW COVERAGE - Only 17% (8 quizzes for 48 lessons)
### Generated Artifacts
#### Custom Agents (6)
- **structure-analyzer**: Analyzes course organization
- **workflow-analyzer**: Identifies automation opportunities
- **learning-path-analyzer**: Validates learning progression
- **assessment-analyzer**: Checks quiz coverage
- **skill-generator**: Created 3 custom skills
- **command-generator**: Created 3 slash commands
#### Skills (3)
- **quiz-generator**: Auto-generates quiz questions from lessons (SAVES 20 MIN/QUIZ!)
- **lesson-formatter**: Enforces consistent lesson structure
- **prerequisite-validator**: Validates learning path dependencies
#### Commands (3)
- **/generate-quiz**: Create quiz for module/lesson
- **/check-progression**: Validate course structure
- **/export-course**: Export to PDF/HTML/SCORM
#### Hooks (1)
- **PostToolUse**: Validates lesson format on save
## Impact Assessment
### Time Savings
- Quiz generation: 20 min/quiz × 40 missing quizzes = **13.3 hours saved**
- Lesson formatting: 5 min/lesson × 48 lessons = **4 hours saved**
- Prerequisite validation: 30 min/module × 12 modules = **6 hours saved**
- **Total: ~23 hours saved** + ongoing maintenance
### Quality Improvements
- **100% quiz coverage** (up from 17%)
- **Consistent lesson format** across all content
- **Valid learning path** with no circular dependencies
- **Professional export formats** (PDF, HTML, SCORM)
## Quick Start
1. Generate missing quizzes:
```bash
/generate-quiz --all
```
2. Validate course structure:
```bash
/check-progression --visual
```
3. Format all lessons:
```bash
"Format all lessons in the course"
# lesson-formatter skill auto-invokes
```
4. Create new lesson (format validated automatically):
```bash
# Edit any lesson file
# Hook validates format on save
```
## Course Statistics
- **48 Lessons** across 12 modules
- **8 Quizzes** → Will be 48 quizzes (100% coverage)
- **32 Exercises** with solutions
- **Learning Path Depth:** 6 levels
- **Estimated Course Duration:** 24 hours
## Customization
All generated automation can be customized:
- Edit skills in `.claude/skills/`
- Modify commands in `.claude/commands/`
- Adjust hooks in `.claude/hooks/`
## Session Data
All agent communication is logged in:
`.claude/agents/context/ed-course-abc123/`
Review this directory to understand what automation decisions were made and why.
```
## Agent Communication Example
**`coordination.json`**
```json
{
"session_id": "ed-course-abc123",
"started_at": "2025-01-23T14:00:00Z",
"project_type": "educational",
"secondary_types": ["programming"],
"agents": {
"structure-analyzer": {
"status": "completed",
"completed_at": "2025-01-23T14:03:00Z",
"report_path": "reports/structure-analyzer.json"
},
"learning-path-analyzer": {
"status": "completed",
"completed_at": "2025-01-23T14:05:00Z",
"report_path": "reports/learning-path-analyzer.json"
},
"assessment-analyzer": {
"status": "completed",
"completed_at": "2025-01-23T14:06:00Z",
"report_path": "reports/assessment-analyzer.json"
}
}
}
```
**`reports/assessment-analyzer.json`** (excerpt)
```json
{
"agent_name": "assessment-analyzer",
"summary": "CRITICAL: Only 17% assessment coverage. 40 modules lack quizzes.",
"findings": [
{
"type": "gap",
"severity": "critical",
"title": "Insufficient Quiz Coverage",
"description": "Only 8 quizzes for 48 lessons (17% coverage). Industry standard is 80-100%.",
"location": "quizzes/",
"recommendation": "Generate quizzes for all modules using automated question extraction",
"time_saved_if_automated": "20 minutes per quiz × 40 quizzes = 13.3 hours"
}
],
"recommendations_for_automation": [
"Skill: quiz-generator - Auto-generate from lesson content",
"Command: /generate-quiz --all - Batch generate missing quizzes",
"Hook: Suggest quiz creation when module is complete"
],
"automation_impact": {
"time_saved": "13.3 hours",
"quality_improvement": "83% increase in coverage (17% → 100%)"
}
}
```
## Result
Course creator now has powerful automation:
- ✅ Can generate 40 missing quizzes in minutes (vs. 13+ hours manually)
- ✅ All lessons formatted consistently
- ✅ Learning path validated with no circular dependencies
- ✅ Hook prevents incorrectly formatted lessons
- ✅ Can export to professional formats (PDF, SCORM)
- ✅ **23+ hours saved** in course development and maintenance
## Before vs After
**Before:**
```
# Manual workflow
- Write lesson → 30 min
- Format manually → 5 min
- Create quiz → 20 min
- Validate prerequisites → 5 min
- Total: 60 min per lesson × 48 = 48 hours
```
**After:**
```
# Automated workflow
- Write lesson → 30 min
- Auto-formatted on save → 0 min
- Generate quiz → 1 min (/generate-quiz)
- Auto-validated → 0 min
- Total: 31 min per lesson × 48 = 24.8 hours
SAVINGS: 23+ hours (48% reduction!)
```
**Quality Improvement:**
- Quiz coverage: 17% → 100% (+83%)
- Format consistency: 60% → 100% (+40%)
- Valid learning paths: 90% → 100% (+10%)

View File

@@ -0,0 +1,310 @@
# Example: Media Library Organization Automation
This example shows what the meta-automation-architect generates for a personal media and document library.
## Project Context
- **Type**: File Organization (primary)
- **Content**: 5,000+ files (photos, videos, documents, music)
- **Size**: 250GB across multiple folders
- **Pain Points**: Inconsistent naming, duplicates, missing metadata, hard to find files
- **Priority**: Auto-tagging and duplicate detection
## Project Structure (Before)
```
media-library/
├── Photos/
│ ├── IMG_0001.jpg
│ ├── IMG_0002.jpg
│ ├── vacation.jpg
│ ├── DSC001.jpg (duplicate of IMG_0001.jpg with different name)
│ └── ... (2,000+ images)
├── Videos/
│ ├── VID_001.mp4
│ ├── movie_001.mp4
│ └── ... (500+ videos)
├── Documents/
│ ├── document.pdf
│ ├── report_final_final.docx (duplicate)
│ └── ... (1,500+ docs)
├── Music/
│ └── ... (1,000+ songs)
└── Misc/
└── ... (random files)
```
## User Conversation
```
User: "Help me organize my media library"
Meta-Skill: Analyzing project...
- Detected: File Organization (95% confidence)
- Found 5,247 files across 89 directories
- Mixed naming conventions detected
- Estimated 380 duplicate files (7% of total)
- Only 15% of files have metadata tags
- Date-based folder structure partially used
Meta-Skill: What are your main pain points?
Options:
- Finding duplicate files (RECOMMENDED: ~1.2GB wasted, 7% duplication rate)
- Inconsistent file naming
- Missing metadata/tags (RECOMMENDED: 85% files lack tags)
- Hard to find specific files
User: [Selects: Duplicate files, Missing metadata]
Meta-Skill: Automation opportunities identified:
- Auto-tag files based on content: Saves 2 min/file × 4,460 files = 148 hours
- Find and merge duplicates: Saves 10 min/duplicate × 380 = 63 hours
- Total potential savings: 211 hours + 1.2GB storage
Meta-Skill: Generating automation system...
Session ID: media-org-def456
```
## Generated Automation System
### 1. Custom Subagents (5)
- **structure-analyzer** - Reviews folder organization and hierarchy
- **metadata-analyzer** - Checks tagging coverage and consistency
- **duplication-analyzer** - Finds duplicate and similar files
- **asset-analyzer** - Inventories all media types
- **command-generator** - Creates organization commands
### 2. Custom Skills (3)
**`auto-tagger`** - Automatically tags files based on content
- Images: Extracts EXIF data (date, location, camera)
- Videos: Analyzes metadata, duration, resolution
- Documents: Extracts title, author, creation date
- Music: Reads ID3 tags, adds genre/artist
**Example:**
```
Before: IMG_0523.jpg (no metadata)
After: IMG_0523.jpg
Tags: [vacation, beach, 2024-07-15, hawaii, sunset]
Location: Waikiki Beach, HI
Camera: iPhone 14 Pro
```
**`duplicate-merger`** - Identifies and consolidates duplicates
- Exact duplicates (same hash)
- Similar images (perceptual hash)
- Same content, different formats
- Version variations
**Example:**
```
Found 3 duplicates of vacation_beach.jpg:
- Photos/IMG_0523.jpg (original, highest quality)
- Photos/vacation.jpg (duplicate)
- Backup/beach.jpg (duplicate)
Action: Keep IMG_0523.jpg, create symbolic links for others
Savings: 8.2 MB
```
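Exact-duplicate grouping like the example above amounts to content hashing. A stdlib-only sketch (the library path is an assumption; near-duplicate image detection would additionally need a perceptual-hash library, which is not shown here):

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def find_exact_duplicates(root: str) -> dict:
    """Group files under `root` by SHA-256 hash; any group with >1 file is a duplicate set."""
    by_hash = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file():
            # read_bytes loads the whole file; chunked hashing is better for large videos
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            by_hash[digest].append(path)
    return {h: paths for h, paths in by_hash.items() if len(paths) > 1}

if __name__ == "__main__":
    for paths in find_exact_duplicates("media-library/Photos").values():  # hypothetical path
        keep, *dupes = sorted(paths)   # keep one copy (here simply the first path alphabetically)
        print(f"keep {keep}; duplicates: {[str(d) for d in dupes]}")
```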
**`index-generator`** - Creates searchable catalog
- Generates `library-index.md` with all files
- Categorizes by type, date, tags
- Creates search-friendly format
- Updates automatically
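A first cut of the index generator can be a directory walk grouped by file type and modification month; the real skill would also merge in tags from its metadata database. A sketch with an assumed library path:

```python
from collections import defaultdict
from datetime import datetime
from pathlib import Path

def build_index(root: str, out_file: str = "library-index.md") -> None:
    """Write a simple markdown catalog grouped by file type and modification month."""
    groups = defaultdict(list)
    root_path = Path(root)
    for path in root_path.rglob("*"):
        if path.is_file():
            month = datetime.fromtimestamp(path.stat().st_mtime).strftime("%Y-%m")
            groups[(path.suffix.lower() or "(no extension)", month)].append(path)

    lines = ["# Media Library Index", ""]
    for (suffix, month), files in sorted(groups.items()):
        lines.append(f"## {suffix} files from {month} ({len(files)})")
        lines.extend(f"- {p.relative_to(root_path)}" for p in sorted(files))
        lines.append("")
    Path(out_file).write_text("\n".join(lines), encoding="utf-8")

build_index("media-library")  # hypothetical path
```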
### 3. Custom Commands (3)
**`/organize`**
```bash
/organize # Organize entire library
/organize Photos/ # Organize specific folder
/organize --dry-run # Preview changes
```
Actions:
- Renames files with consistent convention
- Moves to appropriate category folders
- Adds metadata tags
- Detects and merges duplicates
- Generates index
**`/find-duplicates`**
```bash
/find-duplicates # Find all duplicates
/find-duplicates Photos/ # In specific folder
/find-duplicates --auto-merge # Auto-merge safe duplicates
```
**`/generate-index`**
```bash
/generate-index # Full library index
/generate-index --by-date # Chronological index
/generate-index --by-tag # By tag category
```
### 4. Hooks (2)
**`auto_tag_new_files.py`** (PostToolUse)
- Triggers when files are added
- Automatically extracts and adds metadata
- Tags based on content analysis
**`duplicate_alert.py`** (PostToolUse)
- Triggers when files are added
- Checks for duplicates
- Alerts if duplicate detected
### 5. Impact
**Time Savings:**
- Manual tagging: 2 min/file × 4,460 files = **148 hours** → Automated
- Finding duplicates: Manual search would take **20+ hours** → 5 minutes automated
- Creating index: **5 hours** manual → 2 minutes automated
- **Total: 173+ hours saved**
**Storage Savings:**
- Duplicates removed: **1.2GB** recovered
- Optimized organization: **Better disk cache performance**
**Quality Improvements:**
- Metadata coverage: 15% → **100%** (+85%)
- Findability: Manual search → **Instant** via indexed catalog
- Consistency: Mixed naming → **100% standardized**
## Example Results
### Before `/organize`
```
Photos/
├── IMG_0001.jpg (no tags)
├── vacation.jpg (no tags, actually duplicate of IMG_0001)
├── DSC001.JPG (no tags)
└── ... (mixed names, no metadata)
```
### After `/organize`
```
library/
├── photos/
│ ├── 2024/
│ │ ├── 07-july/
│ │ │ ├── 2024-07-15_hawaii-beach_sunset.jpg
│ │ │ │ Tags: [vacation, beach, hawaii, sunset]
│ │ │ │ Location: Waikiki, HI
│ │ │ └── ...
│ │ └── 08-august/
│ └── 2023/
├── videos/
│ ├── 2024/
│ │ └── 2024-07-15_beach-waves_1080p.mp4
│ │ Tags: [vacation, ocean, hawaii]
├── documents/
│ ├── personal/
│ └── work/
├── music/
│ ├── by-artist/
│ └── by-genre/
├── library-index.md (searchable catalog)
└── .metadata/ (tag database)
```
### Generated Index (excerpt)
```markdown
# Media Library Index
Last Updated: 2025-01-23
Total Files: 5,247
Total Size: 248.8 GB
## Recent Additions (Last 7 Days)
- 2024-07-20_family-dinner.jpg [Tags: family, home, dinner]
- 2024-07-19_work-presentation.pptx [Tags: work, slides]
## By Category
### Photos (2,000 files, 45.2 GB)
#### 2024 (523 files)
- **July** (156 files)
- Hawaii Vacation (45 files) - Tags: vacation, beach, hawaii
- Home Events (28 files) - Tags: family, home
- **August** (89 files)
### Videos (500 files, 180.5 GB)
...
### Documents (1,500 files, 18.1 GB)
...
## By Tag
- **vacation** (245 files)
- **family** (432 files)
- **work** (567 files)
...
## Search Tips
- By date: Find "2024-07"
- By location: Find "hawaii" or "beach"
- By type: Find ".jpg" or ".mp4"
```
## Agent Communication
**`reports/duplication-analyzer.json`** (excerpt):
```json
{
"agent_name": "duplication-analyzer",
"summary": "Found 380 duplicate files (7.2% duplication rate) wasting 1.18GB storage",
"findings": [
{
"type": "duplicate_group",
"severity": "medium",
"title": "Vacation Photos Duplicated",
"description": "45 vacation photos have 2-3 copies each with different names",
"storage_wasted": "285 MB",
"recommendation": "Keep highest quality version, create symlinks for others"
}
],
"metrics": {
"total_files_scanned": 5247,
"duplicate_groups": 127,
"total_duplicates": 380,
"storage_wasted_mb": 1210,
"deduplication_potential": "23% size reduction after compression"
},
"automation_impact": {
"time_saved": "63 hours (manual duplicate finding)",
"storage_recovered": "1.2 GB"
}
}
```
## Result
User now has:
- ✅ **Fully organized library** with consistent structure
- ✅ **100% metadata coverage** (up from 15%)
- ✅ **Zero duplicates** (removed 380, recovered 1.2GB)
- ✅ **Searchable index** for instant finding
- ✅ **Auto-tagging** for all new files
- ✅ **173+ hours saved** in organization work
**Before vs After:**
| Metric | Before | After | Improvement |
|--------|--------|-------|-------------|
| Files with metadata | 15% (788) | 100% (5,247) | +85% |
| Duplicate files | 380 (7.2%) | 0 (0%) | -100% |
| Wasted storage | 1.2 GB | 0 GB | 1.2GB recovered |
| Time to find file | 5-10 min | <10 sec | 30-60x faster |
| Manual org time | 173+ hours | 2 hours setup | 98% reduction |
**Ongoing Benefits:**
- New files auto-tagged immediately
- Duplicates detected before saving
- Index updates automatically
- Consistent organization maintained

View File

@@ -0,0 +1,339 @@
# Example: Project Management Automation
This example shows what the meta-automation-architect generates for a software delivery project management workspace.
## Project Context
- **Type**: Project Management (primary) + Programming (secondary)
- **Project**: Q1 2025 Mobile App Development
- **Team**: 8 people (2 devs, 2 designers, 1 PM, 1 QA, 2 stakeholders)
- **Duration**: 3 months (12 sprints)
- **Pain Points**: Manual status reporting, resource over-allocation, missing risk tracking
- **Priority**: Automated reporting and resource management
## Project Structure
```
mobile-app-project/
├── project-plan.md
├── roadmap.md
├── sprints/
│ ├── sprint-01/
│ │ ├── planning.md
│ │ ├── tasks.md
│ │ ├── retrospective.md
│ │ └── burndown-chart.png
│ ├── sprint-02/
│ └── ... (12 sprints)
├── resources/
│ ├── team-capacity.md
│ ├── allocation.xlsx
│ └── skills-matrix.md
├── risks/
│ ├── risk-register.md
│ └── mitigation-plans.md
├── milestones/
│ ├── m1-design-complete.md
│ ├── m2-mvp-ready.md
│ └── m3-launch-ready.md
└── reports/
├── weekly-status/
└── monthly-summary/
```
## User Conversation
```
User: "Set up automation for my mobile app project management"
Meta-Skill: Analyzing project...
- Detected: Project Management (78% confidence)
- Found sprints/ directory with 12 sprint folders
- Found project-plan.md and roadmap.md
- Found resources/allocation.xlsx
- Found risks/risk-register.md
- Secondary: Programming (22% confidence)
- References to code deliverables
Meta-Skill: What are your main pain points?
Options:
- Manual status reporting (RECOMMENDED: 12 weekly reports expected, only 3 exist)
- Resource over-allocation (RECOMMENDED: Excel shows conflicts)
- Missing risk tracking
- Timeline validation
User: [Selects: Manual status reporting, Resource over-allocation]
Meta-Skill: Generating automation system...
Session ID: pm-mobile-app-xyz789
```
## Generated Automation System
### 1. Custom Subagents (6)
#### Universal Agents
- **structure-analyzer** - Project organization and folder hierarchy
- **workflow-analyzer** - Sprint and delivery processes
#### Project Management Domain Agents
- **timeline-analyzer** - Sprint schedules, dependencies, critical paths
- **resource-analyzer** - Team allocation, capacity, conflicts
- **risk-analyzer** - Risk identification and mitigation coverage
#### Implementation Agent
- **command-generator** - Created 3 PM-specific commands
### 2. Custom Skills (3)
**`status-reporter`** - Auto-generates weekly status reports from sprint data
- Reads sprint tasks, completion status, blockers
- Generates formatted report with metrics
- Saves time: **45 min/week** (9 hours over 12 sprints)
**`resource-optimizer`** - Identifies and resolves allocation conflicts
- Parses resource allocation data
- Detects over/under allocation
- Suggests rebalancing
- Saves time: **30 min/sprint** (6 hours total)
**`risk-tracker`** - Maintains risk register and tracks mitigation
- Monitors risks from register
- Tracks mitigation progress
- Alerts on new risks
- Saves time: **20 min/week** (4 hours total)
### 3. Custom Commands (3)
**`/sprint-report`**
```bash
/sprint-report # Current sprint
/sprint-report sprint-05 # Specific sprint
/sprint-report --all # All sprints summary
```
Generates comprehensive sprint report:
- Completed tasks vs. planned
- Velocity and burndown
- Blockers and risks
- Team capacity utilization
- Next sprint forecast
**`/resource-check`**
```bash
/resource-check # Check current allocation
/resource-check --week 5 # Specific week
/resource-check --conflicts # Show only conflicts
```
Analyzes resource allocation:
- Capacity vs. assigned work
- Over-allocated team members
- Under-utilized resources
- Skill match for tasks
- Rebalancing suggestions
**`/timeline-validate`**
```bash
/timeline-validate # Validate full timeline
/timeline-validate --critical # Show critical path
/timeline-validate --risks # Timeline risks
```
Validates project timeline:
- Dependency validation
- Critical path analysis
- Buffer analysis
- Risk to deadlines
- Suggested adjustments
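The critical-path part of `/timeline-validate` is a longest-path computation over the sprint dependency DAG. A toy sketch with made-up sprints and one-week durations, purely to show the shape of the calculation:

```python
from functools import lru_cache

# Hypothetical dependency graph: each sprint lists the sprints that must finish first
depends_on = {
    "sprint-02": ["sprint-01"],
    "sprint-03": ["sprint-02"],
    "sprint-04": ["sprint-02"],
    "sprint-05": ["sprint-03", "sprint-04"],
}
duration_weeks = {f"sprint-{i:02d}": 1 for i in range(1, 6)}

@lru_cache(maxsize=None)
def earliest_finish(sprint: str) -> int:
    """Earliest finish time if a sprint starts only after all of its dependencies end."""
    deps = depends_on.get(sprint, [])
    return duration_weeks[sprint] + (max(earliest_finish(d) for d in deps) if deps else 0)

critical_length = max(earliest_finish(s) for s in duration_weeks)
print(f"Critical path length: {critical_length} weeks")  # 4 weeks for this toy graph
```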
### 4. Hooks (2)
**`update_progress.py`** (PostToolUse)
- Triggers when task markdown files are updated
- Extracts completion status
- Updates sprint progress automatically
- Regenerates burndown chart
**`resource_validation.py`** (PreToolUse)
- Triggers when allocation files are modified
- Validates no over-allocation
- Blocks if conflicts detected
- Suggests fixes before allowing change
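The over-allocation check in this hook reduces to summing assigned hours per person and comparing against capacity. A sketch with hypothetical data (the real hook would parse resources/allocation.xlsx):

```python
def find_overallocations(assignments: list, capacity: dict) -> list:
    """Return a warning for anyone whose assigned hours exceed their weekly capacity."""
    assigned = {}
    for task in assignments:
        assigned[task["owner"]] = assigned.get(task["owner"], 0) + task["hours"]

    warnings = []
    for person, hours in assigned.items():
        cap = capacity.get(person, 40)          # default to a 40-hour week
        if hours > cap:
            warnings.append(f"{person}: {hours}/{cap}h ({hours / cap:.0%}) over-allocated")
    return warnings

# Hypothetical week-6 data mirroring the /resource-check example below
tasks = [
    {"owner": "dev2", "hours": 20}, {"owner": "dev2", "hours": 18},
    {"owner": "dev2", "hours": 14}, {"owner": "dev1", "hours": 40},
]
print(find_overallocations(tasks, {"dev1": 40, "dev2": 40}))
# ['dev2: 52/40h (130%) over-allocated']
```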
### 5. Documentation
**Impact Assessment:**
**Time Savings:**
- Weekly status reports: 45 min/week × 12 weeks = **9 hours**
- Resource planning: 30 min/sprint × 12 sprints = **6 hours**
- Risk tracking: 20 min/week × 12 weeks = **4 hours**
- Timeline validation: 1 hour/month × 3 months = **3 hours**
- **Total: 22 hours saved**
**Quality Improvements:**
- **100% sprint coverage** for status reports (up from 25%)
- **Zero resource conflicts** (automated detection)
- **Real-time risk visibility** (vs. monthly reviews)
- **Validated timeline** with critical path analysis
## Example Usage
### Generating Sprint Report
```bash
/sprint-report sprint-05
```
**Output:**
```markdown
# Sprint 05 Report
**Period:** Jan 15-19, 2025
**Team:** Mobile App Team
## Summary
✅ Sprint Goal: Complete user authentication flow - ACHIEVED
## Metrics
- **Planned Story Points:** 34
- **Completed Story Points:** 32 (94%)
- **Velocity:** 32 pts (trending up from 28)
- **Team Capacity:** 160 hours
- **Actual Hours:** 152 hours (95% utilization)
## Completed Tasks (8)
✅ User login UI (8 pts) - @designer1
✅ OAuth integration (13 pts) - @dev1
✅ Session management (5 pts) - @dev2
✅ Login testing (3 pts) - @qa
[... 4 more ...]
## Carried Over (2)
➡️ Password reset flow (5 pts) - blocked by API
➡️ Biometric auth (8 pts) - iOS issue
## Blockers
⚠️ API authentication endpoint delayed → expected to be resolved next sprint
⚠️ iOS biometric API documentation unclear → dev investigating
## Risks
🔴 HIGH: OAuth provider latency affecting UX → mitigation: add loading states
🟡 MEDIUM: Test coverage only 65% → action: dedicated testing time in sprint 06
## Team Notes
- Great collaboration between design and dev
- Need more QA capacity in next sprint
- Consider adding API buffer time
## Next Sprint Preview
Sprint 06 will focus on:
- Complete password reset (carried over)
- User profile management
- Settings screen
- Target: 30-35 story points
---
Auto-generated by meta-automation-architect
```
### Checking Resource Allocation
```bash
/resource-check
```
**Output:**
```
📊 Resource Allocation Analysis
Current Week: Week 6 (Jan 22-26, 2025)
✅ Properly Allocated (5):
- @dev1: 40/40 hours (100%)
- @designer2: 32/40 hours (80%)
- @qa: 40/40 hours (100%)
- @pm: 35/40 hours (88%)
- @stakeholder1: 10/10 hours (100%)
⚠️ Over-Allocated (2):
- @dev2: 52/40 hours (130% - CRITICAL)
Tasks: Feature A (20h), Feature B (18h), Bug fixes (14h)
Recommendation: Move Bug fixes to @dev1 (8h available)
- @designer1: 45/40 hours (113%)
Tasks: UI mockups (25h), Asset creation (20h)
Recommendation: Defer Asset creation to Week 7
🔵 Under-Utilized (1):
- @stakeholder2: 3/10 hours (30%)
Opportunity: Review sessions, feedback collection
💡 Optimization Suggestions:
1. Redistribute 14h from @dev2 to @dev1
2. Move Asset creation from @designer1 to Week 7
3. Add review tasks for @stakeholder2
Estimated Rebalancing Time: 10 minutes
After optimization: 100% feasible allocation
```
## Agent Communication
**`reports/timeline-analyzer.json`** (excerpt):
```json
{
"agent_name": "timeline-analyzer",
"summary": "Timeline feasible but tight. Critical path includes 4 sprints with zero buffer.",
"findings": [
{
"type": "risk",
"severity": "high",
"title": "Zero Buffer on Critical Path",
"description": "Sprints 4, 7, 9, 11 are on critical path with no schedule buffer",
"recommendation": "Add 10% buffer to each critical sprint or reduce scope",
"time_impact": "Any delay in these sprints directly impacts launch date"
},
{
"type": "opportunity",
"severity": "medium",
"title": "Parallel Workstreams Possible",
"description": "Design and backend development can run in parallel in sprints 2-5",
"recommendation": "Optimize resource allocation to leverage parallelism",
"time_saved_if_optimized": "2 weeks off critical path"
}
],
"automation_impact": {
"time_saved": "3 hours per month in timeline reviews",
"quality_improvement": "Real-time critical path visibility vs. monthly checks"
}
}
```
## Result
**PM now has powerful automation:**
- ✅ Weekly status reports generated in 30 seconds (vs. 45 minutes)
- ✅ Resource conflicts detected instantly (vs. discovered in standup)
- ✅ Risk register automatically maintained
- ✅ Timeline validated continuously
- ✅ **22 hours saved** over project duration
- ✅ **Better decision-making** with real-time data
**Before vs After:**
**Before (Manual):**
- Weekly status report: 45 min
- Resource planning: 30 min/sprint
- Risk review: 20 min/week
- Timeline validation: 1 hour/month
- **Total: ~3 hours/week**
**After (Automated):**
- Status report: `/sprint-report` → 30 seconds
- Resource check: `/resource-check` → 30 seconds
- Risk tracking: Auto-updated from tasks
- Timeline: Auto-validated on every change
- **Total: ~5 minutes/week**
**Impact: 97% time reduction + higher data quality!**

View File

@@ -0,0 +1,670 @@
# Example: Python CLI Tool Automation
This example shows what the meta-automation-architect generates for a Python command-line tool project.
## Project Context
- **Type**: CLI Tool
- **Stack**: Python 3.11, Click framework, SQLite
- **Team Size**: Solo developer
- **Pain Points**: Documentation, testing, dependency management
- **Priority**: User documentation and testing
## User Conversation
```
User: "Create automation for my Python CLI project"
Meta-Skill: Analyzing project...
- Detected: CLI Tool (87% confidence)
- Found cli.py with Click decorators
- Found setup.py with console_scripts entry point
- Found requirements.txt with CLI dependencies
- Found README.md but missing detailed docs
Meta-Skill: What are your main pain points?
Options:
- Low test coverage (RECOMMENDED: 8 test files for 42 modules)
- Missing documentation (RECOMMENDED: No docs/ directory)
- Dependency issues
- Release process
User: [Selects: Low test coverage, Missing documentation]
Meta-Skill: How many agents should we create?
- Few (2-3): Basic analysis (RECOMMENDED for solo projects)
- Moderate (4-6): Multi-domain coverage
- Many (7+): Comprehensive automation
User: [Selects: Few]
Meta-Skill: Generating automation system...
Session ID: b2c3d4e5-f6a7-8901-bcde-f23456789012
```
## Generated Automation System
### 1. Custom Subagents (4)
All agents created in `.claude/agents/`:
#### Analysis Agents (Run in Parallel)
**code-quality-analyzer.md**
- Checks PEP 8 compliance
- Analyzes function complexity
- Identifies code duplication
- Reviews error handling
**testing-analyzer.md**
- Measures test coverage
- Identifies untested CLI commands
- Reviews test patterns
- Checks for integration tests
#### Implementation Agents
**skill-generator.md**
- Creates custom skills for Python patterns
- Generated: `docstring-generator`, `cli-test-helper`
**command-generator.md**
- Creates commands for Python workflows
- Generated: `/test-cov`, `/release-prep`
### 2. Custom Skills (2)
**`.claude/skills/docstring-generator/SKILL.md`**
```markdown
---
name: docstring-generator
description: Generates comprehensive docstrings for Python functions and modules
allowed-tools: ["Read", "Write", "Grep", "Glob"]
---
# Docstring Generator
Automatically generates NumPy-style docstrings for Python code.
## When This Activates
- User asks to "add documentation" to Python files
- User requests "docstrings" for functions
- User says "document this module"
## Process
1. Scan Python files for functions/classes without docstrings
2. Analyze function signatures, type hints, and logic
3. Generate NumPy-style docstrings with:
- Brief description
- Parameters with types
- Returns with type
- Raises (exceptions)
- Examples
4. Insert docstrings into code
5. Validate with pydocstyle
## Example
**Input:**
```python
def parse_config(path, validate=True):
with open(path) as f:
config = json.load(f)
if validate:
validate_config(config)
return config
```
**Output:**
```python
def parse_config(path: str, validate: bool = True) -> dict:
"""
Parse configuration from JSON file.
Parameters
----------
path : str
Path to configuration file
validate : bool, optional
Whether to validate configuration (default: True)
Returns
-------
dict
Parsed configuration dictionary
Raises
------
FileNotFoundError
If configuration file doesn't exist
ValidationError
If configuration is invalid and validate=True
Examples
--------
>>> config = parse_config('config.json')
>>> config['database']['host']
'localhost'
"""
with open(path) as f:
config = json.load(f)
if validate:
validate_config(config)
return config
```
[... detailed implementation ...]
```
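Step 1 of that skill (finding undocumented functions) maps directly onto the standard-library `ast` module. A sketch, assuming the sources live under `src/`:

```python
import ast
from pathlib import Path

def functions_missing_docstrings(src_dir: str = "src") -> list:
    """List 'file:line name' entries for functions and classes without a docstring."""
    missing = []
    for py_file in Path(src_dir).rglob("*.py"):
        tree = ast.parse(py_file.read_text(encoding="utf-8"))
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
                if ast.get_docstring(node) is None:
                    missing.append(f"{py_file}:{node.lineno} {node.name}")
    return missing

if __name__ == "__main__":
    for entry in functions_missing_docstrings():
        print(entry)
```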
**`.claude/skills/cli-test-helper/SKILL.md`**
```markdown
---
name: cli-test-helper
description: Generates tests for Click CLI commands with fixtures
allowed-tools: ["Read", "Write", "Bash", "Grep"]
---
# CLI Test Helper
Automatically generates pytest tests for Click commands.
## When This Activates
- User implements new CLI command
- User requests "test this command"
- User says "add CLI tests"
## Process
1. Identify Click commands in code
2. Extract command parameters, options, flags
3. Generate pytest tests with:
- CliRunner fixtures
- Success case tests
- Error case tests
- Edge case tests
- Output validation
4. Create test fixtures for complex inputs
5. Run tests to verify
## Example
**CLI Command:**
```python
@click.command()
@click.option('--name', required=True, help='User name')
@click.option('--email', help='User email')
@click.option('--verbose', is_flag=True)
def create_user(name, email, verbose):
"""Create a new user."""
user = User(name=name, email=email)
db.save(user)
if verbose:
click.echo(f"Created user: {user}")
else:
click.echo(user.id)
```
**Generated Test:**
```python
import pytest
from click.testing import CliRunner
from myapp.cli import create_user
@pytest.fixture
def runner():
return CliRunner()
def test_create_user_success(runner):
"""Test successful user creation."""
result = runner.invoke(create_user, ['--name', 'Alice'])
assert result.exit_code == 0
assert 'user-' in result.output
def test_create_user_with_email(runner):
"""Test user creation with email."""
result = runner.invoke(create_user, [
'--name', 'Alice',
'--email', 'alice@example.com'
])
assert result.exit_code == 0
def test_create_user_verbose(runner):
"""Test verbose output."""
result = runner.invoke(create_user, [
'--name', 'Alice',
'--verbose'
])
assert result.exit_code == 0
assert 'Created user:' in result.output
def test_create_user_missing_name(runner):
"""Test error when name is missing."""
result = runner.invoke(create_user, [])
assert result.exit_code != 0
assert 'Missing option' in result.output
```
[... detailed implementation ...]
```
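The parameter-extraction step can lean on Click's own metadata: a decorated command exposes its options through `.params`. A self-contained sketch (the toy command below only mirrors the example above):

```python
import click

# Toy version of the create_user command from the example above
@click.command()
@click.option("--name", required=True, help="User name")
@click.option("--verbose", is_flag=True)
def create_user(name, verbose):
    """Create a new user."""
    click.echo(name)

def describe_params(cmd: click.Command) -> list:
    """Summarize a command's parameters so test cases can be derived from them."""
    return [
        {
            "name": p.name,
            "opts": list(p.opts),                      # e.g. ['--name']
            "required": p.required,
            "is_flag": getattr(p, "is_flag", False),   # flags exist only on click.Option
        }
        for p in cmd.params
    ]

for entry in describe_params(create_user):
    print(entry)  # e.g. {'name': 'name', 'opts': ['--name'], 'required': True, 'is_flag': False}
```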
### 3. Custom Commands (2)
**`.claude/commands/test-cov.md`**
```markdown
---
description: Run tests with coverage report
allowed-tools: ["Bash", "Read"]
---
# Test Coverage Command
Runs pytest with coverage and generates detailed report.
## Usage
```bash
/test-cov # Full coverage
/test-cov tests/unit # Specific directory
/test-cov --html # Generate HTML report
```
## What This Does
1. **Run Tests with Coverage**
```bash
pytest --cov=src --cov-report=term-missing $ARGUMENTS
```
2. **Generate Report**
- Terminal: Coverage percentage by module
- Missing lines highlighted
- HTML report (if --html flag)
3. **Check Thresholds**
- Warn if coverage < 80%
- Error if coverage < 60%
4. **Identify Gaps**
- List untested files
- Highlight critical paths without tests
## Example Output
```
---------- coverage: platform darwin, python 3.11.5 -----------
Name                   Stmts   Miss  Cover   Missing
-------------------------------------------------------
src/__init__.py            2      0   100%
src/cli.py               145     23    84%   67-73, 89-92
src/config.py             34      0   100%
src/database.py           89     45    49%   23-67, 78-89
src/utils.py              23      2    91%   45-46
-------------------------------------------------------
TOTAL                    293     70    76%
⚠️ Coverage below 80% target
❌ database.py has only 49% coverage (critical module!)
Suggestions:
- Add integration tests for database.py
- Test error paths in cli.py lines 67-73
```
[... detailed implementation ...]
```
**`.claude/commands/release-prep.md`**
```markdown
---
description: Prepare project for release (version bump, tests, build)
allowed-tools: ["Bash", "Read", "Write"]
---
# Release Preparation
Automates release preparation checklist.
## Usage
```bash
/release-prep # Interactive mode
/release-prep patch # Auto-bump patch version
/release-prep minor # Auto-bump minor version
/release-prep major # Auto-bump major version
```
## Process
1. **Run Full Test Suite**
```bash
pytest -v
```
2. **Check Coverage**
```bash
pytest --cov=src --cov-report=term
```
3. **Lint Code**
```bash
ruff check src/
mypy src/
```
4. **Bump Version**
- Update version in setup.py, __version__.py
- Update CHANGELOG.md
- Create git tag
5. **Build Distributions**
```bash
python -m build
```
6. **Test Installation**
```bash
pip install dist/*.whl
```
7. **Generate Release Notes**
- Extract commits since last tag
- Categorize changes (features, fixes, breaking)
- Write to RELEASE_NOTES.md
8. **Checklist Output**
```
✅ All tests pass (147 passed)
✅ Coverage: 87%
✅ Linting: No issues
✅ Version bumped: 1.2.3 → 1.2.4
✅ CHANGELOG.md updated
✅ Distribution built
✅ Installation tested
✅ Release notes generated
Ready to release! Next steps:
1. Review RELEASE_NOTES.md
2. git push --tags
3. twine upload dist/*
```
[... detailed implementation ...]
```
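Step 7 of `/release-prep` (release notes) can be scripted with plain git commands: take commit subjects since the last tag and bucket them by prefix. A sketch that assumes conventional-commit style subjects (`feat:`, `fix:`):

```python
import subprocess
from collections import defaultdict

def commits_since_last_tag() -> list:
    """Return commit subjects since the most recent tag (or all commits if no tag exists)."""
    try:
        last_tag = subprocess.run(
            ["git", "describe", "--tags", "--abbrev=0"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        rev_range = f"{last_tag}..HEAD"
    except subprocess.CalledProcessError:
        rev_range = "HEAD"
    log = subprocess.run(
        ["git", "log", "--pretty=%s", rev_range],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line for line in log.splitlines() if line.strip()]

def categorize(subjects: list) -> dict:
    """Bucket commit subjects into features, fixes, and other changes."""
    buckets = defaultdict(list)
    for subject in subjects:
        if subject.startswith("feat"):
            buckets["Features"].append(subject)
        elif subject.startswith("fix"):
            buckets["Fixes"].append(subject)
        else:
            buckets["Other"].append(subject)
    return buckets

if __name__ == "__main__":
    for section, items in categorize(commits_since_last_tag()).items():
        print(f"## {section}")
        print("\n".join(f"- {item}" for item in items))
```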
### 4. Hooks (1)
**`.claude/hooks/lint_python.py`**
```python
#!/usr/bin/env python3
"""
Python Linting Hook
Type: PostToolUse
Automatically lints Python files after edits
"""
import sys
import json
import subprocess
from pathlib import Path
def main():
    context = json.load(sys.stdin)
    tool = context.get('tool')
    params = context.get('parameters', {})

    # Only trigger on Write/Edit to Python files
    if tool not in ['Write', 'Edit']:
        sys.exit(0)

    file_path = params.get('file_path', '')
    if not file_path.endswith('.py'):
        sys.exit(0)

    # Run ruff
    print(f"🔍 Linting {file_path}...", file=sys.stderr)
    ruff_result = subprocess.run(
        ['ruff', 'check', '--fix', file_path],
        capture_output=True,
        text=True
    )
    if ruff_result.returncode != 0:
        print("⚠️ Ruff found issues:", file=sys.stderr)
        print(ruff_result.stdout, file=sys.stderr)

    # Run black
    print(f"🎨 Formatting {file_path}...", file=sys.stderr)
    black_result = subprocess.run(
        ['black', '--quiet', file_path],
        capture_output=True
    )
    if black_result.returncode == 0:
        print("✅ Formatted successfully", file=sys.stderr)
    else:
        print("❌ Formatting failed", file=sys.stderr)

    sys.exit(0)

if __name__ == '__main__':
    main()
```
### 5. Settings Configuration
**`.claude/settings.json`** (updated)
```json
{
"hooks": {
"PostToolUse": {
"commands": [".claude/hooks/lint_python.py"]
}
}
}
```
### 6. Documentation
**`.claude/AUTOMATION_README.md`**
```markdown
# Automation System for Python CLI Tool
## Generated On
2025-01-23
## Session ID
b2c3d4e5-f6a7-8901-bcde-f23456789012
## What Was Created
### Analysis Phase
- **code-quality-analyzer**: Identified 8 PEP 8 violations and 3 complex functions
- **testing-analyzer**: Test coverage at 58%, many CLI commands untested
### Generated Artifacts
#### Custom Agents (4)
- **code-quality-analyzer**: Evaluates code quality and PEP 8 compliance
- **testing-analyzer**: Measures test coverage for CLI commands
- **skill-generator**: Created 2 custom skills
- **command-generator**: Created 2 slash commands
#### Skills (2)
- **docstring-generator**: Auto-generates NumPy-style docstrings
- **cli-test-helper**: Generates pytest tests for Click commands
#### Commands (2)
- **/test-cov**: Run tests with coverage report
- **/release-prep**: Prepare project for release
#### Hooks (1)
- **PostToolUse**: Auto-lint and format Python files
## Quick Start
1. Generate docstrings:
```bash
"Add documentation to all functions in src/cli.py"
# docstring-generator skill auto-invokes
```
2. Generate tests:
```bash
"Create tests for the create_user command"
# cli-test-helper skill auto-invokes
```
3. Check coverage:
```bash
/test-cov
```
4. Prepare release:
```bash
/release-prep patch
```
5. Auto-formatting:
- Every time you write/edit a .py file, it's automatically linted and formatted
## Customization
- Edit skills in `.claude/skills/`
- Modify commands in `.claude/commands/`
- Adjust hook in `.claude/hooks/lint_python.py`
- Configure linters (ruff.toml, pyproject.toml)
[... more documentation ...]
```
## Agent Communication
**`coordination.json`**
```json
{
"session_id": "b2c3d4e5-f6a7-8901-bcde-f23456789012",
"started_at": "2025-01-23T14:00:00Z",
"project_type": "cli",
"agents": {
"code-quality-analyzer": {
"status": "completed",
"started_at": "2025-01-23T14:00:00Z",
"completed_at": "2025-01-23T14:03:00Z",
"report_path": "reports/code-quality-analyzer.json"
},
"testing-analyzer": {
"status": "completed",
"started_at": "2025-01-23T14:00:01Z",
"completed_at": "2025-01-23T14:04:00Z",
"report_path": "reports/testing-analyzer.json"
},
"skill-generator": {
"status": "completed",
"started_at": "2025-01-23T14:05:00Z",
"completed_at": "2025-01-23T14:08:00Z",
"report_path": "reports/skill-generator.json"
},
"command-generator": {
"status": "completed",
"started_at": "2025-01-23T14:08:30Z",
"completed_at": "2025-01-23T14:10:00Z",
"report_path": "reports/command-generator.json"
}
}
}
```
**Key Report Excerpts:**
**`reports/testing-analyzer.json`**
```json
{
"agent_name": "testing-analyzer",
"summary": "Test coverage at 58%. Many CLI commands lack tests.",
"findings": [
{
"type": "issue",
"severity": "high",
"title": "Untested CLI Commands",
"description": "5 Click commands have no tests",
"location": "src/cli.py",
"recommendation": "Generate tests for each command"
}
],
"recommendations_for_automation": [
"Skill: Auto-generate CLI tests using CliRunner",
"Command: /test-cov for quick coverage checks"
]
}
```
**`reports/skill-generator.json`**
```json
{
"agent_name": "skill-generator",
"summary": "Generated 2 skills: docstring-generator and cli-test-helper",
"findings": [
{
"type": "info",
"title": "Created docstring-generator skill",
"description": "Automates NumPy-style docstring generation",
"location": ".claude/skills/docstring-generator/"
},
{
"type": "info",
"title": "Created cli-test-helper skill",
"description": "Automates pytest test generation for Click commands",
"location": ".claude/skills/cli-test-helper/"
}
]
}
```
## Result
Solo developer now has efficient automation:
- ✅ 2 skills that handle tedious documentation and testing tasks
- ✅ 2 commands for common workflows (coverage, releases)
- ✅ 1 hook that auto-formats on every save
- ✅ Focuses on writing code, not boilerplate
- ✅ Complete documentation
- ✅ Ready to use immediately
Total generation time: ~10 minutes
## Before vs After
**Before:**
```bash
# Manual workflow
$ vim src/cli.py # Add new command
$ vim tests/test_cli.py # Manually write tests
$ pytest # Run tests
$ ruff check src/ # Manual linting
$ black src/ # Manual formatting
$ pytest --cov # Check coverage
$ vim docs/ # Update docs manually
# ~30-45 minutes per feature
```
**After:**
```bash
# Automated workflow
$ vim src/cli.py # Add new command
# Hook auto-formats and lints immediately ✅
"Create tests for the new command"
# cli-test-helper generates comprehensive tests ✅
/test-cov
# Instant coverage report ✅
"Add docstrings to src/cli.py"
# docstring-generator adds complete documentation ✅
# ~10 minutes per feature (3-4x faster!)
```

View File

@@ -0,0 +1,517 @@
# Example: Research Paper with Presentation and Documentation
This example shows what the meta-automation-architect generates for a research project that combines **LaTeX** (paper), **HTML** (presentation), and **Markdown** (documentation).
## Project Context
- **Type**: Academic Writing (primary) + Research (secondary)
- **Content**:
- LaTeX research paper (25 pages, 6 chapters, 45 references)
- HTML presentation slides (30 slides)
- Markdown documentation and notes (50+ files)
- **Pain Points**: Broken cross-references, unused citations, broken links, inconsistent formatting
- **Priority**: Citation validation and link checking
## Project Structure
```
research-paper/
├── paper/
│ ├── main.tex # Main LaTeX document
│ ├── chapters/
│ │ ├── 01_introduction.tex
│ │ ├── 02_related_work.tex
│ │ ├── 03_methodology.tex
│ │ ├── 04_results.tex
│ │ ├── 05_discussion.tex
│ │ └── 06_conclusion.tex
│ ├── figures/
│ │ ├── figure1.pdf
│ │ ├── figure2.pdf
│ │ └── ... (15 figures)
│ ├── tables/
│ │ ├── table1.tex
│ │ └── ... (5 tables)
│ ├── references.bib # Bibliography (45 entries)
│ └── custom.sty # Custom LaTeX style
├── presentation/
│ ├── index.html # Main presentation
│ ├── slides/
│ │ ├── 01_intro.html
│ │ ├── 02_methods.html
│ │ ├── 03_results.html
│ │ └── ... (30 slides)
│ ├── assets/
│ │ ├── figures/
│ │ ├── css/
│ │ └── js/
│ └── notes/
│ └── speaker_notes.md
├── notes/
│ ├── literature_review.md
│ ├── methodology_notes.md
│ ├── experiment_logs.md
│ ├── analysis/
│ │ ├── [[statistical_tests]].md
│ │ └── [[data_processing]].md
│ └── ideas/
│ └── future_work.md
└── README.md
```
## User Conversation
```
User: "Help me automate my research paper project"
Meta-Skill: Analyzing project...
- Detected: Academic Writing (68% confidence)
- Found 25 .tex files
- Found 1 .bib file with 45 entries
- Found 30 HTML slides
- Found 50+ interconnected markdown files
- Secondary: Research (32% confidence)
- Research methodology documentation
- Experiment logs and analysis
Meta-Skill: What are your main pain points?
Options:
- Broken LaTeX references (RECOMMENDED: Manual checking takes 30 min)
- Unused bibliography entries (RECOMMENDED: 12 unused entries detected)
- Broken HTML/Markdown links (RECOMMENDED: 8 broken links found)
- Inconsistent formatting
User: [Selects: Broken references, Unused citations, Broken links]
Meta-Skill: Automation opportunities identified:
- Validate LaTeX cross-references: Saves 30 min/compile
- Clean bibliography: Saves 20 min/review
- Check all links: Saves 30 min/validation
- Total potential savings: 80+ hours over project lifecycle
Meta-Skill: Generating automation system...
Session ID: research-paper-abc123
```
## Generated Automation System
### 1. Custom Subagents (8)
#### Universal Agents
- **structure-analyzer** - Reviews document organization across all formats
- **workflow-analyzer** - Analyzes compilation and publishing workflow
#### Academic Writing Domain Agents
- **latex-structure-analyzer** - LaTeX document structure and cross-references
- **citation-analyzer** - Bibliography validation and citation usage
- **html-structure-analyzer** - Presentation hierarchy and semantics
- **link-validator** - All links across HTML and Markdown
- **cross-reference-analyzer** - Cross-references across all document types
- **formatting-analyzer** - Formatting consistency
### 2. Custom Skills (4)
**`latex-validator`** - Comprehensive LaTeX validation
**Example:**
```
Running LaTeX validation...
✅ Document Structure
- 6 chapters found
- Proper hierarchy: chapter → section → subsection
- TOC depth: 2 levels
⚠️ Cross-References
- 23/25 \\ref commands valid
- 2 broken references:
* Line 145: \\ref{fig:missing} - target not found
* Line 289: \\ref{sec:old-name} - outdated reference
✅ Figures/Tables
- 15/15 figures referenced
- 5/5 tables referenced
- All captions present
⚠️ Bibliography
- 45 entries in references.bib
- 33 cited in text
- 12 unused entries:
* [Smith2020] - Never cited
* [Jones2019] - Never cited
* ...
📊 Compilation Status
- pdflatex: ✅ Success
- bibtex: ✅ Success
- Output: main.pdf (2.3 MB)
💡 Recommendations:
1. Fix 2 broken \\ref references
2. Remove 12 unused bibliography entries (saves 20% .bib size)
3. Consider adding \\label for Section 4.2 (referenced but not labeled)
```
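The cross-reference portion of this validation boils down to comparing \\label keys against \\ref keys across the .tex sources. A regex-based sketch; treat it as a first pass only, since it ignores labels and references introduced by packages such as cleveref:

```python
import re
from pathlib import Path

def undefined_references(paper_dir: str = "paper") -> list:
    """Report \\ref-style targets that have no matching \\label in any .tex file."""
    labels, refs = set(), []
    for tex in Path(paper_dir).rglob("*.tex"):
        text = tex.read_text(encoding="utf-8")
        labels |= set(re.findall(r"\\label\{([^}]+)\}", text))
        refs += [(tex.name, key) for key in re.findall(r"\\(?:ref|eqref|autoref)\{([^}]+)\}", text)]
    return [f"{name}: reference '{key}' has no matching label" for name, key in refs if key not in labels]

for problem in undefined_references():
    print(problem)
# e.g. 03_methodology.tex: reference 'fig:missing' has no matching label
```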
**`link-checker`** - Validates all links in HTML and Markdown
**Example:**
```
Checking links across project...
📁 HTML Presentation (30 slides)
✅ Internal links: 45/45 valid
✅ External links: 12/12 valid
✅ Asset references: 28/28 valid
📁 Markdown Notes (52 files)
✅ Wiki-style [[links]]: 67/75 valid
⚠️ Broken wiki links (8):
* notes/analysis/stats.md → [[missing_page]]
* notes/ideas/future.md → [[old-experiment]]
* ...
✅ External links: 34/35 valid
⚠️ 1 broken external link:
* http://oldwebsite.com/data → 404 Not Found
📊 Summary
- Total links checked: 185
- Valid: 177 (95.7%)
- Broken: 8 (4.3%)
- Orphaned pages: 2 (no incoming links)
💡 Recommendations:
1. Fix 8 broken wiki links
2. Update 1 broken external link
3. Consider linking to orphaned pages
4. Estimated fix time: 15 minutes
```
**`cross-reference-checker`** - Validates references across all formats
**Example:**
```
Analyzing cross-references...
📄 LaTeX Paper
- \\ref commands: 25 (23 valid, 2 broken)
- \\cite commands: 33 (all valid)
- Figure refs: 15 (all valid)
- Table refs: 5 (all valid)
🖥️ HTML Presentation
- Internal anchor links: 45 (all valid)
- Figure references: 12 (all valid)
- Paper references: 8 links to LaTeX sections
📝 Markdown Notes
- Internal [[links]]: 75 (67 valid, 8 broken)
- Cross-format refs: 5 links to paper sections
🔗 Cross-Format Consistency
✅ Methodology: Consistent between paper and presentation
✅ Results: Figures match in paper and slides
⚠️ Discussion section in paper not reflected in presentation
💡 Consider adding discussion slide
📊 Overall Reference Health: 96.2%
```
**`bibliography-manager`** - Manages .bib entries and citations
**Example:**
```
Analyzing bibliography...
📚 references.bib
- Total entries: 45
- Used in paper: 33 (73%)
- Unused: 12 (27%)
🔍 Citation Analysis
- Most cited: [Brown2021] (8 times)
- Least cited: [Wang2021] (1 time)
- Average citations: 2.4 per entry
⚠️ Issues Found
1. Unused entries (12):
* [Smith2020] - Never cited (can be removed)
* [Jones2019] - Never cited (can be removed)
* ...
2. Missing fields (3):
* [Brown2021] - Missing 'pages' field
* [Davis2022] - Missing 'doi' field
* [Wilson2020] - Inconsistent author format
3. Duplicate entries (2):
* [Lee2019] and [Lee2019b] - Same paper
* [Miller2020] and [Miller2020a] - Same paper
💡 Recommendations:
1. Remove 12 unused entries → 27% smaller .bib file
2. Merge 2 duplicate entries
3. Complete missing fields for better citations
4. Run: /clean-bibliography to apply fixes
```
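The unused-entry detection shown above can be approximated by diffing the .bib keys against \\cite keys in the .tex sources. A sketch that handles \\cite, \\citep and \\citet with comma-separated keys; anything fancier would call for a real parser such as bibtexparser:

```python
import re
from pathlib import Path

def unused_bib_entries(bib_file: str = "paper/references.bib", tex_dir: str = "paper") -> set:
    """Return bibliography keys that are never cited in any .tex file."""
    bib_text = Path(bib_file).read_text(encoding="utf-8")
    bib_keys = set(re.findall(r"@\w+\{([^,\s]+),", bib_text))

    cited = set()
    for tex in Path(tex_dir).rglob("*.tex"):
        groups = re.findall(r"\\cite[tp]?\*?(?:\[[^\]]*\])*\{([^}]+)\}", tex.read_text(encoding="utf-8"))
        for group in groups:
            cited |= {key.strip() for key in group.split(",")}

    return bib_keys - cited

print(sorted(unused_bib_entries()))
# e.g. ['Jones2019', ...] for the 12 entries flagged above
```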
### 3. Custom Commands (4)
**`/validate-latex`**
```bash
/validate-latex # Full validation
/validate-latex --refs-only # Only check references
/validate-latex --fix # Auto-fix common issues
```
**`/check-links`**
```bash
/check-links # Check all links
/check-links presentation/ # Only HTML slides
/check-links notes/ # Only Markdown notes
/check-links --external # Include external links
```
**`/clean-bibliography`**
```bash
/clean-bibliography # Interactive cleanup
/clean-bibliography --remove-unused # Auto-remove unused entries
/clean-bibliography --fix-format # Fix formatting issues
```
**`/build-paper`**
```bash
/build-paper # Compile LaTeX to PDF
/build-paper --watch # Auto-compile on changes
/build-paper --validate # Validate before building
```
### 4. Hooks (3)
**`validate_on_save.py`** (PreToolUse)
- Triggers when .tex or .bib files are saved
- Runs quick validation checks
- Alerts if new issues introduced
**`update_references.py`** (PostToolUse)
- Triggers after editing .tex files
- Updates cross-reference index
- Checks for new broken references
**`link_check_on_md_save.py`** (PostToolUse)
- Triggers when .md files are saved
- Validates wiki-style [[links]]
- Alerts if broken links created
### 5. Impact
**Time Savings:**
- Manual LaTeX validation: 30 min/compile → **2 minutes** automated (93% reduction)
- Bibliography cleanup: 45 min/cleanup → **5 minutes** automated (89% reduction)
- Link checking: 30 min/check → **1 minute** automated (97% reduction)
- Cross-reference validation: 20 min/review → **2 minutes** automated (90% reduction)
- **Total: 125 min → 10 min** (92% time reduction per validation cycle)
Over typical paper lifecycle (50 validation cycles):
- Manual: **104 hours**
- Automated: **8 hours**
- **Savings: 96 hours (92%)**
**Quality Improvements:**
- Cross-reference accuracy: Manual checking → **100% validated** automatically
- Bibliography: 12 unused entries → **0 unused** (27% smaller .bib)
- Link health: 95.7% valid → **100% valid** (8 broken links fixed)
- Compilation success rate: 80% → **100%** (catches issues before compile)
**Concrete Fixes Applied:**
- Fixed 2 broken LaTeX \\ref references
- Removed 12 unused bibliography entries
- Fixed 8 broken Markdown wiki links
- Updated 1 broken external link
- Merged 2 duplicate .bib entries
- Completed 3 missing bibliography fields
## Example Results
### Before Automation
**LaTeX Compilation:**
```
LaTeX Warning: Reference `fig:missing' on page 12 undefined.
LaTeX Warning: Reference `sec:old-name' on page 23 undefined.
Warning: Citation 'Smith2020' unused
Warning: Citation 'Jones2019' unused
... (10 more unused citations)
Output: main.pdf generated with warnings
```
**Manual Link Checking:**
```
Manually clicking through 185 links...
Found broken link after 15 minutes
Found another after 20 minutes
Gave up after 30 minutes, unsure if all checked
```
**Bibliography Management:**
```
45 entries in .bib file
Manually grep for each to see if cited
Takes 45 minutes to identify 12 unused entries
Not sure about duplicates or format issues
```
### After Automation
**`/validate-latex` Output:**
```
✅ Running comprehensive LaTeX validation...
📊 Results (completed in 2 minutes):
✅ Document structure: Valid
⚠️ Cross-references: 2 issues found
✅ Bibliography: All citations valid
⚠️ Unused entries: 12 found
✅ Compilation: Success
🔧 Auto-fix available:
Run: /validate-latex --fix
```
**`/check-links` Output:**
```
✅ Link validation complete (1 minute):
- 185 total links
- 177 valid (95.7%)
- 8 broken (4.3%)
📋 Detailed report: reports/link-validator.json
💡 Run: /check-links --fix to auto-fix wiki links
```
**`/clean-bibliography` Output:**
```
✅ Bibliography analysis complete (5 minutes):
- Removed 12 unused entries
- Merged 2 duplicates
- Fixed 3 incomplete entries
- New size: 33 entries (73% of original)
💾 Backup: references.bib.backup
✅ Updated: references.bib
```
## Agent Communication
**`reports/latex-structure-analyzer.json`** (excerpt):
```json
{
"agent_name": "latex-structure-analyzer",
"summary": "Paper structure is sound. Found 2 broken cross-references and compilation warnings.",
"findings": [
{
"type": "broken_reference",
"severity": "high",
"location": "chapters/03_methodology.tex:145",
"description": "\\ref{fig:missing} references non-existent label",
"recommendation": "Add \\label{fig:missing} to appropriate figure or fix reference"
},
{
"type": "unused_bibliography",
"severity": "medium",
"description": "12 bibliography entries never cited in text",
"entries": ["Smith2020", "Jones2019", ...],
"recommendation": "Remove unused entries or add citations where appropriate"
}
],
"metrics": {
"total_chapters": 6,
"total_sections": 24,
"total_references": 25,
"valid_references": 23,
"broken_references": 2,
"bibliography_entries": 45,
"cited_entries": 33,
"unused_entries": 12
},
"automation_impact": {
"time_saved": "30 min/validation (manual checking)",
"quality_improvement": "100% reference validation vs. manual spot-checking"
}
}
```
**`reports/link-validator.json`** (excerpt):
```json
{
"agent_name": "link-validator",
"summary": "Found 8 broken links across HTML and Markdown. 95.7% link health.",
"findings": [
{
"type": "broken_wiki_link",
"severity": "medium",
"location": "notes/analysis/stats.md:23",
"description": "[[missing_page]] does not exist",
"recommendation": "Create missing_page.md or update link to correct page"
},
{
"type": "broken_external_link",
"severity": "high",
"location": "notes/literature_review.md:156",
"description": "http://oldwebsite.com/data returns 404",
"recommendation": "Update to current URL or mark as archived"
}
],
"metrics": {
"total_links": 185,
"valid_links": 177,
"broken_links": 8,
"link_health_percentage": 95.7,
"html_links": 57,
"markdown_wiki_links": 75,
"markdown_external_links": 35,
"orphaned_pages": 2
},
"automation_impact": {
"time_saved": "30 min/check (manual link clicking)",
"quality_improvement": "100% coverage vs. ~60% manual coverage"
}
}
```
## Result
Researcher now has:
- **100% validated cross-references** - No more broken `\ref` in the paper
- **Clean bibliography** - 27% smaller, no unused entries
- **All links validated** - 8 broken links fixed, 100% health
- **Consistent formatting** - Across LaTeX, HTML, and Markdown
- **Fast compilation** - Issues caught before build
- **96 hours saved** over project lifecycle (92% reduction)
**Before vs After:**
| Metric | Before | After | Improvement |
|--------|--------|-------|-------------|
| Cross-reference validation | Manual, 30 min | 2 min automated | 93% faster |
| Bibliography unused entries | 12 (27%) | 0 (0%) | 100% clean |
| Link health | 92% (manual partial check) | 100% (full automated) | +8% |
| Validation coverage | ~60% (time limited) | 100% (comprehensive) | +40% |
| Time per validation cycle | 125 min | 10 min | 92% reduction |
| Time over project (50 cycles) | 104 hours | 8 hours | 96 hours saved |
**Ongoing Benefits:**
- Every save triggers validation
- New issues caught immediately
- No broken references in final paper
- Bibliography stays clean
- All links remain valid
- Compilation always succeeds
**Publication Quality:**
- Zero broken cross-references in submitted paper
- Professional bibliography (no unused entries)
- All presentation links work during defense
- Documentation fully interconnected
- Reviewers praise technical quality

View File

@@ -0,0 +1,548 @@
# Example: Web Application Automation
This example shows what the meta-automation-architect generates for a typical Next.js web application project.
## Project Context
- **Type**: Web Application
- **Stack**: Next.js, TypeScript, React, TailwindCSS
- **Team Size**: Small (3-5 developers)
- **Pain Points**: Testing coverage, security checks, performance optimization
- **Priority**: Testing automation and code quality
## User Conversation
```
User: "Set up automation for my Next.js project"
Meta-Skill: Analyzing project...
- Detected: Web Application (92% confidence)
- Found package.json with Next.js and React
- Found TypeScript configuration
- Found src/app/ directory (App Router)
- Found TailwindCSS configuration
Meta-Skill: What are your main pain points?
Options:
- Low test coverage (RECOMMENDED: 15 test files for 87 components)
- Frequent bug fixes (RECOMMENDED: 23 bug-fix commits last month)
- Performance issues
- Security concerns
- Documentation gaps (RECOMMENDED: Missing API docs)
User: [Selects: Low test coverage, Security concerns]
Meta-Skill: How many agents should we create?
- Few (2-3): Basic analysis
- Moderate (4-6): Multi-domain coverage (RECOMMENDED)
- Many (7+): Comprehensive automation
User: [Selects: Moderate]
Meta-Skill: Generating automation system...
Session ID: a1b2c3d4-e5f6-7890-abcd-ef1234567890
```
## Generated Automation System
### 1. Custom Subagents (6)
All agents created in `.claude/agents/`:
#### Analysis Agents (Run in Parallel)
**security-analyzer.md**
- Scans for XSS, CSRF, SQL injection vulnerabilities
- Checks authentication/authorization patterns
- Identifies exposed secrets or API keys
- Reviews dependency vulnerabilities
**performance-analyzer.md**
- Analyzes bundle sizes and code splitting
- Identifies slow rendering components
- Checks for N+1 query patterns
- Reviews asset optimization
**code-quality-analyzer.md**
- Measures code complexity
- Detects duplication
- Checks naming conventions
- Reviews error handling patterns
**testing-analyzer.md**
- Measures test coverage
- Identifies untested critical paths
- Reviews test quality and patterns
- Suggests testing strategies
#### Implementation Agents (Run After Analysis)
**skill-generator.md**
- Creates custom skills based on findings
- Generated: `tdd-workflow`, `api-doc-generator`, `security-checker`
**command-generator.md**
- Creates slash commands for common tasks
- Generated: `/test-fix`, `/security-scan`, `/perf-check`
### 2. Custom Skills (3)
**`.claude/skills/tdd-workflow/SKILL.md`**
```markdown
---
name: tdd-workflow
description: Enforces test-driven development by requiring tests before implementation
allowed-tools: ["Read", "Write", "Bash", "Grep"]
---
# TDD Workflow
Automatically invoked when user requests new features or modifications.
## Process
1. Check if tests exist for the target code
2. If no tests, create test file first
3. Write failing test
4. Implement minimal code to pass
5. Refactor while keeping tests green
6. Run full test suite
[... detailed implementation ...]
```
**`.claude/skills/api-doc-generator/SKILL.md`**
```markdown
---
name: api-doc-generator
description: Generates OpenAPI documentation from Next.js API routes
allowed-tools: ["Read", "Write", "Grep", "Glob", "Bash"]
---
# API Documentation Generator
Automatically generates OpenAPI 3.0 documentation from your API routes.
## Process
1. Scan src/app/api/ for route handlers
2. Extract types from TypeScript
3. Generate OpenAPI schemas
4. Create interactive documentation
5. Validate against actual implementation
[... detailed implementation ...]
```
**`.claude/skills/security-checker/SKILL.md`**
```markdown
---
name: security-checker
description: Quick security validation for code changes
allowed-tools: ["Read", "Grep", "Bash"]
---
# Security Checker
Runs security checks on code before commits.
## Checks
- XSS vulnerabilities in JSX
- CSRF protection on mutations
- Exposed secrets or API keys
- Insecure dependencies
- Missing input validation
[... detailed implementation ...]
```
### 3. Custom Commands (3)
**`.claude/commands/test-fix.md`**
```markdown
---
description: Run tests and iteratively fix failures
allowed-tools: ["Bash", "Read", "Write", "Grep"]
---
# Test Fix Command
Runs your test suite and automatically fixes failures.
## Usage
```bash
/test-fix
/test-fix src/components
/test-fix --watch
```
## Process
1. Run test suite
2. Identify failures
3. Analyze failure causes
4. Propose fixes
5. Apply fixes with user approval
6. Re-run tests
7. Repeat until green
[... detailed implementation ...]
```
**`.claude/commands/security-scan.md`**
```markdown
---
description: Quick security audit of project
allowed-tools: ["Bash", "Read", "Grep"]
---
# Security Scan
Fast security check for common vulnerabilities.
## Usage
```bash
/security-scan
/security-scan src/
/security-scan --full
```
[... detailed implementation ...]
```
**`.claude/commands/perf-check.md`**
```markdown
---
description: Analyze performance and bundle size
allowed-tools: ["Bash", "Read", "Glob"]
---
# Performance Check
Analyzes bundle size, rendering performance, and optimization opportunities.
[... detailed implementation ...]
```
### 4. Hooks (2)
**`.claude/hooks/security_validation.py`**
```python
#!/usr/bin/env python3
"""
Security Validation Hook
Type: PreToolUse
Blocks writes to sensitive files and validates security patterns
"""
import sys
import json
def main():
context = json.load(sys.stdin)
tool = context.get('tool')
params = context.get('parameters', {})
# Block writes to sensitive files
if tool in ['Write', 'Edit']:
file_path = params.get('file_path', '')
if file_path.endswith('.env') or 'secrets' in file_path.lower():
print("❌ Blocked: Writing to sensitive file", file=sys.stderr)
sys.exit(2) # Block operation
# Validate API route security
if tool == 'Write' and '/api/' in params.get('file_path', ''):
content = params.get('content', '')
if 'export async function POST' in content:
if 'csrf' not in content.lower():
print("⚠️ Warning: API route missing CSRF protection", file=sys.stderr)
sys.exit(0)
if __name__ == '__main__':
main()
```
**`.claude/hooks/run_tests.py`**
```python
#!/usr/bin/env python3
"""
Run Tests Hook
Type: Stop
Runs test suite when session ends
"""
import sys
import json
import subprocess
def main():
context = json.load(sys.stdin)
# Run tests
print("🧪 Running test suite...", file=sys.stderr)
result = subprocess.run(['npm', 'test', '--', '--run'],
capture_output=True, text=True)
if result.returncode != 0:
print("❌ Tests failed:", file=sys.stderr)
print(result.stdout, file=sys.stderr)
else:
print("✅ All tests passed", file=sys.stderr)
sys.exit(0)
if __name__ == '__main__':
main()
```
### 5. Settings Configuration
**`.claude/settings.json`** (updated)
```json
{
"hooks": {
"PreToolUse": {
"commands": [".claude/hooks/security_validation.py"]
},
"Stop": {
"commands": [".claude/hooks/run_tests.py"]
}
},
"mcpServers": {
"github": {
"command": "mcp-github",
"env": {
"GITHUB_TOKEN": "${GITHUB_TOKEN}"
}
}
}
}
```
### 6. Documentation
**`.claude/AUTOMATION_README.md`**
```markdown
# Automation System for Next.js Project
## Generated On
2025-01-23
## Session ID
a1b2c3d4-e5f6-7890-abcd-ef1234567890
## What Was Created
### Analysis Phase
- **security-analyzer**: Found 3 high-severity issues requiring attention
- **performance-analyzer**: Identified 5 optimization opportunities
- **code-quality-analyzer**: Detected 12 code smells and complexity issues
- **testing-analyzer**: Test coverage at 45%, recommended 80%+ for critical paths
### Generated Artifacts
#### Custom Agents (6)
- **security-analyzer**: Scans for vulnerabilities and security issues
- **performance-analyzer**: Identifies performance bottlenecks
- **code-quality-analyzer**: Evaluates code maintainability
- **testing-analyzer**: Measures and improves test coverage
- **skill-generator**: Created 3 custom skills
- **command-generator**: Created 3 slash commands
#### Skills (3)
- **tdd-workflow**: Enforces test-driven development workflow
- **api-doc-generator**: Auto-generates API documentation from routes
- **security-checker**: Quick security validation for code changes
#### Commands (3)
- **/test-fix**: Run tests and fix failures iteratively
- **/security-scan**: Quick security audit
- **/perf-check**: Analyze performance and bundle size
#### Hooks (2)
- **PreToolUse**: Security validation (blocks sensitive file writes)
- **Stop**: Run test suite on session end
#### MCP Servers (1)
- **github**: PR automation and issue tracking
## Quick Start
1. Test an agent:
```bash
"Use the security-analyzer agent on src/app"
```
2. Try a skill:
```bash
"Implement user authentication feature"
# tdd-workflow skill auto-invokes
```
3. Execute a command:
```bash
/test-fix src/components
```
4. Hooks automatically run:
- Security validation on file writes
- Tests run when you end the session
## Customization
All generated automation can be customized:
- Edit agents in `.claude/agents/`
- Modify skills in `.claude/skills/`
- Update commands in `.claude/commands/`
- Adjust hooks in `.claude/hooks/`
[... more documentation ...]
```
**`.claude/QUICK_REFERENCE.md`**
```markdown
# Quick Reference
## Available Agents
- security-analyzer
- performance-analyzer
- code-quality-analyzer
- testing-analyzer
- skill-generator
- command-generator
## Available Commands
- /test-fix
- /security-scan
- /perf-check
## Available Skills
- tdd-workflow
- api-doc-generator
- security-checker
## Hooks Configured
- PreToolUse: security_validation.py
- Stop: run_tests.py
## MCP Servers
- github
## Usage Examples
### Use an agent:
"Use the security-analyzer agent to check src/app/api"
### Invoke a skill:
"Implement new feature X" (tdd-workflow auto-invokes)
"Generate API docs" (api-doc-generator auto-invokes)
### Execute command:
/test-fix src/
/security-scan
/perf-check
### Check hooks:
cat .claude/settings.json | jq '.hooks'
## Session Data
All agent communication is logged in:
`.claude/agents/context/a1b2c3d4-e5f6-7890-abcd-ef1234567890/`
Review this directory to understand what happened during generation.
```
## Agent Communication Example
During generation, agents communicated via ACP:
**`coordination.json`**
```json
{
"session_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
"started_at": "2025-01-23T10:00:00Z",
"project_type": "web_app",
"agents": {
"security-analyzer": {
"status": "completed",
"started_at": "2025-01-23T10:00:00Z",
"completed_at": "2025-01-23T10:05:00Z",
"report_path": "reports/security-analyzer.json"
},
"performance-analyzer": {
"status": "completed",
"started_at": "2025-01-23T10:00:01Z",
"completed_at": "2025-01-23T10:06:00Z",
"report_path": "reports/performance-analyzer.json"
},
"testing-analyzer": {
"status": "completed",
"started_at": "2025-01-23T10:00:02Z",
"completed_at": "2025-01-23T10:07:00Z",
"report_path": "reports/testing-analyzer.json"
},
"skill-generator": {
"status": "completed",
"started_at": "2025-01-23T10:08:00Z",
"completed_at": "2025-01-23T10:12:00Z",
"report_path": "reports/skill-generator.json"
}
}
}
```
**`messages.jsonl`** (excerpt)
```json
{"timestamp":"2025-01-23T10:00:00Z","from":"security-analyzer","type":"status","message":"Starting security analysis"}
{"timestamp":"2025-01-23T10:02:15Z","from":"security-analyzer","type":"finding","severity":"high","data":{"title":"Missing CSRF protection","location":"src/app/api/users/route.ts"}}
{"timestamp":"2025-01-23T10:05:00Z","from":"security-analyzer","type":"completed","message":"Found 3 high-severity issues"}
{"timestamp":"2025-01-23T10:08:00Z","from":"skill-generator","type":"status","message":"Reading analysis reports"}
{"timestamp":"2025-01-23T10:09:30Z","from":"skill-generator","type":"status","message":"Generating TDD workflow skill"}
```
**`reports/security-analyzer.json`** (excerpt)
```json
{
"agent_name": "security-analyzer",
"timestamp": "2025-01-23T10:05:00Z",
"status": "completed",
"summary": "Found 3 high-severity security issues requiring immediate attention",
"findings": [
{
"type": "issue",
"severity": "high",
"title": "Missing CSRF Protection",
"description": "API routes lack CSRF token validation",
"location": "src/app/api/users/route.ts:12",
"recommendation": "Add CSRF token validation middleware",
"example": "import { validateCsrf } from '@/lib/csrf';"
}
],
"recommendations_for_automation": [
"Skill: CSRF validator that checks all API routes",
"Hook: PreToolUse hook to validate new API routes",
"Command: /security-scan for quick checks"
]
}
```
## Result
User now has a complete automation system:
- ✅ 6 specialized agents that can be run on-demand
- ✅ 3 skills that auto-invoke for common patterns
- ✅ 3 commands for quick workflows
- ✅ 2 hooks for automatic validation
- ✅ Complete documentation
- ✅ All agents communicated via ACP protocol
- ✅ Ready to use immediately
Total generation time: ~15 minutes (mostly analysis phase)

View File

@@ -0,0 +1,628 @@
# Agent Communication Protocol (ACP)
The Agent Communication Protocol enables parallel subagents with isolated contexts to coordinate and share information through a structured file-based system.
## Core Principles
1. **Asynchronous** - Agents don't block each other
2. **Discoverable** - Any agent can read any report
3. **Persistent** - Survives agent crashes and restarts
4. **Transparent** - Complete event log for debugging
5. **Atomic** - File operations are append-only or replace-whole
6. **Orchestratable** - Coordinator manages dependencies
## Directory Structure
```
.claude/agents/context/{session-id}/
├── coordination.json # Status tracking and dependencies
├── messages.jsonl # Append-only event log
├── reports/ # Standardized agent outputs
│ ├── {agent-name}.json
│ └── ...
└── data/ # Shared data artifacts
├── {artifact-name}.json
└── ...
```
### Session ID
Each automation generation gets a unique session ID (UUID):
```bash
SESSION_ID=$(uuidgen | tr '[:upper:]' '[:lower:]')
export CLAUDE_SESSION_ID="${SESSION_ID}"
```
All agents receive this session ID and use it to locate the context directory.
## Communication Components
### 1. Coordination File (`coordination.json`)
Central status tracking for all agents.
**Structure:**
```json
{
"session_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
"started_at": "2025-01-23T10:00:00Z",
"project_type": "web_app",
"project_path": "/path/to/project",
"agents": {
"security-analyzer": {
"status": "completed",
"started_at": "2025-01-23T10:00:00Z",
"completed_at": "2025-01-23T10:05:00Z",
"report_path": "reports/security-analyzer.json",
"dependencies": [],
"progress": "Analysis complete"
},
"performance-analyzer": {
"status": "in_progress",
"started_at": "2025-01-23T10:00:00Z",
"progress": "Analyzing database queries...",
"dependencies": []
},
"skill-generator": {
"status": "waiting",
"dependencies": ["security-analyzer", "performance-analyzer", "code-quality-analyzer"],
"reason": "Waiting for analysis agents to complete"
}
}
}
```
**Agent Status Values:**
- `waiting` - Not started, may have dependencies
- `in_progress` - Currently executing
- `completed` - Finished successfully
- `failed` - Encountered error
**Reading Coordination:**
```bash
# Check all agent statuses
jq '.agents' .claude/agents/context/${SESSION_ID}/coordination.json
# Check specific agent
jq '.agents["security-analyzer"]' .claude/agents/context/${SESSION_ID}/coordination.json
# List completed agents
jq '.agents | to_entries | map(select(.value.status == "completed")) | map(.key)' \
.claude/agents/context/${SESSION_ID}/coordination.json
# List waiting agents with dependencies
jq '.agents | to_entries | map(select(.value.status == "waiting")) | map({name: .key, deps: .value.dependencies})' \
.claude/agents/context/${SESSION_ID}/coordination.json
```
**Updating Coordination:**
```bash
# Update status to in_progress
cat .claude/agents/context/${SESSION_ID}/coordination.json | \
jq '.agents["my-agent"] = {
"status": "in_progress",
"started_at": "'$(date -u +%Y-%m-%dT%H:%M:%SZ)'",
"progress": "Starting analysis",
"dependencies": []
}' > /tmp/coord.json && \
mv /tmp/coord.json .claude/agents/context/${SESSION_ID}/coordination.json
# Update to completed
cat .claude/agents/context/${SESSION_ID}/coordination.json | \
jq '.agents["my-agent"].status = "completed" |
.agents["my-agent"].completed_at = "'$(date -u +%Y-%m-%dT%H:%M:%SZ)'" |
.agents["my-agent"].report_path = "reports/my-agent.json"' > /tmp/coord.json && \
mv /tmp/coord.json .claude/agents/context/${SESSION_ID}/coordination.json
```
### 2. Message Bus (`messages.jsonl`)
Append-only log of all events. Each line is a JSON object.
**Event Types:**
- `status` - Progress updates
- `finding` - Discovery of issues or insights
- `error` - Failures or problems
- `data` - Data artifact creation
- `completed` - Agent completion announcement
**Event Format:**
```json
{"timestamp":"2025-01-23T10:00:00Z","from":"security-analyzer","type":"status","message":"Starting security analysis"}
{"timestamp":"2025-01-23T10:02:15Z","from":"security-analyzer","type":"finding","severity":"high","data":{"title":"SQL Injection Risk","location":"src/db/queries.ts:42"}}
{"timestamp":"2025-01-23T10:03:00Z","from":"security-analyzer","type":"data","artifact":"data/vulnerabilities.json","description":"Detailed vulnerability data"}
{"timestamp":"2025-01-23T10:05:00Z","from":"security-analyzer","type":"completed","message":"Analysis complete. Found 5 high-severity issues."}
```
**Writing Events:**
```bash
# Log status update
echo "{\"timestamp\":\"$(date -u +%Y-%m-%dT%H:%M:%SZ)\",\"from\":\"my-agent\",\"type\":\"status\",\"message\":\"Starting analysis\"}" >> \
.claude/agents/context/${SESSION_ID}/messages.jsonl
# Log finding
echo "{\"timestamp\":\"$(date -u +%Y-%m-%dT%H:%M:%SZ)\",\"from\":\"my-agent\",\"type\":\"finding\",\"severity\":\"high\",\"data\":{\"title\":\"Issue found\",\"location\":\"file:line\"}}" >> \
.claude/agents/context/${SESSION_ID}/messages.jsonl
# Log completion
echo "{\"timestamp\":\"$(date -u +%Y-%m-%dT%H:%M:%SZ)\",\"from\":\"my-agent\",\"type\":\"completed\",\"message\":\"Analysis complete\"}" >> \
.claude/agents/context/${SESSION_ID}/messages.jsonl
```
**Reading Events:**
```bash
# Watch live events
tail -f .claude/agents/context/${SESSION_ID}/messages.jsonl | jq
# Get events from specific agent
jq 'select(.from == "security-analyzer")' .claude/agents/context/${SESSION_ID}/messages.jsonl
# Get events by type
jq 'select(.type == "finding")' .claude/agents/context/${SESSION_ID}/messages.jsonl
# Get high-severity findings
jq 'select(.type == "finding" and .severity == "high")' .claude/agents/context/${SESSION_ID}/messages.jsonl
# Count events by type
jq -s 'group_by(.type) | map({type: .[0].type, count: length})' \
.claude/agents/context/${SESSION_ID}/messages.jsonl
# Timeline of agent activity
jq -s 'sort_by(.timestamp) | .[] | "\(.timestamp) [\(.from)] \(.type): \(.message // .data.title // "no message")"' \
.claude/agents/context/${SESSION_ID}/messages.jsonl -r
```
### 3. Agent Reports (`reports/{agent-name}.json`)
Standardized output from each agent.
**Standard Report Format:**
```json
{
"agent_name": "security-analyzer",
"timestamp": "2025-01-23T10:05:00Z",
"status": "completed",
"summary": "Brief 2-3 sentence overview of findings",
"findings": [
{
"type": "issue|recommendation|info",
"severity": "high|medium|low",
"title": "Short title",
"description": "Detailed description of the finding",
"location": "file:line or component",
"recommendation": "What to do about it",
"example": "Code snippet or example (optional)"
}
],
"metrics": {
"items_analyzed": 150,
"issues_found": 5,
"high_severity": 2,
"medium_severity": 2,
"low_severity": 1,
"time_taken": "2m 34s"
},
"data_artifacts": [
"data/vulnerabilities.json",
"data/dependency-graph.json"
],
"next_actions": [
"Fix SQL injection in queries.ts",
"Update vulnerable dependencies",
"Add input validation to API routes"
],
"recommendations_for_automation": [
"Skill: SQL injection checker that runs on code changes",
"Command: /security-scan for quick manual checks",
"Hook: Validate queries on PreToolUse for Write operations",
"MCP: Integrate with security scanning service"
]
}
```
**Writing Reports:**
```bash
cat > .claude/agents/context/${SESSION_ID}/reports/my-agent.json << EOF
{
"agent_name": "my-agent",
"timestamp": "'$(date -u +%Y-%m-%dT%H:%M:%SZ)'",
"status": "completed",
"summary": "Your summary here",
"findings": [
{
"type": "issue",
"severity": "high",
"title": "Finding title",
"description": "Detailed description",
"location": "src/file.ts:42",
"recommendation": "How to fix"
}
],
"metrics": {
"items_analyzed": 100,
"issues_found": 3
},
"next_actions": ["Action 1", "Action 2"],
"recommendations_for_automation": ["Suggestion 1", "Suggestion 2"]
}
EOF
```
**Reading Reports:**
```bash
# Read specific report
cat .claude/agents/context/${SESSION_ID}/reports/security-analyzer.json | jq
# Get summaries from all reports
jq -r '.summary' .claude/agents/context/${SESSION_ID}/reports/*.json
# Get all high-severity findings
jq -s 'map(.findings[]) | map(select(.severity == "high"))' \
.claude/agents/context/${SESSION_ID}/reports/*.json
# Aggregate metrics
jq -s '{
total_findings: map(.findings | length) | add,
high_severity: map(.findings[] | select(.severity == "high")) | length,
automation_opportunities: map(.recommendations_for_automation) | flatten | length
}' .claude/agents/context/${SESSION_ID}/reports/*.json
# List all data artifacts
jq -s 'map(.data_artifacts) | flatten | unique' \
.claude/agents/context/${SESSION_ID}/reports/*.json
```
### 4. Data Artifacts (`data/{artifact-name}.json`)
Shared data files for detailed information exchange.
Agents can create data artifacts when:
- Report would be too large
- Other agents need raw data
- Detailed analysis is needed
- Data should be reusable
**Example Artifacts:**
```bash
# Vulnerability details
data/vulnerabilities.json
# Performance profiling results
data/performance-profile.json
# Dependency graph
data/dependency-graph.json
# Test coverage report
data/test-coverage.json
# Code complexity metrics
data/complexity-metrics.json
```
**Creating Artifacts:**
```bash
cat > .claude/agents/context/${SESSION_ID}/data/vulnerabilities.json << 'EOF'
{
"scan_date": "2025-01-23T10:05:00Z",
"vulnerabilities": [
{
"id": "SQL-001",
"type": "SQL Injection",
"severity": "high",
"file": "src/db/queries.ts",
"line": 42,
"code": "db.query(`SELECT * FROM users WHERE id = ${userId}`)",
"fix": "db.query('SELECT * FROM users WHERE id = ?', [userId])",
"cwe": "CWE-89",
"references": ["https://cwe.mitre.org/data/definitions/89.html"]
}
]
}
EOF
# Log artifact creation
echo "{\"timestamp\":\"$(date -u +%Y-%m-%dT%H:%M:%SZ)\",\"from\":\"security-analyzer\",\"type\":\"data\",\"artifact\":\"data/vulnerabilities.json\",\"description\":\"Detailed vulnerability data\"}" >> \
.claude/agents/context/${SESSION_ID}/messages.jsonl
```
**Reading Artifacts:**
```bash
# Read artifact
cat .claude/agents/context/${SESSION_ID}/data/vulnerabilities.json | jq
# Find all artifacts
ls .claude/agents/context/${SESSION_ID}/data/
# Check which agents created artifacts
jq 'select(.type == "data") | {from: .from, artifact: .artifact}' \
.claude/agents/context/${SESSION_ID}/messages.jsonl
```
## Agent Workflows
### Analysis Agent Workflow
```bash
# 1. Check coordination
jq '.agents' .claude/agents/context/${SESSION_ID}/coordination.json
# 2. Read prerequisite reports (if any)
cat .claude/agents/context/${SESSION_ID}/reports/dependency-analyzer.json | jq
# 3. Announce startup
echo "{\"timestamp\":\"$(date -u +%Y-%m-%dT%H:%M:%SZ)\",\"from\":\"my-analyzer\",\"type\":\"status\",\"message\":\"Starting analysis\"}" >> \
.claude/agents/context/${SESSION_ID}/messages.jsonl
# 4. Update coordination
# [Update to in_progress as shown above]
# 5. Perform analysis and log progress
echo "{\"timestamp\":\"$(date -u +%Y-%m-%dT%H:%M:%SZ)\",\"from\":\"my-analyzer\",\"type\":\"status\",\"message\":\"Analyzed 50% of codebase\"}" >> \
.claude/agents/context/${SESSION_ID}/messages.jsonl
# 6. Write report
# [Create report as shown above]
# 7. Create data artifacts (if needed)
# [Create artifacts as shown above]
# 8. Update coordination to completed
# [Update status as shown above]
# 9. Announce completion
echo "{\"timestamp\":\"$(date -u +%Y-%m-%dT%H:%M:%SZ)\",\"from\":\"my-analyzer\",\"type\":\"completed\",\"message\":\"Analysis complete. Found X issues.\"}" >> \
.claude/agents/context/${SESSION_ID}/messages.jsonl
```
### Implementation Agent Workflow
```bash
# 1. Wait for analysis agents
while true; do
COMPLETED=$(jq -r '.agents | to_entries | map(select((.key | endswith("analyzer")) and .value.status == "completed")) | length' \
  .claude/agents/context/${SESSION_ID}/coordination.json)
TOTAL=$(jq -r '.agents | to_entries | map(select(.key | endswith("analyzer"))) | length' \
  .claude/agents/context/${SESSION_ID}/coordination.json)
if [ "$COMPLETED" -eq "$TOTAL" ]; then
break
fi
sleep 2
done
# 2. Read all analysis reports
for report in .claude/agents/context/${SESSION_ID}/reports/*-analyzer.json; do
cat "$report" | jq '.summary, .recommendations_for_automation'
done
# 3. Synthesize findings and make decisions
# [Aggregate recommendations, prioritize, decide what to generate]
# 4. Generate artifacts (skills, commands, hooks)
# [Create actual files in .claude/skills/, .claude/commands/, etc.]
# 5. Write report
# [Document what was generated and why]
# 6. Update coordination
# [Mark as completed]
```
### Coordinator Agent Workflow
```bash
# 1. Launch analysis agents in parallel
echo "Launching analysis agents: security, performance, quality, dependency, documentation"
# 2. Monitor progress
watch -n 2 "cat .claude/agents/context/${SESSION_ID}/coordination.json | jq '.agents'"
# 3. Wait for all analysis agents to complete
while true; do
COMPLETED=$(jq -r '.agents | to_entries | map(select((.key | endswith("analyzer")) and .value.status == "completed")) | length' \
  .claude/agents/context/${SESSION_ID}/coordination.json)
TOTAL=$(jq -r '.agents | to_entries | map(select(.key | endswith("analyzer"))) | length' \
  .claude/agents/context/${SESSION_ID}/coordination.json)
if [ "$COMPLETED" -eq "$TOTAL" ]; then
break
fi
sleep 5
done
# 4. Synthesize all findings
jq -s '{
total_findings: map(.findings | length) | add,
high_severity: map(.findings[] | select(.severity == "high")) | length,
automation_suggestions: map(.recommendations_for_automation) | flatten
}' .claude/agents/context/${SESSION_ID}/reports/*-analyzer.json
# 5. Make decisions on what to generate
# [Based on synthesis, decide which skills/commands/hooks/MCP to create]
# 6. Launch implementation agents in parallel
echo "Launching implementation agents: skill-gen, command-gen, hook-gen, mcp-config"
# 7. Monitor implementation
# [Similar monitoring loop]
# 8. Launch validation agents sequentially
echo "Launching integration-tester"
# [Wait for completion]
echo "Launching documentation-validator"
# 9. Generate final documentation
# [Create AUTOMATION_README.md, QUICK_REFERENCE.md]
# 10. Report to user
# [Summarize what was created and how to use it]
```
## Error Handling
### Detecting Failures
```bash
# Check for failed agents
jq '.agents | to_entries | map(select(.value.status == "failed"))' \
.claude/agents/context/${SESSION_ID}/coordination.json
# Find error events
jq 'select(.type == "error")' .claude/agents/context/${SESSION_ID}/messages.jsonl
```
### Recovery Strategies
1. **Retry Agent** - Re-run the failed agent
2. **Continue Without** - Proceed if agent was non-critical
3. **Manual Intervention** - Fix issue and resume
4. **Partial Results** - Check if agent wrote partial report
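For the first strategy, a minimal retry sketch that reuses the coordination and message-bus patterns above (`my-agent` is a placeholder for the failed agent):
```bash
# Reset the failed agent back to "waiting" (replace-whole via temp file)
cat .claude/agents/context/${SESSION_ID}/coordination.json | \
  jq '.agents["my-agent"].status = "waiting" | del(.agents["my-agent"].error)' > /tmp/coord.json && \
  mv /tmp/coord.json .claude/agents/context/${SESSION_ID}/coordination.json
# Log the retry so the audit trail stays complete
echo "{\"timestamp\":\"$(date -u +%Y-%m-%dT%H:%M:%SZ)\",\"from\":\"coordinator\",\"type\":\"status\",\"message\":\"Retrying my-agent\"}" >> \
  .claude/agents/context/${SESSION_ID}/messages.jsonl
# Re-launch the agent using its normal workflow
```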
### Logging Errors
```bash
# Log error with details
echo "{\"timestamp\":\"$(date -u +%Y-%m-%dT%H:%M:%SZ)\",\"from\":\"my-agent\",\"type\":\"error\",\"message\":\"Failed to analyze X\",\"error\":\"Error details here\"}" >> \
.claude/agents/context/${SESSION_ID}/messages.jsonl
# Update coordination
cat .claude/agents/context/${SESSION_ID}/coordination.json | \
jq '.agents["my-agent"].status = "failed" |
.agents["my-agent"].error = "Error details"' > /tmp/coord.json && \
mv /tmp/coord.json .claude/agents/context/${SESSION_ID}/coordination.json
```
## Best Practices
### For Agents
1. **Check dependencies first** - Read coordination before starting
2. **Log frequently** - Write to message bus for transparency
3. **Standardize reports** - Follow the exact JSON format
4. **Be atomic** - Complete write-then-move for files
5. **Handle errors gracefully** - Log errors, update status
6. **Provide actionable output** - Clear recommendations
7. **Suggest automation** - Think about reusable patterns
### For Coordinators
1. **Launch in parallel when possible** - Maximize concurrency
2. **Respect dependencies** - Don't start agents before prerequisites
3. **Monitor actively** - Check coordination periodically
4. **Synthesize thoroughly** - Read all reports before decisions
5. **Validate results** - Test generated automation
6. **Document completely** - Explain what was created and why
### For File Operations
1. **Append-only for logs** - Use `>>` for messages.jsonl
2. **Replace-whole for state** - Use write-to-temp-then-move for coordination.json
3. **Unique names** - Avoid conflicts in data artifacts
4. **JSON formatting** - Always use valid JSON
5. **Timestamps** - ISO 8601 format (UTC)
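A quick way to enforce practices 4 and 5 before marking an agent as completed is to validate the report with `jq` (a minimal sketch; assumes jq 1.5+ for `test()`, and the report path is a placeholder):
```bash
REPORT=".claude/agents/context/${SESSION_ID}/reports/my-agent.json"
# Practice 4: the report must be valid JSON
jq empty "${REPORT}" 2>/dev/null || echo "❌ Report is not valid JSON" >&2
# Practice 5: timestamps should be ISO 8601 (UTC)
jq -e '(.timestamp // "") | test("^\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}Z$")' "${REPORT}" >/dev/null || \
  echo "⚠️ Warning: timestamp missing or not ISO 8601 UTC" >&2
```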
## Example Session
Full example of 3 agents communicating:
```bash
# Session starts
SESSION_ID="abc123"
mkdir -p ".claude/agents/context/${SESSION_ID}"/{reports,data}
# Agent 1: Security Analyzer starts
echo "{\"timestamp\":\"2025-01-23T10:00:00Z\",\"from\":\"security\",\"type\":\"status\",\"message\":\"Starting\"}" >> \
.claude/agents/context/${SESSION_ID}/messages.jsonl
# Agent 2: Performance Analyzer starts (parallel)
echo "{\"timestamp\":\"2025-01-23T10:00:01Z\",\"from\":\"performance\",\"type\":\"status\",\"message\":\"Starting\"}" >> \
.claude/agents/context/${SESSION_ID}/messages.jsonl
# Security finds issue
echo "{\"timestamp\":\"2025-01-23T10:02:00Z\",\"from\":\"security\",\"type\":\"finding\",\"severity\":\"high\",\"data\":{\"title\":\"SQL Injection\"}}" >> \
.claude/agents/context/${SESSION_ID}/messages.jsonl
# Security completes
echo "{\"timestamp\":\"2025-01-23T10:05:00Z\",\"from\":\"security\",\"type\":\"completed\",\"message\":\"Found 5 issues\"}" >> \
.claude/agents/context/${SESSION_ID}/messages.jsonl
# Creates report: reports/security.json
# Performance completes
echo "{\"timestamp\":\"2025-01-23T10:06:00Z\",\"from\":\"performance\",\"type\":\"completed\",\"message\":\"Found 3 bottlenecks\"}" >> \
.claude/agents/context/${SESSION_ID}/messages.jsonl
# Creates report: reports/performance.json
# Coordinator reads both reports
cat .claude/agents/context/${SESSION_ID}/reports/security.json | jq .summary
cat .claude/agents/context/${SESSION_ID}/reports/performance.json | jq .summary
# Coordinator launches implementation agent
echo "{\"timestamp\":\"2025-01-23T10:07:00Z\",\"from\":\"coordinator\",\"type\":\"status\",\"message\":\"Launching skill generator\"}" >> \
.claude/agents/context/${SESSION_ID}/messages.jsonl
# Skill generator reads analysis reports
jq -s 'map(.recommendations_for_automation) | flatten' \
.claude/agents/context/${SESSION_ID}/reports/*.json
# Skill generator creates artifacts
# [Generates skills based on recommendations]
# Complete
echo "{\"timestamp\":\"2025-01-23T10:10:00Z\",\"from\":\"coordinator\",\"type\":\"completed\",\"message\":\"Automation system ready\"}" >> \
.claude/agents/context/${SESSION_ID}/messages.jsonl
```
## Protocol Guarantees
### What ACP Guarantees
- **Visibility** - All agents can see all reports
- **Ordering** - Events have timestamps
- **Persistence** - Survives crashes
- **Transparency** - Complete audit trail
- **Atomicity** - No partial writes (when using temp files)
### What ACP Does NOT Guarantee
- **Real-time coordination** - File-based, not instant
- **Locking** - No distributed locks (use temp files + move)
- **Transactions** - No multi-file atomicity
- **Ordering of concurrent writes** - Append-only log doesn't guarantee order
## Summary
The Agent Communication Protocol provides a simple, robust way for parallel subagents to:
1. **Coordinate** - Via coordination.json
2. **Communicate findings** - Via standardized reports
3. **Share data** - Via data artifacts
4. **Maintain transparency** - Via message bus
5. **Enable orchestration** - Via dependency tracking
All communication is file-based, making it:
- Easy to implement
- Easy to debug
- Easy to monitor
- Reliable and persistent
- Language-agnostic
This protocol enables the meta-skill to generate sophisticated multi-agent automation systems that work reliably in parallel.

View File

@@ -0,0 +1,302 @@
#!/usr/bin/env python3
"""
Agent Reuse Manager
Avoids regenerating automation for similar projects
Reuses successful configurations
"""
import json
from pathlib import Path
from datetime import datetime
from typing import Dict, List, Optional
from difflib import SequenceMatcher
class AgentReuseManager:
"""Manages reuse of automation configurations"""
def __init__(self, storage_path: str = ".claude/meta-automation/configurations"):
self.storage_dir = Path(storage_path)
self.storage_dir.mkdir(parents=True, exist_ok=True)
self.index_path = self.storage_dir / "index.json"
self.index = self._load_index()
def _load_index(self) -> Dict:
"""Load configuration index"""
if self.index_path.exists():
try:
with open(self.index_path, 'r') as f:
return json.load(f)
except:
return {'configurations': []}
return {'configurations': []}
def _save_index(self):
"""Save configuration index"""
with open(self.index_path, 'w') as f:
json.dump(self.index, f, indent=2)
def save_configuration(self, config: Dict) -> str:
"""
Save a successful automation configuration
Args:
config: {
'project_type': str,
'project_name': str,
'tech_stack': List[str],
'agents_used': List[str],
'skills_generated': List[str],
'commands_generated': List[str],
'hooks_generated': List[str],
'success_metrics': Dict,
'user_satisfaction': int (1-5)
}
Returns:
Configuration ID
"""
config_id = datetime.now().strftime("%Y%m%d_%H%M%S")
# Add metadata
config_with_meta = {
**config,
'config_id': config_id,
'created_at': datetime.now().isoformat(),
'reuse_count': 0
}
# Save full configuration
config_path = self.storage_dir / f"{config_id}.json"
with open(config_path, 'w') as f:
json.dump(config_with_meta, f, indent=2)
# Update index
self.index['configurations'].append({
'config_id': config_id,
'project_type': config['project_type'],
'project_name': config.get('project_name', 'unknown'),
'tech_stack': config.get('tech_stack', []),
'created_at': config_with_meta['created_at'],
'reuse_count': 0
})
self._save_index()
return config_id
def find_similar_configurations(self, project_info: Dict, min_similarity: float = 0.7) -> List[Dict]:
"""
Find similar configurations that could be reused
Args:
project_info: {
'project_type': str,
'tech_stack': List[str],
'existing_tools': List[str]
}
min_similarity: Minimum similarity score (0-1)
Returns:
List of similar configurations sorted by similarity
"""
similar = []
for config_ref in self.index['configurations']:
config = self._load_configuration(config_ref['config_id'])
if not config:
continue
similarity = self._calculate_similarity(project_info, config)
if similarity >= min_similarity:
similar.append({
**config_ref,
'similarity': round(similarity, 2),
'full_config': config
})
# Sort by similarity (descending)
similar.sort(key=lambda x: x['similarity'], reverse=True)
return similar
def _calculate_similarity(self, project_info: Dict, config: Dict) -> float:
"""
Calculate similarity between project and configuration
Returns:
Similarity score 0-1
"""
score = 0.0
weights = {
'project_type': 0.4,
'tech_stack': 0.4,
'size': 0.2
}
# Project type match
if project_info.get('project_type') == config.get('project_type'):
score += weights['project_type']
# Tech stack similarity
project_stack = set(project_info.get('tech_stack', []))
config_stack = set(config.get('tech_stack', []))
if project_stack and config_stack:
intersection = len(project_stack & config_stack)
union = len(project_stack | config_stack)
tech_similarity = intersection / union if union > 0 else 0
score += weights['tech_stack'] * tech_similarity
return min(score, 1.0)
def reuse_configuration(self, config_id: str) -> Dict:
"""
Reuse a configuration
Args:
config_id: ID of configuration to reuse
Returns:
Configuration to apply
"""
config = self._load_configuration(config_id)
if not config:
return None
# Increment reuse count
config['reuse_count'] += 1
self._save_configuration(config_id, config)
# Update index
for cfg in self.index['configurations']:
if cfg['config_id'] == config_id:
cfg['reuse_count'] += 1
self._save_index()
return config
def get_reuse_recommendation(self, project_info: Dict) -> Optional[Dict]:
"""
Get recommendation for reusing a configuration
Args:
project_info: Information about current project
Returns:
Recommendation or None if no good match
"""
similar = self.find_similar_configurations(project_info, min_similarity=0.75)
if not similar:
return None
best_match = similar[0]
return {
'recommended': True,
'config_id': best_match['config_id'],
'similarity': best_match['similarity'],
'project_name': best_match['project_name'],
'created_at': best_match['created_at'],
'reuse_count': best_match['reuse_count'],
'time_saved': '5-10 minutes (no need to regenerate)',
'agents': best_match['full_config']['agents_used'],
'skills': best_match['full_config']['skills_generated'],
'reason': f"This configuration was successful for a similar {best_match['project_type']} project"
}
def _load_configuration(self, config_id: str) -> Optional[Dict]:
"""Load a configuration file"""
config_path = self.storage_dir / f"{config_id}.json"
if not config_path.exists():
return None
try:
with open(config_path, 'r') as f:
return json.load(f)
except:
return None
def _save_configuration(self, config_id: str, config: Dict):
"""Save a configuration file"""
config_path = self.storage_dir / f"{config_id}.json"
with open(config_path, 'w') as f:
json.dump(config, f, indent=2)
def get_statistics(self) -> Dict:
"""Get reuse statistics"""
total_configs = len(self.index['configurations'])
total_reuses = sum(cfg['reuse_count'] for cfg in self.index['configurations'])
project_types = {}
for cfg in self.index['configurations']:
ptype = cfg['project_type']
if ptype not in project_types:
project_types[ptype] = 0
project_types[ptype] += 1
return {
'total_configurations': total_configs,
'total_reuses': total_reuses,
'average_reuses': round(total_reuses / total_configs, 1) if total_configs > 0 else 0,
'project_types': project_types,
'most_reused': sorted(
self.index['configurations'],
key=lambda x: x['reuse_count'],
reverse=True
)[:3]
}
# Example usage
if __name__ == '__main__':
manager = AgentReuseManager()
# Save a configuration
print("Saving successful configuration...")
config_id = manager.save_configuration({
'project_type': 'programming',
'project_name': 'my-web-app',
'tech_stack': ['TypeScript', 'React', 'Next.js'],
'agents_used': ['project-analyzer', 'security-analyzer', 'test-coverage-analyzer'],
'skills_generated': ['security-scanner', 'test-generator'],
'commands_generated': ['/security-check', '/generate-tests'],
'hooks_generated': ['pre-commit-security'],
'success_metrics': {
'time_saved': 50,
'issues_prevented': 3
},
'user_satisfaction': 5
})
print(f"Saved configuration: {config_id}\n")
# Find similar
print("Finding similar configurations for new project...")
similar = manager.find_similar_configurations({
'project_type': 'programming',
'tech_stack': ['TypeScript', 'React', 'Vite'], # Similar but not exact
'existing_tools': ['ESLint']
})
print(f"Found {len(similar)} similar configurations\n")
if similar:
print("Best match:")
print(json.dumps({
'config_id': similar[0]['config_id'],
'similarity': similar[0]['similarity'],
'project_name': similar[0]['project_name']
}, indent=2))
# Get recommendation
print("\nRecommendation:")
rec = manager.get_reuse_recommendation({
'project_type': 'programming',
'tech_stack': ['TypeScript', 'React', 'Vite']
})
print(json.dumps(rec, indent=2))
# Statistics
print("\nReuse Statistics:")
stats = manager.get_statistics()
print(json.dumps(stats, indent=2))

View File

@@ -0,0 +1,198 @@
#!/usr/bin/env python3
"""
Simple Project Metrics Collector
Just collects basic data - NO decision making or pattern matching
The project-analyzer agent does the intelligent analysis
"""
import json
from pathlib import Path
from collections import defaultdict
from typing import Dict
class ProjectMetricsCollector:
"""Collects basic project metrics for agent analysis"""
def __init__(self, project_root: str = "."):
self.root = Path(project_root).resolve()
def collect_metrics(self) -> Dict:
"""Collect basic project metrics"""
return {
'file_analysis': self._analyze_files(),
'directory_structure': self._get_directory_structure(),
'key_files': self._find_key_files(),
'project_stats': self._get_basic_stats()
}
def _analyze_files(self) -> Dict:
"""Count files by category"""
type_categories = {
'code': {'.py', '.js', '.ts', '.jsx', '.tsx', '.java', '.go', '.rs', '.c', '.cpp', '.php', '.rb'},
'markup': {'.html', '.xml', '.svg'},
'stylesheet': {'.css', '.scss', '.sass', '.less'},
'document': {'.md', '.txt', '.pdf', '.doc', '.docx', '.odt'},
'latex': {'.tex', '.bib', '.cls', '.sty'},
'spreadsheet': {'.xlsx', '.xls', '.ods', '.csv'},
'image': {'.jpg', '.jpeg', '.png', '.gif', '.svg', '.webp'},
'video': {'.mp4', '.avi', '.mov', '.mkv'},
'data': {'.json', '.yaml', '.yml', '.toml', '.xml'},
'notebook': {'.ipynb'},
}
counts = defaultdict(int)
files_by_type = defaultdict(list)
for item in self.root.rglob('*'):
if item.is_file() and not self._is_ignored(item):
suffix = item.suffix.lower()
categorized = False
for category, extensions in type_categories.items():
if suffix in extensions:
counts[category] += 1
files_by_type[category].append(str(item.relative_to(self.root)))
categorized = True
break
if not categorized:
counts['other'] += 1
total = sum(counts.values()) or 1
return {
'counts': dict(counts),
'percentages': {k: round((v / total) * 100, 1) for k, v in counts.items()},
'total_files': total,
'sample_files': {k: v[:5] for k, v in files_by_type.items()} # First 5 of each type
}
def _get_directory_structure(self) -> Dict:
"""Get top-level directory structure"""
dirs = []
for item in self.root.iterdir():
if item.is_dir() and not self._is_ignored(item):
file_count = sum(1 for _ in item.rglob('*') if _.is_file())
dirs.append({
'name': item.name,
'file_count': file_count
})
return {
'top_level_directories': sorted(dirs, key=lambda x: x['file_count'], reverse=True),
'total_directories': len(dirs)
}
def _find_key_files(self) -> Dict:
"""Find common configuration and important files"""
key_patterns = {
# Programming
'package.json': 'Node.js project',
'requirements.txt': 'Python project',
'Cargo.toml': 'Rust project',
'go.mod': 'Go project',
'pom.xml': 'Java Maven project',
'build.gradle': 'Java Gradle project',
# Configuration
'.eslintrc*': 'ESLint config',
'tsconfig.json': 'TypeScript config',
'jest.config.js': 'Jest testing',
'pytest.ini': 'Pytest config',
# CI/CD
'.github/workflows': 'GitHub Actions',
'.gitlab-ci.yml': 'GitLab CI',
'Jenkinsfile': 'Jenkins',
# Hooks
'.pre-commit-config.yaml': 'Pre-commit hooks',
'.husky': 'Husky hooks',
# Documentation
'README.md': 'README',
'CONTRIBUTING.md': 'Contribution guide',
'LICENSE': 'License file',
# LaTeX
'main.tex': 'LaTeX main',
'*.bib': 'Bibliography',
# Build tools
'Makefile': 'Makefile',
'CMakeLists.txt': 'CMake',
'docker-compose.yml': 'Docker Compose',
'Dockerfile': 'Docker',
}
found = {}
for pattern, description in key_patterns.items():
matches = list(self.root.glob(pattern))
if matches:
found[pattern] = {
'description': description,
'count': len(matches),
'paths': [str(m.relative_to(self.root)) for m in matches[:3]]
}
return found
def _get_basic_stats(self) -> Dict:
"""Get basic project statistics"""
total_files = 0
total_dirs = 0
total_size = 0
max_depth = 0
for item in self.root.rglob('*'):
if self._is_ignored(item):
continue
if item.is_file():
total_files += 1
try:
total_size += item.stat().st_size
except:
pass
depth = len(item.relative_to(self.root).parts)
max_depth = max(max_depth, depth)
elif item.is_dir():
total_dirs += 1
return {
'total_files': total_files,
'total_directories': total_dirs,
'total_size_bytes': total_size,
'total_size_mb': round(total_size / (1024 * 1024), 2),
'deepest_nesting': max_depth
}
def _is_ignored(self, path: Path) -> bool:
"""Check if path should be ignored"""
ignore_patterns = {
'node_modules', '.git', '__pycache__', '.venv', 'venv',
'dist', 'build', '.cache', '.pytest_cache', 'coverage',
'.next', '.nuxt', 'out', 'target'
}
parts = path.parts
return any(pattern in parts for pattern in ignore_patterns)
def generate_report(self) -> Dict:
"""Generate complete metrics report"""
metrics = self.collect_metrics()
return {
'project_root': str(self.root),
'scan_purpose': 'Basic metrics collection for intelligent agent analysis',
'metrics': metrics,
'note': 'This is raw data. The project-analyzer agent will interpret it intelligently.'
}
if __name__ == '__main__':
import sys
path = sys.argv[1] if len(sys.argv) > 1 else '.'
collector = ProjectMetricsCollector(path)
report = collector.generate_report()
print(json.dumps(report, indent=2))

View File

@@ -0,0 +1,234 @@
#!/usr/bin/env python3
"""
Cost & Performance Estimator
Provides transparent estimates for automation operations
"""
import json
from typing import Dict, List
from dataclasses import dataclass, asdict
@dataclass
class AgentEstimate:
"""Estimate for a single agent"""
agent_name: str
description: str
estimated_tokens: int
estimated_minutes: int
priority: str # high, medium, low
purpose: str
@dataclass
class AutomationEstimate:
"""Complete automation estimate"""
mode: str # quick, focused, comprehensive
total_agents: int
agents: List[AgentEstimate]
total_tokens_min: int
total_tokens_max: int
total_minutes_min: int
total_minutes_max: int
total_cost_min: float
total_cost_max: float
recommendations: List[str]
class CostEstimator:
"""Estimates cost and time for automation"""
# Token costs (as of Jan 2025, Claude Sonnet)
TOKEN_COST_INPUT = 0.000003 # $3 per 1M input tokens
TOKEN_COST_OUTPUT = 0.000015 # $15 per 1M output tokens
# Approximate tokens per agent type
AGENT_TOKEN_ESTIMATES = {
'project-analyzer': {'input': 2000, 'output': 1500, 'minutes': 3},
'structure-analyzer': {'input': 1000, 'output': 800, 'minutes': 2},
'security-analyzer': {'input': 1500, 'output': 1000, 'minutes': 3},
'performance-analyzer': {'input': 1500, 'output': 1000, 'minutes': 4},
'test-coverage-analyzer': {'input': 1200, 'output': 800, 'minutes': 3},
'latex-structure-analyzer': {'input': 1000, 'output': 800, 'minutes': 3},
'citation-analyzer': {'input': 800, 'output': 600, 'minutes': 2},
'link-validator': {'input': 1000, 'output': 700, 'minutes': 2},
}
# Default estimate for unknown agents
DEFAULT_ESTIMATE = {'input': 1000, 'output': 800, 'minutes': 3}
def estimate_agent(self, agent_name: str, priority: str = 'medium', purpose: str = '') -> AgentEstimate:
"""
Estimate cost/time for a single agent
Args:
agent_name: Name of the agent
priority: high, medium, or low
purpose: What this agent does
Returns:
AgentEstimate object
"""
estimate = self.AGENT_TOKEN_ESTIMATES.get(agent_name, self.DEFAULT_ESTIMATE)
total_tokens = estimate['input'] + estimate['output']
minutes = estimate['minutes']
return AgentEstimate(
agent_name=agent_name,
description=purpose or f"Analyzes {agent_name.replace('-', ' ')}",
estimated_tokens=total_tokens,
estimated_minutes=minutes,
priority=priority,
purpose=purpose
)
def estimate_quick_mode(self) -> AutomationEstimate:
"""Estimate for quick analysis mode"""
agents = [
self.estimate_agent('project-analyzer', 'high', 'Intelligent project analysis'),
]
return self._calculate_total_estimate('quick', agents, [
'Fastest way to understand your project',
'Low cost, high value',
'Can expand to full automation after'
])
def estimate_focused_mode(self, focus_areas: List[str]) -> AutomationEstimate:
"""Estimate for focused automation mode"""
# Map focus areas to agents
area_to_agents = {
'security': ['security-analyzer'],
'testing': ['test-coverage-analyzer'],
'performance': ['performance-analyzer'],
'structure': ['structure-analyzer'],
'latex': ['latex-structure-analyzer', 'citation-analyzer'],
'links': ['link-validator'],
}
agents = [self.estimate_agent('project-analyzer', 'high', 'Initial analysis')]
for area in focus_areas:
for agent_name in area_to_agents.get(area, []):
agents.append(self.estimate_agent(agent_name, 'high', f'Analyze {area}'))
return self._calculate_total_estimate('focused', agents, [
'Targeted automation for your specific needs',
'Medium cost, high relevance',
'Focuses on what matters most to you'
])
def estimate_comprehensive_mode(self, project_type: str) -> AutomationEstimate:
"""Estimate for comprehensive automation mode"""
agents = [
self.estimate_agent('project-analyzer', 'high', 'Project analysis'),
self.estimate_agent('structure-analyzer', 'high', 'Structure analysis'),
]
# Add type-specific agents
if project_type in ['programming', 'web_app', 'cli']:
agents.extend([
self.estimate_agent('security-analyzer', 'high', 'Security audit'),
self.estimate_agent('performance-analyzer', 'medium', 'Performance check'),
self.estimate_agent('test-coverage-analyzer', 'high', 'Test coverage'),
])
elif project_type in ['academic_writing', 'research']:
agents.extend([
self.estimate_agent('latex-structure-analyzer', 'high', 'LaTeX structure'),
self.estimate_agent('citation-analyzer', 'high', 'Citations & bibliography'),
self.estimate_agent('link-validator', 'medium', 'Link validation'),
])
return self._calculate_total_estimate('comprehensive', agents, [
'Complete automation system',
'Highest cost, most comprehensive',
'Full agent suite, skills, commands, hooks'
])
def _calculate_total_estimate(self, mode: str, agents: List[AgentEstimate], recommendations: List[str]) -> AutomationEstimate:
"""Calculate total estimates from agent list"""
total_tokens = sum(a.estimated_tokens for a in agents)
total_minutes = max(5, sum(a.estimated_minutes for a in agents) // 2) # Parallel execution
# Add buffer (20-50% uncertainty)
tokens_min = total_tokens
tokens_max = int(total_tokens * 1.5)
minutes_min = total_minutes
minutes_max = int(total_minutes * 1.3)
# Calculate costs (rough approximation: 60% input, 40% output)
cost_min = (tokens_min * 0.6 * self.TOKEN_COST_INPUT) + (tokens_min * 0.4 * self.TOKEN_COST_OUTPUT)
cost_max = (tokens_max * 0.6 * self.TOKEN_COST_INPUT) + (tokens_max * 0.4 * self.TOKEN_COST_OUTPUT)
return AutomationEstimate(
mode=mode,
total_agents=len(agents),
agents=agents,
total_tokens_min=tokens_min,
total_tokens_max=tokens_max,
total_minutes_min=minutes_min,
total_minutes_max=minutes_max,
total_cost_min=round(cost_min, 3),
total_cost_max=round(cost_max, 3),
recommendations=recommendations
)
def format_estimate(self, estimate: AutomationEstimate) -> str:
"""Format estimate for display"""
lines = []
lines.append(f"╔══════════════════════════════════════════════════╗")
lines.append(f"║ Automation Estimate - {estimate.mode.upper()} Mode")
lines.append(f"╚══════════════════════════════════════════════════╝")
lines.append("")
# Agent list
for agent in estimate.agents:
            priority_icon = "⭐" if agent.priority == "high" else "▫️"  # visual marker for high-priority agents
lines.append(f"{priority_icon} {agent.agent_name}")
lines.append(f" {agent.description}")
lines.append(f" ⏱️ ~{agent.estimated_minutes} min | 💰 ~{agent.estimated_tokens} tokens")
lines.append("")
lines.append("────────────────────────────────────────────────────")
# Totals
lines.append(f"Total Agents: {estimate.total_agents}")
lines.append(f"Estimated Time: {estimate.total_minutes_min}-{estimate.total_minutes_max} minutes")
lines.append(f"Estimated Tokens: {estimate.total_tokens_min:,}-{estimate.total_tokens_max:,}")
lines.append(f"Estimated Cost: ${estimate.total_cost_min:.3f}-${estimate.total_cost_max:.3f}")
lines.append("")
lines.append("💡 Notes:")
for rec in estimate.recommendations:
lines.append(f"{rec}")
return "\n".join(lines)
def export_estimate(self, estimate: AutomationEstimate, output_path: str = None) -> Dict:
"""Export estimate as JSON"""
data = asdict(estimate)
if output_path:
with open(output_path, 'w') as f:
json.dump(data, f, indent=2)
return data
# Example usage
if __name__ == '__main__':
estimator = CostEstimator()
print("1. QUICK MODE ESTIMATE")
print("="*60)
quick = estimator.estimate_quick_mode()
print(estimator.format_estimate(quick))
print("\n\n2. FOCUSED MODE ESTIMATE (Security + Testing)")
print("="*60)
focused = estimator.estimate_focused_mode(['security', 'testing'])
print(estimator.format_estimate(focused))
print("\n\n3. COMPREHENSIVE MODE ESTIMATE (Programming Project)")
print("="*60)
comprehensive = estimator.estimate_comprehensive_mode('programming')
print(estimator.format_estimate(comprehensive))

View File

@@ -0,0 +1,342 @@
#!/usr/bin/env python3
"""
Existing Tool Discovery
Checks what automation tools are already in place
Prevents duplication and suggests integration points
"""
import json
from pathlib import Path
from typing import Dict, List
from collections import defaultdict
class ExistingToolDiscovery:
"""Discovers existing automation tools in a project"""
def __init__(self, project_root: str = "."):
self.root = Path(project_root).resolve()
def discover_all(self) -> Dict:
"""Discover all existing automation tools"""
return {
'linting': self._discover_linting(),
'testing': self._discover_testing(),
'ci_cd': self._discover_ci_cd(),
'git_hooks': self._discover_git_hooks(),
'formatting': self._discover_formatting(),
'security': self._discover_security(),
'documentation': self._discover_documentation(),
'summary': self._generate_summary()
}
def _discover_linting(self) -> Dict:
"""Find linting tools"""
tools = {}
linting_patterns = {
'.eslintrc*': {'tool': 'ESLint', 'language': 'JavaScript/TypeScript', 'purpose': 'Code linting'},
'.pylintrc': {'tool': 'Pylint', 'language': 'Python', 'purpose': 'Code linting'},
'pylint.rc': {'tool': 'Pylint', 'language': 'Python', 'purpose': 'Code linting'},
'.flake8': {'tool': 'Flake8', 'language': 'Python', 'purpose': 'Code linting'},
'tslint.json': {'tool': 'TSLint', 'language': 'TypeScript', 'purpose': 'Code linting'},
'.rubocop.yml': {'tool': 'RuboCop', 'language': 'Ruby', 'purpose': 'Code linting'},
'phpcs.xml': {'tool': 'PHP_CodeSniffer', 'language': 'PHP', 'purpose': 'Code linting'},
}
for pattern, info in linting_patterns.items():
matches = list(self.root.glob(pattern))
if matches:
tools[info['tool']] = {
**info,
'config_file': str(matches[0].relative_to(self.root)),
'found': True
}
return {
'tools_found': tools,
'count': len(tools),
'recommendation': self._linting_recommendation(tools)
}
def _discover_testing(self) -> Dict:
"""Find testing frameworks"""
tools = {}
testing_patterns = {
'jest.config.js': {'tool': 'Jest', 'language': 'JavaScript', 'purpose': 'Unit testing'},
'jest.config.ts': {'tool': 'Jest', 'language': 'TypeScript', 'purpose': 'Unit testing'},
'pytest.ini': {'tool': 'Pytest', 'language': 'Python', 'purpose': 'Unit testing'},
'phpunit.xml': {'tool': 'PHPUnit', 'language': 'PHP', 'purpose': 'Unit testing'},
'karma.conf.js': {'tool': 'Karma', 'language': 'JavaScript', 'purpose': 'Test runner'},
'.rspec': {'tool': 'RSpec', 'language': 'Ruby', 'purpose': 'Testing'},
'go.mod': {'tool': 'Go test', 'language': 'Go', 'purpose': 'Testing'},
}
for pattern, info in testing_patterns.items():
matches = list(self.root.glob(pattern))
if matches:
tools[info['tool']] = {
**info,
'config_file': str(matches[0].relative_to(self.root)),
'found': True
}
# Check for test directories
test_dirs = []
for pattern in ['tests/', 'test/', '__tests__/', 'spec/']:
if (self.root / pattern).exists():
test_dirs.append(pattern)
return {
'tools_found': tools,
'test_directories': test_dirs,
'count': len(tools),
'recommendation': self._testing_recommendation(tools, test_dirs)
}
def _discover_ci_cd(self) -> Dict:
"""Find CI/CD configurations"""
tools = {}
ci_patterns = {
'.github/workflows': {'tool': 'GitHub Actions', 'platform': 'GitHub', 'purpose': 'CI/CD'},
'.gitlab-ci.yml': {'tool': 'GitLab CI', 'platform': 'GitLab', 'purpose': 'CI/CD'},
'.circleci/config.yml': {'tool': 'CircleCI', 'platform': 'CircleCI', 'purpose': 'CI/CD'},
'Jenkinsfile': {'tool': 'Jenkins', 'platform': 'Jenkins', 'purpose': 'CI/CD'},
'.travis.yml': {'tool': 'Travis CI', 'platform': 'Travis', 'purpose': 'CI/CD'},
'azure-pipelines.yml': {'tool': 'Azure Pipelines', 'platform': 'Azure', 'purpose': 'CI/CD'},
'.drone.yml': {'tool': 'Drone CI', 'platform': 'Drone', 'purpose': 'CI/CD'},
}
for pattern, info in ci_patterns.items():
path = self.root / pattern
if path.exists():
tools[info['tool']] = {
**info,
'config': str(Path(pattern)),
'found': True
}
return {
'tools_found': tools,
'count': len(tools),
'recommendation': self._ci_cd_recommendation(tools)
}
def _discover_git_hooks(self) -> Dict:
"""Find git hooks configuration"""
tools = {}
hook_patterns = {
'.pre-commit-config.yaml': {'tool': 'pre-commit', 'purpose': 'Pre-commit hooks'},
'.husky': {'tool': 'Husky', 'purpose': 'Git hooks (Node.js)'},
'.git/hooks': {'tool': 'Native Git hooks', 'purpose': 'Git hooks'},
'lefthook.yml': {'tool': 'Lefthook', 'purpose': 'Git hooks'},
}
for pattern, info in hook_patterns.items():
path = self.root / pattern
if path.exists():
tools[info['tool']] = {
**info,
'location': str(Path(pattern)),
'found': True
}
return {
'tools_found': tools,
'count': len(tools),
'recommendation': self._git_hooks_recommendation(tools)
}
def _discover_formatting(self) -> Dict:
"""Find code formatting tools"""
tools = {}
formatting_patterns = {
'.prettierrc*': {'tool': 'Prettier', 'language': 'JavaScript/TypeScript', 'purpose': 'Code formatting'},
'.editorconfig': {'tool': 'EditorConfig', 'language': 'Universal', 'purpose': 'Editor settings'},
'pyproject.toml': {'tool': 'Black (if configured)', 'language': 'Python', 'purpose': 'Code formatting'},
'.php-cs-fixer.php': {'tool': 'PHP-CS-Fixer', 'language': 'PHP', 'purpose': 'Code formatting'},
}
for pattern, info in formatting_patterns.items():
matches = list(self.root.glob(pattern))
if matches:
tools[info['tool']] = {
**info,
'config_file': str(matches[0].relative_to(self.root)),
'found': True
}
return {
'tools_found': tools,
'count': len(tools),
'recommendation': self._formatting_recommendation(tools)
}
def _discover_security(self) -> Dict:
"""Find security scanning tools"""
tools = {}
# Check for dependency scanning
if (self.root / 'package.json').exists():
tools['npm audit'] = {
'tool': 'npm audit',
'platform': 'Node.js',
'purpose': 'Dependency scanning',
'found': True
}
if (self.root / 'Pipfile').exists():
tools['pipenv check'] = {
'tool': 'pipenv check',
'platform': 'Python',
'purpose': 'Dependency scanning',
'found': True
}
# Check for security configs
security_patterns = {
'.snyk': {'tool': 'Snyk', 'purpose': 'Security scanning'},
'sonar-project.properties': {'tool': 'SonarQube', 'purpose': 'Code quality & security'},
}
for pattern, info in security_patterns.items():
if (self.root / pattern).exists():
tools[info['tool']] = {
**info,
'config': pattern,
'found': True
}
return {
'tools_found': tools,
'count': len(tools),
'recommendation': self._security_recommendation(tools)
}
def _discover_documentation(self) -> Dict:
"""Find documentation tools"""
tools = {}
doc_patterns = {
'mkdocs.yml': {'tool': 'MkDocs', 'purpose': 'Documentation site'},
'docusaurus.config.js': {'tool': 'Docusaurus', 'purpose': 'Documentation site'},
'conf.py': {'tool': 'Sphinx', 'purpose': 'Documentation (Python)'},
'jsdoc.json': {'tool': 'JSDoc', 'purpose': 'JavaScript documentation'},
'.readthedocs.yml': {'tool': 'ReadTheDocs', 'purpose': 'Documentation hosting'},
}
for pattern, info in doc_patterns.items():
if (self.root / pattern).exists():
tools[info['tool']] = {
**info,
'config': pattern,
'found': True
}
return {
'tools_found': tools,
'count': len(tools),
'recommendation': self._documentation_recommendation(tools)
}
def _generate_summary(self) -> Dict:
"""Generate overall summary"""
all_discoveries = [
self._discover_linting(),
self._discover_testing(),
self._discover_ci_cd(),
self._discover_git_hooks(),
self._discover_formatting(),
self._discover_security(),
self._discover_documentation(),
]
total_tools = sum(d['count'] for d in all_discoveries)
maturity_level = "minimal"
if total_tools >= 10:
maturity_level = "comprehensive"
elif total_tools >= 5:
maturity_level = "moderate"
return {
'total_tools_found': total_tools,
'maturity_level': maturity_level,
'gaps': self._identify_gaps(all_discoveries)
}
def _identify_gaps(self, discoveries: List[Dict]) -> List[str]:
"""Identify missing automation"""
gaps = []
# Check for common gaps
linting = discoveries[0]
testing = discoveries[1]
ci_cd = discoveries[2]
security = discoveries[5]
if linting['count'] == 0:
gaps.append('No linting tools configured')
if testing['count'] == 0:
gaps.append('No testing framework configured')
if ci_cd['count'] == 0:
gaps.append('No CI/CD pipeline configured')
if security['count'] == 0:
gaps.append('No security scanning tools')
return gaps
# Recommendation methods
def _linting_recommendation(self, tools: Dict) -> str:
if not tools:
return "ADD: Set up linting (ESLint for JS/TS, Pylint for Python)"
return "ENHANCE: Extend existing linting rules"
def _testing_recommendation(self, tools: Dict, test_dirs: List) -> str:
if not tools and not test_dirs:
return "ADD: Set up testing framework (Jest, Pytest, etc.)"
if tools and not test_dirs:
return "ADD: Create test directories and write tests"
return "ENHANCE: Increase test coverage"
def _ci_cd_recommendation(self, tools: Dict) -> str:
if not tools:
return "ADD: Set up CI/CD (GitHub Actions, GitLab CI, etc.)"
return "ENHANCE: Add more checks to existing CI/CD"
def _git_hooks_recommendation(self, tools: Dict) -> str:
if not tools:
return "ADD: Set up pre-commit hooks for quality checks"
return "ENHANCE: Add more hooks (pre-push, commit-msg, etc.)"
def _formatting_recommendation(self, tools: Dict) -> str:
if not tools:
return "ADD: Set up code formatting (Prettier, Black, etc.)"
return "OK: Formatting tools in place"
def _security_recommendation(self, tools: Dict) -> str:
if not tools:
return "ADD: Set up security scanning (critical gap!)"
return "ENHANCE: Add more security tools (SAST, dependency scanning)"
def _documentation_recommendation(self, tools: Dict) -> str:
if not tools:
return "ADD: Set up documentation generation"
return "OK: Documentation tools in place"
def generate_report(self) -> Dict:
"""Generate complete discovery report"""
return self.discover_all()
if __name__ == '__main__':
import sys
path = sys.argv[1] if len(sys.argv) > 1 else '.'
discoverer = ExistingToolDiscovery(path)
report = discoverer.generate_report()
print(json.dumps(report, indent=2))
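# Example invocation (script name and path are illustrative):
#   python existing_tool_discovery.py /path/to/project | jq '.summary.gaps'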

File diff suppressed because it is too large

View File

@@ -0,0 +1,451 @@
#!/usr/bin/env python3
"""
Coordinator Generator Script
Creates the orchestrator agent that manages multi-agent workflows
"""
import argparse
from pathlib import Path
def generate_coordinator(session_id: str, agents: list, output_path: str) -> None:
"""Generate coordinator agent"""
agent_list = ', '.join(agents)
content = f'''---
name: automation-coordinator
description: Orchestrates multi-agent automation workflow. Manages agent execution, synthesizes findings, and generates final automation system.
tools: Task, Read, Write, Bash, Grep, Glob
color: White
model: sonnet
---
# Automation Coordinator
You are the Automation Coordinator, responsible for orchestrating a multi-agent workflow to create a comprehensive automation system.
## Your Role
As coordinator, you:
1. Launch specialized agents in the correct order
2. Monitor their progress
3. Read and synthesize their reports
4. Make final decisions on what to generate
5. Create the automation artifacts
6. Validate everything works
7. Document the system
## Communication Setup
**Session ID**: `{session_id}`
**Context Directory**: `.claude/agents/context/{session_id}/`
**Your Agents**: {agent_list}
## Execution Workflow
### Phase 1: Launch Analysis Agents (Parallel)
Launch these agents **in parallel** using the Task tool:
{chr(10).join([f'- {agent}' for agent in agents if 'analyzer' in agent])}
```bash
# Example of parallel launch
"Launch the following agents in parallel:
- security-analyzer
- performance-analyzer
- code-quality-analyzer
- dependency-analyzer
- documentation-analyzer
Use the Task tool to run each agent concurrently."
```
### Phase 2: Monitor Progress
While agents work, monitor their status:
```bash
# Watch coordination file
watch -n 2 'cat .claude/agents/context/{session_id}/coordination.json | jq ".agents"'
# Or check manually
cat .claude/agents/context/{session_id}/coordination.json | jq '.agents | to_entries | map({{name: .key, status: .value.status}})'
# Follow message log for real-time updates
tail -f .claude/agents/context/{session_id}/messages.jsonl
```
### Phase 3: Synthesize Findings
Once all analysis agents complete, read their reports:
```bash
# Read all reports
for report in .claude/agents/context/{session_id}/reports/*-analyzer.json; do
echo "=== $(basename $report) ==="
cat "$report" | jq '.summary, .findings | length'
done
# Aggregate key metrics
cat .claude/agents/context/{session_id}/reports/*.json | jq -s '
{{
total_findings: map(.findings | length) | add,
high_severity: map(.findings[] | select(.severity == "high")) | length,
automation_opportunities: map(.recommendations_for_automation) | flatten | length
}}
'
```
### Phase 4: Make Decisions
Based on synthesis, decide what to generate:
**Decision Framework:**
1. **Skills**: Generate if multiple findings suggest a reusable pattern
- Example: If security-analyzer finds repeated auth issues → generate "secure-auth-checker" skill
2. **Commands**: Generate for frequent manual tasks
- Example: If testing issues detected → generate "/test-fix" command
3. **Hooks**: Generate for workflow automation points
- Example: If formatting inconsistencies → generate PostToolUse format hook
4. **MCP Integrations**: Configure for external services needed
- Example: If GitHub integration would help → configure github MCP
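As a rough input to these decisions, tally what the analyzers themselves suggested (a sketch; field names follow the report format used by the analysis agents):
```bash
# Count how many reports suggested each skill
jq -s '[.[] | .automation_suggestions.skills[]?] | group_by(.) | map({{skill: .[0], votes: length}})' \\
  .claude/agents/context/{session_id}/reports/*.json
```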
### Phase 5: Launch Implementation Agents (Parallel)
Based on decisions, launch implementation agents:
```bash
# Launch generators in parallel
"Launch the following implementation agents in parallel:
- skill-generator (to create custom skills)
- command-generator (to create slash commands)
- hook-generator (to create automation hooks)
- mcp-configurator (to set up external integrations)
Each should read the analysis reports and my decision notes."
```
### Phase 6: Monitor Implementation
```bash
# Check implementation progress
cat .claude/agents/context/{session_id}/coordination.json | \\
jq '.agents | to_entries | map(select(.key | endswith("generator") or . == "mcp-configurator"))'
```
### Phase 7: Launch Validation Agents (Sequential)
After implementation, validate:
```bash
# Launch validation sequentially
"Launch integration-tester agent to validate all automation components"
# Wait for completion, then
"Launch documentation-validator agent to ensure everything is documented"
```
### Phase 8: Aggregate & Report
Create final deliverables:
1. **Automation Summary**
```bash
cat > .claude/AUTOMATION_README.md << 'EOF'
# Automation System for [Project Name]
## Generated On
$(date)
## Session ID
{session_id}
## What Was Created
### Analysis Phase
$(cat .claude/agents/context/{session_id}/reports/*-analyzer.json | jq -r '.agent_name + ": " + .summary')
### Generated Artifacts
#### Custom Agents (X)
- **agent-name**: Description and usage
#### Skills (X)
- **skill-name**: What it does and when to use
#### Commands (X)
- **/command**: Purpose and syntax
#### Hooks (X)
- **HookType**: What triggers it
#### MCP Servers (X)
- **server-name**: External service integrated
## Quick Start
1. Test an agent:
```bash
"Use the security-analyzer agent on src/"
```
2. Try a skill:
```bash
"Check code quality using the quality-checker skill"
```
3. Execute a command:
```bash
/test-fix
```
## Full Documentation
See individual agent/skill/command files for details.
## Customization
All generated automation can be customized:
- Edit agents in `.claude/agents/`
- Modify skills in `.claude/skills/`
- Update commands in `.claude/commands/`
- Adjust hooks in `.claude/hooks/`
## Communication Protocol
This automation system uses the Agent Communication Protocol (ACP).
See `.claude/agents/context/{session_id}/` for:
- `coordination.json`: Agent status tracking
- `messages.jsonl`: Event log
- `reports/`: Individual agent reports
- `data/`: Shared data artifacts
## Support
For issues or questions:
1. Review agent reports in `reports/`
2. Check message log in `messages.jsonl`
3. Consult individual documentation
---
*Generated by Meta-Automation Architect*
*Session: {session_id}*
EOF
```
2. **Quick Reference Card**
```bash
cat > .claude/QUICK_REFERENCE.md << 'EOF'
# Quick Reference
## Available Agents
$(ls .claude/agents/*.md | xargs -I {{}} basename {{}} .md | sed 's/^/- /')
## Available Commands
$(ls .claude/commands/*.md | xargs -I {{}} basename {{}} .md | sed 's/^/\\//')
## Available Skills
$(ls .claude/skills/*/SKILL.md | xargs -I {{}} dirname {{}} | xargs basename | sed 's/^/- /')
## Hooks Configured
$(cat .claude/settings.json | jq -r '.hooks | keys[]')
## MCP Servers
$(cat .claude/settings.json | jq -r '.mcpServers | keys[]')
## Usage Examples
### Use an agent:
"Use the [agent-name] agent to [task]"
### Invoke a skill:
"[Natural description that matches skill's description]"
### Execute command:
/[command-name] [args]
### Check hooks:
cat .claude/settings.json | jq '.hooks'
## Session Data
All agent communication is logged in:
`.claude/agents/context/{session_id}/`
Review this directory to understand what happened during generation.
EOF
```
### Phase 9: Final Validation
```bash
# Verify all components exist
echo "Validating generated automation..."
# Check agents
echo "Agents: $(ls .claude/agents/*.md 2>/dev/null | wc -l) files"
# Check skills
echo "Skills: $(find .claude/skills -name 'SKILL.md' 2>/dev/null | wc -l) files"
# Check commands
echo "Commands: $(ls .claude/commands/*.md 2>/dev/null | wc -l) files"
# Check hooks
echo "Hooks: $(ls .claude/hooks/*.py 2>/dev/null | wc -l) files"
# Check settings
echo "Settings updated: $(test -f .claude/settings.json && echo 'YES' || echo 'NO')"
# Test agent communication
echo "Testing agent communication protocol..."
if [ -d ".claude/agents/context/{session_id}" ]; then
echo "✅ Context directory exists"
echo "✅ Reports: $(ls .claude/agents/context/{session_id}/reports/*.json 2>/dev/null | wc -l)"
echo "✅ Messages: $(wc -l < .claude/agents/context/{session_id}/messages.jsonl) events"
fi
```
## Coordination Protocol
### Checking Agent Status
```bash
# Get status of all agents
jq '.agents' .claude/agents/context/{session_id}/coordination.json
# Check specific agent
jq '.agents["security-analyzer"]' .claude/agents/context/{session_id}/coordination.json
# List completed agents
jq '.agents | to_entries | map(select(.value.status == "completed")) | map(.key)' \\
.claude/agents/context/{session_id}/coordination.json
```
### Reading Reports
```bash
# Read a specific report
cat .claude/agents/context/{session_id}/reports/security-analyzer.json | jq
# Get all summaries
jq -r '.summary' .claude/agents/context/{session_id}/reports/*.json
# Find high-severity findings across all reports
jq -s 'map(.findings[]) | map(select(.severity == "high"))' \\
.claude/agents/context/{session_id}/reports/*.json
```
### Monitoring Message Bus
```bash
# Watch live events
tail -f .claude/agents/context/{session_id}/messages.jsonl | jq
# Get events from specific agent
jq 'select(.from == "security-analyzer")' .claude/agents/context/{session_id}/messages.jsonl
# Count events by type
jq -s 'group_by(.type) | map({{type: .[0].type, count: length}})' \\
.claude/agents/context/{session_id}/messages.jsonl
```
## Error Handling
If any agent fails:
1. Check its status in coordination.json
2. Review messages.jsonl for error events
3. Look for partial report in reports/
4. Decide whether to:
- Retry the agent
- Continue without it
- Flag it for manual intervention
```bash
# Check for failed agents
jq '.agents | to_entries | map(select(.value.status == "failed"))' \\
.claude/agents/context/{session_id}/coordination.json
# If agent failed, check its last message
jq 'select(.from == "failed-agent-name") | select(.type == "error")' \\
.claude/agents/context/{session_id}/messages.jsonl | tail -1
```
## Success Criteria
Your coordination is successful when:
✅ All analysis agents completed
✅ Findings were synthesized
✅ Implementation agents generated artifacts
✅ Validation agents confirmed everything works
✅ Documentation is comprehensive
✅ User can immediately use the automation
## Final Report to User
After everything is complete, report to the user:
```markdown
## Automation System Complete! 🎉
### What Was Created
**Analysis Phase:**
- Analyzed security, performance, code quality, dependencies, and documentation
- Found [X] high-priority issues and [Y] optimization opportunities
**Generated Automation:**
- **[N] Custom Agents**: Specialized for your project needs
- **[N] Skills**: Auto-invoked for common patterns
- **[N] Commands**: Quick shortcuts for frequent tasks
- **[N] Hooks**: Workflow automation at key points
- **[N] MCP Integrations**: Connected to external services
### How to Use
1. **Try an agent**: "Use the security-analyzer agent on src/"
2. **Test a command**: /test-fix
3. **Invoke a skill**: Describe a task matching a skill's purpose
### Documentation
- **Main Guide**: `.claude/AUTOMATION_README.md`
- **Quick Reference**: `.claude/QUICK_REFERENCE.md`
- **Session Details**: `.claude/agents/context/{session_id}/`
### Next Steps
1. Review generated automation
2. Customize for your specific needs
3. Run validation tests
4. Start using in your workflow!
All agents communicated successfully through the ACP protocol. Check the session directory for full details on what happened.
```
Remember: You're orchestrating a symphony of specialized agents. Your job is to ensure they work together harmoniously through the communication protocol!
'''
Path(output_path).parent.mkdir(parents=True, exist_ok=True)
Path(output_path).write_text(content)
print(f"Generated coordinator agent at {output_path}")
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='Generate coordinator agent')
parser.add_argument('--session-id', required=True, help='Session ID')
parser.add_argument('--agents', required=True, help='Comma-separated list of agent names')
parser.add_argument('--output', required=True, help='Output file path')
args = parser.parse_args()
agents = [a.strip() for a in args.agents.split(',')]
generate_coordinator(args.session_id, agents, args.output)
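# Example invocation (script name and paths are illustrative):
#   python generate_coordinator.py --session-id auto-20250123 \
#       --agents "security-analyzer,performance-analyzer,code-quality-analyzer" \
#       --output .claude/agents/automation-coordinator.md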

View File

@@ -0,0 +1,388 @@
#!/usr/bin/env python3
"""
Metrics Tracker
Tracks actual time saved vs estimates
Measures real impact of automation
"""
import json
from pathlib import Path
from datetime import datetime
from typing import Dict, List, Optional
class MetricsTracker:
"""Tracks automation effectiveness metrics"""
def __init__(self, session_id: str, storage_path: str = None):
self.session_id = session_id
if storage_path:
self.storage_path = Path(storage_path)
else:
self.storage_path = Path(f".claude/meta-automation/metrics/{session_id}.json")
self.storage_path.parent.mkdir(parents=True, exist_ok=True)
self.metrics = self._load_or_create()
def _load_or_create(self) -> Dict:
"""Load existing metrics or create new"""
if self.storage_path.exists():
try:
with open(self.storage_path, 'r') as f:
return json.load(f)
except:
return self._create_new()
return self._create_new()
def _create_new(self) -> Dict:
"""Create new metrics structure"""
return {
'session_id': self.session_id,
'created_at': datetime.now().isoformat(),
'project_info': {},
'automation_generated': {
'agents': [],
'skills': [],
'commands': [],
'hooks': []
},
'time_tracking': {
'setup_time_minutes': 0,
'estimated_time_saved_hours': 0,
'actual_time_saved_hours': 0,
'accuracy': 0
},
'usage_metrics': {
'skills_run_count': {},
'commands_run_count': {},
'automation_frequency': []
},
'value_metrics': {
'issues_prevented': 0,
'quality_improvements': [],
'deployment_count': 0,
'test_failures_caught': 0
},
'cost_metrics': {
'setup_cost': 0,
'ongoing_cost': 0,
'total_cost': 0
},
'user_feedback': {
'satisfaction_ratings': [],
'comments': [],
'pain_points_resolved': []
}
}
def _save(self):
"""Save metrics to disk"""
with open(self.storage_path, 'w') as f:
json.dump(self.metrics, f, indent=2)
def set_project_info(self, info: Dict):
"""Set project information"""
self.metrics['project_info'] = {
**info,
'recorded_at': datetime.now().isoformat()
}
self._save()
def record_automation_generated(self, category: str, items: List[str]):
"""
Record what automation was generated
Args:
category: 'agents', 'skills', 'commands', 'hooks'
items: List of generated items
"""
if category in self.metrics['automation_generated']:
self.metrics['automation_generated'][category].extend(items)
self._save()
def record_setup_time(self, minutes: int):
"""Record time spent setting up automation"""
self.metrics['time_tracking']['setup_time_minutes'] = minutes
self._save()
def record_estimated_time_saved(self, hours: float):
"""Record estimated time savings"""
self.metrics['time_tracking']['estimated_time_saved_hours'] = hours
self._save()
def record_actual_time_saved(self, hours: float, description: str):
"""
Record actual time saved from automation
Args:
hours: Hours actually saved
description: What was automated
"""
current = self.metrics['time_tracking']['actual_time_saved_hours']
self.metrics['time_tracking']['actual_time_saved_hours'] = current + hours
# Calculate accuracy
estimated = self.metrics['time_tracking']['estimated_time_saved_hours']
if estimated > 0:
actual = self.metrics['time_tracking']['actual_time_saved_hours']
self.metrics['time_tracking']['accuracy'] = round((actual / estimated) * 100, 1)
# Track individual savings
if 'time_savings_breakdown' not in self.metrics:
self.metrics['time_savings_breakdown'] = []
self.metrics['time_savings_breakdown'].append({
'hours_saved': hours,
'description': description,
'recorded_at': datetime.now().isoformat()
})
self._save()
def record_skill_usage(self, skill_name: str):
"""Record that a skill was used"""
if skill_name not in self.metrics['usage_metrics']['skills_run_count']:
self.metrics['usage_metrics']['skills_run_count'][skill_name] = 0
self.metrics['usage_metrics']['skills_run_count'][skill_name] += 1
self._save()
def record_command_usage(self, command_name: str):
"""Record that a command was used"""
if command_name not in self.metrics['usage_metrics']['commands_run_count']:
self.metrics['usage_metrics']['commands_run_count'][command_name] = 0
self.metrics['usage_metrics']['commands_run_count'][command_name] += 1
self._save()
def record_issue_prevented(self, issue_type: str, description: str):
"""Record that automation prevented an issue"""
self.metrics['value_metrics']['issues_prevented'] += 1
if 'prevented_issues' not in self.metrics['value_metrics']:
self.metrics['value_metrics']['prevented_issues'] = []
self.metrics['value_metrics']['prevented_issues'].append({
'type': issue_type,
'description': description,
'prevented_at': datetime.now().isoformat()
})
self._save()
def record_quality_improvement(self, metric: str, before: float, after: float):
"""
Record quality improvement
Args:
metric: What improved (e.g., 'test_coverage', 'build_success_rate')
before: Value before automation
after: Value after automation
"""
improvement = {
'metric': metric,
'before': before,
'after': after,
'improvement_percent': round(((after - before) / before) * 100, 1) if before > 0 else 0,
'recorded_at': datetime.now().isoformat()
}
self.metrics['value_metrics']['quality_improvements'].append(improvement)
self._save()
def record_user_feedback(self, rating: int, comment: str = None):
"""
Record user satisfaction
Args:
rating: 1-5 rating
comment: Optional comment
"""
self.metrics['user_feedback']['satisfaction_ratings'].append({
'rating': rating,
'comment': comment,
'recorded_at': datetime.now().isoformat()
})
self._save()
def get_roi(self) -> Dict:
"""Calculate return on investment"""
setup_time = self.metrics['time_tracking']['setup_time_minutes'] / 60 # hours
actual_saved = self.metrics['time_tracking']['actual_time_saved_hours']
if setup_time == 0:
return {
'roi': 0,
'message': 'No setup time recorded'
}
roi = actual_saved / setup_time
return {
'roi': round(roi, 1),
'setup_hours': round(setup_time, 1),
'saved_hours': round(actual_saved, 1),
'net_gain_hours': round(actual_saved - setup_time, 1),
'break_even_reached': actual_saved > setup_time
}
def get_effectiveness(self) -> Dict:
"""Calculate automation effectiveness"""
generated = self.metrics['automation_generated']
usage = self.metrics['usage_metrics']
total_generated = sum(len(items) for items in generated.values())
total_used = (
len(usage['skills_run_count']) +
len(usage['commands_run_count'])
)
if total_generated == 0:
return {
'effectiveness': 0,
'message': 'No automation generated yet'
}
effectiveness = (total_used / total_generated) * 100
return {
'effectiveness_percent': round(effectiveness, 1),
'total_generated': total_generated,
'total_used': total_used,
'unused': total_generated - total_used
}
def get_summary(self) -> Dict:
"""Get comprehensive metrics summary"""
roi = self.get_roi()
effectiveness = self.get_effectiveness()
avg_satisfaction = 0
if self.metrics['user_feedback']['satisfaction_ratings']:
ratings = [r['rating'] for r in self.metrics['user_feedback']['satisfaction_ratings']]
avg_satisfaction = round(sum(ratings) / len(ratings), 1)
return {
'session_id': self.session_id,
'project': self.metrics['project_info'].get('project_type', 'unknown'),
'automation_generated': {
category: len(items)
for category, items in self.metrics['automation_generated'].items()
},
'time_metrics': {
'setup_time_hours': round(self.metrics['time_tracking']['setup_time_minutes'] / 60, 1),
'estimated_saved_hours': self.metrics['time_tracking']['estimated_time_saved_hours'],
'actual_saved_hours': self.metrics['time_tracking']['actual_time_saved_hours'],
'accuracy': f"{self.metrics['time_tracking']['accuracy']}%",
'net_gain_hours': roi.get('net_gain_hours', 0)
},
'roi': roi,
'effectiveness': effectiveness,
'value': {
'issues_prevented': self.metrics['value_metrics']['issues_prevented'],
'quality_improvements_count': len(self.metrics['value_metrics']['quality_improvements']),
'average_satisfaction': avg_satisfaction
},
'most_used': {
'skills': sorted(
self.metrics['usage_metrics']['skills_run_count'].items(),
key=lambda x: x[1],
reverse=True
)[:3],
'commands': sorted(
self.metrics['usage_metrics']['commands_run_count'].items(),
key=lambda x: x[1],
reverse=True
)[:3]
}
}
def export_report(self) -> str:
"""Export formatted metrics report"""
summary = self.get_summary()
report = f"""
# Automation Metrics Report
**Session:** {summary['session_id']}
**Project Type:** {summary['project']}
## Automation Generated
- **Agents:** {summary['automation_generated']['agents']}
- **Skills:** {summary['automation_generated']['skills']}
- **Commands:** {summary['automation_generated']['commands']}
- **Hooks:** {summary['automation_generated']['hooks']}
## Time Savings
- **Setup Time:** {summary['time_metrics']['setup_time_hours']} hours
- **Estimated Savings:** {summary['time_metrics']['estimated_saved_hours']} hours
- **Actual Savings:** {summary['time_metrics']['actual_saved_hours']} hours
- **Accuracy:** {summary['time_metrics']['accuracy']}
- **Net Gain:** {summary['time_metrics']['net_gain_hours']} hours
## ROI
- **Return on Investment:** {summary['roi']['roi']}x
- **Break-Even:** {'✅ Yes' if summary['roi']['break_even_reached'] else '❌ Not yet'}
## Effectiveness
- **Usage Rate:** {summary['effectiveness']['effectiveness_percent']}%
- **Generated:** {summary['effectiveness']['total_generated']} items
- **Actually Used:** {summary['effectiveness']['total_used']} items
- **Unused:** {summary['effectiveness']['unused']} items
## Value Delivered
- **Issues Prevented:** {summary['value']['issues_prevented']}
- **Quality Improvements:** {summary['value']['quality_improvements_count']}
- **User Satisfaction:** {summary['value']['average_satisfaction']}/5
## Most Used Automation
"""
if summary['most_used']['skills']:
report += "\n**Skills:**\n"
for skill, count in summary['most_used']['skills']:
report += f"- {skill}: {count} times\n"
if summary['most_used']['commands']:
report += "\n**Commands:**\n"
for cmd, count in summary['most_used']['commands']:
report += f"- {cmd}: {count} times\n"
return report
# Example usage
if __name__ == '__main__':
tracker = MetricsTracker('test-session-123')
# Simulate automation setup
tracker.set_project_info({
'project_type': 'programming',
'project_name': 'my-web-app'
})
tracker.record_automation_generated('skills', ['security-scanner', 'test-generator'])
tracker.record_automation_generated('commands', ['/security-check', '/generate-tests'])
tracker.record_setup_time(30) # 30 minutes to set up
tracker.record_estimated_time_saved(50) # Estimated 50 hours saved
# Simulate usage over time
tracker.record_skill_usage('security-scanner')
tracker.record_skill_usage('security-scanner')
tracker.record_skill_usage('test-generator')
tracker.record_command_usage('/security-check')
# Record actual time saved
tracker.record_actual_time_saved(5, 'Security scan caught 3 vulnerabilities before deployment')
tracker.record_actual_time_saved(8, 'Auto-generated 15 test scaffolds')
# Record quality improvements
tracker.record_quality_improvement('test_coverage', 42, 75)
# Record issues prevented
tracker.record_issue_prevented('security', 'SQL injection vulnerability caught')
# User feedback
tracker.record_user_feedback(5, 'This saved me so much time!')
print(tracker.export_report())
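# A generated hook or command could record real usage the same way
# (a sketch; the import path is illustrative):
#   from metrics_tracker import MetricsTracker
#   MetricsTracker('session-id').record_command_usage('/security-check')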

View File

@@ -0,0 +1,304 @@
#!/usr/bin/env python3
"""
Rollback Manager
Allows undoing automation if it's not helpful
Creates backups and can restore to pre-automation state
"""
import json
import shutil
from pathlib import Path
from datetime import datetime
from typing import Dict, List, Optional
class RollbackManager:
"""Manages rollback of automation changes"""
def __init__(self, session_id: str):
self.session_id = session_id
self.backup_dir = Path(f".claude/meta-automation/backups/{session_id}")
self.manifest_path = self.backup_dir / "manifest.json"
self.backup_dir.mkdir(parents=True, exist_ok=True)
def create_backup(self, description: str = "Automation setup") -> str:
"""
Create backup before making changes
Args:
description: What this backup is for
Returns:
Backup ID
"""
backup_id = datetime.now().strftime("%Y%m%d_%H%M%S")
backup_path = self.backup_dir / backup_id
# Create backup manifest
manifest = {
'backup_id': backup_id,
'session_id': self.session_id,
'created_at': datetime.now().isoformat(),
'description': description,
'backed_up_files': [],
'created_files': [], # Files that didn't exist before
'can_rollback': True
}
# Save manifest
with open(self.manifest_path, 'w') as f:
json.dump(manifest, f, indent=2)
return backup_id
def track_file_creation(self, file_path: str):
"""
Track that a file was created by automation
Args:
file_path: Path to file that was created
"""
manifest = self._load_manifest()
if manifest:
if file_path not in manifest['created_files']:
manifest['created_files'].append(file_path)
self._save_manifest(manifest)
def backup_file_before_change(self, file_path: str):
"""
Backup a file before changing it
Args:
file_path: Path to file to backup
"""
manifest = self._load_manifest()
if not manifest:
return
source = Path(file_path)
if not source.exists():
return
# Create backup
backup_id = manifest['backup_id']
backup_path = self.backup_dir / backup_id
backup_path.mkdir(parents=True, exist_ok=True)
# Preserve directory structure in backup
rel_path = source.relative_to(Path.cwd()) if source.is_absolute() else source
dest = backup_path / rel_path
dest.parent.mkdir(parents=True, exist_ok=True)
shutil.copy2(source, dest)
# Track in manifest
if str(rel_path) not in manifest['backed_up_files']:
manifest['backed_up_files'].append(str(rel_path))
self._save_manifest(manifest)
def rollback(self) -> Dict:
"""
Rollback all automation changes
Returns:
Summary of what was rolled back
"""
manifest = self._load_manifest()
if not manifest:
return {
'success': False,
'message': 'No backup found for this session'
}
if not manifest['can_rollback']:
return {
'success': False,
'message': 'Rollback already performed or backup corrupted'
}
files_restored = []
files_deleted = []
errors = []
# Restore backed up files
backup_id = manifest['backup_id']
backup_path = self.backup_dir / backup_id
for file_path in manifest['backed_up_files']:
try:
source = backup_path / file_path
dest = Path(file_path)
if source.exists():
dest.parent.mkdir(parents=True, exist_ok=True)
shutil.copy2(source, dest)
files_restored.append(file_path)
else:
errors.append(f"Backup not found: {file_path}")
except Exception as e:
errors.append(f"Error restoring {file_path}: {str(e)}")
# Delete files that were created
for file_path in manifest['created_files']:
try:
path = Path(file_path)
if path.exists():
path.unlink()
files_deleted.append(file_path)
except Exception as e:
errors.append(f"Error deleting {file_path}: {str(e)}")
# Mark as rolled back
manifest['can_rollback'] = False
manifest['rolled_back_at'] = datetime.now().isoformat()
self._save_manifest(manifest)
return {
'success': len(errors) == 0,
'files_restored': files_restored,
'files_deleted': files_deleted,
'errors': errors,
'summary': f"Restored {len(files_restored)} files, deleted {len(files_deleted)} files"
}
def get_backup_info(self) -> Optional[Dict]:
"""Get information about current backup"""
manifest = self._load_manifest()
if not manifest:
return None
return {
'backup_id': manifest['backup_id'],
'created_at': manifest['created_at'],
'description': manifest['description'],
'backed_up_files_count': len(manifest['backed_up_files']),
'created_files_count': len(manifest['created_files']),
'can_rollback': manifest['can_rollback'],
'total_changes': len(manifest['backed_up_files']) + len(manifest['created_files'])
}
def preview_rollback(self) -> Dict:
"""
Preview what would be rolled back
Returns:
Details of what would happen
"""
manifest = self._load_manifest()
if not manifest:
return {
'can_rollback': False,
'message': 'No backup found'
}
return {
'can_rollback': manifest['can_rollback'],
'will_restore': manifest['backed_up_files'],
'will_delete': manifest['created_files'],
'total_changes': len(manifest['backed_up_files']) + len(manifest['created_files']),
'created_at': manifest['created_at'],
'description': manifest['description']
}
def _load_manifest(self) -> Optional[Dict]:
"""Load backup manifest"""
if not self.manifest_path.exists():
return None
try:
with open(self.manifest_path, 'r') as f:
return json.load(f)
except:
return None
def _save_manifest(self, manifest: Dict):
"""Save backup manifest"""
with open(self.manifest_path, 'w') as f:
json.dump(manifest, f, indent=2)
# Convenience wrapper for use in skills
class AutomationSnapshot:
"""
Context manager for creating automatic backups
Usage:
with AutomationSnapshot(session_id, "Adding security checks") as snapshot:
# Make changes
create_new_file("skill.md")
snapshot.track_creation("skill.md")
modify_file("existing.md")
snapshot.track_modification("existing.md")
# Automatic backup created, can rollback later
"""
def __init__(self, session_id: str, description: str):
self.manager = RollbackManager(session_id)
self.description = description
def __enter__(self):
self.manager.create_backup(self.description)
return self
def __exit__(self, exc_type, exc_val, exc_tb):
# Nothing to do on exit
pass
def track_creation(self, file_path: str):
"""Track file creation"""
self.manager.track_file_creation(file_path)
def track_modification(self, file_path: str):
"""Track file modification (backs up before change)"""
self.manager.backup_file_before_change(file_path)
# Example usage
if __name__ == '__main__':
import tempfile
import os
# Create test files
with tempfile.TemporaryDirectory() as tmpdir:
os.chdir(tmpdir)
# Create some test files
Path("existing.txt").write_text("original content")
manager = RollbackManager("test-session")
print("Creating backup...")
backup_id = manager.create_backup("Test automation setup")
# Simulate automation changes
print("\nMaking changes...")
manager.backup_file_before_change("existing.txt")
Path("existing.txt").write_text("modified content")
Path("new_skill.md").write_text("# New Skill")
manager.track_file_creation("new_skill.md")
Path("new_command.md").write_text("# New Command")
manager.track_file_creation("new_command.md")
# Show backup info
print("\nBackup info:")
info = manager.get_backup_info()
print(json.dumps(info, indent=2))
# Preview rollback
print("\nRollback preview:")
preview = manager.preview_rollback()
print(json.dumps(preview, indent=2))
# Perform rollback
print("\nPerforming rollback...")
result = manager.rollback()
print(json.dumps(result, indent=2))
# Check files
print("\nFiles after rollback:")
print(f"existing.txt exists: {Path('existing.txt').exists()}")
if Path("existing.txt").exists():
print(f"existing.txt content: {Path('existing.txt').read_text()}")
print(f"new_skill.md exists: {Path('new_skill.md').exists()}")
print(f"new_command.md exists: {Path('new_command.md').exists()}")

View File

@@ -0,0 +1,94 @@
#!/usr/bin/env python3
"""
Simple Template Renderer
Renders templates with variable substitution
"""
import re
from pathlib import Path
from typing import Dict, Any
class TemplateRenderer:
"""Simple template renderer using {{variable}} syntax"""
def __init__(self, template_dir: str = "templates"):
self.template_dir = Path(__file__).parent.parent / template_dir
def render(self, template_name: str, context: Dict[str, Any]) -> str:
"""
Render a template with the given context
Args:
template_name: Name of template file (e.g., 'agent-base.md.template')
context: Dictionary of variables to substitute
Returns:
Rendered template string
"""
template_path = self.template_dir / template_name
if not template_path.exists():
raise FileNotFoundError(f"Template not found: {template_path}")
template_content = template_path.read_text(encoding='utf-8')
# Simple variable substitution using {{variable}} syntax
def replace_var(match):
var_name = match.group(1)
value = context.get(var_name, f"{{{{MISSING: {var_name}}}}}")
return str(value)
rendered = re.sub(r'\{\{(\w+)\}\}', replace_var, template_content)
return rendered
def render_to_file(self, template_name: str, context: Dict[str, Any], output_path: str) -> None:
"""
Render template and write to file
Args:
template_name: Name of template file
context: Dictionary of variables
output_path: Where to write rendered output
"""
rendered = self.render(template_name, context)
output = Path(output_path)
output.parent.mkdir(parents=True, exist_ok=True)
output.write_text(rendered, encoding='utf-8')
def list_templates(self) -> list:
"""List available templates"""
if not self.template_dir.exists():
return []
return [
f.name for f in self.template_dir.iterdir()
if f.is_file() and f.suffix == '.template'
]
# Example usage
if __name__ == '__main__':
renderer = TemplateRenderer()
# Example: Render an agent
context = {
'agent_name': 'security-analyzer',
'agent_title': 'Security Analyzer',
'description': 'Analyzes code for security vulnerabilities',
'tools': 'Read, Grep, Glob, Bash',
'color': 'Red',
'model': 'sonnet',
'session_id': 'test-123',
'mission': 'Find security vulnerabilities in the codebase',
'process': '1. Scan for common patterns\n2. Check dependencies\n3. Review auth code'
}
print("Available templates:")
for template in renderer.list_templates():
print(f" - {template}")
print("\nRendering example agent...")
rendered = renderer.render('agent-base.md.template', context)
print("\n" + "="*60)
print(rendered[:500] + "...")
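# Rendered output can also be written straight to its destination
# (a sketch; the output path is illustrative):
#   renderer.render_to_file('agent-base.md.template', context,
#                           '.claude/agents/security-analyzer.md')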

View File

@@ -0,0 +1,307 @@
#!/usr/bin/env python3
"""
User Preference Learning
Learns from user's choices to provide better recommendations over time
"""
import json
from pathlib import Path
from datetime import datetime
from typing import Dict, List, Optional
from collections import defaultdict
class UserPreferences:
"""Learns and stores user preferences for automation"""
def __init__(self, storage_path: str = ".claude/meta-automation/user_preferences.json"):
self.storage_path = Path(storage_path)
self.storage_path.parent.mkdir(parents=True, exist_ok=True)
self.preferences = self._load()
def _load(self) -> Dict:
"""Load existing preferences or create new"""
if self.storage_path.exists():
try:
with open(self.storage_path, 'r') as f:
return json.load(f)
except:
return self._create_new()
return self._create_new()
def _create_new(self) -> Dict:
"""Create new preferences structure"""
return {
'version': '1.0',
'created_at': datetime.now().isoformat(),
'projects_analyzed': 0,
'automation_mode_preferences': {
'quick': 0,
'focused': 0,
'comprehensive': 0
},
'agent_usage': {},
'skill_usage': {},
'project_type_history': {},
'time_saved_total': 0,
'cost_spent_total': 0,
'satisfaction_ratings': [],
'most_valuable_automations': [],
'rarely_used': [],
'integration_preferences': {
'focus_on_gaps': 0,
'enhance_existing': 0,
'independent': 0
},
'sessions': []
}
def _save(self):
"""Save preferences to disk"""
with open(self.storage_path, 'w') as f:
json.dump(self.preferences, f, indent=2)
def record_session(self, session_data: Dict):
"""
Record a new automation session
Args:
session_data: {
'session_id': str,
'project_type': str,
'mode': 'quick|focused|comprehensive',
'agents_used': List[str],
'skills_generated': List[str],
'time_spent_minutes': int,
'cost': float,
'time_saved_estimate': int, # hours
'user_satisfaction': int, # 1-5
'integration_choice': str, # gaps|enhance|independent
}
"""
# Update counts
self.preferences['projects_analyzed'] += 1
# Record mode preference
mode = session_data.get('mode', 'quick')
self.preferences['automation_mode_preferences'][mode] += 1
# Record agent usage
for agent in session_data.get('agents_used', []):
if agent not in self.preferences['agent_usage']:
self.preferences['agent_usage'][agent] = 0
self.preferences['agent_usage'][agent] += 1
# Record skill usage
for skill in session_data.get('skills_generated', []):
if skill not in self.preferences['skill_usage']:
self.preferences['skill_usage'][skill] = 0
self.preferences['skill_usage'][skill] += 1
# Record project type
project_type = session_data.get('project_type', 'unknown')
if project_type not in self.preferences['project_type_history']:
self.preferences['project_type_history'][project_type] = 0
self.preferences['project_type_history'][project_type] += 1
# Track totals
self.preferences['time_saved_total'] += session_data.get('time_saved_estimate', 0)
self.preferences['cost_spent_total'] += session_data.get('cost', 0)
# Track satisfaction
satisfaction = session_data.get('user_satisfaction')
if satisfaction:
self.preferences['satisfaction_ratings'].append({
'session_id': session_data.get('session_id'),
'rating': satisfaction,
'date': datetime.now().isoformat()
})
# Track integration preference
integration = session_data.get('integration_choice')
if integration in self.preferences['integration_preferences']:
self.preferences['integration_preferences'][integration] += 1
# Store full session
self.preferences['sessions'].append({
**session_data,
'recorded_at': datetime.now().isoformat()
})
self._save()
def get_recommended_mode(self) -> str:
"""Get recommended automation mode based on history"""
prefs = self.preferences['automation_mode_preferences']
if self.preferences['projects_analyzed'] == 0:
return 'quick' # Default for first-time users
# Return mode user uses most
return max(prefs.items(), key=lambda x: x[1])[0]
def get_recommended_agents(self, project_type: str, count: int = 5) -> List[str]:
"""Get recommended agents based on past usage and project type"""
# Get agents user has used
agent_usage = self.preferences['agent_usage']
if not agent_usage:
# Default recommendations for new users
defaults = {
'programming': ['project-analyzer', 'security-analyzer', 'test-coverage-analyzer'],
'academic_writing': ['project-analyzer', 'latex-structure-analyzer', 'citation-analyzer'],
'educational': ['project-analyzer', 'learning-path-analyzer', 'assessment-analyzer'],
}
return defaults.get(project_type, ['project-analyzer'])
# Sort by usage count
sorted_agents = sorted(agent_usage.items(), key=lambda x: x[1], reverse=True)
return [agent for agent, _ in sorted_agents[:count]]
def get_rarely_used(self) -> List[str]:
"""Get agents/skills that user never finds valuable"""
rarely_used = []
# Check for agents used only once or twice
for agent, count in self.preferences['agent_usage'].items():
if count <= 2 and self.preferences['projects_analyzed'] > 5:
rarely_used.append(agent)
return rarely_used
def should_skip_agent(self, agent_name: str) -> bool:
"""Check if this agent is rarely useful for this user"""
rarely_used = self.get_rarely_used()
return agent_name in rarely_used
def get_integration_preference(self) -> str:
"""Get preferred integration approach"""
prefs = self.preferences['integration_preferences']
if sum(prefs.values()) == 0:
return 'focus_on_gaps' # Default
return max(prefs.items(), key=lambda x: x[1])[0]
def get_statistics(self) -> Dict:
"""Get usage statistics"""
total_sessions = self.preferences['projects_analyzed']
if total_sessions == 0:
return {
'total_sessions': 0,
'message': 'No automation sessions yet'
}
avg_satisfaction = 0
if self.preferences['satisfaction_ratings']:
avg_satisfaction = sum(r['rating'] for r in self.preferences['satisfaction_ratings']) / len(self.preferences['satisfaction_ratings'])
return {
'total_sessions': total_sessions,
'time_saved_total_hours': self.preferences['time_saved_total'],
'cost_spent_total': round(self.preferences['cost_spent_total'], 2),
'average_satisfaction': round(avg_satisfaction, 1),
'preferred_mode': self.get_recommended_mode(),
'most_used_agents': sorted(
self.preferences['agent_usage'].items(),
key=lambda x: x[1],
reverse=True
)[:5],
'project_types': self.preferences['project_type_history'],
'roi': round(self.preferences['time_saved_total'] / max(1, self.preferences['cost_spent_total'] * 60), 1)
}
def get_recommendations_for_user(self, project_type: str) -> Dict:
"""Get personalized recommendations"""
stats = self.get_statistics()
if stats['total_sessions'] == 0:
return {
'recommended_mode': 'quick',
'reason': 'First time - start with quick analysis to see how it works',
'recommended_agents': ['project-analyzer'],
'skip_agents': []
}
return {
'recommended_mode': self.get_recommended_mode(),
'reason': f"You've used {self.get_recommended_mode()} mode {self.preferences['automation_mode_preferences'][self.get_recommended_mode()]} times",
'recommended_agents': self.get_recommended_agents(project_type),
'skip_agents': self.get_rarely_used(),
'integration_preference': self.get_integration_preference(),
'stats': {
'total_time_saved': f"{stats['time_saved_total_hours']} hours",
'average_satisfaction': stats.get('average_satisfaction', 0),
'roi': f"{stats.get('roi', 0)}x return on investment"
}
}
def export_report(self) -> str:
"""Export usage report"""
stats = self.get_statistics()
report = f"""
# Meta-Automation Usage Report
## Overview
- **Total Sessions:** {stats['total_sessions']}
- **Time Saved:** {stats.get('time_saved_total_hours', 0)} hours
- **Cost Spent:** ${stats.get('cost_spent_total', 0):.2f}
- **ROI:** {stats.get('roi', 0)}x (hours saved per dollar spent × 60)
- **Avg Satisfaction:** {stats.get('average_satisfaction', 0)}/5
## Your Preferences
- **Preferred Mode:** {stats.get('preferred_mode', 'quick')}
- **Integration Style:** {self.get_integration_preference()}
## Most Used Agents
"""
for agent, count in stats.get('most_used_agents', []):
report += f"- {agent}: {count} times\n"
report += "\n## Project Types\n"
for ptype, count in self.preferences['project_type_history'].items():
report += f"- {ptype}: {count} projects\n"
return report
# Example usage
if __name__ == '__main__':
prefs = UserPreferences()
# Simulate some sessions
print("Simulating usage...\n")
prefs.record_session({
'session_id': 'session-1',
'project_type': 'programming',
'mode': 'quick',
'agents_used': ['project-analyzer'],
'skills_generated': [],
'time_spent_minutes': 5,
'cost': 0.03,
'time_saved_estimate': 10,
'user_satisfaction': 4,
'integration_choice': 'focus_on_gaps'
})
prefs.record_session({
'session_id': 'session-2',
'project_type': 'programming',
'mode': 'focused',
'agents_used': ['project-analyzer', 'security-analyzer', 'test-coverage-analyzer'],
'skills_generated': ['security-scanner', 'test-generator'],
'time_spent_minutes': 8,
'cost': 0.09,
'time_saved_estimate': 50,
'user_satisfaction': 5,
'integration_choice': 'focus_on_gaps'
})
print(prefs.export_report())
print("\nRecommendations for next programming project:")
recs = prefs.get_recommendations_for_user('programming')
print(json.dumps(recs, indent=2))
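# At the start of a new session, the skill can consult stored preferences
# before proposing a plan:
#   recs = UserPreferences().get_recommendations_for_user('programming')
#   print(recs['recommended_mode'], recs['skip_agents'])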

View File

@@ -0,0 +1,125 @@
---
name: {{agent_name}}
description: {{description}}
tools: {{tools}}
color: {{color}}
model: {{model}}
---
# {{agent_title}}
You are a specialized {{agent_name}} in a multi-agent automation system.
## Communication Protocol
**Session ID**: `{{session_id}}`
**Context Directory**: `.claude/agents/context/{{session_id}}/`
### Before You Start
1. **Check Dependencies**: Read `coordination.json` to see if prerequisite agents have finished
2. **Understand Context**: Read existing reports from other agents
3. **Check Messages**: Review `messages.jsonl` for important events
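For example, a quick pre-flight check might look like this (assuming `jq` is available):
```bash
# Which agents have finished, and what happened recently?
jq '.agents | to_entries | map({name: .key, status: .value.status})' \
  .claude/agents/context/{{session_id}}/coordination.json
tail -5 .claude/agents/context/{{session_id}}/messages.jsonl
```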
### Your Mission
{{mission}}
### Analysis Process
{{process}}
### Reporting
When you complete your analysis, write a comprehensive report to:
**File**: `.claude/agents/context/{{session_id}}/reports/{{agent_name}}.json`
**Format**:
```json
{
"agent_name": "{{agent_name}}",
"session_id": "{{session_id}}",
"status": "completed",
"timestamp": "2025-01-23T10:30:00Z",
"summary": "One-line summary of key findings",
"findings": [
{
"type": "issue|opportunity|observation",
"severity": "high|medium|low",
"category": "security|performance|quality|etc",
"title": "Short title",
"description": "Detailed description",
"location": "file.py:123 or directory/",
"evidence": "What you found",
"recommendation": "What to do about it",
"time_impact": "Estimated time cost or savings",
"effort": "low|medium|high"
}
],
"metrics": {
"files_analyzed": 0,
"issues_found": 0,
"opportunities_identified": 0
},
"recommendations": [
"Prioritized list of actions"
],
"automation_suggestions": {
"skills": ["skill-name"],
"commands": ["command-name"],
"hooks": ["hook-name"]
}
}
```
### Update Coordination
After writing your report, update the coordination file:
```bash
# Mark this agent as completed (minimal sketch using jq; adjust to your coordination schema)
COORD_FILE=".claude/agents/context/{{session_id}}/coordination.json"
tmp=$(mktemp)
jq --arg a "{{agent_name}}" '.agents[$a].status = "completed"' "$COORD_FILE" > "$tmp" && mv "$tmp" "$COORD_FILE"
```
### Log Events
Log important events to the message bus:
```bash
echo '{"timestamp":"'$(date -u +%Y-%m-%dT%H:%M:%SZ)'","agent":"{{agent_name}}","event":"analysis_complete","details":"'$SUMMARY'"}' >> .claude/agents/context/{{session_id}}/messages.jsonl
```
## Tools Available
You have access to: {{tools}}
**How to use them:**
- **Read**: Read files to understand code
- **Grep**: Search for patterns across codebase
- **Glob**: Find files matching patterns
- **Bash**: Run commands for analysis
## Success Criteria
✅ Comprehensive analysis completed
✅ All findings documented with evidence
✅ Recommendations are actionable
✅ Report written to correct location
✅ Coordination file updated
✅ Events logged
## Important Notes
- Focus on **actionable** findings, not theoretical issues
- Provide **specific evidence** (file locations, line numbers)
- Estimate **real impact** (time saved, quality improvement)
- Recommend **automation** where appropriate
- Be **honest about confidence** - if unsure, say so
---
Now begin your analysis. Good luck!

View File

@@ -0,0 +1,42 @@
# {{command_name}} Command
{{command_description}}
## Usage
```bash
/{{command_name}} # {{usage_default}}
/{{command_name}} {{args}} # {{usage_with_args}}
{{additional_usage}}
```
## What It Does
{{what_it_does}}
## When to Use
{{when_to_use}}
## Example
```bash
$ /{{command_name}} {{example_args}}
{{example_output}}
```
## Options
{{options}}
## Related
- **Skills**: {{related_skills}}
- **Agents**: {{related_agents}}
- **Commands**: {{related_commands}}
---
**Generated by meta-automation-architect**
**Session**: {{session_id}}

View File

@@ -0,0 +1,319 @@
---
name: project-analyzer
description: Intelligently analyzes projects to identify type, pain points, and automation opportunities
tools: Read, Glob, Grep, Bash, AskUserQuestion
color: Cyan
model: sonnet
---
# Project Analyzer
You are an intelligent project analyzer. Your mission is to deeply understand a project and identify the best automation opportunities.
## Mission
Analyze projects with **intelligence and context**, not just pattern matching. You should:
1. **Understand the project type** - Look beyond file counts to understand purpose and goals
2. **Identify REAL pain points** - What actually slows the team down?
3. **Recommend high-value automation** - What saves the most time?
4. **Respect existing tools** - Don't duplicate what already exists
5. **Ask clarifying questions** - Don't guess, ask the user!
## Analysis Process
### Phase 1: Quick Structural Scan (2 minutes)
Use the provided metrics from the basic scan to get oriented:
```json
{
"file_counts": { "code": 45, "document": 12, "markup": 8 },
"directories": ["src/", "tests/", "docs/"],
"key_files": ["package.json", "README.md", ".eslintrc"],
"total_files": 127,
"project_size_mb": 5.2
}
```
### Phase 2: Intelligent Context Gathering (5 minutes)
**Read key files to understand context:**
1. **README.md** - What is this project? What does it do?
2. **Package/dependency files** - What technology stack?
3. **Main entry point** - How is it structured?
4. **Existing configs** - What tools are already in use?
**Look for signals:**
- LaTeX (.tex, .bib) → Academic writing
- Sequential lessons/ → Educational content
- sprints/, milestones/ → Project management
- High .md + internal links → Knowledge base
- src/ + tests/ → Programming project
**Check for existing automation:**
- `.github/workflows/` → Already has CI/CD
- `.pre-commit-config.yaml` → Already has pre-commit hooks
- `.eslintrc*` → Already has linting
- `jest.config.js` → Already has testing
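A quick pass over the filesystem can surface most of these signals (a sketch; file names are illustrative, not exhaustive):
```bash
# Existing automation signals
ls .github/workflows .pre-commit-config.yaml .eslintrc* jest.config.* 2>/dev/null

# Project-type signals
find . -name '*.tex' -o -name '*.bib' | head -5     # academic writing?
find . -name '*.md' | wc -l                          # documentation-heavy?
```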
### Phase 3: Identify Pain Points (3 minutes)
**Scan for common issues:**
For **programming projects:**
- Low test coverage? (count test files vs source files)
- Missing documentation?
- Security vulnerabilities?
- No CI/CD setup?
For **LaTeX projects:**
- Broken cross-references? (search for \\ref, \\label)
- Unused bibliography entries? (parse .bib, search for \\cite)
- Manual compilation?
For **Markdown/documentation:**
- Broken links? (check [[links]] and [](links))
- Inconsistent formatting?
- Orphaned pages?
For **project management:**
- Manual status reporting?
- Resource tracking gaps?
- No timeline validation?
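A few lightweight checks can quantify these pain points before asking the user (a sketch; adapt paths and patterns to the project):
```bash
# Programming: rough test-to-source ratio
SRC=$(find src -type f 2>/dev/null | wc -l)
TESTS=$(find tests test __tests__ -type f 2>/dev/null | wc -l)
echo "source files: $SRC, test files: $TESTS"

# LaTeX: compare \ref and \label counts as a rough broken-reference signal
grep -rc '\\ref{' --include='*.tex' . 2>/dev/null | head -5
grep -rc '\\label{' --include='*.tex' . 2>/dev/null | head -5
```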
### Phase 4: Ask User Questions (Interactive)
**Don't guess - ask!** Use AskUserQuestion to clarify:
1. **Confirm project type:**
```
"I believe this is a [type] project. Is that correct?
- If hybrid, explain both aspects"
```
2. **Identify main pain points:**
```
"What are your main pain points with this project?
Based on my analysis, I recommend focusing on:
⭐ [Issue 1] - Could save X hours
⭐ [Issue 2] - Could improve quality by Y%
But please tell me what's actually slowing you down."
```
3. **Determine automation depth:**
```
"How much automation do you want?
a) Quick analysis only (2-3 agents, 5 min, see what we find)
b) Focused automation (address specific pain points)
c) Comprehensive system (full agent suite, skills, commands, hooks)
I recommend option (a) to start - we can always expand."
```
4. **Check existing workflow:**
```
"I see you already have [existing tools]. Should I:
a) Focus on gaps in your current setup (RECOMMENDED)
b) Enhance your existing tools
c) Create independent parallel automation"
```
### Phase 5: Generate Analysis Report
**Write comprehensive analysis to shared context:**
Create `.claude/agents/context/{session_id}/project-analysis.json`:
```json
{
"analyst": "project-analyzer",
"timestamp": "2025-01-23T10:30:00Z",
"project_type": {
"primary": "programming",
"secondary": ["documentation"],
"confidence": 85,
"reasoning": "Node.js/TypeScript web app with extensive markdown docs"
},
"technology_stack": {
"languages": ["TypeScript", "JavaScript"],
"frameworks": ["Next.js", "React"],
"tools": ["ESLint", "Jest", "GitHub Actions"]
},
"existing_automation": {
"has_linting": true,
"has_testing": true,
"has_ci_cd": true,
"has_pre_commit": false,
"gaps": ["Security scanning", "Test coverage enforcement", "Documentation validation"]
},
"pain_points": [
{
"category": "security",
"severity": "high",
"description": "No automated security scanning",
"evidence": "No security tools configured, sensitive dependencies found",
"time_cost": "Security reviews take 2 hours/sprint",
"recommendation": "Add security-analyzer agent"
},
{
"category": "testing",
"severity": "medium",
"description": "Low test coverage (42%)",
"evidence": "45 source files, 19 test files",
"time_cost": "Manual testing takes 3 hours/release",
"recommendation": "Add test-coverage-analyzer and auto-generate test scaffolds"
}
],
"automation_opportunities": [
{
"priority": "high",
"category": "security",
"automation": "Automated security scanning in CI",
"time_saved": "2 hours/sprint (26 hours/quarter)",
"quality_impact": "Catch vulnerabilities before production",
"agents_needed": ["security-analyzer"],
"skills_needed": ["security-scanner"],
"effort": "Low (integrates with existing CI)"
},
{
"priority": "high",
"category": "testing",
"automation": "Test coverage enforcement and scaffolding",
"time_saved": "3 hours/release (24 hours/quarter)",
"quality_impact": "42% → 80% coverage",
"agents_needed": ["test-coverage-analyzer"],
"skills_needed": ["test-generator"],
"effort": "Medium (requires test writing)"
}
],
"user_preferences": {
"automation_mode": "quick_analysis_first",
"selected_pain_points": ["security", "testing"],
"wants_interactive": true
},
"recommendations": {
"immediate": [
"Run security-analyzer and test-coverage-analyzer (10 min)",
"Review findings before generating full automation"
],
"short_term": [
"Set up security scanning in CI",
"Generate test scaffolds for uncovered code"
],
"long_term": [
"Achieve 80% test coverage",
"Automated dependency updates with security checks"
]
},
"estimated_impact": {
"time_saved_per_quarter": "50 hours",
"quality_improvement": "Catch security issues pre-production, 80% test coverage",
"cost": "~$0.10 per analysis run, minimal ongoing cost"
}
}
```
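A minimal sketch of persisting this report from the shell follows. It assumes the session id is available as `$SESSION_ID`, which is an assumption about how the orchestrating workflow invokes this agent, and the JSON payload is abbreviated.
```bash
# Sketch: persist the analysis so downstream agents can read it.
# SESSION_ID is assumed to be supplied by the orchestrating workflow.
mkdir -p ".claude/agents/context/${SESSION_ID}"
cat > ".claude/agents/context/${SESSION_ID}/project-analysis.json" <<'EOF'
{
  "analyst": "project-analyzer",
  "project_type": { "primary": "programming", "confidence": 85 }
}
EOF
```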
## Key Principles
1. **Intelligence over pattern matching** - Understand context, don't just count files
2. **Ask, don't guess** - Use AskUserQuestion liberally
3. **Recommend, don't dictate** - Provide options with clear trade-offs
4. **Respect existing tools** - Integrate, don't duplicate
5. **Start simple** - Quick analysis first, full automation on request
6. **Be transparent** - Show time/cost estimates
7. **Focus on value** - Prioritize high-impact automations
## Output Format
Your final response should be:
```markdown
# Project Analysis Complete
## 📊 Project Type
[Primary type] with [secondary aspects]
**Confidence:** [X]%
**Reasoning:** [Why you classified it this way]
## 🔧 Technology Stack
- **Languages:** [list]
- **Frameworks:** [list]
- **Tools:** [list]
## ✅ Existing Automation
You already have:
- [Tool 1] - [What it does]
- [Tool 2] - [What it does]
**Gaps identified:** [list]
## ⚠️ Pain Points (Prioritized)
### 🔴 High Priority
1. **[Issue name]** - [Description]
- **Impact:** [Time cost or quality issue]
- **Fix:** [Recommended automation]
- **Effort:** [Low/Medium/High]
### 🟡 Medium Priority
[Same format]
## 💡 Automation Recommendations
I recommend starting with **quick analysis mode**:
- Launch 2-3 agents to validate these findings (5-10 min)
- Review detailed reports
- Then decide on full automation
**High-value opportunities:**
1. [Automation 1] - Saves [X] hours/[period]
2. [Automation 2] - Improves [metric] by [Y]%
**Estimated total impact:** [Time saved], [Quality improvement]
## 🎯 Next Steps
**Option A: Quick Analysis** (RECOMMENDED)
- Run these agents: [list]
- Time: ~10 minutes
- Cost: ~$0.05
- See findings, then decide next steps
**Option B: Full Automation**
- Generate complete system now
- Time: ~30 minutes
- Cost: ~$0.15
- Immediate comprehensive automation
**Option C: Custom**
- You tell me what you want to focus on
- I'll create targeted automation
What would you like to do?
---
Analysis saved to: `.claude/agents/context/{session_id}/project-analysis.json`
```
## Important Notes
- **Always use AskUserQuestion** - Don't make assumptions
- **Read actual files** - Don't rely only on metrics
- **Provide reasoning** - Explain why you classified the project this way
- **Show trade-offs** - Quick vs comprehensive, time vs value
- **Be honest about confidence** - If uncertain, say so and ask
- **Focus on value** - Recommend what saves the most time or improves quality most
## Success Criteria
✅ User understands their project type
✅ User knows what their main pain points are
✅ User sees clear automation recommendations with value estimates
✅ User can choose their level of automation
✅ Analysis is saved for other agents to use

View File

@@ -0,0 +1,53 @@
---
name: {{skill_name}}
description: {{description}}
allowed-tools: [{{tools}}]
---
# {{skill_title}}
{{skill_description}}
## When to Use This Skill
{{when_to_use}}
## How It Works
{{how_it_works}}
## Usage
```bash
# Basic usage
{{usage_basic}}
# With options
{{usage_advanced}}
```
## Example
{{example}}
## Implementation Details
{{implementation_details}}
## Expected Output
{{expected_output}}
## Error Handling
{{error_handling}}
## Performance Considerations
{{performance_notes}}
---
**Generated by meta-automation-architect**
**Session**: {{session_id}}
**Date**: {{generated_date}}