Initial commit

This commit is contained in:
Zhongwei Li
2025-11-29 18:16:25 +08:00
commit c50a3be78a
28 changed files with 9866 additions and 0 deletions

# Example: Educational Course Automation
This example shows what the meta-automation-architect generates for an online programming course project.
## Project Context
- **Type**: Educational (primary) + Programming (secondary)
- **Content**: "Learn Python Programming" online course
- **Structure**: 12 modules, 48 lessons, exercises and quizzes
- **Pain Points**: Incomplete assessments, inconsistent formatting, manual progress tracking
- **Priority**: Assessment coverage and content consistency
## User Conversation
```
User: "Set up automation for my Python programming course"
Meta-Skill: Analyzing project...
- Detected: Educational (72% confidence)
- Found lessons/ directory with 48 markdown files
- Found quizzes/ with 8 assessment files
- Found exercises/ with 32 exercise files
- Sequential numbering detected (01_intro.md, 02_variables.md)
- Secondary: Programming (28% confidence)
- Found Python code examples in lessons
- Found exercise solutions in .py files
Meta-Skill: What are your main pain points?
Options:
- Incomplete assessment coverage (RECOMMENDED: 8 quizzes for 48 lessons = 17% coverage)
- Inconsistent lesson formatting
- Manual progress tracking
- Missing prerequisites validation
User: [Selects: Incomplete assessment coverage, Inconsistent formatting]
Meta-Skill: How many agents should we create?
- Few (2-3): Basic analysis
- Moderate (4-6): Multi-domain coverage (RECOMMENDED)
- Many (7+): Comprehensive automation
User: [Selects: Moderate]
Meta-Skill: Generating automation system...
Session ID: ed-course-abc123
```
## Project Structure
```
python-course/
├── modules/
│ ├── 01-introduction/
│ │ ├── 01_what-is-python.md
│ │ ├── 02_setup-environment.md
│ │ ├── 03_first-program.md
│ │ └── 04_variables.md
│ ├── 02-data-types/
│ └── ... (10 more modules)
├── quizzes/
│ ├── module-01-quiz.md
│ └── ... (7 more quizzes)
├── exercises/
│ ├── 01_hello_world.py
│ ├── 02_variables_practice.py
│ └── ... (30 more exercises)
├── solutions/
│ └── ... (exercise solutions)
├── syllabus.md
└── README.md
```
## Generated Automation System
### 1. Custom Subagents (6)
All agents created in `.claude/agents/`:
#### Universal Analysis Agents
**structure-analyzer.md**
- Analyzes course directory organization
- Checks module/lesson hierarchy
- Validates naming conventions
- Ensures consistent structure
**workflow-analyzer.md**
- Identifies repetitive content creation patterns
- Finds bottlenecks in course development
- Maps content creation workflow
- Suggests automation opportunities
#### Educational Domain Agents
**learning-path-analyzer.md**
- Maps lesson dependencies and prerequisites
- Analyzes difficulty progression curve
- Validates learning objective coverage
- Checks skill development sequence
**assessment-analyzer.md**
- Maps quizzes to modules (found only 17% coverage!)
- Analyzes quiz difficulty distribution
- Checks learning objective alignment
- Reviews question quality and variety
#### Implementation Agents
**skill-generator.md**
- Creates custom skills for course automation
- Generated: `quiz-generator`, `lesson-formatter`, `prerequisite-validator`
**command-generator.md**
- Creates commands for common workflows
- Generated: `/generate-quiz`, `/check-progression`, `/export-course`
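The assessment-analyzer's 17% figure is simple arithmetic: quizzes found versus lessons found. A minimal sketch of that check (in practice the file lists would come from Glob; the hard-coded lists here are stand-ins):

```python
def assessment_coverage(lesson_files, quiz_files):
    """Quiz coverage as a percentage of lessons, rounded to one decimal."""
    if not lesson_files:
        return 0.0
    return round(100 * len(quiz_files) / len(lesson_files), 1)

# The course's 48 lessons and 8 quizzes:
lessons = [f"{i:02d}_lesson.md" for i in range(1, 49)]
quizzes = [f"module-{i:02d}-quiz.md" for i in range(1, 9)]
print(assessment_coverage(lessons, quizzes))  # 16.7 -> reported as "17% coverage"
```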
### 2. Custom Skills (3)
**`.claude/skills/quiz-generator/SKILL.md`**
```markdown
---
name: quiz-generator
description: Automatically generates quiz questions from lesson content
allowed-tools: ["Read", "Write", "Grep", "Glob"]
---
# Quiz Generator
Automatically generates comprehensive quiz questions from lesson content.
## When This Activates
- User requests "generate quiz for module X"
- User says "create assessment for lessons"
- User asks "add quiz questions"
## Process
1. **Read Lesson Content**
- Parse lesson markdown files
- Extract key concepts and terms
- Identify code examples
- Note learning objectives
2. **Generate Question Types**
- Multiple choice (concept understanding)
- Fill-in-the-blank (terminology)
- Code completion (practical skills)
- True/false (misconception checking)
- Short answer (deeper understanding)
3. **Create Quiz File**
- Standard format with frontmatter
- Varied question types
- Progressive difficulty
- Aligned with learning objectives
4. **Validate Quality**
- Check question clarity
- Ensure correct answers
- Verify difficulty appropriateness
- Test completeness
## Example
**Input Lesson** (02_variables.md):
```markdown
# Variables in Python
Variables are containers for storing data values. In Python, you don't need to declare a variable type.
```python
x = 5
name = "Alice"
```
Variables can change type:
```python
x = 5 # int
x = "text" # now string
```
```
**Generated Quiz** (module-01-quiz.md):
```markdown
---
module: 1
lessons_covered: [1, 2, 3, 4]
difficulty: beginner
time_estimate: 10 minutes
---
# Module 1 Quiz: Introduction to Python
## Question 1 (Multiple Choice)
What is a variable in Python?
a) A fixed value that cannot change
b) A container for storing data values
c) A type of function
d) A Python keyword
**Answer:** b
## Question 2 (Fill in the Blank)
In Python, you _____ need to declare a variable's type explicitly.
**Answer:** don't / do not
## Question 3 (Code Completion)
Complete this code to create a variable named `age` with value 25:
```python
___ = ___
```
**Answer:** age = 25
## Question 4 (True/False)
A Python variable can change its type during program execution.
**Answer:** True
## Question 5 (Short Answer)
Explain in one sentence why Python is considered "dynamically typed".
**Sample Answer:** Python determines variable types at runtime rather than requiring explicit type declarations.
```
[... full skill implementation ...]
```
**`.claude/skills/lesson-formatter/SKILL.md`**
```markdown
---
name: lesson-formatter
description: Enforces consistent lesson structure and formatting across all course content
allowed-tools: ["Read", "Write", "Edit", "Grep", "Glob"]
---
# Lesson Formatter
Automatically formats lessons to maintain consistency across the course.
## Standard Lesson Format
Every lesson should have:
1. **Frontmatter** (metadata)
2. **Title** (# heading)
3. **Learning Objectives** (bullet list)
4. **Prerequisites** (if any)
5. **Content Sections** (## headings)
6. **Code Examples** (with syntax highlighting)
7. **Key Takeaways** (bullet list)
8. **Practice Exercise** (link)
9. **Next Lesson** (link)
## Process
1. **Scan Lesson**
- Check for required sections
- Validate frontmatter
- Verify code block formatting
2. **Add Missing Sections**
- Generate learning objectives from content
- Add takeaways summary
- Create exercise links
3. **Format Consistently**
- Standardize heading levels
- Fix code block languages
- Normalize spacing
4. **Validate Links**
- Check prerequisite links
- Verify exercise references
- Validate next lesson
## Example Transformation
**Before:**
```markdown
# Variables
Let's learn about variables.
x = 5
That's a variable.
```
**After:**
```markdown
---
module: 1
lesson: 4
title: Variables in Python
duration: 15 minutes
difficulty: beginner
prerequisites: [03_first-program]
---
# Variables in Python
## Learning Objectives
By the end of this lesson, you will be able to:
- Define what a variable is in Python
- Create variables with different data types
- Understand Python's dynamic typing
- Follow variable naming conventions
## Prerequisites
- Completed: [First Python Program](03_first-program.md)
## What are Variables?
Variables are containers for storing data values. In Python, you don't need to declare a variable type explicitly.
## Creating Variables
```python
x = 5
name = "Alice"
is_student = True
```
## Dynamic Typing
Python is dynamically typed, meaning variables can change type:
```python
x = 5 # int
x = "text" # now string (valid in Python!)
```
## Key Takeaways
- Variables store data values
- No type declaration needed
- Can change type during execution
- Use descriptive names
## Practice
Complete [Exercise 02: Variables Practice](../../exercises/02_variables_practice.py)
## Next
Continue to [Data Types](../02-data-types/01_numbers.md)
```
[... full skill implementation ...]
```
**`.claude/skills/prerequisite-validator/SKILL.md`**
```markdown
---
name: prerequisite-validator
description: Validates that lesson prerequisites form a valid learning path
allowed-tools: ["Read", "Grep", "Glob"]
---
# Prerequisite Validator
Ensures lessons have valid prerequisites and creates a coherent learning path.
## What It Checks
1. **Prerequisite Existence**
- Referenced lessons exist
- Paths are correct
2. **No Circular Dependencies**
- Lesson A → B → A is invalid
- Detects cycles in prerequisite graph
3. **Logical Progression**
- Prerequisites come before lesson
- Difficulty increases appropriately
4. **Completeness**
- All lessons reachable from start
- No orphaned lessons
## Process
1. **Parse Prerequisites**
```yaml
# Extract from frontmatter
prerequisites: [01_intro, 02_variables]
```
2. **Build Dependency Graph**
```
01_intro
├─ 02_variables
│ ├─ 03_data_types
│ └─ 04_operators
└─ 05_strings
```
3. **Validate**
- Check cycles
- Verify order
- Find orphans
4. **Generate Report**
- Issues found
- Suggested fixes
- Visualization of learning path
## Example Output
```
✅ Prerequisite Validation Complete
📊 Learning Path Statistics:
- Total lessons: 48
- Entry points: 1 (01_what-is-python)
- Maximum depth: 6 levels
- Average prerequisites per lesson: 1.4
❌ Issues Found: 3
1. Circular dependency detected:
15_functions → 16_scope → 17_recursion → 15_functions
Recommendation: Remove prerequisite from 17_recursion
2. Orphaned lesson:
advanced/99_metaprogramming.md
No lesson links to this. Add to module 12.
3. Missing prerequisite:
Lesson 23_list_comprehensions uses concepts from 20_loops
but doesn't list it as prerequisite.
Recommendation: Add 20_loops to prerequisites
📈 Learning Path Diagram saved to: docs/learning-path.mmd
```
[... full skill implementation ...]
```
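The cycle check the prerequisite-validator describes can be sketched as a depth-first search over a lesson → prerequisites mapping (a simplified model; the real skill would parse each lesson's frontmatter first):

```python
def find_cycle(prereqs):
    """Return the first prerequisite cycle found as a list of lessons, or None.

    `prereqs` maps each lesson to the list of lessons it depends on.
    """
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / on current path / done
    color = {n: WHITE for n in prereqs}
    stack = []

    def dfs(node):
        color[node] = GRAY
        stack.append(node)
        for dep in prereqs.get(node, []):
            if color.get(dep, WHITE) == GRAY:            # back edge: cycle found
                return stack[stack.index(dep):] + [dep]
            if color.get(dep, WHITE) == WHITE:
                found = dfs(dep)
                if found:
                    return found
        stack.pop()
        color[node] = BLACK
        return None

    for node in list(prereqs):
        if color[node] == WHITE:
            found = dfs(node)
            if found:
                return found
    return None

# The circular dependency from the example report:
cycle = find_cycle({
    '16_scope': ['15_functions'],
    '17_recursion': ['16_scope'],
    '15_functions': ['17_recursion'],
})
print(cycle)  # ['16_scope', '15_functions', '17_recursion', '16_scope']
```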
### 3. Custom Commands (3)
**`.claude/commands/generate-quiz.md`**
```markdown
---
description: Generate quiz for a module or lesson
allowed-tools: ["Read", "Write", "Grep", "Glob"]
---
# Generate Quiz
Creates comprehensive quiz from lesson content.
## Usage
```bash
/generate-quiz module-01 # Generate quiz for module 1
/generate-quiz 15_functions # Generate quiz for specific lesson
/generate-quiz --all # Generate missing quizzes for all modules
```
## What It Does
1. Reads lesson content from specified module/lesson
2. Extracts key concepts and learning objectives
3. Generates varied question types
4. Creates quiz file in standard format
5. Updates quiz index
## Example
```bash
/generate-quiz module-02
```
Output:
```
📝 Generating quiz for Module 02: Data Types...
✅ Analyzed 4 lessons:
- 05_numbers.md
- 06_strings.md
- 07_lists.md
- 08_dictionaries.md
✅ Generated 15 questions:
- 6 multiple choice
- 3 fill-in-blank
- 4 code completion
- 2 short answer
✅ Quiz saved to: quizzes/module-02-quiz.md
📊 Estimated completion time: 12 minutes
💡 Difficulty: Beginner
Next: Review and adjust questions in quizzes/module-02-quiz.md
```
[... full command implementation ...]
```
**`.claude/commands/check-progression.md`**
```markdown
---
description: Check learning path and prerequisite validity
allowed-tools: ["Read", "Grep", "Glob"]
---
# Check Progression
Validates course structure and learning path.
## Usage
```bash
/check-progression # Full validation
/check-progression --module 3 # Check specific module
/check-progression --visual # Generate visual diagram
```
## Checks Performed
1. **Structure Validation**
- All modules present
- Lessons numbered correctly
- No gaps in sequence
2. **Prerequisite Validation**
- No circular dependencies
- Prerequisites exist
- Logical progression
3. **Assessment Coverage**
- Quiz per module
- Exercises per lesson
- Coverage percentage
4. **Content Consistency**
- Standard lesson format
- Required sections present
- Code examples formatted
[... full command implementation ...]
```
**`.claude/commands/export-course.md`**
```markdown
---
description: Export course to various formats (PDF, HTML, SCORM)
allowed-tools: ["Read", "Bash", "Write", "Glob"]
---
# Export Course
Exports course content to distributable formats.
## Usage
```bash
/export-course pdf # Export to PDF
/export-course html # Export to static website
/export-course scorm # Export to SCORM package
/export-course --module 3 pdf # Export specific module
```
[... full command implementation ...]
```
### 4. Hooks (1)
**`.claude/hooks/validate_lesson_format.py`**
```python
#!/usr/bin/env python3
"""
Lesson Format Validation Hook
Type: PostToolUse
Validates lesson format after editing
"""
import sys
import json
import re
from pathlib import Path
def main():
    context = json.load(sys.stdin)
    tool = context.get('tool')
    params = context.get('parameters', {})

    # Only trigger on Write/Edit to lesson files
    if tool not in ['Write', 'Edit']:
        sys.exit(0)

    file_path = params.get('file_path', '')
    if '/lessons/' not in file_path or not file_path.endswith('.md'):
        sys.exit(0)

    print(f"📋 Validating lesson format: {Path(file_path).name}", file=sys.stderr)

    try:
        with open(file_path) as f:
            content = f.read()

        issues = []

        # Check frontmatter
        if not content.startswith('---'):
            issues.append("Missing frontmatter")

        # Check required sections
        required_sections = [
            '# ',  # Title
            '## Learning Objectives',
            '## Key Takeaways',
        ]
        for section in required_sections:
            if section not in content:
                issues.append(f"Missing section: {section}")

        # Check code blocks declare a language. Fence lines are matched in
        # order, so opening fences sit at the even indices of the result
        # (closing fences would otherwise always match with an empty group).
        fences = re.findall(r'^`{3}(\w*)', content, re.MULTILINE)
        if any(lang == '' for lang in fences[::2]):
            issues.append("Code blocks missing language specification")

        # Check for exercise link
        if '../../exercises/' not in content and '/exercises/' not in content:
            issues.append("Missing practice exercise link")

        if issues:
            print("⚠️ Format issues found:", file=sys.stderr)
            for issue in issues:
                print(f"  - {issue}", file=sys.stderr)
            print("\n💡 Tip: Use the lesson-formatter skill to auto-fix", file=sys.stderr)
        else:
            print("✅ Lesson format valid", file=sys.stderr)
    except Exception as e:
        print(f"❌ Validation error: {e}", file=sys.stderr)

    sys.exit(0)

if __name__ == '__main__':
    main()
```
### 5. Settings Configuration
**`.claude/settings.json`** (updated)
```json
{
  "hooks": {
    "PostToolUse": {
      "commands": [".claude/hooks/validate_lesson_format.py"]
    }
  }
}
```
### 6. Documentation
**`.claude/AUTOMATION_README.md`**
```markdown
# Automation System for Python Programming Course
## Generated On
2025-01-23
## Session ID
ed-course-abc123
## What Was Created
### Analysis Phase
- **structure-analyzer**: Course well-organized, but inconsistent lesson numbering in module 5
- **workflow-analyzer**: Identified repetitive quiz creation as major time sink
- **learning-path-analyzer**: Clear progression, but module 8 prerequisites need clarification
- **assessment-analyzer**: LOW COVERAGE - Only 17% (8 quizzes for 48 lessons)
### Generated Artifacts
#### Custom Agents (6)
- **structure-analyzer**: Analyzes course organization
- **workflow-analyzer**: Identifies automation opportunities
- **learning-path-analyzer**: Validates learning progression
- **assessment-analyzer**: Checks quiz coverage
- **skill-generator**: Created 3 custom skills
- **command-generator**: Created 3 slash commands
#### Skills (3)
- **quiz-generator**: Auto-generates quiz questions from lessons (SAVES 20 MIN/QUIZ!)
- **lesson-formatter**: Enforces consistent lesson structure
- **prerequisite-validator**: Validates learning path dependencies
#### Commands (3)
- **/generate-quiz**: Create quiz for module/lesson
- **/check-progression**: Validate course structure
- **/export-course**: Export to PDF/HTML/SCORM
#### Hooks (1)
- **PostToolUse**: Validates lesson format on save
## Impact Assessment
### Time Savings
- Quiz generation: 20 min/quiz × 40 missing quizzes = **13.3 hours saved**
- Lesson formatting: 5 min/lesson × 48 lessons = **4 hours saved**
- Prerequisite validation: 30 min/module × 12 modules = **6 hours saved**
- **Total: ~23 hours saved** + ongoing maintenance
### Quality Improvements
- **100% quiz coverage** (up from 17%)
- **Consistent lesson format** across all content
- **Valid learning path** with no circular dependencies
- **Professional export formats** (PDF, HTML, SCORM)
## Quick Start
1. Generate missing quizzes:
```bash
/generate-quiz --all
```
2. Validate course structure:
```bash
/check-progression --visual
```
3. Format all lessons:
```bash
"Format all lessons in the course"
# lesson-formatter skill auto-invokes
```
4. Create new lesson (format validated automatically):
```bash
# Edit any lesson file
# Hook validates format on save
```
## Course Statistics
- **48 Lessons** across 12 modules
- **8 Quizzes** → Will be 48 quizzes (100% coverage)
- **32 Exercises** with solutions
- **Learning Path Depth:** 6 levels
- **Estimated Course Duration:** 24 hours
## Customization
All generated automation can be customized:
- Edit skills in `.claude/skills/`
- Modify commands in `.claude/commands/`
- Adjust hooks in `.claude/hooks/`
## Session Data
All agent communication is logged in:
`.claude/agents/context/ed-course-abc123/`
Review this directory to understand what automation decisions were made and why.
```
## Agent Communication Example
**`coordination.json`**
```json
{
  "session_id": "ed-course-abc123",
  "started_at": "2025-01-23T14:00:00Z",
  "project_type": "educational",
  "secondary_types": ["programming"],
  "agents": {
    "structure-analyzer": {
      "status": "completed",
      "completed_at": "2025-01-23T14:03:00Z",
      "report_path": "reports/structure-analyzer.json"
    },
    "learning-path-analyzer": {
      "status": "completed",
      "completed_at": "2025-01-23T14:05:00Z",
      "report_path": "reports/learning-path-analyzer.json"
    },
    "assessment-analyzer": {
      "status": "completed",
      "completed_at": "2025-01-23T14:06:00Z",
      "report_path": "reports/assessment-analyzer.json"
    }
  }
}
```
**`reports/assessment-analyzer.json`** (excerpt)
```json
{
  "agent_name": "assessment-analyzer",
  "summary": "CRITICAL: Only 17% assessment coverage. 40 lessons lack quizzes.",
  "findings": [
    {
      "type": "gap",
      "severity": "critical",
      "title": "Insufficient Quiz Coverage",
      "description": "Only 8 quizzes for 48 lessons (17% coverage). Industry standard is 80-100%.",
      "location": "quizzes/",
      "recommendation": "Generate quizzes for all modules using automated question extraction",
      "time_saved_if_automated": "20 minutes per quiz × 40 quizzes = 13.3 hours"
    }
  ],
  "recommendations_for_automation": [
    "Skill: quiz-generator - Auto-generate from lesson content",
    "Command: /generate-quiz --all - Batch generate missing quizzes",
    "Hook: Suggest quiz creation when module is complete"
  ],
  "automation_impact": {
    "time_saved": "13.3 hours",
    "quality_improvement": "coverage raised from 17% to 100% (+83 percentage points)"
  }
}
```
## Result
Course creator now has powerful automation:
- ✅ Can generate 40 missing quizzes in minutes (vs. 13+ hours manually)
- ✅ All lessons formatted consistently
- ✅ Learning path validated with no circular dependencies
- ✅ Hook prevents incorrectly formatted lessons
- ✅ Can export to professional formats (PDF, SCORM)
- **23+ hours saved** in course development and maintenance
## Before vs After
**Before:**
```
# Manual workflow
- Write lesson → 30 min
- Format manually → 5 min
- Create quiz → 20 min
- Validate prerequisites → 5 min
- Total: 60 min per lesson × 48 = 48 hours
```
**After:**
```
# Automated workflow
- Write lesson → 30 min
- Auto-formatted on save → 0 min
- Generate quiz → 1 min (/generate-quiz)
- Auto-validated → 0 min
- Total: 31 min per lesson × 48 = 24.8 hours
SAVINGS: 23+ hours (48% reduction!)
```
**Quality Improvement:**
- Quiz coverage: 17% → 100% (+83%)
- Format consistency: 60% → 100% (+40%)
- Valid learning paths: 90% → 100% (+10%)

# Example: Media Library Organization Automation
This example shows what the meta-automation-architect generates for a personal media and document library.
## Project Context
- **Type**: File Organization (primary)
- **Content**: 5,000+ files (photos, videos, documents, music)
- **Size**: 250GB across multiple folders
- **Pain Points**: Inconsistent naming, duplicates, missing metadata, hard to find files
- **Priority**: Auto-tagging and duplicate detection
## Project Structure (Before)
```
media-library/
├── Photos/
│ ├── IMG_0001.jpg
│ ├── IMG_0002.jpg
│ ├── vacation.jpg
│ ├── DSC001.jpg (duplicate of IMG_0001.jpg with different name)
│ └── ... (2,000+ images)
├── Videos/
│ ├── VID_001.mp4
│ ├── movie_001.mp4
│ └── ... (500+ videos)
├── Documents/
│ ├── document.pdf
│ ├── report_final_final.docx (duplicate)
│ └── ... (1,500+ docs)
├── Music/
│ └── ... (1,000+ songs)
└── Misc/
└── ... (random files)
```
## User Conversation
```
User: "Help me organize my media library"
Meta-Skill: Analyzing project...
- Detected: File Organization (95% confidence)
- Found 5,247 files across 89 directories
- Mixed naming conventions detected
- Estimated 380 duplicate files (7% of total)
- Only 15% of files have metadata tags
- Date-based folder structure partially used
Meta-Skill: What are your main pain points?
Options:
- Finding duplicate files (RECOMMENDED: ~1.2GB wasted, 7% duplication rate)
- Inconsistent file naming
- Missing metadata/tags (RECOMMENDED: 85% files lack tags)
- Hard to find specific files
User: [Selects: Duplicate files, Missing metadata]
Meta-Skill: Automation opportunities identified:
- Auto-tag files based on content: Saves 2 min/file × 4,460 files = 148 hours
- Find and merge duplicates: Saves 10 min/duplicate × 380 = 63 hours
- Total potential savings: 211 hours + 1.2GB storage
Meta-Skill: Generating automation system...
Session ID: media-org-def456
```
## Generated Automation System
### 1. Custom Subagents (5)
- **structure-analyzer** - Reviews folder organization and hierarchy
- **metadata-analyzer** - Checks tagging coverage and consistency
- **duplication-analyzer** - Finds duplicate and similar files
- **asset-analyzer** - Inventories all media types
- **command-generator** - Creates organization commands
### 2. Custom Skills (3)
**`auto-tagger`** - Automatically tags files based on content
- Images: Extracts EXIF data (date, location, camera)
- Videos: Analyzes metadata, duration, resolution
- Documents: Extracts title, author, creation date
- Music: Reads ID3 tags, adds genre/artist
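The actual metadata extraction would lean on libraries such as Pillow (EXIF) or mutagen (ID3); the tagging logic itself is a pure transformation and can be sketched over an already-extracted metadata dict. The keys below are illustrative assumptions, not a real EXIF schema:

```python
def tags_from_metadata(meta):
    """Derive searchable tags from extracted file metadata (a sketch)."""
    tags = []
    if date := meta.get('date_taken'):              # e.g. '2024-07-15'
        tags.append(date)
        tags.append(date[:7])                       # month bucket: '2024-07'
    if loc := meta.get('location'):                 # e.g. 'Waikiki Beach, HI'
        tags.extend(part.strip().lower() for part in loc.split(','))
    if cam := meta.get('camera'):
        tags.append(cam.lower())
    return sorted(set(tags))

print(tags_from_metadata({
    'date_taken': '2024-07-15',
    'location': 'Waikiki Beach, HI',
    'camera': 'iPhone 14 Pro',
}))
# ['2024-07', '2024-07-15', 'hi', 'iphone 14 pro', 'waikiki beach']
```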
**Example:**
```
Before: IMG_0523.jpg (no metadata)
After: IMG_0523.jpg
Tags: [vacation, beach, 2024-07-15, hawaii, sunset]
Location: Waikiki Beach, HI
Camera: iPhone 14 Pro
```
**`duplicate-merger`** - Identifies and consolidates duplicates
- Exact duplicates (same hash)
- Similar images (perceptual hash)
- Same content, different formats
- Version variations
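The exact-duplicate case is the easy one: group files by content digest. A minimal sketch (in-memory bytes for illustration; a real 250GB library would stream each file through the hash in chunks, and near-duplicate images would need a perceptual-hash library such as imagehash):

```python
import hashlib
from collections import defaultdict

def duplicate_groups(files):
    """Group byte-identical files by SHA-256 digest.

    `files` maps path -> file contents as bytes.
    Returns one sorted path list per group of 2+ identical files.
    """
    by_digest = defaultdict(list)
    for path, data in files.items():
        by_digest[hashlib.sha256(data).hexdigest()].append(path)
    return [sorted(paths) for paths in by_digest.values() if len(paths) > 1]

files = {
    'Photos/IMG_0523.jpg': b'jpeg-bytes-A',
    'Photos/vacation.jpg': b'jpeg-bytes-A',   # same content, different name
    'Backup/beach.jpg':    b'jpeg-bytes-A',   # same content again
    'Photos/IMG_0600.jpg': b'jpeg-bytes-B',
}
print(duplicate_groups(files))
# [['Backup/beach.jpg', 'Photos/IMG_0523.jpg', 'Photos/vacation.jpg']]
```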
**Example:**
```
Found 3 duplicates of vacation_beach.jpg:
- Photos/IMG_0523.jpg (original, highest quality)
- Photos/vacation.jpg (duplicate)
- Backup/beach.jpg (duplicate)
Action: Keep IMG_0523.jpg, create symbolic links for others
Savings: 8.2 MB
```
**`index-generator`** - Creates searchable catalog
- Generates `library-index.md` with all files
- Categorizes by type, date, tags
- Creates search-friendly format
- Updates automatically
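The tag section of the catalog is a straightforward inversion of the per-file tag lists. A sketch of how the "By Tag" section of `library-index.md` might be rendered (input shape assumed to match what the auto-tagger produces):

```python
from collections import defaultdict

def render_tag_index(entries):
    """Render the 'By Tag' section of library-index.md (a sketch).

    `entries` is a list of (filename, tags) pairs.
    """
    by_tag = defaultdict(list)
    for name, tags in entries:
        for tag in tags:
            by_tag[tag].append(name)
    lines = ['## By Tag']
    for tag in sorted(by_tag):
        lines.append(f"- **{tag}** ({len(by_tag[tag])} file(s))")
    return '\n'.join(lines)

print(render_tag_index([
    ('2024-07-15_hawaii-beach_sunset.jpg', ['vacation', 'beach']),
    ('2024-07-20_family-dinner.jpg', ['family']),
    ('2024-07-16_snorkeling.mp4', ['vacation']),
]))
```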
### 3. Custom Commands (3)
**`/organize`**
```bash
/organize # Organize entire library
/organize Photos/ # Organize specific folder
/organize --dry-run # Preview changes
```
Actions:
- Renames files with consistent convention
- Moves to appropriate category folders
- Adds metadata tags
- Detects and merges duplicates
- Generates index
**`/find-duplicates`**
```bash
/find-duplicates # Find all duplicates
/find-duplicates Photos/ # In specific folder
/find-duplicates --auto-merge # Auto-merge safe duplicates
```
**`/generate-index`**
```bash
/generate-index # Full library index
/generate-index --by-date # Chronological index
/generate-index --by-tag # By tag category
```
### 4. Hooks (2)
**`auto_tag_new_files.py`** (PostToolUse)
- Triggers when files are added
- Automatically extracts and adds metadata
- Tags based on content analysis
**`duplicate_alert.py`** (PostToolUse)
- Triggers when files are added
- Checks for duplicates
- Alerts if duplicate detected
### 5. Impact
**Time Savings:**
- Manual tagging: 2 min/file × 4,460 files = **148 hours** → Automated
- Finding duplicates: Manual search would take **20+ hours** → 5 minutes automated
- Creating index: **5 hours** manual → 2 minutes automated
- **Total: 173+ hours saved**
**Storage Savings:**
- Duplicates removed: **1.2GB** recovered
- Optimized organization: **Better disk cache performance**
**Quality Improvements:**
- Metadata coverage: 15% → **100%** (+85%)
- Findability: Manual search → **Instant** via indexed catalog
- Consistency: Mixed naming → **100% standardized**
## Example Results
### Before `/organize`
```
Photos/
├── IMG_0001.jpg (no tags)
├── vacation.jpg (no tags, actually duplicate of IMG_0001)
├── DSC001.JPG (no tags)
└── ... (mixed names, no metadata)
```
### After `/organize`
```
library/
├── photos/
│ ├── 2024/
│ │ ├── 07-july/
│ │ │ ├── 2024-07-15_hawaii-beach_sunset.jpg
│ │ │ │ Tags: [vacation, beach, hawaii, sunset]
│ │ │ │ Location: Waikiki, HI
│ │ │ └── ...
│ │ └── 08-august/
│ └── 2023/
├── videos/
│ ├── 2024/
│ │ └── 2024-07-15_beach-waves_1080p.mp4
│ │ Tags: [vacation, ocean, hawaii]
├── documents/
│ ├── personal/
│ └── work/
├── music/
│ ├── by-artist/
│ └── by-genre/
├── library-index.md (searchable catalog)
└── .metadata/ (tag database)
```
### Generated Index (excerpt)
```markdown
# Media Library Index
Last Updated: 2025-01-23
Total Files: 5,247
Total Size: 248.8 GB
## Recent Additions (Last 7 Days)
- 2024-07-20_family-dinner.jpg [Tags: family, home, dinner]
- 2024-07-19_work-presentation.pptx [Tags: work, slides]
## By Category
### Photos (2,000 files, 45.2 GB)
#### 2024 (523 files)
- **July** (156 files)
- Hawaii Vacation (45 files) - Tags: vacation, beach, hawaii
- Home Events (28 files) - Tags: family, home
- **August** (89 files)
### Videos (500 files, 180.5 GB)
...
### Documents (1,500 files, 18.1 GB)
...
## By Tag
- **vacation** (245 files)
- **family** (432 files)
- **work** (567 files)
...
## Search Tips
- By date: Find "2024-07"
- By location: Find "hawaii" or "beach"
- By type: Find ".jpg" or ".mp4"
```
## Agent Communication
**`reports/duplication-analyzer.json`** (excerpt):
```json
{
  "agent_name": "duplication-analyzer",
  "summary": "Found 380 duplicate files (7.2% duplication rate) wasting 1.18GB storage",
  "findings": [
    {
      "type": "duplicate_group",
      "severity": "medium",
      "title": "Vacation Photos Duplicated",
      "description": "45 vacation photos have 2-3 copies each with different names",
      "storage_wasted": "285 MB",
      "recommendation": "Keep highest quality version, create symlinks for others"
    }
  ],
  "metrics": {
    "total_files_scanned": 5247,
    "duplicate_groups": 127,
    "total_duplicates": 380,
    "storage_wasted_mb": 1210,
    "deduplication_potential": "1.21 GB recoverable (~0.5% of library size)"
  },
  "automation_impact": {
    "time_saved": "63 hours (manual duplicate finding)",
    "storage_recovered": "1.2 GB"
  }
}
```
## Result
User now has:
- **Fully organized library** with consistent structure
- **100% metadata coverage** (up from 15%)
- **Zero duplicates** (removed 380, recovered 1.2GB)
- **Searchable index** for instant finding
- **Auto-tagging** for all new files
- **173+ hours saved** in organization work
**Before vs After:**
| Metric | Before | After | Improvement |
|--------|--------|-------|-------------|
| Files with metadata | 15% (788) | 100% (5,247) | +85% |
| Duplicate files | 380 (7.2%) | 0 (0%) | -100% |
| Wasted storage | 1.2 GB | 0 GB | 1.2GB recovered |
| Time to find file | 5-10 min | <10 sec | 30-60x faster |
| Manual org time | 173+ hours | 2 hours setup | 98% reduction |
**Ongoing Benefits:**
- New files auto-tagged immediately
- Duplicates detected before saving
- Index updates automatically
- Consistent organization maintained

# Example: Project Management Automation
This example shows what the meta-automation-architect generates for a software delivery project management workspace.
## Project Context
- **Type**: Project Management (primary) + Programming (secondary)
- **Project**: Q1 2025 Mobile App Development
- **Team**: 8 people (2 devs, 2 designers, 1 PM, 1 QA, 2 stakeholders)
- **Duration**: 3 months (12 sprints)
- **Pain Points**: Manual status reporting, resource over-allocation, missing risk tracking
- **Priority**: Automated reporting and resource management
## Project Structure
```
mobile-app-project/
├── project-plan.md
├── roadmap.md
├── sprints/
│ ├── sprint-01/
│ │ ├── planning.md
│ │ ├── tasks.md
│ │ ├── retrospective.md
│ │ └── burndown-chart.png
│ ├── sprint-02/
│ └── ... (12 sprints)
├── resources/
│ ├── team-capacity.md
│ ├── allocation.xlsx
│ └── skills-matrix.md
├── risks/
│ ├── risk-register.md
│ └── mitigation-plans.md
├── milestones/
│ ├── m1-design-complete.md
│ ├── m2-mvp-ready.md
│ └── m3-launch-ready.md
└── reports/
├── weekly-status/
└── monthly-summary/
```
## User Conversation
```
User: "Set up automation for my mobile app project management"
Meta-Skill: Analyzing project...
- Detected: Project Management (78% confidence)
- Found sprints/ directory with 12 sprint folders
- Found project-plan.md and roadmap.md
- Found resources/allocation.xlsx
- Found risks/risk-register.md
- Secondary: Programming (22% confidence)
- References to code deliverables
Meta-Skill: What are your main pain points?
Options:
- Manual status reporting (RECOMMENDED: 12 weekly reports expected, only 3 exist)
- Resource over-allocation (RECOMMENDED: Excel shows conflicts)
- Missing risk tracking
- Timeline validation
User: [Selects: Manual status reporting, Resource over-allocation]
Meta-Skill: Generating automation system...
Session ID: pm-mobile-app-xyz789
```
## Generated Automation System
### 1. Custom Subagents (6)
#### Universal Agents
- **structure-analyzer** - Project organization and folder hierarchy
- **workflow-analyzer** - Sprint and delivery processes
#### Project Management Domain Agents
- **timeline-analyzer** - Sprint schedules, dependencies, critical paths
- **resource-analyzer** - Team allocation, capacity, conflicts
- **risk-analyzer** - Risk identification and mitigation coverage
#### Implementation Agent
- **command-generator** - Created 3 PM-specific commands
### 2. Custom Skills (3)
**`status-reporter`** - Auto-generates weekly status reports from sprint data
- Reads sprint tasks, completion status, blockers
- Generates formatted report with metrics
- Saves time: **45 min/week** (9 hours over 12 sprints)
**`resource-optimizer`** - Identifies and resolves allocation conflicts
- Parses resource allocation data
- Detects over/under allocation
- Suggests rebalancing
- Saves time: **30 min/sprint** (6 hours total)
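The conflict detection the resource-optimizer performs reduces to summing assigned hours per person and comparing against capacity. A sketch (names and task data are hypothetical; real input would be parsed from `allocation.xlsx`):

```python
def allocation_conflicts(capacity_hours, assignments):
    """Flag team members assigned more hours than their sprint capacity.

    `capacity_hours`: name -> available hours this sprint
    `assignments`: list of (name, task, hours)
    Returns name -> (assigned_hours, capacity) for each over-allocation.
    """
    assigned = {}
    for name, _task, hours in assignments:
        assigned[name] = assigned.get(name, 0) + hours
    return {
        name: (hours, capacity_hours.get(name, 0))
        for name, hours in assigned.items()
        if hours > capacity_hours.get(name, 0)
    }

capacity = {'dev1': 30, 'dev2': 30, 'designer1': 25}
tasks = [
    ('dev1', 'OAuth integration', 20),
    ('dev1', 'Session management', 15),   # dev1 now at 35h against 30h
    ('dev2', 'Password reset', 12),
]
print(allocation_conflicts(capacity, tasks))  # {'dev1': (35, 30)}
```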
**`risk-tracker`** - Maintains risk register and tracks mitigation
- Monitors risks from register
- Tracks mitigation progress
- Alerts on new risks
- Saves time: **20 min/week** (4 hours total)
### 3. Custom Commands (3)
**`/sprint-report`**
```bash
/sprint-report # Current sprint
/sprint-report sprint-05 # Specific sprint
/sprint-report --all # All sprints summary
```
Generates comprehensive sprint report:
- Completed tasks vs. planned
- Velocity and burndown
- Blockers and risks
- Team capacity utilization
- Next sprint forecast
**`/resource-check`**
```bash
/resource-check # Check current allocation
/resource-check --week 5 # Specific week
/resource-check --conflicts # Show only conflicts
```
Analyzes resource allocation:
- Capacity vs. assigned work
- Over-allocated team members
- Under-utilized resources
- Skill match for tasks
- Rebalancing suggestions
**`/timeline-validate`**
```bash
/timeline-validate # Validate full timeline
/timeline-validate --critical # Show critical path
/timeline-validate --risks # Timeline risks
```
Validates project timeline:
- Dependency validation
- Critical path analysis
- Buffer analysis
- Risk to deadlines
- Suggested adjustments
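The critical-path analysis behind this command can be sketched as a longest-path computation over the sprint dependency graph. The task names and durations below are illustrative:

```python
# Sketch of critical-path analysis: longest path through a dependency DAG.
from graphlib import TopologicalSorter

def critical_path(durations, deps):
    """durations = {task: weeks}, deps = {task: set of prerequisite tasks}."""
    ts = TopologicalSorter(deps)
    for task in durations:          # make sure isolated tasks are included
        ts.add(task)
    finish, via = {}, {}            # earliest finish time, longest-path predecessor
    for task in ts.static_order():
        prereqs = deps.get(task, set())
        start = max((finish[p] for p in prereqs), default=0)
        finish[task] = start + durations[task]
        via[task] = max(prereqs, key=lambda p: finish[p]) if prereqs else None
    # Walk back from the latest-finishing task to recover the path
    node = max(finish, key=finish.get)
    path = []
    while node is not None:
        path.append(node)
        node = via[node]
    return list(reversed(path)), max(finish.values())

durations = {"design": 2, "backend": 3, "frontend": 2, "qa": 1}
deps = {"backend": {"design"}, "frontend": {"design"}, "qa": {"backend", "frontend"}}
path, weeks = critical_path(durations, deps)
print(path, weeks)  # ['design', 'backend', 'qa'] 6
```

Tasks on the returned path have zero buffer: any slip there moves the end date directly, which is exactly the "zero buffer on critical path" finding the timeline-analyzer reports.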
### 4. Hooks (2)
**`update_progress.py`** (PostToolUse)
- Triggers when task markdown files are updated
- Extracts completion status
- Updates sprint progress automatically
- Regenerates burndown chart
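The extraction step of such a hook can be sketched as a checkbox count over the edited file. The `- [x]` task format is an assumption about how tasks are tracked:

```python
# Sketch of completion-status extraction from a task markdown file.
# Assumes tasks are tracked as "- [x] done" / "- [ ] open" checkboxes.
import re

def completion_status(markdown_text):
    """Return (completed, total) checkbox counts."""
    done = len(re.findall(r"^- \[[xX]\]", markdown_text, flags=re.MULTILINE))
    total = len(re.findall(r"^- \[[ xX]\]", markdown_text, flags=re.MULTILINE))
    return done, total

text = "- [x] Login UI\n- [X] OAuth integration\n- [ ] Password reset\n"
print(completion_status(text))  # (2, 3)
```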
**`resource_validation.py`** (PreToolUse)
- Triggers when allocation files are modified
- Validates no over-allocation
- Blocks if conflicts detected
- Suggests fixes before allowing change
### 5. Documentation
**Impact Assessment:**
**Time Savings:**
- Weekly status reports: 45 min/week × 12 weeks = **9 hours**
- Resource planning: 30 min/sprint × 12 sprints = **6 hours**
- Risk tracking: 20 min/week × 12 weeks = **4 hours**
- Timeline validation: 1 hour/month × 3 months = **3 hours**
- **Total: 22 hours saved**
**Quality Improvements:**
- **100% sprint coverage** for status reports (up from 25%)
- **Zero resource conflicts** (automated detection)
- **Real-time risk visibility** (vs. monthly reviews)
- **Validated timeline** with critical path analysis
## Example Usage
### Generating Sprint Report
```bash
/sprint-report sprint-05
```
**Output:**
```markdown
# Sprint 05 Report
**Period:** Jan 15-19, 2025
**Team:** Mobile App Team
## Summary
✅ Sprint Goal: Complete user authentication flow - ACHIEVED
## Metrics
- **Planned Story Points:** 34
- **Completed Story Points:** 32 (94%)
- **Velocity:** 32 pts (trending up from 28)
- **Team Capacity:** 160 hours
- **Actual Hours:** 152 hours (95% utilization)
## Completed Tasks (8)
✅ User login UI (8 pts) - @designer1
✅ OAuth integration (13 pts) - @dev1
✅ Session management (5 pts) - @dev2
✅ Login testing (3 pts) - @qa
[... 4 more ...]
## Carried Over (2)
➡️ Password reset flow (5 pts) - blocked by API
➡️ Biometric auth (8 pts) - iOS issue
## Blockers
⚠️ API authentication endpoint delayed → unblocks next sprint
⚠️ iOS biometric API documentation unclear → dev investigating
## Risks
🔴 HIGH: OAuth provider latency affecting UX → mitigation: add loading states
🟡 MEDIUM: Test coverage only 65% → action: dedicated testing time in sprint 06
## Team Notes
- Great collaboration between design and dev
- Need more QA capacity in next sprint
- Consider adding API buffer time
## Next Sprint Preview
Sprint 06 will focus on:
- Complete password reset (carried over)
- User profile management
- Settings screen
- Target: 30-35 story points
---
Auto-generated by meta-automation-architect
```
### Checking Resource Allocation
```bash
/resource-check
```
**Output:**
```
📊 Resource Allocation Analysis
Current Week: Week 6 (Jan 22-26, 2025)
✅ Properly Allocated (5):
- @dev1: 40/40 hours (100%)
- @designer2: 32/40 hours (80%)
- @qa: 40/40 hours (100%)
- @pm: 35/40 hours (88%)
- @stakeholder1: 10/10 hours (100%)
⚠️ Over-Allocated (2):
- @dev2: 52/40 hours (130% - CRITICAL)
Tasks: Feature A (20h), Feature B (18h), Bug fixes (14h)
Recommendation: Move Bug fixes to @dev1 (8h available)
- @designer1: 45/40 hours (113%)
Tasks: UI mockups (25h), Asset creation (20h)
Recommendation: Defer Asset creation to Week 7
🔵 Under-Utilized (1):
- @stakeholder2: 3/10 hours (30%)
Opportunity: Review sessions, feedback collection
💡 Optimization Suggestions:
1. Redistribute 14h from @dev2 to @dev1
2. Move Asset creation from @designer1 to Week 7
3. Add review tasks for @stakeholder2
Estimated Rebalancing Time: 10 minutes
After optimization: 100% feasible allocation
```
## Agent Communication
**`reports/timeline-analyzer.json`** (excerpt):
```json
{
"agent_name": "timeline-analyzer",
"summary": "Timeline feasible but tight. Critical path includes 4 sprints with zero buffer.",
"findings": [
{
"type": "risk",
"severity": "high",
"title": "Zero Buffer on Critical Path",
"description": "Sprints 4, 7, 9, 11 are on critical path with no schedule buffer",
"recommendation": "Add 10% buffer to each critical sprint or reduce scope",
"time_impact": "Any delay in these sprints directly impacts launch date"
},
{
"type": "opportunity",
"severity": "medium",
"title": "Parallel Workstreams Possible",
"description": "Design and backend development can run in parallel in sprints 2-5",
"recommendation": "Optimize resource allocation to leverage parallelism",
"time_saved_if_optimized": "2 weeks off critical path"
}
],
"automation_impact": {
"time_saved": "3 hours per month in timeline reviews",
"quality_improvement": "Real-time critical path visibility vs. monthly checks"
}
}
```
## Result
**PM now has powerful automation:**
- ✅ Weekly status reports generated in 30 seconds (vs. 45 minutes)
- ✅ Resource conflicts detected instantly (vs. discovered in standup)
- ✅ Risk register automatically maintained
- ✅ Timeline validated continuously
- ✅ **22 hours saved** over project duration
- ✅ **Better decision making** with real-time data
**Before vs After:**
**Before (Manual):**
- Weekly status report: 45 min
- Resource planning: 30 min/sprint
- Risk review: 20 min/week
- Timeline validation: 1 hour/month
- **Total: ~3 hours/week**
**After (Automated):**
- Status report: `/sprint-report` → 30 seconds
- Resource check: `/resource-check` → 30 seconds
- Risk tracking: Auto-updated from tasks
- Timeline: Auto-validated on every change
- **Total: ~5 minutes/week**
**Impact: 97% time reduction + higher data quality!**
# Example: Python CLI Tool Automation
This example shows what the meta-automation-architect generates for a Python command-line tool project.
## Project Context
- **Type**: CLI Tool
- **Stack**: Python 3.11, Click framework, SQLite
- **Team Size**: Solo developer
- **Pain Points**: Documentation, testing, dependency management
- **Priority**: User documentation and testing
## User Conversation
```
User: "Create automation for my Python CLI project"
Meta-Skill: Analyzing project...
- Detected: CLI Tool (87% confidence)
- Found cli.py with Click decorators
- Found setup.py with console_scripts entry point
- Found requirements.txt with CLI dependencies
- Found README.md but missing detailed docs
Meta-Skill: What are your main pain points?
Options:
- Low test coverage (RECOMMENDED: 8 test files for 42 modules)
- Missing documentation (RECOMMENDED: No docs/ directory)
- Dependency issues
- Release process
User: [Selects: Low test coverage, Missing documentation]
Meta-Skill: How many agents should we create?
- Few (2-3): Basic analysis (RECOMMENDED for solo projects)
- Moderate (4-6): Multi-domain coverage
- Many (7+): Comprehensive automation
User: [Selects: Few]
Meta-Skill: Generating automation system...
Session ID: b2c3d4e5-f6a7-8901-bcde-f23456789012
```
## Generated Automation System
### 1. Custom Subagents (4)
All agents created in `.claude/agents/`:
#### Analysis Agents (Run in Parallel)
**code-quality-analyzer.md**
- Checks PEP 8 compliance
- Analyzes function complexity
- Identifies code duplication
- Reviews error handling
**testing-analyzer.md**
- Measures test coverage
- Identifies untested CLI commands
- Reviews test patterns
- Checks for integration tests
#### Implementation Agents
**skill-generator.md**
- Creates custom skills for Python patterns
- Generated: `docstring-generator`, `cli-test-helper`
**command-generator.md**
- Creates commands for Python workflows
- Generated: `/test-cov`, `/release-prep`
### 2. Custom Skills (2)
**`.claude/skills/docstring-generator/SKILL.md`**
```markdown
---
name: docstring-generator
description: Generates comprehensive docstrings for Python functions and modules
allowed-tools: ["Read", "Write", "Grep", "Glob"]
---
# Docstring Generator
Automatically generates NumPy-style docstrings for Python code.
## When This Activates
- User asks to "add documentation" to Python files
- User requests "docstrings" for functions
- User says "document this module"
## Process
1. Scan Python files for functions/classes without docstrings
2. Analyze function signatures, type hints, and logic
3. Generate NumPy-style docstrings with:
- Brief description
- Parameters with types
- Returns with type
- Raises (exceptions)
- Examples
4. Insert docstrings into code
5. Validate with pydocstyle
## Example
**Input:**
```python
def parse_config(path, validate=True):
with open(path) as f:
config = json.load(f)
if validate:
validate_config(config)
return config
```
**Output:**
```python
def parse_config(path: str, validate: bool = True) -> dict:
"""
Parse configuration from JSON file.
Parameters
----------
path : str
Path to configuration file
validate : bool, optional
Whether to validate configuration (default: True)
Returns
-------
dict
Parsed configuration dictionary
Raises
------
FileNotFoundError
If configuration file doesn't exist
ValidationError
If configuration is invalid and validate=True
Examples
--------
>>> config = parse_config('config.json')
>>> config['database']['host']
'localhost'
"""
with open(path) as f:
config = json.load(f)
if validate:
validate_config(config)
return config
```
[... detailed implementation ...]
```
**`.claude/skills/cli-test-helper/SKILL.md`**
```markdown
---
name: cli-test-helper
description: Generates tests for Click CLI commands with fixtures
allowed-tools: ["Read", "Write", "Bash", "Grep"]
---
# CLI Test Helper
Automatically generates pytest tests for Click commands.
## When This Activates
- User implements new CLI command
- User requests "test this command"
- User says "add CLI tests"
## Process
1. Identify Click commands in code
2. Extract command parameters, options, flags
3. Generate pytest tests with:
- CliRunner fixtures
- Success case tests
- Error case tests
- Edge case tests
- Output validation
4. Create test fixtures for complex inputs
5. Run tests to verify
## Example
**CLI Command:**
```python
@click.command()
@click.option('--name', required=True, help='User name')
@click.option('--email', help='User email')
@click.option('--verbose', is_flag=True)
def create_user(name, email, verbose):
"""Create a new user."""
user = User(name=name, email=email)
db.save(user)
if verbose:
click.echo(f"Created user: {user}")
else:
click.echo(user.id)
```
**Generated Test:**
```python
import pytest
from click.testing import CliRunner
from myapp.cli import create_user
@pytest.fixture
def runner():
return CliRunner()
def test_create_user_success(runner):
"""Test successful user creation."""
result = runner.invoke(create_user, ['--name', 'Alice'])
assert result.exit_code == 0
assert 'user-' in result.output
def test_create_user_with_email(runner):
"""Test user creation with email."""
result = runner.invoke(create_user, [
'--name', 'Alice',
'--email', 'alice@example.com'
])
assert result.exit_code == 0
def test_create_user_verbose(runner):
"""Test verbose output."""
result = runner.invoke(create_user, [
'--name', 'Alice',
'--verbose'
])
assert result.exit_code == 0
assert 'Created user:' in result.output
def test_create_user_missing_name(runner):
"""Test error when name is missing."""
result = runner.invoke(create_user, [])
assert result.exit_code != 0
assert 'Missing option' in result.output
```
[... detailed implementation ...]
```
### 3. Custom Commands (2)
**`.claude/commands/test-cov.md`**
```markdown
---
description: Run tests with coverage report
allowed-tools: ["Bash", "Read"]
---
# Test Coverage Command
Runs pytest with coverage and generates detailed report.
## Usage
```bash
/test-cov # Full coverage
/test-cov tests/unit # Specific directory
/test-cov --html # Generate HTML report
```
## What This Does
1. **Run Tests with Coverage**
```bash
pytest --cov=src --cov-report=term-missing $ARGUMENTS
```
2. **Generate Report**
- Terminal: Coverage percentage by module
- Missing lines highlighted
- HTML report (if --html flag)
3. **Check Thresholds**
- Warn if coverage < 80%
- Error if coverage < 60%
4. **Identify Gaps**
- List untested files
- Highlight critical paths without tests
## Example Output
```
---------- coverage: platform darwin, python 3.11.5 -----------
Name Stmts Miss Cover Missing
-------------------------------------------------------
src/__init__.py 2 0 100%
src/cli.py 145 23 84% 67-73, 89-92
src/config.py 34 0 100%
src/database.py 89 45 49% 23-67, 78-89
src/utils.py 23 2 91% 45-46
-------------------------------------------------------
TOTAL 293 70 76%
⚠️ Coverage below 80% target
❌ database.py has only 49% coverage (critical module!)
Suggestions:
- Add integration tests for database.py
- Test error paths in cli.py lines 67-73
```
[... detailed implementation ...]
```
**`.claude/commands/release-prep.md`**
```markdown
---
description: Prepare project for release (version bump, tests, build)
allowed-tools: ["Bash", "Read", "Write"]
---
# Release Preparation
Automates release preparation checklist.
## Usage
```bash
/release-prep # Interactive mode
/release-prep patch # Auto-bump patch version
/release-prep minor # Auto-bump minor version
/release-prep major # Auto-bump major version
```
## Process
1. **Run Full Test Suite**
```bash
pytest -v
```
2. **Check Coverage**
```bash
pytest --cov=src --cov-report=term
```
3. **Lint Code**
```bash
ruff check src/
mypy src/
```
4. **Bump Version**
- Update version in setup.py, __version__.py
- Update CHANGELOG.md
- Create git tag
5. **Build Distributions**
```bash
python -m build
```
6. **Test Installation**
```bash
pip install dist/*.whl
```
7. **Generate Release Notes**
- Extract commits since last tag
- Categorize changes (features, fixes, breaking)
- Write to RELEASE_NOTES.md
8. **Checklist Output**
```
✅ All tests pass (147 passed)
✅ Coverage: 87%
✅ Linting: No issues
✅ Version bumped: 1.2.3 → 1.2.4
✅ CHANGELOG.md updated
✅ Distribution built
✅ Installation tested
✅ Release notes generated
Ready to release! Next steps:
1. Review RELEASE_NOTES.md
2. git push --tags
3. twine upload dist/*
```
[... detailed implementation ...]
```
### 4. Hooks (1)
**`.claude/hooks/lint_python.py`**
```python
#!/usr/bin/env python3
"""
Python Linting Hook
Type: PostToolUse
Automatically lints Python files after edits
"""
import sys
import json
import subprocess
from pathlib import Path
def main():
context = json.load(sys.stdin)
tool = context.get('tool')
params = context.get('parameters', {})
# Only trigger on Write/Edit to Python files
if tool not in ['Write', 'Edit']:
sys.exit(0)
file_path = params.get('file_path', '')
if not file_path.endswith('.py'):
sys.exit(0)
# Run ruff
print(f"🔍 Linting {file_path}...", file=sys.stderr)
ruff_result = subprocess.run(
['ruff', 'check', '--fix', file_path],
capture_output=True,
text=True
)
if ruff_result.returncode != 0:
print(f"⚠️ Ruff found issues:", file=sys.stderr)
print(ruff_result.stdout, file=sys.stderr)
# Run black
print(f"🎨 Formatting {file_path}...", file=sys.stderr)
black_result = subprocess.run(
['black', '--quiet', file_path],
capture_output=True
)
if black_result.returncode == 0:
print(f"✅ Formatted successfully", file=sys.stderr)
else:
print(f"❌ Formatting failed", file=sys.stderr)
sys.exit(0)
if __name__ == '__main__':
main()
```
### 5. Settings Configuration
**`.claude/settings.json`** (updated)
```json
{
"hooks": {
"PostToolUse": {
"commands": [".claude/hooks/lint_python.py"]
}
}
}
```
### 6. Documentation
**`.claude/AUTOMATION_README.md`**
```markdown
# Automation System for Python CLI Tool
## Generated On
2025-01-23
## Session ID
b2c3d4e5-f6a7-8901-bcde-f23456789012
## What Was Created
### Analysis Phase
- **code-quality-analyzer**: Identified 8 PEP 8 violations and 3 complex functions
- **testing-analyzer**: Test coverage at 58%, many CLI commands untested
### Generated Artifacts
#### Custom Agents (4)
- **code-quality-analyzer**: Evaluates code quality and PEP 8 compliance
- **testing-analyzer**: Measures test coverage for CLI commands
- **skill-generator**: Created 2 custom skills
- **command-generator**: Created 2 slash commands
#### Skills (2)
- **docstring-generator**: Auto-generates NumPy-style docstrings
- **cli-test-helper**: Generates pytest tests for Click commands
#### Commands (2)
- **/test-cov**: Run tests with coverage report
- **/release-prep**: Prepare project for release
#### Hooks (1)
- **PostToolUse**: Auto-lint and format Python files
## Quick Start
1. Generate docstrings:
```bash
"Add documentation to all functions in src/cli.py"
# docstring-generator skill auto-invokes
```
2. Generate tests:
```bash
"Create tests for the create_user command"
# cli-test-helper skill auto-invokes
```
3. Check coverage:
```bash
/test-cov
```
4. Prepare release:
```bash
/release-prep patch
```
5. Auto-formatting:
- Every time you write/edit a .py file, it's automatically linted and formatted
## Customization
- Edit skills in `.claude/skills/`
- Modify commands in `.claude/commands/`
- Adjust hook in `.claude/hooks/lint_python.py`
- Configure linters (ruff.toml, pyproject.toml)
[... more documentation ...]
```
## Agent Communication
**`coordination.json`**
```json
{
"session_id": "b2c3d4e5-f6a7-8901-bcde-f23456789012",
"started_at": "2025-01-23T14:00:00Z",
"project_type": "cli",
"agents": {
"code-quality-analyzer": {
"status": "completed",
"started_at": "2025-01-23T14:00:00Z",
"completed_at": "2025-01-23T14:03:00Z",
"report_path": "reports/code-quality-analyzer.json"
},
"testing-analyzer": {
"status": "completed",
"started_at": "2025-01-23T14:00:01Z",
"completed_at": "2025-01-23T14:04:00Z",
"report_path": "reports/testing-analyzer.json"
},
"skill-generator": {
"status": "completed",
"started_at": "2025-01-23T14:05:00Z",
"completed_at": "2025-01-23T14:08:00Z",
"report_path": "reports/skill-generator.json"
},
"command-generator": {
"status": "completed",
"started_at": "2025-01-23T14:08:30Z",
"completed_at": "2025-01-23T14:10:00Z",
"report_path": "reports/command-generator.json"
}
}
}
```
**Key Report Excerpts:**
**`reports/testing-analyzer.json`**
```json
{
"agent_name": "testing-analyzer",
"summary": "Test coverage at 58%. Many CLI commands lack tests.",
"findings": [
{
"type": "issue",
"severity": "high",
"title": "Untested CLI Commands",
"description": "5 Click commands have no tests",
"location": "src/cli.py",
"recommendation": "Generate tests for each command"
}
],
"recommendations_for_automation": [
"Skill: Auto-generate CLI tests using CliRunner",
"Command: /test-cov for quick coverage checks"
]
}
```
**`reports/skill-generator.json`**
```json
{
"agent_name": "skill-generator",
"summary": "Generated 2 skills: docstring-generator and cli-test-helper",
"findings": [
{
"type": "info",
"title": "Created docstring-generator skill",
"description": "Automates NumPy-style docstring generation",
"location": ".claude/skills/docstring-generator/"
},
{
"type": "info",
"title": "Created cli-test-helper skill",
"description": "Automates pytest test generation for Click commands",
"location": ".claude/skills/cli-test-helper/"
}
]
}
```
## Result
Solo developer now has efficient automation:
- ✅ 2 skills that handle tedious documentation and testing tasks
- ✅ 2 commands for common workflows (coverage, releases)
- ✅ 1 hook that auto-formats on every save
- ✅ Focuses on writing code, not boilerplate
- ✅ Complete documentation
- ✅ Ready to use immediately
Total generation time: ~10 minutes
## Before vs After
**Before:**
```bash
# Manual workflow
$ vim src/cli.py # Add new command
$ vim tests/test_cli.py # Manually write tests
$ pytest # Run tests
$ ruff check src/ # Manual linting
$ black src/ # Manual formatting
$ pytest --cov # Check coverage
$ vim docs/ # Update docs manually
# ~30-45 minutes per feature
```
**After:**
```bash
# Automated workflow
$ vim src/cli.py # Add new command
# Hook auto-formats and lints immediately ✅
"Create tests for the new command"
# cli-test-helper generates comprehensive tests ✅
/test-cov
# Instant coverage report ✅
"Add docstrings to src/cli.py"
# docstring-generator adds complete documentation ✅
# ~10 minutes per feature (3-4x faster!)
```
# Example: Research Paper with Presentation and Documentation
This example shows what the meta-automation-architect generates for a research project that combines **LaTeX** (paper), **HTML** (presentation), and **Markdown** (documentation).
## Project Context
- **Type**: Academic Writing (primary) + Research (secondary)
- **Content**:
- LaTeX research paper (25 pages, 8 chapters, 45 references)
- HTML presentation slides (30 slides)
- Markdown documentation and notes (50+ files)
- **Pain Points**: Broken cross-references, unused citations, broken links, inconsistent formatting
- **Priority**: Citation validation and link checking
## Project Structure
```
research-paper/
├── paper/
│ ├── main.tex # Main LaTeX document
│ ├── chapters/
│ │ ├── 01_introduction.tex
│ │ ├── 02_related_work.tex
│ │ ├── 03_methodology.tex
│ │ ├── 04_results.tex
│ │ ├── 05_discussion.tex
│ │ └── 06_conclusion.tex
│ ├── figures/
│ │ ├── figure1.pdf
│ │ ├── figure2.pdf
│ │ └── ... (15 figures)
│ ├── tables/
│ │ ├── table1.tex
│ │ └── ... (5 tables)
│ ├── references.bib # Bibliography (45 entries)
│ └── custom.sty # Custom LaTeX style
├── presentation/
│ ├── index.html # Main presentation
│ ├── slides/
│ │ ├── 01_intro.html
│ │ ├── 02_methods.html
│ │ ├── 03_results.html
│ │ └── ... (30 slides)
│ ├── assets/
│ │ ├── figures/
│ │ ├── css/
│ │ └── js/
│ └── notes/
│ └── speaker_notes.md
├── notes/
│ ├── literature_review.md
│ ├── methodology_notes.md
│ ├── experiment_logs.md
│ ├── analysis/
│ │ ├── [[statistical_tests]].md
│ │ └── [[data_processing]].md
│ └── ideas/
│ └── future_work.md
└── README.md
```
## User Conversation
```
User: "Help me automate my research paper project"
Meta-Skill: Analyzing project...
- Detected: Academic Writing (68% confidence)
- Found 25 .tex files
- Found 1 .bib file with 45 entries
- Found 30 HTML slides
- Found 50+ interconnected markdown files
- Secondary: Research (32% confidence)
- Research methodology documentation
- Experiment logs and analysis
Meta-Skill: What are your main pain points?
Options:
- Broken LaTeX references (RECOMMENDED: Manual checking takes 30 min)
- Unused bibliography entries (RECOMMENDED: 12 unused entries detected)
- Broken HTML/Markdown links (RECOMMENDED: 8 broken links found)
- Inconsistent formatting
User: [Selects: Broken references, Unused citations, Broken links]
Meta-Skill: Automation opportunities identified:
- Validate LaTeX cross-references: Saves 30 min/compile
- Clean bibliography: Saves 20 min/review
- Check all links: Saves 30 min/validation
- Total potential savings: 80+ hours over project lifecycle
Meta-Skill: Generating automation system...
Session ID: research-paper-abc123
```
## Generated Automation System
### 1. Custom Subagents (8)
#### Universal Agents
- **structure-analyzer** - Reviews document organization across all formats
- **workflow-analyzer** - Analyzes compilation and publishing workflow
#### Academic Writing Domain Agents
- **latex-structure-analyzer** - LaTeX document structure and cross-references
- **citation-analyzer** - Bibliography validation and citation usage
- **html-structure-analyzer** - Presentation hierarchy and semantics
- **link-validator** - All links across HTML and Markdown
- **cross-reference-analyzer** - Cross-references across all document types
- **formatting-analyzer** - Formatting consistency
### 2. Custom Skills (4)
**`latex-validator`** - Comprehensive LaTeX validation
**Example:**
```
Running LaTeX validation...
✅ Document Structure
- 6 chapters found
- Proper hierarchy: chapter → section → subsection
- TOC depth: 2 levels
⚠️ Cross-References
- 23/25 \\ref commands valid
- 2 broken references:
* Line 145: \\ref{fig:missing} - target not found
* Line 289: \\ref{sec:old-name} - outdated reference
✅ Figures/Tables
- 15/15 figures referenced
- 5/5 tables referenced
- All captions present
⚠️ Bibliography
- 45 entries in references.bib
- 33 cited in text
- 12 unused entries:
* [Smith2020] - Never cited
* [Jones2019] - Never cited
* ...
📊 Compilation Status
- pdflatex: ✅ Success
- bibtex: ✅ Success
- Output: main.pdf (2.3 MB)
💡 Recommendations:
1. Fix 2 broken \\ref references
2. Remove 12 unused bibliography entries (saves 27% .bib size)
3. Consider adding \\label for Section 4.2 (referenced but not labeled)
```
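At its core, the cross-reference check is a comparison of `\ref` targets against defined `\label`s. A minimal sketch (plain `\ref`/`\label` only; `\cref` or `\autoref` would need extra patterns):

```python
# Sketch of the \ref/\label consistency check (plain \ref and \label only).
import re

def broken_refs(tex_source):
    labels = set(re.findall(r"\\label\{([^}]+)\}", tex_source))
    refs = re.findall(r"\\ref\{([^}]+)\}", tex_source)
    return [r for r in refs if r not in labels]

tex = r"""
\section{Methodology}\label{sec:methodology}
See Section~\ref{sec:methodology} and Figure~\ref{fig:missing}.
"""
print(broken_refs(tex))  # ['fig:missing']
```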
**`link-checker`** - Validates all links in HTML and Markdown
**Example:**
```
Checking links across project...
📁 HTML Presentation (30 slides)
✅ Internal links: 45/45 valid
✅ External links: 12/12 valid
✅ Asset references: 28/28 valid
📁 Markdown Notes (52 files)
✅ Wiki-style [[links]]: 67/75 valid
⚠️ Broken wiki links (8):
* notes/analysis/stats.md → [[missing_page]]
* notes/ideas/future.md → [[old-experiment]]
* ...
✅ External links: 34/35 valid
⚠️ 1 broken external link:
* http://oldwebsite.com/data → 404 Not Found
📊 Summary
- Total links checked: 185
- Valid: 177 (95.7%)
- Broken: 8 (4.3%)
- Orphaned pages: 2 (no incoming links)
💡 Recommendations:
1. Fix 8 broken wiki links
2. Update 1 broken external link
3. Consider linking to orphaned pages
4. Estimated fix time: 15 minutes
```
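The wiki-link half of this check can be sketched as collecting `[[targets]]` and comparing them to the set of existing note names (in-memory here; a real implementation would glob the `notes/` tree):

```python
# Sketch of wiki-link validation: [[targets]] vs. existing note names.
# Handles the [[page|alias]] form by capturing only the page part.
import re

def broken_wiki_links(pages):
    """pages = {note_name: markdown_text}; returns (source, target) pairs."""
    broken = []
    for name, text in pages.items():
        for target in re.findall(r"\[\[([^\]|]+)", text):
            if target.strip() not in pages:
                broken.append((name, target.strip()))
    return broken

pages = {
    "stats": "See [[data_processing]] and [[missing_page]].",
    "data_processing": "Raw data pipeline notes.",
}
print(broken_wiki_links(pages))  # [('stats', 'missing_page')]
```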
**`cross-reference-checker`** - Validates references across all formats
**Example:**
```
Analyzing cross-references...
📄 LaTeX Paper
- \\ref commands: 25 (23 valid, 2 broken)
- \\cite commands: 33 (all valid)
- Figure refs: 15 (all valid)
- Table refs: 5 (all valid)
🖥️ HTML Presentation
- Internal anchor links: 45 (all valid)
- Figure references: 12 (all valid)
- Paper references: 8 links to LaTeX sections
📝 Markdown Notes
- Internal [[links]]: 75 (67 valid, 8 broken)
- Cross-format refs: 5 links to paper sections
🔗 Cross-Format Consistency
✅ Methodology: Consistent between paper and presentation
✅ Results: Figures match in paper and slides
⚠️ Discussion section in paper not reflected in presentation
💡 Consider adding discussion slide
📊 Overall Reference Health: 96.2%
```
**`bibliography-manager`** - Manages .bib entries and citations
**Example:**
```
Analyzing bibliography...
📚 references.bib
- Total entries: 45
- Used in paper: 33 (73%)
- Unused: 12 (27%)
🔍 Citation Analysis
- Most cited: [Brown2021] (8 times)
- Least cited: [Wang2021] (1 time)
- Average citations: 2.4 per entry
⚠️ Issues Found
1. Unused entries (12):
* [Smith2020] - Never cited (can be removed)
* [Jones2019] - Never cited (can be removed)
* ...
2. Missing fields (3):
* [Brown2021] - Missing 'pages' field
* [Davis2022] - Missing 'doi' field
* [Wilson2020] - Inconsistent author format
3. Duplicate entries (2):
* [Lee2019] and [Lee2019b] - Same paper
* [Miller2020] and [Miller2020a] - Same paper
💡 Recommendations:
1. Remove 12 unused entries → 27% smaller .bib file
2. Merge 2 duplicate entries
3. Complete missing fields for better citations
4. Run: /clean-bibliography to apply fixes
```
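The unused-entry detection reduces to comparing BibTeX keys against `\cite` keys. A minimal sketch (plain `\cite` only; natbib/biblatex variants like `\citep` would need extra patterns):

```python
# Sketch of unused-bibliography detection: BibTeX keys minus cited keys.
import re

def unused_entries(bib_source, tex_source):
    bib_keys = set(re.findall(r"@\w+\{([^,\s]+),", bib_source))
    cited = set()
    for group in re.findall(r"\\cite\{([^}]+)\}", tex_source):
        cited.update(key.strip() for key in group.split(","))
    return sorted(bib_keys - cited)

bib = "@article{Smith2020, title={...}}\n@book{Jones2019, title={...}}\n"
tex = r"As shown in \cite{Smith2020}, the method works."
print(unused_entries(bib, tex))  # ['Jones2019']
```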
### 3. Custom Commands (4)
**`/validate-latex`**
```bash
/validate-latex # Full validation
/validate-latex --refs-only # Only check references
/validate-latex --fix # Auto-fix common issues
```
**`/check-links`**
```bash
/check-links # Check all links
/check-links presentation/ # Only HTML slides
/check-links notes/ # Only Markdown notes
/check-links --external # Include external links
```
**`/clean-bibliography`**
```bash
/clean-bibliography # Interactive cleanup
/clean-bibliography --remove-unused # Auto-remove unused entries
/clean-bibliography --fix-format # Fix formatting issues
```
**`/build-paper`**
```bash
/build-paper # Compile LaTeX to PDF
/build-paper --watch # Auto-compile on changes
/build-paper --validate # Validate before building
```
### 4. Hooks (3)
**`validate_on_save.py`** (PreToolUse)
- Triggers when .tex or .bib files are saved
- Runs quick validation checks
- Alerts if new issues introduced
**`update_references.py`** (PostToolUse)
- Triggers after editing .tex files
- Updates cross-reference index
- Checks for new broken references
**`link_check_on_md_save.py`** (PostToolUse)
- Triggers when .md files are saved
- Validates wiki-style [[links]]
- Alerts if broken links created
### 5. Impact
**Time Savings:**
- Manual LaTeX validation: 30 min/compile → **2 minutes** automated (93% reduction)
- Bibliography cleanup: 45 min/cleanup → **5 minutes** automated (89% reduction)
- Link checking: 30 min/check → **1 minute** automated (97% reduction)
- Cross-reference validation: 20 min/review → **2 minutes** automated (90% reduction)
- **Total: 125 min → 10 min** (92% time reduction per validation cycle)
Over typical paper lifecycle (50 validation cycles):
- Manual: **104 hours**
- Automated: **8 hours**
- **Savings: 96 hours (92%)**
**Quality Improvements:**
- Cross-reference accuracy: Manual checking → **100% validated** automatically
- Bibliography: 12 unused entries → **0 unused** (27% smaller .bib)
- Link health: 92% valid → **100% valid** (8 broken links fixed)
- Compilation success rate: 80% → **100%** (catches issues before compile)
**Concrete Fixes Applied:**
- Fixed 2 broken LaTeX \\ref references
- Removed 12 unused bibliography entries
- Fixed 8 broken Markdown wiki links
- Updated 1 broken external link
- Merged 2 duplicate .bib entries
- Completed 3 missing bibliography fields
## Example Results
### Before Automation
**LaTeX Compilation:**
```
! LaTeX Error: Reference `fig:missing' on page 12 undefined.
! LaTeX Error: Reference `sec:old-name' on page 23 undefined.
Warning: Citation 'Smith2020' unused
Warning: Citation 'Jones2019' unused
... (10 more unused citations)
Output: main.pdf generated with warnings
```
**Manual Link Checking:**
```
Manually clicking through 185 links...
Found broken link after 15 minutes
Found another after 20 minutes
Gave up after 30 minutes, unsure if all checked
```
**Bibliography Management:**
```
45 entries in .bib file
Manually grep for each to see if cited
Takes 45 minutes to identify 12 unused entries
Not sure about duplicates or format issues
```
### After Automation
**`/validate-latex` Output:**
```
✅ Running comprehensive LaTeX validation...
📊 Results (completed in 2 minutes):
✅ Document structure: Valid
⚠️ Cross-references: 2 issues found
✅ Bibliography: All citations valid
⚠️ Unused entries: 12 found
✅ Compilation: Success
🔧 Auto-fix available:
Run: /validate-latex --fix
```
**`/check-links` Output:**
```
✅ Link validation complete (1 minute):
- 185 total links
- 177 valid (95.7%)
- 8 broken (4.3%)
📋 Detailed report: reports/link-validator.json
💡 Run: /check-links --fix to auto-fix wiki links
```
**`/clean-bibliography` Output:**
```
✅ Bibliography analysis complete (5 minutes):
- Removed 12 unused entries
- Merged 2 duplicates
- Fixed 3 incomplete entries
- New size: 33 entries (73% of original)
💾 Backup: references.bib.backup
✅ Updated: references.bib
```
## Agent Communication
**`reports/latex-structure-analyzer.json`** (excerpt):
```json
{
"agent_name": "latex-structure-analyzer",
"summary": "Paper structure is sound. Found 2 broken cross-references and compilation warnings.",
"findings": [
{
"type": "broken_reference",
"severity": "high",
"location": "chapters/03_methodology.tex:145",
"description": "\\ref{fig:missing} references non-existent label",
"recommendation": "Add \\label{fig:missing} to appropriate figure or fix reference"
},
{
"type": "unused_bibliography",
"severity": "medium",
"description": "12 bibliography entries never cited in text",
"entries": ["Smith2020", "Jones2019", ...],
"recommendation": "Remove unused entries or add citations where appropriate"
}
],
"metrics": {
"total_chapters": 6,
"total_sections": 24,
"total_references": 25,
"valid_references": 23,
"broken_references": 2,
"bibliography_entries": 45,
"cited_entries": 33,
"unused_entries": 12
},
"automation_impact": {
"time_saved": "30 min/validation (manual checking)",
"quality_improvement": "100% reference validation vs. manual spot-checking"
}
}
```
**`reports/link-validator.json`** (excerpt):
```json
{
"agent_name": "link-validator",
"summary": "Found 8 broken links across HTML and Markdown. 95.7% link health.",
"findings": [
{
"type": "broken_wiki_link",
"severity": "medium",
"location": "notes/analysis/stats.md:23",
"description": "[[missing_page]] does not exist",
"recommendation": "Create missing_page.md or update link to correct page"
},
{
"type": "broken_external_link",
"severity": "high",
"location": "notes/literature_review.md:156",
"description": "http://oldwebsite.com/data returns 404",
"recommendation": "Update to current URL or mark as archived"
}
],
"metrics": {
"total_links": 185,
"valid_links": 177,
"broken_links": 8,
"link_health_percentage": 95.7,
"html_links": 57,
"markdown_wiki_links": 75,
"markdown_external_links": 35,
"orphaned_pages": 2
},
"automation_impact": {
"time_saved": "30 min/check (manual link clicking)",
"quality_improvement": "100% coverage vs. ~60% manual coverage"
}
}
```
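Because every agent writes the same report shape (`summary`, `findings`, `metrics`), downstream tooling can aggregate reports generically. A sketch of severity aggregation, assuming the report structure shown above:

```python
import json
from collections import Counter
from pathlib import Path

def load_reports(reports_dir: str) -> list[dict]:
    """Read every agent report JSON file in the reports directory."""
    return [json.loads(p.read_text()) for p in sorted(Path(reports_dir).glob("*.json"))]

def summarize_findings(reports: list[dict]) -> Counter:
    """Count findings by severity across all agent reports."""
    counts = Counter()
    for report in reports:
        for finding in report.get("findings", []):
            counts[finding.get("severity", "unknown")] += 1
    return counts
```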
## Result
Researcher now has:
- ✅ **100% validated cross-references** - No more broken `\ref` in paper
- ✅ **Clean bibliography** - 27% smaller, no unused entries
- ✅ **All links validated** - 8 broken links fixed, 100% health
- ✅ **Consistent formatting** - Across LaTeX, HTML, and Markdown
- ✅ **Fast compilation** - Issues caught before build
- ✅ **96 hours saved** over project lifecycle (92% reduction)
**Before vs After:**
| Metric | Before | After | Improvement |
|--------|--------|-------|-------------|
| Cross-reference validation | Manual, 30 min | 2 min automated | 93% faster |
| Bibliography unused entries | 12 (27%) | 0 (0%) | 100% clean |
| Link health | 92% (manual partial check) | 100% (full automated) | +8% |
| Validation coverage | ~60% (time limited) | 100% (comprehensive) | +40% |
| Time per validation cycle | 125 min | 10 min | 92% reduction |
| Time over project (50 cycles) | 104 hours | 8 hours | 96 hours saved |
**Ongoing Benefits:**
- Every save triggers validation
- New issues caught immediately
- No broken references in final paper
- Bibliography stays clean
- All links remain valid
- Compilation always succeeds
**Publication Quality:**
- Zero broken cross-references in submitted paper
- Professional bibliography (no unused entries)
- All presentation links work during defense
- Documentation fully interconnected
- Reviewers praise technical quality
# Example: Web Application Automation
This example shows what the meta-automation-architect generates for a typical Next.js web application project.
## Project Context
- **Type**: Web Application
- **Stack**: Next.js, TypeScript, React, TailwindCSS
- **Team Size**: Small (3-5 developers)
- **Pain Points**: Testing coverage, security checks, performance optimization
- **Priority**: Testing automation and code quality
## User Conversation
```
User: "Set up automation for my Next.js project"
Meta-Skill: Analyzing project...
- Detected: Web Application (92% confidence)
- Found package.json with Next.js and React
- Found TypeScript configuration
- Found src/app/ directory (App Router)
- Found TailwindCSS configuration
Meta-Skill: What are your main pain points?
Options:
- Low test coverage (RECOMMENDED: 15 test files for 87 components)
- Frequent bug fixes (RECOMMENDED: 23 bug-fix commits last month)
- Performance issues
- Security concerns
- Documentation gaps (RECOMMENDED: Missing API docs)
User: [Selects: Low test coverage, Security concerns]
Meta-Skill: How many agents should we create?
- Few (2-3): Basic analysis
- Moderate (4-6): Multi-domain coverage (RECOMMENDED)
- Many (7+): Comprehensive automation
User: [Selects: Moderate]
Meta-Skill: Generating automation system...
Session ID: a1b2c3d4-e5f6-7890-abcd-ef1234567890
```
## Generated Automation System
### 1. Custom Subagents (6)
All agents created in `.claude/agents/`:
#### Analysis Agents (Run in Parallel)
**security-analyzer.md**
- Scans for XSS, CSRF, SQL injection vulnerabilities
- Checks authentication/authorization patterns
- Identifies exposed secrets or API keys
- Reviews dependency vulnerabilities
**performance-analyzer.md**
- Analyzes bundle sizes and code splitting
- Identifies slow rendering components
- Checks for N+1 query patterns
- Reviews asset optimization
**code-quality-analyzer.md**
- Measures code complexity
- Detects duplication
- Checks naming conventions
- Reviews error handling patterns
**testing-analyzer.md**
- Measures test coverage
- Identifies untested critical paths
- Reviews test quality and patterns
- Suggests testing strategies
#### Implementation Agents (Run After Analysis)
**skill-generator.md**
- Creates custom skills based on findings
- Generated: `tdd-workflow`, `api-doc-generator`, `security-checker`
**command-generator.md**
- Creates slash commands for common tasks
- Generated: `/test-fix`, `/security-scan`, `/perf-check`
### 2. Custom Skills (3)
**`.claude/skills/tdd-workflow/SKILL.md`**
```markdown
---
name: tdd-workflow
description: Enforces test-driven development by requiring tests before implementation
allowed-tools: ["Read", "Write", "Bash", "Grep"]
---
# TDD Workflow
Automatically invoked when user requests new features or modifications.
## Process
1. Check if tests exist for the target code
2. If no tests, create test file first
3. Write failing test
4. Implement minimal code to pass
5. Refactor while keeping tests green
6. Run full test suite
[... detailed implementation ...]
```
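The red-green loop in steps 3-5, in miniature (shown in Python for illustration; the generated skill drives the project's own TypeScript test runner):

```python
import re

# Step 3: write the failing test first
def test_slugify():
    assert slugify("Hello World!") == "hello-world"

# Step 4: minimal implementation to make the test pass
def slugify(text: str) -> str:
    """Lowercase, strip non-alphanumerics, join words with hyphens."""
    words = re.findall(r'[a-z0-9]+', text.lower())
    return "-".join(words)
```

Step 5 then refactors `slugify` freely, re-running `test_slugify` to stay green.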
**`.claude/skills/api-doc-generator/SKILL.md`**
```markdown
---
name: api-doc-generator
description: Generates OpenAPI documentation from Next.js API routes
allowed-tools: ["Read", "Write", "Grep", "Glob", "Bash"]
---
# API Documentation Generator
Automatically generates OpenAPI 3.0 documentation from your API routes.
## Process
1. Scan src/app/api/ for route handlers
2. Extract types from TypeScript
3. Generate OpenAPI schemas
4. Create interactive documentation
5. Validate against actual implementation
[... detailed implementation ...]
```
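Step 1's route scan can be approximated with a regex over exported handler names. A sketch (a real implementation would use the TypeScript compiler API rather than regexes):

```python
import re

HTTP_METHODS = {"GET", "POST", "PUT", "PATCH", "DELETE"}

def scan_route_handlers(source: str) -> list[str]:
    """Extract exported HTTP method handlers from a Next.js route file."""
    # Matches: export function GET / export async function POST / ...
    exported = re.findall(r'export\s+(?:async\s+)?function\s+([A-Z]+)', source)
    return [name for name in exported if name in HTTP_METHODS]
```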
**`.claude/skills/security-checker/SKILL.md`**
```markdown
---
name: security-checker
description: Quick security validation for code changes
allowed-tools: ["Read", "Grep", "Bash"]
---
# Security Checker
Runs security checks on code before commits.
## Checks
- XSS vulnerabilities in JSX
- CSRF protection on mutations
- Exposed secrets or API keys
- Insecure dependencies
- Missing input validation
[... detailed implementation ...]
```
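The exposed-secrets check reduces to pattern matching. A sketch with a few illustrative patterns (real scanners ship far larger, tuned rule sets):

```python
import re

SECRET_PATTERNS = [
    (r'(?i)api[_-]?key\s*[:=]\s*["\'][A-Za-z0-9_\-]{16,}["\']', "hardcoded API key"),
    (r'(?i)password\s*[:=]\s*["\'][^"\']+["\']', "hardcoded password"),
    (r'AKIA[0-9A-Z]{16}', "AWS access key ID"),
]

def scan_for_secrets(source: str) -> list[str]:
    """Return a description for each secret-like pattern found in the source."""
    return [label for pattern, label in SECRET_PATTERNS
            if re.search(pattern, source)]
```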
### 3. Custom Commands (3)
**`.claude/commands/test-fix.md`**
````markdown
---
description: Run tests and iteratively fix failures
allowed-tools: ["Bash", "Read", "Write", "Grep"]
---
# Test Fix Command
Runs your test suite and automatically fixes failures.
## Usage
```bash
/test-fix
/test-fix src/components
/test-fix --watch
```
## Process
1. Run test suite
2. Identify failures
3. Analyze failure causes
4. Propose fixes
5. Apply fixes with user approval
6. Re-run tests
7. Repeat until green
[... detailed implementation ...]
````
**`.claude/commands/security-scan.md`**
````markdown
---
description: Quick security audit of project
allowed-tools: ["Bash", "Read", "Grep"]
---
# Security Scan
Fast security check for common vulnerabilities.
## Usage
```bash
/security-scan
/security-scan src/
/security-scan --full
```
[... detailed implementation ...]
````
**`.claude/commands/perf-check.md`**
```markdown
---
description: Analyze performance and bundle size
allowed-tools: ["Bash", "Read", "Glob"]
---
# Performance Check
Analyzes bundle size, rendering performance, and optimization opportunities.
[... detailed implementation ...]
```
### 4. Hooks (2)
**`.claude/hooks/security_validation.py`**
```python
#!/usr/bin/env python3
"""
Security Validation Hook
Type: PreToolUse
Blocks writes to sensitive files and validates security patterns
"""
import sys
import json
def main():
context = json.load(sys.stdin)
tool = context.get('tool')
params = context.get('parameters', {})
# Block writes to sensitive files
if tool in ['Write', 'Edit']:
file_path = params.get('file_path', '')
if file_path.endswith('.env') or 'secrets' in file_path.lower():
print("❌ Blocked: Writing to sensitive file", file=sys.stderr)
sys.exit(2) # Block operation
# Validate API route security
if tool == 'Write' and '/api/' in params.get('file_path', ''):
content = params.get('content', '')
if 'export async function POST' in content:
if 'csrf' not in content.lower():
print("⚠️ Warning: API route missing CSRF protection", file=sys.stderr)
sys.exit(0)
if __name__ == '__main__':
main()
```
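The hook's blocking rule is easiest to verify in isolation: factor it into a pure predicate and exercise it against the same payload shape the hook reads from stdin. A sketch, not part of the generated file:

```python
def is_sensitive_path(file_path: str) -> bool:
    """Mirror the hook's blocking rule for writes to sensitive files."""
    return file_path.endswith('.env') or 'secrets' in file_path.lower()

def should_block(payload: dict) -> bool:
    """Apply the rule to a PreToolUse payload like {'tool': ..., 'parameters': ...}."""
    if payload.get('tool') not in ('Write', 'Edit'):
        return False
    return is_sensitive_path(payload.get('parameters', {}).get('file_path', ''))
```

With this split, the hook script itself only handles I/O and the `sys.exit(2)` signaling.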
**`.claude/hooks/run_tests.py`**
```python
#!/usr/bin/env python3
"""
Run Tests Hook
Type: Stop
Runs test suite when session ends
"""
import sys
import json
import subprocess
def main():
context = json.load(sys.stdin)
# Run tests
print("🧪 Running test suite...", file=sys.stderr)
result = subprocess.run(['npm', 'test', '--', '--run'],
capture_output=True, text=True)
if result.returncode != 0:
print("❌ Tests failed:", file=sys.stderr)
print(result.stdout, file=sys.stderr)
else:
print("✅ All tests passed", file=sys.stderr)
sys.exit(0)
if __name__ == '__main__':
main()
```
### 5. Settings Configuration
**`.claude/settings.json`** (updated)
```json
{
"hooks": {
"PreToolUse": {
"commands": [".claude/hooks/security_validation.py"]
},
"Stop": {
"commands": [".claude/hooks/run_tests.py"]
}
},
"mcpServers": {
"github": {
"command": "mcp-github",
"env": {
"GITHUB_TOKEN": "${GITHUB_TOKEN}"
}
}
}
}
```
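A quick sanity check that every configured hook script actually exists on disk, assuming the `hooks` schema shown above (the `exists` parameter is injectable purely for testing):

```python
import os

def missing_hook_scripts(settings: dict, exists=os.path.exists) -> list[str]:
    """List configured hook scripts that are absent on disk."""
    return [command
            for config in settings.get("hooks", {}).values()
            for command in config.get("commands", [])
            if not exists(command)]
```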
### 6. Documentation
**`.claude/AUTOMATION_README.md`**
````markdown
# Automation System for Next.js Project
## Generated On
2025-01-23
## Session ID
a1b2c3d4-e5f6-7890-abcd-ef1234567890
## What Was Created
### Analysis Phase
- **security-analyzer**: Found 3 high-severity issues requiring attention
- **performance-analyzer**: Identified 5 optimization opportunities
- **code-quality-analyzer**: Detected 12 code smells and complexity issues
- **testing-analyzer**: Test coverage at 45%, recommended 80%+ for critical paths
### Generated Artifacts
#### Custom Agents (6)
- **security-analyzer**: Scans for vulnerabilities and security issues
- **performance-analyzer**: Identifies performance bottlenecks
- **code-quality-analyzer**: Evaluates code maintainability
- **testing-analyzer**: Measures and improves test coverage
- **skill-generator**: Created 3 custom skills
- **command-generator**: Created 3 slash commands
#### Skills (3)
- **tdd-workflow**: Enforces test-driven development workflow
- **api-doc-generator**: Auto-generates API documentation from routes
- **security-checker**: Quick security validation for code changes
#### Commands (3)
- **/test-fix**: Run tests and fix failures iteratively
- **/security-scan**: Quick security audit
- **/perf-check**: Analyze performance and bundle size
#### Hooks (2)
- **PreToolUse**: Security validation (blocks sensitive file writes)
- **Stop**: Run test suite on session end
#### MCP Servers (1)
- **github**: PR automation and issue tracking
## Quick Start
1. Test an agent:
```bash
"Use the security-analyzer agent on src/app"
```
2. Try a skill:
```bash
"Implement user authentication feature"
# tdd-workflow skill auto-invokes
```
3. Execute a command:
```bash
/test-fix src/components
```
4. Hooks automatically run:
- Security validation on file writes
- Tests run when you end the session
## Customization
All generated automation can be customized:
- Edit agents in `.claude/agents/`
- Modify skills in `.claude/skills/`
- Update commands in `.claude/commands/`
- Adjust hooks in `.claude/hooks/`
[... more documentation ...]
````
**`.claude/QUICK_REFERENCE.md`**
```markdown
# Quick Reference
## Available Agents
- security-analyzer
- performance-analyzer
- code-quality-analyzer
- testing-analyzer
- skill-generator
- command-generator
## Available Commands
- /test-fix
- /security-scan
- /perf-check
## Available Skills
- tdd-workflow
- api-doc-generator
- security-checker
## Hooks Configured
- PreToolUse: security_validation.py
- Stop: run_tests.py
## MCP Servers
- github
## Usage Examples
### Use an agent:
"Use the security-analyzer agent to check src/app/api"
### Invoke a skill:
"Implement new feature X" (tdd-workflow auto-invokes)
"Generate API docs" (api-doc-generator auto-invokes)
### Execute command:
/test-fix src/
/security-scan
/perf-check
### Check hooks:
cat .claude/settings.json | jq '.hooks'
## Session Data
All agent communication is logged in:
`.claude/agents/context/a1b2c3d4-e5f6-7890-abcd-ef1234567890/`
Review this directory to understand what happened during generation.
```
## Agent Communication Example
During generation, agents communicated via ACP:
**`coordination.json`**
```json
{
"session_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
"started_at": "2025-01-23T10:00:00Z",
"project_type": "web_app",
"agents": {
"security-analyzer": {
"status": "completed",
"started_at": "2025-01-23T10:00:00Z",
"completed_at": "2025-01-23T10:05:00Z",
"report_path": "reports/security-analyzer.json"
},
"performance-analyzer": {
"status": "completed",
"started_at": "2025-01-23T10:00:01Z",
"completed_at": "2025-01-23T10:06:00Z",
"report_path": "reports/performance-analyzer.json"
},
"testing-analyzer": {
"status": "completed",
"started_at": "2025-01-23T10:00:02Z",
"completed_at": "2025-01-23T10:07:00Z",
"report_path": "reports/testing-analyzer.json"
},
"skill-generator": {
"status": "completed",
"started_at": "2025-01-23T10:08:00Z",
"completed_at": "2025-01-23T10:12:00Z",
"report_path": "reports/skill-generator.json"
}
}
}
```
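Per-agent runtimes fall out of these timestamps directly. A sketch assuming the `coordination.json` shape above:

```python
from datetime import datetime

def agent_durations(coordination: dict) -> dict:
    """Compute per-agent runtime in seconds from coordination.json timestamps."""
    durations = {}
    for name, info in coordination.get("agents", {}).items():
        if info.get("status") != "completed":
            continue  # skip agents still running or failed
        # Normalize the trailing "Z" so older Pythons can parse it
        start = datetime.fromisoformat(info["started_at"].replace("Z", "+00:00"))
        end = datetime.fromisoformat(info["completed_at"].replace("Z", "+00:00"))
        durations[name] = (end - start).total_seconds()
    return durations
```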
**`messages.jsonl`** (excerpt)
```json
{"timestamp":"2025-01-23T10:00:00Z","from":"security-analyzer","type":"status","message":"Starting security analysis"}
{"timestamp":"2025-01-23T10:02:15Z","from":"security-analyzer","type":"finding","severity":"high","data":{"title":"Missing CSRF protection","location":"src/app/api/users/route.ts"}}
{"timestamp":"2025-01-23T10:05:00Z","from":"security-analyzer","type":"completed","message":"Found 3 high-severity issues"}
{"timestamp":"2025-01-23T10:08:00Z","from":"skill-generator","type":"status","message":"Reading analysis reports"}
{"timestamp":"2025-01-23T10:09:30Z","from":"skill-generator","type":"status","message":"Generating TDD workflow skill"}
```
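Because the stream is JSON Lines, each record can be filtered independently; for example, pulling high-severity findings out of the message log:

```python
import json

def high_severity_findings(jsonl_text: str) -> list[dict]:
    """Filter an ACP message stream for high-severity finding events."""
    findings = []
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue  # tolerate blank lines
        message = json.loads(line)
        if message.get("type") == "finding" and message.get("severity") == "high":
            findings.append(message)
    return findings
```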
**`reports/security-analyzer.json`** (excerpt)
```json
{
"agent_name": "security-analyzer",
"timestamp": "2025-01-23T10:05:00Z",
"status": "completed",
"summary": "Found 3 high-severity security issues requiring immediate attention",
"findings": [
{
"type": "issue",
"severity": "high",
"title": "Missing CSRF Protection",
"description": "API routes lack CSRF token validation",
"location": "src/app/api/users/route.ts:12",
"recommendation": "Add CSRF token validation middleware",
"example": "import { validateCsrf } from '@/lib/csrf';"
}
],
"recommendations_for_automation": [
"Skill: CSRF validator that checks all API routes",
"Hook: PreToolUse hook to validate new API routes",
"Command: /security-scan for quick checks"
]
}
```
## Result
User now has a complete automation system:
- ✅ 6 specialized agents that can be run on-demand
- ✅ 3 skills that auto-invoke for common patterns
- ✅ 3 commands for quick workflows
- ✅ 2 hooks for automatic validation
- ✅ Complete documentation
- ✅ All agents communicated via ACP protocol
- ✅ Ready to use immediately
Total generation time: ~15 minutes (mostly analysis phase)