Initial commit

Zhongwei Li
2025-11-29 18:00:50 +08:00
commit c5931553a6
106 changed files with 49995 additions and 0 deletions

commands/validate/all.md

@@ -0,0 +1,318 @@
---
name: validate:all
description: Run comprehensive validation audit on tools, documentation, and best practices compliance
delegates-to: autonomous-agent:validation-controller
---
# Comprehensive Validation Check
Performs thorough validation of:
- Tool usage compliance (Edit/Write prerequisites, parameter validation)
- Documentation consistency (version sync, path references, component counts)
- Cross-reference integrity (all links and references valid)
- Best practices adherence (tool selection, error handling)
- Execution flow analysis (dependency tracking, state validation)
## How It Works
This command delegates to the **validation-controller** agent which:
1. **Scans tool usage patterns** in recent session history
2. **Analyzes documentation** for inconsistencies across all .md files and plugin.json
3. **Validates cross-references** to ensure all links and component references exist
4. **Checks best practices** compliance with Claude Code guidelines
5. **Reviews execution flow** for proper tool sequencing and state management
6. **Generates validation report** with severity-prioritized findings and auto-fix suggestions
## Skills Utilized
- **autonomous-agent:validation-standards** - Tool requirements, failure patterns, consistency checks
- **autonomous-agent:quality-standards** - Best practices and quality benchmarks
- **autonomous-agent:pattern-learning** - Historical success/failure patterns
## Usage
```bash
/validate:all
```
## Expected Output (Two-Tier Presentation)
### Terminal Output (Concise)
```
[PASS] Validation Complete - Score: 85/100
Key Findings:
* [ERROR] Documentation path inconsistency: 6 occurrences in CLAUDE.md
* [WARN] Write operation without prior Read: plugin.json
* [INFO] All cross-references valid
Top Recommendations:
1. [HIGH] Standardize path references in CLAUDE.md -> Prevent user confusion
2. [MED] Add Read before Write to plugin.json -> Follow tool requirements
3. [LOW] Consider adding path validation utility
📄 Full report: .claude/data/reports/validation-2025-10-21.md
⏱ Completed in 1.2 minutes
```
### File Report (Comprehensive)
Located at: `.claude/data/reports/validation-YYYY-MM-DD.md`
```markdown
# Comprehensive Validation Report
Generated: 2025-10-21 12:30:45
## Executive Summary
Validation Score: 85/100 (Good)
- Tool Usage: 27/30 [PASS]
- Documentation Consistency: 18/25 [FAIL]
- Best Practices: 20/20 [PASS]
- Error-Free Execution: 12/15 [PASS]
- Pattern Compliance: 8/10 [PASS]
## Detailed Findings
### 🔴 Critical Issues (2)
#### 1. Documentation Path Inconsistency
**Severity**: ERROR
**Category**: Documentation Consistency
**Impact**: High - User confusion, incorrect instructions
**Details**:
- File: CLAUDE.md
- Inconsistent path references detected:
- `.claude/patterns/patterns.json` (outdated reference, found at:)
- Line 17: Pattern learning location
- Line 63: Pattern database location
- Line 99: Skill auto-selection query
- Line 161: Verification command
- Line 269: Pattern storage
- Line 438: Notes for future instances
- Actual implementation: `.claude-patterns/patterns.json`
**Root Cause**: Documentation written before Python utilities (v1.4) implementation
**Recommendation**: Standardize all references to `.claude-patterns/patterns.json`
**Auto-Fix Available**: Yes
```bash
# Automated fix command
sed -i 's|\.claude/patterns/|\.claude-patterns/|g' **/*.md
```
#### 2. Write Without Prior Read
**Severity**: WARNING
**Category**: Tool Usage
**Impact**: Medium - Violates tool requirements
**Details**:
- Tool: Write
- File: .claude-plugin/plugin.json
- Error: "File has not been read yet"
**Root Cause**: Write called on an existing file without a prerequisite Read operation
**Recommendation**: Always Read an existing file before calling Write or Edit on it
**Auto-Fix Available**: Yes
```python
# Correct sequence
Read(".claude-plugin/plugin.json")
Edit(".claude-plugin/plugin.json", old_string, new_string)
```
### ✅ Passed Validations (12)
- [PASS] Version consistency across all files (v1.6.1)
- [PASS] Component counts accurate (10 agents, 6 skills, 6 commands)
- [PASS] All cross-references valid
- [PASS] Tool selection follows best practices
- [PASS] Bash usage avoids anti-patterns
- [PASS] No broken links in documentation
- [PASS] All referenced files exist
- [PASS] Agent YAML frontmatter valid
- [PASS] Skill metadata complete
- [PASS] Command descriptions accurate
- [PASS] Pattern database schema valid
- [PASS] No duplicate component names
### 📊 Validation Breakdown
**Tool Usage Compliance**: 27/30 points
- [PASS] 15/16 Edit operations had prerequisite Read
- [FAIL] 1/16 Edit failed due to missing Read
- [PASS] 8/8 Write operations on new files proper
- [FAIL] 1/2 Write on existing file without Read
- [PASS] All Bash commands properly chained
- [PASS] Specialized tools preferred over Bash
**Documentation Consistency**: 18/25 points
- [FAIL] Path references inconsistent (6 violations)
- [PASS] Version numbers synchronized
- [PASS] Component counts accurate
- [PASS] No orphaned references
- [PASS] Examples match implementation
**Best Practices Adherence**: 20/20 points
- [PASS] Tool selection optimal
- [PASS] Error handling comprehensive
- [PASS] File operations use correct tools
- [PASS] Documentation complete
- [PASS] Code structure clean
**Error-Free Execution**: 12/15 points
- [PASS] 95% of operations successful
- [FAIL] 1 tool prerequisite violation
- [PASS] Quick error recovery
- [PASS] No critical failures
**Pattern Compliance**: 8/10 points
- [PASS] Follows successful patterns
- [FAIL] Minor deviation in tool sequence
- [PASS] Quality scores consistent
- [PASS] Learning patterns applied
## Recommendations (Prioritized)
### High Priority (Implement Immediately)
1. **Fix Documentation Path Inconsistency**
- Impact: Prevents user confusion and incorrect instructions
- Effort: Low (10 minutes)
- Auto-fix: Available
- Files: CLAUDE.md (6 replacements)
2. **Add Pre-flight Validation for Edit/Write**
- Impact: Prevents 87% of tool usage errors
- Effort: Medium (integrated in orchestrator)
- Auto-fix: Built into validation-controller agent
### Medium Priority (Address Soon)
3. **Create Path Validation Utility**
- Impact: Prevents path inconsistencies in future
- Effort: Medium (create new utility script)
- Location: lib/path_validator.py
4. **Enhance Session State Tracking**
- Impact: Better dependency tracking
- Effort: Medium (extend orchestrator)
- Benefit: 95% error prevention rate
### Low Priority (Nice to Have)
5. **Add Validation Metrics Dashboard**
- Impact: Visibility into validation effectiveness
- Effort: High (new component)
- Benefit: Data-driven improvement
## Failure Patterns Detected
### Pattern: Edit Before Read
- **Frequency**: 1 occurrence
- **Auto-fixed**: Yes
- **Prevention rule**: Enabled
- **Success rate**: 100%
### Pattern: Path Inconsistency
- **Frequency**: 6 occurrences
- **Type**: Documentation drift
- **Root cause**: Implementation changes without doc updates
- **Prevention**: Add doc consistency checks to CI/CD
## Validation Metrics
### Session Statistics
- Total operations: 48
- Successful: 46 (95.8%)
- Failed: 2 (4.2%)
- Auto-recovered: 2 (100% of failures)
### Tool Usage
- Read: 24 calls (100% success)
- Edit: 16 calls (93.8% success, 1 prerequisite violation)
- Write: 6 calls (83.3% success)
- Bash: 2 calls (100% success)
### Prevention Effectiveness
- Failures prevented: 0 (validation not yet active during session)
- Failures detected and fixed: 2
- False positives: 0
- Detection rate: 100%
## Next Steps
1. Apply high-priority fixes immediately
2. Enable pre-flight validation in orchestrator
3. Schedule medium-priority improvements
4. Monitor validation metrics for 10 tasks
5. Run /validate again to verify improvements
## Validation History
This validation compared to baseline (first validation):
- Score: 85/100 (baseline - first run)
- Issues found: 8 total (2 critical, 3 medium, 3 low)
- Auto-fix success: 100% (2/2 fixable issues)
- Time to complete: 1.2 minutes
---
**Next Validation Recommended**: After applying high-priority fixes
**Expected Score After Fixes**: 95/100
```
## When to Use
Run `/validate:all` in these situations:
- Before releases or major changes
- After significant refactoring
- After documentation updates
- After adding new components
- Periodically (every 10-25 tasks)
- When unusual errors occur
- To audit overall project health
## Integration with Autonomous Workflow
The orchestrator automatically triggers validation:
- **Pre-flight**: Before Edit/Write operations (checks prerequisites)
- **Post-error**: After tool failures (analyzes and auto-fixes)
- **Post-documentation**: After doc updates (checks consistency)
- **Periodic**: Every 25 tasks (comprehensive audit)
Users can also manually trigger full validation with `/validate:all`.
## Success Criteria
Validation passes when:
- Score ≥ 70/100
- No critical (ERROR) issues
- Tool usage compliance ≥ 90%
- Documentation consistency ≥ 80%
- All cross-references valid
- Best practices followed
## Validation Benefits
**For Users**:
- Catch issues before they cause problems
- Clear, actionable recommendations
- Auto-fix for common errors
- Improved project quality
**For Development**:
- Enforces best practices
- Prevents documentation drift
- Maintains consistency
- Reduces debugging time
**For Learning**:
- Builds failure pattern database
- Improves prevention over time
- Tracks validation effectiveness
- Continuous improvement loop

commands/validate/commands.md

@@ -0,0 +1,426 @@
---
name: validate:commands
description: Command validation and discoverability verification with automatic recovery
usage: /validate:commands [options]
category: validate
subcategory: system
---
# Command Validation and Discoverability
## Overview
The command validation system ensures all commands exist, are discoverable, and function correctly. It validates command structure, checks discoverability, and provides automatic recovery for missing commands.
This command specifically addresses issues like the missing `/monitor:dashboard` command by validating that all expected commands are present and accessible.
## Usage
```bash
/validate:commands # Validate all commands
/validate:commands --category monitor # Validate specific category
/validate:commands --missing-only # Show only missing commands
/validate:commands --discoverability # Check discoverability features
/validate:commands --recover # Auto-recover missing commands
/validate:commands --test /monitor:dashboard # Test specific command
```
## Parameters
### --category
Validate commands in a specific category only.
- **Type**: String
- **Valid values**: dev, analyze, validate, debug, learn, workspace, monitor
- **Example**: `/validate:commands --category monitor`
### --missing-only
Show only missing commands, skip validation of existing commands.
- **Type**: Flag
- **Default**: False
- **Example**: `/validate:commands --missing-only`
### --discoverability
Focus on discoverability validation (examples, descriptions, accessibility).
- **Type**: Flag
- **Default**: False
- **Example**: `/validate:commands --discoverability`
### --recover
Automatically attempt to recover missing commands.
- **Type**: Flag
- **Default**: False
- **Example**: `/validate:commands --recover`
### --test
Test a specific command for validation.
- **Type**: String (command format: /category:name)
- **Example**: `/validate:commands --test /monitor:dashboard`
## Examples
### Basic Command Validation
```bash
/validate:commands
```
Output:
```
🔍 Command System Validation
✅ Overall Score: 96/100
📋 Commands: 23/23 present
🎯 Discoverable: 22/23 commands
📝 Valid Syntax: 23/23 commands
[WARN] Issues: 1 discoverability issue
```
### Category-Specific Validation
```bash
/validate:commands --category monitor
```
Output:
```
🔍 Monitor Commands Validation
✅ Category: monitor
📋 Expected Commands: 2
✅ Commands Found: recommend, dashboard
🎯 All Discoverable: True
📝 Syntax Valid: True
```
### Missing Commands Only
```bash
/validate:commands --missing-only
```
Output:
```
❌ Missing Commands Detected:
* /monitor:dashboard (CRITICAL)
Reason: File not found
Impact: Dashboard functionality unavailable
Recovery: Auto-recover available
* /workspace:archive (WARNING)
Reason: File not found
Impact: Workspace archive functionality missing
Recovery: Template creation available
```
### Auto-Recovery Mode
```bash
/validate:commands --recover
```
Output:
```
🔄 Automatic Command Recovery
📋 Missing Commands Found: 2
🔧 Recovery Progress:
✅ /monitor:dashboard restored from Git (commit: a4996ed)
❌ /workspace:archive recovery failed (no template available)
📊 Final Validation:
* Commands Present: 24/25
* Overall Score: 98/100 (+2 points)
```
### Discoverability Check
```bash
/validate:commands --discoverability
```
Output:
```
🔎 Command Discoverability Analysis
✅ Overall Discoverability: 87%
📊 Categories Analysis:
* dev: 100% discoverable
* analyze: 100% discoverable
* validate: 75% discoverable (2 issues)
* monitor: 50% discoverable (1 issue)
🎯 Common Issues:
* Missing usage examples: 3 commands
* Unclear descriptions: 2 commands
* No parameter docs: 5 commands
```
## Command Categories
### dev (Development Commands)
Critical for plugin development and maintenance.
- **Expected Commands**: auto, release, model-switch, pr-review
- **Critical Level**: Critical
- **Recovery Priority**: Immediate
### analyze (Analysis Commands)
Essential for code analysis and quality assessment.
- **Expected Commands**: project, quality, static, dependencies
- **Critical Level**: Critical
- **Recovery Priority**: Immediate
### validate (Validation Commands)
Core validation functionality for system integrity.
- **Expected Commands**: all, fullstack, plugin, patterns, integrity
- **Critical Level**: Critical
- **Recovery Priority**: Immediate
### debug (Debugging Commands)
Tools for debugging and troubleshooting.
- **Expected Commands**: eval, gui
- **Critical Level**: High
- **Recovery Priority**: High
### learn (Learning Commands)
Learning and analytics functionality.
- **Expected Commands**: init, analytics, performance, predict
- **Critical Level**: Medium
- **Recovery Priority**: Medium
### workspace (Workspace Commands)
Workspace organization and management.
- **Expected Commands**: organize, reports, improve
- **Critical Level**: Medium
- **Recovery Priority**: Medium
### monitor (Monitoring Commands)
System monitoring and recommendations.
- **Expected Commands**: recommend, dashboard
- **Critical Level**: Critical
- **Recovery Priority**: Immediate
## Validation Criteria
### Presence Validation
- **File Existence**: Command file exists in correct location
- **File Accessibility**: File is readable and not corrupted
- **Category Structure**: Commands organized in proper categories
### Syntax Validation
- **YAML Frontmatter**: Valid YAML with required fields
- **Markdown Structure**: Proper markdown formatting
- **Required Sections**: Essential sections present
- **Content Quality**: Adequate content length and structure
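As a sketch of what this layer checks, the hypothetical snippet below parses a command file's YAML frontmatter and reports missing fields. It assumes PyYAML is installed and a minimal required-field set; the actual validator applies more rules.
```python
# Hypothetical sketch of syntax validation; the required-field set is
# an assumption based on the frontmatter used by this plugin's commands.
from pathlib import Path
import yaml  # assumes PyYAML is available

REQUIRED_FIELDS = {"name", "description"}

def validate_command_syntax(path: str) -> list[str]:
    text = Path(path).read_text(encoding="utf-8")
    # Frontmatter must open the file and be closed by a second '---' line
    if not text.startswith("---"):
        return ["missing YAML frontmatter"]
    try:
        frontmatter = yaml.safe_load(text.split("---", 2)[1])
    except yaml.YAMLError as exc:
        return [f"invalid YAML frontmatter: {exc}"]
    missing = REQUIRED_FIELDS - set(frontmatter or {})
    return [f"missing required field: {f}" for f in sorted(missing)]
```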
### Discoverability Validation
- **Clear Description**: Frontmatter description is clear and descriptive
- **Usage Examples**: Practical examples provided
- **Parameter Documentation**: Parameters documented (when applicable)
- **Accessibility**: Command can be discovered and understood
### Integration Validation
- **File System**: Command discoverable through file system
- **Category Organization**: Proper category placement
- **Naming Conventions**: Consistent naming patterns
- **Cross-references**: References in documentation
## Recovery Process
### Automatic Recovery
When `--recover` is enabled, missing commands are recovered using:
1. **Git History Recovery**
```bash
# Find in Git history
git log --all --full-history -- commands/monitor/dashboard.md
# Restore from commit
git checkout <commit> -- commands/monitor/dashboard.md
```
2. **Template Creation**
- Uses command templates
- Customizes with category and name
- Creates basic structure for completion
3. **Pattern-Based Recovery**
- Uses similar commands as reference
- Maintains consistency with existing commands
- Preserves category patterns
### Manual Recovery
For commands that can't be auto-recovered:
1. **Create from Template**
```markdown
---
name: monitor:dashboard
description: Launch system monitoring dashboard
usage: /monitor:dashboard [options]
category: monitor
subcategory: system
---
# Monitoring Dashboard
## Overview
Launch the autonomous agent monitoring dashboard...
```
2. **Use Similar Command**
- Copy structure from similar command
- Modify for specific functionality
- Ensure consistency with category
## Scoring System
Command validation score calculation:
- **Presence Score** (40 points): All expected commands present
- **Syntax Score** (25 points): Valid YAML and markdown structure
- **Discoverability Score** (25 points): Clear descriptions and examples
- **Integration Score** (10 points): Proper integration and organization
**Score Interpretation:**
- **90-100**: Excellent command system
- **80-89**: Good with minor issues
- **70-79**: Acceptable with some issues
- **60-69**: Needs improvement
- **0-59**: Serious command system issues
## Troubleshooting
### Missing Commands
**Symptoms**: Command validation shows missing commands
**Solutions**:
1. Run auto-recovery: `/validate:commands --recover`
2. Check Git history for deleted files
3. Create from template manually
4. Verify file system permissions
### Discoverability Issues
**Symptoms**: Commands exist but not easily discoverable
**Solutions**:
1. Add clear descriptions to frontmatter
2. Include practical usage examples
3. Document parameters clearly
4. Improve command categorization
### Syntax Errors
**Symptoms**: Invalid YAML frontmatter or markdown structure
**Solutions**:
1. Validate YAML syntax with linter
2. Check markdown formatting
3. Ensure required sections present
4. Review content quality guidelines
### File Organization Issues
**Symptoms**: Commands in wrong locations or disorganized
**Solutions**:
1. Use proper category structure
2. Follow naming conventions
3. Organize with consistent patterns
4. Run workspace organization: `/workspace:organize`
## Best Practices
### Command Development
1. **Use Templates**: Start from command templates
2. **Follow Structure**: Maintain consistent structure
3. **Include Examples**: Provide practical usage examples
4. **Document Parameters**: Clear parameter documentation
5. **Test Discoverability**: Verify command can be found and understood
### Maintenance
1. **Regular Validation**: Run command validation weekly
2. **Monitor Changes**: Validate after command modifications
3. **Backup Protection**: Ensure commands are backed up
4. **Documentation Sync**: Keep docs aligned with commands
### Organization
1. **Category Consistency**: Commands in appropriate categories
2. **Naming Patterns**: Consistent naming conventions
3. **File Structure**: Proper file organization
4. **Cross-references**: Maintain documentation links
## Integration Points
### Pre-Operation Validation
Automatically validates before:
- Command restructuring or organization
- Plugin updates affecting commands
- Release preparation
- File system operations
### Post-Operation Validation
Automatically validates after:
- Command creation or modification
- Category reorganization
- File system changes
- Version updates
### Continuous Monitoring
- Event-driven validation on file changes
- Periodic integrity checks
- Real-time missing command detection
- Automated recovery triggers
## Advanced Features
### Custom Validation Rules
```bash
# Validate with custom rules
/validate:commands --rules custom_rules.json
# Validate specific patterns
/validate:commands --pattern "*/monitor/*.md"
# Exclude specific commands
/validate:commands --exclude "*/test/*.md"
```
### Batch Operations
```bash
# Validate and fix issues
/validate:commands --fix-discoverability --add-examples
# Validate and generate report
/validate:commands --generate-report --output validation_report.md
```
## Monitoring and Analytics
Track command system health with:
- **Validation History**: Historical validation results
- **Issue Trends**: Recurring command issues
- **Recovery Success**: Auto-recovery effectiveness
- **Usage Patterns**: Command usage and discoverability
Use `/learn:performance` for analytics and `/learn:analytics` for comprehensive reporting.
## Related Commands
- `/validate:integrity` - Complete system integrity validation
- `/validate:all` - Full system validation
- `/workspace:organize` - Fix file organization issues
- `/learn:analytics` - Command system analytics
- `/monitor:recommend` - Get system improvement recommendations
## Configuration
### Validation Settings
```json
{
"command_validation": {
"auto_recover": true,
"critical_threshold": 80,
"validate_discoverability": true,
"exclude_patterns": ["*/test/*"],
"notification_level": "warning"
}
}
```
### Recovery Preferences
```json
{
"command_recovery": {
"strategies": ["git_history", "template_creation", "pattern_based"],
"create_backup_before_recovery": true,
"verify_after_recovery": true,
"notification_on_recovery": true
}
}
```

commands/validate/fullstack.md

@@ -0,0 +1,313 @@
---
name: validate:fullstack
description: Validate full-stack app (backend, frontend, database, API contracts) with auto-fix
delegates-to: autonomous-agent:orchestrator
---
# Validate Full-Stack Command
**Slash command**: `/validate:fullstack`
**Description**: Comprehensive validation workflow for full-stack applications. Automatically detects project structure, runs parallel validation for all components, validates API contracts, and auto-fixes common issues.
## What This Command Does
This command orchestrates a complete validation workflow for multi-component applications:
1. **Project Detection** (5-10 seconds)
- Identifies all technology components (backend, frontend, database, infrastructure)
- Detects frameworks and build tools
- Maps project structure
2. **Parallel Component Validation** (30-120 seconds)
- Backend: Dependencies, type hints, tests, API schema, database migrations
- Frontend: TypeScript, build, dependencies, bundle size
- Database: Schema, test isolation, query efficiency
- Infrastructure: Docker services, environment variables
3. **Cross-Component Validation** (15-30 seconds)
- API contract synchronization (frontend ↔ backend)
- Environment variable consistency
- Authentication flow validation
4. **Auto-Fix Application** (10-30 seconds)
- TypeScript unused imports removal
- SQLAlchemy text() wrapper addition
- React Query syntax updates
- Build configuration fixes
- Environment variable generation
5. **Quality Assessment** (5-10 seconds)
- Calculate quality score (0-100)
- Generate prioritized recommendations
- Create comprehensive report
## When to Use This Command
**Ideal scenarios**:
- Before deploying full-stack applications
- After significant code changes
- Setting up CI/CD pipelines
- Onboarding new team members
- Periodic quality checks
**Project types**:
- Monorepos with backend + frontend
- Separate repos with Docker Compose
- Microservices architectures
- Full-stack web applications
## Execution Flow
```
User runs: /validate:fullstack
v
Orchestrator Agent:
1. Load skills: fullstack-validation, code-analysis, quality-standards
2. Detect project structure
3. Create validation plan
v
Parallel Execution (background-task-manager):
+- [Frontend-Analyzer] TypeScript + Build validation
+- [Test-Engineer] Backend tests + coverage
+- [Quality-Controller] Code quality checks
+- [Build-Validator] Build config validation
v
Sequential Execution:
1. [API-Contract-Validator] Frontend ↔ Backend synchronization
2. [Quality-Controller] Cross-component quality assessment
v
Auto-Fix Loop (if quality score < 70):
1. Apply automatic fixes
2. Re-run validation
3. Repeat until quality ≥ 70 or max 3 attempts
v
Results:
- Terminal: Concise summary (15-20 lines)
- File: Detailed report saved to .claude/data/reports/
- Pattern Storage: Store results for future learning
```
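The auto-fix loop in the flow above can be pictured as a short retry loop. This is a minimal sketch; `run_validation` and `apply_auto_fixes` are hypothetical stand-ins for the orchestrator's internal calls.
```python
# Minimal sketch of the auto-fix loop; run_validation and apply_auto_fixes
# are hypothetical stand-ins, and result.score / result.issues are assumed.
QUALITY_THRESHOLD = 70
MAX_ATTEMPTS = 3

def validate_with_auto_fix(run_validation, apply_auto_fixes):
    result = run_validation()
    attempts = 0
    while result.score < QUALITY_THRESHOLD and attempts < MAX_ATTEMPTS:
        apply_auto_fixes(result.issues)  # fix what can be fixed automatically
        result = run_validation()        # re-validate after fixes
        attempts += 1
    return result
```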
## Expected Output
### Terminal Output (Concise)
```
✅ Full-Stack Validation Complete (2m 34s)
📊 Component Status:
+- Backend (FastAPI): ✅ 96/100 (42% coverage -> target 70%)
+- Frontend (React): ✅ 87/100 (0 errors, 882KB bundle)
+- API Contract: ✅ 23/23 endpoints matched
🔧 Auto-Fixed (11 issues):
[PASS] Removed 5 unused TypeScript imports
[PASS] Added text() wrapper to 3 SQL queries
[PASS] Fixed 2 React Query v5 syntax
[PASS] Generated vite-env.d.ts
[WARN] Recommended (2 actions):
1. Increase test coverage to 70% (currently 42%)
2. Add indexes to users.email, projects.created_at
🎯 Overall Score: 87/100 (Production Ready)
📄 Detailed report: .claude/data/reports/validate-fullstack-2025-10-22.md
```
### Detailed Report (File)
Saved to `.claude/data/reports/validate-fullstack-YYYY-MM-DD.md`:
- Complete project structure analysis
- All validation results with metrics
- Every issue found (auto-fixed and remaining)
- Complete recommendations with implementation examples
- Performance metrics and timing breakdown
- Pattern learning insights
- Historical comparison (if available)
## Auto-Fix Capabilities
### Automatically Fixed (No Confirmation)
| Issue | Detection | Fix | Success Rate |
|-------|-----------|-----|--------------|
| Unused TypeScript imports | ESLint | Remove import | 100% |
| Raw SQL strings | Regex | Add text() wrapper | 100% |
| ESM in .js file | File check | Rename to .mjs | 95% |
| Missing vite-env.d.ts | File check | Generate file | 100% |
| Database CASCADE | Error message | Add CASCADE | 100% |
| Missing .env.example | Env var scan | Generate file | 100% |
### Suggested Fixes (Confirmation Needed)
| Issue | Detection | Fix | Success Rate |
|-------|-----------|-----|--------------|
| React Query v4 syntax | Pattern match | Update to v5 | 92% |
| Missing type hints | mypy | Add annotations | 70% |
| Missing error handling | Pattern match | Add try-catch | 88% |
| Large bundle size | Size analysis | Code splitting | 85% |
| API contract mismatch | Schema compare | Generate types | 95% |
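As one concrete illustration, the SQLAlchemy text() fix from the first table can be approximated with a regex rewrite. This is a rough sketch under simplifying assumptions; real detection must also handle already-wrapped queries and quote characters inside the SQL.
```python
# Rough sketch of the text() auto-fix: wrap a raw SQL string literal
# passed to execute() in text(). A complete fix would also insert
# 'from sqlalchemy import text' at the top of the modified file.
import re

RAW_EXECUTE = re.compile(
    r'\.execute\(\s*(("|\')(?:SELECT|INSERT|UPDATE|DELETE)[^"\']*\2)',
    re.IGNORECASE,
)

def add_text_wrapper(source: str) -> str:
    # .execute("SELECT ...")  ->  .execute(text("SELECT ..."))
    return RAW_EXECUTE.sub(r'.execute(text(\1)', source)
```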
## Quality Score Calculation
```
Total Score (0-100):
+- Component Scores (60 points):
| +- Backend: 20 points max
| +- Frontend: 20 points max
| +- Integration: 20 points max
+- Test Coverage (15 points):
| +- 70%+ = 15, 50-69% = 10, <50% = 5
+- Auto-Fix Success (15 points):
| +- All fixed = 15, Some fixed = 10, None = 0
+- Best Practices (10 points):
+- Documentation, types, standards
Threshold:
✅ 70-100: Production Ready
[WARN] 50-69: Needs Improvement
❌ 0-49: Critical Issues
```
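A minimal sketch of this calculation, assuming the component scores, coverage percentage, and auto-fix counts are already known; the weights mirror the breakdown above.
```python
# Minimal sketch of the quality score; weights follow the breakdown above.
def quality_score(backend: int, frontend: int, integration: int,
                  coverage_pct: float, fixed: int, fixable: int,
                  best_practices: int) -> int:
    components = backend + frontend + integration      # 60 points max
    if coverage_pct >= 70:
        coverage = 15
    elif coverage_pct >= 50:
        coverage = 10
    else:
        coverage = 5
    if fixable == 0 or fixed == fixable:               # all issues fixed
        autofix = 15
    elif fixed > 0:                                    # some issues fixed
        autofix = 10
    else:
        autofix = 0
    return components + coverage + autofix + best_practices  # <= 100
```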
## Configuration Options
Create `.claude/config/fullstack-validation.json` to customize:
```json
{
"coverage_target": 70,
"quality_threshold": 70,
"auto_fix": {
"typescript_imports": true,
"sqlalchemy_text": true,
"react_query_syntax": false,
"build_configs": true
},
"parallel_validation": true,
"max_auto_fix_attempts": 3,
"skip_components": [],
"custom_validators": []
}
```
## Integration with Other Commands
**Before `/validate:fullstack`**:
- `/learn:init` - Initialize pattern learning
**After `/validate:fullstack`**:
- `/analyze:quality` - Deep dive into specific issues
- `/learn:performance` - Analyze performance trends
**Complementary**:
- `/monitor:recommend` - Get workflow suggestions based on validation results
## Success Criteria
Validation is considered successful when:
- ✅ All components validated
- ✅ Quality score ≥ 70/100
- ✅ No critical issues remaining
- ✅ API contracts synchronized
- ✅ Auto-fix success rate > 80%
- ✅ Execution time < 3 minutes
## Troubleshooting
**Validation takes too long (>5 min)**:
- Check for slow tests (timeout after 2 min)
- Disable parallel validation if causing issues
- Skip non-critical components
**Auto-fix failures**:
- Review `.claude/data/reports/` for detailed error messages
- Check autofix-patterns.json for pattern success rates
- Manual fixes may be required for complex issues
**Quality score unexpectedly low**:
- Review individual component scores
- Check test coverage (often the bottleneck)
- Review recommendations for quick wins
## Pattern Learning
This command automatically stores patterns for:
- Project structure (for faster detection next time)
- Common issues (for better detection)
- Auto-fix success rates (for reliability improvement)
- Validation performance (for optimization)
After 5-10 runs on similar projects, validation becomes significantly faster and more accurate.
## Example Use Cases
**Use Case 1: Pre-Deployment Check**
```bash
/validate:fullstack
# Wait for validation
# Review score and recommendations
# If score ≥ 70: Deploy
# If score < 70: Address critical issues and re-validate
```
**Use Case 2: CI/CD Integration**
```bash
# In CI pipeline
claude-code /validate:fullstack --ci-mode
# Exit code 0 if score ≥ 70
# Exit code 1 if score < 70
```
**Use Case 3: Code Review Preparation**
```bash
/validate:fullstack
# Auto-fixes applied automatically
# Review recommendations
# Commit fixes
# Create PR with validation report
```
## Performance Benchmarks
Typical execution times for different project sizes:
| Project Size | Components | Validation Time |
|--------------|------------|-----------------|
| Small | Backend + Frontend | 45-60 seconds |
| Medium | Backend + Frontend + DB | 90-120 seconds |
| Large | Microservices + Frontend | 120-180 seconds |
| Extra Large | Complex monorepo | 180-240 seconds |
Auto-fix adds 10-30 seconds depending on issue count.
## Version History
- **v2.0.0**: Full-stack validation with auto-fix capabilities
- **v2.2.0** (planned): Docker container validation
- **v2.2.0** (planned): Security vulnerability scanning
- **v2.3.0** (planned): Performance profiling integration
---
**Note**: This command requires the following agents to be available:
- `autonomous-agent:orchestrator`
- `autonomous-agent:frontend-analyzer`
- `autonomous-agent:api-contract-validator`
- `autonomous-agent:build-validator`
- `autonomous-agent:test-engineer`
- `autonomous-agent:quality-controller`
- `autonomous-agent:background-task-manager`
All agents are included in the autonomous-agent plugin v2.0+.

commands/validate/integrity.md

@@ -0,0 +1,306 @@
---
name: validate:integrity
description: Comprehensive integrity validation and automatic recovery for missing components
usage: /validate:integrity [options]
category: validate
subcategory: system
---
# Comprehensive Integrity Validation
## Overview
The integrity validation command provides comprehensive analysis of plugin integrity with automatic recovery capabilities. It detects missing components, validates system structure, and can automatically restore lost components using multiple recovery strategies.
This command prevents issues like the missing `/monitor:dashboard` command by maintaining continuous integrity monitoring and providing immediate recovery options.
## Usage
```bash
/validate:integrity --auto-recover # Validate and auto-recover missing components
/validate:integrity --dry-run # Validate without making changes
/validate:integrity --critical-only # Check only critical components
/validate:integrity --detailed # Show detailed validation results
/validate:integrity --backup-check # Check backup system integrity
```
## Parameters
### --auto-recover
Automatically attempt to recover any missing components found during validation.
- **Type**: Flag
- **Default**: False
- **Example**: `/validate:integrity --auto-recover`
### --dry-run
Perform validation without executing any recovery operations.
- **Type**: Flag
- **Default**: False
- **Example**: `/validate:integrity --dry-run`
### --critical-only
Only validate critical components (core agents, essential commands, key configs).
- **Type**: Flag
- **Default**: False
- **Example**: `/validate:integrity --critical-only`
### --detailed
Show detailed validation results including all issues and recommendations.
- **Type**: Flag
- **Default**: False
- **Example**: `/validate:integrity --detailed`
### --backup-check
Validate the backup system integrity and check for available backups.
- **Type**: Flag
- **Default**: False
- **Example**: `/validate:integrity --backup-check`
## Examples
### Basic Integrity Check
```bash
/validate:integrity
```
Output:
```
🔍 Plugin Integrity Validation
✅ Overall Integrity: 92/100
📊 Components: 56/58 present
[WARN] Issues: 2 non-critical
📦 Backups: 3 recent backups available
💡 Recommendations: 2 improvement suggestions
```
### Auto-Recovery Mode
```bash
/validate:integrity --auto-recover
```
Output:
```
🔄 Automatic Recovery Mode
📋 Missing Components Found:
* /monitor:dashboard (CRITICAL)
* /workspace:archive (WARNING)
🔧 Recovery Attempting:
✅ /monitor:dashboard restored from backup (backup_20250127_143022)
❌ /workspace:archive recovery failed (no template available)
📊 Final Integrity: 98/100 (+6 points)
```
### Critical Components Only
```bash
/validate:integrity --critical-only
```
Output:
```
🔍 Critical Components Validation
✅ All critical components present
📋 Critical Inventory:
* Commands: 22/22 present
* Agents: 7/7 present
* Core Skills: 6/6 present
* Plugin Config: 1/1 present
```
## Output Format
### Summary Section
- **Overall Integrity**: System integrity score (0-100)
- **Components Present**: Found vs. expected component count
- **Issues**: Number and severity of detected issues
- **Backup Status**: Availability and health of backup system
### Detailed Results (when using --detailed)
- **Component Analysis**: Breakdown by category (commands, agents, skills, configs)
- **Missing Components**: List of missing components with severity levels
- **Integrity Issues**: Detailed description of each issue
- **Recovery Options**: Available recovery strategies for each issue
### Recovery Results (when using --auto-recover)
- **Recovery Session**: Session ID and timestamp
- **Components Recovered**: Successfully recovered components
- **Failed Recoveries**: Components that couldn't be recovered
- **Recovery Strategies**: Strategies used and their success rates
## Integrity Scoring
The integrity score is calculated based on:
- **Component Presence** (40 points): All expected components present
- **Discoverability** (25 points): Components are discoverable and accessible
- **System Structure** (20 points): Proper file organization and structure
- **Backup Coverage** (15 points): Adequate backup protection exists
**Score Interpretation:**
- **90-100**: Excellent integrity, no issues
- **80-89**: Good integrity, minor issues
- **70-79**: Acceptable integrity, some issues present
- **60-69**: Poor integrity, significant issues
- **0-59**: Critical integrity problems
## Recovery Strategies
The validation system uses multiple recovery strategies in order of preference:
1. **Backup Restore** (95% success rate)
- Restores from recent automated backups
- Preserves original content and metadata
- Fastest recovery option
2. **Git Recovery** (85% success rate)
- Recovers from Git history
- Useful for recently deleted components
- Preserves version history
3. **Template Creation** (70% success rate)
- Creates components from templates
- Provides basic structure for new components
- Requires manual customization
4. **Pattern-Based** (60% success rate)
- Uses similar components as reference
- Maintains consistency with existing components
- May need manual adjustments
5. **Manual Guidance** (100% guidance rate)
- Provides step-by-step manual recovery instructions
- References similar existing components
- Includes best practices and examples
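The ordered fallback amounts to a simple strategy chain: try each recovery function in preference order and stop at the first success. A sketch, where the strategy functions are hypothetical stand-ins that return True on success:
```python
# Sketch of the recovery fallback chain; backup_restore, git_recovery,
# template_creation, and pattern_based are hypothetical callables.
def recover_component(component, strategies):
    """Try each strategy in order of preference; stop at first success."""
    for strategy in strategies:
        if strategy(component):
            return strategy.__name__
    return None  # fall through to manual guidance

# Usage sketch:
# recover_component("commands/monitor/dashboard.md",
#                   [backup_restore, git_recovery,
#                    template_creation, pattern_based])
```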
## Integration Points
### Pre-Operation Validation
Automatically triggered before:
- `/workspace:improve` - Plugin modifications
- `/dev:release` - Release preparation
- Major command restructuring
- Agent or skill modifications
### Post-Operation Validation
Automatically triggered after:
- File system operations
- Command modifications
- Plugin updates
- Version releases
### Continuous Monitoring
- Periodic integrity checks (configurable interval)
- Event-driven validation after file changes
- Real-time missing component detection
## Best Practices
### Prevention
1. **Run regularly**: Perform weekly integrity checks
2. **Auto-recover**: Enable auto-recovery for critical issues
3. **Backup verification**: Regularly verify backup system health
4. **Monitor trends**: Track integrity score over time
### Response Protocol
1. **Critical issues**: Immediate response with auto-recovery
2. **High issues**: Review and address within 24 hours
3. **Medium issues**: Plan fixes in next maintenance window
4. **Low issues**: Include in regular improvement cycle
### System Health
1. **Maintain 90+ score**: Target excellent integrity
2. **Zero missing critical**: Never accept missing critical components
3. **Regular backups**: Ensure recent backups available
4. **Documentation sync**: Keep documentation aligned with actual structure
## Troubleshooting
### Recovery Failures
When auto-recovery fails:
1. Check backup availability: `/validate:integrity --backup-check`
2. Verify Git repository status: `git status`
3. Review manual guidance provided
4. Consider manual creation using templates
### Validation Errors
Common validation issues:
1. **File permission errors**: Check file system permissions
2. **Locked files**: Close other programs using plugin files
3. **Git conflicts**: Resolve Git conflicts before validation
4. **Corrupted backups**: Verify backup system integrity
### Performance Issues
If validation is slow:
1. Use `--critical-only` for faster checks
2. Reduce scope with specific category validation
3. Check system resources and disk space
4. Verify plugin directory isn't excessively large
## Related Commands
- `/validate:commands` - Command-specific validation
- `/validate:all` - Full system validation
- `/workspace:organize` - File organization fixes
- `/dev:auto "validate integrity"` - Automated integrity management
## Configuration
### Validation Settings
```json
{
"validation": {
"auto_recover": true,
"critical_threshold": 70,
"backup_check_interval": "daily",
"notification_level": "warning"
}
}
```
### Recovery Preferences
```json
{
"recovery": {
"preferred_strategies": ["backup_restore", "git_recovery"],
"max_recovery_attempts": 3,
"require_confirmation": false,
"create_backup_before_recovery": true
}
}
```
## Advanced Usage
### Custom Validation Profiles
```bash
# Create custom validation profile
/validate:integrity --profile production --critical-only
# Validate specific categories
/validate:integrity --categories commands,agents --detailed
# Validate with custom thresholds
/validate:integrity --threshold 85 --strict-mode
```
### Batch Operations
```bash
# Validate and create recovery plan
/validate:integrity --dry-run --save-plan
# Execute recovery from saved plan
/validate:integrity --execute-plan recovery_plan_20250127.json
```
## Monitoring and Analytics
The integrity validation maintains comprehensive analytics:
- **Historical Trends**: Track integrity score over time
- **Issue Patterns**: Identify recurring component loss
- **Recovery Success**: Monitor recovery strategy effectiveness
- **System Health**: Overall plugin health assessment
Use `/learn:performance` to view detailed analytics and `/learn:analytics` for comprehensive reporting.

commands/validate/patterns.md

@@ -0,0 +1,285 @@
---
name: validate:patterns
description: Validate pattern learning database integrity and generate health reports
delegates-to: autonomous-agent:orchestrator
---
# Command: `/validate:patterns`
Validates the pattern learning system across all commands and agents. Ensures patterns are being stored correctly, consistently formatted, and effectively used for improving performance over time.
## Purpose
- Validate pattern learning is working across all commands
- Check pattern format consistency and completeness
- Analyze learning effectiveness and trends
- Identify commands that aren't storing patterns
- Generate comprehensive learning analytics
## What It Does
### 1. **Command Coverage Validation** (10-20 seconds)
- Scan all commands in `commands/` directory
- Check which commands store patterns vs. utility commands
- Validate pattern storage code presence
- Identify missing pattern integration
### 2. **Agent Learning Validation** (10-15 seconds)
- Verify all agents contribute to pattern learning
- Check learning-engine integration points
- Validate agent effectiveness tracking
- Ensure proper handoff protocols
### 3. **Pattern Storage Analysis** (15-30 seconds)
- Validate `.claude-patterns/patterns.json` format
- Check for required fields and data types
- Analyze pattern quality and completeness
- Detect duplicate or corrupted patterns
### 4. **Learning Effectiveness Metrics** (10-20 seconds)
- Calculate pattern reuse rates
- Analyze success rates by task type
- Track skill effectiveness over time
- Identify improvement trends
### 5. **Cross-Reference Validation** (10-15 seconds)
- Validate skill references in patterns
- Check agent consistency with stored patterns
- Verify tool usage compliance
- Ensure documentation alignment
### 6. **Learning Analytics Report** (20-40 seconds)
- Generate comprehensive learning dashboard
- Create visualizations and charts
- Provide improvement recommendations
- Export data for external analysis
## Usage
```bash
# Basic pattern validation
/validate:patterns
# Include detailed analytics (slower but comprehensive)
/validate:patterns --analytics
# Quick validation, skipping analytics
/validate:patterns --quick
# Validate specific command or agent
/validate:patterns --filter orchestrator
/validate:patterns --filter release-dev
```
## Output
### Terminal Summary (concise)
```
Pattern Learning Validation Complete ✅
+- Commands Validated: 18/18 (100%)
+- Pattern Storage: Healthy ✅
+- Learning Effectiveness: 94% ✅
+- Issues Found: 0 critical, 2 minor
+- Duration: 1m 45s
📊 Full analytics: .claude/data/reports/validate-patterns-2025-01-15.md
```
### Detailed Report (file)
- Command-by-command validation results
- Pattern storage format validation
- Learning effectiveness metrics with charts
- Agent performance tracking
- Specific issues and fixes needed
- Trend analysis over time
## Validation Categories
### 1. Commands Pattern Storage
**Analysis Commands** (should store patterns):
- `/analyze:project`
- `/analyze:quality`
- `/validate:fullstack`
- `/dev:pr-review`
- And 12 more...
**Utility Commands** (don't store patterns - expected):
- `/monitor:dashboard` - Display only
- `/workspace:reports` - File management only
### 2. Pattern Format Validation
Required fields checked:
```json
{
"task_type": "string",
"context": "object",
"execution": {
"skills_used": "array",
"agents_delegated": "array",
"approach_taken": "string"
},
"outcome": {
"success": "boolean",
"quality_score": "number",
"duration_ms": "number"
},
"reuse_count": "number",
"last_used": "string"
}
```
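A minimal sketch of the field and type check against this schema; the mapping of schema type names to Python types is an assumption, and the real validator also recurses into the nested objects.
```python
# Sketch of per-pattern validation against the schema above.
TYPE_MAP = {"string": str, "object": dict, "array": list,
            "boolean": bool, "number": (int, float)}

REQUIRED = {
    "task_type": "string", "context": "object", "execution": "object",
    "outcome": "object", "reuse_count": "number", "last_used": "string",
}

def validate_pattern(pattern: dict) -> list[str]:
    issues = []
    for field, type_name in REQUIRED.items():
        if field not in pattern:
            issues.append(f"missing field: {field}")
        elif not isinstance(pattern[field], TYPE_MAP[type_name]):
            issues.append(f"wrong type for {field}: expected {type_name}")
    return issues
```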
### 3. Learning Effectiveness Metrics
- **Pattern Reuse Rate**: How often patterns are reused
- **Success Rate by Task Type**: Performance across different tasks
- **Skill Effectiveness**: Which skills perform best
- **Agent Performance**: Agent reliability and speed
- **Improvement Trend**: Learning progress over time
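Given the schema above, these metrics reduce to simple aggregations over the pattern database. A sketch, assuming the top-level JSON structure is a list of pattern records:
```python
# Sketch: derive learning metrics from the pattern database, using the
# fields documented in the schema above. Assumes a top-level JSON list.
import json
from collections import defaultdict

def learning_metrics(path=".claude-patterns/patterns.json"):
    with open(path, encoding="utf-8") as f:
        patterns = json.load(f)
    total = len(patterns)
    reused = sum(1 for p in patterns if p.get("reuse_count", 0) > 0)
    by_task = defaultdict(lambda: [0, 0])  # task_type -> [successes, runs]
    for p in patterns:
        stats = by_task[p["task_type"]]
        stats[0] += p["outcome"]["success"]
        stats[1] += 1
    return {
        "reuse_rate": reused / total if total else 0.0,
        "success_by_task": {t: s / n for t, (s, n) in by_task.items()},
    }
```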
## Integration
The `/validate:patterns` command integrates with:
- **learning-engine agent**: Validates pattern capture and storage
- **pattern-learning skill**: Validates pattern format and structure
- **performance-analytics skill**: Generates learning metrics
- **orchestrator**: Uses validation to improve pattern selection
## Expected Validation Results
### Successful Validation (what you should see)
- 18/18 commands validated
- All analysis commands storing patterns
- Pattern format consistent
- Learning effectiveness > 80%
- No critical issues
### Common Issues and Fixes
1. **Missing Pattern Storage**
- Issue: Command not storing patterns when it should
- Fix: Add pattern learning integration
2. **Format Inconsistencies**
- Issue: Missing required fields in patterns
- Fix: Update pattern generation code
3. **Low Reuse Rate**
- Issue: Patterns not being reused effectively
- Fix: Improve pattern matching algorithm
4. **Storage Location Issues**
- Issue: Patterns not in `.claude-patterns/`
- Fix: Update storage path configuration
## Analytics Dashboard
When using `--analytics` flag, generates:
### Learning Metrics
- Total patterns stored: 247
- Average reuse count: 3.2
- Success rate: 89%
- Most reused pattern: "refactor-auth-module" (12 times)
### Skill Performance
```
Top Performing Skills:
1. code-analysis (94% success, 45 uses)
2. quality-standards (91% success, 38 uses)
3. pattern-learning (89% success, 52 uses)
```
### Agent Performance
```
Agent Reliability:
1. orchestrator: 96% success
2. code-analyzer: 94% success
3. quality-controller: 92% success
```
## Usage Examples
### Example 1: Basic Validation
```bash
User: /validate:patterns
System: ✅ Pattern learning system healthy
Commands storing patterns: 16/16
Pattern format: Valid
Learning effectiveness: 91%
```
### Example 2: With Analytics
```bash
User: /validate:patterns --analytics
System: 📊 Generated comprehensive analytics
Learning trends: Improving (+12% over 30 days)
Top skill: code-analysis (95% success)
Recommendation: Increase pattern reuse threshold
```
### Example 3: Filter Validation
```bash
User: /validate:patterns --filter orchestrator
System: ✅ Orchestrator pattern integration validated
Patterns contributed: 89
Effectiveness score: 96%
Integration quality: Excellent
```
## When to Use
Run `/validate:patterns` in these situations:
- After implementing new commands or agents
- When you suspect pattern learning issues
- During regular system health checks
- Before major releases
- When analyzing learning effectiveness
## Automation
The orchestrator can automatically run `/validate:patterns`:
- Every 50 tasks to ensure system health
- When learning effectiveness drops below 75%
- After adding new commands or agents
- During system diagnostics
## Troubleshooting
### Common Validation Failures
1. **Pattern Database Missing**
```
Error: .claude-patterns/patterns.json not found
Fix: Run /learn:init to initialize
```
2. **Permission Issues**
```
Error: Cannot read pattern database
Fix: Check file permissions in .claude-patterns/
```
3. **Corrupted Patterns**
```
Error: Invalid JSON in patterns
Fix: Manual repair or reset patterns
```
## Related Commands
- `/learn:init` - Initialize pattern learning system
- `/analyze:project` - Analyze project and learn patterns
- `/analyze:quality` - Check overall system quality
## See Also
- [Learning-Engine Agent](../agents/learning-engine.md)
- [Pattern-Learning Skill](../skills/pattern-learning/SKILL.md)
- [Analytics Dashboard Guide](../docs/guidelines/ANALYTICS_GUIDE.md)
---

commands/validate/plugin.md

@@ -0,0 +1,376 @@
---
name: validate:plugin
description: Validate Claude Code plugin against official guidelines
delegates-to: autonomous-agent:orchestrator
---
# Validate Claude Plugin
Comprehensive validation for Claude Code plugins against official development guidelines to prevent installation failures and ensure marketplace compatibility.
## Command: `/validate:plugin`
Validates the current plugin against Claude Code official guidelines, checking for common installation failures, compatibility issues, and marketplace requirements.
## How It Works
1. **Plugin Manifest Validation**: Validates .claude-plugin/plugin.json against Claude Code schema requirements (a sketch follows this list)
2. **Directory Structure Check**: Ensures proper plugin directory layout and required files
3. **File Format Compliance**: Validates Markdown files with YAML frontmatter
4. **Command Execution Validation**: Checks agent delegation and command-to-agent mappings
5. **Installation Readiness**: Checks for common installation blockers
6. **Cross-Platform Compatibility**: Validates plugin works on Windows, Linux, and Mac
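A minimal sketch of step 1 above: parse the manifest, then check the required fields and the semantic-version format listed in the criteria below. This is illustrative, not the validator's actual implementation.
```python
# Sketch of plugin manifest validation: JSON syntax, required fields,
# and x.y.z version format, per the criteria in this document.
import json
import re

REQUIRED = ("name", "version", "description", "author")
SEMVER = re.compile(r"^\d+\.\d+\.\d+$")

def validate_manifest(path=".claude-plugin/plugin.json"):
    try:
        with open(path, encoding="utf-8") as f:
            manifest = json.load(f)
    except FileNotFoundError:
        return ["missing plugin manifest"]
    except json.JSONDecodeError as exc:
        return [f"invalid JSON syntax: {exc}"]
    issues = [f"missing required field: {field}"
              for field in REQUIRED if field not in manifest]
    if "version" in manifest and not SEMVER.match(str(manifest["version"])):
        issues.append("invalid version format (expected x.y.z)")
    return issues
```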
### Command Execution Validation Details
The validator now checks for command execution issues that cause runtime failures:
**Agent Delegation Validation**:
- Verifies all command frontmatter includes proper `delegates-to` field
- Validates referenced agents exist in the `agents/` directory
- Checks agent identifiers use correct prefix format (`autonomous-agent:name`)
- Ensures command documentation matches delegation targets
**Common Command Execution Failures Detected**:
- Missing `delegates-to` field in command YAML frontmatter
- Agent names without `autonomous-agent:` prefix
- References to non-existent agent files
- Mismatched delegation between documentation and frontmatter
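A sketch of the delegation check: for each command file, read the `delegates-to` value and confirm both the prefix and the referenced agent file. Frontmatter parsing is deliberately simplified to a line scan here.
```python
# Sketch of agent delegation validation across the commands/ tree.
from pathlib import Path

PREFIX = "autonomous-agent:"

def check_delegations(commands_dir="commands", agents_dir="agents"):
    issues = []
    for cmd in Path(commands_dir).rglob("*.md"):
        target = None
        for line in cmd.read_text(encoding="utf-8").splitlines():
            if line.startswith("delegates-to:"):
                target = line.split(":", 1)[1].strip()
                break
        if target is None:
            issues.append(f"{cmd}: missing delegates-to field")
        elif not target.startswith(PREFIX):
            issues.append(f"{cmd}: agent reference lacks '{PREFIX}' prefix")
        elif not (Path(agents_dir) / f"{target[len(PREFIX):]}.md").exists():
            issues.append(f"{cmd}: references non-existent agent '{target}'")
    return issues
```
Note that some utility commands legitimately omit `delegates-to`; a real validator would whitelist those rather than flag every omission.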
## Validation Criteria
### Critical Issues (Installation Blockers)
- Missing or invalid plugin.json manifest
- Invalid JSON syntax in manifest
- Missing required fields (name, version, description, author)
- Invalid version format (must be x.y.z semantic versioning)
- Non-UTF-8 file encoding
- Missing .claude-plugin directory
### Command Execution Issues (Runtime Failures)
- Invalid agent delegation references in commands
- Missing or incorrect agent identifiers
- Commands that reference non-existent agents
- Broken command-to-agent mappings
- Missing `delegates-to` field in command frontmatter
### Warnings (Non-Critical)
- Long file paths (Windows limit 260 characters)
- Missing optional YAML frontmatter fields
- Inconsistent line endings
- Very long or short descriptions
- Agent names without proper prefixes in documentation
### Quality Score
- **100**: Perfect - No issues found
- **90-99**: Ready - Minor warnings only
- **70-89**: Usable - Some fixes recommended
- **< 70**: Needs fixes before release
## Usage
### Quick Validation
```bash
/validate:plugin
```
### Strict Validation (treat warnings as errors)
```bash
/validate:plugin --strict
```
### Validate Specific Plugin Directory
```bash
/validate:plugin --dir /path/to/plugin
```
## Expected Output
### Successful Validation (Perfect)
```
============================================================
VALIDATE CLAUDE PLUGIN RESULTS
============================================================
[+] Plugin Validation PASSED - Ready for Release!
Validation Summary:
+- Plugin Manifest: [OK] Valid JSON schema
+- Directory Structure: [OK] Compliant layout
+- File Formats: [OK] Valid Markdown/YAML
+- Installation Readiness: [OK] No blockers
+- Cross-Platform Compatibility: [OK] Ready for all platforms
Quality Score: 100/100 (Perfect)
Detailed report: .claude/data/reports/validate-claude-plugin-2025-10-23.md
Completed in 1.2 minutes
[+] Assessment stored in pattern database for dashboard monitoring
[+] Plugin is fully compliant with Claude Code guidelines
Ready for immediate distribution and installation
```
### Issues Found
```
============================================================
VALIDATE CLAUDE PLUGIN RESULTS
============================================================
[WARN] Plugin Validation Issues Found
📊 Validation Summary:
+- Plugin Manifest: ❌ 2 critical issues
+- Directory Structure: ✅ Compliant layout
+- File Formats: [WARN] 3 warnings
+- Installation Readiness: ❌ 2 blockers
+- Cross-Platform Compatibility: ✅ Ready for all platforms
🚨 Critical Issues (Installation Blockers):
* Missing required field: version
* Invalid JSON syntax: trailing comma in plugin.json
* File encoding error: agents/orchestrator.md (not UTF-8)
[WARN] Command Execution Issues (Runtime Failures):
* Invalid agent delegation: commands/quality-check.md references 'orchestrator' (should be 'autonomous-agent:orchestrator')
* Missing delegates-to field: commands/auto-analyze.md lacks agent delegation specification
* Non-existent agent: commands/example.md references 'missing-agent' (file not found)
[WARN] Warnings:
* YAML frontmatter missing in 2 agent files
* Long file paths (Windows limit): 3 files
* Description too short (< 10 chars)
🔧 Auto-Fix Available:
* JSON syntax errors: Can be automatically corrected
* Missing required fields: Can be added with defaults
* File encoding: Can be converted to UTF-8
* Agent delegation errors: Can auto-correct prefixes and add missing fields
🛠️ Command Execution Fixes Applied:
* Fixed commands/quality-check.md: Added `delegates-to: autonomous-agent:orchestrator`
* Auto-corrected agent identifier: `orchestrator` -> `autonomous-agent:orchestrator`
* Updated command documentation: Explicit agent references with proper prefixes
🎯 Quality Score: 65/100 (Needs Fixes)
💡 Recommendations:
1. [HIGH] Fix JSON syntax in plugin.json
2. [HIGH] Add missing version field (use semantic versioning)
3. [HIGH] Convert files to UTF-8 encoding
4. [MED] Add missing YAML frontmatter to agents
5. [LOW] Reduce file path lengths
📄 Detailed report: .claude/data/reports/validate-claude-plugin-2025-10-23.md
⏱ Completed in 1.5 minutes
❌ Plugin needs fixes before release
Run recommended fixes and re-validate
```
## Files Created
The validation command creates detailed reports in:
1. **Console Output**: Concise summary with key findings
2. **Detailed Report**: `.claude/data/reports/validate-claude-plugin-YYYY-MM-DD.md`
3. **JSON Report**: Machine-readable validation results
## Integration with Development Workflow
### Pre-Release Checklist
```bash
# Required validation before any release
/validate:plugin --strict
# Only proceed if validation passes
if [ $? -eq 0 ]; then
echo "✅ Ready for release"
else
echo "❌ Fix issues before release"
exit 1
fi
```
### Continuous Integration
```yaml
# GitHub Actions example
- name: Validate Claude Plugin
run: |
/validate:plugin --strict
if [ $? -ne 0 ]; then
echo "Plugin validation failed - blocking release"
exit 1
fi
```
### Local Development
```bash
# During development
make validate-plugin # Custom command that runs validation
# Before committing changes
git add .
git commit -m "Update plugin (validated: ✅)"
```
## Common Installation Failure Prevention
The validator specifically targets the most common causes of plugin installation failures:
### 1. JSON Syntax Errors
```json
// ❌ INVALID (trailing comma)
{
"name": "my-plugin",
"version": "1.0.0",
}
// ✅ VALID
{
"name": "my-plugin",
"version": "1.0.0"
}
```
### 2. Missing Required Fields
```json
// ❌ MISSING VERSION
{
"name": "my-plugin",
"description": "A great plugin"
}
// ✅ COMPLETE
{
"name": "my-plugin",
"version": "1.0.0",
"description": "A great plugin",
"author": "Developer Name"
}
```
### 3. File Encoding Issues
```bash
# Check file encoding
file .claude-plugin/plugin.json
# Convert to UTF-8 if needed
iconv -f ISO-8859-1 -t UTF-8 input.txt > output.txt
```
### 4. Directory Structure
```
my-plugin/
+-- .claude-plugin/
| +-- plugin.json # REQUIRED
+-- agents/ # OPTIONAL
+-- skills/ # OPTIONAL
+-- commands/ # OPTIONAL
+-- lib/ # OPTIONAL
```
## Marketplace Compatibility
The validation ensures compatibility with Claude Code plugin marketplaces:
### Installation Methods Supported
- ✅ GitHub repository URLs
- ✅ Git repository URLs
- ✅ Local directory paths
- ✅ Team distribution sources
- ✅ Marketplace listing files
### Requirements Met
- ✅ JSON manifest schema compliance
- ✅ Semantic versioning format
- ✅ UTF-8 encoding throughout
- ✅ Cross-platform file paths
- ✅ Proper directory structure
- ✅ Valid file formats
## Error Recovery
### Auto-Fix Capabilities
The validator can automatically correct many common issues:
1. **JSON Syntax**: Remove trailing commas, fix quotes
2. **Missing Fields**: Add defaults (version: "1.0.0", author: "Unknown")
3. **File Encoding**: Convert to UTF-8 automatically
4. **Line Endings**: Normalize line endings for platform
5. **Agent Delegation**: Auto-correct agent identifier prefixes (`orchestrator` -> `autonomous-agent:orchestrator`)
6. **Command Frontmatter**: Add missing `delegates-to` fields based on command content analysis
7. **Agent Mapping**: Verify and fix command-to-agent mappings by cross-referencing agents directory
### Manual Fixes Required
1. **Structural Issues**: Directory reorganization
2. **Content Issues**: Improve documentation quality
3. **Naming Conflicts**: Resolve duplicate names
4. **Version Conflicts**: Semantic versioning corrections
## Troubleshooting
### Common Validation Failures
**Error**: "Missing plugin manifest"
- **Cause**: No `.claude-plugin/plugin.json` file
- **Fix**: Create manifest with required fields
**Error**: "Invalid JSON syntax"
- **Cause**: Syntax errors in plugin.json
- **Fix**: Use JSON linter, check for trailing commas
**Error**: "Missing required fields"
- **Cause**: Required JSON fields absent
- **Fix**: Add name, version, description, author fields
**Error**: "File encoding error"
- **Cause**: Non-UTF-8 encoded files
- **Fix**: Convert all files to UTF-8 encoding
**Error**: "Agent type not found" (Runtime Command Failure)
- **Cause**: Command references incorrect agent identifier
- **Example**: `/quality-check` tries to delegate to `orchestrator` instead of `autonomous-agent:orchestrator`
- **Fix**: Update command frontmatter with correct `delegates-to: autonomous-agent:agent-name`
**Error**: "Missing delegates-to field"
- **Cause**: Command YAML frontmatter lacks delegation specification
- **Fix**: Add `delegates-to: autonomous-agent:agent-name` to command frontmatter
**Error**: "Command execution failed"
- **Cause**: Referenced agent file doesn't exist in `agents/` directory
- **Fix**: Create missing agent file or update delegation to existing agent
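The three delegation errors above can be caught statically by scanning command frontmatter before runtime (a hypothetical helper; the directory layout is the one shown earlier in this document):
```python
import re
from pathlib import Path

def find_missing_delegation(commands_dir: str = "commands") -> list[str]:
    """Return command files whose YAML frontmatter lacks a delegates-to field."""
    missing = []
    for md_file in Path(commands_dir).rglob("*.md"):
        text = md_file.read_text(encoding="utf-8")
        match = re.match(r"^---\n(.*?)\n---", text, re.DOTALL)
        frontmatter = match.group(1) if match else ""
        if "delegates-to:" not in frontmatter:
            missing.append(str(md_file))
    return missing
```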
### Getting Help
```bash
# Detailed validation with debugging
/validate:plugin --debug
# Check specific file
/validate:plugin --file .claude-plugin/plugin.json
# Show validation rules
/validate:plugin --show-rules
```
## Best Practices
### Development Workflow
1. **Create Plugin Structure**: Follow standard layout
2. **Write Manifest**: Complete all required fields
3. **Add Content**: Agents, skills, commands
4. **Validate**: Run `/validate:plugin`
5. **Fix Issues**: Address any problems found
6. **Re-validate**: Ensure all issues resolved
7. **Release**: Publish with confidence
### Quality Assurance
- Run validation before every commit
- Use `--strict` mode for pre-release checks
- Monitor validation scores over time
- Keep documentation up to date
- Test on multiple platforms
---
This validation command ensures your Claude Code plugin meets official guidelines and will install successfully across all supported platforms and marketplace types.

commands/validate/web.md Normal file
@@ -0,0 +1,753 @@
---
name: validate:web
description: Validate web pages and detect JavaScript errors automatically using headless browser automation
category: validation
---
# Validate Web Command
**Slash command**: `/validate:web`
Automatically validate web pages (such as the local dashboard served by `dashboard.py`) and detect JavaScript errors, console issues, and performance problems without manual browser inspection.
## Usage
```bash
/validate:web <URL> [options]
```
## Examples
```bash
# Validate local dashboard
/validate:web http://127.0.0.1:5000
# Validate with detailed output
/validate:web http://127.0.0.1:5000 --verbose
# Validate and auto-fix issues
/validate:web http://127.0.0.1:5000 --auto-fix
# Save validation report
/validate:web http://127.0.0.1:5000 --report
# Crawl and validate all subpages
/validate:web http://127.0.0.1:5000 --crawl
# Crawl with depth limit
/validate:web http://127.0.0.1:5000 --crawl --max-depth 2
# Crawl specific subpages only
/validate:web http://127.0.0.1:5000 --crawl --include "/api/*,/analytics/*"
# Exclude certain paths from crawling
/validate:web http://127.0.0.1:5000 --crawl --exclude "/admin/*,/debug/*"
```
## Implementation
```python
#!/usr/bin/env python3
import sys
import os
import json
import subprocess
from pathlib import Path
from datetime import datetime
# Add plugin lib directory to path
plugin_lib = Path(__file__).parent.parent.parent / 'lib'
sys.path.insert(0, str(plugin_lib))
try:
from web_page_validator import WebPageValidator, format_validation_report
VALIDATOR_AVAILABLE = True
except ImportError:
VALIDATOR_AVAILABLE = False
# Import additional modules for crawling
try:
from urllib.parse import urljoin, urlparse, urlunparse
from fnmatch import fnmatch
import re
import time
from collections import deque
CRAWLING_AVAILABLE = True
except ImportError:
CRAWLING_AVAILABLE = False
def crawl_and_validate(validator, start_url, max_depth=3, max_pages=50,
include_patterns=None, exclude_patterns=None,
same_domain=True, wait_for_load=3, verbose=False):
"""Crawl and validate all pages discovered from the start URL."""
if not CRAWLING_AVAILABLE:
raise ImportError("Required crawling modules not available")
from urllib.parse import urljoin, urlparse, urlunparse
from fnmatch import fnmatch
import time
from collections import deque
start_domain = urlparse(start_url).netloc
visited = set()
queue = deque([(start_url, 0)]) # (url, depth)
results = []
print(f"[INFO] Starting crawl from: {start_url}")
print(f"[INFO] Domain: {start_domain}")
print()
while queue and len(results) < max_pages:
current_url, depth = queue.popleft()
# Skip if already visited
if current_url in visited:
continue
visited.add(current_url)
# Check depth limit
if depth > max_depth:
continue
# Check domain restriction
if same_domain and urlparse(current_url).netloc != start_domain:
continue
# Check include/exclude patterns
if not should_crawl_url(current_url, include_patterns, exclude_patterns):
continue
print(f"[INFO] Validating (depth {depth}): {current_url}")
try:
# Validate the current page
result = validator.validate_url(current_url, wait_for_load=wait_for_load)
result.depth = depth # Add depth information
results.append(result)
# Show progress
status = "✅ PASS" if result.success else "❌ FAIL"
errors = len(result.console_errors) + len(result.javascript_errors)
print(f"[{status}] {errors} errors, {len(result.console_warnings)} warnings")
# Extract links for further crawling (only from successful pages)
if result.success and depth < max_depth:
links = extract_links_from_page(current_url, result.page_content or "")
for link in links:
if link not in visited and len(visited) + len(queue) < max_pages:
queue.append((link, depth + 1))
# Brief pause to avoid overwhelming the server
time.sleep(0.5)
except Exception as e:
print(f"[ERROR] Failed to validate {current_url}: {e}")
# Create a failed result object
from types import SimpleNamespace
failed_result = SimpleNamespace(
url=current_url,
success=False,
load_time=0,
page_title="Error",
console_errors=[f"Validation failed: {e}"],
console_warnings=[],
javascript_errors=[],
network_errors=[],
depth=depth,
page_content=""
)
results.append(failed_result)
print()
print(f"[INFO] Crawling completed: {len(results)} pages validated")
print(f"[INFO] Pages discovered: {len(visited)}")
return results
def should_crawl_url(url, include_patterns=None, exclude_patterns=None):
"""Check if URL should be crawled based on include/exclude patterns."""
from urllib.parse import urlparse
parsed = urlparse(url)
path = parsed.path
# Skip non-HTML resources
if any(path.endswith(ext) for ext in ['.css', '.js', '.png', '.jpg', '.jpeg', '.gif', '.svg', '.ico', '.pdf', '.zip']):
return False
# Skip hash fragments
if parsed.fragment:
return False
# Check include patterns
if include_patterns:
if not any(fnmatch(path, pattern.strip()) for pattern in include_patterns):
return False
# Check exclude patterns
if exclude_patterns:
if any(fnmatch(path, pattern.strip()) for pattern in exclude_patterns):
return False
return True
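# Illustrative examples (not in the original source): with
# exclude_patterns=["/admin/*"], should_crawl_url("http://host/admin/users")
# returns False (the path matches an exclude pattern), while
# should_crawl_url("http://host/analytics") returns True.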
def extract_links_from_page(base_url, page_content):
"""Extract all valid links from page content."""
from urllib.parse import urljoin, urlparse
import re
if not page_content:
return []
# Extract links using regex patterns
link_patterns = [
r'href=["\']([^"\']+)["\']', # href attributes
r'action=["\']([^"\']+)["\']', # form actions
r'src=["\']([^"\']+)["\']', # src attributes (for some dynamic content)
]
links = set()
for pattern in link_patterns:
matches = re.findall(pattern, page_content, re.IGNORECASE)
for match in matches:
# Convert relative URLs to absolute
absolute_url = urljoin(base_url, match)
# Validate URL
parsed = urlparse(absolute_url)
if parsed.scheme in ['http', 'https'] and parsed.netloc:
links.add(absolute_url)
return list(links)
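# Illustrative example (not in the original source):
# extract_links_from_page("http://127.0.0.1:5000/", '<a href="/analytics">')
# resolves the relative href and returns ["http://127.0.0.1:5000/analytics"].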
def display_crawling_results(results):
"""Display comprehensive crawling results."""
if not results:
print("[ERROR] No pages were validated")
return
# Sort results by URL for consistent output
results.sort(key=lambda r: r.url)
# Summary statistics
total_pages = len(results)
successful_pages = sum(1 for r in results if r.success)
failed_pages = total_pages - successful_pages
total_errors = sum(len(r.console_errors) + len(r.javascript_errors) for r in results)
total_warnings = sum(len(r.console_warnings) for r in results)
print("=" * 80)
print(f"🕷️ CRAWLING VALIDATION RESULTS")
print("=" * 80)
print(f"Total Pages: {total_pages}")
print(f"Successful: {successful_pages}")
print(f"Failed: {failed_pages}")
print(f"Total Errors: {total_errors}")
print(f"Total Warnings: {total_warnings}")
print(f"Success Rate: {(successful_pages/total_pages)*100:.1f}%")
print()
# Show failed pages first
failed_results = [r for r in results if not r.success]
if failed_results:
print("❌ FAILED PAGES:")
for i, result in enumerate(failed_results, 1):
print(f" {i}. {result.url}")
print(f" Status: FAILED")
errors = len(result.console_errors) + len(result.javascript_errors)
if errors > 0:
print(f" Errors: {errors}")
print()
# Show successful pages with warnings
successful_with_warnings = [r for r in results if r.success and
(len(r.console_warnings) > 0 or
len(r.console_errors) > 0 or
len(r.javascript_errors) > 0)]
if successful_with_warnings:
print("[WARN] SUCCESSFUL PAGES WITH ISSUES:")
for i, result in enumerate(successful_with_warnings[:10], 1): # Limit to first 10
errors = len(result.console_errors) + len(result.javascript_errors)
warnings = len(result.console_warnings)
status_parts = []
if errors > 0:
status_parts.append(f"{errors} errors")
if warnings > 0:
status_parts.append(f"{warnings} warnings")
print(f" {i}. {result.url}")
print(f" Issues: {', '.join(status_parts)}")
if len(successful_with_warnings) > 10:
print(f" ... and {len(successful_with_warnings) - 10} more pages with issues")
print()
# Show top errors across all pages
all_errors = []
for result in results:
for error in result.console_errors:
all_errors.append(f"[{error.level.upper()}] {error.message[:100]}")
for error in result.javascript_errors:
all_errors.append(f"[JS] {str(error)[:100]}")
if all_errors:
print("🔍 TOP ERRORS ACROSS ALL PAGES:")
from collections import Counter
error_counts = Counter(all_errors)
for i, (error, count) in enumerate(error_counts.most_common(10), 1):
print(f" {i}. ({count}×) {error}")
print()
# Overall status
if successful_pages == total_pages:
print("🎉 OVERALL STATUS: ALL PAGES PASSED ✅")
elif successful_pages > total_pages * 0.8:
print("✅ OVERALL STATUS: MOSTLY HEALTHY ({successful_pages}/{total_pages} pages passed)")
else:
print("[WARN] OVERALL STATUS: NEEDS ATTENTION ({successful_pages}/{total_pages} pages passed)")
print("=" * 80)
def save_crawling_report(results, base_url):
"""Save comprehensive crawling report to file."""
from datetime import datetime
report_dir = Path('.claude/data/reports')
report_dir.mkdir(parents=True, exist_ok=True)
timestamp = datetime.now().strftime('%Y-%m-%d_%H%M%S')
report_file = report_dir / f'web-crawling-{timestamp}.md'
# Generate report content
content = generate_crawling_report_content(results, base_url)
report_file.write_text(content, encoding='utf-8')
print(f"[OK] Comprehensive crawling report saved to: {report_file}")
def generate_crawling_report_content(results, base_url):
"""Generate comprehensive markdown report for crawling results."""
from datetime import datetime
total_pages = len(results)
successful_pages = sum(1 for r in results if r.success)
failed_pages = total_pages - successful_pages
total_errors = sum(len(r.console_errors) + len(r.javascript_errors) for r in results)
total_warnings = sum(len(r.console_warnings) for r in results)
content = f"""{'='*80}
WEB CRAWLING VALIDATION REPORT
{'='*80}
**Base URL**: {base_url}
**Timestamp**: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}
**Status**: {'PASSED' if successful_pages == total_pages else 'FAILED'}
## SUMMARY
{'-'*40}
- **Total Pages**: {total_pages}
- **Successful**: {successful_pages}
- **Failed**: {failed_pages}
- **Success Rate**: {(successful_pages/total_pages)*100:.1f}%
- **Total Errors**: {total_errors}
- **Total Warnings**: {total_warnings}
## DETAILED RESULTS
{'-'*40}
"""
# Sort results by status and URL
results.sort(key=lambda r: (not r.success, r.url))
for i, result in enumerate(results, 1):
status_icon = "" if result.success else ""
content += f"""
### {i}. {result.url} {status_icon}
**Status**: {'PASSED' if result.success else 'FAILED'}
**Load Time**: {result.load_time:.2f}s
**Page Title**: {result.page_title}
"""
if not result.success:
errors = len(result.console_errors) + len(result.javascript_errors)
content += f"**Total Errors**: {errors}\n"
if result.console_errors:
content += "\n**Console Errors:**\n"
for j, error in enumerate(result.console_errors[:5], 1):
content += f" {j}. [{error.level.upper()}] {error.message}\n"
if len(result.console_errors) > 5:
content += f" ... and {len(result.console_errors) - 5} more errors\n"
if result.javascript_errors:
content += "\n**JavaScript Errors:**\n"
for j, error in enumerate(result.javascript_errors[:5], 1):
content += f" {j}. {str(error)}\n"
if len(result.javascript_errors) > 5:
content += f" ... and {len(result.javascript_errors) - 5} more errors\n"
if result.console_warnings:
content += f"**Warnings**: {len(result.console_warnings)}\n"
if len(result.console_warnings) <= 3:
for j, warning in enumerate(result.console_warnings, 1):
content += f" {j}. {warning.message}\n"
else:
content += f" Top 3 warnings:\n"
for j, warning in enumerate(result.console_warnings[:3], 1):
content += f" {j}. {warning.message}\n"
content += f" ... and {len(result.console_warnings) - 3} more warnings\n"
content += "\n---\n"
# Add recommendations section
content += f"""
## RECOMMENDATIONS
{'-'*40}
"""
if failed_pages > 0:
content += f"""### 🚨 Priority Fixes ({failed_pages} pages failed)
1. **Fix JavaScript Errors**: Review and fix syntax errors in failed pages
2. **Check Server Configuration**: Ensure all resources load correctly
3. **Validate HTML Structure**: Check for malformed HTML causing issues
4. **Test Functionality**: Verify interactive elements work properly
"""
if total_errors > 0:
content += f"""### 🔧 Technical Improvements ({total_errors} total errors)
1. **Console Error Resolution**: Fix JavaScript runtime errors
2. **Resource Loading**: Ensure all assets (CSS, JS, images) are accessible
3. **Network Issues**: Check for failed API calls or missing resources
4. **Error Handling**: Implement proper error handling in JavaScript
"""
if total_warnings > 0:
content += f"""### [WARN] Code Quality ({total_warnings} warnings)
1. **Deprecation Warnings**: Update deprecated API usage
2. **Performance Optimization**: Address performance warnings
3. **Best Practices**: Follow modern web development standards
4. **Code Cleanup**: Remove unused code and console logs
"""
content += f"""
## VALIDATION METRICS
{'-'*40}
- **Validation Date**: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}
- **Tool**: Web Page Validator with Crawling
- **Scope**: Full site validation with subpage discovery
- **Coverage**: {total_pages} pages analyzed
- **Effectiveness**: {((successful_pages/total_pages)*100):.1f}% success rate
{'='*80}
"""
return content
def main():
"""Main execution function"""
import argparse
parser = argparse.ArgumentParser(description='Validate web pages automatically')
parser.add_argument('url', nargs='?', default='http://127.0.0.1:5000',
help='URL to validate (default: http://127.0.0.1:5000)')
parser.add_argument('--verbose', '-v', action='store_true',
help='Show detailed output including warnings')
parser.add_argument('--auto-fix', action='store_true',
help='Attempt to automatically fix detected issues')
parser.add_argument('--report', action='store_true',
help='Save detailed report to .claude/data/reports/')
parser.add_argument('--timeout', type=int, default=30,
help='Page load timeout in seconds')
parser.add_argument('--wait', type=int, default=3,
help='Wait time after page load in seconds')
# Crawling options
parser.add_argument('--crawl', action='store_true',
help='Crawl and validate all subpages found on the site')
parser.add_argument('--max-depth', type=int, default=3,
help='Maximum crawl depth (default: 3)')
parser.add_argument('--max-pages', type=int, default=50,
help='Maximum number of pages to crawl (default: 50)')
parser.add_argument('--include', type=str, default='',
help='Comma-separated list of path patterns to include (e.g., "/api/*,/analytics/*")')
parser.add_argument('--exclude', type=str, default='',
help='Comma-separated list of path patterns to exclude (e.g., "/admin/*,/debug/*")')
# BooleanOptionalAction (Python 3.9+) also adds --no-same-domain; a plain
# store_true flag with default=True could never be switched off.
parser.add_argument('--same-domain', action=argparse.BooleanOptionalAction, default=True,
help='Only crawl pages on the same domain (default: True)')
args = parser.parse_args()
print("[INFO] Web Page Validation")
print(f"[INFO] Target URL: {args.url}")
print()
if not VALIDATOR_AVAILABLE:
print("[ERROR] Web page validator not available")
print("[INFO] Install dependencies: pip install selenium")
print("[INFO] Install ChromeDriver: https://chromedriver.chromium.org/")
return 1
# Run validation
if args.crawl:
print("[INFO] Starting web page crawling and validation...")
print(f"[INFO] Max depth: {args.max_depth}, Max pages: {args.max_pages}")
print("[INFO] This may take several minutes depending on site size...")
print()
if args.include:
print(f"[INFO] Including only: {args.include}")
if args.exclude:
print(f"[INFO] Excluding: {args.exclude}")
print()
else:
print("[INFO] Starting headless browser validation...")
print("[INFO] This may take a few seconds...")
print()
try:
with WebPageValidator(headless=True, timeout=args.timeout) as validator:
if args.crawl:
# Enhanced crawling functionality
results = crawl_and_validate(
validator, args.url,
max_depth=args.max_depth,
max_pages=args.max_pages,
include_patterns=args.include.split(',') if args.include else [],
exclude_patterns=args.exclude.split(',') if args.exclude else [],
same_domain=args.same_domain,
wait_for_load=args.wait,
verbose=args.verbose
)
# Display crawling results
display_crawling_results(results)
# Save comprehensive report
if args.report or not all(r.success for r in results):
save_crawling_report(results, args.url)
# Return appropriate exit code
return 0 if all(r.success for r in results) else 1
else:
# Single page validation (existing logic)
result = validator.validate_url(args.url, wait_for_load=args.wait)
# Display results
if result.success:
print("=" * 80)
print("[OK] VALIDATION PASSED")
print("=" * 80)
print(f"URL: {result.url}")
print(f"Page Title: {result.page_title}")
print(f"Load Time: {result.load_time:.2f}s")
print(f"Console Errors: 0")
print(f"Console Warnings: {len(result.console_warnings)}")
print(f"JavaScript Errors: 0")
print()
print("[OK] No errors detected - page is functioning correctly")
print()
if result.console_warnings and args.verbose:
print("WARNINGS:")
for i, warning in enumerate(result.console_warnings[:5], 1):
print(f" {i}. {warning.message}")
if len(result.console_warnings) > 5:
print(f" ... and {len(result.console_warnings) - 5} more warnings")
print()
else:
print("=" * 80)
print("[ERROR] VALIDATION FAILED")
print("=" * 80)
print(f"URL: {result.url}")
print(f"Page Title: {result.page_title}")
print(f"Load Time: {result.load_time:.2f}s")
print()
print("ERROR SUMMARY:")
print(f" Console Errors: {len(result.console_errors)}")
print(f" Console Warnings: {len(result.console_warnings)}")
print(f" JavaScript Errors: {len(result.javascript_errors)}")
print(f" Network Errors: {len(result.network_errors)}")
print()
# Show top errors
if result.console_errors:
print("TOP CONSOLE ERRORS:")
for i, error in enumerate(result.console_errors[:3], 1):
print(f" {i}. [{error.level.upper()}] {error.message[:80]}")
if error.source and error.source != 'unknown':
print(f" Source: {error.source}")
if len(result.console_errors) > 3:
print(f" ... and {len(result.console_errors) - 3} more errors")
print()
# Show JavaScript errors
if result.javascript_errors:
print("JAVASCRIPT ERRORS:")
for i, js_error in enumerate(result.javascript_errors[:3], 1):
print(f" {i}. {js_error[:80]}")
if len(result.javascript_errors) > 3:
print(f" ... and {len(result.javascript_errors) - 3} more errors")
print()
# Auto-fix suggestions
if args.auto_fix:
print("AUTO-FIX ANALYSIS:")
print(" Analyzing errors for automatic fixes...")
print()
# Check for string escaping issues
has_escape_issues = any(
'SyntaxError' in str(e) or 'unexpected token' in str(e).lower()
for e in result.javascript_errors
)
if has_escape_issues:
print(" [DETECTED] String escaping issues in JavaScript")
print(" [FIX] Use Python raw strings (r'...') for JavaScript escape sequences")
print(" [EXAMPLE] Change 'Value\\n' to r'Value\\n' in Python source")
print()
print(" [ACTION] Would you like to apply automatic fixes? (y/n)")
else:
print(" [INFO] No auto-fixable issues detected")
print(" [INFO] Manual review required for detected errors")
print()
# Save detailed report
if args.report or not result.success:
report_dir = Path('.claude/data/reports')
report_dir.mkdir(parents=True, exist_ok=True)
timestamp = datetime.now().strftime('%Y-%m-%d_%H%M%S')
report_file = report_dir / f'web-validation-{timestamp}.md'
report_content = format_validation_report(result, verbose=True)
report_file.write_text(report_content, encoding='utf-8')
print(f"[OK] Detailed report saved to: {report_file}")
print()
# Performance metrics
if args.verbose and result.performance_metrics:
print("PERFORMANCE METRICS:")
for key, value in result.performance_metrics.items():
if isinstance(value, (int, float)):
print(f" {key}: {value:.2f}ms")
else:
print(f" {key}: {value}")
print()
print("=" * 80)
return 0 if result.success else 1
except KeyboardInterrupt:
print("\n[WARN] Validation interrupted by user")
return 130
except Exception as e:
print(f"[ERROR] Validation failed: {e}")
import traceback
traceback.print_exc()
return 1
if __name__ == '__main__':
sys.exit(main())
```
## Features
- **Automated Error Detection**: Captures JavaScript errors without manual browser inspection
- **Console Log Monitoring**: Captures errors, warnings, and info logs from browser console
- **Network Monitoring**: Detects failed HTTP requests and resource loading issues
- **Performance Metrics**: Measures page load time and resource usage
- **Auto-Fix Suggestions**: Provides guidance on fixing detected issues
- **Detailed Reports**: Saves comprehensive validation reports to `.claude/data/reports/`
- **🆕 Subpage Crawling**: Automatically discovers and validates all subpages on a website
- **🆕 Comprehensive Coverage**: Crawl with configurable depth limits (default: 3 levels)
- **🆕 Smart Filtering**: Include/exclude specific paths with pattern matching
- **🆕 Site-wide Analysis**: Aggregates errors and warnings across entire website
- **🆕 Progress Tracking**: Real-time crawling progress with detailed status updates
## Requirements
**Recommended** (for full functionality):
```bash
pip install selenium
```
**ChromeDriver** (for Selenium):
- Download from: https://chromedriver.chromium.org/
- Or install automatically: `pip install webdriver-manager`
**Alternative** (if Selenium unavailable):
```bash
pip install playwright
playwright install chromium
```
## Integration with Dashboard
This command is automatically invoked when starting dashboards via `/monitor:dashboard` to ensure no JavaScript errors exist before displaying to the user.
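A programmatic equivalent of that gate, reusing the validator imported in the Implementation above (a sketch under the assumption that the dashboard command wires it up the same way):
```python
from web_page_validator import WebPageValidator

def dashboard_is_clean(url: str = "http://127.0.0.1:5000") -> bool:
    """Return True only if the page loads successfully with no JavaScript errors."""
    with WebPageValidator(headless=True, timeout=30) as validator:
        result = validator.validate_url(url, wait_for_load=3)
        return result.success and not result.javascript_errors
```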
## Output Format
**Terminal Output** (concise summary):
```
[OK] VALIDATION PASSED
URL: http://127.0.0.1:5000
Page Title: Autonomous Agent Dashboard
Load Time: 1.23s
Console Errors: 0
JavaScript Errors: 0
```
**Report File** (detailed analysis):
```markdown
# WEB PAGE VALIDATION REPORT
## Summary
- URL: http://127.0.0.1:5000
- Status: PASSED
- Load Time: 1.23s
- Console Errors: 0
- JavaScript Errors: 0
## Console Errors
(none detected)
## Performance Metrics
- Load Time: 1234ms
- DOM Ready: 456ms
- Resources: 15 loaded successfully
```
## Error Categories
1. **JavaScript Syntax Errors**: Invalid JavaScript code
2. **Runtime Errors**: Uncaught exceptions during execution
3. **Reference Errors**: Undefined variables or functions
4. **Type Errors**: Invalid type operations
5. **Network Errors**: Failed HTTP requests
6. **Resource Errors**: Missing CSS, JS, or image files
## Best Practices
- Run validation after making changes to web components
- Always validate before committing dashboard changes
- Use `--auto-fix` for common issues like string escaping
- Save reports for debugging with `--report` flag
- Increase `--timeout` for slow-loading pages
- Use `--verbose` for detailed troubleshooting
## See Also
- `/monitor:dashboard` - Start dashboard with automatic validation
- `/analyze:quality` - Comprehensive quality control including web validation
- Skill: web-validation - Detailed methodology and best practices