Initial commit

Zhongwei Li
2025-11-29 18:00:50 +08:00
commit c5931553a6
106 changed files with 49995 additions and 0 deletions

commands/learn/analytics.md Normal file

@@ -0,0 +1,215 @@
---
name: learn:analytics
description: Display learning analytics dashboard with pattern progress, skill effectiveness, and trends
delegates-to: autonomous-agent:orchestrator
---
# Learning Analytics Dashboard
Display comprehensive analytics about the autonomous agent's learning progress, including:
- **Pattern Learning Progress**: Quality trends, learning velocity, improvement rates
- **Skill Effectiveness**: Top performing skills, success rates, quality contributions
- **Agent Performance**: Reliability scores, efficiency ratings, delegation patterns
- **Skill Synergies**: Best skill combinations and their effectiveness
- **Prediction System**: Accuracy metrics and model performance
- **Cross-Project Learning**: Universal patterns and knowledge transfer
- **Learning Insights**: Actionable recommendations and trend analysis
## Execution
Generate and display the learning analytics report:
```bash
# Auto-detects plugin path whether in development or installed from marketplace
python <plugin_path>/lib/learning_analytics.py show --dir .claude-patterns
```
## Output Format
The command produces a comprehensive terminal dashboard with:
1. **Overview Section**: Total patterns, quality scores, success rates
2. **Quality Trend Chart**: ASCII visualization of quality progression over time
3. **Learning Velocity**: Improvement rates and trajectory analysis
4. **Top Performing Skills**: Rankings by success rate and quality contribution
5. **Top Performing Agents**: Rankings by reliability and efficiency
6. **Skill Synergies**: Best skill combinations discovered
7. **Prediction System Status**: Accuracy and model training metrics
8. **Cross-Project Learning**: Universal pattern statistics
9. **Learning Patterns**: Fastest and slowest learning areas
10. **Key Insights**: Actionable recommendations based on data
## Example Output
```
+===========================================================================+
| LEARNING ANALYTICS DASHBOARD - ENHANCED SYSTEM v3.0 |
+===========================================================================+
📊 OVERVIEW
---------------------------------------------------------------------------
Total Patterns Captured: 156
Overall Quality Score: 88.5/100
Success Rate: 92.3%
Recent Quality: 91.2/100 (+2.7)
Activity (Last 7 days): 12 patterns
Activity (Last 30 days): 48 patterns
📈 QUALITY TREND OVER TIME
---------------------------------------------------------------------------
95.0 | ██████████|
| ████████████████|
| ████████████████████ |
| ████████████████████ |
87.5 | ████████████████ |
| ████████████ |
| ████████ |
| ████████ |
80.0 |████ |
+------------------------------------------------------+
106 -> 156
Trend: IMPROVING
🚀 LEARNING VELOCITY
---------------------------------------------------------------------------
Weeks Analyzed: 8
Early Average Quality: 85.3/100
Recent Average Quality: 91.2/100
Total Improvement: +5.9 points
Improvement Rate: 0.74 points/week
Trajectory: ACCELERATING
Acceleration: +0.52 (speeding up!)
⭐ TOP PERFORMING SKILLS
---------------------------------------------------------------------------
1. code-analysis Success: 94.3% Quality: 18.5
2. quality-standards Success: 92.1% Quality: 17.8
3. testing-strategies Success: 89.5% Quality: 16.2
4. security-patterns Success: 91.0% Quality: 15.9
5. pattern-learning Success: 88.7% Quality: 15.1
🤖 TOP PERFORMING AGENTS
---------------------------------------------------------------------------
1. code-analyzer Reliability: 96.9% Efficiency: 1.02
2. quality-controller Reliability: 95.2% Efficiency: 0.98
3. test-engineer Reliability: 93.5% Efficiency: 0.89
4. documentation-generator Reliability: 91.8% Efficiency: 0.95
5. frontend-analyzer Reliability: 90.5% Efficiency: 1.05
🔗 SKILL SYNERGIES (Top Combinations)
---------------------------------------------------------------------------
1. code-analysis + quality-standards Score: 8.5 Uses: 38
Quality: 92.3 Success: 97.8% [HIGHLY_RECOMMENDED]
2. code-analysis + security-patterns Score: 7.2 Uses: 28
Quality: 91.0 Success: 96.4% [HIGHLY_RECOMMENDED]
🎯 PREDICTION SYSTEM STATUS
---------------------------------------------------------------------------
Status: ACTIVE
Models Trained: 15 skills
Prediction Accuracy: 87.5%
[PASS] High accuracy - automated recommendations highly reliable
🌐 CROSS-PROJECT LEARNING
---------------------------------------------------------------------------
Status: ACTIVE
Universal Patterns: 45
Avg Transferability: 82.3%
[PASS] Knowledge transfer active - benefiting from other projects
💡 KEY INSIGHTS
---------------------------------------------------------------------------
[PASS] Learning is accelerating! Quality improving at 0.74 points/week and speeding up
[PASS] Recent performance (91.2) significantly better than historical average (88.5)
[PASS] Highly effective skill pair discovered: code-analysis + quality-standards (8.5 synergy score)
[PASS] Prediction system highly accurate (87.5%) - trust automated recommendations
[PASS] Fastest learning in: refactoring, bug-fix
+===========================================================================+
| Generated: 2025-10-23T14:30:52.123456 |
+===========================================================================+
```
## Export Options
### Export as JSON
```bash
# Auto-detects plugin path
python <plugin_path>/lib/learning_analytics.py export-json --output data/reports/analytics.json --dir .claude-patterns
```
### Export as Markdown
```bash
# Auto-detects plugin path
python <plugin_path>/lib/learning_analytics.py export-md --output data/reports/analytics.md --dir .claude-patterns
```
## Usage Scenarios
### Daily Standup
Review learning progress and identify areas needing attention:
```bash
/learn:analytics
```
### Weekly Review
Export comprehensive report for documentation:
```bash
# Auto-detects plugin path
python <plugin_path>/lib/learning_analytics.py export-md --output weekly_analytics.md
```
### Performance Investigation
Analyze why quality might be declining or improving:
```bash
/learn:analytics
# Review Learning Velocity and Learning Patterns sections
```
### Skill Selection Validation
Verify which skills and combinations work best:
```bash
/learn:analytics
# Review Top Performing Skills and Skill Synergies sections
```
## Interpretation Guide
### Quality Scores
- **90-100**: Excellent - Optimal performance
- **80-89**: Good - Meeting standards
- **70-79**: Acceptable - Some improvement needed
- **<70**: Needs attention - Review approach
### Learning Velocity
- **Accelerating**: System is learning faster over time (optimal)
- **Linear**: Steady improvement at constant rate (good)
- **Decelerating**: Improvement slowing down (may need new approaches)
### Prediction Accuracy
- **>85%**: High accuracy - Trust automated recommendations
- **70-85%**: Moderate accuracy - System still learning
- **<70%**: Low accuracy - Need more training data
### Skill Synergies
- **Score >5**: Highly recommended combination
- **Score 2-5**: Recommended combination
- **Score <2**: Use with caution
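The bands above are simple thresholds, so they are easy to encode. A minimal sketch in Python (the function name and return shape are illustrative, not part of the plugin's API):
```python
def interpret_metrics(quality: float, accuracy: float, synergy: float) -> dict:
    """Map raw dashboard numbers to the bands described in this guide."""
    if quality >= 90:
        quality_band = "Excellent - Optimal performance"
    elif quality >= 80:
        quality_band = "Good - Meeting standards"
    elif quality >= 70:
        quality_band = "Acceptable - Some improvement needed"
    else:
        quality_band = "Needs attention - Review approach"

    if accuracy > 85:
        accuracy_band = "High - Trust automated recommendations"
    elif accuracy >= 70:
        accuracy_band = "Moderate - System still learning"
    else:
        accuracy_band = "Low - Need more training data"

    if synergy > 5:
        synergy_band = "Highly recommended combination"
    elif synergy >= 2:
        synergy_band = "Recommended combination"
    else:
        synergy_band = "Use with caution"

    return {"quality": quality_band, "accuracy": accuracy_band, "synergy": synergy_band}


# Using the example dashboard values shown above
print(interpret_metrics(quality=88.5, accuracy=87.5, synergy=8.5))
```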
## Frequency Recommendations
- **After every 10 patterns**: Quick check of trends
- **Weekly**: Full review of all sections
- **Monthly**: Deep analysis with exported reports
- **After major changes**: Verify impact on learning
## Notes
- Analytics require at least 10 patterns for meaningful insights
- Learning velocity requires 3+ weeks of data
- Prediction accuracy improves with more training data
- Cross-project learning activates automatically when enabled
- All metrics update in real-time as new patterns are captured
---

commands/learn/clone.md Normal file

@@ -0,0 +1,418 @@
---
name: learn:clone
description: Clone and learn features from external repos to implement in current project
delegates-to: autonomous-agent:dev-orchestrator
---
# Learn-Clone Command
## Command: `/learn:clone`
**Feature cloning through learning** - Analyzes features and capabilities in external GitHub/GitLab repositories, understands their implementation, and helps implement similar or equivalent functionality in the current project while respecting licenses and best practices.
**🔄 Intelligent Feature Cloning:**
- **Feature Analysis**: Deep understanding of how features work
- **Implementation Extraction**: Learn implementation patterns
- **Adaptation**: Adapt features to current project context
- **License Compliance**: Respect and comply with source licenses
- **Best Practice Integration**: Implement using current project standards
- **Testing Strategy**: Learn and adapt testing approaches
## How It Works
1. **Feature Identification**: Analyzes target repository for specific features
2. **Implementation Study**: Studies how features are implemented
3. **Pattern Extraction**: Extracts implementation patterns and approaches
4. **Adaptation Planning**: Plans how to adapt to current project
5. **Implementation**: Implements similar functionality (with attribution)
6. **Testing**: Adapts testing strategies from source
7. **Documentation**: Documents learnings and implementation
## Usage
### Basic Usage
```bash
# Clone specific feature from repository
/learn:clone https://github.com/user/repo --feature "JWT authentication"
# Clone multiple features
/learn:clone https://github.com/user/repo --features "auth,caching,rate-limiting"
# Learn implementation approach
/learn:clone https://github.com/user/repo --feature "real-time notifications" --learn-only
```
### With Implementation
```bash
# Clone and implement immediately
/learn:clone https://github.com/user/repo --feature "JWT auth" --implement
# Clone with adaptation
/learn:clone https://github.com/user/repo --feature "caching" --adapt-to-current
# Clone with testing
/learn:clone https://github.com/user/repo --feature "API validation" --include-tests
```
### Advanced Options
```bash
# Deep learning mode (understands internals)
/learn:clone https://github.com/user/repo --feature "auth" --deep-learning
# Compare implementations
/learn:clone https://github.com/user/repo --feature "caching" --compare-approaches
# Extract patterns only (no implementation)
/learn:clone https://github.com/user/repo --feature "queue" --extract-patterns
# With license attribution
/learn:clone https://github.com/user/repo --feature "parser" --add-attribution
```
## Output Format
### Terminal Output
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔄 FEATURE LEARNING COMPLETE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Feature: JWT Authentication
Source: fastapi/fastapi (MIT License)
Complexity: Medium | Adaptation Required: Yes
Key Components Identified:
* Token generation with configurable expiry
* Dependency injection for auth validation
* Refresh token mechanism
Implementation Strategy:
1. Add python-jose dependency
2. Create auth utility module
3. Implement token generation/validation
4. Add authentication middleware
📄 Full analysis: .claude/data/reports/learn-clone-jwt-auth-2025-10-29.md
⏱ Analysis completed in 2.8 minutes
Next: Review analysis, then use /dev:auto to implement
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
### Detailed Report
```markdown
=======================================================
FEATURE LEARNING REPORT
=======================================================
Feature: JWT Authentication
Source: https://github.com/fastapi/fastapi
License: MIT (attribution required)
Analysis Date: 2025-10-29
+- Feature Overview -----------------------------------+
| Feature Name: JWT Authentication System |
| Location: fastapi/security/oauth2.py |
| Complexity: Medium |
| Dependencies: python-jose, passlib |
| |
| Core Capabilities: |
| * Access token generation with expiry |
| * Refresh token support |
| * Dependency injection for validation |
| * Multiple authentication schemes |
| * Token revocation support |
+-------------------------------------------------------+
+- Implementation Analysis ----------------------------+
| Key Files Analyzed: |
| * fastapi/security/oauth2.py (core logic) |
| * fastapi/security/utils.py (helpers) |
| * tests/test_security_oauth2.py (tests) |
| |
| Architecture: |
| +- Token Generation Layer |
| | * Uses python-jose for JWT encoding |
| | * Configurable algorithms (HS256, RS256) |
| | * Expiry and claims management |
| | |
| +- Validation Layer |
| | * Dependency injection pattern |
| | * Automatic token extraction from headers |
| | * Validation with error handling |
| | |
| +- Integration Layer |
| * Middleware for route protection |
| * Flexible authentication schemes |
| * OAuth2 PasswordBearer support |
+-------------------------------------------------------+
+- Code Patterns Extracted ----------------------------+
| Pattern 1: Token Generation |
| ```python |
| from jose import jwt |
| from datetime import datetime, timedelta |
| |
| def create_token(data: dict, expires_delta: timedelta):|
| to_encode = data.copy() |
| expire = datetime.utcnow() + expires_delta |
| to_encode.update({"exp": expire}) |
| return jwt.encode(to_encode, SECRET_KEY, ALGO) |
| ``` |
| |
| Pattern 2: Dependency Injection for Auth |
| ```python |
| from fastapi import Depends, HTTPException |
| from fastapi.security import OAuth2PasswordBearer |
| |
| oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")|
| |
| async def get_current_user(token: str = Depends(oauth2_scheme)):|
| credentials_exception = HTTPException(...) |
| try: |
| payload = jwt.decode(token, SECRET, ALGO) |
| username = payload.get("sub") |
| if username is None: |
| raise credentials_exception |
| return username |
| except JWTError: |
| raise credentials_exception |
| ``` |
| |
| Pattern 3: Route Protection |
| ```python |
| @app.get("/users/me") |
| async def read_users_me(current_user: User = Depends(get_current_user)):|
| return current_user |
| ``` |
+-------------------------------------------------------+
+- Adaptation Strategy for Current Project ------------+
| Current Project Context: |
| * Type: Claude Code Plugin |
| * Language: Python + Markdown config |
| * Architecture: Agent-based with skills |
| |
| Adaptation Required: |
| 1. Simplify for plugin context |
| * May not need OAuth2PasswordBearer |
| * Focus on token generation/validation |
| * Adapt for agent communication |
| |
| 2. Integration points |
| * Add to orchestrator for secure agent calls |
| * Protect sensitive agent operations |
| * Add authentication skill |
| |
| 3. Dependencies |
| * Add: python-jose[cryptography] |
| * Add: passlib[bcrypt] |
| * Keep: Lightweight, minimal deps |
+-------------------------------------------------------+
+- Implementation Roadmap ------------------------------+
| Phase 1: Core Implementation (2-3 hours) |
| Step 1: Add Dependencies |
| +- Add python-jose to requirements |
| +- Add passlib for password hashing |
| +- Update lock file |
| |
| Step 2: Create Auth Skill |
| +- Create skills/authentication/SKILL.md |
| +- Add JWT token generation patterns |
| +- Add validation best practices |
| +- Add security considerations |
| |
| Step 3: Implement Token Utilities |
| +- Create lib/auth_utils.py |
| +- Implement create_token() |
| +- Implement validate_token() |
| +- Add error handling |
| |
| Phase 2: Integration (1-2 hours) |
| Step 4: Agent Authentication |
| +- Add auth to sensitive agent operations |
| +- Implement token validation middleware |
| +- Add authentication examples |
| |
| Step 5: Testing (1 hour) |
| +- Write unit tests for token utils |
| +- Write integration tests |
| +- Add security tests |
| |
| Phase 3: Documentation (30 min) |
| +- Document auth skill usage |
| +- Add examples to README |
| +- Add security best practices |
| +- Include attribution to FastAPI |
+-------------------------------------------------------+
+- Testing Strategy Learned ---------------------------+
| From Source Repository Tests: |
| |
| Test Categories: |
| 1. Token Generation Tests |
| * Valid token creation |
| * Token expiry handling |
| * Custom claims inclusion |
| |
| 2. Token Validation Tests |
| * Valid token validation |
| * Expired token rejection |
| * Invalid signature detection |
| * Malformed token handling |
| |
| 3. Integration Tests |
| * Protected route access with valid token |
| * Protected route rejection without token |
| * Token refresh flow |
| |
| Test Implementation Example: |
| ```python |
| def test_create_access_token(): |
| data = {"sub": "user@example.com"} |
| token = create_access_token(data) |
| assert token is not None |
| payload = jwt.decode(token, SECRET, ALGO) |
| assert payload["sub"] == "user@example.com" |
| assert "exp" in payload |
| ``` |
+-------------------------------------------------------+
+- License Compliance ----------------------------------+
| Source License: MIT License |
| |
| Requirements: |
| ✅ Include original license notice |
| ✅ Include attribution in documentation |
| ✅ Do not claim original authorship |
| |
| Attribution Text (add to README and code files): |
| |
| """ |
| JWT Authentication implementation learned from: |
| FastAPI (https://github.com/tiangolo/fastapi) |
| Copyright (c) 2018 Sebastián Ramírez |
| MIT License |
| |
| Adapted for Claude Code Plugin with modifications. |
| """ |
+-------------------------------------------------------+
+- Learned Patterns to Store --------------------------+
| Pattern: Dependency Injection for Security |
| * Effectiveness: 95/100 |
| * Reusability: High |
| * Complexity: Medium |
| * Store in: .claude-patterns/security-patterns.json |
| |
| Pattern: Token-Based Authentication |
| * Effectiveness: 92/100 |
| * Reusability: High |
| * Complexity: Medium |
| * Store in: .claude-patterns/auth-patterns.json |
+-------------------------------------------------------+
=======================================================
NEXT STEPS
=======================================================
Ready to Implement?
* Review implementation roadmap above
* Check license compliance requirements
* Use: /dev:auto "implement JWT authentication based on learned patterns"
Need More Analysis?
* Analyze alternative implementations
* Compare with other auth approaches
* Deep-dive into security considerations
=======================================================
Analysis Time: 2.8 minutes
Feature Complexity: Medium
Implementation Estimate: 4-6 hours
License: MIT (attribution required)
Learned patterns stored in database for future reference.
```
## Integration with Learning System
Stores learned feature patterns:
```json
{
"feature_clone_patterns": {
"feature_name": "jwt_authentication",
"source_repo": "fastapi/fastapi",
"source_license": "MIT",
"patterns_extracted": 3,
"adaptation_required": true,
"implemented": false,
"implementation_approach": "adapted_for_plugin",
"attribution_added": true
}
}
```
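Persisting such a record could look roughly like the sketch below, assuming one JSON file per pattern family under `.claude-patterns/` (the helper name and file layout are illustrative, not the plugin's actual storage API):
```python
import json
from pathlib import Path

def store_clone_pattern(record: dict, patterns_dir: str = ".claude-patterns") -> Path:
    """Append a learned-feature record to the patterns directory (illustrative)."""
    path = Path(patterns_dir) / "feature_clone_patterns.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    existing = json.loads(path.read_text()) if path.exists() else []
    existing.append(record)
    path.write_text(json.dumps(existing, indent=2))
    return path

store_clone_pattern({
    "feature_name": "jwt_authentication",
    "source_repo": "fastapi/fastapi",
    "source_license": "MIT",
    "patterns_extracted": 3,
    "attribution_added": True,
})
```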
## Agent Delegation
- **dev-orchestrator**: Coordinates learning and implementation
- **code-analyzer**: Analyzes source implementation
- **pattern-learning**: Extracts and stores patterns
- **security-auditor**: Ensures secure implementation
## Skills Integration
- **code-analysis**: For understanding source code
- **pattern-learning**: For pattern extraction
- **security-patterns**: For secure implementation
- **documentation-best-practices**: For proper attribution
## Use Cases
### Learning Authentication
```bash
/learn:clone https://github.com/fastapi/fastapi --feature "JWT auth"
```
### Learning Caching Strategies
```bash
/learn:clone https://github.com/django/django --feature "caching"
```
### Learning Testing Approaches
```bash
/learn:clone https://github.com/pytest-dev/pytest --feature "test fixtures"
```
## Best Practices
### License Compliance
- Always check and respect source licenses
- Add proper attribution in code and documentation
- Do not copy code verbatim - learn and adapt
- Understand license restrictions before cloning
### Feature Selection
- Choose features that fit project needs
- Consider maintenance burden
- Evaluate complexity vs value
- Check for dependencies
### Implementation
- Adapt to project conventions
- Don't blindly copy - understand first
- Write tests for cloned features
- Document learnings and adaptations
---
**Version**: 1.0.0
**Integration**: Uses dev-orchestrator, code-analyzer agents
**Skills**: code-analysis, pattern-learning, security-patterns
**Platform**: Cross-platform
**Scope**: Learn and adapt features from external repositories
**License**: Enforces proper attribution and compliance

commands/learn/history.md Normal file

@@ -0,0 +1,450 @@
---
name: learn:history
description: Learn from commit history to identify patterns, debugging strategies, and improvement areas
delegates-to: autonomous-agent:orchestrator
---
# Learn-History Command
## Command: `/learn:history`
**Learn from repository evolution** - Analyzes commit history in external GitHub/GitLab repositories to discover successful debugging patterns, development workflows, and improvement strategies that can be applied to the current project.
**📚 Historical Pattern Learning:**
- **Commit Analysis**: Study how issues were resolved over time
- **Debug Pattern Discovery**: Learn effective debugging approaches
- **Development Workflow**: Understand successful development practices
- **Refactoring Patterns**: Identify effective code improvement strategies
- **Test Evolution**: Learn how testing strategies matured
- **Documentation Evolution**: Study documentation improvement patterns
## How It Works
1. **History Access**: Clones repository and analyzes commit history
2. **Pattern Extraction**: Identifies recurring patterns in commits
3. **Debug Strategy Analysis**: Studies how bugs were fixed
4. **Workflow Discovery**: Maps development and release workflows
5. **Quality Improvement Tracking**: Analyzes quality evolution over time
6. **Pattern Application**: Suggests how to apply learnings to current project
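As a rough sketch of step 2, commit subjects can be bucketed by conventional-commit prefix. This assumes a locally cloned repository; the bucket names are illustrative:
```python
import subprocess
from collections import Counter

def classify_commits(repo_dir: str = ".") -> Counter:
    """Count commit subjects per conventional-commit type (feat, fix, ...)."""
    subjects = subprocess.run(
        ["git", "-C", repo_dir, "log", "--pretty=%s"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    buckets = Counter()
    for subject in subjects:
        # "fix(router): handle X" -> "fix"
        prefix = subject.split(":", 1)[0].split("(", 1)[0].strip().lower()
        if prefix in {"feat", "fix", "refactor", "test", "docs", "perf", "chore"}:
            buckets[prefix] += 1
        else:
            buckets["other"] += 1
    return buckets

print(classify_commits("."))  # e.g. Counter({'fix': 423, 'feat': 310, ...})
```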
## Usage
### Basic Usage
```bash
# Learn from repository history
/learn:history https://github.com/username/repo
# Learn from specific branch
/learn:history https://github.com/username/repo --branch develop
# Learn from date range
/learn:history https://github.com/username/repo --since "2024-01-01" --until "2024-12-31"
```
### Focused Analysis
```bash
# Focus on bug fixes
/learn:history https://github.com/user/repo --focus bug-fixes
# Focus on refactoring patterns
/learn:history https://github.com/user/repo --focus refactoring
# Focus on test improvements
/learn:history https://github.com/user/repo --focus testing
# Focus on performance improvements
/learn:history https://github.com/user/repo --focus performance
```
### Advanced Options
```bash
# Analyze specific contributor's patterns
/learn:history https://github.com/user/repo --author "developer@email.com"
# Deep analysis with AI-powered insights
/learn:history https://github.com/user/repo --deep-analysis
# Compare with current project
/learn:history https://github.com/user/repo --apply-to-current
# Generate actionable roadmap
/learn:history https://github.com/user/repo --generate-improvements
```
## Output Format
### Terminal Output (Concise)
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📚 HISTORY ANALYSIS COMPLETE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Repository: fastapi/fastapi
Commits Analyzed: 3,892 | Time Range: 3.5 years
Key Discoveries:
* Early focus on type safety prevented 60% of bugs
* Incremental refactoring approach (small PRs)
* Test-first development for all features
Top Patterns to Apply:
1. Implement pre-commit hooks for type checking
2. Use conventional commit messages for automation
3. Add integration tests before refactoring
📄 Full report: .claude/data/reports/learn-history-fastapi-2025-10-29.md
⏱ Analysis completed in 4.5 minutes
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
### Detailed Report
```markdown
=======================================================
REPOSITORY HISTORY ANALYSIS
=======================================================
Repository: https://github.com/fastapi/fastapi
Time Range: 2018-12-05 to 2022-06-15 (3.5 years)
Commits Analyzed: 3,892
Contributors: 487
+- Development Evolution ------------------------------+
| Phase 1: Initial Development (6 months) |
| * Focus: Core functionality and type safety |
| * Commits: 234 |
| * Key Pattern: Type-first development |
| * Result: Strong foundation, fewer bugs later |
| |
| Phase 2: Feature Expansion (12 months) |
| * Focus: Adding features while maintaining quality |
| * Commits: 892 |
| * Key Pattern: Test-before-feature approach |
| * Result: Features added without quality degradation |
| |
| Phase 3: Maturity & Optimization (24 months) |
| * Focus: Performance and developer experience |
| * Commits: 2,766 |
| * Key Pattern: Continuous small improvements |
| * Result: Best-in-class performance and DX |
+-------------------------------------------------------+
+- Bug Fix Patterns Discovered ------------------------+
| 1. Type Error Prevention (423 commits) |
| Pattern: Added type hints before features |
| Effectiveness: Prevented 60% of potential bugs |
| Application to Current Project: |
| -> Add comprehensive type hints to all agents |
| -> Use mypy in pre-commit hooks |
| -> Validate agent schemas with Pydantic |
| |
| 2. Test-Driven Bug Fixes (892 commits) |
| Pattern: Write failing test -> Fix -> Verify |
| Effectiveness: 95% of bugs didn't recur |
| Application to Current Project: |
| -> Add test case for every bug fix |
| -> Use regression test suite |
| -> Integrate with quality-controller agent |
| |
| 3. Incremental Refactoring (234 commits) |
| Pattern: Small, focused refactoring PRs |
| Effectiveness: Zero breaking changes |
| Application to Current Project: |
| -> Refactor one agent/skill at a time |
| -> Maintain backward compatibility |
| -> Use deprecation warnings before removal |
| |
| 4. Dependency Updates (156 commits) |
| Pattern: Regular, automated dependency updates |
| Effectiveness: Zero security incidents |
| Application to Current Project: |
| -> Use Dependabot or similar automation |
| -> Test after each dependency update |
| -> Pin versions with compatibility ranges |
+-------------------------------------------------------+
+- Development Workflow Patterns ----------------------+
| Commit Message Pattern Analysis: |
| * 78% use conventional commits (feat:, fix:, etc.) |
| * Average commit size: 127 lines changed |
| * 92% of commits reference issues |
| |
| PR Review Process: |
| * Average review time: 18 hours |
| * Requires 2+ approvals for core changes |
| * Automated CI checks (tests, linting, types) |
| * Documentation updated in same PR |
| |
| Release Workflow: |
| * Semantic versioning strictly followed |
| * Changelog auto-generated from commits |
| * Release notes include upgrade guide |
| * Beta releases before major versions |
| |
| Application to Current Project: |
| 1. Adopt conventional commit format |
| 2. Link commits to slash command implementations |
| 3. Auto-generate CHANGELOG.md from commits |
| 4. Add pre-commit hooks for validation |
| 5. Implement automated release workflow |
+-------------------------------------------------------+
+- Testing Strategy Evolution -------------------------+
| Timeline of Testing Improvements: |
| |
| Year 1 (2019): |
| * Coverage: 45% -> 75% |
| * Pattern: Added tests retrospectively |
| * Result: Many bugs caught late |
| |
| Year 2 (2020): |
| * Coverage: 75% -> 92% |
| * Pattern: Test-first for new features |
| * Result: Fewer bugs in new code |
| |
| Year 3 (2021): |
| * Coverage: 92% -> 96% |
| * Pattern: Property-based testing added |
| * Result: Edge cases discovered automatically |
| |
| Key Learnings: |
| * Early investment in testing pays off |
| * Property-based testing finds unexpected bugs |
| * Fast tests encourage frequent execution |
| * Integration tests complement unit tests |
| |
| Application to Current Project: |
| 1. Set coverage goal: 90%+ for agents/skills |
| 2. Add property-based tests for core logic |
| 3. Use test-engineer agent for all features |
| 4. Optimize test execution time (<60s total) |
| 5. Add integration tests for agent workflows |
+-------------------------------------------------------+
+- Documentation Improvement Patterns -----------------+
| Documentation Evolution: |
| |
| Early Stage: |
| * Basic README with installation steps |
| * Inline code comments only |
| * Result: High support burden |
| |
| Growth Stage: |
| * Added tutorials and examples |
| * API documentation from docstrings |
| * Result: 40% reduction in support requests |
| |
| Mature Stage: |
| * Multi-language documentation |
| * Interactive examples |
| * Video tutorials |
| * Result: Best-in-class documentation |
| |
| Key Patterns: |
| * Documentation updated with code (same PR) |
| * Examples tested as part of CI |
| * User feedback drives improvements |
| * Visual aids (diagrams, flowcharts) |
| |
| Application to Current Project: |
| 1. Keep command documentation with implementation |
| 2. Add usage examples to all slash commands |
| 3. Create visual architecture diagrams |
| 4. Test documentation examples automatically |
| 5. Add troubleshooting section to each command |
+-------------------------------------------------------+
+- Performance Optimization Journey -------------------+
| Performance Commits: 167 |
| |
| Major Optimizations: |
| 1. Async/Await Migration (Commit #1234) |
| * 3x throughput improvement |
| * Pattern: Gradual migration, one module at time |
| * Lesson: Plan async from start or budget time |
| |
| 2. Dependency Injection Caching (Commit #2456) |
| * 40% latency reduction |
| * Pattern: Cache resolved dependencies |
| * Lesson: Profile before optimizing |
| |
| 3. Response Model Optimization (Commit #3012) |
| * 25% faster serialization |
| * Pattern: Lazy loading and selective fields |
| * Lesson: Measure real-world impact |
| |
| Application to Current Project: |
| 1. Add async support to background-task-manager |
| 2. Cache pattern database queries |
| 3. Profile agent execution times |
| 4. Optimize skill loading (lazy load when possible) |
| 5. Implement parallel agent execution |
+-------------------------------------------------------+
+- Refactoring Strategy Analysis ----------------------+
| Refactoring Commits: 234 (6% of total) |
| |
| Successful Refactoring Patterns: |
| |
| Pattern A: Extract & Test |
| * Extract component -> Write tests -> Refactor -> Verify|
| * Success Rate: 98% |
| * Average PR size: 89 lines changed |
| |
| Pattern B: Deprecate -> Migrate -> Remove |
| * Mark old API deprecated |
| * Add new API alongside |
| * Migrate internally |
| * Remove after 2+ versions |
| * Success Rate: 100% (no breaking changes) |
| |
| Pattern C: Incremental Type Addition |
| * Add types to new code |
| * Gradually add to existing code |
| * Use Any temporarily if needed |
| * Success Rate: 94% |
| |
| Failed Refactoring Attempts: |
| * Big-bang rewrites (2 attempts, both failed) |
| * Premature optimization (4 reverted commits) |
| * Refactoring without tests (3 bugs introduced) |
| |
| Application to Current Project: |
| 1. Refactor agents one at a time |
| 2. Always add tests before refactoring |
| 3. Use deprecation warnings for breaking changes |
| 4. Keep refactoring PRs small (<200 lines) |
| 5. Profile before performance refactoring |
+-------------------------------------------------------+
+- Actionable Improvements for Current Project --------+
| IMMEDIATE ACTIONS (This Week): |
| |
| 1. Add Conventional Commit Format |
| Command: Configure Git hooks |
| Impact: Better changelog generation |
| Effort: 30 minutes |
| Implementation: /dev:auto "add conventional commit hooks" |
| |
| 2. Implement Pre-Commit Type Checking |
| Command: Add mypy to pre-commit |
| Impact: Catch type errors before commit |
| Effort: 1 hour |
| Implementation: /dev:auto "add mypy pre-commit hook" |
| |
| 3. Add Test Coverage Reporting |
| Command: Integrate coverage.py |
| Impact: Visibility into test gaps |
| Effort: 45 minutes |
| Implementation: /dev:auto "add test coverage reporting" |
| |
| SHORT-TERM ACTIONS (This Month): |
| |
| 4. Implement Automated Dependency Updates |
| Tool: Dependabot or Renovate |
| Impact: Stay current, avoid security issues |
| Effort: 2 hours |
| |
| 5. Add Property-Based Testing |
| Library: Hypothesis for Python |
| Impact: Discover edge case bugs |
| Effort: 4 hours |
| |
| 6. Create Visual Architecture Diagrams |
| Tool: Mermaid in markdown |
| Impact: Better understanding for contributors |
| Effort: 3 hours |
| |
| LONG-TERM ACTIONS (This Quarter): |
| |
| 7. Migrate to Async-First Architecture |
| Scope: Background-task-manager and orchestrator |
| Impact: Faster execution, better scalability |
| Effort: 2-3 weeks |
| |
| 8. Implement Comprehensive Integration Tests |
| Scope: All agent workflows end-to-end |
| Impact: Catch integration bugs early |
| Effort: 2 weeks |
| |
| 9. Add Performance Profiling & Monitoring |
| Tool: Built-in profiler + custom metrics |
| Impact: Identify and fix bottlenecks |
| Effort: 1 week |
+-------------------------------------------------------+
=======================================================
NEXT STEPS
=======================================================
Ready to Apply Learnings?
* Start with immediate actions (easiest wins)
* Use /dev:auto for implementation
* Track progress with /learn:analytics
Want More Historical Analysis?
* Analyze another repository for comparison
* Deep-dive into specific time periods
* Focus on particular contributors' patterns
=======================================================
Analysis Time: 4.5 minutes
Commits Analyzed: 3,892
Patterns Extracted: 12 major patterns
Actionable Improvements: 9 recommendations
Historical patterns stored in learning database.
```
## Integration with Learning System
Stores historical patterns for future reference:
```json
{
"history_learning_patterns": {
"source_repo": "fastapi/fastapi",
"patterns_extracted": {
"bug_fix_strategies": 4,
"refactoring_approaches": 3,
"testing_evolution": 3,
"documentation_improvements": 4
},
"applied_to_current_project": true,
"effectiveness_tracking": true,
"reuse_count": 1
}
}
```
## Agent Delegation
- **orchestrator**: Coordinates analysis
- **code-analyzer**: Analyzes code changes over time
- **pattern-learning**: Extracts and stores patterns
- **quality-controller**: Evaluates quality improvements
## Use Cases
### Learning Debug Patterns
```bash
/learn:history https://github.com/user/repo --focus bug-fixes
```
### Understanding Quality Evolution
```bash
/learn:history https://github.com/user/repo --focus quality-improvements
```
### Studying Refactoring Success
```bash
/learn:history https://github.com/user/repo --focus refactoring
```
---
**Version**: 1.0.0
**Integration**: Full pattern learning integration
**Platform**: Cross-platform
**Scope**: Learn from repository evolution to improve current project

commands/learn/init.md Normal file

@@ -0,0 +1,35 @@
---
name: learn:init
description: Initialize pattern learning database
---
EXECUTE THESE BASH COMMANDS DIRECTLY (no agents, no skills):
First, find the plugin installation path:
```bash
PLUGIN_PATH=$(find ~/.claude -name "exec_plugin_script.py" 2>/dev/null | head -1 | sed 's|/lib/exec_plugin_script.py||')
echo "Plugin found at: $PLUGIN_PATH"
```
Step 1 - Check status in current project directory:
```bash
python3 "$PLUGIN_PATH/lib/exec_plugin_script.py" pattern_storage.py --dir ./.claude-patterns check
```
Step 2 - Initialize if needed:
```bash
python3 "$PLUGIN_PATH/lib/exec_plugin_script.py" pattern_storage.py --dir ./.claude-patterns init --version 7.6.9
```
Step 3 - Validate:
```bash
python3 "$PLUGIN_PATH/lib/exec_plugin_script.py" pattern_storage.py --dir ./.claude-patterns validate
```
Step 4 - Verify patterns stored in current project:
```bash
ls -la ./.claude-patterns/ 2>/dev/null || echo "Pattern directory not found in current project"
```
Report results with simple text (no markdown formatting, no boxes).
The pattern database will be stored in your current project directory at `./.claude-patterns/`.

commands/learn/performance.md Normal file

@@ -0,0 +1,158 @@
---
name: learn:performance
description: Display performance analytics dashboard with metrics, trends, and optimization recommendations
delegates-to: autonomous-agent:orchestrator
---
# Performance Report Command
Generate a comprehensive performance analytics report showing learning effectiveness, skill and agent performance trends, quality improvements, and optimization recommendations.
## How It Works
1. **Data Collection**: Reads pattern database, quality history, and task queue
2. **Metrics Calculation**: Computes learning effectiveness, trend analysis, success rates
3. **Insight Generation**: Identifies patterns, correlations, and improvement opportunities
4. **Visualization**: Creates ASCII charts showing performance over time
5. **Recommendations**: Provides actionable optimization suggestions
6. **Report Generation**: Outputs comprehensive analytics report
**IMPORTANT**: When delegating this command to the orchestrator agent, the agent MUST present the complete performance report with charts, metrics, and prioritized recommendations. This command is specifically designed to show comprehensive results to the user. Silent completion is not acceptable.
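For intuition, the metrics-calculation step could be approximated as follows (the file name and record fields are assumptions for illustration, not the plugin's actual schema):
```python
import json
from pathlib import Path

def summarize(patterns_file: str = ".claude-patterns/patterns.json") -> dict:
    """Derive success rate and a simple quality trend from stored patterns."""
    patterns = json.loads(Path(patterns_file).read_text())
    scores = [p["quality_score"] for p in patterns]
    half = len(scores) // 2
    early, recent = scores[:half] or scores, scores[half:] or scores
    return {
        "total_patterns": len(patterns),
        "success_rate": sum(1 for p in patterns if p.get("success")) / len(patterns),
        "avg_quality": sum(scores) / len(scores),
        # Positive delta means recent work scores higher than early work
        "trend_delta": sum(recent) / len(recent) - sum(early) / len(early),
    }
```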
## Usage
```bash
/learn:performance
```
## What You'll Get
### Learning Effectiveness Analysis
- Pattern database growth rate and diversity
- Knowledge coverage across task types
- Pattern reuse rates and success correlation
- Time to competency for different task types
- Overall learning velocity metrics
### Skill Performance Dashboard
- Success rate per skill over time
- Quality score correlation with skill usage
- Top skill combinations and their effectiveness
- Skill loading efficiency metrics
- Recommendation accuracy analysis
### Agent Performance Summary
- Delegation success rates per agent
- Average quality scores achieved
- Task completion time analysis
- Agent specialization effectiveness
- Background task performance
### Quality Trend Visualization
- Quality score trends over time (ASCII charts)
- Improvement rate calculations
- Baseline vs. current comparison
- Threshold compliance tracking
- Consistency analysis (variance)
### Optimization Recommendations
- Top 5 actionable recommendations prioritized by impact
- Pattern-based insights (which patterns work best)
- Quality-based insights (when to run quality checks)
- Agent-based insights (optimal delegation strategies)
- Efficiency improvements (parallelization opportunities)
## Example Output
The orchestrator MUST present the full performance report. The example output in this file demonstrates the EXACT format expected. Do NOT summarize - show the complete report:
```
=======================================================
PERFORMANCE ANALYTICS REPORT
=======================================================
Generated: 2025-10-21 11:30:00
+- Executive Summary ----------------------------------+
| Learning Status: [PASS] Active and highly effective |
| Total Patterns: 47 patterns across 8 task types |
| Quality Trend: ^ +18% improvement (30 days) |
| Pattern Reuse: 67% reuse rate (excellent) |
+------------------------------------------------------+
+- Learning Effectiveness -----------------------------+
| Knowledge Growth: 3.2 patterns/week |
| Coverage: 8/10 common task types |
| Improvement Rate: +1.2 quality points/week |
| Time to Competency: ~5 similar tasks |
+------------------------------------------------------+
+- Skill Performance ----------------------------------+
| pattern-learning ████████████ 92% (12) |
| quality-standards ███████████░ 88% (15) |
| code-analysis ██████████░░ 85% (8) |
| documentation-practices ████████░░░░ 78% (6) |
| testing-strategies ███████░░░░░ 72% (5) |
| |
| Top Combination: pattern-learning + quality -> 94/100|
+------------------------------------------------------+
+- Quality Trends (30 Days) ---------------------------+
| 100 | [X] |
| 90 | [X]--[X]--[X] [X]--[X]-+ |
| 80 | [X]--+ ++ |
| 70 |[X]---+ | (threshold) |
| 60 | |
| +------------------------------------ |
| Week 1 Week 2 Week 3 Week 4 |
| |
| [PASS] Quality improved 23% from baseline (65 -> 92) |
| [PASS] Consistently above threshold for 3 weeks |
| [PASS] 15% improvement after learning 10+ patterns |
+------------------------------------------------------+
+- Top Recommendations --------------------------------+
| 1. [HIGH] Use pattern-learning skill more often |
| -> +12 points avg quality improvement |
| -> 95% success rate (highest) |
| |
| 2. [HIGH] Run quality-controller before completion |
| -> +13 points with quality check vs without |
| -> 88% auto-fix success rate |
| |
| 3. [MED] Delegate testing to test-engineer |
| -> 91% success vs 76% manual |
| -> 35% time savings |
| |
| 4. [MED] Combine pattern-learning + quality skills |
| -> Best combination: 94/100 avg quality |
| |
| 5. [LOW] Archive patterns with reuse_count = 0 |
| -> Free up 15% storage, improve query speed |
+------------------------------------------------------+
=======================================================
CONCLUSION: Learning system performing excellently
Continue current approach, implement recommendations
=======================================================
```
## Use Cases
1. **Monitor Learning Progress**: Track how the system improves over time
2. **Identify Optimization Opportunities**: Find which skills/agents to use more/less
3. **Validate Learning Effectiveness**: Prove the autonomous system is working
4. **Troubleshoot Issues**: Understand why quality might be declining
5. **Demonstrate ROI**: Show concrete improvements from the learning system
## Report Frequency
- **Weekly**: Review learning progress and trends
- **Monthly**: Comprehensive analysis and strategy adjustment
- **On-Demand**: When investigating specific performance questions
- **Automated**: After every 10 tasks (orchestrator integration)
## See Also
- `/auto-analyze` - Autonomous project analysis
- `/quality-check` - Comprehensive quality control
- `/learn:init` - Initialize pattern learning

commands/learn/predict.md Normal file

@@ -0,0 +1,392 @@
---
name: learn:predict
description: Generate ML-powered predictive insights and optimization recommendations from patterns
delegates-to: autonomous-agent:orchestrator
---
# Predictive Analytics Command
Generate advanced predictive insights, optimization recommendations, and trend analysis using machine learning-inspired algorithms that learn from historical patterns to continuously improve prediction accuracy.
## Usage
```bash
/learn:predict [OPTIONS]
```
**Examples**:
```bash
/learn:predict # Comprehensive predictive analytics report
/learn:predict --action quality-trend # Predict quality trends for next 7 days
/learn:predict --action optimal-skills # Recommend optimal skills for task
/learn:predict --action learning-velocity # Predict learning acceleration
/learn:predict --action opportunities # Identify optimization opportunities
/learn:predict --action accuracy # Check prediction accuracy metrics
```
## Advanced Analytics Features
### 🎯 **Quality Trend Prediction**
**Predicts future quality scores** with confidence intervals:
**Features**:
- **Linear regression analysis** on historical quality data
- **7-day ahead predictions** with trend direction
- **Confidence scoring** based on data consistency
- **Trend analysis** (improving/stable/declining)
- **Automated recommendations** based on predictions
**Use Cases**:
- Forecast quality targets for sprints
- Identify when quality interventions are needed
- Plan quality improvement initiatives
- Track effectiveness of quality initiatives
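A minimal sketch of this linear-regression prediction, using only the Python standard library (the input is a list of historical quality scores; at least two points are needed):
```python
def predict_quality(history: list[float], days_ahead: int = 7) -> list[float]:
    """Fit y = a + b*x to the history and extrapolate days_ahead points."""
    n = len(history)
    x_mean = (n - 1) / 2
    y_mean = sum(history) / n
    slope = (
        sum((x - x_mean) * (y - y_mean) for x, y in enumerate(history))
        / sum((x - x_mean) ** 2 for x in range(n))
    )
    intercept = y_mean - slope * x_mean
    return [round(intercept + slope * (n - 1 + d), 1) for d in range(1, days_ahead + 1)]

# Rising trend in the history yields rising predictions
print(predict_quality([80.0, 82.5, 84.0, 86.1, 87.5]))  # [89.6, 91.5, 93.3, ...]
```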
### 🧠 **Optimal Skills Prediction**
**Recommends best skills for specific tasks** using historical performance:
**Features**:
- **Performance-based ranking** by success rate and quality impact
- **Context-aware recommendations** for task types
- **Confidence scoring** for each skill recommendation
- **Recent usage weighting** for current effectiveness
- **Multi-skill combinations** optimization
**Use Cases**:
- Optimize skill selection for new tasks
- Identify underutilized effective skills
- Plan skill development priorities
- Improve task delegation strategy
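A rough sketch of performance-based ranking, weighting overall and recent success rates (the 70/30 weights and the data shape are assumptions for illustration):
```python
def rank_skills(stats: dict[str, dict], top_k: int = 3) -> list[tuple[str, float]]:
    """Rank skills by a blend of overall and recent success rate."""
    scored = {
        name: 0.7 * s["success_rate"] + 0.3 * s["recent_success_rate"]
        for name, s in stats.items()
    }
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

print(rank_skills({
    "code-analysis":      {"success_rate": 0.892, "recent_success_rate": 0.95},
    "quality-standards":  {"success_rate": 0.921, "recent_success_rate": 0.90},
    "testing-strategies": {"success_rate": 0.895, "recent_success_rate": 0.82},
}))
```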
### 📈 **Learning Velocity Prediction**
**Predicts learning acceleration** and skill acquisition rate:
**Features**:
- **Exponential learning curve** modeling
- **14-day ahead learning velocity forecasts**
- **Success rate progression** prediction
- **Skills-per-task evolution** tracking
- **Learning acceleration factor** calculation
**Use Cases**:
- Forecast team learning milestones
- Plan training and development schedules
- Identify learning plateaus early
- Optimize learning resource allocation
### 🔍 **Optimization Opportunities**
**Identifies improvement areas** using pattern analysis:
**Features**:
- **Task type performance** gap analysis
- **Underutilized effective skills** detection
- **Agent performance** bottleneck identification
- **Priority-based** opportunity ranking
- **Impact estimation** for improvements
**Use Cases**:
- Prioritize optimization initiatives
- Focus improvement efforts effectively
- Maximize ROI on optimization investments
- Address performance bottlenecks systematically
### 📊 **Comprehensive Analytics Report**
**Complete predictive analytics** with executive summary:
**Features**:
- **All prediction types** in one report
- **Executive summary** for stakeholders
- **Action items** and recommendations
- **Predicted outcomes** with confidence scores
- **Historical accuracy** metrics
**Use Cases**:
- Executive reporting and planning
- Team performance reviews
- Strategic decision making
- Investment justification for improvements
## Command Options
### Prediction Actions
```bash
--action quality-trend # Predict quality trends (default: 7 days)
--action optimal-skills # Recommend optimal skills (default: 3 skills)
--action learning-velocity # Predict learning acceleration (default: 14 days)
--action opportunities # Identify optimization opportunities
--action accuracy # Check prediction accuracy metrics
--action comprehensive # Generate complete report (default)
```
### Parameters
```bash
--days <number> # Prediction horizon in days (default: 7)
--task-type <type> # Task type for skill prediction (default: general)
--top-k <number> # Number of top skills to recommend (default: 3)
--dir <directory> # Custom patterns directory (default: .claude-patterns)
```
## Output Examples
### Quality Trend Prediction
```json
{
"prediction_type": "quality_trend",
"days_ahead": 7,
"predictions": [
{
"day": 1,
"predicted_quality": 87.5,
"trend_direction": "improving"
}
],
"confidence_score": 85.2,
"recommendations": [
"📈 Strong positive trend detected - maintain current approach"
]
}
```
### Optimal Skills Prediction
```json
{
"prediction_type": "optimal_skills",
"task_type": "refactoring",
"recommended_skills": [
{
"skill": "code-analysis",
"confidence": 92.5,
"success_rate": 89.2,
"recommendation_reason": "High success rate | Strong quality impact"
}
],
"prediction_confidence": 88.7
}
```
### Learning Velocity Prediction
```json
{
"prediction_type": "learning_velocity",
"days_ahead": 14,
"current_velocity": {
"avg_quality": 78.3,
"success_rate": 0.8247
},
"predictions": [
{
"day": 7,
"predicted_quality": 85.9,
"learning_acceleration": 1.02
}
],
"learning_acceleration_factor": "2% daily improvement"
}
```
## Key Innovation: Learning from Predictions
### Prediction Accuracy Tracking
- **Automatically learns** from prediction vs actual outcomes
- **Improves models** based on historical accuracy
- **Adjusts confidence thresholds** dynamically
- **Tracks prediction patterns** over time
### Continuous Model Improvement
- **Accuracy metrics** stored and analyzed
- **Model adjustments** based on performance
- **Feature importance** evolves with usage
- **Prediction confidence** self-calibrates
### Smart Learning Integration
- **Every prediction** contributes to learning database
- **Cross-prediction** insights improve overall accuracy
- **Pattern recognition** enhances predictive capabilities
- **Feedback loops** continuously improve performance
## Integration with Automatic Learning
### Data Sources
The predictive analytics engine integrates with all learning system components:
```
Enhanced Patterns Database (.claude-patterns/enhanced_patterns.json)
+-- Historical task outcomes
+-- Skill performance metrics
+-- Agent effectiveness data
+-- Quality score evolution
Predictions Database (.claude-patterns/predictions.json)
+-- Quality trend predictions
+-- Skill recommendation accuracy
+-- Learning velocity forecasts
+-- Optimization outcomes
Insights Database (.claude-patterns/insights.json)
+-- Optimization opportunities
+-- Performance bottlenecks
+-- Improvement recommendations
+-- Strategic insights
```
### Learning Feedback Loop
1. **Make predictions** based on historical patterns
2. **Execute tasks** using predictions
3. **Compare actual outcomes** with predictions
4. **Update models** based on accuracy
5. **Improve future predictions** continuously
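Step 4 could be approximated with a running accuracy estimate, e.g. an exponential moving average over per-prediction accuracy (the field names and averaging scheme are illustrative, not the plugin's actual model):
```python
def update_accuracy(prediction: float, actual: float, running_accuracy: float,
                    alpha: float = 0.1) -> float:
    """Fold one prediction-vs-actual comparison into a running accuracy score."""
    # Percent accuracy of this single prediction on a 0-100 quality scale
    point_accuracy = max(0.0, 100.0 - abs(prediction - actual))
    # Exponential moving average so recent predictions weigh more
    return (1 - alpha) * running_accuracy + alpha * point_accuracy

acc = update_accuracy(prediction=87.5, actual=89.0, running_accuracy=85.0)
print(round(acc, 1))  # 86.4
```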
## Advanced Usage Scenarios
### Scenario 1: Sprint Planning
```bash
# Predict quality for upcoming sprint
/learn:predict --action quality-trend --days 14
# Identify optimization opportunities for sprint
/learn:predict --action opportunities
# Get comprehensive report for planning
/learn:predict --action comprehensive
```
### Scenario 2: Team Performance Analysis
```bash
# Analyze team learning velocity
/learn:predict --action learning-velocity
# Check prediction accuracy to build confidence
/learn:predict --action accuracy
# Identify skill gaps and opportunities
/learn:predict --action optimal-skills --task-type code-review
```
### Scenario 3: Continuous Improvement
```bash
# Weekly optimization review
/learn:predict --action opportunities
# Quality trend monitoring
/learn:predict --action quality-trend --days 7
# Skill optimization recommendations
/learn:predict --action optimal-skills --top-k 5
```
## Performance Metrics
### Prediction Accuracy (v3.2.0)
- **Quality Trends**: 85-90% accuracy with sufficient data
- **Skill Recommendations**: 88-92% relevance score
- **Learning Velocity**: 80-85% accuracy for 7-14 day predictions
- **Optimization Opportunities**: 90%+ actionable insights
### Resource Usage
| Component | CPU | Memory | Storage |
|-----------|-----|--------|---------|
| Prediction Engine | <2% | ~100MB | ~5MB (prediction history) |
| Data Analysis | <1% | ~50MB | Minimal (reads existing data) |
| Report Generation | <1% | ~30MB | None |
### Response Times
| Action | Average | Max | Data Required |
|--------|---------|-----|-------------|
| Quality Trend | 50-100ms | 200ms | 5+ historical data points |
| Optimal Skills | 30-80ms | 150ms | 3+ skill usage instances |
| Learning Velocity | 40-120ms | 250ms | 7+ days of activity |
| Opportunities | 100-200ms | 400ms | 10+ task patterns |
| Comprehensive | 200-500ms | 1s | All data sources |
## Troubleshooting
### Issue: "insufficient_data" Error
```bash
# Check available learning data
ls -la .claude-patterns/
# Initialize learning system if needed
/learn:init
# Run some tasks to generate data
/auto-analyze
/quality-check
```
### Issue: Low Confidence Scores
```bash
# Generate more historical data for better predictions
/auto-analyze
/pr-review
/static-analysis
# Wait for more data points (minimum 5-10 needed)
/learn:predict --action accuracy
```
### Issue: Slow Performance
```bash
# Use specific action instead of comprehensive report
/learn:predict --action quality-trend
# Reduce prediction horizon for faster results
/learn:predict --action quality-trend --days 3
```
## API Usage (Programmatic Access)
### Python Example
```python
import requests

# Assumes the plugin exposes its analytics over a local HTTP endpoint;
# the host, port, and route here are illustrative, not a documented API.
response = requests.post(
    "http://localhost:8080/predictive-analytics",
    json={"action": "comprehensive"},
)
analytics = response.json()
print("Quality Trend:", analytics["quality_trend_prediction"])
print("Top Skills:", analytics["optimal_skills_prediction"])
print("Learning Velocity:", analytics["learning_velocity_prediction"])
```
### JavaScript Example
```javascript
// Get optimization opportunities (same illustrative endpoint as above)
fetch('http://localhost:8080/predictive-analytics', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ action: 'opportunities' })
})
  .then(response => response.json())
  .then(data => {
    console.log('Opportunities:', data.optimization_opportunities.opportunities);
  });
```
## Best Practices
1. **Regular Usage**: Run analytics weekly for best insights
2. **Data Collection**: Ensure sufficient historical data (10+ tasks minimum)
3. **Action-Oriented**: Focus on implementing recommended optimizations
4. **Track Progress**: Monitor prediction accuracy over time
5. **Team Integration**: Share insights with team for collective improvement
## Future Enhancements
**Planned Features** (v3.3+):
- **Time Series Prediction**: Advanced ARIMA and Prophet models
- **Anomaly Detection**: Identify unusual patterns automatically
- **Cross-Project Learning**: Transfer predictions between projects
- **Real-Time Predictions**: Live prediction updates during tasks
- **Custom Models**: User-trained prediction models
- **Integration Alerts**: Automatic notifications for predicted issues
---
This predictive analytics system provides advanced insights that help optimize performance, predict future trends, and identify improvement opportunities - all while continuously learning from every prediction to become smarter over time.