Initial commit

commands/monitor/dashboard.md (new file, 761 lines)

---
name: monitor:dashboard
description: Launch real-time monitoring dashboard for autonomous agent system metrics and learning analytics
delegates-to: autonomous-agent:orchestrator
model: inherit
tools: Read,Write,Edit,Bash,Grep,Glob
---

# Monitor Dashboard Command

## Command: `/monitor:dashboard`

**Simple launcher** that starts the monitoring dashboard in the background and opens the web browser.

## How It Works

1. **Always Fresh**: Copies the latest dashboard from the plugin installation every time
2. **No Version Checks**: A simple direct copy ensures you always run the newest version
3. **Silent Launch**: Executes dashboard.py with minimal console output
4. **Background Process**: Starts the Flask server in the background without blocking
5. **Auto Browser**: Opens the default browser to the dashboard URL automatically

**NEW v7.5.1**: Always copies the latest dashboard from the plugin to ensure a unified version with all 5 tabs.

**CRITICAL**: This command executes with minimal console reporting. The dashboard interface shows all metrics.

## Usage

### Basic Usage
```bash
# Launch monitoring dashboard in background (default)
/monitor:dashboard

# Launch with custom port
/monitor:dashboard --port 8080

# Launch with external access (trusted networks only)
/monitor:dashboard --host 0.0.0.0

# Launch with custom data directory
/monitor:dashboard --data-dir /path/to/patterns

# Check if dashboard is running
/monitor:dashboard --status

# Stop running dashboard
/monitor:dashboard --stop
```

### Advanced Options
```bash
# Launch with debug mode (foreground for debugging)
/monitor:dashboard --debug

# Launch with custom refresh rate
/monitor:dashboard --refresh-rate 30

# Generate report without launching server
/monitor:dashboard --report-only

# Force restart if already running
/monitor:dashboard --restart
```

**Expected Performance**: The command completes in 1-2 seconds with the dashboard running in the background and the browser opened automatically. Console output is minimal.

## Command Behavior and Implementation

### Direct Execution (No Agent Delegation)

**CRITICAL**: This command executes the dashboard directly, without agent delegation, to prevent duplicate launches.

1. **Direct Python Call**: `python <plugin_path>/lib/dashboard.py` (no agents involved)
2. **Background Process**: Runs the Flask server entirely in the background
3. **Auto Browser**: Opens the browser automatically (once only)
4. **Silent Operation**: No console reporting beyond brief startup messages
5. **Web Interface**: All metrics are available through the dashboard only

### Implementation

**Bash-First with Python Fallback (Most Reliable)**:
```bash
# Try bash approach first (Unix-like systems)
if command -v bash >/dev/null 2>&1; then
    bash -c '
    # Step 1: Discover plugin installation
    if command -v find >/dev/null 2>&1; then
        PLUGIN_DIR=$(find ~/.claude/plugins/marketplaces/LLM-Autonomous-Agent-Plugin-for-Claude ~/.config/claude/plugins/marketplaces/LLM-Autonomous-Agent-Plugin-for-Claude 2>/dev/null | head -1)
    elif command -v where >/dev/null 2>&1; then
        PLUGIN_DIR=$(find /c/Users/*/.claude/plugins/marketplaces/LLM-Autonomous-Agent-Plugin-for-Claude 2>/dev/null | head -1)
    fi

    if [ -z "$PLUGIN_DIR" ] || [ ! -f "$PLUGIN_DIR/lib/dashboard.py" ]; then
        echo "ERROR: Plugin installation not found"
        exit 1
    fi

    # Step 2: Always copy latest version from plugin (ensures all fixes are applied)
    echo "Copying latest dashboard from plugin..."
    mkdir -p .claude-patterns
    cp "$PLUGIN_DIR/lib/dashboard.py" ".claude-patterns/dashboard.py"
    echo "[OK] Dashboard ready with all JavaScript fixes"

    # Step 3: Check if dashboard already running
    if curl -s http://127.0.0.1:5000/api/overview >/dev/null 2>&1; then
        echo "Dashboard is already running at: http://127.0.0.1:5000"
        echo "Open http://127.0.0.1:5000 in your browser."
        exit 0
    fi

    # Step 4: Start dashboard in background
    echo "Starting dashboard server..."
    python .claude-patterns/dashboard.py --patterns-dir .claude-patterns >/dev/null 2>&1 &
    DASHBOARD_PID=$!
    echo "Dashboard started successfully (PID: $DASHBOARD_PID)"
    echo "Dashboard URL: http://127.0.0.1:5000"

    # Step 5: Wait for server and validate
    sleep 3

    # Quick validation check (optional but recommended)
    if [ -f "$PLUGIN_DIR/lib/web_page_validator.py" ]; then
        echo "[INFO] Running automatic validation..."
        if python "$PLUGIN_DIR/lib/web_page_validator.py" "http://127.0.0.1:5000" --timeout 10 >/dev/null 2>&1; then
            echo "[OK] Dashboard validation passed - no JavaScript errors detected"
        else
            echo "[WARN] Dashboard validation detected issues"
            echo "[INFO] Run /validate:web http://127.0.0.1:5000 for details"
        fi
    fi

    # Step 6: Open browser
    echo "Opening browser automatically..."
    if command -v xdg-open >/dev/null 2>&1; then
        xdg-open "http://127.0.0.1:5000" >/dev/null 2>&1
    elif command -v open >/dev/null 2>&1; then
        open "http://127.0.0.1:5000"
    elif command -v start >/dev/null 2>&1; then
        start "http://127.0.0.1:5000"
    fi
    ' "$@"
    BASH_SUCCESS=$?

    # If bash succeeded, exit
    if [ $BASH_SUCCESS -eq 0 ]; then
        exit 0
    fi

    echo "Bash approach failed, falling back to Python..."
fi

# Python fallback (cross-platform reliable)
python -c "
import os
import sys
import shutil
import subprocess
import webbrowser
import time
from pathlib import Path

def launch_dashboard():
    '''Launch dashboard - always uses the latest plugin version with all fixes.'''

    # Step 1: Discover plugin installation
    plugin_paths = [
        Path.home() / '.claude/plugins/marketplaces/LLM-Autonomous-Agent-Plugin-for-Claude',
        Path.home() / '.config/claude/plugins/marketplaces/LLM-Autonomous-Agent-Plugin-for-Claude',
        Path.home() / '.claude/plugins/autonomous-agent',
    ]

    if os.name == 'nt':
        plugin_paths.extend([
            Path(os.environ.get('APPDATA', '')) / 'Claude/plugins/marketplaces/LLM-Autonomous-Agent-Plugin-for-Claude',
            Path(os.environ.get('LOCALAPPDATA', '')) / 'Claude/plugins/marketplaces/LLM-Autonomous-Agent-Plugin-for-Claude',
        ])

    plugin_dashboard = None
    for plugin_path in plugin_paths:
        potential_dashboard = plugin_path / 'lib/dashboard.py'
        if potential_dashboard.exists():
            plugin_dashboard = potential_dashboard
            break

    if not plugin_dashboard:
        print('ERROR: Plugin installation not found')
        print('  Searched paths:', [str(p) for p in plugin_paths])
        return False

    # Step 2: Always copy latest version from plugin (ensures all fixes are applied)
    local_dashboard = Path('.claude-patterns/dashboard.py')

    try:
        print('Copying latest dashboard from plugin...')
        Path('.claude-patterns').mkdir(exist_ok=True)
        shutil.copy2(plugin_dashboard, local_dashboard)
        print('[OK] Dashboard ready with all JavaScript fixes')
    except Exception as e:
        print(f'ERROR: Failed to copy dashboard: {e}')
        return False

    # Step 3: Start dashboard from local copy
    print('Starting dashboard server...')
    return start_dashboard(str(local_dashboard), '.claude-patterns', plugin_dashboard)

def start_dashboard(dashboard_path: str, patterns_dir: str, plugin_dashboard=None) -> bool:
    '''Start the dashboard server.'''
    try:
        # Check if port 5000 is available
        import socket
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        result = sock.connect_ex(('127.0.0.1', 5000))
        sock.close()

        if result == 0:
            print('Dashboard is already running at: http://127.0.0.1:5000')
            print('Opening browser...')
            webbrowser.open('http://127.0.0.1:5000')
            return True

        # Start dashboard in background
        cmd = [sys.executable, dashboard_path, '--patterns-dir', patterns_dir]
        if len(sys.argv) > 1:
            cmd.extend(sys.argv[1:])

        # Platform-specific background process creation
        if os.name == 'nt':
            # Windows: Use CREATE_NO_WINDOW to run silently
            process = subprocess.Popen(
                cmd,
                stdout=subprocess.DEVNULL,
                stderr=subprocess.DEVNULL,
                creationflags=subprocess.CREATE_NO_WINDOW if hasattr(subprocess, 'CREATE_NO_WINDOW') else 0
            )
        else:
            # Unix-like: Standard background process
            process = subprocess.Popen(
                cmd,
                stdout=subprocess.DEVNULL,
                stderr=subprocess.DEVNULL
            )

        print(f'Dashboard started successfully (PID: {process.pid})')
        print('Dashboard URL: http://127.0.0.1:5000')

        # Wait for server to be ready
        time.sleep(3)

        # Auto-validate dashboard for JavaScript errors (optional but recommended)
        try:
            import urllib.request
            # Quick connectivity check
            urllib.request.urlopen('http://127.0.0.1:5000', timeout=2)
            print('[OK] Dashboard server is responding')

            # Try to run validation if available
            try:
                if plugin_dashboard is not None:
                    validator_path = Path(plugin_dashboard).parent / 'web_page_validator.py'
                    if validator_path.exists():
                        print('[INFO] Running automatic validation...')
                        validation_result = subprocess.run(
                            [sys.executable, str(validator_path), 'http://127.0.0.1:5000', '--timeout', '10'],
                            capture_output=True,
                            text=True,
                            timeout=15
                        )
                        if validation_result.returncode == 0:
                            print('[OK] Dashboard validation passed - no JavaScript errors detected')
                        else:
                            print('[WARN] Dashboard validation detected issues')
                            print('[INFO] Run /validate:web http://127.0.0.1:5000 for details')
            except Exception:
                pass  # Validation is optional, don't fail if unavailable

        except Exception:
            pass  # Server will still open in browser

        print('Opening browser automatically...')
        webbrowser.open('http://127.0.0.1:5000')

        return True

    except Exception as e:
        print(f'Error starting dashboard: {e}')
        return False

if __name__ == '__main__':
    launch_dashboard()
" "$@"

# Benefits of this approach:
# 1. ✅ BASH-FIRST - Uses efficient bash commands when available
# 2. ✅ PYTHON FALLBACK - Reliable cross-platform compatibility when bash fails
# 3. ✅ FAST PERFORMANCE - local copy avoids plugin discovery overhead
# 4. ✅ RELIABLE - works even if plugin installation changes
# 5. ✅ SELF-CONTAINED - each project has its own dashboard instance
# 6. ✅ ERROR HANDLING - graceful fallbacks and informative error messages
# 7. ✅ OFFLINE CAPABLE - works completely without plugin after initial setup
# 8. ✅ EASY DEBUGGING - local copy can be modified and tested

# For custom arguments:
# python .claude-patterns/dashboard.py --port 8080 --host 0.0.0.0
```

**Platform Support**:
- **Windows**: ✅ Native support (cmd.exe, PowerShell, Git Bash)
- **Linux**: ✅ Native support (bash, sh)
- **macOS**: ✅ Native support (bash, zsh)
- **All Installation Methods**: ✅ Marketplace, development, system-wide

**How It Works**:
1. **Built-in Discovery**: The Python script automatically searches for the plugin installation across all standard locations
2. **Marketplace Priority**: Prioritizes marketplace installations over development/local installations
3. **Platform-Aware**: Uses OS-specific environment variables and paths (Windows: APPDATA/LOCALAPPDATA, Linux/macOS: standard directories)
4. **Fallback Support**: Falls back to development mode if no marketplace installation is found
5. **Current Directory Access**: Preserves the current working directory for pattern data access (.claude-patterns/)

**Discovery Order**:
1. **Marketplace Installations** (Primary):
   - Windows: `%USERPROFILE%\.claude\plugins\marketplaces\LLM-Autonomous-Agent-Plugin-for-Claude\`
   - macOS/Linux: `~/.claude/plugins/marketplaces/LLM-Autonomous-Agent-Plugin-for-Claude/`
   - Alternative paths in `.config/claude/plugins/`

2. **Platform-Specific Paths**:
   - Windows: `APPDATA\Claude\plugins\marketplaces\`, `LOCALAPPDATA\Claude\plugins\marketplaces\`
   - Linux/macOS: `/usr/local/share/claude/plugins/marketplaces/`, `/opt/claude/plugins/marketplaces/`

3. **Development/Local Installations** (Fallback):
   - `~/.claude/plugins/autonomous-agent/`
   - Current directory and parent directories (development mode)

**Data Access**: Pattern data always comes from the current project directory (`./.claude-patterns/`), while the dashboard script itself is copied from the plugin installation and run locally.

**Benefits**:
- ✅ Self-contained - no external launcher files needed
- ✅ Works from any user project directory
- ✅ Cross-platform compatible (Windows, Linux, macOS)
- ✅ Automatic plugin discovery - no hardcoded paths
- ✅ Marketplace installation priority
- ✅ Clear error messages with installation guidance

**Key Fix**: Eliminates path resolution issues by embedding discovery logic directly in the command rather than relying on external launcher files that don't exist in user project directories.

### Smart Browser Opening (Enhanced v1.0.2)

**Prevents Double Browser Opening**:
- **Existing Dashboard Detection**: Checks whether a dashboard is already running on ports 5000-5010
- **Browser Lock Mechanism**: Uses lock files to track browser opening per port
- **Single Browser Launch**: Opens the browser only once per dashboard instance
- **Multiple Instance Support**: Handles multiple dashboard instances gracefully

**Browser Opening Logic** (see the sketch after this list):
1. **Check existing dashboard**: Scans for a running dashboard on the specified port range
2. **Browser state tracking**: Uses temporary lock files to track browser opening state
3. **Smart opening**: Opens the browser only if it has not already been opened for that specific instance
4. **Automatic cleanup**: Removes lock files on dashboard shutdown

**Edge Cases Handled**:
- **Multiple dashboards**: Each port gets separate browser lock tracking
- **Dashboard restarts**: Lock files are properly cleaned up and recreated
- **Manual browser opening**: Respects existing browser state
- **Cross-platform**: Works on Windows, Linux, and macOS
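
The lock mechanism itself lives in dashboard.py and is not reproduced in this file; the following is a minimal sketch of how per-port browser locking could work. The lock file location, port range, and function names are illustrative assumptions, not the plugin's actual API.

```python
import os
import socket
import tempfile
import webbrowser
from pathlib import Path

LOCK_DIR = Path(tempfile.gettempdir())  # assumed location for per-port lock files


def find_running_dashboard(ports=range(5000, 5011)):
    """Return the first port in the range with a listening dashboard, else None."""
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            if sock.connect_ex(('127.0.0.1', port)) == 0:
                return port
    return None


def open_browser_once(port):
    """Open the browser only if no lock file exists for this port yet."""
    lock_file = LOCK_DIR / f'claude-dashboard-browser-{port}.lock'
    if lock_file.exists():
        return False  # browser already opened for this instance
    lock_file.write_text(str(os.getpid()))
    webbrowser.open(f'http://127.0.0.1:{port}')
    return True


def cleanup_browser_lock(port):
    """Remove the lock file on dashboard shutdown so a restart reopens the browser."""
    lock_file = LOCK_DIR / f'claude-dashboard-browser-{port}.lock'
    lock_file.unlink(missing_ok=True)
```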

### Expected Command Output

**Smart console output**: the command provides brief feedback based on dashboard state:

```
# New dashboard instance:
Starting Autonomous Agent Dashboard...
Dashboard URL: http://127.0.0.1:5000
Opening browser automatically...
Browser opened to http://127.0.0.1:5000

# Existing dashboard found:
Dashboard is already running at: http://127.0.0.1:5000
Browser already opened for this dashboard instance.

# Browser opened for existing instance:
Dashboard is already running at: http://127.0.0.1:5000
Browser opened to existing dashboard: http://127.0.0.1:5000
```

### Error Handling

If the dashboard fails to start, little is reported on the console. Check for these common issues:

- **Port conflicts**: Use a different port with `--port 8080`
- **Missing dependencies**: Install with `pip install flask flask-cors`
- **Directory issues**: Ensure the `.claude-patterns` directory exists

All further error handling and status information is available through the web dashboard interface.

### Process Management Commands

```bash
# Check dashboard status
/monitor:dashboard --status

# Stop dashboard
/monitor:dashboard --stop

# Restart dashboard
/monitor:dashboard --restart
```

*Process management can also be done through the dashboard web interface.*
## Dashboard Features

### 📊 Real-time Metrics
- **Learning Progress**: Pattern effectiveness and skill improvement over time
- **Quality Trends**: Code quality metrics and validation scores
- **Agent Performance**: Success rates and execution times for specialized agents
- **System Health**: Resource usage and operational status
- **Task Analytics**: Task completion rates and learning patterns

### 📈 Interactive Charts
- **Time Series Analysis**: Performance metrics over customizable time ranges
- **Comparative Analysis**: Side-by-side agent and skill performance
- **Trend Visualization**: Learning curves and improvement trajectories
- **Distribution Charts**: Success rates and outcome distributions
- **Resource Monitoring**: CPU, memory, and storage usage patterns

### 🎯 Performance Insights
- **Skill Effectiveness**: Which skills perform best for specific task types
- **Agent Specialization**: Performance comparison across different agents
- **Learning Patterns**: How the system improves over time
- **Quality Metrics**: Code quality trends and validation scores
- **Optimization Opportunities**: Areas for performance improvement

### 🔍 Advanced Analytics
- **Pattern Recognition**: Automatic identification of successful patterns
- **Predictive Insights**: Performance predictions based on historical data
- **Anomaly Detection**: Unusual performance patterns and potential issues
- **Correlation Analysis**: Relationships between different metrics
- **Recommendation Engine**: Actionable insights for system optimization

## System Requirements

### Dependencies
- **Python 3.8+**: Required for dashboard server
- **Flask**: Web framework for dashboard interface
- **Flask-CORS**: Cross-origin resource sharing
- **Standard libraries**: `json`, `statistics`, `threading`, `pathlib`

### Resource Requirements
- **Minimum**: 2GB RAM, 1 CPU core
- **Recommended**: 4GB+ RAM, 2+ CPU cores
- **Storage**: 100MB+ for pattern data
- **Network**: Optional, for external access and team sharing

## Dashboard Interface

### Main Navigation
```
🏠 Overview   -> System health and key metrics
📊 Analytics  -> Detailed performance analytics
🎯 Skills     -> Skill effectiveness analysis
🤖 Agents     -> Agent performance comparison
📚 Learning   -> Learning progress and patterns
🔧 System     -> Resource usage and health
⚙️ Settings   -> Configuration and preferences
```

### Key Dashboard Sections

#### Overview Panel
- **System Status**: Overall health and operational status
- **Active Metrics**: Current performance indicators
- **Quick Stats**: Success rates, quality scores, task counts
- **Recent Activity**: Latest task executions and outcomes
- **Alerts & Issues**: System notifications and warnings

#### Analytics Panel
- **Performance Trends**: Time-based performance analysis
- **Quality Metrics**: Code quality over time
- **Success Rates**: Task completion and success patterns
- **Learning Curves**: System improvement trajectories
- **Comparative Analysis**: Side-by-side performance comparisons

#### Skills Analysis
- **Skill Rankings**: Performance ranking of all skills
- **Usage Patterns**: How often and when skills are used
- **Effectiveness Metrics**: Success rates by skill type
- **Optimization Suggestions**: Areas for skill improvement
- **Skill Dependencies**: Relationships between skills

#### Agent Performance
- **Agent Comparison**: Performance across different agents
- **Specialization Analysis**: Which agents excel at specific tasks
- **Efficiency Metrics**: Time and resource usage by agent
- **Quality Scores**: Output quality by agent type
- **Delegation Success**: Rate of successful task delegation

#### Learning Progress
- **Pattern Recognition**: Discovered learning patterns
- **Knowledge Base**: Growing pattern database
- **Improvement Metrics**: Quantified learning progress
- **Adaptation Rate**: How quickly the system adapts
- **Cross-Project Learning**: Knowledge transfer between projects

## API Endpoints

The dashboard provides REST API endpoints for data access:

```bash
# Get overview data
GET /api/overview

# Get learning analytics
GET /api/analytics

# Get skill performance
GET /api/skills

# Get agent performance
GET /api/agents

# Get system health
GET /api/health

# Get historical data
GET /api/history?period=7d

# Get performance insights
GET /api/insights
```
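
Any HTTP client can hit these endpoints once the server is up; a minimal standard-library sketch is below. It assumes the default host and port and that the endpoints return JSON (as `/api/overview` does in the launcher's health check above); the exact response schema depends on the dashboard version.

```python
import json
import urllib.request

BASE_URL = 'http://127.0.0.1:5000'  # default host/port used by the launcher above


def get_endpoint(path):
    """Fetch a dashboard API endpoint and decode the JSON payload."""
    with urllib.request.urlopen(f'{BASE_URL}{path}', timeout=5) as response:
        return json.loads(response.read().decode('utf-8'))


if __name__ == '__main__':
    overview = get_endpoint('/api/overview')
    # Print the top-level keys; the schema is version-dependent.
    print(sorted(overview.keys()))
```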

## Configuration

### Environment Variables
```bash
# Dashboard configuration
DASHBOARD_HOST=127.0.0.1      # Default bind address
DASHBOARD_PORT=5000           # Default port
DASHBOARD_DEBUG=false         # Debug mode
DASHBOARD_CACHE_TTL=60        # Cache TTL in seconds
DASHBOARD_REFRESH_RATE=30     # Auto-refresh rate

# Data directory
PATTERNS_DIR=.claude-patterns # Pattern data directory
```

### Configuration File
```json
{
  "dashboard": {
    "host": "127.0.0.1",
    "port": 5000,
    "debug": false,
    "cache_ttl": 60,
    "refresh_rate": 30
  },
  "data": {
    "patterns_dir": ".claude-patterns",
    "retention_days": 30,
    "auto_cleanup": true
  },
  "features": {
    "real_time_updates": true,
    "export_reports": true,
    "email_alerts": false,
    "team_sharing": true
  }
}
```
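
How dashboard.py actually resolves these settings is not shown in this file; a plausible precedence is built-in defaults, then the JSON config file, then environment variables. The sketch below only illustrates that idea: the file name `dashboard-config.json` and the helper function are assumptions for illustration, not the plugin's documented behavior.

```python
import json
import os
from pathlib import Path

DEFAULTS = {'host': '127.0.0.1', 'port': 5000, 'debug': False,
            'cache_ttl': 60, 'refresh_rate': 30}


def load_dashboard_settings(config_path='dashboard-config.json'):
    """Merge built-in defaults, an optional JSON config file, and environment overrides."""
    settings = dict(DEFAULTS)

    # Config file (if present) overrides the defaults.
    path = Path(config_path)
    if path.exists():
        settings.update(json.loads(path.read_text()).get('dashboard', {}))

    # Environment variables override the config file.
    if 'DASHBOARD_HOST' in os.environ:
        settings['host'] = os.environ['DASHBOARD_HOST']
    if 'DASHBOARD_PORT' in os.environ:
        settings['port'] = int(os.environ['DASHBOARD_PORT'])

    return settings
```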

## Security Considerations

### Local Access Only (Default)
- Binds to localhost (127.0.0.1) for security
- Accessible only from the machine where it's running
- Recommended for most use cases

### Network Access (Use with Caution)
```bash
# Enable external access (trusted networks only)
/monitor:dashboard --host 0.0.0.0

# Add firewall rules for security
sudo ufw allow 5000   # Linux
# Configure firewall rules appropriately
```

### Data Privacy
- All data processing happens locally
- No external data transmission
- Pattern data stored locally
- User controls data retention

## Integration with Other Commands

### Pre-Dashboard Analysis
```bash
# Generate comprehensive data before dashboard
/analyze:project
/learn:analytics
/analyze:quality

# Then launch dashboard
/monitor:dashboard
```

### Post-Dashboard Actions
```bash
# Generate reports based on dashboard insights
/workspace:reports
/learn:performance

# Implement optimizations
/workspace:improve
```

### Continuous Monitoring
```bash
# Background monitoring
/monitor:dashboard --background

# Generate periodic reports
/workspace:organize
```

## Troubleshooting

### Common Issues

**Port Already in Use**
```bash
# Use a different port
/monitor:dashboard --port 8080

# Kill existing process
lsof -ti:5000 | xargs kill -9
```

**No Data Available**
```bash
# Initialize pattern learning
/learn:init

# Generate some activity
/analyze:project

# Check data directory
ls -la .claude-patterns/
```

**Dashboard Won't Load**
```bash
# Check Python dependencies
pip install flask flask-cors

# Verify dashboard script (auto-detects plugin path)
python <plugin_path>/lib/dashboard.py --test

# Check system resources
free -h   # Memory
df -h     # Disk space
```

**Performance Issues**
```bash
# Increase cache TTL
/monitor:dashboard --cache-ttl 120

# Reduce refresh rate
/monitor:dashboard --refresh-rate 60

# Clear old data
/monitor:dashboard --cleanup-days 7
```

### Debug Mode
```bash
# Launch with debug output
/monitor:dashboard --debug

# Check log files
tail -f .claude/logs/dashboard.log

# Validate installation (auto-detects plugin path)
python <plugin_path>/lib/dashboard.py --validate
```

## Best Practices

### Production Use
- Use external access only on trusted networks
- Set up appropriate firewall rules
- Monitor resource usage
- Back up data regularly
- Implement access controls for team sharing

### Performance Optimization
- Clean up and archive data regularly
- Optimize cache settings based on usage
- Monitor system resources
- Use appropriate refresh intervals
- Consider resource limits for long-term operation

### Data Management
- Back up pattern data regularly
- Archive old data to maintain performance
- Monitor storage usage
- Implement data retention policies
- Export important insights regularly

## Performance Metrics

### Expected Resource Usage
- **CPU**: 2-5% during normal operation
- **Memory**: 50-200MB depending on data size
- **Storage**: Grows with pattern data (manageable)
- **Network**: Minimal (local access)

### Scalability
- **Data Points**: Handles 10,000+ pattern entries efficiently
- **Concurrent Users**: Supports 5-10 simultaneous users
- **Historical Data**: Optimal performance with 30-day retention
- **Response Times**: <100ms for most API endpoints

## Examples

### Basic Monitoring
```bash
# Launch dashboard for local development
/monitor:dashboard

# Monitor specific project
cd /path/to/project
/monitor:dashboard --data-dir .claude-patterns
```

### Team Monitoring
```bash
# Share dashboard with team
/monitor:dashboard --host 0.0.0.0 --port 8080

# Team members access at:
# http://your-ip:8080
```

### Production Monitoring
```bash
# Background monitoring with alerts
/monitor:dashboard --background --email-alerts

# Generate daily reports
/monitor:dashboard --report-only --email-reports
```

---

**Version**: 1.0.2
**Integration**: Uses dashboard.py directly from the lib/ directory (no delegation)
**Dependencies**: Flask, Flask-CORS, Python 3.8+
**Platform**: Cross-platform (Windows, Linux, macOS)
**Learning**: Integrates with learning-engine for pattern analysis
**Fix**: Enhanced smart browser opening with lock mechanism and existing-dashboard detection (v1.0.2)
**Previous**: Removed agent delegation to prevent duplicate browser launches (v1.0.1)

commands/monitor/groups.md (new file, 446 lines)

---
name: monitor:groups
description: Real-time monitoring of four-tier group performance, communication, and specialization metrics
version: 7.0.0
category: monitoring
---

# Monitor Groups Command

Display comprehensive real-time metrics for all four agent groups, including performance, communication effectiveness, specialization, and learning progress.

## What This Command Does

**Analyzes and displays**:
1. **Group Performance Metrics** - Success rates, quality scores, execution times per group
2. **Inter-Group Communication** - Message flow, success rates, feedback effectiveness
3. **Group Specialization** - What each group excels at, based on task history
4. **Knowledge Transfer** - Cross-group learning effectiveness
5. **Decision Quality** - Group 2 decision accuracy and user alignment
6. **Validation Effectiveness** - Group 4 validation pass rates

## Execution Steps

Follow these steps to generate a comprehensive group monitoring report:

### Step 1: Load All Group Data

```python
from lib.group_collaboration_system import get_group_collaboration_stats
from lib.group_performance_tracker import get_group_performance, compare_groups
from lib.inter_group_knowledge_transfer import get_knowledge_transfer_stats
from lib.group_specialization_learner import get_specialization_profile, get_learning_insights
from lib.agent_performance_tracker import get_agent_performance

# Load all statistics
collab_stats = get_group_collaboration_stats()
knowledge_stats = get_knowledge_transfer_stats()
learning_insights = get_learning_insights()
```

### Step 2: Analyze Each Group

**For Group 1 (Strategic Analysis & Intelligence)**:
```python
group1_perf = get_group_performance(1)
group1_spec = get_specialization_profile(1)

# Key metrics:
# - Total recommendations made
# - Average confidence score
# - Recommendation acceptance rate (by Group 2)
# - Recommendation effectiveness (from Group 4 feedback)
# - Top specializations (refactoring, security, performance)
```

**For Group 2 (Decision Making & Planning)**:
```python
group2_perf = get_group_performance(2)
group2_spec = get_specialization_profile(2)

# Key metrics:
# - Total decisions made
# - Decision accuracy (plans executed successfully)
# - User preference alignment score
# - Average decision confidence
# - Plan adjustment rate (how often plans need revision)
```

**For Group 3 (Execution & Implementation)**:
```python
group3_perf = get_group_performance(3)
group3_spec = get_specialization_profile(3)

# Key metrics:
# - Total executions completed
# - First-time success rate
# - Average quality improvement (before/after)
# - Auto-fix success rate
# - Average iterations needed
```

**For Group 4 (Validation & Optimization)**:
```python
group4_perf = get_group_performance(4)
group4_spec = get_specialization_profile(4)

# Key metrics:
# - Total validations performed
# - GO/NO-GO decision distribution
# - Average quality score (5-layer validation)
# - Feedback effectiveness (improvements from feedback)
# - Issue detection rate
```

### Step 3: Analyze Inter-Group Communication

```python
# Communication flow analysis
comm_flows = {
    "Group 1 -> Group 2": collab_stats.get("group_1_to_2", {}),
    "Group 2 -> Group 3": collab_stats.get("group_2_to_3", {}),
    "Group 3 -> Group 4": collab_stats.get("group_3_to_4", {}),
    "Group 4 -> Group 1": collab_stats.get("group_4_to_1", {}),
    "Group 4 -> Group 2": collab_stats.get("group_4_to_2", {}),
    "Group 4 -> Group 3": collab_stats.get("group_4_to_3", {})
}

# Calculate:
# - Message success rate per flow
# - Average feedback cycle time
# - Communication bottlenecks
```

### Step 4: Analyze Knowledge Transfer

```python
# Knowledge transfer effectiveness
# (query_knowledge is assumed to be exported by the knowledge transfer module)
from lib.inter_group_knowledge_transfer import query_knowledge

for group_num in [1, 2, 3, 4]:
    knowledge_for_group = query_knowledge(
        for_group=group_num,
        knowledge_type=None  # All types
    )

    # Metrics:
    # - Total knowledge available to group
    # - Knowledge application success rate
    # - Top knowledge sources (which groups share most effectively)
    # - Knowledge confidence trends
```

### Step 5: Identify Top Performers and Areas for Improvement

```python
# Compare groups
comparison = compare_groups(metric='quality_score')

# Identify:
# - Top performing group
# - Groups needing improvement
# - Emerging specializations
# - Communication improvements needed
```

### Step 6: Generate Comprehensive Report

**Report Structure**:

```markdown
# Four-Tier Group Monitoring Report
Generated: {timestamp}

## Executive Summary
- Overall System Health: {score}/100
- Total Tasks Completed: {total}
- Average Quality Score: {avg_quality}/100
- Communication Success Rate: {comm_success}%
- Knowledge Transfer Effectiveness: {knowledge_eff}%

## Group Performance Overview

### Group 1: Strategic Analysis & Intelligence (The "Brain")
**Performance**: {rating} | **Tasks**: {count} | **Success Rate**: {success}%

**Key Metrics**:
- Recommendations Made: {rec_count}
- Average Confidence: {avg_conf}
- Acceptance Rate: {acceptance}%
- Effectiveness Score: {effectiveness}/100

**Top Specializations**:
1. {spec_1} - {quality}% success rate
2. {spec_2} - {quality}% success rate
3. {spec_3} - {quality}% success rate

**Top Agents**:
- {agent_1}: {performance} ({task_type})
- {agent_2}: {performance} ({task_type})

---

### Group 2: Decision Making & Planning (The "Council")
**Performance**: {rating} | **Decisions**: {count} | **Accuracy**: {accuracy}%

**Key Metrics**:
- Decisions Made: {decision_count}
- Decision Confidence: {avg_conf}
- User Alignment: {alignment}%
- Plan Success Rate: {plan_success}%

**Decision Quality**:
- Excellent (90-100): {excellent_count}
- Good (70-89): {good_count}
- Needs Improvement (<70): {poor_count}

**Top Agents**:
- strategic-planner: {performance}
- preference-coordinator: {performance}

---

### Group 3: Execution & Implementation (The "Hand")
**Performance**: {rating} | **Executions**: {count} | **Success**: {success}%

**Key Metrics**:
- Executions Completed: {exec_count}
- First-Time Success: {first_time}%
- Quality Improvement: +{improvement} points avg
- Auto-Fix Success: {autofix}%

**Top Specializations**:
1. {spec_1} - {quality}% success rate
2. {spec_2} - {quality}% success rate
3. {spec_3} - {quality}% success rate

**Top Agents**:
- {agent_1}: {performance} ({task_type})
- {agent_2}: {performance} ({task_type})
- {agent_3}: {performance} ({task_type})

---

### Group 4: Validation & Optimization (The "Guardian")
**Performance**: {rating} | **Validations**: {count} | **Pass Rate**: {pass_rate}%

**Key Metrics**:
- Validations Performed: {val_count}
- GO Decisions: {go_count} ({go_pct}%)
- NO-GO Decisions: {nogo_count} ({nogo_pct}%)
- Average Quality Score: {avg_quality}/100
- Feedback Effectiveness: {feedback_eff}%

**Five-Layer Validation Breakdown**:
- Functional (30 pts): {func_avg}/30
- Quality (25 pts): {qual_avg}/25
- Performance (20 pts): {perf_avg}/20
- Integration (15 pts): {integ_avg}/15
- UX (10 pts): {ux_avg}/10

**Top Agents**:
- post-execution-validator: {performance}
- performance-optimizer: {performance}
- continuous-improvement: {performance}

---

## Inter-Group Communication

### Communication Flow Analysis

**Group 1 -> Group 2 (Analysis -> Decision)**:
- Messages Sent: {count}
- Success Rate: {success}%
- Average Response Time: {time}s
- Recommendation Acceptance: {acceptance}%

**Group 2 -> Group 3 (Decision -> Execution)**:
- Plans Sent: {count}
- Execution Success: {success}%
- Plan Completeness: {completeness}%
- Average Execution Time: {time}s

**Group 3 -> Group 4 (Execution -> Validation)**:
- Results Sent: {count}
- Validation Pass Rate: {pass_rate}%
- Average Quality Improvement: +{improvement} pts
- Iterations Needed: {iterations} avg

**Group 4 -> All Groups (Feedback Loops)**:
- Feedback Messages: {count}
- Feedback Effectiveness: {effectiveness}%
- Average Cycle Time: {time}s
- Learning Applied: {learning_count} instances

### Communication Health
- ✅ Excellent (>95%): {excellent_flows}
- ⚠️ Needs Attention (70-95%): {warning_flows}
- ❌ Critical (<70%): {critical_flows}

---

## Knowledge Transfer

### Cross-Group Learning

**Total Knowledge Base**: {total_knowledge} items
**Average Confidence**: {avg_confidence}
**Application Success Rate**: {application_success}%

**Knowledge by Type**:
- Patterns: {pattern_count} (avg confidence: {pattern_conf})
- Best Practices: {bp_count} (avg confidence: {bp_conf})
- Optimizations: {opt_count} (avg confidence: {opt_conf})
- Anti-Patterns: {ap_count} (avg confidence: {ap_conf})

**Top Knowledge Sources** (groups sharing most effectively):
1. Group {group_num}: {knowledge_count} items, {success}% success rate
2. Group {group_num}: {knowledge_count} items, {success}% success rate
3. Group {group_num}: {knowledge_count} items, {success}% success rate

**Knowledge Transfer Matrix**:
```
          To G1   To G2   To G3   To G4
From G1    --     {n}     {n}     {n}
From G2    {n}    --      {n}     {n}
From G3    {n}    {n}     --      {n}
From G4    {n}    {n}     {n}     --
```

---

## Specialization Insights

### Group Specialization Maturity

**Group 1 (Brain)**: {maturity_level}
- Expertise Areas: {areas}
- Emerging Specializations: {emerging}
- Recommendation: {recommendation}

**Group 2 (Council)**: {maturity_level}
- Expertise Areas: {areas}
- Decision Patterns: {patterns}
- Recommendation: {recommendation}

**Group 3 (Hand)**: {maturity_level}
- Expertise Areas: {areas}
- Execution Strengths: {strengths}
- Recommendation: {recommendation}

**Group 4 (Guardian)**: {maturity_level}
- Expertise Areas: {areas}
- Validation Focus: {focus}
- Recommendation: {recommendation}

---

## Trends & Insights

### Performance Trends (Last 50 Tasks)

**Quality Score Trend**: {trend} ({direction})
- Current Average: {current_avg}/100
- 10-Task Moving Average: {moving_avg}/100
- Trend Direction: {improving/stable/declining}

**Iteration Efficiency**: {trend}
- Current Average: {current_iterations}
- Target: 1.2 or less
- Status: {on_track/needs_attention}

**Decision Accuracy**: {trend}
- Current: {current_accuracy}%
- Target: 90%+
- Status: {excellent/good/needs_improvement}

### Learning Insights

{insight_1}

{insight_2}

{insight_3}

---

## Recommendations

### High Priority
1. {recommendation_1}
2. {recommendation_2}

### Medium Priority
1. {recommendation_1}
2. {recommendation_2}

### Optimization Opportunities
1. {opportunity_1}
2. {opportunity_2}

---

## System Health Score: {score}/100

**Breakdown**:
- Group Performance (40 pts): {group_perf}/40
- Communication Quality (25 pts): {comm_quality}/25
- Knowledge Transfer (20 pts): {knowledge}/20
- Specialization Maturity (15 pts): {specialization}/15

**Status**: {Excellent/Good/Needs Attention/Critical}

---

Report Path: .claude/data/reports/group-monitoring-{date}.md
```
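
The health score in the template above is a weighted sum of the four component scores (40 + 25 + 20 + 15 = 100 points). A minimal sketch of computing it and writing the dated report file is below; the component values and the `render_report` helper are illustrative assumptions, not functions provided by the plugin.

```python
from datetime import date
from pathlib import Path


def system_health_score(group_perf, comm_quality, knowledge, specialization):
    """Sum the four component scores (out of 40, 25, 20, and 15 points)."""
    return round(group_perf + comm_quality + knowledge + specialization)


def save_group_report(markdown_text):
    """Write the rendered report to the dated path used by this command."""
    report_dir = Path('.claude/data/reports')
    report_dir.mkdir(parents=True, exist_ok=True)
    report_path = report_dir / f'group-monitoring-{date.today().isoformat()}.md'
    report_path.write_text(markdown_text, encoding='utf-8')
    return report_path


# Example with illustrative component values:
score = system_health_score(group_perf=34, comm_quality=22, knowledge=17, specialization=12)
# report_path = save_group_report(render_report(score=score, ...))  # render_report is hypothetical
```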

## Result Presentation

**Terminal Output (15-20 lines max)**:
```
+==============================================================+
|              FOUR-TIER GROUP MONITORING REPORT               |
+==============================================================+

System Health: {score}/100 ({status})
Total Tasks: {count} | Avg Quality: {quality}/100 | Success Rate: {success}%

GROUP PERFORMANCE:
  Group 1 (Brain):    {rating} | {tasks} tasks | {success}% success
  Group 2 (Council):  {rating} | {decisions} decisions | {accuracy}% accurate
  Group 3 (Hand):     {rating} | {executions} executions | {success}% success
  Group 4 (Guardian): {rating} | {validations} validations | {pass}% pass rate

COMMUNICATION: {comm_success}% success rate | {feedback_count} feedback loops

TOP PERFORMERS:
  1. {agent_name} ({group}): {performance}
  2. {agent_name} ({group}): {performance}
  3. {agent_name} ({group}): {performance}

TRENDS: Quality {trend_icon} {direction} | Iterations {trend_icon} {direction}

📄 Detailed Report: .claude/data/reports/group-monitoring-{date}.md
⏱️ Execution Time: {time}s
```

**File Report**: Save the complete detailed report to `.claude/data/reports/group-monitoring-YYYY-MM-DD.md`

## Notes

- Automatically refreshes data from all learning systems
- Identifies bottlenecks and improvement opportunities
- Tracks specialization emergence over time
- Monitors communication effectiveness
- **Run regularly** (e.g., after every 10-20 tasks) to track trends
- Use insights to optimize group coordination

## Integration

This command integrates with:
- `lib/group_collaboration_system.py` - Communication tracking
- `lib/group_performance_tracker.py` - Performance metrics
- `lib/inter_group_knowledge_transfer.py` - Knowledge stats
- `lib/group_specialization_learner.py` - Specialization insights
- `lib/agent_performance_tracker.py` - Individual agent data
- `lib/agent_feedback_system.py` - Feedback effectiveness

commands/monitor/recommend.md (new file, 36 lines)

---
name: monitor:recommend
description: Get smart workflow and optimization recommendations based on learned patterns
---

EXECUTE THESE BASH COMMANDS DIRECTLY (no agents, no skills):

First, find the plugin installation path:
```bash
PLUGIN_PATH=$(find ~/.claude -name "exec_plugin_script.py" 2>/dev/null | head -1 | sed 's|/lib/exec_plugin_script.py||')
echo "Plugin found at: $PLUGIN_PATH"
```

Step 1 - Generate recommendations using project patterns:
```bash
python3 "$PLUGIN_PATH/lib/exec_plugin_script.py" recommendation_engine.py --dir ./.claude-patterns --task "your task description here"
```

Step 2 - For general recommendations (no specific task):
```bash
python3 "$PLUGIN_PATH/lib/exec_plugin_script.py" recommendation_engine.py --dir ./.claude-patterns
```

Step 3 - For JSON output (programmatic use):
```bash
python3 "$PLUGIN_PATH/lib/exec_plugin_script.py" recommendation_engine.py --dir ./.claude-patterns --task "your task" --format json
```
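
If you consume the JSON output from another script, a minimal sketch is below. It only assumes the command prints a single JSON document to stdout and does not assume any particular field names; reusing the `$PLUGIN_PATH` found above via an environment variable is an illustrative choice.

```python
import json
import os
import subprocess

# Reuse the path discovered in the bash step above, e.g. exported as $PLUGIN_PATH.
plugin_path = os.environ.get('PLUGIN_PATH', os.path.expanduser('~/.claude/plugins'))

cmd = [
    'python3', f'{plugin_path}/lib/exec_plugin_script.py', 'recommendation_engine.py',
    '--dir', './.claude-patterns', '--task', 'your task', '--format', 'json',
]
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
recommendations = json.loads(result.stdout)
print(json.dumps(recommendations, indent=2)[:500])  # preview; the schema depends on the engine version
```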

The report will show:
- Recommended approach with confidence level
- Quality predictions and time estimates
- Skill suggestions with success rates
- Risk assessment with mitigations
- Alternative approaches with trade-offs

The recommendations use patterns from your current project directory at `./.claude-patterns/`.