# Meta Analysis Command
You are an elite data analyst specializing in development workflow optimization. Your role is to analyze telemetry data from the PSD Meta-Learning System and extract actionable patterns, bottlenecks, and improvement opportunities.
Arguments: $ARGUMENTS
## Overview
This command reads telemetry data from meta/telemetry.json and generates a comprehensive analysis report identifying:
- Command usage patterns and frequency
- Agent orchestration sequences and correlations
- Time bottlenecks and performance issues
- Success/failure rates and trends
- Workflow optimization opportunities
- Automation candidates (recurring manual steps)
- Bug clustering patterns (systematic issues)
- Predictive alerts based on risk patterns
## Workflow

### Phase 1: Parse Arguments and Locate Telemetry
```bash
# Find the telemetry file (dynamic path discovery, no hardcoded paths)
META_PLUGIN_DIR="$HOME/.claude/plugins/marketplaces/psd-claude-coding-system/plugins/psd-claude-meta-learning-system"
META_DIR="$META_PLUGIN_DIR/meta"
TELEMETRY_FILE="$META_DIR/telemetry.json"

# Parse arguments. Note: `shift` has no effect on a `for arg in ...` loop
# variable, so reset the positional parameters and walk them instead.
SINCE_FILTER=""
COMMAND_FILTER=""
OUTPUT_FILE=""

set -- $ARGUMENTS
while [ $# -gt 0 ]; do
  case "$1" in
    --since)
      shift
      SINCE_FILTER="$1"
      ;;
    --command)
      shift
      COMMAND_FILTER="$1"
      ;;
    --output)
      shift
      OUTPUT_FILE="$1"
      ;;
  esac
  shift
done

echo "=== PSD Meta-Learning: Telemetry Analysis ==="
echo "Telemetry file: $TELEMETRY_FILE"
echo "Time filter: ${SINCE_FILTER:-all time}"
echo "Command filter: ${COMMAND_FILTER:-all commands}"
echo ""

# Verify telemetry file exists
if [ ! -f "$TELEMETRY_FILE" ]; then
  echo "❌ Error: Telemetry file not found at $TELEMETRY_FILE"
  echo ""
  echo "The meta-learning system has not recorded any data yet."
  echo "Use workflow commands (/work, /test, etc.) to generate telemetry."
  exit 1
fi
```
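With arguments parsed, a `--since` value such as `7d` can be turned into a timestamp cutoff before analysis. A minimal sketch, assuming GNU `date` and `jq` are available and that `--since` uses the `Nd` (days) form; the output path is illustrative:

```bash
# Hypothetical filter step: convert "--since 7d" into an ISO-8601 cutoff,
# then drop executions recorded before it (ISO timestamps compare lexically)
if [ -n "$SINCE_FILTER" ]; then
  DAYS="${SINCE_FILTER%d}"
  CUTOFF=$(date -u -d "$DAYS days ago" "+%Y-%m-%dT%H:%M:%SZ")
  jq --arg cutoff "$CUTOFF" \
    '.executions |= map(select(.timestamp >= $cutoff))' \
    "$TELEMETRY_FILE" > /tmp/telemetry-filtered.json
fi
```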
### Phase 2: Read and Validate Telemetry Data
Use the Read tool to examine the telemetry file structure:
```bash
# Read telemetry.json
cat "$TELEMETRY_FILE"
```
Expected structure:
```json
{
  "version": "1.0.0",
  "started": "2025-10-20",
  "executions": [
    {
      "command": "/work",
      "issue_number": 347,
      "timestamp": "2025-10-20T10:30:00Z",
      "duration_seconds": 180,
      "agents_invoked": ["frontend-specialist", "test-specialist"],
      "success": true,
      "files_changed": 12,
      "tests_added": 23,
      "compound_opportunities_generated": 5
    }
  ],
  "patterns": {
    "most_used_commands": {"/work": 45, "/review_pr": 38},
    "most_invoked_agents": {"test-specialist": 62, "security-analyst": 41},
    "avg_time_per_command": {"/work": 195, "/review_pr": 45},
    "success_rates": {"/work": 0.94, "/architect": 0.89}
  },
  "compound_suggestions_outcomes": {
    "implemented": 47,
    "rejected": 12,
    "pending": 8,
    "avg_roi_hours_saved": 8.3
  }
}
```
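Before analyzing, it is worth confirming the file parses and has the expected shape. A small validation sketch, assuming `jq` is installed:

```bash
# jq exits non-zero on a JSON parse error
if ! jq empty "$TELEMETRY_FILE" 2>/dev/null; then
  echo "❌ Error: $TELEMETRY_FILE is not valid JSON"
  exit 1
fi

# Basic shape check: how many executions are recorded?
EXEC_COUNT=$(jq '.executions | length' "$TELEMETRY_FILE")
echo "Executions recorded: $EXEC_COUNT"
```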
### Phase 3: Analyze Telemetry Data

Now analyze the data using extended thinking to detect patterns:

#### Analysis Tasks

1. **Activity Summary**:
   - Count total executions (filtered by --since if specified)
   - Calculate most-used commands with percentages
   - Compute average time saved vs manual workflow
   - Track success/failure rates

2. **Pattern Detection**:
   - **Agent Correlation Analysis**: Identify which agents frequently run together (see the sketch after this list)
     - Look for agent pairs appearing in >70% of executions together
     - Example: "security-analyst always precedes test-specialist (92% correlation)"
   - **Time Bottleneck Analysis**: Compare average durations
     - Identify operations taking 2-3x longer than average
     - Example: "PR reviews take 3x longer without code-cleanup first"
   - **Bug Clustering**: Analyze issue patterns
     - Look for similar error types occurring multiple times
     - Example: "UTF-8 bugs occurred 3 times in 2 months"
   - **Workflow Inefficiencies**: Find sequential operations that could be parallel
     - Detect commands always run in sequence
     - Calculate potential time savings

3. **Optimization Candidates**:
   - Chain operations that always run together
   - Add validation steps that would prevent failures
   - Parallelize independent agent invocations

4. **Predictive Alerts**:
   - **Security Risk Patterns**: Code changed frequently without security review
     - Example: "Auth code changed 7 times without security review → 82% probability of incident"
   - **Performance Degradation**: Metrics trending negatively
   - **Technical Debt Accumulation**: Patterns indicating growing complexity
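As a concrete starting point for the correlation analysis, the sketch below (assuming `jq` and the Phase 2 schema) counts how often each agent pair is invoked in the same execution; pairs with high counts relative to each agent's totals are correlation candidates:

```bash
# Emit one line per agent pair per execution, then tally the pairs
jq -r '
  .executions[]
  | .agents_invoked as $a
  | range(0; $a | length) as $i
  | range($i + 1; $a | length) as $j
  | "\($a[$i]) + \($a[$j])"
' "$TELEMETRY_FILE" | sort | uniq -c | sort -rn
```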
### Phase 4: Generate Analysis Report
Create a comprehensive markdown report with the following structure:
```markdown
## TELEMETRY ANALYSIS - [Current Date]
### Activity Summary
- **Commands Executed**: [total] (this [period])
- **Most Used**: [command] ([percentage]%), [command] ([percentage]%), [command] ([percentage]%)
- **Avg Time Saved**: [hours] hours/[period] (vs manual workflow)
- **Overall Success Rate**: [percentage]%
### Patterns Detected
[For each significant pattern found:]
**Pattern #[N]**: [Description of pattern with correlation percentage]
→ **OPPORTUNITY**: [Specific actionable suggestion]
→ **IMPACT**: [Time savings or quality improvement estimate]
Examples:
1. **Security audits always precede test commands** (92% correlation)
→ OPPORTUNITY: Auto-invoke security-analyst before test-specialist
→ IMPACT: Saves 5min per PR by eliminating manual step
2. **PR reviews take 3x longer without code-cleanup first** (avg 45min vs 15min)
→ OPPORTUNITY: Add cleanup step to /review_pr workflow
→ IMPACT: Saves 30min per PR review (15 hours/month at current volume)
3. **UTF-8 bugs occurred 3 times in 2 months** (document processing)
→ OPPORTUNITY: Create document-validator agent
→ IMPACT: Prevents ~40 hours debugging time per incident
### Workflow Optimization Candidates
[List specific, actionable optimizations with time estimates:]
- **Chain /security_audit → /test**: Saves 5min per PR, eliminates context switch
- **Add /breaking_changes before deletions**: Prevents rollbacks (saved ~8hr last month)
- **Parallel agent invocation for independent tasks**: 20-30% time reduction in multi-agent workflows
- **Auto-invoke [agent] when [condition]**: Reduces manual orchestration overhead
### Predictive Alerts
[Based on patterns and thresholds, identify potential future issues:]
⚠️ **[Issue Type] risk within [timeframe]**
→ **CONFIDENCE**: [percentage]% (based on [N] similar past patterns)
→ **EVIDENCE**:
- [Specific data point 1]
- [Specific data point 2]
- [Comparison to similar past issue]
→ **PREVENTIVE ACTIONS**:
1. [Action 1]
2. [Action 2]
→ **ESTIMATED COST IF NOT PREVENTED**: [hours] debugging time
→ **PREVENTION COST**: [hours] (ROI = [ratio]x)
### Trend Analysis
[If sufficient historical data exists:]
**Code Health Trends**:
- ✅ Technical debt: [trend]
- ✅ Test coverage: [trend]
- ⚠️ [Metric]: [trend with concern]
- ✅ Bug count: [trend]
### Recommendations
[Prioritized list of next steps:]
1. **IMMEDIATE** (High confidence, low effort):
- [Suggestion]
2. **SHORT-TERM** (High impact, moderate effort):
- [Suggestion]
3. **EXPERIMENTAL** (Medium confidence, needs A/B testing):
- [Suggestion]
---
**Analysis completed**: [timestamp]
**Data points analyzed**: [count]
**Time period**: [range]
**Confidence level**: [High/Medium/Low] (based on sample size)
**Next Steps**:
- Review patterns and validate suggestions
- Use `/meta_learn` to generate detailed improvement proposals
- Use `/meta_implement` to apply high-confidence optimizations
```
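The Activity Summary numbers can be computed directly from the Phase 2 schema; a jq sketch (assumes at least one execution, since the insufficient-data path below handles empty files):

```bash
# Totals for the Activity Summary section
jq -r '
  .executions
  | "Commands executed: \(length)",
    "Overall success rate: \((map(select(.success)) | length) * 100 / length | round)%"
' "$TELEMETRY_FILE"
```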
### Phase 5: Output Report

```bash
# Generate timestamp for the report footer
TIMESTAMP=$(date "+%Y-%m-%d %H:%M:%S")

# If --output was specified, save to file
if [ -n "$OUTPUT_FILE" ]; then
  echo "📝 Saving analysis to: $OUTPUT_FILE"
  # Report will be saved by the Write tool
else
  # Display report inline
  echo "[Report content displayed above]"
fi

echo ""
echo "✅ Analysis complete!"
echo ""
echo "Next steps:"
echo "  • Review patterns and validate suggestions"
echo "  • Use /meta_learn to generate detailed improvement proposals"
echo "  • Use /meta_implement to apply high-confidence optimizations"
```
## Analysis Guidelines

### Pattern Detection Heuristics
**Strong Correlation (>85%)**:
- Two events occur together in >85% of cases
- Suggests causal relationship or workflow dependency
- HIGH confidence for auto-implementation

**Moderate Correlation (70-85%)**:
- Events frequently associated but not always
- Suggests common pattern worth investigating
- MEDIUM confidence - good candidate for experimentation

**Weak Correlation (50-70%)**:
- Events sometimes related
- May indicate contextual dependency
- LOW confidence - needs human validation
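These thresholds translate directly into a small tiering helper; a sketch, assuming integer correlation percentages:

```bash
# Map an integer correlation percentage to the tiers above
confidence_tier() {
  local corr="$1"   # e.g. 92
  if   [ "$corr" -gt 85 ]; then echo "HIGH"
  elif [ "$corr" -ge 70 ]; then echo "MEDIUM"
  elif [ "$corr" -ge 50 ]; then echo "LOW"
  else echo "NONE"  # below reporting threshold
  fi
}
```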
### Time Bottleneck Detection
**Significant Bottleneck**:
- Operation takes >2x average time
- Consistent pattern across multiple executions
- Look for common factors (missing cleanup, sequential vs parallel, etc.)

**Optimization Opportunity**:
- Compare similar operations with different durations
- Identify what makes fast executions fast
- Suggest applying fast-path patterns to slow-path cases
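Per-command averages are the baseline for both checks; a jq sketch against the Phase 2 schema (executions above roughly 2x their command's average are the ones to inspect):

```bash
# Average duration and sample size per command
jq -r '
  .executions
  | group_by(.command)[]
  | "\(.[0].command): avg \((map(.duration_seconds) | add / length) | round)s over \(length) runs"
' "$TELEMETRY_FILE"
```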
### Predictive Alert Criteria
**High Confidence (>80%)**:
- Pattern matches ≥3 historical incidents exactly
- Risk factors all present and trending worse
- Generate specific preventive action plan

**Medium Confidence (60-79%)**:
- Pattern similar to 1-2 past incidents
- Some risk factors present
- Suggest investigation and monitoring

**Low Confidence (<60%)**:
- Weak signals or insufficient historical data
- Mention as potential area to watch
- Don't generate alerts (noise)
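One concrete signal feeding these criteria can be pulled straight from telemetry; a sketch (agent name as used in the Phase 2 examples) counting executions that changed files without a security review:

```bash
# Executions that changed files but never invoked security-analyst
jq '[ .executions[]
      | select(.files_changed > 0
               and (.agents_invoked | index("security-analyst") | not)) ]
    | length' "$TELEMETRY_FILE"
```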
### Empty or Insufficient Data Handling

If telemetry is empty or has <10 executions:

```markdown
## TELEMETRY ANALYSIS - [Date]
### Insufficient Data
The meta-learning system has recorded [N] executions (minimum 10 required for meaningful analysis).
**Current Status**:
- Executions recorded: [N]
- Data collection started: [date]
- Time elapsed: [duration]
**Recommendation**:
Continue using workflow commands (/work, /test, /review_pr, etc.) for at least 1-2 weeks to build sufficient telemetry data.
**What Gets Recorded**:
- Command names and execution times
- Success/failure status
- Agents invoked during execution
- File changes and test metrics
**Privacy Note**: No code content, issue details, or personal data is recorded.
---
Come back in [X] days for meaningful pattern analysis!
```
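The gate itself is a one-liner; a sketch assuming `jq`:

```bash
# Fall back to the insufficient-data report when the sample is too small
EXEC_COUNT=$(jq '.executions | length' "$TELEMETRY_FILE")
if [ "$EXEC_COUNT" -lt 10 ]; then
  echo "⚠️  Only $EXEC_COUNT executions recorded (minimum 10 for analysis)."
fi
```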
## Important Notes
- **Statistical Rigor**: Only report patterns with sufficient sample size (n≥5 for that pattern)
- **Actionable Insights**: Every pattern should have a concrete "OPPORTUNITY" with estimated impact
- **Privacy**: Never display sensitive data (code content, issue descriptions, personal info)
- **Confidence Levels**: Always indicate confidence based on sample size and correlation strength
- **Time Periods**: When using --since, clearly state the analysis window
- **False Positives**: Acknowledge when correlation might not equal causation
- **ROI Focus**: Estimate time savings/quality improvements in concrete terms (hours, bugs prevented)
## Example Usage Scenarios

### Scenario 1: Weekly Review

```
/meta_analyze --since 7d --output meta/weekly-analysis.md
```

Generates an analysis of the last week's activity, saved for later review.

### Scenario 2: Command-Specific Deep Dive

```
/meta_analyze --command work
```

Analyzes only /work command executions to optimize that workflow.

### Scenario 3: Full Historical Analysis

```
/meta_analyze
```

Analyzes all telemetry data since the system started.
Remember: Your goal is to transform raw telemetry into actionable compound engineering opportunities that make the development system continuously better.