Initial commit

Author: Zhongwei Li
Date: 2025-11-30 08:36:51 +08:00
Commit: 748a979a78
14 changed files with 6973 additions and 0 deletions

commands/assess.md
# Assess Command
**Description**: Orchestrate a team of specialist agents to assess a project for modernization readiness and viability, evaluating technical debt, risks, and ROI to determine if modernization is recommended
---
# Project Modernization Assessment Protocol
**Version**: 1.0
**Purpose**: Systematically assess whether a project is a good candidate for modernization
**Output**: `ASSESSMENT.md` with comprehensive analysis and recommendation
**Duration**: 2-4 hours
**Note**: Time estimates are based on typical human execution times and may vary significantly based on project complexity, team experience, and AI assistance capabilities.
---
## Overview
This protocol evaluates a software project across **8 critical dimensions** to determine:
- ✅ Is modernization **technically feasible**?
- ✅ Is modernization **financially worthwhile**?
- ✅ What are the **major risks and blockers**?
- ✅ What is the **recommended approach**?
- ✅ Should we proceed, defer, or abandon the modernization?
**Core Principle**: **Assess before you commit - not all projects should be modernized.**
---
## Assessment Dimensions
### 1. Technical Viability (0-100 score)
### 2. Business Value (0-100 score)
### 3. Risk Profile (LOW/MEDIUM/HIGH/CRITICAL)
### 4. Resource Requirements (estimated effort)
### 5. Dependencies & Ecosystem Health
### 6. Code Quality & Architecture
### 7. Test Coverage & Stability
### 8. Security Posture
---
## Assessment Process
### Step 1: Project Discovery (30 minutes)
**Gather Basic Information**:
- [ ] Project name, version, and purpose
- [ ] Primary programming language(s) and framework(s)
- [ ] Current framework versions
- [ ] Target framework versions (if known)
- [ ] Age of project (initial commit date)
- [ ] Last significant update
- [ ] Active development status
- [ ] Number of contributors
- [ ] Lines of code
- [ ] Number of dependencies
**Commands**:
```bash
# Project stats
find . \( -name "*.cs" -o -name "*.js" -o -name "*.py" -o -name "*.java" \) -print0 | xargs -0 wc -l
git log --reverse --format="%ai" | head -1 # First commit
git log --format="%ai" | head -1 # Last commit
git shortlog -sn # Contributors
# Dependency analysis
# .NET: dotnet list package --outdated
# Node: npm outdated
# Python: pip list --outdated
```
---
### Step 2: Technical Viability Assessment (45 minutes)
#### 2.1 Framework Analysis
**Current Framework Status**:
- [ ] Is current framework still supported?
- [ ] End-of-life date for current version
- [ ] Security patches still available?
- [ ] Community support level (active/declining/dead)
**Target Framework Status**:
- [ ] Latest stable version available?
- [ ] Migration path documented?
- [ ] Breaking changes documented?
- [ ] Tool support available?
**Scoring Criteria**:
- **80-100**: Clear migration path, well-documented, active community
- **60-79**: Migration path exists, some documentation gaps
- **40-59**: Difficult migration, limited documentation
- **0-39**: No clear path, framework deprecated, or major architectural change required
#### 2.2 Dependency Health
**Analyze All Dependencies**:
- [ ] Count total dependencies
- [ ] Deprecated packages (score -= 10 per deprecated package)
- [ ] Unmaintained packages (no updates in >2 years; score -= 5 each)
- [ ] Packages with known security vulnerabilities (score -= 20 per CRITICAL)
- [ ] Compatible target versions available (score += 10 if yes)
**Red Flags** (automatic FAIL):
- ❌ Critical dependency with no maintained alternative
- ❌ 50%+ of dependencies unmaintained
- ❌ Core framework dependency incompatible with target
**Scoring**:
- **80-100**: All dependencies maintained, clear upgrade path
- **60-79**: Most dependencies healthy, few alternatives needed
- **40-59**: Many dependencies need replacement
- **0-39**: Dependency hell, extensive rewrites required
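The scoring deltas above can be sketched as a small helper. The protocol does not fix a starting value, so a base score of 100, clamping to the 0-100 range, and the function name `dependency_health_score` are all assumptions here:

```shell
# Hypothetical scorer built from the deltas listed above (base 100 assumed).
dependency_health_score() {
  deprecated=$1; unmaintained=$2; critical_vulns=$3; compatible_available=$4
  score=$((100 - deprecated * 10 - unmaintained * 5 - critical_vulns * 20))
  if [ "$compatible_available" -eq 1 ]; then score=$((score + 10)); fi
  if [ "$score" -gt 100 ]; then score=100; fi
  if [ "$score" -lt 0 ]; then score=0; fi
  echo "$score"
}

# 5 deprecated, 2 unmaintained, 1 CRITICAL vuln, compatible versions exist
dependency_health_score 5 2 1 1
```

For example, 5 deprecated packages, 2 unmaintained, and 1 CRITICAL vulnerability (with compatible target versions available) yields 30 — squarely in the "dependency hell" band.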
#### 2.3 Code Compatibility
**Breaking Changes Analysis**:
- [ ] Enumerate breaking changes between current → target
- [ ] Estimate affected code percentage
- [ ] Identify obsolete APIs in use
- [ ] Check for deprecated patterns
**Complexity Factors**:
- Using reflection heavily (+complexity)
- Platform-specific code (+complexity)
- Custom serialization (+complexity)
- Heavy use of deprecated APIs (+complexity)
**Scoring**:
- **80-100**: <10% code affected, automated migration possible
- **60-79**: 10-30% affected, mostly straightforward changes
- **40-59**: 30-60% affected, significant manual work
- **0-39**: >60% affected, near-rewrite required
---
### Step 3: Business Value Assessment (30 minutes)
#### 3.1 Strategic Alignment
**Questions**:
- [ ] Is this project critical to business operations?
- [ ] Is this project actively developed/maintained?
- [ ] Are there plans for new features?
- [ ] Does current tech stack limit business capabilities?
- [ ] Is recruiting difficult for current tech stack?
**Scoring**:
- **80-100**: Critical system, active development, strategic importance
- **60-79**: Important system, moderate development
- **40-59**: Peripheral system, maintenance mode
- **0-39**: Legacy system, sunset planned
#### 3.2 Effort-Benefit Analysis
**Modernization Effort** (estimated timeline):
- Assessment & Planning: X days
- Security Remediation: Y days
- Framework Migration: Z days
- Testing & Validation: W days
- Documentation: V days
- **Total Effort**: XX days
**Benefits** (expected improvements):
- Security: Reduced vulnerability risk, compliance improvements
- Performance: Improved response times, reduced resource usage
- Developer productivity: Faster feature development, better tooling
- Maintenance: Reduced legacy issues, better long-term support
- Recruitment: Easier hiring with modern technology stack
**Value Assessment**:
- High value if effort investment provides substantial long-term benefits
- Consider strategic alignment and business criticality
- Factor in risk of continuing with outdated technology
**Scoring**:
- **80-100**: Significant benefits, manageable effort, clear value
- **60-79**: Good benefits justify moderate effort investment
- **40-59**: Limited benefits relative to effort required
- **0-39**: Benefits don't justify the effort investment
---
### Step 4: Risk Assessment (30 minutes)
#### 4.1 Technical Risks
**Identify and Rate Risks**:
| Risk | Likelihood | Impact | Severity | Mitigation |
|------|------------|--------|----------|------------|
| Breaking changes break critical features | High | High | CRITICAL | Comprehensive testing |
| Dependency conflicts unresolvable | Medium | High | HIGH | Dependency analysis upfront |
| Performance regression | Low | Medium | MEDIUM | Baseline + benchmark |
| Team lacks expertise | High | Medium | HIGH | Training + consultants |
| Timeline overruns | Medium | Medium | MEDIUM | Phased approach |
**Risk Levels**:
- **CRITICAL**: >2 CRITICAL risks → **DO NOT PROCEED** with modernization
- **HIGH**: >5 HIGH risks → Requires mitigation plan
- **MEDIUM**: Manageable with standard practices
- **LOW**: Minimal risk
#### 4.2 Business Risks
- [ ] Business disruption during migration
- [ ] Data loss or corruption risk
- [ ] Customer impact (downtime, features unavailable)
- [ ] Competitive disadvantage if delayed
- [ ] Opportunity cost (alternatives)
**Overall Risk Profile**:
- **LOW**: <2 HIGH risks, 0 CRITICAL, clear mitigations
- **MEDIUM**: 2-5 HIGH risks, good mitigations
- **HIGH**: >5 HIGH risks or 1-2 CRITICAL risks
- **CRITICAL**: >2 CRITICAL risks → **DO NOT PROCEED**
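The overall-profile thresholds above can be expressed as a small decision helper; the function name and argument order are illustrative only:

```shell
# Maps counts of HIGH and CRITICAL risks to the overall risk profile
# defined in the list above.
risk_profile() {
  high=$1; critical=$2
  if [ "$critical" -gt 2 ]; then echo CRITICAL       # >2 CRITICAL: do not proceed
  elif [ "$critical" -ge 1 ] || [ "$high" -gt 5 ]; then echo HIGH
  elif [ "$high" -ge 2 ]; then echo MEDIUM           # 2-5 HIGH with mitigations
  else echo LOW                                      # <2 HIGH, 0 CRITICAL
  fi
}

risk_profile 3 0   # 3 HIGH risks, no CRITICAL
```

Three HIGH risks and no CRITICAL risks land in the MEDIUM band, which is manageable with good mitigations.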
---
### Step 5: Resource Assessment (20 minutes)
#### 5.1 Team Capacity
**Current Team**:
- Number of developers: X
- Framework expertise level: Beginner/Intermediate/Expert
- Availability: X% (accounting for ongoing work)
- Training needs: X days
**Required Skills**:
- [ ] Target framework expertise
- [ ] Migration tooling knowledge
- [ ] Testing framework familiarity
- [ ] DevOps/deployment skills
**Gap Analysis**:
- Skills we have: [list]
- Skills we need: [list]
- Training required: X days
- Contractor/consultant needs: Y days
#### 5.2 Timeline Estimation
**Conservative Estimate** (based on project size):
| Project Size | Discovery | Security | Framework | API/Code | Testing | Docs | Total |
|--------------|-----------|----------|-----------|----------|---------|------|-------|
| Small (<10k LOC) | 1-2d | 2-5d | 5-10d | 3-7d | 2-4d | 2-3d | 15-31d |
| Medium (10-50k) | 2-3d | 3-7d | 10-20d | 7-14d | 4-8d | 3-5d | 29-57d |
| Large (>50k) | 3-5d | 5-10d | 20-40d | 14-28d | 8-16d | 5-10d | 55-109d |
**Buffer**: Add 30% contingency for unknowns
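Applying the 30% buffer is simple arithmetic; a minimal sketch, assuming whole days rounded up (the helper name is illustrative):

```shell
# Adds the 30% contingency buffer to a raw estimate in days, rounding up.
with_contingency() {
  raw=$1
  echo $(( raw + (raw * 30 + 99) / 100 ))
}

with_contingency 29   # medium-project low bound from the table above
```

A 29-day raw estimate becomes 38 days with contingency; the 55-day large-project low bound becomes 72.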
---
### Step 6: Code Quality Analysis (30 minutes)
#### 6.1 Architecture Assessment
**Evaluate Architecture**:
- [ ] Architecture pattern (monolith, microservices, layered, etc.)
- [ ] Separation of concerns (good/fair/poor)
- [ ] Coupling level (loose/moderate/tight)
- [ ] Design pattern usage (appropriate/mixed/absent)
- [ ] Technical debt level (low/medium/high)
**Modernization Compatibility**:
- ✅ **Good**: Well-architected, loose coupling → Easy migration
- ⚠️ **Fair**: Some technical debt → Moderate difficulty
- ❌ **Poor**: High coupling, poor separation → Difficult migration
**Scoring**:
- **80-100**: Clean architecture, low debt
- **60-79**: Some debt, but manageable
- **40-59**: Significant debt, refactoring needed
- **0-39**: Architecture rewrite recommended
#### 6.2 Code Quality Metrics
**Measure**:
```bash
# Example tools (suggestions only; choose per stack):
# .NET coverage:       dotnet test --collect:"XPlat Code Coverage"
# JS/TS duplication:   npx jscpd .
# Python complexity:   radon cc .
# Multi-language:      SonarQube for complexity/duplication dashboards
```
**Quality Indicators**:
- [ ] Average cyclomatic complexity: X (target <10)
- [ ] Code duplication: X% (target <5%)
- [ ] Average method lines: X (target <50)
- [ ] Test coverage: X% (target >80%)
**Scoring**:
- **80-100**: High quality, minimal refactoring needed
- **60-79**: Decent quality, some improvements needed
- **40-59**: Poor quality, significant refactoring required
- **0-39**: Very poor quality, rewrite consideration
---
### Step 7: Test Coverage & Stability (20 minutes)
#### 7.1 Existing Test Suite
**Analyze Tests**:
- [ ] Unit test count: X
- [ ] Integration test count: Y
- [ ] E2E test count: Z
- [ ] Test coverage: X%
- [ ] Test pass rate: Y%
- [ ] Test execution time: Z minutes
**Test Quality**:
- [ ] Tests are maintained and passing
- [ ] Tests cover critical paths
- [ ] Tests are not brittle/flaky
- [ ] Test infrastructure is documented
**Scoring**:
- **80-100**: >80% coverage, 100% pass rate, comprehensive suite
- **60-79**: 60-80% coverage, 100% pass rate
- **40-59**: 40-60% coverage, 100% pass rate
- **0-39**: <40% coverage or <100% pass rate
#### 7.2 Production Stability
**Metrics** (last 6 months):
- [ ] Production incidents: X
- [ ] Critical bugs: Y
- [ ] Uptime percentage: Z%
- [ ] Performance issues: W
**Scoring**:
- **80-100**: Stable system, rare incidents
- **60-79**: Generally stable, occasional issues
- **40-59**: Frequent issues, stability concerns
- **0-39**: Unstable, modernization may be too risky
---
### Step 8: Security Assessment (30 minutes)
#### 8.1 Vulnerability Scan
**Run Security Scan**:
```bash
# .NET: dotnet list package --vulnerable
# Node: npm audit
# Python: pip-audit
# Or use: Snyk, OWASP Dependency Check
```
**Categorize Vulnerabilities**:
- CRITICAL: X vulnerabilities
- HIGH: Y vulnerabilities
- MEDIUM: Z vulnerabilities
- LOW: W vulnerabilities
**Security Score**: `100 - (CRITICAL×20 + HIGH×10 + MEDIUM×5 + LOW×1)`, floored at 0
**Scoring**:
- **80-100**: Minimal vulnerabilities, easy fixes
- **60-79**: Some vulnerabilities, manageable
- **40-59**: Many vulnerabilities, significant work
- **0-39**: Critical security issues, URGENT modernization
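The security-score formula above is easy to automate once the scan output is categorized. A minimal sketch, assuming the formula is floored at 0 and the function name is hypothetical:

```shell
# Computes the security score from categorized vulnerability counts,
# using the formula above: 100 - (CRITICAL*20 + HIGH*10 + MEDIUM*5 + LOW*1).
security_score() {
  critical=$1; high=$2; medium=$3; low=$4
  score=$((100 - (critical * 20 + high * 10 + medium * 5 + low)))
  if [ "$score" -lt 0 ]; then score=0; fi
  echo "$score"
}

security_score 1 2 4 6   # 1 CRITICAL, 2 HIGH, 4 MEDIUM, 6 LOW
```

That example scores 34, which falls in the "critical security issues, URGENT modernization" band.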
#### 8.2 Security Posture
**Evaluate**:
- [ ] Authentication/authorization modern?
- [ ] Encryption standards current?
- [ ] Secrets management proper?
- [ ] Security headers implemented?
- [ ] Input validation comprehensive?
---
## Assessment Report Generation
### ASSESSMENT.md Template
```markdown
# Project Modernization Assessment
**Project**: [Name]
**Current Version**: [Version]
**Target Version**: [Version]
**Assessment Date**: [Date]
**Assessor**: [Name/Team]
---
## Executive Summary
**Recommendation**: ✅ PROCEED / ⚠️ PROCEED WITH CAUTION / ❌ DEFER / 🛑 DO NOT PROCEED
**Overall Score**: XX/100 (Excellent/Good/Fair/Poor)
**Key Findings**:
- [Finding 1]
- [Finding 2]
- [Finding 3]
**Estimated Effort**: XX-YY days (ZZ-WW calendar weeks)
---
## 1. Technical Viability: XX/100
### Framework Analysis
- **Current**: [Framework] [Version] (EOL: [Date])
- **Target**: [Framework] [Version]
- **Migration Path**: Clear/Documented/Unclear
- **Breaking Changes**: XX identified
- **Score**: XX/100
**Assessment**: [Detailed explanation]
### Dependency Health
- **Total Dependencies**: XX
- **Deprecated**: X (XX%)
- **Unmaintained**: Y (YY%)
- **Security Issues**: Z (CRITICAL: A, HIGH: B)
- **Score**: XX/100
**Red Flags**:
- [Flag 1 if any]
- [Flag 2 if any]
### Code Compatibility
- **Affected Code**: ~XX% (estimated)
- **Obsolete APIs**: X usages found
- **Platform-Specific Code**: Y instances
- **Score**: XX/100
**Major Challenges**:
1. [Challenge 1]
2. [Challenge 2]
---
## 2. Business Value: XX/100
### Strategic Alignment
- **Business Criticality**: Critical/Important/Peripheral
- **Development Status**: Active/Maintenance/Sunset
- **Strategic Value**: High/Medium/Low
- **Score**: XX/100
### Effort-Benefit Analysis
**Effort Estimate**:
- Assessment & Planning: X days
- Security Remediation: Y days
- Framework Migration: Z days
- Testing & Validation: W days
- Documentation: V days
- Contingency (30%): U days
- **Total Effort**: **XX days**
**Expected Benefits**:
- Security: Reduced vulnerability risk, improved compliance
- Performance: Better response times, optimized resource usage
- Developer Productivity: Faster feature development, modern tooling
- Maintenance: Reduced legacy issues, improved long-term support
- Recruitment/Retention: Easier hiring with modern stack
**Value Assessment**:
- Significant benefits justify the effort investment
- Strategic importance aligns with business goals
- Long-term value outweighs short-term effort
- **Score**: XX/100
---
## 3. Risk Assessment: LOW/MEDIUM/HIGH/CRITICAL
### Technical Risks
| Risk | Likelihood | Impact | Severity | Mitigation |
|------|------------|--------|----------|------------|
| [Risk 1] | High/Med/Low | High/Med/Low | CRITICAL/HIGH/MEDIUM/LOW | [Strategy] |
| [Risk 2] | ... | ... | ... | [...] |
**Critical Risks** (0):
- [None] OR [List critical risks]
**High Risks** (X):
1. [Risk 1]
2. [Risk 2]
**Risk Mitigation Plan**:
- [Strategy 1]
- [Strategy 2]
### Business Risks
- [Risk 1]: [Assessment and mitigation]
- [Risk 2]: [Assessment and mitigation]
**Overall Risk Profile**: **LOW/MEDIUM/HIGH/CRITICAL**
---
## 4. Resource Requirements
### Team Capacity
- **Current Team Size**: X developers
- **Target Framework Expertise**: Expert/Intermediate/Beginner
- **Availability**: XX% (after ongoing work)
- **Skills Gap**: [List gaps]
### Training Needs
- [Skill 1]: X days
- [Skill 2]: Y days
- **Total Training**: Z days
### External Resources
- **Consultants Needed**: Yes/No
- **Specialized Skills**: [List if any]
- **Additional Effort**: X days (if consultants needed)
### Timeline
**Estimated Duration**: XX-YY weeks (ZZ-WW calendar months)
**Breakdown**:
- Phase 0 (Discovery): X-Y days
- Phase 1 (Security): X-Y days
- Phase 2 (Architecture): X-Y days
- Phase 3 (Framework): X-Y days
- Phase 4 (API Modernization): X-Y days
- Phase 5 (Performance): X-Y days
- Phase 6 (Documentation): X-Y days
- Phase 7 (Validation): X-Y days
- Contingency (30%): X-Y days
---
## 5. Code Quality: XX/100
### Architecture
- **Pattern**: [Monolith/Microservices/Layered/etc.]
- **Separation of Concerns**: Good/Fair/Poor
- **Coupling**: Loose/Moderate/Tight
- **Technical Debt**: Low/Medium/High
- **Score**: XX/100
**Analysis**: [Detailed assessment]
### Code Metrics
- **Cyclomatic Complexity**: X (target <10)
- **Code Duplication**: XX% (target <5%)
- **Average Method Size**: XX lines (target <50)
- **Score**: XX/100
---
## 6. Test Coverage: XX/100
### Test Suite Analysis
- **Unit Tests**: X tests
- **Integration Tests**: Y tests
- **E2E Tests**: Z tests
- **Coverage**: XX%
- **Pass Rate**: YY%
- **Execution Time**: Z minutes
**Assessment**: [Quality evaluation]
### Production Stability
- **Uptime**: XX.X%
- **Incidents (6mo)**: X
- **Critical Bugs**: Y
- **Performance Issues**: Z
**Score**: XX/100
---
## 7. Security Posture: XX/100
### Vulnerability Scan
- **CRITICAL**: X vulnerabilities
- **HIGH**: Y vulnerabilities
- **MEDIUM**: Z vulnerabilities
- **LOW**: W vulnerabilities
- **Security Score**: XX/100
**Critical Issues**:
1. [CVE-XXXX-XXXX]: [Description]
2. [CVE-YYYY-YYYY]: [Description]
### Security Practices
- Authentication: Modern/Dated/Poor
- Encryption: Current/Dated/None
- Secrets Management: Good/Fair/Poor
- Input Validation: Comprehensive/Partial/Minimal
---
## 8. Dependencies & Ecosystem
### Framework Ecosystem
- **Community Health**: Active/Declining/Stagnant
- **LTS Support**: Available/Limited/None
- **Tool Support**: Excellent/Good/Fair/Poor
- **Documentation**: Comprehensive/Good/Limited/Poor
### Dependency Analysis
- **Total**: XX packages
- **Up-to-date**: X (XX%)
- **Outdated**: Y (YY%)
- **Deprecated**: Z (ZZ%)
- **Unmaintained**: W (WW%)
**Major Dependencies**:
| Package | Current | Latest | Status | Migration Path |
|---------|---------|--------|--------|----------------|
| [Name] | vX.X | vY.Y | OK/Deprecated/EOL | Easy/Moderate/Hard |
---
## Overall Assessment
### Scoring Summary
| Dimension | Score | Weight | Weighted |
|-----------|-------|--------|----------|
| Technical Viability | XX/100 | 25% | XX.X |
| Business Value | XX/100 | 20% | XX.X |
| Risk Profile | XX/100 | 15% | XX.X |
| Resources | XX/100 | 10% | XX.X |
| Code Quality | XX/100 | 10% | XX.X |
| Test Coverage | XX/100 | 10% | XX.X |
| Security | XX/100 | 10% | XX.X |
| **TOTAL** | **XX/100** | **100%** | **XX.X** |
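The weighted total follows directly from the table above. A sketch using the listed weights (25/20/15/10/10/10/10); the function name and argument order are assumptions:

```shell
# Computes the weighted total from the seven dimension scores, in table order:
# technical, business, risk, resources, quality, tests, security.
weighted_total() {
  echo "$1 $2 $3 $4 $5 $6 $7" | awk '{
    printf "%.1f\n", $1*0.25 + $2*0.20 + $3*0.15 + ($4+$5+$6+$7)*0.10
  }'
}

weighted_total 82 75 70 65 60 55 78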
### Recommendation Matrix
**Score Interpretation**:
- **80-100**: ✅ **PROCEED** - Strong candidate, low risk
- **60-79**: ⚠️ **PROCEED WITH CAUTION** - Good candidate, manageable risks
- **40-59**: ❌ **DEFER** - Weak candidate, high risk, reconsider after improvements
- **0-39**: 🛑 **DO NOT PROCEED** - Poor candidate, critical risks, not viable
**This Project**: **XX/100****[RECOMMENDATION]**
---
## Recommendation
### ✅ PROCEED (if 80-100)
**Rationale**: [Explanation of why this is a good candidate]
**Strengths**:
- [Strength 1]
- [Strength 2]
- [Strength 3]
**Recommended Approach**: [Suggested strategy]
**Next Steps**:
1. Run `/modernize-plan` to create detailed migration plan
2. Secure resource approval (X developers, Y weeks)
3. Allocate team resources and timeline
4. Begin Phase 0 on [Date]
---
### ⚠️ PROCEED WITH CAUTION (if 60-79)
**Rationale**: [Explanation of viable but challenging project]
**Strengths**:
- [Strength 1]
- [Strength 2]
**Concerns**:
- [Concern 1]
- [Concern 2]
**Conditional Approval**: Proceed if:
1. [Condition 1]
2. [Condition 2]
3. [Condition 3]
**Risk Mitigation Required**:
- [Mitigation 1]
- [Mitigation 2]
**Next Steps**:
1. Address critical concerns listed above
2. Develop detailed risk mitigation plan
3. Run `/modernize-plan` with extra contingency
4. Executive sign-off required
---
### ❌ DEFER (if 40-59)
**Rationale**: [Explanation of why deferral is recommended]
**Critical Issues**:
1. [Issue 1]
2. [Issue 2]
3. [Issue 3]
**Recommended Actions Before Reconsidering**:
1. [Action 1]
2. [Action 2]
3. [Action 3]
**Re-Assessment Date**: [X months from now]
**Alternatives to Consider**:
- Maintain current version with security patches only
- Incremental improvements without full modernization
- Gradual replacement with new system
- [Other alternatives]
---
### 🛑 DO NOT PROCEED (if 0-39)
**Rationale**: [Explanation of why modernization is not viable]
**Blocking Issues**:
1. [Critical blocker 1]
2. [Critical blocker 2]
3. [Critical blocker 3]
**Recommended Alternatives**:
- **Rewrite from scratch**: [Justification]
- **Replace with COTS**: [Alternative solutions]
- **Sunset the system**: [Migration plan to replacement]
- **Maintain as-is**: [Minimal maintenance strategy]
**Decision**: Modernization is **NOT RECOMMENDED** for this project.
---
## Appendices
### A. Detailed Dependency List
[Full list of dependencies with versions and status]
### B. Breaking Changes Enumeration
[List of all breaking changes from current → target]
### C. Risk Register
[Complete risk register with all identified risks]
### D. Cost Breakdown
[Detailed cost estimation by phase and resource]
---
**Assessment Completed**: [Date]
**Valid Until**: [Date + 6 months]
**Next Review**: [Date + 3 months]
```
---
## Usage Examples
### Example 1: .NET 6 → .NET 9 Assessment
```
User: /modernize-assess
User: Assess my .NET 6 e-commerce platform for upgrading to .NET 9
Claude: I'll conduct a comprehensive modernization assessment...
[Step 1: Discovery]
- Project: E-Commerce Platform
- Current: .NET 6.0
- Target: .NET 9.0
- LOC: 47,500
- Dependencies: 83 packages
- Age: 4 years
- Active: Yes
[Step 2: Technical Viability]
Framework Analysis:
- .NET 6 EOL: Nov 2024 (6 months!)
- .NET 9 LTS: Current, well-supported
- Breaking changes: 23 identified
- Migration path: Clear ✅
Score: 85/100
Dependency Health:
- 83 packages analyzed
- 5 deprecated (6%)
- 2 unmaintained (2%)
- 3 security issues (1 HIGH)
Score: 78/100
Code Compatibility:
- ~15% code affected
- 12 obsolete API usages
- Mostly straightforward changes
Score: 82/100
**Technical Viability: 82/100** ✅
[Continues through all 8 dimensions...]
[Final Assessment]
**Overall Score: 78/100**
**Recommendation: ⚠️ PROCEED WITH CAUTION**
Writing ASSESSMENT.md...
✅ Assessment complete: ASSESSMENT.md created
```
### Example 2: Node.js 14 → 22 Assessment
```
User: /modernize-assess
User: We have a Node.js 14 API that needs evaluation for upgrading to Node.js 22
Claude: [Full assessment process]
[Final Result]
**Overall Score: 62/100**
**Recommendation: ⚠️ PROCEED WITH CAUTION**
Key Findings:
- Node 14 is EOL (critical security risk)
- 47 npm packages have vulnerabilities
- Test coverage only 34% (major concern)
- BUT: Clean architecture, good ROI
Conditional Approval:
1. Increase test coverage to >60% first
2. Address CRITICAL CVEs immediately
3. Allocate 2 months for migration
✅ ASSESSMENT.md created
```
### Example 3: Legacy System - Not Recommended
```
User: /modernize-assess
User: Evaluate our 15-year-old Java 6 monolith for modernization
Claude: [Assessment process]
[Final Result]
**Overall Score: 34/100**
**Recommendation: 🛑 DO NOT PROCEED**
Critical Blockers:
1. Java 6 → 17/21: Too many breaking changes (>80% code affected)
2. 67% of dependencies unmaintained with no alternatives
3. No test coverage (0%)
4. Tight coupling throughout
5. ROI negative for modernization
Recommended Alternative:
**Strangler Fig Pattern** - Gradually replace with microservices
- Build new features in modern stack
- Migrate high-value modules incrementally
- Retire monolith over 3 years
❌ Modernization not recommended
✅ ASSESSMENT.md created with alternatives
```
---
## Decision Criteria
### Proceed (80-100)
- Strong technical path
- Good ROI
- Low-medium risk
- Team capable
- Business value clear
### Proceed with Caution (60-79)
- Viable technical path
- Acceptable ROI
- Medium risk with mitigations
- Team can learn/adapt
- Business value justified
### Defer (40-59)
- Challenging technical path
- Marginal ROI
- High risk
- Team capacity concerns
- Unclear business value
### Do Not Proceed (0-39)
- No clear technical path
- Negative or very poor ROI
- Critical risks
- Team lacking capabilities
- No business justification
---
## Best Practices
1. **Be Objective** - Don't let sunk costs bias the assessment
2. **Be Conservative** - Underestimate benefits, overestimate costs
3. **Be Thorough** - Don't skip dimensions
4. **Be Honest** - Better to defer now than fail midway
5. **Document Everything** - The ASSESSMENT.md is your evidence
---
## Anti-Patterns
**Rubber-stamp approval** - "We already decided, just assess"
**Ignoring red flags** - "We'll figure it out during migration"
**Skipping ROI** - "We just need to modernize"
**Unrealistic estimates** - "It'll only take 2 weeks"
**Ignoring team capacity** - "We'll find time somehow"
---
**Document Owner**: Migration Coordinator
**Protocol Version**: 1.0
**Last Updated**: 2025-10-25
**Required Before**: Running `/modernize-plan` or `/modernize-project`
**Remember**: **Not all projects should be modernized. Assessment prevents costly mistakes.**

867
commands/modernize.md Normal file
View File

@@ -0,0 +1,867 @@
# Modernize Command
**Description**: Orchestrate a team of specialist agents to upgrade a project to be modern, secure, well-tested, and performant
---
# Project Modernization & Security Protocol
**Version**: 2.0
**Purpose**: Coordinate multiple specialist agents to systematically upgrade any software project
**Team**: Migration Coordinator, Security Agent, Architect Agent, Coder Agent, Tester Agent, Documentation Agent
**Inputs**: Optional `ASSESSMENT.md` and `PLAN.md` from `/modernize-assess` and `/modernize-plan`
---
## Prerequisites Check
**Before starting, this command checks for**:
```bash
# Check for assessment
if [ -f "ASSESSMENT.md" ]; then
echo "✅ Found ASSESSMENT.md - will use assessment findings"
USE_ASSESSMENT=true
else
echo "⚠️ No ASSESSMENT.md - recommend running /modernize-assess first"
echo " Continue with basic assessment? (y/n)"
USE_ASSESSMENT=false
fi
# Check for plan
if [ -f "PLAN.md" ]; then
echo "✅ Found PLAN.md - will follow existing plan"
USE_PLAN=true
else
echo "⚠️ No PLAN.md - will create plan on-the-fly"
echo " Recommend running /modernize-plan first for better accuracy"
USE_PLAN=false
fi
```
**Recommendation Workflow**:
1. **Best**: Run `/modernize-assess``/modernize-plan``/modernize-project`
2. **Good**: Run `/modernize-plan``/modernize-project`
3. **Acceptable**: Run `/modernize-project` (will create minimal assessment/plan inline)
---
## Overview
This protocol orchestrates a **multi-agent team** to modernize and secure your project through a systematic, phased approach. The team works in coordination to ensure:
- ✅ Modern frameworks and dependencies
- ✅ Security vulnerabilities eliminated
- ✅ Comprehensive test coverage (≥95%)
- ✅ Performance optimization
- ✅ Complete documentation
- ✅ Production-ready quality
**Core Principle**: **Systematic, agent-coordinated modernization with quality gates at every stage.**
---
## Agent Team Roles
### 1. **Migration Coordinator** (Orchestrator)
- **Role**: Strategic oversight and coordination
- **Responsibilities**: Plan phases, coordinate agents, enforce quality gates, track progress
- **When active**: Throughout entire project
### 2. **Security Agent** (Blocker)
- **Role**: Vulnerability assessment and remediation
- **Responsibilities**: Scan CVEs, calculate security score, prioritize fixes
- **When active**: Phase 1 (blocks all progress until CRITICAL/HIGH resolved)
### 3. **Architect Agent** (Decision Maker)
- **Role**: Technology research and architectural decisions
- **Responsibilities**: Research alternatives, create ADRs, recommend patterns
- **When active**: Phases 1-2 (planning and design)
### 4. **Coder Agent** (Implementation)
- **Role**: Code migration and modernization
- **Responsibilities**: Update frameworks, replace obsolete APIs, fix builds
- **When active**: Phases 3-4 (can run multiple in parallel)
### 5. **Tester Agent** (Quality Gate)
- **Role**: Comprehensive testing and validation
- **Responsibilities**: Run all test phases, fix-and-retest cycles, enforce 100% pass rate
- **When active**: After every code change (blocks progression)
### 6. **Documentation Agent** (Knowledge Management)
- **Role**: Documentation creation and maintenance
- **Responsibilities**: HISTORY.md, ADRs, migration guides, changelogs
- **When active**: Continuous throughout, final comprehensive docs at end
---
## Modernization Phases
### Phase 0: Discovery & Assessment (1-2 days)
**Active Agents**: Migration Coordinator, Security Agent, Architect Agent
**Input Handling**:
```
IF ASSESSMENT.md EXISTS:
✅ Skip detailed assessment
✅ Use existing scores, risks, estimates
✅ Focus on validation and updates
Duration: 0.5-1 day (validation only)
ELSE:
⚠️ Run full assessment (as described below)
Duration: 1-2 days
```
**Activities**:
**⚠️ CRITICAL: Test Environment Setup MUST Be First Task**
1. **Test Environment Setup** (MANDATORY FIRST - NEW per Recommendation 1)
- **Why First**: Cannot validate anything without working build/test environment
- Install required SDKs (.NET, Node.js, Python, etc.)
- Install Docker for integration testing
- Configure environment variables
- **Verify build succeeds**: `dotnet build` (establish baseline)
- **Verify tests run**: `dotnet test` (establish pass rate baseline)
- **Run vulnerability scan**: `dotnet list package --vulnerable --include-transitive`
- Document baseline metrics: test count, pass rate, build warnings, CVE count
- **Deliverable**: Working environment with verified baseline metrics
- **Duration**: 1-2 hours (BLOCKING - nothing else can proceed without this)
2. **Security Baseline** (BLOCKING - Now uses verified scan from Task 1)
- **Use vulnerability scan from Task 1** (already executed)
- Calculate security score (0-100): `100 - (CRITICAL×10 + HIGH×5 + MEDIUM×2 + LOW×0.5)`
- Categorize vulnerabilities (CRITICAL/HIGH/MEDIUM/LOW)
- Document top 10 CVEs in ASSESSMENT.md
- **BLOCK**: Must have scan results before proceeding (scores are verified, not estimated)
3. **Project Analysis**
- **If ASSESSMENT.md exists**: ✅ Load existing inventory
- **If no assessment**: Inventory all dependencies and frameworks
- Identify current versions vs latest stable (using actual package resolution from Task 1 build)
- Map project structure and architecture
- Identify technology debt
- **Deliverable**: Project assessment (or validate existing)
4. **Technology Assessment**
- Research latest framework versions
- Identify obsolete APIs and patterns
- Document breaking changes
- Create upgrade roadmap
5. **Test Baseline Analysis**
- **Use test results from Task 1** (already executed)
- Capture baseline metrics (pass rate, coverage, performance)
- Document current test infrastructure
- Identify test gaps
**Outputs**:
- Project assessment report (or use existing ASSESSMENT.md)
- Security vulnerability report
- Technology upgrade roadmap
- Test baseline report
- Initial HISTORY.md entry
**With Existing Assessment**:
- ✅ Validate assessment still accurate (dependencies haven't changed)
- ✅ Update if needed (typically minimal)
- ✅ Faster completion (0.5-1 day vs 1-2 days)
**Quality Gate** (UPDATED per Recommendation 1 & 3):
- ✅ Test environment ready (build succeeds, tests run)
- ✅ Baseline test metrics documented (pass rate, count, coverage)
- ✅ Vulnerability scan completed (verified CVE counts, not estimates)
- ✅ Security score calculated from scan results (≥45 required)
- ✅ All CRITICAL/HIGH vulnerabilities documented
- ✅ Docker/external dependencies ready for integration tests
---
### Phase 1: Security Remediation (2-5 days)
**Active Agents**: Security Agent (lead), Coder Agent, Tester Agent
**Activities**:
1. **Fix Critical Vulnerabilities** (P0)
- Update packages with CRITICAL CVEs
- Fix security misconfigurations
- Remove deprecated/insecure code
- Verify fixes with security scans
2. **Fix High-Priority Vulnerabilities** (P1)
- Update packages with HIGH CVEs
- Apply security patches
- Implement security best practices
3. **Post-Update Security Validation** (BLOCKING - NEW per Recommendation 3)
- **Re-run vulnerability scan**: `dotnet list package --vulnerable --include-transitive > security-after-phase1.txt`
- **Compare before/after**: `diff security-baseline.txt security-after-phase1.txt`
- **Verify CRITICAL/HIGH count decreased** (not just assumed)
- **Verify no NEW vulnerabilities introduced** by updates
- Recalculate security score from scan results (must show improvement)
- **Run all tests** to ensure no regressions (Tier 1: Unit tests minimum)
- Update HISTORY.md with verified security improvements
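The compare-and-verify step above can be sketched as a small script. This is a sketch only: the two file names come from the protocol, but the severity-column line format is a simplified stand-in modeled on `dotnet list package --vulnerable` output, and the sample file contents here are hypothetical.

```shell
#!/bin/sh
# Sketch: block progression unless CRITICAL/HIGH counts went down.
# In practice these two files are produced by the scanner before and
# after the update; the contents below are stand-in sample data.
cat > security-baseline.txt <<'EOF'
> PackageA 1.0.0 Critical CVE-0000-0001
> PackageB 2.0.0 High     CVE-0000-0002
> PackageC 3.0.0 High     CVE-0000-0003
EOF
cat > security-after-phase1.txt <<'EOF'
> PackageC 3.1.0 High     CVE-0000-0003
EOF

# grep -c prints 0 (and exits nonzero) when nothing matches
count() { grep -c "$1" "$2"; }

before_crit=$(count Critical security-baseline.txt || true)
after_crit=$(count Critical security-after-phase1.txt || true)
before_high=$(count High security-baseline.txt || true)
after_high=$(count High security-after-phase1.txt || true)

echo "CRITICAL: $before_crit -> $after_crit"
echo "HIGH:     $before_high -> $after_high"

if [ "$after_crit" -gt "$before_crit" ] || [ "$after_high" -gt "$before_high" ]; then
  echo "FAIL: vulnerability counts increased - quality gate blocked"
  exit 1
fi
echo "PASS: no increase in CRITICAL/HIGH findings"
```

The nonzero exit on regression lets the same check gate a CI job.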
**Outputs**:
- Security fixes applied
- Updated security scan report
- Test results (must maintain 100% pass rate)
- HISTORY.md entry for security work
**Quality Gate** (UPDATED per Recommendation 3):
- ✅ Security scan re-run and results verified (not estimated)
- ✅ Security score ≥45 (calculated from verified scan results)
- ✅ Zero CRITICAL vulnerabilities (verified in scan diff)
- ✅ Zero HIGH vulnerabilities (verified in scan diff, or documented with explicit approval)
- ✅ No NEW vulnerabilities introduced by updates
- ✅ All tests passing (100% - Tier 1 Unit tests minimum)
---
### Phase 2: Architecture & Design (2-3 days)
**Active Agents**: Architect Agent (lead), Migration Coordinator
**Input Handling**:
```
IF PLAN.md EXISTS:
✅ Use existing architecture decisions
✅ Validate ADRs are current
✅ Follow defined strategy
Duration: 0.5-1 day (validation only)
ELSE:
⚠️ Create architecture decisions (as described below)
Duration: 2-3 days
```
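The branch above can be expressed as a plain shell check (a sketch; the durations echoed are the ones stated in the text, not computed):

```shell
#!/bin/sh
# Sketch: choose the Phase 2 path based on whether PLAN.md exists.
if [ -f "PLAN.md" ]; then
  echo "PLAN.md found: validate existing ADRs and strategy (0.5-1 day)"
else
  echo "no PLAN.md: create architecture decisions from scratch (2-3 days)"
fi
```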
**Activities**:
**⚠️ NEW: Spike-Driven ADR Process for High-Risk Decisions (per Recommendation 2)**
1. **Framework Upgrade Planning**
- Research target framework versions
- Evaluate migration paths
- **For high-risk decisions**: Create spike branches (1-2 days)
- Test Option A on single project
- Test Option B on single project
- Document actual compilation errors, API changes, test failures
- Create evaluation matrix with empirical data from spikes
- Document breaking changes
- Create ADRs with status "proposed" first (allow 24-48hr review)
- Mark ADRs as "accepted" after stakeholder review
2. **Dependency Strategy**
- Identify dependency upgrade order
- **For major version changes**: Create spike branches
- Example: RabbitMQ.Client 5→6 vs 5→7
- Run tests on each spike, document pass rates
- Compare actual migration effort (file count, errors)
- Map dependency conflicts (using spike results)
- Plan parallel vs sequential updates
- Create dependency upgrade matrix with verified estimates
3. **Architecture Decisions**
- Obsolete pattern replacements
- New feature approaches
- Performance optimization strategies
- Testing strategy updates
- **All ADRs must include**:
- Evaluation matrix with weighted criteria
- Spike results (for high-risk decisions)
- 24-48hr review period before "accepted" status
**Outputs**:
- ADRs for all major decisions (MADR 3.0.0 format) - or use existing from PLAN.md
- Dependency upgrade matrix
- Migration timeline
- Risk assessment
- HISTORY.md entry
**With Existing Plan**:
- ✅ ADRs already created and approved
- ✅ Migration strategy defined
- ✅ Just validate and proceed
- ✅ Faster completion (0.5-1 day vs 2-3 days)
**Quality Gate**: All major decisions documented in ADRs, migration plan approved
---
### Phase 3: Framework & Dependency Modernization (5-10 days)
**Active Agents**: Coder Agent (multiple if parallel), Tester Agent, Migration Coordinator
**Input Handling**:
```
IF PLAN.md EXISTS:
✅ Use defined module migration order
✅ Follow parallel execution strategy
✅ Use task breakdown from plan
More accurate timeline
ELSE:
⚠️ Determine migration order on-the-fly
More conservative timeline
```
**Activities**:
1. **Framework Upgrade**
- Update to target framework version
- Fix compilation errors
- Update project files
- Resolve API changes
2. **Dependency Updates**
- Update dependencies in priority order
- Resolve version conflicts
- Update package references
- Fix breaking changes
3. **Continuous Testing** (BLOCKING)
- Run tests after each change
- Fix-and-retest cycles
- Maintain 100% pass rate
- No progression until tests pass
**Parallel Execution Strategy**:
```
Migration Coordinator
├─ Coder Agent #1 (Module A) → Tester Agent validates
├─ Coder Agent #2 (Module B) → Tester Agent validates
└─ Coder Agent #3 (Module C) → Tester Agent validates
```
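The fan-out above maps directly onto shell job control. A sketch, where the module names are placeholders and `migrate_module` stands in for a real build, migrate, and test cycle:

```shell
#!/bin/sh
# Sketch: one Coder/Tester pair per module; the coordinator waits on all.
migrate_module() {
  # a real implementation would build the module, apply the
  # framework/dependency changes, and run its tests here
  echo "$1: migrated and validated"
}

migrate_module "ModuleA" &
migrate_module "ModuleB" &
migrate_module "ModuleC" &
wait   # Migration Coordinator: block until every module finishes
echo "all modules integrated"
```

Each background job corresponds to one Coder Agent lane; `wait` is the integration point before the quality gate.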
**Outputs**:
- Updated framework versions
- Updated dependencies
- Fixed compilation errors
- Test results (100% pass rate)
- HISTORY.md entries for each module
**Quality Gate**:
- All projects build successfully
- 100% test pass rate (MANDATORY)
- No P0/P1 issues
- Code coverage ≥80%
---
### Phase 4: API Modernization & Code Quality (3-7 days)
**Active Agents**: Coder Agent, Tester Agent, Architect Agent
**Activities**:
1. **Replace Obsolete APIs**
- Identify deprecated API usage
- Replace with modern equivalents
- Update code patterns
- Verify functionality
2. **Code Quality Improvements**
- Apply modern language features
- Remove code smells
- Improve error handling
- Optimize performance hotspots
3. **Test Enhancement**
- Add missing test coverage
- Update test patterns
- Add integration tests
- Performance benchmarks
**Outputs**:
- Modernized codebase
- Improved code quality metrics
- Enhanced test suite
- Performance benchmarks
- HISTORY.md entries
**Quality Gate**:
- Zero obsolete API warnings
- Code coverage ≥85%
- 100% test pass rate
- No performance regressions (≤10%)
---
### Phase 5: Performance Optimization (2-4 days)
**Active Agents**: Coder Agent, Tester Agent, Architect Agent
**Activities**:
1. **Performance Profiling**
- Run performance benchmarks
- Identify bottlenecks
- Compare against baseline
- Document performance goals
2. **Optimization Implementation**
- Optimize critical paths
- Implement caching strategies
- Improve database queries
- Reduce memory allocations
3. **Validation**
- Re-run benchmarks
- Verify improvements
- Ensure no regressions
- Document performance gains
**Outputs**:
- Performance optimization report
- Benchmark comparisons
- Performance ADRs (if architectural changes)
- HISTORY.md entry
**Quality Gate**:
- Performance improvement ≥10% OR documented as optimal
- No performance regressions
- All tests passing
---
### Phase 6: Comprehensive Documentation (2-3 days)
**Active Agents**: Documentation Agent (lead), Migration Coordinator
**Activities**:
1. **CHANGELOG Creation**
- Document all breaking changes
- List new features
- Security fixes
- Performance improvements
- Migration notes
2. **Migration Guide**
- Step-by-step upgrade instructions
- Breaking change details
- Code examples (before/after)
- Troubleshooting guide
- FAQ section
3. **Final Documentation**
- Update README
- API documentation
- Architecture documentation
- Deployment guides
- Release notes
**Outputs**:
- CHANGELOG.md (Keep a Changelog format)
- MIGRATION-GUIDE.md (800+ lines, comprehensive)
- Updated README.md
- ADR summary
- Release notes
- Final HISTORY.md entry
**Quality Gate**: All documentation complete, reviewed, and accurate
---
### Phase 7: Final Validation & Release (1-2 days)
**Active Agents**: Tester Agent (lead), Security Agent, Migration Coordinator
**Activities**:
1. **Complete Test Suite**
- Run all test phases
- Unit tests (100% pass rate)
- Integration tests (100% pass rate)
- Component tests (100% pass rate)
- Performance tests (baseline validation)
- E2E tests (100% pass rate)
2. **Final Security Scan**
- Run comprehensive CVE scan
- Verify security score ≥45
- Document any remaining LOW/MEDIUM issues
- Get security approval
3. **Release Preparation**
- Tag release version
- Generate release notes
- Create deployment checklist
- Backup current production
**Outputs**:
- Final test report
- Final security report
- Release notes
- Deployment checklist
- Production readiness assessment
**Quality Gate** (GO/NO-GO Decision):
- ✅ Security score ≥45
- ✅ Zero CRITICAL/HIGH vulnerabilities
- ✅ 100% test pass rate (all test types)
- ✅ Code coverage ≥85%
- ✅ Zero P0/P1 issues
- ✅ All documentation complete
- ✅ Performance validated
---
## Using This Protocol
### Starting the Modernization
**Option 1: Full Team Orchestration**
```
User: /modernize-project
User: I have a .NET 6 project I need to upgrade to .NET 9, modernize dependencies, fix security issues, and improve test coverage.
Claude: I'll coordinate a team of specialist agents to modernize your project. Let me start with Phase 0: Discovery & Assessment...
[Migration Coordinator activates]
Phase 0: Discovery & Assessment
- Analyzing project structure...
- Running security scan...
- Creating upgrade roadmap...
[Security Agent activates]
Security Baseline:
- Current score: 23/100
- CRITICAL: 3 vulnerabilities
- HIGH: 12 vulnerabilities
⚠️ BLOCKING: Must fix CRITICAL/HIGH before proceeding
[Architect Agent activates]
Technology Assessment:
- Current: .NET 6.0
- Target: .NET 9.0
- Breaking changes identified: 47
- ADR needed for: [list of decisions]
...
```
**Option 2: Phase-by-Phase**
```
User: /modernize-project
User: Start with Phase 0 only - just assess my project
Claude: Running Phase 0: Discovery & Assessment...
[Detailed assessment output]
```
**Option 3: Specific Agent Focus**
```
User: /modernize-project
User: Focus on security remediation only (Phase 1)
Claude: [Security Agent + Coder Agent + Tester Agent activate]
```
### Monitoring Progress
The Migration Coordinator maintains a progress dashboard:
```markdown
## Modernization Progress
### Overall Status: Phase 3 - Framework Modernization (60% complete)
| Phase | Status | Duration | Quality Gate |
|-------|--------|----------|--------------|
| 0. Discovery | ✅ Complete | 1.5 days | ✅ Passed |
| 1. Security | ✅ Complete | 3 days | ✅ Score: 52/100 |
| 2. Architecture | ✅ Complete | 2 days | ✅ ADRs: 5 |
| 3. Framework | 🔄 In Progress | 4/7 days | ⏳ Pending |
| 4. API Modernization | ⏳ Pending | - | - |
| 5. Performance | ⏳ Pending | - | - |
| 6. Documentation | ⏳ Pending | - | - |
| 7. Final Validation | ⏳ Pending | - | - |
### Current Phase Details
- **Active Agents**: Coder #1, Coder #2, Tester
- **Module A**: ✅ Complete (.NET 9 migration done)
- **Module B**: 🔄 In Progress (fixing build errors)
- **Module C**: ⏳ Queued
- **Test Pass Rate**: 96.2% (6 failures, P2 severity)
```
---
## Quality Gates (Blocking Criteria)
### Security Gates (BLOCKING)
- ❌ **BLOCK**: Security score <45
- ❌ **BLOCK**: Any CRITICAL vulnerabilities unresolved
- ❌ **BLOCK**: Any HIGH vulnerabilities unresolved (unless explicitly approved)
### Testing Gates (BLOCKING)
- ❌ **BLOCK**: Test pass rate <100% (production releases)
- ❌ **BLOCK**: Code coverage <80%
- ❌ **BLOCK**: Any P0 or P1 test failures
- ❌ **BLOCK**: Performance regression >10%
### Build Gates (BLOCKING)
- ❌ **BLOCK**: Any compilation errors
- ❌ **BLOCK**: Any dependency conflicts
- ❌ **BLOCK**: Build warnings in critical code paths
### Documentation Gates (BLOCKING)
- ❌ **BLOCK**: Missing CHANGELOG
- ❌ **BLOCK**: Missing migration guide
- ❌ **BLOCK**: Undocumented breaking changes
---
## Agent Coordination Patterns
### Sequential Pipeline (Default)
```
Migration Coordinator
Security Agent (Phase 1) → GATE → Must pass before Phase 2
Architect Agent (Phase 2) → GATE → ADRs must be complete
Coder Agent (Phase 3) → Tester Agent → GATE → 100% pass rate
Coder Agent (Phase 4) → Tester Agent → GATE → 100% pass rate
Documentation Agent (Phase 6)
Tester + Security (Phase 7) → FINAL GATE → GO/NO-GO
```
### Parallel Execution (Faster)
```
Migration Coordinator
Security Agent (Phase 1) → GATE
Architect Agent (Phase 2) → GATE
├─ Coder #1 (Module A) → Tester → ✅
├─ Coder #2 (Module B) → Tester → ✅
└─ Coder #3 (Module C) → Tester → ✅
Documentation Agent → Final Validation
```
### Fix-and-Retest Cycle
```
Coder Agent makes changes
Tester Agent runs tests
↓ (failures found)
Tester Agent documents failures (P0/P1/P2/P3)
Coder Agent fixes issues
Tester Agent re-runs tests
[Repeat until 100% pass rate achieved]
```
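The cycle above is a retry loop. A minimal sketch, in which the pass-on-the-third-run behaviour is simulated and `fix_failures` is a placeholder for real Coder Agent work:

```shell
#!/bin/sh
# Sketch: repeat fix-and-retest until the suite passes.
attempts=0
run_tests() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]          # simulated: suite passes on run 3
}
fix_failures() {
  echo "fixing documented failures after run $attempts"
}

until run_tests; do
  fix_failures
done
echo "100% pass rate reached after $attempts runs"
```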
---
## Logging Protocol
**MANDATORY**: All agents must log to HISTORY.md using `./scripts/append-to-history.sh`
**When to Log**:
- After completing each phase
- After fixing security vulnerabilities
- After making architectural decisions
- After completing migrations
- After test validation
- After documentation updates
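The helper's interface is not shown in this document; the following is a minimal sketch of what `./scripts/append-to-history.sh` might do, with the title/body argument convention being an assumption:

```shell
#!/bin/sh
# Sketch: append a timestamped entry to HISTORY.md
# (the real scripts/append-to-history.sh may differ).
append_to_history() {
  title=$1
  body=$2
  {
    printf '\n## %s - %s\n' "$(date '+%Y-%m-%d %H:%M')" "$title"
    printf '%s\n' "$body"
  } >> HISTORY.md
}

append_to_history "Phase 1: Security Remediation Complete" \
                  "Security score improved from 23/100 to 52/100"
```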
**Example Entry**:
```markdown
## 2025-10-25 14:30 - Phase 1: Security Remediation Complete
**Agent**: Security Agent + Coder Agent
**Phase**: 1 - Security Remediation
### What Changed
- Fixed 3 CRITICAL vulnerabilities (CVE-2024-1234, CVE-2024-5678, CVE-2024-9012)
- Fixed 12 HIGH vulnerabilities
- Updated 15 dependencies to latest secure versions
- Improved security score from 23/100 → 52/100
### Why Changed
- CRITICAL vulnerabilities blocked progression (Phase 0 quality gate)
- Required to achieve minimum security score ≥45
- Multiple packages had known exploits in production
### Impact
- Security score: 23/100 → 52/100 (+29 points)
- CRITICAL vulnerabilities: 3 → 0 (✅ resolved)
- HIGH vulnerabilities: 12 → 0 (✅ resolved)
- MEDIUM vulnerabilities: 8 → 5 (-3)
- Dependencies updated: 15 packages
- Test pass rate: 100% (no regressions introduced)
### Outcome
**Quality Gate PASSED** - Ready for Phase 2: Architecture & Design
- Security score ≥45: ✅ (52/100)
- Zero CRITICAL: ✅
- Zero HIGH: ✅
- Tests passing: ✅ (100%)
**Next Phase**: Phase 2 - Architecture & Design (2-3 days estimated)
```
---
## Success Criteria
### Project Modernization Complete When:
**Security**
- Security score ≥45
- Zero CRITICAL vulnerabilities
- Zero HIGH vulnerabilities
- All dependencies up-to-date
**Quality**
- 100% test pass rate (all test types)
- Code coverage ≥85%
- Zero P0/P1 issues
- Performance validated (no regressions >10%)
**Modernization**
- Latest stable framework version
- All dependencies current
- Zero obsolete API warnings
- Modern code patterns applied
**Documentation**
- Complete CHANGELOG
- Comprehensive migration guide
- All ADRs documented
- Updated README and docs
- HISTORY.md fully populated
---
## Estimated Timeline
**Note**: Time estimates are based on typical human execution times and may vary significantly based on project complexity, team experience, and AI assistance capabilities.
| Phase | Duration | Can Parallelize |
|-------|----------|-----------------|
| 0. Discovery & Assessment | 1-2 days | No |
| 1. Security Remediation | 2-5 days | Partial (by vulnerability) |
| 2. Architecture & Design | 2-3 days | No |
| 3. Framework Modernization | 5-10 days | Yes (by module) |
| 4. API Modernization | 3-7 days | Yes (by area) |
| 5. Performance Optimization | 2-4 days | Partial |
| 6. Documentation | 2-3 days | Partial (by doc type) |
| 7. Final Validation | 1-2 days | No |
**Total**: 18-36 days (depends on project size and parallelization)
**Small Project** (<10k LOC): ~18-22 days
**Medium Project** (10k-50k LOC): ~25-30 days
**Large Project** (>50k LOC): ~30-36 days
---
## Example Usage Scenarios
### Scenario 1: .NET Framework to .NET 9
```
User: /modernize-project
User: Upgrade my .NET Framework 4.8 project to .NET 9, fix all security issues, and ensure 95%+ test coverage
Claude: I'll coordinate the modernization team:
Phase 0: Discovery & Assessment
- Current: .NET Framework 4.8
- Target: .NET 9.0
- Security Score: 18/100 ⚠️
- Test Coverage: 42% ⚠️
[Detailed execution across all 7 phases]
```
### Scenario 2: Node.js Upgrade + Security
```
User: /modernize-project
User: My Node.js 14 app needs to go to Node.js 22, fix CVEs, and add proper testing
Claude: Modernization Plan:
- Phase 0: Assess Node.js 14 → 22 migration path
- Phase 1: Fix 47 vulnerabilities (Security Agent priority)
- Phase 2: Plan ESM migration, update patterns (Architect)
- Phase 3-4: Upgrade dependencies, modernize code (Coder)
- Phase 5: Performance validation
- Phase 6-7: Documentation and final validation
```
### Scenario 3: Python 2 to Python 3
```
User: /modernize-project
User: Legacy Python 2.7 codebase → Python 3.12, modernize dependencies, add type hints
Claude: This is a major migration requiring all agents:
[Systematic execution with extra focus on syntax changes, dependency compatibility]
```
---
## Troubleshooting
### Quality Gate Failures
**Security Gate Failed**
```
❌ Security score: 38/100 (required ≥45)
→ Action: Security Agent + Coder Agent continue Phase 1
→ Cannot proceed to Phase 2 until resolved
```
**Testing Gate Failed**
```
❌ Test pass rate: 87% (required 100%)
→ Action: Tester Agent documents failures
→ Coder Agent fixes issues
→ Re-run fix-and-retest cycle
```
**Performance Gate Failed**
```
❌ Performance regression: -23% (threshold -10%)
→ Action: Roll back change
→ Architect Agent reviews approach
→ Coder Agent implements optimized solution
```
---
## Best Practices
1. **Always start with Phase 0** - Never skip discovery
2. **Don't skip quality gates** - They prevent production issues
3. **Fix security first** - Security blocks everything else
4. **Test continuously** - Don't batch testing until the end
5. **Document as you go** - Incremental documentation saves time
6. **Use parallel execution** - When dependencies allow it
7. **Log everything to HISTORY.md** - Complete audit trail
8. **Create ADRs for major decisions** - Future maintainers will thank you
---
## Anti-Patterns to Avoid
- ❌ **Skipping security phase** - "We'll fix it later"
- ❌ **Accepting <100% test pass rates** - "Good enough for now"
- ❌ **Batching all changes** - "One big PR at the end"
- ❌ **Skipping documentation** - "We'll document after release"
- ❌ **Ignoring quality gates** - "We're on a deadline"
- ❌ **Solo agent execution** - "Just the Coder Agent is fine"
- ❌ **No progress tracking** - "Trust the process"
---
**Document Owner**: Migration Coordinator
**Protocol Version**: 1.0
**Last Updated**: 2025-10-25
**Applicability**: Universal - All software projects requiring modernization
**Remember**: **Systematic agent coordination = Production-ready modernization**

---

**File**: `commands/plan.md`
# Plan Command
**Description**: Create a detailed modernization strategy and execution plan, utilizing ASSESSMENT.md if available, with phase breakdown, timeline, and resource allocation
---
# Project Modernization Planning Protocol
**Version**: 1.0
**Purpose**: Create a comprehensive, actionable modernization plan
**Input**: Optional `ASSESSMENT.md` from `/modernize-assess`
**Output**: `PLAN.md` with detailed execution strategy
**Duration**: 3-6 hours
**Note**: Time estimates are based on typical human execution times and may vary significantly based on project complexity, team experience, and AI assistance capabilities.
---
## Overview
This protocol creates a **detailed modernization execution plan** that serves as the blueprint for the `/modernize-project` command. The plan includes:
- ✅ Detailed phase breakdown with tasks
- ✅ Timeline and milestone schedule
- ✅ Resource allocation and team assignments
- ✅ Risk mitigation strategies
- ✅ Quality gates and success criteria
- ✅ Contingency plans
**Core Principle**: **Proper planning prevents poor performance - plan before you execute.**
---
## Planning Process
### Step 1: Load Assessment (if available)
```bash
# Check for ASSESSMENT.md
if [ -f "ASSESSMENT.md" ]; then
echo "✅ Found ASSESSMENT.md - using as input"
# Extract: scores, risks, estimates, recommendations
else
echo "⚠️ No ASSESSMENT.md found - will create basic assessment"
# Run abbreviated assessment inline
fi
```
**If assessment exists**:
- Use technical viability score
- Use identified risks
- Use effort estimates
- Use team capacity analysis
- Use dependency analysis
**If no assessment**:
- Create quick assessment (30 min)
- Gather basic project info
- Identify major risks
- Rough effort estimate
---
### Step 2: Define Modernization Scope (30 minutes)
#### 2.1 Objectives
**Primary Objectives** (MUST achieve):
- [ ] Upgrade to [Target Framework/Version]
- [ ] Eliminate CRITICAL/HIGH security vulnerabilities
- [ ] Achieve 100% test pass rate
- [ ] [Other must-have objective]
**Secondary Objectives** (SHOULD achieve):
- [ ] Improve code coverage to ≥85%
- [ ] Optimize performance (≥10% improvement)
- [ ] Modernize APIs and patterns
- [ ] [Other desirable objective]
**Out of Scope** (explicitly NOT doing):
- [ ] UI/UX redesign
- [ ] Feature additions
- [ ] Complete rewrite
- [ ] [Other exclusions]
#### 2.2 Success Criteria
**Technical Success**:
- Framework upgraded to [Version]
- Security score ≥45
- Zero CRITICAL/HIGH vulnerabilities
- 100% test pass rate
- Code coverage ≥85%
- No performance regression >10%
**Business Success**:
- Delivered within [X weeks]
- Cost within $[Budget] ±10%
- Zero production incidents caused by migration
- Team trained on new framework
- Documentation complete
---
### Step 3: Phase Planning (60-90 minutes)
For each of the 7 phases, define:
#### Phase 0: Discovery & Assessment
**Duration**: [X-Y days]
**Team**: [Names/Roles]
**Agent**: Migration Coordinator + Security Agent + Architect Agent
**Tasks**:
1. **Project Inventory** (4 hours)
- Map all projects/modules
- Identify dependencies
- Document current architecture
- **Deliverable**: Project inventory spreadsheet
2. **Security Baseline** (4 hours)
- Run vulnerability scan
- Categorize vulnerabilities
- Calculate security score
- **Deliverable**: Security baseline report
3. **Test Baseline** (4 hours)
- Run all existing tests
- Capture pass rates and coverage
- Document test infrastructure
- **Deliverable**: Test baseline report
4. **Technology Assessment** (4 hours)
- Research target framework
- Document breaking changes
- Identify obsolete APIs
- **Deliverable**: Technology assessment document
5. **Effort Estimation** (2 hours)
- Estimate each phase duration
- Identify parallel opportunities
- Calculate total timeline
- **Deliverable**: Detailed timeline
**Exit Criteria**:
- All assessments complete
- Security score calculated
- Timeline approved
- Budget approved
**Risks**:
- [Risk 1]: [Mitigation]
- [Risk 2]: [Mitigation]
---
#### Phase 1: Security Remediation
**Duration**: [X-Y days]
**Team**: [Names/Roles]
**Agent**: Security Agent (lead) + Coder Agent + Tester Agent
**Tasks**:
1. **Fix CRITICAL Vulnerabilities** (XX hours)
- CVE-XXXX-XXXX: [Package] → [Action]
- CVE-YYYY-YYYY: [Package] → [Action]
- **Deliverable**: Zero CRITICAL CVEs
2. **Fix HIGH Vulnerabilities** (YY hours)
- CVE-ZZZZ-ZZZZ: [Package] → [Action]
- [List all HIGH CVEs with remediation plan]
- **Deliverable**: Zero HIGH CVEs
3. **Update Security Dependencies** (ZZ hours)
- [Package 1]: v[Old] → v[New]
- [Package 2]: v[Old] → v[New]
- **Deliverable**: All security patches applied
4. **Validation Testing** (WW hours)
- Re-run security scan
- Run full test suite
- Verify no regressions
- **Deliverable**: Security score ≥45
**Exit Criteria**:
- ✅ Security score ≥45
- ✅ Zero CRITICAL vulnerabilities
- ✅ Zero HIGH vulnerabilities
- ✅ 100% test pass rate maintained
**Risks**:
- [Risk 1]: Dependency conflicts → [Mitigation]
- [Risk 2]: Breaking changes → [Mitigation]
**Contingency**:
- If unable to fix all HIGH: Document exceptions, get approval
- If tests fail: Allocate +XX hours for fixes
---
#### Phase 2: Architecture & Design
**Duration**: [X-Y days]
**Team**: [Names/Roles]
**Agent**: Architect Agent (lead) + Migration Coordinator
**Tasks**:
1. **Create Migration ADRs** (8 hours)
- ADR 0001: [Target Framework Selection]
- ADR 0002: [Dependency Strategy]
- ADR 0003: [Migration Approach]
- **Deliverable**: 3-5 ADRs in MADR format
2. **Dependency Migration Matrix** (6 hours)
- Map all dependencies to target versions
- Identify conflicts
- Define resolution strategy
- **Deliverable**: Dependency migration matrix
3. **Breaking Changes Analysis** (8 hours)
- Enumerate all breaking changes
- Estimate code impact %
- Create remediation guide
- **Deliverable**: Breaking changes guide
4. **Module Migration Order** (4 hours)
- Analyze dependencies
- Define migration sequence
- Identify parallel opportunities
- **Deliverable**: Module migration plan
**Exit Criteria**:
- ✅ All ADRs approved
- ✅ Dependency strategy defined
- ✅ Migration order established
- ✅ Team aligned on approach
**Risks**:
- [Risk 1]: [Mitigation]
- [Risk 2]: [Mitigation]
---
#### Phase 3: Framework & Dependency Modernization
**Duration**: [X-Y days]
**Team**: [Names/Roles]
**Agent**: Coder Agent (multiple) + Tester Agent
**Parallel Execution Strategy**:
```
Module A: [Dev 1] → [X days]
Module B: [Dev 2] → [Y days]
Module C: [Dev 3] → [Z days]
```
**Tasks by Module**:
**Module A: [Name]** (XX hours)
1. Update framework target
2. Update package references
3. Fix compilation errors
4. Run tests (100% pass required)
5. Code review
6. **Deliverable**: Module A migrated
**Module B: [Name]** (YY hours)
[Similar breakdown]
**Module C: [Name]** (ZZ hours)
[Similar breakdown]
**Integration** (WW hours)
1. Merge all modules
2. Resolve conflicts
3. Full solution build
4. Complete test suite
5. **Deliverable**: All modules integrated
**Exit Criteria**:
- ✅ All modules migrated
- ✅ Solution builds successfully
- ✅ 100% test pass rate
- ✅ Zero P0/P1 issues
**Risks**:
- [Risk 1]: Merge conflicts → Daily integration
- [Risk 2]: Test failures → Dedicated tester
---
#### Phase 4: API Modernization & Code Quality
**Duration**: [X-Y days]
**Team**: [Names/Roles]
**Agent**: Coder Agent + Tester Agent + Architect Agent
**Tasks**:
1. **Replace Obsolete APIs** (XX hours)
- [Old API 1] → [New API 1] (YY instances)
- [Old API 2] → [New API 2] (ZZ instances)
- **Deliverable**: Zero obsolete API warnings
2. **Apply Modern Patterns** (YY hours)
- async/await conversion
- Pattern matching
- Record types
- **Deliverable**: Modern code patterns applied
3. **Enhance Test Coverage** (ZZ hours)
- Add missing unit tests
- Add integration tests
- Target ≥85% coverage
- **Deliverable**: 85% code coverage
4. **Code Quality** (WW hours)
- Reduce complexity
- Fix code smells
- Improve naming
- **Deliverable**: Quality metrics improved
**Exit Criteria**:
- ✅ Zero obsolete APIs
- ✅ Code coverage ≥85%
- ✅ 100% test pass rate
- ✅ Code quality score improved
**Risks**:
- [Risk 1]: [Mitigation]
- [Risk 2]: [Mitigation]
---
#### Phase 5: Performance Optimization
**Duration**: [X-Y days]
**Team**: [Names/Roles]
**Agent**: Coder Agent + Tester Agent
**Tasks**:
1. **Baseline Benchmarks** (4 hours)
- Define performance tests
- Capture baseline metrics
- Set improvement targets
- **Deliverable**: Baseline report
2. **Identify Bottlenecks** (8 hours)
- Run profiler
- Analyze hot paths
- Prioritize optimizations
- **Deliverable**: Optimization backlog
3. **Implement Optimizations** (XX hours)
- [Optimization 1]
- [Optimization 2]
- [Optimization 3]
- **Deliverable**: Performance improvements
4. **Validate Performance** (6 hours)
- Re-run benchmarks
- Compare vs baseline
- Document gains
- **Deliverable**: Performance report
**Exit Criteria**:
- ✅ Performance ≥baseline (no regressions)
- ✅ Target improvements achieved OR justified
- ✅ Benchmarks documented
**Risks**:
- [Risk 1]: Limited improvement → Document as optimal
- [Risk 2]: Regression found → Roll back and retry
---
#### Phase 6: Comprehensive Documentation
**Duration**: [X-Y days]
**Team**: [Names/Roles]
**Agent**: Documentation Agent (lead)
**Tasks**:
1. **CHANGELOG.md** (6 hours)
- Document breaking changes
- New features/improvements
- Bug fixes
- Security updates
- **Deliverable**: Complete CHANGELOG
2. **MIGRATION-GUIDE.md** (12 hours)
- Step-by-step upgrade guide
- Breaking changes details
- Before/after examples
- Troubleshooting
- **Deliverable**: Comprehensive migration guide (800+ lines)
3. **Update Documentation** (8 hours)
- Update README
- Update architecture docs
- Update API docs
- **Deliverable**: All docs current
4. **ADR Summaries** (4 hours)
- Compile all ADRs
- Create decision log
- Link to relevant docs
- **Deliverable**: ADR index
**Exit Criteria**:
- ✅ CHANGELOG complete
- ✅ Migration guide ≥800 lines
- ✅ All docs updated
- ✅ Documentation reviewed
---
#### Phase 7: Final Validation & Release
**Duration**: [X-Y days]
**Team**: [Names/Roles]
**Agent**: Tester Agent (lead) + Security Agent + Coordinator
**Tasks**:
1. **Complete Test Execution** (12 hours)
- Unit tests (100% pass)
- Integration tests (100% pass)
- Component tests (100% pass)
- E2E tests (100% pass)
- Performance tests (validation)
- **Deliverable**: All tests passing
2. **Final Security Scan** (4 hours)
- Run vulnerability scan
- Verify security score ≥45
- Document any LOW/MEDIUM issues
- **Deliverable**: Security approval
3. **Release Preparation** (8 hours)
- Create release notes
- Tag release version
- Package artifacts
- Deployment checklist
- **Deliverable**: Release package
4. **GO/NO-GO Decision** (2 hours)
- Review all quality gates
- Production readiness assessment
- Final approval
- **Deliverable**: GO or NO-GO
**Exit Criteria**:
- ✅ 100% test pass rate (all types)
- ✅ Security score ≥45
- ✅ Zero CRITICAL/HIGH vulnerabilities
- ✅ All documentation complete
- ✅ Release approved
---
### Step 4: Timeline & Milestones (30 minutes)
#### Gantt Chart
```
Week 1-2: Phase 0 (Discovery) ████████
Week 3-4: Phase 1 (Security) ████████████
Week 5-6: Phase 2 (Arch) ████████
Week 7-10: Phase 3 (Framework) ████████████████████
Module A ████████
Module B ████████
Module C ████████
Week 11-13: Phase 4 (API Mod) ████████████
Week 14-15: Phase 5 (Perf) ████████
Week 16-17: Phase 6 (Docs) ████████
Week 18: Phase 7 (Validate) ████
```
#### Milestones
| Milestone | Date | Deliverables | Success Criteria |
|-----------|------|--------------|------------------|
| M1: Assessment Complete | Week 2 | Security baseline, test baseline, plan | All baselines captured |
| M2: Security Remediated | Week 4 | Security score ≥45, CVEs fixed | Zero CRITICAL/HIGH |
| M3: Architecture Approved | Week 6 | ADRs, migration strategy | Team aligned |
| M4: Framework Migrated | Week 10 | All modules on new framework | Builds, 100% tests |
| M5: Code Modernized | Week 13 | Modern APIs, ≥85% coverage | Quality improved |
| M6: Performance Validated | Week 15 | Benchmarks, optimizations | No regressions |
| M7: Documentation Complete | Week 17 | CHANGELOG, migration guide | All docs done |
| M8: Production Ready | Week 18 | Final validation, approval | GO decision |
---
### Step 5: Resource Allocation (30 minutes)
#### Team Assignments
**Phase 0: Discovery**
- Lead: [Name] (Architect/Coordinator)
- Support: [Name] (Security)
- Hours: XX total
**Phase 1: Security**
- Lead: [Name] (Security specialist)
- Devs: [Name1, Name2]
- Tester: [Name]
- Hours: YY total
**Phase 2: Architecture**
- Lead: [Name] (Architect)
- Support: [Coordinator]
- Hours: ZZ total
**Phase 3: Framework (PARALLEL)**
- Module A: [Dev1] (WW hours)
- Module B: [Dev2] (VV hours)
- Module C: [Dev3] (UU hours)
- Tester: [Name] (TT hours)
- Hours: XX total
**Phase 4-7**: [Similar breakdown]
#### Capacity Planning
| Week | Available Hours | Allocated Hours | Buffer | Utilization |
|------|----------------|-----------------|--------|-------------|
| 1-2 | 160 | 120 | 40 | 75% |
| 3-4 | 160 | 140 | 20 | 88% |
| ... | ... | ... | ... | ... |
**Total Project**:
- Available: XX hours
- Allocated: YY hours
- Buffer (30%): ZZ hours
- **Utilization**: WW%
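The utilization column is simple arithmetic (allocated hours divided by available hours). A quick check using the week 1-2 numbers from the table:

```shell
#!/bin/sh
# Utilization for week 1-2: 120 allocated of 160 available hours.
available=160
allocated=120
buffer=$((available - allocated))
utilization=$((allocated * 100 / available))
echo "buffer: ${buffer}h, utilization: ${utilization}%"
```

This reproduces the table's week 1-2 row: 40 hours of buffer at 75% utilization.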
---
### Step 6: Risk Management (45 minutes)
#### Risk Register
| ID | Risk | Probability | Impact | Severity | Mitigation | Owner |
|----|------|-------------|--------|----------|------------|-------|
| R01 | Dependency conflicts unresolvable | Medium | High | HIGH | Upfront analysis, alternatives ready | [Name] |
| R02 | Timeline overrun >30% | Medium | High | HIGH | Phased approach, weekly reviews | [Coordinator] |
| R03 | Critical bug in production | Low | Critical | HIGH | Comprehensive testing, rollback plan | [Tester] |
| R04 | Team member leaves | Low | Medium | MEDIUM | Knowledge sharing, documentation | [Manager] |
| R05 | Breaking changes missed | Medium | High | HIGH | Thorough analysis, peer review | [Architect] |
#### Mitigation Strategies
**For High-Priority Risks**:
**R01: Dependency Conflicts**
- **Prevention**: Analyze all dependencies upfront (Phase 0)
- **Detection**: Test builds daily
- **Response**: Have alternative packages researched
- **Contingency**: +2 weeks for rewrites if needed
**R02: Timeline Overrun**
- **Prevention**: 30% buffer, conservative estimates
- **Detection**: Weekly status reviews, burn-down charts
- **Response**: Descope secondary objectives
- **Contingency**: Defer Phase 4-5 to post-launch
**R03: Production Bug**
- **Prevention**: 100% test pass rate, comprehensive E2E
- **Detection**: Staged rollout, monitoring
- **Response**: Immediate rollback procedure
- **Contingency**: Rollback plan tested, backup ready
---
### Step 7: Quality Gates & Decision Points (20 minutes)
#### Gate Criteria
**Gate 1: Post-Assessment** (End of Phase 0)
- **GO if**: Assessment score ≥60/100, budget approved
- **NO-GO if**: Score <60, CRITICAL risks, no budget
- **Decision Maker**: Executive sponsor
**Gate 2: Post-Security** (End of Phase 1)
- **GO if**: Security score ≥45, zero CRITICAL/HIGH
- **NO-GO if**: Unable to remediate critical CVEs
- **Decision Maker**: Security lead + Coordinator
**Gate 3: Post-Architecture** (End of Phase 2)
- **GO if**: All ADRs approved, team aligned
- **NO-GO if**: No viable migration path
- **Decision Maker**: Tech lead + Architect
**Gate 4: Post-Framework** (End of Phase 3)
- **GO if**: 100% tests pass, builds clean
- **NO-GO if**: <100% pass rate, build issues
- **Decision Maker**: Coordinator + Tester
**Gate 5: Post-API Modernization** (End of Phase 4)
- **GO if**: Coverage ≥85%, quality improved
- **NO-GO if**: Major quality regression
- **Decision Maker**: Tech lead
**Gate 6: Post-Performance** (End of Phase 5)
- **GO if**: No regression >10%, targets met
- **NO-GO if**: Critical regression
- **Decision Maker**: Tech lead + Architect
**Gate 7: Post-Documentation** (End of Phase 6)
- **GO if**: All docs complete and reviewed
- **NO-GO if**: Major gaps in documentation
- **Decision Maker**: Documentation lead
**Gate 8: Final GO/NO-GO** (End of Phase 7)
- **GO if**: All gates passed, production ready
- **NO-GO if**: Any blocker remaining
- **Decision Maker**: Executive sponsor
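Numeric gate criteria can be checked mechanically before the decision meeting. A minimal sketch for Gate 1, assuming the assessment score is available as a plain number:

```shell
# Gate 1: GO when the assessment score is at least 60/100
gate1() {
  if [ "$1" -ge 60 ]; then
    echo "GO"
  else
    echo "NO-GO"
  fi
}
gate1 78
gate1 55
```

Non-numeric criteria (budget approved, no CRITICAL risks) still need a human decision maker; the script only pre-screens.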
---
### Step 8: Contingency Planning (30 minutes)
#### What-If Scenarios
**Scenario 1: Critical Dependency Has No Compatible Version**
- **Impact**: Cannot complete Phase 3
- **Response**:
1. Research alternative packages
2. Evaluate rewrite of dependent code
3. Consider staying on current framework with security patches
- **Timeline Impact**: +2-4 weeks
- **Effort Impact**: Additional development and testing required
**Scenario 2: Test Pass Rate Drops Below 80%**
- **Impact**: Quality gate failure
- **Response**:
1. Halt further changes
2. Analyze all failures
3. Dedicate team to fixes
4. Re-run complete suite
- **Timeline Impact**: +1-2 weeks
- **Effort Impact**: Dedicated testing and bug fixing resources
**Scenario 3: Performance Regression >20%**
- **Impact**: Unacceptable for production
- **Response**:
1. Roll back changes
2. Profile and identify bottleneck
3. Architect review optimization strategy
4. Implement targeted fixes
- **Timeline Impact**: +2-3 weeks
- **Effort Impact**: Performance analysis and optimization work
**Scenario 4: Key Team Member Leaves**
- **Impact**: Knowledge loss, capacity reduction
- **Response**:
1. Knowledge transfer session (if notice given)
2. Redistribute work
3. Consider contractor/consultant
4. Extend timeline if needed
- **Timeline Impact**: +1-3 weeks
- **Effort Impact**: Onboarding and knowledge transfer overhead
---
## PLAN.md Output Template
```markdown
# Project Modernization Plan
**Project**: [Name]
**Current Version**: [Version]
**Target Version**: [Version]
**Plan Created**: [Date]
**Plan Owner**: [Name]
**Executive Sponsor**: [Name]
---
## Executive Summary
### Objectives
[Primary and secondary objectives]
### Timeline
- **Start Date**: [Date]
- **End Date**: [Date]
- **Duration**: XX weeks (YY calendar months)
- **Contingency Buffer**: 30% additional time
### Team
- **Team Size**: X developers
- **Key Roles**: [List]
- **External Resources**: [If any]
### Success Criteria
[Top 5 success criteria]
---
## Assessment Summary
[If ASSESSMENT.md exists, summarize key findings]
**Overall Assessment Score**: XX/100
**Recommendation**: [PROCEED/CAUTION/DEFER/DO NOT]
**Key Risks**:
1. [Risk 1]
2. [Risk 2]
**Mitigation Strategies**:
[Summary of how risks will be addressed]
---
## Scope
### In Scope
✅ [Item 1]
✅ [Item 2]
✅ [Item 3]
### Out of Scope
❌ [Item 1]
❌ [Item 2]
### Success Criteria
**Technical**:
- [Criterion 1]
- [Criterion 2]
**Business**:
- [Criterion 1]
- [Criterion 2]
---
## Phase Breakdown
[Include all 7 phases with tasks, durations, teams, deliverables, exit criteria]
### Phase 0: Discovery & Assessment
[Detailed breakdown as created above]
### Phase 1: Security Remediation
[Detailed breakdown]
[... Phases 2-7 ...]
---
## Timeline & Milestones
[Gantt chart and milestone table]
---
## Resource Allocation
[Team assignments and capacity planning]
---
## Risk Management
[Risk register and mitigation strategies]
---
## Quality Gates
[All 8 quality gates with criteria]
---
## Contingency Plans
[What-if scenarios and responses]
---
## Communication Plan
### Status Reporting
- **Daily**: Stand-up (15 min)
- **Weekly**: Status report to stakeholders
- **Bi-weekly**: Executive update
- **Monthly**: Board update (if applicable)
### Escalation Path
1. Team Lead
2. Tech Lead / Coordinator
3. Engineering Manager
4. CTO
5. Executive Sponsor
---
## Appendices
### A. Dependency Migration Matrix
[Detailed dependency upgrade plan]
### B. Breaking Changes Guide
[All breaking changes with remediation]
### C. Test Strategy
[Testing approach for each phase]
### D. Deployment Strategy
[How changes will be deployed]
---
**Plan Version**: 1.0
**Last Updated**: [Date]
**Next Review**: [Every 2 weeks during execution]
```
---
## Usage Examples
### Example 1: With Assessment
```
User: /modernize-plan
Claude: Checking for ASSESSMENT.md...
✅ Found ASSESSMENT.md (score: 78/100, PROCEED WITH CAUTION)
Using assessment findings:
- Technical Viability: 82/100
- Business Value: 74/100
- Risk Profile: MEDIUM (3 HIGH risks)
- Estimated Effort: 22-28 days
Creating detailed plan...
[Generates comprehensive plan based on assessment]
Phase 0: Discovery (2 days)
- Already have baseline from assessment
- Focus on detailed breaking changes
- Estimated: 2 days vs 1-2 days typical
Phase 1: Security (3-5 days)
- Assessment identified 3 CRITICAL, 12 HIGH CVEs
- [Detailed remediation plan]
[Continues through all phases]
✅ PLAN.md created (47 pages)
Summary:
- Duration: 24 weeks
- Team: 3 developers
- Milestones: 8 quality gates
- Timeline includes 30% contingency buffer
```
### Example 2: Without Assessment
```
User: /modernize-plan
Claude: No ASSESSMENT.md found.
Running quick assessment first...
[30-minute abbreviated assessment]
Quick Assessment Results:
- Project Size: Medium (~35k LOC)
- Current: .NET 6
- Target: .NET 9
- Rough Estimate: 25-35 days
⚠️ Recommendation: Run full /modernize-assess for better accuracy
Creating plan with conservative estimates...
[Generates plan with wider ranges]
✅ PLAN.md created
Note: This plan has wider time/cost ranges due to limited assessment.
Consider running /modernize-assess for refined planning.
```
---
## Best Practices
1. **Use Assessment** - Always run `/modernize-assess` first
2. **Be Realistic** - Conservative estimates prevent disappointment
3. **Include Buffer** - 30% contingency is standard
4. **Define Gates** - Clear decision points prevent drift
5. **Plan Contingencies** - Murphy's law applies
6. **Get Buy-in** - Review plan with all stakeholders before execution
---
## Integration with /modernize-project
The plan created by this command is **automatically used** by `/modernize-project`:
```
User: /modernize-project
Claude: Checking for existing plan...
✅ Found PLAN.md
Using existing plan:
- 7 phases defined
- 24 weeks timeline
- 3 developers allocated
- 8 quality gates
Proceeding with execution...
[Follows plan exactly]
```
---
**Document Owner**: Migration Coordinator
**Protocol Version**: 1.0
**Last Updated**: 2025-10-25
**Required Before**: Running `/modernize-project`
**Remember**: **Plans are worthless, but planning is essential. Create the plan, then adapt as needed.**

`commands/retro-apply.md` (new file, 983 lines)
# Retro-Apply Command
**Description**: Apply the process improvement recommendations from IMPROVEMENTS.md to agent protocols and project configuration
---
# Retrospective Application Protocol
**Version**: 1.0
**Purpose**: Apply approved process improvements from IMPROVEMENTS.md to agent protocols
**Team**: All specialist agents (Migration Coordinator, Security, Architect, Coder, Tester, Documentation)
**Input**: `IMPROVEMENTS.md` (from `/retro` command)
**Output**: Updated protocol files, configuration, automation scripts
**Duration**: 3-8 hours (depends on recommendation complexity)
**Note**: Time estimates are based on typical human execution times and may vary significantly based on project complexity, team experience, and AI assistance capabilities.
---
## Overview
This protocol **systematically implements** the process improvements documented in `IMPROVEMENTS.md`, including:
- **Agent behavioral changes**: Update command files to guide better agent behavior (tool usage, confirmation patterns, context gathering)
- **Protocol updates**: Modify process steps, quality gates, workflow sequences
- **Automation**: Add scripts, hooks, CI/CD to enforce improvements
- **Documentation**: Update guidelines, examples, anti-patterns
**Core Principle**: **Make improvements permanent by embedding them in protocols, commands, and automation.**
**Change Types**:
- **Agent Behavior**: Update commands/*.md to teach agents better practices
- **Protocol**: Modify process flows in protocol documents
- **Automation**: Add enforcement through scripts/hooks
- **Tool Usage**: Add examples and guidance for proper tool selection
- **Confirmation Patterns**: Add user approval checkpoints
---
## Prerequisites
### Required Input
**IMPROVEMENTS.md must exist** with:
- ✅ 3-5 specific recommendations
- ✅ Each recommendation has clear implementation steps
- ✅ ADR format with problem, evidence, proposed change
- ✅ Status: Proposed or Approved
**Validation**:
```bash
# Check for IMPROVEMENTS.md
if [ ! -f "IMPROVEMENTS.md" ]; then
echo "❌ IMPROVEMENTS.md not found"
echo "Run /retro first to generate recommendations"
exit 1
fi
# Validate format
grep -q "## Recommendations" IMPROVEMENTS.md || echo "⚠️ Missing Recommendations section"
grep -q "## Summary" IMPROVEMENTS.md || echo "⚠️ Missing Summary section"
```
---
## Application Process
### Phase 1: Review and Approval (15-30 minutes)
**Objective**: Ensure recommendations are approved before implementation
#### 1.1 Display Recommendations
**Migration Coordinator**:
```bash
# Extract and display all recommendations
awk '/^### Recommendation/,/^---/' IMPROVEMENTS.md
```
**Present to User**:
```markdown
## Recommendations to Apply
### Summary
| # | Recommendation | Impact | Effort | Priority |
|---|----------------|--------|--------|----------|
| 1 | [Title] | High | Medium | P0 |
| 2 | [Title] | High | Low | P0 |
| 3 | [Title] | Medium | Low | P1 |
Do you want to apply:
- All recommendations?
- Only P0 (high priority)?
- Select specific recommendations?
```
#### 1.2 User Confirmation
**Wait for explicit approval** before proceeding:
- `all` - Apply all recommendations
- `p0` - Apply only P0 priority
- `1,3,5` - Apply specific recommendations by number
- `cancel` - Abort application
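A minimal sketch of turning that reply into a work list (variable names are illustrative; `all` and `p0` would expand to the matching recommendation IDs):

```shell
choice="1,3,5"   # example reply selecting specific recommendations
case "$choice" in
  all)    echo "Applying all recommendations" ;;
  p0)     echo "Applying P0 recommendations only" ;;
  cancel) echo "Aborting application" ;;
  *)      IFS=',' read -ra ids <<< "$choice"
          for id in "${ids[@]}"; do
            echo "Applying recommendation $id"
          done ;;
esac
```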
**Update Status in IMPROVEMENTS.md**:
```markdown
**Status**: Approved for Implementation
**Approval Date**: [YYYY-MM-DD]
**Approved By**: [User]
**Implementation Started**: [YYYY-MM-DD]
```
---
### Phase 2: Implementation Planning (30 minutes)
**Objective**: Create detailed implementation plan for each recommendation
**Migration Coordinator** analyzes each recommendation and creates execution plan:
#### 2.1 Categorize Changes
**Change Types**:
- **Agent Behavior**: Update command files to change how agents behave (tool selection, confirmation patterns, read-before-write rules)
- **Protocol Updates**: Modify process steps, phases, quality gates in protocol documents
- **Automation**: Add scripts, pre-commit hooks, CI/CD workflows
- **Configuration**: Update project config files
- **Documentation**: Update README, guides, templates, examples
- **Tool Integration**: Install/configure new tools
**Examples**:
- Agent Behavior: "Add 'MUST read file before Write/Edit' rule to commands/modernize.md"
- Protocol: "Add dependency analysis as Phase 0.5 in assessment workflow"
- Automation: "Add pre-commit hook for security scanning"
#### 2.2 Dependency Analysis
**Identify dependencies** between recommendations:
```
Recommendation 3 depends on Recommendation 1
→ Must apply Recommendation 1 first
Recommendation 2 and 4 are independent
→ Can apply in parallel
```
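For larger sets of recommendations, `tsort` (coreutils) computes a valid order from prerequisite pairs like the one above:

```shell
# Each input line is "prerequisite dependent"; tsort prints an order that respects it
printf '1 3\n' | tsort
```

Independent recommendations can appear in any valid position, so the output is one workable sequence rather than the only one.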
#### 2.3 Risk Assessment
**For each recommendation**:
- Risk Level: Low/Medium/High
- Rollback Plan: How to undo if issues occur
- Validation: How to verify success
#### 2.4 Execution Order
**Create ordered plan**:
```markdown
## Implementation Plan
### Phase A: Foundation (P0, no dependencies)
1. Recommendation 2: [Title] - 1 hour
2. Recommendation 1: [Title] - 2 hours
### Phase B: Dependent Changes (P0, requires Phase A)
3. Recommendation 3: [Title] - 1.5 hours
### Phase C: Enhancements (P1)
4. Recommendation 4: [Title] - 2 hours
5. Recommendation 5: [Title] - 1 hour
**Total Estimated Time**: 7.5 hours
```
---
### Phase 3: Apply Protocol Updates (1-4 hours)
**Objective**: Update agent protocol documents with process improvements
**For each recommendation requiring protocol updates**:
#### 3.1 Identify Affected Protocols
**Common protocol locations**:
- `commands/modernize.md` - Main modernization protocol
- `commands/assess.md` - Assessment protocol
- `commands/plan.md` - Planning protocol
- Project-specific protocols
**Example**: Recommendation "Front-load dependency analysis"
- Affects: `commands/assess.md` (add dependency analysis step)
- Affects: `commands/plan.md` (add dependency migration matrix)
- Affects: `commands/modernize.md` (Phase 0 prerequisites)
#### 3.2 Update Protocol Content
**Process**:
1. **Read current protocol**
2. **Identify insertion point** (which section/phase)
3. **Draft update** following protocol format
4. **Insert update** maintaining structure
5. **Validate** syntax and formatting
6. **Document change** in commit message
**Example Protocol Update**:
```markdown
# In commands/modernize.md
## Phase 0: Discovery & Assessment
**NEW - Added from IMPROVEMENTS.md Recommendation 1**:
### 0.5 Dependency Migration Matrix (MANDATORY)
**Duration**: 4-6 hours
**Agent**: Architect Agent (lead) + Security Agent
**BLOCKING**: Must complete before Phase 1
**Activities**:
1. **Enumerate All Dependencies**
- List every package/library with current version
- Identify target versions compatible with target framework
- Document dependencies of dependencies (transitive)
2. **Compatibility Analysis**
- Check version compatibility matrix
- Identify known conflicts
- Research breaking changes for each upgrade
- **Deliverable**: Dependency compatibility matrix
3. **Conflict Resolution Strategy**
- For each conflict, document resolution approach
- Identify packages needing replacement
- Define upgrade order (bottom-up from leaf dependencies)
- **Deliverable**: Dependency upgrade sequence
**Exit Criteria**:
- ✅ All dependencies mapped to target versions
- ✅ All conflicts identified with resolution strategy
- ✅ Upgrade order defined
- ✅ No unknowns remaining
**Why This Change**:
[Link to IMPROVEMENTS.md Recommendation 1]
Prevents mid-migration dependency conflicts that historically caused 1-2 week delays.
```
**Example Agent Behavioral Update**:
````markdown
# In commands/modernize.md
## Agent Best Practices
**NEW - Added from IMPROVEMENTS.md Recommendation 4**:
### MANDATORY: Read Before Write/Edit
**Rule**: ALL agents MUST read a file before using Write or Edit tools.
**Rationale**: Prevents overwriting existing content, conflicts, and user interruptions.
Historical evidence: 8 instances of conflicts caused by writing without reading first.
**Protocol**:
1. **Before Write**: ALWAYS use Read tool first
```
❌ WRONG: Use Write without reading
✅ CORRECT: Read → analyze content → Write
```
2. **Before Edit**: File must be read in current conversation
```
❌ WRONG: Edit tool on file never read
✅ CORRECT: Read → identify exact string → Edit
```
3. **Exceptions**: NONE. This rule has no exceptions.
**Enforcement**:
- Migration Coordinator validates all file operations
- User should interrupt if agent violates this rule
- Retrospective will flag violations
**Examples**:
```
User: Update the config file
Agent: [Reads config.json first]
Agent: I see the current configuration has X, Y, Z settings.
Agent: I'll update setting Y to the new value.
Agent: [Uses Edit tool with exact old_string from what was read]
```
**Why This Change**:
[Link to IMPROVEMENTS.md Recommendation 4]
Eliminates 100% of file conflicts and user interruptions caused by not reading files first.
````
**Example Tool Usage Behavioral Update**:
```markdown
# In commands/modernize.md
## Tool Selection Guide
**NEW - Added from IMPROVEMENTS.md Recommendation 5**:
### Use Specialized Tools, Not Bash Commands
**Rule**: Use specialized tools for file operations, NOT bash commands.
**Decision Tree**:
**Need to read file?**
- ✅ Use: `Read` tool
- ❌ Don't use: `cat`, `head`, `tail`, `less`
- **Why**: Read provides line numbers, handles large files, integrates with Edit
**Need to find files?**
- ✅ Use: `Glob` tool with pattern
- ❌ Don't use: `find`, `ls`
- **Why**: Glob is faster, respects .gitignore, returns sorted results
**Need to search file contents?**
- ✅ Use: `Grep` tool
- ❌ Don't use: `grep`, `rg`, `ag`
- **Why**: Grep has better permissions, output modes, context options
**Need to create/modify files?**
- ✅ Use: `Write`, `Edit` tools
- ❌ Don't use: `echo >`, `cat <<EOF`, `sed`, `awk`
- **Why**: Write/Edit are tracked, validated, support rollback
**Only use Bash for**:
- Git operations
- Build commands (npm, dotnet, make)
- Test execution
- System commands that aren't file operations
**Historical Evidence**:
- 23 instances of `cat` instead of Read
- 15 instances of `find` instead of Glob
- 12 instances of `grep` instead of Grep
- All required user corrections
**Why This Change**:
[Link to IMPROVEMENTS.md Recommendation 5]
Reduces wrong tool usage by 95%, improves context handling, eliminates user corrections.
```
#### 3.3 Validate Updates
**Checks**:
- [ ] Markdown syntax valid
- [ ] Section numbering consistent
- [ ] Links work
- [ ] Formatting matches protocol style
- [ ] No broken references
---
### Phase 4: Add Automation (1-3 hours)
**Objective**: Implement automation to enforce improvements
**Common Automation Types**:
#### 4.1 Pre-commit Hooks
**Example**: Automated security scanning
**Create**: `.git/hooks/pre-commit` or use husky/lefthook
```bash
#!/bin/bash
# Pre-commit hook: Security vulnerability scanning
# Added from IMPROVEMENTS.md Recommendation 3
echo "🔍 Scanning for security vulnerabilities..."
# Detect project type and run appropriate scanner
if [ -f "package.json" ]; then
npm audit --audit-level=high
if [ $? -ne 0 ]; then
echo "❌ Security vulnerabilities detected (HIGH or above)"
echo "Run 'npm audit fix' or document exceptions"
exit 1
fi
elif [ -f "requirements.txt" ]; then
pip-audit
if [ $? -ne 0 ]; then
echo "❌ Security vulnerabilities detected"
exit 1
fi
elif compgen -G "*.csproj" > /dev/null; then
  # [ -f "*.csproj" ] does not glob; compgen -G tests whether the pattern matches.
  # dotnet list package exits 0 even when vulnerable packages exist, so grep the output.
  if dotnet list package --vulnerable --include-transitive | grep -q "has the following vulnerable packages"; then
    echo "❌ Security vulnerabilities detected"
    exit 1
  fi
fi
echo "✅ No high/critical vulnerabilities detected"
exit 0
```
**Make executable**:
```bash
chmod +x .git/hooks/pre-commit
```
#### 4.2 Scripts for Common Tasks
**Example**: Dependency analysis script
**Create**: `scripts/analyze-dependencies.sh`
```bash
#!/bin/bash
# Dependency Analysis Script
# Added from IMPROVEMENTS.md Recommendation 1
echo "📦 Dependency Analysis for Modernization"
echo "========================================"
# Detect project type
if [ -f "package.json" ]; then
echo "Project Type: Node.js"
echo ""
echo "Outdated Packages:"
npm outdated --long
echo ""
echo "Dependency Tree:"
npm list --depth=1
elif [ -f "requirements.txt" ]; then
echo "Project Type: Python"
echo ""
echo "Outdated Packages:"
pip list --outdated
elif compgen -G "*.sln" > /dev/null; then  # [ -f "*.sln" ] would not glob
echo "Project Type: .NET"
echo ""
echo "Outdated Packages:"
dotnet list package --outdated --include-transitive
fi
echo ""
echo "✅ Save this output to inform dependency migration matrix"
```
**Make executable**:
```bash
chmod +x scripts/analyze-dependencies.sh
```
#### 4.3 CI/CD Integration
**Example**: Add security scanning to GitHub Actions
**Create/Update**: `.github/workflows/security-scan.yml`
```yaml
name: Security Vulnerability Scan
# Added from IMPROVEMENTS.md Recommendation 3
on:
push:
branches: [ main, develop ]
pull_request:
branches: [ main, develop ]
schedule:
# Run weekly on Monday at 9am UTC
- cron: '0 9 * * 1'
jobs:
security-scan:
runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run security scan
        run: |
          set -o pipefail
          # Auto-detect project type and scan; tee output for the artifact upload step
          {
            if [ -f "package.json" ]; then
              npm audit --audit-level=moderate
            elif [ -f "requirements.txt" ]; then
              pip install pip-audit
              pip-audit
            elif compgen -G "*.csproj" > /dev/null; then
              # dotnet list package exits 0 even with findings; check the report text
              report=$(dotnet list package --vulnerable --include-transitive)
              echo "$report"
              if grep -q "has the following vulnerable packages" <<< "$report"; then
                exit 1
              fi
            fi
          } 2>&1 | tee security-scan.log
      - name: Upload results
        if: failure()
        uses: actions/upload-artifact@v4
        with:
          name: security-scan-results
          path: security-scan.log
```
#### 4.4 Quality Gate Automation
**Example**: Test coverage enforcement
**Create**: `scripts/enforce-coverage.sh`
```bash
#!/bin/bash
# Test Coverage Quality Gate
# Added from IMPROVEMENTS.md Recommendation 4
REQUIRED_COVERAGE=85
echo "📊 Checking test coverage..."
# Run tests with coverage
# (adapt to project type; the awk column below assumes Jest's "All files" summary line)
COVERAGE=$(npm test -- --coverage | grep "All files" | awk '{print $10}' | sed 's/%//')
if (( $(echo "$COVERAGE < $REQUIRED_COVERAGE" | bc -l) )); then
echo "❌ Coverage $COVERAGE% is below required $REQUIRED_COVERAGE%"
exit 1
else
echo "✅ Coverage $COVERAGE% meets requirement"
exit 0
fi
```
---
### Phase 5: Update Documentation (30-60 minutes)
**Objective**: Document the changes for team awareness
#### 5.1 Update README
**Add section**:
```markdown
## Process Improvements
This project has implemented continuous process improvements based on retrospective analysis.
### Recent Improvements
- **[Date]**: Applied 5 recommendations from retrospective
- Front-loaded dependency analysis to Phase 0
- Continuous documentation throughout phases
- Automated security scanning in CI/CD
- Parallel testing workflow
- Quality gate automation
See [IMPROVEMENTS.md](IMPROVEMENTS.md) for details.
### Automation
- Pre-commit hooks: Security scanning
- CI/CD: Weekly vulnerability scans
- Scripts: `scripts/analyze-dependencies.sh`, `scripts/enforce-coverage.sh`
```
#### 5.2 Update Protocol Index
**Create/Update**: `PROTOCOLS.md`
```markdown
# Agent Protocols
## Commands
- `/modernize` - Main modernization orchestration
- `/assess` - Project assessment
- `/plan` - Create detailed plan
- `/retro` - Retrospective analysis
- `/retro-apply` - Apply improvements
## Improvements History
- **2025-11-01**: Applied 5 recommendations from IMPROVEMENTS.md
- See detailed changes in git commit [hash]
```
#### 5.3 Create CHANGELOG Entry
```markdown
## [Unreleased]
### Changed - Process Improvements
- **Phase 0**: Added mandatory dependency migration matrix (IMPROVEMENTS.md Rec 1)
- **All Phases**: Continuous documentation instead of batched (IMPROVEMENTS.md Rec 2)
- **Security**: Automated vulnerability scanning in CI/CD (IMPROVEMENTS.md Rec 3)
- **Testing**: Parallel test execution workflow (IMPROVEMENTS.md Rec 4)
- **Quality Gates**: Automated enforcement scripts (IMPROVEMENTS.md Rec 5)
### Added
- Pre-commit hook for security scanning
- Scripts: `analyze-dependencies.sh`, `enforce-coverage.sh`
- CI/CD: Weekly security scan workflow
- Protocol updates in commands/modernize.md, assess.md, plan.md
```
---
### Phase 6: Validation (30 minutes)
**Objective**: Verify all changes were applied correctly
#### 6.1 Protocol Validation
**For each updated protocol**:
```bash
# Check markdown syntax
markdownlint commands/modernize.md
# Verify all sections present
grep -q "## Phase 0: Discovery & Assessment" commands/modernize.md && echo "✅ Section exists"
# Check for broken links
# (use link checker tool)
```
#### 6.2 Automation Validation
**Test each automation**:
```bash
# Test pre-commit hook
.git/hooks/pre-commit
echo "Exit code: $?"
# Test dependency analysis script
./scripts/analyze-dependencies.sh
# Test coverage enforcement
./scripts/enforce-coverage.sh
```
#### 6.3 Documentation Validation
**Verify documentation**:
- [ ] README updated with improvements summary
- [ ] CHANGELOG.md has entry
- [ ] PROTOCOLS.md reflects changes
- [ ] Links work
- [ ] Examples accurate
---
### Phase 7: Commit and Track (15 minutes)
**Objective**: Commit changes with proper documentation
#### 7.1 Git Commit
**Commit Structure**:
```bash
git add -A
git commit -m "$(cat <<'EOF'
Apply process improvements from IMPROVEMENTS.md
Implemented 5 recommendations from retrospective analysis:
1. Front-load dependency analysis (Rec 1)
- Updated commands/modernize.md Phase 0
- Added scripts/analyze-dependencies.sh
- Estimated savings: 1-2 weeks per project
2. Continuous documentation (Rec 2)
- Updated all commands/*.md protocols
- Added incremental documentation checkpoints
- Estimated savings: 50% of Phase 6 time
3. Automated security scanning (Rec 3)
- Added pre-commit hook for vulnerability scanning
- Added .github/workflows/security-scan.yml
- Prevents vulnerable dependencies
4. Parallel testing workflow (Rec 4)
- Updated commands/modernize.md Phase 3-4
- Added parallel test execution strategy
- Estimated savings: 30% testing time
5. Quality gate automation (Rec 5)
- Added scripts/enforce-coverage.sh
- Updated quality gate criteria
- Eliminates manual gate checking
Changes applied from: IMPROVEMENTS.md
Generated by: /retro command
Applied by: /retro-apply command
See IMPROVEMENTS.md for detailed rationale and evidence.
EOF
)"
```
#### 7.2 Update IMPROVEMENTS.md Status
**Mark as implemented**:
```markdown
**Status**: ✅ Implemented
**Implementation Date**: [YYYY-MM-DD]
**Applied By**: [User/Agent]
**Commit**: [Git commit hash]
## Implementation Summary
All recommendations successfully applied:
- ✅ Recommendation 1: Protocol updated, script added
- ✅ Recommendation 2: All protocols updated
- ✅ Recommendation 3: Automation added
- ✅ Recommendation 4: Workflow updated
- ✅ Recommendation 5: Scripts added
**Next Steps**:
- Validate effectiveness in next modernization project
- Track metrics to confirm estimated improvements
- Iterate based on results
```
#### 7.3 Update HISTORY.md
**Add entry**:
```markdown
## [Date] - Process Improvements Applied
**Agent**: Migration Coordinator + All Agents
**Action**: Applied recommendations from retrospective
### What Changed
- Applied 5 process improvement recommendations from IMPROVEMENTS.md
- Updated protocols in commands/modernize.md, assess.md, plan.md
- Added automation: pre-commit hooks, CI/CD workflows, utility scripts
- Enhanced documentation with improvement tracking
### Why Changed
- Retrospective analysis identified inefficiencies and risks
- Evidence-based recommendations promised 20-30% efficiency gain
- Continuous improvement culture
### Impact
- Protocol enhancements: 3 files updated
- New automation: 2 scripts, 1 CI/CD workflow, 1 pre-commit hook
- Estimated time savings: 4-6 days per future project
- Risk reduction: Earlier detection of security and dependency issues
### Outcome
✅ All recommendations implemented and validated
📊 Will track effectiveness in next project
```
---
## Success Criteria
**Application is successful when**:
**All Approved Recommendations Applied**:
- Protocol files updated correctly
- Automation scripts created and tested
- Configuration changes made
- Documentation updated
**Validation Passed**:
- All syntax valid
- Scripts executable and functional
- Links work
- No broken references
**Properly Documented**:
- Git commit with detailed message
- IMPROVEMENTS.md status updated
- HISTORY.md entry added
- CHANGELOG.md updated
**Testable**:
- Automation can be executed
- Changes can be validated
- Rollback procedure documented
---
## Rollback Procedure
**If issues occur during application**:
### 1. Immediate Rollback
```bash
# Revert all changes
git reset --hard HEAD~1
# Or revert specific files
git checkout HEAD~1 -- commands/modernize.md
git checkout HEAD~1 -- scripts/
```
### 2. Partial Rollback
**If one recommendation causes issues**:
```bash
# Revert just that change
git revert <commit-hash-for-that-recommendation>
# Update IMPROVEMENTS.md
# Mark recommendation as "Rolled Back - [Reason]"
```
### 3. Document Issue
**Update IMPROVEMENTS.md**:
```markdown
### Recommendation X: [Title]
**Status**: ❌ Rolled Back
**Reason**: [Specific issue encountered]
**Date**: [YYYY-MM-DD]
**Issue Details**:
- [What went wrong]
- [Impact observed]
**Next Steps**:
- [How to address in future]
- [Alternative approach]
```
---
## Example Usage
### Applying All Recommendations
```
User: /retro-apply
User: Apply all recommendations
Claude: Loading IMPROVEMENTS.md...
Found 5 recommendations:
1. Front-load dependency analysis (P0, High impact, Medium effort)
2. Continuous documentation (P0, High impact, Low effort)
3. Automated security scanning (P0, High impact, Medium effort)
4. Parallel testing workflow (P1, Medium impact, Low effort)
5. Quality gate automation (P1, Medium impact, Low effort)
Confirm: Apply all 5 recommendations? (yes/no)
User: yes
Claude: ✅ Confirmed. Beginning implementation...
[Phase 1: Review and Approval]
- Marking recommendations as approved in IMPROVEMENTS.md
[Phase 2: Implementation Planning]
- Analyzing dependencies between recommendations
- Creating execution plan
- Estimated time: 7.5 hours
Execution Plan:
Phase A (Foundation):
- Rec 2: Continuous documentation (1h)
- Rec 1: Front-load dependency analysis (2h)
Phase B (Dependent):
- Rec 3: Automated security scanning (2h)
Phase C (Enhancements):
- Rec 4: Parallel testing (1.5h)
- Rec 5: Quality gates (1h)
[Phase 3: Apply Protocol Updates]
- Updating commands/modernize.md
✅ Added Phase 0.5: Dependency Migration Matrix
✅ Updated all phases for continuous documentation
✅ Added parallel testing workflow
- Updating commands/assess.md
✅ Added dependency analysis step
- Updating commands/plan.md
✅ Added dependency migration matrix planning
[Phase 4: Add Automation]
- Creating scripts/analyze-dependencies.sh
✅ Script created and tested
- Creating .git/hooks/pre-commit
✅ Pre-commit hook installed
- Creating .github/workflows/security-scan.yml
✅ CI/CD workflow added
- Creating scripts/enforce-coverage.sh
✅ Quality gate script created
[Phase 5: Update Documentation]
- Updating README.md
✅ Process improvements section added
- Creating PROTOCOLS.md
✅ Protocol index created
- Updating CHANGELOG.md
✅ Changelog entry added
[Phase 6: Validation]
- Validating protocol syntax: ✅ All valid
- Testing automation scripts: ✅ All pass
- Checking documentation links: ✅ All work
[Phase 7: Commit and Track]
- Committing changes
✅ Commit [abc123] created
- Updating IMPROVEMENTS.md status
✅ Marked as implemented
- Adding HISTORY.md entry
✅ Entry added
✅ All 5 recommendations successfully applied!
Summary:
- Protocol files updated: 3
- Scripts added: 2
- CI/CD workflows added: 1
- Pre-commit hooks: 1
- Documentation updated: 3 files
Estimated impact: 4-6 days savings per future project
Next: Validate improvements in next modernization project
```
---
## Anti-Patterns to Avoid
**Applying without approval** - Always get user confirmation first
**Partial application** - Once started, complete all approved recommendations
**No validation** - Always test scripts and automation before committing
**Poor documentation** - Document what changed and why in commits
**No rollback plan** - Always know how to undo changes
**Ignoring dependencies** - Apply recommendations in correct order
**Breaking existing workflows** - Ensure changes are backward compatible
---
## Troubleshooting
### Issue: Recommendation unclear or ambiguous
**Solution**:
- Ask user for clarification
- Update IMPROVEMENTS.md with clearer description
- Don't guess at implementation
### Issue: Automation script fails validation
**Solution**:
- Debug script
- Test in isolation
- Don't commit broken automation
- Document issue in IMPROVEMENTS.md
### Issue: Protocol update creates inconsistency
**Solution**:
- Review full protocol for consistency
- Update related sections
- Validate all internal links
- Check examples align with changes
### Issue: Merge conflicts in protocols
**Solution**:
- Manually resolve conflicts
- Prioritize new improvements
- Validate merged result
- Test all changes
---
**Document Owner**: Migration Coordinator
**Protocol Version**: 1.0
**Last Updated**: 2025-11-01
**Prerequisite**: `/retro` command must be run first
**Input**: IMPROVEMENTS.md
**Remember**: **Improvements are worthless if not applied. Make changes permanent through automation and protocols.**

`commands/retro.md` (new file, 1124 lines; diff suppressed because it is too large)