Initial commit
`commands/analyze-quality.md` (new file, 481 lines)
---
description: Comprehensive codebase quality analysis with prioritized recommendations
---

<!--
ATTRIBUTION CHAIN:
1. Original: SpecLab plugin by Marty Bonacci & Claude Code (2025)
2. Phase 2 Enhancement: Comprehensive quality analysis feature
-->

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Goal

Perform a comprehensive codebase quality analysis to identify improvement opportunities and generate prioritized recommendations.

**Key Capabilities**:

1. **Test Coverage Gaps**: Find source files without tests
2. **Architecture Issues**: Detect anti-patterns and violations
3. **Missing Documentation**: Identify undocumented code
4. **Performance Issues**: Find optimization opportunities
5. **Security Issues**: Detect potential vulnerabilities
6. **Quality Scoring**: Calculate module-level quality scores

**Coverage**: Provides a holistic assessment of codebase health.

---

## Execution Steps

### 1. Initialize Analysis Context

**YOU MUST NOW initialize the analysis using the Bash tool:**

1. **Get repository root:**
   ```bash
   git rev-parse --show-toplevel 2>/dev/null || pwd
   ```
   Store as REPO_ROOT.

2. **Display analysis banner:**
   ```
   📊 Codebase Quality Analysis
   ============================

   Analyzing: {REPO_ROOT}
   Started: {TIMESTAMP}
   ```

3. **Detect project type** by reading package.json (or the equivalent manifest):
   - Framework: Vite, Next.js, Remix, CRA, or Generic
   - Language: JavaScript, TypeScript, Python, Go, etc.
   - Test framework: Vitest, Jest, Pytest, etc.

---

### 2. Test Coverage Gap Analysis

**YOU MUST NOW analyze test coverage gaps:**

1. **Find all source files** using Bash:
   ```bash
   find ${REPO_ROOT} -type f \( -name "*.ts" -o -name "*.tsx" -o -name "*.js" -o -name "*.jsx" \) \
     -not -path "*/node_modules/*" \
     -not -path "*/dist/*" \
     -not -path "*/build/*" \
     -not -path "*/.next/*" \
     -not -path "*/test/*" \
     -not -path "*/tests/*" \
     -not -path "*/__tests__/*"
   ```
   Store count as TOTAL_SOURCE_FILES.

2. **Find all test files** using Bash:
   ```bash
   find ${REPO_ROOT} -type f \( \
     -name "*.test.ts" -o -name "*.test.tsx" -o -name "*.test.js" -o -name "*.test.jsx" -o \
     -name "*.spec.ts" -o -name "*.spec.tsx" -o -name "*.spec.js" -o -name "*.spec.jsx" \
     \) -not -path "*/node_modules/*"
   ```
   Store count as TOTAL_TEST_FILES.

3. **Calculate test coverage ratio:**
   ```
   TEST_RATIO = (TOTAL_TEST_FILES / TOTAL_SOURCE_FILES) * 100
   ```
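The ratio is simple integer math, but a sketch should guard against a project with no source files yet (division by zero). The counts below are hypothetical; in a real run they come from the two `find` commands above:

```shell
# Hypothetical counts; in a real run these come from the find commands above.
TOTAL_SOURCE_FILES=40
TOTAL_TEST_FILES=10

# Guard against division by zero for a project with no source files yet.
if [ "$TOTAL_SOURCE_FILES" -gt 0 ]; then
  TEST_RATIO=$(( TOTAL_TEST_FILES * 100 / TOTAL_SOURCE_FILES ))
else
  TEST_RATIO=0
fi
echo "Test ratio: ${TEST_RATIO}%"
```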

4. **Find files without corresponding tests:**
   - For each source file in app/, src/, lib/
   - Check whether a corresponding test file exists
   - Add it to the UNTESTED_FILES list if no test is found

5. **Display test coverage gaps:**
   ```
   📋 Test Coverage Gaps
   =====================

   Source Files: {TOTAL_SOURCE_FILES}
   Test Files: {TOTAL_TEST_FILES}
   Test Ratio: {TEST_RATIO}%

   Files Without Tests ({COUNT}):
   {TOP_10_UNTESTED_FILES}

   Priority: {HIGH/MEDIUM/LOW}
   Impact: Missing tests for {COUNT} files
   ```

---
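The source-to-test pairing in step 4 can be sketched as follows, assuming the common `foo.ts → foo.test.ts / foo.spec.ts` naming convention; the `demo/` tree here is a fabricated stand-in for app/ or src/:

```shell
# Sketch: flag source files with no sibling *.test.* or *.spec.* file.
mkdir -p demo/src
touch demo/src/a.ts demo/src/a.test.ts demo/src/b.ts

UNTESTED_FILES=""
for f in demo/src/*.ts; do
  case "$f" in *.test.*|*.spec.*) continue ;; esac   # skip test files themselves
  base="${f%.ts}"
  if [ ! -e "${base}.test.ts" ] && [ ! -e "${base}.spec.ts" ]; then
    UNTESTED_FILES="$UNTESTED_FILES $f"
  fi
done
echo "Untested:$UNTESTED_FILES"
rm -r demo
```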
### 3. Architecture Pattern Analysis

**YOU MUST NOW analyze architectural patterns:**

1. **Run SSR pattern validator** using Bash:
   ```bash
   bash ~/.claude/plugins/marketplaces/specswarm-marketplace/plugins/specswarm/lib/ssr-validator.sh
   ```
   Capture issues count and details.

2. **Scan for common anti-patterns** using Grep:

   a. **useEffect with fetch** (should use loaders):
   ```bash
   grep -rn "useEffect.*fetch" app/ src/ --include="*.tsx" --include="*.ts"
   ```

   b. **Client-side state for server data** (should use loader data):
   ```bash
   grep -rn "useState.*fetch\|useState.*axios" app/ src/ --include="*.tsx"
   ```

   c. **Class components** (should use functional):
   ```bash
   grep -rn "class.*extends React.Component" app/ src/ --include="*.tsx" --include="*.jsx"
   ```

   d. **Inline styles** (should use Tailwind/CSS modules):
   ```bash
   grep -rn "style={{" app/ src/ --include="*.tsx" --include="*.jsx"
   ```
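To turn any of these grep passes into a count for the report below, pipe through `wc -l` rather than relying on grep's exit status (grep exits non-zero when nothing matches). A minimal sketch, with `demo/` as a fabricated stand-in tree:

```shell
# Sketch: counting anti-pattern occurrences without aborting on zero matches.
mkdir -p demo
printf 'useEffect(() => { fetch("/api/data") }, [])\n' > demo/App.tsx
printf 'export const x = 1\n' > demo/util.ts

REACT_ISSUES=$(grep -rn "useEffect.*fetch" demo --include="*.tsx" | wc -l | tr -d ' ')
echo "useEffect-with-fetch occurrences: $REACT_ISSUES"
rm -r demo
```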

3. **Display architecture issues:**
   ```
   🏗️ Architecture Issues
   ======================

   SSR Patterns: {SSR_ISSUES} issues
   - Hardcoded URLs: {COUNT}
   - Relative URLs in SSR: {COUNT}

   React Anti-Patterns: {REACT_ISSUES} issues
   - useEffect with fetch: {COUNT}
   - Client-side state: {COUNT}
   - Class components: {COUNT}

   Styling Issues: {STYLE_ISSUES} issues
   - Inline styles: {COUNT}

   Priority: {CRITICAL/HIGH/MEDIUM/LOW}
   Impact: Architectural debt in {TOTAL_ISSUES} locations
   ```

---

### 4. Documentation Gap Analysis

**YOU MUST NOW analyze documentation gaps:**

1. **Find functions without JSDoc** using Grep:
   ```bash
   grep -rn "^function\|^export function\|^const.*= (" app/ src/ --include="*.ts" --include="*.tsx" | \
   while read -r line; do
     : # Check whether the preceding line opens a /** comment
   done
   ```

2. **Find components without prop types:**
   ```bash
   grep -rn "export.*function.*Component\|export.*const.*Component" app/ src/ --include="*.tsx"
   ```
   Check if they have TypeScript interface/type definitions.

3. **Find API endpoints without OpenAPI/comments:**
   ```bash
   find app/routes/ app/api/ -type f \( -name "*.ts" -o -name "*.tsx" \)
   ```
   Check for route handler documentation.

4. **Display documentation gaps:**
   ```
   📚 Documentation Gaps
   =====================

   Functions without JSDoc: {COUNT}
   Components without prop types: {COUNT}
   API endpoints without docs: {COUNT}

   Priority: {MEDIUM/LOW}
   Impact: {COUNT} undocumented items
   ```

---
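The "previous line has `/**`" check in step 1 is easier to express in awk, which remembers the prior line for free. A sketch under the assumption that a doc comment ends with `*/` on the line immediately above the export; `demo/util.ts` is a fabricated example:

```shell
# Sketch: flag exported functions whose preceding line does not close a /** */ comment.
mkdir -p demo
cat > demo/util.ts <<'EOF'
/** Adds two numbers. */
export function add(a: number, b: number) { return a + b }
export function sub(a: number, b: number) { return a - b }
EOF

MISSING=$(awk '/^export function/ && prev !~ /\*\// { print FILENAME ":" FNR } { prev = $0 }' demo/util.ts)
echo "Missing JSDoc at: $MISSING"
rm -r demo
```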

### 5. Performance Issue Detection

**YOU MUST NOW detect performance issues:**

1. **Run bundle size analyzer** (Phase 3 Enhancement) using Bash:
   ```bash
   bash ~/.claude/plugins/marketplaces/specswarm-marketplace/plugins/speclabs/lib/bundle-size-monitor.sh ${REPO_ROOT}
   ```
   Capture:
   - Total bundle size
   - List of large bundles (>500KB)
   - List of critical bundles (>1MB)
   - Bundle size score (0-20 points)

2. **Find missing lazy loading:**
   ```bash
   grep -rn "import.*from" app/pages/ app/routes/ --include="*.tsx" | \
     grep -v "React.lazy\|lazy("
   ```

3. **Find unoptimized images:**
   ```bash
   find public/ static/ app/ -type f \( -name "*.jpg" -o -name "*.png" \) -size +100k
   ```
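Note that `-size +100k` matches files strictly larger than 100 KiB. A self-contained sketch of the image check, using `dd` to fabricate files of known size:

```shell
# Sketch: only the file above the 100 KiB threshold is reported.
mkdir -p demo
dd if=/dev/zero of=demo/big.png bs=1024 count=150 2>/dev/null
dd if=/dev/zero of=demo/small.png bs=1024 count=10 2>/dev/null

LARGE_IMAGES=$(find demo -type f -name "*.png" -size +100k)
echo "Unoptimized: $LARGE_IMAGES"
rm -r demo
```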

4. **Display performance issues:**
   ```
   ⚡ Performance Issues
   =====================

   Bundle Sizes (Phase 3):
   - Total: {TOTAL_SIZE}
   - Large bundles (>500KB): {COUNT}
   - Critical bundles (>1MB): {COUNT}
   - Top offenders:
     * {LARGEST_BUNDLE_1}
     * {LARGEST_BUNDLE_2}
   - Score: {SCORE}/20 points

   Missing Lazy Loading: {COUNT} routes
   Unoptimized Images: {COUNT} files (>{SIZE})

   Priority: {HIGH/MEDIUM}
   Impact: Page load performance degraded by {TOTAL_SIZE}
   ```

---

### 6. Security Issue Detection

**YOU MUST NOW detect security issues:**

1. **Find exposed secrets** using Grep:
   ```bash
   grep -rn "API_KEY\|SECRET\|PASSWORD.*=" app/ src/ --include="*.ts" --include="*.tsx" | \
     grep -v "process.env\|import.meta.env"
   ```

2. **Find missing input validation:**
   ```bash
   grep -rn "formData.get\|request.body\|params\." app/routes/ app/api/ --include="*.ts"
   ```
   Check if followed by validation logic.

3. **Find potential XSS vulnerabilities:**
   ```bash
   grep -rn "dangerouslySetInnerHTML\|innerHTML" app/ src/ --include="*.tsx" --include="*.jsx"
   ```
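The secret scan works because the second grep drops matches that merely read the value from the environment. A sketch with two fabricated files, one hardcoded leak and one legitimate env read:

```shell
# Sketch: the hardcoded key is flagged; the process.env read is not.
mkdir -p demo
printf 'const API_KEY = "sk-12345"\n' > demo/bad.ts
printf 'const key = process.env.API_KEY\n' > demo/ok.ts

LEAKS=$(grep -rn "API_KEY" demo --include="*.ts" | grep -v "process.env\|import.meta.env")
echo "Potential leaks: $LEAKS"
rm -r demo
```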

4. **Display security issues:**
   ```
   🔒 Security Issues
   ==================

   Exposed Secrets: {COUNT} potential leaks
   Missing Input Validation: {COUNT} endpoints
   XSS Vulnerabilities: {COUNT} locations

   Priority: {CRITICAL if >0, else LOW}
   Impact: Security risk in {COUNT} locations
   ```

---

### 7. Module-Level Quality Scoring

**YOU MUST NOW calculate quality scores per module:**

1. **Group files by module/directory:**
   - app/pages/
   - app/components/
   - app/utils/
   - src/services/
   - etc.

2. **For each module, calculate score:**
   - Test Coverage: Has tests? (+25 points)
   - Documentation: Has JSDoc/types? (+15 points)
   - No Anti-Patterns: Clean architecture? (+20 points)
   - No Security Issues: Secure? (+20 points)
   - Performance: Optimized? (+20 points)
   - Total: 0-100 points
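The scoring rule above can be sketched directly, assuming the earlier analysis passes have reduced each criterion to a 0/1 flag for the module being scored (the flag values here are hypothetical):

```shell
# Sketch: weighted sum of pass/fail flags, 0-100 total.
HAS_TESTS=1; HAS_DOCS=1; CLEAN_ARCH=0; SECURE=1; OPTIMIZED=0

MODULE_SCORE=$(( HAS_TESTS * 25 + HAS_DOCS * 15 + CLEAN_ARCH * 20 + SECURE * 20 + OPTIMIZED * 20 ))
echo "Module score: ${MODULE_SCORE}/100"
```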

3. **Display module scores:**
   ```
   📊 Module Quality Scores
   ========================

   app/pages/: 75/100 ████████████████░░░░ (Good)
   - Test Coverage: ✓ 25/25
   - Documentation: ✓ 15/15
   - Architecture: ⚠️ 15/20 (2 anti-patterns)
   - Security: ✓ 20/20
   - Performance: ✗ 0/20 (missing lazy load)

   app/components/: 60/100 ████████████░░░░░░░░ (Needs Improvement)
   - Test Coverage: ✗ 0/25 (no tests)
   - Documentation: ⚠️ 10/15 (missing prop types)
   - Architecture: ✓ 20/20
   - Security: ✓ 20/20
   - Performance: ⚠️ 10/20 (large bundle)

   Overall Codebase Score: {AVERAGE}/100
   ```

---

### 8. Prioritized Recommendations

**YOU MUST NOW generate prioritized recommendations:**

1. **Critical Priority** (security, production failures):
   - Exposed secrets
   - SSR pattern violations
   - Security vulnerabilities

2. **High Priority** (quality, maintainability):
   - Missing test coverage for core modules
   - Major architecture anti-patterns
   - Large bundle sizes

3. **Medium Priority** (improvement opportunities):
   - Missing documentation
   - Minor anti-patterns
   - Missing lazy loading

4. **Low Priority** (nice-to-have):
   - Additional JSDoc
   - Inline style cleanup
   - Image optimization
5. **Display recommendations:**
   ```
   📈 Prioritized Recommendations
   ==============================

   🔴 CRITICAL (Fix Immediately):
   1. Remove 3 exposed API keys from client code
      Impact: Security breach risk
      Files: app/config.ts:12, app/utils/api.ts:45, src/client.ts:8
      Fix: Move to environment variables

   2. Fix 11 hardcoded URLs in SSR contexts
      Impact: Production deployment will fail
      Files: See SSR validation report
      Fix: Use getApiUrl() helper

   🟠 HIGH (Fix This Week):
   3. Add tests for 15 untested core modules
      Impact: No regression protection
      Modules: app/pages/, app/components/Auth/
      Fix: Generate test templates

   4. Reduce bundle size by 2.5MB
      Impact: Slow page loads
      Files: dist/main.js (3.2MB), dist/vendor.js (1.8MB)
      Fix: Implement code splitting and lazy loading

   🟡 MEDIUM (Fix This Sprint):
   5. Add JSDoc to 45 functions
      Impact: Maintainability
      Fix: Use IDE code generation

   6. Replace 12 useEffect fetches with loaders
      Impact: Architecture consistency
      Fix: Migrate to React Router loaders

   🟢 LOW (Nice to Have):
   7. Optimize 8 large images
      Impact: Minor performance gain
      Fix: Use next/image or image optimization service

   Estimated Impact: Fixing critical + high items would increase quality score from {CURRENT}/100 → {PROJECTED}/100
   ```

---

### 9. Generate Analysis Report

**YOU MUST NOW generate final report:**

1. **Create summary:**
   ```
   ═══════════════════════════════════════════════
   Quality Analysis Report
   ═══════════════════════════════════════════════

   Overall Quality Score: {SCORE}/100 {STATUS_EMOJI}

   Breakdown:
   - Test Coverage: {SCORE}/100
   - Architecture: {SCORE}/100
   - Documentation: {SCORE}/100
   - Performance: {SCORE}/100
   - Security: {SCORE}/100

   Issues Found:
   - Critical: {COUNT} 🔴
   - High: {COUNT} 🟠
   - Medium: {COUNT} 🟡
   - Low: {COUNT} 🟢

   Total Issues: {TOTAL}
   ```

2. **Save report** to file:
   - Write to: `.specswarm/quality-analysis-{TIMESTAMP}.md`
   - Include all sections above
   - Add file-by-file details
   - Include fix commands/examples

3. **Display next steps:**
   ```
   📋 Next Steps
   =============

   1. Review full report: .specswarm/quality-analysis-{TIMESTAMP}.md
   2. Fix critical issues first (security + SSR)
   3. Run quality validation: /specswarm:implement --validate-only
   4. Track improvements in metrics.json

   Commands:
   - View detailed SSR issues: bash plugins/specswarm/lib/ssr-validator.sh
   - Generate test templates: /specswarm:implement {feature}
   - Re-run analysis: /specswarm:analyze-quality
   ```

---
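One way to fill `{TIMESTAMP}` in the report path is a sortable UTC timestamp, which keeps successive reports ordered on disk (the format string here is a suggestion, not mandated by the source):

```shell
# Sketch: derive the report path with a sortable UTC timestamp.
TIMESTAMP=$(date -u +"%Y%m%d-%H%M%S")
REPORT_PATH=".specswarm/quality-analysis-${TIMESTAMP}.md"
mkdir -p .specswarm
echo "# Quality Analysis Report" > "$REPORT_PATH"
echo "Saved: $REPORT_PATH"
rm -r .specswarm
```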

## Success Criteria

✅ All analysis sections completed
✅ Issues categorized by priority
✅ Recommendations generated with impact estimates
✅ Report saved to .specswarm/
✅ Next steps provided

---

## Operating Principles

1. **Comprehensive**: Scan the entire codebase, not just recent changes
2. **Prioritized**: Critical issues first, nice-to-haves last
3. **Actionable**: Provide specific fixes, not just problems
4. **Impact-Aware**: Estimate quality score improvement
5. **Automated**: Runnable without user interaction
6. **Repeatable**: Track improvements over time

---

**Note**: This command provides a holistic view of codebase quality. Use it before major releases, after adding new features, or when quality scores decline.
`commands/analyze.md` (new file, 192 lines)
---
description: Perform a non-destructive cross-artifact consistency and quality analysis across spec.md, plan.md, and tasks.md after task generation.
---

<!--
ATTRIBUTION CHAIN:
1. Original: GitHub spec-kit (https://github.com/github/spec-kit)
   Copyright (c) GitHub, Inc. | MIT License
2. Adapted: SpecKit plugin by Marty Bonacci (2025)
3. Forked: SpecSwarm plugin with tech stack management
   by Marty Bonacci & Claude Code (2025)
-->

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Goal

Identify inconsistencies, duplications, ambiguities, and underspecified items across the three core artifacts (`spec.md`, `plan.md`, `tasks.md`) before implementation. This command MUST run only after `/tasks` has successfully produced a complete `tasks.md`.

## Operating Constraints

**STRICTLY READ-ONLY**: Do **not** modify any files. Output a structured analysis report. Offer an optional remediation plan (the user must explicitly approve it before any follow-up editing commands are invoked manually).

**Constitution Authority**: The project constitution (`.specswarm/constitution.md`) is **non-negotiable** within this analysis scope. Constitution conflicts are automatically CRITICAL and require adjustment of the spec, plan, or tasks—not dilution, reinterpretation, or silent ignoring of the principle. If a principle itself needs to change, that must occur in a separate, explicit constitution update outside `/analyze`.

## Execution Steps

### 1. Initialize Analysis Context

Locate the feature directory by searching for `spec.md`, `plan.md`, and `tasks.md` files in the repository. Derive absolute paths:

- SPEC = FEATURE_DIR/spec.md
- PLAN = FEATURE_DIR/plan.md
- TASKS = FEATURE_DIR/tasks.md

Abort with an error message if any required file is missing (instruct the user to run the missing prerequisite command).

### 2. Load Artifacts (Progressive Disclosure)

Load only the minimal necessary context from each artifact:

**From spec.md:**

- Overview/Context
- Functional Requirements
- Non-Functional Requirements
- User Stories
- Edge Cases (if present)

**From plan.md:**

- Architecture/stack choices
- Data Model references
- Phases
- Technical constraints

**From tasks.md:**

- Task IDs
- Descriptions
- Phase grouping
- Parallel markers [P]
- Referenced file paths

**From constitution:**

- Load `.specswarm/constitution.md` for principle validation

### 3. Build Semantic Models

Create internal representations (do not include raw artifacts in the output):

- **Requirements inventory**: Each functional + non-functional requirement with a stable key (derive a slug from the imperative phrase; e.g., "User can upload file" → `user-can-upload-file`)
- **User story/action inventory**: Discrete user actions with acceptance criteria
- **Task coverage mapping**: Map each task to one or more requirements or stories (inferred by keyword or explicit reference patterns such as IDs or key phrases)
- **Constitution rule set**: Extract principle names and MUST/SHOULD normative statements

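The slug derivation can be sketched with standard text tools; this is one possible implementation of the "User can upload file" → `user-can-upload-file` example, not a mandated algorithm:

```shell
# Sketch: lowercase, collapse non-alphanumeric runs to "-", trim edges.
slugify() {
  printf '%s' "$1" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '-' | sed 's/^-*//; s/-*$//'
}
KEY=$(slugify "User can upload file")
echo "$KEY"
```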
### 4. Detection Passes (Token-Efficient Analysis)

Focus on high-signal findings. Limit output to 50 findings total; aggregate the remainder in an overflow summary.

#### A. Duplication Detection

- Identify near-duplicate requirements
- Mark lower-quality phrasing for consolidation

#### B. Ambiguity Detection

- Flag vague adjectives (fast, scalable, secure, intuitive, robust) lacking measurable criteria
- Flag unresolved placeholders (TODO, TKTK, ???, `<placeholder>`, etc.)

#### C. Underspecification

- Requirements with verbs but missing an object or measurable outcome
- User stories missing acceptance criteria alignment
- Tasks referencing files or components not defined in the spec/plan

#### D. Constitution Alignment

- Any requirement or plan element conflicting with a MUST principle
- Missing mandated sections or quality gates from the constitution

#### E. Coverage Gaps

- Requirements with zero associated tasks
- Tasks with no mapped requirement/story
- Non-functional requirements not reflected in tasks (e.g., performance, security)

#### F. Inconsistency

- Terminology drift (same concept named differently across files)
- Data entities referenced in the plan but absent in the spec (or vice versa)
- Task ordering contradictions (e.g., integration tasks before foundational setup tasks without a dependency note)
- Conflicting requirements (e.g., one requires Next.js while another specifies Vue)

### 5. Severity Assignment

Use this heuristic to prioritize findings:

- **CRITICAL**: Violates a constitution MUST, missing core spec artifact, or a requirement with zero coverage that blocks baseline functionality
- **HIGH**: Duplicate or conflicting requirement, ambiguous security/performance attribute, untestable acceptance criterion
- **MEDIUM**: Terminology drift, missing non-functional task coverage, underspecified edge case
- **LOW**: Style/wording improvements, minor redundancy not affecting execution order

### 6. Produce Compact Analysis Report

Output a Markdown report (no file writes) with the following structure:

## Specification Analysis Report

| ID | Category | Severity | Location(s) | Summary | Recommendation |
|----|----------|----------|-------------|---------|----------------|
| A1 | Duplication | HIGH | spec.md:L120-134 | Two similar requirements ... | Merge phrasing; keep clearer version |

(Add one row per finding; generate stable IDs prefixed by the category initial.)

**Coverage Summary Table:**

| Requirement Key | Has Task? | Task IDs | Notes |
|-----------------|-----------|----------|-------|

**Constitution Alignment Issues:** (if any)

**Unmapped Tasks:** (if any)

**Metrics:**

- Total Requirements
- Total Tasks
- Coverage % (requirements with >=1 task)
- Ambiguity Count
- Duplication Count
- Critical Issues Count

### 7. Provide Next Actions

At the end of the report, output a concise Next Actions block:

- If CRITICAL issues exist: Recommend resolving them before `/implement`
- If only LOW/MEDIUM: The user may proceed, but provide improvement suggestions
- Provide explicit command suggestions: e.g., "Run /specify with refinement", "Run /plan to adjust architecture", "Manually edit tasks.md to add coverage for 'performance-metrics'"

### 8. Offer Remediation

Ask the user: "Would you like me to suggest concrete remediation edits for the top N issues?" (Do NOT apply them automatically.)

## Operating Principles

### Context Efficiency

- **Minimal high-signal tokens**: Focus on actionable findings, not exhaustive documentation
- **Progressive disclosure**: Load artifacts incrementally; don't dump all content into the analysis
- **Token-efficient output**: Limit the findings table to 50 rows; summarize overflow
- **Deterministic results**: Rerunning without changes should produce consistent IDs and counts

### Analysis Guidelines

- **NEVER modify files** (this is a read-only analysis)
- **NEVER hallucinate missing sections** (if absent, report them accurately)
- **Prioritize constitution violations** (these are always CRITICAL)
- **Use examples over exhaustive rules** (cite specific instances, not generic patterns)
- **Report zero issues gracefully** (emit a success report with coverage statistics)

## Context

{ARGS}
`commands/bugfix.md` (new file, 1166 lines; diff suppressed because it is too large)
`commands/build.md` (new file, 441 lines)
---
description: Build complete feature from specification to implementation - simplified workflow
args:
  - name: feature_description
    description: Natural language description of the feature to build
    required: true
  - name: --validate
    description: Run browser validation with Playwright after implementation
    required: false
  - name: --quality-gate
    description: Set minimum quality score (default 80)
    required: false
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Goal

Build a complete feature from a natural language description through implementation and quality validation.

**Purpose**: Simplify feature development by orchestrating the complete workflow in a single command.

**Workflow**: Specify → Clarify → Plan → Tasks → Implement → (Validate) → Quality Analysis

**User Experience**:
- Single command instead of 7+ manual steps
- Interactive clarification (the only pause point)
- Autonomous execution through implementation
- Quality validated automatically
- Ready for final merge with `/specswarm:ship`

---

## Pre-Flight Checks

```bash
# Parse arguments
FEATURE_DESC=""
RUN_VALIDATE=false
QUALITY_GATE=80
EXPECT_GATE=false

# Extract feature description (first non-flag argument)
for arg in $ARGUMENTS; do
  if [ "$EXPECT_GATE" = true ]; then
    QUALITY_GATE="$arg"
    EXPECT_GATE=false
  elif [ "${arg:0:2}" != "--" ] && [ -z "$FEATURE_DESC" ]; then
    FEATURE_DESC="$arg"
  elif [ "$arg" = "--validate" ]; then
    RUN_VALIDATE=true
  elif [ "$arg" = "--quality-gate" ]; then
    EXPECT_GATE=true   # the next argument is the gate value
  fi
done

# Validate feature description
if [ -z "$FEATURE_DESC" ]; then
  echo "❌ Error: Feature description required"
  echo ""
  echo "Usage: /specswarm:build \"feature description\" [--validate] [--quality-gate N]"
  echo ""
  echo "Examples:"
  echo "  /specswarm:build \"Add user authentication with email/password\""
  echo "  /specswarm:build \"Implement dark mode toggle\" --validate"
  echo "  /specswarm:build \"Add shopping cart\" --validate --quality-gate 85"
  exit 1
fi

# Get project root
if ! git rev-parse --git-dir > /dev/null 2>&1; then
  echo "❌ Error: Not in a git repository"
  echo ""
  echo "SpecSwarm requires an existing git repository to manage feature branches."
  echo ""
  echo "If you're starting a new project, scaffold it first:"
  echo ""
  echo "  # React + Vite"
  echo "  npm create vite@latest my-app -- --template react-ts"
  echo ""
  echo "  # Next.js"
  echo "  npx create-next-app@latest"
  echo ""
  echo "  # Astro"
  echo "  npm create astro@latest"
  echo ""
  echo "  # Vue"
  echo "  npm create vue@latest"
  echo ""
  echo "Then initialize git and SpecSwarm:"
  echo "  cd my-app"
  echo "  git init"
  echo "  git add ."
  echo "  git commit -m \"Initial project scaffold\""
  echo "  /specswarm:init"
  echo ""
  echo "For existing projects, initialize git:"
  echo "  git init"
  echo "  git add ."
  echo "  git commit -m \"Initial commit\""
  echo ""
  exit 1
fi

REPO_ROOT=$(git rev-parse --show-toplevel)
cd "$REPO_ROOT"
```
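One subtlety in this kind of `--quality-gate N` parsing: a plain `for arg in $ARGUMENTS` loop cannot use `shift` to grab the next value (that shifts the script's positional parameters, not the loop items), so an expect-value flag is needed. The loop can be exercised in isolation; this is a POSIX-leaning restatement with hypothetical sample arguments:

```shell
# Sketch: flag parsing with an expect-value flag for --quality-gate.
ARGUMENTS='dark-mode-toggle --validate --quality-gate 85'
FEATURE_DESC=""; RUN_VALIDATE=false; QUALITY_GATE=80; EXPECT_GATE=false
for arg in $ARGUMENTS; do
  if [ "$EXPECT_GATE" = true ]; then
    QUALITY_GATE="$arg"; EXPECT_GATE=false
    continue
  fi
  case "$arg" in
    --validate)     RUN_VALIDATE=true ;;
    --quality-gate) EXPECT_GATE=true ;;
    --*)            ;;   # unknown flags are ignored
    *)              if [ -z "$FEATURE_DESC" ]; then FEATURE_DESC="$arg"; fi ;;
  esac
done
echo "desc=$FEATURE_DESC validate=$RUN_VALIDATE gate=$QUALITY_GATE"
```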

---

## Execution Steps

### Step 1: Display Welcome Banner

```bash
echo "🏗️ SpecSwarm Build - Complete Feature Development"
echo "══════════════════════════════════════════"
echo ""
echo "Feature: $FEATURE_DESC"
echo ""
echo "This workflow will:"
echo "  1. Create detailed specification"
echo "  2. Ask clarification questions (interactive)"
echo "  3. Generate implementation plan"
echo "  4. Generate task breakdown"
echo "  5. Implement all tasks"
if [ "$RUN_VALIDATE" = true ]; then
  echo "  6. Run browser validation (Playwright)"
  echo "  7. Analyze code quality"
else
  echo "  6. Analyze code quality"
fi
echo ""
echo "You'll only be prompted during Step 2 (clarification)."
echo "All other steps run automatically."
echo ""
read -p "Press Enter to start, or Ctrl+C to cancel..."
echo ""
```

---

### Step 2: Phase 1 - Specification

**YOU MUST NOW run the specify command using the SlashCommand tool:**

```bash
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "📋 Phase 1: Creating Specification"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
```

```
Use the SlashCommand tool to execute: /specswarm:specify "$FEATURE_DESC"
```

Wait for completion. Verify that spec.md was created.

```bash
echo ""
echo "✅ Specification created"
echo ""
```

---

### Step 3: Phase 2 - Clarification (INTERACTIVE)

**YOU MUST NOW run the clarify command using the SlashCommand tool:**

```bash
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "❓ Phase 2: Clarification Questions"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
echo "⚠️ INTERACTIVE: Please answer the clarification questions."
echo ""
```

```
Use the SlashCommand tool to execute: /specswarm:clarify
```

**IMPORTANT**: This step is interactive. Wait for the user to answer the questions.

```bash
echo ""
echo "✅ Clarification complete"
echo ""
```

---

### Step 4: Phase 3 - Planning

**YOU MUST NOW run the plan command using the SlashCommand tool:**

```bash
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "🗺️ Phase 3: Generating Implementation Plan"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
```

```
Use the SlashCommand tool to execute: /specswarm:plan
```

Wait for plan.md to be generated.

```bash
echo ""
echo "✅ Implementation plan created"
echo ""
```

---

### Step 5: Phase 4 - Task Generation

**YOU MUST NOW run the tasks command using the SlashCommand tool:**

```bash
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "📝 Phase 4: Generating Task Breakdown"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
```

```
Use the SlashCommand tool to execute: /specswarm:tasks
```

Wait for tasks.md to be generated.

```bash
# Count tasks (grep -c itself prints 0 on no matches;
# the default covers only a missing tasks.md)
TASK_COUNT=$(grep -c '^###[[:space:]]*T[0-9]' tasks.md 2>/dev/null)
TASK_COUNT=${TASK_COUNT:-0}

echo ""
echo "✅ Task breakdown created ($TASK_COUNT tasks)"
echo ""
```
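A caution about counting with `grep -c`: on zero matches it still prints `0` while exiting non-zero, so chaining `|| echo "0"` inside a command substitution would capture two lines (`0` twice). A quick check of the counting pattern against a fabricated tasks.md:

```shell
# Sketch: grep -c returns the heading count directly.
printf '### T001 Setup\n### T002 Build\n' > tasks.md
TASK_COUNT=$(grep -c '^###[[:space:]]*T[0-9]' tasks.md 2>/dev/null)
TASK_COUNT=${TASK_COUNT:-0}   # default only if the file was missing
echo "tasks=$TASK_COUNT"
rm tasks.md
```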

---

### Step 6: Phase 5 - Implementation

**YOU MUST NOW run the implement command using the SlashCommand tool:**

```bash
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "⚙️ Phase 5: Implementing Feature"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
echo "This will execute all $TASK_COUNT tasks automatically."
echo "Estimated time: 2-5 minutes per task"
echo ""
```

```
Use the SlashCommand tool to execute: /specswarm:implement
```

Wait for implementation to complete.

```bash
echo ""
echo "✅ Implementation complete"
echo ""
```
|
||||
|
||||
---
|
||||
|
||||
### Step 7: Phase 6 - Browser Validation (Optional)
|
||||
|
||||
**IF --validate flag was provided, run validation:**
|
||||
|
||||
```bash
|
||||
if [ "$RUN_VALIDATE" = true ]; then
|
||||
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
|
||||
echo "🌐 Phase 6: Browser Validation"
|
||||
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
|
||||
echo ""
|
||||
echo "Running AI-powered interaction flow validation with Playwright..."
|
||||
echo ""
|
||||
fi
|
||||
```
|
||||
|
||||
**IF RUN_VALIDATE = true, use the SlashCommand tool:**
|
||||
|
||||
```
|
||||
Use the SlashCommand tool to execute: /specswarm:validate
|
||||
```
|
||||
|
||||
```bash
|
||||
if [ "$RUN_VALIDATE" = true ]; then
|
||||
echo ""
|
||||
echo "✅ Validation complete"
|
||||
echo ""
|
||||
fi
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Step 8: Phase 7 - Quality Analysis
|
||||
|
||||
**YOU MUST NOW run the quality analysis using the SlashCommand tool:**
|
||||
|
||||
```bash
|
||||
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
|
||||
echo "📊 Phase 7: Code Quality Analysis"
|
||||
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
|
||||
echo ""
|
||||
```
|
||||
|
||||
```
|
||||
Use the SlashCommand tool to execute: /specswarm:analyze-quality
|
||||
```
|
||||
|
||||
Wait for quality analysis to complete. Extract the quality score.
|
||||
|
||||
Store quality score as QUALITY_SCORE.
|
||||
|
||||
---
|
||||
|
||||
### Step 9: Final Report
|
||||
|
||||
**Display completion summary:**
|
||||
|
||||
```bash
|
||||
echo ""
|
||||
echo "══════════════════════════════════════════"
|
||||
echo "🎉 FEATURE BUILD COMPLETE"
|
||||
echo "══════════════════════════════════════════"
|
||||
echo ""
|
||||
echo "Feature: $FEATURE_DESC"
|
||||
echo ""
|
||||
echo "✅ Specification created"
|
||||
echo "✅ Clarification completed"
|
||||
echo "✅ Plan generated"
|
||||
echo "✅ Tasks generated ($TASK_COUNT tasks)"
|
||||
echo "✅ Implementation complete"
|
||||
if [ "$RUN_VALIDATE" = true ]; then
|
||||
echo "✅ Browser validation passed"
|
||||
fi
|
||||
echo "✅ Quality analyzed (Score: ${QUALITY_SCORE}%)"
|
||||
echo ""
|
||||
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
|
||||
echo "📝 NEXT STEPS"
|
||||
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
|
||||
echo ""
|
||||
echo "1. 🧪 Manual Testing"
|
||||
echo " - Test the feature in your browser/app"
|
||||
echo " - Verify all functionality works as expected"
|
||||
echo " - Check edge cases and error handling"
|
||||
echo ""
|
||||
echo "2. 🔍 Code Review (Optional)"
|
||||
echo " - Review generated code for best practices"
|
||||
echo " - Check for security issues"
|
||||
echo " - Verify tech stack compliance"
|
||||
echo ""
|
||||
echo "3. 🚢 Ship When Ready"
|
||||
echo " Run: /specswarm:ship"
|
||||
echo ""
|
||||
echo " This will:"
|
||||
echo " - Validate quality meets threshold ($QUALITY_GATE%)"
|
||||
echo " - Merge to parent branch if passing"
|
||||
echo " - Complete the feature workflow"
|
||||
echo ""
|
||||
|
||||
if [ "$QUALITY_SCORE" -lt "$QUALITY_GATE" ]; then
|
||||
echo "⚠️ WARNING: Quality score (${QUALITY_SCORE}%) below threshold (${QUALITY_GATE}%)"
|
||||
echo " Consider addressing quality issues before shipping."
|
||||
echo " Review the quality analysis output above for specific improvements."
|
||||
echo ""
|
||||
fi
|
||||
|
||||
echo "══════════════════════════════════════════"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Error Handling
|
||||
|
||||
If any phase fails:
|
||||
|
||||
1. **Specify fails**: Display error, suggest checking feature description clarity
|
||||
2. **Clarify fails**: Display error, suggest re-running clarify separately
|
||||
3. **Plan fails**: Display error, suggest reviewing spec.md for completeness
|
||||
4. **Tasks fails**: Display error, suggest reviewing plan.md
|
||||
5. **Implement fails**: Display error, suggest re-running implement or using bugfix
|
||||
6. **Validate fails**: Display validation errors, suggest fixing and re-validating
|
||||
7. **Quality analysis fails**: Display error, continue (quality optional for build)
|
||||
|
||||
**All errors should report clearly and suggest remediation.**
|
||||
|
||||
---
|
||||
|
||||
## Design Philosophy
|
||||
|
||||
**Simplicity**: 1 command instead of 7+ manual steps
|
||||
|
||||
**Efficiency**: Autonomous execution except for clarification (user only pauses once)
|
||||
|
||||
**Quality**: Built-in quality analysis ensures code standards
|
||||
|
||||
**Flexibility**: Optional validation and configurable quality gates
|
||||
|
||||
**User Experience**: Clear progress indicators and final next steps
|
||||
|
||||
---
|
||||
|
||||
## Comparison to Manual Workflow
|
||||
|
||||
**Before** (Manual):
|
||||
```bash
|
||||
/specswarm:specify "feature description"
|
||||
/specswarm:clarify
|
||||
/specswarm:plan
|
||||
/specswarm:tasks
|
||||
/specswarm:implement
|
||||
/specswarm:analyze-quality
|
||||
/specswarm:complete
|
||||
```
|
||||
**7 commands**, ~5 minutes of manual orchestration
|
||||
|
||||
**After** (Build):
|
||||
```bash
|
||||
/specswarm:build "feature description" --validate
|
||||
# [Answer clarification questions]
|
||||
# [Wait for completion]
|
||||
/specswarm:ship
|
||||
```
|
||||
**2 commands**, 1 interactive pause, fully automated execution
|
||||
|
||||
**Time Savings**: 85-90% reduction in manual orchestration overhead
|
||||
313
commands/checklist.md
Normal file
@@ -0,0 +1,313 @@

---
description: Generate a custom checklist for the current feature based on user requirements.
---

<!--
ATTRIBUTION CHAIN:
1. Original: GitHub spec-kit (https://github.com/github/spec-kit)
   Copyright (c) GitHub, Inc. | MIT License
2. Adapted: SpecKit plugin by Marty Bonacci (2025)
3. Forked: SpecSwarm plugin with tech stack management
   by Marty Bonacci & Claude Code (2025)
-->

## Checklist Purpose: "Unit Tests for English"

**CRITICAL CONCEPT**: Checklists are **UNIT TESTS FOR REQUIREMENTS WRITING** - they validate the quality, clarity, and completeness of requirements in a given domain.

**NOT for verification/testing**:

- ❌ NOT "Verify the button clicks correctly"
- ❌ NOT "Test error handling works"
- ❌ NOT "Confirm the API returns 200"
- ❌ NOT checking whether code/implementation matches the spec

**FOR requirements quality validation**:

- ✅ "Are visual hierarchy requirements defined for all card types?" (completeness)
- ✅ "Is 'prominent display' quantified with specific sizing/positioning?" (clarity)
- ✅ "Are hover state requirements consistent across all interactive elements?" (consistency)
- ✅ "Are accessibility requirements defined for keyboard navigation?" (coverage)
- ✅ "Does the spec define what happens when the logo image fails to load?" (edge cases)

**Metaphor**: If your spec is code written in English, the checklist is its unit test suite. You're testing whether the requirements are well-written, complete, unambiguous, and ready for implementation - NOT whether the implementation works.

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Execution Steps

1. **Discover Feature Context**:

   ```bash
   REPO_ROOT=$(git rev-parse --show-toplevel 2>/dev/null || pwd)

   # Source features location helper
   SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
   PLUGIN_DIR="$(dirname "$SCRIPT_DIR")"
   source "$PLUGIN_DIR/lib/features-location.sh"

   # Initialize features directory
   get_features_dir "$REPO_ROOT"

   BRANCH=$(git rev-parse --abbrev-ref HEAD 2>/dev/null)
   FEATURE_NUM=$(echo "$BRANCH" | grep -oE '^[0-9]{3}')
   [ -z "$FEATURE_NUM" ] && FEATURE_NUM=$(list_features "$REPO_ROOT" | grep -oE '^[0-9]{3}' | sort -nr | head -1)

   find_feature_dir "$FEATURE_NUM" "$REPO_ROOT"
   # FEATURE_DIR is now set by find_feature_dir
   ```

2. **Clarify intent (dynamic)**: Derive up to THREE initial contextual clarifying questions (no pre-baked catalog). They MUST:
   - Be generated from the user's phrasing plus extracted signals from spec/plan/tasks
   - Only ask about information that materially changes checklist content
   - Be skipped individually if already unambiguous in `$ARGUMENTS`
   - Prefer precision over breadth

   Generation algorithm:

   1. Extract signals: feature domain keywords (e.g., auth, latency, UX, API), risk indicators ("critical", "must", "compliance"), stakeholder hints ("QA", "review", "security team"), and explicit deliverables ("a11y", "rollback", "contracts").
   2. Cluster signals into candidate focus areas (max 4) ranked by relevance.
   3. Identify the probable audience & timing (author, reviewer, QA, release) if not explicit.
   4. Detect missing dimensions: scope breadth, depth/rigor, risk emphasis, exclusion boundaries, measurable acceptance criteria.
   5. Formulate questions chosen from these archetypes:
      - Scope refinement (e.g., "Should this include integration touchpoints with X and Y or stay limited to local module correctness?")
      - Risk prioritization (e.g., "Which of these potential risk areas should receive mandatory gating checks?")
      - Depth calibration (e.g., "Is this a lightweight pre-commit sanity list or a formal release gate?")
      - Audience framing (e.g., "Will this be used by the author only or by peers during PR review?")
      - Boundary exclusion (e.g., "Should we explicitly exclude performance tuning items this round?")
      - Scenario class gap (e.g., "No recovery flows detected—are rollback / partial failure paths in scope?")

   Question formatting rules:

   - If presenting options, generate a compact table with columns: Option | Candidate | Why It Matters
   - Limit to five options (A–E) maximum; omit the table if a free-form answer is clearer
   - Never ask the user to restate what they already said
   - Avoid speculative categories (no hallucination). If uncertain, ask explicitly: "Confirm whether X belongs in scope."

   Defaults when interaction is impossible:

   - Depth: Standard
   - Audience: Reviewer (PR) if code-related; Author otherwise
   - Focus: Top 2 relevance clusters

   Output the questions (labeled Q1/Q2/Q3). After the answers: if ≥2 scenario classes (Alternate / Exception / Recovery / Non-Functional domain) remain unclear, you MAY ask up to TWO more targeted follow-ups (Q4/Q5) with a one-line justification each (e.g., "Unresolved recovery path risk"). Do not exceed five total questions. Skip escalation if the user explicitly declines more.

3. **Understand user request**: Combine `$ARGUMENTS` + clarifying answers:
   - Derive the checklist theme (e.g., security, review, deploy, ux)
   - Consolidate explicit must-have items mentioned by the user
   - Map focus selections to category scaffolding
   - Infer any missing context from spec/plan/tasks (do NOT hallucinate)

4. **Load feature context**: Read from FEATURE_DIR:
   - spec.md: Feature requirements and scope
   - plan.md (if it exists): Technical details, dependencies
   - tasks.md (if it exists): Implementation tasks

   **Context Loading Strategy**:

   - Load only the portions relevant to active focus areas (avoid full-file dumping)
   - Prefer summarizing long sections into concise scenario/requirement bullets
   - Use progressive disclosure: add follow-on retrieval only if gaps are detected
   - If source docs are large, generate interim summary items instead of embedding raw text

5. **Generate checklist** - Create "Unit Tests for Requirements":
   - Create the `FEATURE_DIR/checklists/` directory if it doesn't exist
   - Generate the checklist filename:
     - Use a short, descriptive name based on the domain (e.g., `ux.md`, `api.md`, `security.md`)
     - Format: `[domain].md`
     - If the file already exists, append to it
   - Number items sequentially starting from CHK001
   - Each `/specswarm:checklist` run creates a new file or appends to an existing one (it never overwrites existing checklists)
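
A minimal sketch of the append-aware ID numbering, assuming `FEATURE_DIR` was set in step 1 (the `ux.md` filename here is an illustrative example, not a required name):

```shell
# Sketch: compute the next sequential CHK ID, continuing after the highest
# existing one when appending. FEATURE_DIR and ux.md are assumed examples.
CHECKLIST_FILE="${FEATURE_DIR:-.}/checklists/ux.md"
if [ -f "$CHECKLIST_FILE" ]; then
  LAST=$(grep -oE 'CHK[0-9]{3}' "$CHECKLIST_FILE" | sort | tail -1 | tr -dc '0-9')
  NEXT_ID=$(printf 'CHK%03d' $((10#${LAST:-0} + 1)))   # 10# avoids octal parsing of 008/009
else
  NEXT_ID="CHK001"
fi
echo "$NEXT_ID"
```
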

   **CORE PRINCIPLE - Test the Requirements, Not the Implementation**:
   Every checklist item MUST evaluate the REQUIREMENTS THEMSELVES for:

   - **Completeness**: Are all necessary requirements present?
   - **Clarity**: Are requirements unambiguous and specific?
   - **Consistency**: Do requirements align with each other?
   - **Measurability**: Can requirements be objectively verified?
   - **Coverage**: Are all scenarios/edge cases addressed?

   **Category Structure** - Group items by requirement quality dimensions:

   - **Requirement Completeness** (Are all necessary requirements documented?)
   - **Requirement Clarity** (Are requirements specific and unambiguous?)
   - **Requirement Consistency** (Do requirements align without conflicts?)
   - **Acceptance Criteria Quality** (Are success criteria measurable?)
   - **Scenario Coverage** (Are all flows/cases addressed?)
   - **Edge Case Coverage** (Are boundary conditions defined?)
   - **Non-Functional Requirements** (Performance, Security, Accessibility, etc. - are they specified?)
   - **Dependencies & Assumptions** (Are they documented and validated?)
   - **Ambiguities & Conflicts** (What needs clarification?)

   **HOW TO WRITE CHECKLIST ITEMS - "Unit Tests for English"**:

   ❌ **WRONG** (Testing implementation):

   - "Verify landing page displays 3 episode cards"
   - "Test hover states work on desktop"
   - "Confirm logo click navigates home"

   ✅ **CORRECT** (Testing requirements quality):

   - "Are the exact number and layout of featured episodes specified?" [Completeness]
   - "Is 'prominent display' quantified with specific sizing/positioning?" [Clarity]
   - "Are hover state requirements consistent across all interactive elements?" [Consistency]
   - "Are keyboard navigation requirements defined for all interactive UI?" [Coverage]
   - "Is the fallback behavior specified when the logo image fails to load?" [Edge Cases]
   - "Are loading states defined for asynchronous episode data?" [Completeness]
   - "Does the spec define visual hierarchy for competing UI elements?" [Clarity]

   **ITEM STRUCTURE**:
   Each item should follow this pattern:

   - Question format asking about requirement quality
   - Focus on what's WRITTEN (or not written) in the spec/plan
   - Include the quality dimension in brackets [Completeness/Clarity/Consistency/etc.]
   - Reference the spec section `[Spec §X.Y]` when checking existing requirements
   - Use the `[Gap]` marker when checking for missing requirements

   **EXAMPLES BY QUALITY DIMENSION**:

   Completeness:

   - "Are error handling requirements defined for all API failure modes? [Gap]"
   - "Are accessibility requirements specified for all interactive elements? [Completeness]"
   - "Are mobile breakpoint requirements defined for responsive layouts? [Gap]"

   Clarity:

   - "Is 'fast loading' quantified with specific timing thresholds? [Clarity, Spec §NFR-2]"
   - "Are 'related episodes' selection criteria explicitly defined? [Clarity, Spec §FR-5]"
   - "Is 'prominent' defined with measurable visual properties? [Ambiguity, Spec §FR-4]"

   Consistency:

   - "Do navigation requirements align across all pages? [Consistency, Spec §FR-10]"
   - "Are card component requirements consistent between landing and detail pages? [Consistency]"

   Coverage:

   - "Are requirements defined for zero-state scenarios (no episodes)? [Coverage, Edge Case]"
   - "Are concurrent user interaction scenarios addressed? [Coverage, Gap]"
   - "Are requirements specified for partial data loading failures? [Coverage, Exception Flow]"

   Measurability:

   - "Are visual hierarchy requirements measurable/testable? [Acceptance Criteria, Spec §FR-1]"
   - "Can 'balanced visual weight' be objectively verified? [Measurability, Spec §FR-2]"

   **Scenario Classification & Coverage** (Requirements Quality Focus):

   - Check whether requirements exist for: Primary, Alternate, Exception/Error, Recovery, and Non-Functional scenarios
   - For each scenario class, ask: "Are [scenario type] requirements complete, clear, and consistent?"
   - If a scenario class is missing: "Are [scenario type] requirements intentionally excluded or missing? [Gap]"
   - Include resilience/rollback when state mutation occurs: "Are rollback requirements defined for migration failures? [Gap]"

   **Traceability Requirements**:

   - MINIMUM: ≥80% of items MUST include at least one traceability reference
   - Each item should reference a spec section `[Spec §X.Y]`, or use the markers `[Gap]`, `[Ambiguity]`, `[Conflict]`, `[Assumption]`
   - If no ID system exists: "Is a requirement & acceptance criteria ID scheme established? [Traceability]"
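
The ≥80% threshold can be spot-checked mechanically. A hedged sketch (the filename and the exact marker set are assumptions based on the conventions above, not a prescribed tool):

```shell
# Sketch: estimate what fraction of checklist items carry a traceability
# reference ([Spec §...], [Gap], [Ambiguity], [Conflict], [Assumption]).
CHECKLIST="${CHECKLIST:-checklists/ux.md}"
if [ -f "$CHECKLIST" ]; then
  TOTAL=$(grep -cE '^- \[ \] CHK[0-9]{3}' "$CHECKLIST")
  TRACED=$(grep -E '^- \[ \] CHK[0-9]{3}' "$CHECKLIST" \
    | grep -cE 'Spec §|\[(Gap|Ambiguity|Conflict|Assumption)')
  [ "$TOTAL" -gt 0 ] && echo "Traceability: $TRACED/$TOTAL ($(( TRACED * 100 / TOTAL ))%)"
fi
```
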

   **Surface & Resolve Issues** (Requirements Quality Problems):
   Ask questions about the requirements themselves:

   - Ambiguities: "Is the term 'fast' quantified with specific metrics? [Ambiguity, Spec §NFR-1]"
   - Conflicts: "Do navigation requirements conflict between §FR-10 and §FR-10a? [Conflict]"
   - Assumptions: "Is the assumption of an 'always available podcast API' validated? [Assumption]"
   - Dependencies: "Are external podcast API requirements documented? [Dependency, Gap]"
   - Missing definitions: "Is 'visual hierarchy' defined with measurable criteria? [Gap]"

   **Content Consolidation**:

   - Soft cap: if raw candidate items > 40, prioritize by risk/impact
   - Merge near-duplicates checking the same requirement aspect
   - If >5 low-impact edge cases, create one item: "Are edge cases X, Y, Z addressed in requirements? [Coverage]"

   **🚫 ABSOLUTELY PROHIBITED** - These make it an implementation test, not a requirements test:

   - ❌ Any item starting with "Verify", "Test", "Confirm", "Check" + implementation behavior
   - ❌ References to code execution, user actions, system behavior
   - ❌ "Displays correctly", "works properly", "functions as expected"
   - ❌ "Click", "navigate", "render", "load", "execute"
   - ❌ Test cases, test plans, QA procedures
   - ❌ Implementation details (frameworks, APIs, algorithms)

   **✅ REQUIRED PATTERNS** - These test requirements quality:

   - ✅ "Are [requirement type] defined/specified/documented for [scenario]?"
   - ✅ "Is [vague term] quantified/clarified with specific criteria?"
   - ✅ "Are requirements consistent between [section A] and [section B]?"
   - ✅ "Can [requirement] be objectively measured/verified?"
   - ✅ "Are [edge cases/scenarios] addressed in requirements?"
   - ✅ "Does the spec define [missing aspect]?"

6. **Structure Reference**: Generate the checklist following the canonical template in `templates/checklist-template.md` for title, meta section, category headings, and ID formatting. If the template is unavailable, use: an H1 title, purpose/created meta lines, and `##` category sections containing `- [ ] CHK### <requirement item>` lines with globally incrementing IDs starting at CHK001.
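
As a rough illustration, the fallback structure described above would look something like this (the title, dates, and items are hypothetical placeholders, not template content):

```markdown
# UX Requirements Quality Checklist

Purpose: Validate UX requirement quality for the current feature
Created: 2025-01-15

## Requirement Completeness

- [ ] CHK001 - Are accessibility requirements specified for all interactive elements? [Completeness, Gap]

## Requirement Clarity

- [ ] CHK002 - Is "prominent display" quantified with specific sizing? [Clarity, Spec §FR-4]
```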

7. **Report**: Output the full path to the created checklist, the item count, and a reminder that repeated runs for the same domain append rather than overwrite. Summarize:
   - Focus areas selected
   - Depth level
   - Actor/timing
   - Any explicit user-specified must-have items incorporated

**Important**: Each `/specswarm:checklist` invocation creates a checklist file with a short, descriptive name unless that file already exists (in which case items are appended). This allows:

- Multiple checklists of different types (e.g., `ux.md`, `test.md`, `security.md`)
- Simple, memorable filenames that indicate checklist purpose
- Easy identification and navigation in the `checklists/` folder

To avoid clutter, use descriptive types and clean up obsolete checklists when done.

## Example Checklist Types & Sample Items

**UX Requirements Quality:** `ux.md`

Sample items (testing the requirements, NOT the implementation):

- "Are visual hierarchy requirements defined with measurable criteria? [Clarity, Spec §FR-1]"
- "Is the number and positioning of UI elements explicitly specified? [Completeness, Spec §FR-1]"
- "Are interaction state requirements (hover, focus, active) consistently defined? [Consistency]"
- "Are accessibility requirements specified for all interactive elements? [Coverage, Gap]"
- "Is fallback behavior defined when images fail to load? [Edge Case, Gap]"
- "Can 'prominent display' be objectively measured? [Measurability, Spec §FR-4]"

**API Requirements Quality:** `api.md`

Sample items:

- "Are error response formats specified for all failure scenarios? [Completeness]"
- "Are rate limiting requirements quantified with specific thresholds? [Clarity]"
- "Are authentication requirements consistent across all endpoints? [Consistency]"
- "Are retry/timeout requirements defined for external dependencies? [Coverage, Gap]"
- "Is the versioning strategy documented in requirements? [Gap]"

**Performance Requirements Quality:** `performance.md`

Sample items:

- "Are performance requirements quantified with specific metrics? [Clarity]"
- "Are performance targets defined for all critical user journeys? [Coverage]"
- "Are performance requirements under different load conditions specified? [Completeness]"
- "Can performance requirements be objectively measured? [Measurability]"
- "Are degradation requirements defined for high-load scenarios? [Edge Case, Gap]"

**Security Requirements Quality:** `security.md`

Sample items:

- "Are authentication requirements specified for all protected resources? [Coverage]"
- "Are data protection requirements defined for sensitive information? [Completeness]"
- "Is the threat model documented and are requirements aligned to it? [Traceability]"
- "Are security requirements consistent with compliance obligations? [Consistency]"
- "Are security failure/breach response requirements defined? [Gap, Exception Flow]"

## Anti-Examples: What NOT To Do

**❌ WRONG - These test implementation, not requirements:**

```markdown
- [ ] CHK001 - Verify landing page displays 3 episode cards [Spec §FR-001]
- [ ] CHK002 - Test hover states work correctly on desktop [Spec §FR-003]
- [ ] CHK003 - Confirm logo click navigates to home page [Spec §FR-010]
- [ ] CHK004 - Check that related episodes section shows 3-5 items [Spec §FR-005]
```

**✅ CORRECT - These test requirements quality:**

```markdown
- [ ] CHK001 - Are the number and layout of featured episodes explicitly specified? [Completeness, Spec §FR-001]
- [ ] CHK002 - Are hover state requirements consistently defined for all interactive elements? [Consistency, Spec §FR-003]
- [ ] CHK003 - Are navigation requirements clear for all clickable brand elements? [Clarity, Spec §FR-010]
- [ ] CHK004 - Are the selection criteria for related episodes documented? [Gap, Spec §FR-005]
- [ ] CHK005 - Are loading state requirements defined for asynchronous episode data? [Gap]
- [ ] CHK006 - Can "visual hierarchy" requirements be objectively measured? [Measurability, Spec §FR-001]
```

**Key Differences:**

- Wrong: Tests whether the system works correctly
- Correct: Tests whether the requirements are written correctly
- Wrong: Verification of behavior
- Correct: Validation of requirement quality
- Wrong: "Does it do X?"
- Correct: "Is X clearly specified?"
189
commands/clarify.md
Normal file
@@ -0,0 +1,189 @@

---
description: Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding the answers back into the spec.
---

<!--
ATTRIBUTION CHAIN:
1. Original: GitHub spec-kit (https://github.com/github/spec-kit)
   Copyright (c) GitHub, Inc. | MIT License
2. Adapted: SpecKit plugin by Marty Bonacci (2025)
3. Forked: SpecSwarm plugin with tech stack management
   by Marty Bonacci & Claude Code (2025)
-->

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

Goal: Detect and reduce ambiguity or missing decision points in the active feature specification and record the clarifications directly in the spec file.

Note: This clarification workflow is expected to run (and be completed) BEFORE invoking `/specswarm:plan`. If the user explicitly states they are skipping clarification (e.g., an exploratory spike), you may proceed, but you must warn that downstream rework risk increases.

Execution steps:

1. **Discover Feature Context**:

   ```bash
   REPO_ROOT=$(git rev-parse --show-toplevel 2>/dev/null || pwd)

   # Source features location helper
   SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
   PLUGIN_DIR="$(dirname "$SCRIPT_DIR")"
   source "$PLUGIN_DIR/lib/features-location.sh"

   # Initialize features directory
   get_features_dir "$REPO_ROOT"

   BRANCH=$(git rev-parse --abbrev-ref HEAD 2>/dev/null)
   FEATURE_NUM=$(echo "$BRANCH" | grep -oE '^[0-9]{3}')
   [ -z "$FEATURE_NUM" ] && FEATURE_NUM=$(list_features "$REPO_ROOT" | grep -oE '^[0-9]{3}' | sort -nr | head -1)

   find_feature_dir "$FEATURE_NUM" "$REPO_ROOT"
   # FEATURE_DIR is now set by find_feature_dir
   FEATURE_SPEC="${FEATURE_DIR}/spec.md"
   ```

   Validate: If FEATURE_SPEC doesn't exist, ERROR: "No spec found. Run `/specswarm:specify` first."

2. Load the current spec file. Perform a structured ambiguity & coverage scan using this taxonomy. For each category, mark its status: Clear / Partial / Missing. Produce an internal coverage map used for prioritization (do not output the raw map unless no questions will be asked).

   Functional Scope & Behavior:
   - Core user goals & success criteria
   - Explicit out-of-scope declarations
   - User roles / personas differentiation

   Domain & Data Model:
   - Entities, attributes, relationships
   - Identity & uniqueness rules
   - Lifecycle/state transitions
   - Data volume / scale assumptions

   Interaction & UX Flow:
   - Critical user journeys / sequences
   - Error/empty/loading states
   - Accessibility or localization notes

   Non-Functional Quality Attributes:
   - Performance (latency, throughput targets)
   - Scalability (horizontal/vertical, limits)
   - Reliability & availability (uptime, recovery expectations)
   - Observability (logging, metrics, tracing signals)
   - Security & privacy (authN/Z, data protection, threat assumptions)
   - Compliance / regulatory constraints (if any)

   Integration & External Dependencies:
   - External services/APIs and failure modes
   - Data import/export formats
   - Protocol/versioning assumptions

   Edge Cases & Failure Handling:
   - Negative scenarios
   - Rate limiting / throttling
   - Conflict resolution (e.g., concurrent edits)

   Constraints & Tradeoffs:
   - Technical constraints (language, storage, hosting)
   - Explicit tradeoffs or rejected alternatives

   Terminology & Consistency:
   - Canonical glossary terms
   - Avoided synonyms / deprecated terms

   Completion Signals:
   - Acceptance criteria testability
   - Measurable Definition-of-Done style indicators

   Misc / Placeholders:
   - TODO markers / unresolved decisions
   - Ambiguous adjectives ("robust", "intuitive") lacking quantification
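
The Misc / Placeholders category in particular lends itself to a quick lexical pre-scan. An illustrative sketch only (the adjective list is an assumed starting point, and this does not replace the structured category scan above):

```shell
# Illustrative pre-scan for placeholder markers and unquantified adjectives.
# $FEATURE_SPEC comes from step 1; the word list is an assumed seed, not canonical.
SPEC="${FEATURE_SPEC:-spec.md}"
if [ -f "$SPEC" ]; then
  grep -nE 'TODO|TBD' "$SPEC" || echo "No placeholder markers found"
  grep -niE '\b(robust|intuitive|fast|seamless|scalable)\b' "$SPEC" || echo "No vague adjectives flagged"
fi
```
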

   For each category with Partial or Missing status, add a candidate question opportunity unless:
   - Clarification would not materially change implementation or validation strategy
   - The information is better deferred to the planning phase (note this internally)

3. Generate (internally) a prioritized queue of candidate clarification questions (maximum 5). Do NOT output them all at once. Apply these constraints:
   - Maximum of 5 total questions across the whole session (disambiguation retries count against the question they belong to, not as new questions).
   - Each question must be answerable with EITHER:
     * A short multiple-choice selection (2–5 distinct, mutually exclusive options), OR
     * A one-word / short-phrase answer (explicitly constrain: "Answer in <=5 words").
   - Only include questions whose answers materially impact architecture, data modeling, task decomposition, test design, UX behavior, operational readiness, or compliance validation.
   - Ensure category coverage balance: attempt to cover the highest-impact unresolved categories first; avoid asking two low-impact questions when a single high-impact area (e.g., security posture) is unresolved.
   - Exclude questions already answered, trivial stylistic preferences, or plan-level execution details (unless blocking correctness).
   - Favor clarifications that reduce downstream rework risk or prevent misaligned acceptance tests.
   - If more than 5 categories remain unresolved, select the top 5 by an (Impact × Uncertainty) heuristic.

4. Sequential questioning loop (interactive):
   - Present EXACTLY ONE question at a time.
   - For multiple-choice questions, render the options as a Markdown table:

     | Option | Description |
     |--------|-------------|
     | A | <Option A description> |
     | B | <Option B description> |
     | C | <Option C description> | (add D/E as needed, up to 5)
     | Short | Provide a different short answer (<=5 words) | (Include only if a free-form alternative is appropriate)

   - For short-answer style (no meaningful discrete options), output a single line after the question: `Format: Short answer (<=5 words)`.
   - After the user answers:
     * Validate that the answer maps to one option or fits the <=5 word constraint.
     * If ambiguous, ask for a quick disambiguation (the count still belongs to the same question; do not advance).
     * Once satisfactory, record it in working memory (do not yet write to disk) and move to the next queued question.
   - Stop asking further questions when:
     * All critical ambiguities are resolved early (remaining queued items become unnecessary), OR
     * The user signals completion ("done", "good", "no more"), OR
     * You reach 5 asked questions.
   - Never reveal future queued questions in advance.
   - If no valid questions exist at the start, immediately report that there are no critical ambiguities.

5. Integration after EACH accepted answer (incremental update approach):
   - Maintain an in-memory representation of the spec (loaded once at start) plus the raw file contents.
   - For the first integrated answer in this session:
     * Ensure a `## Clarifications` section exists (create it just after the highest-level contextual/overview section per the spec template if missing).
     * Under it, create (if not present) a `### Session YYYY-MM-DD` subheading for today.
   - Append a bullet line immediately after acceptance: `- Q: <question> → A: <final answer>`.
   - Then immediately apply the clarification to the most appropriate section(s):
     * Functional ambiguity → Update or add a bullet in Functional Requirements.
     * User interaction / actor distinction → Update the User Stories or Actors subsection (if present) with the clarified role, constraint, or scenario.
     * Data shape / entities → Update the Data Model (add fields, types, relationships), preserving ordering; note added constraints succinctly.
     * Non-functional constraint → Add/modify measurable criteria in the Non-Functional / Quality Attributes section (convert the vague adjective to a metric or explicit target).
     * Edge case / negative flow → Add a new bullet under Edge Cases / Error Handling (or create that subsection if the template provides a placeholder for it).
     * Terminology conflict → Normalize the term across the spec; retain the original only if necessary by adding `(formerly referred to as "X")` once.
   - If the clarification invalidates an earlier ambiguous statement, replace that statement instead of duplicating it; leave no obsolete contradictory text.
   - Save the spec file AFTER each integration to minimize the risk of context loss (atomic overwrite).
   - Preserve formatting: do not reorder unrelated sections; keep the heading hierarchy intact.
   - Keep each inserted clarification minimal and testable (avoid narrative drift).
|
||||
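As an illustration, the incremental updates above produce a spec fragment shaped roughly like this (the date, question, and answer are invented):

```markdown
## Clarifications

### Session 2025-01-15

- Q: Should sessions expire after inactivity? → A: Yes, after 30 minutes
```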

6. Validation (performed after EACH write plus a final pass):
   - The Clarifications session contains exactly one bullet per accepted answer (no duplicates).
   - Total asked (accepted) questions ≤ 5.
   - Updated sections contain no lingering vague placeholders the new answer was meant to resolve.
   - No contradictory earlier statement remains (scan for now-invalid alternative choices and remove them).
   - Markdown structure is valid; the only allowed new headings are `## Clarifications` and `### Session YYYY-MM-DD`.
   - Terminology consistency: the same canonical term is used across all updated sections.

7. Write the updated spec back to `FEATURE_SPEC`.

8. Report completion (after the questioning loop ends or early termination):
   - Number of questions asked & answered.
   - Path to the updated spec.
   - Sections touched (list names).
   - Coverage summary table listing each taxonomy category with Status: Resolved (was Partial/Missing and addressed), Deferred (exceeds the question quota or better suited for planning), Clear (already sufficient), or Outstanding (still Partial/Missing but low impact).
   - If any Outstanding or Deferred items remain, recommend whether to proceed to `/speckit.plan` or run `/speckit.clarify` again later post-plan.
   - Suggested next command.

Behavior rules:

- If no meaningful ambiguities are found (or all potential questions would be low-impact), respond: "No critical ambiguities detected worth formal clarification." and suggest proceeding.
- If the spec file is missing, instruct the user to run `/speckit.specify` first (do not create a new spec here).
- Never exceed 5 total asked questions (clarification retries for a single question do not count as new questions).
- Avoid speculative tech stack questions unless the absence blocks functional clarity.
- Respect user early termination signals ("stop", "done", "proceed").
- If no questions were asked due to full coverage, output a compact coverage summary (all categories Clear), then suggest advancing.
- If the quota is reached with unresolved high-impact categories remaining, explicitly flag them under Deferred with a rationale.

Context for prioritization: {ARGS}
778
commands/complete.md
Normal file
@@ -0,0 +1,778 @@

---
description: Complete feature or bugfix workflow and merge to parent branch
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Goal

Complete feature or bugfix workflow by cleaning up, validating, committing, and merging to the parent branch.

**Purpose**: Provide a clean, guided completion process for features and bugfixes developed with SpecSwarm workflows.

**Scope**: Handles cleanup → validation → commit → merge → branch deletion

**NEW**: Supports individual feature branches with auto-merge to parent branch (not just main)

---

## Pre-Flight Checks

```bash
# Ensure we're in a git repository
if ! git rev-parse --git-dir > /dev/null 2>&1; then
  echo "❌ Error: Not in a git repository"
  echo ""
  echo "This command must be run from within a git repository."
  exit 1
fi

# Get repository root
REPO_ROOT=$(git rev-parse --show-toplevel)
cd "$REPO_ROOT"
```

---

## Execution Steps

### Step 1: Detect Workflow Context

```bash
echo "🎯 Feature Completion Workflow"
echo "══════════════════════════════════════════"
echo ""

# Detect current branch
CURRENT_BRANCH=$(git rev-parse --abbrev-ref HEAD)

# Extract feature/bug number from branch name
# Patterns: NNN-*, feature/NNN-*, bugfix/NNN-*, fix/NNN-*
FEATURE_NUM=$(echo "$CURRENT_BRANCH" | grep -oE '^[0-9]{3}' || \
  echo "$CURRENT_BRANCH" | grep -oE '(feature|bugfix|fix)/([0-9]{3})' | grep -oE '[0-9]{3}' || \
  echo "")

# If no number found, check user arguments
if [ -z "$FEATURE_NUM" ] && [ -n "$ARGUMENTS" ]; then
  # Try to extract number from arguments
  FEATURE_NUM=$(echo "$ARGUMENTS" | grep -oE '\b[0-9]{3}\b' | head -1)
fi

# If still no number, ask user
if [ -z "$FEATURE_NUM" ]; then
  echo "⚠️  Could not detect feature/bug number from branch: $CURRENT_BRANCH"
  echo ""
  read -p "Enter feature or bug number (e.g., 915): " FEATURE_NUM

  if [ -z "$FEATURE_NUM" ]; then
    echo "❌ Error: Feature number required"
    exit 1
  fi

  # Pad to 3 digits (force base 10 so leading zeros aren't read as octal)
  FEATURE_NUM=$(printf "%03d" "$((10#$FEATURE_NUM))")
fi

# Source features location helper
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PLUGIN_DIR="$(dirname "$SCRIPT_DIR")"
source "$PLUGIN_DIR/lib/features-location.sh"

# Initialize features directory
get_features_dir "$REPO_ROOT"

# Determine workflow type from branch or directory
if echo "$CURRENT_BRANCH" | grep -qE '^(bugfix|bug|fix)/'; then
  WORKFLOW_TYPE="bugfix"
elif echo "$CURRENT_BRANCH" | grep -qE '^(feature|feat)/'; then
  WORKFLOW_TYPE="feature"
else
  # Check if feature directory exists
  if find_feature_dir "$FEATURE_NUM" "$REPO_ROOT" 2>/dev/null; then
    WORKFLOW_TYPE="feature"
  else
    # Ask user
    read -p "Is this a feature or bugfix? (feature/bugfix): " WORKFLOW_TYPE
  fi
fi

# Find feature directory (re-find since the condition above may not have set it)
find_feature_dir "$FEATURE_NUM" "$REPO_ROOT" 2>/dev/null

if [ -z "$FEATURE_DIR" ]; then
  echo "⚠️  Warning: Feature directory not found for ${WORKFLOW_TYPE} ${FEATURE_NUM}"
  echo ""
  echo "Continuing without feature artifacts..."
  FEATURE_DIR=""
  STORED_PARENT_BRANCH=""
else
  # Get feature title from spec
  if [ -f "$FEATURE_DIR/spec.md" ]; then
    FEATURE_TITLE=$(grep -m1 '^# Feature' "$FEATURE_DIR/spec.md" | sed 's/^# Feature [0-9]*: //' || echo "Feature $FEATURE_NUM")

    # Extract parent branch from YAML frontmatter (v2.1.1+)
    STORED_PARENT_BRANCH=$(grep -A 10 '^---$' "$FEATURE_DIR/spec.md" 2>/dev/null | grep '^parent_branch:' | sed 's/^parent_branch: *//' | tr -d '\r' || echo "")
  else
    FEATURE_TITLE="Feature $FEATURE_NUM"
    STORED_PARENT_BRANCH=""
  fi
fi

# Display detected context
echo "Detected: $(echo "$WORKFLOW_TYPE" | sed 's/\b\(.\)/\u\1/') $FEATURE_NUM"
if [ -n "$FEATURE_TITLE" ]; then
  echo "Title: $FEATURE_TITLE"
fi
echo "Branch: $CURRENT_BRANCH"
if [ -n "$FEATURE_DIR" ]; then
  echo "Directory: $FEATURE_DIR"
fi
echo ""
```

---
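The `parent_branch` extraction above assumes spec.md frontmatter shaped roughly like this (every field and value here is illustrative except `parent_branch`, which is the one the grep looks for):

```markdown
---
feature: 042
parent_branch: 041-user-auth
---

# Feature 042: Session Management
```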

### Step 1b: Detect Parent Branch Strategy

```bash
echo "🔍 Analyzing git workflow..."
echo ""

# Detect main branch name
MAIN_BRANCH=$(git symbolic-ref refs/remotes/origin/HEAD 2>/dev/null | sed 's@^refs/remotes/origin/@@')
if [ -z "$MAIN_BRANCH" ]; then
  # Fallback: try common names or use git's default branch
  if git show-ref --verify --quiet refs/heads/main; then
    MAIN_BRANCH="main"
  elif git show-ref --verify --quiet refs/heads/master; then
    MAIN_BRANCH="master"
  else
    # Use current branch as fallback
    MAIN_BRANCH=$(git rev-parse --abbrev-ref HEAD)
    echo "⚠️  Warning: Could not detect main branch, using current branch: $MAIN_BRANCH"
  fi
fi

# Detect if we're in a sequential upgrade branch workflow
SEQUENTIAL_BRANCH=false
PARENT_BRANCH="$MAIN_BRANCH"

if [ "$CURRENT_BRANCH" != "$MAIN_BRANCH" ]; then
  # Check if current branch contains multiple feature directories
  # Check both old (features/) and new (.specswarm/features/) locations
  FEATURE_DIRS_ON_BRANCH=$(git log "$CURRENT_BRANCH" --not "$MAIN_BRANCH" --name-only --pretty=format: 2>/dev/null | \
    grep -E '^(features/|\.specswarm/features/)[0-9]{3}-' | sed 's#^\.specswarm/##' | cut -d'/' -f2 | sort -u | wc -l || echo "0")

  if [ "$FEATURE_DIRS_ON_BRANCH" -gt 1 ]; then
    SEQUENTIAL_BRANCH=true
    PARENT_BRANCH="$CURRENT_BRANCH"
    echo "Detected: Sequential branch workflow"
    echo "  This branch contains $FEATURE_DIRS_ON_BRANCH features"
    echo "  Features will be marked complete without merging"
    echo ""
  fi
fi

# If not sequential, determine parent branch
if [ "$SEQUENTIAL_BRANCH" = "false" ]; then
  # Check for stored parent branch (v2.1.1+)
  echo "Determining parent branch..."
  echo "  Stored parent branch: ${STORED_PARENT_BRANCH:-<empty>}"

  if [ -n "$STORED_PARENT_BRANCH" ] && [ "$STORED_PARENT_BRANCH" != "unknown" ]; then
    PARENT_BRANCH="$STORED_PARENT_BRANCH"
    echo "✓ Using parent branch from spec.md: $PARENT_BRANCH"
    echo ""
  else
    echo "⚠️  No valid parent branch in spec.md, checking fallback options..."
    # Fallback: Check if there's a previous feature branch this might merge into
    PREV_FEATURE_NUM=$(printf "%03d" $((10#$FEATURE_NUM - 1)))
    # git branch -a prefixes entries with "* " or two spaces
    PREV_FEATURE_BRANCH=$(git branch -a 2>/dev/null | grep -E "^[* ]+(remotes/origin/)?${PREV_FEATURE_NUM}-" | head -1 | sed 's/^[* ]*//' | sed 's/remotes\/origin\///' || echo "")

    if [ -n "$PREV_FEATURE_BRANCH" ] && git show-ref --verify --quiet "refs/heads/$PREV_FEATURE_BRANCH" 2>/dev/null; then
      echo "Found previous feature branch: $PREV_FEATURE_BRANCH"
      echo ""
      read -p "Merge into $PREV_FEATURE_BRANCH instead of $MAIN_BRANCH? (y/n): " merge_into_prev
      if [ "$merge_into_prev" = "y" ]; then
        PARENT_BRANCH="$PREV_FEATURE_BRANCH"
        echo "✓ Will merge to: $PARENT_BRANCH"
      else
        echo "✓ Will merge to: $MAIN_BRANCH"
      fi
      echo ""
    else
      echo "✓ Will merge to: $MAIN_BRANCH (default)"
      echo ""
    fi
  fi
fi
```

---

### Step 2: Cleanup Diagnostic Files

```bash
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "Phase 1: Cleanup"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""

# Patterns for diagnostic files
DIAGNOSTIC_PATTERNS=(
  "check-*.ts"
  "check-*.js"
  "check_*.ts"
  "check_*.js"
  "diagnose-*.ts"
  "diagnose-*.js"
  "debug-*.ts"
  "debug-*.js"
  "temp-*.ts"
  "temp-*.js"
)

# Find matching files
DIAGNOSTIC_FILES=()
for pattern in "${DIAGNOSTIC_PATTERNS[@]}"; do
  while IFS= read -r file; do
    if [ -f "$file" ]; then
      DIAGNOSTIC_FILES+=("$file")
    fi
  done < <(find "$REPO_ROOT" -maxdepth 1 -name "$pattern" 2>/dev/null)
done

if [ ${#DIAGNOSTIC_FILES[@]} -eq 0 ]; then
  echo "✓ No diagnostic files to clean up"
else
  echo "📂 Diagnostic Files Found:"
  for file in "${DIAGNOSTIC_FILES[@]}"; do
    basename_file=$(basename "$file")
    size=$(du -h "$file" 2>/dev/null | cut -f1 || echo "?")
    echo "  - $basename_file ($size)"
  done
  echo ""

  echo "What should I do with these files?"
  echo "  1. Delete all (recommended)"
  echo "  2. Move to .claude/debug/ (keep for review)"
  echo "  3. Keep as-is (will be committed if staged)"
  echo "  4. Manual selection"
  echo ""
  read -p "Choice (1-4): " cleanup_choice

  case $cleanup_choice in
    1)
      for file in "${DIAGNOSTIC_FILES[@]}"; do
        rm -f "$file"
      done
      echo "✓ Deleted ${#DIAGNOSTIC_FILES[@]} diagnostic files"
      ;;
    2)
      mkdir -p "$REPO_ROOT/.claude/debug"
      for file in "${DIAGNOSTIC_FILES[@]}"; do
        mv "$file" "$REPO_ROOT/.claude/debug/"
      done
      echo "✓ Moved ${#DIAGNOSTIC_FILES[@]} files to .claude/debug/"
      ;;
    3)
      echo "✓ Keeping diagnostic files"
      ;;
    4)
      for file in "${DIAGNOSTIC_FILES[@]}"; do
        read -p "Delete $(basename "$file")? (y/n): " delete_choice
        if [ "$delete_choice" = "y" ]; then
          rm -f "$file"
          echo "  ✓ Deleted"
        else
          echo "  ✓ Kept"
        fi
      done
      ;;
    *)
      echo "✓ Skipping cleanup"
      ;;
  esac
fi

echo ""
```

---

### Step 3: Pre-Merge Validation

```bash
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "Phase 2: Pre-Merge Validation"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
echo "Running validation checks..."
echo ""

VALIDATION_PASSED=true

# Check if package.json exists (indicates Node.js project)
if [ -f "package.json" ]; then
  # Run tests if test script exists (run once, inspect the captured output)
  if grep -q '"test"' package.json; then
    echo "  Running tests..."
    TEST_RESULT=$(npm test --silent 2>&1)
    if echo "$TEST_RESULT" | grep -qE "(passing|All tests passed)"; then
      TEST_OUTPUT=$(echo "$TEST_RESULT" | grep -oE '[0-9]+ passing' | head -1)
      echo "  ✓ Tests passing ($TEST_OUTPUT)"
    else
      echo "  ⚠️  Some tests may have failed (check manually)"
    fi
  fi

  # TypeScript check if tsconfig.json exists
  if [ -f "tsconfig.json" ]; then
    echo "  Checking TypeScript..."
    if npx tsc --noEmit 2>&1 | grep -qE "error TS[0-9]+"; then
      echo "  ❌ TypeScript errors found"
      VALIDATION_PASSED=false
    else
      echo "  ✓ No TypeScript errors"
    fi
  fi

  # Build check if build script exists
  if grep -q '"build"' package.json; then
    echo "  Checking build..."
    if npm run build --silent 2>&1 | grep -qEi "(error|failed)"; then
      echo "  ⚠️  Build may have issues (check manually)"
    else
      echo "  ✓ Build successful"
    fi
  fi
fi

# Feature completion check
# NOTE: grep -c prints "0" itself when nothing matches (while exiting non-zero),
# so don't append a fallback echo "0" — that would yield two lines.
if [ -n "$FEATURE_DIR" ] && [ -f "$FEATURE_DIR/tasks.md" ]; then
  TOTAL_TASKS=$(grep -cE '^### T[0-9]{3}:' "$FEATURE_DIR/tasks.md" 2>/dev/null || true)
  COMPLETED_TASKS=$(grep -cE '^### T[0-9]{3}:.*\[x\]' "$FEATURE_DIR/tasks.md" 2>/dev/null || true)
  if [ "$TOTAL_TASKS" -gt "0" ]; then
    echo "  ✓ Feature progress ($COMPLETED_TASKS/$TOTAL_TASKS tasks)"
  fi
fi

# Bug resolution check
if [ -n "$FEATURE_DIR" ] && [ -f "$FEATURE_DIR/bugfix.md" ]; then
  BUG_COUNT=$(grep -cE '^## Bug [0-9]{3}:' "$FEATURE_DIR/bugfix.md" 2>/dev/null || true)
  if [ "$BUG_COUNT" -gt "0" ]; then
    echo "  ✓ Bugs addressed ($BUG_COUNT bugs)"
  fi
fi

echo ""

if [ "$VALIDATION_PASSED" = "false" ]; then
  echo "⚠️  Validation issues detected"
  echo ""
  read -p "Continue anyway? (y/n): " continue_choice
  if [ "$continue_choice" != "y" ]; then
    echo "❌ Completion cancelled"
    echo ""
    echo "Fix the issues above and run /specswarm:complete again"
    exit 1
  fi
  echo ""
fi

echo "Ready to commit and merge!"
echo ""
```

---

### Step 4: Commit Changes

```bash
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "Phase 3: Commit Changes"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""

# Check if there are changes to commit
if git diff --quiet && git diff --cached --quiet; then
  echo "✓ No changes to commit (working tree clean)"
  echo ""
  SKIP_COMMIT=true
else
  SKIP_COMMIT=false

  # Show files to be committed
  echo "Files to commit:"
  git status --short | head -20
  echo ""

  # Determine commit type
  if [ "$WORKFLOW_TYPE" = "bugfix" ]; then
    COMMIT_TYPE="fix"
  else
    COMMIT_TYPE="feat"
  fi

  # Generate commit message
  COMMIT_MSG="${COMMIT_TYPE}: ${FEATURE_TITLE}

"

  # Add description from spec if available
  if [ -n "$FEATURE_DIR" ] && [ -f "$FEATURE_DIR/spec.md" ]; then
    DESCRIPTION=$(grep -A 5 '^## Summary' "$FEATURE_DIR/spec.md" 2>/dev/null | tail -n +2 | head -3 | sed '/^$/d' || echo "")
    if [ -n "$DESCRIPTION" ]; then
      COMMIT_MSG+="${DESCRIPTION}

"
    fi
  fi

  # Add bug fixes if any
  if [ -n "$FEATURE_DIR" ] && [ -f "$FEATURE_DIR/bugfix.md" ]; then
    BUG_NUMBERS=$(grep -oE 'Bug [0-9]{3}' "$FEATURE_DIR/bugfix.md" 2>/dev/null | grep -oE '[0-9]{3}' | sort -u || echo "")
    if [ -n "$BUG_NUMBERS" ]; then
      COMMIT_MSG+="Fixes:
"
      while IFS= read -r bug; do
        COMMIT_MSG+="- Bug $bug
"
      done <<< "$BUG_NUMBERS"
      COMMIT_MSG+="
"
    fi
  fi

  # Add generated footer
  COMMIT_MSG+="🤖 Generated with SpecSwarm

Co-Authored-By: Claude <noreply@anthropic.com>"

  # Show commit message
  echo "Suggested commit message:"
  echo "┌────────────────────────────────────────────┐"
  echo "$COMMIT_MSG" | sed 's/^/│ /'
  echo "└────────────────────────────────────────────┘"
  echo ""

  read -p "Edit commit message? (y/n): " edit_choice

  if [ "$edit_choice" = "y" ]; then
    # Create temp file for editing
    TEMP_MSG_FILE=$(mktemp)
    echo "$COMMIT_MSG" > "$TEMP_MSG_FILE"
    ${EDITOR:-nano} "$TEMP_MSG_FILE"
    COMMIT_MSG=$(cat "$TEMP_MSG_FILE")
    rm -f "$TEMP_MSG_FILE"
  fi

  # Stage all changes
  git add -A

  # Commit
  echo "$COMMIT_MSG" | git commit -F -
  echo "✓ Changes committed to feature branch"
  echo ""

  # Push
  read -p "Push to remote? (y/n): " push_choice
  if [ "$push_choice" = "y" ]; then
    git push
    echo "✓ Pushed to remote"
  else
    echo "⚠️  Changes not pushed (you can push manually later)"
  fi
  echo ""
fi
```

---

### Step 5: Merge to Parent Branch

```bash
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "Phase 4: Merge to Parent Branch"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""

# Skip merge for sequential branches
if [ "$SEQUENTIAL_BRANCH" = "true" ]; then
  echo "✓ Sequential branch workflow - NO MERGE"
  echo ""
  echo "This feature is part of a sequential upgrade branch."
  echo "Features will be merged together after the entire sequence completes."
  echo ""

  # Create completion tag for tracking
  TAG_NAME="feature-${FEATURE_NUM}-complete"
  if ! git tag -l | grep -q "^${TAG_NAME}$"; then
    git tag "$TAG_NAME"
    echo "✓ Created completion tag: $TAG_NAME"
  fi
  echo ""
  SKIP_MERGE=true
elif [ "$CURRENT_BRANCH" = "$PARENT_BRANCH" ]; then
  echo "✓ Already on $PARENT_BRANCH branch"
  echo "✓ No merge needed"
  echo ""
  SKIP_MERGE=true
else
  SKIP_MERGE=false

  # Validation: Show merge details
  echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  echo "Merge Plan"
  echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  echo "  Source branch: $CURRENT_BRANCH"
  echo "  Target branch: $PARENT_BRANCH"
  if [ -n "$STORED_PARENT_BRANCH" ]; then
    echo "  Source: spec.md parent_branch field"
  elif [ -n "$PREV_FEATURE_BRANCH" ]; then
    echo "  Source: Previous feature branch detected"
  else
    echo "  Source: Default main branch"
  fi
  echo ""

  if [ "$PARENT_BRANCH" != "$MAIN_BRANCH" ]; then
    echo "ℹ️  Note: Merging into '$PARENT_BRANCH' (not $MAIN_BRANCH)"
    echo "   This is an intermediate merge in a feature branch hierarchy."
  fi
  echo ""
  echo "⚠️  IMPORTANT: This will merge your changes to $PARENT_BRANCH."
  echo "   Make sure you've tested the feature thoroughly."
  echo "   If the target branch looks wrong, press 'n' and check spec.md"
  echo ""
  read -p "Proceed with merge? (y/n): " merge_choice

  if [ "$merge_choice" != "y" ]; then
    echo ""
    echo "❌ Merge cancelled"
    echo ""
    echo "You're still on branch: $CURRENT_BRANCH"
    echo ""
    echo "When ready to merge, run:"
    echo "  /specswarm:complete"
    echo ""
    echo "Or merge manually:"
    echo "  git checkout $PARENT_BRANCH"
    echo "  git merge --no-ff $CURRENT_BRANCH"
    exit 0
  fi

  echo ""
  echo "Checking out $PARENT_BRANCH..."
  git checkout "$PARENT_BRANCH"

  echo "Pulling latest changes..."
  git pull
  echo "✓ $PARENT_BRANCH branch up to date"
  echo ""

  echo "Merging feature branch (no-ff)..."
  MERGE_MSG="Merge ${WORKFLOW_TYPE}: ${FEATURE_TITLE}

Feature $FEATURE_NUM complete

🤖 Generated with SpecSwarm"

  if git merge --no-ff "$CURRENT_BRANCH" -m "$MERGE_MSG"; then
    echo "✓ Merge successful"

    # Create completion tag
    TAG_NAME="feature-${FEATURE_NUM}-complete"
    if ! git tag -l | grep -q "^${TAG_NAME}$"; then
      git tag "$TAG_NAME"
      echo "✓ Created completion tag: $TAG_NAME"
    fi
  else
    echo "❌ Merge conflicts detected"
    echo ""
    echo "Please resolve conflicts manually:"
    echo "  1. Fix conflicts in your editor"
    echo "  2. git add <resolved-files>"
    echo "  3. git merge --continue"
    echo "  4. /specswarm:complete --continue"
    echo ""
    exit 1
  fi

  echo ""
  read -p "Push to remote? (y/n): " push_main_choice
  if [ "$push_main_choice" = "y" ]; then
    echo "Pushing to remote..."
    git push --follow-tags
    echo "✓ $PARENT_BRANCH branch updated"
  else
    echo "⚠️  Not pushed (you can push manually later)"
  fi
  echo ""
fi
```

---

### Step 6: Branch Cleanup

```bash
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "Phase 5: Cleanup"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""

if [ "$SKIP_MERGE" = "true" ] && [ "$SEQUENTIAL_BRANCH" = "false" ]; then
  echo "✓ No branch cleanup needed (already on $PARENT_BRANCH)"
elif [ "$SEQUENTIAL_BRANCH" = "true" ]; then
  echo "✓ Sequential branch - keeping feature branch for remaining features"
else
  read -p "Delete feature branch '$CURRENT_BRANCH'? (y/n): " delete_choice

  if [ "$delete_choice" = "y" ]; then
    # Delete local branch
    if git branch -d "$CURRENT_BRANCH" 2>/dev/null; then
      echo "✓ Deleted local branch: $CURRENT_BRANCH"
    else
      echo "⚠️  Could not delete local branch (may have unmerged commits)"
      read -p "Force delete? (y/n): " force_choice
      if [ "$force_choice" = "y" ]; then
        git branch -D "$CURRENT_BRANCH"
        echo "✓ Force deleted local branch: $CURRENT_BRANCH"
      fi
    fi

    # Delete remote branch
    read -p "Delete remote branch? (y/n): " delete_remote_choice
    if [ "$delete_remote_choice" = "y" ]; then
      if git push origin --delete "$CURRENT_BRANCH" 2>/dev/null; then
        echo "✓ Deleted remote branch: origin/$CURRENT_BRANCH"
      else
        echo "⚠️  Could not delete remote branch (may not exist)"
      fi
    fi
  else
    echo "✓ Keeping feature branch"
  fi
fi

# Update feature status
if [ -n "$FEATURE_DIR" ] && [ -f "$FEATURE_DIR/spec.md" ]; then
  sed -i 's/^Status:.*/Status: Complete/' "$FEATURE_DIR/spec.md" 2>/dev/null || true
  echo "✓ Updated feature status: Complete"
fi

echo ""
```

---

## Final Output

```bash
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "🎉 $(echo "$WORKFLOW_TYPE" | sed 's/\b\(.\)/\u\1/') Complete!"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""

echo "$(echo "$WORKFLOW_TYPE" | sed 's/\b\(.\)/\u\1/') $FEATURE_NUM: $FEATURE_TITLE"

if [ "$SKIP_COMMIT" = "false" ]; then
  echo "✓ Changes committed"
fi

if [ "$SEQUENTIAL_BRANCH" = "true" ]; then
  echo "✓ Feature marked complete (sequential workflow)"
  echo "✓ Tag created: feature-${FEATURE_NUM}-complete"
  echo ""
  echo "ℹ️  This is part of a sequential upgrade branch."
  echo "   Continue with remaining features, then merge the entire branch to main."
elif [ "$SKIP_MERGE" = "false" ]; then
  echo "✓ Merged to: $PARENT_BRANCH"
  if [ "$PARENT_BRANCH" != "$MAIN_BRANCH" ]; then
    echo "ℹ️  Note: This is an intermediate merge. Complete $PARENT_BRANCH next."
  fi
  if [ "${delete_choice:-}" = "y" ]; then
    echo "✓ Branch: Deleted"
  fi
fi

if [ -n "$FEATURE_DIR" ]; then
  echo ""
  echo "📂 Feature Archive:"
  echo "  $FEATURE_DIR"
fi

echo ""
echo "🚀 Next Steps:"
if [ "$SEQUENTIAL_BRANCH" = "true" ]; then
  echo "  - Continue with next feature in sequence"
  echo "  - After all features complete, merge branch to main"
  echo "  - Test the complete upgrade sequence"
elif [ "$PARENT_BRANCH" != "$MAIN_BRANCH" ] && [ "$SKIP_MERGE" = "false" ]; then
  echo "  - Complete the parent branch: /specswarm:complete"
  echo "  - This will merge $PARENT_BRANCH to main"
else
  if grep -q '"deploy' package.json 2>/dev/null; then
    echo "  - Deploy to staging: npm run deploy"
  fi
  if grep -q '"test:e2e' package.json 2>/dev/null; then
    echo "  - Run E2E tests: npm run test:e2e"
  fi
  echo "  - Monitor production for issues"
  echo "  - Update project documentation if needed"
fi
echo ""
```

---

## Error Handling

**If not in a git repository:**
- Exit with a clear error message

**If validation fails:**
- Show the issues
- Offer to continue anyway or cancel

**If merge conflicts occur:**
- Provide clear resolution instructions
- Suggest the --continue flag for resuming

**If branch deletion fails:**
- Offer a force delete option
- Allow keeping the branch if the user prefers

---

## Operating Principles

1. **User Guidance**: Clear, step-by-step process with explanations
2. **Safety First**: Confirm before destructive operations (merge, delete)
3. **Flexibility**: Allow skipping steps or customizing behavior
4. **Cleanup**: Remove temporary files, update documentation
5. **Validation**: Check tests, build, and TypeScript before merging
6. **Transparency**: Show what's being done at each step

---

## Success Criteria

✅ Diagnostic files cleaned up
✅ Pre-merge validation passed (or user acknowledged issues)
✅ Changes committed with a proper message
✅ Merged to the parent branch
✅ Feature branch deleted (optional)
✅ Feature status updated to Complete

---

**Workflow Coverage**: Completes ~100% of the feature/bugfix lifecycle
**User Experience**: Guided, safe, transparent completion process

90
commands/constitution.md
Normal file
@@ -0,0 +1,90 @@
|
||||
---
|
||||
description: Create or update the project constitution from interactive or provided principle inputs, ensuring all dependent templates stay in sync.
|
||||
---
|
||||
|
||||
<!--
|
||||
ATTRIBUTION CHAIN:
|
||||
1. Original: GitHub spec-kit (https://github.com/github/spec-kit)
|
||||
Copyright (c) GitHub, Inc. | MIT License
|
||||
2. Adapted: SpecKit plugin by Marty Bonacci (2025)
|
||||
3. Forked: SpecSwarm plugin with tech stack management
|
||||
by Marty Bonacci & Claude Code (2025)
|
||||
-->
|
||||
|
||||
```text
|
||||
$ARGUMENTS
|
||||
```
|
||||
|
||||
You **MUST** consider the user input before proceeding (if not empty).
|
||||
|
||||
## Outline
|
||||
|
||||
You are updating the project constitution at `.specswarm/constitution.md`. This file is a TEMPLATE containing placeholder tokens in square brackets (e.g. `[PROJECT_NAME]`, `[PRINCIPLE_1_NAME]`). Your job is to (a) collect/derive concrete values, (b) fill the template precisely, and (c) propagate any amendments across dependent artifacts.
|
||||
|
||||
Follow this execution flow:

1. Load the existing constitution template at `.specswarm/constitution.md`.
   - Identify every placeholder token of the form `[ALL_CAPS_IDENTIFIER]`.
   **IMPORTANT**: The user might require fewer or more principles than the template uses. If a number is specified, respect it while following the general template structure, and update the document accordingly.

2. Collect/derive values for placeholders:
   - **First, read README.md for project context** (if it exists):
     * Use the Read tool to read `README.md`
     * Extract the project name from the first heading (e.g., `# My Project` → "My Project")
     * Extract the project description from the first paragraph after the title
     * Look for sections like "Goals", "Vision", "Purpose", "Standards", "Conventions" for principle guidance
     * If README.md doesn't exist or lacks content, skip this step
   - If user input (conversation) supplies a value, use it (it takes priority over README).
   - Otherwise infer from README context (gathered above) or prior constitution versions if embedded.
   - For governance dates: `RATIFICATION_DATE` is the original adoption date (if unknown, ask or mark TODO); `LAST_AMENDED_DATE` is today if changes are made, otherwise keep the previous value.
   - `CONSTITUTION_VERSION` must increment according to semantic versioning rules:
     * MAJOR: Backward-incompatible governance/principle removals or redefinitions.
     * MINOR: New principle/section added or materially expanded guidance.
     * PATCH: Clarifications, wording, typo fixes, non-semantic refinements.
   - If the version bump type is ambiguous, propose reasoning before finalizing.
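The three bump rules above can be sketched as a small helper function (illustrative only; `bump_version` is a hypothetical name, and in practice the command reasons about the bump rather than running code):

```shell
# bump_version OLD LEVEL → prints the new semantic version.
# LEVEL is major, minor, or patch, matching the rules above.
bump_version() {
  local old="$1" level="$2"
  local major minor patch
  IFS=. read -r major minor patch <<EOF
$old
EOF
  case "$level" in
    major) echo "$((major + 1)).0.0" ;;   # incompatible removals/redefinitions
    minor) echo "$major.$((minor + 1)).0" ;;  # new or expanded principles
    patch) echo "$major.$minor.$((patch + 1))" ;;  # clarifications only
  esac
}

bump_version "2.1.1" minor   # → 2.2.0
```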
3. Draft the updated constitution content:
   - Replace every placeholder with concrete text (no bracketed tokens left except intentionally retained template slots the project has chosen not to define yet—explicitly justify any left).
   - Preserve heading hierarchy; comments can be removed once replaced unless they still add clarifying guidance.
   - Ensure each Principle section has a succinct name line, a paragraph (or bullet list) capturing non‑negotiable rules, and an explicit rationale if not obvious.
   - Ensure the Governance section lists the amendment procedure, versioning policy, and compliance review expectations.

4. Consistency propagation checklist (convert prior checklist into active validations):
   - Read `/templates/plan-template.md` and ensure any "Constitution Check" or rules align with updated principles.
   - Read `/templates/spec-template.md` for scope/requirements alignment—update if the constitution adds/removes mandatory sections or constraints.
   - Read `/templates/tasks-template.md` and ensure task categorization reflects new or removed principle-driven task types (e.g., observability, versioning, testing discipline).
   - Read each command file in `/templates/commands/*.md` (including this one) to verify no outdated references (agent-specific names like CLAUDE only) remain where generic guidance is required.
   - Read any runtime guidance docs (e.g., `docs/quickstart.md`, or agent-specific guidance files if present). Update references to changed principles. Note: README.md is already read in Step 2.

5. Produce a Sync Impact Report (prepend as an HTML comment at the top of the constitution file after the update):
   - Version change: old → new
   - List of modified principles (old title → new title if renamed)
   - Added sections
   - Removed sections
   - Templates requiring updates (✅ updated / ⚠ pending) with file paths
   - Follow-up TODOs if any placeholders are intentionally deferred.
6. Validation before final output:
   - No remaining unexplained bracket tokens.
   - Version line matches the report.
   - Dates in ISO format YYYY-MM-DD.
   - Principles are declarative, testable, and free of vague language ("should" → replace with MUST/SHOULD rationale where appropriate).
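The first two checks lend themselves to a quick scripted pass; a sketch, assuming the constitution path above (the `Ratified`/`Last Amended` field names are hypothetical):

```shell
CONSTITUTION=".specswarm/constitution.md"

if [ -f "$CONSTITUTION" ]; then
  # Check 1: no unexplained bracket tokens remain.
  if grep -qE '\[[A-Z0-9_]+\]' "$CONSTITUTION"; then
    echo "❌ Unresolved placeholder tokens remain"
  fi
  # Check 2: governance date lines must contain ISO YYYY-MM-DD dates
  # (field names here are assumptions; adjust to the template's labels).
  grep -E 'Ratified|Last Amended' "$CONSTITUTION" \
    | grep -vE '[0-9]{4}-[0-9]{2}-[0-9]{2}' \
    && echo "❌ Non-ISO date found" || true
fi
```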
7. Write the completed constitution back to `.specswarm/constitution.md` (overwrite).

8. Output a final summary to the user with:
   - New version and bump rationale.
   - Any files flagged for manual follow-up.
   - Suggested commit message (e.g., `docs: amend constitution to vX.Y.Z (principle additions + governance update)`).

Formatting & Style Requirements:

- Use Markdown headings exactly as in the template (do not demote/promote levels).
- Wrap long rationale lines for readability (<100 chars ideally) but do not hard-enforce with awkward breaks.
- Keep a single blank line between sections.
- Avoid trailing whitespace.

If the user supplies partial updates (e.g., only one principle revision), still perform the validation and version decision steps.

If critical info is missing (e.g., the ratification date is truly unknown), insert `TODO(<FIELD_NAME>): explanation` and include it in the Sync Impact Report under deferred items.

Do not create a new template; always operate on the existing `.specswarm/constitution.md` file.
570
commands/coordinate.md
Normal file
@@ -0,0 +1,570 @@
---
description: Coordinate complex debugging workflows with logging, monitoring, and agent orchestration
---

<!--
ATTRIBUTION:
Debug Coordinate Plugin
by Marty Bonacci & Claude Code (2025)
Based on: docs/learnings/2025-10-14-orchestrator-missed-opportunity.md
-->

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Goal

Execute the systematic debugging coordination workflow:
1. **Discovery Phase** - Add logging and collect data
2. **Analysis Phase** - Correlate patterns and identify root causes
3. **Orchestration Phase** - Spawn specialist agents and coordinate fixes
4. **Integration Phase** - Verify fixes work together

**Usage**: `/debug:coordinate <problem-description>`

**Example**:
```bash
/debug:coordinate "navbar not updating after sign-in, sign-out not working, like button blank page"
```

---
## Pre-Coordination Hook

```bash
echo "🐛 Debug Coordinator"
echo "==================="
echo ""
echo "Systematic debugging with logging, monitoring, and agent orchestration"
echo ""

# Record start time
COORD_START_TIME=$(date +%s)

# Get repository root
REPO_ROOT=$(git rev-parse --show-toplevel 2>/dev/null || pwd)
```

---
## Execution Steps

### Step 1: Parse Problem Description

```bash
# Get problem description from arguments
PROBLEM_DESC="$ARGUMENTS"

if [ -z "$PROBLEM_DESC" ]; then
  echo "❌ Error: Problem description required"
  echo ""
  echo "Usage: /debug:coordinate <problem-description>"
  echo ""
  echo "Example:"
  echo "/debug:coordinate \"navbar not updating, sign-out broken, like button error\""
  exit 1
fi

echo "📋 Problem: $PROBLEM_DESC"
echo ""
```

---
### Step 2: Discovery Phase - Initial Analysis

Analyze the problem description to identify:
- How many distinct issues are mentioned
- Which domains might be affected
- Whether orchestration is appropriate

```bash
echo "🔍 Phase 1: Discovery & Analysis"
echo "================================"
echo ""

# Count potential bugs (issues separated by commas or semicolons)
BUG_COUNT=$(echo "$PROBLEM_DESC" | tr ',;' '\n' | grep -cv '^$')

echo "Analyzing problem description..."
echo " • Identified $BUG_COUNT potential issue(s)"
echo ""

# Create debug session directory
DEBUG_SESSION_ID=$(date +%Y%m%d-%H%M%S)
DEBUG_DIR="${REPO_ROOT}/.debug-sessions/${DEBUG_SESSION_ID}"
mkdir -p "$DEBUG_DIR"

echo "Created debug session: $DEBUG_SESSION_ID"
echo "Directory: $DEBUG_DIR"
echo ""

# Write problem description to session
cat > "$DEBUG_DIR/problem-description.md" <<EOF
# Debug Session: $DEBUG_SESSION_ID

**Created**: $(date)
**Problem Description**: $PROBLEM_DESC

## Issues Identified

EOF

# Parse individual issues into a numbered list
ISSUE_NUM=1
echo "$PROBLEM_DESC" | tr ',;' '\n' | grep -v '^$' | while read -r ISSUE; do
  ISSUE_TRIMMED=$(echo "$ISSUE" | sed 's/^ *//;s/ *$//')
  if [ -n "$ISSUE_TRIMMED" ]; then
    echo "$ISSUE_NUM. $ISSUE_TRIMMED" >> "$DEBUG_DIR/problem-description.md"
    ISSUE_NUM=$((ISSUE_NUM + 1))
  fi
done

echo "✅ Problem analysis complete"
echo ""
```

---
### Step 3: Determine Debugging Strategy

Based on the number of issues, decide whether to use orchestration:

```bash
echo "🎯 Determining debugging strategy..."
echo ""

if [ "$BUG_COUNT" -ge 3 ]; then
  STRATEGY="orchestrated"
  echo "Strategy: ORCHESTRATED (multi-bug scenario)"
  echo ""
  echo "Rationale:"
  echo " • $BUG_COUNT distinct issues detected"
  echo " • Parallel investigation will be faster"
  echo " • Orchestrator recommended for 3+ bugs"
  echo ""
else
  STRATEGY="sequential"
  echo "Strategy: SEQUENTIAL (1-2 bugs)"
  echo ""
  echo "Rationale:"
  echo " • $BUG_COUNT issue(s) detected"
  echo " • Sequential debugging is efficient for small sets"
  echo ""
fi

# Save strategy to session
echo "**Strategy**: $STRATEGY" >> "$DEBUG_DIR/problem-description.md"
echo "" >> "$DEBUG_DIR/problem-description.md"
```

---
### Step 4: Discovery Phase - Add Logging Strategy

Generate a logging strategy document that identifies where to add instrumentation:

```markdown
Create a logging strategy specification:

Analyze the problem description and generate a comprehensive logging plan.

For each suspected issue:
1. Identify likely affected files/components
2. Specify logging points to add
3. Define what data to capture
4. Determine monitoring approach

**Output**: Create `$DEBUG_DIR/logging-strategy.md` with:
- Files to instrument
- Logging statements to add
- Data to capture
- Monitoring commands
```

**Logging Strategy Template**:

````markdown
# Logging Strategy

## Issue 1: [Issue Description]

### Suspected Files
- `path/to/file1.ts` - [reason]
- `path/to/file2.ts` - [reason]

### Logging Points

**File**: `path/to/file1.ts`
**Location**: Line XX (in function `functionName`)
**Log Statement**:
```typescript
console.log('[DEBUG-1] FunctionName:', { variable1, variable2, context })
```
**Purpose**: Capture state at critical point

### Monitoring
- Watch for: `[DEBUG-1]` in console
- Expected output: Values should be X
- Error indicators: null, undefined, unexpected values

---

## Issue 2: [Issue Description]
...
````

```bash
echo "📝 Phase 2: Logging Strategy"
echo "============================"
echo ""

echo "Generating logging strategy..."
echo ""
echo "ACTION REQUIRED: Analyze the problem and create logging strategy"
echo ""
echo "Create file: $DEBUG_DIR/logging-strategy.md"
echo ""
echo "For each issue in the problem description:"
echo " 1. Identify suspected files/components"
echo " 2. Determine strategic logging points"
echo " 3. Specify what data to capture"
echo " 4. Define monitoring approach"
echo ""
echo "Use template from: plugins/debug-coordinate/templates/logging-strategy-template.md"
echo ""
```

After the logging strategy is created, proceed to implementation:

```bash
echo "After creating logging strategy, implement the logging:"
echo ""
echo " 1. Add logging statements to suspected files"
echo " 2. Ensure logs are easily searchable (use prefixes like [DEBUG-1])"
echo " 3. Capture relevant context (variables, state, params)"
echo " 4. Commit logging additions (so they can be reverted later)"
echo ""
```

---
### Step 5: Analysis Phase - Run & Monitor

```bash
echo "📊 Phase 3: Monitor & Analyze"
echo "============================="
echo ""

# Ensure the logs directory exists before capturing output
mkdir -p "$DEBUG_DIR/logs"

echo "Next steps:"
echo " 1. Start the application (if not running)"
echo " 2. Reproduce each issue"
echo " 3. Capture log output to: $DEBUG_DIR/logs/"
echo " 4. Analyze patterns and identify root causes"
echo ""
echo "Monitoring commands:"
echo " • Save logs: [command] 2>&1 | tee $DEBUG_DIR/logs/session.log"
echo " • Watch specific pattern: tail -f [logfile] | grep DEBUG"
echo " • Filter errors: grep -E 'ERROR|WARN|DEBUG' [logfile]"
echo ""
```

Create the analysis template:

````bash
cat > "$DEBUG_DIR/analysis-template.md" <<'EOF'
# Debug Analysis

## Issue 1: [Description]

### Log Evidence
```
[Paste relevant log output]
```

### Root Cause
[What is causing this issue?]

**File**: path/to/file.ts
**Line**: XX
**Problem**: [Specific issue - e.g., variable undefined, wrong condition, etc.]

### Domain
- [ ] Backend
- [ ] Frontend
- [ ] Database
- [ ] Config
- [ ] Testing

### Fix Strategy
[How to fix this issue]

---

## Issue 2: [Description]
...

---

## Summary

**Total Issues**: X
**Domains Affected**: [list]
**Orchestration Recommended**: Yes/No
EOF

echo "Created analysis template: $DEBUG_DIR/analysis-template.md"
echo ""
echo "Fill in this template as you analyze logs and identify root causes"
echo ""
````

---
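Building on the monitoring commands above, a quick way to see how often each instrumentation point fired is to tally the `[DEBUG-N]` prefixes in the captured session log (a sketch, assuming the session layout above):

```shell
# Count occurrences of each [DEBUG-N] marker in the captured session log,
# most frequent first. The path follows the session layout above.
LOG="$DEBUG_DIR/logs/session.log"
if [ -f "$LOG" ]; then
  grep -oE '\[DEBUG-[0-9]+\]' "$LOG" | sort | uniq -c | sort -rn
fi
```

A marker that never appears usually means the suspected code path was never reached, which is itself useful evidence.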
### Step 6: Orchestration Phase - Generate Fix Plan

Once root causes are identified, generate the orchestration plan:

```bash
echo "🎯 Phase 4: Orchestration Planning"
echo "==================================="
echo ""

if [ "$STRATEGY" = "orchestrated" ]; then
  echo "Generating orchestration plan for parallel fixes..."
  echo ""

  cat > "$DEBUG_DIR/orchestration-plan.md" <<'EOF'
# Orchestration Plan

## Execution Strategy

**Mode**: Parallel
**Agents**: [Number based on domains]

---

## Agent Assignments

### Agent 1: [Domain] Track
**Responsibility**: [Description]
**Issues**: #1, #3
**Files to Modify**:
- path/to/file1.ts
- path/to/file2.ts

**Changes Required**:
1. [Specific change 1]
2. [Specific change 2]

**Dependencies**: None (can run in parallel)

---

### Agent 2: [Domain] Track
**Responsibility**: [Description]
**Issues**: #2, #4
**Files to Modify**:
- path/to/file3.tsx
- path/to/file4.tsx

**Changes Required**:
1. [Specific change 1]
2. [Specific change 2]

**Dependencies**: None (can run in parallel)

---

## Coordination Points

### Server Restarts
- After Agent 1 completes (backend changes)
- After Agent 2 completes (if needed)

### Integration Testing
- After all agents complete
- Run full test suite
- Manual verification of each fix

---

## Success Criteria
- [ ] All issues resolved
- [ ] No new regressions
- [ ] Tests pass
- [ ] Application works as expected
EOF

  echo "✅ Orchestration plan template created: $DEBUG_DIR/orchestration-plan.md"
  echo ""
  echo "ACTION REQUIRED: Fill in orchestration plan based on analysis"
  echo ""
  echo "Then launch orchestrator:"
  echo " /project-orchestrator:debug --plan=$DEBUG_DIR/orchestration-plan.md"
  echo ""

else
  echo "Sequential debugging workflow (1-2 bugs)"
  echo ""
  echo "Fix issues sequentially using /speclab:bugfix for each"
  echo ""
fi
```

---
### Step 7: Integration Phase - Verification

```bash
echo "✅ Phase 5: Integration & Verification"
echo "======================================="
echo ""

echo "After fixes are implemented:"
echo ""
echo "1. Run integration tests"
echo "2. Manually verify each issue is fixed"
echo "3. Check for new regressions"
echo "4. Remove debug logging (or keep if useful)"
echo "5. Document learnings"
echo ""
echo "Verification checklist: $DEBUG_DIR/verification-checklist.md"
echo ""

cat > "$DEBUG_DIR/verification-checklist.md" <<'EOF'
# Verification Checklist

## Issue Verification

- [ ] Issue 1: [Description] - FIXED
- [ ] Issue 2: [Description] - FIXED
- [ ] Issue 3: [Description] - FIXED

## Regression Testing

- [ ] Existing tests still pass
- [ ] No new errors in console
- [ ] No performance degradation
- [ ] All user flows work correctly

## Cleanup

- [ ] Remove temporary debug logging (or commit if useful)
- [ ] Update documentation
- [ ] Create regression tests for each fix
- [ ] Document learnings

## Metrics

**Debugging Time**: [X hours]
**Number of Issues Fixed**: [X]
**Strategy Used**: [Sequential/Orchestrated]
**Time Savings** (if orchestrated): [X%]
EOF

echo "Created verification checklist"
echo ""
```

---
## Post-Coordination Hook

```bash
echo ""
echo "📊 Debug Coordination Summary"
echo "=============================="
echo ""

# Calculate duration (minutes plus remaining seconds)
COORD_END_TIME=$(date +%s)
COORD_DURATION=$((COORD_END_TIME - COORD_START_TIME))
COORD_MINUTES=$((COORD_DURATION / 60))
COORD_SECONDS=$((COORD_DURATION % 60))

echo "Session ID: $DEBUG_SESSION_ID"
echo "Duration: ${COORD_MINUTES}m ${COORD_SECONDS}s"
echo "Strategy: $STRATEGY"
echo "Issues: $BUG_COUNT"
echo ""
echo "Artifacts Created:"
echo " • $DEBUG_DIR/problem-description.md"
echo " • $DEBUG_DIR/logging-strategy.md"
echo " • $DEBUG_DIR/analysis-template.md"
if [ "$STRATEGY" = "orchestrated" ]; then
  echo " • $DEBUG_DIR/orchestration-plan.md"
fi
echo " • $DEBUG_DIR/verification-checklist.md"
echo ""
echo "📈 Next Steps:"
echo ""
if [ "$STRATEGY" = "orchestrated" ]; then
  echo "ORCHESTRATED WORKFLOW:"
  echo " 1. Fill in logging-strategy.md"
  echo " 2. Implement logging and run application"
  echo " 3. Analyze logs and complete analysis-template.md"
  echo " 4. Fill in orchestration-plan.md"
  echo " 5. Launch orchestrator: /project-orchestrator:debug --plan=$DEBUG_DIR/orchestration-plan.md"
  echo " 6. Verify all fixes with verification-checklist.md"
else
  echo "SEQUENTIAL WORKFLOW:"
  echo " 1. Fill in logging-strategy.md"
  echo " 2. Implement logging and run application"
  echo " 3. Analyze logs and identify root causes"
  echo " 4. Fix issues one by one using /speclab:bugfix"
  echo " 5. Verify with verification-checklist.md"
fi
echo ""
echo "💡 Tip: This structured approach ensures:"
echo " • Systematic investigation (not random debugging)"
echo " • Clear documentation of findings"
echo " • Efficient parallelization (when possible)"
echo " • Verifiable results"
echo ""
```

---
## Success Criteria

✅ Problem description parsed
✅ Debug session created
✅ Strategy determined (sequential vs orchestrated)
✅ Logging strategy template created
✅ Analysis template created
✅ Orchestration plan template created (if needed)
✅ Verification checklist created
✅ Clear next steps provided

---

## Error Handling

**If no problem description provided**:
- Display usage example
- Exit with error

**If repository not detected**:
- Create debug session in current directory
- Warn user about missing git context

---

## Design Philosophy

Based on learnings from [2025-10-14-orchestrator-missed-opportunity.md](../../../docs/learnings/2025-10-14-orchestrator-missed-opportunity.md):

1. **Systematic Over Random**: Structured phases prevent random debugging
2. **Logging First**: Add instrumentation before making changes
3. **Parallel When Possible**: 3+ bugs → orchestrate
4. **Document Everything**: Create an audit trail for learnings
5. **Verify Thoroughly**: Checklists ensure nothing is missed

---

**This plugin transforms chaotic debugging into systematic investigation with clear orchestration opportunities.**
568
commands/deprecate.md
Normal file
@@ -0,0 +1,568 @@
---
description: Phased feature sunset workflow with migration guidance
---

<!--
ATTRIBUTION CHAIN:
1. Original methodology: spec-kit-extensions by Marty Bonacci (2025)
2. Adapted: SpecLab plugin by Marty Bonacci & Claude Code (2025)
3. Based on: GitHub spec-kit | MIT License
-->

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Goal

Execute a phased deprecation workflow to safely sunset features while providing migration guidance to users.

**Key Principles**:
1. **Phased Approach**: Announce → Migrate → Remove
2. **Migration Guidance**: Provide clear alternatives
3. **User Support**: Help users migrate
4. **Track Adoption**: Monitor migration progress
5. **Safe Removal**: Only remove when migration is complete

**Phases**: Announce deprecation, user migration support, feature removal

**Coverage**: Addresses ~5% of development work (feature evolution)

---

## Smart Integration

```bash
SPECSWARM_INSTALLED=$(claude plugin list | grep -q "specswarm" && echo "true" || echo "false")
SPECTEST_INSTALLED=$(claude plugin list | grep -q "spectest" && echo "true" || echo "false")

if [ "$SPECTEST_INSTALLED" = "true" ]; then
  ENABLE_HOOKS=true
  ENABLE_METRICS=true
fi
```

---
## Execution Steps

### 1. Discover Deprecation Context

```bash
REPO_ROOT=$(git rev-parse --show-toplevel 2>/dev/null || pwd)
CURRENT_BRANCH=$(git rev-parse --abbrev-ref HEAD 2>/dev/null)
DEPRECATE_NUM=$(echo "$CURRENT_BRANCH" | grep -oE 'deprecate/([0-9]{3})' | grep -oE '[0-9]{3}')

if [ -z "$DEPRECATE_NUM" ]; then
  echo "📉 Deprecate Workflow"
  echo "Provide deprecation number:"
  read -p "Deprecation number: " DEPRECATE_NUM
  DEPRECATE_NUM=$(printf "%03d" "$DEPRECATE_NUM")
fi

# Source features location helper
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PLUGIN_DIR="$(dirname "$SCRIPT_DIR")"
source "$PLUGIN_DIR/lib/features-location.sh"

# Initialize features directory
ensure_features_dir "$REPO_ROOT"

FEATURE_DIR="${FEATURES_DIR}/${DEPRECATE_NUM}-deprecate"
mkdir -p "$FEATURE_DIR"

DEPRECATE_SPEC="${FEATURE_DIR}/deprecate.md"
MIGRATION_GUIDE="${FEATURE_DIR}/migration-guide.md"
TASKS_FILE="${FEATURE_DIR}/tasks.md"
```

---
### 2. Identify Feature to Deprecate

```
📉 Feature Deprecation

Which feature are you deprecating?
[User input or scan codebase]

Feature to deprecate: [name]
Reason for deprecation: [reason]
Alternative/replacement: [what users should use instead]
```

---
### 3. Create Deprecation Specification

```markdown
# Deprecation ${DEPRECATE_NUM}: [Feature Name]

**Status**: Active
**Created**: YYYY-MM-DD
**Feature**: [Feature to deprecate]
**Replacement**: [Alternative feature/approach]

---

## Deprecation Rationale

**Why Deprecating**:
- Reason 1
- Reason 2
- Reason 3

**Timeline Decision**: [Why now?]

---

## Current Usage Analysis

**Users Affected**: [Estimated N users/systems]
**Usage Patterns**: [How is it currently used?]

**Dependencies**:
| Dependent System | Usage Level | Migration Complexity |
|------------------|-------------|----------------------|
| [System 1] | [High/Med/Low] | [High/Med/Low] |

---

## Replacement Strategy

**Recommended Alternative**: [Feature/approach]

**Why Better**:
- Benefit 1
- Benefit 2

**Migration Effort**: [Low/Medium/High]
**Migration Time Estimate**: [timeframe per user]

---

## Deprecation Timeline

### Phase 1: Announcement (Month 1)
**Duration**: [timeframe]
**Activities**:
- [ ] Announce deprecation publicly
- [ ] Add deprecation warnings to feature
- [ ] Publish migration guide
- [ ] Email affected users
- [ ] Update documentation

**Deliverables**:
- Deprecation announcement
- Migration guide
- Updated docs

---

### Phase 2: Migration Support (Months 2-3)
**Duration**: [timeframe]
**Activities**:
- [ ] Monitor adoption of alternative
- [ ] Provide migration support to users
- [ ] Track usage of deprecated feature
- [ ] Address migration blockers
- [ ] Send migration reminders

**Success Criteria**:
- ≥80% of users migrated to alternative
- No critical dependencies remaining

---

### Phase 3: Removal (Month 4+)
**Duration**: [timeframe]
**Activities**:
- [ ] Final migration reminder (2 weeks notice)
- [ ] Disable feature in production
- [ ] Remove code
- [ ] Clean up tests and documentation
- [ ] Archive feature artifacts

**Prerequisites**:
- ≥90% user migration
- All critical dependencies resolved
- Stakeholder approval

---

## Migration Blockers

**Known Blockers**:
| Blocker | Impact | Resolution Plan |
|---------|--------|-----------------|
| [Blocker 1] | [High/Med/Low] | [How to resolve] |

---

## Communication Plan

**Channels**:
- [ ] Blog post / changelog
- [ ] Email to affected users
- [ ] In-app deprecation warning
- [ ] Documentation update
- [ ] Support ticket template

**Messaging**:
- Clear deprecation timeline
- Migration guide link
- Support resources
- Benefits of alternative

---

## Rollback Plan

**If Migration Fails**:
- Extend timeline
- Provide additional support resources
- Reconsider deprecation if alternative insufficient

---

## Success Metrics

| Metric | Target | Current |
|--------|--------|---------|
| User Migration Rate | ≥90% | [%] |
| Alternative Adoption | ≥80% | [%] |
| Support Tickets | <10 | [N] |
| Migration Blockers | 0 | [N] |

---

## Metadata

**Workflow**: Deprecate (Phased Sunset)
**Created By**: SpecLab Plugin v1.0.0
```

Write to `$DEPRECATE_SPEC`.
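To populate the Current Usage Analysis section, a quick repository scan can help estimate how widely the feature is used (a sketch; `FEATURE_SYMBOL` here is a hypothetical placeholder for the deprecated API's name):

```shell
# Count call sites of the deprecated feature across the repo (excluding .git).
# FEATURE_SYMBOL is a hypothetical placeholder; substitute the real symbol.
FEATURE_SYMBOL="oldFeatureApi"
REPO_ROOT="${REPO_ROOT:-.}"
grep -rn --exclude-dir=.git "$FEATURE_SYMBOL" "$REPO_ROOT" | wc -l
```

The raw count is only a proxy; re-running it during Phase 2 gives a cheap migration-progress signal.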

---

### 4. Create Migration Guide

````markdown
# Migration Guide: Deprecation ${DEPRECATE_NUM}

**Feature Deprecated**: [Feature name]
**Recommended Alternative**: [Alternative]
**Migration Deadline**: [Date]

---

## Why This Change?

[Brief explanation of deprecation rationale]

**Benefits of Migration**:
- Benefit 1
- Benefit 2

---

## Migration Steps

### Step 1: [Action]
**What**: [Description]

**Before** (deprecated):
```
[Code example using old feature]
```

**After** (recommended):
```
[Code example using alternative]
```

### Step 2: [Action]
[Repeat pattern for each migration step]

---

## Migration Checklist

- [ ] Update code to use alternative
- [ ] Update tests
- [ ] Update documentation
- [ ] Test in development
- [ ] Deploy to production
- [ ] Verify functionality
- [ ] Remove deprecated feature usage

---

## Common Migration Scenarios

### Scenario 1: [Use Case]
**Old Approach**:
```
[Code]
```

**New Approach**:
```
[Code]
```

[Repeat for common scenarios]

---

## Troubleshooting

### Issue 1: [Common Problem]
**Symptom**: [What users see]
**Solution**: [How to fix]

[Repeat for common issues]

---

## Support Resources

- Migration guide: [link]
- API documentation: [link]
- Support channel: [link]
- Example migration: [link to example code]

**Need Help?**
- Email: [support email]
- Slack: [support channel]
- Office hours: [time]

---

## Timeline

- **Now**: Feature deprecated, warnings active
- **[Date]**: Migration support ends
- **[Date]**: Feature removed from production

**Don't wait! Migrate today.**
````

Write to `$MIGRATION_GUIDE`.

---
### 5. Generate Tasks

```markdown
# Tasks: Deprecation ${DEPRECATE_NUM}

**Workflow**: Deprecate (Phased Sunset)

---

## Phase 1: Announcement Tasks

### T001: Add Deprecation Warnings
**Description**: Add warnings to deprecated feature code
**Files**: [list]
**Warning Message**: "This feature is deprecated. Migrate to [alternative]. See [migration guide link]."

### T002: [P] Publish Migration Guide
**Description**: Publish migration guide
**Output**: ${MIGRATION_GUIDE}
**Parallel**: [P]

### T003: [P] Update Documentation
**Description**: Mark feature as deprecated in docs
**Files**: [doc files]
**Parallel**: [P]

### T004: [P] Announce Deprecation
**Description**: Communicate deprecation via all channels
**Channels**: Blog, email, in-app
**Parallel**: [P]

---

## Phase 2: Migration Support Tasks

### T005: Monitor Adoption
**Description**: Track usage of deprecated vs alternative
**Metrics**: [usage metrics]
**Frequency**: Weekly
**Duration**: [Phase 2 duration]

### T006: Provide User Support
**Description**: Help users migrate
**Activities**: Answer questions, resolve blockers
**Duration**: [Phase 2 duration]

### T007: Send Migration Reminders
**Description**: Remind users to migrate
**Schedule**: [reminder schedule]
**Content**: Progress update, deadline reminder

---

## Phase 3: Removal Tasks

### T008: Final Migration Check
**Description**: Verify ≥90% user migration
**Validation**: Usage metrics, stakeholder approval

### T009: Disable Feature
**Description**: Turn off deprecated feature in production
**Rollback Plan**: [how to re-enable if needed]

### T010: Remove Code
**Description**: Delete deprecated feature code
**Files**: [list all files to remove]
**Validation**: Tests still pass

### T011: [P] Cleanup Tests
**Description**: Remove tests for deprecated feature
**Files**: [test files]
**Parallel**: [P]

### T012: [P] Cleanup Documentation
**Description**: Remove deprecated feature from docs
**Files**: [doc files]
**Parallel**: [P]

### T013: Archive Artifacts
**Description**: Archive feature artifacts for historical reference
**Location**: [archive location]

---

## Summary

**Total Tasks**: 13
**Phase 1 (Announce)**: T001-T004 (4 tasks, 3 parallel)
**Phase 2 (Migrate)**: T005-T007 (3 tasks, ongoing)
**Phase 3 (Remove)**: T008-T013 (6 tasks, 2 parallel)

**Total Timeline**: [estimated timeline]

**Success Criteria**:
- ✅ Deprecation announced to all users
- ✅ Migration guide published
- ✅ ≥90% users migrated
- ✅ Feature removed safely
- ✅ Documentation updated
```

Write to `$TASKS_FILE`.

---

### 6. Execute Phased Deprecation

Execute across three phases:

```
📉 Executing Deprecation Workflow

Phase 1: Announcement (Month 1)
  T001: Add Deprecation Warnings
  ✓ Warnings added to code

  T002-T004: [Parallel] Publish Migration Guide, Update Docs, Announce
  ⚡ Executing 3 tasks in parallel...
  ✓ Migration guide published
  ✓ Documentation updated
  ✓ Deprecation announced

Phase 2: Migration Support (Months 2-3)
  T005: Monitor Adoption
  Week 1: 15% migrated
  Week 2: 32% migrated
  Week 4: 58% migrated
  Week 6: 78% migrated
  Week 8: 91% migrated ✅

  T006: Provide User Support
  ✓ 12 support tickets resolved
  ✓ 3 migration blockers fixed

  T007: Send Migration Reminders
  ✓ Weekly reminders sent

Phase 3: Removal (Month 4)
  T008: Final Migration Check
  ✓ 91% users migrated
  ✓ Stakeholder approval received

  T009: Disable Feature
  ✓ Feature disabled in production
  ✓ No issues reported

  T010-T012: [Parallel] Remove Code, Cleanup Tests, Cleanup Docs
  ⚡ Executing 3 tasks in parallel...
  ✓ All cleanup tasks complete

  T013: Archive Artifacts
  ✓ Artifacts archived
```

---

## Final Output

```
✅ Deprecation Workflow Complete - Deprecation ${DEPRECATE_NUM}

📊 Deprecation Results:
- Users migrated: 91% ✅
- Alternative adopted: 87% ✅
- Support tickets: 12 (all resolved) ✅
- Timeline: Completed on schedule ✅

📋 Artifacts:
- ${DEPRECATE_SPEC}
- ${MIGRATION_GUIDE}
- ${TASKS_FILE}

⏱️ Total Timeline: [duration]

✅ Feature Safely Removed:
- Code deleted and archived
- Documentation updated
- Users successfully migrated

📈 Next Steps:
- Monitor alternative feature adoption
- Archive learnings for future deprecations
```

---

## Operating Principles

1. **Phased Approach**: Announce → Migrate → Remove
2. **User-Centric**: Help users migrate successfully
3. **Communication**: Over-communicate timeline and alternatives
4. **Track Progress**: Monitor migration adoption
5. **Safe Removal**: Only remove when migration complete

---

## Success Criteria

✅ Deprecation announced to all affected users
✅ Migration guide published and accessible
✅ ≥90% user migration achieved
✅ Migration blockers resolved
✅ Feature removed safely
✅ Documentation and code cleanup complete
✅ Artifacts archived

---

**Workflow Coverage**: Addresses ~5% of development work (feature evolution)
495
commands/fix.md
Normal file
@@ -0,0 +1,495 @@
---
description: Fix bugs with test-driven approach and automatic retry - simplified bugfix workflow
args:
  - name: bug_description
    description: Natural language description of the bug to fix
    required: true
  - name: --regression-test
    description: Create failing test first (TDD approach - recommended)
    required: false
  - name: --hotfix
    description: Use expedited hotfix workflow for production issues
    required: false
  - name: --max-retries
    description: Maximum fix retry attempts (default 2)
    required: false
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Goal

Fix bugs using a test-driven approach with automatic retry logic for failed fixes.

**Purpose**: Streamline bug fixing by combining bugfix workflow with retry logic and optional regression testing.

**Workflow**:
- **Standard**: Bugfix → Verify → (Retry if needed)
- **With --regression-test**: Create Test → Verify Fails → Bugfix → Verify Passes
- **With --hotfix**: Expedited workflow for production issues

**User Experience**:
- Single command instead of manual bugfix + validation
- Automatic retry if fix doesn't work
- Test-first approach ensures regression prevention
- Ready for final merge with `/specswarm:ship`

---

## Pre-Flight Checks

```bash
# Parse arguments
BUG_DESC=""
REGRESSION_TEST=false
HOTFIX=false
MAX_RETRIES=2
EXPECT_RETRIES=false

# Extract bug description (first non-flag argument) and flags.
# Note: `shift` does not work inside a `for arg in ...` loop, so the
# value for --max-retries is captured on the following iteration.
for arg in $ARGUMENTS; do
    if [ "$EXPECT_RETRIES" = true ]; then
        MAX_RETRIES="$arg"
        EXPECT_RETRIES=false
    elif [ "${arg:0:2}" != "--" ] && [ -z "$BUG_DESC" ]; then
        BUG_DESC="$arg"
    elif [ "$arg" = "--regression-test" ]; then
        REGRESSION_TEST=true
    elif [ "$arg" = "--hotfix" ]; then
        HOTFIX=true
    elif [ "$arg" = "--max-retries" ]; then
        EXPECT_RETRIES=true
    fi
done

# Validate bug description
if [ -z "$BUG_DESC" ]; then
    echo "❌ Error: Bug description required"
    echo ""
    echo "Usage: /specswarm:fix \"bug description\" [--regression-test] [--hotfix] [--max-retries N]"
    echo ""
    echo "Examples:"
    echo "  /specswarm:fix \"Login fails with special characters in password\""
    echo "  /specswarm:fix \"Cart total incorrect with discounts\" --regression-test"
    echo "  /specswarm:fix \"Production API timeout\" --hotfix"
    echo "  /specswarm:fix \"Memory leak in dashboard\" --regression-test --max-retries 3"
    exit 1
fi

# Get project root
if ! git rev-parse --git-dir > /dev/null 2>&1; then
    echo "❌ Error: Not in a git repository"
    exit 1
fi

REPO_ROOT=$(git rev-parse --show-toplevel)
cd "$REPO_ROOT"
```

---

## Environment Detection

Detect available capabilities before starting workflow:

```bash
# Get plugin directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PLUGIN_DIR="$(dirname "$SCRIPT_DIR")"

# Detect web project and Chrome DevTools MCP availability
CHROME_DEVTOOLS_MODE="disabled"
WEB_FRAMEWORK=""

if [ -f "$PLUGIN_DIR/lib/web-project-detector.sh" ]; then
    source "$PLUGIN_DIR/lib/web-project-detector.sh"

    # Check if Chrome DevTools MCP should be used
    if should_use_chrome_devtools "$REPO_ROOT"; then
        CHROME_DEVTOOLS_MODE="enabled"
    elif is_web_project "$REPO_ROOT"; then
        CHROME_DEVTOOLS_MODE="fallback"
    fi
fi
```
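For orientation, the sourced helper functions could be sketched roughly as follows. This is an illustrative guess, not the actual contents of `lib/web-project-detector.sh` — the real detection logic, framework list, and MCP lookup may differ.

```bash
# HYPOTHETICAL sketch of lib/web-project-detector.sh - illustrative only.
# Assumes detection keys off package.json dependencies.
is_web_project() {
    local root="$1"
    [ -f "$root/package.json" ] && \
        grep -qE '"(react|vue|svelte|next|vite|remix)"' "$root/package.json"
}

should_use_chrome_devtools() {
    local root="$1"
    # Only worth enabling when this is a web project AND the Chrome
    # DevTools MCP server is actually configured for this session.
    is_web_project "$root" && claude mcp list 2>/dev/null | grep -q "chrome-devtools"
}
```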

---

## Execution Steps

### Step 1: Display Welcome Banner

```bash
if [ "$HOTFIX" = true ]; then
    echo "🚨 SpecSwarm Fix - HOTFIX Mode (Expedited)"
else
    echo "🔧 SpecSwarm Fix - Test-Driven Bug Resolution"
fi
echo "══════════════════════════════════════════"
echo ""
echo "Bug: $BUG_DESC"
echo ""

if [ "$HOTFIX" = true ]; then
    echo "⚡ HOTFIX MODE: Expedited workflow for production issues"
    echo ""
    echo "This workflow will:"
    echo "  1. Analyze bug and identify root cause"
    echo "  2. Implement fix immediately"
    echo "  3. Verify fix works"
    echo "  4. Skip comprehensive testing (fast path)"
    echo ""
elif [ "$REGRESSION_TEST" = true ]; then
    echo "✅ Test-Driven Mode: Creating regression test first"
    echo ""
    echo "This workflow will:"
    echo "  1. Create failing test that reproduces bug"
    echo "  2. Verify test fails (confirms bug exists)"
    echo "  3. Implement fix"
    echo "  4. Verify test passes (confirms fix works)"
    echo "  5. Run full test suite"
    echo "  6. Retry up to $MAX_RETRIES times if fix fails"
    echo ""
else
    echo "This workflow will:"
    echo "  1. Analyze bug and identify root cause"
    echo "  2. Implement fix"
    echo "  3. Verify fix works"
    echo "  4. Run test suite to catch regressions"
    echo "  5. Retry up to $MAX_RETRIES times if fix fails"
    echo ""
fi

# Show Chrome DevTools MCP status for web projects
if [ "$CHROME_DEVTOOLS_MODE" = "enabled" ]; then
    echo "🌐 Web project detected ($WEB_FRAMEWORK)"
    echo "🎯 Chrome DevTools MCP: Enhanced browser debugging available"
    echo ""
elif [ "$CHROME_DEVTOOLS_MODE" = "fallback" ]; then
    echo "🌐 Web project detected ($WEB_FRAMEWORK)"
    echo "📦 Using Playwright for browser automation"
    echo ""
fi

read -p "Press Enter to start, or Ctrl+C to cancel..."
echo ""
```

---

### Step 2: Phase 1 - Regression Test (Optional)

**IF --regression-test flag was provided:**

```bash
if [ "$REGRESSION_TEST" = true ]; then
    echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
    echo "🧪 Phase 1: Creating Regression Test"
    echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
    echo ""
    echo "Creating a test that reproduces the bug..."
    echo ""
fi
```

**YOU MUST create a failing test that reproduces the bug:**

If REGRESSION_TEST = true:
1. Analyze the bug description
2. Identify the component/module affected
3. Create a test file (e.g., `bug-NNN.test.ts`)
4. Write a test that reproduces the bug behavior
5. The test should FAIL before the fix

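A shell sketch of scaffolding such a test file is shown below — the vitest-style skeleton, bug number, and path are placeholders for illustration, not the plugin's actual conventions:

```bash
# Hypothetical scaffold for a regression test file.
BUG_NUM="042"                            # placeholder bug number
TEST_FILE="tests/bug-${BUG_NUM}.test.ts"
mkdir -p "$(dirname "$TEST_FILE")"

cat > "$TEST_FILE" <<'EOF'
// Regression test skeleton - fill in the actual reproduction steps.
import { describe, it, expect } from "vitest";

describe("bug-042 regression", () => {
    it("reproduces the reported behavior", () => {
        // Arrange/act on the affected module, then assert the CORRECT
        // behavior. This test should FAIL until the bug is fixed.
        expect(true).toBe(false); // placeholder failing assertion
    });
});
EOF

echo "Created $TEST_FILE"
```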
```bash
if [ "$REGRESSION_TEST" = true ]; then
    # Run the new test to verify it fails
    # (This confirms the bug actually exists)
    echo "Running test to verify it fails..."
    # Detect test runner and run test

    echo ""
    echo "✅ Test created and verified (currently failing as expected)"
    echo ""
fi
```
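The `# Detect test runner` comment above is left to the executor; one hedged sketch of what that detection might look like — the marker files and runners checked are assumptions, so extend this for the project's actual stack:

```bash
# Hypothetical test-runner detection - adjust to the project's stack.
detect_test_command() {
    if [ -f "package.json" ] && grep -q '"test":' package.json; then
        echo "npm test"
    elif [ -f "pytest.ini" ] || [ -f "pyproject.toml" ]; then
        echo "pytest"
    elif [ -f "go.mod" ]; then
        echo "go test ./..."
    else
        echo ""   # unknown runner - fall back to asking the user
    fi
}
```

The returned command (if any) can then be used to run the new regression test and confirm it fails before the fix.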

---

### Step 3: Phase 2 - Implement Fix

**YOU MUST NOW run the bugfix command using the SlashCommand tool:**

```bash
if [ "$HOTFIX" = true ]; then
    echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
    echo "⚡ Phase 2: Implementing Hotfix"
    echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
else
    echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
    echo "🔧 Phase 2: Implementing Fix"
    echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
fi
echo ""
```

**Use the appropriate command:**

```
IF HOTFIX = true:
    Use the SlashCommand tool to execute: /specswarm:hotfix "$BUG_DESC"
ELSE:
    Use the SlashCommand tool to execute: /specswarm:bugfix "$BUG_DESC"
```

Wait for fix to be implemented.

```bash
echo ""
echo "✅ Fix implemented"
echo ""
```

---

### Step 4: Phase 3 - Verify Fix Works

**YOU MUST NOW verify the fix works:**

```bash
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "✓ Phase 3: Verifying Fix"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
```

**Verification steps:**

1. If REGRESSION_TEST = true:
   - Run the regression test again
   - It should now PASS
   - If it still FAILS, fix didn't work

2. Run full test suite:
   - Detect test runner (npm test, pytest, etc.)
   - Run all tests
   - Check for any new failures

3. Store result as FIX_SUCCESSFUL (true/false)

```bash
# Detect and run test suite
# (default to failure so an undetected test runner is not reported as success)
TEST_RESULT=1
if [ -f "package.json" ]; then
    if grep -q "\"test\":" package.json; then
        echo "Running test suite..."
        npm test
        TEST_RESULT=$?
    fi
fi

if [ "$TEST_RESULT" -eq 0 ]; then
    FIX_SUCCESSFUL=true
    echo ""
    echo "✅ All tests passing - fix verified!"
    echo ""
else
    FIX_SUCCESSFUL=false
    echo ""
    echo "❌ Tests failing - fix may not be complete"
    echo ""
fi
```

---

### Step 5: Phase 4 - Retry Logic (If Needed)

**IF fix failed and retries remaining:**

```bash
RETRY_COUNT=0

while [ "$FIX_SUCCESSFUL" = false ] && [ "$RETRY_COUNT" -lt "$MAX_RETRIES" ]; do
    RETRY_COUNT=$((RETRY_COUNT + 1))

    echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
    echo "🔄 Retry $RETRY_COUNT/$MAX_RETRIES: Attempting Another Fix"
    echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
    echo ""
    echo "Previous fix didn't resolve all test failures."
    echo "Analyzing test failures and implementing improved fix..."
    echo ""

    # Show Chrome DevTools diagnostics availability for web projects
    if [ "$CHROME_DEVTOOLS_MODE" = "enabled" ]; then
        echo "🌐 Chrome DevTools MCP available for enhanced failure diagnostics"
        echo "   (console errors, network failures, runtime state inspection)"
        echo ""
    fi
    # (The loop body continues below; its closing `done` follows the re-verify step.)
```

**YOU MUST re-run bugfix with additional context:**

```
Use the SlashCommand tool to execute: /specswarm:bugfix "Fix failed tests from previous attempt: $BUG_DESC. Test failures: [extract failure details from test output]"
```
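The bracketed `[extract failure details from test output]` is intentionally left open; a minimal sketch of one way to pull failure lines out of a captured test log — the grep patterns are heuristics for jest/vitest-style output, not a fixed format:

```bash
# Hypothetical helper: pull likely failure lines from a captured test log.
# The patterns below are heuristics, not a guaranteed output format.
extract_failure_details() {
    grep -E '(FAIL|✕|Error:|AssertionError)' "$1" | head -20
}

# Example usage during a retry (assumes npm; swap in the project's runner):
# npm test > /tmp/fix-test-output.log 2>&1 || true
# FAILURE_DETAILS=$(extract_failure_details /tmp/fix-test-output.log)
```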

**Re-verify:**
- Run tests again
- Update FIX_SUCCESSFUL based on results

```bash
    # Re-run tests
    npm test
    TEST_RESULT=$?

    if [ "$TEST_RESULT" -eq 0 ]; then
        FIX_SUCCESSFUL=true
        echo ""
        echo "✅ Fix successful on retry $RETRY_COUNT!"
        echo ""
        break
    else
        echo ""
        echo "❌ Still failing after retry $RETRY_COUNT"
        echo ""
    fi
done
```

---

### Step 6: Final Report

**Display completion summary:**

```bash
echo ""
echo "══════════════════════════════════════════"

if [ "$FIX_SUCCESSFUL" = true ]; then
    echo "🎉 BUG FIX COMPLETE"
    echo "══════════════════════════════════════════"
    echo ""
    echo "Bug: $BUG_DESC"
    echo ""
    if [ "$RETRY_COUNT" -gt 0 ]; then
        echo "✅ Fix implemented (succeeded on retry $RETRY_COUNT)"
    else
        echo "✅ Fix implemented"
    fi
    if [ "$REGRESSION_TEST" = true ]; then
        echo "✅ Regression test created and passing"
    fi
    echo "✅ All tests passing"
    echo ""
    echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
    echo "📝 NEXT STEPS"
    echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
    echo ""
    echo "1. 🧪 Manual Testing"
    echo "   - Test the bug fix in your app"
    echo "   - Verify the original issue is resolved"
    echo "   - Check for any side effects"
    echo ""
    echo "2. 🚢 Ship When Ready"
    echo "   Run: /specswarm:ship"
    echo ""
    echo "   This will:"
    echo "   - Validate code quality"
    echo "   - Merge to parent branch if passing"
    echo "   - Complete the bugfix workflow"
    echo ""
else
    echo "⚠️ BUG FIX INCOMPLETE"
    echo "══════════════════════════════════════════"
    echo ""
    echo "Bug: $BUG_DESC"
    echo ""
    echo "❌ Fix attempted $((RETRY_COUNT + 1)) time(s) but tests still failing"
    echo ""
    echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
    echo "🔧 RECOMMENDED ACTIONS"
    echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
    echo ""
    echo "1. Review test failure output above"
    echo "2. Bug may be more complex than initially analyzed"
    echo "3. Consider:"
    echo "   - Manual investigation of root cause"
    echo "   - Breaking into smaller sub-bugs"
    echo "   - Requesting code review for insights"
    echo ""
    echo "4. Re-run with more retries:"
    echo "   /specswarm:fix \"$BUG_DESC\" --max-retries 5"
    echo ""
    echo "5. Or fix manually and run tests:"
    echo "   npm test"
    echo ""
fi

echo "══════════════════════════════════════════"
```

---

## Error Handling

If any step fails:

1. **Bugfix/hotfix command fails**: Display error, suggest reviewing bug description
2. **Test creation fails**: Display error, suggest creating test manually
3. **All retries exhausted**: Display final report with recommended actions (see Step 6)

**All errors should report clearly and suggest remediation.**

---

## Design Philosophy

**Test-Driven**: Optional --regression-test ensures bug won't resurface

**Resilient**: Automatic retry logic handles incomplete fixes

**Fast Path**: --hotfix for production emergencies

**User Experience**: Clear progress indicators, retry feedback, actionable next steps

---

## Comparison to Manual Workflow

**Before** (Manual):
```bash
/specswarm:bugfix "bug description"
# [Manually check if fix worked]
# [If failed, manually re-run bugfix]
# [Manually run tests]
/specswarm:complete
```
**3-5+ commands**, manual verification and retry logic

**After** (Fix):
```bash
/specswarm:fix "bug description" --regression-test
# [Automatic verification and retry]
/specswarm:ship
```
**2 commands**, automatic retry, regression test included

**Benefits**:
- Automatic retry eliminates manual orchestration
- Regression test prevents future regressions
- Clear success/failure reporting with next steps
477
commands/hotfix.md
Normal file
@@ -0,0 +1,477 @@
---
description: Expedited emergency response workflow for critical production issues
---

<!--
ATTRIBUTION CHAIN:
1. Original methodology: spec-kit-extensions (https://github.com/MartyBonacci/spec-kit-extensions)
   by Marty Bonacci (2025)
2. Adapted: SpecLab plugin by Marty Bonacci & Claude Code (2025)
3. Based on: GitHub spec-kit (https://github.com/github/spec-kit)
   Copyright (c) GitHub, Inc. | MIT License
-->

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Goal

Execute expedited emergency response workflow for critical production issues requiring immediate resolution.

**Key Principles**:
1. **Speed First**: Minimal process overhead, maximum velocity
2. **Safety**: Despite speed, maintain essential safeguards
3. **Rollback Ready**: Always have rollback plan
4. **Post-Mortem**: After fire is out, analyze and learn
5. **Communication**: Keep stakeholders informed

**Use Cases**: Critical bugs, security vulnerabilities, data corruption, service outages

**Coverage**: Addresses ~10-15% of development work (emergencies)

---

## Smart Integration Detection

```bash
# Check for SpecSwarm (tech stack enforcement) - OPTIONAL in hotfix
SPECSWARM_INSTALLED=$(claude plugin list | grep -q "specswarm" && echo "true" || echo "false")

# Check for SpecTest (parallel execution, hooks, metrics)
SPECTEST_INSTALLED=$(claude plugin list | grep -q "spectest" && echo "true" || echo "false")

# In hotfix mode, integration is OPTIONAL - speed is priority
if [ "$SPECTEST_INSTALLED" = "true" ]; then
    ENABLE_METRICS=true
    echo "🎯 SpecTest detected (metrics enabled, but minimal overhead)"
fi

if [ "$SPECSWARM_INSTALLED" = "true" ]; then
    TECH_VALIDATION_AVAILABLE=true
    echo "🎯 SpecSwarm detected (tech validation available if time permits)"
fi
```

**Note**: In hotfix mode, speed takes precedence. Tech validation and hooks are OPTIONAL.

---

## Pre-Workflow Hook (if SpecTest installed)

```bash
if [ "$ENABLE_METRICS" = "true" ]; then
    echo "🎣 Pre-Hotfix Hook (minimal overhead mode)"
    WORKFLOW_START_TIME=$(date +%s)
    echo "✓ Emergency metrics tracking initialized"
    echo ""
fi
```

---

## Execution Steps

### 1. Discover Hotfix Context

```bash
REPO_ROOT=$(git rev-parse --show-toplevel 2>/dev/null || pwd)
CURRENT_BRANCH=$(git rev-parse --abbrev-ref HEAD 2>/dev/null)

# Try to extract hotfix number from branch (hotfix/NNN-*)
HOTFIX_NUM=$(echo "$CURRENT_BRANCH" | grep -oE 'hotfix/([0-9]{3})' | grep -oE '[0-9]{3}')

if [ -z "$HOTFIX_NUM" ]; then
    echo "🚨 HOTFIX WORKFLOW - EMERGENCY MODE"
    echo ""
    echo "No hotfix branch detected. Provide hotfix number:"
    read -p "Hotfix number: " HOTFIX_NUM
    # Force base-10 so zero-padded input (e.g. "008") isn't parsed as octal
    HOTFIX_NUM=$(printf "%03d" "$((10#$HOTFIX_NUM))")
fi

# Source features location helper
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PLUGIN_DIR="$(dirname "$SCRIPT_DIR")"
source "$PLUGIN_DIR/lib/features-location.sh"

# Initialize features directory
ensure_features_dir "$REPO_ROOT"

FEATURE_DIR="${FEATURES_DIR}/${HOTFIX_NUM}-hotfix"
mkdir -p "$FEATURE_DIR"

HOTFIX_SPEC="${FEATURE_DIR}/hotfix.md"
TASKS_FILE="${FEATURE_DIR}/tasks.md"
```

Output:
```
🚨 Hotfix Workflow - EMERGENCY MODE - Hotfix ${HOTFIX_NUM}
✓ Branch: ${CURRENT_BRANCH}
✓ Feature directory: ${FEATURE_DIR}
⚡ Expedited process active (minimal overhead)
```

---

### 2. Create Minimal Hotfix Specification

Create `$HOTFIX_SPEC` with ESSENTIAL details only:

```markdown
# Hotfix ${HOTFIX_NUM}: [Title]

**🚨 EMERGENCY**: [Critical/High Priority]
**Created**: YYYY-MM-DD HH:MM
**Status**: Active

---

## Emergency Summary

**What's Broken**: [Brief description]

**Impact**:
- Users affected: [scope]
- Service status: [degraded/down]
- Data at risk: [Yes/No]

**Urgency**: [Immediate/Within hours/Within day]

---

## Immediate Actions

**1. Mitigation** (if applicable):
- [Temporary mitigation in place? Describe]

**2. Hotfix Scope**:
- [What needs to be fixed immediately]

**3. Rollback Plan**:
- [How to rollback if hotfix fails]

---

## Technical Details

**Root Cause** (if known):
- [Quick analysis]

**Fix Approach**:
- [Minimal changes to resolve emergency]

**Files Affected**:
- [List critical files]

**Testing Strategy**:
- [Minimal essential tests - what MUST pass?]

---

## Deployment Plan

**Target**: Production
**Rollout**: [Immediate/Phased]
**Rollback Trigger**: [When to rollback]

---

## Post-Mortem Required

**After Emergency Resolved**:
- [ ] Root cause analysis
- [ ] Permanent fix (if hotfix is temporary)
- [ ] Process improvements
- [ ] Documentation updates
- [ ] Team retrospective

---

## Metadata

**Workflow**: Hotfix (Emergency Response)
**Created By**: SpecLab Plugin v1.0.0
```

**Prompt user for:**
- Emergency summary
- Impact assessment
- Rollback plan

Write to `$HOTFIX_SPEC`.

Output:
```
📋 Hotfix Specification (Minimal)
✓ Created: ${HOTFIX_SPEC}
✓ Emergency documented
✓ Rollback plan defined
⚡ Ready for immediate action
```

---

### 3. Generate Minimal Tasks

Create `$TASKS_FILE` with ESSENTIAL tasks only:

```markdown
# Tasks: Hotfix ${HOTFIX_NUM} - EMERGENCY

**Workflow**: Hotfix (Expedited)
**Status**: Active
**Created**: YYYY-MM-DD HH:MM

---

## ⚡ EMERGENCY MODE - MINIMAL PROCESS

**Speed Priority**: Essential tasks only
**Tech Validation**: ${TECH_VALIDATION_AVAILABLE} (OPTIONAL - use if time permits)
**Metrics**: ${ENABLE_METRICS} (lightweight tracking)

---

## Emergency Tasks

### T001: Implement Hotfix
**Description**: [Minimal fix to resolve emergency]
**Files**: [list]
**Validation**: [Essential test only]
**Parallel**: No (focused fix)

### T002: Essential Testing
**Description**: Verify hotfix resolves emergency
**Test Scope**: MINIMAL (critical path only)
**Expected**: Emergency resolved
**Parallel**: No

### T003: Deploy to Production
**Description**: Emergency deployment
**Rollback Plan**: ${ROLLBACK_PLAN}
**Validation**: Service restored
**Parallel**: No

### T004: Monitor Post-Deployment
**Description**: Watch metrics, error rates, user reports
**Duration**: [monitoring period]
**Escalation**: [when to rollback]
**Parallel**: No

---

## Post-Emergency Tasks (After Fire Out)

### T005: Root Cause Analysis
**Description**: Deep dive into why emergency occurred
**Output**: Root cause doc
**Timeline**: Within 24-48 hours

### T006: Permanent Fix (if hotfix is temporary)
**Description**: Replace hotfix with proper solution
**Workflow**: Use /speclab:bugfix for permanent fix
**Timeline**: [timeframe]

### T007: Post-Mortem
**Description**: Team retrospective and process improvements
**Timeline**: Within 1 week

---

## Summary

**Emergency Tasks**: 4 (T001-T004)
**Post-Emergency Tasks**: 3 (T005-T007)
**Estimated Time to Resolution**: <1-2 hours (emergency tasks only)

**Success Criteria**:
- ✅ Emergency resolved
- ✅ Service restored
- ✅ No data loss
- ✅ Rollback plan tested (if triggered)
```

Write to `$TASKS_FILE`.

Output:
```
📊 Emergency Tasks Generated
✓ Created: ${TASKS_FILE}
✓ 4 emergency tasks (immediate resolution)
✓ 3 post-emergency tasks (learning and improvement)
⚡ Estimated resolution time: <1-2 hours
```

---

### 4. Execute Emergency Tasks

**Execute T001-T004 with MAXIMUM SPEED**:

```
🚨 Executing Hotfix Workflow - EMERGENCY MODE

⚡ T001: Implement Hotfix
   [Execute minimal fix]
   ${TECH_VALIDATION_IF_TIME_PERMITS}

⚡ T002: Essential Testing
   [Run critical path tests only]

⚡ T003: Deploy to Production
   [Emergency deployment]
   ✓ Deployed
   ⏱️ Monitoring...

⚡ T004: Monitor Post-Deployment
   [Watch metrics for N minutes]
   ${MONITORING_RESULTS}

${EMERGENCY_RESOLVED_OR_ROLLBACK}
```

**If Emergency Resolved**:
```
✅ EMERGENCY RESOLVED

📊 Resolution:
- Time to fix: [duration]
- Service status: Restored
- Impact: Mitigated

📋 Post-Emergency Actions Required:
- T005: Root cause analysis (within 24-48h)
- T006: Permanent fix (if hotfix is temporary)
- T007: Post-mortem (within 1 week)

Schedule these using normal workflows when appropriate.
```

**If Rollback Triggered**:
```
🔄 ROLLBACK INITIATED

Reason: [rollback trigger hit]
Status: Rolling back hotfix

[Execute rollback plan from hotfix.md]

✅ Rollback Complete
Service status: [current status]

⚠️ Hotfix failed - need alternative approach
Recommend: Escalate to senior engineer / architect
```

---

## Post-Workflow Hook (if SpecTest installed)
|
||||
|
||||
```bash
|
||||
if [ "$ENABLE_METRICS" = "true" ]; then
|
||||
echo ""
|
||||
echo "🎣 Post-Hotfix Hook"
|
||||
|
||||
WORKFLOW_END_TIME=$(date +%s)
|
||||
WORKFLOW_DURATION=$((WORKFLOW_END_TIME - WORKFLOW_START_TIME))
|
||||
WORKFLOW_MINUTES=$(echo "scale=0; $WORKFLOW_DURATION / 60" | bc)
|
||||
|
||||
echo "✓ Emergency resolved"
|
||||
echo "⏱️ Time to Resolution: ${WORKFLOW_MINUTES} minutes"
|
||||
|
||||
# Update metrics
|
||||
METRICS_FILE=".specswarm/workflow-metrics.json"
|
||||
echo "📊 Emergency metrics saved: ${METRICS_FILE}"
|
||||
|
||||
echo ""
|
||||
echo "📋 Post-Emergency Actions:"
|
||||
echo "- Complete T005-T007 in normal hours"
|
||||
echo "- Schedule post-mortem"
|
||||
echo "- Document learnings"
|
||||
fi
|
||||
```
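
The `# Update metrics` comment above leaves the actual write unspecified; a minimal sketch of appending an entry to the metrics file might look like this (the `record_metric` helper and the JSON shape are assumptions, not the plugin's documented schema):

```bash
# Hypothetical helper: append a resolution-time entry to a JSON-array metrics file.
record_metric() {
  local file="$1" minutes="$2" entries
  [ -f "$file" ] || echo '[]' > "$file"
  # Strip the surrounding brackets, then rewrite the array with the new entry appended
  entries=$(sed 's/^\[//; s/\]$//' "$file")
  if [ -n "$entries" ]; then
    printf '[%s,{"workflow":"hotfix","minutes":%s}]\n' "$entries" "$minutes" > "$file"
  else
    printf '[{"workflow":"hotfix","minutes":%s}]\n' "$minutes" > "$file"
  fi
}
```

A real implementation would likely use `jq`; this version only avoids that dependency.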

---

## Final Output

```
✅ Hotfix Workflow Complete - Hotfix ${HOTFIX_NUM}

🚨 Emergency Status: RESOLVED

📋 Artifacts Created:
- ${HOTFIX_SPEC}
- ${TASKS_FILE}

📊 Results:
- Emergency resolved in: ${RESOLUTION_TIME}
- Service status: Restored
- Rollback triggered: [Yes/No]

⏱️ Time to Resolution: ${WORKFLOW_DURATION}

📋 Post-Emergency Actions Required:
1. Root cause analysis (T005) - within 24-48h
2. Permanent fix (T006) - if hotfix is temporary
3. Post-mortem (T007) - within 1 week

📈 Next Steps:
- Monitor production metrics closely
- Schedule post-mortem meeting
- Plan permanent fix: /speclab:bugfix (if hotfix is temporary)
```

---

## Error Handling

**If hotfix fails**:
- Execute rollback plan immediately
- Escalate to senior engineer
- Document failure for post-mortem

**If rollback fails**:
- CRITICAL: Manual intervention required
- Alert on-call engineer
- Document all actions taken

**If emergency worsens**:
- Stop hotfix attempt
- Consider service shutdown / maintenance mode
- Escalate to incident commander

---

## Operating Principles

1. **Speed Over Process**: Minimize overhead, maximize velocity
2. **Essential Only**: Skip non-critical validations
3. **Rollback Ready**: Always have an escape hatch
4. **Monitor Closely**: Watch post-deployment metrics
5. **Learn After**: A post-mortem is mandatory
6. **Communicate**: Keep stakeholders informed
7. **Temporary OK**: A hotfix can be temporary (permanent fix later)

---

## Success Criteria

✅ Emergency resolved quickly (<2 hours)
✅ Service restored to normal operation
✅ No data loss or corruption
✅ Rollback plan tested (if needed)
✅ Post-emergency tasks scheduled
✅ Incident documented for learning

---

**Workflow Coverage**: Addresses ~10-15% of development work (emergencies)
**Speed**: ~45-90 minutes average time to resolution
**Integration**: Optional SpecSwarm/SpecTest (speed takes priority)
**Graduation Path**: Proven workflow will graduate to SpecSwarm stable
196
commands/impact.md
Normal file
@@ -0,0 +1,196 @@
---
description: Standalone impact analysis for any feature or change
---

## User Input

```text
$ARGUMENTS
```

## Goal

Perform standalone impact analysis for any feature or change to identify affected components, dependencies, and risks.

**Use Cases**:
- Pre-planning impact assessment
- Architecture review
- Risk analysis before major changes
- Dependency mapping
- Used by `/speclab:modify` workflow

---

## Execution Steps

### 1. Parse Target

```bash
TARGET="$ARGUMENTS"

if [ -z "$TARGET" ]; then
  echo "🔍 Impact Analysis"
  echo ""
  echo "What feature/component do you want to analyze?"
  echo "Examples:"
  echo " - Feature number: 018"
  echo " - File/module: app/models/user.ts"
  echo " - API endpoint: /api/v1/users"
  read -p "Target: " TARGET
fi

echo "🔍 Analyzing Impact for: ${TARGET}"
```

---

### 2. Analyze Dependencies

```bash
# Search codebase for references
echo "Scanning codebase..."

# Find direct references
# - Import statements
# - Function calls
# - Type usage
# - API endpoint calls

# Find indirect references
# - Components using direct dependencies
# - Services consuming APIs

# Generate dependency graph
echo "Building dependency graph..."
```
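
As a concrete starting point, the direct-reference scan described above could be sketched with `grep` (the `find_direct_refs` helper, file extensions, and patterns are illustrative; real projects may need language-aware tooling):

```bash
# Hypothetical sketch: list files that directly reference a target identifier
find_direct_refs() {
  local target="$1" dir="${2:-.}"
  grep -rlE "import .*${target}|require\(.*${target}" \
    --include='*.ts' --include='*.tsx' --include='*.js' "$dir" 2>/dev/null | sort -u
}
```

Indirect dependencies can then be found by re-running the same scan with each direct dependency as the target.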

---

### 3. Generate Impact Report

```markdown
# Impact Analysis: ${TARGET}

**Analysis Date**: YYYY-MM-DD
**Scope**: [Feature/Module/API/Component]

---

## Target Summary

**Type**: [Feature/API/Module/Component]
**Location**: [path]
**Current Purpose**: [description]

---

## Direct Dependencies

Components that directly depend on this target:

| Component | Type | Usage Pattern | Impact Level |
|-----------|------|---------------|--------------|
| [Component 1] | [Service/UI/API] | [How it's used] | [High/Med/Low] |
| [Component 2] | [Service/UI/API] | [How it's used] | [High/Med/Low] |

**Total Direct Dependencies**: [N]

---

## Indirect Dependencies

Components that depend on direct dependencies:

| Component | Via | Impact Level |
|-----------|-----|--------------|
| [Component 1] | [Direct Dep] | [High/Med/Low] |

**Total Indirect Dependencies**: [N]

---

## Dependency Graph

```
[Target]
├── Direct Dep 1
│   ├── Indirect Dep 1a
│   └── Indirect Dep 1b
├── Direct Dep 2
│   └── Indirect Dep 2a
└── Direct Dep 3
```

---

## Risk Assessment

### Change Impact Level: [Low/Medium/High/Critical]

**Risk Factors**:
| Factor | Level | Rationale |
|--------|-------|-----------|
| Number of Dependencies | [High/Med/Low] | [N direct, M indirect] |
| Criticality | [High/Med/Low] | [Critical systems affected?] |
| Test Coverage | [High/Med/Low] | [Coverage %] |
| Change Complexity | [High/Med/Low] | [Simple/Complex] |

**Overall Risk Score**: [N/10]

---

## Recommendations

### If Modifying This Target:

1. **Review all [N] dependencies** before making changes
2. **Test strategy**: [recommendation]
3. **Communication**: [notify teams owning dependent systems]
4. **Rollout**: [Big bang / Phased / Feature flag]

### Considerations for Breaking Changes:

- **Compatibility layer**: [Yes - recommended / Not needed]
- **Migration plan**: [Required / Not required]
- **Deprecation timeline**: [timeframe]

---

## Next Steps

Based on this analysis:

- **For modifications**: Use `/speclab:modify ${TARGET}`
- **For bugfixes**: Use `/speclab:bugfix` if issues found
- **For refactoring**: Use `/speclab:refactor ${TARGET}` if quality issues
- **For deprecation**: Use `/speclab:deprecate ${TARGET}` if sunset planned
```
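
One illustrative way to turn the four risk-factor levels into the `[N/10]` score from the template (the weights and rescaling are assumptions, not a prescribed formula):

```bash
# Hypothetical scoring: each factor contributes High=3, Med=2, Low=1;
# the sum over four factors (4..12) is rescaled to roughly 0-10.
risk_score() {
  local sum=0 level
  for level in "$@"; do
    case "$level" in
      High) sum=$((sum + 3)) ;;
      Med)  sum=$((sum + 2)) ;;
      Low)  sum=$((sum + 1)) ;;
    esac
  done
  echo $(( (sum - 4) * 10 / 8 ))
}
```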

---

### 4. Output Report

```
🔍 Impact Analysis Complete

📊 Analysis Results:
- Target: ${TARGET}
- Direct dependencies: [N]
- Indirect dependencies: [M]
- Risk level: [Low/Medium/High/Critical]

📋 Full report available (above)

📈 Recommended Next Steps:
- [Recommendation based on findings]
```

---

## Success Criteria

✅ Target identified and analyzed
✅ All direct dependencies mapped
✅ Indirect dependencies identified
✅ Risk level assessed
✅ Recommendations provided
993
commands/implement.md
Normal file
@@ -0,0 +1,993 @@
---
description: Execute the implementation plan by processing and executing all tasks defined in tasks.md
---

<!--
ATTRIBUTION CHAIN:
1. Original: GitHub spec-kit (https://github.com/github/spec-kit)
   Copyright (c) GitHub, Inc. | MIT License
2. Adapted: SpecKit plugin by Marty Bonacci (2025)
3. Forked: SpecSwarm plugin with tech stack management
   by Marty Bonacci & Claude Code (2025)
-->

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

1. **Discover Feature Context**:

   **YOU MUST NOW discover the feature context using the Bash tool:**

   a. **Get repository root** by executing:
      ```bash
      git rev-parse --show-toplevel 2>/dev/null || pwd
      ```
      Store the result as REPO_ROOT.

   b. **Get current branch name** by executing:
      ```bash
      git rev-parse --abbrev-ref HEAD 2>/dev/null
      ```
      Store the result as BRANCH.

   c. **Extract feature number from branch name** by executing:
      ```bash
      echo "$BRANCH" | grep -oE '^[0-9]{3}'
      ```
      Store the result as FEATURE_NUM.

   d. **Initialize features directory and find feature**:
      ```bash
      # Source features location helper
      SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
      PLUGIN_DIR="$(dirname "$SCRIPT_DIR")"
      source "$PLUGIN_DIR/lib/features-location.sh"

      # Initialize features directory
      get_features_dir "$REPO_ROOT"

      # If feature number is empty, find latest feature
      if [ -z "$FEATURE_NUM" ]; then
        FEATURE_NUM=$(list_features "$REPO_ROOT" | grep -oE '^[0-9]{3}' | sort -nr | head -1)
      fi

      # Find feature directory
      find_feature_dir "$FEATURE_NUM" "$REPO_ROOT"
      # FEATURE_DIR is now set by find_feature_dir
      ```

   e. **Display to user:**
      ```
      📁 Feature Context
      ✓ Repository: {REPO_ROOT}
      ✓ Branch: {BRANCH}
      ✓ Feature: {FEATURE_NUM}
      ✓ Directory: {FEATURE_DIR}
      ```

2. **Check checklists status** (if FEATURE_DIR/checklists/ exists):
   - Scan all checklist files in the checklists/ directory
   - For each checklist, count:
     * Total items: All lines matching `- [ ]` or `- [X]` or `- [x]`
     * Completed items: Lines matching `- [X]` or `- [x]`
     * Incomplete items: Lines matching `- [ ]`
   - Create a status table:
     ```
     | Checklist | Total | Completed | Incomplete | Status |
     |-----------|-------|-----------|------------|--------|
     | ux.md | 12 | 12 | 0 | ✓ PASS |
     | test.md | 8 | 5 | 3 | ✗ FAIL |
     | security.md | 6 | 6 | 0 | ✓ PASS |
     ```
   - Calculate overall status:
     * **PASS**: All checklists have 0 incomplete items
     * **FAIL**: One or more checklists have incomplete items

   - **If any checklist is incomplete**:
     * Display the table with incomplete item counts
     * **STOP** and ask: "Some checklists are incomplete. Do you want to proceed with implementation anyway? (yes/no)"
     * Wait for user response before continuing
     * If user says "no" or "wait" or "stop", halt execution
     * If user says "yes" or "proceed" or "continue", proceed to step 3

   - **If all checklists are complete**:
     * Display the table showing all checklists passed
     * Automatically proceed to step 3
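
The counting rules in step 2 can be sketched in shell (assuming checklist items start at the beginning of a line, as in the generated templates):

```bash
# Count total/completed/incomplete checklist items in one file
checklist_counts() {
  local file="$1" total completed
  total=$(grep -cE '^- \[( |x|X)\]' "$file")
  completed=$(grep -cE '^- \[(x|X)\]' "$file")
  echo "$total $completed $((total - completed))"
}
```

In the status table, `✓ PASS` corresponds to an incomplete count of 0.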

3. Load and analyze the implementation context:
   - **REQUIRED**: Read tasks.md for the complete task list and execution plan
   - **REQUIRED**: Read plan.md for tech stack, architecture, and file structure
   - **IF EXISTS**: Read data-model.md for entities and relationships
   - **IF EXISTS**: Read contracts/ for API specifications and test requirements
   - **IF EXISTS**: Read research.md for technical decisions and constraints
   - **IF EXISTS**: Read quickstart.md for integration scenarios
   - **IF EXISTS**: Read `.specswarm/tech-stack.md` for runtime validation (SpecSwarm)

<!-- ========== TECH STACK VALIDATION (SpecSwarm Enhancement) ========== -->
<!-- Added by Marty Bonacci & Claude Code (2025) -->

3b. **Pre-Implementation Tech Stack Validation** (if tech-stack.md exists):

   **Purpose**: Runtime validation before writing any code or imports

   **YOU MUST NOW perform tech stack validation using these steps:**

   1. **Check if tech-stack.md exists** using the Read tool:
      - Try to read `.specswarm/tech-stack.md`
      - If file doesn't exist: Skip this entire section (3b)
      - If file exists: Continue with validation

   2. **Load Tech Stack Compliance Report** from plan.md using the Read tool:
      - Read `${FEATURE_DIR}/plan.md`
      - Search for "Tech Stack Compliance Report" section
      - If section does NOT exist: Skip validation (plan created before SpecSwarm)
      - If section DOES exist: Continue to step 3

   3. **Verify All Conflicts Resolved** using the Grep tool:

      a. Search for conflicts:
      ```bash
      grep -q "⚠️ Conflicting Technologies" "${FEATURE_DIR}/plan.md"
      ```

      b. If conflicts found, check if unresolved:
      ```bash
      grep -qF '**Your choice**: _[' "${FEATURE_DIR}/plan.md"
      ```

      c. If unresolved choices found:
         - **HALT** implementation
         - Display error: "❌ Tech stack conflicts still unresolved"
         - Display message: "Cannot implement until all conflicts in plan.md are resolved"
         - Stop execution

   4. **Verify No Prohibited Technologies in Plan** using the Grep tool:

      a. Search for prohibited techs:
      ```bash
      grep -q "❌ Prohibited Technologies" "${FEATURE_DIR}/plan.md"
      ```

      b. If found, check if blocking:
      ```bash
      grep -qF '**Cannot proceed**' "${FEATURE_DIR}/plan.md"
      ```

      c. If blocking issues found:
         - **HALT** implementation
         - Display error: "❌ Prohibited technologies still present in plan.md"
         - Display message: "Remove or replace prohibited technologies before implementing"
         - Stop execution

   4b. **Load Prohibited Technologies List**:
      ```bash
      # Extract all prohibited technologies from tech-stack.md
      PROHIBITED_TECHS=()
      APPROVED_ALTERNATIVES=()

      while IFS= read -r line; do
        if [[ $line =~ ^-\ ❌\ (.*)\ \(use\ (.*)\ instead\) ]]; then
          PROHIBITED_TECHS+=("${BASH_REMATCH[1]}")
          APPROVED_ALTERNATIVES+=("${BASH_REMATCH[2]}")
        fi
      done < <(grep "❌" "${REPO_ROOT}/.specswarm/tech-stack.md")
      ```

   5. **Runtime Import/Dependency Validation**:

      **BEFORE writing ANY file that contains imports or dependencies:**

      ```bash
      # For each import statement or dependency about to be written:
      # (ERROR, MESSAGE, WARNING, PROMPT, and HALT are pseudo-commands standing
      # in for the assistant's own error reporting and halting behavior.)
      check_technology_compliance() {
        local TECH_NAME="$1"
        local FILE_PATH="$2"
        local LINE_CONTENT="$3"

        # Check if technology is prohibited
        for i in "${!PROHIBITED_TECHS[@]}"; do
          PROHIBITED="${PROHIBITED_TECHS[$i]}"
          APPROVED="${APPROVED_ALTERNATIVES[$i]}"

          if echo "$TECH_NAME" | grep -qi "$PROHIBITED"; then
            ERROR "Prohibited technology detected: $PROHIBITED"
            MESSAGE "File: $FILE_PATH"
            MESSAGE "Line: $LINE_CONTENT"
            MESSAGE "❌ Cannot use: $PROHIBITED"
            MESSAGE "✅ Must use: $APPROVED"
            MESSAGE "See .specswarm/tech-stack.md for details"
            HALT
          fi
        done

        # Check if technology is unapproved (warn but allow)
        if ! grep -qi "$TECH_NAME" "${REPO_ROOT}/.specswarm/tech-stack.md" 2>/dev/null; then
          WARNING "Unapproved technology: $TECH_NAME"
          MESSAGE "File: $FILE_PATH"
          MESSAGE "This library is not in tech-stack.md"
          PROMPT "Continue anyway? (yes/no)"
          read -r RESPONSE
          if [[ ! "$RESPONSE" =~ ^[Yy] ]]; then
            MESSAGE "Halting. Please add $TECH_NAME to tech-stack.md or choose an approved alternative"
            HALT
          fi
        fi
      }
      ```

   6. **Validation Triggers**:

      **JavaScript/TypeScript**:
      - Before writing: `import ... from '...'`
      - Before writing: `require('...')`
      - Before writing: `npm install ...` or `yarn add ...`
      - Extract library name and validate

      **Python**:
      - Before writing: `import ...` or `from ... import ...`
      - Before writing: `pip install ...`
      - Extract module name and validate

      **Go**:
      - Before writing: `import "..."`
      - Before executing: `go get ...`
      - Extract package name and validate

      **General**:
      - Before writing any `package.json` dependencies
      - Before writing any `requirements.txt` entries
      - Before writing any `go.mod` require statements
      - Before writing any `composer.json` dependencies

   7. **Pattern Validation**:

      Check for prohibited patterns (not just libraries):
      ```bash
      validate_code_pattern() {
        local FILE_CONTENT="$1"
        local FILE_PATH="$2"

        # Check for prohibited patterns from tech-stack.md
        # Example: "Class components" prohibited
        if echo "$FILE_CONTENT" | grep -q "class.*extends React.Component"; then
          ERROR "Prohibited pattern: Class components"
          MESSAGE "File: $FILE_PATH"
          MESSAGE "Use functional components instead"
          HALT
        fi

        # Example: "Redux" prohibited
        if echo "$FILE_CONTENT" | grep -qi "createStore\|configureStore.*@reduxjs"; then
          ERROR "Prohibited library: Redux"
          MESSAGE "File: $FILE_PATH"
          MESSAGE "Use React Router loaders/actions instead"
          HALT
        fi
      }
      ```

   8. **Continuous Validation**:
      - Run validation before EVERY file write operation
      - Run validation before EVERY package manager command
      - Run validation before EVERY import statement
      - Accumulate violations and report at the end if in batch mode

<!-- ========== END TECH STACK VALIDATION ========== -->

4. **Project Setup Verification**:
   - **REQUIRED**: Create/verify ignore files based on actual project setup:

   **Detection & Creation Logic**:
   - Check if the following command succeeds to determine whether the repository is a git repo (create/verify .gitignore if so):

     ```sh
     git rev-parse --git-dir 2>/dev/null
     ```
   - Check if Dockerfile* exists or Docker in plan.md → create/verify .dockerignore
   - Check if .eslintrc* or eslint.config.* exists → create/verify .eslintignore
   - Check if .prettierrc* exists → create/verify .prettierignore
   - Check if .npmrc or package.json exists → create/verify .npmignore (if publishing)
   - Check if terraform files (*.tf) exist → create/verify .terraformignore
   - Check if .helmignore needed (helm charts present) → create/verify .helmignore

   **If ignore file already exists**: Verify it contains essential patterns, append missing critical patterns only
   **If ignore file missing**: Create with full pattern set for detected technology

   **Common Patterns by Technology** (from plan.md tech stack):
   - **Node.js/JavaScript**: `node_modules/`, `dist/`, `build/`, `*.log`, `.env*`
   - **Python**: `__pycache__/`, `*.pyc`, `.venv/`, `venv/`, `dist/`, `*.egg-info/`
   - **Java**: `target/`, `*.class`, `*.jar`, `.gradle/`, `build/`
   - **C#/.NET**: `bin/`, `obj/`, `*.user`, `*.suo`, `packages/`
   - **Go**: `*.exe`, `*.test`, `vendor/`, `*.out`
   - **Universal**: `.DS_Store`, `Thumbs.db`, `*.tmp`, `*.swp`, `.vscode/`, `.idea/`

   **Tool-Specific Patterns**:
   - **Docker**: `node_modules/`, `.git/`, `Dockerfile*`, `.dockerignore`, `*.log*`, `.env*`, `coverage/`
   - **ESLint**: `node_modules/`, `dist/`, `build/`, `coverage/`, `*.min.js`
   - **Prettier**: `node_modules/`, `dist/`, `build/`, `coverage/`, `package-lock.json`, `yarn.lock`, `pnpm-lock.yaml`
   - **Terraform**: `.terraform/`, `*.tfstate*`, `*.tfvars`, `.terraform.lock.hcl`
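
A minimal sketch of the create/verify behavior described above (the `ensure_ignore_file` helper and its patterns are illustrative):

```bash
# Create an ignore file with the given patterns if it does not exist;
# if it exists, append only the patterns that are missing.
ensure_ignore_file() {
  local file="$1"; shift
  local pattern
  touch "$file"
  for pattern in "$@"; do
    grep -qxF "$pattern" "$file" || echo "$pattern" >> "$file"
  done
}
```

For example, `ensure_ignore_file .gitignore 'node_modules/' 'dist/' '.env*'` is idempotent: re-running it never duplicates lines.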

5. Parse tasks.md structure and extract:
   - **Task phases**: Setup, Tests, Core, Integration, Polish
   - **Task dependencies**: Sequential vs parallel execution rules
   - **Task details**: ID, description, file paths, parallel markers [P]
   - **Execution flow**: Order and dependency requirements

6. Execute implementation following the task plan:
   - **Phase-by-phase execution**: Complete each phase before moving to the next
   - **Respect dependencies**: Run sequential tasks in order; parallel tasks [P] can run together
   - **Follow TDD approach**: Execute test tasks before their corresponding implementation tasks
   - **File-based coordination**: Tasks affecting the same files must run sequentially
   - **Validation checkpoints**: Verify each phase completes before proceeding

7. Implementation execution rules:
   - **Setup first**: Initialize project structure, dependencies, configuration
   - **Tests before code**: Write required tests for contracts, entities, and integration scenarios before implementing them
   - **Core development**: Implement models, services, CLI commands, endpoints
   - **Integration work**: Database connections, middleware, logging, external services
   - **Polish and validation**: Unit tests, performance optimization, documentation

8. Progress tracking and error handling:
   - Report progress after each completed task
   - Halt execution if any non-parallel task fails
   - For parallel tasks [P], continue with successful tasks, report failed ones
   - Provide clear error messages with context for debugging
   - Suggest next steps if implementation cannot proceed
   - **IMPORTANT**: For completed tasks, make sure to mark the task off as [X] in the tasks file.

9. Completion validation:
   - Verify all required tasks are completed
   - Check that implemented features match the original specification
   - Validate that tests pass and coverage meets requirements
   - Confirm the implementation follows the technical plan
   - Report final status with a summary of completed work

<!-- ========== SSR PATTERN VALIDATION ========== -->
<!-- Added by Marty Bonacci & Claude Code (2025-10-15) -->

9b. **SSR Architecture Validation** - Prevent Bug 913-type issues:

   **Purpose**: Detect hardcoded URLs and relative URLs in server-side rendering contexts

   **YOU MUST NOW run SSR pattern validation using the Bash tool:**

   1. **Execute SSR validator:**
      ```bash
      cd ${REPO_ROOT} && bash ~/.claude/plugins/marketplaces/specswarm-marketplace/plugins/specswarm/lib/ssr-validator.sh
      ```

   2. **Parse validation results:**
      - Exit code 0: No issues found (✓ PASS)
      - Exit code 1: Issues detected (⚠️ FAIL)

   3. **If issues detected:**

      a. **Display issue summary** from validator output

      b. **Ask user for action:**
         ```
         ⚠️ SSR Pattern Issues Detected
         ================================

         Would you like me to:
         1. Auto-fix common issues (create getApiUrl helper, update imports)
         2. Show detailed issues and fix manually
         3. Skip (continue anyway - NOT RECOMMENDED)

         Choose (1/2/3):
         ```

      c. **If user chooses option 1 (Auto-fix):**
         - Check if app/utils/api.ts exists
         - If not, create it with a getApiUrl() helper:
           ```typescript
           export function getApiUrl(path: string): string {
             const base = typeof window !== 'undefined'
               ? ''
               : process.env.API_BASE_URL || 'http://localhost:3000';
             return `${base}${path}`;
           }
           ```
         - Scan all files with hardcoded URLs using the Grep tool
         - For each file:
           * Add import: `import { getApiUrl } from '../utils/api';`
           * Replace `fetch('http://localhost:3000/api/...')` with `fetch(getApiUrl('/api/...'))`
           * Replace `fetch('/api/...')` in loaders/actions with `fetch(getApiUrl('/api/...'))`
         - Re-run the SSR validator to confirm the fixes
         - Display: "✅ Auto-fix complete. All SSR patterns corrected."

      d. **If user chooses option 2 (Manual fix):**
         - Display the detailed issue list from the validator
         - Display recommendations
         - Wait for the user to fix manually
         - Offer to re-run the validator: "Type 'validate' to re-check or 'continue' to proceed"

      e. **If user chooses option 3 (Skip):**
         - Display warning: "⚠️ Skipping SSR validation. Production deployment may fail."
         - Continue to next step

   4. **If no issues found:**
      - Display: "✅ SSR patterns validated - No architectural issues detected"
      - Continue to next step

   **IMPORTANT**: This validation prevents production failures from Bug 913-type issues (relative URLs in SSR contexts).
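
The exit-code handling in steps 1-2 could be wrapped like this (`run_ssr_check` is a sketch; pass it the validator command shown above):

```bash
# Run a validator command and branch on its exit code (0 = pass, nonzero = issues)
run_ssr_check() {
  if "$@"; then
    echo "✅ SSR patterns validated - No architectural issues detected"
  else
    echo "⚠️ SSR Pattern Issues Detected"
  fi
}
```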

<!-- ========== END SSR PATTERN VALIDATION ========== -->

<!-- ========== QUALITY VALIDATION (SpecSwarm Phase 1) ========== -->
<!-- Added by Marty Bonacci & Claude Code (2025) -->

10. **Quality Validation** - CRITICAL STEP, MUST EXECUTE:

    **Purpose**: Automated quality assurance before merge

    **YOU MUST NOW CHECK FOR AND RUN QUALITY VALIDATION:**

    1. **First**, check if the quality standards file exists by reading `${REPO_ROOT}/.specswarm/quality-standards.md` using the Read tool.

    2. **If the file does NOT exist:**
       - Display this message to the user:
         ```
         ℹ️ Quality Validation
         ====================

         No quality standards defined. Skipping automated validation.

         To enable quality gates:
         1. Create .specswarm/quality-standards.md
         2. Define minimum coverage and quality score
         3. Configure test requirements

         See: plugins/specswarm/templates/quality-standards-template.md
         ```
       - Then proceed directly to Step 11 (Git Workflow)

    3. **If the file EXISTS, you MUST execute the full quality validation workflow using the Bash tool:**

       a. **Display header** by outputting directly to the user:
          ```
          🧪 Running Quality Validation
          =============================
          ```

       b. **Detect test frameworks** using the Bash tool:
          ```bash
          cd ${REPO_ROOT} && bash ~/.claude/plugins/marketplaces/specswarm-marketplace/plugins/specswarm/lib/test-framework-detector.sh
          ```
          Parse the JSON output to extract:
          - List of all detected frameworks
          - Primary framework (highest priority)
          - Framework count
          Store the primary framework for use in tests.

       c. **Run unit tests** using the detected framework:

          **YOU MUST NOW run tests using the Bash tool:**

          1. **Source the test detector library:**
             ```bash
             source ~/.claude/plugins/marketplaces/specswarm-marketplace/plugins/specswarm/lib/test-framework-detector.sh
             ```

          2. **Run tests for the primary framework:**
             ```bash
             run_tests "{PRIMARY_FRAMEWORK}" ${REPO_ROOT}
             ```

          3. **Parse test results:**
             ```bash
             parse_test_results "{PRIMARY_FRAMEWORK}" "{TEST_OUTPUT}"
             ```
             Extract: total, passed, failed, skipped counts

          4. **Display results to user:**
             ```
             1. Unit Tests ({FRAMEWORK_NAME})
                ✓ Total: {TOTAL}
                ✓ Passed: {PASSED} ({PASS_RATE}%)
                ✗ Failed: {FAILED}
                ⊘ Skipped: {SKIPPED}
             ```

       d. **Measure code coverage** (if a coverage tool is available):

          **YOU MUST NOW measure coverage using the Bash tool:**

          1. **Check for a coverage tool:**
             ```bash
             source ~/.claude/plugins/marketplaces/specswarm-marketplace/plugins/specswarm/lib/test-framework-detector.sh
             detect_coverage_tool "{PRIMARY_FRAMEWORK}" ${REPO_ROOT}
             ```

          2. **If a coverage tool is detected, run coverage:**
             ```bash
             run_coverage "{PRIMARY_FRAMEWORK}" ${REPO_ROOT}
             ```
             Parse the coverage percentage from the output.

          3. **Calculate proportional coverage score (Phase 2 Enhancement):**
             - Read min_coverage from quality-standards.md (default 80%)
             - Calculate the score using the proportional formula:
               * Coverage >= 90%: 25 points (full credit)
               * Coverage 80-89%: 20-24 points (proportional)
               * Coverage 70-79%: 15-19 points (proportional)
               * Coverage 60-69%: 10-14 points (proportional)
               * Coverage 50-59%: 5-9 points (proportional)
               * Coverage < 50%: 0-4 points (proportional)

               Formula: `score = min(25, (coverage / 90) * 25)`

          4. **Display results to user:**
             ```
             3. Code Coverage
                Coverage: {COVERAGE}%
                Target: {TARGET}%
                Score: {SCORE}/25 points
                Status: {EXCELLENT/GOOD/ACCEPTABLE/NEEDS IMPROVEMENT/INSUFFICIENT}
             ```

          5. **If no coverage tool:**
             - Display: "Coverage measurement not configured (0 points)"
             - Score: 0 points
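
With whole-percent coverage values, the proportional formula above reduces to integer shell arithmetic (a sketch, assuming coverage has already been parsed as an integer):

```bash
# score = min(25, (coverage / 90) * 25), floored to whole points
coverage_score() {
  local coverage="$1"
  local score=$(( coverage * 25 / 90 ))
  [ "$score" -gt 25 ] && score=25
  echo "$score"
}
```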

       e. **Detect browser test framework:**
          ```bash
          cd ${REPO_ROOT} && source ~/.claude/plugins/marketplaces/specswarm-marketplace/plugins/specswarm/lib/quality-gates.sh && detect_browser_test_framework
          ```

       f. **Run browser tests** (if Playwright/Cypress detected):
          - For Playwright: `npx playwright test 2>&1 | tail -30`
          - For Cypress: `npx cypress run 2>&1 | tail -30`
          - Parse results (passed/failed/total)
          - Display with a "4. Browser Tests" header
          - If no browser framework: Display "No browser test framework detected - Skipping"

       f2. **Analyze bundle sizes** (Phase 3 Enhancement):

          **YOU MUST NOW analyze bundle sizes using the Bash tool:**

          1. **Run the bundle size monitor:**
             ```bash
             cd ${REPO_ROOT} && bash ~/.claude/plugins/marketplaces/specswarm-marketplace/plugins/speclabs/lib/bundle-size-monitor.sh
             ```

          2. **Parse bundle analysis results:**
             - Exit code 0: All bundles within budget (✓ PASS)
             - Exit code 1: Large bundles detected (⚠️ WARNING)
             - Exit code 2: Critical bundle size exceeded (🔴 CRITICAL)

          3. **Extract bundle metrics:**
             - Total bundle size (in KB)
             - Number of large bundles (>500KB)
             - Number of critical bundles (>1MB)
             - List of the top 5 largest bundles

          4. **Calculate bundle size score:**
             - Under 500KB total: 20 points (excellent)
             - 500-750KB: 15 points (good)
             - 750-1000KB: 10 points (acceptable)
             - 1000-2000KB: 5 points (poor)
             - Over 2000KB: 0 points (critical)

          5. **Display results to user:**
             ```
             5. Bundle Size Performance
                Total Size: {TOTAL_SIZE}
                Largest Bundle: {LARGEST_BUNDLE}
                Score: {SCORE}/20 points
                Status: {EXCELLENT/GOOD/ACCEPTABLE/POOR/CRITICAL}
             ```

          6. **If no build directory is found:**
             - Display: "No build artifacts found - Run build first (0 points)"
             - Score: 0 points
             - Note: Bundle size analysis requires a production build

          7. **Track bundle size in metrics:**
             - Add bundle_size_kb to metrics.json
             - Enables bundle size tracking over time
|
||||
|
||||
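The threshold table in step 4 of f2 maps directly onto a small shell helper. This is an illustrative sketch only; the function name is not part of the plugin, and the real scoring lives in the bundle-size-monitor script:

```bash
# Illustrative only: map total bundle size in KB to the 0-20 point
# scale described in step 4 of the bundle analysis.
bundle_size_score() {
  local kb=$1
  if   [ "$kb" -lt 500 ];  then echo 20   # excellent
  elif [ "$kb" -lt 750 ];  then echo 15   # good
  elif [ "$kb" -lt 1000 ]; then echo 10   # acceptable
  elif [ "$kb" -lt 2000 ]; then echo 5    # poor
  else                          echo 0    # critical
  fi
}

bundle_size_score 450   # prints 20
bundle_size_score 1200  # prints 5
```

The boundaries are half-open (a 500KB total scores 15, not 20), matching the "< 500KB" wording above.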
f3. **Enforce performance budgets** (Phase 3 Enhancement - Optional):

   **YOU MUST NOW check if performance budgets are defined:**

   1. **Check for budget configuration:**
      - Read quality-standards.md using Read tool
      - Look for budget settings:
        * max_bundle_size (KB)
        * max_initial_load (KB)
        * enforce_budgets (true/false)

   2. **If enforce_budgets is true, run enforcement:**
      ```bash
      cd ${REPO_ROOT} && bash ~/.claude/plugins/marketplaces/specswarm-marketplace/plugins/speclabs/lib/performance-budget-enforcer.sh
      ```

   3. **Parse enforcement results:**
      - Exit code 0: All budgets met (✓ PASS)
      - Exit code 1: Budgets violated (❌ FAIL)

   4. **If budgets violated:**
      a. **Display violations** from enforcer output
      b. **Check block_merge setting:**
         - If block_merge_on_budget_violation is true: HALT
         - If false: Warn and ask user "Continue anyway? (yes/no)"

   5. **Display budget status:**
      ```
      ⚡ Performance Budget Status
      - Bundle Size: {PASS/FAIL}
      - Initial Load: {PASS/FAIL}
      Overall: {PASS/FAIL}
      ```

   6. **If no budgets configured:**
      - Skip enforcement
      - Note: "Performance budgets not configured (optional)"
g. **Calculate quality score** using proportional scoring (Phase 2 & 3 Enhancement):

   **YOU MUST NOW calculate scores for each component:**

   1. **Unit Tests** (0-25 points - proportional by pass rate):
      - 100% passing: 25 points
      - 90-99% passing: 20-24 points (proportional)
      - 80-89% passing: 15-19 points (proportional)
      - 70-79% passing: 10-14 points (proportional)
      - 60-69% passing: 5-9 points (proportional)
      - <60% passing: 0-4 points (proportional)

      Formula: `score = min(25, (pass_rate / 100) * 25)`

      (The bands above are indicative; when they disagree with the formula, the formula is authoritative.)

   2. **Code Coverage** (0-25 points - proportional by coverage %):
      - >=90% coverage: 25 points
      - 80-89% coverage: 20-24 points (proportional)
      - 70-79% coverage: 15-19 points (proportional)
      - 60-69% coverage: 10-14 points (proportional)
      - 50-59% coverage: 5-9 points (proportional)
      - <50% coverage: 0-4 points (proportional)

      Formula: `score = min(25, (coverage / 90) * 25)`

   3. **Integration Tests** (0-15 points - proportional):
      - 100% passing: 15 points
      - Proportional for <100%
      - 0 points if not detected

      Formula: `score = min(15, (pass_rate / 100) * 15)`

   4. **Browser Tests** (0-15 points - proportional):
      - 100% passing: 15 points
      - Proportional for <100%
      - 0 points if not detected

      Formula: `score = min(15, (pass_rate / 100) * 15)`

   5. **Bundle Size** (0-20 points - Phase 3 feature):
      - < 500KB total: 20 points
      - 500-750KB: 15 points
      - 750-1000KB: 10 points
      - 1000-2000KB: 5 points
      - > 2000KB: 0 points
      - No build found: 0 points

   6. **Visual Alignment** (0-15 points - Phase 3 future):
      - Set to 0 for now (screenshot analysis not yet implemented)

   **Total possible: 115 points** (but scaled to 100 for display)

   **Scoring Note**: When Visual Alignment is implemented, adjust the other components to maintain a 100-point scale.

   **Example Calculation:**
   - Unit Tests: 106/119 passing (89%) → 22.25 points
   - Coverage: 75% → 20.83 points
   - Integration Tests: Not detected → 0 points
   - Browser Tests: Not configured → 0 points
   - Bundle Size: 450KB → 20 points
   - Visual Alignment: Not implemented → 0 points
   - **Total: 63.08/115 points** (scaled to ~55/100)
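The proportional formulas and the 115-to-100 scaling above can be sketched as shell, using `awk` for the fractional arithmetic. The helper name is illustrative, not plugin code:

```bash
# Sketch of the proportional scoring in step g.
# component_score <value> <divisor> <max_points> prints min(max, value/divisor*max).
component_score() {
  awk -v v="$1" -v d="$2" -v m="$3" \
    'BEGIN { s = (v / d) * m; if (s > m) s = m; printf "%.2f", s }'
}

unit=$(component_score 89 100 25)   # 89% pass rate -> 22.25
cov=$(component_score 75 90 25)     # 75% coverage  -> 20.83
raw=$(awk -v a="$unit" -v b="$cov" -v bundle=20 'BEGIN { printf "%.2f", a + b + bundle }')
scaled=$(awk -v r="$raw" 'BEGIN { printf "%.0f", (r / 115) * 100 }')
echo "Raw: $raw/115  Scaled: $scaled/100"   # Raw: 63.08/115  Scaled: 55/100
```

This reproduces the example calculation: 22.25 + 20.83 + 20 = 63.08 raw, which scales to roughly 55/100.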
h. **Display quality report** with proportional scoring details:
   ```
   Quality Validation Results
   ==========================

   1. Unit Tests ({FRAMEWORK}): {SCORE}/25 points
      ✓ Passed: {PASSED}/{TOTAL} ({PASS_RATE}%)
      ✗ Failed: {FAILED}
      Status: {EXCELLENT/GOOD/ACCEPTABLE/NEEDS IMPROVEMENT}

   2. Code Coverage: {SCORE}/25 points
      Coverage: {COVERAGE}% (target: {TARGET}%)
      Status: {EXCELLENT/GOOD/ACCEPTABLE/NEEDS IMPROVEMENT/INSUFFICIENT}

   3. Integration Tests: {SCORE}/15 points
      {DETAILS or "Not detected"}

   4. Browser Tests: {SCORE}/15 points
      {DETAILS or "Not configured"}

   5. Bundle Size: {SCORE}/20 points
      Total: {TOTAL_SIZE} | Largest: {LARGEST_BUNDLE}
      Status: {EXCELLENT/GOOD/ACCEPTABLE/POOR/CRITICAL}

   6. Visual Alignment: 0/15 points
      Status: Not yet implemented (Phase 3 future)

   ════════════════════════════════════════
   Raw Score: {RAW_SCORE}/115 points
   Scaled Score: {SCALED_SCORE}/100 points
   ════════════════════════════════════════

   Status: {PASS/FAIL} (threshold: {THRESHOLD}/100)

   Score Breakdown:
   ████████████████░░░░░░░░ {SCALED_SCORE}% ({VISUAL_BAR})

   Note: Score scaled from 115-point system to 100-point display
   ```
i. **Check quality gates** from quality-standards.md:
   - Read min_quality_score (default 80)
   - Read block_merge_on_failure (default false)
   - If score < minimum:
     - If block_merge_on_failure is true: HALT and show error
     - If block_merge_on_failure is false: Show warning and ask user "Continue with merge anyway? (yes/no)"
   - If score >= minimum: Display "✅ Quality validation passed!"

j. **Save quality metrics** by updating `${REPO_ROOT}/.specswarm/metrics.json`:
   - Add entry for current feature number
   - Include quality score, coverage, test results
   - Use Write tool to update the JSON file
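A metrics entry written in step j might look like the following. The field names and nesting here are a hypothetical shape for illustration only; the plugin does not mandate this exact schema:

```bash
# Hypothetical metrics.json entry for feature 003 (illustrative schema).
mkdir -p .specswarm
cat > .specswarm/metrics.json <<'EOF'
{
  "features": {
    "003": {
      "quality_score": 55,
      "raw_score": 63.08,
      "coverage_pct": 75,
      "unit_tests": { "passed": 106, "failed": 13, "total": 119 },
      "bundle_size_kb": 450,
      "recorded_at": "2025-01-01T00:00:00Z"
    }
  }
}
EOF
```

Keeping `bundle_size_kb` in the entry is what enables the bundle-size-over-time tracking mentioned in f2 step 7.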
**IMPORTANT**: You MUST execute this step if quality-standards.md exists. Do NOT skip it. Use the Bash tool to run all commands and parse the results.
4. **Proactive Quality Improvements** - If quality score < 80/100:

   **YOU MUST NOW offer to improve the quality score:**

   a. **Check for missing coverage tool:**
      - If Vitest was detected but coverage measurement showed 0% or "not configured":
        - Display to user:
          ```
          ⚡ Coverage Tool Not Installed
          =============================

          Installing @vitest/coverage-v8 could add up to +25 points to your quality score.

          Current: {CURRENT_SCORE}/100
          With coverage: up to {CURRENT_SCORE + 25}/100

          Would you like me to:
          1. Install coverage tool and re-run validation
          2. Skip (continue without coverage)

          Choose (1 or 2):
          ```
        - If user chooses 1:
          - Run: `npm install --save-dev @vitest/coverage-v8`
          - Check if vitest.config.ts exists using Read tool
          - If it exists, update it to add coverage configuration
          - If not, create vitest.config.ts with coverage config
          - Re-run quality validation (step 3 above)
          - Display new score
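One possible shape for the generated vitest.config.ts is below. The specific options (provider, reporters, thresholds) are a sketch, not the plugin's canonical config; adjust them to the project's needs:

```bash
# Sketch: write a minimal Vitest config with coverage enabled.
# Assumes the project uses Vitest; options shown are illustrative.
cat > vitest.config.ts <<'EOF'
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    coverage: {
      provider: 'v8',                     // supplied by @vitest/coverage-v8
      reporter: ['text', 'json-summary'], // json-summary is easy to parse
    },
  },
})
EOF
```

With this in place, `npx vitest run --coverage` reports the percentage that feeds the Code Coverage component.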
   b. **Check for missing E2E tests:**
      - If Playwright was detected but no tests were found:
        - Display to user:
          ```
          ⚡ No E2E Tests Found
          =====================

          Writing basic E2E tests could add up to +15 points to your quality score.

          Current: {CURRENT_SCORE}/100
          With E2E tests: up to {CURRENT_SCORE + 15}/100

          Would you like me to:
          1. Generate basic Playwright test templates
          2. Skip (continue without E2E tests)

          Choose (1 or 2):
          ```
        - If user chooses 1:
          - Create tests/e2e/ directory if it does not exist
          - Generate a basic test file with:
            * Login flow test (if authentication exists)
            * Main feature flow test (based on spec.md)
            * Basic smoke test
          - Run: `npx playwright test`
          - Re-run quality validation
          - Display new score
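The "basic smoke test" template from step b could be scaffolded like this. The file name and assertions are placeholders for whatever spec.md actually describes:

```bash
# Sketch: scaffold a minimal Playwright smoke test (illustrative content).
mkdir -p tests/e2e
cat > tests/e2e/smoke.spec.ts <<'EOF'
import { test, expect } from '@playwright/test';

test('app loads without errors', async ({ page }) => {
  await page.goto('/');                  // resolved against baseURL in playwright.config
  await expect(page).toHaveTitle(/.+/);  // any non-empty title
});
EOF
```

Even a single passing smoke test moves the Browser Tests component off 0 points.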
   c. **Display final improvement summary:**
      ```
      📊 Quality Score Improvement
      ============================

      Before improvements: {ORIGINAL_SCORE}/100
      After improvements: {FINAL_SCORE}/100
      Increase: +{INCREASE} points

      {STATUS_EMOJI} Quality Status: {PASS/FAIL}
      ```

**Note**: This proactive improvement step can increase quality scores from 25/100 to 65/100+ automatically.

<!-- ========== END QUALITY VALIDATION ========== -->
11. **Git Workflow Completion** (if git repository):

   **Purpose**: Handle feature branch merge and cleanup after successful implementation

   **INSTRUCTIONS FOR CLAUDE:**

   1. **Check if in a git repository** using Bash tool:
      ```bash
      git rev-parse --git-dir 2>/dev/null
      ```
      If this fails, skip the git workflow entirely.

   2. **Get current and main branch names** using Bash:
      ```bash
      git rev-parse --abbrev-ref HEAD
      git symbolic-ref refs/remotes/origin/HEAD 2>/dev/null | sed 's@^refs/remotes/origin/@@'
      ```

   3. **Only proceed if on a feature branch** (not main/master). If already on main, display "Already on main branch" and stop.

   4. **Display git workflow options** to the user:
      ```
      🌳 Git Workflow
      ===============

      Current branch: {CURRENT_BRANCH}
      Main branch: {MAIN_BRANCH}

      Feature implementation complete! What would you like to do?

      1. Merge to {MAIN_BRANCH} and delete feature branch (recommended)
      2. Stay on {CURRENT_BRANCH} for additional work
      3. Switch to {MAIN_BRANCH} without merging (keep branch)

      Choose (1/2/3):
      ```

   5. **Wait for user choice** and proceed based on their selection.
**OPTION 1: Merge and Delete Branch**

a. **Check for uncommitted changes** using Bash:
   ```bash
   git diff-index --quiet HEAD --
   ```
   If the exit code is non-zero, there are uncommitted changes.

b. **If there are uncommitted changes:**
   - Display: `git status --short` to show changes
   - Ask user: "Commit these changes first? (yes/no)"

c. **If user wants to commit, intelligently stage ONLY source files:**

   **CRITICAL - Smart Git Staging (Project-Aware):**

   **YOU MUST NOW perform smart file staging using these steps:**

   1. **Detect project type** by checking for files:
      - Read package.json using Read tool
      - Check for framework indicators:
        * Vite: `vite.config.ts` or `"vite"` in package.json
        * Next.js: `next.config.js` or `"next"` in package.json
        * Remix: `remix.config.js` or `"@remix-run"` in package.json
        * Create React App: `react-scripts` in package.json
        * Node.js generic: package.json exists but no specific framework
      - Store detected type for use in exclusions

   2. **Build exclusion patterns based on project type:**

      a. **Base exclusions (all projects):**
         ```
         ':!node_modules/' ':!.pnpm-store/' ':!.yarn/'
         ':!*.log' ':!coverage/' ':!.nyc_output/'
         ```

      b. **Project-specific exclusions:**
         - Vite: `':!dist/' ':!build/'`
         - Next.js: `':!.next/' ':!out/'`
         - Remix: `':!build/' ':!public/build/'`
         - CRA: `':!build/'`

      c. **Parse .gitignore** using Read tool:
         - Read .gitignore if it exists
         - Extract patterns (lines not starting with #)
         - Convert to pathspec format: `:!{pattern}`
         - Add to the exclusion list
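The .gitignore-to-pathspec conversion in step 2c can be sketched as a bash function. The function name and the global `EXCLUDES` array are illustrative choices, not plugin code; a real implementation would likely merge this array with the base and project-specific exclusions:

```bash
# Sketch: turn .gitignore patterns into git pathspec exclusions.
# Comment lines and empty lines are dropped; everything else becomes ':!pattern'.
gitignore_pathspecs() {
  EXCLUDES=()
  [ -f .gitignore ] || return 0
  while IFS= read -r pattern; do
    case "$pattern" in ''|\#*) continue ;; esac
    EXCLUDES+=(":!$pattern")
  done < .gitignore
}

# Usage: gitignore_pathspecs && git add . "${EXCLUDES[@]}"
```

Building an array (rather than a space-joined string) keeps patterns with spaces or globs intact when passed to `git add`.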
   3. **Check for large files** using Bash:
      ```bash
      git status --porcelain | cut -c4- | while read file; do
        if [ -f "$file" ] && [ $(stat -f%z "$file" 2>/dev/null || stat -c%s "$file" 2>/dev/null) -gt 1048576 ]; then
          echo "$file ($(du -h "$file" | cut -f1))"
        fi
      done
      ```

      a. **If large files found:**
         - Display warning:
           ```
           ⚠️ Large Files Detected
           ======================

           The following files are >1MB:
           - {file1} ({size1})
           - {file2} ({size2})

           These may not belong in git. Add to .gitignore?
           1. Add to .gitignore and skip
           2. Commit anyway
           3. Cancel commit

           Choose (1/2/3):
           ```
         - If option 1: Append to .gitignore, exclude from staging
         - If option 2: Include in staging
         - If option 3: Cancel commit, return to main flow
   4. **Stage files with exclusions** using Bash:
      ```bash
      git add . {BASE_EXCLUSIONS} {PROJECT_EXCLUSIONS} {GITIGNORE_EXCLUSIONS}
      ```

      Example for a Vite project:
      ```bash
      git add . ':!node_modules/' ':!dist/' ':!build/' ':!*.log' ':!coverage/'
      ```

   5. **Verify staging** using Bash:
      ```bash
      git diff --cached --name-only
      ```

      a. **Check if any excluded patterns appear:**
         - If `dist/`, `build/`, `.next/`, etc. appear in the staged files
         - Display error: "❌ Build artifacts detected in staging"
         - Ask user: "Unstage and retry? (yes/no)"
         - If yes: `git reset` and retry with stricter patterns

   6. **Commit with message** using Bash:
      ```bash
      git commit -m "{USER_PROVIDED_MESSAGE}"
      ```

   **IMPORTANT**: This project-aware staging prevents build artifacts and large files from being committed.

d. **Merge to main branch:**
   - Test merge first (dry run): `git merge --no-commit --no-ff {CURRENT_BRANCH}`
   - If successful: abort the test, then do the real merge with a message
   - If conflicts: abort, show manual resolution steps, stay on feature branch

e. **Delete feature branch** if merge succeeded:
   ```bash
   git branch -d {CURRENT_BRANCH}
   ```
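Steps d and e can be sketched as one function. As a simplification of the described flow, this sketch commits the successful `--no-commit` probe directly instead of aborting and re-merging; the function name and messages are illustrative:

```bash
# Sketch of merge-then-cleanup (steps d and e). Branch names are parameters.
safe_merge() {
  local main="$1" feature="$2"
  git checkout -q "$main"
  if git merge --no-commit --no-ff -q "$feature"; then
    git commit -q -m "Merge ${feature} into ${main}"  # complete the probed merge
    git branch -d "$feature"                          # step e: delete merged branch
  else
    git merge --abort                                 # conflicts: back out cleanly
    echo "Conflicts detected - resolve manually on ${feature}" >&2
    git checkout -q "$feature"                        # stay on the feature branch
    return 1
  fi
}
```

On the conflict path nothing is committed, so the working tree returns to exactly the pre-merge state.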
**OPTION 2: Stay on Current Branch**
- Display a message about when/how to merge later
- No git commands needed

**OPTION 3: Switch to Main (Keep Branch)**
- Switch to main: `git checkout {MAIN_BRANCH}`
- Keep the feature branch for later

**IMPORTANT**: When staging files for commit, NEVER use a bare `git add .` - always include the exclusion pathspecs so build artifacts are filtered out!

Note: This command assumes a complete task breakdown exists in tasks.md. If tasks are incomplete or missing, suggest running `/tasks` first to regenerate the task list.
570
commands/init.md
Normal file
@@ -0,0 +1,570 @@
---
description: Interactive project initialization - creates constitution, tech stack, and quality standards
args:
  - name: --skip-detection
    description: Skip automatic technology detection
    required: false
  - name: --minimal
    description: Use minimal defaults without interactive questions
    required: false
---

## User Input

```text
$ARGUMENTS
```

## Goal

Initialize a new project with SpecSwarm by creating three foundation files:
1. `.specswarm/constitution.md` - Project governance and coding principles
2. `.specswarm/tech-stack.md` - Approved technologies and prohibited patterns
3. `.specswarm/quality-standards.md` - Quality gates and performance budgets

This command streamlines project setup from 3 manual steps to a single interactive workflow.

---

## Execution Steps

### Step 1: Check for Existing Files
```bash
echo "🔍 Checking for existing SpecSwarm configuration..."
echo ""

EXISTING_FILES=()

if [ -f ".specswarm/constitution.md" ]; then
  EXISTING_FILES+=("constitution.md")
fi

if [ -f ".specswarm/tech-stack.md" ]; then
  EXISTING_FILES+=("tech-stack.md")
fi

if [ -f ".specswarm/quality-standards.md" ]; then
  EXISTING_FILES+=("quality-standards.md")
fi

if [ ${#EXISTING_FILES[@]} -gt 0 ]; then
  echo "⚠️ Found existing configuration files:"
  for file in "${EXISTING_FILES[@]}"; do
    echo "  - .specswarm/$file"
  done
  echo ""
fi
```

If existing files found, use **AskUserQuestion** tool:

```
Question: "Existing configuration files detected. What would you like to do?"
Header: "Existing Files"
Options:
  1. "Update existing files"
     Description: "Merge new settings with existing configuration"
  2. "Backup and recreate"
     Description: "Save existing files to .backup/ and create fresh configuration"
  3. "Cancel initialization"
     Description: "Abort and keep existing configuration unchanged"
```

Store response in `$EXISTING_ACTION`.

If `$EXISTING_ACTION` == "Cancel", exit with a message.
If `$EXISTING_ACTION` == "Backup and recreate", create backups:

```bash
# Capture the timestamp once so mkdir and cp target the same directory
BACKUP_DIR=".specswarm/.backup/$(date +%Y%m%d-%H%M%S)"
mkdir -p "$BACKUP_DIR"
for file in "${EXISTING_FILES[@]}"; do
  cp ".specswarm/$file" "$BACKUP_DIR/$file"
done
echo "✅ Backed up existing files to .specswarm/.backup/"
```

---
### Step 2: Auto-Detect Technology Stack

**Skip this step if `--skip-detection` flag is present.**

```bash
echo "🔍 Auto-detecting technology stack..."
echo ""

# Source the multi-language detector
PLUGIN_DIR="$(dirname "$(dirname "$(readlink -f "${BASH_SOURCE[0]}")")")"
source "${PLUGIN_DIR}/lib/language-detector.sh"

# Attempt to detect tech stack
if detect_tech_stack "$(pwd)"; then
  AUTO_DETECT=true

  # Display detected stack
  display_detected_stack
else
  # No config file detected - manual configuration mode
  echo "ℹ️ No configuration file detected - auto-detection disabled"
  echo ""
  echo "📋 Supported configuration files:"
  echo "  • package.json (JavaScript/TypeScript)"
  echo "  • requirements.txt / pyproject.toml (Python)"
  echo "  • composer.json (PHP)"
  echo "  • go.mod (Go)"
  echo "  • Gemfile (Ruby)"
  echo "  • Cargo.toml (Rust)"
  echo ""
  echo "💡 Starting a new project?"
  echo ""
  echo "  Consider scaffolding your project first for automatic setup:"
  echo ""
  echo "  # JavaScript/TypeScript"
  echo "  npm create vite@latest . -- --template react-ts  # React + Vite"
  echo "  npx create-next-app@latest .                     # Next.js"
  echo "  npm create astro@latest .                        # Astro"
  echo "  npm create vue@latest .                          # Vue"
  echo ""
  echo "  # Python"
  echo "  pip install flask && touch app.py                # Flask"
  echo "  django-admin startproject myproject .            # Django"
  echo "  pip install fastapi && touch main.py             # FastAPI"
  echo ""
  echo "  # PHP"
  echo "  composer create-project laravel/laravel .        # Laravel"
  echo ""
  echo "  # Go"
  echo "  go mod init github.com/username/project          # Go"
  echo ""
  echo "  # Ruby"
  echo "  rails new . --skip-bundle                        # Rails"
  echo ""
  echo "  # Rust"
  echo "  cargo init                                       # Rust"
  echo ""
  echo "  Then re-run /specswarm:init for automatic detection."
  echo ""
  echo "⚠️ Continuing with manual tech stack configuration..."
  echo ""
  read -p "Press Enter to continue with manual setup, or Ctrl+C to scaffold first..."
  echo ""
  AUTO_DETECT=false
fi
```

---
### Step 3: Interactive Configuration (if not --minimal)

**Skip this step if `--minimal` flag is present. Use detected values or sensible defaults.**

Use **AskUserQuestion** tool for configuration:

```
Question 1: "What is your project name?"
Header: "Project"
Options:
  - Auto-detected from package.json "name" field or current directory name
  - Allow custom input via "Other" option
```

Store in `$PROJECT_NAME`.

```
Question 2 (if AUTO_DETECT=true): "We detected your tech stack. Is this correct?"
Header: "Tech Stack"
Options:
  1. "Yes, looks good"
     Description: "Use detected technologies as-is"
  2. "Let me modify"
     Description: "Adjust the detected stack"
  3. "Start from scratch"
     Description: "Manually specify all technologies"
```

Store in `$TECH_CONFIRM`.

If `$TECH_CONFIRM` == "Let me modify" or "Start from scratch", or AUTO_DETECT=false:

```
Question 3: "What is your primary framework?"
Header: "Framework"
Options:
  1. "React"
  2. "Vue"
  3. "Angular"
  4. "Next.js"
  5. "Node.js (backend)"
  6. "Other" (allow custom input)
```

```
Question 4: "What testing framework do you use?"
Header: "Testing"
multiSelect: true
Options:
  1. "Vitest (unit)"
  2. "Jest (unit)"
  3. "Playwright (e2e)"
  4. "Cypress (e2e)"
  5. "Testing Library"
  6. "Other" (allow custom input)
```

```
Question 5: "What quality thresholds do you want?"
Header: "Quality"
Options:
  1. "Standard (80% coverage, 80 quality score)"
     Description: "Recommended for most projects"
  2. "Strict (90% coverage, 90 quality score)"
     Description: "For mission-critical applications"
  3. "Relaxed (70% coverage, 70 quality score)"
     Description: "For prototypes and experiments"
  4. "Custom" (allow custom input)
```

Store in `$QUALITY_LEVEL`.

Parse quality thresholds:
- Standard: min_quality_score=80, min_test_coverage=80
- Strict: min_quality_score=90, min_test_coverage=90
- Relaxed: min_quality_score=70, min_test_coverage=70

```
Question 6: "Do you want to use default coding principles?"
Header: "Principles"
Options:
  1. "Yes, use defaults"
     Description: "DRY, SOLID, type safety, test coverage, documentation"
  2. "Let me provide custom principles"
     Description: "Define your own 3-5 principles"
```

Store in `$PRINCIPLES_CHOICE`.

If `$PRINCIPLES_CHOICE` == "Let me provide custom principles":
Ask for custom principles (text input via "Other" option or multiple questions)

---
### Step 4: Create .specswarm/constitution.md

Use the **SlashCommand** tool to execute the existing constitution command with the gathered information:

```bash
echo "📝 Creating .specswarm/constitution.md..."

# If custom principles were provided, pass them to the constitution command
if [ "$PRINCIPLES_CHOICE" = "custom" ]; then
  : # Use SlashCommand tool to run /specswarm:constitution with custom principles
else
  : # Use SlashCommand tool to run /specswarm:constitution (will use defaults)
fi

echo "✅ Created .specswarm/constitution.md"
```

Use the **SlashCommand** tool:
```
/specswarm:constitution
```

---
### Step 5: Create .specswarm/tech-stack.md

```bash
echo "📝 Creating .specswarm/tech-stack.md..."

# Read template
TEMPLATE=$(cat plugins/specswarm/templates/tech-stack.template.md)

# Replace placeholders
OUTPUT="$TEMPLATE"
OUTPUT="${OUTPUT//\[PROJECT_NAME\]/$PROJECT_NAME}"
OUTPUT="${OUTPUT//\[DATE\]/$(date +%Y-%m-%d)}"
OUTPUT="${OUTPUT//\[AUTO_GENERATED\]/$([[ $AUTO_DETECT == true ]] && echo "Yes" || echo "No")}"
OUTPUT="${OUTPUT//\[FRAMEWORK\]/$FRAMEWORK}"
OUTPUT="${OUTPUT//\[VERSION\]/$FRAMEWORK_VERSION}"
OUTPUT="${OUTPUT//\[FRAMEWORK_NOTES\]/"Functional components only (if React), composition API (if Vue)"}"
OUTPUT="${OUTPUT//\[LANGUAGE\]/$LANGUAGE}"
OUTPUT="${OUTPUT//\[LANGUAGE_VERSION\]/$LANGUAGE_VERSION}"
OUTPUT="${OUTPUT//\[BUILD_TOOL\]/$BUILD_TOOL}"
OUTPUT="${OUTPUT//\[BUILD_TOOL_VERSION\]/$BUILD_TOOL_VERSION}"

# State management section
if [ -n "$STATE_MGMT" ]; then
  STATE_SECTION="- **$STATE_MGMT**
  - Purpose: Application state management
  - Notes: Preferred over alternatives"
else
  STATE_SECTION="- No state management library detected
  - Recommendation: Use React Context for simple state, Zustand for complex state"
fi
OUTPUT="${OUTPUT//\[STATE_MANAGEMENT_SECTION\]/$STATE_SECTION}"

# Styling section
if [ -n "$STYLING" ]; then
  STYLE_SECTION="- **$STYLING**
  - Purpose: Component styling
  - Notes: Follow established patterns in codebase"
else
  STYLE_SECTION="- No styling framework detected
  - Recommendation: Consider Tailwind CSS for utility-first styling"
fi
OUTPUT="${OUTPUT//\[STYLING_SECTION\]/$STYLE_SECTION}"

# Testing section
UNIT_SECTION="${UNIT_TEST:-"Not configured - recommended: Vitest"}"
E2E_SECTION="${E2E_TEST:-"Not configured - recommended: Playwright"}"
OUTPUT="${OUTPUT//\[UNIT_TEST_FRAMEWORK\]/$UNIT_SECTION}"
OUTPUT="${OUTPUT//\[E2E_TEST_FRAMEWORK\]/$E2E_SECTION}"
OUTPUT="${OUTPUT//\[INTEGRATION_TEST_FRAMEWORK\]/${UNIT_TEST:-"Same as unit testing"}}"

# Approved libraries section
APPROVED_SECTION="### Data Validation
- Zod v4+ (runtime type validation)

### Utilities
- date-fns (date manipulation)
- lodash-es (utility functions, tree-shakeable)

### Forms (if applicable)
- React Hook Form (if using React)
- Zod validation integration

*Add project-specific approved libraries here*"
OUTPUT="${OUTPUT//\[APPROVED_LIBRARIES_SECTION\]/$APPROVED_SECTION}"

# Prohibited section
PROHIBITED_SECTION="### State Management
- ❌ Redux (use Zustand or Context API instead)
- ❌ MobX (prefer simpler alternatives)

### Deprecated Patterns
- ❌ Class components (use functional components with hooks)
- ❌ PropTypes (use TypeScript instead)
- ❌ Moment.js (use date-fns instead - smaller bundle)

*Add project-specific prohibited patterns here*"
OUTPUT="${OUTPUT//\[PROHIBITED_SECTION\]/$PROHIBITED_SECTION}"

# Notes section
NOTES_SECTION="- This file was ${AUTO_DETECT:+auto-detected from package.json and }created by \`/specswarm:init\`
- Update this file when adding new technologies or patterns
- Run \`/specswarm:init\` again to update with new detections"
OUTPUT="${OUTPUT//\[NOTES_SECTION\]/$NOTES_SECTION}"

# Write file
mkdir -p .specswarm
echo "$OUTPUT" > .specswarm/tech-stack.md

echo "✅ Created .specswarm/tech-stack.md"
```

---
### Step 6: Create .specswarm/quality-standards.md

```bash
echo "📝 Creating .specswarm/quality-standards.md..."

# Read template
TEMPLATE=$(cat plugins/specswarm/templates/quality-standards.template.md)

# Replace placeholders based on quality level
OUTPUT="$TEMPLATE"
OUTPUT="${OUTPUT//\[PROJECT_NAME\]/$PROJECT_NAME}"
OUTPUT="${OUTPUT//\[DATE\]/$(date +%Y-%m-%d)}"
OUTPUT="${OUTPUT//\[AUTO_GENERATED\]/Yes}"

# Quality thresholds
case "$QUALITY_LEVEL" in
  "Standard")
    MIN_QUALITY=80
    MIN_COVERAGE=80
    ;;
  "Strict")
    MIN_QUALITY=90
    MIN_COVERAGE=90
    ;;
  "Relaxed")
    MIN_QUALITY=70
    MIN_COVERAGE=70
    ;;
  "Custom")
    # Would be provided by user
    MIN_QUALITY="${CUSTOM_QUALITY:-80}"
    MIN_COVERAGE="${CUSTOM_COVERAGE:-80}"
    ;;
  *)
    MIN_QUALITY=80
    MIN_COVERAGE=80
    ;;
esac

OUTPUT="${OUTPUT//\[MIN_QUALITY_SCORE\]/$MIN_QUALITY}"
OUTPUT="${OUTPUT//\[MIN_TEST_COVERAGE\]/$MIN_COVERAGE}"
OUTPUT="${OUTPUT//\[ENFORCE_GATES\]/true}"

# Performance budgets
OUTPUT="${OUTPUT//\[ENFORCE_BUDGETS\]/true}"
OUTPUT="${OUTPUT//\[MAX_BUNDLE_SIZE\]/500}"
OUTPUT="${OUTPUT//\[MAX_INITIAL_LOAD\]/1000}"
OUTPUT="${OUTPUT//\[MAX_CHUNK_SIZE\]/200}"

# Code quality
OUTPUT="${OUTPUT//\[COMPLEXITY_THRESHOLD\]/10}"
OUTPUT="${OUTPUT//\[MAX_FILE_LINES\]/300}"
OUTPUT="${OUTPUT//\[MAX_FUNCTION_LINES\]/50}"
OUTPUT="${OUTPUT//\[MAX_FUNCTION_PARAMS\]/5}"

# Testing
OUTPUT="${OUTPUT//\[REQUIRE_TESTS\]/true}"

# Code review
OUTPUT="${OUTPUT//\[REQUIRE_CODE_REVIEW\]/true}"
OUTPUT="${OUTPUT//\[MIN_REVIEWERS\]/1}"

# CI/CD
OUTPUT="${OUTPUT//\[BLOCK_MERGE_ON_FAILURE\]/true}"

# Custom checks section
CUSTOM_CHECKS="### Performance Monitoring
- Monitor Core Web Vitals (LCP, FID, CLS)
- Set performance budgets in CI/CD

### Accessibility
- WCAG 2.1 Level AA compliance
- Automated a11y testing with axe-core

*Add project-specific checks here*"
OUTPUT="${OUTPUT//\[CUSTOM_CHECKS_SECTION\]/$CUSTOM_CHECKS}"

# Exemptions section
EXEMPTIONS="*No exemptions currently granted. Request exemptions via team discussion.*"
OUTPUT="${OUTPUT//\[EXEMPTIONS_SECTION\]/$EXEMPTIONS}"

# Notes section
NOTES="- Quality level: $QUALITY_LEVEL
- Created by \`/specswarm:init\`
- Enforced by \`/specswarm:ship\` before merge
- Review and adjust these standards for your team's needs"
OUTPUT="${OUTPUT//\[NOTES_SECTION\]/$NOTES}"

# Write file
echo "$OUTPUT" > .specswarm/quality-standards.md

echo "✅ Created .specswarm/quality-standards.md"
```

---
### Step 7: Summary and Next Steps
|
||||
|
||||
```bash
|
||||
echo ""
|
||||
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
|
||||
echo " ✅ PROJECT INITIALIZATION COMPLETE"
|
||||
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
|
||||
echo ""
|
||||
echo "📁 Created Configuration Files:"
|
||||
echo " ✓ .specswarm/constitution.md (governance & principles)"
|
||||
echo " ✓ .specswarm/tech-stack.md (approved technologies)"
|
||||
echo " ✓ .specswarm/quality-standards.md (quality gates)"
|
||||
echo ""
|
||||
echo "📊 Configuration Summary:"
|
||||
echo " Project: $PROJECT_NAME"
|
||||
echo " Framework: $FRAMEWORK $FRAMEWORK_VERSION"
|
||||
echo " Language: $LANGUAGE"
|
||||
echo " Quality Level: $QUALITY_LEVEL"
|
||||
echo " Min Quality: $MIN_QUALITY/100"
|
||||
echo " Min Coverage: $MIN_COVERAGE%"
|
||||
echo ""
|
||||
echo "📚 Next Steps:"
|
||||
echo ""
|
||||
echo " 1. Review the created files in .specswarm/"
|
||||
echo " 2. Customize as needed for your team"
|
||||
echo " 3. Build your first feature:"
|
||||
echo " /specswarm:build \"your feature description\""
|
||||
echo " 4. Ship when ready:"
|
||||
echo " /specswarm:ship"
|
||||
echo ""
|
||||
echo "💡 Tips:"
|
||||
echo " • Run /specswarm:suggest for workflow recommendations"
|
||||
echo " • Tech stack enforcement prevents drift across features"
|
||||
echo " • Quality gates ensure consistent code quality"
|
||||
echo ""
|
||||
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
|
||||
```

---

## Important Notes

### Auto-Detection Accuracy

The auto-detection logic parses `package.json` to identify:
- Framework (React, Vue, Angular, Next.js)
- Language (TypeScript vs JavaScript)
- Build tool (Vite, Webpack, built-in)
- State management (Zustand, Redux Toolkit, Jotai)
- Styling (Tailwind, Styled Components, Emotion)
- Testing frameworks (Vitest, Jest, Playwright, Cypress)

Detection is **best-effort**: users can always modify or override detected values.
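
As a concrete illustration, best-effort detection can be as simple as checking `package.json` for known dependency names. The `detect` helper below is hypothetical; the plugin's actual parser may be more thorough.

```bash
# Hypothetical sketch of best-effort detection: look for known dependency
# names in package.json. A sample file is created here for demonstration.
tmp=$(mktemp -d)
cat > "$tmp/package.json" <<'EOF'
{
  "dependencies": { "react": "^18.3.0", "zustand": "^4.5.0" },
  "devDependencies": { "typescript": "^5.4.0", "vite": "^5.2.0", "vitest": "^1.6.0" }
}
EOF

detect() {
  local pkg="$1/package.json"
  local framework="generic" language="JavaScript" build="unknown" tests="none"
  if grep -q '"react"' "$pkg"; then framework="React"; fi
  if grep -q '"next"' "$pkg"; then framework="Next.js"; fi
  if grep -q '"typescript"' "$pkg"; then language="TypeScript"; fi
  if grep -q '"vite"' "$pkg"; then build="Vite"; fi
  if grep -q '"vitest"' "$pkg"; then tests="Vitest"; fi
  echo "$framework / $language / $build / $tests"
}

DETECTED=$(detect "$tmp")
echo "$DETECTED"   # → React / TypeScript / Vite / Vitest
```

Note that matching on the quoted name (`"vite"`) keeps `vite` from colliding with `vitest`, which is why the sketch greps for the full JSON key rather than a substring.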

### File Conflict Handling

If configuration files already exist:
- **Update**: Merges new values with existing (preserves custom edits)
- **Backup**: Saves to `.specswarm/.backup/[timestamp]/` before recreating
- **Cancel**: Aborts initialization, keeps existing files
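
A minimal sketch of the **Backup** path, assuming the three config files named earlier (the real implementation may copy a different set of files):

```bash
# Hypothetical sketch of the Backup choice: copy existing config files
# into a timestamped directory before regenerating them.
cd "$(mktemp -d)"
mkdir -p .specswarm
echo "old constitution" > .specswarm/constitution.md
echo "old tech stack"   > .specswarm/tech-stack.md

TS=$(date +%Y%m%d-%H%M%S)
BACKUP_DIR=".specswarm/.backup/$TS"
mkdir -p "$BACKUP_DIR"
for f in constitution.md tech-stack.md quality-standards.md; do
  # Only back up files that actually exist (quality-standards.md is
  # deliberately absent in this example)
  if [ -f ".specswarm/$f" ]; then
    cp ".specswarm/$f" "$BACKUP_DIR/$f"
  fi
done

BACKED_UP=$(ls "$BACKUP_DIR" | wc -l | tr -d ' ')
echo "Backed up $BACKED_UP files to $BACKUP_DIR"
```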

### Template Customization

Templates are located at:
- `plugins/specswarm/templates/tech-stack.template.md`
- `plugins/specswarm/templates/quality-standards.template.md`

Teams can customize these templates for organization-specific standards.

### Integration with Existing Commands

Once initialized, other commands reference these files:
- `/specswarm:build` - Enforces tech stack
- `/specswarm:ship` - Enforces quality gates
- `/specswarm:analyze-quality` - Reports against standards
- `/specswarm:upgrade` - Updates tech-stack.md

---

## Example Usage

### Basic Initialization
```bash
/specswarm:init
# Interactive questions, auto-detect tech stack
```

### Minimal Setup (No Questions)
```bash
/specswarm:init --minimal
# Uses all detected values and defaults
```

### Manual Tech Stack (No Auto-Detection)
```bash
/specswarm:init --skip-detection
# Asks for all technologies manually
```

### Update Existing Configuration
```bash
/specswarm:init
# Detects existing files, offers to update
```
326
commands/metrics-export.md
Normal file
@@ -0,0 +1,326 @@
---
description: Display orchestration metrics and performance analytics across all feature sessions
args:
  - name: --session-id
    description: Show detailed metrics for a specific session
    required: false
  - name: --recent
    description: Number of recent sessions to show (default 10)
    required: false
  - name: --export
    description: Export metrics to CSV file
    required: false
---

# SpecLabs Orchestration Metrics Dashboard

```bash
echo "📊 SpecLabs Orchestration Metrics Dashboard"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""

# Parse arguments
SESSION_ID=""
RECENT_COUNT=10
EXPORT_CSV=false

while [[ $# -gt 0 ]]; do
  case "$1" in
    --session-id) SESSION_ID="$2"; shift 2 ;;
    --recent) RECENT_COUNT="$2"; shift 2 ;;
    --export) EXPORT_CSV=true; shift ;;
    *) shift ;;
  esac
done

# Define paths (use environment variable or default)
SPECSWARM_ROOT="${SPECSWARM_ROOT:-$HOME/code-projects/specswarm}"
MEMORY_DIR="${SPECSWARM_ROOT}/memory"
ORCHESTRATOR_SESSIONS="${MEMORY_DIR}/orchestrator/sessions"
FEATURE_SESSIONS="${MEMORY_DIR}/feature-orchestrator/sessions"

# Check if session directories exist
if [ ! -d "$ORCHESTRATOR_SESSIONS" ] && [ ! -d "$FEATURE_SESSIONS" ]; then
  echo "⚠️ No orchestration sessions found"
  echo ""
  echo "Sessions will be created when you run:"
  echo "  - /speclabs:orchestrate"
  echo "  - /speclabs:orchestrate-feature"
  echo ""
  exit 0
fi
```

## Metrics Overview

```bash
# Count total sessions (wc -l already prints 0 when ls matches nothing,
# so no fallback is needed)
TOTAL_ORCHESTRATE=$(ls -1 "$ORCHESTRATOR_SESSIONS"/*.json 2>/dev/null | wc -l)
TOTAL_FEATURE=$(ls -1 "$FEATURE_SESSIONS"/*.json 2>/dev/null | wc -l)
TOTAL_SESSIONS=$((TOTAL_ORCHESTRATE + TOTAL_FEATURE))

echo "### Overall Statistics"
echo ""
echo "**Total Orchestration Sessions**: $TOTAL_SESSIONS"
echo "  - Task-level orchestrations: $TOTAL_ORCHESTRATE"
echo "  - Feature-level orchestrations: $TOTAL_FEATURE"
echo ""

if [ "$TOTAL_SESSIONS" -eq 0 ]; then
  echo "No sessions to analyze. Run an orchestration command to generate metrics."
  exit 0
fi
```

## Session-Specific Metrics

```bash
if [ -n "$SESSION_ID" ]; then
  echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  echo "### Detailed Metrics: $SESSION_ID"
  echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  echo ""

  # Find session file
  SESSION_FILE=""
  if [ -f "${ORCHESTRATOR_SESSIONS}/${SESSION_ID}.json" ]; then
    SESSION_FILE="${ORCHESTRATOR_SESSIONS}/${SESSION_ID}.json"
  elif [ -f "${FEATURE_SESSIONS}/${SESSION_ID}.json" ]; then
    SESSION_FILE="${FEATURE_SESSIONS}/${SESSION_ID}.json"
  else
    echo "❌ Session not found: $SESSION_ID"
    exit 1
  fi

  # Parse session data
  SESSION_TYPE=$(jq -r '.type // "orchestrator"' "$SESSION_FILE")
  STATUS=$(jq -r '.status' "$SESSION_FILE")
  CREATED=$(jq -r '.created_at' "$SESSION_FILE")
  COMPLETED=$(jq -r '.completed_at // "In Progress"' "$SESSION_FILE")

  echo "**Session Type**: $SESSION_TYPE"
  echo "**Status**: $STATUS"
  echo "**Created**: $CREATED"
  echo "**Completed**: $COMPLETED"
  echo ""

  # Feature-specific metrics
  if [ "$SESSION_TYPE" = "feature_orchestrator" ]; then
    FEATURE_DESC=$(jq -r '.feature_description' "$SESSION_FILE")
    TOTAL_TASKS=$(jq -r '.implementation.total_count // 0' "$SESSION_FILE")
    COMPLETED_TASKS=$(jq -r '.implementation.completed_count // 0' "$SESSION_FILE")
    FAILED_TASKS=$(jq -r '.implementation.failed_count // 0' "$SESSION_FILE")
    QUALITY=$(jq -r '.metrics.quality_score // "N/A"' "$SESSION_FILE")

    echo "**Feature**: $FEATURE_DESC"
    echo "**Tasks**: $COMPLETED_TASKS / $TOTAL_TASKS completed"
    echo "**Failed**: $FAILED_TASKS"
    echo "**Quality Score**: $QUALITY"
    echo ""

    # Show phase breakdown
    echo "#### Phase Breakdown:"
    echo ""
    SPEC_DURATION=$(jq -r '.phases.specify.duration // "N/A"' "$SESSION_FILE")
    PLAN_DURATION=$(jq -r '.phases.plan.duration // "N/A"' "$SESSION_FILE")
    IMPL_DURATION=$(jq -r '.phases.implementation.duration // "N/A"' "$SESSION_FILE")

    echo "- Specify: $SPEC_DURATION"
    echo "- Plan: $PLAN_DURATION"
    echo "- Implementation: $IMPL_DURATION"
    echo ""
  else
    # Task-level orchestration metrics
    WORKFLOW=$(jq -r '.workflow_file // "N/A"' "$SESSION_FILE")
    ATTEMPTS=$(jq -r '.total_attempts // 1' "$SESSION_FILE")
    SUCCESS=$(jq -r '.result.success // false' "$SESSION_FILE")

    echo "**Workflow**: $WORKFLOW"
    echo "**Attempts**: $ATTEMPTS"
    echo "**Success**: $SUCCESS"
    echo ""
  fi

  echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  echo ""
  exit 0
fi
```

## Recent Sessions Summary

```bash
echo "### Recent Sessions (Last $RECENT_COUNT)"
echo ""

# Get recent sessions from both directories
RECENT_SESSIONS=$(
  (ls -t "$ORCHESTRATOR_SESSIONS"/*.json 2>/dev/null || true; \
   ls -t "$FEATURE_SESSIONS"/*.json 2>/dev/null || true) | \
  head -n "$RECENT_COUNT"
)

if [ -z "$RECENT_SESSIONS" ]; then
  echo "No sessions found."
  exit 0
fi

# Table header
printf "%-25s %-15s %-20s %-10s %-15s\n" "Session ID" "Type" "Status" "Tasks" "Quality"
printf "%-25s %-15s %-20s %-10s %-15s\n" "-------------------------" "---------------" "--------------------" "----------" "---------------"

# Initialize counters for aggregates
TOTAL_SUCCESS=0
TOTAL_FAILED=0
TOTAL_TASK_COUNT=0
TOTAL_COMPLETED_TASKS=0
QUALITY_SUM=0
QUALITY_COUNT=0

# Process each session
for SESSION_FILE in $RECENT_SESSIONS; do
  SESSION_ID=$(basename "$SESSION_FILE" .json)
  TYPE=$(jq -r '.type // "orchestrator"' "$SESSION_FILE")
  STATUS=$(jq -r '.status' "$SESSION_FILE")

  # Determine type label
  if [ "$TYPE" = "feature_orchestrator" ]; then
    TYPE_LABEL="feature"
    TOTAL_TASKS=$(jq -r '.implementation.total_count // 0' "$SESSION_FILE")
    COMPLETED=$(jq -r '.implementation.completed_count // 0' "$SESSION_FILE")
    TASKS_DISPLAY="$COMPLETED/$TOTAL_TASKS"
    QUALITY=$(jq -r '.metrics.quality_score // ""' "$SESSION_FILE")

    TOTAL_TASK_COUNT=$((TOTAL_TASK_COUNT + TOTAL_TASKS))
    TOTAL_COMPLETED_TASKS=$((TOTAL_COMPLETED_TASKS + COMPLETED))

    if [ -n "$QUALITY" ] && [ "$QUALITY" != "null" ] && [ "$QUALITY" != "N/A" ]; then
      QUALITY_SUM=$(echo "$QUALITY_SUM + $QUALITY" | bc 2>/dev/null || echo "$QUALITY_SUM")
      QUALITY_COUNT=$((QUALITY_COUNT + 1))
      QUALITY_DISPLAY="${QUALITY}/100"
    else
      QUALITY_DISPLAY="N/A"
    fi
  else
    TYPE_LABEL="task"
    TASKS_DISPLAY="1/1"
    QUALITY_DISPLAY="N/A"
  fi

  # Count successes/failures
  SUCCESS=$(jq -r '.result.success // false' "$SESSION_FILE")
  if [ "$SUCCESS" = "true" ]; then
    TOTAL_SUCCESS=$((TOTAL_SUCCESS + 1))
  else
    TOTAL_FAILED=$((TOTAL_FAILED + 1))
  fi

  # Truncate session ID if too long
  SHORT_ID="${SESSION_ID:0:24}"

  printf "%-25s %-15s %-20s %-10s %-15s\n" "$SHORT_ID" "$TYPE_LABEL" "$STATUS" "$TASKS_DISPLAY" "$QUALITY_DISPLAY"
done

echo ""
```

## Aggregate Metrics

```bash
echo "### Aggregate Metrics"
echo ""

# The success/failure counters above only cover the recent sessions that
# were actually processed, so compute the rate over that same set rather
# than over TOTAL_SESSIONS.
PROCESSED=$((TOTAL_SUCCESS + TOTAL_FAILED))
SUCCESS_RATE=0
if [ "$PROCESSED" -gt 0 ]; then
  SUCCESS_RATE=$(echo "scale=1; ($TOTAL_SUCCESS * 100) / $PROCESSED" | bc 2>/dev/null || echo "0")
fi

AVG_QUALITY="N/A"
if [ "$QUALITY_COUNT" -gt 0 ]; then
  AVG_QUALITY=$(echo "scale=1; $QUALITY_SUM / $QUALITY_COUNT" | bc 2>/dev/null || echo "N/A")
fi

TASK_COMPLETION_RATE="N/A"
if [ "$TOTAL_TASK_COUNT" -gt 0 ]; then
  TASK_COMPLETION_RATE=$(echo "scale=1; ($TOTAL_COMPLETED_TASKS * 100) / $TOTAL_TASK_COUNT" | bc 2>/dev/null || echo "0")
fi

echo "**Success Rate**: ${SUCCESS_RATE}% ($TOTAL_SUCCESS/$PROCESSED)"
echo "**Failed Sessions**: $TOTAL_FAILED"
echo "**Task Completion Rate**: ${TASK_COMPLETION_RATE}%"
echo "**Average Quality Score**: $AVG_QUALITY/100"
echo ""
```

## Export to CSV

```bash
if [ "$EXPORT_CSV" = "true" ]; then
  EXPORT_FILE="${MEMORY_DIR}/metrics-export-$(date +%Y%m%d-%H%M%S).csv"

  echo "📄 Exporting metrics to CSV..."

  # CSV header
  echo "session_id,type,status,total_tasks,completed_tasks,quality_score,created_at,completed_at" > "$EXPORT_FILE"

  # Export all sessions
  ALL_SESSIONS=$(
    (ls -t "$ORCHESTRATOR_SESSIONS"/*.json 2>/dev/null || true; \
     ls -t "$FEATURE_SESSIONS"/*.json 2>/dev/null || true)
  )

  for SESSION_FILE in $ALL_SESSIONS; do
    SESSION_ID=$(basename "$SESSION_FILE" .json)
    TYPE=$(jq -r '.type // "orchestrator"' "$SESSION_FILE")
    STATUS=$(jq -r '.status' "$SESSION_FILE")
    CREATED=$(jq -r '.created_at' "$SESSION_FILE")
    COMPLETED=$(jq -r '.completed_at // ""' "$SESSION_FILE")

    if [ "$TYPE" = "feature_orchestrator" ]; then
      TOTAL=$(jq -r '.implementation.total_count // 0' "$SESSION_FILE")
      COMP=$(jq -r '.implementation.completed_count // 0' "$SESSION_FILE")
      QUAL=$(jq -r '.metrics.quality_score // ""' "$SESSION_FILE")
    else
      TOTAL=1
      COMP=$(jq -r '.result.success // false' "$SESSION_FILE" | grep -q "true" && echo "1" || echo "0")
      QUAL=""
    fi

    echo "$SESSION_ID,$TYPE,$STATUS,$TOTAL,$COMP,$QUAL,$CREATED,$COMPLETED" >> "$EXPORT_FILE"
  done

  echo "✅ Metrics exported to: $EXPORT_FILE"
  echo ""
fi
```

---

## Usage Examples

```
# Show dashboard with recent sessions
/speclabs:metrics

# Show detailed metrics for a specific session
/speclabs:metrics --session-id feature-20251026-123456

# Show last 20 sessions
/speclabs:metrics --recent 20

# Export all metrics to CSV
/speclabs:metrics --export
```

---

**Purpose**: Track orchestration performance, quality metrics, and success rates across all SpecLabs workflows. Use this dashboard to identify patterns, monitor improvement, and analyze orchestration effectiveness.

**Key Metrics**:
- Success rate across all orchestrations
- Task completion rates for feature-level orchestrations
- Average quality scores
- Phase duration breakdowns
- Session history and trends
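
A dependency-free sketch of how the success-rate aggregate is derived (the dashboard itself reads `result.success` with `jq`; plain `grep` and invented file names are used here to keep the sketch self-contained):

```bash
# Hypothetical sketch: count sessions whose result.success is true and
# compute a success rate, mirroring the dashboard's aggregate logic.
SESSIONS=$(mktemp -d)
printf '{"status": "complete", "result": {"success": true}}\n' > "$SESSIONS/a.json"
printf '{"status": "failed", "result": {"success": false}}\n'  > "$SESSIONS/b.json"

TOTAL=0; OK=0
for f in "$SESSIONS"/*.json; do
  TOTAL=$((TOTAL + 1))
  if grep -q '"success": true' "$f"; then OK=$((OK + 1)); fi
done
RATE=$((OK * 100 / TOTAL))
echo "Success rate: ${RATE}% (${OK}/${TOTAL})"   # → Success rate: 50% (1/2)
```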

**Data Source**: Orchestration session data stored in `${SPECSWARM_ROOT}/memory/orchestrator/sessions/` and `${SPECSWARM_ROOT}/memory/feature-orchestrator/sessions/` (the paths defined at the top of the script)
395
commands/metrics.md
Normal file
@@ -0,0 +1,395 @@
---
name: feature-metrics
description: 'Feature-level orchestration metrics and analytics. Analyzes completed features from actual project artifacts (spec.md, tasks.md) rather than orchestration sessions. Shows completion rates, test metrics, and git history. v2.8.0: Initial release with feature detection and comprehensive analytics.'
command_type: project
---

# Feature-Level Metrics Dashboard

```bash
#!/bin/bash

# Parse arguments
PROJECT_PATH=""
RECENT_COUNT=10
EXPORT_FILE=""
FEATURE_NUMBER=""
SPRINT_FILTER=""
SHOW_DETAILS=false

while [[ $# -gt 0 ]]; do
  case $1 in
    --recent)
      RECENT_COUNT="$2"
      shift 2
      ;;
    --export)
      # Optional value: take the given filename unless the next token is
      # another flag (or missing), in which case fall back to a default.
      if [ -n "${2:-}" ] && [ "${2:0:2}" != "--" ]; then
        EXPORT_FILE="$2"
        shift 2
      else
        EXPORT_FILE="feature-metrics-$(date +%Y%m%d_%H%M%S).csv"
        shift
      fi
      ;;
    --feature)
      FEATURE_NUMBER="$2"
      shift 2
      ;;
    --sprint)
      SPRINT_FILTER="$2"
      shift 2
      ;;
    --details)
      SHOW_DETAILS=true
      shift
      ;;
    --path)
      PROJECT_PATH="$2"
      shift 2
      ;;
    *)
      if [ -z "$PROJECT_PATH" ] && [ -d "$1" ]; then
        PROJECT_PATH="$1"
      fi
      shift
      ;;
  esac
done

# Default to current directory if no path specified
PROJECT_PATH="${PROJECT_PATH:-$(pwd)}"

# Source the feature metrics collector library
SCRIPT_DIR="$(dirname "${BASH_SOURCE[0]}")"
PLUGIN_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
source "$PLUGIN_ROOT/lib/feature-metrics-collector.sh"

# Set project root for library
export PROJECT_ROOT="$PROJECT_PATH"

echo "📊 SpecLabs Feature-Level Metrics Dashboard"
echo "============================================"
echo ""
echo "Project: $PROJECT_PATH"
echo ""

# Collect all feature data
echo "🔍 Scanning for features..."
features_json=$(fm_analyze_all_features "$PROJECT_PATH")

# Check if any features found
total_features=$(echo "$features_json" | jq 'length')

if [ "$total_features" -eq 0 ]; then
  echo ""
  echo "ℹ️ No features found in $PROJECT_PATH"
  echo ""
  echo "Features are detected by the presence of spec.md files."
  echo "Make sure you're in a project directory with SpecSwarm features."
  echo ""
  echo "Searched for:"
  echo "  - */spec.md"
  echo "  - features/*/spec.md"
  echo "  - .features/*/spec.md"
  echo ""
  exit 0
fi

echo "✅ Found $total_features features"
echo ""

# Calculate aggregates
aggregates=$(fm_calculate_aggregates "$features_json")

# Display based on requested view
if [ -n "$FEATURE_NUMBER" ]; then
  #==========================================================================
  # SINGLE FEATURE DETAILS
  #==========================================================================

  echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  echo "Feature $FEATURE_NUMBER Details"
  echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  echo ""

  # Find the feature
  feature=$(echo "$features_json" | jq --arg num "$FEATURE_NUMBER" \
    '.[] | select(.metadata.feature_number == $num)')

  if [ -z "$feature" ] || [ "$feature" = "null" ]; then
    echo "❌ Feature $FEATURE_NUMBER not found"
    exit 1
  fi

  # Display metadata
  echo "📋 Metadata"
  echo "───────────"
  echo "$feature" | jq -r '"  Name: \(.metadata.feature_name)
  Status: \(.metadata.status)
  Parent Branch: \(.metadata.parent_branch)
  Created: \(.metadata.created_at)
  Completed: \(.metadata.completed_at // "N/A")
  Directory: \(.metadata.feature_dir)"'
  echo ""

  # Display task stats
  echo "✅ Tasks"
  echo "────────"
  echo "$feature" | jq -r '"  Total: \(.tasks.total)
  Completed: \(.tasks.completed) (\(.tasks.completion_rate)%)
  Failed: \(.tasks.failed)
  Pending: \(.tasks.pending)"'
  echo ""

  # Display test stats
  if [ "$(echo "$feature" | jq '.tests.total_tests')" -gt 0 ]; then
    echo "🧪 Tests"
    echo "────────"
    echo "$feature" | jq -r '"  Total: \(.tests.total_tests)
  Passing: \(.tests.passing_tests) (\(.tests.pass_rate)%)
  Failing: \(.tests.failing_tests)"'
    echo ""
  fi

  # Display git stats
  echo "🔀 Git History"
  echo "──────────────"
  echo "$feature" | jq -r '"  Branch: \(.git.branch)
  Commits: \(.git.commits)
  Merged: \(.git.merged)
  Merge Date: \(.git.merge_date // "N/A")"'
  echo ""

elif [ -n "$SPRINT_FILTER" ]; then
  #==========================================================================
  # SPRINT AGGREGATE VIEW
  #==========================================================================

  echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  echo "Sprint: $SPRINT_FILTER"
  echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  echo ""

  # Filter features for this sprint
  sprint_features=$(fm_filter_features "$features_json" "metadata.parent_branch" "$SPRINT_FILTER")
  sprint_count=$(echo "$sprint_features" | jq 'length')

  if [ "$sprint_count" -eq 0 ]; then
    echo "ℹ️ No features found for sprint: $SPRINT_FILTER"
    exit 0
  fi

  # Calculate sprint aggregates
  sprint_aggregates=$(fm_calculate_aggregates "$sprint_features")

  echo "📊 Sprint Statistics"
  echo "────────────────────"
  echo "$sprint_aggregates" | jq -r '"  Total Features: \(.features.total)
  Completed: \(.features.completed)
  In Progress: \(.features.in_progress)

  Total Tasks: \(.tasks.total)
  Completed: \(.tasks.completed)
  Failed: \(.tasks.failed)
  Avg Completion Rate: \(.tasks.avg_completion_rate)%

  Total Tests: \(.tests.total)
  Passing: \(.tests.passing) (\(.tests.avg_pass_rate)%)
  Failing: \(.tests.failing)"'
  echo ""

  echo "📝 Features in $SPRINT_FILTER"
  echo "─────────────────────────────"
  echo "$sprint_features" | jq -r '.[] | "  [\(.metadata.feature_number)] \(.metadata.feature_name)
    Status: \(.metadata.status) | Tasks: \(.tasks.completed)/\(.tasks.total) | Tests: \(.tests.passing_tests)/\(.tests.total_tests)\n"'
  echo ""

else
  #==========================================================================
  # DASHBOARD SUMMARY VIEW
  #==========================================================================

  echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  echo "Overall Statistics"
  echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  echo ""

  echo "📊 Features"
  echo "───────────"
  echo "$aggregates" | jq -r '"  Total: \(.features.total)
  Completed: \(.features.completed)
  In Progress: \(.features.in_progress)"'
  echo ""

  echo "✅ Tasks"
  echo "────────"
  echo "$aggregates" | jq -r '"  Total: \(.tasks.total)
  Completed: \(.tasks.completed)
  Failed: \(.tasks.failed)
  Avg Completion Rate: \(.tasks.avg_completion_rate)%"'
  echo ""

  echo "🧪 Tests"
  echo "────────"
  if [ "$(echo "$aggregates" | jq '.tests.total')" -gt 0 ]; then
    echo "$aggregates" | jq -r '"  Total: \(.tests.total)
  Passing: \(.tests.passing) (\(.tests.avg_pass_rate)%)
  Failing: \(.tests.failing)"'
  else
    echo "  No test data found"
  fi
  echo ""

  echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  echo "Recent Features (Last $RECENT_COUNT)"
  echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  echo ""

  # Get recent features
  recent_features=$(fm_get_recent "$features_json" "$RECENT_COUNT")

  # Display recent features table
  echo "$recent_features" | jq -r '.[] |
    "[\(.metadata.feature_number)] \(.metadata.feature_name)
    Status: \(.metadata.status) | Parent: \(.metadata.parent_branch)
    Tasks: \(.tasks.completed)/\(.tasks.total) (\(.tasks.completion_rate)%) | Tests: \(.tests.passing_tests)/\(.tests.total_tests) (\(.tests.pass_rate)%)
    Created: \(.metadata.created_at)
  "'

  # Sprint breakdown
  echo ""
  echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  echo "Features by Sprint/Parent Branch"
  echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  echo ""

  # Group by parent branch
  echo "$features_json" | jq -r 'group_by(.metadata.parent_branch) |
    .[] |
    "[\(.[0].metadata.parent_branch)]
    Features: \(length)
    Tasks Completed: \([.[].tasks.completed] | add)/\([.[].tasks.total] | add)
  "'

fi

# Export to CSV if requested
if [ -n "$EXPORT_FILE" ]; then
  echo ""
  echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  echo "Exporting to CSV"
  echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  echo ""

  exported_file=$(fm_export_csv "$features_json" "$EXPORT_FILE")
  echo "✅ Metrics exported to: $exported_file"
  echo ""
  echo "Total rows: $((total_features + 1))"  # +1 for the CSV header
  echo ""
fi

# Help message for next steps
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "Available Commands"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
echo "  /speclabs:feature-metrics                   Dashboard summary"
echo "  /speclabs:feature-metrics --recent 20       Show last 20 features"
echo "  /speclabs:feature-metrics --feature 015     Feature 015 details"
echo "  /speclabs:feature-metrics --sprint sprint-4 Sprint aggregates"
echo "  /speclabs:feature-metrics --export          Export to CSV"
echo "  /speclabs:feature-metrics --path /project   Analyze specific project"
echo ""
```

This command provides comprehensive feature-level metrics by analyzing actual project artifacts instead of orchestration sessions.

## What It Tracks

**Feature Detection**:
- Scans for spec.md files to identify features
- Works with any feature directory structure
- No orchestration session required

**Metrics Collected**:

1. **Feature Metadata** (from spec.md YAML):
   - Feature number, name, status
   - Parent branch, created/completed dates
   - Directory location

2. **Task Statistics** (from tasks.md):
   - Total, completed, failed, pending tasks
   - Completion rate percentage
   - Status markers (✅ COMPLETED, ❌ FAILED)

3. **Test Metrics** (from validation/testing docs):
   - Total tests, passing, failing
   - Pass rate percentage
   - Extracted from validation summaries

4. **Git History**:
   - Branch information
   - Commit counts
   - Merge status and dates
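
A minimal sketch of the tasks.md parsing described above, with the marker conventions taken from this document (the real collector library may count differently):

```bash
# Hypothetical sketch: derive task statistics from tasks.md status markers.
WORK=$(mktemp -d)
cat > "$WORK/tasks.md" <<'EOF'
- [x] T001 Scaffold project ✅ COMPLETED
- [x] T002 Add unit tests ✅ COMPLETED
- [ ] T003 Fix flaky test ❌ FAILED
- [ ] T004 Write docs
EOF

TOTAL=$(grep -c '^- \[' "$WORK/tasks.md")
COMPLETED=$(grep -c '✅ COMPLETED' "$WORK/tasks.md")
FAILED=$(grep -c '❌ FAILED' "$WORK/tasks.md")
PENDING=$((TOTAL - COMPLETED - FAILED))
RATE=$((COMPLETED * 100 / TOTAL))
echo "$COMPLETED/$TOTAL completed (${RATE}%), $FAILED failed, $PENDING pending"
```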

## Usage Examples

### Dashboard Summary
```bash
/speclabs:feature-metrics
```
Shows overall statistics and recent features.

### Feature Details
```bash
/speclabs:feature-metrics --feature 015
```
Complete metrics for Feature 015.

### Sprint Analysis
```bash
/speclabs:feature-metrics --sprint sprint-4
```
Aggregated metrics for all features in sprint-4.

### Export to CSV
```bash
/speclabs:feature-metrics --export
/speclabs:feature-metrics --export metrics-2025-11.csv
```

### Analyze Different Project
```bash
/speclabs:feature-metrics --path /home/user/projects/myapp
```

## Key Features

**No Session Required**: Analyzes actual feature artifacts, works with v2.6.1+ features that use `/specswarm:implement`

**Sprint Tracking**: Group features by parent branch for sprint-level analytics

**Export Capabilities**: CSV export for spreadsheet analysis

**Git Integration**: Tracks merge status and commit history

**Comprehensive**: Combines metadata, tasks, tests, and git data in one view

## Difference from `/speclabs:metrics`

| Feature | /speclabs:metrics | /speclabs:feature-metrics |
|---------|-------------------|---------------------------|
| Data Source | Orchestration sessions | Project artifacts (spec.md, tasks.md) |
| Workflow | Pre-v2.6.1 (per-task orchestration) | v2.6.1+ (specswarm implement) |
| Use Case | Task-level automation tracking | Feature-level completion analytics |
| Requires Session | Yes | No |

## Feature 015 Example

Feature 015 (Testing Infrastructure) metrics would show:
- 76 total tasks
- 76 completed tasks (100%)
- 136 total tests
- 131 passing tests (96.3%)
- Parent branch: sprint-4
- Status: Complete
- Git: Merged to sprint-4

This data comes from reading Feature 015's actual files, not orchestration sessions.
1032
commands/modify.md
Normal file
File diff suppressed because it is too large

486
commands/orchestrate-feature.md
Normal file
@@ -0,0 +1,486 @@
---
description: Orchestrate complete feature lifecycle from specification to implementation using autonomous agent
args:
  - name: feature_description
    description: Natural language description of the feature to build
    required: true
  - name: project_path
    description: Path to the target project (defaults to current working directory)
    required: false
  - name: --skip-specify
    description: Skip the specify phase (spec.md already exists)
    required: false
  - name: --skip-clarify
    description: Skip the clarify phase
    required: false
  - name: --skip-plan
    description: Skip the plan phase (plan.md already exists)
    required: false
  - name: --max-retries
    description: Maximum retries per task (default 3)
    required: false
  - name: --audit
    description: Run comprehensive code audit phase after implementation (compatibility, security, best practices)
    required: false
  - name: --validate
    description: Run AI-powered interaction flow validation with Playwright (analyzes feature artifacts, generates intelligent test flows, executes user-defined + AI flows, monitors browser console + terminal, auto-fixes errors, kills dev server when done)
    required: false
pre_orchestration_hook: |
  #!/bin/bash

  echo "🎯 Feature Orchestrator v2.7.3 - Truly Silent Autonomous Execution"
  echo ""
  echo "This orchestrator launches an autonomous agent that handles:"
  echo "  1. SpecSwarm Planning: specify → clarify → plan → tasks"
  echo "  2. SpecLabs Execution: automatically execute all tasks"
  echo "  3. Intelligent Bugfix: Auto-fix failures with /specswarm:bugfix"
  echo "  4. Code Audit: Comprehensive quality validation (if --audit)"
  echo "  5. Completion Report: Full summary with next steps"
  echo ""

  # Parse arguments
  FEATURE_DESC="$1"
  shift

  # Check if next arg is a path (doesn't start with --)
  if [ -n "$1" ] && [ "${1:0:2}" != "--" ]; then
    PROJECT_PATH="$1"
    shift
  else
    PROJECT_PATH="$(pwd)"
  fi

  SKIP_SPECIFY=false
  SKIP_CLARIFY=false
  SKIP_PLAN=false
  MAX_RETRIES=3
  RUN_AUDIT=false
  RUN_VALIDATE=false

  while [[ $# -gt 0 ]]; do
    case "$1" in
      --skip-specify) SKIP_SPECIFY=true; shift ;;
      --skip-clarify) SKIP_CLARIFY=true; shift ;;
      --skip-plan) SKIP_PLAN=true; shift ;;
      --max-retries) MAX_RETRIES="$2"; shift 2 ;;
      --audit) RUN_AUDIT=true; shift ;;
      --validate) RUN_VALIDATE=true; shift ;;
      *) shift ;;
    esac
  done

  # Validate project path
  if [ ! -d "$PROJECT_PATH" ]; then
    echo "❌ Error: Project path does not exist: $PROJECT_PATH"
    echo "   (Tip: Provide an explicit path or run from your project directory)"
    exit 1
  fi

  echo "📁 Project: $PROJECT_PATH"

  # Source orchestration library (avoid hard-coding a user home directory)
  PLUGIN_DIR="${SPECSWARM_ROOT:-$HOME/code-projects/specswarm}/plugins/speclabs"
  source "${PLUGIN_DIR}/lib/feature-orchestrator.sh"

  # Initialize orchestrator
  feature_init

  # Create feature session
  echo "📝 Creating feature orchestration session..."
  SESSION_ID=$(feature_create_session "$FEATURE_DESC" "$PROJECT_PATH")
  echo "✅ Feature Session: $SESSION_ID"
  echo ""

  # Export for agent
  export FEATURE_SESSION_ID="$SESSION_ID"
  export FEATURE_DESC="$FEATURE_DESC"
  export PROJECT_PATH="$PROJECT_PATH"
  export SKIP_SPECIFY="$SKIP_SPECIFY"
  export SKIP_CLARIFY="$SKIP_CLARIFY"
  export SKIP_PLAN="$SKIP_PLAN"
  export MAX_RETRIES="$MAX_RETRIES"
  export RUN_AUDIT="$RUN_AUDIT"
  export RUN_VALIDATE="$RUN_VALIDATE"
  export PLUGIN_DIR="$PLUGIN_DIR"

  echo "🚀 Launching orchestration agent for: $FEATURE_DESC"
  echo ""
---
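
For reference, the hook's flag handling can be exercised outside the plugin. A minimal sketch, using a hypothetical `parse_orchestrate_args` helper (not part of the plugin):

```shell
# Hypothetical stand-alone reproduction of the hook's argument handling
parse_orchestrate_args() {
  FEATURE_DESC="$1"; shift
  # Optional second positional arg: project path (anything not starting with --)
  case "${1:-}" in
    ""|--*) PROJECT_PATH="$(pwd)" ;;
    *)      PROJECT_PATH="$1"; shift ;;
  esac
  MAX_RETRIES=3
  RUN_AUDIT=false
  RUN_VALIDATE=false
  while [ $# -gt 0 ]; do
    case "$1" in
      --max-retries) MAX_RETRIES="$2"; shift 2 ;;
      --audit)       RUN_AUDIT=true;    shift ;;
      --validate)    RUN_VALIDATE=true; shift ;;
      *)             shift ;;   # ignore unrecognized flags
    esac
  done
}

parse_orchestrate_args "Add dark mode" /tmp --validate --max-retries 2
echo "$FEATURE_DESC -> $PROJECT_PATH (retries=$MAX_RETRIES, validate=$RUN_VALIDATE, audit=$RUN_AUDIT)"
```

Note the feature description must come first; the path is recognized purely by not starting with `--`.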

⚠️ CRITICAL: EXECUTE IMMEDIATELY - DO NOT PAUSE OR REPORT STATUS

**DIRECTIVE**: Launch the autonomous orchestration agent NOW using the Task tool.

DO NOT:
- Acknowledge or explain this command
- Report status before execution
- Pause between phases
- Wait for user confirmation

DO:
- Execute the Task tool immediately with subagent_type="general-purpose"
- Let the agent run through ALL phases autonomously
- Return ONLY when complete with final report

═══════════════════════════════════════════════════════════════

**Orchestration Configuration**:
- **Feature**: ${FEATURE_DESC}
- **Project**: ${PROJECT_PATH}
- **Session ID**: ${FEATURE_SESSION_ID}
- **Audit**: ${RUN_AUDIT}
- **Validate**: ${RUN_VALIDATE}
- **Skip Phases**: Specify=${SKIP_SPECIFY}, Clarify=${SKIP_CLARIFY}, Plan=${SKIP_PLAN}

When ${RUN_VALIDATE} = true, AI-powered flow validation (Phase 2.5) is enabled: the agent analyzes feature artifacts (spec/plan/tasks), generates intelligent interaction flows, merges them with user-defined flows, executes comprehensive validation with Playwright, and auto-fixes errors before manual testing. The dev server is stopped before control returns to the user.

═══════════════════════════════════════════════════════════════
🎯 AUTONOMOUS AGENT INSTRUCTIONS (Execute via Task Tool NOW)
═══════════════════════════════════════════════════════════════

**Agent Mission**: Execute the complete feature development lifecycle for "${FEATURE_DESC}" in ${PROJECT_PATH}

**Agent Instructions for Task Tool**:

═══════════════════════════════════════════════════════════════
🎯 FEATURE ORCHESTRATION AGENT - AUTONOMOUS EXECUTION
═══════════════════════════════════════════════════════════════

You are an autonomous feature orchestration agent. Your mission is to execute the complete feature development lifecycle from specification to implementation without manual intervention.

**MISSION**: Implement "${FEATURE_DESC}" in ${PROJECT_PATH}

**SESSION TRACKING**: ${FEATURE_SESSION_ID}

**CONFIGURATION**:
- Skip Specify: ${SKIP_SPECIFY}
- Skip Clarify: ${SKIP_CLARIFY}
- Skip Plan: ${SKIP_PLAN}
- Max Retries: ${MAX_RETRIES}
- Run Audit: ${RUN_AUDIT}
- Run Validate: ${RUN_VALIDATE}

═══════════════════════════════════════════════════════════════
📋 WORKFLOW - EXECUTE IN ORDER
═══════════════════════════════════════════════════════════════

## PHASE 1: PLANNING (Automatic)

### Step 1.1: Specification
IF ${SKIP_SPECIFY} = false:
- Use the SlashCommand tool to execute: `/specswarm:specify "${FEATURE_DESC}"`
- Wait for completion
- Verify spec.md was created in the features/ directory
- Update session: feature_complete_specswarm_phase "${FEATURE_SESSION_ID}" "specify"
ELSE:
- Skip this step (spec.md already exists)

### Step 1.2: Clarification
IF ${SKIP_CLARIFY} = false:
- Use the SlashCommand tool to execute: `/specswarm:clarify`
- Answer any clarification questions if prompted
- Wait for completion
- Update session: feature_complete_specswarm_phase "${FEATURE_SESSION_ID}" "clarify"
ELSE:
- Skip this step

### Step 1.3: Planning
IF ${SKIP_PLAN} = false:
- Use the SlashCommand tool to execute: `/specswarm:plan`
- Wait for plan.md generation
- Review the plan for implementation phases
- Update session: feature_complete_specswarm_phase "${FEATURE_SESSION_ID}" "plan"
ELSE:
- Skip this step (plan.md already exists)

### Step 1.4: Task Generation
- Use the SlashCommand tool to execute: `/specswarm:tasks`
- Wait for tasks.md generation
- Update session: feature_complete_specswarm_phase "${FEATURE_SESSION_ID}" "tasks"

### Step 1.5: Parse Tasks (Silent)
- Use the Read tool to read ${PROJECT_PATH}/features/*/tasks.md
- Count total tasks (look for task IDs like T001, T002, etc.)
- Extract the task list
- Store the total as ${total_tasks}
- DO NOT report the task count
- Silently proceed to Phase 2
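
The task count in Step 1.5 can be sketched as a one-liner, assuming tasks.md uses three-digit IDs like T001 (the sample file below is hypothetical):

```shell
# Count unique task IDs (T001, T002, ...) in a tasks.md file
tasks_file=$(mktemp)
cat > "$tasks_file" <<'EOF'
- [ ] T001 Create data model
- [ ] T002 Add API route (depends on T001)
EOF
total_tasks=$(grep -oE 'T[0-9]{3}' "$tasks_file" | sort -u | wc -l)
echo "total_tasks=$total_tasks"
rm -f "$tasks_file"
```

`sort -u` keeps cross-references like "depends on T001" from inflating the count.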

═══════════════════════════════════════════════════════════════
🔨 PHASE 2: IMPLEMENTATION (SpecSwarm Implement)
═══════════════════════════════════════════════════════════════

### Step 2.1: Execute All Tasks with SpecSwarm

⚠️ CRITICAL: Execute the slash command WITHOUT explaining or reporting

- Update session: feature_start_implementation "${FEATURE_SESSION_ID}"
- Execute SlashCommand: `/specswarm:implement`
- DO NOT explain what implement will do
- DO NOT report "SpecSwarm will..."
- DO NOT describe the process
- WAIT SILENTLY for implement to complete and return results
- Once results are returned, THEN parse them in Step 2.2

### Step 2.2: Parse Implementation Results (Silent)

⚠️ DO NOT REPORT - Only parse for decision-making

- Use the Read tool to read ${PROJECT_PATH}/features/*/tasks.md
- Parse task completion status from tasks.md:
  - Look for task status markers (✅ completed, ❌ failed, ⏳ in progress)
  - Count completed tasks
  - Count failed tasks
  - Extract error messages for failed tasks
- Store counts in variables (${completed}, ${failed}, ${total})
- DO NOT report statistics to the user
- DO NOT display task counts
- Silently proceed to Step 2.3
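
The marker tally in Step 2.2 might be computed with `grep -c`; a sketch against a hypothetical tasks.md:

```shell
# Tally completion markers (✅ / ❌) from a sample tasks.md
tasks_file=$(mktemp)
printf '%s\n' '✅ T001 Create data model' '✅ T002 Add API route' '❌ T003 Wire up UI' > "$tasks_file"
completed=$(grep -c '✅' "$tasks_file")
failed=$(grep -c '❌' "$tasks_file")
rm -f "$tasks_file"
echo "completed=$completed failed=$failed"
```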

### Step 2.3: Update Session (Silent)
- Update session: feature_complete_implementation "${FEATURE_SESSION_ID}" "${completed}" "${failed}"
- Determine next phase based on results:
  - If failed > 0: Proceed to Phase 3 (Bugfix)
  - If ${RUN_VALIDATE} = true: Proceed to Phase 2.5 (Validation)
  - Otherwise: Proceed to Phase 5 (Completion Report)
- DO NOT report to user
- Silently continue to next phase
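
The branching in Step 2.3 amounts to a small decision function; a sketch using a hypothetical `next_phase` helper:

```shell
# Pick the next phase from implementation results, mirroring Step 2.3:
# failures take priority over validation, validation over the final report.
next_phase() {
  failed="$1"; run_validate="$2"
  if [ "$failed" -gt 0 ]; then
    echo "phase-3-bugfix"
  elif [ "$run_validate" = "true" ]; then
    echo "phase-2.5-validation"
  else
    echo "phase-5-report"
  fi
}

next_phase 2 true
next_phase 0 true
next_phase 0 false
```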

═══════════════════════════════════════════════════════════════
🔍 PHASE 2.5: INTERACTIVE ERROR DETECTION (Conditional - If ${RUN_VALIDATE}=true)
═══════════════════════════════════════════════════════════════

IF ${RUN_VALIDATE} = true:

### Step 2.5.1: Initialize Validation Phase (Silent)
- Update session: feature_start_validation "${FEATURE_SESSION_ID}"
- DO NOT report to user
- Silently proceed to Step 2.5.2

### Step 2.5.2: Delegate to the Standalone Validator

⚠️ CRITICAL: Execute validation WITHOUT explaining or reporting

- Execute SlashCommand:

```bash
/speclabs:validate-feature ${PROJECT_PATH} --session-id ${FEATURE_SESSION_ID}
```

- DO NOT report "Detected project type..."
- DO NOT explain what the validation orchestrator will do
- DO NOT describe the validation process
- WAIT SILENTLY for validation to complete and return results
- Once results are returned, THEN parse them in Step 2.5.3

### Step 2.5.3: Parse Validation Results from Session (Silent)

⚠️ DO NOT REPORT - Only parse for Phase 5

- Use the Bash tool to read validation results:

```bash
source ${PLUGIN_DIR}/lib/feature-orchestrator.sh
SESSION_FILE="${FEATURE_SESSION_DIR}/${FEATURE_SESSION_ID}.json"

validation_status=$(jq -r '.validation.status' "$SESSION_FILE")
validation_type=$(jq -r '.validation.type' "$SESSION_FILE")
total_flows=$(jq -r '.validation.summary.total_flows' "$SESSION_FILE")
passed_flows=$(jq -r '.validation.summary.passed_flows' "$SESSION_FILE")
failed_flows=$(jq -r '.validation.summary.failed_flows' "$SESSION_FILE")
error_count=$(jq -r '.validation.error_count' "$SESSION_FILE")
```

- Store the validation results in variables
- DO NOT display validation results
- DO NOT report status to the user
- Results will be included in the Phase 5 final report
- Silently proceed to the next phase (Phase 3 if bugs, else Phase 5)

ELSE:
- Skip interactive error detection (--validate not specified)
- DO NOT report to user
- Silently proceed to next phase

═══════════════════════════════════════════════════════════════
🔧 PHASE 2.5.1: WEBAPP VALIDATOR FEATURES (Informational)
═══════════════════════════════════════════════════════════════

The standalone /speclabs:validate-feature command provides:

**Automatic Project Type Detection:**
- Webapp: React, Vite, Next.js, React Router apps
- Android: AndroidManifest.xml projects (validator planned for v2.7.1)
- REST API: OpenAPI/Swagger specs (validator planned for v2.7.2)
- Desktop GUI: Electron apps (validator planned for v2.7.3)

**Intelligent Flow Generation (Webapp v2.7.0):**
- AI-Powered: Analyzes spec.md, plan.md, tasks.md to generate context-aware flows
- Feature Type Detection: Identifies shopping_cart, social_feed, auth, forms, CRUD patterns
- User-Defined: Parses YAML frontmatter from spec.md for custom flows
- Smart Merging: Combines user + AI flows with deduplication
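
The "Smart Merging" step can be approximated with an order-preserving dedup; a sketch over flow names only (real flows carry more structure than a name):

```shell
# Merge user-defined flows with AI-generated flows; first occurrence wins,
# so user-defined flows take priority over AI duplicates.
user_flows='login
checkout'
ai_flows='checkout
browse'
merged=$(printf '%s\n%s\n' "$user_flows" "$ai_flows" | awk '!seen[$0]++')
echo "$merged"
```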

**Interactive Error Detection (Webapp v2.7.0):**
- Playwright browser automation with Chromium
- Real-time console/exception monitoring during interactions
- Terminal output monitoring for compilation errors
- Auto-fix retry loop (up to 3 attempts)
- Development server lifecycle management (auto start + guaranteed cleanup)

**Standardized Results:**
- JSON output matching the validator interface
- Rich metadata: duration, retry attempts, flow counts
- Artifacts: screenshots, logs, detailed reports
- Automatic session integration

═══════════════════════════════════════════════════════════════
🔧 PHASE 3: BUGFIX (Conditional - If Tasks Failed)
═══════════════════════════════════════════════════════════════

IF ${failed} > 0:

### Step 3.1: Execute Bugfix
- Update session: feature_start_bugfix "${FEATURE_SESSION_ID}"
- Use the SlashCommand tool to execute: `/specswarm:bugfix`
- Wait for bugfix completion
- Review the bugfix results

### Step 3.2: Re-Verify Failed Tasks
- Check whether previously failed tasks are now fixed
- Update success/failure counts
- Update session: feature_complete_bugfix "${FEATURE_SESSION_ID}" "${fixed_count}"

═══════════════════════════════════════════════════════════════
🔍 PHASE 4: AUDIT (Conditional - If ${RUN_AUDIT}=true)
═══════════════════════════════════════════════════════════════

IF ${RUN_AUDIT} = true:

### Step 4.1: Initialize Audit
- Create the audit directory: ${PROJECT_PATH}/.speclabs/audit/
- Update session: feature_start_audit "${FEATURE_SESSION_ID}"
- Prepare the audit report file

### Step 4.2: Run Audit Checks

**Compatibility Audit**:
- Check for deprecated patterns
- Verify language version compatibility
- Check library compatibility

**Security Audit**:
- Scan for hardcoded secrets
- Check for SQL injection vulnerabilities
- Verify XSS prevention
- Look for dangerous functions (eval, exec, etc.)
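
The hardcoded-secret scan can start from a simple pattern grep. A first-pass sketch only, with a hypothetical sample file (real audits need more patterns and entropy checks):

```shell
# First-pass scan for hardcoded secrets in a source tree
src_dir=$(mktemp -d)
echo 'const API_KEY = "sk-test-12345";' > "$src_dir/config.js"
hits=$(grep -rnE '(API_KEY|SECRET|PASSWORD)[[:space:]]*=' "$src_dir" | wc -l)
echo "hits=$hits"
rm -rf "$src_dir"
```

`grep -rn` reports file and line number, which feeds directly into the audit report in Step 4.4.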

**Best Practices Audit**:
- Check for TODO/FIXME comments
- Verify error handling
- Check for debug logging in production
- Verify code organization

### Step 4.3: Calculate Quality Score
- Count warnings and errors across all checks
- Calculate score: 100 - (warnings + errors*2)
- Minimum score: 0
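
The scoring rule in Step 4.3 can be written as a small helper:

```shell
# Quality score: 100 - (warnings + errors*2), clamped at 0
quality_score() {
  score=$((100 - ($1 + $2 * 2)))
  [ "$score" -lt 0 ] && score=0
  echo "$score"
}

quality_score 10 5    # 100 - (10 + 10) = 80
quality_score 90 20   # would be -30, clamps to 0
```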

### Step 4.4: Generate Audit Report
- Create a comprehensive markdown report
- Include all findings with file locations and line numbers
- Add the quality score
- Save to: ${PROJECT_PATH}/.speclabs/audit/audit-report-${DATE}.md
- Update session: feature_complete_audit "${FEATURE_SESSION_ID}" "${AUDIT_REPORT_PATH}" "${QUALITY_SCORE}"

═══════════════════════════════════════════════════════════════
📊 PHASE 5: COMPLETION REPORT
═══════════════════════════════════════════════════════════════

### Step 5.1: Generate Final Report

Create a comprehensive completion report with:

**Planning Artifacts**:
- ✅ Specification: ${SPEC_FILE_PATH}
- ✅ Plan: ${PLAN_FILE_PATH}
- ✅ Tasks: ${TASKS_FILE_PATH}

**Implementation Results**:
- ✅ Total Tasks: ${total}
- ✅ Completed Successfully: ${completed}
- ❌ Failed: ${failed}
- ⚠️ Fixed in Bugfix: ${fixed} (if bugfix ran)

**Quality Assurance**:
- Bugfix Phase: ${RAN_BUGFIX ? "✅ Executed" : "⏭️ Skipped (no failures)"}
- Audit Phase: ${RUN_AUDIT ? "✅ Executed (Score: ${QUALITY_SCORE}/100)" : "⏭️ Skipped (--audit not specified)"}
- Audit Report: ${AUDIT_REPORT_PATH} (if audit ran)

**Session Information**:
- Session ID: ${FEATURE_SESSION_ID}
- Session File: .specswarm/feature-orchestrator/sessions/${FEATURE_SESSION_ID}.json
- Feature Branch: ${BRANCH_NAME}

**Next Steps**:
1. Review implementation changes: `git diff`
2. Test manually: Run the application and verify the feature works
3. Complete the feature: Run `/specswarm:complete` to finalize with the git workflow

### Step 5.2: Update Session Status
- Update session: feature_complete "${FEATURE_SESSION_ID}" "true" "Orchestration complete"
- Mark the session as ready for completion

### Step 5.3: Return Report
Return the complete report to the main Claude instance.

═══════════════════════════════════════════════════════════════
⚠️ ERROR HANDLING
═══════════════════════════════════════════════════════════════

**If Any Phase Fails**:
1. Document the failure point clearly
2. Include full error messages in the report
3. Recommend manual intervention steps
4. Update the session status to "failed"
5. DO NOT continue to the next phase
6. Return an error report immediately

**Retry Logic**:
- Individual task failures: Continue to the next task (bugfix will handle them)
- Planning phase failures: Stop immediately (cannot proceed without a plan)
- Bugfix failures: Note in the report but continue to audit
- Audit failures: Note in the report but continue to completion

═══════════════════════════════════════════════════════════════
✅ SUCCESS CRITERIA
═══════════════════════════════════════════════════════════════

Orchestration is successful when:
- ✅ All planning phases complete (or are skipped via --skip flags)
- ✅ All tasks executed (success/failure counts tracked)
- ✅ Bugfix ran if needed
- ✅ Audit completed if the --audit flag was set
- ✅ A comprehensive final report was generated
- ✅ The session tracking file was created and updated
- ✅ The user receives clear next steps

═══════════════════════════════════════════════════════════════
🚀 BEGIN ORCHESTRATION - EXECUTE NOW
═══════════════════════════════════════════════════════════════

⚠️ CRITICAL EXECUTION DIRECTIVE:

You MUST execute the Task tool with the above instructions IMMEDIATELY.

DO NOT:
- Explain what you're about to do
- Summarize the workflow
- Report status before launching
- Ask for confirmation
- Pause to think

DO:
- Launch the Task tool RIGHT NOW with subagent_type="general-purpose"
- Use the complete agent instructions above as the prompt
- Let the autonomous agent execute all phases end-to-end
- The agent will report back when complete

═══════════════════════════════════════════════════════════════

**EXECUTE THE TASK TOOL NOW**

413
commands/orchestrate-validate.md
Normal file
@@ -0,0 +1,413 @@
---
description: Run validation suite on target project (browser, terminal, visual analysis)
---

<!--
ATTRIBUTION:
Project Orchestrator Plugin
by Marty Bonacci & Claude Code (2025)
Based on learnings from Test 4A and the Test Orchestrator Agent concept
-->

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Goal

Run comprehensive validation on a target project using:
1. **Browser Automation** (Playwright) - Navigate the app, capture errors
2. **Visual Analysis** (Claude Vision API) - Analyze screenshots for UI issues
3. **Terminal Monitoring** - Check dev server output for errors

**Usage**: `/speclabs:orchestrate-validate <project-path> [url]`

**Examples**:
```bash
# Validate project at a specific path
/speclabs:orchestrate-validate /home/marty/code-projects/tweeter-spectest

# Validate with a custom URL
/speclabs:orchestrate-validate /home/marty/code-projects/tweeter-spectest http://localhost:5173
```

---

## Pre-Validation Hook

```bash
echo "🎯 Project Orchestrator - Validation Suite"
echo ""
echo "Phase 0: Research & De-Risk"
echo "Testing: Browser automation + Visual analysis"
echo ""

# Record start time
VALIDATION_START_TIME=$(date +%s)
```

---

## Execution Steps

### Step 1: Parse Arguments

```bash
# Get arguments
ARGS="$ARGUMENTS"

# Parse project path (required)
PROJECT_PATH=$(echo "$ARGS" | awk '{print $1}')

# Parse URL (optional, defaults to http://localhost:5173)
URL=$(echo "$ARGS" | awk '{print $2}')
if [ -z "$URL" ]; then
  URL="http://localhost:5173"
fi

# Validate project path
if [ -z "$PROJECT_PATH" ]; then
  echo "❌ Error: Project path required"
  echo ""
  echo "Usage: /speclabs:orchestrate-validate <project-path> [url]"
  echo ""
  echo "Example:"
  echo "/speclabs:orchestrate-validate /home/marty/code-projects/tweeter-spectest"
  exit 1
fi

if [ ! -d "$PROJECT_PATH" ]; then
  echo "❌ Error: Project path does not exist: $PROJECT_PATH"
  exit 1
fi

echo "📁 Project: $PROJECT_PATH"
echo "🌐 URL: $URL"
echo ""
```

---

### Step 2: Check If the Dev Server Is Running

```bash
echo "🔍 Checking if dev server is running..."
echo ""

# Try to connect to the URL
HTTP_STATUS=$(curl -s -o /dev/null -w "%{http_code}" "$URL" 2>/dev/null || echo "000")

if [ "$HTTP_STATUS" = "000" ]; then
  echo "⚠️  Dev server not running at $URL"
  echo ""
  echo "Please start the dev server first:"
  echo "  cd $PROJECT_PATH"
  echo "  npm run dev"
  echo ""
  exit 1
fi

echo "✅ Dev server responding (HTTP $HTTP_STATUS)"
echo ""
```

---

### Step 3: Browser Validation (Playwright)

**NOTE**: For Phase 0, we'll create a simple Node.js script to run Playwright validation.

First, check whether Playwright is available in the project:

```bash
echo "📱 Running Browser Validation..."
echo ""

# Check if Playwright is installed in the project
cd "$PROJECT_PATH"

if [ ! -d "node_modules/playwright" ]; then
  echo "⚠️  Playwright not installed in project"
  echo ""
  echo "Installing Playwright..."
  npm install --save-dev playwright

  if [ $? -ne 0 ]; then
    echo "❌ Failed to install Playwright"
    exit 1
  fi

  echo "✅ Playwright installed"
  echo ""
fi
```

Now create and run the validation script:

```typescript
// Create temporary validation script
const validationScript = `
const { chromium } = require('playwright');

(async () => {
  console.log('🚀 Launching browser...');

  const browser = await chromium.launch({ headless: true });
  const page = await browser.newPage();

  // Track console errors
  const consoleErrors = [];
  page.on('console', msg => {
    if (msg.type() === 'error') {
      consoleErrors.push(msg.text());
    }
  });

  // Track network errors
  const networkErrors = [];
  page.on('response', response => {
    if (response.status() >= 400) {
      networkErrors.push({
        url: response.url(),
        status: response.status()
      });
    }
  });

  try {
    // Navigate to app
    console.log('🌐 Navigating to ${URL}...');
    await page.goto('${URL}', { waitUntil: 'networkidle', timeout: 30000 });

    // Wait for any dynamic content
    await page.waitForTimeout(2000);

    // Capture screenshot
    console.log('📸 Capturing screenshot...');
    await page.screenshot({
      path: '/tmp/orchestrator-validation-screenshot.png',
      fullPage: true
    });

    // Get page title
    const title = await page.title();

    console.log('');
    console.log('✅ Browser Validation Complete');
    console.log('');
    console.log('Page Title:', title);
    console.log('Console Errors:', consoleErrors.length);
    console.log('Network Errors:', networkErrors.length);
    console.log('');

    if (consoleErrors.length > 0) {
      console.log('🔴 Console Errors:');
      consoleErrors.forEach((err, i) => {
        console.log(\`  \${i + 1}. \${err}\`);
      });
      console.log('');
    }

    if (networkErrors.length > 0) {
      console.log('🔴 Network Errors:');
      networkErrors.forEach((err, i) => {
        console.log(\`  \${i + 1}. \${err.status} - \${err.url}\`);
      });
      console.log('');
    }

    if (consoleErrors.length === 0 && networkErrors.length === 0) {
      console.log('✅ No errors detected!');
      console.log('');
    }

    // Export results as JSON
    const results = {
      success: true,
      url: '${URL}',
      title: title,
      consoleErrors: consoleErrors,
      networkErrors: networkErrors,
      screenshotPath: '/tmp/orchestrator-validation-screenshot.png',
      timestamp: new Date().toISOString()
    };

    require('fs').writeFileSync(
      '/tmp/orchestrator-validation-results.json',
      JSON.stringify(results, null, 2)
    );

  } catch (error) {
    console.error('❌ Browser validation failed:', error.message);

    const results = {
      success: false,
      error: error.message,
      timestamp: new Date().toISOString()
    };

    require('fs').writeFileSync(
      '/tmp/orchestrator-validation-results.json',
      JSON.stringify(results, null, 2)
    );

    process.exit(1);
  } finally {
    await browser.close();
  }
})();
`;
```

Write the script to `/tmp/orchestrator-validate.js` (e.g., with the Write tool), then execute it:

```bash
# The validation script above must be saved to /tmp/orchestrator-validate.js
# first (the bash shell cannot see the JS template variable, so use the Write tool)

# Run validation
cd "$PROJECT_PATH"
node /tmp/orchestrator-validate.js

PLAYWRIGHT_EXIT_CODE=$?

# Check results
if [ $PLAYWRIGHT_EXIT_CODE -ne 0 ]; then
  echo "❌ Browser validation failed"
  exit 1
fi
```

---

### Step 4: Visual Analysis (Claude Vision API - Phase 0 Manual)

For Phase 0, we'll use the Read tool to view the screenshot, since we're already in a Claude Code context:

```bash
echo "👁️  Visual Analysis..."
echo ""
echo "Screenshot saved to: /tmp/orchestrator-validation-screenshot.png"
echo ""
echo "To analyze the screenshot, use:"
echo "Read tool with: /tmp/orchestrator-validation-screenshot.png"
echo ""
```

**ACTION REQUIRED**: Use the Read tool to view `/tmp/orchestrator-validation-screenshot.png` and provide visual analysis:
- Is the layout correct?
- Are there any visual bugs (overlapping elements, missing content)?
- Is styling applied correctly?
- Any UX issues?

---

### Step 5: Generate Validation Report

```bash
echo "📊 Validation Report"
echo "===================="
echo ""
echo "Project: $PROJECT_PATH"
echo "URL: $URL"
echo ""

# Read results
if [ -f "/tmp/orchestrator-validation-results.json" ]; then
  # Parse JSON results with jq
  TITLE=$(jq -r '.title' /tmp/orchestrator-validation-results.json)
  CONSOLE_ERRORS=$(jq '.consoleErrors | length' /tmp/orchestrator-validation-results.json)
  NETWORK_ERRORS=$(jq '.networkErrors | length' /tmp/orchestrator-validation-results.json)

  echo "✅ Browser Validation: PASSED"
  echo "   Page Title: $TITLE"
  echo "   Console Errors: $CONSOLE_ERRORS"
  echo "   Network Errors: $NETWORK_ERRORS"
  echo ""
  echo "📄 Full results: /tmp/orchestrator-validation-results.json"
  echo "📸 Screenshot: /tmp/orchestrator-validation-screenshot.png"
  echo ""
fi
```

---

## Post-Validation Hook

```bash
echo ""
echo "🎣 Post-Validation Hook"
echo ""

# Calculate duration
VALIDATION_END_TIME=$(date +%s)
VALIDATION_DURATION=$((VALIDATION_END_TIME - VALIDATION_START_TIME))

echo "⏱️  Validation Duration: ${VALIDATION_DURATION}s"
echo ""

# Cleanup temporary files (optional)
# rm -f /tmp/orchestrator-validate.js

echo "✅ Validation Complete"
echo ""
echo "📈 Next Steps:"
echo "1. Review screenshot: /tmp/orchestrator-validation-screenshot.png"
echo "2. Review full results: /tmp/orchestrator-validation-results.json"
echo "3. If validation passed, ready for orchestrate"
echo ""
```

---

## Error Handling

**If the project path is invalid**:
- Display a usage example
- Exit with an error

**If the dev server is not running**:
- Display instructions to start the dev server
- Exit gracefully

**If Playwright installation fails**:
- Display an error message
- Exit with an error

**If browser navigation fails**:
- Capture error details in the results JSON
- Display an error message
- Exit with an error code

---

## Success Criteria

- ✅ Playwright browser automation works
- ✅ Can navigate to the target app
- ✅ Console errors captured
- ✅ Network errors captured
- ✅ Screenshot captured successfully
- ✅ Results exported as JSON
- ✅ Validation report generated

---

## Phase 0 Notes

**Current Limitations**:
- Manual visual analysis (using the Read tool)
- Basic error detection
- Single-URL validation
- No retry logic

**Phase 1 Enhancements**:
- Automated Claude Vision API integration
- User flow validation (multi-step)
- Retry logic with refinement
- Comparison with previous screenshots
- Integration with test workflow orchestration

741
commands/orchestrate.md
Normal file
@@ -0,0 +1,741 @@
---
|
||||
description: Run automated workflow orchestration with agent execution and validation
|
||||
---
|
||||
|
||||
<!--
|
||||
ATTRIBUTION:
|
||||
Project Orchestrator Plugin
|
||||
by Marty Bonacci & Claude Code (2025)
|
||||
Based on learnings from Test 4A and Test Orchestrator Agent concept
|
||||
|
||||
PHASE 1b FULL AUTOMATION - 2025-10-16
|
||||
This command now includes full autonomous orchestration with zero manual steps:
|
||||
|
||||
COMPONENTS (Phase 1a):
|
||||
- State Manager: Session persistence and tracking
|
||||
- Decision Maker: Intelligent complete/retry/escalate logic
|
- Prompt Refiner: Context-injected prompts on retry
- Metrics Tracker: Session analytics and continuous improvement
- Failure Analysis: 9 failure types categorized

AUTOMATION (Phase 1b):
- Automatic Agent Launch: Uses Task tool automatically
- Automatic Validation: Runs orchestrate-validate automatically
- True Retry Loop: Up to 3 automatic retries with refined prompts
- Automatic Decision Making: Complete/retry/escalate without user input
- Full End-to-End: Zero manual steps during orchestration

BENEFITS:
- 10x faster testing and iteration
- Real validation data (Playwright, console, network)
- Comprehensive metrics on every run
- Rapid refinement and improvement

All session state persisted to: .specswarm/orchestrator/sessions/
-->

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Goal

Execute autonomous workflow orchestration:

1. **Parse Workflow** - Read workflow specification
2. **Generate Comprehensive Prompt** - Create detailed agent instructions
3. **Launch Agent** - Use Task tool to execute in target project
4. **Validate Results** - Run validation suite
5. **Report Outcome** - Summarize results and next steps

**Usage**: `/speclabs:orchestrate <workflow-file> <project-path>`

**Example**:

```bash
/speclabs:orchestrate features/001-fix-bug/workflow.md /home/marty/code-projects/tweeter-spectest
```

---

## Pre-Orchestration Hook

```bash
echo "🎯 Project Orchestrator - Fully Autonomous Execution"
echo ""
echo "Phase 1b: Full Automation (Zero Manual Steps)"
echo "Components: State Manager + Decision Maker + Prompt Refiner + Validation + Metrics"
echo "Automation: Agent Launch + Validation + Retry Loop"
echo ""

# Record start time
ORCHESTRATION_START_TIME=$(date +%s)

# Source Phase 1a/1b components
PLUGIN_DIR="/home/marty/code-projects/specswarm/plugins/speclabs"
source "${PLUGIN_DIR}/lib/state-manager.sh"
source "${PLUGIN_DIR}/lib/decision-maker.sh"
source "${PLUGIN_DIR}/lib/prompt-refiner.sh"
source "${PLUGIN_DIR}/lib/metrics-tracker.sh"

echo "✅ All components loaded"
echo "✅ Full automation enabled"
echo ""
```

---

## Execution Steps

### Step 1: Parse Arguments

```bash
# Get arguments
ARGS="$ARGUMENTS"

# Parse test workflow file (required)
TEST_WORKFLOW=$(echo "$ARGS" | awk '{print $1}')

# Parse project path (required)
PROJECT_PATH=$(echo "$ARGS" | awk '{print $2}')

# Validate arguments
if [ -z "$TEST_WORKFLOW" ] || [ -z "$PROJECT_PATH" ]; then
  echo "❌ Error: Missing required arguments"
  echo ""
  echo "Usage: /speclabs:orchestrate <workflow-file> <project-path>"
  echo ""
  echo "Example:"
  echo "/speclabs:orchestrate features/001-fix-bug/workflow.md /home/marty/code-projects/tweeter-spectest"
  exit 1
fi

# Check files exist
if [ ! -f "$TEST_WORKFLOW" ]; then
  echo "❌ Error: Test workflow file not found: $TEST_WORKFLOW"
  exit 1
fi

if [ ! -d "$PROJECT_PATH" ]; then
  echo "❌ Error: Project path does not exist: $PROJECT_PATH"
  exit 1
fi

echo "📋 Test Workflow: $TEST_WORKFLOW"
echo "📁 Project: $PROJECT_PATH"
echo ""
```
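
One gap in the checks above: a relative `PROJECT_PATH` can break later steps once the orchestrator writes files or changes directories. A small hardening sketch — the `normalize_path` helper is hypothetical, not part of the plugin:

```bash
# Hypothetical hardening of the parsing above: resolve PROJECT_PATH to an
# absolute path so later cd/file operations are unambiguous.
PROJECT_PATH="${PROJECT_PATH:-$PWD}"   # fallback so the snippet runs standalone

normalize_path() {
  # Print an absolute version of the given directory path (fails if it
  # does not exist or is not enterable)
  (cd "$1" 2>/dev/null && pwd)
}

if PROJECT_PATH_ABS=$(normalize_path "$PROJECT_PATH"); then
  PROJECT_PATH="$PROJECT_PATH_ABS"
else
  echo "❌ Error: Cannot resolve project path: $PROJECT_PATH"
  exit 1
fi
```

This replaces the raw `$PROJECT_PATH` with its absolute form before any agent work begins.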

---

### Step 2: Parse Test Workflow

Read the test workflow file and extract:

- Task description
- Files to modify
- Expected outcome
- Validation criteria

**Test Workflow Format** (Markdown):

```markdown
# Test: [Task Name]

## Description
[What needs to be done]

## Files to Modify
- path/to/file1.ts
- path/to/file2.ts

## Changes Required
[Detailed description of changes]

## Expected Outcome
[What should happen after changes]

## Validation
- [ ] Criterion 1
- [ ] Criterion 2
- [ ] Criterion 3

## Test URL
http://localhost:5173/[path]
```

```bash
echo "📖 Parsing Test Workflow..."
echo ""

# Read workflow file
WORKFLOW_CONTENT=$(cat "$TEST_WORKFLOW")

# Extract task name (first heading)
TASK_NAME=$(echo "$WORKFLOW_CONTENT" | grep "^# Test:" | head -1 | sed 's/^# Test: //')

if [ -z "$TASK_NAME" ]; then
  echo "❌ Error: Invalid workflow format (missing '# Test:' heading)"
  exit 1
fi

echo "✅ Task: $TASK_NAME"
echo ""

# Create orchestration session
echo "📝 Creating orchestration session..."
SESSION_ID=$(state_create_session "$TEST_WORKFLOW" "$PROJECT_PATH" "$TASK_NAME")
echo "✅ Session: $SESSION_ID"
echo ""
```
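
Only the task name is extracted above; the remaining sections are read by the agent itself. If the orchestrator later needs individual sections (e.g. to fill the prompt template in Step 3), a section extractor could look like this — `extract_section` is a hypothetical helper, not part of the current command:

```bash
# Hypothetical helper: print the body of a "## Section" from the workflow
# markdown, e.g. "Description" or "Expected Outcome". Emits the lines
# between the matching heading and the next "## " heading.
extract_section() {
  # $1 = section name, $2 = workflow file path
  awk -v sec="## $1" '
    $0 == sec { found = 1; next }   # start collecting after the heading
    /^## /    { found = 0 }         # stop at the next section heading
    found     { print }
  ' "$2"
}
```

Usage might be `TASK_DESCRIPTION=$(extract_section "Description" "$TEST_WORKFLOW")`, and similarly for the other template fields.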

---

### Step 3: Generate Comprehensive Prompt

This is the CORE of Project Orchestrator - generating effective prompts for agents.

**Prompt Template**:

```markdown
# Task: [TASK_NAME]

## Context
You are working on project: [PROJECT_PATH]

This is an autonomous development task executed by Project Orchestrator (Phase 0).

## Your Mission
[TASK_DESCRIPTION from workflow]

## Files to Modify
[FILES_LIST from workflow]

## Detailed Requirements
[CHANGES_REQUIRED from workflow]

## Success Criteria
[VALIDATION_CRITERIA from workflow]

## Technical Guidelines
- Make ALL changes in [PROJECT_PATH] directory
- Follow existing code patterns and conventions
- Ensure all changes are complete and functional
- Run tests if applicable
- Report any blockers or issues

## Validation
After completing your work, the orchestrator will:
1. Run browser validation with Playwright
2. Check console for errors
3. Analyze UI with Claude Vision API
4. Verify success criteria

## Execution Instructions
1. Read the workflow file to understand requirements
2. Analyze existing code in target files
3. Make necessary changes
4. Verify changes work correctly
5. Report completion with summary

Work autonomously. Only escalate if you encounter blockers that prevent completion.

---

**Test Workflow File**: [WORKFLOW_FILE_PATH]

Please read this file first to understand the full requirements, then proceed with implementation.
```

Generate the actual prompt:

```bash
echo "✍️ Generating comprehensive prompt..."
echo ""

# Create prompt (simplified for Phase 0 - just reference the workflow file)
AGENT_PROMPT="# Task: ${TASK_NAME}

## Context
You are working on project: ${PROJECT_PATH}

This is an autonomous development task executed by Project Orchestrator (Phase 0 POC).

## Your Mission
Read and execute the test workflow specified in: ${TEST_WORKFLOW}

## Technical Guidelines
- Make ALL changes in ${PROJECT_PATH} directory
- Follow existing code patterns and conventions
- Ensure all changes are complete and functional
- Run tests if applicable
- Report any blockers or issues

## Execution Instructions
1. Read the workflow file: ${TEST_WORKFLOW}
2. Understand all requirements and validation criteria
3. Analyze existing code in target files
4. Make necessary changes
5. Verify changes work correctly
6. Report completion with summary

Work autonomously. Only escalate if you encounter blockers that prevent completion.

---

**IMPORTANT**: All file operations must be in the ${PROJECT_PATH} directory.
"

echo "✅ Prompt generated ($(echo "$AGENT_PROMPT" | wc -w) words)"
echo ""
```

---

### Step 4: Automated Orchestration with Intelligent Retry

**Phase 1b Full Automation**: Agent launch + Validation + Retry loop - all automatic!

```bash
echo "🔄 Starting Automated Orchestration (max 3 attempts)..."
echo ""

# Configuration
MAX_RETRIES=3
```

I will now execute the orchestration with automatic retry logic:

**For each attempt (up to 3):**

1. Check current retry count from state
2. Select prompt (original on first attempt, refined on retries)
3. Launch agent automatically using Task tool
4. Run validation automatically using orchestrate-validate
5. Parse validation results and update state
6. Make decision (complete/retry/escalate)
7. If retry needed: Resume state and loop back to step 1
8. If complete or escalate: Exit loop and proceed to report

**Automatic Loop Implementation:**

Let me execute the retry loop automatically...

```bash
# This will be executed by Claude Code in the loop
echo "Orchestration loop will execute automatically..."
echo ""
```

#### Orchestration Loop Execution

I'll now begin the automatic orchestration loop. For each attempt:

**Attempt Execution:**

1. **Check State & Select Prompt**

   ```bash
   # Get current retry count
   RETRY_COUNT=$(state_get "$SESSION_ID" "retries.count")
   ATTEMPT_NUM=$((RETRY_COUNT + 1))

   echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
   echo "📍 Attempt $ATTEMPT_NUM of $MAX_RETRIES"
   echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
   echo ""

   if [ "$RETRY_COUNT" -eq 0 ]; then
     # First attempt - use original prompt
     CURRENT_PROMPT="$AGENT_PROMPT"
     echo "📝 Using original prompt ($(echo "$CURRENT_PROMPT" | wc -w) words)"
   else
     # Retry attempt - use refined prompt with failure context
     echo "🔧 Generating refined prompt with failure analysis..."
     CURRENT_PROMPT=$(prompt_refine "$SESSION_ID" "$AGENT_PROMPT")
     prompt_save "$SESSION_ID" "$CURRENT_PROMPT"
     echo "✅ Refined prompt generated ($(echo "$CURRENT_PROMPT" | wc -w) words)"

     # Show what changed
     prompt_diff "$SESSION_ID" "$AGENT_PROMPT" "$CURRENT_PROMPT" | head -20
   fi

   echo ""

   # Update state for this attempt
   state_update "$SESSION_ID" "agent.status" '"running"'
   state_update "$SESSION_ID" "agent.started_at" "\"$(date -Iseconds)\""

   # Save prompt for this attempt
   PROMPT_FILE="/tmp/orchestrate-prompt-${SESSION_ID}-attempt-${ATTEMPT_NUM}.md"
   echo "$CURRENT_PROMPT" > "$PROMPT_FILE"

   echo "🚀 Launching agent for ${PROJECT_PATH}..."
   echo ""
   ```

2. **Launch Agent** - I'll use the Task tool to launch a general-purpose agent with the current prompt:

   ```bash
   # Agent will be launched by Claude Code using Task tool
   echo "Agent launch: Task tool with prompt from $PROMPT_FILE"
   echo "Working directory: $PROJECT_PATH"
   echo ""
   ```

3. **Agent Execution** - Wait for the agent to complete the task...

4. **Update Agent Status**

   ```bash
   echo "✅ Agent completed execution"
   echo ""

   # Update state
   state_update "$SESSION_ID" "agent.status" '"completed"'
   state_update "$SESSION_ID" "agent.completed_at" "\"$(date -Iseconds)\""
   ```

5. **Run Validation** - I'll execute the validation suite automatically:

   ```bash
   echo "🔍 Running Automated Validation Suite..."
   echo ""

   # Update validation status
   state_update "$SESSION_ID" "validation.status" '"running"'
   ```

   Execute validation: `/speclabs:orchestrate-validate ${PROJECT_PATH}`

6. **Parse Validation Results**

   ```bash
   # Parse validation output and update state
   # (Validation results will be captured from orchestrate-validate output)

   echo ""
   echo "📊 Validation Results:"
   echo "- Playwright: [from validation output]"
   echo "- Console Errors: [from validation output]"
   echo "- Network Errors: [from validation output]"
   echo "- Vision API: [from validation output]"
   echo ""
   ```
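
The echoes above are placeholders. Assuming orchestrate-validate writes a JSON report to disk (the report path and field names below are assumptions, not a documented contract of the validator), the parsing step could be sketched as:

```bash
# Sketch only: parse a JSON validation report and push results into
# session state. Field names and the report location are assumptions.
parse_validation_report() {
  local report="$1"   # path to JSON report written by the validator
  [ -f "$report" ] || return 1

  PLAYWRIGHT_STATUS=$(jq -r '.playwright.status // "unknown"' "$report")
  CONSOLE_ERRORS=$(jq -r '.console.error_count // 0' "$report")
  NETWORK_ERRORS=$(jq -r '.network.error_count // 0' "$report")

  # state_update comes from lib/state-manager.sh, sourced in the pre-hook
  state_update "$SESSION_ID" "validation.playwright" "\"$PLAYWRIGHT_STATUS\""
  state_update "$SESSION_ID" "validation.console_errors" "$CONSOLE_ERRORS"
  state_update "$SESSION_ID" "validation.network_errors" "$NETWORK_ERRORS"
  state_update "$SESSION_ID" "validation.status" '"completed"'
}
```

A call such as `parse_validation_report "/tmp/orchestrate-validation-${SESSION_ID}.json"` would then replace the placeholder echoes.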

7. **Make Decision**

   ```bash
   echo "🧠 Decision Maker Analysis..."
   echo ""

   # Make decision based on state
   DECISION=$(decision_make "$SESSION_ID")
   RETRY_COUNT=$(state_get "$SESSION_ID" "retries.count")

   echo "Decision: $DECISION (after attempt $ATTEMPT_NUM)"
   echo ""
   ```

8. **Process Decision**

   ```bash
   case "$DECISION" in
     complete)
       echo "✅ COMPLETE - Task successful!"

       # Record decision
       decision_record "$SESSION_ID" "complete" "Agent completed successfully and validation passed"

       # Mark session complete
       state_complete "$SESSION_ID" "success"

       # Track metrics
       metrics_track_completion "$SESSION_ID"

       echo ""
       echo "🎉 Orchestration Complete (success on attempt $ATTEMPT_NUM)"

       # Exit loop
       SHOULD_EXIT_LOOP=true
       ;;

     retry)
       echo "🔄 RETRY - Will attempt again with refined prompt"

       # Get detailed decision with reasoning
       DETAILED_DECISION=$(decision_make_detailed "$SESSION_ID")
       REASON=$(echo "$DETAILED_DECISION" | jq -r '.reason')

       echo "Reason: $REASON"
       echo ""

       # Analyze failure
       FAILURE_ANALYSIS=$(decision_analyze_failure "$SESSION_ID")
       FAILURE_TYPE=$(echo "$FAILURE_ANALYSIS" | jq -r '.failure_type')

       echo "Failure Type: $FAILURE_TYPE"

       # Generate retry strategy
       RETRY_STRATEGY=$(decision_generate_retry_strategy "$SESSION_ID")
       BASE_STRATEGY=$(echo "$RETRY_STRATEGY" | jq -r '.base_strategy')

       echo "Retry Strategy: $BASE_STRATEGY"
       echo ""

       # Record decision
       decision_record "$SESSION_ID" "retry" "$REASON"

       # Resume session for retry
       state_resume "$SESSION_ID"

       # Track retry
       metrics_track_retry "$SESSION_ID" "$FAILURE_TYPE"

       # Check if we should continue
       NEW_RETRY_COUNT=$(state_get "$SESSION_ID" "retries.count")

       if [ "$NEW_RETRY_COUNT" -ge "$MAX_RETRIES" ]; then
         echo "⚠️ Maximum retries reached. Escalating..."
         SHOULD_EXIT_LOOP=true
       else
         echo "↻ Continuing to next attempt..."
         echo ""
         # Loop will continue automatically
         SHOULD_EXIT_LOOP=false
       fi
       ;;

     escalate)
       echo "⚠️ ESCALATE - Human intervention required"

       # Get escalation message
       ESCALATION_MSG=$(decision_get_escalation_message "$SESSION_ID")

       echo ""
       echo "$ESCALATION_MSG"
       echo ""

       # Record decision
       decision_record "$SESSION_ID" "escalate" "Max retries exceeded or unrecoverable error"

       # Mark session escalated
       state_complete "$SESSION_ID" "escalated"

       # Track escalation
       metrics_track_escalation "$SESSION_ID"

       echo "🎯 Escalation Actions Required:"
       echo "1. Review session state: state_print_summary $SESSION_ID"
       echo "2. Examine failure history"
       echo "3. Manually fix issues or adjust workflow"
       echo "4. Re-run orchestration if needed"
       echo ""

       # Exit loop
       SHOULD_EXIT_LOOP=true
       ;;
   esac

   echo ""
   ```

**Loop Control:** After each attempt, I'll check whether to continue:

- If decision is "complete" or "escalate" → Exit loop, proceed to report
- If decision is "retry" and retries < max → Continue to next attempt
- If decision is "retry" but retries >= max → Escalate and exit

I'll now execute this loop automatically, making up to 3 attempts with intelligent retry logic.
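
As a sketch, the loop control above reduces to a single driver loop. In practice Claude Code drives each attempt via tools, so `run_attempt` below is a hypothetical placeholder for one full agent + validation + decision cycle, not a function the plugin defines:

```bash
# Illustrative sketch of the loop control described above. state_get comes
# from lib/state-manager.sh; run_attempt is a hypothetical stand-in for one
# "launch agent + validate + decide" cycle that may set SHOULD_EXIT_LOOP.
orchestration_loop() {
  SHOULD_EXIT_LOOP=false
  while [ "$SHOULD_EXIT_LOOP" != "true" ]; do
    local retry_count
    retry_count=$(state_get "$SESSION_ID" "retries.count")
    if [ "$retry_count" -ge "$MAX_RETRIES" ]; then
      SHOULD_EXIT_LOOP=true   # out of attempts → escalate path
      break
    fi
    run_attempt               # one agent + validation + decision cycle
  done
}
```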

---

### Step 5: Generate Orchestration Report

```bash
echo "📊 Orchestration Report"
echo "======================="
echo ""

# Print session summary using State Manager
state_print_summary "$SESSION_ID"

echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""

# Get final state
FINAL_STATUS=$(state_get "$SESSION_ID" "status")
RETRY_COUNT=$(state_get "$SESSION_ID" "retries.count")
FINAL_DECISION=$(state_get "$SESSION_ID" "decision.action")

echo "Final Status: $FINAL_STATUS"
echo "Decision: $FINAL_DECISION"
echo "Retry Count: $RETRY_COUNT"
echo ""

# Calculate duration
ORCHESTRATION_END_TIME=$(date +%s)
ORCHESTRATION_DURATION=$((ORCHESTRATION_END_TIME - ORCHESTRATION_START_TIME))
ORCHESTRATION_MINUTES=$((ORCHESTRATION_DURATION / 60))
ORCHESTRATION_SECONDS=$((ORCHESTRATION_DURATION % 60))

echo "⏱️ Total Duration: ${ORCHESTRATION_MINUTES}m ${ORCHESTRATION_SECONDS}s"

# Update metrics in state
state_update "$SESSION_ID" "metrics.total_time_seconds" "$ORCHESTRATION_DURATION"

echo ""
echo "📁 Session Directory: $(state_get_session_dir "$SESSION_ID")"
echo ""
```

---

## Post-Orchestration Hook

```bash
echo "🎣 Post-Orchestration Hook"
echo ""

echo "✅ Orchestration Complete"
echo ""

# Show next steps based on final decision
FINAL_DECISION=$(state_get "$SESSION_ID" "decision.action")

case "$FINAL_DECISION" in
  complete)
    echo "📈 Next Steps (Success):"
    echo "1. Review agent's changes in: ${PROJECT_PATH}"
    echo "2. Test manually in browser"
    echo "3. Commit changes if satisfied"
    echo "4. Review metrics: metrics_get_summary"
    ;;

  retry)
    echo "📈 Next Steps (Retry Needed):"
    echo "1. Review failure analysis in session state"
    echo "2. Optionally adjust workflow or requirements"
    echo "3. Re-run orchestration to retry automatically"
    echo "4. Or manually fix and test"
    ;;

  escalate)
    echo "📈 Next Steps (Escalated):"
    echo "1. Review escalation message above"
    echo "2. Examine session state: state_print_summary $SESSION_ID"
    echo "3. Manually investigate and fix issues"
    echo "4. Update workflow if needed"
    echo "5. Re-run orchestration after fixes"
    ;;

  *)
    echo "📈 Next Steps:"
    echo "1. Review session state: state_print_summary $SESSION_ID"
    echo "2. Check agent output in: ${PROJECT_PATH}"
    echo "3. Review metrics: metrics_get_summary"
    ;;
esac

echo ""
echo "📊 Phase 1a Metrics:"
echo "- Session ID: $SESSION_ID"
echo "- Session Dir: $(state_get_session_dir "$SESSION_ID")"
echo "- View all sessions: state_list_sessions"
echo "- View metrics: metrics_get_summary"
echo ""
```

---

## Error Handling

**If workflow file is invalid**:
- Display an error message with format requirements
- Exit with an error

**If agent fails to complete**:
- Capture the agent's error/blocker message
- Display a summary
- Suggest manual intervention

**If validation fails**:
- Display validation errors
- Suggest fixes or retry

---

## Success Criteria

**Phase 1a Implementation**:

- ✅ Test workflow parsed successfully
- ✅ Orchestration session created and tracked
- ✅ Phase 1a components loaded (State Manager, Decision Maker, Prompt Refiner, Metrics)
- ✅ Comprehensive prompt generated
- ✅ Agent launched with appropriate prompt (original or refined)
- ✅ Agent status tracked in state
- ✅ Validation executed (manual for now, automated in Phase 1b)
- ✅ Decision made (complete/retry/escalate)
- ✅ Retry logic with refined prompt (up to 3 attempts)
- ✅ Failure analysis performed
- ✅ Escalation handling when needed
- ✅ Metrics tracked and reported
- ✅ Orchestration report generated with Phase 1a data
- ✅ Session state persisted to .specswarm/orchestrator/

---

## Learning Objectives

**Phase 1a Questions**:

1. Does intelligent retry logic improve the success rate?
2. How effective is failure analysis categorization?
3. Do refined prompts reduce repeated failures?
4. What failure types are most common?
5. How many retries before escalation is typical?
6. Is the decision maker accurate (correct complete/retry/escalate)?
7. Do agents improve on retry attempts?
8. What prompt refinements are most effective?

**Phase 1a Metrics Tracked**:

- Session status (success/failure/escalated)
- Retry count per session
- Failure type distribution
- Time per attempt (agent + validation)
- Total orchestration time
- Success rate by attempt number
- Escalation rate
- Decision accuracy
- Prompt refinement effectiveness

---

## Phase 1b Status: IMPLEMENTED ✅

**Implemented Features** (Phase 1b - Full Automation):

- ✅ **State Manager** - Session persistence and tracking
- ✅ **Decision Maker** - Intelligent complete/retry/escalate logic
- ✅ **Prompt Refiner** - Context-injected prompts on retry
- ✅ **Automatic Agent Launch** - No manual Task tool usage required
- ✅ **Automatic Validation** - Orchestrate-validate runs automatically
- ✅ **True Retry Loop** - Automatic retry with refined prompts (up to 3 attempts)
- ✅ **Failure Analysis** - 9 failure types categorized
- ✅ **Escalation Handling** - Automatic human escalation when needed
- ✅ **Metrics Tracking** - Session analytics and metrics
- ✅ **State Persistence** - All state saved to .specswarm/orchestrator/
- ✅ **Full Automation** - No manual steps during orchestration

**Key Phase 1b Improvements**:

- 🚀 **10x Faster Testing** - Automated execution vs manual steps
- 🔄 **True Retry Loop** - Automatic continuation without user intervention
- 📊 **Real Validation Data** - Actual Playwright/console/network results
- ⚡ **Rapid Iteration** - Test cases run in 2-5 minutes vs 15-30 minutes

**Phase 1c Enhancements** (Planned):

- Real Vision API integration (vs mock)
- Multiple workflow parallel execution
- Advanced metrics dashboards
- Cross-session learning

**Phase 2 Enhancements** (Future):

- Multi-agent coordination
- Dynamic prompt optimization with ML
- Self-healing workflows
- Predictive failure prevention
- Autonomous workflow generation
455
commands/plan.md
Normal file
@@ -0,0 +1,455 @@
---
description: Execute the implementation planning workflow using the plan template to generate design artifacts.
---

<!--
ATTRIBUTION CHAIN:
1. Original: GitHub spec-kit (https://github.com/github/spec-kit)
   Copyright (c) GitHub, Inc. | MIT License
2. Adapted: SpecKit plugin by Marty Bonacci (2025)
3. Forked: SpecSwarm plugin with tech stack management
   by Marty Bonacci & Claude Code (2025)
-->

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

1. **Discover Feature Context**:

   a. **Find Repository Root and Initialize Features Directory**:

   ```bash
   REPO_ROOT=$(git rev-parse --show-toplevel 2>/dev/null || pwd)

   # Source features location helper
   SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
   PLUGIN_DIR="$(dirname "$SCRIPT_DIR")"
   source "$PLUGIN_DIR/lib/features-location.sh"

   # Initialize features directory (handles migration if needed)
   get_features_dir "$REPO_ROOT"
   ```

   b. **Get Current Feature**:

   ```bash
   BRANCH=$(git rev-parse --abbrev-ref HEAD 2>/dev/null)
   FEATURE_NUM=$(echo "$BRANCH" | grep -oE '^[0-9]{3}')

   # Fallback for non-git
   if [ -z "$FEATURE_NUM" ]; then
     FEATURE_NUM=$(list_features "$REPO_ROOT" | grep -oE '^[0-9]{3}' | sort -nr | head -1)
   fi
   ```

   c. **Locate Feature Directory**:

   ```bash
   find_feature_dir "$FEATURE_NUM" "$REPO_ROOT"
   # FEATURE_DIR is now set by find_feature_dir
   ```

   d. **Set Path Variables**:

   ```bash
   FEATURE_SPEC="${FEATURE_DIR}/spec.md"
   IMPL_PLAN="${FEATURE_DIR}/plan.md"
   RESEARCH_FILE="${FEATURE_DIR}/research.md"
   DATA_MODEL_FILE="${FEATURE_DIR}/data-model.md"
   CONTRACTS_DIR="${FEATURE_DIR}/contracts"
   ```

   e. **Validate Prerequisites**:
   - Check that `spec.md` exists
   - If missing: ERROR "No specification found. Run `/specify` first."

2. **Load context**:
   - Read FEATURE_SPEC (spec.md)
   - Read `.specswarm/constitution.md` if it exists
   - Check for `.specswarm/tech-stack.md`
   - Load the plan template from `templates/plan-template.md` or use the embedded template

<!-- ========== TECH STACK MANAGEMENT (SpecSwarm Enhancement) ========== -->
<!-- Added by Marty Bonacci & Claude Code (2025) -->

2a. **Tech Stack File Initialization** (first-time setup):

**Check if tech stack file exists**:

```bash
if [ ! -f "${REPO_ROOT}/.specswarm/tech-stack.md" ]; then
  FIRST_FEATURE=true
else
  FIRST_FEATURE=false
fi
```

**If FIRST_FEATURE=true** (no tech-stack.md exists):

1. **Detect project technologies** (scan existing files):

   ```bash
   # Check for package managers and extract dependencies
   if [ -f "${REPO_ROOT}/package.json" ]; then
     # Node.js/TypeScript project
     LANGUAGE=$(grep -q "typescript" "${REPO_ROOT}/package.json" && echo "TypeScript" || echo "JavaScript")
     RUNTIME="Node.js"
     # Extract frameworks and libraries from dependencies
   elif [ -f "${REPO_ROOT}/composer.json" ]; then
     # PHP project
     LANGUAGE="PHP"
   elif [ -f "${REPO_ROOT}/requirements.txt" ] || [ -f "${REPO_ROOT}/pyproject.toml" ]; then
     # Python project
     LANGUAGE="Python"
   elif [ -f "${REPO_ROOT}/go.mod" ]; then
     # Go project
     LANGUAGE="Go"
   elif [ -f "${REPO_ROOT}/Gemfile" ]; then
     # Ruby project
     LANGUAGE="Ruby"
   fi
   ```
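
The "extract frameworks and libraries from dependencies" step is left open above. One possible way to fill it in (an assumption, not the plugin's actual implementation) is to list dependency names from package.json with jq:

```bash
# Hypothetical sketch: print one dependency name per line from a
# package.json, merging dependencies and devDependencies.
extract_node_deps() {
  # $1 = path to package.json
  jq -r '((.dependencies // {}) + (.devDependencies // {})) | keys[]' "$1"
}
```

The resulting list, e.g. `DEPS=$(extract_node_deps "${REPO_ROOT}/package.json")`, could then be grepped for known frameworks (react, vue, express, ...) to populate the detected stack.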

2. **Generate tech-stack.md** using template:
   - Use `/templates/tech-stack-template.md` if it exists
   - Otherwise use the embedded template with detected technologies
   - Populate the Core Technologies section from detection
   - Populate Standard Libraries from package files
   - Leave the Prohibited Technologies section with a TODO marker and common examples

3. **Present to user for confirmation**:

   ```
   🎯 TECH STACK FILE CREATED

   This is your FIRST feature using SpecSwarm. I've created `.specswarm/tech-stack.md`
   based on your project files and plan.md Technical Context.

   **Detected Stack:**
   - Language: {detected_language}
   - Framework: {detected_framework}
   - Database: {detected_database}
   - Key Libraries: {detected_libraries}

   **Action Required:**
   1. Review `.specswarm/tech-stack.md`
   2. Add any PROHIBITED technologies (important for drift prevention!)
   3. Confirm accuracy of detected stack

   ⚠️ This file will be used to validate ALL future features.

   **Options:**
   - Type "continue" to accept and proceed with planning
   - Type "edit" to pause while you refine tech-stack.md
   - Provide corrections: "Language should be JavaScript, not TypeScript"
   ```

4. **Wait for user response**:
   - If "continue": Proceed to section 2b (Tech Stack Validation)
   - If "edit": PAUSE with the message "Run `/specswarm:plan` again when ready"
   - If corrections provided: Update tech-stack.md, show the updated version, ask for confirmation again

5. **Add reminder to constitution** (if a constitution exists):
   - Append a note to constitution.md suggesting Principle 5 formalization
   - Message: "💡 Consider running `/specswarm:constitution` to formalize tech stack enforcement as Principle 5"

**If FIRST_FEATURE=false** (tech-stack.md already exists):
- Skip initialization entirely
- Proceed directly to section 2b (Tech Stack Validation)

**If auto-detection fails** (no package files found):
- Prompt user for manual input:

  ```
  🎯 TECH STACK FILE NEEDED

  Cannot auto-detect technologies. Please provide:
  1. Programming language and version (e.g., "TypeScript 5.x")
  2. Main framework (e.g., "React Router v7")
  3. Database (e.g., "PostgreSQL 17")
  4. 3-5 key libraries

  Example: "TypeScript 5, React Router v7, PostgreSQL 17, Drizzle ORM, Zod"
  ```
- Wait for response, generate tech-stack.md from input

2b. **Tech Stack Validation** (runs for all features after the first):

**If tech-stack.md does NOT exist**:
- Skip validation (handled by section 2a above)

**If tech-stack.md exists**:

1. **Extract technologies from plan.md Technical Context**:
   - Parse all mentioned libraries, frameworks, and tools from the Technical Context section
   - Create list: NEW_TECHNOLOGIES[] (technologies mentioned in the current plan)
   - Read and parse: APPROVED_STACK[] (from tech-stack.md approved sections)
   - Read and parse: PROHIBITED_STACK[] (from tech-stack.md prohibited section)

2. **Classify each NEW technology**:

   ```bash
   for TECH in "${NEW_TECHNOLOGIES[@]}"; do
     # Check 1: Is it PROHIBITED?
     if grep -qi "❌.*${TECH}" "${REPO_ROOT}/.specswarm/tech-stack.md"; then
       TECH_STATUS[$TECH]="PROHIBITED"
       CONFLICTS_WITH[$TECH]=$(grep -i "❌.*${TECH}" "${REPO_ROOT}/.specswarm/tech-stack.md" | sed 's/.*use \(.*\) instead.*/\1/')
       continue
     fi

     # Check 2: Is it already APPROVED? (filter first, then test for any
     # remaining non-prohibited mentions)
     if grep -i "${TECH}" "${REPO_ROOT}/.specswarm/tech-stack.md" | grep -v "❌" | grep -q .; then
       TECH_STATUS[$TECH]="APPROVED"
       continue
     fi

     # Check 3: Does it CONFLICT with existing tech?
     # Extract PURPOSE tag from tech-stack.md for conflict detection
     TECH_PURPOSE=$(get_library_purpose "${TECH}")

     if [ -n "$TECH_PURPOSE" ]; then
       EXISTING_WITH_PURPOSE=$(grep -i "PURPOSE:${TECH_PURPOSE}" "${REPO_ROOT}/.specswarm/tech-stack.md" | grep -v "❌" | head -1)
       if [ -n "$EXISTING_WITH_PURPOSE" ]; then
         TECH_STATUS[$TECH]="CONFLICT"
         CONFLICTS_WITH[$TECH]="${EXISTING_WITH_PURPOSE}"
         continue
       fi
     fi

     # Check 4: No conflict = AUTO_ADD candidate
     TECH_STATUS[$TECH]="AUTO_ADD"
   done
   ```
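
`get_library_purpose` is referenced above but not defined in this command. A minimal sketch, assuming a small hardcoded lookup table (the real helper presumably lives in a plugin lib and may use richer heuristics, such as PURPOSE tags in tech-stack.md):

```bash
# Hypothetical stand-in for the get_library_purpose helper used above:
# map a library name to a coarse purpose category, or print nothing if
# the library is unknown.
get_library_purpose() {
  case "$(echo "$1" | tr '[:upper:]' '[:lower:]')" in
    drizzle*|prisma|typeorm|sequelize) echo "orm" ;;
    zod|yup|joi)                       echo "validation" ;;
    tailwind*|bootstrap)               echo "css-framework" ;;
    jest|vitest|mocha)                 echo "test-runner" ;;
    *)                                 echo "" ;;
  esac
}
```

With this shape, two libraries that map to the same purpose (e.g. `prisma` and `typeorm`, both "orm") trigger the CONFLICT path in Check 3.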

3. **Generate Tech Stack Compliance Report** in plan.md:

   Create a new section in the plan.md file:

   ```markdown
   ## Tech Stack Compliance Report
   <!-- Auto-generated by SpecSwarm tech stack validation -->

   ### ✅ Approved Technologies (already in stack)
   {list of technologies with APPROVED status}

   ### ➕ New Technologies (auto-added)
   {list of technologies with AUTO_ADD status}
   For each:
   - **{Technology Name}**
     - Purpose: {detected_purpose}
     - No conflicts detected
     - Added to: {section_name}
     - Version updated: {old_version} → {new_version}

   ### ⚠️ Conflicting Technologies (require approval)
   {list of technologies with CONFLICT status}
   For each:
   - **{Technology Name}**
     - Purpose: {detected_purpose}
     - ❌ CONFLICT: Project uses `{existing_tech}` ({purpose})
     - **Action Required**: Choose one:

   | Option | Choice | Implications |
   |--------|--------|--------------|
   | A | Use {existing_tech} (keep existing) | Remove {new_tech} from plan, update Technical Context |
   | B | Replace {existing_tech} with {new_tech} | Update tech-stack.md (MAJOR version), refactor existing code |
   | C | Use both (justify overlap) | Document why both needed in research.md, add note to tech-stack.md |

   **Your choice**: _[Wait for user response]_

   ### ❌ Prohibited Technologies (cannot use)
   {list of technologies with PROHIBITED status}
   For each:
   - **{Technology Name}**
     - ❌ PROHIBITED in tech-stack.md
     - Reason: "{reason from tech-stack.md}"
     - **Must use**: {approved_alternative}
     - **Cannot proceed** until plan.md updated
   ```

4. **Handle Each Status**:

   **APPROVED** → Continue silently (no action needed)

   **AUTO_ADD** →
   ```bash
   # For each AUTO_ADD technology:
   1. Determine appropriate section (Data Layer, UI Layer, Utilities, etc.)
   2. Add entry to tech-stack.md:
      - {Technology} v{version} ({purpose}) <!-- Auto-added: Feature {FEATURE_NUM}, {DATE} -->
   3. Bump tech-stack.md MINOR version:
      OLD_VERSION=$(grep "^\*\*Version\*\*:" "${REPO_ROOT}/.specswarm/tech-stack.md" | grep -oE '[0-9]+\.[0-9]+\.[0-9]+')
      MAJOR=$(echo $OLD_VERSION | cut -d. -f1)
      MINOR=$(echo $OLD_VERSION | cut -d. -f2)
      PATCH=$(echo $OLD_VERSION | cut -d. -f3)
      NEW_MINOR=$((MINOR + 1))
      NEW_VERSION="${MAJOR}.${NEW_MINOR}.0"
      sed -i "s/\*\*Version\*\*: ${OLD_VERSION}/\*\*Version\*\*: ${NEW_VERSION}/" "${REPO_ROOT}/.specswarm/tech-stack.md"
   4. Update version history section
   5. Notify user in compliance report
   6. Continue with plan generation
   ```

   **CONFLICT** →
   ```bash
   1. STOP planning process
   2. Show conflict details with options table (see report format above)
   3. WAIT for user response (A, B, or C)
   4. Based on choice:
      Option A (keep existing):
      - Remove conflicting tech from plan.md Technical Context
      - Continue with plan generation
      Option B (replace):
      - Remove existing tech from tech-stack.md
      - Add new tech to tech-stack.md
      - Bump MAJOR version (breaking change)
      - Add to research.md: justification for replacement
      - Continue with plan generation
      Option C (use both):
      - Require research.md justification section
      - Add new tech to tech-stack.md with overlap note
      - Bump MINOR version
      - Continue with plan generation
   5. Document decision in plan.md under "Tech Stack Compliance"
   ```

   **PROHIBITED** →
   ```bash
   1. ERROR: Cannot proceed with planning
   2. Show prohibited tech details and approved alternative
   3. Show amendment process:

      To use prohibited technology:
      1. Document compelling business justification in research.md
      2. Update .specswarm/constitution.md (requires constitutional amendment)
      3. Remove from Prohibited section in tech-stack.md
      4. Add to Approved section with justification comment
      5. Bump tech-stack.md MAJOR version (breaking constitutional change)
      6. Re-run /specswarm:plan

   4. HALT planning until issue resolved
   ```

5. **Update tech-stack.md file** (for AUTO_ADD technologies):
   - Automatically write changes to tech-stack.md
   - Update version and version history
   - Add auto-generated comment with feature number and date

<!-- ========== END TECH STACK MANAGEMENT ========== -->

3. **Execute plan workflow**: Follow the structure in IMPL_PLAN template to:
   - Fill Technical Context (mark unknowns as "NEEDS CLARIFICATION")
   - Fill Constitution Check section from constitution
   - Evaluate gates (ERROR if violations unjustified)
   - Phase 0: Generate research.md (resolve all NEEDS CLARIFICATION)
   - Phase 1: Generate data-model.md, contracts/, quickstart.md
   - Phase 1: Update agent context by running the agent script
   - Re-evaluate Constitution Check post-design

4. **Stop and report**: Command ends after Phase 2 planning. Report branch, IMPL_PLAN path, and generated artifacts.

## Phases

### Phase 0: Outline & Research

1. **Extract unknowns from Technical Context** above:
   - For each NEEDS CLARIFICATION → research task
   - For each dependency → best practices task
   - For each integration → patterns task

2. **Generate and dispatch research agents**:
   ```
   For each unknown in Technical Context:
     Task: "Research {unknown} for {feature context}"
   For each technology choice:
     Task: "Find best practices for {tech} in {domain}"
   ```

3. **Consolidate findings** in `research.md` using format:
   - Decision: [what was chosen]
   - Rationale: [why chosen]
   - Alternatives considered: [what else evaluated]

**Output**: research.md with all NEEDS CLARIFICATION resolved

### Phase 1: Design & Contracts

**Prerequisites:** `research.md` complete

1. **Extract entities from feature spec** → `data-model.md`:
   - Entity name, fields, relationships
   - Validation rules from requirements
   - State transitions if applicable

2. **Generate API contracts** from functional requirements:
   - For each user action → endpoint
   - Use standard REST/GraphQL patterns
   - Output OpenAPI/GraphQL schema to `/contracts/`
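
   For illustration, a contract generated for a hypothetical "create task" user action might look like the following (all names are placeholders, not part of any real spec):

   ```yaml
   # contracts/tasks.openapi.yaml (hypothetical example)
   openapi: 3.0.3
   info:
     title: Tasks API
     version: 0.1.0
   paths:
     /tasks:
       post:
         summary: Create a task
         requestBody:
           required: true
           content:
             application/json:
               schema:
                 type: object
                 required: [title]
                 properties:
                   title: { type: string }
         responses:
           "201":
             description: Task created
   ```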

3. **Agent context update** (optional):

   a. **Detect Agent Type**:
   ```bash
   # Check which agent context files exist
   if [ -d "${REPO_ROOT}/.claude" ]; then
       AGENT="claude"
       CONTEXT_FILE=".claude/context.md"
   elif [ -f "${REPO_ROOT}/.cursorrules" ]; then
       AGENT="cursor"
       CONTEXT_FILE=".cursorrules"
   elif [ -f "${REPO_ROOT}/.github/copilot-instructions.md" ]; then
       AGENT="copilot"
       CONTEXT_FILE=".github/copilot-instructions.md"
   else
       # No agent context file found - skip this step
       AGENT="none"
   fi
   ```

   b. **Extract Tech Stack from plan.md**:
   - Language (e.g., Python, TypeScript, Go)
   - Framework (e.g., React, FastAPI, Express)
   - Database (e.g., PostgreSQL, MongoDB)
   - Key libraries and tools

   c. **Update Agent Context File** (Enhanced by SpecSwarm):
   <!-- Tech stack enforcement added by Marty Bonacci & Claude Code (2025) -->
   - Read existing CONTEXT_FILE (if exists)
   - **Read `.specswarm/tech-stack.md` if exists** (SpecSwarm enhancement)
   - Look for markers like `<!-- AUTO-GENERATED-START -->` and `<!-- AUTO-GENERATED-END -->`
   - Replace content between markers with enhanced format:
     - If no markers exist, append new section
     - Preserve all manual edits outside markers
   - **Enhanced format** (includes CRITICAL CONSTRAINTS from tech-stack.md):
   ```markdown
   <!-- AUTO-GENERATED-START -->
   ## Tech Stack (from plan.md)
   - **Language**: {language}
   - **Framework**: {framework}
   - **Database**: {database}
   - **Key Libraries**: {libraries}

   ## CRITICAL CONSTRAINTS (from tech-stack.md)
   ⚠️ **BEFORE suggesting ANY library, framework, or pattern:**
   1. Read `.specswarm/tech-stack.md`
   2. Verify your suggestion is APPROVED
   3. If PROHIBITED, suggest approved alternative
   4. If UNAPPROVED, warn user and require justification

   **Prohibited Technologies** (NEVER suggest these):
   {list of ❌ technologies from tech-stack.md with approved alternatives}
   Example format:
   - ❌ Axios → Use: fetch API
   - ❌ Class components → Use: Functional components
   - ❌ Redux → Use: React Router loaders/actions

   **Violation = Constitution violation** (see `.specswarm/constitution.md` Principle 5)

   **Auto-Addition:** Non-conflicting new libraries will be auto-added during `/specswarm:plan`
   <!-- AUTO-GENERATED-END -->
   ```

**Output**: data-model.md, /contracts/*, quickstart.md, (optional) agent context file

## Key rules

- Use absolute paths
- ERROR on gate failures or unresolved clarifications
427
commands/refactor.md
Normal file
@@ -0,0 +1,427 @@

---
description: Metrics-driven code quality improvement with behavior preservation
---

<!--
ATTRIBUTION CHAIN:
1. Original methodology: spec-kit-extensions by Marty Bonacci (2025)
2. Adapted: SpecLab plugin by Marty Bonacci & Claude Code (2025)
3. Based on: GitHub spec-kit | MIT License
-->

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Goal

Execute a metrics-driven refactoring workflow that improves code quality while preserving behavior.

**Key Principles**:
1. **Metrics First**: Establish a baseline before refactoring
2. **Behavior Preservation**: No functional changes
3. **Testable**: Verify identical behavior before/after
4. **Measurable Improvement**: Quantify quality gains
5. **Incremental**: Small, safe refactoring steps

**Refactoring Types**: Extract function/module, reduce complexity, eliminate duplication, improve naming, optimize performance

**Coverage**: Addresses ~10% of development work (quality improvement)

---

## Smart Integration

```bash
SPECSWARM_INSTALLED=$(claude plugin list | grep -q "specswarm" && echo "true" || echo "false")
SPECTEST_INSTALLED=$(claude plugin list | grep -q "spectest" && echo "true" || echo "false")

if [ "$SPECTEST_INSTALLED" = "true" ]; then
    EXECUTION_MODE="parallel"
    ENABLE_HOOKS=true
    ENABLE_METRICS=true
    echo "🎯 SpecTest detected (parallel execution, hooks, metrics enabled)"
elif [ "$SPECSWARM_INSTALLED" = "true" ]; then
    ENABLE_TECH_VALIDATION=true
    echo "🎯 SpecSwarm detected (tech stack enforcement enabled)"
fi
```

---

## Execution Steps

### 1. Discover Refactor Context

```bash
REPO_ROOT=$(git rev-parse --show-toplevel 2>/dev/null || pwd)
CURRENT_BRANCH=$(git rev-parse --abbrev-ref HEAD 2>/dev/null)
REFACTOR_NUM=$(echo "$CURRENT_BRANCH" | grep -oE 'refactor/([0-9]{3})' | grep -oE '[0-9]{3}')

if [ -z "$REFACTOR_NUM" ]; then
    echo "♻️ Refactor Workflow"
    echo "Provide refactor number:"
    read -p "Refactor number: " REFACTOR_NUM
    REFACTOR_NUM=$(printf "%03d" "$REFACTOR_NUM")
fi

# Source features location helper
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PLUGIN_DIR="$(dirname "$SCRIPT_DIR")"
source "$PLUGIN_DIR/lib/features-location.sh"

# Initialize features directory
ensure_features_dir "$REPO_ROOT"

FEATURE_DIR="${FEATURES_DIR}/${REFACTOR_NUM}-refactor"
mkdir -p "$FEATURE_DIR"

REFACTOR_SPEC="${FEATURE_DIR}/refactor.md"
BASELINE_METRICS="${FEATURE_DIR}/baseline-metrics.md"
TASKS_FILE="${FEATURE_DIR}/tasks.md"
```
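
A note on the zero-padding above: `printf "%03d"` normalizes input like `7` to the three-digit form used in branch and directory names. A standalone sketch (helper name illustrative):

```shell
# Illustrative helper mirroring the zero-padding step above
pad3() { printf "%03d" "$1"; }

pad3 7   # prints 007
# Caveat: input with a leading zero (e.g. 042) is parsed as octal by printf.
```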

---

### 2. Establish Baseline Metrics

Analyze target code and establish baseline:

```markdown
# Baseline Metrics: Refactor ${REFACTOR_NUM}

**Target**: [File/module/function to refactor]
**Analysis Date**: YYYY-MM-DD

---

## Code Metrics

### Complexity Metrics
- **Cyclomatic Complexity**: [N] (target: <10)
- **Cognitive Complexity**: [N] (target: <15)
- **Nesting Depth**: [N] (target: <4)
- **Function Length**: [N lines] (target: <50)

### Quality Metrics
- **Code Duplication**: [N%] (target: <3%)
- **Test Coverage**: [N%] (target: >80%)
- **Maintainability Index**: [N/100] (target: >65)

### Performance Metrics (if applicable)
- **Execution Time**: [duration]
- **Memory Usage**: [MB]
- **Database Queries**: [N]

---

## Identified Issues

| Issue | Severity | Metric | Current | Target |
|-------|----------|--------|---------|--------|
| [Issue 1] | High | [Metric] | [Value] | [Value] |
| [Issue 2] | Medium | [Metric] | [Value] | [Value] |

---

## Refactoring Opportunities

### Opportunity 1: [Type - e.g., Extract Function]
**Location**: [file:line]
**Issue**: [What's wrong]
**Approach**: [How to refactor]
**Expected Improvement**: [Metric improvement]

[Repeat for each opportunity]

---

## Behavior Preservation Tests

**Existing Tests**: [N tests]
**Coverage**: [N%]
**Additional Tests Needed**: [Y/N - list if yes]

**Test Strategy**:
- Run full test suite before refactoring
- Verify 100% pass rate
- Run same tests after each refactor step
- Verify identical results

---

## Success Criteria

**Metrics Improvement Targets**:
- Complexity: [current] → [target]
- Duplication: [current] → [target]
- Test Coverage: [current] → [target]
- Maintainability: [current] → [target]

**Non-Negotiables**:
- ✅ All tests pass (before and after)
- ✅ No functional changes
- ✅ No performance regression
```

Write to `$BASELINE_METRICS`.
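
Real projects should use dedicated analyzers for these numbers (ESLint complexity rules, coverage reporters, duplication detectors); as a purely illustrative stand-in, a rough proxy can be scripted. This naive sketch only counts branch keywords and is not a substitute for a real tool:

```shell
# Naive illustration (NOT a real complexity metric): count branch keywords
# in a file as a rough proxy for cyclomatic complexity.
branch_count() {
    grep -oE '\b(if|for|while|case)\b' "$1" | wc -l
}
```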

---

### 3. Create Refactor Specification

```markdown
# Refactor ${REFACTOR_NUM}: [Title]

**Status**: Active
**Created**: YYYY-MM-DD
**Baseline Metrics**: ${BASELINE_METRICS}

---

## Refactoring Goal

**What We're Improving**: [Brief description]

**Why**: [Motivation]
- Pain point 1
- Pain point 2

**Type**: [Extract Function/Reduce Complexity/Eliminate Duplication/Improve Naming/Optimize Performance]

---

## Target Code

**Location**: [file:line range]
**Current Issues**:
- Issue 1
- Issue 2

---

## Refactoring Plan

### Step 1: [Refactor Action]
**What**: [Description]
**Files**: [list]
**Validation**: [How to verify]

### Step 2: [Refactor Action]
[Repeat pattern]

---

## Behavior Preservation

**Critical**: No functional changes allowed

**Validation Strategy**:
1. Capture test suite results (before)
2. Apply refactoring step
3. Run test suite (after)
4. Compare results (must be identical)
5. Repeat for each step

---

## Expected Improvements

**Metrics**:
| Metric | Before | After | Improvement |
|--------|--------|-------|-------------|
| Complexity | [N] | [Target] | [%] |
| Duplication | [N%] | [Target] | [%] |
| Maintainability | [N] | [Target] | [%] |

---

## Risks

| Risk | Mitigation |
|------|------------|
| [Risk 1] | [Mitigation] |

---

## Tech Stack Compliance

${TECH_STACK_VALIDATION}
```

Write to `$REFACTOR_SPEC`.

---

### 4. Generate Tasks

```markdown
# Tasks: Refactor ${REFACTOR_NUM}

**Workflow**: Refactor (Metrics-Driven, Behavior-Preserving)

---

## Phase 1: Baseline Establishment

### T001: Establish Baseline Metrics
**Description**: Run code analysis tools, capture metrics
**Validation**: Baseline documented in ${BASELINE_METRICS}

### T002: Run Full Test Suite (Baseline)
**Description**: Capture test results before refactoring
**Expected**: 100% pass rate (or document failures)
**Parallel**: No

---

## Phase 2: Incremental Refactoring

[For each refactoring step, create task sequence:]

### T003: Refactor Step 1 - [Description]
**Description**: [What's changing]
**Files**: [list]
**Validation**: Code compiles, no functional changes
**Parallel**: No (incremental steps)

### T004: Test Step 1
**Description**: Run test suite after step 1
**Expected**: Identical results to baseline
**Parallel**: No (depends on T003)

[Repeat T00N/T00N+1 pattern for each refactor step]

---

## Phase 3: Validation and Metrics

### T00N: Run Full Test Suite (Final)
**Description**: Verify all tests still pass
**Expected**: 100% pass rate, identical to baseline

### T00N+1: Measure Final Metrics
**Description**: Capture metrics after refactoring
**Expected**: Measurable improvement in target metrics
**Parallel**: [P]

### T00N+2: Compare Metrics
**Description**: Generate before/after comparison
**Expected**: Improvements meet success criteria
**Parallel**: No

---

## Summary

**Total Tasks**: [N]
**Refactoring Steps**: [N]
**Estimated Time**: [time]

**Success Criteria**:
- ✅ All tests pass (identical results before/after)
- ✅ Measurable metrics improvement
- ✅ No functional changes
- ✅ No performance regression
```

Write to `$TASKS_FILE`.

---

### 5. Execute Refactoring Workflow

Execute tasks incrementally with test validation after each step:

```
♻️ Executing Refactor Workflow

Phase 1: Baseline Establishment
  T001: Establish Baseline Metrics
    [Analyze code, capture metrics]
    ✓ Baseline: Complexity=45, Duplication=12%, Maintainability=52

  T002: Run Full Test Suite (Baseline)
    [Run tests]
    ✓ 247 tests passed

Phase 2: Incremental Refactoring
  T003: Extract Function - processUserData
    [Apply refactoring]
  T004: Test Step 1
    [Run tests]
    ✓ 247 tests passed (identical to baseline)

  [Repeat for each refactoring step]

Phase 3: Validation and Metrics
  T00N: Run Full Test Suite (Final)
    ✓ 247 tests passed (100% identical to baseline)

  T00N+1: Measure Final Metrics
    ✓ Complexity=12 (was 45) - 73% improvement ✅
    ✓ Duplication=2% (was 12%) - 83% improvement ✅
    ✓ Maintainability=78 (was 52) - 50% improvement ✅

  T00N+2: Compare Metrics
    ✓ All success criteria met
```

---

## Final Output

```
✅ Refactor Workflow Complete - Refactor ${REFACTOR_NUM}

📊 Metrics Improvement:
| Metric | Before | After | Improvement |
|--------|--------|-------|-------------|
| Complexity | [N] | [N] | [%] ✅ |
| Duplication | [N%] | [N%] | [%] ✅ |
| Maintainability | [N] | [N] | [%] ✅ |

✅ Behavior Preservation:
- All tests passed (before and after)
- Identical test results
- No functional changes
- No performance regression

⏱️ Time to Refactor: ${DURATION}

📈 Next Steps:
1. Review: ${REFACTOR_SPEC}
2. Commit refactoring
3. Share metrics improvements with team
```

---

## Operating Principles

1. **Metrics First**: Establish measurable baseline
2. **Incremental**: Small, safe refactoring steps
3. **Test After Each Step**: Verify behavior preservation
4. **Measurable**: Quantify quality improvements
5. **No Functional Changes**: Behavior must be identical
6. **Tech Compliance**: Validate against tech stack

---

## Success Criteria

✅ Baseline metrics established
✅ Incremental refactoring steps completed
✅ All tests pass (before and after)
✅ Identical test results (behavior preserved)
✅ Measurable metrics improvement
✅ No performance regression
✅ Tech stack compliant

---

**Workflow Coverage**: Addresses ~10% of development work (quality improvement)
944
commands/release.md
Normal file
@@ -0,0 +1,944 @@

---
description: Comprehensive release preparation workflow including quality gates, security audit, changelog generation, version bumping, tagging, and publishing
args:
  - name: --patch
    description: Create patch release (bug fixes, 1.0.0 → 1.0.1)
    required: false
  - name: --minor
    description: Create minor release (new features, 1.0.0 → 1.1.0)
    required: false
  - name: --major
    description: Create major release (breaking changes, 1.0.0 → 2.0.0)
    required: false
  - name: --skip-audit
    description: Skip security audit (not recommended for production)
    required: false
---

## Goal

Orchestrate a complete release workflow with quality gates, version bumping, and publishing automation.

## User Input

```text
$ARGUMENTS
```

---

## What This Command Does

`/specswarm:release` orchestrates a complete release workflow:

1. **Pre-Release Validation** - Checks git status, branch protection, uncommitted changes
2. **Quality Gates** - Runs tests, linting, type checking, build verification
3. **Security Audit** - Optional security scan (recommended for production releases)
4. **Version Bumping** - Semantic versioning (patch/minor/major)
5. **Changelog Generation** - Auto-generates changelog from git commits
6. **Git Tagging** - Creates annotated git tags
7. **Artifact Building** - Builds production-ready artifacts
8. **Publishing** - Optional npm publish and GitHub release creation

---
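
To make the version-bumping step concrete, here is a minimal standalone sketch of semantic-version bumping; the `semver_bump` helper in the Implementation section follows the same logic:

```shell
# Minimal semver bump sketch (mirrors the release script's helper)
semver_bump() {
    local version=$1 bump_type=$2
    local major minor patch
    IFS='.' read -r major minor patch <<< "${version#v}"   # strip optional leading "v"
    case "$bump_type" in
        major) major=$((major + 1)); minor=0; patch=0 ;;
        minor) minor=$((minor + 1)); patch=0 ;;
        patch) patch=$((patch + 1)) ;;
    esac
    echo "${major}.${minor}.${patch}"
}

semver_bump "1.2.3" patch    # prints 1.2.4
semver_bump "1.2.3" minor    # prints 1.3.0
semver_bump "v1.2.3" major   # prints 2.0.0
```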

## When to Use This Command

- Preparing production releases
- Creating new package versions
- Publishing to npm registry
- Creating GitHub releases
- After completing a feature milestone
- Before deploying to production

---

## Prerequisites

- Git repository with clean working directory
- On a release-ready branch (main/master or release branch)
- All tests passing
- package.json exists (for version bumping)
- npm credentials configured (if publishing to npm)
- GitHub CLI installed (if creating GitHub releases)

---

## Usage

```bash
/specswarm:release
```

**Options** (via interactive prompts):
- Release type (patch/minor/major)
- Run security audit (yes/no)
- Publish to npm (yes/no)
- Create GitHub release (yes/no)
- Push to remote (yes/no)

---

## Output

1. Updated `package.json` with new version
2. Generated/updated `CHANGELOG.md`
3. Git tag (e.g., `v1.2.3`)
4. Built artifacts (in `dist/` or `build/`)
5. Optional: Published npm package
6. Optional: GitHub release with notes

---

## Implementation

```bash
#!/bin/bash

set -euo pipefail

# ============================================================================
# RELEASE COMMAND - v3.1.0
# ============================================================================

echo "📦 SpecSwarm Release Workflow"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""

# ============================================================================
# CONFIGURATION
# ============================================================================

RELEASE_TYPE="patch"
RUN_SECURITY_AUDIT="no"
PUBLISH_TO_NPM="no"
CREATE_GITHUB_RELEASE="no"
PUSH_TO_REMOTE="yes"
DRY_RUN="no"

CURRENT_VERSION=""
NEW_VERSION=""
CHANGELOG_FILE="CHANGELOG.md"
TAG_NAME=""

# ============================================================================
# HELPER FUNCTIONS
# ============================================================================

log_section() {
    echo ""
    echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
    echo "$1"
    echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
    echo ""
}

log_check() {
    local check_name=$1
    local status=$2
    local message=$3

    if [ "$status" = "pass" ]; then
        echo "✅ $check_name: $message"
    elif [ "$status" = "warn" ]; then
        echo "⚠️  $check_name: $message"
    else
        echo "❌ $check_name: $message"
    fi
}

semver_bump() {
    local version=$1
    local bump_type=$2

    # Parse version (e.g., "1.2.3" -> major=1, minor=2, patch=3)
    local major minor patch
    IFS='.' read -r major minor patch <<< "${version#v}"

    case "$bump_type" in
        major)
            major=$((major + 1))
            minor=0
            patch=0
            ;;
        minor)
            minor=$((minor + 1))
            patch=0
            ;;
        patch)
            patch=$((patch + 1))
            ;;
    esac

    echo "${major}.${minor}.${patch}"
}

generate_changelog_entry() {
    local version=$1
    local previous_tag=$2
    local date
    date=$(date +%Y-%m-%d)

    echo "## [$version] - $date"
    echo ""

    # Get commits since last tag
    if [ -n "$previous_tag" ]; then
        commits=$(git log "$previous_tag..HEAD" --pretty=format:"- %s" --no-merges 2>/dev/null || echo "- Initial release")
    else
        commits=$(git log --pretty=format:"- %s" --no-merges 2>/dev/null || echo "- Initial release")
    fi

    # Categorize commits
    echo "### Added"
    echo "$commits" | grep -i "^- \(feat\|add\|new\)" || echo "- No new features"
    echo ""

    echo "### Fixed"
    echo "$commits" | grep -i "^- \(fix\|bugfix\|patch\)" || echo "- No bug fixes"
    echo ""

    echo "### Changed"
    echo "$commits" | grep -i "^- \(update\|change\|refactor\|improve\)" || echo "- No changes"
    echo ""

    echo "### Deprecated"
    echo "- None"
    echo ""

    echo "### Removed"
    echo "- None"
    echo ""

    echo "### Security"
    echo "- Security audit passed"
    echo ""
}

# ============================================================================
# PREFLIGHT CHECKS
# ============================================================================

log_section "Preflight Checks"

# Check 1: Git repository
if ! git rev-parse --git-dir > /dev/null 2>&1; then
    log_check "Git Repository" "fail" "Not in a git repository"
    exit 1
fi
log_check "Git Repository" "pass" "Valid git repository"

# Check 2: Clean working directory
DIRTY=$(git status --porcelain 2>/dev/null)
if [ -n "$DIRTY" ]; then
    log_check "Working Directory" "fail" "Uncommitted changes detected"
    echo ""
    git status --short
    echo ""
    echo "Please commit or stash changes before releasing"
    exit 1
fi
log_check "Working Directory" "pass" "Clean working directory"

# Check 3: Current branch
CURRENT_BRANCH=$(git rev-parse --abbrev-ref HEAD 2>/dev/null)
log_check "Current Branch" "pass" "$CURRENT_BRANCH"

RELEASE_BRANCHES=("main" "master" "develop" "release")
IS_RELEASE_BRANCH=false
for branch in "${RELEASE_BRANCHES[@]}"; do
    if [ "$CURRENT_BRANCH" = "$branch" ] || [[ "$CURRENT_BRANCH" == release/* ]]; then
        IS_RELEASE_BRANCH=true
        break
    fi
done

if [ "$IS_RELEASE_BRANCH" = false ]; then
    log_check "Branch Validation" "warn" "Not on a typical release branch (main/master/develop/release/*)"
    echo "   Current branch: $CURRENT_BRANCH"
    echo "   You may want to merge to a release branch first"
fi

# Check 4: package.json exists
if [ ! -f "package.json" ]; then
    log_check "package.json" "fail" "No package.json found"
    echo "   Release command requires package.json for version management"
    exit 1
fi
log_check "package.json" "pass" "Found package.json"

# Check 5: Get current version
CURRENT_VERSION=$(jq -r '.version' package.json)
if [ "$CURRENT_VERSION" = "null" ] || [ -z "$CURRENT_VERSION" ]; then
    log_check "Version" "fail" "No version in package.json"
    exit 1
fi
log_check "Current Version" "pass" "$CURRENT_VERSION"

# Check 6: Remote repository
if git remote -v | grep -q "origin"; then
    REMOTE_URL=$(git remote get-url origin)
    log_check "Remote Repository" "pass" "$REMOTE_URL"
    HAS_REMOTE=true
else
    log_check "Remote Repository" "warn" "No remote 'origin' configured"
    HAS_REMOTE=false
fi

# Check 7: GitHub CLI (for GitHub releases)
if command -v gh > /dev/null 2>&1; then
    log_check "GitHub CLI" "pass" "gh available"
    HAS_GH_CLI=true
else
    log_check "GitHub CLI" "warn" "gh not installed (GitHub releases disabled)"
    HAS_GH_CLI=false
fi

# ============================================================================
# QUALITY GATES
# ============================================================================

log_section "Quality Gates"

# Check for quality-standards.md
QUALITY_GATES_ENFORCED=false
if [ -f ".specswarm/quality-standards.md" ]; then
    log_check "Quality Standards" "pass" "Found quality-standards.md"
    QUALITY_GATES_ENFORCED=true

    # Extract quality gate settings
    if grep -q "enforce_gates: true" .specswarm/quality-standards.md 2>/dev/null; then
        echo "   Quality gates are enforced"
    fi
else
    log_check "Quality Standards" "warn" "No quality-standards.md (run /specswarm:init)"
fi

# Run tests if available
if jq -e '.scripts.test' package.json > /dev/null 2>&1; then
    echo ""
    echo "Running tests..."
    if npm test 2>&1 | tee /tmp/test-output.txt; then
        log_check "Tests" "pass" "All tests passed"
    else
        log_check "Tests" "fail" "Tests failed"
        echo ""
        echo "Cannot release with failing tests"
        exit 1
    fi
else
    log_check "Tests" "warn" "No test script defined"
fi

# Run linting if available
if jq -e '.scripts.lint' package.json > /dev/null 2>&1; then
    echo ""
    echo "Running linter..."
    if npm run lint 2>&1 | tee /tmp/lint-output.txt; then
        log_check "Linting" "pass" "Linting passed"
    else
        log_check "Linting" "fail" "Linting failed"
        echo ""
        echo "Cannot release with linting errors"
        exit 1
    fi
else
    log_check "Linting" "warn" "No lint script defined"
fi
|
||||
|
||||
# Run type checking if available (TypeScript)
|
||||
if jq -e '.scripts["type-check"]' package.json > /dev/null 2>&1; then
|
||||
echo ""
|
||||
echo "Running type check..."
|
||||
if npm run type-check 2>&1 | tee /tmp/typecheck-output.txt; then
|
||||
log_check "Type Check" "pass" "Type checking passed"
|
||||
else
|
||||
log_check "Type Check" "fail" "Type errors found"
|
||||
echo ""
|
||||
echo "Cannot release with type errors"
|
||||
exit 1
|
||||
fi
|
||||
else
|
||||
log_check "Type Check" "warn" "No type-check script defined"
|
||||
fi
|
||||
|
||||
# Run build if available
|
||||
if jq -e '.scripts.build' package.json > /dev/null 2>&1; then
|
||||
echo ""
|
||||
echo "Running build..."
|
||||
if npm run build 2>&1 | tee /tmp/build-output.txt; then
|
||||
log_check "Build" "pass" "Build successful"
|
||||
else
|
||||
log_check "Build" "fail" "Build failed"
|
||||
echo ""
|
||||
echo "Cannot release with build errors"
|
||||
exit 1
|
||||
fi
|
||||
else
|
||||
log_check "Build" "warn" "No build script defined"
|
||||
fi
|
||||
|
||||
echo ""
|
||||
echo "✅ All quality gates passed"
|
||||

# ============================================================================
# INTERACTIVE CONFIGURATION
# ============================================================================

log_section "Release Configuration"

# Question 1: Release type
cat << 'EOF_QUESTION_1' | claude --question
{
  "questions": [
    {
      "question": "What type of release is this?",
      "header": "Release type",
      "multiSelect": false,
      "options": [
        {
          "label": "Patch (bug fixes)",
          "description": "Patch release - bug fixes and minor updates (1.0.0 → 1.0.1)"
        },
        {
          "label": "Minor (new features)",
          "description": "Minor release - new features, backward compatible (1.0.0 → 1.1.0)"
        },
        {
          "label": "Major (breaking changes)",
          "description": "Major release - breaking changes or major features (1.0.0 → 2.0.0)"
        }
      ]
    }
  ]
}
EOF_QUESTION_1

# Parse release type
if echo "$CLAUDE_ANSWERS" | jq -e '.["Release type"] == "Minor (new features)"' > /dev/null 2>&1; then
  RELEASE_TYPE="minor"
elif echo "$CLAUDE_ANSWERS" | jq -e '.["Release type"] == "Major (breaking changes)"' > /dev/null 2>&1; then
  RELEASE_TYPE="major"
else
  RELEASE_TYPE="patch"
fi

# Calculate new version
NEW_VERSION=$(semver_bump "$CURRENT_VERSION" "$RELEASE_TYPE")
TAG_NAME="v${NEW_VERSION}"

echo "Current version: $CURRENT_VERSION"
echo "New version: $NEW_VERSION ($RELEASE_TYPE)"
echo "Tag name: $TAG_NAME"
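# The semver_bump helper called above is defined elsewhere in the plugin; the
# sketch below is an assumed minimal implementation (name and behavior are
# assumptions, shown only to document the expected contract).
semver_bump() {
  local version="$1" release_type="$2"
  local major minor patch
  # Split "MAJOR.MINOR.PATCH" into its three components
  IFS='.' read -r major minor patch <<< "$version"
  case "$release_type" in
    major) echo "$((major + 1)).0.0" ;;
    minor) echo "${major}.$((minor + 1)).0" ;;
    patch) echo "${major}.${minor}.$((patch + 1))" ;;
    *)     echo "$version" ;;  # unknown type: leave the version unchanged
  esac
}

semver_bump "1.2.3" "minor"   # → 1.3.0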
# Question 2: Security audit
cat << 'EOF_QUESTION_2' | claude --question
{
  "questions": [
    {
      "question": "Run security audit before releasing?",
      "header": "Security",
      "multiSelect": false,
      "options": [
        {
          "label": "Yes, run audit",
          "description": "Recommended for production releases - scans dependencies and code"
        },
        {
          "label": "No, skip audit",
          "description": "Skip security audit (not recommended for production)"
        }
      ]
    }
  ]
}
EOF_QUESTION_2

# Parse security audit answer (default to "no" so the variable is always set)
RUN_SECURITY_AUDIT="no"
if echo "$CLAUDE_ANSWERS" | jq -e '.["Security"] == "Yes, run audit"' > /dev/null 2>&1; then
  RUN_SECURITY_AUDIT="yes"
fi

echo "Security audit: $RUN_SECURITY_AUDIT"
# Question 3: Publishing options
PUBLISH_OPTIONS=()

# Always ask about npm publishing
# Note: use parentheses so each JSON object is appended as a new array
# element; a bare `+='...'` would concatenate onto the first element instead.
PUBLISH_OPTIONS+=('{
  "label": "Publish to npm",
  "description": "Publish package to npm registry (requires npm credentials)"
}')

# Add GitHub release option if gh CLI is available
if [ "$HAS_GH_CLI" = true ]; then
  PUBLISH_OPTIONS+=('{
    "label": "Create GitHub release",
    "description": "Create GitHub release with auto-generated notes"
  }')
fi

# Add push to remote option if remote exists
if [ "$HAS_REMOTE" = true ]; then
  PUBLISH_OPTIONS+=('{
    "label": "Push to remote",
    "description": "Push commits and tags to remote repository"
  }')
fi

# Build JSON array for options
PUBLISH_OPTIONS_JSON=$(printf '%s\n' "${PUBLISH_OPTIONS[@]}" | jq -s .)

cat << EOF_QUESTION_3 | claude --question
{
  "questions": [
    {
      "question": "Select publishing and deployment options:",
      "header": "Publishing",
      "multiSelect": true,
      "options": $PUBLISH_OPTIONS_JSON
    }
  ]
}
EOF_QUESTION_3

# Parse publishing options (default each flag to "no" so it is always set)
PUBLISH_TO_NPM="no"
CREATE_GITHUB_RELEASE="no"
PUSH_TO_REMOTE="no"

if echo "$CLAUDE_ANSWERS" | jq -e '.["Publishing"]' | grep -q "Publish to npm"; then
  PUBLISH_TO_NPM="yes"
fi

if echo "$CLAUDE_ANSWERS" | jq -e '.["Publishing"]' | grep -q "Create GitHub release"; then
  CREATE_GITHUB_RELEASE="yes"
fi

if echo "$CLAUDE_ANSWERS" | jq -e '.["Publishing"]' | grep -q "Push to remote"; then
  PUSH_TO_REMOTE="yes"
fi

echo "Publish to npm: $PUBLISH_TO_NPM"
echo "Create GitHub release: $CREATE_GITHUB_RELEASE"
echo "Push to remote: $PUSH_TO_REMOTE"
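The grep-based answer parsing above works, but it can false-match if one option label is a substring of another. A tighter membership test using jq's `index` on the answer array is sketched below (the `$CLAUDE_ANSWERS` payload shape shown here is an assumption for illustration):

```bash
# Example answers payload (assumed shape: question header → array of labels)
CLAUDE_ANSWERS='{"Publishing": ["Publish to npm", "Push to remote"]}'

# jq `index` returns the element's position (truthy) or null (falsy),
# so `jq -e` exits 0 only when the label is actually in the array.
if echo "$CLAUDE_ANSWERS" | jq -e '.["Publishing"] | index("Publish to npm")' > /dev/null; then
  PUBLISH_TO_NPM="yes"
else
  PUBLISH_TO_NPM="no"
fi

echo "$PUBLISH_TO_NPM"   # → yes
```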

# ============================================================================
# SECURITY AUDIT (OPTIONAL)
# ============================================================================

if [ "$RUN_SECURITY_AUDIT" = "yes" ]; then
  log_section "Security Audit"

  echo "Running security audit..."
  # Note: This would call /specswarm:security-audit in production.
  # For now, we run npm audit directly.
  if command -v npm > /dev/null 2>&1; then
    npm audit --audit-level=moderate 2>&1 | tee /tmp/audit-output.txt
    if [ "${PIPESTATUS[0]}" -eq 0 ]; then
      log_check "Security Audit" "pass" "No moderate+ vulnerabilities"
    else
      log_check "Security Audit" "fail" "Vulnerabilities found"
      echo ""
      echo "Security audit failed. Fix vulnerabilities before releasing."
      echo "Run: npm audit fix"
      exit 1
    fi
  else
    log_check "Security Audit" "warn" "npm not available, skipping audit"
  fi
fi
# ============================================================================
# VERSION BUMPING
# ============================================================================

log_section "Version Bumping"

echo "Updating version in package.json..."

# Update package.json version
jq --arg version "$NEW_VERSION" '.version = $version' package.json > package.json.tmp
mv package.json.tmp package.json

log_check "package.json" "pass" "Updated to $NEW_VERSION"

# Update package-lock.json if it exists
# (lockfile v2+ also records the version under .packages[""], so update both)
if [ -f "package-lock.json" ]; then
  jq --arg version "$NEW_VERSION" \
    '.version = $version | if .packages[""]? then .packages[""].version = $version else . end' \
    package-lock.json > package-lock.json.tmp
  mv package-lock.json.tmp package-lock.json
  log_check "package-lock.json" "pass" "Updated to $NEW_VERSION"
fi
# ============================================================================
# CHANGELOG GENERATION
# ============================================================================

log_section "Changelog Generation"

# Get previous tag
PREVIOUS_TAG=$(git describe --tags --abbrev=0 2>/dev/null || echo "")

if [ -n "$PREVIOUS_TAG" ]; then
  echo "Previous tag: $PREVIOUS_TAG"
else
  echo "No previous tags found (first release)"
fi

# Generate changelog entry
CHANGELOG_ENTRY=$(generate_changelog_entry "$NEW_VERSION" "$PREVIOUS_TAG")

# Update or create CHANGELOG.md
if [ -f "$CHANGELOG_FILE" ]; then
  echo "Updating $CHANGELOG_FILE..."

  # Prepend new entry to existing changelog
  {
    echo "# Changelog"
    echo ""
    echo "$CHANGELOG_ENTRY"
    echo ""
    tail -n +2 "$CHANGELOG_FILE"
  } > "${CHANGELOG_FILE}.tmp"

  mv "${CHANGELOG_FILE}.tmp" "$CHANGELOG_FILE"
else
  echo "Creating $CHANGELOG_FILE..."

  cat > "$CHANGELOG_FILE" << EOF
# Changelog

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

$CHANGELOG_ENTRY
EOF
fi

log_check "Changelog" "pass" "Updated $CHANGELOG_FILE"
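The `generate_changelog_entry` helper invoked above ships with the plugin; a hedged sketch of what it is assumed to do — group commit subjects since the previous tag into Keep a Changelog sections by conventional-commit prefix — might look like this (function name matches the call site, internals are assumptions):

```bash
# Assumed sketch: build a changelog section from commit subjects.
# $1 = new version, $2 = previous tag (may be empty on a first release).
generate_changelog_entry() {
  local version="$1" previous_tag="$2" range subjects
  if [ -n "$previous_tag" ]; then range="${previous_tag}..HEAD"; else range="HEAD"; fi
  subjects=$(git log --pretty=format:'%s' "$range" 2>/dev/null)
  {
    echo "## [$version] - $(date +%Y-%m-%d)"
    echo ""
    echo "### Added"
    echo "$subjects" | grep -E '^(feat|add)' | sed 's/^/- /'
    echo ""
    echo "### Fixed"
    echo "$subjects" | grep -E '^(fix|bugfix)' | sed 's/^/- /'
    echo ""
    echo "### Changed"
    echo "$subjects" | grep -E '^(update|refactor|change)' | sed 's/^/- /'
  }
}
```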

# ============================================================================
# GIT COMMIT & TAG
# ============================================================================

log_section "Git Commit & Tag"

# Stage changes
git add package.json package-lock.json "$CHANGELOG_FILE" 2>/dev/null || true

# Create commit
COMMIT_MESSAGE="chore(release): v${NEW_VERSION}

🚀 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>"

git commit -m "$COMMIT_MESSAGE"
log_check "Git Commit" "pass" "Created release commit"

# Create annotated tag
TAG_MESSAGE="Release v${NEW_VERSION}

$(echo "$CHANGELOG_ENTRY" | head -n 30)

🚀 Generated with [Claude Code](https://claude.com/claude-code)"

git tag -a "$TAG_NAME" -m "$TAG_MESSAGE"
log_check "Git Tag" "pass" "Created tag $TAG_NAME"

# ============================================================================
# BUILD ARTIFACTS
# ============================================================================

if jq -e '.scripts.build' package.json > /dev/null 2>&1; then
  log_section "Building Artifacts"

  echo "Running production build..."
  if npm run build; then
    log_check "Build" "pass" "Production artifacts built"
  else
    log_check "Build" "fail" "Build failed"
    echo ""
    echo "Build failed after version bump. Please fix and retry."
    exit 1
  fi
fi
# ============================================================================
# PUSH TO REMOTE
# ============================================================================

if [ "$PUSH_TO_REMOTE" = "yes" ] && [ "$HAS_REMOTE" = true ]; then
  log_section "Push to Remote"

  echo "Pushing to remote..."
  git push origin "$CURRENT_BRANCH"
  log_check "Push Branch" "pass" "Pushed $CURRENT_BRANCH"

  git push origin "$TAG_NAME"
  log_check "Push Tag" "pass" "Pushed $TAG_NAME"
fi

# ============================================================================
# NPM PUBLISHING
# ============================================================================

if [ "$PUBLISH_TO_NPM" = "yes" ]; then
  log_section "NPM Publishing"

  echo "Publishing to npm..."

  # Check if logged in to npm
  if npm whoami > /dev/null 2>&1; then
    NPM_USER=$(npm whoami)
    echo "Logged in as: $NPM_USER"

    # Publish to npm
    if npm publish; then
      log_check "npm publish" "pass" "Published to npm registry"
    else
      log_check "npm publish" "fail" "npm publish failed"
      echo ""
      echo "npm publish failed. Check credentials and package.json configuration."
      exit 1
    fi
  else
    log_check "npm auth" "fail" "Not logged in to npm"
    echo ""
    echo "Please run: npm login"
    exit 1
  fi
fi

# ============================================================================
# GITHUB RELEASE
# ============================================================================

if [ "$CREATE_GITHUB_RELEASE" = "yes" ] && [ "$HAS_GH_CLI" = true ]; then
  log_section "GitHub Release"

  echo "Creating GitHub release..."

  # Only include a comparison link when a previous tag exists
  # (on a first release there is nothing to compare against)
  REPO_SLUG=$(git remote get-url origin | sed 's/.*github.com[:/]\(.*\)\.git/\1/')
  if [ -n "$PREVIOUS_TAG" ]; then
    COMPARE_LINE="**Full Changelog**: https://github.com/${REPO_SLUG}/compare/${PREVIOUS_TAG}...${TAG_NAME}"
  else
    COMPARE_LINE=""
  fi

  # Create release notes
  RELEASE_NOTES=$(cat << EOF
$CHANGELOG_ENTRY

---

$COMPARE_LINE

🚀 Generated with [Claude Code](https://claude.com/claude-code)
EOF
)

  # Create GitHub release
  if gh release create "$TAG_NAME" \
    --title "Release $TAG_NAME" \
    --notes "$RELEASE_NOTES"; then
    log_check "GitHub Release" "pass" "Created release $TAG_NAME"
  else
    log_check "GitHub Release" "fail" "Failed to create GitHub release"
    echo ""
    echo "GitHub release creation failed. Check gh CLI authentication."
  fi
fi
# ============================================================================
# SUMMARY
# ============================================================================

log_section "Release Complete"

echo "🎉 Successfully released version $NEW_VERSION!"
echo ""
echo "Summary:"
echo "  Previous version: $CURRENT_VERSION"
echo "  New version: $NEW_VERSION"
echo "  Release type: $RELEASE_TYPE"
echo "  Git tag: $TAG_NAME"
echo "  Branch: $CURRENT_BRANCH"
echo ""

if [ "$PUBLISH_TO_NPM" = "yes" ]; then
  echo "  📦 Published to npm: https://www.npmjs.com/package/$(jq -r '.name' package.json)"
fi

if [ "$CREATE_GITHUB_RELEASE" = "yes" ]; then
  REPO_URL=$(git remote get-url origin | sed 's/\.git$//' | sed 's/git@github.com:/https:\/\/github.com\//')
  echo "  🚀 GitHub release: ${REPO_URL}/releases/tag/${TAG_NAME}"
fi

echo ""
echo "Next steps:"
if [ "$PUSH_TO_REMOTE" = "no" ]; then
  echo "  - Push changes: git push origin $CURRENT_BRANCH --tags"
fi
if [ "$PUBLISH_TO_NPM" = "no" ]; then
  echo "  - Publish to npm: npm publish"
fi
echo "  - Deploy to production environments"
echo "  - Update documentation if needed"
echo "  - Notify stakeholders of the release"
echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
```

---

## Examples

### Example 1: Patch Release (Bug Fixes)

```bash
/specswarm:release
# Select: Patch (bug fixes)
# Select: Yes, run audit
# Select: Push to remote
# Result: v1.0.1 released with security audit
```

### Example 2: Minor Release (New Features)

```bash
/specswarm:release
# Select: Minor (new features)
# Select: Yes, run audit
# Select: Publish to npm, Create GitHub release, Push to remote
# Result: v1.1.0 published to npm and GitHub
```

### Example 3: Major Release (Breaking Changes)

```bash
/specswarm:release
# Select: Major (breaking changes)
# Select: Yes, run audit
# Select: All publishing options
# Result: v2.0.0 with full publishing workflow
```

---

## Release Checklist

The command automatically handles:

- ✅ Verify clean working directory
- ✅ Run all tests
- ✅ Run linting
- ✅ Run type checking (if TypeScript)
- ✅ Run production build
- ✅ Optional security audit
- ✅ Bump version in package.json
- ✅ Update/create CHANGELOG.md
- ✅ Create git commit
- ✅ Create annotated git tag
- ✅ Push to remote
- ✅ Optional: Publish to npm
- ✅ Optional: Create GitHub release

---

## Semantic Versioning

The command follows [Semantic Versioning](https://semver.org/):

- **Patch** (1.0.0 → 1.0.1): Bug fixes, minor updates
- **Minor** (1.0.0 → 1.1.0): New features, backward compatible
- **Major** (1.0.0 → 2.0.0): Breaking changes

---

## Changelog Format

Generated changelogs follow the [Keep a Changelog](https://keepachangelog.com/) format:

```markdown
## [1.2.0] - 2025-01-15

### Added
- feat: new user dashboard
- add: dark mode support

### Fixed
- fix: authentication bug
- bugfix: memory leak

### Changed
- update: API endpoint structure
- refactor: database queries
```
---

## CI/CD Integration

Integrate with GitHub Actions:

```yaml
# .github/workflows/release.yml
name: Release
on:
  workflow_dispatch:
    inputs:
      release_type:
        description: 'Release type'
        required: true
        type: choice
        options:
          - patch
          - minor
          - major

jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: /specswarm:release
        env:
          NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```

---

## Notes

- **Protected Branches**: Some quality gates may be skipped on feature branches
- **npm Credentials**: Run `npm login` before publishing to npm
- **GitHub Releases**: Requires the `gh` CLI and authentication
- **Rollback**: Use `/specswarm:rollback` if a release fails
- **Dry Run**: The command always shows what it will do before making changes

---

## Troubleshooting

**Build Fails**:
- Check build scripts in package.json
- Verify all dependencies are installed
- Review build logs in `/tmp/build-output.txt`

**npm Publish Fails**:
- Verify npm login: `npm whoami`
- Check package name availability
- Ensure the version hasn't been published before

**GitHub Release Fails**:
- Verify gh CLI: `gh auth status`
- Check repository permissions
- Ensure the remote URL is correct

---

## See Also

- `/specswarm:security-audit` - Run security audit separately
- `/specswarm:rollback` - Rollback failed releases
- `/specswarm:analyze-quality` - Check quality before releasing
- `/specswarm:ship` - Complete feature workflow with merge

---

**Version**: 3.1.0
**Category**: Release Management
**Estimated Time**: 3-10 minutes (depending on build and publishing)
576
commands/rollback.md
Normal file
---
description: Safely rollback a failed or unwanted feature with automatic artifact cleanup
args:
  - name: --dry-run
    description: Show what would be rolled back without executing
    required: false
  - name: --keep-artifacts
    description: Backup artifacts instead of deleting them
    required: false
  - name: --force
    description: Skip confirmation prompts (dangerous!)
    required: false
---

## User Input

```text
$ARGUMENTS
```

## Goal

Safely roll back a feature that failed or is no longer wanted, including:
1. Reverting all code changes since branch creation
2. Cleaning up feature artifacts (spec.md, plan.md, tasks.md)
3. Optionally deleting the feature branch
4. Switching back to the parent branch

Provides two rollback strategies with comprehensive safety checks.

---

## Execution Steps

### Step 1: Safety Checks

```bash
echo "🔍 Performing safety checks..."
echo ""

# Check 1: Get current branch
CURRENT_BRANCH=$(git rev-parse --abbrev-ref HEAD 2>/dev/null)

if [ -z "$CURRENT_BRANCH" ]; then
  echo "❌ Error: Not in a git repository"
  exit 1
fi

# Check 2: Prevent rollback of main/master/develop
PROTECTED_BRANCHES=("main" "master" "develop" "production")

for protected in "${PROTECTED_BRANCHES[@]}"; do
  if [ "$CURRENT_BRANCH" = "$protected" ]; then
    echo "❌ Error: Cannot rollback protected branch '$CURRENT_BRANCH'"
    echo ""
    echo "💡 Rollback is designed for feature branches only."
    echo "   If you need to revert changes on $CURRENT_BRANCH, use:"
    echo "   git revert <commit-hash>"
    exit 1
  fi
done

# Check 3: Check for uncommitted changes
DIRTY=$(git status --porcelain 2>/dev/null)

if [ -n "$DIRTY" ]; then
  echo "❌ Error: Uncommitted changes detected"
  echo ""
  echo "You have uncommitted changes. Please commit or stash them first:"
  git status --short
  echo ""
  echo "To commit:"
  echo "  git add ."
  echo "  git commit -m \"WIP: save before rollback\""
  echo ""
  echo "To stash:"
  echo "  git stash push -m \"Before rollback\""
  exit 1
fi

# Check 4: Detect parent branch
# First try git config
PARENT_BRANCH=$(git config "branch.$CURRENT_BRANCH.parent" 2>/dev/null)

# Fall back to common parent branches
if [ -z "$PARENT_BRANCH" ]; then
  for candidate in "develop" "development" "dev" "main" "master"; do
    if git show-ref --verify --quiet "refs/heads/$candidate"; then
      PARENT_BRANCH="$candidate"
      break
    fi
  done
fi

# Last resort: ask user
if [ -z "$PARENT_BRANCH" ]; then
  echo "⚠️  Could not auto-detect parent branch"
  PARENT_BRANCH="main"  # default
fi

echo "✅ Safety checks passed"
echo ""
```

---

### Step 2: Gather Rollback Information

```bash
echo "📊 Analyzing feature branch: $CURRENT_BRANCH"
echo ""

# Find divergence point from parent
DIVERGE_POINT=$(git merge-base "$CURRENT_BRANCH" "$PARENT_BRANCH" 2>/dev/null)

if [ -z "$DIVERGE_POINT" ]; then
  echo "❌ Error: Could not find divergence point with $PARENT_BRANCH"
  echo "   Please specify the correct parent branch."
  exit 1
fi

# Count commits to rollback
COMMIT_COUNT=$(git rev-list --count "$DIVERGE_POINT..$CURRENT_BRANCH" 2>/dev/null)
COMMITS=$(git log --oneline "$DIVERGE_POINT..$CURRENT_BRANCH" 2>/dev/null)

# Check if already merged to parent
MERGED=$(git branch --merged "$PARENT_BRANCH" | grep -w "$CURRENT_BRANCH" || true)

# Check for feature artifacts
FEATURE_DIR=".feature"
ARTIFACTS_FOUND=false
ARTIFACT_LIST=()

if [ -d "$FEATURE_DIR" ]; then
  if [ -f "$FEATURE_DIR/spec.md" ]; then
    ARTIFACTS_FOUND=true
    ARTIFACT_LIST+=("spec.md ($(wc -l < "$FEATURE_DIR/spec.md" 2>/dev/null || echo "0") lines)")
  fi
  if [ -f "$FEATURE_DIR/plan.md" ]; then
    ARTIFACTS_FOUND=true
    ARTIFACT_LIST+=("plan.md ($(wc -l < "$FEATURE_DIR/plan.md" 2>/dev/null || echo "0") lines)")
  fi
  if [ -f "$FEATURE_DIR/tasks.md" ]; then
    ARTIFACTS_FOUND=true
    TASK_COUNT=$(grep -c "^-" "$FEATURE_DIR/tasks.md" 2>/dev/null || echo "0")
    TASK_COMPLETE=$(grep -c "^- \[x\]" "$FEATURE_DIR/tasks.md" 2>/dev/null || echo "0")
    ARTIFACT_LIST+=("tasks.md ($TASK_COUNT tasks, $TASK_COMPLETE completed)")
  fi
fi

# Check remote tracking
REMOTE_TRACKING=$(git rev-parse --abbrev-ref --symbolic-full-name "@{u}" 2>/dev/null || echo "")

# Display information
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "  ROLLBACK ANALYSIS"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
echo "Branch Information:"
echo "  Current Branch: $CURRENT_BRANCH"
echo "  Parent Branch: $PARENT_BRANCH"
echo "  Commits: $COMMIT_COUNT"
echo "  Merged Status: $([[ -n "$MERGED" ]] && echo "Already merged to $PARENT_BRANCH" || echo "Not merged")"
echo "  Remote Tracking: ${REMOTE_TRACKING:-"None (local only)"}"
echo ""

if [ "$COMMIT_COUNT" -gt 0 ]; then
  echo "Commits to Rollback:"
  echo "$COMMITS" | head -5
  if [ "$COMMIT_COUNT" -gt 5 ]; then
    echo "  ... and $((COMMIT_COUNT - 5)) more"
  fi
  echo ""
fi

if [ "$ARTIFACTS_FOUND" = true ]; then
  echo "Feature Artifacts Found:"
  for artifact in "${ARTIFACT_LIST[@]}"; do
    echo "  ✓ $artifact"
  done
  echo ""
fi

if [ -n "$MERGED" ]; then
  echo "⚠️  WARNING: This branch has already been merged to $PARENT_BRANCH"
  echo "   Rollback will create revert commits on $PARENT_BRANCH"
  echo ""
fi

if [ -n "$REMOTE_TRACKING" ]; then
  echo "⚠️  WARNING: This branch is pushed to remote ($REMOTE_TRACKING)"
  echo "   Other collaborators may have checked out this branch"
  echo ""
fi

echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
```

---

### Step 3: Dry Run Mode

**If the `--dry-run` flag is present, show what would happen and exit:**

```bash
if echo "$ARGUMENTS" | grep -q "\-\-dry-run"; then
  echo "🔍 DRY RUN MODE - No changes will be made"
  echo ""
  echo "The following actions WOULD be performed:"
  echo ""
  echo "1. Rollback Strategy: (User would choose)"
  echo "   Option A: Soft rollback (create $COMMIT_COUNT revert commits)"
  echo "   Option B: Hard rollback (reset to $PARENT_BRANCH)"
  echo ""
  echo "2. Artifact Cleanup:"
  if [ "$ARTIFACTS_FOUND" = true ]; then
    echo "   - Delete or backup: ${ARTIFACT_LIST[*]}"
  else
    echo "   - No artifacts to clean up"
  fi
  echo ""
  echo "3. Branch Cleanup:"
  echo "   - Optionally delete branch: $CURRENT_BRANCH"
  echo ""
  echo "4. Final State:"
  echo "   - Switch to: $PARENT_BRANCH"
  echo ""
  echo "Run without --dry-run to execute rollback."
  exit 0
fi
```
---

### Step 4: Interactive Rollback Configuration (if not --force)

**Skip if the `--force` flag is present (use soft rollback defaults).**

Use the **AskUserQuestion** tool:

```
Question 1: "Choose rollback strategy"
Header: "Strategy"
Options:
  1. "Soft rollback (revert commits)"
     Description: "Creates revert commits, preserves history (RECOMMENDED)"
  2. "Hard rollback (reset to parent)"
     Description: "Deletes all commits, rewrites history (DANGEROUS)"
  3. "Cancel rollback"
     Description: "Abort and keep current state"
```

Store in `$ROLLBACK_STRATEGY`.

If `$ROLLBACK_STRATEGY` == "Cancel rollback", exit with a message.

```
Question 2: "Delete feature branch after rollback?"
Header: "Branch Cleanup"
Options:
  1. "Yes, delete branch"
     Description: "Remove $CURRENT_BRANCH completely"
  2. "No, keep branch"
     Description: "Keep branch for reference (can delete later)"
```

Store in `$DELETE_BRANCH`.

```
Question 3 (if ARTIFACTS_FOUND=true): "What should we do with feature artifacts?"
Header: "Artifacts"
Options:
  1. "Delete artifacts"
     Description: "Remove spec.md, plan.md, tasks.md permanently"
  2. "Backup artifacts"
     Description: "Move to .feature.backup/$(date +%Y%m%d-%H%M%S)/"
  3. "Keep artifacts"
     Description: "Leave artifacts in place (not recommended)"
```

Store in `$ARTIFACT_ACTION`.

```
Question 4: "Final confirmation - Type 'rollback' to proceed"
Header: "Confirm"
Options:
  - Text input required
  - Must type exactly "rollback" (case-sensitive)
```

Store in `$CONFIRMATION`.

```bash
if [ "$CONFIRMATION" != "rollback" ]; then
  echo "❌ Rollback cancelled - confirmation text did not match"
  echo "   You must type exactly: rollback"
  exit 1
fi
```
---

### Step 5: Execute Rollback

```bash
echo ""
echo "🔄 Executing rollback..."
echo ""

# Step 5a: Handle artifacts first (before git operations)
if [ "$ARTIFACTS_FOUND" = true ]; then
  case "$ARTIFACT_ACTION" in
    "Delete artifacts")
      echo "🗑️  Deleting feature artifacts..."
      rm -rf "$FEATURE_DIR"
      echo "✅ Deleted artifacts"
      ;;
    "Backup artifacts")
      BACKUP_DIR=".feature.backup/$(date +%Y%m%d-%H%M%S)"
      echo "💾 Backing up artifacts to $BACKUP_DIR..."
      mkdir -p "$BACKUP_DIR"
      cp -r "$FEATURE_DIR"/* "$BACKUP_DIR/" 2>/dev/null
      rm -rf "$FEATURE_DIR"
      echo "✅ Artifacts backed up to $BACKUP_DIR"
      ;;
    "Keep artifacts")
      echo "ℹ️  Keeping artifacts in place"
      ;;
  esac
  echo ""
fi

# Step 5b: Execute rollback strategy
case "$ROLLBACK_STRATEGY" in
  "Soft rollback (revert commits)")
    echo "🔄 Creating revert commits..."

    if [ "$COMMIT_COUNT" -gt 0 ]; then
      # Create revert commits in reverse order
      git revert --no-edit "$DIVERGE_POINT..$CURRENT_BRANCH" 2>&1 || {
        echo ""
        echo "⚠️  Revert encountered conflicts"
        echo "   Conflicts must be resolved manually:"
        echo "   1. Fix conflicts in the listed files"
        echo "   2. git add <resolved-files>"
        echo "   3. git revert --continue"
        echo ""
        echo "   Or to abort:"
        echo "   git revert --abort"
        exit 1
      }

      echo "✅ Created $COMMIT_COUNT revert commits"
    else
      echo "ℹ️  No commits to revert"
    fi

    # Stay on current branch for soft rollback
    SWITCH_BRANCH=false
    ;;

  "Hard rollback (reset to parent)")
    echo "⚠️  WARNING: Performing hard rollback (history will be lost)"
    echo ""

    # Switch to parent branch
    echo "🔀 Switching to $PARENT_BRANCH..."
    git checkout "$PARENT_BRANCH" 2>&1 || {
      echo "❌ Failed to switch to $PARENT_BRANCH"
      exit 1
    }

    echo "✅ Switched to $PARENT_BRANCH"

    SWITCH_BRANCH=true
    ;;
esac

echo ""

# Step 5c: Branch cleanup
if [ "$DELETE_BRANCH" = "Yes, delete branch" ] && [ "$SWITCH_BRANCH" = false ]; then
  # For soft rollback, need to switch before deleting
  echo "🔀 Switching to $PARENT_BRANCH before branch deletion..."
  git checkout "$PARENT_BRANCH" 2>&1 || {
    echo "❌ Failed to switch to $PARENT_BRANCH"
    exit 1
  }
  SWITCH_BRANCH=true
fi

if [ "$DELETE_BRANCH" = "Yes, delete branch" ]; then
  echo "🗑️  Deleting branch $CURRENT_BRANCH..."

  # Check if branch has unmerged changes
  UNMERGED=$(git branch --no-merged "$PARENT_BRANCH" | grep -w "$CURRENT_BRANCH" || true)

  if [ -n "$UNMERGED" ] && [ "$ROLLBACK_STRATEGY" != "Hard rollback (reset to parent)" ]; then
    # Force delete for unmerged branches
    git branch -D "$CURRENT_BRANCH" 2>&1 || {
      echo "❌ Failed to delete branch $CURRENT_BRANCH"
      exit 1
    }
  else
    git branch -d "$CURRENT_BRANCH" 2>&1 || {
      # Try force delete if normal delete fails
      git branch -D "$CURRENT_BRANCH" 2>&1 || {
        echo "❌ Failed to delete branch $CURRENT_BRANCH"
        exit 1
      }
    }
  fi

  echo "✅ Deleted branch $CURRENT_BRANCH"

  # Delete remote branch if it exists
  if [ -n "$REMOTE_TRACKING" ]; then
    REMOTE_NAME=$(echo "$REMOTE_TRACKING" | cut -d'/' -f1)
    echo ""
    echo "⚠️  Remote branch exists: $REMOTE_TRACKING"
    echo "   To delete the remote branch, run:"
    echo "   git push $REMOTE_NAME --delete $CURRENT_BRANCH"
  fi
fi

echo ""
```
---
|
||||
|
||||
### Step 6: Summary and Final State
|
||||
|
||||
```bash
|
||||
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
|
||||
echo " ✅ ROLLBACK COMPLETE"
|
||||
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
|
||||
echo ""
|
||||
echo "Summary:"
|
||||
echo " Strategy: $ROLLBACK_STRATEGY"
|
||||
if [ "$COMMIT_COUNT" -gt 0 ]; then
|
||||
if [ "$ROLLBACK_STRATEGY" = "Soft rollback (revert commits)" ]; then
|
||||
echo " Commits: $COMMIT_COUNT reverted"
|
||||
else
|
||||
echo " Commits: $COMMIT_COUNT removed"
|
||||
fi
|
||||
fi
|
||||
if [ "$ARTIFACTS_FOUND" = true ]; then
|
||||
echo " Artifacts: $ARTIFACT_ACTION"
|
||||
fi
|
||||
if [ "$DELETE_BRANCH" = "Yes, delete branch" ]; then
|
||||
echo " Branch: $CURRENT_BRANCH deleted"
|
||||
else
|
||||
echo " Branch: $CURRENT_BRANCH kept"
|
||||
fi
|
||||
echo " Current: $(git rev-parse --abbrev-ref HEAD)"
|
||||
echo ""
|
||||
|
||||
if [ "$ROLLBACK_STRATEGY" = "Soft rollback (revert commits)" ] && [ "$SWITCH_BRANCH" = false ]; then
|
||||
echo "💡 Next Steps:"
|
||||
echo " 1. Review the revert commits: git log"
|
||||
echo " 2. Push to remote if needed: git push"
|
||||
echo " 3. Merge to parent: /specswarm:complete"
|
||||
echo ""
|
||||
elif [ "$SWITCH_BRANCH" = true ]; then
|
||||
echo "💡 Next Steps:"
|
||||
echo " 1. Review current state: git status"
|
||||
echo " 2. Start a new feature: /specswarm:build \"feature\""
|
||||
echo ""
|
||||
fi
|
||||
|
||||
if [ -d ".feature.backup" ]; then
|
||||
echo "📦 Backups:"
|
||||
echo " Feature artifacts backed up to:"
|
||||
ls -1d .feature.backup/*/ 2>/dev/null | tail -1
|
||||
echo ""
|
||||
fi
|
||||
|
||||
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Important Notes
|
||||
|
||||
### Safety Features
|
||||
|
||||
1. **Protected Branch Prevention**: Cannot rollback main/master/develop/production
|
||||
2. **Uncommitted Changes Check**: Requires clean working tree
|
||||
3. **Confirmation Required**: Must type "rollback" to proceed (unless --force)
|
||||
4. **Artifact Backup**: Option to backup before deletion
|
||||
5. **Dry Run**: Preview changes before executing
|
||||
|
||||
### Rollback Strategies
|
||||
|
||||
**Soft Rollback (Recommended)**:
|
||||
- Creates revert commits for each commit since divergence
|
||||
- Preserves complete history
|
||||
- Safe for shared/pushed branches
|
||||
- Can be undone by reverting the reverts
|
||||
|
||||
**Hard Rollback (Dangerous)**:
|
||||
- Resets to parent branch state
|
||||
- Deletes all commits permanently
|
||||
- Only safe for local-only branches
|
||||
- Cannot be undone easily
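
The two strategies map onto different git primitives: revert adds inverse commits on top of history, while reset moves the branch pointer back and discards what came after. A minimal self-contained sketch in a throwaway repository (branch and file names here are illustrative, not part of the command itself):

```bash
# Demo in a throwaway repo: contrast soft rollback (revert) with hard rollback (reset)
tmp=$(mktemp -d) && cd "$tmp"
git init -q && git config user.email demo@example.com && git config user.name demo
echo v1 > file.txt && git add file.txt && git commit -qm "base"
BASE=$(git rev-parse HEAD)               # divergence point (parent branch tip)
git checkout -q -b feature
echo v2 > file.txt && git commit -qam "feature work"

# Soft rollback: inverse commits are added, history is preserved
git revert --no-edit "$BASE..feature" >/dev/null
echo "after soft rollback: $(cat file.txt), $(git rev-list --count HEAD) commits"

# Hard rollback: the branch pointer moves back, commits after BASE are discarded
git reset -q --hard "$BASE"
echo "after hard rollback: $(cat file.txt), $(git rev-list --count HEAD) commits"
```

Both end states have `file.txt` back at its base content; only the soft variant keeps the feature and revert commits in the log.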

### When to Use Each Strategy

**Use Soft Rollback when**:
- Branch has been pushed to remote
- Other developers may have based work on this branch
- You want to preserve history
- You're unsure if you might need the code later

**Use Hard Rollback when**:
- Branch is local-only (never pushed)
- You're absolutely certain the code should be deleted
- You want a clean slate

### Remote Branch Handling

If the branch is tracked remotely:
- The local branch can be deleted safely
- The remote branch must be deleted manually:
```bash
git push origin --delete feature-branch-name
```
- The exact command is printed at the end of the rollback

### Merged Branch Rollback

If the branch is already merged to the parent:
- Soft rollback creates revert commits on the current branch
- These reverts must then be merged to the parent
- Hard rollback switches to the parent (the parent itself is unchanged)
- Consider `/specswarm:deprecate` for features already in production
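
For an already-merged branch, the soft-rollback equivalent is to revert the merge commit itself on the parent branch. A throwaway-repo sketch (names are illustrative; `-m 1` tells revert which parent line of the merge to keep — here, the branch that was merged into):

```bash
# Demo: revert a merge commit to soft-rollback an already-merged feature
tmp=$(mktemp -d) && cd "$tmp"
git init -q && git config user.email demo@example.com && git config user.name demo
echo v1 > file.txt && git add file.txt && git commit -qm "base"
MAIN=$(git rev-parse --abbrev-ref HEAD)
git checkout -q -b feature && echo v2 > file.txt && git commit -qam "feature work"
git checkout -q "$MAIN" && git merge -q --no-ff -m "merge feature" feature

# -m 1 keeps the first-parent line (the branch we merged into)
git revert --no-edit -m 1 HEAD >/dev/null
cat file.txt
```

The history still records that the feature was merged and then backed out, which is exactly what you want on a shared parent branch.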

---

## Example Usage

### Basic Rollback (Interactive)
```bash
/specswarm:rollback
# Guided through strategy selection
# Confirms with "rollback" text
```

### Preview Rollback (Dry Run)
```bash
/specswarm:rollback --dry-run
# Shows what would happen without executing
```

### Quick Rollback with Defaults
```bash
/specswarm:rollback --force
# Uses soft rollback, no confirmation
# NOT RECOMMENDED unless you know what you're doing
```

### Rollback with Artifact Backup
```bash
/specswarm:rollback --keep-artifacts
# Backs up artifacts automatically
```

### Undo a Rollback (if soft rollback was used)
```bash
# Find the revert commits
git log --oneline | grep "Revert"

# Revert the reverts to restore the original state
git revert <revert-commit-hash>
```
940
commands/security-audit.md
Normal file
@@ -0,0 +1,940 @@
---
description: Comprehensive security scanning including dependency vulnerabilities, secret detection, OWASP Top 10 analysis, and configuration checks
args:
  - name: --quick
    description: Quick scan (dependency check and basic secret detection only)
    required: false
  - name: --thorough
    description: Thorough scan (extensive pattern matching and deep analysis)
    required: false
---

## Goal

Perform a comprehensive security audit of your codebase to identify vulnerabilities before merging or releasing.

## User Input

```text
$ARGUMENTS
```

---

## What This Command Does

`/specswarm:security-audit` performs a comprehensive security analysis of your codebase:

1. **Dependency Scanning** - Checks for known vulnerabilities in npm/yarn/pnpm packages
2. **Secret Detection** - Scans for hardcoded API keys, passwords, tokens, and credentials
3. **OWASP Top 10 Analysis** - Detects common web vulnerabilities (XSS, SQL injection, etc.)
4. **Security Configuration** - Validates HTTPS, CORS, headers, and other security settings
5. **Report Generation** - Creates an actionable report with severity levels and remediation steps

---

## When to Use This Command

- Before merging features to main/production branches
- As part of CI/CD pipeline security gates
- Before major releases
- After adding new dependencies
- When investigating security concerns
- Regular security audits (monthly/quarterly)

---

## Prerequisites

- Git repository
- Node.js project with package.json (for dependency scanning)
- Working directory should be the project root

---

## Usage

```bash
/specswarm:security-audit
```

**Options** (via interactive prompts):
- Scan depth (quick/standard/thorough)
- Report format (markdown/json/both)
- Severity threshold (all/medium+/high+/critical)
- Auto-fix vulnerabilities (yes/no)

---

## Output

Generates a security audit report: `security-audit-YYYY-MM-DD.md`

**Report Sections**:
1. Executive Summary (overall risk score, critical findings count)
2. Dependency Vulnerabilities (from npm audit/yarn audit)
3. Secret Detection Results (hardcoded credentials, API keys)
4. Code Vulnerabilities (OWASP Top 10 patterns)
5. Security Configuration Issues
6. Remediation Recommendations (prioritized by severity)

---

## Implementation

```bash
#!/bin/bash

set -euo pipefail

# ============================================================================
# SECURITY AUDIT COMMAND - v3.1.0
# ============================================================================

echo "🔒 SpecSwarm Security Audit"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""

# ============================================================================
# CONFIGURATION
# ============================================================================

REPORT_DATE=$(date +%Y-%m-%d)
REPORT_FILE="security-audit-${REPORT_DATE}.md"
SCAN_DEPTH="standard"
REPORT_FORMAT="markdown"
SEVERITY_THRESHOLD="all"
AUTO_FIX="no"

# Vulnerability counters
CRITICAL_COUNT=0
HIGH_COUNT=0
MEDIUM_COUNT=0
LOW_COUNT=0
INFO_COUNT=0

# Temporary files for scan results
TEMP_DIR=$(mktemp -d)
DEP_SCAN_FILE="${TEMP_DIR}/dep-scan.json"
SECRET_SCAN_FILE="${TEMP_DIR}/secret-scan.txt"
CODE_SCAN_FILE="${TEMP_DIR}/code-scan.txt"
CONFIG_SCAN_FILE="${TEMP_DIR}/config-scan.txt"

# Cleanup on exit
trap 'rm -rf "$TEMP_DIR"' EXIT

# ============================================================================
# HELPER FUNCTIONS
# ============================================================================

log_section() {
  echo ""
  echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  echo "$1"
  echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  echo ""
}

log_finding() {
  local severity=$1
  local category=$2
  local message=$3

  # Note: plain assignment is used instead of ((COUNT++)), which returns a
  # nonzero status when the counter is 0 and would abort under `set -e`.
  case "$severity" in
    CRITICAL) CRITICAL_COUNT=$((CRITICAL_COUNT + 1)); echo "🔴 CRITICAL [$category]: $message" ;;
    HIGH)     HIGH_COUNT=$((HIGH_COUNT + 1)); echo "🟠 HIGH [$category]: $message" ;;
    MEDIUM)   MEDIUM_COUNT=$((MEDIUM_COUNT + 1)); echo "🟡 MEDIUM [$category]: $message" ;;
    LOW)      LOW_COUNT=$((LOW_COUNT + 1)); echo "🔵 LOW [$category]: $message" ;;
    INFO)     INFO_COUNT=$((INFO_COUNT + 1)); echo "ℹ️  INFO [$category]: $message" ;;
  esac
}

calculate_risk_score() {
  # Risk score = (CRITICAL * 10) + (HIGH * 5) + (MEDIUM * 2) + (LOW * 1)
  local score=$(( CRITICAL_COUNT * 10 + HIGH_COUNT * 5 + MEDIUM_COUNT * 2 + LOW_COUNT * 1 ))
  echo "$score"
}

get_risk_level() {
  local score=$1
  if [ "$score" -ge 50 ]; then
    echo "CRITICAL"
  elif [ "$score" -ge 20 ]; then
    echo "HIGH"
  elif [ "$score" -ge 10 ]; then
    echo "MEDIUM"
  else
    echo "LOW"
  fi
}
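
# Worked example of the scoring model above (hypothetical counts):
#   2 CRITICAL, 1 HIGH, 3 MEDIUM, 0 LOW → 2*10 + 1*5 + 3*2 + 0*1 = 31 → HIGH band (20-49)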

# ============================================================================
# PREFLIGHT CHECKS
# ============================================================================

log_section "Preflight Checks"

# Check if in git repository
if ! git rev-parse --git-dir > /dev/null 2>&1; then
  echo "❌ Error: Not in a git repository"
  echo "   Security audit requires a git repository to scan committed code"
  exit 1
fi

# Check if package.json exists
if [ ! -f "package.json" ]; then
  echo "⚠️  Warning: No package.json found - dependency scanning will be skipped"
  HAS_PACKAGE_JSON=false
else
  HAS_PACKAGE_JSON=true
  echo "✅ Found package.json"
fi

# Detect package manager
if [ "$HAS_PACKAGE_JSON" = true ]; then
  if [ -f "package-lock.json" ]; then
    PKG_MANAGER="npm"
    echo "✅ Detected npm package manager"
  elif [ -f "yarn.lock" ]; then
    PKG_MANAGER="yarn"
    echo "✅ Detected yarn package manager"
  elif [ -f "pnpm-lock.yaml" ]; then
    PKG_MANAGER="pnpm"
    echo "✅ Detected pnpm package manager"
  else
    PKG_MANAGER="npm"
    echo "⚠️  No lock file found, defaulting to npm"
  fi
fi

echo ""
echo "Repository: $(basename "$(git rev-parse --show-toplevel)")"
echo "Branch: $(git rev-parse --abbrev-ref HEAD)"
echo "Scan Date: $(date '+%Y-%m-%d %H:%M:%S')"

# ============================================================================
# INTERACTIVE CONFIGURATION
# ============================================================================

log_section "Scan Configuration"

# Question 1: Scan depth
cat << 'EOF_QUESTION_1' | claude --question
{
  "questions": [
    {
      "question": "How thorough should the security scan be?",
      "header": "Scan depth",
      "multiSelect": false,
      "options": [
        {
          "label": "Quick scan",
          "description": "Fast scan - dependency check and basic secret detection (~1 min)"
        },
        {
          "label": "Standard scan",
          "description": "Recommended - all checks with standard patterns (~3-5 min)"
        },
        {
          "label": "Thorough scan",
          "description": "Deep scan - extensive pattern matching and analysis (~10+ min)"
        }
      ]
    }
  ]
}
EOF_QUESTION_1

# Parse scan depth answer
if echo "$CLAUDE_ANSWERS" | jq -e '.["Scan depth"] == "Quick scan"' > /dev/null 2>&1; then
  SCAN_DEPTH="quick"
elif echo "$CLAUDE_ANSWERS" | jq -e '.["Scan depth"] == "Thorough scan"' > /dev/null 2>&1; then
  SCAN_DEPTH="thorough"
else
  SCAN_DEPTH="standard"
fi

echo "Scan depth: $SCAN_DEPTH"

# Question 2: Severity threshold
cat << 'EOF_QUESTION_2' | claude --question
{
  "questions": [
    {
      "question": "What severity level should be included in the report?",
      "header": "Severity",
      "multiSelect": false,
      "options": [
        {
          "label": "All findings",
          "description": "Include all severity levels (CRITICAL, HIGH, MEDIUM, LOW, INFO)"
        },
        {
          "label": "Medium and above",
          "description": "Show CRITICAL, HIGH, and MEDIUM (hide LOW and INFO)"
        },
        {
          "label": "High and above",
          "description": "Show only CRITICAL and HIGH findings"
        },
        {
          "label": "Critical only",
          "description": "Show only CRITICAL findings"
        }
      ]
    }
  ]
}
EOF_QUESTION_2

# Parse severity threshold
if echo "$CLAUDE_ANSWERS" | jq -e '.["Severity"] == "Medium and above"' > /dev/null 2>&1; then
  SEVERITY_THRESHOLD="medium"
elif echo "$CLAUDE_ANSWERS" | jq -e '.["Severity"] == "High and above"' > /dev/null 2>&1; then
  SEVERITY_THRESHOLD="high"
elif echo "$CLAUDE_ANSWERS" | jq -e '.["Severity"] == "Critical only"' > /dev/null 2>&1; then
  SEVERITY_THRESHOLD="critical"
else
  SEVERITY_THRESHOLD="all"
fi

echo "Severity threshold: $SEVERITY_THRESHOLD"

# Question 3: Auto-fix
if [ "$HAS_PACKAGE_JSON" = true ]; then
  cat << 'EOF_QUESTION_3' | claude --question
{
  "questions": [
    {
      "question": "Should we attempt to auto-fix dependency vulnerabilities?",
      "header": "Auto-fix",
      "multiSelect": false,
      "options": [
        {
          "label": "Yes, auto-fix",
          "description": "Run 'npm audit fix' to automatically update vulnerable dependencies"
        },
        {
          "label": "No, report only",
          "description": "Only generate report without making changes"
        }
      ]
    }
  ]
}
EOF_QUESTION_3

  # Parse auto-fix answer
  if echo "$CLAUDE_ANSWERS" | jq -e '.["Auto-fix"] == "Yes, auto-fix"' > /dev/null 2>&1; then
    AUTO_FIX="yes"
  fi

  echo "Auto-fix: $AUTO_FIX"
fi

# ============================================================================
# SCAN 1: DEPENDENCY VULNERABILITIES
# ============================================================================

log_section "1. Dependency Vulnerability Scan"

if [ "$HAS_PACKAGE_JSON" = true ]; then
  echo "Running ${PKG_MANAGER} audit..."

  # Run audit and capture results
  case "$PKG_MANAGER" in
    npm)
      if npm audit --json > "$DEP_SCAN_FILE" 2>/dev/null; then
        echo "✅ npm audit completed"
      else
        echo "⚠️  npm audit found vulnerabilities"
      fi

      # Parse npm audit results
      if [ -f "$DEP_SCAN_FILE" ]; then
        VULN_CRITICAL=$(jq '.metadata.vulnerabilities.critical // 0' "$DEP_SCAN_FILE")
        VULN_HIGH=$(jq '.metadata.vulnerabilities.high // 0' "$DEP_SCAN_FILE")
        VULN_MEDIUM=$(jq '.metadata.vulnerabilities.moderate // 0' "$DEP_SCAN_FILE")
        VULN_LOW=$(jq '.metadata.vulnerabilities.low // 0' "$DEP_SCAN_FILE")

        echo "  Critical: $VULN_CRITICAL"
        echo "  High: $VULN_HIGH"
        echo "  Medium: $VULN_MEDIUM"
        echo "  Low: $VULN_LOW"

        # Update counters
        CRITICAL_COUNT=$((CRITICAL_COUNT + VULN_CRITICAL))
        HIGH_COUNT=$((HIGH_COUNT + VULN_HIGH))
        MEDIUM_COUNT=$((MEDIUM_COUNT + VULN_MEDIUM))
        LOW_COUNT=$((LOW_COUNT + VULN_LOW))
      fi
      ;;
    yarn)
      if yarn audit --json > "$DEP_SCAN_FILE" 2>/dev/null; then
        echo "✅ yarn audit completed"
      else
        echo "⚠️  yarn audit found vulnerabilities"
      fi
      ;;
    pnpm)
      if pnpm audit --json > "$DEP_SCAN_FILE" 2>/dev/null; then
        echo "✅ pnpm audit completed"
      else
        echo "⚠️  pnpm audit found vulnerabilities"
      fi
      ;;
  esac

  # Auto-fix if requested
  if [ "$AUTO_FIX" = "yes" ]; then
    echo ""
    echo "Attempting auto-fix..."
    case "$PKG_MANAGER" in
      npm)
        npm audit fix
        echo "✅ Auto-fix completed"
        ;;
      yarn)
        echo "⚠️  Yarn does not support auto-fix - please update manually"
        ;;
      pnpm)
        pnpm audit --fix
        echo "✅ Auto-fix completed"
        ;;
    esac
  fi
else
  echo "⏭️  Skipping dependency scan (no package.json)"
fi

# ============================================================================
# SCAN 2: SECRET DETECTION
# ============================================================================

log_section "2. Secret Detection Scan"

echo "Scanning for hardcoded secrets..."

# Secret patterns to detect
declare -A SECRET_PATTERNS=(
  ["AWS Access Key"]='AKIA[0-9A-Z]{16}'
  ["AWS Secret Key"]='aws_secret_access_key[[:space:]]*=[[:space:]]*[A-Za-z0-9/+=]{40}'
  ["GitHub Token"]='gh[pousr]_[A-Za-z0-9]{36,}'
  ["Slack Token"]='xox[baprs]-[0-9a-zA-Z-]+'
  ["Google API Key"]='AIza[0-9A-Za-z-_]{35}'
  ["Stripe Key"]='sk_live_[0-9a-zA-Z]{24,}'
  ["Private Key"]='-----BEGIN (RSA |EC |DSA |OPENSSH )?PRIVATE KEY-----'
  ["Password in Code"]='password[[:space:]]*=[[:space:]]*["\047][^"\047]{8,}["\047]'
  ["API Key"]='api[_-]?key[[:space:]]*=[[:space:]]*["\047][^"\047]{20,}["\047]'
  ["Generic Secret"]='secret[[:space:]]*=[[:space:]]*["\047][^"\047]{16,}["\047]'
)
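
# Illustration (hypothetical line, not real credentials): the AWS Access Key
# pattern above would flag something like
#   aws_key = "AKIAIOSFODNN7EXAMPLE"   # AWS's documented example key ID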

# Files to exclude from scanning
EXCLUDE_PATTERNS="node_modules|\.git|\.lock|package-lock\.json|yarn\.lock|pnpm-lock\.yaml|\.min\.js|\.map$"

# Scan for each pattern
for secret_type in "${!SECRET_PATTERNS[@]}"; do
  pattern="${SECRET_PATTERNS[$secret_type]}"

  # Search in git tracked files only
  results=$(git ls-files | grep -vE "$EXCLUDE_PATTERNS" | xargs grep -HnE "$pattern" 2>/dev/null || true)

  if [ -n "$results" ]; then
    log_finding "CRITICAL" "Secret Detection" "Found $secret_type"
    echo "$results" >> "$SECRET_SCAN_FILE"
    echo "---" >> "$SECRET_SCAN_FILE"
  fi
done

if [ -f "$SECRET_SCAN_FILE" ] && [ -s "$SECRET_SCAN_FILE" ]; then
  echo "⚠️  Found hardcoded secrets - see report for details"
else
  echo "✅ No hardcoded secrets detected"
fi

# ============================================================================
# SCAN 3: OWASP TOP 10 CODE ANALYSIS
# ============================================================================

log_section "3. OWASP Top 10 Code Analysis"

echo "Scanning for common web vulnerabilities..."

# OWASP patterns to detect
declare -A OWASP_PATTERNS=(
  ["SQL Injection"]='(query|execute)\s*\(\s*["\047].*\+|SELECT.*FROM.*WHERE.*\+|INSERT INTO.*VALUES.*\+'
  ["XSS - innerHTML"]='innerHTML\s*=\s*.*\+'
  ["XSS - dangerouslySetInnerHTML"]='dangerouslySetInnerHTML.*__html:'
  ["Command Injection"]='(exec|spawn|system)\s*\(\s*.*\+'
  ["Path Traversal"]='\.\./'
  ["Eval Usage"]='eval\s*\('
  ["Insecure Random"]='Math\.random\(\)'
  ["Weak Crypto"]='md5|sha1'
)

# File patterns to scan (source code only)
SOURCE_PATTERNS="\.js$|\.jsx$|\.ts$|\.tsx$|\.py$|\.php$|\.rb$"

for vuln_type in "${!OWASP_PATTERNS[@]}"; do
  pattern="${OWASP_PATTERNS[$vuln_type]}"

  # Search in source files
  results=$(git ls-files | grep -E "$SOURCE_PATTERNS" | grep -vE "$EXCLUDE_PATTERNS" | xargs grep -HnE "$pattern" 2>/dev/null || true)

  if [ -n "$results" ]; then
    # Determine severity based on vulnerability type
    case "$vuln_type" in
      *"SQL Injection"*|*"Command Injection"*)
        severity="CRITICAL"
        ;;
      *"XSS"*|*"Path Traversal"*)
        severity="HIGH"
        ;;
      *"Eval"*|*"Weak Crypto"*)
        severity="MEDIUM"
        ;;
      *)
        severity="LOW"
        ;;
    esac

    log_finding "$severity" "Code Vulnerability" "$vuln_type detected"
    echo "$results" >> "$CODE_SCAN_FILE"
    echo "---" >> "$CODE_SCAN_FILE"
  fi
done

if [ -f "$CODE_SCAN_FILE" ] && [ -s "$CODE_SCAN_FILE" ]; then
  echo "⚠️  Found potential code vulnerabilities - see report for details"
else
  echo "✅ No obvious code vulnerabilities detected"
fi

# ============================================================================
# SCAN 4: SECURITY CONFIGURATION CHECK
# ============================================================================

log_section "4. Security Configuration Check"

echo "Checking security configurations..."

# Check for HTTPS enforcement
if [ -f "package.json" ]; then
  # Check scripts for HTTPS
  if grep -q "http://localhost" package.json 2>/dev/null; then
    log_finding "INFO" "Configuration" "Development server using HTTP (localhost is acceptable)"
  fi

  # Check for security-related dependencies
  if grep -q "helmet" package.json 2>/dev/null; then
    echo "✅ Found helmet (security headers middleware)"
  else
    log_finding "MEDIUM" "Configuration" "Missing helmet - no HTTP security headers"
  fi

  if grep -q "cors" package.json 2>/dev/null; then
    echo "✅ Found cors middleware"
    # Check for permissive CORS
    if git ls-files | xargs grep -E "cors\(\s*\{" 2>/dev/null | grep -q "origin.*\*"; then
      log_finding "HIGH" "Configuration" "Permissive CORS - allows all origins (*)"
    fi
  fi

  # Check for environment variable usage
  if git ls-files | grep -E "\.js$|\.ts$" | xargs grep -q "process\.env\." 2>/dev/null; then
    echo "✅ Using environment variables"
  else
    log_finding "LOW" "Configuration" "No environment variables detected - check config management"
  fi
fi

# Check for .env in git
if git ls-files | grep -q "^\.env$" 2>/dev/null; then
  log_finding "CRITICAL" "Configuration" ".env file is tracked in git - contains secrets!"
fi

# Check for proper .gitignore
if [ -f ".gitignore" ]; then
  if grep -q "\.env" .gitignore 2>/dev/null; then
    echo "✅ .env properly ignored"
  else
    log_finding "MEDIUM" "Configuration" ".env not in .gitignore"
  fi

  if grep -q "node_modules" .gitignore 2>/dev/null; then
    echo "✅ node_modules properly ignored"
  else
    log_finding "LOW" "Configuration" "node_modules not in .gitignore"
  fi
fi

echo "✅ Configuration check complete"

# ============================================================================
# GENERATE REPORT
# ============================================================================

log_section "Generating Security Report"

# Calculate overall risk
RISK_SCORE=$(calculate_risk_score)
RISK_LEVEL=$(get_risk_level "$RISK_SCORE")

echo "Risk Score: $RISK_SCORE"
echo "Risk Level: $RISK_LEVEL"
echo ""
echo "Findings:"
echo "  🔴 Critical: $CRITICAL_COUNT"
echo "  🟠 High: $HIGH_COUNT"
echo "  🟡 Medium: $MEDIUM_COUNT"
echo "  🔵 Low: $LOW_COUNT"
echo "  ℹ️  Info: $INFO_COUNT"

# Generate markdown report
cat > "$REPORT_FILE" << EOF
# Security Audit Report

**Generated**: $(date '+%Y-%m-%d %H:%M:%S')
**Repository**: $(basename "$(git rev-parse --show-toplevel)")
**Branch**: $(git rev-parse --abbrev-ref HEAD)
**Scan Depth**: $SCAN_DEPTH
**Severity Threshold**: $SEVERITY_THRESHOLD

---

## Executive Summary

**Overall Risk Score**: $RISK_SCORE
**Risk Level**: $RISK_LEVEL

### Findings Summary

| Severity | Count |
|----------|-------|
| 🔴 Critical | $CRITICAL_COUNT |
| 🟠 High | $HIGH_COUNT |
| 🟡 Medium | $MEDIUM_COUNT |
| 🔵 Low | $LOW_COUNT |
| ℹ️ Info | $INFO_COUNT |

**Risk Assessment**:
- **0-9**: Low risk - minimal security concerns
- **10-19**: Medium risk - some vulnerabilities need attention
- **20-49**: High risk - significant security issues present
- **50+**: Critical risk - immediate action required

---

## 1. Dependency Vulnerabilities

EOF

if [ "$HAS_PACKAGE_JSON" = true ] && [ -f "$DEP_SCAN_FILE" ]; then
  cat >> "$REPORT_FILE" << EOF
### npm Audit Results

\`\`\`json
$(cat "$DEP_SCAN_FILE")
\`\`\`

**Remediation**:
- Run \`npm audit fix\` to automatically fix vulnerabilities
- Run \`npm audit fix --force\` for breaking changes (review first!)
- Manually update packages that cannot be auto-fixed

EOF
else
  cat >> "$REPORT_FILE" << EOF
No dependency scan performed (package.json not found).

EOF
fi

# Secret detection section
cat >> "$REPORT_FILE" << EOF
---

## 2. Secret Detection

EOF

if [ -f "$SECRET_SCAN_FILE" ] && [ -s "$SECRET_SCAN_FILE" ]; then
  cat >> "$REPORT_FILE" << EOF
⚠️ **Hardcoded secrets detected!**

\`\`\`
$(cat "$SECRET_SCAN_FILE")
\`\`\`

**Remediation**:
1. Remove all hardcoded secrets from code
2. Use environment variables (\`.env\` files)
3. Add \`.env\` to \`.gitignore\`
4. Rotate all exposed credentials immediately
5. Use secret management tools (AWS Secrets Manager, HashiCorp Vault, etc.)

EOF
else
  cat >> "$REPORT_FILE" << EOF
✅ No hardcoded secrets detected.

EOF
fi

# Code vulnerabilities section
cat >> "$REPORT_FILE" << EOF
---

## 3. Code Vulnerabilities (OWASP Top 10)

EOF

if [ -f "$CODE_SCAN_FILE" ] && [ -s "$CODE_SCAN_FILE" ]; then
  cat >> "$REPORT_FILE" << EOF
⚠️ **Potential code vulnerabilities detected!**

\`\`\`
$(cat "$CODE_SCAN_FILE")
\`\`\`

**Remediation**:
- **SQL Injection**: Use parameterized queries/ORMs
- **XSS**: Sanitize user input, use React's built-in XSS protection
- **Command Injection**: Validate and sanitize all user input
- **Path Traversal**: Validate file paths, use path.resolve()
- **Eval**: Remove eval(), use safer alternatives
- **Weak Crypto**: Use bcrypt/argon2 for passwords, SHA-256+ for hashing

EOF
else
  cat >> "$REPORT_FILE" << EOF
✅ No obvious code vulnerabilities detected.

EOF
fi

# Configuration section
cat >> "$REPORT_FILE" << EOF
---

## 4. Security Configuration

See findings above for configuration issues.

**Recommended Security Headers** (use helmet.js):
\`\`\`javascript
app.use(helmet({
  contentSecurityPolicy: true,
  hsts: true,
  noSniff: true,
  xssFilter: true
}));
\`\`\`

**Recommended CORS Configuration**:
\`\`\`javascript
app.use(cors({
  origin: process.env.ALLOWED_ORIGINS?.split(',') || 'http://localhost:3000',
  credentials: true
}));
\`\`\`

---

## 5. Remediation Recommendations

### Priority 1: Critical (Fix Immediately)

EOF

if [ "$CRITICAL_COUNT" -gt 0 ]; then
  cat >> "$REPORT_FILE" << EOF
$CRITICAL_COUNT critical issues found:
- Review dependency vulnerabilities with CRITICAL severity
- Remove all hardcoded secrets and rotate credentials
- Fix SQL injection and command injection vulnerabilities
- Check if .env is committed to git

EOF
else
  cat >> "$REPORT_FILE" << EOF
✅ No critical issues found.

EOF
fi
|

cat >> "$REPORT_FILE" << EOF
### Priority 2: High (Fix This Week)

EOF

if [ "$HIGH_COUNT" -gt 0 ]; then
  cat >> "$REPORT_FILE" << EOF
$HIGH_COUNT high-severity issues found:
- Update dependencies with HIGH severity vulnerabilities
- Fix XSS and path traversal vulnerabilities
- Review permissive CORS configurations

EOF
else
  cat >> "$REPORT_FILE" << EOF
✅ No high-severity issues found.

EOF
fi

cat >> "$REPORT_FILE" << EOF
### Priority 3: Medium (Fix This Sprint)

EOF

if [ "$MEDIUM_COUNT" -gt 0 ]; then
  cat >> "$REPORT_FILE" << EOF
$MEDIUM_COUNT medium-severity issues found:
- Add helmet for security headers
- Fix weak cryptography usage
- Add .env to .gitignore if missing

EOF
else
  cat >> "$REPORT_FILE" << EOF
✅ No medium-severity issues found.

EOF
fi

cat >> "$REPORT_FILE" << EOF

### Priority 4: Low & Info (Address in Backlog)

$LOW_COUNT low-severity + $INFO_COUNT info findings.

---

## Next Steps

1. **Immediate**: Address all CRITICAL findings
2. **This Week**: Fix HIGH severity issues
3. **This Sprint**: Resolve MEDIUM issues
4. **Ongoing**: Set up automated security scanning in CI/CD
5. **Monthly**: Run \`/specswarm:security-audit\` regularly

---

## Additional Resources

- [OWASP Top 10](https://owasp.org/www-project-top-ten/)
- [npm Audit Documentation](https://docs.npmjs.com/cli/v8/commands/npm-audit)
- [Helmet.js Security Headers](https://helmetjs.github.io/)
- [NIST Vulnerability Database](https://nvd.nist.gov/)

---

**Generated by**: SpecSwarm v3.1.0 Security Audit
**Command**: \`/specswarm:security-audit\`

EOF

echo "✅ Report generated: $REPORT_FILE"

# ============================================================================
# SUMMARY
# ============================================================================

log_section "Security Audit Complete"

echo "Report saved to: $REPORT_FILE"
echo ""
echo "Summary:"
echo " Risk Level: $RISK_LEVEL ($RISK_SCORE points)"
echo " Critical: $CRITICAL_COUNT"
echo " High: $HIGH_COUNT"
echo " Medium: $MEDIUM_COUNT"
echo " Low: $LOW_COUNT"
echo " Info: $INFO_COUNT"
echo ""

if [ "$CRITICAL_COUNT" -gt 0 ]; then
  echo "⚠️ CRITICAL ISSUES FOUND - Immediate action required!"
  echo " Review $REPORT_FILE for details and remediation steps"
elif [ "$HIGH_COUNT" -gt 0 ]; then
  echo "⚠️ HIGH SEVERITY ISSUES - Address within the week"
  echo " Review $REPORT_FILE for details"
elif [ "$MEDIUM_COUNT" -gt 0 ]; then
  echo "✅ No critical issues, but $MEDIUM_COUNT medium-severity findings"
  echo " Review $REPORT_FILE for improvements"
else
  echo "✅ No significant security issues detected"
  echo " Good job! Keep running regular security audits"
fi

echo ""
echo "Next: Review the report and prioritize remediation work"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
```

---

## Examples

### Example 1: Quick Security Scan

```bash
/specswarm:security-audit
# Select: Quick scan
# Select: All findings
# Result: security-audit-2025-01-15.md generated in ~1 minute
```

### Example 2: Pre-Release Audit

```bash
/specswarm:security-audit
# Select: Thorough scan
# Select: High and above
# Select: Yes, auto-fix
# Result: Comprehensive audit with auto-fixed dependencies
```

### Example 3: CI/CD Integration

```yaml
# .github/workflows/security.yml
name: Security Audit
on: [push, pull_request]
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: /specswarm:security-audit
      - run: |
          # Match only the "<count> critical issues found" line; a bare
          # grep for "CRITICAL" would also match the report's boilerplate
          # (e.g. "Address all CRITICAL findings") and always fail the build.
          if grep -qE "^[1-9][0-9]* critical issues found" security-audit-*.md; then
            echo "Critical vulnerabilities found!"
            exit 1
          fi
```

---

## Notes

- **False Positives**: Pattern matching may produce false positives - review all findings
- **Complementary Tools**: Consider using additional tools (Snyk, SonarQube, Dependabot)
- **Regular Scans**: Run monthly or before major releases
- **Auto-Fix**: Use with caution - test thoroughly after auto-fixing dependencies
- **Secret Rotation**: If secrets are found, rotate them immediately; removing them from the code does not revoke them

---

## See Also

- `/specswarm:release` - Includes security audit as part of release checklist
- `/specswarm:analyze-quality` - Code quality analysis
- `/specswarm:ship` - Enforces quality gates before merge

---

**Version**: 3.1.0
**Category**: Security & Quality
**Estimated Time**: 1-10 minutes (depending on scan depth)
244 commands/ship.md Normal file
@@ -0,0 +1,244 @@

---
description: Quality-gated merge to parent branch - validates code quality before allowing merge
args:
  - name: --force-quality
    description: Override quality threshold (e.g., --force-quality 70)
    required: false
  - name: --skip-tests
    description: Skip test validation (not recommended)
    required: false
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Goal

Execute final quality gate validation and merge to parent branch.

**Purpose**: Enforce quality standards before merging features/bugfixes, preventing low-quality code from entering the codebase.

**Workflow**: Quality Analysis → Threshold Check → Merge (if passing)

**Quality Gates**:
- Default threshold: 80% quality score
- Configurable via `--force-quality` flag
- Reads `.specswarm/quality-standards.md` for project-specific thresholds

---

## Pre-Flight Checks

```bash
# Ensure we're in a git repository
if ! git rev-parse --git-dir > /dev/null 2>&1; then
  echo "❌ Error: Not in a git repository"
  echo ""
  echo "This command must be run from within a git repository."
  exit 1
fi

# Get repository root
REPO_ROOT=$(git rev-parse --show-toplevel)
cd "$REPO_ROOT"

# Parse arguments
FORCE_QUALITY=""
SKIP_TESTS=false

# Load the arguments into the positional parameters so `shift` works;
# a plain `for arg in $ARGUMENTS` loop cannot consume the value that
# follows --force-quality.
set -- $ARGUMENTS
while [ $# -gt 0 ]; do
  case "$1" in
    --force-quality)
      shift
      FORCE_QUALITY="$1"
      ;;
    --skip-tests)
      SKIP_TESTS=true
      ;;
  esac
  shift
done
```

---

## Execution Steps

### Step 1: Display Banner

```bash
echo "🚢 SpecSwarm Ship - Quality-Gated Merge"
echo "══════════════════════════════════════════"
echo ""
echo "This command enforces quality standards before merge:"
echo " 1. Runs comprehensive quality analysis"
echo " 2. Checks quality score meets threshold"
echo " 3. If passing: merges to parent branch"
echo " 4. If failing: reports issues and blocks merge"
echo ""

CURRENT_BRANCH=$(git rev-parse --abbrev-ref HEAD)
echo "📍 Current branch: $CURRENT_BRANCH"
echo ""
```

---

### Step 2: Run Quality Analysis

**YOU MUST NOW run the quality analysis using the SlashCommand tool:**

```
Use the SlashCommand tool to execute: /specswarm:analyze-quality
```

Wait for the quality analysis to complete and extract the quality score from the output.

**Expected Output Pattern**: Look for quality score in output (e.g., "Overall Quality: 85%")

Store the quality score as QUALITY_SCORE.

---

### Step 3: Check Quality Threshold

**YOU MUST NOW check if quality meets threshold:**

```bash
# Determine threshold
DEFAULT_THRESHOLD=80

# Check for project-specific threshold in .specswarm/quality-standards.md
THRESHOLD=$DEFAULT_THRESHOLD

if [ -f ".specswarm/quality-standards.md" ]; then
  # Try to extract threshold from quality standards file
  # (head -1 guards against multiple numeric matches)
  PROJECT_THRESHOLD=$(grep -i "^quality_threshold:" .specswarm/quality-standards.md | grep -oE '[0-9]+' | head -1 || echo "")
  if [ -n "$PROJECT_THRESHOLD" ]; then
    THRESHOLD=$PROJECT_THRESHOLD
    echo "📋 Using project quality threshold: ${THRESHOLD}%"
  fi
fi

# Override with --force-quality if provided
if [ -n "$FORCE_QUALITY" ]; then
  THRESHOLD=$FORCE_QUALITY
  echo "⚠️ Quality threshold overridden: ${THRESHOLD}%"
fi

echo ""
echo "🎯 Quality Threshold: ${THRESHOLD}%"
echo "📊 Actual Quality Score: ${QUALITY_SCORE}%"
echo ""
```

**Decision Logic**:

IF QUALITY_SCORE >= THRESHOLD:
- ✅ Quality gate PASSED
- Proceed to Step 4 (Merge)
ELSE:
- ❌ Quality gate FAILED
- Display failure message
- List top issues from analysis
- Suggest fixes
- EXIT without merging

```bash
if [ "$QUALITY_SCORE" -ge "$THRESHOLD" ]; then
  echo "✅ Quality gate PASSED (${QUALITY_SCORE}% >= ${THRESHOLD}%)"
  echo ""
else
  echo "❌ Quality gate FAILED (${QUALITY_SCORE}% < ${THRESHOLD}%)"
  echo ""
  echo "The code quality does not meet the required threshold."
  echo ""
  echo "🔧 Recommended Actions:"
  echo " 1. Review the quality analysis output above"
  echo " 2. Address critical and high-priority issues"
  echo " 3. Run /specswarm:analyze-quality again to verify improvements"
  echo " 4. Run /specswarm:ship again when quality improves"
  echo ""
  echo "💡 Alternatively:"
  echo " - Override threshold: /specswarm:ship --force-quality 70"
  echo " - Note: Overriding quality gates is not recommended for production code"
  echo ""
  exit 1
fi
```

---

### Step 4: Merge to Parent Branch

**Quality gate passed! YOU MUST NOW merge using the SlashCommand tool:**

```
Use the SlashCommand tool to execute: /specswarm:complete
```

Wait for the merge to complete.

---

### Step 5: Success Report

**After successful merge, display:**

```bash
echo ""
echo "══════════════════════════════════════════"
echo "🎉 SHIP SUCCESSFUL"
echo "══════════════════════════════════════════"
echo ""
echo "✅ Quality gate passed (${QUALITY_SCORE}%)"
echo "✅ Merged to parent branch"
echo "✅ Feature/bugfix complete"
echo ""
echo "📝 Next Steps:"
echo " - Pull latest changes in other branches"
echo " - Consider creating a release tag if ready"
echo " - Update project documentation if needed"
echo ""
```

---

## Error Handling

If any step fails:

1. **Quality analysis fails**: Report error and suggest checking logs
2. **Quality threshold not met**: Display issues and exit (see Step 3)
3. **Merge fails**: Report git errors and suggest manual resolution

**All errors should EXIT with clear remediation steps.**

---

## Notes

**Design Philosophy**:
- Quality gates prevent technical debt accumulation
- Encourages addressing issues before merge (not after)
- Configurable thresholds balance strictness with pragmatism
- Override flag available but discouraged for production code

**Quality Standards File** (`.specswarm/quality-standards.md`):
```yaml
---
quality_threshold: 85
enforce_gates: true
---

# Project Quality Standards

Minimum quality threshold: 85%
...
```

If `enforce_gates: false`, ship will warn but not block merge.
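
A minimal sketch of how that flag could be honored, consistent with the grep-based threshold extraction in Step 3 (the exact parsing and messages here are assumptions, not part of the shipped command):

```bash
# Illustrative: read enforce_gates from the frontmatter; default to enforcing.
ENFORCE_GATES=true
if [ -f ".specswarm/quality-standards.md" ] && \
   grep -qi "^enforce_gates:[[:space:]]*false" .specswarm/quality-standards.md; then
  ENFORCE_GATES=false
fi

if [ "$ENFORCE_GATES" = false ]; then
  echo "⚠️ Quality gate failed, but enforce_gates is false - continuing with a warning"
fi
```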
431 commands/specify.md Normal file
@@ -0,0 +1,431 @@

---
description: Create or update the feature specification from a natural language feature description.
---

<!--
ATTRIBUTION CHAIN:
1. Original: GitHub spec-kit (https://github.com/github/spec-kit)
   Copyright (c) GitHub, Inc. | MIT License
2. Adapted: SpecKit plugin by Marty Bonacci (2025)
3. Forked: SpecSwarm plugin with tech stack management
   by Marty Bonacci & Claude Code (2025)
-->

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

The text the user typed after `/specswarm:specify` in the triggering message **is** the feature description. Assume you always have it available in this conversation even if `{ARGS}` appears literally below. Do not ask the user to repeat it unless they provided an empty command.

Given that feature description, do this:

1. **Create New Feature Structure** (replaces script execution):

   a. **Find Repository Root and Initialize Features Directory**:
      ```bash
      REPO_ROOT=$(git rev-parse --show-toplevel 2>/dev/null || pwd)

      # Source features location helper
      SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
      PLUGIN_DIR="$(dirname "$SCRIPT_DIR")"
      source "$PLUGIN_DIR/lib/features-location.sh"

      # Initialize features directory (handles migration if needed)
      ensure_features_dir "$REPO_ROOT"
      ```

   b. **Determine Next Feature Number**:
      ```bash
      # Get next feature number using helper
      FEATURE_NUM=$(get_next_feature_number "$REPO_ROOT")
      ```

   c. **Create Feature Slug from Description**:
      ```bash
      # Convert description to kebab-case
      # Example: "User Authentication System" → "user-authentication-system"
      SLUG=$(echo "$ARGUMENTS" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' | sed 's/--*/-/g' | sed 's/^-//' | sed 's/-$//')
      ```

   d. **Capture Parent Branch and Create Feature Branch** (if git available):
      ```bash
      BRANCH_NAME="${FEATURE_NUM}-${SLUG}"

      if git rev-parse --git-dir >/dev/null 2>&1; then
        # Capture current branch as parent BEFORE switching
        PARENT_BRANCH=$(git rev-parse --abbrev-ref HEAD)

        echo ""
        echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
        echo "Branch Setup"
        echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
        echo " Parent branch: $PARENT_BRANCH (current branch)"
        echo " Feature branch: $BRANCH_NAME (will be created)"
        echo ""
        echo "ℹ️ The feature branch will be created from $PARENT_BRANCH."
        echo " When complete, it will merge back to $PARENT_BRANCH."
        echo ""
        read -p "Is this correct? (y/n): " branch_confirm

        if [ "$branch_confirm" != "y" ]; then
          echo ""
          echo "❌ Branch setup cancelled"
          echo ""
          echo "Please checkout the correct parent branch first, then run:"
          echo " /specswarm:specify \"$ARGUMENTS\""
          exit 0
        fi
        echo ""

        # Create and switch to new feature branch
        git checkout -b "$BRANCH_NAME"
      else
        # Non-git: use environment variable
        export SPECIFY_FEATURE="$BRANCH_NAME"
        PARENT_BRANCH="unknown"
      fi
      ```

   e. **Create Feature Directory Structure**:
      ```bash
      FEATURE_DIR="${FEATURES_DIR}/${FEATURE_NUM}-${SLUG}"
      mkdir -p "${FEATURE_DIR}/checklists"
      mkdir -p "${FEATURE_DIR}/contracts"
      ```

   f. **Set Path Variables** (use these for remainder of command):
      ```bash
      SPEC_FILE="${FEATURE_DIR}/spec.md"
      CHECKLISTS_DIR="${FEATURE_DIR}/checklists"
      ```

2. Load spec template to understand required sections:
   - Try to read `templates/spec-template.md` if it exists
   - If template missing, use this embedded minimal template:

   ```markdown
   # Feature: [Feature Name]

   ## Overview
   [Brief description of the feature and its purpose]

   ## User Scenarios
   [Key user flows and scenarios]

   ## Functional Requirements
   [What the system must do]

   ## Success Criteria
   [Measurable outcomes that define success]

   ## Key Entities
   [Important data entities involved]

   ## Assumptions
   [Documented assumptions and reasonable defaults]
   ```

3. **Parse the user's feature description** from `$ARGUMENTS` and validate:
   - If empty: ERROR "No feature description provided"
   - Extract key concepts: actors, actions, data, constraints

4. **Framework & Dependency Compatibility Check** (for upgrade/migration features):

   If the feature description mentions dependency upgrades (PHP, Laravel, Node, framework versions, etc.), perform compatibility validation:

   a. **Detect Upgrade Context**:
      ```bash
      # Check if feature involves upgrades
      INVOLVES_UPGRADE=$(echo "$ARGUMENTS" | grep -iE '(upgrade|migrat|updat).*(php|laravel|node|framework|version|[0-9]\.[0-9])')
      ```

   b. **Read Project Dependencies** (if upgrade detected):
      ```bash
      if [ -n "$INVOLVES_UPGRADE" ] && [ -f "${REPO_ROOT}/composer.json" ]; then
        # Extract PHP requirement
        PHP_CURRENT=$(grep -Po '(?<="php":\s")[^"]+' "${REPO_ROOT}/composer.json" 2>/dev/null)

        # Extract Laravel/framework version
        FRAMEWORK=$(grep -Po '(?<="laravel/framework":\s")[^"]+' "${REPO_ROOT}/composer.json" 2>/dev/null)
      fi
      ```
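
   The PCRE lookbehinds above assume exactly one whitespace character after each colon; where `jq` is available, a more tolerant extraction could look like this (a sketch, not part of the command):

   ```bash
   # Illustrative alternative: jq is immune to composer.json formatting quirks.
   PHP_CURRENT=$(jq -r '.require.php // empty' composer.json 2>/dev/null)
   FRAMEWORK=$(jq -r '.require["laravel/framework"] // empty' composer.json 2>/dev/null)
   ```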

   c. **Cross-Reference Compatibility Matrices**:

      When upgrade targets are identified in the feature description, check known compatibility constraints:

      **Laravel Compatibility Matrix**:
      - Laravel 5.8: PHP 7.2 - 7.4 ONLY
      - Laravel 6.x: PHP 7.2 - 8.0
      - Laravel 7.x: PHP 7.2 - 8.0
      - Laravel 8.x: PHP 7.3 - 8.1
      - Laravel 9.x: PHP 8.0 - 8.2
      - Laravel 10.x: PHP 8.1 - 8.3
      - Laravel 11.x: PHP 8.2 - 8.3

      **Key Detection Rules**:
      - If feature mentions "PHP 8.x upgrade" AND Laravel 5.8 detected → BLOCKER
      - If feature mentions "PHP 8.x upgrade" AND Laravel 6-7 detected → WARNING (check target PHP version)
      - If feature mentions framework upgrade dependencies → Include in spec

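   The matrix and rules above can be encoded as a small lookup; a minimal sketch (the function name and output format are illustrative, not part of the command):

   ```bash
   # Illustrative: map a Laravel version to its supported PHP range.
   laravel_php_range() {
     case "$1" in
       5.8)      echo "7.2-7.4" ;;
       6.*|7.*)  echo "7.2-8.0" ;;
       8.*)      echo "7.3-8.1" ;;
       9.*)      echo "8.0-8.2" ;;
       10.*)     echo "8.1-8.3" ;;
       11.*)     echo "8.2-8.3" ;;
       *)        echo "unknown" ;;
     esac
   }

   # BLOCKER example: Laravel 5.8 tops out below PHP 8.
   range=$(laravel_php_range "5.8")
   max_major=${range#*-}          # upper bound, e.g. "7.4"
   max_major=${max_major%%.*}     # its major version, e.g. "7"
   if [ "$max_major" -lt 8 ]; then
     echo "BLOCKER: Laravel 5.8 does not support PHP 8.x"
   fi
   ```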
   d. **Add Blockers Section to Spec** (if incompatibilities found):

      If compatibility issues are detected, add this section AFTER "Overview" and BEFORE "User Scenarios":

      ```markdown
      ## ⚠️ CRITICAL BLOCKERS & DEPENDENCIES

      ### [Framework] Version Incompatibility

      **Issue**: [Current framework version] officially supports [compatible versions] only.

      **Current State**:
      - Framework: [Detected version from composer.json]
      - Target: [Upgrade target from feature description]
      - Compatibility: ❌ NOT COMPATIBLE

      **Resolution Options**:

      1. **Recommended**: Upgrade framework first
         - Path: [Current] → [Intermediate versions] → [Target compatible version]
         - Benefit: Official support, maintained compatibility
         - Timeline: [Estimated complexity]

      2. **Community Patches**: Use unofficial compatibility patches
         - Benefit: Faster, smaller scope
         - Risk: Unsupported, may break in production
         - Recommendation: NOT recommended for production

      3. **Stay on Compatible Version**: Delay target upgrade
         - Keep: [Current compatible version]
         - Timeline: [Until when it's supported]
         - Benefit: Stable, supported

      4. **Accept Risk**: Proceed with unsupported configuration
         - Risk: High - potential breaking changes
         - Required: Extensive testing, acceptance of maintenance burden
         - Recommendation: Only if timeline critical and resources available

      **Recommended Path**: [Most appropriate option with reasoning]

      **Impact on This Feature**: This blocker must be resolved before beginning implementation. Consider creating separate features for:
      - Feature XXX: [Framework] upgrade to [compatible version]
      - Feature YYY: [Dependency] upgrade (this feature, dependent on XXX)
      ```

   e. **Document Assumptions About Compatibility**:

      Even if no blockers are found, add relevant assumptions to the Assumptions section:
      - "Framework version [X] is compatible with [upgrade target]"
      - "Standard upgrade path follows: [path]"
      - "Breaking changes from [source docs URL] have been reviewed"

5. Follow this execution flow:

   1. For unclear aspects:
      - Make informed guesses based on context and industry standards
      - Only mark with [NEEDS CLARIFICATION: specific question] if:
        - The choice significantly impacts feature scope or user experience
        - Multiple reasonable interpretations exist with different implications
        - No reasonable default exists
      - **LIMIT: Maximum 3 [NEEDS CLARIFICATION] markers total**
      - Prioritize clarifications by impact: scope > security/privacy > user experience > technical details
   2. Fill User Scenarios & Testing section
      If no clear user flow: ERROR "Cannot determine user scenarios"
   3. Generate Functional Requirements
      Each requirement must be testable
      Use reasonable defaults for unspecified details (document assumptions in Assumptions section)
   4. Define Success Criteria
      Create measurable, technology-agnostic outcomes
      Include both quantitative metrics (time, performance, volume) and qualitative measures (user satisfaction, task completion)
      Each criterion must be verifiable without implementation details
   5. Identify Key Entities (if data involved)
   6. Return: SUCCESS (spec ready for planning)

6. Write the specification to SPEC_FILE using the template structure, replacing placeholders with concrete details derived from the feature description (arguments) while preserving section order and headings.

   **IMPORTANT: Include YAML Frontmatter**:

   The spec.md file MUST start with YAML frontmatter containing metadata:

   ```yaml
   ---
   parent_branch: ${PARENT_BRANCH}
   feature_number: ${FEATURE_NUM}
   status: In Progress
   created_at: $(date -Iseconds)
   ---
   ```

   This metadata enables the `/specswarm:complete` command to merge back to the correct parent branch.

7. **Specification Quality Validation**: After writing the initial spec, validate it against quality criteria:

   a. **Create Spec Quality Checklist**: Generate a checklist file at `FEATURE_DIR/checklists/requirements.md` using the checklist template structure with these validation items:

      ```markdown
      # Specification Quality Checklist: [FEATURE NAME]

      **Purpose**: Validate specification completeness and quality before proceeding to planning
      **Created**: [DATE]
      **Feature**: [Link to spec.md]

      ## Content Quality

      - [ ] No implementation details (languages, frameworks, APIs)
      - [ ] Focused on user value and business needs
      - [ ] Written for non-technical stakeholders
      - [ ] All mandatory sections completed

      ## Requirement Completeness

      - [ ] No [NEEDS CLARIFICATION] markers remain
      - [ ] Requirements are testable and unambiguous
      - [ ] Success criteria are measurable
      - [ ] Success criteria are technology-agnostic (no implementation details)
      - [ ] All acceptance scenarios are defined
      - [ ] Edge cases are identified
      - [ ] Scope is clearly bounded
      - [ ] Dependencies and assumptions identified

      ## Feature Readiness

      - [ ] All functional requirements have clear acceptance criteria
      - [ ] User scenarios cover primary flows
      - [ ] Feature meets measurable outcomes defined in Success Criteria
      - [ ] No implementation details leak into specification

      ## Notes

      - Items marked incomplete require spec updates before `/specswarm:clarify` or `/specswarm:plan`
      ```

   b. **Run Validation Check**: Review the spec against each checklist item:
      - For each item, determine if it passes or fails
      - Document specific issues found (quote relevant spec sections)

   c. **Handle Validation Results**:

      - **If all items pass**: Mark checklist complete and proceed to step 8

      - **If items fail (excluding [NEEDS CLARIFICATION])**:
        1. List the failing items and specific issues
        2. Update the spec to address each issue
        3. Re-run validation until all items pass (max 3 iterations)
        4. If still failing after 3 iterations, document remaining issues in checklist notes and warn the user

      - **If [NEEDS CLARIFICATION] markers remain**:
        1. Extract all [NEEDS CLARIFICATION: ...] markers from the spec
        2. **LIMIT CHECK**: If more than 3 markers exist, keep only the 3 most critical (by scope/security/UX impact) and make informed guesses for the rest
        3. For each clarification needed (max 3), present options to the user in this format:

           ```markdown
           ## Question [N]: [Topic]

           **Context**: [Quote relevant spec section]

           **What we need to know**: [Specific question from NEEDS CLARIFICATION marker]

           **Suggested Answers**:

           | Option | Answer | Implications |
           |--------|--------|--------------|
           | A | [First suggested answer] | [What this means for the feature] |
           | B | [Second suggested answer] | [What this means for the feature] |
           | C | [Third suggested answer] | [What this means for the feature] |
           | Custom | Provide your own answer | [Explain how to provide custom input] |

           **Your choice**: _[Wait for user response]_
           ```

        4. **CRITICAL - Table Formatting**: Ensure markdown tables are properly formatted:
           - Use consistent spacing with pipes aligned
           - Each cell should have spaces around content: `| Content |` not `|Content|`
           - Header separator must have at least 3 dashes: `|--------|`
           - Test that the table renders correctly in markdown preview
        5. Number questions sequentially (Q1, Q2, Q3 - max 3 total)
        6. Present all questions together before waiting for responses
        7. Wait for the user to respond with their choices for all questions (e.g., "Q1: A, Q2: Custom - [details], Q3: B")
        8. Update the spec by replacing each [NEEDS CLARIFICATION] marker with the user's selected or provided answer
        9. Re-run validation after all clarifications are resolved

   d. **Update Checklist**: After each validation iteration, update the checklist file with the current pass/fail status

8. Report completion with:
   - Branch name (if git repo)
   - Feature directory path
   - Spec file path
   - Checklist results
   - Readiness for next phase (`/specswarm:clarify` or `/specswarm:plan`)

**NOTE:** This command creates the feature branch (if git), initializes the feature directory structure, and creates the initial spec.md file.

## Quick Guidelines

- Focus on **WHAT** users need and **WHY**.
- Avoid HOW to implement (no tech stack, APIs, code structure).
- Written for business stakeholders, not developers.
- DO NOT create any checklists that are embedded in the spec. That will be a separate command.

### Section Requirements

- **Mandatory sections**: Must be completed for every feature
- **Optional sections**: Include only when relevant to the feature
- When a section doesn't apply, remove it entirely (don't leave as "N/A")

### For AI Generation

When creating this spec from a user prompt:

1. **Make informed guesses**: Use context, industry standards, and common patterns to fill gaps
2. **Document assumptions**: Record reasonable defaults in the Assumptions section
3. **Limit clarifications**: Maximum 3 [NEEDS CLARIFICATION] markers - use only for critical decisions that:
   - Significantly impact feature scope or user experience
   - Have multiple reasonable interpretations with different implications
   - Lack any reasonable default
4. **Prioritize clarifications**: scope > security/privacy > user experience > technical details
5. **Think like a tester**: Every vague requirement should fail the "testable and unambiguous" checklist item
6. **Common areas needing clarification** (only if no reasonable default exists):
   - Feature scope and boundaries (include/exclude specific use cases)
   - User types and permissions (if multiple conflicting interpretations possible)
   - Security/compliance requirements (when legally/financially significant)

**Examples of reasonable defaults** (don't ask about these):

- Data retention: Industry-standard practices for the domain
- Performance targets: Standard web/mobile app expectations unless specified
- Error handling: User-friendly messages with appropriate fallbacks
- Authentication method: Standard session-based or OAuth2 for web apps
- Integration patterns: RESTful APIs unless specified otherwise

### Success Criteria Guidelines

Success criteria must be:

1. **Measurable**: Include specific metrics (time, percentage, count, rate)
2. **Technology-agnostic**: No mention of frameworks, languages, databases, or tools
3. **User-focused**: Describe outcomes from the user/business perspective, not system internals
4. **Verifiable**: Can be tested/validated without knowing implementation details

**Good examples**:

- "Users can complete checkout in under 3 minutes"
- "System supports 10,000 concurrent users"
- "95% of searches return results in under 1 second"
- "Task completion rate improves by 40%"

**Bad examples** (implementation-focused):

- "API response time is under 200ms" (too technical, use "Users see results instantly")
- "Database can handle 1000 TPS" (implementation detail, use user-facing metric)
- "React components render efficiently" (framework-specific)
- "Redis cache hit rate above 80%" (technology-specific)
292 commands/suggest.md Normal file
@@ -0,0 +1,292 @@
---
description: AI-powered workflow recommendation based on context analysis
---

## User Input

```text
$ARGUMENTS
```

## Goal

Analyze current context (branch name, commits, file changes, user description) and recommend the most appropriate workflow.

---

## Execution Steps

### 1. Gather Context

```bash
echo "🤖 Workflow Suggestion - Analyzing Context..."

# Branch name
CURRENT_BRANCH=$(git rev-parse --abbrev-ref HEAD 2>/dev/null)

# Recent commits
RECENT_COMMITS=$(git log -5 --oneline 2>/dev/null)

# Changed files
CHANGED_FILES=$(git diff --name-only HEAD~5..HEAD 2>/dev/null)

# Uncommitted changes
UNCOMMITTED=$(git status --porcelain 2>/dev/null | head -10)

# User description (if provided)
USER_DESCRIPTION=$ARGUMENTS
```
---

### 2. Analyze Patterns

**Branch Name Analysis:**
```bash
# Integer-valued associative array so later += additions accumulate arithmetically
declare -Ai PATTERN_SCORE

if [[ "$CURRENT_BRANCH" =~ ^feature/ ]]; then
  PATTERN_SCORE["feature"]=5
elif [[ "$CURRENT_BRANCH" =~ ^bugfix/ ]]; then
  PATTERN_SCORE["bugfix"]=10
elif [[ "$CURRENT_BRANCH" =~ ^modify/ ]]; then
  PATTERN_SCORE["modify"]=10
elif [[ "$CURRENT_BRANCH" =~ ^hotfix/ ]]; then
  PATTERN_SCORE["hotfix"]=10
elif [[ "$CURRENT_BRANCH" =~ ^refactor/ ]]; then
  PATTERN_SCORE["refactor"]=10
elif [[ "$CURRENT_BRANCH" =~ ^deprecate/ ]]; then
  PATTERN_SCORE["deprecate"]=10
fi
```
**Commit Message Analysis:**
```bash
# Look for keywords in commits
if echo "$RECENT_COMMITS" | grep -qi "fix\|bug\|issue"; then
  PATTERN_SCORE["bugfix"]+=3
fi

if echo "$RECENT_COMMITS" | grep -qi "refactor\|cleanup\|improve"; then
  PATTERN_SCORE["refactor"]+=3
fi

if echo "$RECENT_COMMITS" | grep -qi "hotfix\|emergency\|critical"; then
  PATTERN_SCORE["hotfix"]+=5
fi

if echo "$RECENT_COMMITS" | grep -qi "deprecate\|remove\|sunset"; then
  PATTERN_SCORE["deprecate"]+=3
fi

if echo "$RECENT_COMMITS" | grep -qi "modify\|update\|change"; then
  PATTERN_SCORE["modify"]+=2
fi
```
**File Change Analysis:**
```bash
# Count changed files (grep -c . avoids counting an empty result as one line)
FILE_COUNT=$(printf '%s\n' "$CHANGED_FILES" | grep -c .)

if [ "$FILE_COUNT" -gt 10 ]; then
  # Many files changed - likely refactor or major modify
  PATTERN_SCORE["refactor"]+=2
  PATTERN_SCORE["modify"]+=2
fi

# Check for test file changes
if echo "$CHANGED_FILES" | grep -qi "test\|spec"; then
  PATTERN_SCORE["bugfix"]+=1  # Tests often accompany bug fixes
fi
```
**User Description Analysis:**
```bash
if [ -n "$USER_DESCRIPTION" ]; then
  # Analyze user description for keywords
  if echo "$USER_DESCRIPTION" | grep -qi "bug\|fix\|broken\|not working"; then
    PATTERN_SCORE["bugfix"]+=5
  fi

  if echo "$USER_DESCRIPTION" | grep -qi "emergency\|critical\|urgent\|hotfix\|down"; then
    PATTERN_SCORE["hotfix"]+=7
  fi

  if echo "$USER_DESCRIPTION" | grep -qi "refactor\|clean\|improve quality"; then
    PATTERN_SCORE["refactor"]+=5
  fi

  if echo "$USER_DESCRIPTION" | grep -qi "modify\|change\|update\|add to"; then
    PATTERN_SCORE["modify"]+=5
  fi

  if echo "$USER_DESCRIPTION" | grep -qi "deprecate\|remove\|sunset\|phase out"; then
    PATTERN_SCORE["deprecate"]+=5
  fi
fi
```
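The scoring passes above can be exercised end to end on canned inputs. A minimal, self-contained bash sketch (the sample branch name and commit line are hypothetical, and `declare -Ai` is assumed so `+=` adds numerically):

```shell
#!/usr/bin/env bash
# Hypothetical sample inputs standing in for real git output
CURRENT_BRANCH="bugfix/042-login-timeout"
RECENT_COMMITS="a1b2c3d fix login timeout issue"

# Integer-valued associative array so += accumulates scores arithmetically
declare -Ai PATTERN_SCORE

# Branch-name pass
if [[ "$CURRENT_BRANCH" =~ ^bugfix/ ]]; then
  PATTERN_SCORE["bugfix"]=10
fi

# Commit-keyword pass
if echo "$RECENT_COMMITS" | grep -qi "fix\|bug\|issue"; then
  PATTERN_SCORE["bugfix"]+=3
fi

# Pick the highest-scoring workflow
BEST=""
BEST_SCORE=0
for wf in "${!PATTERN_SCORE[@]}"; do
  if (( PATTERN_SCORE[$wf] > BEST_SCORE )); then
    BEST="$wf"
    BEST_SCORE=${PATTERN_SCORE[$wf]}
  fi
done
echo "$BEST $BEST_SCORE"
```

With these inputs the branch pass contributes 10 and the commit pass 3, so `bugfix` wins with a score of 13.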
---

### 3. Generate Recommendations

````markdown
# Workflow Recommendation

**Analysis Date**: YYYY-MM-DD

---

## Context Analysis

**Branch**: ${CURRENT_BRANCH}

**Recent Activity**:
- Commits analyzed: [N]
- Files changed: [N]
- Keywords found: [list]

**User Description**: ${USER_DESCRIPTION}

---

## Pattern Scores

| Workflow | Score | Confidence |
|----------|-------|------------|
| Bugfix | [N] | [High/Med/Low] |
| Modify | [N] | [High/Med/Low] |
| Hotfix | [N] | [High/Med/Low] |
| Refactor | [N] | [High/Med/Low] |
| Deprecate | [N] | [High/Med/Low] |
| Feature (SpecSwarm/SpecTest) | [N] | [High/Med/Low] |

---

## Recommendation

### 🎯 Primary Recommendation: `/specswarm:[workflow]`

**Confidence**: [High/Medium/Low]

**Reasoning**:
1. [Reason 1 - evidence from branch/commits/files]
2. [Reason 2]
3. [Reason 3]

**Why This Workflow**:
[Description of why this workflow fits]

**Expected Outcome**:
- [Outcome 1]
- [Outcome 2]

---

## Alternative Workflows

### Alternative 1: `/specswarm:[workflow2]`
**Confidence**: [Medium/Low]
**When to Use**: [Scenario where this would be better]

### Alternative 2: `/specswarm:[workflow3]`
**Confidence**: [Medium/Low]
**When to Use**: [Scenario where this would be better]

---

## Not Recommended

**Feature Development** → Use `/specswarm:*` workflows
**Reason**: [If feature development detected, redirect to appropriate plugin]

---

## Next Steps

1. **Review recommendation** above
2. **Run recommended workflow**: `/specswarm:[workflow]`
3. **If uncertain**, provide more context: `/specswarm:suggest "more details here"`

**Command to run**:
```
/specswarm:[recommended_workflow]
```
````
---

### 4. Output Recommendation

```
🤖 Workflow Suggestion Complete

🎯 Recommended Workflow: /specswarm:[workflow]
   Confidence: [High/Medium/Low]

📊 Analysis:
   - Branch pattern: [detected pattern]
   - Commit keywords: [list]
   - Files changed: [N]
   - User signals: [description analysis]

📋 Full analysis (above)

⚡ Quick Start:
   /specswarm:[recommended_workflow]
```
---

## Example Output

```
🤖 Workflow Suggestion Complete

🎯 Recommended Workflow: /specswarm:bugfix
   Confidence: High

📊 Analysis:
   - Branch pattern: bugfix/042-login-timeout
   - Commit keywords: "fix", "bug", "timeout issue"
   - Files changed: 3 (auth, tests)
   - User signals: "login not working"

Reasoning:
1. Branch name explicitly indicates bugfix
2. Commit messages contain bug-fix keywords
3. User description indicates broken functionality
4. Test files modified (typical of bugfix workflow)

⚡ Quick Start:
   /specswarm:bugfix
```
---

## Decision Logic

**High Confidence (score ≥ 10)**:
- Clear indicators from multiple sources
- Strong pattern match
- Recommend with confidence

**Medium Confidence (score 5-9)**:
- Some indicators present
- Moderate pattern match
- Offer alternatives

**Low Confidence (score < 5)**:
- Weak or conflicting signals
- Suggest multiple options
- Ask for more context
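These thresholds can be captured in a tiny helper. A sketch of the mapping as described (the function name is ours, not part of the plugin):

```shell
# Maps a raw pattern score onto the confidence tiers described above
confidence_for_score() {
  local score=$1
  if [ "$score" -ge 10 ]; then
    echo "High"
  elif [ "$score" -ge 5 ]; then
    echo "Medium"
  else
    echo "Low"
  fi
}

confidence_for_score 13   # High
confidence_for_score 7    # Medium
confidence_for_score 2    # Low
```

Note that the boundary values 10 and 5 fall into the higher tier, matching "score ≥ 10" and "score 5-9" above.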
---

## Success Criteria

✅ Context gathered from git history
✅ Patterns analyzed (branch, commits, files, description)
✅ Scores calculated for each workflow
✅ Primary recommendation provided
✅ Alternative workflows suggested
✅ Next steps clear
214
commands/tasks.md
Normal file
@@ -0,0 +1,214 @@
---
description: Generate an actionable, dependency-ordered tasks.md for the feature based on available design artifacts.
---

<!--
ATTRIBUTION CHAIN:
1. Original: GitHub spec-kit (https://github.com/github/spec-kit)
   Copyright (c) GitHub, Inc. | MIT License
2. Adapted: SpecKit plugin by Marty Bonacci (2025)
3. Forked: SpecSwarm plugin with tech stack management
   by Marty Bonacci & Claude Code (2025)
-->

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).
## Outline

1. **Discover Feature Context**:
   ```bash
   REPO_ROOT=$(git rev-parse --show-toplevel 2>/dev/null || pwd)

   # Source features location helper
   SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
   PLUGIN_DIR="$(dirname "$SCRIPT_DIR")"
   source "$PLUGIN_DIR/lib/features-location.sh"

   # Initialize features directory
   get_features_dir "$REPO_ROOT"

   BRANCH=$(git rev-parse --abbrev-ref HEAD 2>/dev/null)
   FEATURE_NUM=$(echo "$BRANCH" | grep -oE '^[0-9]{3}')
   [ -z "$FEATURE_NUM" ] && FEATURE_NUM=$(list_features "$REPO_ROOT" | grep -oE '^[0-9]{3}' | sort -nr | head -1)

   find_feature_dir "$FEATURE_NUM" "$REPO_ROOT"
   # FEATURE_DIR is now set by find_feature_dir
   ```
2. **Load design documents**: Read from FEATURE_DIR:
   - **Required**: plan.md (tech stack, libraries, structure), spec.md (user stories with priorities)
   - **Optional**: data-model.md (entities), contracts/ (API endpoints), research.md (decisions), quickstart.md (test scenarios)
   - **Load `.specswarm/tech-stack.md` for validation** (if it exists)
   - Note: Not all projects have all documents. Generate tasks based on what's available.
<!-- ========== TECH STACK VALIDATION (SpecSwarm Enhancement) ========== -->
<!-- Added by Marty Bonacci & Claude Code (2025) -->

2b. **Tech Stack Validation** (if tech-stack.md exists):

   **Purpose**: Validate that no tasks introduce unapproved or prohibited technologies

   1. **Read Tech Stack Compliance Report** from plan.md:
      - Check if "Tech Stack Compliance Report" section exists
      - If it does NOT exist: Skip validation (plan.md was created before SpecSwarm)
      - If it DOES exist: Proceed with validation

   2. **Verify Compliance Report is Resolved**:
      ```bash
      # Check for unresolved conflicts or prohibitions
      # (grep -F treats the asterisks and brackets as literal text)
      if grep -q "⚠️ Conflicting Technologies" "${FEATURE_DIR}/plan.md"; then
        # Check if there are still pending choices
        if grep -qF '**Your choice**: _[' "${FEATURE_DIR}/plan.md"; then
          ERROR "Tech stack conflicts unresolved in plan.md"
          MESSAGE "Please resolve conflicting technology choices in plan.md before generating tasks"
          MESSAGE "Run /specswarm:plan again to address conflicts"
          HALT
        fi
      fi

      if grep -q "❌ Prohibited Technologies" "${FEATURE_DIR}/plan.md"; then
        if grep -qF '**Cannot proceed**' "${FEATURE_DIR}/plan.md"; then
          ERROR "Prohibited technologies found in plan.md"
          MESSAGE "Remove prohibited technologies from plan.md Technical Context"
          MESSAGE "See .specswarm/tech-stack.md for approved alternatives"
          HALT
        fi
      fi
      ```
   3. **Scan Task Descriptions for Technology References**:
      Before finalizing tasks.md, scan all task descriptions for library/framework names:
      ```bash
      # Extract task descriptions (before they're written to tasks.md)
      for TASK in "${ALL_TASKS[@]}"; do
        TASK_DESC=$(echo "$TASK" | grep -oE 'Install.*|Add.*|Use.*|Import.*')

        # Check against prohibited list
        for PROHIBITED in $(grep "❌" "${REPO_ROOT}/.specswarm/tech-stack.md" | sed 's/.*❌ \([^ ]*\).*/\1/'); do
          if echo "$TASK_DESC" | grep -qi "$PROHIBITED"; then
            WARNING "Task references prohibited technology: $PROHIBITED"
            MESSAGE "Task: $TASK_DESC"
            APPROVED_ALT=$(grep "❌.*${PROHIBITED}" "${REPO_ROOT}/.specswarm/tech-stack.md" | sed 's/.*use \(.*\) instead.*/\1/')
            MESSAGE "Replace with approved alternative: $APPROVED_ALT"
            # Auto-correct task description (plain sed over the string; -i is for files)
            TASK_DESC=$(echo "$TASK_DESC" | sed "s/${PROHIBITED}/${APPROVED_ALT}/gi")
          fi
        done

        # Check against unapproved list (warn but allow)
        # TECH_MENTIONED: technology name extracted from the task description above
        if ! grep -qi "$TECH_MENTIONED" "${REPO_ROOT}/.specswarm/tech-stack.md" 2>/dev/null; then
          INFO "Task mentions unapproved technology: $TECH_MENTIONED"
          INFO "This will be validated during /specswarm:implement"
        fi
      done
      ```
   4. **Validation Summary**:
      Add validation summary to tasks.md header:
      ```markdown
      <!-- Tech Stack Validation: PASSED -->
      <!-- Validated against: .specswarm/tech-stack.md v{version} -->
      <!-- No prohibited technologies found -->
      <!-- {N} unapproved technologies require runtime validation -->
      ```

<!-- ========== END TECH STACK VALIDATION ========== -->
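The auto-correction step inside the validation block rewrites a task description with `sed`; since `sed -i` edits files in place, a plain `sed` over the string is what applies here. A self-contained sketch of the prohibited-technology extraction and substitution (the tech-stack line format and package names are hypothetical examples):

```shell
# Hypothetical tech-stack entry and task description
TECH_STACK_LINE="❌ moment - use date-fns instead"
TASK_DESC="Install moment for date formatting"

# Extract the prohibited name and its approved alternative from the entry
PROHIBITED=$(echo "$TECH_STACK_LINE" | sed 's/.*❌ \([^ ]*\).*/\1/')
APPROVED_ALT=$(echo "$TECH_STACK_LINE" | sed 's/.*use \(.*\) instead.*/\1/')

if echo "$TASK_DESC" | grep -qi "$PROHIBITED"; then
  # Plain sed over stdin (no -i); matches the exact-case occurrence here
  TASK_DESC=$(echo "$TASK_DESC" | sed "s/${PROHIBITED}/${APPROVED_ALT}/g")
fi
echo "$TASK_DESC"
```

GNU sed also accepts an `I` flag (`s/.../.../gI`) when the match must be case-insensitive, as the `grep -qi` check above is.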
3. **Execute task generation workflow** (follow the template structure):
   - Load plan.md and extract tech stack, libraries, project structure
   - **Load spec.md and extract user stories with their priorities (P1, P2, P3, etc.)**
   - If data-model.md exists: Extract entities → map to user stories
   - If contracts/ exists: Each file → map endpoints to user stories
   - If research.md exists: Extract decisions → generate setup tasks
   - **Generate tasks ORGANIZED BY USER STORY**:
     - Setup tasks (shared infrastructure needed by all stories)
     - **Foundational tasks (prerequisites that must complete before ANY user story can start)**
     - For each user story (in priority order P1, P2, P3...):
       - Group all tasks needed to complete JUST that story
       - Include models, services, endpoints, UI components specific to that story
       - Mark which tasks are [P] parallelizable
       - If tests requested: Include tests specific to that story
     - Polish/Integration tasks (cross-cutting concerns)
   - **Tests are OPTIONAL**: Only generate test tasks if explicitly requested in the feature spec or user asks for TDD approach
   - Apply task rules:
     - Different files = mark [P] for parallel
     - Same file = sequential (no [P])
     - If tests requested: Tests before implementation (TDD order)
   - Number tasks sequentially (T001, T002...)
   - Generate dependency graph showing user story completion order
   - Create parallel execution examples per user story
   - Validate task completeness (each user story has all needed tasks, independently testable)

4. **Generate tasks.md**: Use `.specify/templates/tasks-template.md` as structure, fill with:
   - Correct feature name from plan.md
   - Phase 1: Setup tasks (project initialization)
   - Phase 2: Foundational tasks (blocking prerequisites for all user stories)
   - Phase 3+: One phase per user story (in priority order from spec.md)
     - Each phase includes: story goal, independent test criteria, tests (if requested), implementation tasks
   - Clear [Story] labels (US1, US2, US3...) for each task
   - [P] markers for parallelizable tasks within each story
   - Checkpoint markers after each story phase
   - Final Phase: Polish & cross-cutting concerns
   - Numbered tasks (T001, T002...) in execution order
   - Clear file paths for each task
   - Dependencies section showing story completion order
   - Parallel execution examples per story
   - Implementation strategy section (MVP first, incremental delivery)

5. **Report**: Output path to generated tasks.md and summary:
   - Total task count
   - Task count per user story
   - Parallel opportunities identified
   - Independent test criteria for each story
   - Suggested MVP scope (typically just User Story 1)

Context for task generation: {ARGS}

The tasks.md should be immediately executable - each task must be specific enough that an LLM can complete it without additional context.
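The feature-context discovery in step 1 hinges on a three-digit numeric branch prefix; `grep -oE '^[0-9]{3}'` yields nothing for other branch names, which is why the `list_features | sort -nr | head -1` fallback exists. A standalone sketch of that extraction (the branch names are hypothetical):

```shell
# Pull a three-digit feature number from the start of a branch name, if present
extract_feature_num() {
  echo "$1" | grep -oE '^[0-9]{3}'
}

extract_feature_num "042-login-timeout"            # 042
extract_feature_num "feature/042-login" || true    # prints nothing: prefix is not three digits
```

The `|| true` only keeps the non-matching call from ending the snippet with grep's nonzero exit status.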
## Task Generation Rules

**IMPORTANT**: Tests are optional. Only generate test tasks if the user explicitly requested testing or TDD approach in the feature specification.

**CRITICAL**: Tasks MUST be organized by user story to enable independent implementation and testing.

1. **From User Stories (spec.md)** - PRIMARY ORGANIZATION:
   - Each user story (P1, P2, P3...) gets its own phase
   - Map all related components to their story:
     - Models needed for that story
     - Services needed for that story
     - Endpoints/UI needed for that story
     - If tests requested: Tests specific to that story
   - Mark story dependencies (most stories should be independent)

2. **From Contracts**:
   - Map each contract/endpoint → to the user story it serves
   - If tests requested: Each contract → contract test task [P] before implementation in that story's phase

3. **From Data Model**:
   - Map each entity → to the user story(ies) that need it
   - If entity serves multiple stories: Put in earliest story or Setup phase
   - Relationships → service layer tasks in appropriate story phase

4. **From Setup/Infrastructure**:
   - Shared infrastructure → Setup phase (Phase 1)
   - Foundational/blocking tasks → Foundational phase (Phase 2)
     - Examples: Database schema setup, authentication framework, core libraries, base configurations
     - These MUST complete before any user story can be implemented
   - Story-specific setup → within that story's phase

5. **Ordering**:
   - Phase 1: Setup (project initialization)
   - Phase 2: Foundational (blocking prerequisites - must complete before user stories)
   - Phase 3+: User Stories in priority order (P1, P2, P3...)
     - Within each story: Tests (if requested) → Models → Services → Endpoints → Integration
   - Final Phase: Polish & Cross-Cutting Concerns
   - Each user story phase should be a complete, independently testable increment
631
commands/upgrade.md
Normal file
@@ -0,0 +1,631 @@
---
description: Upgrade dependencies or frameworks with breaking change analysis and automated refactoring
args:
  - name: upgrade_target
    description: What to upgrade (e.g., "React 18 to React 19", "all dependencies", "react", "@latest")
    required: true
  - name: --deps
    description: Upgrade all dependencies to latest (alternative to specifying target)
    required: false
  - name: --package
    description: Upgrade specific package (e.g., --package react)
    required: false
  - name: --dry-run
    description: Analyze breaking changes without making changes
    required: false
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).
## Goal

Upgrade dependencies or frameworks with automated breaking change analysis and code refactoring.

**Purpose**: Streamline complex upgrade processes by automating dependency updates, breaking change detection, and code refactoring.

**Workflow**: Analyze → Plan → Update → Refactor → Test → Report

**Scope**:
- Framework upgrades (React 18→19, Vue 2→3, Next.js 13→14, etc.)
- Dependency upgrades (specific packages or all dependencies)
- Breaking change detection from changelogs
- Automated code refactoring for breaking changes
- Test validation after upgrade

**User Experience**:
- Single command handles complex multi-step upgrades
- Automatic breaking change detection
- Guided refactoring with codemod suggestions
- Test-driven validation
- Clear migration report with manual tasks

---
## Pre-Flight Checks

```bash
# Parse arguments
UPGRADE_TARGET=""
UPGRADE_ALL_DEPS=false
PACKAGE_NAME=""
DRY_RUN=false
EXPECT_PACKAGE=false

# Extract upgrade target (first non-flag argument).
# Note: `shift` does not advance a `for arg in ...` loop, so --package uses a
# look-ahead flag to capture the argument that follows it.
for arg in $ARGUMENTS; do
  if [ "$EXPECT_PACKAGE" = true ]; then
    PACKAGE_NAME="$arg"
    EXPECT_PACKAGE=false
  elif [ "$arg" = "--deps" ]; then
    UPGRADE_ALL_DEPS=true
  elif [ "$arg" = "--package" ]; then
    EXPECT_PACKAGE=true
  elif [ "$arg" = "--dry-run" ]; then
    DRY_RUN=true
  elif [ "${arg:0:2}" != "--" ] && [ -z "$UPGRADE_TARGET" ]; then
    UPGRADE_TARGET="$arg"
  fi
done

# Validate upgrade target
if [ -z "$UPGRADE_TARGET" ] && [ "$UPGRADE_ALL_DEPS" = false ] && [ -z "$PACKAGE_NAME" ]; then
  echo "❌ Error: Upgrade target required"
  echo ""
  echo "Usage: /specswarm:upgrade \"target\" [--deps] [--package name] [--dry-run]"
  echo ""
  echo "Examples:"
  echo "  /specswarm:upgrade \"React 18 to React 19\""
  echo "  /specswarm:upgrade \"Next.js 14 to Next.js 15\""
  echo "  /specswarm:upgrade --deps                  # All dependencies"
  echo "  /specswarm:upgrade --package react         # Specific package"
  echo "  /specswarm:upgrade \"Vue 2 to Vue 3\" --dry-run"
  exit 1
fi

# Normalize upgrade target
if [ "$UPGRADE_ALL_DEPS" = true ]; then
  UPGRADE_TARGET="all dependencies"
elif [ -n "$PACKAGE_NAME" ]; then
  UPGRADE_TARGET="$PACKAGE_NAME to latest"
fi

# Get project root
if ! git rev-parse --git-dir > /dev/null 2>&1; then
  echo "❌ Error: Not in a git repository"
  exit 1
fi

REPO_ROOT=$(git rev-parse --show-toplevel)
cd "$REPO_ROOT"

# Check for package.json
if [ ! -f "package.json" ]; then
  echo "❌ Error: package.json not found"
  echo "   This command currently supports Node.js/npm projects."
  exit 1
fi
```
---

## Execution Steps

### Step 1: Display Welcome Banner

```bash
if [ "$DRY_RUN" = true ]; then
  echo "🔍 SpecSwarm Upgrade - Analysis Mode (Dry Run)"
else
  echo "⬆️ SpecSwarm Upgrade - Dependency & Framework Migration"
fi
echo "══════════════════════════════════════════"
echo ""
echo "Upgrade: $UPGRADE_TARGET"
echo ""

if [ "$DRY_RUN" = true ]; then
  echo "🔍 DRY RUN MODE: Analyzing only, no changes will be made"
  echo ""
  echo "This analysis will:"
  echo "  1. Detect current versions"
  echo "  2. Identify latest available versions"
  echo "  3. Analyze breaking changes from changelogs"
  echo "  4. Estimate refactoring impact"
  echo "  5. Generate migration plan"
  echo ""
else
  echo "This workflow will:"
  echo "  1. Analyze breaking changes"
  echo "  2. Generate migration plan"
  echo "  3. Update dependencies"
  echo "  4. Refactor code for breaking changes"
  echo "  5. Run tests to verify compatibility"
  echo "  6. Report manual migration tasks"
  echo ""
fi

read -p "Press Enter to start, or Ctrl+C to cancel..."
echo ""
```
---

### Step 2: Phase 1 - Analyze Current State

**YOU MUST NOW analyze the current dependency state:**

```bash
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "🔍 Phase 1: Analyzing Current Dependencies"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
```

**Analysis steps:**

1. Read package.json to get current dependencies
2. Identify the packages to upgrade:
   - If UPGRADE_ALL_DEPS: all dependencies + devDependencies
   - If PACKAGE_NAME: specific package
   - If upgrade target like "React 18 to React 19": extract package names (react, react-dom, etc.)

3. For each package, detect:
   - Current version (from package.json)
   - Latest version (from npm registry)
   - Installed version (from package-lock.json or node_modules)

4. Check for framework-specific packages:
   - React: react, react-dom, @types/react
   - Vue: vue, @vue/*, vue-router
   - Next.js: next, react, react-dom
   - Remix: @remix-run/*
   - Angular: @angular/*

```bash
# Example analysis output
echo "📦 Current Versions:"
echo "   react: 18.2.0"
echo "   react-dom: 18.2.0"
echo "   @types/react: 18.0.28"
echo ""
echo "📦 Target Versions:"
echo "   react: 19.0.0"
echo "   react-dom: 19.0.0"
echo "   @types/react: 19.0.0"
echo ""
```

Store:
- PACKAGES_TO_UPGRADE (array of package names)
- CURRENT_VERSIONS (map of package → version)
- TARGET_VERSIONS (map of package → version)
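Reading the declared version range out of package.json can be done without jq when each dependency sits on its own line, as npm writes it. A hedged sketch (the package.json fragment and helper name are hypothetical; real projects should prefer jq or `npm ls`):

```shell
# Hypothetical package.json fragment written to a temp file
PKG_JSON=$(mktemp)
cat > "$PKG_JSON" <<'EOF'
{
  "dependencies": {
    "react": "^18.2.0",
    "react-dom": "^18.2.0"
  }
}
EOF

# jq-free lookup of the declared range for one package (assumes one entry per line)
declared_version() {
  grep "\"$1\":" "$2" | head -1 | sed 's/.*: *"\([^"]*\)".*/\1/'
}

REACT_RANGE=$(declared_version react "$PKG_JSON")
rm -f "$PKG_JSON"
echo "$REACT_RANGE"   # ^18.2.0
```

The quoted `"$1":` pattern keeps `react` from also matching the `react-dom` line.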
---

### Step 3: Phase 2 - Breaking Change Analysis

**YOU MUST NOW analyze breaking changes:**

```bash
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "📋 Phase 2: Analyzing Breaking Changes"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
echo "Fetching changelogs and release notes..."
echo ""
```

**For each package being upgraded:**

1. **Fetch changelog/release notes:**
   - Check GitHub releases (if repository URL in package.json)
   - Check npm package page
   - Use WebFetch tool to get changelog content

2. **Extract breaking changes:**
   - Look for "BREAKING CHANGE", "Breaking Changes", "⚠️", "❗" sections
   - Extract version-specific breaking changes (between current and target)
   - Parse common patterns:
     - "Removed X"
     - "X is now Y"
     - "X has been deprecated"
     - "X no longer supports Y"

3. **Categorize breaking changes:**
   - **API changes**: Function signature changes, removed methods
   - **Behavior changes**: Default behavior modifications
   - **Deprecations**: Deprecated but still working
   - **Removals**: Features completely removed
   - **Configuration**: Config file format changes

4. **Assess impact on codebase:**
   - Search codebase for usage of affected APIs
   - Estimate files requiring manual changes
   - Identify automated refactoring opportunities

```bash
echo "🔍 Breaking Changes Detected:"
echo ""
echo "React 18 → 19:"
echo "  ⚠️ ReactDOM.render removed (use createRoot)"
echo "  ⚠️ Legacy Context API deprecated"
echo "  ⚠️ PropTypes moved to separate package"
echo "  ✅ Automatic migration available for 2/3 changes"
echo ""
echo "Impact Analysis:"
echo "  📁 3 files use ReactDOM.render"
echo "  📁 1 file uses Legacy Context"
echo "  📁 0 files use PropTypes"
echo ""
```

Store:
- BREAKING_CHANGES (array of change descriptions)
- AFFECTED_FILES (map of change → files)
- AUTO_FIXABLE (array of changes with codemods available)
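Pulling only the bullets under a "Breaking Changes" heading out of a fetched changelog can be sketched with a sed range. A self-contained illustration (the changelog excerpt is hypothetical):

```shell
# Hypothetical changelog excerpt
CHANGELOG='## 19.0.0
### Breaking Changes
- ReactDOM.render has been removed; use createRoot
- Legacy Context API is deprecated
### Features
- New hooks'

# Keep the region from the "Breaking Change" heading to the next heading,
# then keep only the bullet lines inside it
BREAKING=$(echo "$CHANGELOG" | sed -n '/Breaking Change/,/^### F/p' | grep '^- ')
echo "$BREAKING"
```

The sed range includes its start and end heading lines, which is why the trailing `grep '^- '` is needed to keep just the bullets.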
---

### Step 4: Phase 3 - Generate Migration Plan

**YOU MUST NOW generate a detailed migration plan:**

```bash
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "📝 Phase 3: Generating Migration Plan"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
```

**Create a migration plan that includes:**

1. **Dependency Update Steps:**
   ```
   npm install react@19 react-dom@19 @types/react@19
   ```

2. **Automated Refactoring Steps:**
   - For each auto-fixable breaking change
   - Codemod command or manual find/replace pattern
   - Example: `npx react-codemod create-root src/`

3. **Manual Migration Tasks:**
   - For each non-auto-fixable breaking change
   - Which files need manual updates
   - What changes to make
   - Code examples (before/after)

4. **Test Validation:**
   - Test commands to run after changes
   - Expected test results

5. **Risk Assessment:**
   - Low/Medium/High risk rating
   - Estimated time
   - Recommended approach (all at once vs incremental)

```bash
echo "Migration Plan Generated:"
echo ""
echo "1. Update Dependencies (automated)"
echo "   └─ npm install react@19 react-dom@19"
echo ""
echo "2. Automated Refactoring (3 codemods)"
echo "   └─ ReactDOM.render → createRoot"
echo "   └─ Legacy Context → New Context API"
echo "   └─ Update TypeScript types"
echo ""
echo "3. Manual Tasks (1 required)"
echo "   └─ Review and test Legacy Context migration in src/contexts/ThemeContext.tsx"
echo ""
echo "4. Test Validation"
echo "   └─ npm test (all tests must pass)"
echo ""
echo "Risk Assessment: MEDIUM"
echo "Estimated Time: 15-30 minutes"
echo ""
```

IF DRY_RUN = true:
- Stop here, display plan, exit
- Do NOT make any changes
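The Low/Medium/High rating above can come from simple counts. One hypothetical heuristic, purely illustrative (the thresholds are ours, not part of the workflow):

```shell
# Hypothetical heuristic: rate upgrade risk from the number of breaking
# changes detected and how many of them need manual migration
risk_for_upgrade() {
  local breaking=$1 manual=$2
  if [ "$manual" -gt 3 ] || [ "$breaking" -gt 10 ]; then
    echo "HIGH"
  elif [ "$manual" -gt 0 ] || [ "$breaking" -gt 3 ]; then
    echo "MEDIUM"
  else
    echo "LOW"
  fi
}

risk_for_upgrade 3 1   # MEDIUM (3 breaking changes, 1 manual task, as in the React example)
risk_for_upgrade 1 0   # LOW
```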
---

### Step 5: Phase 4 - Update Dependencies

**IF NOT dry run, YOU MUST NOW update dependencies:**

```bash
if [ "$DRY_RUN" = false ]; then
  echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  echo "📦 Phase 4: Updating Dependencies"
  echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  echo ""
fi
```

**Use Bash tool to run npm install:**

```bash
# Build install command
INSTALL_CMD="npm install"

for package in "${PACKAGES_TO_UPGRADE[@]}"; do
  target_version="${TARGET_VERSIONS[$package]}"
  INSTALL_CMD="$INSTALL_CMD $package@$target_version"
done

echo "Running: $INSTALL_CMD"
echo ""

# Execute
$INSTALL_CMD

if [ $? -eq 0 ]; then
  echo ""
  echo "✅ Dependencies updated successfully"
  echo ""
else
  echo ""
  echo "❌ Dependency update failed"
  echo "   Check npm output above for errors"
  exit 1
fi
```
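The install-command assembly above can be tested in isolation with canned data. A minimal sketch (the package set mirrors the React example; versions are illustrative):

```shell
# Hypothetical upgrade set mirroring the React example above
PACKAGES_TO_UPGRADE=(react react-dom "@types/react")
declare -A TARGET_VERSIONS=([react]=19.0.0 [react-dom]=19.0.0 ["@types/react"]=19.0.0)

# Build the install command by appending package@version pairs in order
INSTALL_CMD="npm install"
for package in "${PACKAGES_TO_UPGRADE[@]}"; do
  INSTALL_CMD="$INSTALL_CMD $package@${TARGET_VERSIONS[$package]}"
done
echo "$INSTALL_CMD"   # npm install react@19.0.0 react-dom@19.0.0 @types/react@19.0.0
```

Iterating the indexed array (rather than the associative one) keeps the packages in a deterministic order.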
|
||||
|
||||
---
|
||||
|
||||
### Step 6: Phase 5 - Automated Refactoring
|
||||
|
||||
**IF NOT dry run, YOU MUST NOW apply automated refactorings:**
|
||||
|
||||
```bash
|
||||
if [ "$DRY_RUN" = false ]; then
|
||||
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
|
||||
echo "🔧 Phase 5: Automated Refactoring"
|
||||
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
|
||||
echo ""
|
||||
fi
|
||||
```
|
||||
|
||||
**For each auto-fixable breaking change:**

1. **Apply codemod** (if available):
   - React: `npx react-codemod <codemod-name> <path>`
   - Vue: `npx vue-codemod <codemod-name> <path>`
   - Other: Custom find/replace using Edit tool

2. **Apply manual refactoring patterns:**
   - Use Grep tool to find affected code
   - Use Edit tool to make changes
   - Log each change for rollback if needed

3. **Verify changes:**
   - Check syntax (`bash -n` for shell, `tsc --noEmit` for TypeScript)
   - Quick validation that files parse correctly

```bash
echo "Applying refactoring 1/3: ReactDOM.render → createRoot"
# Apply codemod or manual refactoring
echo "✅ Refactored 3 files"
echo ""

echo "Applying refactoring 2/3: Legacy Context → New Context API"
# Apply codemod or manual refactoring
echo "✅ Refactored 1 file"
echo ""

echo "Applying refactoring 3/3: Update TypeScript types"
# Apply codemod or manual refactoring
echo "✅ Updated type definitions"
echo ""
```

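The `ReactDOM.render → createRoot` rewrite can be sketched as a simple find/replace pattern. This is a hedged illustration on a single sample line only; a real pass would run over every file Grep flags, and a codemod is safer for anything non-trivial (multi-line calls, aliased imports):

```bash
# Hypothetical single-line rewrite: ReactDOM.render(ui, el) → createRoot(el).render(ui)
before='ReactDOM.render(<App />, document.getElementById("root"));'
after=$(printf '%s\n' "$before" |
  sed -E 's/ReactDOM\.render\((.*), *(.*)\);/createRoot(\2).render(\1);/')
echo "$after"
```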
---

### Step 7: Phase 6 - Test Validation

**IF NOT dry run, YOU MUST NOW run tests:**

```bash
if [ "$DRY_RUN" = false ]; then
  echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  echo "✓ Phase 6: Testing Compatibility"
  echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  echo ""
fi
```

**Run test suite:**

1. Detect test command (package.json `scripts.test`)
2. Run tests with Bash tool
3. Capture results
4. Report pass/fail

```bash
echo "Running test suite..."
echo ""

npm test

TEST_RESULT=$?

if [ $TEST_RESULT -eq 0 ]; then
  echo ""
  echo "✅ All tests passing after upgrade!"
  echo ""
  TESTS_PASSING=true
else
  echo ""
  echo "⚠️  Some tests failing after upgrade"
  echo "   This may be expected for breaking changes requiring manual migration"
  echo ""
  TESTS_PASSING=false
fi
```

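Step 1 (detecting the test command) can be sketched with `jq`, assuming it is available. The sample `package.json` content here is illustrative only:

```bash
# Sketch: read scripts.test from package.json (sample file assumed)
PKG=$(mktemp)
cat > "$PKG" <<'EOF'
{ "name": "demo", "scripts": { "test": "vitest run" } }
EOF

TEST_CMD=$(jq -r '.scripts.test // empty' "$PKG")
if [ -n "$TEST_CMD" ]; then
  echo "Test command: $TEST_CMD"
else
  echo "No test script defined - skipping test validation"
fi
```

The `// empty` alternative makes `TEST_CMD` empty (rather than the literal string `null`) when no test script is defined.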
---

### Step 8: Final Report

**Display completion summary:**

```bash
echo ""
echo "══════════════════════════════════════════"

if [ "$DRY_RUN" = true ]; then
  echo "🔍 UPGRADE ANALYSIS COMPLETE (DRY RUN)"
  echo "══════════════════════════════════════════"
  echo ""
  echo "Upgrade: $UPGRADE_TARGET"
  echo ""
  echo "📋 Migration Plan Ready"
  echo "📊 Risk Assessment: [RISK_LEVEL]"
  echo "⏱️  Estimated Time: [TIME_ESTIMATE]"
  echo ""
  echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  echo "📝 NEXT STEPS"
  echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  echo ""
  echo "1. Review migration plan above"
  echo "2. If acceptable, run actual upgrade:"
  echo "   /specswarm:upgrade \"$UPGRADE_TARGET\""
  echo ""
  echo "3. Or handle migration manually using plan as guide"
  echo ""
else
  echo "🎉 UPGRADE COMPLETE"
  echo "══════════════════════════════════════════"
  echo ""
  echo "Upgrade: $UPGRADE_TARGET"
  echo ""
  echo "✅ Dependencies updated"
  echo "✅ Automated refactoring applied"

  if [ "$TESTS_PASSING" = true ]; then
    echo "✅ All tests passing"
  else
    echo "⚠️  Tests require attention (see output above)"
  fi

  echo ""
  echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  echo "📝 MANUAL TASKS REMAINING"
  echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  echo ""

  # List manual tasks from migration plan
  echo "The following tasks require manual intervention:"
  echo ""
  for task in "${MANUAL_TASKS[@]}"; do
    echo "  ⚠️  $task"
  done

  echo ""
  echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  echo "🧪 RECOMMENDED TESTING"
  echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  echo ""
  echo "1. 🔍 Review Changes"
  echo "   git diff"
  echo ""
  echo "2. 🧪 Manual Testing"
  echo "   - Start development server"
  echo "   - Test critical user flows"
  echo "   - Verify no console errors"
  echo "   - Check for visual regressions"
  echo ""
  echo "3. 📊 Performance Check"
  echo "   - Compare bundle size (if applicable)"
  echo "   - Verify no performance degradation"
  echo ""

  if [ "$TESTS_PASSING" = true ]; then
    echo "4. 🚢 Ship When Ready"
    echo "   /specswarm:ship"
    echo ""
    echo "   All tests passing - ready to merge after manual verification"
  else
    echo "4. 🔧 Fix Failing Tests"
    echo "   - Address test failures"
    echo "   - Complete manual migration tasks above"
    echo "   - Re-run: npm test"
    echo ""
    echo "5. 🚢 Ship When Tests Pass"
    echo "   /specswarm:ship"
  fi

  echo ""
fi

echo "══════════════════════════════════════════"
```

---

## Error Handling

If any step fails:

1. **Changelog fetch fails**: Continue with generic breaking-change warnings
2. **Dependency update fails**: Display npm errors, suggest manual resolution
3. **Refactoring fails**: Roll back changes, report errors
4. **Tests fail**: Report as a warning, not an error (expected for breaking changes)

**All errors should be reported clearly, with remediation suggested.**

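The rollback path in item 3 can be sketched with git, assuming the project is a repository with a clean working tree before refactoring begins (a throwaway repo is used here for illustration):

```bash
# Sketch: roll back a failed automated refactor via git (temp repo for illustration)
tmp=$(mktemp -d)
cd "$tmp"
git init -q
echo 'ReactDOM.render(<App />, el);' > app.js
git add app.js
git -c user.email=dev@example.com -c user.name=dev commit -qm "pre-refactor baseline"

echo 'createRoot(el.render' > app.js   # simulate a botched refactor
git checkout -q -- app.js              # restore the committed baseline

cat app.js
```

Committing (or stashing) before the refactoring phase is what makes the rollback a one-liner; without a baseline snapshot, each Edit-tool change would have to be reversed individually from the change log.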
---

## Design Philosophy

**Automated Where Possible**: Codemods and refactoring patterns reduce manual work

**Transparent Process**: Clear breaking-change analysis before making changes

**Safety First**: Dry-run mode for risk assessment, test validation after changes

**Guidance**: Manual tasks clearly documented with examples

**Realistic**: Acknowledges that some migrations require manual intervention

---

## Use Cases

**Framework Upgrades:**
- React 18 → 19
- Vue 2 → 3
- Next.js 14 → 15
- Angular major versions

**Dependency Upgrades:**
- All dependencies to latest
- Specific package upgrades
- Security vulnerability fixes

**Breaking Change Migrations:**
- API removals
- Configuration format changes
- Behavior modifications

---

## Limitations

**Current Support:**
- ✅ Node.js/npm projects (package.json)
- ✅ React framework upgrades
- ✅ Common JavaScript libraries

**Future Enhancements:**
- Python (pip, requirements.txt)
- Ruby (Gemfile)
- Rust (Cargo.toml)
- More framework-specific codemods
- Automated rollback on test failure

314
commands/validate.md
Normal file
---
description: Run AI-powered interaction flow validation for any software type (webapp, Android app, REST API, desktop GUI)
args:
  - name: project_path
    description: Path to project to validate (defaults to current directory)
    required: false
  - name: --session-id
    description: Optional session ID for orchestration integration
    required: false
  - name: --type
    description: Override detected type (webapp|android|rest-api|desktop-gui)
    required: false
  - name: --flows
    description: Path to custom flows JSON file
    required: false
  - name: --url
    description: Override base URL for webapp (default http://localhost:5173)
    required: false
---

# AI-Powered Feature Validation

Run comprehensive validation with intelligent flow generation, interactive error detection, and automatic project type detection.

## Initialize Validation

```bash
echo "🔍 SpecLabs Feature Validation v2.7.0"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""

# Parse arguments
PROJECT_PATH=""
SESSION_ID=""
TYPE_OVERRIDE="auto"
FLOWS_FILE=""
BASE_URL=""

# Check if first arg is a path (doesn't start with --)
if [ -n "$1" ] && [ "${1:0:2}" != "--" ]; then
  PROJECT_PATH="$1"
  shift
else
  PROJECT_PATH="$(pwd)"
fi

while [[ $# -gt 0 ]]; do
  case "$1" in
    --session-id) SESSION_ID="$2"; shift 2 ;;
    --type) TYPE_OVERRIDE="$2"; shift 2 ;;
    --flows) FLOWS_FILE="$2"; shift 2 ;;
    --url) BASE_URL="$2"; shift 2 ;;
    *) shift ;;
  esac
done

# Validate project path exists
if [ ! -d "$PROJECT_PATH" ]; then
  echo "❌ Error: Project path does not exist: $PROJECT_PATH"
  exit 1
fi

echo "📁 Project: $PROJECT_PATH"
echo ""

# Detect web project and Chrome DevTools MCP availability
PLUGIN_DIR="/home/marty/code-projects/specswarm/plugins/speclabs"
SPECSWARM_PLUGIN_DIR="/home/marty/code-projects/specswarm/plugins/specswarm"

if [ -f "$SPECSWARM_PLUGIN_DIR/lib/web-project-detector.sh" ]; then
  source "$SPECSWARM_PLUGIN_DIR/lib/web-project-detector.sh"

  # Check if Chrome DevTools MCP should be used
  if should_use_chrome_devtools "$PROJECT_PATH"; then
    export CHROME_DEVTOOLS_MODE="enabled"
    export WEB_FRAMEWORK="$WEB_FRAMEWORK"
    echo "🌐 Web project detected: $WEB_FRAMEWORK"
    echo "🎯 Chrome DevTools MCP: Available for browser automation"
    echo "   (saves ~200MB Chromium download)"
    echo ""
  elif is_web_project "$PROJECT_PATH"; then
    export CHROME_DEVTOOLS_MODE="fallback"
    export WEB_FRAMEWORK="$WEB_FRAMEWORK"
    echo "🌐 Web project detected: $WEB_FRAMEWORK"
    echo "📦 Using Playwright fallback for browser automation"
    echo ""
  else
    export CHROME_DEVTOOLS_MODE="disabled"
  fi
fi

# Source validation orchestrator
source "${PLUGIN_DIR}/lib/validate-feature-orchestrator.sh"
```

## Execute Validation

```bash
# Build orchestrator arguments
ORCHESTRATOR_ARGS=(--project-path "$PROJECT_PATH")

[ -n "$SESSION_ID" ] && ORCHESTRATOR_ARGS+=(--session-id "$SESSION_ID")
[ "$TYPE_OVERRIDE" != "auto" ] && ORCHESTRATOR_ARGS+=(--type "$TYPE_OVERRIDE")
[ -n "$FLOWS_FILE" ] && ORCHESTRATOR_ARGS+=(--flows "$FLOWS_FILE")
[ -n "$BASE_URL" ] && ORCHESTRATOR_ARGS+=(--url "$BASE_URL")

# Execute validation
VALIDATION_RESULT=$(validate_feature_orchestrate "${ORCHESTRATOR_ARGS[@]}")
VALIDATION_EXIT_CODE=$?

if [ $VALIDATION_EXIT_CODE -ne 0 ]; then
  echo ""
  echo "❌ Validation failed to execute"
  echo ""
  echo "Result:"
  echo "$VALIDATION_RESULT" | jq '.'
  exit 1
fi
```

## Display Results

```bash
# Parse result
STATUS=$(echo "$VALIDATION_RESULT" | jq -r '.status')
TYPE=$(echo "$VALIDATION_RESULT" | jq -r '.type')
TOTAL_FLOWS=$(echo "$VALIDATION_RESULT" | jq -r '.summary.total_flows')
PASSED_FLOWS=$(echo "$VALIDATION_RESULT" | jq -r '.summary.passed_flows')
FAILED_FLOWS=$(echo "$VALIDATION_RESULT" | jq -r '.summary.failed_flows')
ERROR_COUNT=$(echo "$VALIDATION_RESULT" | jq -r '.summary.error_count')
DURATION=$(echo "$VALIDATION_RESULT" | jq -r '.metadata.duration_seconds')

# Display summary using orchestrator helper
print_validation_summary "$VALIDATION_RESULT"

# Update session if session ID provided
if [ -n "$SESSION_ID" ]; then
  echo "💾 Updating session: $SESSION_ID"

  # Source feature orchestrator
  source "${PLUGIN_DIR}/lib/feature-orchestrator.sh"
  feature_init

  # Store validation results in session
  SESSION_FILE="${FEATURE_SESSION_DIR}/${SESSION_ID}.json"
  if [ -f "$SESSION_FILE" ]; then
    jq --argjson result "$VALIDATION_RESULT" \
       --arg completed_at "$(date -Iseconds)" \
       '.validation = $result |
        .validation.completed_at = $completed_at' \
       "$SESSION_FILE" > "${SESSION_FILE}.tmp"

    mv "${SESSION_FILE}.tmp" "$SESSION_FILE"
    echo "   ✅ Session updated with validation results"
  else
    echo "   ⚠️  Session file not found (validation results not persisted)"
  fi
fi

echo ""
```

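The session-update `jq` merge above can be exercised on sample data; the file contents and field values here are illustrative only:

```bash
# Sketch: merge a validation result into a session file (sample data assumed)
SESSION_FILE=$(mktemp)
echo '{"id":"feature_20251103_143022","status":"in_progress"}' > "$SESSION_FILE"
VALIDATION_RESULT='{"status":"passed","summary":{"total_flows":8,"passed_flows":8}}'

jq --argjson result "$VALIDATION_RESULT" \
   --arg completed_at "2025-11-03T14:35:00-07:00" \
   '.validation = $result | .validation.completed_at = $completed_at' \
   "$SESSION_FILE" > "${SESSION_FILE}.tmp"
mv "${SESSION_FILE}.tmp" "$SESSION_FILE"

jq -r '.validation.status' "$SESSION_FILE"
```

Writing to a `.tmp` file and then `mv`-ing it over the original keeps the session file from being truncated if `jq` fails mid-write.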
## Validation Status Exit Code

```bash
# Display Chrome DevTools MCP tip for web projects (if applicable)
if [ "$CHROME_DEVTOOLS_MODE" = "fallback" ]; then
  echo ""
  echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  echo "💡 TIP: Enhanced Web Validation Available"
  echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  echo ""
  echo "Chrome DevTools MCP provides enhanced browser automation:"
  echo "  • Real-time console error monitoring"
  echo "  • Network request inspection during flows"
  echo "  • Runtime state debugging"
  echo "  • Saves ~200MB Chromium download"
  echo ""
  echo "Install Chrome DevTools MCP:"
  echo "  claude mcp add ChromeDevTools/chrome-devtools-mcp"
  echo ""
  echo "Your validation will automatically use it next time!"
  echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  echo ""
fi

# Exit with appropriate code based on validation status
if [ "$STATUS" = "passed" ]; then
  echo "✅ VALIDATION PASSED"
  exit 0
elif [ "$STATUS" = "failed" ]; then
  echo "⚠️  VALIDATION FAILED"
  exit 1
else
  echo "❌ VALIDATION ERROR"
  exit 1
fi
```

---

## Features

### Automatic Project Type Detection
- **Webapp**: Detects React, Vite, Next.js, React Router applications
- **Android**: Detects Android projects with AndroidManifest.xml
- **REST API**: Detects OpenAPI specs, Express, FastAPI, Flask
- **Desktop GUI**: Detects Electron, PyQt, Tkinter applications

Detection uses file-based analysis with confidence scoring. Manual override available via `--type` flag.

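A deliberately naive sketch of marker-file detection (the real detector also scores confidence; the marker choices here are assumptions, and Electron projects would need extra disambiguation since they too ship a `package.json`):

```bash
# Sketch: naive project-type detection by marker files (markers assumed)
detect_type() {
  local p="$1"
  if [ -f "$p/AndroidManifest.xml" ] || [ -f "$p/app/src/main/AndroidManifest.xml" ]; then
    echo "android"
  elif [ -f "$p/openapi.yaml" ] || [ -f "$p/openapi.json" ]; then
    echo "rest-api"
  elif [ -f "$p/package.json" ]; then
    echo "webapp"
  else
    echo "unknown"
  fi
}

demo=$(mktemp -d)
touch "$demo/package.json"
detect_type "$demo"
```

Ordering matters: the most specific markers are checked first, and `package.json` acts as the broad fallback for web projects.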
### Intelligent Flow Generation (Webapp)
- **User-Defined Flows**: Parse from spec.md YAML frontmatter or custom flows file
- **AI-Generated Flows**: Analyze feature artifacts (spec.md, plan.md, tasks.md)
- **Feature Type Detection**: Identifies shopping_cart, social_feed, auth, forms, CRUD patterns
- **Smart Merging**: Combines user + AI flows with deduplication

### Interactive Error Detection (Webapp)
- **Browser Automation**: Chrome DevTools MCP (if installed) or Playwright fallback
- **Chrome DevTools MCP Benefits**:
  - Saves ~200MB Chromium download
  - Persistent browser profile (~/.cache/chrome-devtools-mcp/)
  - Enhanced debugging with real-time console/network monitoring
- **Real-Time Error Capture**: Console errors, uncaught exceptions, network failures
- **Terminal Monitoring**: Tracks dev server output for compilation errors
- **Auto-Fix Retry Loop**: Attempts fixes up to 3 times before manual intervention
- **Dev Server Lifecycle**: Automatic startup and guaranteed cleanup

### Standardized Results
- **JSON Output**: Consistent format across all validator types
- **Rich Metadata**: Duration, tool versions, retry attempts, flow counts
- **Artifacts**: Screenshots, logs, detailed reports
- **Session Integration**: Automatic persistence when session ID provided

---

## Usage Examples

### Standalone Validation

```bash
# Validate current directory (auto-detect type)
/speclabs:validate-feature

# Validate specific project
/speclabs:validate-feature /path/to/my-app

# Override detected type
/speclabs:validate-feature /path/to/project --type webapp

# Use custom flows
/speclabs:validate-feature --flows ./custom-flows.json

# Override base URL
/speclabs:validate-feature --url http://localhost:3000
```

### Integrated with Orchestration

```bash
# Called automatically by orchestrate-feature when --validate is used
/speclabs:orchestrate-feature "Add shopping cart" /path/to/project --validate

# Manual validation with session tracking
/speclabs:validate-feature /path/to/project --session-id feature_20251103_143022
```

### CI/CD Integration

```bash
# Exit code 0 = passed, 1 = failed - useful for CI/CD gates
/speclabs:validate-feature && echo "Deploy approved" || echo "Fix errors before deploy"
```

---

## Output

### Validation Summary
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
VALIDATION SUMMARY
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Type:     webapp
Status:   passed
Duration: 47s

Flows: 8/8 passed
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```

### Artifacts Location
- **Flow Results**: `<project>/.speclabs/validation/flow-results.json`
- **Screenshots**: `<project>/.speclabs/validation/screenshots/*.png`
- **Dev Server Log**: `<project>/.speclabs/validation/dev-server.log`
- **Test Output**: `<project>/.speclabs/validation/test-output-*.log`

---

## Supported Validators

### v2.7.0
- ✅ **webapp**: Full support with AI flows + Chrome DevTools MCP/Playwright + auto-fix

### Future Versions
- ⏳ **android**: Planned for v2.7.1 (Appium-based)
- ⏳ **rest-api**: Planned for v2.7.2 (Newman/Postman-based)
- ⏳ **desktop-gui**: Planned for v2.7.3 (Electron/desktop automation)

---

**Architecture**: Generic orchestrator with pluggable type-specific validators. Extensible design allows adding new validator types without breaking changes.

**Purpose**: Provide comprehensive, automated validation across any software type, with intelligent flow generation and interactive error detection. Reduces manual testing burden and catches errors before deployment.