Initial commit

commands/batch.md (new file, 556 lines)
---
description: Batch process multiple repos with StackShift analysis running in parallel. Analyzes 5 repos at a time, tracks progress, and aggregates results. Perfect for analyzing monorepo services or multiple related projects.
---

# StackShift Batch Processing

**Analyze multiple repositories in parallel**

Run StackShift on 10, 50, or 100+ repos simultaneously with progress tracking and result aggregation.

---

## Quick Start

**Analyze all services in a monorepo:**

```bash
# From monorepo services directory
cd ~/git/my-monorepo/services

# Let me analyze all service-* directories in batches of 5
```

I'll:
1. ✅ Find all service-* directories
2. ✅ Filter to valid repos (has package.json)
3. ✅ Process in batches of 5 (configurable)
4. ✅ Track progress in `batch-results/`
5. ✅ Aggregate results when complete

---

## What I'll Do

### Step 1: Discovery

```bash
echo "=== Discovering repositories in ~/git/my-monorepo/services ==="

# Find all service directories
find ~/git/my-monorepo/services -maxdepth 1 -type d -name "service-*" | sort > /tmp/services-to-analyze.txt

# Count
SERVICE_COUNT=$(wc -l < /tmp/services-to-analyze.txt)
echo "Found $SERVICE_COUNT services"

# Show first 10
head -10 /tmp/services-to-analyze.txt
```
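The package.json filter from step 2 of the checklist isn't shown in the discovery snippet; a minimal sketch (the helper name and output path are illustrative):

```bash
# filter_valid_repos LIST_IN LIST_OUT
# Keeps only the directories from LIST_IN that contain a package.json,
# matching step 2 of the checklist ("Filter to valid repos").
filter_valid_repos() {
  : > "$2"
  while IFS= read -r dir; do
    if [ -f "$dir/package.json" ]; then
      echo "$dir" >> "$2"
    fi
  done < "$1"
}

# Example, using the list produced in Step 1:
# filter_valid_repos /tmp/services-to-analyze.txt /tmp/services-valid.txt
```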

### Step 2: Batch Configuration

**IMPORTANT:** I'll ask ALL configuration questions upfront, ONCE. Your answers will be saved to a batch session file and automatically applied to ALL repos in all batches. You won't need to answer these questions again during this batch run.

I'll ask you:

**Question 1: How many to process?**
- A) All services ($SERVICE_COUNT total)
- B) First 10 (test run)
- C) First 25 (small batch)
- D) Custom number

**Question 2: Parallel batch size?**
- A) 3 at a time (conservative)
- B) 5 at a time (recommended)
- C) 10 at a time (aggressive, may slow down)
- D) Sequential (1 at a time, safest)

**Question 3: What route?**
- A) Auto-detect (monorepo-service for service-* directories; ask for others)
- B) Force monorepo-service for all
- C) Force greenfield for all
- D) Force brownfield for all

**Question 4: Brownfield mode?** _(If route = brownfield)_
- A) Standard - Just create specs for current state
- B) Upgrade - Create specs + upgrade all dependencies

**Question 5: Transmission?**
- A) Manual - Review each gear before proceeding
- B) Cruise Control - Shift through all gears automatically

**Question 6: Clarifications strategy?** _(If transmission = cruise control)_
- A) Defer - Mark them, continue around them
- B) Prompt - Stop and ask questions
- C) Skip - Only implement fully-specified features

**Question 7: Implementation scope?** _(If transmission = cruise control)_
- A) None - Stop after specs are ready
- B) P0 only - Critical features only
- C) P0 + P1 - Critical + high-value features
- D) All - Every feature

**Question 8: Spec output location?** _(If route = greenfield)_
- A) Current repository (default)
- B) New application repository
- C) Separate documentation repository
- D) Custom location

**Question 9: Target stack?** _(If greenfield + implementation scope != none)_
- Examples:
  - Next.js 15 + TypeScript + Prisma + PostgreSQL
  - Python/FastAPI + SQLAlchemy + PostgreSQL
  - Your choice: [specify]

**Question 10: Build location?** _(If greenfield + implementation scope != none)_
- A) Subfolder (recommended) - e.g., greenfield/, v2/
- B) Separate directory - e.g., ~/git/my-new-app
- C) Replace in place (destructive)

**Then I'll:**
1. ✅ Save all answers to `.stackshift-batch-session.json` (in current directory)
2. ✅ Show batch session summary
3. ✅ Start processing batches with auto-applied configuration
4. ✅ Clear batch session when complete (or keep if you want)

**Why directory-scoped?**
- Multiple batch sessions can run simultaneously in different directories
- Each batch (monorepo services, etc.) has its own isolated configuration
- No conflicts between parallel batch runs
- Session file is co-located with the repos being processed

### Step 3: Create Batch Session & Spawn Agents

**First: Create batch session with all answers**

```bash
# After collecting all configuration answers, create batch session
# Stored in current directory for isolation from other batch runs
cat > .stackshift-batch-session.json <<EOF
{
  "sessionId": "batch-$(date +%s)",
  "startedAt": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
  "batchRootDirectory": "$(pwd)",
  "totalRepos": ${TOTAL_REPOS},
  "batchSize": ${BATCH_SIZE},
  "answers": {
    "route": "${ROUTE}",
    "transmission": "${TRANSMISSION}",
    "spec_output_location": "${SPEC_OUTPUT}",
    "target_stack": "${TARGET_STACK}",
    "build_location": "${BUILD_LOCATION}",
    "clarifications_strategy": "${CLARIFICATIONS}",
    "implementation_scope": "${SCOPE}"
  },
  "processedRepos": []
}
EOF

echo "✅ Batch session created: $(pwd)/.stackshift-batch-session.json"
echo "📦 Configuration will be auto-applied to all ${TOTAL_REPOS} repos"
```

**Then: Spawn parallel agents (they'll auto-use batch session)**

```typescript
// Use Task tool to spawn parallel agents
const batch1 = [
  'service-user-api',
  'service-inventory',
  'service-contact',
  'service-search',
  'service-pricing'
];

// Spawn 5 agents in parallel
const agents = batch1.map(service => ({
  task: `Analyze ${service} service with StackShift`,
  description: `StackShift analysis: ${service}`,
  subagent_type: 'general-purpose',
  prompt: `
cd ~/git/my-monorepo/services/${service}

IMPORTANT: Batch session is active (will be auto-detected by walking up to parent)
Parent directory has: .stackshift-batch-session.json
All configuration will be auto-applied. DO NOT ask configuration questions.

Run StackShift Gear 1: Analyze
- Will auto-detect route (batch session: ${ROUTE})
- Will use spec output location: ${SPEC_OUTPUT}
- Analyze service + shared packages
- Generate analysis-report.md

Then run Gear 2: Reverse Engineer
- Extract business logic
- Document all shared package dependencies
- Create comprehensive documentation

Then run Gear 3: Create Specifications
- Generate .specify/ structure
- Create constitution
- Generate feature specs

Save all results to:
${SPEC_OUTPUT}/${service}/

When complete, create completion marker:
${SPEC_OUTPUT}/${service}/.complete
`
}));

// Launch all 5 in parallel
agents.forEach(agent => spawnAgent(agent));
```

### Step 4: Progress Tracking

```bash
# Create tracking directory
mkdir -p ~/git/stackshift-batch-results

# Monitor progress
while true; do
  COMPLETE=$(find ~/git/stackshift-batch-results -name ".complete" | wc -l)
  echo "Completed: $COMPLETE / $SERVICE_COUNT"

  # Check if batch done
  if [ "$COMPLETE" -ge 5 ]; then
    echo "✅ Batch 1 complete"
    break
  fi

  sleep 30
done

# Start next batch...
```
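The `batch-progress.json` file shown later in the result structure is never written by the monitor loop; one possible shape (the field names are assumptions, not a fixed schema):

```bash
# write_progress COMPLETE TOTAL OUT_FILE
# Emits a small JSON snapshot the monitor loop can rewrite on each poll.
write_progress() {
  complete="$1"; total="$2"; out="$3"
  percent=0
  [ "$total" -gt 0 ] && percent=$((complete * 100 / total))
  printf '{"updatedAt":"%s","complete":%d,"total":%d,"percent":%d}\n' \
    "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$complete" "$total" "$percent" > "$out"
}

# Inside the monitor loop, after computing COMPLETE:
# write_progress "$COMPLETE" "$SERVICE_COUNT" ~/git/stackshift-batch-results/batch-progress.json
```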

### Step 5: Result Aggregation

```bash
# After all batches complete
echo "=== Aggregating Results ==="

# Create master report
cat > ~/git/stackshift-batch-results/BATCH_SUMMARY.md <<EOF
# StackShift Batch Analysis Results

**Date:** $(date)
**Services Analyzed:** $SERVICE_COUNT
**Batches:** $(($SERVICE_COUNT / 5))
**Total Time:** [calculated]

## Completion Status

$(for service in $(cat /tmp/services-to-analyze.txt); do
  service_name=$(basename $service)
  if [ -f ~/git/stackshift-batch-results/$service_name/.complete ]; then
    echo "- ✅ $service_name - Complete"
  else
    echo "- ❌ $service_name - Failed or incomplete"
  fi
done)

## Results by Service

$(for service in $(cat /tmp/services-to-analyze.txt); do
  service_name=$(basename $service)
  if [ -f ~/git/stackshift-batch-results/$service_name/.complete ]; then
    echo "### $service_name"
    echo ""
    echo "**Specs created:** $(find ~/git/stackshift-batch-results/$service_name/.specify/memory/specifications -name "*.md" 2>/dev/null | wc -l)"
    echo "**Modules analyzed:** $(cat ~/git/stackshift-batch-results/$service_name/.stackshift-state.json 2>/dev/null | jq -r '.metadata.modulesAnalyzed // 0')"
    echo ""
  fi
done)

## Next Steps

All specifications are ready for review:
- Review specs in each service's batch-results directory
- Merge specs to actual repos if satisfied
- Run Gears 4-6 as needed
EOF

cat ~/git/stackshift-batch-results/BATCH_SUMMARY.md
```

---

## Result Structure

```
~/git/stackshift-batch-results/
├── BATCH_SUMMARY.md               # Master summary
├── batch-progress.json            # Real-time tracking
│
├── service-user-api/
│   ├── .complete                  # Marker file
│   ├── .stackshift-state.json     # State
│   ├── analysis-report.md         # Gear 1 output
│   ├── docs/reverse-engineering/  # Gear 2 output
│   │   ├── functional-specification.md
│   │   ├── service-logic.md
│   │   ├── modules/
│   │   │   ├── shared-pricing-utils.md
│   │   │   └── shared-discount-utils.md
│   │   └── [7 more docs]
│   └── .specify/                  # Gear 3 output
│       └── memory/
│           ├── constitution.md
│           └── specifications/
│               ├── pricing-display.md
│               ├── incentive-logic.md
│               └── [more specs]
│
├── service-inventory/
│   └── [same structure]
│
└── [88 more services...]
```

---

## Monitoring Progress

**Real-time status:**

```bash
# I'll show you periodic updates
echo "=== Batch Progress ==="
echo "Batch 1 (5 services): 3/5 complete"
echo "  ✅ service-user-api - Complete (12 min)"
echo "  ✅ service-inventory - Complete (8 min)"
echo "  ✅ service-contact - Complete (15 min)"
echo "  🔄 service-search - Running (7 min elapsed)"
echo "  ⏳ service-pricing - Queued"
echo ""
echo "Estimated time remaining: 25 minutes"
```

---

## Error Handling

**If a service fails:**
```bash
# Retry failed services
failed_services=(service-search service-pricing)

for service in "${failed_services[@]}"; do
  echo "Retrying: $service"
  # Spawn new agent for retry
done
```
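Failed services can be found mechanically by diffing the discovery list against the `.complete` markers created in Step 3; a sketch (the function name is illustrative):

```bash
# find_failed_services LIST_FILE RESULTS_DIR
# Prints the name of every service from the discovery list that has
# no .complete marker under RESULTS_DIR, i.e., the retry candidates.
find_failed_services() {
  list="$1"; results_dir="$2"
  while IFS= read -r dir; do
    name=$(basename "$dir")
    [ -f "$results_dir/$name/.complete" ] || echo "$name"
  done < "$list"
}

# failed_services=($(find_failed_services /tmp/services-to-analyze.txt ~/git/stackshift-batch-results))
```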

**Common failures:**
- Missing package.json
- Tests failing (can continue anyway)
- Module source not found (prompt for location)

---

## Use Cases

**1. Entire monorepo migration:**
```
Analyze all 90+ service-* directories for migration planning
  ↓
Result: Complete business logic extracted from entire platform
  ↓
Use specs to plan Next.js migration strategy
```

**2. Selective analysis:**
```
Analyze just the 10 high-priority services first
  ↓
Review results
  ↓
Then batch process remaining 80
```

**3. Module analysis:**
```
cd ~/git/my-monorepo/services
Analyze all shared packages (not services)
  ↓
Result: Shared module documentation
  ↓
Understand dependencies before service migration
```

---

## Configuration Options

I'll ask you to configure:

- **Repository list:** All in folder, or custom list?
- **Batch size:** How many parallel (3/5/10)?
- **Gears to run:** 1-3 only or full 1-6?
- **Route:** Auto-detect or force specific route?
- **Output location:** Central results dir or per-repo?
- **Error handling:** Stop on failure or continue?

---

## Comparison with thoth-cli

**thoth-cli (Upgrades):**
- Orchestrates 90+ service upgrades
- 3 phases: coverage → discovery → implementation
- Tracks in .upgrade-state.json
- Parallel processing (2-5 at a time)

**StackShift Batch (Analysis):**
- Orchestrates 90+ service analyses
- 6 gears: analyze → reverse-engineer → create-specs → gap → clarify → implement
- Tracks in .stackshift-state.json
- Parallel processing (3-10 at a time)
- Can output to central location

---

## Example Session

```
You: "I want to analyze all Osiris services in ~/git/my-monorepo/services"

Me: "Found 92 services! Let me configure batch processing..."

[Asks questions via AskUserQuestion]
- Process all 92? ✅
- Batch size: 5
- Gears: 1-3 (just analyze and spec, no implementation)
- Output: Central results directory

Me: "Starting batch analysis..."

Batch 1 (5 services): service-user-api, service-inventory, service-contact, service-search, service-pricing
[Spawns 5 parallel agents using Task tool]

[15 minutes later]
"Batch 1 complete! Starting batch 2..."

[3 hours later]
"✅ All 92 services analyzed!

Results: ~/git/stackshift-batch-results/
- 92 analysis reports
- 92 sets of specifications
- 890 total specs extracted
- Multiple shared packages documented

Next: Review specs and begin migration planning"
```

---

## Managing Batch Sessions

### View Current Batch Session

```bash
# Check if batch session exists in current directory and view configuration
if [ -f .stackshift-batch-session.json ]; then
  echo "📦 Active Batch Session in $(pwd)"
  jq '.' .stackshift-batch-session.json
else
  echo "No active batch session in current directory"
fi
```

### View All Batch Sessions

```bash
# Find all active batch sessions
echo "🔍 Finding all active batch sessions..."
find ~/git -name ".stackshift-batch-session.json" -type f 2>/dev/null | while read -r session; do
  echo ""
  echo "📦 $(dirname "$session")"
  jq -r '"  Route: \(.answers.route) | Repos: \(.processedRepos | length)/\(.totalRepos)"' "$session"
done
```

### Clear Batch Session

**After batch completes:**
```bash
# I'll ask you:
# "Batch processing complete! Clear batch session? (Y/n)"

# If yes:
rm .stackshift-batch-session.json
echo "✅ Batch session cleared"

# If no:
echo "✅ Batch session kept (will be used for next batch run in this directory)"
```

**Manual clear (current directory):**
```bash
# Clear batch session in current directory
rm .stackshift-batch-session.json
```

**Manual clear (specific directory):**
```bash
# Clear batch session in specific directory
rm ~/git/my-monorepo/services/.stackshift-batch-session.json
```

**Why keep batch session?**
- Run another batch with same configuration
- Process more repos later in same directory
- Continue interrupted batch
- Consistent settings for related batches

**Why clear batch session?**
- Done with current migration
- Want different configuration for next batch
- Starting fresh analysis
- Free up directory for different batch type

---

## Batch Session Benefits

**Without batch session (old way):**
```
Batch 1: Answer 10 questions ⏱️ 2 min
  ↓ Process 3 repos (15 min)

Batch 2: Answer 10 questions AGAIN ⏱️ 2 min
  ↓ Process 3 repos (15 min)

Batch 3: Answer 10 questions AGAIN ⏱️ 2 min
  ↓ Process 3 repos (15 min)

Total: 30 questions answered, 6 min wasted
```

**With batch session (new way):**
```
Setup: Answer 10 questions ONCE ⏱️ 2 min
  ↓ Batch 1: Process 3 repos (15 min)
  ↓ Batch 2: Process 3 repos (15 min)
  ↓ Batch 3: Process 3 repos (15 min)

Total: 10 questions answered, 0 min wasted
Saved: 4 minutes per 9 repos processed
```

**For 90 repos in batches of 3:**
- Old way: 300 questions answered (60 min of clicking)
- New way: 10 questions answered (2 min of clicking)
- **Time saved: 58 minutes!** ⚡

---

**This batch processing system is perfect for:**
- Monorepo migration (90+ services)
- Multi-repo analysis
- Department-wide code audits
- Portfolio modernization projects
commands/coverage.md (new file, 111 lines)
---
description: Generate spec-to-code coverage map showing which code files are covered by which specifications. Creates ASCII diagrams, reverse indexes, and coverage statistics.
---

# Generate Spec Coverage Map

Create a comprehensive visual map of specification-to-code coverage.

---

## What This Does

Analyzes all specifications and generates a coverage map showing:

1. **Spec → Files**: ASCII box diagrams for each spec
2. **Files → Specs**: Reverse index table
3. **Coverage Statistics**: Percentages by category
4. **Heat Map**: Visual representation
5. **Gap Analysis**: Uncovered files
6. **Shared Files**: High-risk multi-spec files

**Output:** `docs/spec-coverage-map.md`

---

## Usage

```bash
# Generate coverage map
/stackshift.coverage
```

I'll:
1. Find all spec files in `.specify/memory/specifications/` or `specs/`
2. Extract file references from each spec
3. Categorize files (backend, frontend, infrastructure, etc.)
4. Generate visual diagrams and tables
5. Calculate coverage statistics
6. Identify gaps and shared files
7. Save to `docs/spec-coverage-map.md`
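Step 2 (extracting file references) could be approximated with a pattern match over each spec; a rough sketch that assumes file paths appear as backticked code spans with common source extensions:

```bash
# extract_file_refs SPEC_FILE
# Heuristic: pull backticked paths with a known source extension
# out of a spec markdown file, one per line, deduplicated.
extract_file_refs() {
  grep -oE '`[A-Za-z0-9_./-]+\.(ts|tsx|js|jsx|py|sh|sql)`' "$1" \
    | tr -d '`' | sort -u
}

# extract_file_refs .specify/memory/specifications/001-vehicle-details.md
```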

---

## When to Use

- ✅ After Gear 6 (Implementation complete)
- ✅ During code reviews (validate coverage)
- ✅ When onboarding new team members
- ✅ Before refactoring (identify dependencies)
- ✅ For documentation audits

---

## Example Output Summary

```
📊 Spec Coverage Health Report

Overall Coverage: 91% (99/109 files)

By Category:
  Backend:        93% [█████████████████░]
  Frontend:       92% [█████████████████░]
  Infrastructure: 83% [███████████████░░░]
  Database:      100% [██████████████████]
  Scripts:        67% [████████████░░░░░░]

Status:
  ✅ 12 specs covering 99 files
  ⚠️ 10 gap files identified
  🔴 2 high-risk shared files

Full report: docs/spec-coverage-map.md
```

---

## What You'll See

The full coverage map includes:

### ASCII Box Diagrams
```
┌─────────────────────────────────┐
│ 001-vehicle-details             │ Status: ✅ COMPLETE
├─────────────────────────────────┤
│ Backend:                        │
│  ├─ api/handlers/vehicle.ts     │
│  └─ api/services/data.ts        │
│ Frontend:                       │
│  └─ site/pages/Vehicle.tsx      │
└─────────────────────────────────┘
```

### Reverse Index
```markdown
| File | Covered By | Count |
|------|------------|-------|
| lib/utils/pricing.ts | 001, 003, 004, 007, 009 | 5 |
```

### Coverage Gaps
```markdown
Files not covered by any specification:
- api/utils/debug.ts
- scripts/experimental/test.sh
```

---

**Ready!** Let me generate your spec coverage map now...

commands/modernize.md (new file, 662 lines)
---
description: Execute Brownfield Upgrade Mode - spec-driven dependency modernization workflow. Runs 4-phase process: spec-guided test coverage, baseline analysis, dependency upgrade with spec-guided fixes, and spec validation. Based on thoth-cli dependency upgrade process adapted for spec-driven development.
---

# StackShift Modernize: Spec-Driven Dependency Upgrade

**Brownfield Upgrade Mode** - Execute after completing Gears 1-6

Run this command to modernize all dependencies while using specs as your guide and safety net.

---

## Quick Status Check

```bash
# Check prerequisites
echo "=== Prerequisites Check ==="

# 1. Specs exist?
SPEC_COUNT=$(find .specify/memory/specifications -name "*.md" 2>/dev/null | wc -l)
echo "Specs found: $SPEC_COUNT"

# 2. /speckit commands available?
ls .claude/commands/speckit.*.md 2>/dev/null | wc -l | xargs -I {} echo "Speckit commands: {}"

# 3. Tests passing?
npm test --silent && echo "Tests: ✅ PASSING" || echo "Tests: ❌ FAILING"

# 4. StackShift state?
cat .stackshift-state.json 2>/dev/null | jq -r '"\(.path) - Gear \(.currentStep // "complete")"' || echo "No state file"

if [ "$SPEC_COUNT" -lt 1 ]; then
  echo "❌ No specs found. Run Gears 1-6 first."
  exit 1
fi
```

---

## Phase 0: Spec-Guided Test Coverage Foundation

**Goal:** 85%+ coverage using spec acceptance criteria as test blueprint

**Time:** 30-90 minutes | **Mode:** Autonomous test writing

### 0.1: Load Specifications & Baseline Coverage

```bash
echo "=== Phase 0: Spec-Guided Test Coverage ==="
mkdir -p .upgrade

# List all specs
find .specify/memory/specifications -name "*.md" | sort | tee .upgrade/all-specs.txt
SPEC_COUNT=$(wc -l < .upgrade/all-specs.txt)

# Baseline coverage
npm test -- --coverage --watchAll=false 2>&1 | tee .upgrade/baseline-coverage.txt
COVERAGE=$(grep "All files" .upgrade/baseline-coverage.txt | grep -oE "[0-9]+\.[0-9]+" | head -1 || echo "0")

echo "Specs: $SPEC_COUNT"
echo "Coverage: ${COVERAGE}%"
echo "Target: 85%"
```

### 0.2: Map Tests to Specs

Create `.upgrade/spec-coverage-map.json` mapping each spec to its tests:

```bash
# For each spec, find which tests validate it
# Track which acceptance criteria are covered vs. missing
```
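One way to materialize that map, assuming tests name their spec in a `From spec: <name>.md` comment (the convention used in the 0.3 example); both the comment convention and the output shape are assumptions:

```bash
# build_spec_map TEST_DIR OUT_JSON
# Scans test files for "From spec: <name>.md" comments and builds
# { "<spec>.md": { "testFiles": [...] }, ... } with jq.
build_spec_map() {
  src_dir="$1"; out="$2"
  grep -rl --include='*.test.*' -E 'From spec: [A-Za-z0-9_-]+\.md' "$src_dir" 2>/dev/null |
  while IFS= read -r test_file; do
    spec=$(grep -oE 'From spec: [A-Za-z0-9_-]+\.md' "$test_file" | head -1 | sed 's/From spec: //')
    printf '%s\t%s\n' "$spec" "$test_file"
  done |
  jq -Rn '[inputs | split("\t") | {spec: .[0], test: .[1]}]
          | group_by(.spec)
          | map({key: .[0].spec, value: {testFiles: map(.test)}})
          | from_entries' > "$out"
}

# build_spec_map src .upgrade/spec-coverage-map.json
```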

### 0.3: Write Tests for Missing Acceptance Criteria

**For each spec with missing coverage:**

1. Read spec file
2. Extract acceptance criteria section
3. For each criterion without a test:
   - Write test directly validating that criterion
   - Use Given-When-Then from spec
   - Ensure test actually validates behavior

**Example:**

```typescript
// From spec: user-authentication.md
// AC-3: "Given user logs in, When session expires, Then user redirected to login"

describe('User Authentication - AC-3: Session Expiration', () => {
  it('should redirect to login when session expires', async () => {
    // Given: User is logged in
    renderWithAuth(<Dashboard />, { authenticated: true });

    // When: Session expires
    await act(async () => {
      expireSession(); // Helper to expire token
    });

    // Then: User redirected to login
    await waitFor(() => {
      expect(mockNavigate).toHaveBeenCalledWith('/login');
    });
  });
});
```

### 0.4: Iterative Coverage Improvement

```bash
ITERATION=1
TARGET=85
MIN=80

while (( $(echo "$COVERAGE < $TARGET" | bc -l) )); do
  echo "Iteration $ITERATION: ${COVERAGE}%"

  # Stop conditions
  if (( $(echo "$COVERAGE >= $MIN" | bc -l) )) && [ $ITERATION -gt 5 ]; then
    echo "✅ Min coverage reached"
    break
  fi

  # Find spec with lowest coverage
  # Write tests for missing acceptance criteria
  # Prioritize P0 > P1 > P2 specs

  npm test -- --coverage --watchAll=false --silent
  PREV=$COVERAGE
  COVERAGE=$(jq '.total.lines.pct' coverage/coverage-summary.json)
  GAIN=$(echo "$COVERAGE - $PREV" | bc)

  # Diminishing returns?
  if (( $(echo "$GAIN < 0.5" | bc -l) )) && (( $(echo "$COVERAGE >= $MIN" | bc -l) )); then
    echo "✅ Diminishing returns (${GAIN}% gain)"
    break
  fi

  ITERATION=$((ITERATION + 1))
  [ $ITERATION -gt 10 ] && break
done

echo "✅ Phase 0 Complete: ${COVERAGE}% coverage"
```

### 0.5: Completion Marker

```bash
cat > .upgrade/stackshift-upgrade.yml <<EOF
repo_name: $(jq -r '.name' package.json)
route: brownfield-upgrade
started: $(date -u +"%Y-%m-%dT%H:%M:%SZ")

phase_0_complete: true
phase_0_coverage: $COVERAGE
specs_analyzed: $SPEC_COUNT
all_tests_passing: true
EOF
```

---

## Phase 1: Baseline & Analysis (READ-ONLY)

**Goal:** Understand current state, plan upgrade, identify spec impact

**Time:** 15-30 minutes | **Mode:** Read-only analysis

**🚨 NO FILE MODIFICATIONS IN PHASE 1**

### 1.1: Spec-Code Baseline

```bash
echo "=== Phase 1: Baseline & Analysis ==="

# Run spec analysis
/speckit.analyze | tee .upgrade/baseline-spec-analysis.txt

# Document which specs are COMPLETE/PARTIAL/MISSING
grep -E "✅|⚠️|❌" .upgrade/baseline-spec-analysis.txt > .upgrade/baseline-spec-status.txt
```

### 1.2: Dependency Analysis

```bash
# Current dependencies
npm list --depth=0 > .upgrade/dependencies-before.txt

# Outdated packages
npm outdated --json > .upgrade/outdated.json || echo "{}" > .upgrade/outdated.json

# Count major upgrades (major version segment differs, not just any drift)
MAJOR_COUNT=$(jq '[.[] | select((.current // "0" | split(".")[0]) != (.latest // "0" | split(".")[0]))] | length' .upgrade/outdated.json)
echo "Major upgrades: $MAJOR_COUNT"
```

### 1.3: Spec Impact Analysis

**For each major dependency upgrade, identify affected specs:**

```bash
# Create impact analysis
cat > .upgrade/spec-impact-analysis.json <<'EOF'
{
  "react": {
    "current": "17.0.2",
    "latest": "19.2.0",
    "breaking": true,
    "affectedSpecs": [
      "user-interface.md",
      "form-handling.md"
    ],
    "acceptanceCriteria": [
      "user-interface.md: AC-1, AC-3",
      "form-handling.md: AC-2"
    ],
    "testFiles": [
      "components/UserInterface.test.tsx"
    ],
    "risk": "HIGH"
  }
}
EOF
```

### 1.4: Generate Upgrade Plan

Create `.upgrade/UPGRADE_PLAN.md`:

```markdown
# Upgrade Plan

## Summary
- Dependencies to upgrade: ${MAJOR_COUNT} major versions
- Specs affected: [count from impact analysis]
- Risk level: [HIGH/MEDIUM/LOW]
- Estimated effort: 2-4 hours

## Critical Upgrades (Breaking Changes Expected)

### react: 17.0.2 → 19.2.0
- **Breaking Changes:**
  - Automatic batching
  - Hydration mismatches
  - useId for SSR
- **Affected Specs:**
  - user-interface.md (AC-1, AC-3, AC-5)
  - form-handling.md (AC-2, AC-4)
- **Test Files:**
  - components/UserInterface.test.tsx
  - components/FormHandler.test.tsx
- **Risk:** HIGH

[Continue for all major upgrades...]

## Spec Impact Summary

### High-Risk Specs (Validate Carefully)
1. user-interface.md - React changes affect all components
2. form-handling.md - State batching changes

### Low-Risk Specs (Quick Validation)
[List specs unlikely to be affected]

## Upgrade Sequence
1. Update package.json versions
2. npm install
3. Fix TypeScript errors
4. Fix test failures (spec-guided)
5. Fix build errors
6. Validate with /speckit.analyze
```

### 1.5: Update Tracking

```bash
cat >> .upgrade/stackshift-upgrade.yml <<EOF
phase_1_complete: true
phase_1_date: $(date -u +"%Y-%m-%dT%H:%M:%SZ")
planned_major_upgrades: $MAJOR_COUNT
high_risk_specs: [count]
upgrade_plan_created: true
EOF
```

---

## Phase 2: Dependency Upgrade & Spec-Guided Fixes

**Goal:** Upgrade all dependencies, fix breaking changes using specs

**Time:** 1-4 hours | **Mode:** Implementation with spec guidance

### 2.1: Pre-Flight Check

```bash
echo "=== Pre-Flight Health Check ==="

npm test && echo "Tests: ✅" || { echo "Tests: ❌ STOP"; exit 1; }
npm run build && echo "Build: ✅" || { echo "Build: ❌ STOP"; exit 1; }
npm run lint 2>/dev/null && echo "Lint: ✅" || echo "Lint: ⚠️ OK"

# Must be green to proceed
```

### 2.2: Create Upgrade Branch

```bash
git checkout -b upgrade/dependencies-to-latest
git add .upgrade/
git commit -m "docs: upgrade baseline and plan

Phase 0: Coverage ${COVERAGE}%
Phase 1: Analysis complete
Specs: $SPEC_COUNT validated
Ready for Phase 2
"
```

### 2.3: Upgrade Dependencies

```bash
echo "=== Upgrading Dependencies ==="

# Upgrade to latest
npx npm-check-updates -u
npm install

# Update Node (optional but recommended)
echo "22.21.0" > .nvmrc
nvm install 22.21.0
nvm use

# Document changes
npm list --depth=0 > .upgrade/dependencies-after.txt
```
|
||||
|
||||
### 2.4: Detect Breaking Changes
|
||||
|
||||
```bash
|
||||
echo "=== Testing After Upgrade ==="
|
||||
|
||||
npm test 2>&1 | tee .upgrade/test-results-post-upgrade.txt
|
||||
|
||||
# Extract failures
|
||||
grep -E "FAIL|✕" .upgrade/test-results-post-upgrade.txt > .upgrade/failures.txt || true
|
||||
FAILURE_COUNT=$(wc -l < .upgrade/failures.txt)
|
||||
|
||||
echo "Breaking changes: $FAILURE_COUNT test failures"
|
||||
```
### 2.5: Spec-Guided Fix Loop

**Autonomous iteration until all tests pass:**

```bash
ITERATION=1
MAX_ITERATIONS=20

while ! npm test --silent >/dev/null 2>&1; do
  echo "=== Fix Iteration $ITERATION ==="

  # Get first failing test
  FAILING_TEST=$(npm test 2>&1 | grep -m 1 "FAIL" | grep -oE "[^ ]+\.test\.[jt]sx?" || echo "")

  if [ -z "$FAILING_TEST" ]; then
    echo "No specific test file found, checking for general errors..."
    npm test 2>&1 | head -50
    break
  fi

  echo "Failing test: $FAILING_TEST"

  # Find spec from coverage map
  SPEC=$(jq -r "to_entries[] | select(.value.testFiles[] | contains(\"$FAILING_TEST\")) | .key" .upgrade/spec-coverage-map.json || echo "")

  if [ -n "$SPEC" ]; then
    echo "Validates spec: $SPEC"
    echo "Loading spec acceptance criteria..."
    grep -A 20 "Acceptance Criteria" ".specify/memory/specifications/$SPEC"
  fi

  # FIX THE BREAKING CHANGE
  # - Read spec acceptance criteria
  # - Understand intended behavior
  # - Fix code to preserve that behavior
  # - Run test to verify fix

  # Log fix
  echo "[$ITERATION] Fixed: $FAILING_TEST (spec: $SPEC)" >> .upgrade/fixes-applied.log

  # Commit incremental fix
  git add -A
  git commit -m "fix: breaking change in $FAILING_TEST

Spec: $SPEC
Fixed to preserve acceptance criteria behavior
"

  ITERATION=$((ITERATION + 1))

  if [ $ITERATION -gt $MAX_ITERATIONS ]; then
    echo "⚠️ Max iterations reached - manual review needed"
    break
  fi
done

echo "✅ All tests passing"
```
### 2.6: Build & Lint

```bash
# Fix build
npm run build || (echo "Fixing build errors..." && [fix build])

# Fix lint (often ESLint 9 config changes)
npm run lint || (echo "Fixing lint..." && [update eslint.config.js])
```

### 2.7: Phase 2 Complete

```bash
npm test && npm run build && npm run lint

FINAL_COVERAGE=$(jq '.total.lines.pct' coverage/coverage-summary.json)

cat >> .upgrade/stackshift-upgrade.yml <<EOF
phase_2_complete: true
phase_2_date: $(date -u +"%Y-%m-%dT%H:%M:%SZ")
test_coverage_after: $FINAL_COVERAGE
fixes_applied: $(wc -l < .upgrade/fixes-applied.log)
all_passing: true
EOF
```

---
## Phase 3: Spec Validation & PR

**Goal:** Validate specs match code, create PR

**Time:** 15-30 minutes

### 3.1: Spec Validation

```bash
echo "=== Phase 3: Spec Validation ==="

# Run spec analysis
/speckit.analyze | tee .upgrade/final-spec-analysis.txt

# Check for drift
if grep -q "drift\|mismatch\|inconsistent" .upgrade/final-spec-analysis.txt; then
  echo "⚠️ Drift detected - investigating..."
  # Review and fix any drift
fi
```
### 3.2: Generate Upgrade Report

```bash
# Create comprehensive report
# Note: the before/after diff assumes a pre-upgrade snapshot at
# .upgrade/dependencies-before.txt (capture one before upgrading);
# running `npm outdated` here would be empty, since everything was just upgraded
cat > .upgrade/UPGRADE_REPORT.md <<EOF
# Dependency Upgrade Report

**Date:** $(date)
**Project:** $(jq -r '.name' package.json)
**Route:** Brownfield Upgrade (StackShift Modernize)

## Summary

- **Dependencies Upgraded:** $(wc -l < .upgrade/dependencies-after.txt) packages
- **Breaking Changes Fixed:** $(wc -l < .upgrade/fixes-applied.log)
- **Test Coverage:** ${BASELINE_COVERAGE}% → ${FINAL_COVERAGE}%
- **Spec Validation:** ✅ $(grep -c "✅ COMPLETE" .upgrade/final-spec-analysis.txt) specs validated
- **Security:** $(npm audit --json | jq '.metadata.vulnerabilities.total // 0') vulnerabilities

## Major Dependency Upgrades

$(diff .upgrade/dependencies-before.txt .upgrade/dependencies-after.txt 2>/dev/null | grep '^[<>]' || echo "- See .upgrade/dependencies-after.txt for the full post-upgrade list")

## Breaking Changes Fixed

$(cat .upgrade/fixes-applied.log)

## Spec Validation Results

- **Total Specs:** $SPEC_COUNT
- **Validated:** ✅ All passing
- **Drift:** None detected
- **Specs Updated:** [if any, list why]

## Test Coverage

- **Before:** ${BASELINE_COVERAGE}%
- **After:** ${FINAL_COVERAGE}%
- **Target:** 85% ✅

## Security Improvements

$(npm audit --json | jq -r '.vulnerabilities | to_entries[] | select(.value.via[0].severity? == "high" or .value.via[0].severity? == "critical") | "- \(.key): \(.value.via[0].title?)"' || echo "No high/critical vulnerabilities")

## Validation Checklist

- [x] All tests passing
- [x] Build successful
- [x] Lint passing
- [x] Coverage ≥85%
- [x] Specs validated (/speckit.analyze)
- [x] No high/critical vulnerabilities
- [ ] Code review
- [ ] Merge approved
EOF

cat .upgrade/UPGRADE_REPORT.md
```
### 3.3: Create Pull Request

```bash
# Final commit (unquoted EOF so the coverage variables expand)
git add -A
git commit -m "$(cat <<EOF
chore: upgrade all dependencies to latest versions

Spec-driven upgrade using StackShift modernize workflow.

Phases Completed:
- Phase 0: Test coverage ${BASELINE_COVERAGE}% → ${FINAL_COVERAGE}%
- Phase 1: Analysis & planning
- Phase 2: Upgrades & spec-guided fixes
- Phase 3: Validation ✅

See .upgrade/UPGRADE_REPORT.md for complete details.
EOF
)"

# Push
git push -u origin upgrade/dependencies-to-latest

# Create PR
gh pr create \
  --title "chore: Upgrade all dependencies to latest versions" \
  --body-file .upgrade/UPGRADE_REPORT.md
```
### 3.4: Final Tracking Update

```bash
PR_URL=$(gh pr list --head upgrade/dependencies-to-latest --json url -q '.[0].url')

cat >> .upgrade/stackshift-upgrade.yml <<EOF
phase_3_complete: true
upgrade_complete: true
completion_date: $(date -u +"%Y-%m-%dT%H:%M:%SZ")
pr_number: $(gh pr list --head upgrade/dependencies-to-latest --json number -q '.[0].number')
pr_url: $PR_URL
final_status: SUCCESS
EOF

echo ""
echo "🎉 ========================================="
echo "   MODERNIZATION COMPLETE!"
echo "========================================="
echo ""
echo "✅ Dependencies: All latest versions"
echo "✅ Tests: All passing"
echo "✅ Coverage: ${FINAL_COVERAGE}% (target: 85%)"
echo "✅ Specs: Validated with /speckit.analyze"
echo "✅ Build: Successful"
echo "✅ Security: Vulnerabilities resolved"
echo ""
echo "📋 Report: .upgrade/UPGRADE_REPORT.md"
echo "🔗 PR: $PR_URL"
echo ""
```

---
## Key Differences from thoth-cli

**Generic upgrade workflow:**
- 3 phases (coverage, discovery, implementation)
- Target latest versions per package manager
- Enzyme removal required
- ESLint 9 flat config required
- Batch processing 90+ widgets

**StackShift modernize (Generic + Spec-driven):**
- 4 phases (includes spec validation)
- Latest versions (whatever is current)
- **Spec acceptance criteria guide test writing**
- **Specs guide breaking change fixes**
- **Continuous spec validation**
- Single repo focus

**What we learned from thoth-cli:**
- ✅ Phase 0 test coverage foundation
- ✅ Read-only analysis phase
- ✅ Iterative fix loop
- ✅ Autonomous test writing
- ✅ Comprehensive tracking files
- ✅ Validation before proceeding

**What we added for specs:**
- ✅ Spec acceptance criteria = test blueprint
- ✅ Spec impact analysis
- ✅ Spec-guided breaking change fixes
- ✅ /speckit.analyze validation
- ✅ Spec-coverage mapping

---
## Success Criteria

- ✅ All dependencies at latest stable
- ✅ Test coverage ≥85%
- ✅ All tests passing
- ✅ Build successful
- ✅ /speckit.analyze: No drift
- ✅ PR created
- ✅ Security issues resolved

---

## Troubleshooting

**"Tests failing after upgrade"**
```bash
# 1. Find failing test
npm test 2>&1 | grep FAIL

# 2. Find spec
jq '.[] | select(.testFiles[] | contains("failing-test.ts"))' .upgrade/spec-coverage-map.json

# 3. Read spec acceptance criteria
grep -A 10 "Acceptance Criteria" .specify/memory/specifications/[spec-name].md

# 4. Fix to match spec
# 5. Verify with npm test
```

**"Can't reach 85% coverage"**
```bash
# Check which acceptance criteria lack tests
jq '.[] | select(.missingCoverage | length > 0)' .upgrade/spec-coverage-map.json

# Write tests for those criteria
```

**"/speckit.analyze shows drift"**
```bash
# Review what changed
/speckit.analyze

# Fix code to match spec OR
# Update spec if intentional improvement
```

---

**Remember:** Specs are your north star. When breaking changes occur, specs tell you what behavior to preserve.
132 commands/setup.md Normal file
@@ -0,0 +1,132 @@
---
description: Install StackShift and Spec Kit slash commands to this project for team use. Run this if you joined a project after StackShift analysis was completed.
---

# StackShift Setup - Install Slash Commands

**Use this if:**
- You cloned a project that uses StackShift
- Slash commands aren't showing up (/speckit.*, /stackshift.*)
- You want to add features but don't have the commands

---

## Step 1: Install Commands

```bash
# Create commands directory
mkdir -p .claude/commands

# Copy from StackShift plugin
cp ~/.claude/plugins/stackshift/.claude/commands/speckit.*.md .claude/commands/
cp ~/.claude/plugins/stackshift/.claude/commands/stackshift.*.md .claude/commands/

# Verify
ls .claude/commands/
```
**You should see:**
- ✅ speckit.analyze.md
- ✅ speckit.clarify.md
- ✅ speckit.implement.md
- ✅ speckit.plan.md
- ✅ speckit.specify.md
- ✅ speckit.tasks.md
- ✅ stackshift.modernize.md
- ✅ stackshift.setup.md

---

## Step 2: Update .gitignore

**Ensure .gitignore allows .claude/commands/ to be committed:**

```bash
# Check if .gitignore exists
if [ ! -f .gitignore ]; then
  echo "Creating .gitignore..."
  touch .gitignore
fi

# Add rules to allow slash commands
cat >> .gitignore <<'EOF'

# Claude Code - Allow slash commands (team needs these!)
!.claude/
!.claude/commands/
!.claude/commands/*.md

# Ignore user-specific Claude settings
.claude/settings.json
.claude/mcp-settings.json
.claude/.storage/
EOF

echo "✅ .gitignore updated"
```
---

## Step 3: Commit to Git

```bash
git add .claude/commands/
git add .gitignore

git commit -m "chore: add StackShift slash commands for team

Adds /speckit.* and /stackshift.* slash commands.

Commands installed:
- /speckit.specify - Create feature specifications
- /speckit.plan - Create technical implementation plans
- /speckit.tasks - Generate task breakdowns
- /speckit.implement - Execute implementation
- /speckit.clarify - Resolve specification ambiguities
- /speckit.analyze - Validate specs match code
- /stackshift.modernize - Upgrade dependencies
- /stackshift.setup - Install commands (this command)

These enable spec-driven development for the entire team.
All team members will have commands after cloning.
"
```
---

## Done!

✅ Commands installed to project
✅ .gitignore updated to allow commands
✅ Commands committed to git
✅ Team members will have commands when they clone

**Type `/spec` and you should see all commands autocomplete!**

---

## For Project Leads

**After running StackShift on a project, always:**

1. ✅ Run `/stackshift.setup` (or manual Steps 1-3 above)
2. ✅ Commit .claude/commands/ to git
3. ✅ Push to remote

**This ensures everyone on your team has access to slash commands without individual setup.**

---

## Troubleshooting

**"Commands still not showing up"**
→ Restart Claude Code after installing

**"Git says .claude/ is ignored"**
→ Check .gitignore has the `!.claude/commands/` rule

**"Don't have StackShift plugin installed"**
→ Install: `/plugin marketplace add jschulte/claude-plugins` then `/plugin install stackshift`

**"StackShift plugin not in ~/.claude/plugins/"**
→ Commands might be in a different location; manually copy them from a project that has them
134 commands/speckit-analyze.md Normal file
@@ -0,0 +1,134 @@
---
description: Validate specifications against implementation
---

# Spec Kit: Analyze Specifications

Compare specifications in `specs/` against the actual codebase implementation.

## Analysis Steps

### 1. Load All Specifications

Read all files in `specs/`:
- Note status markers (✅ COMPLETE / ⚠️ PARTIAL / ❌ MISSING)
- Identify dependencies between specs
- List acceptance criteria
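This inventory can be partly scripted; a rough sketch (assumes specs live in `specs/` and use the status markers above):

```shell
# List each spec with its first status marker (or NO STATUS if none found)
for f in specs/*.md; do
  [ -e "$f" ] || continue  # skip when the glob matches nothing
  status=$(grep -oE "COMPLETE|PARTIAL|MISSING" "$f" | head -n1)
  echo "$f: ${status:-NO STATUS}"
done
```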
### 2. Validate Each Specification

For each spec marked **✅ COMPLETE:**
- **Verify implementation exists:**
  - Check file paths mentioned in spec
  - Verify API endpoints exist
  - Confirm database models match schema
  - Test that features actually work

- **If implementation missing:**
  ```
  ⚠️ Inconsistency: spec-name.md marked COMPLETE
  Reality: Implementation not found
  Files checked: [list]
  Recommendation: Update status to PARTIAL or MISSING
  ```

For each spec marked **⚠️ PARTIAL:**
- **Verify what's listed as implemented actually exists**
- **Verify what's listed as missing is actually missing**
- **Check for implementation drift** (code changed since spec written)

For each spec marked **❌ MISSING:**
- **Check if implementation exists anyway** (orphaned code)
  ```
  ⚠️ Orphaned Implementation: user-notifications feature
  Spec: Marked MISSING
  Reality: Implementation found in src/notifications/
  Recommendation: Create specification or remove code
  ```
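The "verify file paths" part of this check can be automated; a rough sketch (the spec filename and the convention that source paths appear in backticks are both assumptions):

```shell
# Check whether backtick-quoted source paths mentioned in a spec exist on disk
SPEC="specs/user-authentication.md"   # hypothetical spec file
grep -oE '`[^`]+\.(ts|tsx|js|jsx|py)`' "$SPEC" 2>/dev/null | tr -d '`' | sort -u |
while read -r path; do
  if [ -e "$path" ]; then echo "OK       $path"; else echo "MISSING  $path"; fi
done
```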
### 3. Check for Inconsistencies

- **Conflicting requirements** between related specs
- **Broken dependencies** (spec A requires spec B, but B is MISSING)
- **Version mismatches** (spec requires v2.0, code uses v1.5)
- **Outdated technical details** (for brownfield specs)

### 4. Generate Report

Output format:

```markdown
# Specification Analysis Results

**Date:** [current date]
**Specifications analyzed:** X total

## Summary

- ✅ COMPLETE features: X (fully aligned)
- ⚠️ PARTIAL features: X (some gaps)
- ❌ MISSING features: X (not started)
- 🔴 Inconsistencies found: X

## Issues Detected

### High Priority (Blocking)

1. **user-authentication.md** (COMPLETE → should be PARTIAL)
   - Spec says: Frontend login UI required
   - Reality: No login components found
   - Impact: Users cannot authenticate
   - Recommendation: Implement login UI or update spec status

2. **photo-upload.md → fish-management.md**
   - Dependency broken
   - fish-management requires photo-upload
   - photo-upload marked PARTIAL (API incomplete)
   - Impact: Fish photos cannot be uploaded
   - Recommendation: Complete photo-upload API first

### Medium Priority

[...]

### Low Priority (Minor)

[...]

## Orphaned Implementations

Code exists without specifications:

1. **src/api/notifications.ts** (156 lines)
   - No specification found
   - Recommendation: Create specification or remove code

## Alignment Score

- Specifications ↔ Code: X% aligned
- No issues found: ✅ / Issues require attention: ⚠️

## Recommendations

1. Update status markers for inconsistent specs
2. Create specifications for orphaned code
3. Complete high-priority implementations
4. Re-run `/speckit.analyze` after fixes

## Next Steps

- Fix high-priority issues first
- Update specification status markers
- Re-run analysis to validate fixes
```

---

## Notes

- This analysis should be thorough but not modify any files
- Report inconsistencies, don't auto-fix
- Cross-reference related specifications
- Check both directions (spec → code and code → spec)
- For brownfield: Verify exact versions and file paths
- For greenfield: Check if business requirements are met (ignore implementation details)
212 commands/speckit-clarify.md Normal file
@@ -0,0 +1,212 @@
---
description: Resolve [NEEDS CLARIFICATION] markers interactively
---

# Spec Kit: Clarify Specifications

Find and resolve all `[NEEDS CLARIFICATION]` markers in specifications.

## Process

### Step 1: Find All Clarifications

Scan `specs/` for `[NEEDS CLARIFICATION]` markers:

```bash
grep -r "\[NEEDS CLARIFICATION\]" specs/
```

Create a list:
```markdown
## Clarifications Needed

### High Priority (P0)
1. **user-authentication.md:**
   - [NEEDS CLARIFICATION] Password reset: email link or SMS code?

2. **photo-upload.md:**
   - [NEEDS CLARIFICATION] Max file size: 5MB or 10MB?
   - [NEEDS CLARIFICATION] Supported formats: just images or also videos?

### Medium Priority (P1)
3. **analytics-dashboard.md:**
   - [NEEDS CLARIFICATION] Chart types: line, bar, pie, or all three?
   - [NEEDS CLARIFICATION] Real-time updates or daily aggregates?

[... list all ...]
```
### Step 2: Ask Questions by Priority

**For each clarification (P0 first):**

```
**Feature:** user-authentication
**Question:** For password reset, should we use email link or SMS code?

**Context:**
- Email link: More secure, standard approach
- SMS code: Faster, but requires phone number

**Recommendation:** Email link (industry standard)

What would you prefer?
```

### Step 3: Record Answers

For each answer received:

```markdown
## Answer: Password Reset Method

**Question:** Email link or SMS code?
**Answer:** Email link with 1-hour expiration
**Additional Details:**
- Link format: /reset-password?token={jwt}
- Token expires: 1 hour
- Email template: Branded with reset button
```

### Step 4: Update Specifications

For each answer, update the corresponding specification:

**Before:**
```markdown
## Password Reset

[NEEDS CLARIFICATION] Should we use email link or SMS code?
```

**After:**
```markdown
## Password Reset

Users can reset their password via email link.

**Implementation:**
- User requests reset via /api/auth/reset-password
- System sends email with unique token (JWT, 1-hour expiry)
- Email contains link: /reset-password?token={jwt}
- User clicks link, enters new password
- Token validated, password updated

**Acceptance Criteria:**
- [ ] User can request password reset
- [ ] Email sent within 30 seconds
- [ ] Link expires after 1 hour
- [ ] Token is single-use
- [ ] Password successfully updated after reset
```

### Step 5: Cross-Reference Updates

If a clarification affects multiple specs, update all of them.

Example: Max file size affects both photo-upload and document-upload specs
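Finding every spec touched by an answer can be scripted; a quick sketch (the search term is just an example):

```shell
# List specs that mention the clarified topic (case-insensitive)
grep -ril "max file size" specs/ --include='*.md' || true
```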
### Step 6: Validate Completeness

After all clarifications:

```bash
# Check no markers remain
grep -r "\[NEEDS CLARIFICATION\]" specs/

# Should return: No matches (or only in comments/examples)
```

---

## Output Format

```markdown
# Clarification Session Complete

**Date:** {{DATE}}
**Clarifications resolved:** X

## Summary

### Photo Upload Feature
✅ Max file size: 10MB
✅ Supported formats: JPEG, PNG, WebP
✅ Upload method: Drag-drop and click-browse (both)
✅ Max photos per item: 10

### Analytics Dashboard
✅ Chart types: Line, bar, pie (all three)
✅ Data refresh: Real-time for alerts, daily aggregates for charts
✅ Date ranges: 7d, 30d, 90d, all time

### User Authentication
✅ Password reset: Email link (1-hour expiry)
✅ Session duration: 24 hours (configurable)
✅ Rate limiting: 10 attempts per hour

[... all clarifications ...]

## Specifications Updated

Updated X specifications:
1. user-authentication.md
2. photo-upload.md
3. analytics-dashboard.md
[...]

## Validation

✅ No [NEEDS CLARIFICATION] markers remaining
✅ All acceptance criteria complete
✅ Specifications ready for implementation

## Next Steps

Ready for Gear 6: Implementation

Use `/speckit.tasks` and `/speckit.implement` for each feature.
```
---

## Defer Mode

If clarifications_strategy = "defer":

Instead of asking questions, just document them:

```markdown
# Deferred Clarifications

**File:** .specify/memory/deferred-clarifications.md

## Items to Clarify Later

1. **photo-upload.md:**
   - Max file size?
   - Supported formats?

2. **analytics-dashboard.md:**
   - Chart types?
   - Real-time or aggregated?

[... list all ...]

## How to Resolve

When ready, run `/speckit.clarify` to answer these questions.
```

Then continue with implementation of fully-specified features.

---

## Notes

- Ask questions in priority order (P0 first)
- Provide context and recommendations
- Allow "defer to later" for non-critical clarifications
- Update all related specs when answering
- Validate no markers remain at the end
- If user is unsure, mark as P2/P3 (lower priority)
218 commands/speckit-implement.md Normal file
@@ -0,0 +1,218 @@
---
description: Implement feature from specification and plan
---

# Spec Kit: Implement Feature

Systematically implement a feature from its specification and implementation plan.

## Inputs

**Feature name:** 008-user-profile

**Files to read:**
- Specification: `specs/008-user-profile/spec.md`
- Implementation Plan: `specs/008-user-profile/plan.md`
- Tasks: `specs/008-user-profile/tasks.md`
## Implementation Process

### Step 1: Review Specification

Read `specs/008-user-profile/spec.md`:

- Understand the feature overview
- Read all user stories
- Review acceptance criteria (these are your tests!)
- Note dependencies on other features
- Check current status (COMPLETE/PARTIAL/MISSING)

### Step 2: Review Implementation Plan

Read `specs/008-user-profile/plan.md`:

- Understand current vs target state
- Review technical approach
- Read the task list
- Note risks and mitigations
- Review testing strategy

### Step 3: Execute Tasks Systematically

For each task in the implementation plan:

1. **Read the task description**
2. **Implement the task:**
   - Create/modify files as needed
   - Follow existing code patterns
   - Use appropriate frameworks/libraries
   - Add proper error handling
   - Include logging where appropriate

3. **Test the task:**
   - Run relevant tests
   - Manual testing if needed
   - Verify acceptance criteria

4. **Mark task complete:**
   - Update task checklist
   - Note any issues or deviations

5. **Continue to next task**

### Step 4: Validate Against Acceptance Criteria

After all tasks complete, verify each acceptance criterion:

```markdown
## Acceptance Criteria Validation

- [x] User can register with email/password
  ✅ Tested: Registration form works, user created in DB

- [x] Passwords meet complexity requirements
  ✅ Tested: Weak passwords rejected with proper error message

- [x] Verification email sent
  ✅ Tested: Email sent via SendGrid, token valid for 24h

- [ ] User can reset password
  ❌ NOT IMPLEMENTED: Deferred to future sprint
```
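Counting how many criteria remain unchecked can be scripted; a small sketch (the spec path is an example):

```shell
# Count unchecked acceptance-criteria checkboxes in a spec
SPEC="specs/008-user-profile/spec.md"   # example path
OPEN=$(grep -c '^- \[ \]' "$SPEC" 2>/dev/null || true)
echo "Unchecked criteria: ${OPEN:-0}"
```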
### Step 5: Run Tests

Execute test suite:

```bash
# Run unit tests
npm test

# Run integration tests (if available)
npm run test:integration

# Run E2E tests for this feature (if available)
npm run test:e2e
```

Report test results:
- Tests passing: X/Y
- New tests added: Z
- Coverage: X%

### Step 6: Update Specification Status

Update `specs/008-user-profile/spec.md`:

**If fully implemented:**
```markdown
## Status
✅ **COMPLETE** - Fully implemented and tested

## Implementation Complete

- Date: [current date]
- All acceptance criteria met
- Tests passing: X/X
- No known issues
```

**If partially implemented:**
```markdown
## Status
⚠️ **PARTIAL** - Core functionality complete, missing: [list]

## Implementation Status

**Completed:**
- ✅ [What was implemented]

**Still Missing:**
- ❌ [What's still needed]

**Reason:** [Why not fully complete]
```

### Step 7: Commit Changes

Create commit with reference to specification:

```bash
git add .
git commit -m "feat: implement 008-user-profile

Implemented from specification: specs/008-user-profile/spec.md

Completed:
- [Task 1]
- [Task 2]
- [Task 3]

Tests: X passing
Status: COMPLETE"
```
---

## Output Format

Provide summary:

```markdown
## Implementation Complete: 008-user-profile

### Tasks Completed
- [x] Task 1: [description] (file.ts)
- [x] Task 2: [description] (file2.ts)
- [x] Task 3: [description]

### Files Created/Modified
- src/feature/component.ts (142 lines)
- src/api/endpoint.ts (78 lines)
- tests/feature.test.ts (95 lines)

### Tests
- Unit tests: 12/12 passing ✅
- Integration tests: 3/3 passing ✅
- Coverage: 87%

### Acceptance Criteria
- [x] Criterion 1 ✅
- [x] Criterion 2 ✅
- [x] Criterion 3 ✅

### Status Update
- Previous: ❌ MISSING
- Now: ✅ COMPLETE

### Commit
✅ Committed: feat: implement 008-user-profile

### Next Steps
Ready to shift into the next feature or run `/speckit.analyze` to validate.
```
---

## Error Handling

If implementation fails:
- Save progress (mark completed tasks)
- Update spec with partial status
- Document blocker
- Provide recommendations

If tests fail:
- Fix issues before marking complete
- Update spec if acceptance criteria need adjustment
- Document test failures

---

## Notes

- Work incrementally (one task at a time)
- Test frequently (after each task if possible)
- Commit early and often
- Update spec status accurately
- Cross-reference related specs
- For greenfield: Use target_stack from state
- For brownfield: Maintain existing patterns and stack
295 commands/speckit-plan.md Normal file
@@ -0,0 +1,295 @@
---
description: Create implementation plan for a feature
---

# Spec Kit: Create Implementation Plan

Generate a detailed implementation plan for a feature.

## Input

**Feature name:** {{FEATURE_NAME}}
**Specification:** `specs/{{FEATURE_NAME}}.md`

## Process

### Step 1: Read Specification

Load specification and understand:
- Current status (COMPLETE/PARTIAL/MISSING)
- Acceptance criteria
- Business rules
- Dependencies

### Step 2: Assess Current State

What exists now?
- For PARTIAL: What's implemented vs missing?
- For MISSING: Starting from scratch?
- For COMPLETE: Why creating plan? (refactor? upgrade?)

### Step 3: Define Target State

What should exist after implementation?
- All acceptance criteria met
- All business rules enforced
- Tests passing
- Documentation updated

### Step 4: Determine Technical Approach

**For brownfield:**
- Use existing tech stack (from specification)
- Follow existing patterns
- Maintain consistency

**For greenfield:**
- Use target_stack from .stackshift-state.json
- Choose appropriate libraries
- Design architecture

**Outline approach:**
1. Backend changes needed
2. Frontend changes needed
3. Database changes needed
4. Configuration changes needed
5. Testing strategy
### Step 5: Break Down Into Phases
|
||||
|
||||
```markdown
|
||||
## Implementation Phases
|
||||
|
||||
### Phase 1: Backend (Foundation)
|
||||
- Database schema changes
|
||||
- API endpoints
|
||||
- Business logic
|
||||
- Validation
|
||||
|
||||
### Phase 2: Frontend (UI)
|
||||
- Page/route creation
|
||||
- Component development
|
||||
- API integration
|
||||
- State management
|
||||
|
||||
### Phase 3: Testing
|
||||
- Unit tests
|
||||
- Integration tests
|
||||
- E2E tests
|
||||
|
||||
### Phase 4: Polish
|
||||
- Error handling
|
||||
- Loading states
|
||||
- Edge cases
|
||||
- Documentation
|
||||
```
|
||||
|
||||
### Step 6: Identify Risks
|
||||
|
||||
```markdown
|
||||
## Risks & Mitigations
|
||||
|
||||
### Risk 1: [Description]
|
||||
- **Impact:** [What could go wrong]
|
||||
- **Probability:** High/Medium/Low
|
||||
- **Mitigation:** [How to prevent/handle]
|
||||
|
||||
### Risk 2: [Description]
|
||||
...
|
||||
```
|
||||
|
||||
### Step 7: Define Success Criteria
|
||||
|
||||
```markdown
|
||||
## Success Criteria
|
||||
|
||||
- [ ] All acceptance criteria from specification met
|
||||
- [ ] Tests passing (unit, integration, E2E)
|
||||
- [ ] No new bugs introduced
|
||||
- [ ] Performance within acceptable range
|
||||
- [ ] Code reviewed and approved
|
||||
- [ ] Documentation updated
|
||||
- [ ] Specification status updated to COMPLETE
|
||||
```
|
||||
|
||||
---

## Output Template

Save to: `specs/{{FEATURE_NAME}}-impl.md`

```markdown
# Implementation Plan: {{FEATURE_NAME}}

**Feature Spec:** `specs/{{FEATURE_NAME}}.md`
**Created:** {{DATE}}
**Status:** {{CURRENT_STATUS}}
**Target:** ✅ COMPLETE

---

## Goal

[Clear statement of what needs to be accomplished]

## Current State

{{#if status == 'MISSING'}}
**Not started:**
- No implementation exists
- Starting from scratch
{{/if}}

{{#if status == 'PARTIAL'}}
**What exists:**
- ✅ [Component 1]
- ✅ [Component 2]

**What's missing:**
- ❌ [Component 3]
- ❌ [Component 4]
{{/if}}

## Target State

After implementation:
- All acceptance criteria met
- Full feature functionality
- Tests passing
- Production-ready

## Technical Approach

### Architecture

[Describe overall approach]

### Technology Choices

{{#if greenfield}}
**Stack:** [From .stackshift-state.json target_stack]
- Framework: [choice]
- Database: [choice]
- Libraries: [list]
{{/if}}

{{#if brownfield}}
**Existing Stack:** [From specification]
- Maintain consistency with current implementation
- Use existing patterns and libraries
{{/if}}

### Implementation Steps

1. **Backend Implementation**
   - Create/modify API endpoints
   - Implement business logic
   - Add database models/migrations
   - Add validation

2. **Frontend Implementation**
   - Create pages/routes
   - Build components
   - Integrate with backend API
   - Add state management

3. **Testing**
   - Write unit tests
   - Write integration tests
   - Write E2E tests

4. **Configuration**
   - Add environment variables
   - Update configuration files
   - Update routing

## Detailed Tasks

[High-level tasks - use `/speckit.tasks` to break down further]

### Backend
- [ ] Task 1
- [ ] Task 2

### Frontend
- [ ] Task 3
- [ ] Task 4

### Testing
- [ ] Task 5
- [ ] Task 6

## Risks & Mitigations

### Risk: [Description]
- **Impact:** [What could go wrong]
- **Mitigation:** [Prevention strategy]

## Dependencies

**Must be complete before starting:**
- [Dependency 1]
- [Dependency 2]

**Blocks these features:**
- [Feature that depends on this]

## Effort Estimate

- Backend: ~X hours
- Frontend: ~Y hours
- Testing: ~Z hours

**Total:** ~W hours

## Testing Strategy

### Unit Tests
- Test business logic in isolation
- Mock external dependencies
- Target: 80%+ coverage

### Integration Tests
- Test API endpoints with a real database (test DB)
- Verify data persistence
- Test error conditions

### E2E Tests
- Test complete user flows
- Critical paths must pass
- Use realistic data

## Success Criteria

- [ ] All acceptance criteria met
- [ ] All tests passing (X/X)
- [ ] No TypeScript/linting errors
- [ ] Code review approved
- [ ] Performance acceptable
- [ ] Security review passed (if sensitive)
- [ ] Documentation updated
- [ ] Specification status: ✅ COMPLETE

## Rollback Plan

If implementation fails:
- [How to undo changes]
- [Database rollback if needed]
- [Feature flag to disable]

---

**Ready for execution:** Use `/speckit.tasks` to generate the task checklist, then `/speckit.implement` to execute.
```

---

## Notes

- Plans should be detailed but not prescriptive about every line of code
- Leave room for implementation decisions
- Focus on what needs to be done, not the exact how
- Include enough detail for `/speckit.tasks` to generate atomic tasks
- Consider risks and dependencies
- For greenfield: Use the target stack from configuration
- For brownfield: Follow existing patterns
209
commands/speckit-specify.md
Normal file

---
description: Create new feature specification
---

# Spec Kit: Create Feature Specification

Create a new feature specification in GitHub Spec Kit format.

## Input

**Feature name:** {{FEATURE_NAME}}

## Process

### Step 1: Gather Requirements

Ask the user about the feature:

```
I'll help you create a specification for: {{FEATURE_NAME}}

Let's define this feature:

1. What does this feature do? (Overview)
2. Who is it for? (User type)
3. What problem does it solve? (Value proposition)
4. What are the key capabilities? (User stories)
5. How will we know it's done? (Acceptance criteria)
6. What are the business rules? (Validation, authorization, etc.)
7. What are the dependencies? (Other features required)
8. What's the priority? (P0/P1/P2/P3)
```

### Step 2: Define User Stories

Format: "As a [user type], I want [capability] so that [benefit]"

```markdown
## User Stories

- As a user, I want to [capability] so that [benefit]
- As a user, I want to [capability] so that [benefit]
- As an admin, I want to [capability] so that [benefit]
```

### Step 3: Define Acceptance Criteria

Specific, testable criteria:

```markdown
## Acceptance Criteria

- [ ] User can [action]
- [ ] System validates [condition]
- [ ] Error shown when [invalid state]
- [ ] Data persists after [action]
```

### Step 4: Define Business Rules

```markdown
## Business Rules

- BR-001: [Validation rule]
- BR-002: [Authorization rule]
- BR-003: [Business logic rule]
```

### Step 5: Check Route and Add Technical Details

**If brownfield (tech-prescriptive):**

Ask about implementation:
```
For the brownfield route, I need implementation details:

1. What API endpoints? (paths, methods)
2. What database models? (schema)
3. What UI components? (file paths)
4. What dependencies? (libraries, versions)
5. Implementation status? (COMPLETE/PARTIAL/MISSING)
```

Add a technical implementation section.

**If greenfield (tech-agnostic):**

Skip implementation details; focus on business requirements only.

### Step 6: Create Specification File

Save to: `specs/{{FEATURE_NAME}}.md`

**Template:**

```markdown
# Feature: {{FEATURE_NAME}}

## Status
[❌ MISSING | ⚠️ PARTIAL | ✅ COMPLETE]

## Overview
[Clear description of what this feature does]

## User Stories
- As a [user], I want [capability] so that [benefit]
- As a [user], I want [capability] so that [benefit]

## Acceptance Criteria
- [ ] Criterion 1
- [ ] Criterion 2
- [ ] Criterion 3

## Business Rules
- BR-001: [Rule description]
- BR-002: [Rule description]

## Non-Functional Requirements
- Performance: [requirement]
- Security: [requirement]
- Accessibility: [requirement]

{{#if brownfield}}
## Current Implementation

### Tech Stack
- Framework: [name version]
- Libraries: [list]

### API Endpoints
- [METHOD] /api/path
  - Handler: [file path]

### Database Schema
\`\`\`prisma
model Name {
  [schema]
}
\`\`\`

### Implementation Files
- [file path] ([description])
- [file path] ([description])

### Dependencies
- [library]: [version]
{{/if}}

## Dependencies
- [Other Feature] (required)
- [External Service] (required)

## Related Specifications
- [related-spec.md]

## Implementation Plan
See: `specs/{{FEATURE_NAME}}-impl.md`

## Notes
[Any additional context, gotchas, or considerations]
```
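
Scaffolding the file can be sketched in a few lines of shell (the feature name, output directory, and pre-filled fields here are illustrative placeholders; the real flow fills them from the interview above):

```shell
# Sketch: scaffold a minimal spec skeleton from the template above.
# FEATURE and OUT are placeholders for this example.
FEATURE="user-profile"
OUT=/tmp/demo-specs
mkdir -p "$OUT"
cat > "$OUT/$FEATURE.md" <<EOF
# Feature: $FEATURE

## Status
❌ MISSING

## Overview
[Clear description of what this feature does]

## Acceptance Criteria
- [ ] Criterion 1
EOF
echo "Created $OUT/$FEATURE.md"
```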

### Step 7: Create Implementation Plan (if PARTIAL or MISSING)

If status is not COMPLETE, create an implementation plan:

`specs/{{FEATURE_NAME}}-impl.md`

See `/speckit.plan` for the implementation plan template.

---

## Output

```markdown
✅ Feature specification created

**File:** `specs/{{FEATURE_NAME}}.md`
**Status:** {{STATUS}}
**Priority:** {{PRIORITY}}

**Summary:**
- User stories: X
- Acceptance criteria: Y
- Business rules: Z
{{#if brownfield}}
- Implementation files: N
{{/if}}

{{#if status != 'COMPLETE'}}
**Implementation Plan:** `specs/{{FEATURE_NAME}}-impl.md`
{{/if}}

**Next Steps:**
- Review specification for accuracy
- Create related specifications if needed
- Use `/speckit.plan` to create/update the implementation plan
```

---

## Notes

- Follow GitHub Spec Kit format conventions
- Use status markers: ✅ ⚠️ ❌
- For brownfield: Include complete technical details
- For greenfield: Focus on business requirements only
- Cross-reference related specifications
- Create an implementation plan for any feature that is not COMPLETE
59
commands/speckit-tasks.md
Normal file

---
description: Generate actionable tasks from implementation plan for a feature
---

# Generate Tasks for Feature

Read the implementation plan and generate an atomic, actionable task list.

## Input

Feature directory: `specs/{{FEATURE_ID}}/`

## Process

1. **Read the plan:**
   ```bash
   cat specs/{{FEATURE_ID}}/plan.md
   ```

2. **Generate atomic tasks:**
   - Each task should be specific (exact file to create/modify)
   - Testable (has clear acceptance criteria)
   - Atomic (can be done in one step)
   - Ordered by dependencies

3. **Create tasks.md:**
   ```bash
   # Save to specs/{{FEATURE_ID}}/tasks.md
   ```
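
These atomicity rules can be spot-checked mechanically. A rough sketch that flags tasks without a concrete file path (it assumes the `- [ ] Description (path/to/file.ext)` convention from the output format below; the sample file is illustrative):

```shell
# Sketch: flag checkbox tasks that do not name a concrete file.
# Assumes the "- [ ] Description (path/to/file.ext)" task convention.
cat > /tmp/demo-tasks.md <<'EOF'
- [ ] Create component (src/Profile.tsx)
- [ ] Add API endpoint (api/profile.ts)
- [ ] Update documentation
EOF

# Select open tasks, then count those lacking a "(...ext)" suffix
VAGUE=$(grep '^- \[ \]' /tmp/demo-tasks.md | grep -cv '([^)]*\.[a-z]*)')
echo "Tasks without a file path: $VAGUE"
```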

## Output Format

```markdown
# Tasks: {{FEATURE_NAME}}

Based on: specs/{{FEATURE_ID}}/plan.md

## Tasks

- [ ] Create component (path/to/file.tsx)
- [ ] Add API endpoint (path/to/route.ts)
- [ ] Write tests (path/to/test.ts)
- [ ] Update documentation

## Dependencies

1. Task X must complete before Task Y

## Estimated Effort

Total: ~X hours
```

## Notes

- Break down into atomic, testable tasks
- Include file paths
- Order by dependencies
- Each task completable independently where possible
338
commands/stackshift.review.md
Normal file

---
name: stackshift.review
description: Perform comprehensive code review across multiple dimensions - correctness, standards, security, performance, and testing. Returns APPROVED/NEEDS CHANGES/BLOCKED with specific feedback.
---

# Code Review

Comprehensive multi-dimensional code review with actionable feedback.

---

## Usage

```bash
# Review recent changes
/stackshift.review

# Review specific feature
/stackshift.review "pricing display feature"

# Review before deployment
/stackshift.review "OAuth2 implementation before production"
```

---

## What This Reviews

### 🔍 Correctness
- Does it work as intended?
- Are all requirements met?
- Logic errors or edge cases?
- Matches specification requirements?

### 📏 Standards Compliance
- Follows project conventions?
- Coding style consistent?
- Documentation adequate?
- Aligns with constitution principles?

### 🔒 Security Assessment
- No obvious vulnerabilities?
- Proper input validation?
- Secure data handling?
- Authentication/authorization correct?

### ⚡ Performance Review
- Efficient implementation?
- Resource usage reasonable?
- Scalability considerations?
- Database queries optimized?

### 🧪 Testing Validation
- Adequate test coverage?
- Edge cases handled?
- Error conditions tested?
- Integration tests included?

---

## Review Process

### Step 1: Identify What Changed

```bash
echo "🔍 Identifying changes to review..."

# Get recent changes
git diff HEAD~1 --name-only > changed-files.txt

# Show files to review
echo "Files to review:"
cat changed-files.txt | sed 's/^/  - /'

# Count by type
BACKEND_FILES=$(grep -E "api/|src/.*\.ts$|lib/" changed-files.txt | wc -l)
FRONTEND_FILES=$(grep -E "site/|pages/|components/.*\.tsx$" changed-files.txt | wc -l)
TEST_FILES=$(grep -E "\.test\.|\.spec\.|__tests__" changed-files.txt | wc -l)

echo ""
echo "📊 Changes breakdown:"
echo "  Backend: $BACKEND_FILES files"
echo "  Frontend: $FRONTEND_FILES files"
echo "  Tests: $TEST_FILES files"
echo ""
```

### Step 2: Review Each Dimension

```bash
echo "🔍 Performing multi-dimensional review..."
echo ""

# Correctness Check
echo "✓ Correctness Review"

# Check if tests exist for changed files
for file in $(cat changed-files.txt); do
  if [[ ! "$file" =~ \.test\. && ! "$file" =~ \.spec\. ]]; then
    # Look for corresponding test file
    TEST_FILE="${file%.ts}.test.ts"
    if [ ! -f "$TEST_FILE" ]; then
      echo "  ⚠️ Missing tests for: $file"
      echo "ISSUE: No test coverage for $file" >> review-issues.log
    fi
  fi
done

# Check against spec requirements
echo "  Checking against specifications..."
# Implementation reviews code against spec requirements

echo ""

# Standards Compliance
echo "✓ Standards Review"

# Check for common issues
grep -rn "console.log\|debugger" $(cat changed-files.txt) > debug-statements.log 2>/dev/null
if [ -s debug-statements.log ]; then
  echo "  ⚠️ Debug statements found:"
  cat debug-statements.log
  echo "ISSUE: Debug statements in production code" >> review-issues.log
fi

# Check for TODO/FIXME
grep -rn "TODO\|FIXME\|XXX\|HACK" $(cat changed-files.txt) > todos.log 2>/dev/null
if [ -s todos.log ]; then
  echo "  ⚠️ Unresolved TODOs found:"
  cat todos.log
  echo "ISSUE: Unresolved TODO comments" >> review-issues.log
fi

echo ""

# Security Assessment
echo "✓ Security Review"

# Check for common security issues
grep -rn "eval(\|innerHTML\|dangerouslySetInnerHTML" $(cat changed-files.txt) > security-issues.log 2>/dev/null
if [ -s security-issues.log ]; then
  echo "  ❌ Security concerns found:"
  cat security-issues.log
  echo "CRITICAL: Security vulnerability" >> review-issues.log
fi

# Check for hardcoded secrets/credentials
grep -rn "password.*=\|api.*key.*=\|secret.*=" $(cat changed-files.txt) > credentials.log 2>/dev/null
if [ -s credentials.log ]; then
  echo "  ❌ Potential hardcoded credentials:"
  cat credentials.log
  echo "CRITICAL: Hardcoded credentials" >> review-issues.log
fi

echo ""

# Performance Review
echo "✓ Performance Review"

# Check for common performance issues
grep -rn "for.*in.*forEach\|while.*push\|map.*filter.*map" $(cat changed-files.txt) > performance-issues.log 2>/dev/null
if [ -s performance-issues.log ]; then
  echo "  ⚠️ Potential performance issues:"
  head -5 performance-issues.log
  echo "ISSUE: Inefficient loops or chaining" >> review-issues.log
fi

echo ""

# Testing Validation
echo "✓ Testing Review"

# Check test quality
if [[ "$TEST_FILES" -gt 0 ]]; then
  # Check for proper test structure
  grep -rn "describe\|it(\|test(" $(grep "\.test\.\|\.spec\." changed-files.txt) > test-structure.log 2>/dev/null
  TEST_COUNT=$(grep -c "it(\|test(" test-structure.log 2>/dev/null)
  TEST_COUNT=${TEST_COUNT:-0}  # grep -c already prints 0 on no match; "|| echo 0" would print twice

  echo "  Test cases found: $TEST_COUNT"

  if [[ "$TEST_COUNT" -lt 5 ]]; then
    echo "  ⚠️ Low test coverage detected"
    echo "ISSUE: Insufficient test cases (< 5)" >> review-issues.log
  fi
else
  echo "  ⚠️ No tests added for changes"
  echo "ISSUE: No test files modified" >> review-issues.log
fi

echo ""
```

### Step 3: Generate Review Report

```bash
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "📋 Review Report"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""

TOTAL_ISSUES=$(wc -l < review-issues.log 2>/dev/null || echo "0")
CRITICAL_ISSUES=$(grep -c "CRITICAL" review-issues.log 2>/dev/null)
CRITICAL_ISSUES=${CRITICAL_ISSUES:-0}  # grep -c prints the count itself, even when it is 0

if [[ "$TOTAL_ISSUES" == "0" ]]; then
  echo "### ✅ APPROVED"
  echo ""
  echo "**Strengths:**"
  echo "- All quality checks passed"
  echo "- Spec compliance validated"
  echo "- Security review clean"
  echo "- Performance acceptable"
  echo "- Test coverage adequate"
  echo ""
  echo "**Decision:** Ready for next phase/deployment"
else
  if [[ "$CRITICAL_ISSUES" -gt 0 ]]; then
    echo "### 🚫 BLOCKED"
    echo ""
    echo "**Critical Issues Found:** $CRITICAL_ISSUES"
    echo ""
    echo "Critical issues must be resolved before proceeding:"
  else
    echo "### 🔄 NEEDS CHANGES"
    echo ""
    echo "**Issues Found:** $TOTAL_ISSUES"
    echo ""
    echo "Issues requiring attention:"
  fi

  echo ""
  cat review-issues.log | sed 's/^/  - /'
  echo ""

  echo "**Recommendations:**"
  echo "  1. Address critical security/spec violations first"
  echo "  2. Run /stackshift.validate --fix to auto-resolve technical issues"
  echo "  3. Add missing test coverage"
  echo "  4. Remove debug statements and TODOs"
  echo "  5. Re-run review after fixes"
  echo ""

  if [[ "$CRITICAL_ISSUES" -gt 0 ]]; then
    echo "**Decision:** BLOCKED - Cannot proceed until critical issues resolved"
  else
    echo "**Decision:** NEEDS CHANGES - Address issues before finalizing"
  fi
fi

echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"

# Cleanup
rm -f changed-files.txt debug-statements.log todos.log security-issues.log \
  credentials.log performance-issues.log test-structure.log review-issues.log
```

---

## Review Categories

### ✅ APPROVED
- All critical issues resolved
- Meets quality standards
- Ready for next phase/deployment

### 🔄 NEEDS CHANGES
- Issues found requiring fixes
- Specific feedback provided
- Return to implementation

### 🚫 BLOCKED
- Fundamental issues requiring redesign
- Critical security vulnerabilities
- Major spec violations

---

## Quality Standards

- **No rubber stamping** - Actually review, don't just approve
- **Specific feedback** - Actionable recommendations with line numbers
- **Priority classification** - Critical, Important, Suggestion
- **Evidence-based** - Cite specific files, lines, patterns

---

## Integration with StackShift

**Auto-runs after:**
- Gear 6 completion
- `/stackshift.validate` finds issues
- Before final commit

**Manual usage:**
```bash
# Before committing
/stackshift.review

# After implementing a feature
/stackshift.review "vehicle details feature"

# Before deployment
/stackshift.review "all changes since last release"
```

---

## Example Output

```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📋 Review Report
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

### 🔄 NEEDS CHANGES

**Issues Found:** 3

Issues requiring attention:

  - ISSUE: Debug statements in production code
  - ISSUE: No test coverage for api/handlers/pricing.ts
  - ISSUE: Inefficient loops or chaining in data-processor.ts

**Recommendations:**
  1. Remove console.log from api/handlers/pricing.ts:45
  2. Add test file: api/handlers/pricing.test.ts
  3. Refactor nested loops in data-processor.ts:78-82
  4. Re-run review after fixes

**Decision:** NEEDS CHANGES - Address issues before finalizing

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```

---

**Quality assurance before every finalization!**
358
commands/stackshift.validate.md
Normal file

---
name: stackshift.validate
description: Systematically validate implementation against specifications. Runs tests, TypeScript checks, and spec compliance validation. Use after implementation to ensure quality before finalizing.
---

# Validate Implementation

Comprehensive validation of the implementation against specifications, with automatic fixing capability.

---

## Usage

```bash
# Run full validation
/stackshift.validate

# Run with automatic fixes
/stackshift.validate --fix

# Focus on a specific feature
/stackshift.validate --feature=vehicle-details

# TypeScript check only
/stackshift.validate --type-check-only
```

---

## What This Does

**Phase 1: Assessment**
1. Run full test suite
2. Run TypeScript compilation
3. Categorize failures (imports, types, spec violations, mocks)
4. Cross-reference against specifications

**Phase 2: Spec Compliance**
1. Validate implementation matches spec requirements
2. Check for missing specified features
3. Verify API contracts are implemented
4. Assess priorities (spec violations are P1)

**Phase 3: Resolution (if --fix mode)**
1. Fix import/export issues
2. Fix type mismatches (aligned with spec)
3. Fix specification compliance gaps
4. Fix test mocks
5. Validate after each fix
6. Roll back if fixes break things

**Phase 4: Final Validation**
1. Re-run all tests
2. Verify TypeScript compilation
3. Confirm spec compliance
4. Generate quality report

---

## Command Execution

### Phase 1: Comprehensive Assessment

```bash
echo "🚀 Starting Implementation Validation"
echo ""

# Run test suite
echo "🧪 Running test suite..."
npm test 2>&1 | tee test-results.log

# Extract statistics
TOTAL_TESTS=$(grep -o "[0-9]* tests" test-results.log | head -1 || echo "0")
FAILED_TESTS=$(grep -o "[0-9]* failed" test-results.log || echo "0")
PASSED_TESTS=$(grep -o "[0-9]* passed" test-results.log || echo "0")

echo "📊 Test Results:"
echo "  Total: $TOTAL_TESTS"
echo "  Passed: $PASSED_TESTS"
echo "  Failed: $FAILED_TESTS"
echo ""

# Run TypeScript validation
echo "🔍 Running TypeScript validation..."
npx tsc --noEmit 2>&1 | tee typescript-results.log

TS_ERRORS=$(grep -c "error TS" typescript-results.log 2>/dev/null)
TS_ERRORS=${TS_ERRORS:-0}  # grep -c prints 0 itself on no match
echo "📊 TypeScript Results:"
echo "  Errors: $TS_ERRORS"
echo ""
```

### Phase 2: Specification Validation

```bash
echo "📋 Validating against specifications..."
echo ""

# Find all spec files
SPEC_DIR=".specify/memory/specifications"
if [ ! -d "$SPEC_DIR" ]; then
  SPEC_DIR="specs"
fi

# For each spec, check implementation
for spec in $(find "$SPEC_DIR" -name "*.md"); do
  # Directory-style specs (specs/008-x/spec.md) take the directory name;
  # flat specs (specs/feature.md) take the file name.
  if [ "$(basename "$spec")" = "spec.md" ]; then
    SPEC_NAME=$(basename "$(dirname "$spec")")
  else
    SPEC_NAME=$(basename "$spec" .md)
  fi

  echo "🔍 Checking: $SPEC_NAME"

  # Extract required files/components from the spec
  # Look for "Files:" or "Implementation Status:" sections
  # (-E is required: the (ts|tsx|...) alternation is extended-regex syntax)
  grep -A 20 "^## Files\|^## Implementation Status" "$spec" | \
    grep -oE '`[^`]*\.(ts|tsx|js|jsx|py|go)`' | \
    sed 's/`//g' > "required-files-$SPEC_NAME.txt"

  # Check if required files exist
  while read file; do
    if [ ! -f "$file" ]; then
      echo "  ❌ Missing file: $file"
      echo "SPEC_VIOLATION: $SPEC_NAME missing $file" >> spec-violations.log
    fi
  done < "required-files-$SPEC_NAME.txt"

  # Clean up temp file
  rm "required-files-$SPEC_NAME.txt"
done

SPEC_VIOLATIONS=$(wc -l < spec-violations.log 2>/dev/null || echo "0")
echo ""
echo "📊 Specification Compliance:"
echo "  Violations: $SPEC_VIOLATIONS"
echo ""
```

### Phase 3: Categorize Issues

```bash
echo "📋 Categorizing failures..."
echo ""

# Extract import/export errors
grep -n "Cannot find module\|Module not found\|has no exported member" \
  test-results.log typescript-results.log 2>/dev/null > import-errors.log

# Extract type mismatch errors
grep -n "Type.*is not assignable\|Property.*does not exist\|Argument of type" \
  typescript-results.log 2>/dev/null > type-errors.log

# Extract test assertion failures
grep -n "AssertionError\|Expected.*but received\|toBe\|toEqual" \
  test-results.log 2>/dev/null > test-failures.log

IMPORT_ERRORS=$(wc -l < import-errors.log 2>/dev/null || echo "0")
TYPE_ERRORS=$(wc -l < type-errors.log 2>/dev/null || echo "0")
TEST_FAILURES=$(wc -l < test-failures.log 2>/dev/null || echo "0")

echo "📊 Issue Breakdown:"
echo "  P1 - Spec Violations: $SPEC_VIOLATIONS (highest priority)"
echo "  P2 - Type Errors: $TYPE_ERRORS"
echo "  P3 - Import Errors: $IMPORT_ERRORS"
echo "  P4 - Test Failures: $TEST_FAILURES"
echo ""
```
|
||||
|
||||
### Phase 4: Fix Mode (if --fix)
|
||||
|
||||
```bash
|
||||
# Only run if --fix flag provided
|
||||
if [[ "$FIX_MODE" == "true" ]]; then
|
||||
echo "🔧 Automatic fix mode enabled"
|
||||
echo "⚠️ Creating backup..."
|
||||
|
||||
# Backup current state
|
||||
git stash push -m "stackshift-validate backup $(date +%Y%m%d-%H%M%S)"
|
||||
|
||||
echo ""
|
||||
echo "🔧 Fixing issues in priority order..."
|
||||
echo ""
|
||||
|
||||
# P1: Fix spec violations first
|
||||
if [[ "$SPEC_VIOLATIONS" != "0" ]]; then
|
||||
echo "🔧 P1: Resolving specification violations..."
|
||||
|
||||
# Read spec violations and attempt to implement missing files/features
|
||||
while read violation; do
|
||||
echo " Fixing: $violation"
|
||||
# Implementation would add missing files based on spec requirements
|
||||
done < spec-violations.log
|
||||
|
||||
# Re-validate
|
||||
echo " Re-checking spec compliance..."
|
||||
# Re-run spec validation
|
||||
fi
|

  # P2: Fix type errors
  if [[ "$TYPE_ERRORS" != "0" ]]; then
    echo "🔧 P2: Resolving type errors..."

    # Show first few type errors for context
    head -10 type-errors.log

    echo "  Analyzing type mismatches against spec..."
    # Implementation would fix types to match spec definitions
  fi

  # P3: Fix import errors
  if [[ "$IMPORT_ERRORS" != "0" ]]; then
    echo "🔧 P3: Resolving import errors..."

    # Show import errors
    cat import-errors.log

    echo "  Adding missing exports..."
    # Implementation would add missing exports
  fi

  # P4: Fix test failures
  if [[ "$TEST_FAILURES" != "0" ]]; then
    echo "🔧 P4: Resolving test failures..."

    # Show test failures
    head -10 test-failures.log

    echo "  Fixing test assertions..."
    # Implementation would fix failing tests
  fi

  echo ""
  echo "🔄 Re-running validation after fixes..."

  # Re-run tests and type check
  npm test 2>&1 | tee final-test-results.log
  npx tsc --noEmit 2>&1 | tee final-ts-results.log

  # Extract just the numbers; grep -c already prints 0 when nothing matches
  FINAL_FAILED=$(grep -oE "[0-9]+ failed" final-test-results.log | head -1 | grep -oE "[0-9]+" || echo "0")
  FINAL_TS_ERRORS=$(grep -c "error TS" final-ts-results.log || true)

  if [[ "$FINAL_FAILED" == "0" && "$FINAL_TS_ERRORS" == "0" ]]; then
    echo "✅ All issues resolved!"
    echo "🎉 Implementation validated successfully"
  else
    echo "❌ Some issues remain"
    echo "  Failed tests: $FINAL_FAILED"
    echo "  Type errors: $FINAL_TS_ERRORS"
    echo ""
    echo "🔄 Rolling back changes..."
    git stash pop
    exit 1
  fi
else
  echo "ℹ️  Run with --fix to automatically resolve issues"
fi
```
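Parsing counts out of tool output is easy to get wrong: `grep -o "[0-9]* failed"` yields the whole match (`3 failed`), not just the number. The two-stage extraction can be verified in isolation (the log line below is fabricated sample data):

```shell
#!/usr/bin/env bash
# Two-stage extraction: match "N failed", then strip it down to the number alone.
# The test-output line here is made-up sample data for the demo.
printf 'Tests: 3 failed, 42 passed, 45 total\n' > /tmp/demo-test-results.log

FAILED=$(grep -oE "[0-9]+ failed" /tmp/demo-test-results.log | head -1 | grep -oE "[0-9]+" || echo "0")
echo "$FAILED"   # → 3
```

The `|| echo "0"` fallback keeps the variable numeric when the log contains no failures at all.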

### Final Report

```bash
echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "📊 Validation Summary"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""

if [[ "$SPEC_VIOLATIONS" == "0" && "$TYPE_ERRORS" == "0" && \
      "$IMPORT_ERRORS" == "0" && "$TEST_FAILURES" == "0" ]]; then
  echo "✅ VALIDATION PASSED"
  echo ""
  echo "  All tests passing: ✅"
  echo "  TypeScript compiling: ✅"
  echo "  Spec compliance: ✅"
  echo "  Code quality: ✅"
  echo ""
  echo "🚀 Implementation is production-ready!"
else
  echo "⚠️  VALIDATION ISSUES FOUND"
  echo ""
  echo "  Spec Violations: $SPEC_VIOLATIONS"
  echo "  Type Errors: $TYPE_ERRORS"
  echo "  Import Errors: $IMPORT_ERRORS"
  echo "  Test Failures: $TEST_FAILURES"
  echo ""
  echo "💡 Recommendations:"
  echo "  1. Run with --fix to auto-resolve issues"
  echo "  2. Review spec-violations.log for spec compliance gaps"
  echo "  3. Run /stackshift.review for detailed code review"
fi

echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"

# Cleanup temp files
rm -f test-results.log typescript-results.log import-errors.log \
      type-errors.log test-failures.log spec-violations.log \
      final-test-results.log final-ts-results.log
```

---

## Options

- `--fix` - Automatically attempt to fix identified issues
- `--feature=<name>` - Focus validation on a specific feature
- `--spec-first` - Prioritize spec compliance (default)
- `--type-check-only` - Only run TypeScript validation
- `--no-rollback` - Disable automatic rollback on failures
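
A sketch of how these flags could be parsed into shell variables (only `$FIX_MODE` appears in the validation script itself; the other variable names are assumptions):

```shell
#!/usr/bin/env bash
# Hypothetical flag parsing; variable names other than FIX_MODE are assumed.
parse_flags() {
  FIX_MODE="false" FEATURE="" TYPE_CHECK_ONLY="false" NO_ROLLBACK="false"
  for arg in "$@"; do
    case "$arg" in
      --fix)             FIX_MODE="true" ;;
      --feature=*)       FEATURE="${arg#--feature=}" ;;
      --spec-first)      : ;;  # default behavior, nothing to set
      --type-check-only) TYPE_CHECK_ONLY="true" ;;
      --no-rollback)     NO_ROLLBACK="true" ;;
    esac
  done
}

parse_flags --fix --feature=pricing-display
echo "fix=$FIX_MODE feature=$FEATURE"   # → fix=true feature=pricing-display
```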

---

## Success Criteria

- ✅ All tests pass (0 failures)
- ✅ TypeScript compiles (0 errors)
- ✅ Spec compliance (0 violations)
- ✅ Quality gates passed

---

## Integration with StackShift

**Auto-runs after Gear 6:**

```
Gear 6: Implement features ✅
  ↓
Gear 6.5: Validate & Review
  1. /stackshift.validate --fix
  2. /stackshift.review (if issues found)
  3. /stackshift.coverage (generate coverage map)
  ↓
Complete with confidence! 🎉
```

**Manual usage anytime:**

```bash
# Before committing
/stackshift.validate

# Before pull request
/stackshift.validate --fix

# Check specific feature
/stackshift.validate --feature=pricing-display
```

---

## Principles

1. **Specification Supremacy** - Specs are the source of truth
2. **Zero Tolerance** - ALL tests must pass, ALL types must compile
3. **No Configuration Shortcuts** - Fix the implementation, not the configs
4. **Progressive Fix Strategy** - Address spec violations first
5. **Safety First** - Automatic rollback on fix failures
6. **Comprehensive Reporting** - Clear categorization and progress tracking

---

**This ensures every implementation is validated against specifications before being marked complete!**

171
commands/start.md
Normal file
@@ -0,0 +1,171 @@

---
description: Start StackShift reverse engineering process - analyzes codebase, auto-detects application type (monorepo service, Nx app, etc.), and guides through 6-gear transformation to spec-driven development. Choose Greenfield (tech-agnostic for migration) or Brownfield (tech-prescriptive for maintenance).
---

# StackShift: Reverse Engineering Toolkit

**Start the 6-gear reverse engineering process**

Transform your application into a fully specified, spec-driven codebase.

---

## Auto-Detection

StackShift will automatically detect your widget/module type:

- **service-*** → Monorepo Service (in services/ directory)
- **shared-*** → Shared Package (in packages/ directory)
- **Nx project in apps/** → Nx Application
- **Turborepo package** → Turborepo Package
- **Other** → Generic application (user chooses Greenfield or Brownfield)
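
The detection rules above can be sketched as a shell heuristic (illustrative only — the marker files and relative paths are assumptions, not StackShift's actual logic):

```shell
#!/usr/bin/env bash
# Assumed detection heuristic, keyed on directory name and marker files.
detect_repo_type() {
  local dir="${1:-.}" name
  name=$(basename "$dir")
  if   [[ "$name" == service-* ]]; then echo "monorepo-service"
  elif [[ "$name" == shared-*  ]]; then echo "shared-package"
  elif [[ -f "$dir/nx.json" || -f "$dir/../../nx.json" ]]; then echo "nx-app"
  elif [[ -f "$dir/turbo.json" || -f "$dir/../../turbo.json" ]]; then echo "turborepo-package"
  else echo "generic"
  fi
}

detect_repo_type "service-pricing"   # → monorepo-service
```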

---

## Quick Start

**Just run the analyze skill to begin:**

```
I want to reverse engineer this application.
```

Or be specific:

```
Analyze this codebase for Greenfield migration to Next.js.
```

---

## The 6-Gear Process

Once analysis starts, you'll shift through these gears:

### 🔍 Gear 1: Analyze
- Auto-detect widget/module type
- Detect tech stack and architecture
- Assess completeness
- Choose route (or auto-selected)
- Configure workflow options

### 🔄 Gear 2: Reverse Engineer
- Extract comprehensive documentation (8-9 files)
- Business logic extraction
- For monorepo: Include shared packages
- For Greenfield: Tech-agnostic requirements
- For Brownfield: Tech-prescriptive implementation

### 📋 Gear 3: Create Specifications
- Initialize GitHub Spec Kit (`.specify/`)
- Create Constitution (project principles)
- Generate feature specifications
- Create implementation plans
- Install `/speckit.*` slash commands

### 🔎 Gear 4: Gap Analysis
- **Greenfield:** Validate spec completeness, ask about target stack
- **Brownfield:** Run `/speckit.analyze`, find implementation gaps
- Identify clarification needs
- Prioritize features (P0/P1/P2/P3)
- Create implementation roadmap

### ✨ Gear 5: Complete Specification
- Resolve `[NEEDS CLARIFICATION]` markers
- Interactive Q&A session
- Use `/speckit.clarify` for structured clarification
- Finalize all specifications
- Ensure no ambiguities remain

### 🚀 Gear 6: Implement
- **Greenfield:** Build NEW app in chosen tech stack
- **Brownfield:** Fill gaps in existing implementation
- Use `/speckit.tasks` for task breakdown
- Use `/speckit.implement` for execution
- Validate with `/speckit.analyze`

---

## Routes Available

| Route | Auto-Detect | Purpose |
|-------|-------------|---------|
| **greenfield** | Generic app | Extract business logic, rebuild in new stack |
| **brownfield** | Generic app | Spec existing codebase, manage with Spec Kit |
| **monorepo-service** | services/* | Extract service + shared packages |
| **nx-app** | has nx.json | Extract Nx app + project config |

---

## Workflow Options

**Manual Mode:**
- Review each gear before proceeding
- You control the pace
- Good for first-time users

**Cruise Control:**
- Shift through all gears automatically
- Hands-free execution
- Good for experienced users or overnight runs
- Configure: clarifications strategy, implementation scope

---

## Additional Commands

After completing Gears 1-6:

- **`/stackshift.modernize`** - Brownfield Upgrade Mode (dependency modernization)
- **`/speckit.*`** - GitHub Spec Kit commands (auto-installed in Gear 3)

---

## Prerequisites

- Git repository with code to analyze
- Claude Code with plugin support
- ~2-4 hours for complete process (or use Cruise Control)

---

## Examples

**Monorepo migration:**
```
I want to reverse engineer ws-vehicle-details for migration to Next.js.
```

**Legacy app spec creation:**
```
Analyze this Java Spring app and create specifications for ongoing management.
```

**Nx application extraction:**
```
Analyze this V9 Velocity widget and extract the business logic.
```

---

## Starting Now

**I'm now going to analyze this codebase and begin the StackShift process!**

Here's what I'll do:

1. ✅ Auto-detect application type (monorepo-service, nx-app, generic, etc.)
2. ✅ Detect tech stack and architecture
3. ✅ Assess completeness
4. ✅ Determine or ask for route
5. ✅ Set up workflow configuration
6. ✅ Begin Gear 1: Analyze

Let me start by analyzing this codebase... 🚗💨

---

**Now beginning StackShift Gear 1: Analyze...**
96
commands/version.md
Normal file
@@ -0,0 +1,96 @@

---
description: Show installed StackShift version and check for updates
---

# StackShift Version Check

**Current Installation:**

```bash
# Show installed version (jq fails on a missing file, triggering the fallback)
jq -r '.version' ~/.claude/plugins/cache/stackshift/.claude-plugin/plugin.json 2>/dev/null || echo "StackShift not installed"
```

**Latest Release:**

Check the latest version at: https://github.com/jschulte/stackshift/releases/latest

---

## Installation Info

**Repository:** github.com/jschulte/stackshift

**Installed From:**

```bash
# Check installation directory
ls -la ~/.claude/plugins/cache/stackshift
```
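
To decide whether an update is needed, the installed version can be compared against the latest release tag with a small helper (a sketch, not part of StackShift itself; `sort -V` from GNU coreutils does the version ordering):

```shell
#!/usr/bin/env bash
# Succeeds when the first version string is strictly newer than the second.
ver_gt() {
  [ "$1" != "$2" ] && [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | tail -1)" = "$1" ]
}

if ver_gt "1.3.0" "1.2.0"; then echo "update available"; else echo "up to date"; fi
# → update available
```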

---

## Update Instructions

### If Installed from Marketplace

**Important:** Update the marketplace FIRST, then update the plugin!

```bash
# Step 1: Update marketplace
/plugin marketplace update jschulte

# Step 2: Update StackShift
/plugin update stackshift

# Step 3: Restart Claude Code
```

### If Installed Locally

```bash
cd ~/git/stackshift
git pull origin main
./install-local.sh
```

### Force Reinstall

```bash
# Remove old installation
/plugin uninstall stackshift

# Reinstall from marketplace
/plugin marketplace add jschulte/claude-plugins
/plugin install stackshift

# Or install locally
git clone https://github.com/jschulte/stackshift.git
cd stackshift
./install-local.sh
```

---

## Version History

- **v1.3.0** (2025-11-18) - Batch session persistence, directory-scoped sessions
- **v1.2.0** (2025-11-17) - Brownfield upgrade mode, multi-framework support
- **v1.1.1** (2025-11-16) - Monorepo detection improvements
- **v1.1.0** (2025-11-15) - Added monorepo detection
- **v1.0.0** (2025-11-14) - Initial release

---

## What's New in Latest Version

### v1.3.0 Features

🎯 **Cross-batch answer persistence** - Answer configuration questions once, automatically apply to all repos across all batches

📁 **Directory-scoped sessions** - Multiple simultaneous batch runs in different directories without conflicts

🔍 **Auto-discovery** - Agents automatically find parent batch configuration

⚡ **Time savings** - Save 58 minutes on 90-repo batches!

**Full changelog:** https://github.com/jschulte/stackshift/releases/tag/v1.3.0