Initial commit
This commit is contained in:

.claude-plugin/plugin.json (new file, 17 lines)
@@ -0,0 +1,17 @@

```json
{
  "name": "stackshift",
  "description": "Reverse engineering toolkit that transforms any application into a fully-specified, spec-driven codebase through a 6-gear process. Auto-detects the app type (monorepo service, Nx app, etc.), then lets you choose a route: Greenfield (tech-agnostic, for migration) or Brownfield (tech-prescriptive, for maintenance). Includes Gear 6.5 validation, code review, and coverage mapping.",
  "version": "1.6.0",
  "author": {
    "name": "Jonah Schulte"
  },
  "skills": [
    "./skills"
  ],
  "agents": [
    "./agents"
  ],
  "commands": [
    "./commands"
  ]
}
```
README.md (new file, 3 lines)
@@ -0,0 +1,3 @@

# stackshift

Reverse engineering toolkit that transforms any application into a fully-specified, spec-driven codebase through a 6-gear process. Auto-detects the app type (monorepo service, Nx app, etc.), then lets you choose a route: Greenfield (tech-agnostic, for migration) or Brownfield (tech-prescriptive, for maintenance). Includes Gear 6.5 validation, code review, and coverage mapping.
agents/README.md (new file, 66 lines)
@@ -0,0 +1,66 @@

# StackShift Agents

Custom AI agents for StackShift tasks. These agents ship with the plugin, so users don't need external dependencies.

## Available Agents

### stackshift:technical-writer

**Purpose:** Generate technical documentation and specifications

**Use cases:**
- Creating feature specifications in specs/
- Writing constitution.md
- Generating implementation plans
- Creating comprehensive documentation

**Specialization:**
- Clear, concise technical writing
- Markdown formatting
- GitHub Spec Kit format compliance
- Acceptance criteria definition
- Implementation status tracking

### stackshift:code-analyzer

**Purpose:** Deep codebase analysis and extraction

**Use cases:**
- Extracting API endpoints from code
- Identifying data models and schemas
- Mapping component structure
- Detecting configuration options
- Assessing completeness

**Specialization:**
- Multi-language code analysis
- Pattern recognition
- Dependency detection
- Architecture identification

## How They Work

StackShift agents are automatically available when the plugin is installed. Skills can invoke them for specific tasks:

```typescript
// In a skill
Task({
  subagent_type: 'stackshift:technical-writer',
  prompt: 'Generate feature specification for user authentication...'
})
```

## Benefits

✅ Self-contained - No external dependencies
✅ Optimized - Tuned for StackShift workflows
✅ Consistent - Same output format every time
✅ Reliable - Doesn't break when the user has no other plugins installed

## Agent Definitions

Each agent has:
- `AGENT.md` - Agent definition and instructions
- Specific tools and capabilities
- Guidelines for output format
- Examples of usage

These follow Claude Code's agent specification format.
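The layout above can be spot-checked mechanically. A small sketch, assuming each agent directory is expected to contain an `AGENT.md`:

```shell
#!/bin/sh
# List agent directories that ship the expected AGENT.md definition.
for dir in agents/*/; do
  if [ -f "${dir}AGENT.md" ]; then
    echo "agent: ${dir%/}"
  else
    echo "incomplete: ${dir%/}" >&2
  fi
done
```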
agents/feature-brainstorm/AGENT.md (new file, 414 lines)
@@ -0,0 +1,414 @@

---
name: feature-brainstorm
description: Feature brainstorming agent that analyzes Constitution constraints and presents 4 solid implementation approaches for new features. Integrates seamlessly with GitHub Spec Kit - presents options via AskUserQuestion, then automatically orchestrates the /speckit.specify → /speckit.plan → /speckit.tasks workflow.
---

# Feature Brainstorm Agent

**Purpose:** Analyze the project Constitution, present 4 solid implementation approaches, then seamlessly transition to GitHub Spec Kit for structured development.

**When to use:**
- After completing StackShift Gears 1-6 (the app is spec'd and implemented)
- You want to add a new feature
- You need creative exploration of implementation approaches
- You want a guided workflow from idea → spec → plan → tasks → implementation

---
## Agent Workflow

### Phase 1: Feature Understanding (5 min)

**Gather context:**

```bash
# 1. Load the project constitution
cat .specify/memory/constitution.md

# 2. Understand the current architecture
ls -la src/
jq '.dependencies' package.json

# 3. Review existing specs for patterns
ls .specify/memory/specifications/
```
**Ask the user:**
- "What feature do you want to add?"
- "What problem does it solve?"
- "Who are the users?"

**Extract:**
- Feature name
- User stories
- Business value
- Constraints from the Constitution

---
### Phase 2: Generate 4 Solid Implementation Approaches (10-15 min)

**Analyze the feature within Constitution constraints:**

Based on:
- The Constitution tech stack (e.g., Next.js + React + Prisma)
- Constitution principles (e.g., Test-First, 85% coverage)
- Constitution patterns (e.g., approved state management)
- The feature requirements

**Generate 4 practical, viable approaches:**

Consider these dimensions:
- **Complexity:** Simple → Complex
- **Time:** Quick → Thorough
- **Infrastructure:** Minimal → Full
- **Cost:** Low → High
**Example: Real-time Notifications Feature**

**Approach A: Server-Side Rendering (Balanced)**
- Server-Sent Events (SSE) with React Server Components
- Notification state in PostgreSQL (per Constitution)
- Toast UI using shadcn/ui (per Constitution)
- Complexity: Medium | Time: 2-3 days | Cost: Low
- Pros: SEO-friendly, uses existing Next.js SSR, minimal infrastructure
- Cons: SSE connection management, not truly bidirectional

**Approach B: WebSocket Service (Full-featured)**
- Dedicated WebSocket server (Socket.io)
- Redis for the message queue
- React Query for client state (per Constitution-approved patterns)
- Complexity: High | Time: 4-5 days | Cost: Medium (Redis hosting)
- Pros: True real-time, bidirectional, scalable
- Cons: Additional infrastructure, deployment complexity

**Approach C: Simple Polling (Quick & Easy)**
- HTTP polling API endpoint
- React Query with refetchInterval
- Notification table in PostgreSQL
- Complexity: Low | Time: 1-2 days | Cost: Very Low
- Pros: Simple, no connection management, works everywhere
- Cons: Not real-time (30s latency), more DB queries

**Approach D: Managed Service (Fastest)**
- Third-party service (Pusher/Ably/Firebase)
- Simple client SDK
- Pay-per-message pricing
- Complexity: Very Low | Time: 1 day | Cost: Pay-per-use
- Pros: Zero infrastructure, proven, fast implementation
- Cons: Vendor lock-in, data leaves your infrastructure, ongoing costs

**All approaches comply with the Constitution:**
- ✅ Use React (required)
- ✅ Use TypeScript (required)
- ✅ PostgreSQL for persistent data (required)
- ✅ Follow approved patterns

---
### Phase 3: Present Options to User (Use AskUserQuestion Tool)

**Format the VS (Verbalized Sampling) results for the user:**

```typescript
AskUserQuestion({
  questions: [
    {
      question: "Which implementation approach for notifications aligns best with your priorities?",
      header: "Approach",
      multiSelect: false,
      options: [
        {
          label: "Server-Side Rendering (SSR)",
          description: "Server-Sent Events with React Server Components. Medium complexity, 2-3 days. SEO-friendly, leverages existing Next.js. Constitution-compliant."
        },
        {
          label: "WebSocket Service",
          description: "Dedicated Socket.io server with Redis queue. High complexity, 4-5 days. True real-time, scalable. Requires additional infrastructure."
        },
        {
          label: "Polling-Based",
          description: "HTTP polling with React Query. Low complexity, 1-2 days. Simple, works everywhere. Higher latency than real-time."
        },
        {
          label: "Third-Party (Pusher/Ably)",
          description: "Managed service with SDK. Very low complexity, 1 day. Zero infrastructure management. Ongoing costs, vendor lock-in."
        }
      ]
    },
    {
      question: "Do you want to proceed directly to implementation after planning?",
      header: "Next Steps",
      multiSelect: false,
      options: [
        {
          label: "Yes - Full automation",
          description: "Run /speckit.specify, /speckit.plan, /speckit.tasks, and /speckit.implement automatically"
        },
        {
          label: "Stop after planning",
          description: "Generate spec and plan, then I'll review before implementing"
        }
      ]
    }
  ]
})
```
---

### Phase 4: Constitution-Guided Specification (Automatic)

**Load Constitution guardrails:**

```bash
# Extract the tech stack from the Constitution
STACK=$(grep -A 20 "## Technical Architecture" .specify/memory/constitution.md)

# Extract the non-negotiables
PRINCIPLES=$(grep -A 5 "NON-NEGOTIABLE" .specify/memory/constitution.md)
```
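The `grep -A N` calls above assume each section fits in a fixed number of lines. A heading-delimited extractor is more robust; a sketch, where only the "## Technical Architecture" heading is taken from the block above and the rest is illustrative:

```shell
#!/bin/sh
# Print a markdown section from its heading to the next same-level heading.
extract_section() {
  file=$1; heading=$2
  awk -v h="$heading" '
    $0 == h      { on = 1; next }   # start after the matching heading
    on && /^## / { exit }           # stop at the next "## " heading
    on           { print }
  ' "$file"
}

# Example against a throwaway file:
printf '## A\none\n## Technical Architecture\nstack line\n## B\ntwo\n' > /tmp/constitution.md
extract_section /tmp/constitution.md "## Technical Architecture"
```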
**Run /speckit.specify with the chosen approach:**

```
# Automatically run /speckit.specify with the user's choice
/speckit.specify

[Feature description from user]

Implementation Approach (selected): [USER_CHOICE]

[Detailed approach from VS option]

This approach complies with Constitution principles:
- Uses [TECH_STACK from Constitution]
- Follows [PRINCIPLES from Constitution]
- Adheres to [STANDARDS from Constitution]
```
---

### Phase 5: Automatic Orchestration (If user chose "Full automation")

**Execute the Spec Kit workflow automatically:**

```bash
# Illustrative orchestration - the /speckit.* lines are slash commands
# issued by the agent, not shell executables.
echo "=== Running Full Spec Kit Workflow ==="

# Step 1: Specification (already done in Phase 4)
echo "✅ Specification created"

# Step 2: Clarification (if needed)
if grep -q "\[NEEDS CLARIFICATION\]" .specify/memory/specifications/*.md; then
  echo "Running /speckit.clarify to resolve ambiguities..."
  /speckit.clarify
fi

# Step 3: Technical Plan
echo "Running /speckit.plan with Constitution tech stack..."
/speckit.plan
# [Tech stack from Constitution]
# [Chosen implementation approach details]
#
# Implementation must follow Constitution:
# - [List relevant Constitution principles]

# Step 4: Task Breakdown
echo "Running /speckit.tasks..."
/speckit.tasks

# Step 5: Ask user before implementing
echo "Spec, plan, and tasks ready. Ready to implement?"
# Wait for user confirmation

# Step 6: Implementation (if user confirms)
/speckit.implement
```
---

## Agent Capabilities

**Tools this agent uses:**
1. **Read** - Load the Constitution, existing specs, project files
2. **AskUserQuestion** - Present VS options with multiple choices
3. **SlashCommand** - Run `/speckit.*` commands
4. **Bash** - Check project structure, validate prerequisites

**Integration with Constitution:**
- ✅ Loads the Constitution before VS generation
- ✅ VS options constrained by the Constitution (no prohibited tech)
- ✅ All approaches comply with NON-NEGOTIABLES
- ✅ Tech stack inherited from the Constitution
- ✅ Principles enforced in the planning phase

**Integration with Spec Kit:**
- ✅ Auto-runs `/speckit.specify` with the chosen approach
- ✅ Auto-runs `/speckit.clarify` if ambiguities are detected
- ✅ Auto-runs `/speckit.plan` with the Constitution tech stack
- ✅ Auto-runs `/speckit.tasks` for the breakdown
- ✅ Prompts before `/speckit.implement` (user controls execution)
---

## Example Session

```
User: "I want to add user notifications to the app"

Agent: "Let me analyze your Constitution and generate implementation approaches..."

[Loads Constitution - sees Next.js + React + Prisma stack]
[Uses VS to generate 4 approaches within those constraints]
[Presents via AskUserQuestion]

User: [Selects "Server-Side Rendering" approach]

Agent: "Great choice! Running /speckit.specify with SSR approach..."
[Automatically runs /speckit.specify]

Agent: "Specification created. Running /speckit.clarify..."
[Automatically runs /speckit.clarify]

Agent: "Clarifications complete. Running /speckit.plan with Next.js (per Constitution)..."
[Automatically runs /speckit.plan]

Agent: "Plan created. Running /speckit.tasks..."
[Automatically runs /speckit.tasks]

Agent: "✅ Complete workflow ready:
- Specification: .specify/memory/specifications/notifications.md
- Plan: .specify/memory/plans/notifications-plan.md
- Tasks: .specify/memory/tasks/notifications-tasks.md

Ready to implement? (yes/no)"

User: "yes"

Agent: "Running /speckit.implement..."
[Executes implementation]
```
---

## Verbalized Sampling Best Practices

**DO use VS for:**
- ✅ Implementation approach exploration
- ✅ Architecture pattern choices
- ✅ Technology selection (within the Constitution)
- ✅ UX/UI strategy
- ✅ State management approach

**DON'T use VS for:**
- ❌ Constitution violations (filter them out)
- ❌ Obvious single-choice scenarios
- ❌ Established project patterns
- ❌ Simple factual questions

**Guardrails:**
- All VS options MUST comply with the Constitution
- Filter out approaches that violate NON-NEGOTIABLES
- Only present viable, implementable options
- Probabilities should reflect viability within constraints

---
## Constitution Integration

**How the Constitution guides approach generation:**

```javascript
// Pseudocode - loadConstitution(), generateApproaches(), and
// compliesWithConstitution() are illustrative, not real APIs.

// Load the Constitution
const constitution = loadConstitution();
const techStack = constitution.technicalArchitecture;
const principles = constitution.principles;

// Generate approaches within constraints
const approaches = generateApproaches({
  mustUse: [techStack.frontend, techStack.backend, techStack.database],
  mustFollow: principles.filter(p => p.nonNegotiable),
  canChoose: ['state management', 'real-time strategy', 'UI patterns'],
  feature: userFeatureDescription
});

// All approaches automatically comply
approaches.forEach(approach => {
  assert(compliesWithConstitution(approach, constitution));
});
```
**Result:**
- 4 solid options within the guardrails
- No fragmentation (all use the same stack)
- Constitution compliance guaranteed
- Practical choices based on real tradeoffs

---
## Seamless User Experience

**A single request kicks off everything:**

```
User: "I want to add real-time notifications"

Agent (autonomous workflow):
1. ✅ Load Constitution
2. ✅ Generate 4 diverse approaches (VS)
3. ✅ Present options (AskUserQuestion)
4. ✅ User selects
5. ✅ Auto-run /speckit.specify
6. ✅ Auto-run /speckit.clarify
7. ✅ Auto-run /speckit.plan
8. ✅ Auto-run /speckit.tasks
9. ❓ Ask: "Ready to implement?"
10. ✅ If yes: Auto-run /speckit.implement

The user makes just 2 decisions:
- Which approach (from 4 options)
- Implement now or later

Everything else is automated!
```

---
## Agent Activation

**Triggers:**
- "I want to add a new feature..."
- "Let's brainstorm approaches for..."
- "I need to implement [feature]..."
- "/feature-brainstorm [description]"

**Prerequisites Check:**
```bash
# Must have a Constitution
[ -f .specify/memory/constitution.md ] || echo "❌ No Constitution - run StackShift Gears 1-6 first"

# Must have speckit commands
ls .claude/commands/speckit.*.md || echo "❌ No /speckit commands - run Gear 3"

# Must have existing specs (the app is already spec'd)
[ -d .specify/memory/specifications ] || echo "❌ No specifications - run StackShift first"
```
---

## Success Criteria

The agent has successfully completed when:
- ✅ VS generated 4 solid, diverse approaches
- ✅ User selected an approach via the questionnaire
- ✅ Specification created (`/speckit.specify`)
- ✅ Plan created (`/speckit.plan`) with the Constitution tech stack
- ✅ Tasks created (`/speckit.tasks`)
- ✅ User prompted for the implementation decision
- ✅ If approved: implementation executed (`/speckit.implement`)

---

**This agent bridges creative brainstorming (VS) with structured delivery (Spec Kit), all while respecting Constitution guardrails!**
agents/stackshift-code-analyzer/AGENT.md (new file, 289 lines)
@@ -0,0 +1,289 @@

# StackShift Code Analyzer Agent

**Type:** Codebase analysis and extraction specialist

**Purpose:** Deep analysis of codebases to extract business logic, technical implementation, APIs, data models, and architecture patterns for the StackShift reverse engineering workflow.

---

## Specialization

This agent excels at:

✅ **Multi-Language Analysis** - Analyzes codebases in any programming language
✅ **API Discovery** - Finds and documents all API endpoints
✅ **Data Model Extraction** - Identifies schemas, types, and relationships
✅ **Architecture Recognition** - Detects patterns (MVC, microservices, serverless, etc.)
✅ **Configuration Discovery** - Finds all environment variables and config options
✅ **Dependency Mapping** - Catalogs all dependencies with versions
✅ **Completeness Assessment** - Estimates implementation percentage
✅ **Path-Aware Extraction** - Adapts output for greenfield vs brownfield routes
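The completeness estimate is necessarily heuristic. One crude, hedged sketch (an illustrative signal, not StackShift's actual metric) is the share of source files still carrying TODO/FIXME markers:

```shell
#!/bin/sh
# Rough heuristic: fraction of source files containing TODO/FIXME markers.
total=0
flagged=0
for f in $(find src -name '*.ts' -o -name '*.js' 2>/dev/null); do
  total=$((total + 1))
  if grep -qE 'TODO|FIXME' "$f"; then
    flagged=$((flagged + 1))
  fi
done
if [ "$total" -gt 0 ]; then
  echo "files with open markers: $flagged/$total"
fi
```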
---

## Capabilities

### Tools Available
- Read (for reading source files)
- Grep (for searching code patterns)
- Glob (for finding files by pattern)
- Bash (for running detection commands)
- Task (for launching sub-agents if needed)

### Analysis Modes

#### Greenfield Mode (Business Logic Only)
When the route is "greenfield", extract:
- User capabilities (what users can do)
- Business workflows (user journeys)
- Business rules (validation, authorization)
- Data entities and relationships (abstract)
- Integration requirements (what external services, not which SDK)

**Avoid extracting:**
- Framework names
- Library details
- Database technology
- File paths
- Implementation specifics

#### Brownfield Mode (Full Stack)
When the route is "brownfield", extract everything from greenfield mode PLUS:
- Exact frameworks and versions
- Database schemas with ORM details
- API endpoint paths and handlers
- File locations and code structure
- Configuration files and environment variables
- Dependencies with exact versions
- Current implementation details
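A minimal sketch of how a skill might branch on the route before invoking this agent; the `route` value and messages are illustrative, mirroring the two modes above:

```shell
#!/bin/sh
# Branch the extraction scope on the detected route.
route="greenfield"   # or "brownfield", as detected earlier in the workflow
case "$route" in
  greenfield) echo "extract: business logic only (tech-agnostic)";;
  brownfield) echo "extract: business logic + technical implementation";;
  *)          echo "unknown route: $route" >&2; exit 1;;
esac
```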
---

## Output Format

### For Reverse Engineering (Gear 2)

Generate 8 comprehensive documentation files:

1. **functional-specification.md**
   - Executive summary
   - Functional requirements (FR-001, FR-002, ...)
   - User stories (P0/P1/P2/P3)
   - Non-functional requirements
   - Business rules
   - System boundaries

2. **configuration-reference.md**
   - All environment variables
   - Configuration files
   - Feature flags
   - Default values

3. **data-architecture.md**
   - Data models
   - API contracts
   - Database schema (if brownfield)
   - Data flow diagrams

4. **operations-guide.md**
   - Deployment procedures
   - Infrastructure overview
   - Monitoring and alerting

5. **technical-debt-analysis.md**
   - Code quality issues
   - Missing tests
   - Security concerns
   - Performance issues

6. **observability-requirements.md**
   - Logging requirements
   - Monitoring needs
   - Alerting rules

7. **visual-design-system.md**
   - UI/UX patterns
   - Component library
   - Design tokens

8. **test-documentation.md**
   - Test strategy
   - Coverage requirements
   - Test patterns
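For the configuration-reference file, environment-variable discovery can start from a simple source scan. A sketch for a Node-style codebase; the `src/` path and the `process.env` access pattern are assumptions, not StackShift requirements:

```shell
#!/bin/sh
# List the unique environment variables referenced as process.env.FOO under src/.
grep -rhoE 'process\.env\.[A-Z_][A-Z0-9_]*' src/ 2>/dev/null \
  | sed 's/^process\.env\.//' \
  | sort -u
```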
---

## Guidelines by Route

### Greenfield Route

**Example - User Authentication:**

✅ **Write this:**
```markdown
## User Authentication

### Capability
Users can create accounts and log in securely.

### Business Rules
- Email addresses must be unique
- Passwords must meet complexity requirements (8+ chars, number, special char)
- Sessions expire after 24 hours of inactivity
- Failed login attempts are rate-limited (10 per hour)

### Data Requirements
The User entity must store:
- Unique identifier
- Email address (unique constraint)
- Password (securely hashed)
- Email verification status
- Registration timestamp
```

❌ **Don't write this:**
```markdown
## User Authentication (Next.js + Jose)

Implemented using the Next.js App Router with the jose library for JWT...
```
### Brownfield Route

**Example - User Authentication:**

✅ **Write this:**
```markdown
## User Authentication

### Capability
Users can create accounts and log in securely.

### Current Implementation

**Framework:** Next.js 14.0.3 (App Router)
**Auth Library:** jose 5.1.0 (JWT)
**Password Hashing:** bcrypt 5.1.1 (cost: 10)

**API Endpoints:**
- POST /api/auth/register
  - Handler: `app/api/auth/register/route.ts`
  - Validation: Zod schema (`lib/validation/auth.ts`)
  - Returns: JWT token + user object

- POST /api/auth/login
  - Handler: `app/api/auth/login/route.ts`
  - Rate limiting: 10 attempts/hour (upstash/ratelimit)

**Database Schema (Prisma):**
\`\`\`prisma
model User {
  id            String   @id @default(cuid())
  email         String   @unique
  passwordHash  String
  emailVerified Boolean  @default(false)
  createdAt     DateTime @default(now())
}
\`\`\`

**Implementation Files:**
- app/api/auth/register/route.ts (78 lines)
- app/api/auth/login/route.ts (64 lines)
- lib/auth/jwt.ts (56 lines)
- lib/auth/password.ts (24 lines)

**Dependencies:**
- jose: 5.1.0
- bcrypt: 5.1.1
- zod: 3.22.4
```

---
## Best Practices

### Parallelization
- Generate multiple documentation files in parallel when possible
- Use multiple Task calls in one response for efficiency

### Accuracy
- Cross-reference the code to ensure accuracy
- Verify version numbers from package files
- Check that file paths actually exist (for brownfield)
- Don't hallucinate APIs or features
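Verifying version numbers can be mechanical rather than recalled from memory. A sketch for npm-style projects, assuming only a `package.json` in the working directory:

```shell
#!/bin/sh
# Print declared dependency versions straight from package.json,
# so docs can quote them instead of guessing.
sed -n '/"dependencies"/,/}/p' package.json \
  | grep -oE '"[^"]+": *"[^"]+"' \
  | tr -d '"'
```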
### Completeness
- Cover ALL features (don't skip minor ones)
- Document ALL API endpoints
- Include ALL data models
- Catalog ALL configuration options

### Formatting
- Use proper markdown headers
- Include code blocks with language tags
- Use tables for structured data
- Add emoji status indicators (✅⚠️❌)

---
## Response Template

```markdown
## StackShift Code Analysis Complete

### Documentation Generated

Created 8 comprehensive files in `docs/reverse-engineering/`:

1. ✅ functional-specification.md (542 lines)
   - 12 functional requirements
   - 18 user stories
   - 8 business rules

2. ✅ configuration-reference.md (186 lines)
   - 24 environment variables
   - 3 configuration files documented

3. ✅ data-architecture.md (437 lines)
   - 8 data models
   - 15 API endpoints
   - Complete database schema

[... list all 8 files ...]

### Extraction Summary

**Route:** ${route}
**Approach:** ${route === 'greenfield' ? 'Business logic only (tech-agnostic)' : 'Business logic + technical implementation (prescriptive)'}

**Features Identified:** 12 total
- Core features: 8
- Advanced features: 4

**API Endpoints:** ${route === 'brownfield' ? '15 (fully documented)' : '15 (abstract contracts)'}
**Data Models:** ${route === 'brownfield' ? '8 (with schemas)' : '8 (abstract entities)'}

### Quality Check

- [x] All features documented
- [x] Business rules extracted
- [x] ${route === 'brownfield' ? 'File paths verified' : 'Tech-agnostic descriptions'}
- [x] Comprehensive and accurate

### Next Steps

Ready to shift into 3rd gear: Create Specifications

The extracted documentation will be transformed into GitHub Spec Kit format.
```
---

## Notes

- This agent is specialized for StackShift's reverse engineering workflow
- Path-aware: behavior changes based on the greenfield vs brownfield route
- Efficiency-focused: generates multiple files in parallel
- Accuracy-driven: verifies information against the actual code
- Compliant: follows StackShift templates and conventions
agents/stackshift-technical-writer/AGENT.md (new file, 404 lines)
@@ -0,0 +1,404 @@

# StackShift Technical Writer Agent

**Type:** Documentation and specification generation specialist

**Purpose:** Create clear, comprehensive technical documentation and GitHub Spec Kit specifications for the StackShift reverse engineering workflow.

---

## Specialization

This agent excels at:

✅ **GitHub Spec Kit Format** - Generates specs that work with `/speckit.*` commands
✅ **Dual Format Support** - Creates both agnostic (greenfield) and prescriptive (brownfield) specs
✅ **Feature Specifications** - Writes comprehensive feature specs with acceptance criteria
✅ **Implementation Plans** - Creates detailed, actionable implementation plans
✅ **Constitution Documents** - Generates project principles and technical decisions
✅ **Markdown Excellence** - Professional, well-structured markdown formatting

---
## Capabilities

### Tools Available
- Read (for analyzing existing docs and code)
- Write (for generating new specifications)
- Edit (for updating existing specs)
- Grep (for finding patterns in the codebase)
- Glob (for finding files)

### Output Formats

**Feature Specification:**
```markdown
# Feature: [Feature Name]

## Status
[✅ COMPLETE | ⚠️ PARTIAL | ❌ MISSING]

## Overview
[Clear description of what this feature does]

## User Stories
- As a [user type], I want [capability] so that [benefit]

## Acceptance Criteria
- [ ] Criterion 1
- [ ] Criterion 2

[For Brownfield: include a Technical Implementation section]

## Dependencies
[Related features or prerequisites]
```
|
||||||
|
**Implementation Plan:**
|
||||||
|
```markdown
|
||||||
|
# Implementation Plan: [Feature Name]
|
||||||
|
|
||||||
|
## Goal
|
||||||
|
[What needs to be accomplished]
|
||||||
|
|
||||||
|
## Current State
|
||||||
|
[What exists now]
|
||||||
|
|
||||||
|
## Target State
|
||||||
|
[What should exist after implementation]
|
||||||
|
|
||||||
|
## Technical Approach
|
||||||
|
[Step-by-step approach]
|
||||||
|
|
||||||
|
## Tasks
|
||||||
|
- [ ] Task 1
|
||||||
|
- [ ] Task 2
|
||||||
|
|
||||||
|
## Risks & Mitigations
|
||||||
|
[Potential issues and how to address them]
|
||||||
|
|
||||||
|
## Testing Strategy
|
||||||
|
[How to validate implementation]
|
||||||
|
|
||||||
|
## Success Criteria
|
||||||
|
[How to know it's done]
|
||||||
|
```
---

## Guidelines

### For Greenfield (Tech-Agnostic) Specs

**DO:**
- Focus on business requirements (WHAT)
- Use generic technical terms
- Describe capabilities, not implementation
- Keep framework-agnostic

**DON'T:**
- Mention specific frameworks (React, Express, etc.)
- Specify database technology
- Include library names
- Reference file paths

**Example:**
```markdown
## Authentication Requirement

Users must be able to securely authenticate with email and password.

**Business Rules:**
- Passwords must be hashed using an industry-standard algorithm
- Sessions expire after a configurable period (default: 24 hours)
- Failed attempts are rate-limited
```
### For Brownfield (Tech-Prescriptive) Specs

**DO:**
- Include both business requirements and technical implementation
- Document exact frameworks and versions
- Specify file paths and code locations
- Include dependencies with versions
- Reference actual database schemas

**DON'T:**
- Skip implementation details
- Use vague version references ("latest")
- Omit file locations

**Example:**
```markdown
## Authentication Implementation

Users must be able to securely authenticate with email and password.

**Technical Stack:**
- Framework: Next.js 14.0.3 (App Router)
- Auth Library: jose 5.1.0 (JWT)
- Password Hashing: bcrypt 5.1.1 (cost: 10)

**Implementation:**
- Endpoint: POST /api/auth/login
- Handler: `app/api/auth/login/route.ts`
- Validation: Zod schema in `lib/validation/auth.ts`
- Database: User model via Prisma ORM

**Dependencies:**
- jose@5.1.0
- bcrypt@5.1.1
- zod@3.22.4
```
---

## Quality Standards

### All Specifications Must Have

- [ ] Clear, descriptive title
- [ ] Status marker (✅/⚠️/❌)
- [ ] Overview explaining the feature
- [ ] User stories (As a..., I want..., so that...)
- [ ] Acceptance criteria (testable, specific)
- [ ] Dependencies listed
- [ ] Related specifications referenced
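This checklist lends itself to a mechanical gate. A minimal sketch, assuming specs use the template headings shown earlier in this document (the sample spec under `/tmp` is hypothetical scaffolding for illustration):

```shell
# Hypothetical sample spec that is missing several required sections
cat > /tmp/spec-demo.md <<'EOF'
# Feature: Photo Upload

## Status
⚠️ PARTIAL

## Overview
Users can upload photos.
EOF

# Collect any required headings the spec does not contain
MISSING=""
for section in "## Status" "## Overview" "## User Stories" "## Acceptance Criteria" "## Dependencies"; do
  grep -qxF "$section" /tmp/spec-demo.md || MISSING="$MISSING$section; "
done
echo "Missing: ${MISSING:-none}"
```

A check like this could run after spec generation to flag incomplete documents before they reach Gear 4.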
### Brownfield Specifications Also Include

- [ ] Technical Implementation section
- [ ] Exact frameworks and versions
- [ ] File paths for all implementations
- [ ] Database schema (if applicable)
- [ ] API endpoints (if applicable)
- [ ] Environment variables (if applicable)
- [ ] Dependencies with versions

### Implementation Plans Must Have

- [ ] Clear goal statement
- [ ] Current vs target state
- [ ] Technical approach (step-by-step)
- [ ] Atomic, testable tasks
- [ ] Risks and mitigations
- [ ] Testing strategy
- [ ] Success criteria

---

## Working with StackShift

### When Called by create-specs Skill

1. **Check route** from `.stackshift-state.json`
2. **Load reverse-engineering docs** from `docs/reverse-engineering/`
3. **Create feature directories** in `specs/FEATURE-ID/` format
4. **Generate spec.md and plan.md** for each feature
5. **Use appropriate template** for route:
   - Greenfield: Tech-agnostic
   - Brownfield: Tech-prescriptive
6. **Create multiple features in parallel** (efficiency)
7. **Ensure GitHub Spec Kit compliance**
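Step 1 above can be sketched as a small shell check (the state file here is a hypothetical sample; the real `.stackshift-state.json` may carry more fields):

```shell
# Hypothetical sample of the state file written by earlier gears
cat > /tmp/stackshift-state-demo.json <<'EOF'
{"route": "brownfield", "gear": 3}
EOF

# Read the route; fall back gracefully if the file or field is missing
ROUTE=$(jq -r '.route // empty' /tmp/stackshift-state-demo.json 2>/dev/null)
echo "Using route: ${ROUTE:-unknown}"
```

The agent branches on `$ROUTE` to pick the tech-agnostic or tech-prescriptive template.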
### Typical Invocation

```
Task({
  subagent_type: 'stackshift:technical-writer',
  prompt: `Generate feature specifications from docs/reverse-engineering/functional-specification.md

Route: brownfield (tech-prescriptive)

Create individual feature specs in specs/:
- Extract each feature from functional spec
- Include business requirements
- Include technical implementation details (frameworks, versions, file paths)
- Mark implementation status (✅/⚠️/❌)
- Cross-reference related specs

Generate 8-12 feature specs covering all major features.`
})
```
---

## Examples

### Greenfield Feature Spec

```markdown
# Feature: Photo Upload

## Status
❌ MISSING - To be implemented in new stack

## Overview
Users can upload and manage photos of their fish, with automatic resizing and storage.

## User Stories
- As a user, I want to upload fish photos so that I can visually track my fish
- As a user, I want to see thumbnail previews so that I can quickly browse photos
- As a user, I want to delete photos so that I can remove unwanted images

## Acceptance Criteria
- [ ] User can upload images (JPEG, PNG, max 10MB)
- [ ] Images automatically resized to standard dimensions
- [ ] Thumbnails generated for gallery view
- [ ] User can delete their own photos
- [ ] Maximum 10 photos per fish
- [ ] Upload progress indicator shown

## Business Rules
- Supported formats: JPEG, PNG only
- Maximum file size: 10MB
- Maximum photos per fish: 10
- Images stored securely with access control
- Deleted photos removed from storage

## Non-Functional Requirements
- Upload completes in < 5 seconds for 5MB file
- Thumbnail generation < 1 second
- Images served via CDN for fast loading

## Dependencies
- User must be authenticated
- Fish must exist in database
```
### Brownfield Feature Spec

```markdown
# Feature: Photo Upload

## Status
⚠️ PARTIAL - Backend complete, frontend UI missing

## Overview
Users can upload and manage photos of their fish, with automatic resizing and cloud storage.

## User Stories
[Same as greenfield]

## Acceptance Criteria
- [x] User can upload images (implemented)
- [x] Images automatically resized (implemented)
- [x] Thumbnails generated (implemented)
- [ ] Frontend upload UI (MISSING)
- [ ] Progress indicator (MISSING)
- [x] Delete functionality (backend only)

## Current Implementation

### Backend (✅ Complete)

**Tech Stack:**
- Storage: Vercel Blob Storage (@vercel/blob 0.15.0)
- Image Processing: sharp 0.33.0
- Upload API: Next.js 14 App Router

**API Endpoints:**
- POST /api/fish/[id]/photos
  - Handler: `app/api/fish/[id]/photos/route.ts`
  - Accepts: multipart/form-data
  - Validates: File type, size
  - Returns: Photo object with URLs

- DELETE /api/fish/[id]/photos/[photoId]
  - Handler: `app/api/fish/[id]/photos/[photoId]/route.ts`
  - Removes from Blob storage
  - Deletes database record

**Database Schema:**
\`\`\`prisma
model Photo {
  id          String   @id @default(cuid())
  fishId      String
  originalUrl String
  thumbUrl    String
  size        Int
  createdAt   DateTime @default(now())

  fish Fish @relation(fields: [fishId], references: [id], onDelete: Cascade)

  @@index([fishId])
}
\`\`\`

**Implementation Files:**
- app/api/fish/[id]/photos/route.ts (upload handler)
- app/api/fish/[id]/photos/[photoId]/route.ts (delete handler)
- lib/storage/blob.ts (Vercel Blob utilities)
- lib/images/resize.ts (sharp image processing)

**Dependencies:**
- @vercel/blob@0.15.0
- sharp@0.33.0
- zod@3.22.4 (validation)

### Frontend (❌ Missing)

**Needed:**
- Upload component with drag-and-drop
- Progress indicator during upload
- Photo gallery component
- Delete confirmation dialog

**Files to create:**
- components/PhotoUpload.tsx
- components/PhotoGallery.tsx
- app/fish/[id]/photos/page.tsx

## Implementation Plan

See: `specs/photo-upload-frontend.md`

## Dependencies
- User Authentication (complete)
- Fish Management (complete)
- Vercel Blob Storage (configured)
```
---

## Response Format

Always respond with markdown containing:

1. Success message
2. Files created/updated (with line counts)
3. Next steps
4. Any important notes

**Example:**
```markdown
✅ Feature specifications generated successfully!

## Files Created

1. specs/user-authentication.md (156 lines)
2. specs/fish-management.md (243 lines)
3. specs/photo-upload.md (198 lines)
...

## Summary

- Total specifications: 8
- Complete features: 3 (✅)
- Partial features: 3 (⚠️)
- Missing features: 2 (❌)

## Next Steps

Ready for Gear 4: Gap Analysis
Use: /speckit.analyze to validate specifications
```

---

## Notes

- Work efficiently - generate multiple specs in parallel when possible
- Maintain consistent formatting across all specs
- Cross-reference related specifications
- Use appropriate template based on route (agnostic vs prescriptive)
- Ensure all specs are GitHub Spec Kit compliant
556
commands/batch.md
Normal file
@@ -0,0 +1,556 @@
---
description: Batch process multiple repos with StackShift analysis running in parallel. Analyzes 5 repos at a time, tracks progress, and aggregates results. Perfect for analyzing monorepo services or multiple related projects.
---

# StackShift Batch Processing

**Analyze multiple repositories in parallel**

Run StackShift on 10, 50, or 100+ repos simultaneously with progress tracking and result aggregation.

---

## Quick Start

**Analyze all services in a monorepo:**

```bash
# From the monorepo services directory
cd ~/git/my-monorepo/services

# Let me analyze all service-* directories in batches of 5
```

I'll:
1. ✅ Find all service-* directories
2. ✅ Filter to valid repos (has package.json)
3. ✅ Process in batches of 5 (configurable)
4. ✅ Track progress in `batch-results/`
5. ✅ Aggregate results when complete
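The find-and-filter steps above can be sketched as follows; the demo directories under `/tmp` are purely illustrative:

```shell
# Build demo directories: one real repo (has package.json), one impostor
mkdir -p /tmp/stackshift-demo/services/service-a /tmp/stackshift-demo/services/notes
touch /tmp/stackshift-demo/services/service-a/package.json

# Keep only directories that are valid Node repos
> /tmp/services-valid.txt
for dir in /tmp/stackshift-demo/services/*/; do
  [ -f "${dir}package.json" ] && echo "$dir" >> /tmp/services-valid.txt
done

cat /tmp/services-valid.txt
```

Only the surviving paths are fed into the batches, so directories without a `package.json` never consume an agent slot.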
---

## What I'll Do

### Step 1: Discovery

```bash
echo "=== Discovering repositories in ~/git/my-monorepo/services ==="

# Find all service directories
find ~/git/my-monorepo/services -maxdepth 1 -type d -name "service-*" | sort > /tmp/services-to-analyze.txt

# Count
SERVICE_COUNT=$(wc -l < /tmp/services-to-analyze.txt)
echo "Found $SERVICE_COUNT services"

# Show first 10
head -10 /tmp/services-to-analyze.txt
```
### Step 2: Batch Configuration

**IMPORTANT:** I'll ask ALL configuration questions upfront, ONCE. Your answers will be saved to a batch session file and automatically applied to ALL repos in all batches. You won't need to answer these questions again during this batch run!

I'll ask you:

**Question 1: How many to process?**
- A) All services ($SERVICE_COUNT total)
- B) First 10 (test run)
- C) First 25 (small batch)
- D) Custom number

**Question 2: Parallel batch size?**
- A) 3 at a time (conservative)
- B) 5 at a time (recommended)
- C) 10 at a time (aggressive, may slow down)
- D) Sequential (1 at a time, safest)

**Question 3: What route?**
- A) Auto-detect (monorepo-service for service-*; ask for others)
- B) Force monorepo-service for all
- C) Force greenfield for all
- D) Force brownfield for all

**Question 4: Brownfield mode?** _(If route = brownfield)_
- A) Standard - Just create specs for current state
- B) Upgrade - Create specs + upgrade all dependencies

**Question 5: Transmission?**
- A) Manual - Review each gear before proceeding
- B) Cruise Control - Shift through all gears automatically

**Question 6: Clarifications strategy?** _(If transmission = cruise control)_
- A) Defer - Mark them, continue around them
- B) Prompt - Stop and ask questions
- C) Skip - Only implement fully-specified features

**Question 7: Implementation scope?** _(If transmission = cruise control)_
- A) None - Stop after specs are ready
- B) P0 only - Critical features only
- C) P0 + P1 - Critical + high-value features
- D) All - Every feature

**Question 8: Spec output location?** _(If route = greenfield)_
- A) Current repository (default)
- B) New application repository
- C) Separate documentation repository
- D) Custom location

**Question 9: Target stack?** _(If greenfield + implementation scope != none)_
- Examples:
  - Next.js 15 + TypeScript + Prisma + PostgreSQL
  - Python/FastAPI + SQLAlchemy + PostgreSQL
  - Your choice: [specify]

**Question 10: Build location?** _(If greenfield + implementation scope != none)_
- A) Subfolder (recommended) - e.g., greenfield/, v2/
- B) Separate directory - e.g., ~/git/my-new-app
- C) Replace in place (destructive)
**Then I'll:**
1. ✅ Save all answers to `.stackshift-batch-session.json` (in current directory)
2. ✅ Show batch session summary
3. ✅ Start processing batches with auto-applied configuration
4. ✅ Clear batch session when complete (or keep if you want)

**Why directory-scoped?**
- Multiple batch sessions can run simultaneously in different directories
- Each batch (monorepo services, etc.) has its own isolated configuration
- No conflicts between parallel batch runs
- Session file is co-located with the repos being processed

### Step 3: Create Batch Session & Spawn Agents

**First: Create batch session with all answers**

```bash
# After collecting all configuration answers, create batch session
# Stored in current directory for isolation from other batch runs
cat > .stackshift-batch-session.json <<EOF
{
  "sessionId": "batch-$(date +%s)",
  "startedAt": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
  "batchRootDirectory": "$(pwd)",
  "totalRepos": ${TOTAL_REPOS},
  "batchSize": ${BATCH_SIZE},
  "answers": {
    "route": "${ROUTE}",
    "transmission": "${TRANSMISSION}",
    "spec_output_location": "${SPEC_OUTPUT}",
    "target_stack": "${TARGET_STACK}",
    "build_location": "${BUILD_LOCATION}",
    "clarifications_strategy": "${CLARIFICATIONS}",
    "implementation_scope": "${SCOPE}"
  },
  "processedRepos": []
}
EOF

echo "✅ Batch session created: $(pwd)/.stackshift-batch-session.json"
echo "📦 Configuration will be auto-applied to all ${TOTAL_REPOS} repos"
```
**Then: Spawn parallel agents (they'll auto-use batch session)**

```typescript
// Use Task tool to spawn parallel agents
const batch1 = [
  'service-user-api',
  'service-inventory',
  'service-contact',
  'service-search',
  'service-pricing'
];

// Spawn 5 agents in parallel
const agents = batch1.map(service => ({
  task: `Analyze ${service} service with StackShift`,
  description: `StackShift analysis: ${service}`,
  subagent_type: 'general-purpose',
  prompt: `
cd ~/git/my-monorepo/services/${service}

IMPORTANT: Batch session is active (will be auto-detected by walking up to parent)
Parent directory has: .stackshift-batch-session.json
All configuration will be auto-applied. DO NOT ask configuration questions.

Run StackShift Gear 1: Analyze
- Will auto-detect route (batch session: ${ROUTE})
- Will use spec output location: ${SPEC_OUTPUT}
- Analyze service + shared packages
- Generate analysis-report.md

Then run Gear 2: Reverse Engineer
- Extract business logic
- Document all shared package dependencies
- Create comprehensive documentation

Then run Gear 3: Create Specifications
- Generate .specify/ structure
- Create constitution
- Generate feature specs

Save all results to:
${SPEC_OUTPUT}/${service}/

When complete, create completion marker:
${SPEC_OUTPUT}/${service}/.complete
`
}));

// Launch all 5 in parallel
agents.forEach(agent => spawnAgent(agent));
```
### Step 4: Progress Tracking

```bash
# Create tracking directory
mkdir -p ~/git/stackshift-batch-results

# Monitor progress
while true; do
  COMPLETE=$(find ~/git/stackshift-batch-results -name ".complete" | wc -l)
  echo "Completed: $COMPLETE / $SERVICE_COUNT"

  # Check if batch done
  if [ "$COMPLETE" -ge 5 ]; then
    echo "✅ Batch 1 complete"
    break
  fi

  sleep 30
done

# Start next batch...
```
### Step 5: Result Aggregation

```bash
# After all batches complete
echo "=== Aggregating Results ==="

# Create master report
cat > ~/git/stackshift-batch-results/BATCH_SUMMARY.md <<EOF
# StackShift Batch Analysis Results

**Date:** $(date)
**Services Analyzed:** $SERVICE_COUNT
**Batches:** $(($SERVICE_COUNT / 5))
**Total Time:** [calculated]

## Completion Status

$(for service in $(cat /tmp/services-to-analyze.txt); do
  service_name=$(basename $service)
  if [ -f ~/git/stackshift-batch-results/$service_name/.complete ]; then
    echo "- ✅ $service_name - Complete"
  else
    echo "- ❌ $service_name - Failed or incomplete"
  fi
done)

## Results by Service

$(for service in $(cat /tmp/services-to-analyze.txt); do
  service_name=$(basename $service)
  if [ -f ~/git/stackshift-batch-results/$service_name/.complete ]; then
    echo "### $service_name"
    echo ""
    echo "**Specs created:** $(find ~/git/stackshift-batch-results/$service_name/.specify/memory/specifications -name "*.md" 2>/dev/null | wc -l)"
    echo "**Modules analyzed:** $(cat ~/git/stackshift-batch-results/$service_name/.stackshift-state.json 2>/dev/null | jq -r '.metadata.modulesAnalyzed // 0')"
    echo ""
  fi
done)

## Next Steps

All specifications are ready for review:
- Review specs in each service's batch-results directory
- Merge specs to actual repos if satisfied
- Run Gears 4-6 as needed
EOF

cat ~/git/stackshift-batch-results/BATCH_SUMMARY.md
```
---

## Result Structure

```
~/git/stackshift-batch-results/
├── BATCH_SUMMARY.md                  # Master summary
├── batch-progress.json               # Real-time tracking
│
├── service-user-api/
│   ├── .complete                     # Marker file
│   ├── .stackshift-state.json        # State
│   ├── analysis-report.md            # Gear 1 output
│   ├── docs/reverse-engineering/     # Gear 2 output
│   │   ├── functional-specification.md
│   │   ├── service-logic.md
│   │   ├── modules/
│   │   │   ├── shared-pricing-utils.md
│   │   │   └── shared-discount-utils.md
│   │   └── [7 more docs]
│   └── .specify/                     # Gear 3 output
│       └── memory/
│           ├── constitution.md
│           └── specifications/
│               ├── pricing-display.md
│               ├── incentive-logic.md
│               └── [more specs]
│
├── service-inventory/
│   └── [same structure]
│
└── [88 more services...]
```
---

## Monitoring Progress

**Real-time status:**

```bash
# I'll show you periodic updates
echo "=== Batch Progress ==="
echo "Batch 1 (5 services): 3/5 complete"
echo "  ✅ service-user-api - Complete (12 min)"
echo "  ✅ service-inventory - Complete (8 min)"
echo "  ✅ service-contact - Complete (15 min)"
echo "  🔄 service-search - Running (7 min elapsed)"
echo "  ⏳ service-pricing - Queued"
echo ""
echo "Estimated time remaining: 25 minutes"
```
---

## Error Handling

**If a service fails:**
```bash
# Retry failed services
failed_services=(service-search service-pricing)

for service in "${failed_services[@]}"; do
  echo "Retrying: $service"
  # Spawn new agent for retry
done
```

**Common failures:**
- Missing package.json
- Tests failing (can continue anyway)
- Module source not found (prompt for location)
---

## Use Cases

**1. Entire monorepo migration:**
```
Analyze all 90+ service-* repos for migration planning
  ↓
Result: Complete business logic extracted from entire platform
  ↓
Use specs to plan Next.js migration strategy
```

**2. Selective analysis:**
```
Analyze just the 10 high-priority services first
  ↓
Review results
  ↓
Then batch process remaining 80
```

**3. Module analysis:**
```
cd ~/git/my-monorepo/services
Analyze all shared packages (not services)
  ↓
Result: Shared module documentation
  ↓
Understand dependencies before service migration
```
---

## Configuration Options

I'll ask you to configure:

- **Repository list:** All in folder, or custom list?
- **Batch size:** How many parallel (3/5/10)?
- **Gears to run:** 1-3 only or full 1-6?
- **Route:** Auto-detect or force specific route?
- **Output location:** Central results dir or per-repo?
- **Error handling:** Stop on failure or continue?
---

## Comparison with thoth-cli

**thoth-cli (Upgrades):**
- Orchestrates 90+ service upgrades
- 3 phases: coverage → discovery → implementation
- Tracks in .upgrade-state.json
- Parallel processing (2-5 at a time)

**StackShift Batch (Analysis):**
- Orchestrates 90+ service analyses
- 6 gears: analyze → reverse-engineer → create-specs → gap → clarify → implement
- Tracks in .stackshift-state.json
- Parallel processing (3-10 at a time)
- Can output to central location
---

## Example Session

```
You: "I want to analyze all services in ~/git/my-monorepo/services"

Me: "Found 92 services! Let me configure batch processing..."

[Asks questions via AskUserQuestion]
- Process all 92? ✅
- Batch size: 5
- Gears: 1-3 (just analyze and spec, no implementation)
- Output: Central results directory

Me: "Starting batch analysis..."

Batch 1 (5 services): service-user-api, service-inventory, service-contact, service-search, service-pricing
[Spawns 5 parallel agents using Task tool]

[15 minutes later]
"Batch 1 complete! Starting batch 2..."

[3 hours later]
"✅ All 92 services analyzed!

Results: ~/git/stackshift-batch-results/
- 92 analysis reports
- 92 sets of specifications
- 890 total specs extracted
- Multiple shared packages documented

Next: Review specs and begin migration planning"
```
---

## Managing Batch Sessions

### View Current Batch Session

```bash
# Check if batch session exists in current directory and view configuration
if [ -f .stackshift-batch-session.json ]; then
  echo "📦 Active Batch Session in $(pwd)"
  cat .stackshift-batch-session.json | jq '.'
else
  echo "No active batch session in current directory"
fi
```

### View All Batch Sessions

```bash
# Find all active batch sessions
echo "🔍 Finding all active batch sessions..."
find ~/git -name ".stackshift-batch-session.json" -type f 2>/dev/null | while read session; do
  echo ""
  echo "📦 $(dirname $session)"
  cat "$session" | jq -r '"  Route: \(.answers.route) | Repos: \(.processedRepos | length)/\(.totalRepos)"'
done
```

### Clear Batch Session

**After batch completes:**
```bash
# I'll ask you:
# "Batch processing complete! Clear batch session? (Y/n)"

# If yes:
rm .stackshift-batch-session.json
echo "✅ Batch session cleared"

# If no:
echo "✅ Batch session kept (will be used for next batch run in this directory)"
```

**Manual clear (current directory):**
```bash
# Clear batch session in current directory
rm .stackshift-batch-session.json
```

**Manual clear (specific directory):**
```bash
# Clear batch session in specific directory
rm ~/git/my-monorepo/services/.stackshift-batch-session.json
```
**Why keep batch session?**
- Run another batch with same configuration
- Process more repos later in same directory
- Continue interrupted batch
- Consistent settings for related batches

**Why clear batch session?**
- Done with current migration
- Want different configuration for next batch
- Starting fresh analysis
- Free up directory for different batch type

---
|
||||||
|
|
||||||
|
## Batch Session Benefits
|
||||||
|
|
||||||
|
**Without batch session (old way):**
|
||||||
|
```
|
||||||
|
Batch 1: Answer 10 questions ⏱️ 2 min
|
||||||
|
↓ Process 3 repos (15 min)
|
||||||
|
|
||||||
|
Batch 2: Answer 10 questions AGAIN ⏱️ 2 min
|
||||||
|
↓ Process 3 repos (15 min)
|
||||||
|
|
||||||
|
Batch 3: Answer 10 questions AGAIN ⏱️ 2 min
|
||||||
|
↓ Process 3 repos (15 min)
|
||||||
|
|
||||||
|
Total: 30 questions answered, 6 min wasted
|
||||||
|
```
|
||||||
|
|
||||||
|
**With batch session (new way):**
|
||||||
|
```
|
||||||
|
Setup: Answer 10 questions ONCE ⏱️ 2 min
|
||||||
|
↓ Batch 1: Process 3 repos (15 min)
|
||||||
|
↓ Batch 2: Process 3 repos (15 min)
|
||||||
|
↓ Batch 3: Process 3 repos (15 min)
|
||||||
|
|
||||||
|
Total: 10 questions answered, 0 min wasted
|
||||||
|
Saved: 4 minutes per 9 repos processed
|
||||||
|
```
|
||||||
|
|
||||||
|
**For 90 repos in batches of 3:**
|
||||||
|
- Old way: 300 questions answered (60 min of clicking)
|
||||||
|
- New way: 10 questions answered (2 min of clicking)
|
||||||
|
- **Time saved: 58 minutes!** ⚡
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
**This batch processing system is perfect for:**
|
||||||
|
- Monorepo migration (90+ services)
|
||||||
|
- Multi-repo monorepo analysis
|
||||||
|
- Department-wide code audits
|
||||||
|
- Portfolio modernization projects
|
||||||
111
commands/coverage.md
Normal file
@@ -0,0 +1,111 @@
---
description: Generate spec-to-code coverage map showing which code files are covered by which specifications. Creates ASCII diagrams, reverse indexes, and coverage statistics.
---

# Generate Spec Coverage Map

Create a comprehensive visual map of specification-to-code coverage.

---

## What This Does

Analyzes all specifications and generates a coverage map showing:

1. **Spec → Files**: ASCII box diagrams for each spec
2. **Files → Specs**: Reverse index table
3. **Coverage Statistics**: Percentages by category
4. **Heat Map**: Visual representation
5. **Gap Analysis**: Uncovered files
6. **Shared Files**: High-risk multi-spec files

**Output:** `docs/spec-coverage-map.md`

---

## Usage

```bash
# Generate coverage map
/stackshift.coverage
```

I'll:

1. Find all spec files in `.specify/memory/specifications/` or `specs/`
2. Extract file references from each spec
3. Categorize files (backend, frontend, infrastructure, etc.)
4. Generate visual diagrams and tables
5. Calculate coverage statistics
6. Identify gaps and shared files
7. Save to `docs/spec-coverage-map.md`
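Step 2 can be sketched as a grep over path-like tokens in each spec. This is a rough illustration, not the real implementation; the demo spec, temp directory, and extension list are assumptions:

```shell
# Build a spec -> files index by grepping path-like tokens out of each spec.
SPEC_DIR=$(mktemp -d)
cat > "$SPEC_DIR/001-vehicle-details.md" <<'EOF'
# Vehicle Details
Backend: api/handlers/vehicle.ts
Frontend: site/pages/Vehicle.tsx
EOF

INDEX=""
for spec in "$SPEC_DIR"/*.md; do
  # Keep tokens that look like source paths (dir/file.ext)
  files=$(grep -oE '[A-Za-z0-9_./-]+\.(ts|tsx|js|jsx)' "$spec" | sort -u)
  INDEX="$INDEX$(basename "$spec"):
$files
"
done
echo "$INDEX"
```

A real implementation would also need to resolve relative paths and filter out references to files that no longer exist.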

---

## When to Use

- ✅ After Gear 6 (implementation complete)
- ✅ During code reviews (validate coverage)
- ✅ When onboarding new team members
- ✅ Before refactoring (identify dependencies)
- ✅ For documentation audits

---

## Example Output Summary

```
📊 Spec Coverage Health Report

Overall Coverage: 91% (99/109 files)

By Category:
  Backend:        93%  [████████████████░░]
  Frontend:       92%  [████████████████░░]
  Infrastructure: 83%  [███████████████░░░]
  Database:       100% [██████████████████]
  Scripts:        67%  [████████████░░░░░░]

Status:
  ✅ 12 specs covering 99 files
  ⚠️ 10 gap files identified
  🔴 2 high-risk shared files

Full report: docs/spec-coverage-map.md
```

---

## What You'll See

The full coverage map includes:

### ASCII Box Diagrams

```
┌─────────────────────────────────┐
│ 001-vehicle-details             │  Status: ✅ COMPLETE
├─────────────────────────────────┤
│ Backend:                        │
│  ├─ api/handlers/vehicle.ts     │
│  └─ api/services/data.ts        │
│ Frontend:                       │
│  └─ site/pages/Vehicle.tsx      │
└─────────────────────────────────┘
```

### Reverse Index

```markdown
| File                 | Covered By              | Count |
|----------------------|-------------------------|-------|
| lib/utils/pricing.ts | 001, 003, 004, 007, 009 | 5     |
```

### Coverage Gaps

```markdown
Files not covered by any specification:
- api/utils/debug.ts
- scripts/experimental/test.sh
```

---

**Ready!** Let me generate your spec coverage map now...
662
commands/modernize.md
Normal file
@@ -0,0 +1,662 @@
---
description: Execute Brownfield Upgrade Mode - a spec-driven dependency modernization workflow. Runs a 4-phase process: spec-guided test coverage, baseline analysis, dependency upgrade with spec-guided fixes, and spec validation. Based on the thoth-cli dependency upgrade process, adapted for spec-driven development.
---

# StackShift Modernize: Spec-Driven Dependency Upgrade

**Brownfield Upgrade Mode** - Execute after completing Gears 1-6.

Run this command to modernize all dependencies while using specs as your guide and safety net.

---

## Quick Status Check

```bash
# Check prerequisites
echo "=== Prerequisites Check ==="

# 1. Specs exist?
SPEC_COUNT=$(find .specify/memory/specifications -name "*.md" 2>/dev/null | wc -l)
echo "Specs found: $SPEC_COUNT"

# 2. /speckit commands available?
ls .claude/commands/speckit.*.md 2>/dev/null | wc -l | xargs -I {} echo "Speckit commands: {}"

# 3. Tests passing?
npm test --silent && echo "Tests: ✅ PASSING" || echo "Tests: ❌ FAILING"

# 4. StackShift state?
jq -r '"\(.path) - Gear \(.currentStep // "complete")"' .stackshift-state.json 2>/dev/null || echo "No state file"

if [ "$SPEC_COUNT" -lt 1 ]; then
  echo "❌ No specs found. Run Gears 1-6 first."
  exit 1
fi
```

---

## Phase 0: Spec-Guided Test Coverage Foundation

**Goal:** 85%+ coverage, using spec acceptance criteria as the test blueprint

**Time:** 30-90 minutes | **Mode:** Autonomous test writing

### 0.1: Load Specifications & Baseline Coverage

```bash
echo "=== Phase 0: Spec-Guided Test Coverage ==="
mkdir -p .upgrade

# List all specs
find .specify/memory/specifications -name "*.md" | sort | tee .upgrade/all-specs.txt
SPEC_COUNT=$(wc -l < .upgrade/all-specs.txt)

# Baseline coverage
npm test -- --coverage --watchAll=false 2>&1 | tee .upgrade/baseline-coverage.txt
COVERAGE=$(grep "All files" .upgrade/baseline-coverage.txt | grep -oE "[0-9]+\.[0-9]+" | head -1 || echo "0")

echo "Specs: $SPEC_COUNT"
echo "Coverage: ${COVERAGE}%"
echo "Target: 85%"
```

### 0.2: Map Tests to Specs

Create `.upgrade/spec-coverage-map.json` mapping each spec to its tests:

```bash
# For each spec, find which tests validate it
# Track which acceptance criteria are covered vs. missing
```
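Later steps query this file with `jq` (`.value.testFiles`, `.missingCoverage`), so one plausible shape for an entry is the following. This is illustrative: only the `testFiles` and `missingCoverage` keys are assumed by commands in this document; `coveredCriteria` is a hypothetical field name.

```json
{
  "user-authentication.md": {
    "testFiles": [
      "components/Auth.test.tsx"
    ],
    "coveredCriteria": ["AC-1", "AC-2"],
    "missingCoverage": ["AC-3"]
  }
}
```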

### 0.3: Write Tests for Missing Acceptance Criteria

**For each spec with missing coverage:**

1. Read the spec file
2. Extract the acceptance criteria section
3. For each criterion without a test:
   - Write a test that directly validates that criterion
   - Use the Given-When-Then from the spec
   - Ensure the test actually validates behavior

**Example:**

```typescript
// From spec: user-authentication.md
// AC-3: "Given user logs in, When session expires, Then user redirected to login"

describe('User Authentication - AC-3: Session Expiration', () => {
  it('should redirect to login when session expires', async () => {
    // Given: User is logged in
    renderWithAuth(<Dashboard />, { authenticated: true });

    // When: Session expires
    await act(async () => {
      expireSession(); // Helper to expire token
    });

    // Then: User redirected to login
    await waitFor(() => {
      expect(mockNavigate).toHaveBeenCalledWith('/login');
    });
  });
});
```

### 0.4: Iterative Coverage Improvement

```bash
ITERATION=1
TARGET=85
MIN=80

while (( $(echo "$COVERAGE < $TARGET" | bc -l) )); do
  echo "Iteration $ITERATION: ${COVERAGE}%"

  # Stop conditions
  if (( $(echo "$COVERAGE >= $MIN" | bc -l) )) && [ $ITERATION -gt 5 ]; then
    echo "✅ Min coverage reached"
    break
  fi

  # Find the spec with the lowest coverage
  # Write tests for its missing acceptance criteria
  # Prioritize P0 > P1 > P2 specs

  npm test -- --coverage --watchAll=false --silent
  PREV=$COVERAGE
  COVERAGE=$(jq '.total.lines.pct' coverage/coverage-summary.json)
  GAIN=$(echo "$COVERAGE - $PREV" | bc)

  # Diminishing returns?
  if (( $(echo "$GAIN < 0.5" | bc -l) )) && (( $(echo "$COVERAGE >= $MIN" | bc -l) )); then
    echo "✅ Diminishing returns (${GAIN}% gain)"
    break
  fi

  ITERATION=$((ITERATION + 1))
  [ $ITERATION -gt 10 ] && break
done

echo "✅ Phase 0 Complete: ${COVERAGE}% coverage"
```

### 0.5: Completion Marker

```bash
cat > .upgrade/stackshift-upgrade.yml <<EOF
widget_name: $(jq -r '.name' package.json)
route: brownfield-upgrade
started: $(date -u +"%Y-%m-%dT%H:%M:%SZ")

phase_0_complete: true
phase_0_coverage: $COVERAGE
specs_analyzed: $SPEC_COUNT
all_tests_passing: true
EOF
```

---

## Phase 1: Baseline & Analysis (READ-ONLY)

**Goal:** Understand the current state, plan the upgrade, identify spec impact

**Time:** 15-30 minutes | **Mode:** Read-only analysis

**🚨 NO FILE MODIFICATIONS IN PHASE 1**

### 1.1: Spec-Code Baseline

```bash
echo "=== Phase 1: Baseline & Analysis ==="

# Run spec analysis
/speckit.analyze | tee .upgrade/baseline-spec-analysis.txt

# Document which specs are COMPLETE/PARTIAL/MISSING
grep -E "✅|⚠️|❌" .upgrade/baseline-spec-analysis.txt > .upgrade/baseline-spec-status.txt
```

### 1.2: Dependency Analysis

```bash
# Current dependencies
npm list --depth=0 > .upgrade/dependencies-before.txt

# Outdated packages
npm outdated --json > .upgrade/outdated.json || echo "{}" > .upgrade/outdated.json

# Count major upgrades (compare the major version segment, not just any difference)
MAJOR_COUNT=$(jq '[.[] | select(((.current // "") | split(".")[0]) != ((.latest // "") | split(".")[0]))] | length' .upgrade/outdated.json)
echo "Major upgrades: $MAJOR_COUNT"
```
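The major-version comparison can also be done in plain shell. A small hypothetical helper (the name and sample versions are illustrative):

```shell
# Succeed only when the major version segment differs.
is_major_upgrade() {
  [ "${1%%.*}" != "${2%%.*}" ]
}

is_major_upgrade "17.0.2" "19.2.0" && echo "react: major upgrade"
is_major_upgrade "4.17.20" "4.17.21" || echo "lodash: minor/patch only"
```

`${1%%.*}` strips everything from the first dot onward, so `17.0.2` compares as `17`.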

### 1.3: Spec Impact Analysis

**For each major dependency upgrade, identify the affected specs:**

```bash
# Create impact analysis
cat > .upgrade/spec-impact-analysis.json <<'EOF'
{
  "react": {
    "current": "17.0.2",
    "latest": "19.2.0",
    "breaking": true,
    "affectedSpecs": [
      "user-interface.md",
      "form-handling.md"
    ],
    "acceptanceCriteria": [
      "user-interface.md: AC-1, AC-3",
      "form-handling.md: AC-2"
    ],
    "testFiles": [
      "components/UserInterface.test.tsx"
    ],
    "risk": "HIGH"
  }
}
EOF
```

### 1.4: Generate Upgrade Plan

Create `.upgrade/UPGRADE_PLAN.md`:

```markdown
# Upgrade Plan

## Summary
- Dependencies to upgrade: ${MAJOR_COUNT} major versions
- Specs affected: [count from impact analysis]
- Risk level: [HIGH/MEDIUM/LOW]
- Estimated effort: 2-4 hours

## Critical Upgrades (Breaking Changes Expected)

### react: 17.0.2 → 19.2.0
- **Breaking Changes:**
  - Automatic batching
  - Hydration mismatches
  - useId for SSR
- **Affected Specs:**
  - user-interface.md (AC-1, AC-3, AC-5)
  - form-handling.md (AC-2, AC-4)
- **Test Files:**
  - components/UserInterface.test.tsx
  - components/FormHandler.test.tsx
- **Risk:** HIGH

[Continue for all major upgrades...]

## Spec Impact Summary

### High-Risk Specs (Validate Carefully)
1. user-interface.md - React changes affect all components
2. form-handling.md - State batching changes

### Low-Risk Specs (Quick Validation)
[List specs unlikely to be affected]

## Upgrade Sequence
1. Update package.json versions
2. npm install
3. Fix TypeScript errors
4. Fix test failures (spec-guided)
5. Fix build errors
6. Validate with /speckit.analyze
```

### 1.5: Update Tracking

```bash
cat >> .upgrade/stackshift-upgrade.yml <<EOF
phase_1_complete: true
phase_1_date: $(date -u +"%Y-%m-%dT%H:%M:%SZ")
planned_major_upgrades: $MAJOR_COUNT
high_risk_specs: [count]
upgrade_plan_created: true
EOF
```

---

## Phase 2: Dependency Upgrade & Spec-Guided Fixes

**Goal:** Upgrade all dependencies, fix breaking changes using specs

**Time:** 1-4 hours | **Mode:** Implementation with spec guidance

### 2.1: Pre-Flight Check

```bash
echo "=== Pre-Flight Health Check ==="

npm test && echo "Tests: ✅" || { echo "Tests: ❌ STOP"; exit 1; }
npm run build && echo "Build: ✅" || { echo "Build: ❌ STOP"; exit 1; }
npm run lint 2>/dev/null && echo "Lint: ✅" || echo "Lint: ⚠️ OK"

# Must be green to proceed
```

### 2.2: Create Upgrade Branch

```bash
git checkout -b upgrade/dependencies-to-latest
git add .upgrade/
git commit -m "docs: upgrade baseline and plan

Phase 0: Coverage ${COVERAGE}%
Phase 1: Analysis complete
Specs: $SPEC_COUNT validated
Ready for Phase 2
"
```

### 2.3: Upgrade Dependencies

```bash
echo "=== Upgrading Dependencies ==="

# Upgrade to latest
npx npm-check-updates -u
npm install

# Update Node (optional but recommended)
echo "22.21.0" > .nvmrc
nvm install 22.21.0
nvm use

# Document changes
npm list --depth=0 > .upgrade/dependencies-after.txt
```

### 2.4: Detect Breaking Changes

```bash
echo "=== Testing After Upgrade ==="

npm test 2>&1 | tee .upgrade/test-results-post-upgrade.txt

# Extract failures
grep -E "FAIL|✕" .upgrade/test-results-post-upgrade.txt > .upgrade/failures.txt || true
FAILURE_COUNT=$(wc -l < .upgrade/failures.txt)

echo "Breaking changes: $FAILURE_COUNT test failures"
```

### 2.5: Spec-Guided Fix Loop

**Autonomous iteration until all tests pass:**

```bash
ITERATION=1
MAX_ITERATIONS=20

while ! npm test --silent >/dev/null 2>&1; do
  echo "=== Fix Iteration $ITERATION ==="

  # Get the first failing test
  FAILING_TEST=$(npm test 2>&1 | grep -m 1 "FAIL" | grep -oE "[^ ]+\.test\.[jt]sx?" || echo "")

  if [ -z "$FAILING_TEST" ]; then
    echo "No specific test file found, checking for general errors..."
    npm test 2>&1 | head -50
    break
  fi

  echo "Failing test: $FAILING_TEST"

  # Find the spec from the coverage map
  SPEC=$(jq -r "to_entries[] | select(.value.testFiles[] | contains(\"$FAILING_TEST\")) | .key" .upgrade/spec-coverage-map.json || echo "")

  if [ -n "$SPEC" ]; then
    echo "Validates spec: $SPEC"
    echo "Loading spec acceptance criteria..."
    grep -A 20 "Acceptance Criteria" ".specify/memory/specifications/$SPEC"
  fi

  # FIX THE BREAKING CHANGE
  # - Read the spec acceptance criteria
  # - Understand the intended behavior
  # - Fix the code to preserve that behavior
  # - Run the test to verify the fix

  # Log fix
  echo "[$ITERATION] Fixed: $FAILING_TEST (spec: $SPEC)" >> .upgrade/fixes-applied.log

  # Commit incremental fix
  git add -A
  git commit -m "fix: breaking change in $FAILING_TEST

Spec: $SPEC
Fixed to preserve acceptance criteria behavior
"

  ITERATION=$((ITERATION + 1))

  if [ $ITERATION -gt $MAX_ITERATIONS ]; then
    echo "⚠️ Max iterations reached - manual review needed"
    break
  fi
done

echo "✅ All tests passing"
```

### 2.6: Build & Lint

```bash
# Fix build
npm run build || (echo "Fixing build errors..." && [fix build])

# Fix lint (often ESLint 9 config changes)
npm run lint || (echo "Fixing lint..." && [update eslint.config.js])
```
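If the lint failure is the ESLint 9 flat-config migration, a minimal `eslint.config.js` sketch looks roughly like this (the file globs and rules shown are placeholders for whatever the project actually uses):

```javascript
// Minimal ESLint 9 flat config sketch (replaces .eslintrc.*)
import js from '@eslint/js';

export default [
  js.configs.recommended,
  {
    files: ['**/*.{js,jsx,ts,tsx}'],
    rules: {
      'no-unused-vars': 'warn',
    },
  },
];
```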

### 2.7: Phase 2 Complete

```bash
npm test && npm run build && npm run lint

FINAL_COVERAGE=$(jq '.total.lines.pct' coverage/coverage-summary.json)

cat >> .upgrade/stackshift-upgrade.yml <<EOF
phase_2_complete: true
phase_2_date: $(date -u +"%Y-%m-%dT%H:%M:%SZ")
test_coverage_after: $FINAL_COVERAGE
fixes_applied: $(wc -l < .upgrade/fixes-applied.log)
all_passing: true
EOF
```

---

## Phase 3: Spec Validation & PR

**Goal:** Validate that specs match the code, then create a PR

**Time:** 15-30 minutes

### 3.1: Spec Validation

```bash
echo "=== Phase 3: Spec Validation ==="

# Run spec analysis
/speckit.analyze | tee .upgrade/final-spec-analysis.txt

# Check for drift
if grep -q "drift\|mismatch\|inconsistent" .upgrade/final-spec-analysis.txt; then
  echo "⚠️ Drift detected - investigating..."
  # Review and fix any drift
fi
```

### 3.2: Generate Upgrade Report

```bash
# Create comprehensive report
cat > .upgrade/UPGRADE_REPORT.md <<EOF
# Dependency Upgrade Report

**Date:** $(date)
**Project:** $(jq -r '.name' package.json)
**Route:** Brownfield Upgrade (StackShift Modernize)

## Summary

- **Dependencies Upgraded:** $(wc -l < .upgrade/dependencies-after.txt) packages
- **Breaking Changes Fixed:** $(wc -l < .upgrade/fixes-applied.log)
- **Test Coverage:** ${BASELINE_COVERAGE}% → ${FINAL_COVERAGE}%
- **Spec Validation:** ✅ $(grep -c "✅ COMPLETE" .upgrade/final-spec-analysis.txt) specs validated
- **Security:** $(npm audit --json | jq '.metadata.vulnerabilities.total // 0') vulnerabilities

## Major Dependency Upgrades

$(npm outdated --json | jq -r 'to_entries[] | "- **\(.key):** \(.value.current) → \(.value.latest)"')

## Breaking Changes Fixed

$(cat .upgrade/fixes-applied.log)

## Spec Validation Results

- **Total Specs:** $SPEC_COUNT
- **Validated:** ✅ All passing
- **Drift:** None detected
- **Specs Updated:** [if any, list why]

## Test Coverage

- **Before:** ${BASELINE_COVERAGE}%
- **After:** ${FINAL_COVERAGE}%
- **Target:** 85% ✅

## Security Improvements

$(npm audit --json | jq -r '.vulnerabilities | to_entries[] | select(.value.via[0].severity == "high" or .value.via[0].severity == "critical") | "- \(.key): \(.value.via[0].title)"' || echo "No high/critical vulnerabilities")

## Validation Checklist

- [x] All tests passing
- [x] Build successful
- [x] Lint passing
- [x] Coverage ≥85%
- [x] Specs validated (/speckit.analyze)
- [x] No high/critical vulnerabilities
- [ ] Code review
- [ ] Merge approved
EOF

cat .upgrade/UPGRADE_REPORT.md
```

### 3.3: Create Pull Request

```bash
# Final commit (unquoted EOF so the coverage variables expand)
git add -A
git commit -m "$(cat <<EOF
chore: upgrade all dependencies to latest versions

Spec-driven upgrade using StackShift modernize workflow.

Phases Completed:
- Phase 0: Test coverage ${BASELINE_COVERAGE}% → ${FINAL_COVERAGE}%
- Phase 1: Analysis & planning
- Phase 2: Upgrades & spec-guided fixes
- Phase 3: Validation ✅

See .upgrade/UPGRADE_REPORT.md for complete details.
EOF
)"

# Push
git push -u origin upgrade/dependencies-to-latest

# Create PR
gh pr create \
  --title "chore: Upgrade all dependencies to latest versions" \
  --body-file .upgrade/UPGRADE_REPORT.md
```

### 3.4: Final Tracking Update

```bash
cat >> .upgrade/stackshift-upgrade.yml <<EOF
phase_3_complete: true
upgrade_complete: true
completion_date: $(date -u +"%Y-%m-%dT%H:%M:%SZ")
pr_number: $(gh pr list --head upgrade/dependencies-to-latest --json number -q '.[0].number')
pr_url: $(gh pr list --head upgrade/dependencies-to-latest --json url -q '.[0].url')
final_status: SUCCESS
EOF

echo ""
echo "🎉 ========================================="
echo "   MODERNIZATION COMPLETE!"
echo "========================================="
echo ""
echo "✅ Dependencies: All latest versions"
echo "✅ Tests: All passing"
echo "✅ Coverage: ${FINAL_COVERAGE}% (target: 85%)"
echo "✅ Specs: Validated with /speckit.analyze"
echo "✅ Build: Successful"
echo "✅ Security: Vulnerabilities resolved"
echo ""
echo "📋 Report: .upgrade/UPGRADE_REPORT.md"
echo "🔗 PR: $(gh pr list --head upgrade/dependencies-to-latest --json url -q '.[0].url')"
echo ""
```

---

## Key Differences from thoth-cli

**thoth-cli upgrade workflow:**
- 3 phases (coverage, discovery, implementation)
- Targets latest versions per package manager
- Enzyme removal required
- ESLint 9 flat config required
- Batch processing of 90+ widgets

**StackShift modernize (generic + spec-driven):**
- 4 phases (includes spec validation)
- Latest versions (whatever is current)
- **Spec acceptance criteria guide test writing**
- **Specs guide breaking change fixes**
- **Continuous spec validation**
- Single-repo focus

**What we learned from thoth-cli:**
- ✅ Phase 0 test coverage foundation
- ✅ Read-only analysis phase
- ✅ Iterative fix loop
- ✅ Autonomous test writing
- ✅ Comprehensive tracking files
- ✅ Validation before proceeding

**What we added for specs:**
- ✅ Spec acceptance criteria = test blueprint
- ✅ Spec impact analysis
- ✅ Spec-guided breaking change fixes
- ✅ /speckit.analyze validation
- ✅ Spec-coverage mapping

---

## Success Criteria

- ✅ All dependencies at latest stable
- ✅ Test coverage ≥85%
- ✅ All tests passing
- ✅ Build successful
- ✅ /speckit.analyze: no drift
- ✅ PR created
- ✅ Security issues resolved

---

## Troubleshooting

**"Tests failing after upgrade"**
```bash
# 1. Find the failing test
npm test 2>&1 | grep FAIL

# 2. Find its spec
jq '.[] | select(.testFiles[] | contains("failing-test.ts"))' .upgrade/spec-coverage-map.json

# 3. Read the spec's acceptance criteria
grep -A 10 "Acceptance Criteria" .specify/memory/specifications/[spec-name].md

# 4. Fix to match the spec
# 5. Verify with npm test
```

**"Can't reach 85% coverage"**
```bash
# Check which acceptance criteria lack tests
jq '.[] | select(.missingCoverage | length > 0)' .upgrade/spec-coverage-map.json

# Write tests for those criteria
```

**"/speckit.analyze shows drift"**
```bash
# Review what changed
/speckit.analyze

# Fix the code to match the spec, OR
# update the spec if the change is an intentional improvement
```

---

**Remember:** Specs are your north star. When breaking changes occur, specs tell you what behavior to preserve.
132
commands/setup.md
Normal file
@@ -0,0 +1,132 @@
---
|
||||||
|
description: Install StackShift and Spec Kit slash commands to this project for team use. Run this if you joined a project after StackShift analysis was completed.
|
||||||
|
---
|
||||||
|
|
||||||
|
# StackShift Setup - Install Slash Commands
|
||||||
|
|
||||||
|
**Use this if:**
|
||||||
|
- You cloned a project that uses StackShift
|
||||||
|
- Slash commands aren't showing up (/speckit.*, /stackshift.*)
|
||||||
|
- You want to add features but don't have the commands
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Step 1: Install Commands
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Create commands directory
|
||||||
|
mkdir -p .claude/commands
|
||||||
|
|
||||||
|
# Copy from StackShift plugin
|
||||||
|
cp ~/.claude/plugins/stackshift/.claude/commands/speckit.*.md .claude/commands/
|
||||||
|
cp ~/.claude/plugins/stackshift/.claude/commands/stackshift.*.md .claude/commands/
|
||||||
|
|
||||||
|
# Verify
|
||||||
|
ls .claude/commands/
|
||||||
|
```
|
||||||
|
|
||||||
|
**You should see:**
|
||||||
|
- ✅ speckit.analyze.md
|
||||||
|
- ✅ speckit.clarify.md
|
||||||
|
- ✅ speckit.implement.md
|
||||||
|
- ✅ speckit.plan.md
|
||||||
|
- ✅ speckit.specify.md
|
||||||
|
- ✅ speckit.tasks.md
|
||||||
|
- ✅ stackshift.modernize.md
|
||||||
|
- ✅ stackshift.setup.md
|
||||||
|
|
||||||
|
---
|
||||||
|
|
## Step 2: Update .gitignore

**Ensure .gitignore allows .claude/commands/ to be committed:**

```bash
# Check if .gitignore exists
if [ ! -f .gitignore ]; then
  echo "Creating .gitignore..."
  touch .gitignore
fi

# Add rules to allow slash commands
cat >> .gitignore <<'EOF'

# Claude Code - Allow slash commands (team needs these!)
!.claude/
!.claude/commands/
!.claude/commands/*.md

# Ignore user-specific Claude settings
.claude/settings.json
.claude/mcp-settings.json
.claude/.storage/
EOF

echo "✅ .gitignore updated"
```

---

## Step 3: Commit to Git

```bash
git add .claude/commands/
git add .gitignore

git commit -m "chore: add StackShift slash commands for team

Adds /speckit.* and /stackshift.* slash commands.

Commands installed:
- /speckit.specify - Create feature specifications
- /speckit.plan - Create technical implementation plans
- /speckit.tasks - Generate task breakdowns
- /speckit.implement - Execute implementation
- /speckit.clarify - Resolve specification ambiguities
- /speckit.analyze - Validate specs match code
- /stackshift.modernize - Upgrade dependencies
- /stackshift.setup - Install commands (this command)

These enable spec-driven development for the entire team.
All team members will have commands after cloning.
"
```

---

## Done!

✅ Commands installed to project
✅ .gitignore updated to allow commands
✅ Commands committed to git
✅ Team members will have commands when they clone

**Type `/spec` and you should see all commands autocomplete!**

---

## For Project Leads

**After running StackShift on a project, always:**

1. ✅ Run `/stackshift.setup` (or manual Steps 1-3 above)
2. ✅ Commit .claude/commands/ to git
3. ✅ Push to remote

**This ensures everyone on your team has access to slash commands without individual setup.**

---

## Troubleshooting

**"Commands still not showing up"**
→ Restart Claude Code after installing

**"Git says .claude/ is ignored"**
→ Check that .gitignore has the `!.claude/commands/` rule

**"Don't have StackShift plugin installed"**
→ Install: `/plugin marketplace add jschulte/claude-plugins` then `/plugin install stackshift`

**"StackShift plugin not in ~/.claude/plugins/"**
→ Commands might be in a different location; manually copy them from a project that has them
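For the ".claude/ is ignored" case, `git check-ignore -v` shows exactly which rule is matching. A minimal self-contained sketch (uses a throwaway repo; the exception rules mirror Step 2):

```shell
# Throwaway repo to demonstrate diagnosing an over-broad ignore rule
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
mkdir -p .claude/commands
printf '.claude/\n' > .gitignore

# Prints the matching rule, e.g. ".gitignore:1:.claude/  .claude/commands/"
git check-ignore -v .claude/commands/ || true

# Apply the Step 2 exceptions, then re-check
printf '!.claude/\n!.claude/commands/\n!.claude/commands/*.md\n' >> .gitignore
if git check-ignore -q .claude/commands/; then
  echo "still ignored"
else
  echo "not ignored"
fi
```

Note the order matters: the `!.claude/` negation must come after the broad `.claude/` rule, since git cannot re-include files under a directory that is still excluded.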
134
commands/speckit-analyze.md
Normal file
@@ -0,0 +1,134 @@
---
description: Validate specifications against implementation
---

# Spec Kit: Analyze Specifications

Compare specifications in `specs/` against the actual codebase implementation.

## Analysis Steps

### 1. Load All Specifications

Read all files in `specs/`:
- Note status markers (✅ COMPLETE / ⚠️ PARTIAL / ❌ MISSING)
- Identify dependencies between specs
- List acceptance criteria
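The status scan above can be sketched as a few greps (a sketch; it assumes specs carry exactly the markers listed — the fixture files here are illustrative):

```shell
# Illustrative fixture: two specs with status markers
tmp=$(mktemp -d) && cd "$tmp"
mkdir -p specs
printf '# Feature: login\n\n## Status\n✅ COMPLETE\n' > specs/login.md
printf '# Feature: upload\n\n## Status\n⚠️ PARTIAL\n' > specs/upload.md

# Count specs per status marker (grep -l lists matching files)
complete=$(grep -rl '✅ COMPLETE' specs/ | wc -l | tr -d ' ')
partial=$(grep -rl '⚠️ PARTIAL' specs/ | wc -l | tr -d ' ')
missing=$(grep -rl '❌ MISSING' specs/ | wc -l | tr -d ' ')
echo "COMPLETE=$complete PARTIAL=$partial MISSING=$missing"
# → COMPLETE=1 PARTIAL=1 MISSING=0
```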
### 2. Validate Each Specification

For each spec marked **✅ COMPLETE:**
- **Verify implementation exists:**
  - Check file paths mentioned in spec
  - Verify API endpoints exist
  - Confirm database models match schema
  - Test that features actually work

- **If implementation missing:**
  ```
  ⚠️ Inconsistency: spec-name.md marked COMPLETE
  Reality: Implementation not found
  Files checked: [list]
  Recommendation: Update status to PARTIAL or MISSING
  ```

For each spec marked **⚠️ PARTIAL:**
- **Verify what's listed as implemented actually exists**
- **Verify what's listed as missing is actually missing**
- **Check for implementation drift** (code changed since spec written)

For each spec marked **❌ MISSING:**
- **Check if implementation exists anyway** (orphaned code)
  ```
  ⚠️ Orphaned Implementation: user-notifications feature
  Spec: Marked MISSING
  Reality: Implementation found in src/notifications/
  Recommendation: Create specification or remove code
  ```
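Orphan detection can be roughed out by comparing feature directories against spec filenames — a sketch that assumes a `src/<feature>/` ↔ `specs/<feature>.md` naming convention, which real projects may not follow:

```shell
# Illustrative layout: a spec exists for auth but not for notifications
tmp=$(mktemp -d) && cd "$tmp"
mkdir -p specs src/auth src/notifications
printf '# Feature: auth\n' > specs/auth.md

# Flag any src/ feature directory with no matching spec file
for dir in src/*/; do
  name=$(basename "$dir")
  if [ ! -f "specs/$name.md" ]; then
    echo "orphan candidate: $dir (no specs/$name.md)"
  fi
done
# → orphan candidate: src/notifications/ (no specs/notifications.md)
```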
### 3. Check for Inconsistencies

- **Conflicting requirements** between related specs
- **Broken dependencies** (spec A requires spec B, but B is MISSING)
- **Version mismatches** (spec requires v2.0, code uses v1.5)
- **Outdated technical details** (for brownfield specs)

### 4. Generate Report

Output format:

```markdown
# Specification Analysis Results

**Date:** [current date]
**Specifications analyzed:** X total

## Summary

- ✅ COMPLETE features: X (fully aligned)
- ⚠️ PARTIAL features: X (some gaps)
- ❌ MISSING features: X (not started)
- 🔴 Inconsistencies found: X

## Issues Detected

### High Priority (Blocking)

1. **user-authentication.md** (COMPLETE → should be PARTIAL)
   - Spec says: Frontend login UI required
   - Reality: No login components found
   - Impact: Users cannot authenticate
   - Recommendation: Implement login UI or update spec status

2. **photo-upload.md → fish-management.md**
   - Dependency broken
   - fish-management requires photo-upload
   - photo-upload marked PARTIAL (API incomplete)
   - Impact: Fish photos cannot be uploaded
   - Recommendation: Complete photo-upload API first

### Medium Priority

[...]

### Low Priority (Minor)

[...]

## Orphaned Implementations

Code exists without specifications:

1. **src/api/notifications.ts** (156 lines)
   - No specification found
   - Recommendation: Create specification or remove code

## Alignment Score

- Specifications ↔ Code: X% aligned
- No issues found: ✅ / Issues require attention: ⚠️

## Recommendations

1. Update status markers for inconsistent specs
2. Create specifications for orphaned code
3. Complete high-priority implementations
4. Re-run `/speckit.analyze` after fixes

## Next Steps

- Fix high-priority issues first
- Update specification status markers
- Re-run analysis to validate fixes
```

---

## Notes

- This analysis should be thorough but must not modify any files
- Report inconsistencies; don't auto-fix
- Cross-reference related specifications
- Check both directions (spec → code and code → spec)
- For brownfield: verify exact versions and file paths
- For greenfield: check whether business requirements are met (ignore implementation details)
212
commands/speckit-clarify.md
Normal file
@@ -0,0 +1,212 @@
---
description: Resolve [NEEDS CLARIFICATION] markers interactively
---

# Spec Kit: Clarify Specifications

Find and resolve all `[NEEDS CLARIFICATION]` markers in specifications.

## Process

### Step 1: Find All Clarifications

Scan `specs/` for `[NEEDS CLARIFICATION]` markers:

```bash
grep -r "\[NEEDS CLARIFICATION\]" specs/
```

Create a list:

```markdown
## Clarifications Needed

### High Priority (P0)
1. **user-authentication.md:**
   - [NEEDS CLARIFICATION] Password reset: email link or SMS code?

2. **photo-upload.md:**
   - [NEEDS CLARIFICATION] Max file size: 5MB or 10MB?
   - [NEEDS CLARIFICATION] Supported formats: just images or also videos?

### Medium Priority (P1)
3. **analytics-dashboard.md:**
   - [NEEDS CLARIFICATION] Chart types: line, bar, pie, or all three?
   - [NEEDS CLARIFICATION] Real-time updates or daily aggregates?

[... list all ...]
```
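Building that list can be sketched as a grep piped through a small loop that prints one line per open question (a sketch; the fixture spec below is illustrative):

```shell
# Illustrative spec with two open questions
tmp=$(mktemp -d) && cd "$tmp"
mkdir -p specs
cat > specs/photo-upload.md <<'EOF'
# Feature: photo-upload

[NEEDS CLARIFICATION] Max file size: 5MB or 10MB?
[NEEDS CLARIFICATION] Supported formats: just images or also videos?
EOF

# One line per marker: file, line number, and the question text
grep -rn '\[NEEDS CLARIFICATION\]' specs/ | while IFS=: read -r file line text; do
  echo "$file:$line -> ${text#\[NEEDS CLARIFICATION\] }"
done
```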
### Step 2: Ask Questions by Priority

**For each clarification (P0 first):**

```
**Feature:** user-authentication
**Question:** For password reset, should we use email link or SMS code?

**Context:**
- Email link: More secure, standard approach
- SMS code: Faster, but requires phone number

**Recommendation:** Email link (industry standard)

What would you prefer?
```

### Step 3: Record Answers

For each answer received:

```markdown
## Answer: Password Reset Method

**Question:** Email link or SMS code?
**Answer:** Email link with 1-hour expiration
**Additional Details:**
- Link format: /reset-password?token={jwt}
- Token expires: 1 hour
- Email template: Branded with reset button
```

### Step 4: Update Specifications

For each answer, update the corresponding specification:

**Before:**
```markdown
## Password Reset

[NEEDS CLARIFICATION] Should we use email link or SMS code?
```

**After:**
```markdown
## Password Reset

Users can reset their password via email link.

**Implementation:**
- User requests reset via /api/auth/reset-password
- System sends email with unique token (JWT, 1-hour expiry)
- Email contains link: /reset-password?token={jwt}
- User clicks link, enters new password
- Token validated, password updated

**Acceptance Criteria:**
- [ ] User can request password reset
- [ ] Email sent within 30 seconds
- [ ] Link expires after 1 hour
- [ ] Token is single-use
- [ ] Password successfully updated after reset
```

### Step 5: Cross-Reference Updates

If a clarification affects multiple specs, update all of them.

Example: Max file size affects both the photo-upload and document-upload specs.

### Step 6: Validate Completeness

After all clarifications:

```bash
# Check no markers remain
grep -r "\[NEEDS CLARIFICATION\]" specs/

# Should return: No matches (or only in comments/examples)
```

---

## Output Format

```markdown
# Clarification Session Complete

**Date:** {{DATE}}
**Clarifications resolved:** X

## Summary

### Photo Upload Feature
✅ Max file size: 10MB
✅ Supported formats: JPEG, PNG, WebP
✅ Upload method: Drag-drop and click-browse (both)
✅ Max photos per item: 10

### Analytics Dashboard
✅ Chart types: Line, bar, pie (all three)
✅ Data refresh: Real-time for alerts, daily aggregates for charts
✅ Date ranges: 7d, 30d, 90d, all time

### User Authentication
✅ Password reset: Email link (1-hour expiry)
✅ Session duration: 24 hours (configurable)
✅ Rate limiting: 10 attempts per hour

[... all clarifications ...]

## Specifications Updated

Updated X specifications:
1. user-authentication.md
2. photo-upload.md
3. analytics-dashboard.md
[...]

## Validation

✅ No [NEEDS CLARIFICATION] markers remaining
✅ All acceptance criteria complete
✅ Specifications ready for implementation

## Next Steps

Ready for Gear 6: Implementation

Use `/speckit.tasks` and `/speckit.implement` for each feature.
```

---

## Defer Mode

If clarifications_strategy = "defer":

Instead of asking questions, just document them:

```markdown
# Deferred Clarifications

**File:** .specify/memory/deferred-clarifications.md

## Items to Clarify Later

1. **photo-upload.md:**
   - Max file size?
   - Supported formats?

2. **analytics-dashboard.md:**
   - Chart types?
   - Real-time or aggregated?

[... list all ...]

## How to Resolve

When ready, run `/speckit.clarify` to answer these questions.
```

Then continue with implementation of fully-specified features.

---

## Notes

- Ask questions in priority order (P0 first)
- Provide context and recommendations
- Allow "defer to later" for non-critical clarifications
- Update all related specs when answering
- Validate no markers remain at the end
- If user is unsure, mark as P2/P3 (lower priority)
218
commands/speckit-implement.md
Normal file
@@ -0,0 +1,218 @@
---
description: Implement feature from specification and plan
---

# Spec Kit: Implement Feature

Systematically implement a feature from its specification and implementation plan.

## Inputs

**Feature name:** {{FEATURE_NAME}}

**Files to read:**
- Specification: `specs/{{FEATURE_NAME}}.md`
- Implementation Plan: `specs/{{FEATURE_NAME}}-impl.md`

## Implementation Process

### Step 1: Review Specification

Read `specs/{{FEATURE_NAME}}.md`:

- Understand the feature overview
- Read all user stories
- Review acceptance criteria (these are your tests!)
- Note dependencies on other features
- Check current status (COMPLETE/PARTIAL/MISSING)

### Step 2: Review Implementation Plan

Read `specs/{{FEATURE_NAME}}-impl.md`:

- Understand current vs target state
- Review technical approach
- Read the task list
- Note risks and mitigations
- Review testing strategy

### Step 3: Execute Tasks Systematically

For each task in the implementation plan:

1. **Read the task description**
2. **Implement the task:**
   - Create/modify files as needed
   - Follow existing code patterns
   - Use appropriate frameworks/libraries
   - Add proper error handling
   - Include logging where appropriate

3. **Test the task:**
   - Run relevant tests
   - Manual testing if needed
   - Verify acceptance criteria

4. **Mark task complete:**
   - Update task checklist
   - Note any issues or deviations

5. **Continue to next task**

### Step 4: Validate Against Acceptance Criteria

After all tasks are complete, verify each acceptance criterion:

```markdown
## Acceptance Criteria Validation

- [x] User can register with email/password
  ✅ Tested: Registration form works, user created in DB

- [x] Passwords meet complexity requirements
  ✅ Tested: Weak passwords rejected with proper error message

- [x] Verification email sent
  ✅ Tested: Email sent via SendGrid, token valid for 24h

- [ ] User can reset password
  ❌ NOT IMPLEMENTED: Deferred to future sprint
```

### Step 5: Run Tests

Execute test suite:

```bash
# Run unit tests
npm test

# Run integration tests (if available)
npm run test:integration

# Run E2E tests for this feature (if available)
npm run test:e2e
```

Report test results:
- Tests passing: X/Y
- New tests added: Z
- Coverage: X%

### Step 6: Update Specification Status

Update `specs/{{FEATURE_NAME}}.md`:

**If fully implemented:**
```markdown
## Status
✅ **COMPLETE** - Fully implemented and tested

## Implementation Complete

- Date: [current date]
- All acceptance criteria met
- Tests passing: X/X
- No known issues
```

**If partially implemented:**
```markdown
## Status
⚠️ **PARTIAL** - Core functionality complete, missing: [list]

## Implementation Status

**Completed:**
- ✅ [What was implemented]

**Still Missing:**
- ❌ [What's still needed]

**Reason:** [Why not fully complete]
```
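The marker flip itself can be a one-line `sed` once implementation and tests are done (a sketch; it assumes the exact marker text from the templates above, and the fixture spec is illustrative):

```shell
# Illustrative spec that starts out MISSING
tmp=$(mktemp -d) && cd "$tmp"
mkdir -p specs
printf '# Feature: user-profile\n\n## Status\n❌ MISSING\n' > specs/user-profile.md

# Flip the status marker (keeps a .bak backup; works with GNU and BSD sed)
sed -i.bak 's/❌ MISSING/✅ COMPLETE - Fully implemented and tested/' specs/user-profile.md
grep '## Status' -A1 specs/user-profile.md   # prints the updated marker
```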
### Step 7: Commit Changes

Create a commit that references the specification:

```bash
git add .
git commit -m "feat: implement {{FEATURE_NAME}} (specs/{{FEATURE_NAME}}.md)

Implemented from specification: specs/{{FEATURE_NAME}}.md

Completed:
- [Task 1]
- [Task 2]
- [Task 3]

Tests: X passing
Status: COMPLETE"
```

---

## Output Format

Provide summary:

```markdown
## Implementation Complete: {{FEATURE_NAME}}

### Tasks Completed
- [x] Task 1: [description] (file.ts)
- [x] Task 2: [description] (file2.ts)
- [x] Task 3: [description]

### Files Created/Modified
- src/feature/component.ts (142 lines)
- src/api/endpoint.ts (78 lines)
- tests/feature.test.ts (95 lines)

### Tests
- Unit tests: 12/12 passing ✅
- Integration tests: 3/3 passing ✅
- Coverage: 87%

### Acceptance Criteria
- [x] Criterion 1 ✅
- [x] Criterion 2 ✅
- [x] Criterion 3 ✅

### Status Update
- Previous: ❌ MISSING
- Now: ✅ COMPLETE

### Commit
✅ Committed: feat: implement {{FEATURE_NAME}}

### Next Steps
Ready to shift into the next feature, or run `/speckit.analyze` to validate.
```

---

## Error Handling

If implementation fails:
- Save progress (mark completed tasks)
- Update spec with partial status
- Document the blocker
- Provide recommendations

If tests fail:
- Fix issues before marking complete
- Update spec if acceptance criteria need adjustment
- Document test failures

---

## Notes

- Work incrementally (one task at a time)
- Test frequently (after each task if possible)
- Commit early and often
- Update spec status accurately
- Cross-reference related specs
- For greenfield: use target_stack from state
- For brownfield: maintain existing patterns and stack
295
commands/speckit-plan.md
Normal file
@@ -0,0 +1,295 @@
---
description: Create implementation plan for a feature
---

# Spec Kit: Create Implementation Plan

Generate a detailed implementation plan for a feature.

## Input

**Feature name:** {{FEATURE_NAME}}
**Specification:** `specs/{{FEATURE_NAME}}.md`

## Process

### Step 1: Read Specification

Load the specification and understand:
- Current status (COMPLETE/PARTIAL/MISSING)
- Acceptance criteria
- Business rules
- Dependencies
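Pulling the current status out of a spec can be a single grep (a sketch; it assumes the `## Status` marker format used by these specs, and the fixture file is illustrative):

```shell
# Illustrative spec with a PARTIAL marker
tmp=$(mktemp -d) && cd "$tmp"
mkdir -p specs
printf '# Feature: photo-upload\n\n## Status\n⚠️ PARTIAL - API done, UI missing\n' > specs/photo-upload.md

# Grab the first status keyword from the spec (-m1 stops at the first matching line)
status=$(grep -m1 -o -E 'COMPLETE|PARTIAL|MISSING' specs/photo-upload.md)
echo "Current status: $status"
# → Current status: PARTIAL
```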
### Step 2: Assess Current State

What exists now?
- For PARTIAL: What's implemented vs missing?
- For MISSING: Starting from scratch?
- For COMPLETE: Why create a plan? (refactor? upgrade?)

### Step 3: Define Target State

What should exist after implementation?
- All acceptance criteria met
- All business rules enforced
- Tests passing
- Documentation updated

### Step 4: Determine Technical Approach

**For brownfield:**
- Use existing tech stack (from specification)
- Follow existing patterns
- Maintain consistency

**For greenfield:**
- Use target_stack from .stackshift-state.json
- Choose appropriate libraries
- Design architecture

**Outline approach:**
1. Backend changes needed
2. Frontend changes needed
3. Database changes needed
4. Configuration changes needed
5. Testing strategy

### Step 5: Break Down Into Phases

```markdown
## Implementation Phases

### Phase 1: Backend (Foundation)
- Database schema changes
- API endpoints
- Business logic
- Validation

### Phase 2: Frontend (UI)
- Page/route creation
- Component development
- API integration
- State management

### Phase 3: Testing
- Unit tests
- Integration tests
- E2E tests

### Phase 4: Polish
- Error handling
- Loading states
- Edge cases
- Documentation
```

### Step 6: Identify Risks

```markdown
## Risks & Mitigations

### Risk 1: [Description]
- **Impact:** [What could go wrong]
- **Probability:** High/Medium/Low
- **Mitigation:** [How to prevent/handle]

### Risk 2: [Description]
...
```

### Step 7: Define Success Criteria

```markdown
## Success Criteria

- [ ] All acceptance criteria from specification met
- [ ] Tests passing (unit, integration, E2E)
- [ ] No new bugs introduced
- [ ] Performance within acceptable range
- [ ] Code reviewed and approved
- [ ] Documentation updated
- [ ] Specification status updated to COMPLETE
```

---

## Output Template

Save to: `specs/{{FEATURE_NAME}}-impl.md`

```markdown
# Implementation Plan: {{FEATURE_NAME}}

**Feature Spec:** `specs/{{FEATURE_NAME}}.md`
**Created:** {{DATE}}
**Status:** {{CURRENT_STATUS}}
**Target:** ✅ COMPLETE

---

## Goal

[Clear statement of what needs to be accomplished]

## Current State

{{#if status == 'MISSING'}}
**Not started:**
- No implementation exists
- Starting from scratch
{{/if}}

{{#if status == 'PARTIAL'}}
**What exists:**
- ✅ [Component 1]
- ✅ [Component 2]

**What's missing:**
- ❌ [Component 3]
- ❌ [Component 4]
{{/if}}

## Target State

After implementation:
- All acceptance criteria met
- Full feature functionality
- Tests passing
- Production-ready

## Technical Approach

### Architecture

[Describe overall approach]

### Technology Choices

{{#if greenfield}}
**Stack:** [From .stackshift-state.json target_stack]
- Framework: [choice]
- Database: [choice]
- Libraries: [list]
{{/if}}

{{#if brownfield}}
**Existing Stack:** [From specification]
- Maintain consistency with current implementation
- Use existing patterns and libraries
{{/if}}

### Implementation Steps

1. **Backend Implementation**
   - Create/modify API endpoints
   - Implement business logic
   - Add database models/migrations
   - Add validation

2. **Frontend Implementation**
   - Create pages/routes
   - Build components
   - Integrate with backend API
   - Add state management

3. **Testing**
   - Write unit tests
   - Write integration tests
   - Write E2E tests

4. **Configuration**
   - Add environment variables
   - Update configuration files
   - Update routing

## Detailed Tasks

[High-level tasks - use `/speckit.tasks` to break down further]

### Backend
- [ ] Task 1
- [ ] Task 2

### Frontend
- [ ] Task 3
- [ ] Task 4

### Testing
- [ ] Task 5
- [ ] Task 6

## Risks & Mitigations

### Risk: [Description]
- **Impact:** [What could go wrong]
- **Mitigation:** [Prevention strategy]

## Dependencies

**Must be complete before starting:**
- [Dependency 1]
- [Dependency 2]

**Blocks these features:**
- [Feature that depends on this]

## Effort Estimate

- Backend: ~X hours
- Frontend: ~Y hours
- Testing: ~Z hours

**Total:** ~W hours

## Testing Strategy

### Unit Tests
- Test business logic in isolation
- Mock external dependencies
- Target: 80%+ coverage

### Integration Tests
- Test API endpoints with a real database (test DB)
- Verify data persistence
- Test error conditions

### E2E Tests
- Test complete user flows
- Critical paths must pass
- Use realistic data

## Success Criteria

- [ ] All acceptance criteria met
- [ ] All tests passing (X/X)
- [ ] No TypeScript/linting errors
- [ ] Code review approved
- [ ] Performance acceptable
- [ ] Security review passed (if sensitive)
- [ ] Documentation updated
- [ ] Specification status: ✅ COMPLETE

## Rollback Plan

If implementation fails:
- [How to undo changes]
- [Database rollback if needed]
- [Feature flag to disable]

---

**Ready for execution:** Use `/speckit.tasks` to generate a task checklist, then `/speckit.implement` to execute.
```

---

## Notes

- Plans should be detailed but not prescriptive about every line of code
- Leave room for implementation decisions
- Focus on what needs to be done, not exactly how
- Include enough detail for `/speckit.tasks` to generate atomic tasks
- Consider risks and dependencies
- For greenfield: use the target stack from configuration
- For brownfield: follow existing patterns
209
commands/speckit-specify.md
Normal file
@@ -0,0 +1,209 @@
---
description: Create new feature specification
---

# Spec Kit: Create Feature Specification

Create a new feature specification in GitHub Spec Kit format.

## Input

**Feature name:** {{FEATURE_NAME}}

## Process

### Step 1: Gather Requirements

Ask the user about the feature:

```
I'll help you create a specification for: {{FEATURE_NAME}}

Let's define this feature:

1. What does this feature do? (Overview)
2. Who is it for? (User type)
3. What problem does it solve? (Value proposition)
4. What are the key capabilities? (User stories)
5. How will we know it's done? (Acceptance criteria)
6. What are the business rules? (Validation, authorization, etc.)
7. What are the dependencies? (Other features required)
8. What's the priority? (P0/P1/P2/P3)
```

### Step 2: Define User Stories

Format: "As a [user type], I want [capability] so that [benefit]"

```markdown
## User Stories

- As a user, I want to [capability] so that [benefit]
- As a user, I want to [capability] so that [benefit]
- As an admin, I want to [capability] so that [benefit]
```

### Step 3: Define Acceptance Criteria

Specific, testable criteria:

```markdown
## Acceptance Criteria

- [ ] User can [action]
- [ ] System validates [condition]
- [ ] Error shown when [invalid state]
- [ ] Data persists after [action]
```

### Step 4: Define Business Rules

```markdown
## Business Rules

- BR-001: [Validation rule]
- BR-002: [Authorization rule]
- BR-003: [Business logic rule]
```

### Step 5: Check Route and Add Technical Details

**If brownfield (tech-prescriptive):**

Ask about implementation:
```
For the brownfield route, I need implementation details:

1. What API endpoints? (paths, methods)
2. What database models? (schema)
3. What UI components? (file paths)
4. What dependencies? (libraries, versions)
5. Implementation status? (COMPLETE/PARTIAL/MISSING)
```

Add a technical implementation section.

**If greenfield (tech-agnostic):**

Skip implementation details; focus on business requirements only.

### Step 6: Create Specification File

Save to: `specs/{{FEATURE_NAME}}.md`

**Template:**

```markdown
# Feature: {{FEATURE_NAME}}

## Status
[❌ MISSING | ⚠️ PARTIAL | ✅ COMPLETE]

## Overview
[Clear description of what this feature does]

## User Stories
- As a [user], I want [capability] so that [benefit]
- As a [user], I want [capability] so that [benefit]

## Acceptance Criteria
- [ ] Criterion 1
- [ ] Criterion 2
- [ ] Criterion 3

## Business Rules
- BR-001: [Rule description]
- BR-002: [Rule description]

## Non-Functional Requirements
- Performance: [requirement]
- Security: [requirement]
- Accessibility: [requirement]

{{#if brownfield}}
## Current Implementation

### Tech Stack
- Framework: [name version]
- Libraries: [list]

### API Endpoints
- [METHOD] /api/path
  - Handler: [file path]

### Database Schema
\`\`\`prisma
model Name {
  [schema]
}
\`\`\`

### Implementation Files
- [file path] ([description])
- [file path] ([description])

### Dependencies
- [library]: [version]
{{/if}}

## Dependencies
- [Other Feature] (required)
- [External Service] (required)
|
## Related Specifications
|
||||||
|
- [related-spec.md]
|
||||||
|
|
||||||
|
## Implementation Plan
|
||||||
|
See: `specs/{{FEATURE_NAME}}-impl.md`
|
||||||
|
|
||||||
|
## Notes
|
||||||
|
[Any additional context, gotchas, or considerations]
|
||||||
|
```
|
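Step 6 can be bootstrapped from the shell before the details are filled in. A minimal sketch, assuming a placeholder feature name and only a subset of the template's sections:

```shell
# Sketch only: scaffold a spec skeleton to fill in afterwards.
# FEATURE_NAME is a placeholder; the section list mirrors the template above.
FEATURE_NAME="example-feature"
mkdir -p specs
cat > "specs/${FEATURE_NAME}.md" <<EOF
# Feature: ${FEATURE_NAME}

## Status
❌ MISSING

## Overview
[Clear description of what this feature does]

## User Stories
- As a [user], I want [capability] so that [benefit]

## Acceptance Criteria
- [ ] Criterion 1

## Business Rules
- BR-001: [Rule description]
EOF
echo "Created specs/${FEATURE_NAME}.md"
```

The remaining sections (Non-Functional Requirements, Dependencies, and the brownfield block) would be added the same way once the route is known.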
### Step 7: Create Implementation Plan (if PARTIAL or MISSING)

If status is not COMPLETE, create an implementation plan:

`specs/{{FEATURE_NAME}}-impl.md`

See `/speckit.plan` for the implementation plan template.

---

## Output

```markdown
✅ Feature specification created

**File:** `specs/{{FEATURE_NAME}}.md`
**Status:** {{STATUS}}
**Priority:** {{PRIORITY}}

**Summary:**
- User stories: X
- Acceptance criteria: Y
- Business rules: Z
{{#if brownfield}}
- Implementation files: N
{{/if}}

{{#if status != 'COMPLETE'}}
**Implementation Plan:** `specs/{{FEATURE_NAME}}-impl.md`
{{/if}}

**Next Steps:**
- Review specification for accuracy
- Create related specifications if needed
- Use `/speckit.plan` to create/update the implementation plan
```

---

## Notes

- Follow GitHub Spec Kit format conventions
- Use status markers: ✅ ⚠️ ❌
- For brownfield: Include complete technical details
- For greenfield: Focus on business requirements only
- Cross-reference related specifications
- Create implementation plans for non-complete features
59
commands/speckit-tasks.md
Normal file
@@ -0,0 +1,59 @@
---
description: Generate actionable tasks from implementation plan for a feature
---

# Generate Tasks for Feature

Read the implementation plan and generate an atomic, actionable task list.

## Input

Feature directory: `specs/{{FEATURE_ID}}/`

## Process

1. **Read the plan:**
   ```bash
   cat specs/{{FEATURE_ID}}/plan.md
   ```

2. **Generate atomic tasks:**
   - Each task should be specific (exact file to create/modify)
   - Testable (has clear acceptance criteria)
   - Atomic (can be done in one step)
   - Ordered by dependencies

3. **Create tasks.md:**
   ```bash
   # Save to specs/{{FEATURE_ID}}/tasks.md
   ```

## Output Format

```markdown
# Tasks: {{FEATURE_NAME}}

Based on: specs/{{FEATURE_ID}}/plan.md

## Tasks

- [ ] Create component (path/to/file.tsx)
- [ ] Add API endpoint (path/to/route.ts)
- [ ] Write tests (path/to/test.ts)
- [ ] Update documentation

## Dependencies

1. Task X must complete before Task Y

## Estimated Effort

Total: ~X hours
```
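Once a tasks.md exists, checklist progress is greppable. A small sketch (the directory name `001-example` and the sample tasks are illustrative, not real output of this command):

```shell
# Sketch: count open vs. completed checkboxes in a generated tasks.md.
# The path is illustrative; real files live under specs/{{FEATURE_ID}}/.
TASKS_FILE="specs/001-example/tasks.md"
mkdir -p "$(dirname "$TASKS_FILE")"
printf -- '- [ ] Create component\n- [x] Add API endpoint\n- [ ] Write tests\n' > "$TASKS_FILE"

OPEN=$(grep -c '^- \[ \]' "$TASKS_FILE")
DONE=$(grep -c '^- \[x\]' "$TASKS_FILE")
echo "Open: $OPEN, Done: $DONE"   # → Open: 2, Done: 1
```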
## Notes

- Break down into atomic, testable tasks
- Include file paths
- Order by dependencies
- Each task completable independently where possible
338
commands/stackshift.review.md
Normal file
@@ -0,0 +1,338 @@
---
name: stackshift.review
description: Perform comprehensive code review across multiple dimensions - correctness, standards, security, performance, and testing. Returns APPROVED/NEEDS CHANGES/BLOCKED with specific feedback.
---

# Code Review

Comprehensive multi-dimensional code review with actionable feedback.

---

## Usage

```bash
# Review recent changes
/stackshift.review

# Review specific feature
/stackshift.review "pricing display feature"

# Review before deployment
/stackshift.review "OAuth2 implementation before production"
```

---

## What This Reviews

### 🔍 Correctness
- Does it work as intended?
- Are all requirements met?
- Logic errors or edge cases?
- Matches specification requirements?

### 📏 Standards Compliance
- Follows project conventions?
- Coding style consistent?
- Documentation adequate?
- Aligns with constitution principles?

### 🔒 Security Assessment
- No obvious vulnerabilities?
- Proper input validation?
- Secure data handling?
- Authentication/authorization correct?

### ⚡ Performance Review
- Efficient implementation?
- Resource usage reasonable?
- Scalability considerations?
- Database queries optimized?

### 🧪 Testing Validation
- Adequate test coverage?
- Edge cases handled?
- Error conditions tested?
- Integration tests included?

---

## Review Process

### Step 1: Identify What Changed

```bash
echo "🔍 Identifying changes to review..."

# Get recent changes
git diff HEAD~1 --name-only > changed-files.txt

# Show files to review
echo "Files to review:"
cat changed-files.txt | sed 's/^/  - /'

# Count by type
BACKEND_FILES=$(grep -E "api/|src/.*\.ts$|lib/" changed-files.txt | wc -l)
FRONTEND_FILES=$(grep -E "site/|pages/|components/.*\.tsx$" changed-files.txt | wc -l)
TEST_FILES=$(grep -E "\.test\.|\.spec\.|__tests__" changed-files.txt | wc -l)

echo ""
echo "📊 Changes breakdown:"
echo "  Backend: $BACKEND_FILES files"
echo "  Frontend: $FRONTEND_FILES files"
echo "  Tests: $TEST_FILES files"
echo ""
```

### Step 2: Review Each Dimension

```bash
echo "🔍 Performing multi-dimensional review..."
echo ""

# Correctness Check
echo "✓ Correctness Review"

# Check if tests exist for changed files
for file in $(cat changed-files.txt); do
  if [[ ! "$file" =~ \.test\. && ! "$file" =~ \.spec\. ]]; then
    # Look for a corresponding test file (only covers .ts; extend for .tsx etc.)
    TEST_FILE="${file%.ts}.test.ts"
    if [ ! -f "$TEST_FILE" ]; then
      echo "  ⚠️ Missing tests for: $file"
      echo "ISSUE: No test coverage for $file" >> review-issues.log
    fi
  fi
done

# Check against spec requirements
echo "  Checking against specifications..."
# Implementation reviews code against spec requirements

echo ""

# Standards Compliance
echo "✓ Standards Review"

# Check for common issues
grep -rn "console.log\|debugger" $(cat changed-files.txt) > debug-statements.log 2>/dev/null
if [ -s debug-statements.log ]; then
  echo "  ⚠️ Debug statements found:"
  cat debug-statements.log
  echo "ISSUE: Debug statements in production code" >> review-issues.log
fi

# Check for TODO/FIXME
grep -rn "TODO\|FIXME\|XXX\|HACK" $(cat changed-files.txt) > todos.log 2>/dev/null
if [ -s todos.log ]; then
  echo "  ⚠️ Unresolved TODOs found:"
  cat todos.log
  echo "ISSUE: Unresolved TODO comments" >> review-issues.log
fi

echo ""

# Security Assessment
echo "✓ Security Review"

# Check for common security issues
grep -rn "eval(\|innerHTML\|dangerouslySetInnerHTML" $(cat changed-files.txt) > security-issues.log 2>/dev/null
if [ -s security-issues.log ]; then
  echo "  ❌ Security concerns found:"
  cat security-issues.log
  echo "CRITICAL: Security vulnerability" >> review-issues.log
fi

# Check for hardcoded secrets/credentials
grep -rn "password.*=\|api.*key.*=\|secret.*=" $(cat changed-files.txt) > credentials.log 2>/dev/null
if [ -s credentials.log ]; then
  echo "  ❌ Potential hardcoded credentials:"
  cat credentials.log
  echo "CRITICAL: Hardcoded credentials" >> review-issues.log
fi

echo ""

# Performance Review
echo "✓ Performance Review"

# Check for common performance issues
grep -rn "for.*in.*forEach\|while.*push\|map.*filter.*map" $(cat changed-files.txt) > performance-issues.log 2>/dev/null
if [ -s performance-issues.log ]; then
  echo "  ⚠️ Potential performance issues:"
  head -5 performance-issues.log
  echo "ISSUE: Inefficient loops or chaining" >> review-issues.log
fi

echo ""

# Testing Validation
echo "✓ Testing Review"

# Check test quality
if [[ "$TEST_FILES" -gt 0 ]]; then
  # Check for proper test structure
  grep -rn "describe\|it(\|test(" $(grep "\.test\.\|\.spec\." changed-files.txt) > test-structure.log 2>/dev/null
  TEST_COUNT=$(grep -c "it(\|test(" test-structure.log 2>/dev/null)
  TEST_COUNT=${TEST_COUNT:-0}

  echo "  Test cases found: $TEST_COUNT"

  if [[ "$TEST_COUNT" -lt 5 ]]; then
    echo "  ⚠️ Low test coverage detected"
    echo "ISSUE: Insufficient test cases (< 5)" >> review-issues.log
  fi
else
  echo "  ⚠️ No tests added for changes"
  echo "ISSUE: No test files modified" >> review-issues.log
fi

echo ""
```
### Step 3: Generate Review Report

```bash
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "📋 Review Report"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""

TOTAL_ISSUES=$(wc -l < review-issues.log 2>/dev/null || echo "0")
CRITICAL_ISSUES=$(grep -c "CRITICAL" review-issues.log 2>/dev/null)
CRITICAL_ISSUES=${CRITICAL_ISSUES:-0}

if [[ "$TOTAL_ISSUES" == "0" ]]; then
  echo "### ✅ APPROVED"
  echo ""
  echo "**Strengths:**"
  echo "- All quality checks passed"
  echo "- Spec compliance validated"
  echo "- Security review clean"
  echo "- Performance acceptable"
  echo "- Test coverage adequate"
  echo ""
  echo "**Decision:** Ready for next phase/deployment"
else
  if [[ "$CRITICAL_ISSUES" -gt 0 ]]; then
    echo "### 🚫 BLOCKED"
    echo ""
    echo "**Critical Issues Found:** $CRITICAL_ISSUES"
    echo ""
    echo "Critical issues must be resolved before proceeding:"
  else
    echo "### 🔄 NEEDS CHANGES"
    echo ""
    echo "**Issues Found:** $TOTAL_ISSUES"
    echo ""
    echo "Issues requiring attention:"
  fi

  echo ""
  cat review-issues.log | sed 's/^/  - /'
  echo ""

  echo "**Recommendations:**"
  echo "  1. Address critical security/spec violations first"
  echo "  2. Run /stackshift.validate --fix to auto-resolve technical issues"
  echo "  3. Add missing test coverage"
  echo "  4. Remove debug statements and TODOs"
  echo "  5. Re-run review after fixes"
  echo ""

  if [[ "$CRITICAL_ISSUES" -gt 0 ]]; then
    echo "**Decision:** BLOCKED - Cannot proceed until critical issues resolved"
  else
    echo "**Decision:** NEEDS CHANGES - Address issues before finalizing"
  fi
fi

echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"

# Cleanup
rm -f changed-files.txt debug-statements.log todos.log security-issues.log \
      credentials.log performance-issues.log test-structure.log review-issues.log
```
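The branching above reduces to a two-input rule on the issue counts. A standalone sketch of that rule (the function name is ours; the thresholds match the script):

```shell
# Sketch: the report's decision rule as a reusable function.
# Inputs mirror $TOTAL_ISSUES and $CRITICAL_ISSUES from the script above.
review_decision() {
  local total="$1" critical="$2"
  if [ "$total" -eq 0 ]; then
    echo "APPROVED"
  elif [ "$critical" -gt 0 ]; then
    echo "BLOCKED"
  else
    echo "NEEDS CHANGES"
  fi
}

review_decision 0 0   # → APPROVED
review_decision 3 0   # → NEEDS CHANGES
review_decision 3 1   # → BLOCKED
```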
---

## Review Categories

### ✅ APPROVED
- All critical issues resolved
- Meets quality standards
- Ready for next phase/deployment

### 🔄 NEEDS CHANGES
- Issues found requiring fixes
- Specific feedback provided
- Return to implementation

### 🚫 BLOCKED
- Fundamental issues requiring redesign
- Critical security vulnerabilities
- Major spec violations

---

## Quality Standards

- **No rubber stamping** - Actually review, don't just approve
- **Specific feedback** - Actionable recommendations with line numbers
- **Priority classification** - Critical, Important, Suggestion
- **Evidence-based** - Cite specific files, lines, patterns

---

## Integration with StackShift

**Auto-runs after:**
- Gear 6 completion
- `/stackshift.validate` finds issues
- Before final commit

**Manual usage:**
```bash
# Before committing
/stackshift.review

# After implementing feature
/stackshift.review "vehicle details feature"

# Before deployment
/stackshift.review "all changes since last release"
```

---

## Example Output

```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📋 Review Report
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

### 🔄 NEEDS CHANGES

**Issues Found:** 3

Issues requiring attention:

  - ISSUE: Debug statements in production code
  - ISSUE: No test coverage for api/handlers/pricing.ts
  - ISSUE: Inefficient loops or chaining in data-processor.ts

**Recommendations:**
  1. Remove console.log from api/handlers/pricing.ts:45
  2. Add test file: api/handlers/pricing.test.ts
  3. Refactor nested loops in data-processor.ts:78-82
  4. Re-run review after fixes

**Decision:** NEEDS CHANGES - Address issues before finalizing

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```

---

**Quality assurance before every finalization!**
358
commands/stackshift.validate.md
Normal file
@@ -0,0 +1,358 @@
---
name: stackshift.validate
description: Systematically validate implementation against specifications. Runs tests, TypeScript checks, and spec compliance validation. Use after implementation to ensure quality before finalizing.
---

# Validate Implementation

Comprehensive validation of implementation against specifications with automatic fixing capability.

---

## Usage

```bash
# Run full validation
/stackshift.validate

# Run with automatic fixes
/stackshift.validate --fix

# Focus on specific feature
/stackshift.validate --feature=vehicle-details

# TypeScript check only
/stackshift.validate --type-check-only
```

---

## What This Does

**Phase 1: Assessment**
1. Run full test suite
2. Run TypeScript compilation
3. Categorize failures (imports, types, spec violations, mocks)
4. Cross-reference against specifications

**Phase 2: Spec Compliance**
1. Validate implementation matches spec requirements
2. Check for missing specified features
3. Verify API contracts are implemented
4. Assess priorities (spec violations are P1)

**Phase 3: Resolution (if --fix mode)**
1. Fix import/export issues
2. Fix type mismatches (aligned with spec)
3. Fix specification compliance gaps
4. Fix test mocks
5. Validate after each fix
6. Roll back if fixes break things

**Phase 4: Final Validation**
1. Re-run all tests
2. Verify TypeScript compilation
3. Confirm spec compliance
4. Generate quality report

---

## Command Execution

### Phase 1: Comprehensive Assessment

```bash
echo "🚀 Starting Implementation Validation"
echo ""

# Run test suite
echo "🧪 Running test suite..."
npm test 2>&1 | tee test-results.log

# Extract statistics
TOTAL_TESTS=$(grep -o "[0-9]* tests" test-results.log | head -1 || echo "0")
FAILED_TESTS=$(grep -o "[0-9]* failed" test-results.log || echo "0")
PASSED_TESTS=$(grep -o "[0-9]* passed" test-results.log || echo "0")

echo "📊 Test Results:"
echo "  Total: $TOTAL_TESTS"
echo "  Passed: $PASSED_TESTS"
echo "  Failed: $FAILED_TESTS"
echo ""

# Run TypeScript validation
echo "🔍 Running TypeScript validation..."
npx tsc --noEmit 2>&1 | tee typescript-results.log

TS_ERRORS=$(grep -c "error TS" typescript-results.log 2>/dev/null)
TS_ERRORS=${TS_ERRORS:-0}
echo "📊 TypeScript Results:"
echo "  Errors: $TS_ERRORS"
echo ""
```
### Phase 2: Specification Validation

```bash
echo "📋 Validating against specifications..."
echo ""

# Find all spec files
SPEC_DIR=".specify/memory/specifications"
if [ ! -d "$SPEC_DIR" ]; then
  SPEC_DIR="specs"
fi

# For each spec, check implementation
for spec in $(find $SPEC_DIR -name "*.md"); do
  # Use the directory name for spec.md files, the file name otherwise
  SPEC_NAME=$(basename "$spec" .md)
  if [ "$SPEC_NAME" = "spec" ]; then
    SPEC_NAME=$(basename "$(dirname "$spec")")
  fi

  echo "🔍 Checking: $SPEC_NAME"

  # Extract required files/components from spec
  # Look for "Files:" or "Implementation Status:" sections
  grep -A 20 "^## Files\|^## Implementation Status" "$spec" | \
    grep -oE '`[^`]*\.(ts|tsx|js|jsx|py|go)`' | \
    sed 's/`//g' > required-files-$SPEC_NAME.txt

  # Check if required files exist
  while read file; do
    if [ ! -f "$file" ]; then
      echo "  ❌ Missing file: $file"
      echo "SPEC_VIOLATION: $SPEC_NAME missing $file" >> spec-violations.log
    fi
  done < required-files-$SPEC_NAME.txt

  # Clean up temp file
  rm required-files-$SPEC_NAME.txt
done

SPEC_VIOLATIONS=$(wc -l < spec-violations.log 2>/dev/null || echo "0")
echo ""
echo "📊 Specification Compliance:"
echo "  Violations: $SPEC_VIOLATIONS"
echo ""
```
### Phase 3: Categorize Issues

```bash
echo "📋 Categorizing failures..."
echo ""

# Extract import/export errors
grep -n "Cannot find module\|Module not found\|has no exported member" \
  test-results.log typescript-results.log 2>/dev/null > import-errors.log

# Extract type mismatch errors
grep -n "Type.*is not assignable\|Property.*does not exist\|Argument of type" \
  typescript-results.log 2>/dev/null > type-errors.log

# Extract test assertion failures
grep -n "AssertionError\|Expected.*but received\|toBe\|toEqual" \
  test-results.log 2>/dev/null > test-failures.log

IMPORT_ERRORS=$(wc -l < import-errors.log 2>/dev/null || echo "0")
TYPE_ERRORS=$(wc -l < type-errors.log 2>/dev/null || echo "0")
TEST_FAILURES=$(wc -l < test-failures.log 2>/dev/null || echo "0")

echo "📊 Issue Breakdown:"
echo "  P1 - Spec Violations: $SPEC_VIOLATIONS (highest priority)"
echo "  P2 - Type Errors: $TYPE_ERRORS"
echo "  P3 - Import Errors: $IMPORT_ERRORS"
echo "  P4 - Test Failures: $TEST_FAILURES"
echo ""
```
### Phase 4: Fix Mode (if --fix)

```bash
# Only run if --fix flag provided
if [[ "$FIX_MODE" == "true" ]]; then
  echo "🔧 Automatic fix mode enabled"
  echo "⚠️ Creating backup..."

  # Backup current state
  git stash push -m "stackshift-validate backup $(date +%Y%m%d-%H%M%S)"

  echo ""
  echo "🔧 Fixing issues in priority order..."
  echo ""

  # P1: Fix spec violations first
  if [[ "$SPEC_VIOLATIONS" != "0" ]]; then
    echo "🔧 P1: Resolving specification violations..."

    # Read spec violations and attempt to implement missing files/features
    while read violation; do
      echo "  Fixing: $violation"
      # Implementation would add missing files based on spec requirements
    done < spec-violations.log

    # Re-validate
    echo "  Re-checking spec compliance..."
    # Re-run spec validation
  fi

  # P2: Fix type errors
  if [[ "$TYPE_ERRORS" != "0" ]]; then
    echo "🔧 P2: Resolving type errors..."

    # Show first few type errors for context
    head -10 type-errors.log

    echo "  Analyzing type mismatches against spec..."
    # Implementation would fix types to match spec definitions
  fi

  # P3: Fix import errors
  if [[ "$IMPORT_ERRORS" != "0" ]]; then
    echo "🔧 P3: Resolving import errors..."

    # Show import errors
    cat import-errors.log

    echo "  Adding missing exports..."
    # Implementation would add missing exports
  fi

  # P4: Fix test failures
  if [[ "$TEST_FAILURES" != "0" ]]; then
    echo "🔧 P4: Resolving test failures..."

    # Show test failures
    head -10 test-failures.log

    echo "  Fixing test assertions..."
    # Implementation would fix failing tests
  fi

  echo ""
  echo "🔄 Re-running validation after fixes..."

  # Re-run tests and type check
  npm test 2>&1 | tee final-test-results.log
  npx tsc --noEmit 2>&1 | tee final-ts-results.log

  FINAL_FAILED=$(grep -o "[0-9]* failed" final-test-results.log || echo "0")
  FINAL_TS_ERRORS=$(grep -c "error TS" final-ts-results.log 2>/dev/null)
  FINAL_TS_ERRORS=${FINAL_TS_ERRORS:-0}

  if [[ "$FINAL_FAILED" == "0" && "$FINAL_TS_ERRORS" == "0" ]]; then
    echo "✅ All issues resolved!"
    echo "🎉 Implementation validated successfully"
  else
    echo "❌ Some issues remain"
    echo "  Failed tests: $FINAL_FAILED"
    echo "  Type errors: $FINAL_TS_ERRORS"
    echo ""
    echo "🔄 Rolling back changes..."
    git stash pop
    exit 1
  fi
else
  echo "ℹ️ Run with --fix to automatically resolve issues"
fi
```
### Final Report

```bash
echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "📊 Validation Summary"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""

if [[ "$SPEC_VIOLATIONS" == "0" && "$TYPE_ERRORS" == "0" && \
      "$IMPORT_ERRORS" == "0" && "$TEST_FAILURES" == "0" ]]; then
  echo "✅ VALIDATION PASSED"
  echo ""
  echo "  All tests passing: ✅"
  echo "  TypeScript compiling: ✅"
  echo "  Spec compliance: ✅"
  echo "  Code quality: ✅"
  echo ""
  echo "🚀 Implementation is production-ready!"
else
  echo "⚠️ VALIDATION ISSUES FOUND"
  echo ""
  echo "  Spec Violations: $SPEC_VIOLATIONS"
  echo "  Type Errors: $TYPE_ERRORS"
  echo "  Import Errors: $IMPORT_ERRORS"
  echo "  Test Failures: $TEST_FAILURES"
  echo ""
  echo "💡 Recommendations:"
  echo "  1. Run with --fix to auto-resolve issues"
  echo "  2. Review spec-violations.log for spec compliance gaps"
  echo "  3. Run /stackshift.review for detailed code review"
fi

echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"

# Cleanup temp files
rm -f test-results.log typescript-results.log import-errors.log \
      type-errors.log test-failures.log spec-violations.log \
      final-test-results.log final-ts-results.log
```

---

## Options

- `--fix` - Automatically attempt to fix identified issues
- `--feature=<name>` - Focus validation on a specific feature
- `--spec-first` - Prioritize spec compliance (default)
- `--type-check-only` - Only run TypeScript validation
- `--no-rollback` - Disable automatic rollback on failures
---

## Success Criteria

✅ All tests pass (0 failures)
✅ TypeScript compiles (0 errors)
✅ Spec compliance (0 violations)
✅ Quality gates passed

---

## Integration with StackShift

**Auto-runs after Gear 6:**
```
Gear 6: Implement features ✅
    ↓
Gear 6.5: Validate & Review
    1. /stackshift.validate --fix
    2. /stackshift.review (if issues found)
    3. /stackshift.coverage (generate coverage map)
    ↓
Complete with confidence! 🎉
```

**Manual usage anytime:**
```bash
# Before committing
/stackshift.validate

# Before pull request
/stackshift.validate --fix

# Check specific feature
/stackshift.validate --feature=pricing-display
```

---

## Principles

1. **Specification Supremacy** - Specs are the source of truth
2. **Zero Tolerance** - ALL tests must pass, ALL types must compile
3. **No Configuration Shortcuts** - Fix the implementation, not the configs
4. **Progressive Fix Strategy** - Address spec violations first
5. **Safety First** - Automatic rollback on fix failures
6. **Comprehensive Reporting** - Clear categorization and progress tracking

---

**This ensures every implementation is validated against specifications before being marked complete!**
171
commands/start.md
Normal file
@@ -0,0 +1,171 @@
---
description: Start StackShift reverse engineering process - analyzes the codebase, auto-detects the application type (monorepo service, Nx app, etc.), and guides you through the 6-gear transformation to spec-driven development. Choose Greenfield (tech-agnostic, for migration) or Brownfield (tech-prescriptive, for maintenance).
---

# StackShift: Reverse Engineering Toolkit

**Start the 6-gear reverse engineering process**

Transform your application into a fully-specified, spec-driven codebase.

---

## Auto-Detection

StackShift will automatically detect your widget/module type:

- **service-*** → Monorepo Service (in `services/` directory)
- **shared-*** → Shared Package (in `packages/` directory)
- **Nx project in `apps/`** → Nx Application
- **Turborepo package** → Turborepo Package
- **Other** → Generic application (user chooses Greenfield or Brownfield)
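The heuristics above can be sketched as a small shell function. This is a simplified illustration only; the actual detection logic runs inside the analyze skill and checks more signals:

```shell
# Simplified sketch of the auto-detection heuristics (illustrative only).
detect_app_type() {
  parent=$(basename "$(dirname "$PWD")")
  if [ "$parent" = "services" ] || [ "$parent" = "apps" ]; then
    echo "monorepo-service"
  elif [ -f "nx.json" ] || [ -f "../../nx.json" ]; then
    echo "nx-app"
  elif [ -f "turbo.json" ] || [ -f "../../turbo.json" ]; then
    echo "turborepo-package"
  else
    echo "generic"
  fi
}
```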

---

## Quick Start

**Just run the analyze skill to begin:**

```
I want to reverse engineer this application.
```

Or be specific:

```
Analyze this codebase for Greenfield migration to Next.js.
```

---

## The 6-Gear Process

Once analysis starts, you'll shift through these gears:

### 🔍 Gear 1: Analyze
- Auto-detect widget/module type
- Detect tech stack and architecture
- Assess completeness
- Choose route (or auto-selected)
- Configure workflow options

### 🔄 Gear 2: Reverse Engineer
- Extract comprehensive documentation (8-9 files)
- Business logic extraction
- For monorepo: Include shared packages
- For Greenfield: Tech-agnostic requirements
- For Brownfield: Tech-prescriptive implementation

### 📋 Gear 3: Create Specifications
- Initialize GitHub Spec Kit (`.specify/`)
- Create Constitution (project principles)
- Generate feature specifications
- Create implementation plans
- Install `/speckit.*` slash commands

### 🔎 Gear 4: Gap Analysis
- **Greenfield:** Validate spec completeness, ask about target stack
- **Brownfield:** Run `/speckit.analyze`, find implementation gaps
- Identify clarification needs
- Prioritize features (P0/P1/P2/P3)
- Create implementation roadmap

### ✨ Gear 5: Complete Specification
- Resolve `[NEEDS CLARIFICATION]` markers
- Interactive Q&A session
- Use `/speckit.clarify` for structured clarification
- Finalize all specifications
- Ensure no ambiguities remain

### 🚀 Gear 6: Implement
- **Greenfield:** Build NEW app in chosen tech stack
- **Brownfield:** Fill gaps in existing implementation
- Use `/speckit.tasks` for task breakdown
- Use `/speckit.implement` for execution
- Validate with `/speckit.analyze`

---

## Routes Available

| Route | Auto-Detect | Purpose |
|-------|-------------|---------|
| **greenfield** | Generic app | Extract business logic, rebuild in new stack |
| **brownfield** | Generic app | Spec existing codebase, manage with Spec Kit |
| **monorepo-service** | services/* | Extract service + shared packages |
| **nx-app** | has nx.json | Extract Nx app + project config |

## Workflow Options

**Manual Mode:**
- Review each gear before proceeding
- You control the pace
- Good for first-time users

**Cruise Control:**
- Shift through all gears automatically
- Hands-free execution
- Good for experienced users or overnight runs
- Configure: clarifications strategy, implementation scope

---

## Additional Commands

After completing Gears 1-6:

- **`/stackshift.modernize`** - Brownfield Upgrade Mode (dependency modernization)
- **`/speckit.*`** - GitHub Spec Kit commands (auto-installed in Gear 3)

---

## Prerequisites

- Git repository with code to analyze
- Claude Code with plugin support
- ~2-4 hours for the complete process (or use Cruise Control)

---

## Examples

**Monorepo migration:**
```
I want to reverse engineer ws-vehicle-details for migration to Next.js.
```

**Legacy app spec creation:**
```
Analyze this Java Spring app and create specifications for ongoing management.
```

**Nx application extraction:**
```
Analyze this V9 Velocity widget and extract the business logic.
```

---

## Starting Now

**I'm now going to analyze this codebase and begin the StackShift process!**

Here's what I'll do:

1. ✅ Auto-detect application type (monorepo-service, nx-app, generic, etc.)
2. ✅ Detect tech stack and architecture
3. ✅ Assess completeness
4. ✅ Determine or ask for the route
5. ✅ Set up workflow configuration
6. ✅ Begin Gear 1: Analyze

Let me start by analyzing this codebase... 🚗💨

---

**Now beginning StackShift Gear 1: Analyze...**
96
commands/version.md
Normal file
@@ -0,0 +1,96 @@
---
description: Show installed StackShift version and check for updates
---

# StackShift Version Check

**Current Installation:**

```bash
# Show the installed version
jq -r '.version' ~/.claude/plugins/cache/stackshift/.claude-plugin/plugin.json 2>/dev/null || echo "StackShift not installed"
```

**Latest Release:**

Check the latest version at: https://github.com/jschulte/stackshift/releases/latest
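
To compare the installed version against the latest release tag, a small helper using GNU/BSD `sort -V` works. This is a sketch, not part of the plugin; the version strings below are examples:

```shell
# Returns success (exit 0) when $1 is an older version than $2.
version_lt() {
  [ "$1" != "$2" ] && [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

if version_lt "1.3.0" "1.6.0"; then
  echo "Update available: 1.3.0 -> 1.6.0"
fi
```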

---

## Installation Info

**Repository:** github.com/jschulte/stackshift

**Installed From:**
```bash
# Check installation directory
ls -la ~/.claude/plugins/cache/stackshift
```

---

## Update Instructions

### If Installed from Marketplace

**Important:** Update the marketplace FIRST, then update the plugin!

```bash
# Step 1: Update the marketplace
/plugin marketplace update jschulte

# Step 2: Update StackShift
/plugin update stackshift

# Step 3: Restart Claude Code
```

### If Installed Locally

```bash
cd ~/git/stackshift
git pull origin main
./install-local.sh
```

### Force Reinstall

```bash
# Remove the old installation
/plugin uninstall stackshift

# Reinstall from the marketplace
/plugin marketplace add jschulte/claude-plugins
/plugin install stackshift

# Or install locally
git clone https://github.com/jschulte/stackshift.git
cd stackshift
./install-local.sh
```

---

## Version History

- **v1.3.0** (2025-11-18) - Batch session persistence, directory-scoped sessions
- **v1.2.0** (2025-11-17) - Brownfield Upgrade Mode, multi-framework support
- **v1.1.1** (2025-11-16) - Monorepo detection improvements
- **v1.1.0** (2025-11-15) - Added monorepo detection
- **v1.0.0** (2025-11-14) - Initial release

---

## What's New in Latest Version

### v1.3.0 Features

🎯 **Cross-batch answer persistence** - Answer configuration questions once; they are automatically applied to all repos across all batches

📁 **Directory-scoped sessions** - Run multiple simultaneous batches in different directories without conflicts

🔍 **Auto-discovery** - Agents automatically find the parent batch configuration

⚡ **Time savings** - Save 58 minutes on 90-repo batches!

**Full changelog:** https://github.com/jschulte/stackshift/releases/tag/v1.3.0
185
plugin.lock.json
Normal file
@@ -0,0 +1,185 @@
{
  "$schema": "internal://schemas/plugin.lock.v1.json",
  "pluginId": "gh:jschulte/claude-plugins:stackshift",
  "normalized": {
    "repo": null,
    "ref": "refs/tags/v20251128.0",
    "commit": "1a5131ee2df027bf01ef5595d1767b61b8c95c18",
    "treeHash": "a98323d6e2f94ffae92e20ca47e5466ddb20e37f2a216559594c69787ca9b701",
    "generatedAt": "2025-11-28T10:19:20.337428Z",
    "toolVersion": "publish_plugins.py@0.2.0"
  },
  "origin": {
    "remote": "git@github.com:zhongweili/42plugin-data.git",
    "branch": "master",
    "commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
    "repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
  },
  "manifest": {
    "name": "stackshift",
    "description": "Reverse engineering toolkit that transforms any application into a fully-specified, spec-driven codebase through a 6-gear process. Auto-detects app type (monorepo service, Nx app, etc.) then choose route: Greenfield (tech-agnostic for migration) or Brownfield (tech-prescriptive for maintenance). Includes Gear 6.5 validation, code review, and coverage mapping.",
    "version": "1.6.0"
  },
  "content": {
    "files": [
      { "path": "README.md", "sha256": "3228b5fa4ff6713d7d12c741f3a40e5e1376f53599b05402530cfef5564cfaa5" },
      { "path": "agents/README.md", "sha256": "5481430eec3f4d4a7afba52fe72cb38ebb58df57c9077cffa9b36888ea26f183" },
      { "path": "agents/stackshift-technical-writer/AGENT.md", "sha256": "3f08ed687eb822789b5c6ac5a9f1a6add4912f63ef290336009ed6bef526f339" },
      { "path": "agents/stackshift-code-analyzer/AGENT.md", "sha256": "0370f464b764b5de064334cc905db1aefba8f39a1210c45cdc69d7afa964392f" },
      { "path": "agents/feature-brainstorm/AGENT.md", "sha256": "bf77d98de4402910b24c6e6411cf765ccb1d588cc0d914efbcab0c9c4e8cfa54" },
      { "path": ".claude-plugin/plugin.json", "sha256": "9f7484cba188b929bbbabf015e91fcd68ca514b1291bb8e40a7fe7f16e08d1b5" },
      { "path": "commands/speckit-specify.md", "sha256": "fb5ac43cf69da37ea64e135979e1dcdae532472bbb256dc6e4ec45818302aa91" },
      { "path": "commands/setup.md", "sha256": "1c6a259deb21e316a17dd9697ae65a1f5faa54dd2f950a4c583efccc5fa3c9d2" },
      { "path": "commands/speckit-clarify.md", "sha256": "76bd5b75c4216f945678c26e463d92a78ee9f3b09c4c165c92d732048827a5fd" },
      { "path": "commands/stackshift.validate.md", "sha256": "edb6b8ca1c27885624611c83a0c16456f0d56894a72353c558981e926cd9d83e" },
      { "path": "commands/modernize.md", "sha256": "3f75b0b825ed7706827ae6d458948602c3a38895bf6229882232edefecfe6acc" },
      { "path": "commands/speckit-plan.md", "sha256": "6f22a93a4ac5c7f4a438ea70af46b95dcfc6c490b5e4cb00b2d43bc524ae68ec" },
      { "path": "commands/version.md", "sha256": "1eb32eec68f9b5576a9923912ec7d0c528f3f87796859cb896aafb297ce3512d" },
      { "path": "commands/speckit-analyze.md", "sha256": "c8d9a3b8509b8ffb06dc36cf56c571397ebde0e0630928e3821619a49ef229cf" },
      { "path": "commands/speckit-tasks.md", "sha256": "edb9d3733fc018e332ae472475c100552c8f5cf172a01d9f2757ba0afbb9e928" },
      { "path": "commands/stackshift.review.md", "sha256": "8004e769623a2595e9dd3fd8b0523077a11727f7cae64cbde9e62d1f648d1413" },
      { "path": "commands/coverage.md", "sha256": "bf1d8cd3a21aec1fbe47e773c52844635b3b0cf917b53503773caae9a55ac500" },
      { "path": "commands/start.md", "sha256": "634a6828755a2a93e5c67af8923b139cb4838e088d08a0aa97dce0f0ba163b65" },
      { "path": "commands/speckit-implement.md", "sha256": "8fa4ff5d413485ef303242d37717b2ce314b972f516bd26ad7ca6a71e496b53f" },
      { "path": "commands/batch.md", "sha256": "7792fbafe7bb46e66e4c3607c0ec7039fd70e1ec563a79000f1fe7cb7b68892d" },
      { "path": "skills/modernize/SKILL.md", "sha256": "215be4f6dd764ec4c3755bd572e4d5d4c03f7cf0266084af2dad4a909003e8eb" },
      { "path": "skills/convert-to-speckit/SKILL.md", "sha256": "e2b1afb58bc3f8cade4612152445991da22cc49c2a6d4fe617cdf1968c3ad0fb" },
      { "path": "skills/.claude/SKILL_HELPERS.md", "sha256": "a64c056ae1e26446812ab02c15483b0fefccc1a6a11b255e7ea19760d5024696" },
      { "path": "skills/complete-spec/SKILL.md", "sha256": "55b2a7ae0d992f34336b39c7fe023ff05a074eb18cef3f655b90ee76034a9611" },
      { "path": "skills/implement/SKILL.md", "sha256": "beaab5376c5260aa3f02b563b2a6c74ebb34916bdd547a4de9ab0d76ecf3de7e" },
      { "path": "skills/implement/operations/handoff.md", "sha256": "8bb3e64ab3770ee26e0532ad8c9194a687691d1057c2df8720cdddeb82fde204" },
      { "path": "skills/analyze/SKILL.md", "sha256": "4c4669e6a88ba8ebccda02486205e36b35dde3209cad79e3f4b08812b38022f1" },
      { "path": "skills/analyze/batch-session-state.ts", "sha256": "eea592bbb4cf93865408b21a5e2679a77faaa26218ec76f52358b3d0b3545288" },
      { "path": "skills/analyze/operations/documentation-scan.md", "sha256": "0eef04d4bb5bba16cc576fb751558e516b7e508395ec7474b59850236a733440" },
      { "path": "skills/analyze/operations/generate-report.md", "sha256": "8bfe27b1e8e74d349d77bf7fafbdef19e4eb32741d9945f1840f0c2337b17be4" },
      { "path": "skills/analyze/operations/detect-stack.md", "sha256": "29a5e26c1d398072054cac099832bb075661e22ca6e65c58190e26433c1cb028" },
      { "path": "skills/analyze/operations/directory-analysis.md", "sha256": "8fea91693e7a19f5266778d44353da339930289195d5c484ec1dd0593cb61e4c" },
      { "path": "skills/analyze/operations/completeness-assessment.md", "sha256": "daca92ec5bcdc273e8cbd1ff6e0b9107d2eee0bf0392185642eafa5101112f60" },
      { "path": "skills/create-specs/SKILL.md", "sha256": "c92c3e06b784f08a98e963b1ff68bc9f5e53e8be20ee0e14eeb208986a5e5c2c" },
      { "path": "skills/gap-analysis/SKILL.md", "sha256": "ba5658df9efb7dde8ec2dd45a0ffed5b60178f17c0eb0438df131c7ddba057c3" },
      { "path": "skills/reverse-engineer/SKILL.md", "sha256": "7e8eaf8d8bf6cb5b72faa9533809b952e2e27b33ec8b8cfff05bc6ea4eb17f0f" },
      { "path": "skills/spec-coverage-map/SKILL.md", "sha256": "4bea839988ec3354eb352e91e70fe0e4129b5dfa535cd9aa3e71a6df18bca1dd" },
      { "path": "skills/cruise-control/SKILL.md", "sha256": "a7e7d46cda2bc14150e1a71024932c85a76c1145c02f44930a13a0404ed95fde" }
    ],
    "dirSha256": "a98323d6e2f94ffae92e20ca47e5466ddb20e37f2a216559594c69787ca9b701"
  },
  "security": {
    "scannedAt": null,
    "scannerVersion": null,
    "flags": []
  }
}
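
A consumer of this lock file can verify an individual file against its recorded hash. The `verify_entry` helper below is a hypothetical sketch, not part of the toolkit:

```shell
# Check one file's sha256 against the lock file's content.files entry.
# $1 = path to plugin.lock.json, $2 = file path as recorded in the lock.
verify_entry() {
  lock="$1"; path="$2"
  expected=$(jq -r --arg p "$path" '.content.files[] | select(.path == $p) | .sha256' "$lock")
  actual=$(sha256sum "$path" | awk '{print $1}')
  [ "$expected" = "$actual" ]
}
```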
250
skills/.claude/SKILL_HELPERS.md
Normal file
@@ -0,0 +1,250 @@
# Skill Helper Functions

Internal guidelines for Claude when using RE Toolkit skills.

## Workflow State Management

### Auto-Tracking Progress

When a skill completes, automatically update the workflow state:

```javascript
// After completing a step (template literal so CLAUDE_PLUGIN_ROOT is interpolated)
const { exec } = require('child_process');
exec(`node ${process.env.CLAUDE_PLUGIN_ROOT}/scripts/state-manager.js complete <step-id>`);
```

### Checking Progress

Before starting a skill, check current progress:

```javascript
// Get the current status
const { exec } = require('child_process');
exec(`node ${process.env.CLAUDE_PLUGIN_ROOT}/scripts/state-manager.js status`, (err, stdout) => {
  if (err) throw err;
  const status = JSON.parse(stdout);
  // Use status to determine next steps
});
```

### State File Location

State is stored in the project root as `.stackshift-state.json`:

```json
{
  "version": "1.0.0",
  "created": "2024-01-15T10:30:00.000Z",
  "updated": "2024-01-15T11:45:00.000Z",
  "currentStep": "create-specs",
  "completedSteps": ["analyze", "reverse-engineer"],
  "metadata": {
    "projectName": "my-app",
    "projectPath": "/path/to/my-app"
  },
  "stepDetails": {
    "analyze": {
      "started": "2024-01-15T10:30:00.000Z",
      "completed": "2024-01-15T10:35:00.000Z",
      "status": "completed"
    },
    "reverse-engineer": {
      "started": "2024-01-15T10:35:00.000Z",
      "completed": "2024-01-15T11:05:00.000Z",
      "status": "completed"
    },
    "create-specs": {
      "started": "2024-01-15T11:05:00.000Z",
      "status": "in_progress"
    }
  }
}
```
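
The state file can also be read directly with `jq`. A minimal sketch, with sample data hard-coded into a temp file (the real file lives in the project root):

```shell
# Write a minimal sample state file (same shape as the example above).
STATE_FILE=$(mktemp)
cat > "$STATE_FILE" <<'EOF'
{"currentStep": "create-specs", "completedSteps": ["analyze", "reverse-engineer"]}
EOF

# Check whether a given step is already completed.
step_completed() {
  jq -e --arg step "$1" '.completedSteps | index($step) != null' "$STATE_FILE" > /dev/null
}

step_completed "analyze" && echo "analyze done"
# → analyze done
```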

## Skill Invocation Patterns

### Sequential Workflow

Skills should check that prerequisites are met:

```markdown
## When to Use This Skill

**Prerequisites:**
- Step N-1 must be completed
- Output file X must exist

**Auto-checks:**
1. Load the state file
2. Verify completedSteps includes the prerequisite
3. Verify output files exist
4. If not met, guide the user to complete prerequisites first
```

### Resume Capability

If a skill was interrupted, it should be able to resume:

```markdown
## Resume Logic

1. Check whether output files partially exist
2. Ask the user: "I see you've started this step. Resume or start over?"
3. If resume: skip completed parts, continue from where you left off
4. If start over: warn about overwriting, then proceed
```

## Progress Reporting

### Start of Skill

When the skill activates:
```markdown
Starting Step N: [Skill Name]
Progress: N/6 steps completed (X%)
```

### During Skill

Show sub-progress:
```markdown
Step N.1: [Sub-task] ✅ Complete
Step N.2: [Sub-task] 🔄 In progress...
Step N.3: [Sub-task] ⏳ Pending
```

### End of Skill

When the skill completes:
```markdown
✅ Step N Complete: [Skill Name]

Output:
- [File 1] created
- [File 2] created

Progress: N/6 steps completed (X%)

Next Step: Use the `[next-skill]` skill to [description]
```

## Error Handling

### Missing Prerequisites

```markdown
❌ Cannot start Step N: Prerequisites not met

Missing:
- Step [X] must be completed first
- File [Y] must exist

Recommendation:
Run the `[prerequisite-skill]` skill first.
```

### Partial Completion

```markdown
⚠️ Step N partially complete

Found existing files:
- [File 1] ✅
- [File 2] ❌ Missing

Would you like to:
1. Resume and complete the missing parts
2. Start over (overwrites existing files)
3. Skip this step (if already complete elsewhere)
```

## Cross-Skill References

### Linking to Other Skills

In skill documentation, reference related skills:

```markdown
See also:
- Previous: `[prev-skill]` - [Description]
- Next: `[next-skill]` - [Description]
- Related: `[related-skill]` - [Description]
```

### Workflow Visualization

Show where the user is in the process:

```markdown
Workflow Progress:
1. ✅ analyze
2. ✅ reverse-engineer
3. 🔄 create-specs ← You are here
4. ⏳ gap-analysis
5. ⏳ complete-spec
6. ⏳ implement
```

## Template Access

### Loading Templates

Skills can access templates from the plugin:

```javascript
// Template literal so CLAUDE_PLUGIN_ROOT is interpolated
const templatePath = `${process.env.CLAUDE_PLUGIN_ROOT}/../templates/[template-name].md`;
// Read and use the template
```

### Template Variables

When using templates, replace these variables:

- `{{PROJECT_NAME}}` - From state.metadata.projectName
- `{{CURRENT_DATE}}` - ISO date
- `{{STEP_NUMBER}}` - Current step number (1-6)
- `{{TOTAL_STEPS}}` - Total steps (6)
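
The substitution above can be sketched with `sed`. The `render_template` helper and its arguments are illustrative, not part of the toolkit:

```shell
# Substitute the template variables listed above.
# $1 = project name, $2 = step number; date and total steps are filled in directly.
render_template() {
  sed -e "s/{{PROJECT_NAME}}/$1/g" \
      -e "s/{{CURRENT_DATE}}/$(date -u +%Y-%m-%d)/g" \
      -e "s/{{STEP_NUMBER}}/$2/g" \
      -e "s/{{TOTAL_STEPS}}/6/g"
}

echo "# {{PROJECT_NAME}} - step {{STEP_NUMBER}}/{{TOTAL_STEPS}}" | render_template my-app 3
# → # my-app - step 3/6
```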
## Best Practices

### Skill Activation

1. **Check state first** - Verify prerequisites
2. **Show progress** - Let the user know where they are
3. **Confirm action** - Ask before overwriting files
4. **Update state** - Mark the step as started
5. **Generate output** - Create the expected files
6. **Validate output** - Verify files were created successfully
7. **Mark complete** - Update state to completed
8. **Guide next step** - Tell the user what to do next

### User Communication

- Be encouraging and supportive
- Show clear progress indicators
- Explain what's happening at each step
- Provide estimates of time/effort
- Offer choices when applicable
- Confirm understanding before proceeding

### Error Recovery

- Don't fail silently
- Provide clear error messages
- Suggest solutions
- Offer to retry or skip
- Never leave the user stuck

## State Transitions

```
null → analyze → reverse-engineer → create-specs → gap-analysis → complete-spec → implement → null
  ↑                                                                                            ↓
  └────────────────────────────────────────────────────────────────────────────────────────────┘
                                           (reset)
```

Each skill:
1. Validates the current state
2. Executes its task
3. Updates state
4. Points to the next skill
759
skills/analyze/SKILL.md
Normal file
@@ -0,0 +1,759 @@
---
name: analyze
description: Perform initial analysis of a codebase - detect tech stack, directory structure, and completeness. This is Step 1 of the 6-step reverse engineering process that transforms incomplete applications into spec-driven codebases. Automatically detects programming languages, frameworks, architecture patterns, and generates a comprehensive analysis-report.md. Use when starting reverse engineering on any codebase.
---

# Initial Analysis

**Step 1 of 6** in the Reverse Engineering to Spec-Driven Development process.

**Estimated Time:** 5 minutes
**Output:** `analysis-report.md`

---

## When to Use This Skill

Use this skill when:
- Starting reverse engineering on a new or existing codebase
- You need to understand the tech stack and architecture before making changes
- You want to assess project completeness and identify gaps
- Analyzing this project with the toolkit for the first time
- The user asks "analyze this codebase" or "what's in this project?"

**Trigger Phrases:**
- "Analyze this codebase"
- "What tech stack is this using?"
- "How complete is this application?"
- "Run initial analysis"
- "Start reverse engineering process"

---

## What This Skill Does

This skill performs comprehensive initial analysis by:

1. **Asking which path you want** - Greenfield (new app) or Brownfield (manage existing)
2. **Auto-detecting application context** - Identifies programming languages, frameworks, and build systems
3. **Analyzing directory structure** - Maps architecture patterns and key components
4. **Scanning existing documentation** - Assesses current documentation quality
5. **Estimating completeness** - Evaluates how complete the implementation is
6. **Generating analysis report** - Creates `analysis-report.md` with all findings
7. **Storing path choice** - Saves your selection to guide subsequent steps

---

## Choose Your Path

**FIRST:** Determine which path aligns with your goals.

### Path A: Greenfield (Build New App from Business Logic)

**Use when:**
- Building a new application based on an existing app's business logic
- Migrating to a different tech stack
- You want flexibility in implementation choices
- You need platform-agnostic specifications

**Result:**
- Specifications focus on WHAT, not HOW
- Business requirements only
- Can implement in any technology
- Tech-stack agnostic

**Example:** "Extract the business logic from this Rails app so we can rebuild it in Next.js"

### Path B: Brownfield (Manage Existing with Spec Kit)

**Use when:**
- Managing an existing codebase with GitHub Spec Kit
- You want spec-code validation with `/speckit.analyze`
- Planning upgrades or refactoring
- You need specs that match the current implementation exactly

**Result:**
- Specifications include both WHAT and HOW
- Business logic + technical implementation
- Tech-stack prescriptive
- `/speckit.analyze` can validate alignment

**Example:** "Add GitHub Spec Kit to this Next.js app so we can manage it with specs going forward"

### Batch Session Auto-Configuration

**Before showing questions, check for a batch session by walking up the directory tree:**

```bash
# Function to find the batch session file (walks up like the .git search)
find_batch_session() {
  local current_dir="$(pwd)"
  while [[ "$current_dir" != "/" ]]; do
    if [[ -f "$current_dir/.stackshift-batch-session.json" ]]; then
      echo "$current_dir/.stackshift-batch-session.json"
      return 0
    fi
    current_dir="$(dirname "$current_dir")"
  done
  return 1
}

# Check if a batch session exists
BATCH_SESSION=$(find_batch_session)
if [[ -n "$BATCH_SESSION" ]]; then
  echo "✅ Using batch session configuration from: $BATCH_SESSION"
  jq '.answers' "$BATCH_SESSION"
  # Auto-apply answers from the batch session
  # Skip the questionnaire entirely
fi
```

**If a batch session exists:**
1. Walk up the directory tree to find `.stackshift-batch-session.json`
2. Load answers from the found batch session file
3. Show: "Using batch session configuration: route=osiris, spec_output=~/git/specs, ..."
4. Skip all questions below
5. Proceed directly to analysis with the pre-configured answers
6. Save answers to the local `.stackshift-state.json` as usual

**Example directory structure:**
```
~/git/osiris/
├── .stackshift-batch-session.json   ← Batch session here
├── ws-vehicle-details/
│   └── [agent working here finds parent session]
├── ws-hours/
│   └── [agent working here finds parent session]
└── ws-contact/
    └── [agent working here finds parent session]
```

**If no batch session:**
- Continue with the normal questionnaire below
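
End to end, the walk-up search plus reading one configured answer looks like this. A minimal sketch: the `find_up` helper mirrors `find_batch_session` above, and the directory names and the `route` value are illustrative:

```shell
# Generic walk-up search, then read a single configured answer with jq.
find_up() {
  dir="$PWD"
  while [ "$dir" != "/" ]; do
    if [ -f "$dir/$1" ]; then echo "$dir/$1"; return 0; fi
    dir=$(dirname "$dir")
  done
  return 1
}

# Example: nested repo under a batch root (temp dirs stand in for ~/git/osiris).
root=$(mktemp -d)
printf '{"answers":{"route":"greenfield"}}' > "$root/.stackshift-batch-session.json"
mkdir -p "$root/ws-demo/src" && cd "$root/ws-demo/src"
session=$(find_up .stackshift-batch-session.json)
jq -r '.answers.route' "$session"
# → greenfield
```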

---

### Step 1: Auto-Detect Application Type

**Before asking questions, detect what kind of application this is:**

```bash
# Check repository name and structure
REPO_NAME=$(basename "$(pwd)")
PARENT_DIR=$(basename "$(dirname "$(pwd)")")

# Detection patterns (in priority order)
# Add your own patterns here for your framework/architecture!

# Monorepo service detection
if [[ "$PARENT_DIR" == "services" || "$PARENT_DIR" == "apps" ]] && [[ -f "../../package.json" ]]; then
  DETECTION="monorepo-service"
  echo "📦 Detected: Monorepo Service (services/* or apps/* directory)"

# Nx workspace detection
elif [[ -f "nx.json" || -f "../../nx.json" ]]; then
  DETECTION="nx-app"
  echo "⚡ Detected: Nx Application"

# Turborepo detection
elif [[ -f "turbo.json" || -f "../../turbo.json" ]]; then
  DETECTION="turborepo-package"
  echo "🚀 Detected: Turborepo Package"

# Lerna package detection
elif [[ -f "lerna.json" || -f "../../lerna.json" ]]; then
  DETECTION="lerna-package"
  echo "📦 Detected: Lerna Package"

# Generic application (default)
else
  DETECTION="generic"
  echo "🔍 Detected: Generic Application"
fi

echo "Detection type: $DETECTION"
```

**How Detection Patterns Work:**

Detection identifies WHAT patterns to look for during analysis:
- **monorepo-service**: Look for shared packages, inter-service calls, monorepo structure
- **nx-app**: Look for project.json, workspace deps, Nx-specific patterns
- **generic**: Standard application analysis

**Add Your Own Patterns:**
```bash
# Example: Custom framework detection
# elif [[ "$REPO_NAME" =~ ^my-widget- ]]; then
#   DETECTION="my-framework-widget"
#   echo "🎯 Detected: My Framework Widget"
```

**Detection determines what to analyze, but NOT how to spec it!**
---
|
||||||
|
|
||||||
|
### Step 2: Initial Questionnaire

Now that we know what kind of application this is, let's configure the extraction approach:

**Question 1: Choose Your Route**

```
Which path best aligns with your goals?

A) Greenfield: Extract for migration to a new tech stack
   → Extract business logic only (tech-agnostic)
   → Can implement in any stack
   → Suitable for platform migrations
   → Example: Extract Rails app business logic → rebuild in Next.js

B) Brownfield: Extract for maintaining the existing codebase
   → Extract business logic + technical details (tech-prescriptive)
   → Manage the existing codebase with specs
   → Suitable for in-place improvements
   → Example: Add specs to an Express API for ongoing maintenance
```

**This applies to ALL detection types:**

- Monorepo Service + Greenfield = Business logic for platform migration
- Monorepo Service + Brownfield = Full implementation for maintenance
- Nx App + Greenfield = Business logic for rebuild
- Nx App + Brownfield = Full Nx/Angular details for refactoring
- Generic + Greenfield = Business logic for rebuild
- Generic + Brownfield = Full implementation for management
**Question 2: Brownfield Mode** _(If Brownfield selected)_

```
Do you want to upgrade dependencies after establishing specs?

A) Standard - Just create specs for the current state
   → Document the existing implementation as-is
   → Specs match current code exactly
   → Good for maintaining existing versions

B) Upgrade - Create specs + upgrade all dependencies
   → Spec the current state first (100% coverage)
   → Then upgrade all dependencies to the latest versions
   → Fix breaking changes with spec guidance
   → Improve test coverage to spec standards
   → End with a modern, fully-spec'd application
   → Perfect for modernizing legacy apps

Upgrade mode includes:
- npm update / pip upgrade / go get -u (based on tech stack)
- Automated breaking-change detection
- Test-driven upgrade fixes
- Spec updates for API changes
- Coverage improvement to 85%+
```
**Question 3: Choose Your Transmission**

```
How do you want to shift through the gears?

A) Manual - Review each gear before proceeding
   → You're in control
   → Stop at each step
   → Good for first-time users

B) Cruise Control - Shift through all gears automatically
   → Hands-free
   → Unattended execution
   → Good for experienced users or overnight runs
```
**Question 4: Specification Thoroughness**

```
How thorough should specification generation be in Gear 3?

A) Specs only (30 min - fast)
   → Generate specs for all features
   → Create plans manually with /speckit.plan as needed
   → Good for: quick assessment, flexibility

B) Specs + Plans (45-60 min - recommended)
   → Generate specs for all features
   → Auto-generate implementation plans for incomplete features
   → Ready for /speckit.tasks when you implement
   → Good for: most projects, balanced automation

C) Specs + Plans + Tasks (90-120 min - complete roadmap)
   → Generate specs for all features
   → Auto-generate plans for incomplete features
   → Auto-generate comprehensive task lists (300-500 lines each)
   → Ready for immediate implementation
   → Good for: large projects, maximum automation
```
**Question 5: Clarifications Strategy** _(If Cruise Control selected)_

```
How should [NEEDS CLARIFICATION] markers be handled?

A) Defer - Mark them, continue implementation around them
   → Fastest
   → Can clarify later with /speckit.clarify

B) Prompt - Stop and ask questions interactively
   → Most thorough
   → Takes longer

C) Skip - Only implement fully-specified features
   → Safest
   → Some features won't be implemented
```
**Question 6: Implementation Scope** _(If Cruise Control selected)_

```
What should be implemented in Gear 6?

A) None - Stop after specs are ready
   → Just want specifications
   → Will implement manually later

B) P0 only - Critical features only
   → Essential features
   → Fastest implementation

C) P0 + P1 - Critical + high-value features
   → Good balance
   → Most common choice

D) All - Every feature (may take hours/days)
   → Complete implementation
   → Longest runtime
```
**Question 7: Spec Output Location** _(If Greenfield selected)_

```
Where should specifications and documentation be written?

A) Current repository (default)
   → Specs in: ./docs/reverse-engineering/, ./.specify/
   → Simple, everything in one place
   → Good for: small teams, single repo

B) New application repository
   → Specs in: ~/git/my-new-app/.specify/
   → Specs live with the NEW codebase
   → Good for: clean separation, NEW repo already exists

C) Separate documentation repository
   → Specs in: ~/git/my-app-docs/.specify/
   → Central docs repo for multiple apps
   → Good for: enterprise, multiple related apps

D) Custom location
   → Your choice: [specify path]

Default: Current repository (A)
```
**Question 8: Target Stack** _(If Greenfield + Implementation selected)_

```
What tech stack for the new implementation?

Examples:
- Next.js 15 + TypeScript + Prisma + PostgreSQL
- Python/FastAPI + SQLAlchemy + PostgreSQL
- Go + Gin + GORM + PostgreSQL
- Your choice: [specify your preferred stack]
```
**Question 9: Build Location** _(If Greenfield + Implementation selected)_

```
Where should the new application be built?

A) Subfolder (recommended for Web)
   → Examples: greenfield/, v2/, new-app/
   → Keeps old and new in the same repo
   → Works in Claude Code Web

B) Separate directory (local only)
   → Examples: ~/git/my-new-app, ../my-app-v2
   → Completely separate location
   → Requires local Claude Code (doesn't work in Web)

C) Replace in place (destructive)
   → Removes old code as new code is built
   → Not recommended
```
**Then ask for the specific path:**

**If subfolder (A):**

```
Folder name within this repo? (default: greenfield/)

Examples: v2/, new-app/, nextjs-version/, rebuilt/
Your choice: [or press enter for greenfield/]
```

**If separate directory (B):**

```
Full path to the new application directory:

Examples:
- ~/git/my-new-app
- ../my-app-v2
- /Users/you/projects/new-version

Your choice: [absolute or relative path]

⚠️ Note: The directory will be created if it doesn't exist.
Claude Code Web users: This won't work in Web - use a subfolder instead.
```
All answers are stored in `.stackshift-state.json` and guide the entire workflow.

**State file example:**

```jsonc
{
  "detection_type": "monorepo-service",  // What kind of app: monorepo-service, nx-app, generic, etc.
  "route": "greenfield",                 // How to spec it: greenfield or brownfield
  "config": {
    "spec_output_location": "~/git/my-new-app",  // Where to write specs/docs
    "build_location": "~/git/my-new-app",        // Where to build new code (Gear 6)
    "target_stack": "Next.js 15 + React 19 + Prisma",
    "clarifications_strategy": "defer",
    "implementation_scope": "p0_p1"
  }
}
```

**Key fields:**

- `detection_type` - What we're analyzing (monorepo-service, nx-app, turborepo-package, generic)
- `route` - How to spec it (greenfield = tech-agnostic, brownfield = tech-prescriptive)

**Examples:**

- Monorepo Service + Greenfield = Extract business logic for platform migration
- Monorepo Service + Brownfield = Extract full implementation for maintenance
- Nx App + Greenfield = Extract business logic (framework-agnostic)
- Nx App + Brownfield = Extract full Nx/Angular implementation details

**How it works:**

**Spec Output Location:**

- Gear 2 writes to: `{spec_output_location}/docs/reverse-engineering/`
- Gear 3 writes to: `{spec_output_location}/.specify/memory/`
- If not set: defaults to the current directory

**Build Location:**

- Gear 6 writes code to: `{build_location}/src/`, `{build_location}/package.json`, etc.
- Can be the same as the spec location OR different
- If not set: defaults to the `greenfield/` subfolder
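The two locations and their defaults can be sketched as a small resolver. This is an illustrative assumption, not StackShift's shipped code — only the field names (`spec_output_location`, `build_location`) and the stated defaults come from the state-file description above:

```typescript
// Hypothetical resolver for spec/build output paths.
// Field names mirror the .stackshift-state.json example; the defaulting
// logic follows the documented behavior but is a sketch, not the real code.
import * as path from 'path';

interface StackShiftState {
  config?: {
    spec_output_location?: string;
    build_location?: string;
  };
}

// Gears 2/3: default to the current directory when no location is set
function resolveSpecDir(state: StackShiftState, cwd: string): string {
  const base = state.config?.spec_output_location ?? cwd;
  return path.join(base, 'docs', 'reverse-engineering');
}

// Gear 6: default to a greenfield/ subfolder
function resolveBuildDir(state: StackShiftState, cwd: string): string {
  return state.config?.build_location ?? path.join(cwd, 'greenfield');
}

console.log(resolveSpecDir({ config: { spec_output_location: '/tmp/my-new-app' } }, '/tmp/old-app'));
console.log(resolveBuildDir({}, '/tmp/old-app'));
```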
### Implementing the Questionnaire

Use the `AskUserQuestion` tool to collect all configuration upfront:

```typescript
// Example implementation
AskUserQuestion({
  questions: [
    {
      question: "Which route best aligns with your goals?",
      header: "Route",
      multiSelect: false,
      options: [
        {
          label: "Greenfield",
          description: "Shift to new tech stack - extract business logic only (tech-agnostic)"
        },
        {
          label: "Brownfield",
          description: "Manage existing code with specs - extract full implementation (tech-prescriptive)"
        }
      ]
    },
    {
      question: "How do you want to shift through the gears?",
      header: "Transmission",
      multiSelect: false,
      options: [
        {
          label: "Manual",
          description: "Review each gear before proceeding - you're in control"
        },
        {
          label: "Cruise Control",
          description: "Shift through all gears automatically - hands-free, unattended execution"
        }
      ]
    }
  ]
});

// Then, based on the answers, ask follow-up questions conditionally:
// - If cruise control: ask clarifications strategy, implementation scope
// - If greenfield + implementing: ask target stack
// - If greenfield subfolder: ask folder name (or accept default: greenfield/)
```

**For custom folder name:** Use free-text input or accept the default.
**Example:**

```
StackShift: "What folder name for the new application? (default: greenfield/)"

User: "v2/" (or just press enter for greenfield/)

StackShift: "✅ New app will be built in: v2/"
```

Stored in state as:

```jsonc
{
  "config": {
    "greenfield_location": "v2/"  // Relative (subfolder)
    // OR
    // "greenfield_location": "~/git/my-new-app"  // Absolute (separate)
  }
}
```

**How it works:**

**Subfolder (relative path):**

```bash
# Building in: /Users/you/git/my-app/greenfield/
cd /Users/you/git/my-app
# StackShift creates: ./greenfield/
# Everything stays in one repo
```

**Separate directory (absolute path):**

```bash
# Current repo: /Users/you/git/my-app
# New app:      /Users/you/git/my-new-app

# StackShift:
# - Reads specs from: /Users/you/git/my-app/.specify/
# - Builds the new app in: /Users/you/git/my-new-app/
# - Two completely separate repos
```

---
## Step 0: Install Slash Commands (FIRST!)

**Before any analysis, ensure the /speckit.* commands are available:**

```bash
# Create the project commands directory
mkdir -p .claude/commands

# Copy StackShift's slash commands into the project
cp ~/.claude/plugins/stackshift/.claude/commands/speckit.*.md .claude/commands/
cp ~/.claude/plugins/stackshift/.claude/commands/stackshift.modernize.md .claude/commands/

# Verify installation
ls .claude/commands/speckit.*.md
```

**You should see:**

- ✅ speckit.analyze.md
- ✅ speckit.clarify.md
- ✅ speckit.implement.md
- ✅ speckit.plan.md
- ✅ speckit.specify.md
- ✅ speckit.tasks.md
- ✅ stackshift.modernize.md

**Why this is needed:**

- Claude Code looks for slash commands in the project's `.claude/commands/` directory
- Plugin-level commands are not automatically discovered
- This copies them into the current project so they're available
- Only needs to be done once per project

**After copying:**

- `/speckit.*` commands will be available for this project
- No need to restart Claude Code
- Commands work immediately
### Critical: Commit Commands to Git

**Add to .gitignore (or create it if missing):**

```gitignore
# Allow the .claude directory structure
!.claude/
!.claude/commands/

# Track slash commands (the team needs these!)
!.claude/commands/*.md

# Ignore user-specific settings
.claude/settings.json
.claude/mcp-settings.json
```

**Then commit:**

```bash
git add .claude/commands/
git commit -m "chore: add StackShift and Spec Kit slash commands

Adds /speckit.* and /stackshift.* slash commands for team use.

Commands added:
- /speckit.specify - Create feature specifications
- /speckit.plan - Create technical plans
- /speckit.tasks - Generate task lists
- /speckit.implement - Execute implementation
- /speckit.clarify - Resolve ambiguities
- /speckit.analyze - Validate specs match code
- /stackshift.modernize - Upgrade dependencies

These commands enable the spec-driven development workflow.
All team members will have access after cloning."
```

**Why this is critical:**

- ✅ Teammates get the commands when they clone
- ✅ Commands are versioned with the project
- ✅ No setup needed for new team members
- ✅ Commands are always available

**Without committing:**

- ❌ Each developer needs to run StackShift or copy the files manually
- ❌ Confusion: "Why don't the slash commands work?"
- ❌ Inconsistent developer experience

---
## Process Overview

The analysis follows 5 steps:

### Step 1: Auto-Detect Application Context

- Run detection commands for all major languages/frameworks
- Identify the primary technology stack
- Extract version information

See [operations/detect-stack.md](operations/detect-stack.md) for detailed instructions.

### Step 2: Extract Core Metadata

- Application name from the manifest or directory
- Version number from package manifests
- Description from the README or manifest
- Git repository URL if available
- Technology stack summary

### Step 3: Analyze Directory Structure

- Identify architecture patterns (MVC, microservices, monolith, etc.)
- Find configuration files
- Count source files by type
- Map key components (backend, frontend, database, API, infrastructure)

See [operations/directory-analysis.md](operations/directory-analysis.md) for detailed instructions.

### Step 4: Check for Existing Documentation

- Scan for docs folders and markdown files
- Assess documentation quality
- Identify what's documented vs. what's missing

See [operations/documentation-scan.md](operations/documentation-scan.md) for detailed instructions.

### Step 5: Assess Completeness

- Look for placeholder files (TODO, WIP, etc.)
- Check the README for mentions of incomplete features
- Count test files and estimate test coverage
- Verify deployment/CI setup

See [operations/completeness-assessment.md](operations/completeness-assessment.md) for detailed instructions.

---
## Output Format

This skill generates `analysis-report.md` in the project root with:

- **Application Metadata** - Name, version, description, repository
- **Technology Stack** - Languages, frameworks, libraries, build system
- **Architecture Overview** - Directory structure, key components
- **Existing Documentation** - What docs exist and their quality
- **Completeness Assessment** - Estimated % completion with evidence
- **Source Code Statistics** - File counts, lines-of-code estimates
- **Recommended Next Steps** - Focus areas for reverse engineering
- **Notes** - Additional observations

See [operations/generate-report.md](operations/generate-report.md) for the complete template.

---

## Success Criteria

After running this skill, you should have:

- ✅ `analysis-report.md` created in the project root
- ✅ Technology stack clearly identified
- ✅ Directory structure and architecture understood
- ✅ Completeness estimated (% done for backend, frontend, tests, docs)
- ✅ Ready to proceed to Step 2 (Reverse Engineer)

---
## Next Step

Once `analysis-report.md` is created and reviewed, proceed to:

**Step 2: Reverse Engineer** - Use the reverse-engineer skill to generate comprehensive documentation.

---

## Principles

For guidance on performing effective initial analysis:

- [principles/multi-language-detection.md](principles/multi-language-detection.md) - Detecting polyglot codebases
- [principles/architecture-pattern-recognition.md](principles/architecture-pattern-recognition.md) - Identifying common patterns

---

## Common Workflows

**New Project Analysis:**

1. User asks to analyze the codebase
2. Run all detection commands in parallel
3. Generate the analysis report
4. Present a summary and ask if ready for Step 2

**Re-analysis:**

1. Check whether analysis-report.md already exists
2. Ask the user whether to update it or skip to Step 2
3. If updating, re-run the analysis and show a diff

**Partial Analysis:**

1. User already knows the tech stack
2. Skip detection; focus on the completeness assessment
3. Generate an abbreviated report

---

## Technical Notes

- **Parallel execution:** Run all language detection commands in parallel for speed
- **Error handling:** Missing manifest files are normal (return empty); don't error
- **File limits:** Use `head` to limit output for large codebases
- **Exclusions:** Always exclude node_modules, vendor, .git, build, dist, target
- **Platform compatibility:** Commands work on macOS, Linux, and WSL
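The parallel-execution note can be sketched with Node's `child_process`. The specific probe commands below are illustrative assumptions, not the skill's actual detection set; the point is that probes run concurrently and a missing manifest resolves to `false` rather than throwing:

```typescript
// Run several stack-detection probes concurrently instead of sequentially.
// Probe commands are illustrative; a failing probe just means "not found".
import { exec } from 'child_process';
import { promisify } from 'util';

const sh = promisify(exec);

async function detectStacks(cwd: string): Promise<Record<string, boolean>> {
  const probes: Record<string, string> = {
    node: 'test -f package.json',
    python: 'test -f pyproject.toml || test -f requirements.txt',
    go: 'test -f go.mod',
    rust: 'test -f Cargo.toml',
  };

  const entries = await Promise.all(
    Object.entries(probes).map(async ([lang, cmd]) => {
      // Missing manifests are normal, not errors (per the note above)
      const found = await sh(cmd, { cwd }).then(() => true, () => false);
      return [lang, found] as const;
    })
  );

  return Object.fromEntries(entries);
}

detectStacks(process.cwd()).then((result) => console.log(result));
```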
---

## Example Invocation

When a user says:

> "I need to reverse engineer this application and create specifications. Let's start."

This skill auto-activates and:

1. Detects the tech stack (e.g., Next.js, TypeScript, Prisma, AWS)
2. Analyzes the directory structure (identifies app/, lib/, prisma/, infrastructure/)
3. Scans documentation (finds README.md, basic setup docs)
4. Assesses completeness (estimates backend 100%, frontend 60%, tests 30%)
5. Generates analysis-report.md
6. Presents a summary and recommends proceeding to Step 2

---

**Remember:** This is Step 1 of 6. After analysis, you'll proceed to reverse-engineer, create-specs, gap-analysis, complete-spec, and implement. Each step builds on the previous one.
198
skills/analyze/batch-session-state.ts
Normal file
@@ -0,0 +1,198 @@
/**
 * Batch Session State Management
 *
 * Stores answers from the initial batch configuration and reuses them
 * across all widgets in the batch session, eliminating repetitive questions.
 */

import * as fs from 'fs';
import * as path from 'path';

export interface BatchSessionAnswers {
  detection_type?: 'generic' | 'monorepo-service' | 'nx-app' | 'turborepo-package' | 'lerna-package';
  route?: 'greenfield' | 'brownfield';
  brownfield_mode?: 'standard' | 'upgrade';
  transmission?: 'manual' | 'cruise-control';
  clarifications_strategy?: 'defer' | 'prompt' | 'skip';
  implementation_scope?: 'none' | 'p0' | 'p0_p1' | 'all';
  spec_output_location?: string;
  target_stack?: string;
  build_location?: string;
  build_location_type?: 'subfolder' | 'separate' | 'replace';
}

export interface BatchSessionState {
  sessionId: string;
  startedAt: string;
  batchRootDirectory: string;
  totalRepos: number;
  batchSize: number;
  answers: BatchSessionAnswers;
  processedRepos: string[];
}

const BATCH_SESSION_FILENAME = '.stackshift-batch-session.json';

/**
 * Find the batch session file by checking the current directory and walking up,
 * similar to how .git directories are found.
 */
function findBatchSessionFile(startDir: string = process.cwd()): string | null {
  let currentDir = path.resolve(startDir);
  const root = path.parse(currentDir).root;

  while (currentDir !== root) {
    const batchSessionPath = path.join(currentDir, BATCH_SESSION_FILENAME);
    if (fs.existsSync(batchSessionPath)) {
      return batchSessionPath;
    }
    currentDir = path.dirname(currentDir);
  }

  return null;
}

/**
 * Get the batch session file path for a directory
 */
function getBatchSessionPath(directory: string = process.cwd()): string {
  return path.join(directory, BATCH_SESSION_FILENAME);
}

/**
 * Create a new batch session with initial answers
 */
export function createBatchSession(
  batchRootDirectory: string,
  totalRepos: number,
  batchSize: number,
  answers: BatchSessionAnswers
): BatchSessionState {
  const session: BatchSessionState = {
    sessionId: `batch-${Date.now()}`,
    startedAt: new Date().toISOString(),
    batchRootDirectory: path.resolve(batchRootDirectory),
    totalRepos,
    batchSize,
    answers,
    processedRepos: []
  };

  const sessionPath = getBatchSessionPath(batchRootDirectory);
  fs.writeFileSync(sessionPath, JSON.stringify(session, null, 2));

  return session;
}

/**
 * Get the current batch session if it exists.
 * Searches the current directory and walks up to find the batch session.
 */
export function getBatchSession(startDir: string = process.cwd()): BatchSessionState | null {
  try {
    const sessionPath = findBatchSessionFile(startDir);
    if (!sessionPath) {
      return null;
    }

    const content = fs.readFileSync(sessionPath, 'utf-8');
    return JSON.parse(content);
  } catch (error) {
    console.error('Error reading batch session:', error);
    return null;
  }
}

/**
 * Check whether we're currently in a batch session.
 * Searches the current directory and walks up.
 */
export function hasBatchSession(startDir: string = process.cwd()): boolean {
  return findBatchSessionFile(startDir) !== null;
}

/**
 * Update the batch session with a processed repo
 */
export function markRepoProcessed(repoName: string, startDir: string = process.cwd()): void {
  const sessionPath = findBatchSessionFile(startDir);
  if (!sessionPath) {
    return;
  }

  try {
    const content = fs.readFileSync(sessionPath, 'utf-8');
    const session: BatchSessionState = JSON.parse(content);

    if (!session.processedRepos.includes(repoName)) {
      session.processedRepos.push(repoName);
    }

    fs.writeFileSync(sessionPath, JSON.stringify(session, null, 2));
  } catch (error) {
    console.error('Error updating batch session:', error);
  }
}

/**
 * Clear the batch session in a specific directory
 */
export function clearBatchSession(directory: string = process.cwd()): boolean {
  try {
    const sessionPath = getBatchSessionPath(directory);
    if (fs.existsSync(sessionPath)) {
      fs.unlinkSync(sessionPath);
      return true;
    }
    return false;
  } catch (error) {
    console.error('Error clearing batch session:', error);
    return false;
  }
}

/**
 * Get batch session progress
 */
export function getBatchProgress(startDir: string = process.cwd()): string {
  const session = getBatchSession(startDir);
  if (!session) {
    return 'No active batch session';
  }

  const processed = session.processedRepos.length;
  const total = session.totalRepos;
  // Guard against division by zero for an empty batch
  const percentage = total > 0 ? Math.round((processed / total) * 100) : 0;

  return `Batch Progress: ${processed}/${total} repos (${percentage}%)`;
}

/**
 * Format a batch session for display
 */
export function formatBatchSession(session: BatchSessionState): string {
  const duration = Date.now() - new Date(session.startedAt).getTime();
  const hours = Math.floor(duration / 3600000);
  const minutes = Math.floor((duration % 3600000) / 60000);

  return `
📦 Active Batch Session

Session ID: ${session.sessionId}
Batch Root: ${session.batchRootDirectory}
Started: ${new Date(session.startedAt).toLocaleString()}
Duration: ${hours}h ${minutes}m
Total Repos: ${session.totalRepos}
Batch Size: ${session.batchSize}
Processed: ${session.processedRepos.length}/${session.totalRepos}

Configuration:
  Route: ${session.answers.route || 'not set'}
  Transmission: ${session.answers.transmission || 'not set'}
  Spec Output: ${session.answers.spec_output_location || 'current directory'}
  Build Location: ${session.answers.build_location || 'greenfield/'}
  Target Stack: ${session.answers.target_stack || 'not set'}

Session File: ${session.batchRootDirectory}/${BATCH_SESSION_FILENAME}
`.trim();
}
381
skills/analyze/operations/completeness-assessment.md
Normal file
@@ -0,0 +1,381 @@
# Completeness Assessment

Assess how complete the application implementation is.

## Overview

Estimate the percentage completion for:
- Overall application
- Backend implementation
- Frontend implementation
- Test coverage
- Documentation
- Deployment/Infrastructure

---

## Evidence Collection

### Placeholder Files

Look for files indicating incomplete work:

```bash
# Find TODO/WIP/PLACEHOLDER files
find . -iname "*todo*" -o -iname "*wip*" -o -iname "*placeholder*" -o -iname "*draft*" 2>/dev/null \
  | grep -v "node_modules\|\.git"

# Find empty or near-empty files
find . -type f -size 0 2>/dev/null | grep -v "node_modules\|\.git\|\.keep"

# Files with just placeholders (parentheses keep -type f applying to every -name test)
find . -type f \( -name "*.ts" -o -name "*.tsx" -o -name "*.js" -o -name "*.jsx" \) 2>/dev/null \
  | xargs grep -l "TODO\|FIXME\|PLACEHOLDER\|XXX\|HACK" \
  | head -20
```

### README Mentions

```bash
# Search README for incomplete features
grep -i "todo\|wip\|work in progress\|coming soon\|not yet\|planned\|roadmap\|incomplete" README.md 2>/dev/null
```

### Code Comments

```bash
# Count TODO/FIXME comments
grep -r "TODO\|FIXME\|XXX\|HACK" src/ 2>/dev/null | wc -l

# Show sample TODOs
grep -r "TODO\|FIXME" src/ 2>/dev/null | head -10
```
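
When the raw grep output needs to become a per-marker summary, the tally can be sketched in TypeScript (the plugin's source language). This is a minimal sketch, assuming the file contents have already been read from disk; the marker list mirrors the grep patterns above, and the function name is illustrative:

```typescript
// Tally TODO-style markers across already-read file contents.
// The marker list mirrors the grep patterns above; names are illustrative.
const MARKERS = ["TODO", "FIXME", "XXX", "HACK"];

export function countMarkers(fileContents: string[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const marker of MARKERS) counts[marker] = 0;
  for (const text of fileContents) {
    for (const marker of MARKERS) {
      // Count every occurrence, not just the files that contain the marker
      counts[marker] += text.split(marker).length - 1;
    }
  }
  return counts;
}
```

A per-marker breakdown like this is more useful in the final report than the single combined `wc -l` count.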

---

## Component-Specific Assessment

### Backend Completeness

**Check for:**

1. **API Endpoints**
   ```bash
   # Count API routes
   find . -path "*/api/*" -o -path "*/routes/*" 2>/dev/null | wc -l

   # Find routes with placeholder responses
   grep -r "res.json({})\|return {}\|NotImplementedError" src/api/ src/routes/ 2>/dev/null
   ```

2. **Business Logic**
   ```bash
   # Count service/controller files
   find . -path "*/services/*" -o -path "*/controllers/*" 2>/dev/null | wc -l

   # Find stub implementations
   grep -r "throw new Error.*not implemented\|NotImplementedError" src/ 2>/dev/null
   ```

3. **Database**
   ```bash
   # Check if migrations are applied
   ls -la prisma/migrations/ 2>/dev/null | wc -l

   # Check for empty seed files
   ls -la prisma/seed.ts 2>/dev/null
   ```

4. **Authentication**
   ```bash
   # Check for auth middleware
   find . -name "*auth*" -o -name "*session*" | grep -v node_modules

   # Look for JWT/session handling
   grep -r "jwt\|jsonwebtoken\|express-session\|passport" package.json src/ 2>/dev/null
   ```

**Estimate:**
- 100% = All endpoints implemented, tested, documented
- 75% = All endpoints exist, some missing tests/validation
- 50% = Core endpoints done, advanced features missing
- 25% = Skeleton only, most logic unimplemented
- 0% = No backend implementation

### Frontend Completeness

**Check for:**

1. **Pages/Views**
   ```bash
   # Count pages
   find . -path "*/pages/*" -o -path "*/app/*" -o -path "*/views/*" 2>/dev/null \
     | grep -E "\.(tsx?|jsx?|vue)$" | wc -l

   # Find placeholder pages
   grep -r "Coming Soon\|Under Construction\|TODO\|Placeholder" app/ pages/ components/ 2>/dev/null
   ```

2. **Components**
   ```bash
   # Count components
   find . -path "*/components/*" | grep -E "\.(tsx?|jsx?|vue)$" | wc -l

   # Find stub components
   grep -r "return null\|return <div>TODO\|placeholder" components/ 2>/dev/null
   ```

3. **Styling**
   ```bash
   # Check for styling setup
   ls -la tailwind.config.* 2>/dev/null
   find . -name "*.css" -o -name "*.scss" 2>/dev/null | head -10

   # Check if components are styled
   grep -r "className\|style=" components/ 2>/dev/null | wc -l
   ```

4. **State Management**
   ```bash
   # Check for state management
   grep -r "redux\|zustand\|recoil\|jotai\|mobx" package.json 2>/dev/null
   grep -r "createContext\|useContext" src/ app/ 2>/dev/null | wc -l
   ```

**Estimate:**
- 100% = All pages implemented, styled, interactive
- 75% = All pages exist, some missing polish/interactivity
- 50% = Core pages done, advanced features missing
- 25% = Skeleton pages, minimal styling/functionality
- 0% = No frontend implementation

### Testing Completeness

**Check for:**

1. **Test Files**
   ```bash
   # Count test files
   find . -name "*.test.*" -o -name "*.spec.*" 2>/dev/null | wc -l

   # Count tests
   grep -r "test(\|it(\|describe(" tests/ __tests__/ src/ 2>/dev/null | wc -l
   ```

2. **Test Coverage**
   ```bash
   # Check for coverage reports
   ls -la coverage/ 2>/dev/null

   # Check test configuration
   ls -la jest.config.* vitest.config.* 2>/dev/null
   ```

3. **Test Types**
   ```bash
   # Unit tests
   find . -path "*/tests/unit/*" -o -name "*.unit.test.*" 2>/dev/null | wc -l

   # Integration tests
   find . -path "*/tests/integration/*" -o -name "*.integration.test.*" 2>/dev/null | wc -l

   # E2E tests (-name cannot contain slashes, so match the directories by name)
   find . -path "*/tests/e2e/*" -o -name "*.e2e.*" -o -name "cypress" -o -name "playwright" 2>/dev/null
   ```

**Estimate:**
- 100% = >80% code coverage, unit + integration + E2E tests
- 75% = 60-80% coverage, unit + integration tests
- 50% = 40-60% coverage, mostly unit tests
- 25% = <40% coverage, sparse unit tests
- 0% = No tests

### Documentation Completeness

Use findings from [documentation-scan.md](documentation-scan.md):

**Estimate:**
- 100% = README, API docs, architecture, deployment, developer guide, all current
- 75% = README + API docs + some architecture/deployment docs
- 50% = Good README, partial API docs
- 25% = Basic README only
- 0% = No meaningful documentation

### Infrastructure/Deployment Completeness

**Check for:**

1. **Infrastructure as Code**
   ```bash
   # Check for IaC files
   find . -name "*.tf" -o -name "serverless.yml" -o -name "cdk.json" 2>/dev/null

   # Count resources defined
   grep -r "resource\|AWS::" infrastructure/ terraform/ 2>/dev/null | wc -l
   ```

2. **CI/CD**
   ```bash
   # GitHub Actions
   ls -la .github/workflows/ 2>/dev/null

   # Other CI/CD
   ls -la .gitlab-ci.yml .circleci/config.yml .travis.yml 2>/dev/null
   ```

3. **Environment Configuration**
   ```bash
   # Environment files
   ls -la .env.example .env.template 2>/dev/null

   # Environment validation
   grep -r "dotenv\|env-var\|envalid" package.json src/ 2>/dev/null
   ```

4. **Deployment Scripts**
   ```bash
   # Deployment scripts
   find . -name "deploy.sh" -o -name "deploy.js" -o -name "deploy.ts" 2>/dev/null

   # Package scripts
   grep "deploy\|build\|start\|test" package.json 2>/dev/null
   ```

**Estimate:**
- 100% = Full IaC, CI/CD, monitoring, auto-deployment
- 75% = IaC + CI/CD, manual deployment
- 50% = Basic CI/CD, no IaC
- 25% = Manual deployment only
- 0% = No deployment setup

---

## Overall Completeness Calculation

Calculate the weighted average:

```
Overall = (Backend × 0.3) + (Frontend × 0.3) + (Tests × 0.2) + (Docs × 0.1) + (Infra × 0.1)
```

Example:
- Backend: 100%
- Frontend: 60%
- Tests: 30%
- Docs: 40%
- Infra: 80%

Overall = (100 × 0.3) + (60 × 0.3) + (30 × 0.2) + (40 × 0.1) + (80 × 0.1)
= 30 + 18 + 6 + 4 + 8
= **66%**
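
The same arithmetic, as a minimal TypeScript sketch; the weights are the ones defined above, and the function name is illustrative:

```typescript
// Weighted overall completeness, using the weights from the formula above.
const WEIGHTS: Record<string, number> = {
  backend: 0.3,
  frontend: 0.3,
  tests: 0.2,
  docs: 0.1,
  infra: 0.1,
};

export function overallCompleteness(pct: Record<string, number>): number {
  let sum = 0;
  for (const [component, weight] of Object.entries(WEIGHTS)) {
    // Treat a missing component estimate as 0% rather than NaN
    sum += (pct[component] ?? 0) * weight;
  }
  return Math.round(sum);
}
```

Calling `overallCompleteness({ backend: 100, frontend: 60, tests: 30, docs: 40, infra: 80 })` reproduces the 66% worked example.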

---

## Missing Components Identification

List what's missing or incomplete:

**High Priority:**
- [ ] Frontend: User profile page (placeholder only)
- [ ] Frontend: Analytics dashboard (not started)
- [ ] Backend: Email notification service (stub)
- [ ] Tests: Integration tests for API (0 tests)
- [ ] Docs: API specification (no OpenAPI)

**Medium Priority:**
- [ ] Frontend: Mobile responsive design (partially done)
- [ ] Backend: Rate limiting (not implemented)
- [ ] Tests: E2E tests (no framework setup)
- [ ] Infra: Monitoring/alerting (not configured)

**Low Priority:**
- [ ] Frontend: Dark mode (placeholder toggle)
- [ ] Backend: Admin panel API (not started)
- [ ] Docs: Troubleshooting guide (missing)

---

## Evidence Summary

Document the evidence used for estimates:

```markdown
### Evidence

**Backend (100%):**
- 17 Lambda functions fully implemented
- All database models defined and migrated
- Authentication and authorization complete
- API endpoints tested and documented

**Frontend (60%):**
- 8 of 12 planned pages implemented
- Core components complete (Header, Footer, Nav)
- 4 pages are placeholder/TODO:
  - Analytics Dashboard (TODO comment in code)
  - User Settings (returns "Coming Soon")
  - Admin Panel (not started)
  - Reports (skeleton only)
- Styling ~80% complete

**Tests (30%):**
- 12 unit tests for backend utilities
- 0 integration tests
- 0 E2E tests
- No test coverage reports
- jest.config.js exists but minimal tests

**Documentation (40%):**
- Good README with setup instructions
- No API documentation (no OpenAPI spec)
- No architecture diagrams
- Basic deployment guide
- No developer guide

**Infrastructure (80%):**
- Full Terraform IaC for AWS
- GitHub Actions CI/CD configured
- Auto-deploy to staging
- Manual production deploy
- No monitoring/alerting setup
```

---

## Output Format

```markdown
## Completeness Assessment

### Estimated Completion
- **Overall:** ~66%
- **Backend:** ~100%
- **Frontend:** ~60%
- **Tests:** ~30%
- **Documentation:** ~40%
- **Infrastructure:** ~80%

### Evidence
[Detailed evidence as shown above]

### Missing Components
[Categorized list of missing/incomplete features]

### Placeholder Files
- app/analytics/page.tsx (TODO comment)
- app/settings/page.tsx ("Coming Soon" text)
- src/services/email.ts (stub functions)

### TODO Comments
- Found 23 TODO/FIXME comments across codebase
- Most common: Frontend polish, missing tests, error handling
```

---

## Notes

- Be conservative with estimates - round down when uncertain
- Provide evidence for all estimates
- Consider quality, not just quantity (a poorly implemented feature counts less)
- Differentiate between "not started" vs "partially done" vs "mostly complete"
268
skills/analyze/operations/detect-stack.md
Normal file
@@ -0,0 +1,268 @@
# Tech Stack Detection

Comprehensive commands for detecting programming languages, frameworks, and build systems.

## Overview

Run all detection commands **in parallel** to identify the technology stack. Missing files are normal - they just mean that technology isn't used.

---

## Detection Commands

Execute these commands to detect the primary language and framework:

```bash
# Get current directory context
pwd

# Show directory contents
ls -la

# Get git repository info
git remote -v 2>/dev/null

# Language/Framework Detection (run all in parallel)
cat package.json 2>/dev/null      # Node.js/JavaScript/TypeScript
cat composer.json 2>/dev/null     # PHP
cat requirements.txt 2>/dev/null  # Python (pip)
cat Pipfile 2>/dev/null           # Python (pipenv)
cat pyproject.toml 2>/dev/null    # Python (poetry)
cat Gemfile 2>/dev/null           # Ruby
cat pom.xml 2>/dev/null           # Java/Maven
cat build.gradle 2>/dev/null      # Java/Gradle
cat Cargo.toml 2>/dev/null        # Rust
cat go.mod 2>/dev/null            # Go
cat pubspec.yaml 2>/dev/null      # Dart/Flutter
cat mix.exs 2>/dev/null           # Elixir
find . -maxdepth 2 -name "*.csproj" 2>/dev/null  # .NET/C#
find . -maxdepth 2 -name "*.sln" 2>/dev/null     # .NET Solution
```

---

## Framework-Specific Detection

### JavaScript/TypeScript Frameworks

If `package.json` exists, look for these framework indicators:

```json
{
  "dependencies": {
    "react": "...",          // React
    "next": "...",           // Next.js
    "vue": "...",            // Vue.js
    "nuxt": "...",           // Nuxt.js
    "@angular/core": "...",  // Angular
    "svelte": "...",         // Svelte
    "express": "...",        // Express.js (backend)
    "fastify": "...",        // Fastify (backend)
    "@nestjs/core": "..."    // NestJS (backend)
  }
}
```

### Python Frameworks

Look for these imports or dependencies:

- `django` - Django web framework
- `flask` - Flask micro-framework
- `fastapi` - FastAPI
- `pyramid` - Pyramid
- `tornado` - Tornado

### Ruby Frameworks

In `Gemfile`:

- `rails` - Ruby on Rails
- `sinatra` - Sinatra
- `hanami` - Hanami

### PHP Frameworks

In `composer.json`:

- `laravel/framework` - Laravel
- `symfony/symfony` - Symfony
- `slim/slim` - Slim

### Java Frameworks

In `pom.xml` or `build.gradle`:

- `spring-boot` - Spring Boot
- `quarkus` - Quarkus
- `micronaut` - Micronaut

---

## Database Detection

Look for database-related dependencies or configuration:

### SQL Databases

```bash
# PostgreSQL indicators
grep -r "postgres" package.json composer.json requirements.txt 2>/dev/null
ls -la prisma/ 2>/dev/null  # Prisma ORM

# MySQL indicators
grep -r "mysql" package.json composer.json requirements.txt 2>/dev/null

# SQLite
find . -name "*.db" -o -name "*.sqlite" 2>/dev/null
```

### NoSQL Databases

```bash
# MongoDB
grep -r "mongodb\|mongoose" package.json requirements.txt 2>/dev/null

# Redis
grep -r "redis" package.json requirements.txt 2>/dev/null

# DynamoDB (AWS)
grep -r "dynamodb\|@aws-sdk" package.json requirements.txt 2>/dev/null
```

---

## Infrastructure Detection

### Cloud Providers

```bash
# AWS
find . -name "*.tf" -o -name "terraform.tfvars" 2>/dev/null  # Terraform
find . -name "serverless.yml" 2>/dev/null                    # Serverless Framework
find . -name "cdk.json" 2>/dev/null                          # AWS CDK
grep -r "@aws-sdk\|aws-lambda" package.json 2>/dev/null

# Azure
grep -r "@azure" package.json 2>/dev/null

# GCP
grep -r "@google-cloud" package.json 2>/dev/null
```

### Container/Orchestration

```bash
# Docker
ls -la Dockerfile docker-compose.yml 2>/dev/null

# Kubernetes
find . -name "*.yaml" | xargs grep -l "apiVersion: apps/v1" 2>/dev/null
```

---

## Build System Detection

Identify the build tool based on manifest files:

- `package.json` → npm, yarn, or pnpm (check for lock files)
- `pom.xml` → Maven
- `build.gradle` → Gradle
- `Cargo.toml` → Cargo (Rust)
- `go.mod` → Go modules
- `Gemfile` → Bundler
- `composer.json` → Composer
- `requirements.txt` → pip
- `Pipfile` → pipenv
- `pyproject.toml` → poetry
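
The mapping above can be sketched as a small lookup in TypeScript. This is a heuristic sketch, assuming a listing of the repo root is already available; the lock-file tie-break for `package.json` follows common Node.js convention, and checking the more specific Python manifests before `requirements.txt` is an assumption of this sketch:

```typescript
// Map manifest/lock files found in the repo root to a build tool.
export function detectBuildTool(rootFiles: string[]): string | null {
  const has = (name: string) => rootFiles.includes(name);
  if (has("package.json")) {
    // Lock files disambiguate the Node.js package manager
    if (has("pnpm-lock.yaml")) return "pnpm";
    if (has("yarn.lock")) return "yarn";
    return "npm"; // package-lock.json, or no lock file yet
  }
  if (has("pom.xml")) return "Maven";
  if (has("build.gradle")) return "Gradle";
  if (has("Cargo.toml")) return "Cargo";
  if (has("go.mod")) return "Go modules";
  if (has("Gemfile")) return "Bundler";
  if (has("composer.json")) return "Composer";
  if (has("Pipfile")) return "pipenv";
  if (has("pyproject.toml")) return "poetry";
  if (has("requirements.txt")) return "pip";
  return null;
}
```

A `null` result simply means none of the manifests above were found, which usually points at a less common toolchain worth inspecting manually.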

---

## Version Extraction

Extract version numbers from manifests:

```bash
# Node.js
cat package.json | grep '"version"' | head -1

# Python
cat setup.py | grep "version=" 2>/dev/null
cat pyproject.toml | grep "version =" 2>/dev/null

# Java
cat pom.xml | grep "<version>" | head -1

# Rust
cat Cargo.toml | grep "version =" | head -1
```

---

## Multi-Language Projects

If multiple language manifest files exist, identify:

- **Primary language** - The main application language (most source files)
- **Secondary languages** - Supporting tools, scripts, infrastructure

Example:
- Primary: TypeScript (Next.js frontend + backend)
- Secondary: Python (data processing scripts), Terraform (infrastructure)

---

## Output Summary

After detection, summarize as:

```markdown
## Technology Stack

### Primary Language
- TypeScript 5.2

### Frameworks & Libraries
- Next.js 14.0.3 (React framework)
- Prisma 5.6.0 (ORM)
- tRPC 10.45.0 (API)

### Build System
- npm 10.2.3

### Database
- PostgreSQL (via Prisma)

### Infrastructure
- AWS Lambda (Serverless)
- Terraform 1.6.0 (IaC)
```

---

## Common Patterns

### Full-Stack JavaScript/TypeScript
- Frontend: React/Next.js/Vue
- Backend: Express/Fastify/NestJS
- Database: PostgreSQL/MongoDB
- Infrastructure: AWS/Vercel/Netlify

### Python Web App
- Framework: Django/Flask/FastAPI
- Database: PostgreSQL/MySQL
- Cache: Redis
- Infrastructure: AWS/GCP

### Ruby on Rails
- Framework: Rails
- Database: PostgreSQL
- Cache: Redis
- Infrastructure: Heroku/AWS

### Java Enterprise
- Framework: Spring Boot
- Database: PostgreSQL/Oracle
- Message Queue: RabbitMQ/Kafka
- Infrastructure: Kubernetes
395
skills/analyze/operations/directory-analysis.md
Normal file
@@ -0,0 +1,395 @@
# Directory Structure Analysis

Analyze directory structure to identify architecture patterns and key components.

## Overview

Map the directory structure to understand the application's organization and identify:
- Architecture patterns (MVC, microservices, monolith, etc.)
- Key components (backend, frontend, database, API, infrastructure)
- Configuration files
- Source code organization

---

## Directory Mapping Commands

### Basic Structure

```bash
# Show directory tree (limited depth to avoid noise)
find . -type d -maxdepth 3 | grep -v -E "node_modules|vendor|\.git|build|dist|target|__pycache__|\.next|\.nuxt" | sort | head -50
```

### Configuration Files

```bash
# Find all configuration files
find . -maxdepth 3 \( \
  -name "*.json" -o \
  -name "*.yaml" -o \
  -name "*.yml" -o \
  -name "*.toml" -o \
  -name "*.xml" -o \
  -name "*.conf" -o \
  -name "*.config" -o \
  -name ".env*" \
\) | grep -v -E "node_modules|vendor|\.git|dist|build" | sort | head -40
```

---

## Architecture Pattern Recognition

### Frontend Patterns

**Next.js / React (App Router)**
```
app/            # Next.js 13+ app directory
  components/
  api/
  (routes)/
public/
```

**Next.js (Pages Router)**
```
pages/          # Next.js pages
  api/          # API routes
components/
public/
```

**Standard React**
```
src/
  components/
  hooks/
  pages/
  utils/
public/
```

**Vue.js**
```
src/
  components/
  views/
  router/
  store/
public/
```

**Angular**
```
src/
  app/
    components/
    services/
    modules/
  assets/
```

### Backend Patterns

**Node.js/Express**
```
src/
  routes/
  controllers/
  models/
  middleware/
  services/
```

**NestJS**
```
src/
  modules/
  controllers/
  services/
  entities/
  dto/
```

**Django**
```
project_name/
  app_name/
    models/
    views/
    templates/
    migrations/
  settings/
manage.py
```

**Ruby on Rails**
```
app/
  models/
  controllers/
  views/
  helpers/
db/
  migrate/
config/
```

### Microservices Pattern

```
services/
  service-a/
  service-b/
  service-c/
shared/
docker-compose.yml
```

### Monorepo Pattern

```
packages/
  package-a/
  package-b/
apps/
  app-1/
  app-2/
turbo.json (or lerna.json, nx.json)
```

---

## Component Detection

### Backend Detection

Indicators of backend code:

```bash
# API/Backend directories (parentheses keep -type d applying to every -name test)
find . -type d \( -name "api" -o -name "server" -o -name "backend" -o -name "routes" -o -name "controllers" \) 2>/dev/null

# Server files
find . -name "server.js" -o -name "server.ts" -o -name "app.js" -o -name "app.ts" -o -name "main.py" 2>/dev/null
```

### Frontend Detection

Indicators of frontend code:

```bash
# Frontend directories
find . -type d \( -name "components" -o -name "pages" -o -name "views" -o -name "public" -o -name "assets" \) 2>/dev/null

# Frontend config files
find . -name "next.config.*" -o -name "vite.config.*" -o -name "vue.config.*" -o -name "angular.json" 2>/dev/null
```

### Database Detection

Indicators of database usage:

```bash
# ORM/Database directories
find . -type d \( -name "prisma" -o -name "migrations" -o -name "models" -o -name "entities" \) 2>/dev/null

# Database schema files
find . -name "schema.prisma" -o -name "*.sql" -o -name "database.yml" 2>/dev/null
```

### Infrastructure Detection

Indicators of infrastructure-as-code:

```bash
# IaC directories
find . -type d \( -name "terraform" -o -name "infrastructure" -o -name "infra" -o -name ".aws" \) 2>/dev/null

# IaC files
find . -name "*.tf" -o -name "cloudformation.yml" -o -name "serverless.yml" -o -name "cdk.json" 2>/dev/null
```

---

## Source File Counting

Count source files by type to understand project size:

### JavaScript/TypeScript

```bash
# TypeScript/JavaScript files (excluding tests, node_modules, build)
find . -type f \( -name "*.ts" -o -name "*.tsx" -o -name "*.js" -o -name "*.jsx" \) \
  | grep -v -E "node_modules|dist|build|\.next|coverage|test|spec" \
  | wc -l
```

### Python

```bash
# Python files
find . -type f -name "*.py" \
  | grep -v -E "__pycache__|venv|\.venv|dist|build|test_" \
  | wc -l
```

### Java

```bash
# Java files
find . -type f -name "*.java" \
  | grep -v -E "build|target|test" \
  | wc -l
```

### Ruby

```bash
# Ruby files
find . -type f -name "*.rb" \
  | grep -v -E "vendor|spec|test" \
  | wc -l
```

### Other Languages

Adapt the pattern based on detected language:
- PHP: `*.php`
- Go: `*.go`
- Rust: `*.rs`
- C#: `*.cs`
- Swift: `*.swift`
- Kotlin: `*.kt`

---

## Key Components Summary

After analysis, summarize key components:

```markdown
### Key Components Identified

- **Backend:** Yes - Express.js API server
  - Location: `src/api/` (12 routes, 8 controllers)
  - Database: PostgreSQL via Prisma ORM
  - Authentication: JWT-based

- **Frontend:** Yes - Next.js 14 with App Router
  - Location: `app/` (15 pages, 23 components)
  - Styling: Tailwind CSS
  - State: React Context + Server Components

- **Database:** PostgreSQL
  - ORM: Prisma
  - Schema: `prisma/schema.prisma` (8 models)
  - Migrations: 12 migration files

- **API:** RESTful + tRPC
  - REST endpoints: `app/api/` (5 routes)
  - tRPC router: `src/server/api/` (4 routers)
  - OpenAPI: Not found

- **Infrastructure:** AWS Serverless
  - IaC: Terraform (`infrastructure/terraform/`)
  - Services: Lambda, API Gateway, RDS, S3
  - CI/CD: GitHub Actions (`.github/workflows/`)
```

---

## Architecture Pattern Summary

Based on directory structure, identify the overall pattern:

**Examples:**

- **Monolithic Full-Stack** - Single repo with frontend + backend + database
- **Microservices** - Multiple independent services
- **JAMstack** - Static frontend + serverless functions + headless CMS
- **Serverless** - No traditional servers, all functions/managed services
- **Monorepo** - Multiple packages/apps in one repository
- **Client-Server** - Clear separation between client and server code
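
A first pass over top-level directory names can suggest which of these patterns to investigate. The following TypeScript sketch is a heuristic only; the directory names it checks and the precedence between checks are assumptions of this example, not StackShift rules:

```typescript
// Guess the overall architecture pattern from top-level directory names.
export function guessArchitecture(topLevelDirs: string[]): string {
  const has = (d: string) => topLevelDirs.includes(d);
  // Monorepo and microservices layouts are the most distinctive, so check them first
  if (has("packages") || has("apps")) return "Monorepo";
  if (has("services")) return "Microservices";
  const frontend = has("app") || has("pages") || has("components") || has("public");
  const backend = has("api") || has("server") || has("controllers");
  if (frontend && backend) return "Monolithic Full-Stack";
  if (frontend || backend) return "Client-Server";
  return "Unknown";
}
```

Treat "Unknown" as a prompt to inspect the tree manually rather than a final answer; frameworks like Next.js blur the frontend/backend split, so confirm the guess against the framework detection results.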

---

## Common Directory Structures by Framework

### Next.js 13+ (App Router)
```
app/
  (auth)/
    login/
    register/
  dashboard/
  api/
components/
  ui/
lib/
public/
prisma/
```

### Next.js (Pages Router)
```
pages/
  api/
  _app.tsx
  index.tsx
components/
public/
styles/
```

### Express.js Backend
```
src/
  routes/
  controllers/
  models/
  middleware/
  services/
  utils/
  config/
tests/
```

### Django
```
project/
  app/
    models.py
    views.py
    urls.py
    serializers.py
  settings/
  wsgi.py
manage.py
requirements.txt
```

### Rails
```
app/
  models/
  controllers/
  views/
  jobs/
  mailers/
db/
  migrate/
config/
  routes.rb
Gemfile
```

---

## Notes

- Exclude common noise directories: node_modules, vendor, .git, dist, build, target, __pycache__, .next, .nuxt, coverage
- Limit depth to 3 levels to avoid overwhelming output
- Use `head` to limit file counts for large codebases
- Look for naming conventions that indicate purpose (e.g., `controllers/`, `services/`, `utils/`)
||||||
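Putting the notes above together, one way to sketch the scan (the pruned directory names are the ones listed; the `head` cap is an arbitrary choice):

```shell
# Scan directory structure while honoring the notes above: prune noise
# directories, cap depth at 3, and cap output with head for large repos.
scan_dirs() {
  find "${1:-.}" -maxdepth 3 -type d \
    \( -name node_modules -o -name vendor -o -name .git -o -name dist \
       -o -name build -o -name target -o -name __pycache__ -o -name .next \
       -o -name .nuxt -o -name coverage \) -prune -o -type d -print \
    | head -50
}

scan_dirs .
```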
367
skills/analyze/operations/documentation-scan.md
Normal file
@@ -0,0 +1,367 @@
# Documentation Scan

Scan for existing documentation and assess quality.

## Overview

Identify all existing documentation to understand:

- What's already documented
- Quality and completeness of docs
- What documentation is missing
- Where docs are located

---

## Documentation Discovery

### Find Documentation Directories
```bash
# Common documentation directories
ls -la docs/ 2>/dev/null
ls -la documentation/ 2>/dev/null
ls -la doc/ 2>/dev/null
ls -la wiki/ 2>/dev/null
ls -la .docs/ 2>/dev/null
```
### Find Markdown Files

```bash
# Find all markdown files (limiting to avoid noise)
find . -type f -name "*.md" \
  | grep -v -E "node_modules|vendor|\.git|dist|build" \
  | head -30
```
### Common Documentation Files

Look for these standard files:

```bash
# Standard docs
ls -la README.md 2>/dev/null
ls -la CONTRIBUTING.md 2>/dev/null
ls -la CHANGELOG.md 2>/dev/null
ls -la LICENSE 2>/dev/null
ls -la CODE_OF_CONDUCT.md 2>/dev/null
ls -la SECURITY.md 2>/dev/null

# Setup/deployment docs
ls -la INSTALL.md 2>/dev/null
ls -la DEPLOYMENT.md 2>/dev/null
ls -la SETUP.md 2>/dev/null

# Architecture docs
ls -la ARCHITECTURE.md 2>/dev/null
ls -la DESIGN.md 2>/dev/null
ls -la API.md 2>/dev/null
```
---

## Documentation Categories

### README Assessment

Read `README.md` and assess quality:

**Good README includes:**

- Clear project description
- Installation/setup instructions
- Usage examples
- API documentation or links
- Development guide
- Testing instructions
- Deployment guide
- Contributing guidelines
- License information

**Rate as:**

- **Good** - Comprehensive, well-organized, covers all key areas
- **Basic** - Has description and setup, but missing key sections
- **Poor** - Minimal info, outdated, or confusing
### API Documentation

Look for API documentation:

```bash
# OpenAPI/Swagger
find . -name "openapi.yaml" -o -name "openapi.yml" -o -name "swagger.yaml" -o -name "swagger.json" 2>/dev/null

# API doc generators
find . -name "apidoc.json" -o -name ".redocly.yaml" 2>/dev/null

# API docs directories
ls -la docs/api/ 2>/dev/null
ls -la api-docs/ 2>/dev/null
```

**Assessment:**

- **Yes** - OpenAPI spec or comprehensive API docs exist
- **Partial** - Some API docs but incomplete
- **No** - No API documentation found
### Architecture Documentation

Look for architecture diagrams and docs:

```bash
# Architecture docs
find . -name "ARCHITECTURE.md" -o -name "architecture.md" -o -name "DESIGN.md" 2>/dev/null

# Diagram files
find . \( -name "*.drawio" -o -name "*.mermaid" -o -name "*.puml" -o -name "*.svg" \) \
  | grep -i "architecture\|diagram\|flow" 2>/dev/null
```

**Assessment:**

- **Yes** - Architecture docs with diagrams/explanations
- **Partial** - Some architecture info in README
- **No** - No architecture documentation
### Setup/Deployment Documentation

```bash
# Deployment docs
find . -name "DEPLOYMENT.md" -o -name "deployment.md" -o -name "DEPLOY.md" 2>/dev/null

# Infrastructure docs
ls -la infrastructure/README.md 2>/dev/null
ls -la terraform/README.md 2>/dev/null
ls -la .github/workflows/ 2>/dev/null
```

**Assessment:**

- **Yes** - Clear deployment and infrastructure docs
- **Partial** - Basic setup but missing details
- **No** - No deployment documentation
### Developer Documentation

```bash
# Developer guides
find . -name "CONTRIBUTING.md" -o -name "DEVELOPMENT.md" -o -name "dev-guide.md" 2>/dev/null

# Code comments/JSDoc
grep -r "@param\|@returns\|@description" src/ 2>/dev/null | wc -l
```

**Assessment:**

- **Yes** - Developer guide with setup, conventions, workflow
- **Partial** - Some developer info scattered
- **No** - No developer documentation
### Testing Documentation

```bash
# Test docs
find . -name "TESTING.md" -o -name "test-guide.md" 2>/dev/null

# Test README files
find . -path "*/tests/README.md" -o -path "*/test/README.md" 2>/dev/null
```

**Assessment:**

- **Yes** - Testing guide with examples and conventions
- **Partial** - Basic test info in README
- **No** - No testing documentation
### Database Documentation

```bash
# Database docs
find . -name "schema.md" -o -name "database.md" -o -name "DATA_MODEL.md" 2>/dev/null

# ER diagrams
find . -name "*er-diagram*" -o -name "*schema-diagram*" 2>/dev/null

# Migration docs
ls -la migrations/README.md 2>/dev/null
ls -la prisma/README.md 2>/dev/null
```

**Assessment:**

- **Yes** - Database schema docs and ER diagrams
- **Partial** - Schema file but no explanatory docs
- **No** - No database documentation
---

## Documentation Tools Detection

Identify if automated documentation tools are configured:

### Code Documentation Generators
```bash
# JSDoc/TypeDoc (JavaScript/TypeScript)
grep -r "typedoc\|jsdoc" package.json 2>/dev/null

# Sphinx (Python)
ls -la docs/conf.py 2>/dev/null

# Javadoc (Java)
grep -r "javadoc" pom.xml build.gradle 2>/dev/null

# RDoc/YARD (Ruby)
ls -la .yardopts 2>/dev/null

# Doxygen (C/C++)
ls -la Doxyfile 2>/dev/null
```
### API Documentation Tools

```bash
# Swagger UI
grep -r "swagger-ui" package.json 2>/dev/null

# Redoc
grep -r "redoc" package.json 2>/dev/null

# Postman collections
find . -name "*.postman_collection.json" 2>/dev/null
```
### Static Site Generators

```bash
# Docusaurus
grep -r "docusaurus" package.json 2>/dev/null
ls -la docusaurus.config.js 2>/dev/null

# VuePress
grep -r "vuepress" package.json 2>/dev/null

# MkDocs
ls -la mkdocs.yml 2>/dev/null

# GitBook
ls -la .gitbook.yaml 2>/dev/null

# Mintlify
ls -la mint.json 2>/dev/null
```

---
## Documentation Quality Checklist

For each category, assess:

- [ ] **Exists** - Documentation files are present
- [ ] **Current** - Docs match current code (check dates)
- [ ] **Complete** - Covers all major features/components
- [ ] **Clear** - Well-written and easy to understand
- [ ] **Examples** - Includes code examples and usage
- [ ] **Maintained** - Recently updated (check git log)
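The "Current" and "Maintained" checks can be scripted. A sketch, assuming the scan runs inside a git checkout; the file list is illustrative:

```shell
# Print the last-commit date of each documentation file, newest first.
# Assumes a git repository; "unknown" is emitted when git has no history.
doc_dates() {
  for f in README.md CONTRIBUTING.md docs/*.md; do
    [ -f "$f" ] || continue
    printf '%s\t%s\n' "$(git log -1 --format=%cs -- "$f" 2>/dev/null || echo unknown)" "$f"
  done | sort -r
}

doc_dates
```

Compare these dates against recent source-code commits to flag stale docs.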
---
## Output Summary

Summarize findings:

```markdown
## Existing Documentation

### README.md
- **Status:** Yes
- **Quality:** Good
- **Coverage:** Installation, usage, API overview, development setup
- **Last Updated:** 2024-01-15
- **Notes:** Comprehensive but missing deployment section

### API Documentation
- **Status:** Partial
- **Type:** Inline JSDoc comments only
- **Coverage:** ~60% of endpoints documented
- **OpenAPI Spec:** No
- **Notes:** Should generate OpenAPI spec

### Architecture Documentation
- **Status:** No
- **Notes:** Architecture decisions are scattered in code comments

### Setup/Deployment Documentation
- **Status:** Yes
- **Files:** DEPLOYMENT.md, infrastructure/README.md
- **Coverage:** AWS deployment, CI/CD, environment setup
- **Quality:** Basic

### Developer Documentation
- **Status:** Partial
- **Files:** CONTRIBUTING.md
- **Coverage:** PR process, code style guide
- **Missing:** Local development setup, debugging guide

### Testing Documentation
- **Status:** No
- **Notes:** No testing guide, test structure undocumented

### Database Documentation
- **Status:** Yes
- **Type:** Prisma schema file with comments
- **Coverage:** All models documented inline
- **ER Diagram:** No
- **Notes:** Should generate ER diagram from schema

### Documentation Tools
- **Configured:** None
- **Recommended:** TypeDoc for code docs, Swagger for API docs
```

---
## Missing Documentation Identification

List what documentation should be created:

**Critical (needed for Step 2):**

- OpenAPI specification for API endpoints
- Architecture overview document
- Database ER diagram

**Important (create during specification):**

- Comprehensive testing guide
- Deployment runbook
- Troubleshooting guide

**Nice-to-have:**

- Code contribution guide
- ADRs (Architecture Decision Records)
- Security documentation

---
## Documentation Metrics

Calculate documentation coverage:

```bash
# Count documented vs undocumented functions (example for JS/TS)

# Total functions
grep -r "function\|const.*=>.*{" src/ | wc -l

# Documented functions (with JSDoc comment on the preceding line)
grep -rB1 "function\|const.*=>" src/ | grep -c "/\*\*"

# Calculate percentage: documented / total * 100
```

Report as:

- **Code documentation coverage:** ~45% (estimated)
- **API endpoint documentation:** ~60%
- **Feature documentation:** ~30%
- **Overall documentation score:** 4/10
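The counting steps above can be wrapped into a single function. Same rough heuristic, so treat the output as a lower bound: only a single-line `/** ... */` directly above a function counts as documented.

```shell
# Rough JSDoc coverage percentage for JS/TS sources under a directory.
doc_coverage() {
  dir="${1:-src}"
  # Total function-like lines
  total=$(grep -rE 'function |=> *\{' "$dir" 2>/dev/null | wc -l | tr -d ' ')
  # Function-like lines preceded by a /** ... */ comment line
  documented=$(grep -rB1 -E 'function |=> *\{' "$dir" 2>/dev/null | grep -c '/\*\*')
  if [ "$total" -gt 0 ]; then
    echo "Documented: $documented / $total ($((100 * documented / total))%)"
  else
    echo "No functions found under $dir"
  fi
}

doc_coverage src
```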
---
## Notes

- Check git history to see when docs were last updated
- Compare doc dates with code changes to identify stale docs
- Look for TODO/FIXME comments in docs indicating incomplete sections
- Verify links in docs aren't broken
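For the last note, a naive local-link check can be sketched as follows. It covers relative markdown links only, skips external URLs, and uses a rough approximation of markdown link syntax:

```shell
# Extract relative markdown link targets from a file and verify that
# each target exists on disk relative to the file's directory.
check_links() {
  file="$1"
  grep -oE '\]\([^)#]+\)' "$file" \
    | sed 's/^](//; s/)$//' \
    | grep -vE '^https?://' \
    | while read -r target; do
        [ -e "$(dirname "$file")/$target" ] || echo "broken: $target (in $file)"
      done
}

check_links README.md 2>/dev/null
```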
475
skills/analyze/operations/generate-report.md
Normal file
@@ -0,0 +1,475 @@
# Generate Analysis Report

Template and guidelines for creating `analysis-report.md`.

## Overview

After completing all analysis steps, generate a comprehensive `analysis-report.md` file in the project root.

---

## Report Template
```markdown
# Initial Analysis Report

**Date:** [Current Date - YYYY-MM-DD]
**Directory:** [Full path from pwd]
**Analyst:** Claude Code (Reverse Engineering Toolkit v1.0.0)

---

## Executive Summary

[2-3 paragraph summary of the application, its purpose, current state, and overall completeness]

Example:
> This is a full-stack Next.js application for managing aquarium fish care ("fishfan"). The backend is fully implemented with 17 AWS Lambda functions, PostgreSQL database, and complete authentication. The frontend is ~60% complete with core functionality implemented but several pages still in placeholder state. Overall project completion is estimated at ~66%.

---

## Application Metadata

- **Name:** [Application Name from package.json or directory]
- **Version:** [Version from manifest]
- **Description:** [From manifest or README]
- **Repository:** [Git remote URL or "Not configured"]
- **License:** [From LICENSE file or package.json]
- **Primary Language:** [Language] [Version if available]

---
## Technology Stack

### Primary Language
- [Language] [Version]
- [Key notes about language usage]

### Frontend Framework
- [Framework] [Version]
- [Key dependencies and notes]

### Backend Framework
- [Framework] [Version]
- [Key dependencies and notes]

### Database
- [Database Type] [Version/Service]
- ORM: [ORM if applicable]
- Migration System: [Yes/No with details]

### Infrastructure & Deployment
- **Cloud Provider:** [AWS/GCP/Azure/Other]
- **IaC Tool:** [Terraform/CloudFormation/CDK/None]
- **CI/CD:** [GitHub Actions/GitLab CI/CircleCI/None]
- **Hosting:** [Vercel/Netlify/AWS Lambda/EC2/etc.]

### Key Dependencies

| Category | Library | Version | Purpose |
|----------|---------|---------|---------|
| Auth | [Library] | [Version] | [Purpose] |
| API | [Library] | [Version] | [Purpose] |
| UI | [Library] | [Version] | [Purpose] |
| Testing | [Library] | [Version] | [Purpose] |
| Build | [Library] | [Version] | [Purpose] |

---
## Architecture Overview

### Application Type
[Full-Stack Monolith / Microservices / JAMstack / Serverless / etc.]

### Directory Structure

```
[Project Root]/
├── [key-directory-1]/        # [Purpose]
│   ├── [subdirectory]/       # [Purpose]
│   └── ...
├── [key-directory-2]/        # [Purpose]
├── [configuration-files]     # [Purpose]
└── [other-key-files]
```

### Key Components

#### Backend
- **Status:** [Exists / Not Found]
- **Location:** [Directory path]
- **Type:** [REST API / GraphQL / tRPC / Mixed]
- **Endpoints:** [Count] endpoints identified
- **Database Models:** [Count] models
- **Authentication:** [Method - JWT/Session/OAuth/None]
- **Key Features:**
  - [Feature 1]
  - [Feature 2]
  - [Feature 3]

#### Frontend
- **Status:** [Exists / Not Found]
- **Location:** [Directory path]
- **Type:** [SPA / SSR / SSG / Hybrid]
- **Pages:** [Count] pages identified
- **Components:** [Count] reusable components
- **Styling:** [Tailwind/CSS Modules/Styled Components/etc.]
- **State Management:** [Redux/Context/Zustand/None]
- **Key Features:**
  - [Feature 1]
  - [Feature 2]
  - [Feature 3]

#### Database
- **Type:** [PostgreSQL/MySQL/MongoDB/etc.]
- **ORM:** [Prisma/TypeORM/Sequelize/etc.]
- **Schema Location:** [Path to schema files]
- **Models:** [Count] models defined
- **Migrations:** [Count] migrations
- **Seeding:** [Configured / Not Configured]

#### API Architecture
- **Type:** [RESTful / GraphQL / tRPC / Mixed]
- **Endpoints:** [Count] total endpoints
- **Documentation:** [OpenAPI Spec / Inline Comments / None]
- **Versioning:** [v1/v2/etc. or None]
- **Rate Limiting:** [Configured / Not Configured]

#### Infrastructure
- **IaC Tool:** [Terraform/CloudFormation/etc.]
- **Services Used:**
  - [Service 1]: [Purpose]
  - [Service 2]: [Purpose]
  - [Service 3]: [Purpose]
- **Configuration:** [Location of IaC files]
- **Environments:** [dev/staging/prod or single]

---
## Existing Documentation

### README.md
- **Status:** [Yes / No]
- **Quality:** [Good / Basic / Poor]
- **Sections:**
  - [✓] Description
  - [✓] Installation
  - [✗] API Documentation
  - [✓] Development Setup
  - [✗] Testing Guide
  - [✓] Deployment
- **Last Updated:** [Date from git log]
- **Notes:** [Any observations]

### API Documentation
- **Status:** [Yes / Partial / No]
- **Format:** [OpenAPI/Postman/Inline/None]
- **Coverage:** [Percentage or count]
- **Location:** [Path or URL]
- **Notes:** [Any observations]

### Architecture Documentation
- **Status:** [Yes / Partial / No]
- **Files:** [List of architecture docs]
- **Diagrams:** [Yes/No - list types]
- **Notes:** [Any observations]

### Setup/Deployment Documentation
- **Status:** [Yes / Partial / No]
- **Files:** [List files]
- **Coverage:** [What's documented]
- **Notes:** [Any observations]

### Developer Documentation
- **Status:** [Yes / Partial / No]
- **Files:** [CONTRIBUTING.md, etc.]
- **Coverage:** [What's documented]
- **Notes:** [Any observations]

### Testing Documentation
- **Status:** [Yes / Partial / No]
- **Files:** [Test guide files]
- **Coverage:** [What's documented]
- **Notes:** [Any observations]

### Database Documentation
- **Status:** [Yes / Partial / No]
- **Type:** [ER Diagram / Schema Comments / None]
- **Coverage:** [What's documented]
- **Notes:** [Any observations]

### Documentation Tools
- **Configured:** [List tools like TypeDoc, JSDoc, Sphinx, etc.]
- **Output Location:** [Where docs are generated]
- **Notes:** [Any observations]

---
## Completeness Assessment

### Overall Completion: ~[X]%

### Component Breakdown

| Component | Completion | Evidence |
|-----------|------------|----------|
| Backend | ~[X]% | [Brief evidence] |
| Frontend | ~[X]% | [Brief evidence] |
| Database | ~[X]% | [Brief evidence] |
| Tests | ~[X]% | [Brief evidence] |
| Documentation | ~[X]% | [Brief evidence] |
| Infrastructure | ~[X]% | [Brief evidence] |

### Detailed Evidence

#### Backend (~[X]%)
[Detailed evidence with specific examples, file counts, etc.]

Example:
- 17 Lambda functions fully implemented and tested
- All API endpoints functional with proper error handling
- Authentication/authorization complete with JWT
- Database queries optimized
- No placeholder or TODO comments in backend code

#### Frontend (~[X]%)
[Detailed evidence]

Example:
- 8 of 12 planned pages implemented
- Core pages complete: Login, Dashboard, Fish List, Fish Detail
- Placeholder pages: Analytics (TODO), Settings (stub), Admin (not started), Reports (skeleton)
- Components: 23 reusable components, all functional
- Styling: ~80% complete, missing dark mode and mobile polish

#### Tests (~[X]%)
[Detailed evidence]

#### Documentation (~[X]%)
[Detailed evidence]

#### Infrastructure (~[X]%)
[Detailed evidence]

### Placeholder Files & TODOs

**Files with Placeholder Content:**
- [File path]: [Description]
- [File path]: [Description]

**TODO/FIXME Comments:**
- Found [N] TODO comments across codebase
- Top categories:
  1. [Category]: [Count]
  2. [Category]: [Count]
  3. [Category]: [Count]

**Sample TODOs:**
```
[File:Line] - TODO: [Comment]
[File:Line] - FIXME: [Comment]
[File:Line] - TODO: [Comment]
```

### Missing Components

**Not Started:**
- [Component/Feature]: [Description]
- [Component/Feature]: [Description]

**Partially Implemented:**
- [Component/Feature]: [What exists vs what's missing]
- [Component/Feature]: [What exists vs what's missing]

**Needs Improvement:**
- [Component/Feature]: [Current state and what needs work]
- [Component/Feature]: [Current state and what needs work]

---
## Source Code Statistics

- **Total Source Files:** [Count]
- **Lines of Code:** ~[Estimate from cloc or wc]
- **Test Files:** [Count]
- **Test Coverage:** [Percentage if available or "Not measured"]
- **Configuration Files:** [Count]

### File Type Breakdown

| Type | Count | Purpose |
|------|-------|---------|
| TypeScript/JavaScript | [Count] | [Application code/Components/etc.] |
| Tests | [Count] | [Unit/Integration/E2E] |
| Styles | [Count] | [CSS/SCSS/etc.] |
| Configuration | [Count] | [Build/Deploy/Environment] |
| Documentation | [Count] | [Markdown files] |

---
## Technical Debt & Issues

### Identified Issues
1. [Issue 1]: [Description and impact]
2. [Issue 2]: [Description and impact]
3. [Issue 3]: [Description and impact]

### Security Concerns
- [Concern 1]: [Description]
- [Concern 2]: [Description]

### Performance Concerns
- [Concern 1]: [Description]
- [Concern 2]: [Description]

### Code Quality
- Linting: [Configured / Not Configured]
- Type Checking: [Strict / Loose / None]
- Code Formatting: [Prettier/ESLint/None]
- Pre-commit Hooks: [Configured / Not Configured]

---
## Recommended Next Steps

Based on this analysis, the reverse engineering process should focus on:

### Immediate Priorities
1. **[Priority 1]**
   - Why: [Reasoning]
   - Impact: [Expected outcome]

2. **[Priority 2]**
   - Why: [Reasoning]
   - Impact: [Expected outcome]

3. **[Priority 3]**
   - Why: [Reasoning]
   - Impact: [Expected outcome]

### Reverse Engineering Focus Areas

For **Step 2 (Reverse Engineer)**:
- Prioritize extracting documentation for: [List components]
- Pay special attention to: [Areas of concern]
- Can likely skip: [Well-documented areas]

### Estimated Reverse Engineering Effort
- **Step 2 (Reverse Engineer):** ~[X] minutes (based on codebase size)
- **Step 3 (Create Specifications):** ~[X] minutes
- **Step 4 (Gap Analysis):** ~[X] minutes
- **Step 5 (Complete Specification):** ~[X] minutes (interactive)
- **Step 6 (Implement from Spec):** ~[X] hours/days (depends on gaps)

---

## Notes & Observations

[Any additional observations, concerns, or context that doesn't fit above categories]

Examples:
- Monorepo structure detected but not using workspace tools
- Multiple authentication methods suggest migration in progress
- Infrastructure code is more mature than application code
- Build process is complex and could be simplified
- Dependencies are up-to-date (as of [date])

---

## Appendices

### A. Dependency Tree (Top-Level)

```
[Main dependencies with versions]
```

### B. Configuration Files Inventory

```
[List of all configuration files with brief descriptions]
```

### C. Database Schema Summary

```
[List of models/tables with key relationships]
```

---

**Report Generated:** [Timestamp]
**Toolkit Version:** 1.0.0
**Ready for Step 2:** ✅
```

---
## Filling Out the Template

### Executive Summary Guidelines

Write a 2-3 paragraph summary answering:

1. What is this application? (Purpose, domain, users)
2. What's the tech stack? (Key technologies)
3. What's the current state? (Completion %, what's done, what's missing)
4. What's next? (Main recommendation)

### Evidence Requirements

For every percentage estimate, provide concrete evidence:

- File counts
- Feature lists
- Specific examples
- Code samples (for TODOs)
- Dates (for documentation)

### Prioritization Logic

Recommend next steps based on:

1. **Critical gaps** - Security, data integrity, deployment blockers
2. **High-value gaps** - User-facing features, core functionality
3. **Quality gaps** - Tests, documentation, error handling
4. **Nice-to-haves** - Polish, optimizations, extras
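Several of the evidence items above (file counts, TODO counts) can be gathered in one pass. A sketch, assuming a JS/TS layout with sources under `src/`; adjust the globs for other stacks:

```shell
# Quick, reproducible evidence counts for the completeness tables.
evidence_counts() {
  dir="${1:-src}"
  echo "Source files: $(find "$dir" -type f \( -name '*.ts' -o -name '*.js' \) 2>/dev/null | wc -l | tr -d ' ')"
  echo "TODOs: $(grep -rn 'TODO\|FIXME' "$dir" 2>/dev/null | wc -l | tr -d ' ')"
}

evidence_counts src
```

Paste the raw numbers into the report so percentages can be audited later.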
---

## Report Validation Checklist

Before finalizing the report, verify:

- [ ] All sections filled out (no [TODO] markers left)
- [ ] Percentages are evidence-based, not guesses
- [ ] File paths are accurate and up-to-date
- [ ] Tech stack versions are correct
- [ ] Missing components are clearly identified
- [ ] Recommendations are actionable
- [ ] Executive summary is clear and concise
- [ ] Report is saved to project root as `analysis-report.md`

---
## Sample Output Location

The report should be saved to:

```
/path/to/project-root/analysis-report.md
```

This ensures it's:

- Easy to find
- Version controlled (add to git)
- Referenced by subsequent steps

---
## Next Steps After Report

Once the report is generated and reviewed:

1. **Review with user** - Present key findings
2. **Confirm accuracy** - Ask if the analysis matches their understanding
3. **Adjust estimates** - Update based on user feedback
4. **Proceed to Step 2** - Use the reverse-engineer skill to generate comprehensive documentation

The analysis report serves as the foundation for the entire reverse engineering process.
260
skills/complete-spec/SKILL.md
Normal file
@@ -0,0 +1,260 @@
---
name: complete-spec
description: Interactive conversation to resolve [NEEDS CLARIFICATION] markers using the /speckit.clarify command. Claude asks questions about missing features, UX/UI details, behavior, and priorities, then updates specs in .specify/memory/ with the answers to create complete, unambiguous documentation. This is Step 5 of 6 in the reverse engineering process.
---

# Complete Specification (with GitHub Spec Kit)

**Step 5 of 6** in the Reverse Engineering to Spec-Driven Development process.

**Estimated Time:** 30-60 minutes (interactive)
**Prerequisites:** Step 4 completed (`docs/gap-analysis-report.md` exists with clarifications list)
**Output:** Updated specs in `specs/` with all `[NEEDS CLARIFICATION]` markers resolved

---
## When to Use This Skill

Use this skill when:

- You've completed Step 4 (Gap Analysis)
- You have `[NEEDS CLARIFICATION]` markers in specifications
- You're ready for an interactive clarification session using `/speckit.clarify`
- You want to finalize specifications before implementation

**Trigger Phrases:**

- "Complete the specification"
- "Resolve clarifications"
- "Run speckit clarify"
- "Let's clarify the missing details"

---

## What This Skill Does

Uses `/speckit.clarify` and **interactive conversation** to fill specification gaps:

1. **Use /speckit.clarify** - GitHub Spec Kit's built-in clarification tool
2. **Interactive Q&A** - Ask questions about missing features and details
3. **Update Specifications** - Add answers to specs in `specs/`
4. **Resolve Ambiguities** - Remove all `[NEEDS CLARIFICATION]` markers
5. **Update Implementation Plans** - Refine plans in `specs/`
6. **Finalize for Implementation** - Ready for `/speckit.tasks` and `/speckit.implement`

**Note:** `/speckit.clarify` provides a structured clarification workflow. This skill can also supplement it with custom Q&A for project-specific needs.

---
## Process Overview
|
||||||
|
|
||||||
|
### Step 1: Collect All Clarifications
|
||||||
|
|
||||||
|
From `specs/gap-analysis.md` and all feature specs:
|
||||||
|
- List all `[NEEDS CLARIFICATION]` markers
|
||||||
|
- Group by feature
|
||||||
|
- Prioritize by impact (P0 first)
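Collecting the markers can be scripted. A minimal sketch using grep (the sample spec file is created here only so the sketch runs end to end; real runs scan the existing `specs/` tree):

```shell
#!/usr/bin/env sh
# Create a sample spec so the sketch is runnable standalone.
mkdir -p specs/001-demo
printf '## Analytics Dashboard\n[NEEDS CLARIFICATION] What charts?\n' > specs/001-demo/spec.md

# File:line listing of every marker, for grouping by feature.
grep -rn "\[NEEDS CLARIFICATION\]" specs/

# Marker count per spec file (file:count), highest first, for prioritization.
grep -rc "\[NEEDS CLARIFICATION\]" specs/ | sort -t: -k2 -rn
```

The file-and-line listing makes it easy to group markers by feature directory before the Q&A session.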
### Step 2: Interactive Q&A Session

**For each clarification, ask the user:**

Example questions:
- "The Analytics Dashboard feature is missing. What charts/metrics should be displayed?"
- "For photo upload, should it be drag-and-drop or click-to-browse?"
- "Should offline sync download full data or just metadata?"
- "What's the maximum number of photos per fish?"
- "For species input, free-text field or autocomplete dropdown?"

**Listen for:**
- Feature requirements
- UX/UI preferences
- Business logic rules
- Constraints and limitations
- Priority adjustments

### Step 3: Update Feature Specs

As the user answers questions:
- Add details to feature specs
- Define acceptance criteria
- Specify UI behavior
- Document business rules
- Remove `[NEEDS CLARIFICATION]` markers

### Step 4: Confirm Priorities

Review the prioritized implementation plan:
- Are P0 items truly critical?
- Should any features move up or down in priority?
- What's the target timeline?

### Step 5: Finalize Specifications

Ensure all specs are:
- Complete (no clarification markers)
- Unambiguous (clear requirements)
- Testable (acceptance criteria defined)
- Prioritized (P0/P1/P2/P3)

---

## Example Interactive Session

**Claude:**
> I found 12 items that need clarification. Let me go through them by priority.
>
> **F003 - Analytics Dashboard [P1]**
> This feature is marked as missing. What specific charts and metrics should be displayed?

**User:**
> I want to see:
> - Fish count over time (line chart)
> - Water parameter trends (multi-line chart)
> - Tank health score (gauge)
> - Recent alerts (list)

**Claude:**
> Got it! I'll add those requirements to F003.
>
> Should these be real-time updates or daily aggregates?

**User:**
> Daily aggregates for the charts, but real-time for alerts.

**Claude:**
> Perfect. Next question:
>
> **F002 - Photo Upload UI [P0]**
> For uploading fish photos, would you prefer drag-and-drop or click-to-browse?

**User:**
> Both options would be ideal.

**Claude:**
> Understood. What's the maximum number of photos per fish?

**User:**
> Let's limit it to 10 photos per fish.

[...continues for all clarifications...]

---

## Output Updates

### Updated Feature Specs

**Before:**
```markdown
## Analytics Dashboard [NEEDS CLARIFICATION]

Status: ❌ MISSING

[NEEDS CLARIFICATION] What charts and metrics to display?
```

**After:**
```markdown
## Analytics Dashboard

Status: ❌ MISSING
Priority: P1

### Overview
Dashboard displaying fish count trends, water parameter history, tank health, and recent alerts.

### Acceptance Criteria
- [ ] Fish count over time line chart (daily aggregates)
- [ ] Water parameter trends multi-line chart (pH, temp, ammonia)
- [ ] Tank health score gauge (0-100)
- [ ] Recent alerts list (real-time updates)
- [ ] Date range selector (7d, 30d, 90d, all)

### UI Requirements
- Responsive design (desktop + mobile)
- Charts use Recharts library
- Real-time updates for alerts via WebSocket

### API Requirements
- GET /api/analytics/fish-count?range=30d
- GET /api/analytics/water-params?range=30d
- GET /api/analytics/health-score
- WebSocket /ws/alerts for real-time alerts
```

### Updated Gap Analysis

Remove resolved clarifications from the list.

### Updated Implementation Status

Reflect finalized priorities and details.

---

## Success Criteria

After running this skill, you should have:

- ✅ All `[NEEDS CLARIFICATION]` markers resolved
- ✅ Feature specs updated with complete details
- ✅ Acceptance criteria defined for all features
- ✅ Priorities confirmed (P0/P1/P2/P3)
- ✅ Implementation roadmap finalized
- ✅ Ready to proceed to Step 6 (Implement from Spec)

---

## Next Step

Once specifications are complete and unambiguous, proceed to:

**Step 6: Implement from Spec** - Use the implement skill to systematically build missing features.

---

## Interactive Guidelines

### Asking Good Questions

**DO:**
- Ask specific, focused questions
- Provide context for each question
- Offer examples or common patterns
- Ask one category at a time (don't overwhelm)
- Confirm understanding by summarizing

**DON'T:**
- Ask overly technical questions (keep them user-focused)
- Assume answers (always ask)
- Rush through clarifications
- Mix multiple questions together

### Handling Uncertainty

If the user is unsure:
- Suggest common industry patterns
- Provide examples from similar features
- Offer to defer the decision (mark as P2/P3)
- Document the uncertainty and move on

### Documenting Answers

For each answer:
- Update the relevant feature spec immediately
- Add to acceptance criteria
- Remove the clarification marker
- Confirm understanding with user
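Removing a resolved marker can be done mechanically once the answer is recorded. A minimal sketch (the file path and marker text are illustrative, not prescribed by this skill):

```shell
#!/usr/bin/env sh
# Sketch: after an answer is agreed, drop the resolved marker line from a spec.
# The sample spec is created here only so the sketch runs standalone.
SPEC="specs/003-analytics-dashboard/spec.md"
mkdir -p "$(dirname "$SPEC")"
printf '## Analytics Dashboard\n[NEEDS CLARIFICATION] What charts?\n' > "$SPEC"

# Remove the marker line via a temp file (portable across BSD and GNU tools).
grep -v "\[NEEDS CLARIFICATION\]" "$SPEC" > "$SPEC.tmp" && mv "$SPEC.tmp" "$SPEC"

# Confirm nothing is left; grep -c exits non-zero when the count is zero.
grep -c "\[NEEDS CLARIFICATION\]" "$SPEC" || echo "all markers resolved"
```

The answered detail itself should still be written into the spec by hand (or by Claude) before the marker line is dropped.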
---

## Technical Notes

- Use the AskUserQuestion tool for structured Q&A
- Group related questions together
- Prioritize P0 clarifications first
- Keep a running list of resolved items
- Update specs incrementally (don't batch)

---

**Remember:** This is Step 5 of 6. After this interactive session, you'll have complete, unambiguous specifications ready for implementation in Step 6.
283
skills/convert-to-speckit/SKILL.md
Normal file
@@ -0,0 +1,283 @@
# Convert to Spec Kit Format

**Skill**: convert-to-speckit
**Purpose**: Convert existing `docs/reverse-engineering/` documentation into GitHub Spec Kit specifications
**Use Case**: Repository has reverse engineering docs from StackShift Gears 1-2, but needs proper Spec Kit format

---

## What This Skill Does

This skill reads your existing reverse engineering documentation and converts it into properly formatted GitHub Spec Kit specifications, ready for the `/speckit-*` workflow commands.

### Prerequisites

Your repository should have:
- `docs/reverse-engineering/` directory with documentation files
- `.specify/templates/` with Spec Kit templates (will be created if missing)

### Process

1. **Scan** - Read all files in `docs/reverse-engineering/`
2. **Analyze** - Identify distinct features and capabilities
3. **Extract** - Pull out business logic, data models, APIs, integrations
4. **Convert** - Map to GitHub Spec Kit format
5. **Create** - Generate `specs/F{NNN}-{feature}/spec.md` files
6. **Validate** - Ensure all required sections are complete
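The create step above can be sketched as a small script; the feature list here is hypothetical (real names come from the documentation scan):

```shell
#!/usr/bin/env sh
# Sketch: scaffold one spec directory per extracted feature.
set -e
i=1
for slug in user-authentication product-catalog shopping-cart; do
  # Zero-padded feature ID, e.g. specs/F001-user-authentication
  dir=$(printf 'specs/F%03d-%s' "$i" "$slug")
  mkdir -p "$dir"
  printf '# Feature Specification: %s\n\n**Status**: Draft\n' "$slug" > "$dir/spec.md"
  i=$((i + 1))
done
ls specs
```

Each scaffolded `spec.md` would then be filled in with the sections from the template below.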
---

## Step 1: Locate Documentation

I'll scan for reverse engineering documentation:

```
docs/reverse-engineering/
├── functional-specification.md
├── data-architecture.md
├── api-documentation.md
├── integration-points.md
├── business-logic.md
├── deployment-architecture.md
└── [other analysis files]
```

**Action**: Let me read all files in this directory to understand the application.

---

## Step 2: Extract Features

From the documentation, I'll identify distinct features. Each feature becomes one specification.

**Examples of features**:
- User Authentication (login, registration, password reset)
- Product Catalog (listing, search, filtering)
- Shopping Cart (add to cart, update, checkout)
- Payment Processing (cards, validation, receipts)
- Order Management (create, track, update, cancel)
- Admin Dashboard (reporting, analytics, user management)

**Question for you**: After I list the features I found, you can:
- Confirm priorities (P0 = critical, P1 = important, P2 = enhancement)
- Add features I might have missed
- Skip features not needed right now

---

## Step 3: Create Specifications

For each feature, I'll create a properly formatted specification following this structure:

### File: `specs/F{NNN}-{feature-slug}/spec.md`

```markdown
# Feature Specification: {Feature Name}

**Feature Branch**: `{NNN}-{feature-slug}`
**Created**: {date}
**Status**: Draft
**Priority**: P0 | P1 | P2

## User Scenarios & Testing *(mandatory)*

### User Story 1 - {Capability} (Priority: P0/P1/P2)

As a {user type}, I need {capability} so that {benefit}.

**Why this priority**: {Business value explanation}

**Independent Test**: {How to test this in isolation}

**Acceptance Scenarios**:
1. **Given** {precondition}, **When** {action}, **Then** {outcome}
2. **Given** {precondition}, **When** {action}, **Then** {outcome}
3. **Given** {precondition}, **When** {action}, **Then** {outcome}

---

{3-5 user stories per feature}

---

### Edge Cases

- {5-10 edge cases that need handling}

## Requirements *(mandatory)*

### Functional Requirements

- **FR-001**: System MUST {requirement}
- **FR-002**: System MUST {requirement}
- **FR-003**: System SHOULD {optional requirement}

{10-15 functional requirements}

### Key Entities *(if data-related)*

- **{Entity}**: {Description}

## Success Criteria *(mandatory)*

### Measurable Outcomes

- **SC-001**: {Metric}: {Expected value}
- **SC-002**: {Performance metric}

{8-12 success criteria}

### Non-Functional Requirements

- **Performance**: {Response times, throughput}
- **Reliability**: {Uptime, error rates}
- **Security**: {Auth, encryption, protection}
- **Maintainability**: {Code quality, tests}

## Assumptions

1. {Technical assumptions}
2. {Environment assumptions}
{3-7 assumptions}

## Dependencies

- {External systems, libraries, services}

## Out of Scope

- {Things NOT in this feature}
- {Future enhancements}

## References

- {Documentation links}
- {Standards followed}
```

---

## Conversion Mapping

### From Reverse Engineering Docs → Spec Kit

| Source | Target |
|--------|--------|
| Capabilities, features, "what it does" | User Stories |
| API endpoints, request/response | Functional Requirements |
| Database tables, schemas | Key Entities |
| Business rules, validation | Acceptance Scenarios |
| Error conditions, limits | Edge Cases |
| External APIs, services | Dependencies |
| Tech stack, frameworks | Implementation details (plan.md, not spec.md) |

### Key Principles

**Spec.md describes WHAT, not HOW**:
- ✅ "System MUST authenticate users securely"
- ❌ "System MUST use JWT with RS256 algorithm"

**Stay technology-agnostic**:
- ✅ "System MUST persist data reliably"
- ❌ "System MUST use PostgreSQL with replication"

**Focus on outcomes**:
- ✅ "System MUST respond within 200ms"
- ❌ "System MUST cache with Redis"

Implementation details go in `plan.md`, which is created later via `/speckit-plan`.

---

## Quality Validation

Before completing, I'll verify each spec has:

- [ ] 3-5 user stories with clear business value
- [ ] Each story has 3 acceptance scenarios
- [ ] 5-10 edge cases identified
- [ ] 10-15 functional requirements (MUST/SHOULD/MAY)
- [ ] Key entities listed (if data-related)
- [ ] 8-12 measurable success criteria
- [ ] Non-functional requirements (performance, security, reliability)
- [ ] 3-7 assumptions documented
- [ ] Dependencies clearly listed
- [ ] Out of scope items specified
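Part of this validation can be automated with a heading check. A minimal sketch (the sample file and the section list are illustrative; counts like "3-5 user stories" still need human review):

```shell
#!/usr/bin/env sh
# Sketch: check that a spec file contains the mandatory sections.
# A sample spec is created here only so the check is runnable standalone.
SPEC="specs/F001-demo/spec.md"
mkdir -p "$(dirname "$SPEC")"
printf '# Feature Specification: Demo\n## User Scenarios & Testing\n## Requirements\n## Success Criteria\n' > "$SPEC"

for section in "## User Scenarios & Testing" "## Requirements" "## Success Criteria"; do
  if grep -q "^$section" "$SPEC"; then
    echo "OK: $section"
  else
    echo "MISSING: $section"
  fi
done
```

A spec that prints any `MISSING:` line should go back through Step 3 before conversion is considered complete.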
---

## Example: Before & After

### Before (from reverse engineering docs):

```
API: POST /api/auth/login
- Takes email and password
- Returns JWT token
- Returns 401 if invalid
- Rate limited to 5 attempts
```

### After (Spec Kit format):

```markdown
### User Story 1 - Secure Login (Priority: P0)

As a registered user, I need to log in with email and password so that I can access my account securely.

**Why this priority**: Core authentication is critical for all user features.

**Independent Test**: Submit valid credentials, verify token returned.

**Acceptance Scenarios**:
1. **Given** valid email and password, **When** login submitted, **Then** authentication token and user profile returned
2. **Given** invalid credentials, **When** login submitted, **Then** 401 error returned without revealing which field failed
3. **Given** 5 failed attempts, **When** login submitted, **Then** 429 rate limit error returned

### Functional Requirements

- **FR-001**: System MUST accept email and password via secure HTTPS
- **FR-002**: System MUST validate email format before auth
- **FR-003**: System MUST return auth token on successful verification
- **FR-004**: System MUST return 401 for invalid credentials
- **FR-005**: System MUST rate limit to prevent brute force (5 attempts)
- **FR-006**: System MUST return 429 when rate limit exceeded

### Success Criteria

- **SC-001**: Login succeeds for valid credentials 99.9% of the time
- **SC-002**: Login completes within 500ms at the 95th percentile
- **SC-003**: Rate limiting activates after 5 attempts per 15 minutes
```

---

## Ready to Convert!

I'm ready to help convert your reverse engineering docs to Spec Kit format.

**What I need from you**:

1. **Confirm**: Do you have `docs/reverse-engineering/` with documentation?
2. **Preferences**: Any specific features to prioritize or skip?
3. **Review**: After I list the features found, confirm priorities (P0/P1/P2)

**What I'll deliver**:

1. Analysis of all features found
2. Priority recommendations
3. Complete spec files in `specs/F{NNN}-{feature}/spec.md` format
4. Summary table of what was created

**Next steps after conversion**:

1. Review specs: check that they capture all requirements
2. Run `/speckit-plan` on each spec to create implementation plans
3. Run `/speckit-tasks` to generate actionable task lists
4. Run `/speckit-implement` to execute the implementation

---

**Let's begin!**

I'll start by reading your `docs/reverse-engineering/` directory. Please confirm you're ready, and I'll begin the conversion process.
710
skills/create-specs/SKILL.md
Normal file
@@ -0,0 +1,710 @@
---
name: create-specs
description: Transform reverse-engineering documentation into GitHub Spec Kit format. Initializes the .specify/ directory, creates constitution.md, generates specifications from reverse-engineered docs, and sets up the /speckit slash commands. This is Step 3 of 6 in the reverse engineering process.
---

# Create Specifications (GitHub Spec Kit Integration)

**Step 3 of 6** in the Reverse Engineering to Spec-Driven Development process.

**Estimated Time:** 30 minutes (specs only) to 90 minutes (specs + plans + tasks)
**Prerequisites:** Step 2 completed (`docs/reverse-engineering/` exists with 9 files)
**Output:** `.specify/` directory with GitHub Spec Kit structure

---

## Thoroughness Options

Gear 3 generates different levels of detail based on configuration set in Gear 1:

**Option 1: Specs Only** (30 min - fast)
- Generate `.specify/specs/###-feature-name/spec.md` for all features
- Constitution and folder structure
- Ready for manual planning with `/speckit.plan`

**Option 2: Specs + Plans** (45-60 min - recommended)
- Everything from Option 1
- **PLUS**: Auto-generate `plan.md` for PARTIAL/MISSING features
- Ready for manual task breakdown with `/speckit.tasks`

**Option 3: Specs + Plans + Tasks** (90-120 min - complete roadmap)
- Everything from Option 2
- **PLUS**: Auto-generate comprehensive `tasks.md` (300-500 lines each)
- Ready for immediate implementation
- No additional planning needed

**Configuration:** Set during Gear 1 (Analyze) via the initial questionnaire, stored in `.stackshift-state.json`

---

## When to Use This Skill

Use this skill when:
- You've completed Step 2 (Reverse Engineer)
- You have comprehensive documentation in `docs/reverse-engineering/`
- You're ready to create formal specifications in GitHub Spec Kit format
- You want to leverage `/speckit` slash commands for implementation

**Trigger Phrases:**
- "Create specifications from documentation"
- "Transform docs into Spec Kit format"
- "Set up GitHub Spec Kit"
- "Initialize Spec Kit for this project"

---

## What This Skill Does

**Automatically** transforms reverse-engineering documentation into **GitHub Spec Kit format** using F002 automated spec generation:

1. **Read reverse engineering docs** - Parse `docs/reverse-engineering/functional-specification.md`
2. **Extract ALL features** - Identify every feature (complete, partial, missing)
3. **Generate constitution** - Create `.specify/memory/constitution.md` with project principles
4. **Create feature specs** - Generate `.specify/specs/###-feature-name/spec.md` for EVERY feature
5. **Implementation plans** - Create `plan.md` for PARTIAL and MISSING features only
6. **Enable slash commands** - Set up `/speckit.*` commands

**Critical**: This creates specs for **100% of features**, not just gaps!
- ✅ Complete features get specs (for future spec-driven changes)
- ⚠️ Partial features get specs + plans (showing what exists and what's missing)
- ❌ Missing features get specs + plans (ready to implement)

**Result:** Complete spec coverage - the entire application under spec control.

---

## Configuration Check (FIRST STEP!)

**Load the state file to determine the execution plan:**

```bash
# Check thoroughness level (set in Gear 1)
THOROUGHNESS=$(jq -r '.config.gear3_thoroughness // "specs"' .stackshift-state.json)

# Check route
ROUTE=$(jq -r '.path' .stackshift-state.json)

# Check spec output location (Greenfield may have a custom location)
SPEC_OUTPUT=$(jq -r '.config.spec_output_location // "."' .stackshift-state.json)

echo "Route: $ROUTE"
echo "Spec output: $SPEC_OUTPUT"
echo "Thoroughness: $THOROUGHNESS"

# Determine what to execute
case "$THOROUGHNESS" in
  "specs")
    echo "Will generate: Specs only"
    GENERATE_PLANS=false
    GENERATE_TASKS=false
    ;;
  "specs+plans")
    echo "Will generate: Specs + Plans"
    GENERATE_PLANS=true
    GENERATE_TASKS=false
    ;;
  "specs+plans+tasks")
    echo "Will generate: Specs + Plans + Tasks (complete roadmap)"
    GENERATE_PLANS=true
    GENERATE_TASKS=true
    ;;
  *)
    echo "Unknown thoroughness: $THOROUGHNESS, defaulting to specs only"
    GENERATE_PLANS=false
    GENERATE_TASKS=false
    ;;
esac

# If custom location, ensure the .specify directory exists there
if [ "$SPEC_OUTPUT" != "." ]; then
  echo "Creating .specify/ structure at custom location..."
  mkdir -p "$SPEC_OUTPUT/.specify/specs"
  mkdir -p "$SPEC_OUTPUT/.specify/memory"
  mkdir -p "$SPEC_OUTPUT/.specify/templates"
  mkdir -p "$SPEC_OUTPUT/.specify/scripts"
fi
```

**Where specs will be written:**

| Route | Config | Specs Written To |
|-------|--------|------------------|
| Greenfield | spec_output_location set | `{spec_output_location}/.specify/specs/` |
| Greenfield | Not set (default) | `./.specify/specs/` (current repo) |
| Brownfield | Always current | `./.specify/specs/` (current repo) |

**Common patterns:**
- Same repo: `spec_output_location: "."` (default)
- New repo: `spec_output_location: "~/git/my-new-app"`
- Docs repo: `spec_output_location: "~/git/my-app-docs"`
- Subfolder: `spec_output_location: "./new-version"`
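For reference, a minimal `.stackshift-state.json` containing just the fields the commands above read might look like the following sketch. The field values are illustrative; Gear 1 writes the real file, which may contain additional fields:

```shell
#!/usr/bin/env sh
# Sketch: write a minimal state file with the fields read by the
# configuration check above (values are illustrative).
cat > .stackshift-state.json <<'EOF'
{
  "path": "brownfield",
  "config": {
    "gear3_thoroughness": "specs+plans",
    "spec_output_location": "."
  }
}
EOF

# Same extraction as above, guarded in case jq is not installed.
if command -v jq >/dev/null 2>&1; then
  jq -r '.config.gear3_thoroughness // "specs"' .stackshift-state.json
fi
```

The `//` operator in jq supplies the default when the field is absent, which is why older state files without `gear3_thoroughness` fall back to specs-only.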
---

## 🤖 Execution Instructions

**IMPORTANT**: This skill uses automated spec generation tools from F002.

### Step 1: Call the MCP Tool

Run the `stackshift_create_specs` MCP tool to automatically generate ALL specifications.

**This tool will**:
- Parse `docs/reverse-engineering/functional-specification.md`
- Extract EVERY feature (complete, partial, missing)
- Generate the constitution and ALL feature specs
- Create implementation plans for incomplete features

**Usage**:
```typescript
// Call the MCP tool
const result = await mcp.callTool('stackshift_create_specs', {
  directory: process.cwd()
});

// The tool will:
// 1. Read functional-specification.md
// 2. Create specs for ALL features (not just gaps!)
// 3. Mark implementation status (✅/⚠️/❌)
// 4. Generate plans for PARTIAL/MISSING features
// 5. Return a summary showing complete coverage
```

**Expected output**:
- Constitution created
- 15-50 feature specs created (depending on app size)
- 100% feature coverage
- Implementation plans for incomplete features

### Step 2: Verify Success

After the tool completes, verify:
1. `.specify/memory/constitution.md` exists
2. `.specify/specs/###-feature-name/` directories created for ALL features
3. Each feature has `spec.md`
4. PARTIAL/MISSING features have `plan.md`
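The verification steps above can be scripted. A minimal sketch (the sample files are created here only so the check runs standalone; a real run checks the tool's actual output instead):

```shell
#!/usr/bin/env sh
# Sketch: verify the generated Spec Kit structure exists.
mkdir -p .specify/memory .specify/specs/001-demo-feature
printf '# Constitution\n' > .specify/memory/constitution.md
printf '# Feature: Demo\n' > .specify/specs/001-demo-feature/spec.md

# 1. Constitution exists
[ -f .specify/memory/constitution.md ] && echo "constitution: OK"

# 2-3. Count feature directories that contain a spec.md
count=$(find .specify/specs -name spec.md | wc -l | tr -d ' ')
echo "feature specs: $count"
```

Comparing the spec count against the feature count in `functional-specification.md` is a quick way to confirm 100% coverage.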
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## If Automated Tool Fails
|
||||||
|
|
||||||
|
The MCP tool creates all Spec Kit files programmatically - it does NOT need `specify init`.
|
||||||
|
|
||||||
|
**The tool creates**:
|
||||||
|
- `.specify/memory/constitution.md` (from templates)
|
||||||
|
- `.specify/specs/###-feature-name/spec.md` (all features)
|
||||||
|
- `.specify/specs/###-feature-name/plan.md` (for incomplete features)
|
||||||
|
- `.claude/commands/speckit.*.md` (slash commands)
|
||||||
|
|
||||||
|
**If the MCP tool fails**, use the manual reconciliation prompt:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Copy this prompt into Claude.ai:
|
||||||
|
cat web/reconcile-specs.md
|
||||||
|
|
||||||
|
# This will manually create all specs with 100% coverage
|
||||||
|
```
|
||||||
|
|
||||||
|
**DO NOT run `specify init`** - it requires GitHub API access and isn't needed since F002 creates all files directly.
|
||||||
|
|
||||||
|
This creates:
|
||||||
|
```
|
||||||
|
.specify/
|
||||||
|
├── memory/
|
||||||
|
│ └── constitution.md # Project principles (will be generated)
|
||||||
|
├── templates/ # AI agent configs
|
||||||
|
├── scripts/ # Automation utilities
|
||||||
|
└── specs/ # Feature directories (will be generated)
|
||||||
|
├── 001-feature-name/
|
||||||
|
│ ├── spec.md # Feature specification
|
||||||
|
│ ├── plan.md # Implementation plan
|
||||||
|
│ └── tasks.md # Task breakdown (generated by /speckit.tasks)
|
||||||
|
└── 002-another-feature/
|
||||||
|
└── ...
|
||||||
|
```
|
||||||
|
|
||||||
|
**Note:** GitHub Spec Kit uses `.specify/specs/NNN-feature-name/` directory structure
|
||||||
|
|
||||||
|
See [operations/init-speckit.md](operations/init-speckit.md)
|
||||||
|
|
||||||
|
### Step 2: Generate Constitution
|
||||||
|
|
||||||
|
From `docs/reverse-engineering/functional-specification.md`, create `.specify/memory/constitution.md`:
|
||||||
|
|
||||||
|
**Constitution includes:**
|
||||||
|
- **Purpose & Values** - Why this project exists, core principles
|
||||||
|
- **Technical Decisions** - Architecture choices with rationale
|
||||||
|
- **Development Standards** - Code style, testing requirements, review process
|
||||||
|
- **Quality Standards** - Performance, security, reliability requirements
|
||||||
|
- **Governance** - How decisions are made
|
||||||
|
|
||||||
|
**Use `/speckit.constitution` command:**
|
||||||
|
```
|
||||||
|
After generating initial constitution, user can run:
|
||||||
|
> /speckit.constitution
|
||||||
|
|
||||||
|
To refine and update the constitution interactively
|
||||||
|
```
|
||||||
|
|
||||||
|
See [operations/generate-constitution.md](operations/generate-constitution.md)
|
||||||
|
|
||||||
|
### Step 3: Generate Specifications
|
||||||
|
|
||||||
|
Transform `docs/reverse-engineering/functional-specification.md` into individual feature specs in `specs/FEATURE-ID/`:
|
||||||
|
|
||||||
|
**Recommended:** Use the Task tool with `subagent_type=stackshift:technical-writer` for efficient, parallel spec generation.
|
||||||
|
|
||||||
|
**Directory Structure (per GitHub Spec Kit conventions):**
|
||||||
|
|
||||||
|
Each feature gets its own directory:
|
||||||
|
```
|
||||||
|
specs/001-user-authentication/
|
||||||
|
├── spec.md # Feature specification
|
||||||
|
└── plan.md # Implementation plan
|
||||||
|
```
**spec.md format:**

```markdown
# Feature: User Authentication

## Status
⚠️ **PARTIAL** - Backend complete, frontend missing login UI

## Overview
[Description of what this feature does]

## User Stories
- As a user, I want to register an account so that I can save my data
- As a user, I want to log in so that I can access my dashboard

## Acceptance Criteria
- [ ] User can register with email and password
- [x] User can log in with credentials
- [ ] User can reset forgotten password
- [x] JWT tokens issued on successful login

## Technical Requirements
- Authentication method: JWT
- Password hashing: bcrypt
- Session duration: 24 hours
- API endpoints:
  - POST /api/auth/register
  - POST /api/auth/login
  - POST /api/auth/reset-password

## Implementation Status

**Completed:**
- ✅ Backend API endpoints (all 3)
- ✅ Database user model
- ✅ JWT token generation

**Missing:**
- ❌ Frontend login page
- ❌ Frontend registration page
- ❌ Password reset UI
- ❌ Token refresh mechanism

## Dependencies
None

## Related Specifications
- user-profile.md (depends on authentication)
- authorization.md (extends authentication)
```

**Use `/speckit.specify` command:**

```
After generating the initial specs, the user can run:

> /speckit.specify

to create additional specifications or refine existing ones.
```

See [operations/generate-specifications.md](operations/generate-specifications.md)

### Step 4: Generate Implementation Plans

For each **PARTIAL** or **MISSING** feature, create `plan.md` in the feature's directory:

**Location:** `specs/FEATURE-ID/plan.md`

**Format:**

```markdown
# Implementation Plan: User Authentication Frontend

## Goal
Complete the frontend UI for user authentication (login, registration, password reset)

## Current State
- Backend API fully functional
- No frontend UI components exist
- User lands on placeholder page

## Target State
- Complete login page with form validation
- Registration page with email verification
- Password reset flow (email + new password)
- Responsive design for mobile/desktop

## Technical Approach
1. Create React components using existing UI library
2. Integrate with backend API endpoints
3. Add form validation with Zod
4. Implement JWT token storage (localStorage)
5. Add route protection for authenticated pages

## Tasks
- [ ] Create LoginPage component
- [ ] Create RegistrationPage component
- [ ] Create PasswordResetPage component
- [ ] Add form validation
- [ ] Integrate with API endpoints
- [ ] Add loading and error states
- [ ] Write component tests
- [ ] Update routing configuration

## Risks & Mitigations
- Risk: Token storage in localStorage (XSS vulnerability)
  - Mitigation: Consider httpOnly cookies instead
- Risk: No rate limiting on frontend
  - Mitigation: Add rate limiting to API endpoints

## Testing Strategy
- Unit tests for form validation logic
- Integration tests for API calls
- E2E tests for complete auth flow

## Success Criteria
- All acceptance criteria from specification met
- No security vulnerabilities
- Pass all tests
- UI matches design system
```

**Use `/speckit.plan` command:**

```
After generating the initial plans, the user can run:

> /speckit.plan

to create or refine implementation plans.
```

See [operations/generate-plans.md](operations/generate-plans.md)

### Step 5: Mark Implementation Status

In each specification, clearly mark what's implemented vs missing:

- ✅ **COMPLETE** - Fully implemented and tested
- ⚠️ **PARTIAL** - Partially implemented (note what exists vs what's missing)
- ❌ **MISSING** - Not started

This allows `/speckit.analyze` to verify consistency.
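A quick tally of these markers across all specs can be scripted. This is a sketch using grep; it assumes each `spec.md` carries exactly one status line in the format above:

```shell
# Count spec.md files by status marker.
status_summary() {
  local root="${1:-specs}" marker count
  for marker in '✅ **COMPLETE**' '⚠️ **PARTIAL**' '❌ **MISSING**'; do
    count=$(grep -rlF --include=spec.md "$marker" "$root" 2>/dev/null | wc -l | tr -d ' ')
    printf '%s %s\n' "$marker" "$count"
  done
}

# Usage:
#   status_summary specs
```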
---

## GitHub Spec Kit Slash Commands

After setting up specs, these commands become available:

### Validation & Analysis

```bash
# Check consistency between specs and implementation
> /speckit.analyze

# Identifies:
# - Specs marked COMPLETE but implementation missing
# - Implementation exists but not in spec
# - Inconsistencies between related specs
```
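One of these inconsistencies can be pre-checked with plain grep before reaching for `/speckit.analyze`. A sketch, assuming the status markers and checkbox syntax shown earlier:

```shell
# Pre-check: specs marked ✅ COMPLETE should contain no unchecked criteria.
find_inconsistent_specs() {
  local root="${1:-specs}" spec
  grep -rlF --include=spec.md '✅ **COMPLETE**' "$root" 2>/dev/null |
  while read -r spec; do
    if grep -q -- '- \[ \]' "$spec"; then
      echo "INCONSISTENT: $spec is COMPLETE but has unchecked criteria"
    fi
  done
}

# Usage:
#   find_inconsistent_specs specs
```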
### Implementation

```bash
# Generate tasks from implementation plan
> /speckit.tasks

# Implement a specific feature
> /speckit.implement <specification-name>

# Runs through implementation plan step-by-step
# Updates implementation status as it progresses
```

### Clarification

```bash
# Resolve underspecified areas
> /speckit.clarify

# Interactive Q&A to fill in missing details
# Similar to our complete-spec skill
```

---

## Output Structure

After this skill completes:

```
.specify/
├── memory/
│   └── constitution.md          # Project principles
├── templates/
└── scripts/

specs/                           # Feature directories
├── 001-user-authentication/
│   ├── spec.md                  # ⚠️ PARTIAL
│   └── plan.md                  # Implementation plan
├── 002-fish-management/
│   ├── spec.md                  # ⚠️ PARTIAL
│   └── plan.md
├── 003-analytics-dashboard/
│   ├── spec.md                  # ❌ MISSING
│   └── plan.md
└── 004-photo-upload/
    ├── spec.md                  # ⚠️ PARTIAL
    └── plan.md

docs/reverse-engineering/        # Keep original docs for reference
├── functional-specification.md
├── data-architecture.md
└── ...
```

### For Greenfield Separate Directory

If `greenfield_location` is an absolute path (e.g., `~/git/my-new-app`):

**After Gear 3, `.specify/` exists in BOTH locations:**

**Original repo:**
```
~/git/my-app/
├── [original code]
├── .specify/          # Created here first
└── docs/
```

**New repo (created and initialized):**
```
~/git/my-new-app/
├── .specify/          # COPIED from original repo
├── README.md
└── .gitignore
```

**Why copy?**
- New repo needs specs for `/speckit.*` commands
- New repo is self-contained and spec-driven
- Can develop independently going forward
- Original repo keeps specs for reference

---

## Integration with Original Toolkit

**Reverse-Engineered Docs → Spec Kit Artifacts:**

| Original Doc | Spec Kit Artifact | Location |
|-------------|------------------|----------|
| functional-specification.md | constitution.md | `.specify/memory/` |
| functional-specification.md | Individual feature specs | `specs/` |
| data-architecture.md | Technical details in specs | Embedded in specifications |
| operations-guide.md | Operational notes in constitution | `.specify/memory/constitution.md` |
| technical-debt-analysis.md | Implementation plans | `specs/` |

**Keep both:**
- `docs/reverse-engineering/` - Comprehensive reference docs
- `.specify/memory/` - Spec Kit format for `/speckit` commands

---

## Step 4: Generate Plans (Optional - Thoroughness Level 2+)

**If user selected Option 2 or 3**, automatically generate implementation plans for all PARTIAL/MISSING features.

### Process

1. **Scan specs directory**:
   ```bash
   find specs -name "spec.md" -type f | sort
   ```

2. **Identify incomplete features**:
   - Parse status from each spec.md
   - Filter for ⚠️ PARTIAL and ❌ MISSING
   - Skip ✅ COMPLETE features (no plan needed)

3. **Generate plans in parallel** (5 at a time):
   ```javascript
   // For each PARTIAL/MISSING feature
   Task({
     subagent_type: 'general-purpose',
     model: 'sonnet',
     description: `Create plan for ${featureName}`,
     prompt: `
       Read: specs/${featureId}/spec.md

       Generate implementation plan following /speckit.plan template:
       - Assess current state (what exists vs missing)
       - Define target state (all acceptance criteria)
       - Determine technical approach
       - Break into implementation phases
       - Identify risks and mitigations
       - Define success criteria

       Save to: specs/${featureId}/plan.md

       Target: 300-500 lines, detailed but not prescriptive
     `
   });
   ```

4. **Verify coverage**:
   - Check every PARTIAL/MISSING spec has plan.md
   - Report summary (e.g., "8 plans generated for 8 incomplete features")
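The coverage check can be scripted. A sketch, assuming the status markers and the `specs/<id>/plan.md` layout described above:

```shell
# List PARTIAL/MISSING specs that still lack a plan.md next to them.
missing_plans() {
  local root="${1:-specs}" spec dir
  grep -rlF --include=spec.md -e '⚠️ **PARTIAL' -e '❌ **MISSING' "$root" 2>/dev/null |
  while read -r spec; do
    dir=$(dirname "$spec")
    [ -f "$dir/plan.md" ] || echo "NEEDS PLAN: $dir"
  done
}

# Usage:
#   missing_plans specs
```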
---

## Step 5: Generate Tasks (Optional - Thoroughness Level 3 Only)

**If user selected Option 3**, automatically generate comprehensive task breakdowns for all plans.

### Process

1. **Scan for plans**:
   ```bash
   find specs -name "plan.md" -type f | sort
   ```

2. **Generate tasks in parallel** (3 at a time - slower due to length):
   ```javascript
   // For each plan
   Task({
     subagent_type: 'general-purpose',
     model: 'sonnet',
     description: `Create tasks for ${featureName}`,
     prompt: `
       Read: specs/${featureId}/spec.md
       Read: specs/${featureId}/plan.md

       Generate COMPREHENSIVE task breakdown:
       - Break into 5-10 logical phases
       - Each task has: status, file path, acceptance criteria, code examples
       - Include Testing phase (unit, integration, E2E)
       - Include Documentation phase
       - Include Edge Cases section
       - Include Dependencies section
       - Include Acceptance Checklist
       - Include Priority Actions

       Target: 300-500 lines (be thorough!)

       Save to: specs/${featureId}/tasks.md
     `
   });
   ```

3. **Verify quality**:
   - Check each tasks.md is > 200 lines
   - Flag if too short (< 200 lines)
   - Report summary (e.g., "8 task files generated, avg 427 lines")
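The length gate can be automated with `wc`. A sketch; the 200-line threshold comes from the quality check above:

```shell
# Flag tasks.md files that fall short of the 200-line quality gate.
short_task_files() {
  local root="${1:-specs}" f lines
  find "$root" -name tasks.md -type f | while read -r f; do
    lines=$(wc -l < "$f" | tr -d ' ')
    if [ "$lines" -lt 200 ]; then
      echo "TOO SHORT ($lines lines): $f"
    fi
  done
}

# Usage:
#   short_task_files specs
```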
---

## Configuration

**In .stackshift-state.json:**

```json
{
  "config": {
    "gear3_thoroughness": "specs+plans+tasks",
    "plan_parallel_limit": 5,
    "task_parallel_limit": 3
  }
}
```

`gear3_thoroughness` accepts `"specs"`, `"specs+plans"`, or `"specs+plans+tasks"`.
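Skills can read these values with plain POSIX tools when `jq` isn't guaranteed to be installed. A sketch; `get_config` is illustrative and only handles simple string values:

```shell
# Read a string config value from the state file, falling back to a default.
get_config() {
  local key="$1" default="$2" file="${3:-.stackshift-state.json}" value
  value=$(sed -n "s/.*\"$key\": *\"\([^\"]*\)\".*/\1/p" "$file" 2>/dev/null | head -n1)
  printf '%s\n' "${value:-$default}"
}

# Usage:
#   THOROUGHNESS=$(get_config gear3_thoroughness "specs")
```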
**Or ask the user interactively if not set.**

---

## Success Criteria

After running this skill, you should have:

**Thoroughness Level 1 (Specs Only):**
- ✅ `.specify/` directory initialized
- ✅ `constitution.md` created with project principles
- ✅ Individual feature specifications in `specs/`
- ✅ Implementation status clearly marked (✅/⚠️/❌)
- ✅ `/speckit.*` slash commands available

**Thoroughness Level 2 (Specs + Plans):**
- ✅ Everything from Level 1
- ✅ `plan.md` for every PARTIAL/MISSING feature
- ✅ 100% plan coverage for incomplete features
- ✅ Ready for manual task breakdown or `/speckit.tasks`

**Thoroughness Level 3 (Specs + Plans + Tasks):**
- ✅ Everything from Level 2
- ✅ `tasks.md` for every planned feature
- ✅ Comprehensive task lists (300-500 lines each)
- ✅ Complete roadmap ready for implementation
- ✅ No additional planning needed

---

## Next Step

Once specifications are created in Spec Kit format, proceed to:

**Step 4: Gap Analysis** - Use `/speckit.analyze` to identify inconsistencies and the gap-analysis skill to create a prioritized implementation plan.

---

## Example Workflow

```bash
# This skill runs
1. specify init my-app --ai claude
2. Generate constitution.md from functional-specification.md
3. Create individual feature specs from functional requirements
4. Mark implementation status (✅/⚠️/❌)
5. Generate implementation plans for gaps

# User can then run
> /speckit.analyze
# Shows: "5 PARTIAL features, 3 MISSING features, 2 inconsistencies"

> /speckit.implement user-authentication
# Walks through implementation plan step-by-step

> /speckit.specify
# Add new features as needed
```

---

## Technical Notes

- Spec Kit uses `.specify/` directory (not `specs/`)
- Specifications are markdown files, not JSON/YAML
- Implementation status uses emoji markers: ✅ ⚠️ ❌
- `/speckit` commands are slash commands in Claude Code, not CLI
- Constitution is a living document; update it as the project evolves
- Keep reverse-engineering docs as comprehensive reference
- Use `stackshift:technical-writer` agent for efficient parallel spec generation
- Always use `--ai claude` flag with `specify init` for non-interactive mode

---

**Remember:** This integrates your reverse-engineered codebase with GitHub Spec Kit, enabling the full `/speckit.*` workflow for ongoing development.

359
skills/cruise-control/SKILL.md
Normal file

@@ -0,0 +1,359 @@
---
name: cruise-control
description: Automatic mode - shift through all 6 gears sequentially without stopping. Like cruise control or automatic transmission, this runs the entire StackShift workflow from analysis to implementation in one go. Perfect for unattended execution or when you want to let StackShift handle everything automatically.
---

# Cruise Control Mode 🚗💨

**Automatic transmission for StackShift** - Shift through all 6 gears sequentially without manual intervention.

---

## When to Use This Skill

Use cruise control when you:
- Want to run the entire workflow automatically
- Don't need to review each step before proceeding
- Trust StackShift to make reasonable default choices
- Want unattended execution (kick it off and come back later)
- Prefer automatic over manual transmission

**Trigger Phrases:**
- "Run StackShift in cruise control mode"
- "Automatically shift through all gears"
- "Run the full workflow automatically"
- "StackShift autopilot"

---

## What This Does

Runs all 6 gears sequentially:

```
Gear 1: Analyze → Gear 2: Reverse Engineer → Gear 3: Create Specs →
Gear 4: Gap Analysis → Gear 5: Complete Spec → Gear 6: Implement
```

**Without stopping between gears!**

---

## Setup

### Initial Configuration (One-Time)

At the start, you'll be asked:

1. **Route Selection:**
   ```
   Choose your route:
   A) Greenfield - Shift to new tech stack
   B) Brownfield - Manage existing code
   ```

2. **Clarifications Handling:**
   ```
   How to handle [NEEDS CLARIFICATION] markers?
   A) Defer - Mark them, implement around them, clarify later
   B) Prompt - Stop and ask questions interactively
   C) Skip - Only implement fully-specified features
   ```

3. **Implementation Scope:**
   ```
   What to implement in Gear 6?
   A) P0 only - Critical features only
   B) P0 + P1 - Critical and high-value
   C) All - Everything (may take hours/days)
   D) None - Stop after specs are ready
   ```

Then cruise control takes over!

---

## Execution Flow

### Gear 1: Analyze (Auto)
- Detects tech stack
- Assesses completeness
- Sets route (from your selection)
- Saves state with `auto_mode: true`
- **Auto-shifts to Gear 2** ✅

### Gear 2: Reverse Engineer (Auto)
- Launches `stackshift:code-analyzer` agent
- Extracts documentation based on route
- Generates all 9 files (including integration-points.md)
- **Auto-shifts to Gear 3** ✅

### Gear 3: Create Specifications (Auto)
- Calls automated spec generation (F002)
- Generates constitution (appropriate template for route)
- Creates all feature specs programmatically
- Creates implementation plans for incomplete features
- Sets up `/speckit.*` slash commands
- **Auto-shifts to Gear 4** ✅

### Gear 4: Gap Analysis (Auto)
- Runs `/speckit.analyze`
- Identifies PARTIAL/MISSING features
- Creates prioritized roadmap
- Marks [NEEDS CLARIFICATION] items
- **Auto-shifts to Gear 5** ✅

### Gear 5: Complete Specification (Conditional)
- If clarifications handling = "Defer": Skips, moves to Gear 6
- If clarifications handling = "Prompt": Asks questions interactively, then continues
- If clarifications handling = "Skip": Marks unclear features as P2, moves on
- **Auto-shifts to Gear 6** ✅

### Gear 6: Implement (Based on Scope)
- If scope = "None": Stops, specs ready
- If scope = "P0 only": Implements critical features only
- If scope = "P0 + P1": Implements critical + high-value
- If scope = "All": Implements everything
- Uses `/speckit.tasks` and `/speckit.implement` for each feature
- **Completes!** 🏁

---

## Progress Monitoring

While cruise control is running, you can check progress:

```bash
# See current gear
node plugin/scripts/state-manager.js status

# Detailed progress
node plugin/scripts/state-manager.js progress

# Watch progress in real-time
watch -n 5 'node plugin/scripts/state-manager.js progress'
```

Or via MCP resource:
```
Read stackshift://progress
```

---

## Interrupting Cruise Control

You can stop cruise control at any time:

```
"Stop cruise control"
"Pause StackShift"
"Switch to manual mode"
```

This will:
- Save current state
- Set `auto_mode: false`
- Allow you to proceed manually from the current gear

You can resume cruise control later:
```
"Resume cruise control from current gear"
```

---

## Example: Full Automatic Run

```
User: "Run StackShift in cruise control mode"

StackShift:
  Setting up cruise control...

  Choose your route:
  A) Greenfield - Shift to new tech stack
  B) Brownfield - Manage existing code

User: "B - Brownfield"

StackShift:
  How to handle clarifications?
  A) Defer - Clarify later
  B) Prompt - Ask me now
  C) Skip - Only implement clear features

User: "A - Defer"

StackShift:
  Implementation scope?
  A) P0 only
  B) P0 + P1
  C) All features
  D) None (stop after specs)

User: "B - P0 + P1"

StackShift:
  ✅ Cruise control engaged! Shifting through all gears automatically...

  🔍 Gear 1: Analyzing... ✅ Complete (2 min)
  🔄 Gear 2: Reverse engineering... ✅ Complete (8 min)
  📋 Gear 3: Creating specifications... ✅ Complete (5 min)
  🔍 Gear 4: Gap analysis... ✅ Complete (3 min)
  ✨ Gear 5: Completing specs (deferred clarifications)... ✅ Complete (1 min)
  🚀 Gear 6: Implementing P0 + P1 features... 🔄 In Progress (est. 45 min)

    Feature 1/8: user-authentication... ✅
    Feature 2/8: fish-management... ✅
    Feature 3/8: photo-upload... 🔄 In progress...

  [... continues automatically ...]

  🏁 All gears complete! Application at 85% implementation.

  Deferred clarifications (3) saved in: .specify/memory/clarifications.md
  You can resolve these later with: /speckit.clarify
```

---

## Configuration Options

Cruise control can be configured via state:

```json
{
  "auto_mode": true,
  "auto_config": {
    "route": "brownfield",
    "clarifications_strategy": "defer",
    "implementation_scope": "p0_p1",
    "pause_between_gears": false,
    "notify_on_completion": true
  }
}
```

---

## Advanced: Scheduled Execution

Run cruise control in the background:

```bash
# Start in background
nohup stackshift cruise-control --route brownfield --scope p0 &

# Check progress
tail -f stackshift-cruise.log

# Or via state
watch stackshift://progress
```

---

## Use Cases

### 1. Overnight Execution
```
5pm: "Run cruise control, brownfield, P0+P1, defer clarifications"
9am: Check results, review generated specs, answer deferred questions
```

### 2. CI/CD Integration
```yaml
# .github/workflows/stackshift.yml
- name: Run StackShift Analysis
  run: stackshift cruise-control --route brownfield --scope none
  # Generates specs, doesn't implement (safe for CI)
```

### 3. Batch Processing
```
Run cruise control on multiple projects:
- project-a: greenfield
- project-b: brownfield
- project-c: brownfield
```

### 4. Demo Mode
```
"Show me what StackShift does - run full demo"
→ Runs cruise control with sample project
```

---

## Safety Features

### Checkpoints

Cruise control creates checkpoints at each gear:
- State saved after each gear completes
- Can resume from any checkpoint if interrupted
- Rollback possible if issues detected

### Validation

Before proceeding:
- Validates output files were created
- Checks for errors in previous gear
- Ensures prerequisites met
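A validation step like this might look as follows. This is a sketch; the checked paths in the usage comment are illustrative:

```shell
# Confirm the previous gear produced its expected outputs before shifting.
validate_gear_outputs() {
  local missing=0 f
  for f in "$@"; do
    if [ ! -e "$f" ]; then
      echo "MISSING OUTPUT: $f"
      missing=1
    fi
  done
  return "$missing"
}

# Usage: pause cruise control if Gear 3 outputs are absent.
#   validate_gear_outputs .specify/memory/constitution.md specs || echo "Pausing before Gear 4"
```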
### User Intervention

Pauses automatically if:
- Critical error detected
- `/speckit.analyze` shows major inconsistencies
- Implementation fails tests
- Disk space low
- Git conflicts detected

---

## Manual Override

At any point, you can say:

```
"Pause after current gear"
"Stop cruise control"
"Switch to manual mode"
"Take control"
```

State is saved, and you can continue manually from that point.

---

## Success Criteria

After cruise control completes:

- ✅ All 6 gears complete
- ✅ `.stackshift-state.json` shows 6/6 gears
- ✅ All output files generated
- ✅ GitHub Spec Kit initialized
- ✅ Features implemented (based on scope)
- ✅ Ready for production (or clarifications if deferred)

---

## Technical Notes

- Cruise control is a special skill that orchestrates other skills
- Each gear is still executed by its corresponding skill
- Auto mode can be toggled on/off at any time
- State tracks `auto_mode` for resume capability
- Great for CI/CD, batch processing, or overnight runs

---

**Remember:** Cruise control is like automatic transmission - convenient and hands-off. Manual mode (using individual skills) gives you more control. Choose based on your needs!

🚗 **Manual** = Control each gear yourself
🤖 **Cruise Control** = Let StackShift handle it

Both get you to the same destination!

596
skills/gap-analysis/SKILL.md
Normal file

@@ -0,0 +1,596 @@
---
name: gap-analysis
description: Route-aware gap analysis. For Brownfield - uses /speckit.analyze to compare specs against implementation. For Greenfield - validates spec completeness and asks about target tech stack for new implementation. This is Step 4 of 6 in the reverse engineering process.
---

# Gap Analysis (Route-Aware)

**Step 4 of 6** in the Reverse Engineering to Spec-Driven Development process.

**Estimated Time:** 15 minutes
**Prerequisites:** Step 3 completed (`.specify/` directory exists with specifications)
**Output:** Route-specific analysis and implementation roadmap

---

## Configuration Check (FIRST STEP!)

**CRITICAL:** Check detection type and route:

```bash
# Load state file
DETECTION_TYPE=$(jq -r '.detection_type // .path' .stackshift-state.json)
ROUTE=$(jq -r '.route // .path' .stackshift-state.json)

echo "Detection: $DETECTION_TYPE (what kind of app)"
echo "Route: $ROUTE (how to spec it)"
```

**Routes:**
- **greenfield** → Building a NEW app (tech-agnostic specs)
- **brownfield** → Managing an EXISTING app (tech-prescriptive specs)

**Detection Types:**
- **generic** → Standard application
- **monorepo-service** → Service in a monorepo
- **nx-app** → Nx workspace application
- **turborepo-package** → Turborepo package
- **lerna-package** → Lerna package

**Based on route, this skill behaves differently!**

**Examples:**
- Monorepo Service + Greenfield → Analyze spec completeness for platform migration
- Monorepo Service + Brownfield → Compare specs vs current implementation
- Nx App + Greenfield → Validate specs for rebuild (framework-agnostic)
- Nx App + Brownfield → Find gaps in current Nx/Angular implementation

---

## Greenfield Route: Spec Completeness Analysis

**Goal:** Validate specs are complete enough to build the NEW application

**NOT analyzing:** Old codebase (we're not fixing it, we're building new)
**YES analyzing:** Spec quality, completeness, readiness

### Step 1: Review Spec Completeness

For each specification:

```bash
# Check each spec
for spec in .specify/memory/specifications/*.md; do
  echo "Analyzing: $(basename "$spec")"

  # Look for ambiguities
  grep "\[NEEDS CLARIFICATION\]" "$spec" || echo "No clarifications needed"

  # Check for acceptance criteria
  grep -A 10 "Acceptance Criteria" "$spec" || echo "⚠️ No acceptance criteria"

  # Check for user stories
  grep -A 5 "User Stories" "$spec" || echo "⚠️ No user stories"
done
```

### Step 2: Identify Clarification Needs

**Common ambiguities in Greenfield specs:**
- UI/UX details missing (what should it look like?)
- Business rules unclear (what happens when...?)
- Data relationships ambiguous (how do entities relate?)
- Non-functional requirements vague (how fast? how secure?)

**Mark with [NEEDS CLARIFICATION]:**
```markdown
### Photo Upload Feature
- Users can upload photos [NEEDS CLARIFICATION: drag-drop or click-browse?]
- Photos stored in cloud [NEEDS CLARIFICATION: S3, Cloudinary, or Vercel Blob?]
- Max 10 photos [NEEDS CLARIFICATION: per fish or per tank?]
```
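To gauge how much clarification work remains for Gear 5, you can tally these markers per file. A sketch using grep; the default path assumes the `.specify/memory/specifications/` layout used above:

```shell
# Count [NEEDS CLARIFICATION] markers in each spec, skipping clean files.
clarification_counts() {
  local root="${1:-.specify/memory/specifications}"
  grep -rcF --include='*.md' '[NEEDS CLARIFICATION' "$root" 2>/dev/null | grep -v ':0$'
}

# Usage:
#   clarification_counts
```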
|
||||||
|
|
||||||
|
### Step 3: Ask About Target Tech Stack

**For Greenfield you're building something NEW, so you need to choose a stack.**

```
I've extracted the business logic into tech-agnostic specifications.
Now we need to decide what to build the NEW application in.

What tech stack would you like to use for the new implementation?

Examples:
A) Next.js 15 + React 19 + Prisma + PostgreSQL + Vercel
B) Python FastAPI + SQLAlchemy + PostgreSQL + AWS ECS
C) Ruby on Rails 7 + PostgreSQL + Heroku
D) Your choice: [describe your preferred stack]
```

**Document the choice** in the Constitution for consistency.

### Step 4: Create Implementation Roadmap

**Greenfield roadmap focuses on BUILD ORDER:**

```markdown
# Greenfield Implementation Roadmap

## Tech Stack Selected
- Frontend: Next.js 15 + React 19
- Backend: Next.js API Routes
- Database: PostgreSQL + Prisma
- Auth: NextAuth.js
- Hosting: Vercel

## Build Phases

### Phase 1: Foundation (Week 1)
- Set up Next.js project
- Database schema with Prisma
- Authentication system
- Base UI components

### Phase 2: Core Features (Week 2-3)
- User management
- Fish tracking
- Tank management
- Water quality logging

### Phase 3: Advanced Features (Week 4)
- Photo upload
- Analytics dashboard
- Notifications
- Social features

## All Features are ❌ MISSING
(Greenfield = building from scratch)

Ready to proceed to:
- Step 5: Resolve clarifications
- Step 6: Implement features in new stack
```

---

## Brownfield Route: Implementation Gap Analysis

**Goal:** Identify gaps in the EXISTING codebase implementation

**YES analyzing:** Old codebase vs specs
**Using:** /speckit.analyze to find gaps

### Step 1: Run /speckit.analyze

GitHub Spec Kit's built-in validation:

```bash
> /speckit.analyze
```

**What it checks:**
- Specifications marked ✅ COMPLETE but implementation missing
- Implementation exists but not documented in specs
- Inconsistencies between related specifications
- Conflicting requirements across specs
- Outdated implementation status

**Output example:**
```
Analyzing specifications vs implementation...

Issues Found:

1. user-authentication.md marked PARTIAL
   - Spec says: Frontend login UI required
   - Reality: No login components found in codebase

2. analytics-dashboard.md marked MISSING
   - Spec exists but no implementation

3. Inconsistency detected:
   - fish-management.md requires photo-upload feature
   - photo-upload.md marked PARTIAL (upload API missing)

4. Orphaned implementation:
   - src/api/notifications.ts exists
   - No specification found for notifications feature

Summary:
- 3 COMPLETE features
- 4 PARTIAL features
- 5 MISSING features
- 2 inconsistencies
- 1 orphaned implementation
```

See [operations/run-speckit-analyze.md](operations/run-speckit-analyze.md)

### Step 2: Detailed Gap Analysis

Expand on `/speckit.analyze` findings with deeper analysis:

#### A. Review PARTIAL Features

For each ⚠️ PARTIAL feature:
- What exists? (backend, frontend, tests, docs)
- What's missing? (specific components, endpoints, UI)
- Why incomplete? (was it deprioritized? did time run out?)
- Effort to complete? (hours estimate)
- Blockers? (dependencies, unclear requirements)

#### B. Review MISSING Features

For each ❌ MISSING feature:
- Is it actually needed? (or can it be deprioritized?)
- User impact if missing? (critical, important, nice-to-have)
- Implementation complexity? (simple, moderate, complex)
- Dependencies? (what must be done first)

#### C. Technical Debt Assessment

From `docs/reverse-engineering/technical-debt-analysis.md`:
- Code quality issues
- Missing tests (unit, integration, E2E)
- Documentation gaps
- Security vulnerabilities
- Performance bottlenecks

#### D. Identify Clarification Needs

Mark ambiguous areas with `[NEEDS CLARIFICATION]`:
- Unclear requirements
- Missing UX/UI details
- Undefined behavior
- Unspecified constraints

See [operations/detailed-gap-analysis.md](operations/detailed-gap-analysis.md)

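The orphaned-implementation check can also be approximated by hand between `/speckit.analyze` runs. A rough sketch, assuming the convention of one spec per same-named source module (the file layout here is invented for illustration):

```shell
# Sketch: flag source modules that have no same-named specification.
# The directory layout below is a made-up example of the convention above.
set -eu
root=$(mktemp -d)
mkdir -p "$root/specs" "$root/src/api"
touch "$root/specs/fish-management.md"
touch "$root/src/api/fish-management.ts" "$root/src/api/notifications.ts"

orphans=""
for impl in "$root"/src/api/*.ts; do
  name=$(basename "$impl" .ts)
  if [ ! -f "$root/specs/$name.md" ]; then
    orphans="$orphans $name"
    echo "Orphaned: src/api/$name.ts (no matching spec)"
  fi
done
```

Each flagged file needs either a new specification or removal, matching the "orphaned implementation" action above.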
### Step 3: Prioritize Implementation

Classify gaps by priority:

**P0 - Critical**
- Blocking major use cases
- Security vulnerabilities
- Data integrity issues
- Broken core functionality

**P1 - High Priority**
- Important for core user value
- High user impact
- Competitive differentiation
- Technical debt causing problems

**P2 - Medium Priority**
- Nice-to-have features
- Improvements to existing features
- Minor technical debt
- Edge cases

**P3 - Low Priority**
- Future enhancements
- Polish and refinements
- Non-critical optimizations

See [operations/prioritization.md](operations/prioritization.md)

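If feature headings in the gap report carry their priority tag (e.g. `[P0]`), grouping them is a one-liner per tier. A sketch against an invented report fragment:

```shell
# Sketch: group feature headings from a gap report by priority tag.
# The sample report content is invented for illustration.
set -eu
report=$(mktemp)
cat > "$report" <<'EOF'
#### F002: Fish Management [P0]
#### F003: Analytics Dashboard [P1]
#### F005: Social Features [P2]
EOF

for p in P0 P1 P2 P3; do
  echo "== $p =="
  grep "\[$p\]" "$report" || true   # || true: empty tiers are not an error
done
```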
### Step 4: Create Implementation Roadmap

Phase the work into manageable chunks:

**Phase 1: P0 Items** (~X hours)
- Complete critical features
- Fix security issues
- Unblock major workflows

**Phase 2: P1 Features** (~X hours)
- Build high-value features
- Address important technical debt
- Improve test coverage

**Phase 3: P2/P3 Enhancements** (~X hours or defer)
- Nice-to-have features
- Polish and refinements
- Optional improvements

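The per-phase hour totals can be sanity-checked by summing the `~Nh` estimates in each task list. A sketch with invented tasks:

```shell
# Sketch: sum "~Nh" effort estimates in a phase's task list.
# The task list content is invented for illustration.
set -eu
phase=$(mktemp)
cat > "$phase" <<'EOF'
1. Complete F002: Fish Management UI (~4h)
2. Add error handling to all APIs (~3h)
3. Implement integration tests (~5h)
EOF

# Extract each ~Nh token, strip the markers, and sum the numbers
hours=$(grep -o '~[0-9]*h' "$phase" | tr -d '~h' | awk '{s += $1} END {print s}')
echo "Phase total: ${hours}h"
```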
---

## Output Format

Create `docs/gap-analysis-report.md` (supplementing Spec Kit's output):

```markdown
# Gap Analysis Report

**Date:** [Current Date]
**Based on:** /speckit.analyze + manual review

---

## Executive Summary

- **Overall Completion:** ~66%
- **Complete Features:** 3 (25%)
- **Partial Features:** 4 (33%)
- **Missing Features:** 5 (42%)
- **Critical Issues:** 2
- **Clarifications Needed:** 8

---

## Spec Kit Analysis Results

### Inconsistencies Detected by /speckit.analyze

1. **user-authentication.md** (PARTIAL)
   - Spec: Frontend login UI required
   - Reality: No login components exist
   - Impact: Users cannot authenticate

2. **photo-upload.md → fish-management.md**
   - fish-management depends on photo-upload
   - photo-upload.md is PARTIAL (API incomplete)
   - Impact: Fish photos cannot be uploaded

3. **Orphaned Code: notifications.ts**
   - Implementation exists without specification
   - Action: Create specification or remove code

---

## Gap Details

### Missing Features (❌ 5 features)

#### F003: Analytics Dashboard [P1]
**Specification:** `specs/analytics-dashboard.md`
**Status:** ❌ MISSING (not started)
**Impact:** Users cannot track metrics over time
**Effort:** ~8 hours
**Dependencies:** None

**Needs Clarification:**
- [NEEDS CLARIFICATION] What metrics to display?
- [NEEDS CLARIFICATION] Chart types (line, bar, pie)?
- [NEEDS CLARIFICATION] Real-time or daily aggregates?

#### F005: Social Features [P2]
...

### Partial Features (⚠️ 4 features)

#### F002: Fish Management [P0]
**Specification:** `specs/fish-management.md`
**Status:** ⚠️ PARTIAL

**Implemented:**
- ✅ Backend API (all CRUD endpoints)
- ✅ Fish list page
- ✅ Fish detail view

**Missing:**
- ❌ Fish profile edit page
- ❌ Photo upload UI (blocked by photo-upload.md)
- ❌ Bulk import feature

**Effort to Complete:** ~4 hours
**Blockers:** Photo upload API must be completed first

**Needs Clarification:**
- [NEEDS CLARIFICATION] Photo upload: drag-drop or click-browse?
- [NEEDS CLARIFICATION] Max photos per fish?

---

## Technical Debt

### High Priority (Blocking)
- Missing integration tests (0 tests, blocks deployment)
- No error handling in 8 API endpoints (causes crashes)
- Hardcoded AWS region (prevents multi-region)

### Medium Priority
- Frontend components lack TypeScript types
- No loading states in UI (poor UX)
- Missing rate limiting on API (security risk)

### Low Priority
- Inconsistent code formatting
- No dark mode support
- Missing accessibility labels

---

## Prioritized Roadmap

### Phase 1: P0 Critical (~12 hours)

**Goals:**
- Unblock core user workflows
- Fix security issues
- Complete essential features

**Tasks:**
1. Complete F002: Fish Management UI (~4h)
   - Implement photo upload API
   - Build fish edit page
   - Connect to backend

2. Add error handling to all APIs (~3h)

3. Implement integration tests (~5h)

### Phase 2: P1 High Value (~20 hours)

**Goals:**
- Build analytics dashboard
- Implement notifications
- Improve test coverage

**Tasks:**
1. F003: Analytics Dashboard (~8h)
2. F006: Notification System (~6h)
3. Add rate limiting (~2h)
4. Improve TypeScript coverage (~4h)

### Phase 3: P2/P3 Enhancements (~TBD)

**Goals:**
- Add nice-to-have features
- Polish and refinements

**Tasks:**
1. F005: Social Features (~12h)
2. F007: Dark Mode (~6h)
3. F008: Admin Panel (~10h)

---

## Clarifications Needed (8 total)

### Critical (P0) - 2 items
1. **F002 - Photo Upload:** Drag-drop, click-browse, or both?
2. **F004 - Offline Sync:** Full data or metadata only?

### Important (P1) - 4 items
3. **F003 - Analytics:** Which chart types and metrics?
4. **F006 - Notifications:** Email, push, or both?
5. **F003 - Data Refresh:** Real-time or daily aggregates?
6. **F006 - Alert Frequency:** Per event or digest?

### Nice-to-Have (P2) - 2 items
7. **F007 - Dark Mode:** Full theme or toggle only?
8. **F005 - Social:** Which social features (share, comment, like)?

---

## Recommendations

1. **Resolve P0 clarifications first** (Step 5: complete-spec)
2. **Focus on Phase 1** before expanding scope
3. **Use /speckit.implement** for systematic implementation
4. **Update specs as you go** to keep them accurate
5. **Run /speckit.analyze regularly** to catch drift

---

## Next Steps

1. Run the complete-spec skill to resolve clarifications
2. Begin Phase 1 implementation
3. Use `/speckit.implement` for each feature
4. Update implementation status in specs
5. Re-run `/speckit.analyze` to validate progress
```

---

## GitHub Spec Kit Integration

After gap analysis, leverage Spec Kit commands:

### Validate Continuously
```bash
# Re-run after making changes
> /speckit.analyze

# Should show fewer issues as you implement
```

### Implement Systematically
```bash
# Generate tasks for a feature
> /speckit.tasks user-authentication

# Implement step-by-step
> /speckit.implement user-authentication

# Updates spec status automatically
```

### Clarify Ambiguities
```bash
# Before implementing unclear features
> /speckit.clarify analytics-dashboard

# Interactive Q&A to fill gaps
```

---

## Success Criteria

After running this skill, you should have:

- ✅ `/speckit.analyze` results reviewed
- ✅ All inconsistencies documented
- ✅ PARTIAL features analyzed (what exists vs missing)
- ✅ MISSING features categorized
- ✅ Technical debt cataloged
- ✅ `[NEEDS CLARIFICATION]` markers added
- ✅ Priorities assigned (P0/P1/P2/P3)
- ✅ Phased implementation roadmap
- ✅ `docs/gap-analysis-report.md` created
- ✅ Ready to proceed to Step 5 (Complete Specification)

---

## Next Step

Once gap analysis is complete, proceed to:

**Step 5: Complete Specification** - Use the complete-spec skill to resolve all `[NEEDS CLARIFICATION]` markers interactively.

---

## Technical Notes

- `/speckit.analyze` is run first for automated checks
- Manual analysis supplements it with deeper insights
- The gap report complements Spec Kit's output
- Keep both `.specify/memory/` specs and the gap report updated
- Re-run `/speckit.analyze` frequently to track progress

---

## Route Comparison: What Gap Analysis Means

| Aspect | Greenfield | Brownfield |
|--------|-----------|-----------|
| **Analyzing** | Spec completeness | Existing code vs specs |
| **Goal** | Validate specs ready to build NEW | Find gaps in CURRENT implementation |
| **/speckit.analyze** | Skip (no old code to compare) | Run (compare specs to code) |
| **Gap Definition** | Missing requirements, ambiguities | Missing features, partial implementations |
| **Roadmap** | Build order for NEW app | Fill gaps in EXISTING app |
| **Tech Stack** | ASK user (choosing for new) | Already decided (current stack) |
| **All Features** | ❌ MISSING (building from scratch) | Mix of ✅⚠️❌ (some exist) |

**Key Insight:**
- **Greenfield:** Specs describe WHAT to build (the old code doesn't matter); same for ALL detection types
- **Brownfield:** Specs describe current reality (validate against the old code); same for ALL detection types

**Detection type doesn't change the gap analysis approach**; it only affects which patterns were analyzed in Gear 2.

---

**Remember:** Check route first! Greenfield analyzes SPECS, Brownfield analyzes IMPLEMENTATION.

625
skills/implement/SKILL.md
Normal file
@@ -0,0 +1,625 @@
---
name: implement
description: Use GitHub Spec Kit's /speckit.implement and /speckit.tasks to systematically build missing features from specifications. Leverages implementation plans in specs/, validates against acceptance criteria, and achieves 100% spec completion. This is Step 6 of 6 in the reverse engineering process.
---

# Implement from Spec (with GitHub Spec Kit)

**Step 6 of 6** in the Reverse Engineering to Spec-Driven Development process.

**Estimated Time:** Hours to days (depends on gaps)
**Prerequisites:** Step 5 completed (all specs finalized, no `[NEEDS CLARIFICATION]` markers)
**Output:** Fully implemented application with all specs marked ✅ COMPLETE

---

## When to Use This Skill

Use this skill when:
- You've completed Step 5 (Complete Specification)
- All specifications in `specs/` are finalized
- Implementation plans exist in `specs/`
- Ready to use `/speckit.implement` to build features

**Trigger Phrases:**
- "Implement missing features"
- "Use speckit to implement"
- "Build from specifications"
- "Run speckit implement"

---

## What This Skill Does

Uses **GitHub Spec Kit's implementation workflow** to systematically build features:

1. **Use /speckit.tasks** - Generate actionable task lists from implementation plans
2. **Use /speckit.implement** - Execute tasks step-by-step for each feature
3. **Validate with /speckit.analyze** - Verify implementation matches specs
4. **Update Specs Automatically** - Spec Kit marks features ✅ COMPLETE as you implement
5. **Track Progress** - Monitor completion via `.specify/memory/` status markers
6. **Achieve 100% Completion** - All specs implemented and validated

**Key Benefit:** Spec Kit's `/speckit.implement` command guides you through implementation plans, updates specs automatically, and validates work against acceptance criteria.

---

## ⚠️ Two Contexts: Handoff vs Standard Implementation

**This skill works differently based on context:**

### Context A: Handoff (After Reverse Engineering)
**When:** Just completed Gears 1-5, on main branch, gaps identified
**What happens:** Handoff procedure (celebrate, explain the transition, offer feature-branch setup)
**See:** [operations/handoff.md](operations/handoff.md)

### Context B: Standard Implementation (Ongoing)
**When:** On a feature branch (002-*, 003-*), working on a specific feature
**What happens:** Standard GitHub Spec Kit implementation workflow
**See:** Process Overview below

**The handoff only happens ONCE** (after the initial reverse engineering). After that, you always use the standard /speckit.* workflow on feature branches.

---

## GitHub Spec Kit Implementation Workflow

The standard Spec Kit workflow is:

```
/speckit.specify → /speckit.plan → /speckit.tasks → /speckit.implement → /speckit.analyze
```

**For reverse engineering, we've already done the first two steps:**
- ✅ `/speckit.specify` - Done in Step 3 (created specifications)
- ✅ `/speckit.plan` - Done in Step 3 (created implementation plans)

**Now we use the remaining commands:**
- `/speckit.tasks` - Generate task lists
- `/speckit.implement` - Build features
- `/speckit.analyze` - Validate

---

## Process Overview

### Step 1: Review Implementation Roadmap

From `docs/gap-analysis-report.md`, review the phased plan:

**Phase 1: P0 Critical** (~12 hours)
- Essential features
- Security fixes
- Blocking issues

**Phase 2: P1 High Value** (~20 hours)
- Important features
- High user impact
- Key improvements

**Phase 3: P2/P3** (~TBD)
- Nice-to-have
- Future enhancements

**Confirm with the user:**
- Start with Phase 1 (P0 items)?
- Any blockers to address first?
- Time constraints?

### Step 2: For Each Feature - Generate Tasks

Use `/speckit.tasks` to generate actionable tasks from the implementation plan:

```bash
# Example: Implement user authentication frontend
> /speckit.tasks user-authentication-frontend
```

**What this does:**
- Reads `specs/user-authentication-frontend.md`
- Breaks the plan down into specific, actionable tasks
- Creates a task checklist

**Output example:**
```markdown
# Tasks: User Authentication Frontend

Based on implementation plan in `specs/user-authentication-frontend.md`

## Tasks
- [ ] Create LoginPage component (app/login/page.tsx)
- [ ] Create RegistrationPage component (app/register/page.tsx)
- [ ] Create PasswordResetPage component (app/reset-password/page.tsx)
- [ ] Add Zod validation schemas (lib/validation/auth.ts)
- [ ] Create useAuth hook (hooks/useAuth.ts)
- [ ] Implement API integration (lib/api/auth.ts)
- [ ] Add loading states to all forms
- [ ] Add error handling and display
- [ ] Write component tests (LoginPage.test.tsx, etc.)
- [ ] Update routing configuration (app/layout.tsx)

## Dependencies
- Backend API endpoints must be functional
- UI component library installed

## Acceptance Criteria (from specification)
- [ ] User can register with email and password
- [ ] User can log in with credentials
- [ ] User can reset forgotten password
- [ ] JWT tokens stored securely
- [ ] Forms validate input before submission
- [ ] Loading states shown during API calls
- [ ] Error messages displayed clearly
```

See [operations/generate-tasks.md](operations/generate-tasks.md)

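Between `/speckit.tasks` and `/speckit.implement` runs, a quick count of unchecked boxes shows how much of a feature remains. A sketch against an invented task list:

```shell
# Sketch: count remaining unchecked "- [ ]" tasks in a task list.
# The task list content is invented for illustration.
set -eu
tasks=$(mktemp)
cat > "$tasks" <<'EOF'
- [x] Create LoginPage component (app/login/page.tsx)
- [ ] Create RegistrationPage component (app/register/page.tsx)
- [ ] Add Zod validation schemas (lib/validation/auth.ts)
EOF

# Match only lines that start with an unchecked markdown checkbox
remaining=$(grep -c '^- \[ \]' "$tasks")
echo "$remaining task(s) remaining"
```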
### Step 3: Implement Feature with /speckit.implement

Use `/speckit.implement` to execute the implementation plan:

```bash
# Implement the feature step-by-step
> /speckit.implement user-authentication-frontend
```

**What this does:**
1. Loads tasks from `/speckit.tasks` output
2. Walks through each task systematically
3. Generates code for each task
4. Tests the implementation against acceptance criteria
5. Updates specification status markers
6. Commits changes with descriptive messages

**Interactive flow:**
```
> /speckit.implement user-authentication-frontend

Starting implementation of: User Authentication Frontend
Plan: specs/user-authentication-frontend.md

Task 1/10: Create LoginPage component

I'll create app/login/page.tsx with:
- Email/password form
- Form validation
- Submit handler
- Link to registration and password reset

[Code generated]

✅ Task 1 complete

Task 2/10: Create RegistrationPage component
[...]

All tasks complete! Running validation...

✅ All acceptance criteria met
✅ Tests passing (8/8)
✅ No TypeScript errors

Updating specification status...
user-authentication.md: ⚠️ PARTIAL → ✅ COMPLETE

Implementation complete!
```

See [operations/use-speckit-implement.md](operations/use-speckit-implement.md)

### Step 4: Validate Implementation

After implementing, use `/speckit.analyze` to verify:

```bash
> /speckit.analyze
```

**What it checks:**
- Implementation matches the specification
- All acceptance criteria met
- No inconsistencies with related specs
- Status markers accurate

**If issues are found:**
```
⚠️ Issues detected:

1. user-authentication.md marked COMPLETE
   - Missing: Token refresh mechanism
   - Action: Add token refresh or update spec

2. Inconsistency with user-profile.md
   - user-profile depends on authentication
   - user-profile marked PARTIAL
   - Recommendation: Complete user-profile next
```

Fix any issues and re-run `/speckit.analyze` until clean.

### Step 5: Update Progress and Continue

After each feature:

1. **Check progress:**
   ```bash
   > /speckit.analyze
   # Shows: X/Y features complete
   ```

2. **Update gap report:**
   - Mark feature as ✅ COMPLETE
   - Update overall completion percentage
   - Move to next priority feature

3. **Commit changes:**
   ```bash
   git commit -m "feat: implement user authentication frontend (user-authentication.md)"
   ```

4. **Select next feature:**
   - Follow prioritized roadmap
   - Choose next P0 item, or move to P1 if P0 complete

### Step 6: Iterate Until 100% Complete

Repeat Steps 2-5 for each feature in the roadmap:

```bash
# Phase 1: P0 Critical
> /speckit.tasks fish-management-ui
> /speckit.implement fish-management-ui
> /speckit.analyze

> /speckit.tasks photo-upload-api
> /speckit.implement photo-upload-api
> /speckit.analyze

# Phase 2: P1 High Value
> /speckit.tasks analytics-dashboard
> /speckit.implement analytics-dashboard
> /speckit.analyze

# Continue until all features complete...
```

**Track progress:**
- Phase 1: 3/3 complete (100%) ✅
- Phase 2: 2/4 complete (50%) 🔄
- Phase 3: 0/5 complete (0%) ⏳

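The completion percentage can be derived from the status markers in the spec files themselves. A sketch, assuming each spec carries a `**Status:**` line (file names and statuses below are invented):

```shell
# Sketch: compute completion percentage from per-spec status lines.
# Directory, file names, and status values are invented examples.
set -eu
d=$(mktemp -d)
printf '**Status:** COMPLETE\n' > "$d/auth.md"
printf '**Status:** COMPLETE\n' > "$d/fish.md"
printf '**Status:** PARTIAL\n'  > "$d/photos.md"

total=$(ls "$d"/*.md | wc -l | tr -d ' ')
complete=$(grep -l 'COMPLETE' "$d"/*.md | wc -l | tr -d ' ')
echo "$complete/$total specs complete ($((100 * complete / total))%)"
```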
---

## Example: Complete Implementation Flow

```bash
# 1. Review roadmap
User: "Let's implement the missing features"
Claude: Reviews docs/gap-analysis-report.md
Claude: "I see 3 P0 items. Let's start with fish-management-ui?"
User: "Yes, let's do it"

# 2. Generate tasks
> /speckit.tasks fish-management-ui
Output: 12 tasks identified

# 3. Implement
> /speckit.implement fish-management-ui

Starting implementation...

Task 1/12: Create FishEditPage component
[Code generated for app/fish/[id]/edit/page.tsx]
✅ Task 1 complete

Task 2/12: Add photo upload UI
[Code generated for components/PhotoUpload.tsx]
✅ Task 2 complete

[... continues through all 12 tasks ...]

Implementation complete!
✅ All acceptance criteria met
✅ Tests passing (15/15)

# 4. Validate
> /speckit.analyze
✅ No issues found
fish-management.md: ⚠️ PARTIAL → ✅ COMPLETE

# 5. Commit
git commit -m "feat: complete fish management UI (fish-management.md)"

# 6. Next feature
Claude: "Phase 1 progress: 1/3 complete. Next: photo-upload-api?"
User: "Yes"

# Repeat...
```

---
|
||||||
|
|
||||||
|
## Integration with Reverse Engineering Process
|
||||||
|
|
||||||
|
**Your reverse-engineered codebase is now:**
|
||||||
|
1. ✅ Fully documented (Step 2)
|
||||||
|
2. ✅ Formal specs created (Step 3)
|
||||||
|
3. ✅ Gaps identified (Step 4)
|
||||||
|
4. ✅ Clarifications resolved (Step 5)
|
||||||
|
5. 🔄 **Being implemented systematically (Step 6)**
|
||||||
|
|
||||||
|
**Spec Kit ensures:**
|
||||||
|
- Implementation matches specs exactly
|
||||||
|
- Specs stay up-to-date with code
|
||||||
|
- No drift between docs and reality
|
||||||
|
- Continuous validation
|
||||||
|
|
||||||
|
**After completion:**
|
||||||
|
- Use `/speckit.specify` for new features
|
||||||
|
- Use `/speckit.plan` → `/speckit.tasks` → `/speckit.implement` for development
|
||||||
|
- Use `/speckit.analyze` to maintain consistency
|
||||||
|
- Your codebase is now fully spec-driven!
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Success Criteria

After running this skill (implementing all features), you should have:

- ✅ All P0 features implemented (Phase 1 complete)
- ✅ All P1 features implemented (Phase 2 complete)
- ✅ P2/P3 features implemented or intentionally deferred
- ✅ All specifications marked ✅ COMPLETE
- ✅ `/speckit.analyze` shows no issues
- ✅ All tests passing
- ✅ Application at 100% completion
- ✅ Ready for production deployment

**Ongoing spec-driven development established:**

- New features start with `/speckit.specify`
- Implementation uses `/speckit.plan` → `/speckit.tasks` → `/speckit.implement`
- Continuous validation with `/speckit.analyze`

---
## Best Practices

### During Implementation

1. **One feature at a time** - Don't start multiple features in parallel
2. **Follow the roadmap** - Respect P0 → P1 → P2 priority order
3. **Use `/speckit.implement`** - Don't implement manually; let Spec Kit guide you
4. **Validate frequently** - Run `/speckit.analyze` after each feature
5. **Commit often** - Commit after each feature completion
6. **Update specs** - If you discover new requirements, update specs first

### Quality Standards

For each implementation:

- ✅ Meets all acceptance criteria
- ✅ Tests added and passing
- ✅ TypeScript types correct (if applicable)
- ✅ Error handling implemented
- ✅ Loading states for async operations
- ✅ Responsive design (if UI)
- ✅ Accessibility standards met

### When Issues Arise

If `/speckit.analyze` finds problems:

1. Fix the implementation to match the spec, OR
2. Update the spec if requirements changed
3. Never leave specs and code out of sync

---
## Continuous Spec-Driven Development

After completing the reverse engineering process:

### For New Features

```bash
# 1. Create specification
> /speckit.specify

# 2. Create implementation plan
> /speckit.plan

# 3. Generate tasks
> /speckit.tasks

# 4. Implement
> /speckit.implement

# 5. Validate
> /speckit.analyze
```

### For Refactoring

```bash
# 1. Update affected specifications
> /speckit.specify

# 2. Update implementation plan
> /speckit.plan

# 3. Implement changes
> /speckit.implement

# 4. Validate no regression
> /speckit.analyze
```

### For Bug Fixes

```bash
# 1. Update spec if bug reveals requirement gap
> /speckit.specify

# 2. Fix implementation
[manual fix or /speckit.implement]

# 3. Validate
> /speckit.analyze
```

---
## Technical Notes

- Spec Kit's `/speckit.implement` generates code - review before committing
- Implementation plans should be detailed for best results
- `/speckit.tasks` output can be refined if tasks are too broad
- Use `/speckit.clarify` if you discover ambiguities during implementation
- Keep `.specify/memory/` in version control
- `specs/` is the source of truth

---
## Final Outcome

**You've transformed:**

- Partially-complete codebase with no specs
- → Fully spec-driven development workflow
- → 100% implementation aligned with specifications
- → Continuous validation with `/speckit.analyze`
- → Sustainable spec-first development process

**Your application is now:**

- ✅ Fully documented
- ✅ Completely specified
- ✅ 100% implemented
- ✅ Continuously validated
- ✅ Ready for ongoing spec-driven development

---
## Gear 6.5: Validate & Review

Before finalizing, let's ensure everything meets quality standards through systematic validation.

### Step 1: Run Validation

```bash
# Validate implementation against specs
/stackshift.validate --fix
```

This will:

1. ✅ Run full test suite
2. ✅ Validate TypeScript compilation
3. ✅ Check spec compliance
4. ✅ Categorize any issues
5. ✅ Auto-fix issues (with --fix flag)
6. ✅ Rollback if fixes fail

**Expected result:**

```
✅ VALIDATION PASSED

All tests passing: ✅
TypeScript compiling: ✅
Spec compliance: ✅
Code quality: ✅

🚀 Implementation is production-ready!
```

If validation finds issues, they'll be fixed automatically. If critical issues are found that can't be auto-fixed, I'll report them for manual resolution.

### Step 2: Code Review

```bash
# Perform comprehensive code review
/stackshift.review
```

This reviews across 5 dimensions:

1. 🔍 **Correctness** - Works as intended, meets requirements
2. 📏 **Standards** - Follows conventions, well documented
3. 🔒 **Security** - No vulnerabilities, proper validation
4. ⚡ **Performance** - Efficient, scalable implementation
5. 🧪 **Testing** - Adequate coverage, edge cases handled

**Expected result:**

```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📋 Review Report
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

### ✅ APPROVED

All quality checks passed
Ready for deployment

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```

If issues are found, I'll provide specific feedback with line numbers and recommendations.

### Step 3: Generate Spec Coverage Map

After validation passes, let's create the coverage map...

---
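As an illustration of the "categorize any issues" step above, here is a minimal sketch of how issues might be split into auto-fixable and manual buckets. The issue kinds and rules are assumptions for illustration, not StackShift's actual validation logic:

```python
# Hypothetical triage of validation issues (illustrative only):
# lint/format problems are treated as auto-fixable, while test failures
# and type errors are routed to manual (spec-guided) resolution.
AUTO_FIXABLE = {"lint", "format", "import-order"}

def triage(issues):
    """Split a list of {'kind', 'detail'} issues into fix buckets."""
    auto, manual = [], []
    for issue in issues:
        (auto if issue["kind"] in AUTO_FIXABLE else manual).append(issue)
    return {"auto_fix": auto, "manual": manual}

issues = [
    {"kind": "lint", "detail": "unused variable in src/api.ts"},
    {"kind": "test-failure", "detail": "fish.service.spec.ts: 1 failing"},
]
report = triage(issues)
```

A real validator would derive the issue list from test, compiler, and linter output; the point is only that `--fix` can act on the first bucket and report the second.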
## Final Step: Generate Spec Coverage Map

Now let's create a visual coverage map showing the relationship between your specifications and code:

```bash
# Generate coverage map
```

I'll analyze all specs in `.specify/memory/specifications/` or `specs/` and create:

1. **ASCII box diagrams** - Visual map of each spec's files
2. **Reverse index** - Which spec(s) cover each file
3. **Coverage statistics** - Percentages by category
4. **Heat map** - Visual coverage representation
5. **Gap analysis** - Files not covered by specs
6. **Shared files** - High-risk files used by multiple specs

**Output:** `docs/spec-coverage-map.md`

This provides crucial visibility into spec-code alignment and helps identify any gaps!

---
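The coverage statistics and heat-map bars can be sketched in a few lines. This is a minimal illustration with made-up file names, not StackShift's real coverage-map generator:

```python
# Minimal sketch of per-category coverage math and a text heat-map bar.
# The category-to-files mapping below is illustrative data only.
def coverage(covered, total, width=20):
    """Return (percentage, text bar) for covered files out of total files."""
    pct = round(100 * len(covered) / len(total))
    filled = round(width * len(covered) / len(total))
    return pct, "[" + "█" * filled + "░" * (width - filled) + "]"

categories = {
    "Database": (["schema.prisma"], ["schema.prisma"]),
    "Scripts": (["seed.ts", "migrate.ts"], ["seed.ts", "migrate.ts", "backfill.ts"]),
}
report = {name: coverage(set(c), set(t)) for name, (c, t) in categories.items()}
```

Rendering each category as `name: pct% bar` gives the kind of heat map shown in the health report.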
## Spec Coverage Health Report

After generating the coverage map, I'll show you a summary:

```
📊 Spec Coverage Health Report

Overall Coverage: 91% (99/109 files)

By Category:
Backend:        93% [████████████████░░]
Frontend:       92% [████████████████░░]
Infrastructure: 83% [███████████████░░░]
Database:      100% [████████████████████]
Scripts:        67% [█████████░░░░░░░░░]

Status:
✅ 12 specs covering 99 files
⚠️ 10 gap files identified (need review)
🔴 2 high-risk shared files (used by 4+ specs)

Full report: docs/spec-coverage-map.md
```

---
**Congratulations!** You've completed the 6-step Reverse Engineering to Spec-Driven Development process. Your codebase is now enterprise-grade and fully specified, ready for sustainable development using GitHub Spec Kit, or you can continue using StackShift to help develop new functionality. 🎉

---

**Remember:** Maintain the spec-driven workflow going forward:

1. Requirements change → Update specs first (`/speckit.specify`)
2. Plan implementation (`/speckit.plan`)
3. Generate tasks (`/speckit.tasks`)
4. Implement (`/speckit.implement`)
5. Validate (`/speckit.analyze`)

This ensures specs and code never drift apart.
308
skills/implement/operations/handoff.md
Normal file
@@ -0,0 +1,308 @@
# Handoff: Reverse Engineering → Spec-Driven Development

**The transition point from reverse engineering to standard GitHub Spec Kit workflow**

---

## When This Happens

After completing Gears 1-5 (reverse engineering complete), before starting implementation.

**Triggers:**

- User is on main/master branch
- Gap analysis shows PARTIAL or MISSING features
- Implementation plans exist in .specify/memory/plans/
- User hasn't created a feature branch yet

**Purpose:** Celebrate completion, explain the transition, guide next steps.

---
## Handoff Procedure

### Step 1: Celebrate Completion

```markdown
# 🎉 Reverse Engineering Complete!

Congratulations! You've successfully transformed your codebase into a
fully-specified, spec-driven project.

## What You've Accomplished

✅ **Gear 1: Analysis** - Tech stack detected, completeness assessed
✅ **Gear 2: Reverse Engineering** - 8 comprehensive documentation files
✅ **Gear 3: Specifications** - GitHub Spec Kit structure created
✅ **Gear 4: Gap Analysis** - Missing features identified and prioritized
✅ **Gear 5: Complete Specification** - All clarifications resolved

## What You Have Now

**Documentation:**
- analysis-report.md
- docs/reverse-engineering/ (8 files)
- docs/gap-analysis-report.md

**GitHub Spec Kit:**
- .specify/memory/constitution.md (project principles)
- .specify/memory/specifications/ (X feature specs)
- .specify/memory/plans/ (Y implementation plans)

**State:**
- .stackshift-state.json (complete history)

**All committed to:** [current branch]
```
### Step 2: Explain the Transition

```markdown
## 🔄 What Happens Next?

You're now transitioning from **reverse engineering** (one-time) to
**standard spec-driven development** (ongoing).

### Two Different Workflows

**StackShift Reverse Engineering (You Just Finished This):**
- Purpose: Transform existing codebase into specifications
- When: One-time (or when a major refactor is needed)
- Skills: analyze, reverse-engineer, create-specs, gap-analysis, complete-spec
- Output: Complete specification of your system
- ✅ DONE!

**GitHub Spec Kit Standard Workflow (Use Going Forward):**
- Purpose: Implement features, fix bugs, ongoing development
- When: Every day, every feature
- Commands: /speckit.tasks, /speckit.implement, /speckit.analyze
- Workflow: Feature branch → implement → validate → merge
- ⬅️ START USING THIS NOW

### Key Difference

**StackShift** = Creating the specs from existing code
**Spec Kit** = Building features from specs

You've completed the StackShift phase. Now you use Spec Kit!
```
### Step 3: Show Remaining Work

```markdown
## 📋 Your Remaining Work (From Gap Analysis)

You have X features ready for implementation:

### High Priority (P0/P1)

1. **[Feature 1 Name]** - [X hours]
   - Status: ⚠️ PARTIAL (backend done, UI missing)
   - Plan: .specify/memory/plans/[feature-1].md

2. **[Feature 2 Name]** - [X hours]
   - Status: ❌ MISSING (not started)
   - Plan: .specify/memory/plans/[feature-2].md

### Medium Priority (P2)

3. **[Feature 3 Name]** - [X hours]
   - Status: ❌ MISSING
   - Plan: .specify/memory/plans/[feature-3].md

[... list all ...]

## Total Estimated Effort

- P0 features: [X hours]
- P1 features: [Y hours]
- P2 features: [Z hours]
- **Total: [N hours]**
```
### Step 4: Offer Feature Branch Setup

```markdown
## 🚀 Ready to Start Implementing?

I recommend starting with: **[First Feature Name]**

**Why this one first:**
- [Reason: highest priority, foundational, etc.]
- Clear implementation plan already exists
- Estimated effort: [X hours]

**Would you like me to set up the feature branch for you?**

If yes:
1. Create feature branch: `002-[feature-name]`
2. Set up working environment
3. Show you the `/speckit.tasks` and `/speckit.implement` workflow

If no:
- I'll provide instructions for doing it manually
- You can implement when ready
```
### Step 5: If User Says Yes - Set Up Feature Branch

```bash
# Get feature name from plan (e.g., "manual-catch-logging-frontend")
FEATURE_NAME="[feature-name]"
FEATURE_NUMBER="002"  # Increment from existing

# Create and switch to feature branch
git checkout -b ${FEATURE_NUMBER}-${FEATURE_NAME}

# Create README in branch
cat > WORKING_ON.md <<EOF
# Feature: ${FEATURE_NAME}

**Branch:** ${FEATURE_NUMBER}-${FEATURE_NAME}
**Specification:** .specify/memory/specifications/[spec-file].md
**Implementation Plan:** .specify/memory/plans/${FEATURE_NAME}.md

## Status
🔄 In Progress

## Next Steps

1. Generate tasks from implementation plan:
   \`\`\`
   /speckit.tasks
   \`\`\`

2. Execute implementation:
   \`\`\`
   /speckit.implement
   \`\`\`

3. Validate:
   \`\`\`
   /speckit.analyze
   \`\`\`

## Progress
[Will be updated as you work]
EOF

# Commit
git add WORKING_ON.md
git commit -m "chore: set up feature branch for ${FEATURE_NAME}"
git push -u origin ${FEATURE_NUMBER}-${FEATURE_NAME}
```
**Report to user:**

````markdown
✅ Feature branch created: `002-[feature-name]`

## You're All Set!

**Next command to run:**

```
/speckit.tasks
```

This will:
1. Read the implementation plan
2. Generate actionable task checklist
3. Break down the 52-hour plan into specific steps

Then you can run:

```
/speckit.implement
```

To execute the tasks step-by-step!

**I'm ready to help.** Run `/speckit.tasks` when you're ready to start building.
````
### Step 6: If User Says No - Provide Manual Instructions

````markdown
## Manual Feature Branch Workflow

When you're ready to implement a feature:

### 1. Create Feature Branch

```bash
# Pick a feature from .specify/memory/plans/
# Choose a number (001, 002, 003, etc.)
# Name format: {number}-{feature-name}

git checkout -b 002-feature-name
git push -u origin 002-feature-name
```

### 2. Run Spec Kit Commands

```
# Generate tasks
/speckit.tasks

# Implement
/speckit.implement

# Validate
/speckit.analyze
```

### 3. When Complete

```bash
git add .
git commit -m "feat: complete [feature-name] (#002)"
gh pr create --base main
```

### 4. After Merge

```bash
git checkout main
git pull
# Pick next feature and repeat!
```

## Need Help Later?

Just ask! I can help with:
- Setting up feature branches
- Running /speckit commands
- Understanding implementation plans
- Resolving issues
````

---
## Handoff Checklist

Before transitioning to standard workflow:

- [ ] All specifications finalized (no [NEEDS CLARIFICATION])
- [ ] Gap analysis complete with prioritized roadmap
- [ ] Implementation plans exist for PARTIAL/MISSING features
- [ ] Constitution established
- [ ] User understands feature branch workflow
- [ ] User knows to use /speckit.* commands (not stackshift skills)
- [ ] Clear about next steps

---
## Success Criteria

After handoff, user should:

✅ Understand they've completed reverse engineering
✅ Know to use /speckit.* commands going forward
✅ Have clear next steps (either feature branch created or instructions provided)
✅ Feel confident about proceeding
✅ Not be confused about what to do next

---
## Notes

- This handoff only happens ONCE (after initial reverse engineering)
- Future feature development uses the standard Spec Kit workflow from the start
- If the user comes back later: remind them they're past reverse engineering; use /speckit commands
- Feature branch naming: 001-, 002-, 003- (numeric prefix for ordering)
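The numeric-prefix convention above can be sketched as a small helper. The function name and inputs here are hypothetical, purely to illustrate the `{number}-{feature-name}` scheme:

```python
import re

def next_branch_name(existing_branches, feature_name):
    """Compute the next zero-padded numeric prefix for a feature branch.

    e.g. ["001-initial-specs"] -> "002-<feature_name>"
    """
    numbers = [
        int(m.group(1))
        for b in existing_branches
        if (m := re.match(r"(\d{3})-", b))
    ]
    return f"{max(numbers, default=0) + 1:03d}-{feature_name}"

branch = next_branch_name(["001-initial-specs"], "manual-catch-logging-frontend")
```

The zero-padding keeps branches sorted correctly in `git branch` output.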
419
skills/modernize/SKILL.md
Normal file
@@ -0,0 +1,419 @@
---
name: modernize
description: Brownfield Upgrade - Upgrade all dependencies and modernize the application while maintaining spec-driven control. Runs after Gear 6 for brownfield projects with modernize flag enabled. Updates deps, fixes breaking changes, improves test coverage, updates specs to match changes.
---

# Modernize (Brownfield Upgrade)

**Optional step** after Gear 6 for Brownfield projects with the `modernize: true` flag set.

**Estimated Time:** 2-6 hours (depends on dependency age and breaking changes)
**Prerequisites:** Gears 1-6 completed, 100% spec coverage established
**Output:** Modern dependency versions, updated tests, synchronized specs

---
## When to Use This Skill

Use this skill when:

- Brownfield path with `modernize: true` flag set
- Gears 1-6 are complete (specs established, gaps implemented)
- Ready to upgrade all dependencies to latest versions
- Want to modernize while maintaining spec-driven control

**Trigger Conditions:**

- State file has `path: "brownfield"` AND `modernize: true`
- Gear 6 (implement) is complete
- User requested "Brownfield Upgrade" during Gear 1

---
## What This Skill Does

Systematically upgrades the entire application to modern dependency versions:

1. **Detect Package Manager** - npm, yarn, pnpm, pip, go mod, cargo, etc.
2. **Audit Current Versions** - Document what's installed before upgrade
3. **Upgrade Dependencies** - Use appropriate upgrade command for tech stack
4. **Run Tests** - Identify breaking changes
5. **Fix Breaking Changes** - Iteratively fix with spec guidance
6. **Update Specs** - Synchronize specs with API/behavior changes
7. **Validate Coverage** - Ensure tests meet 85%+ threshold
8. **Verify Specs Match** - Run /speckit.analyze to confirm alignment

---
## Process Overview

### Phase 1: Pre-Upgrade Audit

**Document current state**:

```bash
# Create upgrade baseline
mkdir -p .modernize
cat package.json > .modernize/baseline-package.json

# Run tests to establish baseline
npm test > .modernize/baseline-test-results.txt

# Document current coverage
npm run test:coverage > .modernize/baseline-coverage.txt
```

**Analyze upgrade scope**:

```bash
# Check for available updates
npm outdated > .modernize/upgrade-plan.txt

# Identify major version bumps (potential breaking changes)
# Highlight security vulnerabilities
# Note deprecated packages
```

---
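The "identify major version bumps" step can be sketched as follows. The data shape mimics what `npm outdated --json` reports per package (`current`/`latest` fields), but this snippet is an illustration on hardcoded data, not the skill's actual implementation:

```python
# Sketch: flag packages whose latest major version exceeds the installed
# major version, since those are the likely sources of breaking changes.
def major_bumps(outdated):
    """Return {package: (current, latest)} for major-version upgrades."""
    bumps = {}
    for pkg, info in outdated.items():
        current_major = int(info["current"].split(".")[0])
        latest_major = int(info["latest"].split(".")[0])
        if latest_major > current_major:
            bumps[pkg] = (info["current"], info["latest"])
    return bumps

outdated = {
    "react": {"current": "17.0.2", "latest": "18.3.1"},
    "lodash": {"current": "4.17.20", "latest": "4.17.21"},
}
plan = major_bumps(outdated)
```

Packages in `plan` warrant a changelog review before Phase 2; the rest can usually be upgraded in bulk.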
### Phase 2: Dependency Upgrade

**Tech stack detection** (from analysis-report.md):

**For Node.js/TypeScript**:
```bash
# Update all dependencies (within existing semver ranges)
npm update

# Or for major versions:
npx npm-check-updates -u
npm install

# Check for security issues
npm audit fix
```

**For Python**:
```bash
# Update all dependencies
pip install --upgrade -r requirements.txt
pip freeze > requirements.txt

# Or use pip-upgrader
pip-upgrade requirements.txt
```

**For Go**:
```bash
# Update all dependencies
go get -u ./...
go mod tidy
```

**For Rust**:
```bash
# Update dependencies (within existing semver ranges)
cargo update

# Check for outdated packages (requires the cargo-outdated plugin)
cargo outdated
```

---
### Phase 3: Breaking Change Detection

**Run tests after upgrade**:

```bash
# Run full test suite and capture results
npm test 2>&1 | tee .modernize/post-upgrade-test-results.txt

# Compare to baseline
diff .modernize/baseline-test-results.txt .modernize/post-upgrade-test-results.txt
```

**Identify breaking changes**:

- TypeScript compilation errors
- Test failures
- Runtime errors
- API signature changes
- Deprecated method usage

---
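The baseline comparison above exists to separate upgrade-caused regressions from tests that were already failing. A minimal sketch of that set difference (test names are made up):

```python
# Sketch: only failures that are new relative to the pre-upgrade baseline
# are regressions introduced by the dependency upgrade.
def new_failures(baseline_failed, post_upgrade_failed):
    """Return failures present after the upgrade but not before, sorted."""
    return sorted(set(post_upgrade_failed) - set(baseline_failed))

regressions = new_failures(
    baseline_failed=["flaky.spec.ts"],          # was already failing
    post_upgrade_failed=["flaky.spec.ts", "routing.spec.ts"],
)
```

Only the regressions need spec-guided fixes in Phase 4; pre-existing failures are a separate backlog item.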
### Phase 4: Fix Breaking Changes (Spec-Guided)

**For each breaking change**:

1. **Identify affected feature**:
   - Match failing test to feature spec
   - Determine which spec the code implements

2. **Review spec requirements**:
   - What behavior SHOULD exist (from spec)
   - What changed in the upgrade
   - How to preserve spec compliance

3. **Fix with spec guidance**:
   - Update code to work with new dependency
   - Ensure behavior still matches spec
   - Refactor if needed to maintain spec alignment

4. **Update tests**:
   - Fix broken tests
   - Add tests for new edge cases from upgrade
   - Maintain 85%+ coverage threshold

5. **Verify spec alignment**:
   - Behavior unchanged from user perspective
   - Implementation may change but spec compliance maintained

---
### Phase 5: Spec Synchronization

**Check if upgrades changed behavior**:

Some dependency upgrades change API behavior:

- Date formatting libraries (moment → date-fns)
- Validation libraries (joi → zod)
- HTTP clients (axios → fetch)
- ORM updates (Prisma major versions)

**If behavior changed**:

1. Update relevant feature spec to document new behavior
2. Update acceptance criteria if needed
3. Update technical requirements with new dependencies
4. Run /speckit.analyze to validate changes

**If only implementation changed**:

- No spec updates needed
- Just update technical details (versions, file paths)

---
### Phase 6: Test Coverage Improvement

**Goal: Achieve 85%+ coverage on all modules**

1. **Run coverage report**:
   ```bash
   npm run test:coverage
   ```

2. **Identify gaps**:
   - Modules below 85%
   - Missing edge case tests
   - Integration test gaps

3. **Add tests with spec guidance**:
   - Each spec has acceptance criteria
   - Write tests to cover all criteria
   - Use spec success criteria as test cases

4. **Validate**:
   ```bash
   npm run test:coverage
   # All modules should be 85%+
   ```

---
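The "identify gaps" step is a simple threshold filter over the per-module coverage summary. A minimal sketch (module names and percentages are illustrative):

```python
# Sketch: the 85% quality gate applied to a per-module coverage summary,
# as a coverage reporter might emit it in summary form.
def below_threshold(coverage_by_module, threshold=85):
    """Return only the modules whose coverage is under the threshold."""
    return {m: pct for m, pct in coverage_by_module.items() if pct < threshold}

gaps = below_threshold({"auth": 91, "billing": 72, "catch-log": 85})
```

An empty `gaps` dict means the gate passes; otherwise each listed module needs new tests before Phase 7.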
### Phase 7: Final Validation

**Run complete validation suite**:

1. **Build succeeds**:
   ```bash
   npm run build
   # No errors
   ```

2. **All tests pass**:
   ```bash
   npm test
   # 0 failures
   ```

3. **Coverage meets threshold**:
   ```bash
   npm run test:coverage
   # 85%+ on all modules
   ```

4. **Specs validated**:
   ```bash
   /speckit.analyze
   # No drift, all specs match implementation
   ```

5. **Dependencies secure**:
   ```bash
   npm audit
   # No high/critical vulnerabilities
   ```

---
## Output

**Upgrade Report** (`.modernize/UPGRADE_REPORT.md`):

```markdown
# Dependency Modernization Report

**Date**: {date}
**Project**: {name}

## Summary

- **Dependencies upgraded**: {X} packages
- **Major version bumps**: {X} packages
- **Breaking changes**: {X} fixed
- **Tests fixed**: {X} tests
- **New tests added**: {X} tests
- **Coverage improvement**: {before}% → {after}%
- **Specs updated**: {X} specs

## Upgraded Dependencies

| Package | Old Version | New Version | Breaking? |
|---------|-------------|-------------|-----------|
| react | 17.0.2 | 18.3.1 | Yes |
| next | 13.5.0 | 14.2.0 | Yes |
| ... | ... | ... | ... |

## Breaking Changes Fixed

1. **React 18 Automatic Batching**
   - Affected: User state management
   - Fix: Updated useEffect dependencies
   - Spec: No behavior change
   - Tests: Added async state tests

2. **Next.js 14 App Router**
   - Affected: Routing architecture
   - Fix: Migrated pages/ to app/
   - Spec: Updated file paths
   - Tests: Updated route tests

## Spec Updates

- Updated technical requirements with new versions
- Updated file paths for App Router migration
- No functional spec changes (behavior preserved)

## Test Coverage

- Before: 78%
- After: 87%
- New tests: 45 tests added
- All modules: ✅ 85%+

## Validation

- ✅ All tests passing
- ✅ Build successful
- ✅ /speckit.analyze: No drift
- ✅ npm audit: 0 high/critical
- ✅ Coverage: 87% (target: 85%+)

## Next Steps

Application is now:
- ✅ Fully modernized (latest dependencies)
- ✅ 100% spec coverage maintained
- ✅ Tests passing with high coverage
- ✅ Specs synchronized with implementation
- ✅ Ready for ongoing spec-driven development
```

---
## Configuration in State File

The modernize flag is set during Gear 1:

```json
{
  "path": "brownfield",
  "modernize": true,
  "metadata": {
    "modernizeRequested": "2024-11-17T12:00:00Z",
    "upgradeScope": "all-dependencies",
    "targetCoverage": 85
  }
}
```

---
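The trigger condition ("`path: "brownfield"` AND `modernize: true`") reduces to a small check over that JSON. A minimal sketch, using the field names from the state file above:

```python
import json

# Sketch: modernize runs only for brownfield projects with the flag set,
# mirroring the trigger conditions described earlier in this skill.
def should_modernize(state_json):
    """Return True when the state file opts into the modernize step."""
    state = json.loads(state_json)
    return state.get("path") == "brownfield" and state.get("modernize") is True

run_it = should_modernize('{"path": "brownfield", "modernize": true}')
```

Using `.get()` keeps the check safe for older state files that predate the `modernize` field.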
## When Modernize Runs

**In Cruise Control**:
- Automatically runs after Gear 6 if `modernize: true`

**In Manual Mode**:
- Skill becomes available after Gear 6 completes
- User explicitly invokes `/stackshift.modernize`, or the skill auto-activates

---
## Success Criteria

Modernization is complete when:

- ✅ All dependencies updated to latest stable versions
- ✅ All tests passing
- ✅ Test coverage ≥ 85% on all modules
- ✅ Build successful (no compilation errors)
- ✅ /speckit.analyze shows no drift
- ✅ No high/critical security vulnerabilities
- ✅ Specs updated where behavior changed
- ✅ Upgrade report generated

---
## Benefits of Brownfield Upgrade
|
||||||
|
|
||||||
|
### vs. Standard Brownfield:
|
||||||
|
- ✅ **Modern dependencies** (not stuck on old versions)
|
||||||
|
- ✅ **Security updates** (latest patches)
|
||||||
|
- ✅ **Performance improvements** (newer libraries often faster)
|
||||||
|
- ✅ **New features** (latest library capabilities)
|
||||||
|
- ✅ **Reduced technical debt** (no old dependencies)
|
||||||
|
|
||||||
|
### vs. Greenfield:
|
||||||
|
- ✅ **Faster** (upgrade vs. rebuild)
|
||||||
|
- ✅ **Lower risk** (incremental changes vs. rewrite)
|
||||||
|
- ✅ **Spec-guided** (specs help fix breaking changes)
|
||||||
|
- ✅ **Keeps working code** (only changes dependencies)
|
||||||
|
|
||||||
|
### Use Case:
|
||||||
|
Perfect for teams that want to modernize without full rewrites. Get the benefits of modern tooling while maintaining existing features.

---

## Technical Approach

### Spec-Driven Upgrade Strategy

1. **Specs as Safety Net**:
   - Every feature has acceptance criteria
   - Run tests against specs after each upgrade
   - If tests fail, specs guide the fix

2. **Incremental Upgrades**:
   - Upgrade in phases (minors first, then majors)
   - Run tests after each phase
   - Roll back if there are too many failures

3. **Coverage as Quality Gate**:
   - Must maintain 85%+ coverage throughout the upgrade
   - Add tests for new library behaviors
   - Ensure edge cases are covered

4. **Spec Synchronization**:
   - If a library changes behavior, update the spec
   - If the implementation changes, update the spec
   - /speckit.analyze validates alignment

---

**Result**: A fully modernized application under complete spec-driven control!
323
skills/reverse-engineer/SKILL.md
Normal file
@@ -0,0 +1,323 @@
---
name: reverse-engineer
description: Deep codebase analysis to generate 9 comprehensive documentation files. Adapts based on path choice - Greenfield extracts business logic only (tech-agnostic), Brownfield extracts business logic + technical implementation (tech-prescriptive). This is Step 2 of 6 in the reverse engineering process.
---

# Reverse Engineer (Path-Aware)

**Step 2 of 6** in the Reverse Engineering to Spec-Driven Development process.

**Estimated Time:** 30-45 minutes
**Prerequisites:** Step 1 completed (`analysis-report.md` and path selection)
**Output:** 9 comprehensive documentation files in `docs/reverse-engineering/`

**Path-Dependent Behavior:**
- **Greenfield:** Extract business logic only (framework-agnostic)
- **Brownfield:** Extract business logic + technical implementation details

---
## When to Use This Skill

Use this skill when:
- You've completed Step 1 (Initial Analysis) with path selection
- You're ready to extract comprehensive documentation from the code
- The path has been chosen (greenfield or brownfield)
- You're preparing to create formal specifications

**Trigger Phrases:**
- "Reverse engineer the codebase"
- "Generate comprehensive documentation"
- "Extract business logic" (greenfield)
- "Document the full implementation" (brownfield)

---
## What This Skill Does

This skill performs deep codebase analysis and generates **9 comprehensive documentation files**.

**Content adapts based on your path:**

### Path A: Greenfield (Business Logic Only)
- Focus on WHAT the system does
- Avoid framework/technology specifics
- Extract user stories, business rules, workflows
- Framework-agnostic functional requirements
- Can be implemented in any tech stack

### Path B: Brownfield (Business Logic + Technical)
- Focus on WHAT and HOW
- Document exact frameworks, libraries, versions
- Extract file paths, configurations, schemas
- Prescriptive technical requirements
- Enables `/speckit.analyze` validation

**9 Documentation Files Generated:**

1. **functional-specification.md** - Business logic, requirements, user stories (+ tech details for brownfield)
2. **integration-points.md** - External services, APIs, dependencies, data flows (single source of truth)
3. **configuration-reference.md** - Config options (business-level for greenfield, all details for brownfield)
4. **data-architecture.md** - Data models, API contracts (abstract for greenfield, schemas for brownfield)
5. **operations-guide.md** - Operational needs (requirements for greenfield, current setup for brownfield)
6. **technical-debt-analysis.md** - Issues and improvements
7. **observability-requirements.md** - Monitoring needs (goals for greenfield, current state for brownfield)
8. **visual-design-system.md** - UI/UX patterns (requirements for greenfield, implementation for brownfield)
9. **test-documentation.md** - Testing requirements (targets for greenfield, current state for brownfield)

---
## Configuration Check (FIRST STEP!)

**Load state file to check detection type and route:**

```bash
# Check what kind of application we're analyzing
DETECTION_TYPE=$(cat .stackshift-state.json | jq -r '.detection_type // .path')
echo "Detection: $DETECTION_TYPE"

# Check extraction approach
ROUTE=$(cat .stackshift-state.json | jq -r '.route // .path')
echo "Route: $ROUTE"

# Check spec output location (Greenfield only)
SPEC_OUTPUT=$(cat .stackshift-state.json | jq -r '.config.spec_output_location // "."')
echo "Writing specs to: $SPEC_OUTPUT"

# Create output directories if needed
if [ "$SPEC_OUTPUT" != "." ]; then
  mkdir -p "$SPEC_OUTPUT/docs/reverse-engineering"
  mkdir -p "$SPEC_OUTPUT/.specify/memory/specifications"
fi
```

**State file structure (new):**
```json
{
  "detection_type": "monorepo-service",  // What kind of app
  "route": "greenfield",                 // How to spec it
  "config": {
    "spec_output_location": "~/git/my-new-app",
    "build_location": "~/git/my-new-app",
    "target_stack": "Next.js 15..."
  }
}
```

**File write locations:**

| Route | Spec Output | Where Files Go |
|-------|-------------|----------------|
| **Greenfield** | Custom location | `{spec_output_location}/docs/`, `{spec_output_location}/.specify/` |
| **Greenfield** | Not set (default) | `./docs/reverse-engineering/`, `./.specify/` (current repo) |
| **Brownfield** | Always current repo | `./docs/reverse-engineering/`, `./.specify/` |

**Extraction approach based on detection + route:**

| Detection Type | + Greenfield | + Brownfield |
|----------------|--------------|--------------|
| **Monorepo Service** | Business logic only (tech-agnostic) | Full implementation + shared packages (tech-prescriptive) |
| **Nx App** | Business logic only (framework-agnostic) | Full Nx/Angular implementation details |
| **Generic App** | Business logic only | Full implementation |

**How it works:**
- `detection_type` determines WHAT patterns to look for (shared packages, Nx project config, monorepo structure, etc.)
- `route` determines HOW to document them (tech-agnostic vs tech-prescriptive)

**Examples:**
- Monorepo Service + Greenfield → Extract what the service does (not React/Express specifics)
- Monorepo Service + Brownfield → Extract full Express routes, React components, shared utilities
- Nx App + Greenfield → Extract business logic (not Angular specifics)
- Nx App + Brownfield → Extract full Nx configuration, Angular components, project graph

---

## Process Overview

### Phase 1: Deep Codebase Analysis

**Approach depends on path:**

Use the Task tool with `subagent_type=stackshift:code-analyzer` (or `Explore` as a fallback) to analyze:

#### 1.1 Backend Analysis
- All API endpoints (method, path, auth, params, purpose)
- Data models (schemas, types, interfaces, fields)
- Configuration (env vars, config files, settings)
- External integrations (APIs, services, databases)
- Business logic (services, utilities, algorithms)

See [operations/backend-analysis.md](operations/backend-analysis.md)

#### 1.2 Frontend Analysis
- All pages/routes (path, purpose, auth requirement)
- Components catalog (layout, form, UI components)
- State management (store structure, global state)
- API client (how the frontend calls the backend)
- Styling (design system, themes, component styles)

See [operations/frontend-analysis.md](operations/frontend-analysis.md)

#### 1.3 Infrastructure Analysis
- Deployment (IaC tools, configuration)
- CI/CD (pipelines, workflows)
- Hosting (cloud provider, services)
- Database (type, schema, migrations)
- Storage (object storage, file systems)

See [operations/infrastructure-analysis.md](operations/infrastructure-analysis.md)

#### 1.4 Testing Analysis
- Test files (location, framework, coverage)
- Test types (unit, integration, E2E)
- Coverage estimates (% covered)
- Test data (mocks, fixtures, seeds)

See [operations/testing-analysis.md](operations/testing-analysis.md)

### Phase 2: Generate Documentation

Create the `docs/reverse-engineering/` directory and generate all 9 documentation files.

See [operations/generate-docs.md](operations/generate-docs.md) for templates and guidelines.

---

## Output Files

### 1. functional-specification.md
**Focus:** Business logic, WHAT the system does (not HOW)

**Sections:**
- Executive Summary (purpose, users, value)
- Functional Requirements (FR-001, FR-002, ...)
- User Stories (P0/P1/P2/P3 priorities)
- Non-Functional Requirements (NFR-001, ...)
- Business Rules (validation, authorization, workflows)
- System Boundaries (scope, integrations)
- Success Criteria (measurable outcomes)

**Critical:** Framework-agnostic, testable, measurable

### 2. integration-points.md
**Single source of truth** for external touchpoints:
- External services and APIs
- Third-party dependencies
- Data flows between systems

### 3. configuration-reference.md
**Complete inventory** of all configuration:
- Environment variables
- Config file options
- Feature flags
- Secrets and credentials (how managed)
- Default values

### 4. data-architecture.md
**All data models and API contracts:**
- Data models (with field types, constraints, relationships)
- API endpoints (request/response formats)
- JSON Schemas
- GraphQL schemas (if applicable)
- Database ER diagram (textual)

### 5. operations-guide.md
**How to deploy and maintain:**
- Deployment procedures
- Infrastructure overview
- Monitoring and alerting
- Backup and recovery
- Troubleshooting runbooks

### 6. technical-debt-analysis.md
**Issues and improvements:**
- Code quality issues
- Missing tests
- Security vulnerabilities
- Performance bottlenecks
- Refactoring opportunities

### 7. observability-requirements.md
**Logging, monitoring, alerting:**
- What to log (events, errors, metrics)
- Monitoring requirements (uptime, latency, errors)
- Alerting rules and thresholds
- Debugging capabilities

### 8. visual-design-system.md
**UI/UX patterns:**
- Component library
- Design tokens (colors, typography, spacing)
- Responsive breakpoints
- Accessibility standards
- User flows

### 9. test-documentation.md
**Testing requirements:**
- Test strategy
- Coverage requirements
- Test patterns and conventions
- E2E scenarios
- Performance testing

---

## Success Criteria

After running this skill, you should have:

- ✅ `docs/reverse-engineering/` directory created
- ✅ All 9 documentation files generated
- ✅ Comprehensive coverage of all application aspects
- ✅ Framework-agnostic functional specification
- ✅ Complete data model documentation
- ✅ Ready to proceed to Step 3 (Create Specifications)

---

## Next Step

Once all documentation is generated and reviewed, proceed to:

**Step 3: Create Specifications** - Use the create-specs skill to transform docs into formal specifications.

---

## Important Guidelines

### Framework-Agnostic Documentation

**DO:**
- Describe WHAT, not HOW
- Focus on business logic and requirements
- Use generic terms (e.g., "HTTP API" not "Express routes")

**DON'T:**
- Hard-code framework names in functional specs
- Describe implementation details in requirements
- Mix business logic with technical implementation

### Completeness

Use the Explore agent to ensure you find:
- ALL API endpoints (not just the obvious ones)
- ALL data models (including DTOs, types, interfaces)
- ALL configuration options (check multiple files)
- ALL external integrations

### Quality Standards

Each document must be:
- **Comprehensive** - Nothing important missing
- **Accurate** - Reflects actual code, not assumptions
- **Organized** - Clear sections, easy to navigate
- **Actionable** - Can be used to rebuild the system

---

## Technical Notes

- Use the Task tool with `subagent_type=stackshift:code-analyzer` for path-aware extraction
- Fall back to `subagent_type=Explore` if the StackShift agent is not available
- Parallel analysis: run backend, frontend, and infrastructure analysis concurrently
- Use multiple rounds of exploration for complex codebases
- Cross-reference findings across different parts of the codebase
- The `stackshift:code-analyzer` agent understands greenfield vs brownfield routes automatically

---

**Remember:** This is Step 2 of 6. The documentation you generate here will be transformed into formal specifications in Step 3.
670
skills/spec-coverage-map/SKILL.md
Normal file
@@ -0,0 +1,670 @@
---
name: spec-coverage-map
description: Generate a visual spec-to-code coverage map showing which code files are covered by which specifications. Creates ASCII diagrams, reverse indexes, and coverage statistics. Use after implementation or during cleanup to validate spec coverage.
---

# Spec-to-Code Coverage Map

Generate a comprehensive coverage map showing the relationship between specifications and implementation files.

**When to run:** After Gear 6 (Implementation) or during cleanup/documentation phases

**Output:** `docs/spec-coverage-map.md`

---

## What This Does

Creates a visual map showing:
1. **Spec → Files**: Which code files each spec covers
2. **Files → Specs**: Which spec(s) cover each code file (reverse index)
3. **Coverage Statistics**: Percentages by category (backend, frontend, infrastructure)
4. **Gap Analysis**: Code files not covered by any spec
5. **Shared Files**: Files referenced by multiple specs

---

## Process

### Step 1: Discover All Specifications

```bash
# Find all spec files
find .specify/memory/specifications -name "*.md" -type f 2>/dev/null || \
  find specs -name "spec.md" -type f 2>/dev/null

# Count them
SPEC_COUNT=$(find .specify/memory/specifications -name "*.md" -type f 2>/dev/null | wc -l)
echo "Found $SPEC_COUNT specifications"
```

### Step 2: Extract File References from Each Spec

For each spec, look for file references in these sections:
- "Files" or "File Structure"
- "Implementation Status"
- "Components Implemented"
- "Technical Details"
- Code blocks with file paths

**Pattern matching:**
```
# Common file reference patterns in specs
- src/handlers/foo.js
- api/handlers/vehicle-details.ts
- site/pages/Home.tsx
- infrastructure/terraform/main.tf
- .github/workflows/deploy.yml
```

Read each spec and extract:
- File paths mentioned
- Component names that map to files
- Directory references

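The patterns above can be pulled out mechanically. A minimal sketch using `grep`, assuming paths look like the examples (the regex and the `spec-excerpt.md` sample are illustrative, not exhaustive):

```shell
# Sample spec excerpt, as paths appear in "Files" sections (illustrative)
cat > spec-excerpt.md <<'EOF'
- `api/handlers/vehicle-details.ts` ✅
- **Frontend:** `site/pages/Home.tsx`
EOF

# Extract path-like tokens (segments joined by "/", ending in an extension),
# one per line, de-duplicated
grep -oE '[A-Za-z0-9_.-]+(/[A-Za-z0-9_.-]+)+\.[a-z]+' spec-excerpt.md | sort -u
```

Surrounding backticks, bold markers, and checkmarks are ignored because they fall outside the matched character class.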
### Step 3: Categorize Files

Group files by type:
- **Backend**: api/, src/handlers/, src/services/, lib/
- **Frontend**: site/, pages/, components/, app/
- **Infrastructure**: infrastructure/, terraform/, .github/workflows/
- **Database**: prisma/, migrations/, schema/
- **Scripts**: scripts/, bin/
- **Config**: package.json, tsconfig.json, etc.
- **Tests**: *.test.ts, *.spec.ts, __tests__/

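The grouping above can be sketched as a shell function. The globs come straight from the list; tests are matched first so a path like `api/handlers/foo.test.ts` is not mis-filed as backend:

```shell
# Map a file path to a coverage category (directory conventions from the list above)
categorize() {
  case "$1" in
    *.test.*|*.spec.*|*__tests__*)                    echo tests ;;
    api/*|src/handlers/*|src/services/*|lib/*)        echo backend ;;
    site/*|pages/*|components/*|app/*)                echo frontend ;;
    infrastructure/*|terraform/*|.github/workflows/*) echo infrastructure ;;
    prisma/*|migrations/*|schema/*)                   echo database ;;
    scripts/*|bin/*)                                  echo scripts ;;
    *)                                                echo config ;;
  esac
}

categorize api/handlers/vehicle.ts        # backend
categorize api/handlers/vehicle.test.ts   # tests
```

Anything that matches no directory convention falls through to `config`, which keeps root-level files like `package.json` accounted for.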
### Step 4: Generate ASCII Box Diagrams

For each spec, create a box diagram:

```
┌─────────────────────────────────┐
│ 001-feature-name                │ Status: ✅ COMPLETE
├─────────────────────────────────┤
│ Backend:                        │
│  ├─ api/src/handlers/foo.js     │
│  └─ api/src/services/bar.js     │
│ Frontend:                       │
│  └─ site/src/pages/Foo.tsx      │
│ Infrastructure:                 │
│  └─ .github/workflows/deploy.yml│
└─────────────────────────────────┘
```

**Box Drawing Characters:**
```
┌ ─ ┐  (top)
│      (sides)
├ ─ ┤  (divider)
└ ─ ┘  (bottom)
├ └    (tree branches)
```

### Step 5: Create Reverse Index

Build a table showing which spec(s) cover each file:

```markdown
## Files → Specs Reverse Index

| File | Covered By | Count |
|------|------------|-------|
| api/handlers/vehicle.ts | 001-vehicle-details, 003-pricing | 2 |
| site/pages/Home.tsx | 001-homepage | 1 |
| lib/utils/format.ts | 001-vehicle, 002-search, 004-pricing | 3 |
```

**Highlight:**
- 🟢 **Single spec** (1 spec) - Normal coverage
- 🟡 **Shared** (2-3 specs) - Multiple features use this
- 🔴 **Hot** (4+ specs) - Critical shared utility

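Assuming Step 2 produced one tab-separated `file<TAB>spec` line per reference (the `pairs.tsv` name and sample rows below are illustrative), the per-file counts for this index reduce to a classic sort/uniq pipeline:

```shell
# pairs.tsv: one "<file>\t<spec>" line per reference found in Step 2 (sample data)
printf 'lib/utils/pricing.ts\t001\nlib/utils/pricing.ts\t003\nsite/pages/Home.tsx\t001\n' > pairs.tsv

# Specs-per-file counts, most-shared first
cut -f1 pairs.tsv | sort | uniq -c | sort -rn
```

The count column maps directly onto the 🟢/🟡/🔴 buckets above (1, 2-3, and 4+ specs respectively).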
### Step 6: Calculate Coverage Statistics

```markdown
## Coverage Statistics

| Category | Total Files | Covered | Coverage % |
|----------|-------------|---------|------------|
| Backend | 45 | 42 | 93% |
| Frontend | 38 | 35 | 92% |
| Infrastructure | 12 | 10 | 83% |
| Database | 8 | 8 | 100% |
| Scripts | 6 | 4 | 67% |
| **TOTAL** | **109** | **99** | **91%** |
```

**Coverage Heat Map:**
```
Backend        [████████████████░░] 93%
Frontend       [████████████████░░] 92%
Infrastructure [███████████████░░░] 83%
Database       [████████████████████] 100%
Scripts        [█████████░░░░░░░░░] 67%
```

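A sketch of rendering one heat-map bar, assuming the 20-character format shown above (each character represents 5%; integer division truncates, so 93% fills 18 characters):

```shell
# Render a 20-character heat-map bar for a coverage percentage
bar() {
  filled=$(( $1 / 5 ))   # 20 chars, so each char represents 5%
  i=1
  printf '['
  while [ "$i" -le 20 ]; do
    if [ "$i" -le "$filled" ]; then printf '█'; else printf '░'; fi
    i=$(( i + 1 ))
  done
  printf '] %s%%\n' "$1"
}

bar 93
bar 100
```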
### Step 7: Identify Gaps

```markdown
## 🚨 Coverage Gaps (10 files)

Files not covered by any specification:

**Backend:**
- api/handlers/legacy-foo.js (deprecated?)
- api/utils/debug.ts (utility?)

**Frontend:**
- site/components/DevTools.tsx (dev-only?)

**Scripts:**
- scripts/experimental/test.sh (WIP?)
- scripts/deprecated/old-deploy.sh (remove?)

**Recommendations:**
- Remove deprecated files
- Create specs for utilities if they contain business logic
- Document dev-only tools in a utilities spec
```

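Gap files are the set difference between all implementation files and the files referenced by at least one spec. A sketch with `comm` (the sample lists are illustrative; in a real repo `git ls-files` would produce `all-files.txt`, and the Step 2 extraction would produce `covered-files.txt`):

```shell
# Sample inputs; comm requires both files to be sorted
printf 'api/handlers/vehicle.ts\napi/utils/debug.ts\nsite/pages/Home.tsx\n' | sort > all-files.txt
printf 'api/handlers/vehicle.ts\nsite/pages/Home.tsx\n' | sort > covered-files.txt

# Lines only in all-files.txt = gap files (covered by no spec)
comm -23 all-files.txt covered-files.txt
```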
### Step 8: Highlight Shared Files

```markdown
## 🔥 Shared Files (Referenced by 3+ Specs)

| File | Specs | Count | Risk |
|------|-------|-------|------|
| lib/utils/pricing.ts | 001, 003, 004, 007, 009 | 5 | 🔴 HIGH |
| lib/api/client.ts | 002, 005, 006, 008 | 4 | 🔴 HIGH |
| lib/types/vehicle.ts | 001, 002, 011 | 3 | 🟡 MEDIUM |

**High-risk files** (4+ specs):
- Changes affect multiple features
- Require extra testing
- Should have comprehensive test coverage
- Consider refactoring if too coupled
```

---

## Complete Output Template

```markdown
# Spec-to-Code Coverage Map

Generated: [TIMESTAMP]
Total Specs: [COUNT]
Total Files Covered: [COUNT]
Overall Coverage: [PERCENTAGE]%

---

## Coverage by Spec

[For each spec, ASCII box diagram with files]

---

## Files → Specs Reverse Index

[Table of all files and which specs cover them]

---

## Coverage Statistics

[Stats table and heat map]

---

## Coverage Gaps

[List of files not covered by any spec]

---

## Shared Files

[Files referenced by multiple specs with risk assessment]

---

## Recommendations

- [Action items based on analysis]
- [Gaps to address]
- [Refactoring opportunities]
```

---

## When to Generate

### Automatic Triggers

1. **End of Gear 6 (Implement)** - After all features are implemented
2. **Cleanup Phase** - When finalizing documentation
3. **Manual Request** - User asks for coverage analysis

### Manual Invocation

```bash
# Check the current gear
cat .stackshift-state.json | grep currentGear

# If Gear 6 is complete or in cleanup:
"Generate spec-to-code coverage map"
```

---

## Integration Points

### In Gear 6 (Implement) Skill

After completing all implementations, add:

```markdown
## Final Step: Generate Coverage Map

Creating spec-to-code coverage map...

[Run coverage map generation]

✅ Coverage map saved to docs/spec-coverage-map.md

Summary:
- 109 files covered by 12 specs
- 91% overall coverage
- 10 gap files identified
- 3 high-risk shared files
```

### In Cleanup/Finalization

When the user says "cleanup" or "finalize documentation":

```markdown
Running final cleanup tasks:

1. ✅ Generate spec-coverage-map.md
2. ✅ Update README with coverage stats
3. ✅ Commit all documentation
4. ✅ Create summary report
```

---

## Success Criteria

After generating the coverage map, you should have:

- ✅ `docs/spec-coverage-map.md` file created
- ✅ Visual ASCII diagrams for each spec
- ✅ Reverse index table (files → specs)
- ✅ Coverage statistics and heat map
- ✅ Gap analysis with recommendations
- ✅ Shared files risk assessment
- ✅ Actionable next steps

---
|
||||||
|
|
||||||
|
## Example Output
|
||||||
|
|
||||||
|
```markdown
|
||||||
|
# Spec-to-Code Coverage Map
|
||||||
|
|
||||||
|
Generated: 2025-11-19T17:45:00Z
|
||||||
|
Total Specs: 12
|
||||||
|
Total Files Covered: 99
|
||||||
|
Overall Coverage: 91%
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Coverage by Spec
|
||||||
|
|
||||||
|
┌─────────────────────────────────────────────┐
|
||||||
|
│ 001-vehicle-details-display │ Status: ✅ COMPLETE
|
||||||
|
├─────────────────────────────────────────────┤
|
||||||
|
│ Backend (3 files): │
|
||||||
|
│ ├─ api/handlers/vehicle-details.ts │
|
||||||
|
│ ├─ api/services/vehicle-data.ts │
|
||||||
|
│ └─ lib/validators/vin.ts │
|
||||||
|
│ Frontend (2 files): │
|
||||||
|
│ ├─ site/pages/VehicleDetails.tsx │
|
||||||
|
│ └─ site/components/VehicleCard.tsx │
|
||||||
|
│ Tests (2 files): │
|
||||||
|
│ ├─ api/handlers/vehicle-details.test.ts │
|
||||||
|
│ └─ site/pages/VehicleDetails.test.tsx │
|
||||||
|
└─────────────────────────────────────────────┘
|
||||||
|
|
||||||
|
┌─────────────────────────────────────────────┐
|
||||||
|
│ 002-inventory-search │ Status: ✅ COMPLETE
|
||||||
|
├─────────────────────────────────────────────┤
|
||||||
|
│ Backend (4 files): │
|
||||||
|
│ ├─ api/handlers/search.ts │
|
||||||
|
│ ├─ api/services/elasticsearch.ts │
|
||||||
|
│ ├─ lib/query-builder.ts │
|
||||||
|
│ └─ lib/filters/vehicle-filters.ts │
|
||||||
|
│ Frontend (3 files): │
|
||||||
|
│ ├─ site/pages/Search.tsx │
|
||||||
|
│ ├─ site/components/SearchBar.tsx │
|
||||||
|
│ └─ site/components/FilterPanel.tsx │
|
||||||
|
└─────────────────────────────────────────────┘
|
||||||
|
|
||||||
|
... [10 more specs]
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Files → Specs Reverse Index
|
||||||
|
|
||||||
|
| File | Covered By Specs | Count | Risk |
|
||||||
|
|------|------------------|-------|------|
|
||||||
|
| lib/utils/pricing.ts | 001, 003, 004, 007, 009 | 5 | 🔴 HIGH |
|
||||||
|
| lib/api/client.ts | 002, 005, 006, 008 | 4 | 🔴 HIGH |
|
||||||
|
| api/handlers/vehicle-details.ts | 001 | 1 | 🟢 LOW |
|
||||||
|
| site/pages/Home.tsx | 003 | 1 | 🟢 LOW |
|
||||||
|
| lib/types/vehicle.ts | 001, 002, 011 | 3 | 🟡 MEDIUM |
|
||||||
|
|
||||||
|
... [99 files total]
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Coverage Statistics
|
||||||
|
|
||||||
|
| Category | Total Files | Covered | Uncovered | Coverage % |
|
||||||
|
|----------|-------------|---------|-----------|------------|
|
||||||
|
| Backend | 45 | 42 | 3 | 93% |
|
||||||
|
| Frontend | 38 | 35 | 3 | 92% |
|
||||||
|
| Infrastructure | 12 | 10 | 2 | 83% |
|
||||||
|
| Database | 8 | 8 | 0 | 100% |
|
||||||
|
| Scripts | 6 | 4 | 2 | 67% |
|
||||||
|
| **TOTAL** | **109** | **99** | **10** | **91%** |
|
||||||
|
|
||||||
|
### Coverage Heat Map
|
||||||
|
|
||||||
|
```
|
||||||
|
Backend [████████████████░░] 93%
|
||||||
|
Frontend [████████████████░░] 92%
|
||||||
|
Infrastructure [███████████████░░░] 83%
|
||||||
|
Database [████████████████████] 100%
|
||||||
|
Scripts [█████████░░░░░░░░░] 67%
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Coverage Gaps (10 files)
|
||||||
|
|
||||||
|
Files not covered by any specification:
|
||||||
|
|
||||||
|
**Backend (3 files):**
|
||||||
|
- api/handlers/legacy-foo.js - Deprecated?
|
||||||
|
- api/utils/debug.ts - Dev utility?
|
||||||
|
- api/middleware/cors.ts - Shared infrastructure?
|
||||||
|
|
||||||
|
**Frontend (3 files):**
|
||||||
|
- site/components/DevTools.tsx - Dev-only component
|
||||||
|
- site/pages/404.tsx - Error page (needs spec?)
|
||||||
|
- site/utils/logger.ts - Utility (shared)
|
||||||
|
|
||||||
|
**Infrastructure (2 files):**
|
||||||
|
- .github/workflows/experimental.yml - WIP?
|
||||||
|
- infrastructure/terraform/dev-only.tf - Dev env?
|
||||||
|
|
||||||
|
**Scripts (2 files):**
|
||||||
|
- scripts/experimental/test.sh - WIP
|
||||||
|
- scripts/deprecated/old-deploy.sh - Remove?
|
||||||
|
|
||||||
|
### Recommendations:
|
||||||
|
|
||||||
|
1. **Remove deprecated files** (3 files identified)
|
||||||
|
2. **Create utility spec** for shared utils (cors, logger)
|
||||||
|
3. **Document dev tools** in separate spec
|
||||||
|
4. **Review experimental** workflows/scripts
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Shared Files (Referenced by 3+ Specs)
|
||||||
|
|
||||||
|
| File | Referenced By | Count | Risk Level |
|
||||||
|
|------|---------------|-------|------------|
|
||||||
|
| lib/utils/pricing.ts | 001, 003, 004, 007, 009 | 5 | 🔴 HIGH |
|
||||||
|
| lib/api/client.ts | 002, 005, 006, 008 | 4 | 🔴 HIGH |
|
||||||
|
| lib/types/vehicle.ts | 001, 002, 011 | 3 | 🟡 MEDIUM |
|
||||||
|
| lib/validators/input.ts | 001, 002, 005 | 3 | 🟡 MEDIUM |
|
||||||
|
|
||||||
|
### Risk Assessment:

**🔴 High-risk files** (4+ specs):
- Changes affect multiple features
- Require comprehensive testing
- Should have 95%+ test coverage
- Consider splitting if too coupled

**🟡 Medium-risk files** (2-3 specs):
- Changes affect only a few features
- Standard testing required
- Monitor for increased coupling

**🟢 Low-risk files** (1 spec):
- Feature-specific
- Standard development flow

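The thresholds above can be sketched as a small classifier. This is a minimal illustration; `riskLevel` is a hypothetical helper name, not part of any existing StackShift code:

```javascript
// Map the number of referencing specs to a risk level,
// using the 4+/2-3/1 thresholds described above.
function riskLevel(specCount) {
  if (specCount >= 4) return '🔴 HIGH';
  if (specCount >= 2) return '🟡 MEDIUM';
  return '🟢 LOW';
}

console.log(riskLevel(5)); // lib/utils/pricing.ts → 🔴 HIGH
console.log(riskLevel(3)); // lib/types/vehicle.ts → 🟡 MEDIUM
```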
---
## Summary

- ✅ **91% coverage** - Excellent
- ⚠️ **10 gap files** - Need review
- 🔴 **2 high-risk shared files** - Monitor closely
- 📊 **12 specs** covering **99 files**

### Action Items:

1. Review 10 gap files and either:
   - Create specs for them
   - Remove them if deprecated
   - Document them as infrastructure/utilities
2. Add extra test coverage for high-risk shared files
3. Consider refactoring pricing.ts (5 specs depend on it)

---

**Next Steps:**

Run `/speckit.clarify` to resolve any [NEEDS CLARIFICATION] markers in specs that were identified during coverage analysis.
```

---

## Implementation Details

### File Path Extraction Patterns

Look for these patterns in spec markdown:

````markdown
# In "Files" or "Implementation Status" sections:
- `api/handlers/foo.ts` ✅
- **Backend:** `src/services/bar.js`
- File: `site/pages/Home.tsx`

# In code blocks:
```typescript
// File: lib/utils/pricing.ts
```

# In lists:
## Backend Components
- Vehicle handler: `api/handlers/vehicle.ts`
- Pricing service: `api/services/pricing.ts`
````

**Extraction strategy:**

1. Parse markdown sections titled "Files", "Implementation Status", "Components"
2. Extract backtick-wrapped paths: `path/to/file.ext`
3. Extract bold paths: **File:** path/to/file.ext
4. Look for file extensions: .ts, .tsx, .js, .jsx, .py, .go, .tf, .yml, etc.
5. Validate paths actually exist in codebase

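Step 2 of the strategy might look like this sketch. It only handles backtick-wrapped paths; the extension list mirrors step 4 and should be adjusted for your stack:

```javascript
// Extract backtick-wrapped file paths from spec markdown,
// deduplicating while preserving first-seen order.
function extractPaths(markdown) {
  const pattern = /`([^`\n]+\.(?:tsx?|jsx?|py|go|tf|ya?ml|sh))`/g;
  const paths = new Set();
  let match;
  while ((match = pattern.exec(markdown)) !== null) {
    paths.add(match[1]);
  }
  return [...paths];
}

const spec = '- `api/handlers/foo.ts` ✅\n- Pricing: `api/services/pricing.ts`';
console.log(extractPaths(spec)); // ['api/handlers/foo.ts', 'api/services/pricing.ts']
```

Bold-labeled paths (step 3) and bare paths inside code blocks would need additional patterns.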
### ASCII Box Generation

```bash
# Box-drawing characters
TOP="┌─┐"
SIDE="│"
DIVIDER="├─┤"
BOTTOM="└─┘"
BRANCH="├─"
LAST_BRANCH="└─"

# Example template
echo "┌─────────────────────────────────┐"
echo "│ $SPEC_NAME │ Status: $STATUS"
echo "├─────────────────────────────────┤"
echo "│ Backend: │"
for file in $BACKEND_FILES; do
  echo "│ ├─ $file"
done
echo "│ Frontend: │"
for file in $FRONTEND_FILES; do
  echo "│ ├─ $file"
done
echo "└─────────────────────────────────┘"
```

### Coverage Calculation

```javascript
// allFiles: every source file discovered in the codebase
// filesToSpecs: reverse index mapping covered files to their specs
const totalFiles = Object.keys(allFiles).length;
const coveredFiles = Object.keys(filesToSpecs).length;
const coveragePercent = Math.round((coveredFiles / totalFiles) * 100);

// By category
const backendCoverage = (coveredBackend / totalBackend) * 100;
const frontendCoverage = (coveredFrontend / totalFrontend) * 100;
```

### Heat Map Visualization

```javascript
// Generate a 20-block heat map bar
function heatMapBar(percentage) {
  const filled = Math.round(percentage / 5); // 20 blocks total
  const empty = 20 - filled;
  return `[${'█'.repeat(filled)}${'░'.repeat(empty)}] ${percentage}%`;
}

// Example output:
// [██████████████████░░] 92%
```

---

## Success Criteria

- ✅ Coverage map generated at `docs/spec-coverage-map.md`
- ✅ ASCII box diagram for every spec
- ✅ Reverse index table (files → specs)
- ✅ Coverage statistics by category
- ✅ Heat map visualization
- ✅ Gap analysis with recommendations
- ✅ Shared files risk assessment
- ✅ Overall coverage percentage > 85%

---

## Error Handling

**If no specs found:**

```
⚠️ No specifications found in .specify/memory/specifications/ or specs/
Cannot generate coverage map without specs.

Run Gear 3 first: /stackshift.create-specs
```

**If specs have no file references:**

```
⚠️ Specs don't contain file references.
Cannot generate coverage map.

This usually means:
1. Specs were created but implementation hasn't started
2. Specs need "Files" or "Implementation Status" sections
3. Specs use an old format (update them)
```

**If coverage is very low (< 50%):**

```
⚠️ Coverage is only 35%

This indicates:
- Many files are not documented in specs
- Specs may be incomplete
- Consider running /speckit.analyze to validate alignment
```

---

## Usage Examples

### After Gear 6 Completion

```
User: [Gear 6 completes all implementations]

StackShift: "✅ All features implemented!

Let me generate a coverage map to visualize spec-to-code relationships..."

[Generates coverage map]

StackShift: "📊 Coverage Map Complete!

✅ 91% coverage (99/109 files)
🚨 10 gap files identified
🔴 2 high-risk shared files

Saved to: docs/spec-coverage-map.md

Would you like me to:
1. Review gap files
2. Create specs for missing files
3. Commit all changes"
```

### During Cleanup

```
User: "Clean up the documentation and finalize"

StackShift: "Running cleanup tasks:

1. Generating spec-coverage-map...
2. Updating README with coverage stats...
3. Removing temporary files...
4. Creating final summary...

✅ Cleanup complete!"
```

---

## Technical Notes

- Parse specs in sorted order (001, 002, etc.) for consistent output
- Handle both `.specify/memory/specifications/` and `specs/*/spec.md` formats
- Use relative paths from the project root
- Validate file existence before including a file in the map
- Mark non-existent files as ❌ in the coverage map
- Use Unicode box-drawing characters (not ASCII-art approximations)
- Generate clean markdown tables with proper alignment
- Include a timestamp for tracking coverage over time

---

**This coverage map provides crucial visibility into spec-code alignment and helps identify documentation gaps!**