Initial commit

Zhongwei Li
2025-11-30 08:44:27 +08:00
commit d97a70be33
36 changed files with 17033 additions and 0 deletions


@@ -0,0 +1,792 @@
# /sdd:command-optimise
## Meta
- Version: 2.0
- Category: transformation
- Complexity: moderate
- Purpose: Convert existing slash commands to LLM-optimized format with intelligent auto-detection
## Definition
**Purpose**: Transform slash command documentation into an LLM-optimised format by analysing the command's content to determine the optimal formatting, then replacing the original command file with the optimised version.
**Syntax**: `/sdd:command-optimise <command_ref> [format_style] [strictness] [--dry-run]`
## Parameters
| Parameter | Type | Required | Default | Description | Validation |
|-----------|------|----------|---------|-------------|------------|
| command_ref | string | Yes | - | Command name (e.g., "/deploy") OR full command documentation text | Non-empty |
| format_style | string | No | "auto" | Output format style | One of: auto, structured, xml, json, imperative, contract |
| strictness | string | No | "auto" | Level of detail and validation rules | One of: auto, minimal, standard, comprehensive |
| --dry-run | flag | No | false | Preview changes without writing to file | Boolean flag |
## Behavior
```
ON INVOCATION:
1. DETERMINE input type and retrieve command documentation:
IF command_ref starts with "/" AND has no spaces/newlines:
// This is a command name reference
SEARCH for command documentation:
- Look in project's /sdd:commands directory
- Check registered command definitions
- Search documentation files
- Query command registry
IF command found:
SET command_text = retrieved documentation
ELSE:
RETURN "Error: Command '{command_ref}' not found. Please provide the full command text."
ELSE:
// This is the full command documentation
SET command_text = command_ref
2. PARSE command documentation:
- Extract command name (look for /sdd:command patterns)
- Identify any existing parameters/arguments
- Find usage examples if present
- Detect implicit behavior from description
- Extract action verbs and keywords from documentation
3. ANALYZE command content to determine optimal configuration:
EXAMINE command documentation for indicators:
// Check for STATE MODIFICATION indicators
STATE_MODIFYING_INDICATORS = [
"saves", "writes", "updates", "modifies", "changes", "deletes", "removes",
"creates", "inserts", "deploys", "publishes", "commits", "persists",
"alters", "mutates", "transforms production", "affects live"
]
// Check for SECURITY/CRITICAL indicators
SECURITY_INDICATORS = [
"authenticate", "authorize", "encrypt", "decrypt", "password", "token",
"certificate", "permission", "access control", "security", "vulnerability",
"sensitive", "credential", "private key", "secret"
]
// Check for ANALYSIS/INSPECTION indicators
ANALYSIS_INDICATORS = [
"analyzes", "scans", "inspects", "examines", "profiles", "measures",
"benchmarks", "evaluates", "assesses", "diagnoses", "investigates",
"reports on", "collects metrics", "gathers data"
]
// Check for VALIDATION/TESTING indicators
VALIDATION_INDICATORS = [
"tests", "validates", "verifies", "checks", "asserts", "ensures",
"confirms", "proves", "quality assurance", "QA", "unit test",
"integration test", "e2e", "smoke test"
]
// Check for CONFIGURATION indicators
CONFIG_INDICATORS = [
"configures", "sets up", "initializes", "options", "settings",
"preferences", "environment", "parameters", "flags", "toggles"
]
// Check for SEARCH/QUERY indicators
SEARCH_INDICATORS = [
"searches", "finds", "queries", "lists", "fetches", "retrieves",
"gets", "selects", "filters", "looks up", "discovers"
]
// Check for DOCUMENTATION indicators
DOC_INDICATORS = [
"documents", "generates docs", "creates documentation", "API spec",
"readme", "comments", "annotates", "describes", "explains"
]
// Analyze parameter complexity
PARAM_COMPLEXITY = COUNT(parameters) +
COUNT(nested_params) * 2 +
COUNT(optional_params) * 0.5
// Analyze behavior complexity
BEHAVIOR_COMPLEXITY = COUNT(steps) +
COUNT(conditionals) * 2 +
COUNT(error_cases)
// DECISION TREE for format selection:
IF (contains STATE_MODIFYING_INDICATORS && contains("production|live|database")):
format = "contract" // Need guarantees
strictness = "comprehensive"
reason = "Command modifies production state - requires strict pre/post conditions"
ELSE IF (contains SECURITY_INDICATORS):
format = "contract" // Security needs guarantees
strictness = "comprehensive"
reason = "Security-sensitive command - requires comprehensive validation"
ELSE IF (contains VALIDATION_INDICATORS && BEHAVIOR_COMPLEXITY > 5):
format = "contract" // Complex testing needs clear contracts
strictness = "comprehensive"
reason = "Complex validation logic - benefits from GIVEN/WHEN/THEN structure"
ELSE IF (contains ANALYSIS_INDICATORS && mentions("step|phase|process")):
format = "imperative" // Multi-step analysis
strictness = "comprehensive"
reason = "Multi-step analysis process - suits INSTRUCTION/PROCESS format"
ELSE IF (PARAM_COMPLEXITY > 8 || has_nested_objects):
format = "json" // Complex parameters
strictness = "standard"
reason = "Complex parameter structure - JSON schema provides clarity"
ELSE IF (contains CONFIG_INDICATORS && has_multiple_options):
format = "json" // Configuration with options
strictness = "standard"
reason = "Configuration command with multiple options - JSON format ideal"
ELSE IF (contains DOC_INDICATORS):
format = "xml" // Rich documentation
strictness = "comprehensive"
reason = "Documentation generation - XML provides rich metadata structure"
ELSE IF (contains SEARCH_INDICATORS && PARAM_COMPLEXITY < 3):
format = "json" // Simple search
strictness = "minimal"
reason = "Simple search/query command - minimal JSON sufficient"
ELSE IF (BEHAVIOR_COMPLEXITY < 3 && PARAM_COMPLEXITY < 3):
format = "structured" // Simple command
strictness = "minimal"
reason = "Simple utility command - basic structure sufficient"
ELSE IF (contains("build|compile|make|bundle")):
format = "imperative" // Build process
strictness = "standard"
reason = "Build process - benefits from step-by-step PROCESS format"
ELSE:
format = "structured" // Default fallback
strictness = "standard"
reason = "Standard command - balanced structure and detail"
4. LOG analysis decision:
```
Command Analysis Results:
━━━━━━━━━━━━━━━━━━━━━━━━
Detected Characteristics:
- State Modification: [Yes/No] {indicators found}
- Security Concerns: [Yes/No] {indicators found}
- Parameter Complexity: [Low/Medium/High] (score: X)
- Behavior Complexity: [Low/Medium/High] (score: Y)
- Primary Function: [category]
Selected Configuration:
- Format: [format_style]
- Strictness: [strictness]
- Reasoning: [detailed explanation]
```
5. TRANSFORM to selected format_style:
IF format_style="structured":
- Use markdown headers with consistent hierarchy
- Add parameter table with types
- Include behavior steps
- Add examples section
IF format_style="xml":
- Wrap in XML-style tags
- Separate purpose, syntax, parameters, behavior
- Include constraints section
IF format_style="json":
- Convert to JSON schema format
- Include type definitions
- Add execution_steps array
IF format_style="imperative":
- Use INSTRUCTION/PROCESS format
- Add INPUTS/OUTPUTS sections
- Include RULES with MUST/SHOULD/NEVER
IF format_style="contract":
- Use GIVEN/WHEN/THEN format
- Add PRECONDITIONS/POSTCONDITIONS
- Include INVARIANTS
6. ENHANCE based on strictness:
IF strictness="minimal":
- Add only essential missing elements
- Basic type annotations
- Simple examples
IF strictness="standard":
- Complete parameter documentation
- Validation rules
- Error handling section
- Multiple examples
IF strictness="comprehensive":
- Detailed type specifications
- Edge case handling
- Performance considerations
- Version information
- Related commands
- Security considerations (if applicable)
- Rollback procedures (if state-modifying)
7. VALIDATE converted documentation:
- Ensure all parameters are documented
- Verify examples match syntax
- Check for ambiguous instructions
8. WRITE optimised command to file:
IF --dry-run flag is set:
DISPLAY preview of changes:
```
DRY RUN - No files will be modified
────────────────────────────────────
Original file: /commands/deploy.md
Format: contract
Strictness: comprehensive
Preview of changes:
[Show diff or preview]
To apply changes, run without --dry-run
```
ELSE:
// Create backup of original
IF command was loaded from file:
COPY original to "{filename}.backup-{timestamp}"
LOG "Backup created: {backup_path}"
// Determine output path
IF command_ref was a name reference:
SET output_path = original file location
ELSE:
// For text input, create new file
SET output_path = "/commands/{command_name}.md"
// Write optimised version
WRITE optimised documentation to output_path
// Log the update
GENERATE update report:
```
✅ Command Successfully Optimised
═══════════════════════════════════
Command: {command_name}
Original: {original_path}
Backup: {backup_path}
Updated: {output_path}
Format Applied: {format_style}
Strictness: {strictness_level}
Timestamp: {iso_timestamp}
Changes Applied:
- Added type annotations to {n} parameters
- Generated {n} usage examples
- Added {validation_rules} validation rules
- Created {sections} new documentation sections
File has been updated in place.
To revert: mv {backup_path} {output_path}
```
9. RETURN confirmation with analysis report and file paths
```
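The indicator lists in the behavior above boil down to a keyword scan over the command text. A minimal sketch of that scan, assuming the lists are kept as plain Python sequences and that case-insensitive substring matching is good enough (both are assumptions, not specified above):

```python
# Hypothetical, abbreviated versions of the indicator lists defined above.
STATE_MODIFYING_INDICATORS = ["saves", "writes", "updates", "deletes", "deploys"]
SECURITY_INDICATORS = ["authenticate", "encrypt", "password", "token", "credential"]
ANALYSIS_INDICATORS = ["analyzes", "scans", "inspects", "benchmarks", "reports on"]

INDICATOR_SETS = {
    "state_modifying": STATE_MODIFYING_INDICATORS,
    "security": SECURITY_INDICATORS,
    "analysis": ANALYSIS_INDICATORS,
}

def detect_indicators(command_text: str) -> dict[str, list[str]]:
    """Return, per category, the indicator phrases found in the command text."""
    text = command_text.lower()
    found = {}
    for category, phrases in INDICATOR_SETS.items():
        hits = [p for p in phrases if p in text]
        if hits:
            found[category] = hits
    return found
```

The remaining indicator sets (validation, configuration, search, documentation) would be added the same way.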
## Examples
### Example 1: Replace Command File
```bash
INPUT:
/sdd:command-optimise /deploy
PROCESS:
→ Retrieved documentation from /commands/deploy.md
→ Created backup: /commands/deploy.md.backup-20250127-143022
→ Analysed and optimised command
→ Wrote optimised version to /commands/deploy.md
OUTPUT:
✅ Command Successfully Optimised
═══════════════════════════════════
Command: /deploy
Original: /commands/deploy.md
Backup: /commands/deploy.md.backup-20250127-143022
Updated: /commands/deploy.md
Format Applied: contract
Strictness: comprehensive
Timestamp: 2025-01-27T14:30:22Z
Changes Applied:
- Added type annotations to 5 parameters
- Generated 3 usage examples
- Added 8 validation rules
- Created 4 new documentation sections
File has been updated in place.
To revert: mv /commands/deploy.md.backup-20250127-143022 /commands/deploy.md
```
### Example 2: Dry Run Preview
```bash
INPUT:
/sdd:command-optimise /api-call json comprehensive --dry-run
PROCESS:
→ Retrieved documentation from /commands/api-call.md
→ Analysed and optimised command
→ DRY RUN MODE - No files modified
OUTPUT:
DRY RUN - No files will be modified
────────────────────────────────────
Original file: /commands/api-call.md
Format: json (manually specified)
Strictness: comprehensive (manually specified)
Preview of changes:
- FROM: Simple text documentation
- TO: Full JSON schema with type definitions
- Added: Error handling section
- Added: 5 usage examples
- Added: Parameter validation rules
[Shows preview of optimised content]
To apply changes, run without --dry-run:
/sdd:command-optimise /api-call json comprehensive
```
### Example 3: Create New File from Text
```bash
INPUT:
/sdd:command-optimise "
/process-data - handles user data
Takes input file, transforms it, and saves the results to the database.
Can update existing records or create new ones.
"
PROCESS:
→ Detected full command documentation (contains spaces/newlines)
→ Analyzing provided text directly
ANALYSIS:
✓ Found state modification indicators: "saves", "database", "update"
[Continues with normal analysis...]
```
### Example 4: Command Not Found
```bash
INPUT:
/sdd:command-optimise /unknown-command
PROCESS:
→ Detected command name reference: "/unknown-command"
→ Searching command registry...
→ Command not found
OUTPUT:
Error: Command '/unknown-command' not found. Please provide the full command text or ensure the command name is correct.
```
### Example 5: Auto-Detects State Modification in Retrieved Command
```bash
INPUT:
/sdd:command-optimise /migrate-db
PROCESS:
→ Retrieved from: /commands/migrate-db.md
→ Created backup: /commands/migrate-db.md.backup-20250127-143045
ANALYSIS:
✓ Found state modification indicators: "migrations", "updates", "schema"
✓ Found database indicators: "database", "schema"
✓ Detected: Database migration command
✓ Applied: contract format + comprehensive strictness
✓ Reasoning: Database changes need strict pre/post conditions
OUTPUT:
✅ Command Successfully Optimised
═══════════════════════════════════
Command: /migrate-db
Original: /commands/migrate-db.md
Backup: /commands/migrate-db.md.backup-20250127-143045
Updated: /commands/migrate-db.md
Format Applied: contract
Strictness: comprehensive
File has been updated in place.
For reference, the optimised output in contract format (generated for the /process-data command from Example 3):
CONTRACT: Data Processing Command
GIVEN: Input file with user data to process
WHEN: The /process-data command is invoked
THEN: Data is transformed and persisted to database
PRECONDITIONS:
✓ Input file exists and is readable
✓ File format is valid (CSV/JSON/XML)
✓ Database connection is available
✓ User has write permissions
✓ Sufficient storage space available
ACTIONS:
→ Validate input file format
→ Parse and load data into memory
→ Apply transformation rules
→ Check for existing records
→ IF existing: Update with new values
→ ELSE: Create new records
→ Commit transaction to database
→ Generate operation report
POSTCONDITIONS:
✓ All valid data is saved to database
✓ Invalid records are logged
✓ Transaction is atomic (all or nothing)
✓ Audit trail is updated
✓ Report contains success/failure counts
INVARIANTS:
• Data integrity is maintained
• No partial updates on failure
• Original file is never modified
• Database constraints are respected
```
### Example 6: Auto-Detects Complex Parameters
```bash
INPUT:
/sdd:command-optimise "
/api-call - makes HTTP requests
Parameters:
- url: target endpoint
- method: GET, POST, PUT, DELETE
- headers: object with header key-values
- body: nested JSON payload
- auth: {type: 'bearer'|'basic', credentials: {...}}
- retry: {attempts: number, delay: number}
"
ANALYSIS:
✓ Parameter complexity score: 12 (6 params, 3 nested objects)
✓ Detected: Complex parameter structure
✓ Applied: json format + standard strictness
✓ Reasoning: Complex nested parameters - JSON schema provides clarity
OUTPUT:
{
"command": "/api-call",
"description": "Execute HTTP requests with configurable options",
"parameters": {
"url": {
"type": "string",
"required": true,
"description": "Target endpoint URL",
"pattern": "^https?://.+"
},
"method": {
"type": "string",
"required": false,
"default": "GET",
"enum": ["GET", "POST", "PUT", "DELETE", "PATCH"]
},
"headers": {
"type": "object",
"required": false,
"description": "HTTP headers as key-value pairs",
"additionalProperties": {"type": "string"}
},
"body": {
"type": "object",
"required": false,
"description": "Request body for POST/PUT requests"
},
"auth": {
"type": "object",
"required": false,
"properties": {
"type": {"enum": ["bearer", "basic"], "required": true},
"credentials": {
"type": "object",
"properties": {
"token": {"type": "string"},
"username": {"type": "string"},
"password": {"type": "string"}
}
}
}
},
"retry": {
"type": "object",
"required": false,
"properties": {
"attempts": {"type": "integer", "minimum": 1, "maximum": 5},
"delay": {"type": "integer", "minimum": 100, "maximum": 5000}
}
}
},
"execution_steps": [
"Validate URL format",
"Configure authentication if provided",
"Set request headers",
"Prepare request body if applicable",
"Execute HTTP request with retry logic",
"Handle response or errors",
"Format and return results"
],
"error_handling": {
"invalid_url": "Return error: 'Invalid URL format'",
"network_error": "Retry according to retry config, then fail",
"auth_failure": "Return 401 with auth error details",
"timeout": "Retry with exponential backoff"
}
}
```
### Example 7: Auto-Detects Multi-Step Process
```bash
INPUT:
/sdd:command-optimise "
/analyze-codebase
Performs code analysis in three phases:
1. First scans all files for syntax errors
2. Then checks for code smells and anti-patterns
3. Finally generates a detailed report with metrics
Each phase must complete before the next begins.
"
ANALYSIS:
✓ Found analysis indicators: "analyzes", "scans", "checks", "report", "metrics"
✓ Detected multi-step process: "phases", "First", "Then", "Finally"
✓ Behavior complexity: High (3 phases + sequencing)
✓ Applied: imperative format + comprehensive strictness
✓ Reasoning: Multi-step analysis process - suits INSTRUCTION/PROCESS format
OUTPUT:
/analyze-codebase
INSTRUCTION: Perform comprehensive codebase analysis in sequential phases
INPUTS:
- path: Root directory to analyze (required)
- config: Analysis configuration file (optional, default: .analysis.yml)
- output: Report output format [json|html|markdown] (optional, default: markdown)
- exclude: Patterns to exclude from analysis (optional)
PROCESS:
Phase 1: Syntax Analysis
1. SCAN all source files in specified path
2. IDENTIFY file types by extension
3. FOR EACH file:
- PARSE using language-specific parser
- COLLECT syntax errors with line numbers
- TRACK warning-level issues
4. IF errors found THEN log to phase1_errors.log
5. PROCEED only if no blocking errors
Phase 2: Code Quality Analysis
1. LOAD code smell detection rules
2. FOR EACH validated file:
- CHECK for duplicated code blocks
- IDENTIFY complex methods (cyclomatic complexity > 10)
- DETECT anti-patterns specific to language
- MEASURE code coverage if tests present
3. AGGREGATE findings by severity
4. CALCULATE quality score per file
Phase 3: Report Generation
1. COMPILE all findings from Phase 1 and 2
2. GENERATE metrics:
- Total lines of code
- Technical debt estimation
- Complexity distribution
- Issue density by module
3. CREATE visualizations if HTML output
4. FORMAT report according to output parameter
5. WRITE report to specified location
OUTPUT FORMAT:
```
Code Analysis Report
Generated: [timestamp]
Phase 1: Syntax Results
- Files Scanned: X
- Syntax Errors: Y
- Warnings: Z
Phase 2: Quality Findings
- Critical Issues: A
- Major Issues: B
- Minor Issues: C
- Quality Score: X/100
Phase 3: Metrics
[Detailed metrics table]
```
RULES:
- MUST complete each phase before proceeding
- NEVER analyze binary files
- SHOULD respect .gitignore patterns by default
- MUST handle large codebases incrementally
- ALWAYS provide progress indicators
```
### Example 8: Auto-Detects Security Concerns
```bash
INPUT:
/sdd:command-optimise "
/rotate-keys
Updates API keys and certificates. Handles sensitive credential rotation.
Requires admin authentication. Modifies production secrets.
"
ANALYSIS:
✓ Found security indicators: "keys", "certificates", "credential", "authentication", "secrets"
✓ Found state modification: "Updates", "Modifies", "production"
✓ Detected: Security-critical state modification
✓ Applied: contract format + comprehensive strictness
✓ Reasoning: Security-sensitive command modifying production - requires maximum safety
OUTPUT:
CONTRACT: Secure Key Rotation Command
GIVEN: Valid admin credentials and keys to rotate
WHEN: The /rotate-keys command is invoked with proper authorization
THEN: Keys are rotated securely with zero downtime
PRECONDITIONS:
✓ User has admin-level authentication
✓ Multi-factor authentication completed
✓ Audit logging is enabled
✓ Backup of current keys exists
✓ No active operations using current keys
✓ Key rotation window is open
✓ All dependent services notified
ACTIONS:
→ Verify admin authorization
→ Create encrypted backup of current keys
→ Generate new keys with proper entropy
→ Update keys in secure vault
→ Propagate to dependent services
→ Verify service connectivity
→ Mark old keys for deprecation
→ Update audit log
POSTCONDITIONS:
✓ New keys are active in all systems
✓ Old keys remain valid for grace period
✓ All services operational with new keys
✓ Audit trail complete with timestamp
✓ Backup is encrypted and stored
✓ Rollback plan is documented
INVARIANTS:
• Zero downtime during rotation
• Keys never exposed in logs
• Audit trail is immutable
• Encryption at rest and in transit
SECURITY CONSTRAINTS:
- MUST use hardware security module if available
- NEVER log key values
- ALWAYS use secure random generation
- MUST notify security team on completion
- REQUIRES two-person authorization for production
```
## Auto-Detection Logic
The command analyzes the CONTENT of your existing command documentation to determine optimal format:
### 🔍 What It Looks For:
1. **State Modification** → `contract + comprehensive`
- Words like: saves, writes, updates, modifies, deletes, deploys
- Mentions: database, production, live system
2. **Security/Sensitive Operations** → `contract + comprehensive`
- Words like: authenticate, encrypt, password, token, credential
- Mentions: security, permission, secret, private key
3. **Multi-Step Processes** → `imperative + comprehensive`
- Structure: "Phase 1... Phase 2..." or "First... Then... Finally..."
- Words like: analyzes, scans, processes, evaluates + step indicators
4. **Complex Parameters** → `json + standard`
- Multiple nested objects
- Arrays of options
- 5+ parameters with complex types
5. **Simple Queries** → `json + minimal`
- Words like: search, find, get, list, query
- Few parameters (<3)
- No state modification
6. **Simple Utilities** → `structured + minimal`
- Basic operations
- Minimal parameters
- No complex behavior
### 📊 Complexity Scoring:
- **Parameter Complexity** = base params + (nested × 2) + (optional × 0.5)
- **Behavior Complexity** = steps + (conditionals × 2) + error cases
- High complexity → More comprehensive documentation needed
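
A sketch of the two scores and the first branches of the decision tree, using the formulas above; the raw counts and indicator results are assumed to come from the parsing step, and `mentions_production` is a hypothetical name for the production/live/database check:

```python
def parameter_complexity(params: int, nested: int, optional: int) -> float:
    # base params + (nested × 2) + (optional × 0.5)
    return params + nested * 2 + optional * 0.5

def behavior_complexity(steps: int, conditionals: int, error_cases: int) -> int:
    # steps + (conditionals × 2) + error cases
    return steps + conditionals * 2 + error_cases

def choose_format(indicators: set[str], mentions_production: bool,
                  param_score: float, behavior_score: int) -> tuple[str, str]:
    """First few branches of the decision tree; falls through to the default."""
    if "state_modifying" in indicators and mentions_production:
        return "contract", "comprehensive"
    if "security" in indicators:
        return "contract", "comprehensive"
    if param_score > 8:
        return "json", "standard"
    if behavior_score < 3 and param_score < 3:
        return "structured", "minimal"
    return "structured", "standard"
```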
## Constraints
- ⛔ NEVER discard existing information from the original command
- ⚠️ ALWAYS preserve original command name exactly as provided
- ✅ MUST add type information for all parameters
- ✅ MUST include at least one usage example
- 📝 SHOULD infer missing details from context when possible
- 🔍 SHOULD flag ambiguities for user review
## Error Handling
- If command_text is empty: Return "Error: No command text provided"
- If no command name detected: Return "Error: Could not identify command name (expected /sdd:command format)"
- If format_style invalid: Return "Error: Unknown format style. Use: auto, structured, xml, json, imperative, or contract"
- If parsing fails: Return partial optimization with warnings about unparseable sections
## Usage Patterns
### Quick Command Reference
```bash
# Optimise a command by name only
/sdd:command-optimise /deploy
# Optimise with specific format
/sdd:command-optimise /api-call json
# Optimise with format and strictness
/sdd:command-optimise /security-scan imperative comprehensive
# Provide full text when command isn't in registry
/sdd:command-optimise "
/custom-command - does something special
Parameters: input, output, options
Behavior: processes input and produces output
"
```
### Command Name Detection Logic
The system determines whether the input is a command name reference by checking:
1. Starts with "/" character
2. Contains no spaces or newlines (single token)
3. Looks like a command name pattern (/word-word)
If these conditions are met, it searches for the command documentation.
Otherwise, it treats the input as the full command text to analyze.
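
A minimal sketch of this check; the `/word-word` pattern is interpreted here as letters, digits, underscores and hyphens, with an optional `namespace:` segment such as `sdd:` (that interpretation is an assumption):

```python
import re

# Matches names like /deploy or /sdd:command-optimise.
COMMAND_NAME_PATTERN = re.compile(r"^/[\w-]+(?::[\w-]+)?$")

def is_command_name_reference(value: str) -> bool:
    """True if the input looks like a command name rather than full documentation."""
    value = value.strip()
    return (
        value.startswith("/")
        and " " not in value
        and "\n" not in value
        and bool(COMMAND_NAME_PATTERN.match(value))
    )
```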
## Notes
- Command names are searched in the project's command registry
- If a command isn't found by name, provide the full documentation text
- Manual format/strictness override works with both name references and full text
- The optimiser analyses actual command content, not just the command name


@@ -0,0 +1,300 @@
# Commands Explorer
## Meta
- Version: 1.0
- Category: development-tooling
- Complexity: moderate
- Purpose: Interactive command ecosystem exploration and management
## Definition
**Purpose**: Provides an interactive environment for exploring, understanding, modifying, and creating commands within the `.claude/commands/` directory. Enables comprehensive command ecosystem management with guided workflows.
**Syntax**: `/sdd:commands-explorer [query]`
## Parameters
| Parameter | Type | Required | Default | Description | Validation |
|-----------|------|----------|---------|-------------|------------|
| query | string | No | - | Optional natural language query about commands | Free-form text |
## Capabilities
The Commands Explorer provides five primary capabilities:
### 1. Examine Commands
**Purpose**: Inspect and analyze existing command documentation
**Example Queries**:
- "Show me the details of `/sdd:story-new`"
- "What does `/sdd:project-status` actually do?"
- "Compare `/sdd:story-review` and `/sdd:story-qa`"
**Process**:
1. LOCATE the requested command file(s)
2. READ complete command documentation
3. EXTRACT key sections (purpose, parameters, behavior)
4. PRESENT structured analysis
5. IDENTIFY related commands
### 2. Modify Existing Commands
**Purpose**: Update and enhance existing command functionality
**Example Queries**:
- "Update `/sdd:story-implement` to include automated testing"
- "Add error handling to `/sdd:story-ship`"
- "Enhance `/sdd:project-status` with git branch info"
**Process**:
1. READ existing command documentation
2. IDENTIFY sections requiring modification
3. PROPOSE changes with clear rationale
4. PRESERVE existing command structure
5. UPDATE command file with enhancements
6. VALIDATE syntax and completeness
### 3. Create New Commands
**Purpose**: Generate new commands following project conventions
**Example Queries**:
- "Create a `/sdd:story-debug` command for troubleshooting"
- "Add a `/sdd:story-backup` command to save work in progress"
- "Build a `/sdd:project-cleanup` command for maintenance"
**Process**:
1. GATHER requirements from user query
2. DETERMINE command category (project/story/utility)
3. SELECT appropriate naming convention
4. GENERATE command structure:
- Command name and description
- Usage patterns
- Parameter definitions
- Behavioral specifications
- Examples and prerequisites
5. CREATE command file in `.claude/commands/`
6. INTEGRATE with existing workflow patterns
### 4. Optimize Workflows
**Purpose**: Streamline and improve command usage patterns
**Example Queries**:
- "Streamline the review process workflow"
- "Add shortcuts for common command sequences"
- "Create aliases for frequently used commands"
**Process**:
1. ANALYZE current workflow patterns
2. IDENTIFY bottlenecks or repetitive sequences
3. PROPOSE optimization strategies
4. IMPLEMENT workflow improvements
5. UPDATE related command documentation
### 5. Analyze Dependencies
**Purpose**: Map command relationships and requirements
**Example Queries**:
- "What commands depend on git being clean?"
- "Which commands modify the filesystem?"
- "Show me commands that require manual input"
**Process**:
1. SCAN all command files
2. EXTRACT dependency information
3. MAP command relationships
4. CATEGORIZE by dependency type
5. PRESENT structured analysis
## Command Categories
The system automatically categorizes commands into logical groups:
### Project Management Commands
**Purpose**: Project-wide operations and initialization
- `project-brief.md` - Project overview and brief generation
- `project-context-update.md` - Update project context and documentation
- `project-init.md` - Initialize new project structure
- `project-status.md` - Check overall project status
- `project-stories.md` - Manage project stories and workflow
### Story Workflow Commands
**Purpose**: Core story-driven development lifecycle
- `story-new.md` - Create new story
- `story-start.md` - Begin working on a story
- `story-continue.md` - Resume work on current story
- `story-implement.md` - Implement story requirements
- `story-review.md` - Review story implementation
- `story-qa.md` - Quality assurance validation
- `story-ship.md` - Ship completed story
- `story-complete.md` - Mark story as complete
### Story Management Commands
**Purpose**: Story state and lifecycle management
- `story-status.md` - Check current story status
- `story-next.md` - Move to next story in workflow
- `story-save.md` - Save current story progress
- `story-document.md` - Document story details
- `story-validate.md` - Validate story implementation
- `story-rollback.md` - Rollback story changes
- `story-blocked.md` - Mark story as blocked
### Development Support Commands
**Purpose**: Code quality and technical operations
- `story-refactor.md` - Refactor existing code
- `story-tech-debt.md` - Address technical debt
- `story-test-integration.md` - Integration testing workflows
- `story-patterns.md` - Code patterns and standards
- `story-metrics.md` - Development metrics tracking
### Utility Commands
**Purpose**: Development tooling and checks
- `story-quick-check.md` - Quick health checks
- `story-full-check.md` - Comprehensive validation
- `story-timebox.md` - Time management utilities
- `story-today.md` - Daily development planning
## Common Workflows
### Starting New Work
```
/sdd:story-new → /sdd:story-start → /sdd:story-implement
```
### Review & Ship
```
/sdd:story-review → /sdd:story-qa → /sdd:story-ship → /sdd:story-complete
```
### Project Management
```
/sdd:project-status → /sdd:project-stories → /sdd:story-next
```
## Command Modification Guidelines
When modifying or creating commands through the explorer:
### Standard Command Structure
```markdown
# Command Name
Brief description of what the command does.
## Usage
How to use the command and when to use it.
## What it does
Detailed steps the command performs.
## Prerequisites (if any)
What needs to be in place before running.
## Examples
Practical usage examples.
```
### Naming Conventions
- **Project commands**: `project-*` - Project-wide operations
- **Story commands**: `story-*` - Story-specific operations
- **Utility commands**: Descriptive names for general utilities
### Integration Requirements
- MUST integrate with existing workflows
- SHOULD consider git state requirements
- MUST document dependencies on other commands
- SHOULD maintain consistency with project patterns
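
A sketch tying these guidelines together: creating a new command skeleton that follows the standard structure, uses kebab-case naming, and refuses to overwrite an existing command without confirmation. The template body and helper name are illustrative, not part of the explorer's actual implementation:

```python
from pathlib import Path

SKELETON = """# /{name}

{description}

## Usage

## What it does

## Prerequisites (if any)

## Examples
"""

def create_command_skeleton(name: str, description: str,
                            commands_dir: str = ".claude/commands") -> Path:
    """Write a skeleton command file; never overwrites silently."""
    path = Path(commands_dir) / f"{name}.md"   # kebab-case: command-name.md
    if path.exists():
        raise FileExistsError(f"{path} already exists; confirm before overwriting")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(SKELETON.format(name=name, description=description), encoding="utf-8")
    return path
```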
## Interactive Session Features
When Commands Explorer is invoked, it establishes:
1. **Command Discovery**: Comprehensive catalogue of all commands
2. **Workflow Understanding**: Mapped command relationships and sequences
3. **Quick Access**: Identified common command patterns
4. **Documentation Access**: Full command documentation available
5. **Interactive Modification**: Complete editing capabilities
6. **Pattern Consistency**: Guided creation following standards
## Rules
### Command Management
- MUST preserve existing command functionality when modifying
- NEVER remove critical command sections without explicit approval
- SHOULD suggest improvements based on command best practices
- MUST maintain consistent formatting across all commands
- SHOULD validate command syntax before saving
### File Operations
- MUST use `.claude/commands/` directory for all commands
- SHOULD create backups before modifying existing commands
- MUST follow markdown naming convention: `command-name.md`
- NEVER overwrite commands without confirmation
### Documentation Standards
- MUST include purpose and usage for all commands
- SHOULD provide multiple examples for complex commands
- MUST document all parameters with types and defaults
- SHOULD explain command relationships and dependencies
- MUST maintain accurate category assignments
## Usage Examples
### Example 1: Examine a Command
```
User: "Show me how /sdd:story-new works"
Response:
[Reads story-new.md]
[Presents structured analysis]
━━━━━━━━━━━━━━━━━━━━━━━━
Command: /sdd:story-new
Category: Story Workflow
Purpose: Create new story in development workflow
[... detailed breakdown ...]
```
### Example 2: Create New Command
```
User: "Create a /sdd:story-debug command for troubleshooting"
Response:
[Gathers requirements]
[Generates command structure]
[Creates story-debug.md with proper format]
✅ Created: /sdd:story-debug
Category: Development Support
Location: .claude/commands/story-debug.md
```
### Example 3: Analyze Dependencies
```
User: "Which commands modify the filesystem?"
Response:
Commands that modify the filesystem:
━━━━━━━━━━━━━━━━━━━━━━━━
1. /sdd:story-new - Creates story files
2. /sdd:story-save - Writes progress snapshots
3. /sdd:story-ship - Moves files between directories
4. /sdd:project-init - Creates project structure
[... complete analysis ...]
```
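The scan behind an answer like this can be approximated by searching each command file for filesystem-related verbs. A rough sketch, assuming the commands live in `.claude/commands/` and that keyword matching is an acceptable stand-in for deeper dependency analysis:

```python
from pathlib import Path

FILESYSTEM_KEYWORDS = ("create", "write", "move", "delete", "save", "copy")

def commands_touching_filesystem(commands_dir: str = ".claude/commands") -> dict[str, list[str]]:
    """Map each command file to the filesystem-related keywords it mentions."""
    results: dict[str, list[str]] = {}
    for path in sorted(Path(commands_dir).glob("*.md")):
        text = path.read_text(encoding="utf-8").lower()
        hits = [kw for kw in FILESYSTEM_KEYWORDS if kw in text]
        if hits:
            results[path.stem] = hits
    return results
```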
## Getting Started
**Quick Start Queries**:
- "Show me how `/sdd:story-new` works"
- "Create a new command for X"
- "Modify `/sdd:story-implement` to do Y"
- "What's the difference between review and QA commands?"
- "List all commands that create files"
## Notes
- All commands are stored as markdown files in `.claude/commands/`
- Command files use kebab-case naming: `command-name.md`
- Commands are designed to work together in story-driven workflows
- The explorer maintains command ecosystem consistency
- Interactive mode allows natural language queries
- Command modifications follow existing project conventions
## Related Commands
- `/sdd:command-optimise` - Optimize command documentation format
- `/sdd:project-status` - View overall project and command status
- `/sdd:story-patterns` - Understand coding patterns (similar to command patterns)

commands/project-brief.md

@@ -0,0 +1,392 @@
# /sdd:project-brief
## Meta
- Version: 2.0
- Category: project-planning
- Complexity: high
- Purpose: Creates comprehensive project briefs with intelligent story breakdown and version management
## Definition
**Purpose**: Generate comprehensive project briefs that intelligently break down complex features into multiple related stories, building upon existing project context when available.
**Syntax**: `/sdd:project-brief [project_title]`
## Parameters
| Parameter | Type | Required | Default | Description | Validation |
|-----------|------|----------|---------|-------------|------------|
| project_title | string | No | prompted | Name of the project or feature set | Non-empty string |
## INSTRUCTION: Create Comprehensive Project Brief
### INPUTS
- project_title: Project or feature name (prompted if not provided)
- Existing project context files (if present):
- `/docs/project-context/project-brief.md` (current brief)
- `/docs/project-context/story-relationships.md` (story dependencies)
- `/docs/project-context/versions/` (historical versions)
### PROCESS
#### Phase 1: Version Management & Context Loading
1. CHECK for existing project brief at `/docs/project-context/project-brief.md`
2. IF exists:
- CREATE `/docs/project-context/versions/` directory if not present
- ADD `.gitkeep` file to versions directory
- GENERATE timestamp in format YYYYMMDD-HHMMSS
- MOVE existing brief to `/docs/project-context/versions/project-brief-v[N]-[timestamp].md`
- PARSE existing brief to extract:
* Current stories and their status
* Timeline and milestones
* Project objectives
* Stakeholder information
- LOG version backup location
3. CHECK for existing `/docs/project-context/story-relationships.md`
4. IF exists:
- MOVE to `/docs/project-context/versions/story-relationships-v[N]-[timestamp].md`
5. LOAD existing context to build upon (don't start from scratch)
#### Phase 2: Requirements Gathering
1. IF project_title not provided:
- PROMPT: "What is the title of this project?"
- VALIDATE: Non-empty string
- SET project_title from response
2. IF existing brief found:
- REVIEW existing objectives and expand/refine them
- ANALYZE existing stakeholders and identify new ones
- EXAMINE existing features and find gaps or enhancements
- UPDATE technical constraints based on current context
- ENHANCE existing success criteria with new metrics
3. ELSE (no existing brief):
- GATHER initial requirements by asking:
* "What is the high-level goal of this project?"
* "Who are the primary users/stakeholders?"
* "What are the core features that must be delivered?"
* "What are nice-to-have enhancements?"
* "Are there any technical constraints or dependencies?"
* "What are the success criteria?"
#### Phase 3: Intelligent Story Breakdown
1. ANALYZE requirements to identify:
- Core functionality (must-have stories)
- Enhancement features (should-have stories)
- Future considerations (could-have stories)
2. FOR EACH identified story:
- DEFINE detailed description of what the story accomplishes
- SPECIFY user scenarios and use cases
- DOCUMENT technical implementation requirements
- CREATE acceptance criteria with clear pass/fail conditions
- IDENTIFY edge cases and error handling requirements
- NOTE UI/UX considerations (if applicable)
- LIST testing requirements and test scenarios
- MAP integration points with other stories or systems
3. DETERMINE logical dependencies between stories
4. ESTIMATE relative effort using scale: S / M / L / XL
#### Phase 4: Project Brief Creation
1. GENERATE project brief content using project brief template
2. INCLUDE sections:
- Project Overview
- Objectives (built upon existing if applicable)
- Stakeholders
- Core Features
- Story Breakdown (with comprehensive details)
- Dependencies
- Timeline (if applicable)
- Success Criteria
3. WRITE to `/docs/project-context/project-brief.md`
- ⚠️ MANDATORY: File MUST be at `/docs/project-context/project-brief.md`
- ⚠️ MANDATORY: Do NOT create individual story files in `/docs/stories/development/`
#### Phase 5: Story Relationships File (OPTIONAL)
1. IF stories have dependencies:
- CREATE `/docs/project-context/story-relationships.md`
- INCLUDE sections:
* Dependency Graph (ASCII diagram)
* Story Priority Matrix (table format)
* Implementation Order (phased breakdown)
2. FORMAT Story Priority Matrix:
```markdown
| Story ID | Title | Priority | Dependencies | Effort | Status |
|----------|-------|----------|--------------|--------|--------|
| STORY-XX | ... | Core | None | M | development |
| STORY-YY | ... | Core | STORY-XX | L | backlog |
```
3. GROUP stories by implementation phase:
- Phase 1 (Core): Foundation stories
- Phase 2 (Enhancement): Feature expansions
- Phase 3 (Polish): Nice-to-have improvements
#### Phase 6: Summary & Next Steps
1. DISPLAY summary report:
```
✅ Project Brief Created
═══════════════════════════════════
Project: [project_title]
Brief Location: /docs/project-context/project-brief.md
Relationships: /docs/project-context/story-relationships.md
Stories Identified: [count]
- Core Stories: [count]
- Enhancement Stories: [count]
- Future Stories: [count]
Dependencies: [count] identified
Implementation Phases: [count]
```
2. SUGGEST next steps:
- Run `/sdd:project-init` to set up development environment
- Use `/sdd:story-new` to start with core stories
- Follow suggested implementation order from story-relationships.md
### OUTPUTS
- `/docs/project-context/project-brief.md` - Comprehensive project brief
- `/docs/project-context/story-relationships.md` - Story dependencies and implementation order (if applicable)
- `/docs/project-context/versions/project-brief-v[N]-[timestamp].md` - Versioned backup (if updating existing)
- `/docs/project-context/versions/story-relationships-v[N]-[timestamp].md` - Versioned relationships backup (if exists)
### RULES
- MUST complete version backup before modifying existing files
- MUST preserve all existing project context when building upon it
- NEVER create individual story files in `/docs/stories/development/` directory
- ALWAYS create comprehensive story definitions with all required elements
- SHOULD use existing project context as foundation, not start from scratch
- MUST include acceptance criteria for every story
- MUST document dependencies between related stories
- ALWAYS provide implementation order recommendations
## Story Definition Requirements
Each story MUST include:
1. **Detailed Description**
- What the story accomplishes
- Why it's needed (business value)
2. **User Scenarios**
- Specific use cases
- User workflows affected
3. **Technical Requirements**
- Implementation approach
- Technology choices
- Architecture considerations
4. **Acceptance Criteria**
- Clear pass/fail conditions
- Testable requirements
- Definition of "done"
5. **Edge Cases & Error Handling**
- Boundary conditions
- Failure scenarios
- Error recovery
6. **UI/UX Considerations** (if applicable)
- User interface requirements
- User experience goals
- Accessibility requirements
7. **Testing Requirements**
- Test scenarios
- Test data needed
- Test coverage expectations
8. **Integration Points**
- Dependencies on other stories
- External system integrations
- API requirements
## File Structure
### Required Files
```
/docs/project-context/
├── project-brief.md # Main project brief (REQUIRED)
└── story-relationships.md # Story dependencies (OPTIONAL)
```
### Version Management
```
/docs/project-context/versions/
├── .gitkeep # Ensures directory is tracked
├── project-brief-v1-20250101-143000.md
├── project-brief-v2-20250115-091500.md
└── story-relationships-v1-20250101-143000.md
```
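A sketch of the Phase 1 versioning step that produces the layout above, assuming paths are relative to the project root and that the version number N is derived by counting existing backups (the command only specifies the `v[N]-[timestamp]` naming, not how N is chosen):

```python
from datetime import datetime
from pathlib import Path
import shutil

def backup_project_brief(context_dir: str = "docs/project-context") -> Path | None:
    """Move an existing project-brief.md into versions/ with a v[N]-[timestamp] suffix."""
    brief = Path(context_dir) / "project-brief.md"
    if not brief.exists():
        return None
    versions = Path(context_dir) / "versions"
    versions.mkdir(parents=True, exist_ok=True)
    (versions / ".gitkeep").touch()
    n = len(list(versions.glob("project-brief-v*.md"))) + 1
    timestamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    backup = versions / f"project-brief-v{n}-{timestamp}.md"
    shutil.move(str(brief), str(backup))
    return backup
```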
## Example Output Structure
### Project Brief Template
```markdown
# Project Brief: [Project Title]
## Overview
[High-level project description]
## Objectives
- [Objective 1]
- [Objective 2]
## Stakeholders
- **Primary Users**: [description]
- **Business Stakeholders**: [description]
- **Technical Team**: [description]
## Core Features
1. [Feature 1]
2. [Feature 2]
## Story Breakdown
### STORY-XXX-001: [Story Title]
**Priority**: Core | **Effort**: M | **Status**: backlog
**Description**: [What this story accomplishes]
**User Scenarios**:
- [Scenario 1]
- [Scenario 2]
**Technical Requirements**:
- [Requirement 1]
- [Requirement 2]
**Acceptance Criteria**:
- [ ] [Criterion 1]
- [ ] [Criterion 2]
**Edge Cases**:
- [Edge case 1 and handling]
**Testing Requirements**:
- [Test scenario 1]
**Dependencies**: None
---
[Additional stories follow same format]
## Success Criteria
- [Success metric 1]
- [Success metric 2]
```
### Story Relationships Template
```markdown
# Story Relationships
## Dependency Graph
```
STORY-XXX-001 (Core)
└── STORY-XXX-002 (Core, depends on 001)
├── STORY-XXX-003 (Enhancement, depends on 002)
└── STORY-XXX-004 (Enhancement, depends on 002)
STORY-XXX-005 (Core, independent)
└── STORY-XXX-006 (Polish, depends on 005)
```
## Story Priority Matrix
| Story ID | Title | Priority | Dependencies | Effort | Status |
|----------|-------|----------|--------------|--------|--------|
| STORY-XXX-001 | Foundation Setup | Core | None | M | development |
| STORY-XXX-002 | Core Feature | Core | STORY-XXX-001 | L | backlog |
| STORY-XXX-003 | Enhancement A | Enhancement | STORY-XXX-002 | M | backlog |
## Implementation Order
**Phase 1 (Core - Week 1-2)**
- STORY-XXX-001: Foundation Setup
- STORY-XXX-005: Independent Core Feature
**Phase 2 (Core - Week 3-4)**
- STORY-XXX-002: Core Feature (after 001)
**Phase 3 (Enhancement - Week 5-6)**
- STORY-XXX-003: Enhancement A (after 002)
- STORY-XXX-004: Enhancement B (after 002)
**Phase 4 (Polish - Week 7)**
- STORY-XXX-006: Polish Feature (after 005)
```
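The implementation order above can be derived from the priority matrix with a plain topological layering: each phase contains the stories whose dependencies are already scheduled. A sketch, assuming dependencies are supplied as a mapping from story ID to the set of story IDs it depends on:

```python
def implementation_phases(dependencies: dict[str, set[str]]) -> list[list[str]]:
    """Group stories into phases where each phase depends only on earlier phases."""
    remaining = dict(dependencies)
    done: set[str] = set()
    phases: list[list[str]] = []
    while remaining:
        ready = sorted(s for s, deps in remaining.items() if deps <= done)
        if not ready:  # a dependency cycle would otherwise loop forever
            raise ValueError(f"Circular dependencies among: {sorted(remaining)}")
        phases.append(ready)
        done.update(ready)
        for story in ready:
            del remaining[story]
    return phases

# With the matrix above this yields:
# implementation_phases({
#     "STORY-XXX-001": set(),
#     "STORY-XXX-002": {"STORY-XXX-001"},
#     "STORY-XXX-003": {"STORY-XXX-002"},
# })  ->  [["STORY-XXX-001"], ["STORY-XXX-002"], ["STORY-XXX-003"]]
```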
## Error Handling
- If project_title is empty after prompt: Return "Error: Project title is required"
- If unable to create versions directory: Return "Warning: Could not create versions directory. Proceeding without backup."
- If existing brief cannot be parsed: Create new brief and log warning
- If no stories can be identified: Return "Error: Unable to identify actionable stories from requirements"
## Constraints
- ⚠️ NEVER skip version management if existing files are present
- ⚠️ NEVER create individual story files in `/docs/stories/development/`
- ✅ ALWAYS create comprehensive story definitions with all required elements
- ✅ ALWAYS preserve existing project context when building upon it
- ✅ MUST include dependency analysis if multiple stories exist
- 📝 SHOULD create story-relationships.md for projects with 3+ stories
- 🔄 MUST follow versioning pattern: project-brief-v[N]-[timestamp].md
## Usage Examples
### Example 1: New Project Brief
```bash
/sdd:project-brief
→ Prompts for project title
→ Gathers requirements interactively
→ Creates /docs/project-context/project-brief.md
→ Creates /docs/project-context/story-relationships.md
Output:
✅ Project Brief Created
Project: E-commerce Checkout Flow
Stories Identified: 5
- Core Stories: 3
- Enhancement Stories: 2
```
### Example 2: Updating Existing Brief
```bash
/sdd:project-brief "Enhanced Checkout"
→ Finds existing /docs/project-context/project-brief.md
→ Creates backup: /docs/project-context/versions/project-brief-v1-20250101-143000.md
→ Loads existing objectives and stories
→ Builds upon existing content
→ Creates enhanced version
Output:
✅ Project Brief Updated
Previous version backed up to: /docs/project-context/versions/project-brief-v1-20250101-143000.md
New stories added: 3
Updated stories: 2
```
### Example 3: With Title Provided
```bash
/sdd:project-brief "Mobile App Dashboard"
→ Uses provided title
→ Gathers requirements
→ Creates comprehensive brief
→ No version management needed (new project)
```
## Performance Considerations
- Large existing briefs (10+ stories) may take longer to parse
- Version management adds minimal overhead (single file copy)
- Interactive requirement gathering allows user to control pace
- Story dependency analysis scales with story count
## Related Commands
- `/sdd:project-init` - Initialize development environment after brief creation
- `/sdd:story-new` - Create individual story from brief
- `/sdd:story-qa` - Quality assurance for completed stories


@@ -0,0 +1,433 @@
# /sdd:project-context-update
## Meta
- Version: 2.0
- Category: project-management
- Complexity: medium
- Purpose: Update project context documents with version control and consistency validation
## Definition
**Purpose**: Update technical stack, development process, coding standards, or project glossary documents while maintaining consistency across all project context files.
**Syntax**: `/sdd:project-context-update [document_type]`
## Parameters
| Parameter | Type | Required | Default | Description | Validation |
|-----------|------|----------|---------|-------------|------------|
| document_type | string | No | prompted | Which document to update | One of: technical-stack, development-process, coding-standards, project-glossary, project-brief |
## INSTRUCTION: Update Project Context Documents
### INPUTS
- document_type: Type of context document to update (optional, prompted if not provided)
- Current context documents in `/docs/project-context/`
- User-specified changes and updates
- Related documents for consistency validation
### PROCESS
#### Phase 1: Environment Verification
1. **CHECK** if `/docs/project-context/` directory exists
2. IF missing:
- SUGGEST running `/sdd:project-init` first
- EXIT with initialization guidance
3. **VERIFY** which context documents exist:
- `/docs/project-context/technical-stack.md` - Technology choices
- `/docs/project-context/coding-standards.md` - Quality rules
- `/docs/project-context/development-process.md` - Workflow definitions
- `/docs/project-context/project-glossary.md` - Domain vocabulary
- `/docs/project-context/project-brief.md` - Project overview
#### Phase 2: Document Selection
1. IF document_type provided:
- VALIDATE it matches available document types
- SET target_document to specified type
2. ELSE:
- **ASK** user which document to update:
```
Which document would you like to update?
1. technical-stack - Frameworks, languages, tools
2. development-process - Stage definitions, workflows
3. coding-standards - Naming, patterns, quality requirements
4. project-glossary - Domain terms, project vocabulary
5. project-brief - Project overview and goals
Enter number or name:
```
- CAPTURE user selection
- SET target_document based on selection
#### Phase 3: Current State Analysis
1. **READ** current content from `/docs/project-context/[target_document].md`
2. **PARSE** existing structure:
- Identify main sections
- Extract current technology choices (if technical-stack)
- Understand existing standards and patterns
- Note areas that may need updating
3. **DISPLAY** current state summary to user
4. **IDENTIFY** dependent documents that may be affected
#### Phase 4: Change Specification
1. **ASK** user what changes to make:
- Add new technology/library (requires dependent updates)
- Update existing entry (cascade changes to related standards)
- Remove deprecated item (update all references)
- Reorganize structure (maintain compatibility)
- Migrate from one technology to another (comprehensive update)
2. **GATHER** specific details:
- WHAT is being changed
- WHY the change is needed
- WHEN it should take effect
- HOW it impacts other documents
3. **ASK** for version numbers and documentation links (if technical change)
#### Phase 5: Impact Analysis
1. **ANALYZE** impact on other context documents:
IF updating technical-stack:
- CHECK if coding-standards need updates
- VERIFY development-process matches new stack
- REVIEW project-glossary for new terms
- IDENTIFY test commands that may change
IF updating coding-standards:
- VERIFY alignment with technical-stack
- CHECK if development-process reviews need updates
- NOTE impact on existing stories
IF updating development-process:
- CHECK if stages match technical-stack capabilities
- VERIFY coding-standards align with new process
- NOTE story templates that may need updates
IF updating project-glossary:
- ENSURE terms align with technical-stack
- VERIFY consistency with project-brief terminology
2. **REPORT** identified impacts to user:
```
Impact Analysis:
━━━━━━━━━━━━━━━━
Primary Change: technical-stack.md
Affected Documents:
- coding-standards.md: Test command references need update
- development-process.md: QA stage testing strategy needs alignment
Affected Stories:
- 2 stories in /development may need test updates
- 1 story in /review may need coding standard adjustments
```
#### Phase 6: Change Application
1. **CREATE** backup before modifications:
- COPY original file to `/docs/project-context/versions/`
- USE format: `[document]-backup-[timestamp].md`
- LOG backup location
2. **APPLY** changes to target document:
- UPDATE specified sections
- MAINTAIN document structure
- PRESERVE formatting and organization
- ADD timestamps or version notes if appropriate
3. **SAVE** updated document to original location
#### Phase 7: Consistency Validation
1. **VALIDATE** consistency across project-context:
- CHECK technical-stack and coding-standards align
- VERIFY development-process matches technology choices
- ENSURE project-glossary includes relevant terms
- CONFIRM project-brief reflects current state
2. **DETECT** inconsistencies or conflicts:
- Mismatched technology references
- Outdated process descriptions
- Missing terminology definitions
- Conflicting standards
3. **REPORT** validation results to user
#### Phase 8: Cascading Updates (Optional)
1. IF other documents affected:
- **ASK** user if they want to update related documents now
- FOR EACH document requiring update:
* SHOW what needs to change
* ASK for confirmation
* APPLY updates if approved
- LOG all cascading changes
2. IF user declines cascading updates:
- PROVIDE list of manual updates needed
- SUGGEST specific changes for each document
- NOTE in target document that related updates pending
#### Phase 9: Story Impact Assessment
1. **IDENTIFY** stories affected by context changes:
- SCAN `/docs/stories/development/` for impacted stories
- CHECK `/docs/stories/review/` for stories needing review updates
- NOTE `/docs/stories/qa/` stories requiring test updates
2. **SUGGEST** actions for affected stories:
- Re-run `/sdd:story-review` for stories with new standards
- Re-run `/sdd:story-qa` for stories with new test requirements
- Update story documentation with new references
#### Phase 10: Completion Summary
1. **DISPLAY** update summary:
```
✅ Context Document Updated
═══════════════════════════════════
Document: technical-stack.md
Backup: /docs/project-context/versions/technical-stack-backup-20251001-104500.md
Changes Applied:
- Added: Playwright for E2E testing
- Updated: Node.js version 18 → 20
- Removed: Deprecated library XYZ
Cascading Updates:
- coding-standards.md: Updated test naming conventions
- development-process.md: Added Playwright to QA stage
Affected Stories: 2 in development
Recommended Actions:
1. Review STORY-XXX-005 test setup
2. Update STORY-XXX-007 with new standards
```
2. **PROVIDE** next steps:
- Commands to run for impacted stories
- Documentation links for new technologies
- Timeline for completing related updates
### OUTPUTS
- Updated `/docs/project-context/[document].md` - Modified context document
- Backup `/docs/project-context/versions/[document]-backup-[timestamp].md` - Original version
- Updated related documents (if cascading updates approved)
- Impact assessment report
- Recommended actions list
### RULES
- MUST create backup before any modifications
- MUST validate consistency across all context documents
- MUST identify and report impacts on other documents
- MUST identify and report impacts on existing stories
- SHOULD offer to update related documents
- SHOULD provide specific update recommendations
- NEVER overwrite without creating backup
- ALWAYS maintain document structure and formatting
## Examples
### Example 1: Add New Testing Framework
```bash
INPUT:
/sdd:project-context-update technical-stack
INTERACTION:
→ Shows current technical-stack.md content
→ Asks: "What would you like to update?"
→ User: "Add Playwright for E2E testing"
→ Asks: "Version number?" → "1.40"
→ Asks: "Documentation link?" → "https://playwright.dev"
IMPACT ANALYSIS:
Analyzing impact...
- coding-standards.md: Test naming conventions need update
- development-process.md: QA stage needs Playwright integration
- 2 stories in /development may need test updates
OUTPUT:
✅ Context Document Updated
═══════════════════════════════════
Document: technical-stack.md
Backup: /docs/project-context/versions/technical-stack-backup-20251001-104500.md
Changes Applied:
- Added: Playwright 1.40 for E2E testing
- Updated: Browser/E2E testing section
Cascading Updates Available:
1. coding-standards.md - Add Playwright test patterns
2. development-process.md - Update QA stage with Playwright
Would you like to apply cascading updates now? [y/n]
```
### Example 2: Migrate Technology
```bash
INPUT:
/sdd:project-context-update technical-stack
INTERACTION:
→ Asks: "What would you like to update?"
→ User: "Migrate from Jest to Vitest"
IMPACT ANALYSIS:
⚠️ Major Change Detected: Test Framework Migration
Affected Documents:
- technical-stack.md: Unit testing framework
- coding-standards.md: Test file conventions, imports
- development-process.md: Test commands in all stages
- project-glossary.md: Testing terminology
Affected Stories:
- 5 stories in /development with existing tests
- 2 stories in /review with test coverage
- 3 stories in /completed may need test migration
OUTPUT:
✅ Context Document Updated
═══════════════════════════════════
Document: technical-stack.md
Backup: /docs/project-context/versions/technical-stack-backup-20251001-104530.md
Changes Applied:
- Removed: Jest (deprecated)
- Added: Vitest 1.0 (new unit testing framework)
- Updated: Test runner commands (npm test → vitest)
Cascading Updates Required:
1. coding-standards.md: Test import statements (expect from 'vitest')
2. development-process.md: Test commands in all stages
3. project-glossary.md: Update testing terminology
⚠️ IMPORTANT: Existing Stories Need Updates
- 5 stories in /development require test migration
- Consider creating migration guide story
Recommended Actions:
1. Update all context documents (run /sdd:project-context-update)
2. Create STORY-MIGRATION: Jest to Vitest migration guide
3. Re-run tests for all in-progress stories
4. Update test templates in /docs/stories/templates/
```
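The story-impact figures in an analysis like this can be gathered by scanning the story directories for references to the outgoing technology. A rough sketch, assuming a case-insensitive text search over `/docs/stories/` is a sufficient first pass (paths relative to the project root):

```python
from pathlib import Path

def stories_mentioning(term: str, stories_root: str = "docs/stories") -> dict[str, list[str]]:
    """List story files per stage directory that mention the given term."""
    impacted: dict[str, list[str]] = {}
    for stage in ("development", "review", "qa", "completed"):
        stage_dir = Path(stories_root) / stage
        if not stage_dir.is_dir():
            continue
        hits = [p.name for p in sorted(stage_dir.glob("*.md"))
                if term.lower() in p.read_text(encoding="utf-8").lower()]
        if hits:
            impacted[stage] = hits
    return impacted

# e.g. stories_mentioning("jest") before committing to the Vitest migration
```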
### Example 3: Update Coding Standards
```bash
INPUT:
/sdd:project-context-update coding-standards
INTERACTION:
→ Shows current coding-standards.md
→ Asks: "What would you like to update?"
→ User: "Change line length from 80 to 120 characters"
IMPACT ANALYSIS:
Analyzing impact...
- Technical stack: No changes needed
- Development process: No changes needed
- Existing code: May need reformatting
- Stories: 3 in /review may need re-review
OUTPUT:
✅ Context Document Updated
═══════════════════════════════════
Document: coding-standards.md
Backup: /docs/project-context/versions/coding-standards-backup-20251001-104600.md
Changes Applied:
- Updated: Line length limit 80 → 120 characters
- Updated: Prettier configuration reference
No cascading updates required.
⚠️ Code Formatting Impact:
- Existing code may need reformatting
- Run: npm run format (or equivalent)
- Consider: Re-review 3 stories in /review with new standard
Recommended Actions:
1. Run formatter across codebase
2. Re-review STORY-XXX-003, STORY-XXX-006, STORY-XXX-009
3. Update editor/IDE settings to 120 char limit
```
### Example 4: No Project Context
```bash
INPUT:
/sdd:project-context-update
OUTPUT:
⚠️ PROJECT CONTEXT NOT FOUND
The /docs/project-context/ directory does not exist.
To set up the story-driven development system, run:
→ /sdd:project-init
This will create:
- Project context directory
- All context documents
- Document templates
```
## Edge Cases
### Document Doesn't Exist
- DETECT missing document
- OFFER to create it with standard template
- IF user confirms, create document and continue
- ELSE suggest running `/sdd:project-init`
### No Changes Specified
- IF user can't specify changes clearly
- OFFER examples of common updates
- PROVIDE guided questions
- ALLOW user to cancel if not ready
### Conflicting Updates
- DETECT conflicts between updates
- EXPLAIN the conflict to user
- SUGGEST resolution approaches
- REQUIRE user decision before proceeding
### Large Cascading Impact
- IF changes affect many documents/stories
- WARN user about scope of impact
- SUGGEST breaking into smaller updates
- PROVIDE option to review impact before applying
## Error Handling
- **Missing /docs/project-context/**: Suggest `/sdd:project-init` with clear instructions
- **Document not found**: Offer to create with template or abort
- **Backup creation fails**: MUST NOT proceed with updates, report error
- **Permission errors**: Report specific file with access issue
- **Validation failures**: Show inconsistencies, suggest fixes, don't force update
## Performance Considerations
- Document reads are fast (< 100ms)
- Impact analysis scales with document count (typically < 1s)
- Backup creation is quick (< 50ms per file)
- Story scanning may take longer (100+ stories: ~2-3s)
## Security Considerations
- Verify write permissions before modifications
- Validate all file paths stay within project
- Create backups before any destructive operations
- Don't expose sensitive configuration data
## Related Commands
- `/sdd:project-init` - Initialize project structure if missing
- `/sdd:project-brief` - Update high-level project documentation
- `/sdd:story-review` - Re-review stories with new standards
- `/sdd:story-qa` - Re-test stories with new requirements
- `/sdd:project-status` - View current project state
## Constraints
- ⚠️ MUST create backup before any modification
- ✅ MUST validate consistency across context documents
- ✅ MUST identify impact on other documents and stories
- 📋 SHOULD offer to apply cascading updates
- 🔄 SHOULD provide specific recommendations
- ⚡ MUST complete analysis in < 5 seconds
- 💾 NEVER overwrite without backup
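A minimal sketch of the backup-before-modify rule (the `versions/` path and timestamp format follow the examples earlier in this document):
```bash
# Copy the target document into versions/ before editing; abort if the copy fails.
DOC="docs/project-context/technical-stack.md"
BACKUP_DIR="docs/project-context/versions"
STAMP=$(date +"%Y%m%d-%H%M%S")

mkdir -p "$BACKUP_DIR"
cp "$DOC" "$BACKUP_DIR/$(basename "$DOC" .md)-backup-$STAMP.md" || exit 1
# Edits to $DOC happen only after the backup above has succeeded.
```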

426
commands/project-init.md Normal file
View File

@@ -0,0 +1,426 @@
# /sdd:project-init
## Meta
- Version: 2.0
- Category: project-management
- Complexity: medium
- Purpose: Initialize story-driven development system with folder structure and template documents
## Definition
**Purpose**: Create the complete folder structure and template documents required for the story-driven development workflow.
**Syntax**: `/sdd:project-init`
## Parameters
None
## INSTRUCTION: Initialize Project Structure
### INPUTS
- Project root directory (current working directory)
- User responses for technology stack and preferences (gathered interactively)
### PROCESS
#### Phase 1: Directory Structure Creation
1. **CREATE** `/docs/project-context/` directory
- Root directory for all project documentation
- Contains technical specifications and standards
2. **CREATE** `/docs/stories/` directory with subdirectories:
- `/docs/stories/development/` - Active implementation work
- `/docs/stories/review/` - Code review stage
- `/docs/stories/qa/` - Quality assurance testing
- `/docs/stories/completed/` - Finished and shipped stories
- `/docs/stories/backlog/` - Planned but not started
- `/docs/stories/templates/` - Story and documentation templates
3. **ADD** `.gitkeep` file to each empty directory:
- ENSURES directories are tracked in git
- PREVENTS empty directories from being ignored
- FORMAT: Empty file named `.gitkeep`
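A minimal shell sketch of this phase, assuming the project root is the current working directory:
```bash
# Create the context and story workflow directories, and keep the empty
# story directories tracked in git via .gitkeep placeholder files.
mkdir -p docs/project-context
for dir in development review qa completed backlog templates; do
  mkdir -p "docs/stories/$dir"
  touch "docs/stories/$dir/.gitkeep"
done
```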
#### Phase 2: Technical Stack Documentation
1. **ASK** user about complete technical stack:
**Frontend**:
- Framework: (React, Vue, Svelte, Angular, Laravel Blade, Next.js, Nuxt.js, etc.)
- State management: (Redux, Zustand, Vuex, Pinia, Livewire, Alpine.js, etc.)
- Language: (TypeScript, JavaScript, PHP templating, etc.)
- Styling: (Tailwind CSS, CSS Modules, Styled Components, SCSS, etc.)
- Build tool: (Vite, Webpack, Rollup, esbuild, Parcel, Laravel Mix, etc.)
**Backend**:
- Runtime: (Node.js, Deno, Bun, PHP, Python, Go, Java, .NET, etc.)
- Framework: (Express, Fastify, Laravel, Symfony, Django, FastAPI, etc.)
- Language: (TypeScript, JavaScript, PHP, Python, Go, Java, C#, etc.)
**Database**:
- Primary database: (PostgreSQL, MySQL, MongoDB, SQLite, Redis, etc.)
- ORM/Query builder: (Prisma, TypeORM, Eloquent, Django ORM, etc.)
- Caching: (Redis, Memcached, database cache, etc.)
**Testing**:
- Unit testing: (Jest, Vitest, Pest, PHPUnit, Pytest, JUnit, etc.)
- Integration testing: (Same as unit or separate framework)
- Browser/E2E testing: (Playwright, Cypress, Selenium, Laravel Dusk, etc.)
- Test runner commands: (npm test, vendor/bin/pest, pytest, etc.)
**Development Tools**:
- Package manager: (npm, yarn, pnpm, composer, pip, go mod, etc.)
- Code formatting: (Prettier, ESLint, PHP CS Fixer, Black, etc.)
- Linting: (ESLint, PHPStan, pylint, golangci-lint, etc.)
- Git hooks: (husky, pre-commit, etc.)
**Deployment & Hosting**:
- Hosting platform: (Vercel, Netlify, AWS, Railway, Heroku, etc.)
- Container platform: (Docker, Podman, none, etc.)
- CI/CD: (GitHub Actions, GitLab CI, Jenkins, none, etc.)
**Key Libraries**:
- Authentication: (Auth0, Firebase Auth, Laravel Sanctum, etc.)
- HTTP client: (Axios, Fetch, Guzzle, Requests, etc.)
- Validation: (Zod, Joi, Laravel Validation, Pydantic, etc.)
- Other important libraries: [User provides list]
2. **CREATE** `/docs/project-context/technical-stack.md`:
- POPULATE with user's technology choices
- INCLUDE version numbers if available
- ADD links to documentation
- NOTE any specific configuration requirements
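The populated file might look like the sketch below; the exact section headings are an assumption, and the placeholders come from the answers gathered above:
```bash
# Sketch only -- write a technical-stack.md skeleton to be filled in.
cat > docs/project-context/technical-stack.md <<'EOF'
# Technical Stack

## Frontend
- Framework: <answer, with version>
- Styling: <answer>

## Backend
- Runtime: <answer>
- Framework: <answer>

## Testing
- Unit testing: <answer>
- Browser/E2E testing: <answer>

## Notes
- Documentation links and configuration requirements go here.
EOF
```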
#### Phase 3: Development Process Documentation
1. **CREATE** `/docs/project-context/development-process.md`:
- DEFINE three-stage workflow (Development → Review → QA)
- SPECIFY entry/exit criteria for each stage
- DOCUMENT required activities per stage
- ESTABLISH quality gates and checkpoints
- OUTLINE story movement rules
2. **INCLUDE** sections:
- Stage Definitions
- Stage Requirements
- Testing Strategy
- Review Process
- Quality Gates
#### Phase 4: Coding Standards Documentation
1. **ASK** user about comprehensive coding standards:
- Naming conventions (camelCase, snake_case, PascalCase patterns)
- Function/method organization (length limits, complexity)
- Class/module structure (single responsibility patterns)
- Comment and documentation standards
- Framework-specific patterns
- File organization preferences
- Testing standards
- Quality requirements
- Git workflow conventions
2. **CREATE** `/docs/project-context/coding-standards.md`:
- DOCUMENT language-specific standards
- DEFINE framework-specific patterns
- SPECIFY file organization rules
- ESTABLISH testing standards
- SET quality requirements
- OUTLINE git workflow
#### Phase 5: Project Glossary
1. **CREATE** `/docs/project-context/project-glossary.md`:
- PROVIDE template for domain-specific terminology
- INCLUDE sections for:
* Domain Terms (business-specific vocabulary)
* Technical Terms (framework-specific terminology)
* Process Terms (development workflow vocabulary)
- ENCOURAGE user to populate over time
#### Phase 6: Project Brief Template
1. **CREATE** `/docs/project-context/project-brief.md`:
- PROVIDE comprehensive project overview template
- INCLUDE sections:
* Project Overview (name, description, objectives)
* Timeline (start date, target completion, milestones)
* Story Planning (total stories, prioritization)
* Success Metrics
- PROMPT user to fill with actual project details
#### Phase 7: Story Template Creation
1. **CREATE** `/docs/stories/templates/story-template.md`:
- COMPREHENSIVE story template with sections:
* Story Header (ID, title, status, priority)
* Description and Context
* Success Criteria and Acceptance Tests
* Technical Implementation Notes
* Implementation Checklist
* Test Cases (unit, integration, browser)
* Rollback Plans
* Lessons Learned
- REFERENCE project's technical stack from `technical-stack.md`
- ALIGN with coding standards from `coding-standards.md`
- MATCH process requirements from `development-process.md`
#### Phase 8: Completion Summary and Next Steps
1. **DISPLAY** creation summary:
```
✅ Project Structure Initialized
═══════════════════════════════════
📁 Directories Created:
- /docs/project-context/
- /docs/stories/development/
- /docs/stories/review/
- /docs/stories/qa/
- /docs/stories/completed/
- /docs/stories/backlog/
- /docs/stories/templates/
📄 Documents Created:
- /docs/project-context/technical-stack.md
- /docs/project-context/development-process.md
- /docs/project-context/coding-standards.md
- /docs/project-context/project-glossary.md
- /docs/project-context/project-brief.md
- /docs/stories/templates/story-template.md
🔧 Configuration Status:
- Technical stack: Configured with [user's stack]
- Coding standards: Customized
- Development process: Defined
```
2. **SUGGEST** next steps:
- Fill out `project-brief.md` with actual project details
- Customize `coding-standards.md` with team-specific patterns
- Update `development-process.md` with workflow preferences
- Populate `project-glossary.md` with domain terms
- Create first story: `/sdd:story-new`
- Begin development: `/sdd:story-start`
3. **PROVIDE** quick start guide:
- How to create a story
- How to move story through workflow
- How to check project status
- Where to find documentation
### OUTPUTS
**Directories**:
- `/docs/project-context/` - Project documentation root
- `/docs/stories/development/` - Active stories
- `/docs/stories/review/` - Stories in review
- `/docs/stories/qa/` - Stories in QA
- `/docs/stories/completed/` - Finished stories
- `/docs/stories/backlog/` - Planned stories
- `/docs/stories/templates/` - Templates
**Files**:
- `/docs/project-context/technical-stack.md` - Technology choices
- `/docs/project-context/development-process.md` - Workflow definitions
- `/docs/project-context/coding-standards.md` - Quality standards
- `/docs/project-context/project-glossary.md` - Terminology reference
- `/docs/project-context/project-brief.md` - Project overview
- `/docs/stories/templates/story-template.md` - Story template
### RULES
- MUST create all directories before creating files
- MUST add `.gitkeep` to all empty directories
- MUST gather user input for technology stack
- MUST customize templates based on user's stack
- SHOULD reference actual technology choices in templates
- NEVER overwrite existing files without user confirmation
- ALWAYS provide next steps after initialization
## File Structure
### Directory Hierarchy
```
/docs/project-context/
├── technical-stack.md # Technology choices and versions
├── development-process.md # Workflow and quality gates
├── coding-standards.md # Code quality standards
├── project-glossary.md # Domain terminology
└── project-brief.md # Project overview and goals
/docs/stories/
├── /development/ # Active implementation
│ └── .gitkeep
├── /review/ # Code review stage
│ └── .gitkeep
├── /qa/ # Quality assurance
│ └── .gitkeep
├── /completed/ # Finished stories
│ └── .gitkeep
├── /backlog/ # Planned stories
│ └── .gitkeep
└── /templates/ # Templates
├── .gitkeep
└── story-template.md # Story template
```
## Examples
### Example 1: First-Time Setup
```bash
INPUT:
/sdd:project-init
INTERACTION:
→ Asks about frontend framework
→ Asks about backend framework
→ Asks about database
→ Asks about testing framework
→ Asks about deployment platform
OUTPUT:
✅ Project Structure Initialized
═══════════════════════════════════
📁 Directories Created:
- /docs/project-context/
- /docs/stories/development/
- /docs/stories/review/
- /docs/stories/qa/
- /docs/stories/completed/
- /docs/stories/backlog/
- /docs/stories/templates/
📄 Documents Created:
- /docs/project-context/technical-stack.md (Laravel TALL stack)
- /docs/project-context/development-process.md
- /docs/project-context/coding-standards.md
- /docs/project-context/project-glossary.md
- /docs/project-context/project-brief.md
- /docs/stories/templates/story-template.md
🔧 Configuration Status:
- Technical stack: Laravel 12, Livewire 3, Alpine.js, Tailwind CSS
- Testing: Pest PHP, Playwright
- Deployment: Laravel Herd (local), Forge (production)
💡 NEXT STEPS:
1. Fill out /docs/project-context/project-brief.md with your project details
2. Run /sdd:project-brief to create comprehensive project plan
3. Create your first story with /sdd:story-new
4. Begin development with /sdd:story-start
📚 QUICK START:
- Create story: /sdd:story-new
- View status: /sdd:project-status
- Start work: /sdd:story-start [id]
- Documentation: See /docs/project-context/ directory
```
### Example 2: Already Initialized
```bash
INPUT:
/sdd:project-init
OUTPUT:
⚠️ Project Already Initialized
The following directories already exist:
- /docs/project-context/
- /docs/stories/
Would you like to:
1. Skip initialization (directories exist)
2. Add missing directories/files only
3. Recreate all templates (keeps existing config)
4. Abort
Choose an option [1-4]:
```
### Example 3: Partial Initialization
```bash
INPUT:
/sdd:project-init
DETECTION:
→ Found /docs/project-context/ but missing /docs/stories/
OUTPUT:
Partial Project Structure Detected
Found: /docs/project-context/
Missing: /docs/stories/ and subdirectories
Creating missing directories...
✅ Completed Missing Structure
═══════════════════════════════════
📁 Created:
- /docs/stories/development/
- /docs/stories/review/
- /docs/stories/qa/
- /docs/stories/completed/
- /docs/stories/backlog/
- /docs/stories/templates/
Existing configuration preserved.
💡 NEXT STEPS:
- Create first story: /sdd:story-new
- View project status: /sdd:project-status
```
## Edge Cases
### Existing Project Structure
- DETECT existing directories and files
- OFFER options:
* Skip initialization completely
* Add missing directories/files only
* Recreate templates (preserve config)
* Abort operation
- NEVER overwrite without confirmation
### Partial Initialization
- IDENTIFY which components exist
- CREATE only missing components
- PRESERVE existing configuration
- LOG what was added vs what existed
### Permission Issues
- CHECK write permissions before creating
- REPORT specific permission errors
- SUGGEST running with appropriate permissions
- PROVIDE manual creation instructions if needed
### Git Not Initialized
- DETECT if .git directory exists
- SUGGEST initializing git if missing
- NOTE that .gitkeep files require git
- CONTINUE with initialization regardless
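A minimal check for this edge case (plain `git` CLI assumed):
```bash
# Detect whether the project root is already a git repository.
if git rev-parse --is-inside-work-tree >/dev/null 2>&1; then
  echo "Git repository detected; .gitkeep files will be tracked."
else
  echo "No git repository found. Consider running 'git init'."
fi
# Initialization proceeds either way, per the rule above.
```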
## Error Handling
- **Permission denied**: Report specific directory/file, suggest fixes
- **Disk space full**: Report error, suggest cleanup
- **Invalid path**: Verify working directory is project root
- **User cancels**: Clean up partial creation, exit gracefully
## Performance Considerations
- Directory creation is fast (< 100ms typically)
- File creation with templates (< 500ms typically)
- Interactive prompts allow user to control pace
- No heavy processing or external dependencies
## Security Considerations
- Verify write permissions before operations
- Sanitize all file paths
- Don't create files outside project root
- Don't overwrite without explicit confirmation
## Related Commands
- `/sdd:project-brief` - Create comprehensive project plan after init
- `/sdd:story-new` - Create first story after initialization
- `/sdd:project-status` - View current project state
- `/sdd:project-context-update` - Update context documents later
## Constraints
- ⚠️ MUST NOT overwrite existing files without confirmation
- ✅ MUST create all directories before files
- ✅ MUST add `.gitkeep` to empty directories
- 📋 MUST gather user input for technology stack
- 🔧 SHOULD customize templates based on stack
- 💾 MUST verify write permissions
- ⚡ SHOULD complete initialization in < 5 seconds (excluding user input)

305
commands/project-phase.md Normal file
View File

@@ -0,0 +1,305 @@
# /sdd:project-phase
## Meta
- Version: 1.2
- Category: project-management
- Complexity: high
- Purpose: Interactively plan project development phases based on user requirements and preferences
## Definition
**Purpose**: Interactively plan the next development phase by gathering user input on desired features and improvements, with optional completion analysis of previous work.
**Syntax**: `/sdd:project-phase [phase_name] [--analyze-only]`
## Parameters
| Parameter | Type | Required | Default | Description | Validation |
|-----------|------|----------|---------|-------------|------------|
| phase_name | string | No | Auto-generate (e.g., "Phase 2", "v2.0") | Name for the new development phase | Non-empty if provided |
| --analyze-only | flag | No | false | Only perform analysis without creating new phase documentation | Boolean flag |
## INSTRUCTION: Interactive Phase Planning with User Input
### INPUTS
- phase_name: New phase identifier (optional, auto-generated if not provided)
- current_brief: Main project brief at `/docs/project-context/project-brief.md`
- user_requirements: Interactive input from user about desired features and improvements
- existing_phases: Previous phase documentation in `/docs/project-context/phases/`
- analyze_only: Flag to perform analysis without creating new phase
- Optional context: Stories in `/docs/stories/completed/`, `/docs/stories/development/`, `/docs/stories/review/`, `/docs/stories/qa/`
### PROCESS
#### Phase 1: Environment Setup and Discovery
1. **VERIFY** main project brief exists at `/docs/project-context/project-brief.md`
2. **CREATE** `/docs/project-context/phases/` directory if missing
3. **SCAN** existing phase directories to determine version number
4. **OPTIONAL CONTEXT GATHERING**:
- Count stories in each directory (`/docs/stories/development/`, `/docs/stories/review/`, `/docs/stories/qa/`, `/docs/stories/completed/`)
- Identify recent development patterns for context only
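A minimal sketch of the version-number scan, assuming existing phase directories follow the `phase-<number>` naming pattern:
```bash
# Find the highest existing phase number and propose the next one.
last=$(ls -d docs/project-context/phases/phase-* 2>/dev/null \
        | sed 's/.*phase-//' | sort -n | tail -1)
next=$(( ${last:-0} + 1 ))
echo "Proposed phase identifier: phase-$next"
```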
#### Phase 2: User Consultation for New Phase
1. **PRESENT PROJECT STATUS**:
- **SHOW** current project state and recent development activity
- **SUMMARIZE** any incomplete work in development/review/qa
- **HIGHLIGHT** recent completed features and achievements
2. **ASK USER ABOUT NEW PHASE**:
- **ASK**: "Based on the current project state, do you want to plan a new development phase?"
- **EXPLAIN** what a new phase would involve (planning features, organizing stories, setting goals)
- **PROVIDE OPTIONS**:
* "Yes, I want to plan a new phase with specific features and improvements"
* "No, I want to continue with existing work or make smaller adjustments"
* "I'm not sure, help me understand what a new phase would look like"
3. **IF USER SAYS NO or UNSURE**:
- **SUGGEST** alternatives like:
* Continuing existing stories in development
* Making incremental improvements without formal phase planning
* Reviewing current work and identifying immediate next steps
- **EXIT** without creating new phase documentation
- **PROVIDE** guidance on other available commands for incremental work
4. **IF USER SAYS YES**:
- **GENERATE** phase identifier:
- IF phase_name provided: USE provided name
- ELSE: AUTO-GENERATE as "phase-N" where N is next sequential number
- **PROCEED** to Phase 3 (Interactive Requirements Gathering)
#### Phase 3: Interactive Requirements Gathering (Only if User Approved New Phase)
1. **USER CONSULTATION - PRIMARY FEATURES**:
- **ASK**: "What are the main features or improvements you want to focus on in this phase?"
- **PROMPT** for specific areas:
* New functionality you'd like to add
* Existing features that need improvement
* User experience enhancements
* Performance or technical improvements
- **GATHER** priority ranking from user input
2. **USER CONSULTATION - TECHNICAL PREFERENCES**:
- **ASK**: "Are there any technical areas you'd like to address?"
* Code refactoring or cleanup
* Testing improvements
* Performance optimizations
* Security enhancements
* Accessibility improvements
- **UNDERSTAND** user's technical comfort level and preferences
3. **USER CONSULTATION - CONSTRAINTS**:
- **ASK**: "What constraints should we consider for this phase?"
* Time limitations
* Complexity preferences (simple vs. ambitious)
* Dependencies on external factors
* Resource availability
- **CLARIFY** realistic scope expectations
4. **FEATURE CATEGORIZATION** (Based on user input):
- **Iteration Features**: User-identified improvements to existing functionality
- **Extension Features**: User-requested new capabilities
- **Foundation Features**: User-approved technical improvements
#### Phase 4: Optional Context Analysis
1. **IF USER REQUESTS CONTEXT** from previous work:
- REVIEW completed stories for relevant patterns
- ASSESS current technical foundation capabilities
- IDENTIFY any blockers from incomplete work
2. **TECHNICAL FOUNDATION REVIEW** (only if relevant to user goals):
- EVALUATE current stack capabilities against user requirements
- IDENTIFY necessary technical prerequisites
- ASSESS feasibility of user-requested features
3. **SUCCESS CRITERIA DEFINITION** (collaborative):
- WORK WITH USER to define measurable goals
- SET realistic timelines based on user constraints
- ESTABLISH clear completion criteria for each feature
#### Phase 5: User Confirmation for Documentation
1. **IF analyze_only flag is TRUE**:
- GENERATE analysis report to console
- PROVIDE recommendations without creating files
- SUGGEST optimal phase planning approach
- EXIT without file creation
2. **MANDATORY USER CONFIRMATION** (for full phase creation):
- **PRESENT** complete phase plan summary to user including:
* Proposed phase name and scope
* Feature categories and priorities (based on user input)
* Estimated timeline and effort
* Technical approach and considerations
* Story breakdown and dependencies
- **ASK EXPLICITLY**: "Should I proceed with creating the phase documentation files based on this plan?"
- **REQUIRE** explicit user approval (yes/no response)
- **IF USER DECLINES**: Exit without creating any files, suggest refinements
- **IF USER APPROVES**: Proceed to Phase 6 (Documentation Generation)
#### Phase 6: Phase Documentation Generation (User Approved Only)
1. **ONLY EXECUTE if user explicitly approved in Phase 5**:
- **CREATE** phase directory: `/docs/project-context/phases/[phase_name]/`
- **GENERATE** phase brief at `/docs/project-context/phases/[phase_name]/phase-brief.md`:
```markdown
# Phase Brief: [phase_name]
**Phase Name:** [phase_name]
**Created:** [date]
**Previous Phase Completion:** [completion_percentage]%
**Estimated Duration:** [duration]
## Phase Overview
[Description of this development phase goals and focus]
## Previous Phase Summary
### Recent Development Context
[Brief summary of recent work for context, if relevant]
### Current Project State
[Assessment of current capabilities and foundation]
## Phase Goals & Objectives
### Primary Focus
[Main goal for this phase]
### Success Criteria
[Measurable outcomes and quality gates]
## Feature Categories
### Iteration Features (Improve Existing)
[User-identified improvements to existing functionality with effort estimates]
### Extension Features (Build New)
[User-requested new capabilities with effort estimates and dependencies]
### Foundation Features (Technical Improvements)
[User-approved technical improvements with effort estimates]
## Technical Considerations
### Required Refactoring
[Technical debt and refactoring needs]
### Performance Targets
[Specific performance goals and metrics]
### Quality Improvements
[Testing, accessibility, and code quality goals]
## Dependencies and Prerequisites
[What must be completed before starting each feature category]
## Risk Assessment
[Specific risks for this phase with mitigation strategies]
## Estimated Timeline
[Phase-based implementation plan with milestones]
## Story Planning
[List of stories to be created for this phase]
## Success Metrics
[How to measure phase completion and success]
```
- **CREATE** story queue at `/docs/project-context/phases/[phase_name]/story-queue.md`:
```markdown
# Story Queue: [phase_name]
## Ready for Development
[Stories ready to move to /docs/stories/development/]
## Blocked/Waiting
[Stories waiting for dependencies or decisions]
## Future Consideration
[Stories for later in the phase or next phase]
## Story Dependencies
[Dependency relationships between new stories]
```
- **UPDATE** main project brief:
- ADD phase summary based on user input
- UPDATE timeline with new phase
- REFERENCE new phase documentation
#### Phase 7: Story Planning and Organization
1. **USER STORY DEFINITION** (based on user requirements):
- CONVERT user requirements into actionable stories
- BREAK DOWN complex features into implementable chunks
- PRIORITIZE based on user preferences and dependencies
2. **STORY QUEUE POPULATION**:
- POPULATE story queue with user-defined priorities
- ORGANIZE by user-specified implementation order
- ESTIMATE effort based on user constraints and complexity preferences
3. **EXISTING WORK INTEGRATION**:
- REVIEW any incomplete stories for relevance to new phase
- SUGGEST continuation only if aligned with user goals
- IDENTIFY conflicts between existing work and new direction
#### Phase 8: Final Summary and Next Steps
1. **COMPLETION SUMMARY**:
- CONFIRM phase documentation has been created successfully
- SUMMARIZE what was implemented based on user requirements
- HIGHLIGHT key files created and their purposes
2. **GENERATE** operation summary with:
- User-defined feature goals and categories
- Estimated timeline based on user constraints
- Implementation approach aligned with user preferences
- Recommended next steps for development
3. **PROVIDE** actionable next steps:
- Commands to run to start new phase development
- Story creation recommendations based on user priorities
- Technical setup requirements for user-requested features
### OUTPUTS
- `/docs/project-context/phases/[phase_name]/phase-brief.md`: Focused phase documentation
- `/docs/project-context/phases/[phase_name]/story-queue.md`: Prioritized story backlog
- Updated `/docs/project-context/project-brief.md`: Phase completion summary
- Console summary: Completion analysis and phase planning results
### RULES
- **MUST** preserve all existing project documentation
- **MUST** prioritize user input over automated analysis
- **MUST** ask clarifying questions to understand user requirements
- **MUST** get explicit user approval before creating any new phase documentation
- **MUST** exit gracefully if user does not want a new phase
- **MUST NOT** create individual story files (only queue them)
- **SHOULD** focus on user-defined priorities and goals
- **SHOULD** build upon technical foundation already established
- **MUST** maintain consistency with main project brief
- **SHOULD** provide realistic effort estimates based on user constraints
### ERROR HANDLING
- **Missing project brief**: Error and suggest running `/sdd:project-brief` first
- **Insufficient user input**: Ask clarifying questions to gather requirements
- **File system errors**: Report specific error and suggest manual intervention
- **Invalid phase name**: Sanitize and suggest corrected version
### PERFORMANCE CONSIDERATIONS
- **Large story collections**: Process incrementally to avoid memory issues
- **Complex dependency analysis**: Limit analysis to direct dependencies
- **File I/O optimization**: Batch read operations for story analysis
### SECURITY CONSIDERATIONS
- **File permissions**: Ensure write access to project-context directory
- **Path validation**: Sanitize all file paths and directory names
- **Data integrity**: Validate story file parsing before analysis
### INTEGRATION WITH EXISTING WORKFLOW
- **Before**: Must have existing project brief
- **After**: Use `/sdd:story-new` to create individual stories from queue
- **Complements**: Works with `/sdd:project-brief` for major updates
- **Feeds into**: Standard story development workflow
### RELATED COMMANDS
- `/sdd:project-brief`: Update main project documentation
- `/sdd:story-new`: Create individual stories from phase queue
- `/sdd:project-status`: View current development state
- `/sdd:story-relationships`: Manage dependencies between stories
### VERSION HISTORY
- **v1.0**: Initial implementation with completion analysis and phase planning
- **v1.1**: Updated to prioritize interactive user input over automated analysis
- **v1.2**: Added mandatory user consultation before phase creation and explicit approval before documentation

322
commands/project-status.md Normal file
View File

@@ -0,0 +1,322 @@
# /sdd:project-status
## Meta
- Version: 2.0
- Category: project-management
- Complexity: medium
- Purpose: Display comprehensive project status with story breakdown and progress tracking
## Definition
**Purpose**: Show current project status including story breakdown, progress metrics, and actionable next steps.
**Syntax**: `/sdd:project-status`
## Parameters
None
## Behavior
### Step 1: Project Context Loading
1. CHECK if `/docs/project-context/` directory exists
2. IF missing:
- SUGGEST running `/sdd:project-init` first
- EXIT with guidance message
3. LOAD project-specific requirements from:
- `/docs/project-context/project-brief.md` (project title, timeline, objectives)
- `/docs/project-context/technical-stack.md` (technology information)
- `/docs/project-context/development-process.md` (stage definitions, workflows)
### Step 2: Project Brief Analysis
1. READ `/docs/project-context/project-brief.md`
2. EXTRACT:
- Project title and current status
- Project objectives and goals
- Target completion date (if specified)
- Total planned stories count
3. IF no project brief exists:
- SUGGEST using `/sdd:project-brief` to create one
- PROCEED with simplified view (Step 7)
### Step 3: Story Collection and Analysis
1. SCAN all story directories for project stories:
- `/docs/stories/development/` - Active implementation
- `/docs/stories/review/` - Code review stage
- `/docs/stories/qa/` - Quality assurance stage
- `/docs/stories/completed/` - Finished stories
- `/docs/stories/backlog/` - Planned stories (if exists)
2. FOR EACH story:
- COUNT stories by status category
- IDENTIFY blocked or stalled stories
- EXTRACT priority and effort estimates
- NOTE dependencies and relationships
3. CALCULATE metrics:
- Total stories across all stages
- Completion percentage
- Stories by priority (Core/Enhancement/Future)
- Active vs pending stories
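A minimal sketch of the collection pass, assuming one markdown file per story named `STORY-*.md`:
```bash
# Count stories in each workflow stage and total them.
total=0
for stage in development review qa completed backlog; do
  count=$(find "docs/stories/$stage" -maxdepth 1 -name 'STORY-*.md' 2>/dev/null | wc -l | tr -d ' ')
  total=$(( total + count ))
  printf '%-12s %s\n' "$stage:" "$count"
done
echo "Total: $total"
```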
### Step 4: Progress Analysis
1. CALCULATE completion metrics:
- Percentage complete: `(completed / total) × 100`
- Core stories progress
- Enhancement stories progress
- Future stories status
2. IDENTIFY current focus:
- Stories actively in development
- Stories ready to start (no blockers)
- Stories waiting on dependencies
3. ANALYZE timeline:
- Days since project start
- Days remaining to target
- Estimated completion based on velocity
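The progress and timeline math reduces to simple arithmetic; a minimal sketch with illustrative counts (GNU `date` assumed for the day difference):
```bash
# Completion percentage and days remaining to the target date.
completed=6; total=8           # illustrative counts
target="2025-10-15"            # illustrative target date

pct=$(( completed * 100 / total ))
days_left=$(( ( $(date -d "$target" +%s) - $(date +%s) ) / 86400 ))

echo "Progress: ${pct}% (${completed}/${total} stories complete)"
echo "Days remaining: ${days_left}"
```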
### Step 5: Issue Detection
1. HIGHLIGHT concerns:
- Stories behind schedule
- Blocked stories with dependencies
- Missing critical dependencies
- Critical path bottlenecks
- Long-running stories (potential issues)
### Step 6: Formatted Status Display
GENERATE comprehensive status report:
```
📊 PROJECT STATUS
=================
🏗️ [PROJECT TITLE]
├── Status: Active (Started: [Date], Target: [Date])
├── Progress: ████████░░ 75% (6/8 stories complete)
├── Core Stories: ✅ 4/4 complete
├── Enhancement: 🔄 2/3 in progress
├── Future: ⏳ 0/1 pending
└── Next: STORY-XXX-007 (Feature name) - Ready to start
📊 STORY BREAKDOWN BY STATUS
- ✅ Completed: 3 stories
- 🔄 In Development: 2 stories
- 🔍 In Review: 1 story
- 🧪 In QA: 1 story
- ⏳ Backlog: 1 story
- ⚠️ Blocked: 0 stories
🎯 CURRENT FOCUS
- Active: STORY-XXX-005 (Feature name) - In development
- Ready to start: STORY-XXX-007 (Feature name)
- Waiting: STORY-XXX-008 (depends on STORY-XXX-005)
📅 TIMELINE
- Started: [Start Date]
- Target: [Target Date]
- Estimated completion: [Calculated Date]
- Days remaining: [X days]
💡 NEXT ACTIONS
1. Continue STORY-XXX-005 (Feature name)
2. Start STORY-XXX-007 when ready
3. Review completed stories in /qa
🔗 USEFUL COMMANDS
1. /sdd:story-continue # Resume current work
2. /sdd:story-next # Get next recommended story
3. /sdd:story-status # See all individual story details
```
### Step 7: Simplified View (No Project Brief)
IF no project brief exists, DISPLAY simplified metrics:
```
📊 PROJECT STATUS (SIMPLIFIED)
===============================
📁 Story Distribution:
- Development: [count] stories
- Review: [count] stories
- QA: [count] stories
- Completed: [count] stories
- Total: [count] stories
💡 RECOMMENDATION
Create a project brief for better organization and tracking:
→ /sdd:project-brief
Available commands:
1. /sdd:story-new # Create new story
2. /sdd:story-status # View story details
3. /sdd:project-brief # Create project structure
```
### Step 8: Command Suggestions
SUGGEST relevant commands based on current state:
IF stories ready to start:
- `/sdd:story-implement [id]` for ready stories
IF work in progress:
- `/sdd:story-continue` for resuming work
IF no project structure:
- `/sdd:project-brief` to create organization
## Output Format
### Success Output
Comprehensive status display with:
- Visual progress indicators (████░░)
- Story breakdown by status (✅ 🔄 🔍 🧪 ⏳ ⚠️)
- Timeline information
- Actionable next steps
- Relevant command suggestions
### Simplified Output
Basic metrics when project brief is missing:
- Story counts by directory
- Total story count
- Suggestions for creating structure
## Examples
### Example 1: Active Project with Full Brief
```bash
INPUT:
/sdd:project-status
OUTPUT:
📊 PROJECT STATUS
=================
🏗️ E-commerce Checkout Flow
├── Status: Active (Started: 2025-09-01, Target: 2025-10-15)
├── Progress: ████████░░ 75% (6/8 stories complete)
├── Core Stories: ✅ 4/4 complete
├── Enhancement: 🔄 2/3 in progress
├── Future: ⏳ 0/1 pending
└── Next: STORY-CHK-007 (Tax calculation) - Ready to start
📊 STORY BREAKDOWN BY STATUS
- ✅ Completed: 4 stories
- 🔄 In Development: 2 stories
- 🔍 In Review: 0 stories
- 🧪 In QA: 1 story
- ⏳ Backlog: 1 story
- ⚠️ Blocked: 0 stories
🎯 CURRENT FOCUS
- Active: STORY-CHK-005 (Payment integration) - In development
- Ready to start: STORY-CHK-007 (Tax calculation)
- Waiting: STORY-CHK-008 (depends on STORY-CHK-007)
📅 TIMELINE
- Started: 2025-09-01
- Target: 2025-10-15
- Estimated completion: 2025-10-12
- Days remaining: 14 days
💡 NEXT ACTIONS
1. Continue STORY-CHK-005 (Payment integration)
2. Start STORY-CHK-007 when ready
3. Review completed STORY-CHK-004 in /qa
🔗 USEFUL COMMANDS
1. /sdd:story-continue # Resume current work
2. /sdd:story-next # Get next recommended story
3. /sdd:story-status # See all individual story details
```
### Example 2: Project Without Brief
```bash
INPUT:
/sdd:project-status
OUTPUT:
📊 PROJECT STATUS (SIMPLIFIED)
===============================
📁 Story Distribution:
- Development: 2 stories
- Review: 1 story
- QA: 1 story
- Completed: 3 stories
- Total: 7 stories
💡 RECOMMENDATION
Create a project brief for better organization and tracking:
→ /sdd:project-brief
Available commands:
1. /sdd:story-new # Create new story
2. /sdd:story-status # View story details
3. /sdd:project-brief # Create project structure
```
### Example 3: No Project Context
```bash
INPUT:
/sdd:project-status
OUTPUT:
⚠️ PROJECT CONTEXT NOT FOUND
The /docs/project-context/ directory does not exist.
To set up the story-driven development system, run:
→ /sdd:project-init
This will create:
- Project context files
- Story directory structure
- Development workflow templates
```
## Edge Cases
### No Stories Found
IF no stories exist in any directory:
- DISPLAY message about empty project
- SUGGEST creating first story with `/sdd:story-new`
- PROVIDE link to `/sdd:project-brief` for planning
### Blocked Stories Only
IF all pending stories are blocked:
- HIGHLIGHT critical blocker in status
- IDENTIFY which story is blocking others
- SUGGEST focusing on unblocking work
### All Stories Complete
IF all stories are completed:
- CONGRATULATE on project completion
- SUGGEST running `/sdd:project-phase` for next phase
- PROVIDE option to archive or start new phase
## Error Handling
- **Missing /docs/project-context/**: Suggest `/sdd:project-init` with clear instructions
- **Unreadable project brief**: Continue with simplified view, warn user
- **Corrupted story files**: Skip corrupted files, log warning, continue
- **Permission errors**: Report specific file/directory with permission issue
## Performance Considerations
- Story scanning reads only story metadata, not full file content
- Large story collections (100+ stories) are processed incrementally
- File I/O batched for efficiency
- Timeline calculations cached during single invocation
## Related Commands
- `/sdd:project-brief` - Create or update project documentation
- `/sdd:project-stories` - Detailed story list with dependencies
- `/sdd:project-phase` - Plan new development phase
- `/sdd:story-status` - Individual story details
- `/sdd:story-continue` - Resume active work
- `/sdd:story-next` - Get next recommended story
## Constraints
- ✅ MUST handle missing project context gracefully
- ✅ MUST provide actionable next steps
- ✅ MUST display progress visually
- 📊 SHOULD calculate accurate completion percentages
- 🎯 SHOULD identify ready-to-start stories
- ⚠️ MUST highlight blockers and issues clearly

418
commands/project-stories.md Normal file
View File

@@ -0,0 +1,418 @@
# /sdd:project-stories
## Meta
- Version: 2.0
- Category: project-management
- Complexity: medium
- Purpose: Display detailed story breakdown with dependencies and implementation order
## Definition
**Purpose**: List all stories for the current project with comprehensive dependency analysis, status tracking, and implementation recommendations.
**Syntax**: `/sdd:project-stories`
## Parameters
None
## Behavior
### Step 1: Project Brief Verification
1. CHECK for project brief at `/docs/project-context/project-brief.md`
2. IF no project brief exists:
- SUGGEST using `/sdd:project-brief` to create one
- EXIT with guidance message
### Step 2: Project Context Loading
1. READ project brief to extract:
- Project title and objectives
- Story categorization (Core/Enhancement/Future)
- Overall timeline and implementation phases
- Project goals and success criteria
2. READ story relationships file at `/docs/project-context/story-relationships.md`:
- Dependency mapping between stories
- Priority matrix with effort estimates
- Implementation phase groupings
- Critical path identification
### Step 3: Story Collection
SCAN all story directories to collect all project stories:
**Directories**:
- `/docs/stories/development/` - Active implementation
- `/docs/stories/review/` - Code review stage
- `/docs/stories/qa/` - Quality assurance testing
- `/docs/stories/completed/` - Finished and shipped
- `/docs/stories/backlog/` - Planned but not started (if exists)
FOR EACH story file:
- EXTRACT story ID, title, status
- READ dependencies and effort estimates
- IDENTIFY priority level (Core/Enhancement/Future)
- NOTE current stage in workflow
### Step 4: Story Analysis and Categorization
1. GROUP stories by priority:
- **Core Stories**: Must-have functionality (highest priority)
- **Enhancement Stories**: Should-have features (medium priority)
- **Future Stories**: Could-have improvements (lower priority)
2. ANALYZE dependencies:
- BUILD dependency graph
- IDENTIFY blocked stories (waiting on dependencies)
- FIND ready-to-start stories (all dependencies met)
- DETECT circular dependencies (if any)
3. CALCULATE metrics:
- Total story count by category
- Completion percentage per category
- Overall project progress
- Stories per status (Done/In Progress/Ready/Blocked)
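The dependency analysis above reduces to checking each story's declared dependencies against the completed directory; a minimal sketch, assuming each story file carries a line such as `dependencies: STORY-001, STORY-002` (or `None`):
```bash
# Decide whether a single story is ready to start or blocked.
story="docs/stories/backlog/STORY-004.md"   # hypothetical path
deps=$(grep -m1 '^dependencies:' "$story" | sed 's/^dependencies: *//; s/,/ /g')

status="Ready"
for dep in $deps; do
  [ "$dep" = "None" ] && continue
  if [ ! -f "docs/stories/completed/$dep.md" ]; then
    status="Blocked (waiting on $dep)"
    break
  fi
done
echo "$story -> $status"
```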
### Step 5: Formatted Story Display
GENERATE comprehensive story breakdown:
```
🏗️ PROJECT: [Title]
====================
📊 OVERVIEW
- Total Stories: 8
- Completed: 3 ✅
- In Progress: 2 🔄
- Pending: 3 ⏳
- Overall Progress: 37% ████░░░░░░
🎯 CORE STORIES (Must Have)
┌─────────────┬──────────────────────────────┬──────────────┬─────────┬──────────┐
│ Story ID │ Title │ Dependencies │ Status │ Effort │
├─────────────┼──────────────────────────────┼──────────────┼─────────┼──────────┤
│ STORY-001 │ Shopping cart persistence │ None │ ✅ Done │ Medium │
│ STORY-002 │ Payment processing │ STORY-001 │ ✅ Done │ Large │
│ STORY-003 │ Order confirmation │ STORY-002 │ 🔄 Dev │ Medium │
│ STORY-004 │ Inventory validation │ STORY-001 │ ⏳ Ready│ Small │
└─────────────┴──────────────────────────────┴──────────────┴─────────┴──────────┘
🚀 ENHANCEMENT STORIES (Should Have)
┌─────────────┬──────────────────────────────┬──────────────┬─────────┬──────────┐
│ STORY-005 │ Tax calculation │ STORY-003 │ ⏳ Wait │ Medium │
│ STORY-006 │ Shipping options │ STORY-003 │ ⏳ Wait │ Large │
│ STORY-007 │ Promo code system │ STORY-002 │ ✅ Done │ Medium │
└─────────────┴──────────────────────────────┴──────────────┴─────────┴──────────┘
🔮 FUTURE STORIES (Could Have)
┌─────────────┬──────────────────────────────┬──────────────┬─────────┬──────────┐
│ STORY-008 │ Order tracking │ STORY-003 │ ⏳ Wait │ Large │
└─────────────┴──────────────────────────────┴──────────────┴─────────┴──────────┘
🗂️ DEPENDENCY FLOW
STORY-001 (✅) → STORY-002 (✅) → STORY-003 (🔄)
↓ ↓
STORY-004 (⏳) STORY-005 (⏳)
STORY-006 (⏳)
STORY-008 (⏳)
STORY-007 (✅) ← STORY-002 (✅)
📅 SUGGESTED NEXT ACTIONS
1. 🔄 Continue STORY-003 (Order confirmation) - Currently in development
2. ✅ Ready: STORY-004 (Inventory validation) - No blockers
3. ⏸️ Blocked: STORY-005, STORY-006, STORY-008 - Wait for STORY-003
💡 COMMANDS TO USE
1. /sdd:story-implement STORY-004 # Start ready story
2. /sdd:story-continue STORY-003 # Resume current work
3. /sdd:story-status # Check individual story details
```
### Step 6: Opportunity Identification
1. IDENTIFY ready-to-start stories:
- All dependencies completed
- No blockers present
- Can be started immediately
2. FIND blocked stories:
- List dependencies that must complete first
- Show which story is blocking each blocked story
- Estimate when blockers might be resolved
3. HIGHLIGHT current work in progress:
- Active development stories
- Stories in review or QA
- Recently completed stories
4. DETECT parallelization opportunities:
- Stories with no shared dependencies
- Independent work streams
- Team capacity considerations
### Step 7: Branch and Integration Information
IF git branch information available:
- LIST active branches for in-progress stories
- IDENTIFY merge conflicts or integration points
- SUGGEST branch cleanup for completed stories
### Step 8: Project Health Metrics
CALCULATE and DISPLAY:
**Velocity Metrics**:
- Stories completed per week (average)
- Current sprint/phase progress
- Estimated completion date
**Risk Factors**:
- Number of blocked stories
- Large unstarted critical stories
- Dependencies on slow-moving work
- Long-running stories (potential issues)
**Quality Metrics**:
- Stories awaiting review
- Stories in QA
- Recent failure rates (if available)
### Step 9: Simplified View (No Project Brief)
IF no project brief exists, DISPLAY simplified listing:
```
📊 STORY OVERVIEW (SIMPLIFIED)
===============================
📁 Stories Found:
- Development: [count] stories
- Review: [count] stories
- QA: [count] stories
- Completed: [count] stories
- Total: [count] stories
[List of all stories with basic info]
💡 RECOMMENDATION
Create a project brief for better organization:
→ /sdd:project-brief
This will enable:
- Story prioritization
- Dependency tracking
- Timeline planning
- Progress metrics
```
## Output Format
### Standard Output
Comprehensive story display including:
- Overview with progress metrics
- Categorized story tables (Core/Enhancement/Future)
- Visual dependency flow diagram
- Status indicators (✅ 🔄 ⏳ ⏸️)
- Suggested next actions
- Relevant commands
### Simplified Output
Basic story listing when project brief is missing:
- Count by directory
- Simple list of all stories
- Recommendation to create project structure
## Examples
### Example 1: E-commerce Checkout Project
```bash
INPUT:
/sdd:project-stories
OUTPUT:
🏗️ PROJECT: E-commerce Checkout Flow
====================================
📊 OVERVIEW
- Total Stories: 8
- Completed: 3 ✅
- In Progress: 2 🔄
- Pending: 3 ⏳
- Overall Progress: 37% ████░░░░░░
🎯 CORE STORIES (Must Have)
┌─────────────┬──────────────────────────────┬──────────────┬─────────┬──────────┐
│ STORY-CHK-001 │ Shopping cart persistence │ None │ ✅ Done │ Medium │
│ STORY-CHK-002 │ Payment processing │ STORY-CHK-001│ ✅ Done │ Large │
│ STORY-CHK-003 │ Order confirmation │ STORY-CHK-002│ 🔄 Dev │ Medium │
│ STORY-CHK-004 │ Inventory validation │ STORY-CHK-001│ ⏳ Ready│ Small │
└─────────────┴──────────────────────────────┴──────────────┴─────────┴──────────┘
🚀 ENHANCEMENT STORIES (Should Have)
┌─────────────┬──────────────────────────────┬──────────────┬─────────┬──────────┐
│ STORY-CHK-005 │ Tax calculation │ STORY-CHK-003│ ⏳ Wait │ Medium │
│ STORY-CHK-006 │ Shipping options │ STORY-CHK-003│ ⏳ Wait │ Large │
│ STORY-CHK-007 │ Promo code system │ STORY-CHK-002│ ✅ Done │ Medium │
└─────────────┴──────────────────────────────┴──────────────┴─────────┴──────────┘
🔮 FUTURE STORIES (Could Have)
┌─────────────┬──────────────────────────────┬──────────────┬─────────┬──────────┐
│ STORY-CHK-008 │ Order tracking │ STORY-CHK-003│ ⏳ Wait │ Large │
└─────────────┴──────────────────────────────┴──────────────┴─────────┴──────────┘
🗂️ DEPENDENCY FLOW
STORY-CHK-001 (✅) → STORY-CHK-002 (✅) → STORY-CHK-003 (🔄)
      ↓                      ↓
STORY-CHK-004 (⏳)    STORY-CHK-005 (⏳)
                      STORY-CHK-006 (⏳)
                      STORY-CHK-008 (⏳)
STORY-CHK-007 (✅) ← STORY-CHK-002 (✅)
📅 SUGGESTED NEXT ACTIONS
1. 🔄 Continue STORY-CHK-003 (Order confirmation) - Currently in development
2. ✅ Ready: STORY-CHK-004 (Inventory validation) - No blockers, can start immediately
3. ⏸️ Blocked: STORY-CHK-005, STORY-CHK-006, STORY-CHK-008 - Waiting for STORY-CHK-003
💡 COMMANDS TO USE
1. /sdd:story-implement STORY-CHK-004 # Start ready story
2. /sdd:story-continue STORY-CHK-003 # Resume current work
3. /sdd:story-status # Check individual story details
```
### Example 2: Project Without Brief
```bash
INPUT:
/sdd:project-stories
OUTPUT:
📊 STORY OVERVIEW (SIMPLIFIED)
===============================
📁 Stories Found:
- Development: 2 stories
- Review: 1 story
- QA: 1 story
- Completed: 3 stories
- Total: 7 stories
Stories:
1. STORY-001: User authentication (✅ Completed)
2. STORY-002: Dashboard layout (✅ Completed)
3. STORY-003: Data export (✅ Completed)
4. STORY-004: Advanced filters (🔄 Development)
5. STORY-005: Report generation (🔄 Development)
6. STORY-006: Email notifications (🔍 Review)
7. STORY-007: Mobile responsive (🧪 QA)
💡 RECOMMENDATION
Create a project brief for better organization:
→ /sdd:project-brief
This will enable:
- Story prioritization (Core/Enhancement/Future)
- Dependency tracking and visualization
- Timeline planning and velocity metrics
- Progress tracking and health metrics
```
### Example 3: All Stories Blocked
```bash
INPUT:
/sdd:project-stories
OUTPUT:
🏗️ PROJECT: Mobile App Dashboard
==================================
📊 OVERVIEW
- Total Stories: 5
- Completed: 1 ✅
- In Progress: 1 🔄
- Blocked: 3 ⚠️
- Overall Progress: 20% ██░░░░░░░░
⚠️ CRITICAL: Multiple Blocked Stories
🎯 CORE STORIES (Must Have)
┌─────────────┬──────────────────────────────┬──────────────┬─────────┬──────────┐
│ STORY-001 │ API authentication │ None │ ✅ Done │ Large │
│ STORY-002 │ Data synchronization │ STORY-001 │ 🔄 Dev │ Large │
│ STORY-003 │ Offline mode │ STORY-002 │ ⚠️ Wait│ XLarge │
│ STORY-004 │ Push notifications │ STORY-002 │ ⚠️ Wait│ Medium │
│ STORY-005 │ Analytics dashboard │ STORY-002 │ ⚠️ Wait│ Large │
└─────────────┴──────────────────────────────┴──────────────┴─────────┴──────────┘
🗂️ DEPENDENCY FLOW
STORY-001 (✅) → STORY-002 (🔄) → STORY-003 (⚠️)
                      STORY-004 (⚠️)
                      STORY-005 (⚠️)
⚠️ BLOCKER ANALYSIS
- 3 stories blocked by STORY-002 (Data synchronization)
- Focus needed on completing STORY-002 to unblock pipeline
- Large story (STORY-003) waiting - may need breakdown
📅 RECOMMENDED ACTIONS
1. 🔥 PRIORITY: Complete STORY-002 to unblock 3 downstream stories
2. 💡 Consider breaking down STORY-003 (XLarge) into smaller stories
3. 📋 Review STORY-002 progress and identify any blockers
💡 COMMANDS TO USE
1. /sdd:story-continue STORY-002 # Focus on unblocking work
2. /sdd:story-status STORY-002 # Check detailed progress
3. /sdd:project-status # Overall project health check
```
## Edge Cases
### No Stories Found
- DISPLAY message about empty project
- SUGGEST creating first story with `/sdd:story-new`
- RECOMMEND running `/sdd:project-brief` for planning
### Circular Dependencies
- DETECT circular dependency loops
- HIGHLIGHT stories involved in cycle
- SUGGEST breaking circular dependencies
- PROVIDE guidance on refactoring story structure
### All Stories Complete
- CONGRATULATE on completion
- SHOW final statistics and velocity
- SUGGEST next phase planning with `/sdd:project-phase`
- RECOMMEND project retrospective
### Large Number of Stories
- GROUP stories by phase/sprint if available
- PROVIDE filtering options
- SUMMARIZE rather than showing full tables
- SUGGEST using `/sdd:story-status` for individual details
## Error Handling
- **Missing project brief**: Suggest `/sdd:project-brief`, continue with simplified view
- **Corrupted story files**: Skip corrupted files, log warnings, continue processing
- **Missing dependencies**: Highlight unresolved dependencies, suggest fixes
- **Permission errors**: Report specific files with access issues
## Performance Considerations
- Story file reads optimized with metadata-only scanning
- Large collections (50+ stories) use progressive loading
- Dependency graph calculation cached per invocation
- Table formatting optimizes for terminal width
## Related Commands
- `/sdd:project-brief` - Create or update project documentation
- `/sdd:project-status` - High-level project progress view
- `/sdd:project-phase` - Plan next development phase
- `/sdd:story-status` - Individual story detailed view
- `/sdd:story-implement [id]` - Start working on a ready story
- `/sdd:story-continue` - Resume active work
## Constraints
- ✅ MUST group stories by priority category
- ✅ MUST show dependency relationships visually
- ✅ MUST identify ready-to-start and blocked stories
- 📊 SHOULD calculate accurate progress metrics
- 🎯 SHOULD provide actionable next steps
- ⚠️ MUST highlight critical blockers clearly
- 🔄 SHOULD show parallelization opportunities

231
commands/story-blocked.md Normal file
View File

@@ -0,0 +1,231 @@
# /sdd:story-blocked
Marks story as blocked and logs the reason.
## Implementation
**Format**: Imperative (comprehensive)
**Actions**: Multi-step status update with tracking
**Modifications**: Updates story file, adds progress log entry
### Input Parameters
```
/sdd:story-blocked [STORY-ID] [reason]
```
- `STORY-ID` (optional): Defaults to current active story
- `reason` (optional): Prompted if not provided
### Execution Steps
#### 1. Identify Target Story
- If `STORY-ID` provided, locate story file across all directories
- Otherwise, determine current active story from git branch
- Verify story exists and is not already completed
#### 2. Capture Blocking Details
Prompt user for:
```
🚫 BLOCKING ISSUE
================
Story: [ID] - [Title]
Blocked since: [timestamp]
Reason for block:
- [ ] Waiting on external dependency
- [ ] Need clarification on requirements
- [ ] Technical issue/bug
- [ ] Waiting for code review
- [ ] Infrastructure/environment issue
- [ ] Missing access/permissions
- [ ] Other: [specify]
Detailed description:
[What exactly is blocking progress]
What's needed to unblock:
[Specific action or information needed]
Who can help:
[Person/team who can resolve]
Estimated resolution:
[When might this be resolved]
```
#### 3. Update Story File
Modifications to make:
1. Update YAML frontmatter:
```yaml
status: blocked
blocked_since: [ISO timestamp]
blocked_reason: [selected reason]
```
2. Add progress log entry:
```markdown
## [Timestamp] - BLOCKED
**Reason**: [selected reason]
**Details**: [detailed description]
**Needed to unblock**: [requirements]
**Can help**: [person/team]
**Estimated resolution**: [timeframe]
**Completed before block**:
- [List work finished before blocking]
```
3. Add `[BLOCKED]` tag to story title if not present
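A minimal sketch of the frontmatter update above, assuming GNU `sed` and a single `status:` line inside the YAML frontmatter:
```bash
# Mark a story file as blocked and record when the block started.
STORY_FILE="docs/stories/development/STORY-2025-001.md"   # hypothetical path
NOW=$(date -u +"%Y-%m-%dT%H:%M:%SZ")

sed -i "s/^status: .*/status: blocked/" "$STORY_FILE"
grep -q '^blocked_since:' "$STORY_FILE" \
  || sed -i "/^status: blocked/a blocked_since: $NOW" "$STORY_FILE"
```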
#### 4. Suggest Alternative Work
```
💡 WHILE BLOCKED, YOU COULD:
Related work:
- [ ] Write tests for completed parts
- [ ] Document what's built so far
- [ ] Refactor existing code
Other stories:
- [Story X]: Ready to start
- [Story Y]: Quick bug fix
Improvements:
- Update documentation
- Fix technical debt
- Review other PRs
```
#### 5. Track Blocked Time
Calculate and display:
```
⏱️ BLOCKED TIME TRACKING
This story:
- Previously blocked: [X hours total]
- Current block: Started [timestamp]
All stories this week:
- Total blocked time: [X hours]
- Main block reasons: [Top 3]
```
Update story metadata:
```yaml
blocked_time_total: [X hours]
blocked_instances: [count]
```
#### 6. Create Follow-up Reminder
```
📅 FOLLOW-UP SCHEDULED
Check status in: [X hours/days]
Reminder for: [date/time]
Auto-check will:
- Verify if still blocked
- Suggest escalation if needed
- Track resolution time
```
Add to story metadata:
```yaml
follow_up_date: [ISO timestamp]
```
#### 7. Pattern Detection
If the same blocking reason appears multiple times:
```
⚠️ PATTERN DETECTED
This is the [Nth] time blocked by:
[Similar blocking reason]
Consider:
- Process improvement
- Better communication
- Different approach
```
Add note to project context or retrospective file.
### Output Format
#### Block Confirmation
```
✅ STORY BLOCKED
===============
Story: [STORY-ID] - [Title]
Blocked since: [timestamp]
Reason: [selected reason]
📊 BLOCK REPORT
==============
Current blocks:
1. [Story ID]: [Reason] - [Duration]
2. [Story ID]: [Reason] - [Duration]
This week's blocks:
- External deps: [X hours]
- Clarifications: [X hours]
- Technical: [X hours]
Impact:
- Velocity reduced by [X]%
- [X] stories delayed
```
#### Unblock Procedure
For future reference:
When running `/sdd:story-continue` or `/sdd:story-unblock`:
```
✅ UNBLOCKED!
============
Was blocked for: [total time]
Resolution: [what resolved it]
Actions taken:
- Remove [BLOCKED] tag
- Update status to previous state
- Log resolution in progress
- Resume work
```
#### Escalation Path
If blocked > 4 hours:
```
⚠️ ESCALATION RECOMMENDED
========================
Blocked for: [X hours]
Actions:
- Notify: [escalation contact]
- Consider: Alternative approach
- Document: For retrospective
```
### Notes
- Modifies story YAML frontmatter and progress log
- Tracks blocking time in metadata
- Suggests productive alternatives
- Does not automatically switch stories (waits for user decision)
- Creates follow-up reminder for resolution checking

760
commands/story-complete.md Normal file
View File

@@ -0,0 +1,760 @@
# /sdd:story-complete
## Meta
- Version: 2.0
- Category: workflow
- Complexity: comprehensive
- Purpose: Archive completed story, extract learnings, and update project metrics
## Definition
**Purpose**: Archive a shipped story, capture comprehensive lessons learned, extract reusable components, and update project metrics for continuous improvement.
**Syntax**: `/sdd:story-complete <story_id>`
## Parameters
| Parameter | Type | Required | Default | Description | Validation |
|-----------|------|----------|---------|-------------|------------|
| story_id | string | Yes | - | Story identifier (e.g., "STORY-2025-001") | Must match pattern STORY-\d{4}-\d{3} |
## INSTRUCTION: Archive Completed Story
### INPUTS
- story_id: Story identifier from /docs/stories/completed/
- Story file with completion data
- Project context from /docs/project-context/
### PROCESS
#### Phase 1: Verification
1. **VERIFY** story is in `/docs/stories/completed/` directory
2. IF NOT in completed:
- CHECK `/docs/stories/qa/` - suggest running `/sdd:story-ship` first
- CHECK `/docs/stories/review/` - suggest completing QA and shipping
- EXIT with appropriate guidance
3. **READ** story file and extract:
- Start date and completion date
- All progress log entries
- Test results and QA outcomes
- Implementation checklist status
- Success criteria completion
#### Phase 2: Metrics Collection
1. **CALCULATE** timeline metrics:
- Total duration (start to completion)
- Time in each stage (development, review, qa)
- Calendar days vs active working days
2. **ANALYZE** story progress log to determine:
- Planning time: Initial setup and design
- Implementation time: Active coding
- Testing time: Test writing and debugging
- Review/QA time: Code review and validation
3. **EXTRACT** quality metrics:
- Bugs found in review (count from progress log)
- Bugs found in QA (count from progress log)
- Test coverage achieved (from test results)
- Number of commits (from git log)
- Files changed (from git log)
4. **ASSESS** business impact:
- Features delivered vs planned
- User-facing improvements
- Performance improvements
- Technical debt addressed
5. **GENERATE** metrics summary:
```
📊 STORY METRICS
════════════════
Timeline:
- Started: [YYYY-MM-DD]
- Completed: [YYYY-MM-DD]
- Total duration: [X] days ([Y] working days)
- Development: [X] days
- Review: [Y] days
- QA: [Z] days
Effort Breakdown:
- Planning: [X] hours
- Implementation: [Y] hours
- Testing: [Z] hours
- Review/QA: [W] hours
- Total: [TOTAL] hours
Quality Metrics:
- Commits: [count]
- Files changed: [count]
- Bugs found in review: [count]
- Bugs found in QA: [count]
- Test coverage: [X%]
- Tests added: [count]
Velocity:
- Story points (if applicable): [points]
- Actual vs estimated: [comparison]
```
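The duration figures above can be derived mechanically once the start and completion dates are known. A sketch, assuming dates are already parsed and counting working days inclusively (holiday calendars are ignored here):
```python
from datetime import date, timedelta

def working_days(start: date, end: date) -> int:
    """Count Monday-Friday days from start to end, inclusive."""
    days, current = 0, start
    while current <= end:
        if current.weekday() < 5:  # 0-4 are Monday-Friday
            days += 1
        current += timedelta(days=1)
    return days

started, completed = date(2025, 3, 3), date(2025, 3, 15)
calendar_days = (completed - started).days
print(f"Total duration: {calendar_days} days ({working_days(started, completed)} working days)")
```
The exact working-day figure depends on the counting convention and any holidays, so treat the helper as illustrative rather than normative.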
#### Phase 3: Lessons Learned Capture
1. **PROMPT** user for lessons learned (if not in story file):
- What went well?
- What could be improved?
- Any surprises or unexpected challenges?
- Technical insights gained?
- Process improvements identified?
2. **ANALYZE** story file for:
- Challenges documented in progress log
- Solutions that worked well
- Technical approaches that succeeded/failed
- Testing strategies effectiveness
3. **COMPILE** comprehensive lessons:
```
📚 LESSONS LEARNED
══════════════════
What Went Well:
- [Success 1: with specific details]
- [Success 2: with specific details]
- [Success 3: with specific details]
What Could Improve:
- [Improvement 1: with action items]
- [Improvement 2: with action items]
- [Improvement 3: with action items]
Surprises & Challenges:
- [Unexpected finding 1]
- [Unexpected finding 2]
Technical Insights:
- [New technique/pattern learned]
- [Library/tool discovery]
- [Architecture decision validated/challenged]
Process Improvements:
- [Workflow enhancement suggestion]
- [Tool/automation opportunity]
For Next Time:
- [ ] [Specific action item 1]
- [ ] [Specific action item 2]
- [ ] [Specific action item 3]
```
#### Phase 4: Documentation Updates
1. **IDENTIFY** documentation that needs updating:
- README files with new features
- API documentation with new endpoints
- Architecture diagrams with changes
- User guides with new workflows
2. **EXTRACT** reusable patterns from implementation:
- Code patterns to standardize
- Configuration templates
- Testing approaches
- Deployment procedures
3. **UPDATE** project context if needed:
- Add new tools to `/docs/project-context/technical-stack.md`
- Document new patterns in `/docs/project-context/coding-standards.md`
- Update process learnings in `/docs/project-context/development-process.md`
#### Phase 5: Reusable Component Extraction
1. **SCAN** implementation for reusable code:
- Utility functions to extract
- Components to generalize
- Middleware/helpers to share
- Configuration patterns
2. **IDENTIFY** candidates for:
- Shared component library
- Internal utility package
- Starter templates
- Boilerplate generators
3. **DOCUMENT** reusable assets:
```
🔧 REUSABLE COMPONENTS
═════════════════════
Created:
- [Component/utility name]: [path] - [description]
- [Component/utility name]: [path] - [description]
Patterns Documented:
- [Pattern name]: [location] - [use case]
Suggested Extractions:
- [ ] [Code to extract]: [benefit]
- [ ] [Component to generalize]: [benefit]
```
#### Phase 6: Story Archival
1. **UPDATE** story file with completion data using **EXACT STRUCTURE**:
**APPEND** the following sections to the story file in this exact order:
```markdown
---
## 📊 COMPLETION METRICS
**Archived:** [YYYY-MM-DD HH:MM]
**Total Duration:** [X] calendar days ([Y] working days)
**Status:** Completed and Archived
### Timeline
- **Started:** [YYYY-MM-DD]
- **Completed:** [YYYY-MM-DD]
- **Development:** [X] days
- **Review:** [Y] days
- **QA:** [Z] days
### Effort
- **Planning:** [X] hours
- **Implementation:** [Y] hours
- **Testing:** [Z] hours
- **Review/QA:** [W] hours
- **Total:** [TOTAL] hours
### Quality
- **Commits:** [count]
- **Files Changed:** [count]
- **Tests Added:** [count]
- **Test Coverage:** [X%]
- **Bugs in Review:** [count]
- **Bugs in QA:** [count]
### Velocity
- **Story Points:** [points] (if applicable)
- **Estimated vs Actual:** [comparison]
---
## 📚 RETROSPECTIVE
### What Went Well
- [Specific success 1 with details]
- [Specific success 2 with details]
- [Specific success 3 with details]
### What Could Improve
- [Specific improvement 1 with action]
- [Specific improvement 2 with action]
- [Specific improvement 3 with action]
### Surprises & Challenges
- [Unexpected finding 1]
- [Unexpected finding 2]
### Technical Insights
- [Technical learning 1]
- [Technical learning 2]
- [Technical learning 3]
### Process Improvements
- [Process improvement 1]
- [Process improvement 2]
### Action Items for Next Time
- [ ] [Specific action 1]
- [ ] [Specific action 2]
- [ ] [Specific action 3]
---
## 🔧 REUSABLE COMPONENTS
### Components Created
- **[Component Name]**: `[file path]` - [description]
- **[Component Name]**: `[file path]` - [description]
### Patterns Documented
- **[Pattern Name]**: [location] - [use case]
- **[Pattern Name]**: [location] - [use case]
### Extraction Opportunities
- [ ] **[Code to Extract]**: [benefit]
- [ ] **[Component to Generalize]**: [benefit]
---
## 📈 IMPACT ASSESSMENT
### User Impact
[Description of how this benefits end users]
### Business Impact
[Description of business value delivered]
### Technical Impact
[Description of technical improvements or debt addressed]
### Performance Metrics (if applicable)
- [Metric 1]: [baseline] → [achieved]
- [Metric 2]: [baseline] → [achieved]
---
## 🎯 KEY ACHIEVEMENTS
- [Major achievement 1 with specific deliverable]
- [Major achievement 2 with specific deliverable]
- [Major achievement 3 with specific deliverable]
---
## 🚀 TECHNICAL ADDITIONS
- [New capability/feature 1]
- [New pattern/approach 2]
- [Infrastructure/tooling improvement 3]
---
## 📋 FOLLOW-UP ITEMS
### Technical Debt
- [Technical debt item 1]
- [Technical debt item 2]
### Future Enhancements
- [Enhancement opportunity 1]
- [Enhancement opportunity 2]
### Related Stories
- [Dependency or follow-up story 1]
- [Dependency or follow-up story 2]
---
**Archive Status:** ✅ Complete
**Indexed:** Yes - `/docs/stories/completed/INDEX.md`
```
**NOTES:**
- ALL sections are REQUIRED (use "N/A" or "None" if section doesn't apply)
- Use consistent formatting with exact heading levels shown
- Always include separator lines (`---`) between major sections
- Timestamps must use format: YYYY-MM-DD HH:MM
- Numbers should include units (days, hours, count, %)
- All lists must use consistent bullet format (- or checkbox [ ])
2. **RENAME** story file:
- FROM: `/docs/stories/completed/[story-id].md`
- TO: `/docs/stories/completed/[ARCHIVED]-[story-id].md`
3. **CREATE OR UPDATE** `/docs/stories/completed/INDEX.md`:
**IF FILE DOESN'T EXIST**, create with this header:
```markdown
# Completed Stories Index
A chronological index of all completed and archived stories with key metrics.
## Stories
```
**THEN ADD** story entry using this EXACT format:
```markdown
### [STORY-ID] - [Title]
- **Completed:** [YYYY-MM-DD]
- **Duration:** [X] days ([Y] working days)
- **Test Coverage:** [Z%]
- **Impact:** [one-line business impact summary]
- **File:** [`[ARCHIVED]-[STORY-ID].md`](./%5BARCHIVED%5D-[STORY-ID].md)
```
**NOTES:**
- Add newest entries at the TOP (reverse chronological order)
- Maintain consistent spacing between entries (one blank line)
- Use URL-encoded file links for [ARCHIVED] prefix
4. **COMPRESS** large artifacts (optional):
- Screenshots folder
- Test recordings
- Large log files
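A compact sketch of the rename and INDEX update in this phase, assuming the paths above; the helper name and the header fallback are illustrative:
```python
from pathlib import Path

def archive_story(story_path: Path, index_entry: str) -> Path:
    """Rename a story with the [ARCHIVED] prefix and prepend its INDEX entry."""
    archived = story_path.with_name(f"[ARCHIVED]-{story_path.name}")
    story_path.rename(archived)  # archive only, never delete

    index = story_path.parent / "INDEX.md"
    header = ("# Completed Stories Index\n"
              "A chronological index of all completed and archived stories "
              "with key metrics.\n\n## Stories\n")
    if index.exists():
        lines = index.read_text().splitlines(keepends=True)
        for i, line in enumerate(lines):
            if line.strip() == "## Stories":
                # Newest entries go directly under the heading (reverse chronological).
                lines.insert(i + 1, "\n" + index_entry + "\n")
                break
        else:
            lines.append("\n" + index_entry + "\n")
        index.write_text("".join(lines))
    else:
        index.write_text(header + "\n" + index_entry + "\n")
    return archived
```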
#### Phase 7: Project Metrics Update
1. **UPDATE** project-level metrics:
- Increment completed stories count
- Add to velocity tracking
- Update cycle time averages
- Calculate success rate
2. **CREATE OR UPDATE** `/docs/project-context/project-metrics.md`:
- Total stories completed
- Average cycle time
- Average time per stage
- Quality metrics trends
- Velocity trends
3. **IDENTIFY** trends:
- Improving or degrading metrics
- Bottlenecks in process
- Quality improvements
- Velocity patterns
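For the cycle-time average specifically, one way to fold the newly archived story into the running figure (a sketch; the stored metrics format is not prescribed here):
```python
def update_cycle_time_average(current_avg: float, completed_count: int,
                              new_cycle_days: float) -> tuple[float, int]:
    """Fold one more completed story into the running average cycle time."""
    total = current_avg * completed_count + new_cycle_days
    return total / (completed_count + 1), completed_count + 1

avg, count = update_cycle_time_average(current_avg=10.5, completed_count=4, new_cycle_days=12)
print(f"Average cycle time: {avg:.1f} days over {count} stories")
```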
#### Phase 8: Completion Report
1. **GENERATE** comprehensive completion report:
```
✅ STORY COMPLETION REPORT
══════════════════════════
Story: [STORY-ID] - [Title]
Archived: [YYYY-MM-DD]
SUMMARY:
Successfully delivered [description of implementation] which
[business impact and user value provided].
KEY ACHIEVEMENTS:
• [Achievement 1: specific deliverable]
• [Achievement 2: specific deliverable]
• [Achievement 3: specific deliverable]
TECHNICAL ADDITIONS:
• [New capability/feature added]
• [New pattern/approach implemented]
• [Infrastructure/tooling improvement]
QUALITY METRICS:
• Duration: [X] days ([Y] working days)
• Test coverage: [Z%]
• Bugs found: [review: X, qa: Y]
• Performance: [metrics if applicable]
TEAM LEARNINGS:
• [Key learning 1]
• [Key learning 2]
• [Key learning 3]
REUSABLE ASSETS:
• [Component/utility created]
• [Pattern documented]
• [Template created]
FOLLOW-UP ITEMS:
• [Technical debt to address]
• [Future enhancement opportunity]
• [Process improvement action]
IMPACT:
• Users: [description of user benefit]
• Business: [description of business value]
• Technical: [description of technical improvement]
```
2. **DISPLAY** next steps:
```
💡 NEXT STEPS:
1. Review follow-up items for backlog
2. Share learnings with team
3. Update related documentation
4. /sdd:project-status to view remaining stories
5. /sdd:story-new to begin next story
```
### OUTPUTS
- `/docs/stories/completed/[ARCHIVED]-[story-id].md` - Archived story with metrics and learnings
- `/docs/stories/completed/INDEX.md` - Updated story index (created or updated)
- `/docs/project-context/project-metrics.md` - Updated project metrics (created or updated)
- Updated documentation files (as needed)
- Completion report (displayed to user)
### RULES
- MUST verify story is in `/docs/stories/completed/` before processing
- MUST collect comprehensive metrics from story timeline
- MUST capture lessons learned (prompt user if not documented)
- MUST use EXACT document structure defined in Phase 6 Step 1 (no variation allowed)
- MUST include ALL required sections (use "N/A" or "None" if not applicable)
- MUST maintain consistent formatting: heading levels, bullet styles, separators
- MUST use standard timestamp format: YYYY-MM-DD HH:MM
- SHOULD identify reusable components and patterns
- SHOULD update project-level metrics
- MUST rename file with [ARCHIVED] prefix
- MUST create or update `/docs/stories/completed/INDEX.md` with exact format specified
- MUST add newest INDEX entries at TOP in reverse chronological order
- ALWAYS generate detailed completion report
- SHOULD update project context if new tools/patterns introduced
- NEVER delete story files (archive only)
- NEVER omit required sections from document structure
## Examples
### Example 1: Complete Story with Full Metrics
```bash
INPUT:
/sdd:story-complete STORY-2025-001
PROCESS:
→ Verifying story location...
→ Found in /docs/stories/completed/STORY-2025-001.md
→ Analyzing story timeline and progress log...
→ Calculating metrics...
→ Extracting lessons learned...
→ Identifying reusable components...
OUTPUT:
✅ STORY COMPLETION REPORT
══════════════════════════
Story: STORY-2025-001 - User Authentication System
Archived: 2025-03-15
SUMMARY:
Successfully delivered a complete user authentication system with
email/password login, registration, password reset, and session
management. This provides secure user access and enables all future
user-specific features.
KEY ACHIEVEMENTS:
• Implemented secure authentication with bcrypt hashing
• Added comprehensive test coverage (95% for auth components)
• Created reusable authentication middleware
• Documented authentication patterns for future features
TECHNICAL ADDITIONS:
• AuthMiddleware for route protection
• SessionManager utility for token handling
• Reusable LoginForm and RegistrationForm components
• Comprehensive authentication test suite
QUALITY METRICS:
• Duration: 12 days (9 working days)
• Test coverage: 95%
• Bugs found: review: 2, qa: 1
• Performance: Login < 200ms, avg 150ms
TEAM LEARNINGS:
• JWT implementation was simpler than session-based auth
• Browser testing caught critical edge cases missed in unit tests
• Early security review prevented potential vulnerabilities
• Test-driven approach significantly reduced bugs in QA
REUSABLE ASSETS:
• AuthMiddleware: app/Middleware/AuthMiddleware.php
• SessionManager: app/Utils/SessionManager.php
• Authentication test helpers: tests/Helpers/AuthHelper.php
• LoginForm component: resources/views/components/LoginForm.blade.php
FOLLOW-UP ITEMS:
• Consider adding OAuth providers (Google, GitHub)
• Implement 2FA in future security story
• Add rate limiting to prevent brute force attacks
• Extract auth utilities to shared package
IMPACT:
• Users: Secure account creation and access to personalized features
• Business: Foundation for user-specific features and data
• Technical: Established authentication pattern for all future features
📊 STORY METRICS
════════════════
Timeline:
- Started: 2025-03-03
- Completed: 2025-03-15
- Total duration: 12 days (9 working days)
- Development: 6 days
- Review: 2 days
- QA: 1 day
Effort Breakdown:
- Planning: 4 hours
- Implementation: 28 hours
- Testing: 12 hours
- Review/QA: 6 hours
- Total: 50 hours
Quality Metrics:
- Commits: 24
- Files changed: 18
- Bugs found in review: 2
- Bugs found in QA: 1
- Test coverage: 95%
- Tests added: 36
📚 LESSONS LEARNED
══════════════════
What Went Well:
- Test-driven development caught edge cases early
- Browser testing revealed UX issues unit tests missed
- Early security review prevented auth vulnerabilities
- Modular design made testing straightforward
What Could Improve:
- Could have started with browser tests earlier
- Initial API design needed refinement during review
- Password reset flow took longer than estimated
- Documentation could have been written alongside code
Surprises & Challenges:
- JWT library had better DX than expected
- Session persistence across subdomains required extra config
- Password reset tokens needed expiration strategy
- Browser autofill behavior required special handling
Technical Insights:
- JWT significantly simpler than session-based auth for SPA
- httpOnly cookies provide better security than localStorage
- Middleware composition pattern works well for auth
- Playwright's auto-wait eliminated flaky tests
Process Improvements:
- Start browser tests earlier in development
- Document API contracts before implementation
- Include security review checklist in story template
- Create reusable test data factories upfront
For Next Time:
- [ ] Write API documentation as first step
- [ ] Create browser test scaffold when starting story
- [ ] Review security checklist during implementation
- [ ] Extract utilities earlier to improve testability
💡 NEXT STEPS:
1. Review follow-up items for backlog
2. Share learnings with team
3. Update authentication documentation
4. /sdd:project-status to view remaining stories
5. /sdd:story-new to begin next story
→ Story archived: /docs/stories/completed/[ARCHIVED]-STORY-2025-001.md
→ Story index updated: /docs/stories/completed/INDEX.md
→ Project metrics updated: /docs/project-context/project-metrics.md
```
### Example 2: Story Not Ready for Archival
```bash
INPUT:
/sdd:story-complete STORY-2025-002
PROCESS:
→ Searching for STORY-2025-002...
→ Found in /docs/stories/qa/
OUTPUT:
❌ Story Not Ready for Completion
═══════════════════════════════
Story: STORY-2025-002 - User Profile Management
Location: /docs/stories/qa/
The story has not been shipped to production yet.
Current Status: qa
Required Status: complete (in /docs/stories/completed/)
NEXT STEPS:
1. /sdd:story-ship STORY-2025-002 # Ship to production
2. /sdd:story-complete STORY-2025-002 # Archive after shipping
Note: Stories must be shipped before archival to ensure
all deployment data and production metrics are captured.
```
### Example 3: Story Missing Lessons Learned
```bash
INPUT:
/sdd:story-complete STORY-2025-003
PROCESS:
→ Verifying story location...
→ Found in /docs/stories/completed/STORY-2025-003.md
→ Analyzing story data...
→ Lessons learned section is empty
What went well in this story? (Enter each, then empty line when done)
> Test-driven approach worked great
> Reused authentication patterns from STORY-001
> Performance exceeded expectations
>
What could be improved? (Enter each, then empty line when done)
> Initial design needed iteration
> Could have communicated progress better
>
Any technical insights gained?
> Discovered excellent caching pattern for profile data
> Learned avatar upload optimization techniques
>
→ Capturing lessons learned...
→ Generating completion report...
OUTPUT:
[Full completion report with user-provided lessons learned integrated]
```
## Edge Cases
### Story in Wrong Directory
- DETECT story not in `/docs/stories/completed/`
- IDENTIFY current location (qa, review, development, backlog)
- SUGGEST appropriate next command to progress story
- OFFER to force complete if user confirms (with warning)
### Missing Metrics Data
- DETECT incomplete progress log
- CALCULATE what metrics are possible
- NOTE missing data in report
- SUGGEST improving progress logging for future stories
- CONTINUE with available data
### Empty Lessons Learned
- DETECT empty "Lessons Learned" section
- PROMPT user for key learnings interactively
- ANALYZE progress log for challenges and solutions
- GENERATE lessons from available story data
- ENCOURAGE documenting lessons during development
### Project Metrics File Doesn't Exist
- CREATE `/docs/project-context/project-metrics.md` with initial structure
- INITIALIZE with current story as first entry
- SET baseline metrics
- CONTINUE with normal completion
## Error Handling
- **Story ID missing**: Return "Error: Story ID required. Usage: /sdd:story-complete <story_id>"
- **Invalid story ID format**: Return "Error: Invalid story ID format. Expected: STORY-YYYY-NNN"
- **Story not found**: Search all directories and report current location
- **Story not shipped**: Suggest completing QA and shipping before archival
- **File rename error**: Log error, keep original name, note in report
- **Metrics calculation error**: Use available data, note gaps in report
## Performance Considerations
- Parse story file and git log only once
- Cache git log results for session
- Generate report asynchronously if processing large story
- Compress artifacts in background after report generation
## Related Commands
- `/sdd:story-ship` - Ship story to production before archival
- `/sdd:story-metrics` - View project-wide metrics and trends
- `/sdd:project-status` - View all project stories and progress
- `/sdd:story-new` - Create next story to work on
## Constraints
- ✅ MUST verify story is shipped before archival
- ✅ MUST collect comprehensive metrics
- ✅ MUST capture lessons learned (prompt if missing)
- 📋 MUST use exact document structure from Phase 6 Step 1 - NO VARIATION
- 📋 MUST include ALL sections even if content is "N/A" or "None"
- 📋 MUST maintain consistent heading levels, bullet styles, and separators
- 📊 MUST create or update `/docs/stories/completed/INDEX.md` with exact format
- 📊 MUST add newest INDEX entries at TOP (reverse chronological)
- 📊 SHOULD update project-level metrics
- 🔧 SHOULD identify reusable components
- 📝 MUST generate detailed completion report
- 💾 MUST rename file with [ARCHIVED] prefix
- 🚫 NEVER delete story files
- 🚫 NEVER omit required sections
- ⏱️ MUST use standard timestamp format: YYYY-MM-DD HH:MM

commands/story-continue.md Normal file

@@ -0,0 +1,596 @@
# /sdd:story-continue
## Meta
- Version: 2.0
- Category: workflow
- Complexity: standard
- Purpose: Resume work on the most recently active story with context-aware status reporting
## Definition
**Purpose**: Resume development on the most recently modified story by displaying current status, git branch information, and suggesting appropriate next actions based on story stage.
**Syntax**: `/sdd:story-continue`
## Parameters
| Parameter | Type | Required | Default | Description | Validation |
|-----------|------|----------|---------|-------------|------------|
| - | - | - | - | No parameters required | - |
## INSTRUCTION: Continue Story Development
### INPUTS
- Story files from `/docs/stories/development/`, `/docs/stories/review/`, `/docs/stories/qa/`
- Current git branch and status
- Project context from `/docs/project-context/` (optional for enhanced guidance)
### PROCESS
#### Phase 1: Story Discovery
1. **SEARCH** for most recently modified story in order:
- CHECK `/docs/stories/development/` (highest priority)
- CHECK `/docs/stories/review/` (if no development stories)
- CHECK `/docs/stories/qa/` (if no review stories)
- CHECK `/docs/stories/backlog/` (fallback if nothing active)
2. **IDENTIFY** most recent story by:
- SORT by file modification time (most recent first)
- SELECT first result
- IF no stories found: PROCEED to Phase 6 (No Active Stories)
3. **EXTRACT** story ID from filename:
- PARSE filename pattern: `STORY-YYYY-NNN.md`
- VALIDATE story ID format
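A sketch of this discovery pass, assuming the directory layout above (stage directories that do not exist yet are simply skipped):
```python
from pathlib import Path

SEARCH_ORDER = ["development", "review", "qa", "backlog"]

def most_recent_story(docs_root: str = "docs/stories"):
    """Return (stage, path) of the most recently modified story, honouring stage priority."""
    for stage in SEARCH_ORDER:
        stage_dir = Path(docs_root) / stage
        if not stage_dir.is_dir():
            continue
        stories = sorted(stage_dir.glob("STORY-*.md"),
                         key=lambda p: p.stat().st_mtime, reverse=True)
        if stories:
            return stage, stories[0]
    return None, None  # no active stories: fall through to Phase 6
```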
#### Phase 2: Story File Analysis
1. **READ** story file content
2. **PARSE** and **EXTRACT** key information:
- Story title
- Current status (backlog/development/review/qa/completed)
- Branch name
- Last progress log entry (most recent)
- Implementation checklist with status
- Success criteria (marked/unmarked)
- Technical notes and dependencies
- Started date
- Completed date (if applicable)
3. **IDENTIFY** incomplete checklist items:
- COUNT total checklist items
- COUNT completed items (`[x]`)
- COUNT remaining items (`[ ]`)
- LIST remaining items for display
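Counting the checklist items is a simple text scan. A sketch using the `[x]` / `[ ]` markers described above, run here on an inline sample:
```python
import re

def checklist_progress(story_markdown: str):
    """Return (completed, remaining, percent) for checkbox items in a story file."""
    done = re.findall(r"^\s*-\s*\[[xX]\]\s*(.+)$", story_markdown, flags=re.MULTILINE)
    remaining = re.findall(r"^\s*-\s*\[ \]\s*(.+)$", story_markdown, flags=re.MULTILINE)
    total = len(done) + len(remaining)
    percent = round(100 * len(done) / total) if total else 0
    return done, remaining, percent

sample = "- [x] Feature implementation\n- [x] Unit tests\n- [ ] Documentation\n"
done, remaining, percent = checklist_progress(sample)
print(f"Implementation: {len(done)}/{len(done) + len(remaining)} tasks complete ({percent}%)")
```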
#### Phase 3: Git Status Check
1. **GET** current git branch:
- RUN: `git branch --show-current`
- STORE current branch name
2. **COMPARE** with story branch:
- IF current branch matches story branch:
* SHOW: "✅ On correct branch: [branch-name]"
- IF current branch differs from story branch:
* SHOW: "⚠️ Not on story branch"
* CURRENT: [current-branch]
* EXPECTED: [story-branch]
* OFFER: "Switch to story branch? (y/n)"
3. **CHECK** git working tree status:
- RUN: `git status --porcelain`
- IF uncommitted changes exist:
* COUNT modified files
* COUNT untracked files
* SHOW: "⚠️ You have uncommitted changes"
* LIST: Modified and untracked files
* SUGGEST: `/sdd:story-save` to commit progress
- IF working tree clean:
* SHOW: "✅ Working tree clean"
4. **CHECK** branch sync status:
- RUN: `git rev-list --left-right --count origin/[branch]...[branch]`
- IF branch ahead of remote:
* SHOW: "⬆️ [N] commits ahead of remote"
* SUGGEST: Push to remote when ready
- IF branch behind remote:
* SHOW: "⬇️ [N] commits behind remote"
* SUGGEST: Pull latest changes
- IF diverged:
* SHOW: "⚠️ Branch has diverged from remote"
* SUGGEST: Rebase or merge required
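The three git checks map directly onto the commands quoted above. A sketch using a thin subprocess wrapper (it assumes the branch has an upstream on `origin`; real error handling is omitted):
```python
import subprocess

def git(*args: str) -> str:
    """Run a git command and return its trimmed stdout (raises if git fails)."""
    return subprocess.run(["git", *args], capture_output=True,
                          text=True, check=True).stdout.strip()

branch = git("branch", "--show-current")
dirty = git("status", "--porcelain").splitlines()
# --left-right --count prints "<behind><TAB><ahead>" for origin/<branch>...<branch>
behind, ahead = map(int, git("rev-list", "--left-right", "--count",
                             f"origin/{branch}...{branch}").split())
print(f"Branch: {branch} | uncommitted: {len(dirty)} | ahead {ahead} | behind {behind}")
```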
#### Phase 4: Progress Summary Display
1. **DISPLAY** comprehensive story status:
```
📖 RESUMING STORY
════════════════════════════════════
Story: [STORY-ID] - [Title]
Status: [development/review/qa]
Branch: [branch-name]
📅 Timeline:
Started: [date]
Last Updated: [date]
[If completed:] Completed: [date]
📊 Progress:
Implementation: [X/Y] tasks complete ([Z%])
- [x] Completed item 1
- [x] Completed item 2
- [ ] Remaining item 1
- [ ] Remaining item 2
📝 Last Progress Entry:
[Most recent progress log entry with timestamp]
🔧 Git Status:
[Branch status - on correct branch or need to switch]
[Working tree status - clean or uncommitted changes]
[Sync status - ahead/behind/diverged from remote]
```
#### Phase 5: Context-Aware Next Actions
**IF status is "development":**
1. **SUGGEST** development actions:
```
💡 NEXT STEPS:
1. /sdd:story-implement [story-id] - Continue implementation
2. /sdd:story-save - Commit current progress
3. /sdd:story-review - Move to code review when complete
Development Commands:
- Run server: [detected from project context]
- Run tests: [detected from project context]
- Run linter: [detected from project context]
```
**IF status is "review":**
1. **CHECK** for review issues:
- READ story file for review notes
- IDENTIFY any failed checks or requested changes
- LIST issues that need addressing
2. **SUGGEST** review actions:
```
💡 NEXT STEPS:
[If issues exist:]
Issues to Address:
- [Issue 1 from review notes]
- [Issue 2 from review notes]
Actions:
1. Fix identified issues in code
2. /sdd:story-save - Commit fixes
3. /sdd:story-qa - Move to QA when review passes
[If no issues:]
Review Status: ✅ All checks passed
1. /sdd:story-qa - Move to quality assurance
2. /sdd:story-refactor - Optional improvements
```
**IF status is "qa":**
1. **SUGGEST** QA actions:
```
💡 NEXT STEPS:
1. /sdd:story-test-integration - Run integration tests
2. /sdd:story-validate - Perform final validation checks
3. /sdd:story-ship - Deploy when QA complete
QA Checklist:
- [ ] Manual testing across browsers
- [ ] Performance testing
- [ ] Accessibility testing
- [ ] Security review
- [ ] Documentation verification
```
**IF status is "backlog":**
1. **SUGGEST** starting development:
```
💡 NEXT STEPS:
This story is still in backlog.
1. /sdd:story-start [story-id] - Begin development
2. /sdd:story-start [story-id] --boilerplate - Start with boilerplate
```
#### Phase 6: No Active Stories Found
1. **IF** no stories found in development, review, or qa:
- **SEARCH** for completed stories
- **COUNT** total completed stories
- **DISPLAY**:
```
✅ NO ACTIVE STORIES
════════════════════════════════════
All stories are complete or in backlog.
Completed Stories: [count]
[List last 3 completed stories with titles]
💡 NEXT STEPS:
1. /sdd:story-new - Create a new story
2. /sdd:story-start [story-id] - Start a backlog story
3. /sdd:project-status - View full project status
Backlog Stories Available:
[List backlog stories if any exist]
```
### OUTPUTS
- Formatted story status summary
- Git branch and working tree status
- Progress metrics and completion percentage
- Incomplete checklist items
- Context-aware next action suggestions
- Optional: Switch to story branch (if needed)
### RULES
- MUST search development, review, and qa directories in order
- MUST display most recently modified story
- MUST show current git status and branch
- MUST suggest actions appropriate to story status
- SHOULD offer to switch branches if not on story branch
- SHOULD highlight uncommitted changes if present
- SHOULD calculate and display progress percentage
- MUST handle case when no active stories exist
- NEVER modify story file (read-only command)
- NEVER create or delete files
## Examples
### Example 1: Resume Development Story
```bash
INPUT:
/sdd:story-continue
PROCESS:
→ Searching for active stories...
→ Found: /docs/stories/development/STORY-AUTH-001.md
→ Modified: 2 hours ago
→ Analyzing story status...
→ Checking git branch...
→ Current branch: feature/auth-001-login-form ✅
OUTPUT:
📖 RESUMING STORY
════════════════════════════════════
Story: STORY-AUTH-001 - Implement Login Form
Status: development
Branch: feature/auth-001-login-form
📅 Timeline:
Started: 2025-09-28
Last Updated: 2 hours ago
📊 Progress:
Implementation: 6/10 tasks complete (60%)
- [x] Feature implementation
- [x] Unit tests
- [x] Integration tests
- [x] Error handling
- [x] Loading states
- [x] Browser tests
- [ ] Performance optimization
- [ ] Accessibility
- [ ] Security review
- [ ] Documentation
📝 Last Progress Entry:
[2025-09-28 14:30] Implemented login form with validation.
Added unit tests and feature tests. All tests passing.
Created browser test for login flow.
🔧 Git Status:
✅ On correct branch: feature/auth-001-login-form
⚠️ You have uncommitted changes (3 files modified)
Files:
M app/Livewire/Auth/LoginForm.php
M tests/Feature/Auth/LoginTest.php
M docs/stories/development/STORY-AUTH-001.md
💡 NEXT STEPS:
1. /sdd:story-implement STORY-AUTH-001 - Continue implementation
2. /sdd:story-save - Commit current progress
3. /sdd:story-review - Move to code review when complete
Development Commands:
- Run server: composer dev
- Run tests: vendor/bin/pest
- Run linter: vendor/bin/pint
```
### Example 2: Resume Review Story
```bash
INPUT:
/sdd:story-continue
PROCESS:
→ Searching for active stories...
→ No stories in development
→ Found: /docs/stories/review/STORY-PROFILE-002.md
→ Modified: 1 day ago
→ Analyzing story status...
→ Checking git branch...
→ Current branch: main
→ Expected branch: feature/profile-002-settings
OUTPUT:
📖 RESUMING STORY
════════════════════════════════════
Story: STORY-PROFILE-002 - User Profile Settings
Status: review
Branch: feature/profile-002-settings
📅 Timeline:
Started: 2025-09-27
Last Updated: 1 day ago
📊 Progress:
Implementation: 10/10 tasks complete (100%)
All implementation tasks completed ✅
📝 Last Progress Entry:
[2025-09-27 16:45] Code review completed. Minor formatting
issues found. Need to run Pint and update comments.
🔧 Git Status:
⚠️ Not on story branch
Current: main
Expected: feature/profile-002-settings
✅ Working tree clean
Review Notes:
- Fix code formatting (run Pint)
- Add missing PHPDoc comments for public methods
- Update test descriptions to be more descriptive
💡 NEXT STEPS:
Issues to Address:
- Code formatting needs fixes
- Missing documentation comments
- Test descriptions need improvement
Actions:
1. Switch to feature/profile-002-settings branch
2. Fix identified issues in code
3. /sdd:story-save - Commit fixes
4. /sdd:story-qa - Move to QA when review passes
Switch to story branch now? [y/n]
```
### Example 3: Resume QA Story
```bash
INPUT:
/sdd:story-continue
PROCESS:
→ Searching for active stories...
→ No stories in development or review
→ Found: /docs/stories/qa/STORY-CART-003.md
→ Modified: 3 hours ago
→ Analyzing story status...
→ Checking git branch...
OUTPUT:
📖 RESUMING STORY
════════════════════════════════════
Story: STORY-CART-003 - Shopping Cart Checkout
Status: qa
Branch: feature/cart-003-checkout
📅 Timeline:
Started: 2025-09-25
Last Updated: 3 hours ago
Expected Completion: 2025-09-29
📊 Progress:
Implementation: 10/10 tasks complete (100%)
QA Progress: 3/5 checks complete (60%)
📝 Last Progress Entry:
[2025-09-28 13:20] QA testing in progress. Manual testing
complete across Chrome, Firefox, Safari. Performance testing
shows good results. Starting accessibility review.
🔧 Git Status:
✅ On correct branch: feature/cart-003-checkout
✅ Working tree clean
⬆️ 2 commits ahead of remote
QA Checklist:
- [x] Manual testing across browsers
- [x] Performance testing
- [x] Security review
- [ ] Accessibility testing
- [ ] Documentation verification
💡 NEXT STEPS:
1. Complete accessibility testing
2. Verify documentation is up to date
3. /sdd:story-validate - Perform final validation checks
4. /sdd:story-ship - Deploy when QA complete
QA Commands:
- Run accessibility audit: npm run a11y
- Run security scan: npm run security-check
- Run full test suite: vendor/bin/pest
```
### Example 4: No Active Stories
```bash
INPUT:
/sdd:story-continue
PROCESS:
→ Searching for active stories...
→ No stories in development
→ No stories in review
→ No stories in qa
→ Checking completed stories...
→ Found 5 completed stories
OUTPUT:
✅ NO ACTIVE STORIES
════════════════════════════════════
All stories are complete or in backlog.
Completed Stories: 5
Recently completed:
- STORY-AUTH-001: Implement Login Form (completed 2025-09-28)
- STORY-PROFILE-002: User Profile Settings (completed 2025-09-27)
- STORY-CART-003: Shopping Cart (completed 2025-09-26)
💡 NEXT STEPS:
1. /sdd:story-new - Create a new story
2. /sdd:story-start [story-id] - Start a backlog story
3. /sdd:project-status - View full project status
Backlog Stories Available:
- STORY-SEARCH-004: Search Functionality
- STORY-NOTIF-005: Notification System
- STORY-EXPORT-006: Data Export Feature
```
### Example 5: Branch Diverged
```bash
INPUT:
/sdd:story-continue
PROCESS:
→ Searching for active stories...
→ Found: /docs/stories/development/STORY-API-004.md
→ Checking git status...
→ Branch has diverged from remote
OUTPUT:
📖 RESUMING STORY
════════════════════════════════════
Story: STORY-API-004 - REST API Endpoints
Status: development
Branch: feature/api-004-endpoints
📅 Timeline:
Started: 2025-09-28
Last Updated: 30 minutes ago
📊 Progress:
Implementation: 4/10 tasks complete (40%)
- [x] Feature implementation
- [x] Unit tests
- [x] Integration tests
- [x] Error handling
- [ ] Loading states
- [ ] Browser tests
- [ ] Performance optimization
- [ ] Accessibility
- [ ] Security review
- [ ] Documentation
📝 Last Progress Entry:
[2025-09-28 15:45] Implemented API endpoints for user
resources. Added authentication middleware.
🔧 Git Status:
✅ On correct branch: feature/api-004-endpoints
⚠️ You have uncommitted changes (2 files modified)
⚠️ Branch has diverged from remote
Local: 3 commits ahead
Remote: 2 commits ahead
Action required: Rebase or merge
💡 NEXT STEPS:
⚠️ IMPORTANT: Resolve branch divergence first
1. Option A: Rebase on remote
git fetch origin
git rebase origin/feature/api-004-endpoints
2. Option B: Merge remote changes
git fetch origin
git merge origin/feature/api-004-endpoints
After resolving:
3. /sdd:story-implement STORY-API-004 - Continue implementation
4. /sdd:story-save - Commit progress
```
## Edge Cases
### Multiple Stories Modified Simultaneously
```
IF multiple stories have same modification time:
- SELECT story with most recent progress log entry
- IF still tied: SELECT by alphabetical story ID
- LOG: "Multiple stories modified recently, selected: [story-id]"
```
### Story File Corrupted or Invalid
```
IF story file cannot be parsed:
- LOG: "Warning: Story file appears corrupted"
- SHOW: Available story metadata (ID, file path, mod time)
- SUGGEST: Manual review of story file
- OFFER: Try next most recent story
```
### Git Repository Not Initialized
```
IF not in git repository:
- SKIP git status checks
- SHOW: Story information only
- WARN: "Not in git repository, git status unavailable"
- SUGGEST: Initialize git repository
```
### Branch Deleted Remotely
```
IF story branch no longer exists on remote:
- WARN: "Story branch not found on remote"
- SUGGEST: Push branch to remote or create new branch
- SHOW: Local branch status only
```
### Working Directory Outside Project Root
```
IF cwd not in project root:
- ATTEMPT to find project root
- IF found: Continue from project root
- IF not found: HALT with error
- SUGGEST: Run from project root directory
```
## Error Handling
- **No story directories exist**: Return "Error: No story directories found. Run /sdd:project-init first"
- **Story file read error**: Show "Error reading story file: [error]" and try next story
- **Invalid story format**: Warn and show what could be parsed
- **Git command fails**: Show git error and continue with story info only
- **Branch switch fails**: Show error and offer manual switch instructions
## Performance Considerations
- Scan directories once and cache file list
- Use file modification time for quick sorting (no need to read all files)
- Read only the most recent story file completely
- Parse story file sections on-demand (not all at once)
- Git commands run in parallel when possible
- Cache git status results within command execution
## Related Commands
- `/sdd:story-start [id]` - Start new story development
- `/sdd:story-implement [id]` - Continue implementation
- `/sdd:story-save` - Commit current progress
- `/sdd:story-review` - Move story to code review
- `/sdd:story-qa` - Move story to quality assurance
- `/sdd:project-status` - View all project stories
## Constraints
- ✅ MUST find most recently modified story
- ✅ MUST display comprehensive story status
- ✅ MUST check and display git status
- ✅ MUST suggest context-appropriate next actions
- ⚠️ NEVER modify story files (read-only)
- ⚠️ NEVER create or delete files
- 📋 SHOULD offer to switch branches if needed
- 💡 SHOULD highlight uncommitted changes
- 🔧 SHOULD detect and report sync issues

commands/story-document.md Normal file

@@ -0,0 +1,640 @@
# /sdd:story-document
## Meta
- Version: 2.0
- Category: story-management
- Complexity: medium
- Purpose: Generate comprehensive documentation for implemented story features
## Definition
**Purpose**: Analyze story implementation and generate user, technical, and testing documentation with examples and inline code comments.
**Syntax**: `/sdd:story-document [story_id]`
## Parameters
| Parameter | Type | Required | Default | Description | Validation |
|-----------|------|----------|---------|-------------|------------|
| story_id | string | No | current active | Story ID to document (format: STORY-YYYY-NNN) | Must be valid story ID |
## INSTRUCTION: Generate Story Documentation
### INPUTS
- story_id: Story to document (defaults to current active story)
- Story file from `/docs/stories/development/` or `/docs/stories/review/`
- Implemented code files referenced in story
- Project context from `/docs/project-context/`
### PROCESS
#### Phase 1: Story Location and Validation
1. **DETERMINE** which story to document:
- IF story_id provided: USE specified story
- IF no story_id: FIND current active story in `/docs/stories/development/`
2. **LOCATE** story file:
- CHECK `/docs/stories/development/[story-id].md`
- CHECK `/docs/stories/review/[story-id].md`
- CHECK `/docs/stories/qa/[story-id].md`
3. IF story not found:
- EXIT with error message
- SUGGEST using `/sdd:project-status` to find valid story IDs
#### Phase 2: Story Analysis
1. **READ** story file to extract:
- Feature title and description (from "What & Why" section)
- Implementation details (from "Technical Notes")
- Success criteria (acceptance criteria)
- Test cases defined
- UI/UX considerations
- Integration points
2. **SCAN** codebase to identify implementation:
- LOCATE files referenced in progress log
- IDENTIFY new/modified components, functions, classes
- EXTRACT public APIs and interfaces
- MAP dependencies and imports
- NOTE configuration files affected
3. **LOAD** project context:
- `/docs/project-context/technical-stack.md` - Framework conventions
- `/docs/project-context/coding-standards.md` - Documentation style
- `/docs/project-context/development-process.md` - Doc requirements
#### Phase 3: Documentation Generation
**Generate Multiple Documentation Types:**
1. **USER DOCUMENTATION** (if user-facing feature):
- CREATE `/docs/features/[feature-name].md`
- INCLUDE:
* Feature overview and purpose
* How to use the feature (step-by-step)
* Common use cases with examples
* Troubleshooting guide
* Screenshots or diagrams (note if needed)
2. **TECHNICAL DOCUMENTATION**:
- CREATE `/docs/technical/[feature-name].md`
- INCLUDE:
* Architecture overview
* Component/module descriptions
* API reference (if applicable)
* Configuration options
* Integration guide
* Data flow diagrams (note if needed)
3. **TESTING DOCUMENTATION**:
- CREATE `/docs/testing/[feature-name].md`
- INCLUDE:
* How to test the feature
* Test scenarios covered
* Known edge cases
* Performance benchmarks (if applicable)
* Manual testing checklist
4. **INLINE CODE DOCUMENTATION**:
- ADD framework-appropriate comments:
* PHP: PHPDoc blocks
* JavaScript/TypeScript: JSDoc/TSDoc
* Python: Docstrings
* [DISCOVERED language]: Appropriate style
- DOCUMENT:
* Public functions and methods
* Component props/parameters
* Complex logic explanations
* Configuration constants
* Event handlers and callbacks
5. **CODE EXAMPLES**:
- CREATE `/docs/examples/[feature-name]/`
- INCLUDE:
* Basic usage snippets
* Configuration examples
* Integration examples
* Common patterns
* Copy-paste ready code
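As a concrete illustration of the inline style, if the implementation happened to be Python, a docstring added in this step might look like the following (the function, its parameters, and the return shape are purely hypothetical):
```python
def assign_task(task_id: int, assignee: str, notify: bool = True) -> dict:
    """Assign a task to a user and optionally queue a notification.

    Args:
        task_id: Identifier of the task being assigned.
        assignee: Username of the person receiving the task.
        notify: Whether to queue an assignment notification (default True).

    Returns:
        A dict describing the updated task, e.g. {"id": 7, "assignee": "maria"}.

    Raises:
        ValueError: If task_id does not refer to an existing task.
    """
    ...
```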
#### Phase 4: Update Existing Documentation
1. **UPDATE** project-level docs:
- `/README.md` - Add feature to feature list (if user-facing)
- `/docs/api.md` - Add API endpoints (if applicable)
- `/docs/configuration.md` - Add new config options
- `/CHANGELOG.md` - Document changes
- `/docs/migration.md` - Add migration guide (if breaking changes)
2. **PRESERVE** existing content:
- APPEND new sections rather than replace
- MAINTAIN existing formatting style
- KEEP version history intact
#### Phase 5: Story Documentation Update
1. **UPDATE** story file with documentation summary:
```markdown
## Documentation
### Generated Documentation
- User Guide: /docs/features/[feature-name].md
- Technical: /docs/technical/[feature-name].md
- Testing: /docs/testing/[feature-name].md
- Examples: /docs/examples/[feature-name]/
### Updated Documentation
- README.md: Added feature to feature list
- CHANGELOG.md: Documented changes for v[version]
### Inline Documentation
- Added PHPDoc blocks to [count] functions
- Documented [count] component props
- Added complex logic comments in [file:line]
### Documentation Status
- [x] User documentation complete
- [x] Technical documentation complete
- [x] Testing documentation complete
- [x] Inline code comments added
- [x] Examples created
- [ ] Screenshots needed (optional)
- [ ] Diagrams needed (optional)
```
2. **CHECK** documentation completion criteria:
- [ ] All public APIs documented
- [ ] User-facing features have user guide
- [ ] Complex logic has inline comments
- [ ] Examples demonstrate key use cases
- [ ] Configuration options documented
- [ ] Breaking changes documented in migration guide
#### Phase 6: Completion Summary
1. **DISPLAY** documentation summary:
```
✅ Documentation Generated
═══════════════════════════════════
Story: [STORY-YYYY-NNN] - [Title]
DOCUMENTATION CREATED:
✓ User Guide: /docs/features/[feature-name].md
✓ Technical: /docs/technical/[feature-name].md
✓ Testing: /docs/testing/[feature-name].md
✓ Examples: /docs/examples/[feature-name]/
DOCUMENTATION UPDATED:
✓ README.md (feature list)
✓ CHANGELOG.md (version notes)
[✓ Migration guide (if breaking changes)]
INLINE DOCUMENTATION:
✓ [count] functions documented
✓ [count] components documented
✓ [count] complex logic comments
DOCUMENTATION DEBT:
[- Screenshots recommended for user guide]
[- Sequence diagram would help explain flow]
Story Updated: Documentation section added
```
2. **SUGGEST** next steps:
```
💡 NEXT STEPS:
1. Review generated documentation for accuracy
2. Add screenshots/diagrams if noted
3. /sdd:story-review [story-id] # Move to code review
4. Share docs with team for feedback
```
### OUTPUTS
- `/docs/features/[feature-name].md` - User-facing documentation
- `/docs/technical/[feature-name].md` - Technical documentation
- `/docs/testing/[feature-name].md` - Testing documentation
- `/docs/examples/[feature-name]/` - Code examples
- Updated project documentation (README, CHANGELOG, etc.)
- Inline code comments in implementation files
- Updated story file with documentation summary
### RULES
- MUST analyze story to understand what was built
- MUST generate appropriate doc types based on feature type
- MUST use framework-appropriate inline documentation style
- MUST update story file with documentation summary
- SHOULD create user docs for user-facing features
- SHOULD include code examples for all public APIs
- SHOULD update project README and CHANGELOG
- NEVER remove existing documentation
- ALWAYS preserve existing formatting style
- MUST check all public APIs are documented
## Documentation Templates
### User Documentation Template
````markdown
# [Feature Name]
## Overview
[What the feature does and why it exists]
## Prerequisites
[What user needs before using this feature]
## Getting Started
### Quick Start
[Simplest possible example to get started]
### Step-by-Step Guide
1. [First step with clear instructions]
2. [Second step]
3. [Continue...]
## Usage Examples
### Example 1: [Common Use Case]
[Description of scenario]
```[language]
[Code example]
```
[Expected result]
### Example 2: [Another Use Case]
[Description]
```[language]
[Code example]
```
## Configuration
### Available Options
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| [option] | [type] | [default] | [what it does] |
### Configuration Example
```[format]
[Example configuration]
```
## Troubleshooting
### Common Issues
#### [Issue 1]
**Problem**: [Description]
**Solution**: [How to fix]
#### [Issue 2]
**Problem**: [Description]
**Solution**: [How to fix]
## Related Features
- [Related feature 1]
- [Related feature 2]
## Additional Resources
- [Link to technical docs]
- [Link to API reference]
````
### Technical Documentation Template
````markdown
# [Feature Name] - Technical Documentation
## Architecture Overview
[High-level architecture description]
## Components
### [Component 1]
**Purpose**: [What it does]
**Location**: [File path]
**Dependencies**: [What it depends on]
**Public Interface**:
```[language]
[Key methods/functions]
```
**Usage**:
```[language]
[How to use it]
```
### [Component 2]
[Same structure]
## Data Flow
[Description of how data flows through the system]
```
[Diagram or flowchart in text/Mermaid format]
```
## API Reference
### [Function/Method Name]
```[language]
[Full signature]
```
**Parameters**:
- `param1` ([type]): [Description]
- `param2` ([type]): [Description]
**Returns**: [Return type and description]
**Throws**: [Exceptions that can be thrown]
**Example**:
```[language]
[Usage example]
```
## Configuration
### Environment Variables
| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| [VAR] | [Yes/No] | [default] | [what it does] |
### Configuration Files
[List of config files and their purpose]
## Integration Guide
### Integrating with [System/Feature]
[Step-by-step integration instructions]
### Event Hooks
[Available hooks/events for extending functionality]
## Performance Considerations
- [Performance tip 1]
- [Performance tip 2]
## Security Considerations
- [Security concern 1]
- [Security concern 2]
## Testing
[Link to testing documentation]
## Troubleshooting
[Link to user documentation troubleshooting section]
````
### Testing Documentation Template
````markdown
# Testing: [Feature Name]
## Test Coverage Summary
- Unit Tests: [count] tests, [X]% coverage
- Integration Tests: [count] tests
- E2E Tests: [count] tests
- Manual Tests: [count] scenarios
## Running Tests
### All Tests
```bash
[Command to run all tests]
```
### Unit Tests Only
```bash
[Command to run unit tests]
```
### Integration Tests
```bash
[Command to run integration tests]
```
## Test Scenarios
### Scenario 1: [Happy Path]
**Given**: [Initial state]
**When**: [Action taken]
**Then**: [Expected result]
**Test**: [Test file and function name]
### Scenario 2: [Error Case]
**Given**: [Initial state]
**When**: [Action taken]
**Then**: [Expected error handling]
**Test**: [Test file and function name]
## Edge Cases Tested
1. [Edge case 1] - [How it's tested]
2. [Edge case 2] - [How it's tested]
## Known Limitations
- [Limitation 1]
- [Limitation 2]
## Manual Testing Checklist
- [ ] [Manual test step 1]
- [ ] [Manual test step 2]
- [ ] [Verify on different browsers/devices]
## Performance Benchmarks
[If applicable, performance test results]
## Test Data
[How to set up test data or where test fixtures are located]
````
## Examples
### Example 1: Document Current Active Story
```bash
INPUT:
/sdd:story-document
OUTPUT:
→ Finding active story...
→ Located: STORY-2025-003 in /docs/stories/development/
→ Analyzing implemented features...
→ Scanning codebase for TaskManager component...
→ Generating documentation...
✅ Documentation Generated
═══════════════════════════════════
Story: STORY-2025-003 - Task Management System
DOCUMENTATION CREATED:
✓ User Guide: /docs/features/task-management.md
✓ Technical: /docs/technical/task-management.md
✓ Testing: /docs/testing/task-management.md
✓ Examples: /docs/examples/task-management/
DOCUMENTATION UPDATED:
✓ README.md (added to feature list)
✓ CHANGELOG.md (documented for v1.2.0)
INLINE DOCUMENTATION:
✓ 12 functions documented with PHPDoc
✓ 3 Livewire components documented
✓ 5 complex logic sections commented
DOCUMENTATION DEBT:
- Screenshots recommended for user guide
- Consider adding sequence diagram for task lifecycle
Story Updated: Documentation section added
💡 NEXT STEPS:
1. Review generated documentation for accuracy
2. Add screenshots to user guide
3. /sdd:story-review STORY-2025-003 # Move to code review
4. Share docs with team for feedback
```
### Example 2: Document Specific Story
```bash
INPUT:
/sdd:story-document STORY-2025-005
OUTPUT:
→ Locating story: STORY-2025-005...
→ Found in /docs/stories/review/
→ Analyzing authentication implementation...
→ Generating documentation...
✅ Documentation Generated
═══════════════════════════════════
Story: STORY-2025-005 - User Authentication
DOCUMENTATION CREATED:
✓ User Guide: /docs/features/authentication.md
✓ Technical: /docs/technical/authentication.md
✓ Testing: /docs/testing/authentication.md
✓ Examples: /docs/examples/authentication/
✓ Migration Guide: /docs/migration-v2.md (breaking changes)
DOCUMENTATION UPDATED:
✓ README.md (security section)
✓ CHANGELOG.md (v2.0.0 breaking changes)
✓ docs/api.md (auth endpoints)
INLINE DOCUMENTATION:
✓ 8 controller methods documented
✓ 4 middleware classes documented
✓ Security notes added to sensitive functions
Story Updated: Documentation section added
💡 NEXT STEPS:
1. Review security documentation carefully
2. /sdd:story-review STORY-2025-005 # Continue review process
```
### Example 3: API Documentation
```bash
INPUT:
/sdd:story-document STORY-2025-007
OUTPUT:
→ Locating story: STORY-2025-007...
→ Found in /docs/stories/development/
→ Analyzing REST API implementation...
→ Extracting API endpoints and schemas...
→ Generating OpenAPI specification...
✅ Documentation Generated
═══════════════════════════════════
Story: STORY-2025-007 - REST API for Tasks
DOCUMENTATION CREATED:
✓ API Reference: /docs/api/tasks.md
✓ Technical: /docs/technical/api-tasks.md
✓ Testing: /docs/testing/api-tasks.md
✓ OpenAPI Spec: /docs/openapi/tasks.yaml
✓ Postman Collection: /docs/examples/tasks.postman.json
DOCUMENTATION UPDATED:
✓ docs/api.md (added tasks endpoints)
✓ README.md (API section)
✓ CHANGELOG.md (new API endpoints)
INLINE DOCUMENTATION:
✓ 6 API endpoints documented
✓ Request/response schemas defined
✓ Error responses documented
Story Updated: Documentation section added
💡 NEXT STEPS:
1. Test with Postman collection
2. Share OpenAPI spec with frontend team
3. /sdd:story-review STORY-2025-007
```
## Edge Cases
### Story Not Found
- DETECT invalid story ID
- SUGGEST using `/sdd:project-status` to list valid stories
- EXIT with helpful error message
### Story Has No Implementation Yet
- DETECT story in backlog with no code
- WARN that documentation requires implemented code
- SUGGEST using `/sdd:story-implement [id]` first
- EXIT gracefully
### No User-Facing Changes
- DETECT backend-only or infrastructure changes
- SKIP user documentation generation
- FOCUS on technical and testing documentation
- NOTE decision in story update
### Documentation Already Exists
- DETECT existing documentation files
- ASK user: Update existing or create new version?
- IF update: Merge new content with existing
- IF new: Create versioned documentation
- PRESERVE all existing content
### Complex API with Many Endpoints
- DETECT large API surface area
- GENERATE comprehensive API reference
- CREATE OpenAPI/Swagger specification
- ORGANIZE by resource/domain
- PROVIDE Postman/Insomnia collections
## Error Handling
- **Story not found**: Show available stories from `/sdd:project-status`
- **No implementation found**: Guide user to implement first
- **Permission errors**: Report specific file/directory issue
- **Documentation write errors**: Log error, continue with other docs
## Performance Considerations
- Documentation generation typically takes 10-30 seconds
- Inline documentation added via file editing (may take longer for many files)
- Show progress indicators for multi-file operations
- Cache story analysis for session
## Related Commands
- `/sdd:story-implement [id]` - Generate implementation first
- `/sdd:story-review [id]` - Move to code review after documentation
- `/sdd:story-test [id]` - Verify tests before documenting
- `/sdd:project-status` - Find stories to document
## Constraints
- ✅ MUST analyze story to understand implementation
- ✅ MUST generate docs appropriate to feature type
- ✅ MUST use framework-appropriate inline doc style
- ✅ MUST update story with documentation summary
- 📋 SHOULD create user docs for user-facing features
- 🔧 SHOULD include code examples for public APIs
- 💾 MUST preserve existing documentation content
- ⚠️ NEVER remove or replace existing docs without confirmation
- 🧪 MUST document all test scenarios covered

commands/story-flow.md Normal file

@@ -0,0 +1,485 @@
# /sdd:story-flow
## Meta
- Version: 1.0
- Category: workflow-automation
- Complexity: high
- Purpose: Automate the complete story lifecycle from creation to deployment and archival
## Definition
**Purpose**: Execute the complete story development workflow in sequence, automating the progression through all stages from story creation to production deployment, retrospective completion, and archival.
**Syntax**: `/sdd:story-flow <prompt|story_id> [--start-at=step] [--stop-at=step] [--auto]`
## Parameters
| Parameter | Type | Required | Default | Description | Validation |
|-----------|------|----------|---------|-------------|------------|
| prompt\|story_id | string | Yes | - | Story prompt/title or existing story ID (e.g., STORY-2025-001, STORY-DUE-001) | Non-empty string |
| --start-at | string | No | new | Start at specific step (new\|start\|implement\|review\|qa\|validate\|save\|ship\|complete) | Valid step name |
| --stop-at | string | No | complete | Stop at specific step | Valid step name |
| --auto | flag | No | false | Skip confirmations between steps | Boolean flag |
## INSTRUCTION: Execute Story Workflow Sequence
### INPUTS
- prompt\|story_id: Either a new story description or an existing story ID
- --start-at: Optional step to begin from (default: new)
- --stop-at: Optional step to end at (default: complete)
- --auto: Optional flag to run all steps without confirmation
### PROCESS
#### Phase 1: Initialization
1. **PARSE** input to determine if it's a new prompt or existing story ID:
- IF matches pattern `STORY-[A-Z0-9]+-\d+`: Use as existing story ID
* Supports: STORY-2025-001 (year-based)
* Supports: STORY-DUE-001 (phase-based)
* Supports: STORY-AUTH-001 (feature-based)
- ELSE: Treat as new story prompt
2. **VALIDATE** start-at and stop-at parameters:
- ENSURE start-at comes before stop-at in sequence
- VALID SEQUENCE: new → start → implement → review → qa → validate → save → ship → complete
- IF invalid: SHOW error and exit
3. **DISPLAY** workflow plan:
```
📋 STORY WORKFLOW PLAN
═══════════════════════
Story: [prompt or ID]
Sequence: [start-at] → [stop-at]
Mode: [auto ? "Automatic" : "Interactive"]
Steps to execute:
[list of steps that will run]
```
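A sketch of this initialization logic, covering the ID pattern and the start/stop ordering check (step names follow the sequence above; everything else is illustrative):
```python
import re

STEPS = ["new", "start", "implement", "review", "qa", "validate", "save", "ship", "complete"]
STORY_ID = re.compile(r"^STORY-[A-Z0-9]+-\d+$")

def plan_workflow(target: str, start_at: str = "new", stop_at: str = "complete"):
    """Classify the input and return the ordered list of steps to execute."""
    kind = "existing story" if STORY_ID.match(target) else "new story prompt"
    if start_at not in STEPS or stop_at not in STEPS:
        raise ValueError(f"Unknown step; valid steps: {', '.join(STEPS)}")
    if STEPS.index(start_at) > STEPS.index(stop_at):
        raise ValueError(f"--start-at={start_at} must come before --stop-at={stop_at}")
    return kind, STEPS[STEPS.index(start_at): STEPS.index(stop_at) + 1]

kind, steps = plan_workflow("STORY-DUE-001", start_at="qa", stop_at="ship")
print(kind, "->", " -> ".join(steps))
```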
#### Phase 2: Sequential Execution
**STEP 1: /sdd:story-new** (IF start-at is "new")
1. **DETECT** if input is existing story ID:
- CHECK pattern: `STORY-[A-Z0-9]+-\d+`
- SEARCH for story file in all story directories:
* /docs/stories/backlog/
* /docs/stories/development/
* /docs/stories/review/
* /docs/stories/qa/
* /docs/stories/completed/
* /docs/project-context/phases/*/
2. IF story file found:
- SKIP story creation
- USE found story_id
- PROCEED to next step
3. ELSE (new story):
- EXECUTE: `/sdd:story-new` with prompt as story title
- CAPTURE: Generated story ID
- IF --auto flag NOT set:
- SHOW: Story creation summary
- ASK: "Continue to next step? (y/n)"
- IF no: EXIT workflow
- UPDATE: Current story_id variable
**STEP 2: /sdd:story-start** (IF in range)
1. EXECUTE: `/sdd:story-start [story_id]`
2. VERIFY: Branch created and checked out
3. IF --auto flag NOT set:
- SHOW: Branch and environment status
- ASK: "Continue to implementation? (y/n)"
- IF no: EXIT with current status
4. IF error:
- LOG: Error details
- OFFER: Skip this step, retry, or abort
- PROCEED based on user choice
**STEP 3: /sdd:story-implement** (IF in range)
1. EXECUTE: `/sdd:story-implement [story_id]`
2. VERIFY: Code generated successfully
3. IF --auto flag NOT set:
- SHOW: Files created/modified summary
- ASK: "Continue to review? (y/n)"
- IF no: EXIT with suggestion to run `/sdd:story-save`
4. IF error:
- SHOW: Implementation issues
- OFFER: Retry implementation or manual fix
- WAIT for user decision
**STEP 4: /sdd:story-review** (IF in range)
1. EXECUTE: `/sdd:story-review [story_id]`
2. VERIFY: Code quality checks passed
3. IF --auto flag NOT set:
- SHOW: Review results
- ASK: "Continue to QA? (y/n)"
- IF no: EXIT with refactor suggestions
4. IF review finds issues:
- DISPLAY: Issues found
- IF --auto: CONTINUE anyway with warning
- ELSE: OFFER to fix before continuing
**STEP 5: /sdd:story-qa** (IF in range)
1. EXECUTE: `/sdd:story-qa [story_id]`
2. VERIFY: All tests passed
3. IF --auto flag NOT set:
- SHOW: Test results summary
- ASK: "Continue to validation? (y/n)"
- IF no: EXIT with test failure details
4. IF tests fail:
- DISPLAY: Failed tests
- HALT workflow (QA must pass)
- SUGGEST: Fix tests and run `/sdd:story-flow [story_id] --start-at=qa`
- EXIT
**STEP 6: /sdd:story-validate** (IF in range)
1. EXECUTE: `/sdd:story-validate [story_id]`
2. VERIFY: All acceptance criteria met
3. IF --auto flag NOT set:
- SHOW: Validation checklist
- ASK: "Ready to save and ship? (y/n)"
- IF no: EXIT with validation details
4. IF validation fails:
- DISPLAY: Unmet criteria
- HALT workflow
- SUGGEST: Address issues and retry
- EXIT
**STEP 7: /sdd:story-save** (IF in range)
1. EXECUTE: `/sdd:story-save` with auto-generated commit message
2. VERIFY: Changes committed successfully
3. IF --auto flag NOT set:
- SHOW: Commit summary
- ASK: "Continue to ship? (y/n)"
- IF no: EXIT with ship instructions
4. IF commit fails:
- SHOW: Git errors
- OFFER: Resolve conflicts or abort
- WAIT for resolution
**STEP 8: /sdd:story-ship** (IF in range)
1. EXECUTE: `/sdd:story-ship [story_id]`
2. VERIFY: Merged and deployed successfully
3. IF --auto flag NOT set:
- SHOW: Deployment summary
- ASK: "Complete story and archive? (y/n)"
- IF no: EXIT with completion instructions
4. IF error:
- HALT before deployment
- SHOW: Deployment errors
- SUGGEST: `/sdd:story-rollback [story_id]` if needed
- MANUAL intervention required
**STEP 9: /sdd:story-complete** (IF in range AND is stop-at)
1. EXECUTE: `/sdd:story-complete [story_id]`
2. VERIFY: Story file fully populated with:
- Retrospective notes
- Lessons learned
- Performance metrics
- Final documentation
3. VERIFY: Story archived to /docs/stories/completed/
4. SHOW: Final completion summary
5. IF error:
- SHOW: Archive errors
- SUGGEST: Manual completion via `/sdd:story-complete [story_id]`
- EXIT with partial completion status
#### Phase 3: Completion Summary
**DISPLAY** workflow completion status:
```
✅ STORY WORKFLOW COMPLETED
═══════════════════════════
Story: [story_id] - [Title]
Status: [current stage]
Completed Steps:
✓ Story created/loaded
✓ Development started (branch: [name])
✓ Implementation generated
✓ Code review passed
✓ QA tests passed
✓ Validation successful
✓ Changes committed
✓ Deployed to production
✓ Story completed and archived
[IF stopped before complete:]
⏸️ Workflow Paused
Next Step: /sdd:story-flow [story_id] --start-at=[next-step]
[IF any warnings:]
⚠️ Warnings:
[list of non-blocking issues]
Total Duration: [time elapsed]
Next Actions:
[context-appropriate suggestions]
```
### OUTPUTS
- Fully executed story workflow from specified start to stop
- Progress updates at each step
- Error handling with recovery options
- Final summary with deployment status
### RULES
- MUST execute steps in correct sequence order
- MUST validate each step before proceeding to next
- MUST halt on QA or validation failures
- SHOULD ask for confirmation between steps (unless --auto)
- MUST provide clear error messages and recovery options
- NEVER skip critical validation steps
- ALWAYS save work before shipping
- MUST update story file status at each stage
- MUST complete story retrospective and archive after shipping
## Examples
### Example 1: Full Workflow from New Story
```bash
INPUT:
/sdd:story-flow "Add user registration form with email verification"
PROCESS:
→ Step 1/9: Creating story...
✅ Story created: STORY-2025-015
→ Prompt: Continue to start development? (y/n) y
→ Step 2/9: Starting development...
✅ Branch created: feature/registration-015
→ Prompt: Continue to implementation? (y/n) y
→ Step 3/9: Generating implementation...
✅ Files created: RegistrationForm.php, registration-form.blade.php, RegistrationTest.php
→ Prompt: Continue to review? (y/n) y
→ Step 4/9: Running code review...
✅ Review passed: 0 issues found
→ Prompt: Continue to QA? (y/n) y
→ Step 5/9: Running QA tests...
✅ All tests passed (Unit: 5, Feature: 3, Browser: 2)
→ Prompt: Continue to validation? (y/n) y
→ Step 6/9: Validating story...
✅ All acceptance criteria met
→ Prompt: Ready to save and ship? (y/n) y
→ Step 7/9: Committing changes...
✅ Committed: "feat: add user registration form with email verification"
→ Prompt: Continue to ship? (y/n) y
→ Step 8/9: Shipping to production...
✅ Merged to main, deployed successfully
→ Prompt: Complete story and archive? (y/n) y
→ Step 9/9: Completing and archiving story...
✅ Story file updated with retrospective and metrics
✅ Story archived to /docs/stories/completed/
OUTPUT:
✅ STORY WORKFLOW COMPLETED
═══════════════════════════
Story: STORY-2025-015 - Add user registration form
Status: completed
All steps completed successfully ✓
Total Duration: 13 minutes
```
### Example 2: Resume from Existing Story (Year-based ID)
```bash
INPUT:
/sdd:story-flow STORY-2025-010 --start-at=qa --auto
PROCESS:
→ Loading story: STORY-2025-010
→ Starting from: qa
→ Auto mode: enabled
→ Step 1/5: Running QA tests...
✅ All tests passed
→ Step 2/5: Validating story...
✅ Validation successful
→ Step 3/5: Committing changes...
✅ Changes committed
→ Step 4/5: Shipping to production...
✅ Deployed successfully
→ Step 5/5: Completing and archiving story...
✅ Story completed and archived
OUTPUT:
✅ STORY WORKFLOW COMPLETED
Story: STORY-2025-010
Executed: qa → validate → save → ship → complete
Duration: 4 minutes
```
### Example 2b: Phase-based Story ID
```bash
INPUT:
/sdd:story-flow STORY-DUE-001 --start-at=start
PROCESS:
→ Detected phase-based story: STORY-DUE-001
→ Found in: /docs/project-context/phases/phase-due-dates/
→ Starting from: start
→ Skipping story creation (already exists)
→ Step 1/7: Starting development...
✅ Branch created: feature/due-001-database-schema
→ Prompt: Continue to implementation? (y/n) y
→ Step 2/7: Generating implementation...
✅ Migration and model files created
→ Prompt: Continue to review? (y/n) y
[continues through workflow...]
OUTPUT:
✅ STORY WORKFLOW COMPLETED
Story: STORY-DUE-001 - Add Due Date Database Schema
Phase: due-dates
Duration: 8 minutes
```
### Example 3: Partial Workflow
```bash
INPUT:
/sdd:story-flow "Fix login page responsive layout" --stop-at=review
PROCESS:
→ Step 1/4: Creating story...
✅ Story created: STORY-2025-016
→ Step 2/4: Starting development...
✅ Branch created: feature/login-layout-016
→ Step 3/4: Generating implementation...
✅ Implementation complete
→ Step 4/4: Running code review...
✅ Review passed
OUTPUT:
⏸️ WORKFLOW PAUSED AT: review
═══════════════════════════
Story: STORY-2025-016 - Fix login page layout
Status: in-review
Completed: new → start → implement → review
Next: /sdd:story-flow STORY-2025-016 --start-at=qa
To resume full workflow:
/sdd:story-flow STORY-2025-016 --start-at=qa --auto
```
### Example 4: QA Failure Handling
```bash
INPUT:
/sdd:story-flow STORY-2025-012 --start-at=qa --auto
PROCESS:
→ Step 1/4: Running QA tests...
❌ Tests failed: 2 failures in Feature tests
OUTPUT:
❌ WORKFLOW HALTED AT: qa
═══════════════════════════
Story: STORY-2025-012
Failed Tests:
- Feature\TaskCompletionTest::test_task_can_be_marked_complete
- Feature\TaskCompletionTest::test_completed_task_updates_timestamp
QA must pass before proceeding.
Next Actions:
1. Review test failures above
2. Fix implementation issues
3. Run tests: vendor/bin/pest --filter=TaskCompletion
4. Resume workflow: /sdd:story-flow STORY-2025-012 --start-at=qa
```
## Edge Cases
### Story Already Shipped
```
IF story_id found in /docs/stories/completed/:
- SHOW: Story already completed
- OFFER: View story details or create new version
- SUGGEST: /sdd:story-new for related feature
- EXIT workflow
```
### Workflow Interrupted
```
IF user cancels mid-workflow:
- SHOW: Current step and status
- SAVE: Workflow state
- SUGGEST: Resume command with --start-at
- EXIT gracefully
```
### Mixed Mode (Some Steps Already Done)
```
IF starting mid-workflow and previous steps incomplete:
- DETECT: Missing prerequisites
- WARN: "Story implementation not found, cannot run QA"
- SUGGEST: Start from earlier step
- OFFER: Continue anyway (risky) or restart
```
## Error Handling
- **Invalid step name**: Show valid step names and exit
- **Story not found**: Search all story locations (backlog, development, review, qa, completed, phases), suggest `/sdd:story-new` or check story ID
- **Ambiguous story ID**: If multiple stories found with similar IDs, list them and ask user to specify
- **Step prerequisites missing**: Show missing requirements and suggest order
- **Git conflicts**: Halt workflow, show conflict files, require manual resolution
- **Test failures**: Always halt, never auto-continue on failures
- **Deployment errors**: Halt before merge, offer rollback option
## Performance Considerations
- Each step executes sequentially (no parallelization)
- Expected total time: 10-20 minutes for full workflow
- Auto mode reduces interaction time by ~50%
- Can pause/resume at any step without data loss
## Related Commands
- `/sdd:story-new` - Create individual story (Step 1)
- `/sdd:story-start` - Start development (Step 2)
- `/sdd:story-implement` - Generate code (Step 3)
- `/sdd:story-review` - Code review (Step 4)
- `/sdd:story-qa` - Run tests (Step 5)
- `/sdd:story-validate` - Final validation (Step 6)
- `/sdd:story-save` - Commit changes (Step 7)
- `/sdd:story-ship` - Deploy to production (Step 8)
- `/sdd:story-complete` - Complete story with retrospective and archive (Step 9)
- `/sdd:story-rollback` - Rollback if issues arise
## Constraints
- ✅ MUST execute steps in correct order
- ✅ MUST halt on test or validation failures
- ✅ MUST support flexible story ID patterns (year-based, phase-based, feature-based)
- ✅ MUST search all story locations (backlog, development, review, qa, completed, phases)
- ⚠️ NEVER skip QA or validation steps
- ⚠️ NEVER auto-continue on errors in auto mode
- 📋 MUST save work before shipping
- 🔧 SHOULD provide resume options on failure
- 💾 MUST update story status at each step
- 🚀 MUST verify deployment success before completion
- 📚 MUST complete story retrospective and archive after shipping
## Notes
- This command automates the entire story lifecycle
- Interactive mode (default) allows review at each step
- Auto mode (`--auto`) speeds up workflow but still halts on errors
- Partial workflows supported via --start-at and --stop-at
- All individual commands can still be run separately
- Workflow state is preserved for resume capability
- Supports multiple story ID formats:
* Year-based: STORY-2025-001
* Phase-based: STORY-DUE-001, STORY-AUTH-001
* Feature-based: STORY-API-001
- Searches all story locations including project-context/phases/
# /sdd:story-full-check
Comprehensive 5-minute validation suite for production-ready quality assurance.
---
## Meta
**Category**: Testing & Validation
**Format**: Imperative (Comprehensive)
**Execution Time**: 4-6 minutes
**Prerequisites**: Story in `/docs/stories/development/` or `/docs/stories/review/`
**Destructive**: No (read-only analysis)
**Related Commands**:
- `/sdd:story-quick-check` - Fast 30s validation (run first)
- `/sdd:story-test-integration` - Integration + E2E tests only
- `/sdd:story-validate` - Final story validation before ship
**Context Requirements**:
- `/docs/project-context/technical-stack.md` (validation tools)
- `/docs/project-context/coding-standards.md` (compliance rules)
- `/docs/project-context/development-process.md` (quality gates)
---
## Parameters
**Validation Scope**:
```bash
# Full comprehensive check (default)
/sdd:story-full-check
# Scoped validation
--scope=tests|quality|security|performance|all # Default: all
--story-id=STORY-XXX-NNN # Specific story
--export # Save report to file
--compare=<commit-hash> # Compare with previous state
```
**Test Configuration**:
```bash
--coverage # Generate coverage reports
--browsers=chrome,firefox # Multi-browser E2E testing
--parallel=N # Parallel execution (default: 4)
--strict # Fail on warnings (production mode)
```
**Examples**:
```bash
/sdd:story-full-check # Full 5min check
/sdd:story-full-check --export # Save detailed report
/sdd:story-full-check --scope=tests --coverage # Tests + coverage only
/sdd:story-full-check --compare=abc123 --strict # Compare + strict mode
```
---
## Process
### Phase 1: Full Test Suite (2-3 min)
**Execute All Tests**:
```bash
# Run comprehensive test suite
php artisan test --parallel --coverage
# Includes:
✓ Unit tests (all)
✓ Feature tests (all)
✓ Integration tests (API, database)
✓ Browser tests (E2E workflows)
```
**Output**:
```
🧪 COMPREHENSIVE TESTING
========================
Unit Tests
✅ 24/24 passed (0.8s)
Coverage: 94%
Feature Tests
✅ 18/18 passed (2.1s)
Coverage: 88%
Integration Tests
🔗 API: 12/12 passed (1.4s)
💾 Database: 8/8 passed (0.6s)
Coverage: 85%
Browser Tests (Chrome)
🌐 E2E: 6/7 passed, 1 skipped (12.3s)
⚠️ Skipped: Safari-specific test
Coverage: 76%
┌──────────────────┬────────┬────────┬─────────┬──────────┐
│ Test Type │ Passed │ Failed │ Skipped │ Coverage │
├──────────────────┼────────┼────────┼─────────┼──────────┤
│ Unit │ 24 │ 0 │ 0 │ 94% │
│ Feature │ 18 │ 0 │ 0 │ 88% │
│ Integration │ 20 │ 0 │ 0 │ 85% │
│ Browser │ 6 │ 0 │ 1 │ 76% │
├──────────────────┼────────┼────────┼─────────┼──────────┤
│ TOTAL │ 68 │ 0 │ 1 │ 87% │
└──────────────────┴────────┴────────┴─────────┴──────────┘
Overall Coverage: 87% (target: 80%+) ✅
Lines: 1,247/1,432
Branches: 94/112
Functions: 156/178
Duration: 17.2s
Status: ✅ ALL TESTS PASSING
```
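A hedged sketch of how this phase's pass/fail gate could be scripted, assuming Laravel's test runner with a coverage driver installed and the `--min` coverage threshold option available:
```bash
# Sketch: run the suite in parallel with coverage and fail when coverage < 80%.
if php artisan test --parallel --coverage --min=80; then
  echo "Status: ✅ ALL TESTS PASSING (coverage target met)"
else
  echo "Status: ❌ Tests failed or coverage below 80%" >&2
  exit 1
fi
```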
---
### Phase 2: Code Quality Analysis (1 min)
**Static Analysis**:
```bash
# Laravel Pint (formatting)
vendor/bin/pint --test
# PHPStan (static analysis) - if configured
vendor/bin/phpstan analyse
# Check:
✓ Code formatting
✓ Type safety
✓ Complexity metrics
✓ Duplicate code detection
```
**Output**:
```
📊 CODE QUALITY ANALYSIS
========================
Code Formatting (Pint)
✅ All files PSR-12 compliant
✅ No style violations
Static Analysis
⚠️ 3 warnings found
1. TaskManager::updateOrder() - Missing return type
Location: app/Livewire/TaskManager.php:87
2. Category::tasks() - @param missing
Location: app/Models/Category.php:42
3. Unused variable $order
Location: app/Http/Controllers/TaskController.php:23
Complexity Metrics
✅ Cyclomatic complexity: 4.2 avg (target: <10)
✅ Cognitive complexity: 6.8 avg (target: <15)
✅ No files over threshold
Highest complexity:
TaskManager::reorderTasks() - Complexity: 8
Code Duplication
✅ No duplicate code blocks detected
✅ Similar code: 2 locations (acceptable)
- Task creation in TaskManager vs TaskController
- Recommendation: Extract to service class
Dependencies
✅ All dependencies up to date
✅ No vulnerabilities detected
✅ No unused dependencies
Quality Score: B+ (88/100)
```
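A minimal sketch of aggregating the checks above into a single exit status, assuming Pint and PHPStan are installed via Composer (PHPStan is skipped when no configuration file is present):
```bash
# Sketch: aggregate static-analysis results into one exit status.
status=0

vendor/bin/pint --test || status=1          # formatting check only (no files rewritten)

if [[ -f phpstan.neon || -f phpstan.neon.dist ]]; then
  vendor/bin/phpstan analyse || status=1    # static analysis, only if configured
fi

composer outdated --direct || true          # informational: dependency freshness

exit "$status"
```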
---
### Phase 3: Performance Profiling (30-60s)
**Build & Runtime Metrics**:
```bash
# Frontend build analysis
npm run build -- --analyze
# Backend profiling
php artisan route:list --compact
php artisan optimize
# Check:
✓ Build size and timing
✓ Route efficiency
✓ Query performance
✓ Memory usage
```
**Output**:
```
⚡ PERFORMANCE PROFILING
========================
Frontend Build
Bundle size: 248 KB (gzipped: 82 KB) ✅
Build time: 4.2s
Chunks:
- app.js: 156 KB
- vendor.js: 92 KB
Compared to baseline:
Bundle: +8 KB (+3.3%)
Build: -0.3s (faster)
Backend Performance
Routes: 24 registered
Avg response time: 45ms ✅
Database Queries
Average: 12ms
Slowest: Task::with('categories', 'tags') - 48ms
N+1 queries: None detected ✅
Memory Usage
Average: 48 MB
Peak: 72 MB ✅
Target: < 128 MB
Page Load Metrics (E2E)
Initial load: 680ms ✅
Time to interactive: 920ms ✅
First contentful paint: 340ms ✅
Performance Grade: A (94/100)
⚠️ Recommendations:
- Consider lazy loading categories for large lists
- Add index on tasks.order column for sorting
```
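A rough sketch of the baseline comparison above; the build output path and baseline file location are assumptions and should be adjusted to the project's Vite configuration:
```bash
# Sketch: compare the gzipped size of the main JS bundle against a stored baseline.
npm run build >/dev/null 2>&1

bundle=$(ls public/build/assets/app-*.js 2>/dev/null | head -n 1)   # assumed Vite output path
current_kb=$(( $(gzip -c "$bundle" | wc -c) / 1024 ))
baseline_kb=$(cat reports/bundle-baseline.kb 2>/dev/null || echo "$current_kb")

echo "Bundle (gzipped): ${current_kb} KB (baseline: ${baseline_kb} KB, delta: $(( current_kb - baseline_kb )) KB)"
```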
---
### Phase 4: Security Audit (30s)
**Security Scanning**:
```bash
# Dependency vulnerabilities
composer audit
# Laravel security checks
php artisan config:cache --check
php artisan route:cache --check
# Check:
✓ Dependency vulnerabilities
✓ Exposed secrets (.env validation)
✓ CSRF protection
✓ SQL injection prevention
```
**Output**:
```
🔒 SECURITY AUDIT
=================
Dependency Vulnerabilities
✅ 0 vulnerabilities found
Last scan: 2025-10-01 14:45:22
Code Security
✅ No exposed secrets detected
✅ CSRF protection enabled
✅ SQL injection prevention (Eloquent ORM)
✅ XSS protection enabled
Laravel Security
✅ Debug mode: OFF (production)
✅ APP_KEY set and secure
✅ HTTPS enforced
✅ Session secure: true
Authentication
✅ Password hashing: bcrypt
✅ Rate limiting: configured
✅ Authorization policies: implemented
⚠️ Recommendations:
- Enable Content Security Policy headers
- Add rate limiting to API endpoints
- Consider implementing 2FA
Security Score: A- (92/100)
```
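A small sketch of the scriptable parts of this audit (dependency scan, tracked `.env` check, debug flag); the remaining items above come from reviewing framework configuration:
```bash
# Sketch: dependency audit plus two quick configuration checks.
composer audit || echo "⚠️ Vulnerable dependencies found"   # requires Composer 2.4+

# .env files must never be committed
if git ls-files --error-unmatch .env >/dev/null 2>&1; then
  echo "❌ .env is tracked by git: remove it from the repository" >&2
fi

# Debug mode should be disabled outside local environments
grep -q '^APP_DEBUG=false' .env 2>/dev/null \
  && echo "✅ Debug mode: OFF" \
  || echo "⚠️ APP_DEBUG is not set to false"
```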
---
### Phase 5: Standards Compliance (30s)
**Validate Against Project Standards**:
```bash
# Load coding standards
cat /docs/project-context/coding-standards.md   # reference document, not an executable script
# Check:
✓ TALL stack conventions
✓ Naming conventions
✓ File organization
✓ Error handling patterns
✓ Accessibility requirements
```
**Output**:
```
📐 STANDARDS COMPLIANCE
=======================
TALL Stack Conventions
✅ Livewire components properly structured
✅ Alpine.js patterns followed
✅ Tailwind utility-first approach
✅ Laravel best practices
Naming Conventions
✅ Models: PascalCase
✅ Controllers: PascalCase + Controller suffix
✅ Routes: kebab-case
✅ Variables: camelCase
File Organization
✅ PSR-4 autoloading
✅ Livewire components in App\Livewire
✅ Tests mirror app structure
✅ Resources organized by type
Error Handling
✅ Try-catch blocks where needed
✅ Validation using Form Requests
✅ User-friendly error messages
⚠️ Missing error logging in TaskManager::delete()
Accessibility (WCAG AA)
✅ Semantic HTML
✅ ARIA attributes present
✅ Keyboard navigation
✅ Color contrast: 4.5:1+
✅ Focus indicators
Responsive Design
✅ Mobile-first approach
✅ Touch targets: 44px min
✅ Viewport meta tag
✅ Fluid typography
Compliance Score: A (96/100)
⚠️ Minor Issues:
- Add error logging to deletion operations
- Document API endpoints in OpenAPI spec
```
---
### Phase 6: Documentation Check (20s)
**Documentation Validation**:
```bash
# Check documentation completeness
ls -la README.md CHANGELOG.md
# Check inline docs
grep -r "@param" app/ | wc -l
grep -r "@return" app/ | wc -l
# Check:
✓ README completeness
✓ Inline PHPDoc blocks
✓ Story documentation
✓ API documentation
```
**Output**:
```
📚 DOCUMENTATION CHECK
======================
Project Documentation
✅ README.md: Present and updated
✅ CHANGELOG.md: Updated with v1.3.0
⚠️ API documentation: Missing OpenAPI spec
Story Documentation
✅ Story file: Complete with acceptance criteria
✅ Progress log: Updated
✅ Test results: Documented
Code Documentation
✅ PHPDoc blocks: 156/178 methods (88%)
⚠️ Missing @param: 14 methods
⚠️ Missing @return: 8 methods
Classes needing docs:
- TaskManager::updateOrder() (missing @param)
- Category::tasks() (missing @return)
Inline Comments
✅ Complex logic documented
✅ TODOs tracked: 3 found
- TODO: Add pagination to large lists
- TODO: Implement caching for categories
- TODO: Add bulk operations
Documentation Score: B+ (87/100)
Recommendations:
- Add PHPDoc to remaining 22 methods
- Generate OpenAPI spec for API endpoints
- Document environment variables
```
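The PHPDoc coverage figure above can be approximated with a grep-based heuristic; this is a rough count, not a real docblock parser:
```bash
# Sketch: approximate PHPDoc coverage by comparing tag counts to method counts.
methods=$(grep -rE 'public function ' app/ | wc -l)
params=$(grep -r '@param' app/ | wc -l)
returns=$(grep -r '@return' app/ | wc -l)

echo "Public methods: ${methods}"
echo "@param tags:    ${params}"
echo "@return tags:   ${returns}"
awk -v m="$methods" -v r="$returns" \
  'BEGIN { if (m > 0) printf "Approx. documented: %.0f%%\n", r * 100 / m }'
```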
---
### Phase 7: Generate Full Report (10s)
**Comprehensive Validation Report**:
```
📋 FULL VALIDATION REPORT
=========================
Story: STORY-DUE-002 - Due Date Management
Validated: 2025-10-01 14:48:35
Duration: 4 minutes 52 seconds
OVERALL GRADE: A- (91/100)
STATUS: ✅ READY FOR PRODUCTION
┌─────────────────────┬───────┬────────┬──────────────┐
│ Validation Area │ Score │ Status │ Grade │
├─────────────────────┼───────┼────────┼──────────────┤
│ Test Suite │ 100% │ ✅ │ A+ │
│ Code Quality │ 88% │ ✅ │ B+ │
│ Performance │ 94% │ ✅ │ A │
│ Security │ 92% │ ✅ │ A- │
│ Standards │ 96% │ ✅ │ A │
│ Documentation │ 87% │ ⚠️ │ B+ │
├─────────────────────┼───────┼────────┼──────────────┤
│ OVERALL │ 91% │ ✅ │ A- │
└─────────────────────┴───────┴────────┴──────────────┘
✅ HIGHLIGHTS (23):
✓ All 68 tests passing (1 skipped)
✓ Test coverage: 87% (exceeds 80% target)
✓ No security vulnerabilities
✓ Performance within targets
✓ WCAG AA accessibility compliant
✓ Mobile-responsive design validated
✓ No N+1 query issues
✓ Code formatting compliant (Pint)
✓ All dependencies up to date
✓ CSRF/XSS protection enabled
... (13 more)
⚠️ WARNINGS (8):
1. Missing return type: TaskManager::updateOrder()
Priority: LOW | Fix time: 2 min
Impact: Type safety, IDE autocomplete
2. Missing @param docs: 14 methods
Priority: LOW | Fix time: 10 min
Impact: Developer experience
3. Bundle size +8KB from baseline
Priority: LOW | Fix time: N/A
Impact: Acceptable growth for new features
4. Missing OpenAPI spec for API
Priority: MEDIUM | Fix time: 30 min
Impact: API documentation
... (4 more)
❌ FAILURES (0):
None - all critical checks passed!
📈 COMPARED TO LAST CHECK (STORY-DUE-001):
Improved:
✓ Test coverage: 83% → 87% (+4%)
✓ Performance: B → A (response times -12ms)
✓ Code quality: B → B+ (complexity reduced)
Degraded:
⚠️ Bundle size: 240KB → 248KB (+3.3%)
⚠️ Documentation: A- → B+ (new code needs docs)
Maintained:
→ Security: A- (consistent)
→ Standards: A (consistent)
🎯 ACTION ITEMS (Prioritized):
PRIORITY 1: MUST FIX BEFORE SHIP
(None)
PRIORITY 2: SHOULD FIX BEFORE REVIEW
1. Add OpenAPI spec for API endpoints (30 min)
Benefit: Better API documentation for consumers
2. Add missing PHPDoc blocks (10 min)
Benefit: Improved code maintainability
PRIORITY 3: CONSIDER FOR FUTURE
1. Implement lazy loading for large category lists
Benefit: Performance optimization for edge cases
2. Add Content Security Policy headers
Benefit: Enhanced security posture
3. Implement API rate limiting
Benefit: Prevent abuse, improve stability
✅ PRODUCTION READINESS: YES
All critical checks passed. Minor warnings acceptable.
🎯 NEXT STEPS:
1. (Optional) Fix Priority 2 warnings
2. Run /sdd:story-validate for final sign-off
3. Move to /docs/stories/qa/ for final QA
4. Ship to production
VALIDATION CONFIDENCE: HIGH (91%)
```
---
### Phase 8: Export Report (if --export flag)
**Save Detailed Report**:
```bash
# Create report file
mkdir -p /reports
cat > /reports/full-check-$(date +%Y%m%d-%H%M%S).md <<EOF
[... full report content ...]
EOF
# Output location
echo "Report saved: /reports/full-check-20251001-144835.md"
```
**Output**:
```
📄 REPORT EXPORTED
==================
Location: /reports/full-check-20251001-144835.md
Size: 24 KB
Format: Markdown
Report includes:
✓ Detailed test results
✓ Code quality metrics
✓ Performance benchmarks
✓ Security findings
✓ Compliance checklist
✓ Action items with priorities
✓ Historical comparison
Share with: git add reports/ && git commit -m "Add validation report"
```
---
## Examples
### Example 1: Perfect Score
```bash
$ /sdd:story-full-check
[... full validation ...]
📋 FULL VALIDATION REPORT
=========================
OVERALL GRADE: A+ (98/100)
STATUS: ✅ PRODUCTION READY
✅ All checks passed
⚠️ 0 warnings
0 failures
🎯 NEXT STEPS:
Run /sdd:story-validate → Ship to production
VALIDATION CONFIDENCE: VERY HIGH (98%)
```
### Example 2: Warnings Present
```bash
$ /sdd:story-full-check
[... full validation ...]
OVERALL GRADE: B+ (86/100)
STATUS: ⚠️ READY WITH WARNINGS
65 highlights
⚠️ 12 warnings (8 low, 4 medium priority)
0 critical failures
🎯 RECOMMENDED ACTIONS:
1. Fix 4 medium-priority warnings (est. 45 min)
2. Re-run /sdd:story-full-check
3. Then proceed to /sdd:story-validate
Accept warnings and ship? [y/n]:
```
### Example 3: Comparison Mode
```bash
$ /sdd:story-full-check --compare=abc123
[... full validation ...]
📈 COMPARED TO abc123:
Improved:
✓ Test coverage: 78% → 87%
✓ Performance: C+ → A
✓ Security: B → A-
Degraded:
⚠️ Code quality: A → B+ (new complex logic)
⚠️ Bundle size: +12KB
Net change: +8 points (improvement)
```
### Example 4: Export Report
```bash
$ /sdd:story-full-check --export
[... full validation ...]
📄 REPORT EXPORTED
==================
Location: /reports/full-check-20251001-144835.md
View report:
cat /reports/full-check-20251001-144835.md
Share report:
git add reports/ && git commit -m "Add validation"
```
### Example 5: Scoped Check (Tests Only)
```bash
$ /sdd:story-full-check --scope=tests --coverage
🧪 COMPREHENSIVE TESTING
========================
[... test results ...]
Coverage Report: /tests/coverage/index.html
OVERALL: ✅ ALL TESTS PASSING
Skipping: quality, security, performance, docs
Run full check: /sdd:story-full-check (no --scope)
```
---
## Success Criteria
**Command succeeds when**:
- All validation phases complete within 6 minutes
- Comprehensive report generated (grade A-F)
- Action items prioritized (P1, P2, P3)
- Clear production readiness verdict
- Historical comparison available
**Grade Scale**:
- **A+ (95-100%)**: Exceptional, production-ready
- **A (90-94%)**: Excellent, minor improvements
- **B (80-89%)**: Good, address warnings
- **C (70-79%)**: Acceptable, fix medium-priority issues
- **D (60-69%)**: Poor, significant issues
- **F (<60%)**: Failing, critical issues
**Command fails when**:
- Context files missing
- Critical test failures
- Security vulnerabilities detected
- Performance significantly degraded
---
## Output Files
**Generated Reports** (if `--export`):
- Full validation report: `/reports/full-check-YYYYMMDD-HHMMSS.md`
- Coverage report: `/tests/coverage/index.html` (if `--coverage`)
- Performance profile: `/reports/performance-YYYYMMDD.json` (if profiling enabled)
**No New Files** (default): Updates existing story documentation only
---
## Notes
- **Execution Time**: 4-6 minutes (varies by project size)
- **Comprehensive**: Includes all validation aspects (tests, quality, security, performance, standards, docs)
- **Production Gate**: Validates story is production-ready
- **Historical Tracking**: Compares against previous checks
- **Actionable**: Prioritized action items with fix time estimates
**Best Practices**:
1. Run `/sdd:story-quick-check` before this (catch obvious issues fast)
2. Run before moving story to `/docs/stories/review/`
3. Use `--export` for documentation trail
4. Use `--compare` to track quality improvements
5. Address P1 (must fix) and P2 (should fix) items before ship
**When to Use**:
- ✅ Before code review (move to `/docs/stories/review/`)
- ✅ Before final validation (`/sdd:story-validate`)
- ✅ After major refactoring
- ✅ Before production deployment
- ✅ Weekly quality check for long-running stories
**When NOT to Use**:
- ❌ During active development (use `/sdd:story-quick-check`)
- ❌ For quick validation (use `/sdd:story-quick-check`)
- ❌ Multiple times per day (too slow)
**Next Steps After Success**:
```bash
A+ (95-100%) → /sdd:story-validate → Ship
A (90-94%) → /sdd:story-validate → Ship (minor improvements optional)
B (80-89%) → Fix P2 warnings → Re-run → /sdd:story-validate
C (70-79%) → Fix P1+P2 issues → Re-run
D-F (<70%) → Fix critical issues → Re-run entire workflow
```
**Quick Reference**:
```bash
# Fast check first (30s)
/sdd:story-quick-check
# Then comprehensive validation (5min)
/sdd:story-full-check
# Fix issues, then final validation
/sdd:story-validate
```
commands/story-implement.md
# /sdd:story-implement
## Meta
- Version: 2.0
- Category: workflow
- Complexity: comprehensive
- Purpose: Generate context-aware implementation code, tests, and browser tests based on story requirements
## Definition
**Purpose**: Generate complete implementation including production code, unit tests, integration tests, and browser tests using the project's actual technical stack and coding standards.
**Syntax**: `/sdd:story-implement [story_id]`
## Parameters
| Parameter | Type | Required | Default | Description | Validation |
|-----------|------|----------|---------|-------------|------------|
| story_id | string | No | current branch | Story identifier (e.g., "STORY-AUTH-001") | Must match pattern STORY-[A-Z0-9]+-\d+ |
## INSTRUCTION: Generate Story Implementation
### INPUTS
- story_id: Optional story identifier (auto-detected from current branch if omitted)
- Story file from `/docs/stories/development/[story-id].md`
- Project context from `/docs/project-context/` directory
- Technical stack configuration
- Coding standards and patterns
### PROCESS
#### Phase 1: Project Context Loading
1. **CHECK** if `/docs/project-context/` directory exists
2. IF missing:
- SUGGEST running `/sdd:project-init` first
- HALT execution with clear guidance
- EXIT with initialization instructions
3. **LOAD** and **PARSE** project context:
- `/docs/project-context/technical-stack.md`:
* IDENTIFY actual frontend framework (React/Vue/Svelte/Laravel Blade/etc.)
* IDENTIFY actual state management (Redux/Vuex/Pinia/Livewire/Alpine.js/etc.)
* IDENTIFY actual primary language (TypeScript/JavaScript/PHP/Python/Go/etc.)
* IDENTIFY actual styling approach (Tailwind/CSS Modules/Styled Components/etc.)
* IDENTIFY actual backend runtime and framework
* IDENTIFY actual database system
* IDENTIFY actual testing framework
* IDENTIFY actual browser testing tools
* IDENTIFY actual build tools and package manager
- `/docs/project-context/coding-standards.md`:
* EXTRACT file organization patterns
* EXTRACT naming conventions
* EXTRACT error handling approach
* EXTRACT testing patterns
* EXTRACT code formatting rules
* EXTRACT comment and documentation standards
- `/docs/project-context/development-process.md`:
* EXTRACT stage requirements
* EXTRACT quality gates
* EXTRACT review criteria
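A minimal bash sketch of the context check in steps 1-3, with a naive `grep` for a few stack entries; real parsing should follow the actual structure of technical-stack.md:
```bash
# Sketch: halt when project context is missing, then surface a few stack entries.
context_dir="/docs/project-context"

if [[ ! -d "$context_dir" ]]; then
  echo "⚠️ Project context missing. Run /sdd:project-init first." >&2
  exit 1
fi

for key in Frontend Backend Database Testing; do
  grep -i -m1 "$key" "$context_dir/technical-stack.md" || echo "$key: not specified"
done
```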
#### Phase 2: Story File Discovery and Analysis
1. **DETERMINE** story ID:
- IF story_id parameter provided: USE it
- ELSE: EXTRACT from current git branch name
- ELSE: FIND most recent story in `/docs/stories/development/`
- IF NOT FOUND: HALT with error and suggest `/sdd:story-start`
2. **READ** story file at `/docs/stories/development/[story-id].md`
3. **EXTRACT** story requirements:
- Success criteria (acceptance criteria)
- Technical approach and constraints
- Implementation checklist status
- Dependencies and integration points
- Edge cases and error scenarios
- UI/UX considerations
- Performance requirements
- Accessibility requirements
- Security considerations
4. **VALIDATE** story is in development status:
- IF in "backlog": SUGGEST running `/sdd:story-start` first
- IF in "review" or "qa": WARN about overwriting reviewed code
- IF in "completed": HALT with error
#### Phase 3: Implementation Generation
1. **GENERATE** production code using DISCOVERED stack:
**Frontend Components:**
- CREATE component files for DISCOVERED framework:
* React: .jsx/.tsx components with hooks
* Vue: .vue single-file components
* Laravel Blade + Livewire: Livewire component classes + Blade views
* Svelte: .svelte components
* Angular: .ts components with templates
- APPLY DISCOVERED state management patterns
- USE DISCOVERED styling approach
- IMPLEMENT error boundaries/error handling
- ADD loading states and feedback
- ENSURE accessibility features (ARIA, keyboard nav, focus management)
- OPTIMIZE for performance (memoization, lazy loading)
**Backend Components:**
- CREATE backend files for DISCOVERED framework:
* Laravel: Controllers, Models, Migrations, Policies
* Express: Routes, Controllers, Middleware, Models
* Django: Views, Models, Forms, Serializers
* FastAPI: Routes, Models, Schemas, Dependencies
- IMPLEMENT business logic
- ADD validation and sanitization
- INCLUDE error handling
- ADD logging using DISCOVERED tools
- IMPLEMENT security measures (CSRF, XSS, SQL injection prevention)
- OPTIMIZE database queries (eager loading, indexing)
**Database Changes:**
- CREATE migrations for DISCOVERED database system
- DEFINE models with relationships
- ADD indexes for performance
- INCLUDE rollback instructions
- SEED data if needed
2. **APPLY** coding standards from coding-standards.md:
- FOLLOW DISCOVERED file organization
- USE DISCOVERED naming conventions
- APPLY DISCOVERED formatting rules
- ADD DISCOVERED comment patterns
- RESPECT DISCOVERED error handling approach
3. **VERIFY** generated code quality:
- CHECK syntax correctness
- ENSURE imports and dependencies are correct
- VALIDATE against DISCOVERED linting rules
- CONFIRM adherence to coding standards
#### Phase 4: Unit Test Generation
1. **GENERATE** unit tests using DISCOVERED test framework:
**Test Framework Detection:**
- Jest/Vitest: Create .test.js/.spec.js files
- Pest: Create .php test files in tests/Unit/ or tests/Feature/
- Pytest: Create test_*.py files
- JUnit: Create *Test.java files
- Go: Create *_test.go files
2. **CREATE** test cases covering:
- Each success criterion
- Happy path scenarios
- Edge cases and boundary conditions
- Error scenarios and error handling
- Validation rules
- State management logic
- Business logic correctness
- Component rendering (for frontend)
- API responses (for backend)
3. **APPLY** DISCOVERED testing patterns:
- USE DISCOVERED test structure (describe/it, it(), test functions)
- FOLLOW DISCOVERED mocking patterns
- USE DISCOVERED assertion methods
- INCLUDE DISCOVERED test setup/teardown
4. **ORGANIZE** tests logically:
- GROUP related tests
- NAME tests descriptively
- ADD comments for complex scenarios
#### Phase 5: Integration Test Generation
1. **GENERATE** integration tests for:
- API endpoints (if applicable)
- Database interactions
- Component integration
- Service integration
- External dependencies
2. **CREATE** test scenarios covering:
- Full request/response cycles
- Database transactions
- Authentication/authorization flows
- Multi-step user interactions
- Cross-component communication
3. **USE** DISCOVERED integration testing tools:
- Laravel: Feature tests with RefreshDatabase
- Express: Supertest for API testing
- Django: TestCase with database fixtures
- React: Testing Library integration tests
#### Phase 6: Browser Test Generation
1. **PARSE** acceptance criteria from story file
2. **GENERATE** browser test file using DISCOVERED browser testing framework:
**Framework-Specific Paths:**
- Laravel: `tests/Browser/[StoryId]Test.php` (Dusk/Playwright)
- Node.js: `tests/e2e/[story-id].spec.js` (Playwright/Cypress)
- Python: `tests/browser/test_[story_id].py` (Playwright/Selenium)
- Ruby: `spec/features/[story_id]_spec.rb` (Capybara)
3. **CREATE** test methods for each acceptance criterion:
- USER INTERACTIONS: Click, type, select, submit
- VISUAL VERIFICATION: Assert elements, text, styles
- NAVIGATION: Page transitions, routing
- FORMS: Input validation, submission, error display
- RESPONSIVE DESIGN: Test on different viewports
- ACCESSIBILITY: Keyboard navigation, screen reader support
4. **INCLUDE** browser test patterns:
- Page object models (if applicable)
- Reusable test helpers
- Setup and teardown logic
- Screenshots on failure
- Proper wait strategies
#### Phase 7: Test Execution
1. **RUN** unit tests using DISCOVERED commands:
- Examples: `npm test`, `vendor/bin/pest`, `pytest`, `go test`
- EXECUTE with appropriate filters if available
- CAPTURE output and test results
2. **RUN** integration tests:
- EXECUTE using DISCOVERED integration test commands
- VERIFY database interactions
- CHECK API responses
3. **RUN** browser tests:
- EXECUTE using DISCOVERED browser testing commands
- VERIFY all acceptance criteria are met
- CAPTURE screenshots/videos on failure
4. **ANALYZE** test results:
- IF tests PASS:
* SHOW success summary
* DISPLAY coverage metrics
* PROCEED to Phase 8
- IF tests FAIL:
* IDENTIFY failing tests
* ANALYZE failure reasons
* FIX implementation issues
* RE-RUN tests
* REPEAT until all tests pass
5. **RUN** linter using DISCOVERED tool:
- Examples: `vendor/bin/pint`, `npm run lint`, `black`, `gofmt`
- AUTO-FIX formatting issues if possible
- REPORT any remaining issues
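A sketch of the run-and-fix loop above, bounded at three attempts to match the "Tests Fail Repeatedly" edge case; the test and lint commands shown are examples, the real ones come from technical-stack.md:
```bash
# Sketch: bounded test/fix/re-run loop using the discovered test and lint commands.
TEST_CMD="${TEST_CMD:-vendor/bin/pest}"
LINT_CMD="${LINT_CMD:-vendor/bin/pint}"

attempt=1
until $TEST_CMD; do
  if (( attempt >= 3 )); then
    echo "Tests still failing after ${attempt} attempts; manual debugging needed" >&2
    exit 1
  fi
  echo "Attempt ${attempt} failed. Apply fixes, then re-run..."
  attempt=$(( attempt + 1 ))
done

$LINT_CMD   # auto-fix formatting where possible
```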
#### Phase 8: Story File Update
1. **UPDATE** Implementation Checklist in story file:
- `[x]` Feature implementation (when core functionality complete)
- `[x]` Unit tests (when unit test suite passes)
- `[x]` Integration tests (when integration tests pass)
- `[x]` Browser tests (when browser tests pass)
- `[x]` Error handling (when error scenarios are handled)
- `[x]` Loading states (when UI loading states implemented)
- `[x]` Performance optimization (when performance requirements met)
- `[x]` Accessibility (when accessibility features implemented)
- `[x]` Security review (when security checks pass)
- `[x]` Documentation (when inline docs complete)
2. **ADD** progress log entry:
- TIMESTAMP: Current date/time
- DESCRIPTION: What was implemented
- FILES CREATED: List of new files
- FILES MODIFIED: List of changed files
- TECHNOLOGIES USED: Which parts of stack were utilized
- TEST RESULTS: Test pass/fail status
- DEVIATIONS: Any changes from original plan
3. **NOTE** implementation decisions:
- KEY DECISIONS: Important architectural choices
- TRADE-OFFS: Compromises made
- FUTURE IMPROVEMENTS: Identified tech debt or enhancements
#### Phase 9: Implementation Summary
1. **DISPLAY** comprehensive summary:
```
✅ IMPLEMENTATION COMPLETE
============================
Story: [story-id] - [Title]
Stack: [Technologies Used]
Generated Files:
📄 Production Code:
- [list of production files with paths]
🧪 Unit Tests:
- [list of unit test files]
🔗 Integration Tests:
- [list of integration test files]
🌐 Browser Tests:
- [list of browser test files]
Test Results:
✅ Unit Tests: [X/X passed]
✅ Integration Tests: [X/X passed]
✅ Browser Tests: [X/X passed]
✅ Linting: Passed
Coverage: [X%]
Key Implementation Decisions:
- [Decision 1]
- [Decision 2]
Next Steps:
1. /sdd:story-save to commit implementation
2. /sdd:story-review to move to code review
3. Manual testing in browser at [URL]
```
### OUTPUTS
- Production code files (components, controllers, models, etc.)
- Unit test files with comprehensive coverage
- Integration test files for API/database/service interactions
- Browser test files covering all acceptance criteria
- Updated story file with progress and checklist updates
- Test execution results and coverage metrics
### RULES
- MUST load project context before generating any code
- MUST adapt ALL generated code to DISCOVERED stack
- NEVER assume framework - ALWAYS read technical-stack.md
- MUST generate tests for DISCOVERED testing framework
- MUST run tests and ensure they pass before completion
- MUST update story file with implementation progress
- MUST follow DISCOVERED coding standards exactly
- MUST generate browser tests for all acceptance criteria
- SHOULD run linter and fix formatting issues
- MUST NOT proceed if project context is missing
- MUST NOT modify code in "review", "qa", or "completed" stories without confirmation
## Examples
### Example 1: Laravel + Livewire Implementation
```bash
INPUT:
/sdd:story-implement STORY-AUTH-001
PROCESS:
→ Loading project context from /docs/project-context/
→ Detected stack: Laravel 12 + Livewire 3 + Tailwind CSS 4 + Pest 4
→ Reading story: /docs/stories/development/STORY-AUTH-001.md
→ Story: Implement Login Form
→ Generating Livewire component...
→ Generating unit tests...
→ Generating feature tests...
→ Generating browser tests...
→ Running tests: vendor/bin/pest
→ All tests passing ✅
→ Running linter: vendor/bin/pint
→ Code formatted ✅
→ Updating story file...
OUTPUT:
✅ IMPLEMENTATION COMPLETE
============================
Story: STORY-AUTH-001 - Implement Login Form
Stack: Laravel 12, Livewire 3, Tailwind CSS 4, Alpine.js, Pest 4
Generated Files:
📄 Production Code:
- app/Livewire/Auth/LoginForm.php
- resources/views/livewire/auth/login-form.blade.php
- app/Http/Controllers/Auth/LoginController.php
🧪 Unit Tests:
- tests/Unit/Livewire/LoginFormTest.php
🔗 Integration Tests:
- tests/Feature/Auth/LoginTest.php
🌐 Browser Tests:
- tests/Browser/Auth/LoginFormTest.php
Test Results:
✅ Unit Tests: 8/8 passed
✅ Feature Tests: 5/5 passed
✅ Browser Tests: 4/4 passed
✅ Linting: Passed
Coverage: 92%
Key Implementation Decisions:
- Used wire:model.live for real-time validation
- Implemented rate limiting (5 attempts per minute)
- Added remember me functionality with 30-day expiry
- Used Livewire validation attributes for clean code
Next Steps:
1. /sdd:story-save to commit implementation
2. /sdd:story-review to move to code review
3. Manual testing in browser at https://ccs-todo.test/login
```
### Example 2: React + TypeScript Implementation
```bash
INPUT:
/sdd:story-implement STORY-PROFILE-002
PROCESS:
→ Loading project context
→ Detected stack: React 18 + TypeScript + Vite + Jest + Playwright
→ Reading story: /docs/stories/development/STORY-PROFILE-002.md
→ Story: User Profile Settings
→ Generating React components...
→ Generating custom hooks...
→ Generating unit tests...
→ Generating integration tests...
→ Generating Playwright tests...
→ Running tests: npm test
→ All tests passing ✅
→ Running linter: npm run lint
→ Code formatted ✅
→ Updating story file...
OUTPUT:
✅ IMPLEMENTATION COMPLETE
============================
Story: STORY-PROFILE-002 - User Profile Settings
Stack: React 18, TypeScript, Redux Toolkit, Vite, Jest, Playwright
Generated Files:
📄 Production Code:
- src/components/ProfileSettings.tsx
- src/components/ProfileSettings.module.css
- src/hooks/useProfileSettings.ts
- src/store/slices/profileSlice.ts
- src/api/profileApi.ts
🧪 Unit Tests:
- src/components/ProfileSettings.test.tsx
- src/hooks/useProfileSettings.test.ts
- src/store/slices/profileSlice.test.ts
🔗 Integration Tests:
- src/tests/integration/profile.test.tsx
🌐 Browser Tests:
- tests/e2e/profile-settings.spec.ts
Test Results:
✅ Unit Tests: 15/15 passed
✅ Integration Tests: 6/6 passed
✅ Browser Tests: 7/7 passed
✅ Linting: Passed
Coverage: 88%
Key Implementation Decisions:
- Implemented optimistic updates for better UX
- Added debounced auto-save (500ms delay)
- Used React.memo to prevent unnecessary re-renders
- Implemented form validation with Zod schema
Next Steps:
1. /sdd:story-save to commit implementation
2. /sdd:story-review to move to code review
3. Manual testing in browser at http://localhost:5173/profile
```
### Example 3: Implementation with Missing Context
```bash
INPUT:
/sdd:story-implement STORY-API-003
PROCESS:
→ Checking for /docs/project-context/ directory
→ Directory not found
OUTPUT:
⚠️ Project Context Missing
==========================
Cannot generate implementation without project context.
The /docs/project-context/ directory does not exist.
This directory defines your technical stack and coding standards,
which determines what code to generate.
Please run: /sdd:project-init
This will create:
- /docs/project-context/technical-stack.md (defines your stack)
- /docs/project-context/coding-standards.md (defines your standards)
- /docs/project-context/development-process.md (defines your workflow)
After initialization, run /sdd:story-implement again.
```
### Example 4: Implementation with Failing Tests
```bash
INPUT:
/sdd:story-implement STORY-CART-004
PROCESS:
→ Loading project context
→ Detected stack: Django 5 + HTMX + Tailwind CSS + Pytest
→ Reading story: /docs/stories/development/STORY-CART-004.md
→ Generating Django views...
→ Generating templates...
→ Generating unit tests...
→ Generating integration tests...
→ Running tests: pytest
→ 2 tests failed ❌
→ Analyzing failures...
→ Fixing cart total calculation...
→ Re-running tests: pytest
→ All tests passing ✅
→ Updating story file...
OUTPUT:
✅ IMPLEMENTATION COMPLETE
============================
Story: STORY-CART-004 - Shopping Cart
Stack: Django 5, HTMX, Tailwind CSS, Pytest, Playwright
Generated Files:
📄 Production Code:
- shop/views/cart.py
- shop/templates/cart/index.html
- shop/templates/cart/_item.html
- shop/models/cart.py
- shop/forms/cart.py
🧪 Unit Tests:
- shop/tests/test_cart_model.py
- shop/tests/test_cart_views.py
🔗 Integration Tests:
- shop/tests/test_cart_integration.py
🌐 Browser Tests:
- tests/browser/test_cart.py
Test Results:
✅ Unit Tests: 12/12 passed (2 fixed)
✅ Integration Tests: 5/5 passed
✅ Browser Tests: 8/8 passed
✅ Linting: Passed
Coverage: 95%
Key Implementation Decisions:
- Used Decimal for currency to avoid floating point errors
- Implemented HTMX for partial updates (no page reload)
- Added optimistic locking for concurrent cart updates
- Used Django signals for cart total recalculation
Issues Fixed During Implementation:
- Cart total calculation had rounding error
- Missing CSRF token in HTMX request
Next Steps:
1. /sdd:story-save to commit implementation
2. /sdd:story-review to move to code review
3. Manual testing in browser at http://localhost:8000/cart
```
## Edge Cases
### Story Not in Development
```
IF story found in /docs/stories/backlog/:
- SUGGEST running /sdd:story-start first
- EXPLAIN that story must be moved to development
- OFFER to start story automatically
```
### Story Already Reviewed
```
IF story found in /docs/stories/review/ or /docs/stories/qa/:
- WARN about modifying reviewed code
- ASK for confirmation to proceed
- IF confirmed: Generate code
- IF declined: Exit gracefully
```
### Tests Fail Repeatedly
```
IF tests fail after 3 fix attempts:
- SHOW detailed error output
- SUGGEST manual debugging steps
- OFFER to save partial implementation
- MARK implementation as incomplete in story file
```
### Missing Dependencies
```
IF required packages not installed:
- DETECT missing dependencies from imports
- LIST missing packages
- SUGGEST installation command for DISCOVERED package manager
- OFFER to continue without those features
```
### Conflicting Files Exist
```
IF files already exist at generation paths:
- SHOW list of conflicting files
- ASK: Overwrite, skip, or merge?
- IF overwrite: Backup existing files first
- IF skip: Generate only non-conflicting files
- IF merge: Attempt intelligent merge
```
## Error Handling
- **Story ID missing and not on feature branch**: Return "Error: Story ID required. Usage: /sdd:story-implement <story_id>"
- **Invalid story ID format**: Return "Error: Invalid story ID format. Expected: STORY-XXX-NNN"
- **Project context missing**: Halt and suggest /sdd:project-init with detailed guidance
- **Story not found**: Return "Error: Story not found. Ensure it exists in /docs/stories/development/"
- **Context files corrupted**: Show specific parsing errors and suggest manual review
- **Test execution fails**: Show error output and offer troubleshooting steps
- **Linter fails**: Show linting errors and auto-fix if possible
## Performance Considerations
- Load and parse project context once at start
- Cache parsed technical stack for session
- Generate files in parallel when possible
- Run tests with appropriate parallelization flags
- Skip unchanged files during re-generation
- Use incremental builds for DISCOVERED build tools
## Related Commands
- `/sdd:story-start` - Begin development before implementing
- `/sdd:story-save` - Commit implementation after completion
- `/sdd:story-review` - Move to code review after implementation
- `/sdd:story-continue` - Resume implementation if interrupted
- `/sdd:project-init` - Initialize project context first
## Constraints
- ✅ MUST load project context before any code generation
- ✅ MUST generate code matching DISCOVERED technical stack
- ✅ MUST create tests for DISCOVERED testing framework
- ✅ MUST run tests and ensure they pass
- ⚠️ NEVER assume technology choices - ALWAYS read context
- 📋 MUST update story file with implementation progress
- 🧪 MUST generate browser tests for acceptance criteria
- 🔧 SHOULD run linter and fix formatting
- 💾 MUST create comprehensive test coverage
commands/story-metrics.md
# /sdd:story-metrics
## Meta
- Version: 2.0
- Category: story-analysis
- Complexity: medium
- Purpose: Calculate and display development velocity, cycle time, and quality metrics from story data
## Definition
**Purpose**: Analyze completed and in-progress stories to understand development patterns, velocity trends, bottlenecks, and generate actionable insights.
**Syntax**: `/sdd:story-metrics [period]`
## Parameters
| Parameter | Type | Required | Default | Description | Validation |
|-----------|------|----------|---------|-------------|------------|
| period | string | No | "all" | Time period to analyze (week, month, quarter, all) | One of: week, month, quarter, all |
## INSTRUCTION: Analyze Story Metrics
### INPUTS
- period: Optional time period filter (defaults to all-time)
- Story files from all directories:
- `/docs/stories/backlog/` - Stories not started
- `/docs/stories/development/` - Active stories
- `/docs/stories/review/` - Stories in review
- `/docs/stories/qa/` - Stories in testing
- `/docs/stories/completed/` - Finished stories
### PROCESS
#### Phase 1: Data Collection
1. **SCAN** all story directories for `.md` files
2. **PARSE** each story file to extract:
- Story ID and title
- Status (current folder)
- Started date
- Completed date
- Stage transitions (from progress log)
- Test results
- Bug count (from progress log)
- Story size (days to complete)
- Technologies used (from technical notes)
3. **FILTER** by period if specified:
- week: Last 7 days
- month: Last 30 days
- quarter: Last 90 days
- all: All stories
4. **CALCULATE** time in each stage:
- Development time
- Review time
- QA time
- Total cycle time
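A bash sketch of the cycle-time extraction described above, assuming GNU `date` and that each story file records `Started:` and `Completed:` dates in `YYYY-MM-DD` form:
```bash
# Sketch: compute cycle time in days for each completed story.
for f in /docs/stories/completed/*.md; do
  started=$(grep -m1 -oE 'Started: [0-9]{4}-[0-9]{2}-[0-9]{2}' "$f" | awk '{print $2}')
  completed=$(grep -m1 -oE 'Completed: [0-9]{4}-[0-9]{2}-[0-9]{2}' "$f" | awk '{print $2}')
  [[ -z "$started" || -z "$completed" ]] && continue   # skip stories with missing dates

  days=$(( ( $(date -d "$completed" +%s) - $(date -d "$started" +%s) ) / 86400 ))
  echo "$(basename "$f" .md): ${days} days"
done
```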
#### Phase 2: Velocity Metrics
1. **COUNT** completed stories per time period
2. **CALCULATE** average cycle time (start to completion)
3. **COMPUTE** throughput (stories per week)
4. **GENERATE** trend analysis:
- Group stories by week
- Create visual bar chart
- Calculate trend direction
5. **DISPLAY** velocity metrics:
```
📈 VELOCITY METRICS
══════════════════════════════════
Current Period: [Date range]
- Stories completed: [count]
- Average cycle time: [X] days
- Throughput: [X] stories/week
Trend (Last 4 Weeks):
Week 1: ████████ 8 stories
Week 2: ██████ 6 stories
Week 3: █████████ 9 stories
Week 4: ███████ 7 stories
Status: [↗ Trending up | ↘ Trending down | → Stable]
```
#### Phase 3: Cycle Time Analysis
1. **CALCULATE** average time per stage:
- Development: Mean days from start to review
- Review: Mean hours from review to QA
- QA: Mean hours from QA to completion
- Total: Mean days from start to completion
2. **IDENTIFY** outliers:
- Fastest story (min cycle time)
- Slowest story (max cycle time)
3. **DETECT** bottlenecks:
- Stage with longest average time
- Stage with most variance
4. **DISPLAY** cycle time analysis:
```
⏱️ CYCLE TIME ANALYSIS
══════════════════════════════════
Average by Stage:
- Development: [X] days
- Review: [X] hours
- QA: [X] hours
- Total: [X] days
Outliers:
- Fastest: [STORY-ID] - [X] days
- Slowest: [STORY-ID] - [X] days
Bottlenecks:
- [Stage]: [X]% above average
```
#### Phase 4: Quality Metrics
1. **CALCULATE** first-time pass rate:
- Stories completed without rework
- Percentage of stories passing review first time
2. **COUNT** bugs by stage:
- Average bugs found in review
- Average bugs found in QA
- Total production incidents
3. **ANALYZE** test coverage:
- Average test cases per story
- Percentage with complete test coverage
4. **COMPUTE** rollback rate:
- Stories requiring rollback
- Percentage of completed stories
5. **DISPLAY** quality metrics:
```
🎯 QUALITY METRICS
══════════════════════════════════
Pass Rate:
- First-time pass: [X]%
- Average rework cycles: [X]
Bug Detection:
- Avg bugs in review: [X]
- Avg bugs in QA: [X]
- Production incidents: [count]
Testing:
- Avg test cases: [X]
- Coverage target met: [X]%
Stability:
- Rollback rate: [X]%
```
#### Phase 5: Story Size Distribution
1. **CATEGORIZE** stories by cycle time:
- Small: 1-2 days
- Medium: 3-5 days
- Large: 5+ days
2. **CALCULATE** distribution percentages
3. **GENERATE** visual distribution chart
4. **PROVIDE** sizing recommendation
5. **DISPLAY** size distribution:
```
📊 STORY SIZE DISTRIBUTION
══════════════════════════════════
Small (1-2 days): ████████ 40% ([count] stories)
Medium (3-5 days): ██████ 30% ([count] stories)
Large (5+ days): ██████ 30% ([count] stories)
Recommendation:
[Break down large stories | Continue current sizing | Adjust estimation]
```
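The bucketing in steps 1-2 can be sketched with awk over `<story>: <days> days` lines such as those produced by the Phase 1 sketch; the `cycle-times.txt` filename is illustrative:
```bash
# Sketch: bucket stories into small/medium/large by cycle time and print percentages.
awk '{
  d = $2 + 0
  if (d <= 2)      small++
  else if (d <= 5) medium++
  else             large++
  total++
}
END {
  if (total == 0) { print "No completed stories found"; exit }
  printf "Small  (1-2 days): %d (%.0f%%)\n", small,  small  * 100 / total
  printf "Medium (3-5 days): %d (%.0f%%)\n", medium, medium * 100 / total
  printf "Large  (5+ days):  %d (%.0f%%)\n", large,  large  * 100 / total
}' cycle-times.txt
```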
#### Phase 6: Technology Usage
1. **EXTRACT** technologies from technical notes
2. **COUNT** usage frequency across stories
3. **IDENTIFY** new technology additions
4. **TRACK** adoption dates
5. **DISPLAY** tech stack usage:
```
🔧 TECH STACK USAGE
══════════════════════════════════
Most Used:
- [Technology]: [X] stories
- [Framework]: [X] stories
- [Library]: [X] stories
Recent Additions:
- [New Tech]: Added [date]
- [New Tool]: Added [date]
```
#### Phase 7: Development Patterns
1. **ANALYZE** completion patterns:
- Most productive day of week
- Most productive time period
- Average stories per week
2. **IDENTIFY** common blockers:
- Extract from progress logs
- Count blocker frequency
- Categorize blocker types
3. **DISPLAY** development patterns:
```
📋 DEVELOPMENT PATTERNS
══════════════════════════════════
Productivity:
- Most productive day: [Day]
- Peak completion time: [Time range]
- Avg stories/week: [X]
Common Blockers:
- [Blocker type]: [X] occurrences
- [Blocker type]: [X] occurrences
```
#### Phase 8: Predictions
1. **CALCULATE** velocity-based projections:
- Expected stories next week
- Expected stories next month
- Confidence interval
2. **ANALYZE** work-in-progress:
- Current parallel stories
- Optimal WIP limit based on data
- Capacity recommendations
3. **DISPLAY** projections:
```
🔮 PROJECTIONS
══════════════════════════════════
At Current Velocity:
- Next week: [X] stories (±[Y])
- Next month: [X] stories (±[Y])
Capacity:
- Current WIP: [X] stories
- Optimal WIP limit: [X] stories
- Capacity utilization: [X]%
```
#### Phase 9: Recommendations
1. **ANALYZE** metrics for improvement opportunities
2. **GENERATE** specific, actionable recommendations:
- Process optimizations
- Bottleneck resolutions
- Quality improvements
- Tool suggestions
3. **PRIORITIZE** recommendations by impact
4. **DISPLAY** recommendations:
```
💡 RECOMMENDATIONS
══════════════════════════════════
High Impact:
1. [Specific improvement with metric basis]
2. [Process optimization with expected gain]
3. [Tool suggestion with benefit]
Quick Wins:
- [Low-effort, high-value change]
- [Simple process tweak]
```
#### Phase 10: Metrics Dashboard
1. **COMPILE** all metrics into summary dashboard
2. **CALCULATE** trend indicators:
- Velocity: trending up/down/stable
- Quality: improving/declining/stable
- Efficiency: percentage improvement
3. **EXTRACT** top insights from data
4. **GENERATE** action items
5. **DISPLAY** complete dashboard:
```
📊 METRICS DASHBOARD
══════════════════════════════════
Period: [Date range]
Generated: [Date and time]
HEADLINES:
• Velocity: [↗ Trending up | ↘ Trending down | → Stable] ([X]%)
• Quality: [Improving | Declining | Stable] ([X]%)
• Efficiency: [X]% [improvement | decline] over last period
KEY INSIGHTS:
• [Data-driven insight 1]
• [Data-driven insight 2]
• [Data-driven insight 3]
ACTION ITEMS:
• [Prioritized action 1]
• [Prioritized action 2]
NEXT REVIEW: [Suggested date]
```
6. **OFFER** export option:
```
💾 Export metrics to /metrics/[date].md? (y/n)
```
### OUTPUTS
- Console display of all metric sections
- Optional: `/metrics/[date].md` - Saved metrics report with timestamp
### RULES
- MUST scan all story directories (backlog, development, review, qa, completed)
- MUST calculate accurate time periods from story dates
- MUST handle missing dates gracefully (exclude from time-based metrics)
- SHOULD provide visual representations (bar charts) where helpful
- SHOULD calculate trends over multiple periods
- SHOULD generate actionable recommendations
- NEVER modify story files (read-only operation)
- ALWAYS display metric sources (which stories contributed)
- ALWAYS include confidence levels for predictions
- MUST handle empty directories (no stories found)
## Dashboard Layout
```
📊 METRICS DASHBOARD
══════════════════════════════════════════════════════════════════
Period: [Start Date] to [End Date]
Generated: [Timestamp]
📈 VELOCITY METRICS
─────────────────────────────────────
Current Period:
- Stories completed: [count]
- Average cycle time: [X] days
- Throughput: [X] stories/week
Trend (Last 4 Weeks):
Week 1: ████████ 8 stories
Week 2: ██████ 6 stories
Week 3: █████████ 9 stories
Week 4: ███████ 7 stories
Status: ↗ Trending up (15%)
⏱️ CYCLE TIME ANALYSIS
─────────────────────────────────────
Average by Stage:
- Development: [X] days
- Review: [X] hours
- QA: [X] hours
- Total: [X] days
Outliers:
- Fastest: [STORY-ID] - [X] days
- Slowest: [STORY-ID] - [X] days
Bottlenecks:
- [Stage]: [X]% above average
🎯 QUALITY METRICS
─────────────────────────────────────
Pass Rate:
- First-time pass: [X]%
- Average rework cycles: [X]
Bug Detection:
- Avg bugs in review: [X]
- Avg bugs in QA: [X]
- Production incidents: [count]
Testing:
- Avg test cases: [X]
- Coverage target met: [X]%
Stability:
- Rollback rate: [X]%
📊 STORY SIZE DISTRIBUTION
─────────────────────────────────────
Small (1-2 days): ████████ 40% ([count])
Medium (3-5 days): ██████ 30% ([count])
Large (5+ days): ██████ 30% ([count])
Recommendation: [Sizing guidance]
🔧 TECH STACK USAGE
─────────────────────────────────────
Most Used:
- [Technology]: [X] stories
- [Framework]: [X] stories
Recent Additions:
- [New Tech]: Added [date]
📋 DEVELOPMENT PATTERNS
─────────────────────────────────────
Productivity:
- Most productive day: [Day]
- Peak completion time: [Time range]
- Avg stories/week: [X]
Common Blockers:
- [Blocker]: [X] occurrences
🔮 PROJECTIONS
─────────────────────────────────────
At Current Velocity:
- Next week: [X] stories (±[Y])
- Next month: [X] stories (±[Y])
Capacity:
- Current WIP: [X] stories
- Optimal WIP limit: [X]
- Utilization: [X]%
💡 RECOMMENDATIONS
─────────────────────────────────────
High Impact:
1. [Specific improvement]
2. [Process optimization]
3. [Tool suggestion]
Quick Wins:
- [Low-effort improvement]
══════════════════════════════════════════════════════════════════
```
## Examples
### Example 1: All-Time Metrics
```bash
INPUT:
/sdd:story-metrics
OUTPUT:
→ Scanning story directories...
→ Found 45 stories across all stages
→ Analyzing velocity, cycle time, and quality...
📊 METRICS DASHBOARD
══════════════════════════════════════════════════════════════════
Period: All Time (Jan 1, 2025 - Oct 1, 2025)
Generated: Oct 1, 2025 at 2:30 PM
📈 VELOCITY METRICS
─────────────────────────────────────
Current Period:
- Stories completed: 42
- Average cycle time: 4.2 days
- Throughput: 6.8 stories/week
Trend (Last 4 Weeks):
Week 1: ████████ 8 stories
Week 2: ██████ 6 stories
Week 3: █████████ 9 stories
Week 4: ███████ 7 stories
Status: ↗ Trending up (12%)
[Additional sections...]
💾 Export metrics to /metrics/2025-10-01.md? (y/n)
```
### Example 2: Monthly Metrics
```bash
INPUT:
/sdd:story-metrics month
OUTPUT:
→ Scanning story directories...
→ Found 28 stories in last 30 days
→ Analyzing September 2025 data...
📊 METRICS DASHBOARD
══════════════════════════════════════════════════════════════════
Period: Sep 1, 2025 - Sep 30, 2025
Generated: Oct 1, 2025 at 2:30 PM
📈 VELOCITY METRICS
─────────────────────────────────────
Current Period:
- Stories completed: 28
- Average cycle time: 3.8 days
- Throughput: 7.0 stories/week
Status: ↗ Trending up (18% vs August)
[Additional sections...]
```
### Example 3: No Stories Found
```bash
INPUT:
/sdd:story-metrics week
OUTPUT:
→ Scanning story directories...
→ No stories found in last 7 days
⚠️ INSUFFICIENT DATA
══════════════════════════════════════
No completed stories found in the specified period.
Suggestions:
- Try a longer period: /sdd:story-metrics month
- Check if stories are marked as completed
- Verify story dates are set correctly
Current WIP:
- Development: 2 stories
- Review: 1 story
- QA: 1 story
```
## Edge Cases
### No Completed Stories
- DETECT empty completed directory
- DISPLAY insufficient data message
- SHOW current WIP as context
- SUGGEST longer time period
### Missing Dates in Stories
- SKIP stories without started/completed dates
- LOG warning about incomplete data
- CALCULATE metrics from available data
- NOTE data quality issue in dashboard
### Single Story in Period
- CALCULATE limited metrics
- WARN about small sample size
- AVOID trend calculations
- PROVIDE useful context instead
### Inconsistent Story Format
- PARSE flexibly with fallbacks
- LOG parsing warnings
- EXTRACT what's available
- CONTINUE with best-effort analysis
## Error Handling
- **No story directories**: Report missing directories, suggest `/sdd:project-init`
- **Permission errors**: Report specific access issues
- **Malformed story files**: Skip problematic files, log warnings
- **Invalid period parameter**: Show valid options, use default
- **Zero stories**: Provide helpful guidance instead of empty metrics
## Performance Considerations
- Efficient file scanning (single pass per directory)
- Lazy date parsing (only for period-filtered stories)
- Cached calculations within single run
- Streaming output for large datasets
- Typical completion time: < 2 seconds for 100 stories
## Related Commands
- `/sdd:story-patterns` - Identify recurring patterns in stories
- `/sdd:story-tech-debt` - Analyze technical debt from stories
- `/sdd:project-status` - View current story statuses
- `/sdd:story-list` - List stories with filters
## Constraints
- ✅ MUST be read-only (no file modifications)
- ✅ MUST handle missing/malformed data gracefully
- ✅ MUST provide accurate calculations
- ⚠️ SHOULD visualize trends with charts
- 📊 SHOULD include confidence intervals for predictions
- 💡 SHOULD generate actionable recommendations
- 🔍 MUST show data sources for transparency
- ⏱️ MUST complete analysis in reasonable time (< 5s)

358
commands/story-new.md Normal file
View File

@@ -0,0 +1,358 @@
# /sdd:story-new
## Meta
- Version: 2.0
- Category: story-management
- Complexity: moderate
- Purpose: Create new story with auto-populated template and place in backlog
## Definition
**Purpose**: Create a new story using project context and place it in the backlog folder for future development.
**Syntax**: `/sdd:story-new [story_id_number]`
## Parameters
| Parameter | Type | Required | Default | Description | Validation |
|-----------|------|----------|---------|-------------|------------|
| story_id_number | number | No | auto-increment | Story number (becomes the NNN suffix in STORY-YYYY-NNN) | Positive integer |
## INSTRUCTION: Create New Story
### INPUTS
- story_id_number: Optional story number (auto-increments if not provided)
- Project context from `/docs/project-context/` directory
- User-provided story details (if not in project brief)
### PROCESS
#### Phase 1: Project Context Loading
1. **CHECK** if `/docs/project-context/` directory exists
2. IF missing:
- SUGGEST running `/sdd:project-init` first
- EXIT with initialization guidance
3. **LOAD** project context from:
- `/docs/project-context/project-brief.md` - Existing story definitions, goals
- `/docs/project-context/technical-stack.md` - Technology implementation requirements
- `/docs/project-context/coding-standards.md` - Testing and quality requirements
#### Phase 2: Story ID Generation
1. **GENERATE** story ID in format `STORY-YYYY-NNN` (see the sketch after this list):
- YYYY = current year
- NNN = sequential number (001, 002, etc.)
2. IF user provides story_id_number:
- USE as basis: `STORY-YYYY-[story_id_number]`
- EXAMPLE: Input "5" → "STORY-2025-005"
3. **CHECK** for existing IDs across all directories:
- SCAN `/docs/stories/backlog/`
- SCAN `/docs/stories/development/`
- SCAN `/docs/stories/review/`
- SCAN `/docs/stories/qa/`
- SCAN `/docs/stories/completed/`
- CHECK `/docs/project-context/project-brief.md` for planned stories
4. IF no specific number provided:
- INCREMENT to next available number
- ENSURE uniqueness across all locations
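A minimal sketch of the ID generation and uniqueness check described in this phase. The directory layout and STORY-YYYY-NNN format come from this document; the regular expression, the zero-padding width, and the omission of the project-brief check are simplifying assumptions:
```python
import re
from datetime import date
from pathlib import Path

STORY_DIRS = [
    "docs/stories/backlog", "docs/stories/development",
    "docs/stories/review", "docs/stories/qa", "docs/stories/completed",
]

def existing_numbers(year: int, root: Path = Path(".")) -> set[int]:
    """Collect the NNN values already used for the given year across all stages."""
    pattern = re.compile(rf"STORY-{year}-(\d{{3}})\.md$")
    used = set()
    for directory in STORY_DIRS:
        for path in (root / directory).glob("*.md"):
            match = pattern.search(path.name)
            if match:
                used.add(int(match.group(1)))
    return used

def next_story_id(requested: int | None = None, root: Path = Path(".")) -> str:
    """Return a unique STORY-YYYY-NNN, honouring a requested number when it is free."""
    year = date.today().year
    used = existing_numbers(year, root)
    if requested is not None and requested not in used:
        number = requested
    else:  # duplicate or no request: auto-increment to the next free number
        number = next(n for n in range(1, 1000) if n not in used)
    return f"STORY-{year}-{number:03d}"

# Example: /sdd:story-new 5 -> STORY-2025-005 when 005 is unused
```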
#### Phase 3: Story Information Gathering
1. **SEARCH** `/docs/project-context/project-brief.md` for story with generated ID
2. IF story exists in project brief:
- EXTRACT comprehensive story details:
* Story title and description
* User scenarios and use cases
* Technical implementation requirements
* Acceptance criteria (pass/fail conditions)
* Edge cases and error handling requirements
* UI/UX considerations
* Testing requirements and test scenarios
* Integration points with other stories/systems
* Dependencies on other stories
3. IF story NOT found in project brief:
- **ASK** user for story title
- **ASK** user for story description and purpose
- **ASK** user for acceptance criteria
- **ASK** user for technical approach (optional)
- **ASK** user for dependencies (optional)
#### Phase 4: Story File Creation
1. **ENSURE** `/docs/stories/backlog/` directory exists
- CREATE directory if missing
- ADD `.gitkeep` file if directory was created
2. **CREATE** story file at `/docs/stories/backlog/[story-id].md`
3. **POPULATE** template with:
- Story ID and title
- Status: backlog
- Today's date as "Started" date
- Empty "Completed" date
- Note: "(none - in backlog)" for branch
- What & Why section (from project brief or user input)
- Success Criteria (from project brief or user input)
- Technical Notes with:
* Approach (from project brief or user input)
* Stack (auto-populated from technical-stack.md)
* Concerns (from project brief edge cases or user input)
* Dependencies (from project brief or user input)
- Implementation Checklist (standard items)
- Progress Log with creation entry
- Test Cases (from project brief or default scenarios)
- UI/UX Considerations (from project brief if applicable)
- Integration Points (from project brief if applicable)
- Rollback Plan section (empty template)
- Lessons Learned section (empty template)
4. **REFERENCE** project context in template:
- Pull technology stack from technical-stack.md
- Note coding standards that will apply
- Reference testing framework requirements
- Include project goals and constraints from project-brief.md
#### Phase 5: Completion Summary
1. **DISPLAY** creation summary:
```
✅ Story Created
═══════════════════════════════════
Story ID: [STORY-YYYY-NNN]
Title: [Story Title]
Location: /docs/stories/backlog/[story-id].md
Status: backlog
[If from project brief:]
Source: Extracted from project brief
- Acceptance criteria: [count] criteria defined
- Test scenarios: [count] scenarios defined
- Dependencies: [list or "None"]
[If from user input:]
Source: User-provided details
- Ready for refinement before development
```
2. **SUGGEST** next steps:
```
💡 NEXT STEPS:
1. /sdd:story-start [story-id] # Move to development and create branch
2. /sdd:story-implement [story-id] # Generate implementation code
3. /sdd:project-status # View all project stories
```
### OUTPUTS
- `/docs/stories/backlog/[story-id].md` - New story file with populated template
- `.gitkeep` file in `/docs/stories/backlog/` if directory was created
### RULES
- MUST generate unique story ID across all story directories
- MUST create backlog directory if it doesn't exist
- MUST auto-populate template with project context
- SHOULD extract story details from project brief if available
- SHOULD reference technical stack in story template
- NEVER create feature branch (stories start in backlog)
- ALWAYS add progress log entry for creation
- MUST include today's date as "Started" date
## Story Template Structure
```markdown
# [STORY-ID]: [Title]
## Status: backlog
**Started:** [Today's Date]
**Completed:**
**Branch:** (none - in backlog)
## What & Why
[Story description and purpose]
## Success Criteria
- [ ] [Criterion 1]
- [ ] [Criterion 2]
- [ ] [Criterion 3]
## Technical Notes
**Approach:** [Implementation approach]
**Stack:** [Auto-populated from technical-stack.md]
**Concerns:** [Risks and edge cases]
**Dependencies:** [External services/libraries/other stories]
## Implementation Checklist
- [ ] Feature implementation
- [ ] Unit tests
- [ ] Integration tests
- [ ] Error handling
- [ ] Loading states
- [ ] Documentation
- [ ] Performance optimization
- [ ] Accessibility
- [ ] Security review
## Progress Log
- [Today]: Created story, added to backlog
## Test Cases
1. Happy path: [scenario]
2. Error case: [scenario]
3. Edge case: [scenario]
## UI/UX Considerations
[User interface and experience requirements]
## Integration Points
[Dependencies and integration with other systems]
## Rollback Plan
[How to rollback if issues arise]
## Lessons Learned
[To be filled when complete]
```
## Examples
### Example 1: Create from Project Brief
```bash
INPUT:
/sdd:story-new
OUTPUT:
→ Checking project context...
→ Generating story ID: STORY-2025-001
→ Found story definition in project brief
✅ Story Created
═══════════════════════════════════
Story ID: STORY-2025-001
Title: User Authentication System
Location: /docs/stories/backlog/STORY-2025-001.md
Status: backlog
Source: Extracted from project brief
- Acceptance criteria: 5 criteria defined
- Test scenarios: 8 scenarios defined
- Dependencies: None
💡 NEXT STEPS:
1. /sdd:story-start STORY-2025-001 # Move to development and create branch
2. /sdd:story-implement STORY-2025-001 # Generate implementation code
3. /sdd:project-status # View all project stories
```
### Example 2: Create with Specific ID
```bash
INPUT:
/sdd:story-new 10
OUTPUT:
→ Checking project context...
→ Using story ID: STORY-2025-010
→ Story not found in project brief, gathering details...
What is the story title?
> Add Dark Mode Toggle
What are you building and why?
> Implement a dark mode toggle in the settings page to allow users to switch between light and dark themes.
What are the acceptance criteria? (Enter each, then empty line when done)
> Toggle is visible in settings page
> Theme persists across sessions
> All UI components support both themes
>
✅ Story Created
═══════════════════════════════════
Story ID: STORY-2025-010
Title: Add Dark Mode Toggle
Location: /docs/stories/backlog/STORY-2025-010.md
Status: backlog
Source: User-provided details
- Ready for refinement before development
💡 NEXT STEPS:
1. /sdd:story-start STORY-2025-010 # Move to development and create branch
2. /sdd:story-implement STORY-2025-010 # Generate implementation code
3. /sdd:project-status # View all project stories
```
### Example 3: Auto-Increment ID
```bash
INPUT:
/sdd:story-new
OUTPUT:
→ Checking project context...
→ Found existing stories: STORY-2025-001 through STORY-2025-005
→ Auto-incrementing to: STORY-2025-006
→ Story not found in project brief, gathering details...
[Interactive prompts for story details...]
✅ Story Created
═══════════════════════════════════
Story ID: STORY-2025-006
Title: Payment Processing Integration
Location: /docs/stories/backlog/STORY-2025-006.md
Status: backlog
```
## Edge Cases
### No Project Context
- DETECT missing `/docs/project-context/` directory
- SUGGEST running `/sdd:project-init`
- OFFER to create story with minimal template
- WARN that template won't be auto-populated
### Duplicate Story ID
- DETECT ID conflict across all directories
- INCREMENT to next available number automatically
- LOG warning about skipped number
- ENSURE final ID is unique
### Empty Project Brief
- DETECT missing story definitions
- GATHER all details from user interactively
- CREATE story with user-provided information
- SUGGEST adding stories to project brief
### Malformed Project Brief
- DETECT parsing errors
- LOG warning about brief issues
- FALL BACK to user input mode
- CONTINUE with story creation
## Error Handling
- **Missing /docs/project-context/**: Suggest `/sdd:project-init` with guidance
- **Permission errors**: Report specific file/directory with access issue
- **Invalid story ID**: Sanitize and suggest corrected version
- **User cancels**: Clean up partial creation, exit gracefully
## Performance Considerations
- Story ID uniqueness check scans each story directory only once
- Project brief parsing results are cached for the session
- Template population is fast (< 100ms typically)
- Interactive prompts allow user to control pace
## Related Commands
- `/sdd:project-init` - Initialize project structure first
- `/sdd:project-brief` - Create/update project documentation with stories
- `/sdd:story-start [id]` - Begin development on story
- `/sdd:story-implement [id]` - Generate implementation code
- `/sdd:project-status` - View all project stories
## Constraints
- ✅ MUST generate unique story ID
- ✅ MUST create story in backlog directory
- ⚠️ NEVER create feature branch (stories start in backlog)
- 📋 MUST auto-populate from project context when available
- 🔧 SHOULD extract from project brief before asking user
- 💾 MUST add creation entry to progress log
- 📅 MUST include today's date as "Started" date

198
commands/story-next.md Normal file
View File

@@ -0,0 +1,198 @@
# /sdd:story-next
Suggests what to work on next based on priorities and status.
## Implementation
**Format**: Imperative (comprehensive)
**Actions**: Multi-step analysis with dependency validation
**Modifications**: None (read-only recommendations)
### Analysis Steps
#### 1. Assess Current State
- List all stories in `/docs/stories/development/`
- List all stories in `/docs/stories/review/`
- List all stories in `/docs/stories/qa/`
- List all completed stories in `/docs/stories/completed/`
- Read backlog priorities from `/docs/stories/backlog/`
- Read dependency graph from `/docs/project-context/story-relationships.md`
#### 2. Validate Dependencies
- Cross-reference dependencies against completed stories
- Verify no recommended stories exist in `/docs/stories/completed/`
- Flag mismatches between planned vs actual completion status
- Identify stories with all dependencies satisfied
#### 3. Apply Decision Logic
Priority order:
1. Stories in QA with issues (closest to shipping)
2. Stories in review with feedback
3. Stories in development > 3 days (complete or timebox)
4. Critical bugs/security issues
5. High-priority backlog items with satisfied dependencies
6. Technical debt or improvements
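A minimal sketch of the dependency validation and the six-tier priority order above. The `Story` record and its fields are illustrative assumptions about how parsed story files might be represented, not part of this command:
```python
from dataclasses import dataclass, field

@dataclass
class Story:
    story_id: str
    stage: str                      # backlog | development | review | qa | completed
    days_in_stage: int = 0
    has_issues: bool = False        # QA issues or review feedback
    is_critical: bool = False       # critical bug / security issue
    priority: str = "medium"        # backlog priority label
    dependencies: list[str] = field(default_factory=list)

def dependencies_satisfied(story: Story, completed_ids: set[str]) -> bool:
    return all(dep in completed_ids for dep in story.dependencies)

def priority_rank(story: Story, completed_ids: set[str]) -> int:
    """Lower rank = recommend first, mirroring the six-tier order above."""
    if story.stage == "qa" and story.has_issues:
        return 1
    if story.stage == "review" and story.has_issues:
        return 2
    if story.stage == "development" and story.days_in_stage > 3:
        return 3
    if story.is_critical:
        return 4
    if (story.stage == "backlog" and story.priority == "high"
            and dependencies_satisfied(story, completed_ids)):
        return 5
    return 6  # technical debt, improvements, everything else

def recommend(stories: list[Story]) -> list[Story]:
    completed = {s.story_id for s in stories if s.stage == "completed"}
    candidates = [s for s in stories if s.stage != "completed"]
    return sorted(candidates, key=lambda s: priority_rank(s, completed))[:3]
```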
### Output Format
#### Primary Recommendations
```
📋 NEXT STORY RECOMMENDATIONS
============================
🥇 HIGHEST PRIORITY
------------------
[STORY-ID]: [Title]
Status: Available (verified not completed)
Dependencies: [List with completion status]
Reason: [Why this is most important]
Estimated effort: [X days]
Business value: [High/Medium/Low]
Command: /sdd:story-start [STORY-ID]
🥈 SECOND OPTION
---------------
[STORY-ID]: [Title]
Status: Available (verified not completed)
Dependencies: [List with completion status]
Reason: [Why consider this]
Estimated effort: [X days]
Trade-off: [What you defer]
🥉 THIRD OPTION
--------------
[STORY-ID]: [Title]
Status: Available (verified not completed)
Dependencies: [List with completion status]
Reason: [Alternative path]
Benefit: [Specific advantage]
```
#### Decision Factors
```
⚖️ DECISION FACTORS
Time available:
- Full day → Start new feature
- Few hours → Fix bugs/review
- < 1 hour → Quick improvements
Energy level:
- High → Complex new work
- Medium → Continue existing
- Low → Simple fixes/docs
Dependencies:
- Waiting on review: [list]
- Blocked by external: [list]
- Ready to start: [list]
```
#### Backlog Overview
```
📚 BACKLOG OVERVIEW
✅ COMPLETED STORIES:
[List from /docs/stories/completed/ with dates]
📋 REMAINING BACKLOG:
High Priority:
1. [Story] - [Est.] - Available
2. [Story] - [Est.] - Blocked by [X]
Medium Priority:
3. [Story] - [Est.] - Available
4. [Story] - [Est.] - Blocked by [X]
Quick Wins (<1 day):
5. [Bug fix] - 2 hours
6. [Doc update] - 1 hour
Technical Debt:
7. [Refactor] - [Est.]
8. [Performance] - [Est.]
```
#### Pattern Insights (Optional)
If sufficient historical data:
```
📊 PATTERN INSIGHTS
Based on history:
- Fastest completions: [story type]
- Most productive: [day/time]
- Success patterns: [insights]
Recommendation: [Specific suggestion]
```
#### Risk Assessment
```
⚠️ RISK CONSIDERATIONS
Risky to start now:
- [Complex story] - End of week
- [Large refactor] - Before deadline
Safe to start:
- [Small feature] - Low risk
- [Bug fix] - Quick win
```
#### Project Context
```
🎯 PROJECT PRIORITIES
This sprint/week focus: [Main goal]
Upcoming deadline: [Date] - [What's due]
User feedback priority: [Most requested]
```
### Empty State
If no clear next story:
```
💭 NO CLEAR PRIORITY
Productive alternatives:
1. Code review backlog
2. Update documentation
3. Write tests for untested code
4. Refactor complex functions
5. Learn new tool/technique
Create new story: /sdd:story-new
```
### Action Plan
Always conclude with:
```
✅ RECOMMENDED ACTION PLAN
=========================
Right now:
1. [Immediate action]
Command: /[command-to-run]
Then:
2. [Follow-up action]
This week:
3. [Week goal]
```
### Notes
- Read-only analysis, no file modifications
- Validates all dependencies against filesystem
- Prevents recommending completed stories
- Waits for user decision before any action

668
commands/story-patterns.md Normal file
View File

@@ -0,0 +1,668 @@
# /sdd:story-patterns
## Meta
- Version: 2.0
- Category: story-analysis
- Complexity: high
- Purpose: Identify recurring patterns in completed stories to extract reusable knowledge and improve processes
## Definition
**Purpose**: Analyze completed stories to discover technical patterns, common problems, success strategies, code reusability opportunities, and anti-patterns.
**Syntax**: `/sdd:story-patterns [category]`
## Parameters
| Parameter | Type | Required | Default | Description | Validation |
|-----------|------|----------|---------|-------------|------------|
| category | string | No | "all" | Pattern category to analyze (technical, problems, success, code, process, all) | One of: technical, problems, success, code, process, all |
## INSTRUCTION: Analyze Story Patterns
### INPUTS
- category: Optional pattern category filter (defaults to all)
- Completed story files from `/docs/stories/completed/`
- Optional: Stories from other stages for trend analysis
### PROCESS
#### Phase 1: Story Data Collection
1. **SCAN** `/docs/stories/completed/` directory for all `.md` files
2. **PARSE** each story file to extract:
- Implementation approach (from Technical Notes)
- Technologies used (from Stack)
- Problems encountered (from Progress Log)
- Solutions applied (from Progress Log)
- Success criteria and outcomes
- Test cases and results
- Code patterns (from Implementation Checklist)
- Dependencies and integrations
- Lessons learned
3. **CATEGORIZE** extracted data by type:
- Technical implementations
- Problem/solution pairs
- Success factors
- Code structures
- Process workflows
4. **FILTER** by category if specified
#### Phase 2: Technical Pattern Analysis
1. **IDENTIFY** common implementation approaches (see the sketch after the display below):
- Group stories by similar technical solutions
- Count frequency of each approach
- Extract specific examples
2. **DETECT** recurring architectures:
- Design patterns (MVC, Repository, etc.)
- Integration patterns (API, Queue, Event)
- Data patterns (Migration, Seeding, etc.)
3. **ANALYZE** technology combinations:
- Frequently paired technologies
- Successful tech stack patterns
4. **DISPLAY** technical patterns:
```
🔧 TECHNICAL PATTERNS FOUND
══════════════════════════════════
Common Implementations:
Pattern: JWT Authentication with Refresh Tokens
- Used in: [X] stories ([STORY-IDs])
- Success rate: [X]%
- Avg implementation time: [X] days
- Reusability: High
- Action: Extract as auth module
Pattern: Queue-based Background Processing
- Used in: [X] stories ([STORY-IDs])
- Success rate: [X]%
- Technologies: Laravel Queue, Redis
- Action: Create template
Recurring Architectures:
- Service Layer Pattern: [X] stories
- Repository Pattern: [X] stories
- Event-Driven: [X] stories
Technology Combinations:
- Livewire + Alpine.js: [X] stories
- Pest + Browser Tests: [X] stories
```
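The frequency counts behind this display amount to grouping completed stories by the approach named in their Technical Notes and applying the 2-occurrence rule from the RULES section. A minimal sketch, assuming each story has already been parsed into a dict with `approach`, `id`, and a `first_time_pass` flag (all assumed field names):
```python
from collections import defaultdict

def technical_patterns(stories: list[dict], min_occurrences: int = 2) -> list[dict]:
    """Group completed stories by implementation approach and keep recurring ones."""
    groups: dict[str, list[dict]] = defaultdict(list)
    for story in stories:
        approach = story.get("approach", "").strip().lower()
        if approach:
            groups[approach].append(story)

    patterns = []
    for approach, members in groups.items():
        if len(members) < min_occurrences:
            continue  # a single instance is not a pattern
        passed = [s for s in members if s.get("first_time_pass")]
        patterns.append({
            "pattern": approach,
            "stories": [s["id"] for s in members],
            "occurrences": len(members),
            "success_rate": round(100 * len(passed) / len(members)),
        })
    return sorted(patterns, key=lambda p: p["occurrences"], reverse=True)
```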
#### Phase 3: Problem Pattern Analysis
1. **EXTRACT** problems from progress logs
2. **CATEGORIZE** problems by type:
- Technical issues
- Integration challenges
- Performance problems
- Testing difficulties
- Deployment issues
3. **COUNT** frequency of each problem type
4. **LINK** problems to solutions
5. **IDENTIFY** root causes
6. **DISPLAY** problem patterns:
```
⚠️ RECURRING CHALLENGES
══════════════════════════════════
Common Problems:
Problem: N+1 Query Performance Issues
- Occurred: [X] times
- Stories: [STORY-IDs]
- Root cause: Missing eager loading
- Solution pattern: Add `with()` to queries
- Prevention: Code review checklist item
Problem: CORS Issues in API Integration
- Occurred: [X] times
- Stories: [STORY-IDs]
- Root cause: Middleware configuration
- Solution pattern: Configure cors.php
- Prevention: API setup template
Frequent Blockers:
- Third-party API rate limits: [X] occurrences
Mitigation: Implement caching layer
- Test environment setup: [X] occurrences
Mitigation: Docker compose template
```
#### Phase 4: Success Pattern Analysis
1. **IDENTIFY** high-performing stories:
- Fast completion times
- Zero bugs in QA
- First-time pass in review
2. **EXTRACT** success factors:
- Common approaches
- Best practices applied
- Tools and techniques used
3. **CALCULATE** success rates by pattern
4. **DETERMINE** velocity impact
5. **DISPLAY** success patterns:
```
✅ SUCCESS PATTERNS
══════════════════════════════════
High-Velocity Patterns:
Approach: TDD with Feature Tests First
- Used in: [X] stories
- Success rate: [X]%
- Avg completion: [X] days faster
- Key factors:
• Clear test cases upfront
• Fewer bugs in QA
• Confident refactoring
- Recommendation: Adopt as standard
Approach: Component-First UI Development
- Used in: [X] stories
- Success rate: [X]%
- Benefits:
• Reusable components
• Consistent design
• Faster iterations
- Best for: UI-heavy features
High-Quality Patterns:
- Livewire component testing: [X]% fewer bugs
- Browser E2E tests: [X]% fewer production issues
- Code review with checklist: [X]% first-time pass
```
#### Phase 5: Code Pattern Analysis
1. **SCAN** for reusable code structures:
- Component types
- Utility functions
- Service classes
- Middleware patterns
- Test helpers
2. **COUNT** instances of each pattern
3. **EVALUATE** reusability potential (see the overlap sketch after the display below)
4. **SUGGEST** extraction opportunities
5. **DISPLAY** code patterns:
```
💻 CODE PATTERNS
══════════════════════════════════
Reusable Components Identified:
Pattern: Form Validation Request Classes
- Instances: [X] similar implementations
- Stories: [STORY-IDs]
- Commonality: [X]% code overlap
- Candidate for: Base FormRequest class
- Estimated savings: [X] hours per story
Pattern: Livewire CRUD Components
- Instances: [X] similar implementations
- Stories: [STORY-IDs]
- Commonality: [X]% code overlap
- Candidate for: CRUD trait or base class
- Estimated savings: [X] hours per story
Pattern: API Response Formatters
- Instances: [X] similar implementations
- Candidate for: Shared utility package
- Extraction priority: High
Common Integrations:
- External API clients: [X] instances
Standard approach: Guzzle + DTO pattern
Template available: No
Action: Create API client template
```
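The "% code overlap" figure above can be approximated with a plain textual similarity measure. A minimal sketch using Python's standard `difflib`; treating pairwise string similarity as overlap is an assumption of this sketch, not something this command prescribes:
```python
from difflib import SequenceMatcher
from itertools import combinations

def overlap_percent(snippet_a: str, snippet_b: str) -> float:
    """Rough textual similarity between two code snippets (0-100)."""
    return round(100 * SequenceMatcher(None, snippet_a, snippet_b).ratio(), 1)

def average_overlap(snippets: list[str]) -> float:
    """Average pairwise overlap across all instances of a candidate pattern."""
    pairs = list(combinations(snippets, 2))
    if not pairs:
        return 0.0
    return round(sum(overlap_percent(a, b) for a, b in pairs) / len(pairs), 1)

# Example: three similar form-request classes (illustrative snippets)
forms = [
    "class StoreTaskRequest { rules() { return ['title' => 'required']; } }",
    "class StoreNoteRequest { rules() { return ['title' => 'required']; } }",
    "class StoreTagRequest { rules() { return ['name' => 'required']; } }",
]
print(average_overlap(forms))  # a high value suggests a shared base class is worth extracting
```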
#### Phase 6: Process Pattern Analysis
1. **ANALYZE** workflow patterns:
- Story progression times
- Review process effectiveness
- Testing strategies
- Deployment approaches
2. **IDENTIFY** effective practices:
- Time-of-day patterns
- Day-of-week patterns
- Story size sweet spots
- Review timing
3. **CALCULATE** process effectiveness metrics
4. **DISPLAY** process patterns:
```
📋 PROCESS PATTERNS
══════════════════════════════════
Effective Workflows:
Workflow: Same-day Review
- Stories: [X] with review within 24h
- Success rate: [X]%
- Avg cycle time: [X] days faster
- Recommendation: Target same-day reviews
Practice: Incremental Commits
- Stories: [X] with frequent commits
- Impact: [X]% easier code review
- Recommendation: Commit every feature increment
Timing Patterns:
- Stories started Monday: [X]% completion rate
- Stories started Friday: [X]% completion rate
- Optimal story size: [X] days
Risk Factors:
- Stories > 5 days: [X]% higher bug rate
- Stories with > 3 dependencies: [X]% longer cycle
```
#### Phase 7: Pattern Recommendations
1. **ANALYZE** all discovered patterns
2. **PRIORITIZE** by impact and effort (see the sketch after this phase):
- High impact, low effort: Quick wins
- High impact, high effort: Strategic initiatives
- Low impact, low effort: Nice to haves
3. **GENERATE** specific, actionable recommendations:
- Template creation
- Library extraction
- Process standardization
- Documentation needs
4. **DISPLAY** recommendations:
```
💡 PATTERN-BASED RECOMMENDATIONS
══════════════════════════════════
CREATE TEMPLATES FOR:
Priority: High
1. Authentication flow template
- Used in: [X] stories
- Estimated savings: [X] hours per story
- Template location: /templates/auth-flow.md
2. API integration template
- Used in: [X] stories
- Estimated savings: [X] hours per story
- Template location: /templates/api-integration.md
EXTRACT LIBRARIES FOR:
Priority: High
1. Form validation utilities
- Instances: [X] similar implementations
- Estimated savings: [X] hours per story
- Package name: app/Utils/FormValidation
2. API response formatters
- Instances: [X] similar implementations
- Estimated savings: [X] hours per story
- Package name: app/Http/Responses
STANDARDIZE PROCESSES:
Priority: Medium
1. Code review checklist
- Include: Performance checks, test coverage
- Expected impact: [X]% fewer QA bugs
2. Story sizing guidelines
- Optimal size: [X] days
- Expected impact: [X]% faster velocity
DOCUMENT PATTERNS:
Priority: Medium
1. JWT authentication pattern
- Location: /patterns/auth-jwt.md
- Include: Setup, usage, edge cases
```
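The impact/effort triage in step 2 is a two-axis classification. A minimal sketch, assuming each candidate recommendation has already been scored as "high" or "low" on both axes; the fourth quadrant (low impact, high effort) is added here for completeness:
```python
def triage(recommendations: list[dict]) -> dict[str, list[str]]:
    """Bucket recommendations by the impact/effort quadrants used above."""
    buckets = {"quick_wins": [], "strategic": [], "nice_to_have": [], "skip": []}
    for rec in recommendations:
        high_impact = rec["impact"] == "high"
        low_effort = rec["effort"] == "low"
        if high_impact and low_effort:
            buckets["quick_wins"].append(rec["title"])
        elif high_impact:
            buckets["strategic"].append(rec["title"])
        elif low_effort:
            buckets["nice_to_have"].append(rec["title"])
        else:
            buckets["skip"].append(rec["title"])  # low impact, high effort
    return buckets

print(triage([
    {"title": "Create auth-flow template", "impact": "high", "effort": "low"},
    {"title": "Extract CRUD base component", "impact": "high", "effort": "high"},
]))
```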
#### Phase 8: Pattern Library Generation
1. **COMPILE** patterns into structured library
2. **CATEGORIZE** by domain:
- Authentication
- Data Processing
- API Integration
- UI Components
- Testing
3. **TRACK** usage and availability:
- Times used
- Template exists (yes/no)
- Documentation exists (yes/no)
- Action needed
4. **DISPLAY** pattern library:
```
📚 PATTERN LIBRARY
══════════════════════════════════
Category: Authentication
──────────────────────────────────
Pattern: JWT with Refresh Tokens
- Used in: 5 stories
- Success rate: 100%
- Template available: Yes (/templates/auth-jwt.md)
- Documentation: Yes (/patterns/auth-jwt.md)
Pattern: Social OAuth Integration
- Used in: 3 stories
- Success rate: 100%
- Template available: No
- Action: Create template
Category: Data Processing
──────────────────────────────────
Pattern: Queue-based Background Jobs
- Used in: 7 stories
- Success rate: 95%
- Template available: Yes (/templates/queue-job.md)
- Documentation: Yes (/patterns/queue-jobs.md)
Pattern: Batch Processing with Progress
- Used in: 3 stories
- Template available: No
- Action: Create template
Category: UI Components
──────────────────────────────────
Pattern: Livewire CRUD Components
- Used in: 12 stories
- Template available: No
- Action: Create base component trait
[Additional categories...]
```
#### Phase 9: Anti-Pattern Detection
1. **IDENTIFY** problematic patterns:
- Code smells that appear multiple times
- Approaches with low success rates
- Solutions that caused later problems
2. **ANALYZE** negative impact:
- Increased bug rates
- Longer cycle times
- Technical debt creation
3. **SUGGEST** better alternatives
4. **DISPLAY** anti-patterns:
```
❌ ANTI-PATTERNS TO AVOID
══════════════════════════════════
Anti-pattern: Direct DB Queries in Controllers
- Found in: [X] stories
- Problems caused:
• Difficult to test
• No reusability
• N+1 query issues
- Better approach: Use Repository or Query Builder
- Stories affected: [STORY-IDs]
Anti-pattern: Missing Validation in Livewire
- Found in: [X] stories
- Problems caused:
• Security vulnerabilities
• Data integrity issues
• Poor UX
- Better approach: Use #[Validate] attributes
- Stories affected: [STORY-IDs]
Anti-pattern: Monolithic Livewire Components
- Found in: [X] stories
- Problems caused:
• Hard to maintain
• Difficult to test
• Poor reusability
- Better approach: Break into smaller components
```
#### Phase 10: Pattern Export
1. **OFFER** to export patterns to structured files
2. **CREATE** pattern directory structure:
```
/patterns/
├── technical-patterns.md
├── success-patterns.md
├── anti-patterns.md
└── templates/
├── auth-flow.md
├── api-integration.md
└── queue-job.md
```
3. **GENERATE** markdown files with:
- Pattern descriptions
- Usage examples
- Code snippets
- Best practices
- Related stories
4. **DISPLAY** export summary:
```
💾 EXPORT PATTERNS
══════════════════════════════════
Files created:
✓ /patterns/technical-patterns.md (15 patterns)
✓ /patterns/success-patterns.md (8 patterns)
✓ /patterns/anti-patterns.md (5 patterns)
✓ /patterns/process-patterns.md (6 patterns)
Templates needed:
→ /templates/auth-jwt.md (create)
→ /templates/api-integration.md (create)
Next steps:
1. Review exported patterns
2. Create missing templates
3. Update coding standards
4. Share with team
```
### OUTPUTS
- Console display of all pattern analysis sections
- Optional: Pattern markdown files in `/patterns/` directory
- Optional: Template files in `/templates/` directory
### RULES
- MUST analyze only completed stories (read from `/docs/stories/completed/`)
- MUST identify patterns with 2+ occurrences (single instance not a pattern)
- MUST calculate accurate frequency and success metrics
- SHOULD provide specific story IDs as evidence
- SHOULD prioritize recommendations by impact
- SHOULD generate actionable insights
- NEVER modify story files (read-only operation)
- ALWAYS show pattern sources (which stories)
- ALWAYS suggest concrete next steps
- MUST handle missing data gracefully
## Pattern Categories
### Technical Patterns
- Implementation approaches (authentication, API, data processing)
- Architecture patterns (MVC, Repository, Event-Driven)
- Technology combinations (framework + library pairs)
- Integration patterns (external services, databases)
### Problem Patterns
- Recurring technical issues
- Common blockers
- Integration challenges
- Performance problems
- Root cause analysis
### Success Patterns
- High-velocity approaches
- High-quality techniques
- Effective workflows
- Best practices
### Code Patterns
- Reusable components
- Utility functions
- Service classes
- Test helpers
- Common structures
### Process Patterns
- Workflow effectiveness
- Timing patterns
- Story sizing
- Review practices
- Testing strategies
## Examples
### Example 1: All Patterns
```bash
INPUT:
/sdd:story-patterns
OUTPUT:
→ Scanning completed stories...
→ Found 42 completed stories
→ Analyzing patterns across all categories...
🔧 TECHNICAL PATTERNS FOUND
══════════════════════════════════
Pattern: JWT Authentication with Refresh Tokens
- Used in: 5 stories (STORY-2025-001, 012, 023, 034, 041)
- Success rate: 100%
- Avg implementation time: 2.4 days
- Reusability: High
- Action: Template available at /templates/auth-jwt.md
[Additional sections...]
💡 PATTERN-BASED RECOMMENDATIONS
══════════════════════════════════
CREATE TEMPLATES FOR:
1. API integration with retry logic (used in 8 stories)
2. Livewire form with validation (used in 15 stories)
[Additional recommendations...]
💾 Export patterns to /patterns/ directory? (y/n)
```
### Example 2: Technical Patterns Only
```bash
INPUT:
/sdd:story-patterns technical
OUTPUT:
→ Scanning completed stories...
→ Analyzing technical patterns only...
🔧 TECHNICAL PATTERNS FOUND
══════════════════════════════════
Common Implementations:
Pattern: Livewire Component with Alpine.js Enhancement
- Used in: 18 stories
- Technologies: Livewire 3, Alpine.js 3
- Success rate: 95%
- Common structure:
• Server-side state management
• Client-side UX enhancements
• Device-responsive behavior
[Additional technical patterns...]
```
### Example 3: No Patterns Found
```bash
INPUT:
/sdd:story-patterns
OUTPUT:
→ Scanning completed stories...
→ Found 3 completed stories
→ Analyzing patterns...
⚠️ INSUFFICIENT DATA
══════════════════════════════════
Not enough completed stories to identify patterns.
Patterns require at least 2 occurrences across multiple stories.
Current completed stories: 3
Minimum recommended: 10
Suggestions:
- Complete more stories to build pattern data
- Run /sdd:story-metrics to see development progress
- Check if stories are in /docs/stories/completed/
```
## Edge Cases
### Few Completed Stories
- DETECT insufficient story count (< 5)
- DISPLAY warning about pattern reliability
- SHOW limited patterns found
- SUGGEST completing more stories
### No Common Patterns
- DETECT when stories are highly unique
- DISPLAY "no recurring patterns" message
- SHOW individual story characteristics
- SUGGEST areas for potential standardization
### Inconsistent Story Format
- PARSE flexibly with fallbacks
- EXTRACT patterns from available data
- LOG warnings about incomplete data
- CONTINUE with best-effort analysis
### Missing Technical Notes
- SKIP pattern extraction from incomplete stories
- LOG which stories lack necessary sections
- CALCULATE patterns from complete data only
- SUGGEST standardizing story format
## Error Handling
- **No completed stories**: Inform user, suggest completing stories first
- **Permission errors**: Report specific file access issues
- **Malformed story files**: Skip problematic files, log warnings
- **Invalid category parameter**: Show valid options, use default
- **Export directory exists**: Ask to overwrite or merge
## Performance Considerations
- Efficient file scanning (single pass per directory)
- Lazy parsing (only parse when needed)
- Pattern matching with hash maps for speed
- Streaming output for large datasets
- Typical completion time: < 3 seconds for 50 stories
## Related Commands
- `/sdd:story-metrics` - Calculate velocity and quality metrics
- `/sdd:story-tech-debt` - Analyze technical debt
- `/sdd:project-status` - View current story statuses
- `/sdd:story-list` - List and filter stories
## Constraints
- ✅ MUST be read-only (no story modifications)
- ✅ MUST identify patterns with 2+ occurrences
- ✅ MUST provide evidence (story IDs)
- ⚠️ SHOULD prioritize by impact and frequency
- 📊 SHOULD include success rates
- 💡 SHOULD generate actionable recommendations
- 🔍 MUST show pattern sources
- ⏱️ MUST complete analysis in reasonable time (< 5s)
- 📁 SHOULD offer to export findings

706
commands/story-qa.md Normal file
View File

@@ -0,0 +1,706 @@
# /sdd:story-qa
## Meta
- Version: 2.0
- Category: quality-gates
- Complexity: high
- Purpose: Move story to QA stage and execute comprehensive test validation pipeline
## Definition
**Purpose**: Execute comprehensive automated QA test suite including unit, integration, browser, and performance tests before final validation.
**Syntax**: `/sdd:story-qa [story_id]`
## Parameters
| Parameter | Type | Required | Default | Description | Validation |
|-----------|------|----------|---------|-------------|------------|
| story_id | string | No | current branch | Story ID (STORY-YYYY-NNN) | Must match format STORY-YYYY-NNN |
## INSTRUCTION: Execute Story QA
### INPUTS
- story_id: Story identifier (auto-detected from branch if not provided)
- Project context from `/docs/project-context/` directory
- Story file from `/docs/stories/review/[story-id].md`
- Complete test suite (unit, feature, browser)
- Performance benchmarks (if defined)
### PROCESS
#### Phase 1: Project Context Loading
1. **CHECK** if `/docs/project-context/` directory exists
2. IF missing:
- SUGGEST running `/sdd:project-init` first
- EXIT with initialization guidance
3. **LOAD** project-specific QA requirements from:
- `/docs/project-context/technical-stack.md` - Testing tools and frameworks
- `/docs/project-context/coding-standards.md` - QA standards and thresholds
- `/docs/project-context/development-process.md` - QA stage requirements
#### Phase 2: Story Identification & Validation
1. IF story_id NOT provided:
- **DETECT** current git branch
- **EXTRACT** story ID from branch name (see the sketch after this phase)
- EXAMPLE: Branch `feature/STORY-2025-001-auth` → ID `STORY-2025-001`
2. **VALIDATE** story exists:
- CHECK `/docs/stories/review/[story-id].md` exists
- IF NOT found in review:
- CHECK if already in `/docs/stories/qa/`
- INFORM user and ask whether to proceed with re-QA
- IF in development:
- ERROR: "Story must pass review first"
- SUGGEST: `/sdd:story-review [story-id]`
- IF NOT found anywhere:
- ERROR: "Story [story-id] not found"
- EXIT with guidance
3. **READ** story file for:
- Current status
- Success criteria (will map to browser tests)
- Implementation checklist state
- QA checklist requirements
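The branch-name detection in step 1 is a single `git` call plus a regular expression. A minimal sketch; the STORY-YYYY-NNN pattern follows this document, and error handling is deliberately simplified:
```python
import re
import subprocess

def story_id_from_branch() -> str | None:
    """Return the STORY-YYYY-NNN embedded in the current branch name, if any."""
    branch = subprocess.run(
        ["git", "rev-parse", "--abbrev-ref", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    match = re.search(r"STORY-\d{4}-\d{3}", branch, re.IGNORECASE)
    return match.group(0).upper() if match else None

# Branch "feature/STORY-2025-001-auth" -> "STORY-2025-001"
```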
#### Phase 3: Directory Preparation
1. **ENSURE** `/docs/stories/qa/` directory exists
- CREATE directory if missing
- ADD `.gitkeep` file if directory was created
2. **MOVE** story file:
- FROM: `/docs/stories/review/[story-id].md`
- TO: `/docs/stories/qa/[story-id].md`
- PRESERVE all content and formatting
3. **UPDATE** story metadata:
- Change status from "review" to "qa"
- KEEP existing dates and branch information
- ADD QA start timestamp to progress log
#### Phase 4: Test Suite Execution
##### 4.1 Unit Tests (Discovered Framework)
1. **IDENTIFY** unit test framework from technical-stack.md:
- PHP/Laravel: Pest, PHPUnit
- Node.js: Jest, Vitest, Mocha
- Python: pytest, unittest
- Go: go test
- Java: JUnit, TestNG
- .NET: xUnit, NUnit, MSTest
2. **RUN** unit tests with coverage:
```bash
# Example for Laravel Pest:
vendor/bin/pest --filter=Unit --coverage --min=80
# Example for Node.js Jest:
npm test -- --coverage --testPathPattern=unit
# Example for Python pytest:
pytest tests/unit/ --cov --cov-report=term-missing
```
3. **CAPTURE** results:
- PASS/FAIL count
- Execution time
- Coverage percentage (overall, per file)
- Failed test details with stack traces
- Slowest tests (performance indicators)
##### 4.2 Feature/Integration Tests (Discovered Patterns)
1. **IDENTIFY** integration test patterns from technical-stack.md:
- Laravel: Feature tests with database interactions
- Node.js: Integration tests with API calls
- Python: Integration tests with service layer
- Java: Integration tests with Spring context
2. **RUN** integration tests:
```bash
# Example for Laravel Pest:
vendor/bin/pest --filter=Feature --parallel
# Example for Node.js:
npm test -- --testPathPattern=integration
# Example for Python:
pytest tests/integration/ -v
```
3. **VALIDATE** integrations:
- API endpoints returning correct responses
- Database operations (CRUD, transactions)
- Service-to-service communication
- External API integrations (with mocks/stubs)
- Queue/job processing
- Cache operations
##### 4.3 Browser/E2E Tests (Discovered Browser Testing Tools)
1. **IDENTIFY** browser testing framework from technical-stack.md:
- Laravel: Laravel Dusk, Pest Browser
- Node.js: Playwright, Cypress, Puppeteer
- Python: Playwright, Selenium
- Java: Selenium, Playwright
2. **LOCATE** browser test files:
- Laravel: `tests/Browser/[StoryId]Test.php`
- Node.js: `tests/e2e/[story-id].spec.js`
- Python: `tests/browser/test_[story_id].py`
- Java: `src/test/java/**/[StoryId]Test.java`
3. **RUN** browser tests:
```bash
# Example for Laravel Pest Browser:
vendor/bin/pest --filter=Browser
# Example for Playwright (Node.js):
npx playwright test tests/e2e/story-2025-001
# Example for Python Playwright:
pytest tests/browser/test_story_2025_001.py --headed
```
4. **VALIDATE** against Success Criteria:
- MAP each acceptance criterion to browser test
- VERIFY each criterion has passing test
- CAPTURE screenshots of test execution
- RECORD video of test runs (if tool supports)
- VALIDATE all user workflows end-to-end
5. **TEST** across environments (if specified in standards):
- Different browsers (Chrome, Firefox, Safari)
- Different devices (desktop, tablet, mobile)
- Different viewports (responsive design)
- Light/dark mode (if applicable)
##### 4.4 Performance Tests (Discovered Performance Tools)
1. **CHECK** if performance requirements defined in story
2. IF performance criteria exist:
- **IDENTIFY** performance tools from technical-stack.md:
* Laravel: Laravel Debugbar, Telescope, Blackfire
* Node.js: Artillery, k6, Apache Bench
* Python: Locust, pytest-benchmark
* Java: JMeter, Gatling
3. **RUN** performance benchmarks:
```bash
# Example for Laravel:
php artisan serve &
ab -n 1000 -c 10 http://localhost:8000/api/endpoint
# Example for Node.js k6:
k6 run performance/story-2025-001.js
```
4. **VALIDATE** performance targets:
- Response time < target (e.g., 200ms)
- Throughput > target (e.g., 100 req/sec)
- Memory usage < target
- No memory leaks
- Database query count optimal
##### 4.5 Security Testing (Discovered Security Tools)
1. **RUN** security validation:
```bash
# Example for Laravel:
composer audit
# Example for Node.js:
npm audit --production
```
2. **VALIDATE** security requirements:
- No HIGH/CRITICAL vulnerabilities
- Authentication/Authorization working
- CSRF protection enabled
- XSS prevention implemented
- SQL injection prevention verified
- Rate limiting functional (if applicable)
#### Phase 5: Quality Gate Validation
1. **APPLY** quality gates from coding-standards.md:
- BLOCK progression if ANY critical test fails
- BLOCK progression if coverage below threshold
- BLOCK progression if performance targets not met
- BLOCK progression if security vulnerabilities found
2. **CAPTURE** test artifacts:
- Test reports (XML, JSON, HTML)
- Coverage reports
- Screenshots from browser tests
- Videos from browser tests (if available)
- Performance benchmark results
- Log files from test runs
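A minimal sketch of the gate evaluation in step 1, assuming the test phases have already been summarised into a results dict and that the coverage threshold comes from coding-standards.md (the field names are assumptions of this sketch):
```python
def quality_gates(results: dict, min_coverage: float = 80.0) -> list[str]:
    """Return the list of blocking issues; an empty list means all gates passed."""
    blockers = []
    if results.get("critical_failures", 0) > 0:
        blockers.append(f"{results['critical_failures']} critical test failure(s)")
    if results.get("coverage", 0.0) < min_coverage:
        blockers.append(
            f"coverage {results['coverage']}% below threshold {min_coverage}%")
    if results.get("performance_misses"):
        blockers.append("performance targets not met: "
                        + ", ".join(results["performance_misses"]))
    if results.get("vulnerabilities"):
        blockers.append("security vulnerabilities found: "
                        + ", ".join(results["vulnerabilities"]))
    return blockers

# Empty list -> proceed towards /sdd:story-validate; otherwise block and report.
```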
#### Phase 6: QA Report Generation
1. **COMPILE** all test results
2. **GENERATE** automated QA report:
```
✅ AUTOMATED QA RESULTS
════════════════════════════════════════════════
Story: [STORY-ID] - [Title]
Stack: [Discovered Framework/Language/Tools]
QA Executed: [Timestamp]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📋 FUNCTIONAL TESTING
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ Acceptance Criteria: 5/5 verified by browser tests
✓ User can toggle dark mode → tests/Browser/DarkModeTest.php::line 45
✓ Theme persists on refresh → tests/Browser/DarkModeTest.php::line 67
✓ All components support dark mode → tests/Browser/DarkModeTest.php::line 89
✓ Keyboard shortcut works → tests/Browser/DarkModeTest.php::line 112
✓ Preference syncs across tabs → tests/Browser/DarkModeTest.php::line 134
📸 Screenshots: /storage/screenshots/story-2025-003/
🎥 Videos: /storage/videos/story-2025-003/ (if applicable)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🧪 UNIT TESTING
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ Tests Passed: 45/45 (100%)
✅ Coverage: 87% (target: 80%)
⏱️ Execution Time: 2.34s
Top Coverage Files:
✓ DarkModeService.php: 95%
✓ ThemeController.php: 92%
✓ UserPreferenceRepository.php: 88%
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔗 INTEGRATION TESTING
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ Feature Tests: 23/23 passed
✓ API endpoints: 8/8 passed
✓ Database operations: 10/10 passed
✓ Service integrations: 5/5 passed
Operations Tested:
✓ GET /api/user/theme → 200 OK (45ms)
✓ POST /api/user/theme → 200 OK (67ms)
✓ Theme preference persisted to database
✓ Cache invalidation on theme change
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🌐 COMPATIBILITY TESTING
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ Browsers: Chrome, Firefox, Safari (all passed)
✅ Devices: Desktop (1920x1080), Tablet (768x1024), Mobile (375x667)
✅ Viewports: All responsive breakpoints validated
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
⚡ PERFORMANCE TESTING
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ Response Times: Within targets
✓ Theme toggle: 45ms (target: <100ms)
✓ Initial page load: 234ms (target: <500ms)
✅ Throughput: 250 req/sec (target: >100 req/sec)
✅ Memory: Stable (no leaks detected)
✅ Database Queries: Optimized (N+1 prevented)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔒 SECURITY TESTING
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ Vulnerability Scan: No issues (composer audit)
✅ Authentication: All protected routes secure
✅ CSRF Protection: Enabled and functional
✅ XSS Prevention: Input sanitization verified
✅ Rate Limiting: 60 requests/minute enforced
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 SUMMARY
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ ALL QUALITY GATES PASSED
Total Tests: 76/76 passed
Total Coverage: 87%
Total Execution Time: 45.67s
Browser Test Coverage: 100% of acceptance criteria
Performance: All targets met
Security: No vulnerabilities
```
#### Phase 7: Story File Updates
1. **UPDATE** QA Checklist based on test results:
**FUNCTIONAL TESTING:**
- `[x]` Each acceptance criterion validated by browser tests
- `[x]` All user workflows tested end-to-end
- `[x]` Error scenarios handled gracefully
- `[x]` Edge cases covered
**INTEGRATION TESTING:**
- `[x]` API endpoints return correct responses
- `[x]` Database operations work correctly
- `[x]` Service integrations functional
- `[x]` External APIs integrated properly
**COMPATIBILITY TESTING:**
- `[x]` Works across target browsers
- `[x]` Responsive on all devices
- `[x]` Accessible via keyboard
- `[x]` Screen reader compatible
**PERFORMANCE TESTING:**
- `[x]` Response times within targets
- `[x]` No memory leaks
- `[x]` Optimized database queries
- `[x]` Bundle size acceptable
**REGRESSION TESTING:**
- `[x]` Existing features still work
- `[x]` No unintended side effects
**SECURITY TESTING:**
- `[x]` No vulnerabilities introduced
- `[x]` Authentication/Authorization working
- `[x]` Input validation functional
2. **UPDATE** Implementation Checklist remaining items:
- `[x]` Browser tests (if now at 100% acceptance criteria coverage)
- `[x]` Documentation (if QA confirmed documentation is complete)
3. **ADD** to Progress Log:
```markdown
- [Today]: Moved to QA stage
- [Today]: Executed comprehensive test suite
* Unit tests: 45/45 passed (87% coverage)
* Feature tests: 23/23 passed
* Browser tests: 8/8 passed (100% criteria coverage)
* Performance: All targets met
* Security: No vulnerabilities
- [Today]: All quality gates PASSED
```
4. **RECORD** test artifacts:
- Test report locations
- Screenshot/video paths
- Coverage report path
- Performance benchmark results
#### Phase 8: Next Steps
1. **DETERMINE** QA outcome:
- IF all tests PASS → Ready for `/sdd:story-validate`
- IF any critical failures → Requires `/sdd:story-refactor`
- IF performance issues → Optimize and re-run QA
2. **DISPLAY** next actions:
```
💡 NEXT STEPS:
════════════════════════════════════════════════
[IF ALL PASSED:]
✅ All QA tests passed - Ready for validation
1. /sdd:story-validate [story-id] # Final validation before ship
2. /sdd:story-ship [story-id] # Deploy to production (after validation)
[IF FAILURES:]
⚠️ X test(s) failed
1. /sdd:story-refactor [story-id] # Return to development
2. Fix failing tests:
- [Test 1 that failed]
- [Test 2 that failed]
3. /sdd:story-review [story-id] # Re-run review
4. /sdd:story-qa [story-id] # Re-run QA after fixes
[ARTIFACT LOCATIONS:]
📸 Screenshots: /storage/screenshots/[story-id]/
🎥 Videos: /storage/videos/[story-id]/
📊 Coverage: /storage/coverage/[story-id]/
📈 Performance: /storage/benchmarks/[story-id]/
```
3. **SHOW** debugging commands for discovered stack:
```bash
# Laravel:
vendor/bin/pest --filter=Browser --parallel=false # Run browser tests sequentially
vendor/bin/pest --filter=Feature::testName # Run specific test
php artisan telescope:prune # Clear performance logs
# Node.js Playwright:
npx playwright test --debug # Debug mode
npx playwright show-report # View HTML report
npx playwright codegen # Generate new test code
# Python Pytest:
pytest tests/browser/ -v -s # Verbose with print output
pytest tests/browser/ --headed --slowmo=1000 # Visual debugging
```
### OUTPUTS
- `/docs/stories/qa/[story-id].md` - Updated story file with QA results
- Automated QA report (displayed to user)
- Test artifacts (screenshots, videos, reports)
- Updated QA Checklist with validation status
- Progress log entry with QA timestamp
### RULES
- MUST load project context before running any tests
- MUST use discovered testing tools from technical-stack.md
- MUST validate against coding-standards.md thresholds
- MUST move story file from review to qa
- MUST run ALL test types (unit, feature, browser)
- MUST validate 100% of acceptance criteria via browser tests
- SHOULD capture test artifacts for documentation
- SHOULD test across browsers/devices if specified
- NEVER skip browser tests (critical for acceptance criteria)
- NEVER allow QA to pass if critical tests fail
- ALWAYS block progression if quality gates fail
- ALWAYS update checklist items accurately
- ALWAYS record test artifact locations
## Examples
### Example 1: QA with All Tests Passing
```bash
INPUT:
/sdd:story-qa STORY-2025-003
OUTPUT:
→ Loading project context...
✓ technical-stack.md: Laravel + Pest + Playwright
✓ coding-standards.md: 80% coverage, all browsers
✓ development-process.md: QA requirements
→ Moving story to QA...
✓ Moved STORY-2025-003.md to /docs/stories/qa/
→ Executing test suite...
[Unit Tests] Running Pest...
✓ 45/45 passed (87% coverage) - 2.34s
[Feature Tests] Running Pest...
✓ 23/23 passed - 5.67s
[Browser Tests] Running Pest Browser...
✓ 8/8 passed - 34.21s
✓ All 5 acceptance criteria validated
📸 Screenshots saved to /storage/screenshots/STORY-2025-003/
[Performance] Running benchmarks...
✓ Theme toggle: 45ms (target: <100ms)
✓ Page load: 234ms (target: <500ms)
[Security] Running Composer audit...
✓ No vulnerabilities
✅ AUTOMATED QA RESULTS
════════════════════════════════════════════════
Story: STORY-2025-003 - Dark Mode Toggle
Stack: Laravel 12, Livewire 3, Pest 4, Playwright
QA Executed: 2025-10-01 16:45:23
✅ ALL QUALITY GATES PASSED
Total Tests: 76/76 passed
Total Coverage: 87%
Browser Test Coverage: 100% of acceptance criteria
Performance: All targets met
Security: No vulnerabilities
💡 NEXT STEPS:
════════════════════════════════════════════════
✅ All QA tests passed - Ready for validation
1. /sdd:story-validate STORY-2025-003 # Final validation before ship
2. /sdd:story-ship STORY-2025-003 # Deploy to production (after validation)
```
### Example 2: QA with Browser Test Failures
```bash
INPUT:
/sdd:story-qa
OUTPUT:
→ Detecting story from current branch...
✓ Current branch: feature/STORY-2025-004-notifications
✓ Story ID: STORY-2025-004
→ Loading project context...
✓ technical-stack.md loaded
→ Moving story to QA...
✓ Moved STORY-2025-004.md to /docs/stories/qa/
→ Executing test suite...
[Unit Tests] Running Pest...
✓ 32/32 passed (92% coverage) - 1.87s
[Feature Tests] Running Pest...
✓ 18/18 passed - 4.23s
[Browser Tests] Running Pest Browser...
✗ 2/5 passed - 3 failed
✗ Failed tests:
- NotificationTest::test_notification_appears
- NotificationTest::test_notification_dismisses
- NotificationTest::test_notification_persistence
✅ AUTOMATED QA RESULTS
════════════════════════════════════════════════
Story: STORY-2025-004 - Notification System
Stack: Laravel 12, Pest 4, Playwright
QA Executed: 2025-10-01 17:12:45
❌ QUALITY GATES FAILED
Unit Tests: ✅ 32/32 passed
Feature Tests: ✅ 18/18 passed
Browser Tests: ❌ 2/5 passed (3 failed)
Acceptance Criteria Coverage: 40% (2/5 criteria validated)
❌ BLOCKING ISSUES:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Browser Tests:
✗ NotificationTest::test_notification_appears
Element not found: [wire:id="notification-toast"]
✗ NotificationTest::test_notification_dismisses
Click failed: Element not visible
✗ NotificationTest::test_notification_persistence
Timeout waiting for element
💡 NEXT STEPS:
════════════════════════════════════════════════
⚠️ 3 browser tests failed
1. /sdd:story-refactor STORY-2025-004 # Return to development
2. Fix failing browser tests:
- Verify notification component is rendered
- Check Livewire component IDs are correct
- Ensure dismiss button is visible
3. /sdd:story-review STORY-2025-004 # Re-run review
4. /sdd:story-qa STORY-2025-004 # Re-run QA after fixes
[DEBUGGING COMMANDS:]
vendor/bin/pest --filter=Browser::NotificationTest # Run specific test
vendor/bin/pest --filter=Browser --parallel=false # Run sequentially
php artisan serve # Start server for manual testing
```
### Example 3: QA with Performance Issues
```bash
INPUT:
/sdd:story-qa STORY-2025-005
OUTPUT:
→ Loading project context...
✓ technical-stack.md: Laravel + Pest + k6
→ Moving story to QA...
✓ Moved STORY-2025-005.md to /docs/stories/qa/
→ Executing test suite...
[Unit Tests] ✓ 28/28 passed (85% coverage)
[Feature Tests] ✓ 15/15 passed
[Browser Tests] ✓ 6/6 passed
[Performance] Running k6 benchmarks...
⚠️ Search endpoint: 450ms (target: <200ms)
⚠️ Database queries: 15 queries (N+1 detected)
✅ AUTOMATED QA RESULTS
════════════════════════════════════════════════
Story: STORY-2025-005 - Advanced Search
Stack: Laravel 12, Pest 4, k6
QA Executed: 2025-10-01 18:34:12
⚠️ PERFORMANCE ISSUES DETECTED
All tests: ✅ PASSED
Performance: ⚠️ Below targets
❌ BLOCKING ISSUES:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Performance:
✗ Search response time: 450ms (target: <200ms)
✗ N+1 query problem detected in SearchController
🔧 SUGGESTED OPTIMIZATIONS:
→ Add eager loading: ->with(['category', 'user'])
→ Add database index on search_terms column
→ Implement search results caching
💡 NEXT STEPS:
════════════════════════════════════════════════
1. /sdd:story-refactor STORY-2025-005 # Optimize performance
2. Add eager loading and indexes
3. /sdd:story-qa STORY-2025-005 # Re-run QA with optimizations
```
## Edge Cases
### No Project Context
- DETECT missing `/docs/project-context/` directory
- SUGGEST running `/sdd:project-init`
- ERROR: Cannot determine testing tools without context
- EXIT with guidance
### Story Not in Review
- CHECK if story in `/docs/stories/development/`
- IF found: ERROR "Story must pass review first"
- SUGGEST: `/sdd:story-review [story-id]` first
- IF in `/docs/stories/qa/`: ASK if user wants to re-run QA
### No Browser Tests Found
- DETECT missing browser test files
- ERROR: "Browser tests required for acceptance criteria validation"
- PROVIDE test file path examples for stack
- SUGGEST: Create browser tests before QA
### Browser Test Coverage < 100% Criteria
- COUNT acceptance criteria in story
- COUNT passing browser tests
- CALCULATE coverage gap
- BLOCK QA progression
- LIST uncovered criteria
### Performance Benchmarks Not Defined
- CHECK if performance targets in story
- IF missing: SKIP performance testing
- WARN: "No performance targets defined"
- CONTINUE with other tests
### Test Framework Not Installed
- DETECT missing testing tools
- PROVIDE installation commands for stack
- ERROR: Cannot proceed without test framework
- EXIT with setup instructions
### Flaky Browser Tests
- DETECT intermittent failures
- RETRY failed tests (up to 3 times)
- IF still failing: MARK as FAILED
- SUGGEST: Investigate timing/race conditions
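One possible retry sketch, assuming Pest drives the browser suite; the filter value is illustrative:
```bash
# Retry flaky browser tests up to 3 times before marking the run as FAILED.
FILTER="NotificationTest"   # illustrative filter; narrow to the flaky test when known
for attempt in 1 2 3; do
  if vendor/bin/pest --filter="$FILTER"; then
    echo "Browser tests passed on attempt $attempt"
    exit 0
  fi
  echo "Attempt $attempt failed - retrying..."
done
echo "Still failing after 3 attempts - marking as FAILED, investigate timing/race conditions"
exit 1
```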
## Error Handling
- **Missing /docs/project-context/**: Suggest `/sdd:project-init`, exit gracefully
- **Story not in review**: Provide clear workflow guidance
- **Test framework errors**: Capture full error, suggest fixes
- **Browser test timeouts**: Increase timeout, suggest element inspection
- **Performance test failures**: Provide optimization suggestions
- **Security vulnerabilities**: Block with specific CVE details
## Performance Considerations
- Run unit, feature, and browser tests in parallel when possible
- Stream test output in real-time (don't wait for all tests)
- Cache test database between feature tests
- Reuse browser instances in browser tests
- Limit performance benchmarks to changed endpoints
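A rough sketch of the parallel run suggested above, assuming Pest testsuites named `Unit` and `Feature`, a `Browser` filter, and isolated test databases per suite (all assumptions; adjust to the discovered stack):
```bash
# Run the fast suites and the browser suite concurrently; each writes to its own log
# so output can be tailed in real time.
vendor/bin/pest --testsuite=Unit    > unit.log    2>&1 & PID_UNIT=$!
vendor/bin/pest --testsuite=Feature > feature.log 2>&1 & PID_FEATURE=$!
vendor/bin/pest --filter=Browser    > browser.log 2>&1 & PID_BROWSER=$!
STATUS=0
wait "$PID_UNIT"    || STATUS=1
wait "$PID_FEATURE" || STATUS=1
wait "$PID_BROWSER" || STATUS=1
exit "$STATUS"
```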
## Related Commands
- `/sdd:story-review [id]` - Must pass before QA
- `/sdd:story-validate [id]` - Run after QA passes
- `/sdd:story-refactor [id]` - Return to development if QA fails
- `/sdd:story-ship [id]` - Deploy after validation
- `/sdd:story-status [id]` - Check current state
## Constraints
- ✅ MUST load project context before tests
- ✅ MUST move story file to qa directory
- ✅ MUST run ALL test types (unit, feature, browser)
- ✅ MUST validate 100% acceptance criteria via browser tests
- ⚠️ NEVER skip browser tests
- ⚠️ NEVER allow QA to pass with critical failures
- 📋 MUST capture test artifacts (screenshots, videos)
- 🔧 SHOULD test across browsers/devices if specified
- 💾 MUST update progress log with test results
- 🚫 BLOCK validation if quality gates fail


@@ -0,0 +1,431 @@
# /sdd:story-quick-check
Lightning-fast 30-second health check for current work in progress.
---
## Meta
**Category**: Testing & Validation
**Format**: Structured (Standard)
**Execution Time**: 15-30 seconds
**Prerequisites**: None (works at any stage)
**Destructive**: No (read-only unless `--fix` is used)
**Related Commands**:
- `/sdd:story-test-integration` - Comprehensive integration tests (3-8 min)
- `/sdd:story-full-check` - Full validation suite (5 min)
- `/sdd:story-save` - Save progress with git commit
**Context Requirements**:
- None (uses project defaults)
---
## Parameters
**Check Scope** (optional):
```bash
# Run all checks (default)
/sdd:story-quick-check
# Scope to specific checks
--checks=syntax|tests|lint|git|all # Default: all
--fix # Auto-fix issues when possible
--verbose # Show detailed output
```
**Examples**:
```bash
/sdd:story-quick-check # Full 30s check
/sdd:story-quick-check --checks=tests # Only run tests (~10s)
/sdd:story-quick-check --fix # Auto-fix lint/format issues
```
---
## Process
### Phase 1: Basic Checks (10s)
**Syntax & Compilation**:
```bash
# Laravel: Check for syntax errors (php -l lints one file per invocation)
find app -type f -name '*.php' -print0 | xargs -0 -n1 php -l
php artisan config:clear --quiet
# Check:
✓ PHP syntax valid
✓ Configuration compiles
✓ No fatal errors
✓ Dependencies resolved
```
**Output**:
```
🔍 BASIC CHECKS (8s)
====================
✅ PHP syntax valid (127 files checked)
✅ Config compiles
✅ Autoload working
✅ Env file present
```
**If Errors**:
```
❌ SYNTAX ERROR FOUND
File: app/Livewire/TaskManager.php:42
Error: syntax error, unexpected 'public' (T_PUBLIC)
Quick fix: Missing semicolon on line 41
```
---
### Phase 2: Test Check (10s)
**Run Fast Tests**:
```bash
# Laravel/Pest: Run unit tests only (fastest)
php artisan test --testsuite=Unit --stop-on-failure
# Check:
✓ Existing tests still pass
✓ No new test failures
✓ Test files valid
```
**Output**:
```
🧪 TEST CHECK (9s)
==================
✅ Unit tests: 24/24 passed
✅ No failures detected
⚠️ New code in TaskManager.php has no tests
Tests run: 24
Duration: 0.8s
```
**If Failures**:
```
❌ TEST FAILURES (2)
1. Task::updateOrder() - Expected 1, got 0
Location: tests/Unit/TaskTest.php:45
Quick fix: Update assertion to expect 0
2. Category::tasks() - Undefined property
Location: tests/Unit/CategoryTest.php:28
Quick fix: Add relationship to Category model
```
---
### Phase 3: Lint & Format Check (5s)
**Code Quality**:
```bash
# Laravel: Run Pint in test mode (no changes)
vendor/bin/pint --test --dirty
# Check:
✓ Code formatting correct
✓ No style violations
✓ Follows Laravel standards
```
**Output**:
```
📋 LINT CHECK (4s)
==================
✅ Formatting correct (Laravel Pint)
✅ No style violations
✅ PSR-12 compliant
```
**If Issues**:
```
⚠️ FORMATTING ISSUES (3 files)
- app/Livewire/TaskManager.php (12 changes)
- app/Models/Task.php (3 changes)
- routes/web.php (1 change)
Auto-fix: vendor/bin/pint --dirty
```
---
### Phase 4: Git Status Check (5s)
**Repository Status**:
```bash
# Check git state
git status --short
git diff --stat
# Check:
✓ Working directory clean (or changes tracked)
✓ No merge conflicts
✓ Branch status
```
**Output**:
```
🚦 GIT CHECK (3s)
=================
Branch: feature/STORY-DUE-002
Status: ⚠️ Uncommitted changes
Modified files:
M app/Livewire/TaskManager.php
M resources/views/livewire/task-manager.blade.php
?? tests/Feature/TaskDueDateTest.php
✓ No conflicts
✓ Up to date with remote
```
---
### Phase 5: Instant Results Summary (2s)
**Generate Quick Report**:
```
⚡ QUICK CHECK RESULTS
=====================
Completed in 28 seconds
┌──────────────┬────────┬──────────────────────┐
│ Check │ Status │ Issues │
├──────────────┼────────┼──────────────────────┤
│ Syntax │ ✅ │ None │
│ Tests │ ⚠️ │ 2 new tests needed │
│ Lint/Format │ ✅ │ None │
│ Git Status │ ⚠️ │ Uncommitted changes │
└──────────────┴────────┴──────────────────────┘
OVERALL: 🟡 YELLOW - Minor issues
⚠️ ISSUES FOUND (2):
1. New code missing tests (TaskManager.php)
2. Uncommitted changes (3 files)
🔧 QUICK FIXES:
1. Add test: php artisan make:test TaskManagerTest
2. Commit: /sdd:story-save "Add due date feature"
Estimated fix time: 5 minutes
```
---
### Phase 6: Auto-Fix (if --fix flag)
**Automatic Fixes**:
```bash
# If --fix flag provided
/sdd:story-quick-check --fix
# Auto-fixes:
✓ Run Pint to format code
✓ Clear config cache
✓ Suggest test creation
✓ Offer to commit changes
```
**Output**:
```
🔧 AUTO-FIX APPLIED
===================
✅ Formatted 3 files (vendor/bin/pint)
✅ Cleared config cache
⚠️ Tests require manual creation
⚠️ Git commit requires manual action
Updated status: 🟢 GREEN (after fixes)
```
---
## Examples
### Example 1: All Clear
```bash
$ /sdd:story-quick-check
⚡ QUICK CHECK RESULTS
=====================
Completed in 22 seconds
✅ Syntax: Valid
✅ Tests: 24/24 passed
✅ Lint: No issues
✅ Git: Clean working directory
OVERALL: 🟢 GREEN - All clear!
✅ Safe to proceed
```
### Example 2: Minor Issues (Yellow)
```bash
$ /sdd:story-quick-check
⚡ QUICK CHECK RESULTS
=====================
Completed in 28 seconds
OVERALL: 🟡 YELLOW - Minor issues
⚠️ ISSUES (2):
1. Formatting: 3 files need Pint
2. Git: 3 uncommitted changes
🔧 Quick fixes:
vendor/bin/pint --dirty
/sdd:story-save "Add feature"
Estimated fix: 2 minutes
```
### Example 3: Critical Issues (Red)
```bash
$ /sdd:story-quick-check
⚡ QUICK CHECK RESULTS
=====================
Completed in 18 seconds
OVERALL: 🔴 RED - Blocking issues
❌ CRITICAL (2):
1. Syntax error: TaskManager.php:42
Missing semicolon on line 41
2. Test failures: 2/24 failed
Task::updateOrder() - assertion failed
Category::tasks() - undefined property
🔧 Must fix before continuing:
1. Fix syntax error
2. Update failing tests
Do NOT proceed until resolved.
```
### Example 4: Auto-Fix Applied
```bash
$ /sdd:story-quick-check --fix
⚡ QUICK CHECK RESULTS
=====================
🔧 AUTO-FIX APPLIED:
✅ Formatted 3 files
✅ Cleared caches
✅ Resolved all auto-fixable issues
OVERALL: 🟢 GREEN - All issues resolved!
Remaining manual actions:
- Consider adding tests for new code
- Run /sdd:story-save to commit changes
✅ Safe to proceed
```
### Example 5: Tests Only
```bash
$ /sdd:story-quick-check --checks=tests
🧪 TEST CHECK (9s)
==================
✅ Unit tests: 24/24 passed
✅ Feature tests: 8/8 passed
OVERALL: 🟢 GREEN
✅ All tests passing
```
---
## Success Criteria
**Command succeeds when**:
- All checks complete within 30 seconds
- Status report generated (green/yellow/red)
- Quick fixes suggested for issues
- Clear next action provided
**Status Levels**:
- 🟢 **GREEN**: No issues, safe to proceed
- 🟡 **YELLOW**: Minor issues, fix before review
- 🔴 **RED**: Blocking issues, must fix immediately
---
## Output Format
**One-Liner Status** (always shown):
```bash
✅ Clear to proceed
# or
⚠️ 3 issues need attention - 2min to fix
# or
❌ STOP: 2 critical errors must be fixed
```
**Detailed Report** (when issues found):
```
Issue breakdown by priority
Quick fix commands
Estimated fix time
Next recommended action
```
---
## Notes
- **Execution Time**: Always under 30 seconds
- **Read-Only**: Never modifies code (unless `--fix` flag)
- **Fast Feedback**: Designed for frequent use during development
- **Minimal Scope**: Only checks critical items (syntax, tests, lint, git)
- **Auto-Fix**: With `--fix` flag, automatically resolves formatting issues
**Best Practices**:
1. Run before every `/sdd:story-save` commit
2. Run after making changes to verify stability
3. Use `--fix` to quickly resolve formatting issues
4. Use `--checks=tests` for rapid test validation
5. If RED, fix immediately before continuing work
**When to Use**:
- ✅ Before committing code (`/sdd:story-save`)
- ✅ After implementing a feature
- ✅ Before switching tasks
- ✅ Multiple times per hour during active development
**When NOT to Use**:
- ❌ Instead of comprehensive testing (use `/sdd:story-test-integration`)
- ❌ For deployment validation (use `/sdd:story-full-check`)
- ❌ For final story validation (use `/sdd:story-validate`)
**Next Steps**:
```bash
🟢 GREEN → Continue work or /sdd:story-save
🟡 YELLOW → Fix issues, re-check, then /sdd:story-save
🔴 RED → Fix critical issues immediately
```
**For Deeper Validation**:
```bash
/sdd:story-test-integration # Integration + E2E tests (3-8 min)
/sdd:story-full-check # Complete validation suite (5 min)
```

commands/story-refactor.md Normal file

@@ -0,0 +1,455 @@
# /sdd:story-refactor
## Meta
- Version: 2.0
- Category: story-management
- Complexity: high
- Purpose: Create refactoring story based on code analysis and project standards
## Definition
**Purpose**: Analyze codebase against project standards and create a prioritized refactoring story with specific, actionable requirements.
**Syntax**: `/sdd:story-refactor [objective]`
## Parameters
| Parameter | Type | Required | Default | Description | Validation |
|-----------|------|----------|---------|-------------|------------|
| objective | string | No | comprehensive | Refactoring focus area or general analysis | Any text phrase |
## INSTRUCTION: Create Refactoring Story
### INPUTS
- objective: Optional refactoring focus (e.g., "improve performance", "reduce complexity")
- Project context from `/docs/project-context/` directory
- Current codebase state
- Active stories from `/docs/stories/development/`
### PROCESS
#### Phase 1: Project Context Loading
1. **CHECK** if `/docs/project-context/` directory exists
2. IF missing:
- SUGGEST running `/sdd:project-init` first
- EXIT with initialization guidance
3. **LOAD** project context from:
- `/docs/project-context/coding-standards.md` - Code quality rules and thresholds
- `/docs/project-context/technical-stack.md` - Framework patterns and best practices
- `/docs/project-context/development-process.md` - Quality requirements
#### Phase 2: Objective Definition
1. **PARSE** optional objective parameter
2. IF no objective provided:
- SET analysis mode = comprehensive
- ANALYZE all code quality dimensions
3. IF objective provided:
- MAP objective to analysis focus areas:
* "improve performance" → Database queries, N+1, caching, assets
* "reduce complexity" → Cyclomatic complexity, nesting, method length
* "extract reusable components" → Duplicate code, large components
* "improve accessibility" → ARIA, keyboard nav, screen readers
* "optimize for mobile" → Responsive design, touch, performance
* Custom → Interpret and adapt analysis
- PRIORITIZE findings aligned with objective
#### Phase 3: Code Analysis
1. **ANALYZE** codebase using DISCOVERED standards from project context:
**Structure Issues**:
- SCAN for functions exceeding length limits (from coding-standards.md; see the heuristic sketch at the end of this phase)
- CHECK nesting depth against complexity thresholds
- IDENTIFY duplicate code blocks
- FLAG complex conditionals
**Naming Conventions**:
- VERIFY variables follow naming patterns
- CHECK consistency with style guide
- VALIDATE function naming conventions
**Framework Patterns** (from technical-stack.md):
- Laravel: Eloquent patterns, validation, authorization
- React: Hooks patterns, prop validation
- Vue: Composition API, reactivity
- Django: MVT patterns, form validation
- Express: Middleware patterns, error handling
- [DISCOVERED framework]: Apply specific patterns
**Error Handling**:
- CHECK error boundaries exist (per framework)
- VERIFY comprehensive error handling
- VALIDATE loading states present
**Performance**:
- IDENTIFY database N+1 queries
- CHECK for missing caching opportunities
- ANALYZE asset optimization
- MEASURE component render efficiency
**Accessibility** (if relevant):
- VERIFY ARIA attributes present
- CHECK keyboard navigation support
- VALIDATE screen reader compatibility
- TEST focus management
2. **PRIORITIZE** findings:
IF comprehensive mode:
- Priority 1: Security/bug issues requiring immediate attention
- Priority 2: Maintainability issues affecting development velocity
- Priority 3: Style and optimization improvements
IF objective-focused mode:
- Priority 1: Changes directly supporting objective
- Priority 2: Critical security/bug issues (non-conflicting)
- Priority 3: Supporting improvements complementing objective
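A heuristic sketch of the long-function scan referenced above, assuming PHP sources under `app/`, PSR-12 indentation, and a 50-line limit (substitute the threshold discovered in coding-standards.md):
```bash
# Flag methods longer than MAX lines. Heuristic only: relies on PSR-12 indentation,
# where a method signature and its closing brace both sit at 4 spaces.
MAX=50   # substitute the limit from coding-standards.md
find app -name '*.php' -print0 | xargs -0 awk -v max="$MAX" '
  /^    (public|protected|private).*function/ { start = FNR; sig = $0 }
  /^    }/ && start { len = FNR - start + 1
                      if (len > max) printf "%s:%d (%d lines) %s\n", FILENAME, start, len, sig
                      start = 0 }
'
```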
#### Phase 4: Story ID Generation
1. **GENERATE** story ID in format `STORY-YYYY-NNN`:
- YYYY = current year
- NNN = next available number
2. **CHECK** for existing IDs across all story directories
3. **ENSURE** uniqueness
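A minimal sketch of the ID generation, assuming stories live under `docs/stories/*/` and follow the `STORY-YYYY-NNN.md` naming convention:
```bash
# Compute the next available STORY-YYYY-NNN id across all story directories.
YEAR=$(date +%Y)
LAST=$(find docs/stories -name "STORY-${YEAR}-*.md" \
        | sed -E "s/.*STORY-${YEAR}-([0-9]{3})\.md/\1/" \
        | sort -n | tail -1)
NEXT=$(printf "STORY-%s-%03d" "$YEAR" $((10#${LAST:-0} + 1)))
echo "Next story ID: $NEXT"   # e.g. STORY-2025-015
```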
#### Phase 5: Story File Creation
1. **ENSURE** `/docs/stories/backlog/` directory exists
2. **CREATE** story file at `/docs/stories/backlog/[story-id].md`
3. **POPULATE** refactoring story template with:
- Story ID and title (auto-generated based on objective)
- Status: backlog
- Today's date as "Started" date
- Refactoring objective (clear goal statement)
- Background (why refactoring needed, analysis summary)
- Analysis Findings (organized by priority, with file:line references)
- Requirements (specific, actionable refactoring tasks)
- Acceptance Criteria (testable completion criteria)
- Implementation Notes (technical guidance, patterns to follow)
- Testing Requirements (verify functionality maintained)
- Risk Assessment (potential risks, mitigation strategies)
- Impact Analysis (affected components, tests, dependencies)
4. **REFERENCE** analysis findings:
- Include file paths and line numbers
- Show before/after code examples
- Note pattern violations with framework references
- Link to relevant coding standard sections
#### Phase 6: Completion Summary
1. **DISPLAY** refactoring summary:
```
✅ Refactoring Story Created
═══════════════════════════════════
Story ID: [STORY-YYYY-NNN]
Title: [Auto-generated Title]
Location: /docs/stories/backlog/[story-id].md
Status: backlog
Analysis Mode: [Comprehensive / Objective-focused: {objective}]
Findings:
- Priority 1 (Critical): [count] issues
- Priority 2 (Important): [count] issues
- Priority 3 (Nice to have): [count] improvements
Files Affected: [count] files
Estimated Complexity: [Low/Medium/High]
```
2. **SUGGEST** next steps:
```
💡 NEXT STEPS:
1. Review story in /docs/stories/backlog/[story-id].md
2. /sdd:story-start [story-id] # Move to development when ready
3. /sdd:project-status # View all stories
```
### OUTPUTS
- `/docs/stories/backlog/[story-id].md` - New refactoring story with comprehensive analysis
- Analysis summary with prioritized findings
### RULES
- MUST generate unique story ID across all directories
- MUST analyze using DISCOVERED project standards (no assumptions)
- MUST prioritize findings by objective if provided
- MUST include file paths and line numbers in findings
- MUST provide specific, actionable refactoring tasks
- SHOULD include before/after code examples
- SHOULD reference framework patterns from technical-stack.md
- NEVER suggest refactoring without analysis evidence
- ALWAYS include risk assessment for breaking changes
- MUST verify all existing tests will pass post-refactoring
## Refactoring Story Template
```markdown
# [STORY-ID]: [Refactoring Title]
## Status: backlog
**Started:** [Today's Date]
**Completed:**
**Branch:** (none - in backlog)
## Objective
[Clear statement of refactoring goal based on analysis]
## Background
[Why this refactoring is needed - analysis context, pain points, violations found]
## Analysis Findings
### Priority 1: Critical
- [ ] **[File:Line]**: [Issue description]
- Current: [Code example or pattern]
- Standard: [Expected pattern from coding-standards.md]
- Impact: [Why this matters]
### Priority 2: Important
- [ ] **[File:Line]**: [Issue description]
- Current: [Code example]
- Suggested: [Improved pattern]
- Benefit: [Improvement gained]
### Priority 3: Nice to Have
- [ ] **[File:Line]**: [Improvement opportunity]
- Enhancement: [What to improve]
- Justification: [Why it helps]
## Requirements
### R1: [Requirement Title]
**What**: [Specific change needed]
**Why**: [Business/technical justification]
**How**: [Suggested approach or pattern]
**Tests**: [Impact on existing tests]
**Dependencies**: [Related components]
### R2: [Requirement Title]
[Same structure as R1]
## Acceptance Criteria
- [ ] All Priority 1 issues resolved
- [ ] Code quality metrics maintained or improved
- [ ] All existing functionality works identically
- [ ] All tests pass (no failures introduced)
- [ ] Code follows [framework] patterns from technical-stack.md
- [ ] Performance metrics maintained or improved
- [ ] Documentation updated where necessary
## Implementation Notes
**Approach**: [Recommended refactoring strategy]
**Stack**: [Auto-populated from technical-stack.md]
**Patterns**: [Framework-specific patterns to apply]
**Tools**: [Linters, formatters, static analysis tools to use]
## Testing Requirements
### Existing Tests
- [ ] All unit tests pass
- [ ] All integration tests pass
- [ ] All E2E tests pass
- [ ] No regressions in test coverage
### New Tests
- [ ] [New test scenario 1]
- [ ] [New test scenario 2]
### Manual Testing
- [ ] [Manual verification step 1]
- [ ] [Manual verification step 2]
## Risk Assessment
### High Risk
- **[Risk description]**: [Mitigation strategy]
### Medium Risk
- **[Risk description]**: [Mitigation strategy]
### Low Risk
- **[Risk description]**: [Mitigation strategy]
## Impact Analysis
**Components Affected**: [List of components/modules]
**Tests Affected**: [List of test files]
**Dependencies**: [External dependencies or other stories]
**Breaking Changes**: [Yes/No - details if yes]
## Progress Log
- [Today]: Created refactoring story from code analysis
## Rollback Plan
[How to rollback if issues arise during refactoring]
## Lessons Learned
[To be filled when complete]
```
## Examples
### Example 1: Comprehensive Refactoring
```bash
INPUT:
/sdd:story-refactor
OUTPUT:
→ Checking project context...
→ Loading coding standards and framework patterns...
→ Analyzing codebase comprehensively...
→ Found 23 issues across 8 files
✅ Refactoring Story Created
═══════════════════════════════════
Story ID: STORY-2025-012
Title: Comprehensive Code Quality Improvements
Location: /docs/stories/backlog/STORY-2025-012.md
Status: backlog
Analysis Mode: Comprehensive
Findings:
- Priority 1 (Critical): 3 issues
- Priority 2 (Important): 12 issues
- Priority 3 (Nice to have): 8 improvements
Files Affected: 8 files
Estimated Complexity: High
💡 NEXT STEPS:
1. Review story in /docs/stories/backlog/STORY-2025-012.md
2. /sdd:story-start STORY-2025-012 # Move to development when ready
3. /sdd:project-status # View all stories
```
### Example 2: Performance-Focused Refactoring
```bash
INPUT:
/sdd:story-refactor improve performance
OUTPUT:
→ Checking project context...
→ Loading coding standards and framework patterns...
→ Analyzing for performance issues...
→ Scanning database queries, caching, assets...
→ Found 7 performance issues across 5 files
✅ Refactoring Story Created
═══════════════════════════════════
Story ID: STORY-2025-013
Title: Performance Optimization Refactoring
Location: /docs/stories/backlog/STORY-2025-013.md
Status: backlog
Analysis Mode: Objective-focused (improve performance)
Findings:
- Priority 1 (Performance): 7 issues
• 3 N+1 query problems
• 2 missing cache opportunities
• 2 unoptimized asset loads
- Priority 2 (Critical): 0 issues
- Priority 3 (Supporting): 2 improvements
Files Affected: 5 files
Estimated Complexity: Medium
💡 NEXT STEPS:
1. Review story in /docs/stories/backlog/STORY-2025-013.md
2. /sdd:story-start STORY-2025-013 # Move to development when ready
3. /sdd:project-status # View all stories
```
### Example 3: Accessibility Refactoring
```bash
INPUT:
/sdd:story-refactor improve accessibility
OUTPUT:
→ Checking project context...
→ Analyzing ARIA, keyboard nav, screen reader support...
→ Found 15 accessibility issues across 6 components
✅ Refactoring Story Created
═══════════════════════════════════
Story ID: STORY-2025-014
Title: Accessibility Compliance Refactoring
Location: /docs/stories/backlog/STORY-2025-014.md
Status: backlog
Analysis Mode: Objective-focused (improve accessibility)
Findings:
- Priority 1 (Accessibility): 15 issues
• 8 missing ARIA attributes
• 4 keyboard navigation gaps
• 3 screen reader issues
- Priority 2 (Critical): 0 issues
- Priority 3 (Supporting): 5 improvements
Files Affected: 6 components
Estimated Complexity: Medium
💡 NEXT STEPS:
1. Review story in /docs/stories/backlog/STORY-2025-014.md
2. /sdd:story-start STORY-2025-014 # Move to development when ready
3. /sdd:project-status # View all stories
```
## Edge Cases
### No Project Context
- DETECT missing `/docs/project-context/` directory
- SUGGEST running `/sdd:project-init`
- CANNOT proceed without coding standards
- EXIT with clear guidance
### No Code Issues Found
- REPORT clean codebase status
- SUGGEST code is already well-refactored
- OFFER to create story anyway for future improvements
- DOCUMENT analysis results even if no issues
### Framework Not Recognized
- DETECT framework from technical-stack.md
- FALL BACK to generic code analysis if unknown
- WARN that framework-specific patterns unavailable
- CONTINUE with structure/naming analysis
### Conflicting Standards
- DETECT contradictions in project context
- LOG warnings about conflicts
- ASK user to clarify which standard applies
- DOCUMENT decision in story
## Error Handling
- **Missing /docs/project-context/**: Exit with suggestion to run `/sdd:project-init`
- **No coding-standards.md**: Cannot analyze - critical file missing
- **Analysis errors**: Log specific files causing issues, continue with others
- **Invalid objective**: Interpret broadly or ask user for clarification
## Performance Considerations
- Code analysis may take 30-60 seconds for large codebases
- Show progress indicators during analysis
- Cache project context for session
- Analyze only relevant files based on objective
## Related Commands
- `/sdd:project-init` - Initialize project structure with standards
- `/sdd:project-brief` - Define project goals and constraints
- `/sdd:story-new` - Create feature story
- `/sdd:story-start [id]` - Begin refactoring work
- `/sdd:project-status` - View all stories
## Constraints
- ✅ MUST use DISCOVERED standards (no assumptions)
- ✅ MUST include file:line references in findings
- ✅ MUST prioritize by objective when provided
- ⚠️ NEVER suggest refactoring without evidence
- 📋 MUST provide specific, actionable tasks
- 🔧 SHOULD include before/after examples
- 💾 MUST assess risk for breaking changes
- 🧪 MUST verify existing tests will pass

commands/story-review.md Normal file

@@ -0,0 +1,604 @@
# /sdd:story-review
## Meta
- Version: 2.0
- Category: quality-gates
- Complexity: high
- Purpose: Move story to review stage and execute comprehensive quality checks
## Definition
**Purpose**: Execute comprehensive code review with project-specific quality gates, linting, testing, security checks, and standards compliance before QA.
**Syntax**: `/sdd:story-review [story_id]`
## Parameters
| Parameter | Type | Required | Default | Description | Validation |
|-----------|------|----------|---------|-------------|------------|
| story_id | string | No | current branch | Story ID (STORY-YYYY-NNN) | Must match format STORY-YYYY-NNN |
## INSTRUCTION: Execute Story Review
### INPUTS
- story_id: Story identifier (auto-detected from branch if not provided)
- Project context from `/docs/project-context/` directory
- Story file from `/docs/stories/development/[story-id].md`
- Codebase changes since story started
### PROCESS
#### Phase 1: Project Context Loading
1. **CHECK** if `/docs/project-context/` directory exists
2. IF missing:
- SUGGEST running `/sdd:project-init` first
- EXIT with initialization guidance
3. **LOAD** project-specific review standards from:
- `/docs/project-context/technical-stack.md` - Technology stack and tools
- `/docs/project-context/coding-standards.md` - Quality standards and thresholds
- `/docs/project-context/development-process.md` - Review stage requirements
#### Phase 2: Story Identification & Validation
1. IF story_id NOT provided:
- **DETECT** current git branch
- **EXTRACT** story ID from branch name (see the sketch at the end of this phase)
- EXAMPLE: Branch `feature/STORY-2025-001-auth` → ID `STORY-2025-001`
2. **VALIDATE** story exists:
- CHECK `/docs/stories/development/[story-id].md` exists
- IF NOT found in development:
- CHECK if already in `/docs/stories/review/`
- INFORM user and ask to proceed with re-review
- IF NOT found anywhere:
- ERROR: "Story [story-id] not found"
- EXIT with guidance
3. **READ** story file for:
- Current status
- Implementation checklist state
- Acceptance criteria
- Technical approach
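A minimal sketch of the branch-based detection referenced in step 1:
```bash
# Derive the story ID from the current git branch name.
BRANCH=$(git branch --show-current)
STORY_ID=$(echo "$BRANCH" | grep -oE 'STORY-[0-9]{4}-[0-9]{3}' | head -1)
if [ -z "$STORY_ID" ]; then
  echo "Error: could not extract a STORY-YYYY-NNN id from branch '$BRANCH'"
  exit 1
fi
echo "Detected story: $STORY_ID"
```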
#### Phase 3: Directory Preparation
1. **ENSURE** `/docs/stories/review/` directory exists
- CREATE directory if missing
- ADD `.gitkeep` file if directory was created
2. **MOVE** story file:
- FROM: `/docs/stories/development/[story-id].md`
- TO: `/docs/stories/review/[story-id].md`
- PRESERVE all content and formatting
3. **UPDATE** story metadata:
- Change status from "development" to "review"
- KEEP existing dates and branch information
- ADD review start timestamp to progress log
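A sketch of the move and metadata update, assuming the story file carries a `## Status: development` line and stories are tracked in git (the story ID is illustrative):
```bash
# Move the story file into review and update its status line.
STORY_ID="STORY-2025-001"   # illustrative id
SRC="docs/stories/development/$STORY_ID.md"
DEST="docs/stories/review/$STORY_ID.md"
mkdir -p docs/stories/review
git mv "$SRC" "$DEST"
sed -i 's/^## Status: development/## Status: review/' "$DEST"   # GNU sed; use sed -i '' on macOS
echo "- $(date +%Y-%m-%d): Moved to review stage" >> "$DEST"    # simplified: real entry belongs under "## Progress Log"
```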
#### Phase 4: Quality Gate Execution
##### 4.1 Linting & Formatting (Discovered Tools)
1. **IDENTIFY** linter from technical-stack.md:
- PHP/Laravel: `vendor/bin/pint`
- Node.js: ESLint, Prettier
- Python: Black, flake8, pylint
- Go: gofmt, golint
- Rust: rustfmt, clippy
2. **RUN** discovered linter:
```bash
# Example for Laravel:
vendor/bin/pint --dirty
# Example for Node.js:
npm run lint
npm run format
```
3. **CAPTURE** results:
- COUNT style violations
- IDENTIFY auto-fixable issues
- LIST files modified by auto-fix
- REPORT remaining manual fixes needed
##### 4.2 Testing (Discovered Framework)
1. **IDENTIFY** test framework from technical-stack.md:
- PHP/Laravel: Pest, PHPUnit
- Node.js: Jest, Vitest, Mocha
- Python: pytest, unittest
- Go: go test
- Java: JUnit, TestNG
2. **RUN** discovered test suite:
```bash
# Example for Laravel Pest:
vendor/bin/pest --coverage
# Example for Node.js:
npm test -- --coverage
```
3. **ANALYZE** test results:
- PASS/FAIL status for all test types
- Coverage percentage (unit, feature, browser)
- Identify untested code paths
- CHECK coverage meets standards from coding-standards.md
##### 4.3 Security Checks (Discovered Tools)
1. **IDENTIFY** security tools from technical-stack.md:
- PHP/Laravel: `composer audit`
- Node.js: `npm audit`, `yarn audit`
- Python: `safety check`, `bandit`
- Go: `govulncheck`, `gosec`
- Java: `mvn dependency-check:check` (OWASP plugin)
2. **RUN** discovered security scanners:
```bash
# Example for Laravel:
composer audit
# Example for Node.js:
npm audit --production
```
3. **SCAN** for exposed secrets:
- CHECK for API keys, tokens, passwords
- VALIDATE environment variable usage
- REVIEW configuration files
4. **FRAMEWORK-SPECIFIC** security checks:
- Laravel: CSRF tokens, SQL injection prevention, XSS protection
- React: XSS via dangerouslySetInnerHTML, dependency vulnerabilities
- Express: Helmet middleware, rate limiting, input validation
##### 4.4 Dependencies Analysis (Discovered Package Manager)
1. **IDENTIFY** package manager from technical-stack.md:
- PHP: Composer
- Node.js: npm, yarn, pnpm
- Python: pip, poetry
- Go: go modules
- Rust: cargo
2. **CHECK** for unused dependencies:
```bash
# Example for Node.js:
npx depcheck
# Example for PHP (assumes the composer-unused dev package is installed):
vendor/bin/composer-unused
```
3. **IDENTIFY** outdated packages:
```bash
# Example for Laravel:
composer outdated
# Example for Node.js:
npm outdated
```
4. **ANALYZE** bundle size impact (if frontend):
- MEASURE before/after bundle sizes
- IDENTIFY large dependencies
- SUGGEST optimization opportunities
##### 4.5 Standards Compliance (Discovered Coding Standards)
1. **LOAD** naming conventions from coding-standards.md
2. **LOAD** file organization patterns
3. **LOAD** error handling requirements
4. **LOAD** performance guidelines
5. **FRAMEWORK-SPECIFIC** compliance checks:
- **React**: Component structure, hooks rules, prop-types/TypeScript
- **Vue**: Composition API patterns, template conventions, ref naming
- **Laravel**: Eloquent usage, Blade conventions, Livewire patterns, route naming
- **Django**: Model/View/Template patterns, DRF conventions, ORM best practices
- **Express**: Middleware patterns, route organization, error handling
6. **VALIDATE** against standards:
- CHECK naming conventions (files, functions, variables)
- VERIFY file organization matches project structure
- ENSURE error handling follows patterns
- CONFIRM performance guidelines met
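One hedged example of such a compliance check: flag PHP class files under `app/` whose names are not StudlyCase, assuming the default Laravel convention applies:
```bash
# Report any PHP class file under app/ whose filename is not StudlyCase.
find app -type f -name '*.php' \
  | grep -vE '/[A-Z][A-Za-z0-9]*\.php$' \
  || echo "All class file names follow StudlyCase"
```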
##### 4.6 Accessibility Checks (If UI Changes)
1. **DETECT** if story includes UI changes
2. IF UI changes present:
- **CHECK** for ARIA labels per coding-standards.md
- **VERIFY** keyboard navigation support
- **TEST** color contrast ratios (WCAG AA/AAA)
- **VALIDATE** semantic HTML usage
- **CHECK** screen reader compatibility
3. **FRAMEWORK-SPECIFIC** accessibility:
- React: jsx-a11y rules, focus management
- Vue: Template accessibility, v-focus directive
- Laravel Livewire: wire:loading states, wire:target accessibility
#### Phase 5: Report Generation
1. **COMPILE** all check results
2. **GENERATE** review report:
```
📊 CODE REVIEW REPORT
════════════════════════════════════════════════
Story: [STORY-ID] - [Title]
Stack: [Discovered Framework/Language/Tools]
Reviewed: [Timestamp]
✅ PASSED CHECKS:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
[Linting]
✓ Laravel Pint: All files formatted (X files checked)
✓ No style violations found
[Testing]
✓ Pest tests: XX/XX passed
✓ Unit coverage: XX% (target: YY% from standards)
✓ Feature coverage: XX% (target: YY% from standards)
✓ Browser coverage: XX% (target: YY% from standards)
[Security]
✓ Composer audit: No vulnerabilities
✓ No exposed secrets detected
✓ CSRF protection implemented
[Dependencies]
✓ No unused dependencies
✓ All packages up to date
[Standards]
✓ Naming conventions followed
✓ File organization matches project structure
✓ Error handling implemented
✓ Performance guidelines met
[Accessibility] (if UI)
✓ ARIA labels present
✓ Keyboard navigation functional
✓ Color contrast: WCAG AA compliant
⚠️ WARNINGS:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
[Dependencies]
⚠ Package "X" has minor update available (current: 1.2.3, latest: 1.2.4)
⚠ Bundle size increased by XKB (+Y%)
[Performance]
⚠ Method X complexity is high (cyclomatic complexity: N)
❌ FAILURES:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
[Testing]
✗ Coverage below threshold: 75% (target: 80%)
✗ Missing tests for: ErrorHandlingService.handleTimeout()
[Security]
✗ High severity vulnerability in package "Y" (CVE-2024-XXXX)
📈 METRICS:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Test Coverage: XX% (target: YY% from standards)
Code Quality Score: X/10 (using discovered metrics)
Bundle Size Impact: +XKB (+Y%)
Performance Score: X/100 (using discovered tools)
Complexity Score: X (average cyclomatic complexity)
🔧 SUGGESTED IMPROVEMENTS:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
[Laravel-specific]
→ Consider eager loading to prevent N+1 queries in TaskController
→ Extract complex query logic to repository pattern
→ Add database indexes for frequently queried columns
[Performance]
→ Cache expensive computations in method X
→ Lazy load heavy components
[Testing]
→ Add browser test for error scenario in feature Y
→ Increase coverage for edge cases in service Z
```
#### Phase 6: Story File Updates
1. **UPDATE** Implementation Checklist based on review:
- `[x]` Feature implementation - IF core functionality complete
- `[x]` Unit tests - IF tests pass AND coverage meets standards
- `[x]` Integration tests - IF integration tests pass
- `[x]` Error handling - IF error scenarios properly handled
- `[x]` Loading states - IF UI loading states implemented
- `[x]` Performance optimization - IF performance requirements met
- `[x]` Accessibility - IF accessibility standards met
- `[x]` Security review - IF security checks pass
2. **ADD** to Progress Log:
```markdown
- [Today]: Moved to review stage
- [Today]: Executed quality gates - [PASSED/FAILED]
* Linting: [status]
* Testing: [status] - [XX]% coverage
* Security: [status]
* Standards: [status]
```
3. **RECORD** review results:
- Which tools were used
- Which standards were applied
- Coverage percentages achieved
- Issues found and resolution status
4. **ONLY** mark items `[x]` if they truly pass review criteria
- BE STRICT with validation
- PARTIAL completion = NOT checked
- MUST meet coding-standards.md thresholds
#### Phase 7: Next Actions
1. **DETERMINE** review outcome:
- IF all critical checks PASS → Ready for `/sdd:story-qa`
- IF any failures → Requires `/sdd:story-refactor`
- IF documentation needed → Run `/sdd:story-document`
2. **DISPLAY** next steps:
```
💡 NEXT STEPS:
════════════════════════════════════════════════
[IF PASSED:]
✅ All quality gates passed
1. /sdd:story-qa [story-id] # Move to QA and run test suite
[IF FAILED:]
⚠️ X critical issues must be fixed
1. /sdd:story-refactor [story-id] # Return to development
2. Fix identified issues:
- [Issue 1]
- [Issue 2]
3. /sdd:story-review [story-id] # Re-run review after fixes
[IF WARNINGS:]
⚠️ X warnings (non-blocking)
1. /sdd:story-qa [story-id] # Proceed to QA (warnings won't block)
2. Consider addressing warnings in future iteration
[AVAILABLE COMMANDS:]
- /sdd:story-document [story-id] # Add/update documentation
- /sdd:story-status [story-id] # View detailed story status
```
3. **SHOW** relevant debugging commands for discovered stack:
```bash
# Laravel:
vendor/bin/pint --test # Check formatting without fixing
vendor/bin/pest --testsuite=Unit         # Run a specific test suite
composer audit # Re-run security scan
# Node.js:
npm run lint:fix # Auto-fix linting issues
npm test -- --coverage # Run tests with coverage
npm audit fix # Fix security vulnerabilities
```
### OUTPUTS
- `/docs/stories/review/[story-id].md` - Updated story file with review results
- Code review report (displayed to user)
- Updated Implementation Checklist with validation status
- Progress log entry with review timestamp
### RULES
- MUST load project context before running any checks
- MUST use discovered tools from technical-stack.md
- MUST validate against coding-standards.md thresholds
- MUST move story file from development to review
- SHOULD run all applicable quality gates for stack
- SHOULD capture detailed metrics for analysis
- NEVER mark checklist items complete unless they truly pass
- NEVER skip security checks
- ALWAYS update progress log with review results
- ALWAYS provide actionable feedback for failures
## Examples
### Example 1: Review with All Checks Passing
```bash
INPUT:
/sdd:story-review STORY-2025-001
OUTPUT:
→ Loading project context...
✓ technical-stack.md: Laravel + Pest + Playwright
✓ coding-standards.md: 80% coverage target, Laravel Pint
✓ development-process.md: Review stage requirements
→ Moving story to review...
✓ Moved STORY-2025-001.md to /docs/stories/review/
→ Running quality gates...
[Linting] Running Laravel Pint...
✓ 15 files formatted, 0 violations
[Testing] Running Pest...
✓ Unit tests: 45/45 passed
✓ Feature tests: 23/23 passed
✓ Browser tests: 8/8 passed
✓ Coverage: 87% (target: 80%)
[Security] Running Composer audit...
✓ No vulnerabilities found
✓ No exposed secrets
[Standards] Checking Laravel conventions...
✓ Eloquent usage follows best practices
✓ Livewire patterns correct
✓ Blade conventions followed
📊 CODE REVIEW REPORT
════════════════════════════════════════════════
Story: STORY-2025-001 - User Authentication System
Stack: Laravel 12, Livewire 3, Pest 4, Playwright
Reviewed: 2025-10-01 14:23:45
✅ ALL CHECKS PASSED
📈 METRICS:
Test Coverage: 87% (target: 80%)
Code Quality Score: 9.2/10
Bundle Size Impact: +12KB (+3%)
Performance Score: 94/100
💡 NEXT STEPS:
════════════════════════════════════════════════
✅ All quality gates passed
1. /sdd:story-qa STORY-2025-001 # Move to QA and run test suite
```
### Example 2: Review with Failures
```bash
INPUT:
/sdd:story-review
OUTPUT:
→ Detecting story from current branch...
✓ Current branch: feature/STORY-2025-003-dark-mode
✓ Story ID: STORY-2025-003
→ Loading project context...
✓ technical-stack.md: Laravel + Pest + Playwright
✓ coding-standards.md: 80% coverage target
→ Moving story to review...
✓ Moved STORY-2025-003.md to /docs/stories/review/
→ Running quality gates...
[Testing] Running Pest...
✓ Unit tests: 12/12 passed
✗ Feature tests: 4/5 passed (1 failed)
✓ Browser tests: 3/3 passed
✗ Coverage: 68% (target: 80%)
[Security] Running Composer audit...
✗ 1 high severity vulnerability found
📊 CODE REVIEW REPORT
════════════════════════════════════════════════
Story: STORY-2025-003 - Dark Mode Toggle
Stack: Laravel 12, Livewire 3, Pest 4
Reviewed: 2025-10-01 15:45:12
❌ FAILURES:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
[Testing]
✗ Coverage below threshold: 68% (target: 80%)
✗ Missing tests for: DarkModeService.applyTheme()
✗ Feature test failed: tests/Feature/DarkModeTest.php
[Security]
✗ High severity vulnerability in package "laravel/framework"
CVE-2024-12345 - Update to v12.1.5
💡 NEXT STEPS:
════════════════════════════════════════════════
⚠️ 3 critical issues must be fixed
1. /sdd:story-refactor STORY-2025-003 # Return to development
2. Fix identified issues:
- Add tests for DarkModeService.applyTheme()
- Fix failing feature test
- Update Laravel framework to v12.1.5
3. /sdd:story-review STORY-2025-003 # Re-run review after fixes
[DEBUGGING COMMANDS:]
vendor/bin/pest --filter=DarkMode # Run specific test
composer update laravel/framework # Update vulnerable package
```
### Example 3: Re-review Already in Review
```bash
INPUT:
/sdd:story-review STORY-2025-002
OUTPUT:
→ Loading project context...
✓ technical-stack.md loaded
→ Validating story location...
⚠️ Story STORY-2025-002 already in review stage
Running re-review with updated checks
→ Running quality gates...
[All checks execute...]
📊 CODE REVIEW REPORT (RE-REVIEW)
════════════════════════════════════════════════
Story: STORY-2025-002 - Payment Integration
Stack: Laravel 12, Stripe SDK
Reviewed: 2025-10-01 16:12:33 (2nd review)
✅ ALL CHECKS PASSED
```
## Edge Cases
### No Project Context
- DETECT missing `/docs/project-context/` directory
- SUGGEST running `/sdd:project-init`
- OFFER to run basic checks without discovered standards
- WARN that review will be incomplete
### Story Not in Development
- CHECK if story in `/docs/stories/review/`
- IF found: ASK user if they want to re-review
- IF in `/docs/stories/qa/`: ERROR and suggest `/sdd:story-refactor` first
- IF in `/docs/stories/completed/`: ERROR "Story already shipped"
### Missing Test Framework
- DETECT if testing tool not installed
- PROVIDE installation instructions for discovered stack
- SKIP test checks with warning
- MARK review as INCOMPLETE
### Security Vulnerabilities Found
- BLOCK progression to QA if HIGH/CRITICAL severity
- ALLOW progression if LOW/MEDIUM with warning
- PROVIDE update/fix commands
- LOG all vulnerabilities in review report
### Coverage Below Threshold
- CALCULATE gap to target (e.g., 68% vs 80% = 12% gap)
- IDENTIFY specific untested files/methods
- SUGGEST test cases to add
- BLOCK progression if below threshold
### Tool Execution Failures
- CATCH and log tool errors
- CONTINUE with remaining checks
- MARK affected section as INCOMPLETE
- SUGGEST manual verification
## Error Handling
- **Missing /docs/project-context/**: Suggest `/sdd:project-init`, offer basic review
- **Story file not found**: Check all directories, provide helpful guidance
- **Tool not installed**: Provide installation commands for stack
- **Permission errors**: Report specific file/directory access issue
- **Git errors**: Validate git state, suggest resolution
- **Test failures**: Capture full output, suggest debugging steps
## Performance Considerations
- Run linting/formatting in parallel with security scans
- Cache project context for session (don't re-read every time)
- Stream test output in real-time (don't wait for completion)
- Limit coverage analysis to changed files when possible
## Related Commands
- `/sdd:story-refactor [id]` - Return to development to fix issues
- `/sdd:story-qa [id]` - Proceed to QA after passing review
- `/sdd:story-document [id]` - Add documentation before QA
- `/sdd:story-status [id]` - Check current story state
- `/sdd:project-context` - Update project standards
## Constraints
- ✅ MUST load project context before any checks
- ✅ MUST move story file to review directory
- ✅ MUST run all applicable quality gates
- ✅ MUST validate against coding-standards.md
- ⚠️ NEVER skip security checks
- ⚠️ NEVER mark checklist items complete without validation
- 📋 SHOULD provide actionable feedback for all failures
- 🔧 SHOULD suggest framework-specific improvements
- 💾 MUST update progress log with timestamp
- 🚫 BLOCK QA progression if critical checks fail

commands/story-rollback.md Normal file

@@ -0,0 +1,879 @@
# /sdd:story-rollback
## Meta
- Version: 2.0
- Category: workflow
- Complexity: comprehensive
- Purpose: Critical rollback procedure for failed deployments or production issues
## Definition
**Purpose**: Execute comprehensive rollback procedure for a deployed story experiencing critical issues in production. Revert code changes, database migrations, configuration, and restore system stability.
**Syntax**: `/sdd:story-rollback <story_id> [--severity=critical|high|medium] [--rollback-type=full|code|database|config]`
## Parameters
| Parameter | Type | Required | Default | Description | Validation |
|-----------|------|----------|---------|-------------|------------|
| story_id | string | Yes | - | Story identifier (e.g., "STORY-2025-001") | Must match pattern STORY-\d{4}-\d{3} |
| --severity | enum | No | high | Issue severity level | critical, high, medium, low |
| --rollback-type | enum | No | full | Type of rollback to perform | full, code, database, config, partial |
## INSTRUCTION: Execute Critical Rollback
### INPUTS
- story_id: Story identifier (usually in /docs/stories/completed/ or /docs/stories/qa/)
- Issue severity and scope
- Rollback plan from story file
- Project context from /docs/project-context/
### PROCESS
#### Phase 1: Story Location and Context
1. **LOCATE** story file (see the sketch at the end of this phase):
- SEARCH `/docs/stories/completed/[story-id].md` first
- IF NOT FOUND: CHECK `/docs/stories/qa/[story-id].md`
- IF NOT FOUND: CHECK `/docs/stories/review/[story-id].md`
- IF NOT FOUND: CHECK `/docs/stories/development/[story-id].md`
- IF STORY NOT FOUND:
- EXIT with error message
- SUGGEST checking story ID
2. **READ** story file and extract:
- Rollback plan section (if documented)
- Deployment version/tag
- Database migrations applied
- Configuration changes made
- Dependencies and integrations affected
- Technical changes summary
3. **IDENTIFY** deployment details:
- GET current git tag/commit
- GET previous stable tag/commit
- IDENTIFY files changed
- NOTE database migrations run
- LIST configuration changes
4. **DISPLAY** context:
```
📋 ROLLBACK CONTEXT
═══════════════════
Story: [STORY-ID] - [Title]
Current Location: /docs/stories/[directory]/
Deployed Version: [version]
Previous Version: [previous-version]
Deployment Time: [timestamp]
Time Since Deploy: [duration]
Changes Made:
- Code: [X] files changed
- Database: [Y] migrations applied
- Config: [Z] changes
- Dependencies: [list]
```
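A minimal sketch of the lookup order referenced in step 1, assuming the standard `/docs/stories/` layout; the story ID is illustrative:
```bash
# Search the stage directories in the documented order.
STORY_ID="STORY-2025-003"   # illustrative id
for stage in completed qa review development; do
  FILE="docs/stories/$stage/$STORY_ID.md"
  if [ -f "$FILE" ]; then
    echo "Found story in stage: $stage ($FILE)"
    break
  fi
done
[ -f "$FILE" ] || { echo "Error: $STORY_ID not found - check the story ID"; exit 1; }
```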
#### Phase 2: Situation Assessment
1. **PROMPT** user for incident details (if not provided):
- What is the issue?
- How many users are affected?
- What features are broken?
- Is there data corruption risk?
- What is the business impact?
2. **ASSESS** severity (use --severity if provided):
- **CRITICAL**: Data loss, security breach, complete outage
- **HIGH**: Major features broken, many users affected
- **MEDIUM**: Some features degraded, limited user impact
- **LOW**: Minor issues, cosmetic problems
3. **DETERMINE** rollback strategy:
- **FULL ROLLBACK**: Revert all changes (code + database + config)
- **CODE ONLY**: Revert code, keep database changes
- **DATABASE ONLY**: Rollback migrations, keep code
- **CONFIG ONLY**: Revert configuration changes
- **PARTIAL**: Selective rollback of specific changes
- **HOTFIX**: Fix forward instead of rolling back
4. **GENERATE** assessment report:
```
🚨 ROLLBACK ASSESSMENT
══════════════════════
Severity: [CRITICAL/HIGH/MEDIUM/LOW]
IMPACT:
- Users affected: [estimate or percentage]
- Features broken: [list of broken features]
- Data corruption risk: [YES/NO - details]
- Revenue impact: [description if applicable]
- SLA breach: [YES/NO]
ROOT CAUSE:
- [Identified or suspected issue]
- [Contributing factors]
ROLLBACK OPTIONS:
1. ✅ Full rollback to v[previous] (RECOMMENDED)
- Reverts all changes
- Restores known stable state
- Requires database rollback
- ETA: [X] minutes
2. Code-only rollback
- Keeps database changes
- Faster rollback
- May cause compatibility issues
- ETA: [Y] minutes
3. Hotfix forward
- Fix specific issue
- No rollback needed
- Takes longer to implement
- ETA: [Z] minutes
4. Partial rollback
- Revert specific changes
- Keep working features
- Complex to execute
- ETA: [W] minutes
RECOMMENDATION: [Strategy based on severity and impact]
```
5. **CONFIRM** rollback decision:
- DISPLAY assessment
- PROMPT user to confirm strategy
- WARN about consequences
- REQUIRE explicit confirmation for critical operations
#### Phase 3: Pre-Rollback Backup
1. **CREATE** safety backup (see the sketch at the end of this phase):
- BACKUP current database state
- SNAPSHOT current code state (git commit)
- SAVE current configuration
- ARCHIVE application logs
- RECORD current metrics
2. **DOCUMENT** rollback start:
- TIMESTAMP rollback initiation
- LOG user who initiated
- RECORD rollback strategy
- NOTE current application state
3. **NOTIFY** stakeholders (if configured):
- ALERT that rollback is starting
- PROVIDE expected downtime
- SHARE rollback progress channel
4. **DISPLAY** backup confirmation:
```
💾 PRE-ROLLBACK BACKUP
══════════════════════
✅ Database backed up: [location]
✅ Code state saved: [commit-hash]
✅ Configuration saved: [location]
✅ Logs archived: [location]
✅ Metrics captured: [timestamp]
Safe to proceed with rollback.
```
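A hedged sketch of the safety backup referenced in step 1, assuming a MySQL database, credentials available to `mysqldump`, and a git remote named `origin`; all paths and names are illustrative:
```bash
# Capture database, code, configuration, and logs before touching anything.
TS=$(date +%Y%m%d-%H%M%S)
BACKUP_DIR="storage/backups/rollback-$TS"   # illustrative location
mkdir -p "$BACKUP_DIR"
mysqldump --single-transaction "$DB_DATABASE" > "$BACKUP_DIR/db.sql"   # credentials assumed in ~/.my.cnf
git tag "pre-rollback-$TS" && git push origin "pre-rollback-$TS"       # marker for the current code state
cp .env "$BACKUP_DIR/env.backup"                                       # configuration snapshot
tar czf "$BACKUP_DIR/logs.tar.gz" storage/logs/                        # archive application logs
echo "Backup written to $BACKUP_DIR"
```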
#### Phase 4: Code Rollback
1. **VERIFY** current branch:
- CHECK on main branch
- PULL latest changes
- CONFIRM clean working directory
2. **IDENTIFY** rollback target:
- GET previous stable tag: `git describe --tags --abbrev=0 [current-tag]^`
- OR: USE previous commit from story history
- VERIFY target commit exists
3. **EXECUTE** code rollback:
- IF full rollback:
- REVERT merge commit: `git revert -m 1 [merge-commit]`
- IF selective rollback:
- REVERT specific commits
- PUSH revert to remote: `git push origin main`
4. **REMOVE** problematic release tag:
- DELETE local tag: `git tag -d [current-tag]`
- DELETE remote tag: `git push origin --delete [current-tag]`
5. **DISPLAY** code rollback status:
```
↩️ CODE ROLLBACK
════════════════
✅ Reverted to: v[previous-version]
✅ Revert commit: [commit-hash]
✅ Tag removed: [current-tag]
✅ Changes pushed to remote
Files reverted: [count]
```
#### Phase 5: Database Rollback
1. **IDENTIFY** migrations to rollback:
- GET migrations applied in story
- LIST from most recent to oldest
- CHECK for data loss risk
2. **WARN** about data loss:
- IF migrations drop columns/tables:
- DISPLAY data loss warning
- REQUIRE explicit confirmation
- SUGGEST data export if needed
3. **EXECUTE** database rollback:
- IF Laravel project:
- RUN: `php artisan migrate:rollback --step=[count]`
- IF Django project:
- RUN: `python manage.py migrate [app] [previous-migration]`
- IF Rails project:
- RUN: `rails db:rollback STEP=[count]`
- IF custom migrations:
- EXECUTE rollback scripts from story
4. **VERIFY** database state:
- CHECK migration status
- VALIDATE schema integrity
- TEST database connectivity
- VERIFY data integrity
5. **DISPLAY** database rollback status:
```
🗄️ DATABASE ROLLBACK
═══════════════════
✅ Migrations rolled back: [count]
✅ Schema restored to: [previous state]
✅ Data integrity: Verified
⚠️ Data loss: [description if any]
Migrations reversed:
- [migration-1]
- [migration-2]
- [migration-3]
```
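A quick verification sketch for step 4 above, assuming a Laravel project with the tinker package installed:
```bash
php artisan migrate:status                                   # reverted migrations should now show as not run
php artisan tinker --execute="DB::select('select 1');"       # basic database connectivity check
```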
#### Phase 6: Configuration Rollback
1. **IDENTIFY** configuration changes:
- ENV variables modified
- Config files changed
- Feature flags toggled
- API keys rotated
- Service endpoints updated
2. **REVERT** configuration:
- RESTORE previous ENV variables
- REVERT config files from git
- DISABLE feature flags
- RESTORE previous API credentials
- RESET service endpoints
3. **CLEAR** application caches:
- IF Laravel: `php artisan cache:clear && php artisan config:clear`
- IF Node.js: Clear Redis/Memcached
- IF Django: `python manage.py shell -c "from django.core.cache import cache; cache.clear()"` (no built-in cache-clear command)
- Clear CDN caches if applicable
4. **RESTART** application services:
- RESTART web servers
- RESTART queue workers
- RESTART cache services
- RESTART background jobs
5. **DISPLAY** configuration rollback status:
```
⚙️ CONFIGURATION ROLLBACK
════════════════════════
✅ ENV variables: Restored
✅ Config files: Reverted
✅ Feature flags: Disabled
✅ Caches: Cleared
✅ Services: Restarted
Changes reverted:
- [config-change-1]
- [config-change-2]
```
#### Phase 7: Deployment Rollback
1. **DETECT** deployment system:
- CHECK for deployment scripts
- IDENTIFY deployment platform
- READ `/docs/project-context/technical-stack.md`
2. **EXECUTE** deployment rollback:
- IF automated deployment:
- RUN deployment script with previous version
- MONITOR deployment progress
- IF manual deployment:
- PROVIDE rollback instructions
- CHECKLIST rollback steps
- WAIT for user confirmation
3. **VERIFY** deployment:
- CHECK application is running
- VERIFY correct version deployed
- VALIDATE services started
- CONFIRM endpoints responding
4. **DISPLAY** deployment status:
```
🚀 DEPLOYMENT ROLLBACK
══════════════════════
✅ Deployed: v[previous-version]
✅ Application: Running
✅ Services: Operational
✅ Endpoints: Responding
Deployment method: [method]
Rollback duration: [X] minutes
```
#### Phase 8: Verification and Validation
1. **RUN** smoke tests:
- TEST homepage loads
- VERIFY authentication works
- CHECK core features functional
- VALIDATE APIs responding
- TEST critical user paths
2. **CHECK** application health:
- VERIFY health endpoints
- CHECK error rates
- MONITOR response times
- VALIDATE resource usage
- CONFIRM database connectivity
3. **VERIFY** issue resolved:
- TEST specific issue that caused rollback
- CONFIRM users can access application
- CHECK reported errors are gone
- VALIDATE metrics are normal
4. **MONITOR** stability:
- WATCH for 10 minutes minimum
- CHECK for new errors
- MONITOR user activity
- TRACK key metrics
5. **DISPLAY** verification results:
```
✅ ROLLBACK VERIFICATION
════════════════════════
Smoke Tests: [X/Y] passed
Health Checks: All operational
Error Rates: Normal (< threshold)
Response Times: Normal
Resource Usage: Normal
Original Issue: ✅ RESOLVED
Application Status: ✅ STABLE
Safe to restore user access.
```
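A minimal smoke-test sketch for step 1 above, assuming the application exposes `/` and Laravel's default `/up` health route; the base URL and paths are illustrative:
```bash
BASE_URL="https://example.com"   # illustrative production URL
for path in / /up /login; do
  code=$(curl -s -o /dev/null -w "%{http_code}" "$BASE_URL$path")
  echo "$path -> HTTP $code"
  case "$code" in
    2*|3*) ;;                                               # acceptable response
    *) echo "Smoke test failed for $path (HTTP $code)"; exit 1 ;;
  esac
done
echo "Smoke tests passed"
```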
#### Phase 9: Post-Rollback Actions
1. **COMPLETE** post-rollback checklist:
```
📋 POST-ROLLBACK CHECKLIST
══════════════════════════
□ Production stable and verified
□ Users notified of restoration
□ Monitoring shows normal metrics
□ No data loss confirmed
□ Incident documented
□ Team notified
□ Stakeholders updated
```
2. **NOTIFY** users (if applicable):
- ANNOUNCE service restored
- APOLOGIZE for disruption
- PROVIDE incident summary
- SHARE preventive measures
3. **UPDATE** monitoring:
- RESET alerting thresholds
- RESUME normal monitoring
- WATCH for residual issues
- TRACK recovery metrics
#### Phase 10: Incident Documentation
1. **CREATE** incident report:
```
📊 INCIDENT REPORT
══════════════════
Story: [STORY-ID] - [Title]
Incident ID: INC-[YYYY-MM-DD]-[number]
TIMELINE:
- Deployed: [timestamp]
- Issue detected: [timestamp]
- Rollback started: [timestamp]
- Rollback completed: [timestamp]
- Service restored: [timestamp]
- Total duration: [X] minutes
WHAT HAPPENED:
[Detailed description of the issue that occurred]
IMPACT:
- Users affected: [estimate/percentage]
- Features broken: [list]
- Data loss: [YES/NO - details]
- Business impact: [description]
- Revenue impact: [if applicable]
- SLA impact: [if applicable]
ROOT CAUSE:
- Primary: [Technical cause]
- Contributing factors: [list]
- Detection: [How issue was found]
RESOLUTION:
- Action taken: [Rollback strategy used]
- Code: Reverted to v[previous]
- Database: [Migrations rolled back or kept]
- Configuration: [Changes reverted]
- Verification: [How stability confirmed]
LESSONS LEARNED:
- What worked well: [list]
- What didn't work: [list]
- Gaps identified: [list]
- Preventive measures: [list]
ACTION ITEMS:
- [ ] [Preventive measure 1]
- [ ] [Preventive measure 2]
- [ ] [Testing improvement 1]
- [ ] [Monitoring enhancement 1]
- [ ] [Process update 1]
FOLLOW-UP STORY:
Create fix story: /sdd:story-new [story-id-for-fix]
Link to incident: INC-[YYYY-MM-DD]-[number]
```
2. **ADD** incident to story file:
- APPEND incident report to story
- UPDATE lessons learned section
- NOTE what needs fixing
- MARK story as requiring fixes
#### Phase 11: Story Status Update
1. **DETERMINE** story destination:
- IF issue needs code fixes: Move to `/docs/stories/development/`
- IF issue needs testing: Move to `/docs/stories/qa/`
- IF minor tweaks needed: Keep in `/docs/stories/review/`
- IF investigation needed: Move to `/docs/stories/development/`
2. **ENSURE** target directory exists:
- CREATE directory if missing
- ADD `.gitkeep` if directory created
3. **MOVE** story file:
- FROM: Current location (usually `/docs/stories/completed/`)
- TO: Appropriate stage directory
- VERIFY move successful
4. **UPDATE** story file:
- CHANGE status to appropriate stage
- ADD rollback incident to progress log
- UPDATE lessons learned with incident findings
- CREATE action items for fixes
- NOTE what caused the rollback
5. **COMMIT** story move:
- ADD moved file to git
- COMMIT with message: "rollback: revert [story-id] due to [issue]"
- PUSH to repository
#### Phase 12: Fix Story Creation
1. **PROMPT** user to create fix story:
```
Do you want to create a fix story now? (y/n)
```
2. **IF** user confirms:
- GENERATE new story ID
- CREATE fix story file
- LINK to original story and incident
- INCLUDE incident details
- ADD root cause analysis
- SET high priority
- POPULATE with fix requirements
3. **DISPLAY** fix story details:
```
📝 FIX STORY CREATED
════════════════════
Story ID: [FIX-STORY-ID]
Title: Fix [Original Story] - [Issue Description]
Priority: HIGH
Location: /docs/stories/backlog/[fix-story-id].md
Linked to:
- Original: [STORY-ID]
- Incident: INC-[YYYY-MM-DD]-[number]
Next steps:
1. Review incident report
2. Investigate root cause
3. /sdd:story-start [fix-story-id]
4. Implement fix with additional testing
5. /sdd:story-ship [fix-story-id] (with caution)
```
#### Phase 13: Final Summary
1. **GENERATE** rollback summary:
```
✅ ROLLBACK COMPLETE
════════════════════
Story: [STORY-ID] - [Title]
ROLLBACK SUMMARY:
• Strategy: [Full/Partial/Code-only/etc.]
• Duration: [X] minutes
• Version: Reverted from v[current] to v[previous]
• Impact: [Users affected during rollback]
ACTIONS TAKEN:
✅ Code reverted to v[previous]
✅ Database rolled back ([X] migrations)
✅ Configuration restored
✅ Application redeployed
✅ Smoke tests passed
✅ Production stable
CURRENT STATE:
• Application: ✅ Running v[previous]
• Health: ✅ All systems operational
• Users: ✅ Full access restored
• Monitoring: ✅ Normal metrics
• Story: Moved to /docs/stories/[directory]/
INCIDENT REPORT:
Created: INC-[YYYY-MM-DD]-[number]
Location: [story-file-path]
FIX STORY:
Created: [FIX-STORY-ID] (if created)
Priority: HIGH
Location: /docs/stories/backlog/[fix-story-id].md
NEXT STEPS:
1. Continue monitoring for 24 hours
2. Review incident report with team
3. Implement action items
4. Start work on fix story: /sdd:story-start [fix-story-id]
5. Add additional testing to prevent recurrence
6. Update rollback procedures if needed
POST-MORTEM:
Schedule incident review meeting within 48 hours
to discuss root cause and preventive measures.
```
### OUTPUTS
- Reverted git commits on main branch
- Deleted problematic release tag
- Rolled back database migrations (if applicable)
- Restored configuration files
- Moved story file to appropriate stage
- Incident report in story file
- Fix story (if created)
- Clean, stable production environment
### RULES
- MUST locate story file before proceeding
- MUST assess severity and impact
- MUST create pre-rollback backup
- MUST confirm rollback strategy with user
- MUST revert code changes
- MUST rollback database if needed (with data loss warning)
- MUST restore configuration
- MUST verify application stability after rollback
- MUST complete post-rollback checklist
- MUST document incident comprehensively
- MUST update story status and location
- SHOULD create fix story for follow-up
- NEVER execute without confirmation for critical operations
- ALWAYS verify rollback success
- MUST notify stakeholders when configured
## Examples
### Example 1: Critical Full Rollback
```bash
INPUT:
/sdd:story-rollback STORY-2025-003 --severity=critical
PROCESS:
→ Locating story STORY-2025-003...
→ Found in /docs/stories/completed/STORY-2025-003.md
→ Loading rollback plan...
What issue are you experiencing?
> Payment processing completely broken, users cannot checkout
How many users are affected?
> All users attempting to purchase (estimated 100+ in last 10 mins)
→ Assessing situation...
OUTPUT:
🚨 ROLLBACK ASSESSMENT
══════════════════════
Severity: CRITICAL
IMPACT:
- Users affected: All users (100+ in 10 minutes)
- Features broken: Payment processing, checkout flow
- Data corruption risk: NO
- Revenue impact: Blocking all sales
- SLA breach: YES
ROOT CAUSE:
- Payment gateway integration broken
- API credential mismatch suspected
ROLLBACK OPTIONS:
1. ✅ Full rollback to v1.2.0 (RECOMMENDED)
- Reverts all payment changes
- Restores working checkout
- Requires 2 migration rollbacks
- ETA: 5 minutes
RECOMMENDATION: Full rollback (critical revenue impact)
Proceed with full rollback? (yes/no)
> yes
→ Creating pre-rollback backup...
💾 PRE-ROLLBACK BACKUP
✅ Database backed up
✅ Code state saved
✅ Configuration saved
✅ Logs archived
→ Executing code rollback...
↩️ CODE ROLLBACK
✅ Reverted to: v1.2.0
✅ Revert commit: def789
✅ Tag removed: v1.3.0
→ Executing database rollback...
🗄️ DATABASE ROLLBACK
✅ Migrations rolled back: 2
✅ Schema restored
→ Restoring configuration...
⚙️ CONFIGURATION ROLLBACK
✅ ENV variables restored
✅ Caches cleared
✅ Services restarted
→ Deploying previous version...
🚀 DEPLOYMENT ROLLBACK
✅ Deployed: v1.2.0
✅ Application: Running
→ Verifying rollback...
✅ ROLLBACK VERIFICATION
Smoke Tests: 12/12 passed
Original Issue: ✅ RESOLVED
Application Status: ✅ STABLE
→ Creating incident report...
→ Moving story to /docs/stories/development/
✅ ROLLBACK COMPLETE
════════════════════
Story: STORY-2025-003 - Payment Integration
ROLLBACK SUMMARY:
• Strategy: Full rollback
• Duration: 6 minutes
• Version: Reverted from v1.3.0 to v1.2.0
• Impact: Brief interruption during rollback
ACTIONS TAKEN:
✅ Code reverted to v1.2.0
✅ Database rolled back (2 migrations)
✅ Configuration restored
✅ Payment service restored
CURRENT STATE:
• Application: ✅ Running v1.2.0
• Checkout: ✅ Working normally
• Users: ✅ Can complete purchases
• Story: Moved to /docs/stories/development/
INCIDENT REPORT:
Created: INC-2025-03-16-001
Location: /docs/stories/development/STORY-2025-003.md
NEXT STEPS:
1. Monitor payment processing
2. Investigate API credential issue
3. Add payment integration tests
4. Implement with better validation
5. /sdd:story-start STORY-2025-003 when ready
```
### Example 2: Code-Only Rollback
```bash
INPUT:
/sdd:story-rollback STORY-2025-004 --rollback-type=code
PROCESS:
→ Locating story...
→ Found in /docs/stories/completed/STORY-2025-004.md
What issue are you experiencing?
> UI rendering broken on mobile devices
→ Assessing situation...
→ Severity: HIGH
→ Rollback type: Code only (database changes OK)
→ Executing code rollback...
→ Keeping database migrations
→ Reverting frontend changes...
OUTPUT:
✅ ROLLBACK COMPLETE
════════════════════
Story: STORY-2025-004 - Mobile UI Update
ROLLBACK SUMMARY:
• Strategy: Code-only rollback
• Duration: 3 minutes
• Database: No changes (migrations kept)
ACTIONS TAKEN:
✅ Code reverted
✅ Frontend assets rebuilt
✅ Application redeployed
✅ Mobile UI: Restored to previous version
Database migrations were not rolled back
as they are compatible with previous code.
```
### Example 3: Story Not Found
```bash
INPUT:
/sdd:story-rollback STORY-2025-999
PROCESS:
→ Searching for STORY-2025-999...
→ Not found in /docs/stories/completed/
→ Not found in /docs/stories/qa/
→ Not found in /docs/stories/review/
→ Not found in /docs/stories/development/
OUTPUT:
❌ STORY NOT FOUND
══════════════════
Story ID: STORY-2025-999
The story file was not found in any directory:
- /docs/stories/completed/
- /docs/stories/qa/
- /docs/stories/review/
- /docs/stories/development/
- /docs/stories/backlog/
Please verify the story ID and try again.
To see all stories: /sdd:project-status
```
## Edge Cases
### Database Data Loss Risk
- DETECT migrations that drop columns/tables
- CALCULATE potential data loss
- WARN user with specific details
- REQUIRE explicit confirmation
- OFFER to export data before rollback
- LOG data loss for incident report
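One possible heuristic for the detection step, assuming Laravel-style migrations under `database/migrations/` (adapt the patterns to the project's actual migration tool):
```bash
# Flag migrations that contain destructive schema operations before rolling back.
if grep -rnEi 'dropColumn|dropTable|DROP (TABLE|COLUMN)' database/migrations/; then
  echo "⚠️ Destructive migration detected - confirm acceptable data loss before rollback"
fi
```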
### Partial Rollback Complexity
- IDENTIFY dependencies between changes
- ASSESS compatibility of partial rollback
- WARN about potential issues
- SUGGEST full rollback if too complex
- PROVIDE option to proceed with caution
### No Rollback Plan Documented
- WARN that rollback plan missing
- USE default rollback strategy
- GENERATE rollback steps from git history
- PROCEED with extra caution
- SUGGEST documenting rollback plans for future
### Rollback Verification Failure
- DETECT continued issues after rollback
- ASSESS if rollback successful but different issue
- OFFER to rollback further (older version)
- SUGGEST investigating root cause
- PROVIDE emergency contact information
### Multiple Stories Since Deployment
- DETECT other stories deployed after target
- WARN about reverting multiple changes
- LIST all stories that will be affected
- REQUIRE explicit confirmation
- SUGGEST selective rollback instead
## Error Handling
- **Story ID missing**: Return "Error: Story ID required. Usage: /sdd:story-rollback <story_id>"
- **Invalid story ID format**: Return "Error: Invalid story ID format. Expected: STORY-YYYY-NNN"
- **Story not found**: Search all directories and report not found
- **Rollback failure**: Capture error, provide manual rollback steps, alert for help
- **Database rollback error**: Stop rollback, restore from backup, seek manual intervention
- **Deployment failure**: Attempt re-deployment, provide manual steps, escalate if needed
- **Verification failure**: Alert that issue persists, suggest further rollback or investigation
## Performance Considerations
- Execute rollback steps in parallel when safe
- Stream rollback output in real-time
- Monitor application health continuously during rollback
- Generate incident report asynchronously after rollback
## Related Commands
- `/sdd:story-ship` - Ship story (the opposite of rollback)
- `/sdd:story-qa` - Return story to QA for fixes
- `/sdd:story-new` - Create fix story for addressing issues
- `/sdd:project-status` - View all project stories
## Constraints
- ✅ MUST locate story file before proceeding
- ✅ MUST assess severity and impact
- ✅ MUST create pre-rollback backup
- ✅ MUST confirm rollback strategy
- 🔄 MUST revert code changes
- 🗄️ MUST rollback database with caution
- ⚙️ MUST restore configuration
- ✔️ MUST verify application stability
- 📋 MUST complete post-rollback checklist
- 📊 MUST document incident
- 📝 SHOULD create fix story
- 🚫 NEVER execute without confirmation for critical operations
- ⚠️ ALWAYS warn about data loss
- 📣 MUST notify stakeholders

614
commands/story-save.md Normal file
View File

@@ -0,0 +1,614 @@
# /sdd:story-save
## Meta
- Version: 2.0
- Category: workflow
- Complexity: comprehensive
- Purpose: Commit current work with properly formatted commit message and story file update
## Definition
**Purpose**: Save current progress by creating a properly formatted git commit with automatic commit type detection, story context integration, and story file progress logging.
**Syntax**: `/sdd:story-save [message]`
## Parameters
| Parameter | Type | Required | Default | Description | Validation |
|-----------|------|----------|---------|-------------|------------|
| message | string | No | auto-generated | Custom commit message or description | Max 500 chars |
## INSTRUCTION: Save Story Progress
### INPUTS
- Current git working directory changes
- Story ID from branch name or active story
- Story file from `/docs/stories/development/`, `/docs/stories/review/`, or `/docs/stories/qa/`
- Optional: User-provided commit message
- Project context from `/docs/project-context/` (optional for enhanced commit messages)
### PROCESS
#### Phase 1: Git Status Analysis
1. **CHECK** git repository status:
- RUN: `git status --porcelain`
- IF no changes exist:
* SHOW: "✅ Working tree clean - nothing to commit"
* SUGGEST: Continue working or use /sdd:story-review
* EXIT gracefully
2. **CATEGORIZE** changes:
- MODIFIED files: List files with 'M' status
- UNTRACKED files: List files with '??' status
- DELETED files: List files with 'D' status
- RENAMED files: List files with 'R' status
3. **ANALYZE** file sizes:
- CHECK for large files (> 5MB)
- WARN about potentially sensitive files (.env, credentials.json, etc.)
- FLAG binary files that might bloat repository
4. **DISPLAY** changes summary:
```
📝 CHANGES TO COMMIT
════════════════════════════════════
Modified: [count] files
- [file1]
- [file2]
Untracked: [count] files
- [file3]
- [file4]
[If warnings exist:]
⚠️ WARNINGS:
- Large file detected: [file] ([size]MB)
- Potential secret: [file]
```
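A minimal shell sketch of the Phase 1 scan (the 5 MB threshold and the secret-file patterns are illustrative defaults, not part of the spec):
```bash
git status --porcelain | while read -r status path; do
  # Categorize the change by its porcelain status code.
  case "$status" in
    M)    echo "modified:  $path" ;;
    D)    echo "deleted:   $path" ;;
    R*)   echo "renamed:   $path" ;;
    "??") echo "untracked: $path" ;;
  esac

  # Size and secret checks only apply to files that still exist on disk.
  if [ -f "$path" ]; then
    size=$(wc -c < "$path")
    [ "$size" -gt $((5 * 1024 * 1024)) ] && echo "  ⚠️ large file ($((size / 1024 / 1024)) MB)"
    case "$path" in
      *.env|*credentials*|*secret*) echo "  ⚠️ potential secret" ;;
    esac
  fi
done
```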
#### Phase 2: Story Context Discovery
1. **IDENTIFY** current story:
- ATTEMPT 1: Extract from current git branch name
* PATTERN: feature/[story-id]-[description]
* EXAMPLE: feature/STORY-AUTH-001-login-form → STORY-AUTH-001
- ATTEMPT 2: Find most recently modified story in /docs/stories/development/
- ATTEMPT 3: Check /docs/stories/review/ and /docs/stories/qa/
- IF no story found: PROCEED without story context
2. **READ** story file (if found):
- EXTRACT story title
- EXTRACT current status
- EXTRACT branch name
- EXTRACT last progress log entry
- EXTRACT implementation checklist status
3. **VALIDATE** story alignment:
- IF story branch doesn't match current branch:
* WARN: "Current branch doesn't match story branch"
* ASK: Continue with commit anyway? (y/n)
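A sketch of the story lookup in this phase, assuming the `feature/[story-id]-[description]` branch pattern and the `/docs/stories/` layout described above:
```bash
branch=$(git rev-parse --abbrev-ref HEAD)

if [[ "$branch" =~ ^feature/(STORY-[A-Z0-9]+-[0-9]+) ]]; then
  # ATTEMPT 1: story ID embedded in the branch name.
  story_id="${BASH_REMATCH[1]}"
else
  # ATTEMPT 2/3: most recently modified story in the active stage directories.
  story_file=$(ls -t docs/stories/{development,review,qa}/*.md 2>/dev/null | head -n 1)
  story_id=$(basename "${story_file%.md}")
fi

echo "Story context: ${story_id:-none found}"
```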
#### Phase 3: Commit Type Detection
1. **ANALYZE** changed files to determine commit type:
**Detection Rules:**
- `feat`: New feature files, new components, new functionality
* New files in app/, src/, lib/
* New Livewire components, React components, Vue components
* New controllers, models, services
- `fix`: Bug fixes, error corrections
* Modifications to fix issues
* Changes to error handling
* Corrections to logic
- `refactor`: Code restructuring without behavior change
* File moves, renames
* Code organization changes
* Performance improvements without new features
- `test`: Test additions or modifications
* New or modified files in tests/, __tests__/
* Test files (.test.js, .spec.js, Test.php)
- `docs`: Documentation only
* Changes to .md files only
* README updates
* Comment updates only
- `style`: Formatting, whitespace, linting
* Style files only (.css, .scss, .sass)
* Formatting changes (after running Pint, Prettier)
- `perf`: Performance improvements
* Optimization changes
* Database query improvements
* Caching additions
- `chore`: Maintenance, dependencies, configuration
* package.json, composer.json updates
* Config file changes
* Build script updates
2. **SELECT** primary commit type:
- IF multiple types apply: SELECT most significant
- PRIORITY ORDER: feat > fix > refactor > test > perf > docs > style > chore
3. **DETERMINE** scope from changes:
- IF story exists: USE story context (e.g., "auth", "profile", "cart")
- ELSE: USE directory/module name
- EXAMPLES: "auth", "api", "ui", "database", "tests"
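A heuristic sketch of the detection and priority selection above (the path patterns are illustrative; a real implementation would also inspect diff content, for example to tell `feat` from `fix`):
```bash
detect_type() {
  # Classify a single changed path into a conventional commit type.
  case "$1" in
    tests/*|__tests__/*|*.test.*|*.spec.*|*Test.php) echo test ;;
    *.md)                                            echo docs ;;
    *.css|*.scss|*.sass)                             echo style ;;
    package.json|composer.json|*.lock|*.yml|*.yaml)  echo chore ;;
    app/*|src/*|lib/*)                               echo feat ;;
    *)                                               echo chore ;;
  esac
}

changed=$(git status --porcelain | awk '{print $NF}')
types=$(for f in $changed; do detect_type "$f"; done | sort -u)

# Pick the highest-priority type that was detected.
for candidate in feat fix refactor test perf docs style chore; do
  if printf '%s\n' "$types" | grep -qx "$candidate"; then
    commit_type="$candidate"
    break
  fi
done
echo "Detected commit type: ${commit_type:-chore}"
```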
#### Phase 4: Commit Message Generation
1. **IF** user provided message:
- USE provided message as description
- FORMAT: `[type]([scope]): [user message]`
- EXAMPLE: "feat(auth): add two-factor authentication"
2. **ELSE** auto-generate message:
- ANALYZE changes to create descriptive message
- USE story title if available
- INCLUDE key changes summary
- FORMAT: `[type]([scope]): [auto-generated description]`
3. **CREATE** full commit message:
```
[type]([scope]): [description]
[If story exists:]
Story: [STORY-ID] - [Story Title]
Changes:
- [Change 1]
- [Change 2]
- [Change 3]
[If applicable:]
Files: [count] modified, [count] added, [count] deleted
```
4. **VALIDATE** commit message format:
- TYPE must be valid conventional commit type
- SCOPE should be lowercase, hyphen-separated
- DESCRIPTION should be lowercase, imperative mood
- LENGTH should be under 72 chars for first line
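A sketch of the format validation, using an example header value:
```bash
header="feat(auth): add two-factor authentication"
re='^(feat|fix|refactor|test|docs|style|perf|chore)\([a-z0-9-]+\): [a-z].+$'

[[ "$header" =~ $re ]] || echo "✖ header is not type(scope): description" >&2
(( ${#header} <= 72 )) || echo "✖ first line exceeds 72 characters" >&2
```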
#### Phase 5: Story File Update
1. **IF** story file exists:
- **ADD** progress log entry:
```markdown
- [YYYY-MM-DD HH:MM]: [commit description]
* Files: [list key files]
* Type: [commit type]
* [Additional context if significant changes]
```
2. **UPDATE** checklist items (if applicable):
- DETECT completed work from commit type
- IF commit type is "test": Mark "Unit tests" or "Integration tests" as complete
- IF commit type is "feat": Check if feature implementation complete
- IF commit type is "docs": Mark "Documentation" as complete
3. **NOTE** implementation decisions (if significant):
- ADD to Technical Notes section if architectural changes
- DOCUMENT trade-offs or important decisions
- REFERENCE commit hash (will be added after commit)
#### Phase 6: Staging and Commit
1. **STAGE** changes:
- IF story file was updated: INCLUDE story file in commit
- ADD all relevant modified files
- ADD all relevant untracked files
- EXCLUDE files from .gitignore
- SKIP large files or sensitive files (with warning)
2. **CREATE** commit:
- RUN: `git add [files]`
- RUN: `git commit -m "[commit message]"`
- CAPTURE commit hash
- CAPTURE commit timestamp
3. **UPDATE** story file with commit hash:
- ADD commit hash to latest progress log entry
- FORMAT: `Commit: [hash]`
4. **VERIFY** commit succeeded:
- RUN: `git log -1 --oneline`
- CONFIRM commit appears in history
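A minimal sketch of the staging, commit, and hash capture (the file list and messages are placeholders):
```bash
git add app/ tests/ docs/stories/development/STORY-AUTH-001.md
git commit -m "feat(auth): implement login form with validation" \
           -m "Story: STORY-AUTH-001 - Implement Login Form"

commit_hash=$(git rev-parse --short HEAD)
git log -1 --oneline                        # verify the commit appears in history
echo "Commit: ${commit_hash}"               # value appended to the story progress log entry
```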
#### Phase 7: Commit Summary Display
1. **DISPLAY** comprehensive commit summary:
```
✅ CHANGES COMMITTED
════════════════════════════════════
Commit: [hash]
Type: [type]
Scope: [scope]
Message: [description]
[If story exists:]
Story: [STORY-ID] - [Story Title]
Files Changed: [count]
📄 Modified: [count]
- [file1] ([+X/-Y lines])
- [file2] ([+X/-Y lines])
Added: [count]
- [file3] ([+X lines])
Deleted: [count]
- [file4] ([X lines])
Statistics:
Lines Added: [count]
Lines Removed: [count]
💡 NEXT STEPS:
1. /sdd:story-implement [story-id] - Continue development
2. /sdd:story-review - Move to code review when ready
3. git push - Push to remote when ready to share
```
### OUTPUTS
- Git commit with formatted conventional commit message
- Updated story file with progress log entry
- Commit statistics and file change summary
- Next action suggestions based on story status
### RULES
- MUST check for uncommitted changes before proceeding
- MUST determine appropriate commit type from changes
- MUST create properly formatted conventional commit message
- MUST update story file before committing (include in same commit)
- MUST warn about large files or potential secrets
- SHOULD auto-detect story context from branch name
- SHOULD provide meaningful auto-generated messages
- SHOULD include story context in commit body
- MUST NOT commit files that likely contain secrets
- MUST NOT proceed if no changes exist
- NEVER force push or amend commits without confirmation
## Examples
### Example 1: Feature Implementation Commit
```bash
INPUT:
/sdd:story-save
PROCESS:
→ Checking git status...
→ Found 5 modified files, 2 new files
→ Detecting story context...
→ Current branch: feature/STORY-AUTH-001-login-form
→ Story: STORY-AUTH-001 - Implement Login Form
→ Analyzing changes...
→ Detected commit type: feat
→ Generating commit message...
→ Updating story file...
→ Staging changes...
→ Creating commit...
OUTPUT:
✅ CHANGES COMMITTED
════════════════════════════════════
Commit: abc1234
Type: feat
Scope: auth
Message: implement login form with validation
Story: STORY-AUTH-001 - Implement Login Form
Files Changed: 7
📄 Modified: 5
- app/Livewire/Auth/LoginForm.php (+145/-0 lines)
- resources/views/livewire/auth/login-form.blade.php (+67/-0 lines)
- routes/web.php (+5/-0 lines)
- tests/Feature/Auth/LoginTest.php (+89/-0 lines)
- docs/stories/development/STORY-AUTH-001.md (+8/-1 lines)
Added: 2
- app/Http/Controllers/Auth/LoginController.php (+52 lines)
- tests/Browser/Auth/LoginFormTest.php (+43 lines)
Statistics:
Lines Added: 409
Lines Removed: 1
💡 NEXT STEPS:
1. /sdd:story-implement STORY-AUTH-001 - Continue development
2. /sdd:story-review - Move to code review when ready
3. git push - Push to remote when ready to share
```
### Example 2: Custom Commit Message
```bash
INPUT:
/sdd:story-save "add rate limiting to login endpoint"
PROCESS:
→ Checking git status...
→ Found 2 modified files
→ Using custom message: "add rate limiting to login endpoint"
→ Detecting story context...
→ Story: STORY-AUTH-001 - Implement Login Form
→ Analyzing changes...
→ Detected commit type: feat
→ Generating commit message...
→ Updating story file...
→ Creating commit...
OUTPUT:
✅ CHANGES COMMITTED
════════════════════════════════════
Commit: def5678
Type: feat
Scope: auth
Message: add rate limiting to login endpoint
Story: STORY-AUTH-001 - Implement Login Form
Changes:
- Added rate limiting middleware (5 attempts per minute)
- Updated login controller to use rate limiter
- Added rate limit exceeded error message
Files Changed: 2
📄 Modified: 2
- app/Http/Middleware/RateLimitLogin.php (+28/-0 lines)
- app/Livewire/Auth/LoginForm.php (+12/-3 lines)
Statistics:
Lines Added: 40
Lines Removed: 3
💡 NEXT STEPS:
1. /sdd:story-implement STORY-AUTH-001 - Continue development
2. /sdd:story-review - Move to code review when ready
3. git push - Push to remote when ready to share
```
### Example 3: Test Addition Commit
```bash
INPUT:
/sdd:story-save
PROCESS:
→ Checking git status...
→ Found 3 new files (all tests)
→ Detecting story context...
→ Story: STORY-PROFILE-002 - User Profile Settings
→ Analyzing changes...
→ Detected commit type: test
→ Generating commit message...
→ Updating story file (marking test checklist items complete)...
→ Creating commit...
OUTPUT:
✅ CHANGES COMMITTED
════════════════════════════════════
Commit: ghi9012
Type: test
Scope: profile
Message: add comprehensive unit and browser tests
Story: STORY-PROFILE-002 - User Profile Settings
Files Changed: 4
Added: 3
- tests/Unit/ProfileSettingsTest.php (+76 lines)
- tests/Feature/ProfileUpdateTest.php (+92 lines)
- tests/Browser/ProfileSettingsTest.php (+58 lines)
📄 Modified: 1
- docs/stories/development/STORY-PROFILE-002.md (+3/-3 lines)
* Marked "Unit tests" as complete
* Marked "Browser tests" as complete
Statistics:
Lines Added: 229
Lines Removed: 3
💡 NEXT STEPS:
1. /sdd:story-implement STORY-PROFILE-002 - Continue development
2. /sdd:story-review - Move to code review when ready
3. git push - Push to remote when ready to share
```
### Example 4: No Changes to Commit
```bash
INPUT:
/sdd:story-save
PROCESS:
→ Checking git status...
→ No uncommitted changes found
OUTPUT:
✅ WORKING TREE CLEAN
════════════════════════════════════
No changes to commit.
Current Status:
Branch: feature/STORY-AUTH-001-login-form
Story: STORY-AUTH-001 - Implement Login Form
Last Commit: abc1234 (2 hours ago)
💡 NEXT STEPS:
1. /sdd:story-implement STORY-AUTH-001 - Continue implementation
2. /sdd:story-review - Move to code review if complete
3. /sdd:story-continue - Resume work on story
```
### Example 5: Warning About Large Files
```bash
INPUT:
/sdd:story-save
PROCESS:
→ Checking git status...
→ Found 3 modified files, 1 large file
→ Warning: Large file detected
OUTPUT:
⚠️ LARGE FILE DETECTED
════════════════════════════════════
Found large file that may bloat repository:
- public/videos/demo.mp4 (12.5 MB)
Changes to commit:
📄 Modified: 2
- app/Livewire/VideoPlayer.php
- resources/views/livewire/video-player.blade.php
Added: 1
- public/videos/demo.mp4 (12.5 MB) ⚠️
Recommendation:
Large files should be stored externally (S3, CDN) or
use Git LFS for version control.
Continue with commit? [y/n]
> n
Commit cancelled.
💡 SUGGESTIONS:
1. Move large files to external storage
2. Add to .gitignore if not needed in repository
3. Use Git LFS for large binary files
4. /sdd:story-save (retry after removing large files)
```
### Example 6: Fix Commit with Auto-Detection
```bash
INPUT:
/sdd:story-save
PROCESS:
→ Checking git status...
→ Found 2 modified files
→ Detecting story context...
→ Story: STORY-CART-003 - Shopping Cart Checkout
→ Analyzing changes...
→ Detected commit type: fix (error handling changes detected)
→ Generating commit message...
→ Creating commit...
OUTPUT:
✅ CHANGES COMMITTED
════════════════════════════════════
Commit: jkl3456
Type: fix
Scope: cart
Message: fix cart total calculation rounding error
Story: STORY-CART-003 - Shopping Cart Checkout
Changes:
- Fixed rounding error in cart total calculation
- Changed to use Decimal for currency calculations
- Updated tests to verify correct rounding
Files Changed: 2
📄 Modified: 2
- app/Services/CartService.php (+8/-4 lines)
- tests/Unit/CartServiceTest.php (+15/-2 lines)
Statistics:
Lines Added: 23
Lines Removed: 6
💡 NEXT STEPS:
1. /sdd:story-implement STORY-CART-003 - Continue development
2. /sdd:story-review - Move to code review when ready
3. git push - Push to remote when ready to share
```
## Edge Cases
### No Story Context Available
```
IF no story can be determined from branch or files:
- PROCEED with commit using generic scope
- USE directory name or "app" as scope
- SKIP story file update
- WARN: "No story context found - commit without story reference"
```
### Multiple Stories Detected
```
IF branch name doesn't match active story:
- WARN: "Branch name suggests [STORY-A] but active story is [STORY-B]"
- ASK: Which story should this commit be associated with?
- USE selected story for commit message and file update
```
### Untracked Story File
```
IF story file exists but is untracked:
- INCLUDE story file in commit
- NOTE: "Adding story file to repository"
- PROCEED with normal commit flow
```
### Commit Message Too Long
```
IF generated message exceeds 72 characters:
- TRUNCATE first line to 72 chars
- MOVE details to commit body
- ENSURE proper formatting
```
### Detached HEAD State
```
IF in detached HEAD state:
- WARN: "Currently in detached HEAD state"
- SHOW current commit
- SUGGEST: Create branch or checkout existing branch
- OFFER: Continue commit anyway? (y/n)
```
### Merge Conflicts Present
```
IF merge conflicts detected:
- HALT: "Cannot commit with unresolved merge conflicts"
- LIST conflicted files
- SUGGEST: Resolve conflicts first using git mergetool
- EXIT with error
```
## Error Handling
- **Not in git repository**: Return "Error: Not in a git repository. Run 'git init' first"
- **No changes to commit**: Show "Working tree clean" and exit gracefully
- **Git command fails**: Show git error and suggest manual resolution
- **Story file read error**: Warn and proceed without story context
- **Story file write error**: Show error but continue with commit (story update optional)
- **Large file detected**: Warn and ask for confirmation before proceeding
- **Sensitive file detected**: Warn strongly and require explicit confirmation
## Performance Considerations
- Use `git status --porcelain` for fast, parseable output
- Read only necessary parts of story file (don't parse everything)
- Cache story context within command execution
- Run git commands in sequence (they're fast enough)
- Skip expensive diff calculations for very large commits
- Use `git diff --stat` instead of full diff for summary
## Related Commands
- `/sdd:story-implement` - Generate implementation before saving
- `/sdd:story-continue` - Resume work before saving
- `/sdd:story-review` - Move to review after saving
- `/sdd:story-start` - Begin development before implementation
- `/sdd:project-status` - View all stories and their status
## Constraints
- ✅ MUST check for uncommitted changes
- ✅ MUST generate proper conventional commit message
- ✅ MUST update story file before committing
- ✅ MUST include story file in commit
- ⚠️ NEVER commit secrets or sensitive files without warning
- ⚠️ NEVER commit large files without confirmation
- 📋 SHOULD auto-detect commit type from changes
- 💡 SHOULD provide meaningful commit messages
- 🔧 SHOULD include story context in commit
- 💾 MUST verify commit succeeded before reporting success

790
commands/story-ship.md Normal file
View File

@@ -0,0 +1,790 @@
# /sdd:story-ship
## Meta
- Version: 2.0
- Category: workflow
- Complexity: comprehensive
- Purpose: Ship validated story to production with deployment, validation, and cleanup
## Definition
**Purpose**: Deploy a QA-validated story by merging to the main branch, creating a release, deploying to the production environment, performing post-deployment validation, and archiving the story.
**Syntax**: `/sdd:story-ship <story_id> [--skip-tests] [--dry-run]`
## Parameters
| Parameter | Type | Required | Default | Description | Validation |
|-----------|------|----------|---------|-------------|------------|
| story_id | string | Yes | - | Story identifier (e.g., "STORY-2025-001") | Must match pattern STORY-\d{4}-\d{3} |
| --skip-tests | flag | No | false | Skip running tests on merged code (not recommended) | Boolean flag |
| --dry-run | flag | No | false | Simulate deployment without executing | Boolean flag |
## INSTRUCTION: Ship Story to Production
### INPUTS
- story_id: Story identifier from /docs/stories/qa/
- Story file with QA validation data
- Git repository with feature branch
- Project context from /docs/project-context/
### PROCESS
#### Phase 1: Pre-Flight Checks
1. **VERIFY** story location:
- CHECK story is in `/docs/stories/qa/` directory
- IF NOT in qa:
- CHECK `/docs/stories/review/` - suggest running `/sdd:story-qa` first
- CHECK `/docs/stories/development/` - suggest completing review and QA
- EXIT with appropriate guidance
2. **VALIDATE** story readiness:
- READ story file and verify:
* ALL Success Criteria marked [x]
* ALL Implementation Checklist items marked [x]
* ALL QA Checklist items marked [x] (or marked N/A)
* QA validation section completed
* Test results documented
* Performance benchmarks met
- IF any required items unchecked:
- DISPLAY incomplete items
- OFFER to mark as complete if user confirms
- EXIT if critical items missing
3. **CHECK** git status:
- VERIFY on feature branch
- ENSURE all changes committed
- CHECK branch is up to date with remote
- IF uncommitted changes exist:
- DISPLAY uncommitted files
- OFFER to commit with auto-generated message
- EXIT if user declines
4. **RUN** pre-merge tests (unless --skip-tests):
- Execute test suite on feature branch
- VERIFY all tests pass
- IF tests fail:
- DISPLAY failed tests
- SUGGEST fixing issues before shipping
- EXIT and keep story in QA
5. **DISPLAY** pre-flight summary:
```
✈️ PRE-FLIGHT CHECK
═══════════════════
Story: [STORY-ID] - [Title]
Branch: [branch-name]
Status: Ready for deployment
✅ Story in QA directory
✅ All checklists complete
✅ All changes committed
✅ Tests passing ([count] tests)
✅ QA validation complete
Ready to ship to production.
```
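A sketch of three of the pre-flight gates: the story must live under `docs/stories/qa/`, it must contain no unchecked checklist items, and the working tree must be clean (the story ID is a placeholder):
```bash
story_id="STORY-2025-001"
story_file="docs/stories/qa/${story_id}.md"

if [ ! -f "$story_file" ]; then
  echo "✖ ${story_id} is not in docs/stories/qa/ - run /sdd:story-qa first" >&2
  exit 1
fi

if grep -q '^[[:space:]]*- \[ \]' "$story_file"; then
  echo "✖ unchecked checklist items remain:" >&2
  grep -n '^[[:space:]]*- \[ \]' "$story_file" >&2
  exit 1
fi

if [ -n "$(git status --porcelain)" ]; then
  echo "✖ uncommitted changes present - commit or stash before shipping" >&2
  exit 1
fi
```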
#### Phase 2: Branch Merge
1. **LOAD** project context:
- READ `/docs/project-context/development-process.md` for merge strategy
- IDENTIFY main branch name (main/master)
- CHECK for branch protection rules
2. **SWITCH** to main branch:
- RUN: `git checkout main` (or master)
- PULL latest changes: `git pull origin main`
- VERIFY clean state
3. **MERGE** feature branch:
- ATTEMPT merge: `git merge --no-ff [branch-name]`
- IF conflicts detected:
- DISPLAY conflicting files
- PROVIDE merge conflict resolution guide
- OFFER interactive conflict resolution
- VERIFY resolution with user
- IF merge successful:
- SHOW merge commit details
- NOTE files changed and lines added/removed
4. **RUN** tests on merged code (unless --skip-tests):
- Execute full test suite on main branch
- VERIFY all tests still pass
- IF tests fail after merge:
- DISPLAY failed tests
- OFFER to abort merge
- SUGGEST investigating merge conflicts
- EXIT if user chooses to abort
5. **DISPLAY** merge summary:
```
🔀 MERGE COMPLETE
═════════════════
Merged: [branch-name] → main
Commit: [commit-hash]
Files changed: [count]
Tests: [count] passing
```
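A minimal sketch of the merge and re-test, assuming `main` as the base branch; the feature branch name and test runner are placeholders for values read from the project context:
```bash
git checkout main
git pull origin main

if ! git merge --no-ff feature/auth-001-login-form \
       -m "merge: STORY-2025-001 user authentication"; then
  echo "✖ merge conflicts - resolve them, commit, then re-run the tests" >&2
  exit 1
fi

if ! vendor/bin/pest; then                  # or npm test, pytest, etc.
  echo "✖ tests failed on merged code - 'git reset --hard ORIG_HEAD' aborts the merge" >&2
  exit 1
fi
```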
#### Phase 3: Release Creation
1. **DETERMINE** version strategy:
- CHECK for existing version file (package.json, composer.json, etc.)
- IF versioning used:
- READ current version
- SUGGEST next version (semantic versioning)
- PROMPT user for version number
- IF no versioning:
- USE story ID as release identifier
- CREATE date-based version: v[YYYY.MM.DD]
2. **GENERATE** changelog entry:
- EXTRACT from story file:
* Story title and description
* Success criteria achieved
* Technical changes made
* Known issues or limitations
- FORMAT as changelog entry with version
3. **CREATE** git tag:
- IF versioning used:
- CREATE annotated tag: `git tag -a v[version] -m "[Story title]"`
- IF no versioning:
- CREATE annotated tag: `git tag -a [story-id] -m "[Story title]"`
- PUSH tag to remote: `git push origin --tags`
4. **UPDATE** version files (if applicable):
- UPDATE package.json, composer.json, etc.
- COMMIT version bump
- PUSH to remote
5. **DISPLAY** release summary:
```
📦 RELEASE CREATED
══════════════════
Version: [version]
Tag: v[version]
Story: [STORY-ID]
Date: [YYYY-MM-DD]
Changelog entry created
Version files updated
```
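A sketch of the tagging step (the version, message, and date-based fallback are examples):
```bash
version="v1.1.0"                            # or, without semantic versioning: v$(date +%Y.%m.%d)
git tag -a "$version" -m "STORY-2025-001: User Authentication System"
git push origin --tags
```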
#### Phase 4: Production Deployment
1. **DETECT** deployment configuration:
- CHECK for deployment scripts in project
- COMMON locations:
* `scripts/deploy.sh`
* `.github/workflows/deploy.yml`
* `composer deploy` / `npm run deploy`
* `deployer.phar`
- READ `/docs/project-context/technical-stack.md` for deployment method
2. **EXECUTE** deployment (unless --dry-run):
- IF automated deployment configured:
- RUN deployment command
- STREAM output to user
- TRACK deployment progress
- IF no automation:
- PROVIDE manual deployment instructions
- CHECKLIST deployment steps
- WAIT for user confirmation
3. **MONITOR** deployment:
- WATCH for deployment completion
- TRACK any errors or warnings
- LOG deployment output
- IF deployment fails:
- CAPTURE error details
- SUGGEST rollback
- EXIT to Phase 9 (Rollback Handling)
4. **DISPLAY** deployment status:
```
🚀 DEPLOYING TO PRODUCTION
═══════════════════════════
Environment: production
Version: [version]
Method: [deployment-method]
[Real-time deployment output...]
✅ Deployment successful
```
#### Phase 5: Post-Deployment Validation
1. **RUN** smoke tests:
- LOAD test configuration from project context
- EXECUTE critical path tests:
* Homepage loads
* Authentication works
* Core features functional
* APIs responding
- IF smoke tests fail:
- CAPTURE test failures
- SUGGEST immediate rollback
- EXIT to Phase 9 (Rollback Handling)
2. **CHECK** application health:
- VERIFY application is running
- CHECK health endpoints (if available)
- VALIDATE database connectivity
- CONFIRM cache is functioning
- TEST key integrations
3. **MONITOR** initial metrics:
- CHECK error rates (first 5 minutes)
- VERIFY response times
- WATCH for exceptions/crashes
- MONITOR resource usage
- IF metrics anomalous:
- ALERT user to issues
- SUGGEST monitoring plan
- OFFER rollback option
4. **VALIDATE** story-specific functionality:
- TEST features from Success Criteria
- VERIFY changes are live
- CHECK user-facing improvements
- VALIDATE data integrity
- TEST critical user paths
5. **DISPLAY** validation results:
```
✅ POST-DEPLOYMENT VALIDATION
═════════════════════════════
Smoke Tests: [X/Y] passed
Health Checks: All systems operational
Metrics: Within normal ranges
Story Features: Validated and live
Application healthy and ready for users.
```
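A sketch of a post-deployment health probe (the URL, interval, and attempt count are assumptions; substitute the project's real health endpoint):
```bash
url="https://example.com/up"
for i in $(seq 1 10); do
  if ! curl -fsS --max-time 5 "$url" > /dev/null; then
    echo "✖ health check failed (attempt $i) - consider /sdd:story-rollback" >&2
    exit 1
  fi
  sleep 30                                  # ten checks, 30 s apart ≈ 5 minutes of monitoring
done
echo "✅ health endpoint stable for 5 minutes"
```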
#### Phase 6: Story Completion
1. **VERIFY** all checklists one final time:
- CHECK all Success Criteria marked [x]
- CHECK all Implementation items marked [x]
- CHECK all QA items marked [x]
- IF any unchecked:
- MARK as complete with timestamp
- NOTE completion in progress log
2. **UPDATE** story file:
- SET status to "complete"
- ADD completion date: today's date
- ADD deployment information:
* Deployed version
* Deployment timestamp
* Production environment
- ADD progress log entry: "Shipped to production - [timestamp]"
- RECORD final metrics:
* Total development time
* Total commits
* Final test coverage
3. **ENSURE** completed directory exists:
- CREATE `/docs/stories/completed/` if missing
- ADD `.gitkeep` if directory created
4. **MOVE** story file:
- FROM: `/docs/stories/qa/[story-id].md`
- TO: `/docs/stories/completed/[story-id].md`
- VERIFY move successful
5. **COMMIT** story completion:
- ADD moved file to git
- COMMIT with message: "chore: ship [story-id] to production"
- PUSH to main branch
#### Phase 7: Release Notes Generation
1. **COMPILE** release notes:
- EXTRACT from story file:
* What's New (user-facing changes)
* Technical Changes (developer-facing)
* Bug Fixes (if applicable)
* Known Issues or Limitations
* Upgrade Instructions (if needed)
2. **FORMAT** release notes:
```
📦 RELEASE NOTES
════════════════
Version: [version]
Date: [YYYY-MM-DD]
Story: [STORY-ID] - [Title]
WHAT'S NEW:
- [User-facing feature 1]
- [User-facing feature 2]
- [User-facing improvement 3]
TECHNICAL CHANGES:
- [Implementation detail 1]
- [API change 2]
- [Database migration 3]
- [Configuration change 4]
BUG FIXES:
- [Bug fix 1]
- [Bug fix 2]
KNOWN ISSUES:
- [Limitation 1]
- [Known issue 2]
UPGRADE NOTES:
- [Special instruction 1]
- [Migration step 2]
ROLLBACK PLAN:
See story file for detailed rollback procedure.
```
3. **PUBLISH** release notes:
- ADD to `CHANGELOG.md` (if exists)
- CREATE GitHub release (if using GitHub)
- UPDATE documentation site (if applicable)
- NOTIFY team/stakeholders (if configured)
#### Phase 8: Documentation and Cleanup
1. **UPDATE** documentation:
- ADD features to README (if user-facing)
- UPDATE API documentation (if API changes)
- REFRESH user guides (if workflows changed)
- UPDATE architecture docs (if structure changed)
2. **CLEAN UP** branches:
- DELETE local feature branch: `git branch -d [branch-name]`
- DELETE remote feature branch: `git push origin --delete [branch-name]`
- VERIFY branches deleted
- KEEP main branch clean
3. **ARCHIVE** temporary files:
- REMOVE build artifacts
- CLEAN up test recordings (unless needed)
- COMPRESS large logs
- REMOVE temporary screenshots
4. **VERIFY** repository state:
- CHECK git status is clean
- ENSURE on main branch
- VERIFY all changes pushed
- CONFIRM no uncommitted files
5. **UPDATE** project tracking:
- MARK story complete in project board (if applicable)
- UPDATE story count in metrics
- RECORD deployment in tracking system
- NOTIFY relevant stakeholders
#### Phase 9: Success Summary (or Rollback Handling)
1. **IF** deployment successful:
- **GENERATE** success summary:
```
🚀 SUCCESSFULLY SHIPPED!
════════════════════════
Story: [STORY-ID] - [Title]
DEPLOYMENT:
• Environment: production
• Version: [version]
• Deployed: [timestamp]
• Duration: [development time]
VALIDATION:
• Smoke tests: ✅ Passed
• Health checks: ✅ Operational
• Metrics: ✅ Normal
• Features: ✅ Live
MONITORING:
• Application logs: [link or command]
• Error tracking: [link or command]
• Performance dashboard: [link]
ROLLBACK PLAN:
Available in story file at:
/docs/stories/completed/[story-id].md
NEXT STEPS:
1. Monitor application for 24 hours
2. Watch for user feedback and issues
3. Review metrics and performance
4. Run /sdd:story-complete [story-id] to archive with learnings
5. Celebrate the successful deployment! 🎉
SUGGESTED MONITORING PERIOD:
• First hour: Active monitoring
• First 24 hours: Regular checks
• First week: Periodic validation
```
2. **IF** deployment failed or validation failed:
- **CAPTURE** failure details
- **DISPLAY** error information
- **SUGGEST** rollback:
```
❌ DEPLOYMENT FAILED
════════════════════
Story: [STORY-ID] - [Title]
FAILURE DETAILS:
• Phase: [deployment/validation]
• Error: [error-message]
• Timestamp: [timestamp]
CURRENT STATE:
• Code: Merged to main
• Deployment: Failed or unstable
• Story: Kept in QA directory
RECOMMENDED ACTION:
/sdd:story-rollback [story-id]
This will:
1. Revert the merge commit
2. Remove the release tag
3. Rollback deployment (if possible)
4. Move story back to appropriate stage
Do you want to rollback now? (y/n)
```
- **IF** user confirms rollback:
- EXECUTE `/sdd:story-rollback` command
- EXIT with rollback results
- **IF** user declines:
- KEEP story in QA
- LOG incident in story file
- EXIT with manual resolution guidance
### OUTPUTS
- `/docs/stories/completed/[story-id].md` - Completed story with deployment data
- Git merge commit on main branch
- Git release tag (v[version] or [story-id])
- Updated CHANGELOG.md (if exists)
- Release notes (displayed and optionally published)
- Deleted feature branch (local and remote)
- Clean repository state
### RULES
- MUST verify story is in `/docs/stories/qa/` before proceeding
- MUST validate all checklists complete (or prompt user)
- MUST run tests on merged code (unless --skip-tests)
- MUST create release tag for traceability
- MUST perform post-deployment validation
- MUST move story to `/docs/stories/completed/` on success
- MUST cleanup feature branches after successful merge
- SHOULD generate comprehensive release notes
- SHOULD update relevant documentation
- NEVER force push to main branch
- ALWAYS provide rollback option on failure
- MUST commit story move with descriptive message
- MUST push all changes to remote repository
## Examples
### Example 1: Successful Deployment
```bash
INPUT:
/sdd:story-ship STORY-2025-001
PROCESS:
→ Pre-flight checks...
→ Story: STORY-2025-001 in /docs/stories/qa/
→ Validating checklists... ✅
→ Checking git status... ✅
→ Running tests... ✅ (156 tests passed)
→ Switching to main branch...
→ Merging feature/auth-001-login-form...
→ Merge successful - 12 files changed
→ Running tests on merged code... ✅
→ Creating release...
→ Version: v1.1.0
→ Tag created: v1.1.0
→ Changelog updated
→ Deploying to production...
→ Running: composer deploy
→ [deployment output...]
→ Deployment successful ✅
→ Post-deployment validation...
→ Smoke tests: 8/8 passed ✅
→ Health checks: All operational ✅
→ Metrics: Normal ✅
→ Completing story...
→ Moving to /docs/stories/completed/STORY-2025-001.md
→ Cleaning up feature branch...
→ Generating release notes...
OUTPUT:
🚀 SUCCESSFULLY SHIPPED!
════════════════════════
Story: STORY-2025-001 - User Authentication System
DEPLOYMENT:
• Environment: production
• Version: v1.1.0
• Deployed: 2025-03-15 14:32:18 UTC
• Duration: 9 working days
VALIDATION:
• Smoke tests: ✅ Passed (8/8)
• Health checks: ✅ Operational
• Metrics: ✅ Normal
• Features: ✅ Live
MONITORING:
• Application logs: php artisan pail
• Error tracking: Check Sentry dashboard
• Performance: Watch response times
ROLLBACK PLAN:
Available in story file at:
/docs/stories/completed/STORY-2025-001.md
NEXT STEPS:
1. Monitor application for 24 hours
2. Watch for user feedback and issues
3. Review metrics and performance
4. Run /sdd:story-complete STORY-2025-001 to archive
5. Celebrate the successful deployment! 🎉
📦 RELEASE NOTES
════════════════
Version: v1.1.0
Date: 2025-03-15
Story: STORY-2025-001 - User Authentication System
WHAT'S NEW:
- User registration with email verification
- Secure login with session management
- Password reset functionality
- Remember me option for convenience
TECHNICAL CHANGES:
- Added AuthMiddleware for route protection
- Implemented JWT token handling
- Created authentication test suite
- Added password hashing with bcrypt
ROLLBACK PLAN:
See /docs/stories/completed/STORY-2025-001.md
```
### Example 2: Deployment with Manual Steps
```bash
INPUT:
/sdd:story-ship STORY-2025-002
PROCESS:
→ Pre-flight checks... ✅
→ Merging to main... ✅
→ Creating release v1.2.0... ✅
→ Detecting deployment configuration...
→ No automated deployment found
OUTPUT:
📋 MANUAL DEPLOYMENT REQUIRED
══════════════════════════════
Story: STORY-2025-002 - Profile Management
Version: v1.2.0
Please complete the following deployment steps:
1. [ ] SSH into production server
2. [ ] Pull latest changes: git pull origin main
3. [ ] Install dependencies: composer install --no-dev
4. [ ] Run migrations: php artisan migrate --force
5. [ ] Clear caches: php artisan cache:clear
6. [ ] Restart services: sudo systemctl restart php-fpm
7. [ ] Verify application is running
Have you completed all deployment steps? (y/n)
> y
→ Running post-deployment validation...
→ Smoke tests: 10/10 passed ✅
→ Completing story...
[Success summary displayed]
```
### Example 3: Deployment Failure with Rollback
```bash
INPUT:
/sdd:story-ship STORY-2025-003
PROCESS:
→ Pre-flight checks... ✅
→ Merging to main... ✅
→ Creating release v1.3.0... ✅
→ Deploying to production...
→ Error: Database migration failed
OUTPUT:
❌ DEPLOYMENT FAILED
════════════════════
Story: STORY-2025-003 - Payment Integration
FAILURE DETAILS:
• Phase: deployment
• Error: Database migration "add_payment_columns" failed
• Timestamp: 2025-03-16 10:15:42 UTC
• Exit code: 1
ERROR MESSAGE:
SQLSTATE[42S21]: Column already exists: 1060 Duplicate column name 'payment_status'
CURRENT STATE:
• Code: Merged to main (commit abc123)
• Deployment: Failed during migrations
• Story: Kept in QA directory
RECOMMENDED ACTION:
/sdd:story-rollback STORY-2025-003
This will:
1. Revert the merge commit
2. Remove the release tag v1.3.0
3. Rollback deployment (revert to v1.2.0)
4. Move story back to QA for fixes
Do you want to rollback now? (y/n)
> y
→ Executing rollback...
[Rollback process initiated]
```
### Example 4: Dry Run
```bash
INPUT:
/sdd:story-ship STORY-2025-004 --dry-run
PROCESS:
→ DRY RUN MODE - No changes will be made
→ Pre-flight checks... ✅
→ Would merge feature/notifications to main
→ Would create release v1.4.0
→ Would deploy to production
→ Would run post-deployment validation
→ Would move story to completed
OUTPUT:
🔍 DRY RUN COMPLETE
═══════════════════
Story: STORY-2025-004 - Push Notifications
SIMULATED ACTIONS:
✓ Merge feature/notifications → main
✓ Create release tag v1.4.0
✓ Deploy to production
✓ Run smoke tests
✓ Move to /docs/stories/completed/
✓ Delete feature branch
ESTIMATED DURATION: ~5 minutes
No actual changes were made.
Run without --dry-run to execute deployment.
```
## Edge Cases
### Incomplete Checklists
- DETECT unchecked Success Criteria or Implementation items
- DISPLAY incomplete items with context
- OFFER to mark as complete if user confirms
- WARN about shipping with incomplete items
- EXIT if critical items missing (user decides what's critical)
### Merge Conflicts
- DETECT conflicts during merge
- DISPLAY conflicting files and conflict markers
- PROVIDE merge conflict resolution guide
- OFFER interactive conflict resolution
- VERIFY resolution with user before continuing
- RE-RUN tests after conflict resolution
### Failed Tests After Merge
- CAPTURE test failures on merged code
- DISPLAY failed test details
- SUGGEST investigating merge-related issues
- OFFER to abort merge and reset
- KEEP story in QA if merge aborted
- LOG incident in story progress log
### Deployment Timeout
- MONITOR deployment progress
- DETECT if deployment hangs or times out
- PROVIDE option to continue waiting or abort
- LOG timeout incident
- SUGGEST checking deployment logs manually
- OFFER rollback option
### Failed Smoke Tests
- CAPTURE smoke test failures
- DISPLAY which tests failed and why
- ASSESS severity of failures
- OFFER immediate rollback for critical failures
- ALLOW user to investigate for non-critical failures
- LOG post-deployment issues in story file
### No Version Management
- DETECT absence of version files
- USE story ID as release identifier
- CREATE date-based version as alternative
- SUGGEST implementing semantic versioning
- CONTINUE with story-based releases
## Error Handling
- **Story ID missing**: Return "Error: Story ID required. Usage: /sdd:story-ship <story_id>"
- **Invalid story ID format**: Return "Error: Invalid story ID format. Expected: STORY-YYYY-NNN"
- **Story not in QA**: Report current location and suggest appropriate next step
- **Uncommitted changes**: Display files and offer to commit or exit
- **Test failures**: Display failures, offer to fix or abort
- **Merge conflicts**: Provide resolution guide and interactive help
- **Deployment failure**: Capture details, suggest rollback, log incident
- **Validation failure**: Assess severity, offer rollback for critical issues
- **Git errors**: Display error, suggest manual resolution, provide recovery steps
## Performance Considerations
- Run tests in parallel when possible
- Stream deployment output in real-time
- Cache git operations during session
- Perform post-deployment validation concurrently
- Generate release notes asynchronously
- Cleanup branches in background after success
## Related Commands
- `/sdd:story-qa` - Move story to QA before shipping
- `/sdd:story-rollback` - Rollback failed deployment
- `/sdd:story-complete` - Archive story after successful deployment
- `/sdd:story-validate` - Run final validation before shipping
- `/sdd:project-status` - View all project stories
## Constraints
- ✅ MUST verify story in QA directory
- ✅ MUST validate checklists complete
- ✅ MUST run tests on merged code
- 🔀 MUST merge to main branch (no fast-forward)
- 🏷️ MUST create release tag
- 🚀 MUST deploy to production environment
- ✔️ MUST perform post-deployment validation
- 📝 MUST generate release notes
- 🧹 MUST cleanup feature branches
- 💾 MUST move story to completed on success
- 🚫 NEVER force push to main
- ↩️ ALWAYS provide rollback option on failure
- 📊 MUST update story with deployment data

383
commands/story-start.md Normal file
View File

@@ -0,0 +1,383 @@
# /sdd:story-start
## Meta
- Version: 2.0
- Category: workflow
- Complexity: comprehensive
- Purpose: Initialize story development with branch creation, context loading, and boilerplate generation
## Definition
**Purpose**: Start development on a specified story by creating a feature branch, loading project context, optionally generating boilerplate, and preparing the development environment.
**Syntax**: `/sdd:story-start <story_id> [--boilerplate]`
## Parameters
| Parameter | Type | Required | Default | Description | Validation |
|-----------|------|----------|---------|-------------|------------|
| story_id | string | Yes | - | Story identifier (e.g., "STORY-001") | Must match pattern STORY-\d{3,} |
| --boilerplate | flag | No | false | Generate initial boilerplate files | Boolean flag |
## Behavior
```
INSTRUCTION: Initialize story development environment with context-aware setup
INPUTS:
- story_id: Story identifier from /docs/stories/development/ or /docs/stories/backlog/
- --boilerplate: Optional flag to generate framework-specific starter files
PROCESS:
Phase 1: Project Context Loading
1. CHECK if /docs/project-context/ directory exists
2. IF missing:
- SUGGEST running /sdd:project-init first
- HALT execution
3. ELSE:
- LOAD /docs/project-context/technical-stack.md
- LOAD /docs/project-context/coding-standards.md
- LOAD /docs/project-context/development-process.md
4. PARSE technical stack to identify:
- ACTUAL frontend framework (React/Vue/Svelte/Laravel Blade/etc.)
- ACTUAL backend framework and runtime
- ACTUAL testing framework and tools
- ACTUAL build tools and package manager
- ACTUAL database system
Phase 2: Story File Discovery
1. SEARCH for story file in order:
- CHECK /docs/stories/development/[story_id].md
- IF NOT FOUND: CHECK /docs/stories/backlog/[story_id].md
- IF NOT FOUND: OFFER to create with /sdd:story-new
2. READ story file and extract:
- Branch name (or generate from story ID)
- Success criteria
- Technical requirements
- Implementation approach
Phase 3: Git Branch Setup
1. CHECK if feature branch exists:
- RUN: git branch --list [branch-name]
2. IF branch exists:
- SWITCH to existing branch
- SHOW: "Switched to existing branch: [branch-name]"
3. ELSE:
- CREATE new branch from main/master
- CHECKOUT new branch
- SHOW: "Created and switched to: [branch-name]"
4. DISPLAY current branch status:
- Branch name
- Last commit
- Uncommitted changes (if any)
Phase 4: Boilerplate Generation (IF --boilerplate flag present)
1. IDENTIFY framework from technical-stack.md
2. GENERATE framework-specific files:
IF React:
- Component files (.jsx/.tsx)
- Hook files (use[Feature].js)
- Style files (CSS/SCSS/Tailwind)
- Test files (.test.jsx/.spec.tsx)
IF Vue:
- Component files (.vue)
- Composable files (use[Feature].js)
- Style files (scoped styles)
- Test files (.spec.js)
IF Laravel + Livewire:
- Livewire component classes (app/Livewire/)
- Blade view files (resources/views/livewire/)
- Migration files (database/migrations/)
- Test files (tests/Feature/, tests/Browser/)
IF Django:
- View files (views.py)
- Model files (models.py)
- Template files (templates/)
- Form files (forms.py)
- Test files (tests/)
IF Express:
- Route files (routes/)
- Controller files (controllers/)
- Middleware files (middleware/)
- Test files (.test.js)
IF Next.js:
- Page/route files (pages/ or app/)
- API route files (api/)
- Component files
- Test files
3. APPLY coding standards from coding-standards.md:
- Follow DISCOVERED file naming conventions
- Apply DISCOVERED directory structure
- Use DISCOVERED code formatting
- Include DISCOVERED file headers/comments
4. GENERATE test structure for DISCOVERED testing framework:
- Jest/Vitest: .test.js/.spec.js with describe/it blocks
- Pest: .php test files with it() syntax
- Pytest: test_*.py with test_ functions
- JUnit: *Test.java with @Test annotations
5. SET UP development environment:
- Install dependencies using DISCOVERED package manager
- Run initial build using DISCOVERED build tools
- Verify setup with basic test
Phase 5: Story File Update
1. UPDATE story file with:
- Progress log entry: "Development started - [timestamp]"
- Status: "development" (if was "backlog")
- Branch name: [branch-name]
- Tech stack used: [list of technologies from context]
2. IF boilerplate generated:
- LIST generated files in progress log
- NOTE initial setup completion
Phase 6: Next Steps Display
1. SHOW next steps in numbered format:
1. Review success criteria
2. Use /sdd:story-implement to generate code for DISCOVERED stack
3. Use /sdd:story-save to commit progress
4. Use /sdd:story-review when ready
2. MENTION relevant development commands for DISCOVERED stack:
- npm run dev / composer dev / python manage.py runserver / etc.
- Test commands for DISCOVERED framework
- Linting commands for DISCOVERED tools
OUTPUT FORMAT:
```
✅ STORY DEVELOPMENT STARTED
============================
Story: [story_id] - [Title]
Branch: [branch-name]
Stack: [DISCOVERED framework + technologies]
Project Context Loaded:
- Frontend: [DISCOVERED frontend framework]
- Backend: [DISCOVERED backend framework]
- Testing: [DISCOVERED testing tools]
- Build: [DISCOVERED build system]
[IF boilerplate generated:]
Generated Files:
- [list of created files with paths]
Next Steps:
1. Review success criteria
2. /sdd:story-implement to generate implementation
3. /sdd:story-save to commit progress
4. /sdd:story-review when ready for review
Development Commands:
- Server: [DISCOVERED dev server command]
- Tests: [DISCOVERED test command]
- Lint: [DISCOVERED lint command]
```
RULES:
- MUST load project context before proceeding
- MUST adapt all generated code to DISCOVERED stack
- NEVER assume framework - ALWAYS read technical-stack.md
- MUST create branch if it doesn't exist
- MUST update story file with start timestamp
- SHOULD generate boilerplate only if --boilerplate flag present
- MUST follow DISCOVERED coding standards exactly
```
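A minimal shell sketch of the Phase 3 branch setup above (the branch name is a placeholder following the `feature/[story-id-kebab-case]` convention):
```bash
branch="feature/auth-001-login-form"

if git show-ref --verify --quiet "refs/heads/${branch}"; then
  git checkout "$branch"
  echo "Switched to existing branch: ${branch}"
else
  git checkout -b "$branch" main
  echo "Created and switched to: ${branch}"
fi

git log -1 --oneline                        # last commit on the branch
git status --short                          # uncommitted changes, if any
```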
## Examples
### Example 1: Start Story Without Boilerplate
```bash
INPUT:
/sdd:story-start STORY-AUTH-001
PROCESS:
→ Loading project context from /docs/project-context/
→ Technical stack: Laravel + Livewire + Tailwind + Pest
→ Found story: /docs/stories/backlog/STORY-AUTH-001.md
→ Creating branch: feature/auth-001-login-form
→ Switched to new branch
OUTPUT:
✅ STORY DEVELOPMENT STARTED
============================
Story: STORY-AUTH-001 - Implement Login Form
Branch: feature/auth-001-login-form
Stack: Laravel 12 + Livewire 3 + Tailwind CSS 4 + Pest 4
Project Context Loaded:
- Frontend: Laravel Blade + Livewire + Alpine.js
- Backend: Laravel 12 (PHP 8.4)
- Testing: Pest (Unit, Feature, Browser)
- Build: Vite 7
Next Steps:
1. Review success criteria in story file
2. /sdd:story-implement to generate Livewire component
3. /sdd:story-save to commit progress
4. /sdd:story-review when ready for code review
Development Commands:
- Server: composer dev (or php artisan serve)
- Tests: vendor/bin/pest
- Lint: vendor/bin/pint
```
### Example 2: Start Story With Boilerplate
```bash
INPUT:
/sdd:story-start STORY-PROFILE-002 --boilerplate
PROCESS:
→ Loading project context
→ Technical stack: React + TypeScript + Vite + Jest
→ Found story: /docs/stories/development/STORY-PROFILE-002.md
→ Branch already exists: feature/profile-002-settings
→ Switched to existing branch
→ Generating React boilerplate...
OUTPUT:
✅ STORY DEVELOPMENT STARTED
============================
Story: STORY-PROFILE-002 - User Profile Settings
Branch: feature/profile-002-settings (existing)
Stack: React 18 + TypeScript + Vite + Jest
Project Context Loaded:
- Frontend: React 18 (TypeScript)
- State: Redux Toolkit
- Testing: Jest + React Testing Library
- Build: Vite 5
Generated Files:
- src/components/ProfileSettings.tsx
- src/components/ProfileSettings.module.css
- src/hooks/useProfileSettings.ts
- src/tests/ProfileSettings.test.tsx
Next Steps:
1. Review success criteria in story file
2. /sdd:story-implement to add implementation logic
3. /sdd:story-save to commit boilerplate
4. /sdd:story-review when feature complete
Development Commands:
- Server: npm run dev
- Tests: npm test
- Lint: npm run lint
```
### Example 3: Story Not Found
```bash
INPUT:
/sdd:story-start STORY-UNKNOWN-999
PROCESS:
→ Loading project context
→ Searching for STORY-UNKNOWN-999
→ Not found in /docs/stories/development/
→ Not found in /docs/stories/backlog/
→ Story does not exist
OUTPUT:
❌ Story Not Found
==================
Story ID: STORY-UNKNOWN-999
The story file was not found in:
- /docs/stories/development/
- /docs/stories/backlog/
Would you like to create this story?
→ Use: /sdd:story-new "Story title and description"
```
### Example 4: Missing Project Context
```bash
INPUT:
/sdd:story-start STORY-API-003
PROCESS:
→ Checking for /docs/project-context/ directory
→ Directory not found
OUTPUT:
⚠️ Project Context Missing
==========================
Cannot start story without project context.
The /docs/project-context/ directory does not exist.
This directory defines your technical stack and coding standards.
Please run: /sdd:project-init
This will create:
- /docs/project-context/technical-stack.md
- /docs/project-context/coding-standards.md
- /docs/project-context/development-process.md
After initialization, run /sdd:story-start again.
```
## Edge Cases
### Story Already in Development
```
IF story found in /docs/stories/development/:
- SWITCH to story branch
- SHOW: "Story already in development"
- DISPLAY: Current progress
- SUGGEST: /sdd:story-continue to resume
```
### Branch Exists But Diverged
```
IF branch exists AND has diverged from main:
- SHOW: Warning about diverged branch
- OFFER options:
1. Continue on current branch
2. Rebase on main
3. Create new branch with suffix (-v2)
```
### Boilerplate with Existing Files
```
IF --boilerplate flag AND files already exist:
- CHECK for conflicts
- SHOW: List of existing files
- ASK: Overwrite, skip, or merge?
- PROCEED based on user choice
```
## Error Handling
- **Story ID missing**: Return "Error: Story ID required. Usage: /sdd:story-start <story_id>"
- **Invalid story ID format**: Return "Error: Invalid story ID format. Expected: STORY-XXX-NNN"
- **Project context missing**: Halt and suggest /sdd:project-init
- **Context files corrupted**: Show error and suggest manual review
- **Git branch error**: Show git error and suggest manual resolution
- **File generation error**: Show which files failed and suggest manual creation
## Performance Considerations
- Load project context files only once at start
- Cache parsed technical stack for session
- Generate boilerplate asynchronously to avoid blocking
- Skip dependency installation if package.json/composer.json unchanged
## Related Commands
- `/sdd:story-new` - Create a new story before starting
- `/sdd:story-continue` - Resume work on existing story
- `/sdd:story-implement` - Generate implementation code
- `/sdd:story-save` - Commit progress
- `/sdd:project-init` - Initialize project context
## Notes
- Project context is mandatory for story development
- Branch naming follows convention: feature/[story-id-kebab-case]
- Boilerplate generation is framework-aware and respects coding standards
- All generated code must match the DISCOVERED technical stack
- Never assume technology choices - always read and adapt

73
commands/story-status.md Normal file
View File

@@ -0,0 +1,73 @@
# /sdd:story-status
Shows all stories and their current stages.
## Implementation
**Format**: Structured (standard)
**Actions**: Read-only query
**Modifications**: None
### Discovery
1. List stories in each stage directory:
- `/docs/stories/development/`
- `/docs/stories/review/`
- `/docs/stories/qa/`
- `/docs/stories/completed/`
- `/docs/stories/backlog/`
2. For each story file found:
- Extract story ID from filename
- Read title from YAML frontmatter
- Parse started date from metadata
- Extract branch name
- Read most recent progress log entry
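A minimal sketch of the discovery pass above (bash; assumes the directory layout in step 1 and a `title:` field in each story's YAML frontmatter):
```bash
# Walk each stage directory and print story ID plus frontmatter title
for stage in development review qa completed backlog; do
  echo "== $stage =="
  for file in /docs/stories/"$stage"/*.md; do
    [ -e "$file" ] || continue                              # skip empty stages
    id=$(basename "$file" .md)                              # story ID from filename
    title=$(grep -m1 '^title:' "$file" | sed 's/^title:[[:space:]]*//')
    echo "• $id: ${title:-<no title>}"
  done
done
```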
### Output Format
```
📊 STORY STATUS OVERVIEW
========================
🚧 DEVELOPMENT (active work)
---------------------------
• [STORY-ID]: [Title]
Started: [Date] | Branch: [branch-name]
Last update: [most recent progress entry]
🔍 REVIEW (code review & checks)
--------------------------------
• [List stories in review stage]
✅ QA (final validation)
-----------------------
• [List stories in QA stage]
📦 COMPLETED (last 5)
--------------------
• [Recent completed stories with completion dates]
💡 BACKLOG
----------
• [Backlog items by priority]
📈 SUMMARY
----------
Total active: [count]
This week completed: [count]
Average cycle time: [if data available]
```
### Empty State
If no stories exist:
```
💡 NO STORIES FOUND
Get started:
1. Create your first story with /sdd:story-new
2. Set up full structure with /sdd:project-init
```
### Notes
- Shows read-only snapshot of current state
- Does not modify any files
- Displays maximum 5 completed stories (most recent)

748
commands/story-tech-debt.md Normal file
View File

@@ -0,0 +1,748 @@
# /sdd:story-tech-debt
## Meta
- Version: 2.0
- Category: story-analysis
- Complexity: high
- Purpose: Identify, categorize, prioritize, and track technical debt from stories to inform debt reduction efforts
## Definition
**Purpose**: Scan all stories for technical debt indicators, categorize by severity and type, calculate impact metrics, and generate an actionable debt reduction plan.
**Syntax**: `/sdd:story-tech-debt [priority]`
## Parameters
| Parameter | Type | Required | Default | Description | Validation |
|-----------|------|----------|---------|-------------|------------|
| priority | string | No | "all" | Debt priority filter (critical, important, nice-to-have, all) | One of: critical, important, nice-to-have, all |
## INSTRUCTION: Analyze Technical Debt
### INPUTS
- priority: Optional priority filter (defaults to all)
- Story files from all directories:
- `/docs/stories/development/` - Active stories
- `/docs/stories/review/` - Stories in review
- `/docs/stories/qa/` - Stories in testing
- `/docs/stories/completed/` - Finished stories
- Optional: Project codebase for TODO scanning
### PROCESS
#### Phase 1: Debt Indicator Detection
1. **SCAN** all story files for debt indicators:
- "TODO" mentions in technical notes
- "FIXME" mentions in technical notes
- "HACK" mentions in technical notes
- "Technical debt" explicit mentions
- "Deferred" items in implementation checklist
- "Temporary solution" in progress log
- "Skipped tests" in test cases
- "Performance concern" in technical notes
- "Security risk" mentions
- "Needs refactor" mentions
2. **EXTRACT** debt details:
- Description of debt item
- Source story ID
- Date created (from story started date)
- Severity indicators
- Impact description
3. **OPTIONAL**: Scan codebase for TODOs:
- Search `*.php` files for TODO comments
- Search `*.blade.php` files for TODO comments
- Search `*.js` files for TODO comments
- Link to stories when possible
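A minimal sketch of the indicator scan described in this phase (bash + grep; the indicator list mirrors the keywords above and matches case-insensitively):
```bash
# Report file, line number, and matching text for every debt indicator found
indicators='TODO|FIXME|HACK|technical debt|deferred|temporary solution|skipped test|performance concern|security risk|needs refactor'

grep -rniE "$indicators" /docs/stories/ --include='*.md' |
  while IFS=: read -r file line match; do
    echo "[$(basename "$file" .md)] line $line: $match"
  done
```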
#### Phase 2: Debt Categorization
1. **CLASSIFY** debt by severity:
- Critical: Security/Stability issues
- Important: Performance/Maintenance issues
- Nice to have: Cleanup/Refactor items
2. **CLASSIFY** debt by type:
- Security debt
- Performance debt
- Code quality debt
- Test debt
- Documentation debt
- Infrastructure debt
3. **CALCULATE** impact scores:
- User impact (High/Medium/Low)
- Developer impact (High/Medium/Low)
- Business impact (High/Medium/Low)
4. **ESTIMATE** effort:
- Hours for small items
- Days for medium items
- Weeks for large items
5. **DISPLAY** debt inventory:
```
🏗️ TECHNICAL DEBT INVENTORY
══════════════════════════════════
🔴 CRITICAL (Security/Stability)
────────────────────────────────
[DEBT-001] Security: Unencrypted Sensitive Data Storage
- Story: STORY-2025-012
- Created: Sep 15, 2025
- Impact: High - PII at risk, compliance violation
- Effort: Medium (2 days)
- Priority: P0 - Fix immediately
- Description: User passwords stored in plain text in logs
[DEBT-002] Stability: Memory Leak in Background Service
- Story: STORY-2025-023
- Created: Sep 20, 2025
- Impact: High - Application crashes after 24h
- Effort: Low (1 day)
- Priority: P0 - Fix immediately
- Description: Queue worker accumulates memory over time
🟡 IMPORTANT (Performance/Maintenance)
────────────────────────────────────
[DEBT-003] Performance: Unoptimized Database Queries
- Story: STORY-2025-018
- Created: Sep 18, 2025
- Impact: Medium - 3-5s page load times
- Effort: Medium (2 days)
- Priority: P1 - Fix soon
- Description: N+1 queries in user dashboard
[DEBT-004] Maintenance: Duplicated Business Logic
- Story: STORY-2025-025
- Created: Sep 22, 2025
- Impact: Medium - Hard to update, bug prone
- Effort: High (3 days)
- Priority: P2 - Plan for next sprint
- Description: Payment validation duplicated in 5 places
🟢 NICE TO HAVE (Cleanup/Refactor)
──────────────────────────────────
[DEBT-005] Cleanup: Unused Dependencies
- Story: STORY-2025-010
- Created: Sep 10, 2025
- Impact: Low - Larger bundle size
- Effort: Low (2 hours)
- Priority: P3 - When time permits
- Description: 3 unused npm packages in package.json
[DEBT-006] Refactor: Complex Livewire Component
- Story: STORY-2025-021
- Created: Sep 21, 2025
- Impact: Low - Maintainability concern
- Effort: Medium (1 day)
- Priority: P3 - When time permits
- Description: TaskManager component has 15 methods
```
#### Phase 3: Debt Metrics Calculation
1. **COUNT** total debt items by category
2. **SUM** estimated effort (convert to days)
3. **CALCULATE** debt ratio:
- Total debt effort / Total development time
- Percentage of development capacity
4. **DISPLAY** debt metrics:
```
📊 DEBT METRICS
══════════════════════════════════
Total Debt Items: 24
Estimated Effort: 32 days
By Severity:
- Critical: 3 items (6 days)
- Important: 8 items (18 days)
- Nice to have: 13 items (8 days)
By Category:
- Security debt: 2 items (4 days)
- Performance debt: 5 items (12 days)
- Code quality debt: 9 items (10 days)
- Test debt: 4 items (3 days)
- Documentation debt: 3 items (2 days)
- Infrastructure debt: 1 item (1 day)
Debt Ratio: 28% of development capacity
(32 debt days / 115 total development days)
Status: ⚠️ High debt load - prioritize reduction
```
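A minimal sketch of the debt ratio arithmetic shown above; the 32-day and 115-day totals are the illustrative figures from this report, and the 25% warning threshold is an assumption rather than part of the command's contract:
```bash
debt_days=32     # total estimated debt effort
dev_days=115     # total development time
ratio=$(awk -v d="$debt_days" -v t="$dev_days" 'BEGIN { printf "%.0f", d * 100 / t }')
echo "Debt Ratio: ${ratio}% of development capacity"      # prints 28
[ "$ratio" -ge 25 ] && echo "Status: ⚠️ High debt load - prioritize reduction"
```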
#### Phase 4: Impact Assessment
1. **ANALYZE** user impact:
- Items affecting user experience
- Items affecting performance
- Items invisible to users
2. **ANALYZE** developer impact:
- Items slowing development
- Items causing confusion
- Items increasing bug rate
3. **ANALYZE** business impact:
- Items affecting scalability
- Items increasing costs
- Items risking compliance
4. **DISPLAY** impact assessment:
```
⚡ IMPACT ASSESSMENT
══════════════════════════════════
User Impact:
- 8 items affect user experience
- 5 items affect performance
- 11 items are invisible to users
Developer Impact:
- 12 items slow development
- 7 items cause confusion
- 9 items increase bug likelihood
Business Impact:
- 4 items affect scalability
- 3 items increase operational costs
- 2 items risk compliance/security
Risk Level: 🔴 High
Recommendation: Address critical items immediately
```
#### Phase 5: Priority Matrix Generation
1. **PLOT** debt items on impact/effort matrix:
- High impact + Low effort: Quick wins
- High impact + High effort: Strategic projects
- Low impact + Low effort: Backlog items
- Low impact + High effort: Defer or eliminate
2. **PRIORITIZE** within each quadrant
3. **GENERATE** priority recommendations
4. **DISPLAY** priority matrix:
```
📈 PRIORITY MATRIX
══════════════════════════════════
🎯 HIGH IMPACT + LOW EFFORT (DO FIRST)
────────────────────────────────────
[DEBT-002] Memory leak fix (1 day)
[DEBT-005] Remove unused dependencies (2 hours)
[DEBT-007] Add missing indexes (4 hours)
Total effort: 1.75 days
Expected impact: High stability, reduced costs
📋 HIGH IMPACT + HIGH EFFORT (PLAN & SCHEDULE)
──────────────────────────────────────────────
[DEBT-001] Implement data encryption (2 days)
[DEBT-004] Refactor duplicated logic (3 days)
[DEBT-008] Migrate to new API version (5 days)
Total effort: 10 days
Expected impact: Security, maintainability
⚡ LOW IMPACT + LOW EFFORT (QUICK WINS)
───────────────────────────────────────
[DEBT-006] Simplify complex component (1 day)
[DEBT-009] Update deprecated API calls (2 hours)
[DEBT-010] Fix linting warnings (1 hour)
Total effort: 1.5 days
Expected impact: Code quality, dev experience
⏸️ LOW IMPACT + HIGH EFFORT (DEFER)
────────────────────────────────────
[DEBT-011] Achieve 100% test coverage (5 days)
[DEBT-012] Complete architectural refactor (10 days)
Total effort: 15 days
Recommendation: Defer or break into smaller items
```
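A minimal sketch of the quadrant placement above, assuming each debt item already carries an impact level and an effort estimate in days; the two-day cut-off for "low effort" is an illustrative threshold:
```bash
# Classify one debt item into an impact/effort quadrant
classify_quadrant() {
  local impact="$1"        # High | Low (Medium treated as Low here for brevity)
  local effort_days="$2"
  local low_effort
  low_effort=$(awk -v e="$effort_days" 'BEGIN { print (e <= 2) ? "yes" : "no" }')

  if [ "$impact" = "High" ] && [ "$low_effort" = "yes" ]; then
    echo "DO FIRST (high impact, low effort)"
  elif [ "$impact" = "High" ]; then
    echo "PLAN & SCHEDULE (high impact, high effort)"
  elif [ "$low_effort" = "yes" ]; then
    echo "QUICK WINS (low impact, low effort)"
  else
    echo "DEFER (low impact, high effort)"
  fi
}

classify_quadrant High 1     # DEBT-002 style item → DO FIRST
classify_quadrant Low 10     # DEBT-012 style item → DEFER
```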
#### Phase 6: Debt Story Generation
1. **GROUP** related debt items
2. **CREATE** debt reduction story proposals:
- Story title
- Combined debt items
- Total effort estimate
- Expected value/benefit
3. **SUGGEST** story descriptions
4. **DISPLAY** debt stories:
```
📝 DEBT REDUCTION STORIES
══════════════════════════════════
Suggested Stories to Create:
[DEBT-STORY-001] Security Hardening Sprint
- Combines: DEBT-001, DEBT-013, DEBT-015
- Items: 3 security issues
- Effort: 5 days
- Value: Critical security and compliance
- Priority: P0 - Must do next sprint
[DEBT-STORY-002] Performance Optimization Sprint
- Combines: DEBT-003, DEBT-007, DEBT-014, DEBT-016
- Items: 4 performance issues
- Effort: 6 days
- Value: 50% faster page loads, better UX
- Priority: P1 - High value
[DEBT-STORY-003] Code Quality Refactor
- Combines: DEBT-004, DEBT-006, DEBT-017
- Items: 3 maintainability issues
- Effort: 5 days
- Value: Easier maintenance, faster features
- Priority: P2 - Medium value
Create these stories? (y/n)
```
#### Phase 7: Debt Reduction Plan
1. **ORGANIZE** debt items into sprint-sized chunks
2. **CREATE** timeline for debt reduction:
- Immediate: Critical items
- Short-term: Important items
- Medium-term: Nice to have items
- Ongoing: Continuous improvements
3. **CALCULATE** capacity allocation:
- Percentage of sprint for debt work
- Expected completion timeline
4. **DISPLAY** reduction plan:
```
📅 DEBT REDUCTION PLAN
══════════════════════════════════
IMMEDIATE (This Week)
────────────────────────────────
Sprint Focus: Critical security and stability
- Fix data encryption (2 days)
- Patch memory leak (1 day)
- Add authentication checks (1 day)
Total: 4 days
Team capacity: 2 developers × 2 days each
SHORT-TERM (Next 2 Sprints)
────────────────────────────────
Sprint 1: Performance optimization
- Optimize database queries (2 days)
- Add caching layer (2 days)
- Add missing indexes (0.5 days)
Sprint 2: Code quality improvement
- Refactor duplicated logic (3 days)
- Simplify complex components (1 day)
- Add missing tests (2 days)
Total: 10.5 days
MEDIUM-TERM (Month 2-3)
────────────────────────────────
- Documentation updates (2 days)
- Dependency upgrades (1 day)
- Architectural improvements (5 days)
Total: 8 days
ONGOING PREVENTION
────────────────────────────────
- Allocate 20% of each sprint to debt
- Address new TODOs within 2 weeks
- Code review checklist for debt
- Monthly debt review meeting
ESTIMATED COMPLETION
────────────────────────────────
All critical debt: 1 week
All high-priority debt: 6 weeks
All tracked debt: 12 weeks
With 20% ongoing capacity: Sustainable
```
#### Phase 8: Trend Analysis
1. **TRACK** debt over time:
- New debt created (from recent stories)
- Debt resolved (from progress logs)
- Net change
2. **CALCULATE** debt velocity:
- Rate of debt creation
- Rate of debt resolution
- Projected timeline
3. **DISPLAY** trend analysis:
```
📉 DEBT TRENDS
══════════════════════════════════
Last 30 Days:
- New debt created: 8 items (12 days effort)
- Debt resolved: 3 items (4 days effort)
- Net change: +5 items (+8 days) ⚠️
Monthly Rate:
- Debt creation: 8 items/month
- Debt resolution: 3 items/month
- Net accumulation: 5 items/month
Current Trajectory:
- At current rate: Debt increasing
- Projected debt in 3 months: 39 items (56 days)
- Status: 🔴 Unsustainable
With 20% Sprint Capacity (4 days/sprint):
- Debt resolution: 8 items/month
- Net change: Even or reducing
- Clear current debt: 8 sprints (4 months)
- Status: ✅ Sustainable
Recommendation:
Allocate 20% of sprint capacity to debt reduction
to prevent accumulation and clear backlog.
```
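A minimal sketch of the linear projection behind the trajectory above; the monthly rates are the illustrative figures from this report:
```bash
current_items=24; current_days=32          # current inventory
new_items=8;      resolved_items=3         # monthly creation vs resolution
new_days=12;      resolved_days=4
months=3

projected_items=$(( current_items + (new_items - resolved_items) * months ))
projected_days=$((  current_days  + (new_days  - resolved_days)  * months ))
echo "Projected debt in ${months} months: ${projected_items} items (${projected_days} days)"
# → 39 items (56 days), matching the trend report above
```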
#### Phase 9: Prevention Recommendations
1. **ANALYZE** root causes of debt
2. **SUGGEST** process improvements:
- Code review additions
- Definition of done criteria
- Quality gates
- Standards enforcement
3. **RECOMMEND** preventive measures:
- Performance budgets
- Complexity limits
- Coverage requirements
- Documentation standards
4. **DISPLAY** prevention recommendations:
```
🛡️ DEBT PREVENTION
══════════════════════════════════
PROCESS IMPROVEMENTS:
Code Review Checklist:
✓ Add debt check to review template
✓ Block PRs with new TODO comments
✓ Require justification for technical debt
✓ Link debt to tracking story
Definition of Done:
✓ All tests passing
✓ No new TODO/FIXME comments
✓ Performance benchmarks met
✓ Security checks passed
✓ Documentation updated
Quality Gates:
✓ Automated: Lint, format, test coverage
✓ Manual: Security review for auth changes
✓ Manual: Performance review for queries
STANDARDS TO ENFORCE:
Performance Budgets:
- Max page load: 2 seconds
- Max API response: 500ms
- Max database queries: 10 per request
- Max N+1 queries: 0
Complexity Limits:
- Max cyclomatic complexity: 10
- Max method length: 30 lines
- Max class length: 300 lines
- Max method parameters: 4
Coverage Requirements:
- Minimum test coverage: 80%
- All public methods tested
- Edge cases covered
- Browser tests for critical paths
Documentation Standards:
- All public APIs documented
- Complex logic explained
- Setup instructions complete
- Deployment process documented
TOOL RECOMMENDATIONS:
Automated Checks:
- Laravel Pint for code style
- Pest for testing
- PHPStan for static analysis
- GitHub Actions for CI/CD
Monitoring:
- Laravel Telescope for debugging
- Performance monitoring
- Error tracking
- Log aggregation
```
#### Phase 10: Report Export
1. **COMPILE** all debt data into comprehensive report
2. **GENERATE** debt backlog stories
3. **CREATE** tracking documents
4. **OFFER** export options:
```
💾 EXPORT OPTIONS
══════════════════════════════════
Export debt report to:
1. /tech-debt/report-2025-10-01.md
- Complete debt inventory
- Metrics and trends
- Reduction plan
- Prevention recommendations
2. Create debt stories in /docs/stories/backlog/:
- DEBT-STORY-001.md (Security sprint)
- DEBT-STORY-002.md (Performance sprint)
- DEBT-STORY-003.md (Code quality sprint)
3. Create debt tracking dashboard:
- /tech-debt/dashboard.md
- Updated weekly with latest status
Export all? (y/n)
```
5. **DISPLAY** export summary:
```
✅ EXPORT COMPLETE
══════════════════════════════════
Files Created:
✓ /tech-debt/report-2025-10-01.md
✓ /tech-debt/dashboard.md
✓ /docs/stories/backlog/DEBT-STORY-001.md
✓ /docs/stories/backlog/DEBT-STORY-002.md
✓ /docs/stories/backlog/DEBT-STORY-003.md
NEXT STEPS:
1. Review and prioritize debt stories
2. Schedule critical items for this sprint
3. Allocate 20% capacity for debt work
4. Update debt dashboard weekly
5. Track debt velocity monthly
💡 QUICK START:
/sdd:story-start DEBT-STORY-001 # Begin security sprint
```
### OUTPUTS
- Console display of complete debt analysis
- Optional: `/tech-debt/report-[date].md` - Comprehensive debt report
- Optional: `/tech-debt/dashboard.md` - Tracking dashboard
- Optional: `/docs/stories/backlog/DEBT-STORY-*.md` - Debt reduction stories
### RULES
- MUST scan all story directories (not just completed)
- MUST categorize debt by severity (critical/important/nice-to-have)
- MUST calculate effort estimates (hours/days)
- MUST prioritize by impact/effort matrix
- SHOULD link debt items to source stories
- SHOULD provide timeline for debt reduction
- SHOULD suggest prevention measures
- NEVER modify existing story files (analysis is read-only; exports only create new files)
- ALWAYS show debt sources and dates
- ALWAYS provide actionable recommendations
- MUST handle missing data gracefully
## Debt Severity Levels
### Critical (P0)
- Security vulnerabilities
- Stability/crash issues
- Data loss risks
- Compliance violations
- **Action**: Fix immediately
### Important (P1)
- Performance degradation
- Maintainability issues
- Moderate bug risks
- User experience problems
- **Action**: Fix within 1-2 sprints
### Nice to Have (P2/P3)
- Code cleanup
- Minor refactoring
- Documentation gaps
- Optimization opportunities
- **Action**: Address when capacity allows
## Examples
### Example 1: All Debt
```bash
INPUT:
/sdd:story-tech-debt
OUTPUT:
→ Scanning all story directories...
→ Found 24 debt items across 18 stories
→ Categorizing and prioritizing...
🏗️ TECHNICAL DEBT INVENTORY
══════════════════════════════════
🔴 CRITICAL (3 items)
[DEBT-001] Security: Unencrypted sensitive data
[DEBT-002] Stability: Memory leak in queue worker
[DEBT-003] Security: Missing authentication check
🟡 IMPORTANT (8 items)
[DEBT-004] Performance: N+1 query issues
[DEBT-005] Maintenance: Duplicated business logic
[... 6 more ...]
🟢 NICE TO HAVE (13 items)
[DEBT-006] Cleanup: Unused dependencies
[... 12 more ...]
[Additional sections...]
📝 DEBT REDUCTION STORIES
Create 3 debt stories in backlog? (y/n)
```
### Example 2: Critical Debt Only
```bash
INPUT:
/sdd:story-tech-debt critical
OUTPUT:
→ Scanning for critical debt only...
→ Found 3 critical items
🔴 CRITICAL DEBT (3 items, 6 days)
══════════════════════════════════
[DEBT-001] Security: Unencrypted Sensitive Data
- Story: STORY-2025-012
- Impact: High - PII at risk
- Effort: 2 days
- Description: User data stored without encryption
[DEBT-002] Stability: Memory Leak
- Story: STORY-2025-023
- Impact: High - Crashes after 24h
- Effort: 1 day
- Description: Queue worker memory accumulation
[DEBT-003] Security: Missing Auth Check
- Story: STORY-2025-028
- Impact: High - Unauthorized access possible
- Effort: 3 days
- Description: Admin endpoints lack verification
⚠️ ACTION REQUIRED
──────────────────────────────────
These critical items should be addressed immediately.
Estimated effort: 6 days total
Create emergency debt story? (y/n)
```
### Example 3: No Debt Found
```bash
INPUT:
/sdd:story-tech-debt
OUTPUT:
→ Scanning all story directories...
→ Analyzing debt indicators...
✅ NO TECHNICAL DEBT DETECTED
══════════════════════════════════
No debt indicators found in stories:
- No TODO/FIXME comments
- No deferred items
- No temporary solutions
- No skipped tests
Status: 🎉 Clean codebase!
PREVENTION:
Continue following best practices:
- Code review process
- Test-driven development
- Performance monitoring
- Security checks
Run /sdd:story-metrics to see quality metrics.
```
## Edge Cases
### No Stories Available
- DETECT empty story directories
- DISPLAY no data message
- SUGGEST creating stories first
- PROVIDE guidance on starting
### All Debt Resolved
- DETECT zero debt items
- CELEBRATE clean codebase
- SHOW prevention recommendations
- SUGGEST ongoing practices
### Incomplete Debt Information
- PARSE flexibly from available data
- MARK incomplete items for review
- ESTIMATE effort conservatively
- CONTINUE with best-effort analysis
### Very High Debt Load
- DETECT debt > 50% of development time
- DISPLAY warning alert
- PRIORITIZE ruthlessly (critical only)
- SUGGEST process intervention
## Error Handling
- **No story directories**: Report missing structure, suggest `/sdd:project-init`
- **Permission errors**: Report specific file access issues
- **Malformed story files**: Skip problematic files, log warnings
- **Invalid priority parameter**: Show valid options, use default
- **Export directory conflicts**: Ask to overwrite or merge
## Performance Considerations
- Efficient file scanning (single pass per directory)
- Lazy parsing (only parse when needed)
- Pattern matching with regex for debt indicators
- Streaming output for large debt lists
- Typical completion time: < 3 seconds for 50 stories
## Related Commands
- `/sdd:story-metrics` - Development velocity and quality metrics
- `/sdd:story-patterns` - Identify recurring patterns
- `/sdd:project-status` - Current project state
- `/sdd:story-new [id]` - Create debt reduction story
## Constraints
- ✅ MUST be read-only (no story modifications)
- ✅ MUST categorize by severity (critical/important/nice-to-have)
- ✅ MUST provide effort estimates
- ⚠️ SHOULD link debt to source stories
- 📊 SHOULD include impact assessment
- 💡 SHOULD generate reduction plan
- 🛡️ SHOULD suggest prevention measures
- ⏱️ MUST complete analysis in reasonable time (< 5s)
- 📁 SHOULD offer to export and create stories

510
commands/story-test-integration.md Normal file
View File

@@ -0,0 +1,510 @@
# /sdd:story-test-integration
Execute comprehensive integration and end-to-end tests for story validation.
---
## Meta
**Category**: Testing & Validation
**Format**: Imperative (Comprehensive)
**Execution Time**: 3-8 minutes
**Prerequisites**: Active story in `/docs/stories/development/` or `/docs/stories/review/`
**Destructive**: No (read-only with test execution)
**Related Commands**:
- `/sdd:story-quick-check` - Fast validation before integration tests
- `/sdd:story-full-check` - Comprehensive validation suite (includes this + more)
- `/sdd:story-validate` - Final story validation (runs after this)
**Context Requirements**:
- `/docs/project-context/technical-stack.md` (testing tools, frameworks, database)
- `/docs/project-context/coding-standards.md` (test patterns, coverage requirements)
- `/docs/project-context/development-process.md` (integration testing criteria)
---
## Parameters
**Story Parameters**:
```bash
# Auto-detect from current active story or specify:
--story-id=STORY-XXX-NNN # Specific story ID
--scope=api|db|e2e|all # Test scope (default: all)
--performance # Include performance profiling
```
**Test Configuration**:
```bash
--browser=chrome|firefox|safari # Browser for e2e tests (default: chrome)
--parallel=N # Parallel test execution (default: 4)
--coverage # Generate coverage report
--verbose # Detailed test output
```
---
## Process
### Phase 1: Test Scope Discovery (30s)
**Load Context**:
```bash
# Verify project context exists
if ! [ -d /docs/project-context/ ]; then
echo "⚠️ Missing /docs/project-context/ - run /sdd:project-init first"
exit 1
fi
# Confirm the context files that drive test configuration are present
# (markdown documents are read for settings, not shell-sourced)
for doc in technical-stack.md coding-standards.md development-process.md; do
  [ -f "/docs/project-context/$doc" ] || { echo "⚠️ Missing $doc - run /sdd:project-init first"; exit 1; }
done
```
**Identify Test Scope**:
1. Read active story acceptance criteria
2. Extract integration points (API, database, external services)
3. Identify dependent services and components
4. Determine required test types (API, DB, E2E, performance)
**Output**:
```
🎯 INTEGRATION TEST SCOPE
========================
Story: STORY-XXX-NNN - [Title]
Integration Points:
✓ API: POST /api/tasks, GET /api/tasks/{id}
✓ Database: tasks, categories, task_category pivot
✓ Livewire: TaskManager component
✓ Browser: Task creation workflow
Test Types: API, Database, E2E, Performance
Estimated Duration: ~5 minutes
```
---
### Phase 2: API Integration Tests (1-2 min)
**Execute API Tests**:
```bash
# Laravel/Pest example
php artisan test --filter=Api --coverage
# Check:
# ✓ Endpoint functionality (CRUD operations)
# ✓ Request/response formats (JSON, validation)
# ✓ Authentication/authorization (gates, policies)
# ✓ Error responses (422, 404, 403, 500)
# ✓ Rate limiting (if configured)
```
**Output**:
```
🔗 API INTEGRATION TESTS
=======================
✅ POST /api/tasks creates task (24ms)
✅ GET /api/tasks returns all tasks (18ms)
✅ PUT /api/tasks/{id} updates task (22ms)
✅ DELETE /api/tasks/{id} removes task (19ms)
❌ POST /api/tasks validates input (FAILED)
Expected 422, got 500
Error: Column 'order' cannot be null
Passed: 4/5 (80%)
Failed: 1
Duration: 0.8s
```
---
### Phase 3: Database Integration Tests (1-2 min)
**Execute Database Tests**:
```bash
# Test database operations
php artisan test --filter=Database
# Check:
# ✓ CRUD operations (create, read, update, delete)
# ✓ Transactions (rollback, commit)
# ✓ Data integrity (constraints, foreign keys)
# ✓ Migrations (up, down, fresh)
# ✓ Relationships (eager loading, N+1 prevention)
```
**Output**:
```
💾 DATABASE INTEGRATION
======================
✅ Task model creates records (12ms)
✅ Categories relationship loads (8ms)
✅ Soft deletes work correctly (10ms)
✅ Order column maintains sequence (15ms)
✅ Transaction rollback on error (18ms)
Passed: 5/5 (100%)
Duration: 0.6s
Query Performance:
Average: 8ms
Slowest: Task::with('categories') - 15ms
```
---
### Phase 4: End-to-End Tests (2-4 min)
**Execute Browser Tests**:
```bash
# Pest v4 Browser Testing
php artisan test --filter=Browser --browser=chrome
# Test workflows:
# ✓ Complete user workflows (login → create → edit → delete)
# ✓ Multi-step processes (task creation with categories)
# ✓ Cross-feature interactions (filtering + sorting)
# ✓ Data flow validation (form → server → database → UI)
```
**Output**:
```
🌐 END-TO-END TESTS (Chrome)
===========================
✅ User can create task with category (2.4s)
✅ Task displays in correct order (1.8s)
✅ Drag-and-drop reorders tasks (3.1s)
❌ Mobile touch gestures work (FAILED)
Element not found: [wire:sortable]
Screenshot: /tmp/mobile-touch-fail.png
Passed: 3/4 (75%)
Failed: 1
Duration: 8.2s
Console Errors: None
Network Errors: None
```
---
### Phase 5: Performance Testing (1-2 min, optional)
**Execute Performance Tests** (if `--performance` flag):
```bash
# Load testing
ab -n 100 -c 10 https://ccs-todo.test/api/tasks
# Check:
# ✓ API response times (< 200ms p95)
# ✓ Database query performance (< 50ms avg)
# ✓ Memory usage (< 128MB)
# ✓ Stress test critical paths (100 concurrent users)
```
**Output**:
```
⚡ PERFORMANCE PROFILING
=======================
API Endpoints:
GET /api/tasks avg: 45ms p95: 120ms ✓
POST /api/tasks avg: 68ms p95: 180ms ✓
PUT /api/tasks/{id} avg: 52ms p95: 150ms ✓
Database Queries:
Average: 12ms
Slowest: Task::with('categories', 'tags') - 48ms
Memory Usage: 64MB (peak: 82MB) ✓
Bottlenecks: None detected
```
---
### Phase 6: Test Report Generation (10s)
**Generate Comprehensive Report**:
```
📊 INTEGRATION TEST RESULTS
===========================
Story: STORY-XXX-NNN - [Title]
Executed: 2025-10-01 14:32:15
Duration: 5m 12s
OVERALL: 🟡 PASSING WITH WARNINGS
┌─────────────────────┬────────┬────────┬─────────┬──────────┐
│ Test Suite │ Passed │ Failed │ Skipped │ Coverage │
├─────────────────────┼────────┼────────┼─────────┼──────────┤
│ API Integration │ 4 │ 1 │ 0 │ 92% │
│ Database Integration│ 5 │ 0 │ 0 │ 88% │
│ E2E Browser │ 3 │ 1 │ 0 │ 76% │
│ Performance │ N/A │ N/A │ N/A │ N/A │
├─────────────────────┼────────┼────────┼─────────┼──────────┤
│ TOTAL │ 12 │ 2 │ 0 │ 85% │
└─────────────────────┴────────┴────────┴─────────┴──────────┘
❌ FAILED TESTS (2):
1. POST /api/tasks validates input
Error: Column 'order' cannot be null
Fix: Add default value to 'order' column in migration
2. Mobile touch gestures work
Error: Element not found: [wire:sortable]
Fix: Ensure SortableJS loads on mobile viewport
Screenshot: /tmp/mobile-touch-fail.png
⚠️ WARNINGS (1):
- Slowest query: Task::with('categories', 'tags') - 48ms
Consider adding indexes or reducing eager loading
✅ HIGHLIGHTS:
✓ All database operations working correctly
✓ API authentication/authorization passing
✓ Desktop E2E workflows functional
✓ Performance within acceptable ranges
📈 COVERAGE: 85% (target: 80%+)
Lines: 342/402
Branches: 28/35
Functions: 45/48
🎯 NEXT STEPS:
1. Fix: Add default order value in migration
2. Fix: Debug mobile touch gesture handling
3. Re-run: php artisan test --filter="failed"
4. Then run: /sdd:story-validate
```
---
### Phase 7: Failure Handling & Auto-Fix (if failures)
**Interactive Failure Resolution**:
```
❌ 2 TEST FAILURES DETECTED
Would you like me to:
[1] Show detailed error logs
[2] Suggest fixes for each failure
[3] Implement fixes automatically
[4] Re-run failed tests only
[5] Exit (fix manually)
Choose option [1-5]:
```
**If Option 2 (Suggest Fixes)**:
```
🔧 SUGGESTED FIXES
==================
Failure 1: POST /api/tasks validates input
Problem: Column 'order' has no default value
Location: database/migrations/xxx_create_tasks_table.php
Fix:
$table->integer('order')->default(0);
Confidence: HIGH (common pattern)
Failure 2: Mobile touch gestures
Problem: SortableJS not loading on mobile
Location: resources/js/app.js
Fix: Check Alpine.js device detection:
if (window.isDevice('mobile') || window.isDevice('tablet')) {
loadSortable();
}
Confidence: MEDIUM (requires investigation)
Apply fixes? [y/n]:
```
**If Option 3 (Auto-fix)**:
- Apply suggested fixes
- Run Pint formatting
- Re-run failed tests
- Show updated results
---
### Phase 8: Story Update (10s)
**Update Story Documentation**:
```bash
# Append to story's progress log
echo "$(date): Integration tests executed" >> /docs/stories/development/STORY-XXX-NNN.md
# Add test results section
cat >> /docs/stories/development/STORY-XXX-NNN.md <<EOF
## Integration Test Results ($(date +%Y-%m-%d))
**Status**: 🟡 Passing with warnings
**Duration**: 5m 12s
**Coverage**: 85%
### Test Summary
- API Integration: 4/5 passed (1 failed)
- Database Integration: 5/5 passed
- E2E Browser: 3/4 passed (1 failed)
### Failed Tests
1. POST /api/tasks validation - Fixed: Added default order value
2. Mobile touch gestures - In Progress: Debugging SortableJS loading
### Next Actions
- Fix remaining mobile touch issue
- Re-run tests
- Proceed to /sdd:story-validate
EOF
```
**Output**:
```
📝 STORY UPDATED
===============
Progress log updated: /docs/stories/development/STORY-XXX-NNN.md
Test results recorded
Timestamp: 2025-10-01 14:37:27
```
---
## Examples
### Example 1: All Tests Pass
```bash
$ /sdd:story-test-integration
🎯 Integration Test Scope: STORY-DUE-002
API + Database + E2E + Performance
[... test execution ...]
📊 INTEGRATION TEST RESULTS
===========================
OVERALL: ✅ ALL TESTS PASSING
Total: 15 tests passed (0 failed)
Coverage: 92%
Duration: 4m 38s
✅ Ready for /sdd:story-validate
```
### Example 2: Failures with Auto-Fix
```bash
$ /sdd:story-test-integration
[... test execution ...]
2 failures detected
Applying auto-fixes...
✓ Fixed migration default value
✓ Updated SortableJS loading
Re-running failed tests...
✅ POST /api/tasks validates input (FIXED)
✅ Mobile touch gestures work (FIXED)
📊 FINAL RESULTS: ✅ ALL TESTS PASSING
```
### Example 3: Scoped to API Only
```bash
$ /sdd:story-test-integration --scope=api
🎯 Integration Test Scope: API only
🔗 API INTEGRATION TESTS
=======================
✅ All 8 API tests passed
Duration: 1m 12s
✅ API integration validated
```
### Example 4: Performance Profiling
```bash
$ /sdd:story-test-integration --performance
[... test execution ...]
⚡ PERFORMANCE PROFILING
=======================
⚠️ Bottleneck detected:
GET /api/tasks with 100+ categories
Response time: 450ms (target: <200ms)
Recommendation:
- Add pagination (limit 25 per page)
- Cache category counts
- Add database indexes
Would you like me to implement optimizations? [y/n]:
```
---
## Success Criteria
**Command succeeds when**:
- All integration tests pass (or auto-fixed)
- Coverage meets project threshold (typically 80%+)
- Performance within acceptable ranges
- Story progress log updated
- Detailed report generated
**Command fails when**:
- Critical test failures cannot be auto-fixed
- Coverage below minimum threshold
- Performance degradation detected
- Context files missing
---
## Output Files
**Generated Reports**:
- Story progress log updated: `/docs/stories/development/STORY-XXX-NNN.md`
- Failure screenshots: `/tmp/test-failure-*.png` (if applicable)
- Coverage reports: `/tests/coverage/` (if `--coverage` flag)
**No New Source Files Created**: This command only executes tests, writes the test artifacts listed above (screenshots, coverage reports), and updates existing story documentation.
---
## Notes
- **Execution Time**: Varies by test scope (3-8 minutes typical)
- **Auto-Fix**: Attempts common fixes automatically (with confirmation)
- **Mobile Testing**: Tests responsive design on mobile viewports
- **Performance**: Optional profiling with `--performance` flag
- **Parallel Execution**: Use `--parallel=N` for faster execution
- **Browser Choice**: Defaults to Chrome, supports Firefox/Safari
**Best Practices**:
1. Run `/sdd:story-quick-check` first for fast validation
2. Fix obvious issues before integration tests
3. Use `--scope` to test specific areas during development
4. Run full suite before moving to `/sdd:story-validate`
5. Review performance metrics for critical paths
**Next Steps After Success**:
```bash
✅ Integration tests passing → /sdd:story-validate
⚠️ Minor warnings → Fix, re-run, then /sdd:story-validate
❌ Critical failures → Fix issues, re-run this command
```

538
commands/story-timebox.md Normal file
View File

@@ -0,0 +1,538 @@
# /sdd:story-timebox
## Meta
- Version: 2.0
- Category: productivity
- Complexity: medium
- Purpose: Set focused work session timer with progress tracking and checkpoints
## Definition
**Purpose**: Start a time-boxed work session with automatic progress checkpoints, metrics tracking, and session logging for story development.
**Syntax**: `/sdd:story-timebox [duration] [mode]`
## Parameters
| Parameter | Type | Required | Default | Description | Validation |
|-----------|------|----------|---------|-------------|------------|
| duration | number | No | 2 | Session duration in hours | 0.5-8 hours |
| mode | string | No | standard | Timer mode: standard or pomodoro | standard/pomodoro |
## INSTRUCTION: Start Timeboxed Work Session
### INPUTS
- duration: Session length in hours (or use default)
- mode: Timer mode (standard with checkpoints or pomodoro technique)
- Current active story from `/docs/stories/development/`
- Session goal from user
### PROCESS
#### Phase 1: Session Initialization
1. **DETERMINE** session parameters:
- IF duration provided: USE specified hours
- IF no duration: DEFAULT to 2 hours
- IF mode provided: USE specified mode (standard/pomodoro)
- IF no mode: DEFAULT to standard
2. **FIND** active story:
- SCAN `/docs/stories/development/` for active story
- IF multiple stories: ASK user which story to focus on
- IF no active story: SUGGEST using `/sdd:story-start [id]` first
3. **ASK** user for session goal:
- "What do you want to accomplish in this session?"
- RECORD goal for tracking
#### Phase 2: Session Planning
1. **CALCULATE** time checkpoints:
IF standard mode (duration in hours):
- Start time: [current timestamp]
- 25% checkpoint: [start + 0.25 * duration]
- 50% checkpoint: [start + 0.50 * duration]
- 75% checkpoint: [start + 0.75 * duration]
- End time: [start + duration]
IF pomodoro mode:
- Calculate 25-minute work intervals
- 5-minute short breaks
- 15-minute long break after 4 intervals
- Total time based on duration
2. **GENERATE** session plan based on goal:
- Break down goal into 4 quarterly segments
- Suggest specific tasks for each quarter
- Include testing and cleanup phases
3. **CREATE** session tracking file:
- Location: `.timebox/session-[timestamp].md`
- Contains: Story ID, goal, plan, checkpoints
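A minimal sketch of the standard-mode checkpoint arithmetic and tracking-file creation described in this phase (bash with GNU `date`; the duration and story ID are illustrative):
```bash
duration_hours=2
duration_min=$(awk -v h="$duration_hours" 'BEGIN { print int(h * 60) }')
start=$(date +%s)

mkdir -p .timebox
session_file=".timebox/session-$(date +%Y%m%d-%H%M%S).md"
{
  echo "# Timebox Session"
  echo "Story: STORY-2025-003"                              # illustrative story ID
  echo "Started: $(date -d "@$start" '+%I:%M %p')"
  for pct in 25 50 75 100; do
    offset=$(( duration_min * pct / 100 * 60 ))             # seconds from session start
    echo "- ${pct}% checkpoint: $(date -d "@$((start + offset))" '+%I:%M %p')"
  done
} > "$session_file"
echo "Session tracking file: $session_file"
```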
#### Phase 3: Session Start Display
1. **DISPLAY** session start summary:
```
⏰ TIMEBOX SESSION STARTED
═══════════════════════════════════
Duration: [X] hours ([X] minutes)
Started: [HH:MM AM/PM]
Ends at: [HH:MM AM/PM]
Story: [STORY-ID] - [Title]
Session Goal: [User's stated goal]
📍 CHECKPOINTS:
- 25% ([HH:MM]): Quick progress check
- 50% ([HH:MM]): Halfway review
- 75% ([HH:MM]): Wrap-up warning
- 100% ([HH:MM]): Session complete
Timer Mode: [Standard/Pomodoro]
```
2. **SHOW** session plan:
```
📋 SESSION PLAN
══════════════════════════════════
Quarter 1 (0-25%): [Focus area]
- [ ] [Specific task 1]
- [ ] [Specific task 2]
Quarter 2 (25-50%): [Focus area]
- [ ] [Specific task 3]
- [ ] [Specific task 4]
Quarter 3 (50-75%): [Focus area]
- [ ] [Test what was built]
- [ ] [Address any issues found]
Quarter 4 (75-100%): [Focus area]
- [ ] [Commit changes]
- [ ] [Update story progress]
- [ ] [Document session notes]
🎯 START NOW: Begin with [first task]
```
3. **REMIND** user:
- Check-ins happen at each checkpoint
- Checkpoints are surfaced when the user provides updates
- Progress is tracked throughout the session
#### Phase 4: Checkpoint Management
**At each 25%, 50%, 75% checkpoint**, IF user interacts:
1. **DISPLAY** checkpoint notification:
```
🔔 CHECKPOINT: [25/50/75]% Complete
═══════════════════════════════════
Time Elapsed: [X] minutes
Time Remaining: [X] minutes
Current Time: [HH:MM AM/PM]
Progress Check:
- Planned: [Tasks planned for this quarter]
- What's done: [Ask user what was completed]
- On track: [Calculate based on response]
[IF on track:]
Great progress! Continue with: [next planned task]
[IF behind:]
Adjust plan: [Suggest focusing on priority items]
[IF ahead:]
Excellent! Consider: [Suggest stretch goals]
```
2. **UPDATE** session tracking file:
- Log checkpoint reached
- Record user's progress update
- Note any plan adjustments
#### Phase 5: Session Metrics Tracking
**Throughout session**, TRACK:
1. **Code Metrics** (if user commits/shows changes):
- Lines added/removed
- Files modified
- Commits made
2. **Test Metrics** (if tests run):
- Tests added
- Tests passing/failing
- Coverage changes
3. **Progress Metrics**:
- Tasks completed vs planned
- Checkpoint adherence
- Goal achievement percentage
#### Phase 6: Session Interruption Handling
IF user says "pause", "stop", "brb", or similar:
1. **PAUSE** session:
```
⏸️ SESSION PAUSED
═══════════════════════════════════
Paused at: [HH:MM AM/PM]
Time Elapsed: [X] minutes
Time Remaining: [X] minutes
Progress so far:
- [Tasks completed]
To resume: Simply mention you're back or say "resume"
To end early: Say "end session"
```
2. **SAVE** pause state to session tracking file
IF user returns:
1. **RESUME** session:
```
▶️ SESSION RESUMED
═══════════════════════════════════
Welcome back!
Time Remaining: [X] minutes
Next Checkpoint: [time]
Continue with: [current task]
```
#### Phase 7: Session Completion
**At session end** OR IF user says "end session":
1. **DISPLAY** session summary:
```
✅ SESSION COMPLETE
═══════════════════════════════════
Total Duration: [X] hours [X] minutes
Story: [STORY-ID] - [Title]
SESSION GOAL:
[Original goal stated]
ACCOMPLISHED:
✓ [Completed task 1]
✓ [Completed task 2]
✓ [Completed task 3]
⏳ [Partial task - note what's left]
SESSION METRICS:
- Planned vs Actual: [X]%
- Code changes: [X] files, [X] lines
- Commits: [X]
- Tests: [X] added, [X/Y] passing
- Checkpoints hit: [X/4]
WHAT WENT WELL:
- [Success point 1]
- [Success point 2]
CHALLENGES:
- [Challenge encountered]
- [How addressed or needs addressing]
FOR NEXT SESSION:
- [Specific next task to start with]
- [Any blockers to resolve first]
- [Estimated time needed]
NEXT STEPS:
1. /sdd:story-save # Save progress to story
2. /sdd:story-quick-check # Verify everything works
3. Take a break! 🎉
```
2. **ASK** user for session notes:
- "What went well this session?"
- "What was challenging?"
- "What should you focus on next time?"
3. **UPDATE** story progress log:
- Append session summary to story file
- Note tasks completed and time spent
- Record any blockers or notes
4. **SAVE** session to history:
- Complete session tracking file
- Add to `.timebox/history/` for velocity analysis
5. **SUGGEST** next session:
- Based on progress rate
- Consider remaining work
- Recommend duration and focus
#### Phase 8: Pomodoro Mode (Special Handling)
IF mode = pomodoro:
1. **STRUCTURE** session as intervals:
```
🍅 POMODORO MODE ACTIVE
═══════════════════════════════════
Session Structure:
🍅 Pomodoro 1: 25 minutes (Focus)
☕ Break: 5 minutes
🍅 Pomodoro 2: 25 minutes (Focus)
☕ Break: 5 minutes
🍅 Pomodoro 3: 25 minutes (Focus)
☕ Break: 5 minutes
🍅 Pomodoro 4: 25 minutes (Focus)
🎉 Long Break: 15 minutes
Total Time: ~2 hours
Current: 🍅 Pomodoro 1
Focus Task: [First planned task]
Time Remaining: 25:00
```
2. **AT EACH INTERVAL END**:
- Notify completion
- Show brief summary
- Announce break time or next pomodoro
- Track completed pomodoros
3. **DURING BREAKS**:
- Remind user to step away
- Show break time remaining
- Announce when break ends
### OUTPUTS
- `.timebox/session-[timestamp].md` - Session tracking file
- Updated story progress log with session summary
- Session history added to `.timebox/history/`
- Velocity metrics for future planning
### RULES
- MUST find active story before starting session
- MUST calculate accurate checkpoint times
- MUST track session start and end timestamps
- MUST save session data to tracking file
- SHOULD remind at checkpoints if user is active
- SHOULD track metrics if commits/tests are mentioned
- SHOULD update story progress log at session end
- NEVER interrupt user during focused work
- ALWAYS provide encouragement and progress acknowledgment
- MUST handle pause/resume gracefully
## Timer Mode Details
### Standard Mode
- Single continuous session with 4 checkpoints (25%, 50%, 75%, 100%)
- Flexible focus periods
- Checkpoints for progress assessment and plan adjustment
- Best for: Larger tasks, exploratory work, complex implementations
### Pomodoro Mode
- Fixed 25-minute work intervals
- Mandatory 5-minute breaks between intervals
- 15-minute long break after 4 intervals
- Structured and disciplined approach
- Best for: Deep focus, avoiding burnout, building work habits
## Examples
### Example 1: Standard 2-Hour Session
```bash
INPUT:
/sdd:story-timebox
OUTPUT:
→ Finding active story...
→ Located: STORY-2025-003 in development
What do you want to accomplish in this session?
> Implement the task editing feature and write tests
⏰ TIMEBOX SESSION STARTED
═══════════════════════════════════
Duration: 2 hours (120 minutes)
Started: 2:00 PM
Ends at: 4:00 PM
Story: STORY-2025-003 - Task Management System
Session Goal: Implement task editing feature and write tests
📍 CHECKPOINTS:
- 25% (2:30 PM): Quick progress check
- 50% (3:00 PM): Halfway review
- 75% (3:30 PM): Wrap-up warning
- 100% (4:00 PM): Session complete
Timer Mode: Standard
📋 SESSION PLAN
══════════════════════════════════
Quarter 1 (0-25%): Feature Implementation
- [ ] Create TaskEdit Livewire component
- [ ] Add edit form with validation
Quarter 2 (25-50%): Continue Implementation
- [ ] Implement update logic in controller
- [ ] Add authorization checks
Quarter 3 (50-75%): Testing
- [ ] Write feature tests for edit flow
- [ ] Test validation rules
- [ ] Test authorization
Quarter 4 (75-100%): Cleanup & Documentation
- [ ] Run all tests and verify passing
- [ ] Commit changes with descriptive message
- [ ] Update story progress log
🎯 START NOW: Begin with creating TaskEdit Livewire component
```
### Example 2: Pomodoro Mode
```bash
INPUT:
/sdd:story-timebox 2 pomodoro
OUTPUT:
→ Finding active story...
→ Located: STORY-2025-005 in development
What do you want to accomplish in this session?
> Refactor TaskManager component to reduce complexity
🍅 POMODORO MODE ACTIVE
═══════════════════════════════════
Session Structure:
🍅 Pomodoro 1: 25 minutes (Focus)
☕ Break: 5 minutes
🍅 Pomodoro 2: 25 minutes (Focus)
☕ Break: 5 minutes
🍅 Pomodoro 3: 25 minutes (Focus)
☕ Break: 5 minutes
🍅 Pomodoro 4: 25 minutes (Focus)
🎉 Long Break: 15 minutes
Total Time: ~2 hours
Story: STORY-2025-005 - Code Refactoring
Session Goal: Refactor TaskManager component to reduce complexity
🍅 POMODORO 1 (25:00)
Focus: Extract reusable methods from TaskManager
- [ ] Identify duplicate code blocks
- [ ] Create private helper methods
🎯 START NOW: Begin analyzing TaskManager for duplication
[After 25 minutes]
✅ POMODORO 1 COMPLETE
Nice work! Take a 5-minute break.
☕ Break time: 5:00
Next: 🍅 Pomodoro 2 - Continue refactoring
```
### Example 3: Short 30-Minute Session
```bash
INPUT:
/sdd:story-timebox 0.5
OUTPUT:
→ Finding active story...
→ Located: STORY-2025-007 in development
What do you want to accomplish in this session?
> Add validation rules to API endpoints
⏰ TIMEBOX SESSION STARTED
═══════════════════════════════════
Duration: 0.5 hours (30 minutes)
Started: 10:00 AM
Ends at: 10:30 AM
Story: STORY-2025-007 - REST API Development
Session Goal: Add validation rules to API endpoints
📍 CHECKPOINTS:
- 50% (10:15 AM): Halfway check
- 100% (10:30 AM): Session complete
Timer Mode: Standard (short session - fewer checkpoints)
📋 SESSION PLAN
══════════════════════════════════
First Half (0-50%): Implementation
- [ ] Add validation rules to TaskController
- [ ] Add validation rules to CategoryController
Second Half (50-100%): Testing & Wrap-up
- [ ] Test validation with invalid data
- [ ] Commit changes
🎯 START NOW: Add validation to TaskController
```
## Edge Cases
### No Active Story
- DETECT no story in `/docs/stories/development/`
- SUGGEST using `/sdd:story-start [id]` to begin a story
- OFFER to start session without story tracking
- EXIT if user declines
### Session Already Active
- DETECT existing session tracking file
- ASK user: Resume previous session or start new?
- IF resume: Load previous session state
- IF new: Complete previous session first
### Very Long Duration (> 4 hours)
- WARN about diminishing returns beyond 4 hours
- SUGGEST breaking into multiple sessions
- OFFER to set up with extra break time
- ALLOW if user confirms
### User Goes Silent Mid-Session
- Don't interrupt if user is focused
- Only mention checkpoints if user becomes active near checkpoint time
- Session tracking continues regardless
## Error Handling
- **No active story**: Suggest `/sdd:story-start [id]` or allow storyless session
- **Invalid duration**: Suggest valid range (0.5-8 hours)
- **Invalid mode**: Suggest "standard" or "pomodoro"
- **Session file write error**: Log warning, continue without persistent tracking
## Performance Considerations
- Session tracking is lightweight (< 1KB file)
- Checkpoint calculations happen at start (no ongoing computation)
- Metrics collected passively from user updates
- History files archived monthly to maintain performance
## Related Commands
- `/sdd:story-start [id]` - Begin story before timeboxing
- `/sdd:story-save` - Save progress after session
- `/sdd:story-quick-check` - Verify work after session
- `/sdd:project-status` - View velocity metrics from past sessions
## Constraints
- ✅ MUST find or create active story context
- ✅ MUST calculate accurate checkpoint times
- ✅ MUST save session data for history
- 📋 SHOULD remind at checkpoints (if user active)
- 🔧 SHOULD track metrics from user updates
- 💾 MUST update story progress log at end
- ⚠️ NEVER interrupt during focused work
- 🎯 ALWAYS acknowledge progress and provide encouragement
- ⏸️ MUST handle pause/resume gracefully
- 🧪 SHOULD suggest realistic next sessions based on velocity

149
commands/story-today.md Normal file
View File

@@ -0,0 +1,149 @@
# /sdd:story-today
Shows current story, stage, and next actions for today's work.
## Implementation
**Format**: Structured (standard)
**Actions**: Read-only summary
**Modifications**: None
### Discovery
1. Check current git context:
- Active branch name
- Uncommitted changes count
- Last commit timestamp
2. Find active stories:
- List files in `/docs/stories/development/`
- List files in `/docs/stories/review/`
- List files in `/docs/stories/qa/`
- Match branch name to story files
3. Parse active story:
- Read story metadata from YAML frontmatter
- Extract progress log entries
- Identify completed vs remaining tasks
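A minimal sketch of the discovery steps above (bash + git); the branch-to-story matching rule is an assumption based on the `feature/[story-id-kebab-case]` convention noted in `/sdd:story-start`:
```bash
# Git context
branch=$(git branch --show-current)
uncommitted=$(git status --porcelain | wc -l)
last_commit=$(git log -1 --format='%ci' 2>/dev/null)
echo "Branch: $branch | Uncommitted: $uncommitted | Last commit: ${last_commit:-none}"

# Match the branch to a story file across the active stage directories
for stage in development review qa; do
  for file in /docs/stories/"$stage"/*.md; do
    [ -e "$file" ] || continue
    id=$(basename "$file" .md)
    slug=$(echo "$id" | sed 's/^STORY-//' | tr '[:upper:]' '[:lower:]')   # STORY-AUTH-001 → auth-001
    if [[ "$branch" == *"$slug"* ]]; then
      echo "Active story: $id ($stage)"
    fi
  done
done
```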
### Output Format
```
📅 TODAY'S FOCUS
===============
Date: [Today's date]
Time: [Current time]
🎯 ACTIVE STORY
--------------
[STORY-ID]: [Title]
Status: [development/review/qa]
Branch: [branch-name]
Started: [X days ago]
📊 CURRENT PROGRESS
-----------------
Last activity: [last commit/change]
Completed:
✅ [Completed item 1]
✅ [Completed item 2]
In Progress:
🔄 [Current task]
Remaining:
⏳ [Todo item 1]
⏳ [Todo item 2]
```
#### Next Actions
```
🚀 NEXT ACTIONS
--------------
1. [Specific next task]
Command: /sdd:story-[appropriate]
2. [Second priority task]
Command: /sdd:story-[appropriate]
3. [Third task if applicable]
```
#### Attention Needed
If blockers exist:
```
⚠️ ATTENTION NEEDED
------------------
- [Failing tests]
- [Unresolved conflicts]
- [Missing dependencies]
- [Review feedback]
```
#### Time Management
```
⏰ TIME ALLOCATION
-----------------
Suggested for today:
- Morning: [Main development task]
- Afternoon: [Testing/review]
Estimated to complete: [X hours]
Consider: /sdd:story-timebox 2
```
#### Context Reminders
If applicable:
```
📌 REMEMBER
----------
- [Project standard]
- [Technical decision]
- [Deadline]
```
#### Project Health
```
💚 PROJECT HEALTH
----------------
Tests: [Passing/Failing]
Build: [Success/Failed]
Coverage: [X%]
Lint: [Clean/Issues]
```
### Empty State
If no active story:
```
💡 NO ACTIVE STORY
-----------------
Options:
1. Continue previous: /sdd:story-continue
2. Start new: /sdd:story-new
3. Review backlog: /sdd:story-next
4. Fix tech debt: /sdd:story-tech-debt
```
#### Standup Summary
Always include:
```
📢 STANDUP SUMMARY
-----------------
Yesterday: [What was completed]
Today: [What will be worked on]
Blockers: [Any impediments]
```
### Notes
- Read-only display of current state
- Does not modify any files
- Suggests highest priority task to begin with

668
commands/story-validate.md Normal file
View File

@@ -0,0 +1,668 @@
# /sdd:story-validate
## Meta
- Version: 2.0
- Category: quality-gates
- Complexity: medium
- Purpose: Final validation of a story against its acceptance criteria before production deployment
## Definition
**Purpose**: Execute a comprehensive final validation to ensure all acceptance criteria are met, all tests pass, and the story is production-ready.
**Syntax**: `/sdd:story-validate [story_id]`
## Parameters
| Parameter | Type | Required | Default | Description | Validation |
|-----------|------|----------|---------|-------------|------------|
| story_id | string | No | current branch | Story ID (STORY-YYYY-NNN) | Must match format STORY-YYYY-NNN |
## INSTRUCTION: Execute Final Story Validation
### INPUTS
- story_id: Story identifier (auto-detected from branch if not provided)
- Project context from `/docs/project-context/` directory
- Story file from `/docs/stories/qa/[story-id].md`
- Complete test suite results from QA
- Acceptance criteria from story file
### PROCESS
#### Phase 1: Project Context Loading
1. **CHECK** if `/docs/project-context/` directory exists
2. IF missing:
- SUGGEST running `/sdd:project-init` first
- EXIT with initialization guidance
3. **LOAD** project-specific validation requirements from:
- `/docs/project-context/technical-stack.md` - Testing tools and validation methods
- `/docs/project-context/coding-standards.md` - Quality thresholds and criteria
- `/docs/project-context/development-process.md` - Validation stage requirements
#### Phase 2: Story Identification & Validation
1. IF story_id NOT provided:
- **DETECT** current git branch
- **EXTRACT** story ID from branch name
- EXAMPLE: Branch `feature/STORY-2025-001-auth` → ID `STORY-2025-001`
2. **VALIDATE** story exists and is ready:
- CHECK `/docs/stories/qa/[story-id].md` exists
- IF NOT found in QA:
- CHECK if in `/docs/stories/development/` or `/docs/stories/review/`
- ERROR: "Story must complete QA before validation"
- SUGGEST appropriate command to progress
- IF in `/docs/stories/completed/`:
- ERROR: "Story already completed and shipped"
- IF NOT found anywhere:
- ERROR: "Story [story-id] not found"
- EXIT with guidance
3. **READ** story file for:
- Success Criteria (acceptance criteria)
- Implementation Checklist
- QA Checklist
- Technical Notes and concerns
- Rollback Plan
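A minimal sketch of the identification logic in this phase: derive the story ID from the current branch name and confirm the story file sits in the QA stage before validating:
```bash
# e.g. feature/STORY-2025-001-auth → STORY-2025-001
branch=$(git branch --show-current)
story_id=$(echo "$branch" | grep -oE 'STORY-[0-9]{4}-[0-9]{3}')

if [ -z "$story_id" ]; then
  echo "❌ Could not extract a story ID from branch '$branch'"
elif [ -f "/docs/stories/qa/${story_id}.md" ]; then
  echo "✓ Validating ${story_id}"
elif [ -f "/docs/stories/completed/${story_id}.md" ]; then
  echo "❌ ${story_id} already completed and shipped"
elif [ -f "/docs/stories/development/${story_id}.md" ] || [ -f "/docs/stories/review/${story_id}.md" ]; then
  echo "❌ ${story_id} must complete QA before validation"
else
  echo "❌ Story ${story_id} not found"
fi
```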
#### Phase 3: Acceptance Criteria Validation
##### 3.1 Load and Parse Criteria
1. **EXTRACT** all acceptance criteria from Success Criteria section
2. **COUNT** total criteria
3. **IDENTIFY** browser test mappings from QA results
##### 3.2 Validate Each Criterion
1. FOR each acceptance criterion:
```
✓ [Criterion]: PASSED/FAILED
How tested: [Discovered browser testing framework]
Evidence: [Test file path and line number]
Screenshots: [Screenshot path]
Validation method: [Automated browser test/Manual verification]
```
2. **MAP** criteria to browser tests:
- Laravel: `tests/Browser/[StoryId]Test.php`
- Node.js Playwright: `tests/e2e/[story-id].spec.js`
- Python Playwright: `tests/browser/test_[story_id].py`
3. **VERIFY** test evidence:
- READ test file to confirm test exists
- CHECK QA results for test pass status
- VALIDATE screenshot exists (if applicable)
- CONFIRM test actually validates the criterion
4. **MARK** criterion validation:
```markdown
## Success Criteria
- [x] User can toggle dark mode
* Tested by: tests/Browser/DarkModeTest.php::line 45
* Evidence: Browser test passed, screenshot saved
* Validated: 2025-10-01
```
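A minimal sketch of the evidence check in step 3 (bash; the criterion text, search phrase, and test file path are illustrative, following the Laravel/Pest mapping above):
```bash
criterion="User can toggle dark mode"            # illustrative criterion
test_file="tests/Browser/DarkModeTest.php"       # illustrative mapped browser test

if [ ! -f "$test_file" ]; then
  echo "❌ No browser test file found for: $criterion"
elif grep -qi "toggle dark mode" "$test_file"; then
  line=$(grep -ni "toggle dark mode" "$test_file" | head -1 | cut -d: -f1)
  echo "✓ '$criterion' covered by $test_file::line $line"
else
  echo "⚠️ $test_file exists but contains no test for: $criterion"
fi
```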
#### Phase 4: Implementation Completeness Check
##### 4.1 Core Features Validation
1. **CHECK** Implementation Checklist:
```
✅ Core Features (using discovered standards):
- [x] Feature implementation → Code complete and functional
- [x] Unit tests → 87% coverage (target: 80% from standards)
- [x] Integration tests → All feature tests passing
- [x] Browser test coverage → 100% of acceptance criteria
- [x] All discovered tests passing → Unit + Feature + Browser
```
2. **VALIDATE** each checklist item:
- Feature implementation: CODE exists and works
- Unit tests: COVERAGE meets threshold from coding-standards.md
- Integration tests: ALL feature tests PASS
- Browser tests: 100% acceptance criteria coverage
- All tests passing: NO failures in any test suite
##### 4.2 Quality Standards Validation
1. **CHECK** quality items:
```
✅ Quality Standards:
- [x] Error handling → Try/catch blocks, graceful failures
- [x] Loading states → Spinners, skeleton screens implemented
- [x] Documentation → Code comments, README updated
```
2. **VALIDATE**:
- Error handling: CHECK for try/catch, error boundaries, validation
- Loading states: VERIFY wire:loading, spinners, feedback
- Documentation: CONFIRM inline docs, updated README/CHANGELOG
##### 4.3 Non-Functional Requirements
1. **CHECK** non-functional items:
```
✅ Non-Functional:
- [x] Performance → Response times meet targets
- [x] Accessibility → WCAG AA compliance
- [x] Security → No vulnerabilities, auth working
```
2. **VALIDATE** from QA results:
- Performance: METRICS from QA meet story targets
- Accessibility: ARIA labels, keyboard nav, contrast checked
- Security: NO vulnerabilities in audit, auth/authz verified
#### Phase 5: Rollback Plan Verification
1. **CHECK** if Rollback Plan section is populated
2. IF empty or minimal:
- WARN: "Rollback plan should be documented"
- SUGGEST rollback steps based on changes
- OFFER to populate from git diff
3. **VALIDATE** rollback plan contains:
- Clear steps to revert changes
- Database migration rollback (if applicable)
- Cache clearing instructions
- Service restart procedures (if needed)
- Verification steps after rollback
4. **TEST** rollback feasibility (if possible):
- Verify migrations are reversible
- Check for data loss risks
- Confirm no breaking changes to shared code
#### Phase 6: Final Checks
##### 6.1 Production Readiness Checklist
1. **EXECUTE** final readiness validation:
```
🚀 READY FOR PRODUCTION?
════════════════════════════════════════════════
☑ All acceptance criteria met
☑ All tests passing
☑ All acceptance criteria covered by automated browser tests
☑ Browser test suite passes completely
☑ Code reviewed and approved
☑ Documentation complete
☑ Performance acceptable
☑ Security verified
☑ Rollback plan ready
☑ Monitoring configured (if applicable)
CONFIDENCE LEVEL: [High/Medium/Low]
```
2. **CALCULATE** confidence level (sketched after this list):
- HIGH: All items ✓, no warnings, comprehensive tests
- MEDIUM: All items ✓, some warnings, adequate tests
- LOW: Missing items, concerns, or gaps in testing
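The rule in step 2 is simple enough to express directly. A minimal sketch, assuming readiness items arrive as booleans and warnings as a list; the "comprehensive vs adequate tests" distinction is folded into the warnings here.
```python
def confidence_level(checklist: dict[str, bool], warnings: list[str]) -> str:
    """Apply the HIGH/MEDIUM/LOW rule from the readiness checklist."""
    if not all(checklist.values()):
        return "LOW"      # missing items, concerns, or gaps in testing
    if warnings:
        return "MEDIUM"   # all items pass, but warnings remain
    return "HIGH"         # all items pass, no warnings

# Example:
level = confidence_level(
    {"criteria_met": True, "tests_passing": True, "rollback_ready": True},
    warnings=["Bundle size increased by 15KB"],
)  # -> "MEDIUM"
```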
##### 6.2 Risk Assessment
1. **IDENTIFY** remaining risks:
```
RISKS:
- [Risk 1]: [Description] - Mitigation: [How to handle]
- [Risk 2]: [Description] - Mitigation: [How to handle]
```
2. **ASSESS** risk levels:
- Database migrations: Risk of data loss?
- API changes: Breaking changes for consumers?
- Dependency updates: Compatibility issues?
- Performance impact: Degradation possible?
- Security changes: New attack vectors?
##### 6.3 Dependency Validation
1. **CHECK** external dependencies:
```
DEPENDENCIES:
- [Dependency 1]: [Status - Ready/Not Ready]
- [Dependency 2]: [Status - Ready/Not Ready]
```
2. **VALIDATE** (the environment-variable check is sketched after this list):
- External services: Available and tested?
- Third-party APIs: Credentials configured?
- Database migrations: Run on staging?
- Feature flags: Configured correctly?
- Environment variables: Set in production?
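The environment-variable item can be approximated locally by diffing the declared keys in `.env.example` against the target environment file; both file names below are assumptions about the project layout.
```python
from pathlib import Path

def missing_env_keys(example: str = ".env.example", target: str = ".env.production") -> set[str]:
    """Keys declared in the example env file but absent from the target env file."""
    def keys(path: str) -> set[str]:
        lines = Path(path).read_text().splitlines()
        return {
            line.split("=", 1)[0].strip()
            for line in lines
            if line.strip() and not line.lstrip().startswith("#") and "=" in line
        }
    return keys(example) - keys(target)
```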
#### Phase 7: Validation Report Generation
1. **COMPILE** all validation results
2. **DETERMINE** overall status:
- ✅ READY TO SHIP: All checks pass, high confidence
- ⚠️ NEEDS WORK: Critical items missing or failing
3. **GENERATE** validation report:
```
📄 VALIDATION SUMMARY
════════════════════════════════════════════════
Story: [STORY-ID] - [Title]
Validated: [Timestamp]
Validator: Claude Code (Automated)
RESULT: ✅ READY TO SHIP / ⚠️ NEEDS WORK
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📋 ACCEPTANCE CRITERIA: 5/5 PASSED
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✓ User can toggle dark mode
→ tests/Browser/DarkModeTest.php::line 45 ✅
✓ Theme persists across sessions
→ tests/Browser/DarkModeTest.php::line 67 ✅
✓ All UI components support both themes
→ tests/Browser/DarkModeTest.php::line 89 ✅
✓ Keyboard shortcut (Cmd+Shift+D) works
→ tests/Browser/DarkModeTest.php::line 112 ✅
✓ Preference syncs across browser tabs
→ tests/Browser/DarkModeTest.php::line 134 ✅
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 QUALITY METRICS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Passed Criteria: 5/5 (100%)
Test Coverage: 87% (target: 80% ✅)
Quality Score: 9.2/10
Performance: All targets met ✅
Security: No vulnerabilities ✅
Accessibility: WCAG AA compliant ✅
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🚀 PRODUCTION READINESS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ All acceptance criteria met
✅ All tests passing (76/76)
✅ Browser test coverage: 100%
✅ Code reviewed and approved
✅ Documentation complete
✅ Performance acceptable
✅ Security verified
✅ Rollback plan documented
CONFIDENCE LEVEL: ✅ HIGH
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
⚠️ RISKS & MITIGATIONS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Low Risk: CSS changes may affect custom themes
Mitigation: Browser tests cover theme switching,
rollback plan ready
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔗 DEPENDENCIES
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ No external dependencies
[IF NOT READY:]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
❌ BLOCKING ISSUES
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
1. Acceptance criterion "X" not validated by browser test
→ Create browser test in tests/Browser/
2. Rollback plan not documented
→ Add rollback steps to story file
3. Performance target not met (450ms vs 200ms)
→ Optimize database queries
[IF READY:]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ SHIP CHECKLIST
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
1. /sdd:story-ship [story-id] # Deploy to production
2. Monitor application after deployment
3. Be ready to execute rollback plan if needed
4. Document lessons learned after ship
```
#### Phase 8: Story File Updates
1. **UPDATE** Success Criteria with validation evidence (see the sketch after this list):
```markdown
## Success Criteria
- [x] User can toggle dark mode
* Tested by: tests/Browser/DarkModeTest.php::line 45
* Evidence: Browser test passed on 2025-10-01
* Screenshot: /storage/screenshots/STORY-2025-003/toggle.png
- [x] Theme persists across sessions
* Tested by: tests/Browser/DarkModeTest.php::line 67
* Evidence: Browser test passed on 2025-10-01
```
2. **MARK** remaining checklist items:
- Implementation Checklist: `[x]` any newly validated items
- QA Checklist: `[x]` any newly validated items
3. **ADD** validation entry to Progress Log:
```markdown
- [Today]: Final validation completed
* All 5 acceptance criteria validated ✅
* Test coverage: 87% (exceeds 80% target)
* Performance: All targets met
* Security: No vulnerabilities
* Rollback plan: Documented and verified
* Confidence level: HIGH
* Status: READY TO SHIP
```
4. **RECORD** validation results:
- Validation timestamp
- Confidence level
- Risk assessment
- Dependency status
- Any conditions for shipping
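A sketch of the Success Criteria update in step 1: it ticks the matching checkbox in the story file and appends the evidence bullets shown above. The match is a plain substring check; real criteria may need fuzzier matching, and the story path is an assumption.
```python
from pathlib import Path

def mark_criterion_validated(story_path: str, criterion: str, evidence: list[str]) -> None:
    """Tick '- [ ] <criterion>' and append indented evidence bullets beneath it."""
    path = Path(story_path)
    updated = []
    for line in path.read_text().splitlines():
        if criterion in line and line.lstrip().startswith("- [ ]"):
            updated.append(line.replace("- [ ]", "- [x]", 1))
            updated.extend(f"  * {item}" for item in evidence)
        else:
            updated.append(line)
    path.write_text("\n".join(updated) + "\n")

# Usage:
mark_criterion_validated(
    "docs/stories/qa/STORY-2025-003.md",
    "User can toggle dark mode",
    ["Tested by: tests/Browser/DarkModeTest.php::line 45",
     "Evidence: Browser test passed on 2025-10-01"],
)
```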
#### Phase 9: Next Steps
1. **DISPLAY** validation outcome:
```
💡 NEXT STEPS:
════════════════════════════════════════════════
[IF READY TO SHIP:]
✅ Story validated and ready for production
1. /sdd:story-ship [story-id] # Deploy to production
- Creates PR (if not created)
- Merges to main branch
- Moves story to completed
- Tags release
2. Post-Deployment Actions:
- Monitor application logs
- Watch performance metrics
- Be ready for rollback
- Document lessons learned
[IF NEEDS WORK:]
⚠️ X critical issues prevent shipping
1. /sdd:story-refactor [story-id] # Return to development
2. Address blocking issues:
- [Issue 1]
- [Issue 2]
3. /sdd:story-review [story-id] # Re-run review
4. /sdd:story-qa [story-id] # Re-run QA
5. /sdd:story-validate [story-id] # Re-validate
[MONITORING COMMANDS:]
# Laravel:
php artisan pail # Watch logs in real-time
php artisan telescope:clear # Clear old monitoring data
# System:
tail -f storage/logs/laravel.log # Follow application logs
```
### OUTPUTS
- Updated `/docs/stories/qa/[story-id].md` with validation results
- Validation summary report (displayed to user)
- Updated Success Criteria with test evidence
- Updated checklists with final validation status
- Progress log entry with validation timestamp
### RULES
- MUST load project context before validation
- MUST validate ALL acceptance criteria with evidence
- MUST verify 100% browser test coverage of criteria
- MUST check rollback plan is documented
- MUST assess production readiness
- SHOULD identify risks and mitigations
- SHOULD validate external dependencies
- NEVER mark story ready with failing tests
- NEVER skip acceptance criteria validation
- ALWAYS provide test evidence for each criterion
- ALWAYS update Success Criteria with validation details
- ALWAYS record confidence level in validation
## Examples
### Example 1: Validation Ready to Ship
```bash
INPUT:
/sdd:story-validate STORY-2025-003
OUTPUT:
→ Loading project context...
✓ technical-stack.md: Laravel + Pest + Playwright
✓ coding-standards.md: 80% coverage target
✓ development-process.md: Validation requirements
→ Loading story requirements...
✓ Story: STORY-2025-003 - Dark Mode Toggle
✓ Success Criteria: 5 criteria
✓ Implementation Checklist: 9/9 complete
✓ QA Checklist: 6/6 complete
→ Validating acceptance criteria...
✓ [1/5] User can toggle dark mode
→ tests/Browser/DarkModeTest.php::line 45 ✅
✓ [2/5] Theme persists across sessions
→ tests/Browser/DarkModeTest.php::line 67 ✅
✓ [3/5] All UI components support both themes
→ tests/Browser/DarkModeTest.php::line 89 ✅
✓ [4/5] Keyboard shortcut works
→ tests/Browser/DarkModeTest.php::line 112 ✅
✓ [5/5] Preference syncs across tabs
→ tests/Browser/DarkModeTest.php::line 134 ✅
→ Checking implementation completeness...
✓ Feature implementation: Complete
✓ Unit tests: 87% coverage (target: 80%)
✓ Browser tests: 100% criteria coverage
✓ All tests passing: 76/76
→ Verifying rollback plan...
✓ Rollback plan documented
✓ Migrations are reversible
→ Final production readiness checks...
✓ All acceptance criteria met
✓ All tests passing
✓ Code reviewed
✓ Documentation complete
✓ Performance acceptable
✓ Security verified
✓ Rollback plan ready
📄 VALIDATION SUMMARY
════════════════════════════════════════════════
Story: STORY-2025-003 - Dark Mode Toggle
Validated: 2025-10-01 17:45:23
RESULT: ✅ READY TO SHIP
Passed Criteria: 5/5 (100%)
Test Coverage: 87%
Quality Score: 9.2/10
CONFIDENCE LEVEL: ✅ HIGH
💡 NEXT STEPS:
════════════════════════════════════════════════
✅ Story validated and ready for production
1. /sdd:story-ship STORY-2025-003 # Deploy to production
2. Monitor application after deployment
3. Be ready to execute rollback if needed
```
### Example 2: Validation with Blocking Issues
```bash
INPUT:
/sdd:story-validate STORY-2025-004
OUTPUT:
→ Loading project context...
✓ technical-stack.md loaded
→ Loading story requirements...
✓ Story: STORY-2025-004 - Notification System
✓ Success Criteria: 4 criteria
→ Validating acceptance criteria...
✓ [1/4] Notifications appear on events
→ tests/Browser/NotificationTest.php::line 23 ✅
✓ [2/4] Notifications can be dismissed
→ tests/Browser/NotificationTest.php::line 45 ✅
✗ [3/4] Notifications persist across sessions
→ NO BROWSER TEST FOUND ❌
✗ [4/4] Email notifications sent
→ NO BROWSER TEST FOUND ❌
→ Checking rollback plan...
✗ Rollback plan not documented
📄 VALIDATION SUMMARY
════════════════════════════════════════════════
Story: STORY-2025-004 - Notification System
Validated: 2025-10-01 18:12:45
RESULT: ⚠️ NEEDS WORK
Passed Criteria: 2/4 (50%)
Test Coverage: 78% (below 80% target)
❌ BLOCKING ISSUES:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
1. Acceptance criterion "Notifications persist across sessions" not validated
→ Create browser test in tests/Browser/NotificationTest.php
2. Acceptance criterion "Email notifications sent" not validated
→ Create browser test for email sending
3. Rollback plan not documented
→ Add rollback steps to story file
4. Test coverage 78% below target 80%
→ Add tests to increase coverage
💡 NEXT STEPS:
════════════════════════════════════════════════
⚠️ 4 critical issues prevent shipping
1. /sdd:story-refactor STORY-2025-004 # Return to development
2. Address blocking issues:
- Add browser test for session persistence
- Add browser test for email notifications
- Document rollback plan
- Increase test coverage to 80%+
3. /sdd:story-review STORY-2025-004 # Re-run review
4. /sdd:story-qa STORY-2025-004 # Re-run QA
5. /sdd:story-validate STORY-2025-004 # Re-validate
```
### Example 3: Validation with Warnings (Still Ship-Ready)
```bash
INPUT:
/sdd:story-validate
OUTPUT:
→ Detecting story from current branch...
✓ Current branch: feature/STORY-2025-005-search
✓ Story ID: STORY-2025-005
→ Loading story requirements...
✓ Success Criteria: 3 criteria
→ Validating acceptance criteria...
✓ All 3 criteria validated by browser tests
→ Assessing risks...
⚠️ Database migration changes column type
⚠️ Bundle size increased by 15KB
📄 VALIDATION SUMMARY
════════════════════════════════════════════════
Story: STORY-2025-005 - Advanced Search
Validated: 2025-10-01 19:23:11
RESULT: ✅ READY TO SHIP (with warnings)
Passed Criteria: 3/3 (100%)
Test Coverage: 91%
CONFIDENCE LEVEL: ⚠️ MEDIUM
⚠️ WARNINGS (non-blocking):
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
1. Database migration changes column type
Risk: Potential data loss on rollback
Mitigation: Backup database before deployment
2. Bundle size increased by 15KB
Risk: Slightly slower page loads
Mitigation: Monitor performance metrics
💡 NEXT STEPS:
════════════════════════════════════════════════
✅ Story validated - Ready to ship (review the warnings above first)
1. /sdd:story-ship STORY-2025-005 # Deploy to production
2. Backup database before deployment
3. Monitor performance after deployment
```
## Edge Cases
### No Project Context
- DETECT missing `/docs/project-context/` directory
- SUGGEST running `/sdd:project-init`
- OFFER to validate with basic checks
- WARN that validation will be incomplete
### Story Not in QA
- CHECK if story in `/docs/stories/development/` or `/docs/stories/review/`
- ERROR: "Story must complete QA before validation"
- PROVIDE workflow guidance to reach QA stage
- SUGGEST appropriate command
### Missing Browser Tests
- DETECT acceptance criteria without browser test evidence
- COUNT uncovered criteria
- BLOCK validation if coverage < 100%
- PROVIDE test file examples for the discovered stack (see the sketch below)
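For the Python Playwright stack listed in Phase 3.2, an example test file could look like the sketch below (using pytest-playwright's `page` fixture and Playwright's sync assertions); the URL, accessible name, and theme class are assumptions about the application under test.
```python
# tests/browser/test_story_2025_003.py (path convention from Phase 3.2)
import re
from playwright.sync_api import Page, expect

def test_user_can_toggle_dark_mode(page: Page):
    """Acceptance criterion: 'User can toggle dark mode'."""
    page.goto("http://localhost:8000/settings")           # assumed local URL
    page.get_by_role("switch", name="Dark mode").click()  # assumed accessible name
    expect(page.locator("html")).to_have_class(re.compile(r"\bdark\b"))  # assumed theme class
```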
### Incomplete Checklists
- DETECT unchecked items in Implementation/QA checklists
- LIST incomplete items
- ASSESS if items are truly incomplete or just not checked
- WARN if critical items unchecked
### Rollback Plan Empty
- DETECT missing or minimal rollback plan
- SUGGEST rollback steps based on git diff
- OFFER to auto-generate a basic rollback plan (see the sketch after this list)
- WARN that deployment without rollback plan is risky
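One way to auto-generate suggestions is to map the files changed on the branch to rough rollback steps. The base branch name and the wording are assumptions, and the `php artisan` commands are examples for a Laravel stack; the output should be reviewed, not applied blindly.
```python
import subprocess

def suggest_rollback_steps(base: str = "main") -> list[str]:
    """Derive heuristic rollback suggestions from files changed on this branch."""
    changed = subprocess.run(
        ["git", "diff", "--name-only", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()

    steps = ["Revert the merge commit: git revert -m 1 <merge-sha>"]
    if any(f.startswith("database/migrations/") for f in changed):
        steps.append("Roll back migrations: php artisan migrate:rollback --step=1")
    if any(f.startswith("config/") or f.endswith(".env.example") for f in changed):
        steps.append("Clear config cache: php artisan config:clear")
    steps.append("Verify application health after rollback")
    return steps
```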
### External Dependencies Not Ready
- DETECT external dependencies in story
- CHECK if dependencies are ready (if possible)
- WARN about deployment risks
- SUGGEST coordinating with dependency owners
## Error Handling
- **Missing /docs/project-context/**: Suggest `/sdd:project-init`, offer basic validation
- **Story not in QA**: Provide clear workflow, suggest correct command
- **Missing tests**: Block validation, provide test creation guidance
- **Git errors**: Validate git state, suggest resolution
- **File read errors**: Report specific file issue, suggest fix
## Performance Considerations
- Validation is primarily file reading and analysis (fast)
- Browser test evidence lookup is file-based (< 1s typically)
- No expensive operations unless re-running tests
- Cache story file contents for session
## Related Commands
- `/sdd:story-qa [id]` - Must complete before validation
- `/sdd:story-ship [id]` - Run after validation passes
- `/sdd:story-refactor [id]` - Return to development if validation fails
- `/sdd:story-status [id]` - Check current story state
## Constraints
- ✅ MUST load project context for validation standards
- ✅ MUST validate ALL acceptance criteria with evidence
- ✅ MUST verify 100% browser test coverage
- ✅ MUST check rollback plan exists
- ✅ MUST assess production readiness
- ⚠️ NEVER mark story ready with incomplete criteria validation
- ⚠️ NEVER skip browser test evidence requirement
- 📋 SHOULD identify and document risks
- 🔧 SHOULD validate external dependencies
- 💾 MUST update Success Criteria with validation details
- 🚫 BLOCK shipping if critical issues found