Initial commit

Zhongwei Li
2025-11-29 18:20:16 +08:00
commit 538e6fc7bb
17 changed files with 3333 additions and 0 deletions

commands/create-research.md (new file, 408 lines)

@@ -0,0 +1,408 @@
---
allowed-tools: Task(*), Read(*), Glob(*), Grep(*), TodoWrite(*)
argument-hint: <question or topic>
description: Research codebase to understand existing implementation
---
# Research Codebase
You are tasked with conducting comprehensive research across the codebase to understand the existing implementation and create documentation that can feed into the TDD workflow.
## CRITICAL: YOUR ONLY JOB IS TO DOCUMENT AND EXPLAIN THE CODEBASE AS IT EXISTS TODAY
- DO NOT suggest improvements or changes
- DO NOT critique the implementation or identify problems
- DO NOT recommend refactoring, optimization, or architectural changes
- ONLY describe what exists, where it exists, how it works, and how components interact
- You are creating technical documentation of the existing system
## Workflow
### Step 1: Understand the Research Question
When invoked with: `/create-research "How does user authentication work?"`
Parse the question and identify:
- What components/features to investigate
- What architectural layers/modules are involved
- What patterns to look for
### Step 2: Create Research Plan
Use TodoWrite to track your research tasks:
- Find relevant files and components
- Analyze implementation details
- Discover patterns and conventions
- Document findings
### Step 3: Launch Parallel Research Agents with Rich Context
Spawn multiple specialized agents concurrently using Task tool.
**IMPORTANT**: Each agent must receive:
1. The original research question
2. The task/ticket context (if available)
3. Specific deliverable requirements
4. Cross-referencing instructions
**Agent 1 - codebase-locator:**
Provide detailed instructions:
```
CONTEXT:
- Research question: [original question]
- Ticket/requirement: [if available, include ticket path or summary]
- Related components: [any components mentioned in question]
TASK:
Find all files related to:
1. [Main component/feature]
2. Dependencies and imports/requires
3. Core business logic and data structures
4. External integrations and data access
5. Test files for all above
DELIVERABLES:
1. Complete file paths with descriptions
2. File roles (entry point/implementation/utility/test)
3. Import/dependency graph (which files depend on which)
4. Files grouped by modules/layers (adapt to discovered structure)
5. Test coverage mapping (which test tests which implementation)
OUTPUT FORMAT:
## Component Locations
### [Module/Layer Name] - [Component Type]
- `path/to/file.ext` - [description] [ROLE: entry point/implementation/utility/test]
- Dependencies: [list imports/requires]
- Tests: [test file name]
### Dependencies
- `file-a.ext` imports/requires from `file-b.ext` at line X
- [map all import/dependency relationships]
### Test Coverage
- `test_file.ext` or `file.test.ext` - tests for [component]
```
**Agent 2 - codebase-analyzer:**
Provide detailed instructions:
```
CONTEXT:
- Research question: [original question]
- Ticket/requirement: [if available]
- Core files to analyze: [specify key files or reference Agent 1 results if sequential]
TASK:
Analyze implementation details:
1. Complete execution flow from entry to exit
2. Data transformations at each step
3. Validation and error handling points
4. Dependencies on other components
5. Edge cases and special handling
DELIVERABLES:
1. Function/method signatures with file:line references
2. Step-by-step execution flow
3. Data transformation pipeline
4. Error handling and validation logic
5. Dependencies referenced by file path
OUTPUT FORMAT:
## Implementation Analysis
### Execution Flow
1. Entry point: `file.ext:line` - [what happens, parameters received]
2. Validation: `file.ext:line` - [what's validated, how, error cases]
3. Processing: `file.ext:line` - [transformations, business logic]
4. Exit/return: `file.ext:line` - [what's returned, side effects]
### Key Functions/Methods
- `functionName()` at `file.ext:line`
- Parameters: [types and descriptions]
- Returns: [type and description]
- Dependencies: [list files/components used with lines]
- Error cases: [what throws/returns, when, with examples]
- Edge cases: [special handling]
### Data Transformations
- Input: [type/structure] → Output: [type/structure]
- Transformation at `file.ext:line`: [description]
```
**Agent 3 - codebase-pattern-finder:**
Provide detailed instructions:
```
CONTEXT:
- Research question: [original question]
- Components analyzed: [reference from Agent 1/2]
- Ticket/requirement: [if available]
TASK:
Find examples of patterns in the codebase:
1. Naming conventions (files, classes, methods)
2. Repeated architectural patterns
3. Common validation approaches
4. Error handling strategies
5. Layer separation rules
DELIVERABLES:
1. Pattern name and description
2. At least 2-3 concrete examples with file:line references
3. Consistency analysis (always/mostly/sometimes used)
4. Code snippets showing pattern in action
5. Exceptions or variations noted
OUTPUT FORMAT:
## Patterns Discovered
### Pattern: [Name]
- **Description**: [how it works]
- **Found in**: `file1.ext:line`, `file2.ext:line`, `file3.ext:line`
- **Consistency**: [always/mostly/sometimes used]
- **Example**:
```
[code snippet showing pattern]
```
- **Variations**: [note any deviations from standard pattern]
### Pattern: [Name]
[repeat for each pattern found]
```
**EXECUTION**: Run these agents in PARALLEL (single message with multiple Task tool calls).
### Step 4: Synthesize Agent Findings (CRITICAL QUALITY STEP)
After ALL agents complete, perform comprehensive synthesis:
**1. Cross-Reference Findings**
- Map Agent 1's file locations to Agent 2's implementation details
- Link Agent 3's patterns to specific files from Agent 1
- Identify gaps: Did Agent 2 analyze all files from Agent 1?
- Verify: Do patterns from Agent 3 appear in files from Agent 1?
**2. Validate Completeness**
- Does Agent 2's execution flow reference all core components from Agent 1?
- Do Agent 3's patterns explain Agent 2's implementation choices?
- Are there files in Agent 1 not covered by Agent 2?
- Are test files from Agent 1 explained (what they test)?
**3. Resolve Conflicts**
- If agents report different information, investigate yourself
- Read files directly to confirm ambiguous findings
- Use Grep to verify pattern claims from Agent 3
- Check import statements to validate Agent 1's dependency graph
**4. Fill Gaps**
- If Agent 2 missed error handling, search for it yourself using Grep
- If Agent 3 found only 1 example of a pattern, search for more
- If Agent 1 missed test files, glob for common test patterns (`*test*`, `*spec*`, `test_*`)
- If execution flow is incomplete, read the source files directly
**5. Enrich with Context**
- Read the original ticket/requirement again
- Check if research fully answers the original question
- Add any missing details from your own investigation
- Note any architectural insights discovered during synthesis
**6. Document Synthesis Process**
- Track which findings came from agents vs your investigation
- Note any conflicts resolved
- Record gaps that were filled
- Document manual verification performed
### Step 5: Quality Verification Checklist
Before writing research.md, verify:
- [ ] All files from Agent 1 are explained in Agent 2's analysis
- [ ] Agent 2's execution flow has file:line references for every step
- [ ] Agent 3's patterns have at least 2-3 concrete examples each
- [ ] Error handling is documented (not just happy path)
- [ ] Test files are identified and their purpose explained
- [ ] Architectural layers are clearly separated in documentation
- [ ] Cross-references between agents are resolved
- [ ] Original research question is fully answered
- [ ] Import dependencies are verified (not just assumed)
- [ ] Edge cases and special handling are documented
**If any item is unchecked**, investigate yourself using:
- Read tool for specific files
- Grep for patterns across codebase
- Glob for missing files
- Manual code tracing
### Step 6: Generate Research Document
Create `research.md` in the task directory (or root if no task context) with explicit synthesis markers:
```markdown
# Research: [Question/Topic]
**Date**: [Current date]
**Question**: [Original research question]
## Summary
[Synthesized high-level overview from all agents + your analysis]
## Detailed Findings
### Component Structure
[From Agent 1 + your verification]
**Files Discovered** (Agent 1):
- `path/to/file.ext` - [description] [ROLE]
- Dependencies: [imports/requires]
- Tests: [test files]
**Verified Dependencies** (Your investigation):
- Cross-referenced import/require statements at `file.ext:line`
- Confirmed dependency chain: A → B → C
- [any corrections or additions]
### Implementation Details
[From Agent 2 + your enrichment]
**Core Flow** (Agent 2):
1. Entry: `file.ext:45` - [description with parameters]
2. Validation: `file.ext:67` - [what's validated, how]
3. Processing: `file.ext:89` - [transformations]
4. Exit: `file.ext:102` - [what's returned]
**Additional Details** (Your research):
- Error case at `file.ext:110` - throws/returns error when [condition]
- Edge case handling at `file.ext:125` - [special behavior]
- Missing from agent analysis: [any gaps you filled]
### Patterns Discovered
[From Agent 3 + your validation]
**Pattern: [Name]** (Agent 3):
- **Description**: [how it works]
- **Found in**: `file1.ext:line`, `file2.ext:line`
- **Example**:
```
[code snippet]
```
**Pattern Validation** (Your verification):
- Confirmed with grep: [X] occurrences across [module/directory]
- Additional examples found: `file3.ext:line`, `file4.ext:line`
- Exceptions noted: `legacy-file.ext` uses different pattern because [reason]
### Cross-Agent Synthesis
[Your analysis connecting all findings]
Agent 1 identified [X] core files. Agent 2 analyzed [Y] of them. The remaining files are:
- `helper.ext` - Utility used by all components at: `file-a.ext:12`, `file-b.ext:34`
- Purpose: [description from your reading]
Agent 3 found pattern X in [locations]. This explains Agent 2's implementation choice at `file.ext:45` because [architectural reasoning].
The execution flow from Agent 2 aligns with the file structure from Agent 1, confirming [architectural principle].
### Test Coverage
- Unit tests: `test_file.ext` or `file.test.ext` - tests [what] from `file.ext`
- Integration tests: `integration_test.ext` - tests [what flow]
- Coverage gaps: [any areas without tests]
## Architecture Insights
[Synthesized patterns with evidence]
- **Pattern observed**: [description]
- Evidence: [file references from agents + your verification]
- **Module separation**: [how it's maintained]
- [Module type] never imports from [other module type] (verified in [X] files)
- **Error handling strategy**: [approach used]
- Examples: [file:line references]
## File References
[Complete, verified list grouped by module/layer/directory]
### [Module/Layer Name 1]
- `path/to/file.ext` - [description]
### [Module/Layer Name 2]
- `path/to/file.ext` - [description]
### [Module/Layer Name 3]
- `path/to/file.ext` - [description]
(Adapt structure to discovered codebase organization)
## Research Quality Notes
[Transparency about synthesis process]
**Agent Coverage**:
- Agent 1 (codebase-locator): Found [X] files across [Y] modules/layers
- Agent 2 (codebase-analyzer): Analyzed [X] core files, [Y] execution flows
- Agent 3 (codebase-pattern-finder): Found [X] patterns with [Y] total examples
**Manual Enrichment**:
- Added error handling details not covered by Agent 2
- Verified Agent 3's pattern claims with [X] additional examples via Grep
- Cross-referenced ticket requirements - all [X] requirements addressed
- Filled [X] gaps identified during synthesis
**Conflicts Resolved**:
- Agent [X] reported [Y], but file reading confirmed [Z]
- [any other discrepancies and resolutions]
## Next Steps
This research can be used to:
1. Inform requirements and specifications for modifications
2. Understand impact of proposed changes
3. Identify test scenarios that need coverage
4. Guide refactoring and architectural decisions
```
### Step 7: Report Completion
After creating research.md, provide this summary:
```
✅ Research Complete: [Topic]
Research document created at: ./research.md
Key findings:
- [Major finding 1 with file reference]
- [Major finding 2 with file reference]
- [Major finding 3 with file reference]
Research quality:
- Agent 1 found [X] files across [Y] modules/layers
- Agent 2 analyzed [X] execution flows
- Agent 3 identified [X] patterns
- Manual enrichment: [X] gaps filled, [Y] verifications performed
This research can now be used to:
- Inform specification creation (if using TDD workflow)
- Guide architectural decisions
- Understand impact of proposed changes
```
## Integration with Development Workflow
This research can be used in various workflows:
```
/create-research → research.md → [use as input for specifications, design docs, or implementation]
```
## Important Notes
**Research Quality Strategy**:
- All agents work in parallel for efficiency (speed benefit)
- Rich context provided to each agent (quality benefit)
- Mandatory synthesis phase (quality assurance)
- Quality checklist before documentation (completeness verification)
- Transparency about agent vs manual findings (traceability)
**Research Purpose**:
- Research is purely documentary - no suggestions or improvements
- Focus on understanding what exists, not what should exist
- Output is structured to inform specifications, designs, and implementations
- Document actual implementation, not ideal implementation
- All findings must include specific file:line references

commands/create-spec.md (new file, 36 lines)

@@ -0,0 +1,36 @@
---
allowed-tools: Task(*), Read(*), Glob(*)
argument-hint: <ticket-path | prompt> [research.md]
description: Create requirements.md with testable acceptance criteria from a ticket or prompt
---
Execute requirements analysis for: $ARGUMENTS
**Step 0: Parse Arguments and Determine Context**
1. Parse $ARGUMENTS to extract:
- Primary argument: Either a ticket file path (e.g., ticket.md) OR a direct text prompt
- Optional second argument: research.md path
2. Determine whether the first argument is a file path or a prompt (a minimal sketch of this heuristic follows this list):
- If it ends with .md or contains path separators, treat as file path
- Otherwise, treat as direct prompt text
3. If file path: Get the directory path where the ticket file is located
4. If prompt: Use current working directory as output location
5. This directory will be used as the output location for requirements.md
6. Check if research.md was provided as second argument
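A minimal TypeScript sketch of the path-versus-prompt heuristic and output-directory resolution described above; the `parseCreateSpecArgs` helper and the `ParsedArgs` shape are illustrative assumptions, not part of the command itself.

```typescript
import * as path from "path";

interface ParsedArgs {
  source: string;        // ticket file path or raw prompt text
  isFilePath: boolean;
  researchPath?: string; // optional second argument (research.md)
  outputDir: string;     // where requirements.md will be written
}

// Hypothetical helper mirroring the heuristic in Step 0.
function parseCreateSpecArgs(args: string[], cwd: string): ParsedArgs {
  const [primary, research] = args;
  // Treat as a file path if it ends with .md or contains a path separator.
  const isFilePath =
    primary.endsWith(".md") || primary.includes("/") || primary.includes("\\");
  return {
    source: primary,
    isFilePath,
    researchPath: research,
    outputDir: isFilePath ? path.dirname(primary) : cwd,
  };
}

// Example: parseCreateSpecArgs(["docs/task-001/ticket.md"], process.cwd())
// → outputDir = "docs/task-001", isFilePath = true
```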
**Step 1: Requirements Analysis**
Launch the requirements-analyzer agent:
Source: [ticket path from $ARGUMENTS OR direct prompt text]
Research file: [research.md path if provided as second argument]
Output directory: [Same directory as ticket file OR current working directory]
Output filename: requirements.md (not spec.md)
Task: Analyze requirements and create requirements.md with testable acceptance criteria. If research.md is provided, use it to inform requirements based on existing implementation patterns. If source is a direct prompt, use it as the requirements input.
**Expected Output:**
✅ requirements.md with acceptance criteria in the target directory
**Next Steps:**
After requirements.md is created, you can:
1. Run `/test-scenarios <directory>` to generate detailed test scenarios
2. Or continue with the full workflow using `/create-design <directory>`


@@ -0,0 +1,41 @@
---
allowed-tools: Task(*), Read(*), Glob(*)
argument-hint: <directory>
description: Create technical design (tech-design.md) from requirements.md
---
Execute technical design creation for the directory at: $ARGUMENTS
**Prerequisites:**
- requirements.md must exist in the directory (run `/create-spec` first if needed)
**Step 0: Parse Arguments and Validate**
1. Extract directory path from $ARGUMENTS
2. Verify that requirements.md exists in this directory (MANDATORY)
3. Check if research.md exists in the same directory (OPTIONAL - for context on existing patterns)
4. Check if scenarios.md exists in the same directory (OPTIONAL - for detailed test context)
**Step 1: Launch software-architect Agent**
Launch software-architect agent to create technical design:
- Read requirements.md from: [directory]/requirements.md (MANDATORY - contains acceptance criteria)
- Read research.md from: [directory]/research.md (OPTIONAL - if exists, use existing patterns and components)
- Read scenarios.md from: [directory]/scenarios.md (OPTIONAL - if exists, provides detailed test scenarios for design context)
- Output directory: [directory]
- Output filename: tech-design.md (not design.md)
- Task: Create tech-design.md with technical design covering ALL acceptance criteria in requirements.md. If research.md is provided, use existing patterns and components from the research. If scenarios.md is provided, use test scenarios to inform design decisions.
**Step 2: If Design Options Presented**
If the software-architect agent pauses to present multiple design alternatives:
1. Review the design options with the user
2. Present pros/cons and recommendation clearly
3. Wait for user to select an option
4. Once user selects, relaunch software-architect agent to complete tech-design.md with chosen option
**Expected Output:**
✅ tech-design.md with technical design in the directory
**Next Steps:**
After tech-design.md is created, you can begin implementation using your preferred development approach.

commands/start-tdd.md (new file, 72 lines)

@@ -0,0 +1,72 @@
---
allowed-tools: Task(*)
argument-hint: <task-directory>
description: Continue TDD implementation using orchestrator agent
---
Execute TDD orchestration for: $ARGUMENTS
**STEP 0: Validate Prerequisites**
Before starting TDD, verify task directory contains required files:
1. **Parse task directory** from $ARGUMENTS
2. **Check required files exist** using Read tool:
- `scenarios.md` - REQUIRED (contains scenarios with TDD tracking checkboxes)
- `test-scenarios/` directory - REQUIRED (contains happy-path.md, error-cases.md, edge-cases.md)
- `tech-design.md` - REQUIRED (provides architectural guidance and implementation strategy)
3. **If any required file missing:**
```
❌ Prerequisites validation failed
Missing required files:
- [list missing files]
Run the workflow commands to generate required files:
/test-scenarios [task-directory]
/create-tech-design [task-directory]
```
STOP execution - do not proceed.
4. **If all required files exist:**
```
✅ Prerequisites validated
- scenarios.md ✓
- test-scenarios/ ✓
- tech-design.md ✓
```
Proceed to STEP 1.
**STEP 1: Read scenarios.md** in the task directory and find the first scenario that needs work.
**STEP 2: Detect phase** by checking the scenario's checkboxes (a sketch of this mapping appears after this list):
- `[ ] [ ] [ ]` = RED phase → Launch tdd-red agent
- `[x] [ ] [ ]` = GREEN phase → Launch tdd-green agent
- `[x] [x] [ ]` = REFACTOR phase → Launch tdd-refactor agent
- `[x] [x] [x]` = Complete → Move to next scenario
If all scenarios are complete, report task completion.
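A minimal sketch of the checkbox-to-phase mapping, assuming the scenario's tracking line in scenarios.md carries three `[ ]`/`[x]` markers in order; the `detectPhase` function and `Phase` type are illustrative names.

```typescript
type Phase = "RED" | "GREEN" | "REFACTOR" | "COMPLETE";

// Hypothetical helper: maps a scenario's three tracking checkboxes to its TDD phase.
function detectPhase(progressLine: string): Phase {
  // Collect checkbox states in order, e.g. "[x] [x] [ ]" → [true, true, false].
  const boxes = [...progressLine.matchAll(/\[( |x)\]/g)].map((m) => m[1] === "x");
  const [test, impl, refactor] = boxes;
  if (!test) return "RED";          // [ ] [ ] [ ] → write the failing test
  if (!impl) return "GREEN";        // [x] [ ] [ ] → implement minimal code
  if (!refactor) return "REFACTOR"; // [x] [x] [ ] → improve code quality
  return "COMPLETE";                // [x] [x] [x] → move to next scenario
}
```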
**STEP 3: Launch the appropriate agent** with Task tool:
- RED: `subagent_type: "tdd-red"` - Agent will write failing test
- GREEN: `subagent_type: "tdd-green"` - Agent will implement minimal code
- REFACTOR: `subagent_type: "tdd-refactor"` - Agent will improve code quality
Pass the task directory path to the agent. Agents will use tech-design.md for architectural guidance and implementation strategy, scenarios.md for tracking progress, and test-scenarios/ for detailed scenario specifications.
**STEP 4: After agent completes**, show:
- What phase completed
- Current progress (all scenarios with checkbox states)
- Suggested commit message
**STEP 5: Ask user** what to do next:
1. Commit and continue to next phase/scenario
2. Stop here to review
3. Continue without committing
4. Skip refactoring (only when in REFACTOR phase) - move directly to next scenario
**Notes:**
- Option 4 is only available when in REFACTOR phase (scenario shows `[x] [x] [ ]`)
- Skipping refactoring is acceptable since scenarios are functionally complete after GREEN phase
- Do NOT automatically continue to the next phase.

commands/test-scenarios.md (new file, 538 lines)

@@ -0,0 +1,538 @@
---
allowed-tools: Task(*), Read(*), Glob(*), Write(*), Edit(*), Bash(mkdir:*)
argument-hint: <task-directory> [operation-prompt]
description: Create or manage test scenarios from requirements.md using the qa-engineer agent. Supports creating, adding, modifying, or discovering scenarios.
---
# Test Scenarios Command
Execute test scenario creation or management for: $ARGUMENTS
## Overview
This command creates and manages test scenarios from a requirements.md file containing high-level acceptance criteria.
It orchestrates scenario generation by:
1. Determining what operation to perform (create all, add one, modify one, discover gaps)
2. Reading requirements.md and preparing context
3. Calling qa-engineer agent to generate scenario content
4. Handling all file operations (writing, numbering, organizing, linking)
The qa-engineer agent ONLY generates scenario content. This command handles everything else.
## Step 1: Parse Arguments and Determine Operation
### Operation Detection
Analyze `$ARGUMENTS` to determine the operation (a detection sketch follows the mode descriptions below):
**EXPAND Mode** - Create comprehensive scenarios from requirements.md:
- Single argument: directory path containing `requirements.md`
- Example: `/test-scenarios apps/feature/task-001/`
**ADD Mode** - Add single scenario to existing set:
- First argument: directory path
- Second argument contains "add"
- Example: `/test-scenarios apps/feature/task-001/ "add scenario for null input to AC-1"`
**MODIFY Mode** - Edit existing scenario:
- First argument: directory path
- Second argument contains "modify" or "edit"
- Example: `/test-scenarios apps/feature/task-001/ "modify scenario 1.2 to test empty string"`
**DISCOVER Mode** - Find gaps in existing scenarios:
- First argument: directory path
- Second argument contains "discover" or "gaps"
- Optional: Tag additional context files (e.g., @research.md, @tech-design.md) for deeper analysis
- Example: `/test-scenarios apps/feature/task-001/ "discover gaps"`
- Example with context: `/test-scenarios apps/feature/task-001/ "discover gaps" @research.md @tech-design.md`
**DELETE Mode** - Remove a scenario:
- First argument: directory path
- Second argument contains "delete" or "remove"
- Example: `/test-scenarios apps/feature/task-001/ "delete scenario 1.3"`
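A minimal sketch of the keyword-based mode detection described above, assuming the second argument (if any) is the free-text operation prompt; the `detectMode` helper is an illustrative name, not part of the command.

```typescript
type Mode = "EXPAND" | "ADD" | "MODIFY" | "DISCOVER" | "DELETE";

// Hypothetical helper: picks the operation mode from the parsed arguments.
function detectMode(directory: string, operationPrompt?: string): Mode {
  if (!operationPrompt) return "EXPAND"; // only a directory was given
  const p = operationPrompt.toLowerCase();
  if (p.includes("delete") || p.includes("remove")) return "DELETE";
  if (p.includes("discover") || p.includes("gaps")) return "DISCOVER";
  if (p.includes("modify") || p.includes("edit")) return "MODIFY";
  if (p.includes("add")) return "ADD";
  return "EXPAND";
}

// Example: detectMode("apps/feature/task-001/", "discover gaps") → "DISCOVER"
```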
## Step 2: Gather Context
All modes use `requirements.md` as the base input containing high-level acceptance criteria.
Read relevant files from the task directory:
### For EXPAND Mode:
```
Required:
- <directory>/requirements.md (high-level acceptance criteria)
```
### For ADD/MODIFY/DISCOVER/DELETE Modes:
```
Required:
- <directory>/requirements.md (high-level acceptance criteria - for context)
- <directory>/scenarios.md (existing scenarios with implementation tracking)
- <directory>/test-scenarios/*.md (existing scenario details)
Optional (if tagged in user prompt):
- Any additional context files (research.md, tech-design.md, code files, etc.)
- These provide deeper context for scenario discovery and analysis
```
## Step 3: Prepare Agent Request
Based on operation mode, formulate a simple request for qa-engineer:
### EXPAND Mode Request:
```
Generate comprehensive test scenarios for these acceptance criteria:
[paste requirements.md content - high-level acceptance criteria]
Generate multiple scenarios per acceptance criterion, ordered by implementation priority.
```
### ADD Mode Request:
```
Generate ONE scenario for AC-[N] to test: [specific behavior]
Acceptance Criterion:
[AC-[N] from requirements.md]
Existing scenarios for AC-[N]:
[list existing scenarios with names only]
Check for duplicates against existing scenarios.
```
### MODIFY Mode Request:
```
Modify scenario [N.M]:
Current scenario:
[paste current scenario]
Current implementation progress: [checkboxes state]
Requested change: [what user specified]
Preserve structure and warn if existing tests/implementation may need updates.
```
### DISCOVER Mode Request:
```
Analyze existing scenarios for gaps and generate NEW scenarios to fill those gaps.
Acceptance Criteria:
[paste all ACs from requirements.md]
Existing Scenarios:
[paste organized list of scenarios by AC with their types]
[IF additional context files were tagged, include them here:]
Additional Context:
[For each tagged file, include a section:]
File: [filename]
[paste file content]
[Repeat for all tagged files]
Generate new scenarios to fill any gaps. If no gaps found, return: "No gaps found - coverage is complete"
```
## Step 4: Invoke qa-engineer Agent
Use Task tool to launch qa-engineer with the prepared request:
```
Task: qa-engineer
Description: Generate test scenario content
[Paste the request from Step 3]
```
The agent is the QA domain expert. It will:
- Ask clarification questions if needed (wait for answers)
- Apply its QA heuristics automatically
- Assign scenario types and priorities based on QA expertise
- Generate scenario content in Given-When-Then format
- Return structured JSON with scenarios, warnings, and optional context requests
**Expected JSON Response** (a typed sketch follows the example):
```json
{
"scenarios": [...],
"warnings": {
"duplicates": [...],
"gaps": [...],
"implementation_impact": [...]
},
"context_requests": [...] // Optional, for DISCOVER mode
}
```
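A TypeScript interface sketch of the response shape shown above; the field types are inferred from this document's usage of `scenarios`, `warnings`, `context_requests`, and `message`, and should be treated as assumptions rather than a published schema.

```typescript
interface ScenarioResponse {
  scenarios: Scenario[];
  warnings?: {
    duplicates?: string[];
    gaps?: string[];
    implementation_impact?: string[];
  };
  context_requests?: string[]; // file paths the agent wants read (DISCOVER mode)
  message?: string;            // e.g. "No gaps found - coverage is complete"
}

interface Scenario {
  name: string;
  type: "happy-path" | "error-case" | "edge-case";
  acceptance_criterion: string; // e.g. "AC-1"
  priority: number;             // 1 = implement first
  content: string;              // full "## Scenario N.M: ..." markdown section
}
```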
**Handle Context Requests (DISCOVER mode only):**
If the agent returns a `context_requests` array (a loop sketch follows this list):
1. Read each requested file
2. Append to the original request with section: "Additional Context:\n[file content]"
3. Re-invoke agent with updated request
4. Repeat until agent returns scenarios or "No gaps found"
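A minimal sketch of this re-invocation loop, reusing the `ScenarioResponse` shape sketched above and assuming hypothetical `readFile` and `invokeQaEngineer` helpers that stand in for the Read and Task tools.

```typescript
// Hypothetical wrappers around the Read and Task tools.
declare function readFile(filePath: string): string;
declare function invokeQaEngineer(request: string): ScenarioResponse;

function runDiscover(initialRequest: string): ScenarioResponse {
  let request = initialRequest;
  let response = invokeQaEngineer(request);
  // Keep supplying requested files until the agent returns scenarios or reports no gaps.
  while (response.context_requests && response.context_requests.length > 0) {
    for (const file of response.context_requests) {
      request += `\n\nAdditional Context:\n\nFile: ${file}\n${readFile(file)}\n`;
    }
    response = invokeQaEngineer(request);
  }
  return response;
}
```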
## Step 5: Common File Operations
All modes (except DELETE) use these common operations after receiving agent JSON output.
### 5.1: Parse Agent JSON Response
```javascript
const response = JSON.parse(agent_output);
const scenarios = response.scenarios || [];
const warnings = response.warnings || {};
const contextRequests = response.context_requests || [];
```
### 5.2: Display Warnings
```
if (warnings.duplicates?.length > 0) {
Display: ⚠️ Duplicate warnings: [list warnings.duplicates]
}
if (warnings.gaps?.length > 0) {
Display: Identified gaps: [list warnings.gaps]
}
if (warnings.implementation_impact?.length > 0) {
Display: ⚠️ Implementation impact: [list warnings.implementation_impact]
Display: "Review existing tests/code for needed updates"
}
```
### 5.3: Write Scenario Content to Type File
For any scenario that needs to be written or updated (a TypeScript sketch follows the pseudocode):
```bash
# Determine target file from scenario.type
# ("happy-path" → happy-path.md, "error-case" → error-cases.md, "edge-case" → edge-cases.md):
target_file = test-scenarios/<type file>.md
# Write scenario.content exactly as received from agent:
[Paste scenario.content]
# Agent's content includes:
# - ## Scenario N.M: [Name] heading
# - Proper blank lines (MD022/MD032 compliance)
# - Trailing --- separator
```
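A minimal sketch of this file operation, assuming Node's `fs`/`path` modules and the `Scenario` shape from the earlier sketch; note that the type-file names pluralize error and edge cases, matching the Expected File Structure section.

```typescript
import * as fs from "fs";
import * as path from "path";

// Maps the singular type value to the pluralized file names used by this command.
const TYPE_FILES: Record<Scenario["type"], string> = {
  "happy-path": "happy-path.md",
  "error-case": "error-cases.md",
  "edge-case": "edge-cases.md",
};

// Hypothetical helper: appends the agent-generated section to the right type file.
function appendScenario(taskDir: string, scenario: Scenario): string {
  const target = path.join(taskDir, "test-scenarios", TYPE_FILES[scenario.type]);
  // scenario.content already carries its "## Scenario N.M: ..." heading and trailing "---".
  fs.appendFileSync(target, `\n${scenario.content.trim()}\n`);
  return target;
}
```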
### 5.4: Update scenarios.md Tracking
For any scenario that needs tracking entry:
```markdown
### Scenario N.M: [scenario.name]
- **Type**: [scenario.type]
- **Details**: [test-scenarios/{scenario.type}.md#scenario-nm](test-scenarios/{scenario.type}.md#scenario-nm)
- **Implementation Progress**: [ ] Test Written [ ] Implementation [ ] Refactoring
```
**Important**: When modifying existing entry, PRESERVE checkbox states.
### 5.5: Determine Next Scenario Number
When adding new scenarios, find the next available number for the AC (a concrete sketch follows the pseudocode):
```bash
# Read scenarios.md
# Find all scenarios for the target AC (e.g., "AC-2")
# Find highest number (e.g., if 2.1, 2.2, 2.3 exist → next is 2.4)
# Return next_number
```
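A minimal TypeScript sketch of the numbering rule, assuming scenarios.md headings follow the `### Scenario N.M:` format shown in Step 5.4; the `nextScenarioNumber` helper is an illustrative name.

```typescript
// Hypothetical helper: finds the next free scenario number for an AC, e.g. "AC-2" → "2.4".
function nextScenarioNumber(scenariosMd: string, acceptanceCriterion: string): string {
  const major = acceptanceCriterion.replace(/^AC-/, ""); // "AC-2" → "2"
  // Match headings like "### Scenario 2.1:", "### Scenario 2.3:" for this AC only.
  const pattern = new RegExp(`^### Scenario ${major}\\.(\\d+):`, "gm");
  let highest = 0;
  for (const match of scenariosMd.matchAll(pattern)) {
    highest = Math.max(highest, parseInt(match[1], 10));
  }
  return `${major}.${highest + 1}`;
}

// Example: if scenarios 2.1, 2.2, and 2.3 exist, nextScenarioNumber(content, "AC-2") → "2.4".
```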
## Step 6: Mode-Specific Workflows
After receiving JSON output from qa-engineer, perform mode-specific operations:
### For EXPAND Mode:
Creates all scenario files from scratch.
**Your tasks:**
1. **Use Step 5.1** to parse JSON response
2. **Use Step 5.2** to display warnings
3. **Group scenarios by type**:
- Filter `scenarios` where `type === "happy-path"` → happy-path.md
- Filter `scenarios` where `type === "error-case"` → error-cases.md
- Filter `scenarios` where `type === "edge-case"` → edge-cases.md
4. **Create directory**:
```bash
mkdir -p <directory>/test-scenarios
```
5. **Assign scenario numbers**:
- Group scenarios by `acceptance_criterion` field (e.g., "AC-1")
- Sort within each AC by `priority` field (1, 2, 3...)
- Assign numbers: AC-1 → 1.1, 1.2, 1.3...; AC-2 → 2.1, 2.2...
- Agent determines priority (happy-path first, then error-case, then edge-case)
6. **Write type files** with headers:
**happy-path.md**:
```markdown
# Happy Path Scenarios
Valid inputs and successful outcomes that represent typical user workflows.
---
[For each happy-path scenario, use Step 5.3 to write scenario.content]
```
**error-cases.md**:
```markdown
# Error Case Scenarios
Invalid inputs and failure conditions.
---
[For each error-case scenario, use Step 5.3 to write scenario.content]
```
**edge-cases.md**:
```markdown
# Edge Case Scenarios
Boundary values, limits, and unusual but valid conditions.
---
[For each edge-case scenario, use Step 5.3 to write scenario.content]
```
7. **Create scenarios.md** with tracking:
```markdown
# Test Scenarios
[For each AC in requirements.md:]
## AC-N: [Title from requirements.md]
**Source**: [requirements.md#ac-n](requirements.md#ac-n)
[For each scenario for this AC, use Step 5.4 to create tracking entry]
```
### For ADD Mode:
Adds one new scenario to existing set.
**Your tasks:**
1. **Use Step 5.1** to parse JSON (get `scenarios[0]` as the single scenario)
2. **Use Step 5.2** to display warnings
3. **Check for duplicates** - if `warnings.duplicates` is not empty:
```
Ask user: "Proceed anyway? (yes/no)"
If no, abort
```
4. **Use Step 5.5** to determine next scenario number for the AC
5. **Use Step 5.3** to append scenario.content to appropriate type file
6. **Use Step 5.4** to add tracking entry to scenarios.md under the AC section
7. **Display confirmation**:
```
✅ Added Scenario N.M: [scenario.name]
- Type: [scenario.type]
- AC: [scenario.acceptance_criterion]
```
### For MODIFY Mode:
Updates an existing scenario.
**Your tasks:**
1. **Use Step 5.1** to parse JSON (get `scenarios[0]` as the updated scenario)
2. **Use Step 5.2** to display warnings
3. **Update type file**:
- Find scenario in `test-scenarios/{scenario.type}.md`
- Locate section starting with `## Scenario N.M:`
- Replace entire section (from heading to `---`) with **Step 5.3** content
4. **Update scenarios.md** (if name or type changed):
- Find scenario entry `### Scenario N.M`
- Update name and type using **Step 5.4** format
- **PRESERVE existing checkbox states**
5. **Display confirmation**:
```
✅ Modified Scenario N.M: [scenario.name]
If tests/implementation exist, review them for needed updates.
```
### For DISCOVER Mode:
Analyzes existing scenarios for gaps and adds missing scenarios.
**Your tasks:**
1. **Use Step 5.1** to parse JSON
2. **Handle context requests** (if any):
```
if (response.context_requests?.length > 0) {
For each requested file:
- Read the file
- Append to original request: "Additional Context:\n\nFile: {filename}\n{content}\n"
Re-invoke qa-engineer agent with updated request
Return to step 1 when agent responds
}
```
3. **If scenarios found** (`response.scenarios.length > 0`):
a. **Use Step 5.2** to display warnings (especially gaps identified)
b. **For each new scenario**:
- **Use Step 5.5** to determine next scenario number for the AC
- **Use Step 5.3** to append to appropriate type file
- **Use Step 5.4** to add tracking entry to scenarios.md
c. **Display summary**:
```
✅ Discovered and added {count} new scenarios:
- Scenario X.Y: [scenario.name] ({scenario.type})
[list all new scenarios]
```
4. **If no gaps found** (`response.message === "No gaps found - coverage is complete"`):
```
Display:
✅ No gaps found - scenario coverage is complete
```
### For DELETE Mode:
**No agent needed** - this is a file operation only
**Your tasks:**
1. **Parse scenario number** from user request (e.g., "1.3")
2. **Remove from scenarios.md**:
```bash
# Find and remove the scenario entry:
### Scenario N.M: [Name]
- **Type**: [type]
- **Details**: [link]
- **Implementation Progress**: [checkboxes]
```
3. **Remove from scenario detail file**:
```bash
# Find and remove from <directory>/test-scenarios/{type}.md
## Scenario N.M: [Name]
[entire scenario content]
---
```
4. **Display confirmation**:
```
✅ Deleted Scenario N.M: [Name]
- Removed from scenarios.md
- Removed from test-scenarios/{type}.md
⚠️ Note: If this scenario had implementation (tests/code), you may need to remove those manually.
```
## Expected File Structure
After EXPAND mode, the directory structure will be:
```
<task-directory>/
├── requirements.md (INPUT - read-only, contains acceptance criteria)
├── scenarios.md (OUTPUT - all scenarios with Implementation tracking)
└── test-scenarios/
├── happy-path.md (OUTPUT - success cases)
├── error-cases.md (OUTPUT - failure cases)
└── edge-cases.md (OUTPUT - boundary cases)
```
## Usage Examples
### Example 1: Create comprehensive scenarios from requirements.md
```bash
/test-scenarios apps/snyk-cmd/docs/features/bulk-ignore/tasks/task-001/
```
Creates all scenarios from requirements.md in that directory.
### Example 2: Add single scenario
```bash
/test-scenarios apps/feature/task-001/ "add scenario for null organization name to AC-3"
```
Adds one scenario to existing set for AC-3.
### Example 3: Modify existing scenario
```bash
/test-scenarios apps/feature/task-001/ "modify scenario 2.1 to test empty string instead of null"
```
Updates scenario 2.1 with new behavior.
### Example 4: Discover and add missing scenarios
```bash
/test-scenarios apps/feature/task-001/ "discover gaps in existing scenarios"
```
Analyzes scenarios for gaps and automatically generates and adds missing test scenarios.
### Example 4a: Discover gaps with additional context
```bash
/test-scenarios apps/feature/task-001/ "discover gaps" @research.md @tech-design.md
```
Uses additional context files to discover edge cases, technical constraints, and integration scenarios that may not be obvious from requirements alone. The qa-engineer agent may ask clarifying questions, then generates and adds the missing scenarios automatically.
### Example 5: Delete scenario
```bash
/test-scenarios apps/feature/task-001/ "delete scenario 1.3"
```
Removes scenario 1.3 from scenarios.md and test-scenarios files.
## Key Principles
This command is the **orchestrator**:
- ✅ Determines what to do (mode selection)
- ✅ Prepares context for agent
- ✅ Calls agent with simple request
- ✅ Parses structured JSON output
- ✅ Handles all file operations
- ✅ Manages scenario numbering based on agent's priority
- ✅ Creates links and tracking
- ✅ Displays warnings to user
- ✅ Handles context requests from agent (DISCOVER mode)
The qa-engineer agent is the **QA expert**:
- ✅ Applies QA heuristics
- ✅ Classifies scenarios by type (happy-path/error-case/edge-case)
- ✅ Assigns priority based on QA expertise
- ✅ Links scenarios to acceptance criteria
- ✅ Generates Given-When-Then scenarios
- ✅ Uses business-friendly language
- ✅ Warns about duplicates and implementation impact
- ✅ Requests additional context if needed (DISCOVER mode)
- ✅ Returns structured JSON for easy parsing
- ❌ Does NOT handle files, numbering, or organization