Initial commit

This commit is contained in:
Zhongwei Li
2025-11-30 08:54:26 +08:00
commit 3562b3d6a4
27 changed files with 16593 additions and 0 deletions

451
commands/brainstorm.md Normal file

@@ -0,0 +1,451 @@
---
description: Explore 2-5 distinct solution options at conceptual level with S/M/L sizing
argument-hint: [jira-id|pr-url|#pr-number|issue-url|description] [--input PATH] [--output PATH] [--options N] [--no-file] [--quiet] [--work-dir PATH]
allowed-tools: ["Read", "Write", "Task", "ExitPlanMode"]
---
# Brainstorm Workflow
You are performing **broad solution exploration** for a problem/feature/change using the **executor pattern**. Follow this structured workflow to generate 2-5 distinct solution options (default 2-3) at CONCEPTUAL level with S/M/L sizing (NO file paths, scripts, or numeric time estimates).
**Key Innovation**: The brainstorm-executor subagent performs ALL work (context fetching, exploration, generation) in isolated context, keeping main context clean.
---
## ⚙️ MODE ENFORCEMENT
**CRITICAL**: This command operates in **PLAN MODE** throughout Phases 1-2 (argument parsing and executor invocation). You MUST use the **ExitPlanMode tool** (Phase 3) before Phase 4 (output handling) to transition from analysis to execution.
**Why Plan Mode**:
- Phases 1-2 require understanding the request WITHOUT making changes
- Plan mode ensures safe operation before file writes
- Only file output operations (Phases 4-5) require execution mode
**Workflow**:
```
┌──────────────────────────────────┐
│ PLAN MODE (Read-only) │
│ Phases 1-2: Setup & Execute │
└──────────────────────────────────┘
[ExitPlanMode Tool]
┌──────────────────────────────────┐
│ EXECUTION MODE (Write) │
│ Phases 4-5: Output & Completion │
└──────────────────────────────────┘
```
---
## PHASE 1: ARGUMENT PARSING
Use lib/argument-parser.md:
```
Configuration:
command_name: "brainstorm"
command_label: "Brainstorm-Solutions"
positional:
- name: "problem_input"
description: "Jira ID, GitHub URL, or problem description"
required: false
flags:
- name: "--input"
type: "path"
description: "Read problem description from file"
- name: "--output"
type: "path"
description: "Custom output file path for brainstorm"
- name: "--options"
type: "number"
description: "Number of solution options to generate (default: 2-3)"
- name: "--work-dir"
type: "path"
description: "Custom work directory"
- name: "--no-file"
type: "boolean"
description: "Skip file creation, terminal only"
- name: "--quiet"
type: "boolean"
description: "Suppress terminal output"
validation:
- --output and --no-file are mutually exclusive
- --options must be 2-5 if provided
- At least one input source required (positional or --input)
```
**Store parsed values:**
- `problem_input`: Positional argument or --input file content
- `output_path`: --output value or null
- `options_count`: --options value or null (default 2-3)
- `work_dir`: --work-dir value or null
- `file_output`: true (unless --no-file)
- `terminal_output`: true (unless --quiet)
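To make the validation rules concrete, here is a minimal bash sketch of the three checks above, using the variable names from the stored-values list (illustrative only; lib/argument-parser.md owns the real logic):
```bash
# Hypothetical sketch of the validation rules; lib/argument-parser.md performs the actual parsing.
if [ "$file_output" = "false" ] && [ -n "$output_path" ]; then
  echo "❌ --output and --no-file are mutually exclusive" >&2; exit 1
fi
if [ -n "$options_count" ] && { [ "$options_count" -lt 2 ] || [ "$options_count" -gt 5 ]; }; then
  echo "❌ --options must be between 2 and 5" >&2; exit 1
fi
if [ -z "$problem_input" ]; then
  echo "❌ Provide a Jira ID, GitHub URL, a description, or --input PATH" >&2; exit 1
fi
```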
---
## PHASE 2: EXECUTE BRAINSTORM (Isolated Context)
**Objective**: Spawn brainstorm-executor subagent to perform ALL brainstorming work in isolated context.
**Use Task tool with brainstorm-executor**:
```
Task tool configuration:
subagent_type: "schovi:brainstorm-executor:brainstorm-executor"
model: "sonnet"
description: "Execute brainstorm workflow"
prompt: |
PROBLEM REFERENCE: [problem_input]
CONFIGURATION:
- number_of_options: [options_count or "2-3"]
- identifier: [auto-detect from problem_input or generate slug]
- exploration_mode: medium
Execute complete brainstorm workflow:
1. Fetch external context (Jira/GitHub if applicable)
2. Light codebase exploration (Plan subagent, medium mode)
3. Generate [number_of_options] distinct solution options following template
Return structured brainstorm output (~2000-3000 tokens).
```
**Expected output from executor**:
- Complete structured brainstorm markdown (~2000-3000 tokens)
- Includes: problem summary, constraints, 2-3 options, comparison matrix, recommendation, exploration notes
- Already formatted following `schovi/templates/brainstorm/full.md`
**Store executor output**:
- `brainstorm_output`: Complete markdown from executor
- `identifier`: Extract from brainstorm header or use fallback
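A rough sketch of the identifier fallback, assuming `brainstorm_output` holds the executor markdown and `problem_input` the original reference (names are illustrative):
```bash
# Hypothetical sketch: prefer a Jira-style key from the brainstorm header, else slugify the problem text.
identifier=$(printf '%s\n' "$brainstorm_output" | head -5 | grep -oE '[A-Z]{2,10}-[0-9]+' | head -1)
if [ -z "$identifier" ]; then
  identifier=$(printf '%s' "$problem_input" | tr '[:upper:]' '[:lower:]' \
    | tr -cs 'a-z0-9' '-' | sed 's/^-//; s/-$//' | cut -c1-40)
fi
```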
---
## PHASE 3: EXIT PLAN MODE
**CRITICAL**: Before proceeding to output handling, use ExitPlanMode tool to transition from plan mode to execution mode.
```
ExitPlanMode tool:
plan: |
# Brainstorm Solutions Completed
Generated solution options via brainstorm-executor subagent.
**Identifier**: [identifier]
**Options Count**: [N options generated]
## Key Results
- Problem context fetched and analyzed
- Light codebase exploration completed (medium mode)
- [N] distinct solution options generated
- Comparison matrix with feasibility analysis
- Recommendation provided
## Next Steps
1. Save brainstorm output to work folder
2. Display summary to user
3. Guide user to research command for deep dive
```
**Wait for user approval before proceeding to Phase 4.**
---
## PHASE 4: OUTPUT HANDLING & WORK FOLDER
### Step 4.1: Work Folder Resolution
Use lib/work-folder.md:
```
Configuration:
mode: "auto-detect"
identifier: [identifier extracted from brainstorm_output or problem_input]
description: [extract problem title from brainstorm_output]
workflow_type: "brainstorm"
current_step: "brainstorm"
custom_work_dir: [work_dir from argument parsing, or null]
Output (store for use below):
work_folder: [path from library, e.g., ".WIP/EC-1234-feature"]
metadata_file: [path from library, e.g., ".WIP/EC-1234-feature/.metadata.json"]
output_file: [path from library, e.g., ".WIP/EC-1234-feature/brainstorm-EC-1234.md"]
identifier: [identifier from library]
is_new: [true/false from library]
```
**Store the returned values for steps below.**
### Step 4.2: Write Brainstorm Output
**If `file_output == true` (default unless --no-file):**
Use Write tool:
```
file_path: [output_file from Step 4.1]
content: [brainstorm_output from Phase 2]
```
**If write succeeds:**
```
📄 Brainstorm saved to: [output_file]
```
**If write fails or --no-file:**
Skip file creation, continue to terminal output.
### Step 4.3: Update Metadata
**If work_folder exists and file was written:**
Read current metadata:
```bash
cat [metadata_file from Step 4.1]
```
Update fields:
```json
{
...existing fields,
"workflow": {
...existing.workflow,
"completed": ["brainstorm"],
"current": "brainstorm"
},
"files": {
"brainstorm": "brainstorm-[identifier].md"
},
"timestamps": {
...existing.timestamps,
"lastModified": "[current timestamp]"
}
}
```
Get current timestamp:
```bash
date -u +"%Y-%m-%dT%H:%M:%SZ"
```
Write updated metadata:
```
Write tool:
file_path: [metadata_file]
content: [updated JSON]
```
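The metadata merge above can be done in a single pass. A minimal jq sketch, assuming the field layout shown in the JSON example (jq availability and variable names are assumptions):
```bash
# Hypothetical sketch: merge brainstorm status into .metadata.json without clobbering other fields.
ts=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
jq --arg ts "$ts" --arg id "$identifier" '
  .workflow.completed = ((.workflow.completed // []) + ["brainstorm"] | unique)
  | .workflow.current = "brainstorm"
  | .files.brainstorm = ("brainstorm-" + $id + ".md")
  | .timestamps.lastModified = $ts
' "$metadata_file" > "$metadata_file.tmp" && mv "$metadata_file.tmp" "$metadata_file"
```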
### Step 4.4: Create Fragments
**Use lib/fragment-loader.md**:
Parse brainstorm output for assumptions and unknowns:
1. **Extract Assumptions** from "Assumptions & Unknowns" section:
- Look for lines starting with "**A-#**:" or "- A-#:" or bullets under "Assumptions"
- Extract statement for each assumption
2. **Extract Unknowns** from "Assumptions & Unknowns" section:
- Look for lines starting with "**U-#**:" or "- U-#:" or bullets under "Unknowns"
- Extract question for each unknown
3. **Initialize Fragment System** (Operation 6):
```
work_folder: [work_folder from Step 4.1]
identifier: [identifier from Step 4.1]
```
- Creates `fragments/` directory
- Creates initial `fragments.md` registry
4. **Batch Create Fragments** (Operation 10):
```
work_folder: [work_folder]
identifier: [identifier]
fragments: [
{
type: "A",
number: 1,
statement: [extracted assumption statement],
source: "Created during brainstorm phase",
stage: "brainstorm",
timestamp: [current timestamp]
},
{
type: "A",
number: 2,
...
},
{
type: "U",
number: 1,
question: [extracted unknown question],
importance: "Needed for research phase",
stage: "brainstorm",
timestamp: [current timestamp]
}
]
```
**Get current timestamp**:
```bash
date -u +"%Y-%m-%dT%H:%M:%SZ"
```
**Result**:
- Fragment files created: `fragments/A-1.md`, `fragments/A-2.md`, `fragments/U-1.md`, etc.
- Registry created: `fragments.md`
- All fragments have status ⏳ Pending
**If fragment creation fails**:
- Log warning but don't block command
- Continue to terminal output
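A minimal sketch of the extraction described in steps 1-2 above, assuming the brainstorm output uses an `Assumptions & Unknowns` heading and `**A-#**:` / `**U-#**:` prefixes (the fragment-loader operations handle the actual file creation):
```bash
# Hypothetical sketch: pull assumption statements and unknown questions out of the brainstorm file.
section=$(awk '/^#+ Assumptions & Unknowns/{flag=1; next} /^## /{flag=0} flag' "$output_file")
printf '%s\n' "$section" | grep -E '\*\*A-[0-9]+\*\*:' | sed 's/.*\*\*A-[0-9]*\*\*:[[:space:]]*//'   # assumption statements
printf '%s\n' "$section" | grep -E '\*\*U-[0-9]+\*\*:' | sed 's/.*\*\*U-[0-9]*\*\*:[[:space:]]*//'   # unknown questions
```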
### Step 4.5: Terminal Output
**If `terminal_output == true` (default unless --quiet):**
Display:
```markdown
# 🧠 Brainstorm Complete: [identifier]
Generated [N] solution options with broad feasibility analysis.
## Options Summary
[Extract option summaries from brainstorm_output - 1-2 lines each]
## 🎯 Recommendation
[Extract recommendation from brainstorm_output - 2-3 sentences]
## 📁 Output
Brainstorm saved to: `[output_file]`
Work folder: `[work_folder]`
Fragments: [A_COUNT] assumptions, [U_COUNT] unknowns
## 🔬 Next Steps
Choose an option for deep technical research:
```bash
# Research recommended option
/schovi:research --input brainstorm-[identifier].md --option [N]
# Or research a different option
/schovi:research --input brainstorm-[identifier].md --option [1|2|3]
```
This will perform deep codebase exploration with detailed file:line references and implementation considerations.
```
**After this phase:**
- Brainstorm file created in `.WIP/[identifier]/` work folder
- Fragment system initialized with assumptions and unknowns
- Metadata file updated
- Terminal output displayed (unless --quiet)
- User guided to next step (research command)
---
## PHASE 5: COMPLETION
**Final Message**:
```
✅ Brainstorm completed successfully!
📊 Generated [N] solution options for [identifier]
🎯 Recommended: Option [N] - [Name]
📁 Saved to: [file_path]
🔬 Ready for deep research? Run:
/schovi:research --input brainstorm-[identifier].md --option [N]
```
**Command complete.**
---
## ERROR HANDLING
### Input Processing Errors
- **No input provided**: Ask user for Jira ID, GitHub URL, or description
- **Invalid format**: Report error, show format examples
- **File not found**: Report error, ask for correct path
### Executor Errors
- **Executor failed**: Report error with details from subagent
- **Validation failed**: Check brainstorm_output has required sections
- **Token budget exceeded**: Executor handles compression, shouldn't happen
### Output Errors
- **File write failed**: Report error, offer terminal-only output
- **Work folder error**: Use fallback location or report error
---
## QUALITY GATES
Before completing, verify:
- [ ] Input processed successfully with clear problem reference
- [ ] Executor invoked and completed successfully
- [ ] Brainstorm output received (~2000-3000 tokens)
- [ ] Output contains all required sections (problem, constraints, options, matrix, recommendation)
- [ ] 2-3 distinct options present (not variations)
- [ ] File saved to work folder (unless --no-file)
- [ ] Fragment system initialized (fragments/ directory and fragments.md created)
- [ ] Assumption fragments created (A-1.md, A-2.md, etc.)
- [ ] Unknown fragments created (U-1.md, U-2.md, etc.)
- [ ] Fragment registry updated with all fragments
- [ ] Metadata updated
- [ ] Terminal output displayed (unless --quiet)
- [ ] User guided to research command for next step
---
## NOTES
**Design Philosophy**:
- **Executor pattern**: ALL work (fetch + explore + generate) happens in isolated context
- **Main context stays clean**: Only sees final formatted output (~2-3k tokens)
- **Token efficiency**: 93% reduction in main context (from ~43k to ~3k tokens)
- **Consistent experience**: User sees same output, just more efficient internally
**Token Benefits**:
- Before: Main context sees input processing + exploration + generation = ~43k tokens
- After: Main context sees only final output = ~3k tokens
- Savings: 40k tokens (93% reduction)
**Integration**:
- Input from: Jira, GitHub issues/PRs, files, or text
- Output to: Work folder with metadata
- Next command: Research with --option flag
**Executor Capabilities**:
- Spawns jira-analyzer, gh-pr-analyzer, gh-issue-analyzer for external context
- Spawns Plan subagent for medium-thoroughness exploration
- Reads brainstorm template and generates formatted output
- All in isolated context, returns clean result
---
**Command Version**: 3.0 (Executor Pattern + Fragment System)
**Last Updated**: 2025-11-08
**Dependencies**:
- `lib/argument-parser.md`
- `lib/work-folder.md`
- `lib/fragment-loader.md` (NEW: Fragment system operations)
- `schovi/agents/brainstorm-executor/AGENT.md`
- `schovi/templates/brainstorm/full.md`
**Changelog**: v3.0 - Added fragment system for assumption/unknown tracking with cross-stage traceability

895
commands/commit.md Normal file

@@ -0,0 +1,895 @@
---
description: Create structured git commits with context from Jira, GitHub, or change analysis
argument-hint: [jira-id|github-issue|github-pr|notes|--message "text"]
allowed-tools: ["Bash", "Grep", "Read", "Task", "AskUserQuestion"]
---
# 🚀 Git Commit Command
╭─────────────────────────────────────────────╮
│ /schovi:commit - Structured Commit Creation │
╰─────────────────────────────────────────────╯
Creates well-structured git commits with conventional format, automatic change analysis, and optional external context fetching.
## Command Overview
This command creates git commits following these principles:
- **Conventional commit format**: `PREFIX: Title` with detailed description
- **Smart validation**: Prevents commits on main/master, validates branch naming
- **Change analysis**: Analyzes diffs to determine commit type and generate description
- **Optional context**: Fetches Jira/GitHub context when diff analysis is unclear
- **Claude Code footer**: Includes automation signature and co-authorship
## Usage Patterns
```bash
# Auto-analyze changes and create commit
/schovi:commit
# With Jira context
/schovi:commit EC-1234
# With GitHub issue context
/schovi:commit https://github.com/owner/repo/issues/123
/schovi:commit owner/repo#123
# With GitHub PR context
/schovi:commit https://github.com/owner/repo/pull/456
/schovi:commit owner/repo#456
# With custom notes for commit message
/schovi:commit "Add user authentication with JWT tokens"
# Override commit message completely
/schovi:commit --message "feat: Custom commit message"
# Only commit staged changes (don't auto-stage)
/schovi:commit --staged-only
```
---
# EXECUTION FLOW
## PHASE 1: Input Parsing & Context Detection
### Step 1.1: Parse Arguments
```
Parse the user's input to detect:
1. Jira issue ID pattern: [A-Z]{2,10}-\d{1,6} (e.g., EC-1234, PROJ-567)
2. GitHub issue URL: https://github.com/[owner]/[repo]/issues/\d+
3. GitHub issue shorthand: [owner]/[repo]#\d+
4. GitHub PR URL: https://github.com/[owner]/[repo]/pull/\d+
5. GitHub PR shorthand: [owner]/[repo]#\d+ or #\d+
6. Custom notes: Free-form text
7. Flags: --message "text", --staged-only, --type prefix
```
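A rough bash sketch of this detection order (glob patterns and variable names are illustrative, not the actual parser):
```bash
# Hypothetical sketch: classify the first positional argument of /schovi:commit.
arg="$1"
case "$arg" in
  ""|--*)                         context_type="auto" ;;        # flags only → auto-detect from diff
  https://github.com/*/pull/*)    context_type="github_pr" ;;
  https://github.com/*/issues/*)  context_type="github_issue" ;;
  */*'#'[0-9]*)                   context_type="github_ref" ;;  # owner/repo#123 (issue or PR)
  *)
    if printf '%s' "$arg" | grep -qE '^[A-Z]{2,10}-[0-9]{1,6}$'; then
      context_type="jira"
    else
      context_type="custom"                                     # free-form notes
    fi ;;
esac
```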
### Step 1.2: Display Detection
Display what was detected:
```markdown
╭─────────────────────────────────────────────╮
│ 📝 COMMIT COMMAND │
╰─────────────────────────────────────────────╯
**Detected Input**:
- Type: [Jira Issue | GitHub Issue | GitHub PR | Custom Notes | Auto-detect]
- Reference: [ID/URL if applicable]
- Flags: [List any flags provided]
```
### Step 1.3: Work Folder Detection (Optional Enhancement)
**Objective**: Auto-detect work folder to enrich commit context automatically.
**Try to find work folder** (non-blocking):
```bash
# Get current branch
branch=$(git rev-parse --abbrev-ref HEAD 2>/dev/null)
# Extract identifier (Jira ID)
identifier=$(echo "$branch" | grep -oE '[A-Z]{2,10}-[0-9]+' | head -1)
# Find work folder
if [ -n "$identifier" ]; then
work_folder=$(find .WIP -type d -name "${identifier}*" | head -1)
# Read metadata if folder exists
if [ -f "$work_folder/.metadata.json" ]; then
cat "$work_folder/.metadata.json"
fi
fi
```
**If work folder found, extract**:
- `work_folder_path`: .WIP/[identifier]
- `work_identifier`: From metadata.identifier
- `work_title`: From metadata.title
- `work_external`: From metadata.external (Jira/GitHub URLs)
- `work_progress`: Read 04-progress.md if exists (for phase-based commits)
**If no work folder found**:
- Continue with user-provided context only
- No error - work folder is optional
**Benefits of work folder context**:
- Auto-fills Jira ID if not provided by user
- Uses title from metadata for better commit messages
- Can reference phase number for multi-phase implementations
- Links commits to work folder in metadata
### Step 1.4: Store Context
Store the detected context in variables for later phases:
- `context_type`: jira | github_issue | github_pr | custom | auto
- `context_ref`: The ID/URL/notes
- `flag_message`: Custom message if --message provided
- `flag_staged_only`: Boolean for --staged-only
- `flag_commit_type`: Explicit commit type if --type provided
- `work_folder`: Path to work folder (or null if not found)
- `work_metadata`: Parsed metadata object (or null)
- `work_progress`: Progress info (or null)
---
## PHASE 2: Git State Validation
### Step 2.1: Get Current Branch
```bash
git rev-parse --abbrev-ref HEAD
```
**Validation**:
- ❌ **ERROR if on main/master**: Cannot commit directly to main/master branch
- ✅ **Continue**: If on feature/bugfix/chore branch
**Error Display** (if on main/master):
```markdown
╭─────────────────────────────────────────────╮
│ ❌ COMMIT BLOCKED │
╰─────────────────────────────────────────────╯
**Reason**: Direct commits to main/master are not allowed.
**Current branch**: main
**Suggested Actions**:
1. Create a feature branch: `git checkout -b feature/your-feature`
2. Create from Jira: `git checkout -b EC-1234-description`
3. Switch to existing branch: `git checkout <branch-name>`
Commit cancelled.
```
### Step 2.2: Validate Branch Name (if Jira context)
If context_type is "jira":
```bash
git rev-parse --abbrev-ref HEAD
```
Check if branch name contains the Jira issue key (case-insensitive).
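For example, a case-insensitive containment check might look like this (EC-1234 stands in for the detected key):
```bash
# Hypothetical sketch: warn when the branch name does not contain the Jira key.
branch=$(git rev-parse --abbrev-ref HEAD)
if printf '%s' "$branch" | grep -qiF "EC-1234"; then
  echo "✅ Branch matches EC-1234"
else
  echo "⚠️ Branch name does not contain EC-1234 (consider renaming)"
fi
```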
**Validation**:
- ⚠️ **WARN if mismatch**: Branch name doesn't contain Jira issue key
- ✅ **OK**: Branch name matches (e.g., EC-1234-feature for EC-1234)
**Warning Display** (if mismatch):
```markdown
⚠️ **Branch Naming Warning**
**Expected**: Branch name should contain issue key "EC-1234"
**Current branch**: feature/user-auth
**Suggestion**: Consider renaming branch to EC-1234-user-auth
Continue with commit? [Proceeding...]
```
### Step 2.3: Check Git Status
```bash
git status --porcelain
```
**Analyze output**:
- **Staged changes**: Lines starting with M, A, D, R, C in first column
- **Unstaged changes**: Lines with modifications in second column
- **Untracked files**: Lines starting with ??
- **Merge conflicts**: Lines starting with U
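A hedged sketch of this classification using the two status columns of `git status --porcelain` (conflict codes are the standard DD/AA/U* pairs):
```bash
# Hypothetical sketch: count staged, unstaged, untracked, and conflicted entries.
status=$(git status --porcelain)
staged=$(printf '%s\n' "$status" | grep -cE '^[MADRC].')
unstaged=$(printf '%s\n' "$status" | grep -cE '^.[MD]')
untracked=$(printf '%s\n' "$status" | grep -cE '^\?\?')
conflicts=$(printf '%s\n' "$status" | grep -cE '^(DD|AA|U.|.U)')
echo "staged=$staged unstaged=$unstaged untracked=$untracked conflicts=$conflicts"
```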
**Validation**:
- ❌ **ERROR if merge conflicts**: Resolve conflicts before committing
- ⚠️ **WARN if no changes**: No staged or unstaged changes detected
- ✅ **Continue**: Changes detected
**Error Display** (if conflicts):
```markdown
╭─────────────────────────────────────────────╮
│ ❌ MERGE CONFLICTS DETECTED │
╰─────────────────────────────────────────────╯
**Files with conflicts**:
- src/api/controller.ts
- src/models/user.ts
**Action Required**: Resolve merge conflicts before committing.
1. Edit conflicted files
2. Mark as resolved: `git add <file>`
3. Run commit command again
Commit cancelled.
```
**Empty Changes Display**:
```markdown
╭─────────────────────────────────────────────╮
│ ⚠️ NO CHANGES DETECTED │
╰─────────────────────────────────────────────╯
**Git Status**: Working directory clean
**Staged**: 0 files
**Unstaged**: 0 files
Nothing to commit.
```
### Step 2.4: Summary Display
```markdown
╭─────────────────────────────────────────────╮
│ ✅ GIT STATE VALIDATION PASSED │
╰─────────────────────────────────────────────╯
**Branch**: feature/user-authentication
**Branch Status**: Valid feature branch
**Jira Validation**: ✅ Branch matches EC-1234 [if applicable]
**Changes Detected**: Yes
**Summary**:
- Staged: 3 files
- Unstaged: 5 files
- Untracked: 2 files
Proceeding to staging phase...
```
---
## PHASE 3: Staging & Change Analysis
### Step 3.1: Determine Staging Strategy
**Default behavior** (when `--staged-only` NOT provided):
```bash
git add .
```
Display: `🔄 Auto-staging all changes (git add .)`
**Staged-only behavior** (when `--staged-only` provided or called from implement flow):
- Skip auto-staging
- Only commit what's already staged
- Display: `📋 Using only staged changes`
**Validation after staging**:
```bash
git diff --cached --name-only
```
If no files staged:
```markdown
**No staged changes**
No files are staged for commit. Cannot proceed.
**Suggestions**:
1. Stage specific files: `git add <files>`
2. Stage all changes: `git add .`
3. Check working directory: `git status`
```
### Step 3.2: Analyze Staged Changes
```bash
# Get summary statistics
git diff --cached --stat
# Get detailed diff for analysis
git diff --cached
```
**Display**:
```markdown
╭─────────────────────────────────────────────╮
│ 🔍 ANALYZING STAGED CHANGES │
╰─────────────────────────────────────────────╯
**Files Changed**: 5 files
**Insertions**: +234 lines
**Deletions**: -45 lines
**Affected Files**:
- src/api/auth-controller.ts (+156, -12)
- src/models/user.ts (+45, -8)
- src/services/jwt-service.ts (+28, -0)
- tests/auth.test.ts (+5, -25)
- README.md (+0, -0)
Analyzing changes to determine commit type...
```
### Step 3.3: Determine Commit Type
Analyze the diff to determine the appropriate conventional commit prefix.
**Logic**:
1. **feat**: New features, new files with substantial code, new API endpoints
- Keywords: "new", "add", "implement", "create"
- Indicators: New files, new functions/classes, new routes
2. **fix**: Bug fixes, error handling, corrections
- Keywords: "fix", "bug", "error", "issue", "resolve"
- Indicators: Changes in error handling, conditional logic fixes
3. **chore**: Maintenance, dependencies, configs, build changes
- Keywords: "update", "upgrade", "maintain", "cleanup"
- Indicators: package.json, lockfiles, config files, build scripts
4. **refactor**: Code restructuring without changing behavior
- Keywords: "refactor", "restructure", "reorganize", "simplify"
- Indicators: Moving code, renaming, extracting functions
5. **test**: Test additions or updates
- Keywords: "test", "spec"
- Indicators: Files in test/ directories, *.test.*, *.spec.*
6. **docs**: Documentation changes only
- Keywords: "doc", "readme", "comment"
- Indicators: .md files, comment changes, no code changes
7. **style**: Code style/formatting (no logic change)
- Keywords: "format", "style", "lint"
- Indicators: Whitespace, formatting, linting fixes
8. **perf**: Performance improvements
- Keywords: "performance", "optimize", "faster", "cache"
- Indicators: Algorithm changes, caching additions
**Override**: If `--type` flag provided, use that instead.
**Display**:
```markdown
🎯 **Commit Type Determined**: feat
**Reasoning**: New authentication controller and JWT service implementation detected
```
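A simplified, purely path-based sketch of this decision (the real analysis also reads diff content and keywords; the thresholds and patterns below are assumptions):
```bash
# Hypothetical sketch: pick a conventional-commit prefix from staged paths only.
files=$(git diff --cached --name-only)
if ! printf '%s\n' "$files" | grep -qvE '\.(md|rst|txt)$'; then
  commit_type="docs"      # nothing but documentation changed
elif printf '%s\n' "$files" | grep -qE '(^|/)(tests?|spec)/|\.(test|spec)\.'; then
  commit_type="test"
elif printf '%s\n' "$files" | grep -qE '(^|/)package(-lock)?\.json$|\.lock$'; then
  commit_type="chore"     # dependency or config maintenance
elif git diff --cached --diff-filter=A --name-only | grep -q .; then
  commit_type="feat"      # new files usually mean new functionality
else
  commit_type="fix"       # fallback; keyword analysis of the diff refines this
fi
echo "🎯 Commit Type Determined: $commit_type"
```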
### Step 3.4: Extract Change Summary
Analyze the diff to extract:
1. **Primary change**: What's the main thing being changed?
2. **Affected components**: Which parts of the system are touched?
3. **Key changes**: 2-4 bullet points describing specific changes
**Store for commit message generation**:
- `commit_type`: The determined prefix (feat/fix/chore/etc.)
- `primary_change`: One-line description
- `affected_files`: List of changed files
- `key_changes`: Array of bullet points
---
## PHASE 4: Optional Context Fetching
### Step 4.1: Evaluate Need for External Context
**Decision logic**:
```
IF context_type is "jira" OR "github_issue" OR "github_pr":
IF primary_change is clear AND key_changes are substantial:
SKIP context fetching
Display: "📊 Change analysis is clear, skipping external context fetch"
ELSE:
FETCH external context
Display: "🔍 Fetching external context to enrich commit message..."
ELSE:
SKIP context fetching
```
**Indicators that analysis is "clear"**:
- 3+ key changes identified
- Primary change is descriptive (>15 chars)
- Commit type confidence is high
### Step 4.2: Fetch Context (if needed)
#### For Jira Issues:
```markdown
🔍 **Fetching Jira Context**
⏳ Fetching issue EC-1234 via jira-analyzer...
```
Use Task tool:
```
prompt: "Fetch and summarize Jira issue [JIRA-KEY]"
subagent_type: "schovi:jira-auto-detector:jira-analyzer"
description: "Fetching Jira issue summary"
```
#### For GitHub Issues:
```markdown
🔍 **Fetching GitHub Issue Context**
⏳ Fetching issue via gh-issue-analyzer...
```
Use Task tool:
```
prompt: "Fetch and summarize GitHub issue [URL or owner/repo#123]"
subagent_type: "schovi:gh-issue-analyzer:gh-issue-analyzer"
description: "Fetching GitHub issue summary"
```
#### For GitHub PRs:
```markdown
🔍 **Fetching GitHub PR Context**
⏳ Fetching PR via gh-pr-analyzer...
```
Use Task tool:
```
prompt: "Fetch and summarize GitHub pull request [URL or owner/repo#123]"
subagent_type: "schovi:gh-pr-auto-detector:gh-pr-analyzer"
description: "Fetching GitHub PR summary"
```
### Step 4.3: Merge Context with Analysis
Combine external context with diff analysis:
- Use external context to clarify "why" (problem being solved)
- Use diff analysis to describe "what" (specific changes made)
- Prioritize diff analysis for technical accuracy
---
## PHASE 5: Commit Message Generation
### Step 5.1: Generate Message Components
**Context Priority**:
1. **Work folder context** (if available from Step 1.3)
- Use metadata.title for description context
- Use metadata.identifier for Related reference
- Use work_progress for phase-based commit titles
2. **User-provided context** (Jira, GitHub, notes from Step 1.1-1.2)
3. **Diff analysis** (from Phase 3-4)
**Title Line** (50-72 chars):
```
<commit_type>: <brief description of primary change>
```
**If work_progress exists** (multi-phase implementation):
```
<commit_type>: Complete <phase title> (Phase N/Total)
```
Examples:
- `feat: Add JWT authentication to user login`
- `fix: Resolve token expiration handling bug`
- `feat: Complete authentication core (Phase 1/4)` [from work_progress]
- `chore: Update dependencies to latest versions`
**Description Paragraph** (2-3 sentences):
Explain the problem, solution, or context. Answer:
- What problem does this solve? (if fix/feat)
- Why was this change needed?
- What approach was taken?
**If work_metadata exists**: Reference work title
- "Implements Phase N of [work_title]"
- "Part of [work_title] implementation"
**Bullet Points** (2-5 items):
List specific changes:
- Technical changes (new functions, modified logic)
- File-level changes (new files, removed files)
- Integration changes (API changes, database changes)
**Related Reference** (if applicable):
```
Related to: EC-1234
```
or
```
Related to: owner/repo#123
```
**Footer**:
```
🤖 Generated with Claude Code
Co-Authored-By: Claude <noreply@anthropic.com>
```
### Step 5.2: Assemble Complete Message
**Format**:
```
<TYPE>: <Title>
<Description paragraph explaining problem/solution/changes>
- <Bullet point 1>
- <Bullet point 2>
- <Bullet point 3>
- <Bullet point 4>
Related to: <JIRA-KEY or GitHub reference>
🤖 Generated with Claude Code
Co-Authored-By: Claude <noreply@anthropic.com>
```
### Step 5.3: Display Preview
```markdown
╭─────────────────────────────────────────────╮
│ 📝 COMMIT MESSAGE PREVIEW │
╰─────────────────────────────────────────────╯
```
feat: Add JWT authentication to user login
Implements JSON Web Token based authentication system to replace
session-based auth. Adds token generation, verification, and refresh
functionality with configurable expiration times.
- Add AuthController with login/logout endpoints
- Implement JwtService for token operations
- Create User model with password hashing
- Add authentication middleware for protected routes
- Update tests to cover new auth flow
Related to: EC-1234
🤖 Generated with Claude Code
Co-Authored-By: Claude <noreply@anthropic.com>
```
Proceeding with commit...
```
---
## PHASE 6: Commit Execution & Verification
### Step 6.1: Execute Commit
```
IMPORTANT: Use HEREDOC format for multi-line commit messages to ensure proper formatting.
```
Execute commit:
```bash
git commit -m "$(cat <<'EOF'
feat: Add JWT authentication to user login
Implements JSON Web Token based authentication system to replace
session-based auth. Adds token generation, verification, and refresh
functionality with configurable expiration times.
- Add AuthController with login/logout endpoints
- Implement JwtService for token operations
- Create User model with password hashing
- Add authentication middleware for protected routes
- Update tests to cover new auth flow
Related to: EC-1234
🤖 Generated with Claude Code
Co-Authored-By: Claude <noreply@anthropic.com>
EOF
)"
```
### Step 6.2: Verify Commit Created
```bash
git log -1 --oneline
```
**Success indicators**:
- Command exit code 0
- Log shows new commit hash
**Failure indicators**:
- Non-zero exit code
- Errors about hooks, conflicts, etc.
### Step 6.2.5: Update Work Folder Metadata (if applicable)
**If work_folder exists** (from Step 1.3):
1. Get commit hash:
```bash
git log -1 --format='%H'
```
2. Read existing metadata:
```bash
cat "$work_folder/.metadata.json"
```
3. Update fields:
```json
{
...existing,
"git": {
"branch": "[current branch]",
"commits": [...existing.commits, "[new commit hash]"],
"lastCommit": "[new commit hash]"
},
"timestamps": {
...existing.timestamps,
"lastModified": "[now from date -u]"
}
}
```
4. If work_progress exists (phase-based implementation):
- Update metadata.phases.list[current_phase].commit with new hash
- Update 04-progress.md with commit reference
5. Use Write tool to save updated metadata to `$work_folder/.metadata.json`.
**If no work folder**:
- Skip metadata update (not an error)
### Step 6.3: Display Result
**Success Display**:
```markdown
╭─────────────────────────────────────────────╮
│ ✅ COMMIT CREATED SUCCESSFULLY │
╰─────────────────────────────────────────────╯
📝 **Commit Hash**: a3b2c1d
📋 **Type**: feat
📌 **Title**: Add JWT authentication to user login
🔗 **Related**: EC-1234
**Commit Details**:
```
git log -1 --stat
```
**Next Steps**:
1. Review commit: `git show`
2. Continue development and commit more changes
3. Create pull request when ready: `git push` and use GitHub UI
4. Or amend if needed: `git commit --amend`
```
**Error Display** (if commit failed):
```markdown
╭─────────────────────────────────────────────╮
│ ❌ COMMIT FAILED │
╰─────────────────────────────────────────────╯
**Error Output**:
```
[Git error message]
```
**Possible Causes**:
1. Pre-commit hooks failed (linting, tests)
2. Commit message validation failed
3. File permissions issues
**Suggested Actions**:
1. Check hook output above for specific errors
2. Fix issues and run `/schovi:commit` again
3. Override hooks if necessary: `git commit --no-verify -m "message"`
4. Check git configuration: `git config --list`
Run `/schovi:commit --message "..."` to retry with specific message.
```
---
## SPECIAL CASES
### Case 1: Called from Implement Flow
When this command is called from `/schovi:implement` or other workflows:
**Detection**: Check if called with special context variable or flag `--from-implement`
**Modified Behavior**:
1. **Staging**: Use `--staged-only` behavior (don't auto-stage all changes)
2. **Validation**: Skip branch name validation (implement handles it)
3. **Message**: Use phase-specific message format if provided
4. **Display**: Minimal output (just commit hash and title)
**Integration Pattern**:
```markdown
Within implement command:
1. Make changes for Phase 1
2. Stage only Phase 1 files: `git add file1.ts file2.ts`
3. Call commit logic with:
- context: "Phase 1: Backend Service"
- type: "feat"
- staged_only: true
4. Repeat for each phase
```
### Case 2: Override with --message Flag
If user provides `--message "custom message"`:
**Behavior**:
1. Skip all analysis phases
2. Use provided message verbatim
3. Still add Claude Code footer
4. Still perform git state validation
**Display**:
```markdown
╭─────────────────────────────────────────────╮
│ 📝 USING CUSTOM MESSAGE │
╰─────────────────────────────────────────────╯
**User-provided message**:
```
feat: Custom commit message
```
Skipping analysis, using provided message...
```
### Case 3: Amending Last Commit
If user provides `--amend` flag:
**Validation**:
1. Check last commit author: `git log -1 --format='%an %ae'`
2. Check if commit is pushed: `git branch -r --contains HEAD`
3. Only allow if:
- Last commit author is "Claude" or user
- Commit is not pushed to remote
- No merge commits involved
**Behavior**:
- Run same analysis
- Generate new message or use existing
- Execute: `git commit --amend -m "$(cat <<'EOF' ... EOF)"`
---
## ERROR HANDLING
### Common Errors
**Error 1: Not a Git Repository**
```markdown
**Not a Git Repository**
Current directory is not a git repository.
**Initialize**: `git init`
**Or navigate**: `cd <git-repo-path>`
```
**Error 2: No Remote Repository**
```markdown
⚠️ **No Remote Repository**
No git remote configured. Commits will be local only.
**Add remote**: `git remote add origin <url>`
```
**Error 3: Detached HEAD State**
```markdown
**Detached HEAD State**
You are not on a branch. Commits may be lost.
**Create branch**: `git checkout -b <branch-name>`
```
**Error 4: GPG Signing Required but Not Configured**
```markdown
**GPG Signing Error**
Repository requires GPG signing but GPG is not configured.
**Configure**: `git config user.signingkey <key-id>`
**Disable**: `git config commit.gpgsign false`
```
---
## COMPLETION
### Success Summary
```markdown
╭─────────────────────────────────────────────╮
│ 🎉 COMMIT COMPLETE │
╰─────────────────────────────────────────────╯
**Summary**:
- ✅ Commit created: a3b2c1d
- ✅ Type: feat
- ✅ Files changed: 5
- ✅ Lines: +234, -45
**Commit Message**:
feat: Add JWT authentication to user login
**What's Next?**:
1. Continue development: Make more changes and commit again
2. Review your changes: `git show` or `git log --stat`
3. Push to remote: `git push` (or `git push -u origin <branch>` for first push)
4. Create PR: Use `/schovi:analyze` for PR analysis
Keep up the great work! 🚀
```
---
## IMPLEMENTATION NOTES
**For Claude Code**:
1. **Diff Analysis Intelligence**:
- Parse `git diff` to identify file types (controllers, models, tests, configs)
- Look for keywords in diff content (class, function, export, import, test, describe)
- Count additions vs deletions to gauge change magnitude
2. **Commit Type Detection**:
- Use file paths first (test/ = test, docs/ = docs)
- Check changed files (package.json = chore)
- Parse diff content for semantic keywords
- Default to "chore" if uncertain
3. **Message Quality**:
- Title: Clear, active voice, no period at end
- Description: Context-rich, explains "why" not just "what"
- Bullets: Specific, technical, file/function-level details
- Avoid vague terms like "update", "change", "modify" without specifics
4. **Validation Strictness**:
- BLOCK: main/master commits, merge conflicts
- WARN: Branch name mismatch, no remote, GPG issues
- ALLOW: Everything else with appropriate messaging
5. **Context Fetching Decision**:
- Prefer diff analysis (faster, no external dependencies)
- Fetch external context only when:
- Diff shows minimal changes (<20 lines)
- Changed files are unclear (generic names)
- Multiple unrelated changes detected
- User explicitly provided Jira/GitHub reference
6. **Integration with Implement**:
- When called from implement, expect `--from-implement` flag
- Respect phase boundaries (only commit phase-specific changes)
- Use provided phase name in commit title
- Minimal output (implement will summarize)

363
commands/debug.md Normal file

@@ -0,0 +1,363 @@
---
description: Deep debugging workflow with root cause analysis, problematic flow identification, and single fix proposal
argument-hint: [jira-id|pr-url|#pr-number|github-issue-url|description] [--input PATH] [--output PATH] [--no-file] [--quiet] [--work-dir PATH]
allowed-tools: ["Read", "Write", "Task", "ExitPlanMode"]
---
# Problem Debugger Workflow
You are performing **deep debugging and root cause analysis** for a bug or production issue using the **executor pattern**. Follow this structured workflow to identify the problematic flow and propose a single, targeted fix.
**Key Innovation**: The debug-executor subagent performs ALL work (context fetching, debugging, fix generation) in isolated context, keeping main context clean.
---
## ⚙️ MODE ENFORCEMENT
**CRITICAL**: This command operates in **PLAN MODE** throughout Phases 1-2 (argument parsing and executor invocation). You MUST use the **ExitPlanMode tool** (Phase 3) before Phase 4 (output handling) to transition from analysis to execution.
**Workflow**:
```
┌──────────────────────────────────┐
│ PLAN MODE (Read-only) │
│ Phases 1-2: Setup & Execute │
└──────────────────────────────────┘
[ExitPlanMode Tool]
┌──────────────────────────────────┐
│ EXECUTION MODE (Write) │
│ Phases 4-5: Output & Completion │
└──────────────────────────────────┘
```
---
## PHASE 1: ARGUMENT PARSING
Use lib/argument-parser.md:
```
Configuration:
command_name: "debug"
command_label: "Debug-Problem"
positional:
- name: "problem_input"
description: "Jira ID, GitHub URL, error description"
required: false
flags:
- name: "--input"
type: "path"
description: "Read problem description from file (error log, stack trace)"
- name: "--output"
type: "path"
description: "Custom output file path for debug report"
- name: "--work-dir"
type: "path"
description: "Custom work directory"
- name: "--no-file"
type: "boolean"
description: "Skip file creation, terminal only"
- name: "--quiet"
type: "boolean"
description: "Suppress terminal output"
validation:
- --output and --no-file are mutually exclusive
- At least one input source required (positional or --input)
```
**Store parsed values:**
- `problem_input`: Positional argument or --input file content
- `output_path`: --output value or null
- `work_dir`: --work-dir value or null
- `file_output`: true (unless --no-file)
- `terminal_output`: true (unless --quiet)
---
## PHASE 2: EXECUTE DEBUG (Isolated Context)
**Objective**: Spawn debug-executor subagent to perform ALL debugging work in isolated context.
**Use Task tool with debug-executor**:
```
Task tool configuration:
subagent_type: "schovi:debug-executor:debug-executor"
model: "sonnet"
description: "Execute debug workflow"
prompt: |
PROBLEM REFERENCE: [problem_input]
CONFIGURATION:
- identifier: [auto-detect from problem_input or generate slug]
- severity: [auto-detect or "Medium"]
Execute complete debugging workflow:
1. Fetch external context (Jira/GitHub if applicable)
2. Deep debugging & root cause analysis (Explore subagent, very thorough mode)
3. Generate fix proposal (location, code changes, testing, rollout)
Return structured fix proposal (~1500-2500 tokens).
```
**Expected output from executor**:
- Complete structured fix proposal markdown (~1500-2500 tokens)
- Includes: problem summary, root cause with execution flow, fix proposal with code changes, testing strategy, rollout plan
- All file references in file:line format
- Already formatted
**Store executor output**:
- `fix_proposal_output`: Complete markdown from executor
- `identifier`: Extract from fix proposal header or use fallback
---
## PHASE 3: EXIT PLAN MODE
**CRITICAL**: Before proceeding to output handling, use ExitPlanMode tool.
```
ExitPlanMode tool:
plan: |
# Debugging Complete
Root cause analysis and fix proposal completed via debug-executor subagent.
**Identifier**: [identifier]
**Problem**: [Brief description]
## Key Findings
- Problem context fetched and analyzed
- Deep debugging completed (Explore subagent, very thorough mode)
- Root cause identified with execution flow trace
- Fix location pinpointed with file:line
- Code changes proposed with testing strategy
## Next Steps
1. Save fix proposal to work folder
2. Display summary to user
3. Offer to implement fix
```
**Wait for user approval before proceeding to Phase 4.**
---
## PHASE 4: OUTPUT HANDLING & WORK FOLDER
### Step 4.1: Work Folder Resolution
Use lib/work-folder.md:
```
Configuration:
mode: "auto-detect"
identifier: [identifier extracted from fix_proposal_output or input]
description: [extract problem title from fix_proposal_output]
workflow_type: "debug"
current_step: "debug"
custom_work_dir: [work_dir from argument parsing, or null]
Output (store for use below):
work_folder: [path from library, e.g., ".WIP/EC-1234-feature"]
metadata_file: [path from library, e.g., ".WIP/EC-1234-feature/.metadata.json"]
output_file: [path from library, e.g., ".WIP/EC-1234-feature/debug-EC-1234.md"]
identifier: [identifier from library]
is_new: [true/false from library]
```
**Store the returned values for steps below.**
### Step 4.2: Write Debug Output
**If `file_output == true` (default unless --no-file):**
Use Write tool:
```
file_path: [output_file from Step 4.1]
content: [fix_proposal_output from Phase 2]
```
**If write succeeds:**
```
📄 Fix proposal saved to: [output_file]
```
**If write fails or --no-file:**
Skip file creation, continue to terminal output.
### Step 4.3: Update Metadata
**If work_folder exists and file was written:**
Read current metadata:
```bash
cat [metadata_file from Step 4.1]
```
Update fields:
```json
{
...existing fields,
"workflow": {
...existing.workflow,
"completed": ["debug"],
"current": "debug"
},
"files": {
"debug": "debug-[identifier].md"
},
"timestamps": {
...existing.timestamps,
"lastModified": "[current timestamp]"
}
}
```
Get current timestamp:
```bash
date -u +"%Y-%m-%dT%H:%M:%SZ"
```
Write updated metadata:
```
Write tool:
file_path: [metadata_file]
content: [updated JSON]
```
### Step 4.4: Terminal Output
**If `terminal_output == true` (default unless --quiet):**
Display:
```markdown
# 🐛 Debug Complete: [identifier]
Root cause analysis and fix proposal ready.
## 🔍 Root Cause
[Extract root cause summary from fix_proposal_output - 2-3 sentences]
## 💡 Fix Location
[Extract fix location from output - file:line]
## 📁 Output
Fix proposal saved to: `[output_file]`
Work folder: `[work_folder]`
## 🚀 Next Steps
Ready to implement the fix:
```bash
# Review the fix proposal first
cat [output_file]
# Then implement (coming soon)
/schovi:implement --input debug-[identifier].md
```
```
---
## PHASE 5: COMPLETION
**Final Message**:
```
✅ Debugging completed successfully!
🐛 Root cause identified for [identifier]
💡 Fix proposal generated with code changes
📁 Saved to: [file_path]
🚀 Ready to implement the fix? Review the proposal and run:
/schovi:implement --input debug-[identifier].md
```
**Command complete.**
---
## ERROR HANDLING
### Input Processing Errors
- **No input provided**: Ask user for Jira ID, GitHub URL, or error description
- **Invalid format**: Report error, show format examples
- **File not found**: Report error, ask for correct path
### Executor Errors
- **Executor failed**: Report error with details from subagent
- **Validation failed**: Check fix_proposal_output has required sections
- **Token budget exceeded**: Executor handles compression, shouldn't happen
### Output Errors
- **File write failed**: Report error, offer terminal-only output
- **Work folder error**: Use fallback location or report error
---
## QUALITY GATES
Before completing, verify:
- [ ] Input processed successfully with clear problem reference
- [ ] Executor invoked and completed successfully
- [ ] Fix proposal output received (~1500-2500 tokens)
- [ ] Output contains all required sections
- [ ] Root cause identified with execution flow
- [ ] Fix location specified with file:line
- [ ] Code changes provided (before/after)
- [ ] Testing strategy included
- [ ] Rollout plan included
- [ ] All file references use file:line format
- [ ] File saved to work folder (unless --no-file)
- [ ] Metadata updated
- [ ] Terminal output displayed (unless --quiet)
---
## NOTES
**Design Philosophy**:
- **Executor pattern**: ALL work (fetch + debug + generate) happens in isolated context
- **Main context stays clean**: Only sees final formatted output (~1.5-2.5k tokens)
- **Token efficiency**: 96% reduction in main context (from ~63k to ~2.5k tokens)
- **Consistent experience**: User sees same output, just more efficient internally
**Token Benefits**:
- Before: Main context sees input + debugging (60k) + generation = ~63k tokens
- After: Main context sees only final output = ~2.5k tokens
- Savings: 60.5k tokens (96% reduction)
**Integration**:
- Input from: Jira, GitHub issues/PRs, error descriptions, stack traces
- Output to: Work folder with metadata
- Next command: Implement for applying the fix
**Executor Capabilities**:
- Spawns jira-analyzer, gh-issue-analyzer for external context
- Spawns Explore subagent for very thorough debugging
- Generates fix proposal with code changes
- All in isolated context, returns clean result
---
**Command Version**: 2.0 (Executor Pattern)
**Last Updated**: 2025-11-07
**Dependencies**:
- `lib/argument-parser.md`
- `lib/work-folder.md`
- `schovi/agents/debug-executor/AGENT.md`

1974
commands/implement.md Normal file

File diff suppressed because it is too large

693
commands/implementV2.md Normal file

@@ -0,0 +1,693 @@
---
description: Phase-based implementation with progress tracking, pause/resume support, and automatic checkpoints
argument-hint: [--resume] [--phase N] [--no-commit] [--work-dir PATH]
allowed-tools: ["Read", "Write", "Edit", "Grep", "Glob", "Task", "Bash", "AskUserQuestion"]
---
# Implementation Executor with Phase Management
You are **executing an implementation plan** with phase-based progress tracking, automatic checkpoints, and resume capability for large tasks.
---
## ⚙️ PHASE 1: INITIALIZATION & WORK FOLDER RESOLUTION
### Step 1.1: Parse Command Arguments
**Input Received**: $ARGUMENTS
Parse flags:
- **`--resume`**: Continue from last checkpoint
- Reads metadata.phases.current to determine where to continue
- Loads 04-progress.md to show completed work
- **`--phase N`**: Start from specific phase number
- Example: --phase 2 starts at Phase 2
- Validates phase exists in plan
- **`--no-commit`**: Skip automatic commits after phases
- User will handle git commits manually
- Still updates progress.md
- **`--work-dir PATH`**: Use specific work folder
- Example: --work-dir .WIP/EC-1234-add-auth
- Overrides auto-detection
**Store parsed values**:
```
resume_mode = [boolean]
specific_phase = [number or null]
auto_commit = [true unless --no-commit]
work_dir = [path or null]
```
### Step 1.2: Auto-detect Work Folder
**Objective**: Find work folder containing 03-plan.md
**Priority Order**:
1. **From --work-dir flag**:
```bash
work_folder="$work_dir"
```
2. **From Git Branch**:
```bash
branch=$(git rev-parse --abbrev-ref HEAD)
identifier=$(echo "$branch" | grep -oE '[A-Z]{2,10}-[0-9]+')
if [ -n "$identifier" ]; then
work_folder=$(find .WIP -type d -name "${identifier}*" | head -1)
fi
```
3. **Recent work folders**:
```bash
ls -dt .WIP/*/ | head -5
# Check each for 03-plan.md
```
**Validation**:
```bash
if [ ! -f "$work_folder/03-plan.md" ]; then
echo "❌ No plan found in $work_folder"
echo "Run /schovi:plan first to generate implementation plan"
exit 1
fi
```
**Acknowledge work folder**:
```
📁 **[Implement]** Work folder: $work_folder
```
### Step 1.3: Load Plan and Metadata
**Read plan**:
```bash
cat "$work_folder/03-plan.md"
```
**Parse plan structure**:
- Extract phases (look for "## Phase N:" or "### Phase N:" headers)
- Count total phases
- Extract tasks per phase
- Identify if multi-phase or single-phase
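The phase extraction can be approximated with a simple header scan; a minimal sketch assuming the `## Phase N:` / `### Phase N:` convention mentioned above:
```bash
# Hypothetical sketch: list phase headers and count them in the plan.
grep -nE '^#{2,3} Phase [0-9]+:' "$work_folder/03-plan.md"
total_phases=$(grep -cE '^#{2,3} Phase [0-9]+:' "$work_folder/03-plan.md")
echo "Total phases: $total_phases"
```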
**Read metadata**:
```bash
cat "$work_folder/.metadata.json"
```
**Extract phase status**:
```json
{
"phases": {
"total": 4,
"completed": 2,
"current": 3,
"list": [
{"number": 1, "title": "...", "status": "completed", "commit": "abc123"},
{"number": 2, "title": "...", "status": "completed", "commit": "def456"},
{"number": 3, "title": "...", "status": "in_progress", "commit": null},
{"number": 4, "title": "...", "status": "pending", "commit": null}
]
}
}
```
### Step 1.4: Read or Create Progress File
**If 04-progress.md exists**:
```bash
cat "$work_folder/04-progress.md"
```
**Parse progress**:
- Identify completed phases (✅ markers)
- Identify current phase (🚧 marker)
- Extract last checkpoint commit
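A rough sketch of reading those markers back out of 04-progress.md, assuming the `**Commit**:` line format written by the checkpoint step (Step 2.4):
```bash
# Hypothetical sketch: summarize phase markers and the last checkpoint commit.
progress="$work_folder/04-progress.md"
grep -E '^### (✅|🚧|⏳|❌) Phase' "$progress"            # per-phase status lines
last_checkpoint=$(grep -oE '\*\*Commit\*\*: [0-9a-f]+' "$progress" | tail -1 | awk '{print $2}')
echo "Last checkpoint commit: ${last_checkpoint:-none}"
```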
**If 04-progress.md doesn't exist, create initial**:
Use Write tool: `$work_folder/04-progress.md`
```markdown
# Implementation Progress
**Work Folder**: [work_folder]
**Plan**: 03-plan.md
**Started**: [timestamp]
## Phases
### ⏳ Phase 1: [Title]
Status: Pending
Tasks: [count] tasks
### ⏳ Phase 2: [Title]
Status: Pending
Tasks: [count] tasks
[... for each phase]
---
**Legend**:
- ✅ Completed
- 🚧 In Progress
- ⏳ Pending
- ❌ Failed
```
### Step 1.5: Determine Starting Phase
**Logic**:
1. **If --phase N provided**:
- Start at phase N
- Validate N <= total phases
2. **If --resume flag**:
- Start at metadata.phases.current
- Or find first non-completed phase in metadata.phases.list
3. **If neither flag**:
- If phases.completed == 0: Start at phase 1
- Else: Ask user to use --resume or --phase
**Acknowledge start point**:
```
🚀 **[Implement]** Starting at Phase [N]: [Title]
```
---
## ⚙️ PHASE 2: PHASE EXECUTION LOOP
**For each phase** from starting_phase to total_phases:
### Step 2.1: Load Phase Context
**Extract phase details from plan**:
```markdown
## Phase [N]: [Title]
**Tasks**:
- Task 1: Description
Files: path/to/file.ts:123
- Task 2: Description
Files: path/to/file.ts:456
**Acceptance Criteria**:
- [ ] Criterion 1
- [ ] Criterion 2
```
**Parse**:
- phase_number = N
- phase_title = [Title]
- phase_tasks = [List of tasks with file references]
- phase_criteria = [Acceptance criteria]
**Show phase header**:
```
╭─────────────────────────────────────────────────────────╮
│ 🚧 PHASE [N]/[TOTAL]: [TITLE] │
╰─────────────────────────────────────────────────────────╯
Tasks: [count]
Files affected: [list key files]
Starting implementation...
```
### Step 2.2: Execute Phase Tasks
**For each task in phase**:
1. **Show task**:
```
📝 Task [N.M]: [Description]
Files: [file references]
```
2. **Read relevant files**:
- Use Read tool for files mentioned in task
- Load context for changes
3. **Implement changes**:
- Use Edit tool for modifications
- Use Write tool for new files
- Follow task description
4. **Mark task complete**:
```
✅ Task [N.M] complete
```
5. **Update progress.md** (append):
```markdown
- [x] Task [N.M]: [Description]
```
**Handle errors**:
```
If task fails (edit error, file not found, etc.):
❌ **[Implement]** Task [N.M] failed: [error]
Options:
1. Skip task and continue (mark as TODO)
2. Pause implementation (save progress)
3. Cancel implementation
What would you like to do? [1-3]
```
### Step 2.3: Verify Phase Completion
**Check acceptance criteria**:
```
For each criterion in phase_criteria:
- Run tests if criterion mentions testing
- Check file exists if criterion mentions file creation
- Validate logic if criterion is verifiable
Mark:
- ✅ if verified
- ⚠️ if cannot auto-verify (manual check needed)
- ❌ if fails
```
**Summary**:
```
📊 Phase [N] Summary:
✅ Tasks completed: [count]/[total]
⚠️ Manual verification needed: [count]
❌ Tests failing: [count]
[If all ✅]:
✅ Phase [N] complete! Ready to commit.
[If any ❌]:
⚠️ Phase [N] has failures. Review before committing.
```
### Step 2.4: Create Phase Checkpoint
**If auto_commit == true** (default):
1. **Stage changes**:
```bash
git add .
```
2. **Generate commit message**:
```markdown
feat: Complete [Phase Title] (Phase [N]/[TOTAL])
Implemented:
- [Task 1 summary]
- [Task 2 summary]
- [Task 3 summary]
Files modified:
- path/to/file1.ts
- path/to/file2.ts
Related to: [identifier]
Co-Authored-By: Claude <noreply@anthropic.com>
```
3. **Commit**:
```bash
git add . && git commit -m "$(cat <<'EOF'
[commit message from above]
EOF
)"
```
4. **Get commit hash**:
```bash
git log -1 --format='%H'
```
5. **Update metadata**:
```json
{
"phases": {
"completed": [current + 1],
"current": [next phase or null if done],
"list": [
...update phase[N] with status="completed", commit="hash", completedAt="now"
]
},
"git": {
"commits": [...existing, "hash"],
"lastCommit": "hash"
},
"timestamps": {
"lastModified": "[now]"
}
}
```
6. **Update progress.md**:
```markdown
### ✅ Phase [N]: [Title] (Completed [timestamp])
- [x] Task N.1: [description]
- [x] Task N.2: [description]
**Commit**: [hash]
**Duration**: [time since phase started]
```
7. **Acknowledge checkpoint**:
```
💾 **[Implement]** Phase [N] checkpoint created
📍 Commit: [short-hash]
```
**If auto_commit == false** (--no-commit):
- Skip git commit
- Update progress.md with status
- Update metadata with completed status (no commit hash)
- Show: "⚠️ Commit skipped (--no-commit flag). Commit manually when ready."
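Step 5's metadata update could be expressed as a single jq pass; a minimal sketch assuming the phase-list layout shown in Step 1.3 (jq availability and variable names are assumptions):
```bash
# Hypothetical sketch: mark phase $phase_number completed and record the checkpoint commit.
ts=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
jq --argjson n "$phase_number" --arg hash "$commit_hash" --arg ts "$ts" '
  .phases.completed += 1
  | .phases.list |= map(if .number == $n
                        then . + {status: "completed", commit: $hash, completedAt: $ts}
                        else . end)
  | .phases.current = (if .phases.completed >= .phases.total then null else .phases.completed + 1 end)
  | .git.commits = ((.git.commits // []) + [$hash])
  | .git.lastCommit = $hash
  | .timestamps.lastModified = $ts
' "$work_folder/.metadata.json" > /tmp/metadata.tmp && mv /tmp/metadata.tmp "$work_folder/.metadata.json"
```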
### Step 2.5: Check if Pause Requested
**After each phase, ask**:
```
🎯 Phase [N] complete! ([completed]/[total] phases done)
Continue to Phase [N+1]? [yes/no/pause]
- yes: Continue immediately
- no: Pause (resume with --resume)
- pause: Same as no
```
**If user says "yes"**:
- Continue to next phase
**If user says "no" or "pause"**:
- Update metadata.phases.current to next phase
- Save all progress
- Show resume instructions:
```
⏸️ **[Implement]** Implementation paused
Progress saved:
- Completed: [count] phases
- Next: Phase [N+1]
To resume:
/schovi:implement --resume
```
- Exit phase loop
### Step 2.6: Move to Next Phase
**If more phases remaining**:
- Increment current_phase
- Loop back to Step 2.1
**If all phases complete**:
- Proceed to Phase 3 (Completion)
---
## ⚙️ PHASE 3: COMPLETION
### Step 3.1: Final Summary
```
╭─────────────────────────────────────────────────────────╮
│ ✅ IMPLEMENTATION COMPLETE │
╰─────────────────────────────────────────────────────────╯
**Work Folder**: $work_folder
**Phases Completed**: [total]
[For each phase]:
✅ Phase [N]: [Title]
Commit: [hash]
Tasks: [count]
**Total Commits**: [count]
**Total Files Modified**: [count]
**Next Steps**:
- Review changes: git log --oneline
- Run tests: [test command if available]
- Create PR: /schovi:publish
```
### Step 3.2: Update Final Metadata
**Set workflow as complete**:
```json
{
"workflow": {
"completed": ["analyze", "plan", "implement"],
"current": "implement"
},
"phases": {
"completed": [total],
"current": null
},
"timestamps": {
"lastModified": "[now]",
"completed": "[now]"
}
}
```
### Step 3.3: Proactive Next Steps
**Offer to create PR**:
```
🚀 Ready to publish?
I can create a GitHub Pull Request with:
- Branch: [current branch]
- Title: [from Jira or commits]
- Description: [from plan and progress]
- Changes: [all commits]
Would you like me to run `/schovi:publish` now? [yes/no]
```
**If user says "yes"**:
- Use SlashCommand tool: `/schovi:publish`
**If user says "no"**:
```
Perfect! Here's what you can do:
1. 📤 Create PR manually:
- /schovi:publish
- Or: gh pr create
2. 🧪 Run tests:
- npm test
- pytest
- [project-specific]
3. 📝 Review changes:
- git diff main
- git log --oneline
4. ✅ Mark as done:
- Implementation complete! 🎉
```
---
## ⚙️ ERROR HANDLING
### Scenario 1: No Plan Found
```
❌ Cannot start implementation - no plan found
Work folder: $work_folder
Required: 03-plan.md
Actions:
1. Generate plan: /schovi:plan
2. Or specify different work folder: --work-dir PATH
```
### Scenario 2: Git Conflicts
```
❌ Git conflicts detected
Cannot commit Phase [N] due to merge conflicts.
Actions:
1. Resolve conflicts manually
2. Stage resolved files: git add .
3. Resume: /schovi:implement --resume
Or:
- Skip auto-commit: /schovi:implement --no-commit
- Commit manually later
```
### Scenario 3: Test Failures
```
⚠️ Tests failing after Phase [N]
Phase committed but tests are failing.
Options:
1. Continue anyway (fix later)
2. Pause and fix now
3. Rollback phase (git reset HEAD~1)
What would you like to do? [1-3]
```
### Scenario 4: File Not Found
```
❌ Cannot find file: path/to/file.ts:123
Mentioned in Phase [N], Task [M]
Possible causes:
- File path incorrect in plan
- File not yet created
- Wrong directory
Actions:
1. Skip task (mark as TODO)
2. Search for file: find . -name "file.ts"
3. Pause implementation
What would you like to do? [1-3]
```
---
## 💡 USAGE EXAMPLES
### Example 1: Fresh Implementation
```bash
# After analyze → plan workflow
/schovi:implement
# Workflow:
# 1. Auto-detects work folder from git branch
# 2. Loads 03-plan.md
# 3. Creates 04-progress.md
# 4. Executes Phase 1
# 5. Commits Phase 1
# 6. Asks to continue to Phase 2
# 7. Repeats until all phases done
```
### Example 2: Resume After Pause
```bash
# Previously paused after Phase 2
/schovi:implement --resume
# Workflow:
# 1. Auto-detects work folder
# 2. Reads metadata: phases.current = 3
# 3. Loads progress from 04-progress.md
# 4. Shows: "Resuming from Phase 3"
# 5. Continues with Phase 3
```
### Example 3: Start from Specific Phase
```bash
# Jump to Phase 3 (skip 1 and 2)
/schovi:implement --phase 3
# Use case: Phases 1-2 already done manually
```
### Example 4: Manual Commits
```bash
# Implement without auto-commits
/schovi:implement --no-commit
# Workflow:
# 1. Executes all tasks
# 2. Updates progress.md
# 3. No git commits
# 4. User commits manually when ready
```
---
## 🎯 KEY FEATURES
1. **Phase-based Execution**: Break large tasks into manageable phases
2. **Progress Tracking**: 04-progress.md shows exactly what's done
3. **Pause/Resume**: Stop anytime, resume later with --resume
4. **Automatic Checkpoints**: Git commit after each phase
5. **Metadata Sync**: Full workflow state in .metadata.json
6. **Error Recovery**: Handle failures gracefully, allow skip/retry
7. **Context Management**: Load only current phase details
8. **Proactive**: Offers next steps (create PR, run tests)
---
## 📋 VALIDATION CHECKLIST
Before starting implementation:
- [ ] Work folder found
- [ ] 03-plan.md exists and readable
- [ ] Metadata loaded successfully
- [ ] Git working directory clean (or user acknowledges)
- [ ] Phases extracted from plan
- [ ] Starting phase determined
During implementation:
- [ ] Each task executed successfully or marked for skip
- [ ] Progress.md updated after each task
- [ ] Phase checkpoint created (commit or progress update)
- [ ] Metadata updated with phase status
- [ ] User prompted for pause after each phase
After completion:
- [ ] All phases marked complete
- [ ] Final metadata updated
- [ ] Summary shown to user
- [ ] Next steps provided
---
## 🚀 WORKFLOW INTEGRATION
**Full Workflow**:
```
spec  →  analyze  →  plan  →  implement  →  commit  →  publish
  ↓         ↓          ↓          ↓
01-spec  02-anly    03-plan    04-prog
```
**Technical Workflow**:
```
analyze  →  plan  →  implement  →  commit  →  publish
   ↓         ↓          ↓
02-anly   03-plan    04-prog
```
**Bug Workflow**:
```
debug  →  [plan]  →  implement  →  commit  →  publish
  ↓                      ↓
02-dbg                04-prog
```
---
## 🎉 BEGIN IMPLEMENTATION
Start with Phase 1: Initialization & Work Folder Resolution.

2169
commands/implementV3.md Normal file
File diff suppressed because it is too large

998
commands/plan.md Normal file
@@ -0,0 +1,998 @@
---
description: Generate implementation specification from problem analysis with flexible input sources
argument-hint: [jira-id|github-issue-url|--input path|--from-scratch description] [--work-dir PATH]
allowed-tools: ["Read", "Grep", "Glob", "Task", "mcp__jira__*", "mcp__jetbrains__*", "Bash", "AskUserQuestion", "Write"]
---
# Create Specification Workflow
You are **creating an implementation specification** that bridges problem analysis and implementation. This spec transforms exploratory analysis into actionable, clear implementation guidance.
---
## ARGUMENT PARSING
Parse command arguments using lib/argument-parser.md:
```
Configuration:
command_name: "plan"
command_label: "Create-Spec"
positional:
- name: "input"
description: "Jira ID, GitHub URL, analysis file path, or description"
required: false
flags:
- name: "--input"
type: "path"
description: "Analysis file path"
- name: "--output"
type: "path"
description: "Custom output file path"
- name: "--from-scratch"
type: "string"
description: "Create spec without analysis"
- name: "--work-dir"
type: "path"
description: "Custom work directory"
- name: "--no-file"
type: "boolean"
description: "Skip file creation, terminal only"
- name: "--quiet"
type: "boolean"
description: "Suppress terminal output"
- name: "--post-to-jira"
type: "boolean"
description: "Post spec as Jira comment"
validation:
- Check for conflicting flags (--input with --from-scratch)
- Ensure at least one input source provided
```
**Store parsed values:**
- `input_value`: Positional argument or --input value or --from-scratch value
- `output_path`: --output value or null
- `work_dir`: --work-dir value or null
- `file_output`: true (unless --no-file)
- `terminal_output`: true (unless --quiet)
- `jira_posting`: true if --post-to-jira, else false
- `from_scratch_mode`: true if --from-scratch flag present
---
## PHASE 1: INPUT VALIDATION & ANALYSIS EXTRACTION
### Step 1.1: Classify and Validate Input Type
**Input classification:**
1. **Research File** (✅ VALID - --input flag)
- Pattern: `--input ./research-*.md`
- Has deep technical analysis with file:line references
- Action: Read and extract research analysis
2. **Analysis File (Legacy)** (✅ VALID - --input flag)
- Pattern: `--input ./analysis-*.md`
- Has technical analysis with file:line references (from old analyze command)
- Action: Read and extract analysis
- Note: Legacy support, use research command for new workflows
3. **From Scratch** (✅ VALID - --from-scratch flag)
- Pattern: `--from-scratch "description"`
- Bypass research requirement
- Action: Interactive minimal spec creation
4. **Conversation Analysis** (✅ VALID - no args, research/analysis in conversation)
- Pattern: No arguments
- Recent `/schovi:research` or `/schovi:analyze` output in conversation
- Action: Extract from conversation history
5. **Brainstorm File** (❌ INVALID - requires research first)
- Pattern: `--input ./brainstorm-*.md`
- Has multiple solution options, lacks deep technical analysis
- Action: STOP with guidance to run research first
6. **Raw Input** (❌ INVALID - Jira ID, GitHub URL, text description)
- Patterns: `EC-1234`, `#123`, `owner/repo#123`, free text without --from-scratch
- Requires research first
- Action: STOP with guidance message
**If input type is INVALID (Raw inputs or brainstorm without research):**
Determine specific error type and display appropriate message:
#### Error Type A: Raw Input (Jira, GitHub, text without flags)
```markdown
╭─────────────────────────────────────────────────────────────────╮
│ ❌ RESEARCH REQUIRED BEFORE SPECIFICATION GENERATION │
╰─────────────────────────────────────────────────────────────────╯
**Problem**: Cannot generate actionable specification without deep technical research.
**Input Detected**: [Describe what was provided - Jira ID, GitHub URL, description]
**Why Research is Required**:
Specifications need specific file locations, affected components, and technical context
to generate actionable implementation tasks. Without research:
❌ Tasks will be vague: "Fix the bug" instead of "Update validation in Validator.ts:67"
❌ No clear entry points: Which files to change?
❌ Missing context: How do components interact?
❌ Unclear scope: What else might be affected?
**Required Workflow**:
🧠 **Step 1: Brainstorm Options** (optional, recommended)
Explore 2-3 solution approaches:
/schovi:brainstorm [your-input]
🔬 **Step 2: Deep Research** (required)
Analyze ONE specific approach:
/schovi:research --input brainstorm-[id].md --option [N]
OR directly: /schovi:research --input [your-input]
📋 **Step 3: Create Spec** (this command)
Generate implementation plan:
/schovi:plan --input research-[id].md
**Quick Path** (skip brainstorm):
# Direct deep research
/schovi:research --input [jira-id|github-url|file]
/schovi:plan --input research-[id].md
**Simple Tasks** (skip research):
# Create minimal spec without research
/schovi:plan --from-scratch "Task description"
**Examples**:
# Wrong: Raw input
/schovi:plan EC-1234 ❌
# Right: Research first
/schovi:research --input EC-1234
/schovi:plan --input research-EC-1234.md ✅
# Or full workflow
/schovi:brainstorm EC-1234
/schovi:research --input brainstorm-EC-1234.md --option 2
/schovi:plan --input research-EC-1234-option2.md ✅
```
#### Error Type B: Brainstorm File (--input brainstorm-*.md)
```markdown
╭─────────────────────────────────────────────────────────────────╮
│ ❌ BRAINSTORM CANNOT BE USED DIRECTLY FOR SPECIFICATION │
╰─────────────────────────────────────────────────────────────────╯
**Problem**: Brainstorm files contain multiple solution options without deep technical analysis.
**Input Detected**: [brainstorm file path]
**Why Research is Required**:
Brainstorm provides 2-3 high-level solution options with broad feasibility analysis.
To create actionable implementation tasks, you must:
1. Choose ONE option from brainstorm
2. Perform deep technical research on that option
3. Then create specification from research
Brainstorm → Research → Plan
(2-3 opts) (1 deep) (spec)
**Required Actions**:
🔬 **Run research on chosen option**:
# Research option 2 from brainstorm
/schovi:research --input [brainstorm-file] --option 2
# Then create spec from research
/schovi:plan --input research-[id]-option2.md
**Available Options** (from your brainstorm):
[List options from brainstorm file if readable]
**Example**:
# Wrong: Use brainstorm directly
/schovi:plan --input brainstorm-EC-1234.md ❌
# Right: Research first, then plan
/schovi:research --input brainstorm-EC-1234.md --option 2
/schovi:plan --input research-EC-1234-option2.md ✅
```
**Workflow**:
```
┌─────────────┐     ┌──────────────┐     ┌──────────────┐     ┌─────────────┐
│   Problem   │  →  │  Brainstorm  │  →  │   Research   │  →  │    Plan     │
│ (Jira, GH)  │     │  (2-3 opts)  │     │   (1 deep)   │     │ (Spec Gen)  │
└─────────────┘     └──────────────┘     └──────────────┘     └─────────────┘
       ↓ optional                               ↑ required
       └────────────────────────────────────────┘

╭─────────────────────────────────────────────────────────────────────────────╮
│ 💡 TIP: Run /schovi:research --input [input] to perform deep analysis first  │
╰─────────────────────────────────────────────────────────────────────────────╯
```
**HALT EXECUTION** - Do not proceed.
---
### Step 1.2: Extract Analysis Content
**Based on validated input type:**
#### Option A: Research/Analysis File (--input flag provided)
```
1. Acknowledge file read:
📄 **[Create-Spec]** Reading research from file: [PATH]
2. Use Read tool to load file contents:
file_path: [PATH from --input flag]
3. If file doesn't exist or read fails:
**[Create-Spec]** File not found: [PATH]
Ask user for correct path or alternative input source.
HALT EXECUTION
4. If file loads successfully:
**[Create-Spec]** File loaded ([X] lines)
5. Extract key content based on file type:
**For Research Files (research-*.md)**:
- Problem/topic summary (from 📋 Problem/Topic Summary section)
- Research focus and specific approach
- Current state analysis with file:line references
- Architecture overview with components
- Technical deep dive (data flow, dependencies, code quality)
- Implementation considerations (complexity, testing, risks)
- Performance and security implications
**For Analysis Files (analysis-*.md - legacy)**:
- Problem summary (core issue, impact, severity)
- Affected components with file:line references
- User flow and data flow (if present)
- Solution proposals with pros/cons
- Technical details and dependencies
6. Verify content quality:
- Check: Has file:line references? (Critical for actionable spec)
- Check: Has affected components identified?
- Check: Has problem description?
- Check: Has technical context (architecture, dependencies)?
If missing critical elements → Flag for enrichment in Step 1.3
```
#### Option B: Conversation Analysis (no arguments, search conversation)
```
1. Acknowledge search:
🔍 **[Create-Spec]** Searching conversation for research output...
2. Search conversation history (last 100 messages) for:
- Messages containing "/schovi:research" command (priority)
- Messages containing "/schovi:analyze" command (legacy)
- Messages with research sections ("## 🔬 Research:", "## 📋 Problem/Topic Summary", etc.)
- Messages with analysis sections ("## 🎯 1. PROBLEM SUMMARY", etc.)
- File:line references in recent messages
3. If research/analysis found:
**[Create-Spec]** Found research from [N messages ago]
Extract same content as Option A (research or analysis format)
4. If NOT found:
⚠️ **[Create-Spec]** No research found in recent conversation
Ask user to:
1. Run: /schovi:research --input [input] first
2. Provide research file: /schovi:plan --input ./research-[id].md
3. Create simple spec: /schovi:plan --from-scratch "description"
HALT EXECUTION
5. Verify content quality (same checks as Option A)
```
#### Option C: From Scratch (--from-scratch flag provided)
```
1. Acknowledge mode:
**[Create-Spec]** Creating spec from scratch...
2. Parse provided description from --from-scratch argument
3. Use AskUserQuestion tool for interactive requirements gathering:
Q1: "What is the primary goal of this task?"
Options: "Bug fix", "New feature", "Refactoring", "Technical debt", "Other"
Q2: "Which components or areas will be affected?"
Free text input
Q3: "What are the key requirements or acceptance criteria?"
Free text input (bullet points encouraged)
Q4: "Any known constraints or risks?" (Optional)
Free text input
4. Acknowledge collected info:
**[Create-Spec]** Requirements collected
5. Prepare minimal spec data:
- Title: From description
- Goal: From Q1
- Affected areas: From Q2 (high-level, no file:line refs)
- Acceptance criteria: From Q3
- Constraints/risks: From Q4
6. Template type: "minimal" (no flows, no solution comparisons)
7. Skip enrichment step (from-scratch intentionally lacks technical detail)
```
---
### Step 1.3: Optional Context Enrichment (If Analysis Lacks File:Line References)
**Skip this step if:**
- From-scratch mode is active
- Analysis already has sufficient file:line references
**If analysis is vague (missing specific file locations):**
```
1. Detect gaps:
- Count file:line references in analysis
- If < 3 references found → Analysis may be too vague
2. Ask user for permission:
⚠️ **[Create-Spec]** The analysis appears to lack specific file locations.
Options:
1. Enrich via quick codebase search (20-40 seconds, finds exact files)
2. Skip enrichment (spec will have high-level tasks)
3. Manually provide file locations
Which would you prefer? [1/2/3]
3. If user chooses option 1 (Enrich):
**[Create-Spec]** Enriching analysis with file locations...
Use Task tool with Explore subagent (quick mode):
- Search for components mentioned in analysis
- Find file locations for affected areas
- Add file:line references to analysis
**[Create-Spec]** Enrichment complete ([N] file references added)
4. If user chooses option 2 (Skip):
⚠️ **[Create-Spec]** Proceeding without enrichment (spec will be high-level)
5. If user chooses option 3 (Manual):
Ask user to provide file locations, then merge with analysis
```
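A rough sketch of the "< 3 references" gap check, assuming the analysis was saved to a file such as `analysis.md` (the extension list and threshold are illustrative):

```bash
# Count things that look like file:line references in the analysis text.
ref_count=$(grep -oE '[A-Za-z0-9_./-]+\.(ts|tsx|js|jsx|py|rb|go|java):[0-9]+' analysis.md | wc -l)
if [ "$ref_count" -lt 3 ]; then
  echo "Only $ref_count file:line references found — analysis may be too vague."
fi
```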
---
### Step 1.4: Detect User's Chosen Approach (If Multiple Solutions in Analysis)
**If analysis contains multiple solution options:**
```
1. Search for user preference in:
- User messages after analysis
- Jira comments (if from Jira)
- File content (if from file)
2. Look for patterns:
- "Let's go with Option [N]"
- "I prefer Option [N]"
- "Option [N] makes most sense"
3. If preference found:
**[Create-Spec]** Detected preference: Option [N] - [Solution Name]
4. If preference NOT found:
⚠️ **[Create-Spec]** Multiple options available, no clear preference
Use AskUserQuestion tool:
"Which approach should I use for the spec?"
Options (from analysis):
- Option 1: [Name] - [Brief description]
- Option 2: [Name] - [Brief description]
- Option 3: [Name] - [Brief description]
Wait for user selection
5. Confirm selection:
🎯 **[Create-Spec]** Selected approach: Option [N] - [Solution Name]
```
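A minimal sketch of the pattern search above, assuming the relevant user messages or Jira comments have been collected into a text file (filename hypothetical):

```bash
# Look for the most recent explicit preference, e.g. "Let's go with Option 2".
grep -iEo "(go with|prefer|choose|pick) option [0-9]+" conversation.txt | tail -1
```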
**If single approach or from-scratch mode:**
- Skip selection step
- Use the single approach or minimal template
---
### Step 1.5: Work Folder Resolution
Use lib/work-folder.md:
```
Configuration:
mode: "auto-detect"
identifier: [Jira ID from analysis, or null]
description: [Problem title from analysis]
workflow_type: "full"
current_step: "plan"
custom_work_dir: [work_dir from arguments, or null]
Output (store for later phases):
work_folder: [path from library, e.g., ".WIP/EC-1234-feature"]
metadata_file: [path from library, e.g., ".WIP/EC-1234-feature/.metadata.json"]
output_file: [path from library, e.g., ".WIP/EC-1234-feature/plan-EC-1234.md"]
identifier: [identifier from library]
is_new: [true/false from library]
```
---
**Phase 1 Validation Checkpoint:**
```
- [ ] Input type validated (analysis file / conversation / from-scratch)
- [ ] If raw input: STOPPED with guidance (not proceeded)
- [ ] If valid: Analysis content extracted
- [ ] Analysis quality checked (file:line refs present or enriched)
- [ ] Enrichment decision made (yes/no/manual/skipped)
- [ ] Chosen approach identified (if multiple options)
- [ ] Work folder resolved (if applicable)
```
---
## PHASE 1.5: LOAD FRAGMENT CONTEXT (if fragments exist)
**Objective**: Load existing fragment registry and details to pass to spec-generator for acceptance criteria traceability.
**Use lib/fragment-loader.md**:
### Step 1.5.1: Check if fragments exist (Operation 1)
```
work_folder: [work_folder from Phase 1]
```
**If fragments don't exist**:
- Skip this phase, proceed to Phase 2
- Spec generation works without fragment context
**If fragments exist**:
- Continue to next steps
### Step 1.5.2: Load fragment registry (Operation 2)
```
work_folder: [work_folder path]
```
**Parse registry for**:
- Count of assumptions (A-#)
- Count of risks (R-#)
- Count of metrics (M-#)
**Store**:
- `fragments_exist`: true
- `assumption_count`: N
- `risk_count`: N
- `metric_count`: N
### Step 1.5.3: Load all assumptions (Operation 4)
```
work_folder: [work_folder]
fragment_type: "A"
```
**For each assumption**:
- Extract ID (A-1, A-2, ...)
- Extract statement
- Extract current status (pending, validated, failed)
**Store**:
- `assumptions_list`: [
{id: "A-1", statement: "...", status: "validated"},
{id: "A-2", statement: "...", status: "pending"}
]
### Step 1.5.4: Load all risks (Operation 4)
```
work_folder: [work_folder]
fragment_type: "R"
```
**For each risk**:
- Extract ID (R-1, R-2, ...)
- Extract description
- Extract impact/probability
**Store**:
- `risks_list`: [
{id: "R-1", description: "...", impact: "High", probability: "Medium"},
{id: "R-2", description: "...", impact: "Medium", probability: "Low"}
]
### Step 1.5.5: Load all metrics (Operation 4)
```
work_folder: [work_folder]
fragment_type: "M"
```
**For each metric**:
- Extract ID (M-1, M-2, ...)
- Extract description
- Extract target
**Store**:
- `metrics_list`: [
{id: "M-1", description: "...", target: "p95 < 200ms"},
{id: "M-2", description: "...", target: "< 0.1% error rate"}
]
**After Phase 1.5**:
- Fragment context loaded (if exists)
- Ready to pass summaries to spec-generator
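For orientation, a sketch of what loading assumption fragments can look like on disk; the directory layout (`fragments/A-*.md`) and field names are assumptions here — the authoritative contract is lib/fragment-loader.md, and the same pattern applies to `R-*` and `M-*` fragments:

```bash
for f in "$work_folder"/fragments/A-*.md; do
  [ -e "$f" ] || continue
  id=$(basename "$f" .md)                                              # e.g. A-1
  statement=$(grep -m1 -iE '^\**statement\**:' "$f" | cut -d: -f2- | xargs)
  status=$(grep -m1 -iE '^\**status\**:' "$f" | cut -d: -f2- | xargs)
  echo "$id | ${statement:-?} | ${status:-pending}"
done
```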
---
## PHASE 2: SPEC GENERATION
### Step 2.1: Prepare Input Context for Spec Generator
Build structured input context from analysis extraction:
```markdown
## Input Context
### Problem Summary
[Problem description from analysis - 2-4 sentences]
### Chosen Approach
[If multiple options existed: "Option [N]: [Name]"]
[Detailed approach description from analysis]
### Technical Details
- Affected files: [List with file:line references from analysis]
- User flow: [Flow description if present]
- Data flow: [Flow description if present]
- Dependencies: [List of dependencies if identified]
### Fragment Context (if fragments_exist == true)
**Validated Assumptions** (from research):
[For each in assumptions_list:]
- [id]: [statement] - Status: [status]
Example:
- A-1: Database supports transactions - Status: ✅ Validated
- A-2: Frontend handles async responses - Status: ✅ Validated
- A-3: External API supports webhooks - Status: ⏳ Pending
**Identified Risks** (from research):
[For each in risks_list:]
- [id]: [description] - Impact: [impact], Probability: [probability]
Example:
- R-1: Database migration may cause downtime - Impact: High, Probability: Medium
- R-2: Caching layer consistency issues - Impact: Medium, Probability: Low
**Defined Metrics** (from research):
[For each in metrics_list:]
- [id]: [description] - Target: [target]
Example:
- M-1: API response time - Target: p95 < 200ms
- M-2: Error rate during rollout - Target: < 0.1%
**Traceability Guidance**:
Create acceptance criteria that:
1. Validate pending assumptions (link with "validates: A-#")
2. Mitigate identified risks (link with "mitigates: R-#")
3. Verify metrics are met (link with "verifies: M-#")
[If fragments_exist == false:]
No fragment context available. Spec generator will create acceptance criteria without fragment linkage.
### User Notes
[Any user preferences, comments, or special requirements]
### Metadata
- Jira ID: [ID or N/A]
- Created date: [Today's date in YYYY-MM-DD format]
- Created by: Claude Code
- Template type: [full or minimal]
- Fragments available: [true/false]
```
**Template type selection:**
- **"full"**: When detailed analysis exists (file:line refs, flows, multiple options considered)
- **"minimal"**: When from-scratch mode or simple tasks without deep analysis
### Step 2.2: Spawn Spec Generator Subagent
```
⏳ **[Create-Spec]** Generating implementation specification...
```
Use Task tool to spawn spec-generator subagent:
```
Task tool parameters:
subagent_type: "schovi:spec-generator:spec-generator"
description: "Generate implementation spec"
prompt: """
[Full input context from Step 2.1, formatted as markdown with all sections]
"""
```
**Important**: Use three-part naming format `schovi:spec-generator:spec-generator` for proper subagent resolution.
### Step 2.3: Receive and Validate Spec
The subagent will return spec with visual header/footer:
```markdown
╭─────────────────────────────────────────────╮
│ 📋 SPEC GENERATOR │
╰─────────────────────────────────────────────╯
[FULL SPEC CONTENT - YAML frontmatter + all sections]
╭─────────────────────────────────────────────╮
✅ Spec generated | ~[X] tokens | [Y] lines
╰─────────────────────────────────────────────╯
```
**Validation checks:**
- [ ] Spec has YAML frontmatter with title, status, jira_id
- [ ] Decision rationale present (full) or goal statement (minimal)
- [ ] Implementation tasks are specific and actionable with checkboxes
- [ ] Acceptance criteria are testable with checkboxes
- [ ] Testing strategy included
- [ ] File references use `file:line` format where applicable
- [ ] Total spec is under 3000 tokens
- [ ] Markdown formatting is valid
**If validation fails:**
```
⚠️ **[Create-Spec]** Spec validation failed: [Issue description]
Regenerating with more specific guidance...
```
Re-invoke Task tool with additional clarification in prompt.
**If subagent fails completely:**
```
❌ **[Create-Spec]** Spec generation failed: [Error message]
Cannot proceed without specification. Please check:
- Analysis content is complete
- Technical details are present
- File references are available
Would you like to:
1. Try from-scratch mode: /schovi:plan --from-scratch "description"
2. Provide more analysis detail
3. Create spec manually
```
HALT EXECUTION - Do not attempt fallback generation in main context.
**If successful:**
```
✅ **[Create-Spec]** Specification generated ([X] lines, [Y] tasks, [Z] criteria)
```
Extract spec content (strip visual header/footer) and store for Phase 3 output handling.
---
## PHASE 3: OUTPUT HANDLING
Use lib/output-handler.md:
```
Configuration:
content: [Generated spec from Phase 2]
content_type: "plan"
command_label: "Create-Spec"
flags:
terminal_output: [terminal_output from argument parsing]
file_output: [file_output from argument parsing]
jira_posting: [jira_posting from argument parsing]
file_config:
output_path: [output_path from arguments, or null for auto]
default_basename: "plan"
work_folder: [work_folder from Step 1.5, or null]
jira_id: [from analysis, or null]
workflow_step: "plan"
jira_config:
jira_id: [from analysis, or null]
cloud_id: "productboard.atlassian.net"
jira_title: "Implementation Specification"
jira_author: "Claude Code"
Output (store result for Phase 4):
output_result: {
terminal_displayed: [true/false],
file_created: [true/false],
file_path: [path or null],
jira_posted: [true/false],
metadata_updated: [true/false]
}
```
---
## PHASE 3.5: CREATE AC/EC FRAGMENTS (if fragments exist)
**Objective**: Parse spec output for acceptance criteria and exit criteria, create fragment files for traceability.
**If `fragments_exist == false` from Phase 1.5**:
- Skip this phase, proceed to Phase 4
- Spec works without fragments
**If `fragments_exist == true`**:
**Use lib/fragment-loader.md**:
Parse spec output (from Phase 2) for:
1. **Acceptance Criteria**
2. **Exit Criteria** (phase gates)
### Step 3.5.1: Create Acceptance Criteria Fragments
**Parse spec** for acceptance criteria section:
- Look for lines starting with "- [ ]" under "Acceptance Criteria"
- Extract criterion text
- Extract fragment references: `*(validates: A-#, mitigates: R-#, verifies: M-#)*`
**For each AC**:
**Get next AC number** (Operation 11):
```
work_folder: [work_folder]
fragment_type: "AC"
```
Returns next_number (e.g., 1, 2, 3, ...)
**Create AC fragment** (Operation 7):
```
work_folder: [work_folder]
fragment_type: "AC"
fragment_number: [next_number]
fragment_data: {
statement: [criterion text],
validates: [A-IDs extracted from references],
mitigates: [R-IDs extracted from references],
verifies: [M-IDs extracted from references],
verification_method: [extracted from spec if available],
stage: "plan",
timestamp: [current_timestamp]
}
```
**Get current timestamp**:
```bash
date -u +"%Y-%m-%dT%H:%M:%SZ"
```
**Result**: New fragment file created (e.g., `fragments/AC-1.md`, `fragments/AC-2.md`, ...)
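A hedged sketch of the parsing described in this step; the "Acceptance Criteria" heading and the `(validates: …)` reference syntax follow the spec template and are treated as assumptions here (`$spec_file` is a stand-in for the generated spec path):

```bash
# Pull unchecked criteria from the Acceptance Criteria section and list their fragment references.
awk '/^#+ .*Acceptance Criteria/{in_ac=1; next} /^#+ /{in_ac=0} in_ac && /^- \[ \]/' "$spec_file" |
while IFS= read -r line; do
  refs=$(printf '%s\n' "$line" | grep -oE '(validates|mitigates|verifies): *[ARM]-[0-9]+' || true)
  printf 'AC: %s\n  refs: %s\n' "$line" "${refs:-none}"
done
```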
### Step 3.5.2: Create Exit Criteria Fragments
**Parse spec** for exit criteria by phase:
- Look for exit criteria in implementation plan (usually in phases or deployment plan)
- Extract criterion text
- Extract phase name
- Extract which ACs it validates
**For each EC**:
**Get next EC number** (Operation 11):
```
work_folder: [work_folder]
fragment_type: "EC"
```
**Create EC fragment** (Operation 7):
```
work_folder: [work_folder]
fragment_type: "EC"
fragment_number: [next_number]
fragment_data: {
statement: [criterion text],
phase: [phase name - e.g., "Phase 1", "Pre-deployment"],
validates: [AC-IDs this criterion proves],
verification_method: [how to verify - e.g., command, test],
stage: "plan",
timestamp: [current_timestamp]
}
```
**Result**: New fragment file created (e.g., `fragments/EC-1.md`, `fragments/EC-2.md`, ...)
### Step 3.5.3: Update Fragment Registry (Operation 9)
**Update registry** with all new fragments:
- New acceptance criteria added (AC-1, AC-2, ...)
- New exit criteria added (EC-1, EC-2, ...)
- Summary counts updated
**Update**:
```
work_folder: [work_folder]
identifier: [identifier]
updates: [all AC and EC fragment updates from above steps]
```
**Result**: `fragments.md` registry updated with AC/EC fragments
**If fragment creation fails**:
- Log warning but don't block command
- Continue to Phase 4
---
## PHASE 4: COMPLETION & NEXT STEPS
Use lib/completion-handler.md:
```
Configuration:
command_type: "plan"
command_label: "Create-Spec"
summary_data:
problem: [One-line problem summary from analysis or from-scratch input]
output_files: [file_path from output_result if file_created]
jira_posted: [jira_posted from output_result]
jira_id: [from analysis or null]
work_folder: [work_folder from Step 1.5 or null]
terminal_only: [true if file_output was false]
command_specific_data:
spec_title: [Title from generated spec]
template: "Full" | "Minimal" [from Phase 2]
task_count: [Count of implementation tasks in spec]
criteria_count: [Count of acceptance criteria in spec]
test_count: [Count of test scenarios in spec]
This will:
- Display completion summary box
- Suggest next steps (review spec, start implementation, share with team)
- Wait for user direction
```
---
## QUALITY GATES CHECKLIST
Before presenting the spec, verify ALL of these are complete:
### Input Validation (Phase 1)
- [ ] Input type classified correctly (analysis file, conversation, from-scratch, or raw)
- [ ] If raw input: STOPPED with clear guidance message (not proceeded to spec generation)
- [ ] If valid input: Analysis content successfully extracted
- [ ] User's chosen approach identified (or prompted if multiple options)
- [ ] Enrichment decision made (if applicable): yes/no/manual/skipped
### Fragment Loading (Phase 1.5, if applicable)
- [ ] Fragment existence checked (if work folder available)
- [ ] If fragments exist: Assumptions loaded (A-#)
- [ ] If fragments exist: Risks loaded (R-#)
- [ ] If fragments exist: Metrics loaded (M-#)
- [ ] Fragment context passed to spec-generator (if applicable)
### Spec Generation (Phase 2)
- [ ] Spec generated via spec-generator subagent (context isolated)
- [ ] Spec contains title and metadata
- [ ] Decision rationale or goal statement present
- [ ] Implementation tasks are specific and actionable (checkboxes)
- [ ] Acceptance criteria are testable and clear
- [ ] If fragments provided: Acceptance criteria reference fragment IDs (validates: A-#, mitigates: R-#, verifies: M-#)
- [ ] Testing strategy included (unit/integration/manual)
- [ ] Risks documented (if applicable for full template)
- [ ] File references use `file:line` format where applicable
### Output Handling (Phase 3)
- [ ] Terminal output displayed (unless --quiet)
- [ ] File written to correct path (unless --no-file)
- [ ] Jira posted successfully (if --post-to-jira flag)
- [ ] All output operations confirmed with success messages
- [ ] Error handling executed for any failed operations
### Fragment Creation (Phase 3.5, if applicable)
- [ ] If fragments exist: Acceptance criteria fragments created (AC-1.md, AC-2.md, ...)
- [ ] If fragments exist: Exit criteria fragments created (EC-1.md, EC-2.md, ...)
- [ ] If fragments exist: Fragment registry updated with AC/EC counts
- [ ] Fragment IDs properly linked to assumptions/risks/metrics
### Quality
- [ ] Spec is actionable (can be implemented from it)
- [ ] Spec is complete (all required sections present)
- [ ] Spec is clear (no ambiguous requirements)
- [ ] Spec matches chosen approach from analysis
---
## INTERACTION GUIDELINES
**Communication Style**:
- Be clear and concise - spec generation is straightforward
- Use visual formatting (boxes, emojis) for status updates
- Provide helpful next steps after completion
- Always confirm file paths and operations
**Handling Errors**:
- If input source fails, offer alternatives
- If file write fails, try alternate path or terminal-only
- If Jira post fails, confirm file was still saved locally
- Never fail completely - always provide partial output
**Flexibility**:
- Support multiple input sources (conversation, Jira, file, scratch)
- Support multiple output destinations (terminal, file, Jira)
- Handle both full and minimal spec templates
- Work with or without Jira integration
**Proactive Guidance**:
After creating spec, suggest:
- "Need me to start the implementation workspace?"
- "Want me to break down any section further?"
- "Should I create implementation tasks in Jira?"
---
**Command Version**: 2.0 (Fragment System Integration)
**Last Updated**: 2025-11-08
**Dependencies**:
- `lib/argument-parser.md`
- `lib/output-handler.md`
- `lib/completion-handler.md`
- `lib/fragment-loader.md` (NEW: Fragment loading and creation)
- `schovi/agents/spec-generator/AGENT.md`
- `schovi/templates/spec/full.md`
**Changelog**: v2.0 - Added fragment system integration: loads A/R/M fragments, passes to spec-generator, creates AC/EC fragments
---
## 🚀 BEGIN WORKFLOW
Start with Argument Parsing, then proceed to Phase 1: Input Validation & Analysis Extraction.

1595
commands/publish.md Normal file
File diff suppressed because it is too large

636
commands/research.md Normal file
@@ -0,0 +1,636 @@
---
description: Deep technical analysis of ONE specific approach with detailed file:line references
argument-hint: --input PATH [--option N] [--output PATH] [--no-file] [--quiet] [--work-dir PATH]
allowed-tools: ["Read", "Write", "Task", "ExitPlanMode", "AskUserQuestion"]
---
# Research Workflow
You are performing **deep technical research** of ONE specific approach using the **executor pattern**. Follow this structured workflow to generate detailed analysis with comprehensive file:line references and implementation considerations.
**Key Innovation**: The research-executor subagent performs ALL work (target extraction, context fetching, exploration, generation) in isolated context, keeping main context clean.
---
## ⚙️ MODE ENFORCEMENT
**CRITICAL**: This command operates in **PLAN MODE** throughout Phases 1-2 (argument parsing and executor invocation). You MUST use the **ExitPlanMode tool** before Phase 3 (output handling) to transition from analysis to execution.
**Workflow**:
```
┌──────────────────────────────────┐
│ PLAN MODE (Read-only) │
│ Phases 1-2: Setup & Execute │
└──────────────────────────────────┘
[ExitPlanMode Tool]
┌──────────────────────────────────┐
│ EXECUTION MODE (Write) │
│ Phases 3-4: Output & Completion │
└──────────────────────────────────┘
```
---
## PHASE 1: ARGUMENT PARSING & OPTION SELECTION
Use lib/argument-parser.md:
```
Configuration:
command_name: "research"
command_label: "Research-Approach"
positional: []
flags:
- name: "--input"
type: "path"
description: "REQUIRED: Brainstorm file, Jira ID, GitHub URL, file, or description"
required: true
- name: "--option"
type: "number"
description: "Option number to research (1, 2, 3) if input is brainstorm file"
- name: "--output"
type: "path"
description: "Custom output file path for research"
- name: "--work-dir"
type: "path"
description: "Custom work directory"
- name: "--no-file"
type: "boolean"
description: "Skip file creation, terminal only"
- name: "--quiet"
type: "boolean"
description: "Suppress terminal output"
validation:
- --input is REQUIRED
- --output and --no-file are mutually exclusive
- --option must be 1-5 if provided
```
**Store parsed values:**
- `input_value`: --input value (file path, Jira ID, GitHub URL, or text)
- `option_number`: --option value or null
- `output_path`: --output value or null
- `work_dir`: --work-dir value or null
- `file_output`: true (unless --no-file)
- `terminal_output`: true (unless --quiet)
**If brainstorm file without --option flag**:
```
Use AskUserQuestion tool: "Which option would you like to research? (1, 2, or 3)"
Store user response as option_number
```
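The brainstorm detection itself is a simple filename test, and listing the available options helps the user answer; a sketch (the `### Option N` heading format is an assumption about the brainstorm template):

```bash
if [[ "$input_value" == *brainstorm-*.md && -z "$option_number" ]]; then
  # Show the available options before asking which one to research.
  grep -E '^#{2,3} Option [0-9]+' "$input_value"
fi
```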
---
## PHASE 1.5: LOAD FRAGMENT CONTEXT (if fragments exist)
**Objective**: Load existing fragment registry and assumption/unknown details to pass to research-executor for validation.
**Use lib/fragment-loader.md**:
### Step 1.5.1: Check if fragments exist (Operation 1)
```
work_folder: [from Phase 1, derived from input or work-folder library]
```
**If fragments don't exist**:
- Skip this phase, proceed to Phase 2
- Research will work without fragment context
**If fragments exist**:
- Continue to next step
### Step 1.5.2: Load fragment registry (Operation 2)
```
work_folder: [work_folder path]
```
**Parse registry for**:
- Count of assumptions (A-#)
- Count of unknowns (U-#)
- Current status of each
**Store**:
- `fragments_exist`: true
- `assumption_count`: N
- `unknown_count`: N
### Step 1.5.3: Load all assumptions (Operation 4)
```
work_folder: [work_folder]
fragment_type: "A"
```
**For each assumption**:
- Extract ID (A-1, A-2, ...)
- Extract statement
- Extract current status (pending, validated, failed)
**Store**:
- `assumptions_list`: [
{id: "A-1", statement: "...", status: "pending"},
{id: "A-2", statement: "...", status: "pending"}
]
### Step 1.5.4: Load all unknowns (Operation 4)
```
work_folder: [work_folder]
fragment_type: "U"
```
**For each unknown**:
- Extract ID (U-1, U-2, ...)
- Extract question
- Extract current status (pending, answered)
**Store**:
- `unknowns_list`: [
{id: "U-1", question: "...", status: "pending"},
{id: "U-2", question: "...", status: "pending"}
]
**After Phase 1.5**:
- Fragment context loaded (if exists)
- Ready to pass to research-executor for validation
---
## PHASE 2: EXECUTE RESEARCH (Isolated Context)
**Objective**: Spawn research-executor subagent to perform ALL research work in isolated context, including assumption validation if fragments exist.
**Use Task tool with research-executor**:
```
Task tool configuration:
subagent_type: "schovi:research-executor:research-executor"
model: "sonnet"
description: "Execute research workflow"
prompt: |
RESEARCH INPUT: [input_value]
CONFIGURATION:
- option_number: [option_number or null]
- identifier: [auto-detect or generate]
- exploration_mode: thorough
FRAGMENT CONTEXT (if fragments_exist == true):
ASSUMPTIONS TO VALIDATE:
[For each in assumptions_list:]
- [id]: [statement] (current status: [status])
UNKNOWNS TO INVESTIGATE:
[For each in unknowns_list:]
- [id]: [question] (current status: [status])
[If fragments_exist == false:]
No existing fragment context. Research will identify new assumptions/risks/metrics.
Execute complete research workflow:
1. Extract research target (from brainstorm option, Jira, GitHub, etc.)
2. Fetch external context if needed (Jira/GitHub via nested subagents)
3. Deep codebase exploration (Plan subagent, thorough mode)
4. If fragments exist: Validate each assumption and answer each unknown
5. Identify risks (R-1, R-2, ...) and metrics (M-1, M-2, ...)
6. Generate detailed technical analysis following template
Return structured research output (~4000-6000 tokens) with file:line references.
Include assumption validation results and unknown answers if fragments provided.
```
**Expected output from executor**:
- Complete structured research markdown (~4000-6000 tokens)
- Includes: problem summary, architecture, data flow, dependencies, implementation considerations, performance/security
- All file references in file:line format
- Already formatted following `schovi/templates/research/full.md`
**Store executor output**:
- `research_output`: Complete markdown from executor
- `identifier`: Extract from research header or use fallback
---
## PHASE 3: EXIT PLAN MODE
**CRITICAL**: Before proceeding to output handling, use ExitPlanMode tool.
```
ExitPlanMode tool:
plan: |
# Deep Research Completed
Research analysis completed via research-executor subagent.
**Identifier**: [identifier]
**Research Target**: [Brief description]
## Key Findings
- Research target extracted and analyzed
- Deep codebase exploration completed (thorough mode)
- Architecture mapped with file:line references
- Dependencies identified (direct and indirect)
- Implementation considerations provided
## Next Steps
1. Save research output to work folder
2. Display summary to user
3. Guide user to plan command for implementation spec
```
**Wait for user approval before proceeding to Phase 4.**
---
## PHASE 4: OUTPUT HANDLING & WORK FOLDER
### Step 4.1: Update Fragments (if fragments exist)
**If `fragments_exist == true` from Phase 1.5**:
**Use lib/fragment-loader.md**:
Parse research output for:
1. **Assumption Validation Results**
2. **Unknown Answers**
3. **New Risks**
4. **New Metrics**
#### 4.1.1: Update Assumption Fragments (Operation 8)
For each assumption in `assumptions_list`:
**Parse research output** for validation section matching assumption ID:
- Look for "Assumption Validation Matrix" table
- Extract: validation method, result (✅/❌/⏳), evidence
**Update fragment** (Operation 8):
```
work_folder: [work_folder]
fragment_id: [assumption.id - e.g., "A-1"]
updates: {
status: "validated" | "failed" | "pending",
validation_method: [extracted method],
validation_result: "pass" | "fail",
evidence: [extracted evidence items],
tested_by: "Research phase ([current_timestamp])"
}
```
**Get current timestamp**:
```bash
date -u +"%Y-%m-%dT%H:%M:%SZ"
```
**Result**: Fragment file updated with validation results
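Locating a given assumption's row in the research output can be as simple as a table-row grep; a sketch, assuming the Assumption Validation Matrix is a markdown table keyed by fragment ID in its first column (`research-output.md` is a stand-in for wherever the executor output was saved):

```bash
# e.g. "| A-1 | Check migration files | ✅ | schema uses transactional DDL |"
grep -E '^\| *A-1 *\|' research-output.md | head -1
```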
#### 4.1.2: Update Unknown Fragments (Operation 8)
For each unknown in `unknowns_list`:
**Parse research output** for answer matching unknown ID:
- Look for answers in research output
- Extract: finding, evidence, decision
**Update fragment** (Operation 8):
```
work_folder: [work_folder]
fragment_id: [unknown.id - e.g., "U-1"]
updates: {
status: "answered" | "pending",
answer: [extracted finding],
evidence: [extracted evidence items],
decision: [extracted decision],
answered_by: "Research phase ([current_timestamp])"
}
```
**Result**: Fragment file updated with answer
#### 4.1.3: Create Risk Fragments (Operation 7)
**Parse research output** for risks section:
- Look for "Risks & Mitigation" or similar section
- Extract risks identified
**For each risk**:
**Get next risk number** (Operation 11):
```
work_folder: [work_folder]
fragment_type: "R"
```
Returns next_number (e.g., 1, 2, 3, ...)
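One way `next_number` could be derived (assumes fragment files are named `R-<n>.md` with no gaps in numbering; the real logic lives in lib/fragment-loader.md):

```bash
next_number=$(( $(ls "$work_folder"/fragments/R-*.md 2>/dev/null | wc -l) + 1 ))
```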
**Create risk fragment** (Operation 7):
```
work_folder: [work_folder]
fragment_type: "R"
fragment_number: [next_number]
fragment_data: {
description: [risk description from research],
category: [Technical | Business | Operational],
impact: [High | Medium | Low],
probability: [High | Medium | Low],
impact_description: [what happens if risk occurs],
probability_rationale: [why this probability],
validates: [A-IDs this risk relates to],
mitigation_steps: [mitigation strategy],
contingency_steps: [contingency plan],
stage: "research",
timestamp: [current_timestamp]
}
```
**Result**: New fragment file created (e.g., `fragments/R-1.md`)
#### 4.1.4: Create Metric Fragments (Operation 7)
**Parse research output** for metrics section:
- Look for "What We Will Measure Later" section
- Extract metrics defined
**For each metric**:
**Get next metric number** (Operation 11):
```
work_folder: [work_folder]
fragment_type: "M"
```
**Create metric fragment** (Operation 7):
```
work_folder: [work_folder]
fragment_type: "M"
fragment_number: [next_number]
fragment_data: {
description: [metric description],
purpose_validates: [A-IDs],
purpose_monitors: [R-IDs],
target_value: [target value],
acceptable_range: [min-max],
critical_threshold: [threshold],
baseline_commands: [how to establish baseline],
owner: [team or person],
timeline: [when to measure],
stage: "research",
timestamp: [current_timestamp]
}
```
**Result**: New fragment file created (e.g., `fragments/M-1.md`)
#### 4.1.5: Update Fragment Registry (Operation 9)
**Update registry** with all changes:
- Updated assumption statuses
- Updated unknown statuses
- New risks added
- New metrics added
**Update summary counts**:
```
work_folder: [work_folder]
identifier: [identifier]
updates: [all fragment updates from above steps]
```
**Result**: `fragments.md` registry updated with current state
**If fragment updates fail**:
- Log warning but don't block command
- Continue to file writing
### Step 4.2: Work Folder Resolution
Use lib/work-folder.md:
```
Configuration:
mode: "auto-detect"
identifier: [identifier extracted from research_output or input]
description: [extract problem title from research_output]
workflow_type: "research"
current_step: "research"
custom_work_dir: [work_dir from argument parsing, or null]
Output (store for use below):
work_folder: [path from library, e.g., ".WIP/EC-1234-feature"]
metadata_file: [path from library, e.g., ".WIP/EC-1234-feature/.metadata.json"]
output_file: [path from library, e.g., ".WIP/EC-1234-feature/research-EC-1234.md"]
identifier: [identifier from library]
is_new: [true/false from library]
```
**Note**: If `option_number` was provided (from brainstorm), adjust output_file:
- Change from: `research-[identifier].md`
- To: `research-[identifier]-option[N].md`
**Store the returned values for steps below.**
### Step 4.3: Write Research Output
**If `file_output == true` (default unless --no-file):**
Use Write tool:
```
file_path: [output_file from Step 4.2, adjusted for option if needed]
content: [research_output from Phase 2]
```
**If write succeeds:**
```
📄 Research saved to: [output_file]
```
**If write fails or --no-file:**
Skip file creation, continue to terminal output.
### Step 4.4: Update Metadata
**If work_folder exists and file was written:**
Read current metadata:
```bash
cat [metadata_file from Step 4.2]
```
Update fields:
```json
{
...existing fields,
"workflow": {
...existing.workflow,
"completed": [...existing.completed, "research"],
"current": "research"
},
"files": {
...existing.files,
"research": "research-[identifier].md"
},
"timestamps": {
...existing.timestamps,
"lastModified": "[current timestamp]"
}
}
```
Get current timestamp:
```bash
date -u +"%Y-%m-%dT%H:%M:%SZ"
```
Write updated metadata:
```
Write tool:
file_path: [metadata_file]
content: [updated JSON]
```
### Step 4.5: Terminal Output
**If `terminal_output == true` (default unless --quiet):**
Display:
```markdown
# 🔬 Research Complete: [identifier]
Deep technical analysis completed.
## 📊 Analysis Summary
[Extract key findings from research_output - 3-5 bullet points]
## 📁 Output
Research saved to: `[output_file]`
Work folder: `[work_folder]`
## 📋 Next Steps
Ready to create implementation specification:
```bash
/schovi:plan --input research-[identifier].md
```
This will generate detailed implementation tasks, acceptance criteria, and rollout plan.
```
---
## PHASE 5: COMPLETION
**Final Message**:
```
✅ Research completed successfully!
🔬 Deep analysis for [identifier] complete
📊 Architecture mapped with file:line references
📁 Saved to: [file_path]
📋 Ready for implementation planning? Run:
/schovi:plan --input research-[identifier].md
```
**Command complete.**
---
## ERROR HANDLING
### Input Processing Errors
- **--input not provided**: Report error, show usage example
- **File not found**: Report error, ask for correct path
- **Brainstorm file without option**: Ask user interactively which option to research
- **--option with non-brainstorm input**: Report error, explain --option only for brainstorm files
- **Invalid option number**: Report error, show valid options
### Executor Errors
- **Executor failed**: Report error with details from subagent
- **Validation failed**: Check research_output has required sections
- **Token budget exceeded**: Executor handles compression, shouldn't happen
### Output Errors
- **File write failed**: Report error, offer terminal-only output
- **Work folder error**: Use fallback location or report error
---
## QUALITY GATES
Before completing, verify:
- [ ] Input processed successfully with research target identified
- [ ] Fragment context loaded (if fragments exist)
- [ ] Assumptions and unknowns passed to executor (if fragments exist)
- [ ] Executor invoked and completed successfully
- [ ] Research output received (~4000-6000 tokens)
- [ ] Output contains all required sections
- [ ] Architecture mapped with file:line references
- [ ] Dependencies identified (direct and indirect)
- [ ] Data flow traced with file:line references
- [ ] Code quality assessed with examples
- [ ] Implementation considerations provided
- [ ] All file references use file:line format
- [ ] Assumption fragments updated with validation results (if fragments exist)
- [ ] Unknown fragments updated with answers (if fragments exist)
- [ ] Risk fragments created for identified risks (if fragments exist)
- [ ] Metric fragments created for defined metrics (if fragments exist)
- [ ] Fragment registry updated with all changes (if fragments exist)
- [ ] File saved to work folder (unless --no-file)
- [ ] Metadata updated
- [ ] Terminal output displayed (unless --quiet)
- [ ] User guided to plan command for next step
---
## NOTES
**Design Philosophy**:
- **Executor pattern**: ALL work (extract + fetch + explore + generate) happens in isolated context
- **Main context stays clean**: Only sees final formatted output (~4-6k tokens)
- **Token efficiency**: 93% reduction in main context (from ~86k to ~6k tokens)
- **Consistent experience**: User sees same output, just more efficient internally
**Token Benefits**:
- Before: Main context sees input + exploration (83k) + generation = ~86k tokens
- After: Main context sees only final output = ~6k tokens
- Savings: 80k tokens (93% reduction)
**Integration**:
- Input from: Brainstorm files (with option), Jira, GitHub, files, or text
- Output to: Work folder with metadata
- Next command: Plan for implementation spec
**Executor Capabilities**:
- Extracts research target from brainstorm files (with option selection)
- Spawns jira-analyzer, gh-pr-analyzer for external context
- Spawns Plan subagent for thorough-mode exploration
- Reads research template and generates formatted output with file:line refs
- All in isolated context, returns clean result
---
**Command Version**: 3.0 (Executor Pattern + Fragment System)
**Last Updated**: 2025-11-08
**Dependencies**:
- `lib/argument-parser.md`
- `lib/work-folder.md`
- `lib/fragment-loader.md` (NEW: Fragment loading and updating)
- `schovi/agents/research-executor/AGENT.md`
- `schovi/templates/research/full.md`
**Changelog**: v3.0 - Added fragment system integration: loads assumptions/unknowns, validates during research, updates fragment files with results, creates risk/metric fragments

566
commands/review.md Normal file
@@ -0,0 +1,566 @@
---
description: Comprehensive code review with issue detection and improvement suggestions
argument-hint: [PR/Jira/issue/file] [--quick]
allowed-tools: ["Task", "Bash", "Read", "Grep", "Glob", "mcp__jetbrains__*"]
---
# Review Command
Perform comprehensive code review focused on GitHub PRs, Jira tickets, GitHub issues, or documents. Provides summary, key analysis, potential issues, and improvement suggestions.
## Command Arguments
**Input Types**:
- GitHub PR: URL, `owner/repo#123`, or `#123`
- Jira ID: `EC-1234`, `IS-8046`, etc.
- GitHub Issue: URL or `owner/repo#123`
- File path: `./path/to/file.md` or absolute path
- Free-form: Description text
**Flags**:
- `--quick`: Perform quick review (lighter analysis, faster results)
- Default: Deep review (comprehensive analysis with codebase exploration)
## Execution Workflow
### Phase 1: Input Parsing & Classification
1. **Parse Arguments**: Extract input and flags from command line
2. **Classify Input Type**:
- GitHub PR: Contains "github.com/.../pull/", "owner/repo#", or "#\d+"
- Jira ID: Matches `[A-Z]{2,10}-\d{1,6}`
- GitHub Issue: Contains "github.com/.../issues/" or "owner/repo#" (not PR)
- File path: Starts with `.` or `/`, file exists
- Free-form: Everything else
3. **Extract Review Mode**:
- `--quick` flag present → Quick review
- Default → Deep review
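A compact sketch of the parsing and classification above; the heuristics mirror the listed patterns and are not exhaustive:

```bash
input="$1"
review_mode="deep"
[[ "$*" == *"--quick"* ]] && review_mode="quick"

if   [[ "$input" =~ github\.com/.+/pull/[0-9]+ || "$input" =~ ^[^/]+/[^#]+#[0-9]+$ || "$input" =~ ^#[0-9]+$ ]]; then
  type="github-pr"
elif [[ "$input" =~ ^[A-Z]{2,10}-[0-9]{1,6}$ ]]; then
  type="jira"
elif [[ "$input" =~ github\.com/.+/issues/[0-9]+ ]]; then
  type="github-issue"
elif [[ "$input" == ./* || "$input" == /* ]] && [ -f "$input" ]; then
  type="file"
else
  type="free-form"
fi
echo "type=$type mode=$review_mode"
```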
### Phase 2: Context Fetching
**Fetch context based on input type using appropriate subagent**:
1. **GitHub PR**:
- Use Task tool with subagent_type: `schovi:gh-pr-reviewer:gh-pr-reviewer`
- Prompt: "Fetch and summarize GitHub PR [input]"
- Description: "Fetching GitHub PR review data"
- **Important**: gh-pr-reviewer returns ALL changed files with stats and PR head SHA for code fetching
2. **Jira Issue**:
- Use Task tool with subagent_type: `schovi:jira-auto-detector:jira-analyzer`
- Prompt: "Fetch and summarize Jira issue [input]"
- Description: "Fetching Jira issue summary"
3. **GitHub Issue**:
- Use Task tool with subagent_type: `schovi:gh-issue-analyzer:gh-issue-analyzer`
- Prompt: "Fetch and summarize GitHub issue [input]"
- Description: "Fetching GitHub issue summary"
4. **File Path**:
- Use Read tool to read file contents
- Store as context for review
5. **Free-form**:
- Use provided text as context directly
**Wait for subagent completion before proceeding**.
### Phase 2.5: Source Code Fetching
**For GitHub PRs and issues with code references, fetch actual source code for deeper analysis**.
This phase is **CRITICAL** for providing accurate, code-aware reviews. Skip only for quick reviews or non-code content.
#### Step 1: Identify Files to Fetch
**Extract file paths from fetched context**:
1. **For GitHub PRs**:
- gh-pr-reviewer returns **ALL changed files** with individual stats
- Files already include: additions, deletions, total changes, status (added/modified/removed)
- Files are sorted by changes (descending) for easy prioritization
- PR head SHA included for fetching
2. **For Jira/GitHub Issues**:
- Extract file:line references from description/comments
- May need to search for files if not explicitly mentioned
3. **For file inputs**:
- Already have content from Phase 2
**Prioritize files for fetching** (up to 10 most relevant for deep review):
- **From PR context**: Top 10 files by total changes (already sorted)
- Files mentioned in PR description or review comments
- Core logic files (controllers, services, models) over config/docs
- Test files related to changes
- Exclude: package-lock.json, yarn.lock, large generated files
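Since gh-pr-reviewer already returns files sorted by churn, the same top-10 selection can also be reproduced with the gh CLI directly; a sketch (field names follow `gh pr view --json files` and should be treated as an assumption if the CLI output changes):

```bash
gh pr view "$PR_NUMBER" --json files --jq '
  .files
  | map(select(.path | test("package-lock\\.json|yarn\\.lock") | not))
  | sort_by(.additions + .deletions) | reverse
  | .[:10][] | "\(.path) (+\(.additions)/-\(.deletions))"'
```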
#### Step 2: Determine Source Code Access Method
**Check available methods in priority order**:
1. **Local Repository Access** (PREFERRED):
- Check if current working directory is the target repository
- For PRs: Use `git remote -v` to verify repo matches PR repository
- If branch exists locally: Check out the PR branch or use current branch
- Direct file access via Read tool
2. **JetBrains MCP Integration** (if available):
- Check if `mcp__jetbrains__*` tools are available
- Use `mcp__jetbrains__get_file_content` for file reading
- Use `mcp__jetbrains__search_everywhere` for finding related files
- Provides IDE-aware context (usages, definitions)
3. **GitHub API Fetch** (fallback):
- For external repositories or when local access unavailable
- Use `gh api` to fetch file contents from GitHub
- Fetch from the PR branch/commit SHA
- Example: `gh api repos/{owner}/{repo}/contents/{path}?ref={sha}`
#### Step 3: Fetch Source Files
**Execute fetching based on determined method**:
**Local Repository Method**:
```bash
# Verify we're in correct repo
git remote -v | grep -q "owner/repo"
# For PR review: Optionally fetch and checkout PR branch
gh pr checkout <PR_NUMBER> # or use existing branch
# Read files directly
# Use Read tool for each file path identified in Step 1
```
**JetBrains MCP Method** (if available):
```
# Use mcp__jetbrains__get_file_content for each file
# This provides IDE context like imports, usages, etc.
# Example:
mcp__jetbrains__get_file_content(file_path: "src/api/controller.ts")
```
**GitHub API Method**:
```bash
# Use PR head SHA from gh-pr-reviewer output
# Extract owner, repo, and headRefOid from PR summary
# For each file path:
gh api repos/{owner}/{repo}/contents/{file_path}?ref={headRefOid} \
--jq '.content' | base64 -d
# Alternative: Use gh pr diff for changed files
gh pr diff <PR_NUMBER>
# Note: headRefOid (commit SHA) from full mode summary ensures exact version
```
#### Step 4: Store Fetched Content
**Organize fetched code for analysis**:
1. **Create in-memory file map**:
- Key: file path (e.g., "src/api/controller.ts")
- Value: file content or relevant excerpt
- Include line numbers for changed sections
2. **Handle large files**:
- For files >500 lines, fetch only changed sections ±50 lines of context
- Use `git diff` with context lines: `gh pr diff <PR> --patch`
- Store full path + line ranges
3. **Capture metadata**:
- File size, lines changed (additions/deletions)
- File type/language
- Related test files
#### Step 5: Fetch Related Dependencies (Deep Review Only)
**For deep reviews, explore related code**:
1. **Identify dependencies**:
- Parse imports/requires from fetched files
- Find files that import changed files (reverse dependencies)
- Locate test files for changed code
2. **Fetch related files**:
- Use Grep tool to find related code: `import.*from.*{filename}`
- Use Glob tool to find test files: `**/*{filename}.test.*`
- Read up to 5 most relevant related files
3. **Build call graph context**:
- Identify functions/methods changed
- Find callers of those functions
- Track data flow through changed code
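A sketch of the dependency discovery described above, using one changed file as an example (paths and glob patterns are illustrative):

```bash
changed="src/api/controller.ts"
name=$(basename "$changed" .ts)

# Reverse dependencies: files importing the changed module.
grep -rEl "from ['\"][^'\"]*${name}['\"]" src/ | head -5

# Related tests for the changed module.
find . \( -name "${name}.test.*" -o -name "${name}.spec.*" \) -not -path "*/node_modules/*" | head -5
```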
#### Error Handling
**Handle fetching failures gracefully**:
- **Local repo not available**: Fall back to GitHub API or proceed with context summary only
- **GitHub API rate limit**: Use available context, note limitation in review
- **File too large**: Fetch diff sections only, note in review
- **Branch/commit not found**: Use main/master branch, add warning
- **Authentication failure**: Proceed with summary context, suggest `gh auth login`
**Always notify user of fetching method used**:
```
📥 Fetching source code via [local repository / JetBrains MCP / GitHub API]
📄 Retrieved X files (Y lines total)
```
### Phase 3: Review Analysis
#### Deep Review (Default)
**Comprehensive analysis using fetched source code**:
1. **Direct Code Analysis** (using Phase 2.5 fetched files):
**Analyze fetched source files directly**:
- Review each fetched file for code quality, patterns, and issues
- Focus on changed sections but consider full file context
- Cross-reference between related files
- Verify imports/exports are correct
- Check for unused code or imports
**Use Explore subagent for additional context** (if needed):
- Use Task tool with subagent_type: `Explore`
- Set thoroughness: `medium` (since we already have main files)
- Prompt: "Explore additional context for the review:
- Find additional related files not yet fetched
- Locate integration points and dependencies
- Search for similar patterns in codebase
- Context: [fetched code summary + file list]"
- Description: "Exploring additional codebase context"
2. **Multi-dimensional Analysis** (on fetched code):
**Functionality**:
- Does implementation match requirements from context?
- Are edge cases handled (null checks, empty arrays, boundary conditions)?
- Is error handling comprehensive?
- Are return values consistent?
**Code Quality**:
- Readability: Clear variable names, function names, code organization
- Maintainability: DRY principle, single responsibility, modularity
- Patterns: Appropriate design patterns, consistent style
- Complexity: Cyclomatic complexity, nesting depth
**Security** (CRITICAL):
- SQL injection risks (raw queries, string concatenation)
- XSS vulnerabilities (unescaped output, innerHTML usage)
- Authentication/Authorization issues (missing checks, hardcoded credentials)
- Data leaks (logging sensitive data, exposing internal details)
- Input validation (user input sanitization, type checking)
- CSRF protection (state-changing operations)
**Performance**:
- N+1 query problems (loops with database calls)
- Memory leaks (event listeners, closures, cache)
- Inefficient algorithms (O(n²) when O(n) possible)
- Unnecessary re-renders (React/Vue/Angular)
- Resource handling (file handles, connections, streams)
**Testing**:
- Test coverage for changed code
- Test quality (unit vs integration, assertions)
- Missing test scenarios (edge cases, error paths)
- Test maintainability (mocks, fixtures)
**Architecture**:
- Design patterns appropriate for use case
- Coupling between modules (tight vs loose)
- Cohesion within modules (single responsibility)
- Separation of concerns (business logic, UI, data)
**Documentation**:
- Code comments for complex logic
- JSDoc/docstrings for public APIs
- README updates if needed
- Inline explanations for non-obvious code
3. **Code-Specific Issue Detection**:
**Scan fetched code for common issues**:
- TODO/FIXME comments left in code
- Console.log/debug statements in production code
- Commented-out code blocks
- Hardcoded values that should be constants/config
- Magic numbers without explanation
- Inconsistent naming conventions
- Missing error handling in async code
- Race conditions in concurrent code
- Resource leaks (unclosed files, connections)
#### Quick Review (--quick flag)
**Lighter analysis without full source code fetching**:
**Skip Phase 2.5 or fetch minimal files only**:
- For PRs: Use `gh pr diff` to get code changes without full file fetching
- Limit to top 3 most important files if fetching
- No dependency exploration
- No related file fetching
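A quick-mode fetch might be as small as the following sketch (PR number is a placeholder; the jq expression simply ranks files by total churn):

```bash
# Sketch: quick review data gathering (terminal only, no files written)
gh pr diff <PR_NUMBER>   # diff-only context for the analysis below

# Candidate "top 3" files, ranked by additions + deletions
gh pr view <PR_NUMBER> --json files \
  --jq '.files | sort_by(.additions + .deletions) | reverse | .[0:3] | .[].path'
```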
1. **Context-based Analysis**:
- Review fetched context summary and diffs
- Limited file exploration (max 3 files)
- Focus on obvious issues and high-level patterns
- Fast turnaround (30-60 seconds)
2. **Focus Areas**:
- Summary of changes/content
- Obvious code quality issues from diff
- Critical security concerns (if visible in diff)
- High-level improvement suggestions
- Surface-level pattern detection
### Phase 4: Structured Output
**Generate comprehensive review output** (no file output, terminal only):
```markdown
# 🔍 Code Review: [Input Identifier]
## 📝 Summary
[2-3 sentence overview of what's being reviewed and overall assessment]
## 🎯 Risk Assessment
**Risk Level:** [Low / Low-Medium / Medium / Medium-High / High]
[2-4 bullet points explaining risk factors]:
- [Technical risk factors: complexity, scope, affected systems]
- [Test coverage status: comprehensive/partial/missing]
- [Data/schema changes: yes/no and impact]
- [Dependencies: new dependencies, breaking changes, version updates]
- [Deployment risk: can be deployed independently / requires coordination]
## 🔒 Security Review
[Security assessment - always include even if no concerns]:
**If concerns found**:
⚠️ Security concerns identified:
- [Specific security issue with file:line reference]
- [Classification: SQL injection, XSS, auth bypass, data leak, etc.]
- [Impact assessment and recommendation]
**If no concerns**:
✅ No security concerns identified
- [Verified: appropriate auth/validation patterns]
- [Data handling: proper sanitization/escaping]
- [Access control: correct permissions/authorization]
## ⚡ Performance Impact
[Performance assessment - always include]:
**If concerns found**:
⚠️ Performance concerns identified:
- [Specific performance issue with file:line reference]
- [Classification: N+1 queries, memory leak, inefficient algorithm, etc.]
- [Expected impact and recommendation]
**If no concerns**:
✅ No performance concerns
- [Database queries: optimized / no new queries / properly indexed]
- [Memory handling: appropriate / no leaks detected]
- [Algorithm efficiency: acceptable complexity / optimized]
- [Processing: in-memory / batch processing / streaming where appropriate]
## 🔍 Key Changes/Information
[Bullet list where each item has a 2-5 word title and sub-bullets with details]
- **2-5 word title**
- Short detail with file:line reference
- Short detail with file:line reference
- **Another 2-5 word title**
- Short detail
- Short detail
- **Third change title**
- Detail
- Detail
## ⚠️ Issues Found
[Identified problems, bugs, concerns - organized by priority and severity]
### 🚨 Must Fix
[Critical issues that MUST be addressed before merge]
1. **Issue title** (file:line)
- Description of the issue with code evidence
- Why it's critical (impact, risk, blockers)
- **Action:** Specific fix recommendation
### ⚠️ Should Fix
[Important issues that SHOULD be addressed, may block merge depending on severity]
2. **Issue title** (file:line)
- Description of the issue
- Why it's important (technical debt, maintainability, bugs)
- **Action:** Specific fix recommendation
### 💭 Consider
[Minor issues or suggestions that can be addressed later]
3. **Issue title** (file:line)
- Description of the concern
- Optional improvement or nice-to-have
- **Action:** Suggestion for improvement
[If no issues found: "✅ No significant issues identified"]
## 💡 Recommendations
[2-5 actionable suggestions for improvement, can include code examples]
1. **Recommendation title** (file:line if applicable)
- Explanation of improvement
- Expected benefit
- [Optional: Code example showing before/after]
2. **Recommendation title**
- Explanation
- Benefit
[Continue for 2-5 recommendations]
## 🎯 Verdict
**[⚠️ Approve with changes / ✅ Approve / 🚫 Needs work / ❌ Blocked]**
[1-2 sentences explaining verdict reasoning]
**Merge Criteria:**
- [ ] [Specific requirement from Must Fix items]
- [ ] [Specific requirement from Should Fix items]
- [ ] [Optional: Additional verification or testing needed]
**Estimated Fix Time:** [X minutes/hours for addressing Must Fix + Should Fix items]
```
## Quality Gates
**Before presenting review, verify**:
- ✅ Context successfully fetched (or file read)
- ✅ Source code fetched (deep review) or diff retrieved (quick review)
- ✅ Fetching method reported to user (local/JetBrains/GitHub)
- ✅ Analysis completed on actual source code (deep or quick as requested)
- ✅ Summary section with 2-3 sentence overview
- ✅ Risk Assessment section with risk level and 2-4 factors
- ✅ Security Review section present (concerns found OR explicit "no concerns")
- ✅ Performance Impact section present (concerns found OR explicit "no concerns")
- ✅ At least 3 key changes/info points identified with specific code references
- ✅ Issues section organized by priority (Must Fix / Should Fix / Consider)
- ✅ Each issue includes file:line reference and Action recommendation
- ✅ 2-5 recommendations provided with benefits and optional code examples
- ✅ File references use `file:line` format for all code mentions
- ✅ Verdict section with approval status (Approve/Approve with changes/Needs work/Blocked)
- ✅ Merge Criteria checklist with specific requirements from Must Fix and Should Fix
- ✅ Estimated Fix Time provided
## Important Rules
1. **No File Output**: This command outputs to terminal ONLY, no file creation
2. **No Work Folder Integration**: Does not use work folder system (unlike implement/debug)
3. **Context Isolation**: Always use subagents for external data fetching
4. **Holistic Assessment**: Always include Risk, Security, and Performance sections (even if no concerns)
5. **Priority-Based Issues**: Organize issues by priority (Must Fix / Should Fix / Consider), not just severity
6. **Actionable Feedback**: All issues and recommendations must include specific Action items
7. **Clear Verdict**: Provide explicit merge decision with criteria checklist and estimated fix time
8. **Security Focus**: Always check for common vulnerabilities (injection, XSS, auth issues, data leaks)
9. **File References**: Use `file:line` format for all code references
## Example Usage
```bash
# Review GitHub PR (deep)
/schovi:review https://github.com/owner/repo/pull/123
/schovi:review owner/repo#123
/schovi:review #123
# Quick review of PR
/schovi:review #123 --quick
# Review Jira ticket
/schovi:review EC-1234
# Review local file
/schovi:review ./spec-EC-1234.md
# Review GitHub issue
/schovi:review https://github.com/owner/repo/issues/456
```
## Execution Instructions
**YOU MUST**:
1. Parse input and classify type correctly
2. Use appropriate subagent for context fetching (fully qualified names)
3. Wait for subagent completion before analysis
4. **Execute Phase 2.5: Source Code Fetching** (critical for accurate reviews):
- Identify files to fetch from context
- Determine access method (local > JetBrains > GitHub)
- Fetch source files (up to 10 for deep, up to 3 for quick)
- Notify user of fetching method and file count
- Handle errors gracefully, fall back when needed
5. Analyze **actual fetched source code**, not just context summaries
6. For deep review: Fetch dependencies and use Explore for additional context
7. For quick review: Fetch minimal files (top 3) or use diff only
8. **Generate all required sections**:
- Summary (2-3 sentences)
- Risk Assessment (risk level + 2-4 factors)
- Security Review (concerns OR explicit "no concerns")
- Performance Impact (concerns OR explicit "no concerns")
- Key Changes (3+ items with file:line)
- Issues Found (organized as Must Fix / Should Fix / Consider)
- Recommendations (2-5 actionable items)
- Verdict (approval status + merge criteria + fix time estimate)
9. Always perform security analysis on fetched code
10. Provide specific file:line references from actual code
11. Prioritize issues by urgency (Must/Should/Consider) with Action items
12. Give 2-5 actionable recommendations with benefits and optional code examples
13. Provide clear verdict with merge criteria checklist and estimated fix time
14. Check all quality gates before output
15. Output to terminal ONLY (no files)
**YOU MUST NOT**:
1. Create any files or use work folders
2. Skip context fetching phase (Phase 2)
3. Skip source code fetching phase (Phase 2.5) without valid reason
4. Proceed without waiting for subagent completion
5. Review without fetching actual source code (except quick mode fallback)
6. Skip Risk Assessment, Security Review, or Performance Impact sections
7. Give vague suggestions without specific file:line references from fetched code
8. Miss security vulnerability analysis on actual code
9. Provide generic feedback without code-level specifics
10. Skip priority classification for issues (Must Fix / Should Fix / Consider)
11. Omit Action items from issues or merge criteria from verdict
12. Base review solely on PR descriptions without examining code
## Error Handling
- **Invalid input**: Ask user to clarify or provide valid PR/Jira/file
- **Context fetch failure**: Report error and suggest checking credentials/permissions
- **Source code fetch failure**:
- Try alternate methods (local → JetBrains → GitHub)
- Fall back to diff-based review if all methods fail
- Notify user of limitation and suggest fixes
- **Repository mismatch**: Notify user if reviewing external repo from different local repo
- **Branch not found**: Fall back to main/master branch with warning
- **File too large**: Fetch diff sections only, note in review
- **GitHub API rate limit**: Use local/JetBrains if available, or note limitation
- **Empty context**: Report that nothing was found to review
- **Analysis timeout**: Fall back to quick review and notify user

584
commands/spec.md Normal file
View File

@@ -0,0 +1,584 @@
---
description: Product discovery and specification generation from requirements, images, documents, or Jira issues
argument-hint: [jira-id|description] [--input FILE1 FILE2...] [--output PATH] [--no-file] [--quiet] [--post-to-jira] [--work-dir PATH]
allowed-tools: ["Read", "Write", "Grep", "Glob", "Task", "mcp__jira__*", "mcp__jetbrains__*", "Bash", "AskUserQuestion"]
---
# Product Specification Generator
You are **creating a product specification** that defines WHAT to build, WHY it's needed, and FOR WHOM. This is the discovery phase - focusing on requirements, user needs, and product decisions (NOT technical implementation).
---
## ⚙️ PHASE 1: INPUT PROCESSING & WORK FOLDER SETUP
### Step 1.1: Parse Command Arguments
**Input Received**: $ARGUMENTS
Parse to extract:
**Primary Input** (first non-flag argument):
- **Jira Issue ID**: Pattern `[A-Z]+-\d+` (e.g., EC-1234)
- **Text Description**: Free-form product idea or requirement
- **Empty**: Interactive mode (will prompt for requirements)
**Input Files** (supporting materials):
- **`--input FILE1 FILE2...`**: Images, PDFs, documents to analyze
- Examples: wireframes, mockups, PRD, requirements.pdf
- Supports multiple files
- File types: .png, .jpg, .pdf, .md, .txt, .doc, .docx
**Output Flags**:
- **`--output PATH`**: Save spec to specific file path (default: `.WIP/[identifier]/01-spec.md`)
- **`--no-file`**: Skip file output, terminal only
- **`--quiet`**: Skip terminal output, file only
- **`--post-to-jira`**: Post spec as Jira comment (requires Jira ID)
- **`--work-dir PATH`**: Use specific work folder (default: auto-generate)
**Flag Validation**:
- `--output` and `--no-file` cannot be used together → Error
- `--post-to-jira` without Jira ID → Warning, skip posting
- `--work-dir` overrides auto-generation → Use as specified
**Store parsed values** for later phases:
```
problemInput = [jira-id or description or empty]
inputFiles = [list of file paths]
outputPath = [path or null]
noFile = [boolean]
quiet = [boolean]
postToJira = [boolean]
workDir = [path or null]
jiraId = [extracted jira id or null]
```
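A minimal sketch of how the primary-input classification could be expressed in shell (variable and value names are illustrative, not part of the command contract):

```bash
# Sketch: classify the first positional argument
input="$1"
if [[ "$input" =~ ^[A-Z]+-[0-9]+$ ]]; then
  kind="jira"          # e.g., EC-1234
elif [[ -z "$input" ]]; then
  kind="interactive"   # no argument: prompt for requirements
else
  kind="description"   # free-form product idea
fi
```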
### Step 1.2: Resolve or Create Work Folder
Use lib/work-folder.md:
```
Configuration:
mode: "auto-detect"
identifier: [jiraId from Step 1.1, or null]
description: [problemInput from Step 1.1]
workflow_type: "full"
current_step: "spec"
custom_work_dir: [workDir from Step 1.1, or null]
Output (store for later phases):
work_folder: [path from library, e.g., ".WIP/EC-1234-feature"]
metadata_file: [path from library, e.g., ".WIP/EC-1234-feature/.metadata.json"]
output_file: [path from library, e.g., ".WIP/EC-1234-feature/01-spec.md"]
identifier: [identifier from library]
is_new: [true/false from library]
```
**Store the returned values for later phases.**
**Note**: The work-folder library creates `.WIP/[identifier]/` with metadata. Workflow type "full" means: spec → analyze → plan → implement.
### Step 1.3: Copy Supporting Materials to Context Folder
If `--input` files provided:
```bash
# Ensure the context folder exists before copying
mkdir -p "[work_folder from Step 1.2]/context"
for file in $inputFiles; do
  # Copy to context folder
  cp "$file" "[work_folder from Step 1.2]/context/$(basename "$file")"
  # Read file if it's readable (images, PDFs, text)
  # Use Read tool to load content for analysis
done
```
**Update metadata.files.context** with list of copied files.
---
## ⚙️ PHASE 2: GATHER REQUIREMENTS & CONTEXT
### Step 2.1: Fetch External Context (if Jira)
If Jira ID detected in Step 1.1:
**Use jira-analyzer subagent**:
```
Task tool:
subagent_type: "schovi:jira-auto-detector:jira-analyzer"
prompt: "Fetch and summarize Jira issue [jira-id]"
description: "Fetching Jira context"
```
Extract from summary:
- Issue title
- Issue type (Story, Epic, Bug)
- Description (full text)
- Acceptance criteria
- Comments and discussion points
### Step 2.2: Analyze Supporting Materials
For each file in context/:
**Images (wireframes, mockups, screenshots)**:
- Use Read tool to view images
- Extract UI elements, user flows, visual design
- Identify screens, components, interactions
**Documents (PDFs, text files)**:
- Use Read tool to extract text
- Parse requirements, user stories, constraints
- Identify stakeholder needs, success metrics
**Existing markdown/text**:
- Read directly
- Extract structured requirements if present
### Step 2.3: Interactive Discovery (if needed)
If requirements are vague or incomplete, ask clarifying questions:
**Use AskUserQuestion tool** to gather:
1. **Core Requirements**:
- What problem are we solving?
- Who are the users?
- What's the expected outcome?
2. **User Experience**:
- What are the key user journeys?
- What actions will users perform?
- What edge cases exist?
3. **Scope & Constraints**:
- What's IN scope vs OUT of scope?
- Any technical constraints to be aware of?
- Any compliance or security requirements?
4. **Success Criteria**:
- How will we know this is successful?
- What metrics matter?
- What are the acceptance criteria?
**Example Questions**:
```
1. Who is the primary user for this feature? (e.g., end users, admins, developers)
2. What's the main problem this solves? (1-2 sentences)
3. What are the key user actions? (e.g., "User logs in, sees dashboard, filters data")
4. Are there any known constraints? (e.g., must work on mobile, needs to support 10k users)
5. What's explicitly OUT of scope for v1? (helps clarify boundaries)
```
**Generate identifier now** if not done in Step 1.2:
- Take first sentence of problem description
- Generate slug
- Update work folder name if needed
---
## ⚙️ PHASE 3: GENERATE PRODUCT SPECIFICATION
### Step 3.1: Structure Specification Outline
Create structured specification with these sections:
```markdown
# Product Specification: [Title]
## 📋 Overview
[What, Why, For Whom - 3-4 sentences]
## 👥 Target Users
[Primary, secondary users]
## 🎯 Problem Statement
[What problem we're solving, current pain points]
## 💡 Proposed Solution
[High-level solution approach, key features]
## 📖 User Stories
### Story 1: [Title]
**As a** [user type]
**I want to** [action]
**So that** [benefit]
**Acceptance Criteria**:
- [ ] Criterion 1
- [ ] Criterion 2
**Edge Cases**:
- [Edge case 1]
- [Edge case 2]
[Repeat for each major user story]
## 🎨 User Experience
### User Journey 1: [Journey Name]
1. User starts at [point]
2. User performs [action]
3. System responds with [response]
4. User sees [outcome]
[Repeat for key journeys]
## ✅ Product Acceptance Criteria
### Must Have (v1)
- [ ] Criterion 1
- [ ] Criterion 2
### Should Have (v1)
- [ ] Criterion 1
### Nice to Have (Future)
- [ ] Criterion 1
## 🔍 Scope & Decisions
### In Scope ✅
- Feature A
- Feature B
### Out of Scope ❌
- Feature X (reason: future iteration)
- Feature Y (reason: complexity)
### Product Decisions Made
- **Decision 1**: [Why we chose this]
- **Decision 2**: [Why we chose this]
## 🔗 Dependencies & Constraints
### External Dependencies
- [Third-party service, API, etc.]
### User Constraints
- [Browser support, device requirements]
### Business Constraints
- [Timeline, budget, compliance]
### Known Limitations
- [Technical or product limitations to be aware of]
## 📊 Success Metrics
### Key Performance Indicators
- Metric 1: [target]
- Metric 2: [target]
### User Success Metrics
- [How users measure success]
## 🗂️ Related Resources
### Context Files
[List files in context/ folder]
- wireframe.png - [brief description]
- requirements.pdf - [brief description]
### External References
- Jira: [link]
- Design doc: [link]
- Discussion: [link]
---
**Next Steps**: Run `/schovi:analyze` to explore technical implementation options.
```
### Step 3.2: Write Specification Content
Follow the outline above, filling in content based on:
- Jira context (if available)
- Supporting materials (images, docs)
- Interactive answers (if gathered)
**Key Principles**:
1. **Non-technical** - No file:line references, no code mentions
2. **User-focused** - Describe features from user perspective
3. **Clear scope** - Explicit about what's IN vs OUT
4. **Actionable criteria** - Testable acceptance criteria
5. **Decision record** - Document why we chose this approach
**Quality Check**:
- [ ] Clear problem statement (why we're building this)
- [ ] Defined target users
- [ ] At least 2-3 user stories with acceptance criteria
- [ ] Clear scope boundaries (in/out)
- [ ] Success metrics defined
- [ ] All supporting materials referenced
---
## ⚙️ PHASE 4: OUTPUT HANDLING
### Step 4.1: Determine Output Path
**Priority**:
1. If `--output` flag: Use specified path
2. If `--no-file`: Skip file output (terminal only)
3. Default: Use `output_file` from Step 1.2 (work-folder library)
```bash
if [ -n "$outputPath" ]; then
final_output_file="$outputPath"
elif [ "$noFile" = true ]; then
final_output_file="" # Skip file output
else
final_output_file="[output_file from Step 1.2]" # e.g., .WIP/EC-1234/01-spec.md
fi
```
### Step 4.2: Write Specification File
If file output enabled:
**Use Write tool**:
- file_path: [final_output_file from Step 4.1]
- content: [Full specification from Phase 3]
### Step 4.3: Update Metadata
**If work_folder exists and file was written:**
Read current metadata:
```bash
cat [metadata_file from Step 1.2]
```
Update fields:
```json
{
...existing fields,
"workflow": {
...existing.workflow,
"completed": ["spec"],
"current": "spec"
},
"files": {
"spec": "01-spec.md",
"context": ["wireframe.png", "requirements.pdf"]
},
"timestamps": {
...existing.timestamps,
"lastModified": "[current timestamp]"
}
}
```
Get current timestamp:
```bash
date -u +"%Y-%m-%dT%H:%M:%SZ"
```
Write updated metadata:
```
Write tool:
file_path: [metadata_file from Step 1.2]
content: [updated JSON]
```
### Step 4.4: Post to Jira (if requested)
If `--post-to-jira` flag AND Jira ID present:
**Use mcp__jira__add_comment**:
- issue_key: [jira-id]
- comment: [Specification markdown or link to spec file]
Format comment:
```
Product Specification Generated
Specification saved to: .WIP/[identifier]/01-spec.md
Key highlights:
- [Target user]
- [Core problem]
- [Proposed solution]
Next step: Run /schovi:analyze to explore technical implementation.
```
### Step 4.5: Display Results to User
If not `--quiet`, show terminal output:
```
✅ Product specification complete!
📁 Work folder: .WIP/[identifier]/
📄 Specification: 01-spec.md
📂 Supporting materials: [count] files in context/
📋 Specification Summary:
• Target Users: [primary user types]
• Problem: [one-sentence problem]
• Solution: [one-sentence solution]
• User Stories: [count] stories defined
• Acceptance Criteria: [count] criteria
🎯 Scope:
✅ In Scope: [count] features
❌ Out of Scope: [count] items
📊 Success Metrics:
• [Metric 1]
• [Metric 2]
---
📍 What You Have:
✓ Product specification with user stories and acceptance criteria
✓ Clear scope boundaries and product decisions
✓ Success metrics defined
🚀 Next Steps:
1. Review spec: cat .WIP/[identifier]/01-spec.md
2. Explore technical approaches: /schovi:analyze --input .WIP/[identifier]/01-spec.md
3. Or iterate on spec if requirements change
💡 Tip: The spec is non-technical - it focuses on WHAT and WHY.
Run analyze next to explore HOW to implement it.
```
If `--post-to-jira` succeeded:
```
✅ Posted to Jira: [jira-url]
```
---
## 🔍 VALIDATION CHECKLIST
Before completing, ensure:
- [ ] Work folder created: `.WIP/[identifier]/`
- [ ] Metadata initialized with workflow.type = "full"
- [ ] Specification written to `01-spec.md`
- [ ] Supporting materials copied to `context/`
- [ ] Metadata updated with completed = ["spec"]
- [ ] Clear problem statement and target users defined
- [ ] At least 2-3 user stories with acceptance criteria
- [ ] Scope boundaries (in/out) clearly defined
- [ ] Success metrics specified
- [ ] Next steps communicated to user
---
## 💡 USAGE EXAMPLES
### Example 1: Spec from Jira Issue
```bash
/schovi:spec EC-1234
# Workflow:
# 1. Fetches Jira issue details
# 2. Creates .WIP/EC-1234-[title-slug]/
# 3. Generates spec from Jira description and acceptance criteria
# 4. Saves to 01-spec.md
# 5. Shows summary
```
### Example 2: Spec from Wireframes and Description
```bash
/schovi:spec "Build user dashboard" --input wireframe.png design-doc.pdf
# Workflow:
# 1. Creates .WIP/build-user-dashboard/
# 2. Copies wireframe.png and design-doc.pdf to context/
# 3. Analyzes images and documents
# 4. Asks clarifying questions about requirements
# 5. Generates comprehensive spec
# 6. Saves to 01-spec.md with references to context files
```
### Example 3: Interactive Spec Generation
```bash
/schovi:spec
# Workflow:
# 1. Prompts for problem description
# 2. Asks about target users
# 3. Asks about key user actions
# 4. Asks about scope and constraints
# 5. Generates spec based on interactive answers
# 6. Creates work folder with generated identifier
# 7. Saves to 01-spec.md
```
### Example 4: Spec with Custom Output
```bash
/schovi:spec EC-1234 --output ~/docs/product-specs/auth-spec.md
# Custom output location, not in .WIP/ folder
```
---
## 🚫 ERROR HANDLING
### No Requirements Found
```
❌ Cannot generate specification without requirements
No Jira ID, description, or input files provided.
Please provide ONE of:
- Jira Issue: /schovi:spec EC-1234
- Description: /schovi:spec "Build user authentication"
- Files: /schovi:spec --input requirements.pdf wireframe.png
- Interactive: /schovi:spec (will prompt for details)
```
### Invalid File Type
```
⚠️ Warning: Unsupported file type: file.xyz
Supported types: .png, .jpg, .pdf, .md, .txt, .doc, .docx
Skipping file.xyz...
```
### Jira Not Found
```
❌ Jira issue EC-9999 not found
Please check:
- Issue key is correct
- You have access to the issue
- MCP Jira server is configured
Tip: Continue without Jira using description:
/schovi:spec "Description of the feature"
```
---
## 🎯 KEY PRINCIPLES
1. **Product Focus** - This is about WHAT to build, not HOW
2. **User-Centric** - Describe features from user perspective
3. **Non-Technical** - No code, no files, no implementation details
4. **Clear Boundaries** - Explicit scope (in/out)
5. **Actionable** - Criteria should be testable
6. **Decision Record** - Document why we chose this approach
7. **Workflow Foundation** - Sets up analyze → plan → implement chain
**Remember**: This is the first step in a full workflow. The spec should be detailed enough for the analyze command to explore technical approaches.