Initial commit

commit db12a906d2
Author: Zhongwei Li
Date: 2025-11-30 09:05:52 +08:00

62 changed files with 27669 additions and 0 deletions

---
description: Generate BMAD architecture document from PRD
---
# BMAD Architecture - Generate Technical Architecture
Use the architect subagent to create a comprehensive technical architecture for this project following BMAD methodology.
## Task Delegation
First check if the PRD exists, then launch the architect subagent to handle the complete architecture generation workflow.
## Process
### Step 1: Verify Prerequisites
Check that PRD exists before delegating to architect:
```bash
ls bmad-backlog/prd/prd.md 2>/dev/null || echo "PRD not found"
```
**If PRD NOT found**:
```
❌ Error: PRD not found at bmad-backlog/prd/prd.md
Architecture generation requires a PRD to work from.
Please run: /titanium-toolkit:bmad-prd first
(Or /titanium-toolkit:bmad-start for complete guided workflow)
```
Stop here - do not launch architect without PRD.
**If PRD exists**: Continue to Step 2.
### Step 2: Launch Architect Subagent
Use the Task tool to launch the architect subagent in its own context window:
```
Task(
description: "Generate BMAD architecture",
prompt: "Create comprehensive technical architecture document following BMAD methodology.
Input:
- PRD: bmad-backlog/prd/prd.md
- Research findings: bmad-backlog/research/*.md (if any exist)
Output:
- Architecture document: bmad-backlog/architecture/architecture.md
Requirements:
1. Read the PRD to understand requirements
2. Check for research findings and incorporate recommendations
3. Generate architecture using bmad_generator MCP tool
4. Review tech stack with user and get approval
5. Validate architecture using bmad_validator MCP tool
6. Run vibe-check to validate architectural decisions
7. Store result in Pieces for future reference
8. Present summary with next steps
**IMPORTANT**: Keep your summary response BRIEF (under 500 tokens). Just return:
- Confirmation architecture is complete
- Proposed tech stack (2-3 sentences)
- MVP cost estimate
- Any critical decisions made
DO NOT include the full architecture content in your response - it's already saved to the file.
Follow your complete architecture workflow from the bmad-methodology skill.
Project path: $(pwd)",
subagent_type: "architect"
)
```
The architect subagent will handle:
- Reading PRD and research findings
- Generating architecture document (1000-1500 lines)
- Tech stack selection and user approval
- Validation (structural and vibe-check)
- Pieces storage
- Summary presentation
### Step 3: Return Results
The architect will return a summary when complete. Present this to the user.
## What the Architect Creates
The architect subagent generates `bmad-backlog/architecture/architecture.md` containing:
- **System Overview**: High-level architecture diagram (ASCII), component descriptions
- **Technology Stack**: Complete stack with rationale for each choice
- **Component Details**: Detailed design for each system component
- **Database Design**: Complete SQL schemas with CREATE TABLE statements
- **API Design**: Endpoint specifications with request/response examples
- **Security Architecture**: Auth, rate limiting, encryption, security controls
- **Infrastructure**: Deployment strategy, scaling plan, CI/CD pipeline
- **Monitoring**: Metrics, logging, tracing, alerting specifications
- **Cost Analysis**: MVP costs and production projections
- **Technology Decisions Table**: Each tech choice with rationale
## Integration with Research
If research findings exist in `bmad-backlog/research/`, the architect will:
- Read all RESEARCH-*-findings.md files
- Extract vendor/technology recommendations
- Incorporate into architecture decisions
- Reference research in Technology Decisions table
- Use research pricing in cost estimates
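For intuition, here is a minimal sketch of that extraction step, assuming findings files follow this plugin's template and record their decision on a `**Chosen**:` line (the function name and parsing details are illustrative, not part of the toolkit):
```python
import re
from pathlib import Path

def collect_research_recommendations(research_dir: str = "bmad-backlog/research") -> dict:
    """Map each RESEARCH-*-findings.md file to its '**Chosen**:' decision, if present."""
    chosen = {}
    for path in Path(research_dir).glob("RESEARCH-*-findings.md"):
        match = re.search(r"\*\*Chosen\*\*:\s*(.+)", path.read_text())
        if match:
            chosen[path.name] = match.group(1).strip()
    return chosen
```
The architect can then weigh these picks when selecting the tech stack and filling in the Technology Decisions table.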
## Voice Feedback
Voice hooks announce:
- "Generating architecture" (when starting)
- "Architecture complete" (when finished)
## Cost
Typical cost: ~$0.08 per architecture generation (Claude Sonnet 4.5 API usage in bmad_generator tool)
---
**This command delegates to the architect subagent who creates the complete technical blueprint!**

commands/bmad-brief.md
---
description: Generate BMAD product brief from project idea
---
# BMAD Brief - Generate Product Brief
Use the product-manager subagent to create a comprehensive Product Brief following BMAD methodology. The brief captures the high-level vision and goals.
## Task Delegation
First gather the project idea, then launch the product-manager subagent to handle the complete brief generation workflow.
## Process
### Step 1: Gather Project Idea
**If user provided description**:
- Store their description
**If user ran just `/bmad:brief`**:
- Ask: "What's your project idea at a high level?"
- Wait for response
- Ask follow-up if needed: "What problem does it solve? Who is it for?"
### Step 2: Launch Product-Manager Subagent
Use the Task tool to launch the product-manager subagent in its own context window:
```
Task(
description: "Generate BMAD product brief",
prompt: "Create comprehensive product brief following BMAD methodology.
User's Project Idea:
{{user_idea}}
Your workflow:
1. **Generate product brief** using the MCP tool:
```
mcp__plugin_titanium-toolkit_tt__bmad_generator(
doc_type: \"brief\",
input_path: \"{{user_idea}}\",
project_path: \"$(pwd)\"
)
```
2. **Review generated brief** - Read bmad-backlog/product-brief.md and present key sections to user
3. **Validate the brief** using:
```
mcp__plugin_titanium-toolkit_tt__bmad_validator(
doc_type: \"brief\",
document_path: \"bmad-backlog/product-brief.md\"
)
```
4. **Run vibe-check** to validate the brief quality
5. **Store in Pieces** for future reference
6. **Present summary** to user with next steps
**IMPORTANT**: Keep your summary response BRIEF (under 300 tokens). Just return:
- Confirmation brief is complete
- 1-2 sentence project description
- Primary user segment
- MVP feature count
DO NOT include the full brief content in your response - it's already saved to the file.
Follow your complete brief workflow from the bmad-methodology skill.
Project path: $(pwd)",
subagent_type: "product-manager"
)
```
The product-manager subagent will handle:
- Generating product brief
- Reviewing and presenting key sections
- Validation (structural and vibe-check)
- Pieces storage
- Summary presentation
### Step 3: Return Results
The product-manager will return a summary when complete. Present this to the user.
## What the Product-Manager Creates
The product-manager subagent generates `bmad-backlog/product-brief.md` containing:
- **Executive Summary**: Project concept, problem, target market, value proposition
- **Problem Statement**: Current state, pain points, urgency
- **Proposed Solution**: Core concept, differentiators
- **Target Users**: Primary and secondary user segments with detailed profiles
- **Goals & Success Metrics**: Business objectives, user success metrics, KPIs
- **MVP Scope**: Core features and what's out of scope
- **Technical Considerations**: Platform requirements, tech preferences
- **Constraints & Assumptions**: Budget, timeline, resources
- **Risks & Open Questions**: Key risks and areas needing research
- **Next Steps**: Immediate actions and PM handoff
## Integration with Research
The product-manager may identify research needs during brief generation and suggest running `/bmad:research` for topics like:
- Data vendors or APIs
- Technology comparisons
- Market research
## Voice Feedback
Voice hooks announce:
- "Generating product brief" (when starting)
- "Product brief complete" (when finished)
## Cost
Typical cost: ~$0.01 per brief generation (Claude Haiku 4.5 API usage in bmad_generator tool)
### Step 4: Present Summary and Next Steps
```
✅ Product Brief Complete!
📄 Location: bmad-backlog/product-brief.md
📊 Summary:
- Problem: {{one-line problem}}
- Solution: {{one-line solution}}
- Users: {{primary user segment}}
- MVP Features: {{count}} core features
💡 Next Steps:
Option 1: Generate PRD next
Run: /bmad:prd
Option 2: Generate complete backlog
Run: /bmad:start
(This will use the brief to generate PRD, Architecture, and all Epics)
What would you like to do?
```
## Error Handling
### If ANTHROPIC_API_KEY Missing
```
❌ Error: ANTHROPIC_API_KEY not found
The brief generation needs Anthropic Claude to create comprehensive content.
Please add your API key to ~/.env:
echo 'ANTHROPIC_API_KEY=sk-ant-your-key-here' >> ~/.env
chmod 600 ~/.env
Get your key from: https://console.anthropic.com/settings/keys
Then restart Claude Code and try again.
```
### If Generation Fails
```
❌ Brief generation failed
This could be due to:
- API rate limits
- Network issues
- Invalid project description
Let me try again with a simplified approach.
[Retry with more basic prompt]
```
### If User Wants to Skip Brief
```
Note: Product brief is optional but recommended.
You can skip directly to PRD with:
/bmad:prd
However, the brief helps organize your thoughts and produces better PRDs.
Skip brief and go to PRD? (yes/no)
```
## Example Usage
**Example 1: Simple Idea**
```
User: /bmad:brief "Social network for developers"
Claude: "What problem does it solve?"
User: "Developers want to show off projects, not just resumes"
Claude: "Who are the primary users?"
User: "Junior developers looking for jobs"
[Generates brief]
Claude: "Brief complete! Would you like to generate the PRD next?"
```
**Example 2: Detailed Idea**
```
User: /bmad:brief "AI-powered precious metals research platform with real-time pricing, company fundamentals, smart screening, and AI-generated trade ideas for retail investors"
[Generates comprehensive brief from detailed description]
Claude: "Comprehensive brief generated! Next: /bmad:prd"
```
**Example 3: Interactive Mode**
```
User: /bmad:brief
Claude: "What's your project idea?"
User: "Todo app"
Claude: "What makes it different from existing todo apps?"
User: "Uses voice input and AI scheduling"
Claude: "Who is it for?"
User: "Busy professionals"
[Generates brief with full context]
```
## Important Guidelines
**Always**:
- ✅ Use `bmad_generator` MCP tool (don't generate manually)
- ✅ Validate with vibe-check
- ✅ Store in Pieces
- ✅ Present clear summary
- ✅ Suggest next steps
**Never**:
- ❌ Generate brief content manually (use the tool)
- ❌ Skip vibe-check validation
- ❌ Forget to store in Pieces
- ❌ Leave user uncertain about next steps
## Integration
**After `/bmad:brief`**:
- Suggest `/bmad:prd` to continue
- Or suggest `/bmad:start` to generate complete backlog
- Brief is referenced by PRD generation
**Part of `/bmad:start`**:
- Guided workflow calls brief generation
- Uses brief for PRD generation
- Seamless flow
---
**This command creates the foundation for your entire project backlog!**

commands/bmad-epic.md
---
description: Generate single BMAD epic with user stories
---
# BMAD Epic - Generate Epic File
Use the product-manager subagent to create a single epic file with user stories following BMAD methodology. This command adds NEW epics to an existing backlog or regenerates existing epics.
## When to Use This Command
**Add NEW Epic** (change request, new feature):
```bash
# 6 months after launch, need mobile app
/bmad:epic "Mobile App"
# → Creates EPIC-012-mobile-app.md
```
**Regenerate Existing Epic** (refinement):
```bash
/bmad:epic 3
# → Regenerates EPIC-003 with updated content
```
**NOT used during `/bmad:start`** - guided workflow generates all epics automatically.
## Task Delegation
First check prerequisites, determine which epic to generate, then launch the product-manager subagent to handle the complete epic generation workflow.
## Process
### Step 1: Check Prerequisites
**Require PRD**:
```bash
ls bmad-backlog/prd/prd.md 2>/dev/null || echo "No PRD found"
```
If not found:
```
❌ Error: PRD required for epic generation
Please run: /bmad:prd
(Or /bmad:start for complete workflow)
```
Stop here - do not launch product-manager without PRD.
**Check for Architecture** (recommended):
```bash
ls bmad-backlog/architecture/architecture.md 2>/dev/null || echo "No architecture found"
```
If not found:
```
⚠️ Architecture not found
Epic generation works best with architecture (for technical notes).
Would you like to:
1. Generate architecture first (recommended): /bmad:architecture
2. Continue without architecture (epics will have minimal technical notes)
3. Cancel
Choose:
```
If user chooses 1: Run `/bmad:architecture` first, then continue
If user chooses 2: Continue to Step 2
If user chooses 3: Exit gracefully
### Step 2: Determine Epic to Generate
**If user provided epic number**:
```bash
# User ran: /bmad:epic 3
```
- Epic number = 3
- Store epic_identifier = "3"
**If user provided epic name**:
```bash
# User ran: /bmad:epic "Mobile App"
```
- Epic name = "Mobile App"
- Store epic_identifier = "Mobile App"
**If user provided nothing**:
- Ask: "Which epic would you like to generate?
- Provide epic number (e.g., 1, 2, 3)
- Or epic name for NEW epic (e.g., 'Mobile App')
- Or 'all' to generate all epics from PRD"
- Wait for response
- Store epic_identifier
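A minimal sketch of how that identifier might be classified (the helper is hypothetical, shown only to make the two cases concrete):
```python
def classify_epic_identifier(arg: str) -> tuple:
    """Classify the /bmad:epic argument: a number means regenerate, a name means new epic."""
    arg = arg.strip().strip('"')
    if arg.isdigit():
        return ("regenerate", arg)  # e.g. "3" -> regenerate EPIC-003
    return ("new", arg)             # e.g. "Mobile App" -> add a new epic

# classify_epic_identifier("3")          -> ("regenerate", "3")
# classify_epic_identifier("Mobile App") -> ("new", "Mobile App")
```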
### Step 3: Launch Product-Manager Subagent
Use the Task tool to launch the product-manager subagent in its own context window:
```
Task(
description: "Generate BMAD epic with user stories",
prompt: "Create comprehensive epic file following BMAD methodology.
Epic to Generate: {{epic_identifier}}
Input:
- PRD: bmad-backlog/prd/prd.md
- Architecture: bmad-backlog/architecture/architecture.md (if exists)
Output:
- Epic file: bmad-backlog/epics/EPIC-{num:03d}-{slug}.md
- Updated index: bmad-backlog/STORY-INDEX.md
Your workflow:
1. **Read inputs** to understand context:
- Read bmad-backlog/prd/prd.md
- Read bmad-backlog/architecture/architecture.md (if exists)
- Extract epic definition and user stories
2. **Generate epic** using MCP tool:
```
mcp__plugin_titanium-toolkit_tt__bmad_generator(
doc_type: \"epic\",
input_path: \"bmad-backlog/prd/prd.md bmad-backlog/architecture/architecture.md {{epic_identifier}}\",
project_path: \"$(pwd)\"
)
```
3. **Review and present** epic summary:
- Read generated epic file
- Present title, priority, story count, story points
- Show story list
- Note if technical notes included/minimal
4. **Validate epic** using:
```
mcp__plugin_titanium-toolkit_tt__bmad_validator(
doc_type: \"epic\",
document_path: \"bmad-backlog/epics/EPIC-{num}-{name}.md\"
)
```
5. **Update story index**:
```
mcp__plugin_titanium-toolkit_tt__bmad_generator(
doc_type: \"index\",
input_path: \"bmad-backlog/epics/\",
project_path: \"$(pwd)\"
)
```
6. **Run vibe-check** to validate epic quality
7. **Store in Pieces** for future reference
8. **Present summary** with next steps:
- If more epics in PRD: offer to generate next
- If this was last epic: show completion status
- If new epic not in PRD: suggest updating PRD
**IMPORTANT**: Keep your summary response VERY BRIEF (under 200 tokens). Just return:
- Confirmation epic is complete
- Epic title and number
- Story count
- Story points total
DO NOT include the full epic content in your response - it's already saved to the file.
Follow your complete epic workflow from the bmad-methodology skill.
Project path: $(pwd)",
subagent_type: "product-manager"
)
```
The product-manager subagent will handle:
- Reading PRD and Architecture
- Generating epic file (300-500 lines)
- Presenting epic summary
- Validation (structural and vibe-check)
- Updating story index
- Pieces storage
- Summary presentation with next steps
### Step 4: Return Results
The product-manager will return a summary when complete. Present this to the user.
## What the Product-Manager Creates
The product-manager subagent generates `bmad-backlog/epics/EPIC-{num:03d}-{slug}.md` containing:
- **Epic Header**: Owner, Priority, Sprint, Status, Effort
- **Epic Description**: What and why
- **Business Value**: Why this epic matters
- **Success Criteria**: Checkboxes for completion
- **User Stories**: STORY-{epic}-{num} format
- Each with "As a... I want... so that..."
- Acceptance criteria (checkboxes)
- Technical notes (code examples from architecture)
- **Dependencies**: Blocks/blocked by relationships
- **Risks & Mitigation**: Potential issues and solutions
- **Related Epics**: Cross-references
- **Definition of Done**: Completion checklist
Also updates `bmad-backlog/STORY-INDEX.md` with new epic totals.
## Epic Numbering
**If adding new epic**:
- Determines next epic number by counting existing epics
- New epic becomes EPIC-{next_num}-{slug}.md
**If regenerating**:
- Uses existing epic number
- Overwrites file
- Preserves filename
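A minimal sketch of that numbering logic, assuming filenames follow the `EPIC-{num:03d}-{slug}.md` convention above (the helper itself is illustrative):
```python
import re
from pathlib import Path

def next_epic_filename(title: str, epics_dir: str = "bmad-backlog/epics") -> str:
    """Next epic number = highest existing EPIC-NNN plus one."""
    nums = []
    for path in Path(epics_dir).glob("EPIC-*.md"):
        match = re.match(r"EPIC-(\d+)-", path.name)
        if match:
            nums.append(int(match.group(1)))
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"EPIC-{max(nums, default=0) + 1:03d}-{slug}.md"

# With EPIC-001 through EPIC-011 present:
# next_epic_filename("Mobile App") -> "EPIC-012-mobile-app.md"
```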
## Integration
**Standalone**:
```
/bmad:epic 1
/bmad:epic 2
/bmad:epic 3
```
**Part of `/bmad:start`**:
- Guided workflow generates all epics automatically
- Loops through epic list from PRD
- Generates each sequentially
**After Initial Backlog**:
```
# 6 months later, need new feature
/bmad:epic "Mobile App"
# → Adds EPIC-012
# → Updates index
# → Ready to implement
```
## Voice Feedback
Voice announces:
- "Generating epic" (when starting)
- "Epic {{num}} complete: {{story count}} stories" (when done)
## Cost
Typical cost: ~$0.01 per epic (Claude Haiku 4.5 API usage in bmad_generator tool)
---
**This command delegates to the product-manager subagent who creates complete epic files with user stories!**

commands/bmad-index.md
---
description: Generate BMAD story index summary
---
# BMAD Index - Generate Story Index
Use the product-manager subagent to generate a STORY-INDEX.md file that summarizes all epics and user stories in the backlog. This provides a quick overview for sprint planning and progress tracking.
## Purpose
Create a summary table showing:
- Total epics, stories, and story points
- Epic overview with story counts
- Per-epic story details
- Priority distribution
- Development phases
## When to Use
- After `/bmad:start` completes (auto-generated)
- After adding new epic with `/bmad:epic`
- After manually editing epic files
- To refresh totals and summaries
- Before sprint planning
## Task Delegation
First check that epics exist, then launch the product-manager subagent to handle the complete index generation workflow.
## Process
### Step 1: Check for Epics
```bash
ls bmad-backlog/epics/EPIC-*.md 2>/dev/null || echo "No epics found"
```
**If no epics found**:
```
❌ No epic files found
Story index requires epic files to summarize.
Please generate epics first:
- Run: /bmad:epic 1
- Or: /bmad:start (complete workflow)
```
Stop here - do not launch product-manager without epic files.
**If epics found**: Continue to Step 2.
### Step 2: Launch Product-Manager Subagent
Use the Task tool to launch the product-manager subagent in its own context window:
```
Task(
description: "Generate BMAD story index",
prompt: "Create comprehensive story index summarizing all epics and user stories.
Input:
- Epic files: bmad-backlog/epics/EPIC-*.md
Output:
- Story index: bmad-backlog/STORY-INDEX.md
Your workflow:
1. **Generate story index** using MCP tool:
```
mcp__plugin_titanium-toolkit_tt__bmad_generator(
doc_type: \"index\",
input_path: \"bmad-backlog/epics/\",
project_path: \"$(pwd)\"
)
```
2. **Review generated index**:
- Read bmad-backlog/STORY-INDEX.md
- Extract totals (epics, stories, story points)
- Extract epic breakdown
- Extract priority distribution
3. **Present summary** with key metrics:
- Total epics, stories, story points
- Epic breakdown with story counts per epic
- Priority distribution (P0/P1/P2 percentages)
- Show sample from index (epic overview table)
4. **Run vibe-check** to validate index quality
5. **Store in Pieces** for future reference:
- Include index file
- Include all epic files
- Summarize totals and breakdown
6. **Suggest next steps**:
- Sprint planning guidance
- Implementation readiness
- Progress tracking tips
Follow your complete index workflow from the bmad-methodology skill.
Project path: $(pwd)",
subagent_type: "product-manager"
)
```
The product-manager subagent will handle:
- Scanning all epic files
- Generating story index
- Extracting and presenting totals
- Validation (vibe-check)
- Pieces storage
- Summary presentation with next steps
### Step 3: Return Results
The product-manager will return a summary when complete. Present this to the user.
## What the Product-Manager Creates
The product-manager subagent generates `bmad-backlog/STORY-INDEX.md` containing:
- **Summary Statistics**: Total epics, stories, story points
- **Epic Overview Table**: Epic ID, name, story count, points, status
- **Per-Epic Story Details**: All stories with IDs, titles, priorities
- **Priority Distribution**: P0/P1/P2 breakdown with percentages
- **Development Phases**: Logical grouping of epics
- **Quick Reference**: Key metrics for sprint planning
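For intuition, a minimal sketch of the tallying behind those summary statistics, assuming stories carry `STORY-{epic}-{num}` IDs and each epic states its effort on a `Story Points: N` line (the exact field name is an assumption, not a guaranteed template field):
```python
import re
from pathlib import Path

def tally_backlog(epics_dir: str = "bmad-backlog/epics") -> dict:
    """Count epics, unique story IDs, and total story points across epic files."""
    totals = {"epics": 0, "stories": 0, "points": 0}
    for path in sorted(Path(epics_dir).glob("EPIC-*.md")):
        text = path.read_text()
        totals["epics"] += 1
        totals["stories"] += len(set(re.findall(r"STORY-\d+-\d+", text)))
        points = re.search(r"Story Points:\s*(\d+)", text)
        if points:
            totals["points"] += int(points.group(1))
    return totals
```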
## Error Handling
### If No Epics Found
Handled in Step 1 - command exits gracefully with helpful message.
### If Epic Files Malformed
The product-manager subagent will:
- Report which files couldn't be parsed
- Generate index from parseable epics only
- Offer to help fix malformed files
## Voice Feedback
Voice announces:
- "Generating story index" (when starting)
- "Story index complete: {{N}} epics, {{M}} stories" (when done)
## Example Usage
**Example 1: After Epic Generation**
```
User: /bmad:epic 1
[Epic 1 generated]
User: /bmad:epic 2
[Epic 2 generated]
User: /bmad:index
Product-Manager:
- Scans epics/
- Finds 2 epics
- Counts stories
- Generates index
- "Index complete: 2 epics, 18 stories, 75 story points"
```
**Example 2: After Manual Edits**
```
User: [Edits EPIC-003.md, adds more stories]
User: /bmad:index
Product-Manager:
- Rescans all epics
- Updates totals
- "Index updated: 5 epics, 52 stories (was 45), 210 points (was 180)"
```
**Example 3: Sprint Planning**
```
User: /bmad:index
Product-Manager:
- Generates index
- "Total: 148 stories, 634 points"
- "P0 stories: 98 (65%)"
```
## Integration
**Auto-generated by**:
- `/bmad:start` (after all epics created)
- `/bmad:epic` (after each epic)
**Manually run**:
- After editing epic files
- Before sprint planning
- To refresh totals
**Used by**:
- Project managers for planning
- Developers for understanding scope
- Stakeholders for status updates
## Cost
Typical cost: ~$0.01 (minimal - just parsing and formatting, using Claude Haiku 4.5)
---
**This command delegates to the product-manager subagent who creates the 30,000-foot view of your entire backlog!**

commands/bmad-prd.md
---
description: Generate BMAD Product Requirements Document
---
# BMAD PRD - Generate Product Requirements Document
Use the product-manager subagent to create a comprehensive Product Requirements Document (PRD) following BMAD methodology.
## Task Delegation
First check for product brief, then launch the product-manager subagent to handle the complete PRD generation workflow.
## Process
### Step 1: Check for Product Brief
```bash
ls bmad-backlog/product-brief.md 2>/dev/null || echo "No brief found"
```
**If brief NOT found**:
```
❌ Error: Product Brief not found at bmad-backlog/product-brief.md
PRD generation requires a product brief to work from.
Please run: /titanium-toolkit:bmad-brief first
(Or /titanium-toolkit:bmad-start for complete guided workflow)
```
Stop here - do not launch product-manager without brief.
**If brief exists**: Continue to Step 2.
### Step 2: Launch Product-Manager Subagent
Use the Task tool to launch the product-manager subagent in its own context window:
```
Task(
description: "Generate BMAD PRD",
prompt: "Create comprehensive Product Requirements Document following BMAD methodology.
Input:
- Product Brief: bmad-backlog/product-brief.md
Output:
- PRD: bmad-backlog/prd/prd.md
Your workflow:
1. **Read the product brief** to understand the project vision
2. **Generate PRD** using the MCP tool:
```
mcp__plugin_titanium-toolkit_tt__bmad_generator(
doc_type: \"prd\",
input_path: \"bmad-backlog/product-brief.md\",
project_path: \"$(pwd)\"
)
```
3. **Review epic structure** - Ensure Epic 1 is \"Foundation\" and epic sequence is logical
4. **Detect research needs** - Scan for API, vendor, data source, payment, hosting keywords
5. **Validate PRD** using:
```
mcp__plugin_titanium-toolkit_tt__bmad_validator(
doc_type: \"prd\",
document_path: \"bmad-backlog/prd/prd.md\"
)
```
6. **Run vibe-check** to validate PRD quality and completeness
7. **Store in Pieces** for future reference
8. **Present summary** with epic list, research needs, and next steps
**IMPORTANT**: Keep your summary response BRIEF (under 500 tokens). Just return:
- Confirmation PRD is complete
- Epic count and list (just titles)
- Total user stories count
- Total features count
DO NOT include the full PRD content in your response - it's already saved to the file.
Follow your complete PRD workflow from the bmad-methodology skill.
Project path: $(pwd)",
subagent_type: "product-manager"
)
```
The product-manager subagent will handle:
- Reading product brief
- Generating comprehensive PRD (500-1000 lines)
- Epic structure review
- Research needs detection
- Validation (structural and vibe-check)
- Pieces storage
- Summary presentation
### Step 3: Return Results
The product-manager will return a summary when complete. Present this to the user.
## What the Product-Manager Creates
The product-manager subagent generates `bmad-backlog/prd/prd.md` containing:
**Sections generated**:
1. Executive Summary (Vision, Mission)
2. Product Overview (Users, Value Props, Competitive Positioning)
3. Success Metrics (North Star, KPIs)
4. Feature Requirements (V1 MVP, V2 Features with acceptance criteria)
5. User Stories (organized by Epic)
6. Technical Requirements (Performance, Scalability, Security, etc.)
7. Data Requirements (if applicable)
8. AI/ML Requirements (if applicable)
9. Design Requirements
10. Go-to-Market Strategy
11. Risks & Mitigation (tables)
12. Open Questions
13. Appendix (Glossary, References)
### Step 4: Review Generated PRD
Read the PRD:
```bash
Read bmad-backlog/prd/prd.md
```
**Key sections to review with user**:
1. **Epic List** (from User Stories section):
```
Epic Structure:
- Epic 1: {{name}} ({{story count}} stories)
- Epic 2: {{name}} ({{story count}} stories)
- Epic 3: {{name}} ({{story count}} stories)
...
Total: {{N}} epics, {{M}} stories
Is this epic breakdown logical and complete?
```
2. **Feature Requirements**:
```
V1 MVP Features: {{count}}
V2 Features: {{count}}
Are priorities correct (P0, P1, P2)?
```
3. **Technical Requirements**:
```
Performance: {{targets}}
Security: {{requirements}}
Tech Stack Preferences: {{from brief or inferred}}
Any adjustments needed?
```
### Step 5: Detect Research Needs
Scan PRD for research keywords:
- "API", "vendor", "data source", "integration"
- "payment", "authentication provider"
- "hosting", "infrastructure"
**If research needs detected**:
```
⚠️ I detected you'll need research on:
- {{Research topic 1}} (e.g., "data vendors for pricing")
- {{Research topic 2}} (e.g., "authentication providers")
- {{Research topic 3}} (e.g., "hosting platforms")
Would you like me to generate research prompts for these?
Research prompts help you:
- Use ChatGPT/Claude web (they have web search!)
- Get current pricing and comparisons
- Make informed architecture decisions
Generate research prompts? (yes/no/specific topics)
```
**If user says yes**:
- For each research topic, run `/bmad:research "{{topic}}"`
- Wait for user to complete research
- Note that architecture generation will use research findings
**If user says no**:
- Continue without research
- Architecture will make best guesses
### Step 6: Refine PRD (if needed)
**If user wants changes**:
- Identify specific sections to refine
- Can regenerate entire PRD with additional context
- Or user can manually edit the file
**To regenerate**:
```
# Add context to brief or provide directly
mcp__plugin_titanium-toolkit_tt__bmad_generator(
doc_type: "prd",
input_path: "bmad-backlog/product-brief.md",
project_path: "$(pwd)"
)
```
### Step 7: Validate PRD Structure
Use the `bmad_validator` MCP tool to check completeness:
```
mcp__plugin_titanium-toolkit_tt__bmad_validator(
doc_type: "prd",
document_path: "bmad-backlog/prd/prd.md"
)
```
**Check results**:
- If valid → Continue
- If missing sections → Alert user, regenerate
### Step 8: Validate with vibe-check
```
mcp__vibe-check__vibe_check(
goal: "Create comprehensive PRD for {{project}}",
plan: "Generated PRD with {{N}} epics, {{M}} features, technical requirements, user stories",
uncertainties: [
"Is epic structure logical and sequential?",
"Are requirements complete?",
"Any missing critical features?"
]
)
```
**Process feedback**:
- Review vibe-check suggestions
- Make adjustments if needed
- Regenerate if significant concerns
### Step 9: Store in Pieces
```
mcp__Pieces__create_pieces_memory(
summary_description: "Product Requirements Document for {{project}}",
summary: "Complete PRD generated with {{N}} sections. Epics: {{list epics}}. Key features: {{list main features}}. Technical requirements: {{summary}}. User stories: {{count}} across {{epic count}} epics. Ready for architecture generation.",
files: [
"bmad-backlog/product-brief.md",
"bmad-backlog/prd/prd.md"
],
project: "$(pwd)"
)
```
### Step 10: Present Summary
```
✅ Product Requirements Document Complete!
📄 Location: bmad-backlog/prd/prd.md
📊 PRD Summary:
- {{N}} Epics defined
- {{M}} User stories
- {{F}} V1 MVP features
- Technical requirements specified
- Success metrics defined
Epic Structure:
1. Epic 1: {{name}} (Foundation - this is always first)
2. Epic 2: {{name}}
3. Epic 3: {{name}}
...
📏 Document Size: ~{{line count}} lines
✅ vibe-check validated structure
---
💡 Next Steps:
Option 1: Generate Architecture (Recommended)
Run: /bmad:architecture
Option 2: Review PRD first
Open: bmad-backlog/prd/prd.md
(Review and come back when ready)
Option 3: Generate complete backlog
Run: /bmad:start
(Will use this PRD to generate Architecture and all Epics)
What would you like to do?
```
## Important Guidelines
**Always**:
- ✅ Check for product brief first
- ✅ Use `bmad_generator` MCP tool (don't generate manually)
- ✅ Detect research needs from requirements
- ✅ Validate with `bmad_validator` MCP tool
- ✅ Validate with vibe-check
- ✅ Store in Pieces
- ✅ Present epic structure clearly
- ✅ Suggest next steps
**Never**:
- ❌ Generate PRD content manually
- ❌ Skip validation steps
- ❌ Ignore vibe-check concerns
- ❌ Forget to check epic structure (Epic 1 must be Foundation)
- ❌ Miss research opportunities
## Epic List Quality Check
**Verify Epic 1 is Foundation**:
```
Epic 1 should be: "Foundation", "Infrastructure", "Core Setup", or similar
Epic 1 should NOT be: Feature-specific like "User Profiles" or "Dashboard"
If Epic 1 is not foundation:
- Alert user
- Suggest reordering
- Regenerate with correct sequence
```
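The same check as a minimal Python sketch (the hint list and helper are illustrative assumptions, not toolkit code):
```python
FOUNDATION_HINTS = ("foundation", "infrastructure", "core setup", "setup")

def epic_one_is_foundation(epic_titles: list) -> bool:
    """True if the first epic reads like a foundation/infrastructure epic."""
    if not epic_titles:
        return False
    first = epic_titles[0].lower()
    return any(hint in first for hint in FOUNDATION_HINTS)

# epic_one_is_foundation(["Foundation & Core Setup", "User Profiles"])  -> True
# epic_one_is_foundation(["User Profiles", "Dashboard"])                -> False
```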
## Integration with Workflow
**Standalone Usage**:
```
/bmad:brief
/bmad:prd ← You are here
/bmad:architecture
```
**Part of `/bmad:start`**:
- Guided workflow generates brief first
- Then calls PRD generation
- Uses brief automatically
- Continues to architecture
**Cost**: ~$0.03 (Claude Haiku 4.5 for PRD generation)
---
**This command creates the complete product specification that drives architecture and implementation!**

commands/bmad-research.md
---
description: Generate research prompts for technical decisions
---
# BMAD Research - Generate Research Prompts
You are helping the user research technical decisions by generating comprehensive research prompts for web-based AIs (ChatGPT, Claude web), which have web search capabilities.
## Purpose
Generate structured research prompts that users can copy to ChatGPT/Claude web to research:
- API vendors and data sources
- Authentication providers
- Hosting platforms
- Payment processors
- Third-party integrations
- Technology stack options
Results are documented in structured templates and referenced during architecture generation.
## When to Use
**During BMAD workflow**:
- After PRD mentions external APIs/vendors
- Before architecture generation
- When technical decisions need research
**Standalone**:
- Evaluating vendor options
- Comparing technologies
- Cost analysis
- Technical due diligence
## Process
### Step 1: Identify Research Topic
**If user provided topic**:
```bash
# User ran: /bmad:research "data vendors for precious metals"
```
- Topic = "data vendors for precious metals"
**If no topic**:
- Ask: "What do you need to research?"
- Show common topics:
```
Common research topics:
1. Data vendors/APIs
2. Hosting platforms (Railway, Vercel, GCP, etc.)
3. Authentication providers (Clerk, Auth0, custom, etc.)
4. Payment processors (Stripe, PayPal, etc.)
5. AI/ML options (OpenAI, Anthropic, self-hosted)
6. Database options
7. Other (specify)
Topic:
```
### Step 2: Gather Context from PRD
**If PRD exists**:
```bash
Read bmad-backlog/prd/prd.md
```
Extract relevant context:
- What features need this research?
- What are the constraints? (budget, performance)
- Any technical preferences mentioned?
**If no PRD**:
- Use topic only
- Generate generic research prompt
- Note: "Research will be more focused with a PRD"
### Step 3: Generate Research Prompt
Create comprehensive prompt for web AI.
**Topic slug**: Convert topic to filename-safe string
```python
topic_slug = topic.lower().replace(' ', '-').replace('/', '-')
# "data vendors for precious metals" → "data-vendors-for-precious-metals"
```
**Save to**: `bmad-backlog/research/RESEARCH-{topic_slug}-prompt.md`
**Prompt content**:
```markdown
# Research Prompt: {Topic}
**COPY THIS ENTIRE PROMPT** and paste into ChatGPT (GPT-4) or Claude (web).
They have web search and can provide current, comprehensive research.
---
## Research Request
**Project**: {{project name from PRD or "New Project"}}
**Research Topic**: {{topic}}
**Context**:
{{Extract from PRD:
- What features need this
- Performance requirements
- Budget constraints
- Technical preferences}}
---
## What I Need
Please research and provide:
### 1. Overview
- What options exist for {{topic}}?
- What are the top 5-7 solutions/vendors/APIs?
- Current market leaders?
### 2. Comparison Table
Create a detailed comparison table:
| Option | Pricing | Key Features | Pros | Cons | Best For |
|--------|---------|--------------|------|------|----------|
| Option 1 | | | | | |
| Option 2 | | | | | |
| Option 3 | | | | | |
### 3. Technical Details
For each option, provide:
- **API Documentation**: Official docs link
- **Authentication**: API key, OAuth, etc.
- **Rate Limits**: Requests per minute/hour
- **Data Format**: JSON, XML, GraphQL, etc.
- **SDKs**: Python, Node.js, etc. with links
- **Code Examples**: If available
- **Community**: GitHub stars, Stack Overflow activity
### 4. Integration Complexity
For each option:
- **Estimated Setup Time**: Hours/days
- **Dependencies**: What else is needed
- **Learning Curve**: Easy/Medium/Hard
- **Documentation Quality**: Excellent/Good/Poor
- **Community Support**: Active/Moderate/Limited
### 5. Recommendations
Based on my project requirements:
{{List key requirements}}
Which option would you recommend and why?
Provide recommendation for:
- **MVP**: Best for getting started quickly
- **Production**: Best for long-term reliability
- **Budget**: Most cost-effective option
### 6. Cost Analysis
For each option, provide:
**Free Tier**:
- What's included
- Limitations
- Good for MVP? (yes/no)
**Paid Tiers**:
- Tier names and pricing
- What each tier includes
- Rate limit increases
**Estimated Monthly Cost**:
- MVP (low volume): $X-Y
- Production (medium volume): $X-Y
- Scale (high volume): $X-Y
### 7. Risks & Considerations
For each option:
- **Vendor Lock-in**: How easy to migrate away?
- **Data Quality**: Accuracy, freshness, reliability
- **Compliance**: Regional restrictions, data governance
- **Uptime/SLA**: Published SLAs, historical uptime
- **Support**: Response times, support channels
### 8. Source Links
Provide links to:
- Official website
- Pricing page
- API documentation
- Getting started guide
- Community forums/Discord
- Comparison articles/reviews
- GitHub repositories (if applicable)
---
## Deliverable Format
Please structure your response to match the sections above for easy copy/paste into my findings template.
Thank you!
```
**Write this to file**: bmad-backlog/research/RESEARCH-{topic_slug}-prompt.md
### Step 4: Generate Findings Template
Create structured template for documenting research.
**Save to**: `bmad-backlog/research/RESEARCH-{topic_slug}-findings.md`
**Template content**:
````markdown
# Research Findings: {Topic}
**Date**: {current date}
**Researcher**: {user name or TBD}
**Status**: Draft
---
## Research Summary
**Question**: {what was researched}
**Recommendation**: {chosen option and why}
**Confidence**: High | Medium | Low
---
## Options Evaluated
### Option 1: {Name}
**Overview**:
**Pricing**:
- Free tier:
- Paid tiers:
- Estimated cost for MVP: $X/month
- Estimated cost for Production: $Y/month
**Features**:
-
-
**Pros**:
-
-
**Cons**:
-
-
**Technical Details**:
- API: REST | GraphQL | WebSocket
- Authentication:
- Rate limits:
- Data format:
- SDKs:
**Documentation**: {link}
**Community**: {GitHub stars, activity}
---
### Option 2: {Name}
[Same structure]
---
### Option 3: {Name}
[Same structure]
---
## Comparison Matrix
| Criteria | Option 1 | Option 2 | Option 3 | Winner |
|----------|----------|----------|----------|--------|
| Cost (MVP) | $X/mo | $Y/mo | $Z/mo | |
| Features | X | Y | Z | |
| API Quality | {rating} | {rating} | {rating} | |
| Documentation | {rating} | {rating} | {rating} | |
| Community | {rating} | {rating} | {rating} | |
| Ease of Use | {rating} | {rating} | {rating} | |
| **Overall** | | | | **{Winner}** |
---
## Recommendation
**Chosen**: {Option X}
**Rationale**:
1. {Reason 1}
2. {Reason 2}
3. {Reason 3}
**For MVP**: {Why this is good for MVP}
**For Production**: {Scalability considerations}
**Implementation Priority**: {When to implement - MVP/Phase 2/etc}
---
## Implementation Notes
**Setup Steps**:
1. {Step 1}
2. {Step 2}
3. {Step 3}
**Configuration**:
```
{Config example or .env variables needed}
```
**Code Example**:
```{language}
{Basic usage example if available}
```
---
## Cost Projection
**MVP** (low volume):
- Monthly cost: $X
- Included: {what's covered}
**Production** (medium volume):
- Monthly cost: $Y
- Growth: {how costs scale}
**At Scale** (high volume):
- Monthly cost: $Z
- Optimization: {cost reduction strategies}
---
## Risks & Mitigations
| Risk | Impact | Likelihood | Mitigation |
|------|--------|-----------|------------|
| {Risk 1} | High/Med/Low | High/Med/Low | {How to mitigate} |
| {Risk 2} | High/Med/Low | High/Med/Low | {How to mitigate} |
---
## Implementation Checklist
- [ ] Create account/sign up
- [ ] Obtain API key/credentials
- [ ] Test in development environment
- [ ] Review pricing and set cost alerts
- [ ] Document integration in architecture
- [ ] Add credentials to .env.example
- [ ] Test error handling and rate limits
---
## References
- Official Website: {link}
- Pricing Page: {link}
- API Docs: {link}
- Getting Started: {link}
- Community: {link}
- Comparison Articles: {links}
---
## Next Steps
1. ✅ Research complete
2. Review findings with team (if applicable)
3. Make final decision on {chosen option}
4. Update PRD Technical Assumptions with this research
5. Reference in Architecture document generation
---
**Status**: ✅ Research Complete | ⏳ Awaiting Decision | ❌ Needs More Research
---
*Fill in this template with findings from ChatGPT/Claude web research.*
*Save this file when complete.*
*Architecture generation will reference this research.*
````
### Step 5: Present to User
```
📋 Research Prompt and Template Generated!
I've created two files:
📄 1. Research Prompt
Location: bmad-backlog/research/RESEARCH-{{topic}}-prompt.md
This contains a comprehensive research prompt with your project context.
📄 2. Findings Template
Location: bmad-backlog/research/RESEARCH-{{topic}}-findings.md
This is a structured template for documenting research results.
---
🔍 Next Steps:
1. Open: bmad-backlog/research/RESEARCH-{{topic}}-prompt.md
2. **Copy the entire prompt**
3. Open ChatGPT (https://chat.openai.com) or Claude (https://claude.ai)
→ They have web search for current info!
4. Paste the prompt
5. Wait for comprehensive research (5-10 minutes)
6. Copy findings into template:
bmad-backlog/research/RESEARCH-{{topic}}-findings.md
7. Save the template file
8. Come back and run:
- /bmad:prd (if updating PRD)
- /bmad:architecture (I'll use your research!)
---
Would you like me to show you the research prompt now?
```
**If user says yes**:
- Display the prompt file content
- User can copy directly
**If user says no**:
- "The files are ready when you need them!"
### Step 6: Store in Pieces
```
mcp__Pieces__create_pieces_memory(
summary_description: "Research prompt for {{topic}}",
summary: "Generated research prompt for {{topic}}. User will research: {{what to evaluate}}. Purpose: {{why needed for project}}. Findings will inform: {{PRD technical assumptions / Architecture tech stack decisions}}. Template provided for structured documentation.",
files: [
"bmad-backlog/research/RESEARCH-{{topic}}-prompt.md",
"bmad-backlog/research/RESEARCH-{{topic}}-findings.md"
],
project: "$(pwd)"
)
```
## Integration with Other Commands
### Called from `/bmad:prd`
When PRD generation detects research needs:
```
Claude: "I see you need data vendors. Generate research prompt?"
User: "yes"
[Runs /bmad:research "data vendors"]
Claude: "Research prompt generated. Please complete research and return when done."
[User researches, fills template]
User: "Research complete"
Claude: "Great! Continuing PRD with your findings..."
[Reads RESEARCH-data-vendors-findings.md]
[Incorporates into PRD Technical Assumptions]
```
### Used by `/bmad:architecture`
Architecture generation automatically checks for research:
```bash
ls bmad-backlog/research/RESEARCH-*-findings.md
```
If found:
- Read all findings
- Use recommendations in tech stack
- Reference research in Technology Decisions table
- Include costs from research in cost estimates
## Voice Feedback
Voice announces:
- "Research prompt generated" (when done)
- "Ready for external research" (reminder)
## Example Topics
**Data & APIs**:
- "data vendors for {domain}"
- "API marketplaces"
- "real-time data feeds"
**Infrastructure**:
- "hosting platforms for {tech stack}"
- "CI/CD providers"
- "monitoring solutions"
- "CDN providers"
**Third-Party Services**:
- "authentication providers"
- "payment processors"
- "email services"
- "SMS providers"
**AI/ML**:
- "LLM hosting options"
- "embedding models"
- "vector databases"
## Important Guidelines
**Always**:
- ✅ Include project context in prompt
- ✅ Generate findings template
- ✅ Guide user to web AI
- ✅ Store prompts in Pieces
- ✅ Explain next steps clearly
**Never**:
- ❌ Try to research in Claude Code (limited web search)
- ❌ Hallucinate vendor pricing (use web AI)
- ❌ Skip generating findings template
- ❌ Forget project context in prompt
## Why This Approach
**Claude Code limitations**:
- Limited web search
- Can't browse vendor pricing pages
- May hallucinate current details
**ChatGPT/Claude Web strengths**:
- Actual web search
- Can browse documentation
- Current pricing information
- Community discussions
- Up-to-date comparisons
**Best of both worlds**:
- Claude Code: Generate prompts, manage workflow
- Web AI: Thorough research with search
- Result: Informed decisions, documented rationale
**Cost**: $0 (no API calls, just template generation)
---
**This command enables informed technical decisions with documented research!**

commands/bmad-start.md
File diff suppressed because it is too large.

commands/catchup.md
---
description: Get context about recent projects and where work left off from Pieces LTM
---
You are starting a new session with the user. Use the Pieces MCP `ask_pieces_ltm` tool to gather context about:
1. What projects the user has been working on recently (last 24-48 hours)
2. What specific tasks or files they were editing
3. Any unfinished work or issues they encountered
4. The current state of their active projects
Query Pieces with questions like:
- "What projects has the user been working on in the last 24 hours?"
- "What was the user working on most recently?"
- "What files or code was the user editing recently?"
- "Were there any errors or issues the user was troubleshooting?"
After gathering this context, provide a concise summary organized by:
- **Active Projects**: List the main projects with brief descriptions
- **Recent Work**: What was being worked on most recently
- **Where We Left Off**: Specific tasks, files, or issues that may need continuation
- **Current Focus**: What appears to be the highest priority based on recent activity
Be specific with file paths, timestamps, and concrete details. The goal is to help both you and the user quickly resume work without losing context.

---
description: Run CodeRabbit CLI analysis on uncommitted changes
---
# CodeRabbit Review Command
You are running CodeRabbit CLI analysis to catch race conditions, memory leaks, security vulnerabilities, and logic errors in uncommitted code changes.
## Purpose
CodeRabbit CLI provides AI-powered static analysis that detects:
- Race conditions in concurrent code
- Memory leaks and resource leaks
- Security vulnerabilities
- Logic errors and edge cases
- Performance issues
- Code quality problems
This complements the 3-agent review by finding issues that require deep static analysis.
## Prerequisites
**CodeRabbit CLI must be installed**:
Check installation:
```bash
command -v coderabbit >/dev/null 2>&1 || echo "Not installed"
```
**If not installed**:
```
❌ CodeRabbit CLI not found
CodeRabbit CLI is optional but provides enhanced code analysis.
To install:
curl -fsSL https://cli.coderabbit.ai/install.sh | sh
source ~/.zshrc # or your shell rc file
Then authenticate:
coderabbit auth login
See: https://docs.coderabbit.ai/cli/overview
Skip CodeRabbit and continue? (yes/no)
```
If skip: Exit
If install: Wait for user to install, then continue
## Process
### Step 1: Check Authentication
```bash
coderabbit auth status
```
**If not authenticated**:
```
⚠️ CodeRabbit not authenticated
For enhanced reviews (with team learnings):
coderabbit auth login
Continue without authentication? (yes/no)
```
Authentication is optional but provides better reviews (Pro feature).
### Step 2: Choose Review Mode
Ask user:
```
CodeRabbit Review Mode:
1. **AI-Optimized** (--prompt-only)
- Token-efficient output
- Optimized for Claude to parse
- Quick fix application
- Recommended for workflows
2. **Detailed** (--plain)
- Human-readable detailed output
- Comprehensive explanations
- Good for learning
- More verbose
Which mode? (1 or 2)
```
Store choice.
### Step 3: Determine Review Scope
**Default**: Uncommitted changes only
**Options**:
```
What should CodeRabbit review?
1. Uncommitted changes only (default)
2. All changes vs main branch
3. All changes vs specific branch
Scope:
```
**Map to flags**:
- Option 1: `--type uncommitted`
- Option 2: `--base main`
- Option 3: `--base [branch name]`
### Step 4: Run CodeRabbit in Background
**For AI-Optimized mode**:
```bash
# Run in background (can take 7-30 minutes)
coderabbit --prompt-only --type uncommitted
```
**For Detailed mode**:
```bash
coderabbit --plain --type uncommitted
```
Use Bash tool with `run_in_background: true`
Show user:
```
🤖 CodeRabbit Analysis Running...
This will take 7-30 minutes depending on code size.
Running in background - you can continue working.
I'll check progress periodically.
```
### Step 5: Wait for Completion
Check periodically with BashOutput tool:
```bash
# Check if CodeRabbit completed
# Look for completion markers in output
```
Every 2-3 minutes, show:
```
CodeRabbit analyzing... ([X] minutes elapsed)
```
When complete:
```
✅ CodeRabbit analysis complete!
```
### Step 6: Parse Findings
**If --prompt-only mode**:
- Read structured output
- Extract issues by severity:
- Critical
- High
- Medium
- Low
**If --plain mode**:
- Show full output to user
- Ask if they want Claude to fix issues
### Step 7: Present Findings
```
🤖 CodeRabbit Analysis Complete
⏱️ Duration: [X] minutes
📊 Findings:
- 🔴 Critical: [X] issues
- 🟠 High: [Y] issues
- 🟡 Medium: [Z] issues
- 🟢 Low: [W] issues
Critical Issues:
1. Race condition in auth.ts:45
Issue: Shared state access without lock
Fix: Add mutex or use atomic operations
2. Memory leak in websocket.ts:123
Issue: Event listener not removed on disconnect
Fix: Add cleanup in disconnect handler
[List all critical and high issues]
Would you like me to fix these issues?
1. Fix critical and high priority (recommended)
2. Fix critical only
3. Show me the issues, I'll fix manually
4. Skip (not recommended)
```
### Step 8: Apply Fixes (if requested)
**For each critical/high issue**:
1. Read the issue details
2. Locate the problematic code
3. Apply CodeRabbit's suggested fix
4. Run relevant tests
5. Mark as fixed
Show progress:
```
Fixing issues...
✅ Fixed race condition in auth.ts
✅ Fixed memory leak in websocket.ts
✅ Fixed SQL injection in users.ts
⏳ Fixing error handling in api.ts...
```
### Step 9: Optional Re-run
After fixes:
```
Fixes applied: [X] critical, [Y] high
Re-run CodeRabbit to verify fixes? (yes/no)
```
**If yes**:
```bash
coderabbit --prompt-only --type uncommitted
```
Check no new critical issues introduced.
### Step 10: Store in Pieces
```
mcp__Pieces__create_pieces_memory(
summary_description: "CodeRabbit review findings for [files]",
summary: "CodeRabbit CLI analysis complete. Findings: [X] critical, [Y] high, [Z] medium, [W] low. Critical issues: [list]. High issues: [list]. Fixes applied: [what was fixed]. Duration: [X] minutes. Verified: [yes/no].",
files: [
"list all reviewed files",
".titanium/coderabbit-report.md" (if created)
],
project: "$(pwd)"
)
```
### Step 11: Present Summary
```
✅ CodeRabbit Review Complete!
📊 Summary:
- Duration: [X] minutes
- Files reviewed: [N]
- Issues found: [Total]
- Critical: [X] ([fixed/pending])
- High: [Y] ([fixed/pending])
- Medium: [Z]
- Low: [W]
✅ Critical issues: All fixed
✅ High priority: All fixed
⚠️ Medium/Low: Review manually if needed
💾 Findings stored in Pieces
---
Next steps:
1. Run tests to verify fixes
2. Run /titanium:review for additional validation
3. Or continue with your workflow
```
## Error Handling
### If CodeRabbit Not Installed
```
⚠️ CodeRabbit CLI not found
CodeRabbit is optional but provides enhanced static analysis.
Would you like to:
1. Install now (I'll guide you)
2. Skip and use 3-agent review only
3. Cancel
Choose:
```
### If CodeRabbit Times Out
```
⏰ CodeRabbit taking longer than expected
Analysis started [X] minutes ago.
Typical duration: 7-30 minutes.
Options:
1. Keep waiting
2. Cancel and proceed without CodeRabbit
3. Check CodeRabbit output so far
What would you like to do?
```
### If No Changes to Review
```
No uncommitted changes found
CodeRabbit needs changes to review.
Options:
1. Review all changes vs main branch
2. Specify different base branch
3. Cancel
Choose:
```
## Integration with Workflow
### Standalone Usage
```bash
/coderabbit:review
# Runs analysis
# Applies fixes
# Done
```
### Part of /titanium:work
```bash
/titanium:work
# ... implementation ...
# Phase 3.5: CodeRabbit (if installed)
# ... 3-agent review ...
# Complete
```
### Before Committing
```bash
# Before commit
/coderabbit:review
# Fix critical issues
# Then commit
```
## Voice Feedback
Voice hooks announce:
- "Running CodeRabbit analysis" (when starting)
- "CodeRabbit complete: [X] issues found" (when done)
- "Applying CodeRabbit fixes" (during fixes)
- "CodeRabbit fixes complete" (after fixes)
## Cost
**CodeRabbit pricing**:
- Free tier: Basic analysis, limited usage
- Pro: Enhanced reviews with learnings
- Enterprise: Custom limits
**Not included in titanium-toolkit pricing** - separate service.
---
**This command provides deep static analysis to catch issues agents might miss!**

File diff suppressed because it is too large.
---
description: Understand how Titanium Toolkit orchestrates subagents, skills, and MCP tools
---
# Titanium Toolkit: Orchestration Model
You are Claude Code running in **orchestrator mode** with the Titanium Toolkit plugin. This guide explains your role and how to effectively coordinate specialized subagents for both **planning (BMAD)** and **development (Titanium)** workflows.
## Your Role as Orchestrator
**You are the conductor, not the performer.**
In Titanium Toolkit, you don't generate documents or write code directly. Instead, you orchestrate two types of workflows:
### BMAD Workflows (Planning & Documentation)
Generate comprehensive project documentation through specialized planning agents:
- `/bmad:start` - Complete backlog generation (Brief → PRD → Architecture → Epics)
- `/bmad:brief`, `/bmad:prd`, `/bmad:architecture`, `/bmad:epic` - Individual documents
- Delegates to: @product-manager, @architect subagents
### Titanium Workflows (Development & Implementation)
Execute implementation through specialized development agents:
- `/titanium:plan` - Requirements → Implementation plan
- `/titanium:work` - Execute implementation with sequential task delegation
- `/titanium:review` - Parallel quality review by 3+ agents
- Delegates to: @api-developer, @frontend-developer, @test-runner, @security-scanner, @code-reviewer, etc.
## Your Orchestration Responsibilities
1. **Listen to user requests** and understand their goals
2. **Follow slash command prompts** that provide detailed delegation instructions
3. **Launch specialized subagents** via the Task tool to perform work
4. **Coordinate workflow** by managing prerequisites, sequencing, and handoffs
5. **Present results** from subagents back to the user
6. **Handle errors** and guide users through issues
7. **Manage state transitions** for multi-phase workflows
8. **Run meta-validations** (vibe-check) at checkpoints
9. **Store milestones** in Pieces LTM for context recovery
## The Orchestration Architecture
### Three-Layer System
```
Layer 1: YOU (Orchestrator Claude)
├── Receives user requests
├── Interprets slash commands
├── Checks prerequisites
├── Launches subagents via Task tool
└── Presents results to user
Layer 2: Specialized Subagents (Separate Context Windows)
├── @product-manager (Brief, PRD, Epics)
├── @architect (Architecture)
├── @api-developer (Backend code)
├── @frontend-developer (UI code)
├── @test-runner (Testing)
├── @security-scanner (Security review)
├── @code-reviewer (Code quality)
└── ... (17 total specialized agents)
Layer 3: Tools & Knowledge
├── MCP Tools (tt server: plan_parser, bmad_generator, bmad_validator)
├── Skills (bmad-methodology, api-best-practices, frontend-patterns, etc.)
└── Standard Tools (Read, Write, Edit, Bash, etc.)
```
## How Slash Commands Guide You
Slash commands (like `/bmad:start`, `/titanium:work`) contain **detailed orchestration scripts** that tell you exactly how to delegate work.
### Slash Command Structure
Each command provides:
1. **Prerequisites check** - What you verify before proceeding
2. **Task delegation instructions** - Exact Task tool calls with prompts for subagents
3. **Suggested MCP tool usage** - Which MCP tools subagents should use
4. **Validation requirements** - What must be validated
5. **Error handling** - How to handle failures
6. **Next steps** - What to suggest after completion
### Example: How You Orchestrate `/bmad:architecture`
**The slash command tells you**:
```
Step 1: Check if PRD exists
- If not found: Error, tell user to run /bmad:prd
- If found: Continue to Step 2
Step 2: Launch Architect Subagent
Task(
description: "Generate BMAD architecture",
prompt: "... [detailed workflow] ...",
subagent_type: "architect"
)
Step 3: Return Results
Present architect's summary to user
```
**You execute**:
1. ✅ Check: `ls bmad-backlog/prd/prd.md`
2. ✅ Launch: `Task(description: "Generate BMAD architecture", ...)`
3. ✅ Wait: Architect runs in separate context window
4. ✅ Present: Show architect's summary to user
**You DON'T**:
- ❌ Read the PRD yourself
- ❌ Call bmad_generator yourself
- ❌ Generate the architecture content
- ❌ Validate the output yourself
The **architect subagent** does all that work in its own context window.
## Subagent Context Windows
Each subagent runs in a **separate, isolated context window** with:
### What Subagents Have
1. **Specialized expertise** - Their agent prompt defines their role
2. **Skills** - Knowledge bases (bmad-methodology, api-best-practices, etc.)
3. **Tool access** - MCP tools and standard tools they need
4. **Clean context** - No token pollution from orchestrator's context
5. **Focus** - Single task to complete
### What Subagents Don't Have
1. **Your conversation history** - They only see what you pass in the Task prompt
2. **User's original request** - You must include relevant context in the Task prompt
3. **Other subagents' work** - Each runs independently
4. **Orchestration knowledge** - They focus on their specific task
### Why Separate Context Windows Matter
**Token efficiency**:
- Your orchestration context stays clean
- Each subagent only loads what it needs
- Large documents don't pollute main conversation
**Specialization**:
- Subagent loads its skills (500-1000 line knowledge bases)
- Subagent focuses on single task
- Better quality output
**Parallelization** (when applicable):
- Multiple review agents can run simultaneously
- Independent tasks don't block each other
## MCP Tools: The Shared Utilities
### The `tt` MCP Server
Titanium Toolkit provides a custom MCP server (`tt`) with three tools:
1. **plan_parser** - Requirements → Implementation Plan
```
mcp__plugin_titanium-toolkit_tt__plan_parser(
requirements_file: ".titanium/requirements.md",
project_path: "$(pwd)"
)
```
2. **bmad_generator** - Generate BMAD Documents
```
mcp__plugin_titanium-toolkit_tt__bmad_generator(
doc_type: "brief|prd|architecture|epic|index",
input_path: "...",
project_path: "$(pwd)"
)
```
3. **bmad_validator** - Validate BMAD Documents
```
mcp__plugin_titanium-toolkit_tt__bmad_validator(
doc_type: "brief|prd|architecture|epic",
document_path: "..."
)
```
### How Subagents Use MCP Tools
**The slash command tells subagents which tools to use**:
```
Task(
prompt: "...
2. **Generate PRD** using MCP tool:
mcp__plugin_titanium-toolkit_tt__bmad_generator(
doc_type: \"prd\",
input_path: \"bmad-backlog/product-brief.md\",
project_path: \"$(pwd)\"
)
4. **Validate PRD** using:
mcp__plugin_titanium-toolkit_tt__bmad_validator(
doc_type: \"prd\",
document_path: \"bmad-backlog/prd/prd.md\"
)
...",
subagent_type: "product-manager"
)
```
The subagent sees these MCP tool examples and uses them.
## Skills: Domain Knowledge for Subagents
### Available Skills
**Product/Planning**:
- `bmad-methodology` (1092 lines) - PRD, Architecture, Epic, Story creation best practices
- `project-planning` (883 lines) - Work breakdown, estimation, dependencies, sprint planning
**Development**:
- `api-best-practices` (700+ lines) - REST API design, authentication, versioning, OpenAPI
- `frontend-patterns` (800+ lines) - React patterns, state management, performance, accessibility
**Quality**:
- `testing-strategy` (909 lines) - Test pyramid, TDD, mocking, coverage, CI/CD
- `code-quality-standards` (1074 lines) - SOLID, design patterns, refactoring, code smells
- `security-checklist` (1012 lines) - OWASP Top 10, vulnerabilities, auth, secrets management
**Operations**:
- `devops-patterns` (1083 lines) - CI/CD, infrastructure as code, deployments, monitoring
- `debugging-methodology` (773 lines) - Systematic debugging, root cause analysis, profiling
**Documentation**:
- `technical-writing` (912 lines) - Clear docs, README structure, API docs, tutorials
### How Skills Work
**Model-invoked** (not user-invoked):
- Subagents automatically use skills when relevant
- Skills are discovered based on their description
- No explicit invocation needed
**Progressive disclosure**:
- Skills are large (500-1000 lines each)
- Claude only loads relevant sections when needed
- Supports deep expertise without token waste
**Example**: When @architect generates architecture:
1. Architect agent loads in separate context
2. Sees `skills: [bmad-methodology, api-best-practices, devops-patterns]` in frontmatter
3. Claude automatically loads these skills when relevant
4. Uses bmad-methodology for document structure
5. Uses api-best-practices for API design sections
6. Uses devops-patterns for infrastructure sections
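The wiring in the agent's frontmatter might look like this (a sketch; `skills` is the key named above, the other fields are illustrative):

```
---
name: architect
description: Generates technical architecture documents
skills: [bmad-methodology, api-best-practices, devops-patterns]
---
```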
## Complete Workflow Example: `/bmad:start`
Let's walk through the complete orchestration:
### User Request
```
User: /bmad:start
```
### Your Orchestration (Step by Step)
**Phase 1: Introduction**
- YOU: Welcome user, explain workflow
- YOU: Check for existing docs
- YOU: Ask for workflow mode (Interactive/YOLO)
**Phase 2: Product Brief**
- YOU: Ask user for project idea
- YOU: Gather idea and context
- YOU: Launch @product-manager subagent via Task tool
- @product-manager (in separate window):
- Uses bmad_generator MCP tool
- Uses bmad-methodology skill
- Validates with bmad_validator
- Runs vibe-check
- Stores in Pieces
- Returns summary
- YOU: Present product-manager's summary to user
**Phase 3: PRD**
- YOU: Launch @product-manager subagent via Task tool
- @product-manager (new separate window):
- Reads product brief
- Uses bmad_generator MCP tool
- Reviews epic structure
- Uses bmad-methodology skill
- Validates with bmad_validator
- Runs vibe-check
- Stores in Pieces
- Returns summary with epic list
- YOU: Present epic list to user
- YOU: Detect research needs from epic keywords
**Phase 4: Research (If Needed)**
- YOU: Offer to generate research prompts
- YOU: Generate prompts if user wants them
- YOU: Wait for user to complete research
**Phase 5: Architecture**
- YOU: Launch @architect subagent via Task tool
- @architect (separate window):
- Reads PRD and research findings
- Uses bmad_generator MCP tool
- Uses bmad-methodology, api-best-practices, devops-patterns skills
- Proposes tech stack
- Validates with bmad_validator
- Runs vibe-check
- Stores in Pieces
- Returns summary with tech stack
- YOU: Present architect's tech stack to user
**Phase 6: Epic Generation**
- YOU: Extract epic list from PRD
- YOU: Count how many epics to generate
- YOU: For each epic (sequential):
- Launch @product-manager subagent via Task tool
- @product-manager (new window each time):
- Reads PRD and Architecture
- Uses bmad_generator MCP tool for epic
- Uses bmad-methodology skill
- Validates epic
- Runs vibe-check
- Stores in Pieces
- Returns brief summary
- YOU: Show progress ("Epic 3 of 5 complete")
- YOU: Launch @product-manager for story index
- @product-manager:
- Uses bmad_generator MCP tool for index
- Extracts totals
- Runs vibe-check
- Stores in Pieces
- Returns summary
**Phase 7: Final Summary**
- YOU: Run final vibe-check on complete backlog
- YOU: Store complete backlog summary in Pieces
- YOU: Present comprehensive completion summary
### What You Did
✅ Orchestrated 6+ subagent launches
✅ Managed workflow state transitions
✅ Handled user interactions and approvals
✅ Coordinated data handoffs between phases
✅ Presented all results clearly
### What You Didn't Do
❌ Generate any documents yourself
❌ Call MCP tools directly
❌ Read the PRD/Architecture for their content (you only extracted epic lists)
❌ Validate documents (subagents did this)
## Key Orchestration Principles
### 1. Follow the Slash Command Prompts
**Slash commands are your script**. They tell you exactly:
- Which subagent to launch
- What prompt to give them
- What MCP tools they should use
- What to validate
- What to return
**Don't improvise** - follow the script.
### 2. Prerequisites Are Your Responsibility
Before launching subagents, you check:
- Required files exist
- API keys are configured
- User has provided necessary input
- Previous phases completed successfully
If prerequisites fail, you error gracefully and guide user.
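A minimal sketch of such checks, using paths and keys from the workflows in this document:

```bash
# Required input from the previous phase
[ -f bmad-backlog/prd/prd.md ] || echo "❌ PRD missing - run /bmad:prd first"

# API key required by the tt MCP tools
grep -q "ANTHROPIC_API_KEY" ~/.env 2>/dev/null || echo "⚠️ Add ANTHROPIC_API_KEY to ~/.env"
```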
### 3. Delegation, Not Doing
**Your job**:
```
✅ Check prerequisites
✅ Launch subagent with detailed prompt
✅ Wait for subagent completion
✅ Present subagent's results
✅ Guide user to next steps
```
**Not your job**:
```
❌ Generate content yourself
❌ Call tools that subagents should call
❌ Duplicate work that subagents do
❌ Make decisions subagents should make
```
### 4. Subagents Are Autonomous
Once you launch a subagent:
- They have complete workflow instructions
- They make decisions within their domain
- They validate their own work
- They store their results
- They return a summary
You don't micromanage - you trust their expertise.
### 5. Quality Gates at Every Level
**Subagents run**:
- Structural validation (bmad_validator)
- Quality validation (vibe-check)
- Pieces storage (memory)
**You run**:
- Final meta-validation (overall workflow quality)
- Complete backlog storage
- Comprehensive summary
This ensures quality at both individual and system levels.
## Common Orchestration Patterns
### Pattern 1: Single Subagent (Simple)
```
/bmad:brief
├── YOU: Gather project idea
├── YOU: Launch @product-manager subagent
├── @product-manager: Generate, validate, store brief
└── YOU: Present summary
```
### Pattern 2: Sequential Subagents (Pipeline)
```
/bmad:start
├── YOU: Gather idea
├── @product-manager: Generate brief
├── YOU: Transition
├── @product-manager: Generate PRD
├── YOU: Detect research needs
├── @architect: Generate architecture
├── YOU: Extract epic list
├── @product-manager: Generate Epic 1
├── @product-manager: Generate Epic 2
├── @product-manager: Generate Epic 3
├── @product-manager: Generate index
└── YOU: Final summary
```
### Pattern 3: Parallel Subagents (Review)
```
/titanium:review
├── YOU: Check for changes
├── Launch in parallel (single message, multiple Task calls):
│ ├── @code-reviewer: Review code quality
│ ├── @security-scanner: Review security
│ └── @tdd-specialist: Review test coverage
├── YOU: Wait for all three to complete
├── YOU: Aggregate findings
└── YOU: Present consolidated report
```
### Pattern 4: Implementation Workflow (Complex)
```
/titanium:work
├── YOU: Check for plan, create if needed
├── YOU: Get user approval
├── YOU: For each task (sequential):
│ ├── YOU: Parse task info (epic, story, task, agent)
│ ├── YOU: Launch appropriate subagent with task details
│ ├── Subagent: Implement, test, validate
│ ├── YOU: Run quality check (vibe-check)
│ └── YOU: Mark task complete
├── YOU: Launch parallel review agents
├── YOU: Aggregate review findings
├── YOU: Optionally fix critical issues
└── YOU: Complete workflow, store in Pieces
```
## Agent-to-Skills Mapping
Each subagent has access to relevant skills:
**Planning Agents**:
- @product-manager: bmad-methodology, project-planning
- @project-planner: bmad-methodology, project-planning
- @architect: bmad-methodology, api-best-practices, devops-patterns
**Development Agents**:
- @api-developer: api-best-practices, testing-strategy, security-checklist
- @frontend-developer: frontend-patterns, testing-strategy, technical-writing
- @devops-engineer: devops-patterns, security-checklist
**Quality Agents**:
- @code-reviewer: code-quality-standards, security-checklist, testing-strategy
- @refactor: code-quality-standards, testing-strategy
- @tdd-specialist: testing-strategy, code-quality-standards
- @test-runner: testing-strategy, debugging-methodology
- @security-scanner: security-checklist, code-quality-standards
- @debugger: debugging-methodology, testing-strategy
**Documentation Agents**:
- @doc-writer: technical-writing, bmad-methodology
- @api-documenter: technical-writing, api-best-practices
**Specialized**:
- @shadcn-ui-builder: frontend-patterns, technical-writing
- @marketing-writer: technical-writing
- @meta-agent: (no skills - needs flexibility)
## MCP Tools: When Subagents Use Them
### tt Server Tools
**plan_parser**:
- Used by: Slash command `/titanium:plan`
- Called by: Orchestrator or planning subagent
- Purpose: Requirements → Implementation plan with tasks
**bmad_generator**:
- Used by: All BMAD slash commands
- Called by: @product-manager, @architect subagents
- Purpose: Generate comprehensive BMAD documents
**bmad_validator**:
- Used by: All BMAD slash commands
- Called by: @product-manager, @architect subagents
- Purpose: Validate document completeness
### Other MCP Servers
- vibe-check: Quality validation (used by orchestrator and subagents)
- Pieces: Memory storage (used by orchestrator and subagents)
- context7: Documentation lookup (used by subagents)
- ElevenLabs: Voice announcements (used by hooks, not agents)
## Best Practices for Orchestration
### 1. Trust the Slash Command
Don't second-guess the command prompts. They're carefully designed workflows.
### 2. Pass Complete Context to Subagents
When launching subagents, include in the Task prompt:
- What they're building
- Where input files are
- What output is expected
- Complete workflow steps
- Which MCP tools to use
- Which skills are relevant
- Success criteria
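For example, a complete Task prompt covering all of the above might look like this (paths and wording illustrative):

```
Task(
  description: "Generate BMAD PRD",
  prompt: "Create the PRD following BMAD methodology.
    Input: bmad-backlog/product-brief.md
    Output: bmad-backlog/prd/prd.md
    Steps: read the brief, generate with the bmad_generator MCP tool,
    validate with bmad_validator, run vibe-check, store in Pieces.
    Relevant skill: bmad-methodology.
    Success criteria: PRD saved and validation passes.
    Return a brief summary (under 500 tokens).",
  subagent_type: "product-manager"
)
```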
### 3. Don't Batch Results
Mark todos complete immediately after each task. Don't wait to batch updates.
### 4. Handle Errors Gracefully
If a subagent fails:
- Present error to user
- Offer options (retry, skip, modify)
- Guide user through resolution
- Don't proceed if critical task failed
### 5. Validate at Checkpoints
Subagents validate their own work, but you also:
- Run meta-validations (vibe-check) at phase transitions
- Verify prerequisites before launching next phase
- Confirm user approval at key points
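A meta-validation at a phase transition can reuse the same vibe_check shape the subagents use, for example:

```
mcp__vibe-check__vibe_check(
  goal: "Complete BMAD backlog for [project]",
  plan: "Brief → PRD → Architecture → Epics, each generated by subagents",
  progress: "PRD and Architecture approved, 3 of 5 epics generated",
  uncertainties: ["Do the epics stay consistent with the approved tech stack?"]
)
```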
### 6. Store Milestones in Pieces
After completing significant work:
- Store results in Pieces
- Include comprehensive summary
- List all files created
- Document key decisions
- Enable future context recovery
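A milestone store uses the same Pieces call the subagents use (summary text illustrative):

```
mcp__Pieces__create_pieces_memory(
  summary_description: "BMAD backlog complete for [project]",
  summary: "Generated brief, PRD, architecture, and all epics. Key decisions: [list]. Next step: /titanium:plan",
  files: ["bmad-backlog/prd/prd.md", "bmad-backlog/architecture/architecture.md"],
  project: "$(pwd)"
)
```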
## Common Mistakes to Avoid
### ❌ Doing Work Yourself
**Wrong**:
```
User: /bmad:prd
You:
- Read brief
- Generate PRD content manually
- Write to file
```
**Right**:
```
User: /bmad:prd
You:
- Check brief exists
- Launch @product-manager subagent
- @product-manager generates PRD
- Present product-manager's summary
```
### ❌ Calling MCP Tools Directly (When Subagent Should)
**Wrong**:
```
You call: mcp__plugin_titanium-toolkit_tt__bmad_generator(...)
```
**Right**:
```
You launch: Task(prompt: "... use bmad_generator MCP tool ...", subagent_type: "product-manager")
```
### ❌ Batching Task Completions
**Wrong**:
```
Complete tasks 1, 2, 3
Then update TodoWrite
```
**Right**:
```
Complete task 1
Update TodoWrite (mark task 1 complete)
Complete task 2
Update TodoWrite (mark task 2 complete)
```
### ❌ Proceeding Without User Approval
**Wrong**:
```
Generate plan
Immediately start implementation
```
**Right**:
```
Generate plan
Present plan to user
Ask: "Proceed with implementation?"
Wait for explicit "yes"
Then start implementation
```
### ❌ Ignoring vibe-check Concerns
**Wrong**:
```
vibe-check raises concerns
You: "Okay, continuing anyway..."
```
**Right**:
```
vibe-check raises concerns
You: "⚠️ vibe-check identified concerns: [list]
Would you like to address these or proceed anyway?"
Wait for user decision
```
## Workflow State Management
For complex workflows (`/titanium:work`), you manage state:
```bash
# Initialize workflow
uv run ${CLAUDE_PLUGIN_ROOT}/hooks/utils/workflow/workflow_state.py init "$(pwd)" "development" "Goal"
# Update phase
uv run ${CLAUDE_PLUGIN_ROOT}/hooks/utils/workflow/workflow_state.py update_phase "$(pwd)" "implementation" "in_progress"
# Complete workflow
uv run ${CLAUDE_PLUGIN_ROOT}/hooks/utils/workflow/workflow_state.py complete "$(pwd)"
```
This tracks:
- Current phase (planning, implementation, review, complete)
- Phase status (pending, in_progress, completed)
- Workflow goal
- Start/end timestamps
## Voice Announcements
Voice hooks automatically announce:
- Phase transitions
- Tool completions
- Subagent completions
- Session summaries
You don't call voice tools - hooks handle this automatically.
## Summary: Your Orchestration Checklist
When executing a slash command:
- [ ] Read and understand the complete slash command prompt
- [ ] Check all prerequisites (files, API keys, user input)
- [ ] Follow the command's delegation instructions exactly
- [ ] Launch subagents via Task tool with detailed prompts
- [ ] Wait for subagents to complete (don't do their work)
- [ ] Present subagent results to user
- [ ] Run meta-validations at checkpoints
- [ ] Handle errors gracefully with clear guidance
- [ ] Store milestones in Pieces
- [ ] Guide user to next steps
- [ ] Update todos immediately after each completion
## When to Deviate from This Model
**You CAN work directly** (without subagents) for:
- Simple user questions ("What does this code do?")
- Quick file reads or searches
- Answering questions about the project
- Running single bash commands
- Simple edits or bug fixes
**You MUST use subagents** for:
- BMAD document generation (Brief, PRD, Architecture, Epics)
- Implementation tasks in `/titanium:work`
- Code reviews in `/titanium:review`
- Any work assigned to specific agent types in plans
- Complex multi-step workflows
## Next Steps
Now that you understand the orchestration model:
1. **Execute slash commands faithfully** - They're your detailed scripts
2. **Delegate to specialized subagents** - Trust their expertise
3. **Use MCP tools via subagents** - Not directly
4. **Leverage skills** - Subagents have deep domain knowledge
5. **Coordinate, don't create** - You orchestrate, they perform
---
**Remember**: You are the conductor of a specialized team. Your job is to coordinate their expertise, not to replace it. Follow the slash command scripts, delegate effectively, and present results clearly.
**The Titanium Toolkit turns Claude Code into an AI development team with you as the orchestrator!**

commands/titanium-plan.md Normal file
@@ -0,0 +1,397 @@
---
description: Analyze requirements and create detailed implementation plan
---
# Titanium Plan Command
You are creating a structured implementation plan from requirements. Follow this systematic process to break down work into actionable tasks with agent assignments.
**MCP Tools Used**: This command uses the `tt` MCP server (Titanium Toolkit) which provides:
- `mcp__plugin_titanium-toolkit_tt__plan_parser` - Generates structured implementation plans from requirements
The `tt` server wraps Python utilities that use Claude AI to analyze requirements and create detailed project plans with task-to-agent assignments.
**Agent Assignment**: The plan_parser automatically assigns tasks to appropriate specialized agents based on task type (API work → @api-developer, UI work → @frontend-developer, etc.). These assignments are used by `/titanium:work` to delegate implementation.
## Process Overview
This command will:
1. Gather and validate requirements
2. Use Claude (via `plan_parser` MCP tool) to generate structured plan
3. Validate plan with vibe-check
4. Create human-readable documentation
5. Store plan in Pieces for future reference
## Step 1: Gather Requirements
**If user provides a file path:**
```bash
# User might say: /titanium:plan ~/bmad/output/user-auth-prd.md
```
- Use Read tool to read the file
- Extract requirements text
**If user provides inline description:**
```bash
# User might say: /titanium:plan
# Then describe: "I need to add JWT authentication with login, register, password reset"
```
- Write description to `.titanium/requirements.md` using Write tool
- Ask clarifying questions if needed:
- What tech stack? (Node.js, Python, Ruby, etc.)
- What database? (PostgreSQL, MongoDB, etc.)
- Any specific libraries or frameworks?
- Security requirements?
- Performance requirements?
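After the questions are answered, the captured `.titanium/requirements.md` might look like this (contents illustrative):

```markdown
# Requirements: JWT Authentication

Add JWT authentication with login, register, and password reset.

## Tech Stack
- Node.js backend, PostgreSQL, React frontend

## Constraints
- Hash passwords with argon2
- Rate limit all auth endpoints
```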
## Step 2: Generate Structured Plan
Use the `plan_parser` MCP tool to generate the plan:
```
mcp__plugin_titanium-toolkit_tt__plan_parser(
requirements_file: ".titanium/requirements.md",
project_path: "$(pwd)"
)
```
This will:
- Call Claude with the requirements
- Generate structured JSON plan with:
- Epics (major features)
- Stories (user-facing functionality)
- Tasks (implementation steps)
- Agent assignments
- Time estimates
- Task dependencies
- Save to `.titanium/plan.json`
- Return the JSON plan directly to Claude
**Important**: The plan_parser tool needs the ANTHROPIC_API_KEY environment variable. If it fails with an API key error, inform the user that they need to add the key to ~/.env
## Step 3: Review the Generated Plan
Read and analyze `.titanium/plan.json`:
```
# Read the plan
Read .titanium/plan.json
```
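The exact schema comes from plan_parser; a hypothetical sketch of the shape to expect (field names illustrative):

```json
{
  "goal": "Implement JWT authentication",
  "epics": [
    {
      "name": "Backend API",
      "stories": [
        {
          "name": "Login endpoint",
          "tasks": [
            { "name": "JWT middleware", "agent": "@api-developer", "estimate": "45m", "depends_on": [] }
          ]
        }
      ]
    }
  ]
}
```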
Check that the plan:
- Has reasonable epics (1-5 major features)
- Each epic has logical stories (1-5 per epic)
- Each story has actionable tasks (2-10 per story)
- Agent assignments are appropriate
- Time estimates seem realistic
- Dependencies make sense
**Common issues to watch for:**
- Tasks assigned to wrong agents (e.g., frontend work to @api-developer)
- Missing testing tasks
- Missing documentation tasks
- Unrealistic time estimates
- Circular dependencies
If the plan needs adjustments:
- Edit `.titanium/requirements.md` to add clarifications
- Re-run the `plan_parser` tool
- Review again
## Step 4: Validate Plan with vibe-check
Use vibe-check to validate the plan quality:
```
mcp__vibe-check__vibe_check(
goal: "User's stated goal from requirements",
plan: "Summary of the generated plan - list epics, key stories, agents involved, total time",
uncertainties: [
"List any concerns about complexity",
"Note any ambiguous requirements",
"Mention any technical risks"
]
)
```
**Example**:
```
mcp__vibe-check__vibe_check(
goal: "Implement JWT authentication system with login, register, and password reset",
plan: "2 epics: Backend API (JWT middleware, 3 endpoints, database) and Frontend UI (login/register forms, password reset flow). Agents: @product-manager, @api-developer, @frontend-developer, @test-runner, @security-scanner. Total: 4 hours",
uncertainties: [
"Should we use refresh tokens or just access tokens?",
"Password hashing algorithm not specified - suggest argon2",
"Rate limiting strategy needs clarification"
]
)
```
**Handle vibe-check response:**
- If vibe-check raises **concerns**:
- Review the concerns carefully
- Update requirements or plan approach
- Re-run the `plan_parser` tool with adjustments
- Validate again with vibe-check
- If vibe-check **approves**:
- Continue to next step
## Step 5: Create Human-Readable Plan
Write a markdown version of the plan to `.titanium/plan.md`:
```markdown
# Implementation Plan: [Project Goal]
**Created**: [Date]
**Estimated Time**: [Total time from plan.json]
## Goal
[User's goal statement]
## Tech Stack
[List technologies mentioned in requirements]
## Epics
### Epic 1: [Epic Name]
**Description**: [Epic description]
**Estimated Time**: [Sum of all story times]
#### Story 1.1: [Story Name]
**Description**: [Story description]
**Tasks**:
1. [Task 1 name] - [@agent-name] - [time estimate]
2. [Task 2 name] - [@agent-name] - [time estimate]
#### Story 1.2: [Story Name]
**Description**: [Story description]
**Tasks**:
1. [Task 1 name] - [@agent-name] - [time estimate]
2. [Task 2 name] - [@agent-name] - [time estimate]
### Epic 2: [Epic Name]
[... repeat structure ...]
## Agents Involved
- **@product-manager**: Requirements validation
- **@api-developer**: Backend implementation
- **@frontend-developer**: UI development
- **@test-runner**: Testing
- **@doc-writer**: Documentation
## Dependencies
[List any major dependencies between epics/stories]
## Next Steps
Ready to execute? Run: `/titanium:work`
```
## Step 6: Store Plan in Pieces
Store the plan in Pieces LTM for future reference:
```
mcp__Pieces__create_pieces_memory(
summary_description: "Implementation plan for [project name/goal]",
summary: "Plan created with [X] epics, [Y] stories, [Z] tasks. Agents: [list agents]. Estimated time: [total time]. Key features: [brief list of main epics]. vibe-check validation: [summary of validation results]",
files: [
".titanium/plan.json",
".titanium/plan.md",
".titanium/requirements.md"
],
project: "$(pwd)"
)
```
**Example**:
```
mcp__Pieces__create_pieces_memory(
summary_description: "Implementation plan for JWT authentication system",
summary: "Plan created with 2 epics, 5 stories, 12 tasks. Agents: @product-manager, @api-developer, @frontend-developer, @test-runner, @security-scanner. Estimated time: 4 hours. Key features: JWT middleware with refresh tokens, login/register/reset endpoints, frontend auth forms, comprehensive testing. vibe-check validation: Plan structure is sound, recommended argon2 for password hashing, suggested rate limiting on auth endpoints.",
files: [
".titanium/plan.json",
".titanium/plan.md",
".titanium/requirements.md"
],
project: "/Users/username/projects/my-app"
)
```
## Step 7: Present Plan to User
Format the output in a clear, organized way:
```
📋 Implementation Plan Created
🎯 Goal: [User's goal]
📦 Structure:
- [X] epics
- [Y] stories
- [Z] implementation tasks
⏱️ Estimated Time: [total time]
🤖 Agents Involved:
- @agent-name (role description)
- @agent-name (role description)
- [... list all agents ...]
📁 Plan saved to:
- .titanium/plan.json (structured data)
- .titanium/plan.md (readable format)
✅ vibe-check validated: [Brief summary of validation results]
📝 Key Epics:
1. [Epic 1 name] - [time estimate]
2. [Epic 2 name] - [time estimate]
[... list all epics ...]
---
Ready to execute this plan?
Run: /titanium:work
This will orchestrate the implementation using the plan,
with voice announcements and quality gates throughout.
```
## Important Guidelines
**Always:**
- ✅ Use the `plan_parser` MCP tool (don't try to generate plans manually)
- ✅ Validate with vibe-check before finalizing
- ✅ Store the plan in Pieces
- ✅ Create both JSON (for machines) and Markdown (for humans)
- ✅ Get user approval before they proceed to /titanium:work
- ✅ Be specific about agent roles in the summary
**Never:**
- ❌ Skip vibe-check validation
- ❌ Generate plans without using the `plan_parser` tool
- ❌ Proceed to implementation without user approval
- ❌ Ignore vibe-check concerns
- ❌ Create plans without clear task assignments
## Error Handling
**If ANTHROPIC_API_KEY is missing:**
```
Error: The plan_parser tool needs an Anthropic API key to generate plans.
Please add your API key to ~/.env:
echo 'ANTHROPIC_API_KEY=sk-ant-your-key-here' >> ~/.env
chmod 600 ~/.env
Then restart Claude Code and try again.
```
**If vibe-check is not available:**
```
Warning: vibe-check MCP is not available. Proceeding without quality validation.
Consider setting up vibe-check for AI-powered quality gates:
1. Create ~/.vibe-check/.env
2. Add at least one API key (GEMINI_API_KEY, OPENAI_API_KEY, or OPENROUTER_API_KEY)
3. Restart Claude Code
```
**If requirements are unclear:**
```
The requirements need more detail. Let me ask some questions:
1. [Specific question about tech stack]
2. [Specific question about features]
3. [Specific question about constraints]
Once I have this information, I'll create a detailed plan.
```
**If plan generation fails:**
```
The plan generation failed. This could be due to:
- Invalid requirements format
- API rate limits
- Network issues
Let me try again with simplified requirements.
[Retry with more basic prompt]
```
## Example Usage
**Example 1: With BMAD PRD file**
```
User: /titanium:plan ~/bmad/output/user-auth-prd.md
Claude:
- Reads PRD file
- Calls plan_parser tool
- Reviews generated plan
- Validates with vibe-check
- Creates plan.md
- Stores in Pieces
- Presents formatted summary to user
```
**Example 2: With inline description**
```
User: /titanium:plan
Claude: "What would you like to implement?"
User: "Add a search feature to filter products by name and category"
Claude:
- Writes description to .titanium/requirements.md
- Asks: "What's your tech stack?"
User: "React frontend, Node.js backend with PostgreSQL"
- Calls plan_parser tool with complete requirements
- Generates plan with appropriate tech stack
- Validates with vibe-check
- Presents plan to user
```
**Example 3: Complex feature with multiple clarifications**
```
User: /titanium:plan "Build a payment processing system"
Claude: "I need more details:
1. Which payment provider? (Stripe, PayPal, etc.)
2. What types of payments? (One-time, subscriptions, both?)
3. Currency support? (Single, multiple?)
4. Security requirements? (PCI compliance needed?)
5. Refund handling?"
User: "Stripe, one-time payments only, USD, yes PCI compliance, yes refunds"
Claude:
- Writes complete requirements
- Generates comprehensive plan
- vibe-check raises concern about PCI compliance complexity
- Adjusts plan to include @security-scanner more heavily
- Re-validates with vibe-check
- Presents approved plan
```
## Voice Feedback
The voice hooks will automatically announce:
- "Starting plan generation" (when script is called)
- "Plan created with [X] epics" (when complete)
- "vibe-check validation complete" (after validation)
No additional voice calls needed - the hooks handle this automatically.
## Next Command
After creating the plan, the user should run:
```
/titanium:work
```
This will execute the plan with orchestrated agent coordination.

commands/titanium-review.md Normal file
@@ -0,0 +1,692 @@
---
description: Run comprehensive multi-agent quality review
---
# Titanium Review Command
You are coordinating a comprehensive quality review of the codebase. This command launches multiple specialized review agents in parallel, aggregates their findings, and creates a detailed review report.
**Orchestration Model**: You launch 3 review agents simultaneously in separate context windows. Each agent has specialized skills and reviews from their domain expertise. They run in parallel for efficiency.
**Review Agents & Their Skills**:
- @code-reviewer: code-quality-standards, security-checklist, testing-strategy
- @security-scanner: security-checklist, code-quality-standards
- @tdd-specialist: testing-strategy, code-quality-standards
**Why Parallel**: Review agents are independent - they don't need each other's results. Running them in parallel saves roughly 60-70% of the time compared to sequential reviews.
## Overview
This review process:
1. Identifies what code to review
2. Launches 3 review agents in parallel (single message, multiple Task calls)
3. Aggregates and categorizes findings from all agents
4. Uses vibe-check for meta-review
5. Creates comprehensive review report
6. Stores findings in Pieces LTM
7. Presents actionable summary with severity-based recommendations
---
## Step 1: Identify Review Scope
### Determine What to Review
**Option A: Recent Changes** (default)
```bash
git diff --name-only HEAD~1
```
Reviews files changed in last commit.
**Option B: Current Branch Changes**
```bash
git diff --name-only main...HEAD
```
Reviews all changes in current branch vs main.
**Option C: Specific Files** (if user specified)
```bash
# User might say: /titanium:review src/api/*.ts
```
Use the files/pattern user specified.
**Option D: All Code** (if user requested)
```bash
# Find all source files
find . -type f \( -name "*.ts" -o -name "*.js" -o -name "*.py" -o -name "*.rb" \) -not -path "*/node_modules/*" -not -path "*/venv/*"
```
### Build File List
Create list of files to review. Store in memory for agent prompts.
**Example**:
```
Files to review:
- src/api/auth.ts
- src/middleware/jwt.ts
- src/routes/users.ts
- tests/api/auth.test.ts
```
---
## Step 2: Launch Review Agents in Parallel
**CRITICAL**: Launch all three agents in a **SINGLE message** with multiple Task calls.
This enables parallel execution for faster reviews.
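Structurally, that single message holds three Task calls side by side (the `subagent_type` values are assumed to mirror the agent names):

```
Task(description: "Code quality review", prompt: "[see Agent 1 below]", subagent_type: "code-reviewer")
Task(description: "Security scan", prompt: "[see Agent 2 below]", subagent_type: "security-scanner")
Task(description: "Test coverage review", prompt: "[see Agent 3 below]", subagent_type: "tdd-specialist")
```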
### Agent 1: Code Reviewer
```
[Task 1]: @code-reviewer
Prompt: "Review all code changes for quality, readability, and best practices.
Focus on:
- Code quality and maintainability
- DRY principles
- SOLID principles
- Error handling
- Code organization
- Comments and documentation
Files to review: [list all modified files]
Provide findings categorized by severity:
- Critical: Must fix before deployment
- Important: Should fix soon
- Nice-to-have: Optional improvements
For each finding, specify:
- File and line number
- Issue description
- Recommendation"
```
### Agent 2: Security Scanner
```
[Task 2]: @security-scanner
Prompt: "Scan for security vulnerabilities and security best practices.
Focus on:
- Input validation
- SQL injection risks
- XSS vulnerabilities
- Authentication/authorization issues
- Secrets in code
- Dependency vulnerabilities
- HTTPS enforcement
- Rate limiting
Files to review: [list all modified files]
Provide findings with:
- Severity (Critical/High/Medium/Low)
- Vulnerability type
- File and line number
- Risk description
- Remediation steps
Severity mapping for aggregation:
- Critical → Critical (must fix)
- High → Important (should fix)
- Medium → Nice-to-have (optional)
- Low → Nice-to-have (optional)"
```
### Agent 3: Test Coverage Specialist
```
[Task 3]: @tdd-specialist
Prompt: "Check test coverage and test quality.
Focus on:
- Test coverage percentage
- Edge cases covered
- Integration tests
- Unit tests
- E2E tests (if applicable)
- Test quality and assertions
- Mock usage
- Test organization
Files to review: [list all test files and source files]
Provide findings on:
- Coverage gaps
- Missing test cases
- Test quality issues
- Recommendations for improvement"
```
---
## Step 3: Wait for All Agents
All three agents will run in parallel. Wait for all to complete before proceeding.
Voice hooks will announce: "Review agents completed"
---
## Step 4: Aggregate Findings
### Collect All Findings
Gather results from all three agents:
- Code quality findings from @code-reviewer
- Security findings from @security-scanner
- Test coverage findings from @tdd-specialist
### Categorize by Severity
**🔴 Critical Issues** (must fix before deployment):
- Security vulnerabilities (Critical/High)
- Code that will cause bugs or crashes
- Core functionality with no tests
**🟡 Important Issues** (should fix soon):
- Security issues (Medium)
- Code quality problems that impact maintainability
- Important features with incomplete tests
- Performance issues
**🟢 Nice-to-have** (optional improvements):
- Code style improvements
- Refactoring opportunities
- Additional test coverage
- Documentation gaps
### Count Issues
```
Total findings:
- Critical: [X]
- Important: [Y]
- Nice-to-have: [Z]
By source:
- Code quality: [N] findings
- Security: [M] findings
- Test coverage: [P] findings
```
---
## Step 5: Meta-Review with vibe-check
Use vibe-check to provide AI oversight of the review:
```
mcp__vibe-check__vibe_check(
goal: "Quality review of codebase changes",
plan: "Ran parallel review: @code-reviewer, @security-scanner, @tdd-specialist",
progress: "Review complete. Findings: [X] critical, [Y] important, [Z] minor.
Critical issues found:
[List each critical issue briefly]
Important issues found:
[List each important issue briefly]
Test coverage: approximately [X]%",
uncertainties: [
"Are there systemic quality issues we're missing?",
"Is the security approach sound?",
"Are we testing the right things?",
"Any architectural concerns?"
]
)
```
**Process vibe-check response**:
- If vibe-check identifies systemic issues → Include in recommendations
- If vibe-check suggests additional areas to review → Note in report
- Include vibe-check insights in final report
---
## Step 6: Create Review Report
Write comprehensive report to `.titanium/review-report.md`:
```markdown
# Quality Review Report
**Date**: [current date and time]
**Project**: [project name or goal if known]
**Reviewers**: @code-reviewer, @security-scanner, @tdd-specialist
## Executive Summary
- 🔴 Critical issues: [X]
- 🟡 Important issues: [Y]
- 🟢 Nice-to-have: [Z]
- 📊 Test coverage: ~[X]%
**Overall Assessment**: [Brief 1-2 sentence assessment]
---
## Critical Issues 🔴
### 1. [Issue Title]
**Category**: [Code Quality | Security | Testing]
**File**: `path/to/file.ext:line`
**Severity**: Critical
**Issue**:
[Clear description of what's wrong]
**Risk/Impact**:
[Why this is critical]
**Recommendation**:
```[language]
// Show example fix if applicable
[code example]
```
**Steps to Fix**:
1. [Step 1]
2. [Step 2]
3. [Step 3]
---
### 2. [Next Critical Issue]
[... repeat structure ...]
---
## Important Issues 🟡
### 1. [Issue Title]
**Category**: [Code Quality | Security | Testing]
**File**: `path/to/file.ext:line`
**Severity**: Important
**Issue**:
[Description]
**Impact**:
[Why this matters]
**Recommendation**:
[How to address it]
---
### 2. [Next Important Issue]
[... repeat structure ...]
---
## Nice-to-have Improvements 🟢
### Code Quality
- [Improvement 1 with file reference]
- [Improvement 2 with file reference]
### Testing
- [Test improvement 1]
- [Test improvement 2]
### Documentation
- [Doc improvement 1]
- [Doc improvement 2]
---
## Test Coverage Analysis
**Overall Coverage**: ~[X]%
**Files with Insufficient Coverage** (<80%):
- `file1.ts` - ~[X]% coverage
- `file2.ts` - ~[Y]% coverage
**Untested Critical Functions**:
- `functionName()` in file.ts:line
- `anotherFunction()` in file.ts:line
**Missing Test Categories**:
- [ ] Error condition tests
- [ ] Edge case tests
- [ ] Integration tests
- [ ] E2E tests for critical flows
**Recommendations**:
1. [Priority test to add]
2. [Second priority test]
3. [Third priority test]
---
## Security Analysis
**Vulnerabilities Found**: [X]
**Security Best Practices Violations**: [Y]
**Key Security Concerns**:
1. [Concern 1]
2. [Concern 2]
**Security Recommendations**:
1. [Priority 1 security fix]
2. [Priority 2 security fix]
---
## vibe-check Meta-Review
[Paste vibe-check assessment here]
**Systemic Issues Identified**:
[Any patterns or systemic problems vibe-check identified]
**Additional Recommendations**:
[Any suggestions from vibe-check that weren't captured by agents]
---
## Recommendations Priority List
### Must Do (Critical):
1. [Critical fix 1] - File: `path/to/file.ext:line`
2. [Critical fix 2] - File: `path/to/file.ext:line`
### Should Do (Important):
1. [Important fix 1] - File: `path/to/file.ext:line`
2. [Important fix 2] - File: `path/to/file.ext:line`
3. [Important fix 3] - File: `path/to/file.ext:line`
### Nice to Do (Optional):
1. [Optional improvement 1]
2. [Optional improvement 2]
---
## Files Reviewed
Total files: [X]
**Source Files** ([N] files):
- path/to/file1.ext
- path/to/file2.ext
**Test Files** ([M] files):
- path/to/test1.test.ext
- path/to/test2.test.ext
---
## Next Steps
1. Address all critical issues immediately
2. Plan to fix important issues in next sprint
3. Consider nice-to-have improvements for tech debt backlog
4. Re-run review after fixes: `/titanium:review`
```
---
## Step 7: Store Review in Pieces
```
mcp__Pieces__create_pieces_memory(
summary_description: "Quality review findings for [project/files]",
summary: "Comprehensive quality review completed by @code-reviewer, @security-scanner, @tdd-specialist.
Findings:
- Critical issues: [X] - [briefly list each critical issue]
- Important issues: [Y] - [briefly describe categories]
- Nice-to-have: [Z]
Test coverage: approximately [X]%
Security assessment: [summary - no vulnerabilities / minor issues / concerns found]
Code quality assessment: [summary - excellent / good / needs improvement]
vibe-check meta-review: [brief summary of vibe-check insights]
Key recommendations:
1. [Top priority recommendation]
2. [Second priority]
3. [Third priority]
All findings documented in .titanium/review-report.md with file:line references and fix recommendations.",
files: [
".titanium/review-report.md",
"list all reviewed source files",
"list all test files"
],
project: "$(pwd)"
)
```
---
## Step 8: Present Summary to User
```
🔍 Quality Review Complete
📊 Summary:
- 🔴 [X] Critical Issues
- 🟡 [Y] Important Issues
- 🟢 [Z] Nice-to-have Improvements
- 📈 Test Coverage: ~[X]%
📄 Full Report: .titanium/review-report.md
---
⚠️ Critical Issues (must fix):
1. [Issue 1 title]
File: `path/to/file.ext:line`
[Brief description]
2. [Issue 2 title]
File: `path/to/file.ext:line`
[Brief description]
[... list all critical issues ...]
---
💡 Top Recommendations:
1. [Priority 1 action item]
2. [Priority 2 action item]
3. [Priority 3 action item]
---
🤖 vibe-check Assessment:
[Brief quote or summary from vibe-check]
---
Would you like me to:
1. Fix the critical issues now
2. Create GitHub issues for these findings
3. Provide more details on any specific issue
4. Skip and continue (not recommended if critical issues exist)
```
### Handle User Response
**If user wants fixes**:
- Address critical issues one by one
- After each fix, run relevant tests
- Re-run review to verify fixes
- Update review report
**If user wants GitHub issues**:
- Create issues for each critical and important finding
- Include all details from review report
- Provide issue URLs
**If user wants more details**:
- Read specific sections of review report
- Explain the issue and fix in more detail
**If user says continue**:
- Acknowledge and complete
- Remind that issues are documented in review report
---
## Error Handling
### If No Files to Review
```
⚠️ No files found to review.
This could mean:
- No changes since last commit
- Working directory is clean
- Specified files don't exist
Would you like to:
1. Review all source files
2. Specify which files to review
3. Cancel review
```
### If Review Agents Fail
```
❌ Review failed
Agent @[agent-name] encountered an error: [error]
Continuing with other review agents...
[Proceed with available results]
```
### If vibe-check Not Available
```
Note: vibe-check MCP is not available. Proceeding without meta-review.
To enable AI-powered meta-review:
1. Create ~/.vibe-check/.env
2. Add API key (GEMINI_API_KEY, OPENAI_API_KEY, or OPENROUTER_API_KEY)
3. Restart Claude Code
```
---
## Integration with Workflow
**After /titanium:work**:
```
User: /titanium:work
[... implementation completes ...]
User: /titanium:review
[... review runs ...]
```
**Standalone Usage**:
```
User: /titanium:review
# Reviews recent changes
```
**With File Specification**:
```
User: /titanium:review src/api/*.ts
# Reviews only specified files
```
**Before Committing**:
```
User: I'm about to commit. Can you review my changes?
Claude: /titanium:review
[... review runs on uncommitted changes ...]
```
---
## Voice Feedback
Voice hooks automatically announce:
- "Starting quality review" (at start)
- "Review agents completed" (after parallel execution)
- "Review complete: [X] issues found" (at end)
No additional voice calls needed.
---
## Example Outputs
### Example 1: No Issues Found
```
🔍 Quality Review Complete
📊 Summary:
- 🔴 0 Critical Issues
- 🟡 0 Important Issues
- 🟢 3 Nice-to-have Improvements
- 📈 Test Coverage: ~92%
✅ No critical or important issues found!
💡 Optional Improvements:
1. Consider extracting duplicated validation logic in auth.ts and users.ts
2. Add JSDoc comments to public API methods
3. Increase test coverage for edge cases in payment module
Code quality: Excellent
Security: No vulnerabilities found
Testing: Comprehensive coverage
📄 Full details: .titanium/review-report.md
```
### Example 2: Critical Issues Found
```
🔍 Quality Review Complete
📊 Summary:
- 🔴 2 Critical Issues
- 🟡 5 Important Issues
- 🟢 12 Nice-to-have Improvements
- 📈 Test Coverage: ~65%
⚠️ CRITICAL ISSUES (must fix):
1. SQL Injection Vulnerability
File: `src/api/users.ts:45`
User input concatenated directly into SQL query
Risk: Attacker could read/modify database
2. Missing Authentication Check
File: `src/api/admin.ts:23`
Admin endpoint has no auth middleware
Risk: Unauthorized access to admin functions
💡 MUST DO:
1. Use parameterized queries for all SQL
2. Add authentication middleware to admin routes
3. Add tests for authentication flows
Would you like me to fix these critical issues now?
```
---
**This command provides comprehensive multi-agent quality review with actionable findings and clear priorities.**

commands/titanium-status.md Normal file
@@ -0,0 +1,555 @@
---
description: Show current workflow progress and status
---
# Titanium Status Command
You are reporting on the current workflow state and progress. This command provides a comprehensive view of where the project stands, what's been completed, and what's remaining.
## Overview
This command will:
1. Check for active workflow state
2. Query Pieces for recent work
3. Analyze TodoWrite progress (if available)
4. Check for existing plan
5. Calculate progress metrics
6. Present formatted status report
7. Optionally provide voice summary
---
## Step 1: Check for Active Workflow
### Check Workflow State File
```bash
uv run ${CLAUDE_PLUGIN_ROOT}/hooks/utils/workflow/workflow_state.py get "$(pwd)"
```
**If workflow exists**:
- Parse the JSON response
- Extract:
- workflow_type
- goal
- status (planning/in_progress/completed/failed)
- current_phase
- started_at timestamp
- phases history
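A hypothetical example of the parsed state (field names from the list above; values illustrative):

```json
{
  "workflow_type": "development",
  "goal": "Implement JWT authentication",
  "status": "in_progress",
  "current_phase": "implementation",
  "started_at": "2025-11-30T09:00:00Z",
  "phases": [
    { "phase": "planning", "status": "completed" },
    { "phase": "implementation", "status": "in_progress" }
  ]
}
```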
**If no workflow exists**:
- Report: "No active workflow found in this project"
- Check for plan anyway (might be planning only)
- Query Pieces for any previous work
---
## Step 2: Query Pieces for Context
Use Pieces LTM to get recent work history:
```
mcp__Pieces__ask_pieces_ltm(
question: "What work has been done in the last session on this project at [current directory]? What was being worked on? What was completed? What was left unfinished?",
chat_llm: "claude-sonnet-4-5",
topics: ["workflow", "implementation", "development"],
application_sources: ["Code"]
)
```
**Extract from Pieces**:
- Recent activities
- What was completed
- What's in progress
- Any issues encountered
- Last known state
---
## Step 3: Check for Plan
```bash
# Check if plan exists
ls .titanium/plan.json
```
**If plan exists**:
- Read `.titanium/plan.json`
- Extract:
- Total epics count
- Total stories count
- Total tasks count
- Estimated total time
- List of agents needed
**Calculate progress** (if TodoWrite is available):
- Count completed tasks vs total tasks
- Calculate percentage complete
- Identify current task (first pending task)
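If you need the counts programmatically, a jq sketch (field names hypothetical - adjust to the actual plan.json schema):

```bash
jq '{epics: (.epics | length), tasks: ([.epics[].stories[].tasks[]] | length)}' .titanium/plan.json
```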
---
## Step 4: Analyze TodoWrite Progress (if in active session)
**Note**: TodoWrite state is session-specific. This step only works if we're in the same session that created the workflow.
If TodoWrite is available in current session:
- Count total tasks
- Count completed tasks
- Count pending tasks
- Identify current task (first in_progress task)
- Calculate progress percentage
If TodoWrite not available:
- Use plan.json task count as reference
- Note: "Progress tracking available only during active session"
---
## Step 5: Calculate Metrics
### Progress Metrics
**Overall Progress**:
```
progress_percentage = (completed_tasks / total_tasks) * 100
```
**Time Metrics**:
```
elapsed_time = current_time - workflow.started_at
remaining_tasks = total_tasks - completed_tasks
avg_time_per_task = elapsed_time / completed_tasks (only if completed_tasks > 0)
estimated_remaining = avg_time_per_task * remaining_tasks
```
**Phase Progress**:
- Identify which phase is active
- List completed phases with timestamps
- Show phase transition history
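Worked numerically, assuming 8 of 12 tasks done after 90 minutes:

```bash
completed=8; total=12; elapsed_min=90
avg=$(( elapsed_min / completed ))                            # ~11 min per task
echo "Progress: $(( completed * 100 / total ))%"              # 66%
echo "Est. remaining: $(( avg * (total - completed) )) min"   # ~44 min
```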
---
## Step 6: Present Status Report
### Format: Comprehensive Status
```
📊 Titanium Workflow Status
═══════════════════════════════════════════════
🎯 Goal: [workflow.goal]
📍 Current Phase: [current_phase]
Status: [status emoji] [status]
⏱️ Timeline:
Started: [formatted timestamp] ([X] hours/days ago)
[If completed: Completed: [timestamp]]
[If in progress: Elapsed: [duration]]
───────────────────────────────────────────────
📈 Progress: [X]% Complete
✅ Completed: [X] tasks
⏳ Pending: [Y] tasks
🔄 Current: [current task name if known]
───────────────────────────────────────────────
📦 Project Structure:
Epics: [X]
Stories: [Y]
Tasks: [Z]
Total Estimated Time: [time]
🤖 Agents Used/Planned:
[List agents with their roles]
───────────────────────────────────────────────
📝 Recent Work (from Pieces):
[Summary from Pieces query - what was done recently]
Key Accomplishments:
- [Item 1]
- [Item 2]
- [Item 3]
Current Focus:
[What's being worked on now or what's next]
───────────────────────────────────────────────
🔄 Phase History:
1. ✅ Planning - Completed ([duration])
2. 🔄 Implementation - In Progress (started [time ago])
3. ⏳ Review - Pending
4. ⏳ Completion - Pending
───────────────────────────────────────────────
⏰ Time Estimates:
Elapsed: [duration]
Est. Remaining: [duration] (based on current pace)
Original Estimate: [total from plan]
[If ahead/behind schedule: [emoji] [X]% [ahead/behind] schedule]
───────────────────────────────────────────────
📁 Key Files:
Created/Modified:
[List from Pieces or plan if available]
Configuration:
- Plan: .titanium/plan.json
- State: .titanium/workflow-state.json
[If exists: - Review: .titanium/review-report.md]
───────────────────────────────────────────────
💡 Next Steps:
[Based on current state, suggest what should happen next]
1. [Next action item]
2. [Second action item]
3. [Third action item]
───────────────────────────────────────────────
🔊 Voice Summary Available
Say "yes" for voice summary of current status
```
---
## Step 7: Status Variations by Phase
### If Phase: Planning
```
📊 Status: Planning Phase
🎯 Goal: [goal]
Current Activity:
- Analyzing requirements
- Generating implementation plan
- Validating with vibe-check
Next: Implementation phase will begin after plan approval
```
### If Phase: Implementation
```
📊 Status: Implementation In Progress
🎯 Goal: [goal]
Progress: [X]% ([completed]/[total] tasks)
Current Task: [task name]
Agent: [current agent]
Recently Completed:
- [Task 1] by @agent-1
- [Task 2] by @agent-2
- [Task 3] by @agent-3
Up Next:
- [Next task 1]
- [Next task 2]
Estimated Remaining: [time]
```
### If Phase: Review
```
📊 Status: Quality Review Phase
🎯 Goal: [goal]
Implementation: ✅ Complete ([X] tasks finished)
Current Activity:
- Running quality review
- @code-reviewer analyzing code
- @security-scanner checking vulnerabilities
- @tdd-specialist reviewing tests
Next: Address review findings, then complete workflow
```
### If Phase: Completed
```
📊 Status: Workflow Complete ✅
🎯 Goal: [goal]
Completion Summary:
- Started: [timestamp]
- Completed: [timestamp]
- Duration: [total time]
Deliverables:
- [X] epics completed
- [Y] stories delivered
- [Z] tasks finished
Final Metrics:
- Test Coverage: [X]%
- Quality Review: [findings summary]
- All work stored in Pieces ✅
Next: Run /catchup in future sessions to resume context
```
### If No Active Workflow
```
📊 Status: No Active Workflow
Current Directory: [pwd]
No .titanium/workflow-state.json found
Checking for plan...
[If plan exists: Plan found but not yet executed]
[If no plan: No plan found]
Checking Pieces for history...
[Results from Pieces query]
---
Ready to start a new workflow?
Run:
- /titanium:plan [requirements] - Create implementation plan
- /titanium:work [requirements] - Start full workflow
```
---
## Step 8: Voice Summary (Optional)
**If user requests voice summary or says "yes" to voice option**:
Create concise summary for TTS (under 100 words):
```
"Workflow status: [Phase], [X] percent complete.
[Completed count] tasks finished, [pending count] remaining.
Currently working on [current task or phase activity].
[Key recent accomplishment].
Estimated [time] remaining.
[Next major milestone or action]."
```
**Example**:
```
"Workflow status: Implementation phase, sixty-seven percent complete.
Eight tasks finished, four remaining.
Currently implementing the login form component with the frontend developer agent.
Just completed the backend authentication API with all tests passing.
Estimated one hour remaining.
Next, we'll run the quality review phase."
```
**Announce using existing TTS**:
- Voice hooks will handle the announcement
- No need to call TTS directly
---
## Integration with Workflow Commands
### After /titanium:plan
```
User: /titanium:plan [requirements]
[... plan created ...]
User: /titanium:status
Shows:
- Phase: Planning (completed)
- Plan details
- Ready for /titanium:work
```
### During /titanium:work
```
User: /titanium:work
[... implementation in progress ...]
User: /titanium:status
Shows:
- Phase: Implementation (in progress)
- Progress: X%
- Current task
- Time estimates
```
### After /titanium:work
```
User: /titanium:work
[... completes ...]
User: /titanium:status
Shows:
- Phase: Completed
- Summary of deliverables
- Quality metrics
```
### Next Session
```
User: (new session)
/titanium:status
Shows:
- Workflow state from file
- Pieces context from previous session
- Can resume work
```
---
## Error Handling
### If Workflow State File Corrupted
```
⚠️ Workflow state file exists but is corrupted
Attempting to recover from Pieces...
[Query Pieces for last known state]
[If recovery successful: Use Pieces data]
[If recovery fails: Report what's known, suggest starting fresh]
```
### If Pieces Not Available
```
Note: Pieces MCP not available. Status based on local files only.
To enable full context recovery:
1. Install Pieces OS (https://pieces.app)
2. Install Pieces CLI (pip3 install pieces-cli)
3. Configure MCP (pieces mcp setup claude_code --stdio)
4. Restart Claude Code
```
### If No Information Available
```
📊 Status: No Data Available
Current Directory: [pwd]
- No workflow state file (.titanium/workflow-state.json)
- No plan file (.titanium/plan.json)
- Pieces query returned no results
This appears to be a new project or no workflows have been run yet.
Ready to start? Run:
- /titanium:plan - Create implementation plan
- /titanium:work - Start full workflow
```
---
## Use Cases
### Use Case 1: Check Progress Mid-Workflow
```
User: How's the workflow going?
Claude: Let me check the status.
[Runs /titanium:status]
Claude: We're 60% through implementation. 6 of 10 tasks complete.
Currently working on the frontend login form.
About 2 hours remaining.
```
### Use Case 2: Resume After Interruption
```
User: (new session) What was I working on?
Claude: Let me check.
[Runs /titanium:status]
Claude: You were implementing user authentication.
Phase: Implementation (in progress)
Last completed: Backend API by @api-developer
Next up: Frontend forms by @frontend-developer
Would you like to continue with /titanium:work?
```
### Use Case 3: Check Before Review
```
User: Is implementation done? Can we review?
Claude: Let me check status.
[Runs /titanium:status]
Claude: Yes! Implementation is 100% complete.
All 12 tasks finished.
Ready for quality review.
Run /titanium:review to start the review process.
```
### Use Case 4: Daily Standup Info
```
User: What did we accomplish yesterday?
Claude: Checking status and history...
[Runs /titanium:status + Pieces query]
Claude: Yesterday's work:
- Completed Backend API epic (3 stories, 7 tasks)
- @api-developer implemented JWT middleware
- @api-developer created login/register endpoints
- @test-runner wrote integration tests
- All tests passing
Today: Moving to Frontend epic
```
---
## Voice Feedback
Voice hooks may announce:
- "Status check complete" (after generating report)
- "[X] percent complete" (if voice summary requested)
---
## Advanced Features (Future)
Potential enhancements:
- Progress visualization (ASCII charts)
- Time series data (velocity over time)
- Agent performance metrics
- Quality trend tracking
- Burndown charts
---
**This command provides comprehensive workflow status with context from state files, Pieces LTM, and current session, enabling users to track progress and make informed decisions about next steps.**

commands/titanium-work.md Normal file
File diff suppressed because it is too large