Initial commit
Author: Zhongwei Li
Date: 2025-11-30 08:51:34 +08:00
Commit: acde81dcfe
59 changed files with 22282 additions and 0 deletions

# PRISM Best Practices
This document consolidates best practices for applying the PRISM methodology to effective AI-driven development.
## Core PRISM Principles
### The PRISM Framework
**P - Predictability**
- Structured processes with measurement
- Quality gates at each step
- PSP (Personal Software Process) tracking
- Clear acceptance criteria
**R - Resilience**
- Test-driven development (TDD)
- Graceful error handling
- Defensive programming
- Comprehensive test coverage
**I - Intentionality**
- Clear, purposeful code
- SOLID principles
- Clean Code practices
- Explicit over implicit
**S - Sustainability**
- Maintainable code
- Documentation that doesn't go stale
- Continuous improvement
- Technical debt management
**M - Maintainability**
- Domain-driven design where applicable
- Clear boundaries and interfaces
- Expressive naming
- Minimal coupling, high cohesion
## Guiding Principles
### 1. Lean Dev Agents
**Minimize Context Overhead:**
- Small files loaded on demand
- Story contains all needed info
- Never load PRDs/architecture unless directed
- Keep `devLoadAlwaysFiles` minimal
**Why:** Large context windows slow development and increase errors. Focused context improves quality.
### 2. Natural Language First
**Markdown Over Code:**
- Use plain English throughout
- No code in core workflows
- Instructions as prose, not programs
- Leverage LLM natural language understanding
**Why:** LLMs excel at natural language. Code-based workflows fight against their strengths.
### 3. Clear Role Separation
**Each Agent Has Specific Expertise:**
- Architect: System design
- PM/PO: Requirements and stories
- Dev: Implementation
- QA: Quality and testing
- SM: Epic decomposition and planning
**Why:** Focused roles prevent scope creep and maintain quality.
## Architecture Best Practices
### DO:
**Start with User Journeys**
- Understand user needs before technology
- Work backward from experience
- Map critical paths
**Document Decisions and Trade-offs**
- Why this choice over alternatives?
- What constraints drove decisions?
- What are the risks?
**Include Diagrams**
- System architecture diagrams
- Data flow diagrams
- Deployment diagrams
- Sequence diagrams for critical flows
**Specify Non-Functional Requirements**
- Performance targets
- Security requirements
- Scalability needs
- Reliability expectations
**Plan for Observability**
- Logging strategy
- Metrics and monitoring
- Alerting thresholds
- Debug capabilities
**Choose Boring Technology Where Possible**
- Proven, stable technologies for foundations
- Exciting technology only where necessary
- Consider team expertise
**Design for Change**
- Modular architecture
- Clear interfaces
- Loose coupling
- Feature flags for rollback
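The feature-flag idea above can be sketched as follows. This is a minimal illustration, not a PRISM API; the `FeatureFlags` and `checkout` names are hypothetical. The point is that a new code path ships dark and can be rolled back by flipping a value instead of redeploying.

```python
class FeatureFlags:
    """Minimal in-memory flag store (illustrative sketch only)."""

    def __init__(self, flags=None):
        self._flags = dict(flags or {})

    def is_enabled(self, name: str) -> bool:
        # Unknown flags default to off, so a forgotten flag fails safe.
        return self._flags.get(name, False)

    def set(self, name: str, enabled: bool) -> None:
        self._flags[name] = enabled


def checkout(flags: FeatureFlags) -> str:
    # New behavior is gated; the legacy path remains the default.
    if flags.is_enabled("new-checkout"):
        return "new checkout flow"
    return "legacy checkout flow"
```

In production the flag values would come from configuration or a flag service rather than an in-memory dict, but the shape of the dependency is the same.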
### DON'T:
**Over-engineer for Hypothetical Futures**
- YAGNI (You Aren't Gonna Need It)
- Build for current requirements
- Make future changes easier, but don't implement them now
**Choose Technology Based on Hype**
- Evaluate objectively
- Consider maturity and support
- Match to team skills
**Neglect Security and Performance**
- Security must be architected in
- Performance requirements drive design
- Don't defer these concerns
**Create Documentation That Goes Stale**
- Living architecture docs
- Keep with code where possible
- Regular reviews and updates
**Ignore Developer Experience**
- Complex setups hurt productivity
- Consider onboarding time
- Optimize for daily workflows
## Story Creation Best Practices
### DO:
**Define Clear, Testable Acceptance Criteria**
```markdown
✅ GOOD:
- User can login with email and password
- Invalid credentials show "Invalid email or password" error
- Successful login redirects to dashboard
❌ BAD:
- Login works correctly
- Errors are handled
- User can access the system
```
**Include Technical Context in Dev Notes**
- Relevant architecture decisions
- Integration points
- Performance considerations
- Security requirements
**Break into Specific, Implementable Tasks**
- Each task is atomic
- Clear success criteria
- Estimated in hours
- Can be done in order
**Size Appropriately (1-3 days)**
- Not too large (>8 points = split it)
- Not too small (<2 points = combine)
- Can be completed in one development session
**Document Dependencies Explicitly**
- Technical dependencies (services, libraries)
- Story dependencies (what must be done first)
- External dependencies (APIs, third-party)
**Link to Source Documents**
- Reference PRD sections
- Reference architecture docs
- Reference Jira epics
**Set Status to "Draft" Until Approved**
- Requires user review
- May need refinement
- Not ready for development
### DON'T:
**Create Vague or Ambiguous Stories**
- "Improve performance" ← What does this mean?
- "Fix bugs" ← Which ones?
- "Update UI" ← Update how?
**Skip Acceptance Criteria**
- Every story needs measurable success
- AC drives test design
- AC enables validation
**Make Stories Too Large**
- >8 points is too large
- Split along feature boundaries
- Maintain logical cohesion
**Forget Dependencies**
- Hidden dependencies cause delays
- Map all prerequisites
- Note integration points
**Mix Multiple Features in One Story**
- One user need per story
- Clear single purpose
- Easier to test and validate
**Approve Without Validation**
- Run validation checklist
- Ensure completeness
- Verify testability
## Development Best Practices
### Test-Driven Development (TDD)
**Red-Green-Refactor:**
1. **Red**: Write failing test first
2. **Green**: Implement minimal code to pass
3. **Refactor**: Improve code while keeping tests green
**Benefits:**
- Tests actually verify behavior (you saw them fail)
- Better design (testable code is better code)
- Confidence in changes
- Living documentation
**Example:**
```
1. Write test: test_user_login_with_valid_credentials()
2. Run test → FAILS (no implementation)
3. Implement login functionality
4. Run test → PASSES
5. Refactor: Extract validation logic
6. Run test → Still PASSES
```
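The six steps above can be sketched in code. This is an illustrative walkthrough of the same login example, assuming a simple dict-backed user store; the function names are hypothetical, not part of PRISM.

```python
# Step 1 (Red): the test is written first and fails while
# authenticate() does not yet exist.
def test_user_login_with_valid_credentials():
    users = {"alice@example.com": "s3cret"}
    assert authenticate(users, "alice@example.com", "s3cret") is True
    assert authenticate(users, "alice@example.com", "wrong") is False


# Steps 3-4 (Green): minimal implementation that makes the test pass.
def authenticate(users: dict, email: str, password: str) -> bool:
    return _credentials_valid(users, email, password)


# Step 5 (Refactor): validation logic extracted into a helper
# while the test stays green (step 6).
def _credentials_valid(users: dict, email: str, password: str) -> bool:
    return users.get(email) == password
```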
### Clean Code Principles
**Meaningful Names**
```python
# ✅ GOOD
def calculate_monthly_payment(principal, rate, term_months):
    return principal * rate / (1 - (1 + rate) ** -term_months)

# ❌ BAD
def calc(p, r, t):
    return p * r / (1 - (1 + r) ** -t)
```
**Small Functions**
- One responsibility per function
- Maximum 20-30 lines
- Single level of abstraction
**No Magic Numbers**
```python
# ✅ GOOD
MAX_RETRIES = 3
TIMEOUT_SECONDS = 30

# ❌ BAD
if retries > 3:  # What's 3? Why 3?
    time.sleep(30)  # Why 30?
```
**Explicit Error Handling**
```python
# ✅ GOOD
try:
    result = api.call()
except APIError as e:
    logger.error(f"API call failed: {e}")
    return fallback_response()

# ❌ BAD
try:
    result = api.call()
except:
    pass
```
### SOLID Principles
**S - Single Responsibility Principle**
- Class has one reason to change
- Function does one thing
- Module has cohesive purpose
**O - Open/Closed Principle**
- Open for extension
- Closed for modification
- Use composition and interfaces
**L - Liskov Substitution Principle**
- Subtypes must be substitutable for base types
- Maintain contracts
- Don't break expectations
**I - Interface Segregation Principle**
- Many specific interfaces > one general interface
- Clients shouldn't depend on unused methods
- Keep interfaces focused
**D - Dependency Inversion Principle**
- Depend on abstractions, not concretions
- High-level modules don't depend on low-level
- Both depend on abstractions
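Dependency inversion can be sketched briefly. In this illustrative example (the class names are hypothetical), the high-level `ReportService` depends on the `Storage` abstraction rather than on any concrete database or file class, so implementations can be swapped without touching the service.

```python
from abc import ABC, abstractmethod


class Storage(ABC):
    """The abstraction both high- and low-level modules depend on."""

    @abstractmethod
    def save(self, key: str, data: str) -> None: ...


class InMemoryStorage(Storage):
    """One concrete low-level implementation (e.g. for tests)."""

    def __init__(self):
        self.items = {}

    def save(self, key: str, data: str) -> None:
        self.items[key] = data


class ReportService:
    def __init__(self, storage: Storage):  # depends on the abstraction
        self._storage = storage

    def publish(self, name: str, body: str) -> None:
        self._storage.save(name, body)
```

A database-backed `Storage` subclass could replace `InMemoryStorage` in production with no change to `ReportService`.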
### Story Implementation
**Update Story File Correctly**
- ONLY update Dev Agent Record sections
- Mark tasks complete when ALL tests pass
- Update File List with every change
- Document issues in Debug Log
**Run Full Regression Before Completion**
- All tests must pass
- No skipped tests
- Linting clean
- Build successful
**Track PSP Accurately**
- Set Started timestamp when beginning
- Set Completed when done
- Calculate Actual Hours
- Compare to estimates for improvement
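The PSP arithmetic above is simple enough to sketch. This is an illustrative helper (PRISM does not mandate this code, and the timestamp format is an assumption): derive Actual Hours from the Started/Completed timestamps, then compare against the estimate to feed improvement.

```python
from datetime import datetime


def actual_hours(started: str, completed: str) -> float:
    """Hours elapsed between two 'YYYY-MM-DD HH:MM' timestamps."""
    fmt = "%Y-%m-%d %H:%M"
    delta = datetime.strptime(completed, fmt) - datetime.strptime(started, fmt)
    return round(delta.total_seconds() / 3600, 2)


def estimate_error(estimated: float, actual: float) -> float:
    # Positive = took longer than estimated; track this over time.
    return round(actual - estimated, 2)
```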
### DON'T:
**Modify Restricted Story Sections**
- Don't change Story content
- Don't change Acceptance Criteria
- Don't change Testing approach
- Only Dev Agent Record sections
**Skip Tests or Validations**
- Tests are not optional
- Validations must pass
- No "TODO: add tests later"
**Mark Tasks Complete With Failing Tests**
- Complete = ALL validations pass
- Includes unit + integration + E2E
- No exceptions
**Load External Docs Without Direction**
- Story has what you need
- Don't load PRD "just in case"
- Keep context minimal
**Implement Without Understanding**
- If unclear, ask user
- Don't guess requirements
- Better to HALT than implement wrong
## Testing Best Practices
### Test Level Selection
**Unit Tests - Use For:**
- Pure functions
- Business logic
- Calculations and algorithms
- Validation rules
- Data transformations
**Integration Tests - Use For:**
- Component interactions
- Database operations
- API endpoints
- Service integrations
- Message queue operations
**E2E Tests - Use For:**
- Critical user journeys
- Cross-system workflows
- Compliance requirements
- Revenue-impacting flows
### Test Priorities
**P0 - Critical (>90% coverage):**
- Revenue-impacting features
- Security paths
- Data integrity operations
- Compliance requirements
- Authentication/authorization
**P1 - High (Happy path + key errors):**
- Core user journeys
- Frequently used features
- Complex business logic
- Integration points
**P2 - Medium (Happy path + basic errors):**
- Secondary features
- Admin functionality
- Reporting and analytics
**P3 - Low (Smoke tests):**
- Rarely used features
- Cosmetic improvements
- Nice-to-have functionality
### Test Quality Standards
**No Flaky Tests**
- Tests must be deterministic
- No random failures
- Reproducible results
**Dynamic Waiting**
```python
# ✅ GOOD
wait_for(lambda: element.is_visible(), timeout=5)
# ❌ BAD
time.sleep(5) # What if it takes 6 seconds? Or 2?
```
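A `wait_for` helper like the one used above can be written in a few lines. This is a sketch; a real test framework (Playwright, Selenium, etc.) typically provides its own waiting primitives, which should be preferred. It polls a condition until it returns truthy or the timeout elapses.

```python
import time


def wait_for(condition, timeout=5.0, interval=0.05):
    """Poll `condition` until truthy; raise TimeoutError after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout}s")
```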
**Stateless and Parallel-Safe**
- Tests don't depend on order
- Can run in parallel
- No shared state
**Self-Cleaning Test Data**
- Setup in test
- Cleanup in test
- No manual database resets
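One way to keep setup and cleanup inside the test is a context manager, sketched below with a dict standing in for a real database (names are illustrative). The `finally` block guarantees cleanup even when the test body raises, so no manual reset is ever needed.

```python
from contextlib import contextmanager

FAKE_DB = {}  # stand-in for a real database


@contextmanager
def temp_user(email: str):
    FAKE_DB[email] = {"email": email}   # setup in the test
    try:
        yield FAKE_DB[email]
    finally:
        del FAKE_DB[email]              # cleanup always runs


def test_user_exists():
    with temp_user("test@example.com") as user:
        assert user["email"] == "test@example.com"
    assert "test@example.com" not in FAKE_DB  # data cleaned up
```

Test frameworks usually offer the same pattern natively (e.g. fixtures with teardown); the principle is identical.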
**Explicit Assertions in Tests**
```python
# ✅ GOOD
def test_user_creation():
    user = create_user("test@example.com")
    assert user.email == "test@example.com"
    assert user.is_active is True

# ❌ BAD
def test_user_creation():
    user = create_user("test@example.com")
    verify_user(user)  # Assertion hidden in helper
```
### Test Anti-Patterns
**Testing Mock Behavior**
- Test real code, not mocks
- Mocks should simulate real behavior
- Integration tests often better than heavily mocked unit tests
**Production Pollution**
- No test-only methods in production code
- No test-specific conditionals
- Keep test code separate
**Mocking Without Understanding**
- Understand what you're mocking
- Know why you're mocking it
- Consider integration test instead
## Quality Assurance Best Practices
### Risk Assessment (Before Development)
**Always Run for Brownfield**
- Legacy code = high risk
- Integration points = complexity
- Use risk-profile task
**Score by Probability × Impact**
**Risk Score Formula**: Probability (1-9) × Impact (1-9)
**Probability Factors:**
- Code complexity (higher = more likely to have bugs)
- Number of integration points (more = higher chance of issues)
- Developer experience level (less experience = higher probability)
- Time constraints (rushed = more bugs)
- Technology maturity (new tech = higher risk)
**Impact Factors:**
- Number of users affected (more users = higher impact)
- Revenue impact (money at stake)
- Security implications (data breach potential)
- Compliance requirements (legal/regulatory)
- Business process disruption (operational impact)
**Risk Score Interpretation:**
- **1-9**: Low risk - Basic testing sufficient
- **10-29**: Medium risk - Standard testing required
- **30-54**: High risk - Comprehensive testing needed
- **55+**: Critical risk - Extensive testing + design review
**Gate Decisions by Risk Score:**
- Any single risk scoring ≥55 (critical) = FAIL gate (must address before proceeding)
- Multiple risks scoring ≥30 (high) = CONCERNS gate (enhanced testing required)
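The scoring and interpretation table can be expressed as a small lookup. This is an illustrative helper, not a PRISM API; it mirrors the bands given above.

```python
def risk_score(probability: int, impact: int) -> int:
    """Probability (1-9) × Impact (1-9), per the formula above."""
    if not (1 <= probability <= 9 and 1 <= impact <= 9):
        raise ValueError("probability and impact must be 1-9")
    return probability * impact


def risk_band(score: int) -> str:
    """Map a score to the interpretation band from the table above."""
    if score <= 9:
        return "low"
    if score <= 29:
        return "medium"
    if score <= 54:
        return "high"
    return "critical"
```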
**Document Mitigation Strategies**
- How to reduce risk (technical approaches)
- What testing is needed (test coverage requirements)
- What monitoring to add (observability needs)
- Rollback procedures (safety nets)
### Test Design (Before Development)
**Create Comprehensive Strategy**
- Map all acceptance criteria
- Choose appropriate test levels
- Assign priorities (P0/P1/P2/P3)
**Avoid Duplicate Coverage**
- Unit for logic
- Integration for interactions
- E2E for journeys
- Don't test same thing at multiple levels
**Plan Regression Tests for Brownfield**
- Existing functionality must still work
- Test touchpoints with legacy
- Validate backward compatibility
### Requirements Tracing (During Development)
**Map Every AC to Tests**
- Given-When-Then scenarios
- Traceability matrix
- Audit trail
**Identify Coverage Gaps**
- Missing test scenarios
- Untested edge cases
- Incomplete validation
### Review (After Development)
**Comprehensive Analysis**
- Code quality
- Test coverage
- Security concerns
- Performance issues
**Active Refactoring**
- QA can suggest improvements
- Not just finding problems
- Collaborative quality
**Advisory, Not Blocking**
- PASS/CONCERNS/FAIL/WAIVED gates
- Teams set their quality bar
- Document trade-offs
### Quality Gate Decisions
**PASS** ✅ - All criteria met, ready for production
Criteria:
- All acceptance criteria tested
- Test coverage adequate for risk level
- No critical or high severity issues
- NFRs validated
- Technical debt acceptable
**CONCERNS** ⚠️ - Issues exist but not blocking
When to use:
- Minor issues that don't block release
- Technical debt documented for future
- Nice-to-have improvements identified
- Low-risk issues with workarounds
- Document clearly what concerns exist
**FAIL** ❌ - Blocking issues must be fixed
Blocking criteria:
- Acceptance criteria not met
- Critical/high severity bugs
- Security vulnerabilities
- Performance unacceptable
- Missing required tests
- Technical debt too high
- Clear action items required
**WAIVED** 🔓 - Issues acknowledged, explicitly waived
When to use:
- User accepts known issues
- Conscious technical debt decision
- Time constraints prioritized
- Workarounds acceptable
- Require explicit user approval with documentation
## Brownfield Best Practices
### Always Document First
**Run document-project**
- Even if you "know" the codebase
- AI agents need context
- Discover undocumented patterns
### Respect Existing Patterns
**Match Current Style**
- Coding conventions
- Architectural patterns
- Technology choices
- Team preferences
### Plan for Gradual Rollout
**Feature Flags**
- Toggle new functionality
- Enable rollback
- Gradual user migration
**Backwards Compatibility**
- Don't break existing APIs
- Support legacy consumers
- Migration paths
**Migration Scripts**
- Data transformations
- Schema updates
- Rollback procedures
### Test Integration Thoroughly
**Enhanced QA for Brownfield**
- ALWAYS run risk assessment first
- Design regression test strategy
- Test all integration points
- Validate performance unchanged
**Critical Brownfield Sequence:**
```
1. QA: *risk {story} # FIRST - before any dev
2. QA: *design {story} # Plan regression tests
3. Dev: Implement
4. QA: *trace {story} # Verify coverage
5. QA: *nfr {story} # Check performance
6. QA: *review {story} # Deep integration analysis
```
## Process Best Practices
### Multiple Focused Tasks > One Branching Task
**Why:** Keeps developer context minimal and focused
**GOOD:**
```
- Task 1: Create User model
- Task 2: Implement registration endpoint
- Task 3: Add email validation
- Task 4: Write integration tests
```
**BAD:**
```
- Task 1: Implement user registration
- Create model
- Add endpoint
- Validate email
- Write tests
- Handle errors
- Add logging
- Document API
```
### Reuse Templates
**Use create-doc with Templates**
- Maintain consistency
- Proven structure
- Embedded generation instructions
**Don't Create Template Duplicates**
- One template per document type
- Customize through prompts, not duplication
### Progressive Loading
**Load On-Demand**
- Don't load everything at activation
- Load when command executed
- Keep context focused
**Don't Front-Load Context**
- Overwhelming context window
- Slower processing
- More errors
### Human-in-the-Loop
**Critical Checkpoints**
- PRD/Architecture: User reviews before proceeding
- Story drafts: User approves before dev
- QA gates: User decides on CONCERNS/WAIVED
**Don't Blindly Proceed**
- Ambiguous requirements → HALT and ask
- Risky changes → Get approval
- Quality concerns → Communicate
## Anti-Patterns to Avoid
### Development Anti-Patterns
**"I'll Add Tests Later"**
- Tests are never added
- Code becomes untestable
- TDD prevents this
**"Just Ship It"**
- Skipping quality gates
- Incomplete testing
- Technical debt accumulates
**"It Works On My Machine"**
- Environment-specific behavior
- Not reproducible
- Integration issues
**"We'll Refactor It Later"**
- Later never comes
- Code degrades
- Costs compound
### Testing Anti-Patterns
**Testing Implementation Instead of Behavior**
```python
# ❌ BAD - Testing implementation
assert user_service._hash_password.called
# ✅ GOOD - Testing behavior
assert user_service.authenticate(email, password) is True
```
**Sleeping Instead of Waiting**
```javascript
// ❌ BAD
await sleep(5000);
expect(element).toBeVisible();
// ✅ GOOD
await waitFor(() => expect(element).toBeVisible());
```
**Shared Test State**
```python
# ❌ BAD
class TestUser:
    user = None  # Shared across tests!

    def test_create_user(self):
        self.user = User.create()

    def test_user_login(self):
        # Depends on test_create_user running first!
        self.user.login()

# ✅ GOOD
class TestUser:
    def test_create_user(self):
        user = User.create()
        assert user.id is not None

    def test_user_login(self):
        user = User.create()  # Independent!
        assert user.login() is True
```
### Process Anti-Patterns
**Skipping Risk Assessment on Brownfield**
- Hidden dependencies
- Integration failures
- Regression bugs
**Approval Without Validation**
- Incomplete stories
- Vague requirements
- Downstream failures
**Loading Context "Just In Case"**
- Bloated context window
- Slower processing
- More errors
**Ignoring Quality Gates**
- Accumulating technical debt
- Production issues
- Team frustration
## Summary: The Path to Excellence
### For Architects:
1. Start with user needs
2. Choose pragmatic technology
3. Document decisions and trade-offs
4. Design for change
5. Plan observability from the start
### For Product Owners:
1. Clear, testable acceptance criteria
2. Appropriate story sizing (1-3 days)
3. Explicit dependencies
4. Technical context for developers
5. Validation before approval
### For Developers:
1. TDD - tests first, always
2. Clean Code and SOLID principles
3. Update only authorized story sections
4. Full regression before completion
5. Keep context lean and focused
### For QA:
1. Risk assessment before development (especially brownfield)
2. Test design with appropriate levels and priorities
3. Requirements traceability
4. Advisory gates, not blocking
5. Comprehensive review with active refactoring
### For Everyone:
1. Follow PRISM principles (Predictability, Resilience, Intentionality, Sustainability, Maintainability)
2. Lean dev agents, natural language first, clear roles
3. Progressive loading, human-in-the-loop
4. Quality is everyone's responsibility
5. Continuous improvement through measurement
---
**Last Updated**: 2025-10-22

# PRISM Command Reference
This document describes the command structure and common commands available across PRISM skills.
## Command Structure
All PRISM commands follow a consistent pattern:
```
{command-name} [arguments]
```
When using skills in slash command mode, prefix with `*`:
```
*help
*create-story
*develop-story
```
## Common Commands (All Skills)
### Help & Information
**`help`**
- **Purpose**: Display available commands for the current skill
- **Output**: Numbered list of commands with descriptions
- **Usage**: `*help`
**`exit`**
- **Purpose**: Exit the current skill persona
- **Output**: Farewell message and return to normal mode
- **Usage**: `*exit`
### Jira Integration
**`jira {issueKey}`**
- **Purpose**: Fetch context from a Jira ticket
- **Arguments**:
- `issueKey`: The Jira issue identifier (e.g., "PROJ-123")
- **Output**: Issue details including description, acceptance criteria, comments
- **Usage**: `*jira PROJ-123`
- **Available in**: All skills with Jira integration
## Architect Commands
### Document Creation
**`create-architecture`**
- **Purpose**: Intelligently create architecture documentation based on project type
- **How it works**:
- Analyzes PRD and project requirements
- Recommends appropriate template (fullstack or backend-focused)
- Gets user confirmation
- Creates comprehensive architecture doc
- **Templates**:
- `fullstack-architecture-tmpl.yaml` for full-stack projects
- `architecture-tmpl.yaml` for backend/services projects
- **Output**: Complete architecture covering all relevant layers
### Analysis & Research
**`research {topic}`**
- **Purpose**: Conduct deep technical research
- **Arguments**: `topic` - The architecture topic to research
- **Task**: Executes `create-deep-research-prompt.md`
- **Output**: Comprehensive research findings
**`document-project`**
- **Purpose**: Document existing project architecture
- **Task**: Executes `document-project.md`
- **Output**: Complete project documentation
### Quality & Validation
**`execute-checklist`**
- **Purpose**: Run architecture quality checklist
- **Arguments**: Optional checklist name (defaults to `architect-checklist`)
- **Task**: Executes `execute-checklist.md`
- **Output**: Checklist validation results
**`shard-prd`**
- **Purpose**: Break architecture document into implementable pieces
- **Task**: Executes `shard-doc.md`
- **Output**: Multiple story files from architecture
**`doc-out`**
- **Purpose**: Output full document to destination file
- **Usage**: Used during document creation workflows
## Product Owner Commands
### Story Management
**`create-story`**
- **Purpose**: Create user story from requirements
- **Task**: Executes `brownfield-create-story.md`
- **Output**: Complete story YAML file
**`validate-story-draft {story}`**
- **Purpose**: Validate story completeness and quality
- **Arguments**: `story` - Path to story file
- **Task**: Executes `validate-next-story.md`
- **Output**: Validation results and recommendations
**`correct-course`**
- **Purpose**: Handle requirement changes and re-estimation
- **Task**: Executes `correct-course.md`
- **Output**: Updated stories and estimates
### Document Processing
**`shard-doc {document} {destination}`**
- **Purpose**: Break large document into stories
- **Arguments**:
- `document`: Path to source document (PRD, architecture, etc.)
- `destination`: Output directory for story files
- **Task**: Executes `shard-doc.md`
- **Output**: Multiple story files with dependencies
**`doc-out`**
- **Purpose**: Output full document to destination file
- **Usage**: Used during document creation workflows
### Quality Assurance
**`execute-checklist-po`**
- **Purpose**: Run PO master checklist
- **Task**: Executes `execute-checklist.md` with `po-master-checklist`
- **Output**: Checklist validation results
**`yolo`**
- **Purpose**: Toggle Yolo Mode (skip confirmations)
- **Usage**: `*yolo`
- **Note**: ON = skip section confirmations, OFF = confirm each section
## Developer Commands
### Story Implementation
**`develop-story`**
- **Purpose**: Execute complete story implementation workflow
- **Workflow**:
1. Set PSP tracking started timestamp
2. Read task → Implement → Write tests → Validate
3. Mark task complete, update File List
4. Repeat until all tasks complete
5. Run full regression
6. Update PSP tracking, set status to "Ready for Review"
- **Critical Rules**:
- Only update Dev Agent Record sections
- Follow PRISM principles (Predictability, Resilience, Intentionality, Sustainability, Maintainability)
- Write tests before implementation (TDD)
- Run validations before marking tasks complete
**`explain`**
- **Purpose**: Educational breakdown of implementation
- **Usage**: `*explain`
- **Output**: Detailed explanation of recent work, teaching junior engineer perspective
### Quality & Testing
**`review-qa`**
- **Purpose**: Apply QA fixes from review feedback
- **Task**: Executes `apply-qa-fixes.md`
- **Usage**: After receiving QA review results
**`run-tests`**
- **Purpose**: Execute linting and test suite
- **Usage**: `*run-tests`
- **Output**: Test results and coverage
### Integration
**`strangler`**
- **Purpose**: Execute strangler pattern migration workflow
- **Usage**: For legacy code modernization
- **Pattern**: Gradual replacement of legacy systems
## QA/Test Architect Commands
### Risk & Design (Before Development)
**`risk-profile {story}` (short: `*risk`)**
- **Purpose**: Assess regression and integration risks
- **Arguments**: `story` - Story file path or ID
- **Task**: Executes `risk-profile.md`
- **Output**: `docs/qa/assessments/{epic}.{story}-risk-{YYYYMMDD}.md`
- **Use When**: IMMEDIATELY after story creation, especially for brownfield
**`test-design {story}` (short: `*design`)**
- **Purpose**: Plan comprehensive test strategy
- **Arguments**: `story` - Story file path or ID
- **Task**: Executes `test-design.md`
- **Output**: `docs/qa/assessments/{epic}.{story}-test-design-{YYYYMMDD}.md`
- **Use When**: After risk assessment, before development
### Review (After Development)
**`review {story}`**
- **Purpose**: Comprehensive quality review with active refactoring
- **Arguments**: `story` - Story file path or ID
- **Task**: Executes `review-story.md`
- **Outputs**:
- QA Results section in story file
- Gate file: `docs/qa/gates/{epic}.{story}-{slug}.yml`
- **Gate Statuses**: PASS / CONCERNS / FAIL / WAIVED
- **Use When**: Development complete, before committing
**`gate {story}`**
- **Purpose**: Update quality gate decision after fixes
- **Arguments**: `story` - Story file path or ID
- **Task**: Executes `qa-gate.md`
- **Output**: Updated gate YAML file
- **Use When**: After addressing review issues
## Scrum Master Commands
**`create-epic`**
- **Purpose**: Create epic from brownfield requirements
- **Task**: Executes `brownfield-create-epic.md`
- **Output**: Epic document with stories
## Command Execution Order
### Typical Story Lifecycle
```
1. PO: *create-story
2. PO: *validate-story-draft {story}
3. QA: *risk {story} # Assess risks (optional)
4. QA: *design {story} # Plan tests (optional)
5. Dev: *develop-story # Implement
6. QA: *review {story} # Full review (optional)
7. Dev: *review-qa # Apply fixes (if needed)
8. QA: *gate {story} # Update gate (optional)
```
### Brownfield Story Lifecycle (High Risk)
```
1. PO: *create-story
2. QA: *risk {story} # CRITICAL: Before dev
3. QA: *design {story} # Plan regression tests
4. PO: *validate-story-draft {story}
5. Dev: *develop-story
6. QA: *review {story} # Deep integration analysis
7. Dev: *review-qa
8. QA: *gate {story} # May WAIVE legacy issues
```
## Command Flags & Options
### Yolo Mode (PO)
- **Toggle**: `*yolo`
- **Effect**: Skip document section confirmations
- **Use**: Batch story creation, time-critical work
### Checklist Variants
- `execute-checklist` - Default checklist for skill
- `execute-checklist {custom-checklist}` - Specific checklist
## Best Practices
**Command Usage:**
- ✅ Use short forms in brownfield workflows (`*risk`, `*design`)
- ✅ Always run `*help` when entering a new skill
- ✅ Use `*risk` before starting ANY brownfield work
- ✅ Run `*design` after risk assessment
- ✅ Execute `*review` when development is complete
**Anti-Patterns:**
- ❌ Skipping `*risk` on legacy code changes
- ❌ Running `*review` before all tasks are complete
- ❌ Using `*yolo` mode for critical stories
## Integration Commands
### Jira Integration Pattern
```
1. *jira PROJ-123 # Fetch issue
2. Use fetched context for story/architecture creation
3. Reference Jira key in created artifacts
```
## Command Help
For skill-specific commands, use the `*help` command within each skill:
- Architect: `*help` → Lists architecture commands
- PO: `*help` → Lists story/backlog commands
- Dev: `*help` → Lists development commands
- QA: `*help` → Lists testing commands
- SM: `*help` → Lists scrum master commands
---
**Last Updated**: 2025-10-22

# PRISM Dependencies Reference
This document describes the dependencies, integrations, and file structure used by PRISM skills.
## Dependency Structure
PRISM uses a modular dependency system where each skill can reference:
1. **Tasks** - Executable workflows (`.prism/tasks/`)
2. **Templates** - Document structures (`.prism/templates/`)
3. **Checklists** - Quality gates (`.prism/checklists/`)
4. **Data** - Reference information (`.prism/data/`)
5. **Integrations** - External systems (Jira, etc.)
## File Resolution
Dependencies follow this pattern:
```
.prism/{type}/{name}
```
**Examples:**
- `create-doc.md` → `.prism/tasks/create-doc.md`
- `architect-checklist.md` → `.prism/checklists/architect-checklist.md`
- `architecture-tmpl.yaml` → `.prism/templates/architecture-tmpl.yaml`
- `technical-preferences.md` → `.prism/data/technical-preferences.md`
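The `.prism/{type}/{name}` convention can be captured in a small resolver. This helper is illustrative (PRISM does not ship it); it also rejects unknown dependency types so typos fail loudly.

```python
from pathlib import PurePosixPath


def resolve_dependency(dep_type: str, name: str) -> str:
    """Resolve a dependency name to its .prism/{type}/{name} path."""
    allowed = {"tasks", "templates", "checklists", "data"}
    if dep_type not in allowed:
        raise ValueError(f"unknown dependency type: {dep_type}")
    return str(PurePosixPath(".prism") / dep_type / name)
```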
## Architect Dependencies
### Tasks
- `create-deep-research-prompt.md` - Deep technical research
- `create-doc.md` - Document generation engine
- `document-project.md` - Project documentation workflow
- `execute-checklist.md` - Checklist validation
### Templates
- `architecture-tmpl.yaml` - Backend architecture template
- `brownfield-architecture-tmpl.yaml` - Legacy system assessment template
- `front-end-architecture-tmpl.yaml` - Frontend architecture template
- `fullstack-architecture-tmpl.yaml` - Complete system architecture template
### Checklists
- `architect-checklist.md` - Architecture quality gates
### Data
- `technical-preferences.md` - Team technology preferences and patterns
## Product Owner Dependencies
### Tasks
- `correct-course.md` - Requirement change management
- `execute-checklist.md` - Checklist validation
- `shard-doc.md` - Document sharding workflow
- `validate-next-story.md` - Story validation workflow
- `brownfield-create-story.md` - Brownfield story creation
### Templates
- `story-tmpl.yaml` - User story template
### Checklists
- `change-checklist.md` - Change management checklist
- `po-master-checklist.md` - Product owner master checklist
## Developer Dependencies
### Tasks
- `apply-qa-fixes.md` - QA feedback application workflow
- `execute-checklist.md` - Checklist validation
- `validate-next-story.md` - Story validation (pre-development)
### Checklists
- `story-dod-checklist.md` - Story Definition of Done checklist
### Configuration
**Dev Load Always Files** (from `core-config.yaml`):
- Files automatically loaded during developer activation
- Contains project-specific patterns and standards
- Keeps developer context lean and focused
**Story File Sections** (Developer can update):
- Tasks/Subtasks checkboxes
- Dev Agent Record (all subsections)
- Agent Model Used
- Debug Log References
- Completion Notes List
- File List
- Change Log
- Status (only to "Ready for Review")
## QA/Test Architect Dependencies
### Tasks
- `nfr-assess.md` - Non-functional requirements validation
- `qa-gate.md` - Quality gate decision management
- `review-story.md` - Comprehensive story review
- `risk-profile.md` - Risk assessment workflow
- `test-design.md` - Test strategy design
- `trace-requirements.md` - Requirements traceability mapping
### Templates
- `qa-gate-tmpl.yaml` - Quality gate template
- `story-tmpl.yaml` - Story template (for reading)
### Data
- `technical-preferences.md` - Team preferences
- `test-levels-framework.md` - Unit/Integration/E2E decision framework
- `test-priorities-matrix.md` - P0/P1/P2/P3 priority system
### Output Locations
**Assessment Documents:**
```
docs/qa/assessments/
├── {epic}.{story}-risk-{YYYYMMDD}.md
├── {epic}.{story}-test-design-{YYYYMMDD}.md
├── {epic}.{story}-trace-{YYYYMMDD}.md
└── {epic}.{story}-nfr-{YYYYMMDD}.md
```
**Gate Decisions:**
```
docs/qa/gates/
└── {epic}.{story}-{slug}.yml
```
**Story File Sections** (QA can update):
- QA Results section ONLY
- Cannot modify: Status, Story, Acceptance Criteria, Tasks, Dev Notes, Testing, Dev Agent Record, Change Log
## Scrum Master Dependencies
### Tasks
- `brownfield-create-epic.md` - Epic creation for brownfield projects
## Jira Integration
### Configuration
Jira integration is configured in `.prism/core-config.yaml`:
```yaml
integrations:
  jira:
    enabled: true
    baseUrl: "https://your-company.atlassian.net"
    # Additional config...
```
### Usage Pattern
**1. Fetch Issue Context:**
```
*jira PROJ-123
```
**2. Use in Workflows:**
- Architect: Fetch epic for architecture planning
- PO: Fetch epic/story for refinement
- Dev: Fetch story for implementation context
- QA: Fetch story for test planning
**3. Automatic Linking:**
- Created artifacts reference source Jira key
- Traceability maintained throughout workflow
### Integration Points
**Available in:**
- ✅ Architect skill
- ✅ Product Owner skill
- ✅ Developer skill
- ✅ QA skill
- ✅ Scrum Master skill
**Command:**
```
*jira {issueKey}
```
**Output:**
- Issue summary and description
- Acceptance criteria (if available)
- Comments and discussion
- Current status and assignee
- Labels and components
## PRISM Configuration
### Core Config File
**Location:** `.prism/core-config.yaml`
**Purpose:** Central configuration for all PRISM skills
**Key Sections:**
```yaml
project:
  name: "Your Project"
  type: "brownfield" | "greenfield"

paths:
  stories: "docs/stories"
  architecture: "docs/architecture"

qa:
  qaLocation: "docs/qa"
  assessments: "docs/qa/assessments"
  gates: "docs/qa/gates"

dev:
  devStoryLocation: "docs/stories"
  devLoadAlwaysFiles:
    - "docs/architecture/technical-standards.md"
    - "docs/architecture/project-conventions.md"

integrations:
  jira:
    enabled: true
    baseUrl: "https://your-company.atlassian.net"
```
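A skill that needs an output location walks this nested structure rather than hardcoding paths. A minimal sketch of that lookup, assuming the YAML has already been parsed (in practice via a parser such as PyYAML); the parsed result is shown here as a plain dict so the example is self-contained, and `resolve_path` is an illustrative helper, not part of PRISM itself:

```python
# Parsed core-config.yaml, represented as a dict for this sketch.
config = {
    "paths": {"stories": "docs/stories", "architecture": "docs/architecture"},
    "qa": {"qaLocation": "docs/qa", "gates": "docs/qa/gates"},
    "dev": {"devStoryLocation": "docs/stories"},
}

def resolve_path(config: dict, *keys: str) -> str:
    """Walk nested config keys, failing loudly if a key is missing."""
    node = config
    for key in keys:
        if key not in node:
            raise KeyError(f"core-config.yaml missing key: {'.'.join(keys)}")
        node = node[key]
    return node

print(resolve_path(config, "qa", "gates"))  # docs/qa/gates
```

Failing loudly on a missing key surfaces misconfigured paths at activation time instead of producing artifacts in the wrong directory.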
### Story File Structure
**Location:** `{devStoryLocation}/{epic}.{story}.{slug}.md`
**Example:** `docs/stories/1.3.user-authentication.md`
**Required Sections:**
- Story ID and Title
- Story (user need and business value)
- Acceptance Criteria
- Tasks/Subtasks with checkboxes
- Dev Notes
- Testing approach
- Dev Agent Record (for developer updates)
- QA Results (for QA updates)
- PSP Estimation Tracking
- File List
- Change Log
- Status
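The naming convention above is mechanical enough to express as a one-line helper. This sketch is illustrative (PRISM does not ship such a function); it simply makes the `{epic}.{story}.{slug}.md` pattern concrete:

```python
# Build a story file path from the documented naming convention.
def story_path(story_location: str, epic: int, story: int, slug: str) -> str:
    return f"{story_location}/{epic}.{story}.{slug}.md"

print(story_path("docs/stories", 1, 3, "user-authentication"))
# docs/stories/1.3.user-authentication.md
```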
### Template Structure
**Location:** `.prism/templates/{template-name}.yaml`
**Format:**
```yaml
metadata:
  id: template-id
  title: Template Title
  version: 1.0.0

workflow:
  elicit: true | false
  confirm_sections: true | false

sections:
  - id: section-1
    title: Section Title
    prompt: |
      Instructions for generating this section
    elicit:
      - question: "What is...?"
        placeholder: "Example answer"
```
## Workflow Dependencies
### Story Creation Workflow
```
1. PO creates story using story-tmpl.yaml
2. Story validation using validate-next-story.md
3. QA risk assessment using risk-profile.md
4. QA test design using test-design.md
5. Dev implements using develop-story command
6. QA traces coverage using trace-requirements.md
7. QA reviews using review-story.md
8. QA gates using qa-gate.md
```
### Architecture Workflow
```
1. Architect creates doc using create-doc.md + architecture template
2. Validation using execute-checklist.md + architect-checklist.md
3. Sharding using shard-doc.md
4. Stories created from sharded content
```
### Brownfield Workflow
```
1. Architect documents project using document-project.md
2. PM creates brownfield PRD
3. Architect creates brownfield architecture using brownfield-architecture-tmpl.yaml
4. PO creates stories using brownfield-create-story.md
5. QA risk profiles using risk-profile.md (CRITICAL)
6. Development proceeds with enhanced QA validation
```
## Data Files
### Technical Preferences
**Location:** `.prism/data/technical-preferences.md`
**Purpose:** Team-specific technology choices and patterns
**Used By:** All skills to bias recommendations
**Example Content:**
```markdown
# Technical Preferences
## Backend
- Language: Python 3.11+
- Framework: FastAPI
- Database: PostgreSQL 15+
- ORM: SQLAlchemy 2.0
## Frontend
- Framework: React 18+ with TypeScript
- State: Redux Toolkit
- Routing: React Router v6
## Testing
- Unit: pytest
- E2E: Playwright
- Coverage: >80% for new code
```
### Test Frameworks
**test-levels-framework.md:**
- Unit test criteria and scenarios
- Integration test criteria
- E2E test criteria
- Selection guidance
**test-priorities-matrix.md:**
- P0: Critical (>90% unit, >80% integration, all E2E)
- P1: High (happy path + key errors)
- P2: Medium (happy path + basic errors)
- P3: Low (smoke tests)
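The P0 row above is the only one with explicit numeric thresholds, and it can be checked mechanically. A sketch under that assumption (the `meets_p0` helper is illustrative, not a PRISM API; only the >90%/>80% figures come from the matrix):

```python
# P0 coverage thresholds from the priority matrix: >90% unit, >80%
# integration, and all critical E2E scenarios passing.
P0_THRESHOLDS = {"unit": 0.90, "integration": 0.80}

def meets_p0(unit_cov: float, integration_cov: float, all_e2e_pass: bool) -> bool:
    return (unit_cov > P0_THRESHOLDS["unit"]
            and integration_cov > P0_THRESHOLDS["integration"]
            and all_e2e_pass)

print(meets_p0(0.95, 0.85, True))   # True
print(meets_p0(0.95, 0.75, True))   # False: integration coverage too low
```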
## Dependency Loading
### Progressive Loading
**Principle:** Load dependencies only when needed, not during activation
**Activation:**
1. Read skill SKILL.md
2. Adopt persona
3. Load core-config.yaml
4. Greet and display help
5. HALT and await commands
**Execution:**
1. User requests command
2. Load required dependencies
3. Execute workflow
4. Return results
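The execution flow above amounts to memoized, on-demand loading: a dependency is read the first time a command needs it and reused thereafter. A minimal sketch of that pattern (file names are examples from this document; real code would read the files from disk):

```python
import functools

@functools.lru_cache(maxsize=None)
def load_dependency(path: str) -> str:
    print(f"loading {path}")          # happens once per path
    return f"<contents of {path}>"    # stand-in for reading the file

def run_command(name: str) -> str:
    deps = {"risk": ".prism/tasks/risk-profile.md",
            "design": ".prism/tasks/test-design.md"}
    return load_dependency(deps[name])

run_command("risk")   # loads risk-profile.md on first use
run_command("risk")   # cached: no second load
```

Nothing is loaded at import time, which is exactly what keeps activation lean.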
### Dev Agent Special Rules
**CRITICAL:**
- Story has ALL info needed
- NEVER load PRD/architecture unless explicitly directed
- Only load devLoadAlwaysFiles during activation
- Keep context minimal and focused
## External Dependencies
### Version Control
- Git required for all PRISM workflows
- Branch strategies defined per project
### Node.js (Optional)
- Used only by optional CLI tooling
- Required if you run the flattener utilities
### IDEs
- Claude Code (recommended)
- VS Code with Claude extension
- Cursor
- Any IDE with Claude support
### AI Models
- Claude 3.5 Sonnet (recommended for all skills)
- Claude 3 Opus (alternative)
- Other models may work but are not optimized for PRISM
## Best Practices
**Dependency Management:**
- ✅ Keep dependencies minimal and focused
- ✅ Load progressively (on-demand)
- ✅ Reference by clear file paths
- ✅ Maintain separation of concerns
**File Organization:**
- ✅ Tasks in `.prism/tasks/`
- ✅ Templates in `.prism/templates/`
- ✅ Checklists in `.prism/checklists/`
- ✅ Data in `.prism/data/`
**Configuration:**
- ✅ Central config in `core-config.yaml`
- ✅ Project-specific settings
- ✅ Integration credentials secure
**Anti-Patterns:**
- ❌ Loading all dependencies during activation
- ❌ Mixing task types in single file
- ❌ Hardcoding paths instead of using config
- ❌ Dev agents loading excessive context
## Troubleshooting
**Dependency Not Found:**
- Check file path matches pattern: `.prism/{type}/{name}`
- Verify file exists in correct directory
- Check core-config.yaml paths configuration
**Integration Failures:**
- Verify Jira configuration in core-config.yaml
- Check credentials and permissions
- Test connection with `*jira {test-key}`
**Task Execution Errors:**
- Ensure all required dependencies loaded
- Check task file format (markdown with YAML frontmatter)
- Verify user has permissions for file operations
---
**Last Updated**: 2025-10-22

---
# PRISM Workflow Examples
This document provides real-world examples of PRISM workflows across different scenarios.
## Table of Contents
1. [Greenfield: New E-Commerce Platform](#greenfield-new-e-commerce-platform)
2. [Brownfield: Legacy System Modernization](#brownfield-legacy-system-modernization)
3. [API Integration](#api-integration)
4. [Bug Fix in Complex System](#bug-fix-in-complex-system)
5. [Performance Optimization](#performance-optimization)
6. [Security Enhancement](#security-enhancement)
---
## Greenfield: New E-Commerce Platform
### Scenario
Building a new e-commerce platform from scratch with modern technology stack.
### Workflow
#### Phase 1: Architecture Planning
**User Request:**
> "I need to design a full-stack e-commerce platform with product catalog, shopping cart, checkout, and payment processing."
**Step 1: Create Architecture**
```
@architect
*create-fullstack-architecture
```
**Architect Process:**
1. Gathers requirements (users, products, orders, payments)
2. Designs system components:
- Frontend: React + Redux
- Backend: Node.js + Express
- Database: PostgreSQL
- Cache: Redis
- Payments: Stripe integration
3. Creates architecture document with:
- System diagrams
- Data models
- API specifications
- Security architecture
- Deployment strategy
**Step 2: Validate Architecture**
```
@architect
*execute-checklist
```
**Output:** `docs/architecture/ecommerce-architecture.md`
#### Phase 2: Product Planning
**Step 3: Create PRD**
```
@pm
*create-prd
```
**PM Process:**
1. Defines product requirements
2. Creates epics:
- Epic 1: User Management
- Epic 2: Product Catalog
- Epic 3: Shopping Cart
- Epic 4: Checkout & Payments
- Epic 5: Admin Dashboard
3. Prioritizes features
4. Defines success metrics
**Output:** `docs/prd.md`
#### Phase 3: Shard into Stories
**Step 4: Break Architecture into Stories**
```
@po
*shard-doc docs/architecture/ecommerce-architecture.md docs/stories
```
**PO Process:**
1. Identifies components:
- User service
- Product service
- Cart service
- Order service
- Payment service
2. Creates story sequence:
- Story 1.1: User registration and authentication
- Story 1.2: User profile management
- Story 2.1: Product catalog API
- Story 2.2: Product search and filtering
- Story 2.3: Product detail pages
- Story 3.1: Shopping cart state management
- Story 3.2: Cart API endpoints
- Story 4.1: Checkout workflow
- Story 4.2: Payment integration
- Story 4.3: Order confirmation
3. Adds dependencies (e.g., Story 3.1 requires 1.1, 2.1)
**Output:** `docs/stories/1.1.user-registration.md`, etc.
#### Phase 4: Development
**Step 5: Implement First Story**
```
@dev
*develop-story docs/stories/1.1.user-registration.md
```
**Dev Process:**
1. Sets PSP tracking started timestamp
2. **Task 1: Create User model**
- Writes test: `test_user_creation()`
- Implements User model with email, password fields
- Runs test → PASSES
- Updates File List: `models/user.py`
- Marks task [x]
3. **Task 2: Implement registration endpoint**
- Writes test: `test_post_register_creates_user()`
- Implements `/api/auth/register` endpoint
- Runs test → PASSES
- Updates File List: `routes/auth.py`
- Marks task [x]
4. **Task 3: Add password hashing**
- Writes test: `test_password_is_hashed()`
- Implements bcrypt hashing
- Runs test → PASSES
- Updates File List: `utils/crypto.py`
- Marks task [x]
5. **Task 4: Write integration tests**
- Writes E2E test: `test_user_can_register_and_login()`
- Runs all tests → PASSES
- Marks task [x]
6. Runs full regression → PASSES
7. Updates PSP tracking completed
8. Sets status: "Ready for Review"
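Task 3 above is a typical test-first step: write `test_password_is_hashed()`, then implement. The story uses bcrypt; the sketch below substitutes the standard library's `hashlib.pbkdf2_hmac` so it runs with no third-party dependency — the test shape is the point, not the specific algorithm, and the function names are illustrative:

```python
import hashlib
import os

def hash_password(password: str, salt=None):
    """Return (salt, digest); the story's version would use bcrypt."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def test_password_is_hashed():
    salt, digest = hash_password("s3cret")
    assert digest != b"s3cret"                         # never stored in plaintext
    assert hash_password("s3cret", salt)[1] == digest  # deterministic per salt

test_password_is_hashed()
```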
#### Phase 5: Quality Review
**Step 6: QA Review**
```
@qa
*review docs/stories/1.1.user-registration.md
```
**QA Process:**
1. Reviews code quality
2. Checks test coverage (>90% for auth)
3. Validates security (password hashing, input validation)
4. Tests edge cases
5. Updates QA Results section in story
6. Creates gate: `docs/qa/gates/1.1-user-registration.yml`
7. Gate decision: **PASS**
**Step 7: Commit and Continue**
```
git add .
git commit -m "feat: Add user registration with authentication"
git push
```
Move to next story (1.2, 2.1, etc.)
### Key Takeaways
- ✅ Architecture first, then implementation
- ✅ Break into small, focused stories
- ✅ TDD throughout development
- ✅ Quality gates before merging
- ✅ Systematic progression through workflow
---
## Brownfield: Legacy System Modernization
### Scenario
Modernizing a 10-year-old PHP monolith to microservices with modern tech stack.
### Workflow
#### Phase 1: Document Existing System
**Step 1: Document Legacy Project**
```
@architect
*document-project
```
**Architect Process:**
1. Analyzes existing codebase
2. Documents:
- Current architecture (monolithic PHP)
- Database schema
- API endpoints (if any)
- Business logic patterns
- Integration points
- Technical debt areas
3. Creates source tree
4. Identifies modernization candidates
**Output:** `docs/architecture/legacy-system-docs.md`
#### Phase 2: Plan Modernization
**Step 2: Create Brownfield Architecture**
```
@architect
*create-brownfield-architecture
```
**Architect Process:**
1. Reviews legacy documentation
2. Designs migration strategy:
- **Strangler Fig Pattern**: Gradually replace modules
- **Phase 1**: Extract user service
- **Phase 2**: Extract product service
- **Phase 3**: Extract order service
3. Plans parallel running (old + new)
4. Defines rollback procedures
5. Specifies feature flags
**Output:** `docs/architecture/modernization-architecture.md`
#### Phase 3: Create Modernization Story
**Step 3: Create Brownfield Story**
```
@po
*create-story
```
**Story:** Extract User Service from Monolith
**Acceptance Criteria:**
- New user service handles authentication
- Facade routes requests to new service
- Legacy code still accessible via facade
- All existing user tests pass
- Feature flag controls routing
- Performance unchanged or improved
#### Phase 4: Risk Assessment (CRITICAL for Brownfield)
**Step 4: Assess Integration Risks**
```
@qa
*risk docs/stories/1.1.extract-user-service.md
```
**QA Process:**
1. **Identifies Risks:**
- **High**: Breaking authentication for existing users (P=8, I=9, Score=72)
- **High**: Data migration failures (P=6, I=9, Score=54)
- **Medium**: Performance degradation (P=5, I=7, Score=35)
- **Medium**: Session handling mismatches (P=6, I=6, Score=36)
2. **Documents Mitigation:**
- Comprehensive integration tests
- Parallel running with feature flag
- Gradual rollout (5% → 25% → 50% → 100%)
- Rollback procedure documented
- Performance monitoring
3. **Risk Score:** 72 (High) - Requires enhanced testing
**Output:** `docs/qa/assessments/1.1-extract-user-service-risk-20251022.md`
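The P (probability) × I (impact) scoring used above is simple to compute. The band cut-offs in this sketch are inferred from the examples in this document (72 and 54 are High, 35–36 are Medium) and should be read as illustrative, not as an official PRISM scale:

```python
def risk_score(probability: int, impact: int) -> int:
    """Risk score = probability x impact, each rated 1-10."""
    return probability * impact

def risk_band(score: int) -> str:
    """Illustrative banding inferred from the worked examples."""
    if score >= 48:
        return "High"
    if score >= 25:
        return "Medium"
    return "Low"

print(risk_score(8, 9), risk_band(risk_score(8, 9)))  # 72 High
```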
**Step 5: Design Test Strategy**
```
@qa
*design docs/stories/1.1.extract-user-service.md
```
**QA Process:**
1. **Unit Tests** (15 scenarios):
- User service authentication logic
- Password validation
- Token generation
2. **Integration Tests** (12 scenarios):
- Facade routing logic
- New service endpoints
- Database operations
- Session management
3. **E2E Tests** (8 scenarios) - P0 Critical:
- Existing user can still login (legacy path)
- New user registers and logs in (new path)
- Feature flag switches between paths
- Session persists across services
4. **Regression Tests** (20 scenarios):
- All existing user functionality still works
- No performance degradation
- All legacy integrations intact
**Output:** `docs/qa/assessments/1.1-extract-user-service-test-design-20251022.md`
#### Phase 5: Strangler Pattern Implementation
**Step 6: Implement with Strangler Pattern**
```
@dev
*strangler docs/stories/1.1.extract-user-service.md
```
**Dev Process:**
1. **Task 1: Create new user service**
- Writes unit tests for new service
- Implements Node.js user service
- Tests pass
2. **Task 2: Create facade layer**
- Writes tests for routing logic
- Implements facade in legacy codebase
- Routes to legacy by default
- Tests pass
3. **Task 3: Add feature flag**
- Writes tests for flag logic
- Implements flag: `USE_NEW_USER_SERVICE`
- Tests both paths
4. **Task 4: Data migration script**
- Writes tests for migration
- Implements safe migration with rollback
- Tests on copy of production data
5. **Task 5: Integration tests**
- Writes tests for both old and new paths
- Validates facade routing
- Tests session management
6. **Task 6: Performance tests**
- Benchmarks legacy performance
- Tests new service performance
- Validates no degradation
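Tasks 2 and 3 above — the facade plus the `USE_NEW_USER_SERVICE` flag — can be sketched in a few lines. The flag name matches the story; the handler functions are stand-ins for the real legacy and new services:

```python
FLAGS = {"USE_NEW_USER_SERVICE": False}   # default: legacy path

def legacy_login(user: str) -> str:
    return f"legacy:{user}"

def new_service_login(user: str) -> str:
    return f"new:{user}"

def login_facade(user: str) -> str:
    """Route to the new service only when the flag is on; default to legacy."""
    if FLAGS["USE_NEW_USER_SERVICE"]:
        return new_service_login(user)
    return legacy_login(user)

print(login_facade("alice"))              # legacy:alice
FLAGS["USE_NEW_USER_SERVICE"] = True
print(login_facade("alice"))              # new:alice
```

Defaulting to the legacy path means an unset or broken flag fails safe, which is what makes the gradual rollout in Phase 8 possible.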
#### Phase 6: Validation During Development
**Step 7: Trace Requirements Coverage**
```
@qa
*trace docs/stories/1.1.extract-user-service.md
```
**QA Process:**
1. Maps each AC to tests:
- AC1 (new service auth) → 8 unit, 4 integration, 2 E2E tests
- AC2 (facade routing) → 3 integration, 2 E2E tests
- AC3 (legacy still works) → 12 regression tests
- AC4 (tests pass) → All 20 legacy tests + 35 new tests
- AC5 (feature flag) → 4 integration, 3 E2E tests
- AC6 (performance) → 5 performance benchmark tests
2. **Coverage:** 100% of ACs covered
3. **Gaps:** None identified
**Output:** `docs/qa/assessments/1.1-extract-user-service-trace-20251022.md`
**Step 8: NFR Validation**
```
@qa
*nfr docs/stories/1.1.extract-user-service.md
```
**QA Process:**
1. **Performance:**
- Login latency: 120ms (legacy) → 95ms (new) ✅
- Throughput: 500 req/s (legacy) → 600 req/s (new) ✅
2. **Security:**
- Password hashing: bcrypt → argon2 (stronger) ✅
- Token expiry: 24h → 1h (more secure) ✅
- SQL injection tests: All pass ✅
3. **Reliability:**
- Error handling: Comprehensive ✅
- Retry logic: 3 retries with backoff ✅
- Circuit breaker: Implemented ✅
**Output:** `docs/qa/assessments/1.1-extract-user-service-nfr-20251022.md`
#### Phase 7: Comprehensive Review
**Step 9: Full QA Review**
```
@qa
*review docs/stories/1.1.extract-user-service.md
```
**QA Process:**
1. **Code Quality:** Excellent, follows Node.js best practices
2. **Test Coverage:** 95% unit, 88% integration, 100% critical E2E
3. **Security:** Enhanced security with argon2, proper token handling
4. **Performance:** 20% improvement over legacy
5. **Integration Safety:** Facade pattern ensures safe rollback
6. **Regression:** All 20 legacy tests pass
7. **Documentation:** Complete rollback procedure
**Gate Decision:** **PASS**
**Output:**
- QA Results in story file
- `docs/qa/gates/1.1-extract-user-service.yml`
#### Phase 8: Gradual Rollout
**Step 10: Deploy with Feature Flag**
1. Deploy with flag OFF (0% new service)
2. Enable for 5% of users
3. Monitor for 24 hours
4. If stable, increase to 25%
5. Monitor for 48 hours
6. If stable, increase to 50%
7. Monitor for 1 week
8. If stable, increase to 100%
9. Monitor for 1 month
10. If stable, remove facade, deprecate legacy
### Key Takeaways
- ✅ **ALWAYS** run risk assessment before brownfield work
- ✅ Strangler fig pattern for safe migration
- ✅ Feature flags for gradual rollout
- ✅ Comprehensive regression testing
- ✅ Performance benchmarking
- ✅ Rollback procedures documented
- ✅ Enhanced QA validation throughout
---
## API Integration
### Scenario
Integrating Stripe payment processing into existing e-commerce platform.
### Workflow
**Step 1: Create Story**
```
@po
*create-story
```
**Story:** Integrate Stripe for Payment Processing
**Step 2: Risk Assessment**
```
@qa
*risk docs/stories/3.1.stripe-integration.md
```
**Risks Identified:**
- Payment failures (P=6, I=9, Score=54) - High
- Data security (P=4, I=9, Score=36) - Medium-High
- API rate limits (P=5, I=5, Score=25) - Medium
**Step 3: Test Design**
```
@qa
*design docs/stories/3.1.stripe-integration.md
```
**Test Strategy:**
- Unit: Payment amount calculation, currency conversion
- Integration: Stripe API calls, webhook handling
- E2E: Complete checkout with test cards (P0)
**Step 4: Implement**
```
@dev
*develop-story docs/stories/3.1.stripe-integration.md
```
**Implementation:**
1. Stripe SDK integration
2. Payment intent creation
3. Webhook handler for payment events
4. Error handling and retries
5. Idempotency keys for safety
6. Comprehensive logging
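Step 5's idempotency keys deserve a concrete shape: replaying a request with the same key must return the stored result instead of charging twice. An illustrative in-memory guard (a real integration would persist keys and lean on the payment provider's own idempotency support rather than this stand-in):

```python
_results: dict = {}   # idempotency key -> charge id (in-memory for this sketch)

def create_payment(idempotency_key: str, amount_cents: int) -> str:
    if idempotency_key in _results:
        return _results[idempotency_key]   # replay: no duplicate charge
    charge_id = f"ch_{len(_results) + 1}"  # stand-in for a provider API call
    _results[idempotency_key] = charge_id
    return charge_id

first = create_payment("order-42", 1999)
again = create_payment("order-42", 1999)   # e.g. a client retry after timeout
print(first == again)  # True: the retry is a no-op replay
```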
**Step 5: Review**
```
@qa
*review docs/stories/3.1.stripe-integration.md
```
**QA Checks:**
- PCI compliance validation
- Error handling for all Stripe exceptions
- Webhook signature verification
- Idempotency testing
- Test card scenarios
**Gate:** **PASS WITH CONCERNS**
- Concern: Need production monitoring alerts
- Action: Add CloudWatch alerts for payment failures
### Key Takeaways
- ✅ External integrations need comprehensive error handling
- ✅ Security is critical for payment processing
- ✅ Test with provider's test environment
- ✅ Idempotency prevents duplicate charges
- ✅ Monitoring and alerting essential
---
## Bug Fix in Complex System
### Scenario
Users report intermittent authentication failures in production.
### Workflow
**Step 1: Create Bug Story**
```
@po
*create-story
```
**Story:** Fix intermittent authentication failures
**AC:**
- Identify root cause of authentication failures
- Implement fix
- Add tests to prevent regression
- No new failures in production
**Step 2: Risk Profile**
```
@qa
*risk docs/stories/2.5.fix-auth-failures.md
```
**Risks:**
- Side effects in auth system (P=6, I=8, Score=48)
- Performance impact (P=4, I=6, Score=24)
**Mitigation:**
- Comprehensive regression tests
- Performance benchmarks
**Step 3: Investigate and Implement**
```
@dev
*develop-story docs/stories/2.5.fix-auth-failures.md
```
**Investigation:**
1. Reviews logs → Finds race condition in token validation
2. Writes failing test reproducing the race condition
3. Fixes: Adds proper locking around token validation
4. Test now passes
5. Adds performance test to ensure no degradation
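The fix in step 3 serializes a check-then-act sequence on shared token state. A minimal sketch of that locking pattern (the class and its semantics are invented for illustration; the story's actual validation logic is not shown in this document):

```python
import threading

class TokenValidator:
    def __init__(self):
        self._lock = threading.Lock()
        self._token = None

    def validate(self, token: str) -> bool:
        # Without the lock, two threads could both observe _token is None
        # and initialize it concurrently -- the race behind the failures.
        with self._lock:
            if self._token is None:
                self._token = token
            return self._token == token

validator = TokenValidator()
print(validator.validate("abc"))  # True: first token is pinned
print(validator.validate("xyz"))  # False: does not match the pinned token
```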
**Step 4: Trace Coverage**
```
@qa
*trace docs/stories/2.5.fix-auth-failures.md
```
**Coverage:**
- AC1 (root cause identified): Covered by investigation notes
- AC2 (fix implemented): Covered by 3 unit tests, 2 integration tests
- AC3 (regression tests): 5 new tests added
- AC4 (no new failures): E2E smoke tests pass
**Step 5: Review**
```
@qa
*review docs/stories/2.5.fix-auth-failures.md
```
**QA Validates:**
- Root cause analysis documented
- Fix addresses core issue (race condition)
- Regression tests comprehensive
- No performance degradation
- Error handling improved
**Gate:** **PASS**
### Key Takeaways
- ✅ TDD helps: Reproduce bug in test first
- ✅ Document root cause analysis
- ✅ Regression tests prevent recurrence
- ✅ Performance validation for production fixes
---
## Performance Optimization
### Scenario
Dashboard loading time is 8 seconds, needs to be under 2 seconds.
### Workflow
**Step 1: Create Performance Story**
```
@po
*create-story
```
**Story:** Optimize dashboard loading performance
**AC:**
- Dashboard loads in <2 seconds (P50)
- Dashboard loads in <3 seconds (P95)
- No functionality broken
- Maintain current data freshness
**Step 2: NFR Assessment Early**
```
@qa
*nfr docs/stories/4.2.optimize-dashboard.md
```
**QA Establishes Baselines:**
- Current P50: 8.2s
- Current P95: 12.5s
- Target P50: <2s
- Target P95: <3s
**Step 3: Implement Optimizations**
```
@dev
*develop-story docs/stories/4.2.optimize-dashboard.md
```
**Optimizations:**
1. **Database Query Optimization:**
- Added indexes on frequently queried columns
- Reduced N+1 queries with joins
- Result: Queries 85% faster
2. **Caching:**
- Added Redis cache for dashboard data
- 5-minute TTL
- Result: 70% of requests served from cache
3. **Frontend Optimization:**
- Lazy loading of charts
- Virtual scrolling for tables
- Result: Initial render 60% faster
4. **API Response Optimization:**
- Pagination for large datasets
- Compression enabled
- Result: Payload size reduced 75%
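The caching step above can be sketched as an in-process TTL cache with the same 5-minute expiry. Production code would use Redis as described; this stand-in only mirrors the expiry pattern:

```python
import time

class TTLCache:
    def __init__(self, ttl_seconds: float = 300.0):   # 5-minute TTL
        self.ttl = ttl_seconds
        self._store = {}   # key -> (stored_at, value)

    def get(self, key: str):
        entry = self._store.get(key)
        if entry is None or time.monotonic() - entry[0] > self.ttl:
            return None                                # miss or expired
        return entry[1]

    def set(self, key: str, value) -> None:
        self._store[key] = (time.monotonic(), value)

cache = TTLCache(ttl_seconds=300)
cache.set("dashboard:user-1", {"orders": 12})
print(cache.get("dashboard:user-1"))   # served from cache within the TTL
```

The 5-minute TTL is the explicit trade-off behind the "data freshness" acceptance criterion: stale by at most five minutes in exchange for most requests skipping the database.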
**Step 4: Validate NFRs**
```
@qa
*nfr docs/stories/4.2.optimize-dashboard.md
```
**QA Measures:**
- New P50: 1.7s ✅ (Target: <2s)
- New P95: 2.4s ✅ (Target: <3s)
- Functionality: All tests pass ✅
- Data freshness: 5-min delay acceptable ✅
**Step 5: Review**
```
@qa
*review docs/stories/4.2.optimize-dashboard.md
```
**Gate:** **PASS**
**Improvements:**
- 79% reduction in load time
- 81% reduction in P95
- All functionality preserved
### Key Takeaways
- ✅ Establish baselines before optimization
- ✅ Measure after each change
- ✅ Multiple optimization techniques
- ✅ Validate functionality not broken
- ✅ Early NFR assessment guides work
---
## Security Enhancement
### Scenario
Adding two-factor authentication (2FA) to user accounts.
### Workflow
**Step 1: Create Security Story**
```
@po
*create-story
```
**Story:** Add Two-Factor Authentication
**AC:**
- Users can enable 2FA with authenticator apps
- 2FA required for sensitive operations
- Backup codes provided
- SMS fallback option
- Graceful degradation if service unavailable
**Step 2: Risk Assessment**
```
@qa
*risk docs/stories/1.5.add-2fa.md
```
**Risks:**
- Lockout scenarios (P=5, I=8, Score=40)
- SMS service failures (P=4, I=6, Score=24)
- Backup code mismanagement (P=3, I=7, Score=21)
**Mitigation:**
- Admin override for lockouts
- Fallback to email if SMS fails
- Secure backup code storage
**Step 3: Security-Focused Design**
```
@qa
*design docs/stories/1.5.add-2fa.md
```
**Test Strategy:**
- **Security Tests (P0):**
- Brute force protection on 2FA codes
- Backup code single-use validation
- Rate limiting on verification attempts
- Time-based code expiration
- **Unit Tests:**
- TOTP code generation and validation
- Backup code generation
- SMS formatting
- **Integration Tests:**
- 2FA enable/disable flow
- Verification with authenticator
- SMS delivery
- **E2E Tests:**
- Complete 2FA enrollment
- Login with 2FA enabled
- Backup code usage
- Account recovery
**Step 4: Implement**
```
@dev
*develop-story docs/stories/1.5.add-2fa.md
```
**Implementation:**
1. TOTP library integration
2. QR code generation for authenticator setup
3. Backup codes (cryptographically secure)
4. SMS integration with Twilio
5. Rate limiting (5 attempts per 15 minutes)
6. Admin override capability
7. Audit logging for all 2FA events
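To make the "follows RFC 6238" check in the review concrete, here is a minimal standard-library TOTP sketch. It reproduces the RFC's test vectors, but production code should use a vetted library (e.g. pyotp) rather than a hand-rolled implementation:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at=None, step: int = 30) -> str:
    """RFC 6238 TOTP: HOTP keyed by the current 30-second time window."""
    counter = int((at if at is not None else time.time()) // step)
    return hotp(secret, counter)

# RFC 6238's reference secret at T=59 falls in counter window 1:
print(totp(b"12345678901234567890", at=59))  # 287082
```

The 30-second step is the "time window" the review calls appropriate; rate limiting (5 attempts per 15 minutes, per the implementation list) sits in front of this verification, not inside it.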
**Step 5: Security Review**
```
@qa
*review docs/stories/1.5.add-2fa.md
```
**QA Security Checks:**
- ✅ TOTP implementation follows RFC 6238
- ✅ Backup codes are cryptographically random
- ✅ Codes stored hashed, not plaintext
- ✅ Rate limiting prevents brute force
- ✅ Time window appropriate (30 seconds)
- ✅ SMS service failover implemented
- ✅ Audit trail complete
- ✅ Admin override requires MFA
**Gate:** **PASS**
### Key Takeaways
- ✅ Security features need comprehensive threat modeling
- ✅ Multiple fallback mechanisms
- ✅ Audit logging essential
- ✅ Admin override with safeguards
- ✅ Follow established standards (RFC 6238)
---
## Summary: Pattern Recognition
### Greenfield Projects
- Start with architecture
- Break into small stories
- TDD throughout
- Standard QA flow
### Brownfield Projects
- **Always** risk assessment first
- Strangler fig pattern
- Feature flags
- Comprehensive regression testing
- Gradual rollout
### Integrations
- Error handling comprehensive
- Test with provider sandbox
- Idempotency critical
- Monitoring essential
### Bug Fixes
- Reproduce in test first
- Document root cause
- Regression tests
- Validate no side effects
### Performance Work
- Baseline first
- Measure continuously
- Multiple techniques
- Validate functionality preserved
### Security Features
- Threat modeling
- Follow standards
- Multiple fallbacks
- Comprehensive audit trails
---
**Last Updated**: 2025-10-22