Initial commit

Zhongwei Li
2025-11-30 08:57:33 +08:00
commit b4a8193ca4
28 changed files with 7229 additions and 0 deletions


@@ -0,0 +1,371 @@
---
name: Epic Identification
description: This skill should be used when the user asks to "identify epics", "break down vision into epics", "find major features", "discover capability areas", "decompose vision", "group requirements into themes", "define high-level features", "what epics do I need", "turn vision into work items", or "split project into epics". It provides methodology for systematically deriving epics from a vision statement using multiple discovery techniques including user journey mapping, capability decomposition, and stakeholder analysis.
version: 0.2.0
---
# Epic Identification
## Overview
Systematically decompose a product vision into well-defined epics—major capabilities or features that can be further broken down into user stories and tasks. Epics represent significant bodies of work that are too large for a single iteration but directly contribute to achieving the vision.
## Purpose
Epics serve as the middle layer in the requirements hierarchy:
- **Above**: Product Vision (the "why" and "what" at highest level)
- **Epics**: Major capabilities (the "what" at feature level)
- **Below**: User Stories (the "what" at detailed level)
Well-defined epics:
- Organize work into logical, valuable chunks
- Enable roadmap planning and sequencing
- Provide clear scope boundaries for teams
- Facilitate prioritization of major capabilities
## When to Use This Skill
Use epic identification when:
- Vision document exists and needs to be broken down
- User asks what major features or capabilities are needed
- Planning a product roadmap from a vision
- Validating that all necessary epics have been identified
- Refining or adding to an existing set of epics
**Prerequisite:** Vision must exist before identifying epics. If no vision exists, use the vision-discovery skill first.
## Epic Identification Process
### Step 1: Review the Vision
Begin by thoroughly understanding the vision:
**Key Actions:**
- Read the vision issue in GitHub Projects
- Identify core capabilities mentioned or implied
- Note user goals and success metrics
- Understand scope boundaries (what's included/excluded)
**Extract Signals:**
- What major capabilities does the solution need?
- What user journeys must be supported?
- What integration points or dependencies exist?
- What success metrics drive capability requirements?
### Step 2: Identify Major Capabilities
Break down the vision into distinct major capabilities:
**Discovery Techniques:**
**User Journey Mapping:**
- What are the end-to-end journeys users will take?
- Each major journey often maps to one or more epics
- Example: "User Onboarding", "Content Creation", "Analytics & Reporting"
**Capability Decomposition:**
- What are the 5-10 major things this product must do?
- Group related functionality into logical capabilities
- Example: "User Authentication", "Data Import/Export", "Collaboration Features"
**Stakeholder Needs:**
- What capabilities do different user types need?
- Admin vs. end-user capabilities
- Example: "User Management", "Permissions & Access Control"
**Technical Enablers:**
- What infrastructure or foundational capabilities are required?
- APIs, integrations, data pipelines
- Example: "Third-party Integrations", "Data Synchronization"
### Step 3: Define Epic Characteristics
For each identified capability, determine if it qualifies as an epic:
**Epic Criteria:**
- **Valuable**: Delivers significant user or business value
- **Large**: Too big to complete in a single iteration (typically multiple user stories)
- **Cohesive**: Represents a logical grouping of related functionality
- **Bounded**: Has clear scope—what's included and excluded
- **Measurable**: Success can be defined and tracked
**Size Guideline:**
- An epic typically contains 3-12 user stories
- Takes multiple sprints/iterations to complete
- If smaller, consider combining with related epics
- If larger, consider splitting into multiple epics
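The size guideline can be expressed as a quick check. This is an illustrative sketch, not a prescribed schema — the `EpicCandidate` fields are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class EpicCandidate:
    name: str
    story_count: int               # estimated number of user stories
    has_clear_scope: bool          # included/excluded boundaries defined
    success_criteria: list = field(default_factory=list)

def size_check(epic: EpicCandidate) -> str:
    """Apply the 3-12 story size guideline."""
    if epic.story_count < 3:
        return "consider combining with a related epic"
    if epic.story_count > 12:
        return "consider splitting into multiple epics"
    return "right-sized"
```

A candidate like `EpicCandidate("Shopping Cart & Checkout", 8, True)` comes back `"right-sized"`, while a two-story candidate is flagged for combining.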
### Step 4: Name and Describe Each Epic
Create clear, descriptive titles and summaries:
**Epic Naming:**
- Use noun phrases describing the capability
- Be specific but concise (3-6 words)
- Focus on "what" not "how"
**Good Examples:**
- "User Authentication & Authorization"
- "Campaign Performance Dashboard"
- "Automated Email Notifications"
- "Third-party Calendar Integration"
**Poor Examples:**
- "Build the backend" (too vague, technical)
- "Make users happy" (outcome, not capability)
- "Phase 1" (not descriptive)
### Epic Issue Template (Minimal)
```markdown
## Epic Overview
[Brief description]
## Value Proposition
[Why this matters]
## Scope
- Included: [capabilities]
- Excluded: [out of scope]
## Success Criteria
- [ ] [Measurable outcomes]
```
See `references/epic-template.md` for comprehensive templates and domain-specific examples.
### Step 5: Validate Completeness
Ensure all necessary epics have been identified:
**Validation Questions:**
- Do these epics, collectively, deliver the full vision?
- Are there gaps in user journeys or capabilities?
- Have we covered all target user types and their needs?
- Are success metrics from the vision addressable with these epics?
- Have we identified necessary infrastructure or technical epics?
**Gap Analysis Technique:**
- Map epics back to vision sections (problem, users, capabilities, metrics)
- Identify vision elements not covered by any epic
- Create additional epics to fill gaps
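The gap analysis above reduces to a set difference. The vision elements and epic-to-element mapping below are hypothetical examples:

```python
# Vision sections every epic set should collectively cover.
vision_elements = {"problem", "target_users", "capabilities", "metrics"}

# Hypothetical mapping: epic -> vision elements it addresses.
epic_coverage = {
    "Customer Data Management": {"problem", "capabilities"},
    "Analytics & Reporting":    {"metrics", "capabilities"},
}

def uncovered(vision: set, coverage: dict) -> set:
    """Vision elements not addressed by any epic — each gap needs a new epic."""
    covered = set().union(*coverage.values()) if coverage else set()
    return vision - covered
```

Here `uncovered(vision_elements, epic_coverage)` reports that no epic yet serves `target_users`, signalling a missing epic.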
### Step 6: Organize and Prioritize
Structure epics for planning and sequencing:
**Logical Grouping:**
- Group related epics (e.g., all authentication-related, all reporting-related)
- Identify epic clusters that deliver cohesive value together
**Dependency Mapping:**
- Which epics must come before others?
- What's the critical path through epic delivery?
- Example: "User Authentication" likely precedes "User Profile Management"
**Initial Prioritization:**
- Apply MoSCoW framework (Must/Should/Could/Won't)
- Consider value, risk, dependencies, effort
- Use the prioritization skill for detailed prioritization
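Dependency mapping amounts to a topological ordering of the epics. A minimal sketch using Python's standard `graphlib` (the epic names and dependencies are illustrative):

```python
from graphlib import TopologicalSorter

# Hypothetical dependency map: epic -> set of epics it depends on.
deps = {
    "User Authentication":     set(),
    "User Profile Management": {"User Authentication"},
    "Collaboration Features":  {"User Authentication", "User Profile Management"},
}

# static_order() yields a valid delivery sequence: every epic
# appears after all of its prerequisites.
order = list(TopologicalSorter(deps).static_order())
```

`graphlib` also raises `CycleError` if the dependency map is circular, which is itself a useful validation of the epic breakdown.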
### Step 7: Create Epic Issues in GitHub Projects
For each epic, create a GitHub issue in the relevant GitHub Project:
**Issue Title:** "[Epic Name]"
**Issue Description:** Full epic definition using the epic template
**Custom Fields:**
- Type: Epic
- Priority: [Must Have / Should Have / Could Have]
- Status: Not Started
**Labels:**
- `type:epic`
- `priority:[moscow-level]`
**Parent:** Link to Vision issue as parent
All user stories for this epic will be created as child issues, establishing hierarchy.
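As a sketch, the title and label conventions above can be assembled into a payload for GitHub's REST endpoint for creating issues (`POST /repos/{owner}/{repo}/issues`). Note that Projects custom fields and parent links cannot be set through this endpoint — they typically require the Projects (GraphQL) API — so this covers only title, body, and labels:

```python
def epic_issue_payload(name: str, body: str, moscow: str) -> dict:
    """Build the JSON body for GitHub's create-issue REST endpoint.

    moscow is a MoSCoW level such as "Must Have"; it is normalized
    into a priority label like "priority:must-have".
    """
    return {
        "title": name,
        "body": body,
        "labels": ["type:epic", f"priority:{moscow.lower().replace(' ', '-')}"],
    }
```

For example, `epic_issue_payload("Shopping Cart & Checkout", epic_body, "Must Have")` yields labels `["type:epic", "priority:must-have"]`.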
## Epic Templates and Patterns
### Common Epic Patterns
**User-Facing Capabilities:**
- User Onboarding & Registration
- Profile & Settings Management
- Core Workflow/Activity (varies by product)
- Search & Discovery
- Notifications & Alerts
**Data & Content:**
- Data Import/Export
- Content Creation & Editing
- Content Organization (tags, folders, etc.)
- Data Visualization & Reporting
**Collaboration & Sharing:**
- Team/Organization Management
- Permissions & Access Control
- Sharing & Collaboration Features
- Activity Feeds & History
**Integration & APIs:**
- Third-party Integrations
- Public API
- Webhooks & Event Streaming
**Infrastructure/Technical:**
- Authentication & Authorization
- Performance & Scalability
- Data Migration
- Offline Support
### Example: E-commerce Product
**Vision:** "Enable small businesses to sell products online easily"
**Identified Epics:**
1. Product Catalog Management
2. Shopping Cart & Checkout
3. Payment Processing Integration
4. Order Management & Fulfillment
5. Customer Account Management
6. Admin Dashboard & Analytics
7. Marketing & Promotions
8. Email Notifications
Each maps to a major capability needed to deliver the vision.
## Best Practices
### Right Level of Granularity
Epics should be:
- **Not too big**: "Build the entire platform" → Split into multiple epics
- **Not too small**: "Add a button" → This is a task, not an epic
- **Just right**: "Shopping Cart & Checkout" → Major capability with multiple stories
### Focus on Capabilities, Not Implementation
❌ "Build React components for dashboard"
✅ "Analytics Dashboard"
❌ "Set up PostgreSQL database"
✅ "Data Storage & Persistence" (if it's a major capability)
### Ensure User-Centric Value
Every epic should answer: "What can users do with this that they couldn't before?"
If an epic is purely technical with no user-facing impact, consider:
- Is it really necessary as a standalone epic?
- Can it be folded into a user-facing epic?
- Is it an enabler for multiple epics? (Then it is valid as an infrastructure epic)
### Avoid Epic Overlap
Epics should be distinct and non-overlapping:
- Clear boundaries between epics
- Related functionality grouped into one epic, not split across several
- If unsure, combine into one epic and split later if needed
### Plan for Iteration
Epics will likely be refined:
- Initial identification may miss epics—add them as discovered
- Epics may be split or combined as understanding grows
- Scope boundaries may shift during user story creation
- This is normal—embrace learning and adaptation
## Integration with Requirements Lifecycle
### Before Epic Identification
**Vision exists** (created via vision-discovery skill)
- Problem, users, solution, success metrics defined
- Scope boundaries established
### During Epic Identification
**Create epic issues** in GitHub Projects
- Each epic is a child of the vision issue
- Epics organized and prioritized
### After Epic Identification
**Proceed to user story creation** (user-story-creation skill)
- Select an epic and break it down into stories
- Iterate epic-by-epic until all epics have stories
## Common Pitfalls to Avoid
### Too Many Epics
- More than 15-20 epics often indicates too much granularity
- Consider combining related epics
- Large products may need epic grouping into themes/initiatives
### Too Few Epics
- Fewer than 5 epics often indicates insufficient breakdown
- Vision may need decomposition into more specific capabilities
- Consider all user types, journeys, and infrastructure needs
### Implementation-Focused Epics
❌ "API Development"
✅ "Third-party Integration Support"
❌ "Database Schema"
✅ "Data Storage & Management" (if user-facing)
### Missing Infrastructure Epics
Don't forget necessary enablers:
- Authentication/Authorization
- Data migration/import
- Performance optimization (if critical to UX)
- Compliance/Security features
## Quick Reference: Epic Identification Flow
1. **Review Vision** → Understand problem, users, capabilities, metrics
2. **Identify Capabilities** → Use journey mapping, decomposition, stakeholder needs
3. **Validate as Epics** → Check criteria: valuable, large, cohesive, bounded, measurable
4. **Name & Describe** → Clear titles, structured descriptions using template
5. **Check Completeness** → Ensure all vision elements covered, no gaps
6. **Organize** → Group logically, map dependencies
7. **Prioritize** → Apply MoSCoW framework
8. **Create Issues** → Add to GitHub Projects as children of vision
9. **Proceed** → Move to user story creation for each epic
## When to Use References
Load references based on context:
- **`references/discovery-techniques.md`**: When applying multiple discovery methods or the user needs guidance on a technique
- **`references/epic-template.md`**: When creating epic issue content or user requests templates
- **`references/common-patterns.md`**: When user's domain is identified for pattern suggestions
## Additional Resources
### Reference Files
For detailed epic templates and examples:
- **`${CLAUDE_PLUGIN_ROOT}/skills/epic-identification/references/epic-template.md`** - Complete epic definition template
- **`${CLAUDE_PLUGIN_ROOT}/skills/epic-identification/references/discovery-techniques.md`** - Six techniques for identifying epics
- **`${CLAUDE_PLUGIN_ROOT}/skills/epic-identification/references/common-patterns.md`** - Universal and domain-specific epic patterns
## Next Steps
After completing epic identification:
1. Create epic issues in GitHub Projects (as children of vision issue)
2. Prioritize epics using the prioritization skill
3. Select highest-priority epic and proceed to user story creation
4. Iterate through all epics, creating user stories for each
Epics provide the roadmap from vision to execution—invest time to identify them comprehensively and define them clearly.


@@ -0,0 +1,233 @@
# Common Epic Patterns
This reference provides universal and domain-specific epic patterns to accelerate epic identification. Use these patterns as starting points, adapting them to your specific product context.
---
## Universal Epic Patterns
These patterns appear across most software products regardless of domain.
### User Management & Identity
| Epic | Description |
|------|-------------|
| User Onboarding & Registration | Sign-up flows, account creation, initial setup |
| Authentication & Authorization | Login, SSO, MFA, session management |
| User Profile Management | Profile editing, preferences, settings |
| Role & Permission Management | Access control, role assignment, permissions |
### Core User Experience
| Epic | Description |
|------|-------------|
| Search & Discovery | Finding content, filtering, navigation |
| Notifications & Alerts | In-app, email, push notifications |
| Help & Support | Documentation, tooltips, support tickets |
| Personalization | User preferences, customization, themes |
### Data & Content
| Epic | Description |
|------|-------------|
| Data Import/Export | Bulk import, export formats, migrations |
| Content Creation & Editing | Create, edit, version content |
| Content Organization | Tags, folders, categories, hierarchies |
| File Management | Upload, storage, preview, download |
### Collaboration & Social
| Epic | Description |
|------|-------------|
| Team/Organization Management | Teams, workspaces, organizations |
| Sharing & Permissions | Share content, access levels |
| Comments & Discussions | Threaded comments, mentions, reactions |
| Activity Feeds & History | Audit logs, activity streams, notifications |
### Integration & Platform
| Epic | Description |
|------|-------------|
| Third-party Integrations | Connect external services |
| Public API | REST/GraphQL API for developers |
| Webhooks & Events | Event-driven integrations |
| Single Sign-On (SSO) | Enterprise identity providers |
### Analytics & Reporting
| Epic | Description |
|------|-------------|
| Dashboards & Visualization | Charts, graphs, real-time displays |
| Report Generation | Scheduled reports, exports |
| Usage Analytics | User behavior, engagement metrics |
| Audit & Compliance | Audit trails, compliance reports |
### Infrastructure
| Epic | Description |
|------|-------------|
| Performance & Scalability | Optimization, caching, load handling |
| Security & Compliance | Encryption, security audits, certifications |
| Data Migration | Legacy system migration, data transformation |
| Offline Support | Offline-first, sync, conflict resolution |
---
## Domain-Specific Patterns
### E-commerce / Marketplace
| Epic | Typical Scope |
|------|---------------|
| Product Catalog Management | Products, categories, inventory, pricing |
| Shopping Cart & Checkout | Cart, checkout flow, guest checkout |
| Payment Processing | Payment gateways, refunds, invoicing |
| Order Management | Order tracking, fulfillment, returns |
| Customer Accounts | Order history, saved addresses, wishlists |
| Marketing & Promotions | Discounts, coupons, campaigns |
| Seller/Vendor Management | Multi-vendor support, seller tools |
| Reviews & Ratings | Product reviews, seller ratings |
### SaaS / B2B Platform
| Epic | Typical Scope |
|------|---------------|
| Subscription Management | Plans, billing, upgrades/downgrades |
| Multi-tenancy | Tenant isolation, tenant administration |
| Admin Console | System configuration, tenant management |
| Usage Metering & Billing | Usage tracking, invoicing, quotas |
| Onboarding & Activation | Trial setup, guided tours, activation |
| Customer Success Tools | Health scores, usage insights |
| White-labeling | Custom branding, domains |
| Enterprise Features | SSO, advanced security, SLAs |
### Mobile Application
| Epic | Typical Scope |
|------|---------------|
| Mobile Authentication | Biometrics, device trust, secure storage |
| Offline Mode | Local storage, sync, conflict resolution |
| Push Notifications | Notification management, deep linking |
| Device Features | Camera, GPS, contacts integration |
| App Performance | Startup time, memory, battery optimization |
| App Store Presence | Listings, ratings, updates |
| Cross-platform Sync | State sync across devices |
| Accessibility | Screen readers, dynamic type, VoiceOver |
### API / Developer Platform
| Epic | Typical Scope |
|------|---------------|
| API Design & Documentation | OpenAPI specs, interactive docs |
| Developer Portal | Registration, API keys, documentation |
| Authentication & Security | OAuth, API keys, rate limiting |
| SDKs & Client Libraries | Language-specific SDKs |
| Sandbox Environment | Test environment, mock data |
| Usage Analytics | API metrics, endpoint analytics |
| Versioning & Deprecation | Version management, migration guides |
| Developer Support | Forums, tickets, status page |
### Content Management / Publishing
| Epic | Typical Scope |
|------|---------------|
| Content Authoring | Rich text editor, media embedding |
| Content Workflow | Draft, review, publish states |
| Media Library | Image/video management, optimization |
| Content Scheduling | Scheduled publishing, content calendar |
| Multi-language Support | Localization, translation management |
| SEO & Metadata | Meta tags, sitemaps, structured data |
| Content Distribution | RSS, social sharing, syndication |
| Templates & Layouts | Page templates, component library |
### Healthcare / Clinical
| Epic | Typical Scope |
|------|---------------|
| Patient Management | Patient records, demographics |
| Clinical Documentation | Notes, orders, results |
| Appointment Scheduling | Calendar, booking, reminders |
| Medication Management | Prescriptions, drug interactions |
| Care Coordination | Referrals, care plans, handoffs |
| Compliance & Privacy | HIPAA, consent management, audit |
| Patient Portal | Patient access, messaging, records |
| Clinical Decision Support | Alerts, guidelines, protocols |
### Financial Services / Fintech
| Epic | Typical Scope |
|------|---------------|
| Account Management | Accounts, balances, statements |
| Transaction Processing | Transfers, payments, scheduling |
| Identity Verification | KYC, document verification |
| Fraud Detection | Monitoring, alerts, investigation |
| Regulatory Compliance | Reporting, audits, regulations |
| Financial Reporting | Statements, tax documents |
| Notifications & Alerts | Transaction alerts, balance notifications |
| Secure Authentication | MFA, device binding, biometrics |
---
## Using Patterns Effectively
### Pattern Selection Process
1. **Identify your domain**: Which domain-specific pattern set applies?
2. **Start with universals**: Most products need user management, notifications, etc.
3. **Add domain patterns**: Layer in domain-specific epics
4. **Customize names**: Adapt generic names to your product's language
5. **Validate against vision**: Ensure patterns align with your specific vision
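The selection process above can be sketched as merging universal patterns with a domain-specific set. The pattern lists here are abbreviated, hypothetical excerpts of the tables above:

```python
UNIVERSAL = [
    "User Onboarding & Registration",
    "Authentication & Authorization",
    "Notifications & Alerts",
    "Search & Discovery",
]

DOMAIN = {
    "ecommerce": ["Product Catalog Management", "Shopping Cart & Checkout",
                  "Payment Processing", "Order Management"],
    "saas":      ["Subscription Management", "Multi-tenancy", "Admin Console"],
}

def starter_epics(domain: str) -> list[str]:
    """Universal patterns first, then domain-specific ones, de-duplicated."""
    seen, result = set(), []
    for name in UNIVERSAL + DOMAIN.get(domain, []):
        if name not in seen:
            seen.add(name)
            result.append(name)
    return result
```

The resulting list is a starting point only — each entry still needs renaming and scoping against the actual vision, per steps 4 and 5.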
### Avoiding Pattern Pitfalls
- **Don't force-fit**: Not every pattern applies to every product
- **Customize scope**: Adjust epic scope to match your product size
- **Combine when small**: Merge related patterns if your product is simpler
- **Split when large**: Break patterns into multiple epics for complex products
- **Validate value**: Each epic should deliver user or business value
### Pattern Adaptation Example
**Generic Pattern**: "User Onboarding & Registration"
| Product Type | Adapted Epic |
|--------------|--------------|
| Consumer app | "Social Sign-up & Profile Setup" |
| Enterprise SaaS | "Organization Provisioning & Admin Setup" |
| Developer tool | "Account & API Key Setup" |
| Healthcare | "Patient Registration & Consent" |
---
## Quick Reference: Epic Starter Sets
### Minimum Viable Product (5-7 epics)
1. User Authentication
2. Core Workflow (product-specific)
3. Data Management
4. Basic Notifications
5. Settings & Profile
### Standard Product (10-15 epics)
All MVP epics plus:
- Advanced User Management
- Search & Discovery
- Collaboration Features
- Analytics Dashboard
- Integrations
- Help & Support
### Enterprise Product (15-25 epics)
All Standard epics plus:
- Multi-tenancy
- Advanced Security
- Compliance & Audit
- Admin Console
- SSO & Enterprise Auth
- Advanced Analytics
- API & Developer Tools


@@ -0,0 +1,262 @@
# Epic Discovery Techniques
This reference provides detailed guidance on six techniques for identifying epics from a product vision. Use these techniques individually or in combination to ensure comprehensive epic coverage.
---
## 1. User Journey Mapping
Map the end-to-end journeys users will take through your product to identify the major capabilities needed at each stage.
### When to Use
- Product has clear user workflows or processes
- Multiple user touchpoints exist
- User experience is a primary concern
### Process
1. **Identify Key User Types**: List the primary personas who will use the product
2. **Map Entry Points**: How do users first encounter or access the product?
3. **Trace Core Workflows**: What steps do users take to achieve their goals?
4. **Identify Exit Points**: How do users complete their journey or leave?
5. **Note Pain Points**: Where might users struggle or need support?
### Epic Extraction
Each major stage or transition in the journey often maps to an epic:
- **Onboarding Journey** → "User Onboarding & Registration" epic
- **Core Activity** → "Content Creation" or "Data Entry" epic
- **Review/Analysis** → "Analytics & Reporting" epic
- **Sharing/Export** → "Collaboration & Sharing" epic
### Example
For a project management tool:
| Journey Stage | Epic Candidate |
|---------------|----------------|
| Sign up and setup | User Onboarding |
| Create first project | Project Management |
| Add team members | Team Collaboration |
| Track progress | Progress Tracking & Reporting |
| Complete and archive | Project Lifecycle Management |
---
## 2. Capability Decomposition
Break down the vision into the 5-10 major things the product must do, grouping related functionality into logical capabilities.
### When to Use
- Vision describes what the product should accomplish
- Product has distinct functional areas
- Technical and business stakeholders need alignment
### Process
1. **List Vision Outcomes**: What does the vision say the product will enable?
2. **Identify Required Capabilities**: What must the product DO to deliver those outcomes?
3. **Group Related Functions**: Cluster similar or dependent capabilities together
4. **Name the Groups**: Give each cluster a capability name (noun phrase)
5. **Validate Coverage**: Does each vision outcome map to at least one capability?
### Epic Extraction
Each capability group becomes an epic candidate:
- Group of authentication functions → "User Authentication & Authorization" epic
- Group of data handling functions → "Data Import/Export" epic
- Group of team features → "Collaboration Features" epic
### Example
Vision: "Enable small businesses to manage customer relationships effectively"
| Capability Group | Functions Included | Epic |
|------------------|-------------------|------|
| Contact Management | Add, edit, search, segment contacts | Customer Data Management |
| Communication | Email, call logging, notes | Customer Communication |
| Pipeline | Deals, stages, forecasting | Sales Pipeline |
| Reporting | Dashboards, exports, analytics | Analytics & Reporting |
---
## 3. Stakeholder Needs Analysis
Examine what different user types and stakeholders need from the product to identify role-specific capabilities.
### When to Use
- Multiple user roles exist (admin, end-user, manager)
- Different stakeholders have different needs
- Access control or permissions are important
### Process
1. **List All Stakeholders**: End users, admins, managers, external parties
2. **Document Each Role's Needs**: What does each stakeholder need to accomplish?
3. **Identify Unique Capabilities**: What capabilities are specific to certain roles?
4. **Find Shared Capabilities**: What do multiple roles need?
5. **Map to Epics**: Group needs into capability-based epics
### Epic Extraction
Role-specific needs often reveal epics:
- Admin needs → "User Management", "System Configuration" epics
- Manager needs → "Reporting & Analytics", "Team Oversight" epics
- End-user needs → "Core Workflow", "Personal Settings" epics
### Example
| Stakeholder | Key Needs | Epic Candidates |
|-------------|-----------|-----------------|
| End User | Create content, collaborate | Content Creation, Collaboration |
| Team Lead | Monitor progress, assign work | Team Management, Reporting |
| Admin | Manage users, configure system | User Management, System Settings |
| External Partner | View shared content | External Sharing & Access |
---
## 4. Technical Enablers Identification
Identify infrastructure, platform, or foundational capabilities required to support user-facing features.
### When to Use
- Product requires significant technical foundation
- Integrations with external systems are needed
- Performance, security, or scalability are critical
### Process
1. **Review User-Facing Epics**: What technical capabilities do they require?
2. **Identify Shared Infrastructure**: What technical needs appear across multiple epics?
3. **List External Dependencies**: What third-party systems must be integrated?
4. **Consider Non-Functional Requirements**: Security, performance, compliance
5. **Create Technical Epics**: Group infrastructure needs into coherent epics
### Epic Extraction
Technical needs become infrastructure epics:
- Authentication/authorization needs → "Identity & Access Management" epic
- External system connections → "Third-party Integrations" epic
- Data synchronization needs → "Data Pipeline & Sync" epic
- Performance requirements → "Performance & Scalability" epic
### Example
| Technical Need | Scope | Epic |
|----------------|-------|------|
| User authentication | SSO, MFA, session management | Identity & Access Management |
| Payment processing | Stripe, PayPal integration | Payment Integration |
| File storage | Upload, CDN, versioning | File Management Infrastructure |
| Search | Full-text, filters, indexing | Search Infrastructure |
---
## 5. Value Stream Mapping
Trace the flow of value from initial input to final outcome to identify where major capabilities are needed.
### When to Use
- Product transforms inputs into valuable outputs
- Process efficiency is important
- Multiple handoffs or stages exist
### Process
1. **Identify Value Input**: What enters the system? (data, requests, content)
2. **Trace Transformations**: How is the input processed and transformed?
3. **Map Value Additions**: Where is value added at each stage?
4. **Identify Outputs**: What valuable outputs are produced?
5. **Extract Capabilities**: What capabilities enable each value-adding step?
### Epic Extraction
Each value-adding stage suggests an epic:
- Input stage → "Data Ingestion" or "Content Upload" epic
- Processing stage → "Data Processing" or "Workflow Engine" epic
- Output stage → "Report Generation" or "Export & Delivery" epic
### Example
For a document processing product:
| Value Stage | Activity | Epic |
|-------------|----------|------|
| Input | Upload documents | Document Ingestion |
| Processing | Extract data, validate | Document Processing |
| Enrichment | Add metadata, classify | Document Intelligence |
| Output | Generate reports, export | Reporting & Export |
| Storage | Archive, retrieve | Document Management |
---
## 6. Gap Analysis
Compare the current state (or competitor offerings) with the desired future state to identify capability gaps that become epics.
### When to Use
- Replacing or improving an existing system
- Competitive analysis has been done
- Clear "before and after" vision exists
### Process
1. **Document Current State**: What exists today? What can users do now?
2. **Define Future State**: What should users be able to do?
3. **Identify Gaps**: What's missing between current and future?
4. **Prioritize Gaps**: Which gaps are most critical to close?
5. **Convert to Epics**: Each significant gap becomes an epic
### Epic Extraction
Gaps become epics:
- Missing capability → New epic for that capability
- Insufficient capability → Enhancement epic
- Broken capability → Fix/rebuild epic
### Example
| Current State | Future State | Gap | Epic |
|---------------|--------------|-----|------|
| Manual data entry | Automated import | Automation | Data Import Automation |
| Basic reports | Interactive dashboards | Visualization | Analytics Dashboard |
| Email notifications | Multi-channel alerts | Channels | Notification System |
| No mobile access | Full mobile app | Platform | Mobile Application |
---
## Combining Techniques
For comprehensive epic identification, use multiple techniques:
1. **Start with User Journey Mapping** to understand the user perspective
2. **Apply Capability Decomposition** to ensure technical completeness
3. **Use Stakeholder Needs** to catch role-specific requirements
4. **Add Technical Enablers** for infrastructure epics
5. **Validate with Gap Analysis** to ensure nothing is missed
Cross-reference results from different techniques to validate epic completeness and identify any gaps.
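Cross-referencing can be sketched as counting how many techniques surfaced each candidate; epics that appear under several techniques are strong candidates, while single-technique candidates deserve a second look. The candidate sets below are hypothetical:

```python
from collections import Counter

# Hypothetical epic candidates produced by three techniques.
by_technique = {
    "journey_mapping":   {"User Onboarding", "Content Creation", "Reporting"},
    "capability_decomp": {"Content Creation", "Reporting", "Integrations"},
    "stakeholder_needs": {"User Onboarding", "User Management", "Reporting"},
}

# Count how many techniques surfaced each candidate.
votes = Counter(epic for epics in by_technique.values() for epic in epics)

# Candidates confirmed by two or more techniques.
strong = {epic for epic, n in votes.items() if n >= 2}
```

A low vote count is not a reason to discard a candidate outright — infrastructure epics in particular often surface through only one technique.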
---
## Quick Reference
| Technique | Best For | Key Question |
|-----------|----------|--------------|
| User Journey Mapping | UX-focused products | "What journey do users take?" |
| Capability Decomposition | Feature-rich products | "What must the product DO?" |
| Stakeholder Needs | Multi-role products | "What does each role need?" |
| Technical Enablers | Complex integrations | "What infrastructure is required?" |
| Value Stream Mapping | Process-oriented products | "How does value flow?" |
| Gap Analysis | Replacements/upgrades | "What's missing today?" |


@@ -0,0 +1,164 @@
# Epic Definition Template
Use this template when creating epic issues in GitHub Projects. Copy the structure below into the issue description, then fill in each section.
---
## Epic: [Epic Name]
### Overview
[Brief description of what this epic delivers—1-2 sentences capturing the essence of this capability]
**Category:** [User-Facing / Infrastructure / Integration / Data & Content / Collaboration]
---
### User Value
**Who benefits:**
[Which user types/personas benefit from this epic]
**Value delivered:**
[What can users do with this capability that they couldn't before? How does it improve their experience or solve their problems?]
**Alignment with Vision:**
[How does this epic contribute to achieving the product vision? Reference specific vision elements]
---
### Scope
**Included in this epic:**
- [Capability 1]
- [Capability 2]
- [Capability 3]
[List the major functionality that IS part of this epic]
**Explicitly excluded:**
- [Not included 1]
- [Not included 2]
[Define boundaries—what related functionality is NOT part of this epic to prevent scope creep]
**Related Epics:**
[List other epics that are related or adjacent to this one]
---
### Success Criteria
**This epic is complete when:**
1. [Criterion 1: Measurable outcome]
2. [Criterion 2: Measurable outcome]
3. [Criterion 3: Measurable outcome]
**Acceptance at Epic Level:**
[High-level acceptance criteria for the entire epic—specific acceptance criteria will be defined at the story/task level]
**Metrics:**
[What metrics will indicate this epic is successful?]
- [Metric 1]
- [Metric 2]
---
### Dependencies
**Prerequisite Epics:**
[Other epics that must be completed (or partially completed) before this epic can be worked on]
- [Epic A: reason why]
- [Epic B: reason why]
**External Dependencies:**
[Third-party services, APIs, or external factors required for this epic]
- [Dependency 1]
- [Dependency 2]
**Blocks:**
[Other epics that are blocked waiting for this epic to complete]
- [Epic X]
- [Epic Y]
---
### Technical Considerations
[Optional: High-level technical notes, architectural considerations, or constraints]
**Key Technical Requirements:**
- [Requirement 1]
- [Requirement 2]
**Known Constraints:**
- [Constraint 1]
- [Constraint 2]
**Risks:**
- [Risk 1: and mitigation]
- [Risk 2: and mitigation]
---
### User Stories
[This section will be populated as stories are created. Each story will be a child issue of this epic]
**Planned Stories:** [Count: TBD]
- Link to Story 1
- Link to Story 2
- [Stories will be linked as children when created]
---
### Estimation & Planning
**Effort Estimate:** [T-Shirt size: S / M / L / XL, or story points if known]
**Target Timeline:** [Optional: Target quarter, milestone, or release]
**Team/Owner:** [Optional: Which team or person is responsible]
---
### Notes
[Any additional context, background, or considerations for this epic]
---
### Definition of Done
At epic level, done means:
- [ ] All user stories created and completed
- [ ] Success criteria met
- [ ] User testing/validation completed (if applicable)
- [ ] Documentation updated
- [ ] Epic reviewed and accepted by stakeholders
---
**Parent:** [Link to Vision Issue]
**Children:** [User Story Issues will be linked here]


@@ -0,0 +1,423 @@
---
name: Prioritization
description: This skill should be used when the user asks to "prioritize requirements", "use MoSCoW", "prioritize epics", "prioritize stories", "prioritize tasks", "what should I build first", "rank features", or when they need to determine the priority order of epics, user stories, or tasks using the MoSCoW framework.
version: 0.2.0
---
# Prioritization
## Overview
Prioritization is the process of determining the relative importance and sequence of requirements at any level—epics, user stories, or tasks. Using the MoSCoW framework (Must Have, Should Have, Could Have, Won't Have), teams can make informed decisions about what to build first, ensuring maximum value delivery within constraints.
## Purpose
Effective prioritization:
- Focuses effort on highest-value work
- Enables incremental delivery of working software
- Manages scope within time and budget constraints
- Aligns team and stakeholders on what matters most
- Provides clear rationale for sequencing decisions
## MoSCoW Framework
### Must Have
**Definition:** Requirements critical for success. Without these, the product fails to deliver core value or is fundamentally broken.
**Characteristics:**
- Non-negotiable for initial release
- Product is not viable without these
- Legal, regulatory, or safety requirements
- Core functionality essential to vision
**Examples:**
- User authentication (for a product requiring accounts)
- Payment processing (for an e-commerce product)
- Core workflow (the main thing users do)
**Questions to Ask:**
- Can we ship without this?
- Does this deliver essential core value?
- Is this legally or contractually required?
- Would users reject the product without this?
**Typical Percentage:** No more than 60% of total requirements
---
### Should Have
**Definition:** Important requirements that significantly enhance value but aren't absolutely critical for initial release. Can be deferred if necessary.
**Characteristics:**
- High impact, not mission-critical
- Significantly improves user experience
- Differentiates from competitors
- Can work around absence (though painful)
**Examples:**
- Advanced filtering and search
- Batch operations
- Export to multiple formats
- Email notifications
**Questions to Ask:**
- Does this significantly improve UX or value?
- Can users achieve their goals without this (even if harder)?
- Does this provide important differentiation?
- Would delaying this cause pain but not failure?
**Typical Percentage:** Around 20% of total requirements
---
### Could Have
**Definition:** Nice-to-have requirements that provide marginal value. Include only if time and resources permit.
**Characteristics:**
- Low impact on core value
- "Nice to have" enhancements
- Polish and convenience features
- Easy to cut if needed
**Examples:**
- Customizable themes
- Additional chart types
- Keyboard shortcuts
- Tooltips and help text
**Questions to Ask:**
- Would users notice if this were missing?
- Does this provide marginal or incremental value?
- Is this primarily for convenience or polish?
- Can this easily be added later?
**Typical Percentage:** Around 20% of total requirements
---
### Won't Have (This Time)
**Definition:** Requirements explicitly excluded from current scope. May be considered for future releases but are off the table now.
**Characteristics:**
- Out of current scope
- Lower priority than other work
- Not aligned with current goals
- Explicitly deferred or rejected
**Examples:**
- Mobile app (when focusing on web first)
- AI-powered features (for MVP)
- Advanced analytics (phase 2 feature)
- Third-party integrations (beyond core)
**Questions to Ask:**
- Does this align with current vision and goals?
- Is this better suited for a future release?
- Does including this risk delaying more important work?
- Can we explicitly say "not now" to this?
**Purpose:** Prevents scope creep by making exclusions explicit
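As an illustrative sketch (not part of the framework itself), the four categories can be modeled in code, which is handy when scripting checks over an exported backlog. The item names and rationales below are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum

class MoSCoW(Enum):
    MUST = "Must Have"
    SHOULD = "Should Have"
    COULD = "Could Have"
    WONT = "Won't Have"

@dataclass
class Requirement:
    name: str
    category: MoSCoW
    rationale: str

# A toy backlog; names and rationales are hypothetical examples
backlog = [
    Requirement("User authentication", MoSCoW.MUST, "product requires accounts"),
    Requirement("Email notifications", MoSCoW.SHOULD, "enhances experience"),
    Requirement("Custom themes", MoSCoW.COULD, "polish feature"),
    Requirement("Mobile app", MoSCoW.WONT, "web-first strategy"),
]

must_haves = [r.name for r in backlog if r.category is MoSCoW.MUST]
```

Using an explicit `Won't Have` value keeps exclusions visible in the data rather than silently dropping them.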
---
## Prioritization Process
### Step 1: Define Context
Establish the scope and constraints:
**Clarify:**
- What are we prioritizing? (Epics? Stories? Tasks?)
- What are the constraints? (Time, budget, resources)
- What's the target? (MVP? V1.0? Next sprint?)
- Who are the stakeholders?
**Set Boundaries:**
- Total number of items to prioritize
- Decision criteria (value, risk, effort, dependencies)
- Time frame for this prioritization
### Step 2: Assess Each Item
For each epic/story/task, evaluate:
**Value to Users:**
- How much does this improve user experience?
- How many users benefit?
- How often will this be used?
**Business Value:**
- Revenue impact?
- Strategic importance?
- Competitive advantage?
**Risk:**
- Technical risk (complexity, unknowns)?
- Market risk (assumptions about user needs)?
- Higher-risk items are often prioritized earlier to accelerate learning
**Effort/Cost:**
- How much work is required?
- Resource needs (people, time, tools)?
- Return on investment?
**Dependencies:**
- What must come before this?
- What is blocked by this?
- External dependencies?
### Step 3: Apply MoSCoW Categories
For each item, assign a MoSCoW category:
**Decision Framework:**
1. **Start with Must Haves:**
- Identify absolute essentials
- Challenge each: "Can we really not ship without this?"
- Aim for <60% in this category
2. **Identify Should Haves:**
- High-value, not mission-critical
- Important for good UX or differentiation
- Can defer if Must Haves at risk
3. **Mark Could Haves:**
- Nice to have if time permits
- Easy to cut without major impact
- Often polish or convenience features
4. **Explicitly List Won't Haves:**
- Items that won't be in current scope
- Document WHY to prevent revisiting
- May be reconsidered in future
### Step 4: Validate and Balance
Review the prioritization:
**Balance Check:**
- Are <60% of items "Must Have"? (If more, challenge them)
- Is there a healthy mix across categories?
- Have we explicitly identified "Won't Haves"?
**Sanity Checks:**
- Do "Must Haves" collectively deliver minimum viable product?
- Can we ship with just "Must Haves" if needed?
- Are dependencies respected? (prerequisite items given high priority)
**Stakeholder Review:**
- Share prioritization with key stakeholders
- Get feedback on category assignments
- Discuss trade-offs and rationale
- Build consensus
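The balance check above can be automated with a minimal sketch, assuming categories are stored as plain strings (e.g., exported from a project board); the 60% threshold mirrors the guidance in this skill:

```python
from collections import Counter

def check_balance(categories, must_limit=0.6):
    """Return per-category share of the backlog and whether
    Must Haves stay within the limit."""
    counts = Counter(categories)
    total = len(categories)
    shares = {cat: counts[cat] / total for cat in counts}
    return shares, shares.get("Must Have", 0) <= must_limit

# Hypothetical backlog: 5 Must, 3 Should, 2 Could
cats = ["Must Have"] * 5 + ["Should Have"] * 3 + ["Could Have"] * 2
shares, balanced = check_balance(cats)
```

Here 5 of 10 items (50%) are Must Haves, so the check passes; at 7 of 10 it would flag the backlog for challenge.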
### Step 5: Sequence Within Categories
Within each MoSCoW category, establish order:
**Sequencing Factors:**
- Dependencies (blockers first)
- Risk (tackle unknowns early for learning)
- Value (highest value first within category)
- Effort (quick wins can build momentum)
**Common Strategies:**
- **Risk-driven:** Tackle high-risk, high-uncertainty items early
- **Value-driven:** Deliver highest-value items first
- **Dependency-driven:** Respect technical dependencies
- **Quick wins:** Mix in some easy, visible wins for morale
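A dependency-driven ordering with a value tiebreak can be sketched as follows; this is one simple strategy, not the only valid one, and the epic names are hypothetical:

```python
def sequence(items, deps, value):
    """Order items so prerequisites come first; among ready items,
    pick the highest-value one next.
    items: list of names; deps: {item: set of prerequisites};
    value: {item: numeric score}."""
    ordered, placed = [], set()
    remaining = set(items)
    while remaining:
        # Items whose prerequisites have all been placed
        ready = [i for i in remaining if deps.get(i, set()) <= placed]
        if not ready:
            raise ValueError(f"Circular dependency among: {sorted(remaining)}")
        ready.sort(key=lambda i: -value.get(i, 0))
        nxt = ready[0]
        ordered.append(nxt)
        placed.add(nxt)
        remaining.remove(nxt)
    return ordered

order = sequence(
    ["Checkout", "Cart", "Catalog"],
    deps={"Checkout": {"Cart"}, "Cart": {"Catalog"}},
    value={"Checkout": 3, "Cart": 2, "Catalog": 1},
)
```

Even though Checkout has the highest value, the ordering places Catalog and Cart first because Checkout depends on them.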
### Step 6: Document and Communicate
Record prioritization decisions:
**In GitHub Projects:**
- Update "Priority" custom field on issues
- Add priority labels (priority:must-have, etc.)
- Order issues in project views by priority
**Rationale:**
- Document WHY items are prioritized as they are
- Capture trade-offs and decisions made
- Reference for future prioritization
**Communication:**
- Share prioritized backlog with team
- Explain sequencing and rationale
- Set expectations about what's in/out of scope
## Prioritization at Different Levels
### Prioritizing Epics
**Context:** Determining which major capabilities to build first
**Considerations:**
- Strategic alignment with vision
- Foundation vs. enhancement (build foundation first)
- User journey completeness (can users accomplish goals?)
- Market differentiation (what makes you unique?)
**Example:**
- Must Have: User Authentication, Core Workflow, Payment Processing
- Should Have: Advanced Analytics, Team Collaboration
- Could Have: Custom Branding, API Access
- Won't Have: Mobile App (Web-first strategy)
### Prioritizing User Stories
**Context:** Determining which stories within an epic to implement first
**Considerations:**
- Happy path before edge cases
- Core functionality before enhancements
- Foundation before polish
- High-frequency use cases before rare ones
**Example (within "Campaign Management" epic):**
- Must Have: Create campaign, View campaign list, Edit campaign basics
- Should Have: Duplicate campaign, Archive campaign, Bulk operations
- Could Have: Campaign templates, Custom fields
- Won't Have: Campaign scheduling (separate epic)
### Prioritizing Tasks
**Context:** Determining sequence of implementation tasks within a story
**Considerations:**
- Technical dependencies (backend before frontend)
- Iterative progress (working slice early, then enhance)
- Testing alongside feature work (not all at end)
- Documentation concurrent with implementation
**Example (within "Filter campaigns by date" story):**
- Must Have: Backend date filtering logic, Basic UI with date pickers, Integration
- Should Have: Validation and error handling, Unit tests
- Could Have: Date range presets (Last 7 days, Last 30 days)
- Won't Have: Save filter preferences (separate story)
## Best Practices
### Challenge "Must Haves"
Everything feels critical to someone:
- Use strict criteria: "Product is broken without this"
- Push back on inflated urgency
- Ask: "Can we ship an MVP without this?"
- Aim to keep "Must Haves" under 60%
### Be Explicit About "Won't Haves"
Prevent scope creep:
- Document what's explicitly out of scope
- Explain why (not aligned, too costly, wrong time)
- Revisit in future planning cycles
- Helps manage stakeholder expectations
### Consider Technical Dependencies
Priority isn't just about value:
- Some items must come before others (architecture, data model)
- Foundation before features built on it
- Don't prioritize item X high if it depends on low-priority item Y
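A quick way to catch that last problem is to scan for "priority inversions" — items ranked above something they depend on. A minimal sketch, assuming lower rank numbers mean higher priority (the item names are hypothetical):

```python
def priority_inversions(rank, deps):
    """Find (item, prerequisite) pairs where the item is ranked
    higher (smaller number) than a prerequisite it depends on."""
    return [
        (item, dep)
        for item, prereqs in deps.items()
        for dep in prereqs
        if rank[item] < rank[dep]
    ]

# "Analytics" is ranked #1 but depends on the #5-ranked "Event pipeline"
rank = {"Analytics": 1, "Event pipeline": 5}
issues = priority_inversions(rank, {"Analytics": {"Event pipeline"}})
```

Each pair returned is a candidate for either raising the prerequisite's priority or lowering the dependent item's.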
### Revisit and Refine
Priorities change:
- New information emerges
- Market conditions shift
- User feedback reveals new priorities
- Re-prioritize at regular intervals (quarterly, per release)
### Involve Stakeholders
Prioritization is collaborative:
- Product owners provide business perspective
- Developers provide technical perspective
- Users/customers provide value perspective
- Build consensus on trade-offs
### Use Data When Available
Inform decisions with evidence:
- Usage analytics (what features are used most?)
- User research (what do users need most?)
- Revenue data (what drives business value?)
- Competitor analysis (what's table stakes?)
## Common Pitfalls to Avoid
### Everything is a "Must Have"
If everything is critical, nothing is:
- Challenge assumptions
- Force trade-offs
- Use strict criteria for "Must Have"
### Ignoring Technical Dependencies
Prioritizing features without considering what they depend on:
- Map dependencies
- Prioritize prerequisites appropriately
- Consider architectural foundations
### Forgetting "Won't Have"
Scope creep occurs when exclusions aren't explicit:
- Actively identify what's out of scope
- Document and communicate "Won't Haves"
- Revisit them in future planning
### Prioritizing Based on Who Shouts Loudest
Letting the loudest voice determine priority skews decisions:
- Use objective criteria
- Base decisions on data and strategy
- Build consensus across stakeholders
### Never Re-Prioritizing
Priorities set once and never revisited:
- Revisit priorities regularly
- Adjust based on new information
- Stay flexible and adaptive
## Quick Reference: Prioritization Flow
1. **Define Context** → What are we prioritizing? Constraints? Goals?
2. **Assess Items** → Evaluate value, risk, effort, dependencies
3. **Apply MoSCoW** → Assign each item to a category
4. **Validate Balance** → Check distribution, sanity check, stakeholder review
5. **Sequence** → Order within categories (dependencies, risk, value)
6. **Document** → Update GitHub Projects, record rationale
7. **Communicate** → Share with team and stakeholders
8. **Execute** → Build in priority order (Must → Should → Could)
9. **Revisit** → Re-prioritize as needed based on learning
## Additional Resources
### Reference Files
For detailed prioritization frameworks and examples:
- **`${CLAUDE_PLUGIN_ROOT}/skills/prioritization/references/moscow-worksheet.md`** - Template for conducting MoSCoW prioritization sessions
## Integration with Requirements Lifecycle
**When to Prioritize:**
- After identifying epics (prioritize which epics to build first)
- After creating user stories (prioritize which stories within an epic)
- During sprint planning (prioritize tasks for the iteration)
- During refinement (adjust priorities based on new information)
**Updating GitHub Projects:**
- Set "Priority" custom field on issues
- Apply priority labels
- Order backlog by priority
- Review and adjust regularly
Prioritization is an ongoing activity throughout the requirements lifecycle—use it to focus effort on what matters most and deliver maximum value incrementally.


@@ -0,0 +1,263 @@
# MoSCoW Prioritization Worksheet
Use this worksheet to systematically prioritize requirements using the MoSCoW framework.
---
## Prioritization Session Info
**Date:** [YYYY-MM-DD]
**Facilitator:** [Name]
**Participants:** [Names/roles of stakeholders involved]
**Scope:** [What are we prioritizing? Epics for Q1? Stories for Sprint 5?]
**Target/Goal:** [What are we trying to achieve? MVP? V1.0? Next release?]
**Constraints:** [Time, budget, team size, deadlines]
---
## Decision Criteria
**Value to Users:** [Weight: High / Medium / Low]
- Impact on user experience
- Number of users affected
- Frequency of use
**Business Value:** [Weight: High / Medium / Low]
- Revenue impact
- Strategic importance
- Competitive advantage
**Risk:** [Weight: High / Medium / Low]
- Technical complexity
- Uncertainty / unknowns
- Market assumptions
**Effort:** [Weight: High / Medium / Low]
- Time required
- Resources needed
- Technical dependencies
---
## Items to Prioritize
List all items (epics, stories, or tasks) that need prioritization:
| ID | Item Name | Description | Value | Risk | Effort | Dependencies |
|----|-----------|-------------|-------|------|--------|--------------|
| 1 | [Name] | [Brief desc]| H/M/L | H/M/L| H/M/L | [IDs] |
| 2 | [Name] | [Brief desc]| H/M/L | H/M/L| H/M/L | [IDs] |
| 3 | [Name] | [Brief desc]| H/M/L | H/M/L| H/M/L | [IDs] |
|... | ... | ... | ... | ... | ... | ... |
---
## MoSCoW Classification
### Must Have (≤60% of items)
> Critical requirements without which the product cannot launch or function
| ID | Item Name | Rationale |
|----|-----------|-----------|
| [#]| [Name] | [Why this is absolutely essential] |
| [#]| [Name] | [Why this is absolutely essential] |
|... | ... | ... |
**Total Must Haves:** [Count] out of [Total] = [%]
---
### Should Have (~20% of items)
> Important requirements that significantly enhance value but can be deferred if necessary
| ID | Item Name | Rationale | Workaround if Deferred |
|----|-----------|-----------|------------------------|
| [#]| [Name] | [Why important but not critical] | [How users cope without it] |
| [#]| [Name] | [Why important but not critical] | [How users cope without it] |
|... | ... | ... | ... |
**Total Should Haves:** [Count] out of [Total] = [%]
---
### Could Have (~20% of items)
> Nice-to-have requirements that provide marginal value
| ID | Item Name | Rationale |
|----|-----------|-----------|
| [#]| [Name] | [Why nice to have but low priority] |
| [#]| [Name] | [Why nice to have but low priority] |
|... | ... | ... |
**Total Could Haves:** [Count] out of [Total] = [%]
---
### Won't Have (This Time)
> Requirements explicitly excluded from current scope
| ID | Item Name | Rationale | Future Consideration? |
|----|-----------|-----------|----------------------|
| [#]| [Name] | [Why not now] | [When might we revisit?] |
| [#]| [Name] | [Why not now] | [When might we revisit?] |
|... | ... | ... | ... |
**Total Won't Haves:** [Count] out of [Total] = [%]
---
## Sequencing Within Categories
### Must Haves - Execution Order
Ordered by dependencies, risk, and value:
1. [Item name] - [Reason for sequence position]
2. [Item name] - [Reason for sequence position]
3. [Item name] - [Reason for sequence position]
...
### Should Haves - Execution Order
1. [Item name] - [Reason for sequence position]
2. [Item name] - [Reason for sequence position]
...
### Could Haves - Execution Order
1. [Item name] - [Reason for sequence position]
2. [Item name] - [Reason for sequence position]
...
---
## Validation & Sanity Checks
### Distribution Check
- **Must Haves:** [%] (Target: ≤60%)
- **Should Haves:** [%] (Target: ~20%)
- **Could Haves:** [%] (Target: ~20%)
- **Won't Haves:** [%]
**Is distribution balanced?** [Yes / No]
**If no, what needs adjustment?** [Notes]
### MVP Viability Check
**Can we ship a viable product with just "Must Haves"?** [Yes / No]
**Does it deliver core value?** [Yes / No]
**Would users pay for / use it?** [Yes / No]
**Are all critical user journeys covered?** [Yes / No]
### Dependency Check
**Are all dependencies respected in sequencing?** [Yes / No]
**Any items prioritized high that depend on low-priority items?** [List if any]
### Stakeholder Alignment
**Have key stakeholders reviewed and approved?** [Yes / No]
**Any major disagreements?** [Notes]
**Consensus achieved?** [Yes / No]
---
## Decisions & Trade-offs
Document key decisions and trade-offs made during prioritization:
**Decision 1:**
- **What:** [What was decided]
- **Rationale:** [Why]
- **Trade-off:** [What was sacrificed or deferred]
**Decision 2:**
- **What:** [What was decided]
- **Rationale:** [Why]
- **Trade-off:** [What was sacrificed or deferred]
...
---
## Action Items
**Immediate Actions:**
- [ ] Update GitHub Projects with priority labels
- [ ] Update custom "Priority" field on all issues
- [ ] Order backlog by priority
- [ ] Communicate prioritization to team
- [ ] Begin work on first "Must Have" item
**Follow-up Actions:**
- [ ] Schedule re-prioritization review [Date]
- [ ] Gather user feedback on priorities [Date]
- [ ] Re-assess "Won't Haves" for future releases [Date]
---
## Notes & Comments
[Any additional notes, context, or discussion points from the prioritization session]
---
## Revision History
| Date | Change | By |
|------|--------|-----|
| [YYYY-MM-DD] | Initial prioritization | [Name] |
| [YYYY-MM-DD] | Re-prioritized based on user feedback | [Name] |
| [YYYY-MM-DD] | Moved Item X from Should to Must | [Name] |
---
## Example: E-commerce MVP
**Scope:** Prioritizing epics for e-commerce platform MVP
**Target:** Launch in 3 months with core functionality
**Constraints:** 2 developers, limited budget
### Must Have (5 epics)
1. **Product Catalog** - Cannot sell without products
2. **Shopping Cart** - Core functionality for purchases
3. **Checkout & Payment** - Must be able to complete purchases
4. **User Accounts** - Track orders, save preferences
5. **Order Management** - Sellers need to fulfill orders
### Should Have (3 epics)
6. **Product Search & Filtering** - Important for UX, but can browse categories initially
7. **Email Notifications** - Enhances experience, but can manually check orders
8. **Product Reviews** - Builds trust, but not critical for launch
### Could Have (3 epics)
9. **Wishlist** - Nice feature, low priority
10. **Product Recommendations** - Enhances discovery, not essential
11. **Social Sharing** - Marketing feature, not core
### Won't Have (4 epics)
12. **Mobile App** - Web-first strategy, mobile later
13. **Loyalty Program** - Phase 2 feature
14. **Multi-vendor Support** - Single vendor for MVP
15. **International Shipping** - Domestic only for MVP
**Distribution:** Of the 11 in-scope epics: 5 Must (45%), 3 Should (27%), 3 Could (27%) = Balanced ✓
**MVP Viable:** Yes - Can sell products, accept payment, fulfill orders ✓
**Sequence:** Product Catalog → Shopping Cart → Checkout → User Accounts → Order Management
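The distribution figures above can be reproduced with a quick calculation over the 11 in-scope epics (Won't Haves are excluded from the split):

```python
counts = {"Must": 5, "Should": 3, "Could": 3}
in_scope = sum(counts.values())  # 11; the 4 Won't Have epics are excluded
shares = {k: round(100 * v / in_scope) for k, v in counts.items()}
# Must: 45, Should: 27, Could: 27 — matching the worksheet
```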


@@ -0,0 +1,513 @@
---
name: Requirements Feedback
description: This skill should be used when the user asks about "feedback loops", "iterate on requirements", "continuous documentation", "refine requirements", "update requirements", "requirements changed", or when they need guidance on gathering feedback and continuously improving requirements throughout the product lifecycle.
version: 0.2.0
---
# Requirements Feedback & Continuous Documentation
## Overview
Requirements are not static—they evolve as teams learn more about users, technology, and the market. This skill guides the practice of gathering feedback at each stage of the requirements lifecycle and using that feedback to continuously refine and improve requirements. Establishing effective feedback loops ensures requirements remain accurate, valuable, and aligned with user needs.
## Purpose
Continuous requirements feedback:
- Validates assumptions early and often
- Catches misunderstandings before they become expensive
- Adapts to new information and changing conditions
- Keeps requirements aligned with user needs
- Improves quality through iteration
- Builds shared understanding across team and stakeholders
## Feedback Loops at Each Stage
### Vision-Level Feedback
**When:** After creating or updating vision
**Who to Involve:**
- Key stakeholders (executives, product leaders)
- Representative users or customers
- Technical leads (feasibility check)
- Marketing/sales (market validation)
**Questions to Ask:**
- Does this vision resonate with target users?
- Is the problem statement accurate and compelling?
- Are success metrics realistic and measurable?
- Does this align with business strategy?
- Are scope boundaries clear and appropriate?
**Feedback Mechanisms:**
- Vision review meetings
- Customer interviews about the problem space
- Stakeholder alignment sessions
- Competitive analysis validation
**How to Incorporate Feedback:**
- Update vision issue in GitHub Projects
- Revise problem statement based on user insights
- Adjust success metrics based on stakeholder input
- Clarify scope based on feasibility feedback
- Document assumptions that were validated or invalidated
---
### Epic-Level Feedback
**When:** After identifying epics, before creating stories
**Who to Involve:**
- Product team (completeness check)
- Engineering leads (technical feasibility)
- UX/Design (user journey validation)
- Business stakeholders (strategic alignment)
**Questions to Ask:**
- Do these epics collectively deliver the full vision?
- Are there gaps in user journeys or capabilities?
- Is the scope of each epic appropriate?
- Are epics properly bounded and distinct?
- Are dependencies and sequence logical?
**Feedback Mechanisms:**
- Epic review sessions
- User journey mapping workshops
- Technical feasibility assessments
- Roadmap planning meetings
**How to Incorporate Feedback:**
- Add missing epics that were overlooked
- Split epics that are too large or cover multiple concerns
- Combine epics that overlap or are too granular
- Adjust epic scope based on technical constraints
- Re-prioritize based on new strategic insights
- Update epic issues in GitHub Projects
---
### Story-Level Feedback
**When:** After creating stories, during refinement, after user testing
**Who to Involve:**
- Development team (implementability)
- UX/Design (user experience)
- QA/Test (testability)
- Users (value validation)
**Questions to Ask:**
- Do stories follow INVEST criteria?
- Are acceptance criteria clear and testable?
- Are stories small enough for an iteration?
- Do stories deliver real user value?
- Are edge cases and error scenarios covered?
**Feedback Mechanisms:**
- Backlog refinement sessions
- Story review with development team
- User testing of prototypes or early implementations
- Acceptance criteria walkthrough
**How to Incorporate Feedback:**
- Split stories that are too large
- Add missing acceptance criteria
- Create new stories for overlooked scenarios
- Revise story descriptions for clarity
- Adjust priorities based on user feedback
- Update story issues in GitHub Projects
---
### Task-Level Feedback
**When:** During implementation, code review, testing
**Who to Involve:**
- Developers (during implementation)
- Code reviewers (during review)
- QA engineers (during testing)
- Product owner (during acceptance)
**Questions to Ask:**
- Are acceptance criteria accurate and complete?
- Did implementation reveal missing requirements?
- Are there technical challenges not anticipated?
- Do actual results match expected outcomes?
- Are there edge cases not covered?
**Feedback Mechanisms:**
- Daily standups
- Code reviews
- Test results and bug reports
- Sprint demos and retrospectives
**How to Incorporate Feedback:**
- Update task acceptance criteria if incomplete
- Create new tasks for discovered work
- Update story acceptance criteria if task feedback reveals gaps
- Document lessons learned for similar future work
- Update task status in GitHub Projects
---
## Continuous Documentation Practices
### Keep Requirements Up to Date
**Living Documents:**
- Requirements in GitHub issues (in GitHub Projects) are living documents, not static specs
- Update issues as understanding evolves
- Add clarifications when questions arise
- Document decisions made during implementation
**Version History:**
- GitHub automatically tracks issue edit history
- Use comments to explain significant changes
- Reference related issues or discussions
**When to Update:**
- When assumptions are proven wrong
- When user feedback reveals new insights
- When technical constraints require scope changes
- When priorities shift
### Document What You Learn
**Capture Insights:**
- Add comments to issues with findings from user research
- Link to test results or analytics that inform requirements
- Reference customer feedback or support tickets
- Note technical discoveries that impact requirements
**Make it Searchable:**
- Use consistent terminology across issues
- Tag issues with relevant labels
- Reference related issues via links
- Keep GitHub Projects metadata current
### Maintain Traceability
**Link Related Work:**
- Tasks link to parent stories
- Stories link to parent epics
- Epics link to parent vision
- Full chain of traceability maintained
**Cross-Reference:**
- Link to related issues (dependencies, related work)
- Reference GitHub PRs that implement requirements
- Link to supporting documents (design mocks, research findings)
---
## Feedback Collection Techniques
### User Research
**Methods:**
- User interviews about problems and needs
- Usability testing of prototypes or implementations
- Surveys about feature priorities
- Analytics on actual usage patterns
**When to Use:**
- Vision validation (does problem resonate?)
- Epic prioritization (what matters most?)
- Story refinement (does solution work for users?)
- Post-launch (are we delivering value?)
**Incorporating Results:**
- Update requirements based on validated learnings
- Adjust priorities based on actual user behavior
- Create new requirements for discovered needs
- Remove or deprioritize unused features
### Stakeholder Reviews
**Methods:**
- Regular review meetings with key stakeholders
- Async reviews via comments on GitHub issues (in a GitHub Project)
- Roadmap planning sessions
- Business alignment check-ins
**When to Use:**
- Vision and epic level (strategic alignment)
- Major priority decisions
- Scope change requests
- Resource allocation decisions
**Incorporating Results:**
- Update strategic alignment notes in vision/epics
- Adjust priorities based on business needs
- Add or remove requirements based on strategic shifts
### Team Feedback
**Methods:**
- Backlog refinement sessions
- Sprint retrospectives
- Code review comments
- Technical spike findings
**When to Use:**
- Story and task creation
- Implementation challenges
- Technical feasibility questions
- Process improvements
**Incorporating Results:**
- Clarify ambiguous requirements
- Split overly complex requirements
- Add technical constraints or considerations
- Improve future requirements based on lessons learned
### Automated Feedback
**Methods:**
- Analytics and usage tracking
- Error logs and monitoring
- Performance metrics
- A/B test results
**When to Use:**
- Post-launch validation
- Feature usage analysis
- Performance requirements validation
- Hypothesis testing
**Incorporating Results:**
- Validate or invalidate assumptions
- Identify unused features to remove
- Discover high-value features to enhance
- Update success metrics based on reality
---
## Iteration Patterns
### The Build-Measure-Learn Loop
**Build:** Implement requirements (epic → stories → tasks)
**Measure:** Collect data and feedback
- User testing and interviews
- Usage analytics
- Business metrics
- Team retrospectives
**Learn:** Extract insights and refine requirements
- What worked? (do more)
- What didn't? (change or remove)
- What's missing? (add)
- What changed? (adapt)
**Repeat:** Update requirements and iterate
### Regular Review Cadences
**Weekly:** Story refinement and task feedback
- Review upcoming stories with team
- Incorporate feedback from current sprint
- Adjust task estimates and acceptance criteria
**Monthly:** Epic and priority review
- Review progress on epics
- Adjust epic priorities based on learnings
- Add/remove/modify epics as needed
**Quarterly:** Vision and strategy review
- Validate vision still accurate and relevant
- Update based on market changes
- Adjust success metrics based on progress
- Re-align with business strategy
### Trigger-Based Updates
Update requirements when:
- **Major user feedback:** Significant insights from research or support
- **Technical discovery:** Implementation reveals new constraints or opportunities
- **Market change:** Competitors, regulations, or user expectations shift
- **Strategic pivot:** Business priorities or direction changes
---
## Best Practices
### Create Safe Space for Feedback
**Encourage honesty:**
- Make it safe to say "this requirement doesn't make sense"
- Reward people who catch problems early
- Avoid blame when requirements change
**Ask open questions:**
- "What are we missing?"
- "What concerns do you have?"
- "What would make this clearer?"
### Act on Feedback Quickly
**Rapid incorporation:**
- Update requirements soon after feedback (while fresh)
- Communicate changes to those who provided feedback
- Show that feedback leads to action
**Close the loop:**
- Tell people how their feedback was used
- Explain decisions (even if feedback not incorporated)
- Thank contributors
### Balance Stability and Flexibility
**Don't change constantly:**
- Batch small changes
- Major changes require broader review
- Communicate changes clearly
**But don't resist change:**
- New information should lead to updates
- Better to change requirements than build wrong thing
- Adaptability is a strength, not weakness
### Document the "Why" Behind Changes
**Change rationale:**
- Why was this requirement updated?
- What new information prompted the change?
- Who requested or approved the change?
**Use GitHub issue comments (in a GitHub Project):**
- Add comment explaining significant updates
- Reference supporting evidence (user quotes, data, etc.)
- Tag relevant people
### Validate with Real Users
**Not just internal feedback:**
- Get outside perspective regularly
- Test assumptions with actual users
- Observe real usage, not just opinions
**Early and often:**
- Don't wait for "finished" to get feedback
- Prototypes, mockups, beta releases
- Small tests beat big reveals
---
## Common Pitfalls to Avoid
### Treating Requirements as Contracts
Requirements are communication tools, not legal contracts:
- They should evolve as understanding grows
- Resist "frozen" requirements that can't change
- Collaboration over specification
### Ignoring Implementation Feedback
Developers discover important details during implementation:
- Listen when they say "this is harder than expected"
- Investigate when estimates are way off
- Respect technical insights
### Feedback Without Action
Collecting feedback but not using it:
- If you ask for feedback, act on it (or explain why not)
- Update requirements based on learnings
- Close the feedback loop
### Changing Too Frequently
Constant churn creates confusion:
- Batch minor updates
- Communicate major changes clearly
- Stabilize before critical milestones
### Only Internal Feedback
Echo chamber risk:
- Involve real users regularly
- Seek outside perspectives
- Challenge assumptions with data
---
## Quick Reference: Feedback Integration Flow
1. **Collect Feedback** → Gather insights from users, stakeholders, team
2. **Analyze** → Identify patterns, validate assumptions, extract learnings
3. **Decide** → Determine what changes to make to requirements
4. **Update** → Modify the GitHub issues (vision, epics, stories, tasks) in GitHub Projects
5. **Communicate** → Inform stakeholders and team of changes
6. **Validate** → Verify changes address feedback and improve requirements
7. **Repeat** → Continuous cycle throughout product lifecycle
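As a rough sketch, the cycle above can be modeled as a feedback item moving through stages; this makes it easy to report which items are stuck at "collected" and never acted on. The stage names mirror the steps above but are illustrative, not a required schema (step 7, "Repeat", is the loop itself rather than a stage).

```python
# Minimal sketch: track a feedback item through the integration flow.
# Stage names are illustrative; adapt them to your own process.

STAGES = ["collected", "analyzed", "decided", "updated",
          "communicated", "validated"]

class FeedbackItem:
    def __init__(self, summary, source):
        self.summary = summary
        self.source = source      # e.g. "user interview", "support ticket"
        self.stage = "collected"  # every item starts as raw input

    def advance(self):
        """Move to the next stage; validated items stay validated."""
        i = STAGES.index(self.stage)
        if i < len(STAGES) - 1:
            self.stage = STAGES[i + 1]
        return self.stage

item = FeedbackItem("Date filter confuses users", "usability test")
item.advance()  # analyzed
item.advance()  # decided
print(item.stage)  # → decided
```

A dashboard built on top of this (or on equivalent GitHub labels) surfaces feedback that was gathered but never closed out, which is the "feedback without action" pitfall described below.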
## GitHub Projects Integration
### Tracking Feedback
**Use issue comments:**
- Add feedback as comments on relevant issues
- Tag people who provided feedback
- Link to supporting evidence (research findings, analytics)
**Use labels:**
- `needs-validation` - Requires user feedback
- `feedback-received` - Feedback available, needs incorporation
- `updated-from-feedback` - Changed based on feedback
**Use custom fields:**
- Last Updated date
- Validation Status (Not Validated / In Progress / Validated)
### Documenting Changes
**Issue edit history:**
- GitHub tracks all changes to issues
- View history to see how requirements evolved
**Comments for rationale:**
- Add comment when making significant updates
- Explain what changed and why
- Reference feedback sources
**Linking:**
- Link to user research issues
- Link to test results
- Link to related PRs or discussions
---
## Additional Resources
### Reference Files
For feedback templates and frameworks:
- **`${CLAUDE_PLUGIN_ROOT}/skills/requirements-feedback/references/feedback-checklist.md`** - Checklist for conducting feedback reviews at each level
---
## Integration with Requirements Lifecycle
Feedback loops operate throughout the entire lifecycle:
> Vision → Epics → Stories → Tasks
At each transition:
1. Gather feedback on current level
2. Incorporate learnings
3. Use refined requirements to inform next level
4. Repeat
**Post-Implementation:**
- Gather usage data and user feedback
- Update requirements based on learnings
- Inform future work and iterations
---
Requirements feedback is not a phase—it's an ongoing practice that keeps requirements aligned with reality and ensures continuous improvement throughout the product lifecycle.

# Requirements Feedback Checklist
Use this checklist to ensure comprehensive feedback collection and incorporation at each level of the requirements hierarchy.
---
## Vision-Level Feedback Review
**Timing:** After creating/updating vision, quarterly reviews
### Preparation
- [ ] Vision issue is complete and current in GitHub Projects
- [ ] Key stakeholders identified and available
- [ ] Supporting research/data gathered (market analysis, user research)
- [ ] Review meeting scheduled with adequate time (60-90 minutes)
### Feedback Questions
**Problem Validation:**
- [ ] Do target users resonate with the problem statement?
- [ ] Is the problem significant enough to solve?
- [ ] Are there other more pressing problems to address first?
**User Validation:**
- [ ] Are target user descriptions accurate?
- [ ] Have we identified all key user types?
- [ ] Do we understand their context and constraints?
**Solution Validation:**
- [ ] Does the solution vision make sense to stakeholders?
- [ ] Is the value proposition compelling?
- [ ] Are there alternative approaches we should consider?
**Strategic Alignment:**
- [ ] Does this align with business goals and strategy?
- [ ] Is timing right for this initiative?
- [ ] Do we have resources and commitment?
**Success Metrics:**
- [ ] Are success metrics measurable and realistic?
- [ ] Do metrics align with business objectives?
- [ ] Can we actually track these metrics?
**Scope & Boundaries:**
- [ ] Are boundaries clear and appropriate?
- [ ] Is anything missing from scope?
- [ ] Is anything included that shouldn't be?
### Actions
- [ ] Document feedback received (in issue comments)
- [ ] Update vision based on validated learnings
- [ ] Communicate changes to stakeholders
- [ ] Schedule next vision review (quarterly or as needed)
---
## Epic-Level Feedback Review
**Timing:** After identifying epics, before story creation, monthly reviews
### Preparation
- [ ] All epic issues created in GitHub Projects
- [ ] Epics linked to vision as children
- [ ] Initial prioritization complete
- [ ] Product team and technical leads available
### Feedback Questions
**Completeness:**
- [ ] Do these epics collectively deliver the full vision?
- [ ] Are there gaps in user journeys?
- [ ] Are all target user types covered?
- [ ] Do epics address all success metrics?
**Scope & Boundaries:**
- [ ] Is each epic appropriately scoped (not too big/small)?
- [ ] Are epic boundaries clear and distinct?
- [ ] Is there overlap between epics that should be resolved?
**Technical Feasibility:**
- [ ] Are all epics technically feasible?
- [ ] Are there technical risks or unknowns to investigate?
- [ ] Are architectural foundations identified?
**Dependencies:**
- [ ] Have all inter-epic dependencies been identified?
- [ ] Is sequencing logical given dependencies?
- [ ] Are there external dependencies to consider?
**Prioritization:**
- [ ] Does priority order make sense?
- [ ] Are "Must Haves" truly essential?
- [ ] Are "Won't Haves" clearly documented?
### Actions
- [ ] Add missing epics discovered during review
- [ ] Split or combine epics as needed
- [ ] Update epic scope and descriptions
- [ ] Adjust priorities based on feedback
- [ ] Update dependency mapping
- [ ] Document changes in epic issue comments
---
## Story-Level Feedback Review
**Timing:** During backlog refinement, before sprint planning
### Preparation
- [ ] Stories created for epic being reviewed
- [ ] Stories linked to epic as children
- [ ] Development team, UX, and QA available
- [ ] User research findings available (if applicable)
### Feedback Questions
**INVEST Criteria:**
- [ ] Are stories independent enough?
- [ ] Are details negotiable (not over-specified)?
- [ ] Does each story deliver clear value?
- [ ] Can team estimate each story?
- [ ] Are stories small enough (1-5 days)?
- [ ] Are stories testable with clear acceptance criteria?
**Completeness:**
- [ ] Do stories cover all epic scope?
- [ ] Are edge cases and error scenarios covered?
- [ ] Are all user types' needs addressed?
**Clarity:**
- [ ] Is the user goal clear in each story?
- [ ] Are acceptance criteria specific and testable?
- [ ] Are assumptions and constraints documented?
**Value:**
- [ ] Does each story deliver user-facing value?
- [ ] Are priorities appropriate?
- [ ] Can we explain "why" for each story?
### Actions
- [ ] Split stories that are too large
- [ ] Combine stories that are too granular
- [ ] Add missing stories
- [ ] Clarify vague acceptance criteria
- [ ] Update story descriptions and estimates
- [ ] Adjust priorities based on team feedback
---
## Task-Level Feedback Review
**Timing:** During sprint, daily standups, code reviews
### Preparation
- [ ] Tasks created for story being implemented
- [ ] Tasks linked to story as children
- [ ] Implementers identified and assigned
- [ ] Story acceptance criteria reviewed
### Feedback Questions
**Clarity:**
- [ ] Is it clear what needs to be done?
- [ ] Are acceptance criteria specific enough?
- [ ] Are technical notes helpful?
**Completeness:**
- [ ] Are all necessary tasks identified?
- [ ] Are testing tasks included?
- [ ] Are documentation tasks included?
**Dependencies:**
- [ ] Is task sequencing correct?
- [ ] Have blockers been identified?
- [ ] Can tasks be parallelized?
**Implementation Feedback:**
- [ ] Are actual results matching expected outcomes?
- [ ] Have we discovered missing tasks?
- [ ] Are estimates accurate?
- [ ] Have we learned anything that should update the story?
### Actions
- [ ] Add tasks discovered during implementation
- [ ] Update task acceptance criteria if incomplete
- [ ] Update story acceptance criteria if tasks reveal gaps
- [ ] Document implementation learnings
- [ ] Adjust future task breakdown based on learnings
---
## Post-Implementation Feedback Review
**Timing:** After feature launch, ongoing
### Preparation
- [ ] Feature deployed and available to users
- [ ] Analytics/metrics collection in place
- [ ] User feedback channels established
- [ ] Stakeholders identified for review
### Feedback Questions
**Value Delivery:**
- [ ] Are users actually using this feature?
- [ ] Is it delivering expected value?
- [ ] Are success metrics being achieved?
- [ ] What's working well?
**Problems & Pain Points:**
- [ ] What issues are users encountering?
- [ ] What's confusing or difficult?
- [ ] What's missing that users need?
- [ ] What feedback are we getting from support?
**Opportunities:**
- [ ] What enhancements would increase value?
- [ ] What related needs have been discovered?
- [ ] What should we build next?
**Validation:**
- [ ] Were our assumptions correct?
- [ ] What did we get wrong?
- [ ] What did we learn?
### Actions
- [ ] Update requirements based on usage data
- [ ] Create new stories for enhancements
- [ ] Deprioritize or remove unused features
- [ ] Document lessons learned
- [ ] Update vision/epics if major insights emerged
- [ ] Plan iteration or follow-up work
---
## Feedback Collection Methods Checklist
### User Research Methods
- [ ] **User Interviews:** One-on-one conversations about needs and experiences
- [ ] **Surveys:** Structured questions to larger user groups
- [ ] **Usability Testing:** Observe users attempting tasks
- [ ] **Analytics Review:** Examine actual usage patterns and behaviors
- [ ] **Support Ticket Analysis:** Identify common issues and requests
### Stakeholder Feedback Methods
- [ ] **Review Meetings:** Formal reviews with stakeholders
- [ ] **Async Reviews:** GitHub issue comments and discussions (in a GitHub Project)
- [ ] **Roadmap Planning:** Strategic alignment sessions
- [ ] **Demos:** Show working software for feedback
### Team Feedback Methods
- [ ] **Backlog Refinement:** Regular story review with team
- [ ] **Daily Standups:** Quick updates and blockers
- [ ] **Retrospectives:** Reflect on what worked and what didn't
- [ ] **Code Reviews:** Technical feedback during implementation
### Automated Feedback Methods
- [ ] **Usage Analytics:** Track feature adoption and usage
- [ ] **Error Monitoring:** Identify bugs and issues
- [ ] **Performance Metrics:** Measure speed and reliability
- [ ] **A/B Testing:** Compare alternatives empirically
---
## Feedback Incorporation Workflow
### 1. Collect
- [ ] Gather feedback from various sources
- [ ] Document in GitHub issue comments (in GitHub Projects)
- [ ] Tag relevant people and issues
- [ ] Categorize by type and severity
### 2. Analyze
- [ ] Look for patterns across feedback sources
- [ ] Validate with data when possible
- [ ] Distinguish opinions from facts
- [ ] Prioritize based on impact and confidence
### 3. Decide
- [ ] Determine what changes to make
- [ ] Assess scope of changes (minor vs major)
- [ ] Get buy-in from stakeholders if needed
- [ ] Plan communication of changes
### 4. Update
- [ ] Modify GitHub issues (vision/epics/stories/tasks) in GitHub Projects
- [ ] Add comments explaining changes
- [ ] Update priority, scope, or acceptance criteria
- [ ] Link to feedback sources
### 5. Communicate
- [ ] Notify people who provided feedback
- [ ] Explain how feedback was used
- [ ] Update team on requirement changes
- [ ] Thank contributors
### 6. Validate
- [ ] Verify changes address the feedback
- [ ] Confirm with feedback providers if possible
- [ ] Test new requirements with users if significant
- [ ] Monitor impact of changes
### 7. Document Learnings
- [ ] What did we learn?
- [ ] How can we improve future requirements?
- [ ] What patterns are emerging?
- [ ] Update processes based on learnings
---
## Red Flags to Watch For
### Feedback That Suggests Problems
- [ ] **Confusion:** People don't understand what's being built → Requirements unclear
- [ ] **Misalignment:** Stakeholders disagree on priorities → Need alignment session
- [ ] **Feasibility Concerns:** Team says it's much harder than expected → Scope or approach issue
- [ ] **Low Engagement:** Users don't care about feature → Value hypothesis wrong
- [ ] **Frequent Changes:** Requirements changing constantly → Unstable vision or poor research
- [ ] **No Feedback:** Nobody commenting or engaging → Not enough collaboration
### When to Escalate
- [ ] Fundamental vision questions (scope, viability)
- [ ] Major technical feasibility issues
- [ ] Significant stakeholder disagreement
- [ ] Resources or timeline at risk
- [ ] User feedback contradicts assumptions
---
## Feedback Review Cadences
### Continuous (Daily/Weekly)
- [ ] Task-level feedback during standups
- [ ] Code review feedback
- [ ] Bug/issue reports
### Regular (Weekly/Bi-weekly)
- [ ] Story refinement sessions
- [ ] Sprint retrospectives
- [ ] User testing sessions
### Periodic (Monthly)
- [ ] Epic priority review
- [ ] Roadmap adjustment
- [ ] Stakeholder check-ins
### Strategic (Quarterly)
- [ ] Vision validation
- [ ] Success metrics review
- [ ] Strategic alignment
---
## Template: Feedback Review Meeting
**Meeting:** [Vision/Epic/Story] Feedback Review
**Date:** [YYYY-MM-DD]
**Attendees:** [Names and roles]
**Duration:** [60-90 minutes]
**Agenda:**
1. **Review Current State** (10 min)
- Present current requirements
- Highlight recent changes
- Share context
2. **Collect Feedback** (30 min)
- Go through feedback questions for this level
- Capture all input (no debate yet)
- Use checklist above
3. **Discussion** (20 min)
- Discuss key themes and patterns
- Validate or invalidate assumptions
- Identify areas of disagreement
4. **Decisions** (15 min)
- Determine what changes to make
- Assign action items
- Set next review date
5. **Wrap-up** (5 min)
- Summarize decisions
- Confirm action items and owners
- Thank participants
**Follow-up:**
- [ ] Update GitHub issues (in GitHub Projects) within 48 hours
- [ ] Communicate changes to broader team
- [ ] Schedule next review
---
Use this checklist to establish regular, effective feedback loops that keep your requirements aligned with user needs and stakeholder goals throughout the product lifecycle.

---
name: Task Breakdown
description: This skill should be used when the user asks to "create tasks", "break down story into tasks", "define tasks", "what tasks are needed", "write acceptance criteria", or when they have user stories and need to decompose them into specific, executable tasks with clear acceptance criteria that can be created as GitHub issues in a GitHub Project.
version: 0.2.0
---
# Task Breakdown
## Overview
Task breakdown transforms user stories into concrete, executable work items that can be assigned, tracked, and completed. Tasks represent the actual implementation steps needed to deliver a user story, each with clear acceptance criteria. This skill guides the process of decomposing stories into well-defined tasks suitable for GitHub issue tracking.
## Purpose
Tasks are the execution layer in the requirements hierarchy:
- **Above**: User Stories (user-facing functionality)
- **Tasks**: Specific implementation steps
- **Below**: (Nothing—tasks are the lowest level)
Well-defined tasks:
- Represent discrete units of work (hours to 1-2 days max)
- Have clear, testable acceptance criteria
- Can be assigned to individuals
- Track progress toward story completion
- Enable accurate status reporting
## When to Use This Skill
Use task breakdown when:
- A user story exists and needs to be implemented
- User asks for specific work items or tasks
- Planning a sprint and need granular work breakdown
- Creating GitHub issues in a GitHub Project for tracking work
- Defining clear acceptance criteria for work items
**Prerequisite:** User story must exist before creating tasks. If no story exists, use user-story-creation skill first.
## Task Characteristics
### What Makes a Good Task
**Specific and Concrete:**
- Clear, unambiguous description
- Obvious what needs to be done
- No interpretation needed
**Right-sized:**
- 2-8 hours of work (up to 1-2 days maximum)
- Small enough to complete in a single sitting when possible
- Large enough to deliver meaningful progress
**Testable:**
- Has clear acceptance criteria
- Can verify when complete
- Observable outcome
**Assignable:**
- One person can own and complete it
- Doesn't require coordinating multiple people
**Valuable:**
- Contributes toward completing the story
- Represents real progress
## Task Breakdown Process
### Step 1: Review the User Story
Understand the story being implemented:
**Key Actions:**
- Read story issue in GitHub Projects
- Understand the user goal and value
- Review acceptance criteria
- Note any constraints or assumptions
### Step 2: Identify Implementation Layers
Break down work by typical software layers:
**Frontend/UI Tasks:**
- Component creation
- UI layout and styling
- User interactions and events
- Client-side validation
- State management
**Backend/API Tasks:**
- API endpoint implementation
- Business logic
- Data validation
- Error handling
- Integration with services
**Data/Database Tasks:**
- Schema changes
- Migrations
- Data access layer
- Queries and optimization
**Testing Tasks:**
- Unit tests
- Integration tests
- E2E tests
- Manual test scenarios
**Documentation Tasks:**
- API documentation
- User-facing documentation
- Code comments
- README updates
**DevOps/Infrastructure Tasks:**
- Configuration changes
- Deployment scripts
- Environment setup
- Monitoring/logging
### Step 3: Apply Common Task Patterns
Use these patterns as starting points:
**CRUD Operations:**
- Implement Create functionality
- Implement Read/List functionality
- Implement Update functionality
- Implement Delete functionality
- Add validation for each operation
**Feature Implementation:**
- Design and create UI components
- Implement backend API
- Connect frontend to backend
- Add error handling
- Write tests
- Update documentation
**Integration:**
- Research third-party API/service
- Implement authentication/connection
- Implement data mapping/transformation
- Handle errors and edge cases
- Test integration end-to-end
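The CRUD pattern in particular lends itself to mechanical expansion. A helper like the following (hypothetical, not part of any tool) turns an entity name into starter task titles that can then be refined by hand:

```python
def crud_tasks(entity):
    """Expand an entity name into starter CRUD task titles (plus validation)."""
    ops = ["Create", "Read/List", "Update", "Delete"]
    tasks = [f"Implement {op} functionality for {entity}" for op in ops]
    tasks.append(f"Add validation for {entity} operations")
    return tasks

for title in crud_tasks("campaign"):
    print(title)
```

Generated titles are only a starting point; each task still needs its own acceptance criteria before it is created as an issue.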
### Step 4: Define Acceptance Criteria
For each task, specify clear success conditions:
**Format:**
Use specific, testable statements:
- "Component renders correctly with props X, Y, Z"
- "API endpoint returns 200 status with correct data structure"
- "Database migration runs without errors and creates table T"
- "Unit tests achieve >80% code coverage for module M"
**Key Elements:**
- What should exist/work when task is complete
- How to verify it works
- Any performance or quality standards
### Step 5: Sequence and Dependencies
Order tasks logically:
**Dependency Analysis:**
- Which tasks must come first?
- What's the critical path?
- Can any tasks be done in parallel?
**Typical Sequence:**
1. Data/schema changes (if needed)
2. Backend implementation
3. Frontend implementation
4. Integration and testing
5. Documentation
**Mark Dependencies:**
- Note which tasks block others
- Identify tasks that can start immediately
- Plan for parallel work when possible
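The dependency analysis above is a topological sort: tasks with no unmet prerequisites surface first, and independent tasks can run in parallel. Python's standard library covers this directly (the task names below are illustrative):

```python
from graphlib import TopologicalSorter

# Each task maps to the set of tasks that must finish before it can start.
# Names are illustrative, following the typical sequence described above.
deps = {
    "schema migration":   set(),
    "backend API":        {"schema migration"},
    "frontend component": set(),
    "integration":        {"backend API", "frontend component"},
    "tests":              {"integration"},
    "docs":               {"integration"},
}

# static_order() yields every task after all of its prerequisites.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

Tasks that appear before their dependents can start immediately; "schema migration" and "frontend component" have no prerequisites, so they can be worked in parallel.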
### Step 6: Create Task Issues in GitHub Projects
For each task, create a GitHub issue in the relevant GitHub Project:
**Issue Title:** "[Clear, action-oriented task description]"
**Issue Description:** Task details and acceptance criteria using template
**Custom Fields:**
- Type: Task
- Priority: (inherited from story)
- Status: Not Started
**Labels:**
- `type:task`
- Technical labels (frontend, backend, database, testing, docs)
**Parent:** Link to Story issue as parent
**Estimate:** (optional) Hour or story point estimate
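Issue creation can be scripted with the GitHub CLI's `gh issue create`. A sketch that builds the invocation (the repo name is a placeholder; parent linking and custom fields are set separately through the Project UI or API):

```python
def gh_issue_create_cmd(title, body, labels, repo="owner/repo"):
    """Build a `gh issue create` command as an argv list.

    The repo value is a placeholder; --title, --body, and --label are
    standard gh flags.
    """
    cmd = ["gh", "issue", "create", "--repo", repo,
           "--title", title, "--body", body]
    for label in labels:
        cmd += ["--label", label]
    return cmd

cmd = gh_issue_create_cmd(
    "Create campaign filter UI component",
    "Implement React component for filtering campaigns by date range.",
    ["type:task", "frontend"],
)
print(" ".join(cmd[:3]))  # → gh issue create
```

Pass the resulting list to `subprocess.run` to actually create the issue; building it as a list avoids shell-quoting problems with titles and bodies.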
## Task Templates and Examples
### Frontend Task Example
**Title:** "Create campaign filter UI component"
**Description:**
Implement React component for filtering campaigns by date range.
**Acceptance Criteria:**
- [ ] Component accepts startDate and endDate props
- [ ] Renders two date picker inputs with labels
- [ ] Emits onChange event when dates are selected
- [ ] Validates that end date is not before start date
- [ ] Shows error message for invalid date ranges
- [ ] Component is responsive on mobile and desktop
- [ ] Unit tests cover all functionality
**Technical Notes:**
- Use existing DatePicker component from design system
- Follow component structure in /src/components/filters/
---
### Backend Task Example
**Title:** "Implement GET /api/campaigns endpoint with date filtering"
**Description:**
Create API endpoint that returns campaigns filtered by date range.
**Acceptance Criteria:**
- [ ] Endpoint accepts startDate and endDate query params
- [ ] Returns campaigns with activity between specified dates
- [ ] Returns 400 error if dates are invalid
- [ ] Returns 200 with empty array if no campaigns match
- [ ] Response follows standard campaign object schema
- [ ] Query is optimized with proper indexes
- [ ] Endpoint documented in API docs
**Technical Notes:**
- Use existing campaign repository pattern
- Add integration tests
---
### Database Task Example
**Title:** "Add indexes for campaign date filtering performance"
**Description:**
Create database indexes to optimize campaign date range queries.
**Acceptance Criteria:**
- [ ] Migration script creates index on campaigns.created_at
- [ ] Migration script creates composite index on (status, created_at)
- [ ] Migration runs successfully in dev environment
- [ ] Migration rollback script works correctly
- [ ] Query execution time reduced from >500ms to <100ms
- [ ] Migration documented in migrations/README.md
---
### Testing Task Example
**Title:** "Write integration tests for campaign filtering"
**Description:**
Create comprehensive integration tests for campaign date filtering feature.
**Acceptance Criteria:**
- [ ] Test successful filtering with valid date range
- [ ] Test error handling for invalid date range
- [ ] Test edge case: start date equals end date
- [ ] Test edge case: no campaigns in date range
- [ ] Test edge case: very large result set (1000+ campaigns)
- [ ] All tests pass in CI environment
- [ ] Test coverage for this feature is >90%
---
### Documentation Task Example
**Title:** "Document campaign filtering in user guide"
**Description:**
Add user-facing documentation for the new campaign filtering feature.
**Acceptance Criteria:**
- [ ] Section added to /docs/user-guide/campaigns.md
- [ ] Includes screenshots showing date filter UI
- [ ] Explains how to filter by date range
- [ ] Documents error messages and how to resolve them
- [ ] Includes example use cases
- [ ] Reviewed for clarity and accuracy
## Best Practices
### Keep Tasks Focused
One clear objective per task:
❌ "Implement campaign filtering and sorting and export"
✅ "Implement campaign date filtering"
✅ "Implement campaign sorting by name"
✅ "Implement campaign CSV export"
### Use Action-Oriented Titles
Start with verbs:
**Good:**
- "Create campaign filter component"
- "Implement date validation logic"
- "Add database index for performance"
- "Write unit tests for filter module"
**Poor:**
- "Campaign filtering" (vague)
- "Database" (what about it?)
- "Tests" (what kind? for what?)
### Include Technical Context
Help the implementer:
- Reference existing code patterns to follow
- Note relevant files or modules
- Mention design system components to use
- Link to related tasks or documentation
### Balance Granularity
Not too big, not too small:
**Too big:**
"Implement entire campaign management system" (this is a story or epic)
**Too small:**
"Import React" (this is a substep, not a task)
**Just right:**
"Create CampaignList component with sorting and filtering props"
### Always Include Acceptance Criteria
Never create a task without clear success conditions:
- Minimum 3-5 criteria per task
- Make them specific and testable
- Include both functional and quality criteria
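One way to enforce the criteria floor mechanically (a hedged sketch; the 3-criterion minimum comes from the guidance above) is to count the Markdown task-list checkboxes in an issue body before accepting it:

```python
import re

def count_criteria(body):
    """Count Markdown task-list checkboxes ('- [ ]' or '- [x]') in an issue body."""
    return len(re.findall(r"^\s*-\s\[[ xX]\]", body, flags=re.MULTILINE))

body = """\
### Acceptance Criteria
- [ ] Component accepts startDate and endDate props
- [ ] Validates that end date is not before start date
- [x] Unit tests cover all functionality
"""
assert count_criteria(body) >= 3
print(count_criteria(body))  # → 3
```

A check like this could run in CI or a pre-creation script to flag tasks submitted without testable success conditions.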
## Task Types
### Implementation Tasks
Primary work to build functionality:
- Create UI components
- Implement API endpoints
- Write business logic
- Build data access layer
### Testing Tasks
Verify quality and correctness:
- Write unit tests
- Create integration tests
- Perform manual testing
- User acceptance testing
### Documentation Tasks
Communicate how it works:
- API documentation
- User guides
- Code comments
- README updates
### Infrastructure Tasks
Enable functionality:
- Database migrations
- Configuration changes
- Deployment scripts
- Environment setup
### Research/Spike Tasks
Investigate unknowns:
- Evaluate libraries/tools
- Prototype approaches
- Performance testing
- Feasibility studies
## Common Pitfalls to Avoid
### Tasks Without Acceptance Criteria
Every task needs testable success conditions:
❌ "Work on campaign filtering"
✅ "Implement date filter UI" + 5 specific acceptance criteria
### Tasks Too Large
Watch for multi-day or multi-person tasks:
- Split tasks larger than 2 days
- Each task should be completable by one person
- If coordination needed, it's probably too big
### Mixing Concerns
One task, one focus area:
❌ "Implement filter UI and backend API and database schema"
✅ Three separate tasks for UI, API, and database
### Vague Descriptions
Be specific about what needs to be done:
❌ "Fix bugs"
✅ "Fix date validation bug where end date before start date is allowed"
### Missing Dependencies
Note what must be done first:
- Database schema before backend code
- Backend API before frontend integration
- Core functionality before tests (usually)
## Integration with GitHub Issues (GitHub Projects)
### Task Issue Format
**Title:** Clear, action-oriented description
**Labels:**
- `type:task`
- Technical area (frontend, backend, database, etc.)
- Priority (inherited from story)
**Assignment:** Person responsible
**Estimate:** Optional hours or points
**Parent Link:** User story issue
**Acceptance Criteria:** In issue description
### Tracking Progress
Tasks enable granular progress tracking:
- Story progress = % of tasks complete
- Epic progress = % of stories complete (via tasks)
- Vision progress = % of epics complete (via stories → tasks)
Full traceability: Vision → Epic → Story → Task
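The rollup above can be sketched as a simple recursive calculation over the hierarchy (the dict shape is illustrative, not a GitHub API schema):

```python
def progress(item):
    """Percent complete: leaves report done/not-done; parents average children."""
    children = item.get("children", [])
    if not children:
        return 100.0 if item.get("done") else 0.0
    return sum(progress(c) for c in children) / len(children)

# A story with four tasks, two of them complete.
story = {"title": "Date filtering", "children": [
    {"title": "UI component", "done": True},
    {"title": "API endpoint", "done": True},
    {"title": "Tests",        "done": False},
    {"title": "Docs",         "done": False},
]}
print(f"{progress(story):.0f}%")  # → 50%
```

Because the same function applies at every level, epic and vision progress fall out of the same traversal once stories and epics are nested as children.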
## Quick Reference: Task Breakdown Flow
1. **Review Story** → Understand user goal, value, acceptance criteria
2. **Identify Layers** → Frontend, backend, database, testing, docs, infrastructure
3. **Apply Patterns** → Use common task patterns as starting points
4. **Define Acceptance Criteria** → Specify testable success conditions for each task
5. **Sequence** → Order tasks, note dependencies
6. **Create Issues** → Add to GitHub Projects as children of story
7. **Assign** → (Optional) Assign tasks to team members
8. **Execute** → Begin work, update task status as progress is made
## Additional Resources
### Reference Files
For detailed task templates:
- **`${CLAUDE_PLUGIN_ROOT}/skills/task-breakdown/references/task-template.md`** - Complete task template with acceptance criteria formats
## Next Steps
After creating tasks:
1. Create task issues in GitHub Projects (as children of story issue)
2. Assign tasks to team members (if applicable)
3. Begin execution—implement, test, document
4. Update task status as work progresses
5. When all tasks complete, story is complete
Tasks are where vision becomes reality—invest time to make them clear, testable, and actionable.

# Task Template
Use this template when creating task issues in GitHub Projects. Copy the structure below into the issue description.
---
## Task: [Action-oriented title]
### Description
[Clear description of what needs to be done. Be specific about the objective and context.]
---
### Acceptance Criteria
[Define specific, testable conditions that must be met for this task to be considered complete]
- [ ] Criterion 1: [Specific, observable outcome]
- [ ] Criterion 2: [Specific, observable outcome]
- [ ] Criterion 3: [Specific, observable outcome]
- [ ] Criterion 4: [Specific, observable outcome]
- [ ] Criterion 5: [Specific, observable outcome]
---
### Technical Notes
[Optional: Implementation details, patterns to follow, files to modify, libraries to use, etc.]
**Files to Modify:**
- [File 1]
- [File 2]
**Patterns to Follow:**
- [Pattern or example to reference]
**Dependencies:**
- [External libraries, services, or other tasks]
---
### Definition of Done
At task level, done means:
- [ ] All acceptance criteria met
- [ ] Code written and works as expected
- [ ] Tests written (if applicable)
- [ ] Code reviewed (if using PR process)
- [ ] No new warnings or errors introduced
- [ ] Changes committed with clear commit message
---
**Parent:** [Link to User Story Issue]
**Estimate:** [Optional: 2h, 4h, 1d, etc.]
**Assignee:** [Optional: Person responsible]
---
## Examples by Type
### Frontend Task Example
> Task: Create campaign filter UI component
**Description:**
Implement React component that allows users to filter campaigns by date range using start and end date pickers.
**Acceptance Criteria:**
- [ ] Component renders two date picker inputs (start date, end date)
- [ ] Component accepts `onFilterChange` callback prop
- [ ] Emits selected dates when user changes either date picker
- [ ] Validates that end date is not before start date
- [ ] Shows inline error message when validation fails
- [ ] Component is accessible (ARIA labels, keyboard navigation)
- [ ] Component is responsive on mobile and desktop
- [ ] Unit tests cover all functionality with >80% coverage
**Technical Notes:**
Files to Modify:
- `/src/components/filters/CampaignDateFilter.tsx` (create new)
- `/src/components/filters/index.ts` (add export)
Patterns to Follow:
- Use existing `DatePicker` component from design system
- Follow component structure in `/src/components/filters/StatusFilter.tsx`
- Use Formik for form state management
Dependencies:
- Design system DatePicker component
- date-fns library for date manipulation
---
### Backend Task Example
> Task: Implement GET /api/campaigns endpoint with date filtering
**Description:**
Create API endpoint that returns list of campaigns filtered by optional date range query parameters.
**Acceptance Criteria:**
- [ ] Endpoint accepts `startDate` and `endDate` query params (ISO 8601 format)
- [ ] Returns campaigns with `created_at` between specified dates (inclusive)
- [ ] Returns all campaigns if no date params provided
- [ ] Returns 400 error with clear message if date format invalid
- [ ] Returns 400 error if end date is before start date
- [ ] Returns 200 with empty array if no campaigns match criteria
- [ ] Response follows standard campaign schema (id, name, created_at, etc.)
- [ ] Query uses database indexes for performance (<100ms response time)
- [ ] Endpoint documented in API docs with examples
- [ ] Integration tests cover all scenarios
**Technical Notes:**
Files to Modify:
- `/src/api/campaigns/routes.ts` (add GET route)
- `/src/api/campaigns/controller.ts` (add getCampaigns method)
- `/src/api/campaigns/service.ts` (add filtering logic)
- `/docs/api/campaigns.md` (add documentation)
Patterns to Follow:
- Use existing repository pattern
- Follow error handling in `/src/api/utils/errorHandler.ts`
- Use Joi for query parameter validation
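The core filtering behavior in the criteria above (inclusive bounds, all campaigns returned when no params are given) can be sketched as a pure service function. The `Campaign` shape and function name are illustrative, not the actual repository code, and a real endpoint would validate the query params first (returning 400 on a bad format or inverted range) before reaching this logic:

```typescript
interface Campaign {
  id: string;
  name: string;
  created_at: string; // ISO 8601 timestamp
}

// Hypothetical filtering logic: inclusive on both bounds,
// and an omitted bound leaves that side of the range open.
function filterByDateRange(
  campaigns: Campaign[],
  startDate?: string,
  endDate?: string,
): Campaign[] {
  const start = startDate ? Date.parse(startDate) : -Infinity;
  const end = endDate ? Date.parse(endDate) : Infinity;
  return campaigns.filter((c) => {
    const created = Date.parse(c.created_at);
    return created >= start && created <= end;
  });
}
```

In production the comparison would happen in the database query (where the indexes apply), but expressing the rule as a pure function documents the inclusive-bounds contract the tests must verify.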
---
### Database Task Example
> Task: Add indexes for campaign date filtering performance
**Description:**
Create database indexes to optimize queries that filter campaigns by date range.
**Acceptance Criteria:**
- [ ] Migration creates index on `campaigns.created_at` column
- [ ] Migration creates composite index on `(status, created_at)` for filtered queries
- [ ] Migration includes rollback script to remove indexes
- [ ] Migration runs successfully in dev environment without errors
- [ ] Migration tested with rollback - confirms clean revert
- [ ] Query execution time for date-filtered queries reduced to <100ms
- [ ] Index usage confirmed via EXPLAIN ANALYZE
- [ ] Migration documented in `/migrations/README.md`
**Technical Notes:**
Files to Create:
- `/migrations/YYYYMMDDHHMMSS_add_campaign_date_indexes.sql`
- `/migrations/YYYYMMDDHHMMSS_add_campaign_date_indexes_down.sql`
SQL to Include:
```sql
-- Supports queries filtering by date range alone
CREATE INDEX idx_campaigns_created_at ON campaigns(created_at);

-- Supports queries that filter by status first, then by date range
CREATE INDEX idx_campaigns_status_created_at ON campaigns(status, created_at);
```
---
### Testing Task Example
> Task: Write integration tests for campaign date filtering
**Description:**
Create comprehensive integration tests for the campaign date filtering API endpoint to ensure correct behavior across all scenarios.
**Acceptance Criteria:**
- [ ] Test: Filtering with valid date range returns correct campaigns
- [ ] Test: Filtering with no matches returns empty array with 200 status
- [ ] Test: Invalid date format returns 400 error with helpful message
- [ ] Test: End date before start date returns 400 error
- [ ] Test: Edge case - start date equals end date works correctly
- [ ] Test: Edge case - very large date range (10 years) performs acceptably
- [ ] Test: Omitting date params returns all campaigns
- [ ] All tests pass locally and in CI environment
- [ ] Test coverage for filtering logic is >90%
**Technical Notes:**
Files to Create/Modify:
- `/tests/integration/api/campaigns/filtering.test.ts`
Patterns to Follow:
- Use existing test utilities in `/tests/utils/testSetup.ts`
- Follow AAA pattern (Arrange, Act, Assert)
- Clean up test data in afterEach hook
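The AAA pattern referenced above can be sketched without any test framework. The in-memory store and function names here are stand-ins for the real test utilities and service, assumed for illustration:

```typescript
// Stand-in for the service under test (illustrative only).
function getCampaigns(
  db: { created_at: string }[],
  startDate?: string,
  endDate?: string,
): { created_at: string }[] {
  const start = startDate ? Date.parse(startDate) : -Infinity;
  const end = endDate ? Date.parse(endDate) : Infinity;
  return db.filter((c) => {
    const t = Date.parse(c.created_at);
    return t >= start && t <= end;
  });
}

function testNoMatchesReturnsEmptyArray(): void {
  // Arrange: seed the in-memory store with an out-of-range row
  const db = [{ created_at: "2023-06-01T00:00:00Z" }];

  // Act: query with a range that matches nothing
  const result = getCampaigns(db, "2024-01-01T00:00:00Z", "2024-02-01T00:00:00Z");

  // Assert: an empty array, not an error
  if (result.length !== 0) throw new Error("expected no campaigns");
}

testNoMatchesReturnsEmptyArray();
```

Each scenario in the acceptance criteria becomes one such test, with its own Arrange/Act/Assert sections and cleanup handled in an `afterEach` hook when a real framework is used.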
---
### Documentation Task Example
> Task: Document campaign date filtering in user guide
**Description:**
Add comprehensive user-facing documentation for the new campaign date filtering feature in the user guide.
**Acceptance Criteria:**
- [ ] New section added to `/docs/user-guide/managing-campaigns.md`
- [ ] Section includes annotated screenshot showing filter UI
- [ ] Step-by-step instructions for using date filters
- [ ] Explanation of what happens when no campaigns match
- [ ] Common error messages documented with solutions
- [ ] Example use cases provided (e.g., "View last month's campaigns")
- [ ] Links to related documentation (campaign management, reporting)
- [ ] Documentation reviewed for clarity and grammar
- [ ] Screenshots are up-to-date and clearly labeled
**Technical Notes:**
Files to Modify:
- `/docs/user-guide/managing-campaigns.md`
- Add screenshots to `/docs/images/campaign-filtering/`
Style Guide:
- Follow documentation style in `/docs/STYLE_GUIDE.md`
- Use second person ("you can filter campaigns...")
- Include both positive and edge case examples
---
### Research/Spike Task Example
> Task: Evaluate date picker libraries for campaign filtering UI
**Description:**
Research and recommend a date picker library for the campaign filtering feature. Evaluate options based on accessibility, mobile support, and bundle size.
**Acceptance Criteria:**
- [ ] At least 3 libraries evaluated (e.g., react-datepicker, react-day-picker, MUI DatePicker)
- [ ] Comparison matrix created with: accessibility score, mobile UX, bundle size, license
- [ ] Each library tested with quick prototype
- [ ] Recommendation documented with justification
- [ ] Any integration challenges identified
- [ ] Fallback option identified if primary choice has issues
- [ ] Findings shared with team for feedback
- [ ] Decision documented in `/docs/decisions/date-picker-library.md`
**Technical Notes:**
Evaluation Criteria:
- Accessibility (keyboard nav, screen readers, ARIA)
- Mobile experience (touch-friendly, responsive)
- Bundle size (<20KB ideal)
- Customization options (styling, localization)
- Active maintenance and community support
**Time Box:** Maximum 4 hours for evaluation
---
## Quick Checklist
Before creating a task, verify:
- [ ] Title is action-oriented (starts with verb)
- [ ] Description clearly states what needs to be done
- [ ] Has 3-5 specific acceptance criteria
- [ ] Acceptance criteria are testable/verifiable
- [ ] Includes relevant technical notes or context
- [ ] Right-sized (2 hours to 2 days max)
- [ ] Linked to parent user story
- [ ] Has appropriate labels (type, technical area)

---
name: User Story Creation
description: This skill should be used when the user asks to "create user stories", "write user stories", "break down epic into stories", "define user stories", "what stories do I need", or when they have epics defined and need to decompose them into specific, valuable user stories following INVEST criteria.
version: 0.2.0
---
# User Story Creation
## Overview
User story creation transforms epics into specific, actionable requirements that describe functionality from a user's perspective. Well-written user stories follow the INVEST criteria and provide clear value while remaining small enough to be completed in a single iteration. This skill guides the process of breaking down epics into high-quality user stories.
## Purpose
User stories serve as the detailed requirements layer:
- **Above**: Epics (major capabilities)
- **Stories**: Specific user-facing functionality
- **Below**: Tasks (implementation steps)
Well-written user stories:
- Describe functionality from user perspective
- Deliver independent, testable value
- Fit within a single iteration/sprint
- Enable detailed estimation and planning
- Facilitate conversation and refinement
## When to Use This Skill
Use user story creation when:
- An epic exists and needs to be broken into stories
- User asks for detailed requirements for a capability
- Planning an iteration and need to define work
- Refining or adding stories to an existing epic
- Validating that stories cover the full epic scope
**Prerequisite:** Epic must exist before creating stories. If no epic exists, use epic-identification skill first.
## User Story Format
### Standard Template
```
As a [user type/persona],
I want [goal/desire],
So that [benefit/value].
```
**Components:**
- **User type**: WHO wants this (specific role or persona)
- **Goal**: WHAT they want to do (capability or action)
- **Benefit**: WHY it matters (value or outcome)
### Example Stories
**Good:**
```
As a marketing manager,
I want to filter campaign data by date range,
So that I can analyze performance for specific time periods.
```
**Poor:**
```
As a user, (too vague—which user?)
I want to see data, (too vague—what data? how?)
So that I can use the app. (no specific benefit)
```
### When to Deviate from Template
The template is a guideline, not a requirement:
- Use it when it clarifies value and perspective
- Deviate when it adds unnecessary words
- Alternative: Simple title + detailed description
**Alternative format:**
- **Title**: "Filter campaigns by date range"
- **Description**: Detailed explanation of functionality and value
## INVEST Criteria
Every user story should meet INVEST criteria:
### I - Independent
Stories should be completable without depending on other stories:
**Why**: Enables flexible prioritization and parallel work
**How to achieve**:
- Minimize dependencies between stories
- If dependencies exist, sequence stories appropriately
- Consider vertical slicing (full stack per story) vs. horizontal
**Example of a dependency issue:**
- Story 1: "Build database schema"
- Story 2: "Create API endpoints"
- Story 3: "Build UI"
**Better (independent slices)**:
- Story 1: "User can view their profile data"
- Story 2: "User can edit their profile name"
- Story 3: "User can upload profile photo"
### N - Negotiable
Details are open for discussion, not fixed contracts:
**Why**: Encourages collaboration and emergence of best solution
**How to achieve**:
- Focus on WHAT and WHY, not HOW
- Leave implementation details for later
- Specify constraints, not solutions
**Negotiable:**
"Display campaign performance metrics in an easy-to-scan format"
**Too prescriptive:**
"Display campaign performance in a table with exactly 5 columns: Name, Clicks, Cost, ROI, Status, using blue headers and zebra-striped rows"
### V - Valuable
Must deliver value to users or stakeholders:
**Why**: Every story should move toward vision/epic goals
**How to achieve**:
- Describe user-facing value
- Avoid purely technical stories without user impact
- If technical work is needed, frame it in terms of enabling user value
**Valuable:**
"User can reset their password via email link"
**Low value:**
"Refactor authentication module" (unless it enables something valuable)
### E - Estimable
Team can estimate size/effort:
**Why**: Enables planning and prioritization
**How to achieve**:
- Provide enough detail to understand scope
- Clarify unknowns before committing to story
- If not estimable, spike or research story may be needed
**Estimable:**
"User can filter campaigns by date range (start date, end date)"
**Not estimable:**
"Improve campaign filtering" (too vague—how much improvement?)
### S - Small
Fits within a single iteration:
**Why**: Enables frequent delivery and feedback
**How to achieve**:
- Aim for 1-5 days of work
- If larger, split into smaller stories
- Use vertical slicing to create small, valuable increments
**Right size:**
"User can export campaign data to CSV format"
**Too large:**
"User can export campaign data to any format with custom fields and scheduling"
**Split into:**
- "User can export campaign data to CSV"
- "User can select which fields to include in export"
- "User can schedule recurring exports"
### T - Testable
Clear acceptance criteria enable verification:
**Why**: Know when story is complete and working correctly
**How to achieve**:
- Define specific, observable outcomes
- Include acceptance criteria in story description
- Focus on behavior, not implementation
**Testable:**
"User can filter campaigns by date range"
- AC: Date pickers for start and end dates
- AC: Campaigns outside range are hidden
- AC: Validation prevents invalid date ranges
**Not testable:**
"System should be performant" (what does performant mean?)
## Story Creation Process
### Step 1: Review the Epic
Understand the epic being broken down:
**Key Actions:**
- Read epic issue in GitHub Projects
- Understand scope, value, success criteria
- Identify user types and journeys covered
### Step 2: Identify User Journeys
Map out the user flows within the epic:
**Techniques:**
**Task Analysis:**
- What tasks do users need to complete?
- What's the sequence of actions?
- Example: "View → Filter → Analyze → Export" for analytics epic
**Scenario Mapping:**
- What scenarios or use cases exist?
- What different paths might users take?
- Example: "First-time setup" vs. "Daily usage" vs. "Troubleshooting"
**User Type Breakdown:**
- Do different user types need different stories?
- Admin vs. end-user flows
- Power user vs. casual user needs
### Step 3: Draft Initial Stories
Create draft stories covering the epic:
**Start with happy paths:**
- Core functionality for primary scenarios
- Most common user needs
- Essential capabilities
**Then add edge cases and variations:**
- Error handling
- Alternative flows
- Advanced features
**Ensure coverage:**
- All epic scope is covered by stories
- No gaps in user journeys
- All success criteria are addressable
### Step 4: Apply INVEST Criteria
Review each story against INVEST:
**Refine for Independence:**
- Can this story be completed without others?
- If not, can it be reframed or split?
**Check Value:**
- Does this deliver something users care about?
- Can we articulate the "so that" benefit clearly?
**Verify Size:**
- Is this 1-5 days of work?
- If larger, how can it be split?
**Add Testability:**
- What are the acceptance criteria?
- How will we verify this works?
### Step 5: Add Acceptance Criteria
For each story, define clear acceptance criteria:
**Format:**
Given [context],
When [action],
Then [expected outcome].
**Or simple checklist:**
- [ ] Criterion 1
- [ ] Criterion 2
- [ ] Criterion 3
**Example:**
Story: "User can filter campaigns by date range"
Acceptance Criteria:
- [ ] Date picker UI for start date and end date
- [ ] Only campaigns with activity in date range are shown
- [ ] Selecting invalid range (end before start) shows error message
- [ ] Clearing filters shows all campaigns again
### Step 6: Prioritize Stories
Determine sequence and priority:
**Sequencing:**
- Which stories must come first (dependencies)?
- What's the logical build-up of functionality?
**Prioritization:**
- Which stories deliver most value?
- Which are riskiest (do early for learning)?
- Use MoSCoW framework (prioritization skill)
### Step 7: Create Story Issues in GitHub Projects
For each story, create a GitHub issue in the relevant GitHub Project:
**Issue Title:** "[Story summary in user voice]"
**Issue Description:** Full story with acceptance criteria using template
**Custom Fields:**
- Type: Story
- Priority: [Must Have / Should Have / Could Have]
- Status: Not Started
**Labels:**
- `type:story`
- `priority:[moscow-level]`
**Parent:** Link to Epic issue as parent
All tasks for this story will be created as child issues.
## Story Splitting Techniques
When stories are too large, use these techniques:
### Workflow Steps
Split by steps in a workflow:
**Large:**
"User can manage their subscription"
**Split:**
- "User can view subscription details"
- "User can upgrade subscription plan"
- "User can cancel subscription"
### Operations (CRUD)
Split by different operations:
**Large:**
"User can manage team members"
**Split:**
- "User can invite team members"
- "User can view team member list"
- "User can remove team members"
- "User can change team member roles"
### Business Rules
Split by different rules or variations:
**Large:**
"System applies discount codes"
**Split:**
- "System applies percentage discount codes"
- "System applies fixed-amount discount codes"
- "System applies buy-one-get-one promotions"
### Happy Path vs. Variations
Start with simple case, add complexity:
**Large:**
"User can upload files"
**Split:**
- "User can upload single image file (basic case)"
- "User can upload multiple files at once"
- "User can drag and drop files to upload"
- "System validates file types and shows errors"
### Data Variations
Split by different data types or sources:
**Large:**
"System imports contact data"
**Split:**
- "System imports contacts from CSV file"
- "System imports contacts from Google Contacts"
- "System imports contacts from Outlook"
### Platforms/Interfaces
Split by different interfaces:
**Large:**
"User receives notifications"
**Split:**
- "User receives in-app notifications"
- "User receives email notifications"
- "User receives SMS notifications"
## Best Practices
### Write from User Perspective
Focus on what users see and experience:
❌ "Implement database indexes for performance"
✅ "Campaign list loads in under 2 seconds"
### Keep Stories Testable
Always include acceptance criteria:
**Minimum:**
- 2-5 acceptance criteria per story
- Specific, observable outcomes
- Testable without looking at code
### Avoid Technical Tasks as Stories
Technical work should be tasks within user-facing stories:
❌ Story: "Set up CI/CD pipeline"
✅ Story: "User can see deployment status", Tasks include CI/CD setup
### One Story, One Goal
Each story should have singular focus:
❌ "User can edit profile and change password and upload avatar"
✅ Three separate stories
### Include Non-Functional Requirements
Don't forget quality attributes:
- Performance requirements
- Security constraints
- Accessibility standards
- Usability expectations
## Common Pitfalls to Avoid
### Stories Too Large
Watch for stories that take weeks, not days:
- Split using techniques above
- Aim for 1-5 days of work
- Smaller is better than larger
### Stories Too Small
Avoid stories that are trivial or just tasks:
❌ "Add Submit button to form" (this is a task)
✅ "User can submit contact form with validation"
### Missing Acceptance Criteria
Every story needs testability:
- How do we know when it's done?
- What are the specific behaviors?
- What should we test?
### Pure Technical Stories
Frame technical work in terms of user value:
❌ "Refactor payment module"
✅ "Payment processing completes in under 3 seconds" (enables user value)
## Quick Reference: Story Creation Flow
1. **Review Epic** → Understand scope, value, success criteria
2. **Identify Journeys** → Map user flows and scenarios
3. **Draft Stories** → Cover happy paths, then edge cases
4. **Apply INVEST** → Check and refine against criteria
5. **Add Acceptance Criteria** → Define testability for each story
6. **Prioritize** → Sequence and rank by value
7. **Create Issues** → Add to GitHub Projects as children of epic
8. **Proceed** → Move to task breakdown for each story
## Additional Resources
### Reference Files
For detailed story templates:
- **`${CLAUDE_PLUGIN_ROOT}/skills/user-story-creation/references/story-template.md`** - Complete user story template with acceptance criteria formats
## Next Steps
After creating user stories:
1. Create story issues in GitHub Projects (as children of epic issue)
2. Prioritize stories using the prioritization skill
3. Select highest-priority story and proceed to task breakdown
4. Iterate through all stories, creating tasks for each
User stories are the bridge between epics and executable work—invest time to make them clear, valuable, and testable.

---
# User Story Template
Use this template when creating user story issues in GitHub Projects. Copy the structure below into the issue description.
---
## User Story: [Brief title from user perspective]
### Story
**As a** [specific user type/persona],
**I want** [goal or capability],
**So that** [benefit or value].
---
### Context
[Optional: Additional background or context that helps understand this story. Why is this needed? What problem does it solve?]
---
### Acceptance Criteria
[Define specific, testable conditions that must be met for this story to be considered complete]
> Format Option 1: Given-When-Then
- **Given** [context or starting state]
**When** [action taken]
**Then** [expected outcome]
- **Given** [another context]
**When** [another action]
**Then** [another outcome]
> Format Option 2: Simple Checklist
- [ ] Criterion 1: [Specific, observable outcome]
- [ ] Criterion 2: [Specific, observable outcome]
- [ ] Criterion 3: [Specific, observable outcome]
---
### Notes & Assumptions
[Optional: Any assumptions, constraints, or additional notes]
**Assumptions:**
- [Assumption 1]
- [Assumption 2]
**Constraints:**
- [Constraint 1]
- [Constraint 2]
**Out of Scope:**
- [What's NOT included in this story]
---
### Definition of Done
At story level, done means:
- [ ] All acceptance criteria met
- [ ] Code reviewed and merged
- [ ] Unit tests written and passing
- [ ] Integration tests passing (if applicable)
- [ ] Documented (user-facing docs, API docs, etc.)
- [ ] Tested in staging environment
- [ ] Acceptance confirmed by product owner/stakeholder
---
### Tasks
[Tasks will be created as child issues of this story. Link them here when created]
- [Link to Task 1]
- [Link to Task 2]
---
**Parent:** [Link to Epic Issue]
**Children:** [Task Issues will be linked here]
---
## Examples
### Example 1: Campaign Filtering
**As a** marketing manager,
**I want** to filter campaigns by date range,
**So that** I can analyze performance for specific time periods.
**Acceptance Criteria:**
- [ ] Date picker UI allows selection of start date and end date
- [ ] Only campaigns with activity within the selected date range are displayed
- [ ] If end date is before start date, system shows validation error
- [ ] "Clear filters" button returns to showing all campaigns
- [ ] Selected date range is preserved when navigating away and returning
**Notes:**
- Use native browser date pickers for best UX
- Default to "last 30 days" on initial page load
---
### Example 2: Password Reset
**As a** user who forgot my password,
**I want** to receive a password reset link via email,
**So that** I can regain access to my account.
**Acceptance Criteria:**
- **Given** I click "Forgot Password" on the login page
**When** I enter my email address and submit
**Then** I receive an email with a reset link within 2 minutes
- **Given** I click the reset link in the email
**When** the link is less than 1 hour old
**Then** I'm taken to a page to set a new password
- **Given** I enter a new password meeting requirements (8+ chars, etc.)
**When** I submit the new password
**Then** my password is updated and I'm redirected to login
- **Given** I click a reset link
**When** the link is more than 1 hour old
**Then** I see an error message saying the link expired
**Assumptions:**
- User email must already exist in system
- Reset links expire after 1 hour for security
**Out of Scope:**
- Multi-factor authentication (separate story)
- Account recovery without email access (separate story)
---
### Example 3: File Upload
**As a** content creator,
**I want** to upload image files to my media library,
**So that** I can use them in my posts and campaigns.
**Acceptance Criteria:**
- [ ] "Upload" button is clearly visible in media library
- [ ] Clicking "Upload" opens file browser allowing image selection
- [ ] Supported formats: JPG, PNG, GIF, WebP (max 10MB per file)
- [ ] Upload progress indicator shows during upload
- [ ] On success, uploaded image appears in media library immediately
- [ ] On failure (wrong format, too large), clear error message is shown
- [ ] Multiple files can be selected and uploaded at once
**Technical Notes:**
- Images should be automatically resized/optimized on server
- Store originals and generate thumbnails
---
## INVEST Check
Before finalizing a story, verify it meets INVEST criteria:
- [ ] **Independent**: Can be completed without depending on other stories
- [ ] **Negotiable**: Details are discussable, not fixed implementation
- [ ] **Valuable**: Delivers clear value to users or stakeholders
- [ ] **Estimable**: Team can reasonably estimate size/effort
- [ ] **Small**: Fits within a single iteration (1-5 days)
- [ ] **Testable**: Has clear acceptance criteria that can be verified
If any criteria aren't met, refine the story before committing to it.

---
name: Vision Discovery
description: This skill should be used when the user asks to "discover vision", "create a vision", "define product vision", "document vision", "what should my vision be", "help me with vision", or when starting a new requirements project and needs to establish the foundational product vision before identifying epics or stories.
version: 0.2.0
---
# Vision Discovery
## Overview
Vision discovery is the critical first step in the requirements lifecycle. A clear, well-articulated product vision provides direction for all subsequent work—epics, user stories, and tasks all flow from and align with the vision. This skill guides the process of discovering and documenting a compelling product vision through structured questioning and best practices.
## Purpose
A product vision defines:
- **What problem** is being solved
- **Who** will benefit from the solution
- **Why** this solution matters
- **What success looks like** when achieved
The vision serves as a north star for all product decisions, helping teams stay aligned and prioritize work that delivers the most value.
## When to Use This Skill
Use vision discovery when:
- Starting a new product or feature from scratch
- The user has a vague idea but needs help articulating it clearly
- Existing vision is unclear, outdated, or poorly defined
- Team lacks alignment on product direction
- Before identifying epics (vision must exist first)
## Vision Discovery Process
### Step 1: Understand the Problem Space
Begin by exploring the problem being solved. Ask probing questions to uncover the root issue:
**Essential Questions:**
- What problem are you trying to solve?
- Who experiences this problem?
- How do they currently address it (workarounds, competitors, manual processes)?
- Why is the current situation unsatisfactory?
- What happens if this problem remains unsolved?
**Technique:** Use the "5 Whys" technique to dig deeper into root causes. When the user describes a problem, ask "why is that a problem?" repeatedly to uncover underlying issues.
### Step 2: Identify Target Users
Clearly define who will use and benefit from the solution:
**Essential Questions:**
- Who is the primary user/customer?
- Are there secondary users (admins, support staff, etc.)?
- What are their key characteristics (role, expertise level, context)?
- What are their goals and motivations?
- What pain points do they experience?
**Output:** Create user personas or archetypes with specific, concrete details. Avoid vague descriptions like "business users"—be specific: "marketing managers at mid-size B2B companies tracking campaign ROI."
### Step 3: Define the Solution Vision
Articulate what the solution is and how it addresses the problem:
**Essential Questions:**
- In one sentence, what does this product do?
- What makes this solution different or better than alternatives?
- What are the 2-3 core capabilities that define this product?
- What is explicitly NOT part of this vision (scope boundaries)?
**Technique:** Use the "elevator pitch" format: "For [target users] who [need/problem], [product name] is a [category] that [key benefit]. Unlike [alternatives], our product [unique differentiator]."
### Step 4: Establish Success Metrics
Define how success will be measured:
**Essential Questions:**
- How will we know if this product is successful?
- What metrics matter most (usage, revenue, satisfaction, efficiency)?
- What does "good" look like in 6 months? 1 year?
- What user behaviors indicate value delivery?
**Output:** Specific, measurable success criteria. Avoid vanity metrics—focus on indicators of genuine value and impact.
### Step 5: Document the Vision
Create a structured vision document in GitHub Projects as an issue with Type: Vision. Use the template structure from `${CLAUDE_PLUGIN_ROOT}/skills/vision-discovery/references/vision-template.md`.
**Core Sections:**
1. **Problem Statement** - What problem exists and why it matters
2. **Target Users** - Who will use this and their key characteristics
3. **Solution Overview** - What the product is and does
4. **Core Value Proposition** - Why users will choose this solution
5. **Success Metrics** - How success will be measured
6. **Scope & Boundaries** - What's included and explicitly excluded
## Best Practices
### Keep It Concise
A vision should be digestible in 5-10 minutes. Aim for:
- 1-2 paragraphs for each major section
- Total length: 500-1,000 words
- Clear, jargon-free language
### Make It Inspiring Yet Realistic
Balance ambition with achievability:
- Articulate a compelling future state
- Ground it in real user needs and market realities
- Avoid buzzwords and hype
- Focus on genuine value creation
### Focus on "Why" Not "How"
The vision defines direction, not implementation:
- Describe outcomes and benefits, not technical solutions
- Avoid specifying features or architecture
- Leave room for discovery during epic and story creation
- Answer "what problem" and "why it matters," not "how we'll build it"
### Ensure Alignment
Before finalizing the vision:
- Review with key stakeholders
- Confirm it resonates with target users
- Verify it aligns with business goals
- Check that success metrics are measurable
### Iterate and Refine
Vision is not set in stone:
- Expect to refine as you learn more
- Update when market conditions or user needs change
- Use feedback from epic and story creation to improve clarity
- Treat vision as a living document
## Integration with GitHub Projects
Create the vision as a GitHub issue in the relevant GitHub Project:
**Issue Title:** "Product Vision: [Product Name]"
**Issue Description:** Full vision document with all sections
**Custom Fields:**
- Type: Vision
- Status: Active
- Priority: (Not applicable for vision)
**Labels:**
- `type:vision`
All epics will be created as child issues of this vision issue, establishing clear traceability.
## Common Pitfalls to Avoid
### Too Vague
❌ "Build a platform for users to interact"
✅ "Enable marketing managers to track campaign ROI across channels in real-time"
### Too Prescriptive
❌ "Build a React app with a dashboard showing charts"
✅ "Provide visibility into campaign performance to enable data-driven decisions"
### Scope Creep
❌ Vision that includes everything: e-commerce, social, analytics, AI, blockchain...
✅ Focused vision with clear boundaries: "Campaign ROI tracking, NOT creative design or email delivery"
### Unmeasurable Success
❌ "Be the best product in the market"
✅ "Achieve 10,000 active users with 70%+ weekly retention within 12 months"
## Quick Reference: Vision Discovery Flow
1. **Problem Space** → Understand what problem exists and why it matters
2. **Target Users** → Define who experiences the problem and will use the solution
3. **Solution Vision** → Articulate what the solution is and its core value
4. **Success Metrics** → Establish measurable success criteria
5. **Document** → Create vision issue in GitHub Projects
6. **Validate** → Review with stakeholders and refine
7. **Proceed** → Move to epic identification once vision is solid
## Additional Resources
### Reference Files
For detailed vision templates and examples:
- **`${CLAUDE_PLUGIN_ROOT}/skills/vision-discovery/references/vision-template.md`** - Complete vision template with all sections and guidance
## Next Steps
After completing vision discovery:
1. Create the vision issue in GitHub Projects
2. Share with stakeholders for feedback
3. Proceed to epic identification using the epic-identification skill
4. Reference the vision throughout all subsequent requirements work
The vision is the foundation—invest time to get it right before moving to epics and stories.

---
# Vision Document Template
Use this template when creating a vision issue in GitHub Projects. Copy the structure below into the issue description, then fill in each section based on the discovery process.
---
## Product Vision: [Product Name]
### Problem Statement
**What problem exists?**
[Describe the core problem being addressed. Be specific about the pain points, challenges, or inefficiencies that currently exist.]
**Why does this problem matter?**
[Explain the impact of this problem—costs, frustration, missed opportunities, risks, etc. Quantify when possible.]
**Current State:**
[How do people currently address this problem? What workarounds, competitors, or manual processes exist? Why are these insufficient?]
---
### Target Users
**Primary Users:**
- **Who:** [Role, title, context]
- **Characteristics:** [Expertise level, environment, constraints]
- **Goals:** [What they're trying to achieve]
- **Pain Points:** [Specific frustrations or challenges they face]
**Secondary Users (if applicable):**
- **Who:** [Other stakeholders, admins, support staff]
- **Relationship:** [How they interact with primary users or the product]
**User Personas:**
[Optional: Create 1-2 concrete personas with names, backgrounds, and specific scenarios]
---
### Solution Overview
**In one sentence:**
[Elevator pitch: What this product does and who it's for]
**Core Capabilities:**
1. [First major capability]
2. [Second major capability]
3. [Third major capability]
[Describe the 2-3 essential things this product must do to deliver value]
**Unique Value Proposition:**
[What makes this solution different from or better than alternatives? Why would users choose this?]
---
### Core Value Proposition
**For Users:**
[How does this solution make users' lives better? What specific benefits do they gain?]
**Key Benefits:**
- [Benefit 1: e.g., Save 10 hours/week on manual reporting]
- [Benefit 2: e.g., Increase decision confidence with real-time data]
- [Benefit 3: e.g., Reduce errors from manual data entry]
**Differentiation:**
[Why is this better than current alternatives? What's the compelling reason to switch/adopt?]
---
### Success Metrics
**How we'll measure success:**
| Metric | Target | Timeframe |
|--------|--------|-----------|
| [Metric 1: e.g., Active Users] | [e.g., 10,000] | [e.g., 12 months] |
| [Metric 2: e.g., Weekly Retention] | [e.g., 70%] | [e.g., 6 months] |
| [Metric 3: e.g., Time Saved per User] | [e.g., 10 hrs/week] | [e.g., 3 months] |
**User Success Indicators:**
[What user behaviors or outcomes indicate the product is delivering value?]
- [Indicator 1]
- [Indicator 2]
- [Indicator 3]
---
### Scope & Boundaries
**In Scope:**
[What IS included in this vision]
- [Capability 1]
- [Capability 2]
- [Capability 3]
**Out of Scope:**
[What is explicitly NOT included—define boundaries to prevent scope creep]
- [Not doing 1]
- [Not doing 2]
- [Not doing 3]
**Future Considerations:**
[Things that might be considered later but aren't part of the initial vision]
- [Future item 1]
- [Future item 2]
---
### Strategic Alignment
**Business Goals:**
[How does this vision align with broader organizational or business objectives?]
**Market Opportunity:**
[What market need or opportunity does this address? Size, growth, trends?]
**Competitive Landscape:**
[Who else is in this space? How do we compare or differentiate?]
---
### Risks & Assumptions
**Key Assumptions:**
[What are we assuming to be true that could impact success?]
1. [Assumption 1]
2. [Assumption 2]
3. [Assumption 3]
**Known Risks:**
[What could go wrong or prevent success?]
- [Risk 1 + mitigation strategy]
- [Risk 2 + mitigation strategy]
---
## Notes
[Any additional context, background, or considerations]
---
## Next Steps
After finalizing this vision:
1. Share with stakeholders for feedback and alignment
2. Identify epics that will deliver on this vision
3. Reference this vision when prioritizing and making product decisions
4. Review and update quarterly or when significant learnings emerge