Initial commit
skills/product-manager/references/analysis_patterns.md
# Ticket and Epic Analysis Patterns

This guide provides structured approaches for analyzing tickets and epics using LLM reasoning.

## Overview

Analysis tasks leverage LLM reasoning to:
- Identify gaps and missing tickets
- Detect mismatches between ticket assumptions and code reality
- Find dependencies and relationships
- Evaluate ticket clarity and completeness
- Generate thoughtful refinement questions

## Pattern 1: Identifying Gaps in Epic Coverage

**Scenario**: User wants to identify missing tickets for an epic (e.g., "Identify gaps in epic XXX-123").

### Process

1. **Fetch the epic**:
   - Get the epic issue to understand scope, description, and acceptance criteria
   - Parse the epic's stated goals and requirements

2. **Fetch all subtickets** (see the sketch after this list):
   - Query: `parent:"EPIC-ID"`
   - List all child tickets
   - Map coverage: which parts of the epic have tickets?

3. **Analyze coverage**:
   - Compare the epic description against existing tickets
   - Identify missing areas:
     - Frontend/UI components not covered
     - Backend services or APIs not covered
     - Testing or QA work not covered
     - Documentation or knowledge base gaps
     - Deployment or infrastructure work not covered
   - Look for edge cases or error scenarios
   - **Content filtering**:
     - Extract user needs and business value
     - Translate code discussions into high-level requirements
     - Exclude code snippets unless explicitly requested
     - Include technical details when deviating from conventions

4. **Present findings**:
   - List identified gaps with context
   - Suggest new tickets for uncovered work
   - Include estimated scope for each gap
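
The fetch steps above assume an issue tracker API. As a minimal sketch, assuming the Linear GraphQL endpoint, an API key in a `LINEAR_API_KEY` environment variable, and that child tickets are exposed through the issue's `children` connection (the query shape and field names are assumptions for illustration, not confirmed by this guide):

```
# Hypothetical helper for steps 1-2: fetch an epic and its child tickets.
import os
import requests

LINEAR_URL = "https://api.linear.app/graphql"

EPIC_QUERY = """
query Epic($id: String!) {
  issue(id: $id) {
    identifier
    title
    description
    children { nodes { identifier title description } }
  }
}
"""

def fetch_epic_with_subtickets(epic_id: str) -> dict:
    response = requests.post(
        LINEAR_URL,
        json={"query": EPIC_QUERY, "variables": {"id": epic_id}},
        headers={"Authorization": os.environ["LINEAR_API_KEY"]},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["data"]["issue"]

# Map coverage: compare the epic description against the titles of existing child tickets.
epic = fetch_epic_with_subtickets("AIA-99")  # hypothetical epic ID
covered_titles = [child["title"] for child in epic["children"]["nodes"]]
```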

### Example Analysis Structure

**Epic**: "Implement email digest feature"

**Current tickets**:
- AIA-100: Design email digest schema
- AIA-101: Build digest generation API
- AIA-102: Create email templates

**Identified gaps**:
- [ ] UI component: Digest frequency preferences (suggested: 2-3 points)
- [ ] Testing: End-to-end digest flow with real email delivery
- [ ] Documentation: Digest API documentation and examples
- [ ] Infrastructure: Set up scheduled job for digest generation (cron/task queue)

## Pattern 2: Detecting Assumption Mismatches

**Scenario**: User asks "Are there any wrong assumptions in this ticket?" (often comparing the ticket description to the actual code implementation).

### Process

1. **Extract assumptions from ticket**:
   - Read the ticket description, acceptance criteria, and requirements
   - Identify explicit assumptions:
     - "API returns JSON array"
     - "User already authenticated"
     - "Database table exists with these fields"
   - Identify implicit assumptions:
     - "Feature X is already implemented"
     - "Libraries are available in codebase"
     - "System can handle concurrent requests"

2. **Cross-reference with provided context**:
   - The user may provide code snippets, implementation details, or codebase structure
   - Compare assumptions against the actual state
   - Identify mismatches:
     - API returns a different format
     - Prerequisites not yet implemented
     - Different database schema
     - Performance constraints
     - Library incompatibilities

3. **Categorize mismatches**:
   - **Critical**: Breaks implementation (e.g., "API structure wrong")
   - **High**: Requires significant rework (e.g., "Missing prerequisite")
   - **Medium**: Needs clarification or small adjustment
   - **Low**: Minor edge case or optimization

4. **Present findings with remediation** (see the sketch after this list):
   - Flag each mismatch clearly
   - Explain impact on implementation
   - Suggest updated acceptance criteria or requirements
   - Use the AskUserQuestion tool to confirm: "Should we update the ticket or is there missing context?"
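
To keep findings consistent across steps 3 and 4, the categorized mismatches can be collected as structured records before the report is written. The sketch below is illustrative only; the field names and severity ordering are assumptions, not a prescribed schema.

```
# Illustrative record for a categorized mismatch (step 3) and a severity-ordered report (step 4).
from dataclasses import dataclass

SEVERITY_ORDER = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3}

@dataclass
class Mismatch:
    assumption: str     # what the ticket claims
    reality: str        # what the code or provided context shows
    severity: str       # Critical / High / Medium / Low
    suggested_fix: str

def format_report(mismatches: list[Mismatch]) -> str:
    ordered = sorted(mismatches, key=lambda m: SEVERITY_ORDER[m.severity])
    lines = []
    for m in ordered:
        lines.append(f"- [{m.severity}] Assumed: {m.assumption}")
        lines.append(f"  Reality: {m.reality}")
        lines.append(f"  Suggested fix: {m.suggested_fix}")
    return "\n".join(lines)
```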

### Example Mismatch Detection

**Ticket assumption**: "Email parser API returns structured fields: { sender, subject, body, attachments }"

**Code reality**: "Parser returns raw MIME structure; field extraction not yet implemented"

**Mismatch**: Critical
- Ticket describes the endpoint as done; it actually needs a field extraction layer
- Acceptance criteria unrealistic for the current implementation
- **Suggested fix**: Split into (1) MIME parsing, (2) field extraction, (3) API endpoint

## Pattern 3: Dependency Analysis and Parallelization

**Scenario**: User wants to understand dependencies across tickets for parallel work.

### Process

1. **Extract dependencies from ticket relationships**:
   - Review Blocks/Blocked-by relationships
   - Look for implicit dependencies:
     - "Requires backend API" → ticket mentions an API endpoint
     - "Needs database migration" → schema changes
     - "Depends on design decision" → tickets waiting on decisions

2. **Identify critical path**:
   - Work that must complete before others can start
   - Usually: design → backend → frontend → testing

3. **Group by parallelizable tracks**:
   - Frontend UI work (if the API contract is defined)
   - Backend API implementation
   - Testing and QA scenarios
   - Documentation and knowledge base
   - Infrastructure/deployment work

4. **Suggest optimal sequence**:
   - Propose which work can happen in parallel
   - Identify blockers that must resolve first
   - Recommend team allocation to maximize parallelization

### Example Parallelization

**Tickets for feature**: Email digest feature (AIA-100 to AIA-105)

**Analysis**:
- AIA-100 (Design schema) - critical path start
- AIA-101 (Backend API) - depends on AIA-100 ✓ can start after design
- AIA-102 (Email templates) - independent ✓ can start immediately
- AIA-103 (Frontend UI) - depends on AIA-101 API contract
- AIA-104 (Testing) - depends on AIA-101 being runnable
- AIA-105 (Documentation) - can start after AIA-100

**Suggested parallelization** (see the sketch after this list):
1. **Start simultaneously**: AIA-100 (design), AIA-102 (templates)
2. **Once AIA-100 is done**: Start AIA-101 (backend), AIA-105 (docs)
3. **Once AIA-101 is done**: Start AIA-103 (frontend), AIA-104 (testing)
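
This grouping can be derived mechanically from the Blocks/Blocked-by relationships. Below is a minimal sketch that computes parallel "waves" from a blocked-by map; the ticket IDs mirror the example above, and the helper itself is illustrative rather than part of any tracker API.

```
# Group tickets into waves: every ticket in a wave has all of its blockers in
# earlier waves, so tickets within a wave can be worked on in parallel.
def parallel_waves(blocked_by: dict[str, set[str]]) -> list[list[str]]:
    remaining = {ticket: set(deps) for ticket, deps in blocked_by.items()}
    done: set[str] = set()
    waves: list[list[str]] = []
    while remaining:
        ready = sorted(t for t, deps in remaining.items() if deps <= done)
        if not ready:
            raise ValueError("Dependency cycle detected")
        waves.append(ready)
        done.update(ready)
        for ticket in ready:
            del remaining[ticket]
    return waves

# Blocked-by relationships from the example above.
blocked_by = {
    "AIA-100": set(),        # design schema
    "AIA-101": {"AIA-100"},  # backend API
    "AIA-102": set(),        # email templates
    "AIA-103": {"AIA-101"},  # frontend UI
    "AIA-104": {"AIA-101"},  # testing
    "AIA-105": {"AIA-100"},  # documentation
}

print(parallel_waves(blocked_by))
# [['AIA-100', 'AIA-102'], ['AIA-101', 'AIA-105'], ['AIA-103', 'AIA-104']]
```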

## Pattern 4: Clarity and Completeness Review

**Scenario**: User asks "Review existing Linear tickets for completeness, clarity, dependencies, open questions" for a range of tickets.

### Process

1. **Fetch all tickets** in the specified range (e.g., AIA-100 through AIA-110)

2. **Evaluate each ticket against criteria**:

   **Clarity Assessment**:
   - Is the title specific and action-oriented?
   - Is the description concise and understandable?
   - Are acceptance criteria testable and specific?
   - Could a developer confidently estimate and implement it?

   **Completeness Check**:
   - Do complex tickets have acceptance criteria?
   - Are bugs reproducible (steps provided)?
   - Are features properly scoped?
   - Are dependencies identified?
   - Are open questions flagged?

   **Dependency Verification**:
   - Are blocking relationships explicitly set?
   - Could work run in parallel if dependencies were clearer?
   - Are implicit dependencies made explicit?

   **Open Questions Assessment**:
   - Are uncertainties flagged?
   - Are questions assignable to the right parties?

3. **Compile findings**:
   - Create a summary per ticket
   - Highlight strengths and issues
   - Rank by urgency of refinement needed

4. **Present with recommendations**:
   - Strong tickets (ready for development)
   - Needs clarity (specific improvements recommended)
   - Needs breakdown (too large or complex)
   - Blocked by decisions (needs input from product/design)

### Evaluation Template

For each ticket, assess:

```
Ticket: AIA-XXX - [Title]
Status: [Ready/Needs Work/Blocked]

Clarity: [✓/⚠/✗] with reason
Completeness: [✓/⚠/✗] with reason
Dependencies: [✓/⚠/✗] with reason
Questions: [✓/⚠/✗] with reason

Issues found:
- [Issue 1 with recommendation]

Recommended next step:
- [Action needed]
```
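
A minimal sketch of how per-ticket assessments could be collected and rendered into the template above; the data shape and field names are illustrative assumptions, not part of the skill.

```
# Render one ticket assessment into the evaluation template shown above.
from dataclasses import dataclass, field

@dataclass
class Assessment:
    ticket: str
    title: str
    status: str       # Ready / Needs Work / Blocked
    checks: dict      # e.g. {"Clarity": "✓ clear scope", "Completeness": "⚠ no criteria"}
    issues: list = field(default_factory=list)
    next_step: str = ""

def render(a: Assessment) -> str:
    lines = [f"Ticket: {a.ticket} - {a.title}", f"Status: {a.status}", ""]
    for name in ("Clarity", "Completeness", "Dependencies", "Questions"):
        lines.append(f"{name}: {a.checks.get(name, '⚠ not assessed')}")
    lines += ["", "Issues found:"] + [f"- {issue}" for issue in a.issues]
    lines += ["", "Recommended next step:", f"- {a.next_step}"]
    return "\n".join(lines)
```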

## Pattern 5: Generating Refinement Session Questions

**Scenario**: User asks "Generate questions for the next refinement session for tickets XXX-100 through XXX-110".

### Process

1. **Fetch tickets** in the range

2. **Identify uncertainty patterns**:
   - Missing acceptance criteria
   - Ambiguous requirements
   - Conflicting specs
   - Implicit assumptions
   - Unknown edge cases
   - Unclear priorities or value

3. **Generate clarifying questions** organized by type:

   **For Product/Business**:
   - "What's the priority of this feature relative to X?"
   - "Who is the primary user and what problem does this solve?"
   - "What's the expected timeline/deadline?"
   - "Are there any compliance or regulatory requirements?"

   **For Engineering**:
   - "Is this technically feasible with the current stack?"
   - "Are there performance constraints we should consider?"
   - "Does this require changes to authentication/authorization?"
   - "What's the rollout strategy (feature flag, gradual, etc.)?"

   **For Design/UX**:
   - "Have we designed the user flows for this?"
   - "Are there accessibility requirements?"
   - "What's the visual/interaction pattern from existing UI?"

   **For All**:
   - "How does this integrate with existing feature X?"
   - "What happens in error scenarios?"
   - "What's the success metric for this?"

4. **Organize questions strategically**:
   - Start with high-impact questions and blockers
   - Group by theme or epic
   - Flag critical unknowns
   - Note dependencies between questions

### Example Question Set

**For epic "Email digest feature" refinement session**:

**Critical blockers** (must resolve first):
- [Product] Is the digest frequency (daily/weekly/monthly) configurable per user or system-wide?
- [Engineering] Can our email system handle the volume of digest sends? Should we batch them?

**Design questions** (needed before building):
- [Design] Have we designed the digest preview UI?
- [All] What unsubscribe mechanism do we need for digests?

**Edge cases** (clarify before acceptance):
- [Product] What happens if a user has no emails in the digest period?
- [Engineering] How do we handle timezone differences for "weekly" digests?

## Pattern 6: Epic Analysis and Adjustment Suggestions

**Scenario**: User wants "Based on the conversation transcript suggest adjustments to the epic XXX-123".

### Process

1. **Fetch the epic** with its full description and current subtickets

2. **Analyze the conversation transcript** for:
   - New insights or requirements not in the epic description
   - Stakeholder concerns or constraints
   - Changed priorities or scope
   - Technical challenges or trade-offs discussed
   - User feedback or use cases mentioned

3. **Cross-reference with the current epic**:
   - What's aligned between the epic description and the conversation?
   - What's missing from the epic description?
   - What's in the epic that wasn't discussed or is outdated?
   - Are subtickets still appropriate?

4. **Suggest adjustments**:
   - Update the epic description with new context
   - Recommend new tickets for discussion outcomes
   - Suggest removing or deferring scope
   - Highlight dependencies discovered in discussion
   - Flag trade-offs for decision

5. **Present as proposed amendments**:
   - "Current epic description" vs. "Suggested updates"
   - "Current subtickets" vs. "Suggested changes"
   - Rationale for each change

### Amendment Template

```
Epic: XXX-123 - [Current Title]

Current scope: [quote from description]

Suggested scope updates:
- Add: [new requirement or insight]
- Remove: [scope to defer]
- Clarify: [ambiguous part with suggested rewording]

New subtickets suggested:
- [Ticket with estimated scope]

Subtickets to reconsider:
- [Ticket that may no longer fit]

Dependencies or blockers discovered:
- [Dependency or constraint]
```

## General Analysis Guidelines

**Always**:
- Quote specific text from tickets/epics in findings
- Provide specific, actionable recommendations
- Explain the "why" behind observations
- Distinguish between facts (what's written) and inferences (what's implied)
- Flag assumptions clearly

**Consider**:
- The team's estimation approach (story points, t-shirt, none)
- Sprint velocity and capacity
- Current backlog health and priorities
- Existing patterns in ticket structure (to match style)

**When unsure**:
- Use the AskUserQuestion tool for clarifying questions
- Flag as an open question for refinement
- Suggest options with trade-offs
- Don't assume team preferences or standards

skills/product-manager/references/refinement_session_guide.md
# Refinement Session Guide

This guide explains how to prepare for and facilitate refinement sessions, including generating meaningful discussion questions.

## Refinement Session Overview

Refinement sessions are where product teams:
- Discuss upcoming work (next 2-3 sprints)
- Break down epics into actionable tickets
- Clarify requirements and acceptance criteria
- Identify dependencies and blockers
- Estimate complexity (if using estimation)
- Align on priorities

## Pre-Refinement: Generating Questions

**Scenario**: "Generate questions for the next refinement session for tickets XXX-100 through XXX-110"

### Question Generation Strategy

Generate questions that:
1. **Unblock implementation** - Remove technical uncertainties
2. **Clarify value** - Ensure everyone understands the "why"
3. **Surface dependencies** - Identify work that affects other work
4. **Challenge assumptions** - Find gaps in thinking
5. **Enable estimation** - Provide clarity for sizing complexity

### Question Categories and Examples

#### 1. Scope and Value Questions

**Product/Business focus**:
- "What's the primary user need this solves?"
- "Why now? What drives the priority?"
- "How does this relate to our OKRs?"
- "What's the definition of success?"
- "Who should we talk to in order to validate this with users?"

**When to use**: For new features, roadmap items, or items with an unclear "why"

#### 2. Technical Feasibility Questions

**Engineering focus**:
- "Is our current tech stack a good fit for this?"
- "Are there performance or scalability concerns?"
- "Do we need new infrastructure or tooling?"
- "Is this technically feasible in the next sprint?"
- "What technical debt might block this?"

**When to use**: Complex features, infrastructure work, new integrations

#### 3. Design and UX Questions

**Design focus**:
- "Have we designed this user flow?"
- "Are there accessibility requirements?"
- "How does this fit our existing design system?"
- "What's the mobile experience like?"
- "Should we prototype or validate first?"

**When to use**: Customer-facing features, UI changes

#### 4. Dependency and Integration Questions

**Cross-functional**:
- "Does this depend on other work in progress?"
- "Could this block other teams or projects?"
- "Does this integrate with [system X]? How?"
- "What APIs or data do we need from [team Y]?"
- "What's the integration test strategy?"

**When to use**: Any feature involving multiple systems, teams, or components

#### 5. Edge Cases and Error Handling

**Completeness focus**:
- "What happens if [error scenario]?"
- "How do we handle concurrent requests?"
- "What's the rollback strategy if things go wrong?"
- "Are there rate limits or capacity considerations?"
- "What about inactive/deleted users/data?"

**When to use**: Critical systems, features affecting data integrity, payment/security

#### 6. Decision-Required Questions

**Flag blockers**:
- "Do we need a design decision before starting?"
- "Should we do [approach A] or [approach B]? What are the trade-offs?"
- "Is this a platform-wide change requiring an architectural decision?"
- "Should we spike this first to reduce uncertainty?"

**When to use**: Items with multiple paths forward or architectural implications

#### 7. Rollout and Deployment Questions

**Operations focus**:
- "Should this go behind a feature flag?"
- "How do we roll this out without impacting users?"
- "What monitoring/alerts do we need?"
- "Is there a canary/gradual rollout strategy?"
- "What's the rollback procedure?"

**When to use**: User-facing changes, infrastructure changes, high-impact work

### Question Framing for Different Audiences

**For Engineering-heavy sessions**:
- Focus on technical feasibility and dependencies
- Include spike/investigation questions
- Address edge cases and error scenarios
- Discuss performance and testing implications

**For Product-heavy sessions**:
- Emphasize user value and success metrics
- Clarify scope and priorities
- Discuss trade-offs between features
- Identify missing user research or validation

**For Cross-functional sessions**:
- Start with "why" (value and goals)
- Surface dependencies early
- Identify design/technical gaps
- End with "what's next" (work plan)

## Sample Refinement Question Sets

### Example 1: New Feature (Email Digest)

```
**Value & Scope**:
- What problem does the email digest solve for users?
- Who's the primary user? What's their current workflow?
- Should frequency be configurable per user or system-wide?

**Technical**:
- Can our email system handle batch sending at this volume?
- Should we use a background job or a scheduled task?
- What email format (HTML/plain text/both)?

**Edge Cases**:
- What if a user has no emails in the period?
- How do we handle timezone differences?
- Unsubscribe mechanism?

**Dependencies**:
- Does this block or depend on other email features?
- Do we need design approval before building?

**Success**:
- How do we measure if this succeeds?
- What's the rollout timeline?
```

### Example 2: Bug Fix (Parser Failing on Special Characters)

```
**Clarification**:
- What's the impact? How many users are affected?
- Which special characters cause failures?
- Is this a security issue or just a UX problem?

**Root Cause**:
- Have we root-caused this?
- Is it encoding, parsing, or validation?

**Solution Scope**:
- Should we fix just these characters or handle all UTF-8?
- Do we need to update error messages?
- Should we add input validation on the frontend?

**Testing**:
- What test cases should we add?
- How do we prevent regression?

**Rollout**:
- Is this a hotfix or can we batch it with other parser work?
- Do we need to handle existing data that's affected?
```

### Example 3: Infrastructure Work (Database Migration)

```
**Why and When**:
- Why do we need this migration now?
- What's the impact of not doing it?
- Is this blocking feature work?

**Approach**:
- What's the migration strategy (zero-downtime or a maintenance window)?
- Do we need a feature flag or gradual rollout?
- What's the rollback plan?

**Scope**:
- Affected tables and data volumes?
- Performance impact?
- Monitoring and alerting?

**Dependencies**:
- Deployment sequence with other work?
- Do other teams need to prepare?

**Communication**:
- Do users need notification?
- What's the customer-facing impact?
```

## During Refinement: Using Questions to Guide Discussion

### Facilitation Tips

**Opening** (set context):
- "We're refining the next 2 sprints' work"
- "Goal is to clarify scope, surface unknowns, and identify dependencies"
- "It's okay if we don't have all answers—we'll capture questions for resolution"

**Present the work** (overview):
- Share epic or feature theme
- Briefly describe what we're working on
- Set the context for why now

**Ask clarifying questions** (engagement):
- Start with scope and value: "What problem does this solve?"
- Move to feasibility: "Is this technically doable?"
- Explore details: "What about edge case X?"
- Flag unknowns: "Do we need design input on this?"

**Capture decisions** (outcomes):
- What did we decide?
- What's still open?
- Who owns next steps?

**Identify follow-ups** (action items):
- Spike investigations
- Design reviews needed
- External dependencies
- Clarifications from stakeholders

### Handling "I Don't Know" Responses

When questions can't be answered:

1. **Capture as open question**:
   - Assign to appropriate person/team
   - Link in ticket for traceability
   - Flag as blocking or non-blocking

2. **Offer options to move forward**:
   - "Should we make an assumption to proceed?"
   - "Do we need a spike to validate?"
   - "Can we defer this to a separate ticket?"

3. **Note dependency**:
   - "This is blocked on [decision/clarification]"
   - "Engineering to spike approach by [date]"

## Post-Refinement: Ticket Quality Checklist

After refinement, ensure tickets are ready for development:

### Before Moving to Sprint

**For each ticket**, verify:

- [ ] **Title** is specific and action-oriented
- [ ] **Description** is concise (150-200 words) and answers: what, why, how
- [ ] **Type label** applied (Feature, Bug, Enhancement, etc.)
- [ ] **Acceptance criteria** are testable and specific (for complex work)
- [ ] **Dependencies** are identified and linked (if applicable)
- [ ] **Open questions** are flagged (if any remain)
- [ ] **Estimate** is provided (if team uses estimation)
- [ ] **No external blockers** (all prerequisites in progress or done)

### Team Readiness Check

**Before sprint starts**:
- [ ] All near-term tickets are in "Ready for Development" state
- [ ] Dependencies between tickets are clear
- [ ] Team has asked all blocking questions
- [ ] Success metrics defined for features
- [ ] Testing approach discussed for new work
## Sample Refinement Session Agenda

**Duration**: 60-90 minutes for a 2-week sprint

### Timeboxed Segments

**0:00-5:00**: Opening and context
- Share sprint theme or roadmap context
- Overview of items to refine

**5:00-45:00**: Ticket refinement (30-40 minutes)
- Present each epic/feature
- Ask clarifying questions
- Discuss design, technical approach, edge cases
- Identify dependencies
- Capture open questions

**45:00-55:00**: Dependency mapping (10 minutes)
- Review identified dependencies
- Suggest parallelization opportunities
- Flag critical path

**55:00-85:00**: Estimation and prioritization (20-30 minutes)
- If using estimation, estimate tickets
- Confirm prioritization
- Ensure sprint capacity alignment

**85:00-90:00**: Wrap-up
- Recap decisions and open questions
- Assign owners for follow-ups
- Confirm next sprint readiness

## Common Refinement Issues and Solutions

| Issue | Cause | Solution |
|-------|-------|----------|
| "This is too big" | Scope creep or lack of breakdown | Suggest splitting: by layer, by user journey, by value slice |
| "We don't know how to estimate" | Missing technical details | Spike first, then estimate. Or use T-shirt sizing (S/M/L). |
| "We're blocked on [X]" | External dependency or decision needed | Create a separate decision/spike ticket; mark the main ticket as blocked |
| "Acceptance criteria are too vague" | Product unclear on requirements | Ask specific questions; rewrite criteria to be testable |
| "This doesn't fit in a sprint" | Ticket too large | Break into sub-tickets; move lower-priority items to next sprint |
| "We forgot about [edge case]" | Incomplete analysis | Add as acceptance criteria; may increase estimate |

## Tips for Efficient Refinement

- **Prepare ahead**: Share ticket drafts before the session so the team can read them
- **Time-box discussions**: Allocate time per epic/feature; move on if there's no progress
- **Use templates**: Consistent ticket structure speeds discussion
- **Ask, don't answer**: Ask questions; don't impose solutions
- **Record decisions**: Capture what was decided, not just what was discussed
- **Assign ownership**: Each open question has an owner and due date

skills/product-manager/references/ticket_structure_guide.md
# Ticket Structure and Best Practices

This guide defines how to structure tickets for clarity, completeness, and actionability.

## Ticket Content Standards

**Concise Content**: Maximum 500 words per ticket, ideally 150-200 words.

## Title Guidelines

- **Action-oriented**: Start with clear verbs
- **Specific**: Include what and where, not just "Fix bug" or "Add feature"
- **Under 50 characters**: Concise and scannable
- **Examples** (a lint-style sketch follows this list):
  - ✅ "Add CSV export for user data"
  - ✅ "Fix email parser failing on non-ASCII domains"
  - ❌ "Export feature"
  - ❌ "Parser issue"

## Labels and Categorization

Always apply type labels for clarity:
- `Type/Feature` - New functionality
- `Type/Bug` - Defect or broken functionality
- `Type/Enhancement` - Improvement to existing feature
- `Type/Documentation` - Docs, guides, or knowledge base
- `Type/Refactor` - Code cleanup, technical debt
- `Type/Testing` - Test coverage improvements
- `Type/Infrastructure` - Deployment, CI/CD, DevOps

Additional context labels from workspace (if available):
- Priority labels (if defined): High, Medium, Low
- Platform labels: Frontend, Backend, API
- Domain labels: specific to team's structure

## Description Format

Adapt structure based on ticket complexity:

### Simple Tickets (UI changes, text updates, small fixes)

```
Brief context (1-2 sentences):
- What: What needs to change
- Where: Which component/page
- Why: Brief reason if not obvious

Example:
The "Save" button label is inconsistent with other forms.

Change "Save Draft" to "Save" on the user preferences form to match the pattern used across the dashboard.
```

### Complex Tickets (new features, workflows, API changes)

```
## Context
Why this matters (2-3 sentences, can quote user feedback):
- Problem it solves
- User impact
- Business value

## Requirements
Specific, detailed needs (bullet points):
- Functional requirements
- Non-functional requirements (performance, compatibility, etc.)
- Integration points

## Acceptance Criteria
Clear, testable conditions. For complex logic, use Given-When-Then format:

### Given-When-Then Examples:
Given a user is on the email settings page
When they select "Weekly digest"
Then the system saves the preference and shows a confirmation message

### Simple Checkboxes:
- [ ] API endpoint returns 200 status
- [ ] Response includes all required fields
- [ ] Error handling returns 400 for invalid input

## Open Questions
Flag unknowns for relevant parties (Engineering, Product, Business):
- [Engineering] How should we handle authentication for this API?
- [Product] What's the expected behavior for returning users?
- [Business] Do we need audit logging for compliance?
```

### Bug Tickets

Include reproducibility information:

```
## Steps to Reproduce
1. Navigate to email settings
2. Select a date range with special characters
3. Click "Apply"

## Expected Behavior
Settings are saved and confirmed with a message

## Actual Behavior
Page throws a 500 error

## Environment
- Browser: Chrome 120 on macOS
- Device: MacBook Pro
- OS: macOS 14.1
- Network: Stable broadband

## Severity
- Critical: Service down, data loss, security issue
- High: Core feature broken, widespread impact
- Medium: Feature partially broken, workaround exists
- Low: Minor UI issue, edge case, cosmetic
```

## Features vs Bugs

- **Features**: Focus on user need and business value
  - Why does the user need this?
  - What problem does it solve?
  - How does it fit the product roadmap?

- **Bugs**: Focus on reproduction and impact
  - Steps to reproduce (numbered, specific)
  - Expected vs. actual behavior
  - Environment details
  - Severity and workarounds

**Prioritization**: Treat high-severity bugs like features; defer low-severity bugs.

## Acceptance Criteria Best Practices

Make acceptance criteria:
- **Testable**: Can QA or a user verify it without ambiguity?
- **Specific**: Includes expected data, responses, formats
- **Complete**: Covers happy path and error cases
- **Independent**: Each criterion can be verified separately

### Example: Good vs. Poor

❌ **Poor**: "User can export data"
- Too vague. Export to what format? All data or filtered? Where does the file go?

✅ **Good**:
- [ ] Export button appears in the toolbar
- [ ] Clicking export shows format options (CSV, Excel, JSON)
- [ ] Selected format exports all visible rows
- [ ] File downloads to default downloads folder
- [ ] Exported file contains all columns shown in current view
- [ ] Error handling: Shows message if no data available

## Dependency Management

Explicitly capture ticket relationships:

- **Blocks**: This ticket prevents work on another
  - Use when: "Backend API must be complete before frontend can integrate"
  - Example: Create endpoint (blocks) → Integrate in UI

- **Blocked By**: This ticket is waiting on another
  - Use when: "Frontend work waiting on API design decision"
  - Example: Integrate API (blocked by) → API design decision

- **Relates To**: Related work that should be coordinated but doesn't block
  - Example: Email parser improvements (relates to) → Email formatting standards

## Estimation Guidelines

If the team uses estimation:

- **Story points**: Represent relative complexity, not hours
  - 1 point: Simple (UI label, small config)
  - 2-3 points: Straightforward (small feature, isolated fix)
  - 5-8 points: Moderate (feature with dependencies, multiple components)
  - 13+ points: Large (complex feature, needs breakdown)

- **When to split**: Tickets larger than the team's typical sprint velocity
  - Example: 13+ points should usually be split

## Ticket Lifecycle States

Common workflow (adapt to the team's actual states):

1. **Backlog**: New, not yet reviewed
2. **Ready for Development**: Refined, detailed, dependencies clear
3. **In Progress**: Work started
4. **In Review**: Awaiting code/product review
5. **Done**: Complete and merged

## Refinement Checklist

Before marking a ticket as "Ready for Development":

- [ ] Clear, action-oriented title?
- [ ] Description concise and actionable for developers?
- [ ] Appropriate labels applied?
- [ ] Dependencies identified and linked?
- [ ] Acceptance criteria present (for complex work)?
- [ ] Open questions flagged for relevant parties?
- [ ] Estimate provided (if team uses estimation)?
- [ ] No external blockers?

## Red Flags That Indicate Issues

During review, flag these as needing refinement (see the sketch after this list):

- ❌ Tickets older than 90 days without updates
- ❌ Missing acceptance criteria on complex tickets
- ❌ No clear user value or "why"
- ❌ Acceptance criteria that can't be tested
- ❌ Unclear dependencies or relationships
- ❌ Multiple conflicting acceptance criteria
- ❌ Epic stories with no breakdown
- ❌ Missing error handling or edge cases
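
Several of these red flags can be scanned for mechanically before a manual review. A minimal sketch over a hypothetical ticket record; the field names are illustrative assumptions, not part of any tracker API:

```
# Flag stale or under-specified tickets using the red-flag rules above.
from datetime import datetime, timedelta

def red_flags(ticket: dict, now: datetime | None = None) -> list[str]:
    now = now or datetime.now()
    flags = []
    if now - ticket["updated_at"] > timedelta(days=90):
        flags.append("Older than 90 days without updates")
    if ticket.get("is_complex") and not ticket.get("acceptance_criteria"):
        flags.append("Missing acceptance criteria on a complex ticket")
    if not ticket.get("description", "").strip():
        flags.append("No clear user value or 'why'")
    if ticket.get("is_epic") and not ticket.get("children"):
        flags.append("Epic with no breakdown")
    return flags

example = {
    "updated_at": datetime(2024, 1, 1),  # hypothetical data
    "is_complex": True,
    "acceptance_criteria": [],
    "description": "",
    "is_epic": False,
    "children": [],
}
print(red_flags(example))  # three flags raised for this record
```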

## Healthy Backlog Indicators

- Near-term items (next 2 sprints) are detailed and ready
- Long-term items (3+ months) are high-level and strategic
- Dependencies are mapped and clear
- Priorities are current and actionable
- Open questions are flagged for resolution
- No zombie tickets (unchanged for 6+ months)
- Clear epic-to-ticket hierarchy

## Technical Content Guidelines

**Default: Product-Centric Focus**

Tickets should focus on:
- User needs and business value
- Expected behavior and outcomes
- High-level architectural approach (patterns, technology choices)
- What needs to happen, not how it will be coded
- **NO code snippets or implementation details**

**Include Technical Details When:**

1. **Explicitly instructed**: User says "include X" or "make sure to note Y"
2. **Convention deviation**: Approach differs from team's standard patterns
3. **Critical constraints**: Technical limitations that affect implementation
4. **Non-standard technology**: Using different tools/libraries than usual

**Examples:**

**Default case:**
```
User request: "Create a ticket for user authentication"

✅ Good ticket:
Title: Implement user authentication
Context: Users need secure authentication to protect their accounts
Requirements:
- Secure session management
- Password reset capability
- Multi-factor authentication support

❌ Poor ticket (too implementation-heavy):
Title: Add JWT authentication
Context: Need to implement JWT with passport.js library
Code: const jwt = require('jsonwebtoken')...
```

**Explicit instruction:**
```
User request: "Create a ticket for auth, note we're using OAuth2 not JWT"

✅ Good ticket:
Title: Implement OAuth2 authentication
Context: Users need secure authentication to protect their accounts
Technical Note: Use OAuth2 instead of standard JWT pattern due to third-party integration requirements
```

**Convention deviation:**
```
User request: "We need to use MongoDB for this feature because of the dynamic schema requirements"

✅ Good ticket:
Title: Implement user preferences storage
Context: Users need ability to store custom preferences with flexible structure
Technical Note: Use MongoDB for storage (deviation from PostgreSQL standard) due to schema flexibility requirements for custom fields
```

**When to exclude code:**

❌ Don't include:
- Actual code snippets: `function authenticate(user) { ... }`
- Specific implementations: variable names, exact method signatures
- Example code from discussions

✅ Instead translate to:
- Architectural guidance: "Use Factory pattern for user creation"
- Technology choices: "Implement with WebSockets for real-time updates"
- Design constraints: "Must be stateless to support horizontal scaling"