Initial commit

skills/prioritization-effort-impact/SKILL.md (new file, 225 lines)
@@ -0,0 +1,225 @@
---
name: prioritization-effort-impact
description: Use when ranking backlogs, deciding what to do first based on effort vs impact (quick wins vs big bets), prioritizing feature roadmaps, triaging bugs or technical debt, allocating resources across initiatives, identifying low-hanging fruit, evaluating strategic options with 2x2 matrix, or when user mentions prioritization, quick wins, effort-impact matrix, high-impact low-effort, big bets, or asks "what should we do first?".
---

# Prioritization: Effort-Impact Matrix

## Table of Contents
1. [Purpose](#purpose)
2. [When to Use](#when-to-use)
3. [What Is It?](#what-is-it)
4. [Workflow](#workflow)
5. [Common Patterns](#common-patterns)
6. [Scoring Frameworks](#scoring-frameworks)
7. [Guardrails](#guardrails)
8. [Quick Reference](#quick-reference)

## Purpose

Transform overwhelming backlogs and option lists into clear, actionable priorities by mapping items on a 2x2 matrix of effort (cost/complexity) vs impact (value/benefit). Identify quick wins (high impact, low effort) and distinguish them from big bets (high impact, high effort), time sinks (low impact, high effort), and fill-ins (low impact, low effort).

## When to Use

**Use this skill when:**

- **Backlog overflow**: You have 20+ items (features, bugs, tasks, ideas) and need to decide execution order
- **Resource constraints**: Limited time, budget, or people force trade-off decisions
- **Strategic planning**: Choosing between initiatives, projects, or investments for quarterly/annual roadmaps
- **Quick wins needed**: Stakeholders want visible progress fast; you need high-impact low-effort items
- **Trade-off clarity**: Team debates "should we do A or B?" without explicit effort/impact comparison
- **Alignment gaps**: Different stakeholders (eng, product, sales, exec) have conflicting priorities
- **Context switching**: Too many simultaneous projects; need to focus on what matters most
- **New PM/leader**: Taking over a backlog and need systematic prioritization approach

**Common triggers:**
- "We have 50 feature requests, where do we start?"
- "What are the quick wins?"
- "Should we do the migration or the new feature first?"
- "How do we prioritize technical debt vs new features?"
- "What gives us the most bang for our buck?"

## What Is It?

**Effort-Impact Matrix** (also called Impact-Effort Matrix, Quick Wins Matrix, or 2x2 Prioritization) plots each item on two dimensions:

- **X-axis: Effort** (time, cost, complexity, risk, dependencies)
- **Y-axis: Impact** (value, revenue, user benefit, strategic alignment, risk reduction)

**Four quadrants:**

```
High Impact  │
             │    Big Bets     │   Quick Wins
             │    (do 2nd)     │    (do 1st!)
             │─────────────────┼──────────────
             │   Time Sinks    │    Fill-Ins
             │    (avoid)      │    (do last)
Low Impact   │
             └─────────────────┴──────────────
                  High Effort       Low Effort
```

**Example:** Feature backlog with 12 items (4 shown)

| Item | Effort | Impact | Quadrant |
|------|--------|--------|----------|
| Add "Export to CSV" button | Low (2d) | High (many users) | **Quick Win** ✓ |
| Rebuild entire auth system | High (3mo) | High (security) | Big Bet |
| Perfect pixel alignment on logo | High (1wk) | Low (aesthetic) | Time Sink ❌ |
| Fix typo in footer | Low (5min) | Low (trivial) | Fill-In |

**Decision:** Do "Export to CSV" first (quick win), schedule the auth rebuild next (big bet), skip logo perfection (time sink), and batch typo fixes (fill-ins).

## Workflow

Copy this checklist and track your progress:

```
Prioritization Progress:
- [ ] Step 1: Gather items and clarify scoring
- [ ] Step 2: Score effort and impact
- [ ] Step 3: Plot matrix and identify quadrants
- [ ] Step 4: Create prioritized roadmap
- [ ] Step 5: Validate and communicate decisions
```

**Step 1: Gather items and clarify scoring**

Collect all items to prioritize (features, bugs, initiatives, etc.) and define scoring scales for effort and impact. See [Scoring Frameworks](#scoring-frameworks) for effort and impact definitions. Use [resources/template.md](resources/template.md) for structure.

**Step 2: Score effort and impact**

Rate each item on effort (1-5: trivial to massive) and impact (1-5: negligible to transformative). Involve subject matter experts for accuracy. See [resources/methodology.md](resources/methodology.md) for advanced scoring techniques like Fibonacci, T-shirt sizes, or RICE.

**Step 3: Plot matrix and identify quadrants**

Place items on the 2x2 matrix and categorize them into Quick Wins (high impact, low effort), Big Bets (high impact, high effort), Fill-Ins (low impact, low effort), and Time Sinks (low impact, high effort). See [Common Patterns](#common-patterns) for typical quadrant distributions.

**Step 4: Create prioritized roadmap**

Sequence items: Quick Wins first, Big Bets second (after quick wins build momentum), Fill-Ins during downtime; avoid Time Sinks unless required. See [resources/template.md](resources/template.md) for roadmap structure.

**Step 5: Validate and communicate decisions**

Self-check using [resources/evaluators/rubric_prioritization_effort_impact.json](resources/evaluators/rubric_prioritization_effort_impact.json). Ensure scoring is defensible, stakeholder perspectives are included, and decisions are clearly explained with rationale.

## Common Patterns

**By domain:**

- **Product backlogs**: Quick wins = small UX improvements, Big bets = new workflows, Time sinks = edge case perfection
- **Technical debt**: Quick wins = config fixes, Big bets = architecture overhauls, Time sinks = premature optimizations
- **Bug triage**: Quick wins = high-impact easy fixes, Big bets = complex critical bugs, Time sinks = cosmetic issues
- **Strategic initiatives**: Quick wins = process tweaks, Big bets = market expansion, Time sinks = vanity metrics
- **Marketing campaigns**: Quick wins = email nurture, Big bets = brand overhaul, Time sinks = minor A/B tests

**By stakeholder priority:**

- **Execs want**: Quick wins (visible progress) + Big bets (strategic impact)
- **Engineering wants**: Technical debt quick wins + Big bets (platform work)
- **Sales wants**: Quick wins that unblock deals + Big bets (major features)
- **Customers want**: Quick wins (pain relief) + Big bets (transformative value)

**Typical quadrant distribution:**
- Quick Wins: 10-20% (rare, high-value opportunities)
- Big Bets: 20-30% (strategic, resource-intensive)
- Fill-Ins: 40-50% (most backlogs have many low-value items)
- Time Sinks: 10-20% (surprisingly common, often disguised as "polish")

**Red flags:**
- ❌ **No quick wins**: Likely overestimating effort or underestimating impact
- ❌ **All quick wins**: Scores probably not calibrated correctly
- ❌ **Many time sinks**: Cut scope or reject these items
- ❌ **Effort/impact scores all 3**: Need more differentiation (use 1-2 and 4-5)

## Scoring Frameworks

**Effort dimensions (choose relevant ones):**
- **Time**: Engineering/execution hours (1=hours, 2=days, 3=weeks, 4=months, 5=quarters)
- **Complexity**: Technical difficulty (1=trivial, 5=novel/unprecedented)
- **Risk**: Failure probability (1=safe, 5=high-risk)
- **Dependencies**: External blockers (1=none, 5=many teams/approvals)
- **Cost**: Financial investment (1=$0-1K, 2=$1-10K, 3=$10-100K, 4=$100K-1M, 5=$1M+)

**Impact dimensions (choose relevant ones):**
- **Users affected**: Reach (1=<1%, 2=1-10%, 3=10-50%, 4=50-90%, 5=>90%)
- **Business value**: Revenue/savings (1=$0-10K, 2=$10-100K, 3=$100K-1M, 4=$1-10M, 5=$10M+)
- **Strategic alignment**: OKR contribution (1=tangential, 5=critical to strategy)
- **User pain**: Problem severity (1=nice-to-have, 5=blocker/crisis)
- **Risk reduction**: Mitigation value (1=minor, 5=existential risk)

**Composite scoring:**
- **Simple**: Average of dimensions (Effort = avg(time, complexity), Impact = avg(users, value))
- **Weighted**: Multiply by importance (Effort = 0.6×time + 0.4×complexity)
- **Fibonacci**: Use 1, 2, 3, 5, 8 instead of 1-5 for exponential differences
- **T-shirt sizes**: S/M/L/XL mapped to 1/2/3/5

**Example scoring (feature: "Add dark mode"):**
- Effort: Time=3 (2 weeks), Complexity=2 (CSS), Risk=2 (minor bugs), Dependencies=1 (no blockers) → **Avg = 2.0 (Low)**
- Impact: Users=4 (80% want it), Value=2 (retention, not revenue), Strategy=3 (design system goal), Pain=3 (eye strain) → **Avg = 3.0 (Medium-High)**
- **Result**: Medium-High Impact, Low Effort → **Quick Win!**
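
If you keep scores in a script or spreadsheet export, a minimal sketch of the averaging and quadrant classification above might look like this. The ≥3.0 impact and ≤3.0 effort cutoffs are illustrative assumptions, not part of the skill; items sitting exactly on a boundary should be re-evaluated rather than trusted to the cutoff.

```python
# Minimal sketch: composite scoring and quadrant classification on 1-5 scales.
# The >= 3.0 impact and <= 3.0 effort cutoffs are assumptions; tune them to your data.
def quadrant(effort_dims: dict, impact_dims: dict, cutoff: float = 3.0) -> tuple:
    effort = sum(effort_dims.values()) / len(effort_dims)
    impact = sum(impact_dims.values()) / len(impact_dims)
    if impact >= cutoff:
        name = "Quick Win" if effort <= cutoff else "Big Bet"
    else:
        name = "Fill-In" if effort <= cutoff else "Time Sink"
    return effort, impact, name

# "Add dark mode" example from above
print(quadrant(
    {"time": 3, "complexity": 2, "risk": 2, "dependencies": 1},
    {"users": 4, "value": 2, "strategy": 3, "pain": 3},
))  # -> (2.0, 3.0, 'Quick Win')
```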

## Guardrails

**Ensure quality:**

1. **Include diverse perspectives**: Don't let one person score alone (eng overestimates effort, sales overestimates impact)
   - ✓ Get engineering, product, sales, customer success input
   - ❌ PM scores everything solo

2. **Differentiate scores**: If everything is scored 3, you haven't prioritized
   - ✓ Force rank or use wider scale (1-10)
   - ✓ Aim for distribution: few 1s/5s, more 2s/4s, many 3s
   - ❌ All items scored 2.5-3.5

3. **Question extreme scores**: High-impact low-effort items are rare (if you have 10, something's wrong)
   - ✓ "Why haven't we done this already?" test for quick wins
   - ❌ Wishful thinking (underestimating effort, overestimating impact)

4. **Make scoring transparent**: Document why each score was assigned
   - ✓ "Effort=4 because requires 3 teams, new infrastructure, 6-week timeline"
   - ❌ "Effort=4" with no rationale

5. **Revisit scores periodically**: Effort/impact change as context evolves
   - ✓ Re-score quarterly or after major changes (new tech, new team size)
   - ❌ Use 2-year-old scores

6. **Don't ignore dependencies**: Low-effort items blocked by high-effort prerequisites aren't quick wins
   - ✓ "Effort=2 for task, but depends on Effort=5 migration"
   - ❌ Score task in isolation

7. **Beware of "strategic" override**: Execs calling everything "high impact" defeats prioritization
   - ✓ "Strategic" is one dimension, not a veto
   - ❌ "CEO wants it" → auto-scored 5

## Quick Reference

**Resources:**
- **Quick start**: [resources/template.md](resources/template.md) - 2x2 matrix template and scoring table
- **Advanced techniques**: [resources/methodology.md](resources/methodology.md) - RICE, MoSCoW, Kano, weighted scoring
- **Quality check**: [resources/evaluators/rubric_prioritization_effort_impact.json](resources/evaluators/rubric_prioritization_effort_impact.json) - Evaluation criteria

**Success criteria:**
- ✓ Identified 1-3 quick wins to execute immediately
- ✓ Sequenced big bets into realistic roadmap (don't overcommit)
- ✓ Cut or deferred time sinks (low ROI items)
- ✓ Scoring rationale is transparent and defensible
- ✓ Stakeholders aligned on priorities
- ✓ Roadmap has capacity buffer (don't schedule 100% of time)

**Common mistakes:**
- ❌ Scoring in isolation (no stakeholder input)
- ❌ Ignoring effort (optimism bias: "everything is easy")
- ❌ Ignoring impact (building what's easy, not what's valuable)
- ❌ Analysis paralysis (perfect scores vs good-enough prioritization)
- ❌ Not saying "no" to time sinks
- ❌ Overloading roadmap (filling every week with big bets)
- ❌ Forgetting maintenance/support time (assuming 100% project capacity)

**When to use alternatives:**
- **Weighted scoring (RICE)**: When you need more nuance than 2x2 (Reach × Impact × Confidence / Effort)
- **MoSCoW**: When prioritizing for fixed scope/deadline (Must/Should/Could/Won't)
- **Kano model**: When evaluating customer satisfaction (basic/performance/delight features)
- **ICE score**: Simpler than RICE (Impact × Confidence × Ease)
- **Value vs complexity**: Same as effort-impact, different labels
- **Cost of delay**: When timing matters (revenue lost by delaying)

skills/prioritization-effort-impact/resources/evaluators/rubric_prioritization_effort_impact.json (new file, 360 lines)
@@ -0,0 +1,360 @@
{
  "name": "Prioritization Effort-Impact Evaluator",
  "description": "Evaluates prioritization artifacts (effort-impact matrices, roadmaps) for quality of scoring, stakeholder alignment, and decision clarity",
  "criteria": [
    {
      "name": "Scoring Quality & Differentiation",
      "weight": 1.4,
      "scale": {
        "1": "All items scored similarly (e.g., all 3s) with no differentiation, or scores appear random/unsupported",
        "2": "Some differentiation but clustering around middle (2.5-3.5 range), limited use of full scale, weak rationale",
        "3": "Moderate differentiation with scores using 1-5 range, basic rationale provided for most items, some bias evident",
        "4": "Strong differentiation across full 1-5 scale, clear rationale for scores, stakeholder input documented, few items cluster at boundaries",
        "5": "Exemplary differentiation with calibrated scoring (reference examples documented), transparent rationale for all items, bias mitigation techniques used (silent voting, forced ranking), no suspicious clustering"
      },
      "indicators": {
        "excellent": [
          "Scores use full 1-5 range with clear distribution (few 1s/5s, more 2s/4s)",
          "Reference items documented for calibration (e.g., 'Effort=2 example: CSV export, 2 days')",
          "Scoring rationale explicit for each item (why Effort=4, why Impact=3)",
          "Stakeholder perspectives documented (eng estimated effort, sales estimated impact)",
          "Bias mitigation used (silent voting, anonymous scoring before discussion)"
        ],
        "poor": [
          "All scores 2.5-3.5 (no differentiation)",
          "No rationale for why scores assigned",
          "Single person scored everything alone",
          "Scores don't match descriptions (called 'critical' but scored Impact=2)",
          "Obvious optimism bias (everything is low effort, high impact)"
        ]
      }
    },
    {
      "name": "Quadrant Classification Accuracy",
      "weight": 1.3,
      "scale": {
        "1": "Items misclassified (e.g., Effort=5 Impact=2 called 'Quick Win'), or no quadrants identified at all",
        "2": "Quadrants identified but boundaries unclear (what's 'high' vs 'low'?), some misclassifications",
        "3": "Quadrants correctly identified with reasonable boundaries (e.g., >3.5 = high), minor edge cases unclear",
        "4": "Clear quadrant boundaries documented, all items classified correctly, edge cases explicitly addressed",
        "5": "Exemplary classification with explicit boundary definitions, items near boundaries re-evaluated, typical quadrant distribution validated (10-20% Quick Wins, not 50%)"
      },
      "indicators": {
        "excellent": [
          "Quadrant boundaries explicit (e.g., 'High Impact = ≥4, Low Effort = ≤2')",
          "10-20% Quick Wins (realistic, not over-optimistic)",
          "20-30% Big Bets (sufficient strategic work)",
          "Time Sinks identified and explicitly cut/deferred",
          "Items near boundaries (e.g., Effort=3, Impact=3) re-evaluated or called out as edge cases"
        ],
        "poor": [
          "50%+ Quick Wins (unrealistic, likely miscalibrated)",
          "0 Quick Wins (likely miscalibrated, overestimating effort)",
          "No Time Sinks identified (probably hiding low-value work)",
          "Boundaries undefined (unclear what 'high impact' means)",
          "Items clearly misclassified (Effort=5 Impact=1 in roadmap as priority)"
        ]
      }
    },
    {
      "name": "Stakeholder Alignment & Input Quality",
      "weight": 1.2,
      "scale": {
        "1": "Single person created prioritization with no stakeholder input, or stakeholder disagreements unresolved",
        "2": "Minimal stakeholder input (1-2 people), no documentation of how disagreements resolved",
        "3": "Multiple stakeholders involved (eng, product, sales), basic consensus reached, some perspectives missing",
        "4": "Diverse stakeholders (eng, product, sales, CS, design) contributed appropriately (eng on effort, sales on value), disagreements discussed and resolved, participants documented",
        "5": "Exemplary stakeholder process with weighted input by expertise (eng estimates effort, sales estimates customer value), bias mitigation (silent voting, anonymous scoring), pre-mortem for controversial items, all participants and resolution process documented"
      },
      "indicators": {
        "excellent": [
          "Participants listed with roles (3 eng, 1 PM, 2 sales, 1 CS)",
          "Expertise-based weighting (eng scores effort 100%, sales contributes to impact)",
          "Bias mitigation documented (silent voting used, then discussion)",
          "Disagreements surfaced and resolved (eng said Effort=5, product said 3, converged at 4 because...)",
          "Pre-mortem or red-teaming for controversial/uncertain items"
        ],
        "poor": [
          "No participant list (unclear who contributed)",
          "PM scored everything alone",
          "HIPPO (highest paid person) scores overrode team input with no discussion",
          "Stakeholders disagree but no resolution documented",
          "One function (e.g., only eng) scored both effort and impact"
        ]
      }
    },
    {
      "name": "Roadmap Sequencing & Realism",
      "weight": 1.3,
      "scale": {
        "1": "No roadmap created, or roadmap ignores quadrants (Time Sinks scheduled first), or plans 100%+ of capacity",
        "2": "Roadmap exists but doesn't follow quadrant logic (Big Bets before Quick Wins), capacity planning missing or unrealistic",
        "3": "Roadmap sequences Quick Wins → Big Bets, basic timeline, capacity roughly considered but not calculated",
        "4": "Roadmap sequences correctly, timeline realistic with capacity calculated (team size × time), dependencies mapped, buffer included (70-80% utilization)",
        "5": "Exemplary roadmap with Quick Wins first (momentum), Big Bets phased for incremental value, Fill-Ins opportunistic, Time Sinks explicitly cut with rationale, dependencies mapped with critical path identified, capacity buffer (20-30%), velocity-based forecasting"
      },
      "indicators": {
        "excellent": [
          "Phase 1: Quick Wins (Weeks 1-4) to build momentum",
          "Phase 2: Big Bets (phased for incremental value, not monolithic)",
          "Fill-Ins not scheduled explicitly (opportunistic during downtime)",
          "Time Sinks explicitly rejected with rationale communicated",
          "Dependencies mapped (item X depends on Y completing first)",
          "Capacity buffer (planned 70-80% of capacity, not 100%)",
          "Timeline realistic (effort scores × team size = weeks)"
        ],
        "poor": [
          "No sequencing (items listed randomly)",
          "Big Bets scheduled before Quick Wins (no momentum)",
          "Time Sinks included in roadmap (low ROI items)",
          "Planned at 100%+ capacity (no buffer for unknowns)",
          "No timeline or unrealistic timeline (20 effort points in 1 week)",
          "Dependencies ignored (dependent items scheduled in parallel)"
        ]
      }
    },
    {
      "name": "Effort Scoring Rigor",
      "weight": 1.1,
      "scale": {
        "1": "Effort scored on single dimension (time only) with no consideration of complexity, risk, dependencies, or scores are guesses with no rationale",
        "2": "Effort considers time but inconsistently accounts for complexity/risk/dependencies, weak rationale",
        "3": "Effort considers multiple dimensions (time, complexity, risk) with reasonable rationale, some dimensions missing (e.g., dependencies)",
        "4": "Effort considers time, complexity, risk, dependencies with clear rationale, minor gaps (e.g., didn't account for QA/deployment)",
        "5": "Effort comprehensively considers time, complexity, risk, dependencies, unknowns, cross-team coordination, QA, deployment, with transparent rationale and historical calibration (past estimates vs actuals reviewed)"
      },
      "indicators": {
        "excellent": [
          "Effort dimensions documented (time=3, complexity=4, risk=2, dependencies=3 → avg=3)",
          "Rationale explains all factors (Effort=4 because: 6 weeks, requires 3 teams, new tech stack, integration with 2 external systems)",
          "Historical calibration referenced (similar item took 8 weeks last time)",
          "Accounts for full lifecycle (dev + design + QA + deployment + docs)",
          "Risk/unknowns factored in (confidence intervals or buffers)"
        ],
        "poor": [
          "Effort = engineering time only (ignores design, QA, deployment)",
          "No rationale (just 'Effort=3' with no explanation)",
          "Optimism bias evident (everything is 1-2 effort)",
          "Dependencies ignored (item requires prerequisite but scored standalone)",
          "Doesn't match description (called 'major migration' but Effort=2)"
        ]
      }
    },
    {
      "name": "Impact Scoring Rigor",
      "weight": 1.2,
      "scale": {
        "1": "Impact scored on single dimension (revenue only, or gut feel) with no consideration of users, strategy, pain, or scores appear arbitrary",
        "2": "Impact considers one dimension (e.g., users) but ignores business value, strategic alignment, or pain severity, weak rationale",
        "3": "Impact considers multiple dimensions (users, value, strategy) with reasonable rationale, some dimensions missing or speculative",
        "4": "Impact considers users, business value, strategic alignment, user pain with clear rationale, minor gaps (e.g., no data validation)",
        "5": "Impact comprehensively considers users, business value, strategic alignment, user pain, competitive positioning, with transparent rationale and data validation (user research, usage analytics, revenue models, NPS/CSAT drivers)"
      },
      "indicators": {
        "excellent": [
          "Impact dimensions documented (users=5, value=$500K, strategy=4, pain=3 → avg=4.25)",
          "Rationale explains all factors (Impact=5 because: 90% users affected, $1M ARR at risk, critical to Q1 OKR, top NPS detractor)",
          "Data-driven validation (50 customer survey, 80% rated 'very important')",
          "Usage analytics support (10K support tickets, 500K page views/mo with 30% bounce)",
          "Strategic alignment explicit (ties to company OKR, competitive differentiation)",
          "User pain quantified (severity, frequency, workarounds)"
        ],
        "poor": [
          "Impact = revenue only (ignores users, strategy, pain)",
          "No rationale (just 'Impact=4' with no explanation)",
          "Speculation without validation ('probably' high impact, 'might' drive revenue)",
          "Doesn't match description (called 'niche edge case' but Impact=5)",
          "Strategic override without justification ('CEO wants it' → Impact=5)",
          "Ignores user research (survey says low importance, scored high anyway)"
        ]
      }
    },
    {
      "name": "Communication & Decision Transparency",
      "weight": 1.1,
      "scale": {
        "1": "No explanation of decisions, just list of prioritized items with no rationale, or decisions contradict scores without explanation",
        "2": "Minimal explanation (prioritized X, Y, Z) with no rationale for why or why not others, trade-offs unclear",
        "3": "Basic explanation of decisions (doing X because high impact, deferring Y because low impact), trade-offs mentioned but not detailed",
        "4": "Clear explanation of decisions with rationale tied to scores, trade-offs explicit (doing X means not doing Y), stakeholder concerns addressed",
        "5": "Exemplary transparency with full rationale for all decisions, trade-offs explicit and quantified, stakeholder concerns documented and addressed, communication plan for rejected items (what we're NOT doing and why), success metrics defined, review cadence set"
      },
      "indicators": {
        "excellent": [
          "Decision rationale clear (prioritized X because Impact=5 Effort=2, deferred Y because Impact=2 Effort=5)",
          "Trade-offs explicit (doing X means not doing Y this quarter)",
          "Stakeholder concerns addressed (Sales wanted Z but impact is low because only 2 customers requesting)",
          "Rejected items communicated (explicitly closing 15 Time Sinks to focus resources)",
          "Success metrics defined (how will we know this roadmap succeeded? Ship 3 Quick Wins by end of month, 50% user adoption of Big Bet)",
          "Review cadence set (re-score quarterly, adjust roadmap monthly)"
        ],
        "poor": [
          "No rationale for decisions (just 'we're doing X, Y, Z')",
          "Trade-offs hidden (doesn't mention what's NOT being done)",
          "Stakeholder concerns ignored or dismissed without explanation",
          "No communication plan for rejected items",
          "No success metrics (unclear how to measure if prioritization worked)",
          "One-time prioritization (no plan to revisit/adjust)"
        ]
      }
    },
    {
      "name": "Completeness & Structure",
      "weight": 1.0,
      "scale": {
        "1": "Missing critical components (no matrix, no roadmap, or just a list of items), or completely unstructured",
        "2": "Some components present (matrix OR roadmap) but incomplete, minimal structure",
        "3": "Most components present (scoring table, matrix, roadmap) with basic structure, some sections missing detail",
        "4": "All components present and well-structured (scoring table with rationale, matrix with quadrants, phased roadmap, capacity planning), minor gaps",
        "5": "Comprehensive artifact with all components (scoring table with multi-dimensional rationale, visual matrix, phased roadmap with dependencies, capacity planning with buffer, quality checklist completed, stakeholder sign-off documented)"
      },
      "indicators": {
        "excellent": [
          "Scoring table with all items, effort/impact scores, quadrant classification, rationale",
          "Visual matrix plotted (2x2 grid with items positioned)",
          "Quadrant summary (lists Quick Wins, Big Bets, Fill-Ins, Time Sinks with counts)",
          "Phased roadmap (Phase 1: Quick Wins weeks 1-4, Phase 2: Big Bets weeks 5-16, etc.)",
          "Capacity planning (team size, utilization, buffer calculated)",
          "Dependencies mapped (critical path identified)",
          "Quality checklist completed (self-assessment documented)",
          "Stakeholder participants and sign-off documented"
        ],
        "poor": [
          "Just a list of items with scores (no matrix, no roadmap)",
          "No visual representation (hard to see quadrants at a glance)",
          "No roadmap sequencing (unclear execution order)",
          "No capacity planning (unclear if realistic)",
          "Missing quadrant summaries (can't quickly see Quick Wins)",
          "No documentation of process (unclear how decisions were made)"
        ]
      }
    }
  ],
  "guidance": {
    "by_context": {
      "product_backlog": {
        "focus": "Emphasize user reach, business value, and technical complexity. Quick wins should be UX improvements or small integrations. Big bets are new workflows or platform changes.",
        "red_flags": [
          "All features scored high impact (if everything is priority, nothing is)",
          "Effort ignores design/QA time (only engineering hours)",
          "No usage data to validate impact assumptions",
          "Edge cases prioritized over core functionality"
        ]
      },
      "technical_debt": {
        "focus": "Emphasize developer productivity impact, future velocity, and risk reduction. Quick wins are dependency upgrades or small refactors. Big bets are architecture overhauls.",
        "red_flags": [
          "Impact scored only on 'clean code' (not business value or velocity)",
          "Premature optimizations (performance work with no bottleneck)",
          "Refactoring for refactoring's sake (no measurable improvement)",
          "Not tying technical debt to business outcomes"
        ]
      },
      "bug_triage": {
        "focus": "Emphasize user pain severity, frequency, and business impact (revenue, support cost). Quick wins are high-frequency easy fixes. Big bets are complex architectural bugs.",
        "red_flags": [
          "Severity without frequency (rare edge case scored high priority)",
          "Cosmetic bugs prioritized over functional bugs",
          "Effort underestimated (bug fixes often have hidden complexity)",
          "No workarounds considered (high-effort bug with easy workaround is lower priority)"
        ]
      },
      "strategic_initiatives": {
        "focus": "Emphasize strategic alignment, competitive positioning, and revenue/cost impact. Quick wins are pilot programs or process tweaks. Big bets are market expansion or platform investments.",
        "red_flags": [
          "All initiatives scored 'strategic' (dilutes meaning)",
          "No tie to company OKRs or goals",
          "Ignoring opportunity cost (resources used here can't be used there)",
          "Betting on too many big bets (spreading too thin)"
        ]
      }
    },
    "by_team_size": {
      "small_team_2_5": {
        "advice": "Focus heavily on Quick Wins and Fill-Ins. Can only do 1 Big Bet at a time. Avoid Time Sinks completely (no capacity to waste). Expect 60-70% utilization (support/bugs take more time in small teams).",
        "capacity_planning": "Assume 60% project capacity (40% goes to support, bugs, meetings, context switching)"
      },
      "medium_team_6_15": {
        "advice": "Balance Quick Wins (70%) and Big Bets (30%). Can parallelize 2-3 Big Bets if low dependencies. Explicitly cut Time Sinks. Expect 70-80% utilization.",
        "capacity_planning": "Assume 70% project capacity (30% support, bugs, meetings, code review)"
      },
      "large_team_16_plus": {
        "advice": "Can run multiple Big Bets in parallel, but watch for coordination overhead. Need more strategic work (Big Bets 40%, Quick Wins 60%) to justify team size. Expect 75-85% utilization.",
        "capacity_planning": "Assume 75% project capacity (25% meetings, cross-team coordination, support)"
      }
    },
    "by_time_horizon": {
      "sprint_2_weeks": {
        "advice": "Only Quick Wins and Fill-Ins. No Big Bets (can't complete in 2 weeks). Focus on 1-3 Quick Wins max. Expect interruptions (support, bugs).",
        "typical_velocity": "3-5 effort points per sprint for 3-person team"
      },
      "quarter_3_months": {
        "advice": "2-3 Quick Wins in first month, 1-2 Big Bets over remaining 2 months. Don't overcommit (leave buffer for Q-end support/planning).",
        "typical_velocity": "15-25 effort points per quarter for 3-person team"
      },
      "annual_12_months": {
        "advice": "Mix of 8-12 Quick Wins and 3-5 Big Bets across year. Revisit quarterly (don't lock in for full year). Balance short-term momentum and long-term strategy.",
        "typical_velocity": "60-100 effort points per year for 3-person team"
      }
    }
  },
  "common_failure_modes": {
    "all_quick_wins": {
      "symptom": "50%+ of items scored as Quick Wins (high impact, low effort)",
      "root_cause": "Optimism bias (underestimating effort or overestimating impact), lack of calibration, wishful thinking",
      "fix": "Run pre-mortem on 'Quick Wins': If this is so easy and valuable, why haven't we done it already? Re-calibrate effort scores with engineering input. Validate impact with user research."
    },
    "no_quick_wins": {
      "symptom": "0 Quick Wins identified (everything is low impact or high effort)",
      "root_cause": "Pessimism bias (overestimating effort or underestimating impact), lack of creativity, analysis paralysis",
      "fix": "Force brainstorm: What's the smallest thing we could do to deliver value? What's the lowest-hanging fruit? Consider config changes, UX tweaks, integrations."
    },
    "all_3s": {
      "symptom": "80%+ of items scored 2.5-3.5 (no differentiation)",
      "root_cause": "Lack of calibration, avoiding hard choices, consensus-seeking without debate",
      "fix": "Forced ranking (only one item can be #1), use wider scale (1-10), calibrate with reference items, silent voting to avoid groupthink."
    },
    "time_sinks_in_roadmap": {
      "symptom": "Time Sinks (low impact, high effort) scheduled in roadmap",
      "root_cause": "Sunk cost fallacy, HIPPO pressure, not saying 'no', ignoring opportunity cost",
      "fix": "Explicitly cut Time Sinks. Challenge: Can we descope to make this lower effort? If not, reject. Communicate to stakeholders: 'We're not doing X because low ROI.'"
    },
    "capacity_overload": {
      "symptom": "Roadmap plans 100%+ of team capacity",
      "root_cause": "Ignoring support/bugs/meetings, optimism about execution, not accounting for unknowns",
      "fix": "Reduce planned capacity to 70-80% (buffer for unknowns). Calculate realistic capacity: team size × hours × utilization. Cut lowest-priority items to fit."
    },
    "solo_prioritization": {
      "symptom": "One person (usually PM) scored everything alone",
      "root_cause": "Lack of process, time pressure, avoiding conflict",
      "fix": "Multi-stakeholder scoring session (2-hour workshop with eng, product, sales, CS). Diverse input improves accuracy and builds buy-in."
    }
  },
  "excellence_indicators": {
    "overall": [
      "Scores are differentiated (use full 1-5 range, not clustered at 3)",
      "Scoring rationale is transparent and defensible for all items",
      "Diverse stakeholders contributed (eng, product, sales, CS, design)",
      "Quadrant distribution is realistic (10-20% Quick Wins, 20-30% Big Bets, not 50% Quick Wins)",
      "Roadmap sequences Quick Wins → Big Bets → Fill-Ins, explicitly cuts Time Sinks",
      "Capacity planning includes buffer (70-80% utilization, not 100%)",
      "Dependencies mapped and accounted for in sequencing",
      "Trade-offs explicit (doing X means not doing Y)",
      "Success metrics defined and review cadence set"
    ],
    "data_driven": [
      "Impact scores validated with user research (surveys, interviews, usage analytics)",
      "Effort scores calibrated with historical data (past estimates vs actuals)",
      "Business value quantified (revenue impact, cost savings, NPS drivers)",
      "User pain measured (support ticket frequency, NPS detractor feedback)",
      "A/B test results inform prioritization (validate assumptions before big bets)"
    ],
    "stakeholder_alignment": [
      "Participants documented (names, roles, contributions)",
      "Bias mitigation used (silent voting, anonymous scoring, forced ranking)",
      "Disagreements surfaced and resolved (documented how consensus reached)",
      "Pre-mortem for controversial items (surface hidden assumptions/risks)",
      "Stakeholder sign-off documented (alignment on final roadmap)"
    ]
  }
}

skills/prioritization-effort-impact/resources/methodology.md (new file, 488 lines)
@@ -0,0 +1,488 @@
# Prioritization: Advanced Methodologies

## Table of Contents
1. [Advanced Scoring Frameworks](#1-advanced-scoring-frameworks)
2. [Alternative Prioritization Models](#2-alternative-prioritization-models)
3. [Stakeholder Alignment Techniques](#3-stakeholder-alignment-techniques)
4. [Data-Driven Prioritization](#4-data-driven-prioritization)
5. [Roadmap Optimization](#5-roadmap-optimization)
6. [Common Pitfalls and Solutions](#6-common-pitfalls-and-solutions)

---

## 1. Advanced Scoring Frameworks

### RICE Score (Reach × Impact × Confidence / Effort)

**Formula**: `Score = (Reach × Impact × Confidence) / Effort`

**When to use**: Product backlogs where user reach and confidence matter more than simple impact

**Components**:
- **Reach**: Users/customers affected per time period (e.g., 1000 users/quarter)
- **Impact**: Value per user (1=minimal, 2=low, 3=medium, 4=high, 5=massive)
- **Confidence**: How certain are we? (100%=high, 80%=medium, 50%=low)
- **Effort**: Person-months (e.g., 2 = 2 engineers for 1 month, or 1 engineer for 2 months)

**Example**:
- Feature A: (5000 users/qtr × 3 impact × 100% confidence) / 2 effort = **7500 score**
- Feature B: (500 users/qtr × 5 impact × 50% confidence) / 1 effort = **1250 score**
- **Decision**: Feature A scores 6× higher despite lower per-user impact

**Advantages**: More nuanced than 2x2 matrix, accounts for uncertainty
**Disadvantages**: Requires estimation of reach (hard for new features)
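
As a sketch, the RICE arithmetic is a one-liner; the figures below are simply the Feature A / Feature B numbers from the example above, with confidence expressed as a fraction:

```python
# RICE = (Reach × Impact × Confidence) / Effort
def rice(reach_per_quarter: float, impact: float, confidence: float, effort_person_months: float) -> float:
    return (reach_per_quarter * impact * confidence) / effort_person_months

print(rice(5000, 3, 1.0, 2))  # Feature A -> 7500.0
print(rice(500, 5, 0.5, 1))   # Feature B -> 1250.0
```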

### ICE Score (Impact × Confidence × Ease)

**Formula**: `Score = Impact × Confidence × Ease`

**When to use**: Growth experiments, marketing campaigns, quick prioritization

**Components**:
- **Impact**: Potential value (1-10 scale)
- **Confidence**: How sure are we this will work? (1-10 scale)
- **Ease**: How easy to implement? (1-10 scale, inverse of effort)

**Example**:
- Experiment A: 8 impact × 9 confidence × 7 ease = **504 score**
- Experiment B: 10 impact × 3 confidence × 5 ease = **150 score**
- **Decision**: A scores higher due to confidence, even with lower max impact

**Advantages**: Simpler than RICE, no time period needed
**Disadvantages**: Multiplicative scoring can exaggerate differences
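
The same kind of sketch for ICE, reusing the experiment numbers from the example above:

```python
# ICE = Impact × Confidence × Ease, all on 1-10 scales.
def ice(impact: int, confidence: int, ease: int) -> int:
    return impact * confidence * ease

print(ice(8, 9, 7))   # Experiment A -> 504
print(ice(10, 3, 5))  # Experiment B -> 150
```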

### Weighted Scoring (Custom Criteria)

**When to use**: Complex decisions with multiple evaluation dimensions beyond effort/impact

**Process**:
1. Define criteria (e.g., Revenue, Strategic Fit, User Pain, Complexity, Risk)
2. Weight each criterion (sum to 100%)
3. Score each item on each criterion (1-5)
4. Calculate weighted score: `Score = Σ(criterion_score × criterion_weight)`

**Example**:

| Criteria | Weight | Feature A Score | Feature A Weighted | Feature B Score | Feature B Weighted |
|----------|--------|-----------------|--------------------|-----------------|--------------------|
| Revenue | 40% | 4 | 1.6 | 2 | 0.8 |
| Strategic | 30% | 5 | 1.5 | 4 | 1.2 |
| User Pain | 20% | 3 | 0.6 | 5 | 1.0 |
| Complexity | 10% | 2 (low) | 0.2 | 4 (high) | 0.4 |
| **Total** | **100%** | - | **3.9** | - | **3.4** |

**Decision**: Feature A scores higher (3.9 vs 3.4) due to revenue and strategic fit. (Note: a "lower is better" criterion like complexity should be inverted before weighting, e.g., use 6 − score; otherwise a straight weighted sum rewards complexity.)

**Advantages**: Transparent, customizable to organization's values
**Disadvantages**: Can become analysis paralysis with too many criteria
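
A minimal sketch of the weighted sum above; the criterion names, weights, and scores are the example figures, and the complexity inversion mentioned in the note is left to the caller:

```python
# Weighted score = Σ(criterion_score × criterion_weight), weights summing to 1.0.
WEIGHTS = {"revenue": 0.4, "strategic": 0.3, "user_pain": 0.2, "complexity": 0.1}

def weighted_score(scores: dict, weights: dict = WEIGHTS) -> float:
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 100%"
    return sum(scores[name] * weight for name, weight in weights.items())

feature_a = {"revenue": 4, "strategic": 5, "user_pain": 3, "complexity": 2}
feature_b = {"revenue": 2, "strategic": 4, "user_pain": 5, "complexity": 4}
print(round(weighted_score(feature_a), 2))  # 3.9
print(round(weighted_score(feature_b), 2))  # 3.4
```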

### Kano Model (Customer Satisfaction)

**When to use**: Understanding which features delight vs must-have vs don't-matter

**Categories**:
- **Must-Be (Basic)**: Absence causes dissatisfaction, presence is expected (e.g., product works, no security bugs)
- **Performance (Linear)**: More is better, satisfaction increases linearly (e.g., speed, reliability)
- **Delighters (Excitement)**: Unexpected features that wow users (e.g., dark mode before it was common)
- **Indifferent**: Users don't care either way (e.g., internal metrics dashboard for end users)
- **Reverse**: Some users actively dislike (e.g., forced tutorials, animations)

**Survey technique**:
- Ask functional question: "How would you feel if we had feature X?" (Satisfied / Expected / Neutral / Tolerate / Dissatisfied)
- Ask dysfunctional question: "How would you feel if we did NOT have feature X?"
- Map responses to category

**Prioritization strategy**:
1. **Must-Be first**: Fix broken basics (dissatisfiers)
2. **Delighters for differentiation**: Quick wins that wow
3. **Performance for competitiveness**: Match or exceed competitors
4. **Avoid indifferent/reverse**: Don't waste time

**Example**:
- Must-Be: Fix crash on login (everyone expects app to work)
- Performance: Improve search speed from 2s to 0.5s
- Delighter: Add "undo send" for emails (unexpected delight)
- Indifferent: Add 50 new color themes (most users don't care)

### Value vs Complexity (Alternative Labels)

**When to use**: Same as effort-impact, but emphasizes business value and technical complexity

**Axes**:
- **X-axis: Complexity** (technical difficulty, dependencies, unknowns)
- **Y-axis: Value** (business value, strategic value, user value)

**Quadrants** (same concept, different framing):
- **High Value, Low Complexity** = Quick Wins (same)
- **High Value, High Complexity** = Strategic Investments (same as Big Bets)
- **Low Value, Low Complexity** = Nice-to-Haves (same as Fill-Ins)
- **Low Value, High Complexity** = Money Pits (same as Time Sinks)

**When to use this framing**: Technical audiences respond to "complexity", business audiences to "value"

---

## 2. Alternative Prioritization Models

### MoSCoW (Must/Should/Could/Won't)

**When to use**: Fixed-scope projects (e.g., product launch, migration) with deadline constraints

**Categories**:
- **Must Have**: Non-negotiable, project fails without (e.g., core functionality, security, legal requirements)
- **Should Have**: Important but not critical, defer if needed (e.g., nice UX, analytics)
- **Could Have**: Desirable if time/budget allows (e.g., polish, edge cases)
- **Won't Have (this time)**: Out of scope, revisit later (e.g., advanced features, integrations)

**Process**:
1. List all requirements
2. Stakeholders categorize each (force hard choices: only 60% can be Must/Should)
3. Build Must-Haves first, then Should-Haves, then Could-Haves if time remains
4. Communicate Won't-Haves to set expectations

**Example (product launch)**:
- **Must**: User registration, core workflow, payment processing, security
- **Should**: Email notifications, basic analytics, help docs
- **Could**: Dark mode, advanced filters, mobile app
- **Won't**: Integrations, API, white-labeling (v2 scope)

**Advantages**: Forces scope discipline, clear for deadline-driven work
**Disadvantages**: Doesn't account for effort (may put high-effort items in Must)

### Cost of Delay (CD3: Cost of Delay Divided by Duration)

**When to use**: Time-sensitive decisions where delaying has quantifiable revenue/strategic cost

**Formula**: `CD3 Score = Cost of Delay / Duration`
- **Cost of Delay**: $/month of delaying this (revenue loss, market share, customer churn)
- **Duration**: Months to complete

**Example**:
- Feature A: $100K/mo delay cost, 2 months duration → **$50K/mo score**
- Feature B: $200K/mo delay cost, 5 months duration → **$40K/mo score**
- **Decision**: Feature A higher score (faster time-to-value despite lower total CoD)

**When to use**: Competitive markets, revenue-critical features, time-limited opportunities (e.g., seasonal)

**Advantages**: Explicitly values speed to market
**Disadvantages**: Requires estimating revenue impact (often speculative)
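
A sketch of the CD3 division, using the Feature A / Feature B figures from the example above:

```python
# CD3 = cost of delay per month / months to complete.
def cd3(cost_of_delay_per_month: float, duration_months: float) -> float:
    return cost_of_delay_per_month / duration_months

print(cd3(100_000, 2))  # Feature A -> 50000.0
print(cd3(200_000, 5))  # Feature B -> 40000.0 (A wins despite lower total cost of delay)
```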

### Opportunity Scoring (Jobs-to-be-Done)

**When to use**: Understanding which user jobs are underserved (high importance, low satisfaction)

**Survey**:
- Ask users to rate each "job" on:
  - **Importance**: How important is this outcome? (1-5)
  - **Satisfaction**: How well does current solution satisfy? (1-5)

**Opportunity Score**: `Importance + max(Importance - Satisfaction, 0)`
- Score ranges from 1 to 9 on these 1-5 scales
- **≥8 = High opportunity** (important but poorly served)
- **<5 = Low opportunity** (either unimportant or well-served)

**Example**:
- Job: "Quickly find relevant past conversations"
  - Importance: 5 (very important)
  - Satisfaction: 2 (very dissatisfied)
  - Opportunity: 5 + (5-2) = **8 (high opportunity)** → prioritize search improvements

- Job: "Customize notification sounds"
  - Importance: 2 (not important)
  - Satisfaction: 3 (neutral)
  - Opportunity: 2 + 0 = **2 (low opportunity)** → deprioritize

**Advantages**: User-centric, identifies gaps between need and solution
**Disadvantages**: Requires user research, doesn't account for effort
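
The formula translates directly; the two jobs below are the examples above:

```python
# Opportunity = importance + max(importance - satisfaction, 0), on 1-5 scales here.
def opportunity(importance: int, satisfaction: int) -> int:
    return importance + max(importance - satisfaction, 0)

print(opportunity(5, 2))  # "find past conversations" -> 8 (high opportunity)
print(opportunity(2, 3))  # "customize notification sounds" -> 2 (low opportunity)
```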

---

## 3. Stakeholder Alignment Techniques

### Silent Voting (Avoid Anchoring Bias)

**Problem**: First person to score influences others (anchoring bias), HIPPO dominates
**Solution**: Everyone scores independently, then discuss

**Process**:
1. Each stakeholder writes scores on sticky notes (don't share yet)
2. Reveal all scores simultaneously
3. Discuss discrepancies (why did eng score Effort=5 but product scored Effort=2?)
4. Converge on consensus score

**Tools**: Planning poker (common in agile), online voting (Miro, Mural)

### Forced Ranking (Avoid "Everything is High Priority")

**Problem**: Stakeholders rate everything 4-5, no differentiation
**Solution**: Force stack ranking (only one item can be #1)

**Process**:
1. List all items
2. Stakeholders must rank 1, 2, 3, ..., N (no ties allowed)
3. Convert ranks to scores (e.g., top 20% = 5, next 20% = 4, middle 20% = 3, etc.)
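
One way to implement the rank-to-score conversion in step 3 is to bucket ranks into quintiles; the 20% buckets below mirror the example and are an assumption, not a fixed rule:

```python
# Convert a forced ranking (index 0 = rank 1, no ties) into 1-5 scores by quintile.
def ranks_to_scores(ranked_items: list) -> dict:
    n = len(ranked_items)
    scores = {}
    for position, item in enumerate(ranked_items):
        quintile = position * 5 // n       # 0..4 from top to bottom of the ranking
        scores[item] = 5 - quintile        # top 20% -> 5, next 20% -> 4, ...
    return scores

print(ranks_to_scores(["A", "B", "C", "D", "E", "F", "G", "H", "I", "J"]))
# {'A': 5, 'B': 5, 'C': 4, 'D': 4, 'E': 3, 'F': 3, 'G': 2, 'H': 2, 'I': 1, 'J': 1}
```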

**Variant: $100 Budget**:
- Give stakeholders $100 to "invest" across all items
- They allocate dollars based on priority ($30 to A, $25 to B, $20 to C, ...)
- Items with most investment are highest priority

### Weighted Stakeholder Input (Account for Expertise)

**Problem**: Not all opinions are equal (eng knows effort, sales knows customer pain)
**Solution**: Weight scores by expertise domain

**Example**:
- Effort score = 100% from Engineering (they know effort best)
- Impact score = 40% Product + 30% Sales + 30% Customer Success (all know value)

**Process**:
1. Define who estimates what (effort, user impact, revenue, etc.)
2. Assign weights (e.g., 60% engineering + 40% product for effort)
3. Calculate weighted average: `Score = Σ(stakeholder_score × weight)`
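
A sketch of that weighted average, using the 40/30/30 impact split from the example above (the individual role scores are made-up inputs for illustration):

```python
# Weighted stakeholder average: Score = Σ(stakeholder_score × weight).
IMPACT_WEIGHTS = {"product": 0.4, "sales": 0.3, "customer_success": 0.3}

def weighted_stakeholder_score(scores_by_role: dict, weights: dict = IMPACT_WEIGHTS) -> float:
    return sum(scores_by_role[role] * weight for role, weight in weights.items())

# e.g., product says Impact=4, sales says 5, customer success says 3
print(round(weighted_stakeholder_score({"product": 4, "sales": 5, "customer_success": 3}), 2))  # 4.0
```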

### Pre-Mortem for Controversial Items

**Problem**: Stakeholders disagree on whether item is Quick Win or Time Sink
**Solution**: Run pre-mortem to surface hidden risks/assumptions

**Process**:
1. Assume item failed spectacularly ("We spent 6 months and it failed")
2. Each stakeholder writes down "what went wrong" (effort blowups, impact didn't materialize)
3. Discuss: Are these risks real? Should we adjust scores or descope?

**Example**:
- Item: "Build mobile app" (scored Impact=5, Effort=3)
- Pre-mortem reveals: "App store approval took 3 months", "iOS/Android doubled effort", "Users didn't download"
- **Revised score**: Impact=3 (uncertain), Effort=5 (doubled for two platforms) → Time Sink, defer

---

## 4. Data-Driven Prioritization

### Usage Analytics (Prioritize High-Traffic Areas)

**When to use**: Product improvements where usage data is available

**Metrics**:
- **Page views / Feature usage**: Improve high-traffic areas first (more users benefit)
- **Conversion funnel**: Fix drop-off points with biggest impact (e.g., 50% drop at checkout)
- **Support tickets**: High ticket volume = high user pain

**Example**:
- Dashboard page: 1M views/mo, 10% bounce rate → **100K frustrated users** → high impact
- Settings page: 10K views/mo, 50% bounce rate → **5K frustrated users** → lower impact
- **Decision**: Fix dashboard first (20× more impact)

**Advantages**: Objective, quantifies impact
**Disadvantages**: Ignores new features (no usage data yet), low-traffic areas may be high-value for specific users

### A/B Test Results (Validate Impact Assumptions)

**When to use**: Uncertain impact; run experiment to measure before committing

**Process**:
1. Build minimal version of feature (1-2 weeks)
2. A/B test with 10% of users
3. Measure impact (conversion, retention, revenue, NPS)
4. If impact validated, commit to full version; if not, deprioritize

**Example**:
- Hypothesis: "Adding social login will increase signups 20%"
- A/B test: 5% increase (not 20%)
- **Decision**: Impact=3 (not 5 as assumed), deprioritize vs other items

**Advantages**: Reduces uncertainty, validates assumptions before big bets
**Disadvantages**: Requires experimentation culture, time to run tests

### Customer Request Frequency (Vote Counting)

**When to use**: Feature requests from sales, support, customers

**Metrics**:
- **Request count**: Number of unique customers asking
- **Revenue at risk**: Total ARR of customers requesting (enterprise vs SMB)
- **Churn risk**: Customers threatening to leave without feature

**Example**:
- Feature A: 50 SMB customers requesting ($5K ARR each) = **$250K ARR**
- Feature B: 2 Enterprise customers requesting ($200K ARR each) = **$400K ARR**
- **Decision**: Feature B higher impact (more revenue at risk) despite fewer requests

**Guardrail**: Don't just count votes (10 vocal users ≠ real demand), weight by revenue/strategic value

### NPS/CSAT Drivers Analysis

**When to use**: Understanding which improvements drive customer satisfaction most

**Process**:
1. Collect NPS/CSAT scores
2. Ask open-ended: "What's the one thing we could improve?"
3. Categorize feedback (performance, features, support, etc.)
4. Correlate categories with NPS (which issues are mentioned by detractors most?)

**Example**:
- Detractors (NPS 0-6) mention "slow performance" 80% of time
- Passives (NPS 7-8) mention "missing integrations" 60%
- **Decision**: Fix performance first (bigger impact on promoter score)

---

## 5. Roadmap Optimization

### Dependency Mapping (Critical Path)

**Problem**: Items with dependencies can't start until prerequisites complete
**Solution**: Map dependency graph, identify critical path

**Process**:
1. List all items with dependencies (A depends on B, C depends on A and D)
2. Draw dependency graph (use tools: Miro, Mural, project management software)
3. Identify critical path (longest chain of dependencies)
4. Parallelize non-dependent work

**Example**:
- Quick Win A (2 weeks) → Big Bet B (8 weeks) → Quick Win C (1 week) = **11 week critical path**
- Quick Win D (2 weeks, no dependencies) can run in parallel with A
- **Optimized timeline**: 11 weeks instead of 13 weeks (if sequential)
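
For the schedule math, a minimal sketch that computes the critical path for the example above (durations in weeks, each item mapped to its prerequisites; the helper name and data layout are illustrative assumptions):

```python
# Longest-path (critical path) finish times in a small dependency DAG.
from functools import lru_cache

durations = {"A": 2, "B": 8, "C": 1, "D": 2}              # weeks
depends_on = {"A": [], "B": ["A"], "C": ["B"], "D": []}   # D has no prerequisites

@lru_cache(maxsize=None)
def finish_week(item: str) -> int:
    start = max((finish_week(dep) for dep in depends_on[item]), default=0)
    return start + durations[item]

print(max(finish_week(item) for item in durations))  # 11 -> critical path A -> B -> C
```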

### Team Velocity and Capacity Planning

**Problem**: Overcommitting to more work than team can deliver
**Solution**: Use historical velocity to forecast capacity

**Process**:
1. Measure past velocity (effort points completed per sprint/quarter)
2. Estimate total capacity (team size × utilization × time period)
3. Don't plan >70-80% of capacity (buffer for unknowns, support, bugs)

**Example**:
- Team completes 20 effort points/quarter historically
- Next quarter roadmap: 30 effort points planned
- **Problem**: 150% overcommitted
- **Fix**: Cut lowest-priority items (time sinks, fill-ins) to fit 16 effort points (80% of 20)

**Guardrail**: If you consistently complete <50% of roadmap, estimation is broken (not just overcommitted)
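
A small sketch of that capacity check, using the 20-point velocity and 30-point plan from the example (the 80% buffer is the assumption stated above):

```python
# Compare planned work against historical velocity with a utilization buffer.
def capacity_check(planned_points: float, historical_velocity: float, buffer: float = 0.8):
    budget = historical_velocity * buffer        # points you should actually plan
    overcommit = planned_points / historical_velocity
    return budget, overcommit

budget, overcommit = capacity_check(planned_points=30, historical_velocity=20)
print(budget)               # 16.0 points to plan (80% of 20)
print(f"{overcommit:.0%}")  # 150% -> cut lowest-priority items to fit
```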
|
||||
|
||||
### Incremental Delivery (Break Big Bets into Phases)
|
||||
|
||||
**Problem**: Big Bet takes 6 months; no value delivery until end
|
||||
**Solution**: Break into phases that deliver incremental value
|
||||
|
||||
**Example**:
|
||||
- **Original**: "Rebuild reporting system" (6 months, Effort=5)
|
||||
- **Phased**:
|
||||
- Phase 1: Migrate 3 most-used reports (1 month, Effort=2, Impact=3)
|
||||
- Phase 2: Add drill-down capability (1 month, Effort=2, Impact=4)
|
||||
- Phase 3: Real-time data (2 months, Effort=3, Impact=5)
|
||||
- Phase 4: Custom dashboards (2 months, Effort=3, Impact=3)
|
||||
- **Benefit**: Ship value after 1 month (not 6), can adjust based on feedback
|
||||
|
||||
**Advantages**: Faster time-to-value, reduces risk, allows pivoting
|
||||
**Disadvantages**: Requires thoughtful phasing (some work can't be incrementalized)
|
||||
|
||||
### Portfolio Balancing (Mix of Quick Wins and Big Bets)
|
||||
|
||||
**Problem**: Roadmap is all quick wins (no strategic depth) or all big bets (no momentum)
|
||||
**Solution**: Balance portfolio across quadrants and time horizons
|
||||
|
||||
**Target distribution**:
|
||||
- **70% Quick Wins + Fill-Ins** (short-term value, momentum)
|
||||
- **30% Big Bets** (long-term strategic positioning)
|
||||
- OR by time: **Now (0-3 months)**: 60%, **Next (3-6 months)**: 30%, **Later (6-12 months)**: 10%
|
||||
|
||||
**Example**:
|
||||
- Q1: 5 quick wins + 1 big bet Phase 1
|
||||
- Q2: 3 quick wins + 1 big bet Phase 2
|
||||
- Q3: 4 quick wins + 1 new big bet start
|
||||
- **Result**: Consistent value delivery (quick wins) + strategic progress (big bets)

---

## 6. Common Pitfalls and Solutions

### Pitfall 1: Solo Scoring (No Stakeholder Input)

**Problem**: PM scores everything alone, misses engineering effort or sales context
**Solution**: Multi-stakeholder scoring session (2-hour workshop)

**Workshop agenda**:
- 0-15min: Align on scoring scales (calibrate with examples)
- 15-60min: Silent voting on effort/impact for all items
- 60-90min: Discuss discrepancies, converge on consensus
- 90-120min: Plot matrix, identify quadrants, sequence roadmap

### Pitfall 2: Analysis Paralysis (Perfect Scores)

**Problem**: Spending days debating whether an item is Effort=3.2 or Effort=3.4
**Solution**: Good-enough > perfect; prioritization is iterative

**Guardrail**: Limit scoring session to 2 hours; if still uncertain, default to conservative (higher effort, lower impact)

### Pitfall 3: Ignoring Dependencies

**Problem**: Quick Win scored Effort=2, but depends on Effort=5 migration completing first
**Solution**: Score standalone effort AND prerequisite effort separately

**Example**:
- Item: "Add SSO login" (Effort=2 standalone)
- Depends on: "Migrate to new auth system" (Effort=5)
- **True effort**: 5 (for new roadmaps) or 2 (if migration already planned); see the sketch below
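
One way to keep that rule visible in a scoring spreadsheet or script is to compute effort both ways. A minimal sketch (the cap at 5 just reflects the 1-5 scale; names and numbers mirror the example above):

```python
def true_effort(standalone: int, prerequisite_costs: list[int],
                prerequisites_planned: bool) -> int:
    """Effort on the 1-5 scale, including unplanned prerequisites (capped at the scale max)."""
    if prerequisites_planned:
        return standalone
    return min(5, standalone + sum(prerequisite_costs))

# "Add SSO login": Effort=2 standalone, depends on the auth migration (Effort=5).
print(true_effort(2, [5], prerequisites_planned=False))  # -> 5: the migration dominates
print(true_effort(2, [5], prerequisites_planned=True))   # -> 2: migration already on the roadmap
```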

### Pitfall 4: Strategic Override ("CEO Wants It")

**Problem**: Exec declares item "high priority" without scoring, bypasses process
**Solution**: Make execs participate in scoring, apply same framework

**Response**: "Let's score this using our framework so we can compare it to other priorities. If it's truly high impact and low effort, it'll naturally rise to the top."

### Pitfall 5: Sunk Cost Fallacy (Continuing Time Sinks)

**Problem**: "We already spent 2 months on X, we can't stop now" (even if impact is low)
**Solution**: Sunk costs are sunk; evaluate based on future effort/impact only

**Decision rule**: If you wouldn't start this project today knowing what you know, stop it now

### Pitfall 6: Neglecting Maintenance (Assuming 100% Project Capacity)

**Problem**: Roadmap plans 100% of team time, ignoring support/bugs/tech debt/meetings
**Solution**: Reduce capacity by 20-50% for non-project work

**Realistic capacity**:
- 100% time - 20% support/bugs - 10% meetings - 10% code review/pairing = **60% project capacity**
- If the team is 5 people × 40 hrs/wk × 12 weeks = 2400 hrs, only 1440 hrs are available for the roadmap (worked in the sketch below)
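
The same arithmetic as a small sketch (overhead percentages are the example's, not universal constants):

```python
def project_hours(people: int, hours_per_week: float, weeks: int,
                  overhead: dict[str, float]) -> float:
    """Hours left for roadmap work after non-project overhead (given as fractions of total time)."""
    total = people * hours_per_week * weeks
    project_share = 1.0 - sum(overhead.values())
    return round(total * project_share, 1)

overhead = {"support_and_bugs": 0.20, "meetings": 0.10, "code_review_pairing": 0.10}
print(project_hours(people=5, hours_per_week=40, weeks=12, overhead=overhead))
# -> 1440.0 of the 2400 total hours
```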

### Pitfall 7: Ignoring User Research (Opinion-Based Scoring)

**Problem**: Impact scores based on team intuition, not user data
**Solution**: Validate impact with user research (surveys, interviews, usage data)

**Quick validation**:
- Survey 50 users: "How important is feature X?" (1-5)
- If <50% say 4-5, impact is not as high as assumed
- Adjust scores based on data (see the sketch below)
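
A tiny sketch of that check (the ratings list is fabricated; a real check would use the ~50 survey responses):

```python
def importance_signal(ratings: list[int], threshold: int = 4) -> float:
    """Share of respondents rating the feature at or above `threshold` on the 1-5 scale."""
    return sum(r >= threshold for r in ratings) / len(ratings)

ratings = [5, 4, 3, 2, 4, 3, 1, 2, 3, 4]   # hypothetical responses
share = importance_signal(ratings)
print(f"{share:.0%} rated it 4-5")         # -> 40% rated it 4-5
if share < 0.5:
    print("Impact is likely lower than assumed; adjust the score down")
```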

### Pitfall 8: Scope Creep During Execution

**Problem**: Quick Win (Effort=2) grows to Big Bet (Effort=5) during implementation
**Solution**: Timebox quick wins; if effort exceeds estimate, cut scope or defer

**Guardrail**: "If this takes >1 week, we stop and re-evaluate whether it's still worth it"

### Pitfall 9: Forgetting to Say "No"

**Problem**: Roadmap keeps growing (never remove items), becomes unexecutable
**Solution**: Explicitly cut time sinks, communicate what you're NOT doing

**Communication template**: "We prioritized X, Y, Z based on impact/effort. This means we're NOT doing A, B, C this quarter because [reason]. We'll revisit in [timeframe]."

### Pitfall 10: One-Time Prioritization (Never Re-Score)

**Problem**: Scores from 6 months ago are stale (context changed, new data available)
**Solution**: Re-score quarterly, adjust roadmap based on new information

**Triggers for re-scoring**:
- Quarterly planning cycles
- Major context changes (new competitor, customer churn, team size change)
- Big bets complete (update dependent items' scores)
- User research reveals new insights

374
skills/prioritization-effort-impact/resources/template.md
Normal file
@@ -0,0 +1,374 @@

# Prioritization: Effort-Impact Matrix Template

## Table of Contents
1. [Workflow](#workflow)
2. [Prioritization Matrix Template](#prioritization-matrix-template)
3. [Scoring Table Template](#scoring-table-template)
4. [Prioritized Roadmap Template](#prioritized-roadmap-template)
5. [Guidance for Each Section](#guidance-for-each-section)
6. [Quick Patterns](#quick-patterns)
7. [Quality Checklist](#quality-checklist)

## Workflow

Copy this checklist and track your progress:

```
Prioritization Progress:
- [ ] Step 1: Gather items and clarify scoring
- [ ] Step 2: Score effort and impact
- [ ] Step 3: Plot matrix and identify quadrants
- [ ] Step 4: Create prioritized roadmap
- [ ] Step 5: Validate and communicate decisions
```

**Step 1:** Collect all items to prioritize and define scoring scales. See [Scoring Table Template](#scoring-table-template) for structure.

**Step 2:** Rate each item on effort (1-5) and impact (1-5) with stakeholder input. See [Guidance: Scoring](#guidance-scoring) for calibration tips.

**Step 3:** Plot items on the 2x2 matrix and categorize into quadrants. See [Prioritization Matrix Template](#prioritization-matrix-template) for visualization.

**Step 4:** Sequence items into a roadmap (Quick Wins → Big Bets → Fill-Ins; avoid Time Sinks). See [Prioritized Roadmap Template](#prioritized-roadmap-template) for the execution plan.

**Step 5:** Self-check quality and communicate decisions with rationale. See [Quality Checklist](#quality-checklist) for validation.

---

## Prioritization Matrix Template

Copy this section to create your effort-impact matrix:

### Effort-Impact Matrix: [Context Name]

**Date**: [YYYY-MM-DD]
**Scope**: [e.g., Q1 Product Backlog, Technical Debt Items, Strategic Initiatives]
**Participants**: [Names/roles who contributed to scoring]

#### Matrix Visualization

```
High Impact │
          5 │   Big Bets          │   Quick Wins
            │   [Item names]      │   [Item names]
          4 │                     │
            │                     │
          3 │─────────────────────┼─────────────────
            │                     │
          2 │   Time Sinks        │   Fill-Ins
            │   [Item names]      │   [Item names]
          1 │                     │
 Low Impact │                     │
            └─────────────────────┴─────────────────
                 5        4       3        2       1
               High Effort              Low Effort
```

**Visual Plotting** (if using visual tools):
- Create a 2x2 grid (effort on X-axis, impact on Y-axis)
- Place each item at coordinates (effort, impact)
- Use color coding: Green=Quick Wins, Blue=Big Bets, Yellow=Fill-Ins, Red=Time Sinks
- Add item labels or numbers for reference (see the matplotlib sketch below)
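
If you prefer a generated chart to the ASCII grid, a short matplotlib sketch along these lines works (item names and scores are placeholders; axis orientation matches the matrix above):

```python
import matplotlib.pyplot as plt

# Placeholder items: (name, effort, impact) on the 1-5 scales defined in the scoring section below.
items = [("CSV export", 2, 5), ("Platform rebuild", 5, 5),
         ("Settings page polish", 1, 2), ("Legacy report parity", 5, 2)]

fig, ax = plt.subplots(figsize=(6, 6))
for name, effort, impact in items:
    ax.scatter(effort, impact)
    ax.annotate(name, (effort, impact), textcoords="offset points", xytext=(5, 5))

ax.axvline(3, linestyle="--")   # effort midline
ax.axhline(3, linestyle="--")   # impact midline
ax.set_xlim(0.5, 5.5)
ax.set_ylim(0.5, 5.5)
ax.invert_xaxis()               # high effort on the left, as in the ASCII matrix
ax.set_xlabel("Effort (5 = massive, 1 = trivial)")
ax.set_ylabel("Impact (1 = negligible, 5 = transformative)")
ax.set_title("Effort-Impact Matrix")
plt.savefig("effort_impact_matrix.png", dpi=150)
```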

#### Quadrant Summary

**Quick Wins (High Impact, Low Effort)** - Do First! ✓
- [Item 1]: Impact=5, Effort=2 - [Brief rationale]
- [Item 2]: Impact=4, Effort=1 - [Brief rationale]
- **Total**: X items

**Big Bets (High Impact, High Effort)** - Do Second
- [Item 3]: Impact=5, Effort=5 - [Brief rationale]
- [Item 4]: Impact=4, Effort=4 - [Brief rationale]
- **Total**: X items

**Fill-Ins (Low Impact, Low Effort)** - Do During Downtime
- [Item 5]: Impact=2, Effort=1 - [Brief rationale]
- [Item 6]: Impact=1, Effort=2 - [Brief rationale]
- **Total**: X items

**Time Sinks (Low Impact, High Effort)** - Avoid/Defer ❌
- [Item 7]: Impact=2, Effort=5 - [Brief rationale for why low impact]
- [Item 8]: Impact=1, Effort=4 - [Brief rationale]
- **Total**: X items
- **Recommendation**: Reject, defer, or significantly descope these items

---

## Scoring Table Template

Copy this table to score all items systematically:

### Scoring Table: [Context Name]

| # | Item Name | Effort | Impact | Quadrant | Notes/Rationale |
|---|-----------|--------|--------|----------|-----------------|
| 1 | [Feature/initiative name] | 2 | 5 | Quick Win ✓ | [Why this score?] |
| 2 | [Another item] | 4 | 4 | Big Bet | [Why this score?] |
| 3 | [Another item] | 1 | 2 | Fill-In | [Why this score?] |
| 4 | [Another item] | 5 | 2 | Time Sink ❌ | [Why low impact?] |
| ... | ... | ... | ... | ... | ... |

**Scoring Scales:**

**Effort (1-5):**
- **1 - Trivial**: <1 day, one person, no dependencies, no risk
- **2 - Small**: 1-3 days, one person or pair, minimal dependencies
- **3 - Medium**: 1-2 weeks, small team, some dependencies or moderate complexity
- **4 - Large**: 1-2 months, cross-team coordination, significant complexity or risk
- **5 - Massive**: 3+ months, major initiative, high complexity/risk/dependencies

**Impact (1-5):**
- **1 - Negligible**: <5% users affected, <$10K value, minimal pain relief
- **2 - Minor**: 5-20% users, $10-50K value, nice-to-have improvement
- **3 - Moderate**: 20-50% users, $50-200K value, meaningful pain relief
- **4 - Major**: 50-90% users, $200K-1M value, significant competitive advantage
- **5 - Transformative**: >90% users, $1M+ value, existential or strategic imperative

**Effort Dimensions (optional detail):**

| # | Item | Time | Complexity | Risk | Dependencies | **Avg Effort** |
|---|------|------|------------|------|--------------|----------------|
| 1 | [Item] | 2 | 2 | 1 | 2 | **2** |

**Impact Dimensions (optional detail):**

| # | Item | Users | Business Value | Strategy | Pain | **Avg Impact** |
|---|------|-------|----------------|----------|------|----------------|
| 1 | [Item] | 5 | 4 | 5 | 5 | **5** |
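
The dimension tables above can be reduced to a quadrant label mechanically. A minimal sketch (the cut-off of 3 follows the matrix midlines; dimension names and scores are the placeholder rows above):

```python
from statistics import mean

def quadrant(effort: float, impact: float, cutoff: float = 3.0) -> str:
    """Map an (effort, impact) pair to its quadrant using the matrix midlines."""
    if impact >= cutoff:
        return "Quick Win" if effort < cutoff else "Big Bet"
    return "Fill-In" if effort < cutoff else "Time Sink"

def score_item(effort_dims: dict[str, int], impact_dims: dict[str, int]) -> dict:
    """Average the optional per-dimension scores, then classify."""
    effort = mean(effort_dims.values())
    impact = mean(impact_dims.values())
    return {"effort": round(effort), "impact": round(impact), "quadrant": quadrant(effort, impact)}

print(score_item(
    effort_dims={"time": 2, "complexity": 2, "risk": 1, "dependencies": 2},
    impact_dims={"users": 5, "business_value": 4, "strategy": 5, "pain": 5},
))
# -> {'effort': 2, 'impact': 5, 'quadrant': 'Quick Win'}
```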

---

## Prioritized Roadmap Template

Copy this section to sequence items into an execution plan:

### Prioritized Roadmap: [Context Name]

**Planning Horizon**: [e.g., Q1 2024, Next 6 months]
**Team Capacity**: [e.g., 3 engineers × 80% project time = 2.4 FTE, assumes 20% support/maintenance]
**Execution Strategy**: Quick Wins first to build momentum, then Big Bets for strategic impact

#### Phase 1: Quick Wins (Weeks 1-4)

**Objective**: Deliver visible value fast, build stakeholder confidence

| Priority | Item | Effort | Impact | Timeline | Owner | Dependencies |
|----------|------|--------|--------|----------|-------|--------------|
| 1 | [Quick Win 1] | 2 | 5 | Week 1-2 | [Name] | None |
| 2 | [Quick Win 2] | 1 | 4 | Week 2 | [Name] | None |
| 3 | [Quick Win 3] | 2 | 4 | Week 3-4 | [Name] | [Blocker if any] |

**Expected Outcomes**: [User impact, metrics improvement, stakeholder wins]

#### Phase 2: Big Bets (Weeks 5-18)

**Objective**: Tackle high-value strategic initiatives

| Priority | Item | Effort | Impact | Timeline | Owner | Dependencies |
|----------|------|--------|--------|----------|-------|--------------|
| 4 | [Big Bet 1] | 5 | 5 | Week 5-12 | [Team/Name] | Quick Win 1 complete |
| 5 | [Big Bet 2] | 4 | 4 | Week 8-14 | [Team/Name] | External API access |
| 6 | [Big Bet 3] | 4 | 5 | Week 12-18 | [Team/Name] | Phase 1 learnings |

**Expected Outcomes**: [Strategic milestones, competitive positioning, revenue impact]

#### Phase 3: Fill-Ins (Ongoing, Low Priority)

**Objective**: Batch small tasks during downtime, sprint buffers, or waiting periods

| Item | Effort | Impact | Timing | Notes |
|------|--------|--------|--------|-------|
| [Fill-In 1] | 1 | 2 | Sprint buffer | Do if capacity available |
| [Fill-In 2] | 2 | 1 | Between phases | Nice-to-have polish |
| [Fill-In 3] | 1 | 2 | Waiting on blocker | Quick task while blocked |

**Strategy**: Don't schedule these explicitly; fill gaps opportunistically

#### Deferred/Rejected Items (Time Sinks)

**Objective**: Communicate what we're NOT doing and why

| Item | Effort | Impact | Reason for Rejection | Reconsider When |
|------|--------|--------|----------------------|-----------------|
| [Time Sink 1] | 5 | 2 | Low ROI, niche use case | User demand increases 10× |
| [Time Sink 2] | 4 | 1 | Premature optimization | Performance becomes bottleneck |
| [Time Sink 3] | 5 | 2 | Edge case perfection | Core features stable for 6mo |

**Communication**: Explicitly tell stakeholders these are cut to focus resources on higher-impact work

#### Capacity Planning

**Total Planned Work**: [X effort points] across Quick Wins + Big Bets
**Available Capacity**: [Y effort points] (team size × time × utilization)
**Buffer**: [Z%] for unplanned work, support, bugs
**Risk**: [High/Medium/Low] - [Explanation of capacity risks]

**Guardrail**: Don't exceed 70-80% of available capacity to allow for unknowns

---

## Guidance for Each Section

### Guidance: Scoring

**Get diverse input**:
- **Engineering**: Estimates effort (time, complexity, risk, dependencies)
- **Product**: Estimates impact (user value, business value, strategic alignment)
- **Sales/CS**: Validates customer pain and business value
- **Design**: Assesses UX impact and design effort

**Calibration session**:
1. Score 3-5 reference items together to calibrate the scale
2. Use these as anchors: "If X is a 3, then Y is probably a 2"
3. Document examples: "Effort=2 example: Add CSV export (2 days, one dev)"

**Avoid bias**:
- ❌ **Anchoring**: First person's score influences others → use silent voting, then discuss
- ❌ **Optimism bias**: Engineers underestimate effort → add 20-50% buffer
- ❌ **HIPPO (Highest Paid Person's Opinion)**: Exec scores override reality → anonymous scoring first
- ❌ **Recency bias**: Recent successes inflate confidence → review past estimates

**Differentiate scores**:
- If 80% of items are scored 3, you haven't prioritized
- Force a distribution: Top 20% are 4-5, bottom 20% are 1-2, middle 60% are 2-4
- Use ranking if needed: "Rank all items, then assign scores based on distribution" (see the sketch below)
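
The rank-then-score trick can be mechanized. A sketch under one reasonable reading of the 20/60/20 split above (the exact cut-offs are a judgment call):

```python
def scores_from_ranking(ranked_items: list[str]) -> dict[str, int]:
    """Convert a forced ranking (best first) into differentiated 1-5 scores.

    Roughly: top 10% -> 5, next 10% -> 4, middle 60% -> 3, next 10% -> 2, bottom 10% -> 1.
    """
    n = len(ranked_items)
    scores = {}
    for position, item in enumerate(ranked_items):
        percentile = position / n          # 0.0 for the top-ranked item
        if percentile < 0.10:
            scores[item] = 5
        elif percentile < 0.20:
            scores[item] = 4
        elif percentile < 0.80:
            scores[item] = 3
        elif percentile < 0.90:
            scores[item] = 2
        else:
            scores[item] = 1
    return scores

# Ten hypothetical items, already ranked by impact (best first).
ranking = [f"Item {chr(ord('A') + i)}" for i in range(10)]
print(scores_from_ranking(ranking))
# -> Item A=5, Item B=4, Items C-H=3, Item I=2, Item J=1
```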

### Guidance: Quadrant Interpretation

**Quick Wins (High Impact, Low Effort)** - Rare, valuable
- ✓ Do these immediately
- ✓ Communicate early wins to build momentum
- ❌ Beware: If you have >5 quick wins, scores may be miscalibrated
- ❓ Ask: "If this is so easy and valuable, why haven't we done it already?"

**Big Bets (High Impact, High Effort)** - Strategic focus
- ✓ Schedule 1-2 big bets per quarter (don't overcommit)
- ✓ Break into phases/milestones for incremental value
- ✓ Start after quick wins to build team capability and stakeholder trust
- ❌ Don't start 3+ big bets simultaneously (thrashing, context switching)

**Fill-Ins (Low Impact, Low Effort)** - Opportunistic
- ✓ Batch together (e.g., "polish sprint" once per quarter)
- ✓ Do during downtime, sprint buffers, or while blocked
- ❌ Don't schedule explicitly (wastes planning time)
- ❌ Don't let these crowd out big bets

**Time Sinks (Low Impact, High Effort)** - Avoid!
- ✓ Explicitly reject or defer with clear rationale
- ✓ Challenge: Can we descope to make this lower effort?
- ✓ Communicate to stakeholders: "We're not doing X because..."
- ❌ Don't let these sneak into the roadmap due to HIPPO or sunk cost fallacy

### Guidance: Roadmap Sequencing

**Phase 1: Quick Wins First**
- Builds momentum, team confidence, stakeholder trust
- Delivers early value while learning about systems/users
- Creates psychological safety for bigger risks later

**Phase 2: Big Bets Second**
- Team is warmed up, systems are understood
- Quick wins have bought goodwill for longer-timeline items
- Learnings from Phase 1 inform Big Bet execution

**Phase 3: Fill-Ins Opportunistically**
- Don't schedule; do when capacity is available
- Useful for onboarding new team members (low-risk tasks)
- Good for sprint buffers or while waiting on dependencies

**Dependencies:**
- Map explicitly (item X depends on item Y completing)
- Use critical path analysis for complex roadmaps
- Build slack/buffer before dependent items

---

## Quick Patterns

### By Context

**Product Backlog (50+ features)**:
- Effort: Engineering time + design + QA + deployment risk
- Impact: User reach × pain severity × business value
- Quick wins: UX fixes, config changes, small integrations
- Big bets: New workflows, platform changes, major redesigns

**Technical Debt (30+ items)**:
- Effort: Refactoring time + testing + migration risk
- Impact: Developer productivity + future feature velocity + incidents prevented
- Quick wins: Dependency upgrades, linting fixes, small refactors
- Big bets: Architecture overhauls, language migrations, monolith → microservices

**Bug Triage (100+ bugs)**:
- Effort: Debug time + fix complexity + regression risk + deployment
- Impact: User pain × frequency × business impact (revenue/support cost)
- Quick wins: High-frequency easy fixes, workarounds for critical bugs
- Big bets: Complex race conditions, performance issues, architectural bugs

**Strategic Initiatives (10-20 ideas)**:
- Effort: People × months + capital + dependencies
- Impact: Revenue/cost impact + strategic alignment + competitive advantage
- Quick wins: Process improvements, pilot programs, low-cost experiments
- Big bets: Market expansion, platform bets, major partnerships

### Common Scenarios

**All Big Bets, No Quick Wins**:
- Problem: Roadmap takes 6+ months for first value delivery
- Fix: Break big bets into phases; ship incremental value
- Example: Instead of "Rebuild platform" (6mo), do "Migrate auth" (1mo) + "Migrate users" (1mo) + ...

**All Quick Wins, No Strategic Depth**:
- Problem: Delivering small wins but losing competitive ground
- Fix: Schedule 1-2 big bets per quarter for strategic positioning
- Balance: 70% quick wins + fill-ins, 30% big bets

**Too Many Time Sinks**:
- Problem: Backlog clogged with low-value high-effort items
- Fix: Purge ruthlessly; if impact is low, effort doesn't matter
- Communication: "We're closing 20 low-value items to focus resources"

---

## Quality Checklist

Before finalizing, verify:

**Scoring Quality:**
- [ ] Diverse stakeholders contributed to scores (eng, product, sales, etc.)
- [ ] Scores are differentiated (not all 3s; use full 1-5 range)
- [ ] Extreme scores questioned ("Why haven't we done this quick win already?")
- [ ] Scoring rationale documented for transparency
- [ ] Effort includes time, complexity, risk, dependencies (not just time)
- [ ] Impact includes users, value, strategy, pain (not just one dimension)

**Matrix Quality:**
- [ ] 10-20% Quick Wins (if 0%, scores miscalibrated; if 50%, too optimistic)
- [ ] 20-30% Big Bets (strategic work, not just small tasks)
- [ ] Time Sinks identified and explicitly cut/deferred
- [ ] Items clustered around quadrant boundaries re-evaluated (e.g., Effort=2.5, Impact=2.5)
- [ ] Visual matrix created (not just table) for stakeholder communication

**Roadmap Quality:**
- [ ] Quick Wins scheduled first (Weeks 1-4)
- [ ] Big Bets scheduled second (after momentum built)
- [ ] Fill-Ins not explicitly scheduled (opportunistic)
- [ ] Time Sinks explicitly rejected with rationale communicated
- [ ] Dependencies mapped (item X depends on Y)
- [ ] Capacity buffer included (don't plan 100% of capacity)
- [ ] Timeline realistic (effort scores × team size = weeks)

**Communication Quality:**
- [ ] Prioritization decisions explained (not just "we're doing X")
- [ ] Trade-offs visible ("Doing X means not doing Y")
- [ ] Stakeholder concerns addressed ("Sales wanted Z, but impact is low because...")
- [ ] Success metrics defined (how will we know this roadmap succeeded?)
- [ ] Review cadence set (re-score quarterly, adjust roadmap monthly)

**Red Flags to Fix:**
- ❌ One person scored everything alone
- ❌ All scores are 2.5-3.5 (not differentiated)
- ❌ Zero quick wins identified
- ❌ Roadmap is 100% big bets (unrealistic)
- ❌ Time sinks included in roadmap (low ROI)
- ❌ No capacity buffer (planned at 100%)
- ❌ No rationale for why items were prioritized
- ❌ Stakeholders disagree on scores but no discussion happened