Initial commit

Zhongwei Li
2025-11-30 08:38:26 +08:00
commit 41d9f6b189
304 changed files with 98322 additions and 0 deletions

@@ -0,0 +1,288 @@
{
"name": "Portfolio Roadmapping Bets Evaluator",
"description": "Evaluate quality of portfolio roadmaps and betting frameworks—assessing strategic clarity, bet sizing, horizon sequencing, exit/scale criteria, portfolio balance, dependencies, capacity feasibility, and impact ambition.",
"version": "1.0.0",
"criteria": [
{
"name": "Strategic Theme Clarity",
"description": "Evaluates whether portfolio theme is specific, measurable, time-bound, and inspiring",
"weight": 1.3,
"scale": {
"1": {
"label": "No theme or vague",
"description": "No strategic theme, or too vague ('improve product', 'grow business'). No timeline. Not measurable. Random collection of projects."
},
"2": {
"label": "Generic theme",
"description": "Theme stated but generic. Missing specifics (target number or timeline unclear). Hard to tell what success looks like. Loosely connected to bets."
},
"3": {
"label": "Clear theme with target",
"description": "Theme is specific with measurable target and timeline. Example: 'Grow revenue 3x in 18 months'. Bets mostly align with theme. Purpose clear."
},
"4": {
"label": "Compelling theme with rationale",
"description": "Theme is specific, measurable, time-bound, and strategic. Rationale explained (why this goal, why now). Success metrics defined. All bets clearly ladder up to theme. Inspiring to team."
},
"5": {
"label": "Exceptional strategic clarity",
"description": "Theme is North Star-aligned with quantified targets, strategic rationale (market opportunity, competitive dynamics), and clear success metrics. Multi-level goals (business, user, team). Bets comprehensively ladder up with impact math shown. Inspires team and aligns stakeholders. Constraints acknowledged (what we're not doing)."
}
}
},
{
"name": "Bet Sizing & Estimation",
"description": "Evaluates whether bets are sized by effort and impact with clear, consistent methodology",
"weight": 1.2,
"scale": {
"1": {
"label": "No sizing or inconsistent",
"description": "Bets not sized, or effort/impact vague ('big', 'small' without definition). No methodology. Can't compare bets."
},
"2": {
"label": "Vague sizing",
"description": "Some bets sized but inconsistent. Effort in different units (days vs weeks vs person-months). Impact qualitative only ('high', 'medium', 'low'). Hard to prioritize."
},
"3": {
"label": "Consistent sizing",
"description": "All bets sized with consistent methodology. Effort in S/M/L/XL or person-months. Impact quantified (1x/3x/10x or metric-based). Can compare and prioritize bets."
},
"4": {
"label": "Well-calibrated sizing",
"description": "Bets sized using framework (RICE, ICE, effort/impact matrix). Effort includes all functions (eng, design, PM, QA). Impact tied to metrics with baselines. Examples provided. Estimates justified with rationale or historical data."
},
"5": {
"label": "Rigorous estimation",
"description": "Comprehensive sizing with RICE or similar framework. Effort broken down by function and phase. Impact quantified with baseline, target, and confidence intervals. Historical calibration (past estimates vs actuals). Ranges provided for uncertainty (best/likely/worst case). Assumptions documented. Comparable bets benchmarked."
}
}
},
{
"name": "Horizon Sequencing & Dependencies",
"description": "Evaluates whether bets are sequenced across horizons with clear dependencies and rationale",
"weight": 1.3,
"scale": {
"1": {
"label": "No sequencing or random",
"description": "Bets not assigned to horizons, or random sequencing. No dependencies identified. Unclear what's now vs next vs later."
},
"2": {
"label": "Vague sequencing",
"description": "Bets assigned to horizons but rationale unclear. Dependencies mentioned but not mapped. Some bets seem out of order (H2 depends on H1 bet not prioritized)."
},
"3": {
"label": "Clear sequencing with dependencies",
"description": "Bets assigned to H1/H2/H3 with rationale. Dependencies identified (technical, learning, strategic, resource). Critical path visible. Sequencing makes sense."
},
"4": {
"label": "Well-sequenced roadmap",
"description": "Bets thoughtfully sequenced across horizons. Dependencies explicitly mapped (dependency matrix or diagram). Critical path identified. Sequencing rationale explained (why this before that). Learning-based sequencing (small experiments before large bets). Parallel work streams identified."
},
"5": {
"label": "Optimized sequencing",
"description": "Bets sequenced using critical path method or similar. All dependency types mapped (technical, learning, strategic, resource). Parallel paths identified to minimize timeline. Sequencing heuristics applied (dependencies first, learn before scaling, quick wins early, long bets start early). Mitigation for critical path risks. Phasing plan for complex bets. Shows deep thinking about execution order."
}
}
},
{
"name": "Exit & Scale Criteria",
"description": "Evaluates whether bets have clear, measurable exit (kill) and scale (double-down) criteria",
"weight": 1.2,
"scale": {
"1": {
"label": "No criteria",
"description": "Exit and scale criteria missing. No decision framework for when to kill or double-down. 'We'll see how it goes' mentality."
},
"2": {
"label": "Vague criteria",
"description": "Some criteria mentioned but vague ('if it works', 'if users like it'). Not measurable. No timelines. Unclear decision points."
},
"3": {
"label": "Clear criteria for most bets",
"description": "Exit and scale criteria defined for most bets. Metrics specified. Timelines provided. Decision points clear. Some criteria measurable."
},
"4": {
"label": "Well-defined criteria",
"description": "Exit and scale criteria for all bets. Criteria are SMART (specific, measurable, achievable, relevant, time-bound). Examples: 'Exit if adoption <5% after 60 days', 'Scale if engagement >50% and NPS >70'. Thresholds justified with baselines or benchmarks. Decision owner specified."
},
"5": {
"label": "Rigorous decision framework",
"description": "Comprehensive exit/scale criteria for all bets. Criteria tied to North Star metric. Multiple criteria types (time-based, metric-based, cost-based, strategic). Staged funding model (milestones with go/no-go decisions). Thresholds calibrated with baselines, benchmarks, and risk tolerance. Decision process documented. Shows discipline and learning mindset (celebrate killing losers)."
}
}
},
{
"name": "Portfolio Balance",
"description": "Evaluates whether portfolio is balanced across risk profiles, horizons, and bet sizes",
"weight": 1.3,
"scale": {
"1": {
"label": "Imbalanced or unchecked",
"description": "Portfolio balance not considered. All bets one type (all core or all moonshots). All one size (all small or all large). No mix."
},
"2": {
"label": "Some balance awareness",
"description": "Portfolio balance mentioned but not quantified. No target distribution. Actual distribution unclear. Imbalanced (>80% one type, >70% one horizon, >60% one size)."
},
"3": {
"label": "Balanced with targets",
"description": "Portfolio balance targets defined (e.g., 70% core / 20% adjacent / 10% transformational). Actual distribution calculated. Mostly balanced. Some mix across horizons and sizes."
},
"4": {
"label": "Well-balanced portfolio",
"description": "Portfolio balanced across multiple dimensions: risk (70/20/10 core/adjacent/transformational), horizons (50/30/20 H1/H2/H3), sizes (mix of S/M/L/XL). Target and actual distribution shown. Balance rationale explained (why 70/20/10 for this context). Adjustments made to rebalance if needed."
},
"5": {
"label": "Comprehensively balanced",
"description": "Portfolio rigorously balanced using frameworks (McKinsey Three Horizons, barbell strategy, risk-return diversification). Multiple balance checks: risk distribution, horizon distribution, size distribution, cycle time distribution (fast/medium/slow). Context-specific targets (startup vs enterprise vs scale-up). Balance validated against strategic goals and risk tolerance. Trade-offs acknowledged. Shows sophisticated portfolio thinking."
}
}
},
{
"name": "Capacity Feasibility",
"description": "Evaluates whether total effort is realistic given team capacity and constraints",
"weight": 1.2,
"scale": {
"1": {
"label": "Capacity ignored",
"description": "No capacity analysis. Effort totals unknown. Likely overcommitted (more bets than team can handle). Unrealistic roadmap."
},
"2": {
"label": "Vague capacity check",
"description": "Capacity mentioned but not quantified. Effort totals rough or missing. Unclear if feasible. Team likely overcommitted or underutilized."
},
"3": {
"label": "Capacity-constrained",
"description": "Total effort calculated per horizon. Capacity quantified (person-months available). Effort ≤ capacity. Feasibility checked. Some slack for unknowns."
},
"4": {
"label": "Realistic capacity planning",
"description": "Capacity by function (eng, design, PM, QA). Effort allocated accordingly. Utilization target set (≤80% for 20% slack). Effort totals ≤ capacity × 0.8. Contingency for unknowns, vacations, attrition. Overcommitment risks identified."
},
"5": {
"label": "Sophisticated resource planning",
"description": "Capacity planning by function, by horizon, by skill set. Utilization targets justified (80% for mature teams, 60% for new teams). Effort includes all work types (feature dev, tech debt, ops, learning). Dependency on external teams or vendors factored in. Hiring plan aligned to roadmap (if scaling team). Risk scenarios modeled (what if 2 people leave, what if key bet slips). Shows deep understanding of execution realities."
}
}
},
{
"name": "Impact Ambition & Alignment",
"description": "Evaluates whether portfolio impact ladders up to strategic theme with risk adjustment",
"weight": 1.1,
"scale": {
"1": {
"label": "No impact analysis",
"description": "Portfolio impact not calculated. Unclear if bets ladder up to theme. No connection between bet impacts and strategic goal."
},
"2": {
"label": "Vague impact",
"description": "Impact mentioned but not quantified. Hard to tell if portfolio achieves theme. No risk adjustment. Optimistic assumptions."
},
"3": {
"label": "Impact quantified",
"description": "Total portfolio impact calculated (sum of all bet impacts). Compared to strategic goal. Bets generally aligned to theme. Some risk adjustment (not assuming 100% success)."
},
"4": {
"label": "Impact ladders up with risk adjustment",
"description": "Portfolio impact comprehensively calculated. Risk-adjusted (assume 50% success rate or similar). Expected impact ≥ strategic goal. Impact math shown (Bet A: 1.5x, Bet B: 2x → Total: 3.5x if all succeed → Expected: 1.75x at 50% success). Gaps identified and addressed."
},
"5": {
"label": "Rigorous impact modeling",
"description": "Portfolio impact modeled with scenarios (best/likely/worst case). Risk-adjusted using historical win rates or confidence scores. Impact tied to North Star metric and business outcomes. Sensitivity analysis (what if key bets fail). Portfolio ambition justified (aggressive but achievable). Gaps between expected impact and strategic goal addressed with additional bets or revised targets. Shows strategic thinking and quantitative rigor."
}
}
},
{
"name": "Review & Iteration Plan",
"description": "Evaluates whether review cadence, criteria, and iteration process are defined",
"weight": 1.0,
"scale": {
"1": {
"label": "No review plan",
"description": "Review process not mentioned. Roadmap created once, never updated. No iteration framework."
},
"2": {
"label": "Vague review plan",
"description": "Review mentioned but cadence unclear. No criteria for what to review. No iteration process (kill/pivot/scale)."
},
"3": {
"label": "Review cadence defined",
"description": "Review cadence specified (monthly, quarterly). Review criteria mentioned. Some iteration process (check progress, make adjustments)."
},
"4": {
"label": "Structured review process",
"description": "Review cadence by horizon (H1 monthly, H2 quarterly, H3 semi-annually). Review criteria clear (check exit/scale criteria, capacity, dependencies). Iteration framework defined (kill/pivot/persevere/scale). Next review date scheduled."
},
"5": {
"label": "Rigorous review discipline",
"description": "Comprehensive review process with cadence, criteria, and iteration framework. Portfolio health metrics tracked (velocity, win rate, impact, balance). Decision framework for kill/pivot/persevere/scale. Version control (track roadmap changes over time). Celebration of learning (reward killing losers, not just shipping). Shows commitment to continuous improvement and adaptive planning."
}
}
}
],
"guidance": {
"by_portfolio_type": {
"product_portfolio": {
"focus": "Prioritize bet sizing (1.3x), horizon sequencing (1.3x), and balance (1.3x). Product teams need clear prioritization and feasibility.",
"typical_scores": "Bet sizing 4+, sequencing 4+, balance 4+. Impact and criteria can be 3+ (evolving based on experiments).",
"red_flags": "All bets in H1 (unrealistic), no exit criteria (sunk cost), imbalanced (all features or all infrastructure)"
},
"technology_portfolio": {
"focus": "Prioritize capacity feasibility (1.3x), dependencies (1.3x), and sequencing (1.3x). Tech work has complex dependencies.",
"typical_scores": "Capacity 4+, dependencies 4+, sequencing 4+. Strategic theme can be 3+ (tech goals may be less flashy).",
"red_flags": "Dependencies ignored (H2 blocked), capacity overcommitted (100% utilization), no tech debt paydown"
},
"innovation_portfolio": {
"focus": "Prioritize exit/scale criteria (1.3x), balance (1.3x), and impact ambition (1.3x). Innovation requires disciplined experimentation.",
"typical_scores": "Exit/scale 4+, balance 4+ (70/20/10), impact 4+. Sequencing can be 3+ (more exploratory).",
"red_flags": "No exit criteria (zombie projects), all transformational (too risky), impact below strategic goal"
},
"marketing_portfolio": {
"focus": "Prioritize exit/scale criteria (1.3x), bet sizing (1.3x), and review process (1.3x). Marketing experiments need fast iteration.",
"typical_scores": "Exit/scale 4+, bet sizing 4+, review 4+ (monthly). Sequencing can be 3+ (less dependent).",
"red_flags": "No exit criteria (continuing failed campaigns), unmeasurable impact, no review cadence"
}
},
"by_portfolio_maturity": {
"first_time": {
"expectations": "Strategic theme 3+, bet sizing 3+, sequencing 3+. First portfolio roadmap may be rough. Focus: Establish basics.",
"next_steps": "Refine sizing methodology, map dependencies, set review cadence"
},
"established": {
"expectations": "All criteria 3.5+. Team has roadmapping experience. Focus: Improve balance, capacity planning, impact alignment.",
"next_steps": "Risk-adjust impact, optimize sequencing, track portfolio health metrics"
},
"advanced": {
"expectations": "All criteria 4+. Sophisticated portfolio management. Focus: Continuous improvement, scenario planning, advanced optimization.",
"next_steps": "Sensitivity analysis, portfolio health dashboard, predictive modeling"
}
}
},
"common_failure_modes": {
"vague_theme": "Theme too generic ('improve product'). Fix: Quantify target (3x revenue in 18 months) and tie bets to it.",
"everything_h1": "All bets crammed into H1 (wish list). Fix: Capacity-constrain H1 to what's realistic, move rest to H2/H3.",
"no_exit_criteria": "No decision points to kill bets. Fix: Set exit criteria upfront (metric + timeline), review monthly, celebrate killing.",
"portfolio_imbalanced": "All core (too safe) or all transformational (too risky). Fix: Use 70/20/10 rule, rebalance to targets.",
"dependencies_ignored": "H2 bets depend on H1 infrastructure not prioritized. Fix: Map dependencies, prioritize blocking work.",
"capacity_overcommitted": "Total effort exceeds team capacity. Fix: Sum effort, compare to capacity, cut scope to ≤80% utilization.",
"impact_below_goal": "Portfolio impact below strategic theme even if all succeed. Fix: Add more bets, increase ambition, or revise goal.",
"no_review_discipline": "Roadmap created once, never updated. Fix: Set monthly/quarterly review cadence, track progress, iterate."
},
"excellence_indicators": [
"Strategic theme is specific, measurable, time-bound, and inspires team (North Star-aligned)",
"All bets sized using consistent methodology (RICE, ICE) with effort and impact quantified",
"Bets sequenced across H1/H2/H3 with dependencies explicitly mapped and critical path identified",
"Exit and scale criteria defined for all bets with SMART metrics and decision owners",
"Portfolio balanced using frameworks (70/20/10 risk, 50/30/20 horizons) with context-specific targets",
"Capacity feasibility validated (effort ≤ capacity × 0.8) with contingency for unknowns",
"Portfolio impact ladders up to strategic theme with risk adjustment and scenario modeling",
"Review cadence defined (H1 monthly, H2 quarterly, H3 semi-annually) with iteration framework (kill/pivot/scale)",
"Portfolio health metrics tracked (velocity, win rate, impact, balance) with dashboard",
"Stakeholder alignment achieved with clear prioritization and trade-offs documented"
],
"evaluation_notes": {
"scoring": "Calculate weighted average across all criteria. Minimum passing score: 3.0 (basic quality). Production-ready target: 3.5+. Excellence threshold: 4.2+. For product portfolios, weight bet sizing, sequencing, and balance higher. For tech portfolios, weight capacity, dependencies, and sequencing higher. For innovation portfolios, weight exit/scale criteria, balance, and impact higher.",
"context": "Adjust expectations by portfolio maturity. First-time roadmaps can have looser targets (3+). Established teams should hit 3.5+ across the board. Advanced teams should aim for 4+. Different portfolio types need different emphasis: product portfolios need clear prioritization (bet sizing 4+), tech portfolios need dependency management (dependencies 4+), innovation portfolios need disciplined experimentation (exit/scale criteria 4+).",
"iteration": "Low scores indicate specific improvement areas. Priority order: 1) Fix vague strategic theme (clarifies direction), 2) Size bets consistently (enables prioritization), 3) Map dependencies (prevents blocking), 4) Set exit/scale criteria (enables learning), 5) Balance portfolio (manages risk), 6) Validate capacity (prevents burnout), 7) Align impact (achieves goal), 8) Establish review cadence (enables iteration). Re-score after each improvement cycle."
}
}

@@ -0,0 +1,297 @@
# Portfolio Roadmapping Bets Methodology
## Table of Contents
1. [Horizon Planning Frameworks](#1-horizon-planning-frameworks)
2. [Bet Sizing Methodologies](#2-bet-sizing-methodologies)
3. [Portfolio Balancing Techniques](#3-portfolio-balancing-techniques)
4. [Dependency Mapping](#4-dependency-mapping)
5. [Exit & Scale Criteria](#5-exit--scale-criteria)
6. [Portfolio Review](#6-portfolio-review)
7. [Anti-Patterns & Fixes](#7-anti-patterns--fixes)
---
## 1. Horizon Planning Frameworks
### McKinsey Three Horizons
**H1: Extend & Defend Core** (70%)
- Timeline: 0-12mo | Risk: Low | Return: 10-30% | Examples: Feature improvements, optimizations
**H2: Build Emerging Businesses** (20%)
- Timeline: 6-24mo | Risk: Medium | Return: 2-5x | Examples: New product lines, geographies
**H3: Create Transformational Options** (10%)
- Timeline: 12-36+mo | Risk: High | Return: 10x+ | Examples: Moonshots, new business models
**Adjustments**: Startup (50/30/20), Enterprise (80/15/5), Scale-up (70/20/10)
### Now-Next-Later
**Now** (Shipping this quarter): >80% confidence, requirements clear, in development
**Next** (Starting in 1-2 quarters): ~60% confidence, requirements mostly clear, in planning
**Later** (Future quarters): ~40% confidence, requirements still unclear, in research
Use when: Teams uncomfortable with 6-12-24mo planning
### Dual-Track Agile
**Discovery** (Learn): User research, prototypes, experiments → Decide what to build
**Delivery** (Ship): Build, ship, monitor, iterate
**Application**: H1 = delivery, H2 = mix, H3 = discovery. Discovery runs 1-2 sprints ahead.
---
## 2. Bet Sizing Methodologies
### RICE Scoring
**Formula**: (Reach × Impact × Confidence) / Effort
- **Reach**: Users affected per quarter
- **Impact**: 0.25 (minimal) to 3 (massive)
- **Confidence**: 50% (low) to 100% (high)
- **Effort**: Person-months
**Example**: (5000 × 2 × 80%) / 4 = 2000 score
### ICE Scoring
**Formula**: (Impact + Confidence + Ease) / 3 or Impact × Confidence × Ease
- **Impact**: 1-10 scale
- **Confidence**: 1-10 scale
- **Ease**: 1-10 scale (inverse of effort)
Use when: Quick prioritization without reach data
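Both formulas drop straight into a few lines of Python; the following is a minimal sketch (the function names and sample numbers are illustrative, not from any specific tool):
```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE = (Reach x Impact x Confidence) / Effort.
    reach: users affected per quarter; impact: 0.25-3;
    confidence: 0.5-1.0; effort: person-months."""
    return (reach * impact * confidence) / effort


def ice_score(impact: float, confidence: float, ease: float) -> float:
    """ICE as the simple average of three 1-10 scores (the multiplicative variant works too)."""
    return (impact + confidence + ease) / 3


# Worked example from above: (5000 x 2 x 80%) / 4 = 2000
print(rice_score(reach=5000, impact=2, confidence=0.80, effort=4))  # 2000.0
print(ice_score(impact=7, confidence=6, ease=8))                    # 7.0
```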
### Effort/Impact Matrix
**Quadrants**:
- High Impact, Low Effort → Quick wins (do first)
- High Impact, High Effort → Strategic (plan carefully, H2/H3)
- Low Impact, Low Effort → Fill-ins (if spare capacity)
- Low Impact, High Effort → Avoid
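A sketch of the quadrant lookup, assuming bets are scored 1-10 on each axis (the threshold of 5 is a placeholder; pick whatever cut-off your team agrees on):
```python
def quadrant(impact: float, effort: float, threshold: float = 5.0) -> str:
    """Map an impact/effort pair (1-10 scales) onto the four quadrants above."""
    high_impact = impact >= threshold
    high_effort = effort >= threshold
    if high_impact and not high_effort:
        return "Quick win (do first)"
    if high_impact and high_effort:
        return "Strategic (plan carefully, H2/H3)"
    if not high_impact and not high_effort:
        return "Fill-in (if spare capacity)"
    return "Avoid"


print(quadrant(impact=8, effort=2))  # Quick win (do first)
print(quadrant(impact=3, effort=9))  # Avoid
```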
### Kano Model
**Basic Needs**: Must-have (if missing, dissatisfied) → H1
**Performance Needs**: Linear satisfaction (more is better) → H1/H2
**Delight Needs**: Unexpected wow factors → H2/H3
---
## 3. Portfolio Balancing Techniques
### 70-20-10 Rule
- **70% Core**: Optimize existing (low risk, predictable return)
- **20% Adjacent**: Extend to new (medium risk, substantial return)
- **10% Transformational**: Create new (high risk, breakthrough potential)
**Measure by**: Bet count or effort. **Red flags**: >80% core (too safe), >30% transformational (too risky)
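A sketch of the balance check, measuring by effort (the field names, tolerance band, and sample bets are assumptions):
```python
def risk_balance(bets, targets=None, tolerance=0.10):
    """Compare the actual core/adjacent/transformational split (by effort)
    against target percentages; flag anything outside the tolerance band."""
    targets = targets or {"core": 0.70, "adjacent": 0.20, "transformational": 0.10}
    total = sum(b["effort"] for b in bets) or 1
    report = {}
    for bet_type, target in targets.items():
        actual = sum(b["effort"] for b in bets if b["type"] == bet_type) / total
        report[bet_type] = {"actual": round(actual, 2), "target": target,
                            "ok": abs(actual - target) <= tolerance}
    return report


bets = [
    {"name": "Checkout polish", "type": "core", "effort": 3},
    {"name": "Search tuning", "type": "core", "effort": 4},
    {"name": "New geography", "type": "adjacent", "effort": 2},
    {"name": "AI assistant", "type": "transformational", "effort": 1},
]
print(risk_balance(bets))  # core 0.7, adjacent 0.2, transformational 0.1 -> all ok
```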
### Risk-Return Diversification
**Low Risk, Low Return** (Core): 80-90% win rate, 1.2-1.5x return
**Medium Risk, Medium Return** (Adjacent): 50-60% win rate, 2-3x return
**High Risk, High Return** (Transformational): 10-30% win rate, 10x+ return
**Portfolio construction**: Combine to achieve desired risk/return profile
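The blended expectation is just allocation × win rate × return, summed over the three buckets; a back-of-the-envelope sketch using mid-points of the ranges above and assuming losing bets return zero:
```python
# (allocation, win rate, return multiple if the bet wins) per bucket
profile = {
    "core":             (0.70, 0.85, 1.35),
    "adjacent":         (0.20, 0.55, 2.5),
    "transformational": (0.10, 0.20, 10.0),
}

# Losing bets assumed to return 0, so expected value = allocation * win_rate * return
expected = sum(alloc * win * ret for alloc, win, ret in profile.values())
print(f"Blended expected return: {expected:.2f}x of effort deployed")  # ~1.28x
```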
### Barbell Strategy
**Structure**: 80-90% very safe + 10-20% very risky, 0% medium
**Rationale**: Safe bets sustain, risky bets create upside, avoid "meh" middle
### Pacing by Cycle Time
**Fast** (days-weeks): A/B tests, experiments → 50%
**Medium** (months): Features, initiatives → 30%
**Slow** (quarters-years): Platform, R&D → 20%
---
## 4. Dependency Mapping
### Critical Path Method
1. List all bets and dependencies
2. Map dependencies (A → B → C)
3. Calculate duration for each path
4. Identify critical path (longest)
5. Accelerate critical path
**Example**: Path A (3mo) → C (4mo) = 7mo ← Critical. Path B (2mo) → C (4mo) = 6mo (1mo slack)
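With more than a handful of bets, the critical path is easier to compute than to eyeball; a minimal sketch for an acyclic dependency map (durations match the A/B/C example above):
```python
from functools import lru_cache

duration = {"A": 3, "B": 2, "C": 4}               # months per bet
depends_on = {"A": [], "B": [], "C": ["A", "B"]}  # C needs both A and B


@lru_cache(maxsize=None)
def earliest_finish(bet: str) -> int:
    """Earliest finish = own duration + latest finish among prerequisites."""
    prereqs = depends_on[bet]
    return duration[bet] + (max(earliest_finish(p) for p in prereqs) if prereqs else 0)


critical_end = max(duration, key=earliest_finish)
print(critical_end, earliest_finish(critical_end), "months")  # C 7 months (via A -> C)
```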
### Dependency Types
- **Technical**: Infrastructure, APIs, data pipelines
- **Learning**: Insights from experiments
- **Strategic**: Prior bet must validate market
- **Resource**: Team availability
### Learning-Based Sequencing
**Pattern**: Small experiment (H1) → Validate → Large bet (H2) → Scale
**Example**: H1: 2-week prototype ($5K) | Exit if CTR <5% | Scale: H2: Full build ($500K) if CTR >10%
---
## 5. Exit & Scale Criteria
### North Star Metric Thresholds
**Example**: North Star = WAU
- Exit: If WAU lift <5% after 60 days, kill
- Scale: If WAU lift >15%, expand to all users
### Staged Funding
**Stage 1** (Seed): $50K → Prototype, 100 users, 20% engagement
- Exit if <20%, fund $200K for alpha if ≥20%
**Stage 2** (Series A): $200K → Alpha, 1000 users, 10% conversion
- Exit if <10%, fund $1M for full build if ≥10%
### Kill Criteria Examples
- **Time**: "If not validated in 90 days, kill"
- **Metric**: "If adoption <5%, kill"
- **Cost**: "If CAC >$100, kill"
- **Strategic**: "If competitor launches first, reassess"
### Scale Criteria Examples
- **Adoption**: "If >20% adopt in 30 days, expand"
- **Engagement**: "If usage >3x baseline, add features"
- **Revenue**: "If ARR >$100K, hire team"
- **Efficiency**: "If LTV/CAC >5, increase budget 3x"
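Once the thresholds are written down, the monthly decision can be made mechanical; a sketch (metric names, comparators, and readings are illustrative):
```python
import operator

OPS = {"<": operator.lt, ">": operator.gt, "<=": operator.le, ">=": operator.ge}


def decide(readings, exit_rules, scale_rules):
    """Return 'exit', 'scale', or 'persevere' by comparing readings to thresholds.
    Rules map metric -> (comparator, threshold); multiple rules are AND'ed,
    so OR'ed exit criteria need separate calls or a small adaptation."""
    def met(rules):
        return all(OPS[op](readings[m], thr) for m, (op, thr) in rules.items())

    if exit_rules and met(exit_rules):
        return "exit"
    if scale_rules and met(scale_rules):
        return "scale"
    return "persevere"


# "Exit if adoption <5% after 60 days; scale if engagement >50% and NPS >70"
readings = {"adoption": 0.08, "engagement": 0.62, "nps": 74}
print(decide(readings,
             exit_rules={"adoption": ("<", 0.05)},
             scale_rules={"engagement": (">", 0.50), "nps": (">", 70)}))  # scale
```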
---
## 6. Portfolio Review
### Review Cadence
- **H1**: Monthly (check progress, blockers, kill/pivot/scale)
- **H2**: Quarterly (ready to start? dependencies? promote to H1 or push to H3?)
- **H3**: Semi-annually (still strategic? market shifts? add/kill)
### Kill / Pivot / Persevere / Scale
**For each bet**:
- **Kill**: Criteria not met, no path to success
- **Pivot**: Partially working, adjust approach
- **Persevere**: On track, continue
- **Scale**: Exceeding expectations, double-down
### Portfolio Health Metrics
**Velocity**: Bets shipped/quarter (target: 5-10)
**Win Rate**: % meeting scale criteria (target: 20-40%), % exited (target: 10-30%)
**Impact**: Portfolio contribution to North Star, ROI (target: >3x)
**Balance**: Risk 70/20/10, Horizon 50/30/20
**Red flags**: Win <10% (too risky), Win >80% (too conservative), Exit <5% (not killing), Exit >50% (too risky)
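These metrics reduce to a tally over each quarter's bet statuses; a sketch (the status labels and the denominator convention are assumptions, the red-flag thresholds mirror the ranges above):
```python
def portfolio_health(bets):
    """Velocity, win rate, and exit rate for one quarter's bets, plus red flags.
    Denominator is all bets reviewed this quarter (an assumption; pick your own convention)."""
    n = len(bets) or 1
    velocity = sum(b["status"] in ("shipped", "scaled") for b in bets)
    win_rate = sum(b["status"] == "scaled" for b in bets) / n
    exit_rate = sum(b["status"] == "exited" for b in bets) / n

    flags = []
    if win_rate < 0.10:
        flags.append("win rate <10%: too risky")
    if win_rate > 0.80:
        flags.append("win rate >80%: too conservative")
    if exit_rate < 0.05:
        flags.append("exit rate <5%: not killing losers")
    if exit_rate > 0.50:
        flags.append("exit rate >50%: too risky")
    return {"velocity": velocity, "win_rate": win_rate, "exit_rate": exit_rate, "flags": flags}


quarter = [{"status": s} for s in ["scaled", "shipped", "shipped", "exited", "in_progress"]]
print(portfolio_health(quarter))  # velocity 3, win_rate 0.2, exit_rate 0.2, no flags
```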
---
## 7. Anti-Patterns & Fixes
### #1: Everything High Priority
**Symptom**: All must-have, no trade-offs
**Fix**: Force-rank (only top 3 high), MoSCoW (20% must, 30% should, 30% could, 20% won't), capacity-constrain
### #2: No Exit Criteria
**Symptom**: Bets continue indefinitely, zombie projects
**Fix**: Set criteria upfront, review monthly, celebrate killing
### #3: All Bets in H1
**Symptom**: Wish list, unrealistic
**Fix**: Capacity-constrain H1, move excess to H2/H3, set expectations
### #4: No H3 Pipeline
**Symptom**: Only H1/H2, no future exploration
**Fix**: Reserve 10-20% for H3, run experiments, refresh quarterly
### #5: All Core, No Transformational
**Symptom**: 100% incremental
**Fix**: Mandate 10% transformational, innovation sprints, measure % revenue from <3yr products (target 20%+)
### #6: Dependencies Ignored
**Symptom**: H2 depends on H1 infrastructure not prioritized
**Fix**: Map dependencies, prioritize blockers, review critical path
### #7: No Review Discipline
**Symptom**: Roadmap created once, never updated
**Fix**: Monthly H1, quarterly portfolio review, version control
### #8: Metrics-Free Bets
**Symptom**: No success metrics, unclear if worked
**Fix**: Require metrics per bet, instrument before ship, review post-launch
### #9: Over-Optimistic Impact
**Symptom**: Every bet "10x potential"
**Fix**: Use baselines, benchmark, risk-adjust (assume 50% success)
### #10: No Portfolio Balance
**Symptom**: All small (busy work) or all large (nothing ships)
**Fix**: Mix sizes (50% S, 30% M, 15% L, 5% XL), cycles (fast/medium/slow), risk (70/20/10)
---
## Quick Reference
### When to Use Each Framework
**Horizon Planning**: McKinsey (classic), Now-Next-Later (adaptive), Dual-Track (continuous)
**Bet Sizing**: RICE (quantitative), ICE (quick), Effort/Impact (visual), Kano (user satisfaction)
**Balancing**: 70-20-10 (risk), Risk-Return (diversification), Barbell (extremes), Pacing (cycles)
**Sequencing**: CPM (critical path), Dependency Matrix (complex), Learning-Based (de-risk)
**Criteria**: North Star (aligned), Staged Funding (VC model), Time/Metric/Cost/Strategic (varied)
**Review**: Monthly/Quarterly/Semi-annual (by horizon), Kill/Pivot/Persevere/Scale (framework)
### Common Patterns
**Product**: H1: Quick wins + strategic features | H2: Major features + platform | H3: Exploratory | 60% incremental, 30% substantial, 10% breakthrough
**Tech**: H1: Stability + migration start | H2: Complete migration + improvements | H3: Next-gen research | 50% maintain, 30% improve, 20% transform
**Innovation**: H1: Scale validated + new tests | H2: Strategic bets + experiments | H3: Moonshots | 70% core, 20% adjacent, 10% transformational
**Marketing**: H1: Optimize proven + test new | H2: Scale winners + brand | H3: Positioning + market entry | 70% performance, 20% growth, 10% brand
### Success Criteria
✓ Strategic theme clear & measurable
✓ Bets sized (S/M/L/XL) & impact quantified (1x/3x/10x)
✓ Sequenced across H1/H2/H3 with dependencies mapped
✓ Exit & scale criteria defined per bet
✓ Portfolio balanced (risk, horizon, size)
✓ Capacity feasible (effort ≤ capacity × 0.8)
✓ Impact ladders to theme (risk-adjusted)
✓ Review cadence established
### Red Flags
❌ No theme → wish list
❌ All "Large" → no prioritization
❌ No exit criteria → zombies
❌ Imbalanced (all core or all moonshots)
❌ Dependencies ignored → blocking
❌ Overcommitted (>80% capacity)
❌ Impact below goal
❌ No review → stale roadmap

@@ -0,0 +1,288 @@
# Portfolio Roadmapping Bets Template
## Table of Contents
1. [Workflow](#workflow)
2. [Portfolio Roadmap Template](#portfolio-roadmap-template)
3. [Bet Template](#bet-template)
4. [Guidance for Each Section](#guidance-for-each-section)
5. [Quick Patterns](#quick-patterns)
6. [Quality Checklist](#quality-checklist)
## Workflow
Copy this checklist and track your progress:
```
Portfolio Roadmapping Bets Progress:
- [ ] Step 1: Define theme and constraints
- [ ] Step 2: Inventory and size bets
- [ ] Step 3: Sequence across horizons
- [ ] Step 4: Set exit and scale criteria
- [ ] Step 5: Balance and validate portfolio
```
**Step 1: Define theme and constraints**
Fill out [Portfolio Context](#1-portfolio-context) section with strategic theme, time horizon definitions, resource constraints, and portfolio balance targets.
**Step 2: Inventory and size bets**
List all candidate initiatives using [Bet Template](#bet-template). Size each by effort (S/M/L/XL) and impact (1x/3x/10x). See [Guidance: Sizing Bets](#sizing-bets) for examples.
**Step 3: Sequence across horizons**
Assign bets to H1/H2/H3 based on dependencies, capacity, and strategic timing. Fill out the [Horizons (H1/H2/H3)](#3-5-horizons-h1h2h3) section for each horizon.
**Step 4: Set exit and scale criteria**
For each bet, define success (scale) and failure (exit) criteria with metrics and timelines. See [Guidance: Exit & Scale Criteria](#exit--scale-criteria).
**Step 5: Balance and validate portfolio**
Complete [Portfolio Balance Summary](#6-portfolio-balance-summary). Check risk distribution, horizon mix, capacity feasibility. Use [Quality Checklist](#quality-checklist) to validate.
---
## Portfolio Roadmap Template
### 1. Portfolio Context
**Strategic Theme:**
[What overarching goal drives this portfolio? Example: "Grow marketplace GMV 3x to $300M in 18 months"]
**Time Horizons:**
- **H1 (Now)**: [Date range, e.g., Jan-Jun 2024] - [Description: shipping, high confidence]
- **H2 (Next)**: [Date range, e.g., Jul-Dec 2024] - [Description: planning/development, medium confidence]
- **H3 (Later)**: [Date range, e.g., Jan-Jun 2025+] - [Description: exploration/research, lower confidence]
**Resource Constraints:**
- **Budget**: $[X] total available
- **People**: [Y engineers, Z designers, etc. - specify capacity by function]
- **Time**: [Key deadlines: board meeting, fiscal year end, competitive launch, etc.]
**Portfolio Balance Targets:**
- **By Risk**: [Example: 70% core / 20% adjacent / 10% transformational]
- **By Horizon**: [Example: 50% H1 / 30% H2 / 20% H3]
- **By Type**: [Example: 60% product features / 30% platform / 10% R&D]
**Success Metrics:**
[How will we measure portfolio success? Example: "GMV growth rate, seller retention, buyer NPS"]
---
### 2. Bet Inventory
**Total Bets**: [Count across all horizons]
**Effort Distribution**:
- Small (1-2 weeks): [Count]
- Medium (1-3 months): [Count]
- Large (3-6 months): [Count]
- X-Large (6-12+ months): [Count]
**Impact Distribution**:
- 1x (Incremental): [Count]
- 3x (Substantial): [Count]
- 10x (Breakthrough): [Count]
**Type Distribution**:
- Core (optimize existing): [Count, %]
- Adjacent (extend to new): [Count, %]
- Transformational (new paradigm): [Count, %]
---
### 3-5. Horizons (H1/H2/H3)
For each horizon, fill out:
**Theme for H[X]**: [Focus for this horizon]
**Capacity Available**: [Person-months by function]
**Bets**: Use [Bet Template](#bet-template) below for each bet
**Summary**:
- Total Bets: [Count]
- Total Effort: [Sum] vs Capacity: [X] ✓ or ⚠
- Expected Impact: [All succeed vs risk-adjusted]
- Dependencies: [Critical path or blockers]
---
### 6. Portfolio Balance Summary
**Risk Distribution:**
| Type | Count | % of Bets | % of Effort |
|------|-------|-----------|-------------|
| Core | [X] | [Y%] | [Z%] |
| Adjacent | [X] | [Y%] | [Z%] |
| Transformational | [X] | [Y%] | [Z%] |
| **Total** | [Sum] | 100% | 100% |
**Target**: [e.g., 70% core / 20% adjacent / 10% transformational]
**Assessment**: ✓ Balanced / ⚠ Too conservative / ⚠ Too aggressive
**Horizon Distribution:**
| Horizon | Count | % of Bets | % of Effort |
|---------|-------|-----------|-------------|
| H1 (Now) | [X] | [Y%] | [Z%] |
| H2 (Next) | [X] | [Y%] | [Z%] |
| H3 (Later) | [X] | [Y%] | [Z%] |
| **Total** | [Sum] | 100% | 100% |
**Target**: [e.g., 50% H1 / 30% H2 / 20% H3]
**Assessment**: ✓ Balanced / ⚠ Too short-term / ⚠ Too long-term
**Capacity Feasibility:**
- **H1**: [Total effort] / [Capacity] = [X%] utilization (target: ≤80% for 20% slack)
- **H2**: [Total effort] / [Capacity] = [X%] utilization
- **H3**: [Total effort] / [Capacity] = [X%] utilization (rough estimate)
**Assessment**: ✓ Feasible / ⚠ Overcommitted / ⚠ Underutilized
**Impact Ambition:**
- **Strategic Theme Goal**: [e.g., "3x revenue = $300M"]
- **Portfolio Total Impact (all succeed)**: [Sum of all bet impacts, e.g., "4.5x revenue"]
- **Portfolio Expected Impact (risk-adjusted)**: [Assume 50% success rate, e.g., "2.25x revenue"]
**Assessment**: ✓ Ambitious enough / ⚠ Below goal / ⚠ Unrealistic
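Both checks above (utilization ≤ 80% and risk-adjusted impact vs. the goal) are quick arithmetic; a sketch using the same illustrative numbers as the example placeholders:
```python
def utilization(total_effort_pm: float, capacity_pm: float) -> tuple:
    """Utilization = effort / capacity; feasible at <=0.80, which preserves 20% slack."""
    u = total_effort_pm / capacity_pm
    return round(u, 2), u <= 0.80


def expected_impact(bet_impacts, success_rate=0.5):
    """Risk-adjusted portfolio impact: sum of per-bet impacts scaled by an assumed success rate."""
    return sum(bet_impacts) * success_rate


print(utilization(total_effort_pm=38, capacity_pm=50))     # (0.76, True)
goal = 3.0                                                  # strategic theme: "3x revenue"
exp = expected_impact([1.5, 2.0, 1.0], success_rate=0.5)    # 4.5x if all succeed -> 2.25x expected
print("ambitious enough" if exp >= goal else "below goal: add bets or revise the target")
```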
---
### 7. Dependencies & Review Plan
**Critical Path**: [Must-have sequence, e.g., "Bet A → Bet B → Bet C"]
**Dependency Map**: [Visual or list of dependencies]
**Risk**: [What if key dependencies fail?]
**Review Cadence**: [Monthly H1, quarterly H2, semi-annual H3]
**Review Criteria**: Exit/scale met? Dependencies on track? Re-scope needed?
**Iteration**: Kill / Pivot / Persevere / Scale framework
**Next Review**: [Date]
---
## Bet Template
Copy this for each bet:
```markdown
### Bet [Horizon]-[Number]: [Bet Name]
**Owner**: [Name/team]
**Type**: [Core / Adjacent / Transformational]
**Effort**: [S/M/L/XL] - [Estimate]
**Impact**: [1x/3x/10x] - [Metric]
**Dependencies**: [Prerequisites]
**Rationale**: [Why this? Why now?]
**Exit Criteria**: [Kill if...]
**Scale Criteria**: [Double-down if...]
**Status**: [Not Started / In Progress / Shipped / Exited]
```
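If bets live as structured records (a tracker export or a YAML/JSON file) rather than prose, the per-bet items in the Quality Checklist can be enforced automatically; a minimal sketch with field names mirroring the template (everything else is an assumption):
```python
REQUIRED = ("owner", "type", "effort", "impact", "dependencies",
            "rationale", "exit_criteria", "scale_criteria", "status")
ALLOWED = {"type": {"Core", "Adjacent", "Transformational"},
           "effort": {"S", "M", "L", "XL"},
           "impact": {"1x", "3x", "10x"}}


def validate_bet(bet: dict) -> list:
    """Return a list of problems with a bet record; an empty list means it passes."""
    problems = [f"missing: {field}" for field in REQUIRED if not bet.get(field)]
    for field, allowed in ALLOWED.items():
        if bet.get(field) and bet[field] not in allowed:
            problems.append(f"{field} must be one of {sorted(allowed)}")
    return problems


bet = {"owner": "Growth team", "type": "Adjacent", "effort": "L", "impact": "3x",
       "dependencies": "Payments API (H1-2)", "rationale": "Expands into the SMB segment",
       "exit_criteria": "<1000 signups in 90 days", "scale_criteria": ">5000 signups and churn <5%",
       "status": "Not Started"}
print(validate_bet(bet))  # []
```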
---
## Guidance for Each Section
### Portfolio Context
- **Theme**: Specific, measurable, time-bound (e.g., "Grow revenue 3x in 18mo")
- **Horizons**: H1 >80% confidence, H2 ~60%, H3 ~40%
- **Constraints**: Realistic capacity, account for unknowns
### Sizing Bets
**Effort**: S (1-2wk), M (1-3mo), L (3-6mo), XL (6-12+mo)
**Impact**: 1x (10-50% lift), 3x (2-5x lift), 10x (breakthrough)
### Exit & Scale Criteria
**Exit**: Time ("90 days"), Metric ("<5%"), Cost (">$100K"), Strategic ("market shifts")
**Scale**: Adoption (">20%"), Engagement (">3x"), Revenue (">$100K"), Efficiency ("LTV/CAC >5")
**Example**: Bet: Premium tier | Exit: <1000 signups in 90d OR churn >20% | Scale: >5000 signups AND churn <5%
### Dependencies
Types: Technical, Learning, Strategic, Resource
Document clearly: "Depends on Bet H1-2 (payment API) by March"
### Portfolio Balance
**Risk**: 70% Core / 20% Adjacent / 10% Transformational (adjust: Startup 50/30/20, Enterprise 80/15/5)
**Horizon**: ~50-60% H1, ~25-30% H2, ~15-20% H3
---
## Quick Patterns
### Product Portfolio (Features)
```
H1: 5 quick wins (S/M bets) + 2 strategic features (L bets)
H2: 3 major features (L bets) + 1 platform upgrade (XL)
H3: 1-2 exploratory bets (L/XL)
Balance: 60% incremental, 30% substantial, 10% breakthrough
```
### Technology Portfolio (Platform)
```
H1: Stability & performance (4 M bets) + 1 migration start (L)
H2: Complete migration (L) + 2 platform improvements (M)
H3: Next-gen architecture research (XL)
Balance: 50% maintain, 30% improve, 20% transform
```
### Innovation Portfolio (R&D)
```
H1: Scale 2 validated experiments (M) + Run 3 new tests (S)
H2: 2 strategic bets (L) + 4 experiments (S)
H3: 2 moonshots (XL)
Balance: 70% core business, 20% adjacent, 10% transformational
```
### Marketing Portfolio (Campaigns)
```
H1: Optimize 3 proven channels (M) + Test 2 new channels (S)
H2: Scale winners (M/L) + Brand campaign (L)
H3: New positioning & market entry (L)
Balance: 70% performance, 20% growth tests, 10% brand
```
---
## Quality Checklist
Before finalizing your portfolio roadmap, verify:
**Portfolio Context**:
- [ ] Strategic theme is specific and measurable (includes target number + timeline)
- [ ] Time horizons clearly defined with date ranges
- [ ] Resource constraints realistic (budget, people, time)
- [ ] Portfolio balance targets explicit (risk mix, horizon mix)
**Bets**:
- [ ] Each bet has owner, type, effort, impact, dependencies
- [ ] Effort sized as S/M/L/XL with rough estimates
- [ ] Impact quantified (1x/3x/10x) with metric examples
- [ ] Exit criteria defined (metric + timeline to kill)
- [ ] Scale criteria defined (metric + timeline to double-down)
- [ ] Rationale explains why this bet, why now, why this horizon
**Sequencing**:
- [ ] Dependencies mapped (technical, learning, strategic, resource)
- [ ] Critical path identified
- [ ] Bets assigned to H1/H2/H3 with rationale
- [ ] No circular dependencies
**Balance**:
- [ ] Risk distribution checked (core/adjacent/transformational)
- [ ] Horizon distribution checked (H1/H2/H3)
- [ ] Capacity feasibility validated (effort ≤ capacity × 0.8)
- [ ] Impact ambition validated (portfolio total ≥ strategic goal with risk adjustment)
**Review Plan**:
- [ ] Review cadence defined (monthly, quarterly)
- [ ] Review criteria specified (what to check)
- [ ] Iteration process clear (kill/scale/pivot/add)
- [ ] Next review date set
**Overall**:
- [ ] Portfolio ladders up to strategic theme
- [ ] No overcommitment (realistic about capacity)
- [ ] No under-ambition (risk-adjusted portfolio impact ≥ strategic goal)
- [ ] Stakeholders aligned on priorities and trade-offs