Initial commit

Zhongwei Li
2025-11-30 08:38:26 +08:00
commit 41d9f6b189
304 changed files with 98322 additions and 0 deletions

@@ -0,0 +1,247 @@
---
name: hypotheticals-counterfactuals
description: Use when exploring alternative scenarios, testing assumptions through "what if" questions, understanding causal relationships, conducting pre-mortem analysis, stress testing decisions, or when user mentions counterfactuals, hypothetical scenarios, thought experiments, alternative futures, what-if analysis, or needs to challenge assumptions and explore possibilities.
---
# Hypotheticals and Counterfactuals
## Table of Contents
- [Purpose](#purpose)
- [When to Use](#when-to-use)
- [What Is It?](#what-is-it)
- [Workflow](#workflow)
- [Common Patterns](#common-patterns)
- [Guardrails](#guardrails)
- [Quick Reference](#quick-reference)
## Purpose
Hypotheticals and Counterfactuals applies "what if" thinking to explore alternative scenarios, test assumptions, understand causal relationships, and prepare for uncertainty. The skill guides you through counterfactual reasoning (what would have happened differently?), scenario exploration (what could happen?), pre-mortem analysis (imagine failure, identify causes), and stress testing decisions against alternative futures.
## When to Use
Use this skill when:
- **Testing assumptions**: Challenge underlying beliefs by asking "what if this assumption is wrong?"
- **Pre-mortem analysis**: Imagine project failure, identify potential causes before they occur
- **Causal inference**: Understand "what caused X?" by asking "would X have happened without Y?"
- **Scenario planning**: Explore alternative futures (best case, worst case, surprising case)
- **Risk identification**: Uncover hidden risks through "what could go wrong?" analysis
- **Strategic planning**: Test strategy robustness across different market conditions
- **Learning from failures**: Counterfactual analysis "what if we had done X instead?"
- **Decision stress testing**: Check if decision holds across optimistic/pessimistic scenarios
- **Innovation exploration**: "What if we removed constraint X?" to unlock new possibilities
- **Historical analysis**: "What would have happened if..." to understand key factors
Trigger phrases: "what if", "counterfactual", "hypothetical scenario", "thought experiment", "alternative future", "pre-mortem", "stress test", "what could go wrong", "imagine if", "suppose that"
## What Is It?
**Hypotheticals and Counterfactuals** combines forward-looking scenario exploration (hypotheticals) with backward-looking alternative history analysis (counterfactuals):
**Core components**:
- **Counterfactuals**: "What would have happened if X had been different?" Understand causality by imagining alternatives.
- **Pre-mortem**: Imagine future failure, work backward to identify causes. Inversion of post-mortem.
- **Scenario Planning**: Explore multiple plausible futures (2×2 matrix, three scenarios, cone of uncertainty).
- **Stress Testing**: Test decisions/plans against extreme scenarios (best/worst case, black swans).
- **Thought Experiments**: Explore ideas through imagined scenarios (Einstein's elevator, trolley problem).
- **Assumption Reversal**: "What if our key assumption is backwards?" to challenge mental models.
**Quick example:**
**Scenario**: Startup deciding whether to pivot from B2B to B2C.
**Counterfactual Analysis** (Learning from past):
- **Actual**: We focused on B2B, growth slow (5% MoM)
- **Counterfactual**: "What if we had gone B2C from start?"
- Hypothesis: Faster growth (viral potential) but higher CAC, lower LTV
- Evidence: Competitor X did B2C, grew 20% MoM but 60% churn
- Insight: B2C growth faster BUT unit economics worse. B2B slower but sustainable.
**Pre-Mortem** (Preparing for future):
- Imagine: It's 1 year from now, B2C pivot failed
- Why did it fail?
1. CAC higher than projected (Facebook ads too expensive)
2. Churn higher than B2B (no contracts, easy to switch)
3. Team lacked consumer product expertise
4. Existing B2B customers churned (felt abandoned)
- **Action**: Before pivoting, test assumptions with small B2C experiment. Don't abandon B2B entirely.
**Outcome**: Decision to run parallel B2C pilot while maintaining B2B, de-risking pivot through counterfactual insights and pre-mortem preparation.
**Core benefits**:
- **Causal clarity**: Understand what drives outcomes by imagining alternatives
- **Risk identification**: Pre-mortem uncovers failure modes before they happen
- **Assumption testing**: Stress test beliefs against extreme scenarios
- **Strategic flexibility**: Prepare for multiple futures, not just one forecast
- **Learning enhancement**: Counterfactuals reveal what mattered vs. what didn't
## Workflow
Copy this checklist and track your progress:
```
Hypotheticals & Counterfactuals Progress:
- [ ] Step 1: Define the focal question
- [ ] Step 2: Generate counterfactuals or scenarios
- [ ] Step 3: Develop each scenario
- [ ] Step 4: Identify implications and insights
- [ ] Step 5: Extract actions or decisions
- [ ] Step 6: Monitor and update
```
**Step 1: Define the focal question**
What are you exploring? Past decision (counterfactual)? Future possibility (hypothetical)? Assumption to test? See [resources/template.md](resources/template.md#focal-question-template).
**Step 2: Generate counterfactuals or scenarios**
Counterfactual: Change one key factor, ask "what would have happened?" Hypothetical: Imagine future scenarios (2-4 plausible alternatives). See [resources/template.md](resources/template.md#scenario-generation-template) and [resources/methodology.md](resources/methodology.md#1-counterfactual-reasoning).
**Step 3: Develop each scenario**
Describe what's different, trace implications, identify key assumptions. Make it vivid and concrete. See [resources/template.md](resources/template.md#scenario-development-template) and [resources/methodology.md](resources/methodology.md#2-scenario-planning-techniques).
**Step 4: Identify implications and insights**
What does each scenario teach? What assumptions are tested? What risks revealed? See [resources/methodology.md](resources/methodology.md#3-extracting-insights-from-scenarios).
**Step 5: Extract actions or decisions**
What should we do differently based on these scenarios? Hedge against downside? Prepare for upside? See [resources/template.md](resources/template.md#action-extraction-template).
**Step 6: Monitor and update**
Track which scenario is unfolding. Update plans as reality diverges from expectations. See [resources/methodology.md](resources/methodology.md#4-monitoring-and-adaptation).
Validate using [resources/evaluators/rubric_hypotheticals_counterfactuals.json](resources/evaluators/rubric_hypotheticals_counterfactuals.json). **Minimum standard**: Average score ≥ 3.5.
## Common Patterns
**Pattern 1: Pre-Mortem (Prospective Hindsight)**
- **Format**: Imagine it's a future date and the project has failed. List the reasons why.
- **Best for**: Project planning, risk identification before launch
- **Process**: (1) Set future date, (2) Assume failure, (3) List causes, (4) Prioritize top 3-5 risks, (5) Mitigate now
- **When**: Before major launch, strategic decision, resource commitment
- **Output**: Risk list with mitigations
**Pattern 2: Counterfactual Causal Analysis**
- **Format**: "What would have happened if we had done X instead of Y?"
- **Best for**: Learning from past decisions, understanding what mattered
- **Process**: (1) Identify decision, (2) Imagine alternative, (3) Trace different outcome, (4) Identify causal factor
- **When**: Post-mortem, retrospective, learning from success/failure
- **Output**: Causal insight (X caused Y because...)
**Pattern 3: Three Scenarios (Optimistic, Baseline, Pessimistic)**
- **Format**: Describe best case, expected case, worst case futures
- **Best for**: Strategic planning, forecasting, resource allocation
- **Process**: (1) Define time horizon, (2) Describe three futures, (3) Assign probabilities, (4) Plan for each
- **When**: Annual planning, market uncertainty, investment decisions
- **Output**: Three detailed scenarios with implications
**Pattern 4: 2×2 Scenario Matrix**
- **Format**: Two key uncertainties create four quadrants (scenarios)
- **Best for**: Exploring interaction of two critical unknowns
- **Process**: (1) Identify two key uncertainties, (2) Define extremes, (3) Develop four scenarios, (4) Name each world
- **When**: Strategic planning with multiple drivers of uncertainty
- **Output**: Four distinct future worlds with narratives
**Pattern 5: Assumption Reversal**
- **Format**: "What if our key assumption is backwards?"
- **Best for**: Challenging mental models, unlocking innovation
- **Process**: (1) List key assumptions, (2) Reverse each, (3) Explore implications, (4) Identify if reversal plausible
- **When**: Stuck in conventional thinking, need breakthrough
- **Output**: New perspectives, potential pivots
**Pattern 6: Stress Test (Extreme Scenarios)**
- **Format**: Push key variables to extremes, test if decision holds
- **Best for**: Risk management, decision robustness testing
- **Process**: (1) Identify decision, (2) List key variables, (3) Set to extremes, (4) Check if decision still valid
- **When**: High-stakes decisions, need to ensure resilience
- **Output**: Decision validation or hedges needed
## Guardrails
**Critical requirements:**
1. **Plausibility constraint**: Scenarios must be possible, not just imaginable. "What if gravity reversed?" is not a useful counterfactual. Stay within the bounds of plausibility given current knowledge.
2. **Minimal rewrite principle** (counterfactuals): Change as little as possible. "What if we had chosen Y instead of X?" not "What if we had chosen Y and market doubled and competitor failed?" Isolate causal factor.
3. **Avoid hindsight bias**: Pre-mortem assumes failure, but don't just list things that went wrong in similar past failures. Generate new failure modes specific to this context.
4. **Specify mechanism**: Don't just state outcome ("sales would be higher"), explain HOW ("sales would be higher because lower price → higher conversion → more customers despite lower margin").
5. **Assign probabilities** (scenarios): Don't treat all scenarios as equally likely. Estimate rough probabilities (e.g., 60% baseline, 25% pessimistic, 15% optimistic). Avoids equal-weight fallacy.
6. **Time horizon clarity**: Specify WHEN in future. "Product fails" is vague. "In 6 months, adoption <1000 users" is concrete. Enables tracking.
7. **Extract actions, not just stories**: Scenarios are useless without implications. Always end with "so what should we do?" Prepare, hedge, pivot, or double-down.
8. **Update scenarios**: Reality evolves. Quarterly review: which scenario is unfolding? Update probabilities and plans accordingly.
**Common pitfalls:**
- **Confusing counterfactual with fantasy**: "What if we had $100M funding from start?" vs. realistic "What if we had raised $2M seed instead of $1M?"
- **Too many scenarios**: 10 scenarios = analysis paralysis. Stick to 2-4 meaningful, distinct futures.
- **Scenarios too similar**: Three scenarios that differ only in magnitude (10% growth, 15% growth, 20% growth). Need qualitatively different worlds.
- **No causal mechanism**: "Sales would be 2× higher" without explaining why. Must specify how the change leads to the outcome.
- **Hindsight bias in pre-mortem**: Just listing past failures. Need to imagine new, context-specific risks.
- **Ignoring low-probability, high-impact risks**: "Black swan won't happen" until it does. Include tail risks.
## Quick Reference
**Counterfactual vs. Hypothetical:**
| Type | Direction | Question | Purpose | Example |
|------|-----------|----------|---------|---------|
| **Counterfactual** | Backward (past) | "What would have happened if...?" | Understand causality, learn from past | "What if we had launched in EU first?" |
| **Hypothetical** | Forward (future) | "What could happen if...?" | Explore futures, prepare for uncertainty | "What if competitor launches free tier?" |
**Scenario types:**
| Type | # Scenarios | Structure | Best For |
|------|-------------|-----------|----------|
| **Three scenarios** | 3 | Optimistic, Baseline, Pessimistic | General forecasting, strategic planning |
| **2×2 matrix** | 4 | Two uncertainties create quadrants | Exploring interaction of two drivers |
| **Cone of uncertainty** | Continuous | Range widens over time | Long-term planning (5-10 years) |
| **Pre-mortem** | 1 | Imagine failure, list causes | Risk identification before launch |
| **Stress test** | 2-4 | Extreme scenarios (best/worst) | Decision robustness testing |
**Pre-mortem process** (6 steps):
1. **Set future date**: "It's 6 months from now..."
2. **Assume failure**: "...the project has failed completely."
3. **Individual brainstorm**: Each person writes 3-5 reasons (5 min, silent)
4. **Share and consolidate**: Round-robin sharing, group similar items
5. **Vote on top risks**: Dot voting or force-rank top 5 causes
6. **Mitigate now**: For each top risk, assign owner and mitigation action
**2×2 Scenario Matrix** (example):
**Uncertainties**: (1) Market adoption rate, (2) Regulatory environment
| | Slow Adoption | Fast Adoption |
|---------------------|---------------|---------------|
| **Strict Regulation** | "Constrained Growth" | "Regulated Scale" |
| **Loose Regulation** | "Patient Build" | "Wild West Growth" |
**Assumption reversal questions:**
- "What if our biggest advantage is actually a liability?"
- "What if the problem we're solving isn't the real problem?"
- "What if our target customer is wrong?"
- "What if cheaper/slower is better than premium/fast?"
- "What if we're too early/too late, not right on time?"
**Inputs required:**
- **Focal decision or event**: What are you analyzing?
- **Key uncertainties**: What factors most shape outcomes?
- **Time horizon**: How far into future/past?
- **Constraints**: What must remain fixed vs. what can vary?
- **Stakeholders**: Who should contribute scenarios?
**Outputs produced:**
- `counterfactual-analysis.md`: Alternative history analysis with causal insights
- `pre-mortem-risks.md`: List of potential failure modes and mitigations
- `scenarios.md`: 2-4 future scenarios with narratives and implications
- `action-plan.md`: Decisions and preparations based on scenario insights

resources/evaluators/rubric_hypotheticals_counterfactuals.json

@@ -0,0 +1,211 @@
{
"criteria": [
{
"name": "Scenario Plausibility",
"description": "Scenarios are possible given current knowledge, not fantasy. Counterfactuals were realistic alternatives at decision time.",
"scale": {
"1": "Implausible scenarios (magic, impossible foreknowledge). Counterfactuals couldn't have been chosen at the time.",
"3": "Mostly plausible but some unrealistic assumptions. Counterfactuals stretch believability.",
"5": "All scenarios plausible given what was/is known. Counterfactuals were genuine alternatives available at decision time."
}
},
{
"name": "Minimal Rewrite Principle (Counterfactuals)",
"description": "Counterfactuals change as little as possible to isolate causal factor. Not multiple changes bundled together.",
"scale": {
"1": "Many factors changed simultaneously. Can't tell which caused different outcome. 'What if X AND Y AND Z...'",
"3": "Some attempt at isolation but still multiple changes. Primary factor identified but confounded.",
"5": "Perfect isolation: single factor changed, all else held constant. Causal factor clearly identified."
}
},
{
"name": "Causal Mechanism Specification",
"description": "Explains HOW change leads to different outcome. Not just stating result but tracing causal chain.",
"scale": {
"1": "No mechanism specified. Just outcome stated ('sales would be higher') without explanation.",
"3": "Partial mechanism. Some causal steps identified but incomplete chain.",
"5": "Complete causal chain: initial change → immediate effect → secondary effects → final outcome. Each step explained."
}
},
{
"name": "Probability Calibration",
"description": "Scenarios assigned probabilities based on evidence, base rates, analogies. Not all weighted equally.",
"scale": {
"1": "No probabilities assigned, or all scenarios treated as equally likely. No base rate consideration.",
"3": "Rough probabilities assigned but weak justification. Some consideration of likelihood.",
"5": "Well-calibrated probabilities using base rates, analogies, expert judgment. Sum to 100%. Clear reasoning for each."
}
},
{
"name": "Pre-Mortem Rigor",
"description": "For pre-mortems: follows 6-step process, generates novel failure modes specific to context, not just generic risks.",
"scale": {
"1": "Generic risk list copied from elsewhere. Hindsight bias ('obvious' failures). No structured process.",
"3": "Some specific risks but mixed with generic ones. Process partially followed.",
"5": "Rigorous 6-step process: silent brainstorm, round-robin, voting, mitigations. Context-specific failure modes identified."
}
},
{
"name": "Action Extraction",
"description": "Clear extraction of common actions, hedges, options, and decision points from scenarios. Not just stories.",
"scale": {
"1": "Scenarios developed but no actions extracted. 'Interesting stories' with no operational implications.",
"3": "Some actions identified but incomplete. Missing hedges or options.",
"5": "Comprehensive action plan: common actions (all scenarios), hedges (downside), options (upside), decision triggers clearly specified."
}
},
{
"name": "Leading Indicator Quality",
"description": "Indicators are observable, early (6+ months advance), and actionable. Clear thresholds defined.",
"scale": {
"1": "No indicators, or lagging indicators (show scenario after it's happened). No thresholds.",
"3": "Some leading indicators but vague thresholds or not truly early signals.",
"5": "High-quality leading indicators: observable metrics, 6+ months advance notice, clear thresholds, trigger specific actions."
}
},
{
"name": "Scenario Diversity",
"description": "Scenarios are qualitatively different, not just magnitude variations. Cover meaningful range of futures.",
"scale": {
"1": "Scenarios differ only in magnitude (10% growth vs 15% vs 20%). Basically same story.",
"3": "Some qualitative differences but scenarios too similar or narrow range.",
"5": "Meaningfully different scenarios: qualitative distinctions, broad range captured, distinct strategic implications for each."
}
},
{
"name": "Bias Avoidance (Hindsight/Confirmation)",
"description": "Avoids hindsight bias in counterfactuals, confirmation bias in scenario selection, anchoring on current trends.",
"scale": {
"1": "Strong hindsight bias ('we should have known'). Only scenarios confirming current view. Anchored on status quo.",
"3": "Some bias awareness but incomplete mitigation. Mostly avoids obvious biases.",
"5": "Rigorous bias mitigation: re-inhabits decision context, considers disconfirming scenarios, challenges assumptions, uses base rates."
}
},
{
"name": "Monitoring and Adaptation Plan",
"description": "Defined monitoring cadence (quarterly), indicator tracking, scenario probability updates, adaptation triggers.",
"scale": {
"1": "No monitoring plan. Set-it-and-forget-it scenarios. No updates planned.",
"3": "Informal plan to review occasionally. No specific cadence or triggers.",
"5": "Detailed monitoring: quarterly reviews, indicator dashboard, probability updates, clear adaptation triggers and owner."
}
}
],
"guidance_by_type": {
"Strategic Planning (1-3 year horizon)": {
"target_score": 4.2,
"key_criteria": ["Scenario Diversity", "Probability Calibration", "Action Extraction"],
"common_pitfalls": ["Too narrow scenario range", "No hedges against downside", "Monitoring plan missing"],
"specific_guidance": "Use three-scenario framework (optimistic/baseline/pessimistic) or 2×2 matrix. Assign probabilities (optimistic 15-30%, baseline 40-60%, pessimistic 15-30%). Extract common actions that work across all scenarios, plus hedges for downside. Quarterly monitoring."
},
"Pre-Mortem (Project Risk Identification)": {
"target_score": 4.0,
"key_criteria": ["Pre-Mortem Rigor", "Action Extraction", "Bias Avoidance"],
"common_pitfalls": ["Generic risks (not context-specific)", "Hindsight bias ('obvious' failures)", "No mitigations assigned"],
"specific_guidance": "Follow 6-step process rigorously. Silent brainstorm 5-10 min to prevent groupthink. Generate context-specific failure modes. Vote on top 5-7 risks. Assign mitigation owner and deadline for each."
},
"Counterfactual Learning (Post-Decision Analysis)": {
"target_score": 3.8,
"key_criteria": ["Minimal Rewrite Principle", "Causal Mechanism Specification", "Bias Avoidance"],
"common_pitfalls": ["Changing multiple factors (can't isolate cause)", "No causal mechanism (just outcome)", "Hindsight bias ('knew it all along')"],
"specific_guidance": "Change single factor, hold all else constant. Trace complete causal chain (change → immediate effect → secondary effects → outcome). Re-inhabit decision context to avoid hindsight. Use base rates and analogies to estimate counterfactual probability."
},
"Stress Testing (Decision Robustness)": {
"target_score": 4.0,
"key_criteria": ["Scenario Diversity", "Causal Mechanism Specification", "Action Extraction"],
"common_pitfalls": ["Only optimistic/pessimistic (no black swan)", "No mechanism for how extremes occur", "Decision not actually tested"],
"specific_guidance": "Test decision against optimistic, pessimistic, AND black swan scenarios. Specify HOW extreme outcomes occur. Ask 'Does decision still hold?' for each scenario. Extract hedges to protect against downside extremes."
},
"Assumption Reversal (Innovation/Pivots)": {
"target_score": 3.5,
"key_criteria": ["Scenario Plausibility", "Action Extraction", "Bias Avoidance"],
"common_pitfalls": ["Reversed assumptions implausible", "Interesting but no experiments", "Confirmation bias (only reverse convenient assumptions)"],
"specific_guidance": "Reverse core assumptions ('customers want more features' → 'want fewer'). Test plausibility (could reversal be true?). Design small experiments to test reversal. Challenge assumptions that support current strategy, not just peripheral ones."
}
},
"guidance_by_complexity": {
"Simple (Routine Decisions, Short-Term)": {
"target_score": 3.5,
"focus_areas": ["Scenario Plausibility", "Causal Mechanism Specification", "Action Extraction"],
"acceptable_shortcuts": ["Informal probabilities", "Two scenarios instead of three", "Simple pre-mortem (no voting)"],
"specific_guidance": "Quick pre-mortem (30 min) or simple counterfactual analysis. Two scenarios (optimistic/pessimistic). Extract 2-3 key actions. Informal monitoring acceptable."
},
"Standard (Strategic Decisions, 1-2 year horizon)": {
"target_score": 4.0,
"focus_areas": ["Probability Calibration", "Scenario Diversity", "Leading Indicator Quality", "Monitoring and Adaptation"],
"acceptable_shortcuts": ["Three scenarios (not full 2×2 matrix)", "Quarterly vs monthly monitoring"],
"specific_guidance": "Three-scenario framework with probabilities. Extract common actions, hedges, options. Define 5-7 leading indicators. Quarterly scenario reviews and updates. Assign owners for monitoring."
},
"Complex (High Stakes, Multi-Year, High Uncertainty)": {
"target_score": 4.5,
"focus_areas": ["All criteria", "Rigorous validation", "Comprehensive monitoring"],
"acceptable_shortcuts": ["None - full rigor required"],
"specific_guidance": "Full 2×2 scenario matrix or cone of uncertainty. Rigorous probability calibration using base rates and expert judgment. Comprehensive pre-mortem with cross-functional team. Leading indicators with clear thresholds and decision triggers. Monthly monitoring, quarterly deep reviews. All mitigations assigned with owners and deadlines."
}
},
"common_failure_modes": [
{
"name": "Implausible Counterfactuals (Fantasy)",
"symptom": "Counterfactuals require magic, impossible foreknowledge, or weren't real options at decision time. 'What if we had known pandemic was coming?'",
"detection": "Ask: 'Could a reasonable decision-maker have chosen this alternative given information available then?' If no, implausible.",
"fix": "Restrict to alternatives actually available at decision time. Use 'what was on the table?' test. Avoid hindsight-dependent counterfactuals."
},
{
"name": "Multiple Changes (Can't Isolate Cause)",
"symptom": "Counterfactual changes many factors: 'What if we had raised $3M AND launched EU AND hired different CEO...' Can't tell which mattered.",
"detection": "Count changes. If >1 factor changed, causal isolation violated.",
"fix": "Minimal rewrite: change ONE factor, hold all else constant. Want to test funding? Change funding only. Want to test geography? Change geography only."
},
{
"name": "No Causal Mechanism",
"symptom": "Outcome stated without explanation. 'Sales would be 2× higher' but no WHY or HOW.",
"detection": "Ask 'How does change lead to outcome?' If answer vague or missing, no mechanism.",
"fix": "Trace causal chain: initial change → immediate effect → secondary effects → final outcome. Each step must be explained with logic or evidence."
},
{
"name": "Scenarios Too Similar",
"symptom": "Three scenarios differ only in magnitude (10% growth vs 15% vs 20%). Same story, different numbers.",
"detection": "Read scenarios. Do they describe qualitatively different worlds? If no, too similar.",
"fix": "Make scenarios qualitatively distinct. Different drivers, different strategic implications. Use 2×2 matrix to force diversity via two independent uncertainties."
},
{
"name": "No Probabilities Assigned",
"symptom": "All scenarios treated as equally likely, or no probabilities given. Implies 33% each for three scenarios regardless of plausibility.",
"detection": "Check if probabilities assigned and justified. If missing or all equal, red flag.",
"fix": "Assign probabilities using base rates, analogies, expert judgment. Baseline typically 40-60%, optimistic/pessimistic 15-30% each. Justify each estimate."
},
{
"name": "Hindsight Bias in Counterfactuals",
"symptom": "'Obviously we should have done X' - outcome seems inevitable in retrospect. Overconfidence counterfactual would have succeeded.",
"detection": "Ask: 'Was outcome predictable given information at decision time?' If reasoning depends on information learned after, hindsight bias.",
"fix": "Re-inhabit decision context: what was known/unknown then? What uncertainty existed? Acknowledge alternative could have failed too. Use base rates to calibrate confidence."
},
{
"name": "Generic Pre-Mortem Risks",
"symptom": "Pre-mortem lists generic risks ('ran out of money', 'competition', 'tech didn't work') not specific to this project.",
"detection": "Could these risks apply to any project? If yes, too generic.",
"fix": "Push for context-specific failure modes. What's unique about THIS project? What specific technical challenges? Which specific competitors? What particular market risks?"
},
{
"name": "Scenarios Without Actions",
"symptom": "Interesting stories developed but no operational implications. 'So what should we do?' question unanswered.",
"detection": "Read scenario analysis. Is there action plan with common actions, hedges, options? If no, incomplete.",
"fix": "Always end with action extraction: (1) Common actions (all scenarios), (2) Hedges (downside protection), (3) Options (upside preparation), (4) Leading indicators (monitoring)."
},
{
"name": "Lagging Indicators (Not Leading)",
"symptom": "Indicators show scenario after it's happened. 'Revenue collapse' indicates pessimistic scenario, but too late to act.",
"detection": "Ask: 'Does this indicator give 6+ months advance notice?' If no, it's lagging.",
"fix": "Find early signals: regulatory votes (before law passed), competitor funding rounds (before product launched), adoption rate trends (before market share shift). Leading indicators are predictive, not reactive."
},
{
"name": "No Monitoring Plan",
"symptom": "Scenarios developed, actions defined, then filed away. No one tracking which scenario unfolding or updating probabilities.",
"detection": "Ask: 'Who monitors? How often? What triggers update?' If no answers, no plan.",
"fix": "Define: (1) Owner responsible for monitoring, (2) Cadence (monthly/quarterly reviews), (3) Indicator dashboard, (4) Decision triggers ('If X crosses threshold Y, then action Z'), (5) Scenario probability update process."
}
],
"minimum_standard": 3.5,
"target_score": 4.0,
"excellence_threshold": 4.5
}

resources/methodology.md

@@ -0,0 +1,498 @@
# Hypotheticals and Counterfactuals: Advanced Methodology
This document provides advanced techniques for counterfactual reasoning, scenario planning, pre-mortem analysis, and extracting actionable insights from alternative futures.
## Table of Contents
1. [Counterfactual Reasoning](#1-counterfactual-reasoning)
2. [Scenario Planning Techniques](#2-scenario-planning-techniques)
3. [Extracting Insights from Scenarios](#3-extracting-insights-from-scenarios)
4. [Monitoring and Adaptation](#4-monitoring-and-adaptation)
5. [Advanced Topics](#5-advanced-topics)
---
## 1. Counterfactual Reasoning
### What Is Counterfactual Reasoning?
Counterfactual reasoning asks: **"What would have happened if X had been different?"** It's a form of causal inference through imagined alternatives.
**Core principle**: To understand causality, imagine the world with one factor changed and trace the consequences.
**Example**:
- **Actual**: Startup raised $5M Series A → burned through runway in 14 months → failed to reach profitability
- **Counterfactual**: "What if we had raised $3M instead?"
- Hypothesis: Smaller team (8 vs 15 people), lower burn, forced focus on revenue, reached profitability in 12 months
- Reasoning: Constraint forces discipline. Without $5M runway, couldn't afford large team. Would prioritize revenue over growth.
- Insight: Raising more money enabled premature scaling. Constraint would have been beneficial.
### The Minimal Rewrite Principle
When constructing counterfactuals, change **as little as possible** to isolate the causal factor.
**Bad counterfactual** (too many changes):
- "What if we had raised $3M AND competitor had failed AND market had doubled?"
- Problem: Can't tell which factor caused different outcome
**Good counterfactual** (minimal change):
- "What if we had raised $3M (all else equal)?"
- Isolates the funding amount as causal variable
**Technique**: Hold everything constant except the factor you're testing. This reveals whether that specific factor was causal.
### Constructing Plausible Counterfactuals
Not all "what ifs" are useful. Counterfactuals must be **plausible** given what was known/possible at the time.
**Plausible**: "What if we had launched in EU first instead of US?"
- This was a real option available at decision time
**Implausible**: "What if we had magically known the pandemic was coming?"
- Requires impossible foreknowledge
**Test for plausibility**: Could a reasonable decision-maker have chosen this alternative given information available at the time?
### Specifying Causal Mechanisms
Don't just state outcome; explain **HOW** the change leads to different result.
**Weak counterfactual**: "Sales would be 2× higher"
**Strong counterfactual**: "Sales would be 2× higher because lower price ($50 vs $100) → 4× higher conversion rate (20% vs 5%) → 4× the customers at half the revenue per customer → net revenue 2×"
**Framework for causal chains**:
1. **Initial change**: What's different? (e.g., "Price is $50 instead of $100")
2. **Immediate effect**: What happens next? (e.g., "Conversion rate increases from 5% to 20%")
3. **Secondary effects**: What follows? (e.g., "Customer volume quadruples")
4. **Final outcome**: Net result? (e.g., "Revenue doubles despite lower margin")
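A minimal sketch of this chain in code, using the example's illustrative numbers (traffic, prices, and conversion rates are all assumed, not data):
```python
# Trace the causal chain with the example's illustrative numbers:
# price halves, conversion quadruples, so net revenue doubles.
visitors = 10_000  # assumed traffic, held constant (minimal rewrite)

def revenue(price: float, conversion: float) -> float:
    """Customers = visitors * conversion; revenue = customers * price."""
    return visitors * conversion * price

actual = revenue(price=100, conversion=0.05)         # $50,000
counterfactual = revenue(price=50, conversion=0.20)  # $100,000
print(f"Revenue impact: {counterfactual / actual:.1f}x")  # 2.0x
```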
### Using Counterfactuals for Learning
**Post-decision counterfactual analysis**:
After a decision plays out, ask:
1. **What did we decide?** (Actual decision)
2. **What was the outcome?** (Actual result)
3. **What else could we have done?** (Alternative decision)
4. **What would have happened?** (Counterfactual outcome)
5. **What was the key causal factor?** (Insight for future)
**Example**: Hired candidate A (strong technical, weak communication) → struggled, left after 6 months. Counterfactual: B (moderate technical, strong communication) would have stayed longer, collaborated better. **Insight**: For this role, communication > pure technical skill.
### Avoiding Hindsight Bias
**Hindsight bias**: "I knew it all along" — outcome seems inevitable after the fact.
**Problem**: Makes counterfactual analysis distorted. "Of course it failed, we should have known."
**Mitigation**:
- **Re-inhabit decision context**: What information was available then (not now)?
- **List alternatives considered**: What options were on the table at the time?
- **Acknowledge uncertainty**: How predictable was outcome given information available?
**Technique**: Write counterfactual analysis in past tense but from perspective of decision-maker at the time, without benefit of hindsight.
---
## 2. Scenario Planning Techniques
### Three-Scenario Framework
**Structure**: Optimistic, Baseline, Pessimistic futures
**When to use**: General strategic planning, forecasting, resource allocation
**Process**:
1. **Define time horizon**: 6 months? 1 year? 3 years? 5 years?
- Shorter horizons: More specific, quantitative
- Longer horizons: More qualitative, exploratory
2. **Identify key uncertainties**: What 2-3 factors most shape the future?
- Market adoption rate
- Competitive intensity
- Regulatory environment
- Technology maturity
- Economic conditions
3. **Develop three scenarios**:
- **Optimistic** (15-30% probability): Best-case assumptions on key uncertainties
- **Baseline** (40-60% probability): Expected-case, extrapolating current trends
- **Pessimistic** (15-30% probability): Worst-case assumptions on key uncertainties
4. **Describe each vividly**: Write 2-4 paragraph narrative making each world feel real
5. **Extract implications**: What should we do in each scenario?
**Example (SaaS startup, 2-year horizon)**:
**Key uncertainties**: (1) Market adoption rate, (2) Competition intensity
**Optimistic scenario** (20% probability): "Market Leader"
- Adoption: 40% market share, viral growth, $10M ARR
- Competition: Weak, no major new entrants
- Drivers: Product-market fit strong, word-of-mouth, early mover advantage
- Implications: Invest heavily in scale infrastructure, expand to adjacent markets
**Baseline scenario** (60% probability): "Steady Climb"
- Adoption: 15% market share, steady growth, $3M ARR
- Competition: Moderate, 2-3 well-funded competitors
- Drivers: Expected adoption curve, competitive but differentiated
- Implications: Focus on core product, maintain burn discipline, build moat
**Pessimistic scenario** (20% probability): "Survival Mode"
- Adoption: 5% market share, slow growth, $500k ARR
- Competition: Intense, major player launches competing product
- Drivers: Slow adoption, strong competition, pivot needed
- Implications: Cut burn, extend runway, explore pivot or acquisition
### 2×2 Scenario Matrix
**Structure**: Two key uncertainties create four quadrants (scenarios)
**When to use**: When two specific uncertainties dominate strategic decision
**Process**:
1. **Identify two critical uncertainties**: Factors that:
- Are genuinely uncertain (not predictable)
- Have major impact on outcomes
- Are independent (not correlated)
2. **Define axes extremes**:
- Uncertainty 1: [Low extreme] ←→ [High extreme]
- Uncertainty 2: [Low extreme] ←→ [High extreme]
3. **Name four quadrants**: Give each world a memorable name
4. **Develop narratives**: Describe what each world looks like
5. **Identify strategic implications**: What works in each quadrant?
**Example (Market entry decision)**:
**Uncertainty 1**: Regulatory environment (Strict ←→ Loose)
**Uncertainty 2**: Market adoption rate (Slow ←→ Fast)
| | **Slow Adoption** | **Fast Adoption** |
|---|---|---|
| **Strict Regulation** | **"Constrained Growth"**: Premium focus, compliance differentiator | **"Regulated Scale"**: Invest in compliance infrastructure early |
| **Loose Regulation** | **"Patient Build"**: Bootstrap, iterate slowly | **"Wild West Growth"**: Fast growth, grab market share |
**Strategic insight**: Common actions (build product), hedges (low burn), options (compliance prep), monitoring (regulation + adoption)
### Cone of Uncertainty
**Structure**: Range of outcomes widens over time
**When to use**: Long-term planning (5-10+ years), high uncertainty domains (technology, policy)
**Visualization**:
```
Present → 1 year: Narrow cone (±20%)
→ 3 years: Medium cone (±50%)
→ 5 years: Wide cone (±100%)
→ 10 years: Very wide cone (±200%+)
```
**Technique**:
1. **Start with trend**: Current trajectory (e.g., "10% annual growth")
2. **Add uncertainty bands**: Upper and lower bounds that widen over time
3. **Identify branch points**: Key decisions or events that shift trajectory
4. **Track leading indicators**: Signals that show which path we're on
**Example (Revenue forecasting)**:
- Year 1: $1M ± 20% ($800k - $1.2M) — narrow range, short-term visibility
- Year 3: $3M ± 50% ($1.5M - $4.5M) — wider range, product-market fit uncertain
- Year 5: $10M within a factor-of-2 range ($5M - $20M) — very wide range, market evolution unknown
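A quick sketch of the widening bands, assuming a base trajectory and range factors chosen to roughly match the numbers above:
```python
# Cone of uncertainty: one base trajectory, a range widening with time.
# Growth rate and range factors are illustrative assumptions.
base = 1_000_000  # year-1 revenue forecast ($1M)
growth = 0.75     # assumed annual growth (~$3M by year 3, ~$9M by year 5)
range_factor = {1: 1.2, 3: 1.5, 5: 2.0}  # divide/multiply trend by this

for year, factor in range_factor.items():
    trend = base * (1 + growth) ** (year - 1)
    low, high = trend / factor, trend * factor
    print(f"Year {year}: ${trend:,.0f} (range ${low:,.0f} - ${high:,.0f})")
```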
### Pre-Mortem Process (Prospective Hindsight)
**What is it?**: Imagine future failure, work backward to identify causes
**Why it works**: "Prospective hindsight" — imagining outcome has occurred unlocks insights impossible from forward planning
**Research foundation**: Gary Klein, "Performing a Project Premortem" (HBR 2007)
**6-Step Process**:
**Step 1: Set the scene**
- Future date: "It is [6 months / 1 year / 2 years] from now..."
- Assumed outcome: "...and the [project has failed completely / decision was disastrous]."
- Make it vivid: "The product has been shut down. The team disbanded. We lost $X."
**Step 2: Individual brainstorm (5-10 minutes, silent)**
- Each person writes 3-5 reasons WHY it failed
- Silent writing prevents groupthink
- Encourage wild ideas, non-obvious causes
**Step 3: Round-robin sharing**
- Each person shares one reason (rotating until all shared)
- No debate yet, just capture ideas
- Scribe records all items
**Step 4: Consolidate and cluster**
- Group similar causes together
- Look for themes (technical, market, team, execution, external)
**Step 5: Vote on top risks**
- Dot voting: Each person gets 3-5 votes
- Distribute votes across risks
- Identify top 5-7 risks by vote count
**Step 6: Develop mitigations**
- For each top risk, assign:
- **Mitigation action**: Specific step to prevent or reduce risk
- **Owner**: Who is responsible
- **Deadline**: When mitigation must be in place
- **Success metric**: How to know mitigation worked
**Pre-mortem psychology**:
- **Permission to dissent**: Failure assumption gives license to voice concerns without seeming negative
- **Cognitive relief**: Easier to imagine specific failure than abstract "what could go wrong?"
- **Team alignment**: Surfaces hidden concerns before they become real problems
---
## 3. Extracting Insights from Scenarios
### Moving from Stories to Actions
Scenarios are useless without actionable implications. After developing scenarios, ask:
**Core questions**:
1. **What should we do regardless of which scenario unfolds?** (Common actions)
2. **What hedges should we take against downside scenarios?** (Risk mitigation)
3. **What options should we create for upside scenarios?** (Opportunity capture)
4. **What should we monitor to track which scenario is unfolding?** (Leading indicators)
### Identifying Common Actions
**Common actions** ("no-regrets moves"): Work across all scenarios
**Technique**: List actions that make sense in optimistic, baseline, AND pessimistic scenarios
**Example (Product launch scenarios)**:
| Scenario | Build Core Product | Hire Marketing | Raise Series B |
|----------|-------------------|----------------|----------------|
| Optimistic | ✓ Essential | ✓ Essential | ✓ Essential |
| Baseline | ✓ Essential | ✓ Essential | △ Maybe |
| Pessimistic | ✓ Essential | △ Maybe | ✗ Too risky |
**Common actions**: Build core product, hire marketing (no scenario rules them out)
**Not common**: Raise Series B (only makes sense in optimistic/baseline)
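The screening rule behind this table is mechanical enough to sketch; the action names and ratings below simply mirror the illustrative table:
```python
# An action is a "no-regrets move" when no scenario rules it out.
# Ratings mirror the table above: "y" essential, "m" maybe, "n" too risky.
ratings = {
    "Build core product": {"optimistic": "y", "baseline": "y", "pessimistic": "y"},
    "Hire marketing":     {"optimistic": "y", "baseline": "y", "pessimistic": "m"},
    "Raise Series B":     {"optimistic": "y", "baseline": "m", "pessimistic": "n"},
}
common = [action for action, marks in ratings.items()
          if all(mark != "n" for mark in marks.values())]
print("Common actions:", common)  # ['Build core product', 'Hire marketing']
```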
### Designing Hedges
**Hedges**: Actions that reduce downside risk if pessimistic scenario unfolds
**Principle**: Pay small cost now to protect against large cost later
**Examples**:
- **Pessimistic scenario**: "Competitor launches free version, our revenue drops 50%"
- **Hedge**: Keep burn low, maintain 18-month runway (vs. 12-month)
- Cost: Hire 2 fewer people now
- Benefit: Survive revenue shock if it happens
- **Pessimistic scenario**: "Regulatory crackdown makes our business model illegal"
- **Hedge**: Develop alternative revenue model in parallel
- Cost: 10% of eng time on alternative
- Benefit: Can pivot quickly if regulation hits
**Hedge evaluation**: Compare the cost of the hedge against the expected loss it avoids (worked sketch below):
- Hedge cost: $X
- Loss avoided if scenario occurs: $Y
- Probability of scenario: P
- Expected value of hedge: (P × $Y) - $X. Take the hedge when this is positive, i.e., when $X < P × $Y.
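A worked version of this formula, with assumed figures in the spirit of the runway hedge above:
```python
# Expected value of a hedge = (probability * loss avoided) - hedge cost.
# All figures below are illustrative assumptions.
def hedge_value(p_scenario: float, loss_avoided: float, hedge_cost: float) -> float:
    """Positive result: the hedge is worth taking in expectation."""
    return p_scenario * loss_avoided - hedge_cost

# E.g., hire 2 fewer people (~$400k/year) to survive a 20%-likely revenue
# shock that would otherwise cost an estimated $5M.
ev = hedge_value(p_scenario=0.20, loss_avoided=5_000_000, hedge_cost=400_000)
print(f"Expected value of hedge: ${ev:,.0f}")  # $600,000 -> take the hedge
```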
### Creating Options
**Options**: Prepare to capture upside if optimistic scenario unfolds, without committing resources now
**Real options theory**: Create flexibility to make future decisions when more information available
**Examples**:
- **Optimistic scenario**: "Adoption faster than expected, enterprise demand emerges"
- **Option**: Design product architecture with enterprise features in mind (multi-tenancy, SSO hooks), but don't build until demand confirmed
- Low cost now: Design decisions
- High value later: Fast enterprise launch if demand materializes
- **Optimistic scenario**: "International markets grow 3× faster than expected"
- **Option**: Hire one person with international experience, build relationships with international partners
- Low cost now: One hire
- High value later: Quick international expansion if opportunity emerges
### Defining Leading Indicators
**Leading indicators**: Early signals that show which scenario is unfolding
**Characteristics of good leading indicators**:
- **Observable**: Can be measured objectively
- **Early**: Visible before scenario fully plays out (6+ months advance notice)
- **Actionable**: If indicator triggers, we know what to do
**Example (Market adoption scenarios)**:
| Scenario | Leading Indicator | Threshold | Action if Triggered |
|----------|------------------|-----------|---------------------|
| Optimistic | Monthly adoption rate | >20% MoM for 3 months | Accelerate hiring, raise capital |
| Baseline | Monthly adoption rate | 10-20% MoM | Maintain plan |
| Pessimistic | Monthly adoption rate | <10% MoM for 3 months | Cut burn, explore pivot |
**Monitoring cadence**: Review indicators monthly or quarterly, update scenario probabilities based on new data
### Decision Points and Trigger Actions
**Decision points**: Pre-defined thresholds that trigger specific actions
**Format**: "If [indicator] crosses [threshold], then [action]"
**Examples**:
- "If monthly churn rate >8% for 2 consecutive months, then launch retention task force"
- "If competitor raises >$50M, then accelerate roadmap and increase marketing spend"
- "If regulation bill passes committee vote, then begin compliance implementation immediately"
**Benefits**:
- **Pre-commitment**: Decide now what to do later, avoids decision paralysis in moment
- **Speed**: Trigger action immediately when condition met
- **Alignment**: Team knows what to expect, can prepare
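Because triggers are pre-committed, they can be encoded directly. A minimal sketch, using thresholds from the examples above and hypothetical current readings:
```python
# Pre-committed rules: "If [indicator] crosses [threshold], then [action]".
# Current readings are hypothetical; real triggers may also need a
# persistence check (e.g., churn >8% for 2 consecutive months).
triggers = [
    ("monthly_churn_pct",       lambda v: v > 8,  "Launch retention task force"),
    ("competitor_raise_musd",   lambda v: v > 50, "Accelerate roadmap, boost marketing"),
    ("regulation_in_committee", lambda v: v,      "Begin compliance implementation"),
]
current = {"monthly_churn_pct": 9.2, "competitor_raise_musd": 30,
           "regulation_in_committee": False}

for indicator, crossed, action in triggers:
    if crossed(current[indicator]):
        print(f"TRIGGERED: {indicator} -> {action}")
```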
---
## 4. Monitoring and Adaptation
### Tracking Which Scenario Is Unfolding
**Reality ≠ any single scenario**: The real world is usually a blend of scenarios, or something none of them anticipated
**Monitoring approach**:
1. **Quarterly scenario review**: Update probabilities based on new evidence
2. **Indicator dashboard**: Track 5-10 leading indicators, visualize trends
3. **Surprise tracking**: Log unexpected events not captured by scenarios
**Example dashboard**:
| Indicator | Optimistic | Baseline | Pessimistic | Current | Trend |
|-----------|-----------|----------|-------------|---------|-------|
| Adoption rate | >20% MoM | 10-20% | <10% | 15% | ↑ |
| Churn rate | <3%/mo | 3-5% | >5% | 4% | → |
| Competitor funding | <$20M | $20-50M | >$50M | $30M | ↑ |
| NPS | >50 | 30-50 | <30 | 45 | ↑ |
**Interpretation**: Trending optimistic (adoption, NPS), watch competitor funding
### Updating Scenarios
**When to update**:
- **Scheduled**: Quarterly reviews
- **Triggered**: Major unexpected event (pandemic, regulation, acquisition, etc.)
**Update process**:
1. **Review what happened**: What changed since last review?
2. **Update probabilities**: Which scenario looking more/less likely?
3. **Revise scenarios**: Do scenarios still capture range of plausible futures? Add new ones if needed
4. **Adjust actions**: Change hedges, options, or common actions based on new information
**Example**: Before pandemic: Optimistic 20%, Baseline 60%, Pessimistic 20%. After: Add "Remote-first world" (30%), reduce Baseline to 40%. Action: shift from office expansion to remote tooling.
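One lightweight way to make the probability update explicit is a simple Bayesian reweighting: score how consistent the quarter's evidence is with each scenario, multiply by the priors, and renormalize. A sketch with hypothetical fit scores:
```python
# Quarterly scenario-probability update: reweight priors by evidence fit,
# then renormalize. Fit scores are hypothetical judgment calls.
priors = {"optimistic": 0.20, "baseline": 0.60, "pessimistic": 0.20}
# How consistent is this quarter's evidence (e.g., 15% MoM adoption,
# rising NPS) with each scenario? 1.0 = fully consistent.
evidence_fit = {"optimistic": 0.8, "baseline": 0.6, "pessimistic": 0.2}

weighted = {s: priors[s] * evidence_fit[s] for s in priors}
total = sum(weighted.values())
posterior = {s: round(w / total, 2) for s, w in weighted.items()}
print(posterior)  # {'optimistic': 0.29, 'baseline': 0.64, 'pessimistic': 0.07}
```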
### Dealing with Surprises
**Black swans**: High-impact, low-probability events not captured by scenarios (Taleb)
**Response protocol**:
1. **Acknowledge**: "This is outside our scenarios"
2. **Assess**: How does this change the landscape?
3. **Create emergency scenario**: Rapid scenario development (hours/days, not weeks)
4. **Decide**: What immediate actions needed?
5. **Update scenarios**: Incorporate new uncertainty into ongoing planning
**Example**: COVID-19 lockdowns (not in scenarios) → Assess: dining impossible → Emergency scenario: "Delivery-only world" (6-12 mo) → Actions: pivot to takeout, renegotiate leases → Update: add "Hybrid dining" scenario
### Scenario Planning as Organizational Learning
**Scenarios as shared language**: Team uses scenario names to communicate quickly
- "We're in Constrained Growth mode" → Everyone knows what that means
**Scenario-based planning**: Budgets, roadmaps reference scenarios
- "If we hit Optimistic scenario by Q3, we trigger hiring plan B"
**Cultural benefit**: Reduces certainty bias, maintains flexibility, normalizes uncertainty
---
## 5. Advanced Topics
### Counterfactual Probability Estimation
**Challenge**: How likely was counterfactual outcome?
**Approach**: Use base rates and analogies
1. **Find analogous cases**: What happened in similar situations?
2. **Calculate base rate**: Of N analogous cases, in how many did X occur?
3. **Adjust for specifics**: Is our case different? How?
4. **Estimate probability range**: Not point estimate, but range (40-60%)
**Example**: "What if we had launched in EU first?" — 20 similar startups: 8/20 chose EU-first (3/8 succeeded = 37.5%), 12/20 chose US-first (7/12 = 58%). Our product has EU features (+10%) → EU-first 35-50%, US-first 50-65%. **Insight**: US-first was better bet.
### Scenario Narrative Techniques
**Make scenarios memorable and vivid**:
**Technique 1: Use present tense**
- Bad: "Adoption will grow quickly"
- Good: "It's January 2026. Our user base has grown 10× in 12 months..."
**Technique 2: Add concrete details**
- Bad: "Competition is intense"
- Good: "Three well-funded competitors (FundedCo with $50M Series B, StartupX acquired by BigTech, OpenSource Project with 10k stars) are fighting for same customers..."
**Technique 3: Use personas/characters**
- "Sarah, our typical customer (marketing manager at 50-person B2B SaaS company), now has five alternatives to our product..."
**Technique 4: Include metrics**
- "Monthly churn rate: 8%, NPS: 25, CAC: $500 (up from $200)"
### Assumption Reversal for Innovation
**Technique**: Take core assumption, flip it, explore implications
**Process**:
1. **List key assumptions**: What do we take for granted?
2. **Reverse each**: "What if opposite is true?"
3. **Explore plausibility**: Could reversal be true?
4. **Identify implications**: What would we do differently?
5. **Test**: Can we experiment with reversal?
**Examples**:
| Current Assumption | Reversed Assumption | Implications | Action |
|-------------------|---------------------|--------------|--------|
| "Customers want more features" | "Customers want FEWER features" | Simplify product, remove rarely-used features, focus on core workflow | Survey: Would users pay same for product with 50% fewer features but better UX? |
| "Freemium is best model" | "Paid-only from day 1" | No free tier, premium positioning, higher LTV but lower top-of-funnel | Test: Launch premium SKU, measure willingness to pay |
| "We need to raise VC funding" | "Bootstrap and self-fund" | Slower growth, but control + profitability focus | Calculate: Can we reach profitability on current runway? |
### Timeboxing Scenario Work
**Problem**: Scenario planning can become endless theorizing
**Solution**: Timebox exercises
**Suggested time budgets**: Pre-mortem (60-90 min), Three scenarios (2-4 hrs), 2×2 matrix (3-5 hrs), Quarterly review (1-2 hrs)
**Principle**: Scenarios are decision tools, not academic exercises. Generate enough insight to decide, then act.
---
## Summary
**Counterfactual reasoning** reveals causality through minimal-change thought experiments. Focus on plausibility, specify mechanisms, avoid hindsight bias.
**Scenario planning** (three scenarios, 2×2 matrix, cone of uncertainty) explores alternative futures. Assign probabilities, make vivid, extract actions.
**Extract insights** by identifying common actions (no-regrets moves), hedges (downside protection), options (upside preparation), and leading indicators (early signals).
**Monitor and adapt** quarterly. Track indicators, update scenario probabilities, adjust strategy as reality unfolds. Treat surprises as learning opportunities.
**Advanced techniques** include counterfactual probability estimation, narrative crafting, assumption reversal, and rigorous timeboxing to avoid analysis paralysis.
**The goal**: Prepare for uncertainty, maintain strategic flexibility, and make better decisions by systematically exploring "what if?"

resources/template.md

@@ -0,0 +1,304 @@
# Hypotheticals and Counterfactuals Templates
Quick-start templates for counterfactual analysis, scenario planning, and pre-mortem exercises.
## Focal Question Template
**What are you exploring?**
**Type**: [Counterfactual (past) / Hypothetical (future)]
**Core question**:
- Counterfactual: "What would have happened if [X] had been different?"
- Hypothetical: "What could happen if [X] occurs in the future?"
**Context**: [What decision, event, or situation are you analyzing?]
**Time frame**: [Past event date / Future time horizon (6 months, 1 year, 5 years)]
**Purpose**: [What do you hope to learn? Understand causality? Identify risks? Test assumptions?]
---
## Counterfactual Analysis Template
**Actual outcome** (what happened):
- Decision made: [What did we actually do?]
- Outcome: [What resulted?]
- Key metrics: [Quantify results]
**Counterfactual** (what if we had done differently):
- Alternative decision: "What if we had [done X instead]?"
- Hypothesized outcome: [What would have happened?]
- Reasoning: [WHY would outcome be different? Specify causal mechanism]
**Evidence for counterfactual**:
- Analogies: [Similar cases where X led to Y]
- Data: [Market data, competitor examples, historical patterns]
- Expert opinion: [What do domain experts say?]
**Causal insight**:
- What mattered: [Which factor was causal?]
- What didn't matter: [Which factors were irrelevant?]
- Lesson learned: [What should we do differently next time?]
**Example**:
- **Actual**: Launched in US first, 10k users in 6 months
- **Counterfactual**: "What if we had launched in EU first?"
- **Hypothesized outcome**: 5k users (smaller market, slower adoption)
- **Reasoning**: EU market 40% size of US, GDPR compliance slows growth
- **Insight**: US-first was the right call. Market size matters more than competition.
---
## Pre-Mortem Template
**Project/Decision**: [What are you launching or deciding?]
**Future date**: "It is [6 months / 1 year] from now..."
**Assumed outcome**: "...and the [project has failed / decision was disastrous]."
**Individual brainstorm** (5 min, silent):
Each person writes 3-5 reasons why it failed.
1. [Failure reason 1]
2. [Failure reason 2]
3. [Failure reason 3]
4. [Failure reason 4]
5. [Failure reason 5]
**Consolidate** (round-robin sharing):
- [Consolidated failure cause 1]
- [Consolidated failure cause 2]
- [Consolidated failure cause 3]
- [Consolidated failure cause 4]
- [Consolidated failure cause 5]
...
**Vote on top risks** (dot voting):
| Risk | Votes | Likelihood | Impact | Priority |
|------|-------|------------|--------|----------|
| [Risk 1] | 8 | High | High | ⚠ Critical |
| [Risk 2] | 6 | Medium | High | ⚠ High |
| [Risk 3] | 4 | High | Medium | Medium |
| [Risk 4] | 2 | Low | Low | Low |
**Mitigation actions** (top 3-5 risks):
| Risk | Mitigation | Owner | Deadline |
|------|------------|-------|----------|
| [Risk 1] | [Specific action to prevent/reduce] | [Name] | [Date] |
| [Risk 2] | [Specific action] | [Name] | [Date] |
| [Risk 3] | [Specific action] | [Name] | [Date] |
---
## Scenario Generation Template
**Time horizon**: [6 months / 1 year / 3 years / 5 years]
**Key uncertainties** (2-3 factors that most shape the future):
1. [Uncertainty 1, e.g., "Market adoption rate"]
2. [Uncertainty 2, e.g., "Competitive intensity"]
3. [Uncertainty 3, e.g., "Regulatory environment"]
### Option A: Three Scenarios
**Optimistic scenario** (Probability: [%]):
- Name: "[Descriptive name]"
- Description: [1-2 paragraphs describing this future]
- Key drivers: [What makes this happen?]
- Implications: [What does this mean for us?]
**Baseline scenario** (Probability: [%]):
- Name: "[Descriptive name]"
- Description: [1-2 paragraphs]
- Key drivers: [What makes this happen?]
- Implications: [What does this mean for us?]
**Pessimistic scenario** (Probability: [%]):
- Name: "[Descriptive name]"
- Description: [1-2 paragraphs]
- Key drivers: [What makes this happen?]
- Implications: [What does this mean for us?]
### Option B: 2×2 Matrix
**Uncertainty 1**: [e.g., Market adoption] - Axes: [Slow / Fast]
**Uncertainty 2**: [e.g., Regulation] - Axes: [Strict / Loose]
| | **Slow Adoption** | **Fast Adoption** |
|---|---|---|
| **Strict Regulation** | **Scenario 1**: "[Name]"<br>[Description] | **Scenario 2**: "[Name]"<br>[Description] |
| **Loose Regulation** | **Scenario 3**: "[Name]"<br>[Description] | **Scenario 4**: "[Name]"<br>[Description] |
---
## Scenario Development Template
**Scenario name**: "[Memorable title]"
**Time**: [Future date, e.g., "January 2026"]
**Narrative** (tell the story, make it vivid):
[2-4 paragraphs describing this world. Use present tense, concrete details, make it feel real.]
**Key assumptions**:
- [Assumption 1: what had to be true for this scenario?]
- [Assumption 2]
- [Assumption 3]
**Metrics in this world**:
- [Metric 1]: [Value, e.g., "Market size: $500M"]
- [Metric 2]: [Value, e.g., "Our market share: 15%"]
- [Metric 3]: [Value, e.g., "Churn rate: 3%/month"]
**Leading indicators** (early signals this scenario is unfolding):
- [Indicator 1]: [e.g., "If regulation bill passes Q1"]
- [Indicator 2]: [e.g., "If competitor raises >$50M"]
- [Indicator 3]: [e.g., "If adoption rate >20% MoM for 3 months"]
**Implications for our strategy**:
- What should we do in this world? [Strategic response]
- What should we avoid? [Actions that fail in this scenario]
- What capabilities do we need? [Org/tech requirements]
---
## Assumption Reversal Template
**Current assumption**: [State the belief we take for granted]
**Reversed assumption**: "What if [opposite] is true?"
**Explore the reversal**:
- Is it plausible? [Could the reversal actually be true?]
- Evidence for reversal: [What would suggest our assumption is wrong?]
- Implications if reversed: [What would we do differently?]
- New possibilities: [What doors does this open?]
**Example**:
- **Current**: "Customers want more features"
- **Reversed**: "What if customers want fewer features?"
- **Plausible?**: Yes (research shows feature bloat frustrates users)
- **Implications**: Simplify product, remove rarely-used features, focus on core workflow
- **New possibility**: "Feature-light" positioning vs. competitors
---
## Stress Test Template
**Decision being tested**: [What are we deciding?]
**Baseline assumptions**:
- [Assumption 1]: [Current expectation, e.g., "CAC = $100"]
- [Assumption 2]: [e.g., "Churn = 5%/month"]
- [Assumption 3]: [e.g., "Market size = $1B"]
**Stress scenario 1: Optimistic**
- [Assumption 1]: [Best case, e.g., "CAC = $50"]
- [Assumption 2]: [e.g., "Churn = 2%/month"]
- [Assumption 3]: [e.g., "Market size = $2B"]
- **Decision still valid?**: [Yes/No, with explanation]
**Stress scenario 2: Pessimistic**
- [Assumption 1]: [Worst case, e.g., "CAC = $200"]
- [Assumption 2]: [e.g., "Churn = 10%/month"]
- [Assumption 3]: [e.g., "Market size = $500M"]
- **Decision still valid?**: [Yes/No, with explanation]
**Stress scenario 3: Black swan**
- [Extreme event]: [e.g., "Major competitor offers product free"]
- **Decision still valid?**: [Yes/No, with explanation]
**Conclusion**:
- Decision robust? [Does it hold across scenarios?]
- Hedges needed? [What can we do to protect downside?]
- Go/no-go? [Final decision]
---
## Action Extraction Template
**Scenarios analyzed**: [List 2-4 scenarios explored]
**Common actions** (work across all scenarios):
- [Action 1]: [What should we do regardless of which future unfolds?]
- [Action 2]
- [Action 3]
**Hedges** (protect against downside scenarios):
- [Hedge 1]: [What reduces risk if pessimistic scenario happens?]
- [Hedge 2]
**Options** (prepare for upside scenarios):
- [Option 1]: [What positions us to capture value if optimistic scenario happens?]
- [Option 2]
**Monitoring** (track which scenario unfolding):
- [Indicator 1]: [What to watch, e.g., "Track regulation votes monthly"]
- [Indicator 2]: [e.g., "Monitor competitor funding rounds"]
- [Indicator 3]: [e.g., "Measure adoption rate vs. baseline"]
**Decision points** (when to adjust):
- If [indicator crosses threshold], then [action]
- If [indicator crosses threshold], then [action]
**Example**:
- **Common**: Build core product, hire team, launch beta
- **Hedge**: Keep burn low, maintain 18-month runway for slow-growth scenario
- **Option**: Prepare enterprise sales motion if early adoption strong
- **Monitor**: Track adoption rate monthly; if >15% MoM for 3 months, trigger enterprise hiring
---
## Quick Examples
### Example 1: Product Launch Pre-Mortem
**Project**: Launch new mobile app, target 50k downloads in 6 months
**Pre-mortem** (failure causes):
1. App crashes on Android (not tested thoroughly)
2. Marketing budget too small (couldn't acquire users at scale)
3. Onboarding too complex (80% drop-off after signup)
4. Competitor launched free version (undercut pricing)
5. App Store rejection (didn't follow guidelines)
**Mitigation**:
- Comprehensive Android testing before launch
- Double marketing budget or lower target
- Simplify onboarding to 3 steps max
- Monitor competitor activity, prepare pricing flex
- Review App Store guidelines, get pre-approval
### Example 2: Counterfactual Learning
**Actual**: Raised $5M Series A, 18-month runway, hired 15 people
**Outcome**: Burned through runway in 14 months, failed to reach next milestone
**Counterfactual**: "What if we had raised $3M instead?"
- **Hypothesized outcome**: 12-month runway, hired 8 people, reached profitability
- **Reasoning**: Smaller team = lower burn, forced focus on revenue, faster decisions
- **Insight**: Raising more money led to premature scaling. Constraint is good early-stage.
### Example 3: Strategic Scenarios (3 Futures)
**Time**: 2026 (2 years out)
**Optimistic ("Market Leader")**:
- 40% market share, $10M ARR, profitability
- Drivers: Product-market fit strong, viral growth, weak competition
**Baseline ("Steady Climb")**:
- 15% market share, $3M ARR, break-even
- Drivers: Expected growth, moderate competition, steady execution
**Pessimistic ("Survival Mode")**:
- 5% market share, $500k ARR, burning cash
- Drivers: Strong competitor launches, slow adoption, pivot needed
**Implications**: Build for "Steady Climb", hedge for "Survival" (low burn), prepare for "Leader" (scale infrastructure).