---
name: forecast-premortem
description: Use to stress-test predictions by assuming they failed and working backward to identify why. Invoke when confidence is high (>80% or <20%), need to identify tail risks and unknown unknowns, or want to widen overconfident intervals. Use when user mentions premortem, backcasting, what could go wrong, stress test, or black swans.
---
# Forecast Pre-Mortem
## Table of Contents
- [What is a Forecast Pre-Mortem?](#what-is-a-forecast-pre-mortem)
- [When to Use This Skill](#when-to-use-this-skill)
- [Interactive Menu](#interactive-menu)
- [Quick Reference](#quick-reference)
- [Resource Files](#resource-files)
---
## What is a Forecast Pre-Mortem?
A **forecast pre-mortem** is a stress-testing technique where you assume your prediction has already failed and work backward to construct the history of how it failed. This reveals blind spots, tail risks, and overconfidence.
**Core Principle:** Invert the problem. Don't ask "Will this succeed?" Ask "It has failed - why?"
**Why It Matters:**
- Defeats overconfidence by forcing you to imagine failure
- Identifies specific failure modes you hadn't considered
- Transforms vague doubt into concrete risk variables
- Widens confidence intervals appropriately
- Surfaces "unknown unknowns"
**Origin:** Gary Klein's "premortem" technique, adapted for probabilistic forecasting
---
## When to Use This Skill
Use this skill when:
- **High confidence** (>80% or <20%) - Most likely to be overconfident
- **Feeling certain** - Certainty is a red flag in forecasting
- **Prediction is important** - Stakes are high, need robustness
- **After inside view analysis** - Used specific details, might have missed big picture
- **Before finalizing forecast** - Last check before committing
Do NOT use when:
- Confidence already low (~50%) - You're already uncertain
- Trivial low-stakes prediction - Not worth the time
- Pure base rate forecasting - Premortem is for inside view adjustments
---
## Interactive Menu
**What would you like to do?**
### Core Workflows
**1. [Run a Failure Premortem](#1-run-a-failure-premortem)** - Assume prediction failed, explain why
**2. [Run a Success Premortem](#2-run-a-success-premortem)** - For pessimistic predictions (<20%)
**3. [Dragonfly Eye Perspective](#3-dragonfly-eye-perspective)** - View failure through multiple lenses
**4. [Identify Tail Risks](#4-identify-tail-risks)** - Find black swans and unknown unknowns
**5. [Adjust Confidence Intervals](#5-adjust-confidence-intervals)** - Quantify the adjustment
**6. [Learn the Framework](#6-learn-the-framework)** - Deep dive into methodology
**7. Exit** - Return to main forecasting workflow
---
## 1. Run a Failure Premortem
**Let's stress-test your prediction by imagining it has failed.**
```
Failure Premortem Progress:
- [ ] Step 1: State the prediction and current confidence
- [ ] Step 2: Time travel to failure
- [ ] Step 3: Write the history of failure
- [ ] Step 4: Identify concrete failure modes
- [ ] Step 5: Assess plausibility and adjust
```
### Step 1: State the prediction and current confidence
**Tell me:**
1. What are you predicting?
2. What's your current probability?
3. What's your confidence interval?
**Example:** "This startup will reach $10M ARR within 2 years" - Probability: 75%, CI: 60-85%
### Step 2: Time travel to failure
**The Crystal Ball Exercise:**
Jump forward to the resolution date. **It is now [resolution date]. The event did NOT happen.** This is a certainty. Do not argue with it.
**How does it feel?** Surprising? Expected? Shocking? This emotional response tells you about your true confidence.
### Step 3: Write the history of failure
**Backcasting Narrative:** Starting from the failure point, work backward in time. Write the story of how we got here.
**Prompts:**
- "The headlines that led to this were..."
- "The first sign of trouble was when..."
- "In retrospect, we should have known because..."
- "The critical mistake was..."
**Frameworks to consider:**
- **Internal friction:** Team burned out, co-founders fought, execution failed
- **External shocks:** Regulation changed, competitor launched, market shifted
- **Structural flaws:** Unit economics didn't work, market too small, tech didn't scale
- **Black swans:** Pandemic, war, financial crisis, unexpected disruption
See [Failure Mode Taxonomy](resources/failure-mode-taxonomy.md) for comprehensive categories.
### Step 4: Identify concrete failure modes
**Extract specific, actionable failure causes from your narrative.**
For each failure mode: (1) What happened, (2) Why it caused failure, (3) How likely it is, (4) Early warning signals
**Example:**
| Failure Mode | Mechanism | Likelihood | Warning Signals |
|--------------|-----------|------------|-----------------|
| Key engineer quit | Lost technical leadership, delayed product | 15% | Declining code commits, complaints |
| Competitor launched free tier | Destroyed unit economics | 20% | Hiring spree, beta leaks |
| Regulation passed | Made business model illegal | 5% | Proposed legislation, lobbying |
### Step 5: Assess plausibility and adjust
**The Plausibility Test:**
Ask yourself:
- **How easy was it to write the failure narrative?**
- Very easy → Drop confidence by 15-30 percentage points
- Very hard, felt absurd → Confidence was appropriate
- **How many plausible failure modes did you identify?**
- 5+ modes each >5% likely → Too much uncertainty for high confidence
- 1-2 modes, low likelihood → Confidence can stay high
- **Did you discover any "unknown unknowns"?**
- Yes, multiple → Widen confidence intervals by 20%
- No, all known risks → Confidence appropriate
**Quantitative Method:** Sum the probabilities of your failure modes (this treats them as roughly mutually exclusive; heavily overlapping modes will inflate the sum):
```
P(failure) ≈ P(mode_1) + P(mode_2) + ... + P(mode_n)
```
If this sum is greater than `1 - your_current_probability`, your stated probability is too high.
**Example:** Current success: 75% (implied failure: 25%), Sum of failure modes: 40%
**Conclusion:** Underestimating failure risk by 15 percentage points. **Adjusted:** 60% success
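As a quick mechanical check, here is a minimal Python sketch of this comparison, using the hypothetical failure modes and numbers from the table above:
```python
# Hypothetical failure modes and probabilities from the Step 4 table
failure_modes = {
    "key engineer quits": 0.15,
    "competitor launches free tier": 0.20,
    "regulation passes": 0.05,
}

current_success = 0.75
implied_failure = 1 - current_success         # 0.25

# Naive sum of the modes (assumes they barely overlap)
summed_failure = sum(failure_modes.values())  # 0.40

if summed_failure > implied_failure:
    gap = summed_failure - implied_failure
    print(f"Overconfident by {gap:.0%}; adjusted success: {current_success - gap:.0%}")
else:
    print("Stated probability is consistent with the identified failure modes.")
```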
**Next:** Return to [menu](#interactive-menu) or document findings
---
## 2. Run a Success Premortem
**For pessimistic predictions - assume the unlikely success happened.**
```
Success Premortem Progress:
- [ ] Step 1: State pessimistic prediction (<20%)
- [ ] Step 2: Time travel to success
- [ ] Step 3: Write the history of success
- [ ] Step 4: Identify how you could be wrong
- [ ] Step 5: Assess and adjust upward if needed
```
### Step 1: State pessimistic prediction
**Tell me:** (1) What low-probability event are you predicting? (2) Why is your confidence so low?
**Example:** "Fusion energy will be commercialized by 2030" - Probability: 10%, Reasoning: Technical challenges too great
### Step 2: Time travel to success
**It is now 2030. Fusion energy is commercially available.** This happened. It's real. How?
### Step 3: Write the history of success
**Backcasting the unlikely:** What had to happen for this to occur?
- "The breakthrough came when..."
- "We were wrong about [assumption] because..."
- "The key enabler was..."
- "In retrospect, we underestimated..."
### Step 4: Identify how you could be wrong
**Challenge your pessimism:**
- Are you anchoring too heavily on current constraints?
- Are you underestimating exponential progress?
- Are you ignoring parallel approaches?
- Are you biased by past failures?
### Step 5: Assess and adjust upward if needed
If the success narrative was surprisingly plausible, increase your probability.
**Next:** Return to [menu](#interactive-menu)
---
## 3. Dragonfly Eye Perspective
**View the failure through multiple conflicting perspectives.**
The dragonfly has compound eyes that see from many angles simultaneously. We simulate this by adopting radically different viewpoints.
```
Dragonfly Eye Progress:
- [ ] Step 1: The Skeptic (why this will definitely fail)
- [ ] Step 2: The Fanatic (why failure is impossible)
- [ ] Step 3: The Disinterested Observer (neutral analysis)
- [ ] Step 4: Synthesize perspectives
- [ ] Step 5: Extract robust failure modes
```
### Step 1: The Skeptic
**Channel the harshest critic.** You are a short-seller, a competitor, a pessimist. Why will this DEFINITELY fail?
**Be extreme:** Assume worst case, highlight every flaw, no charity, no benefit of doubt
**Output:** List of failure reasons from skeptical view
### Step 2: The Fanatic
**Channel the strongest believer.** You are the founder's mother, a zealot, an optimist. Why is failure IMPOSSIBLE?
**Be extreme:** Assume best case, highlight every strength, maximum charity and optimism
**Output:** List of success reasons from optimistic view
### Step 3: The Disinterested Observer
**Channel a neutral analyst.** You have no stake in the outcome. You're running a simulation, analyzing data dispassionately.
**Be analytical:** No emotional investment, pure statistical reasoning, reference class thinking
**Output:** Balanced probability estimate with reasoning
### Step 4: Synthesize perspectives
**Find the overlap:** Which failure modes appeared in ALL THREE perspectives?
- Skeptic mentioned it
- Even fanatic couldn't dismiss it
- Observer identified it statistically
**These are your robust failure modes** - the ones most likely to actually happen.
### Step 5: Extract robust failure modes
**The synthesis:**
| Failure Mode | Skeptic | Fanatic | Observer | Robust? |
|--------------|---------|---------|----------|---------|
| Market too small | Definitely | Debatable | Base rate suggests yes | YES |
| Execution risk | Definitely | No way | 50/50 | Maybe |
| Tech won't scale | Definitely | Already solved | Unknown | Investigate |
Focus adjustment on the **robust** failures that survived all perspectives.
**Next:** Return to [menu](#interactive-menu)
---
## 4. Identify Tail Risks
**Find the black swans and unknown unknowns.**
```
Tail Risk Identification Progress:
- [ ] Step 1: Define what counts as "tail risk"
- [ ] Step 2: Systematic enumeration
- [ ] Step 3: Impact × Probability matrix
- [ ] Step 4: Set kill criteria
- [ ] Step 5: Monitor signposts
```
### Step 1: Define what counts as "tail risk"
**Criteria:** Low probability (<5%), High impact (would completely change outcome), Outside normal planning, Often exogenous shocks
**Examples:** Pandemic, war, financial crisis, regulatory ban, key person death, natural disaster, technological disruption
### Step 2: Systematic enumeration
**Use the PESTLE framework for comprehensive coverage:**
- **Political:** Elections, coups, policy changes, geopolitical shifts
- **Economic:** Recession, inflation, currency crisis, market crash
- **Social:** Cultural shifts, demographic changes, social movements
- **Technological:** Breakthrough inventions, disruptions, cyber attacks
- **Legal:** New regulations, lawsuits, IP challenges, compliance changes
- **Environmental:** Climate events, pandemics, natural disasters
For each category, ask: "What low-probability event would kill this prediction?"
See [Failure Mode Taxonomy](resources/failure-mode-taxonomy.md) for detailed categories.
### Step 3: Impact × Probability matrix
**Plot your tail risks:**
```
High Impact
    │   [Pandemic]       [Key Founder Dies]
    │   [Recession]      [Competitor Emerges]
    └────────────────────────────────────────→ Probability
      Low                                 High
```
**Focus on:** High impact, even if very low probability
### Step 4: Set kill criteria
**For each major tail risk, define the "kill criterion":**
**Format:** "If [event X] happens, probability drops to [Y]%"
**Examples:**
- "If FDA rejects our drug, probability drops to 5%"
- "If key engineer quits, probability drops to 30%"
- "If competitor launches free tier, probability drops to 20%"
- "If regulation passes, probability drops to 0%"
**Why this matters:** You now have clear indicators to watch
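To make these criteria mechanically checkable, a minimal Python sketch; the events, probabilities, and function name are hypothetical illustrations echoing the examples above, not part of the skill:
```python
# Each kill criterion maps a trigger event to the probability the forecast drops to
kill_criteria = {
    "FDA rejects our drug": 0.05,
    "key engineer quits": 0.30,
    "competitor launches free tier": 0.20,
    "regulation passes": 0.00,
}

def apply_kill_criteria(current: float, observed: set[str]) -> float:
    """Drop the forecast to the most severe triggered kill level, if any."""
    triggered = [p for event, p in kill_criteria.items() if event in observed]
    return min([current, *triggered])

print(apply_kill_criteria(0.75, {"key engineer quits"}))  # -> 0.3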
### Step 5: Monitor signposts
**For each kill criterion, identify early warning signals:**
| Kill Criterion | Warning Signals | Check Frequency |
|----------------|----------------|-----------------|
| FDA rejection | Phase 2 trial results, FDA feedback | Monthly |
| Engineer quit | Code velocity, satisfaction surveys | Weekly |
| Competitor launch | Hiring spree, beta leaks, patents | Monthly |
| Regulation | Proposed bills, lobbying, hearings | Quarterly |
**Set up monitoring:** Calendar reminders, news alerts, automated tracking
**Next:** Return to [menu](#interactive-menu)
---
## 5. Adjust Confidence Intervals
**Quantify how much the premortem should change your bounds.**
```
Confidence Interval Adjustment Progress:
- [ ] Step 1: State current CI
- [ ] Step 2: Evaluate premortem findings
- [ ] Step 3: Calculate width adjustment
- [ ] Step 4: Set new bounds
- [ ] Step 5: Document reasoning
```
### Step 1: State current CI
**Current confidence interval:** Lower bound: __%, Upper bound: __%, Width: ___ percentage points
### Step 2: Evaluate premortem findings
**Score your premortem on these dimensions (1-5 each):**
1. **Narrative plausibility** - 1 = Failure felt absurd, 5 = Failure felt inevitable
2. **Number of failure modes** - 1 = Only 1-2 unlikely modes, 5 = 5+ plausible modes
3. **Unknown unknowns discovered** - 1 = No surprises, all known, 5 = Many blind spots revealed
4. **Dragonfly synthesis** - 1 = Perspectives diverged completely, 5 = All agreed on failure modes
**Total score:** __ / 20
### Step 3: Calculate width adjustment
**Adjustment formula:**
```
Width multiplier = 1 + (Score / 20)
```
**Examples:**
- Score = 4/20 → Multiplier = 1.2 → Widen CI by 20%
- Score = 10/20 → Multiplier = 1.5 → Widen CI by 50%
- Score = 16/20 → Multiplier = 1.8 → Widen CI by 80%
**Current width:** ___ points, **Adjusted width:** Current × Multiplier = ___ points
### Step 4: Set new bounds
**Method: Symmetric widening around current estimate**
```
New lower = Current estimate - (Adjusted width / 2)
New upper = Current estimate + (Adjusted width / 2)
```
**Example:** Current: 70%, CI: 60-80% (width = 20), Score: 12/20, Multiplier: 1.6, New width: 32, **New CI: 54-86%**
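A small Python sketch of the same arithmetic, assuming bounds expressed in percentage points; the function name and the clamping to [0, 100] are my additions:
```python
def widen_interval(estimate: float, lower: float, upper: float, score: int) -> tuple[float, float]:
    """Symmetrically widen a confidence interval based on a premortem score out of 20."""
    multiplier = 1 + score / 20                      # Width multiplier from Step 3
    new_width = (upper - lower) * multiplier
    new_lower = max(0.0, estimate - new_width / 2)   # Keep bounds in [0, 100]
    new_upper = min(100.0, estimate + new_width / 2)
    return new_lower, new_upper

# Worked example: estimate 70%, CI 60-80%, premortem score 12/20
print(widen_interval(70, 60, 80, 12))  # -> (54.0, 86.0)
```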
### Step 5: Document reasoning
**Record:** (1) What failure modes drove the adjustment, (2) Which perspective was most revealing, (3) What unknown unknowns were discovered, (4) What monitoring you'll do going forward
**Next:** Return to [menu](#interactive-menu)
---
## 6. Learn the Framework
**Deep dive into the methodology.**
### Resource Files
📄 **[Premortem Principles](resources/premortem-principles.md)** - Why humans are overconfident, hindsight bias and outcome bias, the power of inversion, research on premortem effectiveness
📄 **[Backcasting Method](resources/backcasting-method.md)** - Structured backcasting process, temporal reasoning techniques, causal chain construction, narrative vs quantitative backcasting
📄 **[Failure Mode Taxonomy](resources/failure-mode-taxonomy.md)** - Comprehensive failure categories, internal vs external failures, preventable vs unpreventable, PESTLE framework for tail risks, kill criteria templates
**Next:** Return to [menu](#interactive-menu)
---
## Quick Reference
### The Premortem Commandments
1. **Assume failure is certain** - Don't debate whether, debate why
2. **Be specific** - Vague risks don't help; concrete mechanisms do
3. **Use multiple perspectives** - Skeptic, fanatic, observer
4. **Quantify failure modes** - Estimate probability of each
5. **Set kill criteria** - Know what would change your mind
6. **Monitor signposts** - Track early warning signals
7. **Widen CIs** - If the premortem was too easy, you're overconfident
### One-Sentence Summary
> Assume your prediction has failed, write the history of how, and use that to identify blind spots and adjust confidence.
### Integration with Other Skills
- **Before:** Use after inside view analysis (you need something to stress-test)
- **After:** Use `scout-mindset-bias-check` to validate adjustments
- **Companion:** Works with `bayesian-reasoning-calibration` for quantitative updates
- **Feeds into:** Monitoring systems and adaptive forecasting
---
## Resource Files
📁 **resources/**
- [premortem-principles.md](resources/premortem-principles.md) - Theory and research
- [backcasting-method.md](resources/backcasting-method.md) - Temporal reasoning process
- [failure-mode-taxonomy.md](resources/failure-mode-taxonomy.md) - Comprehensive failure categories
---
**Ready to start? Choose a number from the [menu](#interactive-menu) above.**

---
# Backcasting Method
## Temporal Reasoning from Future to Present
**Backcasting** is the practice of starting from a future state and working backward to identify the path that led there.
**Contrast with forecasting:**
- **Forecasting:** Present → Future (What will happen?)
- **Backcasting:** Future → Present (How did this happen?)
---
## The Structured Backcasting Process
### Phase 1: Define the Future State
**Step 1.1: Set the resolution date**
- When will you know if the prediction came true?
- Be specific: "December 31, 2025"
**Step 1.2: State the outcome as a certainty**
- Don't say "might fail" or "probably fails"
- Say "HAS failed" or "DID fail"
- Use past tense
**Step 1.3: Emotional calibration**
- How surprising is this outcome?
- Shocking → You were very overconfident
- Expected → Appropriate confidence
- Inevitable → You were underconfident: your forecast gave this outcome far too little weight
---
### Phase 2: Construct the Timeline
**Step 2.1: Work backward in time chunks**
Start at resolution date, work backward in intervals:
**For 2-year prediction:**
- Resolution date (final failure)
- 6 months before (late-stage warning)
- 1 year before (mid-stage problems)
- 18 months before (early signs)
- Start date (initial conditions)
**For 6-month prediction:**
- Resolution date
- 1 month before
- 3 months before
- Start date
---
**Step 2.2: Fill in each time chunk**
For each period, ask:
- What was happening at this time?
- What decisions were made?
- What external events occurred?
- What warning signs appeared?
**Template:**
```
[Date]: [Event that occurred]
Effect: [How this contributed to failure]
Warning sign: [What would have indicated this was coming]
```
---
### Phase 3: Identify Causal Chains
**Step 3.1: Map the causal structure**
```
Initial condition → Trigger event → Cascade → Failure
```
**Example:**
```
Team overworked → Key engineer quit → Lost 3 months → Missed deadline → Funding fell through → Failure
```
---
**Step 3.2: Classify causes**
| Type | Description | Example |
|------|-------------|---------|
| **Necessary** | Without this, failure wouldn't happen | Regulatory ban |
| **Sufficient** | This alone causes failure | Founder death |
| **Contributing** | Makes failure more likely | Market downturn |
| **Catalytic** | Speeds up inevitable failure | Competitor launch |
---
**Step 3.3: Find the "brittle point"**
**Question:** Which single event, if prevented, would have avoided failure?
This is your **critical dependency** and highest-priority monitoring target.
---
### Phase 4: Narrative Construction
**Step 4.1: Write the headlines**
Imagine you're a journalist covering this failure. What headlines mark the timeline?
**Example:**
- "Startup X raises $10M Series A" (12 months before)
- "Startup X faces regulatory scrutiny" (9 months before)
- "Key executive departs Startup X" (6 months before)
- "Startup X misses Q3 targets" (3 months before)
- "Startup X shuts down, cites regulatory pressure" (resolution)
---
**Step 4.2: Write the obituary**
"Startup X failed because..."
Complete this sentence with a single, clear causal narrative. Force yourself to be concise.
**Good:**
"Startup X failed because regulatory uncertainty froze customer adoption, leading to missed revenue targets and inability to raise Series B."
**Bad (too vague):**
"Startup X failed because of various challenges."
---
**Step 4.3: The insider vs outsider narrative**
**Insider view:** What would the founders say?
- "We underestimated regulatory risk"
- "We hired too slowly"
- "We ran out of runway"
**Outsider view:** What would analysts say?
- "82% of startups in this space fail due to regulation"
- "Classic execution failure"
- "Unit economics never made sense"
**Compare:** Does your insider narrative match outsider base rates?
---
## Narrative vs Quantitative Backcasting
### Narrative Backcasting
**Strengths:**
- Rich, detailed stories
- Reveals unknown unknowns
- Good for complex systems
**Weaknesses:**
- Subject to narrative fallacy
- Can feel too "real" and bias you
- Hard to quantify
**Use when:**
- Complex, multi-causal failures
- Human/organizational factors dominate
- Need to surface blind spots
---
### Quantitative Backcasting
**Strengths:**
- Precise probability estimates
- Aggregates multiple failure modes
- Less subject to bias
**Weaknesses:**
- Requires data
- Can miss qualitative factors
- May feel mechanical
**Use when:**
- Statistical models exist
- Multiple independent failure modes
- Need to calculate confidence intervals
---
## Advanced Technique: Multiple Backcast Paths
### Generate 3-5 Different Failure Narratives
Instead of one story, create multiple:
**Path 1: Internal Execution Failure**
- Team burned out
- Product quality suffered
- Customers churned
- Revenue missed
- Funding dried up
**Path 2: External Market Shift**
- Competitor launched free tier
- Market commoditized
- Margins compressed
- Unit economics broke
- Shutdown
**Path 3: Regulatory Kill**
- New law passed
- Business model illegal
- Forced shutdown
**Path 4: Black Swan**
- Pandemic
- Supply chain collapse
- Force majeure
---
### Aggregate the Paths
**Estimate each path's share of the failure (conditional on failure having occurred):**
- Path 1 (Internal): 40%
- Path 2 (Market): 30%
- Path 3 (Regulatory): 20%
- Path 4 (Black Swan): 10%
**Total:** 100% (by construction, since we assumed failure)
**Insight:** These conditional shares show *where* the failure risk concentrates. To test your forecast, estimate each path's *unconditional* probability instead and sum them: if that sum exceeds your implied failure probability (here 25%), you are underestimating failure risk - or the paths overlap and you're double-counting.
**Adjustment:**
If paths are partially overlapping (e.g., internal failure AND market shift), use:
```
P(A or B) = P(A) + P(B) - P(A and B)
```
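A short Python sketch of that correction for two overlapping paths; the joint probability here is a hypothetical estimate:
```python
p_internal = 0.40   # P(internal execution failure)
p_market = 0.30     # P(external market shift)
p_both = 0.15       # Hypothetical estimate of both occurring together

# Inclusion-exclusion avoids double-counting the overlap
p_either = p_internal + p_market - p_both
print(f"{p_either:.0%}")  # -> 55%
```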
---
## Temporal Reasoning Techniques
### The "Newspaper Test"
**Method:**
For each time period, imagine you're reading a newspaper from that date.
**What headlines would you see?**
- Macro news (economy, politics, technology)
- Industry news (competitors, regulations, trends)
- Company news (your specific case)
**This forces you to think about:**
- External context, not just internal execution
- Leading indicators, not just lagging outcomes
---
### The "Retrospective Interview"
**Method:**
Imagine you're interviewing someone 1 year after failure.
**Questions:**
- "Looking back, when did you first know this was in trouble?"
- "What was the moment of no return?"
- "If you could go back, what would you change?"
- "What signs did you ignore?"
**This reveals:**
- Early warning signals you should monitor
- Critical decision points
- Hindsight that can become foresight
---
### The "Parallel Universe" Technique
**Method:**
Create two timelines:
**Timeline A: Success**
What had to happen for success?
**Timeline B: Failure**
What happened instead?
**Divergence point:**
Where do the timelines split? That's your critical uncertainty.
---
## Common Backcasting Mistakes
### Mistake 1: Being Too Vague
**Bad:** "Things went wrong and it failed."
**Good:** "Q3 2024: Competitor X launched free tier. Q4 2024: We lost 30% of customers. Q1 2025: Revenue dropped below runway. Q2 2025: Failed to raise Series B. Q3 2025: Shutdown."
**Fix:** Force yourself to name specific events and dates.
---
### Mistake 2: Only Internal Causes
**Bad:** "We executed poorly."
**Good:** "We executed poorly AND market shifted AND regulation changed."
**Fix:** Use PESTLE framework to ensure external factors are considered.
---
### Mistake 3: Hindsight Bias
**Bad:** "It was always obvious this would fail."
**Good:** "In retrospect, these warning signs were present, but at the time they were ambiguous."
**Fix:** Acknowledge that foresight ≠ hindsight. Don't pretend everything was obvious.
---
### Mistake 4: Single-Cause Narratives
**Bad:** "Failed because of regulation."
**Good:** "Regulation was necessary but not sufficient. Also needed internal execution failure and market downturn to actually fail."
**Fix:** Multi-causal explanations are almost always more accurate.
---
## Integration with Forecasting
### How Backcasting Improves Forecasts
**Before Backcasting:**
- Forecast: 80% success
- Reasoning: Strong team, good market, solid plan
- Confidence interval: 70-90%
**After Backcasting:**
- Identified failure modes: Regulatory (20%), Execution (15%), Market (10%), Black Swan (5%)
- Summed failure probability from backcasting: 50%
- **Realized:** The current 80% is too high
- **Adjusted forecast:** 60% success (between the original 80% and the naive 50%, since the summed modes likely overlap)
- **Adjusted CI:** 45-75% (wider, reflecting the newly surfaced uncertainty)
---
## Practical Workflow
### Quick Backcast (15 minutes)
1. **State outcome:** "It failed."
2. **One-sentence cause:** "Failed because..."
3. **Three key events:** Timeline points
4. **Probability check:** Does failure narrative feel >20% likely?
5. **Adjust:** If yes, lower confidence.
---
### Rigorous Backcast (60 minutes)
1. Define future state and resolution date
2. Create timeline working backward in chunks
3. Write detailed narrative for each period
4. Identify causal chains (necessary, sufficient, contributing)
5. Generate 3-5 alternative failure paths
6. Estimate probability of each path
7. Aggregate and compare to current forecast
8. Adjust probability and confidence intervals
9. Set monitoring signposts
10. Document assumptions
---
**Return to:** [Main Skill](../SKILL.md#interactive-menu)

---
# Failure Mode Taxonomy
## Comprehensive Categories for Systematic Risk Identification
---
## The Two Primary Dimensions
### 1. Internal vs External
**Internal failures:**
- Under your control (at least partially)
- Organizational, execution, resource-based
- Can be prevented with better planning
**External failures:**
- Outside your control
- Market, regulatory, competitive, acts of God
- Can only be mitigated, not prevented
---
### 2. Preventable vs Unpreventable
**Preventable:**
- Known risk with available mitigation
- Happens due to negligence or oversight
- "We should have seen this coming"
**Unpreventable (Black Swans):**
- Unknown unknowns
- No reasonable way to anticipate
- "Nobody could have predicted this"
---
## Four Quadrants
| | **Preventable** | **Unpreventable** |
|---|---|---|
| **Internal** | Execution failure, bad hiring | Key person illness, burnout |
| **External** | Competitor launch (foreseeable) | Pandemic, war, black swan |
**Premortem focus:** Mostly on **preventable failures** (both internal and external)
---
## Internal Failure Modes
### 1. Execution Failures
**Team/People:**
- Key person quits
- Co-founder conflict
- Team burnout
- Cultural toxicity
- Skills gap
- Hiring too slow/fast
- Onboarding failure
**Process:**
- Missed deadlines
- Scope creep
- Poor prioritization
- Communication breakdown
- Decision paralysis
- Process overhead
- Lack of process
**Product/Technical:**
- Product quality issues
- Technical debt collapse
- Scalability failures
- Security breach
- Data loss
- Integration failures
- Performance degradation
---
### 2. Resource Failures
**Financial:**
- Ran out of money (runway)
- Failed to raise funding
- Revenue shortfall
- Cost overruns
- Budget mismanagement
- Fraud/embezzlement
- Cash flow crisis
**Time:**
- Too slow to market
- Missed window of opportunity
- Critical path delays
- Underestimated timeline
- Overcommitted resources
**Knowledge/IP:**
- Lost key knowledge (person left)
- IP stolen
- Failed to protect IP
- Technological obsolescence
- R&D dead ends
---
### 3. Strategic Failures
**Market fit:**
- Built wrong product
- Solved non-problem
- Target market too small
- Pricing wrong
- Value prop unclear
- Positioning failure
**Business model:**
- Unit economics don't work
- CAC > LTV
- Churn too high
- Margins too thin
- Revenue model broken
- Unsustainable burn rate
**Competitive:**
- Differentiation lost
- Commoditization
- Underestimated competition
- Failed to defend moat
- Technology leapfrogged
---
## External Failure Modes
### 1. Market Failures
**Demand side:**
- Market smaller than expected
- Adoption slower than expected
- Customer behavior changed
- Willingness to pay dropped
- Switching costs too high
**Supply side:**
- Input costs increased
- Suppliers failed
- Supply chain disruption
- Talent shortage
- Infrastructure unavailable
**Market structure:**
- Market consolidated
- Winner-take-all dynamics
- Network effects favored competitor
- Platform risk (dependency on another company)
---
### 2. Competitive Failures
**Direct competition:**
- Incumbent responded aggressively
- New entrant with more capital
- Competitor launched superior product
- Price war
- Competitor acquired key talent
**Ecosystem:**
- Complementary product failed
- Partnership fell through
- Distribution channel cut off
- Platform changed terms
- Ecosystem shifted away
---
### 3. Regulatory/Legal Failures
**Regulation:**
- New law banned business model
- Compliance costs too high
- Licensing denied
- Government investigation
- Regulatory capture by incumbents
**Legal:**
- Lawsuit (IP, employment, customer)
- Contract breach
- Fraud allegations
- Criminal charges
- Bankruptcy proceedings
---
### 4. Macroeconomic Failures
**Economic:**
- Recession
- Inflation
- Interest rate spike
- Currency fluctuation
- Credit crunch
- Stock market crash
**Geopolitical:**
- War
- Trade restrictions
- Sanctions
- Political instability
- Coup/revolution
- Expropriation
---
### 5. Technological Failures
**Disruption:**
- New technology made product obsolete
- Paradigm shift (e.g., mobile, cloud, AI)
- Standard changed
- Interoperability broke
**Infrastructure:**
- Cloud provider outage
- Internet backbone failure
- Power grid failure
- Critical dependency failed
---
### 6. Social/Cultural Failures
**Public opinion:**
- Reputation crisis
- Boycott
- Social media backlash
- Cultural shift away from product
- Ethical concerns raised
**Demographics:**
- Target demographic shrunk
- Generational shift
- Migration patterns changed
- Urbanization/de-urbanization
---
### 7. Environmental/Health Failures
**Natural disasters:**
- Earthquake, hurricane, flood
- Wildfire
- Drought
- Extreme weather
**Health:**
- Pandemic
- Endemic disease outbreak
- Health regulation
- Contamination/recall
---
## Black Swans (Unpreventable External)
### Characteristics
- Extreme impact
- Rationalized in hindsight, yet invisible in foresight
- Outside normal expectations
### Examples
- 9/11 terrorist attacks
- COVID-19 pandemic
- 2008 financial crisis
- Fukushima disaster
- Technological singularity
- Asteroid impact
### How to Handle
**Can't prevent, can:**
1. **Increase robustness** - Survive the shock
2. **Increase antifragility** - Benefit from volatility
3. **Widen confidence intervals** - Acknowledge unknown unknowns
4. **Plan for "unspecified bad thing"** - Generic resilience
---
## PESTLE Framework for Systematic Enumeration
Use this checklist to ensure comprehensive coverage:
### Political
- [ ] Elections/regime change
- [ ] Policy shifts
- [ ] Government instability
- [ ] Geopolitical conflict
- [ ] Trade agreements
- [ ] Lobbying success/failure
### Economic
- [ ] Recession/depression
- [ ] Inflation/deflation
- [ ] Interest rates
- [ ] Currency fluctuations
- [ ] Market bubbles/crashes
- [ ] Unemployment
### Social
- [ ] Demographic shifts
- [ ] Cultural trends
- [ ] Public opinion
- [ ] Social movements
- [ ] Consumer behavior change
- [ ] Generational values
### Technological
- [ ] Disruptive innovation
- [ ] Obsolescence
- [ ] Cyber attacks
- [ ] Infrastructure failure
- [ ] Standards change
- [ ] Technology convergence
### Legal
- [ ] New regulations
- [ ] Lawsuits
- [ ] IP challenges
- [ ] Compliance requirements
- [ ] Contract disputes
- [ ] Liability exposure
### Environmental
- [ ] Climate change
- [ ] Natural disasters
- [ ] Pandemics
- [ ] Resource scarcity
- [ ] Pollution/contamination
- [ ] Sustainability pressures
---
## Kill Criteria Templates
### What is a Kill Criterion?
**Definition:** A specific event that, if it occurs, drastically changes your probability.
**Format:** "If [event], then probability drops to [X%]"
---
### Template Library
**Regulatory kill criteria:**
```
If [specific regulation] passes, probability drops to [X]%
If FDA rejects in Phase [N], probability drops to [X]%
If government bans [activity], probability drops to 0%
```
**Competitive kill criteria:**
```
If [competitor] launches [feature], probability drops to [X]%
If incumbent drops price by [X]%, probability drops to [X]%
If [big tech co] enters market, probability drops to [X]%
```
**Financial kill criteria:**
```
If we miss Q[N] revenue target by >20%, probability drops to [X]%
If we can't raise Series [X] by [date], probability drops to [X]%
If burn rate exceeds $[X]/month, probability drops to [X]%
```
**Team kill criteria:**
```
If [key person] leaves, probability drops to [X]%
If we can't hire [critical role] by [date], probability drops to [X]%
If team size drops below [X], probability drops to [X]%
```
**Product kill criteria:**
```
If we can't ship by [date], probability drops to [X]%
If NPS drops below [X], probability drops to [X]%
If churn exceeds [X]%, probability drops to [X]%
```
**Market kill criteria:**
```
If TAM shrinks below $[X], probability drops to [X]%
If adoption rate < [X]% by [date], probability drops to [X]%
If market shifts to [substitute], probability drops to [X]%
```
**Macro kill criteria:**
```
If recession occurs, probability drops to [X]%
If interest rates exceed [X]%, probability drops to [X]%
If war breaks out in [region], probability drops to [X]%
```
---
## Failure Mode Probability Estimation
### Quick Heuristics
**For each failure mode, estimate:**
**Very Low (1-5%):**
- Black swans
- Never happened in this industry
- Requires multiple unlikely events
**Low (5-15%):**
- Happened before but rare
- Strong mitigations in place
- Early warning systems exist
**Medium (15-35%):**
- Common failure mode in industry
- Moderate mitigations
- Uncertain effectiveness
**High (35-70%):**
- Very common failure mode
- Weak mitigations
- History of this happening
**Very High (>70%):**
- Almost certain to occur
- No effective mitigation
- Base rate is very high
---
### Aggregation
**If failure modes are independent:**
```
P(any failure) = 1 - ∏(1 - P(failure_i))
```
**Example:**
- P(regulatory) = 20%
- P(competitive) = 30%
- P(execution) = 25%
```
P(any) = 1 - (0.8 × 0.7 × 0.75) = 1 - 0.42 = 58%
```
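As a sanity check, the same aggregation in a few lines of Python, using the example's numbers:
```python
import math

mode_probs = [0.20, 0.30, 0.25]  # regulatory, competitive, execution

# Assuming independence: P(any) = 1 - product of (1 - p_i)
p_any = 1 - math.prod(1 - p for p in mode_probs)
print(f"{p_any:.0%}")  # -> 58%
```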
**If failure modes are dependent:**
Use Venn diagram logic or conditional probabilities (more complex).
---
## Monitoring and Signposts
### Early Warning Signals
For each major failure mode, identify **leading indicators:**
**Example: "Key engineer will quit"**
**Leading indicators (6-12 months before):**
- Code commit frequency drops
- Participation in meetings declines
- Starts saying "no" more often
- Takes more sick days
- LinkedIn profile updated
- Asks about vesting schedule
**Action:** Monitor these monthly, set alerts
---
### Monitoring Cadence
| Risk Level | Check Frequency |
|------------|----------------|
| Very High (>50%) | Weekly |
| High (25-50%) | Bi-weekly |
| Medium (10-25%) | Monthly |
| Low (5-10%) | Quarterly |
| Very Low (<5%) | Annually |
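A tiny helper sketch that maps a failure-mode probability to the cadence in this table (thresholds mirror the table; the function name is illustrative):
```python
def check_frequency(p: float) -> str:
    """Map a failure-mode probability to a monitoring cadence."""
    if p > 0.50:
        return "weekly"
    if p > 0.25:
        return "bi-weekly"
    if p > 0.10:
        return "monthly"
    if p > 0.05:
        return "quarterly"
    return "annually"

print(check_frequency(0.20))  # -> monthly
```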
---
## Practical Usage
**Step-by-Step:**
1. Choose categories (Internal/External, PESTLE)
2. Brainstorm 10-15 failure modes
3. Estimate a probability for each
4. Aggregate the probabilities
5. Compare to your current forecast
6. Identify the top 3-5 risks
7. Set kill criteria
8. Define monitoring signposts
9. Set calendar reminders based on risk level
**Return to:** [Main Skill](../SKILL.md#interactive-menu)

---
# Premortem Principles
## The Psychology of Overconfidence
### Why We're Systematically Overconfident
**The Planning Fallacy:**
- We focus on best-case scenarios
- We ignore historical delays and failures
- We assume "our case is different"
- We underestimate Murphy's Law
**Research:**
- 90% of projects run over budget
- 70% of projects run late
- Yet 80% of project managers predict on-time completion
**The fix:** Premortem forces you to imagine failure has already happened.
---
## Hindsight Bias
### The "I Knew It All Along" Effect
**What it is:**
After an outcome occurs, we believe we "always knew" it would happen.
**Example:**
- Before 2008 crash: "Housing is safe"
- After 2008 crash: "The signs were obvious"
**Problem for forecasting:**
If we think outcomes were predictable in hindsight, we'll be overconfident going forward.
**The premortem fix:**
By forcing yourself into "hindsight mode" BEFORE the outcome, you:
1. Generate the warning signs you would have seen
2. Realize how many ways things could go wrong
3. Reduce overconfidence
---
## The Power of Inversion
### Solving Problems Backward
**Charlie Munger:**
> "Invert, always invert. Many hard problems are best solved backward."
**In forecasting:**
- Hard: "Will this succeed?" (requires imagining all paths to success)
- Easier: "It failed - why?" (failure modes are more concrete)
**Why this works:**
- Failure modes are finite and enumerable
- Success paths are infinite and vague
- Humans are better at imagining concrete negatives than abstract positives
---
## Research on Premortem Effectiveness
### Gary Klein's Studies
**Original research:**
- Teams that did premortems identified 30% more risks
- Risks identified were more specific and actionable
- Teams adjusted plans proactively
**Key finding:**
> "Prospective hindsight" (imagining an event has happened) improves recall by 30%
---
### Kahneman's Endorsement
**Daniel Kahneman:**
> "The premortem is the single best debiasing technique I know."
**Why it works:**
1. **Legitimizes doubt** - In group settings, dissent is hard. Premortem makes it safe.
2. **Concrete > Abstract** - "Identify risks" is vague. "Explain the failure" is concrete.
3. **Defeats groupthink** - Forces even optimists to imagine failure.
---
## Outcome Bias
### Judging Decisions by Results, Not Process
**What it is:**
We judge the quality of a decision based on its outcome, not the process.
**Example:**
- Drunk driver gets home safely → "It was fine"
- Sober driver has accident → "Bad decision to drive"
**Reality:**
Quality of decision ≠ Quality of outcome (because of randomness)
**For forecasting:**
A 90% prediction that fails doesn't mean the forecast was bad (10% events happen 10% of the time).
**The premortem fix:**
By imagining failure BEFORE it happens, you evaluate the decision process independent of outcome.
---
## When Premortems Work Best
### High-Confidence Predictions
**Use when:**
- Your probability is >80% or <20%
- You feel very certain
- Confidence intervals are narrow
**Why:**
These are the predictions most likely to be overconfident.
---
### Team Forecasting
**Use when:**
- Multiple people are making predictions
- Groupthink is a risk
- Dissent is being suppressed
**Why:**
Premortems legitimize expressing doubts without seeming disloyal.
---
### Important Decisions
**Use when:**
- Stakes are high
- Irreversible commitments
- Significant resource allocation
**Why:**
Worth the time investment to reduce overconfidence.
---
## When Premortems Don't Help
### Already Uncertain
**Skip if:**
- Your probability is ~50%
- Confidence intervals are already wide
- You're confused, not confident
**Why:**
You don't need a premortem to tell you you're uncertain.
---
### Trivial Predictions
**Skip if:**
- Low stakes
- Easily reversible
- Not worth the time
**Why:**
Premortems take effort; save them for important forecasts.
---
## The Premortem vs Other Techniques
### Premortem vs Red Teaming
**Red Teaming:**
- Adversarial: Find flaws in the plan
- Focus: Attack the strategy
- Mindset: "How do we defeat this?"
**Premortem:**
- Temporal: Failure has occurred
- Focus: Understand what happened
- Mindset: "What led to this outcome?"
**Use both:** Red team attacks the plan, premortem explains the failure.
---
### Premortem vs Scenario Planning
**Scenario Planning:**
- Multiple futures: Good, bad, likely
- Branching paths
- Strategies for each scenario
**Premortem:**
- Single future: Failure has occurred
- Backward path
- Identify risks to avoid
**Use both:** Scenario planning explores, premortem stress-tests.
---
### Premortem vs Risk Register
**Risk Register:**
- List of identified risks
- Probability and impact scores
- Mitigation strategies
**Premortem:**
- Narrative of failure
- Causal chains
- Discover unknown unknowns
**Use both:** Premortem feeds into risk register.
---
## Cognitive Mechanisms
### Why Premortems Defeat Overconfidence
**1. Prospective Hindsight**
Imagining an event has already occurred makes it roughly 30% easier to generate concrete reasons for it.
**2. Permission to Doubt**
Social license to express skepticism without seeming negative.
**3. Concrete Failure Modes**
Abstract "risks" become specific "this happened, then this, then this."
**4. Temporal Distancing**
Viewing from the future reduces emotional attachment to current plan.
**5. Narrative Construction**
Building a story forces causal reasoning, revealing gaps.
---
## Common Objections
### "This is too negative!"
**Response:**
Pessimism during planning prevents failure during execution.
**Reframe:**
Not negative - realistic. You're not hoping for failure, you're preparing for it.
---
### "We don't have time for this."
**Response:**
- Premortem: 30 minutes
- Recovering from preventable failure: Months/years
**Math:**
If a 30-minute premortem prevents even one in ten preventable failures, the return is enormous.
---
### "Our case really is different!"
**Response:**
Maybe. But the premortem will reveal HOW it's different, not just assert it.
**Test:**
If the premortem reveals nothing new, you were right. If it reveals risks, you weren't.
---
## Practical Takeaways
1. **Use for high-confidence predictions** - When you feel certain
2. **Legitimize skepticism** - Makes doubt socially acceptable
3. **Concrete failure modes** - Forces specific risks, not vague worries
4. **Widen confidence intervals** - Adjust based on plausibility of failure narrative
5. **Set kill criteria** - Know what would change your mind
6. **Monitor signposts** - Track early warning signals
**The Rule:**
> If you can easily write a plausible failure narrative, your confidence is too high.
---
**Return to:** [Main Skill](../SKILL.md#interactive-menu)