# Backcasting Method
## Temporal Reasoning from Future to Present
**Backcasting** is the practice of starting from a future state and working backward to identify the path that led there.
**Contrast with forecasting:**
- **Forecasting:** Present → Future (What will happen?)
- **Backcasting:** Future → Present (How did this happen?)
---
## The Structured Backcasting Process
### Phase 1: Define the Future State
**Step 1.1: Set the resolution date**
- When will you know if the prediction came true?
- Be specific: "December 31, 2025"
**Step 1.2: State the outcome as a certainty**
- Don't say "might fail" or "probably fails"
- Say "HAS failed" or "DID fail"
- Use past tense
**Step 1.3: Emotional calibration**
- How surprising is this outcome?
- Shocking → You were very overconfident
- Expected → Appropriate confidence
- Inevitable → You were underconfident
---
### Phase 2: Construct the Timeline
**Step 2.1: Work backward in time chunks**
Start at resolution date, work backward in intervals:
**For 2-year prediction:**
- Resolution date (final failure)
- 6 months before (late-stage warning)
- 1 year before (mid-stage problems)
- 18 months before (early signs)
- Start date (initial conditions)
**For 6-month prediction:**
- Resolution date
- 1 month before
- 3 months before
- Start date
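The backward chunking above can be sketched mechanically. A minimal illustration (the function name and even-spacing choice are my own, not part of the method):

```python
from datetime import date, timedelta

def backcast_checkpoints(start: date, resolution: date, n_chunks: int = 4) -> list[date]:
    """Checkpoint dates from the resolution date back to the start date."""
    step = (resolution - start).days // n_chunks
    # Resolution date first, then evenly spaced earlier checkpoints, ending at start.
    return [resolution - timedelta(days=step * i) for i in range(n_chunks)] + [start]

# 2-year prediction resolving at the end of 2025:
for checkpoint in backcast_checkpoints(date(2024, 1, 1), date(2025, 12, 31)):
    print(checkpoint.isoformat())
```

Each printed date becomes one time chunk to fill in during Step 2.2.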
---
**Step 2.2: Fill in each time chunk**
For each period, ask:
- What was happening at this time?
- What decisions were made?
- What external events occurred?
- What warning signs appeared?
**Template:**
```
[Date]: [Event that occurred]
Effect: [How this contributed to failure]
Warning sign: [What would have indicated this was coming]
```
---
### Phase 3: Identify Causal Chains
**Step 3.1: Map the causal structure**
```
Initial condition → Trigger event → Cascade → Failure
```
**Example:**
```
Team overworked → Key engineer quit → Lost 3 months → Missed deadline → Funding fell through → Failure
```
---
**Step 3.2: Classify causes**
| Type | Description | Example |
|------|-------------|---------|
| **Necessary** | Without this, failure wouldn't happen | Regulatory ban |
| **Sufficient** | This alone causes failure | Founder death |
| **Contributing** | Makes failure more likely | Market downturn |
| **Catalytic** | Speeds up inevitable failure | Competitor launch |
---
**Step 3.3: Find the "brittle point"**
**Question:** Which single event, if prevented, would have avoided failure?
This is your **critical dependency** and highest-priority monitoring target.
---
### Phase 4: Narrative Construction
**Step 4.1: Write the headlines**
Imagine you're a journalist covering this failure. What headlines mark the timeline?
**Example:**
- "Startup X raises $10M Series A" (12 months before)
- "Startup X faces regulatory scrutiny" (9 months before)
- "Key executive departs Startup X" (6 months before)
- "Startup X misses Q3 targets" (3 months before)
- "Startup X shuts down, cites regulatory pressure" (resolution)
---
**Step 4.2: Write the obituary**
"Startup X failed because..."
Complete this sentence with a single, clear causal narrative. Force yourself to be concise.
**Good:**
"Startup X failed because regulatory uncertainty froze customer adoption, leading to missed revenue targets and inability to raise Series B."
**Bad (too vague):**
"Startup X failed because of various challenges."
---
**Step 4.3: The insider vs outsider narrative**
**Insider view:** What would the founders say?
- "We underestimated regulatory risk"
- "We hired too slowly"
- "We ran out of runway"
**Outsider view:** What would analysts say?
- "82% of startups in this space fail due to regulation"
- "Classic execution failure"
- "Unit economics never made sense"
**Compare:** Does your insider narrative match outsider base rates?
---
## Narrative vs Quantitative Backcasting
### Narrative Backcasting
**Strengths:**
- Rich, detailed stories
- Reveals unknown unknowns
- Good for complex systems
**Weaknesses:**
- Subject to narrative fallacy
- Can feel too "real" and bias you
- Hard to quantify
**Use when:**
- Complex, multi-causal failures
- Human/organizational factors dominate
- Need to surface blind spots
---
### Quantitative Backcasting
**Strengths:**
- Precise probability estimates
- Aggregates multiple failure modes
- Less subject to bias
**Weaknesses:**
- Requires data
- Can miss qualitative factors
- May feel mechanical
**Use when:**
- Statistical models exist
- Multiple independent failure modes
- Need to calculate confidence intervals
---
## Advanced Technique: Multiple Backcast Paths
### Generate 3-5 Different Failure Narratives
Instead of one story, create multiple:
**Path 1: Internal Execution Failure**
- Team burned out
- Product quality suffered
- Customers churned
- Revenue missed
- Funding dried up
**Path 2: External Market Shift**
- Competitor launched free tier
- Market commoditized
- Margins compressed
- Unit economics broke
- Shutdown
**Path 3: Regulatory Kill**
- New law passed
- Business model illegal
- Forced shutdown
**Path 4: Black Swan**
- Pandemic
- Supply chain collapse
- Force majeure
---
### Aggregate the Paths
**Calculate probability for each path:**
- Path 1 (Internal): 40%
- Path 2 (Market): 30%
- Path 3 (Regulatory): 20%
- Path 4 (Black Swan): 10%
**Total (conditional on failure):** 100% — these path probabilities assume failure occurred, so they must sum to 100%
**Insight:** To recover unconditional risk, multiply each path by your overall failure forecast. If you instead estimated the paths unconditionally and they sum to 100% while your forecast gives only 25% failure, either your forecast is far too low or the paths overlap and you are double-counting.
**Adjustment:**
If paths are partially overlapping (e.g., internal failure AND market shift), use:
```
P(A or B) = P(A) + P(B) - P(A and B)
```
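Applied to the example paths above, with an assumed 15% joint probability for the internal-failure and market-shift paths (an illustrative figure, not from the method):

```python
def p_either(p_a: float, p_b: float, p_both: float) -> float:
    """Inclusion-exclusion for two overlapping failure paths."""
    return p_a + p_b - p_both

# Internal execution failure (40%) or external market shift (30%),
# with an assumed 15% chance both occur together:
combined = p_either(0.40, 0.30, 0.15)
print(round(combined, 2))  # 0.55 — less than the naive 0.70 sum
```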
---
## Temporal Reasoning Techniques
### The "Newspaper Test"
**Method:**
For each time period, imagine you're reading a newspaper from that date.
**What headlines would you see?**
- Macro news (economy, politics, technology)
- Industry news (competitors, regulations, trends)
- Company news (your specific case)
**This forces you to think about:**
- External context, not just internal execution
- Leading indicators, not just lagging outcomes
---
### The "Retrospective Interview"
**Method:**
Imagine you're interviewing someone 1 year after failure.
**Questions:**
- "Looking back, when did you first know this was in trouble?"
- "What was the moment of no return?"
- "If you could go back, what would you change?"
- "What signs did you ignore?"
**This reveals:**
- Early warning signals you should monitor
- Critical decision points
- Hindsight that can become foresight
---
### The "Parallel Universe" Technique
**Method:**
Create two timelines:
**Timeline A: Success**
What had to happen for success?
**Timeline B: Failure**
What happened instead?
**Divergence point:**
Where do the timelines split? That's your critical uncertainty.
---
## Common Backcasting Mistakes
### Mistake 1: Being Too Vague
**Bad:** "Things went wrong and it failed."
**Good:** "Q3 2024: Competitor X launched free tier. Q4 2024: We lost 30% of customers. Q1 2025: Revenue dropped below runway. Q2 2025: Failed to raise Series B. Q3 2025: Shutdown."
**Fix:** Force yourself to name specific events and dates.
---
### Mistake 2: Only Internal Causes
**Bad:** "We executed poorly."
**Good:** "We executed poorly AND market shifted AND regulation changed."
**Fix:** Use PESTLE framework to ensure external factors are considered.
---
### Mistake 3: Hindsight Bias
**Bad:** "It was always obvious this would fail."
**Good:** "In retrospect, these warning signs were present, but at the time they were ambiguous."
**Fix:** Acknowledge that foresight ≠ hindsight. Don't pretend everything was obvious.
---
### Mistake 4: Single-Cause Narratives
**Bad:** "Failed because of regulation."
**Good:** "Regulation was necessary but not sufficient. Also needed internal execution failure and market downturn to actually fail."
**Fix:** Multi-causal explanations are almost always more accurate.
---
## Integration with Forecasting
### How Backcasting Improves Forecasts
**Before Backcasting:**
- Forecast: 80% success
- Reasoning: Strong team, good market, solid plan
- Confidence interval: 70-90%
**After Backcasting:**
- Identified failure modes: Regulatory (20%), Execution (15%), Market (10%), Black Swan (5%)
- Total failure probability from backcasting: 50%
- **Realized:** The current 80% is too high
- **Adjusted forecast:** 60% success (the backcast's 50% failure likely double-counts overlapping paths, so the true risk sits between the original 20% and the backcast's 50%)
- **Adjusted CI:** 45-75% (wider, reflecting the added uncertainty)
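One way to mechanize this adjustment (the `overlap_discount` knob is my own simplification for correlated failure modes, not a standard parameter):

```python
def adjusted_success(failure_modes: dict[str, float], overlap_discount: float = 0.0) -> float:
    """Sum failure-mode estimates (treated as roughly exclusive), optionally
    discount for double-counted overlap, and return the implied success probability."""
    total_failure = sum(failure_modes.values()) * (1 - overlap_discount)
    return 1 - total_failure

modes = {"regulatory": 0.20, "execution": 0.15, "market": 0.10, "black_swan": 0.05}
print(round(adjusted_success(modes), 2))                        # 0.5 — raw backcast
print(round(adjusted_success(modes, overlap_discount=0.2), 2))  # 0.6 — adjusted forecast
```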
---
## Practical Workflow
### Quick Backcast (15 minutes)
1. **State outcome:** "It failed."
2. **One-sentence cause:** "Failed because..."
3. **Three key events:** Timeline points
4. **Probability check:** Does failure narrative feel >20% likely?
5. **Adjust:** If yes, lower confidence.
---
### Rigorous Backcast (60 minutes)
1. Define future state and resolution date
2. Create timeline working backward in chunks
3. Write detailed narrative for each period
4. Identify causal chains (necessary, sufficient, contributing)
5. Generate 3-5 alternative failure paths
6. Estimate probability of each path
7. Aggregate and compare to current forecast
8. Adjust probability and confidence intervals
9. Set monitoring signposts
10. Document assumptions
---
**Return to:** [Main Skill](../SKILL.md#interactive-menu)

# Failure Mode Taxonomy
## Comprehensive Categories for Systematic Risk Identification
---
## The Two Primary Dimensions
### 1. Internal vs External
**Internal failures:**
- Under your control (at least partially)
- Organizational, execution, resource-based
- Can be prevented with better planning
**External failures:**
- Outside your control
- Market, regulatory, competitive, acts of God
- Can only be mitigated, not prevented
---
### 2. Preventable vs Unpreventable
**Preventable:**
- Known risk with available mitigation
- Happens due to negligence or oversight
- "We should have seen this coming"
**Unpreventable (Black Swans):**
- Unknown unknowns
- No reasonable way to anticipate
- "Nobody could have predicted this"
---
## Four Quadrants
| | **Preventable** | **Unpreventable** |
|---|---|---|
| **Internal** | Execution failure, bad hiring | Key person illness, burnout |
| **External** | Competitor launch (foreseeable) | Pandemic, war, black swan |
**Premortem focus:** Mostly on **preventable failures** (both internal and external)
---
## Internal Failure Modes
### 1. Execution Failures
**Team/People:**
- Key person quits
- Co-founder conflict
- Team burnout
- Cultural toxicity
- Skills gap
- Hiring too slow/fast
- Onboarding failure
**Process:**
- Missed deadlines
- Scope creep
- Poor prioritization
- Communication breakdown
- Decision paralysis
- Process overhead
- Lack of process
**Product/Technical:**
- Product quality issues
- Technical debt collapse
- Scalability failures
- Security breach
- Data loss
- Integration failures
- Performance degradation
---
### 2. Resource Failures
**Financial:**
- Ran out of money (runway)
- Failed to raise funding
- Revenue shortfall
- Cost overruns
- Budget mismanagement
- Fraud/embezzlement
- Cash flow crisis
**Time:**
- Too slow to market
- Missed window of opportunity
- Critical path delays
- Underestimated timeline
- Overcommitted resources
**Knowledge/IP:**
- Lost key knowledge (person left)
- IP stolen
- Failed to protect IP
- Technological obsolescence
- R&D dead ends
---
### 3. Strategic Failures
**Market fit:**
- Built wrong product
- Solved non-problem
- Target market too small
- Pricing wrong
- Value prop unclear
- Positioning failure
**Business model:**
- Unit economics don't work
- CAC > LTV
- Churn too high
- Margins too thin
- Revenue model broken
- Unsustainable burn rate
**Competitive:**
- Differentiation lost
- Commoditization
- Underestimated competition
- Failed to defend moat
- Technology leapfrogged
---
## External Failure Modes
### 1. Market Failures
**Demand side:**
- Market smaller than expected
- Adoption slower than expected
- Customer behavior changed
- Willingness to pay dropped
- Switching costs too high
**Supply side:**
- Input costs increased
- Suppliers failed
- Supply chain disruption
- Talent shortage
- Infrastructure unavailable
**Market structure:**
- Market consolidated
- Winner-take-all dynamics
- Network effects favored competitor
- Platform risk (dependency on another company)
---
### 2. Competitive Failures
**Direct competition:**
- Incumbent responded aggressively
- New entrant with more capital
- Competitor launched superior product
- Price war
- Competitor acquired key talent
**Ecosystem:**
- Complementary product failed
- Partnership fell through
- Distribution channel cut off
- Platform changed terms
- Ecosystem shifted away
---
### 3. Regulatory/Legal Failures
**Regulation:**
- New law banned business model
- Compliance costs too high
- Licensing denied
- Government investigation
- Regulatory capture by incumbents
**Legal:**
- Lawsuit (IP, employment, customer)
- Contract breach
- Fraud allegations
- Criminal charges
- Bankruptcy proceedings
---
### 4. Macroeconomic Failures
**Economic:**
- Recession
- Inflation
- Interest rate spike
- Currency fluctuation
- Credit crunch
- Stock market crash
**Geopolitical:**
- War
- Trade restrictions
- Sanctions
- Political instability
- Coup/revolution
- Expropriation
---
### 5. Technological Failures
**Disruption:**
- New technology made product obsolete
- Paradigm shift (e.g., mobile, cloud, AI)
- Standard changed
- Interoperability broke
**Infrastructure:**
- Cloud provider outage
- Internet backbone failure
- Power grid failure
- Critical dependency failed
---
### 6. Social/Cultural Failures
**Public opinion:**
- Reputation crisis
- Boycott
- Social media backlash
- Cultural shift away from product
- Ethical concerns raised
**Demographics:**
- Target demographic shrunk
- Generational shift
- Migration patterns changed
- Urbanization/de-urbanization
---
### 7. Environmental/Health Failures
**Natural disasters:**
- Earthquake, hurricane, flood
- Wildfire
- Drought
- Extreme weather
**Health:**
- Pandemic
- Endemic disease outbreak
- Health regulation
- Contamination/recall
---
## Black Swans (Unpreventable External)
### Characteristics
- Extreme impact
- Rationalized in hindsight, invisible in foresight
- Outside normal expectations
### Examples (historical and hypothetical)
- 9/11 terrorist attacks
- COVID-19 pandemic
- 2008 financial crisis
- Fukushima disaster
- Technological singularity
- Asteroid impact
### How to Handle
**Can't prevent, can:**
1. **Increase robustness** - Survive the shock
2. **Increase antifragility** - Benefit from volatility
3. **Widen confidence intervals** - Acknowledge unknown unknowns
4. **Plan for "unspecified bad thing"** - Generic resilience
---
## PESTLE Framework for Systematic Enumeration
Use this checklist to ensure comprehensive coverage:
### Political
- [ ] Elections/regime change
- [ ] Policy shifts
- [ ] Government instability
- [ ] Geopolitical conflict
- [ ] Trade agreements
- [ ] Lobbying success/failure
### Economic
- [ ] Recession/depression
- [ ] Inflation/deflation
- [ ] Interest rates
- [ ] Currency fluctuations
- [ ] Market bubbles/crashes
- [ ] Unemployment
### Social
- [ ] Demographic shifts
- [ ] Cultural trends
- [ ] Public opinion
- [ ] Social movements
- [ ] Consumer behavior change
- [ ] Generational values
### Technological
- [ ] Disruptive innovation
- [ ] Obsolescence
- [ ] Cyber attacks
- [ ] Infrastructure failure
- [ ] Standards change
- [ ] Technology convergence
### Legal
- [ ] New regulations
- [ ] Lawsuits
- [ ] IP challenges
- [ ] Compliance requirements
- [ ] Contract disputes
- [ ] Liability exposure
### Environmental
- [ ] Climate change
- [ ] Natural disasters
- [ ] Pandemics
- [ ] Resource scarcity
- [ ] Pollution/contamination
- [ ] Sustainability pressures
---
## Kill Criteria Templates
### What is a Kill Criterion?
**Definition:** A specific event that, if it occurs, drastically changes your probability.
**Format:** "If [event], then probability drops to [X%]"
---
### Template Library
**Regulatory kill criteria:**
```
If [specific regulation] passes, probability drops to [X]%
If FDA rejects in Phase [N], probability drops to [X]%
If government bans [activity], probability drops to 0%
```
**Competitive kill criteria:**
```
If [competitor] launches [feature], probability drops to [X]%
If incumbent drops price by [X]%, probability drops to [X]%
If [big tech co] enters market, probability drops to [X]%
```
**Financial kill criteria:**
```
If we miss Q[N] revenue target by >20%, probability drops to [X]%
If we can't raise Series [X] by [date], probability drops to [X]%
If burn rate exceeds $[X]/month, probability drops to [X]%
```
**Team kill criteria:**
```
If [key person] leaves, probability drops to [X]%
If we can't hire [critical role] by [date], probability drops to [X]%
If team size drops below [X], probability drops to [X]%
```
**Product kill criteria:**
```
If we can't ship by [date], probability drops to [X]%
If NPS drops below [X], probability drops to [X]%
If churn exceeds [X]%, probability drops to [X]%
```
**Market kill criteria:**
```
If TAM shrinks below $[X], probability drops to [X]%
If adoption rate < [X]% by [date], probability drops to [X]%
If market shifts to [substitute], probability drops to [X]%
```
**Macro kill criteria:**
```
If recession occurs, probability drops to [X]%
If interest rates exceed [X]%, probability drops to [X]%
If war breaks out in [region], probability drops to [X]%
```
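If you want kill criteria to be machine-checkable, a minimal sketch (the dataclass and example events are hypothetical illustrations, not a standard tool):

```python
from dataclasses import dataclass

@dataclass
class KillCriterion:
    event: str              # trigger, stated as an observable fact
    new_probability: float  # forecast if the trigger fires

    def apply(self, current: float, occurred: bool) -> float:
        """Return the updated probability given whether the event occurred."""
        return self.new_probability if occurred else current

criteria = [
    KillCriterion("Competitor launches a free tier", 0.30),
    KillCriterion("Series B not closed by the target date", 0.15),
]

p = 0.60
p = criteria[0].apply(p, occurred=True)  # drops to 0.30
```

Reviewing the list on each monitoring pass keeps probability updates mechanical instead of mood-driven.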
---
## Failure Mode Probability Estimation
### Quick Heuristics
**For each failure mode, estimate:**
**Very Low (1-5%):**
- Black swans
- Never happened in this industry
- Requires multiple unlikely events
**Low (5-15%):**
- Happened before but rare
- Strong mitigations in place
- Early warning systems exist
**Medium (15-35%):**
- Common failure mode in industry
- Moderate mitigations
- Uncertain effectiveness
**High (35-70%):**
- Very common failure mode
- Weak mitigations
- History of this happening
**Very High (>70%):**
- Almost certain to occur
- No effective mitigation
- Base rate is very high
---
### Aggregation
**If failure modes are independent:**
```
P(any failure) = 1 - ∏(1 - P(failure_i))
```
**Example:**
- P(regulatory) = 20%
- P(competitive) = 30%
- P(execution) = 25%
```
P(any) = 1 - (0.8 × 0.7 × 0.75) = 1 - 0.42 = 58%
```
**If failure modes are dependent:**
Use Venn diagram logic or conditional probabilities (more complex).
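The independence formula reproduces the worked example directly:

```python
import math

def p_any_failure(probs: list[float]) -> float:
    """P(at least one failure) assuming independent failure modes."""
    return 1 - math.prod(1 - p for p in probs)

# Regulatory 20%, competitive 30%, execution 25%:
print(round(p_any_failure([0.20, 0.30, 0.25]), 2))  # 0.58
```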
---
## Monitoring and Signposts
### Early Warning Signals
For each major failure mode, identify **leading indicators:**
**Example: "Key engineer will quit"**
**Leading indicators (6-12 months before):**
- Code commit frequency drops
- Participation in meetings declines
- Starts saying "no" more often
- Takes more sick days
- LinkedIn profile updated
- Asks about vesting schedule
**Action:** Monitor these monthly, set alerts
---
### Monitoring Cadence
| Risk Level | Check Frequency |
|------------|----------------|
| Very High (>50%) | Weekly |
| High (25-50%) | Bi-weekly |
| Medium (10-25%) | Monthly |
| Low (5-10%) | Quarterly |
| Very Low (<5%) | Annually |
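The table translates directly into a lookup (thresholds copied from the table; the function itself is an illustrative sketch):

```python
def check_frequency(p_failure: float) -> str:
    """Map a failure-mode probability to a monitoring cadence."""
    if p_failure > 0.50:
        return "weekly"
    if p_failure > 0.25:
        return "bi-weekly"
    if p_failure > 0.10:
        return "monthly"
    if p_failure > 0.05:
        return "quarterly"
    return "annually"

print(check_frequency(0.30))  # bi-weekly
```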
---
## Practical Usage
**Step-by-Step:**
1. Choose categories (Internal/External, PESTLE)
2. Brainstorm 10-15 failure modes
3. Estimate probability for each
4. Aggregate
5. Compare to forecast
6. Identify top 3-5 risks
7. Set kill criteria
8. Define monitoring signposts
9. Set calendar reminders based on risk level
**Return to:** [Main Skill](../SKILL.md#interactive-menu)

# Premortem Principles
## The Psychology of Overconfidence
### Why We're Systematically Overconfident
**The Planning Fallacy:**
- We focus on best-case scenarios
- We ignore historical delays and failures
- We assume "our case is different"
- We underestimate Murphy's Law
**Research:**
- 90% of projects run over budget
- 70% of projects run late
- Yet 80% of project managers predict on-time completion
**The fix:** Premortem forces you to imagine failure has already happened.
---
## Hindsight Bias
### The "I Knew It All Along" Effect
**What it is:**
After an outcome occurs, we believe we "always knew" it would happen.
**Example:**
- Before 2008 crash: "Housing is safe"
- After 2008 crash: "The signs were obvious"
**Problem for forecasting:**
If we think outcomes were predictable in hindsight, we'll be overconfident going forward.
**The premortem fix:**
By forcing yourself into "hindsight mode" BEFORE the outcome, you:
1. Generate the warning signs you would have seen
2. Realize how many ways things could go wrong
3. Reduce overconfidence
---
## The Power of Inversion
### Solving Problems Backward
**Charlie Munger:**
> "Invert, always invert. Many hard problems are best solved backward."
**In forecasting:**
- Hard: "Will this succeed?" (requires imagining all paths to success)
- Easier: "It failed - why?" (failure modes are more concrete)
**Why this works:**
- Failure modes are finite and enumerable
- Success paths are infinite and vague
- Humans are better at imagining concrete negatives than abstract positives
---
## Research on Premortem Effectiveness
### Gary Klein's Studies
**Original research:**
- Teams that did premortems identified 30% more risks
- Risks identified were more specific and actionable
- Teams adjusted plans proactively
**Key finding:**
> "Prospective hindsight" (imagining an event has happened) improves recall by 30%
---
### Kahneman's Endorsement
**Daniel Kahneman:**
> "The premortem is the single best debiasing technique I know."
**Why it works:**
1. **Legitimizes doubt** - In group settings, dissent is hard. Premortem makes it safe.
2. **Concrete > Abstract** - "Identify risks" is vague. "Explain the failure" is concrete.
3. **Defeats groupthink** - Forces even optimists to imagine failure.
---
## Outcome Bias
### Judging Decisions by Results, Not Process
**What it is:**
We judge the quality of a decision based on its outcome, not the process.
**Example:**
- Drunk driver gets home safely → "It was fine"
- Sober driver has accident → "Bad decision to drive"
**Reality:**
Quality of decision ≠ Quality of outcome (because of randomness)
**For forecasting:**
A 90% prediction that fails doesn't mean the forecast was bad (10% events happen 10% of the time).
**The premortem fix:**
By imagining failure BEFORE it happens, you evaluate the decision process independent of outcome.
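A quick simulation makes the point concrete: even perfectly calibrated 90% forecasts fail about 10% of the time, so a single failure says little about forecast quality (seed and sample size are arbitrary choices):

```python
import random

random.seed(0)
# 10,000 well-calibrated 90% predictions: each resolves "success" with
# true probability 0.9, so roughly 10% fail despite sound forecasting.
outcomes = [random.random() < 0.9 for _ in range(10_000)]
failure_rate = 1 - sum(outcomes) / len(outcomes)
print(f"{failure_rate:.1%}")  # ≈ 10%
```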
---
## When Premortems Work Best
### High-Confidence Predictions
**Use when:**
- Your probability is >80% or <20%
- You feel very certain
- Confidence intervals are narrow
**Why:**
These are the predictions most likely to be overconfident.
---
### Team Forecasting
**Use when:**
- Multiple people are making predictions
- Groupthink is a risk
- Dissent is being suppressed
**Why:**
Premortems legitimize expressing doubts without seeming disloyal.
---
### Important Decisions
**Use when:**
- Stakes are high
- Irreversible commitments
- Significant resource allocation
**Why:**
Worth the time investment to reduce overconfidence.
---
## When Premortems Don't Help
### Already Uncertain
**Skip if:**
- Your probability is ~50%
- Confidence intervals are already wide
- You're confused, not confident
**Why:**
You don't need a premortem to tell you you're uncertain.
---
### Trivial Predictions
**Skip if:**
- Low stakes
- Easily reversible
- Not worth the time
**Why:**
Premortems take effort; save them for important forecasts.
---
## The Premortem vs Other Techniques
### Premortem vs Red Teaming
**Red Teaming:**
- Adversarial: Find flaws in the plan
- Focus: Attack the strategy
- Mindset: "How do we defeat this?"
**Premortem:**
- Temporal: Failure has occurred
- Focus: Understand what happened
- Mindset: "What led to this outcome?"
**Use both:** Red team attacks the plan, premortem explains the failure.
---
### Premortem vs Scenario Planning
**Scenario Planning:**
- Multiple futures: Good, bad, likely
- Branching paths
- Strategies for each scenario
**Premortem:**
- Single future: Failure has occurred
- Backward path
- Identify risks to avoid
**Use both:** Scenario planning explores, premortem stress-tests.
---
### Premortem vs Risk Register
**Risk Register:**
- List of identified risks
- Probability and impact scores
- Mitigation strategies
**Premortem:**
- Narrative of failure
- Causal chains
- Discover unknown unknowns
**Use both:** Premortem feeds into risk register.
---
## Cognitive Mechanisms
### Why Premortems Defeat Overconfidence
**1. Prospective Hindsight**
Imagining an event has occurred improves memory access by 30%.
**2. Permission to Doubt**
Social license to express skepticism without seeming negative.
**3. Concrete Failure Modes**
Abstract "risks" become specific "this happened, then this, then this."
**4. Temporal Distancing**
Viewing from the future reduces emotional attachment to current plan.
**5. Narrative Construction**
Building a story forces causal reasoning, revealing gaps.
---
## Common Objections
### "This is too negative!"
**Response:**
Pessimism during planning prevents failure during execution.
**Reframe:**
Not negative - realistic. You're not hoping for failure, you're preparing for it.
---
### "We don't have time for this."
**Response:**
- Premortem: 30 minutes
- Recovering from preventable failure: Months/years
**Math:**
If premortem prevents 10% of failures, ROI is massive.
---
### "Our case really is different!"
**Response:**
Maybe. But the premortem will reveal HOW it's different, not just assert it.
**Test:**
If the premortem reveals nothing new, you were right. If it reveals risks, you weren't.
---
## Practical Takeaways
1. **Use for high-confidence predictions** - When you feel certain
2. **Legitimate skepticism** - Makes doubt socially acceptable
3. **Concrete failure modes** - Forces specific risks, not vague worries
4. **Widen confidence intervals** - Adjust based on plausibility of failure narrative
5. **Set kill criteria** - Know what would change your mind
6. **Monitor signposts** - Track early warning signals
**The Rule:**
> If you can easily write a plausible failure narrative, your confidence is too high.
---
**Return to:** [Main Skill](../SKILL.md#interactive-menu)