Initial commit

Author: Zhongwei Li
Date: 2025-11-30 08:38:26 +08:00
Commit: 41d9f6b189
304 changed files with 98322 additions and 0 deletions

---
name: kill-criteria-exit-ramps
description: Use when defining stopping rules for projects, avoiding sunk cost fallacy, setting objective exit criteria, deciding whether to continue/pivot/kill initiatives, or when users mention kill criteria, exit ramps, stopping rules, go/no-go decisions, project termination, sunk costs, or need disciplined decision-making about when to quit.
---
# Kill Criteria & Exit Ramps
## Purpose
Kill criteria are pre-defined, objective conditions that trigger stopping a project, product, or initiative. Exit ramps are specific decision points where you evaluate whether to continue, pivot, or kill. This skill helps avoid sunk cost fallacy and opportunity cost by establishing discipline around quitting.
Use this skill when:
- **Starting new projects**: Define kill criteria upfront before emotional/financial investment
- **Evaluating ongoing initiatives**: Decide whether to continue, pivot, or stop
- **Avoiding sunk cost trap**: "We've invested too much to quit now"
- **Portfolio management**: Which projects to kill to free resources for winners
- **Setting go/no-go gates**: Milestone-based decision points
- **Managing risk**: Exit before losses escalate
The hardest decision is often knowing when to quit. Kill criteria remove emotion and politics from stopping decisions.
---
## Common Patterns
### Pattern 1: Upfront Kill Criteria (Before Launch)
**When**: Starting new project, experiment, or product
**Process**: (1) Define success metrics ("10% conversion"), (2) Set time horizon ("6 months"), (3) Establish kill criteria ("If <5% after 6 months, kill"), (4) Assign decision rights (specific person), (5) Document formally (signed PRD)
**Example**: New feature — Success: 20% adoption in 3 months, Kill: <10% adoption, Decision: Product VP makes call
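The pattern above can be sketched as a small pre-registered stopping rule. This is a minimal sketch; the metric name, threshold, horizon, and decision owner are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class KillCriteria:
    """Pre-registered stopping rule; field values here are illustrative."""
    metric: str           # e.g. "adoption_rate"
    kill_below: float     # objective threshold that triggers the kill evaluation
    horizon_months: int   # evaluation window
    decision_owner: str   # single named decision-maker, not "the team"

    def triggered(self, observed: float, months_elapsed: int) -> bool:
        """Fires only at/after the horizon, on a hard number -- no judgment calls."""
        return months_elapsed >= self.horizon_months and observed < self.kill_below

# Pattern 1 example: kill if adoption <10% after 3 months; Product VP decides
feature = KillCriteria("adoption_rate", 0.10, 3, "Product VP")
print(feature.triggered(observed=0.08, months_elapsed=3))  # True -> evaluate kill
print(feature.triggered(observed=0.08, months_elapsed=1))  # False -> too early
```

Registering the rule as data before launch is what makes it hard to move the goalposts later.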
### Pattern 2: Go/No-Go Gates (Milestone-Based)
**When**: Multi-stage projects with increasing investment
**Structure**: Stage 1 (cheap, concept) → Go/No-Go → Stage 2 (moderate, MVP) → Go/No-Go → Stage 3 (expensive, launch) → Go/No-Go
**Example**: Gate 1 (4wk, $10k): 15+ customer interviews show interest → GO. Gate 2 (3mo, $50k): required 40% weekly active, achieved 25% → NO-GO, kill
**Benefit**: Small investments first, kill before expensive stages
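A minimal sketch of staged gates, using the hypothetical costs and outcomes from the example above. The point it demonstrates: stopping at the failed gate caps spend well below the full budget:

```python
# Hypothetical staged gates: (name, cost_usd, criteria_passed)
gates = [
    ("Gate 1: Concept", 10_000, True),    # 15+ interviews showed interest -> GO
    ("Gate 2: MVP",     50_000, False),   # 25% weekly active vs 40% target -> NO-GO
    ("Gate 3: Launch", 200_000, True),    # never reached
]

spent = 0
for name, cost, passed in gates:
    spent += cost
    if not passed:
        print(f"{name}: NO-GO after ${spent:,} -- killed before the expensive stage")
        break
else:
    print(f"All gates passed, total ${spent:,}")
```

Here the project dies after $60k of a possible $260k, which is the benefit the pattern describes.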
### Pattern 3: Trigger-Based Exit Ramps
**When**: Ongoing projects with uncertain outcomes
**Common triggers**: Time-based ("not profitable by Month 18"), Metric-based ("churn >8% for 2 months"), Market-based ("competitor launches"), Resource-based ("budget overrun >30%"), Opportunity-based ("better option emerges")
**Example**: SaaS — Trigger 1: MRR growth <10%/mo for 3 months → Evaluate. Trigger 2: CAC payback >24mo → Evaluate. Trigger 3: Competitor raises >$50M → Evaluate
**Note**: Triggers prompt evaluation, not automatic kill
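A sketch of trigger evaluation with hypothetical metric values. Note that a fired trigger only schedules a continue/pivot/kill evaluation, mirroring the rule above:

```python
# Hypothetical SaaS metrics; names and values are illustrative
metrics = {
    "mrr_growth_3mo_avg": 0.07,    # average monthly MRR growth over 3 months
    "cac_payback_months": 27,
    "competitor_raise_usd": 60e6,
}

triggers = [
    ("MRR growth <10%/mo for 3 months", metrics["mrr_growth_3mo_avg"] < 0.10),
    ("CAC payback >24 months",          metrics["cac_payback_months"] > 24),
    ("Competitor raises >$50M",         metrics["competitor_raise_usd"] > 50e6),
]

for name, fired in triggers:
    if fired:
        # A trigger prompts an evaluation, never an automatic kill
        print(f"TRIGGER: {name} -> schedule continue/pivot/kill review")
```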
### Pattern 4: Pivot vs. Kill Decision
**When**: Project isn't working as planned — should you pivot or kill?
**Framework**:
**Pivot if**:
- Core insight is valid but execution is wrong
- Customer pain is real, solution is wrong
- Market exists, go-to-market is wrong
- Learning rate is high (discovering new insights rapidly)
- Resource burn is sustainable (not desperation mode)
**Kill if**:
- No customer pain (nice-to-have, not must-have)
- Market too small (can't sustain business)
- Burn rate too high relative to progress
- Team doesn't believe in vision
- Better opportunities available (opportunity cost)
- Regulatory/legal blockers
**Example**: Mobile app with low engagement
- **Situation**: Launched fitness app, 10k downloads, 5% weekly active (target was 40%)
- **Pivot option**: Interviews reveal users want meal tracking not workout tracking → Pivot to nutrition app
- **Kill option**: Users don't care about fitness tracking at all, market saturated → Kill, reallocate team
**Decision**: Pivot if hypothesis valid but execution wrong. Kill if hypothesis invalid.
### Pattern 5: Portfolio Kill Criteria (Multiple Projects)
**When**: Managing portfolio of projects, need to kill some to focus
**Process**:
1. **Rank by expected value**: ROI, strategic fit, resource efficiency
2. **Define minimum threshold**: "Top 70% of portfolio gets resources"
3. **Kill bottom 30%**: Projects below threshold, regardless of sunk cost
4. **Reallocate resources**: Winners get resources from killed projects
**Example**: Company with 10 projects, capacity for 7
- Rank by: (Expected Revenue × Probability of Success) / Resource Cost
- Kill: Projects ranked #8, #9, #10 (even if they're "almost done")
- Reallocate: Engineers from killed projects to top 3
**Principle**: Opportunity cost matters more than sunk cost. "Almost done" doesn't justify continuing if better alternatives exist.
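The ranking step above can be sketched as follows. Project names, revenue figures, and probabilities are hypothetical; the mechanic is the one described: rank by expected value per unit of resource, keep the top of the portfolio, kill the rest regardless of sunk cost:

```python
# Hypothetical portfolio: (project, expected_revenue_usd, p_success, cost_eng_months)
portfolio = [
    ("A", 8_000_000, 0.5, 30),
    ("B", 5_000_000, 0.6, 20),
    ("C", 2_000_000, 0.7, 25),
    ("D", 1_000_000, 0.9, 18),
]

# Rank by (Expected Revenue x Probability of Success) / Resource Cost
ranked = sorted(portfolio, key=lambda p: (p[1] * p[2]) / p[3], reverse=True)

capacity = 3
keep, kill = ranked[:capacity], ranked[capacity:]
print("keep:", [p[0] for p in keep])   # winners absorb the freed resources
print("kill:", [p[0] for p in kill])   # killed regardless of sunk cost
```

Note that "almost done" never enters the key function, which is exactly the discipline the principle below states.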
### Pattern 6: Sunk Cost Trap Avoidance
**When**: Team resists killing project due to past investment
**Technique**: **Pre-mortem inversion**
1. Ask: "If we were starting today with zero investment, would we start this project?"
2. If answer is "No" → Kill (sunk costs are irrelevant)
3. If answer is "Yes, but differently" → Pivot
4. If answer is "Yes, exactly as-is" → Continue
**Example**: Failed enterprise sales push
- **Situation**: 18 months, $2M spent, 2 customers (need 50 for viability)
- **Inversion**: "If starting today, would we pursue enterprise sales?" → "No, we'd focus on self-serve SMB"
- **Decision**: Kill enterprise sales, pivot to SMB (sunk $2M is irrelevant)
**Trap**: "We've invested so much, we can't quit now" → This is sunk cost fallacy
**Escape**: Only future costs and benefits matter. Past is gone.
---
## Workflow
Use this structured approach when defining or applying kill criteria:
```
□ Step 1: Define success metrics and time horizon
□ Step 2: Establish objective kill criteria
□ Step 3: Assign decision rights and governance
□ Step 4: Set milestone gates or trigger points
□ Step 5: Document formally (signed agreement)
□ Step 6: Monitor metrics regularly
□ Step 7: Evaluate at gates/triggers
□ Step 8: Execute kill decision (if triggered)
```
**Step 1: Define success metrics and time horizon** ([details](#1-define-success-metrics-and-time-horizon))
Specify quantifiable success criteria (e.g., "20% conversion") and evaluation period (e.g., "6 months post-launch").
**Step 2: Establish objective kill criteria** ([details](#2-establish-objective-kill-criteria))
Set numeric thresholds that trigger stop decision (e.g., "If <10% conversion after 6 months"). Make criteria objective, not subjective.
**Step 3: Assign decision rights and governance** ([details](#3-assign-decision-rights-and-governance))
Name specific person who makes kill decision. Define escalation process. Avoid "team consensus" (leads to paralysis).
**Step 4: Set milestone gates or trigger points** ([details](#4-set-milestone-gates-or-trigger-points))
For multi-stage projects: define go/no-go gates. For ongoing projects: define triggers that prompt evaluation.
**Step 5: Document formally** ([details](#5-document-formally))
Write kill criteria in PRD, project charter, or investment memo. Get stakeholders to sign/approve before launch (prevents moving goalposts).
**Step 6: Monitor metrics regularly** ([details](#6-monitor-metrics-regularly))
Track metrics weekly/monthly. Dashboard with kill criteria thresholds clearly marked. Automate alerts when approaching thresholds.
**Step 7: Evaluate at gates/triggers** ([details](#7-evaluate-at-gatestriggers))
When gate or trigger hit, conduct formal evaluation. Use pre-mortem inversion: "Would we start this today?" Decide: continue, pivot, or kill.
**Step 8: Execute kill decision** ([details](#8-execute-kill-decision))
If kill triggered: communicate decision, wind down project, reallocate resources, conduct postmortem. Execute quickly (avoid zombie projects).
---
## Critical Guardrails
### 1. Set Kill Criteria Before Launch (Not After)
**Danger**: Defining kill criteria after project starts leads to moving goalposts
**Guardrail**: Write kill criteria in initial project document, before emotional/financial investment. Get stakeholder sign-off.
**Red flag**: "We'll figure out when to stop as we go" — this leads to sunk cost trap
### 2. Make Criteria Objective (Not Subjective)
**Danger**: Subjective criteria ("team feels it's not working") are easy to ignore
**Guardrail**: Use quantifiable metrics (numbers, dates, milestones). "5% conversion" not "low adoption". "6 months" not "reasonable time".
**Test**: Could two people independently evaluate criteria and reach same conclusion? If not, too subjective.
### 3. Assign Clear Decision Rights
**Danger**: "Team decides" or "we'll discuss" leads to paralysis (everyone has sunk cost)
**Guardrail**: Name specific person who makes kill decision. Define what data they need. Escalation path for overrides.
**Example**: "Product VP makes kill decision based on 6-month metrics. Can be overridden only by CEO with written justification."
### 4. Don't Move the Goalposts
**Danger**: When kill criteria approached, team lowers bar or extends timeline
**Guardrail**: Kill criteria are fixed at launch. Changes require formal process (written justification, senior approval, new document).
**Red flag**: "Let's give it another 3 months" when 6-month criteria not met
### 5. Sunk Costs Are Irrelevant
**Danger**: "We've invested $2M, can't stop now" — sunk cost fallacy
**Guardrail**: Use pre-mortem inversion: "If starting today with $0 invested, would we do this?" Only future matters.
**Principle**: Past costs are gone. Only question: "Is future investment better here or elsewhere?"
### 6. Kill Quickly (Avoid Zombie Projects)
**Danger**: Projects that should be killed linger, draining resources ("zombie projects")
**Guardrail**: Kill decision → immediate wind-down. Announce within 1 week, reallocate team within 1 month.
**Red flag**: Project in "wind-down" for >3 months — this is zombie mode, not killing
### 7. Opportunity Cost > Sunk Cost
**Danger**: Continuing project because "almost done" even if better opportunities exist
**Guardrail**: Portfolio thinking. Ask: "Is this the best use of these resources?" If not, kill even if 90% done.
**Principle**: Opportunity cost of *not* pursuing better option often exceeds benefit of finishing current project
### 8. Postmortem, Don't Blame
**Danger**: Kill decisions seen as "failure", teams avoid them
**Guardrail**: Normalize killing projects. Celebrate disciplined stopping. Postmortem focuses on learning, not blame.
**Culture**: "We killed 3 projects this quarter" = good (freed resources for winners), not bad (failures)
---
## Quick Reference
### Kill Criteria Checklist
Before launching project, answer:
- [ ] Success metrics defined? (quantifiable, e.g., "20% conversion")
- [ ] Time horizon set? (e.g., "6 months post-launch")
- [ ] Kill criteria established? (e.g., "If <10% conversion after 6 months, kill")
- [ ] Decision rights assigned? (specific person, not "team")
- [ ] Documented formally? (in PRD, signed by stakeholders)
- [ ] Monitoring plan? (who tracks, how often, dashboard)
- [ ] Wind-down plan? (how to kill if criteria triggered)
### Go/No-Go Gate Template
| Gate | Investment | Timeline | Success Criteria | Decision |
|------|-----------|----------|------------------|----------|
| Gate 1: Concept | $10k | 4 weeks | 15+ customer interviews showing strong interest | GO / NO-GO |
| Gate 2: MVP | $50k | 3 months | 40% weekly active users (50 beta users) | GO / NO-GO |
| Gate 3: Launch | $200k | 6 months | 10% conversion, <$100 CAC | GO / NO-GO |
### Pivot vs. Kill Decision Framework
| Factor | Pivot | Kill |
|--------|-------|------|
| Customer pain | Real but solution wrong | No pain, nice-to-have |
| Market size | Large enough | Too small |
| Learning rate | High (new insights) | Low (stuck) |
| Burn rate | Sustainable | Too high |
| Team belief | Believes with changes | Doesn't believe |
| Opportunity cost | Pivot is best option | Better options exist |
---
## Resources
### Navigation to Resources
- [**Templates**](resources/template.md): Kill criteria document, go/no-go gate template, pivot/kill decision framework, wind-down plan
- [**Methodology**](resources/methodology.md): Sunk cost psychology, portfolio management, decision rights frameworks, postmortem processes
- [**Rubric**](resources/evaluators/rubric_kill_criteria_exit_ramps.json): Evaluation criteria for kill criteria quality (10 criteria)
### Related Skills
- **expected-value**: For quantifying opportunity cost of continuing vs. killing
- **hypotheticals-counterfactuals**: For pre-mortem analysis ("what if we had killed earlier?")
- **decision-matrix**: For comparing continue/pivot/kill options
- **postmortem**: For learning from killed projects
- **portfolio-roadmapping-bets**: For portfolio-level kill decisions
---
## Examples in Context
### Example 1: Startup Feature Kill
**Context**: SaaS launched "Advanced Analytics", kill criteria: <15% adoption after 3 months
**Result**: 12% adoption → Killed feature, reallocated 2 engineers to core. Saved 6 months maintenance.
### Example 2: Enterprise Sales Pivot
**Context**: B2B SaaS, pivot trigger: <10 customers by Month 12
**Result**: 7 customers → Pivoted to self-serve SMB. Hit 200 SMB customers in 6 months, 4× faster growth.
### Example 3: R&D Portfolio Kill
**Context**: 8 R&D projects, capacity for 5. Ranked by EV/Cost: A(3.5), B(2.8), C(2.5), D(2.1), E(1.8), F(1.5), G(1.2), H(0.9)
**Decision**: Killed F, G, H despite F being "80% done". Top 3 projects shipped 4 months earlier.

{
"skill_name": "kill-criteria-exit-ramps",
"version": "1.0",
"criteria": [
{
"name": "Upfront Definition",
"description": "Kill criteria defined before launch, not after emotional/financial investment",
"ratings": {
"1": "No kill criteria defined, or defined after project started",
"3": "Kill criteria defined at launch but not formally documented or signed",
"5": "Kill criteria defined in PRD before launch, signed by stakeholders, formally documented"
}
},
{
"name": "Objectivity",
"description": "Criteria use quantifiable metrics and thresholds, not subjective judgment",
"ratings": {
"1": "Subjective criteria (e.g., 'team feels it's not working', 'low adoption')",
"3": "Mix of quantitative and qualitative criteria, some ambiguity",
"5": "Fully objective criteria with specific numbers (e.g., '<10% conversion after 6 months', 'CAC >$200 for 3 months')"
}
},
{
"name": "Decision Rights",
"description": "Clear assignment of who makes kill decision, with escalation path",
"ratings": {
"1": "No clear decision-maker, 'team decides' or unclear ownership",
"3": "Decision-maker named but no escalation path or override process",
"5": "Specific person named as decision authority, clear escalation path, override process with limits"
}
},
{
"name": "No Goalpost Moving",
"description": "Kill criteria remain fixed; changes require formal justification and re-approval",
"ratings": {
"1": "Criteria change informally when approached, goalposts move frequently",
"3": "Criteria mostly stable but some informal adjustments without documentation",
"5": "Criteria fixed at launch, changes require written justification + senior approval + new document"
}
},
{
"name": "Sunk Cost Avoidance",
"description": "Decision based on future value, not past investment; uses pre-mortem inversion",
"ratings": {
"1": "Heavy focus on sunk costs ('invested too much to quit'), no counterfactual analysis",
"3": "Acknowledges sunk costs but also considers future, limited use of inversion techniques",
"5": "Explicitly uses pre-mortem inversion ('would we start this today?'), focuses only on future value"
}
},
{
"name": "Execution Speed",
"description": "Kill decision executed quickly (wind-down within 1 month), avoiding zombie projects",
"ratings": {
"1": "Projects linger for 3+ months after kill decision, zombie projects exist",
"3": "Wind-down takes 1-2 months, some delays but eventual completion",
"5": "Wind-down plan executed within 1 month: team reallocated, resources freed, postmortem completed"
}
},
{
"name": "Opportunity Cost Consideration",
"description": "Evaluates best alternative use of resources, quantifies opportunity cost",
"ratings": {
"1": "No consideration of alternatives, 'finish what we started' mentality",
"3": "Mentions alternatives informally but no quantification or comparison",
"5": "Explicitly compares current project to top 3 alternatives, quantifies EV/Cost ratio for each"
}
},
{
"name": "Portfolio Thinking",
"description": "Projects ranked and managed as portfolio, bottom performers killed to free resources",
"ratings": {
"1": "Projects evaluated individually, no portfolio ranking or rebalancing",
"3": "Informal portfolio view but no systematic ranking or kill threshold",
"5": "Quarterly portfolio ranking with explicit methodology, kill threshold (e.g., bottom 30%), active rebalancing"
}
},
{
"name": "Postmortem Culture",
"description": "Blameless postmortems conducted within 2 weeks, learnings shared, killing normalized",
"ratings": {
"1": "No postmortems, or blame-focused, killed projects hidden or stigmatized",
"3": "Postmortems occur but sometimes delayed, limited sharing, some stigma remains",
"5": "Blameless postmortems within 2 weeks, documented and shared widely, killing celebrated as discipline"
}
},
{
"name": "Wind-Down Planning",
"description": "Detailed wind-down plan prepared upfront, includes communication, reallocation, customer transition",
"ratings": {
"1": "No wind-down plan, ad-hoc execution when kill triggered, customers/team surprised",
"3": "Basic wind-down plan exists but incomplete (missing communication or customer transition)",
"5": "Comprehensive wind-down plan with communication timeline, team reallocation, customer transition, postmortem schedule"
}
}
],
"guidance_by_type": [
{
"type": "New Product Launch",
"focus_areas": ["Upfront Definition", "Objectivity", "Decision Rights", "Wind-Down Planning"],
"target_score": "≥4.0",
"rationale": "New products carry high uncertainty. Strong upfront kill criteria and wind-down planning prevent prolonged resource drain if product-market fit not achieved."
},
{
"type": "Feature Development",
"focus_areas": ["Objectivity", "Execution Speed", "Opportunity Cost Consideration"],
"target_score": "≥3.5",
"rationale": "Features compete for engineering resources. Clear metrics, fast kill decisions, and opportunity cost analysis ensure resources flow to highest-impact features."
},
{
"type": "R&D / Experiments",
"focus_areas": ["Upfront Definition", "Sunk Cost Avoidance", "Postmortem Culture"],
"target_score": "≥4.0",
"rationale": "Experiments should have clear success/failure criteria upfront. Avoid sunk cost trap for failed experiments; learn systematically via postmortems."
},
{
"type": "Portfolio Management",
"focus_areas": ["Portfolio Thinking", "Opportunity Cost Consideration", "No Goalpost Moving"],
"target_score": "≥4.5",
"rationale": "Managing multiple projects requires disciplined ranking, rebalancing, and adherence to kill criteria. Portfolio optimization depends on killing bottom performers."
},
{
"type": "Organizational Change",
"focus_areas": ["Postmortem Culture", "Decision Rights", "Wind-Down Planning"],
"target_score": "≥3.5",
"rationale": "Building culture around disciplined stopping requires clear governance, blameless learning, and smooth wind-downs that preserve team morale."
}
],
"guidance_by_complexity": [
{
"complexity": "Simple (single feature/experiment)",
"target_score": "≥3.5",
"priority_criteria": ["Objectivity", "Upfront Definition", "Execution Speed"],
"notes": "Focus on clear metrics defined upfront and fast execution. Complexity is low enough that informal decision rights acceptable."
},
{
"complexity": "Moderate (product/initiative with team of 3-10)",
"target_score": "≥4.0",
"priority_criteria": ["Upfront Definition", "Decision Rights", "Sunk Cost Avoidance", "Wind-Down Planning"],
"notes": "Requires formal kill criteria document, named decision-maker, and detailed wind-down plan. Team size creates reallocation complexity."
},
{
"complexity": "Complex (portfolio of 5+ projects, or multi-year initiatives)",
"target_score": "≥4.5",
"priority_criteria": ["Portfolio Thinking", "No Goalpost Moving", "Opportunity Cost Consideration", "Postmortem Culture"],
"notes": "Demands systematic portfolio management with quarterly rebalancing, strict adherence to criteria, opportunity cost analysis, and strong learning culture."
}
],
"common_failure_modes": [
{
"name": "Moving Goalposts",
"symptom": "When kill criteria approached, team extends timeline or lowers bar ('let's give it another 3 months')",
"detection": "Compare current criteria to original PRD; check for informal changes without re-approval",
"fix": "Lock criteria at launch with sign-off; require written justification + senior approval for any changes"
},
{
"name": "Sunk Cost Justification",
"symptom": "'We've invested $2M/18 months, can't quit now' even though future prospects poor",
"detection": "Listen for past-focused language ('invested', 'spent', 'too far to stop'); absence of opportunity cost analysis",
"fix": "Apply pre-mortem inversion: 'If starting today with $0, would we do this?' Focus only on future value."
},
{
"name": "Subjective Criteria",
"symptom": "Debates over whether 'low engagement' or 'poor adoption' triggers kill, no clear threshold",
"detection": "Two people evaluating same data reach different conclusions; criteria use qualitative terms",
"fix": "Quantify all criteria with specific numbers and dates (e.g., '<10% weekly active after 6 months')"
},
{
"name": "Committee Decision-Making",
"symptom": "'Team decides' or 'we'll discuss' leads to paralysis, no one willing to make tough call",
"detection": "Kill decision delayed for weeks/months despite criteria met; multiple meetings with no resolution",
"fix": "Assign single decision-maker by name in kill criteria doc; decision-maker accountable for timely call"
},
{
"name": "Zombie Projects",
"symptom": "Projects linger in 'wind-down' for 3+ months, still consuming resources (time, attention, budget)",
"detection": "Check wind-down duration; team members still partially allocated; infrastructure still running",
"fix": "Set hard deadline for wind-down (1 month max); track completion; reallocate team immediately"
},
{
"name": "Portfolio Inertia",
"symptom": "Same projects continue year over year without reevaluation; no projects killed despite poor performance",
"detection": "Portfolio composition unchanged for 2+ quarters; no active rebalancing or kill decisions",
"fix": "Implement quarterly portfolio ranking; define kill threshold (bottom 20-30%); actively rebalance"
},
{
"name": "No Postmortem",
"symptom": "Killed projects immediately forgotten; no documentation of learnings; repeat same mistakes",
"detection": "No postmortem docs; team can't articulate what went wrong; same failure patterns in new projects",
"fix": "Require postmortem within 2 weeks of kill; document and share learnings; track pattern across projects"
},
{
"name": "Stigma Around Killing",
"symptom": "Teams reluctant to propose kill; PMs hide struggles; 'failure' language used; career concerns",
"detection": "Projects linger past kill criteria; escalations delayed; teams defensive in reviews",
"fix": "Leadership celebrates disciplined stopping; share kill decisions transparently; reward early recognition"
},
{
"name": "After-the-Fact Criteria",
"symptom": "Kill criteria defined after project launched; criteria adjusted to match current performance",
"detection": "No kill criteria in original PRD; criteria added months into project; retroactive documentation",
"fix": "Make kill criteria mandatory in PRD template; gate project approval on criteria definition; sign-off required"
},
{
"name": "Ignoring Opportunity Cost",
"symptom": "Continue project because 'almost done' even though resources could create more value elsewhere",
"detection": "'80% complete' or 'just need to finish' justifications; no comparison to alternative uses",
"fix": "Quantify opportunity cost; rank portfolio by EV/Cost ratio; kill if better alternatives exist regardless of completion"
}
],
"overall_guidance": {
"excellent": "Score ≥4.5: Exemplary discipline around stopping. Clear upfront criteria, objective metrics, decisive execution, portfolio thinking, strong learning culture. Organization skilled at capital allocation through disciplined killing.",
"good": "Score 3.5-4.4: Solid kill criteria and execution. Some areas for improvement (portfolio thinking, speed, or culture) but core discipline exists. Most projects have clear criteria and timely decisions.",
"needs_improvement": "Score <3.5: Significant gaps. Likely suffering from sunk cost fallacy, zombie projects, moved goalposts, or stigma around killing. Risk of prolonged resource drain on underperformers.",
"key_principle": "The hardest and most valuable skill is knowing when to quit. Organizations that kill projects decisively outperform those that let them linger. Sunk costs are irrelevant; only future value and opportunity cost matter."
}
}

# Kill Criteria & Exit Ramps: Methodology
Advanced techniques for setting kill criteria, avoiding behavioral biases, managing portfolios, and building organizational culture around disciplined stopping.
---
## 1. Sunk Cost Psychology and Behavioral Economics
### Understanding Sunk Cost Fallacy
**Definition**: Tendency to continue project based on past investment rather than future value
**Cognitive mechanisms**:
- **Loss aversion**: Losses feel 2× more painful than equivalent gains (Kahneman & Tversky)
- **Escalation of commitment**: Justifying past decisions by doubling down
- **Status quo bias**: Preference for current state over change
- **Endowment effect**: Overvaluing what we already own/built
**Common rationalizations**:
- "We've invested $2M, can't quit now" (ignoring opportunity cost)
- "Just need a little more time" (moving goalposts)
- "Too close to finishing to stop" (completion bias)
- "Team morale will suffer if we quit" (social pressure)
### Behavioral Interventions
**Pre-commitment devices**:
1. **Signed kill criteria document** before launch (removes discretion)
2. **Third-party decision maker** without sunk cost attachment
3. **Time-locked gates** with automatic NO-GO if criteria not met
4. **Financial caps** ("Will not invest more than $X total")
**Framing techniques**:
1. **Pre-mortem inversion**: "If starting today with $0, would we do this?"
2. **Opportunity cost framing**: "What else could these resources achieve?"
3. **Reverse trial**: "To CONTINUE, we need to prove X" (default = kill)
4. **Outside view**: "What would we advise another company to do?"
**Example**: Company spent $5M on enterprise sales, 3 customers in 2 years (need 50 for viability)
- **Sunk cost framing**: "We've invested $5M, can't stop now"
- **Opportunity cost framing**: "Next $5M could yield 200 SMB customers based on pilot data"
- **Pre-mortem inversion**: "If starting today, would we choose enterprise over SMB?" → No
- **Decision**: Kill enterprise sales, reallocate to SMB
### Quantifying Opportunity Cost
**Formula**: Opportunity Cost = Value of Best Alternative − Value of Current Project
**Process**:
1. Identify top 3 alternatives for resources (team, budget)
2. Estimate expected value for each: EV = Σ (Probability × Payoff)
3. Rank by EV / Resource Cost ratio
4. If current project ranks below alternatives → Kill signal
**Example**: SaaS with 3 options
- **Current project** (mobile app): EV = $2M, Cost = 5 engineers × 6 months = 30 eng-months, Ratio = $67k/eng-month
- **Alternative 1** (API platform): EV = $8M, Cost = 30 eng-months, Ratio = $267k/eng-month
- **Alternative 2** (integrations): EV = $5M, Cost = 20 eng-months, Ratio = $250k/eng-month
- **Decision**: Kill mobile app (lowest ratio), reallocate to API platform (highest ratio)
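The ratios above can be recomputed as a quick sanity check. Figures are taken from the example; the script itself is only illustrative:

```python
# (option, expected value in USD, engineer-months) -- figures from the example
options = [
    ("mobile app (current)", 2_000_000, 30),
    ("API platform",         8_000_000, 30),
    ("integrations",         5_000_000, 20),
]

# Rank by EV per engineer-month; highest ratio wins the resources
for name, ev, eng_months in sorted(options, key=lambda o: o[1] / o[2], reverse=True):
    print(f"{name}: ~${ev / eng_months / 1000:.0f}k per engineer-month")
```

The ordering (API platform, integrations, mobile app) falls out of the ratios alone, which is the kill signal for the current project.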
---
## 2. Portfolio Management Frameworks
### Portfolio Ranking Methodologies
**Method 1: Expected Value / Resource Cost**
Formula: `Rank Score = (Revenue × Probability of Success) / (Engineer-Months × Avg Cost)`
**Steps**:
1. Estimate revenue potential for each project (pessimistic, baseline, optimistic)
2. Assign probability of success (use reference class forecasting for calibration)
3. Calculate expected value: EV = Σ (Probability × Revenue)
4. Estimate resource cost (engineer-months, budget, opportunity cost)
5. Rank by EV/Cost ratio
6. Define kill threshold (e.g., bottom 30% of portfolio)
**Method 2: Weighted Scoring Model**
Formula: `Score = Σ (Weight_i × Rating_i)`
**Dimensions** (weights sum to 100%):
- Strategic fit (30%): Alignment with company vision
- Revenue potential (25%): Market size × conversion × pricing
- Probability of success (20%): Team capability, market readiness, technical risk
- Resource efficiency (15%): ROI, payback period, opportunity cost
- Competitive urgency (10%): Time-to-market importance
**Ratings**: 1-5 scale for each dimension
**Example**:
- **Project A**: Strategic fit (4), Revenue (5), Probability (3), Efficiency (4), Urgency (5) → Score = 0.30×4 + 0.25×5 + 0.20×3 + 0.15×4 + 0.10×5 = 4.15
- **Project B**: Strategic fit (3), Revenue (3), Probability (4), Efficiency (2), Urgency (2) → Score = 0.30×3 + 0.25×3 + 0.20×4 + 0.15×2 + 0.10×2 = 2.95
- **Ranking**: A > B
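A minimal sketch of the weighted scoring calculation, using the dimension weights defined above and the example ratings:

```python
# Weights sum to 100%, as defined in the model above
weights = {"strategic_fit": 0.30, "revenue": 0.25, "probability": 0.20,
           "efficiency": 0.15, "urgency": 0.10}

def weighted_score(ratings: dict) -> float:
    """Score = sum over dimensions of weight_i x rating_i (ratings on a 1-5 scale)."""
    return sum(weights[dim] * r for dim, r in ratings.items())

a = weighted_score({"strategic_fit": 4, "revenue": 5, "probability": 3,
                    "efficiency": 4, "urgency": 5})
b = weighted_score({"strategic_fit": 3, "revenue": 3, "probability": 4,
                    "efficiency": 2, "urgency": 2})
print(round(a, 2), round(b, 2))  # A outranks B
```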
**Method 3: Real Options Analysis**
Treat projects as options (right but not obligation to continue)
**Value of option**:
- **Upside**: High if market/tech uncertainty resolves favorably
- **Downside**: Limited to incremental investment at each gate
- **Flexibility value**: Ability to pivot, expand, or abandon based on new info
**Decision rule**: Continue if `Option Value > Immediate Kill Value`
**Example**: R&D project with 3 gates
- Gate 1 ($50k): Learn if technology feasible (60% chance) → Continue if yes
- Gate 2 ($200k): Learn if market demand exists (40% chance) → Continue if yes
- Gate 3 ($1M): Full launch if both tech + market validated
- **Option value**: Flexibility to kill early if tech/market fails (limits downside)
- **Immediate kill value**: $0 but foregoes learning
**Decision**: Continue through gates (option value > kill value)
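One way to see the option value is to compare expected staged spend against committing all three stages upfront. This calculation is illustrative, using the gate costs and probabilities from the example above:

```python
p_tech, p_market = 0.60, 0.40          # chance each uncertainty resolves favorably
gate_costs = [50_000, 200_000, 1_000_000]

# Staged: each gate's cost is paid only if the prior uncertainty resolved favorably
expected_staged = (gate_costs[0]
                   + p_tech * gate_costs[1]
                   + p_tech * p_market * gate_costs[2])
all_upfront = sum(gate_costs)

print(f"expected staged spend: ${expected_staged:,.0f}")  # well under committing
print(f"committed upfront:     ${all_upfront:,.0f}")      # the full $1.25M
```

The expected staged spend (about $410k) versus $1.25M committed upfront is the downside-limiting flexibility the real-options framing values.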
### Portfolio Rebalancing Cadence
**Quarterly portfolio review**:
1. Re-rank all projects using latest data
2. Identify projects below threshold (bottom 20-30%)
3. Evaluate kill vs. pivot for bottom performers
4. Reallocate resources from killed projects to top performers
5. Document decisions and communicate transparently
**Trigger-based review** (between quarterly reviews):
- Major market change (competitor launch, regulation, economic shift)
- Significant underperformance vs. projections (>30% variance)
- Resource constraints (hiring freeze, budget cut, key person departure)
- Strategic pivot (new company direction)
---
## 3. Decision Rights Frameworks
### RACI Matrix for Kill Decisions
**Roles**:
- **Responsible (R)**: Gathers data, monitors metrics, presents to decision-maker
- **Accountable (A)**: Makes final kill decision (single person, not committee)
- **Consulted (C)**: Provides input before decision (team, stakeholders)
- **Informed (I)**: Notified after decision (broader org, customers)
**Example**:
- **Kill Criteria Met → Kill Decision**:
- Responsible: Product Manager (gathers data)
- Accountable: Product VP (makes decision)
- Consulted: Engineering Lead, Finance, CEO (input)
- Informed: Team, customers, company (notification)
**Anti-pattern**: "Team decides" (everyone has sunk cost, leads to paralysis)
### Escalation Paths
**Standard path**:
1. **Metrics tracked** → Alert if approaching kill threshold
2. **Project owner evaluates** → Presents data + recommendation to decision authority
3. **Decision authority decides** → GO, NO-GO, or PIVOT
4. **Execute decision** → If kill, wind down within 1 month
**Override path** (use sparingly):
- **Override condition**: Decision authority wants to continue despite kill criteria met
- **Override process**: Written justification, senior approval (e.g., CEO), new kill criteria with shorter timeline
- **Override limit**: Max 1 override per project (prevents repeated goal-post moving)
**Example**: Product VP wants to override kill criteria for feature with 8% adoption (threshold: 10%)
- **Justification**: "Enterprise pilot starting next month, expect 15% adoption within 60 days"
- **New criteria**: "If <12% adoption after 60 days, kill immediately (no further overrides)"
- **CEO approval**: Required for override
### Governance Mechanisms
**Pre-launch approval**:
- Kill criteria document signed by project owner, decision authority, executive sponsor
- Changes to criteria require re-approval by all signatories
**Monitoring dashboard**:
- Real-time metrics vs. kill thresholds
- Traffic light system: Green (>20% above threshold), Yellow (within 20%), Red (below threshold)
- Automated alerts when entering Yellow or Red zones
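The traffic-light rules above can be sketched as a small helper. This is a minimal illustration, not part of any specific dashboard tool; the 20% band and the function name are assumptions from the bullets above:

```python
def traffic_light(metric: float, kill_threshold: float) -> str:
    """Classify a metric against its kill threshold.

    Green: more than 20% above threshold; Yellow: above threshold but
    within the 20% band; Red: below threshold.
    """
    if metric < kill_threshold:
        return "red"
    if metric < kill_threshold * 1.2:
        return "yellow"
    return "green"

traffic_light(0.11, 0.10)  # "yellow": 11% adoption vs. a 10% kill threshold
```

An alerting job would run this on each refresh and notify the decision authority on any transition into Yellow or Red.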
**Postmortem requirement**:
- All killed projects require postmortem within 2 weeks
- Focus: learning, not blame
- Shared with broader org to normalize killing projects
---
## 4. Postmortem Processes and Learning Culture
### Postmortem Structure
**Timing**: Within 2 weeks of kill decision (while context fresh)
**Participants**: Project team + key stakeholders (5-10 people)
**Facilitator**: Neutral person (not project owner, avoids defensiveness)
**Duration**: 60-90 minutes
**Agenda**:
1. **Project recap** (5 min): Goals, kill criteria, timeline, outcome
2. **What went well?** (15 min): Successes, positive learnings
3. **What went wrong?** (20 min): Root causes of failure (not blame)
4. **What did we learn?** (20 min): Insights, surprises, assumptions invalidated
5. **What would we do differently?** (15 min): Specific changes for future projects
6. **Action items** (10 min): How to apply learnings (process changes, skill gaps, hiring needs)
**Output**: Written doc (2-3 pages) shared with broader team
### Blameless Postmortem Techniques
**Goal**: Learn from failure without blame culture
**Techniques**:
1. **Focus on systems, not individuals**: "Why did our process allow this?" not "Who screwed up?"
2. **Assume good intent**: Team made best decisions with info available at the time
3. **Prime Directive**: "Everyone did the best job they could, given what they knew, their skills, the resources available, and the situation at hand"
4. **"How might we" framing**: Forward-looking, solutions-focused
**Red flags** (blame culture):
- Finger-pointing ("PM should have known...")
- Defensiveness ("It wasn't my fault...")
- Punishment mindset ("Someone should be held accountable...")
- Learning avoidance ("Let's just move on...")
**Example**:
- **Blame framing**: "Why didn't PM validate market demand before building?"
- **Blameless framing**: "How might we improve our discovery process to validate demand earlier with less investment?"
### Normalizing Killing Projects
**Cultural shift**: Killing projects = good (disciplined capital allocation), not bad (failure)
**Messaging**:
- **Positive framing**: "We killed 3 projects this quarter, freeing resources for winners"
- **Celebrate discipline**: Acknowledge teams for recognizing kill criteria and executing quickly
- **Success metrics**: % of portfolio actively managed (killed or pivoted) each quarter (target: 20-30%)
**Leadership behaviors**:
- CEO/VPs publicly discuss projects they killed and why
- Reward PMs who kill projects early (before major resource drain)
- Promote "disciplined stopping" as core competency in performance reviews
**Anti-patterns**:
- Hiding killed projects (creates stigma)
- Only discussing successes (survivorship bias)
- Punishing teams for "failed" projects (discourages risk-taking)
---
## 5. Advanced Topics
### Real Options Theory
**Concept**: Treat uncertain projects as financial options
**Option types**:
1. **Option to defer**: Delay investment until uncertainty resolves
2. **Option to expand**: Scale up if initial results positive
3. **Option to contract**: Scale down if results negative
4. **Option to abandon**: Kill if results very negative
5. **Option to switch**: Pivot to alternative use
**Valuation**: Black-Scholes model adapted for projects
- **Underlying asset**: NPV of project
- **Strike price**: Investment required
- **Volatility**: Uncertainty in outcomes
- **Time to expiration**: Decision window
**Application**: Continue projects with high option value (high upside, limited downside, flexibility)
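A full Black-Scholes valuation is rarely needed to see the point. A one-period binomial sketch (a deliberate simplification, with illustrative numbers) shows how the option to abandon adds value by truncating the downside:

```python
def abandonment_option_value(npv_up, npv_down, p_up, salvage=0.0):
    """Value added by an option to abandon, one-period binomial model.

    With the option, the firm takes max(project NPV, salvage) in each
    scenario; without it, the firm is locked into the project NPV.
    """
    with_option = p_up * max(npv_up, salvage) + (1 - p_up) * max(npv_down, salvage)
    without_option = p_up * npv_up + (1 - p_up) * npv_down
    return with_option - without_option

# Illustrative: 50/50 odds of +$2M or -$1M NPV, $0 salvage on abandonment
abandonment_option_value(2.0, -1.0, 0.5)  # 0.5 ($0.5M of option value)
```

The $0.5M here is exactly the expected loss avoided by killing in the down scenario, which is why projects with wide outcome ranges and cheap exits carry high option value.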
### Stage-Gate Process Design
**Optimal gate structure**:
- **3-5 gates** for major initiatives
- **Investment increases** by 3-5× at each gate (e.g., $10k → $50k → $200k → $1M)
- **Success criteria tighten** at each gate (higher bar as investment grows)
**Gate design**:
- **Gate 0 (Concept)**: $0-10k, 1-2 weeks, validate problem exists
- **Gate 1 (Discovery)**: $10-50k, 4-8 weeks, validate solution direction
- **Gate 2 (MVP)**: $50-200k, 2-3 months, validate product-market fit
- **Gate 3 (Scale)**: $200k-1M, 6-12 months, validate unit economics
- **Gate 4 (Growth)**: $1M+, ongoing, optimize and scale
**Kill rates by gate** (typical):
- Gate 0 → 1: Kill 50% (cheap to kill, many bad ideas)
- Gate 1 → 2: Kill 30% (learning reveals issues)
- Gate 2 → 3: Kill 20% (product-market fit hard)
- Gate 3 → 4: Kill 10% (unit economics don't work)
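Chaining the typical kill rates above shows why cheap early gates matter: only about a quarter of Gate-0 concepts ever reach Gate 4. A quick sketch:

```python
def gate_survival(kill_rates):
    """Fraction of concepts that survive every gate."""
    survival = 1.0
    for rate in kill_rates:
        survival *= (1 - rate)
    return survival

# Typical rates for Gate 0→1, 1→2, 2→3, 3→4
gate_survival([0.50, 0.30, 0.20, 0.10])  # 0.5 × 0.7 × 0.8 × 0.9 ≈ 0.252
```

So roughly 75% of concepts are killed before the expensive Gate 4 stage, most of them at the $0-10k end of the funnel.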
### Bayesian Updating for Kill Criteria
**Process**: Update kill probability as new data arrives
**Steps**:
1. **Prior probability** of kill: P(Kill) = initial estimate (e.g., 40% based on historical kill rate)
2. **Likelihood** of data given kill: P(Data | Kill) = how likely is this data if project should be killed?
3. **Likelihood** of data given success: P(Data | Success) = how likely is this data if project will succeed?
4. **Posterior probability** using Bayes theorem: P(Kill | Data) = P(Data | Kill) × P(Kill) / P(Data)
**Example**: SaaS feature with 10% adoption target (kill if <10% after 6 months)
- **Month 3 data**: 7% adoption
- **Prior**: P(Kill) = 40% (4 in 10 similar features killed historically)
- **Likelihood**: P(7% at month 3 | Kill) = 70% (projects that get killed typically have ~7% at halfway point)
- **Likelihood**: P(7% at month 3 | Success) = 30% (successful projects typically have ~12% at halfway point)
- **Posterior**: P(Kill | 7% adoption) = (0.70 × 0.40) / [(0.70 × 0.40) + (0.30 × 0.60)] = 0.28 / 0.46 = 61%
- **Interpretation**: 61% chance this project should be killed (up from 40% prior)
- **Action**: Evaluate closely, prepare pivot/kill plan
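The update in the example is mechanical once the prior and the two likelihoods are estimated. A minimal sketch of the same calculation:

```python
def posterior_kill(prior_kill, p_data_given_kill, p_data_given_success):
    """Bayes' theorem: P(Kill | Data).

    The denominator P(Data) expands over the two hypotheses:
    P(Data|Kill)P(Kill) + P(Data|Success)P(Success).
    """
    p_data = (p_data_given_kill * prior_kill
              + p_data_given_success * (1 - prior_kill))
    return p_data_given_kill * prior_kill / p_data

# SaaS feature example: 40% prior, likelihoods 70% (kill) vs. 30% (success)
posterior_kill(0.40, 0.70, 0.30)  # ≈ 0.61
```

Re-running this each month as adoption data arrives turns a gut-feel "it's not looking good" into an explicit, rising kill probability.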
### Stopping Rules in Scientific Research
**Clinical trial stopping rules** (adapted for product development):
1. **Futility stopping**: Stop early if interim data shows unlikely to reach success criteria
- **Rule**: If <10% chance of reaching target at current trajectory → Stop
- **Application**: Monitor weekly, project trajectory, stop if trajectory misses by >30%
2. **Efficacy stopping**: Stop early if interim data shows clear success (reallocate resources)
- **Rule**: If >95% confident success criteria will be met → Graduate early
- **Application**: Feature with 25% adoption at month 3 (target: 15% at month 6) → Graduate to core product
3. **Safety stopping**: Stop if harmful unintended consequences detected
- **Rule**: If churn increases >20% or NPS drops >10 points → Stop immediately
- **Application**: New feature causing user confusion, support ticket spike → Kill
**Example**: Mobile app experiment
- **Target**: 20% weekly active users at month 6
- **Month 2 data**: 5% weekly active
- **Trajectory**: Projecting 10% at month 6 (50% below target)
- **Futility analysis**: 95% confidence interval for month 6: 8-12% (entirely below 20% target)
- **Decision**: Invoke futility stopping, kill experiment at month 2 (save 4 months)
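The futility rule above combines two triggers: the trajectory missing the target by more than the tolerance, and the confidence interval falling entirely below the target. A hedged sketch (the function name and 30% default are taken from the rule stated earlier, not from any standard library):

```python
def futility_stop(projection, target, ci_high=None, tolerance=0.30):
    """Invoke futility stopping if the projection misses the target by
    more than `tolerance`, or the entire CI sits below the target."""
    misses_badly = projection < target * (1 - tolerance)
    ci_below_target = ci_high is not None and ci_high < target
    return misses_badly or ci_below_target

# Mobile app example: projecting 10% vs. a 20% target, CI upper bound 12%
futility_stop(0.10, 0.20, ci_high=0.12)  # True: kill at month 2
```

How `projection` is produced (linear extrapolation, a growth-curve fit) is a separate modeling choice; the stopping rule itself stays the same.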
---
## Key Principles Summary
1. **Set kill criteria before launch** (remove emotion, politics, sunk cost bias)
2. **Make criteria objective** (numbers, dates, not feelings)
3. **Assign clear decision rights** (single decision-maker, not committee)
4. **Don't move goalposts** (criteria are fixed; changes require formal process)
5. **Sunk costs are irrelevant** (only future value matters)
6. **Kill quickly** (wind down within 1 month, avoid zombie projects)
7. **Opportunity cost > sunk cost** (kill even if "almost done" if better options exist)
8. **Normalize killing** (celebrate discipline, share learnings, remove stigma)
9. **Portfolio thinking** (rank all projects, kill bottom 20-30% regularly)
10. **Learn from kills** (blameless postmortems, apply insights to future projects)
---
## Common Mistakes and Solutions
| Mistake | Symptom | Solution |
|---------|---------|----------|
| **Setting criteria after launch** | Goalposts move when results disappoint | Document criteria in PRD before launch, get sign-off |
| **Subjective criteria** | Debate over "low engagement" | Quantify: "<10% weekly active", not "poor adoption" |
| **Team consensus for kill** | Paralysis, no one wants to kill | Single decision-maker with clear authority |
| **Sunk cost justification** | "Invested $2M, can't quit" | Pre-mortem inversion: "Would we start this today?" |
| **Zombie projects** | Lingering for 6+ months | Wind down within 1 month of kill decision |
| **Stigma around killing** | Teams hide struggles, delay kill | Celebrate kills, share postmortems, normalize stopping |
| **Portfolio inertia** | Same projects year over year | Quarterly ranking + kill bottom 20-30% |
| **No postmortem** | Repeat same mistakes | Require postmortem within 2 weeks, share learnings |


# Kill Criteria & Exit Ramps Templates
Quick-start templates for defining kill criteria, go/no-go gates, pivot/kill decisions, and wind-down plans.
---
## Template 1: Kill Criteria Document (Pre-Launch)
**When to use**: Before starting new project, feature, or product
### Kill Criteria Document Template
**Project Name**: [Project name]
**Project Owner**: [Name, role]
**Start Date**: [Date]
**Kill Decision Authority**: [Specific person who makes kill decision, e.g., "Product VP"]
**Escalation Path**: [Who can override, e.g., "CEO can override with written justification"]
---
### Success Metrics
**Primary Success Metric**: [Quantifiable metric, e.g., "20% conversion rate"]
**Secondary Success Metrics** (if applicable):
- [Metric 2, e.g., "NPS >40"]
- [Metric 3, e.g., "CAC <$100"]
**Time Horizon**: [Evaluation period, e.g., "6 months post-launch"]
---
### Kill Criteria
**Kill Criterion 1** (Primary):
- **Metric**: [What to measure, e.g., "Conversion rate"]
- **Threshold**: [Trigger value, e.g., "<10%"]
- **Time**: [When to evaluate, e.g., "6 months post-launch"]
- **Action**: If threshold met → Kill project
**Kill Criterion 2** (Secondary, if applicable):
- **Metric**: [e.g., "CAC"]
- **Threshold**: [e.g., ">$200"]
- **Time**: [e.g., "3 months post-launch"]
- **Action**: If threshold met → Evaluate pivot or kill
**Kill Criterion 3** (Time-based):
- **Metric**: [e.g., "Profitability"]
- **Threshold**: [e.g., "Not profitable"]
- **Time**: [e.g., "By Month 18"]
- **Action**: If threshold met → Kill project
---
### Pivot Criteria (Optional)
**Pivot Criterion 1**:
- **Metric**: [e.g., "Conversion rate"]
- **Threshold**: [e.g., "10-15% (between kill and success)"]
- **Time**: [e.g., "6 months post-launch"]
- **Action**: If threshold met → Evaluate pivot options
**Pivot Options to Consider** (if pivot triggered):
- [ ] Option 1: [e.g., "Target different customer segment"]
- [ ] Option 2: [e.g., "Change pricing model"]
- [ ] Option 3: [e.g., "Shift go-to-market strategy"]
---
### Monitoring Plan
**Tracking Frequency**: [How often to review metrics, e.g., "Weekly dashboard, monthly review"]
**Dashboard Owner**: [Who maintains dashboard, e.g., "Product Analyst"]
**Review Meetings**: [When to formally review, e.g., "Monthly project review"]
**Alert Thresholds**: [When to notify decision-maker, e.g., "Alert if conversion <12% (approaching kill threshold)"]
---
### Wind-Down Plan (If Kill Triggered)
See **Template 5: Wind-Down Plan** for detailed execution checklist.
---
### Signatures (Pre-Launch Approval)
- **Project Owner**: ________________ Date: ______
- **Decision Authority**: ________________ Date: ______
**Note**: Changes to kill criteria after launch require re-approval.
---
## Template 2: Go/No-Go Gate Assessment
**When to use**: At milestone gates during multi-stage project
### Go/No-Go Gate Template
**Project Name**: [Project name]
**Gate Number**: [e.g., "Gate 2: MVP"]
**Gate Date**: [Date]
**Decision Authority**: [Who makes go/no-go decision]
---
### Gate Success Criteria (Defined at Previous Gate)
1. **Criterion 1**: [e.g., "40% weekly active users among 50 beta users"]
- **Target**: 40%
- **Actual Result**: [X]%
- **Met?**: ☐ Yes ☐ No
2. **Criterion 2**: [e.g., "NPS >30"]
- **Target**: >30
- **Actual Result**: [X]
- **Met?**: ☐ Yes ☐ No
3. **Criterion 3**: [e.g., "CAC <$150"]
- **Target**: <$150
- **Actual Result**: $[X]
- **Met?**: ☐ Yes ☐ No
---
### Overall Gate Assessment
**Criteria Met**: [X] out of [Y] criteria met
**Business Case Review**:
- **Original Business Case**: [Revenue projection, strategic rationale]
- **Current Business Case**: [Has it changed? Still valid?]
- **Changes**: [Any significant changes in market, competition, resources?]
**Alternative Opportunities**:
- **Alternative 1**: [Other project/opportunity] — Expected value: [X]
- **Alternative 2**: [Other project/opportunity] — Expected value: [X]
- **This Project**: Expected value: [X]
- **Ranking**: [Is this project still best use of resources?]
---
### Gate Decision
**Decision**: ☐ **GO** (Continue to next stage) ☐ **NO-GO** (Kill project) ☐ **PIVOT** (Change approach)
**Rationale**: [1-2 paragraph explanation of decision]
**Next Stage Investment** (if GO):
- **Budget**: $[X]
- **Timeline**: [X months]
- **Team Size**: [X people]
- **Next Gate**: [Date and success criteria for next gate]
**Kill Actions** (if NO-GO):
- [ ] Wind down within [timeframe]
- [ ] Reallocate team to [project]
- [ ] Postmortem scheduled for [date]
**Pivot Plan** (if PIVOT):
- **Pivot Direction**: [What changes?]
- **Hypothesis**: [What are we testing?]
- **Timeline**: [How long to test pivot?]
- **Success Criteria**: [What success looks like for pivot]
**Decision Authority**: ________________ Date: ______
---
## Template 3: Pivot vs. Kill Decision Framework
**When to use**: Project not meeting targets, deciding whether to pivot or kill
### Pivot vs. Kill Assessment
**Project Name**: [Project name]
**Assessment Date**: [Date]
**Current Status**: [Brief summary of current state]
---
### Assessment Factors
| Factor | Pivot Score (1-5) | Kill Score (1-5) | Notes |
|--------|-------------------|------------------|-------|
| **Customer Pain** | ☐ Real pain, wrong solution (5) | ☐ No pain, nice-to-have (1) | [Evidence] |
| **Market Size** | ☐ Large enough (5) | ☐ Too small (1) | [TAM estimate] |
| **Learning Rate** | ☐ High insights (5) | ☐ Low, stuck (1) | [Discoveries per week] |
| **Burn Rate** | ☐ Sustainable (5) | ☐ Too high (1) | [Monthly burn vs. runway] |
| **Team Belief** | ☐ Believes in vision (5) | ☐ Doesn't believe (1) | [Team sentiment] |
| **Opportunity Cost** | ☐ Still best option (5) | ☐ Better options exist (1) | [Alternatives] |
| **Execution vs. Hypothesis** | ☐ Hypothesis valid, execution wrong (5) | ☐ Hypothesis invalid (1) | [Root cause analysis] |
**Total Pivot Score**: [Sum] / 35
**Total Kill Score**: [Sum] / 35
---
### Pre-Mortem Inversion Test
**Question**: "If we were starting today with $0 invested and zero sunk cost, would we start this project?"
☐ **Yes, exactly as-is** → **CONTINUE** (no pivot needed)
☐ **Yes, but differently** → **PIVOT** (describe how: _____________________)
☐ **No** → **KILL** (reallocate resources)
---
### Decision
**Decision**: ☐ **CONTINUE** ☐ **PIVOT** ☐ **KILL**
**Rationale**: [Explanation based on factors above]
**If PIVOT**:
- **Pivot Hypothesis**: [What changes and why]
- **Pivot Timeline**: [How long to test]
- **Pivot Success Criteria**: [What success looks like]
- **Pivot Kill Criteria**: [If pivot doesn't work, when to kill]
**If KILL**:
- **Wind-Down Timeline**: [X weeks/months]
- **Resource Reallocation**: [Where team/budget goes]
- **Postmortem Date**: [When to conduct postmortem]
**Decision Authority**: ________________ Date: ______
---
## Template 4: Portfolio Kill Criteria
**When to use**: Managing multiple projects, need to kill some to focus
### Portfolio Ranking & Kill Decision
**Portfolio Review Date**: [Date]
**Decision Authority**: [Who makes portfolio decisions]
**Capacity**: [How many projects can we support? e.g., "5 projects"]
---
### Project Ranking
| Rank | Project | Expected Value (EV) | Resource Cost (RC) | EV/RC Ratio | Status |
|------|---------|---------------------|-------------------|-------------|--------|
| 1 | [Project A] | $[X]M | [Y] FTEs | [X/Y] | ✓ Keep |
| 2 | [Project B] | $[X]M | [Y] FTEs | [X/Y] | ✓ Keep |
| 3 | [Project C] | $[X]M | [Y] FTEs | [X/Y] | ✓ Keep |
| 4 | [Project D] | $[X]M | [Y] FTEs | [X/Y] | ✓ Keep |
| 5 | [Project E] | $[X]M | [Y] FTEs | [X/Y] | ✓ Keep |
| **6** | **[Project F]** | **$[X]M** | **[Y] FTEs** | **[X/Y]** | **← Kill line** |
| 7 | [Project G] | $[X]M | [Y] FTEs | [X/Y] | ✗ Kill |
| 8 | [Project H] | $[X]M | [Y] FTEs | [X/Y] | ✗ Kill |
**Ranking Methodology**: [How EV calculated, e.g., "(Revenue × Probability) / Resource Cost"]
**Kill Threshold**: Projects ranked below [X] get killed
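The ranking mechanics are simple enough to sketch. The project names, values, and capacity below are purely illustrative:

```python
# Illustrative portfolio: name -> (expected value in $M, resource cost in FTEs)
projects = {"A": (12.0, 4), "B": (8.0, 5), "C": (3.0, 6)}
capacity = 2  # how many projects the org can actually support

# Rank by EV/RC ratio, highest first; everything below capacity gets killed
ranked = sorted(projects, key=lambda p: projects[p][0] / projects[p][1],
                reverse=True)
keep, kill = ranked[:capacity], ranked[capacity:]
# keep == ["A", "B"], kill == ["C"]
```

The hard part in practice is not the sort but keeping the EV estimates honest and updated each quarter; stale projections quietly protect incumbent projects.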
---
### Projects to Kill (Example)
**[Project Name]**:
- **Status**: [e.g., "80% complete"] | **Sunk Cost**: $[X] | **Expected Value**: $[Z]
- **Why Kill**: [Rationale, e.g., "Opportunity cost — reallocating to Project A yields 2× more value"]
- **Wind-Down**: [Timeline] | **Team Reallocation**: [X FTEs] → [Projects]
**Decision Authority**: ________________ Date: ______
---
## Template 5: Wind-Down Plan
**When to use**: Kill decision made, need to execute wind-down
### Project Wind-Down Checklist
**Project Name**: [Project name]
**Kill Decision Date**: [Date]
**Wind-Down Owner**: [Person responsible for executing wind-down]
**Target Wind-Down Completion**: [Date, ideally <1 month]
---
### Communication (Week 1)
- [ ] **Team**: Notify within 1 week ([Decision authority + owner], focus on learning not blame)
- [ ] **Stakeholders**: Notify exec team, adjacent teams ([Project owner])
- [ ] **Customers** (if applicable): Migration plan, support timeline, refund policy ([Customer success])
---
### Team & Technical Transition (Weeks 2-4)
- [ ] **1-on-1s**: Discuss next roles, reallocation plan (Week 2, [Manager])
- [ ] **Knowledge Transfer**: Document learnings, tech docs, research insights (Week 2-3, [Team])
- [ ] **Code Archive**: Archive repository at [Location] (Week 3, [Tech lead])
- [ ] **Infrastructure**: Shut down servers, cancel services, save $[X]/month (Week 4, [DevOps])
- [ ] **Data**: Backup critical data, ensure GDPR compliance (Week 3, [Tech lead])
---
### Customer Transition (if applicable)
- [ ] **Migration**: [X months timeline] to [Alternative product], support until [date]
- [ ] **Refunds**: Pro-rated refunds for annual customers within [X weeks]
- [ ] **Sunset**: Support ends [date], then [full deprecation]
---
### Postmortem (Week 3-4)
- [ ] **Schedule**: Within 2 weeks ([Date], [Team + stakeholders], [Neutral facilitator])
- [ ] **Agenda**: What worked? What didn't? Learnings? Apply to future?
- [ ] **Document**: Write postmortem doc ([Facilitator]), share with team (learning focus)
- [ ] **Celebrate**: Acknowledge disciplined stopping frees resources for winners
---
### Budget & Metrics
**Sunk Cost**: $[Total spent]
**Remaining Budget Saved**: $[Budget that would have been spent if continued]
**Resources Freed**: [X FTEs, $Y budget]
**Reallocated To**: [List projects receiving resources]
**Time to Wind-Down**: [Actual weeks from decision to completion]
---
### Completion Checklist
- [ ] Team reallocated | [ ] Infrastructure shutdown | [ ] Customers transitioned
- [ ] Postmortem shared | [ ] Budget reallocated | [ ] Project marked "Killed"
---
**Wind-Down Owner Signature**: ________________ Date: ______
---
## Quick Reference: When to Use Each Template
| Template | Use Case | Timing |
|----------|----------|--------|
| **Kill Criteria Document** | New project starting | Before launch |
| **Go/No-Go Gate Assessment** | Milestone decision point | At each gate |
| **Pivot vs. Kill Decision** | Project not meeting targets | When struggling |
| **Portfolio Kill Criteria** | Managing multiple projects | Quarterly or as needed |
| **Wind-Down Plan** | Kill decision made | After kill decision |