Initial commit
@@ -0,0 +1,176 @@
{
  "criteria": [
    {
      "name": "Estimation Quality",
      "description": "Are costs and benefits quantified with appropriate ranges/probabilities?",
      "scale": {
        "1": "Single-point estimates with no uncertainty. Major cost or benefit categories missing.",
        "2": "Some ranges provided but many point estimates. Several categories incomplete.",
        "3": "Most estimates have ranges. Key cost and benefit categories covered. Some uncertainty acknowledged.",
        "4": "Comprehensive estimation with ranges for uncertain variables. Probabilities assigned to scenarios. Justification provided for estimates.",
        "5": "Rigorous estimation with probability distributions, data sources cited, estimation method explained (analogous, parametric, bottom-up), and confidence levels stated."
      }
    },
    {
      "name": "Probability Calibration",
      "description": "Are probabilities reasonable, justified, and calibrated?",
      "scale": {
        "1": "No probabilities assigned or completely arbitrary (e.g., all 50%).",
        "2": "Probabilities assigned but no justification. Appear overconfident (too many 5% or 95%).",
        "3": "Probabilities have some justification. Reasonable calibration for most scenarios.",
        "4": "Probabilities justified with base rates, expert judgment, or reference class. Well-calibrated ranges.",
        "5": "Rigorous probability assignment using historical data, base rates, and adjustments. Calibration checked explicitly. Confidence bounds stated."
      }
    },
    {
      "name": "Decision Analysis Rigor",
      "description": "Is expected value and comparison logic sound?",
      "scale": {
        "1": "No expected value calculation. Comparison is purely subjective.",
        "2": "Expected value attempted but calculation errors. Comparison incomplete.",
        "3": "Expected value calculated correctly. Basic comparison of alternatives using EV or simple scoring.",
        "4": "Sound EV calculation with appropriate decision criteria (NPV, IRR, utility). Clear comparison methodology.",
        "5": "Rigorous analysis using appropriate technique (EV, decision tree, Monte Carlo, MCDA). Multiple decision criteria considered. Methodology appropriate for problem complexity."
      }
    },
    {
      "name": "Sensitivity Analysis",
      "description": "Are key drivers identified and impact tested?",
      "scale": {
        "1": "No sensitivity analysis performed.",
        "2": "Limited sensitivity (single variable tested). No identification of key drivers.",
        "3": "One-way sensitivity on 2-3 key variables. Drivers identified but impact not quantified well.",
        "4": "Comprehensive one-way sensitivity on all major variables. Key drivers ranked by impact. Break-even analysis performed.",
        "5": "Advanced sensitivity including two-way analysis, scenario analysis, or tornado diagrams. Robustness tested across reasonable ranges. Conditions that change conclusion clearly stated."
      }
    },
    {
      "name": "Alternative Comparison",
      "description": "Are all relevant alternatives considered and compared fairly?",
      "scale": {
        "1": "Only one alternative analyzed (no comparison).",
        "2": "Two alternatives but comparison is cursory or biased.",
        "3": "2-3 alternatives analyzed. Comparison is fair but may miss some options or factors.",
        "4": "3-5 alternatives including creative options. Fair comparison across all relevant factors.",
        "5": "Comprehensive alternative generation (considered 5+ initially, narrowed to 3-5). Comparison addresses all stakeholder concerns. Dominated options eliminated with explanation."
      }
    },
    {
      "name": "Assumption Transparency",
      "description": "Are assumptions stated explicitly and justified?",
      "scale": {
        "1": "Assumptions hidden or unstated. Reader must guess what's assumed.",
        "2": "Few assumptions stated. Most are implicit. Little justification.",
        "3": "Major assumptions stated but justification is thin. Some assumptions still implicit.",
        "4": "All key assumptions stated explicitly with justification. Reader can assess reasonableness.",
        "5": "Complete assumption transparency. Each assumption justified with source or reasoning. Alternative assumptions considered. Impact of changing assumptions tested."
      }
    },
    {
      "name": "Narrative Clarity",
      "description": "Is the story clear, logical, and persuasive?",
      "scale": {
        "1": "Narrative is confusing, illogical, or missing. Just numbers with no story.",
        "2": "Some narrative but disjointed. Logic is hard to follow. Key points buried.",
        "3": "Clear narrative structure. Main points are clear. Logic is mostly sound.",
        "4": "Compelling narrative with clear problem statement, analysis summary, recommendation, and reasoning. Flows logically.",
        "5": "Highly persuasive narrative that leads reader through problem, analysis, and conclusion. Key insights highlighted. Tradeoffs acknowledged. Objections preempted. Memorable framing."
      }
    },
    {
      "name": "Audience Tailoring",
      "description": "Is content appropriate for stated audience?",
      "scale": {
        "1": "No consideration of audience. Wrong level of detail or wrong focus.",
        "2": "Minimal tailoring. May have too much or too little detail for audience.",
        "3": "Content generally appropriate. Length and detail reasonable for audience.",
        "4": "Well-tailored to audience needs. Executives get summary, technical teams get methodology, finance gets numbers. Appropriate jargon level.",
        "5": "Expertly tailored with multiple versions or sections for different stakeholders. Executive summary for leaders, technical appendix for specialists, financial detail for finance. Anticipates audience questions."
      }
    },
    {
      "name": "Risk Acknowledgment",
      "description": "Are downside scenarios, risks, and limitations addressed?",
      "scale": {
        "1": "No mention of risks or limitations. Only upside presented.",
        "2": "Brief mention of risks but no detail. Limitations glossed over.",
        "3": "Downside scenarios included. Major risks identified. Some limitations noted.",
        "4": "Comprehensive risk analysis with downside scenarios, mitigation strategies, and clear limitations. Probability of loss quantified.",
        "5": "Rigorous risk treatment including probability-weighted downside, specific mitigation plans, uncertainty quantified, and honest assessment of analysis limitations. 'What would change our mind' conditions stated."
      }
    },
    {
      "name": "Actionability",
      "description": "Are next steps clear, specific, and feasible?",
      "scale": {
        "1": "No next steps or recommendation unclear.",
        "2": "Vague next steps ('consider options', 'study further'). No specifics.",
        "3": "Recommendation clear. Next steps identified but lack detail on who/when/how.",
        "4": "Clear recommendation with specific next steps, owners, and timeline. Success metrics defined.",
        "5": "Highly actionable with clear recommendation, detailed implementation plan with milestones, owners assigned, success metrics defined, decision review cadence specified, and monitoring plan for key assumptions."
      }
    }
  ],
  "minimum_standard": 3.5,
  "stakes_guidance": {
    "low_stakes": {
      "threshold": 3.0,
      "description": "Decisions under $100k or low strategic importance. Acceptable to have simpler analysis (criteria 3-4).",
      "focus_criteria": ["Estimation Quality", "Decision Analysis Rigor", "Actionability"]
    },
    "medium_stakes": {
      "threshold": 3.5,
      "description": "Decisions $100k-$1M or moderate strategic importance. Standard threshold applies (criteria average ≥3.5).",
      "focus_criteria": ["All criteria should meet threshold"]
    },
    "high_stakes": {
      "threshold": 4.0,
      "description": "Decisions >$1M or high strategic importance. Higher bar required (criteria average ≥4.0).",
      "focus_criteria": ["Estimation Quality", "Sensitivity Analysis", "Risk Acknowledgment", "Assumption Transparency"],
      "additional_requirements": ["External validation of key estimates", "Multiple modeling approaches for robustness", "Explicit stakeholder review process"]
    }
  },
  "common_failure_modes": [
    {
      "failure": "Optimism bias",
      "symptoms": "All probabilities favor best case. Downside scenarios underweighted.",
      "fix": "Use reference class forecasting. Require explicit base rates. Weight downside equally."
    },
    {
      "failure": "Sunk cost fallacy",
      "symptoms": "Past investments influence forward-looking analysis.",
      "fix": "Evaluate only incremental future costs/benefits. Ignore sunk costs explicitly."
    },
    {
      "failure": "False precision",
      "symptoms": "Point estimates to multiple decimal places when uncertainty is ±50%.",
      "fix": "Use ranges. State confidence levels. Round appropriately given uncertainty."
    },
    {
      "failure": "Anchoring on first estimate",
      "symptoms": "All alternatives compared to one 'anchor' rather than objectively.",
      "fix": "Generate alternatives independently. Use multiple estimation methods."
    },
    {
      "failure": "Analysis paralysis",
      "symptoms": "Endless modeling, no decision. Waiting for perfect information.",
      "fix": "Set time limits. Use 'good enough' threshold. Decide with available info."
    },
    {
      "failure": "Ignoring opportunity cost",
      "symptoms": "Only evaluating direct costs, not what else could be done with resources.",
      "fix": "Explicitly include opportunity cost. Compare to next-best alternative use of capital/time."
    },
    {
      "failure": "Confirmation bias",
      "symptoms": "Analysis structured to justify predetermined conclusion.",
      "fix": "Generate alternatives before analyzing. Use blind evaluation. Seek disconfirming evidence."
    },
    {
      "failure": "Overweighting quantifiable",
      "symptoms": "Strategic or qualitative factors ignored because hard to measure.",
      "fix": "Explicitly list qualitative factors. Use scoring for non-quantifiable. Ask 'what matters that we're not measuring?'"
    }
  ],
  "usage_notes": "Use this rubric to self-assess before delivering analysis. For high-stakes decisions (>$1M or strategic), aim for 4.0+ average. For low-stakes (<$100k), 3.0+ may be acceptable. Pay special attention to Estimation Quality, Decision Analysis Rigor, and Risk Acknowledgment as these are most critical for sound decisions."
}
@@ -0,0 +1,485 @@
# Decision: Build Custom Analytics Platform vs. Buy SaaS Solution

**Date:** 2024-01-15
**Decision-maker:** CTO + VP Product
**Audience:** Executive team
**Stakes:** Medium ($500k-$1.5M over 3 years)

---

## 1. Decision Context

**What we're deciding:**
Should we build a custom analytics platform in-house or purchase a SaaS analytics solution?

**Why this matters:**
- Current analytics are manual and time-consuming (20 hours/week analyst time)
- Product team needs real-time insights to inform roadmap decisions
- Sales needs usage data to identify expansion opportunities
- Engineering wants to reduce operational burden of maintaining custom tools

**Alternatives:**
1. **Build custom**: Develop in-house analytics platform with our exact requirements
2. **Buy SaaS**: Purchase enterprise analytics platform (e.g., Amplitude, Mixpanel)
3. **Hybrid**: Use SaaS for standard metrics, build custom for proprietary analysis

**Key uncertainties:**
- Development cost and timeline (historical variance ±40%)
- Feature completeness of SaaS solution (will it meet all needs?)
- Usage growth rate (affects SaaS costs, which scale with volume)
- Long-term flexibility needs (will we outgrow SaaS or need custom features?)

**Constraints:**
- Budget: $150k available in current year, $50k/year ongoing
- Timeline: Need solution operational within 6 months
- Requirements: Must support 100M events/month, 50+ team members, custom dashboards
- Strategic: Prefer minimal vendor lock-in, prioritize time-to-value

**Audience:** Executive team (need bottom-line recommendation + risks)

---

## 2. Estimation

### Alternative 1: Build Custom

**Costs:**
- **Initial development**: $200k-$400k (most likely $300k)
  - Base estimate: 6 engineer-months × $50k loaded cost = $300k
  - Range reflects scope uncertainty and potential technical challenges
  - Source: Similar internal projects averaged $280k ±$85k (30% std dev)

- **Annual operational costs**: $40k-$60k per year (most likely $50k)
  - Infrastructure: $15k-$25k (based on 100M events/month)
  - Maintenance: 0.5 engineer FTE = $25k-$35k per year
  - Source: Current analytics tools cost $45k/year to maintain

- **Opportunity cost**: $150k
  - Engineering team would otherwise work on core product features
  - Estimated value of deferred features: $150k in potential revenue impact

**Benefits:**
- **Cost savings**: $0 subscription fees (vs $120k/year for SaaS)
- **Perfect fit**: 100% feature match to our specific needs
- **Flexibility**: Full control to add custom analysis
- **Strategic value**: Build analytics competency, own our data

**Probabilities:**
- **Best case (20%)**: On-time delivery at $250k, perfect execution
  - Prerequisites: Clear requirements, no scope creep, experienced team available

- **Base case (50%)**: Moderate delays and cost overruns to $350k over 8 months
  - Typical scenario based on historical performance

- **Worst case (30%)**: Significant delays to $500k over 12 months, some features cut
  - Risk factors: Key engineer departure, underestimated complexity, changing requirements

**Key assumptions:**
- Engineering team has capacity (currently 70% utilized)
- No major technical unknowns in data pipeline
- Requirements are stable (< 10% scope change)
- Infrastructure costs scale linearly with events

### Alternative 2: Buy SaaS

**Costs:**
- **Initial implementation**: $15k-$25k (most likely $20k)
  - Setup and integration: 2-3 weeks consulting
  - Data migration and testing
  - Team training
  - Source: Vendor quote + reference customer feedback

- **Annual subscription**: $100k-$140k per year (most likely $120k)
  - Base: $80k for 100M events/month
  - Users: $2k per user × 20 power users = $40k
  - Growth buffer: Assume 20% event growth per year
  - Source: Vendor pricing confirmed, escalates with usage

- **Switching cost** (if we change vendors later): $50k-$75k
  - Data export and migration
  - Re-implementing integrations
  - Team retraining

**Benefits:**
- **Faster time-to-value**: 2 months vs. 8 months for build
  - 6-month head start = earlier insights = better decisions sooner
  - Estimated value: $75k (half of opportunity cost avoided)

- **Proven reliability**: 99.9% uptime SLA
  - Reduces operational risk
  - Frees engineering for core product

- **Feature velocity**: Continuous improvements from vendor
  - New capabilities quarterly (ML-powered insights, predictive analytics)
  - Estimated value: $30k/year in avoided feature development

- **Lower risk**: Predictable costs, no schedule risk
  - High confidence in timeline and total cost

**Probabilities:**
- **Best case (40%)**: Perfect fit, seamless implementation, $100k/year steady state
  - Vendor delivers on promises, usage grows slower than expected

- **Base case (45%)**: Good fit with minor gaps, standard implementation, $120k/year
  - 85% of needs met out-of-box, workarounds for remaining 15%

- **Worst case (15%)**: Poor fit requiring workarounds or supplemental tools, $150k/year
  - Missing critical features, need to maintain some custom tooling

**Key assumptions:**
- SaaS vendor is stable and continues product development
- Event volume growth is 20% per year (manageable)
- Vendor lock-in is acceptable (switching cost is reasonable)
- Security and compliance requirements are met by vendor

### Alternative 3: Hybrid

**Costs:**
- **Initial investment**: $120k-$150k (most likely $125k)
  - SaaS implementation: $20k
  - Custom integrations and proprietary metrics: $100k-$130k development

- **Annual costs**: $80k-$100k per year (most likely $90k)
  - SaaS subscription (smaller tier): $60k-$70k
  - Maintenance of custom components: $20k-$30k

**Benefits:**
- **Balanced approach**: Standard analytics from SaaS, custom analysis in-house
- **Reduced risk**: Less development than full build, more control than pure SaaS
- **Flexibility**: Can shift balance over time based on needs

**Probabilities:**
- **Base case (60%)**: Works reasonably well, $125k + $90k/year
- **Integration complexity (40%)**: More overhead than expected, $150k + $100k/year

**Key assumptions:**
- Clean separation between standard and custom analytics
- SaaS provides a good API for custom integrations
- Maintaining two systems doesn't create excessive complexity

---

## 3. Decision Analysis

### Expected Value Calculation (3-Year NPV)

**Discount rate:** 10% (company's cost of capital)

#### Alternative 1: Build Custom

**Year 0 (Initial):**
- Best case (20%): -$250k development - $150k opportunity cost = -$400k
- Base case (50%): -$350k development - $150k opportunity cost = -$500k
- Worst case (30%): -$500k development - $150k opportunity cost = -$650k

**Expected Year 0:** (-$400k × 0.20) + (-$500k × 0.50) + (-$650k × 0.30) = -$525k

**Years 1-3 (Operational):**
- Annual cost: $50k/year
- PV of 3 years at 10%: $50k × 2.49 = $124k

**Total Expected NPV (Build):** -$525k - $124k = **-$649k**

*Note: NPVs are negative because we are comparing costs. The analysis focuses on minimizing cost, since the core benefit (the analytics capability itself) is roughly equivalent across alternatives.*

#### Alternative 2: Buy SaaS

**Year 0 (Initial):**
- Implementation: $20k
- No opportunity cost (fast implementation)

**Years 1-3 (Operational):**
- Best case (40%): $100k/year × 2.49 = $249k
- Base case (45%): $120k/year × 2.49 = $299k
- Worst case (15%): $150k/year × 2.49 = $374k

**Expected annual cost:** ($100k × 0.40) + ($120k × 0.45) + ($150k × 0.15) = $116.5k/year
**PV of 3 years:** $116.5k × 2.49 = $290k

**Total Expected NPV (Buy):** -$20k - $290k = **-$310k**

**Benefit adjustment for faster time-to-value:** +$75k (6-month head start)
**Adjusted NPV (Buy):** -$310k + $75k = **-$235k**

#### Alternative 3: Hybrid

**Year 0 (Initial):**
- Base case (60%): -$125k development + implementation - $75k partial opportunity cost (half the custom build time) = -$200k
- Integration complexity (40%): -$150k - $75k = -$225k

**Expected Year 0:** (-$200k × 0.60) + (-$225k × 0.40) = -$210k

**Years 1-3 (Operational):**
- Expected annual cost: ($90k × 0.60) + ($100k × 0.40) = $94k/year
- PV of 3 years at 10%: $94k × 2.49 = $234k

**Total Expected NPV (Hybrid):** -$210k - $234k = **-$444k**

### Comparison Summary

| Alternative | Expected 3-Year Cost | Risk Profile | Time to Value |
|-------------|---------------------|--------------|---------------|
| Build Custom | $649k | **High** (30% worst case) | 8 months |
| Buy SaaS | $235k | **Low** (predictable) | 2 months |
| Hybrid | $444k | **Medium** | 5 months |

**Cost difference:** Buy SaaS saves **$414k** vs. Build Custom over 3 years

### Sensitivity Analysis

**What if development cost for Build is 20% lower ($240k base instead of $300k)?**
- Build NPV: -$577k (still $342k worse than Buy)
- **Conclusion still holds**

**What if SaaS costs grow 40% per year instead of 20%?**
- Year 3 SaaS cost: $230k (vs. $145k base case)
- Buy NPV: -$325k (still $324k better than Build)
- **Conclusion still holds**

**What if we need to switch SaaS vendors in Year 3?**
- Additional switching cost: $65k
- Buy NPV: -$300k (still $349k better than Build)
- **Conclusion still holds**

**Break-even analysis:**
At what annual SaaS cost does Build become cheaper?
- Build 3-year cost: $649k
- Buy 3-year cost: $20k + (X × 2.49) - $75k = $649k
- Solve: X = $282k/year

**Interpretation:** SaaS would need to cost $282k/year (2.4x current estimate) for Build to break even. Very unlikely.
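
The arithmetic above is simple enough to check in a few lines. A minimal sketch (Python), using the same rounded scenario figures in $k and the 3-year, 10% annuity factor that the memo rounds to 2.49; it is a verification aid, not part of the original model:

```python
# Sketch: reproduce the 3-year expected-cost comparison (all figures in $k).
annuity = sum(1 / 1.10**t for t in range(1, 4))       # ≈ 2.49 at a 10% discount rate

def expected(scenarios):
    """Probability-weighted value for a list of (probability, value) pairs."""
    return sum(p * v for p, v in scenarios)

# Build: Year 0 (development + $150k opportunity cost), then $50k/year to operate.
build_year0 = expected([(0.20, 400), (0.50, 500), (0.30, 650)])
build_npv = -(build_year0 + 50 * annuity)             # ≈ -649

# Buy: $20k setup, probability-weighted subscription, +$75k time-to-value credit.
buy_annual = expected([(0.40, 100), (0.45, 120), (0.15, 150)])   # 116.5
buy_npv = -(20 + buy_annual * annuity) + 75           # ≈ -235

# Break-even subscription: annual cost at which Buy's 3-year total matches Build's.
break_even_annual = (-build_npv - 20 + 75) / annuity  # ≈ 283; the memo rounds to ~$282k
print(round(build_npv), round(buy_npv), round(break_even_annual))
```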

### Robustness Check

**Conclusion is robust if:**
- Development cost < $600k (currently $300k base, $500k worst case ✓)
- SaaS annual cost < $280k (currently $120k base, $150k worst case ✓)
- Time-to-value benefit > $0 (6-month head start valuable ✓)

**Conclusion changes if:**
- SaaS vendor goes out of business (low probability, large incumbents)
- Regulatory requirements force on-premise solution (not currently foreseen)
- Custom analytics become core competitive differentiator (possible but unlikely)

---

## 4. Recommendation

### **Recommended option: Buy SaaS Solution**

**Reasoning:**

Buy SaaS dominates Build Custom on three dimensions:

1. **Lower expected cost**: $235k vs. $649k over 3 years (saves $414k)
2. **Lower risk**: Predictable subscription vs. 30% chance of 2x cost overrun on build
3. **Faster time-to-value**: 2 months vs. 8 months (6-month head start enables better decisions sooner)

The cost advantage is substantial ($414k savings) and robust to reasonable assumption changes. Even if SaaS costs grow 40% per year instead of 20%, or we need to switch vendors in Year 3, Buy still saves $300k+.

The risk profile strongly favors Buy. Historical data shows 30% of similar build projects experience 2x cost overruns. SaaS has predictable costs with 99.9% uptime SLA.

Time-to-value matters: getting analytics operational 6 months sooner means better product decisions sooner, worth approximately $75k in avoided opportunity cost.

**Key factors:**
1. **Cost**: $414k lower expected cost over 3 years
2. **Risk**: Predictable vs. high uncertainty (30% worst case for Build)
3. **Speed**: 2 months vs. 8 months to operational
4. **Strategic fit**: Analytics are important but not a core competitive differentiator

**Tradeoffs accepted:**
- **Vendor dependency**: Accepting switching cost of $65k if we change vendors
  - Mitigation: Choose a stable, market-leading vendor (Amplitude or Mixpanel)

- **Some feature gaps**: SaaS may not support 100% of custom analysis needs
  - Mitigation: 85% coverage out-of-box, workarounds for remaining 15%
  - Can supplement with lightweight custom tools if needed ($20k-$30k vs. $300k+ full build)

- **Less flexibility**: Can't customize as freely as with an in-house solution
  - Mitigation: Most SaaS platforms offer extensive APIs and integrations
  - True custom needs can be addressed incrementally

**Why not Hybrid?**
Hybrid ($444k) is $209k more expensive than Buy with minimal additional benefit. The complexity of maintaining two systems outweighs the incremental flexibility.

---

## 5. Risks and Mitigations

### Risk 1: SaaS doesn't meet all requirements

**Probability:** Medium (15% worst-case scenario)

**Impact:** Need workarounds or supplemental tools

**Mitigation:**
- Conduct thorough vendor evaluation with 2-week pilot
- Map all requirements to vendor capabilities before committing
- Budget $30k for lightweight custom supplements if needed
- Still cheaper than full Build even with supplements

### Risk 2: Vendor lock-in / price increases

**Probability:** Low-Medium (vendors typically increase 5-10%/year)

**Impact:** Higher ongoing costs

**Mitigation:**
- Negotiate multi-year contract with price protection
- Maintain data export capability (ensure vendor supports data portability)
- Budget includes 20% annual growth buffer
- Switching cost is manageable ($65k) if needed

### Risk 3: Usage growth exceeds estimates

**Probability:** Low (current trajectory is 15%/year vs. the 20% assumed)

**Impact:** Higher subscription costs

**Mitigation:**
- Monitor usage monthly against plan
- Optimize event instrumentation to reduce unnecessary events
- Renegotiate tier if growth is faster than expected
- Even at 2x usage growth, still cheaper than Build

### Risk 4: Security or compliance issues

**Probability:** Very Low (vendor is SOC 2 Type II certified)

**Impact:** Cannot use vendor, forced to build

**Mitigation:**
- Verify vendor security certifications before contract
- Review data handling and privacy policies
- Include compliance requirements in vendor evaluation
- This risk applies to any vendor; it is not specific to this decision

---

## 6. Next Steps

**If approved:**

1. **Vendor evaluation** (2 weeks) - VP Product + Data Lead
   - Demo top 3 vendors (Amplitude, Mixpanel, Heap)
   - Map requirements to capabilities
   - Validate pricing and terms
   - Decision by: Feb 1

2. **Pilot implementation** (2 weeks) - Engineering Lead
   - 2-week pilot with selected vendor
   - Instrument 3 key product flows
   - Validate data accuracy and latency
   - Go/no-go decision by: Feb 15

3. **Full rollout** (4 weeks) - Data Team + Engineering
   - Instrument all product events
   - Migrate existing dashboards
   - Train team on new platform
   - Launch by: March 15

**Success metrics:**
- **Time to value**: Analytics operational within 2 months (by March 15)
- **Cost**: Stay within $20k implementation + $120k annual budget
- **Adoption**: 50+ team members using platform within 30 days of launch
- **Value delivery**: Reduce manual analytics time from 20 hours/week to <5 hours/week

**Decision review:**
- **6-month review** (Sept 2024): Validate cost and value delivered
  - Key question: Are we getting value proportional to cost?
  - Metrics: Usage stats, time savings, decisions influenced by data

- **Annual review** (Jan 2025): Assess whether to continue, renegotiate, or reconsider build
  - Key indicators: Usage growth trend, missing features impact, pricing changes

**What would change our mind:**
- If vendor quality degrades significantly (downtime, bugs, poor support)
- If pricing increases >30% beyond projections
- If we identify analytics as a core competitive differentiator (requires custom innovation)
- If regulatory requirements force an on-premise solution

---

## 7. Appendix: Assumptions Log

**Development estimates:**
- Based on: 3 similar internal projects (API platform, reporting tool, data pipeline)
- Historical variance: ±30% from initial estimate
- Team composition: 2-3 senior engineers for 3-4 months
- Scope: Event ingestion, storage, query engine, dashboarding UI

**SaaS pricing:**
- Based on: Vendor quotes for 100M events/month, 50 users
- Confirmed with: 2 reference customers at similar scale
- Growth assumption: 20% annual event growth (aligned with product roadmap)
- User assumption: 20 power users (product, sales, exec) need full access

**Opportunity cost:**
- Based on: Engineering team would otherwise work on product features
- Estimated value: Product features could drive $150k additional revenue
- Source: Product roadmap prioritization (deferred features)

**Time-to-value benefit:**
- Based on: 6-month head start with SaaS (2 months vs. 8 months)
- Estimated value: Better decisions sooner = avoided mistakes + seized opportunities
- Conservative estimate: 50% of opportunity cost = $75k

**Discount rate:**
- Company cost of capital: 10%
- Used to calculate present value of multi-year costs

---

## Self-Assessment (Rubric Scores)

**Estimation Quality:** 4/5
- Comprehensive estimation with ranges and probabilities
- Justification provided for estimates with sources
- Could improve: More rigorous data collection from reference customers

**Probability Calibration:** 4/5
- Probabilities justified with base rates (historical project performance)
- Well-calibrated ranges
- Could improve: External validation of probability estimates

**Decision Analysis Rigor:** 5/5
- Sound expected value calculation with NPV
- Appropriate decision criteria
- Multiple scenarios tested

**Sensitivity Analysis:** 5/5
- Comprehensive one-way sensitivity on key variables
- Break-even analysis performed
- Conditions that change the conclusion clearly stated

**Alternative Comparison:** 4/5
- Three alternatives analyzed fairly
- Could improve: Consider more creative alternatives (e.g., open-source + custom)

**Assumption Transparency:** 5/5
- All key assumptions stated explicitly with justification
- Alternative assumptions tested in sensitivity analysis

**Narrative Clarity:** 4/5
- Clear structure and logical flow
- Could improve: More compelling framing for exec audience

**Audience Tailoring:** 4/5
- Appropriate detail for executive audience
- Could improve: Add one-page executive summary

**Risk Acknowledgment:** 5/5
- Comprehensive risk analysis with probabilities and mitigations
- Downside scenarios quantified
- "What would change our mind" conditions stated

**Actionability:** 5/5
- Clear recommendation with specific next steps
- Owners and timeline defined
- Success metrics and review cadence specified

**Average Score:** 4.5/5 (Exceeds standard for medium-stakes decision)

---

**Analysis completed:** January 15, 2024
**Analyst:** [Name]
**Reviewed by:** CTO
**Status:** Ready for executive decision
@@ -0,0 +1,339 @@
# Advanced Chain Estimation → Decision → Storytelling Methodology

## Workflow

Copy this checklist and track your progress:

```
Advanced Analysis Progress:
- [ ] Step 1: Select appropriate advanced technique for complexity
- [ ] Step 2: Build model (decision tree, Monte Carlo, real options)
- [ ] Step 3: Run analysis and interpret results
- [ ] Step 4: Validate robustness across scenarios
- [ ] Step 5: Translate technical findings into narrative
```

**Step 1: Select appropriate advanced technique for complexity**

Choose a technique based on decision characteristics: decision trees for sequential choices, Monte Carlo for multiple interacting uncertainties, real options for flexibility value, multi-criteria analysis for qualitative + quantitative factors. See [Technique Selection Guide](#technique-selection-guide) for a decision flowchart.

**Step 2: Build model**

Structure the problem using the chosen technique: define states and branches for decision trees, specify probability distributions for Monte Carlo, identify options and decision points for real options analysis, establish criteria and weights for multi-criteria. See technique-specific sections below for modeling guidance.

**Step 3: Run analysis and interpret results**

Execute calculations (manually for small trees, with tools for complex simulations), interpret output distributions or decision paths, identify dominant strategies or highest-value options, and quantify value of information or flexibility where applicable.

**Step 4: Validate robustness across scenarios**

Test assumptions with stress testing, vary key parameters to check sensitivity, compare results across different modeling approaches, and identify conditions where the conclusion changes. See [Sensitivity and Robustness Testing](#sensitivity-and-robustness-testing).

**Step 5: Translate technical findings into narrative**

Convert technical analysis into business language, highlight key insights without overwhelming with methodology, explain "so what" for decision-makers, and provide a clear recommendation with confidence bounds. See [Communicating Complex Analysis](#communicating-complex-analysis).

---

## Technique Selection Guide

**Decision Trees** → Sequential decisions with discrete outcomes and known probabilities
- Use when: Clear sequence of choices, branching scenarios, need optimal path
- Example: Build vs buy with adoption uncertainty

**Monte Carlo Simulation** → Multiple interacting uncertainties with continuous distributions
- Use when: Many uncertain variables, complex interactions, need probability distributions
- Example: Project NPV with uncertain cost, revenue, timeline

**Real Options Analysis** → Decisions with flexibility value (defer, expand, abandon)
- Use when: Uncertainty resolves over time, value of waiting, staged commitments
- Example: Pilot before full launch, expand if successful

**Multi-Criteria Decision Analysis (MCDA)** → Mix of quantitative and qualitative factors
- Use when: Multiple objectives, stakeholder tradeoffs, subjective criteria
- Example: Vendor selection (cost + quality + relationship)

---

## Decision Trees

### Structure
- **Decision node (□)**: Your choice
- **Chance node (○)**: Uncertain outcome with probabilities
- **Terminal node**: Final payoff

### Method
1. Map all decisions and chance events
2. Assign probabilities to chance events
3. Work backward: calculate EV at chance nodes, choose best at decision nodes
4. Identify optimal path

### Example
```
□ Build vs Buy
├─ Build → ○ Success (60%)  → $500k
│          └─ Fail (40%)    → $100k
└─ Buy   → ○ Fits (70%)     → $400k
           └─ Doesn't (30%) → $150k

Build EV = (500 × 0.6) + (100 × 0.4) = $340k
Buy EV   = (400 × 0.7) + (150 × 0.3) = $325k
Decision: Build (higher EV)
```

### Value of Information
- EVPI = EV with perfect info - EV without info
- Tells you the most it is worth spending to reduce uncertainty (see the sketch below)
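
For trees like the example above, both the rollback and the EVPI bound are easy to script. A minimal sketch (Python); the EVPI part additionally assumes the Build and Buy uncertainties are independent, which the example itself does not state:

```python
# Roll back the example tree: EV at chance nodes, best choice at the decision node.
build_ev = 500 * 0.6 + 100 * 0.4            # $340k
buy_ev   = 400 * 0.7 + 150 * 0.3            # $325k
ev_without_info = max(build_ev, buy_ev)     # choose Build: $340k

# EVPI: suppose we could learn both outcomes before choosing (independence assumed).
ev_with_info = 0.0
for p_build, build_payoff in [(0.6, 500), (0.4, 100)]:
    for p_buy, buy_payoff in [(0.7, 400), (0.3, 150)]:
        ev_with_info += p_build * p_buy * max(build_payoff, buy_payoff)

evpi = ev_with_info - ev_without_info       # ≈ $430k - $340k = $90k
print(build_ev, buy_ev, round(ev_with_info), round(evpi))
```

Under that independence assumption, no amount of research into these two uncertainties is worth more than roughly $90k here.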

---

## Monte Carlo Simulation

### When to Use
- Multiple uncertain variables (>3)
- Complex interactions between variables
- Need full probability distribution of outcomes
- Continuous ranges (not discrete scenarios)

### Method
1. **Identify uncertain variables**: cost, revenue, timeline, adoption rate, etc.
2. **Define distributions**: normal, log-normal, triangular, uniform
3. **Specify correlations**: if variables move together
4. **Run simulation**: 10,000+ iterations
5. **Analyze output**: mean, median, percentiles, probability of success

### Distribution Types
- **Normal**: μ ± σ (height, measurement error)
- **Log-normal**: positively skewed (project duration, costs)
- **Triangular**: min/most likely/max (quick estimation)
- **Uniform**: all values equally likely (no information)

### Interpretation
- **P50 (median)**: 50% chance of exceeding
- **P10/P90**: 80% confidence interval
- **Probability of target**: P(NPV > $0), P(ROI > 20%)

### Tools
- Excel: `=NORM.INV(RAND(), mean, stdev)`
- Python: `numpy.random.normal(mean, stdev, size=10000)` (see the sketch below)
- @RISK, Crystal Ball: Monte Carlo add-ins
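
A minimal NumPy sketch of the method above. The triangular/normal choices and every parameter value are illustrative assumptions for the sketch, not figures from a real analysis:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000                                     # iterations

# Illustrative inputs (all values are assumptions for this sketch, in $k).
cost    = rng.triangular(200, 300, 500, n)     # min / most likely / max
benefit = rng.normal(180, 40, n)               # annual benefit
rate, years = 0.10, 3

annuity = sum(1 / (1 + rate) ** t for t in range(1, years + 1))
npv = -cost + benefit * annuity                # one NPV per iteration

p10, p50, p90 = np.percentile(npv, [10, 50, 90])
print(f"P10={p10:.0f}  P50={p50:.0f}  P90={p90:.0f}  P(NPV>0)={np.mean(npv > 0):.0%}")
```

Reading the output follows the Interpretation bullets: P10-P90 gives the 80% interval, and the final figure is the probability of clearing the target.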

---

## Real Options Analysis

### Concept
Flexibility has value. The option to defer, expand, contract, or abandon is often worth more than committing upfront.

### When to Use
- Uncertainty resolves over time (can learn before committing)
- Irreversible investments (can't easily reverse)
- Staged decisions (pilot → scale)

### Types of Options
- **Defer**: Wait for more information before committing
- **Expand**: Scale up if successful
- **Contract/Abandon**: Scale down or exit if unsuccessful
- **Switch**: Change approach mid-course

### Valuation Approach

**Simple NPV (no flexibility):**
- Commit now: EV = Σ(outcome × probability)

**With real option:**
- Value = NPV of commitment + Value of flexibility
- Flexibility value = Expected payoff from optimal future decision - Expected payoff from committing now

### Example
- **Commit to full launch now**: $1M investment, 60% success → $3M, 40% fail → $0
  - EV = (3M × 0.6) + (0 × 0.4) - 1M = $800K

- **Pilot first ($200K), then decide**:
  - Good pilot (60%) → full launch → net $1.8M ($3M payoff - $1M launch - $0.2M pilot, assuming a good pilot confirms success)
  - Bad pilot (40%) → abandon → lose $200K
  - EV = (1.8M × 0.6) + (-0.2M × 0.4) = $1.0M

- **Real option value** = $1.0M - $800K = $200K (value of flexibility to learn first; see the sketch below)
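
The same arithmetic, written out so the conditioning is explicit. A sketch that keeps the example's simplification that a good pilot removes the launch risk (figures in $M):

```python
# Commit now: pay $1M, 60% chance of a $3M payoff.
commit_ev = 0.6 * 3.0 + 0.4 * 0.0 - 1.0           # $0.8M

# Pilot first: $0.2M pilot, launch only after a good pilot (assumed to guarantee success).
good_pilot_payoff = 3.0 - 1.0 - 0.2               # $1.8M net
bad_pilot_payoff  = -0.2                          # abandon, lose only the pilot cost
pilot_ev = 0.6 * good_pilot_payoff + 0.4 * bad_pilot_payoff   # $1.0M

option_value = pilot_ev - commit_ev               # $0.2M value of waiting to learn
print(commit_ev, pilot_ev, option_value)
```

If a good pilot only raised the success probability (say to 90%) instead of guaranteeing it, the same structure applies with 0.9 × 3.0 - 1.0 - 0.2 as the post-pilot expected payoff.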

---

## Multi-Criteria Decision Analysis (MCDA)

### When to Use
- Multiple objectives that can't be reduced to a single metric (not just NPV)
- Qualitative + quantitative factors
- Stakeholder tradeoffs (different groups value different things)

### Method

**1. Identify criteria** (from stakeholder perspectives)
- Cost, speed, quality, risk, strategic fit, customer impact, etc.

**2. Weight criteria** (based on priorities)
- Sum to 100%
- Finance might weight cost at 40%; Product might weight customer impact at 30%

**3. Score alternatives** (1-5 or 1-10 scale on each criterion)
- Alternative A: Cost=4, Speed=2, Quality=5
- Alternative B: Cost=2, Speed=5, Quality=3

**4. Calculate weighted scores** (example weights: Cost 30%, Speed 40%, Quality 30%)
- A = (4 × 0.3) + (2 × 0.4) + (5 × 0.3) = 3.5
- B = (2 × 0.3) + (5 × 0.4) + (3 × 0.3) = 3.5

**5. Sensitivity analysis** on weights
- How much would weights need to change to flip the decision? (see the sketch below)
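
A minimal sketch of steps 3-5 for the two alternatives above, including a crude weight sweep to see where the ranking separates. The scores and base weights are the example's; the sweep (hold the Cost weight at 0.3, trade weight between Speed and Quality) is one assumed way to run the sensitivity:

```python
# Criteria order: cost, speed, quality. Scores from the example above.
scores = {"A": [4, 2, 5], "B": [2, 5, 3]}
base_weights = [0.3, 0.4, 0.3]

def weighted(score, weights):
    return sum(s * w for s, w in zip(score, weights))

print({name: weighted(s, base_weights) for name, s in scores.items()})   # both 3.5: a tie

# Weight sensitivity: keep cost at 0.3, shift weight between speed and quality.
for speed_w in [0.25, 0.30, 0.35, 0.40, 0.45, 0.50]:
    w = [0.3, speed_w, 0.7 - speed_w]
    a, b = weighted(scores["A"], w), weighted(scores["B"], w)
    winner = "tie" if a == b else ("A" if a > b else "B")
    print(f"speed weight {speed_w:.2f}: A={a:.2f}  B={b:.2f}  -> {winner}")
```

With these scores the ranking flips right around a Speed weight of 0.4, so the tie is fragile; surfacing exactly that kind of threshold is the point of step 5.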

### Handling Qualitative Criteria
- **Scoring rubric**: Define what 1, 3, and 5 mean for "strategic fit"
- **Pairwise comparison**: Compare alternatives head-to-head on each criterion
- **Range**: Use min-max scaling to normalize disparate units

---

## Sensitivity and Robustness Testing

### One-Way Sensitivity
- Vary one parameter at a time (e.g., cost ±20%)
- Check if the conclusion changes
- Identify which parameters matter most

### Two-Way Sensitivity
- Vary two parameters simultaneously
- Create a sensitivity matrix or contour plot
- Example: Cost (rows) × Revenue (columns) → NPV

### Tornado Diagram
- Bar chart showing impact of each parameter
- Longest bars = most sensitive parameters
- Focus analysis on the top 2-3 drivers (see the sketch below)
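
Tornado data is just a set of one-way sweeps sorted by swing. A minimal sketch (Python); the two-variable NPV model and its ±20% ranges are illustrative assumptions, not taken from a real analysis:

```python
# Illustrative model: NPV ($k) as a function of uncertain inputs.
base = {"cost": 300, "annual_benefit": 180, "years": 3}

def npv(cost, annual_benefit, years, rate=0.10):
    return -cost + sum(annual_benefit / (1 + rate) ** t for t in range(1, years + 1))

swings = []
for var in ("cost", "annual_benefit"):
    low  = npv(**{**base, var: base[var] * 0.8})   # vary one input at a time, ±20%
    high = npv(**{**base, var: base[var] * 1.2})
    swings.append((var, abs(high - low), low, high))

# Longest bar first = the variable worth studying hardest.
for var, swing, low, high in sorted(swings, key=lambda s: s[1], reverse=True):
    print(f"{var:15s} swing={swing:6.0f}  ({low:.0f} .. {high:.0f})")
```

Plotting the sorted swings as horizontal bars gives the tornado chart itself; the ranking alone already tells you where to focus.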

### Scenario Analysis
- Define coherent scenarios (pessimistic, base, optimistic)
- Not just parameter ranges, but plausible futures
- Calculate the outcome for each complete scenario

### Break-Even Analysis
- At what value does the conclusion change?
- "Need revenue >$500K to beat the alternative"
- "If cost exceeds $300K, pivot to Plan B"

### Stress Testing
- Extreme scenarios (worst case: everything goes wrong)
- Identify fragility: "Works unless X and Y both fail"
- Build contingency plans for stress scenarios

---

## Communicating Complex Analysis

### For Executives
**Focus**: Bottom line, confidence, risks
- Recommendation (1 sentence)
- Key numbers (EV, NPV, ROI)
- Confidence level (P10-P90 range)
- Top 2 risks + mitigations
- Decision criteria: "Proceed if X, pivot if Y"

### For Technical Teams
**Focus**: Methodology, assumptions, sensitivity
- Modeling approach and rationale
- Key assumptions with justification
- Sensitivity analysis results
- Robustness checks performed
- Limitations of the analysis

### For Finance
**Focus**: Numbers, assumptions, financial metrics
- Cash flow timing
- Discount rate and rationale
- NPV, IRR, payback period
- Risk-adjusted returns
- Comparison to hurdle rate

### General Principles
- **Lead with the conclusion**, then support with analysis
- **Show confidence bounds**, not just point estimates
- **Explain "so what"**, not just "what"
- **Use visuals**: probability distributions, decision trees, tornado charts
- **Be honest about limitations**: "Assumes X, sensitive to Y"

---

## Common Pitfalls in Advanced Analysis

### False Precision
- **Problem**: Reporting $1,234,567 when uncertainty is ±50%
- **Fix**: Round appropriately. Use ranges, not points.

### Ignoring Correlations
- **Problem**: Modeling all uncertainties as independent when they're linked
- **Fix**: Specify correlations in Monte Carlo (costs move together, revenue and volume linked)

### Overfitting Models
- **Problem**: Building complex models with 20 parameters when data is thin
- **Fix**: Keep models simple. Complexity doesn't equal accuracy.

### Anchoring on Base Case
- **Problem**: Treating "most likely" as "expected value"
- **Fix**: Calculate probability-weighted EV. Asymmetric distributions matter.

### Analysis Paralysis
- **Problem**: Endless modeling instead of deciding
- **Fix**: Set time limits and a "good enough" threshold. Decide with available info.

### Confirmation Bias
- **Problem**: Modeling to justify a predetermined conclusion
- **Fix**: Model alternatives fairly. Seek disconfirming evidence. Get external review.

### Ignoring Soft Factors
- **Problem**: Optimizing NPV while ignoring strategic fit, team morale, brand impact
- **Fix**: Use MCDA for mixed quantitative and qualitative factors. Make tradeoffs explicit.

---

## Advanced Tools and Resources

### Spreadsheet Tools
- **Excel**: Data tables, Scenario Manager, Goal Seek
- **Google Sheets**: Same capabilities, collaborative

### Specialized Software
- **@RISK** (Palisade): Monte Carlo simulation add-in for Excel
- **Crystal Ball** (Oracle): Similar Monte Carlo tool
- **Python**: `numpy`, `scipy`, `simpy` for custom simulations
- **R**: Statistical analysis and simulation

### When to Use Tools vs. Manual
- **Manual** (small decision trees): < 10 branches, quick calculation
- **Spreadsheet** (medium complexity): Decision trees, simple Monte Carlo (< 5 variables)
- **Specialized tools** (high complexity): 10+ uncertain variables, complex correlations, sensitivity analysis

### Learning Resources
- Decision analysis: "Decision Analysis for the Professional" - Skinner
- Monte Carlo: "Risk Analysis in Engineering" - Modarres
- Real options: "Real Options" - Copeland & Antikarov
- MCDA: "Multi-Criteria Decision Analysis" - Belton & Stewart

---

## Summary

**Choose the technique based on problem structure:**
- Sequential choices → Decision trees
- Multiple uncertainties → Monte Carlo
- Flexibility value → Real options
- Mixed criteria → MCDA

**Focus on:**
- Robust conclusions (stress test assumptions)
- Clear communication (translate technical to business language)
- Actionable insights (not just numbers)
- Honest limits (acknowledge what the analysis can't tell you)

**Remember:**
- Models inform decisions; they don't make them
- A simple model well executed beats a complex model poorly executed
- Transparency about assumptions matters more than sophistication
- "All models are wrong, but some are useful" - George Box
@@ -0,0 +1,433 @@
# Chain Estimation → Decision → Storytelling Template

## Workflow

Copy this checklist and track your progress:

```
Analysis Progress:
- [ ] Step 1: Gather inputs and define decision scope
- [ ] Step 2: Estimate costs, benefits, and probabilities
- [ ] Step 3: Calculate expected value and compare alternatives
- [ ] Step 4: Structure narrative with clear recommendation
- [ ] Step 5: Validate completeness with quality checklist
```

**Step 1: Gather inputs and define decision scope**

Clarify what decision needs to be made, identify 2-5 alternatives to compare, list key uncertainties (costs, benefits, probabilities), determine the audience (executives, technical team, finance), and note constraints (budget, timeline, requirements). Use the [Quick Template](#quick-template) structure below.

**Step 2: Estimate costs, benefits, and probabilities**

For each alternative, quantify all relevant costs (development, operation, opportunity cost), estimate benefits (revenue, savings, productivity gains), assign probabilities to scenarios (best/base/worst case), and use ranges rather than point estimates. See [Estimation Guidelines](#estimation-guidelines) for techniques.

**Step 3: Calculate expected value and compare alternatives**

Compute probability-weighted outcomes for each alternative, compare using appropriate decision criteria (NPV, IRR, payback, utility), identify which option has the best risk-adjusted return, and test sensitivity to key assumptions. See the [Decision Analysis](#decision-analysis) section.

**Step 4: Structure narrative with clear recommendation**

Follow the storytelling framework: problem statement, alternatives considered, analysis summary, clear recommendation with reasoning, and next steps. Tailor the level of detail to the audience. See [Narrative Structure](#narrative-structure) for guidance.

**Step 5: Validate completeness with quality checklist**

Use the [Quality Checklist](#quality-checklist) to verify: all alternatives considered, estimates are justified, probabilities are reasonable, expected value is calculated correctly, sensitivity analysis performed, narrative is clear and persuasive, assumptions stated explicitly.

## Quick Template

Copy this structure to create your analysis:

```markdown
# Decision: {Decision Question}

## 1. Decision Context

**What we're deciding:** {Clear statement of the choice}

**Why this matters:** {Business impact, urgency, strategic importance}

**Alternatives:**
1. {Option A}
2. {Option B}
3. {Option C}

**Key uncertainties:**
- {Variable 1}: {Range or distribution}
- {Variable 2}: {Range or distribution}
- {Variable 3}: {Range or distribution}

**Constraints:**
- Budget: {Available resources}
- Timeline: {Decision deadline, implementation timeline}
- Requirements: {Must-haves, non-negotiables}

**Audience:** {Who needs to approve this decision?}

---

## 2. Estimation

### Alternative 1: {Name}

**Costs:**
- Initial investment: ${Low}k - ${High}k (most likely: ${Base}k)
- Annual operational: ${Low}k - ${High}k per year
- Opportunity cost: {What we give up}

**Benefits:**
- Revenue impact: +${Low}k - ${High}k (most likely: ${Base}k)
- Cost savings: ${Low}k - ${High}k per year
- Strategic value: {Qualitative benefits}

**Probabilities:**
- Best case (30%): {Scenario description}
- Base case (50%): {Scenario description}
- Worst case (20%): {Scenario description}

**Key assumptions:**
- {Assumption 1}
- {Assumption 2}
- {Assumption 3}

### Alternative 2: {Name}
{Same structure}

### Alternative 3: {Name}
{Same structure}

---

## 3. Decision Analysis

### Expected Value Calculation

**Alternative 1: {Name}**
- Best case (30%): ${Amount} × 0.30 = ${Weighted}
- Base case (50%): ${Amount} × 0.50 = ${Weighted}
- Worst case (20%): ${Amount} × 0.20 = ${Weighted}
- **Expected value: ${Total}**

**Alternative 2: {Name}**
{Same calculation}
**Expected value: ${Total}**

**Alternative 3: {Name}**
{Same calculation}
**Expected value: ${Total}**

### Comparison

| Alternative | Expected Value | Risk Profile | Time to Value | Strategic Fit |
|-------------|----------------|--------------|---------------|---------------|
| {Alt 1} | ${EV} | {High/Med/Low} | {Timeline} | {Score/10} |
| {Alt 2} | ${EV} | {High/Med/Low} | {Timeline} | {Score/10} |
| {Alt 3} | ${EV} | {High/Med/Low} | {Timeline} | {Score/10} |

### Sensitivity Analysis

**What if {key variable} changes?**
- If {variable} is 20% higher: {Impact on decision}
- If {variable} is 20% lower: {Impact on decision}

**Most sensitive to:**
- {Variable 1}: {Explanation of impact}
- {Variable 2}: {Explanation of impact}

**Robustness check:**
- Conclusion holds if {conditions}
- Would change if {conditions}

---

## 4. Recommendation

**Recommended option: {Alternative X}**

**Reasoning:**
{1-2 paragraphs explaining why this is the best choice given the analysis}

**Key factors:**
- {Factor 1}: {Why it matters}
- {Factor 2}: {Why it matters}
- {Factor 3}: {Why it matters}

**Tradeoffs accepted:**
- We're accepting {downside} in exchange for {upside}
- We're prioritizing {value 1} over {value 2}

**Risks and mitigations:**
- **Risk**: {What could go wrong}
  - **Mitigation**: {How we'll address it}
- **Risk**: {What could go wrong}
  - **Mitigation**: {How we'll address it}

---

## 5. Next Steps

**If approved:**
1. {Immediate action 1} - {Owner} by {Date}
2. {Immediate action 2} - {Owner} by {Date}
3. {Immediate action 3} - {Owner} by {Date}

**Success metrics:**
- {Metric 1}: Target {value} by {date}
- {Metric 2}: Target {value} by {date}
- {Metric 3}: Target {value} by {date}

**Decision review:**
- Revisit this decision in {timeframe} to validate assumptions
- Key indicators to monitor: {metrics to track}

**What would change our mind:**
- If {condition}, we should reconsider
- If {condition}, we should accelerate
- If {condition}, we should pause
```

---

## Estimation Guidelines

### Cost Estimation

**Categories to consider:**
- **One-time costs**: Development, implementation, migration, training
- **Recurring costs**: Subscription fees, maintenance, support, infrastructure
- **Hidden costs**: Opportunity cost, technical debt, switching costs
- **Risk costs**: Probability-weighted downside scenarios

**Estimation techniques:**
- **Analogous**: Similar past projects (adjust for differences)
- **Parametric**: Cost per unit × quantity (e.g., $150k per engineer × 2 engineers)
- **Bottom-up**: Estimate components and sum
- **Three-point**: Best case, most likely, worst case → calculate expected value (see the sketch below)
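
A minimal sketch of the three-point technique using the common PERT weighting; the 1-4-1 weights are the standard convention, and the $200k/$300k/$450k inputs are placeholders:

```python
# Three-point (PERT) estimate: optimistic, most likely, pessimistic (in $k).
optimistic, most_likely, pessimistic = 200, 300, 450

expected = (optimistic + 4 * most_likely + pessimistic) / 6    # ≈ 308
std_dev  = (pessimistic - optimistic) / 6                      # ≈ 42, a rough spread

print(f"expected ≈ ${expected:.0f}k, roughly ±${std_dev:.0f}k")
```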

**Expressing uncertainty:**
- Use ranges: $200k-$400k (not $300k)
- Assign probabilities: 60% likely $300k, 20% $200k, 20% $400k
- Show confidence: "High confidence" vs "Rough estimate"

### Benefit Estimation

**Categories to consider:**
- **Revenue impact**: New revenue, increased conversion, higher retention
- **Cost savings**: Reduced operational costs, avoided hiring, infrastructure savings
- **Productivity gains**: Time saved × value of time
- **Risk reduction**: Probability of bad outcome × cost of bad outcome
- **Strategic value**: Market positioning, competitive advantage, optionality

**Quantification approaches:**
- **Direct measurement**: Historical data, benchmarks, experiments
- **Proxy metrics**: Leading indicators that correlate with value
- **Scenario modeling**: Best/base/worst case with probabilities
- **Comparable analysis**: Similar initiatives at comparable companies

### Probability Assignment

**How to assign probabilities:**
- **Base rates**: Start with historical frequency (e.g., 70% of projects finish on time)
- **Adjustments**: Modify for specific circumstances (this project is simpler/more complex)
- **Expert judgment**: Multiple estimates, averaged or calibrated
- **Reference class forecasting**: Look at similar situations

**Common probability pitfalls:**
- **Overconfidence**: Ranges too narrow, probabilities too extreme (5% or 95%)
- **Anchoring**: First number becomes the reference even if wrong
- **Optimism bias**: Best case feels more likely than it is
- **Planning fallacy**: Underestimating time and cost

**Calibration check:**
- If you say you're 70% confident, are you right 70% of the time?
- Test with past predictions if available (see the sketch below)
- Use wider ranges for higher uncertainty
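
One way to run the calibration check on past predictions, sketched below; the prediction records are hypothetical, and the point is simply comparing stated confidence against the actual hit rate:

```python
# Each record: (stated confidence, did it come true?). Hypothetical history.
predictions = [(0.9, True), (0.9, True), (0.9, False), (0.7, True),
               (0.7, False), (0.7, True), (0.7, True), (0.5, False)]

buckets = {}
for confidence, correct in predictions:
    buckets.setdefault(confidence, []).append(correct)

for confidence in sorted(buckets, reverse=True):
    outcomes = buckets[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"said {confidence:.0%} -> right {hit_rate:.0%} of {len(outcomes)} predictions")
```

If the "said 90%" row comes back well under 90%, your ranges are too narrow: widen them.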
|
||||
|
||||
---
|
||||
|
||||
## Decision Analysis
|
||||
|
||||
### Expected Value Calculation
|
||||
|
||||
**Formula:**
|
||||
```
|
||||
Expected Value = Σ (Outcome × Probability)
|
||||
```
|
||||
|
||||
**Example:**
- Best case: $500k × 30% = $150k
- Base case: $300k × 50% = $150k
- Worst case: $100k × 20% = $20k
- Expected value = $150k + $150k + $20k = $320k (reproduced in the sketch below)

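The same arithmetic as a small Python sketch; the scenario names and figures mirror the example above:

```python
# Expected value of the example above: sum over scenarios of outcome x probability.
scenarios = [
    ("best", 500_000, 0.30),
    ("base", 300_000, 0.50),
    ("worst", 100_000, 0.20),
]

assert abs(sum(p for _, _, p in scenarios) - 1.0) < 1e-9, "probabilities must sum to 1"

expected_value = sum(outcome * prob for _, outcome, prob in scenarios)
print(f"Expected value: ${expected_value:,.0f}")  # Expected value: $320,000
```
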
**Multi-year NPV:**
```
NPV = Σ (Cash Flow_t / (1 + discount_rate)^t)
```

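A minimal NPV sketch applying the formula above; the cash-flow stream and the 10% discount rate are hypothetical placeholders, not recommended values:

```python
# Multi-year NPV of a cash-flow stream (year 0 = today). Numbers are hypothetical.
def npv(cash_flows: list[float], discount_rate: float) -> float:
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

# Example: $250k upfront cost, then $120k net benefit in each of the next 3 years.
flows = [-250_000, 120_000, 120_000, 120_000]
print(f"NPV at a 10% discount rate: ${npv(flows, 0.10):,.0f}")  # ≈ $48,422
```
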
**When to use:**
- **Expected value**: When outcomes are roughly linear with value (money, time)
- **Decision trees**: When the sequence of choices matters
- **Monte Carlo**: When multiple uncertainties interact (see the sketch after this list)
- **Scoring/weighting**: When there is a mix of quantitative and qualitative factors

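A tiny Monte Carlo sketch for the "multiple uncertainties interact" case. The model (net benefit = adopters × value per user − cost), the triangular distributions, and all parameters are illustrative assumptions, not part of this guide's recommendations:

```python
# Tiny Monte Carlo sketch: net benefit = adopters x value per user - cost.
# Distributions and parameters are illustrative assumptions.
import random

def simulate_once() -> float:
    adopters = random.triangular(5_000, 20_000, 10_000)    # (low, high, mode)
    value_per_user = random.triangular(20, 60, 35)
    cost = random.triangular(250_000, 500_000, 350_000)
    return adopters * value_per_user - cost

random.seed(0)  # reproducible illustration
runs = sorted(simulate_once() for _ in range(10_000))
p10, p50, p90 = runs[1_000], runs[5_000], runs[9_000]
print(f"P10 ${p10:,.0f}  P50 ${p50:,.0f}  P90 ${p90:,.0f}")
print(f"Chance of a net loss: {sum(r < 0 for r in runs) / len(runs):.0%}")
```
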
### Comparison Methods

**1. Expected Value Ranking**
- Calculate EV for each alternative
- Rank by highest expected value
- **Best for**: Decisions with quantifiable outcomes

**2. NPV Comparison**
- Discount future cash flows to present value
- Compare NPV across alternatives
- **Best for**: Multi-year investments

**3. Payback Period**
- Time to recover the initial investment
- Consider in addition to NPV (not instead of it)
- **Best for**: When liquidity or fast ROI matters

**4. Weighted Scoring**
- Score each alternative on multiple criteria (e.g., 1-10)
- Multiply each score by the criterion's importance weight
- Sum the weighted scores (see the sketch after this list)
- **Best for**: Mix of quantitative and qualitative factors

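A minimal weighted-scoring sketch; the criteria, weights, and 1-10 scores are hypothetical and would come from your own analysis:

```python
# Weighted scoring across mixed criteria (weights sum to 1; scores on a 1-10 scale).
# Criteria, weights, and scores below are illustrative assumptions.
weights = {"cost": 0.4, "time_to_value": 0.3, "flexibility": 0.2, "vendor_risk": 0.1}

scores = {
    "Build": {"cost": 5, "time_to_value": 3, "flexibility": 9, "vendor_risk": 8},
    "Buy":   {"cost": 7, "time_to_value": 8, "flexibility": 5, "vendor_risk": 6},
}

for option, criterion_scores in scores.items():
    total = sum(weights[c] * s for c, s in criterion_scores.items())
    print(f"{option}: {total:.1f}")
# Build: 0.4*5 + 0.3*3 + 0.2*9 + 0.1*8 = 5.5
# Buy:   0.4*7 + 0.3*8 + 0.2*5 + 0.1*6 = 6.8
```
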
### Sensitivity Analysis

**One-way sensitivity:**
- Vary one input at a time (e.g., cost ±20%)
- Check whether the conclusion changes
- Identify which inputs matter most

**Tornado diagram:**
- Show the impact of each variable on the outcome
- Order variables by magnitude of impact (see the sketch after this list)
- Focus on the top 2-3 drivers

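A sketch of one-way sensitivity with tornado-style ordering; the toy model and the ±20% swings are illustrative assumptions:

```python
# One-way sensitivity with tornado-style ordering: swing each input +/-20%
# while holding the others at base, then rank inputs by outcome swing.
# The model and numbers are illustrative only.
base = {"annual_benefit": 300_000, "annual_cost": 180_000, "years": 3}

def net_value(params: dict) -> float:
    return (params["annual_benefit"] - params["annual_cost"]) * params["years"]

swings = []
for name in ("annual_benefit", "annual_cost"):
    outcomes = [net_value({**base, name: base[name] * f}) for f in (0.8, 1.2)]
    swings.append((name, max(outcomes) - min(outcomes), min(outcomes), max(outcomes)))

# Largest swing first (the order bars would appear in a tornado diagram)
for name, swing, low, high in sorted(swings, key=lambda s: s[1], reverse=True):
    print(f"{name}: outcome ${low:,.0f} to ${high:,.0f} (swing ${swing:,.0f})")
```
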
**Scenario analysis:**
- Define coherent scenarios (pessimistic, base, optimistic)
- Calculate the outcome for each complete scenario
- Assign probabilities to scenarios

**Break-even analysis:**
- At what value of {key variable} does the decision change?
- Provides a threshold to monitor after the decision (see the sketch after this list)

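A minimal break-even sketch for a hypothetical build-vs-buy comparison; the figures and the 3-year horizon are assumptions chosen only to show the calculation:

```python
# Break-even: at what annual subscription does "buy" stop beating "build"?
# All figures and the 3-year horizon are illustrative assumptions.
build_total_3yr = 450_000      # assumed build cost plus 3 years of maintenance
buy_implementation = 20_000    # assumed one-time implementation cost

# buy_total = buy_implementation + 3 * subscription; set equal to build_total_3yr and solve.
break_even_subscription = (build_total_3yr - buy_implementation) / 3
print(f"Buy is cheaper while the subscription stays under ${break_even_subscription:,.0f}/year")
# ≈ $143,333/year
```
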
---

## Narrative Structure

### Executive Summary (for executives)

**Format:**
1. **The decision** (1 sentence): What we're choosing between
2. **The recommendation** (1 sentence): What we should do
3. **The reasoning** (2-3 bullets): Key factors driving the recommendation
4. **The ask** (1 sentence): What approval or resources are needed
5. **The timeline** (1 sentence): When this happens

**Length:** 4-6 sentences, fits in one paragraph

**Example:**
> "We evaluated building custom analytics vs. buying a SaaS tool. Recommendation: Buy the SaaS solution. Key factors: (1) $130k lower expected cost due to build risk, (2) 6 months faster time-to-value, (3) proven reliability vs. custom development uncertainty. Requesting $20k implementation budget and $120k annual subscription approval. Implementation begins next month with value delivery in 8 weeks."

### Detailed Analysis (for stakeholders)

**Structure:**
1. **Problem statement**: Why this decision matters (1 paragraph)
2. **Alternatives considered**: Show you did the work (bullets)
3. **Analysis approach**: Methodology and assumptions (1 paragraph)
4. **Key findings**: Numbers, comparison, sensitivity (1-2 paragraphs)
5. **Recommendation**: Clear choice with reasoning (1-2 paragraphs)
6. **Risks and mitigations**: What could go wrong (bullets)
7. **Next steps**: Implementation plan (bullets)

**Length:** 1-2 pages

**Tone:** Professional, balanced, transparent about tradeoffs

### Technical Deep-Dive (for technical teams)

**Additional detail:**
- Estimation methodology and data sources
- Sensitivity analysis details
- Technical assumptions and constraints
- Implementation considerations
- Alternative approaches considered and why they were rejected

**Length:** 2-4 pages

**Tone:** Analytical, rigorous, shows technical depth

---

## Quality Checklist

Before finalizing, verify:

**Estimation quality:**
- [ ] All relevant costs included (one-time, recurring, opportunity, risk)
- [ ] All relevant benefits quantified or described
- [ ] Uncertainty expressed with ranges or probabilities
- [ ] Assumptions stated explicitly with justification
- [ ] Sources cited for estimates where applicable

**Decision analysis quality:**
- [ ] Expected value calculated correctly (probability × outcome)
- [ ] All alternatives compared fairly
- [ ] Sensitivity analysis performed on key variables
- [ ] Robustness tested (does the conclusion hold across reasonable ranges?)
- [ ] Dominant option identified with clear rationale

**Narrative quality:**
- [ ] Clear recommendation stated upfront
- [ ] Problem statement explains why the decision matters
- [ ] Alternatives shown (proves due diligence)
- [ ] Analysis summary appropriate for the audience
- [ ] Tradeoffs acknowledged honestly
- [ ] Risks and mitigations addressed
- [ ] Next steps are actionable

**Communication quality:**
- [ ] Tailored to the audience (exec vs technical vs finance)
- [ ] Jargon explained or avoided
- [ ] Key numbers highlighted
- [ ] Visual aids used where helpful (tables, charts)
- [ ] Length appropriate (not too long or too short)

**Integrity checks:**
- [ ] No cherry-picking of favorable data
- [ ] Downside scenarios included, not just upside
- [ ] Probabilities are calibrated (not overconfident)
- [ ] "What would change my mind" conditions stated
- [ ] Limitations and uncertainties acknowledged

---

## Common Decision Types

### Build vs Buy
- **Estimate**: Dev cost, maintenance, SaaS fees, implementation
- **Decision**: 3-5 year TCO with risk adjustment
- **Story**: Control vs. cost, speed vs. customization

### Market Entry
- **Estimate**: TAM/SAM/SOM, CAC, LTV, time to profitability
- **Decision**: NPV with market uncertainty scenarios
- **Story**: Growth opportunity vs. execution risk

### Hiring
- **Estimate**: Comp, recruiting, ramp time, productivity impact
- **Decision**: Cost per output vs. alternatives
- **Story**: Capacity constraints vs. efficiency gains

### Technology Migration
- **Estimate**: Migration cost, operational savings, risk reduction
- **Decision**: Multi-year TCO plus risk-adjusted benefits
- **Story**: Short-term pain for long-term gain

### Resource Allocation
- **Estimate**: Cost per initiative, expected impact
- **Decision**: Portfolio optimization or impact/effort ranking
- **Story**: Given constraints, maximize expected value