Initial commit

skills/decision-matrix/SKILL.md (new file, 182 lines)

---
name: decision-matrix
description: Use when comparing multiple named alternatives across several criteria, need transparent trade-off analysis, making group decisions requiring alignment, choosing between vendors/tools/strategies, stakeholders need to see decision rationale, balancing competing priorities (cost vs quality vs speed), user mentions "which option should we choose", "compare alternatives", "evaluate vendors", "trade-offs", or when decision needs to be defensible and data-driven.
---

# Decision Matrix

## What Is It?

A decision matrix is a structured tool for comparing multiple alternatives against weighted criteria to make transparent, defensible choices. It forces explicit trade-off analysis by scoring each option on each criterion, making subjective factors visible and comparable.

**Quick example:**

| Option | Cost (30%) | Speed (25%) | Quality (45%) | Weighted Score |
|--------|-----------|------------|---------------|----------------|
| Option A | 8 (2.4) | 6 (1.5) | 9 (4.05) | **7.95** ← Winner |
| Option B | 6 (1.8) | 9 (2.25) | 7 (3.15) | 7.20 |
| Option C | 9 (2.7) | 4 (1.0) | 6 (2.7) | 6.40 |

The numbers in parentheses show criterion score × weight. Option A wins despite not being fastest or cheapest because quality matters most (45% weight).
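
For concreteness, here is a minimal Python sketch of that arithmetic (the dicts simply restate the example above):

```python
# Weighted total = Σ(criterion score × criterion weight), per option.
weights = {"cost": 0.30, "speed": 0.25, "quality": 0.45}

options = {
    "Option A": {"cost": 8, "speed": 6, "quality": 9},
    "Option B": {"cost": 6, "speed": 9, "quality": 7},
    "Option C": {"cost": 9, "speed": 4, "quality": 6},
}

def weighted_total(scores):
    return sum(scores[c] * w for c, w in weights.items())

for name in sorted(options, key=lambda o: -weighted_total(options[o])):
    print(f"{name}: {weighted_total(options[name]):.2f}")
# Option A: 7.95 / Option B: 7.20 / Option C: 6.40
```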

## Workflow

Copy this checklist and track your progress:

```
Decision Matrix Progress:
- [ ] Step 1: Frame the decision and list alternatives
- [ ] Step 2: Identify and weight criteria
- [ ] Step 3: Score each alternative on each criterion
- [ ] Step 4: Calculate weighted scores and analyze results
- [ ] Step 5: Validate quality and deliver recommendation
```

**Step 1: Frame the decision and list alternatives**

Ask user for decision context (what are we choosing and why), list of alternatives (specific named options, not generic categories), constraints or dealbreakers (must-have requirements), and stakeholders (who needs to agree). Understanding must-haves helps filter options before scoring. See [Framing Questions](#framing-questions) for clarification prompts.

**Step 2: Identify and weight criteria**

Collaborate with user to identify criteria (what factors matter for this decision), determine weights (which criteria matter most, as percentages summing to 100%), and validate coverage (do criteria capture all important trade-offs). If user is unsure about weighting → Use [resources/template.md](resources/template.md) for weighting techniques. See [Criterion Types](#criterion-types) for common patterns.

**Step 3: Score each alternative on each criterion**

For each option, score on each criterion using a consistent scale (typically 1-10 where 10 = best). Ask user for scores or research objective data (cost, speed metrics) where available. Document assumptions and data sources. For complex scoring → See [resources/methodology.md](resources/methodology.md) for calibration techniques.

**Step 4: Calculate weighted scores and analyze results**

Calculate weighted score for each option (sum of criterion score × weight). Rank options by total score. Identify close calls (options within 5% of each other). Check for sensitivity (would changing one weight flip the decision?). See [Sensitivity Analysis](#sensitivity-analysis) for interpretation guidance.

**Step 5: Validate quality and deliver recommendation**

Self-assess using [resources/evaluators/rubric_decision_matrix.json](resources/evaluators/rubric_decision_matrix.json) (minimum score ≥ 3.5). Present decision-matrix.md file with clear recommendation, highlight key trade-offs revealed by analysis, note sensitivity to assumptions, and suggest next steps (gather more data on close calls, validate with stakeholders).

## Framing Questions

**To clarify the decision:**
- What specific decision are we making? (Choose X from Y alternatives)
- What happens if we don't decide or choose wrong?
- When do we need to decide by?
- Can we choose multiple options or only one?

**To identify alternatives:**
- What are all the named options we're considering?
- Are there other alternatives we're ruling out immediately? Why?
- What's the "do nothing" or status quo option?

**To surface must-haves:**
- Are there absolute dealbreakers? (Budget cap, timeline requirement, compliance need)
- Which constraints are flexible vs rigid?

## Criterion Types

Common categories for criteria (adapt to your decision):

**Financial Criteria:**
- Upfront cost, ongoing cost, ROI, payback period, budget impact
- Typical weight: 20-40% (higher for cost-sensitive decisions)

**Performance Criteria:**
- Speed, quality, reliability, scalability, capacity, throughput
- Typical weight: 30-50% (higher for technical decisions)

**Risk Criteria:**
- Implementation risk, reversibility, vendor lock-in, technical debt, compliance risk
- Typical weight: 10-25% (higher for enterprise/regulated environments)

**Strategic Criteria:**
- Alignment with goals, future flexibility, competitive advantage, market positioning
- Typical weight: 15-30% (higher for long-term decisions)

**Operational Criteria:**
- Ease of use, maintenance burden, training required, integration complexity
- Typical weight: 10-20% (higher for internal tools)

**Stakeholder Criteria:**
- Team preference, user satisfaction, executive alignment, customer impact
- Typical weight: 5-15% (higher for change management contexts)

## Weighting Approaches

**Method 1: Direct Allocation (simplest)**
Stakeholders assign percentages totaling 100%. Quick but can be arbitrary.

**Method 2: Pairwise Comparison (more rigorous)**
Compare each criterion pair: "Is cost more important than speed?" Build ranking, then assign weights.
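
A minimal sketch of the tally-to-weights step (the criteria and pairwise answers are hypothetical; giving every criterion one baseline win is a common way to keep a criterion that loses every pair from landing at 0%):

```python
from itertools import combinations

criteria = ["cost", "speed", "quality", "risk"]
# Hypothetical answers to "Is X more important than Y?" for every pair.
winner_of = {
    ("cost", "speed"): "cost",    ("cost", "quality"): "quality",
    ("cost", "risk"): "cost",     ("speed", "quality"): "quality",
    ("speed", "risk"): "risk",    ("quality", "risk"): "quality",
}

wins = {c: 1 for c in criteria}  # baseline win so no criterion lands at 0%
for pair in combinations(criteria, 2):
    wins[winner_of[pair]] += 1

total = sum(wins.values())
weights = {c: wins[c] / total for c in criteria}
print(weights)  # {'cost': 0.3, 'speed': 0.1, 'quality': 0.4, 'risk': 0.2}
```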

**Method 3: Must-Have vs Nice-to-Have (filters first)**
Separate absolute requirements (pass/fail) from weighted criteria. Only evaluate options that pass must-haves.

**Method 4: Stakeholder Averaging (group decisions)**
Each stakeholder assigns weights independently, then average. Reveals divergence in priorities.

See [resources/methodology.md](resources/methodology.md) for detailed facilitation techniques.

## Sensitivity Analysis

After calculating scores, check robustness:

**1. Close calls:** Options within 5-10% of winner → Need more data or second opinion
**2. Dominant criteria:** One criterion driving entire decision → Is weight too high?
**3. Weight sensitivity:** Would swapping two criterion weights flip the winner? → Decision is fragile
**4. Score sensitivity:** Would adjusting one score by ±1 point flip the winner? → Decision is sensitive to that data point
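
Checks 3 and 4 are mechanical enough to script. A sketch using the quick example's numbers (in that data, one weight swap flips the winner and no single ±1 score change does):

```python
from itertools import combinations

weights = {"cost": 0.30, "speed": 0.25, "quality": 0.45}
options = {
    "Option A": {"cost": 8, "speed": 6, "quality": 9},
    "Option B": {"cost": 6, "speed": 9, "quality": 7},
    "Option C": {"cost": 9, "speed": 4, "quality": 6},
}

def winner(w, opts):
    return max(opts, key=lambda o: sum(opts[o][c] * w[c] for c in w))

base = winner(weights, options)

# Check 3 -- weight sensitivity: swap each pair of weights.
for a, b in combinations(weights, 2):
    swapped = {**weights, a: weights[b], b: weights[a]}
    if winner(swapped, options) != base:
        print(f"Fragile: swapping {a}/{b} weights flips the winner")

# Check 4 -- score sensitivity: nudge each score by ±1.
for name in options:
    for c in weights:
        for delta in (-1, 1):
            nudged = {**options, name: {**options[name], c: options[name][c] + delta}}
            if winner(weights, nudged) != base:
                print(f"Sensitive to {name}'s {c} score ({delta:+d} flips the winner)")
```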

**Red flags:**
- Winner changes with small weight adjustments → Need stakeholder alignment on priorities
- One option wins every criterion → Matrix is overkill, choice is obvious
- Scores are mostly guesses → Gather more data before deciding

## Common Patterns

**Technology Selection:**
- Criteria: Cost, performance, ecosystem maturity, team familiarity, vendor support
- Weight: Performance and maturity typically 50%+

**Vendor Evaluation:**
- Criteria: Price, features, integration, support, reputation, contract terms
- Weight: Features and integration typically 40-50%

**Strategic Choices:**
- Criteria: Market opportunity, resource requirements, risk, alignment, timing
- Weight: Market opportunity and alignment typically 50%+

**Hiring Decisions:**
- Criteria: Experience, culture fit, growth potential, compensation expectations, availability
- Weight: Experience and culture fit typically 50%+

**Feature Prioritization:**
- Criteria: User impact, effort, strategic value, risk, dependencies
- Weight: User impact and strategic value typically 50%+

## When NOT to Use This Skill

**Skip decision matrix if:**
- Only one viable option (no real alternatives to compare)
- Decision is binary yes/no with single criterion (use simpler analysis)
- Options differ on only one dimension (just compare that dimension)
- Decision is urgent and stakes are low (analysis overhead not worth it)
- Criteria are impossible to define objectively (purely emotional/aesthetic choice)
- You already know the answer (using matrix to justify pre-made decision is waste)

**Use instead:**
- Single criterion → Simple ranking or threshold check
- Binary decision → Pro/con list or expected value calculation
- Highly uncertain → Scenario planning or decision tree
- Purely subjective → Gut check or user preference vote

## Quick Reference

**Process:**
1. Frame decision → List alternatives
2. Identify criteria → Assign weights (sum to 100%)
3. Score each option on each criterion (1-10 scale)
4. Calculate weighted scores → Rank options
5. Check sensitivity → Deliver recommendation

**Resources:**
- [resources/template.md](resources/template.md) - Structured matrix format and weighting techniques
- [resources/methodology.md](resources/methodology.md) - Advanced techniques (group facilitation, calibration, sensitivity analysis)
- [resources/evaluators/rubric_decision_matrix.json](resources/evaluators/rubric_decision_matrix.json) - Quality checklist before delivering

**Deliverable:** `decision-matrix.md` file with table, rationale, and recommendation

skills/decision-matrix/resources/evaluators/rubric_decision_matrix.json (new file, 218 lines)

{
  "criteria": [
    {
      "name": "Decision Framing & Context",
      "description": "Is the decision clearly defined with all viable alternatives identified?",
      "scoring": {
        "1": "Decision is vague or ill-defined. Alternatives are incomplete or include non-comparable options. No stakeholder identification.",
        "3": "Decision is stated but lacks specificity. Most alternatives listed but may be missing key options. Stakeholders mentioned generally.",
        "5": "Exemplary framing. Decision is specific and unambiguous. All viable alternatives identified (including 'do nothing' if relevant). Must-have requirements separated from criteria. Stakeholders clearly identified with their priorities noted."
      }
    },
    {
      "name": "Criteria Quality & Coverage",
      "description": "Are criteria well-chosen, measurable, independent, and comprehensive?",
      "scoring": {
        "1": "Criteria are vague, redundant, or missing key factors. Too many (>10) or too few (<3). No clear definitions.",
        "3": "Criteria cover main factors but may have some redundancy or gaps. 4-8 criteria with basic definitions. Some differentiation between options.",
        "5": "Exemplary criteria selection. 4-7 criteria that are measurable, independent, relevant, and differentiate between options. Each criterion has clear definition and measurement approach. No redundancy. Captures all important trade-offs."
      }
    },
    {
      "name": "Weighting Appropriateness",
      "description": "Do criterion weights reflect true priorities and sum to 100%?",
      "scoring": {
        "1": "Weights don't sum to 100%, are arbitrary, or clearly misaligned with stated priorities. No rationale provided.",
        "3": "Weights sum to 100% and are reasonable but may lack explicit justification. Some alignment with priorities.",
        "5": "Exemplary weighting. Weights sum to 100%, clearly reflect stakeholder priorities, and have documented rationale (pairwise comparison, swing weighting, or stakeholder averaging). Weight distribution makes sense for decision type."
      }
    },
    {
      "name": "Scoring Rigor & Data Quality",
      "description": "Are scores based on data or defensible judgments with documented sources?",
      "scoring": {
        "1": "Scores appear to be wild guesses with no justification. No data sources. Inconsistent scale usage.",
        "3": "Mix of data-driven and subjective scores. Some sources documented. Mostly consistent 1-10 scale. Some assumptions noted.",
        "5": "Exemplary scoring rigor. Objective criteria backed by real data (quotes, benchmarks, measurements). Subjective criteria have clear anchors/definitions. All assumptions and data sources documented. Consistent 1-10 scale usage."
      }
    },
    {
      "name": "Calculation Accuracy",
      "description": "Are weighted scores calculated correctly and presented clearly?",
      "scoring": {
        "1": "Calculation errors present. Weights don't match stated percentages. Formula mistakes. Unclear presentation.",
        "3": "Calculations are mostly correct with minor issues. Weighted scores shown but presentation could be clearer.",
        "5": "Perfect calculations. Weighted scores = Σ(score × weight) for each option. Table clearly shows raw scores, weights (as percentages), weighted scores, and totals. Ranking is correct."
      }
    },
    {
      "name": "Sensitivity Analysis",
      "description": "Is decision robustness assessed (close calls, weight sensitivity, score uncertainty)?",
      "scoring": {
        "1": "No sensitivity analysis. Winner declared without checking if decision is robust.",
        "3": "Basic sensitivity noted (e.g., 'close call' mentioned) but not systematically analyzed.",
        "5": "Thorough sensitivity analysis. Identifies close calls (<10% margin). Tests weight sensitivity (would swapping weights flip decision?). Notes which scores are most uncertain. Assesses decision robustness and flags fragile decisions."
      }
    },
    {
      "name": "Recommendation Quality",
      "description": "Is recommendation clear with rationale, trade-offs, and confidence level?",
      "scoring": {
        "1": "No clear recommendation or just states winner without rationale. No trade-off discussion.",
        "3": "Recommendation stated with basic rationale. Some trade-offs mentioned. Confidence level implied but not stated.",
        "5": "Exemplary recommendation. Clear winner with score. Explains WHY winner prevails (which criteria drive decision). Acknowledges trade-offs (where winner scores lower). States confidence level based on margin and sensitivity. Suggests next steps."
      }
    },
    {
      "name": "Assumption & Limitation Documentation",
      "description": "Are key assumptions, uncertainties, and limitations explicitly stated?",
      "scoring": {
        "1": "No assumptions documented. Presents results as facts without acknowledging uncertainty or limitations.",
        "3": "Some assumptions mentioned. Acknowledges uncertainty exists but not comprehensive.",
        "5": "All key assumptions explicitly documented. Uncertainties flagged (which scores are guesses vs data). Limitations noted (e.g., 'cost estimates are preliminary', 'performance benchmarks unavailable'). Reader understands confidence bounds."
      }
    },
    {
      "name": "Stakeholder Alignment",
      "description": "For group decisions, are different stakeholder priorities surfaced and addressed?",
      "scoring": {
        "1": "Single set of weights/scores presented as if universal. No acknowledgment of stakeholder differences.",
        "3": "Stakeholder differences mentioned but not systematically addressed. Single averaged view presented.",
        "5": "Stakeholder differences explicitly surfaced. If priorities diverge, shows impact (e.g., 'Under engineering priorities, A wins; under sales priorities, B wins'). Facilitates alignment or escalates decision appropriately."
      }
    },
    {
      "name": "Communication & Presentation",
      "description": "Is matrix table clear, readable, and appropriately formatted?",
      "scoring": {
        "1": "Matrix is confusing, poorly formatted, or missing key elements (weights, totals). Hard to interpret.",
        "3": "Matrix is readable with minor formatting issues. Weights and totals shown but could be clearer.",
        "5": "Exemplary presentation. Table is clean and scannable. Column headers show criteria names AND weights (%). Weighted scores shown (not just raw scores). Winner visually highlighted. Assumptions and next steps clearly stated."
      }
    }
  ],
  "minimum_score": 3.5,
  "guidance_by_decision_type": {
    "Technology Selection (tools, platforms, vendors)": {
      "target_score": 4.0,
      "focus_criteria": [
        "Criteria Quality & Coverage",
        "Scoring Rigor & Data Quality",
        "Sensitivity Analysis"
      ],
      "common_pitfalls": [
        "Missing 'Total Cost of Ownership' as criterion (not just upfront cost)",
        "Ignoring integration complexity or vendor lock-in risk",
        "Not scoring 'do nothing / keep current solution' as baseline"
      ]
    },
    "Strategic Choices (market entry, partnerships, positioning)": {
      "target_score": 4.0,
      "focus_criteria": [
        "Decision Framing & Context",
        "Weighting Appropriateness",
        "Stakeholder Alignment"
      ],
      "common_pitfalls": [
        "Weighting short-term metrics too heavily over strategic fit",
        "Not including reversibility / optionality as criterion",
        "Ignoring stakeholder misalignment on priorities"
      ]
    },
    "Vendor / Supplier Evaluation": {
      "target_score": 3.8,
      "focus_criteria": [
        "Criteria Quality & Coverage",
        "Scoring Rigor & Data Quality",
        "Assumption & Limitation Documentation"
      ],
      "common_pitfalls": [
        "Relying on vendor-provided data without validation",
        "Not including 'vendor financial health' or 'support SLA' criteria",
        "Missing contract terms (pricing lock, exit clauses) as criterion"
      ]
    },
    "Feature Prioritization": {
      "target_score": 3.5,
      "focus_criteria": [
        "Weighting Appropriateness",
        "Scoring Rigor & Data Quality",
        "Sensitivity Analysis"
      ],
      "common_pitfalls": [
        "Not including 'effort' or 'technical risk' as criteria",
        "Scoring 'user impact' without user research data",
        "Ignoring dependencies between features"
      ]
    },
    "Hiring Decisions": {
      "target_score": 3.5,
      "focus_criteria": [
        "Criteria Quality & Coverage",
        "Scoring Rigor & Data Quality",
        "Assumption & Limitation Documentation"
      ],
      "common_pitfalls": [
        "Criteria too vague (e.g., 'culture fit' without definition)",
        "Interviewer bias in scores (need calibration)",
        "Not documenting what good vs poor looks like for each criterion"
      ]
    }
  },
  "guidance_by_complexity": {
    "Simple (3-4 alternatives, clear criteria, aligned stakeholders)": {
      "target_score": 3.5,
      "sufficient_rigor": "Basic weighting (direct allocation), data-driven scores where possible, simple sensitivity check (margin analysis)"
    },
    "Moderate (5-7 alternatives, some subjectivity, minor disagreement)": {
      "target_score": 3.8,
      "sufficient_rigor": "Structured weighting (rank-order or pairwise), documented scoring rationale, sensitivity analysis on close calls"
    },
    "Complex (8+ alternatives, high subjectivity, stakeholder conflict)": {
      "target_score": 4.2,
      "sufficient_rigor": "Advanced weighting (AHP, swing), score calibration/normalization, Monte Carlo or scenario sensitivity, stakeholder convergence process (Delphi, NGT)"
    }
  },
  "common_failure_modes": {
    "1. Post-Rationalization": {
      "symptom": "Weights or scores appear engineered to justify pre-made decision",
      "detection": "Oddly specific weights (37%), generous scores for preferred option, stakeholders admit 'we already know the answer'",
      "prevention": "Assign weights BEFORE scoring alternatives. Use blind facilitation. Ask: 'If matrix contradicts gut, do we trust it?'"
    },
    "2. Garbage In, Garbage Out": {
      "symptom": "All scores are guesses with no data backing",
      "detection": "Cannot answer 'where did this score come from?', scores assigned in <5 min, all round numbers (5, 7, 8)",
      "prevention": "Require data sources for objective criteria. Define scoring anchors for subjective criteria. Flag uncertainties."
    },
    "3. Analysis Paralysis": {
      "symptom": "Endless refinement, never deciding",
      "detection": ">10 criteria, winner changes 3+ times, 'just one more round' requests",
      "prevention": "Set decision deadline. Cap criteria at 5-7. Use satisficing rule: 'Any option >7.0 is acceptable.'"
    },
    "4. Criterion Soup": {
      "symptom": "Overlapping, redundant, or conflicting criteria",
      "detection": "Two criteria always score the same, scorer confusion ('how is this different?')",
      "prevention": "Independence test: Can option score high on A but low on B? If no, merge them. Write clear definitions."
    },
    "5. Ignoring Sensitivity": {
      "symptom": "Winner declared without robustness check",
      "detection": "No mention of margin, close calls, or what would flip decision",
      "prevention": "Always report margin. Test: 'If we swapped top 2 weights, does winner change?' Flag fragile decisions."
    },
    "6. Stakeholder Misalignment": {
      "symptom": "Different stakeholders have different priorities but single matrix presented",
      "detection": "Engineering wants A, sales wants B, but matrix 'proves' one is right",
      "prevention": "Surface weight differences. Show 'under X priorities, A wins; under Y priorities, B wins.' Escalate if needed."
    },
    "7. Missing 'Do Nothing'": {
      "symptom": "Only evaluating new alternatives, forgetting status quo is an option",
      "detection": "All alternatives are new changes, no baseline comparison",
      "prevention": "Always include current state / do nothing as an option to evaluate if change is worth it."
    },
    "8. False Precision": {
      "symptom": "Scores to 2 decimals when underlying data is rough guess",
      "detection": "Weighted total: 7.342 but scores are subjective estimates",
      "prevention": "Match precision to confidence. Rough guesses → round to 0.5. Data-driven → decimals OK."
    }
  }
}

skills/decision-matrix/resources/methodology.md (new file, 398 lines)

# Decision Matrix: Advanced Methodology

## Workflow

Copy this checklist for complex decision scenarios:

```
Advanced Decision Matrix Progress:
- [ ] Step 1: Diagnose decision complexity
- [ ] Step 2: Apply advanced weighting techniques
- [ ] Step 3: Calibrate and normalize scores
- [ ] Step 4: Perform rigorous sensitivity analysis
- [ ] Step 5: Facilitate group convergence
```

**Step 1: Diagnose decision complexity** - Identify complexity factors (stakeholder disagreement, high uncertainty, strategic importance). See [1. Decision Complexity Assessment](#1-decision-complexity-assessment).

**Step 2: Apply advanced weighting techniques** - Use AHP or other rigorous methods for contentious decisions. See [2. Advanced Weighting Methods](#2-advanced-weighting-methods).

**Step 3: Calibrate and normalize scores** - Handle different scoring approaches and normalize across scorers. See [3. Score Calibration & Normalization](#3-score-calibration--normalization).

**Step 4: Perform rigorous sensitivity analysis** - Test decision robustness with Monte Carlo or scenario analysis. See [4. Advanced Sensitivity Analysis](#4-advanced-sensitivity-analysis).

**Step 5: Facilitate group convergence** - Use Delphi method or consensus-building techniques. See [5. Group Decision Facilitation](#5-group-decision-facilitation).

---

## 1. Decision Complexity Assessment

### Complexity Indicators

**Low Complexity** (use basic template):
- Clear stakeholder alignment on priorities
- Objective criteria with available data
- Low stakes (reversible decision)
- 3-5 alternatives

**Medium Complexity** (use enhanced techniques):
- Moderate stakeholder disagreement
- Mix of objective and subjective criteria
- Moderate stakes (partially reversible)
- 5-8 alternatives

**High Complexity** (use full methodology):
- Significant stakeholder disagreement on priorities
- Mostly subjective criteria or high uncertainty
- High stakes (irreversible or strategic decision)
- >8 alternatives or multi-phase decision
- Regulatory or compliance implications

### Complexity Scoring

| Factor | Low (1) | Medium (2) | High (3) |
|--------|---------|------------|----------|
| **Stakeholder alignment** | Aligned priorities | Some disagreement | Conflicting priorities |
| **Criteria objectivity** | Mostly data-driven | Mix of data & judgment | Mostly subjective |
| **Decision stakes** | Reversible, low cost | Partially reversible | Irreversible, strategic |
| **Uncertainty level** | Low uncertainty | Moderate uncertainty | High uncertainty |
| **Number of alternatives** | 3-4 options | 5-7 options | 8+ options |

**Complexity Score = Sum of factors**
- **5-7 points:** Use basic template
- **8-11 points:** Use enhanced techniques (sections 2-3)
- **12-15 points:** Use full methodology (all sections)

---

## 2. Advanced Weighting Methods

### Analytic Hierarchy Process (AHP)

**When to use:** High-stakes decisions with contentious priorities, need rigorous justification

**Process:**

1. **Create pairwise comparison matrix:** For each pair, rate 1-9 (1 = equal, 3 = slightly more important, 5 = moderately, 7 = strongly, 9 = extremely)
2. **Calculate weights:** Normalize columns, average rows
3. **Check consistency:** A consistency ratio (CR) < 0.10 is acceptable (use an online AHP calculator: bpmsg.com/ahp/ahp-calc.php)

**Example:** Comparing Cost, Performance, Risk, Ease pairwise yields weights: Performance 55%, Risk 20%, Cost 15%, Ease 10%

**Advantage:** Rigorous, forces logical consistency in pairwise judgments.
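
A numpy sketch of steps 2-3 on a hypothetical 4×4 judgment matrix (the entries, and therefore the weights printed, are invented for illustration and will not reproduce the 55/20/15/10 example exactly):

```python
import numpy as np

# Hypothetical pairwise judgments for (Performance, Risk, Cost, Ease);
# entry [i][j] = how much more important criterion i is than criterion j,
# with reciprocals below the diagonal.
A = np.array([
    [1,   4,   5,   7],
    [1/4, 1,   2,   3],
    [1/5, 1/2, 1,   2],
    [1/7, 1/3, 1/2, 1],
])

# Step 2: normalize each column, then average across each row.
weights = (A / A.sum(axis=0)).mean(axis=1)
print(weights.round(2))  # ~[0.61, 0.20, 0.12, 0.07]

# Step 3: consistency ratio CR = ((lambda_max - n) / (n - 1)) / RI.
n = len(A)
lambda_max = (A @ weights / weights).mean()
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random index for n criteria
CR = ((lambda_max - n) / (n - 1)) / RI
print(f"CR = {CR:.2f}")  # well under 0.10 here, so the judgments are consistent
```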

### Swing Weighting

**When to use:** Need to justify weights based on value difference, not just importance

**Process:**

1. **Baseline:** Imagine all criteria at worst level
2. **Swing:** For each criterion, ask "What value does moving from worst to best create?"
3. **Rank swings:** Which swing creates most value?
4. **Assign points:** Give highest swing 100 points, others relative to it
5. **Convert to weights:** Normalize points to percentages

**Example:**

| Criterion | Worst → Best Scenario | Value of Swing | Points | Weight |
|-----------|----------------------|----------------|--------|--------|
| Performance | 50ms → 5ms response | Huge value gain | 100 | 45% |
| Cost | $100K → $50K | Moderate value | 60 | 27% |
| Risk | High → Low risk | Significant value | 50 | 23% |
| Ease | Hard → Easy to use | Minor value | 10 | 5% |

**Total points:** 220 → **Weights:** 100/220 = 45%, 60/220 = 27%, 50/220 = 23%, 10/220 = 5%

**Advantage:** Focuses on marginal value, not abstract importance. Reveals if criteria with wide option variance should be weighted higher.

### Multi-Voting (Group Weighting)

**When to use:** Group of 5-15 stakeholders needs to converge on weights

**Process:**

1. **Round 1 - Individual allocation:** Each person assigns 100 points across criteria
2. **Reveal distribution:** Show average and variance for each criterion
3. **Discuss outliers:** Why did some assign 40% to Cost while others assigned 10%?
4. **Round 2 - Revised allocation:** Re-allocate with new information
5. **Converge:** Repeat until variance is acceptable or use average

**Example:**

| Criterion | Round 1 Avg | Round 1 Variance | Round 2 Avg | Round 2 Variance |
|-----------|-------------|------------------|-------------|------------------|
| Cost | 25% | High (±15%) | 30% | Low (±5%) |
| Performance | 40% | Medium (±10%) | 38% | Low (±4%) |
| Risk | 20% | Low (±5%) | 20% | Low (±3%) |
| Ease | 15% | High (±12%) | 12% | Low (±4%) |

**Convergence achieved** when variance <±5% for all criteria.

---

## 3. Score Calibration & Normalization

### Handling Different Scorer Tendencies

**Problem:** Some scorers are "hard graders" (6-7 range), others are "easy graders" (8-9 range). This skews results.

**Solution: Z-score normalization**

**Step 1: Calculate each scorer's mean and standard deviation**

Scorer A gave scores [8, 9, 7, 8] → Mean = 8, SD ≈ 0.8
Scorer B gave scores [5, 6, 4, 6] → Mean = 5.25, SD ≈ 0.8

**Step 2: Normalize each score**

Z-score = (Raw Score - Scorer Mean) / Scorer SD

**Step 3: Re-scale to 1-10**

Normalized Score = 5.5 + (Z-score × 1.5)

**Result:** Scorers are calibrated to same scale, eliminating grading bias.
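
The three steps as a small Python function (the clamp to the 1-10 band is an added guard, not part of the formula above):

```python
import statistics

def calibrate(scores: list[float]) -> list[float]:
    """Z-score-normalize one scorer's ratings, then rescale to the 1-10 band."""
    mean = statistics.mean(scores)
    sd = statistics.pstdev(scores) or 1.0  # guard: scorer who gives identical scores
    return [round(max(1.0, min(10.0, 5.5 + 1.5 * (s - mean) / sd)), 1) for s in scores]

print(calibrate([8, 9, 7, 8]))  # easy grader
print(calibrate([5, 6, 4, 6]))  # hard grader -- both now live on the same scale
```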

### Dealing with Missing Data

**Scenario:** Some alternatives can't be scored on all criteria (e.g., vendor A won't share cost until later).

**Approach 1: Conditional matrix**

Score available criteria only, note which are missing. Once data arrives, re-run matrix.

**Approach 2: Pessimistic/Optimistic bounds**

Assign worst-case and best-case scores for missing data. Run matrix twice:
- Pessimistic scenario: Missing data gets low score (e.g., 3)
- Optimistic scenario: Missing data gets high score (e.g., 8)

If same option wins both scenarios → Decision is robust. If different winners → Missing data is decision-critical, must obtain before deciding.

### Non-Linear Scoring Curves

**Problem:** Not all criteria are linear. E.g., the cost difference between $10K and $20K matters more than $110K vs $120K.

**Solution: Apply utility curves**

**Diminishing returns curve** (Cost, Time):
- Score = 10 × (1 - e^(-k × Cost Improvement))
- k = sensitivity parameter (higher k = faster diminishing returns)

**Threshold curve** (Must meet minimum):
- Score = 0 if below threshold
- Score = 1-10 linear above threshold

**Example:** Load time criterion with a 2-second threshold (load time is lower-is-better, so here the threshold acts as a ceiling and the penalty grows the further an option exceeds it):
- Option A: 1.5s → Score = 10 (meets threshold = great)
- Option B: 3s → Score = 5 (exceeds threshold, linear penalty)
- Option C: 5s → Score = 1 (way above threshold)
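
A sketch of both curves (the penalty slope and the worst-case anchor in `threshold_score` are modeling choices, so the printed scores differ slightly from the illustration above):

```python
import math

def diminishing_returns(improvement: float, k: float = 1.0) -> float:
    """Score = 10 × (1 - e^(-k × improvement)): early gains count most."""
    return 10 * (1 - math.exp(-k * improvement))

def threshold_score(value: float, threshold: float, worst: float) -> float:
    """Lower-is-better metric: full marks at or under the threshold,
    then a linear penalty down to 1 at the assumed worst case."""
    if value <= threshold:
        return 10.0
    return max(1.0, 10.0 - 9.0 * (value - threshold) / (worst - threshold))

for load_time in (1.5, 3.0, 5.0):
    print(f"{load_time}s -> {threshold_score(load_time, threshold=2.0, worst=5.0):.0f}")
# 1.5s -> 10, 3.0s -> 7, 5.0s -> 1
```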

---

## 4. Advanced Sensitivity Analysis

### Monte Carlo Sensitivity

**When to use:** High uncertainty in scores, want to understand probability distribution of outcomes

**Process:**

1. **Define uncertainty ranges** for each score
   - Option A Cost score: 6 ± 2 (could be 4-8)
   - Option A Performance: 9 ± 0.5 (could be 8.5-9.5)

2. **Run simulations** (1000+ iterations):
   - Randomly sample scores within uncertainty ranges
   - Calculate weighted total for each option
   - Record winner

3. **Analyze results:**
   - Option A wins: 650/1000 = 65% probability
   - Option B wins: 300/1000 = 30% probability
   - Option C wins: 50/1000 = 5% probability

**Interpretation:**
- **>80% win rate:** High confidence in decision
- **50-80% win rate:** Moderate confidence, option is likely but not certain
- **<50% win rate:** Low confidence; gather more data or treat the decision as a close call

**Tools:** Excel (=RANDBETWEEN or =NORM.INV), Python (numpy.random), R (rnorm)
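
A compact numpy sketch of the loop (the weights, score means, and uncertainty ranges are placeholders; scores are sampled from normal distributions and clipped to the 1-10 scale):

```python
import numpy as np

rng = np.random.default_rng(seed=42)

weights = np.array([0.30, 0.40, 0.30])   # cost, performance, risk
names = ["Option A", "Option B", "Option C"]
means = np.array([[6.0, 9.0, 7.0],       # score means, one row per option
                  [8.0, 6.5, 6.0],
                  [7.0, 7.5, 5.0]])
sds = np.array([[2.0, 0.5, 1.0],         # score uncertainty (std dev)
                [1.0, 1.0, 1.5],
                [1.5, 1.0, 1.0]])

N = 10_000
samples = rng.normal(means, sds, size=(N, *means.shape)).clip(1, 10)
totals = samples @ weights               # weighted total per option, per run
win_rate = np.bincount(totals.argmax(axis=1), minlength=len(names)) / N
for name, p in zip(names, win_rate):
    print(f"{name} wins {p:.0%} of simulations")
```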

### Scenario Analysis

**When to use:** Future is uncertain, decisions need to be robust across scenarios

**Process:**

1. **Define scenarios** (typically 3-4):
   - Best case: Favorable market conditions
   - Base case: Expected conditions
   - Worst case: Unfavorable conditions
   - Black swan: Unlikely but high-impact event

2. **Adjust criterion weights or scores per scenario:**

| Scenario | Cost Weight | Performance Weight | Risk Weight |
|----------|-------------|--------------------|-------------|
| Best case | 20% | 50% | 30% |
| Base case | 30% | 40% | 30% |
| Worst case | 40% | 20% | 40% |

3. **Run matrix for each scenario**, identify winner

4. **Evaluate robustness:**
   - **Dominant option:** Wins in all scenarios → Robust choice
   - **Scenario-dependent:** Different winners → Need to assess scenario likelihood
   - **Mixed:** Wins in base + one other → Moderately robust

### Threshold Analysis

**Question:** At what weight does the decision flip?

**Process:**

1. **Vary one criterion weight** from 0% to 100% (keeping others proportional)
2. **Plot total scores** for all options vs. weight
3. **Identify crossover point** where lines intersect (decision flips)

**Example:**

When Performance weight < 25% → Option B wins (cost-optimized)
When Performance weight > 25% → Option A wins (performance-optimized)

**Insight:** Current weight is 40% for Performance. Decision is robust unless Performance drops below 25% importance.

**Practical use:** Communicate to stakeholders: "Even if we reduce Performance priority to 25% (vs current 40%), Option A still wins. Decision is robust."
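
The sweep itself is a few lines; in this two-criterion sketch the scores are chosen so the crossover lands at the example's 25%:

```python
# Sweep the performance weight from 0% to 100% (remainder on cost) and
# report where the leader changes. Scores are illustrative: (performance, cost).
scores = {"Option A (performance-optimized)": (9, 7),
          "Option B (cost-optimized)": (6, 8)}

prev = None
for i in range(101):
    w = i / 100
    totals = {name: w * perf + (1 - w) * cost
              for name, (perf, cost) in scores.items()}
    leader = max(totals, key=totals.get)
    if prev is not None and leader != prev:
        print(f"Decision flips at performance weight ~{w:.0%}")
    prev = leader
```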

---

## 5. Group Decision Facilitation

### Delphi Method (Asynchronous Consensus)

**When to use:** Experts geographically distributed, want to avoid groupthink, need convergence without meetings

**Process:**

**Round 1:**
- Each expert scores options independently (no discussion)
- Facilitator compiles scores, calculates median and range

**Round 2:**
- Share Round 1 results (anonymous)
- Experts see median scores and outliers
- Ask experts to re-score, especially if they were outliers (optional: provide reasoning)

**Round 3:**
- Share Round 2 results
- Experts make final adjustments
- Converge on consensus scores (median or mean)

**Convergence criteria:** Standard deviation of scores <1.5 points per criterion

**Example:**

| Option | Criterion | R1 Scores | R1 Median | R2 Scores | R2 Median | R3 Scores | R3 Median |
|--------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
| A | Cost | [5, 7, 9, 6] | 6.5 | [6, 7, 8, 6] | 6.5 | [6, 7, 7, 7] | **7** |

**Advantage:** Avoids dominance by loudest voice, reduces groupthink, allows reflection time.

### Nominal Group Technique (Structured Meeting)

**When to use:** In-person or virtual meeting, need structured discussion to surface disagreements

**Process:**

1. **Silent generation (10 min):** Each person scores options independently
2. **Round-robin sharing (20 min):** Each person shares one score and rationale (no debate yet)
3. **Discussion (30 min):** Debate differences, especially outliers
4. **Re-vote (5 min):** Independent re-scoring after hearing perspectives
5. **Aggregation:** Calculate final scores (mean or median)

**Facilitation tips:**
- Enforce "no interruptions" during round-robin
- Time-box discussion to avoid analysis paralysis
- Focus debate on criteria with widest score variance

### Handling Persistent Disagreement

**Scenario:** After multiple rounds, stakeholders still disagree on weights or scores.

**Options:**

**1. Separate matrices by stakeholder group:**

Run matrix for Engineering priorities, Sales priorities, Executive priorities separately. Present all three results. Highlight where recommendations align vs. differ.

**2. Escalate to decision-maker:**

Present divergence transparently: "Engineering weights Performance at 60%, Sales weights Cost at 50%. Under Engineering weights, Option A wins. Under Sales weights, Option B wins. Recommendation: [Decision-maker] must adjudicate priority trade-off."

**3. Multi-criteria satisficing:**

Instead of optimizing weighted sum, find option that meets minimum thresholds on all criteria. This avoids weighting debate.

**Example:** Option must score ≥7 on Performance AND ≤$50K cost AND ≥6 on Ease of Use. Find options that satisfy all constraints.

---

## 6. Matrix Variations & Extensions

### Weighted Pros/Cons Matrix
Hybrid: Add "Key Pros/Cons/Dealbreakers" columns to matrix for qualitative context alongside quantitative scores.

### Multi-Phase Decision Matrix
**Phase 1:** High-level filter (simple criteria) → shortlist top 3
**Phase 2:** Deep-dive (detailed criteria) → select winner
Avoids analysis paralysis by not deep-diving on all options upfront.

### Risk-Adjusted Matrix
For uncertain scores, use expected value: (Optimistic + 4×Most Likely + Pessimistic) / 6
Accounts for score uncertainty in final weighted total.
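
As a small helper (the three-point estimates are illustrative):

```python
def risk_adjusted(optimistic: float, most_likely: float, pessimistic: float) -> float:
    """PERT-style expected score from a three-point estimate."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

print(round(risk_adjusted(9, 7, 4), 2))  # 6.83 -- use in place of the single-point score
```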

---

## 7. Common Failure Modes & Recovery

| Failure Mode | Symptoms | Recovery |
|--------------|----------|----------|
| **Post-Rationalization** | Oddly specific weights, generous scores for preferred option | Assign weights BEFORE scoring, use third-party facilitator |
| **Analysis Paralysis** | >10 criteria, endless tweaking, winner changes repeatedly | Set deadline, time-box criteria (5 max), use satisficing rule |
| **Garbage In, Garbage Out** | Scores are guesses, no data sources, false confidence | Flag uncertainties, gather real data, acknowledge limits |
| **Criterion Soup** | Overlapping criteria, scorer confusion | Consolidate redundant criteria, define each clearly |
| **Spreadsheet Error** | Calculation mistakes, weights don't sum to 100% | Use templates with formulas, peer review calculations |

---

## 8. When to Abandon the Matrix

Despite best efforts, sometimes a decision matrix is not the right tool:

**Abandon if:**

1. **Purely emotional decision:** Choosing baby name, selecting wedding venue (no "right" answer)
   - **Use instead:** Gut feel, user preference vote

2. **Single dominant criterion:** Only cost matters, everything else is noise
   - **Use instead:** Simple cost comparison table

3. **Decision already made:** Political realities mean decision is predetermined
   - **Use instead:** Document decision rationale (not fake analysis)

4. **Future is too uncertain:** Can't meaningfully score because context will change dramatically
   - **Use instead:** Scenario planning, real options analysis, reversible pilot

5. **Stakeholders distrust process:** Matrix seen as "math washing" to impose decision
   - **Use instead:** Deliberative dialog, voting, or delegated authority

**Recognize when structured analysis adds value vs. when it's theater.** Decision matrices work best when:
- Multiple alternatives genuinely exist
- Trade-offs are real and must be balanced
- Stakeholders benefit from transparency
- Data is available or can be gathered
- Decision is reversible if matrix misleads

If these don't hold, consider alternative decision frameworks.

skills/decision-matrix/resources/template.md (new file, 370 lines)

# Decision Matrix Template

## Workflow

Copy this checklist and track your progress:

```
Decision Matrix Progress:
- [ ] Step 1: Frame the decision
- [ ] Step 2: Identify criteria and assign weights
- [ ] Step 3: Score alternatives
- [ ] Step 4: Calculate and analyze results
- [ ] Step 5: Validate and deliver
```

**Step 1: Frame the decision** - Clarify decision context, list alternatives, identify must-haves. See [Decision Framing](#decision-framing).

**Step 2: Identify criteria and assign weights** - Determine what factors matter, assign percentage weights. See [Criteria Identification](#criteria-identification) and [Weighting Techniques](#weighting-techniques).

**Step 3: Score alternatives** - Rate each option on each criterion (1-10 scale). See [Scoring Guidance](#scoring-guidance).

**Step 4: Calculate and analyze results** - Compute weighted scores, rank options, check sensitivity. See [Matrix Calculation](#matrix-calculation) and [Interpretation](#interpretation).

**Step 5: Validate and deliver** - Quality check against [Quality Checklist](#quality-checklist), deliver with recommendation.

---

## Decision Framing

### Input Questions

Ask user to clarify:

**1. Decision context:**
- What are we deciding? (Be specific: "Choose CRM platform" not "improve sales")
- Why now? (Triggering event, deadline, opportunity)
- What happens if we don't decide or choose wrong?

**2. Alternatives:**
- What are ALL the options we're considering? (Get exhaustive list)
- Include "do nothing" or status quo as an option if relevant
- Are these mutually exclusive or can we combine them?

**3. Must-have requirements (filters):**
- Are there absolute dealbreakers? (Budget cap, compliance requirement, technical constraint)
- Which options fail must-haves and can be eliminated immediately?
- Distinguish between "must have" (filter) and "nice to have" (criterion)

**4. Stakeholders:**
- Who needs to agree with this decision?
- Who will be affected by it?
- Do different stakeholders have different priorities?

### Framing Template

```markdown
## Decision Context
- **Decision:** [Specific choice to be made]
- **Timeline:** [When decision needed by]
- **Stakeholders:** [Who needs to agree]
- **Consequences of wrong choice:** [What we risk]

## Alternatives
1. [Option A name]
2. [Option B name]
3. [Option C name]
4. [Option D name - if applicable]
5. [Do nothing / Status quo - if applicable]

## Must-Have Requirements (Pass/Fail)
- [ ] [Requirement 1] - All options must meet this
- [ ] [Requirement 2] - Eliminates options that don't pass
- [ ] [Requirement 3] - Non-negotiable constraint

**Options eliminated:** [List any that fail must-haves]
**Remaining options:** [List that pass filters]
```

---

## Criteria Identification

### Process

**Step 1: Brainstorm factors**

Ask: "What makes one option better than another?"

Common categories:
- **Cost:** Upfront, ongoing, total cost of ownership
- **Performance:** Speed, quality, reliability, scalability
- **Risk:** Implementation risk, reversibility, vendor lock-in
- **Strategic:** Alignment with goals, competitive advantage, future flexibility
- **Operational:** Ease of use, maintenance, training, support
- **Stakeholder:** Team preference, customer impact, executive buy-in

**Step 2: Validate criteria**

Each criterion should be:
- [ ] **Measurable or scorable** (can assign 1-10 rating)
- [ ] **Differentiating** (options vary on this dimension)
- [ ] **Relevant** (actually matters for this decision)
- [ ] **Independent** (not redundant with other criteria)

**Remove:**
- Criteria where all options score the same (no differentiation)
- Duplicate criteria that measure same thing
- Criteria that should be must-haves (pass/fail, not scored)

**Step 3: Keep list manageable**

- **Ideal:** 4-7 criteria (enough to capture trade-offs, not overwhelming)
- **Minimum:** 3 criteria (otherwise too simplistic)
- **Maximum:** 10 criteria (beyond this, hard to weight meaningfully)

If you have >10 criteria, group related ones into categories with sub-criteria.

### Criteria Template

```markdown
## Evaluation Criteria

| # | Criterion | Definition | How We'll Measure |
|---|-----------|------------|-------------------|
| 1 | [Name] | [What this measures] | [Data source or scoring approach] |
| 2 | [Name] | [What this measures] | [Data source or scoring approach] |
| 3 | [Name] | [What this measures] | [Data source or scoring approach] |
| 4 | [Name] | [What this measures] | [Data source or scoring approach] |
| 5 | [Name] | [What this measures] | [Data source or scoring approach] |
```

---

## Weighting Techniques

### Technique 1: Direct Allocation (Fastest)
Solo decision or aligned stakeholders. Assign percentages summing to 100%. Start with most important (30-50%), avoid weights <5%, round to 5% increments.

**Example:** Cost 30%, Performance 25%, Ease of use 20%, Risk 15%, Team preference 10% = 100%

### Technique 2: Pairwise Comparison (Most Rigorous)
Difficult to weight directly or need justification. Compare each pair ("Is A more important than B?"), tally wins, convert to percentages.

**Example:** Cost vs Performance → Performance wins. After all pairs, Performance has 4 wins (40%), Cost has 2 wins (20%), etc.

### Technique 3: Stakeholder Averaging (Group Decisions)
Multiple stakeholders with different priorities. Each assigns weights independently, then average. Large variance reveals disagreement → discuss before proceeding.

**Example:** If stakeholders assign Cost weights of 40%, 20%, 30% → Average is 30%, but variance suggests need for alignment discussion.

---

## Scoring Guidance

### Scoring Scale

**Use 1-10 scale** (better granularity than 1-5):

- **10:** Exceptional, best-in-class
- **8-9:** Very good, exceeds requirements
- **6-7:** Good, meets requirements
- **4-5:** Acceptable, meets minimum
- **2-3:** Poor, below requirements
- **1:** Fails, unacceptable

**Consistency tips:**
- Define what 10 means for each criterion before scoring
- Score all options on one criterion at a time (easier to compare)
- Use half-points (7.5) if needed for precision

### Scoring Process

**For objective criteria (cost, speed, measurable metrics):**

1. Get actual data (quotes, benchmarks, measurements)
2. Convert to 1-10 scale using formula:
   - **Lower is better** (cost, time): Score = 10 × (Best value / This value)
   - **Higher is better** (performance, capacity): Score = 10 × (This value / Best value)

**Example (Cost - lower is better):**
- Option A: $50K → Score = 10 × ($30K / $50K) = 6.0
- Option B: $30K → Score = 10 × ($30K / $30K) = 10.0
- Option C: $40K → Score = 10 × ($30K / $40K) = 7.5
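
Both conversion formulas as a small helper (the clamp to the 1-10 band is an added safeguard for outliers, not part of the formulas above):

```python
def to_score(value: float, best: float, lower_is_better: bool = True) -> float:
    """Convert a raw measurement to the matrix's 1-10 scale."""
    score = 10 * (best / value) if lower_is_better else 10 * (value / best)
    return max(1.0, min(10.0, score))

costs = {"Option A": 50_000, "Option B": 30_000, "Option C": 40_000}
best = min(costs.values())
for name, cost in costs.items():
    print(f"{name}: {to_score(cost, best):.1f}")
# Option A: 6.0, Option B: 10.0, Option C: 7.5
```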

**For subjective criteria (ease of use, team preference):**

1. Define what 10, 7, and 4 look like for this criterion
2. Score relative to those anchors
3. Document reasoning/assumptions

**Example (Ease of Use):**
- 10 = No training needed, intuitive UI, users productive day 1
- 7 = 1-week training, moderate learning curve
- 4 = Significant training (1 month), complex UI

**Calibration questions:**
- Would I bet money on this score being accurate?
- Is this score relative to alternatives or absolute?
- What would change this score by ±2 points?

### Scoring Template

```markdown
## Scoring Matrix

| Option | Criterion 1 (Weight%) | Criterion 2 (Weight%) | Criterion 3 (Weight%) | Criterion 4 (Weight%) |
|--------|-----------------------|-----------------------|-----------------------|-----------------------|
| Option A | [Score] | [Score] | [Score] | [Score] |
| Option B | [Score] | [Score] | [Score] | [Score] |
| Option C | [Score] | [Score] | [Score] | [Score] |

**Data sources and assumptions:**
- Criterion 1: [Where scores came from, what assumptions]
- Criterion 2: [Where scores came from, what assumptions]
- Criterion 3: [Where scores came from, what assumptions]
- Criterion 4: [Where scores came from, what assumptions]
```

---

## Matrix Calculation

### Calculation Process

**For each option:**
1. Multiply criterion score by criterion weight
2. Sum all weighted scores
3. This is the option's total score

**Formula:** Total Score = Σ (Criterion Score × Criterion Weight)

**Example:**

| Option | Cost (30%) | Performance (40%) | Risk (20%) | Ease (10%) | **Total** |
|--------|-----------|------------------|-----------|-----------|---------|
| Option A | 7 × 0.30 = 2.1 | 9 × 0.40 = 3.6 | 6 × 0.20 = 1.2 | 8 × 0.10 = 0.8 | **7.7** |
| Option B | 9 × 0.30 = 2.7 | 6 × 0.40 = 2.4 | 8 × 0.20 = 1.6 | 6 × 0.10 = 0.6 | **7.3** |
| Option C | 5 × 0.30 = 1.5 | 8 × 0.40 = 3.2 | 7 × 0.20 = 1.4 | 9 × 0.10 = 0.9 | **7.0** |

**Winner: Option A (7.7)**

### Final Matrix Template

```markdown
## Decision Matrix Results

| Option | [Criterion 1] ([W1]%) | [Criterion 2] ([W2]%) | [Criterion 3] ([W3]%) | [Criterion 4] ([W4]%) | **Weighted Total** | **Rank** |
|--------|----------------------|----------------------|----------------------|----------------------|-------------------|----------|
| [Option A] | [S] ([S×W1]) | [S] ([S×W2]) | [S] ([S×W3]) | [S] ([S×W4]) | **[Total]** | [Rank] |
| [Option B] | [S] ([S×W1]) | [S] ([S×W2]) | [S] ([S×W3]) | [S] ([S×W4]) | **[Total]** | [Rank] |
| [Option C] | [S] ([S×W1]) | [S] ([S×W2]) | [S] ([S×W3]) | [S] ([S×W4]) | **[Total]** | [Rank] |

**Weights:** [Criterion 1] ([W1]%), [Criterion 2] ([W2]%), [Criterion 3] ([W3]%), [Criterion 4] ([W4]%)

**Scoring scale:** 1-10 (10 = best)
```

---

## Interpretation

### Analysis Checklist

After calculating scores, analyze:

**1. Clear winner vs close call**
- [ ] **Margin >10%:** Clear winner, decision is robust
- [ ] **Margin 5-10%:** Moderate confidence, validate assumptions
- [ ] **Margin <5%:** Toss-up, need more data or stakeholder discussion

**2. Dominant criterion check**
- [ ] Does one criterion drive entire decision? (accounts for >50% of score difference)
- [ ] Is that appropriate or is weight too high?

**3. Surprising results**
- [ ] Does the winner match gut instinct?
- [ ] If not, what does the matrix reveal? (Trade-off you hadn't considered)
- [ ] Or are weights/scores wrong?

**4. Sensitivity questions**
- [ ] If we swapped top two criterion weights, would winner change?
- [ ] If we adjusted one score by ±1 point, would winner change?
- [ ] Which scores are most uncertain? (Could they change with more data)

### Recommendation Template

```markdown
## Recommendation

**Recommended Option:** [Option name] (Score: [X.X])

**Rationale:**
- [Option] scores highest overall ([X.X] vs [Y.Y] for runner-up)
- Key strengths: [What it excels at based on criterion scores]
- Acceptable trade-offs: [Where it scores lower but weight is low enough]

**Key Trade-offs:**
- **Winner:** Strong on [Criterion A, B] ([X]% of total weight)
- **Runner-up:** Strong on [Criterion C] but weaker on [Criterion A]
- **Decision driver:** [Criterion A] matters most ([X]%), where [Winner] excels

**Confidence Level:**
- [ ] **High (>10% margin):** Decision is robust to reasonable assumption changes
- [ ] **Moderate (5-10% margin):** Sensitive to [specific assumption], recommend validating
- [ ] **Low (<5% margin):** Effectively a tie, consider [additional data needed] or [stakeholder input]

**Sensitivity:**
- [Describe any sensitivity - e.g., "If Risk weight increased from 20% to 35%, Option B would win"]

**Next Steps:**
1. [Immediate action - e.g., "Get final pricing from vendor"]
2. [Validation - e.g., "Confirm technical feasibility with engineering"]
3. [Communication - e.g., "Present to steering committee by [date]"]
```

---

## Quality Checklist

Before delivering, verify:

**Decision framing:**
- [ ] Decision is specific and well-defined
- [ ] All viable alternatives included
- [ ] Must-haves clearly separated from nice-to-haves
- [ ] Stakeholders identified

**Criteria:**
- [ ] 3-10 criteria (enough to capture trade-offs, not overwhelming)
- [ ] Each criterion is measurable/scorable
- [ ] Criteria differentiate between options (not all scored the same)
- [ ] No redundancy between criteria
- [ ] Weights sum to 100%
- [ ] Weight distribution reflects true priorities

**Scoring:**
- [ ] Scores use consistent 1-10 scale
- [ ] Objective criteria based on data (not guesses)
- [ ] Subjective criteria have clear definitions/anchors
- [ ] Assumptions and data sources documented
- [ ] Scores are defensible (could explain to stakeholder)

**Analysis:**
- [ ] Weighted scores calculated correctly
- [ ] Options ranked by total score
- [ ] Sensitivity analyzed (close calls identified)
- [ ] Recommendation includes rationale and trade-offs
- [ ] Next steps identified

**Communication:**
- [ ] Matrix table is clear and readable
- [ ] Weights shown in column headers
- [ ] Weighted scores shown (not just raw scores)
- [ ] Recommendation stands out visually
- [ ] Assumptions and limitations noted

---

## Common Pitfalls

| Pitfall | Fix |
|---------|-----|
| **Too many criteria (>10)** | Consolidate related criteria into categories |
| **Redundant criteria** | Combine criteria that always score the same |
| **Arbitrary weights** | Use pairwise comparison or stakeholder discussion |
| **Scores are guesses** | Gather data for objective criteria, define anchors for subjective |
| **Confirmation bias** | Weight criteria BEFORE scoring options |
| **Ignoring sensitivity** | Always check if small changes flip the result |
| **False precision** | Match precision to confidence level |
| **Missing "do nothing"** | Include status quo as an option to evaluate |