Initial commit

Zhongwei Li
2025-11-30 08:38:26 +08:00
commit 41d9f6b189
304 changed files with 98322 additions and 0 deletions

---
name: chain-roleplay-debate-synthesis
description: Use when facing decisions with multiple legitimate perspectives and inherent tensions. Invoke when stakeholders have competing priorities (growth vs. sustainability, speed vs. quality, innovation vs. risk), when ideas need pressure-testing from different angles before committing, when exploring tradeoffs between incompatible values, when synthesizing conflicting expert opinions into a coherent strategy, or when surfacing assumptions that single-viewpoint analysis would miss.
---
# Chain Roleplay → Debate → Synthesis
## Workflow
Copy this checklist and track your progress:
```
Roleplay → Debate → Synthesis Progress:
- [ ] Step 1: Frame the decision and identify roles
- [ ] Step 2: Roleplay each perspective authentically
- [ ] Step 3: Structured debate between viewpoints
- [ ] Step 4: Synthesize into coherent recommendation
- [ ] Step 5: Validate synthesis quality
```
**Step 1: Frame the decision and identify roles**
State the decision clearly as a question, identify 2-5 stakeholder perspectives or roles that have legitimate but competing interests, and clarify what a successful synthesis looks like. See [Decision Framing](#decision-framing) for guidance on choosing productive roles.
**Step 2: Roleplay each perspective authentically**
For each role, articulate their position, priorities, concerns, and evidence. Genuinely advocate for each viewpoint without strawmanning. See [Roleplay Guidelines](#roleplay-guidelines) for authentic advocacy techniques and use [resources/template.md](resources/template.md) for complete structure.
**Step 3: Structured debate between viewpoints**
Facilitate direct clash between perspectives on key points of disagreement. Surface tensions, challenge assumptions, test edge cases, and identify cruxes (what evidence would change each perspective's mind). See [Debate Structure](#debate-structure) for debate formats and facilitation techniques.
**Step 4: Synthesize into coherent recommendation**
Integrate insights from all perspectives into a unified decision that acknowledges tradeoffs, incorporates valid concerns from each viewpoint, and explains what's being prioritized and why. See [Synthesis Patterns](#synthesis-patterns) for integration approaches and [resources/template.md](resources/template.md) for synthesis framework. For complex multi-stakeholder decisions, see [resources/methodology.md](resources/methodology.md).
**Step 5: Validate synthesis quality**
Check synthesis against [resources/evaluators/rubric_chain_roleplay_debate_synthesis.json](resources/evaluators/rubric_chain_roleplay_debate_synthesis.json) to ensure all perspectives were represented authentically, debate surfaced real tensions, synthesis is coherent and actionable, and no perspective was dismissed without engagement. See [When NOT to Use This Skill](#when-not-to-use-this-skill) to confirm this approach was appropriate.
---
## Decision Framing
### Choosing Productive Roles
**Good role selection:**
- **Competing interests**: Roles have legitimate but different priorities (e.g., Speed Advocate vs. Quality Guardian)
- **Different expertise**: Roles bring distinct knowledge domains (e.g., Engineer, Designer, Customer)
- **Value tensions**: Roles represent incompatible values (e.g., Privacy Advocate vs. Personalization Advocate)
- **Stakeholder representation**: Roles map to real decision-makers or affected parties
**Typical role patterns:**
- **Functional roles**: Engineer, Designer, PM, Marketer, Finance, Legal, Customer
- **Archetype roles**: Optimist, Pessimist, Risk Manager, Visionary, Pragmatist
- **Stakeholder roles**: Customer, Employee, Investor, Community, Regulator
- **Value roles**: Ethics Officer, Growth Hacker, Brand Guardian, Innovation Lead
- **Temporal roles**: Short-term Thinker, Long-term Strategist
**How many roles:**
- **2 roles**: Clean binary debate (build vs. buy, growth vs. profitability)
- **3 roles**: Triadic tension (speed vs. quality vs. cost)
- **4-5 roles**: Multi-stakeholder complexity (product strategy with eng, design, marketing, finance, customer)
- **Avoid >5**: Becomes unwieldy, synthesis too complex
### Framing the Question
**Strong framing:**
- "Should we prioritize X over Y?" (clear tradeoff)
- "What's the right balance between A and B?" (explicit tension)
- "Should we pursue strategy X?" (specific, actionable)
**Weak framing:**
- "What should we do?" (too vague)
- "How can we have our cake and eat it too?" (assumes false resolution)
- "Who's right?" (assumes winner rather than synthesis)
---
## Roleplay Guidelines
### Authentic Advocacy
**Each role should:**
1. **State position clearly**: What do they believe should be done?
2. **Articulate priorities**: What values or goals drive this position?
3. **Surface concerns**: What risks or downsides do they see in other approaches?
4. **Provide evidence**: What data, experience, or reasoning supports this view?
5. **Show vulnerability**: What uncertainties or limitations does this role acknowledge?
**Avoiding strawmen:**
- ❌ "The engineer just wants to use shiny new tech" (caricature)
- ✅ "The engineer values maintainability and believes new framework reduces technical debt"
- ❌ "Sales only cares about closing deals" (dismissive)
- ✅ "Sales is accountable for revenue and sees this feature as critical for competitive positioning"
**Empathy without capitulation:**
You can deeply understand a perspective without agreeing with it. Each role should be the "hero of their own story."
### Perspective-Taking Checklist
For each role, answer:
- [ ] What success looks like from this perspective
- [ ] What failure looks like from this perspective
- [ ] What metrics or evidence this role finds most compelling
- [ ] What this role fears about alternative approaches
- [ ] What this role knows that others might not
- [ ] What constraints or pressures this role faces
---
## Debate Structure
### Facilitating Productive Clash
**Debate formats:**
**1. Point-Counterpoint**
- Role A makes case for their position
- Role B responds with objections and counterarguments
- Role A addresses objections
- Repeat with Role B's case
**2. Devil's Advocate**
- One role presents the "default" or "obvious" choice
- Other roles systematically challenge assumptions and surface risks
- Goal: Pressure-test before committing
**3. Constructive Confrontation**
- Identify 3-5 key decision dimensions (cost, speed, risk, quality, etc.)
- Each role articulates position on each dimension
- Surface where perspectives conflict most
**4. Crux-Finding**
- Ask each role: "What would need to be true for you to change your mind?"
- Identify testable assumptions or evidence that would shift debate
- Focus discussion on cruxes rather than rehashing positions
### Questions to Surface Tensions
- "What's the strongest argument against your position?"
- "What does [other role] see that you might be missing?"
- "Where is the irreducible tradeoff between your perspectives?"
- "If you had to steelman the opposing view, what would you say?"
- "What happens in edge cases for your approach?"
- "What are you optimizing for that others aren't?"
### Red Flags in Debate
- **Premature consensus**: Roles agree too quickly without surfacing real tensions
- **Talking past each other**: Roles argue different points rather than engaging
- **Appeal to authority**: "Because the CEO said so" rather than reasoning
- **False dichotomies**: "Either we do X or we fail" without exploring middle ground
- **Unsupported claims**: "Everyone knows Y" without evidence or reasoning
---
## Synthesis Patterns
### Integration Approaches
**1. Weighted Synthesis**
- "We'll prioritize X, while incorporating safeguards for Y's concerns"
- Example: "Ship fast (PM's priority), but with feature flags and monitoring (Engineer's concern)"
**2. Sequencing**
- "First we do X, then we address Y"
- Example: "Launch MVP to test market (Growth), then invest in quality (Engineering) once product-market fit is proven"
**3. Conditional Strategy**
- "If condition A, do X; if condition B, do Y"
- Example: "If adoption > 10K users in Q1, invest in scale; otherwise, pivot based on feedback"
**4. Hybrid Approach**
- "Combine elements of multiple perspectives"
- Example: "Build core in-house (control) but buy peripheral components (speed)"
**5. Reframing**
- "Debate reveals the real question is Z, not X vs Y"
- Example: "Debate about pricing reveals we need to segment customers first"
**6. Elevating Constraint**
- "Identify the binding constraint both perspectives agree on"
- Example: "Both speed and quality advocates agree engineering capacity is the bottleneck; synthesis is to hire first"
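Pattern 3 (Conditional Strategy) benefits from being written down as an explicit decision rule, so the trigger is unambiguous when the review date arrives. A minimal sketch, using the hypothetical adoption threshold from the example above:

```python
def q1_review(adopted_users: int, threshold: int = 10_000) -> str:
    """Conditional strategy as an explicit decision rule (pattern 3).

    The trigger mirrors the example above: adoption > 10K users in Q1.
    Threshold and branch actions are illustrative, not prescriptive.
    """
    if adopted_users > threshold:
        return "invest in scale"
    return "pivot based on feedback"
```

Encoding the rule up front prevents the common failure where the review meeting relitigates the original debate instead of simply checking the agreed condition.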
### Synthesis Quality Markers
**Strong synthesis:**
- ✅ Acknowledges validity of multiple perspectives
- ✅ Explains what's being prioritized and why
- ✅ Addresses major concerns from each viewpoint
- ✅ Clear on tradeoffs being accepted
- ✅ Actionable recommendation
- ✅ Monitoring plan for key assumptions
**Weak synthesis:**
- ❌ "Let's do everything" (no prioritization)
- ❌ "X wins, Y loses" (dismisses valid concerns)
- ❌ "We need more information" (avoids decision)
- ❌ "It depends" without specifying conditions
- ❌ Vague platitudes without concrete next steps
---
## Examples
### Example 1: Short-form Synthesis
**Decision**: Should we rewrite our monolith as microservices?
**Roles**:
- **Scalability Engineer**: We need microservices to scale independently and deploy faster
- **Pragmatic Engineer**: Rewrite is 12-18 months with high risk; monolith works fine
- **Finance**: What's the ROI? Rewrite costs $2M in eng time
**Synthesis**:
Don't rewrite everything, but extract the 2-3 services with clear scaling needs (authentication, payment processing) as independent microservices. Keep core business logic in monolith for now. This addresses scalability concerns for bottleneck components (Scalability Engineer), limits risk and timeline (Pragmatic Engineer), and reduces cost to $400K vs. $2M (Finance). Revisit full migration if extracted services succeed and prove the pattern.
### Example 2: Full Analysis
For a complete worked example with detailed roleplay, debate, and synthesis, see:
- [resources/examples/build-vs-buy-crm.md](resources/examples/build-vs-buy-crm.md) - Sales, Engineering, Finance debate CRM platform decision
---
## When NOT to Use This Skill
**Skip roleplay-debate-synthesis when:**
**Single clear expert**: If one person has definitive expertise and others defer, just ask the expert
**No genuine tension**: If stakeholders actually agree, debate is artificial
**Values cannot be negotiated**: If an ethical red line is at stake, don't roleplay the unethical side
**Time-critical decision**: If the decision must be made in minutes, skip the full debate
**Implementation details**: If the decision is "how" rather than "whether" or "what", use technical collaboration, not debate
**Use simpler approaches when:**
- ✅ Decision is straightforward with clear data → Use decision matrix or expected value
- ✅ Need creative options not evaluation → Use brainstorming not debate
- ✅ Need detailed analysis not perspective clash → Use analytical frameworks
- ✅ Implementation planning not decision-making → Use project planning not roleplay
---
## Advanced Techniques
For complex multi-stakeholder decisions, see [resources/methodology.md](resources/methodology.md) for:
- **Multi-round debates** (iterative refinement of positions)
- **Audience-perspective shifts** (how synthesis changes for different stakeholders)
- **Facilitation anti-patterns** (how debates go wrong)
- **Synthesis under uncertainty** (when evidence is incomplete)
- **Stakeholder mapping** (identifying who needs to be represented)
---
## Resources
- **[resources/template.md](resources/template.md)** - Structured template for roleplay → debate → synthesis analysis
- **[resources/methodology.md](resources/methodology.md)** - Advanced facilitation techniques and debate formats
- **[resources/examples/](resources/examples/)** - Complete worked examples across domains
- **[resources/evaluators/rubric_chain_roleplay_debate_synthesis.json](resources/evaluators/rubric_chain_roleplay_debate_synthesis.json)** - Quality assessment rubric (10 criteria)

{
"criteria": [
{
"name": "Perspective Authenticity",
"description": "Are roles represented genuinely without strawman arguments?",
"scale": {
"1": "Roles are caricatures or strawmen. Positions feel artificial or dismissive ('X just wants...'). No genuine advocacy.",
"2": "Some roles feel authentic but others are weak. Uneven representation. Some strawmanning present.",
"3": "Most roles feel genuine. Positions are reasonable but may lack depth. Minimal strawmanning.",
"4": "All roles authentically represented. Each is 'hero of their own story.' Steelman approach evident. Strong advocacy for each perspective.",
"5": "Exceptional authenticity. Every role's position is compellingly argued. Reader could genuinely support any perspective. Intellectual empathy demonstrated throughout."
}
},
{
"name": "Depth of Roleplay",
"description": "Are priorities, concerns, evidence, and vulnerabilities fully articulated for each role?",
"scale": {
"1": "Roles are one-dimensional. Only positions stated, no priorities, concerns, or evidence.",
"2": "Positions and some priorities stated. Concerns and evidence missing or thin.",
"3": "Positions, priorities, and concerns articulated. Evidence present but may be thin. Vulnerabilities rarely acknowledged.",
"4": "Comprehensive roleplay. Clear position, well-justified priorities, specific concerns, supporting evidence, and acknowledged vulnerabilities for each role.",
"5": "Exceptionally deep roleplay. Rich detail on priorities, nuanced concerns, strong evidence, intellectual honesty about vulnerabilities and limits. Success metrics defined for each role."
}
},
{
"name": "Debate Quality",
"description": "Do perspectives genuinely clash on key points of disagreement?",
"scale": {
"1": "No actual debate. Roles present positions but don't engage. Talking past each other or premature consensus.",
"2": "Limited engagement. Some responses to other roles but mostly separate monologues. Key tensions not surfaced.",
"3": "Moderate debate. Roles respond to each other on main points. Some clash evident but could go deeper.",
"4": "Strong debate. Perspectives directly engage on 3-5 key dimensions. Real clash of ideas. Tensions clearly surfaced.",
"5": "Exceptional debate. Deep engagement with point-counterpoint structure. All major tensions explored thoroughly. Debate format (devil's advocate, crux-finding, etc.) skillfully applied."
}
},
{
"name": "Tension Surfacing",
"description": "Are irreducible tradeoffs and conflicts explicitly identified?",
"scale": {
"1": "No tensions identified. Falsely suggests all perspectives align. Missing the point of debate.",
"2": "Some tensions mentioned but not explored. Glossed over or minimized.",
"3": "Main tensions identified (2-3 key tradeoffs). Reasonably clear where perspectives conflict.",
"4": "All major tensions surfaced and explored. Clear on irreducible tradeoffs (speed vs quality, cost vs flexibility, etc.). Dimensions of disagreement explicit.",
"5": "Comprehensive tension mapping. All conflicts identified, categorized, and explored deeply. False dichotomies challenged. Genuine irreducible tradeoffs distinguished from resolvable disagreements."
}
},
{
"name": "Crux Identification",
"description": "Are conditions that would change each role's mind identified?",
"scale": {
"1": "No cruxes identified. Positions appear fixed and immovable.",
"2": "Vague acknowledgment that positions could change but no specifics.",
"3": "Some cruxes identified for some roles. Moderate specificity on what would change minds.",
"4": "Clear cruxes for all roles. Specific conditions or evidence that would shift positions. Enables productive focus on key uncertainties.",
"5": "Exceptional crux identification. Specific, testable conditions for each role. Distinguishes between cruxes (truly pivotal) and nice-to-haves. Debate explicitly focuses on resolving cruxes."
}
},
{
"name": "Synthesis Coherence",
"description": "Is the unified recommendation logical, well-integrated, and addresses the decision?",
"scale": {
"1": "No synthesis. Just restates positions or says 'we need more information.' Avoids deciding.",
"2": "Weak synthesis. Either 'do everything' (no prioritization) or 'X wins, Y loses' (dismisses perspectives). Not truly integrated.",
"3": "Moderate synthesis. Clear recommendation but integration is shallow. May not fully address all concerns or explain tradeoffs.",
"4": "Strong synthesis. Coherent recommendation that integrates insights from all perspectives. Integration pattern clear (weighted, sequencing, conditional, hybrid, reframing, constraint elevation). Addresses decision directly.",
"5": "Exceptional synthesis. Deeply integrated recommendation better than any single perspective. Pattern expertly applied. Innovative solution that satisfies multiple objectives. Elegant and actionable."
}
},
{
"name": "Concern Integration",
"description": "Are each role's core concerns explicitly addressed in the synthesis?",
"scale": {
"1": "No concern integration. Synthesis ignores or dismisses most perspectives' concerns.",
"2": "Limited integration. Addresses concerns from 1-2 roles, ignores others.",
"3": "Moderate integration. Most roles' main concerns acknowledged but may not be fully addressed. Some perspectives feel under-represented.",
"4": "Strong integration. All roles' core concerns explicitly addressed. Clear explanation of how synthesis handles each perspective's priorities.",
"5": "Exceptional integration. Every role's concerns not just addressed but shown to be *strengthened* by the integrated approach. Synthesis makes each perspective better off than going solo."
}
},
{
"name": "Tradeoff Transparency",
"description": "Are accepted tradeoffs and rejected alternatives clearly explained?",
"scale": {
"1": "No tradeoff transparency. Synthesis presented as 'best of all worlds' without acknowledging costs.",
"2": "Minimal transparency. Vague acknowledgment that tradeoffs exist but not specified.",
"3": "Moderate transparency. Main tradeoffs mentioned. Some explanation of what's being accepted/rejected.",
"4": "Strong transparency. Clear on what's prioritized and what's sacrificed. Explicit rationale for tradeoffs. Alternatives rejected with reasons.",
"5": "Exceptional transparency. Comprehensive accounting of all tradeoffs. Clear on second-order effects. Honest about what each perspective gives up and why it's worth it. 'What we're NOT doing' as clear as 'What we ARE doing.'"
}
},
{
"name": "Actionability",
"description": "Are next steps specific, feasible, with owners and timelines?",
"scale": {
"1": "No action plan. Vague or missing next steps.",
"2": "Vague next steps. 'Consider X', 'Explore Y.' No owners or timelines.",
"3": "Moderate actionability. Next steps identified but lack detail on who/when/how. Success metrics missing or vague.",
"4": "Strong actionability. Clear next steps with owners and dates. Success metrics defined. Implementation approach specified (phased, conditional, etc.).",
"5": "Exceptional actionability. Detailed implementation plan. Owners assigned, timeline clear, milestones defined, success metrics from each role's perspective, monitoring plan, escalation conditions, decision review cadence."
}
},
{
"name": "Stakeholder Readiness",
"description": "Is synthesis communicated appropriately for different audiences?",
"scale": {
"1": "No stakeholder tailoring. Single narrative that may not resonate with any audience.",
"2": "Minimal tailoring. One-size-fits-all communication. May be too technical for execs or too vague for implementers.",
"3": "Moderate tailoring. Some audience awareness. Key messages identified but not fully adapted.",
"4": "Strong tailoring. Synthesis clearly communicates to different stakeholders (exec summary, technical detail, operational guidance). Appropriate emphasis for each audience.",
"5": "Exceptional tailoring. Multiple versions or sections for different stakeholders. Executive summary for leaders, technical appendix for specialists, operational guide for implementation teams. Anticipates questions from each audience."
}
}
],
"minimum_standard": 3.5,
"complexity_guidance": {
"simple_decisions": {
"threshold": 3.0,
"description": "Binary choices with 2-3 clear stakeholders (e.g., build vs buy, speed vs quality). Acceptable to have simpler analysis (criteria 3-4).",
"focus_criteria": ["Perspective Authenticity", "Tension Surfacing", "Synthesis Coherence"]
},
"standard_decisions": {
"threshold": 3.5,
"description": "Multi-faceted decisions with 3-4 stakeholders and competing priorities (e.g., product roadmap, hiring strategy, market entry). Standard threshold applies (criteria average ≥3.5).",
"focus_criteria": ["All criteria should meet threshold"]
},
"complex_decisions": {
"threshold": 4.0,
"description": "Strategic decisions with 4-5+ stakeholders, power dynamics, and high stakes (e.g., company strategy, major pivots, organizational change). Higher bar required (criteria average ≥4.0).",
"focus_criteria": ["Depth of Roleplay", "Debate Quality", "Concern Integration", "Tradeoff Transparency"],
"additional_requirements": ["Stakeholder mapping", "Multi-round debate structure", "Facilitation anti-pattern awareness"]
}
},
"common_failure_modes": [
{
"failure": "Strawman arguments",
"symptoms": "Roles are caricatured or weakly represented. 'X just wants shiny tech', 'Y only cares about money.'",
"fix": "Steelman approach: Present the *strongest* version of each perspective. Each role is 'hero of their own story.' If a position feels weak, strengthen it."
},
{
"failure": "Premature consensus",
"symptoms": "Roles agree too quickly without genuine debate. 'We all want the same thing.'",
"fix": "Play devil's advocate. Ask: 'What could go wrong?' 'Where do you disagree?' Test agreement with edge cases. Give permission to disagree."
},
{
"failure": "Talking past each other",
"symptoms": "Roles present positions but don't engage. Arguments on different dimensions (one says cost, other says speed).",
"fix": "Make dimensions of disagreement explicit. Force direct engagement: '[Role A], respond to [Role B]'s point about X.'"
},
{
"failure": "False dichotomies",
"symptoms": "'Either X or we fail.' 'We must choose between quality and speed.'",
"fix": "Challenge the dichotomy. Explore middle ground, sequencing, conditional strategies. Ask: 'Are those really the only two options?'"
},
{
"failure": "Synthesis dismisses perspectives",
"symptoms": "'X wins, Y loses.' Some roles' concerns ignored or minimized in synthesis.",
"fix": "Explicitly address every role's core concerns. Show how synthesis incorporates (not dismisses) each perspective. Check: 'Does this address your concern about [issue]?'"
},
{
"failure": "Vague tradeoffs",
"symptoms": "Synthesis presented as win-win without acknowledging costs. No transparency on what's sacrificed.",
"fix": "Make tradeoffs explicit. 'We're prioritizing X, accepting Y, rejecting Z because [rationale].' Be honest about costs."
},
{
"failure": "Analysis paralysis",
"symptoms": "'We need more information.' Endless debate without convergence. Perfect information fallacy.",
"fix": "Set decision deadline. Clarify decision criteria. Good-enough threshold. Explicitly trade off value of info vs cost of delay."
},
{
"failure": "Dominant voice",
"symptoms": "One role speaks 70%+ of time. Others defer. Synthesis reflects single perspective.",
"fix": "Explicit turn-taking. Direct questions to quieter roles. Affirm their contributions. Check power dynamics."
},
{
"failure": "No actionability",
"symptoms": "Synthesis is conceptual but not actionable. No clear next steps, owners, or timeline.",
"fix": "Specify: Who does what by when? How do we measure success? What's the implementation approach (phased, conditional)? When do we review?"
},
{
"failure": "Single narrative for all audiences",
"symptoms": "Same explanation for execs, engineers, and operators. Too technical or too vague for some.",
"fix": "Tailor communication. Exec summary (strategic), technical brief (implementation), operational guide (execution). Emphasize what each audience cares about."
}
],
"usage_notes": "Use this rubric to self-assess before delivering synthesis. For simple binary decisions, 3.0+ is acceptable. For standard multi-stakeholder decisions, aim for 3.5+. For complex strategic decisions with high stakes, aim for 4.0+. Pay special attention to Perspective Authenticity, Synthesis Coherence, and Concern Integration as these are most critical for effective roleplay-debate-synthesis."
}
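Self-assessment against this rubric can be scripted. The sketch below is illustrative only: the `rubric` dict mirrors the shape of the JSON file above (in practice you would `json.load` the file itself), and the criterion scores are made-up sample values.

```python
import json

# Mirrors the structure of the rubric JSON above; thresholds are the real
# ones from the file, but in practice load the file with json.load().
rubric = {
    "minimum_standard": 3.5,
    "complexity_guidance": {
        "simple_decisions": {"threshold": 3.0},
        "standard_decisions": {"threshold": 3.5},
        "complex_decisions": {"threshold": 4.0},
    },
}

# Illustrative self-assessment scores (1-5 per criterion).
scores = {
    "Perspective Authenticity": 4,
    "Debate Quality": 3,
    "Synthesis Coherence": 4,
}

def assess(scores: dict, rubric: dict, complexity: str = "standard_decisions"):
    """Average the criterion scores and compare to the complexity threshold."""
    avg = sum(scores.values()) / len(scores)
    threshold = rubric["complexity_guidance"][complexity]["threshold"]
    return avg, avg >= threshold

avg, passed = assess(scores, rubric)
```

With the sample scores the average is about 3.67, which clears the standard-decision threshold of 3.5 but would fail the 4.0 bar for complex strategic decisions.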

# Decision: Build Custom CRM vs. Buy SaaS CRM
**Date**: 2024-11-02
**Decision-maker**: CTO + VP Sales
**Stakes**: Medium (affects 30-person sales team, $200K-$800K over 3 years)
---
## 1. Decision Context
**What we're deciding:**
Should we build a custom CRM tailored to our sales process or buy an existing SaaS CRM platform (Salesforce, HubSpot, etc.)?
**Why this matters:**
- Current CRM is a patchwork of spreadsheets and email threads (manual, error-prone)
- Sales team losing deals due to poor pipeline visibility and follow-up tracking
- Need solution operational within 6 months to support Q4 sales push
- Decision impacts sales productivity for next 3+ years
**Success criteria for synthesis:**
- Addresses all three stakeholders' core concerns
- Clear on timeline and costs
- Actionable implementation plan
- All parties can commit to making it work
**Constraints:**
- Budget: $150K available this year, $50K/year ongoing
- Timeline: Must be operational within 6 months
- Requirements: Support 30 users, custom deal stages, integration with current tools
- Non-negotiable: Cannot disrupt Q4 sales push (our busiest quarter)
**Audience:** Executive team (CEO, CTO, VP Sales, CFO)
---
## 2. Roles & Perspectives
### Role 1: VP Sales ("Revenue Defender")
**Position**: Buy SaaS CRM - we need it operational fast to support sales team.
**Priorities**:
1. **Speed to value**: Sales team needs this now, not in 12 months
2. **Proven reliability**: Can't afford downtime or bugs during Q4 push
3. **User adoption**: Sales reps need familiar, polished interface
4. **Feature completeness**: Reporting, forecasting, mobile app out-of-box
**Concerns about building custom**:
- Development will take 12-18 months (historical track record of eng projects)
- Custom tools are often clunky and sales reps hate them
- Maintenance burden (who fixes bugs when it breaks?)
- Opportunity cost: Every month without CRM costs us deals
**Evidence**:
- Previous custom tool (deal calculator) took 14 months to build and was buggy for first 6 months
- Sales team productivity dropped 20% during transition to last custom tool
- Industry benchmark: SaaS CRM implementation takes 2-3 months vs. 12+ for custom build
**Vulnerabilities**:
- "I acknowledge SaaS solutions have vendor lock-in risk and ongoing subscription costs"
- "If engineering can credibly deliver in 6 months with high quality, I'd reconsider"
- "Main uncertainty: Will SaaS meet our unique sales process needs (2-stage approval workflow)?"
**Success metrics**:
- Time to operational: <3 months
- User adoption: >90% of sales reps using daily within 30 days
- Deal velocity: Reduce avg sales cycle from 60 days to 45 days
- Win rate: Maintain or improve current 25% win rate
---
### Role 2: Engineering Lead ("Builder")
**Position**: Build custom CRM - we can tailor exactly to our needs and own the platform.
**Priorities**:
1. **Perfect fit**: Our sales process is unique (2-stage approval, custom deal stages); SaaS won't fit
2. **Long-term ownership**: Build once, no ongoing subscription fees ($50K/year saved)
3. **Extensibility**: Can add features as we grow (integrations, automation, reporting)
4. **Technical debt reduction**: Opportunity to modernize our stack
**Concerns about buying SaaS**:
- Vendor lock-in: Stuck with their roadmap, pricing, and limitations
- Data ownership: Our customer data lives on their servers
- Customization limits: SaaS tools are 80% fit, not 100%
- Hidden costs: Subscription fees compound, APIs cost extra, training costs
**Evidence**:
- Previous SaaS tool (project management) had 15% of features unused but we still paid for them
- Customization via SaaS APIs is limited and often breaks on upgrades
- Engineering team has capacity: 2 senior engineers available for 4-6 months
- Build cost: $200K-$300K upfront vs. $50K/year SaaS ($250K over 5 years)
**Vulnerabilities**:
- "I acknowledge custom builds often take longer than estimated (our track record is +40%)"
- "If SaaS can handle our 2-stage approval workflow, that reduces the 'perfect fit' advantage"
- "Main uncertainty: Will we actually build all planned features or ship minimal version?"
**Success metrics**:
- Feature completeness: 100% of specified requirements met
- Cost: <$300K initial build + <$30K/year maintenance
- Extensibility: Add 2-3 new features per year as business evolves
- Technical quality: <5 bugs per month in production
---
### Role 3: Finance/Operations ("Cost Realist")
**Position**: Depends on ROI - need to see credible numbers for both options.
**Priorities**:
1. **Total cost of ownership (TCO)**: Upfront + ongoing costs over 3-5 years
2. **Risk-adjusted value**: Factor in probability of delays, overruns, adoption failure
3. **Payback period**: How quickly do we recoup investment via productivity gains?
4. **Operational predictability**: Prefer predictable costs to variable/uncertain
**Concerns about both options**:
- **Build**: High upfront cost, uncertain timeline, may exceed budget
- **Buy**: Ongoing subscription compounds, pricing can increase, switching costs if we change later
- **Both**: Adoption risk (if sales team doesn't use it, value = $0)
**Evidence**:
- Current manual CRM costs us 20 hours/week of sales rep time (= $40K/year in lost productivity)
- Average SaaS CRM for 30 users: $50K/year (HubSpot Professional)
- Engineering project cost overruns average +40% from initial estimate (historical data)
- Failed internal tool adoption has happened twice in past 3 years (calc tool, reporting dashboard)
**Vulnerabilities**:
- "I'm making assumptions about productivity gains that haven't been validated"
- "If custom build delivers high value in 6 months (unlikely but possible), ROI beats SaaS"
- "Main uncertainty: Adoption rate - will sales team actually use whichever solution we choose?"
**Success metrics**:
- TCO: Minimize 3-year total cost (build + operate)
- Payback: <18 months to recoup investment
- Adoption: >80% active usage (logging deals, updating pipeline)
- Predictability: <20% variance from projected costs
---
## 3. Debate
### Key Points of Disagreement
**Dimension 1: Timeline - Fast (Buy) vs. Perfect (Build)**
- **VP Sales**: Need it in 3 months to support Q4. Cannot wait 12+ months for custom build.
- **Engineering Lead**: 6-month build timeline is achievable with focused team. SaaS implementation still takes 2-3 months (not that much faster).
- **Finance**: Every month of delay costs $3K in lost productivity. Time is money.
- **Tension**: Speed to value vs. building it right
**Dimension 2: Fit - Good Enough (Buy) vs. Perfect (Build)**
- **VP Sales**: 80% fit is fine if it's reliable and fast. Sales reps can adapt process slightly.
- **Engineering Lead**: 80% fit means 20% pain forever. Our 2-stage approval is core to how we sell.
- **Finance**: "Perfect fit" is theoretical. Custom builds often ship with missing features or bugs.
- **Tension**: Process flexibility vs. tool flexibility
**Dimension 3: Cost - Predictable (Buy) vs. Upfront (Build)**
- **VP Sales**: $50K/year is predictable, budgetable, and includes support/updates.
- **Engineering Lead**: $50K/year = $250K over 5 years. Build is $300K once, then $30K/year = $420K over 5 years. But we own it.
- **Finance**: Engineering track record suggests $300K becomes $400K. Risk-adjusted, not clear build is cheaper.
- **Tension**: Upfront vs. ongoing, certain vs. uncertain
**Dimension 4: Risk - Operational (Buy) vs. Execution (Build)**
- **VP Sales**: Execution risk is high (engineering track record). Operational risk with SaaS is low (proven, 99.9% uptime).
- **Engineering Lead**: Vendor risk is real (lock-in, pricing changes, product direction). We control our own destiny with custom build.
- **Finance**: Both have risks. Mitigatable? SaaS = switching costs. Build = timeline/budget overruns.
- **Tension**: Vendor dependency vs. execution capability
### Debate Transcript (Point-Counterpoint)
**VP Sales Opening Case:**
Look, I get the appeal of building our perfect CRM, but we cannot afford to wait 12-18 months. Our sales team is losing deals *right now* because we don't have proper pipeline tracking. I had a rep miss a $50K deal last month because a follow-up fell through the cracks in our spreadsheet system.
The reality is that engineering has a track record of delays. The deal calculator took 14 months instead of 8. The reporting dashboard was 6 months late. If we start a custom build today, best case is 12 months to launch, realistically 18 months. By then, we'll have lost $100K+ in productivity and missed deals.
A SaaS solution like HubSpot can be operational in 2-3 months. Yes, it's not a perfect fit for our 2-stage approval, but we can adapt our process slightly or use workflow automation. The key is: it works, it's reliable, and my sales team will actually use it because it's polished and familiar.
The $50K/year cost is worth it for the speed, reliability, and ongoing support. We get updates, new features, and 24/7 support included. If we build custom and it breaks during Q4, who fixes it? Engineering is focused on product features, not internal tools.
**Engineering Lead Response:**
I appreciate the urgency, but "speed at any cost" is how we end up with tools that don't fit our needs and frustrate users. Let me address the timeline concern: I'm proposing a 6-month build, not 12-18 months. We scope it tightly to core CRM features (contacts, deals, pipeline, basic reporting). No gold-plating.
Yes, past projects have run over, but those were exploratory with unclear requirements. This is a well-defined problem. We've built CRMs before (I personally built one at my previous company in 5 months). With 2 senior engineers dedicated full-time, 6 months is realistic.
On the "good enough" point: Our 2-stage approval process is non-negotiable. It's a compliance requirement and competitive differentiator. SaaS CRMs make this painful or impossible. I've demoed HubSpot - their approval workflows are clunky and don't match our needs. Sales reps will work around the system, undermining data quality.
On cost: Yes, $50K/year seems manageable, but that's $250K over 5 years, $500K over 10 years, *forever*. And that's today's price. SaaS pricing increases 5-10% per year. We're also locked in - switching CRMs later is a $100K+ migration project. Build once, own forever, total control.
And let's not forget: We can build exactly what we need. Want a custom integration with our accounting system? Done. Want automated deal scoring based on our proprietary criteria? Done. SaaS tools require expensive consultants or hacky API integrations that break on upgrades.
**VP Sales Rebuttal:**
Six months is your *best case* estimate. But you're assuming everything goes perfectly: no scope creep, no technical challenges, no distractions from product team priorities. Even if you hit 8 months, that's 5 months longer than SaaS, costing us $15K+ in lost productivity.
On the 2-stage approval: I've seen it work in HubSpot via custom workflows and approval automations. It's not perfect, but it's workable. And honestly, maybe our process could use some optimization. SaaS CRMs embody industry best practices - might be an opportunity to streamline.
On the "locked in" concern: Sure, switching CRMs is painful. But building custom and then replacing it later is *also* painful (arguably more so - you're throwing away $300K of investment). At least with SaaS, you can evaluate competitors and switch if needed. With custom, you're locked into your own technical debt.
Here's my core concern: *Adoption risk*. Sales reps have used Salesforce or HubSpot before. They know the interface, the mobile app, the workflows. Your custom CRM, no matter how well-built, will feel unfamiliar. Adoption could be slow or fail entirely. Then we've spent $300K on a tool no one uses.
**Engineering Lead Rebuttal:**
On timeline: I hear your skepticism given our track record. What if we mitigate that risk? We could do a phased approach: MVP in 3 months (just contact management and deal tracking), then add features incrementally. If the MVP works, we continue. If not, we bail after $75K and switch to SaaS. That's a middle ground.
On adoption: Fair point. But I'd argue familiarity cuts both ways. Sales reps are frustrated with *current* tools that don't fit our process. A custom tool that matches their workflow could actually drive *better* adoption than a foreign SaaS tool they need to work around. We'd design with sales team input from day one.
On process optimization: I'm not opposed to improving the process, but the 2-stage approval isn't arbitrary - it's a compliance requirement from our legal team. We can't just "streamline" it away. A SaaS tool that doesn't support it is a non-starter, full stop.
On the cost trajectory: You're right that switching from custom later is painful. But switching from SaaS is *also* painful, *plus* we've paid $250K in subscription fees *plus* we've built dependencies on their platform. At least with custom, we own the asset.
**Finance Cross-Examination:**
**Finance to Engineering**: "You say 6 months and $300K. Our historical data shows engineering projects run +40% over budget. Shouldn't we plan for 8-9 months and $400K as the realistic case?"
**Engineering**: "Fair. If we scope tightly and use my phased approach (MVP first), I'm confident in 6 months. But yes, let's budget $350K to be safe."
**Finance to VP Sales**: "You say SaaS is $50K/year. Have you confirmed that price includes 30 users, all features we need, and API access for integrations?"
**VP Sales**: "I've gotten quotes. HubSpot Professional is $45K/year for 30 users. Salesforce is $55K/year. Both include core features. API access is included, though there are rate limits."
**Finance to both**: "Here's the cost comparison I'm seeing:
- **SaaS (HubSpot)**: $20K implementation + $50K/year × 3 years = $170K (3-year TCO)
- **Build**: $350K (risk-adjusted) + $30K/year × 3 years = $440K (3-year TCO)
SaaS is $270K cheaper over 3 years. Engineering, what would need to be true for build to be cheaper?"
**Engineering**: "Two things: (1) A longer horizon. At today's prices, the build's TCO doesn't break even against SaaS for roughly 16 years. (2) Rising SaaS prices. If the subscription climbs to $70K/year (SaaS vendors raise prices regularly), break-even drops to about 8 years. I'm thinking long-term; you're thinking 3-year horizon."
**VP Sales to Engineering**: "What happens if you start the build and realize at month 4 that you're behind schedule or over budget? Do we have an exit ramp, or are we committed?"
**Engineering**: "That's why I proposed phased approach with MVP milestone. At 3 months, we review: Is MVP working? Is it on budget? If yes, continue. If no, we pivot to SaaS. That gives us an exit ramp."
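Finance's TCO comparison and the break-even question can be reproduced in a few lines; the dollar figures below are the illustrative ones from this example, not real quotes:

```python
def tco(upfront, annual, years):
    """Total cost of ownership: one-time cost plus recurring cost over a horizon."""
    return upfront + annual * years

# Figures from the debate: SaaS = $20K implementation + $50K/yr subscription;
# Build = $350K risk-adjusted build cost + $30K/yr maintenance.
saas_3yr = tco(20_000, 50_000, 3)
build_3yr = tco(350_000, 30_000, 3)
print(f"3-year TCO: SaaS ${saas_3yr:,} vs Build ${build_3yr:,}")

def break_even_years(build_upfront, build_annual, saas_upfront, saas_annual):
    """Years until the build's lower recurring cost pays back its higher upfront cost."""
    return (build_upfront - saas_upfront) / (saas_annual - build_annual)

print(break_even_years(350_000, 30_000, 20_000, 50_000))  # at $50K/yr SaaS
print(break_even_years(350_000, 30_000, 20_000, 70_000))  # if SaaS rises to $70K/yr
```

At the quoted prices, the build's break-even runs well past a 3-year window, which is why the planning horizon, not the raw prices, is the real crux of this exchange.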
### Cruxes (What Would Change Minds)
**VP Sales would shift to Build if:**
- Engineering commits to 6-month delivery with phased milestones and exit ramps
- MVP is demonstrated at 3 months with core functionality working
- Sales reps are involved in design from day one (adoption risk mitigation)
- There's a backup plan (SaaS) if custom build fails
**Engineering Lead would shift to Buy if:**
- SaaS demo shows it can handle 2-stage approval workflow without painful workarounds
- Cost analysis includes long-term subscription growth (10-year TCO, not 3-year)
- We retain option to migrate to custom later if SaaS limitations become painful
- Data export and portability are guaranteed (no vendor lock-in on data)
**Finance would support Build if:**
- Timeline is credibly 6 months with phased milestones (not 12-18 months)
- Budget is capped at $350K with clear scope controls
- Adoption plan is strong (sales team involvement, training, change management)
- Long-term TCO analysis shows break-even within 5-7 years
**Finance would support Buy if:**
- 3-year TCO is significantly lower ($170K vs $440K)
- Predictable costs with no surprises
- Adoption risk is low (familiar interface for sales reps)
- Operational in 2-3 months to start capturing productivity gains
### Areas of Agreement
Despite disagreements, all three roles agree on:
- **Current state is unacceptable**: Spreadsheets are costing us deals and productivity
- **Timeline pressure**: Need solution operational before end of year
- **Adoption is critical**: Doesn't matter if we build or buy if sales team doesn't use it
- **2-stage approval is non-negotiable**: Compliance requirement can't be compromised
- **Budget constraint**: ~$150K available this year, need to stay within bounds
---
## 4. Synthesis
### Integration Approach
**Pattern used**: Phased + Conditional (with exit ramps)
**Synthesis statement:**
We'll pursue a **hybrid phased approach** that mitigates the risks of both options:
**Phase 1 (Months 1-3): Build MVP**
- Engineering builds minimal CRM (contacts, deals, pipeline, 2-stage approval)
- Budget: $100K (1.5 engineers × 3 months)
- Success criteria: MVP demonstrates core functionality, sales team validates it works for their workflow
**Phase 2 (Month 3): Decision Gate**
- If MVP succeeds: Continue to full build (Phase 3)
- If MVP fails or is behind schedule: Pivot to SaaS (Phase 4)
- Decision criteria: (1) Core features working, (2) On budget (<$100K spent), (3) Sales team positive on usability
**Phase 3 (Months 4-6): Complete Build** (if MVP succeeds)
- Add reporting, mobile app, integrations
- Budget: Additional $150K
- Total: $250K build cost
**Phase 4: SaaS Fallback** (if MVP fails)
- Implement HubSpot Professional
- Cost: $20K implementation + $50K/year
- Timeline: 2-3 months to operational
This approach addresses all three stakeholders' core concerns.
### What We're Prioritizing
**Primary focus**: Mitigate execution risk while preserving custom fit option
- From **VP Sales**: Speed to value (MVP in 3 months, exit ramp if build fails)
- From **Engineering**: Opportunity to build custom fit (MVP tests feasibility)
- From **Finance**: Capital-efficient (only invest $100K before decision gate, pivot if needed)
**Secondary considerations**: Address concerns from each perspective
- **VP Sales**'s concern about adoption: Sales team involved in MVP design, validates at 3 months
- **Engineering**'s concern about SaaS fit: MVP proves we can build 2-stage approval properly
- **Finance**'s concern about cost overruns: Phased budget ($100K → gate → $150K), cap at $250K total
### Tradeoffs Accepted
**We're accepting:**
- **3-month delay vs. starting SaaS immediately**
- **Rationale**: Worth the delay to test if custom build is feasible. If MVP fails, we pivot to SaaS and only lost 3 months.
- **Risk of $100K sunk cost if MVP fails**
- **Rationale**: $100K is our "learning cost" to validate whether custom build can work. Still cheaper than committing to $250K+ full build that might fail.
**We're NOT accepting:**
- **Blind commitment to custom build** (Engineering's original proposal)
- **Reason**: Too risky given track record. MVP + gate reduces risk.
- **Immediate SaaS adoption** (Sales' original proposal)
- **Reason**: Doesn't test whether custom fit is achievable. Worth 3 months to find out.
- **Waiting 12+ months for full custom solution** (original concern)
- **Reason**: Phased approach with exit ramps means we pivot to SaaS if build isn't working by month 3.
### Addressing Each Role's Core Concerns
**VP Sales's main concern (speed and adoption) is addressed by:**
- MVP in 3 months (not 12-18 months)
- Exit ramp at month 3 if build is behind schedule
- Sales team involved in MVP design (adoption risk mitigation)
- Fallback to SaaS if build doesn't work out
**Engineering Lead's main concern (custom fit and control) is addressed by:**
- Opportunity to prove custom build is feasible via MVP
- If MVP succeeds, we complete the build (Phase 3)
- 2-stage approval built exactly as needed
- Only pivot to SaaS if custom build *demonstrably* fails (data-driven decision)
**Finance's main concern (cost and risk) is addressed by:**
- Capital-efficient: Only $100K at risk before decision gate
- Clear decision criteria at month 3 (on budget? on schedule? working?)
- Capped budget: $250K total if we complete build (vs. $350K uncapped original proposal)
- Predictable costs: SaaS fallback available if build overruns
---
## 5. Recommendation
**Recommended Action:**
Pursue phased build with MVP milestone at 3 months and decision gate (continue build vs. pivot to SaaS).
**Rationale:**
This synthesis is superior to either "pure build" or "pure buy" because it:
1. **Tests feasibility before full commitment**: The MVP validates whether we can build the custom CRM on time and on budget. If yes, we continue. If no, we pivot to SaaS with only $100K sunk cost.
2. **Mitigates execution risk (Sales' top concern)**: Exit ramp at month 3 means we're not locked into a long, uncertain build. If engineering can't deliver, we bail quickly and go SaaS.
3. **Preserves custom fit option (Engineering's top concern)**: If MVP succeeds, we get the perfectly tailored CRM with 2-stage approval. If SaaS doesn't fit our needs (as Engineering argues), we've built the right solution.
4. **Optimizes cost under uncertainty (Finance's top concern)**: We only invest $100K to learn whether custom build is viable. If it works, total cost is $250K (less than original $350K). If it doesn't, we pivot to SaaS having "paid" $100K for the knowledge that custom wasn't feasible.
The key insight from the debate: **The real uncertainty is execution capability, not which option is better in theory**. By building an MVP first, we resolve that uncertainty before committing the full budget.
**Key factors driving this decision:**
1. **Execution risk** (from Sales): Phased approach with exit ramps mitigates this
2. **Custom fit value** (from Engineering): MVP tests whether we can actually build it
3. **Cost efficiency** (from Finance): Capital-efficient with decision gate before full investment
---
## 6. Implementation
**Immediate next steps:**
1. **Kickoff MVP build** (Week 1) - Engineering Lead
- Scope MVP: Contacts, Deals, Pipeline, 2-stage approval
- Assign 1.5 senior engineers full-time
- Set up weekly check-ins with Sales for feedback
2. **Define MVP success criteria** (Week 1) - Finance + CTO
- Core features functional (create contact, create deal, advance through 2-stage approval)
- Budget: <$100K spent by month 3
- Usability: Sales reps can complete key workflows without confusion
3. **Involve Sales team in design** (Week 2) - VP Sales
- Weekly design reviews with 3-5 sales reps
- Prototype testing at weeks 4, 8, 12
- Feedback incorporated into MVP
**Phased approach:**
- **Phase 1** (Months 1-3): Build and validate MVP
- Milestone 1 (Month 1): Contacts and deal creation working
- Milestone 2 (Month 2): 2-stage approval workflow functional
- Milestone 3 (Month 3): Sales team testing, feedback incorporated
- **Phase 2** (Month 3): Decision Gate
- Review meeting: CTO, VP Sales, CFO, Engineering Lead
- Decision criteria:
- ✅ MVP core features working?
- ✅ Budget on track (<$100K)?
- ✅ Sales team feedback positive (usability acceptable)?
- If yes: Proceed to Phase 3 (complete build)
- If no: Pivot to Phase 4 (SaaS implementation)
- **Phase 3** (Months 4-6): Complete Build (if Phase 2 = yes)
- Add reporting dashboard
- Build mobile app (iOS/Android)
- Integrate with accounting system and email
- User training and rollout to full sales team
- **Phase 4**: SaaS Fallback (if Phase 2 = no)
- Month 4: Select vendor (HubSpot vs. Salesforce), contract negotiation
- Months 4-5: Implementation and customization
- Month 6: Training and rollout
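The month-3 decision gate in Phase 2 is mechanical enough to encode as a checklist; a minimal sketch (the criteria mirror the gate bullets above, and the names are illustrative, not an actual system):

```python
from dataclasses import dataclass

@dataclass
class GateReview:
    core_features_working: bool    # contacts, deals, pipeline, 2-stage approval functional
    spend_to_date: float           # dollars spent by month 3
    sales_feedback_positive: bool  # usability acceptable to sales reps

def month3_decision(review: GateReview, budget_cap: float = 100_000) -> str:
    """Phase 2 gate: continue the build only if every criterion passes."""
    passed = (review.core_features_working
              and review.spend_to_date < budget_cap
              and review.sales_feedback_positive)
    return "continue build (Phase 3)" if passed else "pivot to SaaS (Phase 4)"

print(month3_decision(GateReview(True, 92_000, True)))   # all criteria pass
print(month3_decision(GateReview(True, 115_000, True)))  # over budget: pivot
```

Writing the gate down this explicitly is the point: the continue/pivot call is made on pre-agreed criteria, not on sunk-cost momentum at month 3.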
**Conditional triggers:**
- **If MVP fails Month 3 gate**: Pivot to SaaS immediately (do not continue build)
- **If build exceeds $250K total**: Stop, reassess whether to continue or pivot to SaaS
- **If adoption is <80% by Month 9**: Investigate root causes, consider switching to the other option (SaaS if we built, custom if we bought)
**Success metrics:**
- **Engineering's perspective**: MVP functional by month 3, full build by month 6, <5 bugs/month
- **Sales' perspective**: Operational by month 6, >90% adoption within 30 days, sales cycle reduces to 45 days
- **Finance's perspective**: Total cost <$250K (if build) or <$20K + $50K/year (if SaaS), payback <18 months
**Monitoring plan:**
- **Weekly**: Engineering progress vs. milestones, budget burn rate
- **Monthly**: Sales team feedback on usability, adoption metrics
- **Month 3**: Formal decision gate review (continue build or pivot to SaaS)
- **Month 6**: Post-launch review (did we hit success metrics?)
- **Month 12**: ROI review (productivity gains vs. investment)
**Escalation conditions:**
- If budget exceeds $100K by month 3 → automatic pivot to SaaS
- If MVP core features not working by month 3 → escalate to CEO for decision
- If adoption <50% by month 9 → escalate to executive team for intervention
---
## 7. Stakeholder Communication
**For Executive Team (CEO, CFO, Board):**
**Key message**: "We're pursuing a phased build approach that tests feasibility before full commitment, with SaaS fallback if custom build doesn't work."
**Focus on**: Risk mitigation, cost efficiency, timeline
- Risk: MVP + decision gate reduces execution risk from high to medium
- Cost: Only $100K at risk before decision point, capped at $250K total
- Timeline: MVP operational in 3 months, decision by month 3, full solution by month 6
**For Engineering Team:**
**Key message**: "We're building an MVP to prove we can deliver a custom CRM on time and on budget. If successful, we'll complete the build. If not, we'll pivot to SaaS."
**Focus on**: Technical scope, timeline, success criteria
- Scope: MVP = contacts, deals, pipeline, 2-stage approval (no gold-plating)
- Timeline: 3 months to MVP, 6 months to full build (if MVP succeeds)
- Success criteria: Functional, on budget, sales team validates usability
- Commitment: If we prove we can do this, company will invest in completing the build
**For Sales Team:**
**Key message**: "We're building a custom CRM tailored to your workflow, with your input. If it doesn't work out, we'll get you a proven SaaS CRM instead."
**Focus on**: Involvement, timeline, fallback plan
- Involvement: You'll be involved from day one - design reviews, prototype testing, feedback
- Timeline: Testing MVP in 3 months, full CRM operational by month 6
- Custom fit: Built for your 2-stage approval workflow (not workarounds)
- Safety net: If custom build doesn't work, we have SaaS option (HubSpot) ready to go
---
## 8. Appendix: Assumptions & Uncertainties
**Key assumptions:**
1. **Engineering can deliver MVP in 3 months with 1.5 engineers**
- **Confidence**: Medium (based on similar past projects, but track record includes delays)
- **Impact if wrong**: Delays decision gate, may force SaaS pivot
2. **MVP will provide sufficient signal on full build feasibility**
- **Confidence**: High (MVP includes core technical challenges: data model, 2-stage approval logic, UI)
- **Impact if wrong**: May commit to full build that later fails
3. **Sales team will provide constructive feedback during MVP development**
- **Confidence**: High (VP Sales committed to involving team)
- **Impact if wrong**: Build wrong features, adoption fails
4. **SaaS option (HubSpot) will still be available at $50K/year in 3 months**
- **Confidence**: High (enterprise contracts are stable, pricing doesn't fluctuate month-to-month)
- **Impact if wrong**: Fallback plan costs more than projected
5. **Current manual CRM costs $40K/year in lost productivity**
- **Confidence**: Medium (rough estimate based on 20 hours/week sales rep time)
- **Impact if wrong**: ROI calculation changes, but decision logic still holds
**Unresolved uncertainties:**
- **Will sales reps actually use the custom CRM?**: Mitigated by involving them in design, but adoption risk remains
- **Can engineering complete full build in months 4-6?**: MVP reduces uncertainty but doesn't eliminate it
- **Will SaaS really handle 2-stage approval well?**: Need to do deeper demo/trial if we pivot to SaaS
**What would change our mind:**
- **If MVP demonstrates build is infeasible** (behind schedule, over budget, or sales team feedback is negative) → Pivot to SaaS immediately
- **If SaaS vendors introduce features that handle our 2-stage approval** → Reconsider SaaS earlier
- **If budget gets cut below $100K** → Go straight to SaaS (can't afford MVP experiment)
---
## Self-Assessment (Rubric Scores)
**Perspective Authenticity**: 4/5
- All three roles authentically represented with strong advocacy
- Each role feels genuine ("hero of their own story")
- Could improve: More depth on Finance's methodology for cost analysis
**Depth of Roleplay**: 4/5
- Priorities, concerns, evidence, and vulnerabilities articulated for each role
- Success metrics defined
- Could improve: More specific evidence citations (e.g., which past projects, exact timeline data)
**Debate Quality**: 5/5
- Strong point-counterpoint engagement
- Roles respond directly to each other's arguments
- Cross-examination adds depth
- All perspectives clash on key dimensions
**Tension Surfacing**: 5/5
- Four key tensions explicitly identified and explored (timeline, fit, cost, risk)
- Irreducible tradeoffs clear
- False dichotomies avoided (phased approach finds middle ground)
**Crux Identification**: 4/5
- Clear cruxes for each role
- Conditions that would change minds specified
- Could improve: More specificity on evidence thresholds (e.g., exactly what MVP must demonstrate)
**Synthesis Coherence**: 5/5
- Phased + conditional pattern well-applied
- Recommendation is unified and logical
- Addresses the decision directly
- Better than any single perspective alone (integrates insights)
**Concern Integration**: 5/5
- All three roles' core concerns explicitly addressed
- Synthesis shows how each perspective is strengthened by integration
- No perspective dismissed
**Tradeoff Transparency**: 5/5
- Clear on what's accepted (3-month delay, $100K risk) and why
- Explicit on what's rejected (blind build commitment, immediate SaaS)
- Honest about second-order effects
**Actionability**: 5/5
- Detailed implementation plan with phases
- Owners and timelines specified
- Success metrics from each role's perspective
- Monitoring plan and escalation conditions
- Decision review cadence (Month 3 gate)
**Stakeholder Readiness**: 4/5
- Communication tailored for execs, engineering, and sales
- Key messages appropriate for each audience
- Could improve: Add one-page executive summary at the top
**Average Score**: 4.6/5 (Exceeds standard for medium-complexity decision)
**Why this exceeds standard:**
- Genuine multi-stakeholder debate with real tension
- Synthesis pattern (phased + conditional) elegantly resolves competing priorities
- Decision gate provides exit ramp that addresses execution risk
- All perspectives strengthened by integration (not just compromise)
---
**Analysis completed**: November 2, 2024
**Facilitator**: [Internal Strategy Team]
**Status**: Ready for executive approval
**Next milestone**: Kickoff MVP build (Week 1)

# Advanced Roleplay → Debate → Synthesis Methodology
## Workflow
Copy this checklist for advanced techniques:
```
Advanced Facilitation Progress:
- [ ] Step 1: Map stakeholder landscape and power dynamics
- [ ] Step 2: Design multi-round debate structure
- [ ] Step 3: Facilitate with anti-pattern awareness
- [ ] Step 4: Synthesize under uncertainty and constraints
- [ ] Step 5: Adapt communication for different audiences
```
**Step 1: Map stakeholder landscape**
Identify all stakeholders, map influence and interest, understand power dynamics and coalitions, determine who must be represented in the debate. See [1. Stakeholder Mapping](#1-stakeholder-mapping) for power-interest matrix and role selection strategy.
**Step 2: Design multi-round structure**
Plan debate rounds (diverge, converge, iterate), allocate time appropriately, choose debate formats for each round, set decision criteria upfront. See [2. Multi-Round Debate Structure](#2-multi-round-debate-structure) for three-round framework and time management.
**Step 3: Facilitate with anti-pattern awareness**
Recognize when debates go wrong (premature consensus, dominance, false dichotomies), intervene with techniques to surface genuine tensions, ensure all perspectives get authentic hearing. See [3. Facilitation Anti-Patterns](#3-facilitation-anti-patterns) for common failures and interventions.
**Step 4: Synthesize under uncertainty**
Handle incomplete information, conflicting evidence, and irreducible disagreement. Use conditional strategies and monitoring plans. See [4. Synthesis Under Uncertainty](#4-synthesis-under-uncertainty) for approaches when evidence is incomplete.
**Step 5: Adapt communication**
Tailor synthesis narrative for technical, executive, and operational audiences. Emphasize different aspects for different stakeholders. See [5. Audience-Perspective Adaptation](#5-audience-perspective-adaptation) for stakeholder-specific messaging.
---
## 1. Stakeholder Mapping
### Power-Interest Matrix
**High Power, High Interest** → **Manage Closely**
- Must be represented in debate
- Concerns must be addressed
- Examples: Executive sponsor, Product owner, Key customer
**High Power, Low Interest** → **Keep Satisfied**
- Consult but may not need full representation
- Examples: CFO (if not budget owner), Adjacent VP, Legal
**Low Power, High Interest** → **Keep Informed**
- Valuable input, may aggregate into broader role
- Examples: End users, Support team, Implementation team
**Low Power, Low Interest** → **Monitor**
- Don't need direct representation
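The quadrant logic above reduces to two threshold comparisons; a minimal sketch (the 0-1 scales and 0.5 cutoff are illustrative assumptions):

```python
def engagement_strategy(power: float, interest: float, cutoff: float = 0.5) -> str:
    """Map a stakeholder's power and interest (0-1) to a power-interest quadrant."""
    if power >= cutoff:
        return "Manage Closely" if interest >= cutoff else "Keep Satisfied"
    return "Keep Informed" if interest >= cutoff else "Monitor"

print(engagement_strategy(0.9, 0.8))  # e.g., executive sponsor
print(engagement_strategy(0.8, 0.2))  # e.g., adjacent VP
print(engagement_strategy(0.2, 0.9))  # e.g., end users
print(engagement_strategy(0.1, 0.1))  # monitor only
```

In practice the scoring is a judgment call; the value of the sketch is forcing an explicit power and interest rating for every stakeholder before deciding who gets a seat in the debate.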
### Role Selection Strategy
**Must include:**
- Primary decision-maker or proxy
- Implementation owner
- Resource controller (budget, people, time)
- Risk owner
**Should include:**
- Key affected stakeholders (customer, user)
- Domain expert
- Devil's advocate
**Aggregation when >5 stakeholders:**
- Combine similar perspectives into archetype roles
- Rotate roles across debate rounds
- Focus on distinct viewpoints, not individuals
### Coalition Identification
**Common coalitions:**
- **Revenue**: Sales, Marketing, Growth → prioritize growth
- **Quality**: Engineering, Support, Brand → prioritize quality
- **Efficiency**: Finance, Operations → prioritize cost
- **Innovation**: R&D, Product, Strategy → prioritize new capabilities
**Why matters**: Coalitions amplify perspectives. Synthesis must address coalition concerns, not just individual roles.
---
## 2. Multi-Round Debate Structure
### Three-Round Framework
**Round 1: Diverge (30-45 min)**
- **Goal**: Surface all perspectives
- **Format**: Sequential roleplay (no interruption)
- **Outcome**: Clear understanding of each position
**Round 2: Engage (45-60 min)**
- **Goal**: Surface tensions, challenge assumptions, identify cruxes
- **Format**: Point-counterpoint or constructive confrontation
- **Facilitation**: Direct traffic, push for specifics, surface cruxes, note agreements
**Round 3: Converge (30-45 min)**
- **Goal**: Build unified recommendation
- **Format**: Collaborative synthesis
- **Facilitation**: Propose patterns, test against roles, refine, check coherence
### Adaptive Structures
**Two-round** (simpler decisions): Roleplay+Debate → Synthesis
**Four-round** (complex decisions): Positions → Challenge → Refine → Synthesize
**Iterative**: Initial synthesis → Test → Refine → Repeat
---
## 3. Facilitation Anti-Patterns
### Premature Consensus
**Symptoms**: Roles agree quickly without genuine debate
**Fix**: Play devil's advocate, test with edge cases, give permission to disagree
### Dominant Voice
**Symptoms**: One role speaks 70%+ of time, others defer
**Fix**: Explicit turn-taking, direct questions to quieter roles, affirm contributions
### Talking Past Each Other
**Symptoms**: Roles make points but don't engage
**Fix**: Make dimensions explicit, force direct engagement, summarize and redirect
### False Dichotomies
**Symptoms**: "Either X or we fail"
**Fix**: Challenge dichotomy, explore spectrum, introduce alternatives, reframe
### Appeal to Authority
**Symptoms**: "CEO wants X, so we do X"
**Fix**: Ask for underlying reasoning, question applicability, examine evidence
### Strawman Arguments
**Symptoms**: Weak versions of opposing views
**Fix**: Steelman request, direct to role for their articulation, empathy prompt
### Analysis Paralysis
**Symptoms**: "Need more data" endlessly
**Fix**: Set decision deadline, clarify decision criteria, good-enough threshold
---
## 4. Synthesis Under Uncertainty
### When Evidence is Incomplete
**Conditional strategy with learning triggers:**
- "Start with X. Monitor [metric]. If [threshold] not met by [date], switch to Y."
**Reversible vs. irreversible:**
- Choose reversible option first
- Example: "Buy SaaS (reversible). Only build custom if SaaS proves inadequate after 6 months."
**Small bets and experiments:**
- Run pilots before full commitment
- Example: "Test feature with 10% users. Rollout to 100% only if retention improves >5%."
**Information value calculation:**
- Is value of additional information worth the delay?
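The "monitor and switch" rule quoted above can be expressed as a tiny trigger check (the metric, threshold, dates, and option names are placeholders for whatever the synthesis specifies):

```python
from datetime import date

def check_trigger(metric_value: float, threshold: float,
                  today: date, deadline: date,
                  current: str = "X", fallback: str = "Y") -> str:
    """'Start with X. Monitor [metric]. If [threshold] not met by [date], switch to Y.'"""
    if today >= deadline and metric_value < threshold:
        return fallback
    return current

print(check_trigger(0.72, 0.80, date(2025, 6, 1), date(2025, 6, 1)))  # deadline hit, below threshold: switch
print(check_trigger(0.85, 0.80, date(2025, 6, 1), date(2025, 6, 1)))  # threshold met: stay the course
```

The discipline this buys is pre-commitment: the switching condition is agreed before anyone is emotionally invested in option X.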
### When Roles Fundamentally Disagree
**Disagree and commit:**
- Make decision, all commit to making it work
- Document disagreement for learning
**Escalate to decision-maker:**
- Present both perspectives clearly
- Let higher authority break tie
**Parallel paths** (if resources allow):
- Pursue both approaches simultaneously
- Let data decide which to scale
**Defer decision:**
- Explicitly choose to wait
- Set conditions for revisiting
### When Constraints Shift Mid-Debate
**Revisit assumptions:**
- Which roles' positions change given new constraint?
**Re-prioritize:**
- Given new constraint, what's binding now?
**Scope reduction:**
- What can we cut to stay within constraints?
**Challenge the constraint:**
- Is the new constraint real or negotiable?
---
## 5. Audience-Perspective Adaptation
### For Executives
**Focus**: Strategic impact, ROI, risk, competitive positioning
- Bottom-line recommendation (1 sentence)
- Strategic rationale (2-3 bullets)
- Financial impact (costs, benefits, ROI)
- Risk summary (top 2 risks + mitigations)
- Competitive implications
**Format**: 1-page executive summary
### For Technical Teams
**Focus**: Implementation feasibility, technical tradeoffs, timeline, resources
- Technical approach (how)
- Architecture decisions and rationale
- Resource requirements (people, time, tools)
- Technical risks and mitigation
- Success metrics (technical KPIs)
**Format**: 2-3 page technical brief
### For Operational Teams
**Focus**: Customer impact, ease of execution, support burden, messaging
- Customer value proposition
- Operational changes (what changes for them)
- Training and enablement needs
- Support implications
- Timeline and rollout plan
**Format**: Operational guide
---
## 6. Advanced Debate Formats
### Socratic Dialogue
**Purpose**: Deep exploration through questioning
**Method**: One role (Socrates) asks probing questions, other responds
**Questions**: "What do you mean by [term]?", "Why is that important?", "What if opposite were true?"
### Steelman Debate
**Purpose**: Understand deeply before challenging
**Method**: Role B steelmans Role A's argument (stronger than A did), then challenges
**Why works**: Forces genuine understanding, surfaces real strengths
### Pre-Mortem Debate
**Purpose**: Surface risks and failure modes
**Method**: Assume decision X failed. Each role explains why from their perspective
**Repeat for each alternative**
### Fishbowl Debate
**Purpose**: Represent multiple layers (decision-makers + affected parties)
**Format**: Inner circle debates, outer circle observes, pause periodically for outer circle input
### Delphi Method
**Purpose**: Aggregate expert opinions without groupthink
**Format**: Round 1 (anonymous positions) → Share → Round 2 (revise) → Repeat until convergence
---
## 7. Complex Synthesis Patterns
### Multi-Criteria Decision Analysis (MCDA)
**When**: Multiple competing criteria, can't integrate narratively
**Method**:
1. Identify criteria (from role perspectives): Cost, Speed, Quality, Risk, Customer Impact
2. Weight criteria (based on priorities): Sum to 100%
3. Score alternatives (1-5 scale per criterion)
4. Calculate weighted scores
5. Sensitivity analysis on weights
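The five steps above can be sketched in a few lines of Python. The criteria, weights, and 1-5 scores below are illustrative placeholders, not recommendations for any real decision:

```python
# Illustrative MCDA sketch: all weights and scores are made-up examples.
criteria_weights = {"Cost": 0.30, "Speed": 0.25, "Quality": 0.25, "Risk": 0.20}
assert abs(sum(criteria_weights.values()) - 1.0) < 1e-9  # weights sum to 100%

# Each alternative scored 1-5 per criterion (5 = best on that criterion).
scores = {
    "Build in-house": {"Cost": 2, "Speed": 2, "Quality": 5, "Risk": 4},
    "Buy vendor":     {"Cost": 4, "Speed": 5, "Quality": 3, "Risk": 3},
}

def weighted_score(alt_scores, weights):
    """Weighted sum of an alternative's scores across all criteria."""
    return sum(weights[c] * alt_scores[c] for c in weights)

for alt, s in scores.items():
    print(alt, round(weighted_score(s, criteria_weights), 2))

# Step 5, minimal sensitivity analysis: shift 10 points of weight from
# Cost to Quality and check whether the ranking flips.
shifted = dict(criteria_weights, Cost=0.20, Quality=0.35)
for alt, s in scores.items():
    print(alt, round(weighted_score(s, shifted), 2))
```

If the ranking survives plausible weight shifts like this one, the recommendation is robust to the roles' differing priorities; if it flips, the weight disagreement itself is the crux to debate.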
### Pareto Frontier Analysis
**When**: Two competing objectives with tradeoff curve
**Method**:
1. Plot alternatives on two dimensions (e.g., Cost vs Quality)
2. Identify Pareto frontier (non-dominated alternatives)
3. Choose based on priorities
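Identifying the non-dominated set can be sketched as below, using illustrative (cost, quality) pairs where lower cost and higher quality are both better:

```python
# Pareto frontier sketch: alternatives as (cost, quality) pairs.
# Numbers are illustrative only.
alternatives = {
    "A": (100, 3.0),
    "B": (150, 4.5),
    "C": (160, 4.0),   # costs more than B yet scores lower on quality
    "D": (220, 4.8),
}

def dominates(x, y):
    """x dominates y if x is no worse on both axes and strictly better on one."""
    (cx, qx), (cy, qy) = x, y
    return cx <= cy and qx >= qy and (cx < cy or qx > qy)

frontier = [
    name for name, point in alternatives.items()
    if not any(dominates(other, point)
               for o, other in alternatives.items() if o != name)
]
print(frontier)  # C drops out: it is dominated by B
```

Everything off the frontier can be discarded before the debate; the remaining choice among frontier points is a pure priorities question for the roles to argue.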
### Real Options Analysis
**When**: Decision can be staged with learning opportunities
**Method**:
1. Identify decision points (Now: invest $X, Later: decide based on results)
2. Map scenarios and outcomes
3. Calculate option value (expected value of the staged path minus expected value of committing everything upfront)
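A minimal sketch of that comparison, with illustrative numbers and the simplifying assumption that the pilot perfectly reveals whether the bet will pay off:

```python
# Real-options sketch: commit $10M now vs. a $2M pilot with the right
# (not the obligation) to scale. All figures are illustrative.
p_success = 0.4        # assumed probability the opportunity materializes
payoff_success = 30.0  # $M payoff if it does
full_cost = 10.0       # $M to commit everything now
pilot_cost = 2.0       # $M for the staged first phase
scale_cost = 8.0       # $M to scale after a successful pilot

# Commit now: pay the full cost regardless of outcome.
ev_commit = p_success * payoff_success - full_cost

# Stage it: pay only the pilot; scale on success, abandon on failure.
ev_staged = p_success * (payoff_success - scale_cost) - pilot_cost

option_value = ev_staged - ev_commit  # value of preserving flexibility
print(ev_commit, ev_staged, option_value)
```

Here the staged path wins because abandoning after a failed pilot avoids most of the downside; if the pilot were slow or uninformative, the comparison could flip, which is exactly the tradeoff the roles should debate.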
---
## 8. Facilitation Best Practices
### Reading the Room
**Verbal cues:**
- Hesitation: "Well, I guess..." (not convinced)
- Qualifiers: "Maybe", "Possibly" (hedging)
- Repetition: Saying same point multiple times (not feeling heard)
**Facilitation responses:**
- Check in: "I sense hesitation. Can you say more?"
- Affirm: "I hear X is important. Let's address that."
- Give space: "Let's pause and hear from [quieter person]."
### Managing Conflict
**Productive** (encourage):
- Disagreement on ideas (not people)
- Specificity, evidence-based, openness to changing mind
**Unproductive** (intervene):
- Personal attacks, generalizations, dismissiveness, stonewalling
**Interventions**: Reframe (focus on idea), ground in evidence, seek understanding, take break
### Building Toward Synthesis
**Incremental agreement**: Note areas of agreement as they emerge
**Trial balloons**: Float potential synthesis ideas early, gauge reactions
**Role-checking**: Test synthesis against each role iteratively
### Closing the Debate
**Signals**: Positions clear, tensions explored, cruxes identified, repetition, time pressure
**Transition**: "We've heard all perspectives. Now let's build unified recommendation."
**Final check**: "Can everyone live with this?" "What would make this 10% better for each of you?"
---
## 9. Case Studies
For detailed worked examples showing stakeholder mapping, multi-round debates, and complex synthesis:
- [Monolith vs Microservices](examples/methodology/case-study-monolith-microservices.md) - Engineering team debate
- [Market Entry Decision](examples/methodology/case-study-market-entry.md) - Executive team with 5 stakeholders
- [Pricing Model Debate](examples/methodology/case-study-pricing-model.md) - Customer segmentation synthesis
---
## Summary
**Key principles:**
1. **Map the landscape**: Understand stakeholders, power dynamics, coalitions before designing debate
2. **Structure for depth**: Multiple rounds allow positions to evolve as understanding deepens
3. **Recognize anti-patterns**: Premature consensus, dominant voice, talking past, false dichotomies, appeal to authority, strawmen, analysis paralysis
4. **Synthesize under uncertainty**: Conditional strategies, reversible decisions, small bets, monitoring plans
5. **Adapt communication**: Tailor for executives (strategic), technical teams (implementation), operational teams (execution)
6. **Master advanced formats**: Socratic dialogue, steelman, pre-mortem, fishbowl, Delphi for different contexts
7. **Facilitate skillfully**: Read the room, manage conflict productively, build incremental agreement, know when to close
**The best synthesis** integrates insights from all perspectives, addresses real concerns, makes tradeoffs explicit, and results in a decision better than any single viewpoint alone.

# Roleplay → Debate → Synthesis Template
## Workflow
Copy this checklist and track your progress:
```
Roleplay → Debate → Synthesis Progress:
- [ ] Step 1: Frame decision and select 2-5 roles
- [ ] Step 2: Roleplay each perspective authentically
- [ ] Step 3: Facilitate structured debate
- [ ] Step 4: Synthesize unified recommendation
- [ ] Step 5: Self-assess with rubric
```
**Step 1: Frame decision and select roles**
Define the decision question clearly, identify 2-5 stakeholder perspectives with competing interests, and determine what successful synthesis looks like. Use [Quick Template](#quick-template) structure below.
**Step 2: Roleplay each perspective**
For each role, articulate their position, priorities, concerns, evidence, and vulnerabilities without strawmanning. See [Section 2](#2-roles--perspectives) of template structure.
**Step 3: Facilitate structured debate**
Use debate format (point-counterpoint, devil's advocate, crux-finding) to surface tensions and challenge assumptions. See [Section 3](#3-debate) of template structure.
**Step 4: Synthesize unified recommendation**
Integrate insights using synthesis patterns (weighted, sequencing, conditional, hybrid, reframing, or constraint elevation). See [Section 4](#4-synthesis) of template structure and [Synthesis Patterns](#synthesis-patterns).
**Step 5: Self-assess with rubric**
Validate using rubric: perspective authenticity, debate quality, synthesis coherence, and actionability. Use [Quality Checklist](#quality-checklist) before finalizing.
---
## Quick Template
Copy this structure to create your analysis:
```markdown
# Decision: [Question]
**Date**: [Today's date]
**Decision-maker**: [Who decides]
**Stakes**: [High/Medium/Low - impact and reversibility]
---
## 1. Decision Context
**What we're deciding:**
[Clear statement of the choice - "Should we X?" or "What's the right balance between X and Y?"]
**Why this matters:**
[Business impact, urgency, strategic importance]
**Success criteria for synthesis:**
[What makes this synthesis successful]
**Constraints:**
- [Budget, timeline, requirements, non-negotiables]
**Audience:** [Who needs to approve or act on this decision]
---
## 2. Roles & Perspectives
### Role 1: [Name - e.g., "Engineering Lead" or "Growth Advocate"]
**Position**: [What they believe should be done]
**Priorities**: [What values or goals drive this position]
- [Priority 1]
- [Priority 2]
- [Priority 3]
**Concerns about alternatives**: [What risks or downsides they see]
- [Concern 1]
- [Concern 2]
**Evidence**: [What data, experience, or reasoning supports this view]
- [Evidence 1]
- [Evidence 2]
**Vulnerabilities**: [What uncertainties or limitations they acknowledge]
- [What they're unsure about]
- [What could prove them wrong]
**Success metrics**: [How this role measures success]
---
### Role 2: [Name]
[Same structure as Role 1]
---
### Role 3: [Name] (if applicable)
[Same structure as Role 1]
---
## 3. Debate
### Key Points of Disagreement
**Dimension 1: [e.g., "Timeline - Fast vs. Thorough"]**
- **[Role A]**: [Their position on this dimension]
- **[Role B]**: [Their position on this dimension]
- **Tension**: [Where the conflict lies]
**Dimension 2: [e.g., "Risk Tolerance"]**
- **[Role A]**: [Position]
- **[Role B]**: [Position]
- **Tension**: [Conflict]
### Debate Transcript (Point-Counterpoint)
**[Role A] Opening Case:**
[Their argument for their position - 2-3 paragraphs]
**[Role B] Response:**
[Objections and counterarguments - 2-3 paragraphs]
**[Role A] Rebuttal:**
[Addresses objections - 1-2 paragraphs]
**Cross-examination:**
- **[Role A] to [Role B]**: [Probing question]
- **[Role B]**: [Response]
### Cruxes (What Would Change Minds)
**[Role A] would shift if:**
- [Condition or evidence that would change their position]
**[Role B] would shift if:**
- [Condition or evidence]
### Areas of Agreement
Despite disagreements, roles agree on:
- [Common ground 1]
- [Common ground 2]
---
## 4. Synthesis
### Integration Approach
**Pattern used**: [Weighted Synthesis / Sequencing / Conditional / Hybrid / Reframing / Constraint Elevation]
**Synthesis statement:**
[1-2 paragraphs explaining the unified recommendation that integrates insights from all perspectives]
### What We're Prioritizing
**Primary focus**: [What's being prioritized and why]
- From [Role X]: [What we're adopting from this perspective]
- From [Role Y]: [What we're adopting]
**Secondary considerations**: [How we're addressing other concerns]
- [Role X]'s concern about [issue]: Mitigated by [approach]
### Tradeoffs Accepted
**We're accepting:**
- [Tradeoff 1]
- **Rationale**: [Why this makes sense]
**We're NOT accepting:**
- [What we explicitly decided against]
- **Reason**: [Why rejected]
---
## 5. Recommendation
**Recommended Action:**
[Clear, specific recommendation in 1-2 sentences]
**Rationale:**
[2-3 paragraphs explaining why this synthesis is the best path forward]
**Key factors driving this decision:**
1. [Factor 1 - from which role's perspective]
2. [Factor 2]
3. [Factor 3]
---
## 6. Implementation
**Immediate next steps:**
1. [Action 1] - [Owner] by [Date]
2. [Action 2] - [Owner] by [Date]
3. [Action 3] - [Owner] by [Date]
**Phased approach:** (if using sequencing)
- **Phase 1** ([Timeline]): [What happens first]
- **Phase 2** ([Timeline]): [What happens next]
**Conditional triggers:** (if using conditional strategy)
- **If [condition A]**: [Then do X]
- **If [condition B]**: [Then do Y]
**Success metrics:**
- [Metric 1 - from Role X's perspective]: Target [value] by [date]
- [Metric 2 - from Role Y's perspective]: Target [value] by [date]
**Monitoring plan:**
- **Weekly**: [What we track frequently]
- **Monthly**: [What we review periodically]
---
## 7. Stakeholder Communication
**For [Stakeholder Group A - e.g., Executive Team]:**
- Key message: [1-sentence summary]
- Focus on: [What matters most to them]
**For [Stakeholder Group B - e.g., Engineering Team]:**
- Key message: [1-sentence summary]
- Focus on: [What matters most to them]
---
## 8. Appendix: Assumptions & Uncertainties
**Key assumptions:**
1. [Assumption 1]
- **Confidence**: High / Medium / Low
- **Impact if wrong**: [What happens]
**Unresolved uncertainties:**
- [Uncertainty 1]: [How we'll handle this]
**What would change our mind:**
- [Condition or evidence that would trigger reconsideration]
```
---
## Synthesis Patterns
### 1. Weighted Synthesis
"Prioritize X, while incorporating safeguards for Y"
- Example: "Ship fast (PM), with feature flags and monitoring (Engineer)"
### 2. Sequencing
"First X, then Y"
- Example: "Launch MVP (Growth), then invest in quality (Engineering) if PMF proven"
### 3. Conditional Strategy
"If A, do X; if B, do Y"
- Example: "If >10K users in Q1, scale; otherwise pivot"
### 4. Hybrid Approach
"Combine elements of multiple perspectives"
- Example: "Build core in-house (control) but buy peripherals (speed)"
### 5. Reframing
"Debate reveals real question is Z, not X vs Y"
- Example: "Pricing debate reveals we need to segment customers first"
### 6. Constraint Elevation
"Identify binding constraint all perspectives agree on"
- Example: "Both agree eng capacity is bottleneck; hire first"
---
## Quality Checklist
Before finalizing, verify:
**Roleplay quality:**
- [ ] Each role has clear position, priorities, concerns, evidence
- [ ] Perspectives feel authentic (not strawmen)
- [ ] Vulnerabilities acknowledged
- [ ] Success metrics defined for each role
**Debate quality:**
- [ ] Key disagreements surfaced on 3-5 dimensions
- [ ] Perspectives directly engage (not talking past each other)
- [ ] Cruxes identified (what would change minds)
- [ ] Areas of agreement noted
**Synthesis quality:**
- [ ] Clear integration approach (weighted/sequencing/conditional/hybrid/reframe/constraint)
- [ ] All roles' concerns addressed (not dismissed)
- [ ] Tradeoffs explicit (what we're accepting and why)
- [ ] Recommendation is unified and coherent
- [ ] Actionable next steps with owners and dates
**Communication quality:**
- [ ] Tailored for different stakeholders
- [ ] Key messages clear (1-sentence summaries)
- [ ] Emphasis appropriate for audience
**Integrity:**
- [ ] Assumptions stated explicitly
- [ ] Uncertainties acknowledged
- [ ] "What would change our mind" conditions specified
- [ ] No perspective dismissed without engagement