# Brainstorm Diverge-Converge Template

## Workflow

Copy this checklist and track your progress:

Brainstorm Progress:
- [ ] Step 1: Define problem and criteria using template structure
- [ ] Step 2: Diverge with creative prompts and techniques
- [ ] Step 3: Cluster using bottom-up or top-down methods
- [ ] Step 4: Converge with systematic scoring
- [ ] Step 5: Document selections and next steps

### Step 1: Define problem and criteria using template structure

Fill in the problem statement, decision to make, constraints, and evaluation criteria (3-5 criteria that matter for your context). Use the Quick Template to structure your work. See Detailed Guidance for criteria selection.

### Step 2: Diverge with creative prompts and techniques

Generate 20-50 ideas using SCAMPER prompts, perspective shifting, constraint removal, and analogies. Suspend judgment and aim for quantity and variety. See Phase 1: Diverge for stimulation techniques and quality checks.

### Step 3: Cluster using bottom-up or top-down methods

Group similar ideas into 4-8 distinct clusters. Use bottom-up clustering (identify natural groupings) or top-down (predefined categories). Name clusters clearly and specifically. See Phase 2: Cluster for methods and quality checks.

### Step 4: Converge with systematic scoring

Score ideas on your defined criteria (1-10 scale or Low/Med/High), rank by total or weighted score, and select the top 3-5. Document tradeoffs and runner-ups. See Phase 3: Converge for scoring approaches and selection guidelines.

### Step 5: Document selections and next steps

Fill in top selections with rationale, next steps, and timeline. Include runner-ups for future consideration and a measurement plan. See the Worked Example for a complete illustration.

## Quick Template

```markdown
# Brainstorm: {Topic}

## Problem Statement

**What we're solving**: {Clear description of problem or opportunity}

**Decision to make**: {What will we do with the output?}

**Constraints**: {Must-haves, no-gos, boundaries}

---

## Diverge: Generate Ideas

**Target**: {20-50 ideas}

**Prompt**: Generate as many ideas as possible for {topic}. Suspend judgment. All ideas are valid.

### All Ideas

1. {Idea 1}
2. {Idea 2}
3. {Idea 3}
... (continue to target number)

**Total generated**: {N} ideas

---

## Cluster: Organize Themes

**Goal**: Group similar ideas into 4-8 distinct categories

### Cluster 1: {Theme Name}
- {Idea A}
- {Idea B}
- {Idea C}

### Cluster 2: {Theme Name}
- {Idea D}
- {Idea E}

... (continue for all clusters)

**Total clusters**: {N} themes

---

## Converge: Evaluate & Select

**Evaluation Criteria**:
1. {Criterion 1} (weight: {X}x)
2. {Criterion 2} (weight: {X}x)
3. {Criterion 3} (weight: {X}x)

### Scored Ideas

| Idea | {Criterion 1} | {Criterion 2} | {Criterion 3} | Total |
|------|--------------|--------------|--------------|-------|
| {Top idea 1} | {score} | {score} | {score} | {total} |
| {Top idea 2} | {score} | {score} | {score} | {total} |
| {Top idea 3} | {score} | {score} | {score} | {total} |

### Top Selections

**1. {Idea Name}** (Score: {X}/10)
- Why selected: {Rationale}
- Next steps: {Immediate actions}

**2. {Idea Name}** (Score: {X}/10)
- Why selected: {Rationale}
- Next steps: {Immediate actions}

**3. {Idea Name}** (Score: {X}/10)
- Why selected: {Rationale}
- Next steps: {Immediate actions}

### Runner-Ups (For Future Consideration)
- {Idea with potential but not top priority}
- {Another promising idea}

---

## Next Steps

**Immediate**:
- {Action 1 based on top selection}
- {Action 2}

**Short-term** (next 2-4 weeks):
- {Action for second priority}

**Parking lot** (revisit later):
- {Ideas to reconsider in different context}
```

## Detailed Guidance

### Phase 1: Diverge (Generate Ideas)

**Goal**: Generate maximum quantity and variety of ideas

**Techniques to stimulate ideas**:

1. **Classic brainstorming**: Free-flow idea generation
2. **SCAMPER prompts**:
   - **Substitute**: What could we replace?
   - **Combine**: What could we merge?
   - **Adapt**: What could we adjust?
   - **Modify**: What could we change?
   - **Put to other uses**: What else could this do?
   - **Eliminate**: What could we remove?
   - **Reverse**: What if we did the opposite?
3. **Perspective shifting**:
   - "What would {competitor/expert/user type} do?"
   - "What if we had 10x the budget?"
   - "What if we had 1/10th the budget?"
   - "What if we had to launch tomorrow?"
   - "What's the most unconventional approach?"
4. **Constraint removal**:
   - "What if technical limitations didn't exist?"
   - "What if we didn't care about cost?"
   - "What if we ignored industry norms?"
5. **Analogies**:
   - "How do other industries solve similar problems?"
   - "What can we learn from nature?"
   - "What historical precedents exist?"

**Divergence quality checks**:

- Generated at least 20 ideas (minimum)
- Ideas vary in type/approach (not all incremental or all radical)
- Included "wild" ideas (push boundaries)
- Included "safe" ideas (low risk)
- Covered different scales (quick wins and long-term bets)
- No premature filtering (saved criticism for converge phase)

**Common divergence mistakes**:

- Stopping too early (quantity breeds quality)
- Self-censoring "bad" ideas (they often spark good ones)
- Focusing only on obvious solutions
- Letting one person/perspective dominate
- Jumping to evaluation too quickly

### Phase 2: Cluster (Organize Themes)

**Goal**: Create meaningful structure from raw ideas

**Clustering methods**:

1. **Bottom-up clustering** (recommended for most cases):
   - Read through all ideas
   - Identify natural groupings (2-3 similar ideas)
   - Label each group
   - Assign remaining ideas to groups
   - Refine group labels for clarity
2. **Top-down clustering**:
   - Define categories upfront (e.g., short-term/long-term, user types)
   - Assign ideas to predefined categories
   - Adjust categories if many ideas don't fit
3. **Affinity mapping** (for large idea sets):
   - Group ideas that "feel similar"
   - Name groups after grouping (not before)
   - Create sub-clusters if main clusters are too large

**Cluster naming guidelines**:

- Use descriptive, specific labels (not generic)
- Good: "Automated self-service tools"; Bad: "Automation"
- Good: "Human high-touch onboarding"; Bad: "Customer service"
- Include mechanism or approach in name when possible

**Cluster quality checks**:

- 4-8 clusters (sweet spot for most topics)
- Clusters are distinct (minimal overlap)
- Clusters are balanced (not 1 idea in one cluster, 20 in another)
- Cluster names are clear and specific
- All ideas assigned to a cluster
- Clusters represent meaningfully different approaches

**Handling edge cases**:

- **Outliers**: Create an "Other/Misc" cluster for ideas that don't fit, or leave unclustered if very few
- **Ideas that fit multiple clusters**: Assign to best-fit cluster, note cross-cluster themes
- **Too many clusters (>10)**: Merge similar clusters or create super-clusters
- **Too few clusters (<4)**: Consider whether ideas truly vary, or subdivide large clusters

### Phase 3: Converge (Evaluate & Select)

**Goal**: Systematically identify strongest ideas

#### Step 1: Define Evaluation Criteria

Choose 3-5 criteria that matter for your context:

**Common criteria**:

| Criterion | Description | When to use |
|-----------|-------------|-------------|
| Impact | How much value does this create? | Almost always |
| Feasibility | How easy is this to implement? | When resources are constrained |
| Cost | What's the financial investment? | When budget is limited |
| Speed | How quickly can we do this? | When time is critical |
| Risk | What could go wrong? | For high-stakes decisions |
| Alignment | Does this fit our strategy? | For strategic decisions |
| Novelty | How unique/innovative is this? | For competitive differentiation |
| Reversibility | Can we undo this if wrong? | For experimental approaches |
| Learning value | What will we learn? | For research/exploration |
| User value | How much do users benefit? | For product/feature decisions |

**Weighting criteria (optional)**:

- Assign importance weights (e.g., 3x for impact, 2x for feasibility, 1x for speed)
- Multiply scores by weights before summing (see the sketch below)
- Use when some criteria matter much more than others
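
If you track scores in a script rather than a spreadsheet, the weighted total is just multiply-and-sum. A minimal Python sketch, using the weights and two of the scores from the Worked Example below (the function and variable names are illustrative, not part of the template):

```python
# Weighted scoring: multiply each criterion score by its weight, then sum.
# Weights mirror the worked example: Impact 3x, Feasibility 2x, Speed 1x.
WEIGHTS = {"impact": 3, "feasibility": 2, "speed": 1}

ideas = {
    "Email drip campaign": {"impact": 7, "feasibility": 9, "speed": 9},
    "In-app interactive tutorial": {"impact": 9, "feasibility": 6, "speed": 7},
}

def weighted_total(scores: dict) -> int:
    """Sum of weight x score across all criteria."""
    return sum(WEIGHTS[criterion] * score for criterion, score in scores.items())

# Rank ideas from highest to lowest weighted total.
for name, scores in sorted(ideas.items(), key=lambda kv: weighted_total(kv[1]), reverse=True):
    print(f"{name}: {weighted_total(scores)}")
# Email drip campaign: 48
# In-app interactive tutorial: 46
```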

#### Step 2: Score Ideas

**Scoring approaches**:

1. **Simple 1-10 scale** (recommended for most cases):
   - 1-3: Low (weak on this criterion)
   - 4-6: Medium (moderate on this criterion)
   - 7-9: High (strong on this criterion)
   - 10: Exceptional (best possible)
2. **Low/Medium/High**:
   - Faster but less precise
   - Convert to numbers for ranking (Low=2, Med=5, High=8)
3. **Pairwise comparison**:
   - Compare each idea to every other idea
   - Count "wins" for each idea
   - Slower but more thorough (good for critical decisions; see the sketch below)
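
The pairwise tally can be done mechanically once the head-to-head judgments are made. A minimal Python sketch; the `beats` function is a hypothetical stand-in for a human judgment call, not a computation this template prescribes:

```python
from itertools import combinations

ideas = ["Email drip campaign", "Success checklist", "Community forum"]

def beats(a: str, b: str) -> str:
    """Hypothetical stand-in: in practice a human decides each head-to-head.
    Here a fixed preference order substitutes for that judgment."""
    preference = {"Email drip campaign": 2, "Success checklist": 1, "Community forum": 0}
    return a if preference[a] > preference[b] else b

# Compare every unordered pair once and count wins.
wins = {idea: 0 for idea in ideas}
for a, b in combinations(ideas, 2):
    wins[beats(a, b)] += 1

# An idea that beats every alternative ends with len(ideas) - 1 wins.
for idea, count in sorted(wins.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{idea}: {count} wins")
```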

**Scoring tips**:

- Score all ideas on Criterion 1, then all on Criterion 2, etc. (maintains consistency)
- Use reference points ("This idea is more impactful than X but less than Y")
- Document reasoning for extreme scores (1-2 or 9-10)
- Consider both upside (best case) and downside (worst case)

#### Step 3: Rank and Select

**Ranking methods**:

1. **Total score ranking**:
   - Sum scores across all criteria
   - Sort by total score (highest to lowest)
   - Select top 3-5
2. **Must-have filtering + scoring**:
   - First, eliminate ideas that violate must-have constraints
   - Then score remaining ideas
   - Select top scorers
3. **Two-dimensional prioritization** (see the sketch below):
   - Plot ideas on a 2x2 matrix (e.g., Impact vs. Feasibility)
   - Prioritize the high-impact, high-feasibility quadrant
   - Common matrices:
     - Impact / Effort (classic prioritization)
     - Risk / Reward (for innovation)
     - Cost / Value (for ROI focus)
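
For the two-dimensional method, quadrant assignment reduces to a small function. A minimal sketch of the classic Impact/Effort matrix: the cutoff and the effort scores are assumptions for illustration, and the quadrant labels follow common prioritization shorthand rather than anything defined in this template:

```python
def quadrant(impact: int, effort: int, cutoff: int = 5) -> str:
    """Classify an idea on a 1-10 Impact/Effort grid (cutoff is an assumption)."""
    if impact > cutoff:
        return "Quick win" if effort <= cutoff else "Big bet"
    return "Fill-in" if effort <= cutoff else "Avoid"

# (impact, effort) pairs below are illustrative, not scores from the worked example.
ideas = {
    "Email drip campaign": (7, 2),
    "AI chatbot": (7, 8),
    "Seasonal feature highlights": (3, 2),
}

for name, (impact, effort) in ideas.items():
    print(f"{name}: {quadrant(impact, effort)}")
# Email drip campaign: Quick win
# AI chatbot: Big bet
# Seasonal feature highlights: Fill-in
```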

**Selection guidelines**:

- **Diversify**: Don't just pick the top 3 if they're all in the same cluster
- **Balance**: Mix quick wins (fast, low-risk) with big bets (high-impact, longer-term)
- **Consider dependencies**: Some ideas may enable or enhance others
- **Document tradeoffs**: Why did 4th place not make the cut?

**Convergence quality checks**:

- Evaluation criteria are explicit and relevant
- All top ideas scored on all criteria
- Scores are justified (not arbitrary)
- Top selections clearly outperform alternatives
- Tradeoffs are documented
- Runner-up ideas noted for future consideration

## Worked Example

**Problem**: How to increase user retention in the first 30 days?

**Context**: SaaS product, 100k users, 40% churn in the first month, limited eng resources

**Constraints**:

- Must ship within 3 months
- No more than 2 engineer-months of work
- Must work for both free and paid users

**Criteria**:

- Impact on retention (weight: 3x)
- Feasibility with current team (weight: 2x)
- Speed to ship (weight: 1x)

### Diverge: 32 Ideas Generated

  1. Email drip campaign with usage tips
  2. In-app interactive tutorial
  3. Weekly webinar for new users
  4. Gamification with achievement badges
  5. 1-on-1 onboarding calls for high-value users
  6. Contextual tooltips for key features
  7. Progress tracking dashboard
  8. Community forum for peer help
  9. AI chatbot for instant support
  10. Daily usage streak rewards
  11. Personalized feature recommendations
  12. "Success checklist" in first 7 days
  13. Video library of use cases
  14. Slack/Discord community
  15. Monthly power-user showcase
  16. Referral rewards program
  17. Usage analytics dashboard for users
  18. Mobile app push notifications
  19. SMS reminders for inactive users
  20. Quarterly user survey with gift card
  21. In-app messaging for tips
  22. Certification program for expertise
  23. Template library for quick starts
  24. Integration marketplace
  25. Office hours with product team
  26. User-generated content showcase
  27. Automated workflow suggestions
  28. Milestone celebrations (email)
  29. Cohort-based onboarding groups
  30. Seasonal feature highlights
  31. Feedback loop with product updates
  32. Partnership with complementary tools

### Cluster: 6 Themes

**1. Guided Learning (8 ideas)**

- Email drip campaign with usage tips
- In-app interactive tutorial
- Contextual tooltips for key features
- "Success checklist" in first 7 days
- Video library of use cases
- In-app messaging for tips
- Automated workflow suggestions
- Template library for quick starts

**2. Community & Social (7 ideas)**

- Community forum for peer help
- Slack/Discord community
- Monthly power-user showcase
- Office hours with product team
- User-generated content showcase
- Cohort-based onboarding groups
- Partnership with complementary tools

**3. Motivation & Gamification (5 ideas)**

- Gamification with achievement badges
- Daily usage streak rewards
- Progress tracking dashboard
- Milestone celebrations (email)
- Certification program for expertise

**4. Personalization & AI (4 ideas)**

- AI chatbot for instant support
- Personalized feature recommendations
- Usage analytics dashboard for users
- Seasonal feature highlights

**5. Proactive Engagement (5 ideas)**

- Weekly webinar for new users
- Mobile app push notifications
- SMS reminders for inactive users
- Quarterly user survey with gift card
- Feedback loop with product updates

**6. High-Touch Service (3 ideas)**

- 1-on-1 onboarding calls for high-value users
- Referral rewards program
- Integration marketplace

### Converge: Evaluation & Selection

**Scoring** (Impact: 1-10, Feasibility: 1-10, Speed: 1-10):

| Idea | Impact (3x) | Feasibility (2x) | Speed (1x) | Weighted Total |
|------|-------------|------------------|------------|----------------|
| In-app interactive tutorial | 9 | 6 | 7 | 9×3 + 6×2 + 7×1 = 46 |
| Email drip campaign | 7 | 9 | 9 | 7×3 + 9×2 + 9×1 = 48 |
| Success checklist (first 7 days) | 8 | 8 | 8 | 8×3 + 8×2 + 8×1 = 48 |
| Contextual tooltips | 6 | 9 | 9 | 6×3 + 9×2 + 9×1 = 45 |
| Progress tracking dashboard | 8 | 7 | 6 | 8×3 + 7×2 + 6×1 = 44 |
| Template library | 7 | 7 | 8 | 7×3 + 7×2 + 8×1 = 43 |
| Community forum | 6 | 4 | 3 | 6×3 + 4×2 + 3×1 = 29 |
| AI chatbot | 7 | 3 | 2 | 7×3 + 3×2 + 2×1 = 29 |
| 1-on-1 calls | 9 | 5 | 8 | 9×3 + 5×2 + 8×1 = 45 |

### Top 3 Selections

**1. Email Drip Campaign (Score: 48)**

- Why: Highest feasibility and speed, good impact. Can implement with existing tools (no eng time).
- Rationale:
  - Impact (7/10): Proven tactic, industry benchmarks show 10-15% retention improvement
  - Feasibility (9/10): Use existing Mailchimp setup, just need copy + timing
  - Speed (9/10): Can launch in 2 weeks with marketing team
- Next steps:
  - Draft 7-email sequence (days 1, 3, 7, 14, 21, 28, 30)
  - A/B test subject lines and CTAs
  - Measure open rates and feature adoption

**2. Success Checklist (First 7 Days) (Score: 48, tie)**

- Why: Balanced impact, feasibility, and speed. Clear value for new users.
- Rationale:
  - Impact (8/10): Gives users clear path to value, reduces overwhelm
  - Feasibility (8/10): 1 engineer-week for UI + backend tracking
  - Speed (8/10): Can ship in 4 weeks
- Next steps:
  - Define 5-7 "success milestones" (e.g., complete profile, create first project, invite teammate)
  - Build in-app checklist UI
  - Track completion rates per milestone

**3. In-App Interactive Tutorial (Score: 46)**

- Why: Highest impact potential, moderate feasibility and speed.
- Rationale:
  - Impact (9/10): Shows users value immediately, reduces "blank slate" problem
  - Feasibility (6/10): Requires 3-4 engineer-weeks (tooltips + guided flow)
  - Speed (7/10): Can ship MVP in 8 weeks
- Next steps:
  - Design 3-5 step tutorial for core workflow
  - Use existing tooltip library to reduce build time
  - Make tutorial skippable but prominent

### Runner-Ups (For Future Consideration)

**Progress Tracking Dashboard (Score: 44)**

- High impact but slightly slower to build (6-8 weeks)
- Revisit in Q3 after core onboarding stabilizes

**Template Library (Score: 43)**

- Good balance, but requires content creation (not just eng work)
- Explore in parallel with email campaign (marketing can create templates)

**1-on-1 Onboarding Calls (Score: 45, but doesn't scale)**

- Very high impact for high-value users
- Consider as premium offering for enterprise tier only

### Next Steps

**Immediate** (next 2 weeks):

- Finalize email drip sequence copy
- Design success checklist UI mockups
- Scope interactive tutorial feature requirements

**Short-term** (next 1-3 months):

- Launch email drip campaign (week 2)
- Ship success checklist (week 6)
- Ship interactive tutorial MVP (week 10)

**Measurement plan**:

- Track 30-day retention rate weekly
- Target: Improve from 60% to 70% retention
- Break down by cohort (email recipients vs. non-recipients, etc.); see the sketch below
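
One way to compute that cohort breakdown, sketched with pandas; the table layout and column names (`signup`, `last_active`, `got_drip_email`) are hypothetical, not part of this plan:

```python
import pandas as pd

# Hypothetical user records: signup date, last-seen date, and whether the
# user received the drip campaign.
users = pd.DataFrame({
    "signup": pd.to_datetime(["2025-01-01", "2025-01-01", "2025-01-02"]),
    "last_active": pd.to_datetime(["2025-02-15", "2025-01-05", "2025-02-20"]),
    "got_drip_email": [True, False, False],
})

# Retained = still active 30+ days after signup (a simple proxy).
users["retained_30d"] = (users["last_active"] - users["signup"]).dt.days >= 30

# 30-day retention rate per cohort (recipients vs. non-recipients).
print(users.groupby("got_drip_email")["retained_30d"].mean())
```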

**Parking lot** (revisit Q3):

- Progress tracking dashboard
- Template library
- Community forum (once we hit 200k users)