Initial commit

Zhongwei Li
2025-11-30 08:38:26 +08:00
commit 41d9f6b189
304 changed files with 98322 additions and 0 deletions

@@ -0,0 +1,166 @@
---
name: brainstorm-diverge-converge
description: Use when you need to generate many creative options before systematically narrowing to the best choices. Invoke when exploring product ideas, solving open-ended problems, generating strategic alternatives, developing research questions, designing experiments, or when you need both breadth (many ideas) and rigor (principled selection). Use when user mentions brainstorming, ideation, divergent thinking, generating options, or evaluating alternatives.
---
# Brainstorm Diverge-Converge
## Table of Contents
- [Purpose](#purpose)
- [When to Use This Skill](#when-to-use-this-skill)
- [What is Brainstorm Diverge-Converge?](#what-is-brainstorm-diverge-converge)
- [Workflow](#workflow)
- [1. Gather Requirements](#1--gather-requirements)
- [2. Diverge (Generate Ideas)](#2--diverge-generate-ideas)
- [3. Cluster (Group Themes)](#3--cluster-group-themes)
- [4. Converge (Evaluate & Select)](#4--converge-evaluate--select)
- [5. Document & Validate](#5--document--validate)
- [Common Patterns](#common-patterns)
- [Guardrails](#guardrails)
- [Quick Reference](#quick-reference)
## Purpose
Apply structured divergent-convergent thinking to generate many creative options, organize them into meaningful clusters, then systematically evaluate and narrow to the strongest choices. This balances creative exploration with disciplined decision-making.
## When to Use This Skill
- Generating product or feature ideas
- Exploring solution approaches for open-ended problems
- Developing research questions or hypotheses
- Creating marketing or content strategies
- Identifying strategic initiatives or opportunities
- Designing experiments or tests
- Naming products, features, or projects
- Developing interview questions or survey items
- Exploring design alternatives (UI, architecture, process)
- Prioritizing from a large possibility space
- Overcoming creative blocks
- When you need both quantity (many options) and quality (best options)
**Trigger phrases:** "brainstorm", "generate ideas", "explore options", "what are all the ways", "divergent thinking", "ideation", "evaluate alternatives", "narrow down choices"
## What is Brainstorm Diverge-Converge?
A three-phase creative problem-solving method:
- **Diverge (Expand)**: Generate many ideas without judgment or filtering. Focus on quantity and variety. Defer evaluation.
- **Cluster (Organize)**: Group similar ideas into themes or categories. Identify patterns and connections. Create structure from chaos.
- **Converge (Select)**: Evaluate ideas against criteria. Score, rank, or prioritize. Select strongest options for action.
**Quick Example:**
```markdown
# Problem: How to improve customer onboarding?
## Diverge (30 ideas)
- In-app video tutorials
- Interactive walkthroughs
- Email drip campaign
- Live webinar onboarding
- 1-on-1 concierge calls
- ... (25 more ideas)
## Cluster (6 themes)
1. **Self-serve content** (videos, docs, tooltips)
2. **Interactive guidance** (walkthroughs, checklists)
3. **Human touch** (calls, webinars, chat)
4. **Motivation** (gamification, progress tracking)
5. **Timing** (just-in-time help, preemptive)
6. **Social** (community, peer examples)
## Converge (Top 3)
1. Interactive walkthrough (high impact, medium effort) - 8.5/10
2. Email drip campaign (medium impact, low effort) - 8.0/10
3. Just-in-time tooltips (medium impact, low effort) - 7.5/10
```
## Workflow
Copy this checklist and track your progress:
```
Brainstorm Progress:
- [ ] Step 1: Gather requirements
- [ ] Step 2: Diverge (generate ideas)
- [ ] Step 3: Cluster (group themes)
- [ ] Step 4: Converge (evaluate and select)
- [ ] Step 5: Document and validate
```
**Step 1: Gather requirements**
Clarify topic/problem (what are you brainstorming?), goal (what decision will this inform?), constraints (must-haves, no-gos, boundaries), evaluation criteria (what makes an idea "good" - impact, feasibility, cost, speed, risk, alignment), target quantity (suggest 20-50 ideas), and rounds (single session or multiple rounds, default: 1).
**Step 2: Diverge (generate ideas)**
Generate 20-50 ideas without judgment or filtering. Suspend criticism (all ideas valid during divergence), aim for quantity and variety (different types, scales, approaches), and use creative prompts: "What if unlimited resources?", "What would a competitor do?", "Simplest approach?", "Most ambitious?", "Unconventional alternatives?". Output: Numbered list of raw ideas. For simple topics → generate directly. For complex topics → use `resources/template.md` for structured prompts.
**Step 3: Cluster (group themes)**
Organize ideas into 4-8 distinct clusters by identifying patterns, creating categories (mechanism, user/audience, timeline, effort, risk, strategic objective), naming clusters clearly, and checking coverage (distinct approaches). Fewer than 4 = not enough variety, more than 8 = too fragmented. Output: Ideas grouped under cluster labels.
**Step 4: Converge (evaluate and select)**
Define criteria (from step 1), score each idea against them (1-10 or Low/Med/High scale), rank by total/weighted score, select the top 3-5 options, and document tradeoffs (why chosen, what deprioritized). Evaluation patterns: Impact/Effort matrix, weighted scoring, must-have filtering, pairwise comparison. See [Common Patterns](#common-patterns) for domain-specific approaches.
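For the weighted-scoring pattern, a minimal sketch looks like the following (TypeScript; the criteria, weights, and idea names are illustrative assumptions, not prescribed by this skill):
```typescript
// Weighted scoring sketch: criteria, weights, and ideas below are example
// assumptions - substitute the criteria agreed in Step 1.
type Scores = Record<string, number>; // criterion -> score on a 1-10 scale

const weights: Scores = { impact: 3, effort: 2, cost: 1 };

const ideas: { name: string; scores: Scores }[] = [
  { name: "Fix N+1 queries", scores: { impact: 8, effort: 8, cost: 10 } },
  { name: "Add database indexes", scores: { impact: 7, effort: 9, cost: 10 } },
  { name: "Add Redis caching", scores: { impact: 9, effort: 7, cost: 7 } },
];

// Multiply each criterion score by its weight and sum.
const weightedTotal = (s: Scores): number =>
  Object.entries(weights).reduce((sum, [criterion, w]) => sum + (s[criterion] ?? 0) * w, 0);

const ranked = ideas
  .map((idea) => ({ ...idea, total: weightedTotal(idea.scores) }))
  .sort((a, b) => b.total - a.total);

ranked.forEach((idea, i) => console.log(`${i + 1}. ${idea.name}: ${idea.total}`));
// 1. Fix N+1 queries: 50, 2. Add database indexes: 49, 3. Add Redis caching: 48
```
The same structure covers must-have filtering (drop ideas before scoring) or Low/Med/High scales (map to 2/5/8 before weighting).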
**Step 5: Document and validate**
Create `brainstorm-diverge-converge.md` with: problem statement, diverge (full list), cluster (organized themes), converge (scored/ranked/selected), and next steps. Validate using `resources/evaluators/rubric_brainstorm_diverge_converge.json`: verify 20+ ideas with variety, distinct clusters, explicit criteria, consistent scoring, top selections clearly better, actionable next steps. Minimum standard: Score ≥ 3.5.
## Common Patterns
**For product/feature ideation:**
- Diverge: 30-50 feature ideas
- Cluster by: User need, use case, or feature type
- Converge: Impact vs. effort scoring
- Select: Top 3-5 for roadmap
**For problem-solving:**
- Diverge: 20-40 solution approaches
- Cluster by: Mechanism (how it solves problem)
- Converge: Feasibility vs. effectiveness
- Select: Top 2-3 to prototype
**For research questions:**
- Diverge: 25-40 potential questions
- Cluster by: Research method or domain
- Converge: Novelty, tractability, impact
- Select: Top 3-5 to investigate
**For strategic planning:**
- Diverge: 20-30 strategic initiatives
- Cluster by: Time horizon or strategic pillar
- Converge: Strategic value vs. resource requirements
- Select: Top 5 for quarterly planning
## Guardrails
**Do:**
- Generate at least 20 ideas in diverge phase (quantity matters)
- Suspend judgment during divergence (criticism kills creativity)
- Create distinct clusters (avoid overlap and confusion)
- Use explicit, relevant criteria for convergence (not vague "goodness")
- Score consistently across all ideas
- Document why top ideas were selected (transparency)
- Include "runner-up" ideas (for later consideration)
**Don't:**
- Filter ideas during divergence (defeats the purpose)
- Create clusters that are too similar or overlapping
- Use vague evaluation criteria ("better", "more appealing")
- Cherry-pick scores to favor pet ideas
- Select ideas without systematic evaluation
- Ignore constraints from requirements gathering
- Skip documentation of the full process
## Quick Reference
- **Template**: `resources/template.md` - Structured prompts and techniques for diverge-cluster-converge
- **Quality rubric**: `resources/evaluators/rubric_brainstorm_diverge_converge.json`
- **Output file**: `brainstorm-diverge-converge.md`
- **Typical idea count**: 20-50 ideas → 4-8 clusters → 3-5 selections
- **Common criteria**: Impact, Feasibility, Cost, Speed, Risk, Alignment

@@ -0,0 +1,135 @@
{
"name": "Brainstorm Diverge-Converge Quality Rubric",
"scale": {
"min": 1,
"max": 5,
"description": "1=Poor, 2=Fair, 3=Good, 4=Very Good, 5=Excellent"
},
"criteria": [
{
"name": "Divergence Quantity",
"description": "Generated sufficient number of ideas to explore the possibility space",
"scoring": {
"1": "Fewer than 10 ideas - insufficient exploration",
"2": "10-19 ideas - minimal exploration",
"3": "20-29 ideas - adequate exploration",
"4": "30-49 ideas - thorough exploration",
"5": "50+ ideas - comprehensive exploration of possibilities"
}
},
{
"name": "Divergence Variety",
"description": "Ideas show diversity in approach, scale, and type (not all similar)",
"scoring": {
"1": "All ideas are nearly identical or very similar",
"2": "Mostly similar ideas with 1-2 different approaches",
"3": "Mix of similar and different ideas, some variety present",
"4": "Good variety across multiple dimensions (incremental/radical, short/long-term, etc.)",
"5": "Exceptional variety - ideas span multiple approaches, scales, mechanisms, and perspectives"
}
},
{
"name": "Divergence Creativity",
"description": "Includes both safe/obvious ideas and creative/unconventional ideas",
"scoring": {
"1": "Only obvious, conventional ideas",
"2": "Mostly obvious ideas with 1-2 slightly creative ones",
"3": "Mix of obvious and creative ideas, some boundary-pushing",
"4": "Good balance of safe and creative ideas with several unconventional approaches",
"5": "Exceptional creativity - includes wild ideas that challenge assumptions alongside practical ones"
}
},
{
"name": "Cluster Quality",
"description": "Ideas are organized into meaningful, distinct, well-labeled themes",
"scoring": {
"1": "No clustering or random groupings with unclear logic",
"2": "Poor clustering - significant overlap between clusters or vague labels",
"3": "Decent clustering - mostly distinct groups with adequate labels",
"4": "Good clustering - clear distinct themes with descriptive, specific labels",
"5": "Exceptional clustering - perfectly distinct themes with insightful labels that reveal patterns"
}
},
{
"name": "Cluster Coverage",
"description": "Clusters represent meaningfully different approaches (4-8 clusters typical)",
"scoring": {
"1": "1-2 clusters (insufficient structure) or 12+ clusters (over-fragmented)",
"2": "3 clusters or 10-11 clusters (suboptimal structure)",
"3": "4-8 clusters with some overlap between them",
"4": "4-8 clusters that are distinct and well-balanced",
"5": "4-8 clusters that are distinct, balanced, and reveal strategic dimensions of the problem"
}
},
{
"name": "Evaluation Criteria Clarity",
"description": "Convergence criteria are explicit, relevant, and well-defined",
"scoring": {
"1": "No criteria stated or purely subjective ('better', 'best')",
"2": "Vague criteria without clear definition",
"3": "Criteria stated but could be more specific or relevant",
"4": "Clear, specific, relevant criteria (e.g., impact, feasibility, cost)",
"5": "Exceptional criteria - specific, relevant, weighted appropriately, with clear definitions"
}
},
{
"name": "Scoring Rigor",
"description": "Ideas are scored systematically with justified ratings",
"scoring": {
"1": "No scoring or arbitrary rankings",
"2": "Scoring present but inconsistent or unjustified",
"3": "Basic scoring with some justification",
"4": "Systematic scoring with clear justification for ratings",
"5": "Exceptional scoring - consistent, justified, includes sensitivity analysis or confidence intervals"
}
},
{
"name": "Selection Quality",
"description": "Top selections clearly outperform alternatives based on stated criteria",
"scoring": {
"1": "Selections don't match scores or criteria, appear arbitrary",
"2": "Selections somewhat aligned with scores but weak justification",
"3": "Selections aligned with scores, basic justification provided",
"4": "Selections clearly justified based on scores and criteria, tradeoffs noted",
"5": "Exceptional selections - fully justified with explicit tradeoff analysis and consideration of dependencies"
}
},
{
"name": "Actionability",
"description": "Output includes clear next steps and decision-ready recommendations",
"scoring": {
"1": "No next steps or vague 'implement this' statements",
"2": "Generic next steps without specifics",
"3": "Basic next steps with some specific actions",
"4": "Clear, specific next steps with timelines and owners",
"5": "Exceptional actionability - detailed implementation plan with milestones, resources, and success metrics"
}
},
{
"name": "Process Integrity",
"description": "Follows diverge-cluster-converge sequence without premature filtering",
"scoring": {
"1": "Process violated - filtered during divergence or skipped clustering",
"2": "Some premature filtering or weak clustering step",
"3": "Process mostly followed with minor shortcuts",
"4": "Process followed correctly with clear phase separation",
"5": "Exceptional process integrity - clear phase separation, no premature judgment, explicit constraints"
}
}
],
"overall_assessment": {
"thresholds": {
"excellent": "Average score ≥ 4.5 (publication or high-stakes use)",
"very_good": "Average score ≥ 4.0 (most strategic decisions should aim for this)",
"good": "Average score ≥ 3.5 (minimum for important decisions)",
"acceptable": "Average score ≥ 3.0 (workable for low-stakes brainstorms)",
"needs_rework": "Average score < 3.0 (redo before using for decisions)"
},
"stakes_guidance": {
"low_stakes": "Exploratory ideation, early brainstorming: aim for ≥ 3.0",
"medium_stakes": "Feature prioritization, project selection: aim for ≥ 3.5",
"high_stakes": "Strategic initiatives, resource allocation: aim for ≥ 4.0"
}
},
"usage_instructions": "Rate each criterion on 1-5 scale. Calculate average. For important decisions, minimum score is 3.5. For high-stakes strategic choices, aim for ≥4.0. Check especially for divergence quantity (at least 20 ideas), cluster quality (distinct themes), and evaluation rigor (explicit criteria with justified scoring)."
}

@@ -0,0 +1,394 @@
# Brainstorm: API Performance Optimization Strategies
## Problem Statement
**What we're solving**: API response time has degraded from 200ms (p95) to 800ms (p95) over the past 3 months. Users are experiencing slow page loads and some are timing out.
**Decision to make**: Which optimization approaches should we prioritize for the next quarter to bring p95 response time back to <300ms?
**Context**:
- REST API serving 50k requests/day
- PostgreSQL database, 200GB data
- Node.js/Express backend
- Current p95: 800ms, p50: 350ms
- Team: 3 backend engineers, 1 devops
- Quarterly engineering budget: 4 engineer-months
**Constraints**:
- Cannot break existing API contracts (backwards compatible)
- Must maintain 99.9% uptime during changes
- No more than $2k/month additional infrastructure cost
- Must ship improvements within 3 months
---
## Diverge: Generate Ideas
**Target**: 40 ideas
**Prompt**: Generate as many ways as possible to improve API response time. Suspend judgment. All ideas are valid - from quick wins to major architectural changes.
### All Ideas
1. Add Redis caching layer for frequent queries
2. Database query optimization (add indexes)
3. Implement database connection pooling
4. Use GraphQL to reduce over-fetching
5. Add CDN for static assets
6. Implement HTTP/2 server push
7. Compress API responses with gzip
8. Paginate large result sets
9. Use database read replicas
10. Implement response caching headers (ETag, If-None-Match)
11. Migrate to serverless (AWS Lambda)
12. Add API gateway for request routing
13. Implement request batching
14. Use database query result caching
15. Optimize N+1 query problems
16. Implement lazy loading for related data
17. Switch to gRPC from REST
18. Add application-level caching (in-memory)
19. Optimize JSON serialization
20. Implement database partitioning
21. Use faster ORM or raw SQL
22. Add async processing for slow operations
23. Implement API rate limiting to prevent overload
24. Optimize Docker container size
25. Use database materialized views
26. Implement query result streaming
27. Add load balancer for horizontal scaling
28. Optimize database schema (denormalization)
29. Implement incremental/delta responses
30. Use WebSockets for real-time data
31. Migrate to NoSQL (MongoDB, DynamoDB)
32. Implement API response compression (Brotli)
33. Add edge caching (Cloudflare Workers)
34. Use database archival for old data
35. Implement request queuing/throttling
36. Optimize API middleware chain
37. Use faster JSON parser (simdjson)
38. Implement selective field loading
39. Add monitoring and alerting for slow queries
40. Database vacuum/analyze for query planner
**Total generated**: 40 ideas
---
## Cluster: Organize Themes
**Goal**: Group similar ideas into 4-8 distinct categories
### Cluster 1: Caching Strategies (9 ideas)
- Add Redis caching layer for frequent queries
- Implement response caching headers (ETag, If-None-Match)
- Use database query result caching
- Add application-level caching (in-memory)
- Add CDN for static assets
- Add edge caching (Cloudflare Workers)
- Implement response caching at API gateway
- Use database materialized views
- Cache computed/aggregated results
### Cluster 2: Database Query Optimization (11 ideas)
- Database query optimization (add indexes)
- Optimize N+1 query problems
- Use faster ORM or raw SQL
- Implement selective field loading
- Optimize database schema (denormalization)
- Add monitoring and alerting for slow queries
- Database vacuum/analyze for query planner
- Implement lazy loading for related data
- Use database query result caching (also in caching)
- Database archival for old data
- Database partitioning
### Cluster 3: Data Transfer Optimization (7 ideas)
- Compress API responses with gzip/Brotli
- Paginate large result sets
- Implement request batching
- Optimize JSON serialization
- Use faster JSON parser (simdjson)
- Implement incremental/delta responses
- Implement query result streaming
### Cluster 4: Infrastructure Scaling (7 ideas)
- Use database read replicas
- Add load balancer for horizontal scaling
- Implement database connection pooling
- Optimize Docker container size
- Migrate to serverless (AWS Lambda)
- Add API gateway for request routing
- Implement request queuing/throttling
### Cluster 5: Architectural Changes (4 ideas)
- Use GraphQL to reduce over-fetching
- Switch to gRPC from REST
- Use WebSockets for real-time data
- Migrate to NoSQL (MongoDB, DynamoDB)
### Cluster 6: Async & Offloading (2 ideas)
- Add async processing for slow operations
- Implement background job processing for heavy tasks
**Total clusters**: 6 themes
---
## Converge: Evaluate & Select
**Evaluation Criteria**:
1. **Impact on p95 latency** (weight: 3x) - How much will this reduce response time?
2. **Implementation effort** (weight: 2x) - Engineering time required (lower = better)
3. **Infrastructure cost** (weight: 1x) - Additional monthly cost (lower = better)
**Scoring scale**: 1-10 (higher = better)
### Scored Ideas
| Idea | Impact (3x) | Effort (2x) | Cost (1x) | Weighted Total |
|------|------------|------------|-----------|----------------|
| Add Redis caching | 9 | 7 | 7 | 9×3 + 7×2 + 7×1 = 48 |
| Optimize N+1 queries | 8 | 8 | 10 | 8×3 + 8×2 + 10×1 = 50 |
| Add database indexes | 7 | 9 | 10 | 7×3 + 9×2 + 10×1 = 49 |
| Response compression (gzip) | 6 | 9 | 10 | 6×3 + 9×2 + 10×1 = 46 |
| Database connection pooling | 6 | 8 | 10 | 6×3 + 8×2 + 10×1 = 44 |
| Paginate large results | 7 | 7 | 10 | 7×3 + 7×2 + 10×1 = 45 |
| DB read replicas | 8 | 5 | 4 | 8×3 + 5×2 + 4×1 = 38 |
| Async processing | 6 | 6 | 8 | 6×3 + 6×2 + 8×1 = 38 |
| GraphQL migration | 7 | 3 | 9 | 7×3 + 3×2 + 9×1 = 36 |
| Serverless migration | 5 | 2 | 5 | 5×3 + 2×2 + 5×1 = 24 |
**Scoring notes**:
- **Impact**: Based on estimated latency reduction (9-10 = >400ms, 7-8 = 200-400ms, 5-6 = 100-200ms)
- **Effort**: Inverse scale (9-10 = <1 week, 7-8 = 1-2 weeks, 5-6 = 3-4 weeks, 3-4 = 1-2 months, 1-2 = 3+ months)
- **Cost**: Inverse scale (10 = $0, 8-9 = <$200/mo, 6-7 = <$500/mo, 4-5 = <$1k/mo, 1-3 = >$1k/mo)
---
### Top 3 Selections
**1. Fix N+1 Query Problems** (Score: 50)
**Why selected**: Highest overall score - high impact, reasonable effort, zero cost
**Rationale**:
- **Impact (8/10)**: N+1 queries are a common culprit for slow APIs. Profiling shows several endpoints making 50-100 queries per request. Fixing this could reduce p95 by 300-500ms.
- **Effort (8/10)**: Can identify with APM tools (DataDog), fix iteratively. Estimated 2-3 weeks for main endpoints.
- **Cost (10/10)**: Zero additional infrastructure cost - purely code optimization.
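As one hedged illustration of the eager-loading fix (the `pg` client, table names, and endpoint shape are assumptions for this sketch, not the actual codebase):
```typescript
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from PG* env vars

// Before: one query for the page of orders, then one query per order for its
// customer - 1 + N round trips per request.
async function listOrdersNPlusOne() {
  const { rows: orders } = await pool.query(
    "SELECT id, customer_id, total FROM orders LIMIT 50"
  );
  return Promise.all(
    orders.map(async (order) => {
      const { rows } = await pool.query(
        "SELECT name FROM customers WHERE id = $1",
        [order.customer_id]
      );
      return { ...order, customer_name: rows[0]?.name };
    })
  );
}

// After: a single JOIN returns the same data in one round trip.
async function listOrdersJoined() {
  const { rows } = await pool.query(
    `SELECT o.id, o.total, c.name AS customer_name
       FROM orders o
       JOIN customers c ON c.id = o.customer_id
      LIMIT 50`
  );
  return rows;
}
```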
**Next steps**:
- Week 1: Profile top 10 slowest endpoints with APM to identify N+1 patterns
- Week 2-3: Implement eager loading/joins for identified queries
- Week 4: Deploy with feature flags, measure impact
- **Expected improvement**: Reduce p95 from 800ms to 500-600ms
**Measurement**:
- Track p95/p99 latency per endpoint before/after
- Monitor database query counts (should decrease significantly)
- Verify no increase in memory usage from eager loading
---
**2. Add Database Indexes** (Score: 49)
**Why selected**: Second highest score - very low effort for solid impact
**Rationale**:
- **Impact (7/10)**: Database query analysis shows several full table scans. Adding indexes could reduce individual query time by 50-80%.
- **Effort (9/10)**: Quick wins - can identify missing indexes via EXPLAIN ANALYZE, add indexes with minimal risk. Estimated 1 week.
- **Cost (10/10)**: Marginal storage cost for indexes (~5-10GB), no new infrastructure.
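A sketch of that workflow (query, table, and index names are hypothetical; this is not a drop-in script):
```typescript
import { Pool } from "pg";

const pool = new Pool();

async function analyzeAndIndex() {
  // 1. Confirm the suspected full table scan before changing anything.
  const { rows: plan } = await pool.query(
    "EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42 AND status = 'open'"
  );
  plan.forEach((row) => console.log(row["QUERY PLAN"])); // look for "Seq Scan on orders"

  // 2. Add a composite index on the WHERE columns. CONCURRENTLY avoids
  //    blocking writes while the index builds (run outside a transaction).
  await pool.query(
    "CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_orders_customer_status ON orders (customer_id, status)"
  );

  // 3. Later, check the index is actually being used.
  const { rows: usage } = await pool.query(
    "SELECT indexrelname, idx_scan FROM pg_stat_user_indexes WHERE relname = 'orders'"
  );
  console.table(usage);
}
```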
**Next steps**:
- Day 1-2: Run EXPLAIN ANALYZE on slow queries (from slow query log)
- Day 3-4: Create indexes on foreign keys, WHERE clause columns, JOIN columns
- Day 5: Deploy indexes during low-traffic window, monitor impact
- **Expected improvement**: Reduce p95 by 100-200ms for index-heavy endpoints
**Measurement**:
- Compare query execution plans before/after (table scan → index scan)
- Track index usage with pg_stat_user_indexes
- Monitor index size growth
**Considerations**:
- Some writes may slow down slightly (index maintenance)
- Test on staging first to verify no lock contention
---
**3. Implement Redis Caching** (Score: 48)
**Why selected**: Highest impact potential, moderate effort and cost
**Rationale**:
- **Impact (9/10)**: Caching frequently-accessed data (user profiles, config, lookup tables) could eliminate 60-70% of database queries. Massive impact for cacheable endpoints.
- **Effort (7/10)**: Moderate effort - set up Redis, implement caching layer, handle cache invalidation. Estimated 2-3 weeks.
- **Cost (7/10)**: Redis managed service ~$200-400/month (ElastiCache t3.medium)
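A minimal cache-aside sketch, assuming the `ioredis` and `pg` clients (key naming, TTL, and the query are illustrative):
```typescript
import Redis from "ioredis";
import { Pool } from "pg";

const redis = new Redis(); // point at the ElastiCache endpoint in production
const pool = new Pool();

// Cache-aside read: try Redis, fall back to PostgreSQL, then populate the cache.
async function getUserProfile(userId: number) {
  const key = `user:profile:${userId}`;

  const cached = await redis.get(key);
  if (cached !== null) return JSON.parse(cached); // cache hit

  const { rows } = await pool.query(
    "SELECT id, name, plan FROM users WHERE id = $1",
    [userId]
  );
  const profile = rows[0] ?? null;

  if (profile) {
    await redis.set(key, JSON.stringify(profile), "EX", 300); // 5-minute TTL
  }
  return profile;
}

// Event-based invalidation: call wherever the profile is written.
async function invalidateUserProfile(userId: number) {
  await redis.del(`user:profile:${userId}`);
}
```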
**Next steps**:
- Week 1: Analyze request patterns - identify most-frequent queries for caching
- Week 2: Set up Redis (ElastiCache), implement cache-aside pattern for top 3 endpoints
- Week 3: Implement cache invalidation strategy (TTL + event-based)
- Week 4: Rollout with monitoring
- **Expected improvement**: Reduce p95 from 800ms to 300-400ms for cached endpoints (cache hit rate target: >80%)
**Measurement**:
- Track cache hit rate (target >80%)
- Monitor Redis memory usage and eviction rate
- Compare endpoint latency with/without cache
- Track database query reduction
**Considerations**:
- Cache invalidation complexity (implement carefully to avoid stale data)
- Redis failover strategy (what happens if Redis is down?)
- Cold start performance (first request still slow)
---
### Runner-Ups (For Future Consideration)
**Response Compression (gzip)** (Score: 46)
- Very quick win (1-2 days to implement)
- Modest impact for large payloads (~20-30% response size reduction → ~100ms latency improvement)
- **Recommendation**: Implement in parallel with top 3 (low effort, no downside)
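For the Express service described above, this quick win is roughly one middleware line (sketch assuming the `compression` package; tune the threshold to your payload sizes):
```typescript
import express from "express";
import compression from "compression";

const app = express();

// Compress responses above ~1 KB (the middleware's default threshold).
// Brotli would need a different middleware or the reverse proxy/CDN layer.
app.use(compression());

app.get("/orders", (_req, res) => {
  res.json({ orders: [] }); // placeholder payload
});

app.listen(3000);
```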
**Database Connection Pooling** (Score: 44)
- Quick to implement if not already in place
- Reduces connection overhead
- **Recommendation**: Verify current pooling configuration first - may already be optimized
**Pagination** (Score: 45)
- Essential for endpoints returning large result sets
- Quick to implement (2-3 days)
- **Recommendation**: Implement in parallel - protect against future growth
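A keyset-pagination sketch for those endpoints (cursor column and page size are illustrative; OFFSET-based paging also works but degrades on deep pages):
```typescript
import { Pool } from "pg";

// Keyset (cursor) pagination: the client passes back the last id it saw.
async function listOrdersPage(pool: Pool, afterId = 0, pageSize = 50) {
  const { rows } = await pool.query(
    "SELECT id, total, created_at FROM orders WHERE id > $1 ORDER BY id LIMIT $2",
    [afterId, pageSize]
  );
  const nextCursor = rows.length === pageSize ? rows[rows.length - 1].id : null;
  return { items: rows, nextCursor };
}
```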
**Database Read Replicas** (Score: 38)
- Good for read-heavy workload scaling
- Higher cost (~$500-800/month)
- **Recommendation**: Defer to Q2 after quick wins exhausted - consider if traffic grows 2-3x
---
## Next Steps
### Immediate Actions (Week 1-2)
**Priority 1: N+1 Query Optimization**
- [ ] Enable APM detailed query tracing
- [ ] Profile top 10 slowest endpoints
- [ ] Create backlog of N+1 fixes prioritized by impact
- [ ] Assign to Engineer A
**Priority 2: Database Index Analysis**
- [ ] Export slow query log (queries >500ms)
- [ ] Run EXPLAIN ANALYZE on top 20 slow queries
- [ ] Identify missing indexes
- [ ] Assign to Engineer B
**Priority 3: Redis Caching Planning**
- [ ] Analyze request patterns to identify cacheable data
- [ ] Design cache key strategy
- [ ] Document cache invalidation approach
- [ ] Get budget approval for Redis ($300/month)
- [ ] Assign to Engineer C
**Quick Win (parallel)**:
- [ ] Implement gzip compression (Engineer A, 4 hours)
- [ ] Verify connection pooling config (Engineer B, 2 hours)
- [ ] Add pagination to `/users` and `/orders` endpoints (Engineer C, 1 day)
---
### Timeline
**Week 1-2**: Analysis + quick wins
- N+1 profiling complete
- Index analysis complete
- Redis architecture designed
- Gzip compression live
- Pagination live for 2 endpoints
**Week 3-4**: N+1 fixes + Indexes
- Top 5 N+1 queries fixed and deployed
- 10-15 database indexes added
- **Target**: p95 drops to 600ms
**Week 5-7**: Redis caching
- Redis infrastructure provisioned
- Top 3 endpoints cached
- Cache invalidation tested
- **Target**: p95 drops to 350ms for cached endpoints
**Week 8-9**: Measure, iterate, polish
- Monitor metrics
- Fix any regressions
- Extend caching to 5 more endpoints
- **Target**: Overall p95 <300ms
**Week 10-12**: Buffer for unknowns
- Address unexpected issues
- Optimize further if needed
- Document learnings
---
### Success Criteria
**Primary**:
- [ ] p95 latency <300ms (currently 800ms)
- [ ] p99 latency <600ms (currently 1.5s)
- [ ] No increase in error rate
- [ ] 99.9% uptime maintained
**Secondary**:
- [ ] Database query count reduced by >40%
- [ ] Cache hit rate >80% for cached endpoints
- [ ] Additional infrastructure cost <$500/month
**Monitoring**:
- Daily p95/p99 latency dashboard
- Weekly review of slow query log
- Redis cache hit rate tracking
- Database connection pool utilization
---
### Risks & Mitigation
**Risk 1: N+1 fixes increase memory usage**
- **Mitigation**: Profile memory before/after, implement pagination if needed
- **Rollback**: Revert to lazy loading if memory spikes >20%
**Risk 2: Cache invalidation bugs cause stale data**
- **Mitigation**: Start with short TTL (5 min), add event-based invalidation gradually
- **Rollback**: Disable caching for affected endpoints immediately
**Risk 3: Index additions cause write performance degradation**
- **Mitigation**: Test on staging with production-like load, monitor write latency
- **Rollback**: Drop problematic indexes
**Risk 4: Timeline slips due to complexity**
- **Mitigation**: Front-load quick wins (gzip, indexes) to show early progress
- **Contingency**: Descope Redis to Q2 if needed, focus on N+1 and indexes
---
## Rubric Self-Assessment
Using `rubric_brainstorm_diverge_converge.json`:
**Scores**:
1. Divergence Quantity: 5/5 (40 ideas - comprehensive exploration)
2. Divergence Variety: 4/5 (good variety from quick fixes to major architecture changes)
3. Divergence Creativity: 4/5 (includes both practical and ambitious ideas)
4. Cluster Quality: 5/5 (6 distinct, well-labeled themes)
5. Cluster Coverage: 5/5 (6 clusters covering infrastructure, data, architecture)
6. Evaluation Criteria Clarity: 5/5 (impact, effort, cost - specific and weighted)
7. Scoring Rigor: 4/5 (systematic scoring with justification)
8. Selection Quality: 5/5 (clear top 3 with tradeoff analysis)
9. Actionability: 5/5 (detailed timeline, owners, success criteria)
10. Process Integrity: 5/5 (clear phase separation, no premature filtering)
**Average**: 4.7/5 - Excellent (high-stakes technical decision quality)
**Assessment**: This brainstorm is ready for use in prioritizing engineering work. Strong divergence phase with 40 varied ideas, clear clustering by mechanism, and rigorous convergence with weighted scoring. Actionable plan with timeline and risk mitigation.

@@ -0,0 +1,519 @@
# Brainstorm Diverge-Converge Template
## Workflow
Copy this checklist and track your progress:
```
Brainstorm Progress:
- [ ] Step 1: Define problem and criteria using template structure
- [ ] Step 2: Diverge with creative prompts and techniques
- [ ] Step 3: Cluster using bottom-up or top-down methods
- [ ] Step 4: Converge with systematic scoring
- [ ] Step 5: Document selections and next steps
```
**Step 1: Define problem and criteria using template structure**
Fill in problem statement, decision context, constraints, and evaluation criteria (3-5 criteria that matter for your context). Use [Quick Template](#quick-template) to structure. See [Detailed Guidance](#detailed-guidance) for criteria selection.
**Step 2: Diverge with creative prompts and techniques**
Generate 20-50 ideas using SCAMPER prompts, perspective shifting, constraint removal, and analogies. Suspend judgment, aim for quantity and variety. See [Phase 1: Diverge](#phase-1-diverge-generate-ideas) for stimulation techniques and quality checks.
**Step 3: Cluster using bottom-up or top-down methods**
Group similar ideas into 4-8 distinct clusters. Use bottom-up clustering (identify natural groupings) or top-down (predefined categories). Name clusters clearly and specifically. See [Phase 2: Cluster](#phase-2-cluster-organize-themes) for methods and quality checks.
**Step 4: Converge with systematic scoring**
Score ideas on defined criteria (1-10 scale or Low/Med/High), rank by total/weighted score, and select top 3-5. Document tradeoffs and runner-ups. See [Phase 3: Converge](#phase-3-converge-evaluate--select) for scoring approaches and selection guidelines.
**Step 5: Document selections and next steps**
Fill in top selections with rationale, next steps, and timeline. Include runner-ups for future consideration and measurement plan. See [Worked Example](#worked-example) for complete example.
## Quick Template
```markdown
# Brainstorm: {Topic}
## Problem Statement
**What we're solving**: {Clear description of problem or opportunity}
**Decision to make**: {What will we do with the output?}
**Constraints**: {Must-haves, no-gos, boundaries}
---
## Diverge: Generate Ideas
**Target**: {20-50 ideas}
**Prompt**: Generate as many ideas as possible for {topic}. Suspend judgment. All ideas are valid.
### All Ideas
1. {Idea 1}
2. {Idea 2}
3. {Idea 3}
... (continue to target number)
**Total generated**: {N} ideas
---
## Cluster: Organize Themes
**Goal**: Group similar ideas into 4-8 distinct categories
### Cluster 1: {Theme Name}
- {Idea A}
- {Idea B}
- {Idea C}
### Cluster 2: {Theme Name}
- {Idea D}
- {Idea E}
... (continue for all clusters)
**Total clusters**: {N} themes
---
## Converge: Evaluate & Select
**Evaluation Criteria**:
1. {Criterion 1} (weight: {X}x)
2. {Criterion 2} (weight: {X}x)
3. {Criterion 3} (weight: {X}x)
### Scored Ideas
| Idea | {Criterion 1} | {Criterion 2} | {Criterion 3} | Total |
|------|--------------|--------------|--------------|-------|
| {Top idea 1} | {score} | {score} | {score} | {total} |
| {Top idea 2} | {score} | {score} | {score} | {total} |
| {Top idea 3} | {score} | {score} | {score} | {total} |
### Top Selections
**1. {Idea Name}** (Score: {X}/10)
- Why selected: {Rationale}
- Next steps: {Immediate actions}
**2. {Idea Name}** (Score: {X}/10)
- Why selected: {Rationale}
- Next steps: {Immediate actions}
**3. {Idea Name}** (Score: {X}/10)
- Why selected: {Rationale}
- Next steps: {Immediate actions}
### Runner-Ups (For Future Consideration)
- {Idea with potential but not top priority}
- {Another promising idea}
---
## Next Steps
**Immediate**:
- {Action 1 based on top selection}
- {Action 2}
**Short-term** (next 2-4 weeks):
- {Action for second priority}
**Parking lot** (revisit later):
- {Ideas to reconsider in different context}
```
---
## Detailed Guidance
### Phase 1: Diverge (Generate Ideas)
**Goal**: Generate maximum quantity and variety of ideas
**Techniques to stimulate ideas**:
1. **Classic brainstorming**: Free-flow idea generation
2. **SCAMPER prompts**:
- Substitute: What could we replace?
- Combine: What could we merge?
- Adapt: What could we adjust?
- Modify: What could we change?
- Put to other uses: What else could this do?
- Eliminate: What could we remove?
- Reverse: What if we did the opposite?
3. **Perspective shifting**:
- "What would {competitor/expert/user type} do?"
- "What if we had 10x the budget?"
- "What if we had 1/10th the budget?"
- "What if we had to launch tomorrow?"
- "What's the most unconventional approach?"
4. **Constraint removal**:
- "What if technical limitations didn't exist?"
- "What if we didn't care about cost?"
- "What if we ignored industry norms?"
5. **Analogies**:
- "How do other industries solve similar problems?"
- "What can we learn from nature?"
- "What historical precedents exist?"
**Divergence quality checks**:
- [ ] Generated at least 20 ideas (minimum)
- [ ] Ideas vary in type/approach (not all incremental or all radical)
- [ ] Included "wild" ideas (push boundaries)
- [ ] Included "safe" ideas (low risk)
- [ ] Covered different scales (quick wins and long-term bets)
- [ ] No premature filtering (saved criticism for converge phase)
**Common divergence mistakes**:
- Stopping too early (quantity breeds quality)
- Self-censoring "bad" ideas (they often spark good ones)
- Focusing only on obvious solutions
- Letting one person/perspective dominate
- Jumping to evaluation too quickly
---
### Phase 2: Cluster (Organize Themes)
**Goal**: Create meaningful structure from raw ideas
**Clustering methods**:
1. **Bottom-up clustering** (recommended for most cases):
- Read through all ideas
- Identify natural groupings (2-3 similar ideas)
- Label each group
- Assign remaining ideas to groups
- Refine group labels for clarity
2. **Top-down clustering**:
- Define categories upfront (e.g., short-term/long-term, user types, etc.)
- Assign ideas to predefined categories
- Adjust categories if many ideas don't fit
3. **Affinity mapping** (for large idea sets):
- Group ideas that "feel similar"
- Name groups after grouping (not before)
- Create sub-clusters if main clusters are too large
**Cluster naming guidelines**:
- Use descriptive, specific labels (not generic)
- Good: "Automated self-service tools", Bad: "Automation"
- Good: "Human high-touch onboarding", Bad: "Customer service"
- Include mechanism or approach in name when possible
**Cluster quality checks**:
- [ ] 4-8 clusters (sweet spot for most topics)
- [ ] Clusters are distinct (minimal overlap)
- [ ] Clusters are balanced (not 1 idea in one cluster, 20 in another)
- [ ] Cluster names are clear and specific
- [ ] All ideas assigned to a cluster
- [ ] Clusters represent meaningfully different approaches
**Handling edge cases**:
- **Outliers**: Create "Other/Misc" cluster for ideas that don't fit, or leave unclustered if very few
- **Ideas that fit multiple clusters**: Assign to best-fit cluster, note cross-cluster themes
- **Too many clusters** (>10): Merge similar clusters or create super-clusters
- **Too few clusters** (<4): Consider whether ideas truly vary, or subdivide large clusters
---
### Phase 3: Converge (Evaluate & Select)
**Goal**: Systematically identify strongest ideas
**Step 1: Define Evaluation Criteria**
Choose 3-5 criteria that matter for your context:
**Common criteria**:
| Criterion | Description | When to use |
|-----------|-------------|-------------|
| **Impact** | How much value does this create? | Almost always |
| **Feasibility** | How easy is this to implement? | When resources are constrained |
| **Cost** | What's the financial investment? | When budget is limited |
| **Speed** | How quickly can we do this? | When time is critical |
| **Risk** | What could go wrong? | For high-stakes decisions |
| **Alignment** | Does this fit our strategy? | For strategic decisions |
| **Novelty** | How unique/innovative is this? | For competitive differentiation |
| **Reversibility** | Can we undo this if wrong? | For experimental approaches |
| **Learning value** | What will we learn? | For research/exploration |
| **User value** | How much do users benefit? | Product/feature decisions |
**Weighting criteria** (optional):
- Assign importance weights (e.g., 3x for impact, 2x for feasibility, 1x for speed)
- Multiply scores by weights before summing
- Use when some criteria matter much more than others
**Step 2: Score Ideas**
**Scoring approaches**:
1. **Simple 1-10 scale** (recommended for most cases):
- 1-3: Low (weak on this criterion)
- 4-6: Medium (moderate on this criterion)
- 7-9: High (strong on this criterion)
- 10: Exceptional (best possible)
2. **Low/Medium/High**:
- Faster but less precise
- Convert to numbers for ranking (Low=2, Med=5, High=8)
3. **Pairwise comparison**:
- Compare each idea to every other idea
- Count "wins" for each idea
- Slower but more thorough (good for critical decisions)
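A count-the-wins sketch for pairwise comparison (the per-pair judgment is a stub here; in practice each comparison is a human call against your criteria):
```typescript
type Idea = { name: string; total: number };

// Illustrative ideas with pre-computed weighted totals (matching the worked example below).
const ideas: Idea[] = [
  { name: "Email drip campaign", total: 48 },
  { name: "Success checklist", total: 48 },
  { name: "Interactive tutorial", total: 46 },
  { name: "Contextual tooltips", total: 45 },
];

// Stub judgment: replace with the actual pairwise decision (ties need a rule).
const beats = (a: Idea, b: Idea): boolean => a.total > b.total;

const wins = new Map<string, number>(ideas.map((i) => [i.name, 0]));
for (let i = 0; i < ideas.length; i++) {
  for (let j = i + 1; j < ideas.length; j++) {
    const winner = beats(ideas[i], ideas[j]) ? ideas[i] : ideas[j];
    wins.set(winner.name, (wins.get(winner.name) ?? 0) + 1);
  }
}

[...wins.entries()]
  .sort((a, b) => b[1] - a[1])
  .forEach(([name, count]) => console.log(`${name}: ${count} wins`));
```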
**Scoring tips**:
- Score all ideas on Criterion 1, then all on Criterion 2, etc. (maintains consistency)
- Use reference points ("This idea is more impactful than X but less than Y")
- Document reasoning for extreme scores (1-2 or 9-10)
- Consider both upside (best case) and downside (worst case)
**Step 3: Rank and Select**
**Ranking methods**:
1. **Total score ranking**:
- Sum scores across all criteria
- Sort by total score (highest to lowest)
- Select top 3-5
2. **Must-have filtering + scoring**:
- First, eliminate ideas that violate must-have constraints
- Then score remaining ideas
- Select top scorers
3. **Two-dimensional prioritization**:
- Plot ideas on 2x2 matrix (e.g., Impact vs. Feasibility)
- Prioritize high-impact, high-feasibility quadrant
- Common matrices:
- Impact / Effort (classic prioritization)
- Risk / Reward (for innovation)
- Cost / Value (for ROI focus)
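A sketch of the Impact/Effort bucketing (the scores, the midpoint of 5, and the idea names are illustrative assumptions; note that effort here is raw effort, not the inverse scale used when computing totals):
```typescript
type Plotted = { name: string; impact: number; effort: number }; // 1-10; higher effort = harder

const plotted: Plotted[] = [
  { name: "Add database indexes", impact: 7, effort: 2 },
  { name: "Add Redis caching", impact: 9, effort: 5 },
  { name: "GraphQL migration", impact: 7, effort: 8 },
  { name: "Optimize Docker container size", impact: 3, effort: 2 },
];

// Classic four quadrants with 5 as the midpoint on both axes.
function quadrant(idea: Plotted): string {
  if (idea.impact > 5 && idea.effort <= 5) return "Quick win (do first)";
  if (idea.impact > 5 && idea.effort > 5) return "Big bet (plan deliberately)";
  if (idea.impact <= 5 && idea.effort <= 5) return "Fill-in (do if cheap)";
  return "Time sink (deprioritize)";
}

plotted.forEach((idea) => console.log(`${idea.name}: ${quadrant(idea)}`));
```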
**Selection guidelines**:
- **Diversify**: Don't just pick the top 3 if they're all in same cluster
- **Balance**: Mix quick wins (fast, low-risk) with big bets (high-impact, longer-term)
- **Consider dependencies**: Some ideas may enable or enhance others
- **Document tradeoffs**: Why did 4th place not make the cut?
**Convergence quality checks**:
- [ ] Evaluation criteria are explicit and relevant
- [ ] All top ideas scored on all criteria
- [ ] Scores are justified (not arbitrary)
- [ ] Top selections clearly outperform alternatives
- [ ] Tradeoffs are documented
- [ ] Runner-up ideas noted for future consideration
---
## Worked Example
### Problem: How to increase user retention in first 30 days?
**Context**: SaaS product, 100k users, 40% churn in first month, limited eng resources
**Constraints**:
- Must ship within 3 months
- No more than 2 engineer-months of work
- Must work for both free and paid users
**Criteria**:
- Impact on retention (weight: 3x)
- Feasibility with current team (weight: 2x)
- Speed to ship (weight: 1x)
---
### Diverge: 32 Ideas Generated
1. Email drip campaign with usage tips
2. In-app interactive tutorial
3. Weekly webinar for new users
4. Gamification with achievement badges
5. 1-on-1 onboarding calls for high-value users
6. Contextual tooltips for key features
7. Progress tracking dashboard
8. Community forum for peer help
9. AI chatbot for instant support
10. Daily usage streak rewards
11. Personalized feature recommendations
12. "Success checklist" in first 7 days
13. Video library of use cases
14. Slack/Discord community
15. Monthly power-user showcase
16. Referral rewards program
17. Usage analytics dashboard for users
18. Mobile app push notifications
19. SMS reminders for inactive users
20. Quarterly user survey with gift card
21. In-app messaging for tips
22. Certification program for expertise
23. Template library for quick starts
24. Integration marketplace
25. Office hours with product team
26. User-generated content showcase
27. Automated workflow suggestions
28. Milestone celebrations (email)
29. Cohort-based onboarding groups
30. Seasonal feature highlights
31. Feedback loop with product updates
32. Partnership with complementary tools
---
### Cluster: 6 Themes
**1. Guided Learning** (8 ideas)
- Email drip campaign with usage tips
- In-app interactive tutorial
- Contextual tooltips for key features
- "Success checklist" in first 7 days
- Video library of use cases
- In-app messaging for tips
- Automated workflow suggestions
- Template library for quick starts
**2. Community & Social** (7 ideas)
- Community forum for peer help
- Slack/Discord community
- Monthly power-user showcase
- Office hours with product team
- User-generated content showcase
- Cohort-based onboarding groups
- Partnership with complementary tools
**3. Motivation & Gamification** (5 ideas)
- Gamification with achievement badges
- Daily usage streak rewards
- Progress tracking dashboard
- Milestone celebrations (email)
- Certification program for expertise
**4. Personalization & AI** (4 ideas)
- AI chatbot for instant support
- Personalized feature recommendations
- Usage analytics dashboard for users
- Seasonal feature highlights
**5. Proactive Engagement** (5 ideas)
- Weekly webinar for new users
- Mobile app push notifications
- SMS reminders for inactive users
- Quarterly user survey with gift card
- Feedback loop with product updates
**6. High-Touch Service** (3 ideas)
- 1-on-1 onboarding calls for high-value users
- Referral rewards program
- Integration marketplace
---
### Converge: Evaluation & Selection
**Scoring** (Impact: 1-10, Feasibility: 1-10, Speed: 1-10):
| Idea | Impact (3x) | Feasibility (2x) | Speed (1x) | Weighted Total |
|------|-------------|------------------|------------|----------------|
| In-app interactive tutorial | 9 | 6 | 7 | 9×3 + 6×2 + 7×1 = 46 |
| Email drip campaign | 7 | 9 | 9 | 7×3 + 9×2 + 9×1 = 48 |
| Success checklist (first 7 days) | 8 | 8 | 8 | 8×3 + 8×2 + 8×1 = 48 |
| Contextual tooltips | 6 | 9 | 9 | 6×3 + 9×2 + 9×1 = 45 |
| Progress tracking dashboard | 8 | 7 | 6 | 8×3 + 7×2 + 6×1 = 44 |
| Template library | 7 | 7 | 8 | 7×3 + 7×2 + 8×1 = 43 |
| Community forum | 6 | 4 | 3 | 6×3 + 4×2 + 3×1 = 29 |
| AI chatbot | 7 | 3 | 2 | 7×3 + 3×2 + 2×1 = 29 |
| 1-on-1 calls | 9 | 5 | 8 | 9×3 + 5×2 + 8×1 = 45 |
---
### Top 3 Selections
**1. Email Drip Campaign** (Score: 48)
- **Why**: Highest feasibility and speed, good impact. Can implement with existing tools (no eng time).
- **Rationale**:
- Impact (7/10): Proven tactic, industry benchmarks show 10-15% retention improvement
- Feasibility (9/10): Use existing Mailchimp setup, just need copy + timing
- Speed (9/10): Can launch in 2 weeks with marketing team
- **Next steps**:
- Draft 7-email sequence (days 1, 3, 7, 14, 21, 28, 30)
- A/B test subject lines and CTAs
- Measure open rates and feature adoption
**2. Success Checklist (First 7 Days)** (Score: 48, tie)
- **Why**: Balanced impact, feasibility, and speed. Clear value for new users.
- **Rationale**:
- Impact (8/10): Gives users clear path to value, reduces overwhelm
- Feasibility (8/10): 1 engineer-week for UI + backend tracking
- Speed (8/10): Can ship in 4 weeks
- **Next steps**:
- Define 5-7 "success milestones" (e.g., complete profile, create first project, invite teammate)
- Build in-app checklist UI
- Track completion rates per milestone
**3. In-App Interactive Tutorial** (Score: 46)
- **Why**: Highest impact potential, moderate feasibility and speed.
- **Rationale**:
- Impact (9/10): Shows users value immediately, reduces "blank slate" problem
- Feasibility (6/10): Requires 3-4 engineer-weeks (tooltips + guided flow)
- Speed (7/10): Can ship MVP in 8 weeks
- **Next steps**:
- Design 3-5 step tutorial for core workflow
- Use existing tooltip library to reduce build time
- Make tutorial skippable but prominent
---
### Runner-Ups (For Future Consideration)
**Progress Tracking Dashboard** (Score: 44)
- High impact but slightly slower to build (6-8 weeks)
- Revisit in Q3 after core onboarding stabilizes
**Template Library** (Score: 43)
- Good balance, but requires content creation (not just eng work)
- Explore in parallel with email campaign (marketing can create templates)
**1-on-1 Onboarding Calls** (Score: 45, but doesn't scale)
- Very high impact for high-value users
- Consider as premium offering for enterprise tier only
---
## Next Steps
**Immediate** (next 2 weeks):
- Finalize email drip sequence copy
- Design success checklist UI mockups
- Scope interactive tutorial feature requirements
**Short-term** (next 1-3 months):
- Launch email drip campaign (week 2)
- Ship success checklist (week 6)
- Ship interactive tutorial MVP (week 10)
**Measurement plan**:
- Track 30-day retention rate weekly
- Target: Improve from 60% to 70% retention
- Break down by cohort (email recipients vs. non-recipients, etc.)
**Parking lot** (revisit Q3):
- Progress tracking dashboard
- Template library
- Community forum (once we hit 200k users)