Initial commit

Zhongwei Li
2025-11-30 08:58:08 +08:00
commit 64792088d0
29 changed files with 13209 additions and 0 deletions


@@ -0,0 +1,504 @@
---
name: competitive-analysis-templates
description: Master competitive analysis templates including competitor deep-dives, battle cards, feature comparison matrices, and win/loss analysis. Use when analyzing competitors, creating battle cards for sales, positioning against alternatives, tracking competitive moves, or preparing for competitive threats. Covers competitive intelligence frameworks, positioning strategies, and templates from Crayon, Klue, and April Dunford.
---
# Competitive Analysis Templates
Templates for analyzing competitors, enabling sales teams, and defining strategic market positioning.
## When to Use This Skill
**Auto-loaded by agents**:
- `market-analyst` - For competitor deep-dives, battle cards, and positioning maps
**Use when you need**:
- Analyzing competitor strategies and offerings
- Creating sales battle cards
- Building feature comparison matrices
- Tracking competitive intelligence
- Conducting win/loss analysis
- Defining differentiation strategy
- Monitoring competitive threats
- Positioning against alternatives
## What is Competitive Analysis?
Competitive analysis helps you:
- **Understand** your market position and competitive landscape
- **Identify** differentiation opportunities and white space
- **Enable** sales teams to win competitive deals
- **Inform** product roadmap and strategic decisions
- **Monitor** competitive threats and market shifts
**NOT**: Copying competitors or building feature parity
**BUT**: Finding defensible differentiation and strategic positioning
**Good competitive analysis**: Actionable, evidence-based, honest about trade-offs, regularly updated
**Bad competitive analysis**: Feature lists without context, outdated, biased, not used by teams
**When to Use This Skill**:
- Conducting competitive research
- Enabling sales for competitive deals
- Defining product positioning
- Planning market entry strategy
- Responding to competitive threats
- Quarterly strategic planning
---
## Evidence Standards
**Core principle:** All competitive intelligence must be based on verifiable public sources. Strategic analysis and recommendations are the core value, but factual claims require source attribution.
**Research requirements:**
1. **Use verifiable public sources**
- Competitor websites, product pages, and documentation
- Pricing pages (with URLs and date accessed)
- Review sites (G2, Capterra, TrustRadius) with specific review citations
- News articles, press releases, and public announcements
- Product demos, trials, and publicly available materials
2. **Source attribution**
- Every claim about competitor features, pricing, or strategy must cite a source
- Format: "[Feature/Claim] (Source: [URL], accessed [Date])"
- Review quotes must cite specific review with link
- When multiple sources confirm: cite 2-3 representative sources
3. **When data is unavailable**
- Explicitly note: "[Pricing not publicly available]", "[Requires product trial]"
- Never fabricate competitor information to complete analysis
- Recommend steps to gather missing data (demo request, trial signup)
- Note limitation in battle card or analysis
4. **What you CANNOT do**
- Fabricate competitor features, pricing, or capabilities
- Invent user testimonials or review quotes
- Make up market share numbers or competitive statistics
- Create fictional customer case studies
5. **What you SHOULD do (core value)**
- Provide strategic analysis of competitive landscape
- Identify differentiation opportunities based on research
- Recommend positioning strategy against competitors
- Guide battle card creation and sales enablement
- Teach competitive intelligence methodologies
- Help interpret competitive data for strategic decisions
**When in doubt:** If public information is unavailable, note the limitation explicitly rather than inventing data. Your strategic expertise in competitive analysis and positioning is your primary value.
---
## Three Core Templates
### 1. Competitor Deep-Dive Analysis
**Purpose**: Comprehensive competitive intelligence for strategic decisions
**Use when**:
- Quarterly competitive reviews
- Market entry planning
- Product roadmap planning
- Understanding new competitive threat
- Executive briefings
**Key sections**:
- Company overview (size, funding, strategy)
- Product analysis (features, UX, pricing)
- Go-to-market strategy (sales, marketing)
- SWOT analysis (strengths, weaknesses, opportunities, threats)
- Customer intelligence (reviews, win/loss)
- Competitive response strategy
**Template**: `assets/competitor-deep-dive-template.md`
Complete template with all sections, example content, intelligence sources
**Time investment**: 8-12 hours initial, 2-3 hours quarterly updates
**Comprehensive guide**: `references/competitive-intel-guide.md`
Intelligence gathering, research techniques, analysis frameworks, organizing intelligence
---
### 2. Battle Card (Sales Enablement)
**Purpose**: Quick-reference competitive enablement for sales teams
**Use when**:
- Sales team faces competitor regularly
- New competitive threat emerges
- Need to improve win rates vs. specific competitor
- Onboarding new sales reps
**Key sections**:
- Quick stats (foundational context)
- When you'll face them (common scenarios)
- Where we win (strengths with proof)
- Where they win (their strengths + how to counter)
- Common objections & responses
- Discovery questions (uncover our advantages)
- Proof points (customer stories, data)
**Template**: `assets/battle-card-template.md`
Sales-ready format, specific talk tracks, objection handling, proof points
**Format**: 1-2 pages maximum (quick reference for calls)
**Update frequency**: Immediately on competitor changes, monthly review
**Comprehensive guide**: `references/battle-card-guide.md`
Creating effective battle cards, sales enablement, training, maintenance, metrics
---
### 3. Positioning & Perceptual Mapping
**Purpose**: Strategic market positioning and visual competitive landscape
**Use when**:
- Strategic planning (annual/quarterly)
- Defining product strategy
- Finding market white space
- Developing messaging
- Repositioning product
**Key sections**:
- Perceptual maps (visual market positioning)
- Positioning statement (April Dunford framework)
- Differentiation matrix
- Value proposition map (jobs, pains, gains)
- Strategic positioning grid
- Brand positioning
- Messaging framework
**Template**: `assets/positioning-map-template.md`
Perceptual mapping formats, positioning frameworks, differentiation analysis
**Use cases**: Strategic planning, market entry, messaging development
**Comprehensive guide**: `references/positioning-guide.md`
Positioning frameworks, differentiation strategies, perceptual mapping, repositioning
---
## Choosing the Right Template
**For strategic planning** → Use all three:
1. Deep-dive analysis (understand landscape)
2. Positioning map (define your position)
3. Battle cards (enable execution)
**For sales enablement** → Battle cards:
- Quick reference during calls
- Specific talk tracks and counters
- Updated frequently
**For product strategy** → Deep-dive + Positioning:
- Feature gap analysis
- Market white space
- Differentiation strategy
**For one competitor** → Start with deep-dive, then battle card
**For market overview** → Positioning map first (landscape), then selective deep-dives
---
## Competitive Intelligence Process
### 1. Gather Intelligence
**Primary sources**:
- Competitor website, demos, trials
- Customer reviews (G2, Capterra, TrustRadius)
- Win/loss interviews
- Customer feedback
**Secondary sources**:
- Press releases, news
- Job postings (strategic direction)
- Social media, podcasts
- Analyst reports
**Source citation requirements:**
- Document URL and access date for every source
- Cite specific G2/Capterra reviews with links for user quotes
- Note when information is unavailable: "[Pricing not public]"
- Never fabricate data to fill intelligence gaps
- Recommend additional research steps for missing data
**Detailed guide**: `references/competitive-intel-guide.md`
Research techniques, intelligence sources, ethical guidelines, source citation best practices
---
### 2. Analyze Patterns
**Look for**:
- Feature launch velocity (product strategy)
- Market expansion signals
- Pricing changes
- Partnership announcements
- Messaging shifts
**Framework**: Use SWOT analysis
- **Strengths**: What they do well (facts, evidence)
- **Weaknesses**: Exploitable gaps (customer complaints, missing features)
- **Opportunities** (for them): Anticipate their moves
- **Threats** (to us): How they could hurt us
---
### 3. Act on Insights
**Product decisions**:
- Close critical gaps (table stakes)
- Exploit weaknesses (differentiation)
- Defend strengths (maintain advantage)
**Positioning**:
- Find defensible differentiation
- Avoid head-to-head on their strengths
- Position in category where you win
**Sales enablement**:
- Create battle cards (competitive situations)
- Train on objection handling
- Share win stories
---
## Positioning Frameworks
### April Dunford's 5 Components
Work through in this order:
**1. Competitive Alternatives**: What would customers use if you didn't exist?
**2. Unique Attributes**: What can you do that alternatives can't?
**3. Value (Benefits)**: What do those unique attributes enable?
**4. Target Customer**: Who cares most about that value?
**5. Market Category**: What context makes your value obvious?
**Example**:
> For Series A B2B SaaS companies who need to scale pipeline without growing SDR headcount, [Product] is an AI-powered sales development platform that books 10x more qualified meetings. Unlike Salesforce, which requires large SDR teams, we use AI to automate the entire top of the funnel.
**Complete framework**: `references/positioning-guide.md`
Positioning strategies, differentiation, perceptual mapping, validation
---
### Perceptual Mapping
**Purpose**: Visualize competitive landscape, find white space
**Common axes**:
- Price vs. Capabilities
- Ease of Use vs. Power
- Horizontal vs. Vertical
- Self-serve vs. Enterprise
**Strategic insights**:
- White space (underserved segments)
- Crowded areas (avoid competing)
- Movement over time (strategic direction)
**Template**: `assets/positioning-map-template.md`
Multiple perceptual map formats, strategic analysis
---
## Battle Card Best Practices
**Format**:
- 1-2 pages maximum (quick reference)
- Bullets, not paragraphs
- Specific talk tracks (exact words)
- Print-friendly
**Content**:
- Be honest about weaknesses (builds credibility)
- Use customer language (outcomes, not features)
- Include proof points (quotes, data, case studies)
- Provide discovery questions (uncover advantages)
**Maintenance**:
- Update immediately on competitor changes
- Monthly review, quarterly refresh
- Sales feedback loop
- Track win rates
**Detailed guide**: `references/battle-card-guide.md`
Creating, training, maintenance, metrics
---
## Organizing Competitive Intelligence
### Repository Structure
```
/Competitive Intelligence
  /Competitor A
    - Deep-dive analysis.md
    - Battle card.md
    - Product screenshots/
    - Reviews analysis.md
  /Competitor B
    [Same structure]
  /Market Maps
    - Positioning map.md
    - Feature comparison.xlsx
  /Win-Loss Analysis
    - Quarterly summaries
```
### Update Cadence
**Continuous**: Monitor for major changes (Google Alerts, RSS)
**Monthly**:
- Scan competitor websites, blogs, changelogs
- Read recent reviews
- Update battle cards if needed
**Quarterly**:
- Deep-dive analysis refresh
- Win/loss review
- Positioning assessment
- Sales team competitive briefing
**Detailed guide**: `references/competitive-intel-guide.md`
Monitoring strategies, intelligence distribution
---
## Common Mistakes to Avoid
**Feature parity obsession**:
- Problem: Building everything competitors have
- Fix: Focus on defensible differentiation
**Outdated intelligence**:
- Problem: Stale battle cards, wrong pricing
- Fix: Regular update cadence, monitoring systems
**Too comprehensive**:
- Problem: 50-page analysis nobody reads
- Fix: Right level of detail for audience (battle card vs. deep-dive)
**Ignoring weaknesses**:
- Problem: Only highlighting strengths
- Fix: Honest assessment of trade-offs
**Not using the intelligence**:
- Problem: Analysis sits in drive, not actioned
- Fix: Clear distribution, sales training, product impact
---
## For Solo Operators / Small Teams
**Simplified approach**:
**Focus**: 2-3 most important competitors only
**Monthly routine** (2-3 hours):
- Check competitor websites for changes
- Read 5-10 recent G2 reviews
- Scan their blog/changelog
- Update battle card if needed
**Quarterly deep-dive** (half day):
- Full product trial
- Read 20-30 reviews
- Update positioning map
- Refresh analysis doc
**Templates to use**:
- Battle card (1 page version)
- Simplified positioning map
- Lightweight deep-dive (key sections only)
**Tools**:
- Google Alerts (free monitoring)
- Spreadsheet for tracking
- Notion/Google Docs for documentation
---
## Templates and References
### Assets (Ready-to-Use Templates)
Copy-paste these for immediate use:
- `assets/competitor-deep-dive-template.md` - Comprehensive competitive analysis
- `assets/battle-card-template.md` - Sales enablement quick reference
- `assets/positioning-map-template.md` - Strategic positioning & perceptual maps
### References (Deep Dives)
When you need comprehensive guidance:
- `references/competitive-intel-guide.md` - Intelligence gathering, research, analysis
- `references/battle-card-guide.md` - Creating effective battle cards, sales enablement
- `references/positioning-guide.md` - Positioning frameworks, differentiation strategies
---
## Related Skills
- `product-positioning` - Product positioning frameworks
- `go-to-market-playbooks` - GTM and launch strategy
- `roadmap-frameworks` - Product roadmaps
- `market-sizing-frameworks` - Market opportunity assessment
---
## Quick Start
**For your first competitive analysis**:
1. Pick your top competitor
2. Start with `assets/competitor-deep-dive-template.md`
3. Research: Product trial, 20 reviews, website analysis
4. Fill in template (focus on implications, not just facts)
5. Create battle card using `assets/battle-card-template.md`
6. Train sales team on battle card
7. Set quarterly review cadence
**For sales enablement**:
1. Identify 2-3 competitors sales faces most
2. Use `assets/battle-card-template.md`
3. Gather: Win/loss data, sales objections, customer quotes
4. Create 1-2 page battle cards
5. Role-play objection handling with sales
6. Update monthly based on field feedback
**For strategic positioning**:
1. Use `assets/positioning-map-template.md`
2. Create perceptual map (choose relevant axes)
3. Plot competitors
4. Identify white space
5. Define positioning using April Dunford framework
6. Validate with customer research
7. Update messaging and roadmap
---
**Key Principle**: Competitive intelligence should inform strategy, not dictate it. Understand competitors to find your unique position and enable sales to win, not to copy them or obsess over feature parity.


@@ -0,0 +1,568 @@
---
name: go-to-market-playbooks
description: Master product launches, positioning, messaging, and GTM strategies. Use when planning launches, entering markets, or positioning products.
---
# Go-to-Market Playbooks
## Overview
Comprehensive guide to go-to-market (GTM) strategies, product positioning, launch planning, and market entry tactics.
## When to Use This Skill
**Auto-loaded by agents**:
- `launch-planner` - For GTM strategies, positioning, and distribution playbooks
**Use when you need**:
- Planning product launches
- Positioning new products
- Entering new markets
- Rebranding or repositioning
- Competitive differentiation
## GTM Strategy Types
### 1. Product-Led Growth (PLG)
**Model**: Product drives acquisition, conversion, expansion
**Funnel**:
```
Free Signup → Activation → Expansion → Paid → Advocacy
```
**Characteristics**:
- Self-serve onboarding
- Freemium or free trial
- Virality built-in
- Low-touch sales
**Examples**: Slack, Dropbox, Notion, Figma
**When to Use**:
- Low price point (<$100/month)
- Easy to understand product
- Quick time-to-value
- Network effects
**Playbook**:
1. **Frictionless Signup**: Email-only, no credit card
2. **Aha Moment Fast**: <5 minutes to value
3. **Viral Loops**: Invites, sharing, collaboration
4. **Usage-Based Limits**: Paywall at usage threshold
5. **Self-Serve Upgrade**: One-click payment
---
### 2. Sales-Led Growth
**Model**: Sales team drives revenue
**Funnel**:
```
Lead → MQL → SQL → Demo → Proposal → Close
```
**Characteristics**:
- High price point ($10K+ annually)
- Complex product
- Custom implementation
- Relationship-driven
**Examples**: Salesforce, SAP, enterprise software
**When to Use**:
- Complex sales cycle
- High contract values
- Custom solutions
- Long sales cycles (3-12 months)
**Playbook**:
1. **Lead Generation**: Events, content, partnerships
2. **Qualification**: BANT (Budget, Authority, Need, Timeline)
3. **Demo**: Customized to pain points
4. **Proof of Concept**: Pilot/trial with champion
5. **Procurement**: Legal, security, contracts
6. **Onboarding**: CSM-led implementation
---
### 3. Community-Led Growth
**Model**: Community drives adoption and revenue
**Funnel**:
```
Awareness → Community Join → Engagement → Conversion → Advocacy
```
**Examples**: GitHub, HashiCorp, Figma, Discord
**When to Use**:
- Developer tools
- Open-source foundation
- Network effects
- Passion-driven users
**Playbook**:
1. **Build in Public**: Share roadmap, engage users
2. **Developer Advocates**: Evangelize, educate
3. **Events**: Conferences, meetups, webinars
4. **Content**: Tutorials, docs, blog posts
5. **Open Source → Enterprise**: Freemium model
---
### 4. Partner-Led Growth
**Model**: Partners drive distribution
**Examples**: Shopify apps, Salesforce AppExchange, AWS Marketplace
**When to Use**:
- Complement existing platforms
- Need distribution
- Ecosystem play
**Playbook**:
1. **Integration**: Deep platform integration
2. **Marketplace Listing**: Optimize for discovery
3. **Co-Marketing**: Partner webinars, content
4. **Revenue Share**: Align incentives
5. **Partner Enablement**: Training, support
---
## Positioning Framework (April Dunford)
### The 5 Components
**1. Competitive Alternatives**
What do customers use today if not your product?
**2. Unique Attributes**
What do you have that alternatives don't?
**3. Value (Benefits)**
What value do those unique attributes enable?
**4. Target Customer**
Who cares most about that value?
**5. Market Category**
What context makes the value obvious?
---
### Positioning Process
**Step 1: List Competitive Alternatives**
```
Alternatives for Project Management Tool:
- Asana, Monday.com, Linear
- Spreadsheets
- Email + meetings
- Do nothing
```
**Step 2: Identify Unique Attributes**
```
What we have that they don't:
- AI-powered task prioritization
- Real-time async collaboration
- Built-in analytics dashboard
```
**Step 3: Map Attributes to Value**
```
AI prioritization → Save 5 hours/week on planning
Async collaboration → Works across timezones
Analytics → Measure team productivity
```
**Step 4: Find Best-Fit Customer**
```
Who cares most?
- Remote-first startups (20-100 people)
- Product/engineering teams
- Fast-paced, data-driven culture
```
**Step 5: Choose Category**
```
Options:
A. "Project Management" (crowded)
B. "AI Productivity Platform" (new category)
C. "Team Operating System" (abstract)
Choice: B - Differentiated, clear value
```
---
### Positioning Statement Template
```
For [target customer]
Who [need/opportunity]
[Product] is a [category]
That [key benefit]
Unlike [alternative]
We [primary differentiation]
```
**Example**:
```
For remote product teams at fast-growing startups
Who struggle with async collaboration and context switching
Acme is an AI-native team productivity platform
That automates busywork so teams ship faster
Unlike Asana or Monday, which are just task trackers
We combine planning, collaboration, and intelligence in one tool
```
---
## Messaging Hierarchy
### Structure
**Level 1: Company Messaging**
- Mission/vision
- Brand positioning
- Core values
**Level 2: Product Messaging**
- Product positioning
- Value propositions
- Key differentiators
**Level 3: Feature Messaging**
- Feature benefits
- Use cases
- Proof points
---
### Messaging Formulas
**Before-After-Bridge (BAB)**
```
Before: [Current pain]
After: [Desired state]
Bridge: [How product gets them there]
Example:
Before: "Teams waste 10 hours/week in status meetings"
After: "Imagine if everyone knew what's happening without meetings"
Bridge: "Our AI generates status updates automatically from your work"
```
**PAS (Problem-Agitate-Solve)**
```
Problem: [Identify pain]
Agitate: [Make it worse]
Solve: [Your solution]
Example:
Problem: "Your team is drowning in Slack messages"
Agitate: "You miss critical updates, decisions get lost, and new hires can't find context"
Solve: "Acme organizes all team knowledge automatically"
```
---
## Launch Strategy
### Launch Tiers
**Tier 1: Major Launch**
- New product
- Major rebranding
- Strategic pivot
- Full PR, events, marketing
**Tier 2: Feature Launch**
- Significant new capability
- Blog post, email, social
- Targeted outreach
**Tier 3: Improvement**
- Bug fixes, small features
- Release notes, changelog
---
### Launch Timeline (Tier 1)
**T-minus 8 weeks: Preparation**
- [ ] Define positioning and messaging
- [ ] Create launch plan
- [ ] Assemble launch team
- [ ] Set goals and metrics
**T-minus 6 weeks: Asset Creation**
- [ ] Landing page
- [ ] Product video
- [ ] Demo scripts
- [ ] Sales collateral
- [ ] Press kit
**T-minus 4 weeks: Beta Program**
- [ ] Recruit beta users
- [ ] Gather feedback
- [ ] Capture testimonials
- [ ] Refine product
**T-minus 2 weeks: Enablement**
- [ ] Train sales team
- [ ] Train support team
- [ ] Internal launch
- [ ] Press briefings
**T-minus 1 week: Final Prep**
- [ ] Go/no-go decision
- [ ] Asset review
- [ ] Monitoring setup
- [ ] Rollback plan
**Launch Day**
- [ ] Feature flag on
- [ ] Announcement (blog, email, social)
- [ ] Press release
- [ ] Monitor metrics
- [ ] War room for issues
**Post-Launch (Week 1-4)**
- [ ] Daily metrics review
- [ ] Feedback triage
- [ ] Bug fixes
- [ ] Amplification (webinars, case studies)
---
### Launch Channels
**Owned**:
- Website/landing page
- Blog
- Email list
- In-app notifications
- Social media
**Earned**:
- Press (TechCrunch, VentureBeat)
- Product Hunt
- Hacker News
- Industry publications
- Influencers
**Paid**:
- Google Ads
- Social ads (LinkedIn, Twitter)
- Retargeting
- Sponsored content
**Partnerships**:
- Co-marketing
- Integration announcements
- Channel partners
---
## Market Entry Strategy
### New Market Assessment
**TAM/SAM/SOM**:
```
TAM (Total Addressable Market): Everyone who could use the product
SAM (Serviceable Available Market): The segment you can realistically reach
SOM (Serviceable Obtainable Market): The realistic target you can capture
Example:
TAM: All project management users = $50B
SAM: Remote startups 20-100 people = $2B
SOM: 1% market share in 3 years = $20M
```
**Market Entry Checklist**:
- [ ] Market size validation
- [ ] Competitive landscape
- [ ] Regulatory requirements
- [ ] Localization needs (language, currency, compliance)
- [ ] Distribution partners
- [ ] Pricing for market
---
### Beachhead Strategy (Geoffrey Moore)
**Concept**: Dominate one narrow segment before expanding
**Process**:
1. **Identify Beachhead**: Narrow, winnable segment
2. **Dominate**: Become #1 in that segment
3. **Expand**: Adjacent segments
**Example**:
- Facebook: Harvard → Ivy League → All colleges → Everyone
- Amazon: Books → Electronics → Everything
- Salesforce: Sales teams → All CRM → All cloud software
---
## Competitive Strategy
### Competitive Intel
**What to Track**:
- Product features and roadmap
- Pricing and packaging
- Marketing messages
- Customer reviews
- Funding and news
**Sources**:
- Competitor websites, blogs
- G2, Capterra reviews
- LinkedIn (hiring = roadmap hints)
- Earnings calls (public companies)
- Customer conversations
---
### Battle Cards
**Format**:
```
# Competitor X
## Overview
- Company size, funding
- Target customers
- Key strengths
## When They Win
- [Scenario where they're strong]
## When We Win
- [Our advantages]
## Differentiation
- Feature comparison
- Positioning vs them
## Objection Handling
Q: "Why not use [Competitor]?"
A: "[Response emphasizing our unique value]"
## Proof Points
- Customer wins from competitor
- Case studies
```
---
## Pricing & Packaging
### Pricing Models
**Freemium**:
- Free tier + paid upgrade
- Examples: Slack, Dropbox, Notion
**Free Trial**:
- 14-30 day trial + paywall
- Examples: Netflix, SaaS tools
**Usage-Based**:
- Pay for what you use
- Examples: AWS, Stripe, Snowflake
**Tiered**:
- Good/Better/Best packages
- Examples: HubSpot, Salesforce
**Per-Seat**:
- Price per user
- Examples: Zoom, Asana
---
### Packaging Strategy
**3-Tier Model (Good-Better-Best)**:
```
Starter: $10/user/month
- Core features
- Email support
- Target: Small teams
Professional: $25/user/month
- All Starter features
- Advanced features
- Priority support
- Target: Growing teams
Enterprise: Custom pricing
- All Professional features
- Custom integrations
- Dedicated support
- SLA
- Target: Large organizations
```
**Value Metric**: What you charge for
- Per user (Slack)
- Per event (Segment)
- Per API call (Stripe)
- Storage (Dropbox)
Choose a metric that scales with the value delivered
---
## GTM Playbook Summary
```
Choose GTM Motion:
Low-touch, self-serve → PLG
High-touch, complex sale → Sales-Led
Developer product → Community-Led
Platform complement → Partner-Led
Then:
1. Position (find differentiation)
2. Message (communicate value)
3. Launch (create awareness)
4. Optimize (iterate and scale)
Key Success Factors:
- Clear positioning
- Focused beachhead
- Aligned team
- Measured results
```
---
## Resources
**Books**:
- "Obviously Awesome" - April Dunford (positioning)
- "Crossing the Chasm" - Geoffrey Moore (market entry)
- "Product-Led Growth" - Wes Bush (PLG strategy)
- "The Mom Test" - Rob Fitzpatrick (customer discovery)
**Frameworks**:
- April Dunford positioning canvas
- Jobs-to-be-Done framework
- Lean Canvas (market fit)
**Tools**:
- Competitors: Crayon, Klue
- Launches: Product Hunt, BetaList
- Analytics: Mixpanel, Segment


@@ -0,0 +1,479 @@
---
name: interview-frameworks
description: Master user interviews including interview guide design, questioning techniques, active listening, probing, avoiding leading questions, and interview analysis. Use when conducting user interviews, designing interview guides, researching user needs, discovering problems, validating assumptions, or gathering qualitative insights. Covers interview best practices, question types, and interviewing techniques from Teresa Torres and Erika Hall.
---
# Interview Frameworks
## Overview
Frameworks for conducting effective user interviews including question design, interviewing techniques, and extracting actionable insights.
**Core Principle**: Good interviews come from great questions. Focus on past behavior and specific examples, not hypotheticals or opinions. Listen more than you talk.
**Origin**: Synthesized from Teresa Torres ("Continuous Discovery Habits"), Erika Hall ("Just Enough Research"), and Rob Fitzpatrick ("The Mom Test").
**Key Insight**: User interviews are the foundation of customer discovery. Good interviews uncover unmet needs, validate assumptions, and reveal the "why" behind user behavior that quantitative data cannot.
## When to Use This Skill
**Auto-loaded by agents**:
- `research-ops` - For user interview guides and JTBD interviews
**Use when you need**:
- Customer discovery research
- Problem validation
- Solution validation
- Usability testing
- Feature prioritization research
- Understanding user workflows
- Jobs-to-be-Done research
---
## Four Types of User Interviews
### Discovery Interviews (Problem Exploration)
**Purpose**: Understand problems, needs, contexts, pain points
**When**: Early product development, new market, problem validation
**Focus**:
- Current workflows
- Pain points and workarounds
- Unmet needs
- Context and constraints
**Example question**: "Walk me through how you currently manage projects for your team"
**Complete script**: `assets/interview-script-templates.md` (Discovery section)
---
### Validation Interviews (Solution Testing)
**Purpose**: Test solutions, get feedback on concepts
**When**: After problem validation, before building
**Focus**:
- Solution fit with workflow
- Feature prioritization
- Willingness to pay
- Alternatives considered
**Example question**: "If we built [solution], how would that fit into your workflow?"
**Complete script**: `assets/interview-script-templates.md` (Validation section)
---
### Usability Interviews (Product Testing)
**Purpose**: Test actual product, identify friction
**When**: During or after building
**Focus**:
- Task completion
- Confusion points
- Feature clarity
- Missing functionality
**Example question**: "Try creating a project. Think aloud as you go."
**Complete script**: `assets/interview-script-templates.md` (Usability section)
---
### Generative Interviews (Open Exploration)
**Purpose**: Broad exploration, no specific hypothesis
**When**: Exploring new markets or pivots
**Focus**:
- Day in the life
- Goals and motivations
- Decision-making process
- Context and constraints
**Example question**: "Tell me about a recent project that didn't go as planned"
**Complete script**: `assets/interview-script-templates.md` (Generative section)
---
## Core Interview Techniques
### The Mom Test (Rob Fitzpatrick)
**Principle**: Ask questions that even your mom couldn't lie to you about
**Bad questions** (invite lies):
- "Would you buy this?" (people lie about future)
- "Do you think this is a good idea?" (want to be nice)
- "Would you use this feature?" (hypothetical)
**Good questions** (reveal truth):
- "When's the last time [problem] came up?" (frequency)
- "How much time/money did that cost you?" (severity)
- "What are you currently doing to solve it?" (existing solutions)
- "Show me your [current solution]" (actual behavior)
- "Who else should I talk to?" (validates problem is real)
**Complete guide**: `assets/questioning-techniques-template.md` (Mom Test section)
---
### The 5 Whys
**Purpose**: Get to root cause by asking "why" repeatedly
**Process**:
1. Start with observation
2. Ask "why" or "what causes that?"
3. Repeat 4-5 times until reaching root cause
4. Stop when you have actionable insight
**Example**:
```
"I don't use feature X"
→ Why? "It's too complicated"
→ Why complicated? "Don't understand buttons"
→ Why? "Labels unclear"
→ Why? "Use jargon I don't know"
→ Why? "Never learned terminology"
Root cause: Onboarding gap + plain language needed
```
**Complete guide**: `assets/questioning-techniques-template.md` (5 Whys section)
---
### Jobs-to-be-Done (JTBD) Interview
**Framework**: Understand the "job" user is hiring product to do
**Question Structure**:
1. **Context**: "Tell me about the last time you [hired product/made decision]"
2. **Situation**: "What was happening? What triggered you?"
3. **Desired Outcome**: "What were you hoping to achieve?"
4. **Alternatives**: "What else did you consider? Why didn't those work?"
5. **Decision**: "What made you choose [solution]?"
**Key Insight**: Find the real job (not surface level)
- Surface: "Manage tasks"
- Actual job: "Maintain visibility and accountability as team scales"
**Complete guide**: `assets/questioning-techniques-template.md` (JTBD section)
---
### Laddering Technique
**Purpose**: Connect features to emotional benefits
**Structure**: Attributes → Consequences → Values
**Process**: Keep asking "Why does that matter?" until reaching emotional value
**Example**:
```
Feature: "Real-time collaboration"
→ "So team can work together" (Consequence)
→ "Faster to get things done" (Consequence)
→ "I can go home on time" (Consequence)
→ "Spend time with my family" (Value)
Marketing insight: Sell "time with family" not "real-time sync"
```
**Complete guide**: `assets/questioning-techniques-template.md` (Laddering section)
---
## Best Practices
### DO
**Ask open-ended questions**:
- ✓ "How do you currently solve [problem]?"
- ✗ "Do you solve [problem] with a spreadsheet?" (yes/no)
**Listen more than talk** (80/20 rule):
- ✓ Silent pauses (let them fill silence)
- ✗ Jump to next question immediately
**Ask for specific examples**:
- ✓ "Tell me about the last time you..."
- ✗ "Generally, do you...?" (hypothetical)
**Probe for details**:
- ✓ "How much time does that take?"
- ✗ Accept vague answers ("a lot")
**Ask about past behavior**:
- ✓ "When did you last...?"
- ✗ "Would you...?" (future hypothetical)
---
### DON'T
**Lead the witness**:
- ✗ "Wouldn't it be great if...?"
- ✓ "How would that fit your workflow?"
**Pitch your solution** (discovery phase):
- ✗ "Our tool solves this with [feature]!"
- ✓ "What would an ideal solution look like?"
**Ask hypothetical questions**:
- ✗ "If we built X, would you use it?"
- ✓ "How did you decide to try your current solution?"
**Multi-part questions**:
- ✗ "Do you use A and what about B, also C?"
- ✓ One question at a time
---
## Participant Recruitment
### Define Target
**Must-Have Criteria** (Disqualify if NO):
- Role: [Specific role]
- Company size: [Range]
- Uses [product category]
- [Decision authority level]
**Disqualify**:
- Works at competitor
- Friends/family (too biased)
- Professional interviewees (6+ studies/year)
**Complete guide**: `assets/participant-recruitment-guide.md`
Screener templates, sourcing channels, outreach templates, incentive guidance
---
### Sourcing Channels
**Your users** (30-50% response rate):
- Best for: Usability testing, current user feedback
- Incentive: $0-50
**Your leads** (20-40% response rate):
- Best for: Problem validation, why they didn't buy
- Incentive: $50-75
**LinkedIn** (5-15% response rate):
- Best for: Discovery, reaching specific roles
- Incentive: $50-75
**Research panels** (Respondent.io, User Interviews):
- Best for: Fast recruitment, hard-to-reach segments
- Cost: $100-300 per participant
**Communities** (Reddit, Slack, Discord):
- Best for: Niche audiences, passionate users
- Incentive: $0-75 (sometimes free)
---
## Note-Taking and Recording
### Recording
**Always ask permission**:
```
"Do you mind if I record for my notes? Only our team will see it."
```
**Tools**:
- Zoom/Google Meet (built-in)
- Otter.ai (transcription)
- Grain.co (AI highlights)
**Benefits**: Don't miss details, review later, share with team, focus on listening
---
### Two-Column Notes
**Left: Observations** (What they said/did)
**Right: Interpretations** (What you think it means)
Example:
| Observations | Interpretations |
|--------------|-----------------|
| "Use spreadsheets" | Pain: Manual, error-prone? |
| "Takes 2 hours every Monday" | Severity: HIGH (100hr/year) |
| [Frustrated sigh] | Emotion: Strong frustration |
**Complete guide**: `assets/note-taking-analysis-guide.md`
Recording best practices, note templates, immediate debrief, transcription review
---
## Analysis and Synthesis
### Affinity Mapping
**Process**:
1. Write observations on sticky notes (one per note)
2. Put all notes on wall (physical or digital)
3. Group similar notes together
4. Name each cluster (theme)
5. Look for patterns across clusters
**Example themes**:
- "Manual reporting overhead"
- "Management visibility needs"
- "Tool fragmentation"
- "Coordination failures"
**Meta-pattern**: Management needs visibility → Forces manual reporting → Causes coordination failures
---
### Extracting Insights
**Good Insight Formula**: Observation + Pattern + Implication
**Bad** (too vague):
- "Users don't like the UI"
**Good** (specific, surprising, actionable):
- "5/8 users abandoned task at 'Add Members' step because 'team member' label was ambiguous, suggesting we need clearer labeling and examples"
**Insight Template**:
```
INSIGHT: [One-sentence summary]
EVIDENCE: [X/Y participants, specific data]
PATTERN: [How many? What's common?]
IMPLICATIONS:
- Product: [What to build/change]
- Priority: [HIGH/MEDIUM/LOW]
- Risk if ignored: [What happens if we don't act]
QUOTES: [Compelling verbatim quotes]
```
**Complete guide**: `assets/note-taking-analysis-guide.md` (Analysis section)
Observation extraction, affinity mapping detailed process, insight templates, analysis tools
---
## Research Deliverables
### Report Types
**Email Summary** (30 min):
- Best for: Quick updates, busy team members
- Format: TL;DR + 3 findings + recommendations
**Slide Deck** (2-4 hours):
- Best for: Presentations, cross-functional teams
- Format: 15-25 slides with findings and quotes
**Research Report** (4-8 hours):
- Best for: Comprehensive documentation
- Structure:
1. Executive Summary (1 page)
2. Methodology (1 page)
3. Key Findings (3-5 pages)
4. Recommendations (1-2 pages)
5. Appendix
**Highlight Reel** (1-2 hours):
- Best for: Emotional impact, building empathy
- Format: 2-4 min video with 3-5 user clips
**Complete guide**: `references/research-deliverables-guide.md`
Report structures, presentation formats, storytelling with research, video highlights, sharing across org
---
## Templates and References
### Assets (Ready-to-Use Templates)
Copy-paste these for immediate use:
- `assets/interview-script-templates.md` - 4 complete interview scripts (Discovery, Validation, Usability, Generative) with timing and questions
- `assets/questioning-techniques-template.md` - Mom Test, 5 Whys, JTBD, Laddering, probing techniques
- `assets/participant-recruitment-guide.md` - Screener surveys, sourcing channels, outreach templates, incentive guide
- `assets/note-taking-analysis-guide.md` - Recording setup, two-column notes, affinity mapping, insight extraction
### References (Deep Dives)
When you need comprehensive guidance:
- `references/interviewing-best-practices-guide.md` - Complete interviewing guide covering preparation, execution, common pitfalls, advanced techniques, ethical considerations
- `references/research-deliverables-guide.md` - Report structures, presentation formats, storytelling with research, sharing insights across organization
---
## Quick Reference
```
Problem: Need to understand user needs and validate product decisions
Solution: Conduct user interviews
Interview Type Selection:
├─ Problem unclear? → Discovery interviews
├─ Have solution idea? → Validation interviews
├─ Product built? → Usability interviews
└─ Exploring broadly? → Generative interviews
Core Techniques:
├─ Mom Test: Past behavior, not future intentions
├─ 5 Whys: Find root cause
├─ JTBD: Understand the job
└─ Laddering: Connect features to emotions
Key Rules:
- Listen 80%, talk 20%
- Ask about past (not future)
- Probe surface answers
- Use silence
- Record everything
- Debrief immediately
```
---
## Resources
**Books**:
- "The Mom Test" by Rob Fitzpatrick (2013) - Asking good questions
- "Just Enough Research" by Erika Hall (2013) - Practical user research
- "Continuous Discovery Habits" by Teresa Torres (2021) - Ongoing customer interviews
**Articles**:
- "How to Do User Interviews" - Teresa Torres
- "The Mom Test: Customer Development Done Right" - Rob Fitzpatrick
**Tools**:
- **Recording**: Zoom, Otter.ai, Grain
- **Analysis**: Dovetail, Miro, Notion, Airtable
- **Recruitment**: Respondent.io, User Interviews
---
## Related Skills
- `synthesis-frameworks` - Synthesizing qualitative research into actionable insights
- `validation-frameworks` - Solution validation techniques and testing
- `user-research-techniques` - Comprehensive user research methods beyond interviews
---
**Key Principle**: Great interviews reveal truth through specific past behavior, not opinions about hypothetical futures. Master questioning techniques (Mom Test, 5 Whys, JTBD, Laddering) to uncover what users really think, not what they think you want to hear. Listen more than you talk, probe surface-level answers, and use synthesis techniques to extract actionable insights.


@@ -0,0 +1,400 @@
---
name: launch-planning-frameworks
description: Master product launch planning including launch types (soft, hard, tiered), launch strategies, launch timelines, cross-functional coordination, and launch execution. Use when planning product launches, coordinating cross-functional teams, creating launch plans, timing market entry, executing launches, or building launch playbooks. Covers launch tier frameworks, launch checklists, and launch management best practices.
---
# Launch Planning Frameworks
Frameworks for planning and executing successful product launches including launch strategy, cross-functional coordination, and launch measurement.
## Why Launch Planning Matters
A great product poorly launched underperforms. A good product well-launched succeeds. Launch planning ensures:
- Products reach target customers
- Messaging resonates clearly
- Teams execute in coordination
- Success is measurable
- Issues are caught early
## When to Use This Skill
**Auto-loaded by agents**:
- `launch-planner` - For launch tiers, timelines, checklists, and go/no-go decisions
**Use when you need**:
- Launching new products/features
- Major releases or rebrands
- Market entry or expansion
- Coordinating cross-functional teams
- Post-launch analysis
- Creating launch timelines
- Defining launch tiers (soft, hard, tiered)
- Building launch checklists
---
## Launch Tier Framework
Not all launches are equal. Determine your launch tier first - it drives everything else.
### Three Tiers
**Tier 1: Major Launch** (Company-wide priority)
- New product line, platform launch, major strategic initiative
- 8-12 weeks planning, full cross-functional team
- High investment: PR, events, full marketing campaign
**Tier 2: Standard Launch** (Team priority)
- Significant feature, market expansion, competitive parity
- 4-6 weeks planning, core team (PM, Marketing, Sales)
- Medium investment: Email campaigns, blog, in-app
**Tier 3: Minor Launch** (Low-key release)
- Feature improvements, enhancements, quality updates
- 1-2 weeks planning, PM + minimal support
- Low investment: Release notes, in-app notification
### Decision Framework
Determine tier by scoring 6 factors (1-3 points each):
1. Customer reach (all/segment/small)
2. Customer importance (critical/important/nice)
3. Revenue opportunity (high/medium/low)
4. Strategic importance (critical/supportive/incremental)
5. Competitive impact (differentiation/parity/minor)
6. Complexity (high/medium/low)
**Total Score**: 15-18 = T1 | 10-14 = T2 | 6-9 = T3
**Full framework**: See `assets/launch-tier-decision-template.md` for scorecard, decision matrix, and examples.
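As a quick, hedged illustration of the scorecard above, the Python sketch below sums six factor scores (1-3 points each) and maps the total to a tier. The factor names and example values are placeholders, not part of the official template.
```
# Hypothetical launch-tier scoring sketch: six factors, 1-3 points each (total 6-18).
factor_scores = {
    "customer_reach": 3,        # reaches all customers
    "customer_importance": 2,   # important, not critical
    "revenue_opportunity": 2,   # medium
    "strategic_importance": 3,  # critical
    "competitive_impact": 2,    # parity
    "complexity": 2,            # medium
}

total = sum(factor_scores.values())

if total >= 15:
    tier = "Tier 1: Major Launch"
elif total >= 10:
    tier = "Tier 2: Standard Launch"
else:
    tier = "Tier 3: Minor Launch"

print(f"Score {total}/18 -> {tier}")  # Score 14/18 -> Tier 2: Standard Launch
```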
---
## Ready-to-Use Launch Plans
### 12-Week Launch Plan (Tier 1)
Complete timeline for major launches with weekly breakdown:
- **Weeks 12-10**: Strategy & planning
- **Weeks 9-7**: Content & assets creation
- **Weeks 6-4**: Team enablement & preparation
- **Weeks 3-1**: Final prep & go-live
- **Week 0**: Launch day execution
- **Weeks 1-4**: Post-launch monitoring
**Template**: `assets/12-week-launch-plan-template.md`
Includes: Weekly activities for each function, deliverables, team roster, meeting cadence
**Adaptable**:
- Tier 2: Condense to 6 weeks
- Tier 3: Condense to 2 weeks
- Solo operator: Simplify cross-functional activities
---
### Launch Checklist
Comprehensive readiness checklist across all functions:
- Product readiness (features, QA, performance, security)
- Marketing readiness (website, content, campaign)
- Sales readiness (enablement, pipeline, demos)
- Customer Success readiness (comms, onboarding)
- Support readiness (training, runbooks, FAQs)
- Technical readiness (deployment, monitoring, rollback)
**Template**: `assets/launch-checklist-template.md`
100+ checklist items with critical items (⚠️ = go/no-go blockers) identified
**Use at T-2 weeks**: Begin checking items, hold readiness review, make go/no-go decision
---
### Go/No-Go Decision Framework
Final readiness decision at T-1 week from launch.
**Decision Criteria**:
- Must-Have (hard blockers): Product stable, teams ready, critical deliverables complete
- Nice-to-Have (soft preferences): Polish items, optional features
- Risk Assessment: Likelihood × Impact with mitigation plans
- Readiness Scores: Each function rates 1-5
- Confidence Poll: Team votes on launch confidence
**Decision**: GO (with conditions) or NO-GO (with new date and action plan)
**Template**: `assets/go-no-go-template.md`
Includes: Meeting agenda, decision matrix, sign-off template
---
## Launch Messaging Framework
Clear positioning and messaging are critical for launch success.
### Positioning Statement
**Format** (Geoffrey Moore):
```
For [target customer]
Who [customer need]
[Product] is a [category]
That [key benefit]
Unlike [competitors]
We [differentiation]
```
### Message Hierarchy
**Level 1: Headline** (8-12 words)
- Grab attention, communicate core value instantly
- Example: "Ship features 2x faster with continuous deployment"
**Level 2: Subhead** (20-30 words)
- Expand on headline, clarify specific value
**Level 3: Key Messages** (3-5 bullets)
- Core value propositions, each standing alone
- Use numbers, focus on outcomes
**Level 4: Proof Points**
- Stats, customer quotes, case studies
- Back up key messages with evidence
**Full framework**: `assets/launch-messaging-template.md`
Includes: Examples, audience-specific messaging, testing framework
---
## Cross-Functional Coordination
Successful launches require tight coordination across teams.
### Launch Roles
**Product (PM)**: Launch owner, strategy, coordination, go/no-go decisions
**Marketing**: Campaign strategy, content, PR, demand generation
**Sales**: Enablement, outreach, pipeline, competitive positioning
**Customer Success**: Customer comms, onboarding, adoption, feedback
**Engineering**: Development, deployment, monitoring, stability
**Support**: Training, runbooks, triage, escalation
**Design**: Marketing assets, landing page, demo video
### Coordination Patterns
**Weekly Launch Sync** (6-8 weeks before launch):
- Status updates per function
- Go/no-go checkpoints
- Key decisions
- Timeline review
**Launch Readiness Review** (T-1 week):
- Final go/no-go decision
- 60-minute meeting with full team
**Launch Day War Room**:
- Real-time monitoring and triage
- Dedicated Slack + Zoom
- All day with core team
**Comprehensive guide**: `references/launch-coordination-guide.md`
Includes: Full role descriptions, RACI matrix, meeting templates, escalation framework, solo operator adaptations
---
## Launch Channels
### Internal Channels
**All-Hands Announcement** (T-1 week): Build company excitement
**Internal Email** (Launch day): Mobilize company to spread word
**Slack Announcement** (Launch day): Real-time celebration and links
### External Channels
**Email to Customers**:
- Beta users (T-1 day)
- Target customers (Launch day)
- All customers (T+1 week)
**Blog Post** (Launch day):
- 800-1,200 words
- Problem, solution, how it works, customer stories
- Screenshots, demo video, clear CTA
**Social Media**:
- Announcement (Launch day)
- Testimonial (T+1)
- Deep-dive (T+2)
- Results (T+1 week)
**Press Release** (Tier 1 only):
- Launch day distribution
- Press briefings scheduled
**Website Landing Page**:
- Hero, benefits, how it works, social proof, demo, CTA
### Channel Selection by Tier
**Tier 1**: All channels (email, blog, social, PR, paid ads, events)
**Tier 2**: Core channels (email, blog, social, in-app)
**Tier 3**: Minimal channels (email announcement, blog/release notes)
**Comprehensive guide**: `references/launch-channels-guide.md`
Includes: Channel templates, timing calendars, best practices, platform-specific tips
---
## Launch Metrics
### Leading Indicators (Week 1)
Early signals that predict success:
**Awareness**: Website visits, blog views, social impressions, email open rates
**Early Adoption**: Signups, activation rate, time to first use, D1 retention
**Technical Health**: Uptime, error rate, performance, support tickets
**Purpose**: Quick feedback, course correction
---
### Lagging Indicators (Months 1-3)
Longer-term measures of success:
**Adoption**: % of target segment using, DAU/MAU, retention (D7, D30), usage frequency
**Business Impact**: Revenue, conversion lift, expansion revenue, churn reduction, LTV impact
**Product Quality**: Bug rate, support volume, NPS, CSAT, app ratings
**Purpose**: Validate product-market fit, business impact
---
### Metrics by Launch Tier
**Tier 1**: All metrics, high targets, daily monitoring Week 1
**Tier 2**: Focus on adoption and business impact, weekly monitoring
**Tier 3**: Basic adoption and technical health, monthly check-ins
**Comprehensive guide**: `references/launch-metrics-guide.md`
Includes: Complete metrics catalog, success criteria examples, dashboard templates, measurement setup, red flags for pivoting
---
## Launch Best Practices
**DO**:
- Start planning early (8-12 weeks for Tier 1)
- Align on launch tier first
- Coordinate cross-functionally (not PM alone)
- Test messaging with customers
- Set clear success metrics
- Communicate internally before externally
- Have rollback plan ready
- Monitor closely post-launch
- Iterate based on feedback
**DON'T**:
- Launch without clear positioning
- Surprise internal teams
- Over-promise in messaging
- Forget support training
- Launch and disappear (no follow-through)
- Skip post-launch review
- Launch on Friday (no support over weekend)
- Make everything Tier 1 (save energy for what matters)
---
## For Solo Operators / Small Teams
If you don't have separate marketing, sales, CS teams:
**Simplify the framework**:
- You own all functions (PM + Marketing + Sales + CS)
- Focus on: positioning, website, email, blog, basic support
- Skip: elaborate sales training, press release (unless Tier 1), complex campaigns
- Use templates aggressively
- 4-6 weeks is plenty for a Tier 1 solo launch
**Timeline**:
- Week 6-4: Strategy, positioning, messaging
- Week 4-2: Create content (website, blog, email)
- Week 2-1: Final prep, customer comms
- Week 0: Launch
- Week 1+: Monitor, iterate
**Key**: Do less, but do it well. Better to nail positioning + blog + email than to spread yourself thin across 10 channels.
---
## Templates and References
### Assets (Ready-to-Use Templates)
Copy-paste these for immediate use:
- `assets/launch-tier-decision-template.md` - Determine T1/T2/T3 with scorecard
- `assets/12-week-launch-plan-template.md` - Complete timeline, all functions
- `assets/launch-checklist-template.md` - 100+ readiness items
- `assets/launch-messaging-template.md` - Positioning + message hierarchy
- `assets/go-no-go-template.md` - Decision framework and meeting template
### References (Deep Dives)
When you need comprehensive guidance:
- `references/launch-coordination-guide.md` - Cross-functional roles, meetings, RACI, escalation
- `references/launch-channels-guide.md` - All channels with templates, timing, best practices
- `references/launch-metrics-guide.md` - Complete metrics catalog, dashboards, success criteria
---
## Related Skills
- `competitive-analysis-templates` - Competitive positioning and battle cards
- `product-positioning` - Market positioning and differentiation
- `go-to-market-playbooks` - GTM strategy and distribution channels
---
## Quick Start
**For your first launch**:
1. Determine launch tier using `assets/launch-tier-decision-template.md`
2. Choose timeline based on tier (12 weeks T1, 6 weeks T2, 2 weeks T3)
3. Create positioning using `assets/launch-messaging-template.md`
4. Use `assets/12-week-launch-plan-template.md` to plan activities
5. Track readiness with `assets/launch-checklist-template.md`
6. Make go/no-go decision using `assets/go-no-go-template.md`
7. Execute launch, monitor metrics
8. Post-launch: Review results, document learnings
**For repeat launches**:
- Update your launch playbook based on learnings
- Refine messaging based on what resonated
- Adjust timeline based on what took longer than expected
- Build on what worked, fix what didn't
---
**Key Principle**: Launch planning is about coordination and preparedness, not perfection. A well-coordinated launch of a good product beats a chaotic launch of a great product. Plan thoroughly, execute decisively, measure rigorously, iterate continuously.


@@ -0,0 +1,425 @@
---
name: market-sizing-frameworks
description: Master TAM/SAM/SOM calculations, market sizing methodologies, and validation frameworks. Use when assessing market opportunity, validating business viability, planning market entry, estimating revenue potential, or determining if a market is worth pursuing. Covers bottom-up, top-down, and value theory sizing methods, competitive analysis, and systematic validation approaches.
---
# Market Sizing Frameworks
Frameworks and methodologies for estimating market size and validating market opportunity.
## Overview
Market sizing answers the critical question: "Is this opportunity large enough to pursue?" It provides the foundation for strategic decisions, resource allocation, and investment prioritization.
**Core Principle:** Market sizing is educated guessing with documented assumptions. The goal is reasonable estimates and order-of-magnitude accuracy (is it $1M, $10M, or $100M?), not false precision.
**Key Insight:** Always use multiple methods (bottom-up, top-down, value theory) to triangulate and validate estimates. If methods disagree by more than 2-3x, your assumptions need scrutiny.
---
## When to Use This Skill
**Auto-loaded by agents**:
- `market-analyst` - For TAM/SAM/SOM calculation and market validation
**Use when you need to**:
- Assess if a market opportunity is worth pursuing
- Calculate TAM, SAM, and SOM for business planning
- Validate market assumptions before building
- Support fundraising or strategic planning
- Evaluate competitive landscape impact on opportunity
- Determine realistic revenue projections
---
## The Three-Tier Framework
### TAM (Total Addressable Market)
**Definition:** Total revenue opportunity if you achieved 100% market share globally.
**Purpose:** Understand the absolute ceiling of opportunity.
**Typical Range:**
- Side project: $1M+ TAM minimum
- Full-time business: $10M+ TAM minimum
- VC-backed startup: $100M+ TAM minimum
**Calculation:** See three methods below.
---
### SAM (Serviceable Addressable Market)
**Definition:** Portion of TAM you can realistically serve given your business model, geography, and product capabilities.
**Purpose:** Your realistic target market after applying real-world constraints.
**Filters to Apply:**
1. **Geographic reach:** Where can you operate?
2. **Customer segment:** Which types of customers fit your solution?
3. **Product capabilities:** Who can your product actually serve?
4. **Distribution channels:** Who can you reach?
**Typical Range:** SAM is usually 10-40% of TAM for focused products.
**Formula:**
```
SAM = TAM × Geographic % × Segment % × Product Fit % × Distribution %
```
---
### SOM (Serviceable Obtainable Market)
**Definition:** Portion of SAM you can realistically capture in the near term (1-3 years).
**Purpose:** Your achievable revenue target given resources, competition, and time constraints.
**Realistic Benchmarks:**
- Year 1: 0.1-0.5% of SAM (new products)
- Year 3: 1-5% of SAM (if successful)
- Year 5: 5-15% of SAM (market leader position)
**Formula:**
```
SOM = SAM × Realistic Market Share %
```
**Reality Check:** Convert SOM to a customer count. Is that acquisition rate achievable per month or per week?
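A minimal Python sketch of the SAM filters and SOM share described above, ending with the reality check of converting the result into a customer count. All inputs (TAM, filter percentages, market share, price) are illustrative assumptions, not benchmarks.
```
# Illustrative SAM/SOM math with made-up inputs.
tam = 50_000_000            # $50M TAM from a bottom-up estimate (assumed)
geographic = 0.60           # share of the market in regions you can serve
segment = 0.50              # share that matches your target segment
product_fit = 0.80          # share your product can serve today
distribution = 0.50         # share you can actually reach

sam = tam * geographic * segment * product_fit * distribution   # $6,000,000
som_year1 = sam * 0.005                                         # 0.5% of SAM in year 1

avg_revenue_per_customer = 3_000   # $/year (assumed price point)
customers_needed = som_year1 / avg_revenue_per_customer

print(f"SAM: ${sam:,.0f}")
print(f"Year-1 SOM: ${som_year1:,.0f}")
print(f"Customers needed: {customers_needed:.0f} (~{customers_needed / 12:.1f} per month)")
```
If the implied acquisition rate looks unreachable with your current channels, revisit the SOM share or the price assumption.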
---
## Three Market Sizing Methods
Always use all three methods for robust validation. If they disagree significantly, investigate your assumptions.
### Method 1: Bottom-Up (Most Reliable)
**Approach:** Count actual customers and multiply by revenue per customer.
**Formula:**
```
TAM = Total Potential Customers × Average Revenue per Customer
```
**Process:**
1. Define who is a potential customer (be specific!)
2. Count them using reliable data sources
3. Apply realistic adoption/penetration filters
4. Estimate average annual revenue per customer
5. Multiply to get TAM
**Strengths:**
- Most grounded in reality
- Easy to validate assumptions
- Can name actual customers
**When to Use:** Always start here as your primary method.
**Complete methodology:** See `references/market-sizing-methodologies.md` for detailed step-by-step process with examples.
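A hedged worked example of the bottom-up steps above, in Python. The customer count, adoption rate, and price are placeholder assumptions you would replace with figures from your own data sources.
```
# Bottom-up TAM sketch with placeholder inputs.
potential_customers = 40_000   # companies matching your target profile (assumed count)
adoption_rate = 0.25           # realistic penetration filter (assumed)
avg_annual_revenue = 4_800     # $/customer/year at your expected price (assumed)

raw_tam = potential_customers * avg_annual_revenue
adjusted_tam = potential_customers * adoption_rate * avg_annual_revenue

print(f"Raw TAM (everyone x price): ${raw_tam:,.0f}")   # $192,000,000
print(f"Adoption-adjusted TAM: ${adjusted_tam:,.0f}")    # $48,000,000
```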
---
### Method 2: Top-Down (For Validation)
**Approach:** Start with total market size and estimate your segment percentage.
**Formula:**
```
TAM = Total Market Size × Your Segment %
```
**Process:**
1. Find comparable market size data (Gartner, IDC, etc.)
2. Identify what percentage is your specific segment
3. Apply multiple filters to narrow down
4. Compare to bottom-up calculation
**Strengths:**
- Quick sanity check
- Uses industry research
- Good for validation
**Weaknesses:**
- Often produces inflated numbers
- Hard to validate percentages
- Can feel like guesswork
**When to Use:** As secondary validation, never as primary method.
**Complete methodology:** See `references/market-sizing-methodologies.md` for examples and industry applications.
---
### Method 3: Value Theory
**Approach:** Calculate value created for customers, then estimate capture rate.
**Formula:**
```
TAM = (Value Created per Customer × Potential Customers) × Capture Rate %
```
**Process:**
1. Quantify value delivered (time saved, cost reduced, revenue increased)
2. Calculate dollar value of that benefit
3. Determine what percentage you can capture in pricing (typically 10-30%)
4. Multiply by potential customer base
**Strengths:**
- Tests pricing assumptions
- Grounds estimates in customer value
- Helps justify pricing strategy
**When to Use:** To validate pricing is reasonable relative to value created.
**Complete methodology:** See `references/market-sizing-methodologies.md` for value calculation frameworks.
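A small sketch of the value-theory arithmetic, again with hypothetical numbers (hours saved, hourly cost, and capture rate are illustrative assumptions):

```python
# Hypothetical example: a tool that saves each customer 4 hours/week of manual work.
hours_saved_per_week = 4
hourly_cost = 50                      # $/hour fully loaded cost of the person doing the work
weeks_per_year = 48

value_created_per_customer = hours_saved_per_week * hourly_cost * weeks_per_year  # $9,600/year

capture_rate = 0.20                   # price at ~20% of value created
implied_price = value_created_per_customer * capture_rate

potential_customers = 50_000
tam_value_theory = implied_price * potential_customers

print(f"Value created per customer: ${value_created_per_customer:,.0f}/year")
print(f"Implied price at {capture_rate:.0%} capture: ${implied_price:,.0f}/year")
print(f"Value-theory TAM: ${tam_value_theory:,.0f}")
```

If the implied price is wildly different from what you plan to charge, either your value assumptions or your pricing needs revisiting.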
---
## Validation Framework
### The Reality Check Questions
Before trusting your market sizing, validate with these critical tests:
**1. Can you name 10 specific potential customers?**
- If no: Market may be too narrow or unclear
- If yes: Proceed with confidence
**2. Are there existing competitors making money?**
- If yes: Market is validated (good!)
- If no: Either no market exists OR huge greenfield (risky)
**3. Does TAM > SAM > SOM make sense?**
- Progression should be logical
- SAM typically 10-40% of TAM
- SOM Year 1 typically 0.1-1% of SAM
**4. Is Year 1 SOM achievable with your resources?**
- Convert to customer count per month
- Is that acquisition rate realistic?
- Do you have budget/capacity?
**5. Is the market big enough to justify effort?**
- Minimum thresholds matter
- Compare to your goals (bootstrap vs VC)
**Complete validation checklist:** See `assets/market-validation-checklist.md` for comprehensive 100+ point validation framework.
---
## Common Mistakes to Avoid
1. **Confusing TAM with SAM** - Be explicit about which number you're discussing
2. **Top-down only sizing** - Always validate with bottom-up
3. **Ignoring competition** - Available market is smaller than total market
4. **Assuming linear growth** - Use S-curves, not straight lines
5. **No customer names** - If you can't name 10 customers, market may not exist
6. **One-and-done sizing** - Update assumptions quarterly as you learn
**Detailed guide:** See `references/market-sizing-best-practices.md` for:
- How to avoid each mistake
- Industry-specific considerations
- Competitive landscape analysis
- Assumption management frameworks
- Sensitivity analysis approaches
- Case studies (Superhuman, Quibi, Figma, Slack)
---
## Recommended Workflow
### Step 1: Bottom-Up Calculation (Primary)
Use this as your primary estimate:
1. Define universe of potential customers (be specific)
2. Count them using reliable data sources
3. Estimate realistic adoption/penetration percentage
4. Determine average annual revenue per customer
5. Calculate: TAM = Customers × Adoption % × Price
**Tool:** Use `assets/market-sizing-calculator.md` for step-by-step worksheet with formulas.
---
### Step 2: Top-Down Validation (Secondary)
Validate your bottom-up with industry data:
1. Find comparable market size from research firms
2. Estimate what percentage is your segment
3. Compare to bottom-up calculation
4. If within 2-3x: Good confidence
5. If >5x difference: Investigate assumptions
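One way to mechanize this comparison is a simple ratio check between the two estimates. The thresholds below mirror the 2-3x / 5x guidance above; the input numbers are hypothetical:

```python
def triangulation_check(bottom_up_tam: float, top_down_tam: float) -> str:
    """Compare two independent TAM estimates and flag when assumptions need work."""
    ratio = max(bottom_up_tam, top_down_tam) / min(bottom_up_tam, top_down_tam)
    if ratio <= 3:
        return f"Within {ratio:.1f}x -- good confidence, methods roughly agree"
    if ratio <= 5:
        return f"{ratio:.1f}x apart -- usable, but revisit the weakest assumptions"
    return f"{ratio:.1f}x apart -- investigate assumptions before trusting either number"

print(triangulation_check(bottom_up_tam=120_000_000, top_down_tam=96_000_000))
```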
---
### Step 3: Value Theory Check
Test pricing reasonableness:
1. Quantify value delivered to customers
2. Calculate dollar value of benefits
3. Determine capture rate (10-30% typical)
4. Validate pricing is within reasonable range
---
### Step 4: Apply SAM Filters
Narrow TAM to realistic serviceable market:
```
Starting TAM: $__________
Geographic filter: × ____% = $__________
Segment filter: × ____% = $__________
Product fit filter: × ____% = $__________
Distribution filter: × ____% = $__________
Final SAM: $__________
```
**Template:** Use `assets/tam-sam-som-template.md` for complete calculation template.
---
### Step 5: Calculate Realistic SOM
Project achievable market capture:
**Conservative Approach:**
- Year 1: 0.1-0.3% of SAM
- Year 2: 0.5-1% of SAM
- Year 3: 1-3% of SAM
**Consider:**
- Competitive intensity (high = lower %)
- Switching costs (high = lower %)
- Your differentiation (strong = higher %)
- Distribution advantage (strong = higher %)
---
### Step 6: Validate Thoroughly
Run through comprehensive validation:
1. Complete all reality checks
2. Verify unit economics work (LTV:CAC ratio)
3. Check competitive landscape math
4. Model three scenarios (pessimistic, base, optimistic)
5. Conduct sensitivity analysis on key assumptions
**Validation tool:** Use `assets/market-validation-checklist.md` for systematic validation.
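A minimal sketch of scenario and sensitivity modeling, assuming a simple customers × adoption × price model with made-up inputs:

```python
# Hypothetical base-case assumptions; pessimistic/optimistic vary the shakiest inputs.
scenarios = {
    "pessimistic": {"customers": 30_000, "adoption": 0.02, "price": 1_800},
    "base":        {"customers": 50_000, "adoption": 0.05, "price": 2_400},
    "optimistic":  {"customers": 70_000, "adoption": 0.08, "price": 3_000},
}

for name, s in scenarios.items():
    revenue = s["customers"] * s["adoption"] * s["price"]
    print(f"{name:>11}: ${revenue:,.0f}")

# Simple one-way sensitivity: how much does +/-20% on price move the base case?
base = scenarios["base"]
for delta in (-0.20, 0.20):
    revenue = base["customers"] * base["adoption"] * base["price"] * (1 + delta)
    print(f"price {delta:+.0%}: ${revenue:,.0f}")
```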
---
### Step 7: Document Assumptions
Critical for updating as you learn:
```markdown
## Key Assumptions
1. Customer count: [number]
- Source: [where this came from]
- Confidence: [High/Medium/Low]
- Impact if wrong: [+/- X% on TAM]
2. Pricing: $[amount]/year
- Basis: [competitive analysis, value-based, etc.]
- Confidence: [High/Medium/Low]
- Impact if wrong: [direct 1:1 impact]
3. Adoption rate: [%]
- Basis: [customer interviews, analogies, etc.]
- Confidence: [High/Medium/Low]
- Impact if wrong: [+/- X% on TAM]
```
---
## Templates and Tools
### Calculation Tools
**Complete TAM/SAM/SOM Template:**
- `assets/tam-sam-som-template.md`
- Full calculation framework with all filters
- Includes validation checklist
- Assumption documentation section
- Sensitivity analysis worksheet
**Step-by-Step Calculator:**
- `assets/market-sizing-calculator.md`
- All three methods with formulas
- Worked examples
- Comparison framework
- Confidence scoring
**Validation Checklist:**
- `assets/market-validation-checklist.md`
- 100+ validation points
- Reality checks and red flags
- Customer count validation
- Pricing validation
- Competitive validation
---
## Reference Guides
### Comprehensive Methodologies
**Complete Methods Guide:**
- `references/market-sizing-methodologies.md`
- Detailed bottom-up, top-down, and value theory processes
- Industry-specific approaches (B2B SaaS, Consumer, Enterprise, Marketplace, Dev Tools)
- Method comparison and triangulation
- Data source recommendations
**Best Practices Guide:**
- `references/market-sizing-best-practices.md`
- Common mistakes and how to avoid them
- Validation frameworks
- Competitive landscape analysis
- Assumption management
- Sensitivity analysis
- Case studies: Superhuman, Quibi, Figma, Slack
- Advanced considerations (timing, geographic expansion, platform effects)
---
## Summary
**Market sizing is educated guessing** - the goal is reasonable estimates with documented assumptions, not precision.
**The Three-Step Approach:**
1. **Calculate:** Use all three methods (bottom-up, top-down, value theory)
2. **Validate:** Reality-check with customers, competition, and economics
3. **Document:** Track assumptions and update quarterly
**Key Principles:**
- Always start with bottom-up (most reliable)
- Use top-down only for validation
- Can you name 10 customers? (Critical test)
- Update assumptions as you learn
- Model three scenarios (pessimistic, base, optimistic)
**Decision Framework:**
- If SAM < $10M: Too small for most ventures
- If Year 1 SOM < $50K: Question whether the effort is worth it
- If methods disagree >5x: Assumptions need work
- If no competitors exist: Either no market OR huge opportunity (validate carefully)
---
## Related Skills
- `product-positioning` - Position against competitive landscape
- `product-market-fit` - Validate market demand exists
- `competitive-analysis-templates` - Analyze market attractiveness and competitive dynamics

View File

@@ -0,0 +1,518 @@
---
name: prd-templates
description: Master PRD templates including problem statements, success metrics, requirements, user stories, and technical considerations. Use when writing PRDs, documenting features, defining requirements, communicating product decisions, or creating feature specifications. Covers PRD structure, writing best practices, and templates from Amazon, Google, and high-performing PM teams.
---
# PRD Templates
Comprehensive PRD (Product Requirements Document) templates and frameworks for documenting product requirements and driving successful execution.
## When to Use This Skill
**Auto-loaded by agents**:
- `requirements-engineer` - For Amazon PR/FAQ, comprehensive PRD, and lean PRD templates
**Use when you need**:
- Writing product requirements documents
- Documenting feature specifications
- Aligning cross-functional teams
- Communicating product decisions
- Creating feature briefs
- Defining project scope
- Establishing success metrics
## What is a PRD?
A PRD is the primary document for defining what to build and why. It serves as:
- **Alignment tool**: Gets everyone on the same page
- **Decision artifact**: Documents what and why
- **Communication vehicle**: Shares vision across teams
- **Execution blueprint**: Guides building and launch
- **Historical record**: Captures rationale for future decisions
**A great PRD answers**:
- What are we building?
- Why are we building it?
- Who is it for?
- How will we know it's successful?
- What are we explicitly NOT building?
---
## When to Use Which Template
We provide 4 ready-to-use PRD templates for different contexts:
### Lean PRD
**Use for**: Small features, enhancements, bug fixes
**Effort**: 1-2 hours to write
**Length**: 1-2 pages
**Audience**: Engineering team
**Best for**:
- Features requiring < 1 week of development
- Well-understood problems with clear solutions
- Internal tools or quick experiments
- Low cross-functional complexity
**Template**: `assets/lean-prd-template.md`
---
### Comprehensive PRD (Standard)
**Use for**: Most features, new capabilities, significant enhancements
**Effort**: 4-8 hours to write
**Length**: 3-5 pages
**Audience**: Cross-functional team
**Best for**:
- Standard features (1-4 weeks of development)
- Customer-facing changes
- Features requiring cross-functional coordination
- New capabilities or platform features
**Template**: `assets/comprehensive-prd-template.md`
---
### Amazon PR/FAQ
**Use for**: New products, major bets, working backwards from customer
**Effort**: 8-16 hours to write
**Length**: 3-6 pages (1-2 page PR + 2-4 pages FAQ)
**Audience**: Solo dev or small team (customer-first thinking)
**Best for**:
- New products or product lines
- Major strategic initiatives
- Customer-facing launches
- Working backwards from customer experience
- Pitching to executives or board
**Unique approach**: Start with the press release announcing the finished product, then answer hard questions. Forces customer-first thinking and addresses risks early.
**Template**: `assets/amazon-pr-faq-template.md`
---
### Google PRD
**Use for**: Data-driven products, cross-team initiatives, scale-focused
**Effort**: 8-12 hours to write
**Length**: 5-10 pages
**Audience**: Cross-functional teams, leadership
**Best for**:
- Products where metrics and scale matter from day one
- Cross-team initiatives requiring tight coordination
- Features with significant technical complexity
- Initiatives requiring executive approval
**Unique approach**: Emphasizes objectives (goals and non-goals), user benefits, and clear success criteria with measurable targets.
**Template**: `assets/google-prd-template.md`
---
## Core PRD Components
Regardless of template, great PRDs include these key elements:
### 1. Problem Statement
**Purpose**: Validate this is worth solving
**Include**:
- User pain point (describe from their perspective)
- Who experiences it (user segment, % affected)
- Impact if not solved (business + customer cost)
- Evidence (research, data, quotes)
**Good example**:
"Small business owners spend 5+ hours per week manually creating invoices, leading to delayed payments and cash flow issues. 68% of survey respondents cited invoicing as their #1 time sink. Current tools require accounting expertise most small business owners lack."
**Bad example**:
"We need an invoicing feature because competitors have one."
---
### 2. Goals & Success Criteria
**Purpose**: Define what winning looks like
**Include**:
- Business goals (revenue, efficiency, market position)
- User goals (what users can newly accomplish)
- Success metrics with baselines and targets
- Timeline for measurement (30/60/90 days)
**Make metrics SMART**:
- **Specific**: "Increase DAU" not "grow users"
- **Measurable**: Quantifiable number
- **Achievable**: Stretch but realistic
- **Relevant**: Ties to business goals
- **Time-bound**: Clear deadline
**Track both**:
- **Leading metrics**: Predict success (activation, engagement)
- **Lagging metrics**: Measure outcome (revenue, retention)
---
### 2.5. Evidence (Optional Section)
**Purpose**: Validate the problem with data
**CRITICAL - Never Fabricate Evidence**:
PRDs are specifications Claude Code uses to build features. Fabricated evidence leads to wrong implementations. All included evidence MUST be from real sources.
**Include Evidence section ONLY when you have**:
- Real user research (synthesized by research-ops or user-provided)
- Competitive analysis (from competitive-landscape.md or market-analyst)
- Support data (ticket volumes, customer quotes from user)
- Analytics data (user-provided metrics and usage patterns)
**If no evidence exists**: Omit the Evidence section entirely. Do not use placeholders or template examples as if they were real data.
**Valid evidence sources**:
- Context files: `competitive-landscape.md` (Stage 1), `customer-segments.md` (Stage 2)
- Specialist agents: research-ops synthesis, market-analyst competitive research
- User-provided: Support tickets, analytics, surveys, interview transcripts
- WebSearch: Current market data, competitor information
**Attribution required** - All evidence must cite source:
- "Synthesized from 6 interviews (research-ops, Oct 2025)"
- "From competitive-landscape.md (created Stage 1, validated Oct 2025)"
- "User-provided: 12 support tickets over 3 months"
- "WebSearch: Current competitor analysis (Oct 2025)"
**Template examples show FORMAT only**:
- Examples in templates demonstrate structure, not content to copy
- Never copy example numbers, quotes, or data as if they were real
- Replace with actual data or omit section
**Check context first**:
- Before asking user, check if `competitive-landscape.md` or `customer-segments.md` contain feature-relevant data
- If context files exist but don't cover this feature, ask user if new research is needed
- Route to specialist agents (market-analyst, research-ops) when beneficial
---
### 3. Proposed Solution
**Purpose**: Paint picture of what we're building
**Include**:
- High-level description (what and why)
- Key capabilities (what it can do)
- User experience (how users interact)
- Value proposition (why users care)
**Show, don't just tell**:
- User scenarios (storytelling)
- User flows (step-by-step)
- Mockups (visual representation)
- Concrete examples
---
### 4. Requirements
**Purpose**: Define what to build
**Functional requirements**:
- Number them (REQ-001, REQ-002)
- Use active voice ("System shall...")
- Be specific and testable
- Include acceptance criteria
**Non-functional requirements**:
- Performance (speed, latency, throughput)
- Security (authentication, authorization, encryption)
- Scalability (load handling, growth capacity)
- Accessibility (WCAG compliance)
- Reliability (uptime, error rates)
**Prioritize ruthlessly**:
- **P0/Must**: Required for launch
- **P1/Should**: Important, can defer if needed
- **P2/Nice-to-have**: Future consideration
---
### 5. Out of Scope
**Purpose**: Prevent scope creep and maintain focus
**Why it matters**:
- Explicitly stating what we're NOT doing prevents scope creep mid-development
- Helps prioritization discussions
- Maintains focus on core value
**Include**:
- Features/capabilities explicitly excluded
- Brief rationale for each
- Note if it's "never" or "not now"
---
### 6. Launch Plan
**Purpose**: Define rollout strategy
**Include**:
- Rollout approach (phased, beta, full launch)
- Target segments or cohorts
- Success validation approach
- Go/No-Go criteria
---
## PRD Writing Best Practices
### Start with Why
Most PRDs start with "what". Great PRDs start with "why".
**Poor order**: "We're building guest checkout. Users will be able to..."
**Good order**: "Users abandon checkout because account creation adds friction (45% rate). To solve this, we're building..."
---
### Be Specific and Measurable
Vague language kills PRDs.
**Vague → Specific**:
- "Fast" → "Page load < 2 seconds (p95)"
- "Many users" → "35% of daily active users"
- "Improve engagement" → "Increase session length from 3min to 5min"
- "Easy to use" → "New users complete key task in < 5 minutes without help"
---
### Write for Clarity
Clear documentation serves multiple purposes:
**For implementation**: Clear requirements, acceptance criteria, non-functional requirements
**For yourself**: User scenarios, flows, edge cases, decisions and rationale
**For users**: Target audience, value proposition, differentiation
---
### Document Decisions and Rationale
Future you needs to know why decisions were made.
**Example**:
"We're starting with web only (no mobile app) because:
1. 80% of current usage is web
2. Mobile requires 3x development time
3. We can validate core value on web first
4. Mobile can launch in Q2 based on learnings"
---
### Use Visuals
**Tables** for comparisons, **diagrams** for flows, **mockups** for UI, **charts** for data.
A picture is worth a thousand words of requirements.
---
## Common PRD Mistakes to Avoid
**Solutionizing too early**: Start with problem, not "we need to build..."
**Vague requirements**: "Fast" instead of "< 2 seconds"
**Missing acceptance criteria**: No clear definition of "done"
**No prioritization**: Everything is P0
**Scope creep**: Adding "just one more thing" repeatedly
**Missing success metrics**: No way to measure if it worked
**No evidence**: "I think users want..." instead of data
**Too long**: 20-page PRDs nobody reads
---
## Writing Clear PRDs for Solo Builders
For solo developers and small teams, PRDs serve as thinking documents and future reference.
### Start with a Brief
**Before writing full PRD**:
1. Draft 1-page overview (problem, solution, metrics, scope)
2. Validate direction with quick user feedback or research
3. Refine based on insights
4. Then write full PRD
**Why this works**:
- Low time investment if direction is wrong
- Easier to course-correct early
- Validates problem before deep specification
- Prevents over-planning before validation
---
### Self-Review Process
**Before finalizing**:
1. Step away for 24 hours
2. Re-read with fresh eyes
3. Ask: Would Claude Code understand this?
4. Ask: Would a new teammate understand this?
5. Update for clarity
**Review checklist**:
- Problem clearly stated with evidence
- Success metrics specific and measurable
- Requirements testable and complete
- Out of scope explicitly documented
- Decisions and rationale captured
---
## PRD Review Checklist
Before submitting your PRD, verify:
**Content Completeness**:
- [ ] Problem clearly defined with evidence
- [ ] Goals and success metrics specified
- [ ] Solution described with user flows
- [ ] Requirements listed with acceptance criteria
- [ ] Out of scope explicitly stated
- [ ] Launch plan defined
**Quality**:
- [ ] Well-structured and easy to navigate
- [ ] Clear, concise language
- [ ] Specific and measurable requirements
- [ ] Technical feasibility considered
- [ ] Risks identified with mitigation
**For full checklist**: See `assets/prd-review-checklist.md`
---
## Ready-to-Use Templates
We provide complete, copy-paste-ready templates:
### In `assets/`:
- **lean-prd-template.md**: Quick 1-2 page format for small features
- **comprehensive-prd-template.md**: Standard 3-5 page format for most features
- **amazon-pr-faq-template.md**: Working backwards from customer experience
- **google-prd-template.md**: Data-driven, objectives-focused format
- **prd-review-checklist.md**: Quality checklist before submission
### In `references/`:
- **prd-writing-guide.md**: Deep dive on writing techniques, section-by-section guidance, common mistakes
---
## PRD Workflow
### 1. Research & Discovery
- Customer interviews
- Data analysis
- Competitive research
- Problem validation
### 2. Pre-PRD Validation (1-2 days)
- Draft 1-pager
- Validate with user feedback or research
- Iterate on direction
### 3. Write PRD (1-3 days)
- Choose appropriate template
- Document problem, solution, requirements
- Add mockups and user flows
- Define success metrics
### 4. Review & Iterate (1-2 days)
- Self-review after 24 hours
- Peer review if available
- Refine for clarity
- Finalize
### 5. Development
- Update PRD as decisions made
- Regular demos and check-ins
- Scope change management
### 6. Launch & Learn
- Execute launch plan
- Track success metrics
- Document results
- Share learnings
---
## When NOT to Write a PRD
PRDs aren't always needed:
**Skip for**:
- Trivial changes (copy tweaks, simple bug fixes)
- Well-understood maintenance work
- One-person side projects
- Throwaway prototypes
**Use instead**:
- Quick Slack/email for trivial changes
- GitHub issue for bug fixes
- Lightweight spec for small projects
**PRDs make sense when**:
- Significant time investment (>1 week)
- Complex feature requiring clear specification
- Need documentation for future reference
- Building with AI coding tools like Claude Code
---
## Adapting Templates
Templates are starting points, not rigid structures.
**Adapt based on**:
- Company culture (startup vs enterprise)
- Product maturity (new product vs established)
- Team size (2 people vs 20)
- Audience (internal tool vs customer-facing)
**Good adaptation**:
- Removing sections that don't apply
- Adding sections for specific context
- Adjusting level of detail
- Changing format/structure
**Bad adaptation**:
- Removing problem statement (always needed)
- Skipping success metrics (always needed)
- No out-of-scope (always needed)
- Ignoring audience needs
---
## Related Skills
- **user-story-templates**: For user story format and acceptance criteria
- **specification-techniques**: General spec writing best practices
- **go-to-market-playbooks**: For launch planning
---
## Learning More
**In this skill**:
- See templates in `assets/` for ready-to-use formats
- See `references/prd-writing-guide.md` for writing techniques
**Books**:
- "Inspired" by Marty Cagan
- "Escaping the Build Trap" by Melissa Perri
- "The Lean Product Playbook" by Dan Olsen
**Key Principle**: The best PRD is the one that gets your team aligned and building the right thing. Use these templates as starting points and adapt to your context.

View File

@@ -0,0 +1,285 @@
---
name: prioritization-methods
description: Apply RICE, ICE, MoSCoW, Kano, and Value vs Effort frameworks. Use when prioritizing features, roadmap planning, or making trade-off decisions.
---
# Prioritization Methods & Frameworks
## Overview
Data-driven frameworks for feature prioritization, backlog ranking, and MVP scoping. Choose the right framework based on your context: data availability, team size, and decision type.
## When to Use This Skill
**Auto-loaded by agents**:
- `feature-prioritizer` - For RICE/ICE scoring, MVP scoping, and backlog ranking
**Use when you need**:
- Choosing between competing features
- Building quarterly roadmaps
- Backlog prioritization
- Saying "no" with evidence
- Clear prioritization decisions
- Resource allocation decisions
- MVP scoping decisions
---
## Seven Core Frameworks
### 1. RICE Scoring (Intercom)
**Formula**: (Reach × Impact × Confidence) / Effort
**Best for**: Large backlogs (20+ items) with quantitative data
**Components**:
- **Reach**: Users impacted per quarter
- **Impact**: 0.25 (minimal) to 3 (massive)
- **Confidence**: 50% (low data) to 100% (high data)
- **Effort**: Person-months to ship
**Example**:
```
Dark Mode: (10,000 × 2.0 × 0.80) / 1.5 = 10,667
```
**When to use**: Post-PMF with metrics, need defendable priorities, data-driven culture
**Template**: `assets/rice-scoring-template.md`
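A small Python sketch of RICE scoring applied to a hypothetical backlog (feature names and inputs are illustrative, following the example above):

```python
def rice(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE score = (Reach x Impact x Confidence) / Effort (effort in person-months)."""
    return (reach * impact * confidence) / effort

backlog = [
    ("Dark mode",        10_000, 2.0, 0.80, 1.5),
    ("SSO integration",   2_000, 3.0, 0.90, 4.0),
    ("Onboarding revamp", 8_000, 1.0, 0.70, 2.0),
]

ranked = sorted(backlog, key=lambda f: rice(*f[1:]), reverse=True)
for name, *inputs in ranked:
    print(f"{name:<18} RICE = {rice(*inputs):,.0f}")
```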
---
### 2. ICE Scoring (Sean Ellis)
**Formula**: (Impact + Confidence + Ease) / 3
**Best for**: Quick experiments, early-stage products, limited data
**Components** (each 1-10):
- **Impact**: How much will this move the needle?
- **Confidence**: How sure are we?
- **Ease**: How simple to implement?
**Example**:
```
Email Notifications: (8 + 9 + 7) / 3 = 8.0
```
**When to use**: Growth experiments, startups, need speed over rigor
**Template**: `assets/ice-scoring-template.md`
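The same idea as a one-line function, using the example scores above:

```python
def ice(impact: int, confidence: int, ease: int) -> float:
    """ICE score = (Impact + Confidence + Ease) / 3, each rated 1-10."""
    return (impact + confidence + ease) / 3

print(ice(impact=8, confidence=9, ease=7))  # 8.0
```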
---
### 3. Value vs Effort Matrix (2×2)
**Quadrants**:
- **Quick Wins** (high value, low effort) - Do first
- **Big Bets** (high value, high effort) - Strategic
- **Fill-Ins** (low value, low effort) - If capacity
- **Time Sinks** (low value, high effort) - Avoid
**Best for**: Visual presentations, portfolio planning, quick assessments
**When to use**: Clear communication, strategic planning, need visualization
**Template**: `assets/value-effort-matrix-template.md`
---
### 4. MoSCoW Method
**Categories**:
- **Must Have** (60%) - Critical for launch
- **Should Have** (20%) - Important but not critical
- **Could Have** (20%) - Nice-to-have
- **Won't Have** - Explicitly out of scope
**Best for**: MVP scoping, release planning, clear scope decisions
**When to use**: Fixed timeline, need to cut scope, binary go/no-go decisions
**Template**: `assets/moscow-prioritization-template.md`
---
### 5. Kano Model
**Categories**:
- **Basic Needs (Must-Be)**: Expected, dissatisfiers if absent
- **Performance Needs**: More is better, linear satisfaction
- **Excitement Needs (Delighters)**: Unexpected joy
- **Indifferent**: Users don't care
- **Reverse**: Users prefer without it
**Best for**: Understanding user expectations, competitive positioning, roadmap sequencing
**When to use**: Strategic planning, differentiation strategy, multi-release planning
**Template**: `assets/kano-model-template.md`
---
### 6. Weighted Scoring
**Process**:
1. Define criteria (User Value, Revenue, Strategic Fit, Effort)
2. Assign weights (must sum to 100%)
3. Score features (1-10) on each criterion
4. Calculate weighted score
**Example**:
```
Criteria: User Value 40%, Revenue 30%, Strategic 20%, Ease 10%
Feature: (8 × 0.40) + (6 × 0.30) + (9 × 0.20) + (5 × 0.10) = 7.3
```
**Best for**: Multiple criteria, complex trade-offs, custom needs
**When to use**: Balancing priorities, transparent decisions
**Template**: `assets/weighted-scoring-template.md`
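A minimal sketch of weighted scoring using the example weights above; feature names and scores are hypothetical:

```python
# Weights must sum to 1.0; criteria and values here are illustrative.
weights = {"user_value": 0.40, "revenue": 0.30, "strategic_fit": 0.20, "ease": 0.10}
assert abs(sum(weights.values()) - 1.0) < 1e-9

features = {
    "Feature A": {"user_value": 8, "revenue": 6, "strategic_fit": 9, "ease": 5},
    "Feature B": {"user_value": 6, "revenue": 9, "strategic_fit": 5, "ease": 8},
}

for name, scores in features.items():
    weighted = sum(scores[c] * w for c, w in weights.items())
    print(f"{name}: {weighted:.1f}")   # Feature A: 7.3, Feature B: 6.9
```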
---
### 7. Opportunity Scoring (Jobs-to-be-Done)
**Formula**: Importance + Max(Importance - Satisfaction, 0)
**Process**:
1. Identify customer jobs (outcomes, not features)
2. Survey: Rate importance (1-5) and satisfaction (1-5)
3. Calculate opportunity = importance + gap
4. Prioritize high-opportunity jobs (>7.0)
**Best for**: Outcome-driven innovation, understanding underserved needs, feature gap analysis
**When to use**: JTBD methodology, finding innovation opportunities, validation
**Template**: `assets/opportunity-scoring-template.md`
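A small sketch of opportunity scoring over a hypothetical set of jobs, applying the >7.0 threshold above:

```python
def opportunity_score(importance: float, satisfaction: float) -> float:
    """Opportunity = Importance + max(Importance - Satisfaction, 0), both rated 1-5 here."""
    return importance + max(importance - satisfaction, 0)

jobs = {
    "Get paid on time":         (4.8, 2.1),
    "Track project status":     (4.0, 3.9),
    "Generate monthly reports": (3.2, 3.5),
}

for job, (imp, sat) in sorted(jobs.items(), key=lambda kv: -opportunity_score(*kv[1])):
    score = opportunity_score(imp, sat)
    flag = "<- underserved" if score > 7.0 else ""
    print(f"{job:<28} {score:.1f} {flag}")
```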
---
## Choosing the Right Framework
**Need speed?** → ICE (fastest)
**Have user data?** → RICE (most rigorous)
**Visual presentation?** → Value/Effort (clear visualization)
**MVP scoping?** → MoSCoW (forces cuts)
**User expectations?** → Kano (strategic insights)
**Complex criteria?** → Weighted Scoring (custom)
**Outcome-focused?** → Opportunity Scoring (JTBD)
**Detailed comparison**: `references/framework-selection-guide.md`
Complete decision tree, framework comparison table, combining strategies
---
## Best Practices
**1. Be Consistent**
- Use same framework across team
- Document assumptions explicitly
- Update scores as you learn
**2. Combine Frameworks**
- RICE for ranking + Value/Effort for visualization
- MoSCoW for release + RICE for roadmap
- Kano for strategy + ICE for tactics
**3. Avoid Common Pitfalls**
- Don't prioritize by HiPPO (Highest Paid Person's Opinion)
- Don't ignore effort (value alone insufficient)
- Don't set-and-forget (re-prioritize regularly)
- Don't game the system (honest scoring)
**4. Clear Communication**
- Show your work (transparent criteria)
- Visualize priorities clearly
- Explain trade-offs explicitly
- Document "why not" for rejected items
**5. Iterate and Learn**
- Track actual vs estimated impact
- Refine scoring over time
- Calibrate team estimates
- Learn from misses
---
## Templates and References
### Assets (Ready-to-Use Templates)
Copy-paste these for immediate use:
- `assets/rice-scoring-template.md` - Reach × Impact × Confidence / Effort
- `assets/ice-scoring-template.md` - Impact + Confidence + Ease / 3
- `assets/value-effort-matrix-template.md` - 2×2 visualization
- `assets/moscow-prioritization-template.md` - Must/Should/Could/Won't
- `assets/kano-model-template.md` - Expectation analysis
- `assets/weighted-scoring-template.md` - Custom criteria scoring
- `assets/opportunity-scoring-template.md` - Jobs-to-be-done prioritization
### References (Deep Dives)
When you need comprehensive guidance:
- `references/framework-selection-guide.md` - Choose the right framework, comparison table, combining strategies, decision tree
---
## Quick Reference
```
Problem: Too many features, limited resources
Solution: Use prioritization framework
Context-Based Selection:
├─ Lots of data? → RICE
├─ Need speed? → ICE
├─ Visual presentation? → Value/Effort
├─ MVP scoping? → MoSCoW
├─ User expectations? → Kano
├─ Complex criteria? → Weighted Scoring
└─ Outcome-focused? → Opportunity Scoring
Always: Document, communicate, iterate
```
---
## Resources
**Books**:
- "Intercom on Product Management" (RICE framework)
- "Hacking Growth" by Sean Ellis (ICE scoring)
- "Jobs to be Done" by Anthony Ulwick (Opportunity scoring)
**Tools**:
- Airtable/Notion for scoring
- ProductPlan for roadmaps
- Aha!, ProductBoard for frameworks
**Articles**:
- "RICE: Simple prioritization for product managers" - Intercom
- "How to use ICE Scoring" - Sean Ellis
- "The Kano Model" - UX Magazine
---
## Related Skills
- `roadmap-frameworks` - Turn priorities into roadmaps
- `specification-techniques` - Spec prioritized features
- `product-positioning` - Strategic positioning and differentiation
---
**Key Principle**: Choose one framework, use it consistently, iterate. Don't over-analyze - prioritization should enable decisions, not paralyze them.

View File

@@ -0,0 +1,484 @@
---
name: product-market-fit
description: Master frameworks for measuring, achieving, and maintaining product-market fit (PMF). Use when validating new products, assessing readiness to scale, diagnosing retention problems, planning market expansion, measuring "very disappointed" score, implementing PMF engines, or determining if you have permission to grow. Covers Sean Ellis survey methodology, Superhuman PMF engine, retention curve analysis, leading/lagging indicators, pre-PMF vs. post-PMF strategies, and maintaining fit as markets evolve.
---
# Product-Market Fit
Frameworks for measuring, achieving, and maintaining the critical milestone where your product satisfies strong market demand.
## Overview
Product-Market Fit (PMF) is the degree to which a product satisfies strong market demand - the inflection point where a product becomes a "must-have" for a well-defined market segment.
**Core Principle:** PMF is not a destination, it's a milestone that gives you permission to scale. Maintaining it requires continuous attention to customer needs and market evolution.
**Key Insight:** You can't manufacture PMF through marketing or sales tactics. PMF comes from deeply understanding a specific market segment and building something they desperately need. Scaling before PMF is the number one killer of startups.
**Historical Context:**
- Term coined by Marc Andreessen (2007)
- Operationalized by Sean Ellis with 40% rule (2010)
- Systematized by Rahul Vohra with Superhuman PMF Engine (2017)
## When to Use This Skill
**Auto-loaded by agents**:
- `product-strategist` - For PMF measurement, Sean Ellis survey, and retention analysis
**Use when you need**:
- Measuring product-market fit status
- Running Sean Ellis PMF surveys
- Analyzing retention curves
- Determining readiness to scale
- Diagnosing retention problems
- Planning PMF improvement strategies
- Deciding pre-PMF vs. post-PMF tactics
- Validating market expansion opportunities
---
## Measuring Product-Market Fit
### The Sean Ellis Test (40% Rule)
The definitive method for measuring PMF through a single powerful question.
**The Question:**
> "How would you feel if you could no longer use [product]?"
> - a) Very disappointed
> - b) Somewhat disappointed
> - c) Not disappointed (it isn't really that useful)
**PMF Threshold:**
- **40%+ "Very disappointed" = PMF achieved**
- 25-40% = Close, keep iterating
- <25% = No PMF yet
**Why this works:**
- Measures must-have vs. nice-to-have
- Predictive of retention
- Correlates with organic growth
- Simple to administer
- Actionable results
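A minimal sketch of how the survey rolls up into a PMF score; the response counts are hypothetical and the thresholds mirror the 40% / 25% guidance above:

```python
from collections import Counter

# Hypothetical responses to "How would you feel if you could no longer use the product?"
responses = ["very"] * 34 + ["somewhat"] * 28 + ["not"] * 18   # 80 responses total

counts = Counter(responses)
total = sum(counts.values())
pmf_score = counts["very"] / total

print(f"PMF score: {pmf_score:.0%} 'very disappointed' (n={total})")
if pmf_score >= 0.40:
    print("PMF threshold reached -- permission to scale")
elif pmf_score >= 0.25:
    print("Close -- keep iterating systematically")
else:
    print("No PMF yet -- stay in discovery mode")
```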
**Complete survey methodology:** See `assets/sean-ellis-pmf-survey.md` for:
- Full survey template
- When and how to administer
- Sample size requirements
- Analysis framework
- Segment breakdowns
---
### The Superhuman PMF Engine
Systematic framework for measuring and improving PMF score quarter over quarter.
**Philosophy:** PMF is not binary - it's a spectrum you can measure and improve systematically.
**The 5-Step Engine:**
1. **Segment users:** Very disappointed / Somewhat / Not disappointed
2. **Analyze champions:** Who are the "very disappointed" users? What do they have in common?
3. **Find your roadmap:** Different strategies for each segment
4. **Build strategically:** 50% for champions, 50% to convert warm users, 0% for wrong-fit
5. **Measure progress:** Re-survey quarterly, track improvement
**Superhuman's Results:**
```
Q1 2017: 22% → Q2 2018: 58% (18 months)
```
**Complete framework:** See `assets/superhuman-pmf-engine.md` for:
- Detailed 5-step process
- Segment analysis worksheets
- Roadmap allocation strategy
- Progress tracking templates
- Prioritization frameworks
---
### Retention Curves: The Ultimate PMF Test
Retention patterns reveal if your product is truly a must-have.
**Three Patterns:**
**1. Leaky Bucket (No PMF):**
- Continuously declining curve
- Never flattens
- Users leave permanently
- Action: Find PMF before scaling
**2. Flattening Curve (PMF!):**
- Drops initially, then flattens at 30-50%
- Core users retain long-term
- Ready to scale
- Action: Prove acquisition channel, then scale
**3. Smiling Curve (Strong PMF):**
- Usage increases over time
- Network effects or habit formation
- Examples: Social networks, collaboration tools
- Action: Scale aggressively
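A rough sketch of classifying a cohort curve into these patterns; the retention numbers and the flattening tolerance are illustrative assumptions, not benchmarks:

```python
# Hypothetical monthly cohort retention (fraction of the cohort still active each month).
cohort_retention = [1.00, 0.62, 0.48, 0.41, 0.38, 0.37, 0.36, 0.36]

def classify(curve: list[float], window: int = 3, tolerance: float = 0.02) -> str:
    """Rough pattern check: how much does retention move over the last few periods?"""
    recent_drop = curve[-window - 1] - curve[-1]
    if curve[-1] > curve[-window - 1]:
        return "smiling curve -- usage growing, strong PMF"
    if recent_drop <= tolerance:
        return f"flattening around {curve[-1]:.0%} -- PMF signal, prove a channel then scale"
    return "leaky bucket -- still declining, keep searching for PMF"

print(classify(cohort_retention))
```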
**Complete analysis:** See `assets/retention-curve-analysis.md` for:
- How to build retention curves
- Diagnosing problems
- Industry benchmarks
- Improving retention by phase
---
## Leading vs. Lagging Indicators
Use both types of indicators to measure PMF comprehensively.
### Leading Indicators (Feel It Now)
Early signals before metrics confirm PMF:
**1. Organic Growth:**
- Word-of-mouth referrals happening
- Unprompted social media mentions
- Inbound signup requests
- Target: >50% of growth organic
**2. User Engagement:**
- High DAU/MAU ratio (stickiness)
- Deep feature adoption
- Long session times
- Target: DAU/MAU >30-40% (B2B), >60% (B2C Social)
**3. Customer Passion:**
- "Don't take this away from me"
- Volunteering to help
- Unsolicited recommendations
- Active community forming
**4. Sales Velocity (B2B):**
- Deals closing faster over time
- Less price resistance
- Shorter sales cycles
- Higher win rates
**5. Struggle to Keep Up:**
- Natural waitlist forming
- Capacity challenges
- Can't hire fast enough
- Good problem to have
### Lagging Indicators (Metrics Confirm It)
Hard metrics that retrospectively validate PMF:
**1. Retention:**
- B2C: <5% monthly churn
- B2B: <2% logo churn
- Cohort curves flattening
**2. Net Promoter Score:**
- NPS >50 (world-class)
- High promoters, low detractors
**3. Unit Economics:**
- LTV:CAC >3:1 (minimum), >5:1 (ideal)
- Payback period <12 months
- Gross margin >70% (SaaS)
**4. Growth Rate:**
- Exponential not linear
- 10%+ month-over-month
- Compounding effects visible
**5. Market Pull:**
- Inbound >50% of new customers
- PR coverage without effort
- Competitive response
- Industry recognition
**Comprehensive guide:** See `references/leading-lagging-indicators.md` for:
- Detailed metrics and benchmarks
- How to use both together
- Early warning systems
- Decision frameworks
---
## Dashboard and Tracking
### The PMF Dashboard
Track PMF through multiple lenses for complete picture.
**Primary Metrics (The Big 3):**
1. Sean Ellis PMF Score (>40% target)
2. Retention Curves (flattening pattern)
3. Net Promoter Score (>50 target)
**Supporting Metrics:**
- Leading indicators (organic growth, engagement, passion)
- Lagging indicators (unit economics, growth rate)
- Segment-specific breakdowns
**Update frequency:**
- Daily: Engagement metrics
- Weekly: Growth metrics
- Monthly: Dashboard review
- Quarterly: Deep-dive + PMF survey
**Complete dashboard:** See `assets/pmf-measurement-dashboard.md` for:
- Full dashboard template
- Metric definitions and benchmarks
- Alert thresholds
- Segment analysis
- Visualization guidelines
---
## Path to Achieving PMF
### Stage 1: Market Understanding
**Activities:**
- Interview 30-50 potential customers
- Understand current alternatives
- Map jobs-to-be-done
- Identify underserved segments
**Timeline:** 2-4 weeks
### Stage 2: Value Hypothesis
**Framework:**
```
For [target segment]
Who [problem/need]
Our [product category]
That [key benefit]
Unlike [alternatives]
We [unique capability]
```
**Validation:** Would 40% be "very disappointed" to lose this?
**Timeline:** 1-2 weeks
**Complete canvas:** See `assets/value-proposition-canvas.md`
### Stage 3: MVP Validation
**Build minimum viable product:**
- Core value only
- Fast to iterate
- Good enough to test hypothesis
**Validation criteria:**
- 10-20 users experiencing value
- Qualitative feedback
- Usage patterns match hypothesis
**Timeline:** 4-8 weeks
### Stage 4: PMF Measurement
**Implement measurement:**
- Sean Ellis survey (after 2-4 weeks of use)
- Minimum 40 responses
- Track % "very disappointed"
- Set improvement targets
**Timeline:** 2-4 weeks to implement
### Stage 5: Systematic Improvement
**Apply Superhuman Engine:**
- Segment by PMF score
- Analyze champions
- Build 50/50 roadmap
- Iterate quarterly
**Timeline:** 6-18 months to reach 40%+
---
## The Three Stages of PMF
### Pre-PMF: Finding Fit (6-24 months)
**Characteristics:**
- High churn, low organic growth
- Sales struggle
- <40% "very disappointed"
**Focus:**
- Rapid iteration
- Customer discovery (10+ interviews/week)
- Small cohorts, extreme learning velocity
- Don't scale yet
**Common mistakes:**
- Premature scaling
- Building too many features
- Ignoring retention data
### At-PMF: Initial Traction (3-6 months)
**Characteristics:**
- 40%+ "very disappointed"
- Retention curves flattening
- Word-of-mouth spreading
- Easier to close deals
**Focus:**
- Prove one acquisition channel works
- Optimize unit economics
- Build for scalability
- Strengthen core value
**Green lights to scale:**
- LTV:CAC >3:1
- Retention curves flat/improving
- One repeatable channel working
### Post-PMF: Scaling (Years)
**Characteristics:**
- Predictable growth
- Multiple channels working
- Strong unit economics
- Efficient go-to-market
**Focus:**
- Scale acquisition
- Geographic expansion
- Adjacent segments
- Product line extensions
**Risk:** Losing PMF through feature bloat, serving wrong customers, losing focus
**Detailed guide:** See `references/pmf-stages-guide.md` for:
- Complete stage breakdowns
- Strategies for each stage
- Transition criteria
- Common mistakes and solutions
---
## Maintaining PMF Over Time
### Why PMF Gets Lost
**Internal factors:**
- Feature bloat dilutes core value
- Serving wrong customers
- Slow iteration speed
- Technical debt blocks innovation
**External factors:**
- Market evolution (needs change)
- New competitors (better alternatives)
- Technology shifts (new capabilities)
- Economic conditions (budget priorities)
### Maintenance Strategies
**1. Continuous Customer Contact:**
- Never stop interviewing (10-20 per week)
- Watch usage data constantly
- Monitor NPS and PMF scores quarterly
- Teresa Torres' weekly touchpoints
**2. Core Value Protection:**
- Resist feature bloat (80% strengthen core, 20% new)
- Maintain product focus
- Protect speed and simplicity
- Regular feature pruning
**3. Segment Discipline:**
- Don't chase every customer
- Say no to wrong-fit deals
- Maintain ICP (ideal customer profile)
- Measure PMF by segment
**4. Regular PMF Surveys:**
- Quarterly Sean Ellis surveys
- Track score by segment
- Watch for declining scores
- Act on early warnings
**5. Competitive Monitoring:**
- Track new alternatives
- Monitor customer switching
- Stay ahead on innovation
- Evolve value proposition
**Complete guide:** See `references/maintaining-pmf-guide.md` for:
- Why PMF degrades
- Detailed maintenance strategies
- Warning signs checklist
- Recovery playbook
---
## Case Studies
Learn from real-world PMF journeys:
**Superhuman: Systematic PMF Improvement**
- 22% → 58% in 18 months
- Data-driven PMF engine
- Methodical quarterly improvement
**Slack: Maintaining PMF Through Evolution**
- Strong initial PMF with tech startups
- Expanded while protecting core value
- Multiple segment expansion successful
**Quibi: Cautionary Tale of No PMF**
- $1.75B raised, complete failure
- Built for 18 months without validation
- Ignored user feedback, iterated too slowly
**Figma: Remote Work Inflection Point**
- 5 years to PMF (patient technology building)
- COVID accelerated PMF dramatically
- Right product, right time
**Detailed case studies:** See `references/pmf-case-studies.md` for:
- Complete journey narratives
- Metrics and timelines
- Key lessons from each
- What worked and what didn't
---
## PMF Best Practices
**DO:**
- Measure PMF systematically (40% rule)
- Survey quarterly to track progress
- Focus on champions (double down on "very disappointed")
- Protect core value as you scale
- Maintain customer proximity always
- Use retention curves as ultimate test
- Say no to wrong-fit customers
- Iterate rapidly before PMF
- Be patient (can take 6-24 months)
**DON'T:**
- Scale before achieving PMF (leaky bucket)
- Ignore retention for acquisition
- Build for everyone (niche down)
- Assume PMF is permanent (keep measuring)
- Stop talking to customers (ever)
- Add features constantly (bloat)
- Chase every deal (segment discipline)
- Rush the process (systematic > fast)
---
## Related Skills
- `user-research-techniques` - Interview methods, research synthesis (understanding users)
- `validation-frameworks` - Problem/solution validation and MVP testing
- `market-sizing-frameworks` - Market opportunity assessment

View File

@@ -0,0 +1,382 @@
---
name: product-positioning
description: Master product positioning and differentiation using April Dunford's framework for creating category-defining products. Use when launching new products, repositioning existing products, defining competitive strategy, developing messaging, entering new markets, or differentiating from competitors. Covers positioning statement frameworks, category design, value articulation, target market selection, and competitive alternatives analysis.
---
# Product Positioning
## Overview
Product positioning is the strategic process of defining how your product is uniquely different and valuable compared to alternatives, and how it fits into the market landscape in the minds of your target customers.
**Core Principle**: Positioning is context that helps customers understand what your product is, why it's special, and why they should care.
**Origin**: Today's positioning frameworks build on Al Ries & Jack Trout's "Positioning" (1981), evolved by April Dunford in "Obviously Awesome" (2019) for modern products.
**Key Insight**: Great positioning makes everything else easier - sales, marketing, product development. Bad positioning makes everything harder, even if you have a great product.
## When to Use This Skill
**Auto-loaded by agents**:
- `market-analyst` - For competitive positioning and differentiation strategy
- `product-strategist` - For strategic positioning and value propositions
**Use when you need**:
- Launching new product or feature
- Repositioning existing product
- Entering new market or segment
- Defining competitive strategy
- Developing messaging and marketing
- Sales enablement and training
- Differentiating from competitors
- Category design and creation
---
## Positioning Framework Overview
### The 6 Components (in order)
Every positioning must define these components IN SEQUENCE:
**1. Competitive Alternatives**
- What would customers do if your product didn't exist?
- NOT competitors—alternatives (email, spreadsheets, manual process)
- Frame: "Compared to..."
**2. Unique Attributes**
- Capabilities you have that alternatives don't
- Meaningfully different, not just features
**3. Value (and Proof)**
- Why customers care about your unique attributes
- Group attributes into 2-4 value themes
- Provide proof (customer quotes, metrics, case studies)
**4. Target Market Characteristics**
- Who cares most about your value?
- Specific segment with shared characteristics
- NOT "everyone"
**5. Market Category**
- The mental box customers put you in
- Sets expectations about features, price, competitors
- Choose category that makes your value obvious
**6. Relevant Trends (Optional)**
- Market forces that make your product timely
- Use sparingly—only if genuinely relevant
- NOT buzzword stuffing
**Complete framework guide**: `references/april-dunford-framework-guide.md`
Comprehensive 10-step process, workshop agenda, facilitation guide, real-world examples
---
## Positioning Statement Frameworks
Once you've defined the 6 components, capture them in a positioning statement.
### Three Proven Formats
**1. Classic Positioning Statement**
```
For [target customer]
Who [customer need/problem]
The [product name] is a [market category]
That [key benefit/value]
Unlike [competitive alternative]
Our product [key differentiation]
```
**2. Geoffrey Moore's Format**
```
For [target customers]
Who are dissatisfied with [current alternative]
Our product is a [new category]
That provides [key problem-solving capability]
Unlike [product alternative]
We have assembled [key whole product features]
```
**3. Value Proposition One-Liner**
```
[Product] helps [target customer] [achieve goal/solve problem]
by [unique mechanism/approach]
```
**Complete templates and examples**: `assets/positioning-statement-template.md`
3 formats with examples (Uber, Slack, Superhuman), choosing guide, validation checklist
---
## Differentiation Strategies
### Five Core Strategies
**1. Feature Differentiation**
- Unique capabilities competitors don't have
- Example: Zoom's gallery view, Notion's all-in-one workspace
- When: Technical innovation, hard-to-copy features
**2. Quality Differentiation**
- Superior performance on existing dimensions
- Example: Superhuman's 100ms speed, Netflix's 99.9% uptime
- When: Can sustainably deliver better quality
**3. Experience Differentiation**
- Better end-to-end customer experience
- Example: Apple's ecosystem, Zappos' service
- When: Experience is hard to replicate, consistent across touchpoints
**4. Price Differentiation**
- Significantly lower or higher price point
- Example: Southwest (low), Rolex (high)
- When: Cost structure or value perception supports it
**5. Category Creation**
- Create new category you can own
- Example: Salesforce ("no software"), HubSpot ("inbound marketing")
- When: Truly novel approach, willing to educate market
**Complete strategy guide**: `assets/differentiation-strategy-template.md`
Deep dive on each strategy, examples, when to use, validation checklist, decision framework
---
## Messaging Hierarchy
Transform positioning into consistent external communications.
### Three Levels
**Level 1: Positioning (Internal)**
- Target market, alternatives, attributes, value, category
- Shared with entire company
- Foundation for all external messaging
**Level 2: Messaging (External)**
- Value proposition (one-liner)
- Key messages (3-5 that support positioning)
- Proof points (evidence for each message)
- Differentiation statement
**Level 3: Copy (Tactical)**
- Headlines and body copy
- CTAs for specific channels
- Feature descriptions
- Channel-specific adaptation (website, ads, emails, sales)
**Principle**: Every piece of copy ladders up to messaging, messaging ladders up to positioning.
**Complete hierarchy guide**: `assets/messaging-hierarchy-template.md`
Templates for each level, examples, consistency checklist, common mistakes
---
## Competitive Positioning Maps
Visualize competitive landscape to identify white space and differentiation opportunities.
### Creating the Map
**1. Choose Two Dimensions**
- Must matter to customers (not internal metrics)
- Must differentiate competitors (spread across axes)
- Should favor your strengths
**2. Plot Competitors**
- Place based on customer perception (not competitor claims)
- Include all major alternatives
**3. Identify White Space**
- Uncrowded quadrants
- Underserved customer segments
**4. Position Yourself**
- Differentiated from alternatives
- Plays to your strengths
- Credible to customers
**Example Dimensions**: Simple ↔ Complex, SMB ↔ Enterprise, Flexible ↔ Opinionated, Free ↔ Premium
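A minimal plotting sketch (using matplotlib) for drafting such a map; competitor names, placements, and the two dimensions are purely hypothetical:

```python
import matplotlib.pyplot as plt

# Hypothetical competitor placements on two customer-relevant dimensions (0-10 scales).
competitors = {
    "Incumbent A": (8, 8),   # complex, enterprise-oriented
    "Incumbent B": (7, 5),
    "Point tool C": (3, 3),
    "Us": (2, 7),            # simple, but serves larger teams -> candidate white space
}

fig, ax = plt.subplots(figsize=(5, 5))
for name, (x, y) in competitors.items():
    ax.scatter(x, y)
    ax.annotate(name, (x, y), textcoords="offset points", xytext=(5, 5))

ax.set_xlabel("Simple  <->  Complex")
ax.set_ylabel("SMB  <->  Enterprise")
ax.axvline(5, linestyle="--", linewidth=0.5)   # quadrant guides
ax.axhline(5, linestyle="--", linewidth=0.5)
ax.set_xlim(0, 10)
ax.set_ylim(0, 10)
plt.show()
```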
**Complete map guide**: `assets/competitive-positioning-map-template.md`
Dimension selection, plotting guide, examples (CRM, project management, email), multi-dimensional mapping
---
## Positioning Tests and Validation
### Four Essential Tests
**1. The Elevator Test**
- Can anyone in company explain positioning in 30 seconds?
- Pass: Consistent explanations across team
**2. The Sales Test**
- Does sales team use positioning in every pitch?
- Pass: Sales converging on consistent message, customers understand quickly
**3. The Win/Loss Test**
- Are you winning based on your differentiation?
- Pass: Wins cite your differentiation, losses are about fit (not confusion)
**4. The Pricing Test**
- Does positioning support your pricing?
- Pass: Premium positioning = Premium pricing acceptance
### When to Reposition
**Triggers**:
1. Win rate declining (losing to new competitors)
2. Attracting wrong customers (not target market)
3. Market shift (category expectations changed)
4. Product evolution (added major capabilities)
5. Market expansion (entering new segment)
**Complete validation guide**: `references/positioning-validation-guide.md`
Testing methods, repositioning triggers, anti-patterns to avoid, real-world examples
---
## Common Anti-Patterns
### ❌ Everything for Everyone
"CRM for sales, marketing, service, ops, and more!"
**Fix**: Pick specific segment and value
### ❌ The Feature List
"We have channels, threads, search, integrations..."
**Fix**: Features → Benefits → Value
### ❌ The Buzzword Soup
"AI-powered, blockchain-enabled, cloud-native platform"
**Fix**: Plain language, actual differentiation
### ❌ The Category Confusion
"We're a CRM... and project management... and communication..."
**Fix**: Pick one primary category
### ❌ The Weak Differentiation
"Unlike competitors, we have a mobile app"
**Fix**: Find real, meaningful differentiation
**Detailed anti-patterns**: `references/positioning-validation-guide.md` (Part 3)
---
## Best Practices
**1. Start with Customers**
- Interview 10-15 customers who love your product
- Understand why they chose you and what value they get
- Look for patterns in best customers
**2. Follow the Sequence**
- Define components in order (alternatives → attributes → value → market → category)
- Each builds on previous
- Don't skip steps
**3. Be Specific**
- Target market: "Enterprise sales teams with 10+ reps" not "businesses"
- Value: "2x faster" not "more efficient"
- Differentiation: Concrete capabilities, not vague claims
**4. Test and Validate**
- Elevator test (30-second explanation)
- Sales test (consistent pitch)
- Win/loss analysis (winning on differentiation)
- Customer interviews (positioning resonates)
**5. Align and Document**
- Share positioning with entire company
- Train all teams (sales, marketing, CS, product)
- Update quarterly or when major changes occur
- Reference in all strategic decisions
**6. Iterate Based on Results**
- Track win rate, customer fit, sales velocity
- Reposition when triggers occur
- Refresh as market or product evolves
---
## Templates and References
### Assets (Ready-to-Use Templates)
Copy-paste these for immediate use:
- `assets/positioning-statement-template.md` - 3 proven formats with examples and validation
- `assets/differentiation-strategy-template.md` - 5 strategies with decision framework
- `assets/messaging-hierarchy-template.md` - 3-level hierarchy from strategy to copy
- `assets/competitive-positioning-map-template.md` - Creating visual competitive maps
### References (Deep Dives)
When you need comprehensive guidance:
- `references/april-dunford-framework-guide.md` - Complete 10-step positioning process, workshop agenda, facilitation tips
- `references/positioning-validation-guide.md` - Testing methods, repositioning triggers, anti-patterns, real-world examples
---
## Quick Reference
```
Problem: Product value isn't obvious, losing deals to confusion
Solution: Clear, customer-centric positioning
Process:
1. Define competitive alternatives (not competitors)
2. Isolate unique attributes (vs alternatives)
3. Map attributes to value themes
4. Identify target market (who cares most)
5. Choose market category (makes value obvious)
6. Create positioning statement
7. Build messaging hierarchy
8. Test with sales and customers
9. Document and train teams
10. Iterate based on win/loss data
Key Tests:
- Elevator test: 30-second explanation
- Sales test: Consistent pitch
- Win/loss test: Winning on differentiation
- Pricing test: Price supported by positioning
```
---
## Resources
**Books**:
- "Obviously Awesome" by April Dunford (2019) - Modern positioning framework
- "Crossing the Chasm" by Geoffrey Moore (1991) - Positioning for tech products
- "Positioning: The Battle for Your Mind" by Al Ries & Jack Trout (1981) - Classic positioning
**Articles**:
- "RICE: Simple prioritization for product managers" - Intercom blog
- April Dunford blog (aprildunford.com) - Positioning case studies
**Tools**:
- Positioning canvas (aprildunford.com)
- Value proposition canvas (Strategyzer)
- Perceptual mapping tools
---
## Related Skills
- `business-model-canvas` - Value propositions in business model context
- `prioritization-methods` - Prioritize positioning work and positioning-driven features
- `product-strategy-frameworks` - Strategic positioning and competitive strategy
- `gtm-strategy` - Launch positioning to market
---
**Key Principle**: Positioning is strategic context that makes your value obvious. Define alternatives, isolate unique attributes, map to value, identify who cares most, choose category that highlights strengths, then test and iterate. Great positioning makes everything else easier.

View File

@@ -0,0 +1,349 @@
---
name: roadmap-frameworks
description: Master product roadmaps including roadmap types (timeline, outcome-based, Now-Next-Later), communication strategies, and prioritization. Use when creating roadmaps, communicating strategy, prioritizing initiatives, or evolving product direction. Covers roadmap formats, communication tactics, and roadmap best practices from product leaders.
---
# Roadmap Frameworks
Frameworks for building, communicating, and managing product roadmaps that align teams, guide execution, and drive strategic outcomes.
## What is a Roadmap?
A roadmap is a strategic communication tool that:
- Shows **WHERE** you're going (direction, themes)
- Explains **WHY** you're going there (strategy, rationale)
- Indicates **WHEN** (roughly) you'll get there (timeframes)
- Communicates **HOW** you'll get there (initiatives, bets)
**NOT**: A list of features with dates
**BUT**: A strategic narrative about the future
**Good roadmaps**: Outcome-oriented, flexible, strategic, audience-appropriate, actionable
**Bad roadmaps**: Feature lists, hard dates, everything for everyone, disconnected from strategy, stale
## When to Use This Skill
**Auto-loaded by agents**:
- `roadmap-builder` - For Now-Next-Later, theme-based, and outcome roadmaps
**Use when you need**:
- Quarterly/annual planning
- Strategic clarity
- Team coordination
- Clear communication
- Investment decisions
- Customer/user communication
---
## Roadmap Types
### 1. Now-Next-Later (Recommended for Most)
**Structure**: Three buckets without dates
**NOW**: What we're working on right now (high confidence, active)
**NEXT**: What we'll likely do next (medium confidence, validated)
**LATER**: What we're exploring (low confidence, directional)
**When to use**: Maximum flexibility, minimal commitment, high uncertainty
**Benefits**:
- No date commitments
- Easy to adjust
- Clear focus
- Simple communication
**Template**: `assets/now-next-later-template.md`
Complete template with examples, confidence levels, updating guidance
---
### 2. Theme-Based
**Structure**: Strategic themes with grouped initiatives
Organize by themes (e.g., "Enterprise Readiness", "Customer Experience") rather than features.
**When to use**: Communicate strategic focus areas
**Benefits**:
- Strategic clarity
- Outcome-focused
- Flexible within themes
---
### 3. Outcome-Based
**Structure**: Lead with results, not outputs
Focus on customer/business outcomes (e.g., "Reduce churn by 50%") with flexible approaches.
**When to use**: Results-driven teams, goal-driven culture
**Benefits**:
- Clear success criteria
- Measurable
- Team autonomy on "how"
**Template**: `assets/outcome-roadmap-template.md`
Includes outcome format, examples, comparison with feature roadmaps
---
### 4. Timeline
**Structure**: Initiatives plotted on calendar/quarters
Visual timeline showing sequencing and dependencies.
**When to use**: Internal planning only, complex dependencies
**NOT for**: External communication (creates date expectations)
---
**Choosing the right type**: See `references/roadmap-types-guide.md` for detailed comparison and selection criteria.
---
## Roadmap by Audience
Different audiences need different roadmaps:
### Executive Roadmap
**Focus**: Strategy, business outcomes, resource needs
**Format**: Themes + outcomes, annual + quarterly
**Detail**: Low (strategic)
### Customer Roadmap
**Focus**: Value delivery, transparency
**Format**: Now-Next-Later with problem framing
**Exclude**: Internal work, hard dates
### Sales Roadmap
**Focus**: Deal enablement, competitive positioning
**Guidance**: "Commit to Now, position Next as likely, describe Later as exploring"
### Engineering Roadmap
**Focus**: Execution, technical detail
**Format**: Timeline with dependencies
**Detail**: High (sprint-plannable)
### Internal All-Hands
**Focus**: Company alignment, transparency
**Frequency**: Quarterly updates
**Comprehensive guide**: `references/roadmap-communication-guide.md`
Includes communication tactics, update formats, anti-patterns
---
## Building Your Roadmap
### 7-Step Process
**Step 1**: Establish Strategy (company goals, product strategy, market position)
**Step 2**: Gather Inputs (customer feedback, business priorities, technical needs, competitive intel)
**Step 3**: Prioritize (RICE, Impact/Effort, Strategic Fit)
**Step 4**: Define Themes (3-5 customer-centric, strategic themes)
**Step 5**: Sequence (dependencies, resources, timing, value delivery)
**Step 6**: Validate & Align (exec, engineering, sales/CS, customers)
**Step 7**: Communicate (audience-specific views, all-hands, documentation)
**Detailed guide**: `references/roadmap-building-guide.md`
Includes detailed steps, outputs, prioritization frameworks, maintenance cadence
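To make Step 3 concrete, here is a minimal sketch of RICE scoring (Reach × Impact × Confidence ÷ Effort). The initiatives and numbers are illustrative, not recommendations.

```python
from dataclasses import dataclass

@dataclass
class Initiative:
    name: str
    reach: float        # users affected per quarter
    impact: float       # 0.25 (minimal) to 3 (massive)
    confidence: float   # 0.0-1.0
    effort: float       # person-months

    @property
    def rice(self) -> float:
        # RICE = (Reach x Impact x Confidence) / Effort
        return (self.reach * self.impact * self.confidence) / self.effort

# Illustrative candidates for a quarterly roadmap
candidates = [
    Initiative("SSO for enterprise", reach=400, impact=2.0, confidence=0.8, effort=3),
    Initiative("Onboarding checklist", reach=2500, impact=1.0, confidence=0.5, effort=2),
    Initiative("Dark mode", reach=1200, impact=0.5, confidence=0.9, effort=1),
]

for item in sorted(candidates, key=lambda i: i.rice, reverse=True):
    print(f"{item.name:<22} RICE = {item.rice:,.0f}")
```

Treat the ranking as an input to prioritization, not the answer — sanity-check it against strategic fit before sequencing.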
---
## Roadmap Narrative
Tell the story of your roadmap - where, why, how:
**Structure**:
1. Vision (where we're going)
2. Strategy (why this roadmap)
3. Prioritization approach (how we chose)
4. What we're building (Now, Next, Later)
5. Trade-offs (what we're NOT doing)
6. Feedback process (how to influence)
**Template**: `assets/roadmap-narrative-template.md`
---
## Roadmap Best Practices
**DO**:
- Start with strategy (not features)
- Use themes and outcomes (not feature lists)
- Tailor to audience (exec, team, customer)
- Show trade-offs (what you're NOT doing)
- Update regularly (quarterly planning, monthly review)
- Communicate changes (transparency)
- Link to metrics (measurable outcomes)
- Keep "Now" specific, "Later" vague
**DON'T**:
- Commit to dates (use timeframes)
- Promise everything (prioritize ruthlessly)
- Use internal jargon (customer language)
- Build in a vacuum (validate with user feedback)
- Set and forget (iterate continuously)
- Hide trade-offs (be transparent)
- Lead with features (lead with problems)
- Make it static (living document)
---
## Roadmap Anti-Patterns
**Common mistakes**:
1. **Feature Laundry List**: Just features, no strategy → Use theme-based, outcome-oriented
2. **Date-Driven Commitments**: "Ship X on June 15" → Use timeframes, confidence levels
3. **One Size Fits All**: Same roadmap for all audiences → Tailor by audience
4. **Set and Forget**: Never updated, stale → Regular review cadence
5. **Everything for Everyone**: No priorities → Explicit prioritization, "not doing" list
6. **No Strategic Connection**: Disconnected from goals → Link every theme to objective
7. **Too Much Detail**: Over-specified → Appropriate detail for timeframe
8. **Internal Jargon**: Technical speak → Problem-focused, customer language
---
## Roadmap Maintenance
### Review Cadence
**Weekly** (30 min): Current work on track? Adjust "Now"
**Monthly** (60 min): Progress on quarter, validate "Next", refine "Later"
**Quarterly** (Half day): Build next quarter roadmap, review outcomes
### When to Update
**DO update**:
- Quarterly planning (always)
- Major strategic shift
- Significant customer feedback
- Competitive threat
- Resource changes
**DON'T update**:
- Every feature request
- Minor adjustments
- Random requests
### Communicating Changes
When roadmap changes materially:
```
Roadmap Update: [Date]
What Changed: [Change + Why]
What Stayed: [Core themes still priority]
Impact: [Who this affects]
```
**Frequency**: Only material changes
---
## For Solo Operators / Small Teams
**Simplify**:
- Use Now-Next-Later (simplest format)
- Focus on 2-3 themes max
- Skip elaborate tools (Google Slides works)
- Update monthly (not weekly)
- Share with customers for feedback
**Timeline**: 4-6 hours for quarterly roadmap
**Key**: Simple beats perfect. A clear one-page roadmap beats an elaborate 20-page deck nobody reads.
---
## Roadmap Tools
**Lightweight** (Early stage):
- Google Slides/PowerPoint
- Notion/Coda
- Miro/Figma
**Purpose-Built** (Growth):
- Productboard
- Aha!
- ProductPlan
- Jira Product Discovery
**Custom** (Enterprise):
- Custom-built, integrated with data warehouse
**Recommendation for solo/small teams**: Start with slides, upgrade only when pain is real.
---
## Templates and References
### Assets (Ready-to-Use Templates)
Copy-paste these for immediate use:
- `assets/now-next-later-template.md` - Most flexible format, complete example
- `assets/outcome-roadmap-template.md` - Results-focused format
- `assets/roadmap-narrative-template.md` - Storytelling structure
### References (Deep Dives)
When you need comprehensive guidance:
- `references/roadmap-types-guide.md` - All types compared, selection criteria
- `references/roadmap-communication-guide.md` - Audience-specific roadmaps, communication tactics
- `references/roadmap-building-guide.md` - 7-step process, prioritization, maintenance
---
## Related Skills
- `prioritization-methods` - Prioritization frameworks (RICE, ICE, Impact/Effort)
- `product-positioning` - Strategic positioning
- `go-to-market-playbooks` - Launch planning and GTM strategy
---
## Quick Start
**For your first roadmap**:
1. Use Now-Next-Later format (simplest)
2. Start with `assets/now-next-later-template.md`
3. Define 2-3 strategic themes
4. Fill in Now (what you're working on)
5. Add Next (validated problems, likely next)
6. Add Later (exploring)
7. Include "Not Doing" (trade-offs)
8. Present to team, get feedback
9. Update quarterly
**For quarterly planning**:
1. Review last quarter: What shipped? What didn't? Why?
2. Gather inputs: Customer feedback, business priorities, tech needs
3. Prioritize: Impact, effort, strategic fit
4. Sequence: Now → Next → Later
5. Communicate: All-hands + written doc
6. Update monthly based on learnings
---
**Key Principle**: Roadmaps are strategic communication tools, not commitments. They show direction and rationale, enabling alignment while maintaining flexibility. Good roadmaps create clarity without over-committing. Update regularly, communicate changes, focus on outcomes.

View File

@@ -0,0 +1,554 @@
---
name: specification-techniques
description: Master techniques for writing clear, complete product specifications and requirements documents (PRDs). Use when defining feature requirements, writing user stories, creating acceptance criteria, documenting API specifications, aligning cross-functional teams, reducing ambiguity, covering edge cases, or translating product vision into actionable development tasks. Covers PRD structure, user story formats (INVEST, 3Cs, Given-When-Then), Jobs-to-be-Done, use cases, non-functional requirements, and specification best practices.
---
# Specification Techniques
Structured methods for translating product vision into clear, actionable requirements that align teams, reduce ambiguity, and enable successful execution.
## Overview
Specification techniques help you answer "what" and "why" clearly while leaving "how" flexible for the team to determine. They create shared understanding without being prescriptive about implementation.
**Core Principle:** The value of specifications isn't in comprehensive documentation—it's in the conversations and shared understanding they create. A 3-sentence user story that everyone understands is better than a 30-page spec no one reads.
**Origin:** Modern specification techniques evolved from software engineering (Karl Wiegers), agile methodologies (Mike Cohn), and design thinking (Alan Cooper).
**Key Insight:** Good specifications enable teams to build the right thing, while bad specifications constrain teams and lead to rework.
---
## When to Use This Skill
**Auto-loaded by agents**:
- `requirements-engineer` - For PRDs, user stories, acceptance criteria, and NFRs
**Use when you need to**:
- Write product requirements documents (PRDs)
- Create user stories with acceptance criteria
- Define feature requirements clearly
- Document edge cases and error handling
- Specify non-functional requirements (performance, security)
- Align cross-functional teams on scope
- Translate product vision into development tasks
---
## Product Requirements Documents (PRDs)
### Core Structure
A well-structured PRD contains:
1. **Overview** - Problem, goals, non-goals, success metrics
2. **Background** - Context, user research, competitive analysis
3. **User Personas** - Primary/secondary users, use cases
4. **Requirements** - Functional and non-functional requirements
5. **User Experience** - Flows, wireframes, interaction details
6. **Technical Considerations** - Architecture, APIs, performance
7. **Success Criteria** - Launch criteria, key metrics
8. **Timeline & Resources** - Milestones, team, risks
**Complete Template:** See `assets/prd-structure-template.md` for full PRD structure with examples and best practices.
---
### Essential PRD Components
**Problem Statement:**
```
What: Clear description of the problem
Who: Who experiences this problem
Impact: Business/user impact with data
Current: What users do today
```
**Goals and Non-Goals:**
- **Goals:** What you want to achieve (measurable)
- **Non-Goals:** What you're explicitly not doing (sets boundaries)
**Success Metrics:**
- Specific, measurable outcomes
- Baseline → Target values
- Timeline for achievement
**Example:**
```
Problem: Users miss critical alerts when not in app, leading to 40% missed events.
Goal: Reduce missed alerts from 40% to <15% via email notifications.
Non-Goals:
- SMS notifications (future scope)
- Customizable notification frequency (v1: fixed)
Success Metrics:
- 60% opt-in to email notifications
- 25% reduction in missed alerts
- <2% unsubscribe rate
```
---
## User Stories
### The Standard Format
```
As a [user type]
I want [capability]
So that [benefit]
```
**Purpose:** Placeholder for conversation, not complete specification.
**Example:**
```
As a project manager
I want to assign tasks to team members
So that everyone knows their responsibilities
```
---
### INVEST Criteria
Framework for evaluating user story quality (coined by Bill Wake, popularized by Mike Cohn):
- **I - Independent:** Can be developed in any order
- **N - Negotiable:** Details flexible, not a contract
- **V - Valuable:** Delivers user or business value
- **E - Estimable:** Team can estimate effort
- **S - Small:** Fits within one sprint (1-5 days)
- **T - Testable:** Clear acceptance criteria
**Complete Guide:** See `references/user-story-writing-guide.md` for detailed explanations, examples, and story splitting patterns.
**Template:** See `assets/user-story-template.md` for comprehensive user story template with INVEST checklist.
---
### The 3 Cs Framework (Ron Jeffries)
**Card:** Brief story description (placeholder)
**Conversation:** Discuss details during planning/refinement (most important)
**Confirmation:** Acceptance criteria (how to verify done)
**Key Insight:** The conversation is more valuable than the written artifact.
---
## Acceptance Criteria
### Three Formats
**Format 1: Given-When-Then (Gherkin/BDD)**
```
Given [precondition/context]
When [action/event]
Then [expected outcome]
And [additional outcome]
```
**Best for:** Behavior-driven development, sequential workflows, automated testing
**Example:**
```
Given I am on the login page
When I enter valid credentials and click "Login"
Then I am redirected to my dashboard
And I see a welcome message with my name
```
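Because Given-When-Then maps cleanly onto automated tests, the login criterion above can be expressed as a test almost verbatim. This is a minimal sketch: `FakeAuthClient`, its methods, and the credentials are hypothetical stand-ins for your real app or API client.

```python
# Hypothetical stand-in for the application under test.
class FakeAuthClient:
    def __init__(self):
        self.page = "login"
        self.user_name = None

    def login(self, email: str, password: str) -> None:
        if email == "ada@example.com" and password == "correct-horse":
            self.page = "dashboard"
            self.user_name = "Ada"

def test_valid_login_redirects_to_dashboard():
    # Given I am on the login page
    client = FakeAuthClient()
    assert client.page == "login"
    # When I enter valid credentials and click "Login"
    client.login("ada@example.com", "correct-horse")
    # Then I am redirected to my dashboard
    assert client.page == "dashboard"
    # And I see a welcome message with my name
    assert client.user_name == "Ada"

test_valid_login_redirects_to_dashboard()
print("Acceptance criterion passes")
```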
---
**Format 2: Checklist Style**
```
Acceptance Criteria:
- [ ] Criterion 1
- [ ] Criterion 2
- [ ] Criterion 3
```
**Best for:** Simple features, independent criteria, non-sequential requirements
**Example:**
```
- [ ] User can upload JPG, PNG, GIF files
- [ ] Maximum file size is 10MB
- [ ] Upload progress bar shows percentage
- [ ] Error shown if file too large
```
---
**Format 3: Example-Based**
```
Scenario: [Description]
Input: [What user provides]
Output: [What system produces]
```
**Best for:** Complex calculations, validation rules, data transformations
**Example:**
```
Scenario 1: Valid email
Input: user@example.com
Output: Email accepted
Scenario 2: Missing @ symbol
Input: userexample.com
Output: Error "Email must include @ symbol"
```
**Complete Guide:** See `references/acceptance-criteria-guide.md` for comprehensive guide to writing testable acceptance criteria with all three formats, examples, and best practices.
---
## Job Stories (Jobs-to-be-Done)
### Format
```
When [situation/context]
I want to [motivation]
So I can [expected outcome]
```
**Difference from User Stories:**
- Focuses on context/situation (not persona)
- Emphasizes motivation (not implementation)
- Better for discovering underlying needs
**Example:**
```
When I'm in a meeting and need to reference previous discussions
I want to quickly search message history by keyword
So I can find relevant context without disrupting the meeting flow
```
**Complete Template:** See `assets/job-story-template.md` for detailed job story structure, examples, and when to use vs. user stories.
---
## Use Cases
### Structure
Detailed interaction scenarios documenting step-by-step system behavior.
**Components:**
- Primary actor and goal
- Preconditions and postconditions
- Main success scenario (happy path)
- Alternative flows (variations)
- Exception flows (error handling)
**Example Structure:**
```
Use Case: User Registers for Account
Primary Actor: New user
Goal: Create account to access platform
Main Success Scenario:
1. User enters email, password, name
2. System validates inputs
3. System creates account
4. System sends verification email
5. User clicks verification link
6. System activates account
```
**Complete Template:** See `assets/use-case-template.md` for comprehensive use case template with examples of alternative flows, exception handling, and special requirements.
---
## Edge Cases and Error Handling
### Systematic Coverage
Document behavior for:
**1. Input Validation** - Empty inputs, special characters, invalid formats
**2. Permissions** - Unauthenticated, insufficient privileges, expired sessions
**3. State Handling** - Empty state, loading state, error state, full state
**4. Boundary Conditions** - Zero, one, maximum, just over maximum
**5. Time-Based** - Expired tokens, time zones, scheduling
**6. Network Issues** - Timeouts, server errors, rate limiting
**Example:**
```
Error Scenario: Payment fails due to insufficient funds
User Experience:
- Error message: "Payment declined - insufficient funds. Please use a different payment method."
System Behavior:
- Transaction rolled back
- User not charged
- Order not created
- Error logged
Recovery Action:
- User can update payment method and retry
```
**Complete Checklist:** See `assets/edge-case-checklist.md` for systematic 100+ point checklist covering all edge case categories with examples and error handling templates.
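As a sketch of how systematic edge-case coverage translates into behavior, here is a small upload-validation routine covering empty input, format, and boundary conditions. The size limit, allowed formats, and messages are illustrative.

```python
MAX_BYTES = 10 * 1024 * 1024           # illustrative 10MB limit
ALLOWED_TYPES = {"jpg", "png", "gif"}  # illustrative allowed formats

def validate_upload(filename: str, size_bytes: int) -> tuple[bool, str]:
    """Return (ok, message), covering the happy path plus common edge cases."""
    if not filename:                                   # empty input
        return False, "No file selected."
    extension = filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    if extension not in ALLOWED_TYPES:                 # invalid format
        return False, f"Only {', '.join(sorted(ALLOWED_TYPES)).upper()} files are allowed."
    if size_bytes == 0:                                # boundary: zero
        return False, "File is empty."
    if size_bytes > MAX_BYTES:                         # boundary: just over maximum
        return False, "File exceeds the 10MB limit."
    return True, "Upload accepted."

# Boundary checks: zero, exactly at the maximum, just over, wrong format
for name, size in [("photo.jpg", 0), ("photo.jpg", MAX_BYTES),
                   ("photo.jpg", MAX_BYTES + 1), ("notes.txt", 512)]:
    print(name, size, validate_upload(name, size))
```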
---
## Non-Functional Requirements
### Key Categories
**Performance Requirements:**
- Response time (page loads, API calls)
- Throughput (requests per second)
- Scalability (concurrent users, data volume)
**Security Requirements:**
- Authentication (MFA, password policies)
- Authorization (RBAC, least privilege)
- Data protection (encryption, GDPR compliance)
**Reliability Requirements:**
- Availability (uptime %)
- Data integrity (backups, recovery)
- Fault tolerance (failover, retries)
**Accessibility Requirements:**
- WCAG 2.1 compliance
- Keyboard navigation
- Screen reader compatibility
**Example:**
```
Performance: Page loads in <2 seconds (p95)
Security: All PII encrypted at rest (AES-256)
Reliability: 99.9% uptime (max 8.7 hours downtime/year)
Accessibility: WCAG 2.1 Level AA compliant
```
**Complete Template:** See `assets/non-functional-requirements-template.md` for comprehensive NFR documentation covering all categories with specific metrics and verification methods.
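A minimal sketch of verifying a p95 performance requirement from measured samples; the load times are illustrative and the nearest-rank percentile is just one common convention.

```python
import math

def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile (one common convention among several)."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

# Illustrative page-load times in seconds from a monitoring export
load_times = [0.9, 1.1, 1.2, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.4]

p95 = percentile(load_times, 95)
target = 2.0
verdict = "meets" if p95 < target else "misses"
print(f"p95 load time: {p95:.2f}s -> {verdict} the <{target:.0f}s requirement")
```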
---
## Specification Best Practices
### Start with Why
Always begin with problem statement before jumping to solutions:
**Problem Statement Template:**
```
Problem: [What problem exists?]
Impact: [Who is affected and how?]
Evidence: [Data, research, user quotes]
Goal: [What would success look like?]
```
---
### Be Specific and Measurable
**Vague (Bad):** "System should be fast"
**Specific (Good):** "Page loads in <2 seconds (p95), measured via Lighthouse"
**Avoid Ambiguous Terms:**
- "Fast" → Specific time (< 2 seconds)
- "Good" → Specific criteria
- "User-friendly" → Task completion metrics
- "Secure" → OWASP Top 10 compliant
---
### Cover Happy and Unhappy Paths
**Happy Path:** Successful scenario
**Unhappy Paths:**
- Error cases (invalid input, permission denied)
- Edge cases (boundary conditions, rare scenarios)
- Network failures (timeouts, service down)
**Example:**
```
Happy: User logs in with valid credentials → Dashboard
Error: User enters wrong password → Error message, stay on login
Edge: Session expires during form entry → Preserve data, re-authenticate
```
---
### Specify Behavior, Not Implementation
**Too Prescriptive (Bad):**
```
"Use Material-UI Autocomplete component with Redux state management"
```
**Behavioral (Good):**
```
"User can search for other users by name, see suggestions as they type, select one or multiple users. Selection persists across page refreshes."
Let engineering decide: component library, state management
```
---
### Include Visual Aids
**Use diagrams for:**
- User flows
- State machines
- Data flows
- Wireframes/mockups
**Use examples for:**
- Input validation rules
- Calculations
- Data transformations
---
## Anti-Patterns to Avoid
### 1. Solution Masquerading as Problem
**Bad:** "We need a chatbot on the homepage"
**Good:**
```
Problem: 60% of support tickets are basic questions. Users can't find answers quickly.
Explored solutions: Chatbot, improved search, interactive FAQ
```
---
### 2. Vague Requirements
**Bad:** "System should have good performance"
**Good:** "Dashboard loads in <2 seconds (p95) for 10,000 concurrent users"
---
### 3. Over-Specification
**Bad:** "Use bcrypt with cost factor 12, store in CHAR(60) column"
**Good:** "Passwords hashed with industry-standard algorithm meeting OWASP guidelines"
---
### 4. Missing Acceptance Criteria
**Bad:**
```
As a user, I want notifications
```
**Good:**
```
As a user, I want email notifications
So that I'm aware of critical events
Acceptance Criteria:
- Given payment fails, when processor returns error, then email sent within 5 minutes
- Given I'm in settings, when I toggle notifications, then preference saves immediately
```
---
### 5. Ignoring Edge Cases
**Bad:** "User can upload a file"
**Good:**
```
User can upload file:
- Happy: File <10MB, supported format → Success
- Error: File >10MB → "File exceeds 10MB limit"
- Error: Unsupported format → "Only JPG, PNG, PDF allowed"
- Error: Network failure → Auto-retry 3x, then "Upload failed. Retry?"
```
**Complete Guide:** See `references/specification-anti-patterns.md` for detailed anti-patterns with examples and fixes.
---
## Reference Guides
### Comprehensive Guides
**User Story Writing:**
- `references/user-story-writing-guide.md`
- INVEST criteria in depth
- The 3 Cs framework
- Story splitting patterns
- User story vs job story vs use case
**Acceptance Criteria:**
- `references/acceptance-criteria-guide.md`
- All three formats explained
- Writing testable criteria
- Covering happy/unhappy paths
- Edge case documentation
**Anti-Patterns:**
- `references/specification-anti-patterns.md`
- Common mistakes and how to avoid them
- Solution masquerading as problem
- Vague requirements
- Over-specification
**Best Practices:**
- `references/specification-best-practices.md`
- Core principles
- Structure and organization
- Writing style and clarity
- Collaboration and communication
- Progressive elaboration
---
## Templates and Tools
### Ready-to-Use Templates
**Documentation:**
- `assets/prd-structure-template.md` - Complete PRD template
- `assets/user-story-template.md` - User story with INVEST
- `assets/job-story-template.md` - Jobs-to-be-Done format
- `assets/use-case-template.md` - Detailed use case structure
**Checklists:**
- `assets/edge-case-checklist.md` - 100+ point systematic coverage
- `assets/non-functional-requirements-template.md` - NFR documentation
---
## Summary
**Good Specifications:**
- Start with problem and user needs (the "why")
- Are specific, measurable, and testable
- Cover happy paths, unhappy paths, and edge cases
- Specify behavior and constraints (the "what")
- Leave implementation flexible (the "how")
- Use visual aids and concrete examples
- Create shared understanding through conversation
**Remember:** Specifications are conversation starters, not contracts. Focus on creating shared understanding between product, engineering, design, and QA.
---
## Related Skills
- `prd-templates` - PRD templates and examples
- `user-story-templates` - User story templates and patterns
- `user-research-techniques` - Understanding user needs and requirements

View File

@@ -0,0 +1,333 @@
---
name: synthesis-frameworks
description: Master research synthesis including qualitative analysis, affinity diagramming, theme identification, pattern recognition, and insight extraction. Use when synthesizing research, analyzing interviews, identifying patterns, extracting insights, organizing findings, or translating research into action. Covers synthesis methods, affinity mapping, thematic analysis, and insight generation techniques.
---
# Synthesis Frameworks
Frameworks for synthesizing qualitative research data to identify patterns, extract insights, and drive product decisions.
## Overview
Research synthesis transforms raw observations into actionable insights. Good synthesis reveals hidden patterns, validates assumptions, and informs strategic product decisions through structured analysis and collaborative sense-making.
## When to Use This Skill
**Auto-loaded by agents**:
- `research-ops` - For thematic analysis, insight generation, and research reports
**Use when you need**:
- Analyzing interview transcripts
- Synthesizing usability test findings
- Identifying user needs and pain points
- Creating research deliverables
- Documenting and communicating insights
- Running team synthesis workshops
- Pattern recognition across research data
- Translating research into actionable recommendations
## Core Synthesis Methods
### The Synthesis Process
Research synthesis follows a structured five-step process from raw data to actionable insights:
**The five steps:**
1. Immersion: Consume all raw data without judgment
2. Extraction: Identify individual observations atomically
3. Grouping: Find patterns through affinity mapping
4. Insight Generation: Ask "so what?" and connect to implications
5. Communication: Share findings in actionable formats
**Detailed template:** See `assets/synthesis-process-template.md` for step-by-step instructions, time allocation, and specific techniques for each phase.
**Time allocation:** Expect 8-16 hours total for a typical research project, with most time in grouping (30-40%) and insight generation phases.
### Affinity Mapping
Collaborative technique for organizing observations into themes through physical or digital clustering.
**When to use:**
- Team synthesis workshops
- Large amounts of qualitative data
- Need for shared understanding
- Pattern discovery (not testing hypothesis)
**Basic process:**
1. Silent note writing: Individual observation capture
2. Wall posting: Share all observations
3. Silent grouping: Cluster related notes
4. Name clusters: Identify themes
5. Generate insights: Discuss implications
**Complete guide:** See `assets/affinity-mapping-guide.md` for:
- Setup instructions (physical and digital)
- Step-by-step process with timings
- Color-coding strategies
- Tips for effective clustering
- Remote and solo variations
### Pattern Identification
Systematic approach to recognizing recurring themes across qualitative data.
**Five types of patterns to look for:**
1. **Frequency patterns:** How often something occurred (e.g., "8/10 users mentioned pricing")
2. **Behavioral patterns:** What users actually did vs. said (e.g., "All users skipped tutorial")
3. **Sentiment patterns:** Emotional responses (e.g., frustration, delight, confusion)
4. **Segment patterns:** Differences by user type (e.g., new users vs. power users)
5. **Journey patterns:** When issues occur (e.g., "70% dropped off at onboarding step 3")
**Deep dive:** See `references/pattern-identification-guide.md` for:
- Detailed examples of each pattern type
- How to interpret patterns
- Pattern validation techniques
- Common pitfalls to avoid
- Advanced pattern recognition methods
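A minimal sketch of surfacing frequency patterns from coded observations — counting participants (not raw mentions) per code so a pattern like "8/10 users mentioned pricing" falls out directly. The codes and participants are illustrative.

```python
from collections import defaultdict

# Illustrative coded observations: (participant, code) pairs from interview notes
observations = [
    ("P1", "pricing concern"), ("P1", "skipped tutorial"),
    ("P2", "pricing concern"), ("P2", "skipped tutorial"),
    ("P3", "skipped tutorial"), ("P3", "confused by settings"),
    ("P4", "pricing concern"), ("P5", "pricing concern"),
]

participants = {p for p, _ in observations}
mentions = defaultdict(set)
for participant, code in observations:
    mentions[code].add(participant)   # count participants, not raw mentions

for code, who in sorted(mentions.items(), key=lambda kv: len(kv[1]), reverse=True):
    print(f"{code:<22} {len(who)}/{len(participants)} participants")
```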
### Thematic Analysis
Formal method for coding qualitative data and identifying themes.
**Three coding levels:**
- **Level 1 - Descriptive codes:** What was said/done (e.g., "Mentioned pricing")
- **Level 2 - Interpretive codes:** What it means (e.g., "Price sensitivity")
- **Level 3 - Pattern codes:** Big picture themes (e.g., "Value perception issue")
**Four-phase process:**
1. Open coding: Generate initial codes liberally (30-100+)
2. Axial coding: Group related codes into categories (10-30)
3. Selective coding: Refine into core themes (5-10)
4. Validation: Ensure themes are supported and distinct
**Complete methodology:** See `references/thematic-analysis-guide.md` for:
- Detailed coding process with examples
- Tools and techniques (manual and digital)
- Quality checks and validation
- Common coding patterns
- Worked examples from transcripts
### Jobs-to-be-Done (JTBD) Synthesis
Framework for understanding customer motivations and the "jobs" users hire products to do.
**Job statement format:**
```
When [situation],
I want to [motivation],
So I can [expected outcome]
```
**The Forces Framework:**
Analyze four forces that drive or prevent adoption:
- **Push:** Why change from current solution? (pain, frustration)
- **Pull:** Why choose new solution? (benefits, aspirations)
- **Anxieties:** What holds users back? (fear, risk, learning curve)
- **Habits:** Why stay put? (familiarity, switching costs)
**Switching equation:** Users switch when Push + Pull > Anxieties + Habits
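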
**Full framework:** See `references/jtbd-synthesis-guide.md` for:
- How to identify jobs in research data
- Finding jobs through struggle and workarounds
- Complete forces analysis with examples
- Strategy recommendations for each force
- JTBD vs traditional research approaches
## Insight Quality
Not all findings are insights. Good insights are specific, surprising, actionable, evidence-based, and prioritized.
### The Five Characteristics
**1. Specific (not vague):**
- Bad: "Users don't like the UI"
- Good: "6/8 users couldn't find Settings in hamburger menu - move to top nav"
**2. Surprising (not obvious):**
- Bad: "Users want fast loading"
- Good: "Users prefer slow load with progress over fast load with no feedback"
**3. Actionable (clear implications):**
- Bad: "Users are frustrated"
- Good: "8-field signup causes 60% abandonment - reduce to 3 fields to match competitors"
**4. Evidence-based (not speculation):**
- Bad: "I think users would like dark mode"
- Good: "5/10 users requested dark mode unprompted, citing eye strain from all-day use"
**5. Prioritized (impact + feasibility):**
- Bad: List of 50 equal findings
- Good: Top 5 insights ranked by impact/effort with quick wins called out
### Insight Formula
**Template:** "[X%/number] of users [did/said specific thing] because [reason], suggesting we should [action] to [expected outcome]"
**Example:** "8/10 users abandoned onboarding at 'Invite Team' step because they didn't have emails ready, suggesting we should make this optional to reduce abandonment from 60% to 30%"
**Quality guide:** See `references/insight-quality-guide.md` for:
- Detailed breakdown of each characteristic
- Bad vs. good insight examples
- How to strengthen weak insights
- Insight quality checklist
- Teaching teams to recognize quality
## Synthesis Deliverables
### Executive Summary
One-page research summary for stakeholders who need the highlights without the detail.
**Standard sections:**
- Research goals and methodology (brief)
- Top 3-5 key findings with evidence
- Prioritized recommendations (quick wins vs. strategic)
- Next steps and owners
**Template:** See `assets/executive-summary-template.md`
### Full Research Report
Comprehensive documentation (5-10 pages) for complete findings and recommendations.
**Structure:**
1. Executive summary
2. Background and research questions
3. Methodology and participants
4. Findings by theme (with quotes and data)
5. Recommendations (prioritized)
6. Appendix (full participant list, materials, extra quotes)
**Template:** See `assets/research-report-template.md`
### Insight Cards
One-page format for individual insights that can be shared independently.
**Includes:**
- Insight statement and evidence (quotes + data)
- Impact and why it matters
- Specific recommendation with expected outcome
- Priority, effort, and owner
- Visual (screenshot, diagram, or clip)
**Template:** See `assets/insight-card-template.md`
### Data-Driven Personas
User archetypes based on behavioral patterns discovered in research.
**Creation process:**
1. Identify behavioral patterns in research
2. Cluster users with similar behaviors
3. Define 3-5 distinct archetypes
4. Validate against real users
**Persona template includes:**
- Goals and motivations
- Behaviors and workflows
- Pain points and needs
- Representative quote
- When to use: Prioritization, design decisions, communication
**Template:** See `assets/persona-template.md`
## Collaborative Synthesis
### Team Synthesis Workshop
A 2-hour structured workshop for generating insights collaboratively and building shared understanding.
**Workshop structure:**
- 0:00-0:15: Context setting (research goals, questions, methodology)
- 0:15-0:45: Affinity mapping (silent note writing + clustering)
- 0:45-1:15: Grouping and naming themes
- 1:15-1:45: Insight generation and prioritization
- 1:45-2:00: Action planning with owners
**Benefits:**
- Shared understanding across team
- Multiple perspectives improve pattern recognition
- Buy-in through participation
- Faster insight to action
**Complete guide:** See `references/synthesis-workshop-guide.md` for:
- Pre-workshop preparation checklist
- Detailed agenda with timings
- Facilitation tips and techniques
- Workshop variations (remote, async, executive)
- Post-workshop documentation
## Impact vs. Effort Prioritization
Use 2x2 matrix to prioritize insights:
```
Low Effort High Effort
High Quick Wins Major Projects
Impact (Do First) (Do Next)
Low Fill-ins Money Pits
Impact (Do Later) (Avoid)
```
**Scoring insights:**
- **Impact (1-5):** Frequency × Severity × Business impact
- **Effort (1-5):** Technical complexity + Time + Dependencies
- **Priority score:** Impact / Effort
**Quick wins** (high impact, low effort) should be shipped within 1-2 sprints to build momentum and demonstrate research value.
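A minimal sketch of the scoring above, bucketing insights into the 2x2 quadrants; the example insights, scores, and the "3 or above counts as high" threshold are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Insight:
    name: str
    impact: int   # 1-5
    effort: int   # 1-5

    @property
    def priority(self) -> float:
        return self.impact / self.effort

    @property
    def quadrant(self) -> str:
        # Illustrative threshold: >= 3 counts as "high" on either axis
        high_impact, high_effort = self.impact >= 3, self.effort >= 3
        if high_impact and not high_effort:
            return "Quick Win (do first)"
        if high_impact and high_effort:
            return "Major Project (do next)"
        if not high_impact and not high_effort:
            return "Fill-in (do later)"
        return "Money Pit (avoid)"

insights = [
    Insight("Make 'Invite Team' step optional", impact=5, effort=2),
    Insight("Rebuild onboarding as guided tour", impact=4, effort=5),
    Insight("Rename ambiguous menu label", impact=2, effort=1),
]
for i in sorted(insights, key=lambda x: x.priority, reverse=True):
    print(f"{i.priority:.1f}  {i.quadrant:<24} {i.name}")
```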
## Synthesis Best Practices
**DO:**
- Involve team in synthesis (shared understanding, multiple perspectives)
- Use participant quotes liberally (evidence, authenticity)
- **Evidence Standards:** Use only actual quotes from transcripts - never invent or fabricate
- When paraphrasing: mark as "[Paraphrased from participant]" not as direct quote
- Each quote must be verbatim from transcript and attributed to specific participant
- Quantify when possible ("8/10 users", "60% abandoned")
- Look for surprising patterns (not just confirmation bias)
- Prioritize insights ruthlessly (top 3-5 most important)
- Connect to business goals explicitly (why it matters)
- Make recommendations actionable (clear next steps)
- Document process (photos of affinity map, save artifacts)
**DON'T:**
- Cherry-pick data to support existing beliefs
- Present all findings equally (overwhelming, no guidance)
- Use research jargon (be clear and accessible)
- Bury insights in long reports (lead with key findings)
- Synthesize alone when team synthesis is possible
- Ignore contradictions (explore them, they're interesting)
- Stop at observations (push through to insights and implications)
- Skip validation (check themes across multiple participants)
## Common Synthesis Pitfalls
**Confirmation bias:** Only seeing patterns that confirm beliefs
- Solution: Actively look for disconfirming evidence
**Overgeneralization:** "One user did X, therefore all users..."
- Solution: Require multiple examples to claim pattern
**Too many themes:** More than 10 themes = probably too granular
- Solution: Look for meta-themes, consolidate
**Generic insights:** "Users want better UX" (too vague)
- Solution: Apply the five characteristics of quality insights
**No prioritization:** Treating all findings equally
- Solution: Use impact/effort matrix, call out quick wins
**Insight fatigue:** So many insights team doesn't act
- Solution: Limit to top 3-5, focus on actionable quick wins
## Related Skills
- `interview-frameworks` - Conducting interviews that produce quality data
- `usability-frameworks` - Usability testing methods and analysis
- `validation-frameworks` - Solution validation and experiment design
- `user-research-techniques` - Methods for gathering research data

View File

@@ -0,0 +1,422 @@
---
name: usability-frameworks
description: Usability testing methodology, Nielsen's heuristics, and usability metrics for evaluating user interfaces
---
# Usability Frameworks
Comprehensive frameworks and methodologies for planning, conducting, and analyzing usability tests to improve user experience.
## When to Use This Skill
**Auto-loaded by agents**:
- `research-ops` - For usability testing and heuristic evaluation
**Use when you need**:
- Planning usability tests
- Conducting user testing sessions
- Evaluating interface designs
- Identifying usability problems
- Testing prototypes or live products
- Applying Nielsen's heuristics
- Measuring usability metrics
## Core Concepts
### What is Usability Testing?
Usability testing is a method for evaluating a product by testing it with representative users. Users attempt to complete typical tasks while observers watch, listen, and take notes.
**Purpose**: Identify usability problems, discover opportunities for improvement, and learn about user behavior and preferences.
**When to use**:
- Before development (testing prototypes)
- During development (iterative testing)
- After launch (validation and optimization)
- Before major redesigns
### The Five Usability Quality Components (Jakob Nielsen)
1. **Learnability**: How easy is it for users to accomplish basic tasks the first time?
2. **Efficiency**: How quickly can users perform tasks once they've learned the design?
3. **Memorability**: Can users remember how to use it after time away?
4. **Errors**: How many errors do users make, how severe, and how easily can they recover?
5. **Satisfaction**: How pleasant is it to use the design?
## Usability Testing Methodologies
### 1. Moderated Testing
**Setup**: Researcher guides participants through tasks in real-time
**Location**: In-person or remote (video call)
**Best for**:
- Early-stage prototypes needing clarification
- Complex products requiring guidance
- Exploring "why" behind user behavior
- Uncovering emotional reactions
**Process**:
1. Welcome and set expectations
2. Pre-task questions (background, experience)
3. Task scenarios with think-aloud protocol
4. Post-task questions and discussion
5. Wrap-up and thank you
**Advantages**:
- Rich qualitative insights
- Can probe deeper into issues
- Observe non-verbal cues
- Clarify misunderstandings immediately
**Limitations**:
- More time-intensive (30-60 min per session)
- Researcher bias possible
- Smaller sample sizes
- Scheduling logistics
### 2. Unmoderated Testing
**Setup**: Participants complete tasks independently, recorded for later review
**Location**: Remote, on participant's own schedule
**Best for**:
- Mature products with clear tasks
- Large sample sizes needed
- Quick turnaround required
- Benchmarking and metrics
**Process**:
1. Automated instructions and consent
2. Participants record screen/audio while completing tasks
3. Automated post-task surveys
4. Researcher reviews recordings later
**Advantages**:
- Faster data collection
- Larger sample sizes
- More natural environment
- Lower cost per participant
**Limitations**:
- Can't probe or clarify
- May miss nuanced insights
- Technical issues harder to resolve
- Participants may skip think-aloud
### 3. Hybrid Approaches
**Combination methods**:
- Moderated first impressions + unmoderated task completion
- Unmoderated testing + follow-up interviews with interesting cases
- Moderated pilot + unmoderated scale testing
## Nielsen's 10 Usability Heuristics
Quick reference for evaluating interfaces. See `references/nielsens-10-heuristics.md` for detailed explanations and examples.
1. **Visibility of system status** - Keep users informed
2. **Match between system and real world** - Speak users' language
3. **User control and freedom** - Provide escape hatches
4. **Consistency and standards** - Follow platform conventions
5. **Error prevention** - Prevent problems before they occur
6. **Recognition rather than recall** - Minimize memory load
7. **Flexibility and efficiency of use** - Accelerators for experts
8. **Aesthetic and minimalist design** - Remove irrelevant information
9. **Help users recognize, diagnose, and recover from errors** - Plain language error messages
10. **Help and documentation** - Provide when needed
## Think-Aloud Protocol
### What It Is
Participants verbalize their thoughts while completing tasks, providing real-time insight into their mental model.
### Types
**Concurrent think-aloud**: Speak while performing tasks
- More natural thought flow
- May affect task performance slightly
**Retrospective think-aloud**: Review recording and explain thinking after
- Doesn't disrupt natural behavior
- May forget or rationalize thoughts
### Facilitating Think-Aloud
**Prompts to use**:
- "What are you thinking right now?"
- "What are you looking for?"
- "What would you expect to happen?"
- "Is this what you expected?"
**Don't**:
- Ask leading questions
- Provide hints or solutions
- Interrupt natural flow too often
- Make participants feel tested
See `references/think-aloud-protocol-guide.md` for detailed facilitation techniques.
## Task Scenario Design
Good task scenarios are critical to meaningful usability test results.
### Characteristics of Good Task Scenarios
**Realistic**: Based on actual user goals
**Specific**: Clear endpoint/success criteria
**Self-contained**: Provide all necessary context
**Actionable**: Clear starting point
**Not prescriptive**: Don't tell them how to do it
### Example Transformation
**Poor**: "Click on the 'My Account' link and change your password"
- Too prescriptive, tells them exactly where to click
**Good**: "You've heard about recent security breaches and want to make your account more secure. Update your account to use a stronger password."
- Realistic motivation, clear goal, doesn't prescribe path
### Task Complexity Levels
**Simple tasks** (1-2 steps): Establish baseline usability
**Medium tasks** (3-5 steps): Test core workflows
**Complex tasks** (6+ steps): Evaluate overall experience and error recovery
See `assets/task-scenario-template.md` for ready-to-use templates.
## Severity Rating Framework
Not all usability issues are equal. Prioritize fixes based on severity.
### Three-Factor Severity Rating
**Frequency**: How often does this issue occur?
- High: > 50% of users encounter
- Medium: 10-50% encounter
- Low: < 10% encounter
**Impact**: When it occurs, how badly does it affect users?
- High: Prevents task completion / causes data loss
- Medium: Causes frustration or delays
- Low: Minor annoyance
**Persistence**: Do users overcome it with experience?
- High: Problem doesn't go away
- Medium: Users learn to avoid/work around
- Low: One-time problem only
### Combined Severity Ratings
**Critical** (P0): High frequency + High impact
**Serious** (P1): High frequency + Medium impact, OR Medium frequency + High impact
**Moderate** (P2): High frequency + Low impact, OR Medium frequency + Medium impact, OR Low frequency + High impact
**Minor** (P3): Everything else
See `assets/severity-rating-guide.md` for detailed rating criteria and examples.
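A minimal sketch that applies the combined frequency × impact rules above (persistence is left out, as in the combined ratings); the example issues are illustrative.

```python
LEVELS = {"high": 2, "medium": 1, "low": 0}

def severity(frequency: str, impact: str) -> str:
    """Map frequency x impact to a priority bucket, following the rules above."""
    score = LEVELS[frequency] + LEVELS[impact]
    if frequency == "high" and impact == "high":
        return "Critical (P0)"
    if score == 3:   # high+medium or medium+high
        return "Serious (P1)"
    if score == 2:   # high+low, medium+medium, or low+high
        return "Moderate (P2)"
    return "Minor (P3)"

issues = [
    ("Checkout button hidden on mobile", "high", "high"),
    ("Error message uses internal jargon", "medium", "medium"),
    ("Tooltip overlaps footer on resize", "low", "low"),
]
for title, freq, imp in issues:
    print(f"{severity(freq, imp):<15} {title}")
```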
## Usability Metrics
### Quantitative Metrics
**Task Success Rate**: % of participants who complete task successfully
- Binary: Did they complete it? (yes/no)
- Partial credit: Did they complete most of it?
**Time on Task**: How long to complete (for successful completions)
- Compare to baseline or competitor benchmarks
**Error Rate**: Number of errors per task
- Define what counts as an error for each task
**Clicks/Taps to Task Completion**: Efficiency measure
- More relevant for well-defined tasks
### Standardized Questionnaires
**SUS (System Usability Scale)**:
- 10 questions, 5-point Likert scale
- Score 0-100 (industry avg ~68)
- Quick, reliable, easy to administer
- Good for comparing versions or benchmarking
**UMUX (Usability Metric for User Experience)**:
- 4 questions, lighter than SUS
- Similar reliability
- Faster for participants
**SEQ (Single Ease Question)**:
- "Overall, how difficult or easy was the task to complete?" (1-7)
- One question per task
- Immediate subjective difficulty rating
**Other scales**:
- SUPR-Q (for websites)
- PSSUQ (post-study)
- NASA-TLX (cognitive load)
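A minimal sketch of standard SUS scoring: odd-numbered (positively worded) items contribute the response minus 1, even-numbered items contribute 5 minus the response, and the sum is multiplied by 2.5 to give 0-100. The participant responses are illustrative.

```python
def sus_score(responses: list[int]) -> float:
    """Standard SUS scoring: 10 items rated 1-5, alternating positive/negative wording."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)   # items 1,3,5,7,9 positive; 2,4,6,8,10 negative
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5          # scale to 0-100

# Illustrative responses from three participants
participants = [
    [4, 2, 4, 1, 5, 2, 4, 2, 4, 2],
    [3, 3, 4, 2, 4, 2, 3, 3, 4, 2],
    [5, 1, 5, 1, 4, 2, 5, 1, 5, 2],
]
scores = [sus_score(p) for p in participants]
print("Individual scores:", scores)
print(f"Mean SUS: {sum(scores) / len(scores):.1f} (industry average is roughly 68)")
```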
### Qualitative Insights
**Observed behaviors**:
- Hesitations and confusion
- Error patterns
- Unexpected paths
- Verbal frustrations
**Verbalized thoughts** (think-aloud):
- Mental model mismatches
- Expectation violations
- Pleasantly surprising discoveries
## Sample Size Guidelines
### For Qualitative Insights
**Nielsen's recommendation**: Testing with 5 users finds ~85% of usability problems
- Diminishing returns after 5
- Run 3+ small rounds instead of 1 large round
- Iterate between rounds
**Reality check**:
- 5 is a minimum, not ideal
- Complex products may need 8-10
- Multiple user types need 5 each
### For Quantitative Metrics
**Benchmarking**: 20+ users per user group
**A/B testing**: Depends on effect size and desired confidence
**Statistical significance**: Use power analysis calculators
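The "5 users find ~85%" guideline comes from the Nielsen-Landauer problem-discovery model, which assumes each user uncovers roughly 31% of problems on average (an empirical average, not a law). A minimal sketch of the curve:

```python
# Problem-discovery model: proportion found = 1 - (1 - L)^n, with L ~= 0.31 per user
L_PER_USER = 0.31

def proportion_found(n_users: int, l: float = L_PER_USER) -> float:
    return 1 - (1 - l) ** n_users

for n in [1, 3, 5, 8, 10, 15]:
    print(f"{n:>2} users -> ~{proportion_found(n):.0%} of problems found")
```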
## Planning Your Usability Test
### 1. Define Objectives
What decisions will this research inform?
- Redesign priorities?
- Feature cut decisions?
- Success of recent changes?
### 2. Identify User Segments
Who needs to be tested?
- New vs. experienced users?
- Different roles or use cases?
- Different devices or contexts?
### 3. Select Tasks
What tasks represent success?
- Most critical user goals
- Most frequent tasks
- Recently changed features
- Known problem areas
### 4. Choose Methodology
Moderated, unmoderated, or hybrid?
- Consider timeline, budget, research questions
### 5. Create Test Script
See `assets/usability-test-script-template.md` for a ready-to-use structure including:
- Welcome and consent
- Background questions
- Task instructions
- Probing questions
- Wrap-up
### 6. Recruit Participants
- Define screening criteria
- Aim for 5-10 per user segment
- Plan for no-shows (recruit 20% extra)
- Offer appropriate incentives
### 7. Conduct Pilot Test
- Test with colleague or friend
- Validate timing
- Check recording setup
- Refine unclear tasks
### 8. Run Sessions
- Stay neutral and encouraging
- Observe without interfering
- Take detailed notes
- Record if permitted
### 9. Analyze and Synthesize
- Code issues by severity
- Identify patterns across participants
- Link issues to heuristics violated
- Quantify task success and time
### 10. Report and Recommend
- Prioritized issue list
- Video clips of critical issues
- Recommendations with rationale
- Quick wins vs. strategic fixes
## Integration with Product Development
### When to Test
**Discovery phase**: Test competitors or analogous products
**Concept phase**: Test paper prototypes or wireframes
**Design phase**: Test high-fidelity mockups
**Development phase**: Test working builds iteratively
**Pre-launch**: Validate before release
**Post-launch**: Identify optimization opportunities
### Continuous Usability Testing
**Build it into your process**:
- Weekly or bi-weekly test sessions
- Rotating focus (new features, established flows, mobile vs. desktop)
- Standing recruiting panel
- Lightweight reporting to team
## Ready-to-Use Templates
We provide templates to accelerate your usability testing:
### In `assets/`:
- **usability-test-script-template.md**: Complete moderator script structure
- **task-scenario-template.md**: Framework for creating effective task scenarios
- **severity-rating-guide.md**: Detailed criteria for rating usability issues
### In `references/`:
- **nielsens-10-heuristics.md**: Deep dive into each heuristic with examples
- **think-aloud-protocol-guide.md**: Advanced facilitation techniques and troubleshooting
## Common Pitfalls to Avoid
1. **Leading participants**: "Was that easy?" → "How would you describe that experience?"
2. **Testing the wrong tasks**: Tasks that aren't real user goals
3. **Over-explaining**: Let users struggle and discover issues naturally
4. **Ignoring severity**: Fixing cosmetic issues while critical issues remain
5. **Testing too late**: After it's expensive to change
6. **Not iterating**: One-and-done testing instead of continuous improvement
7. **Confusing usability with preference**: "I like green" ≠ usability issue
8. **Sample bias**: Testing only power users or only complete novices
## Further Learning
**Books**:
- "Rocket Surgery Made Easy" by Steve Krug
- "Handbook of Usability Testing" by Jeffrey Rubin
- "Moderating Usability Tests" by Joseph Dumas
**Online resources**:
- Nielsen Norman Group articles
- Usability.gov
- Baymard Institute research
---
This skill provides the foundation for conducting effective usability testing. Use the templates in `assets/` for quick starts and `references/` for deeper dives into specific techniques.

View File

@@ -0,0 +1,477 @@
---
name: user-research-techniques
description: Master user interviews, usability testing, surveys, and research synthesis. Use when planning research, gathering user insights, or validating assumptions.
---
# User Research Techniques
## Overview
Comprehensive guide to qualitative and quantitative research methods for understanding users, validating ideas, and informing product decisions.
## When to Use This Skill
**Auto-loaded by agents**:
- `research-ops` - For research methods, planning, and best practices
**Use when you need**:
- Planning research studies
- Choosing research methods
- Conducting interviews or tests
- Analyzing research data
- Validating product decisions
## Research Methods Matrix
```
                     Quantitative               Qualitative
                     (What & How Many)          (Why & How)

Behavioral           Analytics                  Usability Testing
(What they do)       Surveys                    Field Studies
                     A/B Tests                  Diary Studies

Attitudinal          Surveys                    Interviews
(What they say)      NPS                        Focus Groups
                     Questionnaires             Concept Tests
```
## Qualitative Methods
### 1. User Interviews
**Types**:
- **Discovery**: Understand problems and needs
- **Validation**: Test solution fit
- **Evaluative**: Assess existing product
**Best Practices**:
- Open-ended questions
- Listen more than talk (80/20 rule)
- Ask "why" 5 times
- Avoid leading questions
- Record and take notes
**Interview Structure** (60 min):
```
Introduction (5 min)
- Build rapport
- Explain purpose
- Get consent
Warm-up (5 min)
- Background questions
- Context setting
Main Questions (40 min)
- Behavioral: "Walk me through..."
- Pain points: "Tell me about a time when..."
- Goals: "What are you trying to achieve?"
Wrap-up (10 min)
- Anything missed?
- Who else to talk to?
- Thank you
```
**Sample Questions**:
```
Discovery:
- "Walk me through the last time you [task]"
- "What's most frustrating about [process]?"
- "Tell me about a time when [problem] occurred"
Validation:
- [Show concept] "What's your initial reaction?"
- "How would you use this?"
- "What concerns do you have?"
```
---
### 2. Usability Testing
**Types**:
- **Moderated**: Facilitator guides session
- **Unmoderated**: User completes alone
- **Remote**: Video call or tool-based
- **In-person**: Lab or coffee shop
**Process**:
1. **Recruit**: 5-8 participants (enough to surface ~80% of issues)
2. **Prepare**: Tasks, scenarios, prototype
3. **Test**: Think-aloud protocol
4. **Analyze**: Issues, patterns, severity
5. **Report**: Findings and recommendations
**Task Format**:
```
Scenario: "You want to [goal]"
Task: "Using this prototype, [specific action]"
Example:
Scenario: "You want to find last month's sales report"
Task: "Find and download the December 2024 sales report"
```
**Think-Aloud Protocol**:
- "Please speak your thoughts out loud"
- "What are you looking for?"
- "What do you expect to happen?"
- Don't help unless stuck >2 minutes
**Metrics**:
- Task completion rate
- Time on task
- Errors
- Satisfaction (SEQ: Single Ease Question)
---
### 3. Field Studies (Ethnography)
**When**: Understand context and environment
**Methods**:
- **Contextual Inquiry**: Observe in natural setting
- **Shadowing**: Follow user through day
- **Diary Studies**: Users log activities
**Process**:
1. Observe silently
2. Take field notes
3. Ask clarifying questions
4. Look for workarounds
5. Identify pain points
---
### 4. Card Sorting
**Purpose**: Understand mental models, organize information
**Types**:
- **Open**: Users create categories
- **Closed**: Users sort into given categories
- **Hybrid**: Start closed, allow new categories
**Tools**: OptimalSort, UserZoom, Miro
---
### 5. Focus Groups
**Format**: 6-10 participants, moderated discussion
**When to Use**: Explore opinions, generate ideas
**When NOT to Use**: Validation (groupthink risk)
---
## Quantitative Methods
### 1. Surveys
**Types**:
- **NPS (Net Promoter Score)**: "Likelihood to recommend" (0-10)
- **CSAT (Customer Satisfaction)**: "How satisfied?" (1-5)
- **CES (Customer Effort Score)**: "How easy?" (1-7)
- **Custom**: Specific questions
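A minimal sketch of scoring the first two: NPS is % promoters (9-10) minus % detractors (0-6); the CSAT convention shown (share of 4s and 5s) is common but not universal. The responses are illustrative.

```python
def nps(scores: list[int]) -> float:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

def csat(ratings: list[int]) -> float:
    """One common CSAT convention: share of 4s and 5s on a 1-5 scale."""
    return 100 * sum(r >= 4 for r in ratings) / len(ratings)

# Illustrative survey responses
nps_scores = [10, 9, 9, 8, 7, 7, 6, 5, 9, 10, 3, 8]
csat_ratings = [5, 4, 4, 3, 5, 2, 4, 5]

print(f"NPS:  {nps(nps_scores):+.0f}")
print(f"CSAT: {csat(csat_ratings):.0f}%")
```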
**Survey Design**:
```
Good Question:
"How often do you use [feature]?"
□ Daily
□ Weekly
□ Monthly
□ Rarely
□ Never
Bad Question:
"Do you love our amazing new feature?"
(Leading, biased)
```
**Best Practices**:
- Keep short (< 10 questions)
- One question per topic
- Mix question types
- Avoid double-barreled questions
- Pilot test first
**Sample Sizes**:
- 100+ for directional insights
- 384+ for 95% confidence, ±5% margin
- Calculator: surveymonkey.com/mp/sample-size-calculator
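A minimal sketch of the standard proportion-based sample size formula behind those numbers, n = z²·p(1−p)/e², using p = 0.5 as the conservative default:

```python
import math

def survey_sample_size(z: float = 1.96, margin: float = 0.05, p: float = 0.5) -> int:
    """Sample size for estimating a proportion: n = z^2 * p(1-p) / margin^2."""
    return math.ceil((z ** 2) * p * (1 - p) / (margin ** 2))

print(survey_sample_size())              # 385 -> the "384+" rule of thumb above
print(survey_sample_size(margin=0.03))   # 1068 for a tighter +/-3% margin
```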
---
### 2. Analytics (Behavioral Data)
**Metrics to Track**:
- **Engagement**: DAU, WAU, MAU, session duration
- **Conversion**: Funnel drop-offs, completion rates
- **Retention**: Cohort retention curves
- **Feature Adoption**: % users using feature
**Tools**: Mixpanel, Amplitude, Heap, Google Analytics
---
### 3. A/B Testing
**Process**:
1. Hypothesis
2. Design variants
3. Determine sample size
4. Run test (1-2 weeks)
5. Analyze (significance)
6. Ship or iterate
(See experiment-designer skill for details)
---
## Research Synthesis
### Affinity Mapping
**Process**:
1. Write observations on sticky notes
2. Group similar notes
3. Label themes
4. Identify patterns
5. Extract insights
**Tool**: Miro, FigJam, physical wall
---
### Thematic Analysis
**Steps**:
1. **Familiarize**: Read all data
2. **Code**: Tag recurring concepts
3. **Theme**: Group codes into themes
4. **Review**: Refine themes
5. **Define**: Name and describe themes
6. **Report**: Write findings
---
### Jobs-to-be-Done Framework
**Job Statement**:
"When [situation], I want to [motivation], so I can [outcome]"
**Example**:
"When I'm preparing for a client meeting, I want to quickly find relevant past conversations, so I can provide informed recommendations"
**Interview Questions**:
- "What job were you trying to get done?"
- "What were you using before?"
- "What triggered you to switch?"
- "What obstacles did you face?"
---
## Research Planning
### Define Research Questions
**Good Research Questions**:
- "How do users currently [task]?"
- "What prevents users from [goal]?"
- "Which features drive retention?"
**Bad Research Questions**:
- "Will users like this?" (too vague)
- "Should we build X?" (not research question)
---
### Choose Method
**Decision Tree**:
```
What vs Why?
├─ What/How Many? → Quantitative
│   ├─ Behavior  → Analytics
│   └─ Attitudes → Survey
└─ Why/How? → Qualitative
    ├─ Discover → Interviews
    ├─ Validate → Usability Test
    └─ Context  → Field Study
```
---
### Recruit Participants
**Screener Questions**:
```
1. How often do you [relevant behavior]?
○ Daily (CONTINUE)
○ Weekly (CONTINUE)
○ Monthly (SCREEN OUT)
○ Never (SCREEN OUT)
2. Do you work at [Competitor/Partner]?
○ Yes (SCREEN OUT)
○ No (CONTINUE)
```
**Incentives**:
- $50-100 for 1-hour consumer interview
- $150-300 for B2B professional
- Gift cards, credits, or cash
**Sources**:
- User Interviews, Respondent.io
- Your user base (email list)
- Social media, communities
---
## Best Practices
### 1. Avoid Bias
**Confirmation Bias**: Seek disconfirming evidence
**Leading Questions**: Ask neutral questions
**Selection Bias**: Recruit diverse participants
**Observer Effect**: Users behave differently when watched
---
### 2. Sample Sizes
**Qualitative**:
- 5-8 users per segment (diminishing returns)
- 15-20 total for diverse product
**Quantitative**:
- 100+ for trends
- 384+ for statistical significance
- Use power calculations
---
### 3. Triangulate
**Combine Methods**:
- Interviews (why) + Analytics (what)
- Usability tests + Surveys
- Quantitative → Qualitative → Quantitative
---
### 4. Continuous Discovery (Teresa Torres)
**Weekly Touchpoints**:
- Talk to 2-3 customers per week
- Mix research types
- Share with team
- Document insights
- Map to opportunities
---
## Common Mistakes
### Avoid
- Asking what users want (they don't know)
- Leading questions ("Do you love this?")
- Only talking to power users
- Research without action
- Skipping synthesis
### Do
- Observe behavior, not just opinions
- Ask open-ended questions
- Recruit diverse participants
- Act on findings
- Share insights widely
---
## Tools
**Research Platforms**:
- UserTesting, Maze (unmoderated testing)
- User Interviews, Respondent.io (recruitment)
- Lookback, Zoom (moderated testing)
**Analysis**:
- Dovetail, Airtable (synthesis)
- Miro, FigJam (affinity mapping)
- Typeform, SurveyMonkey (surveys)
**Analytics**:
- Mixpanel, Amplitude (product analytics)
- Hotjar, FullStory (session replay)
- Google Analytics (web analytics)
---
## Templates
### Research Plan
```markdown
# Research Plan: [Topic]
## Goals
- [Research question 1]
- [Research question 2]
## Method
- Type: [Interviews / Testing / Survey]
- Timeline: [Dates]
- Participants: [N, criteria]
## Questions/Tasks
1. [Question/Task 1]
2. [Question/Task 2]
## Analysis
- [How we'll synthesize]
- [Key metrics]
## Deliverables
- [Report, insights, recommendations]
```
---
## Resources
**Books**:
- "The Mom Test" - Rob Fitzpatrick
- "Just Enough Research" - Erika Hall
- "Continuous Discovery Habits" - Teresa Torres
- "Don't Make Me Think" - Steve Krug
**Online**:
- Nielsen Norman Group articles
- IDEO Design Kit
- Google Ventures Research Sprint
---
## Quick Guide
```
Need to understand why? → Interviews
Testing usability? → Usability Tests
Measure satisfaction? → Survey (NPS/CSAT)
Understand behavior? → Analytics
Validate solution? → Prototype Test
Deep context? → Field Study
Always: Define questions, recruit right users, synthesize, act on insights
```

View File

@@ -0,0 +1,429 @@
---
name: user-story-templates
description: Master user story templates including As-a/I-want/So-that format, acceptance criteria (Given-When-Then), story splitting, and INVEST criteria. Use when writing user stories, defining acceptance criteria, splitting stories, refining backlogs, or communicating requirements to development teams. Covers user story best practices, story templates, and agile requirements techniques.
---
# User Story Templates
Comprehensive templates and frameworks for writing effective user stories with clear acceptance criteria, enabling agile development teams to deliver value incrementally.
## What is a User Story?
A user story is a concise description of a feature from the end-user's perspective. It follows the format: "As a [user], I want [goal], so that [benefit]."
**Purpose**:
- Communicate requirements from user perspective
- Enable incremental delivery
- Facilitate conversation and collaboration
- Support estimation and planning
- Guide testing and acceptance
## When to Use This Skill
**Auto-loaded by agents**:
- `requirements-engineer` - For user story formats, epic breakdown, and spike stories
**Use when you need**:
- Breaking down features into user stories
- Writing acceptance criteria
- Planning sprints and iterations
- Communicating requirements to development teams
- Refining product backlogs
- Splitting large stories
- Validating story quality with INVEST criteria
- Estimating and prioritizing work
---
## User Story Format
### Basic Template
```
As a [user type/persona]
I want to [action/goal]
So that [benefit/value]
```
**Example**:
```
As a small business owner
I want to export my invoices as PDF
So that I can send professional-looking invoices to clients
```
### Three Key Components
**1. User Type/Persona** (Who)
- Specific user segment or role
- Good: "free trial user", "team administrator", "mobile app user"
- Bad: "user" (too vague), "the system" (not a user)
**2. Action/Goal** (What)
- What the user wants to accomplish
- Good: "filter search results by date range", "receive email notifications"
- Bad: "have a button" (describes UI), "use the API" (describes implementation)
**3. Benefit/Value** (Why)
- Why the user wants this, what value it provides
- Good: "so that I can quickly find relevant information"
- Bad: "so that it works" (not a benefit), omitting "so that" entirely
For detailed guidance on each component, see `assets/basic-user-story-template.md`.
---
## Ready-to-Use Templates
We provide copy-paste-ready templates for common story types:
### Feature Stories
**Use for**: New features, enhancements, user-facing work
**Template**: `assets/feature-story-template.md`
Includes:
- Full user story format
- Acceptance criteria checklist
- Definition of Done
- Story points and priority
- Dependencies tracking
---
### Bug Fix Stories
**Use for**: Defects, regressions, user-reported issues
**Template**: `assets/bug-fix-story-template.md`
Includes:
- Current vs expected behavior
- Steps to reproduce
- Environment details
- Severity levels
- Impact assessment
---
### Technical Debt Stories
**Use for**: Refactoring, code improvements, infrastructure work
**Template**: `assets/technical-debt-story-template.md`
Includes:
- Technical context
- Proposed solution
- Impact on performance/maintainability/scalability
- Risk assessment
- Business value justification
---
### Spike/Research Stories
**Use for**: Time-boxed investigation, POCs, research tasks
**Template**: `assets/spike-research-story-template.md`
Includes:
- Research questions
- Time box
- Deliverables
- Acceptance criteria for knowledge gained
- Next steps identification
---
### Epic Breakdown
**Use for**: Breaking large initiatives into manageable stories
**Template**: `assets/epic-breakdown-template.md`
Includes:
- Epic goal and value proposition
- Story sequencing
- Definition of Done (epic level)
- Timeline and dependencies
- Risk mitigation
---
## Acceptance Criteria
Acceptance criteria define when a story is "done." Use one of three formats:
### 1. GIVEN-WHEN-THEN (BDD Style)
Best for: Complex workflows, state-dependent behavior, testing focus
```
GIVEN [precondition/context]
WHEN [action/event]
THEN [expected outcome]
```
**Example**:
```
GIVEN I'm on the login page
WHEN I enter valid email and password
THEN I'm redirected to my dashboard
```
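When acceptance checks are automated, each GIVEN-WHEN-THEN criterion maps directly onto a test. A minimal sketch in Python (runnable with pytest), using a hypothetical `login()` helper that stands in for the real authentication flow:
```python
# Hypothetical stand-in for the application's auth flow, for illustration only.
def login(email: str, password: str) -> str:
    valid_accounts = {"user@example.com": "correct-horse"}
    return "dashboard" if valid_accounts.get(email) == password else "login"

def test_valid_login_redirects_to_dashboard():
    # GIVEN I'm on the login page with a valid email and password
    email, password = "user@example.com", "correct-horse"
    # WHEN I submit the login form
    landing_page = login(email, password)
    # THEN I'm redirected to my dashboard
    assert landing_page == "dashboard"
```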
### 2. Checklist Format
Best for: Simple features, quick validation, straightforward requirements
```
Acceptance Criteria:
- [ ] [Observable outcome 1]
- [ ] [Observable outcome 2]
- [ ] [Observable outcome 3]
```
**Example**:
```
- [ ] Export button appears on data table
- [ ] Clicking export generates CSV file
- [ ] CSV includes all visible columns
- [ ] CSV filename includes current date
- [ ] Export completes within 5 seconds for up to 10K rows
```
### 3. Rule-Based Format (MoSCoW)
Best for: Variable scope, prioritization needed, time-constrained sprints
```
Must: [Critical requirements]
Should: [Important but not critical]
Could: [Nice to have]
Won't: [Out of scope for now]
```
**Comprehensive guide**: See `references/acceptance-criteria-guide.md` for deep dive on writing testable, specific criteria including error states, edge cases, and performance requirements.
---
## Story Splitting
Large stories (>5 days or >8 points) should be split into smaller, deliverable pieces.
### Common Splitting Patterns
**1. Workflow Steps**: Sequential process steps (e.g., checkout: cart → shipping → payment → confirmation)
**2. CRUD Operations**: Create → Read → Update → Delete
**3. User Roles**: Admin → Manager → Employee → Guest
**4. Business Rules**: Different variations of logic (e.g., discount types: percentage, fixed amount, BOGO)
**5. Simple to Complex**: MVP → Advanced features (progressive enhancement)
**6. Happy Path vs Edge Cases**: Success scenarios → Error handling
**7. Data Variations**: Different formats, sizes, types
**8. Platform/Device**: Desktop → Tablet → Mobile
**9. Performance Tiers**: 1-5 users → 6-20 users → 21-100 users
**10. Interface vs API**: Backend → Frontend → Integration
**Best practice**: Prefer vertical slicing (a complete feature through all layers) over horizontal slicing (one layer across many features).
**Comprehensive guide**: See `references/story-splitting-guide.md` for detailed examples, anti-patterns, and decision trees.
---
## INVEST Criteria
Good user stories follow the INVEST principles:
**I - Independent**: Can be developed without dependency on other stories
**N - Negotiable**: Details flexible, focuses on outcome not implementation
**V - Valuable**: Delivers clear value to user or business
**E - Estimable**: Team can estimate effort with reasonable confidence
**S - Small**: Fits in one sprint (1-5 days)
**T - Testable**: Clear acceptance criteria that can be verified
### Quick INVEST Check
- [ ] **Independent**: Can this be built without waiting for other stories?
- [ ] **Negotiable**: Does it define outcome, not implementation?
- [ ] **Valuable**: Is the benefit clear and meaningful?
- [ ] **Estimable**: Can the team estimate this confidently?
- [ ] **Small**: Can it be completed in 1-5 days?
- [ ] **Testable**: Are acceptance criteria clear and verifiable?
If a story fails any INVEST criterion, it should be rewritten or split.
**Comprehensive guide**: See `references/invest-criteria-guide.md` for deep dive on each principle with examples, anti-patterns, and fixing strategies.
---
## User Story Best Practices
### Writing Clear Stories
**DO**:
- Focus on user value, not implementation
- Keep stories independent and testable
- Write from user's perspective
- Include clear acceptance criteria
- Size stories for 1-3 days of work
**DON'T**:
- Prescribe implementation ("use React hooks")
- Mix multiple features in one story
- Write from system's perspective
- Skip acceptance criteria
- Create epics disguised as stories
### Good Example
```
As a free trial user
I want to see how many days remain in my trial
So that I can decide when to upgrade
Acceptance Criteria:
- [ ] Days remaining appears in header
- [ ] Warning shows when <7 days remain
- [ ] "Upgrade" CTA appears with countdown
- [ ] Counter updates daily at midnight
```
### Bad Example
```
As the system
I want to implement a counter using React hooks
So that the trial days feature works
[No acceptance criteria]
```
---
## Acceptance Criteria Best Practices
**DO**:
- Make criteria testable and observable
- Include happy path AND error states
- Specify performance requirements
- Cover edge cases
- Use numbers for specificity
**Good Example**:
```
- [ ] Search completes within 2 seconds for datasets <10K records
- [ ] Loading spinner shows while search is processing
- [ ] Error message appears if search fails
- [ ] No results message shows when query returns 0 items
- [ ] Search results highlight matching keywords
```
**Bad Example**:
```
- Search feature works
- UI looks good
- Performance is acceptable
```
---
## Story Point Estimation
### Modified Fibonacci Scale
- **1 point**: Trivial change (1-2 hours)
- **2 points**: Small change (half day)
- **3 points**: Medium change (1 day)
- **5 points**: Larger change (2-3 days)
- **8 points**: Large change (3-5 days) - consider splitting
- **13 points**: Epic, must be split
### Estimation Guidelines
- Points represent complexity, not time
- Compare to reference stories ("this is like that export feature we built")
- Include testing and documentation in estimate
- Account for unknowns
- Split stories >8 points
---
## Common Mistakes to Avoid
**Too technical**:
- Bad: "Implement REST API endpoint"
- Good: "Retrieve my order history via mobile app"
**Too large**:
- Bad: "Build entire checkout system"
- Good: "Enter shipping address during checkout"
**Implementation-focused**:
- Bad: "Add a button to the navbar"
- Good: "Access my account settings quickly"
**Missing acceptance criteria**:
- Bad: Only the story statement
- Good: Story + clear, testable acceptance criteria
**Vague**:
- Bad: "Make the app faster"
- Good: "Load dashboard in under 2 seconds"
---
## Related Skills
- `prd-templates` - Product requirements documentation that feeds into user stories
- `specification-techniques` - General specification writing best practices
- `prioritization-methods` - Prioritizing stories for sprint planning
---
## Templates and References
### Assets (Ready-to-Use Templates)
Copy-paste these for immediate use:
- `assets/basic-user-story-template.md` - Simple story format with guidance
- `assets/feature-story-template.md` - Full-featured story with DoD
- `assets/bug-fix-story-template.md` - Bug tracking and resolution
- `assets/technical-debt-story-template.md` - Technical improvements
- `assets/spike-research-story-template.md` - Time-boxed research
- `assets/epic-breakdown-template.md` - Epic to stories breakdown
### References (Deep Dives)
When you need comprehensive guidance:
- `references/story-splitting-guide.md` - 10 splitting patterns with examples, anti-patterns, decision trees
- `references/acceptance-criteria-guide.md` - Writing testable criteria, all three formats, common mistakes
- `references/invest-criteria-guide.md` - Deep dive on each INVEST principle with examples and fixes
---
## Quick Start
**For new features**:
1. Start with `assets/feature-story-template.md`
2. Fill in user type, goal, and benefit
3. Add acceptance criteria (use `references/acceptance-criteria-guide.md` if needed)
4. Verify INVEST criteria
5. Estimate story points
6. If >8 points, split using patterns from `references/story-splitting-guide.md`
**For bugs**:
1. Use `assets/bug-fix-story-template.md`
2. Document current vs expected behavior
3. Add reproduction steps
4. Include severity and impact
5. Write fix verification criteria
**For research**:
1. Use `assets/spike-research-story-template.md`
2. Define research questions
3. Set time box (critical!)
4. Specify deliverable
5. Include next steps in acceptance criteria
---
**Key Principle**: Great user stories are conversations, not contracts. They enable collaboration while providing enough structure to guide development, testing, and acceptance.

View File

@@ -0,0 +1,595 @@
---
name: validation-frameworks
description: Problem and solution validation methodologies, assumption testing, and MVP validation experiments
---
# Validation Frameworks
Frameworks for validating problems, solutions, and product assumptions before committing to full development.
## When to Use This Skill
**Auto-loaded by agents**:
- `research-ops` - For problem/solution validation and assumption testing
**Use when you need**:
- Validating product ideas before building
- Testing assumptions and hypotheses
- De-risking product decisions
- Running MVP experiments
- Validating problem-solution fit
- Testing willingness to pay
- Evaluating technical feasibility
## Core Principle: Reduce Risk Through Learning
Building the wrong thing is expensive. Validation reduces risk by answering critical questions before major investment:
- **Problem validation**: Is this a real problem worth solving?
- **Solution validation**: Does our solution actually solve the problem?
- **Market validation**: Will people pay for this?
- **Usability validation**: Can people use it?
- **Technical validation**: Can we build it?
## The Validation Spectrum
Validation isn't binary (validated vs. not validated). It's a spectrum of confidence:
```
Wild Guess → Hypothesis → Validated Hypothesis → High Confidence → Proven
```
Early stage: Cheap, fast tests (low confidence gain)
Later stage: More expensive tests (high confidence gain)
**Example progression**:
1. **Assumption**: "Busy parents struggle to plan healthy meals"
2. **Interview 5 parents** → Some validation (small sample)
3. **Survey 100 parents** → More validation (larger sample)
4. **Prototype test with 20 parents** → Strong validation (behavior observed)
5. **Launch MVP, track engagement** → Very strong validation (real usage)
6. **Measure retention after 3 months** → Proven (sustained behavior)
## Problem Validation vs. Solution Validation
### Problem Validation
**Question**: "Is this a problem worth solving?"
**Goal**: Confirm that:
- The problem exists
- It's painful enough that people want it solved
- It's common enough to matter
- Current solutions are inadequate
**Methods**:
- Customer interviews (discovery-focused)
- Ethnographic observation
- Surveys about pain points
- Data analysis (support tickets, analytics)
- Jobs-to-be-Done interviews
**Red flags that stop here**:
- No one cares about this problem
- Existing solutions work fine
- Problem only affects tiny niche
- Pain point is mild annoyance, not real pain
**Output**: Problem statement + evidence it's worth solving
See `assets/problem-validation-canvas.md` for a ready-to-use framework.
---
### Solution Validation
**Question**: "Does our solution solve the problem?"
**Goal**: Confirm that:
- Our solution addresses the problem
- Users understand it
- Users would use it
- It's better than alternatives
- Value proposition resonates
**Methods**:
- Prototype testing
- Landing page tests
- Concierge MVP (manual solution)
- Wizard of Oz (fake backend)
- Pre-sales or waitlist signups
**Red flags**:
- Users don't understand the solution
- They prefer their current workaround
- Would use it but not pay for it
- Solves wrong part of the problem
**Output**: Validated solution concept + evidence of demand
See `assets/solution-validation-checklist.md` for validation steps.
---
## The Assumption-Validation Cycle
### 1. Identify Assumptions
Every product idea rests on assumptions. Make them explicit.
**Types of assumptions**:
**Desirability**: Will people want this?
- "Users want to track their habits"
- "They'll pay $10/month for this"
- "They'll invite their friends"
**Feasibility**: Can we build this?
- "We can process data in under 1 second"
- "We can integrate with their existing tools"
- "Our team can build this in 3 months"
**Viability**: Should we build this?
- "Customer acquisition cost will be under $50"
- "Retention will be above 40% after 30 days"
- "We can reach 10k users in 12 months"
**Best practice**: Write assumptions as testable hypotheses
- Not: "Users need this feature"
- Yes: "At least 60% of users will use this feature weekly"
---
### 2. Prioritize Assumptions to Test
Not all assumptions are equally important. Prioritize by:
**Risk**: How uncertain are we? (Higher risk = higher priority)
**Impact**: What happens if we're wrong? (Higher impact = higher priority)
**Prioritization matrix**:
| Risk/Impact | High Impact | Low Impact |
|------------|-------------|------------|
| **High Risk** | **Test first** | Test second |
| **Low Risk** | Test second | Maybe skip |
**Riskiest assumptions** (test these first):
- Leap-of-faith assumptions the entire business depends on
- Things you've never done before
- Assumptions about user behavior with no data
- Technical feasibility of core functionality
**Lower-risk assumptions** (test later or assume):
- Based on prior experience
- Easy to change if wrong
- Industry best practices
- Minor features
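With a longer assumption list, the matrix can be applied mechanically: score each assumption on risk and impact, then test the highest products first. A minimal sketch (the assumptions and 1-5 scores below are illustrative only):
```python
assumptions = [  # (assumption, risk 1-5, impact 1-5) - illustrative scores
    ("Users will pay $10/month",             5, 5),
    ("We can integrate with existing tools", 2, 4),
    ("Users will invite their friends",      4, 3),
    ("Our team can build this in 3 months",  2, 2),
]

# Highest risk x impact first: these are the leap-of-faith assumptions to test now.
for text, risk, impact in sorted(assumptions, key=lambda a: a[1] * a[2], reverse=True):
    print(f"{risk * impact:>2}  {text}")
```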
---
### 3. Design Validation Experiments
For each assumption, design the cheapest test that could prove it wrong.
**Experiment design principles**:
**1. Falsifiable**: Can produce evidence that assumption is wrong
**2. Specific**: Clear success/failure criteria defined upfront
**3. Minimal**: Smallest effort needed to learn
**4. Fast**: Results in days/weeks, not months
**5. Ethical**: Don't mislead or harm users
**The Experiment Canvas**: See `assets/validation-experiment-template.md`
---
### 4. Run the Experiment
**Before starting**:
- [ ] Define success criteria ("At least 40% will click")
- [ ] Set sample size ("Test with 50 users")
- [ ] Decide timeframe ("Run for 1 week")
- [ ] Identify what success/failure would mean for product
**During**:
- Track metrics rigorously
- Document unexpected learnings
- Don't change experiment mid-flight
**After**:
- Analyze results honestly (avoid confirmation bias)
- Document what you learned
- Decide: Pivot, persevere, or iterate
---
## Validation Methods by Fidelity
### Low-Fidelity (Hours to Days)
**1. Customer Interviews**
- **Cost**: Very low (just time)
- **Validates**: Problem existence, pain level, current solutions
- **Limitations**: What people say ≠ what they do
- **Best for**: Early problem validation
**2. Surveys**
- **Cost**: Low
- **Validates**: Problem prevalence, feature preferences
- **Limitations**: Response bias, can't probe deeply
- **Best for**: Quantifying what you learned qualitatively
**3. Landing Page Test**
- **Cost**: Low (1-2 days to build)
- **Validates**: Interest in solution, value proposition clarity
- **Measure**: Email signups, clicks to "Get Started"
- **Best for**: Demand validation before building
**4. Paper Prototypes**
- **Cost**: Very low (sketch on paper/whiteboard)
- **Validates**: Concept understanding, workflow feasibility
- **Limitations**: Low realism
- **Best for**: Very early solution concepts
---
### Medium-Fidelity (Days to Weeks)
**5. Clickable Prototypes**
- **Cost**: Medium (design tool, 2-5 days)
- **Validates**: User flow, interaction patterns, comprehension
- **Tools**: Figma, Sketch, Adobe XD
- **Best for**: Usability validation pre-development
**6. Concierge MVP**
- **Cost**: Medium (your time delivering manually)
- **Validates**: Value of outcome, willingness to use/pay
- **Example**: Manually curate recommendations before building algorithm
- **Best for**: Validating value before automation
**7. Wizard of Oz MVP**
- **Cost**: Medium (build facade, manual backend)
- **Validates**: User behavior, feature usage, workflows
- **Example**: "AI" feature that's actually humans behind the scenes
- **Best for**: Testing expensive-to-build features
---
### High-Fidelity (Weeks to Months)
**8. Functional Prototype**
- **Cost**: High (weeks of development)
- **Validates**: Technical feasibility, actual usage patterns
- **Limitations**: Expensive if you're wrong
- **Best for**: After other validation, final pre-launch check
**9. Private Beta**
- **Cost**: High (full build + support)
- **Validates**: Real-world usage, retention, bugs
- **Best for**: Pre-launch final validation with early adopters
**10. Public MVP**
- **Cost**: Very high (full product)
- **Validates**: Product-market fit, business model viability
- **Best for**: After all other validation passed
---
## Validation Metrics and Success Criteria
### Quantitative Validation Metrics
**Interest/Demand**:
- Landing page conversion rate
- Waitlist signup rate
- Pre-order conversion
- Prototype test completion rate
**Usage**:
- Feature adoption rate
- Task completion rate
- Time on task
- Frequency of use
**Engagement**:
- DAU/MAU (daily/monthly active users)
- Session length
- Return rate
- Feature usage depth
**Value Realization**:
- Aha moment rate (reached key action)
- Time to value
- Retention curves
- Referral rate
**Business**:
- Conversion to paid
- Lifetime value (LTV)
- Customer acquisition cost (CAC)
- Churn rate
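Most of these metrics reduce to simple counts over an event log. A minimal sketch, assuming a list of (user_id, activity_date) events, that computes DAU/MAU stickiness and day-N retention (the data and dates are illustrative):
```python
from datetime import date, timedelta

events = [  # (user_id, activity_date) - toy data for illustration
    ("u1", date(2025, 1, 1)), ("u1", date(2025, 1, 8)),
    ("u2", date(2025, 1, 1)), ("u3", date(2025, 1, 2)),
]

def dau(day: date) -> int:
    return len({u for u, d in events if d == day})

def mau(day: date) -> int:
    window_start = day - timedelta(days=29)
    return len({u for u, d in events if window_start <= d <= day})

def day_n_retention(cohort_day: date, n: int) -> float:
    cohort = {u for u, d in events if d == cohort_day}
    returned = {u for u, d in events if d == cohort_day + timedelta(days=n)}
    return len(cohort & returned) / len(cohort) if cohort else 0.0

day = date(2025, 1, 8)
print(f"DAU/MAU stickiness: {dau(day) / mau(day):.0%}")
print(f"Day-7 retention for Jan 1 cohort: {day_n_retention(date(2025, 1, 1), 7):.0%}")
```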
---
### Qualitative Validation Signals
**Strong positive signals**:
- Users ask when they can buy it
- Users describe it to others excitedly
- Users find use cases you didn't imagine
- Users return without prompting
- Users complain when you take it away
**Weak positive signals**:
- "That's interesting"
- "I might use that"
- Polite feedback with no follow-up
- Theoretical interest but no commitment
**Negative signals**:
- Confusion about value proposition
- Preference for existing solution
- Feature requests that miss the point
- Can't articulate who would use it
- Ghosting after initial interest
---
## Setting Success Criteria
**Before running experiment**, define what success looks like.
**Framework**: Set three thresholds
1. **Strong success**: Clear green light, proceed confidently
2. **Moderate success**: Promising but needs iteration
3. **Failure**: Pivot or abandon
**Example: Landing page test**
- Strong success: > 30% email signup rate
- Moderate success: 15-30% signup rate
- Failure: < 15% signup rate
**Example: Prototype test**
- Strong success: 8/10 users complete task, would use weekly
- Moderate success: 5-7/10 complete, would use monthly
- Failure: < 5/10 complete or no usage intent
**Important**: Decide criteria before seeing results to avoid bias.
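One way to stay honest is to record the thresholds in code before the experiment runs and classify the result mechanically. A minimal sketch using the landing-page thresholds above (the numbers are the example thresholds from this section, not fixed rules):
```python
def evaluate_landing_page(signups: int, visitors: int,
                          strong: float = 0.30, moderate: float = 0.15) -> str:
    """Classify a landing-page test against pre-registered success thresholds."""
    rate = signups / visitors
    if rate >= strong:
        return f"Strong success ({rate:.0%}): proceed confidently"
    if rate >= moderate:
        return f"Moderate success ({rate:.0%}): promising, iterate and retest"
    return f"Failure ({rate:.0%}): pivot or abandon"

print(evaluate_landing_page(signups=42, visitors=500))    # Failure (8%)
print(evaluate_landing_page(signups=110, visitors=500))   # Moderate success (22%)
```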
---
## Teresa Torres Continuous Discovery Validation
### Opportunity Solution Trees
Map opportunities (user needs) to solutions to validate:
```
Outcome
└── Opportunity 1
├── Solution A
├── Solution B
└── Solution C
└── Opportunity 2
└── Solution D
```
**Validate each level**:
1. Outcome: Is this the right goal?
2. Opportunities: Are these real user needs?
3. Solutions: Will this solution address the opportunity?
### Assumption Testing
For each solution, map assumptions:
**Desirability assumptions**: Will users want this?
**Usability assumptions**: Can users use this?
**Feasibility assumptions**: Can we build this?
**Viability assumptions**: Should we build this?
Then test riskiest assumptions first with smallest possible experiments.
### Weekly Touchpoints
Continuous discovery = continuous validation:
- Weekly customer interviews (problem + solution validation)
- Weekly prototype tests with 2-3 users
- Weekly assumption tests (small experiments)
**Goal**: Continuous evidence flow, not one-time validation.
See `references/lean-startup-validation.md` and `references/assumption-testing-methods.md` for detailed methodologies.
---
## Common Validation Anti-Patterns
### 1. Fake Validation
**What it looks like**:
- Asking friends and family (they'll lie to be nice)
- Leading questions ("Wouldn't you love...?")
- Testing with employees
- Cherry-picking positive feedback
**Fix**: Talk to real users, ask open-ended questions, seek disconfirming evidence.
---
### 2. Analysis Paralysis
**What it looks like**:
- Endless research without decisions
- Testing everything before building anything
- Demanding statistical significance with 3 data points
**Fix**: Accept uncertainty, set decision deadlines, bias toward action.
---
### 3. Confirmation Bias
**What it looks like**:
- Only hearing what confirms existing beliefs
- Dismissing negative feedback as "they don't get it"
- Stopping research when you hear what you wanted
**Fix**: Actively seek disconfirming evidence, set falsifiable criteria upfront.
---
### 4. Vanity Validation
**What it looks like**:
- "I got 500 email signups!" (but 0 conversions)
- "People loved the demo!" (but won't use it)
- "We got great feedback!" (but all feature requests, no usage)
**Fix**: Focus on behavior over opinions, retention over acquisition.
---
### 5. Building Instead of Validating
**What it looks like**:
- "Let's build it and see if anyone uses it"
- "It'll only take 2 weeks" (takes 2 months)
- Full build before any user contact
**Fix**: Always do cheapest possible test first, build only after validation.
---
## Validation Workflow: End-to-End Example
**Idea**: Mobile app for tracking baby sleep patterns
### Phase 1: Problem Validation
**Assumption**: "New parents struggle to understand their baby's sleep patterns"
**Experiment 1: Customer Interviews**
- Interview 10 new parents
- Ask about current sleep-tracking methods and pain points
- Success criteria: 7/10 express frustration with current methods
**Result**: 9/10 keep notes on paper, all frustrated by inconsistency
**Conclusion**: Problem validated, proceed.
---
### Phase 2: Solution Validation (Concept)
**Assumption**: "Parents would use an app that auto-tracks sleep from smart monitor"
**Experiment 2: Landing Page**
- Create page describing solution
- Run ads targeting new parents
- Success criteria: 20% email signup rate
**Result**: 8% signup rate
**Conclusion**: Interest is lower than hoped. Talk to non-signups to understand why.
**Learning**: They don't have smart monitors; manual entry would be fine.
---
### Phase 3: Solution Validation (Refined)
**Assumption**: "Parents will manually log sleep times if it's fast"
**Experiment 3: Concierge MVP**
- Ask 5 parents to text sleep times
- Manually create charts/insights
- Send back daily summaries
- Success criteria: 4/5 use it for 2 weeks
**Result**: All 5 use it for 2 weeks, ask to keep using
**Conclusion**: Strong validation, build simple MVP.
---
### Phase 4: Build & Validate MVP
**Build**: Simple app with manual time entry + basic charts
**Experiment 4: Private Beta**
- 30 parents use MVP for 4 weeks
- Success criteria: 60% retention after 2 weeks, 40% after 4 weeks
**Result**: 70% 2-week retention, 50% 4-week retention
**Conclusion**: Exceeded criteria, prepare for launch.
---
This workflow (problem → concept → refined solution → MVP) took 8 weeks and required only minimal development spend before the idea was validated.
---
## Validation Checklist by Stage
### Idea Stage
- [ ] Problem validated through customer interviews
- [ ] Current solutions identified and evaluated
- [ ] Target user segment defined and accessible
- [ ] Pain level assessed (nice-to-have vs. must-have)
### Concept Stage
- [ ] Solution concept tested with users
- [ ] Value proposition resonates
- [ ] Demand signal measured (signups, interest)
- [ ] Key assumptions identified and prioritized
### Pre-Build Stage
- [ ] Prototype tested with target users
- [ ] Core workflow validated
- [ ] Pricing validated (willingness to pay)
- [ ] Technical feasibility confirmed
### MVP Stage
- [ ] Beta users recruited
- [ ] Usage patterns observed
- [ ] Retention measured
- [ ] Unit economics validated
---
## Ready-to-Use Resources
### In `assets/`:
- **problem-validation-canvas.md**: Framework for validating problems before solutions
- **solution-validation-checklist.md**: Step-by-step checklist for solution validation
- **validation-experiment-template.md**: Design experiments to test assumptions
### In `references/`:
- **lean-startup-validation.md**: Build-Measure-Learn cycle, MVP types, pivot decisions
- **assumption-testing-methods.md**: Comprehensive assumption testing techniques
---
## Further Reading
**Books**:
- "The Lean Startup" by Eric Ries
- "The Mom Test" by Rob Fitzpatrick
- "Continuous Discovery Habits" by Teresa Torres
- "Testing Business Ideas" by Bland & Osterwalder
**Key Concepts**:
- Minimum Viable Product (MVP)
- Build-Measure-Learn
- Validated learning
- Pivot or persevere
- Assumption mapping
- Evidence-based product development
---
**Remember**: Validation isn't about proving you're right. It's about learning what's true. Seek evidence that you're wrong - that's the fastest path to building something valuable.