Initial commit

Zhongwei Li
2025-11-30 08:38:26 +08:00
commit 41d9f6b189
304 changed files with 98322 additions and 0 deletions


@@ -0,0 +1,210 @@
---
name: discovery-interviews-surveys
description: Use when validating product assumptions before building, discovering unmet user needs, understanding customer problems and workflows, testing concepts or positioning, researching target markets, identifying jobs-to-be-done and hiring triggers, uncovering pain points and workarounds, or when users mention user research, customer interviews, surveys, discovery interviews, validation studies, or voice of customer.
---
# Discovery Interviews & Surveys
## Table of Contents
- [Purpose](#purpose)
- [When to Use](#when-to-use)
- [What Is It?](#what-is-it)
- [Workflow](#workflow)
- [Common Patterns](#common-patterns)
- [Guardrails](#guardrails)
- [Quick Reference](#quick-reference)
## Purpose
Discovery Interviews & Surveys help you learn from users systematically to:
- **Validate assumptions** before investing in building
- **Discover real problems** users experience (not just stated needs)
- **Understand jobs-to-be-done** (what users "hire" your product to do)
- **Identify pain points** and current workarounds
- **Test concepts** and positioning with target audience
- **Uncover unmet needs** that users may not articulate directly
This moves you from guesswork to evidence-based product decisions.
## When to Use
Use this skill when:
- **Pre-build validation**: Testing product ideas before development
- **Problem discovery**: Understanding user pain points and workflows
- **Jobs-to-be-done research**: Identifying hiring/firing triggers and desired outcomes
- **Market research**: Understanding target audience, competitive landscape, willingness to pay
- **Concept testing**: Validating positioning, messaging, feature prioritization
- **Post-launch learning**: Understanding adoption barriers, churn reasons, expansion opportunities
- **Customer satisfaction research**: Identifying satisfaction/dissatisfaction drivers
- **UX research**: Mental models, task flows, usability issues
- **Voice of customer**: Gathering qualitative insights for roadmap prioritization
Trigger phrases: "user research", "customer interviews", "surveys", "discovery", "validation study", "voice of customer", "jobs-to-be-done", "JTBD", "user needs"
## What Is It?
Discovery Interviews & Surveys provide structured approaches to learn from users while avoiding common biases (leading questions, confirmation bias, selection bias).
**Key components**:
1. **Interview guides**: Open-ended questions that reveal problems and context
2. **Survey instruments**: Scaled questions for quantitative validation at scale
3. **JTBD probes**: Questions focused on hiring/firing triggers and desired outcomes
4. **Bias-avoidance techniques**: Past behavior focus, "show me" requests, avoiding hypotheticals
5. **Analysis frameworks**: Thematic coding, affinity mapping, statistical analysis
**Quick example:**
**Bad interview question** (leading, hypothetical):
"Would you pay $49/month for a tool that automatically backs up your files?"
**Good interview approach** (behavior-focused, problem-discovery):
1. "Tell me about the last time you lost important files. What happened?"
2. "What have you tried to prevent data loss? How's that working?"
3. "Walk me through your current backup process. Show me if possible."
4. "What would need to change for you to invest time/money in better backup?"
**Result**: Learn about actual problems, current solutions, willingness to change—not hypothetical preferences.
## Workflow
Copy this checklist and track your progress:
```
Discovery Research Progress:
- [ ] Step 1: Define research objectives and hypotheses
- [ ] Step 2: Identify target participants
- [ ] Step 3: Choose research method (interviews, surveys, or both)
- [ ] Step 4: Design research instruments
- [ ] Step 5: Conduct research and collect data
- [ ] Step 6: Analyze findings and extract insights
```
**Step 1: Define research objectives**
Specify what you're trying to learn, the key hypotheses to test, success criteria for the research, and the decision it will inform. See [Common Patterns](#common-patterns) for typical objectives.
**Step 2: Identify target participants**
Define participant criteria (demographics, behaviors, firmographics), sample size needed, recruitment strategy, and screening questions. For sampling strategies, see [resources/methodology.md](resources/methodology.md#participant-recruitment).
**Step 3: Choose research method**
Based on objective and constraints:
- **For deep problem discovery (5-15 participants)** → Use [resources/template.md](resources/template.md#interview-guide-template) for in-depth interviews
- **For concept testing at scale (50-200+ participants)** → Use [resources/template.md](resources/template.md#survey-template) for quantitative validation
- **For JTBD research** → Use [resources/methodology.md](resources/methodology.md#jobs-to-be-done-interviews) for switch interviews
- **For mixed methods** → Interviews for discovery, surveys for validation
**Step 4: Design research instruments**
Create interview guide or survey with bias-avoidance techniques. Use [resources/template.md](resources/template.md) for structure. Avoid leading questions, focus on past behavior, use "show me" requests. For advanced question design, see [resources/methodology.md](resources/methodology.md#question-design-principles).
**Step 5: Conduct research**
Execute interviews (record with permission, take notes) or distribute surveys (pilot test first). Use proper techniques (active listening, follow-up probes, silence for thinking). See [Guardrails](#guardrails) for critical requirements.
**Step 6: Analyze findings**
For interviews: thematic coding, affinity mapping, quote extraction. For surveys: statistical analysis, cross-tabs, open-end coding. Create insights document with evidence. Self-assess using [resources/evaluators/rubric_discovery_interviews_surveys.json](resources/evaluators/rubric_discovery_interviews_surveys.json). **Minimum standard**: Average score ≥ 3.5.
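If it helps to make the self-assessment concrete, here is a minimal sketch (Python) that averages your scores against the rubric's minimum standard. It assumes the rubric path above and uses illustrative hand-filled scores:
```python
import json

# Minimal self-assessment sketch: load the rubric and average your scores.
RUBRIC_PATH = "resources/evaluators/rubric_discovery_interviews_surveys.json"

with open(RUBRIC_PATH) as f:
    rubric = json.load(f)

# Example self-assigned scores (1-5) per criterion -- replace with your own.
scores = {c["name"]: 4 for c in rubric["criteria"]}

average = sum(scores.values()) / len(scores)
minimum = rubric.get("minimum_score", 3.5)

print(f"Average score: {average:.2f} (minimum standard: {minimum})")
if average < minimum:
    low = [name for name, s in scores.items() if s <= 3]
    print("Below standard -- revisit:", ", ".join(low))
```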
## Common Patterns
**Pattern 1: Problem Discovery Interviews**
- **Objective**: Understand user pain points and current workflows
- **Approach**: 8-12 in-depth interviews, open-ended questions, focus on past behavior and actual solutions
- **Key questions**: "Tell me about the last time...", "Walk me through...", "What have you tried?", "How's that working?"
- **Output**: Problem themes, frequency estimates, current workarounds, willingness to change
- **Example**: B2B SaaS discovery—interview potential customers about current tools and pain points
**Pattern 2: Jobs-to-be-Done Research**
- **Objective**: Identify why users "hire" products and what triggers switching
- **Approach**: Switch interviews with recent adopters or switchers, focus on timeline and context
- **Key questions**: "What prompted you to look?", "What alternatives did you consider?", "What almost stopped you?", "What's different now?"
- **Output**: Hiring triggers, firing triggers, desired outcomes, anxieties, habits
- **Example**: SaaS churn research—interview recent churners about their switch to a competitor
**Pattern 3: Concept Testing (Qualitative)**
- **Objective**: Test product concepts, positioning, or messaging before launch
- **Approach**: 10-15 interviews showing concept (mockup, landing page, description), gather reactions
- **Key questions**: "In your own words, what is this?", "Who is this for?", "What would you use it for?", "How much would you expect to pay?"
- **Output**: Comprehension score, perceived value, target audience clarity, pricing anchors
- **Example**: Pre-launch validation—test landing page messaging with target audience
**Pattern 4: Survey for Quantitative Validation**
- **Objective**: Validate findings from interviews at scale or prioritize features
- **Approach**: 100-500 participants, mix of scaled questions (Likert, ranking) and open-ends
- **Key questions**: Satisfaction scores (CSAT, NPS), feature importance/satisfaction (Kano), usage frequency, demographics
- **Output**: Statistical significance, segmentation, prioritization (importance vs satisfaction matrix; see the sketch after this pattern)
- **Example**: Product roadmap prioritization—survey 500 users on feature importance
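The importance vs satisfaction matrix from Pattern 4 can be computed directly from survey means. A minimal sketch (Python), with made-up feature names and scores standing in for your survey results:
```python
# Illustrative importance-vs-satisfaction prioritization. In practice the
# means come from your survey's 1-5 importance and satisfaction questions.
features = {
    "Bulk export":     {"importance": 4.5, "satisfaction": 2.1},
    "Dark mode":       {"importance": 2.8, "satisfaction": 3.9},
    "SSO login":       {"importance": 4.2, "satisfaction": 4.4},
    "Emoji reactions": {"importance": 2.2, "satisfaction": 2.5},
}

MIDPOINT = 3.0  # threshold splitting "high" from "low" on a 1-5 scale

def quadrant(importance: float, satisfaction: float) -> str:
    if importance >= MIDPOINT and satisfaction < MIDPOINT:
        return "Fix first (high importance, low satisfaction)"
    if importance >= MIDPOINT:
        return "Maintain (high importance, high satisfaction)"
    if satisfaction < MIDPOINT:
        return "Deprioritize (low importance, low satisfaction)"
    return "Possible over-investment (low importance, high satisfaction)"

for name, s in features.items():
    print(f"{name:16s} -> {quadrant(s['importance'], s['satisfaction'])}")
```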
**Pattern 5: Continuous Discovery**
- **Objective**: Ongoing learning, not one-time project
- **Approach**: Weekly customer conversations (15-30 min), rotating team members, shared notes
- **Key questions**: Varies by current focus (new features, onboarding, expansion, retention)
- **Output**: Continuous insight feed, early problem detection, relationship building
- **Example**: Product team does 3-5 customer calls weekly, logs insights in shared doc
## Guardrails
**Critical requirements:**
1. **Avoid leading questions**: Don't telegraph the "right" answer. Bad: "Don't you think our UI is confusing?" Good: "Walk me through using this feature. What happened?"
2. **Focus on past behavior, not hypotheticals**: What people did reveals truth; what they say they'd do is often wrong. Bad: "Would you use this feature?" Good: "Tell me about the last time you needed to do X."
3. **Use "show me" not "tell me"**: Actual behavior > described behavior. Ask to screen-share, demonstrate current workflow, show artifacts (spreadsheets, tools).
4. **Recruit right participants**: Screen carefully. Wrong participants = wasted time. Define inclusion/exclusion criteria, use screening survey.
5. **Sample size appropriate for method**: Interviews: 5-15 for themes to emerge. Surveys: 100+ for statistical significance, 30+ per segment if comparing.
6. **Avoid confirmation bias**: Actively look for disconfirming evidence. If 9/10 interviews support hypothesis, focus heavily on the 1 that doesn't.
7. **Record and transcribe (with permission)**: Memory is unreliable. Record interviews, transcribe for analysis. Take notes as backup.
8. **Analyze systematically**: Don't cherry-pick quotes that support preferred conclusion. Use thematic coding, count themes, present contradictory evidence.
**Common pitfalls:**
- **Asking "would you" questions**: Hypotheticals are unreliable. Focus on "have you", "tell me about when", "show me"
- **Small sample statistical claims**: "80% of users want feature X" from 5 interviews is not valid. Interviews = themes, surveys = statistics
- **Selection bias**: Interviewing only enthusiasts or only detractors skews results. Recruit diverse sample
- **Ignoring non-verbal cues**: Hesitation, confusion, workarounds during "show me" reveal truth beyond words
- **Stopping at surface answers**: First answer is often rationalization. Follow up: "Tell me more", "Why did that matter?", "What else?"
## Quick Reference
**Key resources:**
- **[resources/template.md](resources/template.md)**: Interview guide template, survey template, JTBD question bank, screening questions
- **[resources/methodology.md](resources/methodology.md)**: Advanced techniques (JTBD switch interviews, Kano analysis, thematic coding, statistical analysis, continuous discovery)
- **[resources/evaluators/rubric_discovery_interviews_surveys.json](resources/evaluators/rubric_discovery_interviews_surveys.json)**: Quality criteria for research design and execution
**Typical workflow time:**
- Interview guide design: 1-2 hours
- Conducting 10 interviews: 10-15 hours (including scheduling)
- Analysis and synthesis: 4-8 hours
- Survey design: 2-4 hours
- Survey distribution and collection: 1-2 weeks
- Survey analysis: 2-4 hours
**When to escalate:**
- Large-scale quantitative studies (1000+ participants)
- Statistical modeling or advanced segmentation
- Longitudinal studies (tracking over time)
- Ethnographic research (observing in natural setting)
→ Use [resources/methodology.md](resources/methodology.md) or consider engaging a specialist researcher
**Inputs required:**
- **Research objective**: What you're trying to learn
- **Hypotheses** (optional): Specific beliefs to test
- **Target persona**: Who to interview/survey
- **Job-to-be-done** (optional): Specific JTBD focus
**Outputs produced:**
- `discovery-interviews-surveys.md`: Complete research plan with interview guide or survey, recruitment criteria, analysis plan, and insights template


@@ -0,0 +1,265 @@
{
"criteria": [
{
"name": "Research Objective Clarity",
"description": "Is the research objective clearly defined with specific learning goals and hypotheses?",
"scoring": {
"1": "Vague objective ('learn about users'). No specific learning goals. Hypotheses missing. No connection to decision.",
"3": "Objective stated but could be more specific. Some learning goals identified. Hypotheses present but may lack precision. Decision somewhat clear.",
"5": "Crystal clear objective with specific learning goals. Testable hypotheses documented. Clear connection to decision to be informed. Success criteria defined."
}
},
{
"name": "Participant Targeting & Recruitment",
"description": "Are target participants well-defined with appropriate screening and recruitment strategy?",
"scoring": {
"1": "Participant criteria vague or missing. No screening questions. Recruitment strategy not specified. Sample size not justified.",
"3": "Participant criteria identified but may lack specificity. Basic screening questions. Recruitment strategy mentioned. Sample size roughly appropriate.",
"5": "Precise participant criteria (demographics, behaviors, firmographics). Comprehensive screening questions. Clear recruitment strategy with channels. Sample size justified by method (5-15 for qual, 100+ for quant, 30+ per segment)."
}
},
{
"name": "Question Quality & Bias Avoidance",
"description": "Are questions well-designed, open-ended, behavior-focused, and free of leading bias?",
"scoring": {
"1": "Many leading or hypothetical questions ('Would you...?', 'Don't you think...?'). Closed yes/no questions dominate. No behavior focus. Confirmation bias obvious.",
"3": "Mix of open and closed questions. Some behavior focus but also hypotheticals. Minor leading language. Some bias-avoidance techniques attempted.",
"5": "Exemplary questions: open-ended, behavior-focused ('Tell me about the last time...'), use 'show me' requests, avoid hypotheticals, no leading language. Systematic bias-avoidance techniques applied throughout."
}
},
{
"name": "Interview Guide / Survey Structure",
"description": "Is the research instrument well-structured with logical flow and appropriate depth?",
"scoring": {
"1": "Poor structure. No logical flow. Too shallow (only surface questions) or too narrow (missing key areas). Inappropriate question types.",
"3": "Decent structure with some flow. Covers main topics but may miss areas. Question types mostly appropriate. Could use refinement.",
"5": "Excellent structure: warm-up, core questions, concept test (if applicable), wrap-up. Logical flow from general to specific. Appropriate depth and breadth. Question types match objectives (open-ended for discovery, scaled for validation). Includes follow-up probes."
}
},
{
"name": "JTBD / Problem Discovery Focus",
"description": "For discovery research, does it focus on jobs-to-be-done, problems, and context (not just features)?",
"scoring": {
"1": "Feature-focused ('Do you want feature X?'). No JTBD exploration. Missing context about problems, workflows, or triggers. Hypothetical focus.",
"3": "Some problem exploration. Brief JTBD elements. Context partially explored. Mix of problem and feature questions.",
"5": "Deep JTBD focus: hiring/firing triggers, desired outcomes, current workarounds, pain points, context. Timeline reconstruction for switchers. Problems before solutions. 'Show me' requests for workflows."
}
},
{
"name": "Sample Size & Statistical Rigor",
"description": "Is sample size appropriate for method and are statistical considerations addressed for surveys?",
"scoring": {
"1": "Sample size inappropriate (e.g., 3 interviews claiming statistical significance, or 20-person survey). No power analysis. No consideration of statistical validity for quantitative claims.",
"3": "Sample size roughly appropriate but not justified. Some statistical awareness for surveys (e.g., descriptive stats). May lack power analysis or significance testing.",
"5": "Sample size justified: 5-15 for qualitative themes, 100+ for survey stats, 30+ per segment for comparisons. Power analysis for surveys (margin of error, confidence level). Statistical tests specified (t-test, chi-square, etc.). Saturation check for interviews."
}
},
{
"name": "Analysis Plan & Rigor",
"description": "Is there a clear, systematic analysis plan with rigor techniques?",
"scoring": {
"1": "No analysis plan. Unclear how data will be processed. No mention of systematic approach, coding, or statistical tests. Risk of cherry-picking.",
"3": "Basic analysis plan. For interviews: thematic coding mentioned. For surveys: descriptive stats mentioned. Some structure but may lack rigor techniques.",
"5": "Comprehensive analysis plan. For interviews: systematic thematic coding, affinity mapping, frequency counting, saturation check, negative case analysis. For surveys: descriptive stats, inferential tests, segmentation, open-end coding. Pre-specified to avoid p-hacking."
}
},
{
"name": "Facilitation & Execution Guidance",
"description": "For interviews, is there guidance on facilitation techniques (active listening, probes, silence)?",
"scoring": {
"1": "No facilitation guidance. Script-only approach with no flexibility. Missing techniques like probing, silence, active listening.",
"3": "Some facilitation guidance. Probes included for some questions. Brief mention of techniques. Could be more comprehensive.",
"5": "Detailed facilitation guidance: active listening, follow-up probes ('Tell me more', 'Why did that matter?'), embrace silence (3-5 sec pause), mirroring, 'show me' requests, non-verbal cue awareness. Recording and note-taking protocol."
}
},
{
"name": "Ethics & Consent",
"description": "Are ethical considerations addressed (informed consent, privacy, compensation)?",
"scoring": {
"1": "No mention of consent, privacy, or compensation. Ethical considerations ignored.",
"3": "Brief mention of consent or compensation. Some privacy awareness. May lack detail on implementation.",
"5": "Comprehensive ethics: informed consent script, explicit recording permission, anonymization in reports, secure data storage, fair compensation specified, opt-out option. Privacy-first approach."
}
},
{
"name": "Insights Documentation & Actionability",
"description": "Is there a clear template or plan for documenting insights with evidence and recommendations?",
"scoring": {
"1": "No insights documentation plan. Unclear how findings will be communicated. Missing connection to actionable recommendations.",
"3": "Basic insights template. Some structure for documenting findings. Recommendations mentioned but may lack specificity. Somewhat actionable.",
"5": "Comprehensive insights document template: executive summary, methodology, key findings with evidence (quotes/stats), surprises, recommendations (specific actions), confidence level, limitations. Actionable and decision-ready."
}
}
],
"minimum_score": 3.5,
"guidance_by_research_type": {
"Problem Discovery Interviews": {
"target_score": 4.0,
"focus_criteria": [
"Question Quality & Bias Avoidance",
"JTBD / Problem Discovery Focus",
"Facilitation & Execution Guidance"
],
"sample_size": "8-15 participants",
"key_requirements": [
"Open-ended, behavior-focused questions",
"Focus on past behavior, not hypotheticals",
"'Show me' requests for workflows",
"Problem before solution",
"Current workarounds explored",
"Systematic thematic coding planned"
],
"common_pitfalls": [
"Asking 'Would you use...' instead of 'Tell me about the last time you...'",
"Jumping to solutions before understanding problems",
"Not probing deeply enough (stopping at surface answers)",
"Selection bias (only interviewing enthusiasts)"
]
},
"Jobs-to-be-Done Research": {
"target_score": 4.2,
"focus_criteria": [
"JTBD / Problem Discovery Focus",
"Question Quality & Bias Avoidance",
"Interview Guide / Survey Structure"
],
"sample_size": "10-15 recent switchers",
"key_requirements": [
"Recruit recent switchers (last 3-6 months)",
"Timeline reconstruction (first thought → current state)",
"Forces of progress (push, pull, anxiety, habit)",
"Hiring/firing triggers identified",
"Desired outcomes vs current capabilities",
"Context and constraints explored"
],
"common_pitfalls": [
"Interviewing people who switched too long ago (memory fades)",
"Not reconstructing timeline (missing trigger events)",
"Ignoring anxieties and habits (forces resisting change)",
"Focusing only on product features, not job to be done"
]
},
"Concept Testing (Qualitative)": {
"target_score": 3.8,
"focus_criteria": [
"Question Quality & Bias Avoidance",
"Interview Guide / Survey Structure",
"Participant Targeting & Recruitment"
],
"sample_size": "10-15 target users",
"key_requirements": [
"Comprehension check ('In your own words, what is this?')",
"Target audience validation ('Who is this for?')",
"Use case exploration ('When would you use it?')",
"Value perception (pricing anchors, comparisons)",
"Concerns and objections surfaced",
"Avoid leading ('Don't you think this is great?')"
],
"common_pitfalls": [
"Testing with wrong audience (not actual target users)",
"Leading participants to 'correct' answer",
"Not exploring concerns (only positive feedback)",
"Mistaking 'sounds interesting' for 'will actually use'"
]
},
"Quantitative Surveys": {
"target_score": 4.1,
"focus_criteria": [
"Sample Size & Statistical Rigor",
"Question Quality & Bias Avoidance",
"Analysis Plan & Rigor"
],
"sample_size": "100+ overall, 30+ per segment",
"key_requirements": [
"Sample size justified (power analysis, margin of error)",
"Mix of scaled questions and open-ends",
"Avoid leading language in questions",
"Randomize option order",
"Pilot test with 5-10 people",
"Statistical tests pre-specified (t-test, chi-square, etc.)",
"Segmentation plan for subgroup analysis"
],
"common_pitfalls": [
"Too small sample for statistical claims (n < 30 per segment)",
"Leading questions ('How much do you love our product?')",
"No pilot test (discovering issues after launch)",
"Cherry-picking significant results (p-hacking)",
"Ignoring non-response bias"
]
},
"Continuous Discovery": {
"target_score": 3.7,
"focus_criteria": [
"Interview Guide / Survey Structure",
"Facilitation & Execution Guidance",
"Insights Documentation & Actionability"
],
"sample_size": "3-5 conversations per week",
"key_requirements": [
"Lightweight process (15-30 min conversations)",
"Rotating team members (product, eng, design)",
"Shared notes repository",
"Flexible guide based on current focus",
"Monthly synthesis of patterns",
"Relationship building with customers"
],
"common_pitfalls": [
"Making it too formal (blocks adoption)",
"Only product team participates (team stays disconnected)",
"No shared documentation (insights lost)",
"No periodic synthesis (patterns missed)",
"Stopping after a few weeks (not continuous)"
]
}
},
"common_failure_modes": [
{
"failure": "Hypothetical questions ('Would you...?')",
"symptom": "Questions like 'Would you pay $X?', 'Would you use feature Y?', 'If we built Z, would you switch?'. Participants describe future intent, not past behavior.",
"detection": "Look for 'would', 'if', 'imagine'. Check if questions focus on actual past behavior vs hypothetical scenarios.",
"fix": "Reframe to past behavior: 'Tell me about the last time you [needed this]', 'What have you tried?', 'Show me your current workflow'. Use 'have you' not 'would you'."
},
{
"failure": "Leading questions (telegraphing desired answer)",
"symptom": "Questions like 'Don't you think...?', 'Isn't it true that...?', 'How much do you love...?'. Bias obvious.",
"detection": "Check if question suggests 'right' answer. Would neutral observer detect researcher's opinion from question?",
"fix": "Use neutral phrasing: 'What's your experience with...?', 'Walk me through...'. Let participant form own opinion, don't guide."
},
{
"failure": "Insufficient sample size for claims",
"symptom": "Statistical claims from tiny samples ('80% of users want X' from 5 interviews). Surveys with n < 30 per segment claiming significance.",
"detection": "Check sample size vs type of claim. Interviews → themes only. Surveys → need n ≥ 30 per segment for stats.",
"fix": "For interviews: Report themes ('8/12 mentioned Y'), not percentages. For surveys: Ensure n ≥ 100 overall, 30+ per segment. Run power analysis."
},
{
"failure": "Wrong participants (selection bias)",
"symptom": "Interviewing only enthusiasts, or only detractors. Convenience sample (co-workers, friends). Not screening for target criteria.",
"detection": "Check recruitment strategy. Are criteria specific? Is sample diverse? Any obvious biases?",
"fix": "Define precise inclusion/exclusion criteria. Screen with survey. Recruit diverse sample (enthusiasts AND detractors, various demographics). Avoid convenience sampling."
},
{
"failure": "No systematic analysis (cherry-picking)",
"symptom": "No coding or analysis plan. Quotes selected to support pre-existing belief. Contradictory evidence ignored or dismissed.",
"detection": "Check for analysis plan. Is there systematic coding? Are contradictory findings presented? Does report feel one-sided?",
"fix": "Pre-specify analysis approach: thematic coding, affinity mapping, frequency counting. Actively look for disconfirming evidence. Present contradictions. Use inter-rater reliability."
},
{
"failure": "Surface-level probing (stopping too early)",
"symptom": "Accepting first answer without follow-up. Not asking 'Why did that matter?', 'Tell me more', 'What else?'. Missing deeper motivations.",
"detection": "Check interview guide for follow-up probes. Are there '5 whys' style follow-ups? Does guide encourage depth?",
"fix": "Add systematic probes: 'Tell me more', 'Why did that matter?', 'What else?', 'Walk me through what happened next'. Train interviewers to dig deeper."
},
{
"failure": "Feature-focused (not problem-focused)",
"symptom": "Questions about features ('Do you want dark mode?') instead of problems ('Tell me about when poor visibility is an issue'). Solutions before problems.",
"detection": "Count feature mentions vs problem mentions. Are questions about 'what we could build' or 'what problems you face'?",
"fix": "Reframe to problems: 'What challenges do you face with...?', 'When does [current solution] break down?', 'What workarounds have you tried?'. Problems first, solutions later."
},
{
"failure": "No ethics/consent",
"symptom": "Recording without permission. No informed consent. PII not anonymized. No compensation. Participants feel exploited.",
"detection": "Check for consent script, recording permission, anonymization plan, compensation details.",
"fix": "Add consent script. Explicitly ask to record. Anonymize in reports (P1, P2). Offer fair compensation ($50-150 for 60 min). Respect opt-outs."
}
]
}


@@ -0,0 +1,269 @@
# Discovery Interviews & Surveys - Advanced Methodology
## 1. Jobs-to-be-Done (JTBD) Switch Interviews
**When to use**: Understanding why users switch products, identifying hiring/firing triggers.
**Process**:
1. Recruit recent switchers (adopted product in last 3-6 months—memory is fresh)
2. Reconstruct timeline from first thought to current state (forces of progress)
3. Identify push (problems with old solution), pull (attraction to new), anxiety (concerns about new), habit (inertia keeping old)
**Forces of progress framework**:
- **Push**: What problems pushed you away from old solution?
- **Pull**: What attracted you to new solution?
- **Anxiety**: What concerns almost stopped you?
- **Habit**: What kept you using old solution despite problems?
**Key questions**:
- "When did you first realize [old solution] wasn't working?" (First thought—passive)
- "What event made you start actively looking?" (Trigger—active)
- "What did you consider? How did you evaluate?" (Consideration)
- "What almost made you not switch?" (Anxiety)
- "What was the deciding factor?" (Decision moment)
**Output**: Hiring triggers, firing triggers, evaluation criteria, anxieties, decision drivers.
---
## 2. Kano Analysis (Feature Prioritization)
**When to use**: Deciding which features to build based on satisfaction impact.
**Categories**:
- **Must-have** (basic): Dissatisfaction if absent, no extra satisfaction if present
- **Performance** (linear): More is better—satisfaction increases linearly
- **Delight** (exciter): Big satisfaction if present, no dissatisfaction if absent
- **Indifferent**: No impact either way
- **Reverse**: Presence causes dissatisfaction (some users actively prefer not to have it)
**Survey approach**:
For each feature, ask 2 questions:
1. "How would you feel if [feature] WAS present?" (Functional)
- I like it / I expect it / I'm neutral / I can tolerate it / I dislike it
2. "How would you feel if [feature] WAS NOT present?" (Dysfunctional)
- I like it / I expect it / I'm neutral / I can tolerate it / I dislike it
**Classification matrix**: Cross-reference functional vs dysfunctional responses to categorize feature.
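A minimal sketch (Python) of one common version of that evaluation grid, using this document's category names; the answers and example respondent are illustrative:
```python
# Kano classification: each respondent answers the functional ("feature present")
# and dysfunctional ("feature absent") question with one of five options.
# "Questionable" flags contradictory answer pairs.
ANSWERS = ["like", "expect", "neutral", "tolerate", "dislike"]

# Rows = functional answer, columns = dysfunctional answer.
KANO_GRID = {
    "like":     ["Questionable", "Delight", "Delight", "Delight", "Performance"],
    "expect":   ["Reverse", "Indifferent", "Indifferent", "Indifferent", "Must-have"],
    "neutral":  ["Reverse", "Indifferent", "Indifferent", "Indifferent", "Must-have"],
    "tolerate": ["Reverse", "Indifferent", "Indifferent", "Indifferent", "Must-have"],
    "dislike":  ["Reverse", "Reverse", "Reverse", "Reverse", "Questionable"],
}

def classify(functional: str, dysfunctional: str) -> str:
    """Return the Kano category for one respondent's pair of answers."""
    return KANO_GRID[functional][ANSWERS.index(dysfunctional)]

# Example: likes having the feature, dislikes its absence.
print(classify("like", "dislike"))  # -> Performance
```
For the full sample, classify each respondent's answer pair and take the modal category per feature.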
**Prioritization**:
1. Must-haves first (absence causes dissatisfaction)
2. Performance features second (linear satisfaction gain)
3. Delighters third (differentiation, but not required)
---
## 3. Thematic Coding for Interview Analysis
**Process**:
1. **Familiarization**: Read all transcripts once without coding
2. **Open coding**: Highlight interesting quotes, note initial themes (bottom-up)
3. **Axial coding**: Group codes into broader themes
4. **Selective coding**: Identify core themes, relationships between themes
5. **Frequency counting**: How many participants mentioned each theme?
6. **Saturation check**: Did new interviews reveal new themes, or just confirm existing?
**Rigor techniques**:
- **Inter-rater reliability**: Two coders independently code subset, compare agreement
- **Negative case analysis**: Actively look for quotes that contradict main themes
- **Thick description**: Provide rich context, not just quotes
- **Audit trail**: Document coding decisions
**Software tools**: NVivo, Atlas.ti, or spreadsheet with color-coding.
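A small sketch (Python) of step 5, frequency counting: count distinct participants per theme rather than quotes, so one talkative participant doesn't inflate a theme. The coded data here is illustrative:
```python
from collections import defaultdict

# Each entry pairs a participant ID with a theme applied to one of their quotes.
coded_quotes = [
    ("P1", "Onboarding confusion"), ("P1", "Pricing concerns"),
    ("P2", "Onboarding confusion"), ("P3", "Pricing concerns"),
    ("P3", "Onboarding confusion"), ("P3", "Manual workarounds"),
]

participants_per_theme = defaultdict(set)
for participant, theme in coded_quotes:
    participants_per_theme[theme].add(participant)

total_participants = len({p for p, _ in coded_quotes})
for theme, people in sorted(participants_per_theme.items(),
                            key=lambda kv: -len(kv[1])):
    print(f"{theme}: {len(people)}/{total_participants} participants")
```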
---
## 4. Statistical Analysis for Surveys
**Descriptive statistics**:
- **Central tendency**: Mean, median, mode
- **Spread**: Standard deviation, range, interquartile range
- **Distribution**: Histogram, check for normality
**Inferential statistics**:
- **t-test**: Compare means between two groups (e.g., users vs non-users)
- **ANOVA**: Compare means across 3+ groups
- **Chi-square**: Test association between categorical variables
- **Correlation**: Relationship between two continuous variables (Pearson's r)
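A short sketch (Python, assuming SciPy is available) of the two most common tests above, with made-up data:
```python
from scipy import stats

# Illustrative data: satisfaction scores (1-5) for two segments, and a 2x2
# table of feature adoption by plan type.
users_satisfaction = [4, 5, 3, 4, 4, 5, 3, 4, 5, 4]
non_users_satisfaction = [3, 2, 4, 3, 3, 2, 3, 4, 2, 3]

# t-test: do the two groups' mean satisfaction scores differ?
t_stat, p_value = stats.ttest_ind(users_satisfaction, non_users_satisfaction)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Chi-square: is feature adoption associated with plan type?
# Rows = plan (free, paid), columns = adopted feature (yes, no).
contingency = [[30, 70],
               [55, 45]]
chi2, p, dof, expected = stats.chi2_contingency(contingency)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")
```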
**Sample size requirements**:
- **Minimum for statistical power**: n ≥ 30 per segment
- **Margin of error**: ±5% at 95% confidence requires n ≈ 400 (for population > 10K)
- **For small populations**: Use finite population correction
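The "n ≈ 400" figure comes from the standard sample-size formula for a proportion. A small sketch (Python) including the finite population correction; defaults assume the worst-case proportion p = 0.5:
```python
import math

def required_sample_size(margin_of_error=0.05, confidence_z=1.96,
                         proportion=0.5, population=None):
    """Sample size for estimating a proportion at the given margin of error.

    Applies the finite population correction when a population size is supplied.
    """
    n = (confidence_z ** 2) * proportion * (1 - proportion) / margin_of_error ** 2
    if population is not None:
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)

print(required_sample_size())                 # ~385 for +/-5% at 95% confidence
print(required_sample_size(population=2000))  # smaller n for a small population
```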
**Segmentation**:
- Divide sample by demographics, behavior, or attitudes
- Compare segments on key metrics (e.g., satisfaction, willingness to pay)
- Ensure each segment has n ≥ 30 for valid comparisons
---
## 5. Bias Mitigation Techniques
**Common biases**:
- **Confirmation bias**: Seeking evidence that confirms pre-existing beliefs
- **Leading questions**: Telegraphing desired answer
- **Social desirability bias**: Participants say what they think you want to hear
- **Selection bias**: Non-representative sample
- **Recency bias**: Overweighting recent experiences
- **Hindsight bias**: Rewriting history post-hoc
**Mitigation strategies**:
1. **Avoid leading questions**: Bad: "Don't you think our UI is confusing?" Good: "Walk me through using this feature."
2. **Focus on behavior, not attitudes**: Bad: "Do you value security?" Good: "Tell me about the last time security mattered in your decision."
3. **Use concrete examples**: Bad: "How important is speed?" Good: "Show me your current workflow. Where do you wait?"
4. **Recruit diverse sample**: Include detractors, not just enthusiasts. Screen for demographics and behaviors.
5. **Blind analysis**: Analyze data without knowing which participant is which (if possible).
6. **Pre-register hypotheses**: Document what you expect to find before data collection.
---
## 6. Participant Recruitment Strategies
**Approaches**:
**For existing users**:
- **In-app invite**: Email or in-app message to random sample
- **Behavior-triggered**: Invite after specific action (e.g., canceled subscription, completed onboarding)
- **Support tickets**: Recruit from users who contacted support
- **Incentive**: Gift card, product credits, donation to charity
**For non-users/prospects**:
- **User testing platforms**: UserTesting, Respondent, User Interviews
- **Social media**: LinkedIn, Twitter, Facebook groups
- **Snowball sampling**: Ask interviewees to refer others
- **Panel providers**: Qualtrics, Prolific (for surveys)
- **Community forums**: Reddit, Slack communities, Discord
**Screening**:
- Use short survey (3-5 questions) to qualify
- Check for disqualifiers (competitors, never used category, outside target)
- Over-recruit by 20-30% to account for no-shows
**Sample size guidance**:
- **Qualitative interviews**: 5-15 (themes emerge by interview 5-8, saturation by 12-15)
- **Quantitative surveys**: 100+ for basic stats, 400+ for ±5% margin of error, 30+ per segment for comparisons
---
## 7. Interview Facilitation Best Practices
**Before interview**:
- Review objectives and guide
- Set up recording (with participant permission)
- Prepare backup note-taking system
- Join 5 min early to check tech
**During interview**:
- **Active listening**: Focus on what they say, not your next question
- **Follow the energy**: If they get excited or frustrated, dig deeper
- **Embrace silence**: Pause 3-5 seconds after asking. Let them think.
- **Use mirroring**: Repeat last few words to encourage elaboration
- **Ask "why" sparingly**: Can sound accusatory. Use "What prompted..." "What mattered..."
- **Probe with "tell me more"**: When they hint at something interesting
- **Show don't tell**: Ask to screen-share, demonstrate, show artifacts (spreadsheets, tools)
- **Watch non-verbal**: Hesitation, confusion, workarounds reveal truth
**After interview**:
- Debrief: Write 3-5 key takeaways immediately
- Save recording and transcript
- Thank participant, send compensation
- Update sampling tracker (did they fit profile? Any biases?)
---
## 8. Survey Design Best Practices
**Question types**:
- **Likert scale** (1-5 agreement): "I am satisfied with [product]"
- **Semantic differential** (bipolar): Fast [1-7] Slow
- **Multiple choice** (single select): "Which do you prefer?"
- **Checkbox** (multi-select): "Which of these have you used?"
- **Ranking**: "Rank these features 1-5"
- **Open-ended**: "What is the biggest challenge you face?"
- **Matrix**: Rows = items, columns = rating scale
**Order effects**:
- Start with engaging, easy questions (not demographics)
- Group related questions
- Randomize option order (except ordered scales)
- Put demographics at end
- Avoid fatigue: Keep surveys < 10 min (15-20 questions)
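If your survey tool doesn't randomize option order for you, a tiny sketch (Python) of per-respondent shuffling that keeps an anchored choice last; option names are illustrative:
```python
import random

options = ["Email alerts", "Slack integration", "Bulk export", "None of the above"]

def randomized(opts, anchored_last="None of the above"):
    """Shuffle options but keep the anchored choice (e.g. 'None of the above') last."""
    shuffled = [o for o in opts if o != anchored_last]
    random.shuffle(shuffled)
    return shuffled + [anchored_last]

print(randomized(options))
```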
**Response scales**:
- **5-point** (standard): Very dissatisfied, Dissatisfied, Neutral, Satisfied, Very satisfied
- **Odd vs even**: Odd (5-point) allows neutral, even (4-point) forces choice
- **Labeled vs numeric**: Fully labeled preferred for clarity
**Pilot testing**:
- Test with 5-10 people before launch
- Check for confusing questions, technical issues, time to complete
- Iterate based on feedback
---
## 9. Continuous Discovery Practices
**Weekly interview cadence**:
- Schedule 3-5 customer conversations per week (15-30 min each)
- Rotate team members (product, design, eng)
- Focus rotates based on current priorities (new features, onboarding, retention, etc.)
**Process**:
1. **Recruiting**: Automated email to random sample, quick scheduling link
2. **Conducting**: Lightweight interview guide, record main points
3. **Sharing**: Post key quotes/insights in shared Slack channel or doc
4. **Synthesis**: Monthly review of patterns across all conversations
**Benefits**:
- Continuous learning loop
- Early problem detection
- Relationship building with customers
- Team alignment (everyone hears customer voice)
**Tools**: Calendly for scheduling, Zoom for calls, Dovetail or Notion for notes.
---
## 10. Mixed Methods Approach
**Sequential**:
- Phase 1 (Qual): Interviews to discover problems and generate hypotheses (n=10-15)
- Phase 2 (Quant): Survey to validate findings at scale (n=100-500)
**Example**:
- Interviews: "Users mention pricing confusion" (theme in 8/12 interviews)
- Survey: Test hypothesis—"65% of users find pricing page confusing" (validated at scale)
**Concurrent**:
- Run interviews and surveys simultaneously
- Use interviews for depth (why), surveys for breadth (how many)
**Triangulation**:
- Interviews: What users say
- Surveys: What users report
- Analytics: What users do
- Convergence across methods = high confidence
---
## 11. Ethical Considerations
**Informed consent**:
- Explain research purpose, how data will be used
- Get explicit permission to record
- Allow opt-out at any time
**Privacy**:
- Anonymize participant data in reports (use P1, P2, etc.)
- Store recordings securely, delete after transcription (or per policy)
- Don't share personally identifiable information
**Compensation**:
- Fair compensation for time ($50-150 for 60 min interview, $10-25 for survey)
- Offer choice (gift card, donation, product credit)
- Pay promptly (within 1 week)
**Vulnerable populations**:
- Extra care with children, elderly, disabled, marginalized groups
- May require IRB approval for academic/medical research


@@ -0,0 +1,300 @@
# Discovery Interviews & Surveys - Template
## Workflow
```
Research Template Progress:
- [ ] Define objectives and hypotheses
- [ ] Design screening and recruitment
- [ ] Create interview guide or survey
- [ ] Plan analysis approach
- [ ] Document research plan
```
---
## Interview Guide Template
### Research Objective
**What we're trying to learn**: [Specific learning goal]
**Key hypotheses**:
1. [Hypothesis 1]
2. [Hypothesis 2]
### Participant Criteria
**Must have**:
- [Criterion 1—e.g., used competitor product in last 6 months]
- [Criterion 2—e.g., decision-maker for this purchase]
**Nice to have**:
- [Optional criterion]
**Sample size**: [5-15 for qualitative themes]
### Interview Script
**Introduction** (2 min):
"Thanks for joining. I'm researching [topic]. There are no right/wrong answers—I want to understand your experience. I'll record this for note-taking (with your permission). Any questions before we start?"
**Warm-up** (3 min):
- Tell me about your role and what you're responsible for.
- [Context-setting question relevant to topic]
**Problem Discovery** (20-30 min):
Core questions (open-ended, behavior-focused):
1. **Recent experience**: "Tell me about the last time you [specific behavior related to problem]. Walk me through what happened."
- Follow-up: "What prompted that?" "What happened next?" "How did that feel?"
2. **Current solution**: "How do you handle [problem] today? Show me if possible."
- Follow-up: "How long have you done it this way?" "What works well?" "What's frustrating?"
3. **Workarounds**: "What have you tried to solve [problem]?"
- Follow-up: "How did that go?" "What made you stop/continue?"
4. **Pain points**: "What's the most frustrating part of [workflow]?"
- Follow-up: "How often does this happen?" "What's the impact when it does?"
5. **Desired outcome**: "If you could wave a magic wand and fix [problem], what would be different?"
- Follow-up: "Why would that matter?" "What would that enable?"
6. **Willingness to change**: "What would need to be true for you to change how you [workflow]?"
- Follow-up: "What's the cost of changing?" "What's the cost of not changing?"
**Concept Test** (10 min, if applicable):
Show concept (mockup, landing page, description):
1. **Comprehension**: "In your own words, what is this?"
2. **Audience**: "Who do you think this is for?"
3. **Use case**: "When would you use this?" "What would you use it for?"
4. **Value perception**: "How much would you expect to pay for this?" "Why?"
5. **Comparison**: "How is this different from [competitor/current solution]?"
6. **Concerns**: "What concerns you about this?" "What would hold you back?"
**Wrap-up** (5 min):
- "Is there anything I should have asked but didn't?"
- "Who else should I talk to?" (snowball sampling)
- "Can I follow up if I have more questions?"
**Thank and compensate**: [Gift card, donation, etc.]
---
## Survey Template
### Survey Structure
**Screener** (qualify participants):
1. [Demographic filter—e.g., age, location]
2. [Behavioral filter—e.g., used product X]
3. [Decision-making filter—e.g., influence on purchase]
**Main Survey**:
**Section 1: Current Behavior** (establish baseline)
1. Which of the following [products/services] do you currently use? (Select all that apply)
- [Option 1]
- [Option 2]
- None of the above
2. How often do you [key behavior]?
- Daily / Weekly / Monthly / Rarely / Never
3. What do you use [product/service] for? (Open-end)
**Section 2: Satisfaction & Problems** (identify pain points)
4. How satisfied are you with your current [solution]? (1-5 scale)
- Very dissatisfied / Dissatisfied / Neutral / Satisfied / Very satisfied
5. What are the biggest challenges you face with [current solution]? (Open-end)
6. How important is [feature/capability] to you? (1-5 scale)
- Not at all important / Slightly important / Moderately important / Very important / Extremely important
**Section 3: Feature Prioritization** (for product roadmap)
7. Please rate the importance of each feature: (Matrix—rows = features, columns = 1-5 importance)
- [Feature 1]
- [Feature 2]
- [Feature 3]
8. If you could only have 3 of these features, which would you choose? (Rank order, top 3)
**Section 4: Concept Test** (if applicable)
Show concept (image, description):
9. In your own words, what is this product/service? (Open-end)
10. How likely are you to use this if it were available? (1-5 scale)
- Very unlikely / Unlikely / Neutral / Likely / Very likely
11. What would you be willing to pay per month? (Price sensitivity)
- Less than $X / $X-$Y / $Y-$Z / More than $Z / I wouldn't pay
12. What concerns do you have about this concept? (Open-end)
**Section 5: Demographics** (for segmentation)
13. Company size (if B2B): [ranges]
14. Industry: [options]
15. Role: [options]
**Thank you**: "Thank you! [Incentive details if applicable]"
---
## Jobs-to-be-Done Interview Template
Focus on recent switchers (adopted your product or a competitor's product in the last 3-6 months).
**Timeline reconstruction**:
1. **First thought** (passive looking): "When did you first realize you had a problem with [old solution]? What happened?"
2. **Trigger event** (active looking): "What made you start actively looking for alternatives? What changed?"
3. **Consideration** (evaluation): "What options did you consider? How did you evaluate them?"
- Follow-up: "What criteria mattered most?" "What sources did you trust?"
4. **Anxiety** (concerns): "What almost stopped you from switching?" "What made you hesitate?"
5. **Decision** (commitment): "What made you ultimately choose [product]? What was the deciding factor?"
6. **First use** (onboarding): "Walk me through your first experience using [product]. What stood out?"
7. **Habit formation** (ongoing): "How has your use evolved? What's different now vs. early days?"
8. **Outcome** (job fulfillment): "What's better now compared to before? What job is [product] doing for you?"
9. **Tradeoffs**: "What did you give up by switching? What's worse now?"
---
## Question Design Principles
**DO:**
- ✅ Ask about past behavior: "Tell me about the last time..."
- ✅ Request demonstrations: "Can you show me how you..."
- ✅ Dig deeper: "Why did that matter?" "Tell me more" "What else?"
- ✅ Embrace silence: Pause after questions. Let participant think.
- ✅ Use open-ended questions: "What..." "How..." "Tell me about..."
- ✅ Focus on specifics: "Walk me through..." "What happened next?"
**DON'T:**
- ❌ Ask leading questions: "Don't you think...?" "Isn't it true that...?"
- ❌ Ask hypotheticals: "Would you...?" "If we built..."
- ❌ Ask multiple questions at once: Confuses participants
- ❌ Interrupt or finish sentences: Let them talk
- ❌ Explain or defend: You're learning, not selling
- ❌ Ask "why" repeatedly: Sounds accusatory. Use "What prompted..." "What mattered..."
---
## Screening Questions
**For B2B SaaS**:
1. What is your role? [Job title dropdown]
2. What is your company size? [Employee count ranges]
3. Do you influence or make purchase decisions for [product category]? Yes/No
4. Are you currently using [competitor product]? Yes/No/Used in the past
5. How long have you been using [product]? [Duration ranges]
**For Consumer**:
1. Which age range are you in? [Ranges]
2. Do you currently [key behavior]? Daily/Weekly/Monthly/Rarely/Never
3. When did you last [specific action]? [Time ranges]
4. Which of the following have you used? [Product list, select all]
**Disqualifiers** (screen out):
- Competitors (unless research is competitive analysis)
- Never used category (for product-specific research)
- Outside target demographic
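A small sketch (Python) of applying these disqualifiers to screener responses; field names and the competitor rule are illustrative assumptions:
```python
# Qualify or screen out a single screener response.
def qualifies(response, competitor_research=False):
    works_at_competitor = response.get("employer_type") == "competitor"
    never_used_category = response.get("category_usage") == "never"
    in_target_demo = response.get("in_target_demo", False)

    if works_at_competitor and not competitor_research:
        return False
    if never_used_category:
        return False
    return in_target_demo

print(qualifies({"employer_type": "other", "category_usage": "weekly",
                 "in_target_demo": True}))  # -> True
```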
---
## Analysis Templates
**For Interviews: Thematic Coding**
1. **Transcribe**: Convert recordings to text (automated tool or manual)
2. **Initial coding**: Read transcripts, highlight key quotes, note themes
3. **Affinity mapping**: Group similar quotes/observations
4. **Theme identification**: Name each cluster (e.g., "Onboarding confusion", "Pricing concerns")
5. **Frequency counting**: How many participants mentioned each theme?
6. **Quote extraction**: Pull representative quotes for each theme
**Output format**:
```
Theme: [Name]
Frequency: X/Y participants
Representative quotes:
- "Quote 1" (P3)
- "Quote 2" (P7)
Insight: [What this means]
Recommendation: [What to do]
```
**For Surveys: Statistical Analysis**
1. **Data cleaning**: Remove incomplete responses, check for quality
2. **Descriptive stats**: Mean, median, mode, distribution for scaled questions
3. **Cross-tabulation**: Compare segments (e.g., users vs non-users)
4. **Statistical significance**: Chi-square (categorical) or t-test (continuous)
5. **Open-end coding**: Categorize open-ended responses, count frequencies
6. **Visualization**: Charts for key findings (bar charts, distribution plots)
**Key metrics**:
- CSAT (Customer Satisfaction): Average rating (1-5 scale)
- NPS (Net Promoter Score): % Promoters (9-10) minus % Detractors (0-6)
- Feature importance vs satisfaction: 2x2 matrix (importance on Y, satisfaction on X)
- Sample size check: n ≥ 30 per segment for statistical power
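A minimal sketch (Python) of the CSAT and NPS calculations above, with made-up responses:
```python
# NPS uses the standard 0-10 "likelihood to recommend" scale;
# CSAT here is a simple mean of 1-5 satisfaction ratings.
nps_responses = [10, 9, 8, 7, 10, 6, 9, 3, 10, 8]
csat_responses = [4, 5, 3, 4, 5, 2, 4, 4]

promoters = sum(1 for r in nps_responses if r >= 9)
detractors = sum(1 for r in nps_responses if r <= 6)
nps = 100 * (promoters - detractors) / len(nps_responses)

csat = sum(csat_responses) / len(csat_responses)

print(f"NPS: {nps:.0f}")        # % promoters minus % detractors
print(f"CSAT: {csat:.2f} / 5")
```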
---
## Insights Document Template
```markdown
# Research Insights: [Study Name]
## Executive Summary
[2-3 sentences: key findings, decision recommendation]
## Research Objective
**What we wanted to learn**: [Objective]
**Key questions**: [Questions]
## Methodology
- **Method**: [Interviews/Survey/Mixed]
- **Participants**: [N, demographics]
- **Dates**: [When conducted]
## Key Findings
### Finding 1: [Theme Name]
**Evidence**: X/Y participants mentioned [pattern]
**Quotes**:
- "Quote 1" (P3)
- "Quote 2" (P7)
**Insight**: [What this means]
### Finding 2: [Theme Name]
[Same structure]
## Surprises & Contradictions
[What didn't match expectations? Outliers?]
## Recommendations
1. [Action 1—specific, based on findings]
2. [Action 2]
3. [Action 3]
## Confidence & Limitations
- Confidence level: [High/Medium/Low] based on [sample size, consistency, etc.]
- Limitations: [Sampling bias? Small sample? Anything that limits generalization?]
## Next Steps
- [Follow-up research needed?]
- [Decision to be made?]
```