Initial commit

258  skills/one-pager-prd/SKILL.md  Normal file

@@ -0,0 +1,258 @@
---
name: one-pager-prd
description: Use when proposing new features/products, documenting product requirements, creating concise specs for stakeholder alignment, pitching initiatives, scoping projects before detailed design, capturing user stories and success metrics, or when user mentions one-pager, PRD, product spec, feature proposal, product requirements, or brief.
---

# One-Pager PRD

## Table of Contents

- [Purpose](#purpose)
- [When to Use](#when-to-use)
- [What Is It](#what-is-it)
- [Workflow](#workflow)
- [Common Patterns](#common-patterns)
- [Guardrails](#guardrails)
- [Quick Reference](#quick-reference)

## Purpose

Create concise, decision-ready product specifications that align stakeholders on problem, solution, users, success metrics, and constraints—enabling fast approval and reducing back-and-forth.

## When to Use

**Early-Stage Product Definition:**
- Proposing new feature or product
- Need stakeholder alignment before building
- Scoping initiative for resource allocation
- Pitching idea to leadership for approval

**Documentation Needs:**
- Capturing requirements for engineering handoff
- Documenting decisions for future reference
- Creating spec for cross-functional team (PM, design, eng)
- Recording what's in/out of scope

**Communication:**
- Getting buy-in from multiple stakeholders
- Explaining complex feature simply
- Aligning sales/marketing on upcoming releases
- Onboarding new team members to initiative

**When NOT to Use:**
- Detailed technical design docs (use ADRs instead)
- Comprehensive product strategy (too broad for a one-pager)
- User research synthesis (different format)
- Post-launch retrospectives (use postmortem skill)

## What Is It

A one-pager PRD is a 1-2 page product specification covering:

**Core Elements:**
1. **Problem:** What user pain are we solving? Why now?
2. **Solution:** What are we building? (High-level approach)
3. **Users:** Who benefits? Personas, segments, use cases
4. **Goals & Metrics:** How do we measure success?
5. **Scope:** What's in/out? Key user flows
6. **Constraints:** Technical, business, timeline limits
7. **Open Questions:** Unknowns to resolve

**Format:**
- **One-Pager:** 1 page, bullet points, for quick approval
- **PRD (Product Requirements Document):** 1-2 pages, more detail, for execution

**Example One-Pager:**

**Feature:** Bulk Edit for Data Tables

**Problem:** Users managing 1000+ rows waste hours editing one-by-one. Competitors have bulk edit. Churn risk for power users.

**Solution:** Select multiple rows → Edit panel → Apply changes to all. Support: text, dropdowns, dates, numbers.

**Users:** Data analysts (15% of users, 60% of usage), operations teams.

**Goals:** Reduce time-to-edit by 80%. Increase retention of power users by 10%. Launch Q2.

**In Scope:** Select all, filter+select, edit common fields (5 field types).
**Out of Scope:** Undo/redo (v2), bulk delete (security concern).

**Metrics:** Time per edit (baseline: 5 min/row), adoption rate (target: 40% of power users in month 1).

**Constraints:** Must work with 10K rows without performance degradation.

**Open Questions:** Validation—fail entire batch or skip invalid rows?

## Workflow

Copy this checklist and track your progress:

```
One-Pager PRD Progress:
- [ ] Step 1: Gather context
- [ ] Step 2: Choose format
- [ ] Step 3: Draft one-pager
- [ ] Step 4: Validate quality
- [ ] Step 5: Review and iterate
```

**Step 1: Gather context**

Identify the problem (user pain, data supporting it), proposed solution (high-level approach), target users (personas, segments), success criteria (goals, metrics), and constraints (technical, business, timeline). See [Common Patterns](#common-patterns) for typical problem types.

**Step 2: Choose format**

For simple features needing quick approval → use the [resources/template.md](resources/template.md) one-pager format (1 page, bullets). For complex features/products requiring detailed requirements → use the [resources/template.md](resources/template.md) full PRD format (1-2 pages). For writing guidance and structure → study [resources/methodology.md](resources/methodology.md) for problem framing, metric definition, and scope techniques.

**Step 3: Draft one-pager**

Create `one-pager-prd.md` with: problem statement (user pain + why now), solution overview (what we're building), user personas and use cases, goals with quantified metrics, in-scope flows and out-of-scope items, constraints and assumptions, open questions to resolve. Keep concise—1 page for one-pager, 1-2 for PRD.
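
The sections listed above give `one-pager-prd.md` a predictable shape. A minimal illustrative skeleton (the canonical headings live in `resources/template.md`, so treat these placeholders as a sketch, not the template itself):

```
# One-Pager: <Feature Name>

**Problem:** <user pain + why now, with evidence>
**Solution:** <what we're building, high level>
**Users:** <personas, segments, use cases>
**Goals & Metrics:** <baseline → target, with timeline>
**In Scope:** <must-have flows>
**Out of Scope:** <deferred items, with reasons>
**Constraints:** <technical, business, timeline>
**Open Questions:** <unknowns, owners, deadlines>
```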

**Step 4: Validate quality**

Self-assess using [resources/evaluators/rubric_one_pager_prd.json](resources/evaluators/rubric_one_pager_prd.json). Check: problem is specific and user-focused, solution is clear without being overly detailed, metrics are measurable and have targets, scope is realistic and boundaries clear, constraints acknowledged, open questions identified. Minimum standard: average score ≥ 3.5.
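
The Step 4 check can be scripted against the rubric. A minimal sketch, assuming the rubric JSON's `criteria` entries carry `name` and `weight` fields (as the evaluator file does) and that scores are self-assessed 1-5 values keyed by criterion name:

```python
import json

def weighted_average(rubric: dict, scores: dict) -> float:
    """Weighted mean of self-assessed scores across rubric criteria."""
    total = weight_sum = 0.0
    for criterion in rubric["criteria"]:
        weight = criterion.get("weight", 1.0)  # default when no weight is given
        total += weight * scores[criterion["name"]]
        weight_sum += weight
    return total / weight_sum

# Example: load the rubric and apply the Step 4 bar.
# rubric = json.load(open("resources/evaluators/rubric_one_pager_prd.json"))
# assert weighted_average(rubric, my_scores) >= 3.5
```

A draft passes this step when the result is at least 3.5.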

**Step 5: Review and iterate**

Share with stakeholders (PM, design, engineering, business). Gather feedback on problem framing, solution approach, scope boundaries, and success metrics. Iterate based on input. Get explicit sign-off before moving to detailed design/development.

## Common Patterns

### By Problem Type

**User Pain (Most Common):**
- **Pattern:** Users can't do X, causing friction Y
- **Example:** "Users can't search by multiple filters, forcing 10+ clicks to find items"
- **Validation:** User interviews, support tickets, analytics showing workarounds

**Competitive Gap:**
- **Pattern:** Competitor has feature, we don't, causing churn
- **Example:** "Competitors offer bulk actions. 20% of churned users cited this"
- **Validation:** Churn analysis, competitive analysis, win/loss interviews

**Strategic Opportunity:**
- **Pattern:** Market shift creates opening for new capability
- **Example:** "Remote work surge → need async collaboration features"
- **Validation:** Market research, early customer interest, trend analysis

**Technical Debt/Scalability:**
- **Pattern:** Current system doesn't scale, blocking growth
- **Example:** "Database queries time out above 100K users. Growth blocked."
- **Validation:** Performance metrics, system capacity analysis

### By Solution Complexity

**Simple Feature (Weeks):**
- Clear requirements, minor scope
- Example: Add export to CSV button
- **One-Pager:** Problem, solution (1 sentence), metrics, constraints

**Medium Feature (Months):**
- Multiple user flows, some complexity
- Example: Commenting system with notifications
- **PRD:** Detailed flows, edge cases, phasing (MVP vs v2)

**Large Initiative (Quarters):**
- Cross-functional, strategic
- Example: Mobile app launch
- **Multi-Page PRD:** Break into phases, each with its own one-pager

### By User Segment

**B2B SaaS:**
- Emphasize: ROI, admin controls, security, integrations
- Metrics: Adoption rate, time-to-value, NPS

**B2C Consumer:**
- Emphasize: Delight, ease of use, viral potential
- Metrics: Daily active users, retention curves, referrals

**Enterprise:**
- Emphasize: Compliance, customization, support
- Metrics: Deal size impact, deployment success, enterprise NPS

**Internal Tools:**
- Emphasize: Efficiency gains, adoption by teams
- Metrics: Time saved, task completion rate, employee satisfaction

## Guardrails

**Problem:**
- **Specific, not vague:** ❌ "Users want better search" → ✓ "Users abandon search after 3 failed queries (30% of sessions)"
- **User-focused:** Focus on user pain, not internal goals ("We want to increase engagement" is a goal, not a problem)
- **Validated:** Cite data (analytics, interviews, support tickets), not assumptions

**Solution:**
- **High-level, not over-specified:** Describe what, not how. Leave design/engineering latitude.
- **Falsifiable:** Clear enough that stakeholders can disagree or suggest alternatives
- **Scope-appropriate:** Don't design UI in a one-pager. "Filter panel," not "Dropdown menu with checkbox multi-select"

**Metrics:**
- **Measurable:** Must be quantifiable. ❌ "Improve UX" → ✓ "Reduce time-to-complete from 5 min to 2 min"
- **Leading + Lagging:** Include both (leading: adoption rate; lagging: revenue impact)
- **Baselines + Targets:** Current state + goal. "Increase NPS from 40 to 55"

**Scope:**
- **Crisp boundaries:** Explicitly state what's in/out
- **MVP vs Future:** Separate must-haves from nice-to-haves
- **User flows:** Describe the key happy path and edge cases

**Constraints:**
- **Realistic:** Acknowledge tech debt, dependencies, timeline limits
- **Trade-offs:** Be explicit about what you're sacrificing (speed vs quality, features vs simplicity)

**Red Flags:**
- Solution looking for a problem (built because the tech is cool, not for a user need)
- Vague metrics (no baselines, no targets, no timeframes)
- Scope creep (everything is "must-have")
- No constraints mentioned (unrealistic optimism)
- No open questions (haven't thought deeply)

## Quick Reference

**Resources:**
- `resources/template.md` - One-pager and PRD templates with section guidance
- `resources/methodology.md` - Problem framing techniques, metric trees, scope prioritization, writing clarity
- `resources/evaluators/rubric_one_pager_prd.json` - Quality criteria

**Output:** `one-pager-prd.md` with problem, solution, users, goals/metrics, scope, constraints, open questions

**Success Criteria:**
- Problem is specific with validation (data/research)
- Solution is a clear high-level approach (what, not how)
- Metrics are measurable with baselines + targets
- Scope has crisp in/out boundaries
- Constraints acknowledged
- Open questions identified
- Stakeholder sign-off obtained
- Score ≥ 3.5 on rubric

**Quick Decisions:**
- **Simple feature, quick approval?** → One-pager (1 page, bullets)
- **Complex feature, detailed handoff?** → Full PRD (1-2 pages)
- **Multiple phases?** → Separate one-pager per phase
- **Strategic initiative?** → Start with a one-pager, expand to multi-page if needed

**Common Mistakes:**
1. Problem too vague ("improve experience")
2. Solution too detailed (specifying UI components)
3. No metrics, or unmeasurable metrics
4. Scope creep (no "out of scope" section)
5. Ignoring constraints (unrealistic timelines)
6. No validation (assumptions, not data)
7. Writing for yourself, not stakeholders

**Key Insight:**
Brevity forces clarity. If you can't explain it in 1-2 pages, you haven't thought it through. A one-pager is as much a thinking tool as a communication tool.

**Format Tips:**
- Use bullets, not paragraphs (scannable)
- Lead with the problem (earn the right to propose a solution)
- Quantify everything possible (numbers > adjectives)
- Make scope boundaries explicit (prevent misunderstandings)
- Surface open questions (show you've thought deeply)

**Stakeholder Adaptation:**
- **For Execs:** Emphasize business impact, metrics, resource needs
- **For Engineering:** Technical constraints, dependencies, phasing
- **For Design:** User flows, personas, success criteria
- **For Sales/Marketing:** Competitive positioning, customer value, launch timing

@@ -0,0 +1,292 @@
{
  "name": "One-Pager PRD Evaluator",
  "description": "Evaluate quality of one-pagers and PRDs (Product Requirements Documents)—assessing problem clarity, solution feasibility, metric quality, scope definition, and stakeholder alignment.",
  "version": "1.0.0",
  "criteria": [
    {
      "name": "Problem Clarity & Validation",
      "description": "Evaluates whether problem is specific, user-focused, validated with evidence, and quantified",
      "weight": 1.3,
      "scale": {
        "1": {
          "label": "Vague or missing problem",
          "description": "Problem not stated, or too vague ('improve UX'). No evidence. Not user-focused (describes internal goal not user pain)."
        },
        "2": {
          "label": "Generic problem statement",
          "description": "Problem stated but generic. Limited evidence (assumptions not data). Impact not quantified. Unclear which user segment affected."
        },
        "3": {
          "label": "Clear problem with some validation",
          "description": "Problem is specific with user segment identified. Some evidence provided (1-2 data points). Impact partially quantified. 'Why now' mentioned."
        },
        "4": {
          "label": "Well-validated problem",
          "description": "Problem is specific and user-focused. Multiple evidence sources (user interviews, analytics, support tickets). Impact quantified (how many users, frequency, severity). Clear 'why now' rationale. User pain articulated with examples."
        },
        "5": {
          "label": "Exceptional problem framing",
          "description": "Problem framed using Jobs-to-be-Done or similar framework. Comprehensive validation (5+ user interviews, analytics with baselines, competitive analysis). Impact deeply quantified (revenue loss, churn risk, time wasted). Root cause analysis (5 Whys). Clear user quotes and examples. Validated as top 3 pain point for segment."
        }
      }
    },
    {
      "name": "Solution Clarity",
      "description": "Evaluates whether solution is clearly described without over-specifying, and is appropriate for problem",
      "weight": 1.2,
      "scale": {
        "1": {
          "label": "No solution or unclear",
          "description": "Solution not described, or so vague it's meaningless. Can't tell what's being built."
        },
        "2": {
          "label": "Vague solution",
          "description": "Solution described at high level but lacks detail. Hard to visualize what user would experience. Key flows not described."
        },
        "3": {
          "label": "Clear high-level solution",
          "description": "Solution is understandable. High-level approach clear. Key user flows described (happy path). Not over-specified (leaves room for design/eng). Appropriate for problem scale."
        },
        "4": {
          "label": "Well-articulated solution",
          "description": "Solution clearly explained with user perspective ('user does X, system does Y'). Multiple flows covered (happy path, alternatives, errors). Right level of detail (what not how). Mockups or examples provided. Edge cases considered."
        },
        "5": {
          "label": "Exceptional solution clarity",
          "description": "Solution crystal clear with concrete examples. User flows comprehensively documented. Alternatives considered with pros/cons. Edge cases and error handling explicit. Visual aids (mockups, flow diagrams). Non-functional requirements stated (performance, security, accessibility). Phasing plan for complex solutions."
        }
      }
    },
    {
      "name": "User Understanding",
      "description": "Evaluates whether users are well-defined with personas, use cases, and segment sizing",
      "weight": 1.1,
      "scale": {
        "1": {
          "label": "No user definition",
          "description": "Users not identified or just 'users' (generic). No personas, use cases, or segmentation."
        },
        "2": {
          "label": "Generic users",
          "description": "Users mentioned but vaguely defined. Personas are stereotypes without depth. Use cases are hypothetical not validated."
        },
        "3": {
          "label": "Users defined with basic segmentation",
          "description": "Primary user persona identified with basic details. 1-2 use cases described. Some segmentation (% of users, frequency). User goals stated."
        },
        "4": {
          "label": "Well-defined user personas",
          "description": "Primary and secondary personas with details (role, goals, pain points, technical proficiency). 2-3 realistic use cases with context. Segmentation data (% of total users, usage frequency, value to business). User quotes or examples."
        },
        "5": {
          "label": "Deep user insights",
          "description": "Rich personas based on research (interviews, analytics). Personas include motivations, constraints, current workarounds. 3-5 validated use cases with triggers and expected outcomes. Segmentation with sizing (TAM), engagement metrics, and business value per segment. Jobs-to-be-done framing. Multiple user journeys mapped."
        }
      }
    },
    {
      "name": "Metrics & Success Criteria",
      "description": "Evaluates whether metrics are measurable, have baselines + targets, and include leading indicators",
      "weight": 1.3,
      "scale": {
        "1": {
          "label": "No metrics or unmeasurable",
          "description": "No success metrics, or metrics are vague ('improve UX', 'increase engagement'). No way to measure objectively."
        },
        "2": {
          "label": "Vague metrics",
          "description": "Metrics stated but not measurable. Example: 'users should be happier.' No baselines or targets. No timeline."
        },
        "3": {
          "label": "Measurable metrics with targets",
          "description": "1-2 metrics that are measurable (quantifiable). Targets defined. Timeline stated. Baseline mentioned. SMART-ish (Specific, Measurable, Achievable, Relevant, Time-bound)."
        },
        "4": {
          "label": "Well-defined success criteria",
          "description": "Primary and secondary metrics clearly defined. Baselines with current state. Specific targets with timeline. Measurement approach specified. Mix of leading (early signals) and lagging (final outcomes) indicators. Metrics tied to business goals."
        },
        "5": {
          "label": "Comprehensive metrics framework",
          "description": "Metrics tree connecting feature metrics to North Star. Primary metric (1) with 2-3 secondary metrics. All SMART (Specific, Measurable, Achievable, Relevant, Time-bound). Clear baselines from analytics. Ambitious but realistic targets. Leading indicators for early feedback (week 1, month 1). Lagging indicators for outcomes (quarter 1). Measurement plan with tools/dashboards. Success criteria for launch vs post-launch."
        }
      }
    },
    {
      "name": "Scope Definition",
      "description": "Evaluates whether in-scope and out-of-scope boundaries are crisp and MVP is realistic",
      "weight": 1.2,
      "scale": {
        "1": {
          "label": "Scope undefined or everything included",
          "description": "No scope definition, or everything is 'must-have.' No prioritization. Unrealistic scope."
        },
        "2": {
          "label": "Vague scope",
          "description": "Some scope defined but boundaries unclear. Hard to tell what's in vs out. MVP not differentiated from nice-to-haves."
        },
        "3": {
          "label": "Clear in/out scope",
          "description": "In-scope and out-of-scope items listed. MVP identified. Some prioritization (must-haves vs should-haves). Scope seems feasible."
        },
        "4": {
          "label": "Well-prioritized scope",
          "description": "Crisp scope boundaries with explicit in/out lists. MVP clearly defined with rationale. MoSCoW (Must/Should/Could/Won't) or similar prioritization. Out-of-scope items have reasoning (why not now). User flows for MVP documented. Phasing plan (v1, v2) if complex."
        },
        "5": {
          "label": "Rigorous scope management",
          "description": "Comprehensive scope with MoSCoW + RICE scoring or similar prioritization framework. MVP is shippable and testable (smallest thing that delivers value). V2/v3 roadmap. Each requirement has justification and success criteria. Non-functional requirements explicit (performance, security, scalability). Edge cases addressed with priority (must-handle vs can-defer). Scope validated against timeline and resources."
        }
      }
    },
    {
      "name": "Constraints & Risks",
      "description": "Evaluates whether technical, business, and timeline constraints are acknowledged and risks identified",
      "weight": 1.0,
      "scale": {
        "1": {
          "label": "No constraints or risks mentioned",
          "description": "Constraints and risks ignored. Unrealistic optimism. No mention of dependencies, technical debt, or limitations."
        },
        "2": {
          "label": "Constraints mentioned briefly",
          "description": "Some constraints noted but not detailed. Risks vaguely acknowledged. No mitigation plans."
        },
        "3": {
          "label": "Key constraints identified",
          "description": "Technical constraints mentioned (dependencies, performance limits). Business constraints noted (budget, timeline). 2-3 major risks identified. Some mitigation ideas."
        },
        "4": {
          "label": "Comprehensive constraints & risks",
          "description": "Technical constraints detailed (dependencies, technical debt, platform limits). Business constraints clear (budget, resources, timeline). Assumptions stated explicitly. 3-5 risks with impact assessment. Mitigation strategies for each risk. Dependencies mapped with owners and timelines."
        },
        "5": {
          "label": "Rigorous risk management",
          "description": "All constraint types covered: technical (architecture, performance, security), business (budget, headcount, timeline), legal/compliance. Assumptions documented and validated where possible. Risk register with probability × impact scoring. Mitigation and contingency plans. Dependencies with critical path analysis. Trade-offs explicitly acknowledged (what we're sacrificing for what). Pre-mortem conducted (what could go wrong)."
        }
      }
    },
    {
      "name": "Open Questions & Decision-Making",
      "description": "Evaluates whether unresolved questions are surfaced with owners and deadlines",
      "weight": 1.0,
      "scale": {
        "1": {
          "label": "No open questions",
          "description": "Open questions ignored, or author pretends everything is resolved (dangerous). False confidence."
        },
        "2": {
          "label": "Vague open questions",
          "description": "Questions mentioned but not actionable. No owners or timelines. Example: 'Need to figure out performance' without specifics."
        },
        "3": {
          "label": "Questions identified",
          "description": "2-3 open questions listed. Reasonably specific. Some have owners or decision timelines."
        },
        "4": {
          "label": "Well-managed open questions",
          "description": "3-5 open questions with clear framing. Options for each question stated. Decision owner assigned. Deadline for resolution. Prioritized (blocking vs non-blocking). Context for why question matters."
        },
        "5": {
          "label": "Rigorous decision framework",
          "description": "Open questions comprehensively documented with: specific question, why it matters, options with pros/cons, decision criteria, decision owner (by name/role), deadline aligned to project milestones, blocking vs non-blocking flag. Decisions already made are documented with rationale. Decision log tracks evolution. Shows deep thinking and awareness of unknowns."
        }
      }
    },
    {
      "name": "Conciseness & Clarity",
      "description": "Evaluates whether document is appropriate length (1-2 pages), scannable, and jargon-free",
      "weight": 1.1,
      "scale": {
        "1": {
          "label": "Too long or incomprehensible",
          "description": "Document >5 pages (one-pager) or >10 pages (PRD). Wall of text. Jargon-heavy. Inaccessible."
        },
        "2": {
          "label": "Verbose or hard to scan",
          "description": "Longer than needed. Long paragraphs without structure. Hard to skim. Some jargon not explained."
        },
        "3": {
          "label": "Appropriate length and mostly clear",
          "description": "One-pager is ~1 page, PRD is 1-2 pages. Uses bullets and headers. Mostly scannable. Some jargon but generally accessible. Key points findable."
        },
        "4": {
          "label": "Concise and highly scannable",
          "description": "Appropriate length (1 page one-pager, 1-2 page PRD). Excellent use of formatting (bullets, headers, tables). Pyramid principle (lead with conclusion). Active voice, concrete language. Jargon explained or avoided. Examples liberally used. Can grasp main points in 2-3 min skim."
        },
        "5": {
          "label": "Exemplary clarity",
          "description": "Maximum information density with minimum words. Every sentence adds value. Perfect formatting for scannability (bullets, bold, tables, visual aids). Pyramid structure throughout (conclusion → support → evidence). No jargon or all explained. Abundant concrete examples. Different stakeholders can extract what they need quickly. Executive summary stands alone. Passes 'grandmother test' (non-expert can understand)."
        }
      }
    }
  ],
  "guidance": {
    "by_document_type": {
      "one_pager": {
        "focus": "Prioritize problem clarity (1.5x) and conciseness (1.3x). One-pagers are for quick approval.",
        "typical_scores": "Problem and conciseness should be 4+. Metrics can be 3+ (less detail OK). Keep to 1 page.",
        "red_flags": "Over 2 pages, over-specified solution, missing problem validation"
      },
      "full_prd": {
        "focus": "Prioritize solution clarity and scope definition. PRDs are for detailed execution.",
        "typical_scores": "Solution, scope, and users should be 4+. Metrics 4+. Can be 1-2 pages.",
        "red_flags": "Vague user flows, scope creep (everything must-have), no edge cases"
      },
      "technical_spec": {
        "focus": "Prioritize constraints & risks and solution clarity. Technical audience.",
        "typical_scores": "Constraints 4+, solution 4+. Can have more technical detail. Users can be 3+.",
        "red_flags": "No performance requirements, missing dependencies, unrealistic timeline"
      },
      "strategic_initiative": {
        "focus": "Prioritize problem clarity and metrics (business impact). Executive audience.",
        "typical_scores": "Problem 4+, metrics 4+ (tie to business outcomes). Solution can be 3+ (high-level OK).",
        "red_flags": "No business metrics, missing 'why now', solution too detailed for this stage"
      }
    },
    "by_stage": {
      "ideation": {
        "expectations": "Problem clarity high (4+), solution can be sketch (3+), metrics directional (3+), scope loose (3+). Purpose: Get alignment to explore further.",
        "next_steps": "User research, technical spike, design exploration"
      },
      "planning": {
        "expectations": "All criteria 3.5+. Problem validated (4+), solution clear (4+), metrics defined (4+), scope prioritized (4+). Purpose: Detailed planning and resource allocation.",
        "next_steps": "Engineering kickoff, design mockups, project timeline"
      },
      "execution": {
        "expectations": "All criteria 4+. Living document updated as decisions made. Open questions resolved. Purpose: Execution guide for team.",
        "next_steps": "Development, testing, iteration based on learnings"
      },
      "launch": {
        "expectations": "All sections complete (4+). Go-to-market plan, success criteria, monitoring dashboard. Purpose: Launch readiness and post-launch tracking.",
        "next_steps": "Launch, monitor metrics, post-launch review"
      }
    }
  },
  "common_failure_modes": {
    "solution_looking_for_problem": "Solution defined before problem validated. Fix: Start with user research, validate problem first.",
    "vague_problem": "Problem too generic ('improve UX'). Fix: Quantify impact, identify specific user segment, provide evidence.",
    "over_specified_solution": "PRD specifies UI details, button colors, exact algorithms. Fix: Describe what not how. Leave room for design/eng creativity.",
    "unmeasurable_metrics": "Metrics like 'user satisfaction' without definition. Fix: Use SMART framework. Define how you'll measure.",
    "scope_creep": "Everything is must-have for MVP. Fix: Use MoSCoW prioritization. Be ruthless about MVP (minimum VIABLE product).",
    "no_validation": "Problem and solution are assumptions not validated with users/data. Fix: Interview users, analyze data, check competitors.",
    "missing_constraints": "No mention of technical debt, dependencies, timeline limits. Fix: Acknowledge reality. List constraints explicitly.",
    "false_confidence": "No open questions (dangerous). Fix: Surface unknowns. Show you've thought deeply."
  },
  "excellence_indicators": [
    "Problem framed using Jobs-to-be-Done with user quotes and quantified impact",
    "Solution includes user flows, edge cases, and non-functional requirements",
    "Users deeply understood with personas, use cases, and segment sizing",
    "Metrics are SMART with baselines, targets, leading/lagging indicators, and measurement plan",
    "Scope rigorously prioritized (MoSCoW/RICE) with clear MVP and phasing",
    "Constraints and risks comprehensively documented with mitigation plans",
    "Open questions managed with options, owners, deadlines, and blocking status",
    "Document is concise (1-2 pages), highly scannable, with pyramid structure and examples",
    "Stakeholder review obtained with feedback incorporated",
    "Living document updated as decisions made during execution"
  ],
  "evaluation_notes": {
    "scoring": "Calculate weighted average across all criteria. Minimum passing score: 3.0 (basic quality). Production-ready target: 3.5+. Excellence threshold: 4.2+. For one-pagers (quick approval), weight problem clarity and conciseness higher. For full PRDs (execution), weight solution clarity and scope definition higher.",
    "context": "Adjust expectations by stage. Ideation stage can have looser scope (3+). Planning stage needs all criteria 3.5+. Execution stage needs 4+ across the board. Different audiences need different emphasis: engineering wants technical constraints (4+), business wants metrics and ROI (4+), design wants user flows and personas (4+).",
    "iteration": "Low scores indicate specific improvement areas. Priority order: 1) Fix vague problem (highest ROI—clarifies entire direction), 2) Validate with evidence (de-risks assumptions), 3) Operationalize metrics (enables measurement), 4) Prioritize scope (prevents scope creep), 5) Surface constraints and open questions (manages risk). Re-score after each iteration."
  }
}
|
||||
446
skills/one-pager-prd/resources/methodology.md
Normal file
@@ -0,0 +1,446 @@
# One-Pager PRD Methodology

## Table of Contents
1. [Problem Framing Techniques](#1-problem-framing-techniques)
2. [Metric Definition & Trees](#2-metric-definition--trees)
3. [Scope Prioritization Methods](#3-scope-prioritization-methods)
4. [Writing for Clarity](#4-writing-for-clarity)
5. [Stakeholder Management](#5-stakeholder-management)
6. [User Story Mapping](#6-user-story-mapping)
7. [PRD Review & Iteration](#7-prd-review--iteration)

---

## 1. Problem Framing Techniques

### Jobs-to-be-Done Framework

**Template:** "When I [situation], I want to [motivation], so I can [outcome]."

**Example:** ❌ "Users want better search" → ✓ "When looking for a document among thousands, I want to filter by type/date/author, so I can find it in <30 seconds"

**Application:** Interview users to find triggers, understand workarounds, and quantify the pain.

### Problem Statement Formula

**Template:** "[User segment] struggles to [task] because [root cause], resulting in [quantified impact]. Evidence: [data]."

**Example:** "Power users (15% of users, 60% of usage) struggle to edit multiple rows because the product supports only single-row editing, resulting in 5h/week of manual work and 12% higher churn. Evidence: churn interviews (8/10 cited it), analytics (300 edits/user/week)."

### 5 Whys (Root Cause Analysis)

Ask "why" five times to get from symptom to root cause.

**Example:** Users complain search is slow → Why? Queries take 3s → Why? The DB is not indexed → Why? The schema was not designed for search → Why? The original use case was CRUD → Why? The product pivoted.

**Root Cause:** The product pivot created tech debt. The real problem: re-architect for search-first, not just optimize queries.

### Validation Checklist
- [ ] Talked to 5+ users
- [ ] Quantified frequency and severity
- [ ] Identified workarounds
- [ ] Confirmed it's a top-3 pain point
- [ ] Checked whether competitors solve this
- [ ] Estimated TAM

---
## 2. Metric Definition & Trees

### SMART Metrics

Specific, Measurable, Achievable, Relevant, Time-bound.

❌ "Improve engagement" → ✓ "Increase WAU from 10K to 15K (+50%) in Q2"
❌ "Make search faster" → ✓ "Reduce p95 latency from 3s to <1s by end of Q1"

### Leading vs Lagging Metrics

**Lagging:** Outcomes (revenue, retention). Slow to change; the final measure of success.
**Leading:** Early signals (adoption, usage). Fast to change; enable course corrections.

**Example:** Lagging = revenue. Leading = adoption rate, usage frequency.

### Metric Tree Decomposition

Break the North Star down into actionable sub-metrics.

**Example:** Revenue = Users × Conversion × Price
- Users = Signups + Reactivated
- Conversion = Activate × Value × Paywall × Pay Rate

**Application:** Pick 2-3 metrics from the tree that you can directly influence.
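The decomposition above can be sketched as a pair of functions. All the rates and counts below are illustrative assumptions, not real product data.

```python
def conversion(activate: float, value: float, paywall: float, pay_rate: float) -> float:
    """Conversion decomposes into the product of funnel-step rates."""
    return activate * value * paywall * pay_rate

def revenue(users: int, conv: float, price: float) -> float:
    """North Star: Revenue = Users x Conversion x Price."""
    return users * conv * price

# Hypothetical numbers: 60% activate, 50% reach value, 80% hit the
# paywall, 25% of those pay; 10K users at a $20 price point.
conv = conversion(0.60, 0.50, 0.80, 0.25)   # 0.06
rev = revenue(10_000, conv, 20.0)           # ~12,000
```

Moving any one leaf (say, pay rate from 25% to 30%) propagates straight up the tree, which is what makes leaf metrics the actionable ones.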
### Metric Anti-Patterns

❌ **Vanity:** Signups without retention, page views without engagement
❌ **Lagging-Only:** Only revenue (too slow). Add adoption/engagement.
❌ **Too Many:** 15 metrics. Pick 1 primary + 2-3 secondary.
❌ **No Baselines:** "Increase engagement" vs "From 2.5 to 4 sessions/week"

---
## 3. Scope Prioritization Methods

### MoSCoW Method

**Must-Have:** The feature doesn't work without this
**Should-Have:** Important, but the feature works without it (next iteration)
**Could-Have:** Nice to have if time permits
**Won't-Have:** Explicitly out of scope

**Example: Bulk Edit**
- **Must:** Select multiple rows, edit a common field, apply
- **Should:** Undo/redo, validation error handling
- **Could:** Bulk delete, custom formulas
- **Won't:** Conditional formatting (separate feature)

### Kano Model

**Basic Needs:** If missing, users are dissatisfied. If present, users are neutral.
- Example: Search must return results

**Performance Needs:** More is better (linear satisfaction)
- Example: Faster search is better

**Delight Needs:** Unexpected features that wow users
- Example: Search suggests related items

**Application:** Must-haves = basic needs. Delights can wait for v2.

### RICE Scoring

**Reach:** How many users per quarter?
**Impact:** How much does it help each user? (0.25 = minimal, 3 = massive)
**Confidence:** How sure are you? (0% to 100%)
**Effort:** Person-months

**Score = (Reach × Impact × Confidence) / Effort**

**Example:**
- Feature A: (1000 users × 3 impact × 80% confidence) / 2 months = 1200
- Feature B: (5000 users × 0.5 impact × 100%) / 1 month = 2500
- **Prioritize Feature B**
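The formula is simple enough to run over a whole backlog. A minimal sketch, using the two hypothetical features from the example above:

```python
def rice(reach: int, impact: float, confidence: float, effort_months: float) -> float:
    """RICE score = (Reach x Impact x Confidence) / Effort."""
    return reach * impact * confidence / effort_months

# Hypothetical features from the worked example above.
features = {
    "A": rice(1000, 3.0, 0.80, 2.0),   # 1200
    "B": rice(5000, 0.5, 1.00, 1.0),   # 2500
}
winner = max(features, key=features.get)  # "B"
```

Keeping the inputs in a table like this also makes the prioritization auditable: a stakeholder can challenge the confidence estimate rather than the conclusion.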
### Value vs Effort Matrix

**2×2 Grid:**
- **High Value, Low Effort:** Quick wins (do first)
- **High Value, High Effort:** Strategic projects (plan carefully)
- **Low Value, Low Effort:** Fill-ins (do if spare capacity)
- **Low Value, High Effort:** Avoid

**Determine Scope:**
1. List all potential features/requirements
2. Plot them on the matrix
3. MVP = high value (both high and low effort)
4. v2 = medium value
5. Backlog/never = low value
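The quadrant logic above can be sketched as a small classifier. The 1-10 scoring scale and the midpoint threshold are assumptions for illustration.

```python
def quadrant(value: int, effort: int, threshold: int = 5) -> str:
    """Classify a feature on a 1-10 value/effort scale (>= threshold = high)."""
    high_value = value >= threshold
    high_effort = effort >= threshold
    if high_value and not high_effort:
        return "quick win"
    if high_value and high_effort:
        return "strategic project"
    if not high_value and not high_effort:
        return "fill-in"
    return "avoid"

# One example per quadrant.
labels = [quadrant(8, 2), quadrant(8, 8), quadrant(2, 2), quadrant(2, 8)]
```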
---

## 4. Writing for Clarity

### Pyramid Principle

**Structure:** Lead with the conclusion, then support it.

**Template:**
1. **Conclusion:** Main point (problem statement)
2. **Key Arguments:** 3-5 supporting points
3. **Evidence:** Data, examples, details

**Example:**
1. **Conclusion:** "We should build bulk edit because power users churn without it"
2. **Arguments:**
   - 12% higher churn among power users
   - Competitors all have bulk edit
   - Users hack workarounds (exports to Excel)
3. **Evidence:** Churn analysis, competitive audit, user interviews

**Application:** Start the PRD with an executive summary using the pyramid structure.

### Active Voice & Concrete Language

**Passive → Active:**
- ❌ "The feature will be implemented by engineering"
- ✓ "Engineering will implement the feature"

**Vague → Concrete:**
- ❌ "Improve search performance"
- ✓ "Reduce search latency from 3s to <1s"

**Abstract → Specific:**
- ❌ "Enhance user experience"
- ✓ "Reduce clicks from 10 to 3 for a common task"

### Avoid Jargon & Acronyms

**Bad:** "Implement CQRS with event sourcing for the BFF layer to optimize TTFB for SSR"

**Good:** "Separate read and write databases to make pages load faster (3s → 1s)"

**Rule:** Define acronyms on first use, or avoid them entirely if stakeholders are unfamiliar with them.

### Use Examples Liberally

**Before (Abstract):**
"Users need flexible filtering options"

**After (Concrete Example):**
"Users need flexible filtering. Example: A data analyst wants to see 'all invoices from Q4 2023 where amount > $10K and status = unpaid' without opening each invoice."

**Benefits:** Examples prevent misinterpretation and make requirements testable.

### Scannable Formatting

**Use:**
- **Bullets:** For lists (easier to scan than paragraphs)
- **Headers:** Break content into sections (5-7 sections max per page)
- **Bold:** For key terms (not entire sentences)
- **Tables:** For comparisons or structured data
- **Short paragraphs:** 3-5 sentences max

**Avoid:**
- Long paragraphs (>6 sentences)
- Walls of text
- Too many indentation levels (max 2)

---
## 5. Stakeholder Management

### Identifying Stakeholders

**Categories:**
- **Accountable:** PM (you) - owns the outcome
- **Approvers:** Who must say yes (eng lead, design lead, exec sponsor)
- **Contributors:** Who provide input (engineers, designers, sales, support)
- **Informed:** Who need to know (marketing, legal, customer success)

**Mapping:**
1. List everyone who touches this feature
2. Categorize by role
3. Identify decision-makers vs advisors
4. Note what each stakeholder cares about most

### Tailoring the PRD for Stakeholders

**For Engineering:**
- Emphasize: Technical constraints, dependencies, edge cases
- Include: Non-functional requirements (performance, scalability, security)
- Avoid: Over-specifying the implementation

**For Design:**
- Emphasize: User flows, personas, success criteria
- Include: Empty states, error states, edge cases
- Avoid: Specifying UI components

**For Business (Sales/Marketing/Exec):**
- Emphasize: Business impact, competitive positioning, revenue potential
- Include: Go-to-market plan, pricing implications
- Avoid: Technical details

**For Legal/Compliance:**
- Emphasize: Privacy, security, regulatory requirements
- Include: Data handling, user consent, audit trails
- Avoid: Underestimating compliance effort

### Getting Buy-In

**Technique 1: Pre-socialize**
- Share the draft with key stakeholders 1:1 before the group review
- Gather feedback and address concerns early
- Avoid surprises in the group meeting

**Technique 2: Address Objections Preemptively**
- Anticipate concerns (cost, timeline, technical risk)
- Include a "Risks & Mitigation" section
- Show you've thought through the trade-offs

**Technique 3: Present Options**
- Don't present a single solution as a fait accompli
- Offer 2-3 options with pros/cons
- Recommend one, but let stakeholders weigh in

**Technique 4: Quantify**
- Back up claims with data
- "This will save users 5 hours/week" (not "improve efficiency")
- Estimate revenue impact, churn reduction, support ticket decrease

---
## 6. User Story Mapping

### Concept
Map the user journey horizontally (steps) and features vertically (priority).

### Structure

**Backbone (Top Row):** High-level activities
**Walking Skeleton (Row 2):** MVP user flow
**Additional Features (Rows 3+):** Nice-to-haves

**Example: E-commerce Checkout**
```
Backbone:         Browse Products → Add to Cart → Checkout → Pay → Confirm
Walking Skeleton: View list → Click "Add" → Enter info → Card → Email receipt
v2:               Filters, search → Save for later → Promo codes → Wallet → Order tracking
v3:               Recommendations → Wish list → Gift wrap → BNPL → Returns
```

### Application to PRD

1. **Define the backbone:** Key user activities (happy path)
2. **Identify the MVP:** The minimum needed to complete the journey
3. **Slice vertically:** Each slice is a shippable increment
4. **Prioritize:** Top rows are must-haves; bottom rows are v2/v3

### Benefits
- Visual representation of scope
- Easy to see MVP vs future work
- Stakeholder alignment on priorities
- Clear release plan

---
## 7. PRD Review & Iteration

### Review Process

**Phase 1: Self-Review**
- Let the draft sit for 24 hours
- Re-read it with fresh eyes
- Check it against the rubric
- Run a spell check

**Phase 2: Peer Review**
- Share with a fellow PM or trusted colleague
- Ask: "Is the problem clear? Is the solution sensible? Any gaps?"
- Iterate based on feedback

**Phase 3: Stakeholder Review**
- Share with approvers (eng, design, business)
- Schedule a review meeting or gather async comments
- Focus on: Do we agree on the problem? Do we agree on the approach? Are we aligned on scope and metrics?

**Phase 4: Sign-Off**
- Get explicit approval from each approver
- Document who approved and when
- Proceed to detailed design/development

### Feedback Types

**Clarifying Questions:**
- "What do you mean by X?"
- **Response:** Clarify in the PRD (you were unclear)

**Concerns/Objections:**
- "This will take too long / cost too much / won't work because..."
- **Response:** Address in the Risks section or adjust the approach

**Alternative Proposals:**
- "What if we did Y instead?"
- **Response:** Evaluate the alternative; update the PRD if it's the better option

**Out of Scope:**
- "Can we also add feature Z?"
- **Response:** Acknowledge it; add it to "Out of Scope" or "Future Versions"

### Iterating Based on Feedback

**Don't:**
- Defend the original idea blindly
- Take feedback personally
- Ignore concerns

**Do:**
- Thank reviewers for their input
- Evaluate objectively (is their concern valid?)
- Update the PRD with new information
- Re-share the revised version with a change log

### Version Control

**Track Changes:**
- **V1.0:** Initial draft
- **V1.1:** Updated based on eng feedback (added technical constraints)
- **V1.2:** Updated based on design feedback (clarified user flows)
- **V2.0:** Approved version

**Change Log:**
- Date, version, what changed, why
- Helps stakeholders see the evolution
- Prevents confusion ("I thought we agreed on X?")

**Example:**
```markdown
## Change Log

**V1.2 (2024-01-15):**
- Updated scope: Removed bulk delete (security concern from eng)
- Added section: Performance requirements (feedback from eng)
- Clarified metric: Changed "engagement" to "% of users who use bulk edit >3 times/week"

**V1.1 (2024-01-10):**
- Added personas: Broke "power users" out into data analysts vs operations teams
- Updated flows: Added error handling for validation failures

**V1.0 (2024-01-05):**
- Initial draft
```

### When to Stop Iterating

**Signs the PRD is Ready:**
- All approvers have signed off
- Open questions are resolved or have a clear decision timeline
- Scope boundaries are clear (everyone agrees on what's in/out)
- Metrics are defined and measurable
- No blocking concerns remain

**Don't:**
- Iterate forever seeking perfection
- Delay unnecessarily
- Overspecify (leave room for design/eng creativity)

**Rule:** If you're 80% aligned and there are no blocking concerns, ship it. You can iterate during development if needed.

---
## Quick Reference: Methodology Selection

**Use Jobs-to-be-Done when:**
- Framing a new problem
- Understanding user motivation
- Validating problem-solution fit

**Use Metric Trees when:**
- Defining success criteria
- Connecting a feature to business outcomes
- Identifying leading indicators

**Use MoSCoW/RICE when:**
- Prioritizing features for the MVP
- Managing scope creep
- Communicating trade-offs

**Use the Pyramid Principle when:**
- Writing the executive summary
- Structuring arguments
- Making the PRD scannable

**Use Stakeholder Mapping when:**
- Running a complex cross-functional initiative
- You need to manage many approvers
- Tailoring the PRD for different audiences

**Use Story Mapping when:**
- Planning a multi-phase rollout
- Aligning the team on scope
- Visualizing the user journey

**Use the Review Process when:**
- Writing any PRD (always peer review)
- Shipping high-stakes features (multiple review rounds)
- Navigating contentious decisions (pre-socialize)
342
skills/one-pager-prd/resources/template.md
Normal file
@@ -0,0 +1,342 @@
# One-Pager PRD Template

## Quick Start

**One-Pager (1 page):** For simple features needing quick approval. Bullet points, high-level.
**PRD (1-2 pages):** For complex features needing detailed requirements. More flows, edge cases, phasing.

---

## Option A: One-Pager Template

### [Feature/Product Name]

**Date:** [YYYY-MM-DD]
**Author:** [Name]
**Stakeholders:** [PM, Eng Lead, Design Lead, etc.]
**Status:** [Draft / Review / Approved]

---
### Problem

**What user pain are we solving?**
- [Describe specific user problem with evidence]
- [Quantify the pain: how many users, how often, impact]
- [Why this matters: business/user value]

**Why now?**
- [Timing rationale: competitive pressure, strategic priority, market shift]

**Supporting Data:**
- [User research / Analytics / Support tickets / Competitive analysis]

---
### Solution

**What are we building?**
- [High-level approach in 1-3 sentences]
- [Key capabilities without over-specifying how]

**How it works (user perspective):**
1. [User action 1]
2. [System response 1]
3. [User action 2]
4. [Outcome]

---

### Users

**Who benefits?**
- **Primary:** [Persona name] ([% of users]) - [Use case]
- **Secondary:** [Persona name] ([% of users]) - [Use case]

**Key Use Cases:**
1. [Use case 1]: [Description]
2. [Use case 2]: [Description]

---
### Goals & Metrics

**Success Metrics:**
- **Primary:** [Metric name] - Baseline: [X], Target: [Y], Timeline: [Z]
- **Secondary:** [Metric name] - Baseline: [X], Target: [Y], Timeline: [Z]

**Leading Indicators:**
- [Early signal 1]: [Target]
- [Early signal 2]: [Target]

---

### Scope

**In Scope (MVP):**
- [Must-have 1]
- [Must-have 2]
- [Must-have 3]

**Out of Scope (Future):**
- [Nice-to-have 1] - Reason: [Why not now]
- [Nice-to-have 2] - Reason: [Why not now]

**Key User Flows:**
- **Happy Path:** [Flow description]
- **Edge Cases:** [How we handle X, Y, Z]

---
### Constraints

**Technical:**
- [Constraint 1]: [Impact]
- [Dependency 1]: [On what, when available]

**Business:**
- [Budget / Timeline / Resource limit]

**Assumptions:**
- [Key assumption 1]
- [Key assumption 2]

---

### Open Questions

1. **[Question 1]:** [Context] - Decision needed by: [Date]
2. **[Question 2]:** [Context] - Decision needed by: [Date]

---

### Next Steps

- [ ] [Stakeholder review]: [By when]
- [ ] [Design mockups]: [By when]
- [ ] [Technical spike]: [By when]
- [ ] [Final approval]: [By when]
- [ ] [Kickoff]: [Target date]

---
## Option B: Full PRD Template

### [Feature/Product Name]

**Document Info:**
- **Date:** [YYYY-MM-DD]
- **Author:** [Name]
- **Status:** [Draft / Review / Approved / In Development]
- **Version:** [1.0]
- **Last Updated:** [Date]

**Stakeholders:**
- **PM:** [Name]
- **Engineering:** [Name]
- **Design:** [Name]
- **Other:** [Legal, Compliance, Marketing, etc.]

---

## 1. Overview

### 1.1 Executive Summary
[2-3 sentence summary: Problem → Solution → Impact]

### 1.2 Problem Statement

**User Pain:**
[Detailed description of user problem]

**Impact:**
- [Quantified impact: time wasted, revenue lost, churn risk]
- [How many users affected]
- [Frequency of problem]

**Evidence:**
- [User interview quotes]
- [Analytics data]
- [Support ticket volume]
- [Competitive benchmarks]

**Why Now:**
[Strategic timing, competitive pressure, market opportunity]

### 1.3 Goals

**Business Goals:**
- [Revenue / Retention / Market share / Competitive parity]

**User Goals:**
- [What user wants to accomplish]

**Success Metrics:**

| Metric | Baseline | Target | Timeline | Measurement |
|--------|----------|--------|----------|-------------|
| [Primary metric] | [Current] | [Goal] | [When] | [How measured] |
| [Secondary metric] | [Current] | [Goal] | [When] | [How measured] |

**Leading Indicators:**
- [Early signal 1]: [How it predicts success]
- [Early signal 2]: [How it predicts success]

---
## 2. Users & Use Cases

### 2.1 User Personas

**Primary Persona: [Name]**
- **Profile:** [Demographics, role, technical proficiency]
- **Goals:** [What they want to achieve]
- **Pain Points:** [Current frustrations]
- **Motivation:** [Why they'd use this feature]
- **Usage:** [% of user base, frequency]

**Secondary Persona: [Name]**
[Same structure]

### 2.2 Use Cases

**Use Case 1: [Name]**
- **Actor:** [Primary persona]
- **Goal:** [What they want to do]
- **Preconditions:** [What must be true before]
- **Flow:**
  1. [User action]
  2. [System response]
  3. [User action]
  4. [Outcome]
- **Success Criteria:** [How the user knows it worked]

**Use Case 2: [Name]**
[Same structure]

**Use Case 3: [Name]**
[Same structure]

---
## 3. Solution

### 3.1 Overview
[High-level description of what we're building]

### 3.2 Key User Flows

**Flow 1: [Happy Path]**
1. [User action] → [System response]
2. [User action] → [System response]
3. [Outcome]

**Flow 2: [Alternative/Error]**
[Trigger] → [Handling] → [User sees]

### 3.3 Functional Requirements

**Must-Have (MVP):**
1. [Requirement]: [Description + rationale]
2. [Requirement]: [Description + rationale]

**Post-MVP:**
1. [Requirement]: [Why v2]

**Out of Scope:**
1. [Explicitly excluded]: [Reason]

### 3.4 Non-Functional Requirements
- **Performance:** [Targets]
- **Scalability:** [Limits]
- **Security:** [Requirements]
- **Accessibility:** [WCAG level]
- **Compatibility:** [Browsers/devices]

---
## 4. Design & Experience
- **Navigation:** [Where the feature lives in the product]
- **Key Screens:** [Screen 1], [Screen 2], [Empty/Loading/Error states]
- **Interactions:** [Buttons, forms, modals, etc.]
- **Copy:** Feature name, onboarding, help text, error messages

---

## 5. Technical Considerations
- **Architecture:** [High-level approach]
- **Dependencies:** Internal [systems], External [vendors]
- **Data Model:** [Key entities, storage]
- **APIs:** [If exposing/integrating]
- **Migration:** [If replacing existing]
- **Constraints:** [Limits, rates]

---

## 6. Go-to-Market
- **Launch:** Beta [when], Rollout [gradual/full], Sunset [if applicable]
- **Marketing:** Positioning, target audience, channels
- **Sales:** [B2B enablement materials]
- **Support:** Help docs, training, FAQ

---
## 7. Risks & Mitigation

| Risk | Impact | Probability | Mitigation | Owner |
|------|--------|-------------|------------|-------|
| [Risk 1] | [High/Med/Low] | [High/Med/Low] | [How we reduce/handle] | [Name] |
| [Risk 2] | [High/Med/Low] | [High/Med/Low] | [How we reduce/handle] | [Name] |

---

## 8. Open Questions & Decisions

| Question | Options | Decision | Decider | Deadline |
|----------|---------|----------|---------|----------|
| [Question 1] | [Option A, B, C] | [TBD / Decided] | [Who] | [Date] |
| [Question 2] | [Option A, B] | [TBD / Decided] | [Who] | [Date] |

---

## 9. Timeline & Milestones
- **Discovery:** User research [date], mockups [date], spike [date]
- **Development:** Kickoff [date], alpha [date], beta [date]
- **Launch:** Full launch [date], review [date]
- **Dependencies:** [Blockers and when resolved]

---
## 10. Success Criteria
- **Launch Criteria:** Functional reqs met, performance benchmarks, security review, docs complete
- **Monitoring:** Week 1 [metrics], Month 1 [adoption/retention], Quarter 1 [business impact]
- **Success:** [Metric 1] reaches [target], [Metric 2] reaches [target]

---

## Appendix
- **Research:** [Links]
- **Design:** [Links]
- **Technical:** [Links]
- **Related:** [Links]

---

## Quality Checklist
- [ ] Problem: Specific, evidenced, quantified impact, "why now"
- [ ] Solution: Clear high-level, not over-specified, key flows, edge cases
- [ ] Users: Personas with %, realistic use cases
- [ ] Metrics: Measurable, baselines + targets, leading indicators
- [ ] Scope: In/out clear, MVP vs future defined
- [ ] Constraints: Technical, business, dependencies identified
- [ ] Open Questions: Surfaced with timelines and deciders
- [ ] Overall: Appropriate length (1-2 pages), scannable, jargon-free, stakeholder review

---

## Tips by Context
- **New Feature:** Fits existing flows, adoption strategy. Keep brief.
- **New Product:** Vision, market opportunity, differentiation. More detail.
- **Technical:** Performance gains, cost savings, risk reduction. Translate to user/business value.
- **Experiment:** Hypothesis, success criteria, rollback plan. Stay flexible.
- **Internal Tool:** Efficiency gains, time saved. Include end users (employees).