Initial commit

Zhongwei Li
2025-11-30 09:08:06 +08:00
commit 3457739792
30 changed files with 5972 additions and 0 deletions

---
description: Define analytics tracking plan for features and initiatives
disable-model-invocation: false
---
# Analytics Plan
Create comprehensive analytics tracking plans to measure feature success.
## When to Use
- Before implementing a new feature (define what to track)
- When launching an experiment
- When setting up product analytics
- When defining success metrics
## Used By
- Data Analyst (primary owner)
- Growth Marketer (growth metrics)
- Product Manager (success metrics)
- Full-Stack Engineer (implementation)
---
## Analytics Plan Template
```markdown
# Analytics Plan: [Feature/Initiative Name]
**Author**: [Name]
**Date**: [Date]
**Status**: Draft | Approved | Implemented
---
## Overview
### Feature Description
[Brief description of the feature]
### Business Questions
What decisions will this data inform?
1. [Question 1]
2. [Question 2]
3. [Question 3]
### Success Criteria
How will we know if this feature is successful?
- **Primary Metric**: [Metric] - Target: [X]
- **Secondary Metric**: [Metric] - Target: [X]
- **Guardrail Metric**: [Metric] - Should not decrease by [X%]
---
## Event Tracking
### Core Events
| Event Name | Trigger | Properties | Priority |
|------------|---------|------------|----------|
| `[event_name]` | [When fired] | [Key properties] | P1 |
| `[event_name]` | [When fired] | [Key properties] | P1 |
| `[event_name]` | [When fired] | [Key properties] | P2 |
### Event Specifications
#### `feature_viewed`
**Trigger**: When the user views the feature for the first time in a session
**Properties**:
| Property | Type | Required | Description |
|----------|------|----------|-------------|
| `source` | string | Yes | Where user came from |
| `variant` | string | No | A/B test variant |
| `user_tier` | string | Yes | Free/Pro/Enterprise |
**Example**:
```json
{
"event": "feature_viewed",
"properties": {
"source": "navigation",
"variant": "control",
"user_tier": "pro"
}
}
```
#### `feature_action_completed`
**Trigger**: When user completes the primary action
**Properties**:
| Property | Type | Required | Description |
|----------|------|----------|-------------|
| `action_type` | string | Yes | Type of action |
| `time_to_complete` | number | Yes | Seconds from start |
| `success` | boolean | Yes | Action succeeded |
---
## Funnel Definition
### Primary Funnel: [Feature Adoption]
```
Step 1: feature_viewed
↓ [Target: 80%]
Step 2: feature_started
↓ [Target: 60%]
Step 3: feature_completed
↓ [Target: 40%]
Step 4: feature_repeated (within 7 days)
```
### Funnel Analysis Questions
- Where is the biggest drop-off?
- How does drop-off vary by user segment?
- What's the time between steps?
---
## User Properties
| Property | Type | Description | When Updated |
|----------|------|-------------|--------------|
| `has_used_feature` | boolean | User has ever used feature | On first use |
| `feature_usage_count` | number | Times user used feature | On each use |
| `first_feature_use` | timestamp | When first used | On first use |
| `last_feature_use` | timestamp | Most recent use | On each use |
---
## Segments
### Key Segments to Analyze
| Segment | Definition | Why Important |
|---------|------------|---------------|
| New Users | account_age < 7 days | Adoption patterns |
| Power Users | feature_usage > 10/week | Success indicators |
| At-Risk | no_activity > 14 days | Retention insights |
| By Plan | plan_type = [free/pro/enterprise] | Monetization |
---
## Dashboard Requirements
### Overview Dashboard
**Purpose**: Daily monitoring of feature health
**Metrics to Include**:
- Daily Active Users (DAU)
- Feature adoption rate
- Primary action completion rate
- Error rate
**Filters**:
- Date range
- User segment
- Platform
### Deep Dive Dashboard
**Purpose**: Understanding patterns and opportunities
**Charts to Include**:
- Funnel visualization
- Cohort retention
- Time-based trends
- Segment comparison
---
## Experiment Plan (if applicable)
### Hypothesis
[Change] will lead to [X% improvement] in [metric] because [reason].
### Test Setup
- **Control**: [Current experience]
- **Variant**: [New experience]
- **Allocation**: [50/50 or other]
- **Duration**: [X weeks]
- **Sample Size Needed**: [X users per variant]
### Success Metrics
| Metric | Baseline | MDE | Direction |
|--------|----------|-----|-----------|
| Primary: [metric] | [X%] | [Y%] | Increase |
| Secondary: [metric] | [X] | [Y] | Increase |
| Guardrail: [metric] | [X%] | [Y%] | No decrease |
### Analysis Plan
- Primary analysis at [X] days
- Segment analysis by [dimensions]
- Document learnings regardless of outcome
---
## Implementation Checklist
### Before Development
- [ ] Analytics plan reviewed by data/product
- [ ] Event names follow naming convention
- [ ] Success metrics approved
### During Development
- [ ] Events implemented with correct properties
- [ ] Events fire at correct times
- [ ] Properties populated correctly
### Before Launch
- [ ] Events tested in staging
- [ ] Dashboard created
- [ ] Baseline metrics captured
- [ ] Alert thresholds set
### After Launch
- [ ] Verify data flowing correctly
- [ ] Check for data quality issues
- [ ] Monitor metrics daily for first week
---
## Data Quality Checks
| Check | Query/Method | Expected |
|-------|--------------|----------|
| Events firing | Count by day | > 0 after launch |
| Required properties | Null check | No nulls |
| Property values | Distinct values | Expected options |
| User join rate | user_id present | 100% |
```
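The "Sample Size Needed" line in the experiment plan section of the template can be estimated with the standard normal-approximation formula for comparing two proportions. A minimal sketch, assuming a conversion-rate metric with an absolute MDE (the baseline and lift values below are illustrative, not from any real plan):

```python
import math
from statistics import NormalDist

def sample_size_per_variant(baseline, mde, alpha=0.05, power=0.8):
    """Users needed per variant to detect baseline -> baseline + mde
    with a two-sided test at the given alpha and power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p1, p2 = baseline, baseline + mde
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return math.ceil(numerator / mde ** 2)

# e.g. 10% baseline conversion, 2-point absolute lift
n = sample_size_per_variant(0.10, 0.02)
```

Halving the MDE roughly quadruples the required sample, which is why the MDE column in the success-metrics table drives test duration more than anything else.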
---
## Event Naming Convention
### Format
```
[object]_[action]
```
### Objects (nouns)
- `page` - Page views
- `button` - Button interactions
- `form` - Form interactions
- `feature` - Feature usage
- `subscription` - Subscription events
- `user` - User lifecycle
### Actions (past tense verbs)
- `viewed` - Something was seen
- `clicked` - Something was clicked
- `submitted` - Form was submitted
- `started` - Process began
- `completed` - Process finished
- `failed` - Something went wrong
### Examples
```
page_viewed
button_clicked
form_submitted
feature_started
feature_completed
subscription_upgraded
user_signed_up
```
---
## Property Guidelines
### Always Include
- `timestamp` - When event occurred
- `user_id` - Logged-in user identifier
- `session_id` - Session identifier
- `platform` - web/ios/android
- `page` - Current page/screen
### Contextual Properties
- `source` - What triggered the action
- `variant` - A/B test variant
- `value` - Numeric value if applicable
- `error_type` - For error events
### Naming Rules
- Use `snake_case`
- Be descriptive but concise
- Use consistent naming across events
- Document allowed values for enums
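The naming convention above can be enforced mechanically. A minimal validator sketch; the object and action vocabularies mirror the lists in this document and would be extended for a real tracking plan:

```python
import re

# Vocabularies from the [object]_[action] convention above (extend as needed).
OBJECTS = {"page", "button", "form", "feature", "subscription", "user"}
ACTIONS = {"viewed", "clicked", "submitted", "started", "completed",
           "failed", "upgraded", "signed_up"}
SNAKE_CASE = re.compile(r"^[a-z][a-z0-9]*(_[a-z0-9]+)*$")

def is_valid_event_name(name: str) -> bool:
    """Check snake_case and that the name splits into a known
    object prefix followed by a known action."""
    if not SNAKE_CASE.match(name):
        return False
    obj, _, action = name.partition("_")
    return obj in OBJECTS and action in ACTIONS

assert is_valid_event_name("feature_completed")
assert not is_valid_event_name("FeatureCompleted")  # not snake_case
```

Running this as a lint step in CI catches drift (e.g. `clickedButton`, `view_page`) before bad names reach the warehouse.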
---
## Metrics Definitions
### Common Metrics
**Daily Active Users (DAU)**
```
Count of unique users with any event in past 24 hours
```
**Activation Rate**
```
(Users who completed key action) / (Users who signed up) × 100
```
**Retention Rate (Day N)**
```
(Users active on day N) / (Users who signed up N days ago) × 100
```
**Feature Adoption**
```
(Users who used feature) / (Total users) × 100
```
**Conversion Rate**
```
(Users who completed goal) / (Users who started flow) × 100
```
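These definitions translate directly into set operations over an event log. A toy sketch, assuming events carry a `user_id`, `event` name, and timestamp (here DAU is computed per calendar day rather than a rolling 24-hour window, for simplicity):

```python
from datetime import datetime

# Toy event log; the shape is an assumption for illustration.
events = [
    {"user_id": "u1", "event": "user_signed_up",    "ts": datetime(2025, 1, 1)},
    {"user_id": "u1", "event": "feature_completed", "ts": datetime(2025, 1, 2)},
    {"user_id": "u2", "event": "user_signed_up",    "ts": datetime(2025, 1, 1)},
]

def dau(events, day):
    """Unique users with any event on the given calendar day."""
    return len({e["user_id"] for e in events if e["ts"].date() == day.date()})

def activation_rate(events, key_action="feature_completed"):
    """(Users who completed the key action) / (users who signed up) x 100."""
    signed_up = {e["user_id"] for e in events if e["event"] == "user_signed_up"}
    activated = {e["user_id"] for e in events if e["event"] == key_action}
    return 100 * len(signed_up & activated) / len(signed_up) if signed_up else 0.0

print(dau(events, datetime(2025, 1, 1)))  # 2
print(activation_rate(events))            # 50.0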
---
## Quick Reference
### Before Feature Launch
1. Define success metrics
2. Create event tracking plan
3. Implement events
4. Test in staging
5. Set up dashboard
6. Capture baseline
### After Feature Launch
1. Verify data quality
2. Monitor daily
3. Analyze after 1 week
4. Deep dive after 1 month
5. Document learnings

---
description: Estimate implementation complexity for features and technical tasks
disable-model-invocation: false
---
# Estimate Complexity
Provide structured complexity estimates for features and tasks to help with planning and prioritization.
## When to Use
- During sprint planning
- When scoping new features
- Before committing to timelines
- When breaking down large initiatives
## Used By
- Full-Stack Engineer
- Frontend Engineer
- Backend Engineer
- DevOps Engineer
---
## Complexity Estimation Framework
### T-Shirt Sizes
| Size | Description | Typical Scope |
|------|-------------|---------------|
| **XS** | Trivial change | Config change, copy update, minor fix |
| **S** | Small, well-understood | Single component, simple API endpoint |
| **M** | Moderate complexity | Multiple components, some unknowns |
| **L** | Significant effort | Cross-cutting changes, new patterns |
| **XL** | Major initiative | New system, architectural changes |
---
## Estimation Template
```markdown
## Complexity Estimate: [Feature/Task Name]
### Summary
**T-Shirt Size**: [XS / S / M / L / XL]
**Confidence**: [High / Medium / Low]
### Breakdown
| Component | Effort | Notes |
|-----------|--------|-------|
| [Component 1] | [XS-XL] | [Details] |
| [Component 2] | [XS-XL] | [Details] |
| [Component 3] | [XS-XL] | [Details] |
### Key Complexity Drivers
1. **[Driver 1]**: [Why this adds complexity]
2. **[Driver 2]**: [Why this adds complexity]
3. **[Driver 3]**: [Why this adds complexity]
### Risk Factors
| Risk | Impact | Mitigation |
|------|--------|------------|
| [Risk 1] | [H/M/L] | [Strategy] |
| [Risk 2] | [H/M/L] | [Strategy] |
### Unknowns
- [ ] [Unknown 1] - Could affect estimate by [amount]
- [ ] [Unknown 2] - Need to spike/investigate
### Suggested Breakdown
If size is L or XL, break into smaller deliverables:
1. **Phase 1**: [Scope] - Size: [S/M]
2. **Phase 2**: [Scope] - Size: [S/M]
3. **Phase 3**: [Scope] - Size: [S/M]
### Dependencies
- Blocked by: [Dependency]
- Blocks: [Other work]
### Recommendations
[Any suggestions for approach, sequencing, or risk reduction]
```
---
## Complexity Indicators
### Factors That Increase Complexity
**Technical**
- New technology or pattern not used before
- Integration with external systems
- Performance-critical requirements
- Complex state management
- Database migrations on large tables
- Security-sensitive functionality
**Organizational**
- Cross-team coordination required
- Unclear or changing requirements
- Multiple stakeholder approval needed
- Compliance or legal review required
**Code Quality**
- Working in unfamiliar codebase area
- Technical debt in affected areas
- Missing test coverage
- Poor documentation
### Factors That Decrease Complexity
- Similar work done before (pattern exists)
- Well-defined requirements
- Strong test coverage
- Clear ownership and decision-making
- Good documentation
- Modern, maintained dependencies
---
## Confidence Levels
### High Confidence
- Similar work completed before
- All requirements are clear
- Technology is well-understood
- No significant unknowns
### Medium Confidence
- Some new elements but core is understood
- Requirements are mostly clear
- Some unknowns that are bounded
### Low Confidence
- New technology or pattern
- Requirements are still evolving
- Significant unknowns exist
- External dependencies unclear
**When confidence is low**: Consider a spike/investigation before committing to estimate.
---
## Breaking Down Large Tasks
If the estimate is L or XL, it should be broken down. Use these strategies:
### 1. Vertical Slicing
Break by user-facing functionality:
- Slice 1: Minimal viable flow
- Slice 2: Add edge cases
- Slice 3: Polish and optimization
### 2. Horizontal Slicing
Break by technical layer:
- Phase 1: Data model and API
- Phase 2: Frontend implementation
- Phase 3: Integration and testing
### 3. Risk-First Slicing
Address unknowns first:
- Phase 1: Spike on risky parts
- Phase 2: Core implementation
- Phase 3: Polish and edge cases
---
## Estimation Anti-Patterns
### Don't Do This
1. **Pressure-driven estimates**: Fitting estimate to desired timeline
2. **Best-case thinking**: Assuming everything goes perfectly
3. **Ignoring testing**: Development isn't done until it's tested
4. **Forgetting integration**: Time for connecting pieces
5. **Missing review cycles**: Code review, design review, etc.
### Do This Instead
1. **Add buffer**: Include time for unknowns and interruptions
2. **Include all work**: Testing, documentation, review, deployment
3. **Communicate uncertainty**: Be honest about confidence level
4. **Update as you learn**: Revise estimates when new info emerges
5. **Track actuals**: Compare estimates to actuals to improve future estimates
---
## Quick Estimation Checklist
Before providing an estimate, consider:
### Scope
- [ ] Are requirements clear and complete?
- [ ] Is scope explicitly bounded?
- [ ] Are acceptance criteria defined?
### Technical
- [ ] Have you worked in this area before?
- [ ] Are there existing patterns to follow?
- [ ] Are dependencies understood?
- [ ] Is the data model clear?
### Testing
- [ ] What testing is required?
- [ ] Is there existing test coverage?
- [ ] Are there test data needs?
### Deployment
- [ ] Any database migrations?
- [ ] Feature flag needed?
- [ ] Configuration changes?
- [ ] Documentation updates?
### Coordination
- [ ] Other teams involved?
- [ ] Review cycles needed?
- [ ] Stakeholder approval required?
---
## Example Estimates
### XS Example: Copy Update
```
Task: Update error message text
Size: XS
Confidence: High
Breakdown:
- Edit: 10 min
- Test: 10 min
- Deploy: Automatic
```
### S Example: New API Endpoint
```
Task: Add endpoint to fetch user preferences
Size: S
Confidence: High
Breakdown:
- API implementation: 2 hours
- Tests: 1 hour
- Documentation: 30 min
```
### M Example: New Feature Component
```
Task: Add notification preferences UI
Size: M
Confidence: Medium
Breakdown:
- Component design: 2 hours
- State management: 2 hours
- API integration: 2 hours
- Testing: 3 hours
- Accessibility: 2 hours
```
### L Example: New Subsystem
```
Task: Implement real-time notifications
Size: L
Confidence: Medium
Key Drivers:
- WebSocket infrastructure needed
- Multiple notification types
- Delivery guarantees
- UI notifications component
Suggested Breakdown:
1. WebSocket infrastructure (M)
2. Backend notification service (M)
3. Frontend notification component (S)
4. Integration and testing (S)
```

---
description: Security review checklist for features and changes
disable-model-invocation: false
---
# Security Checklist
Comprehensive security review checklist for new features and changes.
## When to Use
- Before shipping any feature that handles user data
- When implementing authentication or authorization
- When adding new API endpoints
- When integrating third-party services
- During code review for security-sensitive changes
## Used By
- Security Engineer (primary owner)
- Full-Stack Engineer (implementation)
- Backend Engineer (API security)
- DevOps Engineer (infrastructure security)
---
## Security Review Template
```markdown
# Security Review: [Feature/Change Name]
**Reviewer**: [Name]
**Date**: [Date]
**Status**: In Progress | Approved | Needs Changes
---
## Overview
### Feature Description
[Brief description of the feature]
### Data Handled
- [ ] PII (Personally Identifiable Information)
- [ ] Financial data
- [ ] Authentication credentials
- [ ] User-generated content
- [ ] None of the above
### Risk Level
- [ ] High (handles sensitive data, authentication, payments)
- [ ] Medium (user data, API endpoints)
- [ ] Low (display only, no data mutation)
---
## Authentication & Authorization
### Authentication
- [ ] Authentication required for all protected endpoints
- [ ] Session management is secure (httpOnly, secure, sameSite)
- [ ] Token expiration is appropriate
- [ ] Logout properly invalidates session
- [ ] No authentication bypass possible
### Authorization
- [ ] Authorization checked on every request
- [ ] Users can only access their own data
- [ ] Admin functions properly protected
- [ ] Role/permission checks in place
- [ ] No IDOR (Insecure Direct Object Reference) vulnerabilities
### Multi-Factor Authentication (if applicable)
- [ ] MFA enforced for sensitive operations
- [ ] MFA bypass not possible
- [ ] Recovery codes handled securely
---
## Input Validation
### Data Validation
- [ ] All user input validated on server side
- [ ] Input type checked (string, number, etc.)
- [ ] Input length limited appropriately
- [ ] Input format validated (email, URL, etc.)
- [ ] Allowlists preferred over blocklists
### SQL Injection
- [ ] Parameterized queries used (no string concatenation)
- [ ] ORM used correctly
- [ ] Raw queries reviewed for injection
### XSS (Cross-Site Scripting)
- [ ] Output encoded for context (HTML, JS, URL, CSS)
- [ ] User content sanitized before display
- [ ] Content Security Policy configured
- [ ] No `innerHTML` or `dangerouslySetInnerHTML` with unsanitized content
### Command Injection
- [ ] No user input passed to shell commands
- [ ] If necessary, input strictly validated
- [ ] Parameterized execution used
---
## Data Protection
### Data at Rest
- [ ] Sensitive data encrypted in database
- [ ] Encryption keys properly managed
- [ ] PII minimized (don't store what you don't need)
- [ ] Data classified and tagged
### Data in Transit
- [ ] HTTPS enforced everywhere
- [ ] TLS 1.2+ required
- [ ] HSTS enabled
- [ ] Secure cookies (httpOnly, secure, sameSite)
### Data Handling
- [ ] Sensitive data not logged
- [ ] Error messages don't expose internal details
- [ ] Data scrubbed from error reports
- [ ] Secure data deletion implemented
---
## API Security
### Endpoint Security
- [ ] Rate limiting implemented
- [ ] Request size limits set
- [ ] Timeout configured
- [ ] CORS properly configured
### Request Validation
- [ ] Schema validation on all inputs
- [ ] Unexpected fields rejected or ignored
- [ ] Content-type verified
- [ ] File upload restrictions in place
### Response Security
- [ ] Sensitive data not in responses
- [ ] Error codes don't leak information
- [ ] Consistent error format
- [ ] No stack traces in production
---
## Third-Party Security
### Dependencies
- [ ] Dependencies scanned for vulnerabilities
- [ ] Dependencies from trusted sources
- [ ] Dependencies up to date
- [ ] Lock file used (package-lock.json, etc.)
### Integrations
- [ ] Third-party credentials properly managed
- [ ] API keys not in code
- [ ] Webhook signatures verified
- [ ] Third-party responses validated
---
## Infrastructure Security
### Secrets Management
- [ ] No secrets in code
- [ ] Secrets in environment variables or secret manager
- [ ] Secrets rotated regularly
- [ ] Access to secrets logged
### Security Headers
- [ ] Content-Security-Policy
- [ ] X-Content-Type-Options: nosniff
- [ ] X-Frame-Options or CSP frame-ancestors
- [ ] Referrer-Policy
- [ ] Permissions-Policy
- [ ] Strict-Transport-Security
### Error Handling
- [ ] Generic error pages in production
- [ ] No stack traces exposed
- [ ] Errors logged server-side
- [ ] Monitoring for unusual error patterns
---
## Logging & Monitoring
### Security Logging
- [ ] Authentication attempts logged
- [ ] Authorization failures logged
- [ ] Sensitive operations logged
- [ ] Logs don't contain sensitive data
- [ ] Log integrity protected
### Monitoring
- [ ] Alerts for suspicious activity
- [ ] Failed login monitoring
- [ ] Rate limit triggers monitored
- [ ] Error rate monitoring
---
## Threat Model
### Assets
[What data/functionality are we protecting?]
### Threat Actors
- [ ] Anonymous attackers
- [ ] Authenticated users (privilege escalation)
- [ ] Malicious insiders
- [ ] Automated bots/scrapers
### Attack Vectors
| Threat | Likelihood | Impact | Mitigation |
|--------|------------|--------|------------|
| [Threat 1] | H/M/L | H/M/L | [Control] |
| [Threat 2] | H/M/L | H/M/L | [Control] |
### Residual Risks
[Risks that are accepted with justification]
---
## Findings
### Critical (Must Fix)
- [ ] [Finding 1]
- [ ] [Finding 2]
### High (Should Fix)
- [ ] [Finding 1]
- [ ] [Finding 2]
### Medium (Recommend)
- [ ] [Finding 1]
### Informational
- [Note 1]
---
## Sign-Off
| Role | Name | Date | Status |
|------|------|------|--------|
| Security | | | [ ] Approved |
| Dev Lead | | | [ ] Acknowledged |
```
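The security-headers items in the template's infrastructure section amount to a fixed baseline that middleware can stamp onto every response. A sketch of that baseline as a plain dict; the values are common starting points, not a one-size-fits-all policy (the CSP in particular needs tuning per application):

```python
# Baseline response headers matching the checklist's Security Headers items.
SECURITY_HEADERS = {
    "Content-Security-Policy": "default-src 'self'",
    "X-Content-Type-Options": "nosniff",
    "X-Frame-Options": "DENY",
    "Referrer-Policy": "strict-origin-when-cross-origin",
    "Permissions-Policy": "camera=(), microphone=(), geolocation=()",
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
}

def apply_security_headers(response_headers: dict) -> dict:
    """Merge the baseline in without clobbering explicit per-response values."""
    return {**SECURITY_HEADERS, **response_headers}
```

Centralizing the set in one place also makes the "Security headers configured" release check a one-line test instead of a manual inspection.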
---
## OWASP Top 10 Quick Reference
### 1. Broken Access Control
- Enforce access control on server
- Deny by default
- Verify ownership of resources
### 2. Cryptographic Failures
- Encrypt sensitive data
- Use strong algorithms
- Manage keys securely
### 3. Injection
- Use parameterized queries
- Validate and sanitize input
- Escape output for context
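The difference between concatenation and parameter binding is easiest to see side by side. A self-contained sketch using the stdlib `sqlite3` driver (table and payload are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'a@example.com')")

user_input = "a@example.com' OR '1'='1"  # classic injection payload

# BAD: input concatenated into SQL -- the payload rewrites the query.
bad = conn.execute(
    "SELECT id FROM users WHERE email = '" + user_input + "'").fetchall()

# GOOD: the driver binds the value; the payload is just a string literal.
good = conn.execute(
    "SELECT id FROM users WHERE email = ?", (user_input,)).fetchall()

print(bad)   # [(1,)]  -- injection succeeded, returned a row it shouldn't
print(good)  # []      -- no user has that literal email
```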
### 4. Insecure Design
- Threat model new features
- Defense in depth
- Secure defaults
### 5. Security Misconfiguration
- Disable unnecessary features
- Secure default configs
- Remove default credentials
### 6. Vulnerable Components
- Scan dependencies
- Keep updated
- Monitor for vulnerabilities
### 7. Authentication Failures
- Strong password requirements
- Secure session management
- Multi-factor authentication
### 8. Software/Data Integrity Failures
- Verify dependencies
- Sign releases
- Secure CI/CD
### 9. Security Logging Failures
- Log security events
- Protect log integrity
- Monitor for anomalies
### 10. Server-Side Request Forgery (SSRF)
- Validate URLs
- Use allowlists
- Limit outbound requests
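The allowlist approach for SSRF can be as small as a scheme-plus-hostname check before any outbound fetch. A sketch using stdlib `urllib.parse`; the allowed hosts are placeholders, and a production check would also resolve and verify the target IP to catch DNS tricks:

```python
from urllib.parse import urlparse

# Placeholder allowlist of outbound fetch targets.
ALLOWED_HOSTS = {"api.example.com", "hooks.example.com"}

def is_allowed_url(url: str) -> bool:
    """Require HTTPS and an allowlisted hostname before fetching."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_HOSTS

assert is_allowed_url("https://api.example.com/v1/data")
assert not is_allowed_url("http://api.example.com/v1/data")    # not HTTPS
assert not is_allowed_url("https://169.254.169.254/metadata")  # not allowlisted
```

Note that `urlparse(...).hostname` ignores userinfo, so `https://api.example.com@evil.com/` correctly resolves to `evil.com` and is rejected.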
---
## Quick Security Checks
### Before Every PR
- [ ] No secrets in code
- [ ] Input validation present
- [ ] Auth checks in place
- [ ] No obvious injection vectors
### Before Every Release
- [ ] Dependency scan clean
- [ ] Security headers configured
- [ ] Authentication tested
- [ ] Authorization tested
### Quarterly
- [ ] Full security review
- [ ] Penetration testing
- [ ] Dependency update
- [ ] Access review

skills/write-prd/SKILL.md
---
description: Write a comprehensive Product Requirements Document for a feature or initiative
disable-model-invocation: false
---
# Write PRD
Create a structured Product Requirements Document that aligns stakeholders and guides implementation.
## When to Use
- Starting a new feature or project
- Documenting requirements before development
- Creating alignment between product, design, and engineering
- Formalizing user feedback into actionable requirements
## Used By
- Product Manager (primary owner)
- Full-Stack Engineer (technical input)
- UI/UX Designer (design requirements)
---
## PRD Template
```markdown
# PRD: [Feature/Project Name]
**Author**: [Name]
**Status**: Draft | In Review | Approved
**Last Updated**: [Date]
**Version**: 1.0
---
## Executive Summary
[2-3 sentence summary of what we're building and why it matters]
---
## Problem Statement
### The Problem
[Clear description of the user/business problem]
### Who Has This Problem
- **Primary Users**: [User segment]
- **Secondary Users**: [Other affected users]
- **Frequency**: [How often does this problem occur]
### Impact
- **User Impact**: [How it affects users]
- **Business Impact**: [How it affects the business]
### Evidence
- [User research finding]
- [Support ticket data]
- [Analytics insight]
---
## Goals & Success Metrics
### Objective
[One clear objective this feature achieves]
### Key Results
1. **KR1**: [Measurable outcome] - Target: [X]
2. **KR2**: [Measurable outcome] - Target: [X]
3. **KR3**: [Measurable outcome] - Target: [X]
### Non-Goals
- [What we are explicitly NOT trying to do]
- [Scope boundaries]
---
## User Stories
### Primary Flow
**As a** [user type]
**I want to** [action/goal]
**So that** [benefit/outcome]
**Acceptance Criteria:**
- [ ] Given [context], when [action], then [result]
- [ ] Given [context], when [action], then [result]
- [ ] Given [context], when [action], then [result]
### Secondary Flows
[Additional user stories for edge cases, admin flows, etc.]
---
## Scope
### In Scope (MVP)
- [ ] [Feature/capability 1]
- [ ] [Feature/capability 2]
- [ ] [Feature/capability 3]
### Out of Scope (Future)
- [ ] [Explicitly excluded 1]
- [ ] [Explicitly excluded 2]
### Dependencies
- [External system/team dependency]
- [Technical prerequisite]
---
## Design & UX
### User Flow
[Description or link to user flow diagram]
1. [Step 1]
2. [Step 2]
3. [Step 3]
### Wireframes/Mockups
[Links to design files or embedded images]
### Key Design Decisions
- **Decision 1**: [Choice made] - Rationale: [Why]
- **Decision 2**: [Choice made] - Rationale: [Why]
### Accessibility Requirements
- [ ] [WCAG requirement]
- [ ] [Keyboard navigation]
- [ ] [Screen reader support]
---
## Technical Requirements
### Architecture Overview
[High-level technical approach]
### Data Model Changes
[New entities, fields, relationships]
### API Design
[New endpoints or changes needed]
### Performance Requirements
- Load time: [Target]
- Throughput: [Target]
- Scalability: [Considerations]
### Security Considerations
- [Authentication requirements]
- [Data protection needs]
- [Compliance requirements]
---
## Analytics & Tracking
### Events to Track
| Event Name | Trigger | Properties |
|------------|---------|------------|
| [event] | [when] | [what data] |
### Success Dashboard
[Metrics to display and how to measure]
### Experiment Plan
[A/B tests or phased rollout approach]
---
## Risks & Mitigations
| Risk | Likelihood | Impact | Mitigation |
|------|------------|--------|------------|
| [Risk 1] | H/M/L | H/M/L | [Strategy] |
| [Risk 2] | H/M/L | H/M/L | [Strategy] |
---
## Timeline & Milestones
### Phase 1: [Name]
- [Deliverable 1]
- [Deliverable 2]
### Phase 2: [Name]
- [Deliverable 1]
- [Deliverable 2]
### Key Dates
- Design Complete: [Date]
- Development Start: [Date]
- Beta Release: [Date]
- GA Release: [Date]
---
## Open Questions
- [ ] [Question 1] - Owner: [Name]
- [ ] [Question 2] - Owner: [Name]
---
## Appendix
### Related Documents
- [Link to design specs]
- [Link to technical specs]
- [Link to research]
### Revision History
| Version | Date | Author | Changes |
|---------|------|--------|---------|
| 1.0 | [Date] | [Name] | Initial draft |
```
---
## PRD Best Practices
### Writing Guidelines
1. **Lead with the problem, not the solution**
- Start by deeply understanding and articulating the problem
- Resist jumping to solutions until the problem is clear
2. **Be specific and measurable**
- Avoid vague language like "improve" or "better"
- Define concrete metrics and targets
3. **Keep it concise**
- PRDs that are too long don't get read
- Focus on what's essential for decision-making
4. **Show your work**
- Include evidence for assertions
- Link to research, data, or feedback
5. **Define what's NOT included**
- Out of scope is as important as in scope
- Prevents scope creep
### Common Mistakes to Avoid
- **Solutioning too early**: Define the problem first
- **Vague acceptance criteria**: Make them testable
- **Missing success metrics**: How will you know it worked?
- **Skipping edge cases**: Think about error states and failures
- **Ignoring accessibility**: Include from the start, not as afterthought
### Review Checklist
Before sharing the PRD:
- [ ] Problem statement is clear and evidence-backed
- [ ] User stories have testable acceptance criteria
- [ ] Scope is explicitly defined (in and out)
- [ ] Success metrics are measurable
- [ ] Technical approach has been validated with engineering
- [ ] Design requirements are specified
- [ ] Open questions are documented with owners
- [ ] Risks are identified with mitigations
---
## Quick Reference
### User Story Format
```
As a [user type]
I want to [action/goal]
So that [benefit/outcome]
```
### Acceptance Criteria Format
```
Given [context/precondition]
When [action/trigger]
Then [expected outcome]
```
### INVEST Criteria for Stories
- **I**ndependent: Can be developed separately
- **N**egotiable: Details can be discussed
- **V**aluable: Provides value to users
- **E**stimable: Can estimate effort
- **S**mall: Can complete in a sprint
- **T**estable: Can verify completion

---
description: Create comprehensive test plans for features and changes
disable-model-invocation: false
---
# Write Test Plan
Create structured test plans that ensure quality and prevent regressions.
## When to Use
- Before implementing a new feature
- When planning a release
- After discovering a bug (to prevent regression)
- When onboarding someone to test a feature
## Used By
- QA Engineer (primary owner)
- Full-Stack Engineer (test implementation)
- Frontend Engineer (component testing)
- Backend Engineer (API testing)
---
## Test Plan Template
```markdown
# Test Plan: [Feature/Change Name]
**Author**: [Name]
**Date**: [Date]
**Status**: Draft | Ready | Executing | Complete
**Feature/PR**: [Link]
---
## Overview
### Feature Summary
[Brief description of what's being tested]
### Testing Scope
- **In Scope**: [What will be tested]
- **Out of Scope**: [What won't be tested, and why]
### Test Environment
- **Environment**: [Staging / Production / Local]
- **Test Data**: [Source of test data]
- **Dependencies**: [External services, mock requirements]
---
## Test Scenarios
### Happy Path Tests
| ID | Scenario | Steps | Expected Result | Priority |
|----|----------|-------|-----------------|----------|
| HP-1 | [Scenario name] | [Brief steps] | [Expected outcome] | P1 |
| HP-2 | [Scenario name] | [Brief steps] | [Expected outcome] | P1 |
### Edge Cases
| ID | Scenario | Steps | Expected Result | Priority |
|----|----------|-------|-----------------|----------|
| EC-1 | [Edge case] | [Steps] | [Expected outcome] | P2 |
| EC-2 | [Edge case] | [Steps] | [Expected outcome] | P2 |
### Error Cases
| ID | Scenario | Steps | Expected Result | Priority |
|----|----------|-------|-----------------|----------|
| ER-1 | [Error scenario] | [Steps] | [Expected error handling] | P1 |
| ER-2 | [Error scenario] | [Steps] | [Expected error handling] | P2 |
### Boundary Conditions
| ID | Scenario | Steps | Expected Result | Priority |
|----|----------|-------|-----------------|----------|
| BC-1 | [Boundary test] | [Steps] | [Expected outcome] | P2 |
| BC-2 | [Boundary test] | [Steps] | [Expected outcome] | P3 |
---
## Detailed Test Cases
### [HP-1] [Test Case Name]
**Preconditions:**
- [Required state/setup]
**Test Data:**
- [Specific data needed]
**Steps:**
1. [Detailed step 1]
2. [Detailed step 2]
3. [Detailed step 3]
**Expected Results:**
- [ ] [Verification point 1]
- [ ] [Verification point 2]
- [ ] [Verification point 3]
**Actual Results:**
[To be filled during execution]
**Status:** [ ] Pass [ ] Fail [ ] Blocked
---
## Test Data Requirements
### Required Test Accounts
| Account Type | Username | Purpose |
|-------------|----------|---------|
| Admin | test-admin | Admin functionality |
| Regular User | test-user | Standard flows |
| New User | (create) | First-time experience |
### Test Data Sets
| Data Set | Description | Location |
|----------|-------------|----------|
| [Set 1] | [Description] | [Location/script] |
| [Set 2] | [Description] | [Location/script] |
---
## Automation Coverage
### Automated Tests
| Test Area | Framework | Coverage | Location |
|-----------|-----------|----------|----------|
| Unit | [Jest/Vitest] | [X%] | [path] |
| Integration | [Testing Library] | [X%] | [path] |
| E2E | [Playwright] | [X scenarios] | [path] |
| API | [Supertest] | [X endpoints] | [path] |
### New Tests Needed
- [ ] [Test 1] - Type: [Unit/Integration/E2E]
- [ ] [Test 2] - Type: [Unit/Integration/E2E]
- [ ] [Test 3] - Type: [Unit/Integration/E2E]
---
## Performance Testing
### Performance Criteria
| Metric | Target | Measurement Method |
|--------|--------|-------------------|
| Page Load | < 2s | Lighthouse |
| API Response | < 200ms | Load testing |
| Time to Interactive | < 3s | Lighthouse |
### Load Testing (if applicable)
- **Concurrent Users**: [Target]
- **Duration**: [Minutes]
- **Scenarios**: [Key flows to test]
---
## Accessibility Testing
### WCAG Compliance
- [ ] Color contrast (AA standard)
- [ ] Keyboard navigation
- [ ] Screen reader testing
- [ ] Focus management
- [ ] Alt text for images
### Testing Tools
- [ ] axe DevTools
- [ ] VoiceOver / NVDA
- [ ] Keyboard-only navigation
---
## Cross-Browser / Cross-Device
### Browsers to Test
- [ ] Chrome (latest)
- [ ] Firefox (latest)
- [ ] Safari (latest)
- [ ] Edge (latest)
- [ ] Mobile Safari
- [ ] Mobile Chrome
### Devices to Test
- [ ] Desktop (1920x1080)
- [ ] Laptop (1440x900)
- [ ] Tablet (768x1024)
- [ ] Mobile (375x667)
---
## Regression Testing
### Areas to Regression Test
- [ ] [Related feature 1]
- [ ] [Related feature 2]
- [ ] [Authentication flow]
### Smoke Test Checklist
- [ ] User can log in
- [ ] Main navigation works
- [ ] Core feature X works
- [ ] No console errors
---
## Test Execution
### Schedule
| Phase | Date | Owner |
|-------|------|-------|
| Test Plan Review | [Date] | QA |
| Test Environment Setup | [Date] | DevOps |
| Test Execution | [Date] | QA |
| Bug Triage | [Date] | Team |
| Retest | [Date] | QA |
| Sign-off | [Date] | QA Lead |
### Test Results Summary
| Category | Total | Passed | Failed | Blocked |
|----------|-------|--------|--------|---------|
| Happy Path | | | | |
| Edge Cases | | | | |
| Error Cases | | | | |
| Regression | | | | |
---
## Sign-Off
### Entry Criteria
- [ ] Feature development complete
- [ ] Code reviewed and merged
- [ ] Test environment ready
- [ ] Test data available
### Exit Criteria
- [ ] All P1 tests passed
- [ ] No P1/P2 bugs open
- [ ] Test coverage meets target
- [ ] Performance criteria met
- [ ] Accessibility verified
### Approval
| Role | Name | Date | Approved |
|------|------|------|----------|
| QA Lead | | | [ ] |
| Dev Lead | | | [ ] |
| PM | | | [ ] |
```
---
## Test Scenario Categories
### Think About These Areas
1. **Happy Path**: Normal, expected user flows
2. **Edge Cases**: Boundary conditions, unusual but valid inputs
3. **Error Cases**: Invalid inputs, failure scenarios
4. **Security**: Authentication, authorization, injection
5. **Performance**: Load, stress, response times
6. **Accessibility**: Keyboard, screen reader, contrast
7. **Compatibility**: Browsers, devices, screen sizes
8. **Integration**: Third-party services, APIs
9. **State**: Different user states, data conditions
10. **Concurrency**: Multiple users, race conditions
### Prioritization
- **P1 (Must Test)**: Core functionality, security-critical, high-usage paths
- **P2 (Should Test)**: Important edge cases, error handling
- **P3 (Nice to Test)**: Rare scenarios, minor functionality
---
## Testing Techniques
### Equivalence Partitioning
Divide inputs into groups that should be treated the same:
- Valid emails: test one representative
- Invalid emails: test one representative
### Boundary Value Analysis
Test at the edges:
- Minimum value
- Maximum value
- Just below minimum
- Just above maximum
### Decision Table Testing
For complex business logic with multiple conditions:
| Condition 1 | Condition 2 | Expected Action |
|-------------|-------------|-----------------|
| True | True | Action A |
| True | False | Action B |
| False | True | Action C |
| False | False | Action D |
### State Transition Testing
For features with state machines:
- Identify all states
- Identify all transitions
- Test each transition
- Test invalid transitions
---
## Automation Guidelines
### What to Automate
**Automate**:
- Regression tests (run frequently)
- Happy path flows
- Data-driven tests (many similar cases)
- API tests
**Don't Automate**:
- Exploratory testing
- One-time tests
- Rapidly changing features
- Visual design validation
### Test Pyramid
```
        /\
       /E2E\        Few, critical flows
      /------\
     /  INT   \     More, key integrations
    /----------\
   /    UNIT    \   Many, fast, isolated
  ----------------
```
---
## Quick Test Checklist
Before shipping, verify:
### Functionality
- [ ] All acceptance criteria met
- [ ] Happy path works
- [ ] Error handling works
- [ ] Edge cases handled
### Quality
- [ ] No console errors
- [ ] No broken links
- [ ] Loading states work
- [ ] Empty states work
### Performance
- [ ] Page loads in < 3s
- [ ] No memory leaks
- [ ] Images optimized
### Accessibility
- [ ] Keyboard navigable
- [ ] Screen reader friendly
- [ ] Color contrast OK
### Security
- [ ] Auth required where needed
- [ ] Permissions enforced
- [ ] No sensitive data exposed
---
description: Write user-facing documentation for features and products
disable-model-invocation: false
---
# Write User Docs
Create clear, helpful documentation for end users.
## When to Use
- When launching a new feature
- When users are asking support questions
- When creating help center content
- When writing in-app guidance
## Used By
- Customer Support (primary owner)
- Content Creator (writing)
- Product Manager (requirements)
- UI/UX Designer (in-app copy)
---
## Documentation Types
### 1. Help Article
Full documentation for a feature or workflow
### 2. FAQ Entry
Quick answer to common questions
### 3. In-App Guidance
Tooltips, empty states, onboarding text
### 4. Troubleshooting Guide
Steps to resolve common issues
---
## Help Article Template
```markdown
# [Action-Oriented Title]
Brief intro paragraph explaining what this article covers and who it's for.
---
## Before You Start
What users need before following this guide:
- [Prerequisite 1]
- [Prerequisite 2]
- [Account type or permission required]
---
## [Main Task]
### Step 1: [Clear action]
[Explanation of what to do]
![Screenshot placeholder](screenshot-url)
> **Tip**: [Helpful tip for this step]
### Step 2: [Clear action]
[Explanation of what to do]
### Step 3: [Clear action]
[Explanation of what to do]
---
## [Secondary Task] (if applicable)
Steps for related functionality...
---
## Troubleshooting
### [Common Issue 1]
**Problem**: [What the user experiences]
**Solution**:
1. [Step to resolve]
2. [Step to resolve]
### [Common Issue 2]
**Problem**: [What the user experiences]
**Solution**: [How to fix it]
---
## Frequently Asked Questions
**Q: [Common question]?**
A: [Clear answer]
**Q: [Common question]?**
A: [Clear answer]
---
## Related Articles
- [Related Article 1](link)
- [Related Article 2](link)
---
## Need Help?
If you're still having trouble, [contact support](link) and we'll help you out.
```
---
## FAQ Template
```markdown
## [Question in user's words?]
[Direct answer to the question - lead with the answer, not background]
### More Details
[Additional context if needed]
### Related
- [Link to full article if exists]
- [Related FAQ]
```
---
## In-App Copy Guidelines
### Empty States
**Structure**:
1. What this area is for
2. Why it's empty
3. How to fill it
**Example**:
```
No projects yet
Create your first project to start tracking your work.
[Create Project]
```
### Tooltips
**Rules**:
- Keep under 100 characters
- Explain the action, not the obvious
- Include "why" when helpful
**Good**:
```
Export your data as CSV for analysis in other tools
```
**Bad**:
```
Click to export
```
### Error Messages
**Structure**:
1. What happened (briefly)
2. What to do next
**Example**:
```
Couldn't save changes
Check your internet connection and try again.
[Retry]
```
**Anti-patterns** (vague, and give the user no next step):
```
❌ Error 500
❌ Failed to save
❌ Invalid input
❌ Something went wrong
```
### Confirmation Messages
**Structure**:
1. Confirm what was done
2. Next action (optional)
**Example**:
```
Project created successfully
[View Project] or [Create Another]
```
### Loading States
**Keep it simple**:
```
Loading...
Saving...
Processing...
```
**Or be specific**:
```
Uploading file... 45%
Generating report...
Connecting to [Service]...
```
---
## Writing Principles
### 1. Use Plain Language
**Do**:
- "Click the Save button"
- "Your changes are saved"
- "Enter your email address"
**Don't**:
- "Execute the save operation"
- "Modifications have been persisted"
- "Input your electronic mail identifier"
### 2. Lead with the Action
**Do**:
- "To create a project, click the + button"
- "Export your data from Settings > Export"
**Don't**:
- "The + button, which is located in the top right corner of the interface, can be used to create a new project"
### 3. Use Second Person ("You")
**Do**:
- "Your account"
- "You can export..."
- "Your changes are saved"
**Don't**:
- "The user's account"
- "Users can export..."
- "The changes are saved"
### 4. Be Specific, Not Vague
**Do**:
- "Enter a password with at least 8 characters"
- "This file is 2.4 MB (maximum: 5 MB)"
**Don't**:
- "Enter a secure password"
- "File size must be acceptable"
### 5. Explain Why (When Helpful)
**Do**:
- "Archive this project to free up your dashboard while keeping all data"
- "Set up 2-factor authentication to protect your account"
**Don't**:
- Over-explain obvious actions
---
## Troubleshooting Guide Template
```markdown
# Troubleshooting: [Problem Category]
## Quick Fixes
Try these first:
1. [ ] Refresh the page
2. [ ] Clear browser cache
3. [ ] Try a different browser
4. [ ] Check internet connection
---
## [Specific Problem 1]
### Symptoms
- [What user sees or experiences]
- [Related error messages]
### Cause
[Brief explanation of why this happens]
### Solution
**Option A**: [First solution]
1. [Step 1]
2. [Step 2]
**Option B**: [If Option A doesn't work]
1. [Step 1]
2. [Step 2]
### Prevention
[How to avoid this in the future]
---
## [Specific Problem 2]
[Same structure as above]
---
## Still Having Issues?
If none of the above solutions work:
1. **Collect information**:
- Browser and version
- Screenshots of the issue
- Steps to reproduce
2. **Contact support**:
- [Support link]
- Include the information above
---
## Related Articles
- [Related troubleshooting guide]
- [Feature documentation]
```
---
## Documentation Checklist
### Before Publishing
- [ ] Title is action-oriented
- [ ] Language is plain and clear
- [ ] Steps are numbered and specific
- [ ] Screenshots are included (if helpful)
- [ ] Prerequisites are listed
- [ ] Troubleshooting included
- [ ] Related articles linked
- [ ] Tested by someone unfamiliar with feature
### Content Quality
- [ ] Answers the user's question
- [ ] Scannable (headers, bullets, short paragraphs)
- [ ] Accurate and up-to-date
- [ ] Consistent with product UI text
- [ ] Accessible (alt text, clear structure)
### After Publishing
- [ ] Update when product changes
- [ ] Review support tickets for gaps
- [ ] Track page views and search terms
- [ ] Update based on user feedback
---
## Voice and Tone Guide
### Be Helpful
Guide users to success, don't just document features.
### Be Clear
Use simple words, short sentences, direct instructions.
### Be Friendly
Conversational but professional. Not robotic, not too casual.
### Be Confident
"Click Save" not "You might want to click Save"
### Be Respectful
Don't blame users for errors. "Something went wrong" not "You did something wrong"