Initial commit

This commit is contained in:
Zhongwei Li
2025-11-29 18:32:48 +08:00
commit 9997176040
24 changed files with 2371 additions and 0 deletions

---
name: prp-generator
description: Generate comprehensive Product Requirement Plans (PRPs) for feature implementation with thorough codebase analysis and external research. Use when the user requests a PRP, PRD, or detailed implementation plan for a new feature. Conducts systematic research, identifies patterns, and creates executable validation gates for one-pass implementation success.
---
# PRP Generator
## Overview
This skill generates comprehensive Product Requirement Plans (PRPs) that enable AI agents to implement features in a single pass with high success rates. The skill combines systematic codebase analysis with external research to create detailed, context-rich implementation blueprints.
## When to Use This Skill
Invoke this skill when:
- User requests a PRP or PRD (Product Requirement Plan/Document)
- User wants a detailed implementation plan for a new feature
- User asks to "plan out" or "design" a complex feature
- Beginning a significant feature development that would benefit from structured planning
- User provides a feature description file and asks for implementation guidance
## Core Principle
**Context is Everything**: The AI agent implementing your PRP only receives:
1. The PRP content you create
2. Training data knowledge
3. Access to the codebase
4. WebSearch capabilities
Therefore, your PRP must be self-contained with all necessary context, specific references, and executable validation gates.
## Workflow
### Phase 1: Understanding the Feature
1. **Read the Feature Request**
- If user provides a feature file path, read it completely
- If user provides verbal description, clarify requirements by asking:
- What is the user trying to accomplish?
- What are the acceptance criteria?
- Are there any specific constraints or requirements?
- Identify the core problem being solved
2. **Clarify Ambiguities**
- Use AskUserQuestion tool for any unclear requirements
- Confirm technology stack assumptions
- Verify integration points
- Ask about specific patterns to follow if not obvious
### Phase 2: Codebase Analysis (Mandatory)
**Goal**: Understand existing patterns, conventions, and integration points
Refer to `references/research_methodology.md` for detailed guidance, but the core steps are:
1. **Search for Similar Features**
```
Use Grep to search for:
- Similar component names
- Similar functionality keywords
- Similar UI patterns
- Similar API endpoints
```
Document findings with:
- Exact file paths and line numbers
- Code snippets showing patterns
- Relevance to new feature
- Necessary adaptations
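The file:line references the PRP should capture can be sketched in a few lines of TypeScript — a toy version of what the Grep tool produces (the sample file, paths, and helper name here are illustrative, not from any real codebase):

```typescript
// Sketch: produce "path:line" references for a keyword, here over an
// in-memory sample; in practice the Grep tool does this over the repo.
function findReferences(
  files: Record<string, string>,
  keyword: string,
): string[] {
  const hits: string[] = [];
  for (const [filePath, content] of Object.entries(files)) {
    content.split("\n").forEach((line, i) => {
      // Line numbers are 1-based, matching editor and Grep output
      if (line.includes(keyword)) hits.push(`${filePath}:${i + 1}`);
    });
  }
  return hits;
}

const sample = {
  "src/components/UserProfile.tsx":
    "import React from 'react'\nexport function UserProfile() {}\n",
};
console.log(findReferences(sample, "UserProfile").join(",")); // src/components/UserProfile.tsx:2
```

Citing references in exactly this `path:line` shape is what lets the implementer jump straight to the pattern.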
2. **Identify Architectural Patterns**
- Directory structure conventions
- Component organization patterns
- State management approach
- API structure patterns
- Routing patterns (if applicable)
Example findings:
```
Pattern: Feature-based directory structure
Location: src/features/
Application: Create src/features/[new-feature]/
```
3. **Document Coding Conventions**
- TypeScript usage patterns (interfaces vs types, strict mode)
- Component patterns (FC vs function, default vs named exports)
- Styling approach (CSS modules, styled-components, Tailwind)
- Import ordering and organization
- Function and variable naming
- Comment style
Example:
```
Convention: Named exports for all components
Example: export function UserProfile() { ... }
Found in: src/components/*.tsx
```
4. **Study Test Patterns**
- Test framework and version
- Test file naming and location
- Mock strategies
- Coverage expectations
- Example test to mirror
Document:
```
Framework: Vitest + @testing-library/react
Pattern: Co-located tests with *.test.tsx
Example: src/components/Button/Button.test.tsx
Mock Strategy: Use vi.fn() for functions, MSW for HTTP
```
5. **Check Project Configuration**
- Review `package.json` for dependencies and scripts
- Check `tsconfig.json` for TypeScript settings
- Review build configuration (vite.config.ts, etc.)
- Note path aliases and special configurations
Document:
```
Build Tool: Vite 5.x
Path Aliases: '@/' → 'src/', '@components/' → 'src/components/'
TypeScript: Strict mode enabled
```
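Pinning the exact installed version can be mechanical. A minimal sketch, with the manifest inlined for illustration (in practice, read the project's real `package.json`):

```typescript
// Sketch: look up a dependency's declared version so the PRP can cite it
// precisely. The manifest below is a sample, not a real project file.
const manifest = JSON.stringify({
  dependencies: { "@tanstack/react-query": "^5.28.0" },
  devDependencies: { vitest: "^1.4.0" },
});

function dependencyVersion(pkgJson: string, name: string): string | undefined {
  const pkg = JSON.parse(pkgJson);
  // Check runtime dependencies first, then dev dependencies
  return pkg.dependencies?.[name] ?? pkg.devDependencies?.[name];
}

console.log(dependencyVersion(manifest, "@tanstack/react-query")); // ^5.28.0
```

Note that this reads the declared range; for the exact resolved version, check the lockfile.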
### Phase 3: External Research (Mandatory)
**Goal**: Find best practices, documentation, examples, and gotchas
Refer to `references/research_methodology.md` for detailed guidance, but the core steps are:
1. **Search for Library Documentation**
- Go to official documentation for any libraries being used
- Find the SPECIFIC version in package.json
- Document exact URLs to relevant sections
- Note version-specific features or changes
Example output:
```
Library: @tanstack/react-query
Version: 5.28.0 (from package.json)
Docs: https://tanstack.com/query/latest/docs/react/overview
Key Sections:
- Queries: https://tanstack.com/query/latest/docs/react/guides/queries
- Mutations: https://tanstack.com/query/latest/docs/react/guides/mutations
Gotchas:
- Query keys must be arrays
- Automatic refetching on window focus
- Default staleTime is 0
```
2. **Find Implementation Examples**
- Search GitHub for similar implementations
- Look for StackOverflow solutions (recent, highly-voted)
- Find blog posts from reputable sources
- Check official example repositories
Document:
```
Example: Form validation with React Hook Form + Zod
Source: https://github.com/react-hook-form/react-hook-form/tree/master/examples/V7/zodResolver
Relevance: Shows exact integration pattern needed
Key Takeaway: Use zodResolver from @hookform/resolvers
```
3. **Research Best Practices**
- Search for "[technology] best practices [current year]"
- Look for common pitfalls and gotchas
- Research performance considerations
- Check security implications (OWASP guidelines)
Document:
```
Practice: Input sanitization for user content
Why: Prevent XSS attacks
How: Use DOMPurify before rendering HTML
Reference: https://owasp.org/www-community/attacks/xss/
Warning: NEVER use dangerouslySetInnerHTML without sanitization
```
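As a minimal illustration of the sanitization principle — this only escapes plain-text interpolation; real HTML sanitization should still go through DOMPurify:

```typescript
// Sketch: escape untrusted text before it reaches the DOM so markup in
// user input renders inert. (Illustrative helper, not a DOMPurify substitute.)
function escapeHtml(untrusted: string): string {
  return untrusted
    .replace(/&/g, "&amp;")   // must run first, or later entities double-escape
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

console.log(escapeHtml('<img src=x onerror="alert(1)">'));
// &lt;img src=x onerror=&quot;alert(1)&quot;&gt;
```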
4. **Performance & Security Research**
- Bundle size implications of new dependencies
- Runtime performance patterns
- Security vulnerabilities to avoid
- Accessibility considerations
Document specific URLs and recommendations
### Phase 4: Ultra-Thinking (Critical)
**STOP AND THINK DEEPLY BEFORE WRITING THE PRP**
This is the most important phase. Spend significant time analyzing:
1. **Integration Analysis**
- How does the new feature connect to existing code?
- What existing patterns should be followed?
- Where might conflicts arise?
- What files will need to be created vs modified?
2. **Implementation Path Planning**
- What is the logical order of implementation steps?
- What are the dependencies between steps?
- Where are the potential roadblocks?
- What edge cases need handling?
3. **Validation Strategy**
- What can be validated automatically?
- What requires manual testing?
- How can the implementer verify each step?
- What are the success criteria?
4. **Context Completeness Check**
Ask yourself:
- Could an AI agent implement this without asking questions?
- Are all integration points documented?
- Are all necessary examples included?
- Are gotchas and warnings clearly stated?
- Are validation gates executable?
- Is the implementation path clear and logical?
5. **Quality Assessment**
- Is this PRP comprehensive enough for one-pass implementation?
- What could cause the implementation to fail?
- What additional context would be helpful?
- Are all assumptions documented?
### Phase 5: Generate the PRP
Use the template from `assets/prp_template.md` as the base structure, and populate it with:
1. **Metadata Section**
- Feature name
- Timeline estimate
- Confidence score (1-10)
- Creation date
2. **Executive Summary**
- 2-3 sentences describing the feature
- Core value proposition
3. **Research Findings**
- Codebase analysis results (with file:line references)
- External research (with specific URLs and sections)
- Document EVERYTHING discovered in Phase 2 and 3
4. **Technical Specification**
- Architecture overview
- Component breakdown
- Data models
- API endpoints (if applicable)
5. **Implementation Blueprint**
- Prerequisites
- Step-by-step implementation (with pseudocode)
- File-by-file changes
- Reference patterns from codebase
- Error handling strategy
- Edge cases
6. **Testing Strategy**
- Unit test approach
- Integration test approach
- Manual testing checklist
7. **Validation Gates**
Must be EXECUTABLE commands:
```bash
# Type checking
npm run type-check
# Linting
npm run lint
# Tests
npm run test
# Build
npm run build
```
8. **Success Criteria**
- Clear, measurable completion criteria
- Checklist format
### Phase 6: Quality Scoring
Score the PRP on a scale of 1-10 for one-pass implementation success:
**Scoring Criteria**:
- **9-10**: Exceptionally detailed, all context included, clear path, executable gates
- **7-8**: Very good, minor gaps, mostly clear implementation path
- **5-6**: Adequate, some ambiguity, may require clarification
- **3-4**: Incomplete research, missing context, unclear path
- **1-2**: Insufficient for implementation
**If score is below 7**: Go back and improve the PRP before delivering it.
### Phase 7: Save and Deliver
1. **Determine Feature Name**
- Use kebab-case
- Be descriptive but concise
- Example: "user-authentication", "dark-mode-toggle", "data-export"
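Deriving that kebab-case name from a free-form feature title can be sketched as a small helper (the function name and sample titles are illustrative):

```typescript
// Sketch: normalize a feature title into a kebab-case PRP filename stem.
function toKebabCase(title: string): string {
  return title
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse non-alphanumeric runs to one dash
    .replace(/^-|-$/g, "");      // strip leading/trailing dashes
}

console.log(toKebabCase("Dark Mode Toggle"));     // dark-mode-toggle
console.log(toKebabCase("User Authentication!")); // user-authentication
```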
2. **Save the PRP**
```
Save to: PRPs/[feature-name].md
```
If PRPs directory doesn't exist, create it:
```bash
mkdir -p PRPs
```
3. **Deliver Summary to User**
Provide:
- Brief summary of the feature
- Location of saved PRP
- Confidence score and rationale
- Next steps recommendation
## Quality Checklist
Before delivering the PRP, verify:
- [ ] Feature requirements fully understood
- [ ] Codebase analysis completed with specific file references
- [ ] External research completed with URLs and versions
- [ ] All similar patterns identified and documented
- [ ] Coding conventions documented
- [ ] Test patterns identified
- [ ] Implementation steps clearly defined
- [ ] Validation gates are executable (not pseudo-code)
- [ ] Error handling strategy documented
- [ ] Edge cases identified
- [ ] Success criteria defined
- [ ] Confidence score 7+ (if not, improve the PRP)
- [ ] No assumptions left undocumented
- [ ] Integration points clearly identified
- [ ] PRP saved to correct location
## Common Pitfalls to Avoid
1. **Vague References**
- ❌ "There's a similar component somewhere"
- ✅ "See UserProfile at src/components/UserProfile.tsx:45-67"
2. **Missing Version Information**
- ❌ "Use React Query"
- ✅ "Use @tanstack/react-query v5.28.0"
3. **Non-Executable Validation Gates**
- ❌ "Run tests and make sure they pass"
- ✅ "npm run test && npm run build"
4. **Generic Best Practices**
- ❌ "Follow React best practices"
- ✅ "Use named exports (see src/components/Button.tsx:1)"
5. **Incomplete Research**
- ❌ Skipping codebase analysis
- ✅ Thoroughly document existing patterns
6. **Missing Gotchas**
- ❌ Assuming smooth implementation
- ✅ Document known issues and edge cases
## Example Usage
**User Request**:
> "Create a PRP for adding dark mode support to the application"
**Your Response**:
1. Clarify: "Should dark mode preference persist across sessions? Should it respect system preferences?"
2. Research codebase for theme-related code
3. Research external resources (dark mode best practices, library options)
4. Ultra-think about implementation approach
5. Generate comprehensive PRP using template
6. Score the PRP
7. Save to `PRPs/dark-mode-support.md`
8. Deliver summary with confidence score
## Resources
### Template
- `assets/prp_template.md` - Base template for all PRPs
### References
- `references/research_methodology.md` - Detailed research guidance and best practices
## Notes
- **Research is mandatory**: Never skip codebase or external research
- **Be specific**: Always include file paths, line numbers, URLs, versions
- **Think deeply**: Phase 4 (Ultra-Thinking) is critical for success
- **Validate everything**: All validation gates must be executable
- **Score honestly**: If confidence is below 7, improve the PRP
- **Context is king**: The implementer only has what you put in the PRP

# Product Requirement Plan: [Feature Name]
## Metadata
- **Feature**: [Feature name]
- **Target Completion**: [Timeline estimate]
- **Confidence Score**: [1-10] - Likelihood of one-pass implementation success
- **Created**: [Date]
## Executive Summary
[2-3 sentences describing what this feature does and why it's valuable]
## Research Findings
### Codebase Analysis
#### Similar Patterns Found
[List similar features or patterns discovered in the codebase with file references]
- **Pattern**: [Pattern name]
- **Location**: `path/to/file.ts:line`
- **Description**: [What this pattern does]
- **Relevance**: [Why this is useful for the current feature]
#### Existing Conventions
[List coding conventions, architectural patterns, and style guidelines to follow]
- **Convention**: [Convention name]
- **Example**: [Code snippet or file reference]
- **Application**: [How to apply this to the new feature]
#### Test Patterns
[Document existing test patterns and validation approaches]
- **Test Framework**: [Framework name and version]
- **Pattern**: [Test pattern to follow]
- **Location**: `path/to/test.spec.ts`
### External Research
#### Documentation References
[List all relevant documentation with specific URLs and sections]
- **Resource**: [Library/Framework name]
- **URL**: [Specific URL to docs]
- **Key Sections**: [Relevant sections to read]
- **Version**: [Specific version]
- **Gotchas**: [Known issues or quirks]
#### Implementation Examples
[List real-world examples and references]
- **Example**: [Title/Description]
- **Source**: [GitHub/StackOverflow/Blog URL]
- **Relevance**: [What to learn from this]
- **Cautions**: [What to avoid]
#### Best Practices
[Document industry best practices and common pitfalls]
- **Practice**: [Best practice name]
- **Why**: [Rationale]
- **How**: [Implementation approach]
- **Warning**: [What to avoid]
## Technical Specification
### Architecture Overview
[High-level architecture diagram or description]
```
[ASCII diagram or description of component interactions]
```
### Component Breakdown
#### Component 1: [Name]
- **Purpose**: [What this component does]
- **Location**: `path/to/component`
- **Dependencies**: [List dependencies]
- **Interface**: [API or props interface]
#### Component 2: [Name]
[Repeat for each major component]
### Data Models
[Define all data structures, types, and schemas]
```typescript
// Example: Define interfaces/types
interface FeatureData {
// ...
}
```
### API Endpoints (if applicable)
[Document any new API endpoints]
- **Endpoint**: `POST /api/feature`
- **Purpose**: [What it does]
- **Request**: [Request schema]
- **Response**: [Response schema]
- **Authentication**: [Auth requirements]
## Implementation Blueprint
### Prerequisites
[List any setup steps, dependencies to install, or environment configuration needed]
1. [Prerequisite 1]
2. [Prerequisite 2]
### Implementation Steps (in order)
#### Step 1: [Step Name]
**Goal**: [What this step accomplishes]
**Pseudocode Approach**:
```
// High-level pseudocode showing the approach
function stepOne() {
// ...
}
```
**Files to Create/Modify**:
- `path/to/file1.ts` - [What changes]
- `path/to/file2.ts` - [What changes]
**Reference Pattern**: See `existing/pattern/file.ts:123` for similar implementation
**Validation**: [How to verify this step works]
#### Step 2: [Step Name]
[Repeat for each implementation step]
### Error Handling Strategy
[Document how errors should be handled]
- **Client-side errors**: [Approach]
- **Server-side errors**: [Approach]
- **Validation errors**: [Approach]
- **Network errors**: [Approach]
### Edge Cases
[List all edge cases to handle]
1. **Edge Case**: [Description]
- **Solution**: [How to handle]
## Testing Strategy
### Unit Tests
[Describe unit test approach]
- **Coverage Target**: [Percentage or scope]
- **Key Test Cases**: [List critical test scenarios]
- **Mock Strategy**: [What to mock and why]
### Integration Tests
[Describe integration test approach]
- **Test Scenarios**: [List integration test cases]
- **Setup Required**: [Test environment setup]
### Manual Testing Checklist
- [ ] [Test scenario 1]
- [ ] [Test scenario 2]
- [ ] [Edge case 1]
- [ ] [Edge case 2]
## Validation Gates
### Pre-Implementation Validation
```bash
# Ensure development environment is ready
[Commands to verify environment setup]
```
### During Implementation Validation
```bash
# Type checking
npm run type-check
# Linting
npm run lint
# Unit tests (watch mode during development)
npm run test:watch
```
### Post-Implementation Validation
```bash
# Full test suite
npm run test
# Type checking
npm run type-check
# Linting
npm run lint
# Build verification
npm run build
# E2E tests (if applicable)
npm run test:e2e
```
### Manual Validation Steps
1. [Manual test step 1]
2. [Manual test step 2]
3. [Verify in browser/UI]
## Dependencies
### New Dependencies (if any)
```json
{
"dependencies": {
"package-name": "^version"
},
"devDependencies": {
"test-package": "^version"
}
}
```
**Justification**: [Why each dependency is needed]
### Version Compatibility
- **Node**: [Version requirement]
- **Framework**: [Version requirement]
- **Other**: [Version requirements]
## Migration & Rollout
### Database Migrations (if applicable)
[Document any database schema changes]
### Feature Flags (if applicable)
[Document feature flag strategy]
### Rollout Plan
1. [Rollout step 1]
2. [Rollout step 2]
## Success Criteria
- [ ] All validation gates pass
- [ ] All test cases pass (unit, integration, manual)
- [ ] No TypeScript errors
- [ ] No linting errors
- [ ] Build succeeds
- [ ] Feature works as specified
- [ ] Edge cases handled
- [ ] Error handling implemented
- [ ] Code follows existing conventions
- [ ] Documentation updated
## Known Limitations
[Document any known limitations or future enhancements]
## References
### Internal Documentation
- [Link to internal docs]
### External Resources
- [Link to external resources used during research]
## Appendix
### Code Snippets from Research
[Include any useful code snippets discovered during research]
```typescript
// Example from existing codebase
```
### Additional Notes
[Any additional context that doesn't fit elsewhere]

# Research Methodology for PRP Generation
This document provides detailed guidance on conducting thorough research for creating comprehensive Product Requirement Plans.
## Research Philosophy
The AI agent implementing the PRP only receives:
1. The context you include in the PRP
2. Its training data knowledge
3. Access to the codebase
4. WebSearch capabilities
Therefore, your research findings MUST be:
- **Comprehensive**: Cover all aspects of implementation
- **Specific**: Include exact URLs, file paths, line numbers
- **Actionable**: Provide concrete examples and patterns
- **Complete**: Assume the implementer won't have your conversation context
## Codebase Analysis Process
### 1. Find Similar Features
**Goal**: Identify existing implementations that solve similar problems
**Approach**:
```bash
# Search for similar feature keywords using the Grep tool
# with relevant patterns. Look for:
# - Similar UI components
# - Similar API endpoints
# - Similar data models
# - Similar business logic
```
**What to Document**:
- Exact file paths with line numbers (e.g., `src/components/UserProfile.tsx:45-67`)
- Code snippets showing the pattern
- Why this pattern is relevant
- Any modifications needed
**Example**:
```
Found: User authentication flow in `src/auth/AuthProvider.tsx:23-89`
Pattern: Context provider with useAuth hook
Relevance: Similar state management approach needed for feature X
Adaptation: Will need to add Y and Z properties
```
### 2. Identify Architectural Patterns
**Goal**: Understand how the codebase is structured
**Look for**:
- Directory structure conventions
- File naming patterns
- Component organization
- State management approach (Redux, Context, Zustand, etc.)
- API structure patterns
- Database access patterns
- Error handling patterns
**What to Document**:
```
Pattern: Feature-based directory structure
Example: src/features/authentication/
Application: Create src/features/[new-feature]/ with:
- components/
- hooks/
- types/
- api/
- tests/
```
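The scaffold above can be created mechanically. A sketch using Node's `fs` module — the temp-dir target is for illustration only; in a real project the root would be `src/features/[new-feature]/`:

```typescript
// Sketch: scaffold the feature-based directory layout described above
// inside a throwaway temp directory.
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

const featureRoot = fs.mkdtempSync(path.join(os.tmpdir(), "feature-"));
for (const dir of ["components", "hooks", "types", "api", "tests"]) {
  fs.mkdirSync(path.join(featureRoot, dir), { recursive: true });
}

console.log(fs.readdirSync(featureRoot).sort().join(","));
// api,components,hooks,tests,types
```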
### 3. Analyze Coding Conventions
**Goal**: Ensure consistency with existing codebase
**Check**:
- TypeScript usage (strict mode? interfaces vs types?)
- Component patterns (FC vs function? default vs named exports?)
- Styling approach (CSS modules, styled-components, Tailwind?)
- Import ordering
- Comment style
- Function naming (camelCase, descriptive names)
**What to Document**:
```
Convention: Named exports for all components
Example: export function UserProfile() { ... }
Reasoning: Easier refactoring and better IDE support
```
### 4. Study Test Patterns
**Goal**: Write tests that match existing patterns
**Investigate**:
- Test framework (Jest, Vitest, etc.)
- Testing library usage (@testing-library/react?)
- Test file naming (`*.test.ts` or `*.spec.ts`?)
- Test organization (co-located or separate test directory?)
- Mock patterns
- Test coverage expectations
**What to Document**:
```
Pattern: Co-located tests with *.test.tsx suffix
Framework: Vitest + @testing-library/react
Example: src/components/Button.test.tsx
Approach: Test user interactions, not implementation details
Mock Strategy: Use vi.fn() for callbacks, MSW for API calls
```
### 5. Check Configuration Files
**Goal**: Understand build, lint, and tooling setup
**Review**:
- `package.json` - scripts and dependencies
- `tsconfig.json` - TypeScript configuration
- `vite.config.ts` or `webpack.config.js` - build setup
- `.eslintrc` - linting rules
- `.prettierrc` - formatting rules
**What to Document**:
```
TypeScript: Strict mode enabled
Build: Vite with React plugin
Path Aliases: '@/' maps to 'src/'
Must use: `import type` syntax for type-only imports
```
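How a `'@/'` alias maps onto `'src/'` can be illustrated with a small lookup. This resolver is a toy for the PRP reader's intuition, not how `tsc` or Vite actually implement alias resolution:

```typescript
// Sketch: resolve an aliased import specifier to its on-disk prefix,
// mirroring a tsconfig "paths" entry like '@/*' -> 'src/*'.
const aliases: Record<string, string> = { "@/": "src/" };

function resolveAlias(importPath: string): string {
  for (const [alias, target] of Object.entries(aliases)) {
    if (importPath.startsWith(alias)) {
      return target + importPath.slice(alias.length);
    }
  }
  return importPath; // no alias matched; leave the specifier untouched
}

console.log(resolveAlias("@/components/Button")); // src/components/Button
console.log(resolveAlias("./local/module"));      // ./local/module
```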
## External Research Process
### 1. Library Documentation
**When to Search**:
- Using a new library or framework feature
- Integrating third-party services
- Implementing complex functionality
**How to Search**:
1. Go directly to official documentation
2. Search for the specific version being used
3. Look for:
- Getting started guides
- API references
- Examples
- Migration guides
- Known issues
**What to Document**:
```
Library: @tanstack/react-query v5
URL: https://tanstack.com/query/latest/docs/react/overview
Key Sections:
- Queries: https://tanstack.com/query/latest/docs/react/guides/queries
- Mutations: https://tanstack.com/query/latest/docs/react/guides/mutations
Version: 5.28.0 (check package.json)
Gotchas:
- Query keys must be arrays
- Automatic refetching on window focus (may want to disable)
- Stale time defaults to 0
```
### 2. Implementation Examples
**Where to Search**:
- GitHub repositories (search: "language:typescript [feature]")
- StackOverflow (recent answers)
- Official example repositories
- Blog posts from reputable sources
**What to Look For**:
- Production-grade code (not quick hacks)
- Recent examples (check dates)
- Well-explained implementations
- Edge case handling
**What to Document**:
```
Example: Form validation with Zod and React Hook Form
Source: https://github.com/react-hook-form/react-hook-form/tree/master/examples/V7/zodResolver
Relevance: Shows integration pattern we need
Key Takeaway: Use zodResolver for seamless integration
Caution: Needs @hookform/resolvers package
```
### 3. Best Practices Research
**Search Queries**:
- "[Technology] best practices 2024"
- "[Feature] common pitfalls"
- "[Library] performance optimization"
- "[Pattern] security considerations"
**What to Document**:
```
Practice: Input sanitization for user-generated content
Why: Prevent XSS attacks
How: Use DOMPurify library before rendering HTML
Reference: https://owasp.org/www-community/attacks/xss/
Warning: Never use dangerouslySetInnerHTML without sanitization
```
### 4. Performance Considerations
**Research**:
- Bundle size implications
- Runtime performance patterns
- Common optimization techniques
- Lazy loading opportunities
**What to Document**:
```
Performance: Large data table rendering
Solution: Use virtualization (@tanstack/react-virtual)
Reference: https://tanstack.com/virtual/latest
Benefit: Render only visible rows (handles 100k+ items)
Tradeoff: Adds 15KB to bundle
```
### 5. Security Research
**Check**:
- Common vulnerabilities (OWASP Top 10)
- Authentication/authorization patterns
- Data validation requirements
- Secure defaults
**What to Document**:
```
Security: API authentication
Pattern: Use HTTP-only cookies for tokens
Reference: https://owasp.org/www-community/HttpOnly
Implementation: Configure in Cloudflare Workers
Warning: Don't store tokens in localStorage
```
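The HTTP-only cookie pattern can be sketched as a `Set-Cookie` header builder. The attribute choices below are illustrative defaults for a session token, not a complete auth design:

```typescript
// Sketch: build a Set-Cookie header value that keeps the session token
// out of reach of client-side JavaScript.
function sessionCookie(token: string): string {
  return [
    `session=${token}`,
    "HttpOnly",        // not readable via document.cookie
    "Secure",          // sent over HTTPS only
    "SameSite=Strict", // not sent on cross-site requests
    "Path=/",
  ].join("; ");
}

console.log(sessionCookie("abc123"));
// session=abc123; HttpOnly; Secure; SameSite=Strict; Path=/
```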
## Combining Research Findings
### Integration Analysis
After completing both codebase and external research:
1. **Match external patterns to codebase conventions**
- "Library X recommends pattern Y, but codebase uses pattern Z"
- Document adaptations needed
2. **Identify conflicts or gaps**
- "Existing auth pattern doesn't support OAuth"
- Document how to extend existing patterns
3. **Plan integration points**
- "New feature will integrate with existing [Component] at [Location]"
- Document all touch points
### Context Completeness Check
Before finalizing research, verify:
- [ ] Can the implementer understand the codebase structure from your findings?
- [ ] Are all external dependencies documented with versions and URLs?
- [ ] Are code examples specific enough to follow?
- [ ] Are gotchas and warnings clearly stated?
- [ ] Are all integration points identified?
- [ ] Can the implementer validate their work with the gates you've defined?
## Research Output Format
Organize findings into these categories for the PRP:
1. **Codebase Analysis**
- Similar patterns (with file:line references)
- Existing conventions (with examples)
- Test patterns (with framework details)
2. **External Research**
- Documentation references (with specific URLs and sections)
- Implementation examples (with source links and relevance)
- Best practices (with rationale and warnings)
3. **Integration Points**
- Where new code connects to existing code
- What patterns to follow
- What conventions to maintain
## Common Research Pitfalls to Avoid
1. **Vague references**: "There's a similar component somewhere" ❌
- Should be: "See UserProfile component at src/components/UserProfile.tsx:45" ✅
2. **Outdated examples**: Linking to old blog posts or deprecated APIs ❌
- Should be: Check dates, verify current best practices ✅
3. **Missing version info**: "Use React Query" ❌
- Should be: "Use @tanstack/react-query v5.28.0" ✅
4. **No gotchas**: Assuming smooth implementation ❌
- Should be: Document known issues, common mistakes, edge cases ✅
5. **Too generic**: "Follow React best practices" ❌
- Should be: Specific patterns with code examples ✅
## Research Time Allocation
**Codebase Analysis**: 40% of research time
- Critical for maintaining consistency
- Most valuable for implementation
**External Research**: 35% of research time
- Essential for new technologies
- Validation of approaches
**Integration Planning**: 15% of research time
- Connecting the dots
- Identifying conflicts
**Documentation**: 10% of research time
- Organizing findings
- Creating clear references
## Quality Indicators
Your research is complete when:
- ✅ An implementer could start coding immediately
- ✅ All necessary context is documented
- ✅ Integration points are clear
- ✅ Validation approach is defined
- ✅ Edge cases are identified
- ✅ Error handling strategy is outlined
- ✅ No assumptions are left undocumented