---
description: Create comprehensive feature PRP with deep codebase analysis and research
---
# Create Feature PRP
Feature: $ARGUMENTS
## Mission
Transform a feature request into a comprehensive implementation PRP through systematic codebase analysis, external research, and strategic planning.
Core Principle: We do NOT write code in this phase. Our goal is to create a battle-tested, context-rich implementation plan that enables one-pass implementation success.
Key Philosophy: Context is King. The PRP must contain ALL information needed for implementation - patterns, gotchas, documentation, validation commands - so the execution agent succeeds on the first attempt.
## Planning Process

### Phase 1: Feature Understanding
Deep Feature Analysis:
- Extract the core problem being solved
- Identify user value and business impact
- Determine feature type: New Capability/Enhancement/Refactor/Bug Fix
- Assess complexity: Low/Medium/High
- Map affected systems and components
Create User Story Format:
As a <type of user>
I want to <action/goal>
So that <benefit/value>
### Phase 2: Codebase Intelligence Gathering
Use specialized agents and parallel analysis:
1. Project Structure Analysis
- Detect primary language(s), frameworks, and runtime versions
- Map directory structure and architectural patterns
- Identify service/component boundaries and integration points
- Locate configuration files (pyproject.toml, package.json, etc.)
- Find environment setup and build processes
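The config-file detection step above can be sketched mechanically. A minimal, hypothetical helper (the file-to-stack mapping is illustrative, not exhaustive):

```python
from pathlib import Path

# Hypothetical mapping from well-known config files to the stack they imply;
# extend it with whatever the target codebase actually uses.
CONFIG_HINTS = {
    "pyproject.toml": "Python",
    "package.json": "JavaScript/TypeScript",
    "Cargo.toml": "Rust",
    "go.mod": "Go",
}

def detect_stacks(root: str) -> list[str]:
    """Return the stacks implied by config files present directly under root."""
    root_path = Path(root)
    return sorted(
        stack for name, stack in CONFIG_HINTS.items() if (root_path / name).exists()
    )
```

In practice the agent performs this probing with its own file tools; the sketch just makes the decision rule explicit.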
2. Pattern Recognition (Use specialized subagents when beneficial)
- Search for similar implementations in codebase
- Identify coding conventions:
  - Naming patterns (CamelCase, snake_case, kebab-case)
  - File organization and module structure
  - Error handling approaches
  - Logging patterns and standards
- Extract common patterns for the feature's domain
- Document anti-patterns to avoid
- Check CLAUDE.md for project-specific rules and conventions
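Naming-convention checks like those above are easy to mechanize. An illustrative classifier, where the regexes encode common interpretations of each style rather than any project's actual rules:

```python
import re

# Classify an identifier's naming convention. The patterns are assumptions
# about typical usage: all-lowercase with underscores, leading-capital
# camel case, and all-lowercase with hyphens.
def naming_style(name: str) -> str:
    if re.fullmatch(r"[a-z]+(_[a-z0-9]+)*", name):
        return "snake_case"
    if re.fullmatch(r"[A-Z][a-zA-Z0-9]*", name):
        return "CamelCase"
    if re.fullmatch(r"[a-z]+(-[a-z0-9]+)*", name):
        return "kebab-case"
    return "unknown"
```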
3. Dependency Analysis
- Catalog external libraries relevant to feature
- Understand how libraries are integrated (check imports, configs)
- Find relevant documentation in PRPs/ai_docs/ if available
- Note library versions and compatibility requirements
4. Testing Patterns
- Identify test framework and structure (pytest, jest, etc.)
- Find similar test examples for reference
- Understand test organization (unit vs integration)
- Note coverage requirements and testing standards
5. Integration Points
- Identify existing files that need updates
- Determine new files that need creation and their locations
- Map router/API registration patterns
- Understand database/model patterns if applicable
- Identify authentication/authorization patterns if relevant
Clarify Ambiguities:
- If requirements are unclear at this point, ask the user to clarify before you continue
- Get specific implementation preferences (libraries, approaches, patterns)
- Resolve architectural decisions before proceeding
### Phase 3: External Research & Documentation
Use specialized subagents when beneficial for external research:
Documentation Gathering:
- Research latest library versions and best practices
- Find official documentation with specific section anchors
- Locate implementation examples and tutorials
- Identify common gotchas and known issues
- Check for breaking changes and migration guides
Technology Trends:
- Research current best practices for the technology stack
- Find relevant blog posts, guides, or case studies
- Identify performance optimization patterns
- Document security considerations
Compile Research References:
## Relevant Documentation
- [Library Official Docs](https://example.com/docs#section)
  - Specific feature implementation guide
  - Why: Needed for X functionality
- [Framework Guide](https://example.com/guide#integration)
  - Integration patterns section
  - Why: Shows how to connect components
### Phase 4: Deep Strategic Thinking
Think Harder About:
- How does this feature fit into the existing architecture?
- What are the critical dependencies and order of operations?
- What could go wrong? (Edge cases, race conditions, errors)
- How will this be tested comprehensively?
- What performance implications exist?
- Are there security considerations?
- How maintainable is this approach?
Design Decisions:
- Choose between alternative approaches with clear rationale
- Design for extensibility and future modifications
- Plan for backward compatibility if needed
- Consider scalability implications
### Phase 5: PRP Structure Generation

Create a comprehensive PRP with the following structure:
# Feature: <feature-name>
## Feature Description
<Detailed description of the feature, its purpose, and value to users>
## User Story
As a <type of user>
I want to <action/goal>
So that <benefit/value>
## Problem Statement
<Clearly define the specific problem or opportunity this feature addresses>
## Solution Statement
<Describe the proposed solution approach and how it solves the problem>
## Feature Metadata
**Feature Type**: [New Capability/Enhancement/Refactor/Bug Fix]
**Estimated Complexity**: [Low/Medium/High]
**Primary Systems Affected**: [List of main components/services]
**Dependencies**: [External libraries or services required]
---
## CONTEXT REFERENCES
### Relevant Codebase Files
<List files with line numbers and relevance>
- `path/to/file.py` (lines 15-45) - Why: Contains pattern for X that we'll mirror
- `path/to/model.py` (lines 100-120) - Why: Database model structure to follow
- `path/to/test.py` - Why: Test pattern example
### New Files to Create
- `path/to/new_service.py` - Service implementation for X functionality
- `path/to/new_model.py` - Data model for Y resource
- `tests/path/to/test_new_service.py` - Unit tests for new service
### Relevant Documentation
- [Documentation Link 1](https://example.com/doc1#section)
  - Specific section: Authentication setup
  - Why: Required for implementing secure endpoints
- [Documentation Link 2](https://example.com/doc2#integration)
  - Specific section: Database integration
  - Why: Shows proper async database patterns
### Patterns to Follow
<Specific patterns extracted from codebase - include actual code examples from the project>
**Naming Conventions:** <example from codebase>
**Error Handling:** <example from codebase>
**Logging Pattern:** <example from codebase>
**Other Relevant Patterns:** <example from codebase>
---
## IMPLEMENTATION PLAN
### Phase 1: Foundation
<Describe foundational work needed before main implementation>
**Tasks:**
- Set up base structures (schemas, types, interfaces)
- Configure necessary dependencies
- Create foundational utilities or helpers
### Phase 2: Core Implementation
<Describe the main implementation work>
**Tasks:**
- Implement core business logic
- Create service layer components
- Add API endpoints or interfaces
- Implement data models
### Phase 3: Integration
<Describe how feature integrates with existing functionality>
**Tasks:**
- Connect to existing routers/handlers
- Register new components
- Update configuration files
- Add middleware or interceptors if needed
### Phase 4: Testing & Validation
<Describe testing approach>
**Tasks:**
- Implement unit tests for each component
- Create integration tests for feature workflow
- Add edge case tests
- Validate against acceptance criteria
---
## STEP-BY-STEP TASKS
IMPORTANT: Execute every task in order, top to bottom. Each task is atomic and independently testable.
### Task Format Guidelines
Use information-dense keywords for clarity:
- **CREATE**: New files or components
- **UPDATE**: Modify existing files
- **ADD**: Insert new functionality into existing code
- **REMOVE**: Delete deprecated code
- **REFACTOR**: Restructure without changing behavior
- **MIRROR**: Copy pattern from elsewhere in codebase
### {ACTION} {target_file}
- **IMPLEMENT**: {Specific implementation detail}
- **PATTERN**: {Reference to existing pattern - file:line}
- **IMPORTS**: {Required imports and dependencies}
- **GOTCHA**: {Known issues or constraints to avoid}
- **VALIDATE**: `{executable validation command}`
<Continue with all tasks in dependency order...>
---
## TESTING STRATEGY
<Define testing approach based on project's test framework and patterns discovered in Phase 2>
### Unit Tests
<Scope and requirements based on project standards>
### Integration Tests
<Scope and requirements based on project standards>
### Edge Cases
<List specific edge cases that must be tested for this feature>
---
## VALIDATION COMMANDS
<Define validation commands based on project's tools discovered in Phase 2>
Execute every command to ensure zero regressions and 100% feature correctness.
### Level 1: Syntax & Style
<Project-specific linting and formatting commands>
### Level 2: Unit Tests
<Project-specific unit test commands>
### Level 3: Integration Tests
<Project-specific integration test commands>
### Level 4: Manual Validation
<Feature-specific manual testing steps - API calls, UI testing, etc.>
### Level 5: Additional Validation (Optional)
<MCP servers or additional CLI tools if available>
---
## ACCEPTANCE CRITERIA
<List specific, measurable criteria that must be met for completion>
- [ ] Feature implements all specified functionality
- [ ] All validation commands pass with zero errors
- [ ] Unit test coverage meets requirements (80%+)
- [ ] Integration tests verify end-to-end workflows
- [ ] Code follows project conventions and patterns
- [ ] No regressions in existing functionality
- [ ] Documentation is updated (if applicable)
- [ ] Performance meets requirements (if applicable)
- [ ] Security considerations addressed (if applicable)
---
## COMPLETION CHECKLIST
- [ ] All tasks completed in order
- [ ] Each task validation passed immediately
- [ ] All validation commands executed successfully
- [ ] Full test suite passes (unit + integration)
- [ ] No linting or type checking errors
- [ ] Manual testing confirms feature works
- [ ] Acceptance criteria all met
- [ ] Code reviewed for quality and maintainability
---
## NOTES
<Additional context, design decisions, trade-offs>
<!-- EOF -->
## Output Format
**Filename**: `.claude/PRPs/features/{kebab-case-descriptive-name}.md`
- Replace `{kebab-case-descriptive-name}` with a short, descriptive feature name
- Examples: `add-user-authentication.md`, `implement-search-api.md`, `refactor-database-layer.md`

**Directory**: Create `.claude/PRPs/features/` if it doesn't exist
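Deriving the filename from the feature description is a straightforward slugging step. A hypothetical helper (the function name and normalization rules are illustrative):

```python
import re

# Turn a feature description into the PRP file path: lowercase, collapse
# any run of non-alphanumeric characters into a single hyphen, trim edges.
def prp_filename(feature: str) -> str:
    slug = re.sub(r"[^a-z0-9]+", "-", feature.lower()).strip("-")
    return f".claude/PRPs/features/{slug}.md"
```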
## Quality Criteria
### Context Completeness ✓
- All necessary patterns identified and documented
- External library usage documented with links
- Integration points clearly mapped
- Gotchas and anti-patterns captured
- Every task has executable validation command
### Implementation Ready ✓
- Another developer could execute without additional context
- Tasks ordered by dependency (can execute top-to-bottom)
- Each task is atomic and independently testable
- Pattern references include specific file:line numbers
### Pattern Consistency ✓
- Tasks follow existing codebase conventions
- New patterns justified with clear rationale
- No reinvention of existing patterns or utils
- Testing approach matches project standards
### Information Density ✓
- No generic references (all specific and actionable)
- URLs include section anchors when applicable
- Task descriptions use codebase keywords
- Validation commands are non-interactive and executable as written
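Because every command must run non-interactively, the whole ladder can be executed unattended. A sketch of such a runner, where the ruff/pytest commands are placeholders for whatever tools Phase 2 actually discovered:

```python
import subprocess

# Placeholder command ladder; substitute the project's real tools and paths.
DEFAULT_LEVELS = [
    ("syntax & style", ["ruff", "check", "."]),
    ("unit tests", ["pytest", "tests/unit", "-q"]),
    ("integration tests", ["pytest", "tests/integration", "-q"]),
]

def run_validation(levels=DEFAULT_LEVELS) -> bool:
    """Run each level in order; stop and report at the first failure."""
    for label, cmd in levels:
        if subprocess.run(cmd).returncode != 0:
            print(f"Validation failed at level: {label}")
            return False
    return True
```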
## Success Metrics
- **One-Pass Implementation**: Execution agent can complete the feature without additional research or clarification
- **Validation Complete**: Every task has at least one working validation command
- **Context Rich**: PRP passes the "No Prior Knowledge Test" - someone unfamiliar with the codebase can implement using only PRP content
- **Confidence Score**: #/10 confidence that execution will succeed on the first attempt
## Report
After creating the PRP, provide:
- Summary of feature and approach
- Full path to created PRP file
- Complexity assessment
- Key implementation risks or considerations
- Estimated confidence score for one-pass success