# How to Create a Technical Requirement Specification

Technical Requirement Specifications (PRD or TRQ documents) translate business needs into specific, implementation-ready technical requirements. They bridge the gap between "what we want to build" (business requirements) and "how we'll build it" (design documents).

## Quick Start

```bash
# 1. Create a new technical requirement
scripts/generate-spec.sh technical-requirement prd-001-descriptive-slug

# 2. Open and fill in the file
#    (The file will be created at: docs/specs/technical-requirement/prd-001-descriptive-slug.md)

# 3. Fill in the sections, then validate:
scripts/validate-spec.sh docs/specs/technical-requirement/prd-001-descriptive-slug.md

# 4. Fix any issues, then check completeness:
scripts/check-completeness.sh docs/specs/technical-requirement/prd-001-descriptive-slug.md
```

## When to Write a Technical Requirement

Use a Technical Requirement when you need to:

- Define specific technical implementation details for a feature
- Map business requirements to technical solutions
- Document design decisions and their rationale
- Create acceptance criteria that engineers can test against
- Specify external dependencies and constraints

## Research Phase

Before writing, do your homework:

### 1. Research Related Specifications

Look for upstream and downstream specs:

```bash
# Find the business requirement this fulfills
grep -r "brd\|business" docs/specs/ --include="*.md" | head -20

# Find any existing technical requirements in this domain
grep -r "prd\|technical" docs/specs/ --include="*.md" | head -20

# Find design documents that might inform this
grep -r "design\|architecture" docs/specs/ --include="*.md" | head -20
```

### 2. Research External Documentation

Research relevant technologies and patterns:

```bash
# For external libraries/frameworks:
# Use the doc tools to get the latest official documentation
# Example: research React hooks if implementing a frontend component
# Example: research database indexing strategies if working with large datasets
```

Ask yourself:

- What technologies are most suitable for this?
- Are there industry standards we should follow?
- What does the existing codebase use for similar features?
- Are there performance benchmarks or best practices we should know about?

### 3. Review the Codebase

- How have similar features been implemented?
- What patterns does the team follow?
- What libraries/frameworks are already in use?
- Are there existing utilities or services we can reuse?

## Structure & Content Guide

### Title & Metadata

- **Title**: Clear, specific requirement (e.g., "Implement Real-Time Notification System")
- **Priority**: critical | high | medium | low
- **Document ID**: Use format `PRD-XXX-slug` (e.g., `PRD-001-export-api`)

### Description Section

Answer: "What technical problem are we solving?"

Describe:

- The technical challenge you're addressing
- Why this particular approach matters
- Current technical gaps
- How this impacts system architecture or performance

Example:

```
Currently, bulk exports run synchronously, blocking requests for up to 30 seconds.
This causes timeout errors for exports > 100MB. We need an asynchronous export
system that handles large datasets efficiently.
```

### Business Requirements Addressed Section

Reference the business requirements this fulfills:

```
- [BRD-001] Bulk User Data Export - This implementation enables the export feature
- [BRD-002] Enterprise Data Audit - This satisfies the data integrity requirements
```

Link each BRD to how your technical solution addresses it.

### Technical Requirements Section

List specific, measurable technical requirements:

```markdown
1. **[TR-001] Asynchronous Export Processing**
   - Exports must complete within 5 minutes for datasets up to 500MB
   - Must not block HTTP request threads
   - Must handle a job queue with at least 100 concurrent exports

2. **[TR-002] Data Format Support**
   - Support CSV, JSON, and Parquet formats
   - All formats must preserve data types accurately
   - Handle special characters and encodings (UTF-8, etc.)

3. **[TR-003] Resilience & Retries**
   - Failed exports must retry up to 3 times with exponential backoff
   - Incomplete exports must be resumable or cleanly failed
```
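
A requirement like TR-003 is precise enough to sketch directly, which is a good test of whether it is measurable. A minimal TypeScript illustration, assuming hypothetical helper names (`backoffDelayMs`, `withRetries`) that are not part of any template:

```typescript
// TR-003 sketch: delay doubles per attempt, capped to keep three
// retries inside a bounded window. Attempt 1 -> 1s, 2 -> 2s, 3 -> 4s.
function backoffDelayMs(attempt: number, baseMs = 1000, capMs = 30000): number {
  return Math.min(baseMs * 2 ** (attempt - 1), capMs);
}

// Run a job, retrying with exponential backoff; rethrow after the
// final attempt so the export is "cleanly failed", per the requirement.
async function withRetries<T>(job: () => Promise<T>, maxAttempts = 3): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await job();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts) {
        await new Promise((resolve) => setTimeout(resolve, backoffDelayMs(attempt)));
      }
    }
  }
  throw lastError;
}
```

Writing the sketch also surfaces questions the spec should answer, such as the base delay and whether the cap is configurable.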

**Tips:**

- Be specific: use numbers, formats, and standards
- Make it testable: each requirement should be verifiable
- Reference technical specs: link to API contracts, data models, etc.
- Include edge cases: spell out behavior for error conditions and boundary inputs

### Implementation Approach Section

Describe the high-level technical strategy:

```markdown
**Architecture Pattern**
We'll use a job queue pattern with async workers. HTTP requests will create
an export job and return immediately. Workers process jobs asynchronously
and notify users when complete.

**Key Technologies**
- Job Queue: Redis with Bull library
- Export Service: Node.js worker process
- Storage: S3 for export files
- Notifications: Email service

**Integration Points**
- Integrates with existing User Service API
- Uses auth middleware for permission checking
- Publishes completion events to event bus
```
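
The pattern above ("create a job and return immediately") can be sketched without any infrastructure. A toy in-memory TypeScript version with illustrative names — a real system would back this with Redis/Bull, not an array:

```typescript
// In-memory stand-in for the job queue pattern described above.
type ExportJob = { id: number; userId: string; status: "queued" | "done" };

const queue: ExportJob[] = [];
let nextId = 1;

// HTTP handler side: enqueue and return a job id immediately.
// No export work happens on the request path.
function createExportJob(userId: string): number {
  const job: ExportJob = { id: nextId++, userId, status: "queued" };
  queue.push(job);
  return job.id; // caller polls status later
}

// Worker side: drains the queue outside the request path.
function processNextJob(): ExportJob | undefined {
  const job = queue.shift();
  if (job) job.status = "done"; // a real worker would stream data to S3 here
  return job;
}
```

Even as a toy, the split makes the key design point visible: the request handler's cost is constant regardless of export size.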

### Key Design Decisions Section

Document important choices:

```markdown
**Decision 1: Asynchronous Export vs. Synchronous**
- **Decision**: Use an async job queue instead of blocking requests
- **Rationale**: The synchronous approach causes timeouts for large exports;
  async improves reliability and user experience
- **Tradeoffs**: Adds complexity (job queue, worker processes, status tracking)
  but raises the supported export size from 50MB to 500MB
```

**Why this matters:**

- Explains the "why" behind technical choices
- Helps future developers understand constraints
- Documents tradeoffs explicitly

### Technical Acceptance Criteria Section

Define how you'll know this is implemented correctly:

```markdown
### [TAC-001] Export Job Creation
**Description**: When a user requests an export, a job is created and queued
**Verification**: Unit test verifies the job is created with correct parameters;
integration test verifies the job appears in the queue

### [TAC-002] Async Processing
**Description**: Export job completes without blocking the HTTP request
**Verification**: Load test shows HTTP response time < 100ms regardless of
export size; export job completes within the target time

### [TAC-003] Export Format Accuracy
**Description**: Exported data matches source data exactly (no data loss)
**Verification**: Property-based tests verify format accuracy for various
data types and edge cases
```
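
TAC-003's "no data loss" criterion is exactly the kind of property a round-trip test pins down: serialize, parse back, compare. A minimal TypeScript sketch — the CSV helpers here are simplified stand-ins for illustration, not real export code:

```typescript
// Quote every field; escape embedded double quotes by doubling them.
function toCsvRow(fields: string[]): string {
  return fields.map((f) => `"${f.replace(/"/g, '""')}"`).join(",");
}

// Parse one quoted field per regex match, un-escaping doubled quotes.
function fromCsvRow(row: string): string[] {
  const fields: string[] = [];
  const re = /"((?:[^"]|"")*)"(?:,|$)/g;
  let m: RegExpExecArray | null;
  while ((m = re.exec(row)) !== null) {
    fields.push(m[1].replace(/""/g, '"'));
  }
  return fields;
}

// The property under test: serializing then parsing loses nothing.
function roundTrips(fields: string[]): boolean {
  const back = fromCsvRow(toCsvRow(fields));
  return back.length === fields.length && back.every((f, i) => f === fields[i]);
}
```

A property-based test would feed `roundTrips` with generated inputs (commas, quotes, empty fields, Unicode) rather than hand-picked examples.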

**Tips for Acceptance Criteria:**

- Each should be testable (unit test, integration test, or manual test)
- Include both happy path and edge cases
- Reference specific metrics or standards

### Dependencies Section

**Technical Dependencies**

- What libraries, services, or systems must be in place?
- What versions are required?
- What's the risk if a dependency is unavailable?

```markdown
- **Redis** (v6.0+) - Job queue | Risk: Medium
- **Bull** (v3.0+) - Queue library | Risk: Low
- **S3** - Export file storage | Risk: Low
- **Email Service API** - User notifications | Risk: Medium
```

**Specification Dependencies**

- What other specs must be completed first?
- Why is this a blocker?

```markdown
- [API-001] Export Endpoints - Must be designed before implementation
- [DATA-001] User Data Model - Schema needed to understand the export structure
```

### Constraints Section

Document technical limitations:

```markdown
**Performance**
- Exports must complete within 5 minutes
- p95 latency for export requests must be < 100ms
- System must handle 100 concurrent exports

**Scalability**
- Support export files up to 500MB
- Handle 1000+ daily exports

**Security**
- Only export the user's own data (auth-based filtering)
- Encrypt files in transit and at rest
- Audit-log all exports

**Compatibility**
- Support all major browsers (Chrome, Firefox, Safari, Edge)
- Works with the existing authentication system
```

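
A target like "p95 < 100ms" is only enforceable if everyone agrees on how it is computed from load-test samples. As a reminder of what p95 means, a nearest-rank sketch (an assumed helper, not part of the template):

```typescript
// p95 = the latency at or below which 95% of sampled requests complete.
// Nearest-rank method over recorded load-test samples:
function percentileMs(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b); // ascending
  const rank = Math.ceil((p / 100) * sorted.length); // 1-based nearest rank
  return sorted[Math.max(rank - 1, 0)];
}
```

If the spec's targets will be checked by a specific load-testing tool, name the tool and its percentile method instead of leaving it implicit.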
### Implementation Notes Section

**Key Considerations**

What should the implementation team watch out for?

```markdown
**Error Handling**
- Handle network interruptions during export
- Fail gracefully if S3 becomes unavailable
- Provide clear error messages to users

**Testing Strategy**
- Unit tests for export formatting logic
- Integration tests for the job queue and workers
- Load tests for concurrent export handling
- Property-based tests for data accuracy
```

**Migration Strategy** (if applicable)

- How do we transition from the old system to the new one?
- What about existing data or users?

## Writing Tips

### Make Requirements Testable

- ❌ Bad: "Export should be fast"
- ✅ Good: "Export must complete within 5 minutes for datasets up to 500MB, with p95 latency under 100ms"

### Be Specific About Trade-offs

- Don't just say "we chose Redis"
- Explain: "We chose Redis over RabbitMQ because it's already in our stack and provides the job persistence we need"

### Link to Other Specs

- Reference business requirements this fulfills: `[BRD-001]`
- Reference data models: `[DATA-001]`
- Reference API contracts: `[API-001]`
- Reference design documents: `[DES-001]`

### Document Constraints Clearly

- Performance targets with specific numbers
- Scalability limits and assumptions
- Security and compliance requirements
- Browser/platform support

### Include Edge Cases

- What happens with extremely large datasets?
- How do we handle special characters, encoding issues, or missing data?
- What about rate limiting and concurrent requests?

### Complete All TODOs

- Replace placeholder text with actual decisions
- If something is still undecided, explain what needs to happen to decide it

## Validation & Fixing Issues

### Run the Validator

```bash
scripts/validate-spec.sh docs/specs/technical-requirement/prd-001-your-spec.md
```

### Common Issues & Fixes

**Issue**: "Missing Technical Acceptance Criteria"

- **Fix**: Add 3-5 criteria describing how you'll verify implementation correctness

**Issue**: "TODO items in Implementation Approach (2 items)"

- **Fix**: Complete the architecture pattern, technologies, and integration points

**Issue**: "No Performance constraints specified"

- **Fix**: Add specific latency, throughput, and availability targets

**Issue**: "Dependencies section incomplete"

- **Fix**: List all required libraries, services, and other specifications this depends on

### Check Completeness

```bash
scripts/check-completeness.sh docs/specs/technical-requirement/prd-001-your-spec.md
```

## Decision-Making Framework

As you write the technical requirement, reason through:

1. **Problem**: What technical problem are we solving?
   - Is this a performance issue, reliability issue, or capability gap?
   - What's the current cost of not solving this?

2. **Approach**: What are the viable technical approaches?
   - What are the pros and cons of each?
   - What's the simplest approach that solves the problem?
   - What does the team have experience with?

3. **Trade-offs**: What are we accepting with this approach?
   - Complexity vs. flexibility?
   - Performance vs. maintainability?
   - Immediate need vs. future extensibility?

4. **Measurability**: How will we know this works?
   - What specific metrics define success?
   - What's the threshold for "passing"?

5. **Dependencies**: What must happen first?
   - Are there blockers we need to resolve?
   - Can parts be parallelized?

## Example: Complete Technical Requirement

```markdown
# [PRD-001] Asynchronous Export Service

**Priority:** High

## Description
Currently, bulk exports run synchronously, blocking HTTP requests for up to
30 seconds, causing timeouts for exports > 100MB. We need an asynchronous
export system that handles large datasets efficiently and provides job status
tracking to users.

## Business Requirements Addressed
- [BRD-001] Bulk User Data Export - Enables the core export feature
- [BRD-002] Enterprise Audit Requirements - Provides reliable data export

## Technical Requirements

1. **[TR-001] Asynchronous Processing**
   - Export jobs must not block HTTP requests
   - Jobs complete within 5 minutes for datasets up to 500MB
   - System handles 100 concurrent exports

2. **[TR-002] Format Support**
   - Support CSV and JSON formats
   - Preserve data types and handle special characters

3. **[TR-003] Job Status Tracking**
   - Users can check export job status via API
   - Job history retained for 30 days

... [rest of sections follow] ...
```

## Next Steps

1. **Create the spec**: `scripts/generate-spec.sh technical-requirement prd-XXX-slug`
2. **Research**: Find the related BRD and understand the context
3. **Fill in the sections** using this guide
4. **Validate**: `scripts/validate-spec.sh docs/specs/technical-requirement/prd-XXX-slug.md`
5. **Fix issues** identified by the validator
6. **Share with the architecture/design team** for design document creation