# Modern SDLC Best Practices (2024-25)
This document outlines industry-standard software development lifecycle best practices based on 2024-25 research and modern engineering standards.
## Table of Contents
1. [Development Workflow](#development-workflow)
2. [Testing Strategy](#testing-strategy)
3. [Security (DevSecOps)](#security-devsecops)
4. [Code Quality](#code-quality)
5. [Performance](#performance)
6. [Documentation](#documentation)
7. [DevOps & CI/CD](#devops--cicd)
8. [DORA Metrics](#dora-metrics)
9. [Developer Experience](#developer-experience)
10. [Accessibility](#accessibility)
---
## Development Workflow
### Version Control (Git)
**Branching Strategy**:
- Main/master branch is always deployable
- Feature branches for new work: `feature/{component-name}`
- Bugfix branches: `bugfix/{issue-number}`
- Release branches for production releases
- No direct commits to main (use pull requests)
**Commit Messages**:
- Follow Conventional Commits format
- Structure: `type(scope): description`
- Types: feat, fix, docs, style, refactor, test, chore
- Example: `feat(auth): add OAuth2 social login`
**Code Review**:
- All changes require peer review
- Use pull request templates
- Automated checks must pass before merge
- Review within 24 hours for team velocity
- Focus on logic, security, and maintainability
### Test-Driven Development (TDD)
**RED-GREEN-REFACTOR Cycle**:
1. **RED**: Write failing test first
2. **GREEN**: Write minimum code to pass
3. **REFACTOR**: Improve code quality while tests pass
**Benefits**:
- Better design through testability
- Documentation through tests
- Confidence to refactor
- Fewer regression bugs
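The cycle above, sketched with a hypothetical `isPalindrome` helper and plain assertions in place of a test framework:
```typescript
// RED: write the assertions below first; they fail until the function exists.
// GREEN: the minimal implementation that makes them pass.
function isPalindrome(input: string): boolean {
  // Normalize case and strip non-alphanumeric characters first.
  const cleaned = input.toLowerCase().replace(/[^a-z0-9]/g, "");
  return cleaned === [...cleaned].reverse().join("");
}

// REFACTOR: with the tests green, rename variables or extract helpers
// freely, rerunning the assertions after every change.
console.assert(isPalindrome("Racecar") === true, "simple palindrome");
console.assert(isPalindrome("A man, a plan, a canal: Panama") === true, "with punctuation");
console.assert(isPalindrome("hello") === false, "non-palindrome");
```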
---
## Testing Strategy
### Testing Trophy (Kent C. Dodds)
**Philosophy**: "Write tests. Not too many. Mostly integration."
**Distribution**:
- **Integration Tests (70%)**: User workflows and component interaction
  - Test real user behavior
  - Test multiple units working together
  - Higher confidence than unit tests
  - Example: User registration flow end-to-end
- **Unit Tests (20%)**: Complex business logic only
  - Pure functions
  - Complex algorithms
  - Edge cases and error handling
  - Example: Tax calculation logic
- **E2E Tests (10%)**: Critical user journeys
  - Full stack, production-like environment
  - Happy path scenarios
  - Critical business flows
  - Example: Complete purchase flow
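The unit-test slice above names tax calculation as its example; a minimal sketch of that kind of pure-logic test (the function and rounding rule are illustrative):
```typescript
// Hypothetical pure tax helper: ideal unit-test territory
// (no I/O, no framework, just inputs and outputs).
function calculateTax(amount: number, rate: number): number {
  if (amount < 0) throw new RangeError("amount must be non-negative");
  if (rate < 0 || rate > 1) throw new RangeError("rate must be between 0 and 1");
  // Round to whole cents to avoid floating-point drift.
  return Math.round(amount * rate * 100) / 100;
}

// Unit tests cover the happy path plus edge cases and error handling.
console.assert(calculateTax(100, 0.08) === 8, "happy path");
console.assert(calculateTax(0, 0.08) === 0, "zero amount");
console.assert(calculateTax(19.99, 0.0825) === 1.65, "rounds to cents");
```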
### What NOT to Test (Brittle Patterns)
**Avoid**:
- Emoji presence in UI elements
- Exact number of DOM elements
- Specific element ordering (unless critical)
- API call counts (unless performance critical)
- CSS class names and styling
- Implementation details over user behavior
- Private methods/functions
- Third-party library internals
### What to Prioritize (User-Focused)
**Prioritize**:
- User workflows and interactions
- Business logic and calculations
- Data accuracy and processing
- Error handling and edge cases
- Performance within acceptable limits
- Accessibility compliance (WCAG 2.1 AA)
- Security boundaries
### Semantic Queries (React Testing Library)
**Priority Order**:
1. `getByRole()` - Most preferred (accessibility-first)
2. `getByLabelText()` - Form elements
3. `getByPlaceholderText()` - Inputs without labels
4. `getByText()` - User-visible content
5. `getByDisplayValue()` - Form current values
6. `getByAltText()` - Images
7. `getByTitle()` - Title attributes
8. `getByTestId()` - Last resort only
### Coverage Targets
**Minimum Requirements**:
- Overall coverage: **80%**
- Critical paths: **100%** (auth, payment, data processing)
- Branch coverage: **75%**
- Function coverage: **90%**
**Tools**:
- Jest/Vitest for unit & integration tests
- Cypress/Playwright for E2E tests
- Istanbul/c8 for coverage reporting
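The coverage targets above can be enforced in CI; a sketch for Vitest (Jest uses `coverageThreshold` in its config instead); verify the exact keys against your tool's documentation:
```typescript
// vitest.config.ts -- fail the build when coverage drops below target
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    coverage: {
      provider: "v8", // or "istanbul"
      thresholds: {
        lines: 80,     // overall coverage target
        branches: 75,
        functions: 90,
      },
    },
  },
});
```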
---
## Security (DevSecOps)
### Shift-Left Security
**Principle**: Integrate security into every development stage, not as an afterthought.
**Cost Multiplier**:
- Fix in **design**: 1x cost
- Fix in **development**: 5x cost
- Fix in **testing**: 10x cost
- Fix in **production**: 30x cost
### OWASP Top 10 (2021, the latest stable edition)
1. **Broken Access Control**: Enforce authorization checks on every request
2. **Cryptographic Failures**: Use TLS, encrypt PII, avoid weak algorithms
3. **Injection**: Validate input, use prepared statements, sanitize output
4. **Insecure Design**: Threat modeling, secure design patterns
5. **Security Misconfiguration**: Harden defaults, disable unnecessary features
6. **Vulnerable Components**: Keep dependencies updated, scan for CVEs
7. **Authentication Failures**: MFA, rate limiting, secure session management
8. **Software and Data Integrity Failures**: Verify integrity with signatures, SRI
9. **Security Logging and Monitoring Failures**: Log security events, monitor for anomalies
10. **SSRF (Server-Side Request Forgery)**: Validate URLs, allowlist permitted domains
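The SSRF guidance above can be sketched as a small allowlist check, here with a hypothetical `ALLOWED_HOSTS` list; real deployments also need to resolve and validate the target IP to block redirects into internal ranges:
```typescript
// Hypothetical allowlist of hosts this service may fetch from.
const ALLOWED_HOSTS = new Set(["api.example.com", "cdn.example.com"]);

function isAllowedUrl(raw: string): boolean {
  let url: URL;
  try {
    url = new URL(raw); // rejects malformed input
  } catch {
    return false;
  }
  // Only https, and only hosts on the explicit allowlist.
  return url.protocol === "https:" && ALLOWED_HOSTS.has(url.hostname);
}

console.assert(isAllowedUrl("https://api.example.com/v1/users") === true);
console.assert(isAllowedUrl("https://evil.example.net/") === false);
console.assert(isAllowedUrl("http://api.example.com/") === false); // wrong scheme
console.assert(isAllowedUrl("not a url") === false);
```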
### Dependency Management
**Best Practices**:
- Run `npm audit` / `yarn audit` weekly
- Update dependencies monthly
- Use Dependabot/Renovate for automated updates
- Pin dependency versions in production
- Check licenses for compliance
- Monitor CVE databases
### Secrets Management
**Rules**:
- NEVER commit secrets to version control
- Use environment variables for configuration
- Use secret management tools (Vault, AWS Secrets Manager)
- Rotate secrets regularly
- Scan git history for leaked secrets
- Use `.env.example` for documentation, not `.env`
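A small fail-fast helper for the environment-variable rule, assuming a Node runtime (the helper name is illustrative):
```typescript
// Read a required setting from the environment, failing fast at startup
// rather than deep inside a request handler.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (value === undefined || value === "") {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Typical usage at application boot (names are illustrative):
// const dbUrl = requireEnv("DATABASE_URL");
```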
---
## Code Quality
### Complexity Metrics
**Cyclomatic Complexity**:
- **1-10**: Simple, easy to test
- **11-20**: Moderate, consider refactoring
- **21-50**: High, should refactor
- **50+**: Very high, must refactor
**Tools**: ESLint `complexity` rule, SonarQube
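The complexity ceiling can be enforced automatically; a minimal ESLint flat-config sketch using the built-in `complexity` rule, with the threshold taken from the table above:
```typescript
// eslint.config.js -- fail lint when a function's cyclomatic
// complexity exceeds the "simple" band (1-10).
export default [
  {
    rules: {
      complexity: ["error", { max: 10 }],
    },
  },
];
```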
### Code Duplication
**Thresholds**:
- **< 5%**: Excellent
- **5-10%**: Acceptable
- **10-20%**: Needs attention
- **> 20%**: Critical issue
**DRY Principle**: Don't Repeat Yourself
- Extract common code into functions/modules
- Use design patterns (Template Method, Strategy)
- Balance DRY with readability
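The Strategy pattern mentioned above, sketched for discount handling (the rules and names are illustrative):
```typescript
// Strategy pattern: one function shape, interchangeable algorithms.
type DiscountStrategy = (subtotal: number) => number;

const strategies: Record<string, DiscountStrategy> = {
  none: (subtotal) => subtotal,
  percentage10: (subtotal) => subtotal * 0.9,
  flat5: (subtotal) => Math.max(0, subtotal - 5),
};

// Callers pick a strategy by name; adding a new rule means adding
// one entry, not another branch in a growing if/else chain.
function applyDiscount(subtotal: number, strategyName: string): number {
  const strategy = strategies[strategyName] ?? strategies.none;
  return strategy(subtotal);
}

console.assert(applyDiscount(100, "percentage10") === 90);
console.assert(applyDiscount(3, "flat5") === 0);
console.assert(applyDiscount(100, "unknown") === 100); // falls back to "none"
```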
### Code Smells
**Common Smells**:
- **God Object**: Too many responsibilities
- **Feature Envy**: Too much coupling to other classes
- **Long Method**: > 50 lines
- **Long Parameter List**: > 4 parameters
- **Dead Code**: Unused code
- **Magic Numbers**: Hard-coded values
- **Primitive Obsession**: Overuse of primitives vs objects
**Refactoring Techniques**:
- Extract Method
- Extract Class
- Introduce Parameter Object
- Replace Magic Number with Constant
- Remove Dead Code
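Introduce Parameter Object, applied to the long-parameter-list smell above (names are hypothetical):
```typescript
// Before: long parameter list (a smell per the > 4 parameters rule).
// function createUser(name: string, email: string, age: number,
//                     country: string, newsletter: boolean) { ... }

// After: Introduce Parameter Object -- related values travel together,
// call sites become self-documenting, and adding a field is non-breaking.
interface NewUser {
  name: string;
  email: string;
  age: number;
  country: string;
  newsletter: boolean;
}

function createUser(user: NewUser): string {
  // Returns a display label; a real version would persist the user.
  return `${user.name} <${user.email}>`;
}

console.assert(
  createUser({ name: "Ada", email: "ada@example.com", age: 36, country: "UK", newsletter: true })
    === "Ada <ada@example.com>"
);
```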
### Static Analysis
**Tools**:
- **SonarQube**: Comprehensive code quality platform
- **ESLint**: JavaScript/TypeScript linting
- **Prettier**: Code formatting
- **TypeScript**: Type checking in strict mode
- **Checkmarx**: Security-focused analysis
---
## Performance
### Build Performance
**Targets**:
- Build time: < 2 minutes
- Hot reload: < 200ms
- First build: < 5 minutes
**Optimization**:
- Use build caching
- Parallelize builds
- Tree-shaking
- Code splitting
- Lazy loading
### Runtime Performance
**Core Web Vitals**:
- **LCP (Largest Contentful Paint)**: < 2.5s
- **INP (Interaction to Next Paint)**: < 200ms (replaced FID as a Core Web Vital in March 2024)
- **CLS (Cumulative Layout Shift)**: < 0.1
**API Performance**:
- **P50**: < 100ms
- **P95**: < 500ms
- **P99**: < 1000ms
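The P50/P95/P99 targets above can be checked from raw latency samples; a sketch using the nearest-rank definition (one common convention among several):
```typescript
// Nearest-rank percentile: sort samples, take the value at rank ceil(p/100 * n).
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error("no samples");
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Illustrative latency samples in milliseconds.
const latenciesMs = [80, 95, 110, 120, 450, 90, 85, 100, 105, 900];
console.assert(percentile(latenciesMs, 50) === 100); // P50
console.assert(percentile(latenciesMs, 95) === 900); // P95
```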
**Optimization Techniques**:
- Caching (Redis, CDN)
- Database indexing
- Query optimization
- Compression (gzip, Brotli)
- Image optimization (WebP, lazy loading)
- Code splitting and lazy loading
### Bundle Size
**Targets**:
- Initial bundle: < 200KB (gzipped)
- Total JavaScript: < 500KB (gzipped)
- Images optimized: < 200KB each
**Tools**:
- webpack-bundle-analyzer
- Lighthouse
- Chrome DevTools Performance tab
---
## Documentation
### Code Documentation
**JSDoc/TSDoc**:
- Document all public APIs
- Include examples for complex functions
- Document parameters, return types, exceptions
**Example**:
```typescript
/**
 * Calculates the total price including tax and discounts.
 *
 * @param items - Array of cart items
 * @param taxRate - Tax rate as decimal (e.g., 0.08 for 8%)
 * @param discountCode - Optional discount code
 * @returns Total price with tax and discounts applied
 * @throws {InvalidDiscountError} If discount code is invalid
 *
 * @example
 * const total = calculateTotal(items, 0.08, 'SUMMER20');
 */
function calculateTotal(items: CartItem[], taxRate: number, discountCode?: string): number {
  // ...
}
```
### Project Documentation
**Essential Files**:
- **README.md**: Project overview, setup instructions, quick start
- **CONTRIBUTING.md**: How to contribute, coding standards, PR process
- **CODE_OF_CONDUCT.md**: Community guidelines
- **CHANGELOG.md**: Version history and changes
- **LICENSE**: Legal license information
- **ARCHITECTURE.md**: High-level architecture overview
- **ADRs** (Architecture Decision Records): Document important decisions
---
## DevOps & CI/CD
### Continuous Integration
**Requirements**:
- Automated testing on every commit
- Build verification
- Code quality checks (linting, formatting)
- Security scanning
- Fast feedback (< 10 minutes)
**Pipeline Stages**:
1. Lint & Format Check
2. Unit Tests
3. Integration Tests
4. Security Scan
5. Build Artifacts
6. Deploy to Staging
7. E2E Tests
8. Deploy to Production (with approval)
### Continuous Deployment
**Strategies**:
- **Blue-Green**: Two identical environments, switch traffic
- **Canary**: Gradual rollout to subset of users
- **Rolling**: Update instances incrementally
- **Feature Flags**: Control feature visibility without deployment
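A minimal percentage-rollout flag check, sketched in-process; the hashing scheme here is illustrative, and production systems typically use a flag service:
```typescript
// Deterministic per-user bucketing: the same user always gets the same
// answer for a given flag, so a partial rollout is stable across requests.
function hashToBucket(input: string): number {
  let hash = 0;
  for (const char of input) {
    hash = (hash * 31 + char.codePointAt(0)!) >>> 0; // unsigned 32-bit
  }
  return hash % 100; // bucket 0-99
}

function isFlagEnabled(flag: string, userId: string, rolloutPercent: number): boolean {
  return hashToBucket(`${flag}:${userId}`) < rolloutPercent;
}

// 0% rollout disables for everyone; 100% enables for everyone.
console.assert(isFlagEnabled("new-checkout", "user-42", 0) === false);
console.assert(isFlagEnabled("new-checkout", "user-42", 100) === true);
```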
**Rollback**:
- Automated rollback on failure detection
- Keep last 3-5 versions deployable
- Database migrations reversible
- Monitor key metrics post-deployment
### Infrastructure as Code
**Tools**:
- Terraform, CloudFormation, Pulumi
- Ansible, Chef, Puppet
- Docker, Kubernetes
**Benefits**:
- Version-controlled infrastructure
- Reproducible environments
- Disaster recovery
- Automated provisioning
---
## DORA Metrics
**Four Key Metrics** (DevOps Research and Assessment):
### 1. Deployment Frequency
**How often code is deployed to production**
- **Elite**: Multiple times per day
- **High**: Once per day to once per week
- **Medium**: Once per week to once per month
- **Low**: Less than once per month
### 2. Lead Time for Changes
**Time from commit to production**
- **Elite**: Less than 1 hour
- **High**: 1 day to 1 week
- **Medium**: 1 week to 1 month
- **Low**: More than 1 month
### 3. Change Failure Rate
**Percentage of deployments causing failures**
- **Elite**: < 1%
- **High**: 1-5%
- **Medium**: 5-15%
- **Low**: > 15%
### 4. Time to Restore Service
**Time to recover from production incident**
- **Elite**: < 1 hour
- **High**: < 1 day
- **Medium**: 1 day to 1 week
- **Low**: > 1 week
**Tracking**: Use CI/CD tools, APM (Application Performance Monitoring), incident management systems
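Change failure rate, as one concrete example, can be computed directly from deployment records; the banding below follows the thresholds listed above:
```typescript
type DoraBand = "Elite" | "High" | "Medium" | "Low";

// Percentage of deployments that caused a failure in production.
function changeFailureRate(totalDeploys: number, failedDeploys: number): number {
  if (totalDeploys <= 0) throw new Error("need at least one deployment");
  return (failedDeploys / totalDeploys) * 100;
}

// Map a rate onto the DORA performance bands from the table above.
function classifyCfr(ratePercent: number): DoraBand {
  if (ratePercent < 1) return "Elite";
  if (ratePercent <= 5) return "High";
  if (ratePercent <= 15) return "Medium";
  return "Low";
}

console.assert(classifyCfr(changeFailureRate(200, 1)) === "Elite");   // 0.5%
console.assert(classifyCfr(changeFailureRate(100, 12)) === "Medium"); // 12%
```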
---
## Developer Experience
### Why It Matters
**Commonly cited findings** (figures vary by survey; treat them as indicative):
- Around 83% of engineers report experiencing burnout
- DORA research identifies developer experience as a strong predictor of delivery capability
- Happier developers are reported to be roughly twice as productive
### Key Factors
**Fast Feedback Loops**:
- Quick build times
- Fast test execution
- Immediate linting/formatting feedback
- Hot module reloading
**Good Tooling**:
- Modern IDE with autocomplete
- Debuggers and profilers
- Automated code reviews
- Documentation generators
**Clear Standards**:
- Coding style guides
- Architecture documentation
- Onboarding guides
- Runbooks for common tasks
**Psychological Safety**:
- Blameless post-mortems
- Encourage experimentation
- Celebrate learning from failure
- Mentorship programs
---
## Accessibility
### WCAG 2.1 Level AA Compliance
**Four Principles (POUR)**:
1. **Perceivable**: Information must be presentable in ways users can perceive
   - Alt text for images
   - Captions for videos
   - Sufficient color contrast ratios
2. **Operable**: UI components and navigation must be operable
   - Full keyboard navigation
   - Sufficient time to read content
   - No seizure-inducing content
3. **Understandable**: Information and UI operation must be understandable
   - Readable text
   - Predictable behavior
   - Input assistance (clear error messages)
4. **Robust**: Content must be robust across technologies
   - Valid HTML
   - Correct ARIA attributes
   - Cross-browser compatibility
### Testing Tools
**Automated**:
- axe DevTools
- Lighthouse
- WAVE
- Pa11y
**Manual**:
- Keyboard navigation testing
- Screen reader testing (NVDA, JAWS, VoiceOver)
- Color contrast checkers
- Zoom testing (200%+)
---
## Modern Trends (2024-25)
### AI-Assisted Development
**Tools**:
- GitHub Copilot
- ChatGPT / Claude
- Tabnine
- Amazon CodeWhisperer
**Best Practices**:
- Review all AI-generated code
- Write tests for AI code
- Understand before committing
- Train team on effective prompting
### Platform Engineering
**Concept**: Internal developer platforms to improve developer experience
**Components**:
- Self-service infrastructure
- Golden paths (templates)
- Developer portals
- Observability dashboards
### Observability (vs Monitoring)
**Three Pillars**:
1. **Logs**: What happened
2. **Metrics**: Quantitative data
3. **Traces**: Request flow through system
**Tools**:
- Datadog, New Relic, Grafana
- OpenTelemetry for standardization
- Distributed tracing (Jaeger, Zipkin)
---
## Industry Benchmarks (2024-25)
### Code Quality
- Tech debt ratio: < 5%
- Duplication: < 5%
- Test coverage: > 80%
- Build time: < 2 minutes
### Security
- CVE remediation: < 30 days
- Security training: Quarterly
- Penetration testing: Annually
### Performance
- Page load: < 3 seconds
- API response: P95 < 500ms
- Uptime: 99.9%+
### Team Metrics
- Pull request review time: < 24 hours
- Deployment frequency: Daily+
- Incident MTTR: < 1 hour
- Developer onboarding: < 1 week
---
**References**:
- DORA State of DevOps Report 2024
- OWASP Top 10 (2021 edition)
- WCAG 2.1 Guidelines
- Kent C. Dodds Testing Trophy
- SonarQube Quality Gates
- Google Web Vitals
**Last Updated**: 2024-25
**Version**: 1.0