Initial commit

skills/codebase-auditor/reference/audit_criteria.md

# Codebase Audit Criteria Checklist

This document provides a comprehensive checklist for auditing codebases based on modern software engineering best practices (2024-25).

## 1. Code Quality

### Complexity Metrics
- [ ] Cyclomatic complexity measured for all functions/methods
- [ ] Functions with complexity > 10 flagged as warnings
- [ ] Functions with complexity > 20 flagged as critical
- [ ] Cognitive complexity analyzed
- [ ] Maximum nesting depth < 4 levels
- [ ] Function/method length < 50 LOC (recommendation)
- [ ] File length < 500 LOC (recommendation)

### Code Duplication
- [ ] Duplication analysis performed (minimum 6-line blocks)
- [ ] Overall duplication < 5%
- [ ] Duplicate blocks identified with locations
- [ ] Opportunities for abstraction documented

### Code Smells
- [ ] God objects/classes identified (> 10 public methods)
- [ ] Feature envy detected (high coupling to other classes)
- [ ] Dead code identified (unused imports, variables, functions)
- [ ] Magic numbers replaced with named constants
- [ ] Hard-coded values moved to configuration
- [ ] Naming conventions consistent
- [ ] Error handling comprehensive
- [ ] No console.log in production code
- [ ] No commented-out code blocks

### Language-Specific (TypeScript/JavaScript)
- [ ] No use of `any` type (strict mode)
- [ ] No use of `var` keyword
- [ ] Strict equality (`===`) used consistently
- [ ] Return type annotations present for functions
- [ ] Non-null assertions justified with comments
- [ ] Async/await preferred over Promise chains
- [ ] No implicit any returns
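
A minimal sketch of what passing this checklist looks like in practice; the type and function names (`RawUser`, `parseAge`) are illustrative, not taken from any audited codebase:

```typescript
interface RawUser {
  name: string;
  age?: string;
}

// Avoid: `any`, loose equality, implicit returns, long Promise chains.
// Prefer: explicit return types, `===`, async/await.
async function parseAge(user: RawUser): Promise<number> {
  const age = Number(user.age ?? "0");
  if (Number.isNaN(age) || age === 0) {
    throw new Error(`Invalid age for user ${user.name}`);
  }
  return age;
}
```
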
## 2. Testing & Coverage

### Coverage Metrics
- [ ] Line coverage >= 80%
- [ ] Branch coverage >= 75%
- [ ] Function coverage >= 90%
- [ ] Critical paths have 100% coverage (auth, payment, data processing)
- [ ] Coverage reports generated and accessible

### Testing Trophy Distribution
- [ ] Integration tests: ~70% of total tests
- [ ] Unit tests: ~20% of total tests
- [ ] E2E tests: ~10% of total tests
- [ ] Actual distribution documented

### Test Quality
- [ ] Tests follow "should X when Y" naming pattern
- [ ] Tests are isolated and independent
- [ ] No tests of implementation details (brittle tests)
- [ ] Single assertion per test (or grouped related assertions)
- [ ] Edge cases covered
- [ ] No flaky tests
- [ ] Tests use semantic queries (getByRole, getByLabelText)
- [ ] Avoid testing emoji presence, exact DOM counts, element ordering
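
A hedged sketch of the "should X when Y" naming pattern using Vitest; `applyDiscount` is a hypothetical function standing in for real business logic:

```typescript
import { describe, expect, it } from "vitest";

function applyDiscount(total: number, code?: string): number {
  if (code === "SUMMER20") return total * 0.8;
  return total;
}

describe("applyDiscount", () => {
  it("should reduce the total by 20% when a valid code is given", () => {
    expect(applyDiscount(100, "SUMMER20")).toBe(80);
  });

  it("should return the original total when no code is given", () => {
    expect(applyDiscount(100)).toBe(100);
  });
});
```
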
### Test Performance
- [ ] Tests complete in < 30 seconds (unit/integration)
- [ ] CPU usage monitored (use `npm run test:low -- --run`)
- [ ] No runaway test processes
- [ ] Tests run in parallel where possible
- [ ] Max threads limited to prevent CPU overload

## 3. Security

### Dependency Vulnerabilities
- [ ] No critical CVEs in dependencies
- [ ] No high-severity CVEs in dependencies
- [ ] All dependencies using supported versions
- [ ] No dependencies unmaintained for > 2 years
- [ ] License compliance verified
- [ ] No dependency confusion risks

### OWASP Top 10 (2024)
- [ ] Access control properly implemented
- [ ] Sensitive data encrypted at rest and in transit
- [ ] Input validation prevents injection attacks
- [ ] Security design patterns followed
- [ ] Security configuration reviewed (no defaults)
- [ ] All components up-to-date
- [ ] Authentication robust (MFA, rate limiting)
- [ ] Software integrity verified (SRI, signatures)
- [ ] Security logging and monitoring enabled
- [ ] SSRF protections in place

### Secrets Management
- [ ] No API keys in code
- [ ] No tokens in code
- [ ] No passwords in code
- [ ] No private keys committed
- [ ] Environment variables properly used
- [ ] No secrets in client-side code
- [ ] .env files in .gitignore
- [ ] Git history clean of secrets
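
A hedged sketch of "environment variables properly used": secrets are read from `process.env` at startup and validated, never hard-coded. The variable names (`DATABASE_URL`, `STRIPE_API_KEY`) are illustrative assumptions:

```typescript
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Fail fast at startup instead of discovering a missing secret at request time.
export const config = {
  databaseUrl: requireEnv("DATABASE_URL"),
  stripeApiKey: requireEnv("STRIPE_API_KEY"),
} as const;
```
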
### Security Best Practices
- [ ] Input validation on all user inputs
- [ ] Output encoding prevents XSS
- [ ] CSRF tokens implemented
- [ ] Secure session management
- [ ] HTTPS enforced
- [ ] CSP headers configured
- [ ] Rate limiting on APIs
- [ ] SQL prepared statements used
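
A hedged sketch of the prepared-statement item, assuming node-postgres (`pg`); the `users` table and `email` column are illustrative:

```typescript
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Parameterized query: user input is passed as a bound value ($1),
// never concatenated into the SQL string.
export async function findUserByEmail(email: string) {
  const result = await pool.query(
    "SELECT id, email FROM users WHERE email = $1",
    [email],
  );
  return result.rows[0] ?? null;
}
```
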
## 4. Architecture & Design

### SOLID Principles
- [ ] Single Responsibility: Classes/modules have one reason to change
- [ ] Open/Closed: Open for extension, closed for modification
- [ ] Liskov Substitution: Subtypes are substitutable for base types
- [ ] Interface Segregation: Clients not forced to depend on unused methods
- [ ] Dependency Inversion: Depend on abstractions, not concretions

### Design Patterns
- [ ] Appropriate patterns used (Factory, Strategy, Observer, etc.)
- [ ] No anti-patterns (Singleton abuse, God Object, etc.)
- [ ] Not over-engineered
- [ ] Not under-engineered

### Modularity
- [ ] Low coupling between modules
- [ ] High cohesion within modules
- [ ] No circular dependencies
- [ ] Proper separation of concerns
- [ ] Clean public APIs
- [ ] Internal implementation details hidden

## 5. Performance

### Build Performance
- [ ] Build time < 2 minutes for typical project
- [ ] Bundle size documented and optimized
- [ ] Code splitting implemented
- [ ] Tree-shaking enabled
- [ ] Source maps configured correctly
- [ ] Production build optimized

### Runtime Performance
- [ ] No memory leaks
- [ ] Algorithms efficient (avoid O(n²) where possible)
- [ ] No excessive re-renders (React/Vue)
- [ ] Computations memoized where appropriate
- [ ] Images optimized (< 200KB)
- [ ] Videos optimized or lazy-loaded
- [ ] Lazy loading for large components

### CI/CD Performance
- [ ] Pipeline runs in < 10 minutes
- [ ] Deployment frequency documented
- [ ] Test execution time < 5 minutes
- [ ] Docker images < 500MB (if applicable)

## 6. Documentation

### Code Documentation
- [ ] Public APIs documented (JSDoc/TSDoc)
- [ ] Complex logic has inline comments
- [ ] README.md comprehensive
- [ ] Architecture Decision Records (ADRs) present
- [ ] API documentation available
- [ ] CONTRIBUTING.md exists
- [ ] CODE_OF_CONDUCT.md exists

### Documentation Maintenance
- [ ] No outdated documentation
- [ ] No broken links
- [ ] All sections complete
- [ ] Code examples work correctly
- [ ] Changelog maintained

## 7. DevOps & CI/CD

### CI/CD Maturity
- [ ] Automated testing in pipeline
- [ ] Automated deployment configured
- [ ] Development/staging/production environments
- [ ] Rollback capability exists
- [ ] Feature flags used for risky changes
- [ ] Blue-green or canary deployments

### DORA 4 Metrics
- [ ] Deployment frequency measured
  - Elite: Multiple times per day
  - High: Once per day to once per week
  - Medium: Once per week to once per month
  - Low: Less than once per month
- [ ] Lead time for changes measured
  - Elite: Less than 1 hour
  - High: 1 day to 1 week
  - Medium: 1 week to 1 month
  - Low: More than 1 month
- [ ] Change failure rate measured
  - Elite: < 1%
  - High: 1-5%
  - Medium: 5-15%
  - Low: > 15%
- [ ] Time to restore service measured
  - Elite: < 1 hour
  - High: < 1 day
  - Medium: 1 day to 1 week
  - Low: > 1 week

### Infrastructure as Code
- [ ] Configuration managed as code
- [ ] Infrastructure versioned
- [ ] Secrets managed securely (Vault, AWS Secrets Manager)
- [ ] Environment variables documented

## 8. Accessibility (WCAG 2.1 AA)

### Semantic HTML
- [ ] Proper heading hierarchy (h1 → h2 → h3)
- [ ] ARIA labels where needed
- [ ] Form labels associated with inputs
- [ ] Landmark regions defined (header, nav, main, footer)
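
A hedged TSX sketch of the label-association and landmark items above, assuming React; the component and field names are illustrative:

```tsx
export function SignupForm(): JSX.Element {
  return (
    <main>
      <h1>Create account</h1>
      {/* Label is associated with the input via htmlFor/id */}
      <form aria-label="Sign up">
        <label htmlFor="email">Email address</label>
        <input id="email" name="email" type="email" required />
        <button type="submit">Sign up</button>
      </form>
    </main>
  );
}
```
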
### Keyboard Navigation
- [ ] All interactive elements keyboard accessible
- [ ] Focus management implemented
- [ ] Tab order logical
- [ ] Focus indicators visible

### Screen Reader Support
- [ ] Images have alt text
- [ ] ARIA live regions for dynamic content
- [ ] Links have descriptive text
- [ ] Form errors announced

### Color & Contrast
- [ ] Text contrast >= 4.5:1 (normal text)
- [ ] Text contrast >= 3:1 (large text 18pt+)
- [ ] UI components contrast >= 3:1
- [ ] Color not sole means of conveying information

## 9. Technical Debt

### SQALE Rating
- [ ] Technical debt quantified in person-days
- [ ] Rating assigned (A-E)
  - A: <= 5% of development time
  - B: 6-10%
  - C: 11-20%
  - D: 21-50%
  - E: > 50%

### Debt Categories
- [ ] Code smell debt identified
- [ ] Test debt quantified
- [ ] Documentation debt listed
- [ ] Security debt prioritized
- [ ] Performance debt noted
- [ ] Architecture debt evaluated

## 10. Project-Specific Standards

### Connor's Global Standards
- [ ] TypeScript strict mode enabled
- [ ] No `any` types
- [ ] Explicit return types
- [ ] Comprehensive error handling
- [ ] 80%+ test coverage
- [ ] No console.log statements
- [ ] No `var` keyword
- [ ] No loose equality (`==`)
- [ ] Conventional commits format
- [ ] Branch naming follows pattern: (feature|bugfix|chore)/{component-name}

## Audit Completion

### Final Checks
- [ ] All critical issues identified
- [ ] All high-severity issues documented
- [ ] Severity assigned to each finding
- [ ] Remediation effort estimated
- [ ] Report generated
- [ ] Remediation plan created
- [ ] Stakeholders notified

---

**Note**: This checklist is based on industry best practices as of 2024-25. Adjust severity thresholds and criteria based on your project's maturity stage and business context.

skills/codebase-auditor/reference/best_practices_2025.md

# Modern SDLC Best Practices (2024-25)

This document outlines industry-standard software development lifecycle best practices based on 2024-25 research and modern engineering standards.

## Table of Contents
1. [Development Workflow](#development-workflow)
2. [Testing Strategy](#testing-strategy)
3. [Security (DevSecOps)](#security-devsecops)
4. [Code Quality](#code-quality)
5. [Performance](#performance)
6. [Documentation](#documentation)
7. [DevOps & CI/CD](#devops--cicd)
8. [DORA Metrics](#dora-metrics)
9. [Developer Experience](#developer-experience)
10. [Accessibility](#accessibility)

---

## Development Workflow

### Version Control (Git)

**Branching Strategy**:
- Main/master branch is always deployable
- Feature branches for new work: `feature/{component-name}`
- Bugfix branches: `bugfix/{issue-number}`
- Release branches for production releases
- No direct commits to main (use pull requests)

**Commit Messages**:
- Follow Conventional Commits format
- Structure: `type(scope): description`
- Types: feat, fix, docs, style, refactor, test, chore
- Example: `feat(auth): add OAuth2 social login`

**Code Review**:
- All changes require peer review
- Use pull request templates
- Automated checks must pass before merge
- Review within 24 hours to maintain team velocity
- Focus on logic, security, and maintainability

### Test-Driven Development (TDD)

**RED-GREEN-REFACTOR Cycle**:
1. **RED**: Write a failing test first
2. **GREEN**: Write the minimum code to pass
3. **REFACTOR**: Improve code quality while the tests stay green
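
A minimal sketch of one RED-GREEN iteration using Vitest; `slugify` is a hypothetical function, and the two "files" are shown in a single listing for brevity:

```typescript
// slugify.test.ts — RED: write the failing test first (slugify does not exist yet).
import { describe, expect, it } from "vitest";
import { slugify } from "./slugify";

describe("slugify", () => {
  it("should lower-case and hyphenate words when given a title", () => {
    expect(slugify("Hello World")).toBe("hello-world");
  });
});

// slugify.ts — GREEN: the minimum implementation that makes the test pass.
export function slugify(title: string): string {
  return title.trim().toLowerCase().replace(/\s+/g, "-");
}
```
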
**Benefits**:
- Better design through testability
- Documentation through tests
- Confidence to refactor
- Fewer regression bugs

---

## Testing Strategy

### Testing Trophy (Kent C. Dodds)

**Philosophy**: "Write tests. Not too many. Mostly integration."

**Distribution**:
- **Integration Tests (70%)**: User workflows and component interaction
  - Test real user behavior
  - Test multiple units working together
  - Higher confidence than unit tests
  - Example: User registration flow end-to-end

- **Unit Tests (20%)**: Complex business logic only
  - Pure functions
  - Complex algorithms
  - Edge cases and error handling
  - Example: Tax calculation logic

- **E2E Tests (10%)**: Critical user journeys
  - Full stack, production-like environment
  - Happy path scenarios
  - Critical business flows
  - Example: Complete purchase flow

### What NOT to Test (Brittle Patterns)

**Avoid**:
- Emoji presence in UI elements
- Exact number of DOM elements
- Specific element ordering (unless critical)
- API call counts (unless performance critical)
- CSS class names and styling
- Implementation details over user behavior
- Private methods/functions
- Third-party library internals

### What to Prioritize (User-Focused)

**Prioritize**:
- User workflows and interactions
- Business logic and calculations
- Data accuracy and processing
- Error handling and edge cases
- Performance within acceptable limits
- Accessibility compliance (WCAG 2.1 AA)
- Security boundaries

### Semantic Queries (React Testing Library)

**Priority Order**:
1. `getByRole()` - Most preferred (accessibility-first)
2. `getByLabelText()` - Form elements
3. `getByPlaceholderText()` - Inputs without labels
4. `getByText()` - User-visible content
5. `getByDisplayValue()` - Form current values
6. `getByAltText()` - Images
7. `getByTitle()` - Title attributes
8. `getByTestId()` - Last resort only
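
A hedged sketch of semantic queries in practice, assuming Vitest, `@testing-library/react`, `@testing-library/user-event`, and the `@testing-library/jest-dom` matchers; `LoginForm` is a hypothetical component under test:

```tsx
import { render, screen } from "@testing-library/react";
import userEvent from "@testing-library/user-event";
import { expect, it } from "vitest";
import { LoginForm } from "./LoginForm";

it("should confirm login when the user submits a valid email", async () => {
  const user = userEvent.setup();
  render(<LoginForm />);

  // Query by accessible role and label, not by test id or CSS class.
  await user.type(screen.getByLabelText(/email/i), "ada@example.com");
  await user.click(screen.getByRole("button", { name: /log in/i }));

  expect(screen.getByRole("status")).toHaveTextContent(/logged in/i);
});
```
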
### Coverage Targets

**Minimum Requirements**:
- Overall coverage: **80%**
- Critical paths: **100%** (auth, payment, data processing)
- Branch coverage: **75%**
- Function coverage: **90%**

**Tools**:
- Jest/Vitest for unit & integration tests
- Cypress/Playwright for E2E tests
- Istanbul/c8 for coverage reporting
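
A hedged `vitest.config.ts` sketch that enforces the thresholds above, assuming Vitest >= 1.0 with the v8 coverage provider (older versions place the threshold keys directly under `coverage`):

```typescript
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    coverage: {
      provider: "v8",
      thresholds: {
        lines: 80,      // overall line coverage
        branches: 75,   // branch coverage
        functions: 90,  // function coverage
      },
    },
  },
});
```
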
---

## Security (DevSecOps)

### Shift-Left Security

**Principle**: Integrate security into every development stage, not as an afterthought.

**Cost Multiplier**:
- Fix in **design**: 1x cost
- Fix in **development**: 5x cost
- Fix in **testing**: 10x cost
- Fix in **production**: 30x cost

### OWASP Top 10 (2024)

1. **Broken Access Control**: Enforce authorization checks on every request
2. **Cryptographic Failures**: Use TLS, encrypt PII, avoid weak algorithms
3. **Injection**: Validate input, use prepared statements, sanitize output
4. **Insecure Design**: Threat modeling, secure design patterns
5. **Security Misconfiguration**: Harden defaults, disable unnecessary features
6. **Vulnerable Components**: Keep dependencies updated, scan for CVEs
7. **Authentication Failures**: MFA, rate limiting, secure session management
8. **Software Integrity Failures**: Verify integrity with signatures, SRI
9. **Security Logging**: Log security events, monitor for anomalies
10. **SSRF**: Validate URLs, whitelist allowed domains

### Dependency Management

**Best Practices**:
- Run `npm audit` / `yarn audit` weekly
- Update dependencies monthly
- Use Dependabot/Renovate for automated updates
- Pin dependency versions in production
- Check licenses for compliance
- Monitor CVE databases

### Secrets Management

**Rules**:
- NEVER commit secrets to version control
- Use environment variables for configuration
- Use secret management tools (Vault, AWS Secrets Manager)
- Rotate secrets regularly
- Scan git history for leaked secrets
- Use `.env.example` for documentation, not `.env`

---

## Code Quality

### Complexity Metrics

**Cyclomatic Complexity**:
- **1-10**: Simple, easy to test
- **11-20**: Moderate, consider refactoring
- **21-50**: High, should refactor
- **50+**: Very high, must refactor

**Tool**: ESLint `complexity` rule, SonarQube
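
A hedged sketch of enforcing these bands with ESLint's built-in rules (flat config; save as `eslint.config.js`, or `eslint.config.ts` on recent ESLint versions):

```typescript
export default [
  {
    files: ["**/*.ts", "**/*.tsx"],
    rules: {
      complexity: ["warn", 10],  // warn above the "simple" band
      "max-depth": ["warn", 4],  // maximum nesting depth
      "max-lines-per-function": ["warn", { max: 50, skipBlankLines: true }],
    },
  },
];
```
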
### Code Duplication

**Thresholds**:
- **< 5%**: Excellent
- **5-10%**: Acceptable
- **10-20%**: Needs attention
- **> 20%**: Critical issue

**DRY Principle**: Don't Repeat Yourself
- Extract common code into functions/modules
- Use design patterns (Template Method, Strategy)
- Balance DRY with readability

### Code Smells

**Common Smells**:
- **God Object**: Too many responsibilities
- **Feature Envy**: Too much coupling to other classes
- **Long Method**: > 50 lines
- **Long Parameter List**: > 4 parameters
- **Dead Code**: Unused code
- **Magic Numbers**: Hard-coded values
- **Primitive Obsession**: Overuse of primitives vs objects

**Refactoring Techniques**:
- Extract Method
- Extract Class
- Introduce Parameter Object
- Replace Magic Number with Constant
- Remove Dead Code
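
A hedged before/after sketch of two techniques from the list above (Replace Magic Number with Constant, Introduce Parameter Object); all names are illustrative:

```typescript
// Before: magic numbers and a long parameter list.
function priceBefore(amount: number, qty: number, vip: boolean, express: boolean): number {
  return amount * qty * (vip ? 0.9 : 1) + (express ? 4.99 : 0);
}

// After: named constants and a parameter object.
const VIP_DISCOUNT = 0.9;
const EXPRESS_SHIPPING_FEE = 4.99;

interface PriceOptions {
  amount: number;
  qty: number;
  vip?: boolean;
  express?: boolean;
}

function price({ amount, qty, vip = false, express = false }: PriceOptions): number {
  const discount = vip ? VIP_DISCOUNT : 1;
  const shipping = express ? EXPRESS_SHIPPING_FEE : 0;
  return amount * qty * discount + shipping;
}
```
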
### Static Analysis

**Tools**:
- **SonarQube**: Comprehensive code quality platform
- **ESLint**: JavaScript/TypeScript linting
- **Prettier**: Code formatting
- **TypeScript**: Type checking in strict mode
- **Checkmarx**: Security-focused analysis

---

## Performance

### Build Performance

**Targets**:
- Build time: < 2 minutes
- Hot reload: < 200ms
- First build: < 5 minutes

**Optimization**:
- Use build caching
- Parallelize builds
- Tree-shaking
- Code splitting
- Lazy loading

### Runtime Performance

**Web Vitals (Core)**:
- **LCP (Largest Contentful Paint)**: < 2.5s
- **FID (First Input Delay)**: < 100ms
- **CLS (Cumulative Layout Shift)**: < 0.1
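
A hedged sketch of measuring these metrics in the browser, assuming the `web-vitals` package (v3 API; v4 replaces `onFID` with `onINP`). The `/vitals` endpoint is an illustrative URL:

```typescript
import { onCLS, onFID, onLCP, type Metric } from "web-vitals";

function report(metric: Metric): void {
  // Send each metric to an analytics endpoint as it becomes available.
  navigator.sendBeacon("/vitals", JSON.stringify({ name: metric.name, value: metric.value }));
}

onLCP(report);
onFID(report);
onCLS(report);
```
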
**API Performance**:
- **P50**: < 100ms
- **P95**: < 500ms
- **P99**: < 1000ms

**Optimization Techniques**:
- Caching (Redis, CDN)
- Database indexing
- Query optimization
- Compression (gzip, Brotli)
- Image optimization (WebP, lazy loading)
- Code splitting and lazy loading

### Bundle Size

**Targets**:
- Initial bundle: < 200KB (gzipped)
- Total JavaScript: < 500KB (gzipped)
- Images optimized: < 200KB each

**Tools**:
- webpack-bundle-analyzer
- Lighthouse
- Chrome DevTools Performance tab

---

## Documentation

### Code Documentation

**JSDoc/TSDoc**:
- Document all public APIs
- Include examples for complex functions
- Document parameters, return types, exceptions

**Example**:
```typescript
/**
 * Calculates the total price including tax and discounts.
 *
 * @param items - Array of cart items
 * @param taxRate - Tax rate as decimal (e.g., 0.08 for 8%)
 * @param discountCode - Optional discount code
 * @returns Total price with tax and discounts applied
 * @throws {InvalidDiscountError} If discount code is invalid
 *
 * @example
 * const total = calculateTotal(items, 0.08, 'SUMMER20');
 */
function calculateTotal(items: CartItem[], taxRate: number, discountCode?: string): number {
  // ...
}
```

### Project Documentation

**Essential Files**:
- **README.md**: Project overview, setup instructions, quick start
- **CONTRIBUTING.md**: How to contribute, coding standards, PR process
- **CODE_OF_CONDUCT.md**: Community guidelines
- **CHANGELOG.md**: Version history and changes
- **LICENSE**: Legal license information
- **ARCHITECTURE.md**: High-level architecture overview
- **ADRs** (Architecture Decision Records): Document important decisions

---

## DevOps & CI/CD

### Continuous Integration

**Requirements**:
- Automated testing on every commit
- Build verification
- Code quality checks (linting, formatting)
- Security scanning
- Fast feedback (< 10 minutes)

**Pipeline Stages**:
1. Lint & Format Check
2. Unit Tests
3. Integration Tests
4. Security Scan
5. Build Artifacts
6. Deploy to Staging
7. E2E Tests
8. Deploy to Production (with approval)

### Continuous Deployment

**Strategies**:
- **Blue-Green**: Two identical environments, switch traffic
- **Canary**: Gradual rollout to subset of users
- **Rolling**: Update instances incrementally
- **Feature Flags**: Control feature visibility without deployment
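
A hedged sketch of a feature-flag check guarding a risky change; the flag names, in-memory store, and bucketing logic are illustrative, not a specific vendor's API:

```typescript
type FlagName = "new-checkout" | "search-v2";

const rolloutPercent: Record<FlagName, number> = {
  "new-checkout": 10, // canary: 10% of users
  "search-v2": 100,
};

function isEnabled(flag: FlagName, userId: string): boolean {
  // Stable hash keeps a given user in the same bucket across requests.
  const bucket = [...userId].reduce((h, c) => (h * 31 + c.charCodeAt(0)) % 100, 0);
  return bucket < rolloutPercent[flag];
}

// Usage: branch on the flag instead of shipping a separate build.
// if (isEnabled("new-checkout", user.id)) { renderNewCheckout(); } else { renderOldCheckout(); }
```
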
**Rollback**:
- Automated rollback on failure detection
- Keep last 3-5 versions deployable
- Database migrations reversible
- Monitor key metrics post-deployment

### Infrastructure as Code

**Tools**:
- Terraform, CloudFormation, Pulumi
- Ansible, Chef, Puppet
- Docker, Kubernetes

**Benefits**:
- Version-controlled infrastructure
- Reproducible environments
- Disaster recovery
- Automated provisioning

---

## DORA Metrics

**Four Key Metrics** (DevOps Research and Assessment):

### 1. Deployment Frequency

**How often code is deployed to production**

- **Elite**: Multiple times per day
- **High**: Once per day to once per week
- **Medium**: Once per week to once per month
- **Low**: Less than once per month

### 2. Lead Time for Changes

**Time from commit to production**

- **Elite**: Less than 1 hour
- **High**: 1 day to 1 week
- **Medium**: 1 week to 1 month
- **Low**: More than 1 month

### 3. Change Failure Rate

**Percentage of deployments causing failures**

- **Elite**: < 1%
- **High**: 1-5%
- **Medium**: 5-15%
- **Low**: > 15%

### 4. Time to Restore Service

**Time to recover from a production incident**

- **Elite**: < 1 hour
- **High**: < 1 day
- **Medium**: 1 day to 1 week
- **Low**: > 1 week

**Tracking**: Use CI/CD tools, APM (Application Performance Monitoring), and incident management systems

---

## Developer Experience

### Why It Matters

**Statistics**:
- 83% of engineers experience burnout
- Developer experience is the strongest predictor of delivery capability
- Happy developers are 2x more productive

### Key Factors

**Fast Feedback Loops**:
- Quick build times
- Fast test execution
- Immediate linting/formatting feedback
- Hot module reloading

**Good Tooling**:
- Modern IDE with autocomplete
- Debuggers and profilers
- Automated code reviews
- Documentation generators

**Clear Standards**:
- Coding style guides
- Architecture documentation
- Onboarding guides
- Runbooks for common tasks

**Psychological Safety**:
- Blameless post-mortems
- Encourage experimentation
- Celebrate learning from failure
- Mentorship programs

---

## Accessibility

### WCAG 2.1 Level AA Compliance

**Four Principles (POUR)**:

1. **Perceivable**: Information must be presentable in ways users can perceive
   - Alt text for images
   - Captions for videos
   - Color contrast ratios

2. **Operable**: UI components and navigation must be operable
   - Keyboard navigation
   - Sufficient time to read content
   - No seizure-inducing content

3. **Understandable**: Information and operation of the UI must be understandable
   - Readable text
   - Predictable behavior
   - Input assistance (error messages)

4. **Robust**: Content must be robust across technologies, including assistive technologies
   - Valid HTML
   - ARIA attributes
   - Cross-browser compatibility

### Testing Tools

**Automated**:
- axe DevTools
- Lighthouse
- WAVE
- Pa11y

**Manual**:
- Keyboard navigation testing
- Screen reader testing (NVDA, JAWS, VoiceOver)
- Color contrast checkers
- Zoom testing (200%+)

---

## Modern Trends (2024-25)

### AI-Assisted Development

**Tools**:
- GitHub Copilot
- ChatGPT / Claude
- Tabnine
- Amazon CodeWhisperer

**Best Practices**:
- Review all AI-generated code
- Write tests for AI code
- Understand before committing
- Train team on effective prompting

### Platform Engineering

**Concept**: Internal developer platforms to improve developer experience

**Components**:
- Self-service infrastructure
- Golden paths (templates)
- Developer portals
- Observability dashboards

### Observability (vs Monitoring)

**Three Pillars**:
1. **Logs**: What happened
2. **Metrics**: Quantitative data
3. **Traces**: Request flow through the system

**Tools**:
- Datadog, New Relic, Grafana
- OpenTelemetry for standardization
- Distributed tracing (Jaeger, Zipkin)

---

## Industry Benchmarks (2024-25)

### Code Quality
- Tech debt ratio: < 5%
- Duplication: < 5%
- Test coverage: > 80%
- Build time: < 2 minutes

### Security
- CVE remediation: < 30 days
- Security training: Quarterly
- Penetration testing: Annually

### Performance
- Page load: < 3 seconds
- API response: P95 < 500ms
- Uptime: 99.9%+

### Team Metrics
- Pull request review time: < 24 hours
- Deployment frequency: Daily+
- Incident MTTR: < 1 hour
- Developer onboarding: < 1 week

---

**References**:
- DORA State of DevOps Report 2024
- OWASP Top 10 (2024 Edition)
- WCAG 2.1 Guidelines
- Kent C. Dodds Testing Trophy
- SonarQube Quality Gates
- Google Web Vitals

**Last Updated**: 2024-25
**Version**: 1.0

skills/codebase-auditor/reference/severity_matrix.md

# Severity Matrix & Issue Prioritization

This document defines how to categorize and prioritize issues found during codebase audits.

## Severity Levels

### Critical (P0) - Fix Immediately

**Definition**: Issues that pose immediate risk to security, data integrity, or production stability.

**Characteristics**:
- Security vulnerabilities with known exploits (CVSS scores >= 9.0)
- Secrets or credentials exposed in code
- Data loss or corruption risks
- Production-breaking bugs
- Authentication/authorization bypasses
- SQL injection or XSS vulnerabilities
- Compliance violations (GDPR, HIPAA, etc.)

**Timeline**: Must be fixed within 24 hours
**Effort vs Impact**: Fix immediately regardless of effort
**Deployment**: Requires immediate hotfix release

**Examples**:
- API key committed to repository
- SQL injection vulnerability in production endpoint
- Authentication bypass allowing unauthorized access
- Critical CVE in production dependency (e.g., Log4Shell)
- Unencrypted PII being transmitted over HTTP
- Memory leak causing production crashes

---

### High (P1) - Fix This Sprint

**Definition**: Significant issues that impact quality, security, or user experience but don't pose immediate production risk.

**Characteristics**:
- High-severity security vulnerabilities (CVSS scores 7.0-8.9)
- Critical path missing test coverage
- Performance bottlenecks affecting user experience
- WCAG AA accessibility violations
- TypeScript strict mode violations in critical code
- High cyclomatic complexity (> 20) in business logic
- Missing error handling in critical operations

**Timeline**: Fix within current sprint (2 weeks)
**Effort vs Impact**: Prioritize high-impact, low-effort fixes first
**Deployment**: Include in next regular release

**Examples**:
- Payment processing code with 0% test coverage
- Page load time > 3 seconds
- Form inaccessible to screen readers
- 500+ line function with complexity of 45
- Unhandled promise rejections in checkout flow
- Dependency with a high-severity CVE (CVSS 7.5)

---

### Medium (P2) - Fix Next Quarter

**Definition**: Issues that reduce code maintainability, developer productivity, or future scalability but don't immediately impact users.

**Characteristics**:
- Code smells and duplication
- Medium-severity security issues (CVSS scores 4.0-6.9)
- Test coverage between 60-80%
- Documentation gaps
- Minor performance optimizations
- Outdated dependencies (no CVEs)
- Moderate complexity (10-20)
- Technical debt accumulation

**Timeline**: Fix within next quarter (3 months)
**Effort vs Impact**: Plan during sprint planning, batch similar fixes
**Deployment**: Include in planned refactoring releases

**Examples**:
- 15% code duplication across services
- Missing JSDoc for public API
- God class with 25 public methods
- Build time of 5 minutes
- Test suite takes 10 minutes to run
- Dependency 2 major versions behind (stable)

---

### Low (P3) - Backlog

**Definition**: Minor improvements, stylistic issues, or optimizations that have minimal impact on functionality or quality.

**Characteristics**:
- Stylistic inconsistencies
- Minor code smells
- Documentation improvements
- Nice-to-have features
- Long-term architectural improvements
- Code coverage 80-90% (already meets minimum)
- Low complexity optimizations (< 10)

**Timeline**: Address when time permits or during dedicated tech debt sprints
**Effort vs Impact**: Only fix if effort is minimal or during slow periods
**Deployment**: Bundle with feature releases

**Examples**:
- Inconsistent variable naming (camelCase vs snake_case)
- Missing comments on simple functions
- Single-character variable names in non-critical code
- console.log in development-only code
- README could be more detailed
- Opportunity to refactor a small utility function

---

## Scoring Rubric

Use this matrix to assign severity levels:

| Impact | Effort Low | Effort Medium | Effort High |
|--------|------------|---------------|-------------|
| **Critical** | P0 | P0 | P0 |
| **High** | P1 | P1 | P1 |
| **Medium** | P1 | P2 | P2 |
| **Low** | P2 | P3 | P3 |

### Impact Assessment

**Critical Impact**:
- Security breach
- Data loss/corruption
- Production outage
- Legal/compliance violation

**High Impact**:
- User experience degraded
- Performance issues
- Accessibility barriers
- Development velocity reduced significantly

**Medium Impact**:
- Code maintainability reduced
- Technical debt accumulating
- Future changes more difficult
- Developer productivity slightly reduced

**Low Impact**:
- Minimal user/developer effect
- Cosmetic issues
- Future-proofing
- Best practice deviations

### Effort Estimation

**Low Effort**: < 4 hours
- Simple configuration change
- One-line fix
- Update dependency version

**Medium Effort**: 4 hours - 2 days
- Refactor single module
- Add test coverage for feature
- Implement security fix with tests

**High Effort**: > 2 days
- Architectural changes
- Major refactoring
- Migration to new framework/library
- Comprehensive security overhaul

---

## Category-Specific Severity Guidelines

### Security Issues

| Finding | Severity |
|---------|----------|
| Known exploit in production | Critical |
| Secrets in code | Critical |
| Authentication bypass | Critical |
| SQL injection | Critical |
| XSS vulnerability | High |
| CSRF vulnerability | High |
| Outdated dependency (CVSS 7.0-8.9) | High |
| Outdated dependency (CVSS 4.0-6.9) | Medium |
| Missing security headers | Medium |
| Weak encryption algorithm | Medium |

### Code Quality Issues

| Finding | Severity |
|---------|----------|
| Complexity > 50 | High |
| Complexity 20-50 | Medium |
| Complexity 10-20 | Low |
| Duplication > 20% | High |
| Duplication 10-20% | Medium |
| Duplication 5-10% | Low |
| File > 1000 LOC | Medium |
| File 500-1000 LOC | Low |
| Dead code (unused for > 6 months) | Low |

### Test Coverage Issues

| Finding | Severity |
|---------|----------|
| Critical path untested | High |
| Coverage < 50% | High |
| Coverage 50-80% | Medium |
| Coverage 80-90% | Low |
| Flaky tests | Medium |
| Slow tests (> 10 min) | Medium |
| No E2E tests | Medium |
| Missing edge case tests | Low |

### Performance Issues

| Finding | Severity |
|---------|----------|
| Page load > 5s | High |
| Page load 3-5s | Medium |
| Memory leak | High |
| O(n²) in hot path | High |
| Bundle size > 5MB | Medium |
| Build time > 10 min | Medium |
| Unoptimized images | Low |

### Accessibility Issues

| Finding | Severity |
|---------|----------|
| No keyboard navigation | High |
| Contrast ratio < 3:1 | High |
| Missing ARIA labels | High |
| Heading hierarchy broken | Medium |
| Missing alt text | Medium |
| Focus indicators absent | Medium |
| Color-only information | Low |

---

## Remediation Priority Formula

Use this formula to calculate a priority score:

```
Priority Score = (Impact × 10) + (Frequency × 5) - (Effort × 2)
```

Where:
- **Impact**: 1-10 (10 = critical)
- **Frequency**: 1-10 (10 = affects all users/code)
- **Effort**: 1-10 (10 = requires months of work)

Sort issues by priority score (highest first) to create your remediation plan. A small helper for computing the score is sketched after the examples below.

### Example Calculations

**Example 1**: SQL Injection in Login
- Impact: 10 (critical security issue)
- Frequency: 10 (affects all users)
- Effort: 3 (straightforward fix with prepared statements)
- Score: (10 × 10) + (10 × 5) - (3 × 2) = **144** → **P0**

**Example 2**: Missing Tests on Helper Utility
- Impact: 4 (low risk, helper function)
- Frequency: 2 (rarely used)
- Effort: 2 (quick to test)
- Score: (4 × 10) + (2 × 5) - (2 × 2) = **46** → **P3**

**Example 3**: Performance Bottleneck in Search
- Impact: 7 (user experience degraded)
- Frequency: 8 (common feature)
- Effort: 6 (requires algorithm optimization)
- Score: (7 × 10) + (8 × 5) - (6 × 2) = **98** → **P1**
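
A hedged TypeScript sketch of the formula above. The score calculation matches the document; the score-to-priority cut-offs (>= 120 → P0, >= 90 → P1, >= 60 → P2) are illustrative assumptions chosen only to be consistent with the worked examples (144 → P0, 98 → P1, 46 → P3):

```typescript
type Priority = "P0" | "P1" | "P2" | "P3";

interface Finding {
  impact: number;    // 1-10 (10 = critical)
  frequency: number; // 1-10 (10 = affects all users/code)
  effort: number;    // 1-10 (10 = requires months of work)
}

export function priorityScore({ impact, frequency, effort }: Finding): number {
  return impact * 10 + frequency * 5 - effort * 2;
}

export function toPriority(score: number): Priority {
  if (score >= 120) return "P0"; // assumed cut-off
  if (score >= 90) return "P1";  // assumed cut-off
  if (score >= 60) return "P2";  // assumed cut-off
  return "P3";
}

// Example 1 from above: (10 × 10) + (10 × 5) - (3 × 2) = 144 -> P0
console.log(priorityScore({ impact: 10, frequency: 10, effort: 3 })); // 144
```
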
---

## Escalation Criteria

Escalate to leadership when:
- 5+ Critical issues found
- 10+ High issues in production code
- SQALE rating of D or E
- Security issues require disclosure
- Compliance violations detected
- Technical debt > 50% of development capacity

---

## Review Cycles

Recommended audit frequency based on project type:

| Project Type | Audit Frequency | Focus Areas |
|-------------|-----------------|-------------|
| Production SaaS | Monthly | Security, Performance, Uptime |
| Enterprise Software | Quarterly | Compliance, Security, Quality |
| Internal Tools | Semi-annually | Technical Debt, Maintainability |
| Open Source | Per major release | Security, Documentation, API stability |
| Startup MVP | Before funding rounds | Security, Scalability, Technical Debt |

---

**Last Updated**: 2024-25 Standards
**Version**: 1.0