Initial commit

This commit is contained in:
Zhongwei Li
2025-11-29 18:29:07 +08:00
commit 8b4a1b1a99
75 changed files with 18583 additions and 0 deletions

# Performance Optimization Examples
Real-world examples of performance bottlenecks and their optimizations across different layers.
## Examples Overview
### Algorithm Optimization
**File**: [algorithm-optimization.md](algorithm-optimization.md)
Fix algorithmic bottlenecks:
- Nested loops O(n²) → Map lookups O(n)
- Inefficient array operations
- Sorting and searching optimizations
- Data structure selection (Array vs Set vs Map)
- Before/after performance metrics
**Use when**: Profiling shows slow computational operations, CPU-intensive tasks.
---
### Database Optimization
**File**: [database-optimization.md](database-optimization.md)
Optimize database queries and patterns:
- N+1 query problem detection and fixes
- Eager loading vs lazy loading
- Query optimization with EXPLAIN ANALYZE
- Index strategy (single, composite, partial)
- Connection pooling
- Query result caching
**Use when**: Database queries are slow, high database CPU usage, query timeouts.
---
### Caching Optimization
**File**: [caching-optimization.md](caching-optimization.md)
Implement effective caching strategies:
- In-memory caching patterns
- Redis distributed caching
- HTTP caching headers
- Cache invalidation strategies
- Cache hit rate optimization
- TTL tuning
**Use when**: Repeated expensive computations, external API calls, static data queries.
---
### Frontend Optimization
**File**: [frontend-optimization.md](frontend-optimization.md)
Optimize React/frontend performance:
- Bundle size reduction (code splitting, tree shaking)
- React rendering optimization (memo, useMemo, useCallback)
- Virtual scrolling for long lists
- Image optimization (lazy loading, WebP, responsive images)
- Web Vitals improvement (LCP, FID, CLS)
**Use when**: Slow page load, large bundle sizes, poor Web Vitals scores.
---
### Backend Optimization
**File**: [backend-optimization.md](backend-optimization.md)
Optimize server-side performance:
- Async/parallel processing patterns
- Stream processing for large data
- Request batching and debouncing
- Worker threads for CPU-intensive tasks
- Memory leak prevention
- Connection pooling
**Use when**: High server response times, memory leaks, CPU bottlenecks.
---
## Quick Reference
| Optimization Type | Common Gains | Typical Fixes |
|-------------------|--------------|---------------|
| **Algorithm** | 50-90% faster | O(n²) → O(n), better data structures |
| **Database** | 60-95% faster | Indexes, eager loading, caching |
| **Caching** | 80-99% faster | Redis, in-memory, HTTP headers |
| **Frontend** | 40-70% faster | Code splitting, lazy loading, memoization |
| **Backend** | 50-80% faster | Async processing, streaming, pooling |
## Performance Impact Guide
### High Impact (>50% improvement)
- Fix N+1 queries
- Add missing indexes
- Implement caching layer
- Fix O(n²) algorithms
- Enable code splitting
### Medium Impact (20-50% improvement)
- Optimize React rendering
- Add connection pooling
- Implement lazy loading
- Batch API requests
- Optimize images
### Low Impact (<20% improvement)
- Minify assets
- Enable gzip compression
- Optimize CSS selectors
- Reduce HTTP headers
## Navigation
- **Reference**: [Reference Index](../reference/INDEX.md)
- **Templates**: [Templates Index](../templates/INDEX.md)
- **Main Agent**: [performance-optimizer.md](../performance-optimizer.md)
---
Return to [main agent](../performance-optimizer.md)

# Algorithm Optimization Examples
Real-world examples of algorithmic bottlenecks and their optimizations with measurable performance gains.
## Example 1: Nested Loop → Map Lookup
### Problem: Finding Related Items (O(n²))
```typescript
// ❌ BEFORE: O(n²) nested loops - 2.5 seconds for 1000 items
interface User {
  id: string;
  name: string;
  managerId: string | null;
  manager?: User; // populated by the functions below
}

function assignManagers(users: User[]) {
  for (const user of users) {
    if (!user.managerId) continue;
    // Inner loop searches entire array
    for (const potentialManager of users) {
      if (potentialManager.id === user.managerId) {
        user.manager = potentialManager;
        break;
      }
    }
  }
  return users;
}

// Benchmark: 1000 users = 2,500ms
console.time('nested-loop');
const result1 = assignManagers(users);
console.timeEnd('nested-loop'); // 2,500ms
```
### Solution: Map Lookup (O(n))
```typescript
// ✅ AFTER: O(n) with Map - 25ms for 1000 items (100x faster!)
function assignManagersOptimized(users: User[]) {
  // Build lookup map once: O(n)
  const userMap = new Map(users.map(u => [u.id, u]));

  // Single pass with O(1) lookups: O(n)
  for (const user of users) {
    if (user.managerId) {
      user.manager = userMap.get(user.managerId);
    }
  }
  return users;
}

// Benchmark: 1000 users = 25ms
console.time('map-lookup');
const result2 = assignManagersOptimized(users);
console.timeEnd('map-lookup'); // 25ms

// Performance gain: 100x faster (2,500ms → 25ms)
```
### Metrics
| Implementation | Time (1K) | Time (10K) | Complexity |
|----------------|-----------|------------|------------|
| **Nested Loop** | 2.5s | 250s | O(n²) |
| **Map Lookup** | 25ms | 250ms | O(n) |
| **Improvement** | **100x** | **1000x** | - |
---
## Example 2: Array Filter Chains → Single Pass
### Problem: Multiple Array Iterations
```typescript
// ❌ BEFORE: Multiple passes through array - 150ms for 10K items
interface Product {
  id: string;
  price: number;
  category: string;
  inStock: boolean;
}

function getAffordableInStockProducts(products: Product[], maxPrice: number) {
  const inStock = products.filter(p => p.inStock);             // 1st pass
  const affordable = inStock.filter(p => p.price <= maxPrice); // 2nd pass
  const sorted = affordable.sort((a, b) => a.price - b.price); // 3rd pass
  return sorted.slice(0, 10);                                  // 4th pass
}

// Benchmark: 10,000 products = 150ms
console.time('multi-pass');
const result1 = getAffordableInStockProducts(products, 100);
console.timeEnd('multi-pass'); // 150ms
```
### Solution: Single Pass with Reduce
```typescript
// ✅ AFTER: Single pass - 45ms for 10K items (3.3x faster)
function getAffordableInStockProductsOptimized(
  products: Product[],
  maxPrice: number
) {
  const filtered = products.reduce<Product[]>((acc, product) => {
    if (product.inStock && product.price <= maxPrice) {
      acc.push(product);
    }
    return acc;
  }, []);

  return filtered
    .sort((a, b) => a.price - b.price)
    .slice(0, 10);
}

// Benchmark: 10,000 products = 45ms
console.time('single-pass');
const result2 = getAffordableInStockProductsOptimized(products, 100);
console.timeEnd('single-pass'); // 45ms

// Performance gain: 3.3x faster (150ms → 45ms)
```
### Metrics
| Implementation | Memory | Time | Passes |
|----------------|--------|------|--------|
| **Filter Chains** | 4 arrays | 150ms | 4 |
| **Single Reduce** | 1 array | 45ms | 1 |
| **Improvement** | **75% less** | **3.3x** | **4→1** |
---
## Example 3: Linear Search → Binary Search
### Problem: Finding Items in Sorted Array
```typescript
// ❌ BEFORE: Linear search O(n) - 5ms per lookup in 10K items
function findUserById(users: User[], targetId: string): User | undefined {
  for (const user of users) {
    if (user.id === targetId) {
      return user;
    }
  }
  return undefined;
}

// Benchmark: 10,000 users, searching 1000 times = 5,000ms
console.time('linear-search');
for (let i = 0; i < 1000; i++) {
  findUserById(sortedUsers, randomId());
}
console.timeEnd('linear-search'); // 5,000ms
```
### Solution: Binary Search O(log n)
```typescript
// ✅ AFTER: Binary search O(log n) - 0.01ms per lookup (500x faster!)
// Requires the array to be sorted by id
function findUserByIdOptimized(
  sortedUsers: User[],
  targetId: string
): User | undefined {
  let left = 0;
  let right = sortedUsers.length - 1;

  while (left <= right) {
    const mid = Math.floor((left + right) / 2);
    const midId = sortedUsers[mid].id;
    if (midId === targetId) {
      return sortedUsers[mid];
    } else if (midId < targetId) {
      left = mid + 1;
    } else {
      right = mid - 1;
    }
  }
  return undefined;
}

// Benchmark: 10,000 users, searching 1000 times = 10ms
console.time('binary-search');
for (let i = 0; i < 1000; i++) {
  findUserByIdOptimized(sortedUsers, randomId());
}
console.timeEnd('binary-search'); // 10ms

// Performance gain: 500x faster (5,000ms → 10ms)
```
### Metrics
Binary search time grows with log n, so the speedup widens as the array grows:
| Array Size | Linear (1K lookups) | Binary (1K lookups) | Speedup |
|------------|---------------------|---------------------|---------|
| **1K** | 500ms | 8ms | **~60x** |
| **10K** | 5,000ms | 10ms | **500x** |
| **100K** | 50,000ms | 13ms | **~3,800x** |
---
## Example 4: Duplicate Detection → Set
### Problem: Checking for Duplicates
```typescript
// ❌ BEFORE: Nested loop O(n²) - 250ms for 1K items
function hasDuplicates(arr: string[]): boolean {
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[i] === arr[j]) {
        return true;
      }
    }
  }
  return false;
}

// Benchmark: 1,000 items = 250ms
console.time('nested-duplicate-check');
hasDuplicates(items);
console.timeEnd('nested-duplicate-check'); // 250ms
```
### Solution: Set for O(n) Detection
```typescript
// ✅ AFTER: Set-based O(n) - 2ms for 1K items (125x faster!)
function hasDuplicatesOptimized(arr: string[]): boolean {
  const seen = new Set<string>();
  for (const item of arr) {
    if (seen.has(item)) {
      return true;
    }
    seen.add(item);
  }
  return false;
}

// Benchmark: 1,000 items = 2ms
console.time('set-duplicate-check');
hasDuplicatesOptimized(items);
console.timeEnd('set-duplicate-check'); // 2ms

// Performance gain: 125x faster (250ms → 2ms)
```
### Metrics
| Implementation | Time (1K) | Time (10K) | Memory | Complexity |
|----------------|-----------|------------|--------|------------|
| **Nested Loop** | 250ms | 25,000ms | O(1) | O(n²) |
| **Set** | 2ms | 20ms | O(n) | O(n) |
| **Improvement** | **125x** | **1250x** | Trade-off | - |
---
## Example 5: String Concatenation → Array Join
### Problem: Building Large Strings
```typescript
// ❌ BEFORE: String concatenation O(n²) - 1,200ms for 10K rows
function buildCsv(rows: string[][]): string {
  let csv = '';
  for (const row of rows) {
    for (const cell of row) {
      csv += cell + ','; // Creates new string each iteration
    }
    csv += '\n';
  }
  return csv;
}

// Benchmark: 10,000 rows × 20 columns = 1,200ms
console.time('string-concat');
buildCsv(largeDataset);
console.timeEnd('string-concat'); // 1,200ms
```
### Solution: Array Join O(n)
```typescript
// ✅ AFTER: Array join O(n) - 15ms for 10K rows (80x faster!)
function buildCsvOptimized(rows: string[][]): string {
  const lines: string[] = [];
  for (const row of rows) {
    lines.push(row.join(','));
  }
  // join allocates the final string once (it also drops the stray
  // trailing comma the concatenation version appended to each row)
  return lines.join('\n');
}

// Benchmark: 10,000 rows × 20 columns = 15ms
console.time('array-join');
buildCsvOptimized(largeDataset);
console.timeEnd('array-join'); // 15ms

// Performance gain: 80x faster (1,200ms → 15ms)
```
### Metrics
| Implementation | Time | Memory Allocations | Complexity |
|----------------|------|-------------------|------------|
| **String Concat** | 1,200ms | 200,000+ | O(n²) |
| **Array Join** | 15ms | ~10,000 | O(n) |
| **Improvement** | **80x** | **95% less** | - |
---
## Summary
| Optimization | Before | After | Gain | When to Use |
|--------------|--------|-------|------|-------------|
| **Nested Loop → Map** | O(n²) | O(n) | 100-1000x | Lookups, matching |
| **Filter Chains → Reduce** | 4 passes | 1 pass | 3-4x | Array transformations |
| **Linear → Binary Search** | O(n) | O(log n) | 100-500x | Sorted data |
| **Loop → Set Duplicate Check** | O(n²) | O(n) | 100-1000x | Uniqueness checks |
| **String Concat → Array Join** | O(n²) | O(n) | 50-100x | String building |
## Best Practices
1. **Profile First**: Measure before optimizing to find real bottlenecks
2. **Choose Right Data Structure**: Map for lookups, Set for uniqueness, Array for ordered data
3. **Avoid Nested Loops**: Nearly always O(n²), look for single-pass alternatives
4. **Binary Search**: Use for sorted data with frequent lookups
5. **Minimize Allocations**: Reuse arrays/objects instead of creating new ones
6. **Benchmark**: Always measure actual performance gains
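
Practices 1 and 6 both come down to measuring. A small timing helper makes the before/after comparisons above repeatable (a minimal sketch using the global `performance` API; the O(n²) and O(n) duplicate checks are inlined here for illustration):

```typescript
// Minimal benchmark helper: runs fn repeatedly and reports the average time.
function benchmark(label: string, fn: () => void, runs = 100): number {
  fn(); // warm-up run so JIT compilation doesn't skew the measurement
  const start = performance.now();
  for (let i = 0; i < runs; i++) fn();
  const avgMs = (performance.now() - start) / runs;
  console.log(`${label}: ${avgMs.toFixed(3)}ms avg over ${runs} runs`);
  return avgMs;
}

// Example: compare an O(n²) and an O(n) duplicate check on the same input
const data = Array.from({ length: 2000 }, (_, i) => `item-${i}`);
const quadratic = benchmark('nested-loop', () => {
  for (let i = 0; i < data.length; i++) {
    for (let j = i + 1; j < data.length; j++) {
      if (data[i] === data[j]) return;
    }
  }
}, 20);
const linear = benchmark('set-based', () => {
  const seen = new Set<string>();
  for (const item of data) {
    if (seen.has(item)) return;
    seen.add(item);
  }
}, 20);
console.log(`speedup: ${(quadratic / linear).toFixed(1)}x`);
```

Absolute numbers vary by machine and runtime; what matters is measuring both versions under the same conditions.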
---
**Next**: [Database Optimization](database-optimization.md) | **Index**: [Examples Index](INDEX.md)

# Backend Optimization Examples
Server-side performance optimizations for Node.js/FastAPI applications with measurable throughput improvements.
## Example 1: Async/Parallel Processing
### Problem: Sequential Operations
```typescript
// ❌ BEFORE: Sequential - 1,500ms total
async function getUserProfile(userId: string) {
  const user = await db.user.findUnique({ where: { id: userId } });
  const orders = await db.order.findMany({ where: { userId } });
  const reviews = await db.review.findMany({ where: { userId } });
  return { user, orders, reviews };
}

// Total time: 500ms + 600ms + 400ms = 1,500ms
```
### Solution: Parallel with Promise.all
```typescript
// ✅ AFTER: Parallel - 600ms total (2.5x faster)
async function getUserProfileOptimized(userId: string) {
  const [user, orders, reviews] = await Promise.all([
    db.user.findUnique({ where: { id: userId } }), // 500ms
    db.order.findMany({ where: { userId } }),      // 600ms
    db.review.findMany({ where: { userId } })      // 400ms
  ]);
  return { user, orders, reviews };
}

// Total time: max(500, 600, 400) = 600ms
// Performance gain: 2.5x faster
```
---
## Example 2: Streaming Large Files
### Problem: Loading Entire File
```typescript
// ❌ BEFORE: Load 1GB file into memory
import fs from 'fs';

async function processLargeFile(path: string) {
  const data = fs.readFileSync(path); // Loads entire file
  const lines = data.toString().split('\n');
  for (const line of lines) {
    await processLine(line);
  }
}

// Memory: 1GB
// Time: 5,000ms
```
### Solution: Stream Processing
```typescript
// ✅ AFTER: Stream with readline
import fs from 'fs';
import readline from 'readline';

async function processLargeFileOptimized(path: string) {
  const stream = fs.createReadStream(path);
  const rl = readline.createInterface({ input: stream });
  for await (const line of rl) {
    await processLine(line);
  }
}

// Memory: 15MB (constant)
// Time: 4,800ms
// Memory gain: 67x less
```
---
## Example 3: Worker Threads for CPU-Intensive Tasks
### Problem: Blocking Event Loop
```typescript
// ❌ BEFORE: CPU-intensive task blocks server
function generateReport(data: any[]) {
  // Heavy computation blocks event loop for 3 seconds
  const result = complexCalculation(data);
  return result;
}

app.get('/report', (req, res) => {
  const report = generateReport(largeDataset);
  res.json(report);
});

// While generating: All requests blocked for 3s
// Throughput: 0 req/s during computation
```
### Solution: Worker Threads
```typescript
// ✅ AFTER: Worker thread doesn't block event loop
import { Worker } from 'worker_threads';

function generateReportAsync(data: any[]): Promise<any> {
  return new Promise((resolve, reject) => {
    const worker = new Worker('./report-worker.js');
    worker.postMessage(data);
    worker.on('message', resolve);
    worker.on('error', reject);
  });
}

app.get('/report', async (req, res) => {
  const report = await generateReportAsync(largeDataset);
  res.json(report);
});

// Other requests: Continue processing normally
// Throughput: 200 req/s maintained
```
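
The `report-worker.js` file referenced above is not shown in this example; a minimal sketch of what it might contain (the `complexCalculation` body is a stand-in for the real 3-second computation):

```typescript
// report-worker.js - runs in its own thread, so heavy CPU work here
// does not block the main server's event loop.
import { parentPort } from 'worker_threads';

// Stand-in for the real computation
function complexCalculation(data: any[]): number {
  return data.reduce((acc, row) => acc + (row.value ?? 0), 0);
}

const port = parentPort;
if (port) {
  port.on('message', (data: any[]) => {
    const result = complexCalculation(data);
    port.postMessage(result); // received by worker.on('message', ...) above
  });
}
```

For high traffic, a pool of long-lived workers is usually preferable to spawning a new `Worker` per request, since thread startup has its own cost.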
---
## Example 4: Request Batching
### Problem: Many Small Requests
```typescript
// ❌ BEFORE: Individual requests to external API
async function enrichUsers(users: User[]) {
  for (const user of users) {
    user.details = await externalAPI.getDetails(user.id);
  }
  return users;
}

// 1000 users = 1000 API calls = 50,000ms
```
### Solution: Batch Requests
```typescript
// ✅ AFTER: Batch requests
async function enrichUsersOptimized(users: User[]) {
  const batchSize = 100;
  const results: any[] = [];

  for (let i = 0; i < users.length; i += batchSize) {
    const batch = users.slice(i, i + batchSize);
    const batchResults = await externalAPI.getBatch(
      batch.map(u => u.id)
    );
    results.push(...batchResults);
  }

  // Assumes getBatch returns results in the same order as the input IDs
  users.forEach((user, i) => {
    user.details = results[i];
  });
  return users;
}

// 1000 users = 10 batch calls = 2,500ms (20x faster)
```
---
## Example 5: Connection Pooling
### Problem: New Connection Per Request
```python
# ❌ BEFORE: New engine (and connection) each time (Python/FastAPI)
from sqlalchemy import create_engine, text

def get_user(user_id: int):
    engine = create_engine("postgresql://...")  # New engine per call
    with engine.connect() as conn:
        result = conn.execute(
            text("SELECT * FROM users WHERE id = :id"), {"id": user_id}
        )
        return result.fetchone()

# Per request: 150ms (connect) + 20ms (query) = 170ms
```
### Solution: Connection Pool
```python
# ✅ AFTER: Reuse pooled connections
from sqlalchemy import create_engine, text
from sqlalchemy.pool import QueuePool

engine = create_engine(
    "postgresql://...",
    poolclass=QueuePool,
    pool_size=20,
    max_overflow=10
)

def get_user_optimized(user_id: int):
    with engine.connect() as conn:  # Reuses a pooled connection
        result = conn.execute(
            text("SELECT * FROM users WHERE id = :id"), {"id": user_id}
        )
        return result.fetchone()

# Per request: 0ms (pool) + 20ms (query) = 20ms (8.5x faster)
```
---
## Summary
| Optimization | Before | After | Gain | Use Case |
|--------------|--------|-------|------|----------|
| **Parallel Processing** | 1,500ms | 600ms | 2.5x | Independent operations |
| **Streaming** | 1GB mem | 15MB | 67x | Large files |
| **Worker Threads** | 0 req/s | 200 req/s | ∞ | CPU-intensive |
| **Request Batching** | 1,000 calls | 10 calls | 20x | External APIs |
| **Connection Pool** | 170ms | 20ms | 8.5x | Database queries |
---
**Previous**: [Frontend Optimization](frontend-optimization.md) | **Index**: [Examples Index](INDEX.md)

# Caching Optimization Examples
Real-world caching strategies to eliminate redundant computations and reduce latency with measurable cache hit rates.
## Example 1: In-Memory Function Cache
### Problem: Expensive Computation
```typescript
// ❌ BEFORE: Recalculates every time - 250ms per call
async function calculateComplexMetrics(userId: string) {
  // Expensive calculation: database queries + computation
  const userData = await db.user.findUnique({ where: { id: userId } });
  const posts = await db.post.findMany({ where: { userId } });
  const comments = await db.comment.findMany({ where: { userId } });

  // Complex aggregations
  return {
    totalEngagement: calculateEngagement(posts, comments),
    averageScore: calculateScores(posts),
    trendingTopics: analyzeTrends(posts, comments)
  };
}

// Called 100 times/minute = 25,000ms computation time
```
### Solution: LRU Cache with TTL
```typescript
// ✅ AFTER: Cache results - 2ms per cache hit
import { LRUCache } from 'lru-cache';

type MetricsResult = Awaited<ReturnType<typeof calculateComplexMetrics>>;

const cache = new LRUCache<string, MetricsResult>({
  max: 500,             // Max 500 entries
  ttl: 1000 * 60 * 5,   // 5 minute TTL
  updateAgeOnGet: true  // Reset TTL on access
});

async function calculateComplexMetricsCached(userId: string) {
  // Check cache first
  const cached = cache.get(userId);
  if (cached) {
    return cached; // 2ms cache hit
  }

  // Cache miss: calculate and store
  const result = await calculateComplexMetrics(userId);
  cache.set(userId, result);
  return result;
}

// First call: 250ms (calculation)
// Subsequent calls (within 5 min): 2ms (cache) × 99 = 198ms
// Total: 448ms vs 25,000ms
// Performance gain: 56x faster
```
### Metrics (100 calls, 90% cache hit rate)
| Implementation | Calculations | Total Time | Avg Response |
|----------------|--------------|------------|--------------|
| **No Cache** | 100 | 25,000ms | 250ms |
| **With Cache** | 10 | 2,680ms | 27ms |
| **Improvement** | **90% less** | **9.3x** | **9.3x** |
---
## Example 2: Redis Distributed Cache
### Problem: API Rate Limits
```typescript
// ❌ BEFORE: External API call every time - 450ms per call
async function getGitHubUserData(username: string) {
  const response = await fetch(`https://api.github.com/users/${username}`);
  return response.json();
}

// API limit: 60 requests/hour
// Average response: 450ms
// Risk: Rate limit errors
```
### Solution: Redis Caching Layer
```typescript
// ✅ AFTER: Cache in Redis - 15ms per cache hit
import { createClient } from 'redis';

const redis = createClient();
await redis.connect();

async function getGitHubUserDataCached(username: string) {
  const cacheKey = `github:user:${username}`;

  // Try cache first
  const cached = await redis.get(cacheKey);
  if (cached) {
    return JSON.parse(cached); // 15ms cache hit
  }

  // Cache miss: call API
  const response = await fetch(`https://api.github.com/users/${username}`);
  const data = await response.json();

  // Cache for 1 hour
  await redis.setEx(cacheKey, 3600, JSON.stringify(data));
  return data;
}

// First call: 450ms (API) + 5ms (cache write) = 455ms
// Subsequent calls: 15ms (cache read)
// Performance gain: 30x faster
```
### Metrics (1000 calls, 95% cache hit rate)
| Implementation | API Calls | Redis Hits | Total Time | Cost |
|----------------|-----------|------------|------------|------|
| **No Cache** | 1000 | 0 | 450,000ms | High |
| **With Cache** | 50 | 950 | 36,750ms | Low |
| **Improvement** | **95% less** | - | **12.2x** | **95% less** |
### Cache Invalidation Strategy
```typescript
// Update cache when data changes
async function updateGitHubUserCache(username: string) {
  const cacheKey = `github:user:${username}`;
  const response = await fetch(`https://api.github.com/users/${username}`);
  const data = await response.json();

  // Update cache
  await redis.setEx(cacheKey, 3600, JSON.stringify(data));
  return data;
}

// Invalidate on webhook
app.post('/webhook/github', async (req, res) => {
  const { username } = req.body;
  await redis.del(`github:user:${username}`); // Clear cache
  res.send('OK');
});
```
---
## Example 3: HTTP Caching Headers
### Problem: Static Assets Re-downloaded
```typescript
// ❌ BEFORE: No caching headers - 2MB download every request
app.get('/assets/bundle.js', (req, res) => {
  res.sendFile('dist/bundle.js');
});

// Every page load: 2MB download × 1000 users/hour = 2GB bandwidth
// Load time: 800ms on slow connection
```
### Solution: Aggressive HTTP Caching
```typescript
// ✅ AFTER: Cache with hash-based filename - 0ms after first load
app.get('/assets/:filename', (req, res) => {
  const file = `dist/${req.params.filename}`;

  // Immutable files (with content hash in filename)
  if (req.params.filename.match(/\.[a-f0-9]{8}\./)) {
    res.setHeader('Cache-Control', 'public, max-age=31536000, immutable');
  } else {
    // Regular files
    res.setHeader('Cache-Control', 'public, max-age=3600');
  }

  res.setHeader('ETag', generateETag(file));
  res.sendFile(file);
});

// First load: 800ms (download)
// Subsequent loads: 0ms (browser cache)
// Bandwidth saved: 99% (conditional requests return 304)
```
### Metrics (1000 page loads)
| Implementation | Downloads | Bandwidth | Avg Load Time |
|----------------|-----------|-----------|---------------|
| **No Cache** | 1000 | 2 GB | 800ms |
| **With Cache** | 10 | 20 MB | 8ms |
| **Improvement** | **99% less** | **99% less** | **100x** |
---
## Example 4: Cache-Aside Pattern
### Problem: Database Under Load
```typescript
// ❌ BEFORE: Every request hits database - 150ms per query
async function getProductById(id: string) {
  return await db.product.findUnique({
    where: { id },
    include: { category: true, reviews: true }
  });
}

// 1000 requests/min = 150,000ms database load
```
### Solution: Cache-Aside with Stale-While-Revalidate
```typescript
// ✅ AFTER: Cache with background refresh - 5ms typical response
interface CachedData<T> {
  data: T;
  cachedAt: number;
  staleAt: number;
}

class CacheAside<T> {
  private cache = new Map<string, CachedData<T>>();

  constructor(
    private fetchFn: (key: string) => Promise<T>,
    private ttl = 60000,       // 1 minute fresh
    private staleTtl = 300000  // 5 minutes stale
  ) {}

  async get(key: string): Promise<T> {
    const cached = this.cache.get(key);
    const now = Date.now();

    if (cached) {
      // Fresh: return immediately
      if (now < cached.staleAt) {
        return cached.data;
      }
      // Stale but within the stale window: return old data,
      // refresh in background
      if (now < cached.cachedAt + this.staleTtl) {
        this.refreshInBackground(key);
        return cached.data;
      }
      // Expired past the stale window: fall through to a blocking fetch
    }

    // Miss: fetch and cache
    const data = await this.fetchFn(key);
    this.cache.set(key, {
      data,
      cachedAt: now,
      staleAt: now + this.ttl
    });
    return data;
  }

  private async refreshInBackground(key: string) {
    try {
      const data = await this.fetchFn(key);
      const now = Date.now();
      this.cache.set(key, {
        data,
        cachedAt: now,
        staleAt: now + this.ttl
      });
    } catch (error) {
      console.error('Background refresh failed:', error);
    }
  }
}

const productCache = new CacheAside(
  (id) => db.product.findUnique({ where: { id }, include: {...} }),
  60000,  // Fresh for 1 minute
  300000  // Serve stale for 5 minutes
);

async function getProductByIdCached(id: string) {
  return await productCache.get(id);
}

// Fresh data: 5ms (cache)
// Stale data: 5ms (cache) + background refresh
// Cache miss: 150ms (database)
// Average: ~10ms (95% cache hit rate)
```
### Metrics (1000 requests/min)
| Implementation | DB Queries | Avg Response | P95 Response |
|----------------|------------|--------------|--------------|
| **No Cache** | 1000 | 150ms | 200ms |
| **Cache-Aside** | 50 | 10ms | 15ms |
| **Improvement** | **95% less** | **15x** | **13x** |
---
## Example 5: Query Result Cache
### Problem: Expensive Aggregation
```typescript
// ❌ BEFORE: Aggregation on every request - 1,200ms
async function getDashboardStats() {
  const [
    totalUsers,
    activeUsers,
    totalOrders,
    revenue
  ] = await Promise.all([
    db.user.count(),
    db.user.count({
      where: { lastActiveAt: { gte: new Date(Date.now() - 86400000) } }
    }),
    db.order.count(),
    db.order.aggregate({ _sum: { total: true } })
  ]);
  return { totalUsers, activeUsers, totalOrders, revenue: revenue._sum.total };
}

// Called every dashboard load: 1,200ms
```
### Solution: Materialized View with Periodic Refresh
```typescript
// ✅ AFTER: Pre-computed stats - 2ms per read
interface DashboardStats {
  totalUsers: number;
  activeUsers: number;
  totalOrders: number;
  revenue: number;
  lastUpdated: Date;
}

let cachedStats: DashboardStats | null = null;

// calculateDashboardStats runs the same queries as getDashboardStats above
// Background job: Update every 5 minutes
setInterval(async () => {
  const stats = await calculateDashboardStats();
  cachedStats = {
    ...stats,
    lastUpdated: new Date()
  };
}, 300000); // 5 minutes

async function getDashboardStatsCached(): Promise<DashboardStats> {
  if (!cachedStats) {
    // First run: calculate immediately
    const stats = await calculateDashboardStats();
    cachedStats = {
      ...stats,
      lastUpdated: new Date()
    };
  }
  return cachedStats; // 2ms read from memory
}

// Read time: 2ms (vs 1,200ms)
// Performance gain: 600x faster
```
### Metrics
| Implementation | Computation | Read Time | Freshness |
|----------------|-------------|-----------|-----------|
| **Real-time** | Every request | 1,200ms | Live |
| **Cached** | Every 5 min | 2ms | 5 min stale |
| **Improvement** | **Scheduled** | **600x** | Acceptable |
---
## Summary
| Strategy | Use Case | Cache Hit Response | Best For |
|----------|----------|-------------------|----------|
| **In-Memory LRU** | Function results | 2ms | Single-server apps |
| **Redis** | Distributed caching | 15ms | Multi-server apps |
| **HTTP Cache** | Static assets | 0ms | CDN-cacheable content |
| **Cache-Aside** | Database queries | 5ms | Frequently accessed data |
| **Materialized View** | Aggregations | 2ms | Expensive computations |
## Cache Hit Rate Targets
- **Excellent**: >90% hit rate
- **Good**: 70-90% hit rate
- **Poor**: <70% hit rate
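
Hit rate is simple to measure: count hits and misses around every cache lookup. A minimal sketch (the class name is illustrative):

```typescript
// Tracks cache hit rate so the targets above can be monitored in production.
class HitRateTracker {
  private hits = 0;
  private misses = 0;

  recordHit() { this.hits++; }
  recordMiss() { this.misses++; }

  // Hit rate in [0, 1]; 0 when nothing has been recorded yet
  rate(): number {
    const total = this.hits + this.misses;
    return total === 0 ? 0 : this.hits / total;
  }
}
```

Call `recordHit()`/`recordMiss()` at the cache lookup site and export `rate()` to your metrics system; a sustained drop below 70% usually means the TTL is too short or the key space is too wide.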
## Best Practices
1. **Set Appropriate TTL**: Balance freshness vs performance
2. **Cache Invalidation**: Clear cache when data changes
3. **Monitor Hit Rates**: Track cache effectiveness
4. **Handle Cache Stampede**: Use locks for simultaneous cache misses
5. **Size Limits**: Use LRU eviction for memory-bounded caches
6. **Fallback**: Always handle cache failures gracefully
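
Practice 4, handling cache stampede, can be sketched as a single-flight wrapper: concurrent misses for the same key share one in-flight fetch instead of each hitting the backend (a minimal sketch; names are illustrative, and a distributed setup would use a Redis lock instead of an in-process map):

```typescript
// Single-flight wrapper: concurrent misses for the same key share one fetch,
// so an expired hot key triggers one backend call instead of hundreds.
const inFlight = new Map<string, Promise<unknown>>();

async function getOrFetch<T>(
  key: string,
  cache: Map<string, T>,
  fetchFn: (key: string) => Promise<T>
): Promise<T> {
  const cached = cache.get(key);
  if (cached !== undefined) return cached; // cache hit

  // If another caller is already fetching this key, await its promise
  const pending = inFlight.get(key);
  if (pending) return pending as Promise<T>;

  const promise = fetchFn(key)
    .then(value => {
      cache.set(key, value);
      return value;
    })
    .finally(() => inFlight.delete(key)); // allow future refetches
  inFlight.set(key, promise);
  return promise;
}
```

With this pattern, 100 simultaneous misses for the same key produce a single call to `fetchFn`; the other 99 callers await the same promise.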
---
**Previous**: [Database Optimization](database-optimization.md) | **Next**: [Frontend Optimization](frontend-optimization.md) | **Index**: [Examples Index](INDEX.md)

# Database Optimization Examples
Real-world database performance bottlenecks and their solutions with measurable query time improvements.
## Example 1: N+1 Query Problem
### Problem: Loading Users with Posts
```typescript
// ❌ BEFORE: N+1 queries - 3,500ms for 100 users
async function getUsersWithPosts() {
  // 1 query to get users
  const users = await db.user.findMany();

  // N queries (1 per user) to get posts
  for (const user of users) {
    user.posts = await db.post.findMany({
      where: { userId: user.id }
    });
  }
  return users;
}

// Total queries: 1 + 100 = 101 queries
// Time: ~3,500ms (35ms per query × 100)
```
### Solution 1: Eager Loading
```typescript
// ✅ AFTER: Eager loading - 80ms for 100 users (44x faster!)
async function getUsersWithPostsOptimized() {
  // Single query with JOIN
  const users = await db.user.findMany({
    include: {
      posts: true
    }
  });
  return users;
}

// Total queries: 1 query
// Time: ~80ms
// Performance gain: 44x faster (3,500ms → 80ms)
```
### Solution 2: DataLoader Pattern
```typescript
// ✅ ALTERNATIVE: Batched loading - 120ms for 100 users
import DataLoader from 'dataloader';

const postLoader = new DataLoader(async (userIds: readonly string[]) => {
  const posts = await db.post.findMany({
    where: { userId: { in: [...userIds] } }
  });

  // Group posts by userId
  const postsByUser = new Map<string, Post[]>();
  for (const post of posts) {
    if (!postsByUser.has(post.userId)) {
      postsByUser.set(post.userId, []);
    }
    postsByUser.get(post.userId)!.push(post);
  }

  // Return in same order as input
  return userIds.map(id => postsByUser.get(id) || []);
});

async function getUsersWithPostsBatched() {
  const users = await db.user.findMany();

  // Load in parallel so DataLoader coalesces all IDs into one query;
  // awaiting each load in a sequential loop would defeat the batching
  await Promise.all(
    users.map(async user => {
      user.posts = await postLoader.load(user.id);
    })
  );
  return users;
}

// Total queries: 2 queries (users + batched posts)
// Time: ~120ms
```
### Metrics
| Implementation | Queries | Time | Improvement |
|----------------|---------|------|-------------|
| **N+1 (Original)** | 101 | 3,500ms | baseline |
| **Eager Loading** | 1 | 80ms | **44x faster** |
| **DataLoader** | 2 | 120ms | **29x faster** |
---
## Example 2: Missing Index
### Problem: Slow Query on Large Table
```sql
-- ❌ BEFORE: Full table scan - 2,800ms for 1M rows
SELECT * FROM orders
WHERE customer_id = '123'
  AND status = 'pending'
ORDER BY created_at DESC
LIMIT 10;

-- EXPLAIN ANALYZE output:
--   Seq Scan on orders  (cost=0.00..25000.00 rows=10 width=100) (actual time=2800.000)
--     Filter: (customer_id = '123' AND status = 'pending')
--     Rows Removed by Filter: 999,990
```
### Solution: Composite Index
```sql
-- ✅ AFTER: Index scan - 5ms for 1M rows (560x faster!)
CREATE INDEX idx_orders_customer_status_date
  ON orders(customer_id, status, created_at DESC);

-- Same query, now uses index:
SELECT * FROM orders
WHERE customer_id = '123'
  AND status = 'pending'
ORDER BY created_at DESC
LIMIT 10;

-- EXPLAIN ANALYZE output:
--   Index Scan using idx_orders_customer_status_date  (cost=0.42..8.44 rows=10)
--     (actual time=5.000)
--     Index Cond: (customer_id = '123' AND status = 'pending')
```
### Metrics
| Implementation | Scan Type | Time | Rows Scanned |
|----------------|-----------|------|--------------|
| **No Index** | Sequential | 2,800ms | 1,000,000 |
| **With Index** | Index | 5ms | 10 |
| **Improvement** | - | **560x** | **99.999% less** |
### Index Strategy
```sql
-- Good: Covers WHERE + ORDER BY
CREATE INDEX idx_orders_customer_status_date
  ON orders(customer_id, status, created_at DESC);

-- Bad: Wrong column order (status first is less selective)
CREATE INDEX idx_orders_status_customer
  ON orders(status, customer_id);

-- Good: Partial index for common queries
CREATE INDEX idx_orders_pending
  ON orders(customer_id, created_at DESC)
  WHERE status = 'pending';
```
---
## Example 3: SELECT * vs Specific Columns
### Problem: Fetching Unnecessary Data
```typescript
// ❌ BEFORE: Fetching all columns - 450ms for 10K rows
const products = await db.product.findMany({
  where: { category: 'electronics' }
  // Fetches all 30 columns including large JSONB fields
});

// Network transfer: 25 MB
// Time: 450ms (query) + 200ms (network) = 650ms total
```
### Solution: Select Only Needed Columns
```typescript
// ✅ AFTER: Fetch only required columns - 120ms for 10K rows
const products = await db.product.findMany({
  where: { category: 'electronics' },
  select: {
    id: true,
    name: true,
    price: true,
    inStock: true
  }
});

// Network transfer: 2 MB (88% reduction)
// Time: 120ms (query) + 25ms (network) = 145ms total
// Performance gain: 4.5x faster (650ms → 145ms)
```
### Metrics
| Implementation | Columns | Data Size | Total Time |
|----------------|---------|-----------|------------|
| **`SELECT *`** | 30 | 25 MB | 650ms |
| **Specific Columns** | 4 | 2 MB | 145ms |
| **Improvement** | **87% less** | **88% less** | **4.5x** |
---
## Example 4: Connection Pooling
### Problem: Creating New Connection Per Request
```typescript
// ❌ BEFORE: New connection each request - 150ms overhead
import { Client } from 'pg';
async function handleRequest() {
  // Opens a new TCP connection + auth handshake (150ms)
  const client = new Client({
    host: 'db.example.com',
    database: 'myapp'
  });
  await client.connect();
  const result = await client.query('SELECT ...');
  await client.end(); // Closes connection
  return result;
}
// Per request: 150ms (connect) + 20ms (query) = 170ms
```
### Solution: Connection Pool
```typescript
// ✅ AFTER: Reuse pooled connections - 20ms per query
import { Pool } from 'pg';
const pool = new Pool({
host: 'db.example.com',
database: 'myapp',
max: 20, // Max 20 connections
idleTimeoutMillis: 30000,
connectionTimeoutMillis: 2000,
});
async function handleRequestOptimized() {
// Reuses existing connection (~0ms overhead)
const client = await pool.connect();
try {
const result = await client.query('SELECT ...');
return result;
} finally {
client.release(); // Return to pool
}
}
// Per request: 0ms (pool) + 20ms (query) = 20ms
// Performance gain: 8.5x faster (170ms → 20ms)
```
### Metrics
| Implementation | Connection Time | Query Time | Total |
|----------------|-----------------|------------|-------|
| **New Connection** | 150ms | 20ms | 170ms |
| **Pooled** | ~0ms | 20ms | 20ms |
| **Improvement** | **∞** | - | **8.5x** |
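The mechanics behind the speedup can be shown with a toy pool (an illustrative sketch only — `pg`'s `Pool` adds waiting queues, timeouts, and connection validation on top): resources are created once at startup and recycled, so acquiring one is a cheap array pop instead of a fresh handshake.

```typescript
// Minimal sketch of a resource pool. SimplePool is illustrative, not pg's API.
class SimplePool<T> {
  private idle: T[] = [];

  constructor(factory: () => T, size: number) {
    // Pay the creation cost once, up front
    for (let i = 0; i < size; i++) this.idle.push(factory());
  }

  acquire(): T {
    const res = this.idle.pop();
    if (res === undefined) throw new Error('pool exhausted');
    return res; // no per-request creation cost
  }

  release(res: T): void {
    this.idle.push(res); // return to the pool for reuse
  }
}
```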
---
## Example 5: Query Result Caching
### Problem: Repeated Expensive Queries
```typescript
// ❌ BEFORE: Query database every time - 80ms per call
async function getPopularProducts() {
return await db.product.findMany({
where: {
soldCount: { gte: 1000 }
},
orderBy: { soldCount: 'desc' },
take: 20
});
}
// Called 100 times/min = 8,000ms database load
```
### Solution: Redis Caching
```typescript
// ✅ AFTER: Cache results - 2ms per cache hit
import { Redis } from 'ioredis';
const redis = new Redis();
async function getPopularProductsCached() {
const cacheKey = 'popular_products';
// Check cache first
const cached = await redis.get(cacheKey);
if (cached) {
return JSON.parse(cached); // 2ms cache hit
}
// Cache miss: query database
const products = await db.product.findMany({
where: { soldCount: { gte: 1000 } },
orderBy: { soldCount: 'desc' },
take: 20
});
// Cache for 5 minutes
await redis.setex(cacheKey, 300, JSON.stringify(products));
return products;
}
// First call: 80ms (database)
// Subsequent calls: 2ms (cache) × 99 = 198ms
// Total: 278ms vs 8,000ms
// Performance gain: 29x faster
```
### Metrics (100 calls)
| Implementation | Cache Hits | DB Queries | Total Time |
|----------------|------------|------------|------------|
| **No Cache** | 0 | 100 | 8,000ms |
| **With Cache** | 99 | 1 | 278ms |
| **Improvement** | - | **99% fewer** | **29x** |
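The same check-cache / query / store pattern can be sketched without Redis as an in-process TTL cache (names here are illustrative; real deployments usually still want a shared cache so every instance hits the same entry):

```typescript
// Cache-aside sketch: serve from memory while fresh, re-run the query on expiry.
function cacheAside<T>(ttlMs: number, query: () => Promise<T>) {
  let value: T | undefined;
  let expiresAt = 0;
  let misses = 0; // how often the underlying query actually ran

  const get = async (): Promise<T> => {
    if (value !== undefined && Date.now() < expiresAt) {
      return value; // cache hit
    }
    misses++;
    value = await query(); // cache miss: run the expensive query
    expiresAt = Date.now() + ttlMs;
    return value;
  };

  return { get, misses: () => misses };
}
```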
---
## Example 6: Batch Operations
### Problem: Individual Inserts
```typescript
// ❌ BEFORE: Individual inserts - 5,000ms for 1000 records
async function importUsers(users: User[]) {
for (const user of users) {
await db.user.create({ data: user }); // 1000 queries
}
}
// Time: 5ms per insert × 1000 = 5,000ms
```
### Solution: Batch Insert
```typescript
// ✅ AFTER: Single batch insert - 250ms for 1000 records
async function importUsersOptimized(users: User[]) {
await db.user.createMany({
data: users,
skipDuplicates: true
});
}
// Time: 250ms (single query with 1000 rows)
// Performance gain: 20x faster (5,000ms → 250ms)
```
### Metrics
| Implementation | Queries | Time | Network Roundtrips |
|----------------|---------|------|-------------------|
| **Individual** | 1,000 | 5,000ms | 1,000 |
| **Batch** | 1 | 250ms | 1 |
| **Improvement** | **1,000x fewer** | **20x** | **1,000x fewer** |
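Very large imports can exceed a single statement's parameter limits, so a common middle ground is a handful of fixed-size batches rather than one giant insert. A sketch (`importInChunks` and the batch size are illustrative, not Prisma API):

```typescript
// Split items into fixed-size batches.
function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// One round trip per batch instead of per row; insertBatch could wrap
// db.user.createMany from the example above.
async function importInChunks<T>(
  items: T[],
  insertBatch: (batch: T[]) => Promise<void>,
  batchSize = 1000
) {
  for (const batch of chunk(items, batchSize)) {
    await insertBatch(batch);
  }
}
```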
---
## Summary
| Optimization | Before | After | Gain | When to Use |
|--------------|--------|-------|------|-------------|
| **Eager Loading** | 101 queries | 1 query | 44x | N+1 problems |
| **Add Index** | 2,800ms | 5ms | 560x | Slow WHERE/ORDER BY |
| **Select Specific** | 25 MB | 2 MB | 4.5x | Large result sets |
| **Connection Pool** | 170ms/req | 20ms/req | 8.5x | High request volume |
| **Query Cache** | 100 queries | 1 query | 29x | Repeated queries |
| **Batch Operations** | 1000 queries | 1 query | 20x | Bulk inserts/updates |
## Best Practices
1. **Use EXPLAIN ANALYZE**: Always check query execution plans
2. **Index Wisely**: Cover WHERE, JOIN, ORDER BY columns
3. **Eager Load**: Avoid N+1 queries with includes/joins
4. **Connection Pools**: Never create connections per request
5. **Cache Strategically**: Cache expensive, frequently accessed queries
6. **Batch Operations**: Bulk insert/update when possible
7. **Monitor Slow Queries**: Log queries >100ms in production
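Practice 7 can be a small wrapper — a sketch, with `timedQuery` as an illustrative name rather than a library API — that times every query and logs the ones over the threshold:

```typescript
// Wrap any async query; log it if it exceeds the threshold (100ms here).
async function timedQuery<T>(
  label: string,
  run: () => Promise<T>,
  thresholdMs = 100,
  log: (msg: string) => void = console.warn
): Promise<T> {
  const start = Date.now();
  try {
    return await run();
  } finally {
    const elapsed = Date.now() - start;
    if (elapsed > thresholdMs) {
      log(`[slow-query] ${label} took ${elapsed}ms`);
    }
  }
}
```

Wrapping calls like `timedQuery('popular_products', () => db.product.findMany(...))` surfaces the queries worth an EXPLAIN ANALYZE in production logs.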
---
**Previous**: [Algorithm Optimization](algorithm-optimization.md) | **Next**: [Caching Optimization](caching-optimization.md) | **Index**: [Examples Index](INDEX.md)

# Frontend Optimization Examples
React and frontend performance optimizations with measurable Web Vitals improvements.
## Example 1: Code Splitting
### Problem: Large Bundle
```typescript
// ❌ BEFORE: Single bundle - 1.2MB JavaScript, 4.5s load time
import { Dashboard } from './Dashboard';
import { Analytics } from './Analytics';
import { Settings } from './Settings';
import { Admin } from './Admin';
function App() {
return (
<Router>
<Route path="/" component={Dashboard} />
<Route path="/analytics" component={Analytics} />
<Route path="/settings" component={Settings} />
<Route path="/admin" component={Admin} />
</Router>
);
}
// Initial bundle: 1.2MB
// First Contentful Paint: 4.5s
```
### Solution: Dynamic Imports
```typescript
// ✅ AFTER: Code splitting - 200KB initial, 1.8s load time
import { lazy, Suspense } from 'react';
const Dashboard = lazy(() => import('./Dashboard'));
const Analytics = lazy(() => import('./Analytics'));
const Settings = lazy(() => import('./Settings'));
const Admin = lazy(() => import('./Admin'));
function App() {
return (
<Router>
<Suspense fallback={<Loading />}>
<Route path="/" component={Dashboard} />
<Route path="/analytics" component={Analytics} />
<Route path="/settings" component={Settings} />
<Route path="/admin" component={Admin} />
</Suspense>
</Router>
);
}
// Initial bundle: 200KB (6x smaller)
// First Contentful Paint: 1.8s (2.5x faster)
```
### Metrics
| Implementation | Bundle Size | FCP | LCP |
|----------------|-------------|-----|-----|
| **Single Bundle** | 1.2 MB | 4.5s | 5.2s |
| **Code Split** | 200 KB | 1.8s | 2.1s |
| **Improvement** | **83% less** | **2.5x** | **2.5x** |
---
## Example 2: React Rendering Optimization
### Problem: Unnecessary Re-renders
```typescript
// ❌ BEFORE: Re-renders entire list on every update - 250ms
import { useState } from 'react';
function ProductList({ products }) {
  const [filter, setFilter] = useState('');
  // Recreated on every render, so every child's props change
  const handleUpdate = (id, data) => {
    // Update logic
  };
  return (
    <>
      <input value={filter} onChange={e => setFilter(e.target.value)} />
      {products
        .filter(p => p.name.includes(filter))
        .map(product => (
          <ProductCard
            key={product.id}
            product={product}
            onUpdate={handleUpdate}
          />
        ))}
    </>
  );
}
// Every keystroke: 250ms to re-render 100 items
```
### Solution: Memoization
```typescript
// ✅ AFTER: Memoized components - 15ms per update
import { memo, useState, useMemo, useCallback } from 'react';
const ProductCard = memo(({ product, onUpdate }) => {
return <div>{product.name}</div>;
});
function ProductList({ products }) {
const [filter, setFilter] = useState('');
const handleUpdate = useCallback((id, data) => {
// Update logic
}, []);
const filteredProducts = useMemo(() => {
return products.filter(p => p.name.includes(filter));
}, [products, filter]);
return (
<>
<input value={filter} onChange={e => setFilter(e.target.value)} />
{filteredProducts.map(product => (
<ProductCard
key={product.id}
product={product}
onUpdate={handleUpdate}
/>
))}
</>
);
}
// Every keystroke: 15ms (17x faster)
```
---
## Example 3: Virtual Scrolling
### Problem: Rendering Large Lists
```typescript
// ❌ BEFORE: Render all 10,000 items - 8s initial render
function UserList({ users }) {
return (
<div>
{users.map(user => (
<UserCard key={user.id} user={user} />
))}
</div>
);
}
// 10,000 DOM nodes created
// Initial render: 8,000ms
// Memory: 450MB
```
### Solution: react-window
```typescript
// ✅ AFTER: Render only visible items - 180ms initial render
import { FixedSizeList } from 'react-window';
function UserList({ users }) {
const Row = ({ index, style }) => (
<div style={style}>
<UserCard user={users[index]} />
</div>
);
return (
<FixedSizeList
height={600}
itemCount={users.length}
itemSize={80}
width="100%"
>
{Row}
</FixedSizeList>
);
}
// ~15 DOM nodes created (only visible items)
// Initial render: 180ms (44x faster)
// Memory: 25MB (18x less)
```
---
## Example 4: Image Optimization
### Problem: Large Unoptimized Images
```html
<!-- ❌ BEFORE: 4MB PNG, 3.5s load time -->
<img src="/images/hero.png" alt="Hero" />
```
### Solution: Optimized Formats + Lazy Loading
```html
<!-- ✅ AFTER: 180KB WebP, lazy loaded - 0.4s -->
<picture>
<source srcset="/images/hero-small.webp" media="(max-width: 640px)" />
<source srcset="/images/hero-medium.webp" media="(max-width: 1024px)" />
<source srcset="/images/hero-large.webp" media="(min-width: 1025px)" />
<img
src="/images/hero-large.webp"
alt="Hero"
loading="lazy"
decoding="async"
/>
</picture>
```
### Metrics
| Implementation | File Size | Load Time | LCP Impact |
|----------------|-----------|-----------|------------|
| **PNG** | 4 MB | 3.5s | 3.8s LCP |
| **WebP + Lazy** | 180 KB | 0.4s | 1.2s LCP |
| **Improvement** | **96% less** | **8.8x** | **3.2x** |
---
## Example 5: Tree Shaking
### Problem: Importing Entire Library
```typescript
// ❌ BEFORE: Imports entire lodash (72KB)
import _ from 'lodash';
const debounced = _.debounce(fn, 300);
const sorted = _.sortBy(arr, 'name');
// Bundle includes all 300+ lodash functions
// Added bundle size: 72KB
```
### Solution: Import Specific Functions
```typescript
// ✅ AFTER: Import only needed functions (4KB)
import debounce from 'lodash-es/debounce';
import sortBy from 'lodash-es/sortBy';
const debounced = debounce(fn, 300);
const sorted = sortBy(arr, 'name');
// Bundle includes only 2 functions
// Added bundle size: 4KB (18x smaller)
```
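If `debounce` is the only lodash function in use, a hand-rolled version (a sketch) removes even the remaining 4KB dependency:

```typescript
// Collapse a burst of calls into one invocation after waitMs of quiet.
function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  waitMs: number
) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs); // only the last call fires
  };
}
```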
---
## Summary
| Optimization | Before | After | Gain | Web Vital |
|--------------|--------|-------|------|-----------|
| **Code Splitting** | 1.2MB | 200KB | 6x | FCP, LCP |
| **Memo + useCallback** | 250ms | 15ms | 17x | FID |
| **Virtual Scrolling** | 8s | 180ms | 44x | LCP, CLS |
| **Image Optimization** | 4MB | 180KB | 22x | LCP |
| **Tree Shaking** | 72KB | 4KB | 18x | FCP |
## Web Vitals Targets
- **LCP** (Largest Contentful Paint): <2.5s
- **FID** (First Input Delay): <100ms
- **CLS** (Cumulative Layout Shift): <0.1
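The targets above as a tiny helper (illustrative; field names are assumptions) for checking measured vitals in CI or a dashboard:

```typescript
// Measured Web Vitals for one page load.
interface Vitals {
  lcpMs: number; // Largest Contentful Paint
  fidMs: number; // First Input Delay
  cls: number;   // Cumulative Layout Shift (unitless)
}

// True only when all three targets listed above are met.
function meetsTargets(v: Vitals): boolean {
  return v.lcpMs < 2500 && v.fidMs < 100 && v.cls < 0.1;
}
```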
---
**Previous**: [Caching Optimization](caching-optimization.md) | **Next**: [Backend Optimization](backend-optimization.md) | **Index**: [Examples Index](INDEX.md)