| name | description |
|---|---|
| performance-patterns | Performance optimization patterns and common bottlenecks across languages. Auto-loaded when discussing performance or optimization. |
# Performance Optimization Patterns

## Common Performance Bottlenecks

### 1. N+1 Query Problem

**Problem:** A separate database query is made for each item in a loop.

**Bad:**

```python
# Django ORM
users = User.objects.all()
for user in users:
    print(user.profile.bio)  # Separate query for each user!
```

**Good:**

```python
users = User.objects.select_related('profile').all()
for user in users:
    print(user.profile.bio)  # Single query with a JOIN
```
### 2. Unnecessary Re-renders (React)

**Bad:**

```jsx
function Component({ items }) {
  // This creates a new object on every render!
  const config = { limit: 10, offset: 0 };
  return <List items={items} config={config} />;
}
```

**Good:**

```jsx
const DEFAULT_CONFIG = { limit: 10, offset: 0 };

function Component({ items }) {
  return <List items={items} config={DEFAULT_CONFIG} />;
}

// Or use useMemo for computed values
const config = useMemo(() => ({
  limit: 10,
  offset: page * 10
}), [page]);
```
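Inline callbacks have the same problem: a new function identity on every render defeats memoized children. A hedged sketch of the companion pattern, assuming a `List` child you control (the `id`/`label` item shape is only for illustration), pairs `React.memo` with `useCallback`:

```jsx
import { memo, useCallback, useState } from 'react';

// memo() skips re-rendering List when its props are shallowly equal
const List = memo(function List({ items, onSelect }) {
  return (
    <ul>
      {items.map(item => (
        <li key={item.id} onClick={() => onSelect(item.id)}>{item.label}</li>
      ))}
    </ul>
  );
});

function Component({ items }) {
  const [, setSelected] = useState(null);
  // Stable identity across renders, so the memoized List is not invalidated
  const onSelect = useCallback(id => setSelected(id), []);
  return <List items={items} onSelect={onSelect} />;
}
```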
### 3. Sequential Awaits in Loops
**Bad:**

```javascript
// Sequential - takes 5 seconds for 5 items
for (const item of items) {
  await fetchData(item); // 1 second each
}
```

**Good:**

```javascript
// Parallel - takes ~1 second total
await Promise.all(items.map(item => fetchData(item)));

// Or with a concurrency limit
const pLimit = require('p-limit');
const limit = pLimit(5);
await Promise.all(items.map(item => limit(() => fetchData(item))));
```
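If pulling in `p-limit` isn't an option, a rough substitute is to process the items in fixed-size chunks. A minimal sketch (it trades a little throughput for simplicity, since each chunk waits for its slowest item):

```javascript
// Run `fn` over `items`, with at most `size` calls in flight at a time
async function mapInChunks(items, size, fn) {
  const results = [];
  for (let i = 0; i < items.length; i += size) {
    const chunk = items.slice(i, i + size);
    results.push(...await Promise.all(chunk.map(fn)));
  }
  return results;
}

// Usage: roughly equivalent to the p-limit example above
const data = await mapInChunks(items, 5, fetchData);
```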
### 4. Large Bundle Sizes

**Bad:**

```javascript
import _ from 'lodash';      // Imports the entire library (~71 KB)
import moment from 'moment'; // ~289 KB!
```

**Good:**

```javascript
import debounce from 'lodash/debounce'; // Only what you need
import { format } from 'date-fns';      // ~12 KB vs ~289 KB
```
### 5. Inefficient Algorithms

**Bad: O(n²)**

```python
def has_duplicates(arr):
    for i in range(len(arr)):
        for j in range(i + 1, len(arr)):
            if arr[i] == arr[j]:
                return True
    return False
```

**Good: O(n)**

```python
def has_duplicates(arr):
    return len(arr) != len(set(arr))
```
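The same idea carries over to JavaScript, where a `Set` collapses duplicates in a single pass; a minimal sketch:

```javascript
// O(n): a Set keeps only unique values, so a size mismatch means a duplicate exists
function hasDuplicates(arr) {
  return new Set(arr).size !== arr.length;
}
```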
## Optimization Techniques

### Caching

**Memoization**
```javascript
const cache = new Map();

function expensiveCalculation(n) {
  if (cache.has(n)) {
    return cache.get(n);
  }
  const result = compute(n); // placeholder for the expensive work
  cache.set(n, result);
  return result;
}
```
**React Query / SWR**

```javascript
// Automatic caching, deduplication, and background refetching
const { data } = useQuery('users', fetchUsers);
```

**HTTP Caching**

```javascript
// Set cache headers
res.set('Cache-Control', 'public, max-age=3600');
```
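Beyond `max-age`, responses can also be revalidated cheaply with an `ETag`, so unchanged data costs the client only a 304. A hedged Express sketch (the route and `getUsers` are hypothetical):

```javascript
const crypto = require('crypto');

app.get('/api/users', async (req, res) => {
  const users = await getUsers(); // hypothetical data access
  const body = JSON.stringify(users);
  // Content-derived validator: any change in the body changes the ETag
  const etag = '"' + crypto.createHash('sha1').update(body).digest('hex') + '"';

  res.set('ETag', etag);
  res.set('Cache-Control', 'no-cache'); // always revalidate, reuse if unchanged

  if (req.headers['if-none-match'] === etag) {
    return res.status(304).end(); // client's cached copy is still current
  }
  res.type('application/json').send(body);
});
```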
### Lazy Loading

**Dynamic Imports**

```jsx
// Instead of importing at the top:
// import HeavyComponent from './HeavyComponent';

// Load only when needed
const HeavyComponent = lazy(() => import('./HeavyComponent'));

<Suspense fallback={<Spinner />}>
  <HeavyComponent />
</Suspense>
```

**Image Lazy Loading**

```html
<img src="image.jpg" loading="lazy" />
```
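For content the browser won't lazy-load natively, `IntersectionObserver` gives the same effect. A sketch (the `data-src` attribute convention is just an illustration):

```javascript
// Swap in the real src only when the element approaches the viewport
const observer = new IntersectionObserver((entries, obs) => {
  for (const entry of entries) {
    if (entry.isIntersecting) {
      entry.target.src = entry.target.dataset.src;
      obs.unobserve(entry.target); // load once, then stop watching
    }
  }
}, { rootMargin: '200px' }); // start loading a bit before it scrolls into view

document.querySelectorAll('img[data-src]').forEach(img => observer.observe(img));
```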
### Debouncing & Throttling

```javascript
// Debounce: wait until the user stops typing
const debouncedSearch = debounce((query) => {
  searchAPI(query);
}, 300);

// Throttle: execute at most once per interval
const throttledScroll = throttle(() => {
  handleScroll();
}, 100);
```
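The snippets above assume a utility such as lodash's `debounce`/`throttle`. Minimal hand-rolled versions look roughly like this (sketches only; library implementations handle leading/trailing edges and cancellation):

```javascript
// Debounce: reset the timer on every call; fire only after `wait` ms of silence
function debounce(fn, wait) {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), wait);
  };
}

// Throttle: fire at most once per `wait` ms, dropping calls in between
function throttle(fn, wait) {
  let last = 0;
  return (...args) => {
    const now = Date.now();
    if (now - last >= wait) {
      last = now;
      fn(...args);
    }
  };
}
```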
### Database Optimization

**Index Critical Columns**

```sql
CREATE INDEX idx_users_email ON users(email);
CREATE INDEX idx_orders_user_date ON orders(user_id, created_at);
```
**Use Pagination**

```javascript
// Bad: load all records
const users = await User.find();

// Good: paginate
const users = await User.find()
  .limit(20)
  .skip(page * 20);
```
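Note that `skip()` still walks past every skipped document, so deep pages get slower. Cursor (keyset) pagination avoids that by filtering on the last seen `_id`; a hedged Mongoose-style sketch:

```javascript
// "Give me the next 20 after the last _id I saw"
const query = lastId ? { _id: { $gt: lastId } } : {};
const users = await User.find(query)
  .sort({ _id: 1 })
  .limit(20);

// The caller keeps the last _id as the cursor for the next request
const nextCursor = users.length ? users[users.length - 1]._id : null;
```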
**Eager Loading**

```python
# Django: avoid N+1
posts = Post.objects.select_related('author').prefetch_related('comments')

# SQLAlchemy
posts = session.query(Post).options(
    joinedload(Post.author),
    subqueryload(Post.comments),
).all()
```
### Frontend Optimization

**Virtual Scrolling**
```jsx
// For long lists, only render the visible items
import { FixedSizeList } from 'react-window';

<FixedSizeList
  height={600}
  width="100%"
  itemCount={10000}
  itemSize={35}
>
  {Row}
</FixedSizeList>
```
**Code Splitting**

```javascript
// Webpack
const routes = [
  {
    path: '/dashboard',
    component: () => import('./Dashboard'),
  },
  {
    path: '/profile',
    component: () => import('./Profile'),
  },
];
```
**Web Workers**

```javascript
// Offload heavy computation
const worker = new Worker('worker.js');
worker.postMessage(hugeDataset);
worker.onmessage = (e) => {
  console.log('Result:', e.data);
};
```
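The snippet above is only the main-thread half. The `worker.js` it references would look something like this sketch, where `heavyComputation` stands in for whatever work is being offloaded:

```javascript
// worker.js - runs off the main thread, so the UI stays responsive
self.onmessage = (e) => {
  const result = heavyComputation(e.data);
  self.postMessage(result); // send the answer back to the page
};

function heavyComputation(data) {
  // Stand-in for the real work: parsing, aggregation, number crunching, etc.
  return Array.isArray(data) ? data.length : data;
}
```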
## Language-Specific Optimizations

### JavaScript/TypeScript

- Use `Set` for lookups instead of `Array.includes()` (see the sketch after this list)
- Use `Map` instead of plain objects for frequent additions/deletions
- Avoid the `delete` operator; set the property to `undefined` instead
- Use `for` loops instead of `.forEach()` in hot paths
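A quick illustration of the first two points, assuming an `items` array of objects with a `tag` field (timings depend on data size, so measure before switching):

```javascript
// Array.includes() is O(n) per lookup; inside a filter it becomes O(n * m)
const allowedArray = ['a', 'b', 'c' /* ...thousands more... */];
const slow = items.filter(item => allowedArray.includes(item.tag));

// A Set makes each lookup O(1) on average
const allowedSet = new Set(allowedArray);
const fast = items.filter(item => allowedSet.has(item.tag));

// Map handles frequent additions/deletions better than a plain object
const counts = new Map();
for (const item of items) {
  counts.set(item.tag, (counts.get(item.tag) ?? 0) + 1);
}
```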
### Python

- Use list comprehensions instead of explicit loops
- Use generators for large datasets
- Use `__slots__` for classes with many instances
- Use `set` for membership testing
### Go

- Reuse buffers instead of allocating new ones
- Use `sync.Pool` for frequently allocated objects
- Prefer value types over pointers for small structs
- Use buffered channels appropriately
### Java

- Use `StringBuilder` for string concatenation in loops
- Prefer `ArrayList` over `LinkedList` (usually)
- Use primitive types instead of wrapper classes when possible
- Set an appropriate initial capacity for collections
## Performance Monitoring

### Profiling Tools
- JavaScript: Chrome DevTools, Lighthouse
- Python: cProfile, py-spy
- Go: pprof
- Java: JProfiler, VisualVM
### Key Metrics
- Time to First Byte (TTFB): Server response time
- First Contentful Paint (FCP): When content appears
- Largest Contentful Paint (LCP): Main content loaded (<2.5s)
- Time to Interactive (TTI): When page is interactive
- Total Blocking Time (TBT): How long main thread is blocked
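These can be captured from real user sessions with the standard `PerformanceObserver` API; the sketch below logs LCP and TTFB (the `web-vitals` library wraps the same idea with more edge-case handling):

```javascript
// Largest Contentful Paint: the last entry reported is the current LCP candidate
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const lcp = entries[entries.length - 1];
  console.log('LCP:', Math.round(lcp.startTime), 'ms');
}).observe({ type: 'largest-contentful-paint', buffered: true });

// Time to First Byte from the navigation timing entry
const [nav] = performance.getEntriesByType('navigation');
console.log('TTFB:', Math.round(nav.responseStart), 'ms');
```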
### Quick Checks

```bash
# Measure API response time
time curl https://api.example.com/endpoint

# Check bundle size
npx webpack-bundle-analyzer dist/stats.json

# Profile a Python script
python -m cProfile -s cumtime script.py

# Memory usage
/usr/bin/time -v ./program
```
## Golden Rules

- **Profile before optimizing** - Measure, don't guess
- **Optimize the critical path** - Focus on what users experience
- **Start with algorithms** - O(n²) → O(n log n) matters more than micro-optimizations
- **Cache intelligently** - But watch for stale data
- **Lazy load everything possible** - Don't load what you don't need
- **Monitor in production** - Real user performance matters most
Remember: Premature optimization is the root of all evil. Focus on bottlenecks, not micro-optimizations.