| description | shortcut | category | difficulty | estimated_time | version |
|---|---|---|---|---|---|
| Implement comprehensive multi-level API caching strategies with Redis, CDN, and intelligent invalidation | cache | api | intermediate | 2-3 hours | 2.0.0 |
# Implement API Caching
Creates comprehensive multi-level caching strategies to dramatically improve API performance, reduce database load, and enhance user experience. Implements Redis for server-side caching, CDN integration for edge caching, and proper HTTP cache headers for client-side optimization.
## When to Use
Use this command when:
- API response times exceed acceptable thresholds (>200ms)
- Database queries are repetitive and expensive
- Static or semi-static content dominates API responses
- High traffic causes server strain and increased costs
- Geographic distribution requires edge caching
- Rate limiting needs efficient request counting
- Session data requires fast access across servers
**Do NOT use this command for:**
- Real-time data that changes every request
- User-specific sensitive data (without proper cache isolation)
- APIs with complex invalidation dependencies
- Small-scale applications where caching adds unnecessary complexity
## Prerequisites
Before running this command, ensure:
- API endpoints are identified and categorized by cache lifetime
- Redis or Memcached is available (or can be provisioned)
- CDN service is configured (CloudFlare, Fastly, or AWS CloudFront)
- Cache key strategy is defined
- Monitoring tools are ready to track cache performance
## Process
### Step 1: Analyze API Patterns
The command examines your API to determine optimal caching strategies:
- Identifies read-heavy endpoints suitable for caching
- Categorizes data by volatility (static, semi-dynamic, dynamic)
- Analyzes request patterns and frequency
- Determines appropriate cache TTL values
- Maps data dependencies for invalidation
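The volatility-to-TTL mapping this step produces can be sketched as a simple lookup. The category names and values below are illustrative assumptions, not the command's actual output:

```javascript
// Hypothetical mapping from data volatility to a default TTL in seconds.
// Real values should come from the request-pattern analysis above.
const TTL_BY_VOLATILITY = {
  static: 86400,        // rarely changes (e.g. site config, categories)
  'semi-dynamic': 3600, // changes a few times a day (e.g. product details)
  dynamic: 60           // changes frequently (e.g. inventory counts)
};

function ttlFor(volatility) {
  const ttl = TTL_BY_VOLATILITY[volatility];
  if (ttl === undefined) {
    throw new Error(`Unknown volatility category: ${volatility}`);
  }
  return ttl;
}
```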
### Step 2: Implement Server-Side Caching
Sets up Redis-based caching with intelligent patterns:
- Cache-aside pattern for on-demand caching
- Write-through for immediate cache updates
- Write-behind for asynchronous cache population
- Distributed caching for horizontal scaling
- Cache warming for critical data
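The first two patterns above can be contrasted with a small in-memory sketch. The `Map`s stand in for the database and Redis, and the methods are synchronous for brevity; the real implementations are asynchronous:

```javascript
// Sketch of write-through (on writes) combined with cache-aside (on reads).
class WriteThroughCache {
  constructor() {
    this.store = new Map(); // stand-in for the database
    this.cache = new Map(); // stand-in for Redis
  }

  // Write-through: persist and update the cache in the same operation,
  // so a read immediately after a write never sees stale data.
  write(key, value) {
    this.store.set(key, value);
    this.cache.set(key, value);
  }

  // Cache-aside: on a miss, fall back to the store and repopulate the cache.
  read(key) {
    if (this.cache.has(key)) return this.cache.get(key);
    if (!this.store.has(key)) return null;
    const value = this.store.get(key);
    this.cache.set(key, value);
    return value;
  }
}
```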
### Step 3: Configure HTTP Cache Headers
Implements proper HTTP caching directives:
- Cache-Control headers with appropriate max-age
- ETag generation for conditional requests
- Vary headers for content negotiation
- Surrogate-Control for CDN-specific behavior
- Stale-while-revalidate for improved perceived performance
### Step 4: Integrate CDN Caching
Configures edge caching for global distribution:
- Cache rules based on URL patterns
- Geographic cache distribution
- Cache purging API integration
- Origin shield configuration
- Custom cache keys for variants
### Step 5: Implement Cache Invalidation
Creates sophisticated invalidation strategies:
- Tag-based invalidation for related content
- Event-driven cache clearing
- Time-based expiration with jitter
- Cascade invalidation for dependent data
- Soft purging with grace periods
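Time-based expiration with jitter is the simplest of these to show: spreading TTLs over a window prevents keys written together from expiring in the same instant, a common stampede trigger. A minimal sketch (the function name and default ratio are illustrative):

```javascript
// Add up to `jitterRatio` of the base TTL as random extra lifetime,
// so keys cached at the same moment expire at different moments.
function jitteredTTL(baseTTL, jitterRatio = 0.1) {
  const jitter = Math.floor(Math.random() * baseTTL * jitterRatio);
  return baseTTL + jitter; // e.g. 3600 → somewhere in [3600, 3960)
}
```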
## Output Format
The command generates a complete caching implementation:
```
api-caching/
├── src/
│   ├── cache/
│   │   ├── redis-client.js
│   │   ├── cache-middleware.js
│   │   ├── cache-strategies.js
│   │   └── invalidation-service.js
│   ├── middleware/
│   │   ├── http-cache-headers.js
│   │   └── cdn-integration.js
│   └── utils/
│       ├── cache-key-generator.js
│       └── cache-metrics.js
├── config/
│   ├── cache-config.json
│   ├── redis.config.js
│   └── cdn-rules.json
├── tests/
│   └── cache.test.js
└── docs/
    └── caching-strategy.md
```
## Examples
### Example 1: E-commerce Product API with Redis
**Scenario:** High-traffic product catalog requiring sub-100ms response times
**Generated Redis Implementation:**
```javascript
// cache/redis-client.js
import Redis from 'ioredis';

class CacheManager {
  constructor() {
    this.client = new Redis({
      host: process.env.REDIS_HOST,
      port: Number(process.env.REDIS_PORT) || 6379,
      password: process.env.REDIS_PASSWORD,
      retryStrategy: (times) => Math.min(times * 50, 2000),
      enableOfflineQueue: false
    });
    this.defaultTTL = 3600; // 1 hour default
    // Simple hit/miss counters; swap in cache-metrics.js for real monitoring
    this.metrics = {
      hits: 0,
      misses: 0,
      hit: () => { this.metrics.hits += 1; },
      miss: () => { this.metrics.misses += 1; }
    };
    this.client.on('error', this.handleError);
  }

  handleError(error) {
    console.error('Cache error:', error.message);
  }

  async get(key, options = {}) {
    try {
      const cached = await this.client.get(key);
      if (cached) {
        this.metrics.hit(key);
        return JSON.parse(cached);
      }
      this.metrics.miss(key);
      // Cache-aside pattern: fetch if not cached
      if (options.fetchFunction) {
        const data = await options.fetchFunction();
        await this.set(key, data, options.ttl);
        return data;
      }
      return null;
    } catch (error) {
      this.handleError(error);
      // Fallback to direct fetch on cache error
      return options.fetchFunction ? await options.fetchFunction() : null;
    }
  }

  async set(key, value, ttl = this.defaultTTL) {
    const serialized = JSON.stringify(value);
    if (ttl) {
      await this.client.setex(key, ttl, serialized);
    } else {
      await this.client.set(key, serialized);
    }
    // Maintain cache tags for grouped invalidation
    if (value.tags) {
      for (const tag of value.tags) {
        await this.client.sadd(`tag:${tag}`, key);
      }
    }
  }

  async invalidateByTag(tag) {
    const keys = await this.client.smembers(`tag:${tag}`);
    if (keys.length > 0) {
      await this.client.del(...keys);
      await this.client.del(`tag:${tag}`);
    }
    return keys.length;
  }

  async invalidatePattern(pattern) {
    // Note: KEYS blocks Redis on large datasets; prefer SCAN in production
    const keys = await this.client.keys(pattern);
    if (keys.length > 0) {
      await this.client.del(...keys);
    }
    return keys.length;
  }
}

// Shared singleton instance (one per process)
const cacheManager = new CacheManager();

// middleware/cache-middleware.js
import { generateCacheKey } from '../utils/cache-key-generator.js';

export const cacheMiddleware = (options = {}) => {
  return async (req, res, next) => {
    // Skip caching for non-GET requests
    if (req.method !== 'GET') {
      return next();
    }
    // Generate cache key
    const cacheKey = generateCacheKey(req);
    // Check cache
    const cached = await cacheManager.get(cacheKey);
    if (cached) {
      res.set('X-Cache', 'HIT');
      res.set('X-Cache-Key', cacheKey);
      return res.json(cached);
    }
    // Wrap res.json so the response is cached on the way out
    const originalJson = res.json;
    res.json = function (data) {
      res.json = originalJson;
      // Cache successful responses only
      if (res.statusCode === 200) {
        cacheManager.set(cacheKey, data, options.ttl);
      }
      res.set('X-Cache', 'MISS');
      res.set('X-Cache-Key', cacheKey);
      return res.json(data);
    };
    next();
  };
};

// Usage in Express routes
app.get('/api/products/:id',
  cacheMiddleware({ ttl: 1800 }), // 30 minutes
  async (req, res) => {
    const product = await db.getProduct(req.params.id);
    res.json(product);
  }
);
```
### Example 2: CDN Integration with Cache Purging
**Scenario:** Global content delivery with CloudFlare integration
**Generated CDN Configuration:**
```javascript
// cdn-integration.js
class CDNManager {
  constructor(config) {
    this.zoneId = config.cloudflareZoneId;
    this.apiToken = config.cloudflareApiToken;
    this.baseUrl = 'https://api.cloudflare.com/client/v4';
  }

  // Set CDN cache headers
  setCacheHeaders(res, options = {}) {
    const {
      maxAge = 3600,
      sMaxAge = 86400,
      staleWhileRevalidate = 60,
      staleIfError = 3600,
      mustRevalidate = false,
      public: isPublic = true // `public` is a reserved word, so rebind it
    } = options;
    // Browser cache
    let cacheControl = isPublic ? 'public' : 'private';
    cacheControl += `, max-age=${maxAge}`;
    // CDN cache (s-maxage)
    cacheControl += `, s-maxage=${sMaxAge}`;
    // Stale content serving
    if (staleWhileRevalidate) {
      cacheControl += `, stale-while-revalidate=${staleWhileRevalidate}`;
    }
    if (staleIfError) {
      cacheControl += `, stale-if-error=${staleIfError}`;
    }
    if (mustRevalidate) {
      cacheControl += ', must-revalidate';
    }
    res.set('Cache-Control', cacheControl);
    // CloudFlare specific headers
    res.set('CF-Cache-Tag', options.tags?.join(',') || 'default');
    // Enable CDN caching for this response
    res.set('CDN-Cache-Control', `max-age=${sMaxAge}`);
  }

  // Purge CDN cache by URL or tag
  async purgeCache(options = {}) {
    const { urls, tags, everything = false } = options;
    const purgeBody = {};
    if (everything) {
      purgeBody.purge_everything = true;
    } else if (tags) {
      purgeBody.tags = tags;
    } else if (urls) {
      purgeBody.files = urls;
    }
    const response = await fetch(
      `${this.baseUrl}/zones/${this.zoneId}/purge_cache`,
      {
        method: 'POST',
        headers: {
          'Authorization': `Bearer ${this.apiToken}`,
          'Content-Type': 'application/json'
        },
        body: JSON.stringify(purgeBody)
      }
    );
    return response.json();
  }
}

// Usage in API endpoints
app.get('/api/content/:slug', async (req, res) => {
  const content = await cms.getContent(req.params.slug);
  // Set aggressive CDN caching for static content
  cdnManager.setCacheHeaders(res, {
    maxAge: 300,    // 5 min browser cache
    sMaxAge: 86400, // 24 hour CDN cache
    tags: ['content', `content-${content.id}`]
  });
  res.json(content);
});

// Invalidate on content update
app.post('/api/content/:slug/update', async (req, res) => {
  const content = await cms.updateContent(req.params.slug, req.body);
  // Purge CDN cache
  await cdnManager.purgeCache({
    tags: [`content-${content.id}`]
  });
  // Invalidate Redis cache
  await cacheManager.invalidateByTag(`content-${content.id}`);
  res.json({ success: true });
});
```
### Example 3: Advanced Cache Warming and Preloading
**Scenario:** Critical data that must always be cached for performance
**Generated Cache Warming Strategy:**
```javascript
// cache-warming-service.js
class CacheWarmer {
  constructor(cacheManager, dataSource) {
    this.cache = cacheManager;
    this.dataSource = dataSource;
    this.warmingInterval = 5 * 60 * 1000; // 5 minutes
  }

  async warmCache() {
    console.log('Starting cache warming...');
    // Warm frequently accessed data
    const criticalData = [
      { key: 'homepage:featured', fetch: () => this.dataSource.getFeaturedProducts() },
      { key: 'categories:all', fetch: () => this.dataSource.getAllCategories() },
      { key: 'config:site', fetch: () => this.dataSource.getSiteConfig() }
    ];
    const warmingPromises = criticalData.map(async ({ key, fetch }) => {
      try {
        const data = await fetch();
        await this.cache.set(key, data, 3600); // 1 hour TTL
        return { key, status: 'warmed' };
      } catch (error) {
        return { key, status: 'failed', error: error.message };
      }
    });
    const results = await Promise.allSettled(warmingPromises);
    console.log('Cache warming complete:', results);
    return results;
  }

  startPeriodicWarming() {
    // Initial warming
    this.warmCache();
    // Periodic warming
    setInterval(() => {
      this.warmCache();
    }, this.warmingInterval);
  }
}
```
## Error Handling
### Error: Redis Connection Failed
**Symptoms:** Cache operations time out or fail
**Cause:** Redis server unavailable or misconfigured
**Solution:**
```javascript
// Implement fallback to direct database access.
// (With ioredis, connection state is exposed via `status`, not an isReady() method.)
if (redis.status !== 'ready') {
  console.warn('Cache unavailable, falling back to database');
  return await database.query(sql);
}
```
**Prevention:** Implement a circuit breaker pattern and health checks
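A minimal in-process circuit breaker along those lines might look like the sketch below; the class name, threshold, and cooldown are illustrative, and the injectable clock exists only to make the behavior testable:

```javascript
// After `threshold` consecutive cache failures, the breaker opens and callers
// skip the cache (going straight to the database) until `cooldownMs` elapses.
class CacheCircuitBreaker {
  constructor(threshold = 3, cooldownMs = 30000, now = Date.now) {
    this.threshold = threshold;
    this.cooldownMs = cooldownMs;
    this.now = now; // injectable clock for testing
    this.failures = 0;
    this.openedAt = null;
  }

  allowRequest() {
    if (this.openedAt === null) return true;
    if (this.now() - this.openedAt >= this.cooldownMs) {
      // Half-open: let a request probe the cache again
      this.openedAt = null;
      this.failures = 0;
      return true;
    }
    return false;
  }

  recordFailure() {
    this.failures += 1;
    if (this.failures >= this.threshold) this.openedAt = this.now();
  }

  recordSuccess() {
    this.failures = 0;
    this.openedAt = null;
  }
}
```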
### Error: Cache Stampede
**Symptoms:** Many simultaneous cache misses overload the database
**Cause:** A popular item expires, so many requests rebuild the cache at once
**Solution:** Implement probabilistic early expiration or distributed locks
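One single-server mitigation is request coalescing: concurrent misses on the same key share a single rebuild instead of each hitting the database. A distributed lock (e.g. Redis `SET` with `NX`) extends the same idea across servers. The `Coalescer` name below is an assumption:

```javascript
// Deduplicate in-flight rebuilds per key: the first miss starts the rebuild,
// later misses for the same key await the same promise.
class Coalescer {
  constructor() {
    this.inFlight = new Map();
  }

  async fetch(key, rebuild) {
    if (this.inFlight.has(key)) return this.inFlight.get(key);
    const promise = rebuild().finally(() => this.inFlight.delete(key));
    this.inFlight.set(key, promise);
    return promise;
  }
}
```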
### Error: Stale Data Served
**Symptoms:** Users see outdated information
**Cause:** Cache TTL too long or invalidation not triggered
**Solution:** Implement event-based invalidation and reduce TTL values
## Configuration Options
### Option: `--ttl`
- Purpose: Set the default time-to-live for cache entries
- Values: seconds (integer)
- Default: `3600` (1 hour)
- Example: `/cache --ttl 7200`
### Option: `--strategy`
- Purpose: Choose the caching pattern
- Values: `cache-aside`, `write-through`, `write-behind`
- Default: `cache-aside`
- Example: `/cache --strategy write-through`
### Option: `--cdn`
- Purpose: Specify the CDN provider
- Values: `cloudflare`, `fastly`, `cloudfront`, `akamai`
- Default: `cloudflare`
- Example: `/cache --cdn fastly`
## Best Practices
✅ DO:
- Use consistent cache key naming conventions
- Implement cache metrics and monitoring
- Set appropriate TTL values based on data volatility
- Use cache tags for grouped invalidation
- Implement graceful degradation on cache failure
❌ DON'T:
- Cache user-specific sensitive data without isolation
- Use overly long TTLs for frequently changing data
- Forget to handle cache failures gracefully
- Cache large objects that exceed memory limits
💡 TIPS:
- Add jitter to TTL values to prevent synchronized expiration
- Use cache warming for critical data paths
- Monitor cache hit ratios (aim for >80%)
- Implement separate caches for different data types
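The hit-ratio monitoring recommended above needs only two counters. A sketch of what `cache-metrics.js` might expose (the API shown is an assumption, not the generated file's actual interface):

```javascript
// Track cache hits and misses and report the ratio to compare
// against the >80% target mentioned in the tips.
class CacheMetrics {
  constructor() {
    this.hits = 0;
    this.misses = 0;
  }

  hit() { this.hits += 1; }
  miss() { this.misses += 1; }

  hitRatio() {
    const total = this.hits + this.misses;
    return total === 0 ? 0 : this.hits / total;
  }
}
```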
## Related Commands
- `/api-rate-limiter` - Implement rate limiting with Redis
- `/api-response-validator` - Validate cached responses
- `/api-monitoring-dashboard` - Monitor cache performance
- `/api-load-tester` - Test cache effectiveness under load
## Performance Considerations
- Cache hit ratio target: >80% for static content, >60% for dynamic
- Redis memory usage: ~1KB per cached object + overhead
- Network latency: <5ms for Redis, <50ms for CDN edge
- Typical improvements: 10x-100x response time reduction
## Security Notes
⚠️ Security Considerations:
- Never cache authentication tokens or passwords
- Implement cache key signing to prevent injection
- Use separate cache instances for different security contexts
- Encrypt sensitive data before caching
- Implement proper access controls for cache management endpoints
## Troubleshooting
**Issue:** Low cache hit ratio
**Solution:** Review the cache key strategy and TTL values
**Issue:** Memory pressure on Redis
**Solution:** Enable an LRU eviction policy and reduce object sizes
**Issue:** Cache invalidation not working
**Solution:** Verify tag associations and event triggers
## Getting Help
- Redis documentation: https://redis.io/documentation
- CDN best practices: https://web.dev/cache-control
- Cache pattern guide: https://docs.microsoft.com/azure/architecture/patterns/cache-aside
## Version History
- v2.0.0 - Complete rewrite with multi-level caching and CDN integration
- v1.0.0 - Initial Redis-only implementation
Last updated: 2025-10-11
Quality score: 9.5/10
Tested with: Redis 7.0, CloudFlare, Fastly, AWS CloudFront