Initial commit

Zhongwei Li
2025-11-29 18:22:25 +08:00
commit c3294f28aa
60 changed files with 10297 additions and 0 deletions


@@ -0,0 +1,183 @@
# Advanced Caching Examples
## Bulk Invalidation
**Invalidate multiple related keys efficiently:**
```typescript
async function invalidateUserCache(userId: string) {
  // All keys derived from this user; DEL accepts multiple keys in one round trip
  const keys = [
    `user:${userId}`,
    `user_profile:${userId}`,
    `user_settings:${userId}`,
    `posts:author:${userId}`,
    `comments:author:${userId}`,
  ]
  await redis.del(...keys)
}

async function invalidatePostCache(postId: string) {
  const post = await prisma.post.findUnique({
    where: { id: postId },
    select: { authorId: true },
  })
  if (!post) return

  // KEYS blocks Redis while it scans the keyspace; prefer SCAN in hot paths
  // (see Pitfall 7 in Common Pitfalls)
  const listKeys = await redis.keys('posts:*')

  await Promise.all([
    redis.del(`post:${postId}`),
    redis.del(`posts:author:${post.authorId}`),
    listKeys.length > 0 ? redis.del(...listKeys) : Promise.resolve(),
  ])
}
```
**Pattern:** Collect all related keys and invalidate in a single operation to maintain consistency.
## Cache Warming
**Pre-populate cache with frequently accessed data:**
```typescript
async function warmCache() {
  const popularPosts = await prisma.post.findMany({
    where: { published: true },
    orderBy: { views: 'desc' },
    take: 20,
  })
  await Promise.all(
    popularPosts.map(post =>
      redis.setex(`post:${post.id}`, 300, JSON.stringify(post))
    )
  )

  const activeUsers = await prisma.user.findMany({
    where: { lastActiveAt: { gte: new Date(Date.now() - 24 * 60 * 60 * 1000) } },
    take: 50,
  })
  await Promise.all(
    activeUsers.map(user =>
      redis.setex(`user:${user.id}`, 600, JSON.stringify(user))
    )
  )
}
```
**Pattern:** Pre-populate cache on application startup or scheduled intervals for predictably popular data.
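A minimal sketch of wiring `warmCache` into startup plus a periodic refresh (the `startCacheWarming` wrapper and the 10-minute interval are illustrative, not part of the original):
```typescript
async function startCacheWarming() {
  // Warm once at startup so the first requests hit a populated cache
  await warmCache().catch(err => console.error('Initial cache warm failed:', err))

  // Refresh on a fixed interval; tune the period to how quickly the data goes stale
  setInterval(() => {
    warmCache().catch(err => console.error('Scheduled cache warm failed:', err))
  }, 10 * 60 * 1000)
}
```
In serverless deployments, a scheduled job (cron trigger) is the usual substitute for `setInterval`.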
## Graceful Fallback
**Handle Redis failures without breaking the application:**
```typescript
async function getCachedData<T>(
  key: string,
  fetchFn: () => Promise<T>
): Promise<T> {
  try {
    const cached = await redis.get(key)
    if (cached) {
      return JSON.parse(cached)
    }
  } catch (err) {
    console.error('Redis error, falling back to database:', err)
  }

  const data = await fetchFn()

  try {
    await redis.setex(key, 300, JSON.stringify(data))
  } catch (err) {
    console.error('Failed to cache data:', err)
  }

  return data
}

async function getUserProfile(userId: string) {
  return getCachedData(
    `user_profile:${userId}`,
    () => prisma.user.findUnique({
      where: { id: userId },
      include: { profile: true },
    })
  )
}
```
**Pattern:** Wrap all Redis operations in try/catch and always fall back to the database on error.
## Advanced TTL Strategy
**Multi-tier caching with different TTL per tier:**
```typescript
const CACHE_TIERS = {
  hot: 60,    // 1 minute: frequently accessed, changes often
  warm: 300,  // 5 minutes: regularly accessed
  cold: 1800, // 30 minutes: rarely accessed, changes slowly
}

interface CacheOptions {
  tier: keyof typeof CACHE_TIERS
  keyPrefix: string
}

async function tieredCache<T>(
  identifier: string,
  options: CacheOptions,
  fetchFn: () => Promise<T>
): Promise<T> {
  const cacheKey = `${options.keyPrefix}:${identifier}`
  const ttl = CACHE_TIERS[options.tier]

  const cached = await redis.get(cacheKey)
  if (cached) {
    return JSON.parse(cached)
  }

  const data = await fetchFn()
  await redis.setex(cacheKey, ttl, JSON.stringify(data))
  return data
}

async function getTrendingPosts() {
  return tieredCache(
    'trending',
    { tier: 'hot', keyPrefix: 'posts' },
    () => prisma.post.findMany({
      where: { published: true },
      orderBy: { views: 'desc' },
      take: 10,
    })
  )
}

async function getArchivedPosts() {
  return tieredCache(
    'archived',
    { tier: 'cold', keyPrefix: 'posts' },
    () => prisma.post.findMany({
      where: { archived: true },
      orderBy: { archivedAt: 'desc' },
      take: 20,
    })
  )
}
```
**Pattern:** Classify data into tiers based on access patterns, assign appropriate TTL per tier.


@@ -0,0 +1,160 @@
# Common Pitfalls
## Pitfall 1: Infinite TTL
**Problem:** Setting cache values without TTL leads to stale data and memory growth.
**Solution:** Always use `setex()` or `set()` with the `EX` option; never call `set()` without an expiry.
```typescript
// Two equivalent ways to write a value with a 300-second TTL (ioredis)
await redis.setex(key, 300, value)
await redis.set(key, value, 'EX', 300)
```
## Pitfall 2: Cache Key Inconsistency
**Problem:** Query parameter order affects cache key, causing cache misses.
**Solution:** Sort object keys before hashing or use deterministic key generation.
```typescript
function generateKey(obj: Record<string, unknown>) {
  const sorted = Object.keys(obj)
    .sort()
    .reduce((acc, key) => {
      acc[key] = obj[key]
      return acc
    }, {} as Record<string, unknown>)
  return JSON.stringify(sorted)
}
```
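When the filter object is large, hashing the sorted JSON keeps cache keys short and fixed-length; a sketch using Node's built-in `crypto` (the `queryKey` helper and `posts:list` prefix are illustrative):
```typescript
import { createHash } from 'node:crypto'

// Hash the deterministic JSON so arbitrary filter objects map to compact keys
function queryKey(prefix: string, params: Record<string, unknown>) {
  const hash = createHash('sha256').update(generateKey(params)).digest('hex')
  return `${prefix}:${hash}`
}

// e.g. queryKey('posts:list', { published: true, take: 10 })
```
Note that `generateKey` only sorts top-level keys; nested objects would need the same treatment for fully deterministic output.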
## Pitfall 3: Missing Invalidation Paths
**Problem:** Cache invalidated on direct updates but not on related mutations.
**Solution:** Map all mutation paths and ensure comprehensive invalidation.
```typescript
async function deleteUser(userId: string) {
  await prisma.user.delete({ where: { id: userId } })
  await Promise.all([
    redis.del(`user:${userId}`),
    redis.del(`posts:author:${userId}`),
    redis.del(`comments:author:${userId}`),
  ])
}
```
## Pitfall 4: Caching Pagination Without Page Number
**Problem:** Different pages cached with same key, returning wrong results.
**Solution:** Include skip/take or cursor in cache key.
```typescript
const cacheKey = `posts:skip:${skip}:take:${take}`
```
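A slightly fuller sketch of the same idea, with the page parameters flowing into both the key and the query (the `getPostsPage` helper is illustrative):
```typescript
async function getPostsPage(skip: number, take: number) {
  // Each distinct page gets its own cache entry
  const cacheKey = `posts:skip:${skip}:take:${take}`
  const cached = await redis.get(cacheKey)
  if (cached) return JSON.parse(cached)

  const posts = await prisma.post.findMany({
    orderBy: { createdAt: 'desc' },
    skip,
    take,
  })
  await redis.setex(cacheKey, 300, JSON.stringify(posts))
  return posts
}
```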
## Pitfall 5: No Redis Fallback
**Problem:** The application crashes when Redis is unavailable.
**Solution:** Wrap Redis operations in try/catch and fall back to the database.
```typescript
async function getCachedData(key: string, fetchFn: () => Promise<unknown>) {
  try {
    const cached = await redis.get(key)
    if (cached) return JSON.parse(cached)
  } catch (err) {
    console.error('Redis error, falling back to database:', err)
  }
  return fetchFn()
}
```
## Pitfall 6: Caching Sensitive Data
**Problem:** Storing passwords, tokens, or sensitive credentials in cache.
**Solution:** Never cache authentication tokens, passwords, or PII without encryption.
```typescript
async function getCachedUser(userId: string) {
  const cacheKey = `user:${userId}`
  const cached = await redis.get(cacheKey)
  if (cached) return JSON.parse(cached)

  const user = await prisma.user.findUnique({
    where: { id: userId },
    // Select only non-sensitive fields; password hashes and tokens never reach the cache
    select: {
      id: true,
      email: true,
      name: true,
      role: true,
    },
  })
  if (user) {
    await redis.setex(cacheKey, 300, JSON.stringify(user))
  }
  return user
}
```
## Pitfall 7: Pattern Matching in Hot Paths
**Problem:** Using `redis.keys('pattern:*')` in high-traffic endpoints causes performance degradation.
**Solution:** Use Redis SCAN for pattern matching or maintain explicit key sets.
```typescript
async function invalidatePostCacheSafe(postId: string) {
  const pattern = 'posts:*'
  const keysToDelete: string[] = []
  let cursor = '0'

  // SCAN walks the keyspace in batches instead of blocking the server like KEYS
  do {
    const [nextCursor, keys] = await redis.scan(
      cursor,
      'MATCH',
      pattern,
      'COUNT',
      100
    )
    keysToDelete.push(...keys)
    cursor = nextCursor
  } while (cursor !== '0')

  if (keysToDelete.length > 0) {
    await redis.del(...keysToDelete)
  }
  await redis.del(`post:${postId}`)
}
```
## Pitfall 8: Serialization Issues
**Problem:** Storing Prisma model instances directly without serialization.
**Solution:** Always use JSON.stringify for caching, JSON.parse for retrieval.
```typescript
// Write path: always serialize before storing
const user = await prisma.user.findUnique({ where: { id: userId } })
await redis.setex(
  `user:${userId}`,
  300,
  JSON.stringify(user)
)

// Read path: always deserialize after retrieval
const cached = await redis.get(`user:${userId}`)
if (cached) {
  const cachedUser = JSON.parse(cached)
  // ...use cachedUser
}
```
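A related consequence of the JSON round trip: `Date` columns come back as ISO strings. If callers expect `Date` objects, revive them after parsing (the `createdAt` field is assumed for illustration):
```typescript
const cachedValue = await redis.get(`user:${userId}`)
if (cachedValue) {
  const parsed = JSON.parse(cachedValue)
  // JSON has no Date type, so timestamps deserialize as plain strings
  parsed.createdAt = new Date(parsed.createdAt)
}
```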


@@ -0,0 +1,74 @@
# Cache Invalidation Patterns
## Event-Based: Invalidate on Data Changes
Use when: consistency critical, staleness unacceptable.
```typescript
async function createPost(data: { title: string; content: string; authorId: string }) {
  const post = await prisma.post.create({ data });
  await Promise.all([
    redis.del(`posts:author:${data.authorId}`),
    redis.del('posts:recent'),
    redis.del('posts:popular'),
  ]);
  return post;
}
```
## Time-Based: TTL-Driven Expiration
Use when: staleness acceptable for TTL duration, mutations infrequent.
```typescript
async function getRecentPosts() {
  const cached = await redis.get('posts:recent');
  if (cached) return JSON.parse(cached);
  const posts = await prisma.post.findMany({
    orderBy: { createdAt: 'desc' },
    take: 10,
  });
  await redis.setex('posts:recent', 300, JSON.stringify(posts));
  return posts;
}
```
## Hybrid: TTL + Event-Based Invalidation
Use when: mutations trigger immediate invalidation, TTL provides safety net.
```typescript
async function updatePost(postId: string, data: { title?: string }) {
  const post = await prisma.post.update({
    where: { id: postId },
    data,
  });
  await redis.del(`post:${postId}`);
  return post;
}

async function getPost(postId: string) {
  const cached = await redis.get(`post:${postId}`);
  if (cached) return JSON.parse(cached);
  const post = await prisma.post.findUnique({
    where: { id: postId },
  });
  if (post) await redis.setex(`post:${postId}`, 600, JSON.stringify(post));
  return post;
}
```
## Strategy Selection by Data Characteristics
| Characteristic | Approach |
| ------------------------- | ------------------------------------------------------------------------------------------------------------ |
| Changes >1/min | Avoid caching or use 5-30s TTL; consider real-time updates; event-based invalidation for consistency |
| Changes rare (hours/days) | Use 5-60min TTL; event-based invalidation on mutations; warm cache on startup |
| Read/write ratio >10:1 | Strong cache candidate; cache-aside pattern; warm popular data in background |
| Read/write ratio <3:1 | Weak candidate; optimize queries instead; cache only if DB bottlenecked |
| Consistency required | Short TTL + event-based invalidation; cache-through/write-behind patterns; add versioning for atomic updates |
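The last row mentions versioning for atomic updates; a minimal sketch of that idea, where a version counter is folded into list keys so a single `INCR` atomically retires every cached page (the key names are illustrative):
```typescript
// Call from mutations: old versioned keys become unreachable and expire via their TTL.
async function bumpPostListVersion() {
  await redis.incr('posts:list:version');
}

async function getPublishedPosts() {
  const version = (await redis.get('posts:list:version')) ?? '0';
  const cacheKey = `posts:published:v${version}`;
  const cached = await redis.get(cacheKey);
  if (cached) return JSON.parse(cached);

  const posts = await prisma.post.findMany({
    where: { published: true },
    orderBy: { createdAt: 'desc' },
    take: 20,
  });
  await redis.setex(cacheKey, 300, JSON.stringify(posts));
  return posts;
}
```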


@@ -0,0 +1,95 @@
# Redis Configuration
## Connection Setup
**ioredis client with retry and reconnection handling:**
```typescript
import { Redis } from 'ioredis'

const redis = new Redis({
  host: process.env.REDIS_HOST || 'localhost',
  port: parseInt(process.env.REDIS_PORT || '6379', 10),
  password: process.env.REDIS_PASSWORD,
  db: parseInt(process.env.REDIS_DB || '0', 10),
  maxRetriesPerRequest: 3,
  // Back off linearly, capped at 2 seconds between reconnection attempts
  retryStrategy: (times) => {
    const delay = Math.min(times * 50, 2000)
    return delay
  },
  // Defer connecting until the first command (helps with serverless cold starts)
  lazyConnect: true,
})

redis.on('error', (err) => {
  console.error('Redis connection error:', err)
})

redis.on('connect', () => {
  console.log('Redis connected')
})

export default redis
```
## Serverless Considerations
**Redis in serverless environments (Vercel, Lambda):**
- Reuse a single Redis client across invocations instead of creating one per request (ioredis reconnects automatically)
- Consider Upstash Redis (serverless-optimized)
- Set `lazyConnect: true` to avoid connection on module load
- Handle cold starts gracefully (fallback to database)
- Monitor connection count to avoid exhaustion
**Upstash example:**
```typescript
import { Redis } from '@upstash/redis'

const redis = new Redis({
  url: process.env.UPSTASH_REDIS_REST_URL!,
  token: process.env.UPSTASH_REDIS_REST_TOKEN!,
})
```
Upstash uses an HTTP REST API, which avoids managing long-lived TCP connections (and connection exhaustion) in serverless environments.
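A small usage sketch with the Upstash client, assuming its default automatic JSON serialization (values are passed and returned as objects rather than strings; the `getRecentPostsUpstash` helper is illustrative):
```typescript
async function getRecentPostsUpstash() {
  const cached = await redis.get('posts:recent')
  if (cached) return cached

  const posts = await prisma.post.findMany({
    orderBy: { createdAt: 'desc' },
    take: 10,
  })
  // `ex` sets the TTL in seconds, analogous to SETEX
  await redis.set('posts:recent', posts, { ex: 300 })
  return posts
}
```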
## Cache Implementation Checklist
When implementing caching:
**Setup:**
- [ ] Redis client configured with retry/reconnection handling and reused across requests
- [ ] Error handling for Redis connection failures
- [ ] Fallback to database when Redis unavailable
- [ ] Environment variables for Redis configuration
**Cache Keys:**
- [ ] Consistent key naming convention (entity:identifier)
- [ ] Hash complex query parameters for deterministic keys
- [ ] Namespace keys by entity type
- [ ] Document key patterns
**Caching Logic:**
- [ ] Cache-aside pattern (read from cache, fallback to DB)
- [ ] Serialize/deserialize with JSON.parse/stringify
- [ ] Handle null/undefined results appropriately
- [ ] Log cache hits/misses for monitoring
**Invalidation:**
- [ ] Invalidate on create/update/delete mutations
- [ ] Handle cascading invalidation for related entities
- [ ] Consider bulk invalidation for list queries
- [ ] Test invalidation across all mutation paths
**TTL Configuration:**
- [ ] Define TTL for each data type
- [ ] Shorter TTL for frequently changing data
- [ ] Longer TTL for static/rarely changing data
- [ ] Document TTL choices and rationale
**Monitoring:**
- [ ] Track cache hit rate (see the sketch after this checklist)
- [ ] Monitor cache memory usage
- [ ] Log invalidation events
- [ ] Alert on Redis connection failures
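For the hit-rate and hit/miss items above, a minimal sketch that keeps counters in Redis itself (the `cache:stats:*` key names and `getWithStats` wrapper are illustrative):
```typescript
// Wrap reads so every lookup also bumps a hit or miss counter
async function getWithStats(key: string): Promise<string | null> {
  const value = await redis.get(key)
  await redis.incr(value !== null ? 'cache:stats:hits' : 'cache:stats:misses')
  return value
}

// Hit rate = hits / (hits + misses), e.g. for a periodic metrics job or dashboard
async function getCacheHitRate(): Promise<number> {
  const [hits, misses] = await Promise.all([
    redis.get('cache:stats:hits'),
    redis.get('cache:stats:misses'),
  ])
  const h = parseInt(hits ?? '0', 10)
  const m = parseInt(misses ?? '0', 10)
  return h + m === 0 ? 0 : h / (h + m)
}
```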