Initial commit
skills/implementing-query-caching/SKILL.md (new file, 239 lines)

---
name: implementing-query-caching
description: Implement query result caching with Redis and proper invalidation strategies for Prisma 6. Use when optimizing frequently accessed data, improving read-heavy application performance, or reducing database load through caching.
allowed-tools: Read, Write, Edit
version: 1.0.0
---

# Query Result Caching with Redis

Efficient query result caching for Prisma 6 applications using Redis: cache key generation, invalidation strategies, TTL management, and when caching provides value.

---

<role>
Implement query result caching with Redis for Prisma 6, covering cache key generation, invalidation, TTL strategies, and identifying when caching delivers value.
</role>

<when-to-activate>
User mentions: caching, Redis, performance optimization, slow queries, read-heavy applications, frequently accessed data, reducing database load, improving response times, cache invalidation, cache warming, or optimizing Prisma queries.
</when-to-activate>

<overview>
Query caching reduces database load and improves read response times, but it adds complexity: cache invalidation, consistency challenges, and extra infrastructure. Key capabilities: Redis-Prisma integration, consistent cache key patterns, mutation-triggered invalidation, TTL strategies (time- and event-based), and identifying when caching provides value.
</overview>

<workflow>
**Phase 1: Identify Cache Candidates**
Analyze query patterns for read-heavy operations; identify data with acceptable staleness; measure baseline query performance; estimate cache hit rate and expected improvement (see the baseline sketch after this section).

**Phase 2: Implement Cache Layer**
Set up Redis with connection pooling; create a cache wrapper around Prisma queries; implement consistent cache key generation; add cache reads with database fallback.

**Phase 3: Implement Invalidation**
Identify mutations affecting cached data; add invalidation to update/delete operations; handle bulk operations and cascading invalidation; test across scenarios.

**Phase 4: Configure TTL**
Determine an appropriate TTL per data type; implement time-based expiration; add event-based invalidation for critical data; monitor hit rates and adjust.
</workflow>

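To support Phase 1, a minimal baseline-measurement sketch. The helper name `measureQueryLatency` is hypothetical; it assumes Node 16+ (where `performance.now()` is a global) and the `prisma` client configured in the examples below.

```typescript
// Hypothetical helper for Phase 1: time a candidate query before adding caching,
// so Phase 4 tuning can be compared against real numbers.
async function measureQueryLatency<T>(
  label: string,
  queryFn: () => Promise<T>,
  runs = 20
): Promise<void> {
  const timings: number[] = [];
  for (let i = 0; i < runs; i++) {
    const start = performance.now();
    await queryFn();
    timings.push(performance.now() - start);
  }
  timings.sort((a, b) => a - b);
  const median = timings[Math.floor(runs / 2)];
  const p95 = timings[Math.floor(0.95 * (runs - 1))];
  console.log(`${label}: median ${median.toFixed(1)}ms, p95 ${p95.toFixed(1)}ms over ${runs} runs`);
}

// Example: baseline for a posts list query (assumes the `prisma` client from Example 1).
// await measureQueryLatency('posts list', () => prisma.post.findMany({ take: 20 }));
```
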
<decision-tree>
## When to Cache

**Strong Candidates:**

- Read-heavy data (>10:1 ratio): user profiles, product catalogs, configuration, content lists
- Expensive queries: large aggregations, multi-join queries, complex filtering, computed values
- High-frequency access: homepage data, navigation, popular results, trending content

**Weak Candidates:**

- Write-heavy data (<3:1 ratio): analytics, activity logs, messages, live updates
- Frequently changing: stock prices, inventory, bids, live scores
- User-specific: shopping carts, drafts, recommendations, sessions
- Fast simple queries: primary key lookups, indexed queries, data already in the DB cache

**Decision Tree:**

```
Read/write ratio > 10:1?
├─ Yes: Strong candidate
│   └─ Data stale for 1+ minutes acceptable?
│       ├─ Yes: Long TTL (5-60min) + event invalidation
│       └─ No: Short TTL (10-60sec) + aggressive invalidation
└─ No: Ratio > 3:1?
    ├─ Yes: Moderate candidate; if query > 100ms → short TTL (30-120sec)
    └─ No: Skip; optimize query/indexes/pooling instead
```

</decision-tree>

<examples>
## Basic Cache Implementation

**Example 1: Cache-Aside Pattern**

```typescript
import { PrismaClient } from '@prisma/client';
import { Redis } from 'ioredis';

const prisma = new PrismaClient();
const redis = new Redis({
  host: process.env.REDIS_HOST,
  port: parseInt(process.env.REDIS_PORT || '6379'),
  maxRetriesPerRequest: 3,
});

async function getCachedUser(userId: string) {
  const cacheKey = `user:${userId}`;

  // Read from cache first
  const cached = await redis.get(cacheKey);
  if (cached) return JSON.parse(cached);

  // Cache miss: fall back to the database
  const user = await prisma.user.findUnique({
    where: { id: userId },
    select: { id: true, email: true, name: true, role: true },
  });

  // Populate the cache with a 5-minute TTL
  if (user) await redis.setex(cacheKey, 300, JSON.stringify(user));
  return user;
}
```

**Example 2: Consistent Key Generation**

```typescript
import crypto from 'crypto';

function generateCacheKey(entity: string, query: Record<string, unknown>): string {
  // Sort keys so the same filters always produce the same key
  const sortedQuery = Object.keys(query)
    .sort()
    .reduce((acc, key) => {
      acc[key] = query[key];
      return acc;
    }, {} as Record<string, unknown>);

  const queryHash = crypto
    .createHash('sha256')
    .update(JSON.stringify(sortedQuery))
    .digest('hex')
    .slice(0, 16);
  return `${entity}:${queryHash}`;
}

async function getCachedPosts(filters: {
  authorId?: string;
  published?: boolean;
  tags?: string[];
}) {
  const cacheKey = generateCacheKey('posts', filters);
  const cached = await redis.get(cacheKey);
  if (cached) return JSON.parse(cached);

  const posts = await prisma.post.findMany({
    where: filters, // note: list filters such as `tags` may need Prisma list/relation syntax depending on the schema
    select: { id: true, title: true, createdAt: true },
  });

  await redis.setex(cacheKey, 120, JSON.stringify(posts));
  return posts;
}
```

**Example 3: Cache Invalidation on Mutation**

```typescript
async function updatePost(postId: string, data: { title?: string; content?: string }) {
  const post = await prisma.post.update({ where: { id: postId }, data });

  await Promise.all([
    redis.del(`post:${postId}`),
    redis.del(`posts:author:${post.authorId}`),
    // Drop any cached post lists (see note below about KEYS)
    redis.keys('posts:*').then((keys) => keys.length > 0 && redis.del(...keys)),
  ]);
  return post;
}
```

**Note:** `redis.keys()` with patterns is slow on large keysets; use SCAN or maintain explicit key sets.

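A minimal sketch of the key-set approach mentioned in the note; the set name `posts:list-keys` and the helper names are hypothetical. Each list query registers its cache key in a Redis set, so invalidation can delete exactly those keys without scanning the keyspace.

```typescript
// Hypothetical key-set tracking: list caches register themselves so mutations
// can invalidate them without KEYS/SCAN.
const POSTS_LIST_KEYS = 'posts:list-keys';

async function cachePostsList(cacheKey: string, posts: unknown, ttlSeconds = 120) {
  await redis.setex(cacheKey, ttlSeconds, JSON.stringify(posts));
  await redis.sadd(POSTS_LIST_KEYS, cacheKey); // remember this key for later invalidation
}

async function invalidatePostsLists() {
  const keys = await redis.smembers(POSTS_LIST_KEYS);
  if (keys.length > 0) await redis.del(...keys);
  await redis.del(POSTS_LIST_KEYS); // reset the tracking set itself
}
```
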
**Example 4: TTL Strategy**

```typescript
// TTL (in seconds) per data type: shorter for volatile data, longer for stable data
const TTL = {
  user_profile: 600,
  user_settings: 300,
  posts_list: 120,
  post_detail: 180,
  popular_posts: 60,
  real_time_stats: 10,
};

async function cacheWithTTL<T>(
  key: string,
  ttlType: keyof typeof TTL,
  fetchFn: () => Promise<T>
): Promise<T> {
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached);

  const data = await fetchFn();
  await redis.setex(key, TTL[ttlType], JSON.stringify(data));
  return data;
}
```

</examples>

<constraints>
**MUST:**

- Use the cache-aside pattern (not cache-through)
- Generate cache keys consistently (no random or timestamp components)
- Invalidate the cache on all mutations affecting cached data
- Handle Redis failures gracefully with database fallback
- Serialize with JSON (consistent with Prisma types)
- Set a TTL on all cached values (never infinite)
- Test cache invalidation thoroughly

**SHOULD:**

- Use Redis connection pooling (ioredis)
- Separate cache logic from business logic
- Monitor cache hit rates; adjust TTL accordingly
- Use shorter TTL for frequently changing data
- Warm the cache for predictably popular data
- Document cache key patterns and invalidation rules
- Use Redis SCAN instead of KEYS for pattern matching

**NEVER:**

- Cache authentication tokens or sensitive credentials
- Use infinite TTL
- Pattern-match invalidation in hot paths
- Cache paginated queries (skip/take) without the pagination parameters in the key
- Assume the cache is always available
- Store Prisma model instances directly (serialize first)
- Cache write-heavy data
</constraints>

<validation>
**Cache Hit Rate:** Monitor for >60% to confirm caching is effective; a rate below 40% signals the strategy or TTL needs rethinking. A minimal hit/miss tracking sketch appears at the end of this section.

**Invalidation Testing:** Verify all mutations invalidate the correct keys; test cascading invalidation for related entities; confirm bulk operations invalidate list caches; ensure no stale data is served post-mutation.

**Performance:** Measure query latency with and without the cache; target >50% latency reduction; monitor P95/P99 improvements; verify caching doesn't increase memory pressure.

**Redis Health:** Monitor connection pool utilization, memory usage (set a maxmemory-policy), and connection failures; test application behavior when Redis is unavailable.

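A minimal sketch of hit/miss tracking to back the hit-rate target above; the counter keys `cache:hits` / `cache:misses` and the wrapper are hypothetical, not part of this skill's required API.

```typescript
// Hypothetical instrumentation: count hits and misses in Redis so the hit rate
// can be checked against the >60% target.
async function getCachedWithStats<T>(key: string, fetchFn: () => Promise<T>): Promise<T> {
  const cached = await redis.get(key);
  if (cached) {
    await redis.incr('cache:hits');
    return JSON.parse(cached);
  }
  await redis.incr('cache:misses');

  const data = await fetchFn();
  await redis.setex(key, 300, JSON.stringify(data));
  return data;
}

async function cacheHitRate(): Promise<number> {
  const [hits, misses] = await Promise.all([redis.get('cache:hits'), redis.get('cache:misses')]);
  const h = parseInt(hits || '0');
  const m = parseInt(misses || '0');
  return h + m === 0 ? 0 : h / (h + m);
}
```
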
</validation>

---

## References

- [Redis Configuration](./references/redis-configuration.md) — Connection setup, serverless
- [Invalidation Patterns](./references/invalidation-patterns.md) — Event-based, time-based, hybrid
- [Advanced Examples](./references/advanced-examples.md) — Bulk invalidation, cache warming
- [Common Pitfalls](./references/common-pitfalls.md) — Infinite TTL, key inconsistency, missing invalidation

skills/implementing-query-caching/references/advanced-examples.md (new file, 183 lines)

# Advanced Caching Examples

## Bulk Invalidation

**Invalidate multiple related keys efficiently:**

```typescript
// Assumes the `redis` (ioredis) and `prisma` clients configured in SKILL.md
async function invalidateUserCache(userId: string) {
  const keys = [
    `user:${userId}`,
    `user_profile:${userId}`,
    `user_settings:${userId}`,
    `posts:author:${userId}`,
    `comments:author:${userId}`,
  ]

  await redis.del(...keys)
}

async function invalidatePostCache(postId: string) {
  const post = await prisma.post.findUnique({
    where: { id: postId },
    select: { authorId: true },
  })

  if (!post) return

  // KEYS is acceptable for occasional invalidation; prefer SCAN in hot paths
  const keys = await redis.keys(`posts:*`)

  await Promise.all([
    redis.del(`post:${postId}`),
    redis.del(`posts:author:${post.authorId}`),
    keys.length > 0 ? redis.del(...keys) : Promise.resolve(),
  ])
}
```

**Pattern:** Collect all related keys and invalidate them in a single operation to maintain consistency.

## Cache Warming

**Pre-populate the cache with frequently accessed data:**

```typescript
async function warmCache() {
  // Warm the most-viewed published posts
  const popularPosts = await prisma.post.findMany({
    where: { published: true },
    orderBy: { views: 'desc' },
    take: 20,
  })

  await Promise.all(
    popularPosts.map(post =>
      redis.setex(`post:${post.id}`, 300, JSON.stringify(post))
    )
  )

  // Warm users active in the last 24 hours
  const activeUsers = await prisma.user.findMany({
    where: { lastActiveAt: { gte: new Date(Date.now() - 24 * 60 * 60 * 1000) } },
    take: 50,
  })

  await Promise.all(
    activeUsers.map(user =>
      redis.setex(`user:${user.id}`, 600, JSON.stringify(user))
    )
  )
}
```

**Pattern:** Pre-populate the cache on application startup or at scheduled intervals for predictably popular data.

## Graceful Fallback

**Handle Redis failures without breaking the application:**

```typescript
async function getCachedData<T>(
  key: string,
  fetchFn: () => Promise<T>
): Promise<T> {
  try {
    const cached = await redis.get(key)
    if (cached) {
      return JSON.parse(cached)
    }
  } catch (err) {
    console.error('Redis error, falling back to database:', err)
  }

  const data = await fetchFn()

  try {
    await redis.setex(key, 300, JSON.stringify(data))
  } catch (err) {
    console.error('Failed to cache data:', err)
  }

  return data
}

async function getUserProfile(userId: string) {
  return getCachedData(
    `user_profile:${userId}`,
    () => prisma.user.findUnique({
      where: { id: userId },
      include: { profile: true },
    })
  )
}
```

**Pattern:** Wrap all Redis operations in try/catch and always fall back to the database on error.

## Advanced TTL Strategy

**Multi-tier caching with a different TTL per tier:**

```typescript
const CACHE_TIERS = {
  hot: 60,    // accessed constantly, short TTL
  warm: 300,  // regularly accessed
  cold: 1800, // rarely accessed
}

interface CacheOptions {
  tier: keyof typeof CACHE_TIERS
  keyPrefix: string
}

async function tieredCache<T>(
  identifier: string,
  options: CacheOptions,
  fetchFn: () => Promise<T>
): Promise<T> {
  const cacheKey = `${options.keyPrefix}:${identifier}`
  const ttl = CACHE_TIERS[options.tier]

  const cached = await redis.get(cacheKey)
  if (cached) {
    return JSON.parse(cached)
  }

  const data = await fetchFn()
  await redis.setex(cacheKey, ttl, JSON.stringify(data))

  return data
}

async function getTrendingPosts() {
  return tieredCache(
    'trending',
    { tier: 'hot', keyPrefix: 'posts' },
    () => prisma.post.findMany({
      where: { published: true },
      orderBy: { views: 'desc' },
      take: 10,
    })
  )
}

async function getArchivedPosts() {
  return tieredCache(
    'archived',
    { tier: 'cold', keyPrefix: 'posts' },
    () => prisma.post.findMany({
      where: { archived: true },
      orderBy: { archivedAt: 'desc' },
      take: 20,
    })
  )
}
```

**Pattern:** Classify data into tiers based on access patterns and assign an appropriate TTL per tier.

skills/implementing-query-caching/references/common-pitfalls.md (new file, 160 lines)

# Common Pitfalls

## Pitfall 1: Infinite TTL

**Problem:** Setting cache values without a TTL leads to stale data and unbounded memory growth.

**Solution:** Always use `setex()` or `set()` with the `EX` option; never use a plain `set()`.

```typescript
await redis.setex(key, 300, value)
```

## Pitfall 2: Cache Key Inconsistency

**Problem:** Query parameter order affects the cache key, causing unnecessary cache misses.

**Solution:** Sort object keys before hashing, or use another deterministic key generation scheme.

```typescript
function generateKey(obj: Record<string, unknown>) {
  const sorted = Object.keys(obj).sort().reduce((acc, key) => {
    acc[key] = obj[key]
    return acc
  }, {} as Record<string, unknown>)
  return JSON.stringify(sorted)
}
```

## Pitfall 3: Missing Invalidation Paths

**Problem:** Cache is invalidated on direct updates but not on related mutations.

**Solution:** Map all mutation paths and ensure comprehensive invalidation.

```typescript
async function deleteUser(userId: string) {
  await prisma.user.delete({ where: { id: userId } })

  // Also drop caches derived from this user's data
  await Promise.all([
    redis.del(`user:${userId}`),
    redis.del(`posts:author:${userId}`),
    redis.del(`comments:author:${userId}`),
  ])
}
```

## Pitfall 4: Caching Pagination Without Page Parameters

**Problem:** Different pages are cached under the same key, returning the wrong results.

**Solution:** Include skip/take (or the cursor) in the cache key.

```typescript
const cacheKey = `posts:skip:${skip}:take:${take}`
```

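A minimal sketch of a paginated read with the pagination parameters baked into the key; the function name `getPostsPage` is hypothetical and it assumes the `redis` and `prisma` clients from SKILL.md.

```typescript
// Hypothetical paginated read: skip/take are part of the cache key,
// so each page is cached separately.
async function getPostsPage(skip: number, take: number) {
  const cacheKey = `posts:skip:${skip}:take:${take}`

  const cached = await redis.get(cacheKey)
  if (cached) return JSON.parse(cached)

  const posts = await prisma.post.findMany({
    orderBy: { createdAt: 'desc' },
    skip,
    take,
  })

  await redis.setex(cacheKey, 120, JSON.stringify(posts))
  return posts
}
```
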
## Pitfall 5: No Redis Fallback

**Problem:** The application crashes when Redis is unavailable.

**Solution:** Wrap Redis operations in try/catch and fall back to the database.

```typescript
async function getCachedData(key: string, fetchFn: () => Promise<unknown>) {
  try {
    const cached = await redis.get(key)
    if (cached) return JSON.parse(cached)
  } catch (err) {
    console.error('Redis error, falling back to database:', err)
  }

  return fetchFn()
}
```

## Pitfall 6: Caching Sensitive Data

**Problem:** Passwords, tokens, or other sensitive credentials end up in the cache.

**Solution:** Never cache authentication tokens, passwords, or PII without encryption; select only the fields that are safe to cache.

```typescript
async function getCachedUser(userId: string) {
  const cacheKey = `user:${userId}`

  const cached = await redis.get(cacheKey)
  if (cached) return JSON.parse(cached)

  const user = await prisma.user.findUnique({
    where: { id: userId },
    // Select only non-sensitive fields; never cache password hashes or tokens
    select: {
      id: true,
      email: true,
      name: true,
      role: true,
    },
  })

  if (user) {
    await redis.setex(cacheKey, 300, JSON.stringify(user))
  }

  return user
}
```

## Pitfall 7: Pattern Matching in Hot Paths

**Problem:** Using `redis.keys('pattern:*')` in high-traffic endpoints degrades performance, because KEYS blocks Redis while it walks the entire keyspace.

**Solution:** Use Redis SCAN for pattern matching, or maintain explicit key sets.

```typescript
async function invalidatePostCacheSafe(postId: string) {
  const pattern = 'posts:*'
  const keysToDelete: string[] = []

  // SCAN iterates the keyspace in chunks instead of blocking like KEYS
  let cursor = '0'
  do {
    const [nextCursor, keys] = await redis.scan(
      cursor,
      'MATCH',
      pattern,
      'COUNT',
      100
    )
    keysToDelete.push(...keys)
    cursor = nextCursor
  } while (cursor !== '0')

  if (keysToDelete.length > 0) {
    await redis.del(...keysToDelete)
  }

  await redis.del(`post:${postId}`)
}
```

## Pitfall 8: Serialization Issues

**Problem:** Storing Prisma model instances directly without serialization.

**Solution:** Always `JSON.stringify` when caching and `JSON.parse` on retrieval.

```typescript
// Write: serialize before caching
const user = await prisma.user.findUnique({ where: { id: userId } })
await redis.setex(`user:${userId}`, 300, JSON.stringify(user))

// Read: parse after retrieval
const cached = await redis.get(`user:${userId}`)
const cachedUser = cached ? JSON.parse(cached) : null
```

skills/implementing-query-caching/references/invalidation-patterns.md (new file, 74 lines)

# Cache Invalidation Patterns

## Event-Based: Invalidate on Data Changes

Use when: consistency is critical and staleness is unacceptable.

```typescript
async function createPost(data: { title: string; content: string; authorId: string }) {
  const post = await prisma.post.create({ data });

  // A new post affects every cached list it could appear in
  await Promise.all([
    redis.del(`posts:author:${data.authorId}`),
    redis.del('posts:recent'),
    redis.del('posts:popular'),
  ]);

  return post;
}
```

## Time-Based: TTL-Driven Expiration

Use when: staleness is acceptable for the TTL duration and mutations are infrequent.

```typescript
async function getRecentPosts() {
  const cached = await redis.get('posts:recent');
  if (cached) return JSON.parse(cached);

  const posts = await prisma.post.findMany({
    orderBy: { createdAt: 'desc' },
    take: 10,
  });

  // Entries simply expire after 5 minutes; no explicit invalidation
  await redis.setex('posts:recent', 300, JSON.stringify(posts));
  return posts;
}
```

## Hybrid: TTL + Event-Based Invalidation

Use when: mutations should trigger immediate invalidation while TTL provides a safety net.

```typescript
async function updatePost(postId: string, data: { title?: string }) {
  const post = await prisma.post.update({
    where: { id: postId },
    data,
  });
  // Event-based: drop the cached entry immediately on mutation
  await redis.del(`post:${postId}`);
  return post;
}

async function getPost(postId: string) {
  const cached = await redis.get(`post:${postId}`);
  if (cached) return JSON.parse(cached);

  const post = await prisma.post.findUnique({
    where: { id: postId },
  });
  // Time-based safety net: entry expires after 10 minutes even if an invalidation path is missed
  if (post) await redis.setex(`post:${postId}`, 600, JSON.stringify(post));
  return post;
}
```

## Strategy Selection by Data Characteristics

| Characteristic | Approach |
| --- | --- |
| Changes >1/min | Avoid caching or use 5-30s TTL; consider real-time updates; event-based invalidation for consistency |
| Changes rarely (hours/days) | Use 5-60min TTL; event-based invalidation on mutations; warm cache on startup |
| Read/write ratio >10:1 | Strong cache candidate; cache-aside pattern; warm popular data in background |
| Read/write ratio <3:1 | Weak candidate; optimize queries instead; cache only if the DB is the bottleneck |
| Consistency required | Short TTL + event-based invalidation; cache-through/write-behind patterns; add versioning for atomic updates (see the sketch below) |

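The versioning mentioned in the last row can be sketched with a per-entity version counter; the key names (`posts:version`, `posts:v<N>:*`) and helpers are hypothetical. Bumping the counter on every mutation makes all derived keys unreachable in one atomic step, and TTL reclaims the old entries later.

```typescript
// Hypothetical versioned keys: readers always use the current version,
// so bumping the counter invalidates every derived key at once.
async function currentPostsVersion(): Promise<number> {
  const version = await redis.get('posts:version');
  return parseInt(version || '0');
}

async function getPostsListVersioned(listName: string) {
  const version = await currentPostsVersion();
  const cacheKey = `posts:v${version}:${listName}`;

  const cached = await redis.get(cacheKey);
  if (cached) return JSON.parse(cached);

  const posts = await prisma.post.findMany({ orderBy: { createdAt: 'desc' }, take: 10 });
  await redis.setex(cacheKey, 300, JSON.stringify(posts));
  return posts;
}

async function bumpPostsVersion() {
  // Called after any post mutation; stale posts:v<N>:* keys expire via TTL
  await redis.incr('posts:version');
}
```
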
skills/implementing-query-caching/references/redis-configuration.md (new file, 95 lines)

# Redis Configuration

## Connection Setup

**ioredis client with connection pooling:**

```typescript
import { Redis } from 'ioredis'

const redis = new Redis({
  host: process.env.REDIS_HOST || 'localhost',
  port: parseInt(process.env.REDIS_PORT || '6379'),
  password: process.env.REDIS_PASSWORD,
  db: parseInt(process.env.REDIS_DB || '0'),
  maxRetriesPerRequest: 3,
  retryStrategy: (times) => {
    // Back off linearly, capped at 2 seconds
    const delay = Math.min(times * 50, 2000)
    return delay
  },
  lazyConnect: true,
})

redis.on('error', (err) => {
  console.error('Redis connection error:', err)
})

redis.on('connect', () => {
  console.log('Redis connected')
})

export default redis
```

## Serverless Considerations

**Redis in serverless environments (Vercel, Lambda):**

- Use Redis connection pooling (ioredis handles this)
- Consider Upstash Redis (serverless-optimized)
- Set `lazyConnect: true` to avoid connecting on module load
- Handle cold starts gracefully (fall back to the database)
- Monitor connection count to avoid exhaustion

**Upstash example:**

```typescript
import { Redis } from '@upstash/redis'

const redis = new Redis({
  url: process.env.UPSTASH_REDIS_REST_URL,
  token: process.env.UPSTASH_REDIS_REST_TOKEN,
})
```

Upstash uses an HTTP REST API, which avoids connection pooling issues in serverless environments.

## Cache Implementation Checklist

When implementing caching:

**Setup:**
- [ ] Redis client configured with connection pooling
- [ ] Error handling for Redis connection failures
- [ ] Fallback to database when Redis unavailable
- [ ] Environment variables for Redis configuration

**Cache Keys:**
- [ ] Consistent key naming convention (entity:identifier)
- [ ] Hash complex query parameters for deterministic keys
- [ ] Namespace keys by entity type
- [ ] Document key patterns

**Caching Logic:**
- [ ] Cache-aside pattern (read from cache, fallback to DB)
- [ ] Serialize/deserialize with JSON.stringify/JSON.parse
- [ ] Handle null/undefined results appropriately
- [ ] Log cache hits/misses for monitoring

**Invalidation:**
- [ ] Invalidate on create/update/delete mutations
- [ ] Handle cascading invalidation for related entities
- [ ] Consider bulk invalidation for list queries
- [ ] Test invalidation across all mutation paths

**TTL Configuration:**
- [ ] Define a TTL for each data type
- [ ] Shorter TTL for frequently changing data
- [ ] Longer TTL for static/rarely changing data
- [ ] Document TTL choices and rationale

**Monitoring:**
- [ ] Track cache hit rate
- [ ] Monitor cache memory usage
- [ ] Log invalidation events
- [ ] Alert on Redis connection failures