Initial commit

Zhongwei Li
2025-11-30 08:24:26 +08:00
commit ce4251a69d
14 changed files with 3532 additions and 0 deletions

# Cloudflare Queues Best Practices
Production patterns, optimization strategies, and common pitfalls.
---
## Consumer Design Patterns
### 1. Explicit Acknowledgement for Non-Idempotent Operations
**Problem:** Database writes or API calls get duplicated when the batch is retried
**Solution:** Use explicit `ack()` for each message
```typescript
// ❌ Bad: Entire batch retried if one operation fails
export default {
async queue(batch: MessageBatch, env: Env): Promise<void> {
for (const message of batch.messages) {
await env.DB.prepare(
'INSERT INTO orders (id, data) VALUES (?, ?)'
).bind(message.body.id, JSON.stringify(message.body)).run();
}
// If any insert fails, the handler throws and ALL messages are retried → duplicates!
},
};
// ✅ Good: Each message acknowledged individually
export default {
async queue(batch: MessageBatch, env: Env): Promise<void> {
for (const message of batch.messages) {
try {
await env.DB.prepare(
'INSERT INTO orders (id, data) VALUES (?, ?)'
).bind(message.body.id, JSON.stringify(message.body)).run();
message.ack(); // Only ack on success
} catch (error) {
console.error(`Failed: ${message.id}`, error);
message.retry(); // Retry only this message (without retry/throw it would be implicitly acked)
}
}
},
};
```
---
### 2. Exponential Backoff for Rate Limits
**Problem:** Retrying immediately hits the same rate limit
**Solution:** Use exponential backoff based on attempts
```typescript
// ❌ Bad: Retry immediately
try {
await callRateLimitedAPI();
message.ack();
} catch (error) {
message.retry(); // Immediately hits rate limit again
}
// ✅ Good: Exponential backoff
try {
await callRateLimitedAPI();
message.ack();
} catch (error) {
if (error.status === 429) {
const delaySeconds = Math.min(
60 * Math.pow(2, message.attempts - 1), // 1m, 2m, 4m, 8m, ...
3600 // Max 1 hour
);
console.log(`Rate limited. Retrying in ${delaySeconds}s`);
message.retry({ delaySeconds });
} else {
message.retry(); // Other errors: retry instead of dropping via implicit ack
}
}
```
---
### 3. Always Configure Dead Letter Queue
**Problem:** Without a DLQ, messages are permanently deleted after max retries
**Solution:** Always configure DLQ in production
```jsonc
{
"queues": {
"consumers": [
{
"queue": "my-queue",
"max_retries": 3,
"dead_letter_queue": "my-dlq" // ✅ Always configure
}
]
}
}
```
**DLQ Consumer:**
```typescript
// Monitor and alert on DLQ messages
export default {
async queue(batch: MessageBatch, env: Env): Promise<void> {
for (const message of batch.messages) {
// Log failure
console.error('PERMANENT FAILURE:', message.id, message.body);
// Store for manual review
await env.DB.prepare(
'INSERT INTO failed_messages (id, body, attempts) VALUES (?, ?, ?)'
).bind(message.id, JSON.stringify(message.body), message.attempts).run();
// Send alert
await sendAlert(`Message ${message.id} failed permanently`);
message.ack();
}
},
};
```
---
## Batch Configuration
### Optimizing Batch Size
**High volume, low latency:**
```jsonc
{
"queues": {
"consumers": [{
"queue": "high-volume-queue",
"max_batch_size": 100, // Max messages per batch
"max_batch_timeout": 1 // Process ASAP
}]
}
}
```
**Low volume, batch efficiency:**
```jsonc
{
"queues": {
"consumers": [{
"queue": "low-volume-queue",
"max_batch_size": 50, // Medium batch
"max_batch_timeout": 30 // Wait for batch to fill
}]
}
}
```
**Cost optimization:**
```jsonc
{
"queues": {
"consumers": [{
"queue": "cost-optimized",
"max_batch_size": 100, // Largest batches
"max_batch_timeout": 60 // Max wait time
}]
}
}
```
---
## Concurrency Management
### Let It Auto-Scale (Default)
```jsonc
{
"queues": {
"consumers": [{
"queue": "my-queue"
// No max_concurrency - auto-scales to 250
}]
}
}
```
**✅ Use when:**
- Default case
- Want best performance
- No upstream rate limits
---
### Limit Concurrency
```jsonc
{
"queues": {
"consumers": [{
"queue": "rate-limited-api-queue",
"max_concurrency": 10 // Limit to 10 concurrent consumers
}]
}
}
```
**✅ Use when:**
- Calling rate-limited APIs
- Database connection limits
- Want to control costs
- Protecting upstream services
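As a rough worked example: with `max_concurrency: 10` and the default `max_batch_size` of 10, at most about 100 messages are in flight at once, which bounds the load placed on the upstream service.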
---
## Message Design
### Include Metadata
```typescript
// ✅ Good: Include helpful metadata
await env.MY_QUEUE.send({
// Message type for routing
type: 'order-confirmation',
// Idempotency key
idempotencyKey: crypto.randomUUID(),
// Correlation ID for tracing
correlationId: requestId,
// Timestamps
createdAt: Date.now(),
scheduledFor: Date.now() + 3600000,
// Version for schema evolution
_version: 1,
// Actual payload
payload: {
orderId: 'ORD-123',
userId: 'USER-456',
total: 99.99,
},
});
```
---
### Message Versioning
```typescript
// Handle multiple message versions
export default {
async queue(batch: MessageBatch): Promise<void> {
for (const message of batch.messages) {
switch (message.body._version) {
case 1:
await processV1(message.body);
break;
case 2:
await processV2(message.body);
break;
default:
console.warn(`Unknown version: ${message.body._version}`);
}
message.ack();
}
},
};
```
---
### Large Messages
**Problem:** Messages >128 KB fail
**Solution:** Store in R2, send reference
```typescript
// Producer
const message = { largeData: '...' }; // placeholder for a large payload
const size = new TextEncoder().encode(JSON.stringify(message)).length;
if (size > 128 * 1024) {
// Store in R2
const key = `messages/${crypto.randomUUID()}.json`;
await env.MY_BUCKET.put(key, JSON.stringify(message));
// Send reference
await env.MY_QUEUE.send({
type: 'large-message',
r2Key: key,
size,
timestamp: Date.now(),
});
} else {
await env.MY_QUEUE.send(message);
}
// Consumer
export default {
async queue(batch: MessageBatch, env: Env): Promise<void> {
for (const message of batch.messages) {
if (message.body.type === 'large-message') {
// Fetch from R2
const obj = await env.MY_BUCKET.get(message.body.r2Key);
if (!obj) throw new Error(`Missing R2 object: ${message.body.r2Key}`);
const data = await obj.json();
await processLargeMessage(data);
// Clean up R2
await env.MY_BUCKET.delete(message.body.r2Key);
} else {
await processMessage(message.body);
}
message.ack();
}
},
};
```
---
## Error Handling
### Different Retry Strategies by Error Type
```typescript
try {
await processMessage(message.body);
message.ack();
} catch (error) {
// Rate limit - exponential backoff
if (error.status === 429) {
message.retry({
delaySeconds: Math.min(60 * Math.pow(2, message.attempts - 1), 3600),
});
}
// Server error - shorter backoff
else if (error.status >= 500) {
message.retry({ delaySeconds: 60 });
}
// Client error - retrying won't help
else if (error.status >= 400) {
console.error('Client error, not retrying:', error);
// No ack/retry call: the message is implicitly acked when the handler returns
}
// Unknown error - retry immediately
else {
message.retry();
}
}
```
---
### Circuit Breaker Pattern
```typescript
class CircuitBreaker {
private failures = 0;
private lastFailure = 0;
private state: 'closed' | 'open' | 'half-open' = 'closed';
async call<T>(fn: () => Promise<T>): Promise<T> {
if (this.state === 'open') {
// Check if we should try again
if (Date.now() - this.lastFailure > 60000) { // 1 minute
this.state = 'half-open';
} else {
throw new Error('Circuit breaker is open');
}
}
try {
const result = await fn();
// Success - reset
if (this.state === 'half-open') {
this.state = 'closed';
this.failures = 0;
}
return result;
} catch (error) {
this.failures++;
this.lastFailure = Date.now();
// Open circuit after 3 failures
if (this.failures >= 3) {
this.state = 'open';
}
throw error;
}
}
}
const breaker = new CircuitBreaker();
export default {
async queue(batch: MessageBatch): Promise<void> {
for (const message of batch.messages) {
try {
await breaker.call(() => callUpstreamAPI(message.body));
message.ack();
} catch (error) {
if (error.message === 'Circuit breaker is open') {
// Retry later when circuit might be closed
message.retry({ delaySeconds: 120 });
} else {
message.retry({ delaySeconds: 60 });
}
}
}
},
};
```
---
## Cost Optimization
### Batch Operations
```typescript
// ❌ Bad: 100 separate send() calls - one subrequest per message
for (let i = 0; i < 100; i++) {
await env.MY_QUEUE.send({ id: i });
}
// ✅ Good: one sendBatch() call delivers all 100 messages
await env.MY_QUEUE.sendBatch(
Array.from({ length: 100 }, (_, i) => ({
body: { id: i },
}))
);
```
### Larger Batches
```jsonc
// Process more messages per invocation
{
"queues": {
"consumers": [{
"queue": "my-queue",
"max_batch_size": 100 // ✅ Max batch size = fewer invocations
}]
}
}
```
---
## Monitoring & Observability
### Structured Logging
```typescript
export default {
async queue(batch: MessageBatch): Promise<void> {
const startTime = Date.now();
console.log(JSON.stringify({
event: 'batch_start',
queue: batch.queue,
messageCount: batch.messages.length,
timestamp: Date.now(),
}));
let processed = 0;
let failed = 0;
for (const message of batch.messages) {
try {
await processMessage(message.body);
message.ack();
processed++;
} catch (error) {
console.error(JSON.stringify({
event: 'message_failed',
messageId: message.id,
attempts: message.attempts,
error: error.message,
}));
failed++;
message.retry(); // Retry failed messages individually
}
}
console.log(JSON.stringify({
event: 'batch_complete',
processed,
failed,
duration: Date.now() - startTime, // Batch processing time, not message age
}));
},
};
```
### Metrics Tracking
```typescript
export default {
async queue(batch: MessageBatch, env: Env): Promise<void> {
const startTime = Date.now();
for (const message of batch.messages) {
const msgStartTime = Date.now();
try {
await processMessage(message.body);
message.ack();
// Track processing time
await env.METRICS.put(
`processing_time:${Date.now()}`,
String(Date.now() - msgStartTime)
);
} catch (error) {
await env.METRICS.put(
`errors:${Date.now()}`,
JSON.stringify({
messageId: message.id,
error: error.message,
})
);
message.retry(); // Retry rather than letting the failure be implicitly acked
}
}
// Track batch metrics
await env.METRICS.put(
`batch_size:${Date.now()}`,
String(batch.messages.length)
);
},
};
```
---
## Testing
### Local Development
```bash
# Start local dev server
npm run dev
# In another terminal, send test messages
curl -X POST http://localhost:8787/send \
-H "Content-Type: application/json" \
-d '{"test": "message"}'
# Consumer logs appear in the `wrangler dev` terminal;
# for a deployed Worker, stream logs with:
npx wrangler tail my-consumer
```
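The `/send` endpoint in the curl example assumes a producer Worker along these lines (the route, binding name, and response shape are illustrative):

```typescript
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    if (request.method === 'POST' && url.pathname === '/send') {
      // Forward the JSON body straight onto the queue
      await env.MY_QUEUE.send(await request.json());
      return Response.json({ sent: true });
    }
    return new Response('Not found', { status: 404 });
  },
};
```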
### Unit Tests
```typescript
import { describe, it, expect, vi } from 'vitest';
// Adjust the import path to your Worker entry point (illustrative)
import worker from '../src/index';

describe('Queue Consumer', () => {
  it('processes messages correctly', async () => {
    const batch: MessageBatch = {
      queue: 'test-queue',
      messages: [
        {
          id: '123',
          timestamp: new Date(),
          body: { type: 'test', data: 'hello' },
          attempts: 1,
          ack: vi.fn(),
          retry: vi.fn(),
        },
      ],
      ackAll: vi.fn(),
      retryAll: vi.fn(),
    };
    const env = {
      // Mock bindings
    };
    const ctx = {
      waitUntil: () => {},
      passThroughOnException: () => {},
    };
    await worker.queue(batch, env, ctx);
    // Assert the consumer acknowledged the message
    expect(batch.messages[0].ack).toHaveBeenCalled();
  });
});
```
---
**Last Updated**: 2025-10-21

references/consumer-api.md
# Consumer API Reference
Complete reference for consuming messages from Cloudflare Queues.
---
## Queue Handler
Consumer Workers must export a `queue()` handler:
```typescript
export default {
async queue(
batch: MessageBatch,
env: Env,
ctx: ExecutionContext
): Promise<void> {
// Process messages
},
};
```
### Parameters
- **`batch`** - MessageBatch object containing messages
- **`env`** - Environment bindings (KV, D1, R2, etc.)
- **`ctx`** - Execution context
- `waitUntil(promise)` - Extend Worker lifetime
- `passThroughOnException()` - Continue on error
### Return Value
- Must return `Promise<void>` or `void`
- Throwing an error = all unacknowledged messages are retried
- Returning successfully = implicit ack for every message without an explicit `ack()` or `retry()` (sketch below)
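A minimal sketch of the implicit-ack behavior (`processMessage` is a placeholder):

```typescript
export default {
  async queue(batch: MessageBatch): Promise<void> {
    for (const message of batch.messages) {
      // If this throws, the handler fails and all unacked messages are retried
      await processMessage(message.body);
    }
    // Returning without throwing implicitly acks every message above,
    // even though ack() was never called
  },
};
```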
---
## MessageBatch Interface
```typescript
interface MessageBatch<Body = unknown> {
readonly queue: string;
readonly messages: Message<Body>[];
ackAll(): void;
retryAll(options?: QueueRetryOptions): void;
}
```
### Properties
#### `queue` (string)
Name of the queue this batch came from.
**Use case:** One consumer handling multiple queues
```typescript
export default {
async queue(batch: MessageBatch, env: Env): Promise<void> {
switch (batch.queue) {
case 'high-priority':
await processUrgent(batch.messages);
break;
case 'low-priority':
await processNormal(batch.messages);
break;
default:
console.warn(`Unknown queue: ${batch.queue}`);
}
},
};
```
---
#### `messages` (Message[])
Array of Message objects.
**Important:**
- Ordering is **best effort**, not guaranteed
- Don't rely on message order
- Use timestamps for ordering if needed
```typescript
// Sort by timestamp if order matters
const sortedMessages = batch.messages.sort(
(a, b) => a.timestamp.getTime() - b.timestamp.getTime()
);
```
---
### Methods
#### `ackAll()` - Acknowledge All Messages
Mark all messages as successfully delivered, even if the handler later throws an error.
```typescript
export default {
async queue(batch: MessageBatch): Promise<void> {
// Acknowledge all messages upfront
batch.ackAll();
// Even if this fails, messages won't retry
await processMessages(batch.messages);
},
};
```
**Use cases:**
- Idempotent operations where retries are safe
- Already processed messages (deduplication)
- Want to prevent retries regardless of outcome
---
#### `retryAll(options?)` - Retry All Messages
Mark all messages for retry.
```typescript
interface QueueRetryOptions {
delaySeconds?: number; // 0-43200 (12 hours)
}
batch.retryAll();
batch.retryAll({ delaySeconds: 300 }); // Retry in 5 minutes
```
**Use cases:**
- Rate limiting (retry after backoff)
- Temporary system failure
- Upstream service unavailable
```typescript
export default {
async queue(batch: MessageBatch): Promise<void> {
try {
await callUpstreamAPI(batch.messages);
} catch (error) {
if (error.status === 503) {
// Service unavailable - retry in 5 minutes
batch.retryAll({ delaySeconds: 300 });
} else {
// Other error - retry immediately
batch.retryAll();
}
}
},
};
```
---
## Message Interface
```typescript
interface Message<Body = unknown> {
readonly id: string;
readonly timestamp: Date;
readonly body: Body;
readonly attempts: number;
ack(): void;
retry(options?: QueueRetryOptions): void;
}
```
### Properties
#### `id` (string)
Unique system-generated message ID (UUID format).
```typescript
console.log(message.id); // "550e8400-e29b-41d4-a716-446655440000"
```
---
#### `timestamp` (Date)
When message was sent to queue.
```typescript
console.log(message.timestamp); // Date object
console.log(message.timestamp.toISOString()); // "2025-10-21T12:34:56.789Z"
// Check message age
const ageMs = Date.now() - message.timestamp.getTime();
console.log(`Message age: ${ageMs}ms`);
```
---
#### `body` (Body)
Your message content.
```typescript
interface MyMessage {
type: string;
userId: string;
data: any;
}
const message: Message<MyMessage> = ...;
console.log(message.body.type); // TypeScript knows the type
console.log(message.body.userId);
console.log(message.body.data);
```
---
#### `attempts` (number)
Number of times consumer has attempted to process this message. Starts at 1.
```typescript
console.log(message.attempts); // 1 (first attempt)
// Use for exponential backoff
const delaySeconds = 60 * Math.pow(2, message.attempts - 1);
message.retry({ delaySeconds });
// Attempts: 1 → 60s, 2 → 120s, 3 → 240s, 4 → 480s, ...
```
---
### Methods
#### `ack()` - Acknowledge Message
Mark the message as successfully delivered. It won't be retried even if the handler later fails.
```typescript
export default {
async queue(batch: MessageBatch, env: Env): Promise<void> {
for (const message of batch.messages) {
try {
// Non-idempotent operation
await env.DB.prepare(
'INSERT INTO orders (id, data) VALUES (?, ?)'
).bind(message.body.id, JSON.stringify(message.body)).run();
// CRITICAL: Acknowledge success
message.ack();
} catch (error) {
console.error(`Failed: ${message.id}`, error);
message.retry(); // Explicitly retry - unretried messages are implicitly acked on return
}
}
},
};
```
**Use cases:**
- Database writes
- Payment processing
- Any non-idempotent operation
- Prevents duplicate processing
---
#### `retry(options?)` - Retry Message
Mark message for retry. Can optionally delay retry.
```typescript
interface QueueRetryOptions {
delaySeconds?: number; // 0-43200 (12 hours)
}
message.retry();
message.retry({ delaySeconds: 600 }); // Retry in 10 minutes
```
**Use cases:**
- Rate limiting (429 errors)
- Temporary failures
- Exponential backoff
```typescript
// Exponential backoff
message.retry({
delaySeconds: Math.min(
60 * Math.pow(2, message.attempts - 1),
3600 // Max 1 hour
),
});
// Different delays for different errors
try {
await processMessage(message.body);
message.ack();
} catch (error) {
if (error.status === 429) {
// Rate limited - retry in 5 minutes
message.retry({ delaySeconds: 300 });
} else if (error.status >= 500) {
// Server error - retry in 1 minute
message.retry({ delaySeconds: 60 });
} else {
// Client error - don't retry
console.error('Client error, not retrying');
}
}
```
---
## Acknowledgement Precedence Rules
When mixing ack/retry calls:
1. **First call wins** - whichever of `ack()` or `retry()` is called first on a message takes precedence
2. **Individual > Batch** - Message-level call overrides batch-level call
3. **Subsequent calls ignored** - Second call on same message is silently ignored
```typescript
// ack() takes precedence
message.ack();
message.retry(); // Ignored
// retry() takes precedence
message.retry();
message.ack(); // Ignored
// Individual overrides batch
message.ack();
batch.retryAll(); // Doesn't affect this message
// Batch doesn't affect individually handled messages
for (const msg of batch.messages) {
msg.ack(); // These messages won't be affected by retryAll()
}
batch.retryAll(); // Only affects messages not explicitly ack'd
```
---
## Processing Patterns
### Sequential Processing
```typescript
export default {
async queue(batch: MessageBatch): Promise<void> {
for (const message of batch.messages) {
await processMessage(message.body);
message.ack();
}
},
};
```
**Pros:** Simple, ordered processing
**Cons:** Slow for large batches
---
### Parallel Processing
```typescript
export default {
async queue(batch: MessageBatch): Promise<void> {
await Promise.all(
batch.messages.map(async (message) => {
try {
await processMessage(message.body);
message.ack();
} catch (error) {
console.error(`Failed: ${message.id}`, error);
message.retry(); // Retry just this message; otherwise it's implicitly acked on return
}
})
);
},
};
```
**Pros:** Fast, efficient
**Cons:** No ordering, higher memory usage
---
### Batched Database Writes
```typescript
export default {
async queue(batch: MessageBatch, env: Env): Promise<void> {
// Prepare all statements
const statements = batch.messages.map((message) =>
env.DB.prepare(
'INSERT INTO events (id, data) VALUES (?, ?)'
).bind(message.id, JSON.stringify(message.body))
);
// Execute in batch
const results = await env.DB.batch(statements);
// Acknowledge based on results
for (let i = 0; i < results.length; i++) {
if (results[i].success) {
batch.messages[i].ack();
} else {
console.error(`Failed: ${batch.messages[i].id}`);
batch.messages[i].retry(); // Retry the failed insert individually
}
}
},
};
```
**Pros:** Efficient database usage
**Cons:** More complex error handling
---
### Message Type Routing
```typescript
export default {
async queue(batch: MessageBatch, env: Env): Promise<void> {
for (const message of batch.messages) {
try {
switch (message.body.type) {
case 'email':
await sendEmail(message.body, env);
break;
case 'sms':
await sendSMS(message.body, env);
break;
case 'push':
await sendPush(message.body, env);
break;
default:
console.warn(`Unknown type: ${message.body.type}`);
}
message.ack();
} catch (error) {
console.error(`Failed: ${message.id}`, error);
message.retry({ delaySeconds: 300 });
}
}
},
};
```
---
## ExecutionContext Methods
### `waitUntil(promise)`
Extend Worker lifetime beyond handler return.
```typescript
export default {
async queue(batch: MessageBatch, env: Env, ctx: ExecutionContext): Promise<void> {
for (const message of batch.messages) {
await processMessage(message.body);
message.ack();
// Log asynchronously (doesn't block)
ctx.waitUntil(
env.LOGS.put(`log:${message.id}`, JSON.stringify({
processedAt: Date.now(),
message: message.body,
}))
);
}
},
};
```
---
### `passThroughOnException()`
Continue processing even if handler throws.
```typescript
export default {
async queue(batch: MessageBatch, env: Env, ctx: ExecutionContext): Promise<void> {
ctx.passThroughOnException();
// If this throws, Worker doesn't fail
// But unacknowledged messages will retry
await processMessages(batch.messages);
},
};
```
---
**Last Updated**: 2025-10-21

references/producer-api.md
# Producer API Reference
Complete reference for sending messages to Cloudflare Queues from Workers.
---
## Queue Binding
Access queues via environment bindings configured in `wrangler.jsonc`:
```jsonc
{
"queues": {
"producers": [
{
"binding": "MY_QUEUE",
"queue": "my-queue"
}
]
}
}
```
**TypeScript:**
```typescript
import { Hono } from 'hono';

type Bindings = {
MY_QUEUE: Queue;
};
const app = new Hono<{ Bindings: Bindings }>();
app.post('/send', async (c) => {
await c.env.MY_QUEUE.send({ data: 'hello' });
return c.json({ sent: true });
});
```
---
## `send()` - Send Single Message
### Signature
```typescript
interface Queue<Body = any> {
send(body: Body, options?: QueueSendOptions): Promise<void>;
}
interface QueueSendOptions {
delaySeconds?: number; // 0-43200 (12 hours)
}
```
### Parameters
- **`body`** - Any JSON serializable value
- Must be compatible with structured clone algorithm
- Max size: 128 KB (including ~100 bytes metadata)
- Types: primitives, objects, arrays, Date, Map, Set, etc.
- NOT supported: Functions, Symbols, DOM nodes
- **`options.delaySeconds`** (optional)
- Delay message delivery
- Range: 0-43200 seconds (0-12 hours)
- Default: 0 (immediate delivery)
### Examples
```typescript
// Send simple message
await env.MY_QUEUE.send({ userId: '123', action: 'welcome' });
// Send with delay (10 minutes)
await env.MY_QUEUE.send(
{ userId: '123', action: 'reminder' },
{ delaySeconds: 600 }
);
// Send complex object
await env.MY_QUEUE.send({
type: 'order',
order: {
id: 'ORD-123',
items: [
{ sku: 'ITEM-1', quantity: 2, price: 19.99 },
{ sku: 'ITEM-2', quantity: 1, price: 29.99 },
],
total: 69.97,
customer: {
id: 'CUST-456',
email: 'user@example.com',
},
metadata: {
source: 'web',
campaign: 'summer-sale',
},
},
timestamp: Date.now(),
});
// Send with Date objects
await env.MY_QUEUE.send({
scheduledFor: new Date('2025-12-25T00:00:00Z'),
createdAt: new Date(),
});
```
---
## `sendBatch()` - Send Multiple Messages
### Signature
```typescript
interface Queue<Body = any> {
sendBatch(
messages: Iterable<MessageSendRequest<Body>>,
options?: QueueSendBatchOptions
): Promise<void>;
}
interface MessageSendRequest<Body = any> {
body: Body;
delaySeconds?: number;
}
interface QueueSendBatchOptions {
delaySeconds?: number; // Default delay for all messages
}
```
### Parameters
- **`messages`** - Iterable of message objects
- Each message has `body` and optional `delaySeconds`
- Max 100 messages per batch
- Max 256 KB total batch size
- Can be Array, Set, Generator, etc.
- **`options.delaySeconds`** (optional)
- Default delay applied to all messages
- Overridden by individual message `delaySeconds`
### Examples
```typescript
// Send batch of messages
await env.MY_QUEUE.sendBatch([
{ body: { userId: '1', action: 'email' } },
{ body: { userId: '2', action: 'email' } },
{ body: { userId: '3', action: 'email' } },
]);
// Send batch with individual delays
await env.MY_QUEUE.sendBatch([
{ body: { task: 'immediate' }, delaySeconds: 0 },
{ body: { task: '5-min' }, delaySeconds: 300 },
{ body: { task: '1-hour' }, delaySeconds: 3600 },
]);
// Send batch with default delay (overridable per message)
await env.MY_QUEUE.sendBatch(
[
{ body: { task: 'default-delay' } },
{ body: { task: 'custom-delay' }, delaySeconds: 600 },
],
{ delaySeconds: 300 } // Default 5 minutes
);
// Dynamic batch from database
const users = await getActiveUsers();
await env.MY_QUEUE.sendBatch(
users.map(user => ({
body: {
type: 'send-notification',
userId: user.id,
email: user.email,
message: 'You have a new message',
},
}))
);
// Generator pattern (sendBatch accepts any synchronous Iterable)
function* generateMessages() {
for (let i = 0; i < 100; i++) {
yield {
body: { taskId: i, priority: i % 3 },
};
}
}
await env.MY_QUEUE.sendBatch(generateMessages());
```
---
## Message Size Validation
Messages must be ≤128 KB. Check size before sending:
```typescript
async function sendWithValidation(queue: Queue, message: any) {
const messageStr = JSON.stringify(message);
const size = new TextEncoder().encode(messageStr).length;
const MAX_SIZE = 128 * 1024; // 128 KB
if (size > MAX_SIZE) {
throw new Error(
`Message too large: ${size} bytes (max ${MAX_SIZE})`
);
}
await queue.send(message);
}
```
**Handling large messages:**
```typescript
// Store large data in R2, send reference
const size = new TextEncoder().encode(JSON.stringify(largeMessage)).length;
if (size > 128 * 1024) {
const key = `messages/${crypto.randomUUID()}.json`;
await env.MY_BUCKET.put(key, JSON.stringify(largeMessage));
await env.MY_QUEUE.send({
type: 'large-message',
r2Key: key,
metadata: {
size,
createdAt: Date.now(),
},
});
}
```
---
## Throughput Management
Max throughput: 5,000 messages/second per queue.
**Staying under the limit:**
```typescript
// Batch sends for better throughput
const messages = Array.from({ length: 1000 }, (_, i) => ({
body: { id: i },
}));
// Send in batches of 100 (10 sendBatch calls vs 1000 send calls)
for (let i = 0; i < messages.length; i += 100) {
const batch = messages.slice(i, i + 100);
await env.MY_QUEUE.sendBatch(batch);
}
// Add delay if needed
for (let i = 0; i < messages.length; i += 100) {
const batch = messages.slice(i, i + 100);
await env.MY_QUEUE.sendBatch(batch);
if (i + 100 < messages.length) {
await new Promise(resolve => setTimeout(resolve, 100)); // 100ms
}
}
```
---
## Error Handling
```typescript
try {
await env.MY_QUEUE.send(message);
} catch (error) {
if (error.message.includes('Too Many Requests')) {
// Throughput exceeded (>5000 msg/s)
console.error('Rate limited');
} else if (error.message.includes('too large')) {
// Message >128 KB
console.error('Message too large');
} else {
// Other error
console.error('Queue send failed:', error);
}
}
```
---
## Production Patterns
### Idempotency Keys
```typescript
await env.MY_QUEUE.send({
idempotencyKey: crypto.randomUUID(),
orderId: 'ORD-123',
action: 'process',
});
```
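On the consumer side, the key enables deduplication. A sketch assuming a KV binding named `DEDUP` and a `processOrder` helper (both illustrative):

```typescript
export default {
  async queue(batch: MessageBatch, env: Env): Promise<void> {
    for (const message of batch.messages) {
      const seenKey = `seen:${message.body.idempotencyKey}`;
      if (await env.DEDUP.get(seenKey)) {
        message.ack(); // Duplicate delivery - already processed
        continue;
      }
      await processOrder(message.body);
      await env.DEDUP.put(seenKey, '1', { expirationTtl: 86400 }); // Remember for 24h
      message.ack();
    }
  },
};
```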
### Message Versioning
```typescript
await env.MY_QUEUE.send({
_version: 1,
_schema: 'order-v1',
orderId: 'ORD-123',
// ...
});
```
### Correlation IDs
```typescript
await env.MY_QUEUE.send({
correlationId: requestId,
parentSpanId: traceId,
// ...
});
```
### Priority Queues
```typescript
// Use multiple queues for different priorities
if (priority === 'high') {
await env.HIGH_PRIORITY_QUEUE.send(message);
} else {
await env.LOW_PRIORITY_QUEUE.send(message);
}
```
---
**Last Updated**: 2025-10-21

# Wrangler Commands for Cloudflare Queues
Complete reference for managing Cloudflare Queues via the `wrangler` CLI.
---
## Queue Management
### Create Queue
```bash
npx wrangler queues create <NAME> [OPTIONS]
```
**Options:**
- `--delivery-delay-secs <SECONDS>` - Default delay for all messages (0-43200)
- `--message-retention-period-secs <SECONDS>` - How long messages persist (60-1209600, default: 345600 / 4 days)
**Examples:**
```bash
# Create basic queue
npx wrangler queues create my-queue
# Create with custom retention (7 days)
npx wrangler queues create my-queue --message-retention-period-secs 604800
# Create with default delivery delay (5 minutes)
npx wrangler queues create delayed-queue --delivery-delay-secs 300
```
---
### List Queues
```bash
npx wrangler queues list
```
**Output:**
```
┌──────────────────┬─────────────┬──────────┐
│ Name │ Consumers │ Messages │
├──────────────────┼─────────────┼──────────┤
│ my-queue │ 1 │ 0 │
│ high-priority │ 2 │ 142 │
│ my-dlq │ 1 │ 5 │
└──────────────────┴─────────────┴──────────┘
```
---
### Get Queue Info
```bash
npx wrangler queues info <NAME>
```
**Example:**
```bash
npx wrangler queues info my-queue
# Output:
# Queue: my-queue
# Message Retention: 345600 seconds (4 days)
# Delivery Delay: 0 seconds
# Consumers: 1
# - my-consumer (batch_size: 10, batch_timeout: 5s, max_retries: 3)
# Backlog: 0 messages
```
---
### Update Queue
```bash
npx wrangler queues update <NAME> [OPTIONS]
```
**Options:**
- `--delivery-delay-secs <SECONDS>` - Update default delay
- `--message-retention-period-secs <SECONDS>` - Update retention period
**Examples:**
```bash
# Update retention to 14 days (max)
npx wrangler queues update my-queue --message-retention-period-secs 1209600
# Update delivery delay to 10 minutes
npx wrangler queues update my-queue --delivery-delay-secs 600
```
---
### Delete Queue
```bash
npx wrangler queues delete <NAME>
```
**⚠️ WARNING:**
- Deletes ALL messages in the queue
- Cannot be undone
- Use with extreme caution in production
**Example:**
```bash
npx wrangler queues delete old-queue
```
---
## Consumer Management
### Add Consumer
```bash
npx wrangler queues consumer add <QUEUE-NAME> <WORKER-SCRIPT-NAME> [OPTIONS]
```
**Options:**
- `--batch-size <NUMBER>` - Max messages per batch (1-100, default: 10)
- `--batch-timeout <SECONDS>` - Max wait time (0-60, default: 5)
- `--message-retries <NUMBER>` - Max retry attempts (0-100, default: 3)
- `--max-concurrency <NUMBER>` - Limit concurrent consumers (default: auto-scale to 250)
- `--retry-delay-secs <SECONDS>` - Default retry delay
- `--dead-letter-queue <QUEUE-NAME>` - DLQ for failed messages
**Examples:**
```bash
# Basic consumer
npx wrangler queues consumer add my-queue my-consumer
# Optimized for high throughput
npx wrangler queues consumer add my-queue my-consumer \
--batch-size 100 \
--batch-timeout 1
# With DLQ and retry settings
npx wrangler queues consumer add my-queue my-consumer \
--batch-size 50 \
--message-retries 5 \
--retry-delay-secs 300 \
--dead-letter-queue my-dlq
# Limit concurrency for rate-limited APIs
npx wrangler queues consumer add api-queue api-consumer \
--max-concurrency 10
```
---
### Remove Consumer
```bash
npx wrangler queues consumer remove <QUEUE-NAME> <WORKER-SCRIPT-NAME>
```
**Example:**
```bash
npx wrangler queues consumer remove my-queue my-consumer
```
---
## Queue Operations
### Purge Queue
```bash
npx wrangler queues purge <QUEUE-NAME>
```
**⚠️ WARNING:**
- Permanently deletes ALL messages
- Cannot be undone
- Use for clearing test data or stuck queues
**Example:**
```bash
npx wrangler queues purge test-queue
```
---
### Pause Delivery
```bash
npx wrangler queues pause-delivery <QUEUE-NAME>
```
**Use cases:**
- Maintenance on consumer Workers
- Debugging consumer issues
- Temporarily stop processing without deleting messages
**Example:**
```bash
npx wrangler queues pause-delivery my-queue
```
---
### Resume Delivery
```bash
npx wrangler queues resume-delivery <QUEUE-NAME>
```
**Example:**
```bash
npx wrangler queues resume-delivery my-queue
```
---
## Event Subscriptions
Event subscriptions automatically send messages to a queue when events occur in other Cloudflare services.
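On the consumer side these arrive as ordinary queue messages. A minimal sketch for R2 object events (the payload field names are assumptions for illustration; log `message.body` from your own subscription to confirm the real schema):

```typescript
export default {
  async queue(batch: MessageBatch, env: Env): Promise<void> {
    for (const message of batch.messages) {
      // Assumed shape - verify against actual subscription payloads
      const event = message.body as { type?: string; object?: { key?: string } };
      if (event.type === 'object-create' && event.object?.key) {
        console.log(`Object created: ${event.object.key}`);
      }
      message.ack();
    }
  },
};
```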
### Create Subscription
```bash
npx wrangler queues subscription create <QUEUE-NAME> [OPTIONS]
```
**Options:**
- `--source <TYPE>` - Event source (kv, r2, superSlurper, vectorize, workersAi.model, workersBuilds.worker, workflows.workflow)
- `--events <EVENTS>` - Comma-separated list of event types
- `--name <NAME>` - Subscription name (auto-generated if omitted)
- `--enabled` - Whether subscription is active (default: true)
- Source-specific flags - identify the resource, e.g. `--bucket-name` (r2), `--namespace-id` (kv), `--worker-name` (workersBuilds.worker), as shown below
**Examples:**
```bash
# Subscribe to R2 bucket events
npx wrangler queues subscription create my-queue \
--source r2 \
--events object-create,object-delete \
--bucket-name my-bucket
# Subscribe to KV namespace events
npx wrangler queues subscription create my-queue \
--source kv \
--events key-write,key-delete \
--namespace-id abc123
# Subscribe to Worker build events
npx wrangler queues subscription create build-queue \
--source workersBuilds.worker \
--events build-complete,build-failed \
--worker-name my-worker
```
---
### List Subscriptions
```bash
npx wrangler queues subscription list <QUEUE-NAME> [OPTIONS]
```
**Options:**
- `--page <NUMBER>` - Page number
- `--per-page <NUMBER>` - Results per page
- `--json` - Output as JSON
**Example:**
```bash
npx wrangler queues subscription list my-queue --json
```
---
### Get Subscription
```bash
npx wrangler queues subscription get <QUEUE-NAME> --id <SUBSCRIPTION-ID> [--json]
```
**Example:**
```bash
npx wrangler queues subscription get my-queue --id sub_123 --json
```
---
### Delete Subscription
```bash
npx wrangler queues subscription delete <QUEUE-NAME> --id <SUBSCRIPTION-ID>
```
**Example:**
```bash
npx wrangler queues subscription delete my-queue --id sub_123
```
---
## Global Flags
These flags work on all commands:
- `--help` - Show help
- `--config <PATH>` - Path to wrangler.toml or wrangler.jsonc
- `--cwd <PATH>` - Run as if started in specified directory
---
## Complete Workflow Example
```bash
# 1. Create queues
npx wrangler queues create my-queue
npx wrangler queues create my-dlq
# 2. Create and deploy producer Worker
cd my-producer
npm create cloudflare@latest -- --type hello-world --ts
# Add producer binding to wrangler.jsonc
npm run deploy
# 3. Create and deploy consumer Worker
cd ../my-consumer
npm create cloudflare@latest -- --type hello-world --ts
# Add consumer handler
npm run deploy
# 4. Add consumer to queue
npx wrangler queues consumer add my-queue my-consumer \
--batch-size 50 \
--message-retries 5 \
--dead-letter-queue my-dlq
# 5. Monitor queue
npx wrangler queues info my-queue
# 6. Watch consumer logs
npx wrangler tail my-consumer
# 7. If needed, pause delivery
npx wrangler queues pause-delivery my-queue
# 8. Resume delivery
npx wrangler queues resume-delivery my-queue
```
---
## Troubleshooting
### Check queue backlog
```bash
npx wrangler queues info my-queue | grep "Backlog"
```
### Clear stuck queue
```bash
npx wrangler queues purge my-queue
```
### Verify consumer is attached
```bash
npx wrangler queues info my-queue | grep "Consumers"
```
### Check for delivery paused
```bash
npx wrangler queues info my-queue
# Look for "Delivery: paused"
```
---
**Last Updated**: 2025-10-21
**Wrangler Version**: 4.43.0+