Initial commit
15  .claude-plugin/plugin.json  Normal file
@@ -0,0 +1,15 @@
{
  "name": "api-event-emitter",
  "description": "Implement event-driven APIs with message queues and event streaming",
  "version": "1.0.0",
  "author": {
    "name": "Jeremy Longshore",
    "email": "[email protected]"
  },
  "skills": [
    "./skills"
  ],
  "commands": [
    "./commands"
  ]
}

3  README.md  Normal file
@@ -0,0 +1,3 @@
# api-event-emitter

Implement event-driven APIs with message queues and event streaming

736  commands/implement-events.md  Normal file
@@ -0,0 +1,736 @@
---
description: Implement event-driven API architecture
shortcut: events
---

# Implement Event-Driven API

Build production-grade event-driven APIs with message queues, event streaming, and async communication patterns. This command generates event publishers, subscribers, message broker integrations, and event-driven architectures for microservices and distributed systems.

## Design Decisions

**Why event-driven architecture:**
- **Decoupling**: Services communicate without direct dependencies
- **Scalability**: Process events asynchronously, handle traffic spikes
- **Resilience**: Failed events can be retried, dead-letter queues prevent data loss
- **Auditability**: Event logs provide complete system activity history
- **Flexibility**: Add new consumers without modifying publishers

**Alternatives considered:**
- **Synchronous REST APIs**: Simpler but couples services, no retry capability
- **Direct database sharing**: Fastest but creates tight coupling and data ownership issues
- **GraphQL subscriptions**: Good for client-server, less suited for service-to-service
- **Webhooks**: Simple but lacks delivery guarantees and ordering

**This approach balances**: Loose coupling, reliability, scalability, and operational complexity.

## When to Use

Use event-driven architecture when:
- Building microservices that need to communicate asynchronously
- Handling high-volume, bursty workloads (user signups, order processing)
- Implementing CQRS (Command Query Responsibility Segregation)
- Creating audit logs or event sourcing systems
- Integrating multiple services that need eventual consistency
- Building real-time notification systems

Don't use when:
- Building simple CRUD applications with low traffic
- You need immediate, synchronous responses (use REST/GraphQL instead)
- Team lacks experience with message queues and async patterns
- Debugging and monitoring infrastructure isn't in place
- Strong consistency is required (use synchronous transactions)

## Prerequisites

- Message broker installed (RabbitMQ, Apache Kafka, AWS SQS/SNS, or Redis)
- Node.js 16+ or Python 3.8+ for examples
- Understanding of async/await patterns and promises
- Basic knowledge of pub/sub and message queue concepts
- (Optional) Event schema registry (Confluent Schema Registry, AWS Glue)
- (Optional) Monitoring tools (Prometheus, Grafana, CloudWatch)

## Process

1. **Choose Message Broker**
   - RabbitMQ: Flexible routing, mature, good for task queues
   - Apache Kafka: High throughput, event streaming, log retention
   - AWS SQS/SNS: Managed service, serverless, simpler operations
   - Redis Streams: Lightweight, in-memory, good for caching + events

2. **Define Event Schemas**
   - Use JSON Schema, Avro, or Protobuf for validation
   - Include metadata (event ID, timestamp, version, source)
   - Design for backward compatibility (add optional fields)

3. **Implement Publishers**
   - Publish events after successful operations (user created, order placed)
   - Use transactional outbox pattern for consistency
   - Add retry logic and dead-letter queues

4. **Build Subscribers**
   - Subscribe to relevant events (send email on user created)
   - Implement idempotent handlers (deduplicate using event ID)
   - Handle failures gracefully with retries and DLQs

5. **Add Event Patterns**
   - Event notification: Fire-and-forget notifications
   - Event-carried state transfer: Include full state in events
   - Event sourcing: Store events as source of truth
   - CQRS: Separate read/write models with events

## Output Format

### RabbitMQ Publisher (Node.js)

```javascript
// events/EventPublisher.js
const amqp = require('amqplib');
const { v4: uuidv4 } = require('uuid');

class EventPublisher {
  constructor(connectionUrl) {
    this.connectionUrl = connectionUrl;
    this.connection = null;
    this.channel = null;
  }

  async connect() {
    this.connection = await amqp.connect(this.connectionUrl);
    this.channel = await this.connection.createChannel();

    // Declare a durable topic exchange for pub/sub routing
    await this.channel.assertExchange('events', 'topic', { durable: true });

    console.log('Event publisher connected to RabbitMQ');
  }

  async publish(eventName, payload) {
    if (!this.channel) {
      throw new Error('Publisher not connected');
    }

    const event = {
      id: uuidv4(),
      name: eventName,
      timestamp: new Date().toISOString(),
      version: '1.0.0',
      data: payload
    };

    const routingKey = eventName;
    const message = Buffer.from(JSON.stringify(event));

    // publish() returns false when the channel's write buffer is full
    const published = this.channel.publish(
      'events',
      routingKey,
      message,
      {
        persistent: true, // Survive broker restart
        contentType: 'application/json',
        messageId: event.id,
        timestamp: Date.now()
      }
    );

    if (!published) {
      throw new Error('Failed to publish event: channel write buffer is full');
    }

    console.log(`Published event: ${eventName}`, event.id);
    return event.id;
  }

  async close() {
    await this.channel?.close();
    await this.connection?.close();
  }
}

module.exports = EventPublisher;

// Usage in API route
const publisher = new EventPublisher('amqp://localhost');
await publisher.connect();

router.post('/users', async (req, res, next) => {
  try {
    const user = await createUser(req.body);

    // Publish event after successful creation
    await publisher.publish('user.created', {
      userId: user.id,
      email: user.email,
      name: user.name
    });

    res.status(201).json(user);
  } catch (error) {
    next(error);
  }
});
```

### RabbitMQ Subscriber (Node.js)

```javascript
// events/EventSubscriber.js
const amqp = require('amqplib');

class EventSubscriber {
  constructor(connectionUrl, queueName) {
    this.connectionUrl = connectionUrl;
    this.queueName = queueName;
    this.handlers = new Map();
  }

  async connect() {
    this.connection = await amqp.connect(this.connectionUrl);
    this.channel = await this.connection.createChannel();

    // Declare exchange
    await this.channel.assertExchange('events', 'topic', { durable: true });

    // Declare queue with dead-letter exchange
    await this.channel.assertQueue(this.queueName, {
      durable: true,
      deadLetterExchange: 'events.dlx',
      deadLetterRoutingKey: 'dead-letter'
    });

    console.log(`Subscriber ${this.queueName} connected`);
  }

  async on(eventName, handler) {
    this.handlers.set(eventName, handler);

    // Bind queue to routing key
    await this.channel.bindQueue(this.queueName, 'events', eventName);
    console.log(`Subscribed to event: ${eventName}`);
  }

  async start() {
    await this.channel.prefetch(1); // Process one message at a time

    await this.channel.consume(this.queueName, async (message) => {
      if (!message) return;

      const content = message.content.toString();
      const event = JSON.parse(content);

      console.log(`Received event: ${event.name}`, event.id);

      const handler = this.handlers.get(event.name);
      if (!handler) {
        console.warn(`No handler for event: ${event.name}`);
        this.channel.ack(message); // Acknowledge to prevent reprocessing
        return;
      }

      try {
        // Idempotency: Check if already processed
        const alreadyProcessed = await checkEventProcessed(event.id);
        if (alreadyProcessed) {
          console.log(`Event already processed: ${event.id}`);
          this.channel.ack(message);
          return;
        }

        // Process event
        await handler(event.data, event);

        // Mark as processed
        await markEventProcessed(event.id);

        // Acknowledge successful processing
        this.channel.ack(message);
      } catch (error) {
        console.error(`Error processing event ${event.name}:`, error);

        // Reject without requeue; the message is routed to the DLQ
        this.channel.nack(message, false, false);
      }
    });

    console.log(`Subscriber ${this.queueName} started`);
  }
}

// Usage
const subscriber = new EventSubscriber('amqp://localhost', 'email-service');
await subscriber.connect();

await subscriber.on('user.created', async (data, event) => {
  await sendWelcomeEmail(data.email, data.name);
  console.log(`Welcome email sent to ${data.email}`);
});

await subscriber.on('order.placed', async (data, event) => {
  await sendOrderConfirmation(data.orderId, data.email);
  console.log(`Order confirmation sent for ${data.orderId}`);
});

await subscriber.start();
```
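The subscriber above calls `checkEventProcessed` and `markEventProcessed` without defining them. A minimal sketch of one way to back them, assuming an `ioredis`-style Redis client is available; the key prefix and 7-day TTL are arbitrary choices:

```javascript
// events/idempotency.js — hypothetical helpers, not part of amqplib
const Redis = require('ioredis');
const redis = new Redis(process.env.REDIS_URL || 'redis://localhost:6379');

async function checkEventProcessed(eventId) {
  return (await redis.exists(`processed:${eventId}`)) === 1;
}

async function markEventProcessed(eventId) {
  // Expire the marker after 7 days to bound memory usage
  await redis.set(`processed:${eventId}`, '1', 'EX', 7 * 24 * 60 * 60);
}

module.exports = { checkEventProcessed, markEventProcessed };
```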
### Kafka Producer (Python)

```python
# events/kafka_producer.py
from kafka import KafkaProducer
from kafka.errors import KafkaError
import json
import uuid
from datetime import datetime
from typing import Dict, Any

class EventProducer:
    def __init__(self, bootstrap_servers: str):
        self.producer = KafkaProducer(
            bootstrap_servers=bootstrap_servers,
            value_serializer=lambda v: json.dumps(v).encode('utf-8'),
            acks='all',  # Wait for all replicas
            retries=3,
            max_in_flight_requests_per_connection=1  # Preserve order
        )

    def publish(self, topic: str, event_name: str, payload: Dict[str, Any]) -> str:
        event_id = str(uuid.uuid4())

        event = {
            'id': event_id,
            'name': event_name,
            'timestamp': datetime.utcnow().isoformat(),
            'version': '1.0.0',
            'data': payload
        }

        future = self.producer.send(
            topic,
            value=event,
            key=event_id.encode('utf-8')  # Partition key (use an entity ID such as user_id to preserve per-entity ordering)
        )

        try:
            # Block for synchronous send (optional)
            record_metadata = future.get(timeout=10)
            print(f"Published {event_name} to {record_metadata.topic} "
                  f"partition {record_metadata.partition} offset {record_metadata.offset}")
            return event_id
        except KafkaError as e:
            print(f"Failed to publish event: {e}")
            raise

    def close(self):
        self.producer.flush()
        self.producer.close()

# Usage
producer = EventProducer('localhost:9092')

def create_user_endpoint(request):
    user = create_user(request.data)

    # Publish event
    producer.publish(
        topic='user-events',
        event_name='user.created',
        payload={
            'user_id': user.id,
            'email': user.email,
            'name': user.name
        }
    )

    return {'user': user.to_dict()}, 201
```
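The Kafka example covers only the producer side. For completeness, a minimal consumer sketch using the `kafkajs` Node.js client (an assumed dependency; the producer above uses `kafka-python`, so this is a cross-language counterpart, not the same library), reusing the idempotency helpers sketched earlier:

```javascript
// events/kafka_consumer.js — sketch using kafkajs (assumed dependency)
const { Kafka } = require('kafkajs');

const kafka = new Kafka({ clientId: 'notification-service', brokers: ['localhost:9092'] });
const consumer = kafka.consumer({ groupId: 'notification-service' });

async function start() {
  await consumer.connect();
  await consumer.subscribe({ topic: 'user-events', fromBeginning: false });

  await consumer.run({
    eachMessage: async ({ message }) => {
      const event = JSON.parse(message.value.toString());
      if (await checkEventProcessed(event.id)) return; // Idempotency, as above

      // Dispatch by event name
      if (event.name === 'user.created') {
        await sendWelcomeEmail(event.data.email, event.data.name);
      }

      await markEventProcessed(event.id);
    }
  });
}
```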
## Example Usage

### Example 1: Event Sourcing Pattern

```javascript
// Event sourcing: Store events as source of truth
class OrderEventStore {
  constructor(publisher) {
    this.publisher = publisher;
  }

  async placeOrder(orderId, items, customerId) {
    // Publish event (this is the source of truth)
    await this.publisher.publish('order.placed', {
      orderId,
      items,
      customerId,
      status: 'pending',
      timestamp: new Date().toISOString()
    });
  }

  async confirmPayment(orderId, paymentId) {
    await this.publisher.publish('order.payment_confirmed', {
      orderId,
      paymentId,
      timestamp: new Date().toISOString()
    });
  }

  async shipOrder(orderId, trackingNumber) {
    await this.publisher.publish('order.shipped', {
      orderId,
      trackingNumber,
      timestamp: new Date().toISOString()
    });
  }
}

// Rebuild order state from events
async function getOrderState(orderId) {
  const events = await loadEvents(`order.${orderId}.*`);

  let order = { id: orderId };
  for (const event of events) {
    switch (event.name) {
      case 'order.placed':
        order = { ...order, ...event.data, status: 'pending' };
        break;
      case 'order.payment_confirmed':
        order.status = 'confirmed';
        order.paymentId = event.data.paymentId;
        break;
      case 'order.shipped':
        order.status = 'shipped';
        order.trackingNumber = event.data.trackingNumber;
        break;
    }
  }

  return order;
}
```
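`loadEvents` is left undefined above; event sourcing requires a durable event store. A minimal sketch, assuming events are also appended to a relational `events` table keyed by a per-order stream identifier (a hypothetical schema, read with a pg-style `db.query` helper):

```javascript
// Hypothetical event store reader; stream_key might look like 'order.123.placed'
async function loadEvents(pattern) {
  const { rows } = await db.query(
    'SELECT payload FROM events WHERE stream_key LIKE $1 ORDER BY created_at ASC',
    [pattern.replace('*', '%')] // Translate the wildcard into SQL LIKE syntax
  );
  return rows.map((row) => JSON.parse(row.payload));
}
```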
### Example 2: CQRS (Command Query Responsibility Segregation)

```javascript
// Write model: Handle commands, emit events
class OrderCommandHandler {
  async handlePlaceOrder(command) {
    // Validate
    if (!command.items.length) {
      throw new Error('Order must have items');
    }

    // Create order (write)
    const order = await db.orders.create({
      customerId: command.customerId,
      items: command.items,
      status: 'pending'
    });

    // Emit event
    await publisher.publish('order.placed', {
      orderId: order.id,
      customerId: order.customerId,
      totalAmount: calculateTotal(order.items)
    });

    return order.id;
  }
}

// Read model: Listen to events, update read-optimized views
subscriber.on('order.placed', async (data) => {
  // Update denormalized view for fast queries
  await redis.set(`order:${data.orderId}`, JSON.stringify({
    orderId: data.orderId,
    customerId: data.customerId,
    totalAmount: data.totalAmount,
    status: 'pending',
    placedAt: new Date().toISOString()
  }));

  // Update customer order count
  await redis.hincrby(`customer:${data.customerId}`, 'orderCount', 1);
});
```

### Example 3: Saga Pattern (Distributed Transactions)

```javascript
// Orchestrate multi-service transaction with compensating actions
class OrderSaga {
  async execute(orderData) {
    const sagaId = uuidv4();

    try {
      // Step 1: Reserve inventory
      await publisher.publish('inventory.reserve', {
        sagaId,
        items: orderData.items
      });
      await waitForEvent('inventory.reserved', sagaId);

      // Step 2: Process payment
      await publisher.publish('payment.process', {
        sagaId,
        amount: orderData.amount,
        customerId: orderData.customerId
      });
      await waitForEvent('payment.processed', sagaId);

      // Step 3: Create shipment
      await publisher.publish('shipment.create', {
        sagaId,
        orderId: orderData.orderId,
        address: orderData.shippingAddress
      });
      await waitForEvent('shipment.created', sagaId);

      // Saga completed successfully
      await publisher.publish('order.saga_completed', { sagaId });

    } catch (error) {
      // Compensate: Rollback in reverse order
      console.error('Saga failed, executing compensations', error);

      await publisher.publish('shipment.cancel', { sagaId });
      await publisher.publish('payment.refund', { sagaId });
      await publisher.publish('inventory.release', { sagaId });
      await publisher.publish('order.saga_failed', { sagaId, reason: error.message });
    }
  }
}
```
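`waitForEvent` is assumed above. One minimal shape, built on the `EventSubscriber` from earlier (a sketch under the assumption that reply events carry the `sagaId`; a production version would also persist saga state):

```javascript
// Hypothetical saga-reply helper; sagaSubscriber is a connected EventSubscriber
const waiters = new Map(); // `${eventName}:${sagaId}` -> resolve callback

async function waitForEvent(eventName, sagaId, timeoutMs = 30000) {
  // Register a dispatcher that wakes whichever saga step is waiting
  await sagaSubscriber.on(eventName, (data) => {
    const wake = waiters.get(`${eventName}:${data.sagaId}`);
    if (wake) wake(data);
  });

  return new Promise((resolve, reject) => {
    const key = `${eventName}:${sagaId}`;
    const timer = setTimeout(() => {
      waiters.delete(key);
      reject(new Error(`Timed out waiting for ${eventName} (saga ${sagaId})`));
    }, timeoutMs);

    waiters.set(key, (data) => {
      clearTimeout(timer);
      waiters.delete(key);
      resolve(data);
    });
  });
}
```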
## Error Handling

**Common issues and solutions:**

**Problem**: Events lost during broker outage
- **Cause**: Publisher doesn't wait for acknowledgment
- **Solution**: Use persistent messages, wait for broker ACK, implement transactional outbox

**Problem**: Duplicate event processing
- **Cause**: Subscriber crashes before ACK, message redelivered
- **Solution**: Make handlers idempotent, store processed event IDs in DB

**Problem**: Events processed out of order
- **Cause**: Multiple consumers, network delays
- **Solution**: Use Kafka partitions with same key, single consumer per partition

**Problem**: Subscriber can't keep up with events
- **Cause**: Handler too slow, throughput mismatch
- **Solution**: Scale subscribers horizontally, optimize handlers, batch processing

**Problem**: Dead-letter queue fills up
- **Cause**: Persistent failures not monitored
- **Solution**: Set up DLQ monitoring alerts, implement DLQ consumer for manual review

**Transactional outbox pattern** (prevent lost events):
```javascript
// Atomic database write + event publish
async function createUserWithEvent(userData) {
  const transaction = await db.transaction();

  try {
    // 1. Create user in database
    const user = await db.users.create(userData, { transaction });

    // 2. Store event in outbox table (same transaction)
    await db.outbox.create({
      eventName: 'user.created',
      payload: { userId: user.id, email: user.email },
      published: false
    }, { transaction });

    await transaction.commit();

    // 3. Background job publishes from outbox
    // If app crashes, outbox worker retries unpublished events

  } catch (error) {
    await transaction.rollback();
    throw error;
  }
}
```
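The background job referenced in step 3 is not shown. A minimal relay sketch, assuming the same Sequelize-style `db` helpers and the `EventPublisher` from earlier:

```javascript
// Outbox relay worker: periodically drain unpublished rows
setInterval(async () => {
  const pending = await db.outbox.findAll({
    where: { published: false },
    order: [['createdAt', 'ASC']],
    limit: 100
  });

  for (const row of pending) {
    await publisher.publish(row.eventName, row.payload);
    await row.update({ published: true }); // At-least-once: a crash here yields a duplicate, which idempotent handlers absorb
  }
}, 5000); // Poll every 5 seconds
```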
## Configuration

### Event Schema (JSON Schema)

```javascript
const userCreatedSchema = {
  $schema: "http://json-schema.org/draft-07/schema#",
  type: "object",
  required: ["id", "name", "timestamp", "version", "data"],
  properties: {
    id: { type: "string", format: "uuid" },
    name: { type: "string", const: "user.created" },
    timestamp: { type: "string", format: "date-time" },
    version: { type: "string", pattern: "^\\d+\\.\\d+\\.\\d+$" },
    data: {
      type: "object",
      required: ["userId", "email"],
      properties: {
        userId: { type: "integer" },
        email: { type: "string", format: "email" },
        name: { type: "string", minLength: 1 }
      }
    }
  }
};
```

### RabbitMQ Configuration

```javascript
const rabbitConfig = {
  url: process.env.RABBITMQ_URL || 'amqp://localhost',
  exchange: {
    name: 'events',
    type: 'topic', // Supports wildcard routing (user.*, order.created)
    durable: true // Survive broker restart
  },
  queue: {
    durable: true,
    deadLetterExchange: 'events.dlx',
    messageTtl: 86400000, // 24 hours
    maxLength: 100000, // Max messages in queue
    maxPriority: 10 // Priority queue support
  },
  publisher: {
    confirm: true, // Wait for broker acknowledgment
    persistent: true // Messages survive broker restart
  },
  subscriber: {
    prefetch: 1, // Messages to prefetch
    noAck: false, // Manual acknowledgment
    exclusive: false // Allow multiple consumers
  }
};
```
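Note that `confirm: true` only takes effect if the publisher actually uses publisher confirms; the `EventPublisher` above creates a plain channel. With amqplib, confirms require a confirm channel, sketched here against the connection from earlier:

```javascript
// Publisher confirms with amqplib: the broker acknowledges buffered publishes
const channel = await connection.createConfirmChannel();

channel.publish('events', 'user.created', message, { persistent: true });
await channel.waitForConfirms(); // Rejects if the broker nacks any message
```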
## Best Practices

DO:
- Design events as past-tense facts ("user.created" not "create.user")
- Include all necessary data in events (avoid requiring additional lookups)
- Version your events for backward compatibility
- Make event handlers idempotent (use event ID for deduplication)
- Monitor event lag (time between publish and process)
- Use dead-letter queues for failed events
- Implement circuit breakers for external dependencies in handlers
- Log event processing with correlation IDs

DON'T:
- Publish events for every database change (too granular)
- Include sensitive data without encryption (PII, secrets)
- Make event handlers depend on synchronous responses
- Publish events before database transaction commits
- Ignore event ordering when it matters (use Kafka partitions)
- Let DLQs fill up without monitoring
- Skip schema validation (causes runtime errors downstream)

TIPS:
- Use event naming conventions: `<entity>.<action>` (user.created, order.shipped)
- Include event metadata: ID, timestamp, version, source service
- Start with simple pub/sub, add event sourcing/CQRS only if needed
- Test event handlers in isolation with mock events
- Use feature flags to enable/disable event subscribers gradually
- Implement event replay capability for debugging and recovery

## Related Commands

- `/build-api-gateway` - Route events through API gateway
- `/generate-rest-api` - Generate REST API that publishes events
- `/create-monitoring` - Monitor event processing metrics
- `/implement-throttling` - Rate limit event publishing
- `/scan-api-security` - Security scan event handlers

## Performance Considerations

- **Throughput**: RabbitMQ ~10k msgs/sec, Kafka ~100k+ msgs/sec per partition
- **Latency**: RabbitMQ <10ms, Kafka 5-100ms depending on config
- **Ordering**: Kafka guarantees order per partition, RabbitMQ per queue
- **Persistence**: Disk writes slow down throughput, use SSDs for brokers

**Optimization strategies:**
```javascript
// Batch events for higher throughput
const eventBatch = [];
setInterval(async () => {
  if (eventBatch.length > 0) {
    await publisher.publishBatch(eventBatch);
    eventBatch.length = 0;
  }
}, 100); // Flush every 100ms

// Parallel event processing (if order doesn't matter)
subscriber.channel.prefetch(10); // Process 10 messages concurrently
```
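`publishBatch` is not part of the `EventPublisher` above. One simple shape (a sketch: amqplib has no dedicated batch API, so this loops over the buffered events; the real throughput win would come from batching confirms on a confirm channel with a single `waitForConfirms()`):

```javascript
// Hypothetical batch helper for the EventPublisher class
EventPublisher.prototype.publishBatch = async function (events) {
  const ids = [];
  for (const { name, payload } of events) {
    ids.push(await this.publish(name, payload)); // Reuses the single-publish path
  }
  return ids;
};
```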
## Security Considerations

- **Authentication**: Use TLS/SSL for broker connections, client certificates
- **Authorization**: Configure topic/queue permissions per service
- **Encryption**: Encrypt sensitive event data before publishing
- **Audit**: Log all event publishes and subscriptions
- **Validation**: Validate event schemas to prevent injection attacks

**Security checklist:**
```javascript
// Use TLS for RabbitMQ
const fs = require('fs');

const connection = await amqp.connect('amqps://user:pass@broker:5671', {
  ca: [fs.readFileSync('ca-cert.pem')],
  cert: fs.readFileSync('client-cert.pem'),
  key: fs.readFileSync('client-key.pem')
});

// Validate event schemas
const Ajv = require('ajv');
const ajv = new Ajv();
const validate = ajv.compile(eventSchema);

function publishEvent(event) {
  if (!validate(event)) {
    throw new Error(`Invalid event: ${ajv.errorsText(validate.errors)}`);
  }
  // Proceed with publish
}
```

## Troubleshooting

**Events not being consumed:**
1. Check queue binding to exchange with correct routing key
2. Verify subscriber is connected and consuming from queue
3. Check firewall/network connectivity to broker
4. Review broker logs for errors

**High event processing lag:**
1. Scale subscribers horizontally (add more consumers)
2. Optimize event handler performance (reduce I/O, batch operations)
3. Increase prefetch count for parallel processing
4. Check for slow dependencies (databases, external APIs)

**Events being processed multiple times:**
1. Verify idempotency check is working (check event ID storage)
2. Ensure ACK is sent after successful processing
3. Check for subscriber crashes before ACK
4. Review handler timeouts (increase if operations are slow)

**Dead-letter queue filling up:**
1. Review DLQ messages to identify common failure patterns
2. Fix handler bugs causing persistent failures
3. Implement DLQ replay mechanism after fixes (see the sketch below)
4. Set up alerts for DLQ threshold
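A minimal replay sketch for step 3, assuming an amqplib channel and an `events.dead-letter` queue bound to the `events.dlx` exchange (the queue name is an assumption; the earlier snippets only name the exchange):

```javascript
// Drain the DLQ and republish each message to the main exchange
async function replayDeadLetters(channel) {
  let msg;
  while ((msg = await channel.get('events.dead-letter', { noAck: false }))) {
    const event = JSON.parse(msg.content.toString());
    channel.publish('events', event.name, msg.content, { persistent: true });
    channel.ack(msg); // Remove from the DLQ once republished
  }
}
```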
## Version History

- **1.0.0** (2025-10-11): Initial release with RabbitMQ and Kafka examples
  - Event publisher and subscriber implementations
  - Event sourcing, CQRS, and Saga patterns
  - Idempotency and dead-letter queue handling
  - Transactional outbox pattern
  - Security and performance best practices

97  plugin.lock.json  Normal file
@@ -0,0 +1,97 @@
{
  "$schema": "internal://schemas/plugin.lock.v1.json",
  "pluginId": "gh:jeremylongshore/claude-code-plugins-plus:plugins/api-development/api-event-emitter",
  "normalized": {
    "repo": null,
    "ref": "refs/tags/v20251128.0",
    "commit": "926caa0f7b4e2703ee69e2bda81b25db1033fde6",
    "treeHash": "05915ba170db92d4cad54ea46256c3e831ed8d79b46758facbd8ca814082e96d",
    "generatedAt": "2025-11-28T10:18:06.225006Z",
    "toolVersion": "publish_plugins.py@0.2.0"
  },
  "origin": {
    "remote": "git@github.com:zhongweili/42plugin-data.git",
    "branch": "master",
    "commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
    "repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
  },
  "manifest": {
    "name": "api-event-emitter",
    "description": "Implement event-driven APIs with message queues and event streaming",
    "version": "1.0.0"
  },
  "content": {
    "files": [
      {
        "path": "README.md",
        "sha256": "3f69ea180e6696f9c710c1b34ae44f108c1f5a582ac7aa1cc5e65c6868749aec"
      },
      {
        "path": ".claude-plugin/plugin.json",
        "sha256": "9bc0345f917ba29ba8e175a9e00c674eaeb3bf6a2a949d5416056cb877c26851"
      },
      {
        "path": "commands/implement-events.md",
        "sha256": "2868ea6b2485b5feac5d06a734a7413ba037291dcd447b9f10c972980ea899a9"
      },
      {
        "path": "skills/skill-adapter/references/examples.md",
        "sha256": "922bbc3c4ebf38b76f515b5c1998ebde6bf902233e00e2c5a0e9176f975a7572"
      },
      {
        "path": "skills/skill-adapter/references/best-practices.md",
        "sha256": "c8f32b3566252f50daacd346d7045a1060c718ef5cfb07c55a0f2dec5f1fb39e"
      },
      {
        "path": "skills/skill-adapter/references/README.md",
        "sha256": "718941e232953801ab2f5d938bbb84514cbf11c5d91c7fbbc3e5fe97a841f9bd"
      },
      {
        "path": "skills/skill-adapter/scripts/helper-template.sh",
        "sha256": "0881d5660a8a7045550d09ae0acc15642c24b70de6f08808120f47f86ccdf077"
      },
      {
        "path": "skills/skill-adapter/scripts/validation.sh",
        "sha256": "92551a29a7f512d2036e4f1fb46c2a3dc6bff0f7dde4a9f699533e446db48502"
      },
      {
        "path": "skills/skill-adapter/scripts/README.md",
        "sha256": "e504c04b3c497cb6833315ba0280a2001ecd134cfd6a1afe56d4bbc2fe10aba9"
      },
      {
        "path": "skills/skill-adapter/assets/test-data.json",
        "sha256": "ac17dca3d6e253a5f39f2a2f1b388e5146043756b05d9ce7ac53a0042eee139d"
      },
      {
        "path": "skills/skill-adapter/assets/README.md",
        "sha256": "ca688b9cf44464000f02379f8f7463e149b7932a13ec7f5bf4cc19c05be791e6"
      },
      {
        "path": "skills/skill-adapter/assets/event_schema_template.json",
        "sha256": "67ab6a12295132047bab5019fb63c5dd97e66f5351ea2ab5014d8d0819e48eae"
      },
      {
        "path": "skills/skill-adapter/assets/docker-compose.yml",
        "sha256": "b9f44476ca592018a7a8534b22ebb4c8895ab3aca0e6cb4ae315c54c6594bf0b"
      },
      {
        "path": "skills/skill-adapter/assets/example_events.json",
        "sha256": "99f824b160abfba9ff3ebadd6928da1f7332f0c2852dde4a0326b353a2e28746"
      },
      {
        "path": "skills/skill-adapter/assets/skill-schema.json",
        "sha256": "f5639ba823a24c9ac4fb21444c0717b7aefde1a4993682897f5bf544f863c2cd"
      },
      {
        "path": "skills/skill-adapter/assets/config-template.json",
        "sha256": "0c2ba33d2d3c5ccb266c0848fc43caa68a2aa6a80ff315d4b378352711f83e1c"
      }
    ],
    "dirSha256": "05915ba170db92d4cad54ea46256c3e831ed8d79b46758facbd8ca814082e96d"
  },
  "security": {
    "scannedAt": null,
    "scannerVersion": null,
    "flags": []
  }
}

7  skills/skill-adapter/assets/README.md  Normal file
@@ -0,0 +1,7 @@
# Assets

Bundled resources for api-event-emitter skill

- [ ] event_schema_template.json: A template for defining event schemas in JSON format.
- [ ] docker-compose.yml: A Docker Compose file for setting up a local message queue for development and testing.
- [ ] example_events.json: A collection of example events that can be used to test the API.

32  skills/skill-adapter/assets/config-template.json  Normal file
@@ -0,0 +1,32 @@
{
  "skill": {
    "name": "skill-name",
    "version": "1.0.0",
    "enabled": true,
    "settings": {
      "verbose": false,
      "autoActivate": true,
      "toolRestrictions": true
    }
  },
  "triggers": {
    "keywords": [
      "example-trigger-1",
      "example-trigger-2"
    ],
    "patterns": []
  },
  "tools": {
    "allowed": [
      "Read",
      "Grep",
      "Bash"
    ],
    "restricted": []
  },
  "metadata": {
    "author": "Plugin Author",
    "category": "general",
    "tags": []
  }
}

56  skills/skill-adapter/assets/docker-compose.yml  Normal file
@@ -0,0 +1,56 @@
version: "3.9"

services:
  # Message Queue Service (e.g., RabbitMQ)
  message_queue:
    image: rabbitmq:3.9-management-alpine # Or choose your preferred message queue and version
    container_name: rabbitmq_server
    ports:
      - "5672:5672" # AMQP protocol port
      - "15672:15672" # Management UI port (optional)
    environment:
      RABBITMQ_DEFAULT_USER: "guest" # REPLACE_ME: Change default user in production
      RABBITMQ_DEFAULT_PASS: "guest" # REPLACE_ME: Change default password in production
      RABBITMQ_DEFAULT_VHOST: "/"
    volumes:
      - rabbitmq_data:/var/lib/rabbitmq # Persist data across restarts (optional)
    healthcheck:
      test: ["CMD", "rabbitmqctl", "status"]
      interval: 30s
      timeout: 10s
      retries: 5

  # Optional: Event Streaming Platform (e.g., Kafka) - uncomment to include
  # event_streaming:
  #   image: confluentinc/cp-kafka:latest
  #   container_name: kafka_server
  #   ports:
  #     - "9092:9092"
  #   environment:
  #     KAFKA_BROKER_ID: 1
  #     KAFKA_LISTENERS: PLAINTEXT://:9092
  #     KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092 # REPLACE_ME: Adjust for your environment
  #     KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
  #   depends_on:
  #     - zookeeper
  #   healthcheck:
  #     test: ["CMD", "kafka-topics", "--list", "--zookeeper", "zookeeper:2181"]
  #     interval: 30s
  #     timeout: 10s
  #     retries: 5

  # Optional: Zookeeper (required for Kafka) - uncomment to include if using Kafka
  # zookeeper:
  #   image: confluentinc/cp-zookeeper:latest
  #   container_name: zookeeper_server
  #   ports:
  #     - "2181:2181"
  #   environment:
  #     ZOOKEEPER_CLIENT_PORT: 2181
  #     ZOOKEEPER_TICK_TIME: 2000

# Define persistent volumes
volumes:
  rabbitmq_data: # Named volume for RabbitMQ data persistence
  # kafka_data: # Uncomment if using Kafka
  # zookeeper_data: # Uncomment if using Zookeeper

69  skills/skill-adapter/assets/event_schema_template.json  Normal file
@@ -0,0 +1,69 @@
{
  "_comment": "Template for defining event schemas in JSON format",
  "schema_version": "1.0",
  "event_name": "user.created",
  "description": "Event triggered when a new user is created.",
  "source": "user-service",
  "timestamp": "2024-10-27T10:00:00Z",
  "payload": {
    "_comment": "Data associated with the user.created event",
    "user_id": {
      "type": "string",
      "format": "uuid",
      "description": "Unique identifier for the user"
    },
    "username": {
      "type": "string",
      "minLength": 3,
      "maxLength": 50,
      "description": "Username of the new user"
    },
    "email": {
      "type": "string",
      "format": "email",
      "description": "Email address of the new user"
    },
    "created_at": {
      "type": "string",
      "format": "date-time",
      "description": "Timestamp of when the user was created"
    },
    "profile": {
      "type": "object",
      "description": "User profile information",
      "properties": {
        "first_name": {
          "type": "string",
          "description": "First name of the user"
        },
        "last_name": {
          "type": "string",
          "description": "Last name of the user"
        }
      },
      "required": [
        "first_name",
        "last_name"
      ]
    }
  },
  "required": [
    "user_id",
    "username",
    "email",
    "created_at"
  ],
  "examples": [
    {
      "_comment": "Example of a user.created event",
      "user_id": "a1b2c3d4-e5f6-7890-1234-567890abcdef",
      "username": "newuser123",
      "email": "newuser@example.com",
      "created_at": "2024-10-27T10:00:00Z",
      "profile": {
        "first_name": "John",
        "last_name": "Doe"
      }
    }
  ]
}

62  skills/skill-adapter/assets/example_events.json  Normal file
@@ -0,0 +1,62 @@
{
  "_comment": "Example events for testing the API event emitter plugin",
  "user_created": {
    "_comment": "Event emitted when a new user is created",
    "event_type": "user.created",
    "data": {
      "user_id": "user123",
      "username": "johndoe",
      "email": "john.doe@example.com",
      "created_at": "2024-10-27T10:00:00Z"
    }
  },
  "product_added": {
    "_comment": "Event emitted when a new product is added to the catalog",
    "event_type": "product.added",
    "data": {
      "product_id": "prod456",
      "product_name": "Awesome Widget",
      "description": "A fantastic widget for all your needs",
      "price": 19.99,
      "created_at": "2024-10-27T10:05:00Z"
    }
  },
  "order_placed": {
    "_comment": "Event emitted when a user places an order",
    "event_type": "order.placed",
    "data": {
      "order_id": "order789",
      "user_id": "user123",
      "order_date": "2024-10-27T10:10:00Z",
      "total_amount": 39.98,
      "items": [
        {
          "product_id": "prod456",
          "quantity": 2
        }
      ]
    }
  },
  "payment_processed": {
    "_comment": "Event emitted when a payment is successfully processed",
    "event_type": "payment.processed",
    "data": {
      "payment_id": "payment012",
      "order_id": "order789",
      "amount": 39.98,
      "payment_date": "2024-10-27T10:15:00Z",
      "payment_method": "credit_card",
      "status": "success"
    }
  },
  "user_updated": {
    "_comment": "Event emitted when a user's profile is updated",
    "event_type": "user.updated",
    "data": {
      "user_id": "user123",
      "username": "john.doe.updated",
      "email": "john.doe.updated@example.com",
      "updated_at": "2024-10-27T10:20:00Z"
    }
  }
}

28  skills/skill-adapter/assets/skill-schema.json  Normal file
@@ -0,0 +1,28 @@
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Claude Skill Configuration",
  "type": "object",
  "required": ["name", "description"],
  "properties": {
    "name": {
      "type": "string",
      "pattern": "^[a-z0-9-]+$",
      "maxLength": 64,
      "description": "Skill identifier (lowercase, hyphens only)"
    },
    "description": {
      "type": "string",
      "maxLength": 1024,
      "description": "What the skill does and when to use it"
    },
    "allowed-tools": {
      "type": "string",
      "description": "Comma-separated list of allowed tools"
    },
    "version": {
      "type": "string",
      "pattern": "^\\d+\\.\\d+\\.\\d+$",
      "description": "Semantic version (x.y.z)"
    }
  }
}

27  skills/skill-adapter/assets/test-data.json  Normal file
@@ -0,0 +1,27 @@
{
  "testCases": [
    {
      "name": "Basic activation test",
      "input": "trigger phrase example",
      "expected": {
        "activated": true,
        "toolsUsed": ["Read", "Grep"],
        "success": true
      }
    },
    {
      "name": "Complex workflow test",
      "input": "multi-step trigger example",
      "expected": {
        "activated": true,
        "steps": 3,
        "toolsUsed": ["Read", "Write", "Bash"],
        "success": true
      }
    }
  ],
  "fixtures": {
    "sampleInput": "example data",
    "expectedOutput": "processed result"
  }
}

7  skills/skill-adapter/references/README.md  Normal file
@@ -0,0 +1,7 @@
# References

Bundled resources for api-event-emitter skill

- [ ] event_driven_architecture_best_practices.md: A guide to designing and implementing event-driven APIs, including topics like event schema design, idempotency, and error handling.
- [ ] message_queue_configuration.md: Documentation on configuring popular message queues like RabbitMQ and Kafka.
- [ ] api_event_emitter_api_reference.md: Detailed API reference for the event emitter plugin, including endpoints, request/response formats, and error codes.

69  skills/skill-adapter/references/best-practices.md  Normal file
@@ -0,0 +1,69 @@
# Skill Best Practices

Guidelines for optimal skill usage and development.

## For Users

### Activation Best Practices

1. **Use Clear Trigger Phrases**
   - Match phrases from skill description
   - Be specific about intent
   - Provide necessary context

2. **Provide Sufficient Context**
   - Include relevant file paths
   - Specify scope of analysis
   - Mention any constraints

3. **Understand Tool Permissions**
   - Check allowed-tools in frontmatter
   - Know what the skill can/cannot do
   - Request appropriate actions

### Workflow Optimization

- Start with simple requests
- Build up to complex workflows
- Verify each step before proceeding
- Use skill consistently for related tasks

## For Developers

### Skill Development Guidelines

1. **Clear Descriptions**
   - Include explicit trigger phrases
   - Document all capabilities
   - Specify limitations

2. **Proper Tool Permissions**
   - Use minimal necessary tools
   - Document security implications
   - Test with restricted tools

3. **Comprehensive Documentation**
   - Provide usage examples
   - Document common pitfalls
   - Include troubleshooting guide

### Maintenance

- Keep version updated
- Test after tool updates
- Monitor user feedback
- Iterate on descriptions

## Performance Tips

- Scope skills to specific domains
- Avoid overlapping trigger phrases
- Keep descriptions under 1024 chars
- Test activation reliability

## Security Considerations

- Never include secrets in skill files
- Validate all inputs
- Use read-only tools when possible
- Document security requirements

70  skills/skill-adapter/references/examples.md  Normal file
@@ -0,0 +1,70 @@
# Skill Usage Examples

This document provides practical examples of how to use this skill effectively.

## Basic Usage

### Example 1: Simple Activation

**User Request:**
```
[Describe trigger phrase here]
```

**Skill Response:**
1. Analyzes the request
2. Performs the required action
3. Returns results

### Example 2: Complex Workflow

**User Request:**
```
[Describe complex scenario]
```

**Workflow:**
1. Step 1: Initial analysis
2. Step 2: Data processing
3. Step 3: Result generation
4. Step 4: Validation

## Advanced Patterns

### Pattern 1: Chaining Operations

Combine this skill with other tools:
```
Step 1: Use this skill for [purpose]
Step 2: Chain with [other tool]
Step 3: Finalize with [action]
```

### Pattern 2: Error Handling

If issues occur:
- Check trigger phrase matches
- Verify context is available
- Review allowed-tools permissions

## Tips & Best Practices

- ✅ Be specific with trigger phrases
- ✅ Provide necessary context
- ✅ Check tool permissions match needs
- ❌ Avoid vague requests
- ❌ Don't mix unrelated tasks

## Common Issues

**Issue:** Skill doesn't activate
**Solution:** Use exact trigger phrases from description

**Issue:** Unexpected results
**Solution:** Check input format and context

## See Also

- Main SKILL.md for full documentation
- scripts/ for automation helpers
- assets/ for configuration examples

7  skills/skill-adapter/scripts/README.md  Normal file
@@ -0,0 +1,7 @@
# Scripts

Bundled resources for api-event-emitter skill

- [ ] generate_event_schema.py: Generates event schema based on user input or existing API definitions.
- [ ] deploy_event_queue.sh: Deploys a message queue (e.g., RabbitMQ, Kafka) using Docker or a cloud provider's CLI.
- [ ] test_event_emitter.py: Sends test events to the API and verifies that they are processed correctly.

42  skills/skill-adapter/scripts/helper-template.sh  Executable file
@@ -0,0 +1,42 @@
#!/bin/bash
# Helper script template for skill automation
# Customize this for your skill's specific needs

set -e

function show_usage() {
    echo "Usage: $0 [options]"
    echo ""
    echo "Options:"
    echo "  -h, --help     Show this help message"
    echo "  -v, --verbose  Enable verbose output"
    echo ""
}

# Parse arguments
VERBOSE=false

while [[ $# -gt 0 ]]; do
    case $1 in
        -h|--help)
            show_usage
            exit 0
            ;;
        -v|--verbose)
            VERBOSE=true
            shift
            ;;
        *)
            echo "Unknown option: $1"
            show_usage
            exit 1
            ;;
    esac
done

# Your skill logic here
if [ "$VERBOSE" = true ]; then
    echo "Running skill automation..."
fi

echo "✅ Complete"

32  skills/skill-adapter/scripts/validation.sh  Executable file
@@ -0,0 +1,32 @@
#!/bin/bash
# Skill validation helper
# Validates skill activation and functionality

set -e

echo "🔍 Validating skill..."

# Check if SKILL.md exists
if [ ! -f "../SKILL.md" ]; then
    echo "❌ Error: SKILL.md not found"
    exit 1
fi

# Validate frontmatter
if ! grep -q "^---$" "../SKILL.md"; then
    echo "❌ Error: No frontmatter found"
    exit 1
fi

# Check required fields
if ! grep -q "^name:" "../SKILL.md"; then
    echo "❌ Error: Missing 'name' field"
    exit 1
fi

if ! grep -q "^description:" "../SKILL.md"; then
    echo "❌ Error: Missing 'description' field"
    exit 1
fi

echo "✅ Skill validation passed"