Initial commit
15  .claude-plugin/plugin.json  Normal file
@@ -0,0 +1,15 @@
{
  "name": "api-batch-processor",
  "description": "Implement batch API operations with bulk processing and job queues",
  "version": "1.0.0",
  "author": {
    "name": "Jeremy Longshore",
    "email": "[email protected]"
  },
  "skills": [
    "./skills"
  ],
  "commands": [
    "./commands"
  ]
}
3  README.md  Normal file
@@ -0,0 +1,3 @@
# api-batch-processor

Implement batch API operations with bulk processing and job queues
665  commands/implement-batch-processing.md  Normal file
@@ -0,0 +1,665 @@
---
description: Implement high-performance batch API operations with job queues, progress tracking, and intelligent error recovery
shortcut: batch
category: api
difficulty: intermediate
estimated_time: 2-4 hours
version: 2.0.0
---

<!-- DESIGN DECISIONS -->
<!-- Batch processing enables efficient handling of large-scale operations that would
otherwise overwhelm synchronous APIs. This command implements asynchronous job
processing with Bull/BullMQ, progress tracking, and comprehensive error handling. -->

<!-- ALTERNATIVES CONSIDERED -->
<!-- Synchronous batch processing: Rejected due to timeout issues with large batches
Simple array iteration: Rejected as it lacks progress tracking and failure recovery
Database-only bulk operations: Rejected as they don't handle business logic validation -->

# Implement Batch Processing

Creates high-performance batch API processing infrastructure for handling bulk operations efficiently. Implements job queues with Bull/BullMQ, real-time progress tracking, transaction management, and intelligent error recovery. Supports millions of records with optimal resource utilization.

## When to Use

Use this command when:
- Processing thousands or millions of records in bulk operations
- Import/export functionality requires progress feedback
- Long-running operations exceed HTTP timeout limits
- Partial failures need graceful handling and retry logic
- Resource-intensive operations require rate limiting
- Background processing needs monitoring and management
- Data migration or synchronization between systems is required

Do NOT use this command for:
- Simple CRUD operations on single records
- Real-time operations requiring immediate responses
- Operations that must be synchronous by nature
- Small datasets that fit in memory (<1,000 records)

## Prerequisites

Before running this command, ensure:
- [ ] Redis is available for job queue management
- [ ] Database supports transactions or bulk operations
- [ ] API rate limits and quotas are understood
- [ ] Error handling strategy is defined
- [ ] Monitoring infrastructure is in place

## Process

### Step 1: Analyze Batch Requirements
The command examines your data processing needs:
- Identifies optimal batch sizes based on memory and performance
- Determines transaction boundaries for consistency
- Maps data validation requirements
- Calculates processing time estimates
- Defines retry and failure strategies

A rough sizing heuristic is sketched below.

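As an illustration only (not something the command generates), chunk size can be estimated from a sample record and a memory budget. The 10x overhead factor and 64 MB budget here are assumptions to tune per workload:

```javascript
// Hypothetical helper: estimate a chunk size from a sample record and a memory budget.
function estimateChunkSize(sampleRecord, memoryBudgetBytes = 64 * 1024 * 1024) {
  const bytesPerRecord = Buffer.byteLength(JSON.stringify(sampleRecord), 'utf8');
  const overheadFactor = 10; // ORM objects, indexes, and copies inflate raw JSON size
  const size = Math.floor(memoryBudgetBytes / (bytesPerRecord * overheadFactor));
  return Math.max(100, Math.min(size, 10000)); // clamp to the command's supported range
}
```
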
### Step 2: Implement Job Queue System
Sets up Bull/BullMQ for reliable job processing:
- Queue configuration with concurrency limits
- Worker processes for parallel execution
- Dead letter queues for failed jobs
- Priority queues for urgent operations
- Rate limiting to prevent overload

### Step 3: Create Batch API Endpoints
Implements RESTful endpoints for batch operations (a routing sketch follows the list):
- Job submission with validation
- Status checking and progress monitoring
- Result retrieval with pagination
- Job cancellation and cleanup
- Error log access

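One plausible Express wiring for these endpoints is sketched below. The paths mirror the URLs returned in Example 1; `getJobResults` and `cancelJob` are hypothetical handlers (only `createBatchJob` and `getJobStatus` appear in the generated controller shown later):

```javascript
// api/batch-routes.js — illustrative wiring; auth middleware and the two extra
// handlers are assumptions, not part of the generated controller in Example 1.
import { Router } from 'express';
import BatchController from './batch-controller.js';

const router = Router();
const controller = new BatchController();

router.post('/api/batch/jobs', (req, res) => controller.createBatchJob(req, res));
router.get('/api/batch/jobs/:jobId', (req, res) => controller.getJobStatus(req, res));
router.get('/api/batch/jobs/:jobId/results', (req, res) => controller.getJobResults(req, res));
router.delete('/api/batch/jobs/:jobId', (req, res) => controller.cancelJob(req, res));

export default router;
```
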
### Step 4: Implement Processing Logic
Creates efficient batch processing workflows:
- Chunked processing for memory efficiency
- Transaction management for data integrity
- Progress reporting at configurable intervals
- Error aggregation and reporting
- Result caching for retrieval

### Step 5: Add Monitoring & Observability
Integrates comprehensive monitoring (a queue-level metrics sketch follows the list):
- Job metrics and performance tracking
- Error rate monitoring and alerting
- Queue depth and processing rate
- Resource utilization metrics
- Business-level success metrics

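A minimal sketch of queue-level metrics, assuming the `batchQueue` instance from Example 1. The `metrics` object is a console shim standing in for whatever statsd/Prometheus client you use; `getJobCounts()` and the `completed`/`failed` events are part of the Bull API:

```javascript
// Minimal metrics shim for the sketch; swap in your real metrics client.
const metrics = {
  increment: (name) => console.log(`metric ${name} +1`),
  timing: (name, ms) => console.log(`metric ${name} ${ms}ms`),
  gauge: (name, value) => console.log(`metric ${name}=${value}`)
};

batchQueue.on('completed', (job) => {
  metrics.increment('batch.jobs.completed');
  metrics.timing('batch.jobs.duration_ms', job.finishedOn - job.processedOn);
});

batchQueue.on('failed', (job, err) => {
  metrics.increment('batch.jobs.failed');
  console.error(`Job ${job.id} failed:`, err.message);
});

// Poll queue depth periodically for dashboards and alerting
setInterval(async () => {
  const counts = await batchQueue.getJobCounts();
  metrics.gauge('batch.queue.waiting', counts.waiting);
  metrics.gauge('batch.queue.active', counts.active);
}, 10000);
```
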
## Output Format

The command generates a complete batch processing system:

```
batch-processing/
├── src/
│   ├── queues/
│   │   ├── batch-queue.js
│   │   ├── workers/
│   │   │   ├── batch-processor.js
│   │   │   └── chunk-worker.js
│   │   └── jobs/
│   │       ├── import-job.js
│   │       └── export-job.js
│   ├── api/
│   │   ├── batch-controller.js
│   │   └── batch-routes.js
│   ├── services/
│   │   ├── batch-service.js
│   │   ├── validation-service.js
│   │   └── transaction-manager.js
│   └── utils/
│       ├── chunking.js
│       └── progress-tracker.js
├── config/
│   └── queue-config.js
├── tests/
│   └── batch-processing.test.js
└── docs/
    └── batch-api.md
```

## Examples

### Example 1: User Import with Validation and Progress

**Scenario:** Import 100,000 users from CSV with validation and deduplication

**Generated Implementation:**
```javascript
// queues/batch-queue.js
import Queue from 'bull';

const batchQueue = new Queue('batch-processing', {
  redis: {
    host: process.env.REDIS_HOST,
    port: process.env.REDIS_PORT
  },
  defaultJobOptions: {
    removeOnComplete: 100,
    removeOnFail: 500,
    attempts: 3,
    backoff: {
      type: 'exponential',
      delay: 2000
    }
  }
});

// api/batch-controller.js
import { v4 as uuidv4 } from 'uuid';

class BatchController {
  async createBatchJob(req, res) {
    const { type, data, options = {} } = req.body;

    // Validate batch request
    if (!this.validateBatchRequest(type, data)) {
      return res.status(400).json({
        error: 'Invalid batch request'
      });
    }

    // Create job with unique ID
    const jobId = `${type}-${Date.now()}-${uuidv4()}`;

    const job = await batchQueue.add(type, {
      data,
      userId: req.user.id,
      options: {
        chunkSize: options.chunkSize || 1000,
        validateBeforeProcess: options.validate !== false,
        stopOnError: options.stopOnError || false,
        ...options
      }
    }, {
      jobId,
      priority: options.priority || 0
    });

    // Return job information
    return res.status(202).json({
      jobId: job.id,
      status: 'queued',
      estimatedTime: this.estimateProcessingTime(data.length),
      statusUrl: `/api/batch/jobs/${job.id}`,
      resultsUrl: `/api/batch/jobs/${job.id}/results`
    });
  }

  async getJobStatus(req, res) {
    const { jobId } = req.params;
    const job = await batchQueue.getJob(jobId);

    if (!job) {
      return res.status(404).json({ error: 'Job not found' });
    }

    const state = await job.getState();
    const progress = job.progress();

    return res.json({
      jobId: job.id,
      status: state,
      progress: {
        percentage: progress.percentage || 0,
        processed: progress.processed || 0,
        total: progress.total || 0,
        successful: progress.successful || 0,
        failed: progress.failed || 0,
        currentChunk: progress.currentChunk || 0,
        totalChunks: progress.totalChunks || 0
      },
      startedAt: job.processedOn,
      completedAt: job.finishedOn,
      error: job.failedReason,
      result: state === 'completed' ? job.returnvalue : null
    });
  }

  // validateBatchRequest() and estimateProcessingTime() omitted for brevity
}

// workers/batch-processor.js
import db from '../config/database.js'; // assumed Knex instance; adjust to your setup

class BatchProcessor {
  constructor() {
    this.initializeWorker();
  }

  initializeWorker() {
    batchQueue.process('user-import', async (job) => {
      const { data, options } = job.data;
      const chunks = this.chunkArray(data, options.chunkSize);

      const results = {
        successful: [],
        failed: [],
        skipped: []
      };

      // Update initial progress
      await job.progress({
        percentage: 0,
        total: data.length,
        totalChunks: chunks.length,
        processed: 0,
        successful: 0,
        failed: 0
      });

      // Process chunks sequentially
      for (let i = 0; i < chunks.length; i++) {
        const chunk = chunks[i];

        try {
          // Process chunk in transaction
          const chunkResults = await this.processChunk(
            chunk,
            options,
            job
          );

          results.successful.push(...chunkResults.successful);
          results.failed.push(...chunkResults.failed);
          results.skipped.push(...chunkResults.skipped);

          // Update progress
          const processed = (i + 1) * options.chunkSize;
          await job.progress({
            percentage: Math.min(100, (processed / data.length) * 100),
            processed: Math.min(processed, data.length),
            total: data.length,
            successful: results.successful.length,
            failed: results.failed.length,
            currentChunk: i + 1,
            totalChunks: chunks.length
          });

          // Check whether to stop on error
          if (options.stopOnError && results.failed.length > 0) {
            break;
          }
        } catch (error) {
          console.error(`Chunk ${i} failed:`, error);

          if (options.stopOnError) {
            throw error;
          }

          // Mark entire chunk as failed
          chunk.forEach(item => {
            results.failed.push({
              data: item,
              error: error.message
            });
          });
        }
      }

      // Store results for retrieval
      await this.storeResults(job.id, results);

      return {
        summary: {
          total: data.length,
          successful: results.successful.length,
          failed: results.failed.length,
          skipped: results.skipped.length
        },
        resultsId: job.id
      };
    });
  }

  async processChunk(chunk, options, job) {
    const results = {
      successful: [],
      failed: [],
      skipped: []
    };

    // Start database transaction
    const trx = await db.transaction();

    try {
      for (const item of chunk) {
        try {
          // Validate if required
          if (options.validateBeforeProcess) {
            const validation = await this.validateUser(item);
            if (!validation.valid) {
              results.failed.push({
                data: item,
                errors: validation.errors
              });
              continue;
            }
          }

          // Check for duplicates
          const existing = await trx('users')
            .where('email', item.email)
            .first();

          if (existing) {
            if (options.skipDuplicates) {
              results.skipped.push({
                data: item,
                reason: 'Duplicate email'
              });
              continue;
            } else if (options.updateDuplicates) {
              await trx('users')
                .where('email', item.email)
                .update(item);
              results.successful.push({
                action: 'updated',
                id: existing.id,
                data: item
              });
              continue;
            }
          }

          // Insert new user
          const [userId] = await trx('users').insert({
            ...item,
            created_at: new Date(),
            batch_job_id: job.id
          });

          results.successful.push({
            action: 'created',
            id: userId,
            data: item
          });

        } catch (error) {
          results.failed.push({
            data: item,
            error: error.message
          });
        }
      }

      // Commit transaction
      await trx.commit();
    } catch (error) {
      await trx.rollback();
      throw error;
    }

    return results;
  }

  chunkArray(array, size) {
    const chunks = [];
    for (let i = 0; i < array.length; i += size) {
      chunks.push(array.slice(i, i + size));
    }
    return chunks;
  }

  // validateUser() and storeResults() omitted for brevity
}
```

---

### Example 2: Export with Streaming and Compression

**Scenario:** Export millions of records with streaming and compression

**Generated Streaming Export:**
```javascript
// services/export-service.js
import fs from 'fs';
import { Transform } from 'stream';
import zlib from 'zlib';

class ExportService {
  async createExportJob(query, format, options) {
    const job = await batchQueue.add('data-export', {
      query,
      format,
      options
    });

    return job;
  }

  async processExportJob(job) {
    const { query, format, options } = job.data;

    // Create export stream
    const exportStream = this.createExportStream(query, format);
    const outputPath = `/tmp/exports/${job.id}.${format}.gz`;

    // Create compression stream
    const gzip = zlib.createGzip();
    const writeStream = fs.createWriteStream(outputPath);

    let recordCount = 0;
    let errorCount = 0;

    return new Promise((resolve, reject) => {
      exportStream
        .pipe(new Transform({
          transform(chunk, encoding, callback) {
            recordCount++;

            // Update progress every 1000 records
            // (options.estimatedTotal must be supplied for a meaningful percentage)
            if (recordCount % 1000 === 0) {
              job.progress({
                processed: recordCount,
                percentage: Math.min(100, (recordCount / options.estimatedTotal) * 100)
              });
            }

            callback(null, chunk);
          }
        }))
        .pipe(gzip)
        .pipe(writeStream)
        .on('finish', async () => {
          // Upload to storage
          const url = await this.uploadToStorage(outputPath, job.id);

          resolve({
            recordCount,
            errorCount,
            downloadUrl: url,
            expiresAt: new Date(Date.now() + 24 * 60 * 60 * 1000)
          });
        })
        .on('error', reject);
    });
  }

  createExportStream(query, format) {
    // `db` is the shared Knex instance from Example 1
    const stream = db.raw(query).stream();

    switch (format) {
      case 'csv':
        return stream.pipe(this.createCSVTransform());
      case 'json':
        return stream.pipe(this.createJSONTransform());
      case 'ndjson':
        return stream.pipe(this.createNDJSONTransform());
      default:
        throw new Error(`Unsupported format: ${format}`);
    }
  }
}
```

---

### Example 3: Parallel Processing with Rate Limiting

**Scenario:** Process API calls with rate limiting and retry logic

**Generated Rate-Limited Processor:**
```javascript
// workers/rate-limited-processor.js
import Bottleneck from 'bottleneck';

class RateLimitedProcessor {
  constructor() {
    // Configure rate limiter: at most 5 concurrent calls, ~10 requests per second
    this.limiter = new Bottleneck({
      maxConcurrent: 5,
      minTime: 100 // 100ms between requests
    });
  }

  async processBatch(job) {
    const { items, apiEndpoint, options } = job.data;

    // Process items with rate limiting
    const promises = items.map((item, index) =>
      this.limiter.schedule(async () => {
        try {
          const result = await this.callAPI(apiEndpoint, item);

          // Update progress
          await job.progress({
            processed: index + 1,
            total: items.length,
            percentage: ((index + 1) / items.length) * 100
          });

          return { success: true, data: result };
        } catch (error) {
          return {
            success: false,
            error: error.message,
            item
          };
        }
      })
    );

    const results = await Promise.all(promises);

    return {
      successful: results.filter(r => r.success).length,
      failed: results.filter(r => !r.success),
      total: items.length
    };
  }
}
```

## Error Handling

### Error: Job Queue Connection Failed
**Symptoms:** Jobs not processing, Redis connection errors
**Cause:** Redis server unavailable or misconfigured
**Solution:**
```javascript
batchQueue.on('error', (error) => {
  console.error('Queue error:', error);
  // Implement fallback or alerting
});
```
**Prevention:** Implement Redis Sentinel or a Redis cluster for high availability

### Error: Memory Exhaustion
**Symptoms:** Process crashes with heap out-of-memory errors
**Cause:** Processing chunks too large for available memory
**Solution:** Reduce chunk size and implement streaming (see the sketch below)

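A minimal streaming-ingest sketch using only Node built-ins: records are read line by line and enqueued in bounded batches, so memory stays proportional to `batchSize` rather than file size. It assumes the `batchQueue` from Example 1 and an NDJSON input file:

```javascript
import fs from 'fs';
import readline from 'readline';

async function enqueueFileInBatches(filePath, batchSize = 1000) {
  const rl = readline.createInterface({
    input: fs.createReadStream(filePath),
    crlfDelay: Infinity
  });

  let batch = [];
  for await (const line of rl) {
    if (!line.trim()) continue;
    batch.push(JSON.parse(line)); // one NDJSON record per line
    if (batch.length >= batchSize) {
      await batchQueue.add('user-import', { data: batch, options: {} });
      batch = []; // memory stays bounded by batchSize
    }
  }
  if (batch.length > 0) {
    await batchQueue.add('user-import', { data: batch, options: {} });
  }
}
```
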
### Error: Transaction Deadlock
**Symptoms:** Batch processing hangs or fails with deadlock errors
**Cause:** Concurrent transactions competing for the same resources
**Solution:** Implement retry logic with exponential backoff (see the sketch below)

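A minimal retry wrapper, assuming the database driver surfaces deadlocks with a recognizable error code (MySQL drivers use `ER_LOCK_DEADLOCK`, Postgres uses SQLSTATE `40P01`; verify for your driver):

```javascript
async function withDeadlockRetry(fn, maxAttempts = 3, baseDelayMs = 200) {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      const isDeadlock = error.code === 'ER_LOCK_DEADLOCK' || error.code === '40P01';
      if (!isDeadlock || attempt >= maxAttempts) throw error;
      const delay = baseDelayMs * 2 ** (attempt - 1); // 200ms, 400ms, 800ms...
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}

// Usage: wrap the chunk transaction from Example 1
// const results = await withDeadlockRetry(() => this.processChunk(chunk, options, job));
```
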
## Configuration Options

### Option: `--chunk-size`
- **Purpose:** Set the number of records per processing chunk
- **Values:** 100-10000 (integer)
- **Default:** 1000
- **Example:** `/batch --chunk-size 500`

### Option: `--concurrency`
- **Purpose:** Number of parallel workers
- **Values:** 1-20 (integer)
- **Default:** 5
- **Example:** `/batch --concurrency 10`

### Option: `--retry-attempts`
- **Purpose:** Number of retry attempts for failed items
- **Values:** 0-10 (integer)
- **Default:** 3
- **Example:** `/batch --retry-attempts 5`

## Best Practices

✅ **DO:**
- Use transactions for data consistency
- Implement idempotent operations for retry safety
- Monitor queue depth and processing rates
- Store detailed error information for debugging
- Implement circuit breakers for external API calls

❌ **DON'T:**
- Process entire datasets in memory
- Ignore partial failures in batch operations
- Use synchronous processing for large batches
- Forget to implement job cleanup policies

💡 **TIPS:**
- Use priority queues for time-sensitive batches
- Implement progressive chunk sizing based on success rate (see the sketch after this list)
- Cache validation results to avoid redundant checks
- Use database bulk operations when possible

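An illustrative take on progressive chunk sizing: grow the chunk when things are healthy, shrink it when failures cluster so bad records are isolated faster. The thresholds (0.95 / 0.8) and growth factors are assumptions to tune:

```javascript
function nextChunkSize(current, successRate, min = 100, max = 10000) {
  if (successRate >= 0.95) {
    return Math.min(max, Math.floor(current * 1.5)); // healthy: grow
  }
  if (successRate < 0.8) {
    return Math.max(min, Math.floor(current / 2)); // failing: shrink to isolate bad records
  }
  return current; // in between: hold steady
}
```
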
## Related Commands

- `/api-rate-limiter` - Implement API rate limiting
- `/api-event-emitter` - Event-driven processing
- `/api-monitoring-dashboard` - Monitor batch jobs
- `/database-bulk-operations` - Database-level batch operations

## Performance Considerations

- **Optimal chunk size:** 500-2000 records, depending on complexity
- **Memory per worker:** ~512MB for typical operations
- **Processing rate:** 1000-10000 records/second, depending on validation
- **Redis memory:** ~1KB per job, plus result storage

## Security Notes

⚠️ **Security Considerations:**
- Validate all batch input data to prevent injection attacks
- Implement authentication for job status endpoints
- Sanitize error messages to avoid information leakage
- Use separate queues for different security contexts
- Implement job ownership validation (see the sketch after this list)

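A minimal ownership-check middleware, assuming jobs carry the submitting user's ID as in Example 1 (`job.data.userId`) and that auth middleware has already populated `req.user`:

```javascript
async function requireJobOwnership(req, res, next) {
  const job = await batchQueue.getJob(req.params.jobId);
  if (!job) {
    return res.status(404).json({ error: 'Job not found' });
  }
  if (job.data.userId !== req.user.id) {
    // Return 404 rather than 403 to avoid confirming the job exists
    return res.status(404).json({ error: 'Job not found' });
  }
  req.job = job;
  next();
}

// Usage: router.get('/api/batch/jobs/:jobId', requireJobOwnership, controller.getJobStatus);
```
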
## Troubleshooting

### Issue: Jobs stuck in queue
**Solution:** Check worker processes and Redis connectivity

### Issue: Slow processing speed
**Solution:** Increase chunk size and worker concurrency

### Issue: High error rates
**Solution:** Review validation logic and add retry mechanisms

### Getting Help
- Bull documentation: https://github.com/OptimalBits/bull
- BullMQ guide: https://docs.bullmq.io
- Redis Streams: https://redis.io/topics/streams

## Version History

- **v2.0.0** - Complete rewrite with streaming, rate limiting, and advanced error handling
- **v1.0.0** - Initial batch processing implementation

---

*Last updated: 2025-10-11*
*Quality score: 9.5/10*
*Tested with: Bull 4.x, BullMQ 3.x, Redis 7.0*
97  plugin.lock.json  Normal file
@@ -0,0 +1,97 @@
{
  "$schema": "internal://schemas/plugin.lock.v1.json",
  "pluginId": "gh:jeremylongshore/claude-code-plugins-plus:plugins/api-development/api-batch-processor",
  "normalized": {
    "repo": null,
    "ref": "refs/tags/v20251128.0",
    "commit": "9338d0fffcb1ad889ba06e152dbaae723d57c4d2",
    "treeHash": "6af046f7fcd9b48ff314537a6e9654c6e3bcc82de6ec7f7b12f3dce1a58d5072",
    "generatedAt": "2025-11-28T10:18:05.135111Z",
    "toolVersion": "publish_plugins.py@0.2.0"
  },
  "origin": {
    "remote": "git@github.com:zhongweili/42plugin-data.git",
    "branch": "master",
    "commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
    "repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
  },
  "manifest": {
    "name": "api-batch-processor",
    "description": "Implement batch API operations with bulk processing and job queues",
    "version": "1.0.0"
  },
  "content": {
    "files": [
      {
        "path": "README.md",
        "sha256": "119ffbc66834389b8e0bcda07d658e582da1d08983b163911a11b357a2ebee90"
      },
      {
        "path": ".claude-plugin/plugin.json",
        "sha256": "c256a3ddaf2fff2892dab64e52f86d95a5b4cd7797cdae6610f43ef5cd39a1d2"
      },
      {
        "path": "commands/implement-batch-processing.md",
        "sha256": "16871ad41b9155a7af94f6a39073ba827f34f563459a72a47a42581cd762fee8"
      },
      {
        "path": "skills/skill-adapter/references/examples.md",
        "sha256": "922bbc3c4ebf38b76f515b5c1998ebde6bf902233e00e2c5a0e9176f975a7572"
      },
      {
        "path": "skills/skill-adapter/references/best-practices.md",
        "sha256": "c8f32b3566252f50daacd346d7045a1060c718ef5cfb07c55a0f2dec5f1fb39e"
      },
      {
        "path": "skills/skill-adapter/references/README.md",
        "sha256": "19d96b4dfd1d6f4de0c6a9962da463dca8a1b349fa2bc68d584216b38ed6de96"
      },
      {
        "path": "skills/skill-adapter/scripts/helper-template.sh",
        "sha256": "0881d5660a8a7045550d09ae0acc15642c24b70de6f08808120f47f86ccdf077"
      },
      {
        "path": "skills/skill-adapter/scripts/validation.sh",
        "sha256": "92551a29a7f512d2036e4f1fb46c2a3dc6bff0f7dde4a9f699533e446db48502"
      },
      {
        "path": "skills/skill-adapter/scripts/README.md",
        "sha256": "a740694911c1c4862c3e3069ee3b0040dd981234be0be6bb8f7e1f93c92e0794"
      },
      {
        "path": "skills/skill-adapter/assets/test-data.json",
        "sha256": "ac17dca3d6e253a5f39f2a2f1b388e5146043756b05d9ce7ac53a0042eee139d"
      },
      {
        "path": "skills/skill-adapter/assets/README.md",
        "sha256": "71d12ca53e24c49d9231857323ad0da6f56bb66f949406e5f2c8ba129e950471"
      },
      {
        "path": "skills/skill-adapter/assets/job_template.json",
        "sha256": "b147c363b20296271dc08aee8bb1f89fefa6471dd9f850008a881524d8a6ecba"
      },
      {
        "path": "skills/skill-adapter/assets/skill-schema.json",
        "sha256": "f5639ba823a24c9ac4fb21444c0717b7aefde1a4993682897f5bf544f863c2cd"
      },
      {
        "path": "skills/skill-adapter/assets/example_batch_config.json",
        "sha256": "2b7629590c4612a26a55bf40e24f29f084cb50786092894fea07dcb7dd025636"
      },
      {
        "path": "skills/skill-adapter/assets/progress_report_template.md",
        "sha256": "f2b1144eaa9c0ea8658994596d54be0636e259c917b9978a6177d557480a9fea"
      },
      {
        "path": "skills/skill-adapter/assets/config-template.json",
        "sha256": "0c2ba33d2d3c5ccb266c0848fc43caa68a2aa6a80ff315d4b378352711f83e1c"
      }
    ],
    "dirSha256": "6af046f7fcd9b48ff314537a6e9654c6e3bcc82de6ec7f7b12f3dce1a58d5072"
  },
  "security": {
    "scannedAt": null,
    "scannerVersion": null,
    "flags": []
  }
}
7  skills/skill-adapter/assets/README.md  Normal file
@@ -0,0 +1,7 @@
# Assets

Bundled resources for the api-batch-processor skill

- [ ] job_template.json: A JSON template for defining batch processing jobs.
- [ ] progress_report_template.md: A Markdown template for generating progress reports for batch processing jobs.
- [ ] example_batch_config.json: An example configuration file for setting up a batch processing job.
32  skills/skill-adapter/assets/config-template.json  Normal file
@@ -0,0 +1,32 @@
{
  "skill": {
    "name": "skill-name",
    "version": "1.0.0",
    "enabled": true,
    "settings": {
      "verbose": false,
      "autoActivate": true,
      "toolRestrictions": true
    }
  },
  "triggers": {
    "keywords": [
      "example-trigger-1",
      "example-trigger-2"
    ],
    "patterns": []
  },
  "tools": {
    "allowed": [
      "Read",
      "Grep",
      "Bash"
    ],
    "restricted": []
  },
  "metadata": {
    "author": "Plugin Author",
    "category": "general",
    "tags": []
  }
}
54  skills/skill-adapter/assets/example_batch_config.json  Normal file
@@ -0,0 +1,54 @@
{
  "_comment": "Example configuration for a batch processing job. Customize this for your specific API and data.",
  "job_name": "process_user_data_2024-10-27",
  "description": "Batch process user data updates from the latest CSV file.",
  "api_endpoint": "https://api.example.com/users",
  "api_method": "PUT",
  "api_headers": {
    "Content-Type": "application/json",
    "Authorization": "Bearer YOUR_API_KEY"
  },
  "data_source": {
    "_comment": "Specify where the data comes from. Currently supports CSV files.",
    "type": "csv",
    "file_path": "/data/user_updates_2024-10-27.csv",
    "delimiter": ",",
    "quotechar": "\"",
    "header": true,
    "fields": {
      "user_id": "user_id",
      "email": "email",
      "status": "status",
      "subscription_type": "subscription"
    }
  },
  "batch_size": 50,
  "max_retries": 3,
  "retry_delay": 5,
  "error_handling": {
    "_comment": "Defines how to handle errors during processing.",
    "on_error": "continue",
    "log_errors": true,
    "error_log_path": "/logs/user_update_errors.log"
  },
  "success_handling": {
    "_comment": "Defines how to handle successful updates.",
    "log_successes": true,
    "success_log_path": "/logs/user_update_successes.log"
  },
  "transformation": {
    "_comment": "Optional transformation to apply to each data record before sending to the API. Use a Python function name.",
    "function_name": "transform_user_data"
  },
  "reporting": {
    "_comment": "Options for reporting the job's progress.",
    "progress_interval": 60,
    "report_to_console": true,
    "report_to_file": "/reports/user_update_report.txt"
  },
  "rate_limiting": {
    "_comment": "Prevent overwhelming the API.",
    "requests_per_second": 10
  },
  "dry_run": false
}
54  skills/skill-adapter/assets/job_template.json  Normal file
@@ -0,0 +1,54 @@
{
  "_comment": "Template for defining a batch processing job",
  "job_name": "Example Batch Job",
  "_comment_job_name": "A descriptive name for the job",
  "description": "This is an example batch job that processes a list of user IDs.",
  "_comment_description": "A more detailed description of the job's purpose",
  "api_endpoint": "https://api.example.com/users/{user_id}",
  "_comment_api_endpoint": "The API endpoint to be called for each item in the batch. Use {item_id} placeholders for substitutions.",
  "http_method": "GET",
  "_comment_http_method": "The HTTP method to use for the API calls (GET, POST, PUT, DELETE, PATCH)",
  "headers": {
    "_comment": "Optional headers to include in the API requests",
    "Content-Type": "application/json",
    "Authorization": "Bearer YOUR_API_KEY"
  },
  "request_body_template": null,
  "_comment_request_body_template": "Optional template for the request body. Leave null for GET requests. Can use {item_id} placeholders.",
  "items": [
    "user123",
    "user456",
    "user789",
    "user101",
    "user112"
  ],
  "_comment_items": "An array of item IDs to process in the batch",
  "item_id_key": null,
  "_comment_item_id_key": "If items is a list of objects, this is the key to use for the item ID. If null, items is treated as a list of IDs.",
  "max_concurrent_requests": 5,
  "_comment_max_concurrent_requests": "The maximum number of concurrent API requests to make",
  "retry_attempts": 3,
  "_comment_retry_attempts": "The number of times to retry a failed API request",
  "retry_delay_seconds": 2,
  "_comment_retry_delay_seconds": "The delay in seconds between retry attempts",
  "success_codes": [
    200,
    201
  ],
  "_comment_success_codes": "HTTP status codes that indicate a successful API call",
  "error_handling": "continue",
  "_comment_error_handling": "How to handle errors: 'continue' to process all items, 'stop' to halt on first error",
  "callback_url": null,
  "_comment_callback_url": "Optional URL to call when the job is complete, passing job status and results. e.g. https://your-app.com/batch-callback",
  "callback_method": "POST",
  "_comment_callback_method": "The HTTP method for the callback URL (POST, PUT)",
  "callback_headers": {
    "_comment": "Optional headers for the callback request",
    "Content-Type": "application/json"
  },
  "metadata": {
    "_comment": "Optional metadata to associate with the job. Useful for tracking or filtering jobs.",
    "owner": "team-alpha",
    "priority": "medium"
  }
}
72  skills/skill-adapter/assets/progress_report_template.md  Normal file
@@ -0,0 +1,72 @@
# Batch Processing Job Progress Report

This report provides a summary of the progress for a batch processing job.

## Job Information

* **Job ID:** `[Insert Job ID Here - e.g., job-2024-10-26-001]`
* **Job Name:** `[Insert Job Name Here - e.g., Import Customer Data]`
* **Job Description:** `[Insert a brief description of the job - e.g., Imports customer data from CSV file into the database.]`
* **Start Time:** `[Insert Job Start Time - e.g., 2024-10-26 08:00:00 UTC]`
* **End Time:** `[Insert Job End Time - e.g., 2024-10-26 10:30:00 UTC (or "In Progress")]`
* **Status:** `[Insert Job Status - e.g., Completed, In Progress, Failed, Partially Completed]`

## Input Data

* **Source:** `[Insert Source of Input Data - e.g., CSV file: customer_data.csv, S3 Bucket: s3://my-bucket/data]`
* **Number of Records:** `[Insert Total Number of Records to Process - e.g., 10,000]`

## Processing Summary

| Metric            | Value      |
|-------------------|------------|
| Total Records     | `[Insert Total Records]` |
| Records Processed | `[Insert Records Processed]` |
| Records Succeeded | `[Insert Records Succeeded]` |
| Records Failed    | `[Insert Records Failed]` |
| Success Rate      | `[Insert Success Rate (e.g., 95%)]` |
| Failure Rate      | `[Insert Failure Rate (e.g., 5%)]` |

**Example:**

| Metric            | Value  |
|-------------------|--------|
| Total Records     | 1000   |
| Records Processed | 750    |
| Records Succeeded | 700    |
| Records Failed    | 50     |
| Success Rate      | 93.33% |
| Failure Rate      | 6.67%  |

## Detailed Results (Optional)

This section can include more detailed information about the processed records. You can tailor this section to your specific needs.

* **Successful Records:** `[Insert a summary or link to successful record details - e.g., A list of successful record IDs can be found in successful_records.log]`
* **Failed Records:** `[Insert a summary or link to failed record details - e.g., A list of failed record IDs and error messages can be found in failed_records.log]`
* **Example Error Message:** `[Insert Example Error Message - e.g., "Invalid email format for record ID: 123"]`

## Performance Metrics

* **Processing Time:** `[Insert Total Processing Time - e.g., 2 hours 30 minutes]`
* **Average Processing Time per Record:** `[Insert Average Time per Record - e.g., 0.9 seconds]`
* **Peak Memory Usage:** `[Insert Peak Memory Usage - e.g., 2GB]`

## Errors and Warnings

* `[List any errors or warnings encountered during processing. Include timestamps and specific details.]`
* **Example:** `2024-10-26 09:15:00 UTC - Warning: Rate limit exceeded for API endpoint. Retrying in 60 seconds.`
* **Example:** `2024-10-26 09:30:00 UTC - Error: Database connection lost. Attempting to reconnect.`

## Recommendations

* `[Insert any recommendations for improving the job or addressing issues. - e.g., Increase the rate limit for the API endpoint to avoid rate limiting errors. Consider adding retry logic for database connection errors.]`

## Notes

* `[Insert any additional notes or comments about the job. - e.g., This job was executed with 4 parallel workers.]`

## Generated By

* `[Insert the tool or system that generated this report. - e.g., API Batch Processor Plugin]`
* **Generation Date:** `[Insert the date the report was generated. - e.g., 2024-10-26]`
28  skills/skill-adapter/assets/skill-schema.json  Normal file
@@ -0,0 +1,28 @@
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Claude Skill Configuration",
  "type": "object",
  "required": ["name", "description"],
  "properties": {
    "name": {
      "type": "string",
      "pattern": "^[a-z0-9-]+$",
      "maxLength": 64,
      "description": "Skill identifier (lowercase, hyphens only)"
    },
    "description": {
      "type": "string",
      "maxLength": 1024,
      "description": "What the skill does and when to use it"
    },
    "allowed-tools": {
      "type": "string",
      "description": "Comma-separated list of allowed tools"
    },
    "version": {
      "type": "string",
      "pattern": "^\\d+\\.\\d+\\.\\d+$",
      "description": "Semantic version (x.y.z)"
    }
  }
}
27  skills/skill-adapter/assets/test-data.json  Normal file
@@ -0,0 +1,27 @@
{
  "testCases": [
    {
      "name": "Basic activation test",
      "input": "trigger phrase example",
      "expected": {
        "activated": true,
        "toolsUsed": ["Read", "Grep"],
        "success": true
      }
    },
    {
      "name": "Complex workflow test",
      "input": "multi-step trigger example",
      "expected": {
        "activated": true,
        "steps": 3,
        "toolsUsed": ["Read", "Write", "Bash"],
        "success": true
      }
    }
  ],
  "fixtures": {
    "sampleInput": "example data",
    "expectedOutput": "processed result"
  }
}
7  skills/skill-adapter/references/README.md  Normal file
@@ -0,0 +1,7 @@
# References

Bundled resources for the api-batch-processor skill

- [ ] api_batch_processing_best_practices.md: Provides best practices for implementing batch API processing, including error handling and optimization techniques.
- [ ] job_queue_schema.md: Defines the schema for the job queue used in batch processing, including fields for job ID, status, and progress.
- [ ] error_handling_guide.md: A guide to handling common errors encountered during batch API processing.
69  skills/skill-adapter/references/best-practices.md  Normal file
@@ -0,0 +1,69 @@
# Skill Best Practices

Guidelines for optimal skill usage and development.

## For Users

### Activation Best Practices

1. **Use Clear Trigger Phrases**
   - Match phrases from the skill description
   - Be specific about intent
   - Provide necessary context

2. **Provide Sufficient Context**
   - Include relevant file paths
   - Specify the scope of analysis
   - Mention any constraints

3. **Understand Tool Permissions**
   - Check allowed-tools in the frontmatter
   - Know what the skill can and cannot do
   - Request appropriate actions

### Workflow Optimization

- Start with simple requests
- Build up to complex workflows
- Verify each step before proceeding
- Use the skill consistently for related tasks

## For Developers

### Skill Development Guidelines

1. **Clear Descriptions**
   - Include explicit trigger phrases
   - Document all capabilities
   - Specify limitations

2. **Proper Tool Permissions**
   - Use the minimal necessary tools
   - Document security implications
   - Test with restricted tools

3. **Comprehensive Documentation**
   - Provide usage examples
   - Document common pitfalls
   - Include a troubleshooting guide

### Maintenance

- Keep the version updated
- Test after tool updates
- Monitor user feedback
- Iterate on descriptions

## Performance Tips

- Scope skills to specific domains
- Avoid overlapping trigger phrases
- Keep descriptions under 1024 characters
- Test activation reliability

## Security Considerations

- Never include secrets in skill files
- Validate all inputs
- Use read-only tools when possible
- Document security requirements
70  skills/skill-adapter/references/examples.md  Normal file
@@ -0,0 +1,70 @@
# Skill Usage Examples

This document provides practical examples of how to use this skill effectively.

## Basic Usage

### Example 1: Simple Activation

**User Request:**
```
[Describe trigger phrase here]
```

**Skill Response:**
1. Analyzes the request
2. Performs the required action
3. Returns results

### Example 2: Complex Workflow

**User Request:**
```
[Describe complex scenario]
```

**Workflow:**
1. Step 1: Initial analysis
2. Step 2: Data processing
3. Step 3: Result generation
4. Step 4: Validation

## Advanced Patterns

### Pattern 1: Chaining Operations

Combine this skill with other tools:
```
Step 1: Use this skill for [purpose]
Step 2: Chain with [other tool]
Step 3: Finalize with [action]
```

### Pattern 2: Error Handling

If issues occur:
- Check that the trigger phrase matches
- Verify context is available
- Review allowed-tools permissions

## Tips & Best Practices

- ✅ Be specific with trigger phrases
- ✅ Provide necessary context
- ✅ Check tool permissions match needs
- ❌ Avoid vague requests
- ❌ Don't mix unrelated tasks

## Common Issues

**Issue:** Skill doesn't activate
**Solution:** Use exact trigger phrases from the description

**Issue:** Unexpected results
**Solution:** Check input format and context

## See Also

- Main SKILL.md for full documentation
- scripts/ for automation helpers
- assets/ for configuration examples
7  skills/skill-adapter/scripts/README.md  Normal file
@@ -0,0 +1,7 @@
# Scripts

Bundled resources for the api-batch-processor skill

- [ ] batch_process_init.py: Initializes a batch processing job, setting up the queue and logging.
- [ ] batch_process_status.py: Checks the status of a batch processing job, providing progress updates.
- [ ] batch_process_cancel.py: Cancels a running batch processing job, cleaning up resources.
42  skills/skill-adapter/scripts/helper-template.sh  Executable file
@@ -0,0 +1,42 @@
#!/bin/bash
# Helper script template for skill automation
# Customize this for your skill's specific needs

set -e

function show_usage() {
    echo "Usage: $0 [options]"
    echo ""
    echo "Options:"
    echo "  -h, --help     Show this help message"
    echo "  -v, --verbose  Enable verbose output"
    echo ""
}

# Parse arguments
VERBOSE=false

while [[ $# -gt 0 ]]; do
    case $1 in
        -h|--help)
            show_usage
            exit 0
            ;;
        -v|--verbose)
            VERBOSE=true
            shift
            ;;
        *)
            echo "Unknown option: $1"
            show_usage
            exit 1
            ;;
    esac
done

# Your skill logic here
if [ "$VERBOSE" = true ]; then
    echo "Running skill automation..."
fi

echo "✅ Complete"
32  skills/skill-adapter/scripts/validation.sh  Executable file
@@ -0,0 +1,32 @@
#!/bin/bash
# Skill validation helper
# Validates skill activation and functionality

set -e

echo "🔍 Validating skill..."

# Check if SKILL.md exists
if [ ! -f "../SKILL.md" ]; then
    echo "❌ Error: SKILL.md not found"
    exit 1
fi

# Validate frontmatter
if ! grep -q "^---$" "../SKILL.md"; then
    echo "❌ Error: No frontmatter found"
    exit 1
fi

# Check required fields
if ! grep -q "^name:" "../SKILL.md"; then
    echo "❌ Error: Missing 'name' field"
    exit 1
fi

if ! grep -q "^description:" "../SKILL.md"; then
    echo "❌ Error: Missing 'description' field"
    exit 1
fi

echo "✅ Skill validation passed"