Initial commit
15
.claude-plugin/plugin.json
Normal file
@@ -0,0 +1,15 @@
{
  "name": "api-request-logger",
  "description": "Log API requests with structured logging and correlation IDs",
  "version": "1.0.0",
  "author": {
    "name": "Jeremy Longshore",
    "email": "[email protected]"
  },
  "skills": [
    "./skills"
  ],
  "commands": [
    "./commands"
  ]
}
3
README.md
Normal file
@@ -0,0 +1,3 @@
# api-request-logger

Log API requests with structured logging and correlation IDs
766
commands/setup-logging.md
Normal file
@@ -0,0 +1,766 @@
---
description: Set up API request logging
shortcut: logs
---

# Set Up API Request Logging

Implement production-grade structured logging with correlation IDs, request/response capture, PII redaction, and integration with log aggregation platforms.

## When to Use This Command

Use `/setup-logging` when you need to:
- Debug production issues with complete request context
- Track user journeys across distributed services
- Meet compliance requirements (audit trails, GDPR)
- Analyze API performance and usage patterns
- Investigate security incidents with detailed forensics
- Monitor business metrics derived from API usage

DON'T use this when:
- Building throwaway prototypes (use console.log)
- Working on extremely high-throughput systems where logging overhead matters (use sampling instead)
- Already using a comprehensive APM tool (avoid duplication)

## Design Decisions

This command implements **structured JSON logging with Winston/Bunyan** as the primary approach because:
- JSON format enables powerful query capabilities in log aggregation tools
- Structured data is easier to parse and analyze than free-text logs
- Correlation IDs enable distributed tracing across services
- Standard libraries have proven reliability at scale

**Alternative considered: Plain text logging**
- Human-readable without tools
- Difficult to query and aggregate
- No structured fields for filtering
- Recommended only for simple applications

**Alternative considered: Binary logging protocols (gRPC, protobuf)**
- More efficient storage and transmission
- Requires specialized tooling to read
- Added complexity without clear benefits for most use cases
- Recommended only for extremely high-volume scenarios

**Alternative considered: Managed logging services (Datadog, Loggly)**
- Fastest time-to-value with built-in dashboards
- Higher ongoing costs
- Potential vendor lock-in
- Recommended for teams without logging infrastructure

## Prerequisites

Before running this command:
1. Node.js/Python runtime with logging library support
2. Understanding of sensitive data in your API (for PII redaction)
3. Log aggregation platform (ELK stack, Splunk, CloudWatch, etc.)
4. Disk space or log shipping configuration for log retention
5. Compliance requirements documented (GDPR, HIPAA, SOC2)

## Implementation Process

### Step 1: Configure Structured Logger
Set up Winston (Node.js) or structlog (Python) with JSON formatting and appropriate transports.

### Step 2: Implement Correlation ID Middleware
Generate unique request IDs and propagate them through the entire request lifecycle and into downstream services.

### Step 3: Add Request/Response Logging Middleware
Capture HTTP method, path, headers, body, status code, and response time with configurable verbosity.

### Step 4: Implement PII Redaction
Identify and mask sensitive data (passwords, tokens, credit cards, SSNs) before logging.

### Step 5: Configure Log Shipping
Set up log rotation, compression, and shipping to a centralized log aggregation platform.

## Output Format

The command generates:
- `logger.js` or `logger.py` - Core logging configuration and utilities
- `logging-middleware.js` - Express/FastAPI middleware for request logging
- `pii-redactor.js` - PII detection and masking utilities
- `log-shipping-config.json` - Fluentd/Filebeat/Logstash configuration
- `logger.test.js` - Test suite for logging functionality
- `README.md` - Integration guide and best practices

## Code Examples

### Example 1: Structured Logging with Winston and Correlation IDs

```javascript
// logger.js - Winston configuration with correlation IDs
const winston = require('winston');
const { v4: uuidv4 } = require('uuid');
const cls = require('cls-hooked');

// Create namespace for correlation ID context
const namespace = cls.createNamespace('request-context');

// Custom format for correlation ID
const correlationIdFormat = winston.format((info) => {
  const correlationId = namespace.get('correlationId');
  if (correlationId) {
    info.correlationId = correlationId;
  }
  return info;
});

// Custom format for sanitizing sensitive data
const sanitizeFormat = winston.format((info) => {
  if (info.meta && typeof info.meta === 'object') {
    info.meta = sanitizeSensitiveData(info.meta);
  }
  if (info.req && info.req.headers) {
    info.req.headers = sanitizeHeaders(info.req.headers);
  }
  return info;
});

// Create logger instance
const logger = winston.createLogger({
  level: process.env.LOG_LEVEL || 'info',
  format: winston.format.combine(
    winston.format.timestamp({ format: 'YYYY-MM-DD HH:mm:ss.SSS' }),
    correlationIdFormat(),
    sanitizeFormat(),
    winston.format.errors({ stack: true }),
    winston.format.json()
  ),
  defaultMeta: {
    service: process.env.SERVICE_NAME || 'api',
    environment: process.env.NODE_ENV || 'development',
    version: process.env.APP_VERSION || '1.0.0'
  },
  transports: [
    // Console transport for development
    new winston.transports.Console({
      format: winston.format.combine(
        winston.format.colorize(),
        winston.format.printf(({ timestamp, level, message, correlationId, ...meta }) => {
          const corrId = correlationId ? `[${correlationId}]` : '';
          return `${timestamp} ${level} ${corrId}: ${message} ${JSON.stringify(meta)}`;
        })
      )
    }),
    // File transport for production
    new winston.transports.File({
      filename: 'logs/error.log',
      level: 'error',
      maxsize: 10485760, // 10MB
      maxFiles: 5
    }),
    new winston.transports.File({
      filename: 'logs/combined.log',
      maxsize: 10485760, // 10MB
      maxFiles: 10
    })
  ],
  // Don't exit on uncaught exception
  exitOnError: false
});

// PII redaction utilities
function sanitizeSensitiveData(obj) {
  // Entries must be lowercase: keys are lowercased before comparison below
  const sensitiveKeys = ['password', 'token', 'apikey', 'secret', 'authorization', 'creditcard', 'ssn', 'cvv'];
  const sanitized = { ...obj };

  for (const key of Object.keys(sanitized)) {
    const lowerKey = key.toLowerCase();
    if (sensitiveKeys.some(sensitive => lowerKey.includes(sensitive))) {
      sanitized[key] = '[REDACTED]';
    } else if (typeof sanitized[key] === 'object' && sanitized[key] !== null) {
      sanitized[key] = sanitizeSensitiveData(sanitized[key]);
    }
  }

  return sanitized;
}

function sanitizeHeaders(headers) {
  const sanitized = { ...headers };
  const sensitiveHeaders = ['authorization', 'cookie', 'x-api-key', 'x-auth-token'];

  for (const header of sensitiveHeaders) {
    if (sanitized[header]) {
      sanitized[header] = '[REDACTED]';
    }
  }

  return sanitized;
}

// Middleware to set correlation ID
function correlationIdMiddleware(req, res, next) {
  namespace.run(() => {
    const correlationId = req.headers['x-correlation-id'] || uuidv4();
    namespace.set('correlationId', correlationId);
    res.setHeader('X-Correlation-Id', correlationId);
    next();
  });
}

// Request logging middleware
function requestLoggingMiddleware(req, res, next) {
  const startTime = Date.now();

  // Log incoming request
  logger.info('Incoming request', {
    method: req.method,
    path: req.path,
    query: req.query,
    ip: req.ip,
    userAgent: req.get('user-agent'),
    userId: req.user?.id,
    body: shouldLogBody(req) ? sanitizeSensitiveData(req.body) : '[OMITTED]'
  });

  // Capture response
  const originalSend = res.send;
  res.send = function (data) {
    res.send = originalSend;

    const duration = Date.now() - startTime;
    const statusCode = res.statusCode;

    // Response bodies are not always JSON (e.g. plain-text error pages),
    // so parse defensively instead of letting JSON.parse throw
    let loggedResponse = '[OMITTED]';
    if (shouldLogResponse(req, statusCode)) {
      try {
        loggedResponse = sanitizeSensitiveData(JSON.parse(data));
      } catch (e) {
        loggedResponse = '[NON-JSON BODY]';
      }
    }

    // Log outgoing response
    logger.info('Outgoing response', {
      method: req.method,
      path: req.path,
      statusCode,
      duration,
      userId: req.user?.id,
      responseSize: data?.length || 0,
      response: loggedResponse
    });

    return res.send(data);
  };

  next();
}

function shouldLogBody(req) {
  // Only log body for specific endpoints or methods
  const logBodyPaths = ['/api/auth/login', '/api/users'];
  return req.method !== 'GET' && logBodyPaths.some(path => req.path.startsWith(path));
}

function shouldLogResponse(req, statusCode) {
  // Log responses for errors or specific endpoints
  return statusCode >= 400 || req.path.startsWith('/api/critical');
}

module.exports = {
  logger,
  correlationIdMiddleware,
  requestLoggingMiddleware,
  namespace
};
```
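
Example 1 returns the correlation ID to the caller, but end-to-end traces also need the ID to ride along on outbound calls. A minimal sketch, assuming the `namespace` exported by `logger.js` above and using `axios` purely as an illustrative HTTP client:

```javascript
// propagate-correlation.js - sketch: forward the correlation ID downstream
// so services called from this one join the same trace
const axios = require('axios');
const { namespace } = require('./logger');

async function callDownstream(url, options = {}) {
  const correlationId = namespace.get('correlationId');
  const headers = {
    ...(options.headers || {}),
    // Downstream services pick this up in their own correlation middleware
    ...(correlationId ? { 'X-Correlation-Id': correlationId } : {})
  };
  return axios({ url, ...options, headers });
}

module.exports = { callDownstream };
```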

### Example 2: Python Structured Logging with FastAPI and PII Redaction

```python
# logger.py - Structlog configuration with PII redaction
import logging
import os
import re
import time
import uuid
from contextvars import ContextVar
from typing import Any, Optional

import structlog
from fastapi import Request

# Context variable for correlation ID
correlation_id_var: ContextVar[Optional[str]] = ContextVar('correlation_id', default=None)

# PII patterns for redaction
PII_PATTERNS = {
    'email': re.compile(r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b'),
    'ssn': re.compile(r'\b\d{3}-\d{2}-\d{4}\b'),
    'credit_card': re.compile(r'\b\d{4}[\s-]?\d{4}[\s-]?\d{4}[\s-]?\d{4}\b'),
    'phone': re.compile(r'\b\d{3}[-.]?\d{3}[-.]?\d{4}\b'),
    'ip_address': re.compile(r'\b(?:\d{1,3}\.){3}\d{1,3}\b')
}

SENSITIVE_KEYS = ['password', 'token', 'secret', 'api_key', 'authorization', 'credit_card', 'ssn', 'cvv']

def redact_pii(data: Any) -> Any:
    """Recursively redact PII from data structures"""
    if isinstance(data, dict):
        redacted = {}
        for key, value in data.items():
            # Check if key is sensitive
            if any(sensitive in key.lower() for sensitive in SENSITIVE_KEYS):
                redacted[key] = '[REDACTED]'
            else:
                redacted[key] = redact_pii(value)
        return redacted

    elif isinstance(data, list):
        return [redact_pii(item) for item in data]

    elif isinstance(data, str):
        # Apply PII pattern matching
        redacted_str = data
        for pattern_name, pattern in PII_PATTERNS.items():
            if pattern_name == 'email':
                # Partially redact emails (keep domain)
                redacted_str = pattern.sub(lambda m: f"***@{m.group(0).split('@')[1]}", redacted_str)
            else:
                redacted_str = pattern.sub('[REDACTED]', redacted_str)
        return redacted_str

    return data

def add_correlation_id(logger, method_name, event_dict):
    """Add correlation ID to log context"""
    correlation_id = correlation_id_var.get()
    if correlation_id:
        event_dict['correlation_id'] = correlation_id
    return event_dict

def add_service_context(logger, method_name, event_dict):
    """Add service metadata to logs"""
    event_dict['service'] = os.getenv('SERVICE_NAME', 'api')
    event_dict['environment'] = os.getenv('ENVIRONMENT', 'development')
    event_dict['version'] = os.getenv('APP_VERSION', '1.0.0')
    return event_dict

# Configure structlog
structlog.configure(
    processors=[
        structlog.contextvars.merge_contextvars,
        structlog.stdlib.filter_by_level,
        structlog.stdlib.add_logger_name,
        structlog.stdlib.add_log_level,
        add_correlation_id,
        add_service_context,
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.stdlib.ProcessorFormatter.wrap_for_formatter,
    ],
    logger_factory=structlog.stdlib.LoggerFactory(),
    cache_logger_on_first_use=True,
)

# Configure standard library logging
logging.basicConfig(
    format="%(message)s",
    level=logging.INFO,
    handlers=[
        logging.StreamHandler(),
        logging.FileHandler('logs/app.log')
    ]
)

# Create logger instance
logger = structlog.get_logger()

class RequestLoggingMiddleware:
    """FastAPI (ASGI) middleware for comprehensive request logging"""

    def __init__(self, app):
        self.app = app
        self.logger = structlog.get_logger()

    async def __call__(self, scope, receive, send):
        if scope['type'] != 'http':
            await self.app(scope, receive, send)
            return

        request = Request(scope, receive)

        # Honor an incoming correlation ID (see Troubleshooting), else generate one
        correlation_id = request.headers.get('x-correlation-id') or str(uuid.uuid4())
        correlation_id_var.set(correlation_id)

        start_time = time.time()

        # Log incoming request.
        # NOTE: reading the body here consumes the ASGI receive stream; a
        # production version should buffer the messages and replay them to the app.
        body = await self._get_body(request)
        self.logger.info(
            "Incoming request",
            method=request.method,
            path=request.url.path,
            query_params=dict(request.query_params),
            client_ip=request.client.host if request.client else None,
            user_agent=request.headers.get('user-agent'),
            body=redact_pii(body) if self._should_log_body(request) else '[OMITTED]'
        )

        # Capture response
        status_code = 500
        response_body = b''

        async def send_wrapper(message):
            nonlocal status_code, response_body
            if message['type'] == 'http.response.start':
                status_code = message['status']
                # Add correlation ID header
                headers = list(message.get('headers', []))
                headers.append((b'x-correlation-id', correlation_id.encode()))
                message['headers'] = headers
            elif message['type'] == 'http.response.body':
                response_body += message.get('body', b'')
            await send(message)

        try:
            await self.app(scope, receive, send_wrapper)
        finally:
            duration = time.time() - start_time

            # Log outgoing response
            self.logger.info(
                "Outgoing response",
                method=request.method,
                path=request.url.path,
                status_code=status_code,
                duration_ms=round(duration * 1000, 2),
                response_size=len(response_body),
                response=redact_pii(response_body.decode(errors='replace')) if self._should_log_response(status_code) else '[OMITTED]'
            )

    async def _get_body(self, request: Request) -> dict:
        """Safely get request body"""
        try:
            return await request.json()
        except Exception:
            return {}

    def _should_log_body(self, request: Request) -> bool:
        """Determine if request body should be logged"""
        sensitive_paths = ['/api/auth', '/api/payment']
        return not any(request.url.path.startswith(path) for path in sensitive_paths)

    def _should_log_response(self, status_code: int) -> bool:
        """Determine if response body should be logged"""
        return status_code >= 400  # Log error responses

# Usage in FastAPI
from fastapi import FastAPI

app = FastAPI()

# Add middleware
app.add_middleware(RequestLoggingMiddleware)

# Example endpoint with logging
@app.get("/api/users/{user_id}")
async def get_user(user_id: int):
    logger.info("Fetching user", user_id=user_id)
    try:
        # Simulate user fetch
        user = {"id": user_id, "email": "user@example.com", "ssn": "123-45-6789"}
        logger.info("User fetched successfully", user_id=user_id)
        return redact_pii(user)
    except Exception as e:
        logger.error("Failed to fetch user", user_id=user_id, error=str(e))
        raise
```

### Example 3: Log Shipping with Filebeat and ELK Stack

```yaml
# filebeat.yml - Filebeat configuration for log shipping
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/api/combined.log
    json.keys_under_root: true
    json.add_error_key: true
    fields:
      service: api-gateway
      datacenter: us-east-1
    fields_under_root: true

  - type: log
    enabled: true
    paths:
      - /var/log/api/error.log
    json.keys_under_root: true
    json.add_error_key: true
    fields:
      service: api-gateway
      datacenter: us-east-1
      log_level: error
    fields_under_root: true

# Processors for enrichment
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

  # Drop debug logs in production
  - drop_event:
      when:
        and:
          - equals:
              environment: production
          - equals:
              level: debug

# Output to Elasticsearch
output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  index: "api-logs-%{+yyyy.MM.dd}"
  username: "${ELASTICSEARCH_USERNAME}"
  password: "${ELASTICSEARCH_PASSWORD}"

# Output to Logstash (alternative)
# output.logstash:
#   hosts: ["logstash:5044"]
#   compression_level: 3
#   bulk_max_size: 2048

# Logging configuration
logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644

# Enable monitoring
monitoring.enabled: true
```

```yaml
# docker-compose.yml - Complete ELK stack for log aggregation
version: '3.8'

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.10.0
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.security.enabled=false
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
    networks:
      - logging

  logstash:
    image: docker.elastic.co/logstash/logstash:8.10.0
    container_name: logstash
    volumes:
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf
    ports:
      - "5044:5044"
    networks:
      - logging
    depends_on:
      - elasticsearch

  kibana:
    image: docker.elastic.co/kibana/kibana:8.10.0
    container_name: kibana
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
    networks:
      - logging
    depends_on:
      - elasticsearch

  filebeat:
    image: docker.elastic.co/beats/filebeat:8.10.0
    container_name: filebeat
    user: root
    volumes:
      - ./filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
      - /var/log/api:/var/log/api:ro
      - filebeat-data:/usr/share/filebeat/data
    command: filebeat -e -strict.perms=false
    networks:
      - logging
    depends_on:
      - elasticsearch
      - logstash

networks:
  logging:
    driver: bridge

volumes:
  elasticsearch-data:
  filebeat-data:
```
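
The compose file mounts a `./logstash.conf` that is not shown above. If the Logstash route is used instead of Filebeat's direct Elasticsearch output, a minimal pipeline matching this setup might look like the following sketch (port and index name mirror the Filebeat config):

```conf
# logstash.conf - minimal pipeline for the compose file above
input {
  beats {
    port => 5044
  }
}

filter {
  # Filebeat already decodes the JSON log lines;
  # add parsing or enrichment here if needed
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "api-logs-%{+YYYY.MM.dd}"
  }
}
```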

## Error Handling

| Error | Cause | Solution |
|-------|-------|----------|
| "Log file not writable" | Permission issues | Ensure log directory has correct permissions, run with appropriate user |
| "Disk space full" | Logs not rotated | Implement log rotation, compress old logs, ship to remote storage |
| "PII detected in logs" | Incomplete redaction rules | Review and update PII patterns, audit existing logs |
| "High logging latency" | Synchronous file writes | Use async logging, buffer writes, or log to separate thread |
| "Lost correlation IDs" | Missing middleware or context propagation | Ensure middleware order is correct, propagate context to async operations |

## Configuration Options

**Log Levels**
- `debug`: Verbose output for development (not in production)
- `info`: General operational messages (default)
- `warn`: Unexpected but handled conditions
- `error`: Errors requiring attention
- `fatal`: Critical errors causing service failure

**Log Rotation** (see the sketch after this list)
- **Size-based**: Rotate when a file reaches 10MB
- **Time-based**: Rotate daily at midnight
- **Retention**: Keep 7-30 days based on compliance requirements
- **Compression**: Gzip rotated logs to save space
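
One way to combine all four options in the Winston setup from Example 1 is the `winston-daily-rotate-file` transport; a sketch, assuming the package is installed alongside winston:

```javascript
// rotation sketch: daily rotation + size cap + gzip + 14-day retention
const winston = require('winston');
const DailyRotateFile = require('winston-daily-rotate-file');

const rotatingTransport = new DailyRotateFile({
  filename: 'logs/combined-%DATE%.log',
  datePattern: 'YYYY-MM-DD',   // rotate daily at midnight
  maxSize: '10m',              // also rotate when a file reaches 10MB
  maxFiles: '14d',             // keep 14 days, then delete
  zippedArchive: true          // gzip rotated files
});

const rotatingLogger = winston.createLogger({
  format: winston.format.json(),
  transports: [rotatingTransport]
});
```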

**PII Redaction Strategies**
- **Pattern matching**: Regex for emails, SSNs, credit cards
- **Key-based**: Redact specific field names (password, token)
- **Partial redaction**: Keep domain for emails (***@example.com)
- **Tokenization**: Replace with a consistent token for analysis (see the sketch below)
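
Tokenization can be as simple as a keyed hash: the same value always maps to the same opaque token, so analysts can still group and join on the field without seeing it. A minimal sketch (the secret handling here is illustrative only):

```javascript
const crypto = require('crypto');

// Deterministic, non-reversible token: same input -> same token
function tokenize(value, secret = process.env.LOG_TOKEN_SECRET || 'dev-only-secret') {
  return 'tok_' + crypto
    .createHmac('sha256', secret)
    .update(String(value))
    .digest('hex')
    .slice(0, 16);
}

// tokenize('user@example.com') === tokenize('user@example.com') -> true
```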

## Best Practices

DO:
- Use structured JSON logging for machine readability
- Generate correlation IDs for request tracking across services
- Redact PII before logging (passwords, tokens, SSNs, credit cards)
- Include sufficient context (user ID, request path, duration)
- Set appropriate log levels (info for production, debug for development)
- Implement log rotation and retention policies

DON'T:
- Log passwords, API keys, or authentication tokens
- Use console.log in production (no structure or persistence)
- Log full request/response bodies without sanitization
- Ignore log volume (can cause disk space or cost issues)
- Log at debug level in production (performance impact)
- Forget to propagate correlation IDs to downstream services

TIPS:
- Start with conservative logging, increase verbosity during incidents
- Use log sampling for high-volume endpoints (log 1%)
- Create dashboards for common queries (error rates, slow requests)
- Set up alerts for error rate spikes or specific error patterns
- Document the log schema for easier querying
- Test PII redaction with known sensitive data (see the test sketch below)
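
For the last tip, a test along these lines pins down the redaction behavior. Jest is assumed, as is exporting `sanitizeSensitiveData` from `logger.js` (Example 1 does not export it by default):

```javascript
// logger.test.js - sketch: verify redaction with known sensitive data
const { sanitizeSensitiveData } = require('./logger'); // assumes the helper is exported

test('redacts sensitive keys, including nested ones', () => {
  const input = {
    username: 'alice',
    password: 'hunter2',
    profile: { apiKey: 'sk-123', city: 'Berlin' }
  };
  const out = sanitizeSensitiveData(input);
  expect(out.password).toBe('[REDACTED]');
  expect(out.profile.apiKey).toBe('[REDACTED]');
  expect(out.username).toBe('alice'); // non-sensitive fields untouched
});
```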

## Performance Considerations

**Logging Overhead**
- Structured logging: ~0.1-0.5ms per log statement
- JSON serialization: negligible for small objects
- PII redaction: 1-2ms for complex objects
- File I/O: use async writes to avoid blocking

**Optimization Strategies**
- Use log levels to control verbosity
- Sample high-volume logs (log 1 in 100 requests; see the sketch below)
- Buffer logs before writing to disk
- Use a separate thread for log processing
- Compress logs before shipping to reduce bandwidth
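
Sampling can be a simple probability gate in front of the logger; a minimal sketch for the 1-in-100 case, recording the rate so aggregate counts can be scaled back up:

```javascript
// sampled logging sketch: log ~1% of routine high-volume events
const SAMPLE_RATE = 0.01;

function sampledInfo(logger, message, meta = {}) {
  // errors and warnings should bypass sampling; only gate routine info logs
  if (Math.random() < SAMPLE_RATE) {
    logger.info(message, { ...meta, sampleRate: SAMPLE_RATE });
  }
}
```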

**Volume Management**
- Typical API: 100-500 log lines per request
- At 1000 req/s: 100k-500k log lines/s
- With 1KB per line: 100-500 MB/s log volume
- Plan for log retention and storage costs

## Security Considerations

1. **PII Protection**: Redact sensitive data before logging (GDPR, CCPA compliance)
2. **Access Control**: Restrict log access to authorized personnel only
3. **Encryption**: Encrypt logs at rest and in transit
4. **Audit Trail**: Log administrative actions (config changes, user access)
5. **Injection Prevention**: Sanitize user input to prevent log injection attacks (see the sketch after this list)
6. **Retention Policies**: Delete logs after the retention period (compliance requirement)
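
For point 5, the main defense is neutralizing newlines and control characters in user-controlled values before they reach the logger; JSON output escapes them anyway, but console and plain-text formats do not. A minimal sketch:

```javascript
// log-injection guard sketch: neutralize CR/LF and control characters
// in user-controlled strings before logging them
function neutralize(value) {
  return String(value)
    .replace(/[\r\n]+/g, ' ')                // collapse line breaks (entry forgery)
    .replace(/[\u0000-\u001f\u007f]/g, '');  // drop other control characters
}

// logger.info('Login attempt', { username: neutralize(req.body.username) });
```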

## Compliance Considerations

**GDPR Requirements**
- Log only necessary personal data
- Implement data minimization
- Provide a mechanism to delete user logs (right to be forgotten)
- Document data retention policies

**HIPAA Requirements**
- Encrypt logs containing PHI
- Maintain audit trails for access
- Implement access controls
- Conduct regular security audits

**SOC 2 Requirements**
- Centralized log aggregation
- Tamper-proof log storage
- Real-time monitoring and alerting
- Regular log review procedures

## Troubleshooting

**Logs Not Appearing**
```bash
# Check log file permissions
ls -la /var/log/api/

# Verify the logging middleware is registered
# (check application startup logs)

# Test the logger directly
curl -X POST http://localhost:3000/api/test -d '{"test": "data"}'
tail -f /var/log/api/combined.log
```

**Missing Correlation IDs**
```bash
# Verify the correlation ID middleware runs first
# (check middleware order in the application)

# Test correlation ID propagation
curl -H "X-Correlation-Id: test-123" http://localhost:3000/api/test
grep "test-123" /var/log/api/combined.log
```

**PII Leaking into Logs**
```bash
# Search for common PII patterns
# (note: grep -E does not support \d, so use [0-9])
grep -E '\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b' /var/log/api/combined.log
grep -E '\b[0-9]{3}-[0-9]{2}-[0-9]{4}\b' /var/log/api/combined.log

# Review and update redaction rules
# Audit existing logs and delete if necessary
```

**High Disk Usage from Logs**
```bash
# Check log directory size
du -sh /var/log/api/

# Review log rotation configuration
cat /etc/logrotate.d/api

# Manually rotate logs
logrotate -f /etc/logrotate.d/api

# Then enable compression and reduce retention in the logrotate config
```

## Related Commands

- `/create-monitoring` - Visualize log data with dashboards and alerts
- `/add-rate-limiting` - Log rate limit violations for security analysis
- `/api-security-scanner` - Audit security-relevant log events
- `/api-error-handler` - Integrate error handling with structured logging

## Version History

- v1.0.0 (2024-10): Initial implementation with Winston/structlog and PII redaction
- Planned v1.1.0: Add OpenTelemetry integration for unified observability
93
plugin.lock.json
Normal file
@@ -0,0 +1,93 @@
{
  "$schema": "internal://schemas/plugin.lock.v1.json",
  "pluginId": "gh:jeremylongshore/claude-code-plugins-plus:plugins/api-development/api-request-logger",
  "normalized": {
    "repo": null,
    "ref": "refs/tags/v20251128.0",
    "commit": "6037c07bf7cb16ebd4a396e5e5583e756b50ceef",
    "treeHash": "937a918cbc567890c5479197119eac984017a78e18088640935647b13d98a285",
    "generatedAt": "2025-11-28T10:18:07.953721Z",
    "toolVersion": "publish_plugins.py@0.2.0"
  },
  "origin": {
    "remote": "git@github.com:zhongweili/42plugin-data.git",
    "branch": "master",
    "commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
    "repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
  },
  "manifest": {
    "name": "api-request-logger",
    "description": "Log API requests with structured logging and correlation IDs",
    "version": "1.0.0"
  },
  "content": {
    "files": [
      {
        "path": "README.md",
        "sha256": "eba0c4de3782fb6fb6ebaaf5de0c907ab70f335b442d38ba9150c1a090c54f20"
      },
      {
        "path": ".claude-plugin/plugin.json",
        "sha256": "6562c7380cc6a303c0bfebbc5666513616e2a0ddb1fd2846e2a365cb6e79eb3d"
      },
      {
        "path": "commands/setup-logging.md",
        "sha256": "4363e952e8ba3141c3548ddb3156157a7c2b55d65142479e8ff4d5f58cff5aa0"
      },
      {
        "path": "skills/skill-adapter/references/examples.md",
        "sha256": "922bbc3c4ebf38b76f515b5c1998ebde6bf902233e00e2c5a0e9176f975a7572"
      },
      {
        "path": "skills/skill-adapter/references/best-practices.md",
        "sha256": "c8f32b3566252f50daacd346d7045a1060c718ef5cfb07c55a0f2dec5f1fb39e"
      },
      {
        "path": "skills/skill-adapter/references/README.md",
        "sha256": "63b95e1db48d45aecfdbfb9bc830919b8aa7f8f8875c93870f2bfe63030ac7ac"
      },
      {
        "path": "skills/skill-adapter/scripts/helper-template.sh",
        "sha256": "0881d5660a8a7045550d09ae0acc15642c24b70de6f08808120f47f86ccdf077"
      },
      {
        "path": "skills/skill-adapter/scripts/validation.sh",
        "sha256": "92551a29a7f512d2036e4f1fb46c2a3dc6bff0f7dde4a9f699533e446db48502"
      },
      {
        "path": "skills/skill-adapter/scripts/README.md",
        "sha256": "c430753ef7cc15d5dc60502ad568f77c9460fa78671a16ba6959ab69555b6842"
      },
      {
        "path": "skills/skill-adapter/assets/example_log_output.json",
        "sha256": "98dafa28b11099d45ccd002d81e55944dc8a9bf33c27fdcf72709d67de4e6335"
      },
      {
        "path": "skills/skill-adapter/assets/test-data.json",
        "sha256": "ac17dca3d6e253a5f39f2a2f1b388e5146043756b05d9ce7ac53a0042eee139d"
      },
      {
        "path": "skills/skill-adapter/assets/logging_config_template.json",
        "sha256": "fd0db958882ac2730ae1b4f89db4c3206f7a7d2cd8068b0ddb09bfd70d7effab"
      },
      {
        "path": "skills/skill-adapter/assets/README.md",
        "sha256": "87f787550cf6e2e8aece7a2fa585aa8b618e985f699c453c1e7eb0467254e34f"
      },
      {
        "path": "skills/skill-adapter/assets/skill-schema.json",
        "sha256": "f5639ba823a24c9ac4fb21444c0717b7aefde1a4993682897f5bf544f863c2cd"
      },
      {
        "path": "skills/skill-adapter/assets/config-template.json",
        "sha256": "0c2ba33d2d3c5ccb266c0848fc43caa68a2aa6a80ff315d4b378352711f83e1c"
      }
    ],
    "dirSha256": "937a918cbc567890c5479197119eac984017a78e18088640935647b13d98a285"
  },
  "security": {
    "scannedAt": null,
    "scannerVersion": null,
    "flags": []
  }
}
6
skills/skill-adapter/assets/README.md
Normal file
@@ -0,0 +1,6 @@
# Assets

Bundled resources for api-request-logger skill

- [ ] logging_config_template.json: Template for logging configuration files, including settings for formatters, handlers, and log levels.
- [ ] example_log_output.json: Example of structured log output in JSON format, demonstrating the use of correlation IDs and other metadata.
32
skills/skill-adapter/assets/config-template.json
Normal file
@@ -0,0 +1,32 @@
{
  "skill": {
    "name": "skill-name",
    "version": "1.0.0",
    "enabled": true,
    "settings": {
      "verbose": false,
      "autoActivate": true,
      "toolRestrictions": true
    }
  },
  "triggers": {
    "keywords": [
      "example-trigger-1",
      "example-trigger-2"
    ],
    "patterns": []
  },
  "tools": {
    "allowed": [
      "Read",
      "Grep",
      "Bash"
    ],
    "restricted": []
  },
  "metadata": {
    "author": "Plugin Author",
    "category": "general",
    "tags": []
  }
}
122
skills/skill-adapter/assets/example_log_output.json
Normal file
@@ -0,0 +1,122 @@
[
  {
    "_comment": "Example log entry for a successful API request",
    "level": "INFO",
    "timestamp": "2024-01-26T10:00:00.000Z",
    "message": "API request completed successfully",
    "correlation_id": "a1b2c3d4e5f6g7h8i9j0",
    "request": {
      "method": "GET",
      "url": "/api/users/123",
      "headers": {
        "Content-Type": "application/json",
        "Authorization": "Bearer <redacted>"
      }
    },
    "response": {
      "status_code": 200,
      "body_size": 150,
      "response_time_ms": 75
    },
    "user": {
      "user_id": "user123",
      "username": "testuser"
    },
    "application": "my-app",
    "environment": "production",
    "log_type": "api_request",
    "source": "api-request-logger",
    "_note": "Optional fields for request context"
  },
  {
    "_comment": "Example log entry for an API request that resulted in an error",
    "level": "ERROR",
    "timestamp": "2024-01-26T10:00:05.000Z",
    "message": "API request failed with error",
    "correlation_id": "b2c3d4e5f6g7h8i9j0a1",
    "request": {
      "method": "POST",
      "url": "/api/orders",
      "headers": {
        "Content-Type": "application/json",
        "Authorization": "Bearer <redacted>"
      },
      "body": "{\"item_id\": \"456\", \"quantity\": 2}"
    },
    "response": {
      "status_code": 500,
      "error_message": "Internal Server Error",
      "response_time_ms": 120
    },
    "user": {
      "user_id": "user456",
      "username": "anotheruser"
    },
    "application": "my-app",
    "environment": "production",
    "log_type": "api_request",
    "source": "api-request-logger",
    "error": {
      "type": "ServerError",
      "message": "Database connection failed"
    },
    "_note": "Error details included for debugging"
  },
  {
    "_comment": "Example log entry for a timed out API request",
    "level": "WARN",
    "timestamp": "2024-01-26T10:00:10.000Z",
    "message": "API request timed out",
    "correlation_id": "c3d4e5f6g7h8i9j0a1b2",
    "request": {
      "method": "GET",
      "url": "/api/slow-endpoint",
      "headers": {
        "Content-Type": "application/json",
        "Authorization": "Bearer <redacted>"
      }
    },
    "response": {
      "status_code": 408,
      "error_message": "Request Timeout",
      "response_time_ms": 30000
    },
    "user": {
      "user_id": "user789",
      "username": "timeoutuser"
    },
    "application": "my-app",
    "environment": "staging",
    "log_type": "api_request",
    "source": "api-request-logger",
    "timeout_ms": 30000,
    "_note": "Included timeout value"
  },
  {
    "_comment": "Example log entry for an API request with missing authorization",
    "level": "WARN",
    "timestamp": "2024-01-26T10:00:15.000Z",
    "message": "API request missing authorization header",
    "correlation_id": "d4e5f6g7h8i9j0a1b2c3",
    "request": {
      "method": "POST",
      "url": "/api/sensitive-data",
      "headers": {
        "Content-Type": "application/json"
      },
      "body": "{ \"sensitive_info\": \"secret\" }"
    },
    "response": {
      "status_code": 401,
      "error_message": "Unauthorized",
      "response_time_ms": 50
    },
    "application": "my-app",
    "environment": "staging",
    "log_type": "api_request",
    "source": "api-request-logger",
    "security": {
      "missing_authorization": true
    },
    "_note": "Flagging the missing authorization"
  }
]
68
skills/skill-adapter/assets/logging_config_template.json
Normal file
@@ -0,0 +1,68 @@
{
  "_comment": "Template for logging configuration. Customize this to fit your needs.",
  "version": 1,
  "disable_existing_loggers": false,
  "formatters": {
    "standard": {
      "_comment": "Standard log format with timestamp, level, and message.",
      "format": "%(asctime)s - %(levelname)s - %(name)s - %(correlation_id)s - %(message)s"
    },
    "json": {
      "_comment": "JSON log format for easier parsing by log aggregation tools.",
      "format": "{\"timestamp\": \"%(asctime)s\", \"level\": \"%(levelname)s\", \"name\": \"%(name)s\", \"correlation_id\": \"%(correlation_id)s\", \"message\": \"%(message)s\", \"module\": \"%(module)s\", \"funcName\": \"%(funcName)s\", \"lineno\": %(lineno)d}"
    }
  },
  "handlers": {
    "console": {
      "_comment": "Console handler for local development.",
      "class": "logging.StreamHandler",
      "level": "INFO",
      "formatter": "standard",
      "stream": "ext://sys.stdout"
    },
    "file": {
      "_comment": "File handler for persistent logging.",
      "class": "logging.handlers.RotatingFileHandler",
      "level": "DEBUG",
      "formatter": "standard",
      "filename": "/var/log/api_requests.log",
      "maxBytes": 10485760,
      "backupCount": 5,
      "encoding": "utf8"
    },
    "json_file": {
      "_comment": "JSON file handler for persistent logging in JSON format.",
      "class": "logging.handlers.RotatingFileHandler",
      "level": "DEBUG",
      "formatter": "json",
      "filename": "/var/log/api_requests.json",
      "maxBytes": 10485760,
      "backupCount": 5,
      "encoding": "utf8"
    }
  },
  "loggers": {
    "api_request_logger": {
      "_comment": "Logger for API requests.",
      "level": "DEBUG",
      "handlers": ["console", "file", "json_file"],
      "propagate": false
    },
    "other_module": {
      "_comment": "Example of another module's logger.",
      "level": "WARNING",
      "handlers": ["console"],
      "propagate": true
    }
  },
  "root": {
    "_comment": "Root logger configuration.",
    "level": "WARNING",
    "handlers": ["console"]
  },
  "correlation_id": {
    "_comment": "Configuration for correlation ID generation.",
    "header_name": "X-Correlation-ID",
    "generator": "uuid"
  }
}
28
skills/skill-adapter/assets/skill-schema.json
Normal file
@@ -0,0 +1,28 @@
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Claude Skill Configuration",
  "type": "object",
  "required": ["name", "description"],
  "properties": {
    "name": {
      "type": "string",
      "pattern": "^[a-z0-9-]+$",
      "maxLength": 64,
      "description": "Skill identifier (lowercase, hyphens only)"
    },
    "description": {
      "type": "string",
      "maxLength": 1024,
      "description": "What the skill does and when to use it"
    },
    "allowed-tools": {
      "type": "string",
      "description": "Comma-separated list of allowed tools"
    },
    "version": {
      "type": "string",
      "pattern": "^\\d+\\.\\d+\\.\\d+$",
      "description": "Semantic version (x.y.z)"
    }
  }
}
27
skills/skill-adapter/assets/test-data.json
Normal file
@@ -0,0 +1,27 @@
{
  "testCases": [
    {
      "name": "Basic activation test",
      "input": "trigger phrase example",
      "expected": {
        "activated": true,
        "toolsUsed": ["Read", "Grep"],
        "success": true
      }
    },
    {
      "name": "Complex workflow test",
      "input": "multi-step trigger example",
      "expected": {
        "activated": true,
        "steps": 3,
        "toolsUsed": ["Read", "Write", "Bash"],
        "success": true
      }
    }
  ],
  "fixtures": {
    "sampleInput": "example data",
    "expectedOutput": "processed result"
  }
}
7
skills/skill-adapter/references/README.md
Normal file
@@ -0,0 +1,7 @@
# References

Bundled resources for api-request-logger skill

- [ ] logging_best_practices.md: Detailed guide on best practices for structured logging, including formatting, levels, and security considerations.
- [ ] correlation_id_implementation.md: Explanation of how correlation IDs are implemented and used for tracing requests across services.
- [ ] log_aggregation_setup.md: Instructions on setting up log aggregation using popular tools like Elasticsearch, Logstash, and Kibana (ELK stack).
69
skills/skill-adapter/references/best-practices.md
Normal file
@@ -0,0 +1,69 @@
# Skill Best Practices

Guidelines for optimal skill usage and development.

## For Users

### Activation Best Practices

1. **Use Clear Trigger Phrases**
   - Match phrases from the skill description
   - Be specific about intent
   - Provide necessary context

2. **Provide Sufficient Context**
   - Include relevant file paths
   - Specify the scope of analysis
   - Mention any constraints

3. **Understand Tool Permissions**
   - Check allowed-tools in the frontmatter
   - Know what the skill can and cannot do
   - Request appropriate actions

### Workflow Optimization

- Start with simple requests
- Build up to complex workflows
- Verify each step before proceeding
- Use the skill consistently for related tasks

## For Developers

### Skill Development Guidelines

1. **Clear Descriptions**
   - Include explicit trigger phrases
   - Document all capabilities
   - Specify limitations

2. **Proper Tool Permissions**
   - Use the minimal necessary tools
   - Document security implications
   - Test with restricted tools

3. **Comprehensive Documentation**
   - Provide usage examples
   - Document common pitfalls
   - Include a troubleshooting guide

### Maintenance

- Keep the version updated
- Test after tool updates
- Monitor user feedback
- Iterate on descriptions

## Performance Tips

- Scope skills to specific domains
- Avoid overlapping trigger phrases
- Keep descriptions under 1024 chars
- Test activation reliability

## Security Considerations

- Never include secrets in skill files
- Validate all inputs
- Use read-only tools when possible
- Document security requirements
70
skills/skill-adapter/references/examples.md
Normal file
@@ -0,0 +1,70 @@
# Skill Usage Examples

This document provides practical examples of how to use this skill effectively.

## Basic Usage

### Example 1: Simple Activation

**User Request:**
```
[Describe trigger phrase here]
```

**Skill Response:**
1. Analyzes the request
2. Performs the required action
3. Returns results

### Example 2: Complex Workflow

**User Request:**
```
[Describe complex scenario]
```

**Workflow:**
1. Step 1: Initial analysis
2. Step 2: Data processing
3. Step 3: Result generation
4. Step 4: Validation

## Advanced Patterns

### Pattern 1: Chaining Operations

Combine this skill with other tools:
```
Step 1: Use this skill for [purpose]
Step 2: Chain with [other tool]
Step 3: Finalize with [action]
```

### Pattern 2: Error Handling

If issues occur:
- Check trigger phrase matches
- Verify context is available
- Review allowed-tools permissions

## Tips & Best Practices

- ✅ Be specific with trigger phrases
- ✅ Provide necessary context
- ✅ Check tool permissions match needs
- ❌ Avoid vague requests
- ❌ Don't mix unrelated tasks

## Common Issues

**Issue:** Skill doesn't activate
**Solution:** Use exact trigger phrases from the description

**Issue:** Unexpected results
**Solution:** Check input format and context

## See Also

- Main SKILL.md for full documentation
- scripts/ for automation helpers
- assets/ for configuration examples
7
skills/skill-adapter/scripts/README.md
Normal file
@@ -0,0 +1,7 @@
# Scripts

Bundled resources for api-request-logger skill

- [ ] setup_logging.sh: Automates the setup of logging configurations based on environment variables and best practices.
- [ ] generate_correlation_id.py: Generates a unique correlation ID for each API request.
- [ ] aggregate_logs.py: Script to aggregate logs from different sources into a centralized logging system.
42
skills/skill-adapter/scripts/helper-template.sh
Executable file
@@ -0,0 +1,42 @@
#!/bin/bash
# Helper script template for skill automation
# Customize this for your skill's specific needs

set -e

function show_usage() {
    echo "Usage: $0 [options]"
    echo ""
    echo "Options:"
    echo "  -h, --help     Show this help message"
    echo "  -v, --verbose  Enable verbose output"
    echo ""
}

# Parse arguments
VERBOSE=false

while [[ $# -gt 0 ]]; do
    case $1 in
        -h|--help)
            show_usage
            exit 0
            ;;
        -v|--verbose)
            VERBOSE=true
            shift
            ;;
        *)
            echo "Unknown option: $1"
            show_usage
            exit 1
            ;;
    esac
done

# Your skill logic here
if [ "$VERBOSE" = true ]; then
    echo "Running skill automation..."
fi

echo "✅ Complete"
32
skills/skill-adapter/scripts/validation.sh
Executable file
@@ -0,0 +1,32 @@
#!/bin/bash
# Skill validation helper
# Validates skill activation and functionality

set -e

echo "🔍 Validating skill..."

# Check if SKILL.md exists
if [ ! -f "../SKILL.md" ]; then
    echo "❌ Error: SKILL.md not found"
    exit 1
fi

# Validate frontmatter
if ! grep -q "^---$" "../SKILL.md"; then
    echo "❌ Error: No frontmatter found"
    exit 1
fi

# Check required fields
if ! grep -q "^name:" "../SKILL.md"; then
    echo "❌ Error: Missing 'name' field"
    exit 1
fi

if ! grep -q "^description:" "../SKILL.md"; then
    echo "❌ Error: Missing 'description' field"
    exit 1
fi

echo "✅ Skill validation passed"