Initial commit

commands/deploy.md

---
name: specweave-kafka:deploy
description: Deploy an Apache Kafka cluster using Terraform (self-hosted Apache Kafka, AWS MSK, or Azure Event Hubs). Guides platform selection, sizing, and deployment.
---

# Deploy Kafka Cluster

Deploy Apache Kafka using Infrastructure as Code (Terraform).
## What This Command Does

1. **Platform Selection**: Helps you choose the right Kafka platform
2. **Cluster Sizing**: Calculates broker count, instance types, and storage
3. **Terraform Generation**: Creates or reuses existing Terraform modules
4. **Deployment**: Guides you through `terraform init`/`plan`/`apply` (see the sketch below)
5. **Verification**: Tests cluster connectivity and basic operations
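The deployment step follows the standard Terraform workflow. A minimal sketch — the module path, plan file name, and output variable are illustrative assumptions, not names fixed by this plugin:

```bash
# Hypothetical location of the generated module
cd terraform/kafka

terraform init                    # download providers and modules
terraform plan -out=kafka.tfplan  # review the proposed changes
terraform apply kafka.tfplan      # create the cluster

# Output variable name depends on the generated module
terraform output bootstrap_servers
```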
## Interactive Workflow

I'll ask you a few questions to determine the best deployment approach:

### Question 1: Which platform?
- **Apache Kafka** (self-hosted on AWS EC2, KRaft mode)
- **AWS MSK** (managed Kafka service)
- **Azure Event Hubs** (Kafka-compatible API)

### Question 2: What's your use case?
- **Development/Testing** (1 broker, small instance)
- **Staging** (3 brokers, medium instances)
- **Production** (3-5 brokers, large instances, multi-AZ)

### Question 3: Expected throughput?
- Messages per second (peak)
- Average message size
- Retention period (hours/days)
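These three answers drive the storage estimate. A back-of-envelope sketch — the traffic numbers and the 30% headroom figure are illustrative assumptions, not plugin defaults:

```bash
MSGS_PER_SEC=10000   # peak messages per second (illustrative)
AVG_MSG_BYTES=1024   # average message size (illustrative)
RETENTION_DAYS=7
REPLICATION_FACTOR=3

BYTES_PER_DAY=$(( MSGS_PER_SEC * AVG_MSG_BYTES * 86400 ))   # ~885 GB/day
RAW_GB=$(( BYTES_PER_DAY * RETENTION_DAYS / 1000000000 ))   # ~6.2 TB over 7 days
TOTAL_GB=$(( RAW_GB * REPLICATION_FACTOR ))                 # ~18.6 TB with RF=3

# Add ~30% headroom for indexes, open segments, and traffic bursts
echo "raw: ${RAW_GB} GB  total with RF=${REPLICATION_FACTOR}: ${TOTAL_GB} GB"
```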
Based on your answers, I'll:
- ✅ Recommend broker count and instance types
- ✅ Calculate storage requirements
- ✅ Generate Terraform configuration
- ✅ Guide deployment

## Example Usage

```bash
# Start deployment wizard
/specweave-kafka:deploy

# I'll activate the kafka-iac-deployment skill and guide you through:
# 1. Platform selection
# 2. Sizing calculation (using ClusterSizingCalculator)
# 3. Terraform module selection (apache-kafka, aws-msk, or azure-event-hubs)
# 4. Deployment execution
# 5. Post-deployment verification
```
## What Gets Created

**Apache Kafka Deployment** (AWS EC2):
- 3-5 EC2 instances (m5.xlarge or larger)
- EBS volumes (GP3, 100 GiB+ per broker)
- Security groups (SASL_SSL on port 9093)
- IAM roles for S3 backups
- CloudWatch alarms
- Load balancer (optional)

**AWS MSK Deployment**:
- MSK cluster (3-6 brokers)
- VPC, subnets, security groups
- IAM authentication
- CloudWatch monitoring
- Auto-scaling (optional)

**Azure Event Hubs Deployment**:
- Event Hubs namespace (Premium SKU)
- Event hubs (topics)
- Private endpoints
- Auto-inflate enabled
- Zone redundancy
## Prerequisites

- Terraform 1.5+ installed
- AWS CLI (for AWS deployments) or Azure CLI (for Azure)
- Appropriate cloud credentials configured
- VPC and subnets created (if deploying to cloud)
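A quick way to confirm the prerequisites before starting the wizard (standard CLI checks, nothing plugin-specific):

```bash
terraform version               # expect 1.5 or newer
aws sts get-caller-identity     # AWS: verify the active credentials
az account show                 # Azure: verify the active subscription
```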
## Post-Deployment

After deployment succeeds, I'll:
1. ✅ Output the bootstrap servers
2. ✅ Provide connection examples
3. ✅ Suggest running `/specweave-kafka:monitor-setup` for Prometheus + Grafana
4. ✅ Suggest testing locally with `/specweave-kafka:dev-env`

---

**Skills Activated**: kafka-iac-deployment, kafka-architecture
**Related Commands**: /specweave-kafka:monitor-setup, /specweave-kafka:dev-env

commands/dev-env.md

---
name: specweave-kafka:dev-env
description: Set up a local Kafka development environment using Docker Compose. Includes Kafka (KRaft mode), Schema Registry, Kafka UI, Prometheus, and Grafana.
---

# Set Up Local Kafka Dev Environment

Spin up a complete local Kafka development environment with one command.

## What This Command Does

1. **Docker Compose Selection**: Choose Kafka or Redpanda
2. **Service Configuration**: Kafka + Schema Registry + UI + monitoring
3. **Environment Setup**: Generate `docker-compose.yml`
4. **Start Services**: Run `docker-compose up -d`
5. **Verification**: Test the cluster and provide connection details
## Two Options Available

### Option 1: Apache Kafka (KRaft Mode)
**Services**:
- ✅ Kafka broker (KRaft mode, no ZooKeeper)
- ✅ Schema Registry (Avro schemas)
- ✅ Kafka UI (web interface, port 8080)
- ✅ Prometheus (metrics, port 9090)
- ✅ Grafana (dashboards, port 3000)

**Use When**: Testing Apache Kafka specifically, or you need Schema Registry

### Option 2: Redpanda (3-Node Cluster)
**Services**:
- ✅ Redpanda (3 brokers, Kafka-compatible)
- ✅ Redpanda Console (web UI, port 8080)
- ✅ Prometheus (metrics, port 9090)
- ✅ Grafana (dashboards, port 3000)

**Use When**: Testing a high-performance alternative, or you need a multi-broker cluster locally
## Example Usage

```bash
# Start dev environment setup
/specweave-kafka:dev-env

# I'll ask:
# 1. Which stack? (Kafka or Redpanda)
# 2. Where to create files? (current directory or a path you specify)
# 3. Custom ports? (use defaults or customize)

# Then I'll:
# - Generate docker-compose.yml
# - Start all services
# - Wait for health checks
# - Provide connection details
# - Open Kafka UI in the browser
```
## What Gets Created

**Directory Structure**:
```
./kafka-dev/
├── docker-compose.yml    # Main compose file
├── .env                  # Environment variables
├── data/                 # Persistent volumes
│   ├── kafka/
│   ├── prometheus/
│   └── grafana/
└── config/
    ├── prometheus.yml    # Prometheus config
    └── grafana/          # Dashboard provisioning
```

**Services Running**:
- Kafka: localhost:9092 (plaintext) or localhost:9093 (SASL_SSL)
- Schema Registry: localhost:8081
- Kafka UI: http://localhost:8080
- Prometheus: http://localhost:9090
- Grafana: http://localhost:3000 (admin/admin)
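Once the stack reports healthy, a quick smoke test (assumes the default ports above):

```bash
docker-compose ps                       # every service should show "healthy"
curl -s http://localhost:8081/subjects  # Schema Registry: returns a JSON array
curl -s http://localhost:9090/-/ready   # Prometheus readiness endpoint
```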
## Connection Examples

**After setup, connect with**:

### Producer (Node.js):
```javascript
const { Kafka } = require('kafkajs');

const kafka = new Kafka({
  clientId: 'my-app',
  brokers: ['localhost:9092']
});

const producer = kafka.producer();

async function run() {
  await producer.connect();
  await producer.send({
    topic: 'test-topic',
    messages: [{ value: 'Hello Kafka!' }]
  });
  await producer.disconnect();
}

run().catch(console.error);
```
### Consumer (Python):
```python
from kafka import KafkaConsumer  # kafka-python package

consumer = KafkaConsumer(
    'test-topic',
    bootstrap_servers=['localhost:9092'],
    group_id='my-group',
    auto_offset_reset='earliest'
)

for message in consumer:
    # message.value is raw bytes; decode before printing
    print(f"Received: {message.value.decode('utf-8')}")
```
### kcat (CLI):
```bash
# Produce message
echo "Hello Kafka" | kcat -P -b localhost:9092 -t test-topic

# Consume messages
kcat -C -b localhost:9092 -t test-topic -o beginning
```
## Sample Producer/Consumer

I'll also create sample code templates:
- `producer-nodejs.js` - Production-ready Node.js producer
- `consumer-nodejs.js` - Production-ready Node.js consumer
- `producer-python.py` - Python producer with error handling
- `consumer-python.py` - Python consumer with DLQ support

## Prerequisites

- Docker 20+ installed
- Docker Compose v2+
- 4 GB+ free RAM (for the Redpanda 3-node cluster)
- Ports available: 3000, 8080, 8081, 9090, 9092, 9093 (see the check below)
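To verify the ports are free before starting (uses `lsof`, available on macOS and most Linux distributions):

```bash
for p in 3000 8080 8081 9090 9092 9093; do
  lsof -i :"$p" >/dev/null 2>&1 && echo "port $p is already in use"
done
```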
## Post-Setup

After the environment starts, I'll:
1. ✅ Open Kafka UI in the browser (http://localhost:8080)
2. ✅ Create a test topic via the UI (or the CLI; see the sketch below)
3. ✅ Show producer/consumer examples
4. ✅ Provide kcat commands for testing
5. ✅ Show Grafana dashboards (http://localhost:3000)
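If you prefer the CLI to the UI, the topic can be created inside the broker container. The service name `kafka` is an assumption from the compose file above, and some images (e.g. Confluent's) ship `kafka-topics` without the `.sh` suffix:

```bash
docker-compose exec kafka kafka-topics.sh --create \
  --topic test-topic --partitions 3 --replication-factor 1 \
  --bootstrap-server localhost:9092
```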
## Useful Commands

```bash
# Start environment
docker-compose up -d

# Stop environment
docker-compose down

# Stop and remove data
docker-compose down -v

# View logs
docker-compose logs -f kafka

# Restart Kafka only
docker-compose restart kafka

# Check health
docker-compose ps
```

---

**Skills Activated**: kafka-cli-tools
**Docker Compose Location**: `plugins/specweave-kafka/docker/`
**Sample Code**: `plugins/specweave-kafka/docker/templates/`

commands/mcp-configure.md

---
name: specweave-kafka:mcp-configure
description: Configure an MCP (Model Context Protocol) server for Kafka integration. Auto-detects and configures the kanapuli, tuannvm, Joel-hanson, or Confluent MCP servers.
---

# Configure Kafka MCP Server

Set up MCP (Model Context Protocol) server integration for natural-language Kafka operations.

## What This Command Does

1. **MCP Server Detection**: Auto-detect installed MCP servers
2. **Server Ranking**: Recommend the best server for your needs
3. **Configuration**: Generate the Claude Desktop config
4. **Testing**: Verify MCP server connectivity
5. **Usage Guide**: Show natural-language examples
## Supported MCP Servers

| Server | Language | Features | Best For |
|--------|----------|----------|----------|
| **Confluent Official** | - | Natural language, Flink SQL, enterprise features | Production + Confluent Cloud |
| **tuannvm/kafka-mcp-server** | Go | Advanced SASL (SCRAM-SHA-256/512) | Security-focused deployments |
| **kanapuli/mcp-kafka** | Node.js | Basic operations, SASL_PLAINTEXT | Quick start, dev environments |
| **Joel-hanson/kafka-mcp-server** | Python | Claude Desktop integration | Desktop AI workflows |
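Detection probes each distribution channel. An illustrative sketch — the `kafka` grep pattern is an assumption, since exact package and binary names vary by server:

```bash
npm ls -g --depth=0 2>/dev/null | grep -i kafka        # Node.js servers (kanapuli)
pip list 2>/dev/null | grep -i kafka                   # Python servers (Joel-hanson)
ls "$(go env GOPATH)/bin" 2>/dev/null | grep -i kafka  # Go binaries (tuannvm)
```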
## Example Usage

```bash
# Start MCP configuration wizard
/specweave-kafka:mcp-configure

# I'll:
# 1. Detect installed MCP servers (npm, go, pip, CLI)
# 2. Rank servers (Confluent > tuannvm > kanapuli > Joel-hanson)
# 3. Generate Claude Desktop config (~/.claude/settings.json)
# 4. Test the connection to Kafka
# 5. Show natural-language examples
```
## What Gets Configured

**Claude Desktop Config** (`~/.claude/settings.json`):
```json
{
  "mcpServers": {
    "kafka": {
      "command": "npx",
      "args": ["mcp-kafka"],
      "env": {
        "KAFKA_BROKERS": "localhost:9092",
        "KAFKA_SASL_USERNAME": "admin",
        "KAFKA_SASL_PASSWORD": "admin-secret"
      }
    }
  }
}
```
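Two quick checks before restarting Claude Desktop — validate the JSON and confirm the broker is reachable (kcat is optional but handy):

```bash
python3 -m json.tool ~/.claude/settings.json > /dev/null && echo "config OK"
kcat -L -b localhost:9092 | head -n 5   # prints cluster metadata if reachable
```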
## Natural Language Examples

After MCP is configured, you can use natural language with Claude:

```
You: "List all Kafka topics"
Claude: [Uses MCP to call listTopics()]
Output: user-events, order-events, payment-events

You: "Create a topic called 'analytics' with 12 partitions and RF=3"
Claude: [Uses MCP to call createTopic()]
Output: Topic 'analytics' created successfully

You: "What's the consumer lag for group 'orders-consumer'?"
Claude: [Uses MCP to call getConsumerGroupOffsets()]
Output: Total lag: 1,234 messages across 6 partitions

You: "Send a test message to 'user-events' topic"
Claude: [Uses MCP to call produceMessage()]
Output: Message sent to partition 3, offset 12345
```
## Prerequisites

- Node.js 18+ (for kanapuli or Joel-hanson)
- Go 1.20+ (for tuannvm)
- Confluent Cloud account (for the Confluent MCP server)
- Kafka cluster accessible from your machine

## Post-Configuration

After MCP is configured, I'll:
1. ✅ Restart Claude Desktop (required for MCP changes to take effect)
2. ✅ Test the MCP server with a simple command
3. ✅ Show 10+ natural-language examples
4. ✅ Provide troubleshooting tips if the connection fails

---

**Skills Activated**: kafka-mcp-integration
**Related Commands**: /specweave-kafka:deploy, /specweave-kafka:dev-env
**MCP Docs**: https://modelcontextprotocol.io/

commands/monitor-setup.md

---
name: specweave-kafka:monitor-setup
description: Set up comprehensive Kafka monitoring with Prometheus + Grafana. Configures the JMX exporter, dashboards, and alerting rules.
---

# Set Up Kafka Monitoring

Configure comprehensive monitoring for your Kafka cluster using Prometheus and Grafana.

## What This Command Does

1. **JMX Exporter Setup**: Configure the Prometheus JMX exporter for Kafka brokers
2. **Prometheus Configuration**: Add Kafka scrape targets
3. **Grafana Dashboards**: Install 5 pre-built dashboards
4. **Alerting Rules**: Configure 14 critical/high/warning alerts
5. **Verification**: Test metrics collection and dashboard access
## Interactive Workflow

I'll detect your environment and guide the setup:

### Environment Detection
- **Kubernetes** (Strimzi/Confluent Operator) → Use a PodMonitor
- **Docker Compose** → Add Prometheus + Grafana services
- **VM/Bare Metal** → Configure the JMX exporter JAR

### Question 1: Where is Kafka running?
- Kubernetes (Strimzi)
- Docker Compose
- VMs/EC2 instances

### Question 2: Prometheus already installed?
- Yes → Just add the Kafka scrape config
- No → Install the Prometheus + Grafana stack
## Example Usage

```bash
# Start monitoring setup wizard
/specweave-kafka:monitor-setup

# I'll activate the kafka-observability skill and:
# 1. Detect your environment
# 2. Configure the JMX exporter (port 7071)
# 3. Set up Prometheus scraping
# 4. Install 5 Grafana dashboards
# 5. Configure 14 alerting rules
# 6. Verify metrics collection
```
## What Gets Configured

**JMX Exporter** (Kafka brokers):
- Metrics endpoint on port 7071
- 50+ critical Kafka metrics exported
- Broker, topic, consumer lag, JVM metrics
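To confirm the exporter is live on a broker host (the `kafka-0` hostname and `kafka_` metric prefix assume the standard Kafka JMX exporter config):

```bash
curl -s http://kafka-0:7071/metrics | grep -m 5 '^kafka_'   # first few Kafka metrics
```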
**Prometheus Scraping**:
```yaml
scrape_configs:
  - job_name: 'kafka'
    static_configs:
      - targets: ['kafka-0:7071', 'kafka-1:7071', 'kafka-2:7071']
```
**5 Grafana Dashboards**:
1. **Cluster Overview** - Health, throughput, ISR changes
2. **Broker Metrics** - CPU, memory, network, request handlers
3. **Consumer Lag** - Lag per group/topic, offset tracking
4. **Topic Metrics** - Partition count, replication, log size
5. **JVM Metrics** - Heap, GC, threads, file descriptors

**14 Alerting Rules**:
- CRITICAL: Under-replicated partitions, offline partitions, no controller
- HIGH: Consumer lag, ISR shrinks, leader elections
- WARNING: CPU, memory, GC time, disk usage
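Once the rule files are written, they can be validated with `promtool`, which ships with Prometheus (the rules path below is illustrative):

```bash
promtool check config /etc/prometheus/prometheus.yml
promtool check rules /etc/prometheus/rules/kafka-alerts.yml
```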
## Prerequisites

- Kafka cluster running (self-hosted or on Kubernetes)
- Prometheus installed (or it will be installed for you)
- Grafana installed (or it will be installed for you)

## Post-Setup

After setup completes, I'll:
1. ✅ Provide the Grafana URL and credentials
2. ✅ Show how to access the dashboards
3. ✅ Explain the critical alerts
4. ✅ Suggest testing alerts by stopping a broker

---

**Skills Activated**: kafka-observability
**Related Commands**: /specweave-kafka:deploy
**Dashboard Location**: `plugins/specweave-kafka/monitoring/grafana/dashboards/`