Initial commit

Zhongwei Li
2025-11-29 18:52:45 +08:00
commit 37518e0ca3
19 changed files with 1587 additions and 0 deletions

@@ -0,0 +1,15 @@
{
"name": "grpc-service-generator",
"description": "Generate gRPC services with Protocol Buffers and streaming support",
"version": "1.0.0",
"author": {
"name": "Jeremy Longshore",
"email": "[email protected]"
},
"skills": [
"./skills"
],
"commands": [
"./commands"
]
}

README.md Normal file

@@ -0,0 +1,3 @@
# grpc-service-generator
Generate gRPC services with Protocol Buffers and streaming support

@@ -0,0 +1,944 @@
---
description: Generate production-ready gRPC services with Protocol Buffers
shortcut: grpc
---
# Generate gRPC Service
Automatically generate high-performance gRPC services with Protocol Buffer definitions, streaming support, load balancing, and comprehensive service implementations for multiple programming languages.
## When to Use This Command
Use `/generate-grpc-service` when you need to:
- Build high-performance microservices with a binary protocol
- Implement real-time bidirectional streaming communication
- Create strongly typed service contracts across languages
- Build internal services requiring minimal latency
- Support multiple programming languages from a single service definition
- Implement efficient mobile/IoT communication protocols
DON'T use this when:
- Building browser-based web applications (browsers require a gRPC-Web proxy)
- Simple REST APIs suffice (gRPC adds complexity)
- Working with teams unfamiliar with Protocol Buffers
- Debugging tools are limited in your environment
## Design Decisions
This command implements **gRPC with Protocol Buffers v3** as the primary approach because:
- Compact binary encoding typically cuts payload size and parsing cost by 20-30% compared with JSON
- Built-in code generation for 10+ languages
- Native support for unary, server-streaming, client-streaming, and bidirectional RPCs
- Strong typing catches contract errors at code-generation time rather than at runtime
- Backward compatibility through field numbering
- Pluggable name resolution and client-side load balancing
**Alternative considered: Apache Thrift**
- Similar performance characteristics
- Less ecosystem support
- Fewer language bindings
- Recommended for Facebook ecosystem
**Alternative considered: GraphQL with subscriptions**
- Better for public APIs
- More flexible queries
- Higher overhead
- Recommended for client-facing APIs
## Prerequisites
Before running this command:
1. Protocol Buffer compiler (protoc) installed
2. Language-specific gRPC tools installed
3. Understanding of Protocol Buffer syntax
4. Service architecture defined
5. Authentication strategy determined
## Implementation Process
### Step 1: Define Service Contract
Create comprehensive .proto files with service definitions and message types.
### Step 2: Generate Language Bindings
Compile Protocol Buffers to target language code with gRPC plugins.
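For Go targets, one minimal sketch (assuming `protoc`, `protoc-gen-go`, and `protoc-gen-go-grpc` are on PATH and the definition lives at `proto/ecommerce.proto`) wires generation into the module with a `go:generate` directive so `go generate ./...` rebuilds the stubs:
```go
// gen.go - regenerate gRPC bindings with `go generate ./...`
// The proto path and output options are illustrative; adjust to your layout.
package main

//go:generate protoc --go_out=. --go_opt=paths=source_relative --go-grpc_out=. --go-grpc_opt=paths=source_relative proto/ecommerce.proto
```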
### Step 3: Implement Service Logic
Build server-side implementations for all RPC methods.
### Step 4: Add Interceptors
Implement cross-cutting concerns like auth, logging, and error handling.
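As a sketch of the pattern, a unary logging interceptor looks like the following; the auth interceptor in Example 1 has the same shape, and the two can be combined with `grpc.ChainUnaryInterceptor` (the package name here is illustrative):
```go
package interceptors

import (
    "context"
    "log"
    "time"

    "google.golang.org/grpc"
    "google.golang.org/grpc/status"
)

// LoggingInterceptor logs method, latency, and status code for every unary RPC.
func LoggingInterceptor(
    ctx context.Context,
    req interface{},
    info *grpc.UnaryServerInfo,
    handler grpc.UnaryHandler,
) (interface{}, error) {
    start := time.Now()
    resp, err := handler(ctx, req)
    log.Printf("method=%s duration=%s code=%s",
        info.FullMethod, time.Since(start), status.Code(err))
    return resp, err
}

// Registration sketch, chaining with the auth interceptor from Example 1:
//   grpc.NewServer(grpc.ChainUnaryInterceptor(LoggingInterceptor, authInterceptor))
```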
### Step 5: Configure Production Settings
Set up TLS, connection pooling, and load balancing.
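A minimal client-side sketch of these settings, assuming a DNS name that resolves to all backends (the hostname below is hypothetical), combines TLS transport credentials, keepalives, and the `round_robin` policy via the default service config:
```go
package main

import (
    "crypto/tls"
    "log"
    "time"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials"
    "google.golang.org/grpc/keepalive"
)

func main() {
    // The DNS resolver plus round_robin spreads RPCs across every backend behind the name.
    conn, err := grpc.Dial(
        "dns:///product-service.internal:50051", // hypothetical service name
        grpc.WithTransportCredentials(credentials.NewTLS(&tls.Config{MinVersion: tls.VersionTLS12})),
        grpc.WithDefaultServiceConfig(`{"loadBalancingConfig":[{"round_robin":{}}]}`),
        grpc.WithKeepaliveParams(keepalive.ClientParameters{
            Time:                2 * time.Minute,
            Timeout:             20 * time.Second,
            PermitWithoutStream: true,
        }),
    )
    if err != nil {
        log.Fatalf("dial failed: %v", err)
    }
    defer conn.Close()

    _ = conn // pass to the generated constructor, e.g. pb.NewProductServiceClient(conn)
}
```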
## Output Format
The command generates:
- `proto/service.proto` - Protocol Buffer definitions
- `server/` - Server implementation with all RPC methods
- `client/` - Client library with connection management
- `interceptors/` - Authentication, logging, metrics interceptors
- `config/` - TLS certificates and configuration
- `docs/api.md` - Service documentation
## Code Examples
### Example 1: E-commerce Service with All RPC Patterns
```protobuf
// proto/ecommerce.proto
syntax = "proto3";

package ecommerce.v1;

import "google/protobuf/timestamp.proto";
import "google/protobuf/empty.proto";

// Service definition with all RPC patterns
service ProductService {
  // Unary RPC
  rpc GetProduct(GetProductRequest) returns (Product);

  // Server streaming
  rpc ListProducts(ListProductsRequest) returns (stream Product);

  // Client streaming
  rpc ImportProducts(stream Product) returns (ImportSummary);

  // Bidirectional streaming
  rpc WatchInventory(stream InventoryUpdate) returns (stream InventoryChange);

  // Batch operations
  rpc BatchGetProducts(BatchGetProductsRequest) returns (BatchGetProductsResponse);
}

// Message definitions
message Product {
  string id = 1;
  string name = 2;
  string description = 3;
  double price = 4;
  int32 inventory = 5;
  repeated string categories = 6;
  map<string, string> metadata = 7;
  google.protobuf.Timestamp created_at = 8;
  google.protobuf.Timestamp updated_at = 9;

  enum Status {
    STATUS_UNSPECIFIED = 0;
    STATUS_ACTIVE = 1;
    STATUS_DISCONTINUED = 2;
    STATUS_OUT_OF_STOCK = 3;
  }
  Status status = 10;
}

message GetProductRequest {
  string product_id = 1;
  repeated string fields = 2;  // Field mask for partial responses
}

message ListProductsRequest {
  string category = 1;
  int32 page_size = 2;
  string page_token = 3;
  string order_by = 4;

  message Filter {
    double min_price = 1;
    double max_price = 2;
    repeated string tags = 3;
  }
  Filter filter = 5;
}

message ImportSummary {
  int32 total_received = 1;
  int32 successful = 2;
  int32 failed = 3;
  repeated ImportError errors = 4;
}

message ImportError {
  int32 index = 1;
  string product_id = 2;
  string error = 3;
}

message InventoryUpdate {
  string product_id = 1;
  int32 quantity_change = 2;
  string warehouse_id = 3;
}

message InventoryChange {
  string product_id = 1;
  int32 old_quantity = 2;
  int32 new_quantity = 3;
  google.protobuf.Timestamp timestamp = 4;
  string triggered_by = 5;
}

message BatchGetProductsRequest {
  repeated string product_ids = 1;
  repeated string fields = 2;
}

message BatchGetProductsResponse {
  repeated Product products = 1;
  repeated string not_found = 2;
}
```
```go
// server/main.go - Go server implementation
package main

import (
    "context"
    "crypto/tls"
    "io"
    "log"
    "net"
    "sync"
    "time"

    pb "github.com/company/ecommerce/proto"
    "google.golang.org/grpc"
    "google.golang.org/grpc/codes"
    "google.golang.org/grpc/credentials"
    "google.golang.org/grpc/keepalive"
    "google.golang.org/grpc/metadata"
    "google.golang.org/grpc/status"
    "google.golang.org/protobuf/types/known/timestamppb"
)

// Helper functions (applyFieldMask, matchesFilter, validateProduct, generateClientID,
// isValidToken, processInventoryUpdate, getInventory, broadcastChange) are omitted for brevity.

type productServer struct {
    pb.UnimplementedProductServiceServer
    mu       sync.RWMutex
    products map[string]*pb.Product
    watchers map[string]chan *pb.InventoryChange
}

// Unary RPC implementation
func (s *productServer) GetProduct(
    ctx context.Context,
    req *pb.GetProductRequest,
) (*pb.Product, error) {
    // Extract metadata for tracing
    if md, ok := metadata.FromIncomingContext(ctx); ok {
        if traceID := md.Get("trace-id"); len(traceID) > 0 {
            log.Printf("GetProduct request - trace: %s", traceID[0])
        }
    }

    s.mu.RLock()
    product, exists := s.products[req.ProductId]
    s.mu.RUnlock()

    if !exists {
        return nil, status.Errorf(
            codes.NotFound,
            "product %s not found",
            req.ProductId,
        )
    }

    // Apply field mask if specified
    if len(req.Fields) > 0 {
        return applyFieldMask(product, req.Fields), nil
    }
    return product, nil
}

// Server streaming implementation
func (s *productServer) ListProducts(
    req *pb.ListProductsRequest,
    stream pb.ProductService_ListProductsServer,
) error {
    s.mu.RLock()
    defer s.mu.RUnlock()

    count := 0
    for _, product := range s.products {
        // Apply filters
        if !matchesFilter(product, req) {
            continue
        }

        // Send product to stream
        if err := stream.Send(product); err != nil {
            return status.Errorf(
                codes.Internal,
                "failed to send product: %v",
                err,
            )
        }

        count++
        if req.PageSize > 0 && count >= int(req.PageSize) {
            break
        }

        // Simulate real-time processing
        time.Sleep(10 * time.Millisecond)
    }
    return nil
}

// Client streaming implementation
func (s *productServer) ImportProducts(
    stream pb.ProductService_ImportProductsServer,
) error {
    var summary pb.ImportSummary
    var errors []*pb.ImportError
    index := 0

    for {
        product, err := stream.Recv()
        if err == io.EOF {
            // Client finished sending
            summary.Errors = errors
            return stream.SendAndClose(&summary)
        }
        if err != nil {
            return status.Errorf(
                codes.Internal,
                "failed to receive product: %v",
                err,
            )
        }

        summary.TotalReceived++

        // Validate and store product
        if err := validateProduct(product); err != nil {
            summary.Failed++
            errors = append(errors, &pb.ImportError{
                Index:     int32(index),
                ProductId: product.Id,
                Error:     err.Error(),
            })
        } else {
            s.mu.Lock()
            s.products[product.Id] = product
            s.mu.Unlock()
            summary.Successful++
        }
        index++
    }
}

// Bidirectional streaming implementation
func (s *productServer) WatchInventory(
    stream pb.ProductService_WatchInventoryServer,
) error {
    // Create change channel for this client
    changeChan := make(chan *pb.InventoryChange, 100)
    clientID := generateClientID()

    s.mu.Lock()
    s.watchers[clientID] = changeChan
    s.mu.Unlock()

    defer func() {
        s.mu.Lock()
        delete(s.watchers, clientID)
        s.mu.Unlock()
        close(changeChan)
    }()

    // Handle bidirectional communication
    errChan := make(chan error, 2)

    // Goroutine to receive updates from client
    go func() {
        for {
            update, err := stream.Recv()
            if err == io.EOF {
                errChan <- nil
                return
            }
            if err != nil {
                errChan <- err
                return
            }

            // Process inventory update
            if err := s.processInventoryUpdate(update); err != nil {
                log.Printf("Failed to process update: %v", err)
                continue
            }

            // Notify all watchers
            change := &pb.InventoryChange{
                ProductId:   update.ProductId,
                NewQuantity: s.getInventory(update.ProductId),
                Timestamp:   timestamppb.Now(),
                TriggeredBy: clientID,
            }
            s.broadcastChange(change)
        }
    }()

    // Goroutine to send changes to client
    go func() {
        for change := range changeChan {
            if err := stream.Send(change); err != nil {
                errChan <- err
                return
            }
        }
    }()

    // Wait for error or completion
    return <-errChan
}

// Interceptor for authentication
func authInterceptor(
    ctx context.Context,
    req interface{},
    info *grpc.UnaryServerInfo,
    handler grpc.UnaryHandler,
) (interface{}, error) {
    // Extract token from metadata
    md, ok := metadata.FromIncomingContext(ctx)
    if !ok {
        return nil, status.Error(codes.Unauthenticated, "missing metadata")
    }

    tokens := md.Get("authorization")
    if len(tokens) == 0 {
        return nil, status.Error(codes.Unauthenticated, "missing token")
    }

    // Validate token (implement your auth logic)
    if !isValidToken(tokens[0]) {
        return nil, status.Error(codes.Unauthenticated, "invalid token")
    }

    // Continue to handler
    return handler(ctx, req)
}

// Main server setup
func main() {
    // Load TLS credentials
    cert, err := tls.LoadX509KeyPair("server.crt", "server.key")
    if err != nil {
        log.Fatalf("Failed to load certificates: %v", err)
    }
    config := &tls.Config{
        Certificates: []tls.Certificate{cert},
        ClientAuth:   tls.RequireAndVerifyClientCert,
    }
    creds := credentials.NewTLS(config)

    // Configure server options
    opts := []grpc.ServerOption{
        grpc.Creds(creds),
        grpc.UnaryInterceptor(authInterceptor),
        grpc.KeepaliveParams(keepalive.ServerParameters{
            MaxConnectionIdle: 5 * time.Minute,
            Time:              2 * time.Minute,
            Timeout:           20 * time.Second,
        }),
        grpc.KeepaliveEnforcementPolicy(keepalive.EnforcementPolicy{
            MinTime:             5 * time.Second,
            PermitWithoutStream: true,
        }),
        grpc.MaxConcurrentStreams(1000),
    }

    // Create gRPC server
    server := grpc.NewServer(opts...)

    // Register service
    pb.RegisterProductServiceServer(server, &productServer{
        products: make(map[string]*pb.Product),
        watchers: make(map[string]chan *pb.InventoryChange),
    })

    // Start listening
    lis, err := net.Listen("tcp", ":50051")
    if err != nil {
        log.Fatalf("Failed to listen: %v", err)
    }

    log.Println("gRPC server starting on :50051")
    if err := server.Serve(lis); err != nil {
        log.Fatalf("Failed to serve: %v", err)
    }
}
```
### Example 2: Python Client with Retry and Load Balancing
```python
# client/product_client.py
import grpc
import asyncio
import logging
from typing import List, Optional, AsyncIterator
from grpc import aio
import backoff

from proto import ecommerce_pb2 as pb
from proto import ecommerce_pb2_grpc as pb_grpc

logger = logging.getLogger(__name__)


class ProductClient:
    """Enhanced gRPC client with retry, load balancing, and connection pooling."""

    def __init__(
        self,
        servers: List[str],
        api_key: Optional[str] = None,
        use_tls: bool = True,
        pool_size: int = 10
    ):
        self.servers = servers
        self.api_key = api_key
        self.use_tls = use_tls
        self.pool_size = pool_size
        self.channels = []
        self.stubs = []
        self._round_robin_counter = 0
        self._setup_channels()

    def _setup_channels(self):
        """Set up connection pool with load balancing."""
        for server in self.servers:
            for _ in range(self.pool_size // len(self.servers)):
                if self.use_tls:
                    # Load client certificates
                    with open('client.crt', 'rb') as f:
                        client_cert = f.read()
                    with open('client.key', 'rb') as f:
                        client_key = f.read()
                    with open('ca.crt', 'rb') as f:
                        ca_cert = f.read()

                    credentials = grpc.ssl_channel_credentials(
                        root_certificates=ca_cert,
                        private_key=client_key,
                        certificate_chain=client_cert
                    )
                    channel = aio.secure_channel(
                        server,
                        credentials,
                        options=[
                            ('grpc.keepalive_time_ms', 120000),
                            ('grpc.keepalive_timeout_ms', 20000),
                            ('grpc.keepalive_permit_without_calls', True),
                            ('grpc.http2.max_pings_without_data', 0),
                        ]
                    )
                else:
                    channel = aio.insecure_channel(
                        server,
                        options=[
                            ('grpc.keepalive_time_ms', 120000),
                            ('grpc.keepalive_timeout_ms', 20000),
                        ]
                    )

                self.channels.append(channel)
                self.stubs.append(pb_grpc.ProductServiceStub(channel))

    def _get_stub(self) -> pb_grpc.ProductServiceStub:
        """Get next stub using round-robin load balancing."""
        stub = self.stubs[self._round_robin_counter]
        self._round_robin_counter = (self._round_robin_counter + 1) % len(self.stubs)
        return stub

    def _get_metadata(self) -> List[tuple]:
        """Generate request metadata."""
        metadata = []
        if self.api_key:
            metadata.append(('authorization', f'Bearer {self.api_key}'))
        metadata.append(('trace-id', self._generate_trace_id()))
        return metadata

    @backoff.on_exception(
        backoff.expo,
        grpc.RpcError,
        max_tries=3,
        giveup=lambda e: e.code() != grpc.StatusCode.UNAVAILABLE
    )
    async def get_product(
        self,
        product_id: str,
        fields: Optional[List[str]] = None
    ) -> pb.Product:
        """Get single product with retry logic."""
        request = pb.GetProductRequest(
            product_id=product_id,
            fields=fields or []
        )
        try:
            response = await self._get_stub().GetProduct(
                request,
                metadata=self._get_metadata(),
                timeout=5.0
            )
            return response
        except grpc.RpcError as e:
            logger.error(f"Failed to get product {product_id}: {e.details()}")
            raise

    async def list_products(
        self,
        category: Optional[str] = None,
        page_size: int = 100,
        min_price: Optional[float] = None,
        max_price: Optional[float] = None
    ) -> AsyncIterator[pb.Product]:
        """Stream products with server-side streaming."""
        request = pb.ListProductsRequest(
            category=category or "",
            page_size=page_size
        )
        if min_price is not None or max_price is not None:
            request.filter.CopyFrom(pb.ListProductsRequest.Filter(
                min_price=min_price or 0,
                max_price=max_price or float('inf')
            ))

        try:
            stream = self._get_stub().ListProducts(
                request,
                metadata=self._get_metadata(),
                timeout=30.0
            )
            async for product in stream:
                yield product
        except grpc.RpcError as e:
            logger.error(f"Failed to list products: {e.details()}")
            raise

    async def import_products(
        self,
        products: List[pb.Product]
    ) -> pb.ImportSummary:
        """Import products using client-side streaming."""
        async def generate_products():
            for product in products:
                yield product
                await asyncio.sleep(0.01)  # Rate limiting

        try:
            response = await self._get_stub().ImportProducts(
                generate_products(),
                metadata=self._get_metadata(),
                timeout=60.0
            )
            if response.failed > 0:
                logger.warning(
                    f"Import completed with {response.failed} failures: "
                    f"{[e.error for e in response.errors]}"
                )
            return response
        except grpc.RpcError as e:
            logger.error(f"Failed to import products: {e.details()}")
            raise

    async def watch_inventory(
        self,
        updates: AsyncIterator[pb.InventoryUpdate]
    ) -> AsyncIterator[pb.InventoryChange]:
        """Bidirectional streaming for inventory monitoring."""
        try:
            stream = self._get_stub().WatchInventory(
                metadata=self._get_metadata()
            )
            # Start sending updates
            send_task = asyncio.create_task(self._send_updates(stream, updates))
            # Receive changes
            try:
                async for change in stream:
                    yield change
            finally:
                send_task.cancel()
        except grpc.RpcError as e:
            logger.error(f"Failed in inventory watch: {e.details()}")
            raise

    async def _send_updates(
        self,
        stream,
        updates: AsyncIterator[pb.InventoryUpdate]
    ):
        """Send inventory updates to server."""
        try:
            async for update in updates:
                await stream.write(update)
            await stream.done_writing()
        except asyncio.CancelledError:
            pass

    async def close(self):
        """Close all channels gracefully."""
        close_tasks = [channel.close() for channel in self.channels]
        await asyncio.gather(*close_tasks)

    @staticmethod
    def _generate_trace_id() -> str:
        """Generate unique trace ID for request tracking."""
        import uuid
        return str(uuid.uuid4())


# Usage example
async def main():
    # Initialize client with load balancing
    client = ProductClient(
        servers=[
            'product-service-1:50051',
            'product-service-2:50051',
            'product-service-3:50051'
        ],
        api_key='your-api-key',
        use_tls=True
    )

    try:
        # Unary call
        product = await client.get_product('prod-123')
        print(f"Product: {product.name} - ${product.price}")

        # Server streaming
        async for product in client.list_products(
            category='electronics',
            min_price=100,
            max_price=1000
        ):
            print(f"Listed: {product.name}")

        # Client streaming
        products_to_import = [
            pb.Product(id=f'new-{i}', name=f'Product {i}', price=99.99)
            for i in range(100)
        ]
        summary = await client.import_products(products_to_import)
        print(f"Imported {summary.successful} products")

        # Bidirectional streaming
        async def generate_updates():
            for i in range(10):
                yield pb.InventoryUpdate(
                    product_id=f'prod-{i}',
                    quantity_change=5,
                    warehouse_id='warehouse-1'
                )
                await asyncio.sleep(1)

        async for change in client.watch_inventory(generate_updates()):
            print(f"Inventory change: {change.product_id} -> {change.new_quantity}")
    finally:
        await client.close()


if __name__ == '__main__':
    asyncio.run(main())
```
### Example 3: Node.js Implementation with Health Checking
```javascript
// server/index.js
const grpc = require('@grpc/grpc-js');
const protoLoader = require('@grpc/proto-loader');
const path = require('path');

// Load proto file
const PROTO_PATH = path.join(__dirname, '../proto/ecommerce.proto');
const packageDefinition = protoLoader.loadSync(PROTO_PATH, {
  keepCase: true,
  longs: String,
  enums: String,
  defaults: true,
  oneofs: true
});
const protoDescriptor = grpc.loadPackageDefinition(packageDefinition);
const ecommerce = protoDescriptor.ecommerce.v1;

// Health check implementation
const health = require('grpc-health-check');
const healthImpl = new health.Implementation({
  '': 'SERVING',
  'ecommerce.v1.ProductService': 'SERVING'
});

// Service implementation
class ProductService {
  constructor() {
    this.products = new Map();
    this.watchers = new Map();
  }

  async getProduct(call, callback) {
    const { product_id } = call.request;
    const product = this.products.get(product_id);

    if (!product) {
      callback({
        code: grpc.status.NOT_FOUND,
        message: `Product ${product_id} not found`
      });
      return;
    }
    callback(null, product);
  }

  async listProducts(call) {
    const { category, page_size } = call.request;
    let count = 0;

    for (const [id, product] of this.products) {
      if (category && product.categories.indexOf(category) === -1) {
        continue;
      }

      call.write(product);
      count++;

      if (page_size > 0 && count >= page_size) {
        break;
      }

      // Simulate processing delay
      await new Promise(resolve => setTimeout(resolve, 10));
    }
    call.end();
  }

  // Add remaining methods...
}

// Server setup
function main() {
  const server = new grpc.Server({
    'grpc.max_concurrent_streams': 1000,
    'grpc.max_receive_message_length': 1024 * 1024 * 16
  });

  // Add services
  server.addService(
    ecommerce.ProductService.service,
    new ProductService()
  );

  // Add health check
  server.addService(health.service, healthImpl);

  // Start server
  server.bindAsync(
    '0.0.0.0:50051',
    grpc.ServerCredentials.createInsecure(),
    (err, port) => {
      if (err) {
        console.error('Failed to bind:', err);
        return;
      }
      console.log(`gRPC server running on port ${port}`);
      server.start();
    }
  );
}

main();
```
## Error Handling
| Error | Cause | Solution |
|-------|-------|----------|
| "Failed to compile proto" | Invalid Protocol Buffer syntax | Validate with `protoc --lint` |
| "Connection refused" | Server not running or wrong port | Check server status and port |
| "Deadline exceeded" | Request timeout | Increase timeout or optimize operation |
| "Resource exhausted" | Rate limiting or quota exceeded | Implement backoff and retry |
| "Unavailable" | Server temporarily down | Implement circuit breaker pattern |
## Configuration Options
**Server Options**
- `MaxConcurrentStreams`: Limit concurrent streams per connection
- `MaxReceiveMessageSize`: Maximum message size (default 4MB)
- `KeepaliveParams`: Connection health monitoring
- `ConnectionTimeout`: Maximum idle time before closing
**Client Options**
- `LoadBalancingPolicy`: round_robin, pick_first (grpclb is deprecated in favor of xDS)
- `WaitForReady`: Block until the server is available (see the call-option sketch after this list)
- `Retry`: Automatic retry configuration
- `Interceptors`: Add cross-cutting concerns
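A short sketch of the per-call options above, assuming the generated `pb` package from Example 1: a deadline bounds each RPC, and `WaitForReady(true)` waits for the channel to become ready instead of failing fast:
```go
package main

import (
    "context"
    "log"
    "time"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"

    pb "github.com/company/ecommerce/proto" // generated package from Example 1
)

func main() {
    // Plaintext for brevity; use TLS credentials in production.
    conn, err := grpc.Dial("localhost:50051", grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        log.Fatalf("dial failed: %v", err)
    }
    defer conn.Close()

    client := pb.NewProductServiceClient(conn)

    // Deadline for the whole RPC plus WaitForReady instead of fail-fast.
    ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
    defer cancel()

    product, err := client.GetProduct(ctx,
        &pb.GetProductRequest{ProductId: "prod-123"},
        grpc.WaitForReady(true),
    )
    if err != nil {
        log.Fatalf("GetProduct failed: %v", err)
    }
    log.Printf("got product %s", product.GetName())
}
```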
## Best Practices
DO:
- Use field numbers consistently for backward compatibility
- Implement proper error codes and messages
- Add request deadlines for all RPCs
- Use streaming for large datasets
- Implement health checking endpoints
- Version your services properly
DON'T:
- Change field numbers in proto files
- Use gRPC for browser clients without proxy
- Ignore proper error handling
- Send large messages without streaming
- Skip TLS in production
- Use synchronous calls for long operations
## Performance Considerations
- Binary protocol reduces bandwidth by 20-30% vs JSON
- HTTP/2 multiplexing eliminates head-of-line blocking
- Connection pooling reduces handshake overhead
- Streaming prevents memory exhaustion with large datasets
- Protocol Buffers provide 3-10x faster serialization than JSON
## Security Considerations
- Always use TLS in production with mutual authentication
- Implement token-based authentication via metadata
- Use interceptors for consistent auth across services
- Validate all input according to proto definitions
- Implement rate limiting per client (see the interceptor sketch below)
- Use service accounts for service-to-service auth
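A per-client rate-limiting interceptor can be sketched with `golang.org/x/time/rate`; the `client-id` metadata key used below is an assumption, so substitute whatever identity your auth interceptor establishes:
```go
package interceptors

import (
    "context"
    "sync"

    "golang.org/x/time/rate"
    "google.golang.org/grpc"
    "google.golang.org/grpc/codes"
    "google.golang.org/grpc/metadata"
    "google.golang.org/grpc/status"
)

// RateLimitInterceptor enforces a per-client token bucket keyed on the
// "client-id" metadata header (header name is illustrative).
func RateLimitInterceptor(perSecond float64, burst int) grpc.UnaryServerInterceptor {
    var mu sync.Mutex
    limiters := make(map[string]*rate.Limiter)

    return func(
        ctx context.Context,
        req interface{},
        info *grpc.UnaryServerInfo,
        handler grpc.UnaryHandler,
    ) (interface{}, error) {
        clientID := "unknown"
        if md, ok := metadata.FromIncomingContext(ctx); ok {
            if ids := md.Get("client-id"); len(ids) > 0 {
                clientID = ids[0]
            }
        }

        // Lazily create one limiter per client.
        mu.Lock()
        limiter, ok := limiters[clientID]
        if !ok {
            limiter = rate.NewLimiter(rate.Limit(perSecond), burst)
            limiters[clientID] = limiter
        }
        mu.Unlock()

        if !limiter.Allow() {
            return nil, status.Errorf(codes.ResourceExhausted, "rate limit exceeded for client %s", clientID)
        }
        return handler(ctx, req)
    }
}
```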
## Related Commands
- `/rest-api-generator` - Generate REST APIs
- `/graphql-server-builder` - Build GraphQL servers
- `/api-gateway-builder` - Create API gateways
- `/webhook-handler-creator` - Handle webhooks
- `/websocket-server-builder` - WebSocket servers
## Version History
- v1.0.0 (2024-10): Initial implementation with Go, Python, Node.js support
- Planned v1.1.0: Add Rust and Java implementations with advanced load balancing

plugin.lock.json Normal file

@@ -0,0 +1,105 @@
{
"$schema": "internal://schemas/plugin.lock.v1.json",
"pluginId": "gh:jeremylongshore/claude-code-plugins-plus:plugins/api-development/grpc-service-generator",
"normalized": {
"repo": null,
"ref": "refs/tags/v20251128.0",
"commit": "81192931e3a65ecc7cb9f0cfcd1e92ce7687f712",
"treeHash": "e4ca53dd3760264a956b38d2364fbfc2c8a0d1d6a982eb21623b09a2f6a0f3a8",
"generatedAt": "2025-11-28T10:18:29.469814Z",
"toolVersion": "publish_plugins.py@0.2.0"
},
"origin": {
"remote": "git@github.com:zhongweili/42plugin-data.git",
"branch": "master",
"commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
"repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
},
"manifest": {
"name": "grpc-service-generator",
"description": "Generate gRPC services with Protocol Buffers and streaming support",
"version": "1.0.0"
},
"content": {
"files": [
{
"path": "README.md",
"sha256": "92f1181d9227a18be5ab1a6a378bf74a46b83f34227978702f476e79db02f2c6"
},
{
"path": ".claude-plugin/plugin.json",
"sha256": "06498e42b1d524422790c1fb0a8768de92fbdf0e72aabc6dfba40be1458184f7"
},
{
"path": "commands/generate-grpc-service.md",
"sha256": "6f390a182afecd1dc57d6cd0ce9e6ef97fdca80df79aba9c0253730a1f6491a2"
},
{
"path": "skills/skill-adapter/references/examples.md",
"sha256": "922bbc3c4ebf38b76f515b5c1998ebde6bf902233e00e2c5a0e9176f975a7572"
},
{
"path": "skills/skill-adapter/references/best-practices.md",
"sha256": "c8f32b3566252f50daacd346d7045a1060c718ef5cfb07c55a0f2dec5f1fb39e"
},
{
"path": "skills/skill-adapter/references/README.md",
"sha256": "fba0cdef72782747ad294d7d683dc35a4f7cce93bc7d0670c27eee39787cfa00"
},
{
"path": "skills/skill-adapter/scripts/helper-template.sh",
"sha256": "0881d5660a8a7045550d09ae0acc15642c24b70de6f08808120f47f86ccdf077"
},
{
"path": "skills/skill-adapter/scripts/validation.sh",
"sha256": "92551a29a7f512d2036e4f1fb46c2a3dc6bff0f7dde4a9f699533e446db48502"
},
{
"path": "skills/skill-adapter/scripts/README.md",
"sha256": "35b32da2497e5b23c5bf43994a54ab7d30e0668bbca8807b323cfaa02b066846"
},
{
"path": "skills/skill-adapter/assets/test-data.json",
"sha256": "ac17dca3d6e253a5f39f2a2f1b388e5146043756b05d9ce7ac53a0042eee139d"
},
{
"path": "skills/skill-adapter/assets/README.md",
"sha256": "84a0f5ae2b2f95da55890d4ee9e0a12fdd79d10c5959f77c8db53603400ef8a4"
},
{
"path": "skills/skill-adapter/assets/skill-schema.json",
"sha256": "f5639ba823a24c9ac4fb21444c0717b7aefde1a4993682897f5bf544f863c2cd"
},
{
"path": "skills/skill-adapter/assets/config-template.json",
"sha256": "0c2ba33d2d3c5ccb266c0848fc43caa68a2aa6a80ff315d4b378352711f83e1c"
},
{
"path": "skills/skill-adapter/assets/examples/bidirectional_streaming_rpc.proto",
"sha256": "a23ace93e2995a8bc341546a8c497397cc4c78957251c00d0e067a7b1ecdaff8"
},
{
"path": "skills/skill-adapter/assets/examples/streaming_rpc.proto",
"sha256": "e18bc4c5c49509fc46886d62d33e4b6a1bfc57a7b64402cda6e6b479b8663c48"
},
{
"path": "skills/skill-adapter/assets/examples/unary_rpc.proto",
"sha256": "73918d5bf615789ba2b0a43a9ea803a16d13e4ebd5858f4b1e59dbbc1e9587f1"
},
{
"path": "skills/skill-adapter/assets/examples/client_streaming_rpc.proto",
"sha256": "ae71237bea49c12369e9322acbd8c503a66bd98482a52622ff88a263b5d1f819"
},
{
"path": "skills/skill-adapter/assets/templates/service.proto.template",
"sha256": "ca5cd318f427365bb7cd252bab20b4c2a526e2607f16b54d9731b1c728affbe1"
}
],
"dirSha256": "e4ca53dd3760264a956b38d2364fbfc2c8a0d1d6a982eb21623b09a2f6a0f3a8"
},
"security": {
"scannedAt": null,
"scannerVersion": null,
"flags": []
}
}

@@ -0,0 +1,9 @@
# Assets
Bundled resources for grpc-service-generator skill
- [ ] templates/service.proto.template: A Jinja2 template for generating .proto files with customizable service and message definitions.
- [ ] examples/unary_rpc.proto: Example .proto file demonstrating a simple unary RPC.
- [ ] examples/streaming_rpc.proto: Example .proto file demonstrating a server-side streaming RPC.
- [ ] examples/client_streaming_rpc.proto: Example .proto file demonstrating a client-side streaming RPC.
- [ ] examples/bidirectional_streaming_rpc.proto: Example .proto file demonstrating a bidirectional streaming RPC.

@@ -0,0 +1,32 @@
{
"skill": {
"name": "skill-name",
"version": "1.0.0",
"enabled": true,
"settings": {
"verbose": false,
"autoActivate": true,
"toolRestrictions": true
}
},
"triggers": {
"keywords": [
"example-trigger-1",
"example-trigger-2"
],
"patterns": []
},
"tools": {
"allowed": [
"Read",
"Grep",
"Bash"
],
"restricted": []
},
"metadata": {
"author": "Plugin Author",
"category": "general",
"tags": []
}
}

@@ -0,0 +1,46 @@
syntax = "proto3";
package examples;
option go_package = "examples";
// The greeting service definition.
service BidirectionalGreeter {
// A bidirectional streaming RPC.
//
// Accepts a stream of GreetingRequests and returns a stream of GreetingResponses.
rpc BidirectionalGreeting (stream GreetingRequest) returns (stream GreetingResponse);
}
// The request message containing the user's name.
message GreetingRequest {
string name = 1;
// Add additional request fields here. Consider adding metadata or context.
string request_id = 2; // Example: a unique request identifier
}
// The response message containing the greetings.
message GreetingResponse {
string message = 1;
// Add additional response fields here. Consider adding status information.
string server_timestamp = 2; // Example: timestamp of the server when the response was generated
}
// Example usage comments:
//
// - The BidirectionalGreeting RPC allows the client and server to exchange multiple messages
// in a single connection. This is useful for real-time communication or data streaming.
//
// - The GreetingRequest can include metadata, such as a request ID, to track individual requests
// within the stream.
//
// - The GreetingResponse can include information about the server's processing, such as a timestamp.
//
// - Consider adding error handling and retry mechanisms to your gRPC client to handle potential
// network issues.
//
// - Implement appropriate logging and monitoring to track the performance of your gRPC service.
//
// - For production environments, enable TLS for secure communication between the client and server.
//
// - Implement interceptors for logging, authentication, and other cross-cutting concerns.

@@ -0,0 +1,27 @@
syntax = "proto3";
package example;
option go_package = "example.com/grpc-service-generator/examples";
// The service definition.
service StreamingService {
// Client-side streaming RPC: the client sends a stream of requests and receives a single aggregated response.
rpc ClientStreamingExample (stream ClientStreamingRequest) returns (ClientStreamingResponse) {}
}
// The request message containing a chunk of data from the client.
message ClientStreamingRequest {
string message = 1; // The message from the client. Can represent chunks of data.
}
// The response message containing the aggregated result.
message ClientStreamingResponse {
string result = 1; // The aggregated result based on the client's stream.
}
// Instructions:
// 1. Define your request and response messages
// 2. Define the RPC service, using the stream keyword for streaming RPCs
// 3. Implement the server and client code
// 4. Remember to handle errors and closing the stream gracefully.

@@ -0,0 +1,38 @@
syntax = "proto3";
package example;
option go_package = "example.com/grpc-service-generator/examples";
// Define the service
service StreamingService {
// Server-side streaming RPC. The client sends a single request, and the
// server responds with a stream of messages.
rpc ServerStreamingExample (StreamingRequest) returns (stream StreamingResponse) {}
}
// The request message for the ServerStreamingExample RPC.
message StreamingRequest {
string request_id = 1; // A unique identifier for the request.
int32 num_responses = 2; // The number of responses the server should send.
string message_prefix = 3; // A prefix to add to each response message.
}
// The response message for the ServerStreamingExample RPC.
message StreamingResponse {
string response_id = 1; // A unique identifier for the response.
string message = 2; // The message content.
}
// Example usage notes:
//
// - The `request_id` field in `StreamingRequest` can be used for logging and
// correlation.
// - The `num_responses` field allows the client to control the number of
// messages received. Consider adding a maximum limit to prevent resource exhaustion.
// - The `message_prefix` field demonstrates how to parameterize the server's
// response. This could be used to customize the response based on user preferences.
// - The `response_id` field in `StreamingResponse` allows for identifying individual messages in the stream.
// - Consider adding error handling to the server implementation to gracefully
// handle situations where the client disconnects prematurely.
// - For production, consider adding authentication and authorization to the service.

@@ -0,0 +1,28 @@
// examples/unary_rpc.proto
//
// This file defines a simple gRPC service with a unary RPC.
//
// To compile this .proto file:
// protoc --go_out=. --go_opt=paths=source_relative --go-grpc_out=. --go-grpc_opt=paths=source_relative examples/unary_rpc.proto
syntax = "proto3";
package example;
option go_package = "github.com/example/grpc-service-generator/example"; // Replace with your actual Go package
// The greeting service definition.
service Greeter {
// Sends a greeting
rpc SayHello (HelloRequest) returns (HelloReply) {}
}
// The request message containing the user's name.
message HelloRequest {
string name = 1;
}
// The response message containing the greetings
message HelloReply {
string message = 1;
}

@@ -0,0 +1,28 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Claude Skill Configuration",
"type": "object",
"required": ["name", "description"],
"properties": {
"name": {
"type": "string",
"pattern": "^[a-z0-9-]+$",
"maxLength": 64,
"description": "Skill identifier (lowercase, hyphens only)"
},
"description": {
"type": "string",
"maxLength": 1024,
"description": "What the skill does and when to use it"
},
"allowed-tools": {
"type": "string",
"description": "Comma-separated list of allowed tools"
},
"version": {
"type": "string",
"pattern": "^\\d+\\.\\d+\\.\\d+$",
"description": "Semantic version (x.y.z)"
}
}
}

@@ -0,0 +1,56 @@
// templates/service.proto.template
// This is a Jinja2 template for generating .proto files.
// Use this template to define your gRPC service and messages.
syntax = "proto3";
package {{ package_name }}; // Replace with your package name
// Option to specify the go package. Replace with your desired path.
option go_package = "{{ go_package_path }}";
// Define your service here. Replace "YourService" with your service name.
// Consider adding authentication and authorization interceptors.
service {{ service_name }} {
// Unary RPC example: A simple request-response.
rpc {{ unary_method_name }} ({{ unary_request_type }}) returns ({{ unary_response_type }});
// Server-side streaming RPC example: The server sends a stream of responses
// after receiving the request. Useful for pushing updates.
rpc {{ server_streaming_method_name }} ({{ streaming_request_type }}) returns (stream {{ streaming_response_type }});
// Client-side streaming RPC example: The client sends a stream of requests
// to the server, which responds with a single response. Useful for batch processing.
rpc {{ client_streaming_method_name }} (stream {{ streaming_request_type }}) returns ({{ streaming_response_type }});
// Bidirectional streaming RPC example: Both the client and the server send
// a stream of messages using a read-write stream. Useful for real-time communication.
rpc {{ bidirectional_streaming_method_name }} (stream {{ streaming_request_type }}) returns (stream {{ streaming_response_type }});
}
// Define your message types here. Make sure the fields are well-defined and documented.
// Consider using well-known types from google/protobuf/timestamp.proto for timestamps.
// Example request message for unary RPC
message {{ unary_request_type }} {
string id = 1; // A unique identifier. Consider adding validation.
string name = 2; // A name. Consider adding validation (e.g., max length).
}
// Example response message for unary RPC
message {{ unary_response_type }} {
string message = 1; // A confirmation message.
int32 status_code = 2; // HTTP-like status code for finer-grained error handling.
}
// Example request message for streaming RPC
message {{ streaming_request_type }} {
string data = 1; // Data to be processed. Consider adding rate limiting on the server.
int64 timestamp = 2; // Timestamp of the data.
}
// Example response message for streaming RPC
message {{ streaming_response_type }} {
string result = 1; // Result of the processing.
bool success = 2; // Indicate if the processing was successful.
}

@@ -0,0 +1,27 @@
{
"testCases": [
{
"name": "Basic activation test",
"input": "trigger phrase example",
"expected": {
"activated": true,
"toolsUsed": ["Read", "Grep"],
"success": true
}
},
{
"name": "Complex workflow test",
"input": "multi-step trigger example",
"expected": {
"activated": true,
"steps": 3,
"toolsUsed": ["Read", "Write", "Bash"],
"success": true
}
}
],
"fixtures": {
"sampleInput": "example data",
"expectedOutput": "processed result"
}
}

@@ -0,0 +1,8 @@
# References
Bundled resources for grpc-service-generator skill
- [ ] grpc_best_practices.md: A document outlining gRPC best practices for service design, error handling, and security.
- [ ] protobuf_style_guide.md: A document detailing the Protocol Buffer style guide for writing clean and maintainable .proto files.
- [ ] grpc_error_handling.md: A document explaining different gRPC error handling strategies.
- [ ] grpc_interceptors.md: A document explaining how to use gRPC interceptors for authentication, logging, and monitoring.

@@ -0,0 +1,69 @@
# Skill Best Practices
Guidelines for optimal skill usage and development.
## For Users
### Activation Best Practices
1. **Use Clear Trigger Phrases**
- Match phrases from skill description
- Be specific about intent
- Provide necessary context
2. **Provide Sufficient Context**
- Include relevant file paths
- Specify scope of analysis
- Mention any constraints
3. **Understand Tool Permissions**
- Check allowed-tools in frontmatter
- Know what the skill can/cannot do
- Request appropriate actions
### Workflow Optimization
- Start with simple requests
- Build up to complex workflows
- Verify each step before proceeding
- Use skill consistently for related tasks
## For Developers
### Skill Development Guidelines
1. **Clear Descriptions**
- Include explicit trigger phrases
- Document all capabilities
- Specify limitations
2. **Proper Tool Permissions**
- Use minimal necessary tools
- Document security implications
- Test with restricted tools
3. **Comprehensive Documentation**
- Provide usage examples
- Document common pitfalls
- Include troubleshooting guide
### Maintenance
- Keep version updated
- Test after tool updates
- Monitor user feedback
- Iterate on descriptions
## Performance Tips
- Scope skills to specific domains
- Avoid overlapping trigger phrases
- Keep descriptions under 1024 chars
- Test activation reliability
## Security Considerations
- Never include secrets in skill files
- Validate all inputs
- Use read-only tools when possible
- Document security requirements

@@ -0,0 +1,70 @@
# Skill Usage Examples
This document provides practical examples of how to use this skill effectively.
## Basic Usage
### Example 1: Simple Activation
**User Request:**
```
[Describe trigger phrase here]
```
**Skill Response:**
1. Analyzes the request
2. Performs the required action
3. Returns results
### Example 2: Complex Workflow
**User Request:**
```
[Describe complex scenario]
```
**Workflow:**
1. Step 1: Initial analysis
2. Step 2: Data processing
3. Step 3: Result generation
4. Step 4: Validation
## Advanced Patterns
### Pattern 1: Chaining Operations
Combine this skill with other tools:
```
Step 1: Use this skill for [purpose]
Step 2: Chain with [other tool]
Step 3: Finalize with [action]
```
### Pattern 2: Error Handling
If issues occur:
- Check trigger phrase matches
- Verify context is available
- Review allowed-tools permissions
## Tips & Best Practices
- ✅ Be specific with trigger phrases
- ✅ Provide necessary context
- ✅ Check tool permissions match needs
- ❌ Avoid vague requests
- ❌ Don't mix unrelated tasks
## Common Issues
**Issue:** Skill doesn't activate
**Solution:** Use exact trigger phrases from description
**Issue:** Unexpected results
**Solution:** Check input format and context
## See Also
- Main SKILL.md for full documentation
- scripts/ for automation helpers
- assets/ for configuration examples

@@ -0,0 +1,8 @@
# Scripts
Bundled resources for grpc-service-generator skill
- [ ] generate_proto.sh: Generates a basic .proto file with example service and message definitions.
- [ ] compile_proto.sh: Compiles the .proto file into gRPC stubs for different languages (Python, Go, Java).
- [ ] run_grpc_server.py: A basic Python gRPC server implementation for testing the generated stubs.
- [ ] test_grpc_client.py: A basic Python gRPC client implementation for testing the gRPC server.

@@ -0,0 +1,42 @@
#!/bin/bash
# Helper script template for skill automation
# Customize this for your skill's specific needs
set -e
function show_usage() {
echo "Usage: $0 [options]"
echo ""
echo "Options:"
echo " -h, --help Show this help message"
echo " -v, --verbose Enable verbose output"
echo ""
}
# Parse arguments
VERBOSE=false
while [[ $# -gt 0 ]]; do
case $1 in
-h|--help)
show_usage
exit 0
;;
-v|--verbose)
VERBOSE=true
shift
;;
*)
echo "Unknown option: $1"
show_usage
exit 1
;;
esac
done
# Your skill logic here
if [ "$VERBOSE" = true ]; then
echo "Running skill automation..."
fi
echo "✅ Complete"

@@ -0,0 +1,32 @@
#!/bin/bash
# Skill validation helper
# Validates skill activation and functionality
set -e
echo "🔍 Validating skill..."
# Check if SKILL.md exists
if [ ! -f "../SKILL.md" ]; then
echo "❌ Error: SKILL.md not found"
exit 1
fi
# Validate frontmatter
if ! grep -q "^---$" "../SKILL.md"; then
echo "❌ Error: No frontmatter found"
exit 1
fi
# Check required fields
if ! grep -q "^name:" "../SKILL.md"; then
echo "❌ Error: Missing 'name' field"
exit 1
fi
if ! grep -q "^description:" "../SKILL.md"; then
echo "❌ Error: Missing 'description' field"
exit 1
fi
echo "✅ Skill validation passed"