Initial commit

This commit is contained in:
Zhongwei Li
2025-11-30 08:24:49 +08:00
commit 99a553f8ab
25 changed files with 6408 additions and 0 deletions

references/cli-commands.md
# FastMCP CLI Commands Reference
Complete reference for FastMCP command-line interface.
## Installation
```bash
# Install FastMCP
pip install fastmcp
# or with uv
uv pip install fastmcp
# Check version
fastmcp --version
```
## Development Commands
### `fastmcp dev`
Run server with inspector interface (recommended for development).
```bash
# Basic usage
fastmcp dev server.py
# With options
fastmcp dev server.py --port 8000
# Enable debug logging
FASTMCP_LOG_LEVEL=DEBUG fastmcp dev server.py
```
**Features:**
- Interactive inspector UI
- Hot reload on file changes
- Detailed logging
- Tool/resource inspection
### `fastmcp run`
Run server normally (production-like).
```bash
# stdio transport (default)
fastmcp run server.py
# HTTP transport
fastmcp run server.py --transport http --port 8000
# SSE transport
fastmcp run server.py --transport sse
```
**Options:**
- `--transport`: `stdio`, `http`, or `sse`
- `--port`: Port number (for HTTP/SSE)
- `--host`: Host address (default: 127.0.0.1)
### `fastmcp inspect`
Inspect server without running it.
```bash
# Inspect tools and resources
fastmcp inspect server.py
# Output as JSON
fastmcp inspect server.py --json
# Show detailed information
fastmcp inspect server.py --verbose
```
**Output includes:**
- List of tools
- List of resources
- List of prompts
- Configuration details
## Installation Commands
### `fastmcp install`
Install server to Claude Desktop.
```bash
# Basic installation
fastmcp install server.py
# With custom name
fastmcp install server.py --name "My Server"
# Specify config location
fastmcp install server.py --config-path ~/.config/Claude/claude_desktop_config.json
```
**What it does:**
- Adds server to Claude Desktop configuration
- Sets up proper command and arguments
- Validates server before installing
### Claude Desktop Configuration
Manual configuration (if not using `fastmcp install`):
```json
{
  "mcpServers": {
    "my-server": {
      "command": "python",
      "args": ["/absolute/path/to/server.py"],
      "env": {
        "API_KEY": "your-key"
      }
    }
  }
}
```
**Config locations:**
- **macOS**: `~/Library/Application Support/Claude/claude_desktop_config.json`
- **Linux**: `~/.config/Claude/claude_desktop_config.json`
- **Windows**: `%APPDATA%\Claude\claude_desktop_config.json`
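These locations can also be resolved programmatically. A small sketch (the helper name and fallback behavior are illustrative, not part of FastMCP):

```python
import os
import sys
from pathlib import Path

def claude_config_path(platform: str = sys.platform) -> Path:
    """Return the default Claude Desktop config path for a platform string."""
    if platform == "darwin":  # macOS
        return (Path.home() / "Library" / "Application Support"
                / "Claude" / "claude_desktop_config.json")
    if platform.startswith("win"):  # Windows: %APPDATA%\Claude\...
        return Path(os.environ.get("APPDATA", "")) / "Claude" / "claude_desktop_config.json"
    # Linux and everything else
    return Path.home() / ".config" / "Claude" / "claude_desktop_config.json"
```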
## Python Direct Execution
### Run with Python
```bash
# stdio (default)
python server.py
# HTTP transport
python server.py --transport http --port 8000
# With arguments
python server.py --transport http --port 8000 --host 0.0.0.0
```
### Custom Argument Parsing
```python
# server.py
if __name__ == "__main__":
    import sys

    # Parse custom arguments
    if "--test" in sys.argv:
        run_tests()
    elif "--migrate" in sys.argv:
        run_migrations()
    else:
        mcp.run()
```
## Environment Variables
### FastMCP-Specific Variables
```bash
# Logging
export FASTMCP_LOG_LEVEL=DEBUG # DEBUG, INFO, WARNING, ERROR
export FASTMCP_LOG_FILE=/path/to/log.txt
# Environment
export FASTMCP_ENV=production # development, staging, production
# Custom variables (your server)
export API_KEY=your-key
export DATABASE_URL=postgres://...
```
### Using with Commands
```bash
# Inline environment variables
API_KEY=test fastmcp dev server.py
# From .env file
set -a && source .env && set +a && fastmcp dev server.py
```
## Testing Commands
### Run Tests with Client
```python
# test.py
import asyncio
from fastmcp import Client

async def test():
    async with Client("server.py") as client:
        tools = await client.list_tools()
        print(f"Tools: {[t.name for t in tools]}")

asyncio.run(test())
```
```bash
# Run tests
python test.py
```
### Integration Testing
```bash
# Start server in background
fastmcp run server.py --transport http --port 8000 &
SERVER_PID=$!
# Run tests
pytest tests/
# Kill server
kill $SERVER_PID
```
## Debugging Commands
### Enable Debug Logging
```bash
# Full debug output
FASTMCP_LOG_LEVEL=DEBUG fastmcp dev server.py
# Verbose interpreter output (imports, module initialization)
PYTHONVERBOSE=1 fastmcp dev server.py
# Trace imports
PYTHONPATH=. python -v server.py
```
### Check Python Environment
```bash
# Check Python version
python --version
# Check installed packages
pip list | grep fastmcp
# Check import paths
python -c "import sys; print('\n'.join(sys.path))"
```
### Validate Server
```bash
# Check syntax
python -m py_compile server.py
# Check imports
python -c "import server; print('OK')"
# Inspect structure
fastmcp inspect server.py --verbose
```
## Deployment Commands
### Prepare for Deployment
```bash
# Freeze dependencies
pip freeze > requirements.txt
# Or write a minimal requirements.txt by hand
echo "fastmcp>=2.12.0" > requirements.txt
echo "httpx>=0.27.0" >> requirements.txt
# Test with clean environment
python -m venv test_env
source test_env/bin/activate
pip install -r requirements.txt
python server.py
```
### Git Commands for Deployment
```bash
# Prepare for cloud deployment
git add server.py requirements.txt
git commit -m "Prepare for deployment"
# Create GitHub repo
gh repo create my-mcp-server --public
# Push
git push -u origin main
```
## Performance Commands
### Profiling
```bash
# Profile with cProfile
python -m cProfile -o profile.stats server.py
# Analyze profile
python -m pstats profile.stats
```
### Memory Profiling
```bash
# Install memory_profiler
pip install memory_profiler
# Run with memory profiling
python -m memory_profiler server.py
```
## Batch Operations
### Multiple Servers
```bash
# Start multiple servers
fastmcp run server1.py --port 8000 &
fastmcp run server2.py --port 8001 &
fastmcp run server3.py --port 8002 &
# Kill all (caution: this kills every Python process on the machine)
killall -9 python
```
### Process Management
```bash
# Use screen/tmux for persistent sessions
screen -S fastmcp
fastmcp dev server.py
# Detach: Ctrl+A, D
# Reattach
screen -r fastmcp
```
## Common Command Patterns
### Local Development
```bash
# Quick iteration cycle
fastmcp dev server.py # Edit, save, auto-reload
```
### Testing with HTTP Client
```bash
# Start HTTP server
fastmcp run server.py --transport http --port 8000
# Test with curl
curl -X POST http://localhost:8000/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}'
```
### Production-like Testing
```bash
# Set production environment
export ENVIRONMENT=production
export FASTMCP_LOG_LEVEL=WARNING
# Run
fastmcp run server.py
```
## Troubleshooting Commands
### Server Won't Start
```bash
# Check for syntax errors
python -m py_compile server.py
# Check for missing dependencies
pip check
# Verify FastMCP installation
python -c "import fastmcp; print(fastmcp.__version__)"
```
### Port Already in Use
```bash
# Find process using port
lsof -i :8000
# Kill process
lsof -ti:8000 | xargs kill -9
# Use different port
fastmcp run server.py --port 8001
```
### Permission Issues
```bash
# Make server executable
chmod +x server.py
# Fix Python path
export PYTHONPATH="${PYTHONPATH}:$(pwd)"
```
## Resources
- **FastMCP CLI Docs**: https://github.com/jlowin/fastmcp#cli
- **MCP Protocol**: https://modelcontextprotocol.io
- **Context7**: `/jlowin/fastmcp`

# FastMCP Cloud Deployment Guide
Complete guide for deploying FastMCP servers to FastMCP Cloud.
## Critical Requirements
**❗️ MUST HAVE** for FastMCP Cloud:
1. **Module-level server object** named `mcp`, `server`, or `app`
2. **PyPI dependencies only** in `requirements.txt`
3. **Public GitHub repository** (or accessible to FastMCP Cloud)
4. **Environment variables** for configuration (no hardcoded secrets)
## Cloud-Ready Server Pattern
```python
# server.py
from fastmcp import FastMCP
import os

# ✅ CORRECT: Module-level export
mcp = FastMCP("production-server")

# ✅ Use environment variables
API_KEY = os.getenv("API_KEY")
DATABASE_URL = os.getenv("DATABASE_URL")

@mcp.tool()
async def production_tool(data: str) -> dict:
    if not API_KEY:
        return {"error": "API_KEY not configured"}
    return {"status": "success", "data": data}

if __name__ == "__main__":
    mcp.run()
```
## Common Anti-Patterns
### ❌ WRONG: Function-Wrapped Server
```python
def create_server():
    mcp = FastMCP("server")
    return mcp

if __name__ == "__main__":
    server = create_server()  # Too late for cloud!
    server.run()
```
### ✅ CORRECT: Factory with Module Export
```python
def create_server() -> FastMCP:
    mcp = FastMCP("server")
    # Complex setup logic here
    return mcp

# Export at module level
mcp = create_server()

if __name__ == "__main__":
    mcp.run()
```
## Deployment Steps
### 1. Prepare Repository
```bash
# Initialize git
git init
# Add files
git add .
# Commit
git commit -m "Initial MCP server"
# Create GitHub repo
gh repo create my-mcp-server --public
# Push
git push -u origin main
```
### 2. Deploy to FastMCP Cloud
1. Visit https://fastmcp.cloud
2. Sign in with GitHub
3. Click "Create Project"
4. Select your repository
5. Configure:
- **Server Name**: Your project name
- **Entrypoint**: `server.py`
- **Environment Variables**: Add all needed variables
### 3. Configure Environment Variables
In FastMCP Cloud dashboard, add:
- `API_KEY`
- `DATABASE_URL`
- `CACHE_TTL`
- Any custom variables
### 4. Access Your Server
- **URL**: `https://your-project.fastmcp.app/mcp`
- **Auto-deploy**: Pushes to main branch auto-deploy
- **PR Previews**: Pull requests get preview deployments
## Project Structure Requirements
### Minimal Structure
```
my-mcp-server/
├── server.py # Main entry point (required)
├── requirements.txt # PyPI dependencies (required)
├── .env # Local dev only (git-ignored)
├── .gitignore # Must ignore .env
└── README.md # Documentation (recommended)
```
### Production Structure
```
my-mcp-server/
├── src/
│ ├── server.py # Main entry point
│ ├── utils.py # Self-contained utilities
│ └── tools/ # Tool modules
│ ├── __init__.py
│ └── api_tools.py
├── requirements.txt
├── .env.example # Template for .env
├── .gitignore
└── README.md
```
## Requirements.txt Rules
### ✅ ALLOWED: PyPI Packages
```txt
fastmcp>=2.12.0
httpx>=0.27.0
python-dotenv>=1.0.0
pydantic>=2.0.0
```
### ❌ NOT ALLOWED: Non-PyPI Dependencies
```txt
# Don't use these in cloud:
git+https://github.com/user/repo.git
-e ./local-package
./wheels/package.whl
```
## Environment Variables Best Practices
### ✅ GOOD: Environment-based Configuration
```python
import os

class Config:
    API_KEY = os.getenv("API_KEY", "")
    BASE_URL = os.getenv("BASE_URL", "https://api.example.com")
    DEBUG = os.getenv("DEBUG", "false").lower() == "true"

    @classmethod
    def validate(cls):
        if not cls.API_KEY:
            raise ValueError("API_KEY is required")
```
### ❌ BAD: Hardcoded Values
```python
# Never do this in cloud:
API_KEY = "sk-1234567890" # Exposed in repository!
DATABASE_URL = "postgresql://user:pass@host/db" # Insecure!
```
## Avoiding Circular Imports
**Critical for cloud deployment!**
### ❌ WRONG: Factory Function in `__init__.py`
```python
# shared/__init__.py
def get_api_client():
    from .api_client import APIClient  # Circular import risk
    return APIClient()

# shared/monitoring.py
from . import get_api_client  # Creates a cycle!
```
### ✅ CORRECT: Direct Imports
```python
# shared/__init__.py
from .api_client import APIClient
from .cache import CacheManager
# shared/monitoring.py
from .api_client import APIClient
client = APIClient() # Create directly
```
## Testing Before Deployment
### Local Testing
```bash
# Test with stdio (default)
fastmcp dev server.py
# Test with HTTP
python server.py --transport http --port 8000
```
### Pre-Deployment Checklist
- [ ] Server object exported at module level
- [ ] Only PyPI dependencies in requirements.txt
- [ ] No hardcoded secrets (all in environment variables)
- [ ] `.env` file in `.gitignore`
- [ ] No circular imports
- [ ] No import-time async execution
- [ ] Works with `fastmcp dev server.py`
- [ ] Git repository committed and pushed
- [ ] All required environment variables documented
## Monitoring Deployment
### Check Deployment Logs
FastMCP Cloud provides:
- Build logs
- Runtime logs
- Error logs
### Health Check Endpoint
Add a health check resource:
```python
from datetime import datetime, timezone

@mcp.resource("health://status")
async def health_check() -> dict:
    return {
        "status": "healthy",
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "version": "1.0.0"
    }
```
### Common Deployment Errors
1. **"No server object found"**
- Fix: Export server at module level
2. **"Module not found"**
- Fix: Use only PyPI packages
3. **"Import error: circular dependency"**
- Fix: Avoid factory functions in `__init__.py`
4. **"Environment variable not set"**
- Fix: Add variables in FastMCP Cloud dashboard
## Continuous Deployment
FastMCP Cloud automatically deploys when you push to main:
```bash
# Make changes
git add .
git commit -m "Add new feature"
git push
# Deployment happens automatically!
# Check status at fastmcp.cloud
```
## Rollback Strategy
If deployment fails:
```bash
# Revert to previous commit
git revert HEAD
git push
# Or reset to specific commit
git reset --hard <commit-hash>
git push --force # Use with caution!
```
## Resources
- **FastMCP Cloud**: https://fastmcp.cloud
- **FastMCP GitHub**: https://github.com/jlowin/fastmcp
- **Deployment Docs**: Check FastMCP Cloud documentation

references/common-errors.md

@@ -0,0 +1,118 @@
# FastMCP Common Errors Reference
Quick reference for the 15 most common FastMCP errors and their solutions.
## Error 1: Missing Server Object
**Error:** `RuntimeError: No server object found at module level`
**Fix:** Export server at module level: `mcp = FastMCP("name")`
**Why:** FastMCP Cloud requires module-level server object
**Source:** FastMCP Cloud documentation
## Error 2: Async/Await Confusion
**Error:** `RuntimeError: no running event loop`
**Fix:** Use `async def` for async operations, don't mix sync/async
**Example:** Use `await client.get()` not `client.get()`
**Source:** GitHub issues #156, #203
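A minimal illustration of the sync/async mix-up, with `asyncio.sleep` standing in for a real async client call:

```python
import asyncio

async def fetch() -> int:
    await asyncio.sleep(0)  # stands in for e.g. `await client.get(...)`
    return 42

async def main() -> int:
    # WRONG: `fetch()` alone returns a coroutine object and never runs
    # RIGHT: await it
    return await fetch()

print(asyncio.run(main()))  # → 42
```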
## Error 3: Context Not Injected
**Error:** `TypeError: missing required argument 'context'`
**Fix:** Add type hint: `async def tool(context: Context):`
**Why:** Type hint is required for context injection
**Source:** FastMCP v2 migration guide
## Error 4: Resource URI Syntax
**Error:** `ValueError: Invalid resource URI`
**Fix:** Include scheme: `@mcp.resource("data://config")`
**Valid schemes:** `data://`, `file://`, `info://`, `api://`
**Source:** MCP Protocol specification
## Error 5: Resource Template Parameter Mismatch
**Error:** `TypeError: missing positional argument`
**Fix:** Match parameter names: `user://{user_id}` → `def get_user(user_id: str)`
**Why:** Parameter names must exactly match URI template
**Source:** FastMCP patterns documentation
## Error 6: Pydantic Validation Error
**Error:** `ValidationError: value is not valid`
**Fix:** Ensure type hints match data types
**Best practice:** Use Pydantic models for complex validation
**Source:** Pydantic documentation
## Error 7: Transport/Protocol Mismatch
**Error:** `ConnectionError: different transport`
**Fix:** Match client/server transport (stdio or http)
**Server:** `mcp.run(transport="http")`
**Client:** `{"transport": "http", "url": "..."}`
**Source:** MCP transport specification
## Error 8: Import Errors (Editable Package)
**Error:** `ModuleNotFoundError: No module named 'my_package'`
**Fix:** Install in editable mode: `pip install -e .`
**Alternative:** Use absolute imports or add to PYTHONPATH
**Source:** Python packaging documentation
## Error 9: Deprecation Warnings
**Error:** `DeprecationWarning: 'mcp.settings' deprecated`
**Fix:** Use `os.getenv()` instead of `mcp.settings.get()`
**Why:** FastMCP v2 removed settings API
**Source:** FastMCP v2 migration guide
## Error 10: Port Already in Use
**Error:** `OSError: [Errno 48] Address already in use`
**Fix:** Use different port: `--port 8001`
**Alternative:** Kill process: `lsof -ti:8000 | xargs kill -9`
**Source:** Common networking issue
## Error 11: Schema Generation Failures
**Error:** `TypeError: not JSON serializable`
**Fix:** Use JSON-compatible types (no NumPy arrays, custom classes)
**Example:** Convert: `data.tolist()` or `data.to_dict()`
**Source:** JSON serialization requirements
## Error 12: JSON Serialization
**Error:** `TypeError: Object of type 'datetime' not JSON serializable`
**Fix:** Convert to string: `datetime.now().isoformat()`
**Apply to:** datetime, bytes, custom objects
**Source:** JSON specification
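One generic way to handle this is a `default=` hook for `json.dumps` (a sketch, not FastMCP-specific; the handled types are examples):

```python
import json
from datetime import datetime, timezone

def json_default(obj):
    """Fallback serializer for types json.dumps can't handle natively."""
    if isinstance(obj, datetime):
        return obj.isoformat()
    if isinstance(obj, bytes):
        return obj.decode("utf-8", errors="replace")
    raise TypeError(f"Object of type {type(obj).__name__} is not JSON serializable")

payload = {"ts": datetime(2025, 1, 1, tzinfo=timezone.utc), "raw": b"ok"}
print(json.dumps(payload, default=json_default))
# → {"ts": "2025-01-01T00:00:00+00:00", "raw": "ok"}
```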
## Error 13: Circular Import Errors
**Error:** `ImportError: cannot import name 'X' from partially initialized module`
**Fix:** Avoid factory functions in `__init__.py`, use direct imports
**Example:** Import `APIClient` directly, don't use `get_api_client()` factory
**Why:** Cloud deployment initializes modules differently
**Source:** Production cloud deployment errors
## Error 14: Python Version Compatibility
**Error:** `DeprecationWarning: datetime.utcnow() deprecated`
**Fix:** Use `datetime.now(timezone.utc)` (Python 3.12+)
**Why:** Python 3.12+ deprecated some datetime methods
**Source:** Python 3.12 release notes
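In code, the replacement is a one-line change:

```python
from datetime import datetime, timezone

# Deprecated on Python 3.12+: datetime.utcnow() (returns a naive datetime)
# Preferred: timezone-aware UTC timestamps
now = datetime.now(timezone.utc)
print(now.tzinfo)  # → UTC
```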
## Error 15: Import-Time Execution
**Error:** `RuntimeError: Event loop is closed`
**Fix:** Don't create async resources at module level
**Example:** Use lazy initialization: create resources when needed, not at import
**Why:** Event loop not available during module import
**Source:** Async event loop management
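A sketch of the lazy-initialization pattern (names are illustrative; a plain `object()` stands in for a real async resource such as an `httpx.AsyncClient`):

```python
import asyncio

_client = None  # do NOT create async resources here at import time

async def get_client():
    """Create the shared resource lazily, inside a running event loop."""
    global _client
    if _client is None:
        _client = object()  # e.g. httpx.AsyncClient() in a real server
    return _client

async def main():
    a = await get_client()
    b = await get_client()
    assert a is b  # same instance reused across calls

asyncio.run(main())
```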
---
## Quick Debugging Checklist
When encountering errors:
1. ✅ Check server is exported at module level
2. ✅ Verify async/await usage is correct
3. ✅ Ensure Context type hints are present
4. ✅ Validate resource URIs have scheme prefixes
5. ✅ Match resource template parameters exactly
6. ✅ Use JSON-serializable types only
7. ✅ Avoid circular imports (especially in `__init__.py`)
8. ✅ Don't execute async code at module level
9. ✅ Test locally with `fastmcp dev server.py` before deploying
## Getting Help
- **FastMCP GitHub**: https://github.com/jlowin/fastmcp/issues
- **Context7 Docs**: `/jlowin/fastmcp`
- **This Skill**: See SKILL.md for detailed solutions

# FastMCP Context Features Reference
Complete reference for FastMCP's advanced context features: elicitation, progress tracking, and sampling.
## Context Injection
To use context features, inject Context into your tool:
```python
from fastmcp import Context

@mcp.tool()
async def tool_with_context(param: str, context: Context) -> dict:
    """Tool that uses context features."""
    # Access context features here
    pass
```
**Important:** Context parameter MUST have type hint `Context` for injection to work.
## Feature 1: Elicitation (User Input)
Request user input during tool execution.
### Basic Usage
```python
from fastmcp import Context

@mcp.tool()
async def confirm_action(action: str, context: Context) -> dict:
    """Request user confirmation."""
    # Request user input
    user_response = await context.request_elicitation(
        prompt=f"Confirm {action}? (yes/no)",
        response_type=str
    )
    if user_response.lower() == "yes":
        result = await perform_action(action)
        return {"status": "completed", "action": action}
    else:
        return {"status": "cancelled", "action": action}
```
### Type-Based Elicitation
```python
@mcp.tool()
async def collect_user_info(context: Context) -> dict:
    """Collect information from user."""
    # String input
    name = await context.request_elicitation(
        prompt="What is your name?",
        response_type=str
    )
    # Boolean input
    confirmed = await context.request_elicitation(
        prompt="Do you want to continue?",
        response_type=bool
    )
    # Numeric input
    count = await context.request_elicitation(
        prompt="How many items?",
        response_type=int
    )
    return {
        "name": name,
        "confirmed": confirmed,
        "count": count
    }
```
### Custom Type Elicitation
```python
from dataclasses import dataclass

@dataclass
class UserChoice:
    option: str
    reason: str

@mcp.tool()
async def get_user_choice(options: list[str], context: Context) -> dict:
    """Get user choice with reasoning."""
    choice = await context.request_elicitation(
        prompt=f"Choose from: {', '.join(options)}",
        response_type=UserChoice
    )
    return {
        "selected": choice.option,
        "reason": choice.reason
    }
```
### Client Handler for Elicitation
Client must provide handler:
```python
from fastmcp import Client
async def elicitation_handler(message: str, response_type: type, context: dict):
"""Handle elicitation requests."""
if response_type == str:
return input(f"{message}: ")
elif response_type == bool:
response = input(f"{message} (y/n): ")
return response.lower() == 'y'
elif response_type == int:
return int(input(f"{message}: "))
else:
return input(f"{message}: ")
async with Client(
"server.py",
elicitation_handler=elicitation_handler
) as client:
result = await client.call_tool("collect_user_info", {})
```
## Feature 2: Progress Tracking
Report progress for long-running operations.
### Basic Progress
```python
import asyncio

@mcp.tool()
async def long_operation(count: int, context: Context) -> dict:
    """Operation with progress tracking."""
    for i in range(count):
        # Report progress
        await context.report_progress(
            progress=i + 1,
            total=count,
            message=f"Processing item {i + 1}/{count}"
        )
        # Do work
        await asyncio.sleep(0.1)
    return {"status": "completed", "processed": count}
```
### Multi-Phase Progress
```python
@mcp.tool()
async def multi_phase_operation(data: list, context: Context) -> dict:
    """Operation with multiple phases."""
    # Phase 1: Loading
    await context.report_progress(0, 3, "Phase 1: Loading data")
    loaded = await load_data(data)
    # Phase 2: Processing
    await context.report_progress(1, 3, "Phase 2: Processing")
    for i, item in enumerate(loaded):
        await context.report_progress(
            progress=i,
            total=len(loaded),
            message=f"Processing {i + 1}/{len(loaded)}"
        )
        await process_item(item)
    # Phase 3: Saving
    await context.report_progress(2, 3, "Phase 3: Saving results")
    await save_results()
    await context.report_progress(3, 3, "Complete!")
    return {"status": "completed", "items": len(loaded)}
```
### Indeterminate Progress
For operations where total is unknown:
```python
@mcp.tool()
async def indeterminate_operation(context: Context) -> dict:
    """Operation with unknown duration."""
    stages = [
        "Initializing",
        "Loading data",
        "Processing",
        "Finalizing"
    ]
    for index, stage in enumerate(stages):
        # No total - shows as spinner/indeterminate
        await context.report_progress(
            progress=index,
            total=None,
            message=stage
        )
        await perform_stage(stage)
    return {"status": "completed"}
```
### Client Handler for Progress
```python
async def progress_handler(progress: float, total: float | None, message: str | None):
    """Handle progress updates."""
    if total:
        pct = (progress / total) * 100
        # Use \r for a same-line update
        print(f"\r[{pct:.1f}%] {message}", end="", flush=True)
    else:
        # Indeterminate progress
        print(f"\n[PROGRESS] {message}")

async with Client(
    "server.py",
    progress_handler=progress_handler
) as client:
    result = await client.call_tool("long_operation", {"count": 100})
```
## Feature 3: Sampling (LLM Integration)
Request LLM completions from within tools.
### Basic Sampling
```python
@mcp.tool()
async def enhance_text(text: str, context: Context) -> str:
    """Enhance text using LLM."""
    response = await context.request_sampling(
        messages=[{
            "role": "system",
            "content": "You are a professional copywriter."
        }, {
            "role": "user",
            "content": f"Enhance this text: {text}"
        }],
        temperature=0.7,
        max_tokens=500
    )
    return response["content"]
```
### Structured Output with Sampling
```python
import json

@mcp.tool()
async def classify_text(text: str, context: Context) -> dict:
    """Classify text using LLM."""
    prompt = f"""
    Classify this text: {text}
    Return JSON with:
    - category: one of [news, blog, academic, social]
    - sentiment: one of [positive, negative, neutral]
    - topics: list of main topics
    Return as JSON object.
    """
    response = await context.request_sampling(
        messages=[{"role": "user", "content": prompt}],
        temperature=0.3,  # Lower for consistency
        response_format="json"
    )
    return json.loads(response["content"])
```
### Multi-Turn Sampling
```python
@mcp.tool()
async def interactive_analysis(topic: str, context: Context) -> dict:
    """Multi-turn analysis with LLM."""
    # First turn: Generate questions
    questions_response = await context.request_sampling(
        messages=[{
            "role": "user",
            "content": f"Generate 3 key questions about: {topic}"
        }],
        max_tokens=200
    )
    # Second turn: Answer questions
    analysis_response = await context.request_sampling(
        messages=[{
            "role": "user",
            "content": f"Answer these questions about {topic}:\n{questions_response['content']}"
        }],
        max_tokens=500
    )
    return {
        "topic": topic,
        "questions": questions_response["content"],
        "analysis": analysis_response["content"]
    }
```
### Client Handler for Sampling
Client provides LLM access:
```python
async def sampling_handler(messages, params, context):
    """Handle LLM sampling requests."""
    # Call your LLM API
    from openai import AsyncOpenAI
    client = AsyncOpenAI()
    response = await client.chat.completions.create(
        model=params.get("model", "gpt-4"),
        messages=messages,
        temperature=params.get("temperature", 0.7),
        max_tokens=params.get("max_tokens", 1000)
    )
    return {
        "content": response.choices[0].message.content,
        "model": response.model,
        "usage": {
            "prompt_tokens": response.usage.prompt_tokens,
            "completion_tokens": response.usage.completion_tokens,
            "total_tokens": response.usage.total_tokens
        }
    }

async with Client(
    "server.py",
    sampling_handler=sampling_handler
) as client:
    result = await client.call_tool("enhance_text", {"text": "Hello world"})
```
## Combined Example
All context features together:
```python
@mcp.tool()
async def comprehensive_task(data: list, context: Context) -> dict:
    """Task using all context features."""
    # 1. Elicitation: Confirm operation
    confirmed = await context.request_elicitation(
        prompt="Start processing?",
        response_type=bool
    )
    if not confirmed:
        return {"status": "cancelled"}
    # 2. Progress: Track processing
    results = []
    for i, item in enumerate(data):
        await context.report_progress(
            progress=i + 1,
            total=len(data),
            message=f"Processing {i + 1}/{len(data)}"
        )
        # 3. Sampling: Use LLM for processing
        enhanced = await context.request_sampling(
            messages=[{
                "role": "user",
                "content": f"Analyze this item: {item}"
            }],
            temperature=0.5
        )
        results.append({
            "item": item,
            "analysis": enhanced["content"]
        })
    return {
        "status": "completed",
        "total": len(data),
        "results": results
    }
```
## Best Practices
### Elicitation
- **Clear prompts**: Be specific about what you're asking
- **Type validation**: Use appropriate response_type
- **Handle cancellation**: Allow users to cancel operations
- **Provide context**: Explain why input is needed
### Progress Tracking
- **Regular updates**: Report every 5-10% or every item
- **Meaningful messages**: Describe what's happening
- **Phase indicators**: Show which phase of operation
- **Final confirmation**: Report 100% completion
### Sampling
- **System prompts**: Set clear instructions
- **Temperature control**: Lower for factual, higher for creative
- **Token limits**: Set reasonable max_tokens
- **Error handling**: Handle API failures gracefully
- **Cost awareness**: Sampling uses LLM API (costs money)
## Error Handling
### Context Not Available
```python
@mcp.tool()
async def safe_context_usage(context: Context) -> dict:
    """Safely use context features."""
    # Check if feature is available
    if hasattr(context, 'report_progress'):
        await context.report_progress(0, 100, "Starting")
    if hasattr(context, 'request_elicitation'):
        response = await context.request_elicitation(
            prompt="Continue?",
            response_type=bool
        )
    else:
        # Fallback behavior
        response = True
    return {"status": "completed"}
```
### Timeout Handling
```python
import asyncio

@mcp.tool()
async def elicitation_with_timeout(context: Context) -> dict:
    """Elicitation with a timeout."""
    try:
        response = await asyncio.wait_for(
            context.request_elicitation(
                prompt="Your input (30 seconds):",
                response_type=str
            ),
            timeout=30.0
        )
        return {"status": "completed", "input": response}
    except asyncio.TimeoutError:
        return {"status": "timeout", "message": "No input received"}
```
## Context Feature Availability
| Feature | Claude Desktop | Claude Code CLI | FastMCP Cloud | Custom Client |
|---------|---------------|----------------|---------------|---------------|
| Elicitation | ✅ | ✅ | ⚠️ Depends | ✅ With handler |
| Progress | ✅ | ✅ | ✅ | ✅ With handler |
| Sampling | ✅ | ✅ | ⚠️ Depends | ✅ With handler |
⚠️ = Feature availability depends on client implementation
## Resources
- **Context API**: See SKILL.md for full Context API reference
- **Client Handlers**: See `client-example.py` template
- **MCP Protocol**: https://modelcontextprotocol.io

# FastMCP Integration Patterns Reference
Quick reference for API integration patterns with FastMCP.
## Pattern 1: Manual API Integration
Best for simple APIs or when you need fine control.
```python
import os

import httpx
from fastmcp import FastMCP

mcp = FastMCP("API Integration")

API_KEY = os.getenv("API_KEY", "")

# Reusable client
client = httpx.AsyncClient(
    base_url="https://api.example.com",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30.0
)

@mcp.tool()
async def fetch_data(endpoint: str) -> dict:
    """Fetch from API."""
    response = await client.get(endpoint)
    response.raise_for_status()
    return response.json()
```
**Pros:**
- Full control over requests
- Easy to customize
- Simple to understand
**Cons:**
- Manual tool creation for each endpoint
- More boilerplate code
## Pattern 2: OpenAPI/Swagger Auto-Generation
Best for well-documented APIs with OpenAPI specs.
```python
import os

import httpx
from fastmcp import FastMCP
from fastmcp.server.openapi import RouteMap, MCPType

API_KEY = os.getenv("API_KEY", "")

# Load spec
spec = httpx.get("https://api.example.com/openapi.json").json()

# Create client
client = httpx.AsyncClient(
    base_url="https://api.example.com",
    headers={"Authorization": f"Bearer {API_KEY}"}
)

# Auto-generate server
mcp = FastMCP.from_openapi(
    openapi_spec=spec,
    client=client,
    name="API Server",
    route_maps=[
        # GET + params → Resource Templates
        RouteMap(
            methods=["GET"],
            pattern=r".*\{.*\}.*",
            mcp_type=MCPType.RESOURCE_TEMPLATE
        ),
        # GET no params → Resources
        RouteMap(
            methods=["GET"],
            mcp_type=MCPType.RESOURCE
        ),
        # POST/PUT/DELETE → Tools
        RouteMap(
            methods=["POST", "PUT", "PATCH", "DELETE"],
            mcp_type=MCPType.TOOL
        ),
    ]
)
```
**Pros:**
- Instant integration (minutes not hours)
- Auto-updates with spec changes
- No manual endpoint mapping
**Cons:**
- Requires OpenAPI/Swagger spec
- Less control over individual endpoints
- May include unwanted endpoints
## Pattern 3: FastAPI Conversion
Best for converting existing FastAPI applications.
```python
from fastapi import FastAPI
from fastmcp import FastMCP

# Existing FastAPI app
app = FastAPI()

@app.get("/users/{user_id}")
def get_user(user_id: int):
    return {"id": user_id, "name": "User"}

# Convert to MCP
mcp = FastMCP.from_fastapi(
    app=app,
    httpx_client_kwargs={
        "headers": {"Authorization": "Bearer token"}
    }
)
```
**Pros:**
- Reuse existing FastAPI code
- Minimal changes needed
- Familiar FastAPI patterns
**Cons:**
- FastAPI must be running separately
- Extra HTTP hop (slower)
## Route Mapping Strategies
### Strategy 1: By HTTP Method
```python
route_maps = [
    RouteMap(methods=["GET"], mcp_type=MCPType.RESOURCE),
    RouteMap(methods=["POST"], mcp_type=MCPType.TOOL),
    RouteMap(methods=["PUT", "PATCH"], mcp_type=MCPType.TOOL),
    RouteMap(methods=["DELETE"], mcp_type=MCPType.TOOL),
]
```
### Strategy 2: By Path Pattern
```python
route_maps = [
    # Admin endpoints → Exclude
    RouteMap(
        pattern=r"/admin/.*",
        mcp_type=MCPType.EXCLUDE
    ),
    # Internal → Exclude
    RouteMap(
        pattern=r"/internal/.*",
        mcp_type=MCPType.EXCLUDE
    ),
    # Health → Exclude
    RouteMap(
        pattern=r"/(health|healthz)",
        mcp_type=MCPType.EXCLUDE
    ),
    # Everything else
    RouteMap(mcp_type=MCPType.TOOL),
]
```
### Strategy 3: By Parameters
```python
route_maps = [
    # Has path parameters → Resource Template
    RouteMap(
        pattern=r".*\{[^}]+\}.*",
        mcp_type=MCPType.RESOURCE_TEMPLATE
    ),
    # No parameters → Static Resource or Tool
    RouteMap(
        methods=["GET"],
        mcp_type=MCPType.RESOURCE
    ),
    RouteMap(
        methods=["POST", "PUT", "DELETE"],
        mcp_type=MCPType.TOOL
    ),
]
```
## Authentication Patterns
### API Key Authentication
```python
client = httpx.AsyncClient(
    base_url="https://api.example.com",
    headers={"X-API-Key": os.getenv("API_KEY")}
)
```
### Bearer Token
```python
client = httpx.AsyncClient(
base_url="https://api.example.com",
headers={"Authorization": f"Bearer {os.getenv('API_TOKEN')}"}
)
```
### OAuth2 with Token Refresh
```python
import os
from datetime import datetime, timedelta

import httpx

CLIENT_ID = os.getenv("CLIENT_ID", "")
CLIENT_SECRET = os.getenv("CLIENT_SECRET", "")

class OAuth2Client:
def __init__(self):
self.access_token = None
self.expires_at = None
async def get_token(self) -> str:
if not self.expires_at or datetime.now() > self.expires_at:
await self.refresh_token()
return self.access_token
async def refresh_token(self):
async with httpx.AsyncClient() as client:
response = await client.post(
"https://auth.example.com/token",
data={
"grant_type": "client_credentials",
"client_id": CLIENT_ID,
"client_secret": CLIENT_SECRET
}
)
data = response.json()
self.access_token = data["access_token"]
self.expires_at = datetime.now() + timedelta(
seconds=data["expires_in"] - 60
)
oauth = OAuth2Client()
@mcp.tool()
async def authenticated_request(endpoint: str) -> dict:
token = await oauth.get_token()
async with httpx.AsyncClient() as client:
response = await client.get(
endpoint,
headers={"Authorization": f"Bearer {token}"}
)
return response.json()
```
## Error Handling Patterns
### Basic Error Handling
```python
@mcp.tool()
async def safe_api_call(endpoint: str) -> dict:
try:
response = await client.get(endpoint)
response.raise_for_status()
return {"success": True, "data": response.json()}
except httpx.HTTPStatusError as e:
return {
"success": False,
"error": f"HTTP {e.response.status_code}",
"message": e.response.text
}
except httpx.TimeoutException:
return {"success": False, "error": "Request timeout"}
except Exception as e:
return {"success": False, "error": str(e)}
```
### Retry with Exponential Backoff
```python
import asyncio

import httpx

async def retry_with_backoff(func, max_retries=3):
delay = 1.0
for attempt in range(max_retries):
try:
return await func()
except (httpx.TimeoutException, httpx.NetworkError) as e:
if attempt < max_retries - 1:
await asyncio.sleep(delay)
delay *= 2
else:
raise
```
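The backoff loop above can be exercised standalone; a stdlib-only sketch with a simulated flaky call (the httpx exception types are swapped for `ConnectionError`, and the delays shortened, purely so the example runs without a network):

```python
import asyncio

async def retry_with_backoff(func, max_retries=3, initial_delay=0.01):
    """Retry an async callable, doubling the delay after each failure."""
    delay = initial_delay
    for attempt in range(max_retries):
        try:
            return await func()
        except ConnectionError:
            if attempt < max_retries - 1:
                await asyncio.sleep(delay)
                delay *= 2
            else:
                raise

calls = 0

async def flaky():
    """Fails twice, then succeeds -- a stand-in for a transient API error."""
    global calls
    calls += 1
    if calls < 3:
        raise ConnectionError("transient")
    return "ok"

result = asyncio.run(retry_with_backoff(flaky))
print(result, calls)  # succeeds on the third attempt
```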
## Caching Patterns
### Simple Time-Based Cache
```python
import time
class SimpleCache:
def __init__(self, ttl=300):
self.cache = {}
self.timestamps = {}
self.ttl = ttl
def get(self, key: str):
if key in self.cache:
if time.time() - self.timestamps[key] < self.ttl:
return self.cache[key]
return None
def set(self, key: str, value):
self.cache[key] = value
self.timestamps[key] = time.time()
cache = SimpleCache()
@mcp.tool()
async def cached_fetch(endpoint: str) -> dict:
# Check cache
cached = cache.get(endpoint)
if cached:
return {"data": cached, "from_cache": True}
# Fetch from API
data = await fetch_from_api(endpoint)
cache.set(endpoint, data)
return {"data": data, "from_cache": False}
```
## Rate Limiting Patterns
### Simple Rate Limiter
```python
import asyncio
from collections import deque
from datetime import datetime, timedelta
class RateLimiter:
def __init__(self, max_requests: int, time_window: int):
self.max_requests = max_requests
self.time_window = timedelta(seconds=time_window)
self.requests = deque()
async def acquire(self):
now = datetime.now()
# Remove old requests
while self.requests and now - self.requests[0] > self.time_window:
self.requests.popleft()
# Check limit
if len(self.requests) >= self.max_requests:
sleep_time = (self.requests[0] + self.time_window - now).total_seconds()
await asyncio.sleep(sleep_time)
return await self.acquire()
self.requests.append(now)
limiter = RateLimiter(100, 60) # 100 requests per minute
@mcp.tool()
async def rate_limited_call(endpoint: str) -> dict:
await limiter.acquire()
return await api_call(endpoint)
```
## Connection Pooling
### Singleton Client Pattern
```python
class APIClient:
_instance = None
@classmethod
async def get_client(cls):
if cls._instance is None:
cls._instance = httpx.AsyncClient(
base_url=API_BASE_URL,
timeout=30.0,
limits=httpx.Limits(
max_keepalive_connections=5,
max_connections=10
)
)
return cls._instance
@classmethod
async def cleanup(cls):
if cls._instance:
await cls._instance.aclose()
cls._instance = None
# Use in tools
@mcp.tool()
async def api_request(endpoint: str) -> dict:
client = await APIClient.get_client()
response = await client.get(endpoint)
return response.json()
```
## Batch Request Patterns
### Parallel Batch Requests
```python
@mcp.tool()
async def batch_fetch(endpoints: list[str]) -> dict:
"""Fetch multiple endpoints in parallel."""
async def fetch_one(endpoint: str):
try:
response = await client.get(endpoint)
return {"endpoint": endpoint, "success": True, "data": response.json()}
except Exception as e:
return {"endpoint": endpoint, "success": False, "error": str(e)}
results = await asyncio.gather(*[fetch_one(ep) for ep in endpoints])
return {
"total": len(endpoints),
"successful": len([r for r in results if r["success"]]),
"results": results
}
```
## Webhook Patterns
### Webhook Receiver
```python
from fastapi import FastAPI, Request
app = FastAPI()
@app.post("/webhook")
async def handle_webhook(request: Request):
data = await request.json()
# Process webhook
return {"status": "received"}
# Add to MCP server
mcp = FastMCP.from_fastapi(app)
```
## When to Use Each Pattern
| Pattern | Use When | Avoid When |
|---------|----------|------------|
| Manual Integration | Simple API, custom logic needed | API has 50+ endpoints |
| OpenAPI Auto-gen | Well-documented API, many endpoints | No OpenAPI spec available |
| FastAPI Conversion | Existing FastAPI app | Starting from scratch |
| Custom Route Maps | Need precise control | Simple use case |
| Connection Pooling | High-frequency requests | Single request needed |
| Caching | Expensive API calls, data rarely changes | Real-time data required |
| Rate Limiting | API has rate limits | No limits or internal API |
## Resources
- **FastMCP OpenAPI**: `FastMCP.from_openapi` documentation
- **FastAPI Integration**: `FastMCP.from_fastapi` documentation
- **HTTPX Docs**: https://www.python-httpx.org
- **OpenAPI Spec**: https://spec.openapis.org

# FastMCP Production Patterns Reference
Battle-tested patterns for production-ready FastMCP servers.
## Self-Contained Server Pattern
**Problem:** Circular imports break cloud deployment
**Solution:** Keep all utilities in one file
```python
# src/utils.py - All utilities in one place
import os
from typing import Dict, Any
from datetime import datetime
class Config:
"""Configuration from environment."""
SERVER_NAME = os.getenv("SERVER_NAME", "FastMCP Server")
API_KEY = os.getenv("API_KEY", "")
CACHE_TTL = int(os.getenv("CACHE_TTL", "300"))
def format_success(data: Any) -> Dict[str, Any]:
"""Format successful response."""
return {
"success": True,
"data": data,
"timestamp": datetime.now().isoformat()
}
def format_error(error: str, code: str = "ERROR") -> Dict[str, Any]:
"""Format error response."""
return {
"success": False,
"error": error,
"code": code,
"timestamp": datetime.now().isoformat()
}
# Usage in tools
from .utils import format_success, format_error, Config
@mcp.tool()
async def process_data(data: dict) -> dict:
try:
result = await process(data)
return format_success(result)
except Exception as e:
return format_error(str(e))
```
**Why it works:**
- No circular dependencies
- Cloud deployment safe
- Easy to maintain
- Single source of truth
## Lazy Initialization Pattern
**Problem:** Creating expensive resources at import time fails in cloud
**Solution:** Initialize resources only when needed
```python
class ResourceManager:
"""Manages expensive resources with lazy initialization."""
_db_pool = None
_cache = None
@classmethod
async def get_db(cls):
"""Get database pool (create on first use)."""
if cls._db_pool is None:
cls._db_pool = await create_db_pool()
return cls._db_pool
@classmethod
async def get_cache(cls):
"""Get cache (create on first use)."""
if cls._cache is None:
cls._cache = await create_cache()
return cls._cache
# Usage - no initialization at module level
manager = ResourceManager() # Lightweight
@mcp.tool()
async def database_operation():
db = await manager.get_db() # Initialization happens here
return await db.query("SELECT * FROM users")
```
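The same lazy-creation idea in a dependency-free sketch (the "expensive" resource is simulated here, with a counter proving it is built exactly once):

```python
import asyncio

created = 0

async def create_expensive_resource():
    """Stand-in for an expensive setup step (DB pool, cache connection)."""
    global created
    created += 1  # track how many times the resource is actually built
    return {"conn": "ready"}

class ResourceManager:
    _resource = None

    @classmethod
    async def get(cls):
        if cls._resource is None:  # build only on first use, never at import
            cls._resource = await create_expensive_resource()
        return cls._resource

async def main():
    a = await ResourceManager.get()
    b = await ResourceManager.get()
    return a is b  # both callers share the same instance

same = asyncio.run(main())
print(same, created)  # True 1
```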
## Connection Pooling Pattern
**Problem:** Creating new connections for each request is slow
**Solution:** Reuse HTTP clients with connection pooling
```python
from typing import Optional

import httpx
class APIClient:
_instance: Optional[httpx.AsyncClient] = None
@classmethod
async def get_client(cls) -> httpx.AsyncClient:
"""Get or create shared HTTP client."""
if cls._instance is None:
cls._instance = httpx.AsyncClient(
base_url=API_BASE_URL,
timeout=httpx.Timeout(30.0),
limits=httpx.Limits(
max_keepalive_connections=5,
max_connections=10
)
)
return cls._instance
@classmethod
async def cleanup(cls):
"""Cleanup on shutdown."""
if cls._instance:
await cls._instance.aclose()
cls._instance = None
@mcp.tool()
async def api_request(endpoint: str) -> dict:
client = await APIClient.get_client()
response = await client.get(endpoint)
return response.json()
```
## Retry with Exponential Backoff
**Problem:** Transient failures cause tool failures
**Solution:** Automatic retry with exponential backoff
```python
import asyncio

import httpx
async def retry_with_backoff(
func,
max_retries: int = 3,
initial_delay: float = 1.0,
exponential_base: float = 2.0
):
"""Retry function with exponential backoff."""
delay = initial_delay
last_exception = None
for attempt in range(max_retries):
try:
return await func()
except (httpx.TimeoutException, httpx.NetworkError) as e:
last_exception = e
if attempt < max_retries - 1:
await asyncio.sleep(delay)
delay *= exponential_base
raise last_exception
@mcp.tool()
async def resilient_api_call(endpoint: str) -> dict:
"""API call with automatic retry."""
async def make_call():
client = await APIClient.get_client()
response = await client.get(endpoint)
response.raise_for_status()
return response.json()
try:
data = await retry_with_backoff(make_call)
return {"success": True, "data": data}
except Exception as e:
return {"success": False, "error": str(e)}
```
## Time-Based Caching Pattern
**Problem:** Repeated API calls for same data waste time/money
**Solution:** Cache with TTL (time-to-live)
```python
import time
class TimeBasedCache:
def __init__(self, ttl: int = 300):
self.ttl = ttl
self.cache = {}
self.timestamps = {}
def get(self, key: str):
if key in self.cache:
if time.time() - self.timestamps[key] < self.ttl:
return self.cache[key]
else:
del self.cache[key]
del self.timestamps[key]
return None
def set(self, key: str, value):
self.cache[key] = value
self.timestamps[key] = time.time()
cache = TimeBasedCache(ttl=300)
@mcp.tool()
async def cached_fetch(resource_id: str) -> dict:
"""Fetch with caching."""
cache_key = f"resource:{resource_id}"
cached = cache.get(cache_key)
if cached:
return {"data": cached, "from_cache": True}
data = await fetch_from_api(resource_id)
cache.set(cache_key, data)
return {"data": data, "from_cache": False}
```
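A quick expiry check for the cache above (a sketch that re-declares a minimal version of the class so it runs standalone, with a deliberately tiny TTL):

```python
import time

class TimeBasedCache:
    def __init__(self, ttl: float = 300):
        self.ttl = ttl
        self.cache = {}
        self.timestamps = {}

    def get(self, key):
        if key in self.cache and time.time() - self.timestamps[key] < self.ttl:
            return self.cache[key]
        return None

    def set(self, key, value):
        self.cache[key] = value
        self.timestamps[key] = time.time()

cache = TimeBasedCache(ttl=0.05)  # 50 ms TTL, just for the demo
cache.set("k", "v")
fresh = cache.get("k")    # within TTL
time.sleep(0.1)
expired = cache.get("k")  # past TTL
print(fresh, expired)     # v None
```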
## Structured Error Responses
**Problem:** Inconsistent error formats make debugging hard
**Solution:** Standardized error response format
```python
from datetime import datetime
from enum import Enum
class ErrorCode(Enum):
VALIDATION_ERROR = "VALIDATION_ERROR"
NOT_FOUND = "NOT_FOUND"
API_ERROR = "API_ERROR"
TIMEOUT = "TIMEOUT"
UNKNOWN = "UNKNOWN"
def create_error(code: ErrorCode, message: str, details: dict | None = None):
"""Create structured error response."""
return {
"success": False,
"error": {
"code": code.value,
"message": message,
"details": details or {},
"timestamp": datetime.now().isoformat()
}
}
@mcp.tool()
async def validated_operation(data: str) -> dict:
if not data:
return create_error(
ErrorCode.VALIDATION_ERROR,
"Data is required",
{"field": "data"}
)
try:
result = await process(data)
return {"success": True, "data": result}
except Exception as e:
return create_error(ErrorCode.UNKNOWN, str(e))
```
## Environment-Based Configuration
**Problem:** Different settings for dev/staging/production
**Solution:** Environment-based configuration class
```python
import os
from enum import Enum
class Environment(Enum):
DEVELOPMENT = "development"
STAGING = "staging"
PRODUCTION = "production"
class Config:
ENV = Environment(os.getenv("ENVIRONMENT", "development"))
SETTINGS = {
Environment.DEVELOPMENT: {
"debug": True,
"cache_ttl": 60,
"log_level": "DEBUG"
},
Environment.STAGING: {
"debug": True,
"cache_ttl": 300,
"log_level": "INFO"
},
Environment.PRODUCTION: {
"debug": False,
"cache_ttl": 3600,
"log_level": "WARNING"
}
}
@classmethod
def get(cls, key: str):
return cls.SETTINGS[cls.ENV].get(key)
# Use configuration
cache_ttl = Config.get("cache_ttl")
debug_mode = Config.get("debug")
```
## Health Check Pattern
**Problem:** Need to monitor server health in production
**Solution:** Comprehensive health check resource
```python
@mcp.resource("health://status")
async def health_check() -> dict:
"""Comprehensive health check."""
checks = {}
# Check API connectivity
try:
client = await APIClient.get_client()
response = await client.get("/health", timeout=5)
checks["api"] = response.status_code == 200
    except Exception:
checks["api"] = False
# Check database (if applicable)
try:
db = await ResourceManager.get_db()
await db.execute("SELECT 1")
checks["database"] = True
    except Exception:
checks["database"] = False
# System resources
import psutil
checks["memory_percent"] = psutil.virtual_memory().percent
checks["cpu_percent"] = psutil.cpu_percent()
# Overall status
all_healthy = (
checks.get("api", True) and
checks.get("database", True) and
checks["memory_percent"] < 90 and
checks["cpu_percent"] < 90
)
return {
"status": "healthy" if all_healthy else "degraded",
"timestamp": datetime.now().isoformat(),
"checks": checks
}
```
## Parallel Processing Pattern
**Problem:** Sequential processing is slow for batch operations
**Solution:** Process items in parallel
```python
import asyncio
@mcp.tool()
async def batch_process(items: list[str]) -> dict:
"""Process multiple items in parallel."""
async def process_single(item: str):
try:
result = await process_item(item)
return {"item": item, "success": True, "result": result}
except Exception as e:
return {"item": item, "success": False, "error": str(e)}
# Process all items in parallel
tasks = [process_single(item) for item in items]
results = await asyncio.gather(*tasks)
successful = [r for r in results if r["success"]]
failed = [r for r in results if not r["success"]]
return {
"total": len(items),
"successful": len(successful),
"failed": len(failed),
"results": results
}
```
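If the upstream service cannot absorb unbounded fan-out, the same gather can be bounded with an `asyncio.Semaphore` (a sketch; the limit of 5 and the `asyncio.sleep(0)` stand-in for real work are assumptions):

```python
import asyncio

async def bounded_batch(items, limit=5):
    """Process items in parallel, but never more than `limit` at once."""
    sem = asyncio.Semaphore(limit)

    async def process_single(item):
        async with sem:  # at most `limit` coroutines pass this point at a time
            await asyncio.sleep(0)  # stand-in for the real async work
            return {"item": item, "success": True}

    # gather preserves input order regardless of completion order
    return await asyncio.gather(*[process_single(i) for i in items])

results = asyncio.run(bounded_batch(list(range(20))))
print(len(results))  # 20
```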
## State Management Pattern
**Problem:** Shared state causes race conditions
**Solution:** Thread-safe state management with locks
```python
import asyncio
class StateManager:
def __init__(self):
self._state = {}
self._locks = {}
async def get(self, key: str, default=None):
return self._state.get(key, default)
async def set(self, key: str, value):
if key not in self._locks:
self._locks[key] = asyncio.Lock()
async with self._locks[key]:
self._state[key] = value
async def update(self, key: str, updater):
"""Update with function."""
if key not in self._locks:
self._locks[key] = asyncio.Lock()
async with self._locks[key]:
current = self._state.get(key)
self._state[key] = await updater(current)
return self._state[key]
state = StateManager()
@mcp.tool()
async def increment_counter(name: str) -> dict:
new_value = await state.update(
f"counter_{name}",
lambda x: (x or 0) + 1
)
return {"counter": name, "value": new_value}
```
## Anti-Patterns to Avoid
### ❌ Factory Functions in `__init__.py`
```python
# DON'T DO THIS
# shared/__init__.py
def get_api_client():
from .api_client import APIClient # Circular import risk
return APIClient()
```
### ❌ Blocking Operations in Async
```python
# DON'T DO THIS
@mcp.tool()
async def bad_async():
time.sleep(5) # Blocks entire event loop!
return "done"
# DO THIS INSTEAD
@mcp.tool()
async def good_async():
await asyncio.sleep(5)
return "done"
```
### ❌ Global Mutable State
```python
# DON'T DO THIS
results = [] # Race conditions!
@mcp.tool()
async def add_result(data: str):
results.append(data)
```
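A minimal corrected version guards the shared list with an `asyncio.Lock` (a sketch; this is the same idea the `StateManager` pattern encapsulates more generally):

```python
import asyncio

results: list[str] = []
results_lock = asyncio.Lock()

async def add_result(data: str):
    # Serialize writers so concurrent tool calls cannot interleave mid-update
    async with results_lock:
        results.append(data)

async def main():
    await asyncio.gather(*[add_result(f"item-{i}") for i in range(10)])

asyncio.run(main())
print(len(results))  # 10
```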
## Production Deployment Checklist
- [ ] Module-level server object
- [ ] Environment variables for all config
- [ ] Connection pooling for HTTP clients
- [ ] Retry logic for transient failures
- [ ] Caching for expensive operations
- [ ] Structured error responses
- [ ] Health check endpoint
- [ ] Logging configured
- [ ] No circular imports
- [ ] No import-time async execution
- [ ] Rate limiting if needed
- [ ] Graceful shutdown handling
## Resources
- **Production Examples**: See `self-contained-server.py` template
- **Error Handling**: See `error-handling.py` template
- **API Patterns**: See `api-client-pattern.py` template