---
name: flask-docker-deployment
description: Set up Docker deployment for Flask applications with Gunicorn, automated versioning, and container registry publishing
---

# Flask Docker Deployment Pattern

This skill helps you containerize Flask applications using Docker with Gunicorn for production, automated version management, and seamless container registry publishing.

## When to Use This Skill

Use this skill when:
- You have a Flask application ready to deploy
- You want production-grade containerization with Gunicorn
- You need automated version management for builds
- You're publishing to a container registry (Docker Hub, GHCR, ECR, etc.)
- You want a repeatable, idempotent deployment pipeline

## What This Skill Creates

1. **Dockerfile** - Production-ready container with security best practices
2. **build-publish.sh** - Automated build script with version management
3. **VERSION** file - Auto-incrementing version tracking (gitignored)
4. **.gitignore** - Entry for VERSION file
5. **Optional .dockerignore** - Exclude unnecessary files from build context

## Prerequisites

Before using this skill, ensure:
1. Flask application is working locally
2. `requirements.txt` exists with all dependencies
3. Docker is installed and running
4. You're authenticated to your container registry (if publishing)

## Step 1: Gather Project Information

**IMPORTANT**: Before creating files, ask the user these questions:

1. **"What is your Flask application entry point?"**
   - Format: `{module_name}:{app_variable}`
   - Example: `hyperopt_daemon:app` or `api_server:create_app()`

2. **"What port does your Flask app use?"**
   - Default: 5000
   - Examples: 5678, 8080, 3000

3. **"What is your container registry URL?"**
   - Examples:
     - GitHub: `ghcr.io/{org}/{project}`
     - Docker Hub: `docker.io/{user}/{project}`
     - AWS ECR: `{account}.dkr.ecr.{region}.amazonaws.com/{project}`

4. **"Do you have private Git dependencies?"** (yes/no)
   - If yes: a GitHub Personal Access Token (CR_PAT) is needed
   - If no: the git installation step can be skipped

5. **"How many Gunicorn workers do you want?"**
   - Default: 4
   - Recommendation: 2-4 × CPU cores
   - Note: For background job workers, use 1
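
The 2-4 × CPU cores rule of thumb above is easy to compute at runtime; a minimal sketch (the helper name is illustrative, not part of the generated files):

```python
import os

def suggested_workers(multiplier: int = 2) -> int:
    """Suggest a Gunicorn worker count from the 2-4 x CPU cores rule of thumb."""
    cores = os.cpu_count() or 1  # os.cpu_count() can return None
    return max(1, multiplier * cores)
```

A background job worker would bypass this entirely and pin the count to 1.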

## Step 2: Create Dockerfile

Create `Dockerfile` in the project root:

```dockerfile
FROM python:3.13-slim

# Build argument for GitHub Personal Access Token (if needed for private deps).
# ARG (without ENV) keeps the token out of the runtime environment; note that
# build args can still appear in image history, so use BuildKit secrets if
# that matters for your registry.
ARG CR_PAT

# Install git if you have private GitHub dependencies
RUN apt-get update && apt-get install -y \
    git \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app

# Copy requirements and install dependencies
COPY requirements.txt .

# Configure git to use the PAT for GitHub access (if private deps)
RUN git config --global url."https://${CR_PAT}@github.com/".insteadOf "https://github.com/" \
    && pip install --no-cache-dir -r requirements.txt \
    && git config --global --unset url."https://${CR_PAT}@github.com/".insteadOf

# Copy application code
COPY . .

# Create non-root user for security
RUN useradd --create-home --shell /bin/bash appuser
RUN chown -R appuser:appuser /app
USER appuser

# Expose the application port
EXPOSE {port}

# Set environment variables
ENV PYTHONPATH=/app
ENV PORT={port}

# Run with gunicorn for production
CMD ["gunicorn", "--bind", "0.0.0.0:{port}", "--workers", "{workers}", "{module}:{app}"]
```

**CRITICAL Replacements:**
- `{port}` → Application port (e.g., 5678)
- `{workers}` → Number of workers (e.g., 4, or 1 for background jobs)
- `{module}` → Python module name (e.g., hyperopt_daemon)
- `{app}` → App variable name (e.g., `app` or `create_app()`)

**If NO private dependencies**, remove the `ARG CR_PAT` line, the git installation step, and the git config commands. Simplified version without private deps:

```dockerfile
FROM python:3.13-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

RUN useradd --create-home --shell /bin/bash appuser
RUN chown -R appuser:appuser /app
USER appuser

EXPOSE {port}
ENV PYTHONPATH=/app
ENV PORT={port}

CMD ["gunicorn", "--bind", "0.0.0.0:{port}", "--workers", "{workers}", "{module}:{app}"]
```

## Step 3: Create build-publish.sh Script

Create `build-publish.sh` in the project root:

```bash
#!/bin/sh

# Exit immediately on error so a failed build is never tagged or pushed
set -e

# VERSION file path
VERSION_FILE="VERSION"

# Parse command line arguments
NO_CACHE=""
if [ "$1" = "--no-cache" ]; then
    NO_CACHE="--no-cache"
    echo "Building with --no-cache flag"
fi

# Check if VERSION file exists; if not, seed it at 0 so the first build is version 1
if [ ! -f "$VERSION_FILE" ]; then
    echo "0" > "$VERSION_FILE"
    echo "Created VERSION file with initial version 0"
fi

# Read current version from file
CURRENT_VERSION=$(cat "$VERSION_FILE" 2>/dev/null)

# Validate that the version is a number
if ! echo "$CURRENT_VERSION" | grep -qE '^[0-9]+$'; then
    echo "Error: Invalid version format in $VERSION_FILE. Expected a number, got: $CURRENT_VERSION"
    exit 1
fi

# Increment version
VERSION=$((CURRENT_VERSION + 1))

echo "Building version $VERSION (incrementing from $CURRENT_VERSION)"

# Build the image with optional --no-cache flag
docker build $NO_CACHE --build-arg CR_PAT="$CR_PAT" --platform linux/amd64 -t {registry_url}:$VERSION .

# Tag the same image as latest
docker tag {registry_url}:$VERSION {registry_url}:latest

# Push both tags
docker push {registry_url}:$VERSION
docker push {registry_url}:latest

# Update the VERSION file with the new version
echo "$VERSION" > "$VERSION_FILE"
echo "Updated $VERSION_FILE to version $VERSION"
```

**CRITICAL Replacements:**
- `{registry_url}` → Full container registry URL (e.g., `ghcr.io/mazza-vc/hyperopt-server`)

**If NO private dependencies**, remove `--build-arg CR_PAT="$CR_PAT"`:
```bash
docker build $NO_CACHE --platform linux/amd64 -t {registry_url}:$VERSION .
```

Make the script executable:
```bash
chmod +x build-publish.sh
```
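
The script's validate-then-increment rule is worth understanding in isolation; a hedged Python sketch of the same logic (the function name is illustrative):

```python
def next_version(current: str) -> int:
    """Mirror the script's rule: the VERSION file must hold a bare integer."""
    value = current.strip()
    if not value.isdigit():  # same intent as the script's grep '^[0-9]+$'
        raise ValueError(f"Invalid version format: {current!r}")
    return int(value) + 1
```

Anything else in the file (a semver string, an empty file) fails fast rather than producing a bad image tag.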

## Step 4: Create Environment Configuration

### File: `example.env`

Create or update `example.env` with the environment variables required to run the containerized application:

```bash
# Server Configuration
PORT={port}

# Database Configuration (if applicable)
{PROJECT_NAME}_DB_HOST=localhost
{PROJECT_NAME}_DB_NAME={project_name}
{PROJECT_NAME}_DB_USER={project_name}
{PROJECT_NAME}_DB_PASSWORD=your_password_here

# Build Configuration (for private dependencies)
CR_PAT=your_github_personal_access_token

# Optional: Additional app-specific variables
DEBUG=False
LOG_LEVEL=INFO
```

**CRITICAL**: Replace:
- `{port}` → Application port (e.g., 5678)
- `{PROJECT_NAME}` → Uppercase project name (e.g., "HYPEROPT_SERVER")
- `{project_name}` → Snake case project name (e.g., "hyperopt_server")

**Note:** Remove `CR_PAT` if you don't have private dependencies.
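
The `KEY=value` format above is what both `docker run --env-file` and `python-dotenv` consume. As a rough illustration of the parsing rules (comments and blank lines skipped) — a sketch, not a replacement for either tool:

```python
def parse_env(text: str) -> dict:
    """Parse simple KEY=value lines, skipping blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # real parsers also handle quoting and `export` prefixes
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env
```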

### Update .gitignore

Add the VERSION file and .env to `.gitignore`:

```gitignore
# Environment variables
.env

# Version file (used by build system, not tracked)
VERSION
```

This prevents the VERSION file and environment secrets from being committed.

## Step 5: Create .dockerignore (Optional but Recommended)

Create `.dockerignore` to exclude unnecessary files from the Docker build context:

```
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
env/
venv/
.venv/
ENV/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg

# Environment files (secrets should not be in image)
.env
*.env
!example.env

# Testing
.pytest_cache/
.coverage
htmlcov/
.tox/

# IDEs
.vscode/
.idea/
*.swp
*.swo
*~

# Git
.git/
.gitignore

# CI/CD
.github/

# Documentation
*.md
docs/

# Build artifacts
VERSION
*.log

# OS
.DS_Store
Thumbs.db
```

## Step 6: Usage Instructions

### Setup

```bash
# Copy example environment file and configure
cp example.env .env
# Edit .env and fill in actual values
```

### Building and Publishing

**Load environment variables** (if using .env):
```bash
# Export variables from .env for the build process
set -a
source .env
set +a
```

**Standard build** (increments version, uses cache):
```bash
./build-publish.sh
```

**Fresh build** (no cache, pulls latest dependencies):
```bash
./build-publish.sh --no-cache
```

### Running the Container

**Using an environment file:**
```bash
docker run -p {port}:{port} \
  --env-file .env \
  {registry_url}:latest
```

**Using explicit environment variables:**
```bash
docker run -p {port}:{port} \
  -e PORT={port} \
  -e {PROJECT_NAME}_DB_PASSWORD=secret \
  -e {PROJECT_NAME}_DB_HOST=db.example.com \
  {registry_url}:latest
```

### Local Testing

Test the container locally before publishing:
```bash
# Build without pushing
docker build --platform linux/amd64 -t {project}:test .

# Run locally
docker run -p {port}:{port} {project}:test

# Test the endpoint
curl http://localhost:{port}/health
```
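
In scripts or CI you usually want to poll the health endpoint until the container is up rather than curl it once. A small stdlib-only Python sketch (the function name is illustrative; it assumes the app exposes a `/health` route):

```python
import time
import urllib.error
import urllib.request

def wait_healthy(url: str, timeout: float = 30.0, interval: float = 1.0) -> bool:
    """Poll a health endpoint until it returns HTTP 200 or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # container not accepting connections yet
        time.sleep(interval)
    return False
```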

## Design Principles

This pattern follows these principles:

### Security:
1. **Non-root user** - Container runs as an unprivileged user
2. **Minimal base image** - python:3.13-slim reduces the attack surface
3. **Build-time secrets** - CR_PAT is passed as a build argument, not a runtime environment variable (build args can still appear in image history; use BuildKit secrets if that matters)
4. **Explicit permissions** - chown ensures correct file ownership

### Reliability:
1. **Gunicorn workers** - Production-grade WSGI server with process management
2. **Platform specification** - `--platform linux/amd64` ensures compatibility
3. **Version tracking** - Auto-incrementing versions for rollback capability
4. **Immutable builds** - Each version is reproducible

### Performance:
1. **Layer caching** - Dependencies cached separately from code
2. **No-cache option** - Force fresh builds when needed
3. **Slim base image** - Faster pulls and smaller storage
4. **Multi-worker** - Concurrent request handling

### DevOps:
1. **Automated versioning** - No manual version management
2. **Dual tagging** - Both version and latest tags for flexibility
3. **Idempotent builds** - Safe to run multiple times
4. **Simple CLI** - A single script handles build and publish

## Common Patterns

### Pattern 1: Standard Web API
```dockerfile
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "--workers", "4", "app:create_app()"]
```
- Multiple workers for concurrent requests
- Factory pattern with `create_app()`

### Pattern 2: Background Job Worker
```dockerfile
CMD ["gunicorn", "--bind", "0.0.0.0:5678", "--workers", "1", "daemon:app"]
```
- Single worker to avoid job conflicts
- Direct app instance

### Pattern 3: High-Traffic API
```dockerfile
CMD ["gunicorn", "--bind", "0.0.0.0:8080", "--workers", "8", "--timeout", "120", "api:app"]
```
- More workers for higher concurrency
- Increased timeout for long-running requests

## Integration with Other Skills

### flask-smorest-api Skill
Create the API first, then dockerize:
```
1. User: "Set up Flask API server"
2. [flask-smorest-api skill runs]
3. User: "Now dockerize it"
4. [flask-docker-deployment skill runs]
```

### postgres-setup Skill
For database-dependent apps, add `psycopg2-binary` to `requirements.txt` and pass the database env vars to `docker run`:
```bash
docker run -e DB_HOST=db.example.com -e DB_PASSWORD=secret ...
```

## Container Registry Setup

### GitHub Container Registry (GHCR)

**Login:**
```bash
echo $CR_PAT | docker login ghcr.io -u USERNAME --password-stdin
```

**Registry URL format:**
```
ghcr.io/{org}/{project}
```

### Docker Hub

**Login:**
```bash
docker login docker.io
```

**Registry URL format:**
```
docker.io/{username}/{project}
```

### AWS ECR

**Login:**
```bash
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin \
  {account}.dkr.ecr.us-east-1.amazonaws.com
```

**Registry URL format:**
```
{account}.dkr.ecr.{region}.amazonaws.com/{project}
```

## Troubleshooting

### Build fails with "permission denied"
```bash
chmod +x build-publish.sh
```

### Private dependency installation fails
```bash
# Verify CR_PAT is set
echo $CR_PAT

# Test GitHub access
curl -H "Authorization: token $CR_PAT" https://api.github.com/user
```

### Container won't start
```bash
# Check logs
docker logs {container_id}

# Run interactively to debug
docker run -it {registry_url}:latest /bin/bash
```

### Version file conflicts
```bash
# If the VERSION file gets corrupted, delete it and rebuild
rm VERSION
./build-publish.sh
```

## Example: Complete Workflow

**User:** "Dockerize my Flask hyperopt server"

**Claude asks:**
- Entry point? → `hyperopt_daemon:app`
- Port? → `5678`
- Registry? → `ghcr.io/mazza-vc/hyperopt-server`
- Private deps? → `yes` (arcana-core)
- Workers? → `1` (background job processor)

**Claude creates:**
1. `Dockerfile` with gunicorn, 1 worker, port 5678
2. `build-publish.sh` with the GHCR registry URL
3. Adds `VERSION` to `.gitignore`
4. Creates `.dockerignore`

**User runs:**
```bash
export CR_PAT=ghp_abc123
./build-publish.sh
```

**Result:**
- ✅ Builds `ghcr.io/mazza-vc/hyperopt-server:1`
- ✅ Tags it as `ghcr.io/mazza-vc/hyperopt-server:latest`
- ✅ Pushes both tags
- ✅ Updates VERSION to `1`

**Subsequent builds:**
```bash
./build-publish.sh            # Builds version 2
./build-publish.sh            # Builds version 3
./build-publish.sh --no-cache # Builds version 4 (fresh)
```

## Best Practices

1. **Use --no-cache strategically** - Only when dependencies have changed or when debugging
2. **Test locally first** - Build and run locally before pushing
3. **Keep VERSION in .gitignore** - Let the build system manage it
4. **Use explicit versions** - Don't rely only on the `latest` tag in production
5. **Document env vars** - List all required environment variables in the README
6. **Health checks** - Add a `/health` endpoint for container orchestration
7. **Logging** - Log to stdout so container logs capture everything
8. **Resource limits** - Set memory/CPU limits in the production deployment

## Advanced: Multi-Stage Builds

For smaller images, use multi-stage builds:

```dockerfile
# Build stage
FROM python:3.13-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --user --no-cache-dir -r requirements.txt

# Runtime stage
FROM python:3.13-slim
WORKDIR /app
RUN useradd --create-home appuser
# Copy the packages installed in the builder into the non-root user's home;
# leaving them in /root would make them unreadable once USER drops privileges
COPY --from=builder --chown=appuser:appuser /root/.local /home/appuser/.local
COPY --chown=appuser:appuser . .
ENV PATH=/home/appuser/.local/bin:$PATH
USER appuser
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "--workers", "4", "app:app"]
```

This pattern:
- Installs dependencies in the builder stage
- Copies only the installed packages into the runtime stage
- Results in a smaller final image

skills/flask-smorest-api/SKILL.md

---
name: flask-smorest-api
description: Set up Flask REST API with flask-smorest, OpenAPI/Swagger docs, and blueprint architecture
---

# Flask REST API with flask-smorest Pattern

This skill helps you set up a Flask REST API following a standardized pattern with flask-smorest for OpenAPI documentation, blueprint architecture, and best practices for production-ready APIs.

## When to Use This Skill

Use this skill when:
- Starting a new Flask REST API project
- You want automatic OpenAPI/Swagger documentation
- You need a clean, modular blueprint architecture
- You want type-safe request/response handling with Marshmallow schemas
- You're building a production-ready API server

## What This Skill Creates

1. **Main application file** - Flask app initialization with flask-smorest
2. **Blueprint structure** - Modular endpoint organization
3. **Schema files** - Marshmallow schemas for validation and docs
4. **Singleton manager pattern** - Centralized service/database initialization
5. **CORS support** - Cross-origin request handling
6. **Requirements file** - All necessary dependencies

## Step 1: Gather Project Information

**IMPORTANT**: Before creating files, ask the user these questions:

1. **"What is your project name?"** (e.g., "materia-server", "trading-api", "myapp")
   - Use this to derive:
     - Main module: `{project_name}.py` (e.g., `materia_server.py`)
     - Port number (suggest based on project, default: 5000)

2. **"What features/endpoints do you need?"** (e.g., "users", "tokens", "orders")
   - Each feature will become a blueprint

3. **"Do you need database integration?"** (yes/no)
   - If yes, reference the postgres-setup skill for the database layer

4. **"What port should the server run on?"** (default: 5000)

## Step 2: Create Directory Structure

Create these directories if they don't exist:
```
{project_root}/
├── blueprints/              # Blueprint modules (one per feature)
│   ├── __init__.py
│   ├── {feature}.py
│   └── {feature}_schemas.py
├── src/                     # Optional: for package code (database drivers, models, etc.)
│   └── {project_name}/
└── {project_name}.py        # Main application file
```

## Step 3: Create Main Application File

Create `{project_name}.py` using this template:

```python
import os
import logging
from flask import Flask
from flask_cors import CORS
from flask_smorest import Api
from dotenv import load_dotenv

# Load environment variables from .env file
load_dotenv()

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


def create_app():
    app = Flask(__name__)
    app.config['API_TITLE'] = '{Project Name} API'
    app.config['API_VERSION'] = 'v1'
    app.config['OPENAPI_VERSION'] = '3.0.2'
    app.config['OPENAPI_URL_PREFIX'] = '/'
    app.config['OPENAPI_SWAGGER_UI_PATH'] = '/swagger'
    app.config['OPENAPI_SWAGGER_UI_URL'] = 'https://cdn.jsdelivr.net/npm/swagger-ui-dist/'

    CORS(app)
    api = Api(app)

    from blueprints.{feature} import blp as {feature}_blp
    api.register_blueprint({feature}_blp)

    logger.info("Flask app initialized")
    return app


if __name__ == '__main__':
    port = int(os.environ.get('PORT', {port_number}))
    app = create_app()
    logger.info(f"Swagger UI: http://localhost:{port}/swagger")
    app.run(host='0.0.0.0', port=port)
```

**CRITICAL**: Replace:
- `{Project Name}` → Human-readable project name (e.g., "Materia Server")
- `{project_name}` → Snake case project name (e.g., "materia_server")
- `{port_number}` → Actual port number (e.g., 5151)
- `{feature}` → Feature name from the user's response

## Step 4: Create Blueprint and Schema Files

For each feature/endpoint, create two files:

### File: `blueprints/{feature}.py`

```python
from flask.views import MethodView
from flask_smorest import Blueprint, abort
from .{feature}_schemas import {Feature}QuerySchema, {Feature}ResponseSchema, ErrorResponseSchema

blp = Blueprint('{feature}', __name__, url_prefix='/api', description='{Feature} API')


@blp.route('/{feature}')
class {Feature}Resource(MethodView):
    @blp.arguments({Feature}QuerySchema, location='query')
    @blp.response(200, {Feature}ResponseSchema)
    @blp.alt_response(400, schema=ErrorResponseSchema)
    @blp.alt_response(500, schema=ErrorResponseSchema)
    def get(self, query_args):
        try:
            # TODO: Implement logic
            return {"message": "Success", "data": []}
        except ValueError as e:
            abort(400, message=str(e))
        except Exception as e:
            abort(500, message=str(e))
```

### File: `blueprints/{feature}_schemas.py`

```python
from marshmallow import Schema, fields


class {Feature}QuerySchema(Schema):
    limit = fields.Integer(load_default=100, metadata={'description': 'Max results'})
    offset = fields.Integer(load_default=0, metadata={'description': 'Skip count'})


class {Feature}ResponseSchema(Schema):
    message = fields.String(required=True)
    data = fields.List(fields.Dict(), required=True)


class ErrorResponseSchema(Schema):
    code = fields.Integer(required=True)
    status = fields.String(required=True)
    message = fields.String()
```

**CRITICAL**: Replace:
- `{Feature}` → PascalCase feature name (e.g., "TradableTokens")
- `{feature}` → Snake case feature name (e.g., "tradable_tokens")
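
The two casings are mechanical transforms of each other, so the substitution can be automated; a throwaway sketch (the helper name is illustrative):

```python
def to_pascal(snake: str) -> str:
    """Convert a snake_case feature name to PascalCase: 'tradable_tokens' -> 'TradableTokens'."""
    return "".join(part.capitalize() for part in snake.split("_") if part)
```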

## Step 5: Create Common Singleton Manager (If Needed)

If the project needs shared services (database, API clients, etc.), create a singleton manager:

### File: `common.py`

```python
"""
Singleton manager for shared service instances.

Provides centralized initialization of database connections, API clients,
and other shared resources.
"""

import os
import logging


logger = logging.getLogger(__name__)


class ServiceManager:
    """
    Singleton manager for shared service instances.

    Ensures only one instance of each service is created and reused
    across all blueprints.
    """

    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super(ServiceManager, cls).__new__(cls)
            cls._instance._initialized = False
        return cls._instance

    def __init__(self):
        if self._initialized:
            return

        # Initialize services
        self._db = None
        self._initialized = True
        logger.info("ServiceManager initialized")

    def get_database(self):
        """
        Get database connection instance.

        Returns:
            Database connection instance (lazy initialization)
        """
        if self._db is None:
            # Import database driver
            from src.{project_name}.database import Database

            # Get connection parameters from environment
            db_host = os.environ.get('{PROJECT_NAME}_DB_HOST', 'localhost')
            db_name = os.environ.get('{PROJECT_NAME}_DB_NAME', '{project_name}')
            db_user = os.environ.get('{PROJECT_NAME}_DB_USER', '{project_name}')
            db_passwd = os.environ.get('{PROJECT_NAME}_DB_PASSWORD')

            if not db_passwd:
                raise ValueError("{PROJECT_NAME}_DB_PASSWORD environment variable required")

            self._db = Database(db_host, db_name, db_user, db_passwd)
            logger.info("Database connection initialized")

        return self._db


# Global singleton instance
service_manager = ServiceManager()
```

**CRITICAL**: Replace:
- `{PROJECT_NAME}` → Uppercase project name (e.g., "MATERIA_SERVER")
- `{project_name}` → Snake case project name (e.g., "materia_server")
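
Stripped of the service wiring, the `__new__`-based singleton above reduces to a few lines; a minimal runnable sketch of just that mechanism:

```python
class Singleton:
    """Every call to Singleton() returns the same instance."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance._initialized = False
        return cls._instance
```

The `_initialized` flag matters because Python re-runs `__init__` on every `Singleton()` call even when `__new__` returns the cached instance; ServiceManager uses it to perform setup only once.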
|
||||
|
||||
## Step 6: Create Environment Configuration
|
||||
|
||||
### File: `example.env`
|
||||
|
||||
Create or update `example.env` with required environment variables:
|
||||
|
||||
```bash
|
||||
# Server Configuration
|
||||
PORT={port_number}
|
||||
DEBUG=False
|
||||
|
||||
# Database Configuration (if applicable)
|
||||
{PROJECT_NAME}_DB_HOST=localhost
|
||||
{PROJECT_NAME}_DB_NAME={project_name}
|
||||
{PROJECT_NAME}_DB_USER={project_name}
|
||||
{PROJECT_NAME}_DB_PASSWORD=your_password_here
|
||||
|
||||
# Optional: CORS Configuration
|
||||
ALLOWED_ORIGINS=http://localhost:3000,http://localhost:8080
|
||||
```
|
||||
|
||||
**CRITICAL**: Replace:
|
||||
- `{port_number}` → Actual port number (e.g., 5151)
|
||||
- `{PROJECT_NAME}` → Uppercase project name (e.g., "MATERIA_SERVER")
|
||||
- `{project_name}` → Snake case project name (e.g., "materia_server")
|
||||
|
||||
### File: `.env` (gitignored)
|
||||
|
||||
Instruct the user to copy `example.env` to `.env` and fill in actual values:
|
||||
|
||||
```bash
|
||||
# Copy example.env to .env and update with actual values
|
||||
cp example.env .env
|
||||
```
|
||||
|
||||
### Update .gitignore
|
||||
|
||||
Add `.env` to `.gitignore` if not already present:
|
||||
|
||||
```
|
||||
# Environment variables
|
||||
.env
|
||||
```
|
||||
|
||||
## Step 7: Create Requirements File
|
||||
|
||||
Create `requirements.txt` with flask-smorest dependencies (no version pinning):
|
||||
|
||||
```txt
|
||||
# Flask and API framework
|
||||
Flask
|
||||
flask-smorest
|
||||
flask-cors
|
||||
|
||||
# Schema validation and serialization
|
||||
marshmallow
|
||||
|
||||
# Environment variable management
|
||||
python-dotenv
|
||||
|
||||
# Production server (optional but recommended)
|
||||
gunicorn
|
||||
|
||||
# Database (if needed)
|
||||
psycopg2-binary
|
||||
```
|
||||
|
||||
## Step 8: Create Blueprints __init__.py
|
||||
|
||||
Create `blueprints/__init__.py`:
|
||||
|
||||
```python
|
||||
"""
|
||||
Blueprint modules for {Project Name} API.
|
||||
|
||||
Each blueprint represents a distinct feature or resource endpoint.
|
||||
"""
|
||||
```
|
||||
|
||||
## Step 9: Document Usage
|
||||
|
||||
Create or update README.md with:
|
||||
|
||||
### Setup
|
||||
|
||||
```bash
|
||||
# Copy example environment file
|
||||
cp example.env .env
|
||||
|
||||
# Edit .env and fill in actual values
|
||||
# Then install dependencies
|
||||
pip install -r requirements.txt
|
||||
```
|
||||
|
||||
### Running the Server
|
||||
|
||||
```bash
|
||||
# Development mode
|
||||
python {project_name}.py
|
||||
|
||||
# Production mode with Gunicorn
|
||||
gunicorn -w 4 -b 0.0.0.0:{port_number} '{project_name}:create_app()'
|
||||
```
|
||||
|
||||
### Environment Variables
|
||||
|
||||
Copy `example.env` to `.env` and configure:
|
||||
|
||||
**Server Configuration:**
|
||||
- `PORT` - Server port (default: {port_number})
|
||||
- `DEBUG` - Enable debug mode (default: False)
|
||||
|
||||
**Database (if applicable):**
|
||||
- `{PROJECT_NAME}_DB_HOST` - Database host (default: localhost)
|
||||
- `{PROJECT_NAME}_DB_NAME` - Database name (default: {project_name})
|
||||
- `{PROJECT_NAME}_DB_USER` - Database user (default: {project_name})
|
||||
- `{PROJECT_NAME}_DB_PASSWORD` - Database password (REQUIRED)
|
||||
|
||||
### API Documentation
|
||||
|
||||
Once running, access Swagger UI at:
|
||||
```
|
||||
http://localhost:{port_number}/swagger
|
||||
```
|
||||
|
||||
## Design Principles

This pattern follows these principles:

### Architecture:
1. **Blueprint Organization** - Modular endpoint organization, one blueprint per feature
2. **MethodView Classes** - Class-based views for HTTP methods (get, post, put, delete)
3. **Separation of Concerns** - Routes, schemas, and business logic kept separate
4. **Singleton Manager** - Centralized service initialization prevents duplicate connections
5. **Application Factory** - `create_app()` pattern for testing and flexibility

### API Design:
1. **OpenAPI/Swagger** - Automatic documentation via flask-smorest
2. **Schema-Driven** - Marshmallow schemas for validation and serialization
3. **Type Safety** - `@blp.arguments()` and `@blp.response()` decorators
4. **Error Handling** - Consistent error responses with proper HTTP status codes
5. **CORS Support** - Cross-origin requests for frontend consumption

### Best Practices:
1. **Environment-Based Config** - All secrets via environment variables
2. **Logging** - Structured logging throughout
3. **Idempotent Operations** - Safe to call multiple times
4. **Production Ready** - Gunicorn support out of the box
5. **Testing Friendly** - Application factory enables easy testing
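At its core, the MethodView pattern is just dispatch on the HTTP verb. The sketch below re-creates that idea without any Flask dependency, purely for illustration (Flask's real implementation lives in `flask.views` and differs in detail; the class and route names here are hypothetical):

```python
# Framework-free sketch of MethodView-style dispatch: the HTTP verb
# selects a handler method named after its lowercased form.
class MethodViewSketch:
    def dispatch_request(self, method, *args, **kwargs):
        # Look up a handler named after the lowercased HTTP verb
        handler = getattr(self, method.lower(), None)
        if handler is None:
            return 405, "Method Not Allowed"
        return handler(*args, **kwargs)


class ItemView(MethodViewSketch):
    def get(self, item_id):
        # In real flask-smorest code this would be wrapped with
        # @blp.response(200, ItemSchema) for serialization
        return 200, {"id": item_id}

    def post(self):
        return 201, {"created": True}


view = ItemView()
```

With real Flask you would instead subclass `flask.views.MethodView` and register the class on a blueprint; the dispatch behavior is the same idea.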
## Integration with Other Skills

### Database Integration
If a database is needed, use the **postgres-setup** skill first:
```
User: "Set up postgres database for my project"
```

Then reference the database in your blueprints via the singleton manager:
```python
from common import service_manager

db = service_manager.get_database()
```
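The skill's `common.py` is not shown here; a minimal sketch of what such a singleton manager might look like (names and the `threading` lock are illustrative assumptions, and the database object is a stand-in):

```python
# common.py - illustrative ServiceManager singleton sketch; the real
# skill's common.py may differ in structure and service types.
import threading


class ServiceManager:
    _instance = None
    _lock = threading.Lock()

    def __new__(cls):
        # Double-checked locking so only one instance is ever created,
        # even when blueprints import common.py from multiple threads
        if cls._instance is None:
            with cls._lock:
                if cls._instance is None:
                    cls._instance = super().__new__(cls)
                    cls._instance._database = None
        return cls._instance

    def get_database(self):
        # Lazily create the (hypothetical) database driver exactly once,
        # so repeated calls never open duplicate connection pools
        if self._database is None:
            self._database = object()  # stand-in for a real DB driver
        return self._database


service_manager = ServiceManager()
```

Because every blueprint imports the same `service_manager` instance, connection pools and clients are initialized once per process.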
### Package Structure
If publishing as a package, use the **python-pypi-setup** skill:
```
User: "Set up Python package for PyPI"
```

## Example Usage in Claude Code

**User:** "Set up Flask API server for my project"

**Claude:** "What is your project name?"

**User:** "crypto-tracker"

**Claude:** "What features/endpoints do you need?"

**User:** "prices, tokens, portfolio"

**Claude:** "Do you need database integration?"

**User:** "yes"

**Claude:** "What port should the server run on?"

**User:** "8080"

**Claude:**
1. Creates `crypto_tracker.py` with the Flask app
2. Creates `blueprints/` directory with:
   - `prices.py` and `prices_schemas.py`
   - `tokens.py` and `tokens_schemas.py`
   - `portfolio.py` and `portfolio_schemas.py`
3. Creates `common.py` with the ServiceManager singleton
4. Creates `requirements.txt` with dependencies
5. Documents the environment variables needed
6. Provides startup instructions

## Optional: Docker Support

If the user requests Docker, create a `Dockerfile`:

```dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE {port_number}
CMD ["gunicorn", "-w", "4", "-b", "0.0.0.0:{port_number}", "{project_name}:create_app()"]
```
319
skills/mz-configure-loki-logging/SKILL.md
Normal file
@@ -0,0 +1,319 @@
---
name: mz-configure-loki-logging
description: Configure Grafana Loki logging using mazza-base library for Python/Flask applications with CA certificate (Mazza-specific)
---

# Loki Logging with mazza-base

This skill helps you integrate Grafana Loki logging using the mazza-base utility library, which handles structured JSON logging and Loki shipping.

## When to Use This Skill

Use this skill when:
- Starting a new Python/Flask application
- You want centralized logging with Loki
- You need structured logs for production
- You want easy local development with console logs

## What This Skill Creates

1. **requirements.txt entry** - Adds the mazza-base dependency
2. **CA certificate file** - Places mazza.vc_CA.pem in the project root
3. **Dockerfile updates** - Copies the CA certificate into the container
4. **Logging initialization** - Adds a configure_logging() call to the main application file
5. **Environment variable documentation** - All required Loki configuration

## Step 1: Gather Project Information

**IMPORTANT**: Before making changes, ask the user these questions:

1. **"What is your application tag/name?"** (e.g., "materia-server", "trading-api")
   - This identifies your service in Loki logs

2. **"What is your main application file?"** (e.g., "app.py", "server.py", "materia_server.py")
   - Where to add the logging configuration

3. **"Do you have the mazza.vc_CA.pem certificate file?"**
   - Required for a secure Loki connection
   - If not, the user needs to obtain it from the Mazza infrastructure team

4. **"Do you have a CR_PAT environment variable set?"** (GitHub Personal Access Token)
   - Required to install mazza-base from the private GitHub repo

## Step 2: Add mazza-base to requirements.txt

Add this line to `requirements.txt`:

```txt
# Logging configuration with Loki support
mazza-base @ git+https://${CR_PAT}@github.com/mazza-vc/python-mazza-base.git@main
```

**NOTE**: The `CR_PAT` environment variable must be set when running `pip install`:
```bash
export CR_PAT="your_github_personal_access_token"
pip install -r requirements.txt
```
## Step 3: Add CA Certificate File

Ensure the `mazza.vc_CA.pem` file is in your project root:

```
{project_root}/
├── mazza.vc_CA.pem      # CA certificate for secure Loki connection
├── requirements.txt
├── Dockerfile
└── {app_file}.py
```

If you don't have this file, contact the Mazza infrastructure team.

## Step 4: Update Dockerfile

Add the CA certificate to your Dockerfile. Place this **before** installing requirements:

```dockerfile
FROM python:3.11-alpine

ARG CR_PAT
ENV CR_PAT=${CR_PAT}

WORKDIR /app

# Copy CA certificate
COPY mazza.vc_CA.pem .

# Copy requirements and install
COPY requirements.txt .
RUN pip install -r requirements.txt

# ... rest of Dockerfile
```

**CRITICAL**: The COPY line must appear before `pip install -r requirements.txt`.

The certificate will be available at `/app/mazza.vc_CA.pem` in the container.
## Step 5: Configure Logging in Application

Add this to the **top** of your main application file (e.g., `{app_file}.py`):

```python
import os
from mazza_base import configure_logging

# Configure logging with mazza_base
# Use debug_local=True for local development, False for production with Loki
debug_mode = os.environ.get('DEBUG_LOCAL', 'true').lower() == 'true'
log_level = os.environ.get('LOG_LEVEL', 'INFO')
configure_logging(
    application_tag='{application_tag}',
    debug_local=debug_mode,
    local_level=log_level
)
```

**CRITICAL**: Replace:
- `{app_file}` → your main application filename (e.g., "materia_server")
- `{application_tag}` → your service name (e.g., "materia-server")

Place this **before** creating your Flask app or any other initialization.
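Once logging is configured this way, call sites stay framework-agnostic: modules log through the standard `logging` API and the configured handlers decide where records go. The example below uses only stdlib logging (an in-memory handler stands in for the console/Loki handlers that `configure_logging()` would install, so nothing mazza-specific is required to run it):

```python
import io
import logging

# After configure_logging() has run, ordinary stdlib logging calls are
# routed to console or Loki. Modules just do:
logger = logging.getLogger(__name__)

# For illustration only: attach an in-memory handler so the output of
# the call below is observable without a real Loki endpoint.
buffer = io.StringIO()
handler = logging.StreamHandler(buffer)
handler.setFormatter(logging.Formatter("%(levelname)s %(name)s %(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("daemon started")
```

Because handlers are configured centrally, swapping console logging for Loki shipping never requires touching the call sites.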
## Step 6: Document Environment Variables

Add to README.md or .env.example:

### Environment Variables

**Logging Configuration (Local Development):**
- `DEBUG_LOCAL` - Set to 'true' for local development (console logs), 'false' for production (Loki)
  - Default: 'true'
  - Production: 'false'
- `LOG_LEVEL` - Logging level: DEBUG, INFO, WARNING, ERROR, CRITICAL
  - Default: 'INFO'

**Loki Configuration (Production Only - required when DEBUG_LOCAL=false):**
- `MZ_LOKI_ENDPOINT` - Loki server URL (e.g., https://loki.mazza.vc:8443/loki/api/v1/push)
- `MZ_LOKI_USER` - Loki username for authentication
- `MZ_LOKI_PASSWORD` - Loki password for authentication
- `MZ_LOKI_CA_BUNDLE_PATH` - Path to the CA certificate (e.g., /app/mazza.vc_CA.pem)

**GitHub Access (for pip install):**
- `CR_PAT` - GitHub Personal Access Token with repo access
  - Required to install mazza-base from the private repository

### Logging Behavior

**Local Development** (`DEBUG_LOCAL=true`):
- Logs output to the console with pretty formatting
- Easy to read during development
- No Loki connection required
- No need to set MZ_LOKI_* variables

**Production** (`DEBUG_LOCAL=false`):
- Logs output as structured JSON to Loki
- All MZ_LOKI_* variables must be set
- Queryable in Grafana
- Secure connection via mazza.vc_CA.pem
## Step 7: Usage Examples

### Local Development

```bash
# In .env or shell
export DEBUG_LOCAL=true
export LOG_LEVEL=DEBUG
export CR_PAT=your_github_token

pip install -r requirements.txt
python {app_file}.py
```

### Production Deployment

Docker Compose example:

```yaml
services:
  {app_name}:
    build:
      context: .
      args:
        - CR_PAT=${CR_PAT}
    environment:
      - DEBUG_LOCAL=false
      - LOG_LEVEL=INFO
      - MZ_LOKI_ENDPOINT=${MZ_LOKI_ENDPOINT}
      - MZ_LOKI_USER=${MZ_LOKI_USER}
      - MZ_LOKI_PASSWORD=${MZ_LOKI_PASSWORD}
      - MZ_LOKI_CA_BUNDLE_PATH=/app/mazza.vc_CA.pem
```

**NOTE**: Set these in your .env file:
```
MZ_LOKI_ENDPOINT=https://loki.mazza.vc:8443/loki/api/v1/push
MZ_LOKI_USER=your_loki_user
MZ_LOKI_PASSWORD=your_loki_password
```
## How It Works

The `mazza-base` library provides:

1. **Automatic mode detection** - Console logs for local dev, Loki for production
2. **Structured logging** - Consistent JSON format for Loki
3. **Secure connection** - Uses the CA certificate for encrypted Loki communication
4. **Easy integration** - One function call configures everything
5. **Application tagging** - Identifies your service in centralized logs

**You don't need to:**
- Write JSON formatters
- Configure logging handlers
- Manage Loki client setup
- Handle certificate validation

**Just call `configure_logging()` and you're done!**
## Integration with Other Skills

### Flask API Server
If using the **flask-smorest-api** skill, add logging **before** creating the Flask app:

```python
import os
from flask import Flask
from mazza_base import configure_logging

# Configure logging FIRST
debug_mode = os.environ.get('DEBUG_LOCAL', 'true').lower() == 'true'
configure_logging(application_tag='my-api', debug_local=debug_mode)

# Then create the Flask app
app = Flask(__name__)
# ... rest of setup
```

### Docker Deployment
In your Dockerfile:

```dockerfile
ARG CR_PAT
ENV CR_PAT=${CR_PAT}
COPY mazza.vc_CA.pem .
COPY requirements.txt .
RUN pip install -r requirements.txt
```
## Troubleshooting

**Cannot install mazza-base:**
- Ensure the `CR_PAT` environment variable is set
- Verify the token has repo access to `mazza-vc/python-mazza-base`
- Check that the token has not expired

**Missing CA certificate error:**
- Ensure `mazza.vc_CA.pem` is in the project root
- Verify the file is copied in the Dockerfile: `COPY mazza.vc_CA.pem .`
- Check that MZ_LOKI_CA_BUNDLE_PATH points to the correct location

**Runtime error: missing required environment variables:**
- Only occurs when `DEBUG_LOCAL=false`
- Ensure all MZ_LOKI_* variables are set
- Check spelling (MZ_LOKI_, not LOKI_ or MATERIA_LOKI_)

**Logs not appearing in Loki (production):**
- Verify `DEBUG_LOCAL=false` is set
- Check that all MZ_LOKI_* variables are correct
- Test that the CA certificate path is accessible in the container
- Verify the Loki endpoint is reachable from the container

**Import error for mazza_base:**
- Run `pip install -r requirements.txt` with CR_PAT set
- Verify mazza-base is installed: `pip list | grep mazza-base`
## Example Implementation

See the materia-server project for a reference implementation:

```python
# materia_server.py
import os
from mazza_base import configure_logging

debug_mode = os.environ.get('DEBUG_LOCAL', 'true').lower() == 'true'
log_level = os.environ.get('LOG_LEVEL', 'INFO')
configure_logging(
    application_tag='materia-server',
    debug_local=debug_mode,
    local_level=log_level
)

# ... rest of Flask app setup
```

```dockerfile
# Dockerfile
FROM python:3.11-alpine
ARG CR_PAT
ENV CR_PAT=${CR_PAT}
WORKDIR /app
COPY mazza.vc_CA.pem .
COPY requirements.txt .
RUN pip install -r requirements.txt
# ... rest of Dockerfile
```

```yaml
# docker-compose.yaml
services:
  materia-server:
    environment:
      - MZ_LOKI_USER=${MZ_LOKI_USER}
      - MZ_LOKI_ENDPOINT=${MZ_LOKI_ENDPOINT}
      - MZ_LOKI_PASSWORD=${MZ_LOKI_PASSWORD}
      - MZ_LOKI_CA_BUNDLE_PATH=/app/mazza.vc_CA.pem
```

This provides structured logging locally during development and automatic Loki shipping in production, with a secure encrypted connection.
382
skills/postgres-setup/SKILL.md
Normal file
@@ -0,0 +1,382 @@
---
name: postgres-setup
description: Set up PostgreSQL database with standardized schema.sql pattern
---

# PostgreSQL Database Setup Pattern

This skill helps you set up a PostgreSQL database following a standardized pattern with proper separation of schema and setup scripts.

## When to Use This Skill

Use this skill when:
- Starting a new project that needs PostgreSQL
- You want a clean separation between schema definition (SQL) and setup logic (Python)
- You need support for both production and test databases
- You want consistent environment variable patterns

## What This Skill Creates

1. **`database/schema.sql`** - SQL schema with table definitions
2. **`dev_scripts/setup_database.py`** - Python setup script
3. **Documentation** of required environment variables

## Step 1: Gather Project Information

**IMPORTANT**: Before creating files, ask the user these questions:

1. **"What is your project name?"** (e.g., "arcana", "trading-bot", "myapp")
   - Use this to derive:
     - Database name: `{project_name}` (e.g., `arcana`)
     - User name: `{project_name}` (e.g., `arcana`)
     - Password env var: `{PROJECT_NAME}_PG_PASSWORD` (e.g., `ARCANA_PG_PASSWORD`)

2. **"What tables do you need in your schema?"** (optional - a skeleton can be created if unknown)
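The derivation above is mechanical, which makes it easy to show in code. This small helper is illustrative only (not part of the skill's generated files); it turns a project name like "trading-bot" into the identifiers this convention expects:

```python
# Derive the database identifiers this skill's naming convention
# expects from a project name (helper name is illustrative).
def derive_pg_names(project_name: str) -> dict:
    # Normalize separators to underscores for a valid Postgres identifier
    words = project_name.replace("-", " ").replace("_", " ").split()
    snake = "_".join(words).lower()
    return {
        "database": snake,                                # e.g. trading_bot
        "user": snake,                                    # e.g. trading_bot
        "password_env": f"{snake.upper()}_PG_PASSWORD",   # e.g. TRADING_BOT_PG_PASSWORD
    }


names = derive_pg_names("trading-bot")
```

This matches the worked example later in this skill, where "trading-bot" yields the `trading_bot` database/user and the `TRADING_BOT_PG_PASSWORD` variable.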
## Step 2: Create Directory Structure

Create these directories if they don't exist:
```
{project_root}/
├── database/
└── dev_scripts/
```

## Step 3: Create schema.sql

Create `database/schema.sql` with:

### Best Practices to Follow:
- Use `CREATE TABLE IF NOT EXISTS` for idempotency
- Use `UUID` for primary keys with `gen_random_uuid()` as the default
- Use `BIGINT` (Unix timestamps) for all date/time fields (NOT TIMESTAMP, NOT TIMESTAMPTZ)
- Add proper foreign key constraints with `ON DELETE CASCADE` or `ON DELETE SET NULL`
- Add indexes on foreign keys and commonly queried fields
- Use `TEXT` instead of `VARCHAR` (PostgreSQL best practice)
- Add comments using `COMMENT ON COLUMN` for documentation

### Template Structure:
```sql
-- Enable required extensions
CREATE EXTENSION IF NOT EXISTS pgcrypto;

-- Example table
CREATE TABLE IF NOT EXISTS example_table (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name TEXT NOT NULL,
    created_at BIGINT NOT NULL DEFAULT extract(epoch from now())::bigint,
    updated_at BIGINT
);

-- Add indexes
CREATE INDEX IF NOT EXISTS idx_example_created_at ON example_table(created_at);

-- Add comments
COMMENT ON TABLE example_table IS 'Description of what this table stores';
COMMENT ON COLUMN example_table.created_at IS 'Unix timestamp of creation';
```

If the user provides specific tables, create the schema accordingly. Otherwise, create a skeleton with one example table.
## Step 4: Create setup_database.py

Create `dev_scripts/setup_database.py` using this template, **substituting project-specific values**:

```python
#!/usr/bin/env python
"""
Database setup script for {PROJECT_NAME}
Creates the {project_name} database and user with proper permissions, then applies database/schema.sql

Usage:
    python setup_database.py            # Sets up main '{project_name}' database
    python setup_database.py --test-db  # Sets up test '{project_name}_test' database
"""

import os
import sys
import argparse
import psycopg2
from psycopg2.extensions import ISOLATION_LEVEL_AUTOCOMMIT


def main():
    """Setup {project_name} database and user"""
    parser = argparse.ArgumentParser(description='Setup {PROJECT_NAME} database')
    parser.add_argument('--test-db', action='store_true',
                        help='Create {project_name}_test database instead of main {project_name} database')
    args = parser.parse_args()

    pg_host = os.environ.get('POSTGRES_HOST', 'localhost')
    pg_port = os.environ.get('POSTGRES_PORT', '5432')
    pg_user = os.environ.get('POSTGRES_USER', 'postgres')
    pg_password = os.environ.get('PG_PASSWORD', None)

    # User and password are needed for both the main and test databases
    {project_name}_user = os.environ.get('{PROJECT_NAME}_PG_USER', '{project_name}')
    {project_name}_password = os.environ.get('{PROJECT_NAME}_PG_PASSWORD', None)

    if args.test_db:
        {project_name}_db = '{project_name}_test'
        print("Setting up TEST database '{project_name}_test'...")
    else:
        {project_name}_db = os.environ.get('{PROJECT_NAME}_PG_DB', '{project_name}')

    if pg_password is None:
        print("Error: PG_PASSWORD environment variable is required")
        sys.exit(1)
    if {project_name}_password is None:
        print("Error: {PROJECT_NAME}_PG_PASSWORD environment variable is required")
        sys.exit(1)

    print(f"Setting up database '{{project_name}_db}' and user '{{project_name}_user}'...")
    print(f"Connecting to PostgreSQL at {pg_host}:{pg_port} as {pg_user}")

    try:
        conn = psycopg2.connect(
            host=pg_host,
            port=pg_port,
            database='postgres',
            user=pg_user,
            password=pg_password
        )
        conn.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT)

        with conn.cursor() as cursor:
            cursor.execute("SELECT 1 FROM pg_roles WHERE rolname = %s", ({project_name}_user,))
            if not cursor.fetchone():
                print(f"Creating user '{{project_name}_user}'...")
                cursor.execute(f"CREATE USER {{project_name}_user} WITH PASSWORD %s", ({project_name}_password,))
                print(f"✓ User '{{project_name}_user}' created")
            else:
                print(f"✓ User '{{project_name}_user}' already exists")

            cursor.execute("SELECT 1 FROM pg_database WHERE datname = %s", ({project_name}_db,))
            if not cursor.fetchone():
                print(f"Creating database '{{project_name}_db}'...")
                cursor.execute(f"CREATE DATABASE {{project_name}_db} OWNER {{project_name}_user}")
                print(f"✓ Database '{{project_name}_db}' created")
            else:
                print(f"✓ Database '{{project_name}_db}' already exists")

            print("Setting permissions...")
            cursor.execute(f"GRANT ALL PRIVILEGES ON DATABASE {{project_name}_db} TO {{project_name}_user}")
            print(f"✓ Granted all privileges on database '{{project_name}_db}' to user '{{project_name}_user}'")

        conn.close()

        print(f"\nConnecting as '{{project_name}_user}' to apply schema...")
        {project_name}_conn = psycopg2.connect(
            host=pg_host,
            port=pg_port,
            database={project_name}_db,
            user={project_name}_user,
            password={project_name}_password
        )
        {project_name}_conn.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT)

        repo_root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
        schema_path = os.path.join(repo_root, 'database', 'schema.sql')
        if not os.path.exists(schema_path):
            print(f"Error: schema file not found at {schema_path}")
            sys.exit(1)

        with open(schema_path, 'r', encoding='utf-8') as f:
            schema_sql = f.read()

        with {project_name}_conn.cursor() as cursor:
            print("Ensuring required extensions...")
            cursor.execute("CREATE EXTENSION IF NOT EXISTS pgcrypto")
            print(f"Applying schema from {schema_path}...")
            cursor.execute(schema_sql)
            print("✓ Schema applied")

        {project_name}_conn.close()
        print("✓ Database setup complete")
        print(f"Database: {{project_name}_db}")
        print(f"User: {{project_name}_user}")
        print(f"Host: {pg_host}:{pg_port}")

    except psycopg2.Error as e:
        print(f"Error: {e}")
        sys.exit(1)
    except Exception as e:
        print(f"Unexpected error: {e}")
        sys.exit(1)


if __name__ == "__main__":
    main()
```

**CRITICAL**: Replace ALL instances of:
- `{PROJECT_NAME}` → uppercase project name (e.g., `ARCANA`, `MYAPP`)
- `{project_name}` → lowercase project name (e.g., `arcana`, `myapp`)
## Step 5: Create Documentation

Add a section to the project's README.md (or create SETUP.md) documenting:

### Environment Variables Required

**Global (PostgreSQL superuser)**:
- `PG_PASSWORD` - PostgreSQL superuser password
- `POSTGRES_HOST` - PostgreSQL host (default: localhost)
- `POSTGRES_PORT` - PostgreSQL port (default: 5432)
- `POSTGRES_USER` - PostgreSQL superuser (default: postgres)

**Project-specific**:
- `{PROJECT_NAME}_PG_DB` - Database name (default: {project_name})
- `{PROJECT_NAME}_PG_USER` - Application user (default: {project_name})
- `{PROJECT_NAME}_PG_PASSWORD` - Application user password (REQUIRED)

### Setup Instructions

```bash
# Set required environment variables
export PG_PASSWORD="your_postgres_password"
export {PROJECT_NAME}_PG_PASSWORD="your_app_password"

# Run setup script
python dev_scripts/setup_database.py

# For test database
python dev_scripts/setup_database.py --test-db
```
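The setup script above fails fast when required variables are missing; that check can be factored into a small reusable helper. This is a sketch under the assumption that fail-fast validation is wanted elsewhere too (the function name is illustrative, not part of the generated files):

```python
import os


# Fail fast when required environment variables are missing or empty,
# mirroring the checks in setup_database.py (name is illustrative).
def require_env(*names: str) -> dict:
    missing = [n for n in names if not os.environ.get(n)]
    if missing:
        raise SystemExit(f"Error: missing required env vars: {', '.join(missing)}")
    return {n: os.environ[n] for n in names}


# Example: both must be set before running the setup script.
# setdefault here only supplies demo values for this runnable sketch.
os.environ.setdefault("PG_PASSWORD", "example")
os.environ.setdefault("TRADING_BOT_PG_PASSWORD", "example")
creds = require_env("PG_PASSWORD", "TRADING_BOT_PG_PASSWORD")
```

Centralizing the check keeps the error message consistent across the setup script and any other tooling that needs the same credentials.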
## Step 6: Make Script Executable

Run:
```bash
chmod +x dev_scripts/setup_database.py
```
## Step 7: Create Database Driver (Optional but Recommended)

If the project needs a database driver/connection manager, create one following this pattern:

### File: `src/{project_name}/driver/database.py`

**Key patterns to follow:**

1. **Connection Pooling**: Use `ThreadedConnectionPool` from psycopg2
   ```python
   from psycopg2.pool import ThreadedConnectionPool

   self.pool = ThreadedConnectionPool(
       min_conn,  # e.g., 2
       max_conn,  # e.g., 10
       host=db_host,
       database=db_name,
       user=db_user,
       password=db_passwd
   )
   ```

2. **Context Managers**: Provide context managers for connections and cursors
   ```python
   @contextmanager
   def get_cursor(self, commit=True, cursor_factory=None):
       """Context manager for database cursors with automatic commit/rollback"""
       with self._get_connection() as conn:
           cursor = conn.cursor(cursor_factory=cursor_factory)
           try:
               yield cursor
               if commit:
                   conn.commit()
           except Exception:
               conn.rollback()
               raise
           finally:
               cursor.close()
   ```

3. **Always Use RealDictCursor for Loading Data**: When reading from the database, use RealDictCursor
   ```python
   from psycopg2.extras import RealDictCursor

   with self.get_cursor(commit=False, cursor_factory=RealDictCursor) as cursor:
       cursor.execute("SELECT * FROM table WHERE id = %s", (id,))
       result = cursor.fetchone()
       return Model.from_dict(dict(result))
   ```

4. **Unix Timestamps Everywhere**: Convert database timestamps to/from unix timestamps
   ```python
   # When saving to DB - store as BIGINT
   created_at = int(time.time())

   # When loading from DB - already BIGINT, use as-is
   # In models, store as int (unix timestamp)
   # Only convert to datetime for display/formatting purposes
   ```

5. **Proper Cleanup**: Ensure the pool is closed on destruction
   ```python
   def close(self):
       if self.pool and not self.pool.closed:
           self.pool.closeall()

   def __del__(self):
       if hasattr(self, 'pool'):
           self.close()
   ```

### Example Driver Structure:
```python
class {ProjectName}DB:
    def __init__(self, db_host, db_name, db_user, db_passwd, min_conn=2, max_conn=10):
        self.pool = ThreadedConnectionPool(...)

    @contextmanager
    def get_cursor(self, commit=True, cursor_factory=None):
        # Context manager for cursors
        pass

    def load_item_by_id(self, item_id: str) -> Item:
        with self.get_cursor(commit=False, cursor_factory=RealDictCursor) as cursor:
            cursor.execute("SELECT * FROM items WHERE id = %s", (item_id,))
            result = cursor.fetchone()
            if not result:
                raise Exception(f"Item {item_id} not found")
            return Item.from_dict(dict(result))

    def save_item(self, item: Item) -> str:
        with self.get_cursor() as cursor:
            cursor.execute(
                "INSERT INTO items (name, created_at) VALUES (%s, %s) RETURNING id",
                (item.name, int(time.time()))
            )
            return str(cursor.fetchone()[0])
```
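The commit/rollback context-manager pattern and the unix-timestamp convention above are database-agnostic. Here is a runnable sketch that demonstrates both using stdlib `sqlite3` as a stand-in for psycopg2 (the production driver would use the `ThreadedConnectionPool` shown above; this demo class and its single-connection design are illustrative only):

```python
import sqlite3
import time
from contextlib import contextmanager


# Demonstrates the get_cursor() contract from the driver pattern above,
# using stdlib sqlite3 so the sketch runs anywhere (swap in psycopg2
# and a connection pool for the real driver).
class DemoDB:
    def __init__(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute(
            "CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT, created_at INTEGER)"
        )

    @contextmanager
    def get_cursor(self, commit=True):
        cursor = self.conn.cursor()
        try:
            yield cursor
            if commit:
                self.conn.commit()  # commit on clean exit
        except Exception:
            self.conn.rollback()    # roll back on any error
            raise
        finally:
            cursor.close()


db = DemoDB()
with db.get_cursor() as cur:
    # Store timestamps as unix seconds, matching the BIGINT convention
    cur.execute("INSERT INTO items (name, created_at) VALUES (?, ?)",
                ("demo", int(time.time())))
with db.get_cursor(commit=False) as cur:
    cur.execute("SELECT name, created_at FROM items")
    row = cur.fetchone()
```

The caller never touches commit/rollback directly, which is the point of the context-manager pattern: transactional bookkeeping lives in one place.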
## Design Principles

This pattern follows these principles:

### Database Schema:
1. **Separation of concerns** - SQL in .sql files, setup logic in Python
2. **Idempotency** - Safe to run multiple times
3. **Test database support** - Easy to create isolated test environments
4. **Unix timestamps** - Always use BIGINT for dates/times (not TIMESTAMP types)
5. **UUIDs for keys** - Better for distributed systems
6. **Environment-based config** - No hardcoded credentials

### Database Driver (if applicable):
1. **Connection pooling** - Use ThreadedConnectionPool for efficient connection reuse
2. **Context managers** - Automatic commit/rollback and resource cleanup
3. **RealDictCursor for reads** - Always use RealDictCursor when loading data for easy dict conversion
4. **Unix timestamps** - Store as BIGINT, convert only for display
5. **Proper cleanup** - Close the pool on destruction

## Example Usage in Claude Code

User: "Set up postgres database for my project"
Claude: "What is your project name?"
User: "trading-bot"
Claude:
1. Creates database/ and dev_scripts/ directories
2. Creates database/schema.sql with a skeleton
3. Creates dev_scripts/setup_database.py with:
   - TRADING_BOT_PG_PASSWORD
   - trading_bot database and user
4. Documents the environment variables needed
5. Makes the script executable
442
skills/python-pypi-setup/SKILL.md
Normal file
@@ -0,0 +1,442 @@
---
name: python-pypi-setup
description: Set up Python project for PyPI publishing with pyproject.toml, src layout, and build scripts
---

# Python PyPI Project Setup Pattern

This skill helps you set up a Python project for PyPI publishing following modern best practices, with pyproject.toml, a src layout, and standardized build/publish scripts.

## When to Use This Skill

Use this skill when:
- Starting a new Python package for PyPI distribution
- You want to use modern pyproject.toml-based configuration
- You need a standardized src/ layout with explicit package discovery
- You want automated build and publish scripts

## What This Skill Creates

1. **`pyproject.toml`** - Modern Python project configuration
2. **`src/{package_name}/`** - Source layout with package structure
3. **`.gitignore`** - Comprehensive Python gitignore
4. **`dev-requirements.txt`** - Development dependencies (build, twine, testing tools)
5. **`build-publish.sh`** - Automated build and publish script
6. **`README.md`** - Basic project documentation

## Step 1: Gather Project Information

**IMPORTANT**: Before creating files, ask the user these questions:

1. **"What is your project name?"** (e.g., "pg-podcast-toolkit", "mypackage")
   - Use this to derive:
     - PyPI package name: `{project-name}` (with hyphens, e.g., `pg-podcast-toolkit`)
     - Python package name: `{package_name}` (with underscores, e.g., `pg_podcast_toolkit`)
     - Module directory: `src/{package_name}/`

2. **"What is the project description?"** (brief one-line description for PyPI)

3. **"What is your name?"** (for the author field)

4. **"What is your email?"** (for the author field)

5. **"What is your GitHub username?"** (for project URLs)

6. **"What license do you want to use?"** (options: MIT, Apache-2.0, GPL-3.0, BSD-3-Clause)

7. **"What Python version should be the minimum requirement?"** (default: 3.8)

8. **"What are your initial dependencies?"** (optional - comma-separated list, can be empty)

9. **"What keywords describe your project?"** (optional - for PyPI searchability)
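The hyphen/underscore split in question 1 follows Python packaging rules: PyPI distribution names may contain hyphens, but an importable module name must be a valid identifier. A small illustrative helper (the PEP 503-style normalization shown is an assumption about how strictly you want to normalize):

```python
import re


# Derive both names from one project name: a PEP 503-normalized
# distribution name for PyPI, and a valid identifier for the import
# package directory under src/ (helper is illustrative).
def derive_names(project_name: str) -> tuple:
    dist = re.sub(r"[-_.]+", "-", project_name).lower()  # PyPI name
    package = dist.replace("-", "_")                     # import name
    return dist, package


dist, package = derive_names("pg-podcast-toolkit")
```

So "pg-podcast-toolkit" on PyPI becomes the importable package `pg_podcast_toolkit` under `src/`.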
## Step 2: Create Directory Structure
|
||||
|
||||
Create these directories if they don't exist:
|
||||
```
|
||||
{project_root}/
|
||||
├── src/
|
||||
│ └── {package_name}/
|
||||
└── (other files at root)
|
||||
```
|
||||
|
||||
## Step 3: Create pyproject.toml

Create `pyproject.toml` with the following structure, **substituting project-specific values**:

```toml
[project]
name = "{project-name}"
version = "0.0.1"
authors = [
  { name="{author_name}", email="{author_email}" },
]
description = "{project_description}"
keywords = [{keywords_list}]
readme = "README.md"
requires-python = ">={python_version}"
license = {text = "{license_name} License"}
classifiers = [
    "Programming Language :: Python :: 3",
    "License :: OSI Approved :: {license_classifier}",
    "Operating System :: OS Independent",
]
dependencies = [
    {dependencies_list}
]

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[tool.hatch.build.targets.wheel]
packages = ["src/{package_name}"]

[project.urls]
Homepage = "https://github.com/{github_username}/{project-name}"
Issues = "https://github.com/{github_username}/{project-name}/issues"
```

**CRITICAL Substitutions**:
- `{project-name}` → project name with hyphens (e.g., `pg-podcast-toolkit`)
- `{package_name}` → package name with underscores (e.g., `pg_podcast_toolkit`)
- `{author_name}` → author's name
- `{author_email}` → author's email
- `{project_description}` → one-line description
- `{keywords_list}` → comma-separated quoted keywords (e.g., `"podcasting", "rss", "parser"`) or empty
- `{python_version}` → minimum Python version (e.g., `3.8`)
- `{license_name}` → license name (e.g., `MIT`, `Apache-2.0`)
- `{license_classifier}` → OSI classifier (e.g., `MIT License`, `Apache Software License`)
- `{dependencies_list}` → comma-separated quoted dependencies (e.g., `"requests", "beautifulsoup4"`) or empty
- `{github_username}` → GitHub username

**License Classifiers Mapping**:
- MIT → `MIT License`
- Apache-2.0 → `Apache Software License`
- GPL-3.0 → `GNU General Public License v3 (GPLv3)`
- BSD-3-Clause → `BSD License`
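The substitution and classifier rules above can be expressed in a few lines of Python. A sketch — `render_pyproject` is an illustrative helper, not part of this skill:

```python
# Map the license answer from Step 1 to its OSI trove classifier
LICENSE_CLASSIFIERS = {
    "MIT": "MIT License",
    "Apache-2.0": "Apache Software License",
    "GPL-3.0": "GNU General Public License v3 (GPLv3)",
    "BSD-3-Clause": "BSD License",
}


def render_pyproject(template: str, answers: dict) -> str:
    """Fill a pyproject.toml template from the Step 1 answers."""
    answers = dict(answers)  # don't mutate the caller's dict
    # Derive the underscore package name from the hyphenated project name
    answers["package_name"] = answers["project-name"].replace("-", "_")
    answers["license_classifier"] = LICENSE_CLASSIFIERS[answers["license_name"]]
    for key, value in answers.items():
        template = template.replace("{" + key + "}", value)
    return template
```

This keeps the hyphen/underscore distinction and the license-to-classifier mapping in one place, so the two can never drift apart.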
## Step 4: Create Comprehensive .gitignore

Create `.gitignore` with comprehensive Python patterns:

```gitignore
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
*.manifest
*.spec

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# pipenv
Pipfile.lock

# PEP 582
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
bin/
include/
pyvenv.cfg

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# IDEs
.vscode/
.idea/
*.swp
*.swo
*~
.DS_Store
```
## Step 5: Create dev-requirements.txt

Create `dev-requirements.txt` with development dependencies:

```
build
twine
pytest
black
mypy
```

These are the tools needed to build, publish, and develop the package. Add other dev tools as needed (isort, pytest-cov, etc.).
## Step 6: Create build-publish.sh

Create `build-publish.sh` with venv activation and build/publish commands:

```bash
#!/bin/bash
# Build and publish package to PyPI
# Activates virtual environment before running

set -e  # exit on the first error so a failed build is never uploaded

# Activate virtual environment
source bin/activate

# Clean previous builds
rm -rf dist/*

# Build package
python -m build

# Upload to PyPI
python -m twine upload dist/*
```

**Note**: This script follows the convention that the virtual environment is in `bin/` at the project root.
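The same clean → build → upload flow can be driven from Python when shell scripting is awkward (e.g., on Windows). A sketch, assuming `build` and `twine` are installed in the active environment; the helper names are illustrative:

```python
import shutil
import subprocess
import sys


def plan_commands(upload=True):
    """Return the commands build-publish.sh runs, in order."""
    commands = [[sys.executable, "-m", "build"]]
    if upload:
        commands.append([sys.executable, "-m", "twine", "upload", "dist/*"])
    return commands


def build_and_publish(upload=True):
    """Clean dist/, build the package, and (optionally) upload to PyPI."""
    shutil.rmtree("dist", ignore_errors=True)  # clean previous builds
    for cmd in plan_commands(upload):
        subprocess.run(cmd, check=True)  # check=True stops at the first failure
```

Using `sys.executable` guarantees the build runs under the same interpreter (and virtual environment) as the caller, mirroring what `source bin/activate` achieves in the shell script.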
## Step 7: Create Package Structure

Create the basic package structure:

1. **`src/{package_name}/__init__.py`** - Package initialization file:

   ```python
   """
   {project_description}
   """

   __version__ = "0.0.1"
   ```

2. **If this is a library package**, you can add:

   ```python
   # Export main classes/functions here for easier imports
   # from .module import ClassName, function_name
   # __all__ = ['ClassName', 'function_name']
   ```
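Note that the version now lives in two places: `pyproject.toml` and `__init__.py`. A small check can catch drift before publishing — a sketch with an illustrative helper, using a regex rather than `tomllib` so it works on the 3.8 default minimum:

```python
import re


def versions_match(pyproject_text: str, init_text: str) -> bool:
    """Check that pyproject.toml and __init__.py agree on the version."""
    pyproject_version = re.search(
        r'^version\s*=\s*"([^"]+)"', pyproject_text, re.MULTILINE
    ).group(1)
    init_version = re.search(
        r'^__version__\s*=\s*"([^"]+)"', init_text, re.MULTILINE
    ).group(1)
    return pyproject_version == init_version
```

Run it against the two files (e.g., in a pre-commit hook or a pytest test) so a release never ships with mismatched version strings.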
## Step 8: Create README.md

Create `README.md` with basic project documentation (the outer fence uses four backticks so the inner fenced blocks nest correctly):

````markdown
# {project-name}

{project_description}

## Installation

```bash
pip install {project-name}
```

## Usage

```python
import {package_name}

# Add usage examples here
```

## Development

### Setup

```bash
# Create virtual environment
python -m venv .

# Activate virtual environment
source bin/activate  # On Windows: Scripts\activate

# Install dependencies
pip install -r dev-requirements.txt
pip install -e .
```

### Building and Publishing

```bash
# Make sure you have PyPI credentials configured
# Build and publish to PyPI
./build-publish.sh
```

## License

{license_name}

## Author

{author_name} ({author_email})
````
## Step 9: Make Script Executable

Run:

```bash
chmod +x build-publish.sh
```

## Step 10: Create Initial Git Repository (if needed)

If not already a git repository:

```bash
git init
git add .
git commit -m "Initial project structure for PyPI package"
```
## Step 11: Document Next Steps

Inform the user of the next steps:

1. **Install development dependencies**:

   ```bash
   source bin/activate
   pip install -r dev-requirements.txt
   ```

2. **Install package in development mode**:

   ```bash
   pip install -e .
   ```

3. **Write your code** in `src/{package_name}/`

4. **Update version** in `pyproject.toml` before publishing

5. **Configure PyPI credentials** (one-time setup):

   ```bash
   # Create ~/.pypirc with your PyPI token
   ```

6. **Build and publish**:

   ```bash
   ./build-publish.sh
   ```
## Design Principles

This pattern follows these principles:

1. **Modern pyproject.toml** - No setup.py needed, all config in pyproject.toml
2. **Src Layout** - Source code in `src/` directory for better separation
3. **Explicit Package Discovery** - Using hatchling with explicit package paths
4. **Comprehensive .gitignore** - Covers all common Python artifacts
5. **Virtual Environment Convention** - Uses `bin/` at project root
6. **Automated Publishing** - Simple script for build and publish
7. **Best Practices** - Follows PEP 517/518 and modern Python packaging standards
## Example Usage in Claude Code

User: "Set up a Python package for PyPI"

Claude: "What is your project name?"

User: "awesome-lib"

Claude: [Asks remaining questions]

Claude:
1. Creates src/awesome_lib/ directory structure
2. Creates pyproject.toml with project metadata
3. Creates comprehensive .gitignore
4. Creates dev-requirements.txt with build tools and dev dependencies
5. Creates build-publish.sh script
6. Creates src/awesome_lib/__init__.py
7. Creates README.md with instructions
8. Makes script executable
9. Documents next steps for the user