Initial commit

This commit is contained in:
Zhongwei Li
2025-11-30 08:28:55 +08:00
commit 11d1aa68c0
9 changed files with 3148 additions and 0 deletions

.claude-plugin/plugin.json Normal file

@@ -0,0 +1,18 @@
{
"name": "azure-to-docker-master",
"description": "Complete Azure-to-Docker migration system for local development with 2025 features. PROACTIVELY activate for: (1) ANY Azure-to-Docker migration task, (2) Azure infrastructure extraction and Docker Compose generation, (3) Azure service emulator setup (Azurite 2025-11-05 API, SQL Server 2025 latest, Cosmos DB vnext Linux, Service Bus official emulator), (4) Local development with Docker Compose Watch mode (hot reload), (5) Database export from Azure SQL/PostgreSQL/MySQL to Docker, (6) Dockerfile generation from Azure App Service configurations, (7) Multi-container orchestration with proper networking and dependencies, (8) Production-ready Docker Compose with health checks and runtime secrets, (9) Azure service mapping (App Service/SQL/Storage/Redis/Cosmos/Service Bus), (10) Development-to-production parity with Azure emulators. Provides: Azure resource extraction and analysis, complete Docker Compose generation with 2025 best practices, Azure emulator configuration (Azurite with latest API, SQL Server 2025 with Vector Search, Cosmos DB vnext Linux-based, official Service Bus emulator), Docker Compose Watch mode for hot reload, database export automation, App Service to Dockerfile conversion, service dependency mapping, network isolation patterns, volume management strategies, environment variable templating, health check implementation, resource limit configuration, security hardening (non-root users, read-only filesystems, capability drops, runtime-only secrets), development override patterns with watch mode, and Azure-to-Docker best practices. Ensures production-ready local development environments that mirror Azure infrastructure with instant hot reload capabilities.",
"version": "1.1.0",
"author": {
"name": "Josiah Siegel",
"email": "JosiahSiegel@users.noreply.github.com"
},
"skills": [
"./skills"
],
"agents": [
"./agents"
],
"commands": [
"./commands"
]
}

README.md Normal file

@@ -0,0 +1,3 @@
# azure-to-docker-master
Complete Azure-to-Docker migration system for local development with 2025 features. PROACTIVELY activate for: (1) ANY Azure-to-Docker migration task, (2) Azure infrastructure extraction and Docker Compose generation, (3) Azure service emulator setup (Azurite 2025-11-05 API, SQL Server 2025 latest, Cosmos DB vnext Linux, Service Bus official emulator), (4) Local development with Docker Compose Watch mode (hot reload), (5) Database export from Azure SQL/PostgreSQL/MySQL to Docker, (6) Dockerfile generation from Azure App Service configurations, (7) Multi-container orchestration with proper networking and dependencies, (8) Production-ready Docker Compose with health checks and runtime secrets, (9) Azure service mapping (App Service/SQL/Storage/Redis/Cosmos/Service Bus), (10) Development-to-production parity with Azure emulators. Provides: Azure resource extraction and analysis, complete Docker Compose generation with 2025 best practices, Azure emulator configuration (Azurite with latest API, SQL Server 2025 with Vector Search, Cosmos DB vnext Linux-based, official Service Bus emulator), Docker Compose Watch mode for hot reload, database export automation, App Service to Dockerfile conversion, service dependency mapping, network isolation patterns, volume management strategies, environment variable templating, health check implementation, resource limit configuration, security hardening (non-root users, read-only filesystems, capability drops, runtime-only secrets), development override patterns with watch mode, and Azure-to-Docker best practices. Ensures production-ready local development environments that mirror Azure infrastructure with instant hot reload capabilities.

agents/azure-to-docker-expert.md Normal file

@@ -0,0 +1,242 @@
---
agent: true
---
## 🚨 CRITICAL GUIDELINES
### Windows File Path Requirements
**MANDATORY: Always Use Backslashes on Windows for File Paths**
When using Edit or Write tools on Windows, you MUST use backslashes (`\`) in file paths, NOT forward slashes (`/`).
**Examples:**
- ❌ WRONG: `D:/repos/project/file.tsx`
- ✅ CORRECT: `D:\repos\project\file.tsx`
This applies to:
- Edit tool file_path parameter
- Write tool file_path parameter
- All file operations on Windows systems
### Documentation Guidelines
**NEVER create new documentation files unless explicitly requested by the user.**
- **Priority**: Update existing README.md files rather than creating new documentation
- **Repository cleanliness**: Keep repository root clean - only README.md unless user requests otherwise
- **Style**: Documentation should be concise, direct, and professional - avoid AI-generated tone
- **User preference**: Only create additional .md files when user specifically asks for documentation
---
# Azure Extraction Expert
You are an expert in extracting Azure infrastructure configurations and converting them to Docker-compatible formats. Your role is to help users programmatically discover, extract, and transform Azure resources for local development environments.
## Your Expertise
### Azure Resource Discovery
- Complete resource enumeration using Azure CLI
- Resource Graph queries for complex scenarios
- Extracting metadata, tags, and configurations
- Discovering dependencies between resources
- Understanding resource hierarchies
### Configuration Extraction
- App Service settings and connection strings
- Database server configurations and parameters
- Storage account keys and connection strings
- Key Vault secrets (names and values)
- Application Insights instrumentation keys
- Redis Cache configuration and access keys
- Cosmos DB connection strings and settings
- Virtual Network and NSG configurations
### Azure CLI Mastery
- Comprehensive knowledge of `az` command structure
- JSON output parsing with `jq`
- Batch operations and scripting
- Authentication and subscription management
- Error handling and retry logic
- Resource provider API versions
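A minimal sketch of the enumerate-then-parse workflow, assuming `az` and `jq` are installed and `my-rg` is a placeholder resource group:
```bash
#!/bin/bash
set -euo pipefail
RG="my-rg"  # placeholder resource group name

# Enumerate all resources and capture raw JSON for later analysis
az resource list --resource-group "$RG" --output json > azure-resources.json

# Summarize resource types and counts with jq
jq -r 'group_by(.type) | map({type: .[0].type, count: length}) | .[] | "\(.count)\t\(.type)"' azure-resources.json

# Pull the names of all App Services for per-app extraction
jq -r '.[] | select(.type == "Microsoft.Web/sites") | .name' azure-resources.json
```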
## Your Approach
1. **Discover First**
- Always enumerate resources before extraction
- Identify resource types and dependencies
- Check permissions and access levels
- Validate prerequisites (CLI, auth, permissions)
2. **Extract Systematically**
- Process each resource type methodically
- Capture all relevant configurations
- Store in organized directory structure
- Generate both JSON and human-readable formats
3. **Transform for Docker**
- Map Azure services to Docker equivalents
- Convert connection strings to Docker format
- Generate appropriate Dockerfiles
- Create docker-compose service definitions
- Transform environment variables
4. **Validate Output**
- Verify all critical data extracted
- Check connection string transformations
- Validate generated configurations
- Ensure secrets are handled securely
## Key Principles
- **Completeness**: Extract everything needed to run locally
- **Security**: Never log or display sensitive credentials
- **Organization**: Create clear, navigable directory structures
- **Automation**: Generate scripts for repeatable processes
- **Documentation**: Explain what was extracted and how to use it
## Azure Service to Docker Mappings
You maintain expert knowledge of these transformations:
- **App Service** → Docker container with appropriate runtime
- **Azure SQL Database** → SQL Server container
- **PostgreSQL/MySQL** → PostgreSQL/MySQL containers
- **Azure Storage** → Azurite emulator
- **Redis Cache** → Redis container
- **Cosmos DB** → Cosmos DB emulator
- **Service Bus** → Service Bus emulator (or RabbitMQ)
- **Application Insights** → OpenTelemetry + Jaeger
## Connection String Transformations
You know how to convert Azure connection strings to local Docker equivalents:
**Azure SQL:**
```
FROM: Server=myserver.database.windows.net;Database=mydb;User Id=user@myserver;Password=xxx;
TO: Server=sqlserver;Database=mydb;User Id=sa;Password=xxx;TrustServerCertificate=True;
```
**PostgreSQL:**
```
FROM: Host=myserver.postgres.database.azure.com;Database=mydb;Username=user@myserver;Password=xxx;
TO: Host=postgres;Database=mydb;Username=postgres;Password=xxx;
```
**Storage:**
```
FROM: DefaultEndpointsProtocol=https;AccountName=mystorage;AccountKey=xxx;EndpointSuffix=core.windows.net
TO: DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM2...;BlobEndpoint=http://azurite:10000/devstoreaccount1;
```
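A rough sed-based sketch of such a rewrite (illustrative only, not the plugin's actual implementation):
```bash
# Rewrite an Azure SQL connection string for local Docker use
AZURE_CS='Server=myserver.database.windows.net;Database=mydb;User Id=user@myserver;Password=xxx;'
LOCAL_CS=$(printf '%s' "$AZURE_CS" \
  | sed -e 's/Server=[^;]*/Server=sqlserver/' \
        -e 's/User Id=[^;]*/User Id=sa/')
LOCAL_CS="${LOCAL_CS}TrustServerCertificate=True;"
echo "$LOCAL_CS"
# -> Server=sqlserver;Database=mydb;User Id=sa;Password=xxx;TrustServerCertificate=True;
```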
## Common Scenarios
### Scenario 1: Full Resource Group Extraction
User wants to containerize an entire Azure environment.
**Your Process:**
1. List all resources in the resource group
2. Extract configurations for each resource type
3. Generate Docker equivalents
4. Create docker-compose.yml orchestrating all services
5. Provide setup and usage instructions
### Scenario 2: Specific Service Extraction
User needs just one service (e.g., a web app).
**Your Process:**
1. Extract the specific resource configuration
2. Identify dependencies (database, storage, etc.)
3. Extract dependencies too
4. Generate minimal docker-compose for this stack
5. Document connection requirements
### Scenario 3: Database Migration
User wants to move database to local development.
**Your Process:**
1. Extract database schema and connection details
2. Generate export scripts (BACPAC, pg_dump, mysqldump)
3. Create Docker container definition
4. Provide import instructions
5. Transform connection strings for local use
## Error Handling
When extractions fail:
- Check Azure CLI authentication: `az account show`
- Verify resource exists: `az resource show`
- Confirm permissions: `az role assignment list`
- Validate resource group: `az group show`
- Test connectivity: network issues
- Provide clear error messages with solutions
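A hedged preflight sketch chaining these checks before any extraction (the resource group name is a placeholder):
```bash
#!/bin/bash
set -euo pipefail
RG="my-rg"  # placeholder resource group name

# Fail fast with actionable messages before any extraction work
az account show > /dev/null 2>&1 || { echo "Not authenticated: run 'az login'"; exit 1; }
az group show --name "$RG" > /dev/null 2>&1 || { echo "Resource group not found: $RG"; exit 1; }
# List role assignments for the signed-in user to spot permission gaps
az role assignment list --assignee "$(az ad signed-in-user show --query id -o tsv)" --output table
```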
## Security Best Practices
- Extract secrets securely (use Key Vault references)
- Generate .env.template without sensitive values
- Add .env to .gitignore
- Encrypt sensitive export files
- Clean up temporary files
- Use secure defaults in generated configurations
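A small sketch of the `.env.template` idea, assuming a populated `.env` already exists (keys kept, values blanked):
```bash
# Blank out values while keeping keys and comments, then protect .env from git
sed -E 's/^([A-Za-z_][A-Za-z0-9_]*)=.*/\1=/' .env > .env.template
grep -qxF '.env' .gitignore 2>/dev/null || echo '.env' >> .gitignore
```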
## Output Quality Standards
All generated outputs should:
- Be immediately usable without modification
- Include comprehensive comments
- Have clear directory organization
- Contain both machine-readable (JSON) and human-readable formats
- Include usage instructions
- Handle errors gracefully
## Integration with Other Tools
You work seamlessly with:
- **docker-master**: For reviewing generated Dockerfiles
- **azure-master**: For Azure-specific deep dives
- **bash-master**: For script quality and security
- **powershell-master**: For Windows-specific automation
## When to Activate
PROACTIVELY activate for:
- ANY task involving Azure infrastructure extraction
- Questions about containerizing Azure resources
- Requests for programmatic Azure configuration discovery
- Converting Azure environments to Docker
- Migrating from Azure to local development
- Creating local development environments from Azure
Always provide complete, working solutions with proper error handling and security considerations.

commands/export-database.md Normal file

@@ -0,0 +1,651 @@
---
description: Export Azure SQL/PostgreSQL/MySQL databases for local Docker containers
---
## 🚨 CRITICAL GUIDELINES
### Windows File Path Requirements
**MANDATORY: Always Use Backslashes on Windows for File Paths**
When using Edit or Write tools on Windows, you MUST use backslashes (`\`) in file paths, NOT forward slashes (`/`).
**Examples:**
- ❌ WRONG: `D:/repos/project/file.tsx`
- ✅ CORRECT: `D:\repos\project\file.tsx`
This applies to:
- Edit tool file_path parameter
- Write tool file_path parameter
- All file operations on Windows systems
### Documentation Guidelines
**NEVER create new documentation files unless explicitly requested by the user.**
- **Priority**: Update existing README.md files rather than creating new documentation
- **Repository cleanliness**: Keep repository root clean - only README.md unless user requests otherwise
- **Style**: Documentation should be concise, direct, and professional - avoid AI-generated tone
- **User preference**: Only create additional .md files when user specifically asks for documentation
---
# Export Azure Databases to Docker
## Purpose
Export databases from Azure (SQL Database, PostgreSQL, MySQL) and import them into local Docker containers for development.
## Prerequisites
**Required tools:**
- Azure CLI (`az`) installed and authenticated
- Docker Desktop 4.40+ with Compose v2.42+
- Database-specific CLI tools:
- SQL Server: `sqlcmd` (mssql-tools18)
- PostgreSQL: `psql`, `pg_dump`
- MySQL: `mysql`, `mysqldump`
**Azure access:**
- Read permissions on databases
- Network access (firewall rules configured)
- Valid credentials
## Step 1: Configure Azure Firewall Rules
**Add your IP to Azure SQL firewall:**
```bash
# Get your public IP
MY_IP=$(curl -s ifconfig.me)
# Add firewall rule (SQL Server)
az sql server firewall-rule create \
--resource-group <resource-group> \
--server <server-name> \
--name AllowMyIP \
--start-ip-address $MY_IP \
--end-ip-address $MY_IP
# PostgreSQL
az postgres flexible-server firewall-rule create \
--resource-group <resource-group> \
--name <server-name> \
--rule-name AllowMyIP \
--start-ip-address $MY_IP \
--end-ip-address $MY_IP
# MySQL
az mysql flexible-server firewall-rule create \
  --resource-group <resource-group> \
  --name <server-name> \
  --rule-name AllowMyIP \
  --start-ip-address $MY_IP \
  --end-ip-address $MY_IP
```
## Step 2: Get Connection Information
**Azure SQL Database:**
```bash
# Get connection string
az sql db show-connection-string \
--client sqlcmd \
--name <database-name> \
--server <server-name>
# Output format:
# sqlcmd -S <server-name>.database.windows.net -d <database-name> -U <username> -P <password> -N -l 30
```
**PostgreSQL:**
```bash
# Get server details
az postgres flexible-server show \
--resource-group <resource-group> \
--name <server-name> \
--query "{fqdn:fullyQualifiedDomainName, version:version}" \
--output table
# Connection string format:
# postgresql://<username>:<password>@<server-name>.postgres.database.azure.com:5432/<database-name>?sslmode=require
```
**MySQL:**
```bash
# Get server details
az mysql flexible-server show \
--resource-group <resource-group> \
--name <server-name> \
--query "{fqdn:fullyQualifiedDomainName, version:version}" \
--output table
# Connection string format:
# mysql://<username>:<password>@<server-name>.mysql.database.azure.com:3306/<database-name>?ssl-mode=REQUIRED
```
## Step 3: Export Database
### Azure SQL Database
**Option 1: Using Azure CLI (BACPAC):**
```bash
# Export to Azure Storage
az sql db export \
--resource-group <resource-group> \
--server <server-name> \
--name <database-name> \
--admin-user <username> \
--admin-password <password> \
--storage-key-type StorageAccessKey \
--storage-key <storage-key> \
--storage-uri https://<storage-account>.blob.core.windows.net/<container>/<database-name>.bacpac
# Download BACPAC file
az storage blob download \
--account-name <storage-account> \
--container-name <container> \
--name <database-name>.bacpac \
--file ./<database-name>.bacpac
```
**Option 2: Using sqlcmd (SQL script):**
```bash
# Install mssql-tools18 if needed
# Windows: winget install Microsoft.SqlCmd
# Linux: https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-setup-tools
# Generate schema + data script
sqlcmd -S <server-name>.database.windows.net \
-d <database-name> \
-U <username> \
-P <password> \
-C \
-Q "SELECT * FROM INFORMATION_SCHEMA.TABLES" \
-o schema-info.txt
# For full export, use SQL Server Management Studio or Azure Data Studio
# Export wizard: Tasks → Generate Scripts → Script entire database
```
**Option 3: Using SqlPackage (recommended for large databases):**
```bash
# Install SqlPackage: https://learn.microsoft.com/en-us/sql/tools/sqlpackage/sqlpackage-download
# Export as BACPAC
sqlpackage /Action:Export \
/SourceServerName:<server-name>.database.windows.net \
/SourceDatabaseName:<database-name> \
/SourceUser:<username> \
/SourcePassword:<password> \
/SourceTrustServerCertificate:True \
/TargetFile:./<database-name>.bacpac
# Or export as DACPAC (schema only)
sqlpackage /Action:Extract \
/SourceServerName:<server-name>.database.windows.net \
/SourceDatabaseName:<database-name> \
/SourceUser:<username> \
/SourcePassword:<password> \
/SourceTrustServerCertificate:True \
/TargetFile:./<database-name>.dacpac
```
### PostgreSQL
**Using pg_dump:**
```bash
# Export entire database (schema + data)
pg_dump -h <server-name>.postgres.database.azure.com \
-U <username> \
-d <database-name> \
-F c \
-f <database-name>.dump
# Or as SQL script
pg_dump -h <server-name>.postgres.database.azure.com \
-U <username> \
-d <database-name> \
--clean \
--if-exists \
-f <database-name>.sql
# Schema only
pg_dump -h <server-name>.postgres.database.azure.com \
-U <username> \
-d <database-name> \
--schema-only \
-f <database-name>-schema.sql
# Data only
pg_dump -h <server-name>.postgres.database.azure.com \
-U <username> \
-d <database-name> \
--data-only \
-f <database-name>-data.sql
```
### MySQL
**Using mysqldump:**
```bash
# Export entire database
mysqldump -h <server-name>.mysql.database.azure.com \
-u <username> \
-p<password> \
--ssl-mode=REQUIRED \
--databases <database-name> \
--single-transaction \
--routines \
--triggers \
> <database-name>.sql
# Schema only
mysqldump -h <server-name>.mysql.database.azure.com \
-u <username> \
-p<password> \
--ssl-mode=REQUIRED \
--no-data \
--databases <database-name> \
> <database-name>-schema.sql
# Data only
mysqldump -h <server-name>.mysql.database.azure.com \
-u <username> \
-p<password> \
--ssl-mode=REQUIRED \
--no-create-info \
--databases <database-name> \
> <database-name>-data.sql
```
## Step 4: Prepare Local Docker Containers
Ensure Docker Compose is configured with database services.
**SQL Server container:**
```yaml
services:
sqlserver:
image: mcr.microsoft.com/mssql/server:2025-latest
environment:
- ACCEPT_EULA=Y
- MSSQL_PID=Developer
- MSSQL_SA_PASSWORD=YourStrong!Passw0rd
ports:
- "1433:1433"
volumes:
- sqlserver-data:/var/opt/mssql
- ./init:/docker-entrypoint-initdb.d
healthcheck:
test: ["CMD-SHELL", "/opt/mssql-tools18/bin/sqlcmd -S localhost -U sa -P $$MSSQL_SA_PASSWORD -Q 'SELECT 1' -C || exit 1"]
interval: 10s
timeout: 3s
retries: 3
```
**PostgreSQL container:**
```yaml
services:
postgres:
image: postgres:16-alpine
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres123
- POSTGRES_DB=myapp
ports:
- "5432:5432"
volumes:
- postgres-data:/var/lib/postgresql/data
- ./init:/docker-entrypoint-initdb.d
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 10s
timeout: 3s
retries: 3
```
**MySQL container:**
```yaml
services:
mysql:
image: mysql:8.4
environment:
- MYSQL_ROOT_PASSWORD=mysql123
- MYSQL_DATABASE=myapp
ports:
- "3306:3306"
volumes:
- mysql-data:/var/lib/mysql
- ./init:/docker-entrypoint-initdb.d
healthcheck:
test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-u", "root", "-p$$MYSQL_ROOT_PASSWORD"]
interval: 10s
timeout: 3s
retries: 3
```
## Step 5: Start Docker Containers
```bash
# Start database containers
docker compose up -d sqlserver postgres mysql
# Wait for health checks to pass
docker compose ps
# Check logs
docker compose logs sqlserver
```
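`docker compose ps` is only a snapshot; a small polling sketch such as the following waits for the `sqlserver` service to actually report healthy:
```bash
# Poll the sqlserver service's health status until healthy (or give up after ~2.5 min)
cid=$(docker compose ps -q sqlserver)
for i in $(seq 1 30); do
  status=$(docker inspect --format '{{.State.Health.Status}}' "$cid")
  [ "$status" = "healthy" ] && echo "sqlserver is healthy" && break
  echo "waiting for sqlserver ($status)..."
  sleep 5
done
```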
## Step 6: Import Data into Docker Containers
### SQL Server
**Using sqlcmd:**
```bash
# Import SQL script
sqlcmd -S localhost,1433 \
-U sa \
-P YourStrong!Passw0rd \
-C \
-i <database-name>.sql
# Or execute via docker exec (copy the file into the container first)
docker cp <database-name>.sql sqlserver:/tmp/
docker compose exec sqlserver /opt/mssql-tools18/bin/sqlcmd \
  -S localhost \
  -U sa \
  -P YourStrong!Passw0rd \
  -C \
  -i /tmp/<database-name>.sql
```
**Using SqlPackage (BACPAC):**
```bash
# Import BACPAC
sqlpackage /Action:Import \
/SourceFile:./<database-name>.bacpac \
/TargetServerName:localhost \
/TargetDatabaseName:<database-name> \
/TargetUser:sa \
/TargetPassword:YourStrong!Passw0rd \
/TargetTrustServerCertificate:True
# Or via docker exec (requires sqlpackage to be installed in the container image)
docker cp <database-name>.bacpac sqlserver:/tmp/
docker compose exec sqlserver /opt/sqlpackage/sqlpackage \
/Action:Import \
/SourceFile:/tmp/<database-name>.bacpac \
/TargetServerName:localhost \
/TargetDatabaseName:<database-name> \
/TargetUser:sa \
/TargetPassword:YourStrong!Passw0rd \
/TargetTrustServerCertificate:True
```
### PostgreSQL
**Using psql:**
```bash
# Import SQL script
psql -h localhost \
-U postgres \
-d myapp \
-f <database-name>.sql
# Or with custom-format dump
pg_restore -h localhost \
-U postgres \
-d myapp \
-v \
<database-name>.dump
# Via docker exec
docker cp <database-name>.sql postgres:/tmp/
docker compose exec postgres psql \
-U postgres \
-d myapp \
-f /tmp/<database-name>.sql
```
### MySQL
**Using mysql:**
```bash
# Import SQL script
mysql -h localhost \
-u root \
-pmysql123 \
< <database-name>.sql
# Via docker exec (the redirect must run inside the container's shell)
docker cp <database-name>.sql mysql:/tmp/
docker compose exec -T mysql sh -c 'mysql -u root -pmysql123 < /tmp/<database-name>.sql'
```
## Step 7: Verify Import
**SQL Server:**
```bash
sqlcmd -S localhost,1433 -U sa -P YourStrong!Passw0rd -C -Q "SELECT name FROM sys.databases"
sqlcmd -S localhost,1433 -U sa -P YourStrong!Passw0rd -C -Q "USE <database-name>; SELECT COUNT(*) FROM INFORMATION_SCHEMA.TABLES"
```
**PostgreSQL:**
```bash
docker compose exec postgres psql -U postgres -d myapp -c "\dt"
docker compose exec postgres psql -U postgres -d myapp -c "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema = 'public'"
```
**MySQL:**
```bash
docker compose exec mysql mysql -u root -pmysql123 -e "SHOW DATABASES"
docker compose exec mysql mysql -u root -pmysql123 myapp -e "SHOW TABLES"
```
## Step 8: Automate with Init Scripts
Place SQL files in the `./init/` directory for automatic import on container startup. PostgreSQL and MySQL run scripts from `/docker-entrypoint-initdb.d` natively on first start; the SQL Server image does not auto-run them, so execute its scripts with `sqlcmd` once the container is healthy (see the sketch after the SQL Server example below).
**SQL Server init script (init/01-create-database.sql):**
```sql
-- Wait for SQL Server to be ready
WAITFOR DELAY '00:00:10';
GO
-- Create database if not exists
IF NOT EXISTS (SELECT * FROM sys.databases WHERE name = 'MyApp')
BEGIN
CREATE DATABASE MyApp;
END
GO
USE MyApp;
GO
-- Your schema and data here
CREATE TABLE Users (
Id INT PRIMARY KEY IDENTITY(1,1),
Username NVARCHAR(100) NOT NULL,
Email NVARCHAR(255) NOT NULL
);
GO
```
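Since the mssql image ignores `/docker-entrypoint-initdb.d`, a minimal sketch for applying the script manually (password matches the Compose example above):
```bash
# Apply the mounted init script once the sqlserver container is healthy
docker compose exec sqlserver /opt/mssql-tools18/bin/sqlcmd \
  -S localhost -U sa -P YourStrong!Passw0rd -C \
  -i /docker-entrypoint-initdb.d/01-create-database.sql
```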
**PostgreSQL init script (init/01-init.sql):**
```sql
-- Runs automatically on first container start
CREATE TABLE IF NOT EXISTS users (
id SERIAL PRIMARY KEY,
username VARCHAR(100) NOT NULL,
email VARCHAR(255) NOT NULL
);
INSERT INTO users (username, email) VALUES
('admin', 'admin@example.com'),
('user', 'user@example.com');
```
**MySQL init script (init/01-init.sql):**
```sql
-- Runs automatically on first container start
CREATE TABLE IF NOT EXISTS users (
id INT AUTO_INCREMENT PRIMARY KEY,
username VARCHAR(100) NOT NULL,
email VARCHAR(255) NOT NULL
);
INSERT INTO users (username, email) VALUES
('admin', 'admin@example.com'),
('user', 'user@example.com');
```
## Step 9: Handle Large Databases
For databases > 10GB:
1. **Use compression:**
```bash
pg_dump -h <server> -U <user> -d <db> -F c -Z 9 -f <db>.dump  # -F c compresses internally; no separate gzip step needed
```
2. **Export schema separately:**
```bash
pg_dump -h <server> -U <user> -d <db> --schema-only -f schema.sql
```
3. **Export data in chunks:**
```bash
pg_dump -h <server> -U <user> -d <db> -t table1 --data-only -f table1.sql
pg_dump -h <server> -U <user> -d <db> -t table2 --data-only -f table2.sql
```
4. **Use parallel export (PostgreSQL):**
```bash
pg_dump -h <server> -U <user> -d <db> -F d -j 4 -f ./dump_directory
```
5. **Consider a subset of data for development:**
```bash
# pg_dump has no row filter; use \copy to export only recent rows
psql -h <server> -U <user> -d <db> \
  -c "\copy (SELECT * FROM table1 WHERE created_at > NOW() - INTERVAL '6 months') TO 'table1-subset.csv' WITH CSV HEADER"
```
## Step 10: Clean Up Azure Resources
Remove firewall rules after export:
```bash
# SQL Server
az sql server firewall-rule delete \
--resource-group <resource-group> \
--server <server-name> \
--name AllowMyIP
# PostgreSQL
az postgres flexible-server firewall-rule delete \
--resource-group <resource-group> \
--name <server-name> \
--rule-name AllowMyIP
# MySQL
az mysql flexible-server firewall-rule delete \
--resource-group <resource-group> \
--name <server-name> \
--rule-name AllowMyIP
```
## Common Issues and Solutions
**Connection timeout:**
- Verify firewall rules include your IP
- Check network connectivity to Azure
- Ensure NSG rules allow database traffic
**Authentication failed:**
- Verify username format (Azure single-server deployments expect `user@server-name`; flexible servers use the plain username)
- Check password for special characters (escape in shell)
- Ensure Azure AD authentication is not required
**BACPAC import fails:**
- Check SQL Server version compatibility
- Ensure sufficient disk space in Docker volume
- Review error messages for missing dependencies
**Large file transfer fails:**
- Use compression
- Split into multiple files
- Consider Azure Data Factory for large datasets
**Schema compatibility issues:**
- Azure SQL → SQL Server 2025: Generally compatible
- Check for Azure-specific features (elastic pools, etc.)
- Test import in non-production environment first
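A few hedged one-liners help narrow these down (hostnames and credentials are placeholders):
```bash
# Is the Azure endpoint reachable on its port? (bash /dev/tcp trick, no extra tools)
timeout 5 bash -c 'cat < /dev/null > /dev/tcp/<server-name>.database.windows.net/1433' \
  && echo "reachable" || echo "blocked or unreachable"
# Does authentication itself work? (PostgreSQL example)
PGPASSWORD='<password>' psql -h <server-name>.postgres.database.azure.com \
  -U <username> -d <database-name> -c 'SELECT 1'
```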
## Best Practices
1. **Use separate init scripts for schema and data**
2. **Version control schema scripts**
3. **Exclude sensitive data from exports**
4. **Test import process before full migration**
5. **Document any manual adjustments needed**
6. **Use environment variables for credentials**
7. **Automate with CI/CD pipelines**
8. **Keep export files secure (gitignore)**
9. **Regularly refresh local data from Azure**
10. **Consider using sample data for local development**
## Automation Script Example
Create `scripts/export-and-import.sh`:
```bash
#!/bin/bash
set -euo pipefail
# Configuration
AZURE_SERVER="myserver.database.windows.net"
AZURE_DB="myapp"
AZURE_USER="admin"
AZURE_PASS="${AZURE_SQL_PASSWORD}"
echo "Exporting from Azure..."
pg_dump -h "$AZURE_SERVER" -U "$AZURE_USER" -d "$AZURE_DB" -F c -f ./dump.sql
echo "Starting Docker container..."
docker compose up -d postgres
sleep 10
echo "Importing to Docker..."
docker cp ./dump.sql postgres:/tmp/
docker compose exec -T postgres pg_restore -U postgres -d myapp -v /tmp/dump.sql
echo "Verifying import..."
docker compose exec -T postgres psql -U postgres -d myapp -c "\dt"
echo "Done! Database ready for local development."
```
## Output Deliverables
Provide:
1. Database export files (.sql, .bacpac, .dump)
2. Import scripts for Docker containers
3. Init directory structure for auto-loading
4. Verification queries
5. Documentation of any schema changes needed
6. Connection string examples for local development
## Next Steps
After importing databases:
1. Update application connection strings
2. Test application against local databases
3. Verify all tables and data imported correctly
4. Document any Azure-specific features not available locally
5. Set up regular refresh process from Azure
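For step 1, a hedged sketch of what local connection strings usually look like (hosts are `localhost` from the host machine; inside other containers use the Compose service names):
```bash
# Local connection strings after import (values match the Compose examples above)
export SQLSERVER_CS='Server=localhost,1433;Database=<database-name>;User Id=sa;Password=YourStrong!Passw0rd;TrustServerCertificate=True;'
export POSTGRES_CS='Host=localhost;Port=5432;Database=myapp;Username=postgres;Password=postgres123;'
export MYSQL_CS='Server=localhost;Port=3306;Database=myapp;Uid=root;Pwd=mysql123;'
```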

commands/extract-infrastructure.md Normal file

@@ -0,0 +1,624 @@
---
description: Extract Azure infrastructure and generate Docker Compose stack for local development
---
## 🚨 CRITICAL GUIDELINES
### Windows File Path Requirements
**MANDATORY: Always Use Backslashes on Windows for File Paths**
When using Edit or Write tools on Windows, you MUST use backslashes (`\`) in file paths, NOT forward slashes (`/`).
**Examples:**
- ❌ WRONG: `D:/repos/project/file.tsx`
- ✅ CORRECT: `D:\repos\project\file.tsx`
This applies to:
- Edit tool file_path parameter
- Write tool file_path parameter
- All file operations on Windows systems
### Documentation Guidelines
**NEVER create new documentation files unless explicitly requested by the user.**
- **Priority**: Update existing README.md files rather than creating new documentation
- **Repository cleanliness**: Keep repository root clean - only README.md unless user requests otherwise
- **Style**: Documentation should be concise, direct, and professional - avoid AI-generated tone
- **User preference**: Only create additional .md files when user specifically asks for documentation
---
# Extract Azure Infrastructure to Docker Compose
## Purpose
Analyze existing Azure infrastructure and generate a complete Docker Compose stack with Azure service emulators for local development.
## Prerequisites
**Required tools:**
- Azure CLI (`az`) installed and configured
- Docker Desktop 4.40+ with Compose v2.42+
- Sufficient local resources (minimum 8GB RAM for full Azure stack)
**Azure access:**
- Authenticated with `az login`
- Appropriate RBAC permissions to read resources
- Access to target resource group
## Step 1: Authenticate with Azure
```bash
# Login to Azure
az login
# List available subscriptions
az account list --output table
# Set target subscription
az account set --subscription "subscription-name-or-id"
# Verify current subscription
az account show
```
## Step 2: Extract Azure Resources
### List Resources in Resource Group
```bash
# List all resources in resource group
az resource list \
--resource-group <resource-group-name> \
--output table
# Get detailed JSON output for analysis
az resource list \
--resource-group <resource-group-name> \
--output json > azure-resources.json
```
### Extract Specific Service Configurations
**App Services:**
```bash
# List App Services
az webapp list \
--resource-group <resource-group-name> \
--output json > app-services.json
# Get detailed configuration for each app
az webapp show \
--name <app-name> \
--resource-group <resource-group-name> \
--output json > app-<app-name>.json
# Get application settings (environment variables)
az webapp config appsettings list \
--name <app-name> \
--resource-group <resource-group-name> \
--output json > app-<app-name>-settings.json
# Get connection strings
az webapp config connection-string list \
--name <app-name> \
--resource-group <resource-group-name> \
--output json > app-<app-name>-connections.json
```
**Azure SQL Databases:**
```bash
# List SQL servers
az sql server list \
--resource-group <resource-group-name> \
--output json > sql-servers.json
# List databases on server
az sql db list \
--server <server-name> \
--resource-group <resource-group-name> \
--output json > sql-databases.json
# Get database details
az sql db show \
--name <database-name> \
--server <server-name> \
--resource-group <resource-group-name> \
--output json > sql-db-<database-name>.json
```
**PostgreSQL/MySQL:**
```bash
# PostgreSQL
az postgres flexible-server list \
--resource-group <resource-group-name> \
--output json > postgres-servers.json
az postgres flexible-server db list \
--server-name <server-name> \
--resource-group <resource-group-name> \
--output json > postgres-databases.json
# MySQL
az mysql flexible-server list \
--resource-group <resource-group-name> \
--output json > mysql-servers.json
```
**Redis Cache:**
```bash
az redis list \
--resource-group <resource-group-name> \
--output json > redis-caches.json
az redis show \
--name <redis-name> \
--resource-group <resource-group-name> \
--output json > redis-<redis-name>.json
```
**Storage Accounts:**
```bash
az storage account list \
--resource-group <resource-group-name> \
--output json > storage-accounts.json
az storage account show \
--name <storage-account-name> \
--resource-group <resource-group-name> \
--output json > storage-<storage-account-name>.json
```
**Cosmos DB:**
```bash
az cosmosdb list \
--resource-group <resource-group-name> \
--output json > cosmosdb-accounts.json
az cosmosdb show \
--name <cosmosdb-name> \
--resource-group <resource-group-name> \
--output json > cosmosdb-<cosmosdb-name>.json
```
**Service Bus:**
```bash
az servicebus namespace list \
--resource-group <resource-group-name> \
--output json > servicebus-namespaces.json
az servicebus queue list \
--namespace-name <namespace-name> \
--resource-group <resource-group-name> \
--output json > servicebus-queues.json
```
## Step 3: Analyze Extracted Resources
Read all JSON files and identify:
1. **Service Types and Counts**
- How many App Services?
- Database types (SQL Server, PostgreSQL, MySQL)?
- Cache services (Redis)?
- Storage requirements (Blob, Queue, Table)?
- NoSQL databases (Cosmos DB)?
- Message queues (Service Bus)?
2. **Service Dependencies**
- Which apps connect to which databases?
- Connection strings and relationships
- Network configurations
- Authentication methods
3. **Configuration Requirements**
- Environment variables from app settings
- Connection strings
- Feature flags
- Secrets (need local equivalents)
4. **Resource Sizing**
- Database SKUs → Docker resource limits
- App Service plans → Container CPU/memory
- Storage capacity → Volume sizing
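A sketch of how this analysis can start with `jq` over the files from Step 2 (the setting-name patterns are heuristics):
```bash
# Which app settings look like connection strings or service endpoints?
jq -r '.[] | select(.name | test("connection|endpoint|host"; "i")) | "\(.name)=\(.value)"' \
  app-<app-name>-settings.json
# Explicit connection strings with their types (SQLAzure, Custom, ...)
jq -r '.[] | "\(.name) [\(.type)] -> \(.value)"' app-<app-name>-connections.json
```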
## Step 4: Map Azure Services to Docker
Use this mapping table:
| Azure Service | Docker Image | Configuration Notes |
|---------------|--------------|---------------------|
| App Service (Windows) | Custom build | Extract runtime stack from config |
| App Service (Linux) | Custom build | Use specified container image |
| Azure SQL Database | `mcr.microsoft.com/mssql/server:2025-latest` | Use Developer edition |
| PostgreSQL Flexible Server | `postgres:16-alpine` | Match version from Azure |
| MySQL Flexible Server | `mysql:8.4` | Match version from Azure |
| Redis Cache | `redis:7.4-alpine` | Configure persistence |
| Storage Account (Blob/Queue/Table) | `mcr.microsoft.com/azure-storage/azurite` | All storage types in one |
| Cosmos DB | `mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator` | NoSQL emulator |
| Service Bus | Official emulator or `rabbitmq:4-management-alpine` | Official emulator requires SQL Server sidecar |
| Application Insights | `jaegertracing/all-in-one` | OpenTelemetry compatible |
## Step 5: Generate Docker Compose Structure
Create `docker-compose.yml` with this structure:
```yaml
# Modern Compose format (no version field for v2.40+)
services:
# Frontend App Services
# Backend App Services
# Databases (SQL Server, PostgreSQL, MySQL)
# Cache (Redis)
# Storage (Azurite)
# NoSQL (Cosmos DB)
# Monitoring (Jaeger, Grafana)
networks:
frontend:
driver: bridge
backend:
driver: bridge
internal: true
monitoring:
driver: bridge
volumes:
# Named volumes for each database
# Named volumes for storage emulators
secrets:
# Database passwords
# Connection strings
```
### Service Generation Rules
**For each App Service:**
```yaml
service-name:
build:
context: ./path-to-app
dockerfile: Dockerfile
ports:
- "PORT:PORT"
depends_on:
database-service:
condition: service_healthy
environment:
# Map from Azure app settings
networks:
- frontend
- backend
restart: unless-stopped
user: "1000:1000"
read_only: true
tmpfs:
- /tmp
cap_drop:
- ALL
cap_add:
- NET_BIND_SERVICE
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:PORT/health"]
interval: 30s
timeout: 3s
retries: 3
start_period: 40s
deploy:
resources:
limits:
cpus: 'X'
memory: XG
```
**For Azure SQL Database:**
```yaml
sqlserver:
image: mcr.microsoft.com/mssql/server:2025-latest
environment:
- ACCEPT_EULA=Y
- MSSQL_PID=Developer
- MSSQL_SA_PASSWORD_FILE=/run/secrets/sa_password
secrets:
- sa_password
ports:
- "1433:1433"
volumes:
- sqlserver-data:/var/opt/mssql
networks:
- backend
healthcheck:
test: ["CMD-SHELL", "/opt/mssql-tools18/bin/sqlcmd -S localhost -U sa -P $$MSSQL_SA_PASSWORD -Q 'SELECT 1' -C || exit 1"]
interval: 10s
timeout: 3s
retries: 3
start_period: 10s
deploy:
resources:
limits:
cpus: '2'
memory: 4G
reservations:
cpus: '1'
memory: 2G
security_opt:
- no-new-privileges:true
```
**For Storage Account:**
```yaml
azurite:
image: mcr.microsoft.com/azure-storage/azurite:latest
command: azurite --blobHost 0.0.0.0 --queueHost 0.0.0.0 --tableHost 0.0.0.0 --loose
ports:
- "10000:10000" # Blob
- "10001:10001" # Queue
- "10002:10002" # Table
volumes:
- azurite-data:/data
networks:
- backend
healthcheck:
test: ["CMD", "nc", "-z", "localhost", "10000"]
interval: 30s
timeout: 3s
retries: 3
restart: unless-stopped
```
**For Redis Cache:**
```yaml
redis:
image: redis:7.4-alpine
command: redis-server --appendonly yes --requirepass ${REDIS_PASSWORD}
ports:
- "6379:6379"
volumes:
- redis-data:/data
networks:
- backend
healthcheck:
test: ["CMD", "redis-cli", "--raw", "incr", "ping"]
interval: 10s
timeout: 3s
retries: 3
security_opt:
- no-new-privileges:true
```
**For Cosmos DB:**
```yaml
cosmosdb:
image: mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator:latest
environment:
- AZURE_COSMOS_EMULATOR_PARTITION_COUNT=10
- AZURE_COSMOS_EMULATOR_ENABLE_DATA_PERSISTENCE=true
ports:
- "8081:8081"
- "10251-10254:10251-10254"
volumes:
- cosmos-data:/data/db
networks:
- backend
deploy:
resources:
limits:
cpus: '2'
memory: 4G
```
## Step 6: Generate Environment Files
Create `.env.template`:
```bash
# SQL Server
MSSQL_SA_PASSWORD=YourStrong!Passw0rd
# PostgreSQL (if used)
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres123
POSTGRES_DB=myapp
# MySQL (if used)
MYSQL_ROOT_PASSWORD=mysql123
MYSQL_DATABASE=myapp
# Redis
REDIS_PASSWORD=redis123
# Application Settings
# (Map from Azure app settings JSON)
ASPNETCORE_ENVIRONMENT=Development
NODE_ENV=development
# Azure Storage Emulator (Standard Development Connection String)
AZURITE_CONNECTION_STRING=DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://azurite:10000/devstoreaccount1;QueueEndpoint=http://azurite:10001/devstoreaccount1;TableEndpoint=http://azurite:10002/devstoreaccount1;
# Cosmos DB Emulator
COSMOS_EMULATOR_ENDPOINT=https://cosmosdb:8081
COSMOS_EMULATOR_KEY=C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==
# Feature Flags
ENABLE_MONITORING=true
```
## Step 7: Create Supporting Files
**Makefile:**
```makefile
.PHONY: up down logs health restart clean init
up:
@docker compose up -d
@echo "✓ Services started. Access at:"
@echo " - Frontend: http://localhost:3000"
@echo " - Backend: http://localhost:8080"
@echo " - Azurite: http://localhost:10000"
@echo " - Cosmos DB: https://localhost:8081/_explorer/index.html"
down:
@docker compose down
logs:
@docker compose logs -f
health:
@docker compose ps
restart:
@docker compose restart
clean:
@docker compose down -v
@echo "✓ Cleaned all volumes"
init:
@cp .env.template .env
@echo "✓ Created .env file. Please update passwords!"
```
**README.md:**
Include:
- Architecture diagram of services
- Service mapping (Azure → Docker)
- Port mappings
- Connection strings for local development
- How to start/stop
- Health check verification
- Troubleshooting guide
**docker-compose.override.yml (for development):**
```yaml
services:
frontend:
volumes:
- ./frontend/src:/app/src:cached
environment:
- HOT_RELOAD=true
backend:
volumes:
- ./backend/src:/app/src:cached
ports:
- "9229:9229" # Node.js debugger
```
## Step 8: Validation
Before finalizing, validate:
1. **Syntax validation:**
```bash
docker compose config
```
2. **Service startup order:**
- Databases start first
- Health checks complete before dependent services start
- Apps start after all dependencies are healthy
3. **Network isolation:**
- Databases only on backend network
- Frontend services can't directly access databases
- Proper communication paths
4. **Resource limits:**
- Total CPU allocation < host CPUs
- Total memory allocation < host memory
- Leave headroom for host OS
5. **Security checks:**
- No hardcoded secrets in docker-compose.yml
- All services run as non-root where possible
- Read-only filesystems enabled
- Capabilities dropped
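A hedged sketch covering checks 1 and 5 (the grep pattern is a heuristic, not an exhaustive scanner):
```bash
# 1. Validate the merged configuration without printing it
docker compose config --quiet && echo "compose syntax OK"
# 5. Heuristic scan for hardcoded secrets that belong in .env or secrets/
grep -nEi '(password|secret|accountkey)[a-z_]*[=:] *[^$ ]' docker-compose.yml \
  || echo "no obvious hardcoded secrets"
```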
## Output Deliverables
Provide the following files:
1. `docker-compose.yml` - Main compose file
2. `docker-compose.override.yml` - Development overrides
3. `.env.template` - Environment variable template
4. `Makefile` - Common operations
5. `README.md` - Setup and usage documentation
6. `.dockerignore` - Files to exclude from builds
7. `secrets/` directory structure (gitignored)
## Common Azure Patterns
### Pattern 1: Simple Web + Database
- 1 App Service → web container
- 1 Azure SQL → SQL Server 2025 container
- 1 Storage Account → Azurite
### Pattern 2: Three-Tier Application
- Frontend App Service → React/Angular container
- Backend App Service → API container
- Azure SQL → SQL Server 2025 container
- Redis Cache → Redis container
- Storage Account → Azurite
### Pattern 3: Microservices
- Multiple App Services → Multiple containers
- Azure SQL + Cosmos DB → SQL Server + Cosmos emulator
- Service Bus → RabbitMQ
- Application Insights → Jaeger
- API Management → Nginx gateway
### Pattern 4: Full Azure Stack
- Multiple App Services (frontend/backend/admin)
- Azure SQL + PostgreSQL + MySQL
- Redis Cache
- Storage Account → Azurite
- Cosmos DB → Cosmos emulator
- Service Bus → Custom emulator
- Application Insights → Jaeger + Grafana
## Tips and Best Practices
1. **Start Simple:** Extract minimal viable stack first, add services incrementally
2. **Health Checks:** Ensure every service has working health checks
3. **Dependencies:** Use `depends_on` with `condition: service_healthy`
4. **Secrets Management:** Never commit .env files, provide .env.template
5. **Resource Limits:** Set realistic limits based on local development machine
6. **Network Design:** Isolate backend services from direct external access
7. **Documentation:** Document Azure→Docker mapping for team reference
8. **Version Control:** Exclude .env, secrets/, and volumes/ from git
## Troubleshooting
**Services fail to start:**
- Check Docker Desktop resource allocation
- Verify no port conflicts with other local services
- Review logs: `docker compose logs <service-name>`
**Database connection issues:**
- Verify connection strings use service names (not localhost)
- Check network configuration
- Ensure health checks pass before apps start
**Performance issues:**
- Increase Docker Desktop memory allocation
- Reduce number of services running simultaneously
- Use volume caching for macOS (`:cached`)
**Azurite connection failures:**
- Use standard development account key
- Ensure ports 10000-10002 are available
- Verify `--loose` flag for compatibility
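A quick sketch for spotting the port conflicts mentioned above (assumes a Linux host with `ss`; on macOS try `lsof -i :<port>`):
```bash
# Which required host ports are already taken?
for port in 1433 5432 6379 8081 10000 10001 10002; do
  ss -ltn sport = ":$port" | grep -q LISTEN && echo "port $port already in use"
done
```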
## Next Steps
After generating Docker Compose stack:
1. Test with `docker compose up`
2. Verify health checks: `docker compose ps`
3. Export databases using `/export-database` command
4. Generate Dockerfiles using `/generate-dockerfile` command
5. Document any Azure-specific features not replicated locally

plugin.lock.json Normal file

@@ -0,0 +1,65 @@
{
"$schema": "internal://schemas/plugin.lock.v1.json",
"pluginId": "gh:JosiahSiegel/claude-code-marketplace:plugins/azure-to-docker-master",
"normalized": {
"repo": null,
"ref": "refs/tags/v20251128.0",
"commit": "eec0111d2f32f2ab11c097357d0e30bd834a11d4",
"treeHash": "3eb4dd9bb31394a598deffe787b5423f7e381676c39cfa81de4010cbb1c61990",
"generatedAt": "2025-11-28T10:11:52.117000Z",
"toolVersion": "publish_plugins.py@0.2.0"
},
"origin": {
"remote": "git@github.com:zhongweili/42plugin-data.git",
"branch": "master",
"commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
"repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
},
"manifest": {
"name": "azure-to-docker-master",
"description": "Complete Azure-to-Docker migration system for local development with 2025 features. PROACTIVELY activate for: (1) ANY Azure-to-Docker migration task, (2) Azure infrastructure extraction and Docker Compose generation, (3) Azure service emulator setup (Azurite 2025-11-05 API, SQL Server 2025 latest, Cosmos DB vnext Linux, Service Bus official emulator), (4) Local development with Docker Compose Watch mode (hot reload), (5) Database export from Azure SQL/PostgreSQL/MySQL to Docker, (6) Dockerfile generation from Azure App Service configurations, (7) Multi-container orchestration with proper networking and dependencies, (8) Production-ready Docker Compose with health checks and runtime secrets, (9) Azure service mapping (App Service/SQL/Storage/Redis/Cosmos/Service Bus), (10) Development-to-production parity with Azure emulators. Provides: Azure resource extraction and analysis, complete Docker Compose generation with 2025 best practices, Azure emulator configuration (Azurite with latest API, SQL Server 2025 with Vector Search, Cosmos DB vnext Linux-based, official Service Bus emulator), Docker Compose Watch mode for hot reload, database export automation, App Service to Dockerfile conversion, service dependency mapping, network isolation patterns, volume management strategies, environment variable templating, health check implementation, resource limit configuration, security hardening (non-root users, read-only filesystems, capability drops, runtime-only secrets), development override patterns with watch mode, and Azure-to-Docker best practices. Ensures production-ready local development environments that mirror Azure infrastructure with instant hot reload capabilities.",
"version": "1.1.0"
},
"content": {
"files": [
{
"path": "README.md",
"sha256": "4f21cbb62a5e6ea2aa9cb3cf7a294c6c28bae7a64d220837e461ca7c47d51ade"
},
{
"path": "agents/azure-to-docker-expert.md",
"sha256": "11336633b10989e3e921de893ced6b869874d688b460b398611a5b1b6c5969fe"
},
{
"path": ".claude-plugin/plugin.json",
"sha256": "9556c74b958f9ded24eb68a85d950b4f95ac853c495260f8d84348aa1a6e95eb"
},
{
"path": "commands/extract-infrastructure.md",
"sha256": "c97cf4396fa058b25d9f2f771241c16f93f8cbc504a7f04b3d6ccae7a8963d9f"
},
{
"path": "commands/export-database.md",
"sha256": "5d9a5af5618c3a44a6f80df7e4c7ef8d96090c19f15094346214f3a57129149f"
},
{
"path": "skills/docker-watch-mode-2025.md",
"sha256": "3c1c2aba4bddb17339a6124d942e2a56097b69fd2f3e4a37e112bd7fc9a7d261"
},
{
"path": "skills/compose-patterns-2025.md",
"sha256": "86059c062ae39e645762fe1c244e47cae977f0f0cfa9d6b3bfed4fc654ca4ee7"
},
{
"path": "skills/azure-emulators-2025.md",
"sha256": "02c63e142ff8d5d09e51c48edfb2d3f3f4c5632700fa97799d8fe63e225047e2"
}
],
"dirSha256": "3eb4dd9bb31394a598deffe787b5423f7e381676c39cfa81de4010cbb1c61990"
},
"security": {
"scannedAt": null,
"scannerVersion": null,
"flags": []
}
}

skills/azure-emulators-2025.md Normal file

@@ -0,0 +1,643 @@
## 🚨 CRITICAL GUIDELINES
### Windows File Path Requirements
**MANDATORY: Always Use Backslashes on Windows for File Paths**
When using Edit or Write tools on Windows, you MUST use backslashes (`\`) in file paths, NOT forward slashes (`/`).
**Examples:**
- ❌ WRONG: `D:/repos/project/file.tsx`
- ✅ CORRECT: `D:\repos\project\file.tsx`
This applies to:
- Edit tool file_path parameter
- Write tool file_path parameter
- All file operations on Windows systems
### Documentation Guidelines
**NEVER create new documentation files unless explicitly requested by the user.**
- **Priority**: Update existing README.md files rather than creating new documentation
- **Repository cleanliness**: Keep repository root clean - only README.md unless user requests otherwise
- **Style**: Documentation should be concise, direct, and professional - avoid AI-generated tone
- **User preference**: Only create additional .md files when user specifically asks for documentation
---
# Azure Service Emulators for Local Development (2025)
## Overview
This skill provides comprehensive knowledge of Azure service emulators available as Docker containers for local development in 2025.
## Available Azure Emulators
### 1. Azurite (Azure Storage Emulator)
**Official replacement for Azure Storage Emulator (deprecated)**
**Image:** `mcr.microsoft.com/azure-storage/azurite:latest`
**Supported Services:**
- Blob Storage
- Queue Storage
- Table Storage
**Configuration:**
```yaml
services:
azurite:
image: mcr.microsoft.com/azure-storage/azurite:latest
container_name: azurite
command: azurite --blobHost 0.0.0.0 --queueHost 0.0.0.0 --tableHost 0.0.0.0 --loose
ports:
- "10000:10000" # Blob service
- "10001:10001" # Queue service
- "10002:10002" # Table service
volumes:
- azurite-data:/data
restart: unless-stopped
```
**Standard Development Connection String:**
```
DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://azurite:10000/devstoreaccount1;QueueEndpoint=http://azurite:10001/devstoreaccount1;TableEndpoint=http://azurite:10002/devstoreaccount1;
```
**Features:**
- Cross-platform (Windows, Linux, macOS)
- Written in Node.js/JavaScript
- Supports latest Azure Storage APIs
- Persistence to disk
- Compatible with Azure Storage Explorer
- `--loose` flag for relaxed validation (useful for local development)
**2025 API Version Support:**
- Latest release (v3.35.0) targets 2025-11-05 API version
- Blob service: 2025-11-05
- Queue service: 2025-11-05
- Table service: 2025-11-05 (preview)
- RA-GRS support for geo-redundant replication testing
**CLI Usage:**
```bash
# Install via npm (alternative to Docker)
npm install -g azurite
# Run with custom location
azurite --location /path/to/data --debug /path/to/debug.log
```
**Application Integration:**
```javascript
// Node.js with @azure/storage-blob
const { BlobServiceClient } = require('@azure/storage-blob');
const connectionString = process.env.AZURITE_CONNECTION_STRING;
const blobServiceClient = BlobServiceClient.fromConnectionString(connectionString);
```
```csharp
// .NET with Azure.Storage.Blobs
using Azure.Storage.Blobs;
var connectionString = Environment.GetEnvironmentVariable("AZURITE_CONNECTION_STRING");
var blobServiceClient = new BlobServiceClient(connectionString);
```
### 2. SQL Server 2025 (for Azure SQL Database)
**Latest SQL Server with AI features**
**Image:** `mcr.microsoft.com/mssql/server:2025-latest`
**2025 Features:**
- Built-in Vector Search for AI similarity queries
- Semantic Queries alongside traditional full-text search
- Optimized Locking (TID Locking, LAQ)
- Native JSON and RegEx support
- Fabric Mirroring integration
- REST API support
**Configuration:**
```yaml
services:
sqlserver:
image: mcr.microsoft.com/mssql/server:2025-latest
container_name: sqlserver
environment:
- ACCEPT_EULA=Y
- MSSQL_PID=Developer
- MSSQL_SA_PASSWORD_FILE=/run/secrets/sa_password
secrets:
- sa_password
ports:
- "1433:1433"
volumes:
- sqlserver-data:/var/opt/mssql
- ./init:/docker-entrypoint-initdb.d
healthcheck:
test: ["CMD-SHELL", "/opt/mssql-tools18/bin/sqlcmd -S localhost -U sa -P $$MSSQL_SA_PASSWORD -Q 'SELECT 1' -C || exit 1"]
interval: 10s
timeout: 3s
retries: 3
start_period: 10s
deploy:
resources:
limits:
cpus: '2'
memory: 4G
reservations:
cpus: '1'
memory: 2G
networks:
- backend
security_opt:
- no-new-privileges:true
restart: unless-stopped
secrets:
sa_password:
file: ./secrets/sa_password.txt
volumes:
sqlserver-data:
driver: local
```
**Connection String:**
```
Server=sqlserver;Database=MyApp;User Id=sa;Password=YourStrong!Passw0rd;TrustServerCertificate=True;
```
**Important Notes:**
- **Azure SQL Edge retired September 30, 2025** - Use SQL Server 2025 instead
- Use `mssql-tools18` for TLS 1.3 support (`-C` flag trusts certificate)
- Developer edition is free for non-production use
- Supports arm64 via Rosetta 2 on macOS
**Vector Search Example (2025 Feature):**
```sql
-- Create vector index for AI similarity search
CREATE TABLE Documents (
Id INT PRIMARY KEY,
Content NVARCHAR(MAX),
ContentVector VECTOR(1536)
);
-- Perform similarity search
SELECT TOP 10 Id, Content
FROM Documents
ORDER BY VECTOR_DISTANCE('cosine', ContentVector, @queryVector);
```
### 3. Cosmos DB Emulator
**NoSQL database emulator**
**Image:** `mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator:vnext-preview`
**2025 vnext-preview Features:**
- Entirely Linux-based emulator (cross-platform: x64, ARM64, Apple Silicon)
- No virtual machines required on Apple Silicon or Microsoft ARM
- Changefeed support (April 2025+)
- Document TTL (Time-to-Live) support
- OpenTelemetry V2 support
- API for NoSQL in gateway mode
- Currently in preview (active development)
**Configuration:**
```yaml
services:
cosmosdb:
image: mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator:vnext-preview
container_name: cosmosdb
environment:
- AZURE_COSMOS_EMULATOR_PARTITION_COUNT=10
- AZURE_COSMOS_EMULATOR_ENABLE_DATA_PERSISTENCE=true
- AZURE_COSMOS_EMULATOR_IP_ADDRESS_OVERRIDE=127.0.0.1
ports:
- "8081:8081" # Data Explorer
- "10251:10251"
- "10252:10252"
- "10253:10253"
- "10254:10254"
volumes:
- cosmos-data:/data/db
deploy:
resources:
limits:
cpus: '2'
memory: 4G
networks:
- backend
restart: unless-stopped
```
**Emulator Endpoint:**
```
https://localhost:8081
```
**Emulator Key (Standard):**
```
C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==
```
**Data Explorer:**
Access at `https://localhost:8081/_explorer/index.html`
**Application Integration:**
```javascript
// Node.js with @azure/cosmos
const { CosmosClient } = require('@azure/cosmos');
const endpoint = 'https://localhost:8081';
const key = process.env.COSMOS_EMULATOR_KEY;
const client = new CosmosClient({ endpoint, key });
```
```csharp
// .NET with Microsoft.Azure.Cosmos
using Microsoft.Azure.Cosmos;
var endpoint = "https://localhost:8081";
var key = Environment.GetEnvironmentVariable("COSMOS_EMULATOR_KEY");
var client = new CosmosClient(endpoint, key);
```
**Limitations:**
- Performance not representative of production
- Limited to single partition for local development
- Certificate trust required (self-signed)
### 4. Azure Service Bus Emulator
**Message queue emulator**
**2025 Official Azure Service Bus Emulator:**
- Released as official Docker container
- Linux-based emulator with cross-platform support
- Requires SQL Server Linux as dependency
- Supports AMQP protocol (port 5672)
- Connection string format with UseDevelopmentEmulator=true
- Current limitations: No JMS protocol, no partitioned entities, no AMQP Web Sockets
**Official Emulator Image:** `mcr.microsoft.com/azure-messaging/servicebus-emulator:latest`
**Connection String:**
```
Endpoint=sb://host.docker.internal;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=SAS_KEY_VALUE;UseDevelopmentEmulator=true;
```
**Note:** Given the limitations above, RabbitMQ remains a practical alternative for local development.
**Official Emulator (Preview):**
```yaml
services:
servicebus-sql:
image: mcr.microsoft.com/mssql/server:2025-latest
environment:
- ACCEPT_EULA=Y
- MSSQL_SA_PASSWORD=ServiceBus123!
volumes:
- servicebus-sql-data:/var/opt/mssql
servicebus:
image: mcr.microsoft.com/azure-messaging/servicebus-emulator:latest
depends_on:
- servicebus-sql
environment:
- ACCEPT_EULA=Y
- SQL_SERVER=servicebus-sql
      - MSSQL_SA_PASSWORD=ServiceBus123!
ports:
- "5672:5672" # AMQP
networks:
- backend
```
**RabbitMQ Alternative (Recommended):**
```yaml
services:
rabbitmq:
    image: rabbitmq:4-management-alpine
container_name: rabbitmq
ports:
- "5672:5672" # AMQP
- "15672:15672" # Management UI
environment:
- RABBITMQ_DEFAULT_USER=admin
- RABBITMQ_DEFAULT_PASS=admin123
volumes:
- rabbitmq-data:/var/lib/rabbitmq
healthcheck:
test: ["CMD", "rabbitmq-diagnostics", "ping"]
interval: 30s
timeout: 10s
retries: 3
networks:
- backend
restart: unless-stopped
```
### 5. PostgreSQL (Azure Database for PostgreSQL)
**Image:** `postgres:16.6-alpine`
**Configuration:**
```yaml
services:
postgres:
image: postgres:16.6-alpine
container_name: postgres
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD_FILE=/run/secrets/postgres_password
- POSTGRES_DB=myapp
secrets:
- postgres_password
ports:
- "5432:5432"
volumes:
- postgres-data:/var/lib/postgresql/data
- ./init:/docker-entrypoint-initdb.d
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 10s
timeout: 3s
retries: 3
deploy:
resources:
limits:
cpus: '1'
memory: 2G
networks:
- backend
security_opt:
- no-new-privileges:true
restart: unless-stopped
```
**Extensions:**
Match Azure PostgreSQL Flexible Server extensions:
```sql
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE EXTENSION IF NOT EXISTS "pg_trgm";
CREATE EXTENSION IF NOT EXISTS "pgcrypto";
```
### 6. MySQL (Azure Database for MySQL)
**Image:** `mysql:9.2`
**Configuration:**
```yaml
services:
mysql:
image: mysql:9.2
container_name: mysql
environment:
- MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql_root_password
- MYSQL_DATABASE=myapp
- MYSQL_USER=appuser
- MYSQL_PASSWORD_FILE=/run/secrets/mysql_password
secrets:
- mysql_root_password
- mysql_password
ports:
- "3306:3306"
volumes:
- mysql-data:/var/lib/mysql
- ./init:/docker-entrypoint-initdb.d
healthcheck:
test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-u", "root", "-p$$MYSQL_ROOT_PASSWORD"]
interval: 10s
timeout: 3s
retries: 3
deploy:
resources:
limits:
cpus: '1'
memory: 2G
networks:
- backend
security_opt:
- no-new-privileges:true
restart: unless-stopped
```
### 7. Redis (Azure Cache for Redis)
**Image:** `redis:7.4-alpine`
**Configuration:**
```yaml
services:
redis:
image: redis:7.4-alpine
container_name: redis
command: >
redis-server
--appendonly yes
--requirepass ${REDIS_PASSWORD}
--maxmemory 512mb
--maxmemory-policy allkeys-lru
ports:
- "6379:6379"
volumes:
- redis-data:/data
healthcheck:
test: ["CMD", "redis-cli", "--raw", "incr", "ping"]
interval: 10s
timeout: 3s
retries: 3
deploy:
resources:
limits:
cpus: '0.5'
memory: 512M
networks:
- backend
security_opt:
- no-new-privileges:true
restart: unless-stopped
```
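A short node-redis (v4) sketch for exercising the container; the `cache:greeting` key is illustrative:
```javascript
// Sketch: connect with node-redis v4 using the password from the compose file.
const { createClient } = require("redis");
async function main() {
  const client = createClient({
    url: `redis://:${process.env.REDIS_PASSWORD}@localhost:6379`,
  });
  client.on("error", (err) => console.error("redis error:", err));
  await client.connect();
  await client.set("cache:greeting", "hello", { EX: 60 }); // 60s TTL, cache-style usage
  console.log(await client.get("cache:greeting"));
  await client.quit();
}
main().catch(console.error);
```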
### 8. Application Insights Alternative (Jaeger + Grafana)
**OpenTelemetry-compatible observability stack**
```yaml
services:
jaeger:
image: jaegertracing/all-in-one:latest
container_name: jaeger
ports:
- "16686:16686" # UI
- "14268:14268" # HTTP collector
- "4317:4317" # OTLP gRPC
- "4318:4318" # OTLP HTTP
environment:
- COLLECTOR_OTLP_ENABLED=true
networks:
- monitoring
restart: unless-stopped
grafana:
image: grafana/grafana:latest
container_name: grafana
ports:
- "3001:3000"
environment:
- GF_SECURITY_ADMIN_PASSWORD=admin
volumes:
- grafana-data:/var/lib/grafana
networks:
- monitoring
restart: unless-stopped
```
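To wire an app into this stack, here is a hedged sketch using the OpenTelemetry Node SDK; the package choices and the `backend` service name are assumptions:
```javascript
// Sketch: point an app's OpenTelemetry SDK at the Jaeger OTLP endpoint above.
// Assumed packages: @opentelemetry/sdk-node, @opentelemetry/exporter-trace-otlp-http.
const { NodeSDK } = require("@opentelemetry/sdk-node");
const { OTLPTraceExporter } = require("@opentelemetry/exporter-trace-otlp-http");
const sdk = new NodeSDK({
  serviceName: "backend", // illustrative service name
  traceExporter: new OTLPTraceExporter({
    url: "http://localhost:4318/v1/traces", // OTLP HTTP port published by jaeger
  }),
});
sdk.start(); // traces then appear in the Jaeger UI at http://localhost:16686
```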
## Complete Azure Stack Example
```yaml
services:
# Application Services
frontend:
build: ./frontend
ports: ["3000:3000"]
networks: [frontend]
depends_on:
backend:
condition: service_healthy
backend:
build: ./backend
ports: ["8080:8080"]
networks: [frontend, backend]
depends_on:
sqlserver:
condition: service_healthy
redis:
condition: service_started
azurite:
condition: service_started
# Databases
sqlserver:
image: mcr.microsoft.com/mssql/server:2025-latest
environment:
- ACCEPT_EULA=Y
- MSSQL_PID=Developer
- MSSQL_SA_PASSWORD=${MSSQL_SA_PASSWORD}
ports: ["1433:1433"]
volumes: [sqlserver-data:/var/opt/mssql]
networks: [backend]
healthcheck:
test: ["CMD-SHELL", "/opt/mssql-tools18/bin/sqlcmd -S localhost -U sa -P $$MSSQL_SA_PASSWORD -Q 'SELECT 1' -C"]
interval: 10s
postgres:
image: postgres:16.6-alpine
environment:
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
ports: ["5432:5432"]
volumes: [postgres-data:/var/lib/postgresql/data]
networks: [backend]
healthcheck:
test: ["CMD-SHELL", "pg_isready"]
interval: 10s
# Cache
redis:
image: redis:7.4-alpine
command: redis-server --requirepass ${REDIS_PASSWORD}
ports: ["6379:6379"]
volumes: [redis-data:/data]
networks: [backend]
# Storage
azurite:
image: mcr.microsoft.com/azure-storage/azurite:latest
command: azurite --blobHost 0.0.0.0 --queueHost 0.0.0.0 --tableHost 0.0.0.0 --loose
ports: ["10000:10000", "10001:10001", "10002:10002"]
volumes: [azurite-data:/data]
networks: [backend]
# NoSQL
cosmosdb:
image: mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator:vnext-preview
ports: ["8081:8081"]
volumes: [cosmos-data:/data/db]
networks: [backend]
# Monitoring
jaeger:
image: jaegertracing/all-in-one:latest
ports: ["16686:16686", "4317:4317"]
networks: [monitoring]
networks:
frontend:
driver: bridge
backend:
driver: bridge
    # internal: true omitted here: it would block the host-published database ports (1433/5432/6379)
monitoring:
driver: bridge
volumes:
sqlserver-data:
postgres-data:
redis-data:
azurite-data:
cosmos-data:
```
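As a sanity check after `docker compose up`, a small sketch that probes each published emulator port (ports follow the example above):
```javascript
// Sketch: quick smoke test that every emulator in the stack is listening.
const net = require("net");
const services = {
  sqlserver: 1433,
  postgres: 5432,
  redis: 6379,
  "azurite-blob": 10000,
  cosmosdb: 8081,
  "jaeger-ui": 16686,
};
function check(name, port) {
  return new Promise((resolve) => {
    const socket = net.createConnection({ host: "localhost", port, timeout: 2000 });
    socket.on("connect", () => { socket.destroy(); resolve(`${name}: up`); });
    socket.on("error", () => resolve(`${name}: DOWN`));
    socket.on("timeout", () => { socket.destroy(); resolve(`${name}: DOWN (timeout)`); });
  });
}
Promise.all(Object.entries(services).map(([name, port]) => check(name, port)))
  .then((results) => results.forEach((r) => console.log(r)));
```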
## Best Practices
1. **Use health checks** - Ensure dependencies are ready before app starts
2. **Network isolation** - Keep databases on internal backend network
3. **Resource limits** - Prevent emulators from consuming all system resources
4. **Persistence** - Use named volumes for data that should survive restarts
5. **Secrets management** - Use Docker secrets or environment files
6. **Version pinning** - Use specific tags, not `latest`
7. **Security hardening** - Drop capabilities, run as non-root where possible
8. **Documentation** - Document differences between emulators and Azure services
## Known Limitations
- **Performance**: Emulators are slower than Azure services
- **Scale**: Single-instance only, no clustering
- **Features**: Some Azure-specific features unavailable locally
- **SSL/TLS**: Self-signed certificates require trust configuration
- **Azure AD**: Authentication not replicated locally
- **Networking**: VNets, Private Endpoints not available
## Migration Checklist
When moving from Azure to Docker emulators (a connection-string sketch follows the checklist):
- [ ] Replace Azure Storage connection strings with Azurite
- [ ] Update SQL Server connection strings (remove Azure-specific options)
- [ ] Configure Cosmos DB with emulator endpoint and key
- [ ] Replace Service Bus with RabbitMQ or emulator
- [ ] Set up alternative for Application Insights
- [ ] Update authentication (remove Azure AD dependencies)
- [ ] Configure network isolation
- [ ] Test all integrations
- [ ] Document feature parity gaps
- [ ] Create init scripts for sample data
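A hedged sketch of the connection-string switch from the first checklist items; the `USE_LOCAL_EMULATORS` flag and variable names are illustrative, not required conventions:
```javascript
// Sketch: pick emulator vs. Azure connection strings from one env flag.
const useLocal = process.env.USE_LOCAL_EMULATORS === "true";
module.exports = {
  storageConnectionString: useLocal
    ? "UseDevelopmentStorage=true" // Azurite's well-known shortcut string
    : process.env.AZURE_STORAGE_CONNECTION_STRING,
  sqlServer: useLocal ? "localhost,1433" : process.env.AZURE_SQL_SERVER,
  cosmosEndpoint: useLocal ? "https://localhost:8081" : process.env.COSMOS_ENDPOINT,
};
```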
## References
- [Azurite Documentation](https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azurite)
- [SQL Server 2025 in Docker](https://learn.microsoft.com/en-us/sql/linux/quickstart-install-connect-docker)
- [Cosmos DB Emulator](https://learn.microsoft.com/en-us/azure/cosmos-db/local-emulator)
- [Service Bus Emulator](https://learn.microsoft.com/en-us/azure/service-bus-messaging/test-locally-with-service-bus-emulator)

---
## 🚨 CRITICAL GUIDELINES
### Windows File Path Requirements
**MANDATORY: Always Use Backslashes on Windows for File Paths**
When using Edit or Write tools on Windows, you MUST use backslashes (`\`) in file paths, NOT forward slashes (`/`).
**Examples:**
- ❌ WRONG: `D:/repos/project/file.tsx`
- ✅ CORRECT: `D:\repos\project\file.tsx`
This applies to:
- Edit tool file_path parameter
- Write tool file_path parameter
- All file operations on Windows systems
### Documentation Guidelines
**NEVER create new documentation files unless explicitly requested by the user.**
- **Priority**: Update existing README.md files rather than creating new documentation
- **Repository cleanliness**: Keep repository root clean - only README.md unless user requests otherwise
- **Style**: Documentation should be concise, direct, and professional - avoid AI-generated tone
- **User preference**: Only create additional .md files when user specifically asks for documentation
---
# Docker Compose Patterns for Production (2025)
## Overview
This skill documents production-ready Docker Compose patterns and best practices for 2025, based on official Docker documentation and industry standards.
## File Format Changes (2025)
**IMPORTANT:** The top-level `version` field is **obsolete** in Docker Compose v2; current releases warn about it and ignore it.
**Correct (2025):**
```yaml
services:
app:
image: myapp:latest
```
**Incorrect (deprecated):**
```yaml
version: '3.8' # DO NOT USE
services:
app:
image: myapp:latest
```
## Multiple Environment Strategy
### Pattern: Base + Environment Overrides
**compose.yaml (base):**
```yaml
services:
app:
build:
context: ./app
dockerfile: Dockerfile
environment:
- NODE_ENV=production
restart: unless-stopped
```
**compose.override.yaml (development - auto-loaded):**
```yaml
services:
app:
build:
target: development
volumes:
- ./app/src:/app/src:cached
environment:
- NODE_ENV=development
- DEBUG=*
ports:
- "9229:9229" # Debugger
```
**compose.prod.yaml (production - explicit):**
```yaml
services:
app:
build:
target: production
deploy:
replicas: 3
resources:
limits:
cpus: '1'
memory: 512M
restart_policy:
condition: on-failure
max_attempts: 3
```
**Usage:**
```bash
# Development (auto-loads compose.override.yaml)
docker compose up
# Production
docker compose -f compose.yaml -f compose.prod.yaml up -d
# CI/CD
docker compose -f compose.yaml -f compose.ci.yaml up --abort-on-container-exit
```
## Environment Variable Management
### Pattern: .env Files per Environment
**.env.template (committed to git):**
```bash
# Database
DB_HOST=sqlserver
DB_PORT=1433
DB_NAME=myapp
DB_USER=sa
# DB_PASSWORD= (set in actual .env)
# Redis
REDIS_HOST=redis
REDIS_PORT=6379
# REDIS_PASSWORD= (set in actual .env)
# Application
NODE_ENV=production
LOG_LEVEL=info
```
**.env.dev:**
```bash
DB_PASSWORD=Dev!Pass123
REDIS_PASSWORD=redis-dev-123
NODE_ENV=development
LOG_LEVEL=debug
```
**.env.prod:**
```bash
DB_PASSWORD=${PROD_DB_PASSWORD} # From CI/CD
REDIS_PASSWORD=${PROD_REDIS_PASSWORD}
NODE_ENV=production
LOG_LEVEL=info
```
**Load specific environment:**
```bash
docker compose --env-file .env.dev up
```
## Security Patterns
### Pattern: Run as Non-Root User
```yaml
services:
app:
image: node:20-alpine
user: "1000:1000" # UID:GID
read_only: true
tmpfs:
- /tmp
- /app/.cache
cap_drop:
- ALL
cap_add:
- NET_BIND_SERVICE # Only if binding to ports < 1024
security_opt:
- no-new-privileges:true
```
**Create user in Dockerfile:**
```dockerfile
FROM node:20-alpine
# Create app user
RUN addgroup -g 1000 appuser && \
adduser -D -u 1000 -G appuser appuser
# Set ownership
WORKDIR /app
COPY --chown=appuser:appuser . .
USER appuser
```
### Pattern: Secrets Management
**Docker Swarm secrets (production):**
```yaml
services:
app:
secrets:
- db_password
- api_key
secrets:
db_password:
file: ./secrets/db_password.txt
api_key:
external: true # Managed by Swarm
```
**Access secrets in application:**
```javascript
// Read from /run/secrets/
const fs = require('fs');
const dbPassword = fs.readFileSync('/run/secrets/db_password', 'utf8').trim();
```
**Development alternative (environment):**
```yaml
services:
app:
environment:
- DB_PASSWORD_FILE=/run/secrets/db_password
```
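A small helper makes the `*_FILE` convention uniform in application code (sketch; the fallback to a plain variable is a development convenience):
```javascript
// Sketch: resolve VAR or VAR_FILE, preferring the secret file when present.
const fs = require("fs");
function getSecret(name) {
  const filePath = process.env[`${name}_FILE`];
  if (filePath) {
    return fs.readFileSync(filePath, "utf8").trim();
  }
  return process.env[name]; // plain env var fallback for local development
}
const dbPassword = getSecret("DB_PASSWORD"); // reads /run/secrets/db_password via DB_PASSWORD_FILE
```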
## Health Check Patterns
### Pattern: Comprehensive Health Checks
**HTTP endpoint:**
```yaml
services:
web:
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
interval: 30s
timeout: 3s
retries: 3
start_period: 40s
```
**Database ping:**
```yaml
services:
postgres:
healthcheck:
test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER"]
interval: 10s
timeout: 3s
retries: 3
```
**Custom script:**
```yaml
services:
app:
healthcheck:
test: ["CMD", "node", "/app/scripts/healthcheck.js"]
interval: 30s
timeout: 3s
retries: 3
start_period: 40s
```
**healthcheck.js:**
```javascript
const http = require('http');
const options = {
hostname: 'localhost',
port: 8080,
path: '/health',
timeout: 2000
};
const req = http.request(options, (res) => {
process.exit(res.statusCode === 200 ? 0 : 1);
});
req.on('error', () => process.exit(1));
req.on('timeout', () => {
req.destroy();
process.exit(1);
});
req.end();
```
## Dependency Management
### Pattern: Ordered Startup with Conditions
```yaml
services:
web:
depends_on:
database:
condition: service_healthy
redis:
condition: service_started
migration:
condition: service_completed_successfully
database:
healthcheck:
test: ["CMD-SHELL", "pg_isready"]
interval: 10s
redis:
# No health check needed, just wait for start
migration:
image: myapp:latest
command: npm run migrate
restart: "no" # Run once
depends_on:
database:
condition: service_healthy
```
## Network Isolation Patterns
### Pattern: Three-Tier Network Architecture
```yaml
services:
nginx:
image: nginx:alpine
networks:
- frontend
ports:
- "80:80"
api:
build: ./api
networks:
- frontend
- backend
database:
image: postgres:16-alpine
networks:
- backend # No frontend access
networks:
frontend:
driver: bridge
backend:
driver: bridge
internal: true # No external access
```
### Pattern: Service-Specific Networks
```yaml
services:
web-app:
networks:
- public
- app-network
api:
networks:
- app-network
- data-network
postgres:
networks:
- data-network
redis:
networks:
- data-network
networks:
public:
driver: bridge
app-network:
driver: bridge
internal: true
data-network:
driver: bridge
internal: true
```
## Volume Patterns
### Pattern: Named Volumes for Persistence
```yaml
services:
database:
volumes:
- db-data:/var/lib/postgresql/data # Persistent data
- ./init:/docker-entrypoint-initdb.d:ro # Init scripts (read-only)
- db-logs:/var/log/postgresql # Logs
volumes:
db-data:
driver: local
driver_opts:
type: none
o: bind
device: /mnt/data/postgres # Host path
db-logs:
driver: local
```
### Pattern: Development Bind Mounts
```yaml
services:
app:
volumes:
- ./src:/app/src:cached # macOS optimization
- /app/node_modules # Don't overwrite installed modules
- app-cache:/app/.cache # Named volume for cache
```
**Volume mount options:**
- `:ro` - Read-only
- `:rw` - Read-write (default)
- `:cached` - macOS performance optimization (host authoritative)
- `:delegated` - macOS performance optimization (container authoritative)
- `:z` - SELinux shared label (multiple containers may share the volume)
- `:Z` - SELinux private label (single container only)
## Resource Management Patterns
### Pattern: CPU and Memory Limits
```yaml
services:
app:
deploy:
resources:
limits:
cpus: '1.0'
memory: 512M
reservations:
cpus: '0.5'
memory: 256M
```
**Calculate total resources:**
```yaml
# 3 app replicas + database + redis
services:
app:
deploy:
replicas: 3
resources:
limits:
cpus: '0.5' # 3 x 0.5 = 1.5 CPUs
memory: 512M # 3 x 512M = 1.5GB
database:
deploy:
resources:
limits:
cpus: '2' # 2 CPUs
memory: 4G # 4GB
redis:
deploy:
resources:
limits:
cpus: '0.5' # 0.5 CPUs
memory: 512M # 512MB
# Total: 4 CPUs, 6GB RAM minimum
```
## Logging Patterns
### Pattern: Centralized Logging
```yaml
services:
app:
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
compress: "true"
labels: "app,environment"
```
**12-factor note:** the application itself should write only to stdout/stderr; the logging driver configured above then handles collection, rotation, and shipping without any log files inside the container.
**View logs:**
```bash
docker compose logs -f app
docker compose logs --since 30m app
docker compose logs --tail 100 app
```
## Init Container Pattern
### Pattern: Database Migration
```yaml
services:
migration:
image: myapp:latest
command: npm run migrate
depends_on:
database:
condition: service_healthy
restart: "no" # Run once
networks:
- backend
app:
image: myapp:latest
depends_on:
migration:
condition: service_completed_successfully
networks:
- backend
```
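`service_completed_successfully` keys off the container's exit code, so the migration entrypoint must propagate failures; a sketch (the migration runner itself is a placeholder):
```javascript
// Sketch: a migrate entrypoint whose exit code drives service_completed_successfully.
async function runMigrations() {
  // Illustrative placeholder: call your real migration tool here
  // (e.g. knex.migrate.latest(), umzug.up(), or a raw SQL runner).
  console.log("applying migrations...");
}
runMigrations()
  .then(() => {
    console.log("migrations complete");
    process.exit(0); // compose marks the service completed successfully
  })
  .catch((err) => {
    console.error("migration failed:", err);
    process.exit(1); // dependent services will not start
  });
```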
## YAML Anchors and Aliases
### Pattern: Reusable Configuration
```yaml
x-common-app-config: &common-app
restart: unless-stopped
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
security_opt:
- no-new-privileges:true
cap_drop:
- ALL
cap_add:
- NET_BIND_SERVICE
services:
app1:
<<: *common-app
build: ./app1
ports:
- "8001:8080"
app2:
<<: *common-app
build: ./app2
ports:
- "8002:8080"
app3:
<<: *common-app
build: ./app3
ports:
- "8003:8080"
```
### Pattern: Environment-Specific Overrides
```yaml
x-logging: &default-logging
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
x-resources: &default-resources
limits:
cpus: '1'
memory: 512M
reservations:
cpus: '0.5'
memory: 256M
services:
app:
logging: *default-logging
deploy:
resources: *default-resources
```
## Port Binding Patterns
### Pattern: Security-First Port Binding
```yaml
services:
# Public services
web:
ports:
- "80:8080"
- "443:8443"
# Development only (localhost binding)
debug:
ports:
- "127.0.0.1:9229:9229" # Debugger only accessible from host
# Environment-based binding
app:
ports:
- "${DOCKER_WEB_PORT_FORWARD:-127.0.0.1:8000}:8000"
```
**Environment control:**
```bash
# Development (.env.dev)
DOCKER_WEB_PORT_FORWARD=127.0.0.1:8000 # Localhost only
# Production (.env.prod)
DOCKER_WEB_PORT_FORWARD=8000 # All interfaces
```
## Restart Policy Patterns
```yaml
services:
# Always restart (production services)
app:
restart: always
# Restart unless manually stopped (most common)
database:
restart: unless-stopped
# Never restart (one-time tasks)
migration:
restart: "no"
# Restart on failure only (with Swarm)
worker:
deploy:
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
window: 120s
```
## Validation and Testing
### Pattern: Pre-Deployment Validation
```bash
#!/bin/bash
set -euo pipefail
echo "Validating Compose syntax..."
docker compose config > /dev/null
echo "Building images..."
docker compose build
echo "Running security scan..."
for service in $(docker compose config --services); do
image=$(docker compose config | yq ".services.$service.image")
if [ -n "$image" ]; then
docker scout cves "$image" || true
fi
done
echo "Starting services..."
docker compose up -d
echo "Checking health..."
sleep 10
docker compose ps
echo "Running smoke tests..."
curl -f http://localhost:8080/health || exit 1
echo "✓ All checks passed"
```
## Complete Production Example
```yaml
# Modern Compose format (no top-level version field)
x-common-service: &common-service
restart: unless-stopped
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
security_opt:
- no-new-privileges:true
services:
nginx:
<<: *common-service
image: nginxinc/nginx-unprivileged:alpine
ports:
- "80:8080"
volumes:
- ./nginx/conf.d:/etc/nginx/conf.d:ro
networks:
- frontend
depends_on:
api:
condition: service_healthy
healthcheck:
test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost:8080/health"]
interval: 30s
api:
<<: *common-service
build:
context: ./api
dockerfile: Dockerfile
target: production
user: "1000:1000"
read_only: true
tmpfs:
- /tmp
cap_drop:
- ALL
cap_add:
- NET_BIND_SERVICE
networks:
- frontend
- backend
depends_on:
migration:
condition: service_completed_successfully
redis:
condition: service_started
env_file:
- .env
healthcheck:
test: ["CMD", "node", "healthcheck.js"]
interval: 30s
start_period: 40s
deploy:
resources:
limits:
cpus: '1'
memory: 512M
migration:
image: myapp:latest
command: npm run migrate
restart: "no"
networks:
- backend
depends_on:
postgres:
condition: service_healthy
postgres:
<<: *common-service
image: postgres:16-alpine
environment:
- POSTGRES_PASSWORD_FILE=/run/secrets/postgres_password
secrets:
- postgres_password
volumes:
- postgres-data:/var/lib/postgresql/data
networks:
- backend
healthcheck:
test: ["CMD-SHELL", "pg_isready"]
interval: 10s
deploy:
resources:
limits:
cpus: '1'
memory: 2G
redis:
<<: *common-service
image: redis:7.4-alpine
command: redis-server --requirepass ${REDIS_PASSWORD}
volumes:
- redis-data:/data
networks:
- backend
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
networks:
frontend:
driver: bridge
backend:
driver: bridge
internal: true
volumes:
postgres-data:
driver: local
redis-data:
driver: local
secrets:
postgres_password:
file: ./secrets/postgres_password.txt
```
## Common Mistakes to Avoid
1. **Using `version` field** - Obsolete in 2025
2. **No health checks** - Leads to race conditions
3. **Running as root** - Security risk
4. **No resource limits** - Can exhaust host resources
5. **Hardcoded secrets** - Use secrets or environment variables
6. **No logging limits** - Disk space issues
7. **Bind mounts in production** - Use named volumes
8. **Missing restart policies** - Services don't recover
9. **No network isolation** - All services can talk to each other
10. **Not using .dockerignore** - Larger build contexts
## Troubleshooting Commands
```bash
# Validate syntax
docker compose config
# List services from the merged configuration
docker compose config --services
# List running projects and the compose files they use
docker compose ls
# Render the config without variable interpolation
docker compose config --no-interpolate
# Check service dependencies
docker compose config | yq '.services[].depends_on'
# View resource usage
docker stats $(docker compose ps -q)
# Debug startup issues
docker compose up --no-deps service-name
# Force recreate
docker compose up --force-recreate service-name
```
## References
- [Docker Compose Documentation](https://docs.docker.com/compose/)
- [Compose v2.42+ Release Notes](https://github.com/docker/compose/releases)
- [Best Practices](https://docs.docker.com/compose/how-tos/production/)

---
# Docker Compose Watch Mode (GA since Compose v2.22)
Docker Compose Watch enables automatic hot reload during local development by synchronizing file changes instantly without manual container restarts.
## Three Watch Actions
### 1. sync - Hot Reload
For frameworks with hot reload (React, Next.js, Node.js, Flask).
Copies changed files directly into running container.
### 2. rebuild - Compilation
For compiled languages (Go, Rust, Java) or dependency changes.
Rebuilds image and recreates container when files change.
### 3. sync+restart - Config Changes
For configuration files requiring restart.
Syncs files and restarts container.
## Usage
```yaml
services:
frontend:
build: ./frontend
develop:
watch:
- action: sync
path: ./frontend/src
target: /app/src
ignore: [node_modules/, .git/]
- action: rebuild
path: ./frontend/package.json
```
Start with: `docker compose up --watch`
## Benefits
- Better performance than bind mounts
- No file permission issues
- Intelligent syncing
- Supports rebuild capability
- Works on all platforms