commands/export-database.md

---
description: Export Azure SQL/PostgreSQL/MySQL databases for local Docker containers
---

## 🚨 CRITICAL GUIDELINES

### Windows File Path Requirements

**MANDATORY: Always Use Backslashes on Windows for File Paths**

When using Edit or Write tools on Windows, you MUST use backslashes (`\`) in file paths, NOT forward slashes (`/`).

**Examples:**
- ❌ WRONG: `D:/repos/project/file.tsx`
- ✅ CORRECT: `D:\repos\project\file.tsx`

This applies to:
- Edit tool file_path parameter
- Write tool file_path parameter
- All file operations on Windows systems

### Documentation Guidelines

**NEVER create new documentation files unless explicitly requested by the user.**

- **Priority**: Update existing README.md files rather than creating new documentation
- **Repository cleanliness**: Keep repository root clean - only README.md unless user requests otherwise
- **Style**: Documentation should be concise, direct, and professional - avoid AI-generated tone
- **User preference**: Only create additional .md files when user specifically asks for documentation

---

# Export Azure Databases to Docker

## Purpose
Export databases from Azure (SQL Database, PostgreSQL, MySQL) and import them into local Docker containers for development.

## Prerequisites

**Required tools:**
- Azure CLI (`az`) installed and authenticated
- Docker Desktop 4.40+ with Compose v2.42+
- Database-specific CLI tools:
  - SQL Server: `sqlcmd` (mssql-tools18)
  - PostgreSQL: `psql`, `pg_dump`
  - MySQL: `mysql`, `mysqldump`

**Azure access:**
- Read permissions on databases
- Network access (firewall rules configured)
- Valid credentials
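
To confirm the required tools are available before starting, a minimal check (adjust the list to the engines you actually use):

```bash
# Verify required CLI tools are on PATH
for tool in az docker sqlcmd psql pg_dump mysql mysqldump; do
  command -v "$tool" >/dev/null 2>&1 && echo "OK: $tool" || echo "MISSING: $tool"
done

# Confirm the Azure CLI is authenticated
az account show --query name --output tsv
```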

## Step 1: Configure Azure Firewall Rules

**Add your IP to Azure SQL firewall:**
```bash
# Get your public IP
MY_IP=$(curl -s ifconfig.me)

# Add firewall rule (SQL Server)
az sql server firewall-rule create \
  --resource-group <resource-group> \
  --server <server-name> \
  --name AllowMyIP \
  --start-ip-address $MY_IP \
  --end-ip-address $MY_IP

# PostgreSQL
az postgres flexible-server firewall-rule create \
  --resource-group <resource-group> \
  --name <server-name> \
  --rule-name AllowMyIP \
  --start-ip-address $MY_IP \
  --end-ip-address $MY_IP

# MySQL
az mysql flexible-server firewall-rule create \
  --resource-group <resource-group> \
  --name <server-name> \
  --rule-name AllowMyIP \
  --start-ip-address $MY_IP \
  --end-ip-address $MY_IP
```

## Step 2: Get Connection Information

**Azure SQL Database:**
```bash
# Get connection string
az sql db show-connection-string \
  --client sqlcmd \
  --name <database-name> \
  --server <server-name>

# Output format:
# sqlcmd -S <server-name>.database.windows.net -d <database-name> -U <username> -P <password> -N -l 30
```

**PostgreSQL:**
```bash
# Get server details
az postgres flexible-server show \
  --resource-group <resource-group> \
  --name <server-name> \
  --query "{fqdn:fullyQualifiedDomainName, version:version}" \
  --output table

# Connection string format:
# postgresql://<username>:<password>@<server-name>.postgres.database.azure.com:5432/<database-name>?sslmode=require
```

**MySQL:**
```bash
# Get server details
az mysql flexible-server show \
  --resource-group <resource-group> \
  --name <server-name> \
  --query "{fqdn:fullyQualifiedDomainName, version:version}" \
  --output table

# Connection string format:
# mysql://<username>:<password>@<server-name>.mysql.database.azure.com:3306/<database-name>?ssl-mode=REQUIRED
```

## Step 3: Export Database

### Azure SQL Database

**Option 1: Using Azure CLI (BACPAC):**
```bash
# Export to Azure Storage
az sql db export \
  --resource-group <resource-group> \
  --server <server-name> \
  --name <database-name> \
  --admin-user <username> \
  --admin-password <password> \
  --storage-key-type StorageAccessKey \
  --storage-key <storage-key> \
  --storage-uri https://<storage-account>.blob.core.windows.net/<container>/<database-name>.bacpac

# Download BACPAC file
az storage blob download \
  --account-name <storage-account> \
  --container-name <container> \
  --name <database-name>.bacpac \
  --file ./<database-name>.bacpac
```

**Option 2: Using sqlcmd (SQL script):**
```bash
# Install mssql-tools18 if needed
# Windows: winget install Microsoft.SqlCmd
# Linux: https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-setup-tools

# List tables to confirm connectivity (sqlcmd alone cannot script the full database)
sqlcmd -S <server-name>.database.windows.net \
  -d <database-name> \
  -U <username> \
  -P <password> \
  -C \
  -Q "SELECT * FROM INFORMATION_SCHEMA.TABLES" \
  -o schema-info.txt

# For a full schema + data script, use SQL Server Management Studio or Azure Data Studio
# Export wizard: Tasks → Generate Scripts → Script entire database
```

**Option 3: Using SqlPackage (recommended for large databases):**
```bash
# Install SqlPackage: https://learn.microsoft.com/en-us/sql/tools/sqlpackage/sqlpackage-download

# Export as BACPAC
sqlpackage /Action:Export \
  /SourceServerName:<server-name>.database.windows.net \
  /SourceDatabaseName:<database-name> \
  /SourceUser:<username> \
  /SourcePassword:<password> \
  /SourceTrustServerCertificate:True \
  /TargetFile:./<database-name>.bacpac

# Or export as DACPAC (schema only)
sqlpackage /Action:Extract \
  /SourceServerName:<server-name>.database.windows.net \
  /SourceDatabaseName:<database-name> \
  /SourceUser:<username> \
  /SourcePassword:<password> \
  /SourceTrustServerCertificate:True \
  /TargetFile:./<database-name>.dacpac
```

### PostgreSQL

**Using pg_dump:**
```bash
# Export entire database (schema + data)
pg_dump -h <server-name>.postgres.database.azure.com \
  -U <username> \
  -d <database-name> \
  -F c \
  -f <database-name>.dump

# Or as SQL script
pg_dump -h <server-name>.postgres.database.azure.com \
  -U <username> \
  -d <database-name> \
  --clean \
  --if-exists \
  -f <database-name>.sql

# Schema only
pg_dump -h <server-name>.postgres.database.azure.com \
  -U <username> \
  -d <database-name> \
  --schema-only \
  -f <database-name>-schema.sql

# Data only
pg_dump -h <server-name>.postgres.database.azure.com \
  -U <username> \
  -d <database-name> \
  --data-only \
  -f <database-name>-data.sql
```

### MySQL

**Using mysqldump:**
```bash
# Export entire database
mysqldump -h <server-name>.mysql.database.azure.com \
  -u <username> \
  -p<password> \
  --ssl-mode=REQUIRED \
  --databases <database-name> \
  --single-transaction \
  --routines \
  --triggers \
  > <database-name>.sql

# Schema only
mysqldump -h <server-name>.mysql.database.azure.com \
  -u <username> \
  -p<password> \
  --ssl-mode=REQUIRED \
  --no-data \
  --databases <database-name> \
  > <database-name>-schema.sql

# Data only
mysqldump -h <server-name>.mysql.database.azure.com \
  -u <username> \
  -p<password> \
  --ssl-mode=REQUIRED \
  --no-create-info \
  --databases <database-name> \
  > <database-name>-data.sql
```

## Step 4: Prepare Local Docker Containers

Ensure Docker Compose is configured with database services.

**SQL Server container:**
```yaml
services:
  sqlserver:
    image: mcr.microsoft.com/mssql/server:2025-latest
    environment:
      - ACCEPT_EULA=Y
      - MSSQL_PID=Developer
      - MSSQL_SA_PASSWORD=YourStrong!Passw0rd
    ports:
      - "1433:1433"
    volumes:
      - sqlserver-data:/var/opt/mssql
      - ./init:/docker-entrypoint-initdb.d
    healthcheck:
      test: ["CMD-SHELL", "/opt/mssql-tools18/bin/sqlcmd -S localhost -U sa -P $$MSSQL_SA_PASSWORD -Q 'SELECT 1' -C || exit 1"]
      interval: 10s
      timeout: 3s
      retries: 3
```

**PostgreSQL container:**
```yaml
services:
  postgres:
    image: postgres:16-alpine
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres123
      - POSTGRES_DB=myapp
    ports:
      - "5432:5432"
    volumes:
      - postgres-data:/var/lib/postgresql/data
      - ./init:/docker-entrypoint-initdb.d
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 3s
      retries: 3
```

**MySQL container:**
```yaml
services:
  mysql:
    image: mysql:8.4
    environment:
      - MYSQL_ROOT_PASSWORD=mysql123
      - MYSQL_DATABASE=myapp
    ports:
      - "3306:3306"
    volumes:
      - mysql-data:/var/lib/mysql
      - ./init:/docker-entrypoint-initdb.d
    healthcheck:
      test: ["CMD-SHELL", "mysqladmin ping -h localhost -u root -p$$MYSQL_ROOT_PASSWORD"]
      interval: 10s
      timeout: 3s
      retries: 3
```

## Step 5: Start Docker Containers

```bash
# Start database containers
docker compose up -d sqlserver postgres mysql

# Wait for health checks to pass
docker compose ps

# Check logs
docker compose logs sqlserver
```

## Step 6: Import Data into Docker Containers

### SQL Server

**Using sqlcmd:**
```bash
# Import SQL script
sqlcmd -S localhost,1433 \
  -U sa \
  -P 'YourStrong!Passw0rd' \
  -C \
  -i <database-name>.sql

# Or execute via docker exec
docker cp <database-name>.sql sqlserver:/tmp/
docker compose exec sqlserver /opt/mssql-tools18/bin/sqlcmd \
  -S localhost \
  -U sa \
  -P 'YourStrong!Passw0rd' \
  -C \
  -i /tmp/<database-name>.sql
```

**Using SqlPackage (BACPAC):**
```bash
# Import BACPAC
sqlpackage /Action:Import \
  /SourceFile:./<database-name>.bacpac \
  /TargetServerName:localhost \
  /TargetDatabaseName:<database-name> \
  /TargetUser:sa \
  /TargetPassword:'YourStrong!Passw0rd' \
  /TargetTrustServerCertificate:True

# Or via docker exec (requires sqlpackage to be present inside the container)
docker cp <database-name>.bacpac sqlserver:/tmp/
docker compose exec sqlserver /opt/sqlpackage/sqlpackage \
  /Action:Import \
  /SourceFile:/tmp/<database-name>.bacpac \
  /TargetServerName:localhost \
  /TargetDatabaseName:<database-name> \
  /TargetUser:sa \
  /TargetPassword:'YourStrong!Passw0rd' \
  /TargetTrustServerCertificate:True
```

### PostgreSQL

**Using psql:**
```bash
# Import SQL script
psql -h localhost \
  -U postgres \
  -d myapp \
  -f <database-name>.sql

# Or with custom-format dump
pg_restore -h localhost \
  -U postgres \
  -d myapp \
  -v \
  <database-name>.dump

# Via docker exec
docker cp <database-name>.sql postgres:/tmp/
docker compose exec postgres psql \
  -U postgres \
  -d myapp \
  -f /tmp/<database-name>.sql
```

### MySQL

**Using mysql:**
```bash
# Import SQL script
mysql -h localhost \
  -u root \
  -pmysql123 \
  < <database-name>.sql

# Via docker exec
docker cp <database-name>.sql mysql:/tmp/
docker compose exec -T mysql sh -c \
  "mysql -u root -pmysql123 < /tmp/<database-name>.sql"
```

## Step 7: Verify Import

**SQL Server:**
```bash
sqlcmd -S localhost,1433 -U sa -P 'YourStrong!Passw0rd' -C -Q "SELECT name FROM sys.databases"
sqlcmd -S localhost,1433 -U sa -P 'YourStrong!Passw0rd' -C -Q "USE <database-name>; SELECT COUNT(*) FROM INFORMATION_SCHEMA.TABLES"
```

**PostgreSQL:**
```bash
docker compose exec postgres psql -U postgres -d myapp -c "\dt"
docker compose exec postgres psql -U postgres -d myapp -c "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema = 'public'"
```

**MySQL:**
```bash
docker compose exec mysql mysql -u root -pmysql123 -e "SHOW DATABASES"
docker compose exec mysql mysql -u root -pmysql123 myapp -e "SHOW TABLES"
```

## Step 8: Automate with Init Scripts

Place SQL files in the `./init/` directory. PostgreSQL and MySQL containers run scripts from `/docker-entrypoint-initdb.d` automatically on first startup; the SQL Server image does not, so run its scripts manually (see the example below) or wire up a custom entrypoint.

**SQL Server init script (init/01-create-database.sql):**
```sql
-- Wait for SQL Server to be ready
WAITFOR DELAY '00:00:10';
GO

-- Create database if not exists
IF NOT EXISTS (SELECT * FROM sys.databases WHERE name = 'MyApp')
BEGIN
    CREATE DATABASE MyApp;
END
GO

USE MyApp;
GO

-- Your schema and data here
CREATE TABLE Users (
    Id INT PRIMARY KEY IDENTITY(1,1),
    Username NVARCHAR(100) NOT NULL,
    Email NVARCHAR(255) NOT NULL
);
GO
```
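
Since the SQL Server image does not pick this script up automatically, one option is to run it once the container is healthy, using the container and mount defined in Step 4:

```bash
# Run the init script manually inside the running SQL Server container
docker compose exec sqlserver /opt/mssql-tools18/bin/sqlcmd \
  -S localhost \
  -U sa \
  -P 'YourStrong!Passw0rd' \
  -C \
  -i /docker-entrypoint-initdb.d/01-create-database.sql
```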

**PostgreSQL init script (init/01-init.sql):**
```sql
-- Runs automatically on first container start
CREATE TABLE IF NOT EXISTS users (
    id SERIAL PRIMARY KEY,
    username VARCHAR(100) NOT NULL,
    email VARCHAR(255) NOT NULL
);

INSERT INTO users (username, email) VALUES
    ('admin', 'admin@example.com'),
    ('user', 'user@example.com');
```

**MySQL init script (init/01-init.sql):**
```sql
-- Runs automatically on first container start
CREATE TABLE IF NOT EXISTS users (
    id INT AUTO_INCREMENT PRIMARY KEY,
    username VARCHAR(100) NOT NULL,
    email VARCHAR(255) NOT NULL
);

INSERT INTO users (username, email) VALUES
    ('admin', 'admin@example.com'),
    ('user', 'user@example.com');
```

## Step 9: Handle Large Databases

For databases > 10GB:

1. **Use compression:**
```bash
# Custom format is already compressed; -Z sets the level
pg_dump -h <server> -U <user> -d <db> -F c -Z 9 -f <db>.dump
```

2. **Export schema separately:**
```bash
pg_dump -h <server> -U <user> -d <db> --schema-only -f schema.sql
```

3. **Export data in chunks:**
```bash
pg_dump -h <server> -U <user> -d <db> -t table1 --data-only -f table1.sql
pg_dump -h <server> -U <user> -d <db> -t table2 --data-only -f table2.sql
```

4. **Use parallel export (PostgreSQL):**
```bash
pg_dump -h <server> -U <user> -d <db> -F d -j 4 -f ./dump_directory
```

5. **Consider a subset of data for development:**
```bash
# pg_dump has no row filter; use COPY to export only the last 6 months
psql -h <server> -U <user> -d <db> \
  -c "\copy (SELECT * FROM table1 WHERE created_at > NOW() - INTERVAL '6 months') TO 'table1-subset.csv' WITH CSV HEADER"
```

## Step 10: Clean Up Azure Resources

Remove firewall rules after export:

```bash
# SQL Server
az sql server firewall-rule delete \
  --resource-group <resource-group> \
  --server <server-name> \
  --name AllowMyIP

# PostgreSQL
az postgres flexible-server firewall-rule delete \
  --resource-group <resource-group> \
  --name <server-name> \
  --rule-name AllowMyIP

# MySQL
az mysql flexible-server firewall-rule delete \
  --resource-group <resource-group> \
  --name <server-name> \
  --rule-name AllowMyIP
```

## Common Issues and Solutions

**Connection timeout:**
- Verify firewall rules include your IP
- Check network connectivity to Azure
- Ensure NSG rules allow database traffic

**Authentication failed:**
- Verify username format (single-server deployments require `user@server-name`; flexible servers use the plain username)
- Check password for special characters (quote or escape in the shell; see the example below)
- Ensure Azure AD authentication is not required
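
For example, passwords containing `!`, `$`, or spaces are safest single-quoted or passed through environment variables rather than typed inline (the password shown is illustrative):

```bash
# Single quotes prevent history expansion and interpolation
sqlcmd -S <server-name>.database.windows.net -U <username> -P 'P@ss!w0rd' -C -Q "SELECT 1"

# Or keep the secret out of the command line entirely
export PGPASSWORD='P@ss!w0rd'
pg_dump -h <server-name>.postgres.database.azure.com -U <username> -d <database-name> -f backup.sql
```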

**BACPAC import fails:**
- Check SQL Server version compatibility
- Ensure sufficient disk space in Docker volume
- Review error messages for missing dependencies

**Large file transfer fails:**
- Use compression
- Split into multiple files
- Consider Azure Data Factory for large datasets

**Schema compatibility issues:**
- Azure SQL → SQL Server 2025: Generally compatible
- Check for Azure-specific features (elastic pools, etc.)
- Test import in non-production environment first

## Best Practices

1. **Use separate init scripts for schema and data**
2. **Version control schema scripts**
3. **Exclude sensitive data from exports** (see the sketch after this list)
4. **Test import process before full migration**
5. **Document any manual adjustments needed**
6. **Use environment variables for credentials**
7. **Automate with CI/CD pipelines**
8. **Keep export files secure (gitignore)**
9. **Regularly refresh local data from Azure**
10. **Consider using sample data for local development**
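
For practices 3 and 6, one possible shape with `pg_dump` (the table names and the `AZURE_PG_PASSWORD` variable are illustrative):

```bash
# Keep the credential out of the command line
export PGPASSWORD="$AZURE_PG_PASSWORD"

# Skip row data for tables holding sensitive or bulky information
pg_dump -h <server-name>.postgres.database.azure.com \
  -U <username> \
  -d <database-name> \
  --exclude-table-data='audit_log' \
  --exclude-table-data='user_secrets' \
  -f <database-name>-sanitized.sql
```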

## Automation Script Example

Create `scripts/export-and-import.sh` (PostgreSQL shown; adapt the tools for SQL Server or MySQL):

```bash
#!/bin/bash
set -euo pipefail

# Configuration
AZURE_SERVER="myserver.postgres.database.azure.com"
AZURE_DB="myapp"
AZURE_USER="admin"
export PGPASSWORD="${AZURE_PG_PASSWORD}"

echo "Exporting from Azure..."
pg_dump -h "$AZURE_SERVER" -U "$AZURE_USER" -d "$AZURE_DB" -F c -f ./db.dump

echo "Starting Docker container..."
docker compose up -d postgres
sleep 10

echo "Importing to Docker..."
docker cp ./db.dump postgres:/tmp/
docker compose exec -T postgres pg_restore -U postgres -d myapp -v /tmp/db.dump

echo "Verifying import..."
docker compose exec -T postgres psql -U postgres -d myapp -c "\dt"

echo "Done! Database ready for local development."
```

## Output Deliverables

Provide:
1. Database export files (.sql, .bacpac, .dump)
2. Import scripts for Docker containers
3. Init directory structure for auto-loading (example layout below)
4. Verification queries
5. Documentation of any schema changes needed
6. Connection string examples for local development
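
An example `init/` layout matching Step 8 (file names are illustrative; PostgreSQL and MySQL run the scripts in lexical order on first start):

```
init/
├── 01-create-database.sql   # SQL Server script (run manually, see Step 8)
├── 01-init.sql              # PostgreSQL/MySQL schema + seed data
└── 02-data.sql              # optional additional data load
```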

## Next Steps

After importing databases:
1. Update application connection strings (examples below)
2. Test application against local databases
3. Verify all tables and data imported correctly
4. Document any Azure-specific features not available locally
5. Set up regular refresh process from Azure
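
Example local connection strings matching the containers and default credentials used in this guide (substitute your own `.env` values):

```bash
# SQL Server (from the host)
SQLSERVER_CONNECTION="Server=localhost,1433;Database=<database-name>;User Id=sa;Password=YourStrong!Passw0rd;TrustServerCertificate=True;"

# PostgreSQL
POSTGRES_CONNECTION="postgresql://postgres:postgres123@localhost:5432/myapp"

# MySQL
MYSQL_CONNECTION="mysql://root:mysql123@localhost:3306/myapp"
```

From inside another container on the same Compose network, replace `localhost` with the service name (`sqlserver`, `postgres`, `mysql`).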

commands/extract-infrastructure.md

---
description: Extract Azure infrastructure and generate Docker Compose stack for local development
---

## 🚨 CRITICAL GUIDELINES

### Windows File Path Requirements

**MANDATORY: Always Use Backslashes on Windows for File Paths**

When using Edit or Write tools on Windows, you MUST use backslashes (`\`) in file paths, NOT forward slashes (`/`).

**Examples:**
- ❌ WRONG: `D:/repos/project/file.tsx`
- ✅ CORRECT: `D:\repos\project\file.tsx`

This applies to:
- Edit tool file_path parameter
- Write tool file_path parameter
- All file operations on Windows systems

### Documentation Guidelines

**NEVER create new documentation files unless explicitly requested by the user.**

- **Priority**: Update existing README.md files rather than creating new documentation
- **Repository cleanliness**: Keep repository root clean - only README.md unless user requests otherwise
- **Style**: Documentation should be concise, direct, and professional - avoid AI-generated tone
- **User preference**: Only create additional .md files when user specifically asks for documentation

---

# Extract Azure Infrastructure to Docker Compose

## Purpose
Analyze existing Azure infrastructure and generate a complete Docker Compose stack with Azure service emulators for local development.

## Prerequisites

**Required tools:**
- Azure CLI (`az`) installed and configured
- Docker Desktop 4.40+ with Compose v2.42+
- Sufficient local resources (minimum 8GB RAM for full Azure stack)

**Azure access:**
- Authenticated with `az login`
- Appropriate RBAC permissions to read resources
- Access to target resource group
## Step 1: Authenticate with Azure
|
||||
|
||||
```bash
|
||||
# Login to Azure
|
||||
az login
|
||||
|
||||
# List available subscriptions
|
||||
az account list --output table
|
||||
|
||||
# Set target subscription
|
||||
az account set --subscription "subscription-name-or-id"
|
||||
|
||||
# Verify current subscription
|
||||
az account show
|
||||
```
|
||||
|
||||
## Step 2: Extract Azure Resources
|
||||
|
||||
### List Resources in Resource Group
|
||||
|
||||
```bash
|
||||
# List all resources in resource group
|
||||
az resource list \
|
||||
--resource-group <resource-group-name> \
|
||||
--output table
|
||||
|
||||
# Get detailed JSON output for analysis
|
||||
az resource list \
|
||||
--resource-group <resource-group-name> \
|
||||
--output json > azure-resources.json
|
||||
```

### Extract Specific Service Configurations

**App Services:**
```bash
# List App Services
az webapp list \
  --resource-group <resource-group-name> \
  --output json > app-services.json

# Get detailed configuration for each app
az webapp show \
  --name <app-name> \
  --resource-group <resource-group-name> \
  --output json > app-<app-name>.json

# Get application settings (environment variables)
az webapp config appsettings list \
  --name <app-name> \
  --resource-group <resource-group-name> \
  --output json > app-<app-name>-settings.json

# Get connection strings
az webapp config connection-string list \
  --name <app-name> \
  --resource-group <resource-group-name> \
  --output json > app-<app-name>-connections.json
```

**Azure SQL Databases:**
```bash
# List SQL servers
az sql server list \
  --resource-group <resource-group-name> \
  --output json > sql-servers.json

# List databases on server
az sql db list \
  --server <server-name> \
  --resource-group <resource-group-name> \
  --output json > sql-databases.json

# Get database details
az sql db show \
  --name <database-name> \
  --server <server-name> \
  --resource-group <resource-group-name> \
  --output json > sql-db-<database-name>.json
```

**PostgreSQL/MySQL:**
```bash
# PostgreSQL
az postgres flexible-server list \
  --resource-group <resource-group-name> \
  --output json > postgres-servers.json

az postgres flexible-server db list \
  --server-name <server-name> \
  --resource-group <resource-group-name> \
  --output json > postgres-databases.json

# MySQL
az mysql flexible-server list \
  --resource-group <resource-group-name> \
  --output json > mysql-servers.json
```

**Redis Cache:**
```bash
az redis list \
  --resource-group <resource-group-name> \
  --output json > redis-caches.json

az redis show \
  --name <redis-name> \
  --resource-group <resource-group-name> \
  --output json > redis-<redis-name>.json
```

**Storage Accounts:**
```bash
az storage account list \
  --resource-group <resource-group-name> \
  --output json > storage-accounts.json

az storage account show \
  --name <storage-account-name> \
  --resource-group <resource-group-name> \
  --output json > storage-<storage-account-name>.json
```

**Cosmos DB:**
```bash
az cosmosdb list \
  --resource-group <resource-group-name> \
  --output json > cosmosdb-accounts.json

az cosmosdb show \
  --name <cosmosdb-name> \
  --resource-group <resource-group-name> \
  --output json > cosmosdb-<cosmosdb-name>.json
```

**Service Bus:**
```bash
az servicebus namespace list \
  --resource-group <resource-group-name> \
  --output json > servicebus-namespaces.json

az servicebus queue list \
  --namespace-name <namespace-name> \
  --resource-group <resource-group-name> \
  --output json > servicebus-queues.json
```

## Step 3: Analyze Extracted Resources

Read all JSON files and identify the following (a summary script sketch follows the list):

1. **Service Types and Counts**
   - How many App Services?
   - Database types (SQL Server, PostgreSQL, MySQL)?
   - Cache services (Redis)?
   - Storage requirements (Blob, Queue, Table)?
   - NoSQL databases (Cosmos DB)?
   - Message queues (Service Bus)?

2. **Service Dependencies**
   - Which apps connect to which databases?
   - Connection strings and relationships
   - Network configurations
   - Authentication methods

3. **Configuration Requirements**
   - Environment variables from app settings
   - Connection strings
   - Feature flags
   - Secrets (need local equivalents)

4. **Resource Sizing**
   - Database SKUs → Docker resource limits
   - App Service plans → Container CPU/memory
   - Storage capacity → Volume sizing
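
A small helper to summarise the exported JSON (a sketch assuming `jq` is installed and the files from Step 2 exist):

```bash
# Count resources by type from the full export
jq -r '.[].type' azure-resources.json | sort | uniq -c | sort -rn

# List App Service names with their runtime stack (falls back to "windows" when unset)
jq -r '.[] | "\(.name)\t\(.siteConfig.linuxFxVersion // "windows")"' app-services.json

# Turn app settings into KEY=VALUE pairs as a starting point for .env
jq -r '.[] | "\(.name)=\(.value)"' app-<app-name>-settings.json
```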

## Step 4: Map Azure Services to Docker

Use this mapping table:

| Azure Service | Docker Image | Configuration Notes |
|---------------|--------------|---------------------|
| App Service (Windows) | Custom build | Extract runtime stack from config |
| App Service (Linux) | Custom build | Use specified container image |
| Azure SQL Database | `mcr.microsoft.com/mssql/server:2025-latest` | Use Developer edition |
| PostgreSQL Flexible Server | `postgres:16-alpine` | Match version from Azure |
| MySQL Flexible Server | `mysql:8.4` | Match version from Azure |
| Redis Cache | `redis:7.4-alpine` | Configure persistence |
| Storage Account (Blob/Queue/Table) | `mcr.microsoft.com/azure-storage/azurite` | All storage types in one |
| Cosmos DB | `mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator` | NoSQL emulator |
| Service Bus | Custom or `rabbitmq:3.13-alpine` | Limited emulator support |
| Application Insights | `jaegertracing/all-in-one` | OpenTelemetry compatible |

## Step 5: Generate Docker Compose Structure

Create `docker-compose.yml` with this structure:

```yaml
# Modern Compose format (no version field for v2.40+)

services:
  # Frontend App Services
  # Backend App Services
  # Databases (SQL Server, PostgreSQL, MySQL)
  # Cache (Redis)
  # Storage (Azurite)
  # NoSQL (Cosmos DB)
  # Monitoring (Jaeger, Grafana)

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
    internal: true
  monitoring:
    driver: bridge

volumes:
  # Named volumes for each database
  # Named volumes for storage emulators

secrets:
  # Database passwords
  # Connection strings
```

### Service Generation Rules

**For each App Service:**
```yaml
service-name:
  build:
    context: ./path-to-app
    dockerfile: Dockerfile
  ports:
    - "PORT:PORT"
  depends_on:
    database-service:
      condition: service_healthy
  environment:
    # Map from Azure app settings
  networks:
    - frontend
    - backend
  restart: unless-stopped
  user: "1000:1000"
  read_only: true
  tmpfs:
    - /tmp
  cap_drop:
    - ALL
  cap_add:
    - NET_BIND_SERVICE
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:PORT/health"]
    interval: 30s
    timeout: 3s
    retries: 3
    start_period: 40s
  deploy:
    resources:
      limits:
        cpus: 'X'
        memory: XG
```

**For Azure SQL Database:**
```yaml
sqlserver:
  image: mcr.microsoft.com/mssql/server:2025-latest
  environment:
    - ACCEPT_EULA=Y
    - MSSQL_PID=Developer
    # The mssql image does not read *_FILE secret variables; take the password from .env
    - MSSQL_SA_PASSWORD=${MSSQL_SA_PASSWORD}
  ports:
    - "1433:1433"
  volumes:
    - sqlserver-data:/var/opt/mssql
  networks:
    - backend
  healthcheck:
    test: ["CMD-SHELL", "/opt/mssql-tools18/bin/sqlcmd -S localhost -U sa -P $$MSSQL_SA_PASSWORD -Q 'SELECT 1' -C || exit 1"]
    interval: 10s
    timeout: 3s
    retries: 3
    start_period: 10s
  deploy:
    resources:
      limits:
        cpus: '2'
        memory: 4G
      reservations:
        cpus: '1'
        memory: 2G
  security_opt:
    - no-new-privileges:true
```

**For Storage Account:**
```yaml
azurite:
  image: mcr.microsoft.com/azure-storage/azurite:latest
  command: azurite --blobHost 0.0.0.0 --queueHost 0.0.0.0 --tableHost 0.0.0.0 --loose
  ports:
    - "10000:10000" # Blob
    - "10001:10001" # Queue
    - "10002:10002" # Table
  volumes:
    - azurite-data:/data
  networks:
    - backend
  healthcheck:
    test: ["CMD", "nc", "-z", "localhost", "10000"]
    interval: 30s
    timeout: 3s
    retries: 3
  restart: unless-stopped
```

**For Redis Cache:**
```yaml
redis:
  image: redis:7.4-alpine
  command: redis-server --appendonly yes --requirepass ${REDIS_PASSWORD}
  ports:
    - "6379:6379"
  volumes:
    - redis-data:/data
  networks:
    - backend
  healthcheck:
    test: ["CMD-SHELL", "redis-cli -a ${REDIS_PASSWORD} ping | grep PONG"]
    interval: 10s
    timeout: 3s
    retries: 3
  security_opt:
    - no-new-privileges:true
```

**For Cosmos DB:**
```yaml
cosmosdb:
  image: mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator:latest
  environment:
    - AZURE_COSMOS_EMULATOR_PARTITION_COUNT=10
    - AZURE_COSMOS_EMULATOR_ENABLE_DATA_PERSISTENCE=true
  ports:
    - "8081:8081"
    - "10251-10254:10251-10254"
  volumes:
    - cosmos-data:/data/db
  networks:
    - backend
  deploy:
    resources:
      limits:
        cpus: '2'
        memory: 4G
```

## Step 6: Generate Environment Files

Create `.env.template`:

```bash
# SQL Server
MSSQL_SA_PASSWORD=YourStrong!Passw0rd

# PostgreSQL (if used)
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres123
POSTGRES_DB=myapp

# MySQL (if used)
MYSQL_ROOT_PASSWORD=mysql123
MYSQL_DATABASE=myapp

# Redis
REDIS_PASSWORD=redis123

# Application Settings
# (Map from Azure app settings JSON)
ASPNETCORE_ENVIRONMENT=Development
NODE_ENV=development

# Azure Storage Emulator (Standard Development Connection String)
AZURITE_CONNECTION_STRING=DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://azurite:10000/devstoreaccount1;QueueEndpoint=http://azurite:10001/devstoreaccount1;TableEndpoint=http://azurite:10002/devstoreaccount1;

# Cosmos DB Emulator
COSMOS_EMULATOR_ENDPOINT=https://cosmosdb:8081
COSMOS_EMULATOR_KEY=C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==

# Feature Flags
ENABLE_MONITORING=true
```

## Step 7: Create Supporting Files

**Makefile:**
```makefile
.PHONY: up down logs health restart clean init

up:
	@docker compose up -d
	@echo "✓ Services started. Access at:"
	@echo "  - Frontend: http://localhost:3000"
	@echo "  - Backend: http://localhost:8080"
	@echo "  - Azurite: http://localhost:10000"
	@echo "  - Cosmos DB: https://localhost:8081/_explorer/index.html"

down:
	@docker compose down

logs:
	@docker compose logs -f

health:
	@docker compose ps

restart:
	@docker compose restart

clean:
	@docker compose down -v
	@echo "✓ Cleaned all volumes"

init:
	@cp .env.template .env
	@echo "✓ Created .env file. Please update passwords!"
```

**README.md:**
Include:
- Architecture diagram of services
- Service mapping (Azure → Docker)
- Port mappings
- Connection strings for local development
- How to start/stop
- Health check verification
- Troubleshooting guide

**docker-compose.override.yml (for development):**
```yaml
services:
  frontend:
    volumes:
      - ./frontend/src:/app/src:cached
    environment:
      - HOT_RELOAD=true

  backend:
    volumes:
      - ./backend/src:/app/src:cached
    ports:
      - "9229:9229" # Node.js debugger
```

## Step 8: Validation

Before finalizing, validate:

1. **Syntax validation:**
```bash
docker compose config
```

2. **Service startup order:**
   - Databases start first
   - Health checks complete before dependent services start
   - Apps start after all dependencies are healthy

3. **Network isolation:**
   - Databases only on backend network
   - Frontend services can't directly access databases
   - Proper communication paths

4. **Resource limits** (see the check after this list):
   - Total CPU allocation < host CPUs
   - Total memory allocation < host memory
   - Leave headroom for host OS

5. **Security checks:**
   - No hardcoded secrets in docker-compose.yml
   - All services run as non-root where possible
   - Read-only filesystems enabled
   - Capabilities dropped
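
A quick way to compare requested limits against what the host actually provides (the `docker info` fields shown are standard):

```bash
# Host capacity visible to Docker
docker info --format 'CPUs: {{.NCPU}}  Memory: {{.MemTotal}} bytes'

# Print the resolved compose file and inspect declared limits
docker compose config | grep -E 'cpus:|memory:'
```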

## Output Deliverables

Provide the following files:

1. `docker-compose.yml` - Main compose file
2. `docker-compose.override.yml` - Development overrides
3. `.env.template` - Environment variable template
4. `Makefile` - Common operations
5. `README.md` - Setup and usage documentation
6. `.dockerignore` - Files to exclude from builds (example below)
7. `secrets/` directory structure (gitignored)
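
A minimal `.dockerignore` starting point (adjust to the languages in the repository):

```
.git
.env
secrets/
node_modules
bin/
obj/
dist/
*.log
```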

## Common Azure Patterns

### Pattern 1: Simple Web + Database
- 1 App Service → web container
- 1 Azure SQL → SQL Server 2025 container
- 1 Storage Account → Azurite

### Pattern 2: Three-Tier Application
- Frontend App Service → React/Angular container
- Backend App Service → API container
- Azure SQL → SQL Server 2025 container
- Redis Cache → Redis container
- Storage Account → Azurite

### Pattern 3: Microservices
- Multiple App Services → Multiple containers
- Azure SQL + Cosmos DB → SQL Server + Cosmos emulator
- Service Bus → RabbitMQ
- Application Insights → Jaeger
- API Management → Nginx gateway

### Pattern 4: Full Azure Stack
- Multiple App Services (frontend/backend/admin)
- Azure SQL + PostgreSQL + MySQL
- Redis Cache
- Storage Account → Azurite
- Cosmos DB → Cosmos emulator
- Service Bus → Custom emulator
- Application Insights → Jaeger + Grafana

## Tips and Best Practices

1. **Start Simple:** Extract minimal viable stack first, add services incrementally
2. **Health Checks:** Ensure every service has working health checks
3. **Dependencies:** Use `depends_on` with `condition: service_healthy`
4. **Secrets Management:** Never commit .env files, provide .env.template
5. **Resource Limits:** Set realistic limits based on local development machine
6. **Network Design:** Isolate backend services from direct external access
7. **Documentation:** Document Azure→Docker mapping for team reference
8. **Version Control:** Exclude .env, secrets/, and volumes/ from git

## Troubleshooting

**Services fail to start:**
- Check Docker Desktop resource allocation
- Verify no port conflicts with other local services
- Review logs: `docker compose logs <service-name>`

**Database connection issues:**
- Verify connection strings use service names, not localhost (examples below)
- Check network configuration
- Ensure health checks pass before apps start
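
From inside the Compose network, containers reach each other by service name; for example, with the services and `.env` defaults above (the variable names are illustrative):

```bash
# App container → SQL Server container
ConnectionStrings__Default="Server=sqlserver,1433;Database=MyApp;User Id=sa;Password=${MSSQL_SA_PASSWORD};TrustServerCertificate=True;"

# App container → Redis container
REDIS_URL="redis://:${REDIS_PASSWORD}@redis:6379"

# App container → Azurite blob endpoint (service name instead of 127.0.0.1)
AZURE_STORAGE_BLOB_ENDPOINT="http://azurite:10000/devstoreaccount1"
```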

**Performance issues:**
- Increase Docker Desktop memory allocation
- Reduce number of services running simultaneously
- Use volume caching for macOS (`:cached`)

**Azurite connection failures:**
- Use standard development account key
- Ensure ports 10000-10002 are available
- Verify `--loose` flag for compatibility

## Next Steps

After generating Docker Compose stack:
1. Test with `docker compose up`
2. Verify health checks: `docker compose ps`
3. Export databases using `/export-database` command
4. Generate Dockerfiles using `/generate-dockerfile` command
5. Document any Azure-specific features not replicated locally