Initial commit

Zhongwei Li
2025-11-29 18:48:00 +08:00
commit cdbb3f7db6
8 changed files with 3301 additions and 0 deletions

commands/sng-ci.md (new file, 714 lines)
# Setup CI/CD Pipeline Command
You are helping the user set up a CI/CD pipeline for automated testing, building, and deployment following Sngular's DevOps best practices.
## Instructions
1. **Determine the platform**:
   - GitHub Actions
   - GitLab CI
   - Jenkins
   - CircleCI
   - Azure DevOps
   - Bitbucket Pipelines
2. **Identify application type**:
   - Node.js/TypeScript application
   - Python application
   - Go application
   - Frontend application (React, Vue, Next.js)
   - Full-stack application
   - Monorepo with multiple services
3. **Ask about pipeline requirements**:
   - Linting and code quality checks
   - Unit and integration tests
   - Build and compile steps
   - Docker image building
   - Deployment targets (staging, production)
   - Security scanning
   - Performance testing
4. **Determine trigger events** (see the trigger example after this list):
   - Push to main/master
   - Pull requests
   - Tag/release creation
   - Scheduled runs
   - Manual triggers
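For the less common trigger events (scheduled runs, manual triggers, releases), here is a minimal sketch of the corresponding GitHub Actions syntax; the cron expression and the `environment` input name are illustrative placeholders, not values from any existing workflow:

```yaml
# Example triggers beyond push/PR (values are illustrative)
on:
  schedule:
    - cron: '0 6 * * 1'        # scheduled run: every Monday at 06:00 UTC
  workflow_dispatch:            # manual trigger from the Actions tab
    inputs:
      environment:              # hypothetical input name
        description: 'Target environment'
        required: false
        default: 'staging'
  release:
    types: [published]          # runs when a release is published
```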
## GitHub Actions Workflows
### Basic CI Pipeline
```yaml
# .github/workflows/ci.yml
name: CI

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main, develop ]

jobs:
  lint:
    name: Lint
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Run linter
        run: npm run lint
      - name: Check formatting
        run: npm run format:check

  test:
    name: Test
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [18, 20]
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Run tests
        run: npm test -- --coverage
      - name: Upload coverage
        uses: codecov/codecov-action@v3
        with:
          files: ./coverage/coverage-final.json
          flags: unittests

  build:
    name: Build
    runs-on: ubuntu-latest
    needs: [lint, test]
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Build application
        run: npm run build
      - name: Upload build artifacts
        uses: actions/upload-artifact@v4
        with:
          name: build
          path: dist/
```
### CI/CD with Docker
```yaml
# .github/workflows/ci-cd.yml
name: CI/CD

on:
  push:
    branches: [ main ]
    tags: [ 'v*' ]
  pull_request:
    branches: [ main ]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  test:
    name: Test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Run tests
        run: npm test

  security-scan:
    name: Security Scan
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run Trivy vulnerability scanner
        uses: aquasecurity/trivy-action@master
        with:
          scan-type: 'fs'
          scan-ref: '.'
          format: 'sarif'
          output: 'trivy-results.sarif'
      - name: Upload Trivy results to GitHub Security
        uses: github/codeql-action/upload-sarif@v2
        with:
          sarif_file: 'trivy-results.sarif'

  build-and-push:
    name: Build and Push Docker Image
    runs-on: ubuntu-latest
    needs: [test, security-scan]
    if: github.event_name != 'pull_request'
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Log in to Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Extract metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=ref,event=branch
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
            type=sha
      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

  deploy-staging:
    name: Deploy to Staging
    runs-on: ubuntu-latest
    needs: build-and-push
    if: github.ref == 'refs/heads/main'
    environment:
      name: staging
      url: https://staging.example.com
    steps:
      - name: Deploy to staging
        run: |
          echo "Deploying to staging environment"
          # Add deployment commands here

  deploy-production:
    name: Deploy to Production
    runs-on: ubuntu-latest
    needs: build-and-push
    if: startsWith(github.ref, 'refs/tags/v')
    environment:
      name: production
      url: https://example.com
    steps:
      - name: Deploy to production
        run: |
          echo "Deploying to production environment"
          # Add deployment commands here
```
### Monorepo Pipeline
```yaml
# .github/workflows/monorepo-ci.yml
name: Monorepo CI

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  detect-changes:
    name: Detect Changes
    runs-on: ubuntu-latest
    outputs:
      frontend: ${{ steps.filter.outputs.frontend }}
      backend: ${{ steps.filter.outputs.backend }}
    steps:
      - uses: actions/checkout@v4
      - uses: dorny/paths-filter@v2
        id: filter
        with:
          filters: |
            frontend:
              - 'apps/frontend/**'
              - 'packages/ui/**'
            backend:
              - 'apps/backend/**'
              - 'packages/api/**'

  test-frontend:
    name: Test Frontend
    runs-on: ubuntu-latest
    needs: detect-changes
    if: needs.detect-changes.outputs.frontend == 'true'
    defaults:
      run:
        working-directory: apps/frontend
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Run tests
        run: npm test
      - name: Build
        run: npm run build

  test-backend:
    name: Test Backend
    runs-on: ubuntu-latest
    needs: detect-changes
    if: needs.detect-changes.outputs.backend == 'true'
    defaults:
      run:
        working-directory: apps/backend
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: test
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          - 5432:5432
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Run migrations
        run: npm run migrate
        env:
          DATABASE_URL: postgresql://postgres:postgres@localhost:5432/test
      - name: Run tests
        run: npm test
        env:
          DATABASE_URL: postgresql://postgres:postgres@localhost:5432/test
      - name: Build
        run: npm run build
```
## GitLab CI Pipeline
```yaml
# .gitlab-ci.yml
stages:
  - lint
  - test
  - build
  - deploy

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: "/certs"

# Templates
.node_template: &node_template
  image: node:20-alpine
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - node_modules/
  before_script:
    - npm ci

lint:
  <<: *node_template
  stage: lint
  script:
    - npm run lint
    - npm run format:check

test:unit:
  <<: *node_template
  stage: test
  script:
    - npm test -- --coverage
  coverage: '/All files[^|]*\|[^|]*\s+([\d\.]+)/'
  artifacts:
    when: always
    reports:
      junit: junit.xml
      coverage_report:
        coverage_format: cobertura
        path: coverage/cobertura-coverage.xml

test:e2e:
  <<: *node_template
  stage: test
  services:
    - postgres:16-alpine
  variables:
    POSTGRES_DB: testdb
    POSTGRES_USER: testuser
    POSTGRES_PASSWORD: testpass
    DATABASE_URL: postgresql://testuser:testpass@postgres:5432/testdb
  script:
    - npm run test:e2e

build:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .
    - docker build -t $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
    - docker push $CI_REGISTRY_IMAGE:latest
  only:
    - main
    - tags

deploy:staging:
  stage: deploy
  image: alpine:latest
  before_script:
    - apk add --no-cache curl
  script:
    - echo "Deploying to staging"
    - curl -X POST $STAGING_WEBHOOK_URL
  environment:
    name: staging
    url: https://staging.example.com
  only:
    - main

deploy:production:
  stage: deploy
  image: alpine:latest
  before_script:
    - apk add --no-cache curl
  script:
    - echo "Deploying to production"
    - curl -X POST $PRODUCTION_WEBHOOK_URL
  environment:
    name: production
    url: https://example.com
  when: manual
  only:
    - tags
```
## Jenkins Pipeline
```groovy
// Jenkinsfile
pipeline {
    agent any

    environment {
        NODE_VERSION = '20'
        DOCKER_REGISTRY = 'registry.example.com'
        IMAGE_NAME = 'myapp'
    }

    stages {
        stage('Checkout') {
            steps {
                checkout scm
            }
        }
        stage('Install Dependencies') {
            agent {
                docker {
                    image "node:${NODE_VERSION}-alpine"
                    reuseNode true
                }
            }
            steps {
                sh 'npm ci'
            }
        }
        stage('Lint') {
            agent {
                docker {
                    image "node:${NODE_VERSION}-alpine"
                    reuseNode true
                }
            }
            steps {
                sh 'npm run lint'
            }
        }
        stage('Test') {
            agent {
                docker {
                    image "node:${NODE_VERSION}-alpine"
                    reuseNode true
                }
            }
            steps {
                sh 'npm test -- --coverage'
            }
            post {
                always {
                    junit 'junit.xml'
                    publishHTML([
                        allowMissing: false,
                        alwaysLinkToLastBuild: true,
                        keepAll: true,
                        reportDir: 'coverage',
                        reportFiles: 'index.html',
                        reportName: 'Coverage Report'
                    ])
                }
            }
        }
        stage('Build') {
            agent {
                docker {
                    image "node:${NODE_VERSION}-alpine"
                    reuseNode true
                }
            }
            steps {
                sh 'npm run build'
            }
        }
        stage('Docker Build') {
            when {
                branch 'main'
            }
            steps {
                script {
                    docker.build("${DOCKER_REGISTRY}/${IMAGE_NAME}:${BUILD_NUMBER}")
                    docker.build("${DOCKER_REGISTRY}/${IMAGE_NAME}:latest")
                }
            }
        }
        stage('Docker Push') {
            when {
                branch 'main'
            }
            steps {
                script {
                    docker.withRegistry("https://${DOCKER_REGISTRY}", 'docker-credentials') {
                        docker.image("${DOCKER_REGISTRY}/${IMAGE_NAME}:${BUILD_NUMBER}").push()
                        docker.image("${DOCKER_REGISTRY}/${IMAGE_NAME}:latest").push()
                    }
                }
            }
        }
        stage('Deploy to Staging') {
            when {
                branch 'main'
            }
            steps {
                sh """
                    kubectl set image deployment/myapp \
                        myapp=${DOCKER_REGISTRY}/${IMAGE_NAME}:${BUILD_NUMBER} \
                        --namespace=staging
                """
            }
        }
        stage('Deploy to Production') {
            when {
                tag pattern: "v\\d+\\.\\d+\\.\\d+", comparator: "REGEXP"
            }
            steps {
                input message: 'Deploy to production?', ok: 'Deploy'
                sh """
                    kubectl set image deployment/myapp \
                        myapp=${DOCKER_REGISTRY}/${IMAGE_NAME}:${BUILD_NUMBER} \
                        --namespace=production
                """
            }
        }
    }

    post {
        always {
            cleanWs()
        }
        success {
            echo 'Pipeline succeeded!'
        }
        failure {
            echo 'Pipeline failed!'
            // Send notification
        }
    }
}
```
## Best Practices
### 1. Caching Dependencies
```yaml
# GitHub Actions
- name: Cache dependencies
  uses: actions/cache@v3
  with:
    path: ~/.npm
    key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
    restore-keys: |
      ${{ runner.os }}-node-
```
### 2. Matrix Builds
```yaml
# Test multiple versions
strategy:
  matrix:
    node-version: [18, 20, 21]
    os: [ubuntu-latest, windows-latest, macos-latest]
```
### 3. Conditional Execution
```yaml
# Only run on specific branches
if: github.ref == 'refs/heads/main'
# Only run for PRs
if: github.event_name == 'pull_request'
# Only run for tags
if: startsWith(github.ref, 'refs/tags/')
```
### 4. Secrets Management
```yaml
# Use secrets from repository settings
env:
  DATABASE_URL: ${{ secrets.DATABASE_URL }}
  API_KEY: ${{ secrets.API_KEY }}
```
### 5. Parallel Jobs
```yaml
# Jobs run in parallel by default
jobs:
  lint:
    # ...
  test:
    # ...
  security-scan:
    # ...
```
### 6. Job Dependencies
```yaml
jobs:
  test:
    # ...
  build:
    needs: test  # Wait for test to complete
    # ...
  deploy:
    needs: [test, build]  # Wait for multiple jobs
    # ...
```
## Security Best Practices
- Store secrets in CI platform's secret management
- Use minimal permissions for CI tokens (see the `permissions` example below)
- Scan dependencies for vulnerabilities
- Scan Docker images for security issues
- Don't log sensitive information
- Use branch protection rules
- Require status checks before merging
- Enable signed commits
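To make the "minimal permissions" point concrete, GitHub Actions lets you restrict the default `GITHUB_TOKEN` for the whole workflow and widen it only for the jobs that need more; a sketch, with an illustrative job name:

```yaml
# Restrict the default GITHUB_TOKEN for the whole workflow
permissions:
  contents: read

jobs:
  publish-image:              # hypothetical job name
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write         # only this job may push images
    steps:
      - uses: actions/checkout@v4
```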
## Monitoring and Notifications
### Slack Notifications (GitHub Actions)
```yaml
- name: Slack Notification
  uses: 8398a7/action-slack@v3
  if: always()
  with:
    status: ${{ job.status }}
    text: 'CI Pipeline ${{ job.status }}'
  env:
    SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK }}
```
Ask the user: "What CI/CD platform would you like to use?"

commands/sng-deploy.md (new file, 724 lines)
# Deploy Application Command
You are helping the user deploy their application to various platforms and orchestrators following Sngular's deployment best practices.
## Instructions
1. **Determine deployment target**:
   - Kubernetes (K8s)
   - Docker Swarm
   - AWS (ECS, EKS, EC2, Lambda)
   - Google Cloud (GKE, Cloud Run, App Engine)
   - Azure (AKS, Container Instances, App Service)
   - Vercel / Netlify (for frontend)
   - Heroku
   - DigitalOcean
   - Railway
2. **Identify application type**:
   - Containerized application (Docker)
   - Serverless function
   - Static site
   - Full-stack application
   - Microservices
3. **Ask about requirements**:
   - Environment (staging, production)
   - Scaling needs (replicas, auto-scaling)
   - Resource limits (CPU, memory)
   - Database / persistent storage
   - Load balancing
   - SSL/TLS certificates
   - Domain configuration
   - Monitoring and logging
## Kubernetes Deployment
### Deployment Configuration
```yaml
# k8s/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: production
  labels:
    app: myapp
    version: v1.0.0
spec:
  replicas: 3
  revisionHistoryLimit: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
        version: v1.0.0
    spec:
      # Security
      securityContext:
        runAsNonRoot: true
        runAsUser: 1001
        fsGroup: 1001
      # Init containers (migrations, etc.)
      initContainers:
        - name: migrate
          image: myapp:latest
          command: ['npm', 'run', 'migrate']
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: myapp-secrets
                  key: database-url
      containers:
        - name: myapp
          image: myapp:latest
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 3000
              protocol: TCP
          # Environment variables
          env:
            - name: NODE_ENV
              value: production
            - name: PORT
              value: "3000"
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: myapp-secrets
                  key: database-url
            - name: REDIS_URL
              valueFrom:
                configMapKeyRef:
                  name: myapp-config
                  key: redis-url
          # Resource limits
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 512Mi
          # Health checks
          livenessProbe:
            httpGet:
              path: /health
              port: http
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 3
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /ready
              port: http
            initialDelaySeconds: 10
            periodSeconds: 5
            timeoutSeconds: 3
            failureThreshold: 3
          # Startup probe for slow-starting apps
          startupProbe:
            httpGet:
              path: /health
              port: http
            initialDelaySeconds: 0
            periodSeconds: 10
            timeoutSeconds: 3
            failureThreshold: 30
          # Volume mounts
          volumeMounts:
            - name: app-config
              mountPath: /app/config
              readOnly: true
      volumes:
        - name: app-config
          configMap:
            name: myapp-config
      # Image pull secrets
      imagePullSecrets:
        - name: registry-credentials
```
### Service Configuration
```yaml
# k8s/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: production
  labels:
    app: myapp
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app: myapp
```
### Ingress Configuration
```yaml
# k8s/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  namespace: production
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/limit-rps: "100"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - myapp.example.com
      secretName: myapp-tls
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
```
### ConfigMap and Secrets
```yaml
# k8s/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
  namespace: production
data:
  redis-url: "redis://redis-service:6379"
  log-level: "info"
  feature-flag-enabled: "true"
```
```yaml
# k8s/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: myapp-secrets
  namespace: production
type: Opaque
data:
  # Base64 encoded values
  database-url: cG9zdGdyZXNxbDovL3VzZXI6cGFzc0BkYjoxMjM0NS9teWFwcA==
  jwt-secret: c3VwZXJzZWNyZXRrZXk=
```
### Horizontal Pod Autoscaler
```yaml
# k8s/hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
  namespace: production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
```
### Namespace Configuration
```yaml
# k8s/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    name: production
    environment: production
```
## Helm Chart
```yaml
# Chart.yaml
apiVersion: v2
name: myapp
description: A Helm chart for MyApp
type: application
version: 1.0.0
appVersion: "1.0.0"
```
```yaml
# values.yaml
replicaCount: 3

image:
  repository: myapp
  pullPolicy: Always
  tag: "latest"

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
  hosts:
    - host: myapp.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: myapp-tls
      hosts:
        - myapp.example.com

resources:
  limits:
    cpu: 500m
    memory: 512Mi
  requests:
    cpu: 100m
    memory: 128Mi

autoscaling:
  enabled: true
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
  targetMemoryUtilizationPercentage: 80

env:
  NODE_ENV: production
  PORT: "3000"

secrets:
  DATABASE_URL: ""
  JWT_SECRET: ""
```
## Docker Compose Deployment
```yaml
# docker-compose.prod.yml
version: '3.8'

services:
  app:
    image: myapp:latest
    restart: unless-stopped
    ports:
      - "3000:3000"
    environment:
      NODE_ENV: production
      DATABASE_URL: ${DATABASE_URL}
      REDIS_URL: redis://redis:6379
    depends_on:
      - db
      - redis
    networks:
      - app-network
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
        max_attempts: 3
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 3s
      retries: 3
      start_period: 40s

  db:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - app-network
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER}"]
      interval: 10s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    restart: unless-stopped
    networks:
      - app-network
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 3s
      retries: 5

  nginx:
    image: nginx:alpine
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./ssl:/etc/nginx/ssl:ro
    depends_on:
      - app
    networks:
      - app-network

networks:
  app-network:
    driver: bridge

volumes:
  postgres_data:
```
## AWS Deployment
### ECS Task Definition
```json
{
  "family": "myapp",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "myapp",
      "image": "123456789.dkr.ecr.us-east-1.amazonaws.com/myapp:latest",
      "portMappings": [
        {
          "containerPort": 3000,
          "protocol": "tcp"
        }
      ],
      "environment": [
        {
          "name": "NODE_ENV",
          "value": "production"
        }
      ],
      "secrets": [
        {
          "name": "DATABASE_URL",
          "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789:secret:myapp/database-url"
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/myapp",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "ecs"
        }
      },
      "healthCheck": {
        "command": ["CMD-SHELL", "curl -f http://localhost:3000/health || exit 1"],
        "interval": 30,
        "timeout": 5,
        "retries": 3,
        "startPeriod": 60
      }
    }
  ]
}
```
### Lambda Function (Serverless)
```yaml
# serverless.yml
service: myapp

provider:
  name: aws
  runtime: nodejs20.x
  region: us-east-1
  stage: ${opt:stage, 'dev'}
  environment:
    NODE_ENV: ${self:provider.stage}
    DATABASE_URL: ${env:DATABASE_URL}
  iam:
    role:
      statements:
        - Effect: Allow
          Action:
            - dynamodb:Query
            - dynamodb:Scan
            - dynamodb:GetItem
            - dynamodb:PutItem
          Resource: "arn:aws:dynamodb:*:*:table/MyTable"

functions:
  api:
    handler: dist/lambda.handler
    events:
      - http:
          path: /{proxy+}
          method: ANY
          cors: true
    timeout: 30
    memorySize: 512
  scheduled:
    handler: dist/scheduled.handler
    events:
      - schedule: rate(1 hour)

plugins:
  - serverless-plugin-typescript
  - serverless-offline

package:
  individually: true
  patterns:
    - '!node_modules/**'
    - '!src/**'
    - 'dist/**'
```
## Vercel Deployment (Frontend)
```json
// vercel.json
{
  "version": 2,
  "builds": [
    {
      "src": "package.json",
      "use": "@vercel/next"
    }
  ],
  "routes": [
    {
      "src": "/api/(.*)",
      "dest": "/api/$1"
    }
  ],
  "env": {
    "NODE_ENV": "production",
    "NEXT_PUBLIC_API_URL": "@api_url"
  },
  "regions": ["iad1"],
  "github": {
    "enabled": true,
    "autoAlias": true,
    "silent": true
  }
}
```
## Deployment Scripts
### Rolling Update Script
```bash
#!/bin/bash
# deploy.sh
set -e

ENVIRONMENT=${1:-staging}
IMAGE_TAG=${2:-latest}

echo "Deploying to $ENVIRONMENT with image tag $IMAGE_TAG"

# Update Kubernetes deployment
kubectl set image deployment/myapp \
  myapp=myapp:$IMAGE_TAG \
  --namespace=$ENVIRONMENT

# Wait for rollout to complete
kubectl rollout status deployment/myapp \
  --namespace=$ENVIRONMENT \
  --timeout=5m

# Verify deployment
kubectl get pods \
  --namespace=$ENVIRONMENT \
  --selector=app=myapp

echo "Deployment completed successfully!"
```
### Blue-Green Deployment
```bash
#!/bin/bash
# blue-green-deploy.sh
set -e

NAMESPACE="production"
NEW_VERSION=$1

CURRENT_VERSION=$(kubectl get service myapp -n $NAMESPACE -o jsonpath='{.spec.selector.version}')

echo "Current version: $CURRENT_VERSION"
echo "New version: $NEW_VERSION"

# Deploy new version (green)
kubectl apply -f k8s/deployment-$NEW_VERSION.yaml -n $NAMESPACE

# Wait for new version to be ready
kubectl wait --for=condition=available --timeout=300s \
  deployment/myapp-$NEW_VERSION -n $NAMESPACE

# Run smoke tests
if ! ./scripts/smoke-test.sh http://myapp-$NEW_VERSION:80; then
  echo "Smoke tests failed! Rolling back..."
  kubectl delete deployment/myapp-$NEW_VERSION -n $NAMESPACE
  exit 1
fi

# Switch traffic to new version
kubectl patch service myapp -n $NAMESPACE \
  -p '{"spec":{"selector":{"version":"'$NEW_VERSION'"}}}'

echo "Traffic switched to $NEW_VERSION"

# Wait and monitor
sleep 60

# Delete old version
if [ "$CURRENT_VERSION" != "" ]; then
  kubectl delete deployment/myapp-$CURRENT_VERSION -n $NAMESPACE
  echo "Old version $CURRENT_VERSION deleted"
fi

echo "Blue-green deployment completed!"
```
### Health Check Script
```bash
#!/bin/bash
# health-check.sh
URL=$1
MAX_ATTEMPTS=30
SLEEP_TIME=10

for i in $(seq 1 $MAX_ATTEMPTS); do
  echo "Attempt $i of $MAX_ATTEMPTS"

  if curl -f -s $URL/health > /dev/null; then
    echo "Health check passed!"
    exit 0
  fi

  if [ $i -lt $MAX_ATTEMPTS ]; then
    echo "Health check failed, retrying in $SLEEP_TIME seconds..."
    sleep $SLEEP_TIME
  fi
done

echo "Health check failed after $MAX_ATTEMPTS attempts"
exit 1
```
## Best Practices
### Security
- Use secrets management (never commit secrets)
- Enable RBAC in Kubernetes
- Use network policies to restrict traffic (see the NetworkPolicy sketch after this list)
- Scan images for vulnerabilities
- Run containers as non-root
- Use read-only root filesystem where possible
- Enforce Pod Security Standards (Pod Security Admission replaced the removed PodSecurityPolicy API)
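To make the network-policy point concrete, here is a minimal sketch that only allows ingress to the `myapp` pods from pods labelled `app: nginx-ingress` on port 3000; the labels are assumptions, so match them to your cluster:

```yaml
# k8s/networkpolicy.yaml (illustrative labels)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: myapp-allow-ingress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: nginx-ingress   # assumed ingress controller label
      ports:
        - protocol: TCP
          port: 3000
```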
### Reliability
- Set appropriate resource limits
- Configure health checks (liveness, readiness)
- Use rolling updates with maxUnavailable: 0
- Implement circuit breakers
- Set up autoscaling
- Configure pod disruption budgets (see the sketch after this list)
- Use multiple replicas across zones
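A minimal PodDisruptionBudget sketch for the `myapp` deployment above, keeping at least two pods available during voluntary disruptions; the threshold is an assumption, so tune it to your replica count:

```yaml
# k8s/pdb.yaml (illustrative threshold)
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp
  namespace: production
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: myapp
```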
### Monitoring
- Set up logging (ELK, Loki, CloudWatch)
- Configure metrics (Prometheus, Datadog)
- Set up alerts for critical issues (see the PrometheusRule sketch after this list)
- Use distributed tracing (Jaeger, Zipkin)
- Monitor resource usage
- Track deployment success/failure rates
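If you run the Prometheus Operator, alerting on critical issues can be expressed as a PrometheusRule; this is a sketch only, and the metric name (`http_requests_total`), threshold, and labels are assumptions to adapt to your instrumentation:

```yaml
# k8s/prometheusrule.yaml (requires the Prometheus Operator; values are illustrative)
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: myapp-alerts
  namespace: production
spec:
  groups:
    - name: myapp.rules
      rules:
        - alert: MyAppHighErrorRate
          expr: sum(rate(http_requests_total{app="myapp",status=~"5.."}[5m])) / sum(rate(http_requests_total{app="myapp"}[5m])) > 0.05
          for: 10m
          labels:
            severity: critical
          annotations:
            summary: "myapp 5xx error rate above 5% for 10 minutes"
```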
### Performance
- Use CDN for static assets
- Enable caching where appropriate
- Optimize container images
- Use horizontal pod autoscaling
- Configure connection pooling
- Implement rate limiting (see the ingress annotation sketch below)
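For the rate-limiting bullet, one common option when you already run ingress-nginx (as in the Ingress example above) is annotation-based limiting; a sketch with illustrative values:

```yaml
# Rate limiting via ingress-nginx annotations (values are illustrative)
metadata:
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "20"           # requests per second per client IP
    nginx.ingress.kubernetes.io/limit-connections: "10"   # concurrent connections per client IP
```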
Ask the user: "What platform would you like to deploy to?"

commands/sng-dockerfile.md (new file, 446 lines)
# Create Dockerfile Command
You are helping the user create an optimized Dockerfile for containerizing their application following Sngular's DevOps best practices.
## Instructions
1. **Detect application type**:
   - Node.js (Express, Fastify, NestJS, Next.js)
   - Python (FastAPI, Flask, Django)
   - Go application
   - Java/Spring Boot
   - Static site (React, Vue, etc.)
   - Multi-service application
2. **Determine build requirements**:
   - Package manager (npm, yarn, pnpm, pip, go mod, maven, gradle)
   - Build steps needed
   - Dependencies to install
   - Environment variables required
   - Port to expose
3. **Ask for optimization preferences**:
   - Multi-stage build (recommended)
   - Base image preference (alpine, slim, distroless)
   - Development vs production
   - Build caching strategy
## Dockerfile Templates
### Node.js Application (Multi-stage)
```dockerfile
# syntax=docker/dockerfile:1
# Build stage
FROM node:20-alpine AS builder
WORKDIR /app
# Install all dependencies first (better caching); dev dependencies are needed for the build
COPY package*.json ./
RUN npm ci
# Copy application code
COPY . .
# Build application (if needed)
RUN npm run build
# Drop dev dependencies so only runtime deps are copied to the production stage
RUN npm prune --omit=dev && npm cache clean --force
# Production stage
FROM node:20-alpine AS production
# Security: Create non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodejs -u 1001
WORKDIR /app
# Copy only necessary files from builder
COPY --from=builder --chown=nodejs:nodejs /app/node_modules ./node_modules
COPY --from=builder --chown=nodejs:nodejs /app/dist ./dist
COPY --from=builder --chown=nodejs:nodejs /app/package*.json ./
# Switch to non-root user
USER nodejs
# Expose application port
EXPOSE 3000
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD node -e "require('http').get('http://localhost:3000/health', (r) => process.exit(r.statusCode === 200 ? 0 : 1))"
# Start application
CMD ["node", "dist/main.js"]
```
### Next.js Application
```dockerfile
FROM node:20-alpine AS deps
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package*.json ./
RUN npm ci
FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
ENV NEXT_TELEMETRY_DISABLED=1
# Standalone output requires `output: 'standalone'` in next.config.js
RUN npm run build
FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
ENV NEXT_TELEMETRY_DISABLED=1
RUN addgroup --system --gid 1001 nodejs && \
    adduser --system --uid 1001 nextjs
COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT=3000
ENV HOSTNAME="0.0.0.0"
CMD ["node", "server.js"]
```
### Python FastAPI Application
```dockerfile
# Build stage
FROM python:3.11-slim AS builder
WORKDIR /app
# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    gcc \
    && rm -rf /var/lib/apt/lists/*
# Install Python dependencies
COPY requirements.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /app/wheels -r requirements.txt
# Production stage
FROM python:3.11-slim
WORKDIR /app
# Install runtime dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    curl \
    && rm -rf /var/lib/apt/lists/*
# Copy wheels and install
COPY --from=builder /app/wheels /wheels
COPY requirements.txt .
RUN pip install --no-cache /wheels/*
# Create non-root user
RUN useradd -m -u 1001 appuser
# Copy application
COPY --chown=appuser:appuser . .
USER appuser
EXPOSE 8000
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD curl -f http://localhost:8000/health || exit 1
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```
### Go Application
```dockerfile
# Build stage
FROM golang:1.21-alpine AS builder
WORKDIR /app
# Install build dependencies
RUN apk add --no-cache git
# Copy go mod files
COPY go.mod go.sum ./
RUN go mod download
# Copy source code
COPY . .
# Build binary with optimizations
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -ldflags="-w -s" -o main .
# Production stage (distroless for minimal size)
FROM gcr.io/distroless/static-debian11
WORKDIR /app
# Copy binary from builder
COPY --from=builder /app/main .
# Use numeric user ID (distroless doesn't have /etc/passwd)
USER 65532:65532
EXPOSE 8080
ENTRYPOINT ["/app/main"]
```
### Static Site (Nginx)
```dockerfile
# Build stage
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Production stage
FROM nginx:alpine
# Copy custom nginx config
COPY nginx.conf /etc/nginx/conf.d/default.conf
# Copy built files
COPY --from=builder /app/dist /usr/share/nginx/html
# Add non-root user
RUN chown -R nginx:nginx /usr/share/nginx/html && \
    chmod -R 755 /usr/share/nginx/html && \
    chown -R nginx:nginx /var/cache/nginx && \
    chown -R nginx:nginx /var/log/nginx && \
    touch /var/run/nginx.pid && \
    chown -R nginx:nginx /var/run/nginx.pid
USER nginx
EXPOSE 8080
HEALTHCHECK --interval=30s --timeout=3s CMD wget --quiet --tries=1 --spider http://localhost:8080/health || exit 1
CMD ["nginx", "-g", "daemon off;"]
```
## Nginx Configuration for Static Sites
```nginx
# nginx.conf
server {
    listen 8080;
    server_name _;
    root /usr/share/nginx/html;
    index index.html;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;

    # Gzip compression
    gzip on;
    gzip_vary on;
    gzip_types text/plain text/css text/xml text/javascript application/javascript application/xml+rss application/json;

    # Cache static assets
    location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
    }

    # SPA routing
    location / {
        try_files $uri $uri/ /index.html;
    }

    # Health check
    location /health {
        access_log off;
        return 200 "healthy\n";
        add_header Content-Type text/plain;
    }
}
```
## .dockerignore File
```
# .dockerignore
node_modules
npm-debug.log
dist
build
.git
.gitignore
.env
.env.local
.env.*.local
README.md
.vscode
.idea
*.log
coverage
.next
.cache
__pycache__
*.pyc
*.pyo
.pytest_cache
.mypy_cache
target
bin
obj
```
## Docker Compose for Development
```yaml
# docker-compose.yml
version: '3.8'

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
      target: development
    ports:
      - "3000:3000"
    volumes:
      - .:/app
      - /app/node_modules
    environment:
      - NODE_ENV=development
      - DATABASE_URL=postgresql://postgres:password@db:5432/myapp
    depends_on:
      - db
      - redis

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

volumes:
  postgres_data:
```
## Best Practices
### Security
- Use specific version tags, not `latest`
- Run as non-root user
- Use minimal base images (alpine, slim, distroless)
- Scan images for vulnerabilities (see the CI scan step after this list)
- Don't include secrets in images
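As one way to wire image scanning into CI, the aquasecurity/trivy-action used in the CI command can also scan a built image; a sketch, where the image reference is a placeholder:

```yaml
# GitHub Actions step: scan a built image with Trivy (image name is illustrative)
- name: Scan image for vulnerabilities
  uses: aquasecurity/trivy-action@master
  with:
    scan-type: 'image'
    image-ref: 'myapp:latest'
    severity: 'CRITICAL,HIGH'
    exit-code: '1'   # fail the job on findings
```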
### Performance
- Use multi-stage builds to reduce image size
- Leverage build cache (COPY dependencies first)
- Combine RUN commands to reduce layers
- Use .dockerignore to exclude unnecessary files
### Optimization
```dockerfile
# Bad: Creates multiple layers
RUN apt-get update
RUN apt-get install -y curl
RUN apt-get install -y git
# Good: Single layer with cleanup
RUN apt-get update && apt-get install -y \
    curl \
    git \
    && rm -rf /var/lib/apt/lists/*
```
### Health Checks
```dockerfile
# Application health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD curl -f http://localhost:3000/health || exit 1
```
### Build Arguments
```dockerfile
ARG NODE_VERSION=20
FROM node:${NODE_VERSION}-alpine
ARG BUILD_DATE
ARG VCS_REF
LABEL org.label-schema.build-date=$BUILD_DATE \
org.label-schema.vcs-ref=$VCS_REF
```
## Building and Running
```bash
# Build image
docker build -t myapp:latest .
# Build with build args
docker build --build-arg NODE_VERSION=20 -t myapp:latest .
# Run container
docker run -p 3000:3000 -e NODE_ENV=production myapp:latest
# Run with docker-compose
docker-compose up -d
# View logs
docker logs -f myapp
# Execute command in container
docker exec -it myapp sh
```
## Image Size Optimization
```dockerfile
# Use smaller base images
FROM node:20-alpine # ~110MB
# vs
FROM node:20 # ~900MB
# Use distroless for Go/static binaries
FROM gcr.io/distroless/static-debian11 # ~2MB
# Multi-stage builds
FROM node:20 AS builder
# ... build steps
FROM node:20-alpine AS production
COPY --from=builder /app/dist ./dist
```
Ask the user: "What type of application would you like to containerize?"