Deploy Application Command

You are helping the user deploy their application to various platforms and orchestrators following Sngular's deployment best practices.

Instructions

  1. Determine deployment target:

    • Kubernetes (K8s)
    • Docker Swarm
    • AWS (ECS, EKS, EC2, Lambda)
    • Google Cloud (GKE, Cloud Run, App Engine)
    • Azure (AKS, Container Instances, App Service)
    • Vercel / Netlify (for frontend)
    • Heroku
    • DigitalOcean
    • Railway
  2. Identify application type:

    • Containerized application (Docker)
    • Serverless function
    • Static site
    • Full-stack application
    • Microservices
  3. Ask about requirements:

    • Environment (staging, production)
    • Scaling needs (replicas, auto-scaling)
    • Resource limits (CPU, memory)
    • Database / persistent storage
    • Load balancing
    • SSL/TLS certificates
    • Domain configuration
    • Monitoring and logging

Kubernetes Deployment

Deployment Configuration

# k8s/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: production
  labels:
    app: myapp
    version: v1.0.0
spec:
  replicas: 3
  revisionHistoryLimit: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
        version: v1.0.0
    spec:
      # Security
      securityContext:
        runAsNonRoot: true
        runAsUser: 1001
        fsGroup: 1001

      # Init containers (migrations, etc.)
      initContainers:
        - name: migrate
          image: myapp:latest
          command: ['npm', 'run', 'migrate']
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: myapp-secrets
                  key: database-url

      containers:
        - name: myapp
          image: myapp:latest
          imagePullPolicy: Always

          ports:
            - name: http
              containerPort: 3000
              protocol: TCP

          # Environment variables
          env:
            - name: NODE_ENV
              value: production
            - name: PORT
              value: "3000"
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: myapp-secrets
                  key: database-url
            - name: REDIS_URL
              valueFrom:
                configMapKeyRef:
                  name: myapp-config
                  key: redis-url

          # Resource limits
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 512Mi

          # Health checks
          livenessProbe:
            httpGet:
              path: /health
              port: http
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 3
            failureThreshold: 3

          readinessProbe:
            httpGet:
              path: /ready
              port: http
            initialDelaySeconds: 10
            periodSeconds: 5
            timeoutSeconds: 3
            failureThreshold: 3

          # Startup probe for slow-starting apps
          startupProbe:
            httpGet:
              path: /health
              port: http
            initialDelaySeconds: 0
            periodSeconds: 10
            timeoutSeconds: 3
            failureThreshold: 30

          # Volume mounts
          volumeMounts:
            - name: app-config
              mountPath: /app/config
              readOnly: true

      volumes:
        - name: app-config
          configMap:
            name: myapp-config

      # Image pull secrets
      imagePullSecrets:
        - name: registry-credentials
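To roll the Deployment out, a typical sequence looks like this (it assumes the ConfigMap and Secret referenced above already exist in the cluster and that the manifests live under k8s/):

# Apply the namespace and Deployment, then wait for the rollout to finish
kubectl apply -f k8s/namespace.yaml
kubectl apply -f k8s/deployment.yaml

kubectl rollout status deployment/myapp --namespace=production --timeout=5m

# Inspect the resulting pods and events if something gets stuck
kubectl get pods -n production -l app=myapp
kubectl describe deployment/myapp -n production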

Service Configuration

# k8s/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: production
  labels:
    app: myapp
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app: myapp

Ingress Configuration

# k8s/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  namespace: production
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/limit-rps: "100"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - myapp.example.com
      secretName: myapp-tls
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
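After applying the Ingress, you can verify routing and the cert-manager certificate roughly like this (assuming the ingress-nginx controller and cert-manager are installed in the cluster):

kubectl apply -f k8s/ingress.yaml

# The Ingress should report an address once the controller has picked it up
kubectl get ingress myapp -n production

# cert-manager creates a Certificate resource backing the myapp-tls secret
kubectl get certificate -n production

# End-to-end check over HTTPS
curl -I https://myapp.example.com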

ConfigMap and Secrets

# k8s/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
  namespace: production
data:
  redis-url: "redis://redis-service:6379"
  log-level: "info"
  feature-flag-enabled: "true"

# k8s/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: myapp-secrets
  namespace: production
type: Opaque
data:
  # Base64 encoded values
  database-url: cG9zdGdyZXNxbDovL3VzZXI6cGFzc0BkYjoxMjM0NS9teWFwcA==
  jwt-secret: c3VwZXJzZWNyZXRrZXk=
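Base64 is encoding, not encryption, so a secret manifest like this should not be committed to the repository. One option is to generate the Secret at deploy time from literals or from your secrets manager; the values below are placeholders:

kubectl create secret generic myapp-secrets \
  --namespace=production \
  --from-literal=database-url='postgresql://user:pass@db:5432/myapp' \
  --from-literal=jwt-secret='change-me' \
  --dry-run=client -o yaml | kubectl apply -f -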

Horizontal Pod Autoscaler

# k8s/hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
  namespace: production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
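The HPA needs a metrics provider such as metrics-server to be installed. A quick way to confirm it is scaling on live data:

kubectl apply -f k8s/hpa.yaml

# TARGETS should show current vs. target utilization once metrics are available
kubectl get hpa myapp -n production

# Cross-check actual pod consumption (requires metrics-server)
kubectl top pods -n production -l app=myapp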

Namespace Configuration

# k8s/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    name: production
    environment: production

Helm Chart

# Chart.yaml
apiVersion: v2
name: myapp
description: A Helm chart for MyApp
type: application
version: 1.0.0
appVersion: "1.0.0"

# values.yaml
replicaCount: 3

image:
  repository: myapp
  pullPolicy: Always
  tag: "latest"

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
  hosts:
    - host: myapp.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: myapp-tls
      hosts:
        - myapp.example.com

resources:
  limits:
    cpu: 500m
    memory: 512Mi
  requests:
    cpu: 100m
    memory: 128Mi

autoscaling:
  enabled: true
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
  targetMemoryUtilizationPercentage: 80

env:
  NODE_ENV: production
  PORT: "3000"

secrets:
  DATABASE_URL: ""
  JWT_SECRET: ""
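Assuming the chart lives in a local charts/myapp directory (the path is illustrative), installing or upgrading a release with these values looks like:

# Install or upgrade the release with an explicit image tag
helm upgrade --install myapp ./charts/myapp \
  --namespace production --create-namespace \
  -f charts/myapp/values.yaml \
  --set image.tag=v1.0.0

# Check the release and roll back to a previous revision if needed
helm status myapp -n production
helm rollback myapp 1 -n production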

Docker Compose Deployment

# docker-compose.prod.yml
version: '3.8'

services:
  app:
    image: myapp:latest
    restart: unless-stopped
    ports:
      - "3000:3000"
    environment:
      NODE_ENV: production
      DATABASE_URL: ${DATABASE_URL}
      REDIS_URL: redis://redis:6379
    depends_on:
      - db
      - redis
    networks:
      - app-network
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
        max_attempts: 3
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 3s
      retries: 3
      start_period: 40s

  db:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - app-network
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER}"]
      interval: 10s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    restart: unless-stopped
    networks:
      - app-network
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 3s
      retries: 5

  nginx:
    image: nginx:alpine
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./ssl:/etc/nginx/ssl:ro
    depends_on:
      - app
    networks:
      - app-network

networks:
  app-network:
    driver: bridge

volumes:
  postgres_data:
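The deploy: section (replicas, update_config, restart_policy) is only fully honored by Swarm, so this file can be used in two modes; note that with plain Compose multiple replicas cannot all publish host port 3000:

# Single host, plain Compose (Swarm-only keys such as update_config are ignored)
docker compose -f docker-compose.prod.yml up -d

# Swarm mode: rolling updates and replica counts as declared under deploy:
docker swarm init   # once per cluster
docker stack deploy -c docker-compose.prod.yml myapp
docker stack services myapp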

AWS Deployment

ECS Task Definition

{
  "family": "myapp",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "myapp",
      "image": "123456789.dkr.ecr.us-east-1.amazonaws.com/myapp:latest",
      "portMappings": [
        {
          "containerPort": 3000,
          "protocol": "tcp"
        }
      ],
      "environment": [
        {
          "name": "NODE_ENV",
          "value": "production"
        }
      ],
      "secrets": [
        {
          "name": "DATABASE_URL",
          "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789:secret:myapp/database-url"
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/myapp",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "ecs"
        }
      },
      "healthCheck": {
        "command": ["CMD-SHELL", "curl -f http://localhost:3000/health || exit 1"],
        "interval": 30,
        "timeout": 5,
        "retries": 3,
        "startPeriod": 60
      }
    }
  ]
}
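The task definition above assumes an execution role (ecsTaskExecutionRole) that can pull from ECR, read the Secrets Manager entry, and write to CloudWatch Logs. Registering it and rolling the service might look like this; the cluster and service names are illustrative:

# Register a new task definition revision
aws ecs register-task-definition \
  --cli-input-json file://task-definition.json

# Point the service at the new revision and force a rolling deployment
aws ecs update-service \
  --cluster myapp-cluster \
  --service myapp \
  --task-definition myapp \
  --force-new-deployment

# Wait until the deployment stabilizes
aws ecs wait services-stable --cluster myapp-cluster --services myapp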

Lambda Function (Serverless)

# serverless.yml
service: myapp

provider:
  name: aws
  runtime: nodejs20.x
  region: us-east-1
  stage: ${opt:stage, 'dev'}
  environment:
    NODE_ENV: ${self:provider.stage}
    DATABASE_URL: ${env:DATABASE_URL}
  iam:
    role:
      statements:
        - Effect: Allow
          Action:
            - dynamodb:Query
            - dynamodb:Scan
            - dynamodb:GetItem
            - dynamodb:PutItem
          Resource: "arn:aws:dynamodb:*:*:table/MyTable"

functions:
  api:
    handler: dist/lambda.handler
    events:
      - http:
          path: /{proxy+}
          method: ANY
          cors: true
    timeout: 30
    memorySize: 512

  scheduled:
    handler: dist/scheduled.handler
    events:
      - schedule: rate(1 hour)

plugins:
  - serverless-plugin-typescript
  - serverless-offline

package:
  individually: true
  patterns:
    - '!node_modules/**'
    - '!src/**'
    - 'dist/**'
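With the Serverless Framework installed as a dev dependency, deploying per stage and tailing logs is roughly:

# Deploy the whole stack to a specific stage (picks up ${opt:stage} above)
npx serverless deploy --stage production

# Redeploy a single function after a quick change
npx serverless deploy function --function api --stage production

# Tail logs for the API function
npx serverless logs --function api --stage production --tail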

Vercel Deployment (Frontend)

// vercel.json
{
  "version": 2,
  "builds": [
    {
      "src": "package.json",
      "use": "@vercel/next"
    }
  ],
  "routes": [
    {
      "src": "/api/(.*)",
      "dest": "/api/$1"
    }
  ],
  "env": {
    "NODE_ENV": "production",
    "NEXT_PUBLIC_API_URL": "@api_url"
  },
  "regions": ["iad1"],
  "github": {
    "enabled": true,
    "autoAlias": true,
    "silent": true
  }
}
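Deployments are usually driven by the Git integration configured above, but the CLI equivalents are:

# Preview deployment from the current directory
npx vercel

# Promote to production
npx vercel --prod

# Manage the environment variables referenced in vercel.json
npx vercel env add NEXT_PUBLIC_API_URL production
npx vercel env ls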

Deployment Scripts

Rolling Update Script

#!/bin/bash
# deploy.sh

set -e

ENVIRONMENT=${1:-staging}
IMAGE_TAG=${2:-latest}

echo "Deploying to $ENVIRONMENT with image tag $IMAGE_TAG"

# Update the Deployment image
kubectl set image deployment/myapp \
  myapp=myapp:$IMAGE_TAG \
  --namespace=$ENVIRONMENT

# Record the change cause for `kubectl rollout history`
# (the --record flag is deprecated)
kubectl annotate deployment/myapp \
  kubernetes.io/change-cause="deploy $IMAGE_TAG" \
  --namespace=$ENVIRONMENT \
  --overwrite

# Wait for rollout to complete
kubectl rollout status deployment/myapp \
  --namespace=$ENVIRONMENT \
  --timeout=5m

# Verify deployment
kubectl get pods \
  --namespace=$ENVIRONMENT \
  --selector=app=myapp

echo "Deployment completed successfully!"

Blue-Green Deployment

#!/bin/bash
# blue-green-deploy.sh

set -e

NAMESPACE="production"
NEW_VERSION=$1
CURRENT_SERVICE=$(kubectl get service myapp -n $NAMESPACE -o jsonpath='{.spec.selector.version}')

echo "Current version: $CURRENT_SERVICE"
echo "New version: $NEW_VERSION"

# Deploy new version (green)
kubectl apply -f k8s/deployment-$NEW_VERSION.yaml -n $NAMESPACE

# Wait for new version to be ready
kubectl wait --for=condition=available --timeout=300s \
  deployment/myapp-$NEW_VERSION -n $NAMESPACE

# Run smoke tests against the green deployment
# (assumes a per-version Service named myapp-$NEW_VERSION also exists)
if ! ./scripts/smoke-test.sh http://myapp-$NEW_VERSION:80; then
  echo "Smoke tests failed! Rolling back..."
  kubectl delete deployment/myapp-$NEW_VERSION -n $NAMESPACE
  exit 1
fi

# Switch traffic to new version
kubectl patch service myapp -n $NAMESPACE \
  -p '{"spec":{"selector":{"version":"'$NEW_VERSION'"}}}'

echo "Traffic switched to $NEW_VERSION"

# Wait and monitor
sleep 60

# Delete the old (blue) version
if [ -n "$CURRENT_VERSION" ] && [ "$CURRENT_VERSION" != "$NEW_VERSION" ]; then
  kubectl delete deployment/myapp-$CURRENT_VERSION -n $NAMESPACE
  echo "Old version $CURRENT_VERSION deleted"
fi

echo "Blue-green deployment completed!"

Health Check Script

#!/bin/bash
# health-check.sh

URL=$1
MAX_ATTEMPTS=30
SLEEP_TIME=10

if [ -z "$URL" ]; then
  echo "Usage: $0 <base-url>"
  exit 1
fi

for i in $(seq 1 $MAX_ATTEMPTS); do
  echo "Attempt $i of $MAX_ATTEMPTS"

  if curl -f -s "$URL/health" > /dev/null; then
    echo "Health check passed!"
    exit 0
  fi

  if [ $i -lt $MAX_ATTEMPTS ]; then
    echo "Health check failed, retrying in $SLEEP_TIME seconds..."
    sleep $SLEEP_TIME
  fi
done

echo "Health check failed after $MAX_ATTEMPTS attempts"
exit 1

Best Practices

Security

  • Use secrets management (never commit secrets)
  • Enable RBAC in Kubernetes
  • Use network policies to restrict traffic (see the sketch after this list)
  • Scan images for vulnerabilities
  • Run containers as non-root
  • Use read-only root filesystem where possible
  • Apply Pod Security Standards via Pod Security Admission (PodSecurityPolicy was removed in Kubernetes 1.25)
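As a sketch of the network-policy point above: the policy below denies all ingress to the app pods except traffic from the ingress controller on the app port. It assumes ingress-nginx runs in a namespace named ingress-nginx and that the cluster's CNI enforces NetworkPolicies (e.g. Calico or Cilium); adjust the selectors to your setup.

# Restrict inbound traffic to myapp pods to the ingress controller only
kubectl apply -n production -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: myapp-ingress-only
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
      ports:
        - protocol: TCP
          port: 3000
EOF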

Reliability

  • Set appropriate resource limits
  • Configure health checks (liveness, readiness)
  • Use rolling updates with maxUnavailable: 0
  • Implement circuit breakers
  • Set up autoscaling
  • Configure pod disruption budgets (see the sketch after this list)
  • Use multiple replicas across zones
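A PodDisruptionBudget sketch matching the 3-replica Deployment above, keeping at least two pods available during voluntary disruptions such as node drains:

# Keep at least 2 myapp pods available during voluntary disruptions
kubectl apply -n production -f - <<'EOF'
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: myapp
EOF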

Monitoring

  • Set up logging (ELK, Loki, CloudWatch)
  • Configure metrics (Prometheus, Datadog)
  • Set up alerts for critical issues
  • Use distributed tracing (Jaeger, Zipkin)
  • Monitor resource usage
  • Track deployment success/failure rates

Performance

  • Use CDN for static assets
  • Enable caching where appropriate
  • Optimize container images
  • Use horizontal pod autoscaling
  • Configure connection pooling
  • Implement rate limiting

Ask the user: "What platform would you like to deploy to?"