Initial commit

This commit is contained in:
Zhongwei Li
2025-11-30 09:05:12 +08:00
commit 74928623b2
25 changed files with 3741 additions and 0 deletions

View File

@@ -0,0 +1,321 @@
# Helm Scaffold Skill
A Claude Skill that generates production-ready Helm charts following Kubernetes and Helm best practices.
## Overview
This skill enables Claude to automatically generate complete, well-structured Helm charts for various application types including:
- **Deployments** - Stateless applications (web apps, APIs, microservices)
- **StatefulSets** - Stateful applications (databases, message queues)
- **Jobs** - One-time batch processing tasks
- **CronJobs** - Scheduled recurring tasks
## Features
- **Production-Ready Charts** - Follows Kubernetes and CNCF best practices
- **Security Defaults** - Includes security contexts, read-only filesystems, non-root users
- **Resource Management** - Sensible CPU/memory limits and requests
- **Health Checks** - Liveness and readiness probes configured
- **Standard Labels** - Uses the `app.kubernetes.io/*` label namespace
- **Multi-Environment** - Generate dev, staging, and production value files
- **Complete Documentation** - Includes README, NOTES.txt, and inline comments
- **Dry Run Instructions** - Step-by-step testing and validation guide
## Usage Examples
### Example 1: Simple Web Application
```
Create a Helm chart for my Node.js API called "user-service"
running on port 3000 with image myregistry/user-service:1.0.0
```
Claude will:
1. Ask clarifying questions (replicas, ingress, autoscaling)
2. Generate complete chart structure
3. Include Deployment, Service, ServiceAccount, and optional resources
4. Provide testing instructions
### Example 2: Database with Persistent Storage
```
Create a Helm chart for PostgreSQL with 20Gi of storage
```
Claude will:
1. Generate StatefulSet instead of Deployment
2. Include PersistentVolumeClaim configuration
3. Add headless service for stable network identities
4. Configure appropriate security contexts
### Example 3: Scheduled Job
```
Create a Helm chart for a data backup job that runs every night at 2 AM
```
Claude will:
1. Generate CronJob template
2. Configure the schedule: `"0 2 * * *"`
3. Include job-specific settings (backoff, restart policy)
4. Provide dry run testing instructions
### Example 4: Multi-Environment Setup
```
Create a Helm chart for my Python web app with dev, staging,
and production configurations
```
Claude will:
1. Generate base values.yaml
2. Create values-dev.yaml with minimal resources
3. Create values-staging.yaml with moderate resources
4. Create values-prod.yaml with HA configuration, autoscaling, and security hardening
5. Provide deployment commands for each environment
## Chart Structure
Generated charts follow this structure:
```
<chart-name>/
├── Chart.yaml              # Chart metadata
├── values.yaml             # Default configuration values
├── values-dev.yaml         # Development overrides (optional)
├── values-staging.yaml     # Staging overrides (optional)
├── values-prod.yaml        # Production overrides (optional)
├── .helmignore             # Files to ignore when packaging
├── README.md               # Chart documentation
└── templates/
    ├── _helpers.tpl        # Template helper functions
    ├── NOTES.txt           # Post-installation notes
    ├── deployment.yaml     # Deployment/StatefulSet/Job/CronJob
    ├── service.yaml        # Kubernetes Service
    ├── serviceaccount.yaml # ServiceAccount
    ├── ingress.yaml        # Ingress (optional)
    ├── hpa.yaml            # HorizontalPodAutoscaler (optional)
    ├── configmap.yaml      # ConfigMap (optional)
    └── secret.yaml         # Secret (optional)
```
## Best Practices Included
The skill automatically incorporates:
### Security
- Read-only root filesystem
- Non-root user execution
- Dropped capabilities
- Security contexts for pods and containers
- Automatic service account token mounting control
### Resource Management
- CPU and memory limits
- CPU and memory requests
- Sensible defaults based on application type
### Labels and Selectors
- Standard Kubernetes labels (`app.kubernetes.io/*`)
- Proper selector label configuration
- Consistent labeling across resources
### Health Checks
- Liveness probes for container health
- Readiness probes for traffic routing
- Configurable probe parameters
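As an illustration, the probe parameters above typically surface in the chart's values file; a hedged sketch of what a generated `values.yaml` might expose (key names assumed from the chart templates, `/healthz` and `/ready` are hypothetical endpoints):

```yaml
livenessProbe:
  enabled: true
  path: /healthz          # assumed health endpoint
  initialDelaySeconds: 10
  periodSeconds: 10
  timeoutSeconds: 5
  successThreshold: 1
  failureThreshold: 3
readinessProbe:
  enabled: true
  path: /ready            # assumed readiness endpoint
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 5
  successThreshold: 1
  failureThreshold: 3
```

Slow-starting applications usually only need `initialDelaySeconds` raised; the other parameters rarely change.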
### Scalability
- Horizontal Pod Autoscaler support
- Pod anti-affinity for high availability
- Node selector and toleration support
## Testing Your Charts
Every generated chart includes comprehensive testing instructions:
1. **Lint** - Validate chart structure and syntax
   ```bash
   helm lint .
   ```
2. **Template** - Render all Kubernetes manifests
   ```bash
   helm template my-app .
   ```
3. **Dry Run** - Simulate installation
   ```bash
   helm install my-app . --dry-run --debug
   ```
4. **Test Environment** - Deploy to test namespace
   ```bash
   kubectl create namespace test
   helm install my-app . -n test
   ```
## Customization
All generated values can be customized:
### Via Command Line
```bash
helm install my-app . \
  --set replicaCount=3 \
  --set image.tag=2.0.0 \
  --set resources.limits.memory=512Mi
```
### Via Values File
```bash
helm install my-app . -f custom-values.yaml
```
### Via Multiple Values Files
```bash
helm install my-app . \
  -f values.yaml \
  -f values-prod.yaml \
  -f overrides.yaml
```
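Later files take precedence: Helm deep-merges value files left to right, with scalars from later files overriding earlier ones while untouched keys survive. A rough Python sketch of that merge semantics (illustrative only, not Helm's actual implementation):

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge override into base; later (override) values win for scalars."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)  # recurse into nested maps
        else:
            merged[key] = value  # scalars and lists are replaced outright
    return merged

base = {"replicaCount": 1, "resources": {"limits": {"cpu": "200m", "memory": "128Mi"}}}
prod = {"replicaCount": 3, "resources": {"limits": {"memory": "512Mi"}}}

result = deep_merge(base, prod)
# replicaCount is overridden to 3; resources.limits.cpu "200m" is kept from base
```

This is why a small `values-prod.yaml` only needs to list the keys that differ from `values.yaml`.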
## Requirements
To use generated charts, you need:
- **Kubernetes** 1.24+ cluster
- **Helm** 3.0+ CLI tool
- **kubectl** configured to access your cluster
## Supported Application Types
| Type | Use Case | Key Features |
| --------------- | ---------------------------------- | --------------------------------------- |
| **Deployment** | Stateless apps, APIs, web services | Rolling updates, multiple replicas |
| **StatefulSet** | Databases, caches, message queues | Stable network IDs, persistent storage |
| **Job** | Data migration, batch processing | One-time execution, completion tracking |
| **CronJob** | Scheduled tasks, backups, reports | Recurring schedule, job history |
## Common Use Cases
### Microservices Architecture
Generate consistent charts for all microservices with standard labels, security settings, and observability configuration.
### Database Deployment
Create StatefulSet-based charts with persistent storage, proper backup strategies, and initialization scripts.
## Troubleshooting
### Chart Fails Lint
- Check YAML indentation (use 2 spaces, no tabs)
- Verify all template variables exist in values.yaml
- Ensure Chart.yaml has valid semantic version
### Pods Won't Start
- Check resource limits (may be too low)
- Verify image pull secrets are configured
- Review security context settings
- Check persistent volume claims (for StatefulSets)
### Health Check Failures
- Increase `initialDelaySeconds` for slow-starting apps
- Verify health check endpoint paths
- Ensure application exposes health endpoints
### Template Errors
- Validate Go template syntax
- Check for nil pointer errors (missing values)
- Use `helm template` to debug rendering
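Nil pointer errors usually come from referencing a value that was never set; guarding with `default` or `with` avoids them. A small illustrative fragment (field names are hypothetical):

```yaml
# Guarded access: render a fallback instead of failing on a missing value
logLevel: {{ .Values.logLevel | default "info" }}
{{- with .Values.extraLabels }}
extraLabels:
  {{- toYaml . | nindent 2 }}
{{- end }}
```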
## Advanced Features
### Custom Resource Definitions (CRDs)
For CRDs, create them in a separate chart and add that chart as a dependency.
### Subchart Management
```yaml
# Chart.yaml
dependencies:
  - name: postgresql
    version: "12.x.x"
    repository: "https://charts.bitnami.com/bitnami"
```
### Values Schema Validation
Add `values.schema.json` for IDE autocomplete and validation:
```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "replicaCount": {
      "type": "integer",
      "minimum": 1
    }
  }
}
```
### Hooks for Lifecycle Management
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "-5"
```
## Contributing
To extend this skill:
1. Add new template patterns to the SKILL.md
2. Include additional best practices
3. Add examples for specific use cases
4. Update testing procedures
## Support
For issues or questions:
- Review the generated README.md in your chart
- Check Helm documentation: https://helm.sh/docs/
- Review Kubernetes best practices: https://kubernetes.io/docs/concepts/configuration/overview/
## Version History
- **v1.0.0** (2025-10-23) - Initial release
- Support for Deployment, StatefulSet, Job, CronJob
- Multi-environment configuration
- Comprehensive security defaults
- Dry run testing instructions
- Production-ready templates

View File

@@ -0,0 +1,400 @@
---
name: helm-scaffold
description: Generate production-ready Helm charts for Kubernetes applications. Use when users need to create new Helm charts, convert Kubernetes manifests to Helm templates, scaffold charts for Deployments/StatefulSets/Jobs/CronJobs, create multi-environment configurations, or standardize organizational chart templates with CNCF/Helm best practices. Uses Python scaffolding script and template assets for automated generation.
---
# Helm Scaffold
Automate production-ready Helm chart generation using template assets, Python scaffolding scripts, and Kubernetes best practices through conversational interaction.
## Core Capabilities
- **Automated scaffolding** using Python script (`scripts/scaffold_chart.py`)
- **Template-based generation** from `assets/templates/` directory
- **Convert Kubernetes manifests** to Helm templates
- **Multi-environment configurations** (dev, staging, prod)
- **Organizational standardization** with team policies
- **Comprehensive testing** with dry-run workflows
## Quick Start
For simple chart generation, use the Python scaffolding script directly:
```bash
python3 scripts/scaffold_chart.py <chart-name> \
  --workload-type deployment \
  --output /mnt/user-data/outputs \
  --ingress \
  --hpa
```
## Workflow Decision Tree
```
User Request
├─ Simple chart (name + type provided)
│  └─ Use scaffold_chart.py directly
├─ Complex chart (many options)
│  └─ Ask clarifying questions, then use scaffold_chart.py
├─ Manifest conversion
│  └─ Manual templatization process
└─ Custom requirements
   └─ Manual generation with template references
```
## Interactive Workflow
### Step 1: Understand Use Case
Identify the scenario:
- **Simple new chart**: Use `scaffold_chart.py` with user-provided params
- **Complex chart**: Ask questions, build command
- **Manifest conversion**: Extract variables, templatize
- **Team template**: Apply organizational standards
### Step 2: Gather Requirements
**Minimum required:**
- Chart name
- Workload type (deployment, statefulset, job, cronjob)
**Optional (ask if not provided):**
- Container image repository and tag
- Application port
- Include Ingress? (--ingress flag)
- Include HPA? (--hpa flag)
- Include ConfigMap? (--configmap flag)
- Multi-environment configs?
### Step 3: Generate Chart
#### Option A: Using scaffold_chart.py (Recommended)
For straightforward charts, execute the Python script:
```python
import subprocess

cmd = [
    'python3', 'scripts/scaffold_chart.py',
    chart_name,
    '--workload-type', workload_type,
    '--output', '/mnt/user-data/outputs',
]
if include_ingress:
    cmd.append('--ingress')
if include_hpa:
    cmd.append('--hpa')
if include_configmap:
    cmd.append('--configmap')

subprocess.run(cmd, check=True)
```
**The script automatically:**
- Creates chart directory structure
- Copies template files from `assets/templates/`
- Replaces `CHARTNAME` placeholder with actual chart name
- Includes only requested resources
- Applies best practices
#### Option B: Manual Generation
For custom requirements, manually copy and modify templates:
1. Read template from `assets/templates/<workload>/<file>.yaml`
2. Replace `CHARTNAME` with actual chart name
3. Customize based on user requirements
4. Write to `/home/claude/<chart-name>/templates/`
### Step 4: Multi-Environment Configuration
If user requests dev/staging/prod configs, create additional values files:
**values-dev.yaml:**
```yaml
replicaCount: 1
resources:
  limits: {cpu: 200m, memory: 128Mi}
  requests: {cpu: 25m, memory: 32Mi}
env:
  - name: LOG_LEVEL
    value: "debug"
```
**values-prod.yaml:**
```yaml
replicaCount: 3
resources:
  limits: {cpu: 1000m, memory: 512Mi}
  requests: {cpu: 100m, memory: 128Mi}
autoscaling:
  enabled: true
  minReplicas: 3
  maxReplicas: 10
```
### Step 5: Testing Instructions
**Always provide comprehensive testing workflow:**
```bash
# Navigate to chart directory
cd <chart-name>
# 1. Validate chart structure
helm lint .
# 2. Render templates locally
helm template <chart-name> .
# 3. Dry run installation
helm install <chart-name> . --dry-run --debug
# 4. Test with specific values
helm template <chart-name> . -f values-dev.yaml
# 5. Deploy to test namespace
kubectl create namespace test
helm install <chart-name> . -n test
# 6. Verify deployment
kubectl get all -n test
helm status <chart-name> -n test
# 7. Cleanup
helm uninstall <chart-name> -n test
kubectl delete namespace test
```
### Step 6: Deliver Output
- Chart is created in `/mnt/user-data/outputs/<chart-name>/`
- Provide download link
- Include testing instructions
- Explain customization options
## Workload Types
Load `references/workload-types.md` for detailed decision tree and characteristics.
**Quick reference:**
- **Deployment**: Stateless apps (web, API, microservices)
- **StatefulSet**: Stateful apps (databases, caches) - stable IDs, persistent storage
- **Job**: One-time tasks (migrations, ETL)
- **CronJob**: Scheduled tasks (backups, reports)
**Template locations:**
- `assets/templates/deployment/deployment.yaml`
- `assets/templates/statefulset/statefulset.yaml`
- `assets/templates/job/job.yaml`
- `assets/templates/cronjob/cronjob.yaml`
## Converting Manifests to Helm
When user provides raw Kubernetes YAML:
1. **Analyze manifests**: Identify resources and configurable values
2. **Extract variables**: Images, replicas, ports, resources, env-specific settings
3. **Create values.yaml**: Organize extracted values logically
4. **Templatize YAML**:
- Replace hardcoded values with `{{ .Values.* }}`
- Use `{{ include "CHARTNAME.fullname" . }}` for names
- Use `{{ include "CHARTNAME.labels" . }}` for labels
5. **Add helpers**: Copy `assets/templates/_helpers.tpl` and customize
6. **Document**: Explain what was parameterized
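As an illustration of step 4, a hardcoded Deployment fragment and its templatized form side by side (the values structure shown is the one assumed throughout this skill):

```yaml
# Before: hardcoded manifest
#   replicas: 2
#   image: nginx:1.25
#
# After: parameterized Helm template
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```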
## Template Assets Structure
```
assets/templates/
├── Chart.yaml              # Base chart metadata
├── values.yaml             # Complete values with all options
├── .helmignore             # Files to ignore
├── _helpers.tpl            # Helper functions (CHARTNAME placeholder)
├── NOTES.txt               # Post-install instructions
├── deployment/
│   └── deployment.yaml     # Deployment template
├── statefulset/
│   └── statefulset.yaml    # StatefulSet template
├── job/
│   └── job.yaml            # Job template
├── cronjob/
│   └── cronjob.yaml        # CronJob template
├── service/
│   └── service.yaml        # Service template
├── ingress/
│   └── ingress.yaml        # Ingress template
├── hpa/
│   └── hpa.yaml            # HPA template
├── configmap/
│   └── configmap.yaml      # ConfigMap template
└── rbac/
    └── serviceaccount.yaml # ServiceAccount template
```
All templates use `CHARTNAME` placeholder which is replaced by the script.
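The replacement step might look roughly like this (a sketch under assumptions, not the script's actual implementation):

```python
from pathlib import Path

def replace_placeholder(chart_dir: str, chart_name: str,
                        placeholder: str = "CHARTNAME") -> int:
    """Rewrite placeholder occurrences in every file under chart_dir.

    Returns the number of files changed.
    """
    changed = 0
    for path in Path(chart_dir).rglob("*"):
        if path.is_file():
            text = path.read_text()
            if placeholder in text:
                path.write_text(text.replace(placeholder, chart_name))
                changed += 1
    return changed
```

Because the placeholder is a plain token rather than a Go-template expression, a straight textual replace is safe and leaves all `{{ ... }}` constructs untouched.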
## Scripts
### scaffold_chart.py
**Purpose**: Automated chart generation from templates
**Usage**:
```bash
python3 scripts/scaffold_chart.py CHART_NAME [OPTIONS]

Arguments:
  CHART_NAME            Name of the Helm chart

Options:
  -w, --workload-type   Type: deployment, statefulset, job, cronjob (default: deployment)
  -o, --output          Output directory (default: current directory)
  --ingress             Include Ingress resource
  --hpa                 Include HorizontalPodAutoscaler
  --configmap           Include ConfigMap
```
**What it does:**
- Creates chart directory structure
- Copies relevant templates from `assets/templates/`
- Replaces `CHARTNAME` placeholder
- Includes only requested optional resources
- Applies best practices automatically
## Best Practices (Auto-Applied)
The templates in `assets/templates/` already include:
- ✅ Standard Kubernetes labels (`app.kubernetes.io/*`)
- ✅ Security contexts (readOnlyRootFilesystem, runAsNonRoot, dropped capabilities)
- ✅ Resource limits and requests
- ✅ Health checks (liveness and readiness probes)
- ✅ Helper functions for naming and labels
- ✅ Proper selector labels
- ✅ Service account configuration
## Organizational Standardization
For platform teams needing consistent charts:
1. **Capture standards**: Ask about required labels, policies, security requirements
2. **Modify templates**: Update `assets/templates/_helpers.tpl` with org labels
3. **Generate**: Use scaffold_chart.py with modified templates
4. **Document**: Explain customization points
Example org-specific helper addition:
```yaml
{{- define "CHARTNAME.orgLabels" -}}
org.example.com/cost-center: {{ .Values.org.costCenter | required "Cost center required" }}
org.example.com/team: {{ .Values.org.team | required "Team name required" }}
{{- end }}
```
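The `required` calls above fail rendering unless the corresponding values are supplied, so a chart using this helper needs a matching values fragment (keys assumed from the helper; the concrete values are hypothetical):

```yaml
org:
  costCenter: "cc-1234"   # hypothetical cost-center code
  team: "platform"        # hypothetical team name
```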
## Success Criteria
Generated charts must:
- ✅ Pass `helm lint` without errors (95%+ target)
- ✅ Follow CNCF/Helm best practices
- ✅ Include security contexts by default
- ✅ Have resource limits configured
- ✅ Use standard Kubernetes labels
- ✅ Be deployment-ready in under 15 minutes
## References
Load as needed based on use case:
**For workload selection:**
- `references/workload-types.md` - Decision tree, characteristics, when to use each type
**For testing:**
- `references/testing-guide.md` - Comprehensive dry-run workflows, mock values, validation commands
**For best practices:**
- `references/best-practices.md` - CNCF/Helm standards, security guidelines
**For examples:**
- `references/examples.md` - Real-world chart scenarios
**For template reference:**
- `references/templates.md` - Inline template documentation
## Example Usage
### Example 1: Simple Web App
```
User: "Create a Helm chart for my Node.js API"
Claude: I'll create a chart for your Node.js API. A few quick questions:
1. What's the chart name? (e.g., "user-api")
2. What's your container image? (e.g., "myorg/user-api:1.0.0")
3. Do you need an Ingress? (yes/no)
User: Name is user-api, image is myorg/user-api:1.0.0, yes to ingress
Claude: [Executes]
python3 scripts/scaffold_chart.py user-api \
  --workload-type deployment \
  --output /mnt/user-data/outputs \
  --ingress
[Chart generated successfully]
[Provides testing instructions]
```
### Example 2: Database
```
User: "Create a Helm chart for PostgreSQL with persistent storage"
Claude: I'll create a StatefulSet chart for PostgreSQL.
What storage size do you need? (e.g., 10Gi, 20Gi)
User: 20Gi
Claude: [Executes]
python3 scripts/scaffold_chart.py postgresql \
  --workload-type statefulset \
  --output /mnt/user-data/outputs

[Then manually adds persistence config to values.yaml]
persistence:
  enabled: true
  size: 20Gi
  storageClass: standard
```
### Example 3: Scheduled Backup
```
User: "Create a CronJob chart for nightly backups"
Claude: [Executes]
python3 scripts/scaffold_chart.py db-backup \
  --workload-type cronjob \
  --output /mnt/user-data/outputs

[Then updates values.yaml with schedule]
cronjob:
  schedule: "0 2 * * *"
```
## Troubleshooting
**Issue**: Script fails to find templates
**Solution**: Run the script from the skill directory, or provide its full path.

**Issue**: CHARTNAME not replaced
**Solution**: The script handles replacement automatically; for manual generation, use find-and-replace.

**Issue**: Generated chart fails lint
**Solution**: Check values.yaml for required fields and review the templates.
## Notes
- Always use `scaffold_chart.py` when possible - it's faster and more consistent
- Templates use `CHARTNAME` placeholder - script replaces automatically
- For complex customization, read templates from assets and modify manually
- Multi-environment configs are created separately after initial generation
- Testing instructions are critical - always include them

View File

@@ -0,0 +1,30 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
# CI/CD
.gitlab-ci.yml
.travis.yml
.github/
# Documentation
README.md
CONTRIBUTING.md

View File

@@ -0,0 +1,15 @@
apiVersion: v2
name: CHARTNAME
description: A Helm chart for Kubernetes
type: application
version: 0.1.0
appVersion: "1.0.0"
keywords:
  - kubernetes
  - helm
home: https://github.com/yourorg/CHARTNAME
sources:
  - https://github.com/yourorg/CHARTNAME
maintainers:
  - name: Your Name
    email: your.email@example.com

View File

@@ -0,0 +1,22 @@
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range $host := .Values.ingress.hosts }}
  {{- range .paths }}
  http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ .path }}
  {{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
  export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "CHARTNAME.fullname" . }})
  export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
     NOTE: It may take a few minutes for the LoadBalancer IP to be available.
           You can watch its status by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "CHARTNAME.fullname" . }}'
  export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "CHARTNAME.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
  echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
  export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "CHARTNAME.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
  export CONTAINER_PORT=$(kubectl get pod --namespace {{ .Release.Namespace }} $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:$CONTAINER_PORT
{{- end }}

View File

@@ -0,0 +1,60 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "CHARTNAME.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
*/}}
{{- define "CHARTNAME.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "CHARTNAME.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "CHARTNAME.labels" -}}
helm.sh/chart: {{ include "CHARTNAME.chart" . }}
{{ include "CHARTNAME.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "CHARTNAME.selectorLabels" -}}
app.kubernetes.io/name: {{ include "CHARTNAME.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "CHARTNAME.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "CHARTNAME.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,12 @@
{{- if .Values.configMap.enabled -}}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "CHARTNAME.fullname" . }}
  labels:
    {{- include "CHARTNAME.labels" . | nindent 4 }}
data:
  {{- with .Values.configMap.data }}
  {{- toYaml . | nindent 2 }}
  {{- end }}
{{- end }}

View File

@@ -0,0 +1,66 @@
apiVersion: batch/v1
kind: CronJob
metadata:
  name: {{ include "CHARTNAME.fullname" . }}
  labels:
    {{- include "CHARTNAME.labels" . | nindent 4 }}
spec:
  schedule: {{ .Values.cronJob.schedule | quote }}
  {{- if .Values.cronJob.concurrencyPolicy }}
  concurrencyPolicy: {{ .Values.cronJob.concurrencyPolicy }}
  {{- end }}
  {{- if .Values.cronJob.successfulJobsHistoryLimit }}
  successfulJobsHistoryLimit: {{ .Values.cronJob.successfulJobsHistoryLimit }}
  {{- end }}
  {{- if .Values.cronJob.failedJobsHistoryLimit }}
  failedJobsHistoryLimit: {{ .Values.cronJob.failedJobsHistoryLimit }}
  {{- end }}
  jobTemplate:
    spec:
      {{- if .Values.cronJob.backoffLimit }}
      backoffLimit: {{ .Values.cronJob.backoffLimit }}
      {{- end }}
      {{- if .Values.cronJob.activeDeadlineSeconds }}
      activeDeadlineSeconds: {{ .Values.cronJob.activeDeadlineSeconds }}
      {{- end }}
      template:
        metadata:
          labels:
            {{- include "CHARTNAME.selectorLabels" . | nindent 12 }}
          {{- with .Values.podAnnotations }}
          annotations:
            {{- toYaml . | nindent 12 }}
          {{- end }}
        spec:
          restartPolicy: {{ .Values.cronJob.restartPolicy | default "OnFailure" }}
          {{- with .Values.imagePullSecrets }}
          imagePullSecrets:
            {{- toYaml . | nindent 12 }}
          {{- end }}
          serviceAccountName: {{ include "CHARTNAME.serviceAccountName" . }}
          securityContext:
            {{- toYaml .Values.podSecurityContext | nindent 12 }}
          containers:
          - name: {{ .Chart.Name }}
            securityContext:
              {{- toYaml .Values.securityContext | nindent 14 }}
            image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
            imagePullPolicy: {{ .Values.image.pullPolicy }}
            {{- if .Values.cronJob.command }}
            command:
              {{- toYaml .Values.cronJob.command | nindent 14 }}
            {{- end }}
            {{- if .Values.cronJob.args }}
            args:
              {{- toYaml .Values.cronJob.args | nindent 14 }}
            {{- end }}
            resources:
              {{- toYaml .Values.resources | nindent 14 }}
            {{- with .Values.env }}
            env:
              {{- toYaml . | nindent 14 }}
            {{- end }}
          {{- with .Values.nodeSelector }}
          nodeSelector:
            {{- toYaml . | nindent 12 }}
          {{- end }}

View File

@@ -0,0 +1,88 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "CHARTNAME.fullname" . }}
  labels:
    {{- include "CHARTNAME.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "CHARTNAME.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
        {{- with .Values.podAnnotations }}
        {{- toYaml . | nindent 8 }}
        {{- end }}
      labels:
        {{- include "CHARTNAME.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "CHARTNAME.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: {{ .Values.service.port }}
              protocol: TCP
          {{- if .Values.livenessProbe.enabled }}
          livenessProbe:
            httpGet:
              path: {{ .Values.livenessProbe.path }}
              port: http
            initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
            periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
            timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
            successThreshold: {{ .Values.livenessProbe.successThreshold }}
            failureThreshold: {{ .Values.livenessProbe.failureThreshold }}
          {{- end }}
          {{- if .Values.readinessProbe.enabled }}
          readinessProbe:
            httpGet:
              path: {{ .Values.readinessProbe.path }}
              port: http
            initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
            periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
            timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
            successThreshold: {{ .Values.readinessProbe.successThreshold }}
            failureThreshold: {{ .Values.readinessProbe.failureThreshold }}
          {{- end }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          {{- with .Values.env }}
          env:
            {{- toYaml . | nindent 12 }}
          {{- end }}
          {{- if .Values.volumeMounts }}
          volumeMounts:
            {{- toYaml .Values.volumeMounts | nindent 12 }}
          {{- end }}
      {{- if .Values.volumes }}
      volumes:
        {{- toYaml .Values.volumes | nindent 8 }}
      {{- end }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}

View File

@@ -0,0 +1,32 @@
{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ include "CHARTNAME.fullname" . }}
  labels:
    {{- include "CHARTNAME.labels" . | nindent 4 }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "CHARTNAME.fullname" . }}
  minReplicas: {{ .Values.autoscaling.minReplicas }}
  maxReplicas: {{ .Values.autoscaling.maxReplicas }}
  metrics:
    {{- if .Values.autoscaling.targetCPUUtilizationPercentage }}
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
    {{- end }}
    {{- if .Values.autoscaling.targetMemoryUtilizationPercentage }}
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
    {{- end }}
{{- end }}

View File

@@ -0,0 +1,41 @@
{{- if .Values.ingress.enabled -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ include "CHARTNAME.fullname" . }}
  labels:
    {{- include "CHARTNAME.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if .Values.ingress.className }}
  ingressClassName: {{ .Values.ingress.className }}
  {{- end }}
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ .path }}
            pathType: {{ .pathType }}
            backend:
              service:
                name: {{ include "CHARTNAME.fullname" $ }}
                port:
                  number: {{ $.Values.service.port }}
          {{- end }}
    {{- end }}
{{- end }}

View File

@@ -0,0 +1,57 @@
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ include "CHARTNAME.fullname" . }}
  labels:
    {{- include "CHARTNAME.labels" . | nindent 4 }}
spec:
  {{- if .Values.job.backoffLimit }}
  backoffLimit: {{ .Values.job.backoffLimit }}
  {{- end }}
  {{- if .Values.job.activeDeadlineSeconds }}
  activeDeadlineSeconds: {{ .Values.job.activeDeadlineSeconds }}
  {{- end }}
  {{- if .Values.job.ttlSecondsAfterFinished }}
  ttlSecondsAfterFinished: {{ .Values.job.ttlSecondsAfterFinished }}
  {{- end }}
  template:
    metadata:
      labels:
        {{- include "CHARTNAME.selectorLabels" . | nindent 8 }}
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
    spec:
      restartPolicy: {{ .Values.job.restartPolicy | default "OnFailure" }}
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "CHARTNAME.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          {{- if .Values.job.command }}
          command:
            {{- toYaml .Values.job.command | nindent 12 }}
          {{- end }}
          {{- if .Values.job.args }}
          args:
            {{- toYaml .Values.job.args | nindent 12 }}
          {{- end }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          {{- with .Values.env }}
          env:
            {{- toYaml . | nindent 12 }}
          {{- end }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}

View File

@@ -0,0 +1,12 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "CHARTNAME.serviceAccountName" . }}
  labels:
    {{- include "CHARTNAME.labels" . | nindent 4 }}
  {{- with .Values.serviceAccount.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
{{- end }}

View File

@@ -0,0 +1,25 @@
apiVersion: v1
kind: Service
metadata:
name: {{ include "CHARTNAME.fullname" . }}
labels:
{{- include "CHARTNAME.labels" . | nindent 4 }}
{{- with .Values.service.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
type: {{ .Values.service.type }}
{{- if and (eq .Values.service.type "LoadBalancer") .Values.service.loadBalancerIP }}
loadBalancerIP: {{ .Values.service.loadBalancerIP }}
{{- end }}
ports:
- port: {{ .Values.service.port }}
targetPort: http
protocol: TCP
name: http
{{- if and (or (eq .Values.service.type "NodePort") (eq .Values.service.type "LoadBalancer")) .Values.service.nodePort }}
nodePort: {{ .Values.service.nodePort }}
{{- end }}
selector:
{{- include "CHARTNAME.selectorLabels" . | nindent 4 }}

View File

@@ -0,0 +1,82 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ include "CHARTNAME.fullname" . }}
labels:
{{- include "CHARTNAME.labels" . | nindent 4 }}
spec:
serviceName: {{ include "CHARTNAME.fullname" . }}-headless
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
{{- include "CHARTNAME.selectorLabels" . | nindent 6 }}
template:
metadata:
annotations:
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
{{- with .Values.podAnnotations }}
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "CHARTNAME.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "CHARTNAME.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: {{ .Values.service.port }}
protocol: TCP
{{- if .Values.livenessProbe.enabled }}
livenessProbe:
httpGet:
path: {{ .Values.livenessProbe.path }}
port: http
initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
{{- end }}
{{- if .Values.readinessProbe.enabled }}
readinessProbe:
httpGet:
path: {{ .Values.readinessProbe.path }}
port: http
initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
{{- end }}
resources:
{{- toYaml .Values.resources | nindent 12 }}
volumeMounts:
- name: data
mountPath: /data
{{- if .Values.volumeMounts }}
{{- toYaml .Values.volumeMounts | nindent 12 }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: [ "ReadWriteOnce" ]
{{- if .Values.persistence.storageClass }}
storageClassName: {{ .Values.persistence.storageClass }}
{{- end }}
resources:
requests:
storage: {{ .Values.persistence.size }}

View File

@@ -0,0 +1,148 @@
# Default values for CHARTNAME.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
repository: nginx
pullPolicy: IfNotPresent
# Overrides the image tag whose default is the chart appVersion.
tag: ""
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
podAnnotations: {}
podSecurityContext:
fsGroup: 2000
securityContext:
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
allowPrivilegeEscalation: false
service:
type: ClusterIP
port: 80
annotations: {}
# For LoadBalancer type
# loadBalancerIP: ""
# For NodePort type
# nodePort: 30000
ingress:
enabled: false
className: ""
annotations: {}
# kubernetes.io/ingress.class: nginx
# cert-manager.io/cluster-issuer: letsencrypt-prod
hosts:
- host: chart-example.local
paths:
- path: /
pathType: Prefix
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources:
limits:
cpu: 500m
memory: 256Mi
requests:
cpu: 50m
memory: 64Mi
livenessProbe:
enabled: true
path: /healthz
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 3
readinessProbe:
enabled: true
path: /ready
initialDelaySeconds: 5
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 3
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 100
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
nodeSelector: {}
tolerations: []
affinity: {}
env: []
# - name: ENVIRONMENT
# value: "production"
volumeMounts: []
# - name: config
# mountPath: /etc/config
volumes: []
# - name: config
# configMap:
# name: my-config
configMap:
enabled: false
data: {}
# key1: value1
# key2: value2
# For StatefulSet deployments
persistence:
enabled: false
storageClass: ""
size: 8Gi
# For Job workloads
job:
backoffLimit: 4
activeDeadlineSeconds: 600
ttlSecondsAfterFinished: 86400
restartPolicy: OnFailure
command: []
args: []
# For CronJob workloads
cronJob:
schedule: "0 0 * * *"
concurrencyPolicy: Forbid
successfulJobsHistoryLimit: 3
failedJobsHistoryLimit: 1
backoffLimit: 4
activeDeadlineSeconds: 600
restartPolicy: OnFailure
command: []
args: []

View File

@@ -0,0 +1,282 @@
# Helm Chart Best Practices
CNCF and Helm community standards for production-ready charts.
## Chart Metadata Standards
### Chart.yaml Requirements
- `apiVersion: v2` (Helm 3)
- Semantic versioning (version, appVersion)
- Meaningful description
- Keywords for discoverability
- Maintainer information
### Naming Conventions
- Chart names: lowercase, hyphens (no underscores)
- Resource names: `{{ template "name.fullname" . }}`
- Avoid hardcoding names
## Kubernetes Label Standards
**Required labels (app.kubernetes.io/* namespace):**
```yaml
labels:
app.kubernetes.io/name: {{ include "chart.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
helm.sh/chart: {{ include "chart.chart" . }}
```
**Selector labels (must be immutable):**
```yaml
selector:
matchLabels:
app.kubernetes.io/name: {{ include "chart.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
```
## Security Best Practices
### Pod Security Context
```yaml
podSecurityContext:
runAsNonRoot: true
runAsUser: 1000
fsGroup: 1000
seccompProfile:
type: RuntimeDefault # Production
```
### Container Security Context
```yaml
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsNonRoot: true
capabilities:
drop:
- ALL
```
### Security Guidelines
- Never run as root (UID 0)
- Drop all Linux capabilities by default
- Use read-only root filesystem when possible
- Apply seccomp profiles in production
- Avoid privileged containers
- Don't expose host ports or namespaces
## Resource Management
### Always Define Resources
```yaml
resources:
limits:
cpu: 500m
memory: 256Mi
requests:
cpu: 50m
memory: 64Mi
```
### Resource Sizing Guidelines
- **Small apps**: 50m CPU / 64Mi memory (requests)
- **Medium apps**: 100m CPU / 128Mi memory (requests)
- **Large apps**: 250m+ CPU / 256Mi+ memory (requests)
- Limits should be 2-10x requests
- Monitor and adjust based on actual usage
## Health Checks
### Liveness Probe
Detects when a container needs to be restarted:
```yaml
livenessProbe:
httpGet:
path: /health
port: http
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
```
### Readiness Probe
Detects when a container is ready to accept traffic:
```yaml
readinessProbe:
httpGet:
path: /ready
port: http
initialDelaySeconds: 5
periodSeconds: 5
timeoutSeconds: 3
failureThreshold: 3
```
### Probe Best Practices
- Always define both liveness and readiness
- Use appropriate initialDelaySeconds for slow-starting apps
- Health endpoints should be lightweight
- Don't use same endpoint for liveness and readiness if startup is slow
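For slow-starting apps, a dedicated `startupProbe` (available since Kubernetes 1.18) is usually better than loosening the liveness settings; a minimal sketch, assuming the same `/healthz` endpoint:

```yaml
startupProbe:
  httpGet:
    path: /healthz
    port: http
  # Allows up to 30 * 10s = 300s for startup; liveness checks only begin
  # once the startup probe has succeeded.
  failureThreshold: 30
  periodSeconds: 10
```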
## Values.yaml Organization
### Structure
```yaml
# 1. Replica configuration
replicaCount: 1
# 2. Image configuration
image:
repository: example/app
pullPolicy: IfNotPresent
tag: "" # Defaults to Chart.appVersion
# 3. Service account
serviceAccount:
create: true
name: ""
# 4. Security contexts
podSecurityContext: {}
securityContext: {}
# 5. Service configuration
service:
type: ClusterIP
port: 80
# 6. Resources
resources: {}
# 7. Autoscaling
autoscaling:
enabled: false
# 8. Additional features (Ingress, ConfigMaps, etc.)
```
### Documentation
- Comment every major section
- Provide examples for complex values
- Document accepted value types
- Explain default behavior
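Applying these guidelines, a documented value might look like the following (the `# --` prefix is the convention used by tools such as helm-docs; treat it as optional):

```yaml
# -- Number of pod replicas (integer).
# Ignored when autoscaling.enabled is true.
replicaCount: 1

# -- Extra environment variables (list of EnvVar objects). Example:
#   - name: LOG_LEVEL
#     value: "debug"
env: []
```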
## Template Best Practices
### Use Helper Functions
```yaml
# _helpers.tpl
{{- define "app.fullname" -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" }}
{{- end }}
```
### Conditional Resources
```yaml
{{- if .Values.ingress.enabled -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
...
{{- end }}
```
### Checksum Annotations
Force pod restart on config changes:
```yaml
annotations:
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```
## NOTES.txt Guidelines
Provide clear post-installation instructions:
```
1. How to access the application
2. Default credentials (if any)
3. Next steps for configuration
4. Links to documentation
5. Troubleshooting commands
```
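A minimal NOTES.txt covering points 1 and 5 might look like this (the `chart.fullname` helper name is a placeholder for your chart's helper):

```
Thank you for installing {{ .Chart.Name }} (release: {{ .Release.Name }}).

1. Access the application:
   kubectl port-forward svc/{{ include "chart.fullname" . }} 8080:{{ .Values.service.port }}

2. Check rollout status:
   kubectl rollout status deployment/{{ include "chart.fullname" . }}
```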
## Multi-Environment Patterns
### Base + Override Pattern
- `values.yaml`: Base defaults
- `values-dev.yaml`: Development overrides
- `values-prod.yaml`: Production overrides
### Environment-Specific Settings
- **Dev**: Debug enabled, minimal resources, verbose logging
- **Staging**: Production-like, moderate resources
- **Prod**: HA, autoscaling, security hardening, monitoring
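A staging overlay under this pattern might look like the following sketch (keys assume the base `values.yaml` structure used throughout this document):

```yaml
# values-staging.yaml — production-like shape, moderate resources
replicaCount: 2
resources:
  limits: {cpu: 500m, memory: 256Mi}
  requests: {cpu: 100m, memory: 128Mi}
env:
  - name: LOG_LEVEL
    value: "info"
```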
## Common Pitfalls to Avoid
**Don't:**
- Hardcode values in templates
- Forget resource limits
- Run containers as root
- Skip health checks
- Use `latest` image tag
- Expose secrets in values.yaml
- Create resources without labels
- Ignore security contexts
**Do:**
- Use template functions
- Define all resources
- Use non-root users
- Configure probes
- Pin specific versions
- Reference external secrets
- Apply standard labels
- Enable security contexts
## Testing Checklist
Before deploying:
- [ ] `helm lint` passes
- [ ] `helm template` renders correctly
- [ ] All required labels present
- [ ] Security contexts configured
- [ ] Resource limits defined
- [ ] Health checks configured
- [ ] NOTES.txt provides clear instructions
- [ ] README documents all values
- [ ] Dry run succeeds
- [ ] Test deployment in dev environment
## Validation Commands
```bash
# Lint chart
helm lint .
# Template rendering
helm template myrelease .
# Dry run
helm install myrelease . --dry-run --debug
# Install to test namespace
kubectl create ns test
helm install myrelease . -n test
# Verify
kubectl get all -n test
helm test myrelease -n test
# Cleanup
helm uninstall myrelease -n test
kubectl delete ns test
```
## References
- [Helm Best Practices](https://helm.sh/docs/chart_best_practices/)
- [Kubernetes Labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/)
- [CNCF Security Whitepaper](https://github.com/cncf/tag-security)
- [Pod Security Standards](https://kubernetes.io/docs/concepts/security/pod-security-standards/)

View File

@@ -0,0 +1,507 @@
# Helm Chart Examples
Real-world chart examples for common use cases.
## Example 1: Simple Web Application
**Scenario**: Node.js API microservice
**Chart.yaml**:
```yaml
apiVersion: v2
name: user-api
description: User management API service
type: application
version: 0.1.0
appVersion: "1.0.0"
```
**values.yaml** (key sections):
```yaml
replicaCount: 2
image:
repository: myorg/user-api
tag: "1.0.0"
service:
type: ClusterIP
port: 80
targetPort: 3000
ingress:
enabled: true
className: nginx
hosts:
- host: api.example.com
paths:
- path: /users
pathType: Prefix
resources:
limits: {cpu: 500m, memory: 256Mi}
requests: {cpu: 100m, memory: 128Mi}
autoscaling:
enabled: true
minReplicas: 2
maxReplicas: 10
targetCPUUtilizationPercentage: 70
```
**Usage**:
```bash
helm install user-api . -n production
```
---
## Example 2: Database (StatefulSet)
**Scenario**: PostgreSQL with persistent storage
**Chart.yaml**:
```yaml
apiVersion: v2
name: postgresql
description: PostgreSQL database
type: application
version: 0.1.0
appVersion: "15"
```
**values.yaml** (key sections):
```yaml
replicaCount: 1
image:
repository: postgres
tag: "15"
persistence:
enabled: true
storageClass: "standard"
accessMode: ReadWriteOnce
size: 20Gi
resources:
limits: {cpu: 1000m, memory: 2Gi}
requests: {cpu: 250m, memory: 512Mi}
env:
- name: POSTGRES_DB
value: myapp
- name: POSTGRES_USER
value: postgres
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgres-secret
key: password
```
**StatefulSet template** (key parts):
```yaml
spec:
serviceName: postgresql-headless
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: ["ReadWriteOnce"]
storageClassName: standard
resources:
requests:
storage: 20Gi
```
---
## Example 3: Scheduled Job (CronJob)
**Scenario**: Nightly database backup
**Chart.yaml**:
```yaml
apiVersion: v2
name: db-backup
description: Automated database backup job
type: application
version: 0.1.0
appVersion: "1.0.0"
```
**values.yaml** (key sections):
```yaml
cronjob:
schedule: "0 2 * * *" # 2 AM daily
successfulJobsHistoryLimit: 7
failedJobsHistoryLimit: 3
concurrencyPolicy: Forbid
image:
repository: myorg/backup-tool
  tag: "1.0.0"  # pin a specific version rather than "latest"
resources:
limits: {cpu: 500m, memory: 512Mi}
requests: {cpu: 100m, memory: 128Mi}
env:
- name: BACKUP_RETENTION_DAYS
value: "30"
- name: S3_BUCKET
value: "my-backups"
```
---
## Example 4: Multi-Environment Configuration
**Scenario**: Application deployed to dev, staging, prod
**values.yaml** (base):
```yaml
replicaCount: 1
image:
repository: myorg/myapp
tag: ""
resources:
limits: {cpu: 500m, memory: 256Mi}
requests: {cpu: 50m, memory: 64Mi}
env:
- name: LOG_LEVEL
value: "info"
```
**values-dev.yaml**:
```yaml
replicaCount: 1
image:
tag: "dev"
resources:
limits: {cpu: 200m, memory: 128Mi}
requests: {cpu: 25m, memory: 32Mi}
env:
- name: LOG_LEVEL
value: "debug"
- name: DEBUG
value: "true"
ingress:
enabled: true
hosts:
- host: dev.myapp.example.com
```
**values-prod.yaml**:
```yaml
replicaCount: 3
image:
tag: "1.0.0"
resources:
limits: {cpu: 1000m, memory: 512Mi}
requests: {cpu: 100m, memory: 128Mi}
env:
- name: LOG_LEVEL
value: "warn"
autoscaling:
enabled: true
minReplicas: 3
maxReplicas: 10
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app.kubernetes.io/name
operator: In
values: [myapp]
topologyKey: kubernetes.io/hostname
ingress:
enabled: true
className: nginx
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
hosts:
- host: myapp.example.com
tls:
- secretName: myapp-tls
hosts:
- myapp.example.com
```
**Deployment commands**:
```bash
# Development
helm install myapp . -f values-dev.yaml -n dev
# Production
helm install myapp . -f values-prod.yaml -n production
```
---
## Example 5: Converting Manifest to Helm
**Original Kubernetes manifest**:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend
spec:
replicas: 2
selector:
matchLabels:
app: frontend
template:
metadata:
labels:
app: frontend
spec:
containers:
- name: frontend
image: myorg/frontend:1.0.0
ports:
- containerPort: 8080
resources:
limits:
cpu: "500m"
memory: "256Mi"
```
**Converted Helm template** (deployment.yaml):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "frontend.fullname" . }}
labels:
{{- include "frontend.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
{{- include "frontend.selectorLabels" . | nindent 6 }}
template:
metadata:
labels:
{{- include "frontend.labels" . | nindent 8 }}
spec:
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
ports:
- name: http
containerPort: {{ .Values.service.targetPort }}
resources:
{{- toYaml .Values.resources | nindent 12 }}
```
**Extracted values.yaml**:
```yaml
replicaCount: 2
image:
repository: myorg/frontend
tag: "1.0.0"
service:
targetPort: 8080
resources:
limits:
cpu: 500m
memory: 256Mi
```
**What was parameterized**:
- Replica count → `.Values.replicaCount`
- Image name/tag → `.Values.image.*`
- Port → `.Values.service.targetPort`
- Resources → `.Values.resources`
- Labels → Helper templates
- Resource name → Template function
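The `frontend.*` helpers referenced above live in `templates/_helpers.tpl`; a minimal sketch of the three definitions the converted template needs:

```yaml
{{- define "frontend.fullname" -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" }}
{{- end }}

{{- define "frontend.selectorLabels" -}}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

{{- define "frontend.labels" -}}
{{ include "frontend.selectorLabels" . }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
```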
---
## Example 6: Organizational Standard Template
**Scenario**: Platform team creates standard chart for all services
**Required organizational standards**:
- All services must have cost center label
- Security scanning required
- Must use org-wide naming convention
- Mandatory resource limits
- Required network policies
**Modified _helpers.tpl**:
```yaml
{{- define "org.labels" -}}
app.kubernetes.io/name: {{ include "chart.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
org.example.com/cost-center: {{ .Values.org.costCenter | required "Cost center is required" }}
org.example.com/team: {{ .Values.org.team | required "Team name is required" }}
org.example.com/security-scan: "required"
{{- end }}
```
**Required values**:
```yaml
org:
costCenter: "" # MUST be provided
team: "" # MUST be provided
# Resource limits are mandatory
resources:
limits:
cpu: 500m
memory: 256Mi
requests:
cpu: 50m
memory: 64Mi
# Security context is required
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsNonRoot: true
capabilities:
drop: [ALL]
```
**Usage**:
```bash
helm install myapp . \
--set org.costCenter=eng-001 \
--set org.team=platform
```
---
## Example 7: With Subchart Dependency
**Scenario**: Application that needs PostgreSQL database
**Chart.yaml**:
```yaml
apiVersion: v2
name: myapp
version: 0.1.0
appVersion: "1.0.0"
dependencies:
- name: postgresql
version: "12.1.0"
repository: "https://charts.bitnami.com/bitnami"
condition: postgresql.enabled
```
**values.yaml**:
```yaml
# Application values
replicaCount: 2
image:
repository: myorg/myapp
tag: "1.0.0"
# Subchart values
postgresql:
enabled: true
auth:
database: myapp
username: myapp
primary:
persistence:
size: 10Gi
```
**Install**:
```bash
# Download dependencies
helm dependency update
# Install with subchart
helm install myapp .
```
---
## Common Patterns
### ConfigMap from Files
```yaml
# values.yaml
config:
app.conf: |
server_port=8080
log_level=info
db.conf: |
host=postgres
port=5432
# templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "app.fullname" . }}
data:
{{- range $key, $value := .Values.config }}
{{ $key }}: |
{{- $value | nindent 4 }}
{{- end }}
```
### External Secret Reference
```yaml
env:
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: {{ .Values.externalSecret.name }}
key: password
```
### Horizontal Pod Autoscaler
```yaml
{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: {{ include "app.fullname" . }}
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ include "app.fullname" . }}
minReplicas: {{ .Values.autoscaling.minReplicas }}
maxReplicas: {{ .Values.autoscaling.maxReplicas }}
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
{{- end }}
```
These examples demonstrate common patterns and can be adapted for specific use cases.

View File

@@ -0,0 +1,306 @@
# Helm Chart Templates
Complete template library; load sections as needed. Throughout, `{{ CN }}` abbreviates `{{ CHART_NAME }}`.
## Chart.yaml
```yaml
apiVersion: v2
name: {{ CHART_NAME }}
description: A Helm chart for {{ APP_DESCRIPTION }}
type: application
version: 0.1.0
appVersion: "1.0.0"
```
## values.yaml
```yaml
replicaCount: 1
image:
repository: {{ IMAGE_REPO }}
pullPolicy: IfNotPresent
tag: "{{ IMAGE_TAG }}"
serviceAccount:
create: true
automount: true
podSecurityContext:
runAsNonRoot: true
runAsUser: 1000
fsGroup: 1000
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
capabilities:
drop: [ALL]
service:
type: ClusterIP
port: {{ SERVICE_PORT }}
targetPort: {{ CONTAINER_PORT }}
resources:
limits: {cpu: 500m, memory: 256Mi}
requests: {cpu: 50m, memory: 64Mi}
livenessProbe:
httpGet: {path: /health, port: http}
initialDelaySeconds: 30
readinessProbe:
httpGet: {path: /ready, port: http}
initialDelaySeconds: 5
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 10
targetCPUUtilizationPercentage: 80
```
## _helpers.tpl
```yaml
{{- define "{{ CN }}.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- define "{{ CN }}.fullname" -}}
{{- if .Values.fullnameOverride }}{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}{{- end }}{{- end }}{{- end }}
{{- define "{{ CN }}.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- define "{{ CN }}.labels" -}}
helm.sh/chart: {{ include "{{ CN }}.chart" . }}
{{ include "{{ CN }}.selectorLabels" . }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{- define "{{ CN }}.selectorLabels" -}}
app.kubernetes.io/name: {{ include "{{ CN }}.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{- define "{{ CN }}.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}{{- default (include "{{ CN }}.fullname" .) .Values.serviceAccount.name }}
{{- else }}{{- default "default" .Values.serviceAccount.name }}{{- end }}{{- end }}
```
## Deployment
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "{{ CN }}.fullname" . }}
labels: {{- include "{{ CN }}.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
selector:
matchLabels: {{- include "{{ CN }}.selectorLabels" . | nindent 6 }}
template:
metadata:
labels: {{- include "{{ CN }}.labels" . | nindent 8 }}
spec:
serviceAccountName: {{ include "{{ CN }}.serviceAccountName" . }}
securityContext: {{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
securityContext: {{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: {{ .Values.service.targetPort }}
livenessProbe: {{- toYaml .Values.livenessProbe | nindent 12 }}
readinessProbe: {{- toYaml .Values.readinessProbe | nindent 12 }}
resources: {{- toYaml .Values.resources | nindent 12 }}
```
## StatefulSet
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ include "{{ CN }}.fullname" . }}
labels: {{- include "{{ CN }}.labels" . | nindent 4 }}
spec:
serviceName: {{ include "{{ CN }}.fullname" . }}-headless
replicas: {{ .Values.replicaCount }}
selector:
matchLabels: {{- include "{{ CN }}.selectorLabels" . | nindent 6 }}
template:
metadata:
labels: {{- include "{{ CN }}.labels" . | nindent 8 }}
spec:
serviceAccountName: {{ include "{{ CN }}.serviceAccountName" . }}
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
volumeMounts:
- name: data
mountPath: /data
volumeClaimTemplates:
- metadata: {name: data}
spec:
accessModes: [{{ .Values.persistence.accessMode | quote }}]
        {{- if .Values.persistence.storageClass }}
        storageClassName: {{ .Values.persistence.storageClass | quote }}
        {{- end }}
resources:
requests:
storage: {{ .Values.persistence.size | quote }}
```
Add to values: `persistence: {storageClass: standard, accessMode: ReadWriteOnce, size: 10Gi}`
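The StatefulSet's `serviceName` points at a `-headless` Service that must also exist; a minimal sketch:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ include "{{ CN }}.fullname" . }}-headless
  labels: {{- include "{{ CN }}.labels" . | nindent 4 }}
spec:
  clusterIP: None  # headless: gives each pod a stable DNS identity
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      name: http
  selector: {{- include "{{ CN }}.selectorLabels" . | nindent 4 }}
```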
## Job
```yaml
apiVersion: batch/v1
kind: Job
metadata:
name: {{ include "{{ CN }}.fullname" . }}
labels: {{- include "{{ CN }}.labels" . | nindent 4 }}
spec:
backoffLimit: {{ .Values.job.backoffLimit | default 3 }}
  completions: {{ .Values.job.completions | default 1 }}
  parallelism: {{ .Values.job.parallelism | default 1 }}
template:
metadata:
labels: {{- include "{{ CN }}.labels" . | nindent 8 }}
spec:
restartPolicy: OnFailure
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```
Add to values: `job: {backoffLimit: 3, completions: 1, parallelism: 1}`
## CronJob
```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
name: {{ include "{{ CN }}.fullname" . }}
labels: {{- include "{{ CN }}.labels" . | nindent 4 }}
spec:
schedule: {{ .Values.cronjob.schedule | quote }}
successfulJobsHistoryLimit: {{ .Values.cronjob.successfulJobsHistoryLimit | default 3 }}
failedJobsHistoryLimit: {{ .Values.cronjob.failedJobsHistoryLimit | default 1 }}
jobTemplate:
spec:
template:
spec:
restartPolicy: OnFailure
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```
Add to values: `cronjob: {schedule: "0 2 * * *", successfulJobsHistoryLimit: 3, failedJobsHistoryLimit: 1}`
## Service
```yaml
apiVersion: v1
kind: Service
metadata:
name: {{ include "{{ CN }}.fullname" . }}
labels: {{- include "{{ CN }}.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
targetPort: http
name: http
selector: {{- include "{{ CN }}.selectorLabels" . | nindent 4 }}
```
## ServiceAccount
```yaml
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "{{ CN }}.serviceAccountName" . }}
labels: {{- include "{{ CN }}.labels" . | nindent 4 }}
automountServiceAccountToken: {{ .Values.serviceAccount.automount }}
{{- end }}
```
## Ingress
```yaml
{{- if .Values.ingress.enabled -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ include "{{ CN }}.fullname" . }}
labels: {{- include "{{ CN }}.labels" . | nindent 4 }}
spec:
  {{- if .Values.ingress.className }}
  ingressClassName: {{ .Values.ingress.className }}
  {{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: {{ .path }}
pathType: {{ .pathType }}
backend:
service: {name: {{ include "{{ CN }}.fullname" $ }}, port: {number: {{ $.Values.service.port }}}}
{{- end }}
{{- end }}
{{- end }}
```
## HPA
```yaml
{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: {{ include "{{ CN }}.fullname" . }}
spec:
scaleTargetRef: {apiVersion: apps/v1, kind: Deployment, name: {{ include "{{ CN }}.fullname" . }}}
minReplicas: {{ .Values.autoscaling.minReplicas }}
maxReplicas: {{ .Values.autoscaling.maxReplicas }}
metrics:
- type: Resource
resource: {name: cpu, target: {type: Utilization, averageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}}}
{{- end }}
```
## NOTES.txt
```
Thank you for installing {{ .Chart.Name }}!
Release: {{ .Release.Name }}
{{- if .Values.ingress.enabled }}
URL: http{{ if $.Values.ingress.tls }}s{{ end }}://{{ (index .Values.ingress.hosts 0).host }}
{{- else }}
Port forward: kubectl port-forward svc/{{ include "{{ CN }}.fullname" . }} {{ .Values.service.port }}
{{- end }}
```
## .helmignore
```
.DS_Store
.git/
*.swp
.idea/
.vscode/
```
## README.md
```markdown
# {{ CHART_NAME }}
## Installation
helm install {{ CN }} .
## Testing
helm lint .
helm template {{ CN }} .
helm install {{ CN }} . --dry-run
```
## Environment Values
values-dev.yaml:
```yaml
replicaCount: 1
resources: {limits: {cpu: 200m, memory: 128Mi}, requests: {cpu: 25m, memory: 32Mi}}
```
values-prod.yaml:
```yaml
replicaCount: 3
resources: {limits: {cpu: 1000m, memory: 512Mi}, requests: {cpu: 100m, memory: 128Mi}}
autoscaling: {enabled: true, minReplicas: 3, maxReplicas: 10}
```

View File

@@ -0,0 +1,535 @@
# Helm Chart Testing Guide
## Overview
This guide provides testing templates and commands for validating Helm charts. Always exercise a chart with `helm lint`, template rendering, and a dry-run install before deploying it.
## Testing Workflow
```
1. Helm Lint → Validate chart structure
2. Helm Template → Preview rendered manifests
3. Dry-run Install → Test against cluster API
4. Kubectl Dry-run → Final validation
5. Actual Install → Deploy to cluster
```
## Mock Values by Workload Type
### Deployment (Web Application)
```yaml
# values-test.yaml
image:
repository: nginx
tag: "1.25"
pullPolicy: IfNotPresent
replicaCount: 2
service:
type: ClusterIP
port: 80
ingress:
enabled: true
className: "nginx"
hosts:
- host: test.example.com
paths:
- path: /
pathType: Prefix
tls:
- secretName: test-tls
hosts:
- test.example.com
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 512Mi
livenessProbe:
enabled: true
path: /healthz
initialDelaySeconds: 30
periodSeconds: 10
readinessProbe:
enabled: true
path: /ready
initialDelaySeconds: 5
periodSeconds: 10
env:
- name: ENVIRONMENT
value: "test"
- name: LOG_LEVEL
value: "debug"
- name: PORT
value: "80"
autoscaling:
enabled: true
minReplicas: 2
maxReplicas: 5
targetCPUUtilizationPercentage: 80
nodeSelector: {}
tolerations: []
affinity: {}
```
### StatefulSet (Database)
```yaml
# values-test.yaml
image:
repository: postgres
tag: "15"
pullPolicy: IfNotPresent
replicaCount: 3
service:
type: ClusterIP
port: 5432
persistence:
enabled: true
storageClass: "standard"
size: 10Gi
resources:
requests:
cpu: 250m
memory: 512Mi
limits:
cpu: 1000m
memory: 2Gi
# NOTE: httpGet probes will fail against PostgreSQL, which does not speak HTTP;
# use tcpSocket or exec probes (or disable these) when adapting for a real database.
livenessProbe:
enabled: true
path: /
initialDelaySeconds: 60
periodSeconds: 30
readinessProbe:
enabled: true
path: /
initialDelaySeconds: 10
periodSeconds: 10
env:
- name: POSTGRES_DB
value: "testdb"
- name: POSTGRES_USER
value: "testuser"
- name: POSTGRES_PASSWORD
value: "testpass123"
- name: PGDATA
value: "/data/pgdata"
podManagementPolicy: OrderedReady
volumeMounts:
- name: data
mountPath: /data
nodeSelector: {}
tolerations: []
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app.kubernetes.io/name
operator: In
values:
- postgres
topologyKey: kubernetes.io/hostname
```
### Job (Batch Processing)
```yaml
# values-test.yaml
image:
repository: busybox
tag: "1.36"
pullPolicy: IfNotPresent
job:
backoffLimit: 3
activeDeadlineSeconds: 600
ttlSecondsAfterFinished: 86400
restartPolicy: OnFailure
parallelism: 1
completions: 1
command:
- /bin/sh
args:
- -c
- |
echo "Starting batch job..."
echo "Processing data..."
sleep 30
echo "Job completed successfully!"
resources:
requests:
cpu: 200m
memory: 256Mi
limits:
cpu: 1000m
memory: 1Gi
env:
- name: JOB_TYPE
value: "batch-process"
- name: BATCH_SIZE
value: "1000"
- name: OUTPUT_PATH
value: "/output"
nodeSelector: {}
tolerations: []
```
### CronJob (Scheduled Task)
```yaml
# values-test.yaml
image:
repository: busybox
tag: "1.36"
pullPolicy: IfNotPresent
cronJob:
schedule: "*/5 * * * *" # Every 5 minutes for testing
concurrencyPolicy: Forbid
successfulJobsHistoryLimit: 3
failedJobsHistoryLimit: 1
startingDeadlineSeconds: 200
suspend: false
backoffLimit: 2
activeDeadlineSeconds: 300
restartPolicy: OnFailure
command:
- /bin/sh
args:
- -c
- |
echo "Running scheduled task at $(date)"
echo "Performing backup/cleanup/sync..."
sleep 10
echo "Task completed at $(date)"
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 512Mi
env:
- name: TASK_NAME
value: "scheduled-backup"
- name: RETENTION_DAYS
value: "7"
- name: TARGET
value: "/backup"
nodeSelector: {}
tolerations: []
```
## Testing Commands
### 1. Validate Chart Structure
```bash
# Basic lint
helm lint my-chart
# Lint with custom values
helm lint my-chart -f values-test.yaml
# Lint with value overrides
helm lint my-chart --set image.tag=latest
```
**Expected output:**
```
==> Linting my-chart
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, 0 chart(s) failed
```
### 2. Render Templates Locally
```bash
# Render all templates
helm template my-release my-chart
# Render with test values
helm template my-release my-chart -f values-test.yaml
# Render with inline overrides
helm template my-release my-chart \
--set image.tag=latest \
--set replicaCount=3
# Show only specific resource
helm template my-release my-chart | grep -A 30 "kind: Deployment"
# Save rendered manifests to file
helm template my-release my-chart -f values-test.yaml > rendered.yaml
# Render for specific namespace
helm template my-release my-chart --namespace production
```
### 3. Dry-run Install
```bash
# Dry-run against cluster API (requires cluster access)
helm install my-release my-chart --dry-run --debug
# Dry-run with custom values
helm install my-release my-chart \
--dry-run \
--debug \
-f values-test.yaml
# Dry-run with namespace
helm install my-release my-chart \
--dry-run \
--debug \
--namespace test \
--create-namespace
```
**Expected behavior:**
- Validates against cluster API
- Shows what would be installed
- Catches API version issues
- Does NOT create resources
### 4. Validate with Kubectl
```bash
# Validate rendered manifests
helm template my-release my-chart | kubectl apply --dry-run=client -f -
# Validate with specific values
helm template my-release my-chart -f values-test.yaml | kubectl apply --dry-run=client -f -
# Server-side dry-run (requires cluster)
helm template my-release my-chart | kubectl apply --dry-run=server -f -
```
### 5. Diff Against Existing Release
```bash
# Install helm diff plugin first
helm plugin install https://github.com/databus23/helm-diff
# Compare with existing release
helm diff upgrade my-release my-chart -f values-test.yaml
# Show only changes
helm diff upgrade my-release my-chart -f values-test.yaml --suppress-secrets
```
## Testing Checklist
### Pre-deployment Validation
- [ ] Chart passes `helm lint` without warnings
- [ ] Templates render successfully with `helm template`
- [ ] Dry-run install completes without errors
- [ ] Image names are valid and accessible
- [ ] Resource limits are appropriate
- [ ] Labels and selectors match correctly
- [ ] Health check paths are correct
- [ ] Environment variables are set
- [ ] Secrets are not in values.yaml
- [ ] Ingress hostnames are valid
- [ ] Service ports match container ports
### Workload-Specific Checks
#### Deployment
- [ ] Replica count is appropriate
- [ ] Rolling update strategy configured
- [ ] HPA settings (if enabled) are sensible
- [ ] Pod disruption budget considered
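The last item is often skipped; if the chart does not yet include one, a minimal PodDisruptionBudget sketch (hypothetical resource name, assuming the chart's standard `app.kubernetes.io` selector labels) looks like:

```yaml
# Hypothetical templates/pdb.yaml -- adjust name and labels to your chart's helpers
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-release-my-chart
spec:
  minAvailable: 1          # or set maxUnavailable instead
  selector:
    matchLabels:
      app.kubernetes.io/name: my-chart
      app.kubernetes.io/instance: my-release
```

The selector must match the Deployment's pod labels, or the budget protects nothing.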
#### StatefulSet
- [ ] Persistence configuration correct
- [ ] Storage class exists
- [ ] Headless service defined
- [ ] Pod management policy appropriate
- [ ] Volume claim templates valid
#### Job
- [ ] Backoff limit reasonable
- [ ] Active deadline set
- [ ] TTL for cleanup configured
- [ ] Restart policy appropriate
- [ ] Command/args correct
#### CronJob
- [ ] Schedule syntax valid
- [ ] Concurrency policy appropriate
- [ ] History limits set
- [ ] Starting deadline configured
- [ ] Job template valid
## Common Testing Scenarios
### Test 1: Minimal Values (Defaults)
```bash
# Test with only required values
helm template test my-chart --set image.repository=nginx
```
### Test 2: Production-like Values
```bash
# Test with production configuration
helm template test my-chart -f values-prod.yaml
```
### Test 3: Multiple Environments
```bash
# Test dev environment
helm template test my-chart -f values-dev.yaml
# Test staging environment
helm template test my-chart -f values-staging.yaml
# Test production environment
helm template test my-chart -f values-prod.yaml
```
### Test 4: Value Overrides
```bash
# Test with inline overrides
helm template test my-chart \
--set image.tag=v2.0.0 \
--set replicaCount=5 \
--set ingress.enabled=true
```
### Test 5: Resource Validation
```bash
# Check resource limits
helm template test my-chart | grep -A 5 "resources:"
# Check security contexts
helm template test my-chart | grep -A 10 "securityContext:"
# Check probes
helm template test my-chart | grep -A 5 "Probe:"
```
## Troubleshooting
### Issue: Template rendering fails
```bash
# Debug with verbose output
helm template test my-chart --debug
# Check specific template
helm template test my-chart --show-only templates/deployment.yaml
```
### Issue: Validation errors
```bash
# Validate individual resources
helm template test my-chart | kubectl apply --dry-run=client -f - --validate=strict
# Check for deprecated APIs
helm template test my-chart | kubectl apply --dry-run=server -f -
```
### Issue: Values not applied
```bash
# Verify values are loaded (the dry-run debug output includes a values summary)
helm install test my-chart -f values-test.yaml --dry-run --debug | grep -A 5 "USER-SUPPLIED VALUES"
# Check final computed values
helm install test my-chart -f values-test.yaml --dry-run --debug | grep -A 20 "COMPUTED VALUES"
```
## Best Practices
1. **Always test before deploying** - Use dry-run and template rendering
2. **Use realistic test data** - Mock values should resemble production
3. **Test all environments** - Validate dev, staging, prod configurations
4. **Validate security** - Check security contexts and RBAC
5. **Check resource limits** - Ensure requests/limits are appropriate
6. **Test failure scenarios** - Invalid values, missing fields
7. **Document test process** - Share testing commands with team
8. **Automate testing** - Include in CI/CD pipeline
## Automated Testing Example
```bash
#!/bin/bash
# test-chart.sh
set -e
CHART_DIR="my-chart"
VALUES_FILE="values-test.yaml"
echo "Testing Helm chart: $CHART_DIR"
# Test 1: Lint
echo "1. Running helm lint..."
helm lint "$CHART_DIR" -f "$VALUES_FILE"
# Test 2: Template rendering
echo "2. Rendering templates..."
helm template test "$CHART_DIR" -f "$VALUES_FILE" > /tmp/rendered.yaml
# Test 3: Kubectl validation
echo "3. Validating with kubectl..."
kubectl apply --dry-run=client -f /tmp/rendered.yaml
# Test 4: Check for secrets in values
echo "4. Checking for secrets in values..."
if grep -iE "password|secret|token" "$VALUES_FILE"; then
  echo "WARNING: Potential secrets found in values file!"
fi
echo "✅ All tests passed!"
```
## Summary
Always follow this testing sequence:
1. **Lint** the chart structure
2. **Render** templates to preview
3. **Dry-run** to validate against API
4. **Review** rendered manifests
5. **Deploy** to test environment first
6. **Monitor** after deployment
Never skip testing, even for "small changes"!

# Helm Chart Workload Types
## Overview
Different application types require different Kubernetes workload resources. This guide helps you choose the right workload type and understand its specific requirements.
## Workload Type Decision Tree
```
Is it a long-running process?
├─ YES: Does it need stable network identity or persistent storage?
│ ├─ YES: Use StatefulSet
│ └─ NO: Use Deployment
└─ NO: Is it scheduled to run repeatedly?
├─ YES: Use CronJob
└─ NO: Use Job
```
## Deployment
**Use for:** Stateless applications, microservices, web servers, APIs
### Characteristics
- No persistent identity
- Pods are interchangeable
- Can be scaled horizontally
- Rolling updates and rollbacks
- No guaranteed ordering
### Best For
- REST APIs
- Web applications
- Microservices
- Stateless workers
- Frontend applications
### values.yaml Specific Fields
```yaml
replicaCount: 3
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
autoscaling:
enabled: true
minReplicas: 2
maxReplicas: 10
```
### Example Use Cases
- Node.js API server
- Nginx web server
- Python Flask application
- React/Vue frontend
- Stateless background workers
## StatefulSet
**Use for:** Stateful applications requiring stable identity or persistent storage
### Characteristics
- Stable, unique network identifiers
- Stable, persistent storage
- Ordered, graceful deployment and scaling
- Ordered, automated rolling updates
- Requires headless service
### Best For
- Databases (MySQL, PostgreSQL, MongoDB)
- Message queues (RabbitMQ, Kafka)
- Distributed systems (Elasticsearch, Cassandra)
- Applications requiring stable hostnames
### values.yaml Specific Fields
```yaml
replicaCount: 3
updateStrategy:
type: RollingUpdate
persistence:
enabled: true
storageClass: "fast-ssd"
size: 10Gi
accessMode: ReadWriteOnce
podManagementPolicy: OrderedReady # or Parallel
```
### Template Additions
```yaml
serviceName: {{ include "chart.fullname" . }}-headless
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: {{ .Values.persistence.storageClass }}
resources:
requests:
storage: {{ .Values.persistence.size }}
```
### Headless Service Required
```yaml
apiVersion: v1
kind: Service
metadata:
name: {{ include "chart.fullname" . }}-headless
spec:
clusterIP: None
selector:
{{- include "chart.selectorLabels" . | nindent 4 }}
```
### Example Use Cases
- PostgreSQL cluster
- Redis with persistent storage
- Elasticsearch cluster
- Kafka broker
- ZooKeeper ensemble
## Job
**Use for:** Run-to-completion tasks, one-time executions
### Characteristics
- Runs until completion
- Pods are not restarted after successful completion
- Can run multiple pods in parallel
- Automatic cleanup options
- Suitable for batch processing
### Best For
- Database migrations
- Batch processing
- Data imports/exports
- One-time setup tasks
- Report generation
### values.yaml Specific Fields
```yaml
job:
backoffLimit: 4 # Retry attempts
activeDeadlineSeconds: 600 # Timeout
ttlSecondsAfterFinished: 86400 # Cleanup after 24h
restartPolicy: OnFailure # or Never
parallelism: 1 # Parallel pods
completions: 1 # Required completions
command: []
args: []
```
### Template Specifics
```yaml
spec:
backoffLimit: {{ .Values.job.backoffLimit }}
ttlSecondsAfterFinished: {{ .Values.job.ttlSecondsAfterFinished }}
template:
spec:
restartPolicy: {{ .Values.job.restartPolicy }}
```
### Example Use Cases
- Database schema migration
- Data ETL job
- Image processing batch
- Cache warming
- Backup operations
## CronJob
**Use for:** Scheduled, recurring tasks
### Characteristics
- Scheduled execution (cron syntax)
- Creates Jobs on schedule
- Concurrency control
- History management
- Automatic cleanup
### Best For
- Scheduled backups
- Report generation
- Data synchronization
- Cache clearing
- Periodic cleanup tasks
### values.yaml Specific Fields
```yaml
cronJob:
schedule: "0 2 * * *" # Daily at 2 AM
concurrencyPolicy: Forbid # Forbid, Allow, or Replace
successfulJobsHistoryLimit: 3
failedJobsHistoryLimit: 1
startingDeadlineSeconds: 200 # Deadline for missed runs
suspend: false # Pause scheduling
backoffLimit: 4
activeDeadlineSeconds: 600
restartPolicy: OnFailure
command: []
args: []
```
### Cron Schedule Examples
```yaml
"*/5 * * * *" # Every 5 minutes
"0 * * * *" # Every hour
"0 0 * * *" # Daily at midnight
"0 2 * * *" # Daily at 2 AM
"0 0 * * 0" # Weekly on Sunday
"0 0 1 * *" # Monthly on 1st
"0 0 1 1 *" # Yearly on Jan 1st
```
```
### Concurrency Policies
- **Forbid**: Skip the new run if the previous Job is still running (recommended for most workloads)
- **Allow**: Let Jobs run concurrently
- **Replace**: Cancel the running Job and start a new one
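As an illustration, a backup CronJob where overlapping runs could corrupt output might pin the policy explicitly (hypothetical manifest fragment; the name, image, and schedule are placeholders, not output generated by this skill):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup          # hypothetical name
spec:
  schedule: "0 2 * * *"
  concurrencyPolicy: Forbid     # skip a run rather than overlap a slow backup
  startingDeadlineSeconds: 200
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: busybox:1.36
              command: ["/bin/sh", "-c", "echo backing up"]
```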
### Example Use Cases
- Nightly database backup
- Daily report generation
- Hourly cache refresh
- Weekly cleanup tasks
- Monthly billing runs
## Comparison Matrix
| Feature | Deployment | StatefulSet | Job | CronJob |
|---------|-----------|-------------|-----|---------|
| **Replicas** | Yes | Yes | Parallelism | Parallelism |
| **Persistent Identity** | No | Yes | No | No |
| **Persistent Storage** | Optional | Yes | Optional | Optional |
| **Ordered Operations** | No | Yes | No | No |
| **Auto-restart** | Yes | Yes | Optional | Optional |
| **Scaling** | Easy | Ordered | N/A | N/A |
| **Updates** | Rolling | Rolling | N/A | N/A |
| **Service** | Yes | Headless | Optional | Optional |
| **Typical Replicas** | 2-100+ | 1-10 | 1-1000 | 1-100 |
## When to Use Each Type
### Use Deployment When:
- Application is stateless
- Pods are interchangeable
- No need for stable network identity
- Need rapid scaling
- Standard web application pattern
### Use StatefulSet When:
- Need stable, unique network identifiers
- Require persistent storage per pod
- Need ordered deployment/scaling
- Running clustered databases
- Pods need to discover each other
### Use Job When:
- Task runs to completion
- One-time execution needed
- Batch processing work
- Database migration
- Don't need scheduling
### Use CronJob When:
- Need scheduled execution
- Recurring tasks
- Time-based triggers
- Periodic maintenance
- Regular backups or reports
## Migration Considerations
### Deployment → StatefulSet
**Required Changes:**
- Add `serviceName` pointing to headless service
- Create headless service
- Add `volumeClaimTemplates`
- Update selector labels (StatefulSet selector is immutable)
- Plan for ordered rollout
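Those changes can be sketched as a fragment of the migrated spec (field names follow the Kubernetes StatefulSet API; the `my-chart-headless` service name is a placeholder matching the headless-service convention shown above):

```yaml
apiVersion: apps/v1
kind: StatefulSet                  # was: Deployment
spec:
  serviceName: my-chart-headless   # must reference the headless Service
  podManagementPolicy: OrderedReady
  volumeClaimTemplates:            # replaces any shared PVC the Deployment mounted
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

Because the selector is immutable, this migration typically means deleting the Deployment and creating the StatefulSet rather than patching in place.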
### StatefulSet → Deployment
**Consider:**
- Loss of stable network identity
- Need to externalize persistent data
- Pods become interchangeable
- No ordered operations
## Resource Recommendations by Type
### Deployment
```yaml
resources:
requests:
memory: "128Mi"
cpu: "100m"
limits:
memory: "512Mi"
cpu: "500m"
```
### StatefulSet
```yaml
resources:
requests:
memory: "512Mi" # Higher for databases
cpu: "250m"
limits:
memory: "2Gi"
cpu: "1000m"
```
### Job/CronJob
```yaml
resources:
requests:
memory: "256Mi"
cpu: "200m"
limits:
memory: "1Gi"
cpu: "1000m"
```

#!/usr/bin/env python3
"""
Helm Chart Scaffolding Script
Generates production-ready Helm charts with best practices
"""
import shutil
import argparse
from pathlib import Path
def replace_placeholder(content: str, chart_name: str) -> str:
"""Replace CHARTNAME placeholder with actual chart name"""
return content.replace("CHARTNAME", chart_name)
def create_chart_structure(
chart_name: str,
output_dir: str,
workload_type: str = "deployment",
include_ingress: bool = False,
include_hpa: bool = False,
include_configmap: bool = False,
) -> None:
"""
Create Helm chart directory structure and files
Args:
chart_name: Name of the Helm chart
output_dir: Directory where chart will be created
workload_type: Type of workload (deployment, statefulset, job, cronjob)
include_ingress: Whether to include Ingress resource
include_hpa: Whether to include HorizontalPodAutoscaler
include_configmap: Whether to include ConfigMap
"""
# Get templates directory
script_dir = Path(__file__).parent.parent
templates_dir = script_dir / "assets" / "templates"
# Create chart directory
chart_dir = Path(output_dir) / chart_name
chart_dir.mkdir(parents=True, exist_ok=True)
# Create templates subdirectory
templates_output_dir = chart_dir / "templates"
templates_output_dir.mkdir(exist_ok=True)
print(f"📦 Creating Helm chart: {chart_name}")
print(f"📂 Output directory: {chart_dir}")
# Copy Chart.yaml
chart_yaml_src = templates_dir / "Chart.yaml"
chart_yaml_dst = chart_dir / "Chart.yaml"
with open(chart_yaml_src, 'r') as f:
content = replace_placeholder(f.read(), chart_name)
with open(chart_yaml_dst, 'w') as f:
f.write(content)
print("✅ Created Chart.yaml")
# Copy values.yaml
values_yaml_src = templates_dir / "values.yaml"
values_yaml_dst = chart_dir / "values.yaml"
shutil.copy2(values_yaml_src, values_yaml_dst)
print("✅ Created values.yaml")
# Copy .helmignore
helmignore_src = templates_dir / ".helmignore"
helmignore_dst = chart_dir / ".helmignore"
shutil.copy2(helmignore_src, helmignore_dst)
print("✅ Created .helmignore")
# Copy _helpers.tpl
helpers_src = templates_dir / "_helpers.tpl"
helpers_dst = templates_output_dir / "_helpers.tpl"
with open(helpers_src, 'r') as f:
content = replace_placeholder(f.read(), chart_name)
with open(helpers_dst, 'w') as f:
f.write(content)
print("✅ Created templates/_helpers.tpl")
# Copy NOTES.txt
notes_src = templates_dir / "NOTES.txt"
notes_dst = templates_output_dir / "NOTES.txt"
with open(notes_src, 'r') as f:
content = replace_placeholder(f.read(), chart_name)
with open(notes_dst, 'w') as f:
f.write(content)
print("✅ Created templates/NOTES.txt")
# Copy workload type template
workload_templates = {
"deployment": "deployment/deployment.yaml",
"statefulset": "statefulset/statefulset.yaml",
"job": "job/job.yaml",
"cronjob": "cronjob/cronjob.yaml",
}
workload_src = templates_dir / workload_templates[workload_type]
workload_dst = templates_output_dir / f"{workload_type}.yaml"
with open(workload_src, 'r') as f:
content = replace_placeholder(f.read(), chart_name)
with open(workload_dst, 'w') as f:
f.write(content)
print(f"✅ Created templates/{workload_type}.yaml")
# Copy Service (not for jobs)
if workload_type in ["deployment", "statefulset"]:
service_src = templates_dir / "service" / "service.yaml"
service_dst = templates_output_dir / "service.yaml"
with open(service_src, 'r') as f:
content = replace_placeholder(f.read(), chart_name)
with open(service_dst, 'w') as f:
f.write(content)
print("✅ Created templates/service.yaml")
# Copy ServiceAccount
sa_src = templates_dir / "rbac" / "serviceaccount.yaml"
sa_dst = templates_output_dir / "serviceaccount.yaml"
with open(sa_src, 'r') as f:
content = replace_placeholder(f.read(), chart_name)
with open(sa_dst, 'w') as f:
f.write(content)
print("✅ Created templates/serviceaccount.yaml")
# Optionally copy Ingress
if include_ingress:
ingress_src = templates_dir / "ingress" / "ingress.yaml"
ingress_dst = templates_output_dir / "ingress.yaml"
with open(ingress_src, 'r') as f:
content = replace_placeholder(f.read(), chart_name)
with open(ingress_dst, 'w') as f:
f.write(content)
print("✅ Created templates/ingress.yaml")
# Optionally copy HPA (only for deployment)
if include_hpa and workload_type == "deployment":
hpa_src = templates_dir / "hpa" / "hpa.yaml"
hpa_dst = templates_output_dir / "hpa.yaml"
with open(hpa_src, 'r') as f:
content = replace_placeholder(f.read(), chart_name)
with open(hpa_dst, 'w') as f:
f.write(content)
print("✅ Created templates/hpa.yaml")
# Optionally copy ConfigMap
if include_configmap:
cm_src = templates_dir / "configmap" / "configmap.yaml"
cm_dst = templates_output_dir / "configmap.yaml"
with open(cm_src, 'r') as f:
content = replace_placeholder(f.read(), chart_name)
with open(cm_dst, 'w') as f:
f.write(content)
print("✅ Created templates/configmap.yaml")
print(f"\n🎉 Chart '{chart_name}' created successfully at {chart_dir}")
print(f"\nNext steps:")
print(f"1. cd {chart_dir}")
print(f"2. Edit values.yaml to configure your application")
print(f"3. Run: helm lint .")
print(f"4. Run: helm install {chart_name} .")
def main():
parser = argparse.ArgumentParser(
description="Generate production-ready Helm charts",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="""
Examples:
# Create basic deployment chart
%(prog)s my-app -o ./charts
# Create statefulset with ingress
%(prog)s my-db -o ./charts -t statefulset --ingress
# Create job with configmap
%(prog)s my-job -o ./charts -t job --configmap
# Create deployment with HPA
%(prog)s my-api -o ./charts --hpa
"""
)
parser.add_argument(
"chart_name",
help="Name of the Helm chart to create"
)
parser.add_argument(
"-o", "--output",
default=".",
help="Output directory (default: current directory)"
)
parser.add_argument(
"-t", "--type",
choices=["deployment", "statefulset", "job", "cronjob"],
default="deployment",
help="Workload type (default: deployment)"
)
parser.add_argument(
"--ingress",
action="store_true",
help="Include Ingress resource"
)
parser.add_argument(
"--hpa",
action="store_true",
help="Include HorizontalPodAutoscaler (deployment only)"
)
parser.add_argument(
"--configmap",
action="store_true",
help="Include ConfigMap resource"
)
args = parser.parse_args()
create_chart_structure(
chart_name=args.chart_name,
output_dir=args.output,
workload_type=args.type,
include_ingress=args.ingress,
include_hpa=args.hpa,
include_configmap=args.configmap,
)
if __name__ == "__main__":
main()