Initial commit

Zhongwei Li
2025-11-29 17:51:02 +08:00
commit ff1f4bd119
252 changed files with 72682 additions and 0 deletions


@@ -0,0 +1,5 @@
# Compliance & Auditing Skills
This directory contains skills for security compliance and auditing operations.
See the main [README.md](../../README.md) for usage and [CONTRIBUTE.md](../../CONTRIBUTE.md) for contribution guidelines.


@@ -0,0 +1,431 @@
---
name: policy-opa
description: >
Policy-as-code enforcement and compliance validation using Open Policy Agent (OPA).
Use when: (1) Enforcing security and compliance policies across infrastructure and applications,
(2) Validating Kubernetes admission control policies, (3) Implementing policy-as-code for
compliance frameworks (SOC2, PCI-DSS, GDPR, HIPAA), (4) Testing and evaluating OPA Rego policies,
(5) Integrating policy checks into CI/CD pipelines, (6) Auditing configuration drift against
organizational security standards, (7) Implementing least-privilege access controls.
version: 0.1.0
maintainer: SirAppSec
category: compliance
tags: [opa, policy-as-code, compliance, rego, kubernetes, admission-control, soc2, gdpr, pci-dss, hipaa]
frameworks: [SOC2, PCI-DSS, GDPR, HIPAA, NIST, ISO27001]
dependencies:
tools: [opa, docker, kubectl]
packages: [jq, yq]
references:
- https://www.openpolicyagent.org/docs/latest/
- https://www.openpolicyagent.org/docs/latest/policy-language/
- https://www.conftest.dev/
---
# Policy-as-Code with Open Policy Agent
## Overview
This skill enables policy-as-code enforcement using Open Policy Agent (OPA) for compliance validation, security policy enforcement, and configuration auditing. OPA provides a unified framework for policy evaluation across cloud-native environments, Kubernetes, CI/CD pipelines, and infrastructure-as-code.
Use OPA to codify security requirements, compliance controls, and organizational standards as executable policies written in Rego. Automatically validate configurations, prevent misconfigurations, and maintain continuous compliance.
## Quick Start
### Install OPA
```bash
# macOS
brew install opa
# Linux
curl -L -o opa https://openpolicyagent.org/downloads/latest/opa_linux_amd64
chmod +x opa
# Verify installation
opa version
```
### Basic Policy Evaluation
```bash
# Evaluate a policy against input data
opa eval --data policy.rego --input input.json 'data.example.allow'
# Test policies with unit tests
opa test policy.rego policy_test.rego --verbose
# Run OPA server for live policy evaluation
opa run --server --addr localhost:8181
```
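The commands above assume a `policy.rego` and `input.json` exist on disk; a minimal sketch of a policy they could evaluate (the `example` package and the `user.role` field are illustrative, not part of the bundled assets):
```rego
# policy.rego - minimal illustrative policy for the Quick Start commands
package example

import future.keywords.if

default allow := false

# Grants access when the input document carries an admin role,
# e.g. input.json = {"user": {"role": "admin"}}
allow if {
    input.user.role == "admin"
}
```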
## Core Workflow
### Step 1: Define Policy Requirements
Identify compliance requirements and security controls to enforce:
- Compliance frameworks (SOC2, PCI-DSS, GDPR, HIPAA, NIST)
- Kubernetes security policies (pod security, RBAC, network policies)
- Infrastructure-as-code policies (Terraform, CloudFormation)
- Application security policies (API authorization, data access)
- Organizational security standards
### Step 2: Write OPA Rego Policies
Create policy files in the Rego language. Use the provided templates in `assets/` for common patterns:
**Example: Kubernetes Pod Security Policy**
```rego
package kubernetes.admission
import future.keywords.contains
import future.keywords.if
deny[msg] {
input.request.kind.kind == "Pod"
container := input.request.object.spec.containers[_]
container.securityContext.privileged == true
msg := sprintf("Privileged containers are not allowed: %v", [container.name])
}
deny[msg] {
input.request.kind.kind == "Pod"
container := input.request.object.spec.containers[_]
not container.securityContext.runAsNonRoot
msg := sprintf("Container must run as non-root: %v", [container.name])
}
```
**Example: Compliance Control Validation (SOC2)**
```rego
package compliance.soc2
import future.keywords.if
# CC6.1: Logical and physical access controls
deny[msg] {
input.kind == "Deployment"
not input.spec.template.metadata.labels["data-classification"]
msg := "SOC2 CC6.1: All deployments must have data-classification label"
}
# CC6.6: Encryption in transit
deny[msg] {
input.kind == "Service"
input.spec.type == "LoadBalancer"
not input.metadata.annotations["service.beta.kubernetes.io/aws-load-balancer-ssl-cert"]
msg := "SOC2 CC6.6: LoadBalancer services must use SSL/TLS encryption"
}
```
### Step 3: Test Policies with Unit Tests
Write comprehensive tests for policy validation:
```rego
package kubernetes.admission_test

import data.kubernetes.admission

test_deny_privileged_container {
    test_input := {
        "request": {
            "kind": {"kind": "Pod"},
            "object": {
                "spec": {
                    "containers": [{
                        "name": "nginx",
                        "securityContext": {"privileged": true}
                    }]
                }
            }
        }
    }
    count(admission.deny) > 0 with input as test_input
}

test_allow_unprivileged_container {
    test_input := {
        "request": {
            "kind": {"kind": "Pod"},
            "object": {
                "spec": {
                    "containers": [{
                        "name": "nginx",
                        "securityContext": {"privileged": false, "runAsNonRoot": true}
                    }]
                }
            }
        }
    }
    count(admission.deny) == 0 with input as test_input
}
```
Run tests:
```bash
opa test . --verbose
```
### Step 4: Evaluate Policies Against Configuration
Use the bundled evaluation script for policy validation:
```bash
# Evaluate single file
./scripts/evaluate_policy.py --policy policies/ --input config.yaml
# Evaluate directory of configurations
./scripts/evaluate_policy.py --policy policies/ --input configs/ --recursive
# Output results in JSON format for CI/CD integration
./scripts/evaluate_policy.py --policy policies/ --input config.yaml --format json
```
Or use OPA directly:
```bash
# Evaluate with formatted output
opa eval --data policies/ --input config.yaml --format pretty 'data.compliance.violations'
# Bundle evaluation for complex policies
opa eval --bundle policies.tar.gz --input config.yaml 'data'
```
### Step 5: Integrate with CI/CD Pipelines
Add policy validation to your CI/CD workflow:
**GitHub Actions Example:**
```yaml
- name: Validate Policies
uses: open-policy-agent/setup-opa@v2
with:
version: latest
- name: Run Policy Tests
run: opa test policies/ --verbose
- name: Evaluate Configuration
  run: |
    opa eval --data policies/ --input deployments/ \
      --format json 'data.compliance.violations' > violations.json
    if [ "$(jq '.result[0].expressions[0].value | length' violations.json)" -gt 0 ]; then
      echo "Policy violations detected!"
      cat violations.json
      exit 1
    fi
```
**GitLab CI Example:**
```yaml
policy-validation:
image: openpolicyagent/opa:latest
script:
- opa test policies/ --verbose
- opa eval --data policies/ --input configs/ --format pretty 'data.compliance.violations'
artifacts:
reports:
junit: test-results.xml
```
### Step 6: Deploy as Kubernetes Admission Controller
Enforce policies at cluster level using OPA Gatekeeper:
```bash
# Install OPA Gatekeeper
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/deploy/gatekeeper.yaml
# Apply constraint template
kubectl apply -f assets/k8s-constraint-template.yaml
# Apply constraint
kubectl apply -f assets/k8s-constraint.yaml
# Test admission control
kubectl apply -f test-pod.yaml # Should be denied if violates policy
```
### Step 7: Monitor Policy Compliance
Generate compliance reports using the bundled reporting script:
```bash
# Generate compliance report
./scripts/generate_report.py --policy policies/ --audit-logs audit.json --output compliance-report.html
# Export violations for SIEM integration
./scripts/generate_report.py --policy policies/ --audit-logs audit.json --format json --output violations.json
```
## Security Considerations
- **Policy Versioning**: Store policies in version control with change tracking and approval workflows
- **Least Privilege**: Grant minimal permissions for policy evaluation - OPA should run with read-only access to configurations
- **Sensitive Data**: Avoid embedding secrets in policies - use external data sources or encrypted configs
- **Audit Logging**: Log all policy evaluations, violations, and exceptions for compliance auditing
- **Policy Testing**: Maintain comprehensive test coverage (>80%) for all policy rules
- **Separation of Duties**: Separate policy authors from policy enforcers; require peer review for policy changes
- **Compliance Mapping**: Map policies to specific compliance controls (SOC2 CC6.1, PCI-DSS 8.2.1) for audit traceability
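One way to keep that compliance mapping machine-readable is to attach OPA metadata annotations to the rules themselves, which OPA tooling (e.g. `opa inspect`) can surface during audits. A minimal sketch; the rule body is an illustrative least-privilege check and the listed control IDs are examples drawn from the frameworks above:
```rego
package compliance.soc2

# METADATA
# title: Restrict cluster-admin bindings
# description: Cluster-admin grants bypass least-privilege access controls.
# custom:
#   controls:
#     - SOC2 CC6.1
#     - NIST PR.AC-4
deny[msg] {
    input.kind == "RoleBinding"
    input.roleRef.name == "cluster-admin"
    msg := sprintf("SOC2 CC6.1: cluster-admin binding is not allowed: %v", [input.metadata.name])
}
```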
## Bundled Resources
### Scripts (`scripts/`)
- `evaluate_policy.py` - Evaluate OPA policies against configuration files with formatted output
- `generate_report.py` - Generate compliance reports from policy evaluation results
- `test_policies.sh` - Run OPA policy unit tests with coverage reporting
### References (`references/`)
- `rego-patterns.md` - Common Rego patterns for security and compliance policies
- `compliance-frameworks.md` - Policy templates mapped to SOC2, PCI-DSS, GDPR, HIPAA controls
- `kubernetes-security.md` - Kubernetes security policies and admission control patterns
- `iac-policies.md` - Infrastructure-as-code policy validation for Terraform, CloudFormation
### Assets (`assets/`)
- `k8s-pod-security.rego` - Kubernetes pod security policy template
- `k8s-constraint-template.yaml` - OPA Gatekeeper constraint template
- `k8s-constraint.yaml` - Example Gatekeeper constraint configuration
- `soc2-compliance.rego` - SOC2 compliance controls as OPA policies
- `pci-dss-compliance.rego` - PCI-DSS requirements as OPA policies
- `gdpr-compliance.rego` - GDPR data protection policies
- `terraform-security.rego` - Terraform security best practices policies
- `ci-cd-pipeline.yaml` - CI/CD integration examples (GitHub Actions, GitLab CI)
## Common Patterns
### Pattern 1: Kubernetes Admission Control
Enforce security policies at pod creation time:
```rego
package kubernetes.admission
deny[msg] {
input.request.kind.kind == "Pod"
not input.request.object.spec.securityContext.runAsNonRoot
msg := "Pods must run as non-root user"
}
```
### Pattern 2: Infrastructure-as-Code Validation
Validate Terraform configurations before apply:
```rego
package terraform.security
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_s3_bucket"
not resource.change.after.server_side_encryption_configuration
msg := sprintf("S3 bucket %v must have encryption enabled", [resource.name])
}
```
### Pattern 3: Compliance Framework Mapping
Map policies to specific compliance controls:
```rego
package compliance.soc2
# SOC2 CC6.1: Logical and physical access controls
cc6_1_violations[msg] {
input.kind == "RoleBinding"
input.roleRef.name == "cluster-admin"
msg := sprintf("SOC2 CC6.1 VIOLATION: cluster-admin binding for %v", [input.metadata.name])
}
```
### Pattern 4: Data Classification Enforcement
Enforce data handling policies based on classification:
```rego
package data.classification
deny[msg] {
input.metadata.labels["data-classification"] == "restricted"
input.spec.template.spec.volumes[_].hostPath
msg := "Restricted data cannot use hostPath volumes"
}
```
### Pattern 5: API Authorization Policies
Implement attribute-based access control (ABAC):
```rego
package api.authz
import future.keywords.if
allow if {
input.method == "GET"
input.path[0] == "public"
}
allow if {
input.method == "GET"
input.user.role == "admin"
}
allow if {
input.method == "POST"
input.user.role == "editor"
input.resource.owner == input.user.id
}
```
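A quick unit test sketch for the policy above, using `with input as` to supply illustrative requests:
```rego
package api.authz_test

import data.api.authz

test_editor_can_post_own_resource {
    authz.allow with input as {
        "method": "POST",
        "user": {"role": "editor", "id": "u-123"},
        "resource": {"owner": "u-123"}
    }
}

test_viewer_cannot_post_foreign_resource {
    not authz.allow with input as {
        "method": "POST",
        "user": {"role": "viewer", "id": "u-456"},
        "resource": {"owner": "u-123"}
    }
}
```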
## Integration Points
- **CI/CD Pipelines**: GitHub Actions, GitLab CI, Jenkins, CircleCI - validate policies before deployment
- **Kubernetes**: OPA Gatekeeper admission controller for runtime policy enforcement
- **Terraform/IaC**: Pre-deployment validation using `conftest` or the OPA CLI (see the sketch after this list)
- **API Gateways**: Kong, Envoy, NGINX - authorize requests using OPA policies
- **Monitoring/SIEM**: Export policy violations to Splunk, ELK, Datadog for security monitoring
- **Compliance Tools**: Integrate with compliance platforms for control validation and audit trails
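For the Terraform/IaC path, `conftest` evaluates `deny`, `violation`, and `warn` rules (by default from the `main` package in the local `policy/` directory) against the rendered plan, so the Rego style used throughout this skill carries over directly. A minimal sketch, assuming a plan exported with `terraform show -json`:
```rego
# policy/main.rego - evaluated with: conftest test tfplan.json
package main

deny[msg] {
    resource := input.resource_changes[_]
    resource.type == "aws_s3_bucket"
    not resource.change.after.server_side_encryption_configuration
    msg := sprintf("S3 bucket %v must have encryption enabled", [resource.name])
}
```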
## Troubleshooting
### Issue: Policy Evaluation Returns Unexpected Results
**Solution**:
- Enable trace mode: `opa eval --data policy.rego --input input.json --explain full 'data.example.allow'`
- Validate input data structure matches policy expectations
- Check for typos in policy rules or variable names
- Use `opa fmt` to format policies and catch syntax errors
### Issue: Kubernetes Admission Control Not Blocking Violations
**Solution**:
- Verify Gatekeeper is running: `kubectl get pods -n gatekeeper-system`
- Check constraint status: `kubectl get constraints`
- Review audit logs: `kubectl logs -n gatekeeper-system -l control-plane=controller-manager`
- Ensure constraint template is properly defined and matches policy expectations
### Issue: Policy Tests Failing
**Solution**:
- Run tests with verbose output: `opa test . --verbose`
- Check test input data matches expected format
- Verify policy package names match between policy and test files
- Use `print()` statements in Rego for debugging
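For example, a throwaway test that prints the actual deny set (visible in the `opa test --verbose` output) often reveals why an expected rule did or did not fire; a sketch against the admission policy from Step 2:
```rego
package kubernetes.admission_test

import data.kubernetes.admission

test_debug_deny_messages {
    test_input := {"request": {
        "kind": {"kind": "Pod"},
        "object": {"spec": {"containers": [{"name": "nginx"}]}}
    }}

    # Bind the deny set for this input, then print it for inspection
    msgs := admission.deny with input as test_input
    print("deny messages:", msgs)
    count(msgs) > 0
}
```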
### Issue: Performance Degradation with Large Policy Sets
**Solution**:
- Use policy bundles: `opa build policies/ -o bundle.tar.gz`
- Enable partial evaluation for complex policies
- Optimize policy rules to reduce computational complexity (see the sketch after this list)
- Structure rules around equality checks on `input` fields so OPA's rule indexing can apply
- Consider splitting large policy sets into separate evaluation domains
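As a concrete instance of the rule-optimization point, membership checks against a set are generally cheaper than iterating a list, because OPA can resolve the lookup without scanning; a sketch under a hypothetical `examples.performance` package:
```rego
package examples.performance

# Slower: walks the whole list for every capability checked
blocked_list := ["SYS_ADMIN", "NET_ADMIN", "SYS_PTRACE"]

is_blocked_slow(capability) {
    blocked_list[_] == capability
}

# Faster: set membership lookup
blocked_set := {"SYS_ADMIN", "NET_ADMIN", "SYS_PTRACE"}

is_blocked(capability) {
    blocked_set[capability]
}
```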
## References
- [OPA Documentation](https://www.openpolicyagent.org/docs/latest/)
- [Rego Language Reference](https://www.openpolicyagent.org/docs/latest/policy-language/)
- [OPA Gatekeeper](https://open-policy-agent.github.io/gatekeeper/website/)
- [Conftest](https://www.conftest.dev/)
- [OPA Kubernetes Tutorial](https://www.openpolicyagent.org/docs/latest/kubernetes-tutorial/)
- [SOC2 Security Controls](https://www.aicpa.org/interestareas/frc/assuranceadvisoryservices/aicpasoc2report.html)
- [PCI-DSS Requirements](https://www.pcisecuritystandards.org/)
- [GDPR Compliance Guide](https://gdpr.eu/)


@@ -0,0 +1,9 @@
# Assets Directory
Place files that will be used in the output Claude produces:
- Templates
- Configuration files
- Images/logos
- Boilerplate code
These files are NOT loaded into context but copied/modified in output.


@@ -0,0 +1,234 @@
# GitHub Actions CI/CD Pipeline with OPA Policy Validation
name: OPA Policy Validation
on:
push:
branches: [ main, develop ]
pull_request:
branches: [ main ]
jobs:
# Test OPA policies with unit tests
test-policies:
name: Test OPA Policies
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Setup OPA
uses: open-policy-agent/setup-opa@v2
with:
version: latest
- name: Run Policy Tests
run: |
opa test policies/ --verbose --coverage
opa test policies/ --coverage --format=json > coverage.json
- name: Check Coverage Threshold
run: |
COVERAGE=$(jq -r '.coverage' coverage.json | awk '{print int($1)}')
if [ "$COVERAGE" -lt 80 ]; then
echo "Coverage $COVERAGE% is below threshold 80%"
exit 1
fi
echo "Coverage: $COVERAGE%"
# Validate Kubernetes manifests
validate-kubernetes:
name: Validate Kubernetes Configs
runs-on: ubuntu-latest
needs: test-policies
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Setup OPA
uses: open-policy-agent/setup-opa@v2
- name: Validate Kubernetes Manifests
  run: |
    shopt -s globstar
    for file in k8s/**/*.yaml; do
      echo "Validating $file"
      opa eval --data policies/ --input "$file" \
        --format json 'data.kubernetes.admission.deny' > violations.json
      if [ "$(jq '.result[0].expressions[0].value | length' violations.json)" -gt 0 ]; then
        echo "Policy violations found in $file:"
        jq '.result[0].expressions[0].value' violations.json
        exit 1
      fi
    done
- name: Generate Validation Report
if: always()
run: |
./scripts/generate_report.py \
--policy policies/ \
--audit-logs violations.json \
--format html \
--output validation-report.html
- name: Upload Report
if: always()
uses: actions/upload-artifact@v3
with:
name: validation-report
path: validation-report.html
# Validate Terraform configurations
validate-terraform:
name: Validate Terraform Configs
runs-on: ubuntu-latest
needs: test-policies
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Setup Terraform
uses: hashicorp/setup-terraform@v2
- name: Setup OPA
uses: open-policy-agent/setup-opa@v2
- name: Terraform Init
run: terraform init
- name: Generate Terraform Plan
run: |
terraform plan -out=tfplan.binary
terraform show -json tfplan.binary > tfplan.json
- name: Validate with OPA
  run: |
    opa eval --data policies/terraform/ --input tfplan.json \
      --format json 'data.terraform.security.deny' > terraform-violations.json
    if [ "$(jq '.result[0].expressions[0].value | length' terraform-violations.json)" -gt 0 ]; then
      echo "Terraform policy violations detected:"
      jq '.result[0].expressions[0].value' terraform-violations.json
      exit 1
    fi
# Compliance validation for production
compliance-check:
name: Compliance Validation
runs-on: ubuntu-latest
if: github.ref == 'refs/heads/main'
needs: [validate-kubernetes, validate-terraform]
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Setup OPA
uses: open-policy-agent/setup-opa@v2
- name: SOC2 Compliance Check
run: |
opa eval --data policies/compliance/soc2-compliance.rego \
--input deployments/ \
--format json 'data.compliance.soc2.deny' \
> soc2-violations.json
- name: PCI-DSS Compliance Check
run: |
opa eval --data policies/compliance/pci-dss-compliance.rego \
--input deployments/ \
--format json 'data.compliance.pci.deny' \
> pci-violations.json
- name: GDPR Compliance Check
run: |
opa eval --data policies/compliance/gdpr-compliance.rego \
--input deployments/ \
--format json 'data.compliance.gdpr.deny' \
> gdpr-violations.json
- name: Generate Compliance Report
run: |
./scripts/generate_report.py \
--policy policies/compliance/ \
--audit-logs soc2-violations.json \
--format html \
--output compliance-report.html
- name: Upload Compliance Report
uses: actions/upload-artifact@v3
with:
name: compliance-report
path: compliance-report.html
- name: Fail on Violations
  run: |
    TOTAL_VIOLATIONS=$(jq -s 'map(.result[0].expressions[0].value | length) | add' ./*-violations.json)
    if [ "$TOTAL_VIOLATIONS" -gt 0 ]; then
      echo "Found $TOTAL_VIOLATIONS compliance violations"
      exit 1
    fi
---
# GitLab CI/CD Pipeline Example
# .gitlab-ci.yml
stages:
- test
- validate
- compliance
variables:
OPA_VERSION: "latest"
test-policies:
stage: test
image: openpolicyagent/opa:${OPA_VERSION}
script:
- opa test policies/ --verbose --coverage
- opa test policies/ --format=json --coverage > coverage.json
artifacts:
  paths:
    - coverage.json
validate-kubernetes:
stage: validate
image: openpolicyagent/opa:${OPA_VERSION}
script:
- |
  for file in $(find k8s -name '*.yaml'); do
    echo "Validating $file"
    opa eval --fail-defined --data policies/ --input "$file" \
      'data.kubernetes.admission.deny[msg]' || exit 1
  done
only:
- merge_requests
- main
validate-terraform:
stage: validate
image: hashicorp/terraform:latest
before_script:
- apk add --no-cache curl jq
- curl -L -o /usr/local/bin/opa https://openpolicyagent.org/downloads/latest/opa_linux_amd64
- chmod +x /usr/local/bin/opa
script:
- terraform init
- terraform plan -out=tfplan.binary
- terraform show -json tfplan.binary > tfplan.json
- opa eval --fail-defined --data policies/terraform/ --input tfplan.json 'data.terraform.security.deny[msg]'
only:
- merge_requests
- main
compliance-check:
stage: compliance
image: openpolicyagent/opa:${OPA_VERSION}
script:
- opa eval --data policies/compliance/ --input deployments/ 'data.compliance'
artifacts:
reports:
junit: compliance-report.xml
only:
- main


@@ -0,0 +1,159 @@
package compliance.gdpr
import future.keywords.if
# GDPR Article 25: Data Protection by Design and by Default
# Require data classification labels
deny[msg] {
input.kind == "Deployment"
processes_personal_data(input)
not input.metadata.labels["data-classification"]
msg := {
"control": "GDPR Article 25",
"severity": "high",
"violation": sprintf("Deployment processing personal data requires classification: %v", [input.metadata.name]),
"remediation": "Add label: data-classification=personal|sensitive|public",
}
}
# Data minimization - limit replicas for personal data
deny[msg] {
input.kind == "Deployment"
input.metadata.labels["data-type"] == "personal"
input.spec.replicas > 3
not input.metadata.annotations["gdpr.justification"]
msg := {
"control": "GDPR Article 25",
"severity": "medium",
"violation": sprintf("Excessive replicas for personal data: %v", [input.metadata.name]),
"remediation": "Reduce replicas or add justification annotation",
}
}
# Require purpose limitation annotation
deny[msg] {
input.kind == "Deployment"
processes_personal_data(input)
not input.metadata.annotations["data-purpose"]
msg := {
"control": "GDPR Article 25",
"severity": "medium",
"violation": sprintf("Personal data deployment requires purpose annotation: %v", [input.metadata.name]),
"remediation": "Add annotation: data-purpose=<specific purpose>",
}
}
processes_personal_data(resource) {
resource.metadata.labels["data-type"] == "personal"
}
processes_personal_data(resource) {
resource.metadata.labels["data-type"] == "pii"
}
processes_personal_data(resource) {
contains(lower(resource.metadata.name), "user")
}
# GDPR Article 32: Security of Processing
# Require encryption for personal data volumes
deny[msg] {
input.kind == "PersistentVolumeClaim"
input.metadata.labels["data-type"] == "personal"
not input.metadata.annotations["volume.encryption.enabled"] == "true"
msg := {
"control": "GDPR Article 32",
"severity": "high",
"violation": sprintf("Personal data volume requires encryption: %v", [input.metadata.name]),
"remediation": "Enable volume encryption",
}
}
# Require TLS for personal data services
deny[msg] {
input.kind == "Service"
input.metadata.labels["data-type"] == "personal"
not input.metadata.annotations["tls.enabled"] == "true"
msg := {
"control": "GDPR Article 32",
"severity": "high",
"violation": sprintf("Personal data service requires TLS: %v", [input.metadata.name]),
"remediation": "Enable TLS encryption",
}
}
# Require pseudonymization or anonymization
deny[msg] {
input.kind == "Deployment"
processes_personal_data(input)
not input.metadata.annotations["data-protection.method"]
msg := {
"control": "GDPR Article 32",
"severity": "medium",
"violation": sprintf("Personal data deployment requires protection method: %v", [input.metadata.name]),
"remediation": "Add annotation: data-protection.method=pseudonymization|anonymization|encryption",
}
}
# GDPR Article 33: Breach Notification
# Require incident response plan
deny[msg] {
input.kind == "Deployment"
processes_personal_data(input)
input.metadata.namespace == "production"
not input.metadata.annotations["incident-response.plan"]
msg := {
"control": "GDPR Article 33",
"severity": "medium",
"violation": sprintf("Production personal data deployment requires incident response plan: %v", [input.metadata.name]),
"remediation": "Add annotation: incident-response.plan=<plan-id>",
}
}
# GDPR Article 30: Records of Processing Activities
# Require data processing record
deny[msg] {
input.kind == "Deployment"
processes_personal_data(input)
not input.metadata.annotations["dpa.record-id"]
msg := {
"control": "GDPR Article 30",
"severity": "medium",
"violation": sprintf("Personal data deployment requires processing record: %v", [input.metadata.name]),
"remediation": "Add annotation: dpa.record-id=<record-id>",
}
}
# GDPR Article 35: Data Protection Impact Assessment (DPIA)
# Require DPIA for high-risk processing
deny[msg] {
input.kind == "Deployment"
input.metadata.labels["data-type"] == "sensitive"
not input.metadata.annotations["dpia.reference"]
msg := {
"control": "GDPR Article 35",
"severity": "high",
"violation": sprintf("Sensitive data deployment requires DPIA: %v", [input.metadata.name]),
"remediation": "Conduct DPIA and add annotation: dpia.reference=<dpia-id>",
}
}
# GDPR Article 17: Right to Erasure (Right to be Forgotten)
# Require data retention policy
deny[msg] {
input.kind == "PersistentVolumeClaim"
input.metadata.labels["data-type"] == "personal"
not input.metadata.annotations["data-retention.days"]
msg := {
"control": "GDPR Article 17",
"severity": "medium",
"violation": sprintf("Personal data volume requires retention policy: %v", [input.metadata.name]),
"remediation": "Add annotation: data-retention.days=<number>",
}
}


@@ -0,0 +1,87 @@
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
name: k8spodsecurity
annotations:
description: "Enforces pod security standards including privileged containers, host namespaces, and capabilities"
spec:
crd:
spec:
names:
kind: K8sPodSecurity
validation:
openAPIV3Schema:
type: object
properties:
allowPrivileged:
type: boolean
description: "Allow privileged containers"
allowHostNamespace:
type: boolean
description: "Allow host namespace usage"
allowedCapabilities:
type: array
description: "List of allowed capabilities"
items:
type: string
targets:
- target: admission.k8s.gatekeeper.sh
rego: |
package k8spodsecurity
import future.keywords.contains
import future.keywords.if
violation[{"msg": msg}] {
not input.parameters.allowPrivileged
container := input.review.object.spec.containers[_]
container.securityContext.privileged == true
msg := sprintf("Privileged container not allowed: %v", [container.name])
}
violation[{"msg": msg}] {
container := input.review.object.spec.containers[_]
not container.securityContext.runAsNonRoot
msg := sprintf("Container must run as non-root: %v", [container.name])
}
violation[{"msg": msg}] {
container := input.review.object.spec.containers[_]
not container.securityContext.readOnlyRootFilesystem
msg := sprintf("Container must use read-only root filesystem: %v", [container.name])
}
violation[{"msg": msg}] {
not input.parameters.allowHostNamespace
input.review.object.spec.hostPID == true
msg := "Host PID namespace not allowed"
}
violation[{"msg": msg}] {
not input.parameters.allowHostNamespace
input.review.object.spec.hostIPC == true
msg := "Host IPC namespace not allowed"
}
violation[{"msg": msg}] {
not input.parameters.allowHostNamespace
input.review.object.spec.hostNetwork == true
msg := "Host network namespace not allowed"
}
violation[{"msg": msg}] {
volume := input.review.object.spec.volumes[_]
volume.hostPath
msg := sprintf("hostPath volume not allowed: %v", [volume.name])
}
violation[{"msg": msg}] {
container := input.review.object.spec.containers[_]
capability := container.securityContext.capabilities.add[_]
not is_allowed_capability(capability)
msg := sprintf("Capability %v not allowed for container: %v", [capability, container.name])
}
is_allowed_capability(capability) {
input.parameters.allowedCapabilities[_] == capability
}


@@ -0,0 +1,20 @@
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPodSecurity
metadata:
name: pod-security-policy
spec:
match:
kinds:
- apiGroups: [""]
kinds: ["Pod"]
namespaces:
- "production"
- "staging"
excludedNamespaces:
- "kube-system"
- "gatekeeper-system"
parameters:
allowPrivileged: false
allowHostNamespace: false
allowedCapabilities:
- "NET_BIND_SERVICE" # Allow binding to privileged ports


@@ -0,0 +1,90 @@
package kubernetes.admission
import future.keywords.contains
import future.keywords.if
# Deny privileged containers
deny[msg] {
input.request.kind.kind == "Pod"
container := input.request.object.spec.containers[_]
container.securityContext.privileged == true
msg := sprintf("Privileged container is not allowed: %v", [container.name])
}
# Enforce non-root user
deny[msg] {
input.request.kind.kind == "Pod"
container := input.request.object.spec.containers[_]
not container.securityContext.runAsNonRoot
msg := sprintf("Container must run as non-root user: %v", [container.name])
}
# Require read-only root filesystem
deny[msg] {
input.request.kind.kind == "Pod"
container := input.request.object.spec.containers[_]
not container.securityContext.readOnlyRootFilesystem
msg := sprintf("Container must use read-only root filesystem: %v", [container.name])
}
# Deny host namespaces
deny[msg] {
input.request.kind.kind == "Pod"
input.request.object.spec.hostPID == true
msg := "Sharing the host PID namespace is not allowed"
}
deny[msg] {
input.request.kind.kind == "Pod"
input.request.object.spec.hostIPC == true
msg := "Sharing the host IPC namespace is not allowed"
}
deny[msg] {
input.request.kind.kind == "Pod"
input.request.object.spec.hostNetwork == true
msg := "Sharing the host network namespace is not allowed"
}
# Deny hostPath volumes
deny[msg] {
input.request.kind.kind == "Pod"
volume := input.request.object.spec.volumes[_]
volume.hostPath
msg := sprintf("hostPath volumes are not allowed: %v", [volume.name])
}
# Require dropping ALL capabilities
deny[msg] {
input.request.kind.kind == "Pod"
container := input.request.object.spec.containers[_]
not drops_all_capabilities(container)
msg := sprintf("Container must drop ALL capabilities: %v", [container.name])
}
drops_all_capabilities(container) {
container.securityContext.capabilities.drop[_] == "ALL"
}
# Deny dangerous capabilities
# Kubernetes securityContext capability names omit the CAP_ prefix
dangerous_capabilities := [
    "SYS_ADMIN",
    "NET_ADMIN",
    "SYS_PTRACE",
    "SYS_MODULE",
]
deny[msg] {
input.request.kind.kind == "Pod"
container := input.request.object.spec.containers[_]
capability := container.securityContext.capabilities.add[_]
dangerous_capabilities[_] == capability
msg := sprintf("Capability %v is not allowed for container: %v", [capability, container.name])
}
# Require seccomp profile
deny[msg] {
input.request.kind.kind == "Pod"
not input.request.object.spec.securityContext.seccompProfile
msg := "Pod must define a seccomp profile"
}


@@ -0,0 +1,131 @@
package compliance.pci
import future.keywords.if
# PCI-DSS Requirement 1.2: Firewall Configuration
# Require network policies for cardholder data
deny[msg] {
input.kind == "Namespace"
input.metadata.labels["pci.scope"] == "in-scope"
not input.metadata.annotations["network-policy.enabled"] == "true"
msg := {
"control": "PCI-DSS 1.2",
"severity": "high",
"violation": sprintf("PCI in-scope namespace requires network policy: %v", [input.metadata.name]),
"remediation": "Create NetworkPolicy to restrict traffic and add annotation",
}
}
# PCI-DSS Requirement 2.2: System Hardening
# Container hardening - read-only filesystem
deny[msg] {
input.kind == "Pod"
input.metadata.labels["pci.scope"] == "in-scope"
container := input.spec.containers[_]
not container.securityContext.readOnlyRootFilesystem
msg := {
"control": "PCI-DSS 2.2",
"severity": "high",
"violation": sprintf("PCI container requires read-only filesystem: %v", [container.name]),
"remediation": "Set securityContext.readOnlyRootFilesystem: true",
}
}
# Container hardening - no privilege escalation
deny[msg] {
input.kind == "Pod"
input.metadata.labels["pci.scope"] == "in-scope"
container := input.spec.containers[_]
not container.securityContext.allowPrivilegeEscalation == false
msg := {
"control": "PCI-DSS 2.2",
"severity": "high",
"violation": sprintf("PCI container allows privilege escalation: %v", [container.name]),
"remediation": "Set securityContext.allowPrivilegeEscalation: false",
}
}
# PCI-DSS Requirement 3.4: Encryption of Cardholder Data
# Require encryption for PCI data at rest
deny[msg] {
input.kind == "PersistentVolumeClaim"
input.metadata.labels["pci.scope"] == "in-scope"
not input.metadata.annotations["volume.encryption.enabled"] == "true"
msg := {
"control": "PCI-DSS 3.4",
"severity": "critical",
"violation": sprintf("PCI volume requires encryption: %v", [input.metadata.name]),
"remediation": "Enable volume encryption",
}
}
# Require TLS for PCI data in transit
deny[msg] {
input.kind == "Service"
input.metadata.labels["pci.scope"] == "in-scope"
not input.metadata.annotations["tls.enabled"] == "true"
msg := {
"control": "PCI-DSS 4.1",
"severity": "critical",
"violation": sprintf("PCI service requires TLS encryption: %v", [input.metadata.name]),
"remediation": "Enable TLS for data in transit",
}
}
# PCI-DSS Requirement 8.2.1: Strong Authentication
# Require MFA for payment endpoints
deny[msg] {
input.kind == "Ingress"
input.metadata.labels["payment.enabled"] == "true"
not input.metadata.annotations["mfa.required"] == "true"
msg := {
"control": "PCI-DSS 8.2.1",
"severity": "high",
"violation": sprintf("Payment ingress requires MFA: %v", [input.metadata.name]),
"remediation": "Enable MFA via annotation: mfa.required=true",
}
}
# PCI-DSS Requirement 10.2: Audit Logging
# Require audit logging for PCI components
deny[msg] {
input.kind == "Deployment"
input.metadata.labels["pci.scope"] == "in-scope"
not has_audit_logging(input)
msg := {
"control": "PCI-DSS 10.2",
"severity": "high",
"violation": sprintf("PCI deployment requires audit logging: %v", [input.metadata.name]),
"remediation": "Deploy audit logging sidecar or enable centralized logging",
}
}
has_audit_logging(resource) {
resource.spec.template.metadata.annotations["audit.enabled"] == "true"
}
has_audit_logging(resource) {
container := resource.spec.template.spec.containers[_]
contains(container.name, "audit")
}
# PCI-DSS Requirement 11.3: Penetration Testing
# Require security testing evidence for PCI deployments
deny[msg] {
input.kind == "Deployment"
input.metadata.labels["pci.scope"] == "in-scope"
input.metadata.namespace == "production"
not input.metadata.annotations["security-testing.date"]
msg := {
"control": "PCI-DSS 11.3",
"severity": "medium",
"violation": sprintf("PCI deployment requires security testing evidence: %v", [input.metadata.name]),
"remediation": "Add annotation: security-testing.date=YYYY-MM-DD",
}
}


@@ -0,0 +1,107 @@
package compliance.soc2
import future.keywords.if
# SOC2 CC6.1: Logical and Physical Access Controls
# Deny overly permissive RBAC
deny[msg] {
input.kind == "RoleBinding"
input.roleRef.name == "cluster-admin"
not startswith(input.subjects[_].name, "system:")
msg := {
"control": "SOC2 CC6.1",
"severity": "high",
"violation": sprintf("Overly permissive cluster-admin binding: %v", [input.metadata.name]),
"remediation": "Use least-privilege roles instead of cluster-admin",
}
}
# Require authentication for external services
deny[msg] {
input.kind == "Service"
input.spec.type == "LoadBalancer"
not input.metadata.annotations["auth.required"] == "true"
msg := {
"control": "SOC2 CC6.1",
"severity": "medium",
"violation": sprintf("External service without authentication: %v", [input.metadata.name]),
"remediation": "Add annotation: auth.required=true",
}
}
# SOC2 CC6.6: Encryption in Transit
# Require TLS for Ingress
deny[msg] {
input.kind == "Ingress"
not input.spec.tls
msg := {
"control": "SOC2 CC6.6",
"severity": "high",
"violation": sprintf("Ingress without TLS: %v", [input.metadata.name]),
"remediation": "Configure spec.tls with valid certificates",
}
}
# Require TLS for LoadBalancer
deny[msg] {
input.kind == "Service"
input.spec.type == "LoadBalancer"
not input.metadata.annotations["service.beta.kubernetes.io/aws-load-balancer-ssl-cert"]
msg := {
"control": "SOC2 CC6.6",
"severity": "high",
"violation": sprintf("LoadBalancer without SSL/TLS: %v", [input.metadata.name]),
"remediation": "Add SSL certificate annotation",
}
}
# SOC2 CC6.7: Encryption at Rest
# Require encrypted volumes for confidential data
deny[msg] {
input.kind == "PersistentVolumeClaim"
input.metadata.labels["data-classification"] == "confidential"
not input.metadata.annotations["volume.beta.kubernetes.io/storage-encrypted"] == "true"
msg := {
"control": "SOC2 CC6.7",
"severity": "high",
"violation": sprintf("Unencrypted volume for confidential data: %v", [input.metadata.name]),
"remediation": "Enable volume encryption annotation",
}
}
# SOC2 CC7.2: System Monitoring
# Require audit logging for critical systems
deny[msg] {
input.kind == "Deployment"
input.metadata.labels["critical-system"] == "true"
not has_audit_logging(input)
msg := {
"control": "SOC2 CC7.2",
"severity": "medium",
"violation": sprintf("Critical system without audit logging: %v", [input.metadata.name]),
"remediation": "Enable audit logging via sidecar or annotations",
}
}
has_audit_logging(resource) {
resource.spec.template.metadata.annotations["audit.enabled"] == "true"
}
# SOC2 CC8.1: Change Management
# Require approval for production changes
deny[msg] {
input.kind == "Deployment"
input.metadata.namespace == "production"
not input.metadata.annotations["change-request.id"]
msg := {
"control": "SOC2 CC8.1",
"severity": "medium",
"violation": sprintf("Production deployment without change request: %v", [input.metadata.name]),
"remediation": "Add annotation: change-request.id=CR-XXXX",
}
}


@@ -0,0 +1,223 @@
package terraform.security
import future.keywords.if
# AWS S3 Bucket Security
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_s3_bucket"
not has_encryption(resource)
msg := {
"resource": resource.name,
"type": "aws_s3_bucket",
"severity": "high",
"violation": "S3 bucket must have encryption enabled",
"remediation": "Add server_side_encryption_configuration block",
}
}
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_s3_bucket"
not has_versioning(resource)
msg := {
"resource": resource.name,
"type": "aws_s3_bucket",
"severity": "medium",
"violation": "S3 bucket should have versioning enabled",
"remediation": "Add versioning configuration with enabled = true",
}
}
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_s3_bucket_public_access_block"
resource.change.after.block_public_acls == false
msg := {
"resource": resource.name,
"type": "aws_s3_bucket_public_access_block",
"severity": "high",
"violation": "S3 bucket must block public ACLs",
"remediation": "Set block_public_acls = true",
}
}
has_encryption(resource) {
resource.change.after.server_side_encryption_configuration
}
has_versioning(resource) {
resource.change.after.versioning[_].enabled == true
}
# AWS EC2 Security
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_instance"
not resource.change.after.metadata_options.http_tokens == "required"
msg := {
"resource": resource.name,
"type": "aws_instance",
"severity": "high",
"violation": "EC2 instance must use IMDSv2",
"remediation": "Set metadata_options.http_tokens = required",
}
}
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_instance"
resource.change.after.associate_public_ip_address == true
is_production
msg := {
"resource": resource.name,
"type": "aws_instance",
"severity": "high",
"violation": "Production EC2 instances cannot have public IPs",
"remediation": "Set associate_public_ip_address = false",
}
}
is_production {
input.variables.environment == "production"
}
# AWS RDS Security
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_db_instance"
not resource.change.after.storage_encrypted
msg := {
"resource": resource.name,
"type": "aws_db_instance",
"severity": "high",
"violation": "RDS instance must have encryption enabled",
"remediation": "Set storage_encrypted = true",
}
}
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_db_instance"
resource.change.after.publicly_accessible == true
msg := {
"resource": resource.name,
"type": "aws_db_instance",
"severity": "critical",
"violation": "RDS instance cannot be publicly accessible",
"remediation": "Set publicly_accessible = false",
}
}
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_db_instance"
backup_retention := resource.change.after.backup_retention_period
backup_retention < 7
msg := {
"resource": resource.name,
"type": "aws_db_instance",
"severity": "medium",
"violation": "RDS instance must have at least 7 days backup retention",
"remediation": "Set backup_retention_period >= 7",
}
}
# AWS IAM Security
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_iam_policy"
statement := resource.change.after.policy.Statement[_]
statement.Action[_] == "*"
msg := {
"resource": resource.name,
"type": "aws_iam_policy",
"severity": "high",
"violation": "IAM policy cannot use wildcard actions",
"remediation": "Specify explicit actions instead of *",
}
}
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_iam_policy"
statement := resource.change.after.policy.Statement[_]
statement.Resource[_] == "*"
statement.Effect == "Allow"
msg := {
"resource": resource.name,
"type": "aws_iam_policy",
"severity": "high",
"violation": "IAM policy cannot use wildcard resources with Allow",
"remediation": "Specify explicit resource ARNs",
}
}
# AWS Security Group Rules
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_security_group_rule"
resource.change.after.type == "ingress"
resource.change.after.from_port == 22
is_open_to_internet(resource.change.after.cidr_blocks)
msg := {
"resource": resource.name,
"type": "aws_security_group_rule",
"severity": "critical",
"violation": "Security group allows SSH from internet",
"remediation": "Restrict SSH access to specific IP ranges",
}
}
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_security_group_rule"
resource.change.after.type == "ingress"
resource.change.after.from_port == 3389
is_open_to_internet(resource.change.after.cidr_blocks)
msg := {
"resource": resource.name,
"type": "aws_security_group_rule",
"severity": "critical",
"violation": "Security group allows RDP from internet",
"remediation": "Restrict RDP access to specific IP ranges",
}
}
is_open_to_internet(cidr_blocks) {
cidr_blocks[_] == "0.0.0.0/0"
}
# AWS KMS Security
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_kms_key"
not resource.change.after.enable_key_rotation
msg := {
"resource": resource.name,
"type": "aws_kms_key",
"severity": "medium",
"violation": "KMS key must have automatic rotation enabled",
"remediation": "Set enable_key_rotation = true",
}
}
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_kms_key"
deletion_window := resource.change.after.deletion_window_in_days
deletion_window < 30
msg := {
"resource": resource.name,
"type": "aws_kms_key",
"severity": "medium",
"violation": "KMS key deletion window must be at least 30 days",
"remediation": "Set deletion_window_in_days >= 30",
}
}


@@ -0,0 +1,40 @@
# Reference Document Template
This file contains detailed reference material that Claude should load only when needed.
## Table of Contents
- [Section 1](#section-1)
- [Section 2](#section-2)
- [Security Standards](#security-standards)
## Section 1
Detailed information, schemas, or examples that are too large for SKILL.md.
## Section 2
Additional reference material.
## Security Standards
### OWASP Top 10
Reference relevant OWASP categories:
- A01: Broken Access Control
- A02: Cryptographic Failures
- etc.
### CWE Mappings
Map to relevant Common Weakness Enumeration categories:
- CWE-79: Cross-site Scripting
- CWE-89: SQL Injection
- etc.
### MITRE ATT&CK
Reference relevant tactics and techniques if applicable:
- TA0001: Initial Access
- T1190: Exploit Public-Facing Application
- etc.


@@ -0,0 +1,507 @@
# Compliance Framework Policy Templates
Policy templates mapped to specific compliance framework controls for SOC2, PCI-DSS, GDPR, HIPAA, and NIST.
## Table of Contents
- [SOC2 Trust Services Criteria](#soc2-trust-services-criteria)
- [PCI-DSS Requirements](#pci-dss-requirements)
- [GDPR Data Protection](#gdpr-data-protection)
- [HIPAA Security Rules](#hipaa-security-rules)
- [NIST Cybersecurity Framework](#nist-cybersecurity-framework)
## SOC2 Trust Services Criteria
### CC6.1: Logical and Physical Access Controls
**Control**: The entity implements logical access security software, infrastructure, and architectures over protected information assets to protect them from security events.
```rego
package compliance.soc2.cc6_1
# Deny overly permissive RBAC
deny[msg] {
input.kind == "RoleBinding"
input.roleRef.name == "cluster-admin"
not startswith(input.subjects[_].name, "system:")
msg := {
"control": "SOC2 CC6.1",
"violation": sprintf("Overly permissive cluster-admin binding: %v", [input.metadata.name]),
"remediation": "Use least-privilege roles instead of cluster-admin"
}
}
# Require authentication for external services
deny[msg] {
input.kind == "Service"
input.spec.type == "LoadBalancer"
not input.metadata.annotations["auth.required"] == "true"
msg := {
"control": "SOC2 CC6.1",
"violation": sprintf("External service without authentication: %v", [input.metadata.name]),
"remediation": "Add auth.required=true annotation"
}
}
# Require MFA for admin access
deny[msg] {
input.kind == "RoleBinding"
contains(input.roleRef.name, "admin")
not input.metadata.annotations["mfa.required"] == "true"
msg := {
"control": "SOC2 CC6.1",
"violation": sprintf("Admin role without MFA requirement: %v", [input.metadata.name]),
"remediation": "Add mfa.required=true annotation"
}
}
```
### CC6.6: Encryption in Transit
**Control**: The entity protects information transmitted to external parties during transmission.
```rego
package compliance.soc2.cc6_6
# Require TLS for external services
deny[msg] {
input.kind == "Ingress"
not input.spec.tls
msg := {
"control": "SOC2 CC6.6",
"violation": sprintf("Ingress without TLS: %v", [input.metadata.name]),
"remediation": "Configure spec.tls with valid certificates"
}
}
# Require TLS for LoadBalancer services
deny[msg] {
input.kind == "Service"
input.spec.type == "LoadBalancer"
not input.metadata.annotations["service.beta.kubernetes.io/aws-load-balancer-ssl-cert"]
msg := {
"control": "SOC2 CC6.6",
"violation": sprintf("LoadBalancer without SSL/TLS: %v", [input.metadata.name]),
"remediation": "Add SSL certificate annotation"
}
}
```
### CC6.7: Encryption at Rest
**Control**: The entity protects information at rest.
```rego
package compliance.soc2.cc6_7
# Require encrypted volumes
deny[msg] {
input.kind == "PersistentVolumeClaim"
input.metadata.labels["data-classification"] == "confidential"
not input.metadata.annotations["volume.beta.kubernetes.io/storage-encrypted"] == "true"
msg := {
"control": "SOC2 CC6.7",
"violation": sprintf("Unencrypted volume for confidential data: %v", [input.metadata.name]),
"remediation": "Enable volume encryption annotation"
}
}
```
### CC7.2: System Monitoring
**Control**: The entity monitors system components and the operation of those components for anomalies.
```rego
package compliance.soc2.cc7_2
# Require audit logging
deny[msg] {
input.kind == "Deployment"
input.metadata.labels["critical-system"] == "true"
not has_audit_logging(input)
msg := {
"control": "SOC2 CC7.2",
"violation": sprintf("Critical system without audit logging: %v", [input.metadata.name]),
"remediation": "Enable audit logging via sidecar or annotations"
}
}
has_audit_logging(resource) {
resource.spec.template.metadata.annotations["audit.enabled"] == "true"
}
```
## PCI-DSS Requirements
### Requirement 1.2: Firewall Configuration
**Control**: Build firewall and router configurations that restrict connections between untrusted networks.
```rego
package compliance.pci.req1_2
# Require network policies for cardholder data
deny[msg] {
input.kind == "Namespace"
input.metadata.labels["pci.scope"] == "in-scope"
not has_network_policy(input.metadata.name)
msg := {
"control": "PCI-DSS 1.2",
"violation": sprintf("PCI in-scope namespace without network policy: %v", [input.metadata.name]),
"remediation": "Create NetworkPolicy to restrict traffic"
}
}
has_network_policy(namespace) {
# Check if NetworkPolicy exists in data (requires external data)
data.network_policies[namespace]
}
```
### Requirement 2.2: System Hardening
**Control**: Develop configuration standards for all system components.
```rego
package compliance.pci.req2_2
# Container hardening requirements
deny[msg] {
input.kind == "Pod"
input.metadata.labels["pci.scope"] == "in-scope"
container := input.spec.containers[_]
not container.securityContext.readOnlyRootFilesystem
msg := {
"control": "PCI-DSS 2.2",
"violation": sprintf("PCI container without read-only filesystem: %v", [container.name]),
"remediation": "Set securityContext.readOnlyRootFilesystem: true"
}
}
deny[msg] {
input.kind == "Pod"
input.metadata.labels["pci.scope"] == "in-scope"
container := input.spec.containers[_]
not container.securityContext.allowPrivilegeEscalation == false
msg := {
"control": "PCI-DSS 2.2",
"violation": sprintf("PCI container allows privilege escalation: %v", [container.name]),
"remediation": "Set securityContext.allowPrivilegeEscalation: false"
}
}
```
### Requirement 8.2.1: Strong Authentication
**Control**: Render all authentication credentials unreadable during transmission and storage.
```rego
package compliance.pci.req8_2_1
# Require MFA for payment endpoints
deny[msg] {
input.kind == "Ingress"
input.metadata.labels["payment.enabled"] == "true"
not input.metadata.annotations["mfa.required"] == "true"
msg := {
"control": "PCI-DSS 8.2.1",
"violation": sprintf("Payment ingress without MFA: %v", [input.metadata.name]),
"remediation": "Enable MFA via annotation: mfa.required=true"
}
}
# Password strength requirements
deny[msg] {
input.kind == "ConfigMap"
input.metadata.name == "auth-config"
to_number(input.data["password.minLength"]) < 12
msg := {
"control": "PCI-DSS 8.2.1",
"violation": "Password minimum length below requirement",
"remediation": "Set password.minLength to at least 12"
}
}
```
### Requirement 10.2: Audit Logging
**Control**: Implement automated audit trails for all system components.
```rego
package compliance.pci.req10_2
# Require audit logging for PCI components
deny[msg] {
input.kind == "Deployment"
input.metadata.labels["pci.scope"] == "in-scope"
not has_audit_sidecar(input)
msg := {
"control": "PCI-DSS 10.2",
"violation": sprintf("PCI deployment without audit logging: %v", [input.metadata.name]),
"remediation": "Deploy audit logging sidecar"
}
}
has_audit_sidecar(resource) {
container := resource.spec.template.spec.containers[_]
contains(container.name, "audit")
}
```
## GDPR Data Protection
### Article 25: Data Protection by Design
**Control**: The controller shall implement appropriate technical and organizational measures.
```rego
package compliance.gdpr.art25
# Require data classification labels
deny[msg] {
input.kind == "Deployment"
processes_personal_data(input)
not input.metadata.labels["data-classification"]
msg := {
"control": "GDPR Article 25",
"violation": sprintf("Deployment processing personal data without classification: %v", [input.metadata.name]),
"remediation": "Add data-classification label"
}
}
# Data minimization - limit replicas for personal data
deny[msg] {
input.kind == "Deployment"
input.metadata.labels["data-type"] == "personal"
input.spec.replicas > 3
not input.metadata.annotations["gdpr.justification"]
msg := {
"control": "GDPR Article 25",
"violation": sprintf("Excessive replicas for personal data: %v", [input.metadata.name]),
"remediation": "Reduce replicas or add justification annotation"
}
}
processes_personal_data(resource) {
resource.metadata.labels["data-type"] == "personal"
}
processes_personal_data(resource) {
contains(lower(resource.metadata.name), "user")
}
```
### Article 32: Security of Processing
**Control**: Implement appropriate technical and organizational measures to ensure a level of security appropriate to the risk.
```rego
package compliance.gdpr.art32
# Require encryption for personal data
deny[msg] {
input.kind == "PersistentVolumeClaim"
input.metadata.labels["data-type"] == "personal"
not input.metadata.annotations["volume.encryption.enabled"] == "true"
msg := {
"control": "GDPR Article 32",
"violation": sprintf("Personal data volume without encryption: %v", [input.metadata.name]),
"remediation": "Enable volume encryption"
}
}
# Require TLS for personal data services
deny[msg] {
input.kind == "Service"
input.metadata.labels["data-type"] == "personal"
not input.metadata.annotations["tls.enabled"] == "true"
msg := {
"control": "GDPR Article 32",
"violation": sprintf("Personal data service without TLS: %v", [input.metadata.name]),
"remediation": "Enable TLS encryption"
}
}
```
## HIPAA Security Rules
### 164.308: Administrative Safeguards
**Control**: Implement policies and procedures to prevent, detect, contain, and correct security violations.
```rego
package compliance.hipaa.admin
# Require access control policies
deny[msg] {
input.kind == "Namespace"
input.metadata.labels["phi-data"] == "true"
not input.metadata.annotations["access-control.policy"]
msg := {
"control": "HIPAA 164.308",
"violation": sprintf("PHI namespace without access control policy: %v", [input.metadata.name]),
"remediation": "Document access control policy in annotation"
}
}
```
### 164.312: Technical Safeguards
**Control**: Implement technical policies and procedures for electronic information systems.
```rego
package compliance.hipaa.technical
# Encryption in transit for PHI
deny[msg] {
input.kind == "Service"
input.metadata.labels["phi-data"] == "true"
not input.metadata.annotations["tls.enabled"] == "true"
msg := {
"control": "HIPAA 164.312",
"violation": sprintf("PHI service without TLS: %v", [input.metadata.name]),
"remediation": "Enable TLS for data in transit"
}
}
# Audit logging for PHI access
deny[msg] {
input.kind == "Deployment"
input.metadata.labels["phi-data"] == "true"
not has_audit_logging(input)
msg := {
"control": "HIPAA 164.312",
"violation": sprintf("PHI deployment without audit logging: %v", [input.metadata.name]),
"remediation": "Enable audit logging for all PHI access"
}
}
has_audit_logging(resource) {
resource.spec.template.metadata.annotations["audit.enabled"] == "true"
}
# Authentication controls
deny[msg] {
input.kind == "Ingress"
input.metadata.labels["phi-data"] == "true"
not input.metadata.annotations["auth.method"]
msg := {
"control": "HIPAA 164.312",
"violation": sprintf("PHI ingress without authentication: %v", [input.metadata.name]),
"remediation": "Configure authentication method"
}
}
```
## NIST Cybersecurity Framework
### PR.AC-4: Access Control
**Control**: Access permissions and authorizations are managed, incorporating the principles of least privilege and separation of duties.
```rego
package compliance.nist.pr_ac_4
# Least privilege - no wildcard permissions
deny[msg] {
input.kind == "Role"
rule := input.rules[_]
rule.verbs[_] == "*"
msg := {
"control": "NIST PR.AC-4",
"violation": sprintf("Wildcard permissions in role: %v", [input.metadata.name]),
"remediation": "Specify explicit verb permissions"
}
}
deny[msg] {
input.kind == "Role"
rule := input.rules[_]
rule.resources[_] == "*"
msg := {
"control": "NIST PR.AC-4",
"violation": sprintf("Wildcard resources in role: %v", [input.metadata.name]),
"remediation": "Specify explicit resource permissions"
}
}
```
### PR.DS-1: Data-at-Rest Protection
**Control**: Data-at-rest is protected.
```rego
package compliance.nist.pr_ds_1
# Require encryption for sensitive data
deny[msg] {
input.kind == "PersistentVolumeClaim"
input.metadata.labels["data-sensitivity"] == "high"
not input.metadata.annotations["volume.encryption"] == "enabled"
msg := {
"control": "NIST PR.DS-1",
"violation": sprintf("Sensitive data volume without encryption: %v", [input.metadata.name]),
"remediation": "Enable volume encryption for data-at-rest protection"
}
}
```
### PR.DS-2: Data-in-Transit Protection
**Control**: Data-in-transit is protected.
```rego
package compliance.nist.pr_ds_2
# Require TLS for external traffic
deny[msg] {
input.kind == "Ingress"
not input.spec.tls
msg := {
"control": "NIST PR.DS-2",
"violation": sprintf("Ingress without TLS: %v", [input.metadata.name]),
"remediation": "Configure TLS for data-in-transit protection"
}
}
```
## Multi-Framework Compliance
Example policy that maps to multiple frameworks:
```rego
package compliance.multi_framework
# Encryption requirement - maps to multiple frameworks
deny[msg] {
input.kind == "Service"
input.spec.type == "LoadBalancer"
not has_tls_encryption(input)
msg := {
"violation": sprintf("External service without TLS encryption: %v", [input.metadata.name]),
"remediation": "Enable TLS/SSL for external services",
"frameworks": {
"SOC2": "CC6.6 - Encryption in Transit",
"PCI-DSS": "4.1 - Use strong cryptography",
"GDPR": "Article 32 - Security of Processing",
"HIPAA": "164.312 - Technical Safeguards",
"NIST": "PR.DS-2 - Data-in-Transit Protection"
}
}
}
has_tls_encryption(service) {
service.metadata.annotations["service.beta.kubernetes.io/aws-load-balancer-ssl-cert"]
}
```
## References
- [SOC2 Trust Services Criteria](https://www.aicpa.org/interestareas/frc/assuranceadvisoryservices/aicpasoc2report.html)
- [PCI-DSS Requirements](https://www.pcisecuritystandards.org/document_library)
- [GDPR Official Text](https://gdpr.eu/)
- [HIPAA Security Rule](https://www.hhs.gov/hipaa/for-professionals/security/index.html)
- [NIST Cybersecurity Framework](https://www.nist.gov/cyberframework)


@@ -0,0 +1,623 @@
# Infrastructure-as-Code Security Policies
OPA policies for validating infrastructure-as-code configurations in Terraform, CloudFormation, and other IaC tools.
## Table of Contents
- [Terraform Policies](#terraform-policies)
- [AWS CloudFormation](#aws-cloudformation)
- [Azure ARM Templates](#azure-arm-templates)
- [GCP Deployment Manager](#gcp-deployment-manager)
## Terraform Policies
### S3 Bucket Security
```rego
package terraform.aws.s3
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_s3_bucket"
not has_encryption(resource)
msg := sprintf("S3 bucket must have encryption enabled: %v", [resource.name])
}
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_s3_bucket"
not has_versioning(resource)
msg := sprintf("S3 bucket must have versioning enabled: %v", [resource.name])
}
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_s3_bucket_public_access_block"
resource.change.after.block_public_acls == false
msg := sprintf("S3 bucket must block public ACLs: %v", [resource.name])
}
has_encryption(resource) {
resource.change.after.server_side_encryption_configuration
}
has_versioning(resource) {
resource.change.after.versioning[_].enabled == true
}
```
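The input here is the JSON form of a Terraform plan (`terraform show -json`), so unit tests are also a convenient way to document the expected `resource_changes` shape. A minimal sketch, runnable with `opa test` (the bucket name and attributes are illustrative):
```rego
package terraform.aws.s3_test
import data.terraform.aws.s3
test_deny_unencrypted_bucket {
    plan := {"resource_changes": [{
        "type": "aws_s3_bucket",
        "name": "logs",
        "change": {"after": {"versioning": [{"enabled": true}]}}
    }]}
    count(s3.deny) > 0 with input as plan
}
test_allow_encrypted_versioned_bucket {
    plan := {"resource_changes": [{
        "type": "aws_s3_bucket",
        "name": "logs",
        "change": {"after": {
            "server_side_encryption_configuration": [{"rule": []}],
            "versioning": [{"enabled": true}]
        }}
    }]}
    count(s3.deny) == 0 with input as plan
}
```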
### EC2 Instance Security
```rego
package terraform.aws.ec2
# Deny instances without IMDSv2
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_instance"
not resource.change.after.metadata_options.http_tokens == "required"
msg := sprintf("EC2 instance must use IMDSv2: %v", [resource.name])
}
# Deny instances with public IPs in production
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_instance"
resource.change.after.associate_public_ip_address == true
is_production_environment
msg := sprintf("Production EC2 instances cannot have public IPs: %v", [resource.name])
}
# Require monitoring
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_instance"
resource.change.after.monitoring != true
msg := sprintf("EC2 instance must have detailed monitoring enabled: %v", [resource.name])
}
is_production_environment {
input.variables.environment == "production"
}
```
### RDS Database Security
```rego
package terraform.aws.rds
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_db_instance"
not resource.change.after.storage_encrypted
msg := sprintf("RDS instance must have encryption enabled: %v", [resource.name])
}
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_db_instance"
resource.change.after.publicly_accessible == true
msg := sprintf("RDS instance cannot be publicly accessible: %v", [resource.name])
}
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_db_instance"
not resource.change.after.backup_retention_period
msg := sprintf("RDS instance must have backup retention configured: %v", [resource.name])
}
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_db_instance"
resource.change.after.backup_retention_period < 7
msg := sprintf("RDS instance must have at least 7 days backup retention: %v", [resource.name])
}
```
### IAM Policy Security
```rego
package terraform.aws.iam
# Deny wildcard actions in IAM policies
# Note: in a Terraform plan, the policy attribute is a JSON-encoded string, so it is
# decoded with json.unmarshal first; Action and Resource are assumed to be arrays here
deny[msg] {
    resource := input.resource_changes[_]
    resource.type == "aws_iam_policy"
    doc := json.unmarshal(resource.change.after.policy)
    statement := doc.Statement[_]
    statement.Action[_] == "*"
    msg := sprintf("IAM policy cannot use wildcard actions: %v", [resource.name])
}
# Deny wildcard resources
deny[msg] {
    resource := input.resource_changes[_]
    resource.type == "aws_iam_policy"
    doc := json.unmarshal(resource.change.after.policy)
    statement := doc.Statement[_]
    statement.Resource[_] == "*"
    statement.Effect == "Allow"
    msg := sprintf("IAM policy cannot use wildcard resources with Allow: %v", [resource.name])
}
# Deny policies without conditions for sensitive actions
sensitive_actions := [
    "iam:CreateUser",
    "iam:DeleteUser",
    "iam:AttachUserPolicy",
    "kms:Decrypt"
]
deny[msg] {
    resource := input.resource_changes[_]
    resource.type == "aws_iam_policy"
    doc := json.unmarshal(resource.change.after.policy)
    statement := doc.Statement[_]
    action := statement.Action[_]
    sensitive_actions[_] == action
    not statement.Condition
    msg := sprintf("Sensitive IAM action requires conditions: %v in %v", [action, resource.name])
}
```
### Security Group Rules
```rego
package terraform.aws.security_groups
# Deny SSH from internet
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_security_group_rule"
resource.change.after.type == "ingress"
resource.change.after.from_port == 22
resource.change.after.to_port == 22
is_open_to_internet(resource.change.after.cidr_blocks)
msg := sprintf("Security group rule allows SSH from internet: %v", [resource.name])
}
# Deny RDP from internet
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_security_group_rule"
resource.change.after.type == "ingress"
resource.change.after.from_port == 3389
resource.change.after.to_port == 3389
is_open_to_internet(resource.change.after.cidr_blocks)
msg := sprintf("Security group rule allows RDP from internet: %v", [resource.name])
}
# Deny unrestricted ingress
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_security_group_rule"
resource.change.after.type == "ingress"
is_open_to_internet(resource.change.after.cidr_blocks)
not is_allowed_public_port(resource.change.after.from_port)
msg := sprintf("Security group rule allows unrestricted ingress: %v", [resource.name])
}
is_open_to_internet(cidr_blocks) {
cidr_blocks[_] == "0.0.0.0/0"
}
# Allowed public ports (HTTP/HTTPS)
is_allowed_public_port(port) {
port == 80
}
is_allowed_public_port(port) {
port == 443
}
```
### KMS Key Security
```rego
package terraform.aws.kms
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_kms_key"
not resource.change.after.enable_key_rotation
msg := sprintf("KMS key must have automatic rotation enabled: %v", [resource.name])
}
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_kms_key"
not resource.change.after.deletion_window_in_days
msg := sprintf("KMS key must have deletion window configured: %v", [resource.name])
}
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_kms_key"
resource.change.after.deletion_window_in_days < 30
msg := sprintf("KMS key deletion window must be at least 30 days: %v", [resource.name])
}
```
### CloudWatch Logging
```rego
package terraform.aws.logging
# Require CloudWatch logs for Lambda
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_lambda_function"
not has_cloudwatch_logs(resource.name)
msg := sprintf("Lambda function must have CloudWatch logs configured: %v", [resource.name])
}
has_cloudwatch_logs(function_name) {
resource := input.resource_changes[_]
resource.type == "aws_cloudwatch_log_group"
contains(resource.change.after.name, function_name)
}
```
## AWS CloudFormation
### S3 Bucket Security
```rego
package cloudformation.aws.s3
deny[msg] {
resource := input.Resources[name]
resource.Type == "AWS::S3::Bucket"
not has_bucket_encryption(resource)
msg := sprintf("S3 bucket must have encryption: %v", [name])
}
deny[msg] {
resource := input.Resources[name]
resource.Type == "AWS::S3::Bucket"
not has_versioning(resource)
msg := sprintf("S3 bucket must have versioning enabled: %v", [name])
}
has_bucket_encryption(resource) {
resource.Properties.BucketEncryption
}
has_versioning(resource) {
resource.Properties.VersioningConfiguration.Status == "Enabled"
}
```
### EC2 Security Groups
```rego
package cloudformation.aws.ec2
deny[msg] {
resource := input.Resources[name]
resource.Type == "AWS::EC2::SecurityGroup"
rule := resource.Properties.SecurityGroupIngress[_]
rule.CidrIp == "0.0.0.0/0"
rule.FromPort == 22
msg := sprintf("Security group allows SSH from internet: %v", [name])
}
deny[msg] {
resource := input.Resources[name]
resource.Type == "AWS::EC2::SecurityGroup"
rule := resource.Properties.SecurityGroupIngress[_]
rule.CidrIp == "0.0.0.0/0"
rule.FromPort == 3389
msg := sprintf("Security group allows RDP from internet: %v", [name])
}
```
### RDS Database
```rego
package cloudformation.aws.rds
deny[msg] {
resource := input.Resources[name]
resource.Type == "AWS::RDS::DBInstance"
not resource.Properties.StorageEncrypted
msg := sprintf("RDS instance must have encryption enabled: %v", [name])
}
deny[msg] {
resource := input.Resources[name]
resource.Type == "AWS::RDS::DBInstance"
resource.Properties.PubliclyAccessible == true
msg := sprintf("RDS instance cannot be publicly accessible: %v", [name])
}
```
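CloudFormation templates are usually authored in YAML while OPA takes JSON input, so the template has to be converted first. A minimal sketch, assuming the policies above live under `policies/cloudformation/` and the template is `template.yaml` (both paths are illustrative); templates that use short-form intrinsic functions such as `!Ref` may need a converter like `cfn-flip` instead of plain yq:
```bash
# Convert the template to JSON with yq v4, then evaluate the RDS package
yq -o=json '.' template.yaml > template.json

opa eval --data policies/cloudformation/ --input template.json \
  --format pretty 'data.cloudformation.aws.rds.deny'
```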
## Azure ARM Templates
### Storage Account Security
```rego
package azure.storage
deny[msg] {
resource := input.resources[_]
resource.type == "Microsoft.Storage/storageAccounts"
not resource.properties.supportsHttpsTrafficOnly
msg := sprintf("Storage account must require HTTPS: %v", [resource.name])
}
deny[msg] {
resource := input.resources[_]
resource.type == "Microsoft.Storage/storageAccounts"
resource.properties.allowBlobPublicAccess == true
msg := sprintf("Storage account must disable public blob access: %v", [resource.name])
}
deny[msg] {
resource := input.resources[_]
resource.type == "Microsoft.Storage/storageAccounts"
not resource.properties.minimumTlsVersion == "TLS1_2"
msg := sprintf("Storage account must use TLS 1.2 minimum: %v", [resource.name])
}
```
### Virtual Machine Security
```rego
package azure.compute
deny[msg] {
resource := input.resources[_]
resource.type == "Microsoft.Compute/virtualMachines"
not has_managed_identity(resource)
msg := sprintf("Virtual machine should use managed identity: %v", [resource.name])
}
deny[msg] {
resource := input.resources[_]
resource.type == "Microsoft.Compute/virtualMachines"
not has_disk_encryption(resource)
msg := sprintf("Virtual machine must have disk encryption: %v", [resource.name])
}
has_managed_identity(vm) {
vm.identity.type
}
has_disk_encryption(vm) {
vm.properties.storageProfile.osDisk.encryptionSettings
}
```
### Network Security Groups
```rego
package azure.network
deny[msg] {
resource := input.resources[_]
resource.type == "Microsoft.Network/networkSecurityGroups"
rule := resource.properties.securityRules[_]
rule.properties.access == "Allow"
rule.properties.sourceAddressPrefix == "*"
rule.properties.destinationPortRange == "22"
msg := sprintf("NSG allows SSH from internet: %v", [resource.name])
}
deny[msg] {
resource := input.resources[_]
resource.type == "Microsoft.Network/networkSecurityGroups"
rule := resource.properties.securityRules[_]
rule.properties.access == "Allow"
rule.properties.sourceAddressPrefix == "*"
rule.properties.destinationPortRange == "3389"
msg := sprintf("NSG allows RDP from internet: %v", [resource.name])
}
```
## GCP Deployment Manager
### GCS Bucket Security
```rego
package gcp.storage
deny[msg] {
resource := input.resources[_]
resource.type == "storage.v1.bucket"
not has_uniform_access(resource)
msg := sprintf("GCS bucket must use uniform bucket-level access: %v", [resource.name])
}
deny[msg] {
resource := input.resources[_]
resource.type == "storage.v1.bucket"
not has_encryption(resource)
msg := sprintf("GCS bucket must have encryption configured: %v", [resource.name])
}
has_uniform_access(bucket) {
bucket.properties.iamConfiguration.uniformBucketLevelAccess.enabled == true
}
has_encryption(bucket) {
bucket.properties.encryption
}
```
### Compute Instance Security
```rego
package gcp.compute
deny[msg] {
resource := input.resources[_]
resource.type == "compute.v1.instance"
not has_service_account(resource)
msg := sprintf("Compute instance should use service account: %v", [resource.name])
}
deny[msg] {
resource := input.resources[_]
resource.type == "compute.v1.instance"
not has_disk_encryption(resource)
msg := sprintf("Compute instance must have disk encryption: %v", [resource.name])
}
has_service_account(instance) {
instance.properties.serviceAccounts
}
has_disk_encryption(instance) {
instance.properties.disks[_].diskEncryptionKey
}
```
### Firewall Rules
```rego
package gcp.network
deny[msg] {
resource := input.resources[_]
resource.type == "compute.v1.firewall"
resource.properties.direction == "INGRESS"
"0.0.0.0/0" == resource.properties.sourceRanges[_]
allowed := resource.properties.allowed[_]
allowed.ports[_] == "22"
msg := sprintf("Firewall rule allows SSH from internet: %v", [resource.name])
}
deny[msg] {
resource := input.resources[_]
resource.type == "compute.v1.firewall"
resource.properties.direction == "INGRESS"
"0.0.0.0/0" == resource.properties.sourceRanges[_]
allowed := resource.properties.allowed[_]
allowed.ports[_] == "3389"
msg := sprintf("Firewall rule allows RDP from internet: %v", [resource.name])
}
```
## Conftest Integration
Example using Conftest for Terraform validation:
```bash
# Install conftest
brew install conftest

# Create policy directory
mkdir -p policy

# Write policy (policy/terraform.rego)
cat > policy/terraform.rego <<'EOF'
package main

deny[msg] {
    resource := input.resource_changes[_]
    resource.type == "aws_s3_bucket"
    not resource.change.after.server_side_encryption_configuration
    msg := sprintf("S3 bucket must have encryption: %v", [resource.name])
}
EOF

# Generate Terraform plan
terraform plan -out=tfplan.binary
terraform show -json tfplan.binary > tfplan.json

# Run conftest (loads policies from the policy/ directory and the main namespace by default)
conftest test tfplan.json
```
## CI/CD Integration
### GitHub Actions
```yaml
name: IaC Policy Validation
on: [push, pull_request]
jobs:
validate:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Setup OPA
uses: open-policy-agent/setup-opa@v2
- name: Generate Terraform Plan
run: |
terraform init
terraform plan -out=tfplan.binary
terraform show -json tfplan.binary > tfplan.json
      - name: Validate with OPA
        run: |
          # --fail-defined exits non-zero when any deny message is returned;
          # the query path must match your policy packages (e.g. data.terraform.aws.s3.deny)
          opa eval --data policies/ --input tfplan.json \
            --format pretty --fail-defined 'data.terraform.deny[msg]'
```
### GitLab CI
```yaml
# tfplan.json is assumed to be produced by an earlier Terraform job and passed as an artifact;
# the OPA image's entrypoint is cleared so that shell commands can run
iac-validation:
  image:
    name: openpolicyagent/opa:latest
    entrypoint: [""]
  script:
    - opa eval --data policies/ --input tfplan.json --format pretty --fail-defined 'data.terraform.deny[msg]'
  only:
    - merge_requests
```
## References
- [Conftest](https://www.conftest.dev/)
- [Terraform Sentinel](https://www.terraform.io/docs/cloud/sentinel/index.html)
- [AWS CloudFormation Guard](https://github.com/aws-cloudformation/cloudformation-guard)
- [Azure Policy](https://docs.microsoft.com/en-us/azure/governance/policy/)
- [Checkov](https://www.checkov.io/)


@@ -0,0 +1,550 @@
# Kubernetes Security Policies
Comprehensive OPA policies for Kubernetes security best practices and admission control.
## Table of Contents
- [Pod Security](#pod-security)
- [RBAC Security](#rbac-security)
- [Network Security](#network-security)
- [Image Security](#image-security)
- [Secret Management](#secret-management)
## Pod Security
### Privileged Containers
Deny privileged containers:
```rego
package kubernetes.admission.privileged_containers
deny[msg] {
input.request.kind.kind == "Pod"
container := input.request.object.spec.containers[_]
container.securityContext.privileged == true
msg := sprintf("Privileged container is not allowed: %v", [container.name])
}
deny[msg] {
input.request.kind.kind == "Pod"
container := input.request.object.spec.initContainers[_]
container.securityContext.privileged == true
msg := sprintf("Privileged init container is not allowed: %v", [container.name])
}
```
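Admission policies read the AdmissionReview shape (`input.request.object`), which is easy to get wrong in ad hoc testing. A unit-test sketch that exercises the rule above with a minimal review object, runnable with `opa test` (container names are illustrative):
```rego
package kubernetes.admission.privileged_containers_test
import data.kubernetes.admission.privileged_containers
test_deny_privileged_pod {
    review := {"request": {
        "kind": {"kind": "Pod"},
        "object": {"spec": {"containers": [{
            "name": "app",
            "securityContext": {"privileged": true}
        }]}}
    }}
    count(privileged_containers.deny) > 0 with input as review
}
test_allow_unprivileged_pod {
    review := {"request": {
        "kind": {"kind": "Pod"},
        "object": {"spec": {"containers": [{
            "name": "app",
            "securityContext": {"privileged": false}
        }]}}
    }}
    count(privileged_containers.deny) == 0 with input as review
}
```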
### Run as Non-Root
Enforce containers run as non-root:
```rego
package kubernetes.admission.non_root
deny[msg] {
input.request.kind.kind == "Pod"
container := input.request.object.spec.containers[_]
not container.securityContext.runAsNonRoot
msg := sprintf("Container must run as non-root user: %v", [container.name])
}
deny[msg] {
input.request.kind.kind == "Pod"
container := input.request.object.spec.containers[_]
container.securityContext.runAsUser == 0
msg := sprintf("Container cannot run as UID 0 (root): %v", [container.name])
}
```
### Read-Only Root Filesystem
Require read-only root filesystem:
```rego
package kubernetes.admission.readonly_root
deny[msg] {
input.request.kind.kind == "Pod"
container := input.request.object.spec.containers[_]
not container.securityContext.readOnlyRootFilesystem
msg := sprintf("Container must use read-only root filesystem: %v", [container.name])
}
```
### Capabilities
Restrict Linux capabilities:
```rego
package kubernetes.admission.capabilities
# Denied capabilities
# Kubernetes capability names omit the CAP_ prefix
denied_capabilities := [
    "SYS_ADMIN",
    "NET_ADMIN",
    "SYS_PTRACE",
    "SYS_MODULE"
]
deny[msg] {
input.request.kind.kind == "Pod"
container := input.request.object.spec.containers[_]
capability := container.securityContext.capabilities.add[_]
denied_capabilities[_] == capability
msg := sprintf("Capability %v is not allowed for container: %v", [capability, container.name])
}
# Require dropping ALL capabilities by default
deny[msg] {
input.request.kind.kind == "Pod"
container := input.request.object.spec.containers[_]
not drops_all_capabilities(container)
msg := sprintf("Container must drop ALL capabilities: %v", [container.name])
}
drops_all_capabilities(container) {
container.securityContext.capabilities.drop[_] == "ALL"
}
```
### Host Namespaces
Prevent use of host namespaces:
```rego
package kubernetes.admission.host_namespaces
deny[msg] {
input.request.kind.kind == "Pod"
input.request.object.spec.hostPID == true
msg := "Sharing the host PID namespace is not allowed"
}
deny[msg] {
input.request.kind.kind == "Pod"
input.request.object.spec.hostIPC == true
msg := "Sharing the host IPC namespace is not allowed"
}
deny[msg] {
input.request.kind.kind == "Pod"
input.request.object.spec.hostNetwork == true
msg := "Sharing the host network namespace is not allowed"
}
```
### Host Paths
Restrict hostPath volumes:
```rego
package kubernetes.admission.host_path
# Allowed host paths (if any)
allowed_host_paths := [
"/var/log/pods", # Example: log collection
]
deny[msg] {
input.request.kind.kind == "Pod"
volume := input.request.object.spec.volumes[_]
volume.hostPath
not is_allowed_host_path(volume.hostPath.path)
msg := sprintf("hostPath volume is not allowed: %v", [volume.hostPath.path])
}
is_allowed_host_path(path) {
allowed_host_paths[_] == path
}
```
### Security Context
Comprehensive pod security context validation:
```rego
package kubernetes.admission.security_context
deny[msg] {
input.request.kind.kind == "Pod"
not input.request.object.spec.securityContext
msg := "Pod must define a security context"
}
deny[msg] {
input.request.kind.kind == "Pod"
pod_security := input.request.object.spec.securityContext
not pod_security.runAsNonRoot
msg := "Pod security context must set runAsNonRoot: true"
}
deny[msg] {
input.request.kind.kind == "Pod"
pod_security := input.request.object.spec.securityContext
not pod_security.seccompProfile
msg := "Pod must define a seccomp profile"
}
```
## RBAC Security
### Wildcard Permissions
Prevent wildcard RBAC permissions:
```rego
package kubernetes.rbac.wildcards
deny[msg] {
input.request.kind.kind == "Role"
rule := input.request.object.rules[_]
rule.verbs[_] == "*"
msg := sprintf("Role contains wildcard verb permission in rule: %v", [rule])
}
deny[msg] {
input.request.kind.kind == "Role"
rule := input.request.object.rules[_]
rule.resources[_] == "*"
msg := sprintf("Role contains wildcard resource permission in rule: %v", [rule])
}
deny[msg] {
input.request.kind.kind == "ClusterRole"
rule := input.request.object.rules[_]
rule.verbs[_] == "*"
msg := sprintf("ClusterRole contains wildcard verb permission in rule: %v", [rule])
}
```
### Cluster Admin
Restrict cluster-admin usage:
```rego
package kubernetes.rbac.cluster_admin
# System accounts allowed to use cluster-admin
allowed_system_accounts := [
"system:kube-controller-manager",
"system:kube-scheduler",
]
deny[msg] {
input.request.kind.kind == "ClusterRoleBinding"
input.request.object.roleRef.name == "cluster-admin"
subject := input.request.object.subjects[_]
not is_allowed_system_account(subject)
msg := sprintf("cluster-admin binding not allowed for subject: %v", [subject.name])
}
is_allowed_system_account(subject) {
allowed_system_accounts[_] == subject.name
}
```
### Service Account Token Mounting
Control service account token auto-mounting:
```rego
package kubernetes.rbac.service_account_tokens
deny[msg] {
input.request.kind.kind == "Pod"
input.request.object.spec.automountServiceAccountToken == true
not requires_service_account(input.request.object)
msg := "Pod should not auto-mount service account token unless required"
}
requires_service_account(pod) {
pod.metadata.annotations["requires-service-account"] == "true"
}
```
## Network Security
### Network Policies Required
Require network policies for namespaces:
```rego
package kubernetes.network.policies_required
# Check if namespace has network policies (requires admission controller data)
deny[msg] {
input.request.kind.kind == "Namespace"
not has_network_policy_annotation(input.request.object)
msg := sprintf("Namespace must have network policy annotation: %v", [input.request.object.metadata.name])
}
has_network_policy_annotation(namespace) {
namespace.metadata.annotations["network-policy.enabled"] == "true"
}
```
### Default-Deny Network Policy
Implement default-deny network policy:
```rego
package kubernetes.network.default_deny
deny[msg] {
input.request.kind.kind == "NetworkPolicy"
not is_default_deny(input.request.object)
input.request.object.metadata.labels["policy-type"] == "default"
msg := "Default network policy must be deny-all"
}
is_default_deny(network_policy) {
# Check for empty ingress rules (deny all ingress)
not network_policy.spec.ingress
# Check for ingress type
network_policy.spec.policyTypes[_] == "Ingress"
}
```
### Service Type LoadBalancer
Restrict external LoadBalancer services:
```rego
package kubernetes.network.loadbalancer
deny[msg] {
input.request.kind.kind == "Service"
input.request.object.spec.type == "LoadBalancer"
not is_approved_for_external_exposure(input.request.object)
msg := sprintf("LoadBalancer service requires approval annotation: %v", [input.request.object.metadata.name])
}
is_approved_for_external_exposure(service) {
service.metadata.annotations["external-exposure.approved"] == "true"
}
```
## Image Security
### Image Registry Whitelist
Allow only approved image registries:
```rego
package kubernetes.images.registry_whitelist
approved_registries := [
"gcr.io/my-company",
"docker.io/my-company",
"quay.io/my-company",
]
deny[msg] {
input.request.kind.kind == "Pod"
container := input.request.object.spec.containers[_]
not is_approved_registry(container.image)
msg := sprintf("Image from unapproved registry: %v", [container.image])
}
is_approved_registry(image) {
startswith(image, approved_registries[_])
}
```
### Image Tags
Prevent latest tag and require specific tags:
```rego
package kubernetes.images.tags
deny[msg] {
input.request.kind.kind == "Pod"
container := input.request.object.spec.containers[_]
endswith(container.image, ":latest")
msg := sprintf("Container uses 'latest' tag: %v", [container.name])
}
deny[msg] {
input.request.kind.kind == "Pod"
container := input.request.object.spec.containers[_]
not contains(container.image, ":")
msg := sprintf("Container image must specify a tag: %v", [container.name])
}
```
### Image Vulnerability Scanning
Require vulnerability scan results:
```rego
package kubernetes.images.vulnerability_scanning
deny[msg] {
input.request.kind.kind == "Pod"
not has_scan_annotation(input.request.object)
msg := "Pod must have vulnerability scan results annotation"
}
deny[msg] {
input.request.kind.kind == "Pod"
scan_result := input.request.object.metadata.annotations["vulnerability-scan.result"]
scan_result == "failed"
msg := "Pod image failed vulnerability scan"
}
has_scan_annotation(pod) {
pod.metadata.annotations["vulnerability-scan.result"]
}
```
## Secret Management
### Environment Variable Secrets
Prevent secrets in environment variables:
```rego
package kubernetes.secrets.env_vars
sensitive_keywords := [
"password",
"token",
"apikey",
"secret",
"credential",
]
deny[msg] {
input.request.kind.kind == "Pod"
container := input.request.object.spec.containers[_]
env := container.env[_]
is_sensitive_name(env.name)
env.value # Direct value, not from secret
msg := sprintf("Sensitive data in environment variable: %v in container %v", [env.name, container.name])
}
is_sensitive_name(name) {
lower_name := lower(name)
contains(lower_name, sensitive_keywords[_])
}
```
### Secret Volume Permissions
Restrict secret volume mount permissions:
```rego
package kubernetes.secrets.volume_permissions
deny[msg] {
input.request.kind.kind == "Pod"
volume := input.request.object.spec.volumes[_]
volume.secret
volume_mount := input.request.object.spec.containers[_].volumeMounts[_]
volume_mount.name == volume.name
not volume_mount.readOnly
msg := sprintf("Secret volume mount must be read-only: %v", [volume.name])
}
```
### External Secrets
Require use of external secret management:
```rego
package kubernetes.secrets.external
deny[msg] {
input.request.kind.kind == "Secret"
input.request.object.metadata.labels["environment"] == "production"
not input.request.object.metadata.annotations["external-secret.enabled"] == "true"
msg := sprintf("Production secrets must use external secret management: %v", [input.request.object.metadata.name])
}
```
## Admission Control Integration
Example OPA Gatekeeper ConstraintTemplate:
```yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
name: k8spodsecsecurity
spec:
crd:
spec:
names:
kind: K8sPodSecSecurity
targets:
- target: admission.k8s.gatekeeper.sh
rego: |
package k8spodsecurity
violation[{"msg": msg}] {
container := input.review.object.spec.containers[_]
container.securityContext.privileged == true
msg := sprintf("Privileged container not allowed: %v", [container.name])
}
violation[{"msg": msg}] {
container := input.review.object.spec.containers[_]
not container.securityContext.runAsNonRoot
msg := sprintf("Container must run as non-root: %v", [container.name])
}
```
Example Constraint:
```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPodSecSecurity
metadata:
name: pod-security-policy
spec:
match:
kinds:
- apiGroups: [""]
kinds: ["Pod"]
namespaces:
- "production"
- "staging"
```
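A minimal sketch of rolling these out, assuming Gatekeeper is already installed in the cluster and the two manifests above are saved locally as `constraint-template.yaml` and `constraint.yaml` (file names are illustrative):
```bash
kubectl apply -f constraint-template.yaml
kubectl apply -f constraint.yaml

# Gatekeeper reports audit violations on the constraint's status field
kubectl get constrainttemplates
kubectl describe k8spodsecsecurity pod-security-policy
```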
## References
- [Kubernetes Pod Security Standards](https://kubernetes.io/docs/concepts/security/pod-security-standards/)
- [OPA Gatekeeper Library](https://github.com/open-policy-agent/gatekeeper-library)
- [NSA Kubernetes Hardening Guide](https://www.nsa.gov/Press-Room/News-Highlights/Article/Article/2716980/)
- [CIS Kubernetes Benchmark](https://www.cisecurity.org/benchmark/kubernetes)


@@ -0,0 +1,505 @@
# Common Rego Patterns for Security and Compliance
This reference provides common Rego patterns for implementing security and compliance policies in OPA.
## Table of Contents
- [Basic Patterns](#basic-patterns)
- [Security Patterns](#security-patterns)
- [Compliance Patterns](#compliance-patterns)
- [Advanced Patterns](#advanced-patterns)
## Basic Patterns
### Deny Rules
Most common pattern - deny when condition is met:
```rego
package example
deny[msg] {
condition_is_true
msg := "Descriptive error message"
}
```
### Allow Rules
Whitelist pattern - allow specific cases:
```rego
package example
default allow = false
allow {
input.user.role == "admin"
}
allow {
input.user.id == input.resource.owner
}
```
### Array Iteration
Iterate over arrays to check conditions:
```rego
package example
deny[msg] {
container := input.spec.containers[_]
container.image == "vulnerable:latest"
msg := sprintf("Vulnerable image detected: %v", [container.name])
}
```
### Object Key Checking
Verify required keys exist:
```rego
package example
required_labels := ["app", "environment", "owner"]
deny[msg] {
missing := required_labels[_]
not input.metadata.labels[missing]
msg := sprintf("Missing required label: %v", [missing])
}
```
## Security Patterns
### Privileged Container Check
Deny privileged containers:
```rego
package kubernetes.security
deny[msg] {
container := input.spec.containers[_]
container.securityContext.privileged == true
msg := sprintf("Privileged container not allowed: %v", [container.name])
}
```
### Host Path Volume Check
Prevent hostPath volumes:
```rego
package kubernetes.security
deny[msg] {
volume := input.spec.volumes[_]
volume.hostPath
msg := sprintf("hostPath volumes not allowed: %v", [volume.name])
}
```
### Image Registry Whitelist
Allow only approved registries:
```rego
package kubernetes.security
allowed_registries := [
"gcr.io/company",
"docker.io/company",
]
deny[msg] {
container := input.spec.containers[_]
image := container.image
not startswith_any(image, allowed_registries)
msg := sprintf("Image from unauthorized registry: %v", [image])
}
startswith_any(str, prefixes) {
startswith(str, prefixes[_])
}
```
### Network Policy Enforcement
Require network policies for namespaces:
```rego
package kubernetes.security
deny[msg] {
input.kind == "Namespace"
not input.metadata.labels["network-policy"]
msg := "Namespace must have network-policy label"
}
```
### Secret in Environment Variables
Prevent secrets in environment variables:
```rego
package kubernetes.security
deny[msg] {
container := input.spec.containers[_]
env := container.env[_]
contains(lower(env.name), "password")
env.value # Direct value, not from secret
msg := sprintf("Secret in environment variable: %v", [env.name])
}
```
## Compliance Patterns
### SOC2 CC6.1: Access Control
```rego
package compliance.soc2
# Deny cluster-admin for non-system accounts
deny[msg] {
input.kind == "RoleBinding"
input.roleRef.name == "cluster-admin"
not startswith(input.subjects[_].name, "system:")
msg := sprintf("SOC2 CC6.1: cluster-admin role binding not allowed for %v", [input.metadata.name])
}
# Require authentication labels
deny[msg] {
input.kind == "Service"
input.spec.type == "LoadBalancer"
not input.metadata.annotations["auth.required"]
msg := "SOC2 CC6.1: LoadBalancer services must require authentication"
}
```
### PCI-DSS 8.2.1: Strong Authentication
```rego
package compliance.pci
# Require MFA annotation
deny[msg] {
input.kind == "Ingress"
input.metadata.annotations["payment.enabled"] == "true"
not input.metadata.annotations["mfa.required"] == "true"
msg := "PCI-DSS 8.2.1: Payment endpoints must require MFA"
}
# Password complexity requirements
deny[msg] {
input.kind == "ConfigMap"
input.data["password.minLength"]
to_number(input.data["password.minLength"]) < 12
msg := "PCI-DSS 8.2.1: Minimum password length must be 12"
}
```
### GDPR Article 25: Data Protection by Design
```rego
package compliance.gdpr
# Require data classification
deny[msg] {
input.kind == "Deployment"
processes_personal_data(input)
not input.metadata.labels["data-classification"]
msg := "GDPR Art25: Deployments processing personal data must have data-classification label"
}
# Require encryption for personal data
deny[msg] {
input.kind == "PersistentVolumeClaim"
input.metadata.labels["data-type"] == "personal"
not input.metadata.annotations["volume.encryption.enabled"] == "true"
msg := "GDPR Art25: Personal data volumes must use encryption"
}
processes_personal_data(resource) {
resource.metadata.labels["data-type"] == "personal"
}
processes_personal_data(resource) {
contains(lower(resource.metadata.name), "user")
}
```
### HIPAA 164.312: Technical Safeguards
```rego
package compliance.hipaa
# Require encryption in transit
deny[msg] {
input.kind == "Service"
input.metadata.labels["phi-data"] == "true"
not input.metadata.annotations["tls.enabled"] == "true"
msg := "HIPAA 164.312: Services handling PHI must use TLS encryption"
}
# Audit logging requirement
deny[msg] {
input.kind == "Deployment"
input.metadata.labels["phi-data"] == "true"
not has_audit_logging(input)
msg := "HIPAA 164.312: PHI deployments must enable audit logging"
}
has_audit_logging(resource) {
resource.spec.template.metadata.annotations["audit.enabled"] == "true"
}
```
## Advanced Patterns
### Helper Functions
Create reusable helper functions:
```rego
package helpers
# Check if string starts with any prefix
startswith_any(str, prefixes) {
startswith(str, prefixes[_])
}
# Check if array contains value
array_contains(arr, val) {
arr[_] == val
}
# Get all containers (including init containers)
all_containers[container] {
container := input.spec.containers[_]
}
all_containers[container] {
container := input.spec.initContainers[_]
}
# Safe label access with default
get_label(resource, key, default_val) = val {
val := resource.metadata.labels[key]
} else = default_val
```
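Helpers defined in a shared package can then be imported from policy packages. A minimal sketch of consuming them (the package name and registry prefix are illustrative):
```rego
package policies.images
import data.helpers
deny[msg] {
    container := helpers.all_containers[_]
    not helpers.startswith_any(container.image, ["gcr.io/company/"])
    msg := sprintf("Image from unauthorized registry: %v", [container.image])
}
```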
### Multi-Framework Mapping
Map single policy to multiple frameworks:
```rego
package multi_framework
deny[msg] {
container := input.spec.containers[_]
not container.securityContext.readOnlyRootFilesystem
msg := {
"violation": "Container filesystem must be read-only",
"container": container.name,
"frameworks": {
"SOC2": "CC6.1",
"PCI-DSS": "2.2",
"NIST": "CM-7",
}
}
}
```
### Severity Levels
Add severity to violations:
```rego
package severity
violations[violation] {
container := input.spec.containers[_]
container.securityContext.privileged == true
violation := {
"message": sprintf("Privileged container: %v", [container.name]),
"severity": "critical",
"remediation": "Set securityContext.privileged to false"
}
}
violations[violation] {
not input.spec.securityContext.runAsNonRoot
violation := {
"message": "Pod does not enforce non-root user",
"severity": "high",
"remediation": "Set spec.securityContext.runAsNonRoot to true"
}
}
```
### Exception Handling
Allow policy exceptions with justification:
```rego
package exceptions
default allow = false
# Check for valid exception
has_exception {
input.metadata.annotations["policy.exception"] == "true"
input.metadata.annotations["policy.justification"]
input.metadata.annotations["policy.approver"]
}
deny[msg] {
violates_policy
not has_exception
msg := "Policy violation - no valid exception found"
}
deny[msg] {
violates_policy
has_exception
not is_valid_approver
msg := "Policy exception requires valid approver"
}
```
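`violates_policy` and `is_valid_approver` are placeholders in the pattern above. A minimal sketch of one way to define them, assuming a privileged-container check and an external `data.security.approvers` document (both are illustrative assumptions, not part of the pattern itself):
```rego
package exceptions
# Placeholder check: the policy being excepted, here a privileged container
violates_policy {
    container := input.spec.containers[_]
    container.securityContext.privileged == true
}
# Placeholder check: the annotated approver must appear in an external data document,
# e.g. {"security": {"approvers": {"alice@example.com": true}}}
is_valid_approver {
    approver := input.metadata.annotations["policy.approver"]
    data.security.approvers[approver]
}
```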
### Data Validation
Validate external data sources:
```rego
package data_validation
import data.approved_images
deny[msg] {
container := input.spec.containers[_]
not image_approved(container.image)
msg := sprintf("Image not in approved list: %v", [container.image])
}
image_approved(image) {
approved_images[_] == image
}
# Validate with external API (requires OPA bundle with data)
deny[msg] {
input.kind == "Deployment"
namespace := input.metadata.namespace
not data.namespaces[namespace].approved
msg := sprintf("Deployment to unapproved namespace: %v", [namespace])
}
```
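The `data.approved_images` and `data.namespaces` documents come from data files or bundles loaded alongside the policy. A minimal sketch of the data file and a matching evaluation (file names and values are illustrative):
```bash
cat > data.json <<'EOF'
{
  "approved_images": ["gcr.io/company/api:1.4.2", "gcr.io/company/web:2.0.1"],
  "namespaces": {"payments": {"approved": true}}
}
EOF

opa eval --data policies/ --data data.json --input deployment.json \
  --format pretty 'data.data_validation.deny'
```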
### Testing Patterns
Write comprehensive tests:
```rego
package example_test
import data.example
# Test deny rule: bind the test document to a local variable and
# substitute it for input with the "with" keyword
test_deny_privileged {
    test_input := {
        "spec": {
            "containers": [{
                "name": "app",
                "securityContext": {"privileged": true}
            }]
        }
    }
    count(example.deny) > 0 with input as test_input
}
# Test allow case
test_allow_unprivileged {
    test_input := {
        "spec": {
            "containers": [{
                "name": "app",
                "securityContext": {"privileged": false}
            }]
        }
    }
    count(example.deny) == 0 with input as test_input
}
# Test with multiple containers
test_multiple_containers {
    test_input := {
        "spec": {
            "containers": [
                {"name": "app1", "securityContext": {"privileged": false}},
                {"name": "app2", "securityContext": {"privileged": true}}
            ]
        }
    }
    count(example.deny) == 1 with input as test_input
}
```
## Performance Optimization
### Index Data Structures
Use indexed data for faster lookups:
```rego
# Slow - iterates every time
approved_images := ["image1:v1", "image2:v1", "image3:v1"]
deny[msg] {
container := input.spec.containers[_]
not array_contains(approved_images, container.image)
msg := "Image not approved"
}
# Fast - uses indexing
approved_images_set := {
"image1:v1",
"image2:v1",
"image3:v1"
}
deny[msg] {
container := input.spec.containers[_]
not approved_images_set[container.image]
msg := "Image not approved"
}
```
### Comprehensions
Use comprehensions to collect results efficiently:
```rego
# Collect all violations at once
all_violations := [msg |
container := input.spec.containers[_]
violates_policy(container)
msg := format_message(container)
]
deny[msg] {
msg := all_violations[_]
}
```
## References
- [Rego Language Reference](https://www.openpolicyagent.org/docs/latest/policy-reference/)
- [OPA Best Practices](https://www.openpolicyagent.org/docs/latest/policy-performance/)
- [Rego Style Guide](https://github.com/open-policy-agent/opa/blob/main/docs/content/policy-language.md)