Initial commit

skills/devsecops/.category (new file, 5 lines)

# DevSecOps Skills

This directory contains skills for DevSecOps and CI/CD security operations.

See the main [README.md](../../README.md) for usage and [CONTRIBUTE.md](../../CONTRIBUTE.md) for contribution guidelines.

skills/devsecops/container-grype/SKILL.md (new file, 329 lines)
---
name: container-grype
description: >
  Container vulnerability scanning and dependency risk assessment using Grype, with CVSS severity
  ratings, EPSS exploit probability, and CISA KEV indicators. Use when: (1) scanning container
  images and filesystems for known vulnerabilities, (2) integrating vulnerability scanning into
  CI/CD pipelines with severity thresholds, (3) analyzing SBOMs (Syft, SPDX, CycloneDX) for
  security risks, (4) prioritizing remediation based on threat metrics (CVSS, EPSS, KEV),
  (5) generating vulnerability reports in multiple formats (JSON, SARIF, CycloneDX) for security
  toolchain integration.
version: 0.1.0
maintainer: SirAppSec
category: devsecops
tags: [container-security, vulnerability-scanning, sca, sbom, cvss, cve, docker, grype]
frameworks: [CWE, NIST]
dependencies:
  tools: [grype, docker]
references:
  - https://github.com/anchore/grype
  - https://www.cve.org/
  - https://nvd.nist.gov/
---

# Container Vulnerability Scanning with Grype

## Overview

Grype is an open-source vulnerability scanner that identifies known security flaws in container images,
filesystems, and Software Bill of Materials (SBOM) documents. It analyzes operating system packages
(Alpine, Ubuntu, Red Hat, Debian) and language-specific dependencies (Java, Python, JavaScript, Ruby,
Go, PHP, Rust) against vulnerability databases to detect CVEs.

Grype emphasizes actionable security insights through:

- CVSS severity ratings for risk classification
- EPSS exploit probability scores for threat assessment
- CISA Known Exploited Vulnerabilities (KEV) indicators
- Multiple output formats (table, JSON, SARIF, CycloneDX) for toolchain integration

## Quick Start

Scan a container image:

```bash
grype <image-name>
```

Examples:

```bash
# Scan an official Docker image
grype alpine:latest

# Scan a local Docker image
grype myapp:v1.2.3

# Scan a filesystem directory
grype dir:/path/to/project

# Scan an SBOM file
grype sbom:/path/to/sbom.json
```

## Core Workflow

### Basic Vulnerability Scan

1. **Identify the scan target**: Determine what to scan (container image, filesystem, or SBOM)
2. **Run the Grype scan**: Execute `grype <target>` to analyze it for vulnerabilities
3. **Review findings**: Examine CVE IDs, severities, CVSS scores, and affected packages
4. **Prioritize remediation**: Focus on critical/high severities, CISA KEV entries, and high EPSS scores
5. **Apply fixes**: Update vulnerable packages or base images
6. **Re-scan**: Verify the vulnerabilities are resolved

### CI/CD Integration with Fail Thresholds

For automated pipeline security gates:

```bash
# Fail the build if any critical vulnerabilities are found
grype <image> --fail-on critical

# Fail on high or critical severities
grype <image> --fail-on high

# Output JSON for further processing
grype <image> -o json > results.json
```

**Pipeline integration pattern**:

1. Build the container image
2. Run a Grype scan with a `--fail-on` threshold
3. If the scan fails: block deployment and alert the security team
4. If the scan passes: continue the deployment workflow
5. Archive scan results as build artifacts
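For pipelines that post-process Grype JSON themselves rather than relying on `--fail-on`, the gate in step 3 can be sketched as follows. This is a minimal illustration, assuming Grype's JSON layout of `matches[].vulnerability.severity`; verify the field names against your Grype version.

```python
# Minimal sketch of a severity gate over Grype JSON output.
SEVERITY_ORDER = ["negligible", "low", "medium", "high", "critical"]

def gate(results: dict, fail_on: str = "high") -> bool:
    """Return True when no match meets or exceeds the fail_on severity."""
    threshold = SEVERITY_ORDER.index(fail_on)
    for match in results.get("matches", []):
        severity = match["vulnerability"].get("severity", "unknown").lower()
        if severity in SEVERITY_ORDER and SEVERITY_ORDER.index(severity) >= threshold:
            return False
    return True

# Synthetic example result set:
sample = {"matches": [{"vulnerability": {"id": "CVE-2024-0001", "severity": "Critical"}}]}
print(gate(sample, fail_on="high"))  # False -> block deployment
```

In a real pipeline the `results` dict would come from `json.load()` over the archived `results.json`.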
### SBOM-Based Scanning

Use Grype with Syft-generated SBOMs for faster re-scanning:

```bash
# Generate an SBOM with Syft (separate skill: sbom-syft)
syft <image> -o json > sbom.json

# Scan the SBOM with Grype (faster than re-analyzing the image)
grype sbom:sbom.json

# Or pipe Syft output directly to Grype
syft <image> -o json | grype
```

**Benefits of the SBOM workflow**:

- Faster re-scans without re-analyzing image layers
- SBOMs can be shared across security tools
- SBOMs can be archived for compliance and auditing

### Risk Prioritization Workflow

Progress:

- [ ] 1. Run a full Grype scan with JSON output: `grype <target> -o json > results.json`
- [ ] 2. Use the helper script to extract high-risk CVEs: `./scripts/prioritize_cves.py results.json`
- [ ] 3. Review CISA KEV matches (actively exploited vulnerabilities)
- [ ] 4. Check EPSS scores (exploit probability) for non-KEV findings
- [ ] 5. Prioritize remediation: KEV > high EPSS > CVSS Critical > CVSS High
- [ ] 6. Document the remediation plan with CVE IDs and affected packages
- [ ] 7. Apply fixes and re-scan to verify

Work through each step systematically, checking off completed items.
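The ordering in step 5 (KEV first, then EPSS, then CVSS severity) can be expressed as a sort key. This is an illustrative sketch, not the actual `prioritize_cves.py`: the `kev` and `epss` field names are assumptions standing in for findings that have already been enriched with KEV and EPSS data.

```python
# Rank findings: KEV entries first, then higher EPSS, then CVSS severity.
SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def priority_key(finding: dict) -> tuple:
    return (
        0 if finding.get("kev") else 1,             # KEV first
        -finding.get("epss", 0.0),                  # then higher exploit probability
        SEVERITY_RANK.get(finding.get("severity", "low"), 4),  # then severity
    )

# Synthetic enriched findings (field names are hypothetical):
findings = [
    {"cve": "CVE-2024-1111", "severity": "high", "epss": 0.02, "kev": False},
    {"cve": "CVE-2023-2222", "severity": "medium", "epss": 0.91, "kev": True},
    {"cve": "CVE-2024-3333", "severity": "critical", "epss": 0.05, "kev": False},
]
ranked = sorted(findings, key=priority_key)
print([f["cve"] for f in ranked])
# -> ['CVE-2023-2222', 'CVE-2024-3333', 'CVE-2024-1111']
```

Note that the KEV entry outranks a CVSS Critical finding, which matches the remediation order in step 5.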
## Output Formats

Grype supports multiple output formats for different use cases:

**Table (default)**: Human-readable console output

```bash
grype <image>
```

**JSON**: Machine-parseable output for automation

```bash
grype <image> -o json
```

**SARIF**: Static Analysis Results Interchange Format for IDE and code-scanning integration

```bash
grype <image> -o sarif
```

**CycloneDX**: SBOM format with embedded vulnerability data

```bash
grype <image> -o cyclonedx-json
```

**Template**: Custom output using Go templates

```bash
grype <image> -o template -t custom-template.tmpl
```

## Advanced Configuration

### Filtering and Exclusions

Exclude specific file paths:

```bash
grype <image> --exclude '/usr/share/doc/**'
```

Show only actionable findings:

```bash
grype <image> --only-fixed  # Only show vulnerabilities with available fixes
```

### Custom Ignore Rules

Create `.grype.yaml` to suppress false positives:

```yaml
ignore:
  # Ignore a specific CVE
  - vulnerability: CVE-YYYY-XXXXX
    reason: "False positive - component not used"

  # Ignore a CVE for a specific package
  - vulnerability: CVE-YYYY-ZZZZZ
    package:
      name: example-lib
      version: 1.2.3
    reason: "Risk accepted - mitigation controls in place"
```

### Database Management

Update the vulnerability database:

```bash
grype db update
```

Check database status:

```bash
grype db status
```

Use a specific database location:

```bash
grype <image> --db /path/to/database
```

## Security Considerations

- **Sensitive data handling**: Scan results may contain package names and versions that reveal
  application architecture. Store results securely and limit access to authorized security personnel.

- **Access control**: Grype requires Docker socket access when scanning container images.
  Restrict permissions to prevent unauthorized image access.

- **Audit logging**: Log all Grype scans with timestamps, target details, and operator identity
  for compliance and incident response. Archive scan results for historical vulnerability tracking.

- **Compliance**: Regular vulnerability scanning supports SOC 2, PCI-DSS, NIST 800-53, and ISO 27001
  requirements. Document scan frequency and remediation SLAs.

- **Safe defaults**: Use `--fail-on critical` as the minimum threshold for production deployments.
  Configure automated scans in CI/CD to prevent vulnerable images from reaching production.

## Bundled Resources

### Scripts (`scripts/`)

- **prioritize_cves.py** - Parse Grype JSON output and prioritize CVEs by threat metrics (KEV, EPSS, CVSS)
- **grype_scan.sh** - Wrapper script for consistent Grype scans with logging and threshold configuration

### References (`references/`)

- **cvss_guide.md** - CVSS severity rating system and score interpretation
- **cisa_kev.md** - CISA Known Exploited Vulnerabilities catalog and remediation urgency
- **vulnerability_remediation.md** - Common remediation patterns for dependency vulnerabilities

### Assets (`assets/`)

- **grype-ci-config.yml** - CI/CD pipeline configuration for Grype vulnerability scanning
- **grype-config.yaml** - Example Grype configuration with common ignore patterns

## Common Patterns

### Pattern 1: Pre-Production Scanning

Scan before pushing images to the registry:

```bash
# Build the image
docker build -t myapp:latest .

# Scan locally before pushing
grype myapp:latest --fail-on critical

# If the scan passes, push to the registry
docker push myapp:latest
```

### Pattern 2: Scheduled Scanning

Re-scan existing images for newly disclosed vulnerabilities:

```bash
# Scan all production images daily
for image in $(docker images --format '{{.Repository}}:{{.Tag}}' | grep prod); do
  grype "$image" -o json >> "daily-scan-$(date +%Y%m%d).json"
done
```

### Pattern 3: Base Image Selection

Compare base images to choose the least vulnerable option:

```bash
# Compare Alpine versions
grype alpine:3.18
grype alpine:3.19

# Compare distros
grype ubuntu:22.04
grype debian:12-slim
grype alpine:3.19
```
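The Pattern 3 comparison can be automated by tallying severities from per-image JSON output (e.g. `grype alpine:3.19 -o json > alpine-3.19.json`). A small sketch, assuming Grype's `matches[].vulnerability.severity` JSON path:

```python
# Tally vulnerability severities per candidate base image.
from collections import Counter

def severity_counts(results: dict) -> Counter:
    return Counter(
        m["vulnerability"].get("severity", "Unknown")
        for m in results.get("matches", [])
    )

# Synthetic example comparing two candidates; in practice, json.load()
# each saved scan result instead.
candidates = {
    "alpine:3.19": {"matches": [{"vulnerability": {"severity": "Medium"}}]},
    "ubuntu:22.04": {"matches": [{"vulnerability": {"severity": "High"}},
                                 {"vulnerability": {"severity": "Medium"}}]},
}
for image, results in candidates.items():
    counts = severity_counts(results)
    print(image, dict(counts), "high/critical:",
          counts["High"] + counts["Critical"])
```

Choosing the image with the fewest high/critical findings usually also shrinks the set of vulnerabilities every downstream image inherits.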
## Integration Points

- **CI/CD**: Integrate with GitHub Actions, GitLab CI, Jenkins, or CircleCI using `--fail-on` thresholds
- **Container registries**: Scan images from Docker Hub, ECR, GCR, ACR, or Harbor
- **Security tools**: Export SARIF for GitHub code scanning, JSON for SIEM ingestion, and CycloneDX for Dependency-Track
- **SDLC**: Scan during build (shift-left), before deployment (quality gate), and on a schedule (continuous monitoring)

## Troubleshooting

### Issue: Database Update Fails

**Symptoms**: `grype db update` fails with network errors

**Solution**:

- Check network connectivity and proxy settings
- Verify the firewall allows access to Grype's database sources
- Use `grype db update --verbose` for detailed error messages
- Consider using an offline database: `grype db import /path/to/database.tar.gz`

### Issue: False Positives

**Symptoms**: Grype reports vulnerabilities in unused code or misidentified packages

**Solution**:

- Create a `.grype.yaml` ignore file with specific CVE suppressions
- Document the justification for each ignored vulnerability
- Periodically review ignored CVEs (e.g. quarterly) to reassess risk
- Use `--only-fixed` to focus on actionable findings

### Issue: Slow Scans

**Symptoms**: Grype scans take excessive time on large images

**Solution**:

- Use the SBOM workflow: generate the SBOM once with Syft, then re-scan the SBOM with Grype
- Exclude unnecessary paths: `--exclude '/usr/share/doc/**'`
- Warm the local database cache with `grype db update` before batch scans
- Scan base images separately to identify inherited vulnerabilities

## References

- [Grype GitHub Repository](https://github.com/anchore/grype)
- [Grype Documentation](https://github.com/anchore/grype#getting-started)
- [NIST National Vulnerability Database](https://nvd.nist.gov/)
- [CISA Known Exploited Vulnerabilities](https://www.cisa.gov/known-exploited-vulnerabilities-catalog)
- [FIRST EPSS (Exploit Prediction Scoring System)](https://www.first.org/epss/)
- [CVSS Specification](https://www.first.org/cvss/specification-document)
skills/devsecops/container-grype/assets/.gitkeep (new file, 9 lines)

# Assets Directory

Place files here that will be used in the output Claude produces:

- Templates
- Configuration files
- Images/logos
- Boilerplate code

These files are NOT loaded into context; they are copied or modified in the output.

skills/devsecops/container-grype/assets/ci-config-template.yml (new file, 357 lines)
# Security-Enhanced CI/CD Pipeline Template
#
# This template demonstrates security best practices for CI/CD pipelines.
# Adapt it to your specific security tools and workflow needs.
#
# Key security features:
# - SAST (Static Application Security Testing)
# - Dependency vulnerability scanning
# - Secrets detection
# - Infrastructure-as-Code security scanning
# - Container image scanning
# - Security artifact uploading for compliance

name: Security Scan Pipeline

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main, develop]
  schedule:
    # Run weekly security scans on Sunday at 2 AM UTC
    - cron: '0 2 * * 0'
  workflow_dispatch: # Allow manual trigger

# Security: restrict permissions to the minimum required
permissions:
  contents: read
  security-events: write # For uploading SARIF results
  pull-requests: write # For commenting on PRs

env:
  # Configuration
  SECURITY_SCAN_FAIL_ON: 'critical,high' # Fail the build on these severities
  REPORT_DIR: 'security-reports'
jobs:
  # Job 1: Static Application Security Testing (SAST)
  sast-scan:
    name: SAST Security Scan
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0 # Full history for better analysis

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'

      - name: Run SAST Scanner
        run: |
          mkdir -p ${{ env.REPORT_DIR }}
          # Example: using Semgrep for SAST
          pip install semgrep
          semgrep --config=auto \
            --json \
            --output ${{ env.REPORT_DIR }}/sast-results.json \
            . || true

          # Alternative: Bandit for Python projects
          # pip install bandit
          # bandit -r . -f json -o ${{ env.REPORT_DIR }}/bandit-results.json

      - name: Process SAST Results
        run: |
          # Parse the results and fail on critical severity
          python3 -c "
          import json
          import sys

          with open('${{ env.REPORT_DIR }}/sast-results.json') as f:
              results = json.load(f)

          critical = len([r for r in results.get('results', []) if r.get('extra', {}).get('severity') == 'ERROR'])
          high = len([r for r in results.get('results', []) if r.get('extra', {}).get('severity') == 'WARNING'])

          print(f'Critical findings: {critical}')
          print(f'High findings: {high}')

          if critical > 0:
              print('❌ Build failed: critical security issues found')
              sys.exit(1)
          elif high > 0:
              print('⚠️ Warning: high severity issues found')
              # Optionally fail on high severity
              # sys.exit(1)
          else:
              print('✅ No critical security issues found')
          "

      - name: Upload SAST Results
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: sast-results
          path: ${{ env.REPORT_DIR }}/sast-results.json
          retention-days: 30
  # Job 2: Dependency Vulnerability Scanning
  dependency-scan:
    name: Dependency Vulnerability Scan
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'

      - name: Scan Python Dependencies
        if: hashFiles('requirements.txt') != ''
        run: |
          mkdir -p ${{ env.REPORT_DIR }}
          pip install safety
          safety check --json > ${{ env.REPORT_DIR }}/safety-results.json || true

      - name: Scan Node Dependencies
        if: hashFiles('package.json') != ''
        run: |
          mkdir -p ${{ env.REPORT_DIR }}
          npm audit --json > ${{ env.REPORT_DIR }}/npm-audit.json || true

      - name: Process Dependency Results
        run: |
          # Check for critical vulnerabilities
          if [ -f "${{ env.REPORT_DIR }}/safety-results.json" ]; then
            critical_count=$(python3 -c "import json; data=json.load(open('${{ env.REPORT_DIR }}/safety-results.json')); print(len([v for v in data.get('vulnerabilities', []) if v.get('severity', '').lower() == 'critical']))")
            echo "Critical vulnerabilities: $critical_count"
            if [ "$critical_count" -gt "0" ]; then
              echo "❌ Build failed: critical vulnerabilities in dependencies"
              exit 1
            fi
          fi

      - name: Upload Dependency Scan Results
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: dependency-scan-results
          path: ${{ env.REPORT_DIR }}/
          retention-days: 30
  # Job 3: Secrets Detection
  secrets-scan:
    name: Secrets Detection
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0 # Full history to scan all commits

      - name: Run Gitleaks
        uses: gitleaks/gitleaks-action@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GITLEAKS_ENABLE_SUMMARY: true

      - name: Alternative - TruffleHog Scan
        if: false # Set to true to enable
        run: |
          mkdir -p ${{ env.REPORT_DIR }}
          pip install truffleHog
          trufflehog --json --regex --entropy=True . \
            > ${{ env.REPORT_DIR }}/trufflehog-results.json || true

      - name: Upload Secrets Scan Results
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: secrets-scan-results
          path: ${{ env.REPORT_DIR }}/
          retention-days: 30
  # Job 4: Container Image Scanning
  container-scan:
    name: Container Image Security Scan
    runs-on: ubuntu-latest
    if: hashFiles('Dockerfile') != ''

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Build Docker Image
        run: |
          docker build -t app:${{ github.sha }} .

      - name: Run Trivy Scanner
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: app:${{ github.sha }}
          format: 'sarif'
          output: '${{ env.REPORT_DIR }}/trivy-results.sarif'
          severity: 'CRITICAL,HIGH'

      - name: Upload Trivy Results to GitHub Security
        if: always()
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: '${{ env.REPORT_DIR }}/trivy-results.sarif'

      - name: Upload Container Scan Results
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: container-scan-results
          path: ${{ env.REPORT_DIR }}/
          retention-days: 30
  # Job 5: Infrastructure-as-Code Security Scanning
  iac-scan:
    name: IaC Security Scan
    runs-on: ubuntu-latest
    if: hashFiles('**/*.tf', '**/*.yaml', '**/*.yml') != ''

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Run Checkov
        run: |
          mkdir -p ${{ env.REPORT_DIR }}
          pip install checkov
          checkov -d . \
            --output json \
            --output-file ${{ env.REPORT_DIR }}/checkov-results.json \
            --quiet \
            || true

      - name: Run tfsec (for Terraform)
        if: hashFiles('**/*.tf') != ''
        run: |
          curl -s https://raw.githubusercontent.com/aquasecurity/tfsec/master/scripts/install_linux.sh | bash
          tfsec . \
            --format json \
            --out ${{ env.REPORT_DIR }}/tfsec-results.json \
            || true

      - name: Process IaC Results
        run: |
          # Warn on failed checks
          if [ -f "${{ env.REPORT_DIR }}/checkov-results.json" ]; then
            failed_count=$(python3 -c "import json; data=json.load(open('${{ env.REPORT_DIR }}/checkov-results.json')); print(data.get('summary', {}).get('failed', 0))")
            echo "Failed checks: $failed_count"
            if [ "$failed_count" -gt "0" ]; then
              echo "⚠️ Warning: IaC security issues found"
              # Optionally fail the build
              # exit 1
            fi
          fi

      - name: Upload IaC Scan Results
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: iac-scan-results
          path: ${{ env.REPORT_DIR }}/
          retention-days: 30
  # Job 6: Security Report Generation and Notification
  security-report:
    name: Generate Security Report
    runs-on: ubuntu-latest
    needs: [sast-scan, dependency-scan, secrets-scan]
    if: always()

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Download All Scan Results
        uses: actions/download-artifact@v4
        with:
          path: all-results/

      - name: Generate Consolidated Report
        run: |
          # Consolidate all security scan results
          mkdir -p consolidated-report

          # The heredoc delimiter is unquoted so that $(date ...) expands;
          # literal backticks are escaped to avoid command substitution.
          cat > consolidated-report/security-summary.md << EOF
          # Security Scan Summary

          **Scan Date**: $(date -u +"%Y-%m-%d %H:%M:%S UTC")
          **Commit**: ${{ github.sha }}
          **Branch**: ${{ github.ref_name }}

          ## Scan Results

          ### SAST Scan
          See artifacts: \`sast-results\`

          ### Dependency Scan
          See artifacts: \`dependency-scan-results\`

          ### Secrets Scan
          See artifacts: \`secrets-scan-results\`

          ### Container Scan
          See artifacts: \`container-scan-results\`

          ### IaC Scan
          See artifacts: \`iac-scan-results\`

          ---

          For detailed results, download the scan artifacts from this workflow run.
          EOF

      - name: Comment on PR (if applicable)
        if: github.event_name == 'pull_request'
        uses: actions/github-script@v7
        with:
          script: |
            const fs = require('fs');
            const report = fs.readFileSync('consolidated-report/security-summary.md', 'utf8');

            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: report
            });

      - name: Upload Consolidated Report
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: consolidated-security-report
          path: consolidated-report/
          retention-days: 90
# Security best practices demonstrated:
#
# 1. ✅ Minimal permissions (principle of least privilege)
# 2. ✅ Multiple security scan types (defense in depth)
# 3. ✅ Fail fast on critical findings
# 4. ✅ Secrets detection across the full git history
# 5. ✅ Container image scanning before deployment
# 6. ✅ IaC scanning for misconfigurations
# 7. ✅ Artifact retention for a compliance audit trail
# 8. ✅ SARIF format for GitHub Security integration
# 9. ✅ Scheduled scans for continuous monitoring
# 10. ✅ PR comments for developer feedback
#
# Compliance mappings:
# - SOC 2: CC6.1, CC6.6, CC7.2 (security monitoring and logging)
# - PCI-DSS: 6.2, 6.5 (secure development practices)
# - NIST: SA-11 (Developer Security Testing)
# - OWASP: integrated security testing throughout the SDLC

skills/devsecops/container-grype/assets/grype-ci-config.yml (new file, 405 lines)
# Grype CI/CD Pipeline Configuration Examples
#
# This file provides example configurations for integrating Grype vulnerability
# scanning into various CI/CD platforms.

# =============================================================================
# GitHub Actions
# =============================================================================

name: Container Security Scan

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]
  schedule:
    # Scan daily for new vulnerabilities
    - cron: '0 6 * * *'

jobs:
  grype-scan:
    name: Grype Vulnerability Scan
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Build Docker image
        run: |
          docker build -t ${{ github.repository }}:${{ github.sha }} .

      - name: Run Grype scan
        uses: anchore/scan-action@v3
        id: grype
        with:
          image: ${{ github.repository }}:${{ github.sha }}
          fail-build: true
          severity-cutoff: high
          output-format: sarif

      - name: Upload SARIF results to GitHub Security
        uses: github/codeql-action/upload-sarif@v3
        if: always()
        with:
          sarif_file: ${{ steps.grype.outputs.sarif }}

      - name: Generate human-readable report
        if: always()
        run: |
          # The scan-action step does not leave a grype binary on PATH,
          # so install it before producing the table report
          curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin
          grype ${{ github.repository }}:${{ github.sha }} -o table > grype-report.txt

      - name: Upload scan report
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: grype-scan-report
          path: grype-report.txt
          retention-days: 30
# =============================================================================
# GitLab CI
# =============================================================================

# .gitlab-ci.yml

stages:
  - build
  - scan
  - deploy

variables:
  IMAGE_NAME: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
  GRYPE_VERSION: "latest"

build:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker build -t $IMAGE_NAME .
    - docker push $IMAGE_NAME
  only:
    - branches

grype-scan:
  stage: scan
  image: anchore/grype:$GRYPE_VERSION
  script:
    - grype $IMAGE_NAME --fail-on high -o json > grype-results.json
    - grype $IMAGE_NAME -o table
  artifacts:
    reports:
      container_scanning: grype-results.json
    paths:
      - grype-results.json
    expire_in: 30 days
  allow_failure: false
  only:
    - branches

deploy:
  stage: deploy
  script:
    - echo "Deploying $IMAGE_NAME"
  only:
    - main
  when: on_success
# =============================================================================
# Jenkins Pipeline
# =============================================================================

# Jenkinsfile

pipeline {
    agent any

    environment {
        IMAGE_NAME = "myapp"
        IMAGE_TAG = "${env.BUILD_NUMBER}"
        GRYPE_VERSION = "latest"
    }

    stages {
        stage('Build') {
            steps {
                script {
                    docker.build("${IMAGE_NAME}:${IMAGE_TAG}")
                }
            }
        }

        stage('Grype Scan') {
            agent {
                docker {
                    image "anchore/grype:${GRYPE_VERSION}"
                    args '-v /var/run/docker.sock:/var/run/docker.sock'
                }
            }
            steps {
                sh """
                    # Run the scan with a high severity threshold
                    grype ${IMAGE_NAME}:${IMAGE_TAG} \
                        --fail-on high \
                        -o json > grype-results.json

                    # Generate a human-readable report
                    grype ${IMAGE_NAME}:${IMAGE_TAG} \
                        -o table > grype-report.txt
                """
            }
            post {
                always {
                    archiveArtifacts artifacts: 'grype-*.json,grype-*.txt',
                                     allowEmptyArchive: true
                }
                failure {
                    echo 'Grype scan found vulnerabilities above the threshold'
                }
            }
        }

        stage('Deploy') {
            when {
                branch 'main'
            }
            steps {
                echo "Deploying ${IMAGE_NAME}:${IMAGE_TAG}"
            }
        }
    }
}
# =============================================================================
# CircleCI
# =============================================================================

# .circleci/config.yml

version: 2.1

orbs:
  docker: circleci/docker@2.2.0

jobs:
  build-and-scan:
    docker:
      - image: cimg/base:2024.01
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: true

      - run:
          name: Build Docker Image
          command: |
            docker build -t myapp:${CIRCLE_SHA1} .

      - run:
          name: Install Grype
          command: |
            curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin

      - run:
          name: Scan with Grype
          command: |
            grype myapp:${CIRCLE_SHA1} --fail-on critical -o json > grype-results.json
            grype myapp:${CIRCLE_SHA1} -o table | tee grype-report.txt

      - store_artifacts:
          path: grype-results.json
          destination: scan-results

      - store_artifacts:
          path: grype-report.txt
          destination: scan-results

workflows:
  build-scan-deploy:
    jobs:
      - build-and-scan:
          filters:
            branches:
              only:
                - main
                - develop
# =============================================================================
# Azure Pipelines
# =============================================================================

# azure-pipelines.yml

trigger:
  branches:
    include:
      - main
      - develop

pool:
  vmImage: 'ubuntu-latest'

variables:
  imageName: 'myapp'
  imageTag: '$(Build.BuildId)'

stages:
  - stage: Build
    jobs:
      - job: BuildImage
        steps:
          - task: Docker@2
            displayName: Build Docker image
            inputs:
              command: build
              dockerfile: Dockerfile
              tags: $(imageTag)

  - stage: Scan
    dependsOn: Build
    jobs:
      - job: GrypeScan
        steps:
          - script: |
              # Install Grype
              curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin

              # Run the scan
              grype $(imageName):$(imageTag) \
                --fail-on high \
                -o json > $(Build.ArtifactStagingDirectory)/grype-results.json

              grype $(imageName):$(imageTag) \
                -o table > $(Build.ArtifactStagingDirectory)/grype-report.txt
            displayName: 'Run Grype Scan'

          - task: PublishBuildArtifacts@1
            displayName: 'Publish Scan Results'
            inputs:
              PathtoPublish: '$(Build.ArtifactStagingDirectory)'
              ArtifactName: 'grype-scan-results'
            condition: always()

  - stage: Deploy
    dependsOn: Scan
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
    jobs:
      - job: DeployProduction
        steps:
          - script: echo "Deploying to production"
            displayName: 'Deploy'
# =============================================================================
# Tekton Pipeline
# =============================================================================

# tekton-pipeline.yaml

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: grype-scan-pipeline
spec:
  params:
    - name: image-name
      type: string
      description: Name of the image to scan
    - name: image-tag
      type: string
      description: Tag of the image to scan
      default: latest

  workspaces:
    - name: shared-workspace

  tasks:
    - name: build-image
      taskRef:
        name: buildah
      workspaces:
        - name: source
          workspace: shared-workspace
      params:
        - name: IMAGE
          value: $(params.image-name):$(params.image-tag)

    - name: grype-scan
      runAfter:
        - build-image
      taskRef:
        name: grype-scan
      workspaces:
        # The grype-scan Task declares a scan-results workspace, so the
        # pipeline must bind one here.
        - name: scan-results
          workspace: shared-workspace
      params:
        - name: IMAGE
          value: $(params.image-name):$(params.image-tag)
        - name: SEVERITY_THRESHOLD
          value: high

    - name: deploy
      runAfter:
        - grype-scan
      taskRef:
        name: kubectl-deploy
      params:
        - name: IMAGE
          value: $(params.image-name):$(params.image-tag)

---

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: grype-scan
spec:
  params:
    - name: IMAGE
      description: Image to scan
    - name: SEVERITY_THRESHOLD
      description: Fail on this severity or higher
      default: high

  steps:
    - name: scan
      image: anchore/grype:latest
      script: |
        #!/bin/sh
        # Write into the mounted workspace (a bare /workspace path would land
        # outside it and be lost). The gating --fail-on scan runs last so its
        # exit code determines the step result.
        grype $(params.IMAGE) -o table | tee $(workspaces.scan-results.path)/grype-report.txt

        grype $(params.IMAGE) \
          --fail-on $(params.SEVERITY_THRESHOLD) \
          -o json > $(workspaces.scan-results.path)/grype-results.json

  workspaces:
    - name: scan-results

# =============================================================================
# Best Practices
# =============================================================================

# 1. Update vulnerability database regularly
# - Run grype db update before scans
# - Cache database between pipeline runs
# - Update database at least daily

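Caching the database between runs can be sketched as a CircleCI `restore_cache`/`save_cache` pair; the cache key name is illustrative, and the path must match the `db.cache-dir` grype is configured with (here assumed to be `~/.grype/db`):

```yaml
      # Restore a previously downloaded vulnerability DB (key name is illustrative)
      - restore_cache:
          keys:
            - grype-db-v1

      - run:
          name: Update Grype DB
          command: grype db update

      # Persist the refreshed DB for the next run. Note that CircleCI skips
      # save_cache when the key already exists, so include a date or version
      # component in the key if the cached DB should roll over.
      - save_cache:
          key: grype-db-v1
          paths:
            - ~/.grype/db
```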
# 2. Set appropriate fail thresholds
# - Production: --fail-on high (also blocks critical)
# - Development: --fail-on critical (temporarily tolerates high)
# - Monitor-only: no fail threshold, just report

# 3. Archive scan results
# - Store JSON for trend analysis
# - Keep reports for compliance audits
# - Retention: 30-90 days minimum

# 4. Integrate with security dashboards
# - Upload SARIF to GitHub Security
# - Send metrics to monitoring systems
# - Alert security team on critical findings

# 5. Scheduled scanning
# - Scan production images daily for new CVEs
# - Re-scan after vulnerability database updates
# - Track vulnerability trends over time
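Best practices 3 and 4 imply post-processing the JSON report; a minimal sketch that tallies findings by severity from Grype's JSON output for dashboards or trend tracking (the sample data is a hand-written stand-in for a real report):

```python
import json
from collections import Counter

# Hand-written stand-in for a real `grype -o json` report; a real report
# carries many more fields per match.
sample_report = json.loads("""
{
  "matches": [
    {"vulnerability": {"id": "CVE-2023-0001", "severity": "Critical"}},
    {"vulnerability": {"id": "CVE-2023-0002", "severity": "High"}},
    {"vulnerability": {"id": "CVE-2023-0003", "severity": "High"}}
  ]
}
""")

def severity_counts(report: dict) -> Counter:
    """Tally matches by severity for dashboards or trend tracking."""
    return Counter(m["vulnerability"]["severity"] for m in report.get("matches", []))

print(dict(severity_counts(sample_report)))  # {'Critical': 1, 'High': 2}
```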
255
skills/devsecops/container-grype/assets/grype-config.yaml
Normal file
@@ -0,0 +1,255 @@
# Grype Configuration File (.grype.yaml)
#
# Place this file in your project root or specify with: grype <target> -c .grype.yaml
#
# Documentation: https://github.com/anchore/grype#configuration

# =============================================================================
# Ignore Rules - Suppress False Positives and Accepted Risks
# =============================================================================

ignore:
  # Example 1: Ignore a specific CVE globally
  - vulnerability: CVE-2021-12345
    reason: "False positive - vulnerable code path not used in our application"

  # Example 2: Ignore a CVE for a specific package only
  - vulnerability: CVE-2022-67890
    package:
      name: example-library
      version: 1.2.3
    reason: "Risk accepted - compensating WAF rules deployed to block exploitation"

  # Example 3: Ignore a CVE with an expiration date (forces re-evaluation)
  - vulnerability: CVE-2023-11111
    package:
      name: lodash
    reason: "Temporary acceptance while migration to alternative library is in progress"
    expires: 2025-12-31

  # Example 4: Ignore by fix state
  - fix-state: wont-fix
    reason: "Maintainer has stated these will not be fixed"

  # Example 5: Ignore vulnerabilities in test dependencies
  - package:
      name: pytest
      type: python
    reason: "Test-only dependency, not present in production"

# =============================================================================
# Match Configuration
# =============================================================================

match:
  # Match vulnerabilities in OS packages
  os:
    enabled: true

  # Match vulnerabilities in language packages
  language:
    enabled: true

  # Control matching behavior per ecosystem
  go:
    # Use the Go module proxy for additional metadata
    use-network: true
    main-module-version:
      # Use the version from go.mod if available
      from-contents: true

  java:
    # Use Maven Central for additional metadata
    use-network: true

  python:
    # Use PyPI for additional metadata
    use-network: true

# =============================================================================
# Search Configuration
# =============================================================================

search:
  # Which image layers to search for packages
  scope: all-layers # Options: all-layers, squashed

  # Exclude paths from scanning
  exclude:
    # Exclude documentation directories
    - "/usr/share/doc/**"
    - "/usr/share/man/**"

    # Exclude test directories
    - "**/test/**"
    - "**/tests/**"
    - "**/__tests__/**"

    # Exclude development tools not in production
    - "**/node_modules/.bin/**"

    # Exclude specific files
    - "**/*.md"
    - "**/*.txt"

  # Index archives (tar, zip, jar, etc.)
  index-archives: true

  # Maximum depth to traverse nested archives
  max-depth: 3

# =============================================================================
# Database Configuration
# =============================================================================

db:
  # Cache directory for the vulnerability database
  cache-dir: ~/.grype/db

  # Auto-update the database on startup
  auto-update: true

  # Validate the database checksum on startup
  validate-by-hash-on-start: true

  # Update check timeout
  update-url-timeout: 30s

# =============================================================================
# Development / Debugging Configuration
# =============================================================================

dev:
  # Profile memory usage (debugging)
  profile-mem: false

# =============================================================================
# Output Configuration
# =============================================================================

output:
  # Default output format
  # Options: table, json, cyclonedx-json, cyclonedx-xml, sarif, template
  format: table

  # Show suppressed/ignored vulnerabilities in output
  show-suppressed: false

# =============================================================================
# Fail-on Configuration
# =============================================================================

# Uncomment to set a default fail-on severity
# fail-on: high # Options: negligible, low, medium, high, critical

# =============================================================================
# Registry Authentication
# =============================================================================

registry:
  # Authenticate to private registries
  # auth:
  #   - authority: registry.example.com
  #     username: user
  #     password: pass
  #
  #   - authority: gcr.io
  #     token: <token>

  # Allow plain-HTTP (insecure) connections to registries
  insecure-use-http: false

# =============================================================================
# Example Configurations for Different Use Cases
# =============================================================================

# -----------------------------------------------------------------------------
# Use Case 1: Development Environment (Permissive)
# -----------------------------------------------------------------------------
#
# ignore:
#   # Allow medium and below in dev
#   - severity: medium
#     reason: "Development environment - focus on high/critical only"
#
# fail-on: critical
#
# search:
#   exclude:
#     - "**/test/**"
#     - "**/node_modules/**"

# -----------------------------------------------------------------------------
# Use Case 2: CI/CD Pipeline (Strict)
# -----------------------------------------------------------------------------
#
# fail-on: high
#
# ignore:
#   # Only allow documented exceptions
#   - vulnerability: CVE-2024-XXXX
#     reason: "Documented risk acceptance by Security Team - Ticket SEC-123"
#     expires: 2025-06-30
#
# output:
#   format: json
#   show-suppressed: true

# -----------------------------------------------------------------------------
# Use Case 3: Production Monitoring (Focus on Exploitability)
# -----------------------------------------------------------------------------
#
# match:
#   # Only report vulnerabilities that can actually be remediated
#   only-fixed: true # Only show CVEs with available fixes
#
# ignore:
#   # Ignore unfixable vulnerabilities with compensating controls
#   - fix-state: wont-fix
#     reason: "Compensating controls implemented - network isolation, WAF rules"
#
# output:
#   format: json

# -----------------------------------------------------------------------------
# Use Case 4: Compliance Scanning (Comprehensive)
# -----------------------------------------------------------------------------
#
# search:
#   scope: all-layers
#   index-archives: true
#   max-depth: 5
#
# output:
#   format: cyclonedx-json
#   show-suppressed: true
#
# # No ignores - report everything for compliance review

# =============================================================================
# Best Practices
# =============================================================================

# 1. Document all ignore rules with clear reasons
# - Include ticket numbers for risk acceptances
# - Set expiration dates for temporary ignores
# - Review ignores quarterly

# 2. Use package-specific ignores instead of global CVE ignores
# - Reduces the risk of suppressing legitimate vulnerabilities in other packages
# - Example: CVE-2021-12345 in package-a (ignored) vs package-b (should alert)

# 3. Exclude non-production paths
# - Test directories, documentation, dev tools
# - Reduces noise and scan time

# 4. Keep configuration in version control
# - Track changes to ignore rules
# - Audit trail for risk acceptances
# - Share consistent configuration across the team

# 5. Different configs for different environments
# - Development: more permissive, focus on critical
# - CI/CD: strict, block on high/critical
# - Production: monitor all, focus on exploitable CVEs
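Best practice 1 (expiration dates, quarterly review) can be automated; a minimal sketch that flags expired ignore rules, using hand-written entries in place of a parsed config file:

```python
from datetime import date

# Hand-written stand-ins for parsed `ignore:` entries from .grype.yaml
ignore_rules = [
    {"vulnerability": "CVE-2023-11111", "reason": "Migration in progress", "expires": "2023-12-31"},
    {"vulnerability": "CVE-2024-22222", "reason": "WAF rule deployed", "expires": "2099-06-30"},
    {"vulnerability": "CVE-2021-12345", "reason": "False positive"},  # no expiry: permanent
]

def expired_rules(rules, today=None):
    """Return rules whose 'expires' date has passed and should be re-evaluated."""
    today = today or date.today()
    return [r for r in rules
            if "expires" in r and date.fromisoformat(r["expires"]) < today]

for rule in expired_rules(ignore_rules, today=date(2024, 6, 1)):
    print(f"EXPIRED: {rule['vulnerability']} - {rule['reason']}")
```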
355
skills/devsecops/container-grype/assets/rule-template.yaml
Normal file
@@ -0,0 +1,355 @@
# Security Rule Template
#
# This template demonstrates how to structure security rules/policies.
# Adapt this template to your specific security tool (Semgrep, OPA, etc.)
#
# Rule Structure Best Practices:
# - Clear rule ID and metadata
# - Severity classification
# - Framework mappings (OWASP, CWE)
# - Remediation guidance
# - Example vulnerable and fixed code

rules:
  # Example Rule 1: SQL Injection Detection
  - id: sql-injection-string-concatenation
    metadata:
      name: "SQL Injection via String Concatenation"
      description: "Detects potential SQL injection vulnerabilities from string concatenation in SQL queries"
      severity: "HIGH"
      category: "security"
      subcategory: "injection"

      # Security Framework Mappings
      owasp:
        - "A03:2021 - Injection"
      cwe:
        - "CWE-89: SQL Injection"
      mitre_attack:
        - "T1190: Exploit Public-Facing Application"

      # Compliance Standards
      compliance:
        - "PCI-DSS 6.5.1: Injection flaws"
        - "NIST 800-53 SI-10: Information Input Validation"

      # Confidence and Impact
      confidence: "HIGH"
      likelihood: "HIGH"
      impact: "HIGH"

      # References
      references:
        - "https://owasp.org/www-community/attacks/SQL_Injection"
        - "https://cwe.mitre.org/data/definitions/89.html"
        - "https://cheatsheetseries.owasp.org/cheatsheets/SQL_Injection_Prevention_Cheat_Sheet.html"

    # Languages this rule applies to
    languages:
      - python
      - javascript
      - java
      - go

    # Detection Pattern (example using Semgrep-style syntax)
    pattern-either:
      - pattern: |
          cursor.execute($SQL + $VAR)
      - pattern: |
          cursor.execute(f"... {$VAR} ...")
      - pattern: |
          cursor.execute("..." + $VAR + "...")

    # What to report when found
    message: |
      Potential SQL injection vulnerability detected. SQL query is constructed using
      string concatenation or f-strings with user input. This allows attackers to
      inject malicious SQL code.

      Use parameterized queries instead:
      - Python: cursor.execute("SELECT * FROM users WHERE id = ?", (user_id,))
      - JavaScript: db.query("SELECT * FROM users WHERE id = $1", [userId])

      See: https://owasp.org/www-community/attacks/SQL_Injection

    # Suggested fix (auto-fix if supported)
    fix: |
      Use parameterized queries with placeholders

    # Example vulnerable and fixed code
    examples:
      - vulnerable: |
          # Vulnerable: String concatenation
          user_id = request.GET['id']
          query = "SELECT * FROM users WHERE id = " + user_id
          cursor.execute(query)

      - fixed: |
          # Fixed: Parameterized query
          user_id = request.GET['id']
          query = "SELECT * FROM users WHERE id = ?"
          cursor.execute(query, (user_id,))

  # Example Rule 2: Hardcoded Secrets Detection
  - id: hardcoded-secret-credential
    metadata:
      name: "Hardcoded Secret or Credential"
      description: "Detects hardcoded secrets, API keys, passwords, or tokens in source code"
      severity: "CRITICAL"
      category: "security"
      subcategory: "secrets"

      owasp:
        - "A07:2021 - Identification and Authentication Failures"
      cwe:
        - "CWE-798: Use of Hard-coded Credentials"
        - "CWE-259: Use of Hard-coded Password"

      compliance:
        - "PCI-DSS 8.2.1: Use of strong cryptography"
        - "SOC 2 CC6.1: Logical access controls"
        - "GDPR Article 32: Security of processing"

      confidence: "MEDIUM"
      likelihood: "HIGH"
      impact: "CRITICAL"

      references:
        - "https://cwe.mitre.org/data/definitions/798.html"
        - "https://owasp.org/www-community/vulnerabilities/Use_of_hard-coded_password"

    languages:
      - python
      - javascript
      - java
      - go
      - ruby

    pattern-either:
      - pattern: |
          password = "..."
      - pattern: |
          api_key = "..."
      - pattern: |
          secret = "..."
      - pattern: |
          token = "..."

    pattern-not: |
      $VAR = ""

    message: |
      Potential hardcoded secret detected. Hardcoding credentials in source code
      is a critical security vulnerability that can lead to unauthorized access
      if the code is exposed.

      Use environment variables or a secrets management system instead:
      - Python: os.environ.get('API_KEY')
      - Node.js: process.env.API_KEY
      - Secrets Manager: AWS Secrets Manager, HashiCorp Vault, etc.

      See: https://cwe.mitre.org/data/definitions/798.html

    examples:
      - vulnerable: |
          # Vulnerable: Hardcoded API key
          api_key = "sk-1234567890abcdef"
          api.authenticate(api_key)

      - fixed: |
          # Fixed: Environment variable
          import os
          api_key = os.environ.get('API_KEY')
          if not api_key:
              raise ValueError("API_KEY environment variable not set")
          api.authenticate(api_key)

  # Example Rule 3: XSS via Unsafe HTML Rendering
  - id: xss-unsafe-html-rendering
    metadata:
      name: "Cross-Site Scripting (XSS) via Unsafe HTML"
      description: "Detects unsafe HTML rendering that could lead to XSS vulnerabilities"
      severity: "HIGH"
      category: "security"
      subcategory: "xss"

      owasp:
        - "A03:2021 - Injection"
      cwe:
        - "CWE-79: Cross-site Scripting (XSS)"
        - "CWE-80: Improper Neutralization of Script-Related HTML Tags"

      compliance:
        - "PCI-DSS 6.5.7: Cross-site scripting"
        - "NIST 800-53 SI-10: Information Input Validation"

      confidence: "HIGH"
      likelihood: "MEDIUM"
      impact: "HIGH"

      references:
        - "https://owasp.org/www-community/attacks/xss/"
        - "https://cwe.mitre.org/data/definitions/79.html"
        - "https://cheatsheetseries.owasp.org/cheatsheets/Cross_Site_Scripting_Prevention_Cheat_Sheet.html"

    languages:
      - javascript
      - typescript
      - jsx
      - tsx

    pattern-either:
      - pattern: |
          dangerouslySetInnerHTML={{__html: $VAR}}
      - pattern: |
          innerHTML = $VAR

    message: |
      Potential XSS vulnerability detected. Setting HTML content directly from
      user input without sanitization can allow attackers to inject malicious
      JavaScript code.

      Use one of these safe alternatives:
      - React: Use {userInput} for automatic escaping
      - DOMPurify: const clean = DOMPurify.sanitize(dirty);
      - Framework-specific sanitizers

      See: https://owasp.org/www-community/attacks/xss/

    examples:
      - vulnerable: |
          // Vulnerable: Unsanitized HTML
          function UserComment({ comment }) {
            return <div dangerouslySetInnerHTML={{__html: comment}} />;
          }

      - fixed: |
          // Fixed: Sanitized with DOMPurify
          import DOMPurify from 'dompurify';

          function UserComment({ comment }) {
            const sanitized = DOMPurify.sanitize(comment);
            return <div dangerouslySetInnerHTML={{__html: sanitized}} />;
          }

  # Example Rule 4: Insecure Cryptography
  - id: weak-cryptographic-algorithm
    metadata:
      name: "Weak Cryptographic Algorithm"
      description: "Detects use of weak or deprecated cryptographic algorithms"
      severity: "HIGH"
      category: "security"
      subcategory: "cryptography"

      owasp:
        - "A02:2021 - Cryptographic Failures"
      cwe:
        - "CWE-327: Use of a Broken or Risky Cryptographic Algorithm"
        - "CWE-326: Inadequate Encryption Strength"

      compliance:
        - "PCI-DSS 4.1: Use strong cryptography"
        - "NIST 800-53 SC-13: Cryptographic Protection"
        - "GDPR Article 32: Security of processing"

      confidence: "HIGH"
      likelihood: "MEDIUM"
      impact: "HIGH"

      references:
        - "https://cwe.mitre.org/data/definitions/327.html"
        - "https://owasp.org/www-project-web-security-testing-guide/latest/4-Web_Application_Security_Testing/09-Testing_for_Weak_Cryptography/"

    languages:
      - python
      - javascript
      - java

    pattern-either:
      - pattern: |
          hashlib.md5(...)
      - pattern: |
          hashlib.sha1(...)
      - pattern: |
          crypto.createHash('md5')
      - pattern: |
          crypto.createHash('sha1')

    message: |
      Weak cryptographic algorithm detected (MD5 or SHA1). These algorithms are
      considered cryptographically broken and should not be used for security purposes.

      Use strong alternatives:
      - For hashing: SHA-256, SHA-384, or SHA-512
      - For password hashing: bcrypt, argon2, or PBKDF2
      - Python: hashlib.sha256()
      - Node.js: crypto.createHash('sha256')

      See: https://cwe.mitre.org/data/definitions/327.html

    examples:
      - vulnerable: |
          # Vulnerable: MD5 hash
          import hashlib
          hash_value = hashlib.md5(data).hexdigest()

      - fixed: |
          # Fixed: SHA-256 hash
          import hashlib
          hash_value = hashlib.sha256(data).hexdigest()

# Rule Configuration
configuration:
  # Global settings
  enabled: true
  severity_threshold: "MEDIUM" # Report findings at MEDIUM severity and above

  # Performance tuning
  max_file_size_kb: 1024
  exclude_patterns:
    - "test/*"
    - "tests/*"
    - "node_modules/*"
    - "vendor/*"
    - "*.min.js"

  # False positive reduction
  confidence_threshold: "MEDIUM" # Only report findings with MEDIUM confidence or higher

# Rule Metadata Schema
# This section documents the expected structure for rules
metadata_schema:
  required:
    - id: "Unique identifier for the rule (kebab-case)"
    - name: "Human-readable rule name"
    - description: "What the rule detects"
    - severity: "CRITICAL | HIGH | MEDIUM | LOW | INFO"
    - category: "security | best-practice | performance"

  optional:
    - subcategory: "Specific type (injection, xss, secrets, etc.)"
    - owasp: "OWASP Top 10 mappings"
    - cwe: "CWE identifier(s)"
    - mitre_attack: "MITRE ATT&CK technique(s)"
    - compliance: "Compliance standard references"
    - confidence: "Detection confidence level"
    - likelihood: "Likelihood of exploitation"
    - impact: "Potential impact if exploited"
    - references: "External documentation links"

# Usage Instructions:
#
# 1. Copy this template when creating new security rules
# 2. Update metadata fields with appropriate framework mappings
# 3. Customize detection patterns for your tool (Semgrep, OPA, etc.)
# 4. Provide clear remediation guidance in the message field
# 5. Include both vulnerable and fixed code examples
# 6. Test rules on real codebases before deployment
#
# Best Practices:
# - Map to multiple frameworks (OWASP, CWE, MITRE ATT&CK)
# - Include compliance standard references
# - Provide actionable remediation guidance
# - Show code examples (vulnerable vs. fixed)
# - Tune confidence levels to reduce false positives
# - Exclude test directories to reduce noise
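The required/optional schema above lends itself to a quick lint pass over rule metadata; a minimal sketch (field names taken from the schema, function name is illustrative):

```python
# Field names and severity values taken from the metadata schema above
REQUIRED_FIELDS = {"id", "name", "description", "severity", "category"}
VALID_SEVERITIES = {"CRITICAL", "HIGH", "MEDIUM", "LOW", "INFO"}

def validate_rule(metadata: dict) -> list:
    """Return a list of schema problems for one rule's metadata (empty = valid)."""
    problems = [f"missing required field: {f}"
                for f in sorted(REQUIRED_FIELDS - metadata.keys())]
    severity = metadata.get("severity")
    if severity is not None and severity not in VALID_SEVERITIES:
        problems.append(f"invalid severity: {severity}")
    return problems

rule = {"id": "sql-injection-string-concatenation",
        "name": "SQL Injection via String Concatenation",
        "description": "Detects SQL built by concatenation",
        "severity": "SEVERE",  # not a schema value - gets flagged
        "category": "security"}
print(validate_rule(rule))  # ['invalid severity: SEVERE']
```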
550
skills/devsecops/container-grype/references/EXAMPLE.md
Normal file
@@ -0,0 +1,550 @@
# Reference Document Template

This file demonstrates how to structure detailed reference material that Claude loads on-demand.

**When to use this reference**: Include a clear statement about when Claude should consult this document.
For example: "Consult this reference when analyzing Python code for security vulnerabilities and needing detailed remediation patterns."

**Document purpose**: Briefly explain what this reference provides that's not in SKILL.md.

---

## Table of Contents

**For documents >100 lines, always include a table of contents** to help Claude navigate quickly.

- [When to Use References](#when-to-use-references)
- [Document Organization](#document-organization)
- [Detailed Technical Content](#detailed-technical-content)
- [Security Framework Mappings](#security-framework-mappings)
  - [OWASP Top 10](#owasp-top-10)
  - [CWE Mappings](#cwe-mappings)
  - [MITRE ATT&CK](#mitre-attck)
- [Remediation Patterns](#remediation-patterns)
- [Advanced Configuration](#advanced-configuration)
- [Examples and Code Samples](#examples-and-code-samples)

---

## When to Use References

**Move content from SKILL.md to references/** when:

1. **Content exceeds 100 lines** - Keep SKILL.md concise
2. **Framework-specific details** - Detailed OWASP/CWE/MITRE mappings
3. **Advanced user content** - Deep technical details for expert users
4. **Lookup-oriented content** - Rule libraries, configuration matrices, comprehensive lists
5. **Language-specific patterns** - Separate files per language/framework
6. **Historical context** - Old patterns and deprecated approaches

**Keep in SKILL.md**:
- Core workflows (top 3-5 use cases)
- Decision points and branching logic
- Quick start guidance
- Essential security considerations

---

## Document Organization

### Structure for Long Documents

For references >100 lines:

```markdown
# Title

**When to use**: Clear trigger statement
**Purpose**: What this provides

## Table of Contents
- Links to all major sections

## Quick Reference
- Key facts or commands for fast lookup

## Detailed Content
- Comprehensive information organized logically

## Framework Mappings
- OWASP, CWE, MITRE ATT&CK references

## Examples
- Code samples and patterns
```

### Section Naming Conventions

- Use **imperative** or **declarative** headings
  - ✅ "Detecting SQL Injection" not "How to detect SQL Injection"
  - ✅ "Common Patterns" not "These are common patterns"
- Make headings **searchable** and **specific**

---

## Detailed Technical Content

This section demonstrates the type of detailed content that belongs in references rather than SKILL.md.

### Example: Comprehensive Vulnerability Detection

#### SQL Injection Detection Patterns

**Pattern 1: String Concatenation in Queries**

```python
# Vulnerable pattern
query = "SELECT * FROM users WHERE id = " + user_id
cursor.execute(query)

# Detection criteria:
# - SQL keyword (SELECT, INSERT, UPDATE, DELETE)
# - String concatenation operator (+, f-string)
# - Variable user input (request params, form data)

# Severity: HIGH
# CWE: CWE-89
# OWASP: A03:2021 - Injection
```

**Remediation**:
```python
# Fixed: Parameterized query
query = "SELECT * FROM users WHERE id = ?"
cursor.execute(query, (user_id,))

# OR using an ORM
user = User.objects.get(id=user_id)
```

**Pattern 2: Unsafe String Formatting**

```python
# Vulnerable patterns
query = f"SELECT * FROM users WHERE name = '{username}'"
query = "SELECT * FROM users WHERE name = '%s'" % username
query = "SELECT * FROM users WHERE name = '{}'".format(username)

# All three patterns are vulnerable to SQL injection
```
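To contrast the vulnerable formatting patterns with the safe form end to end, a runnable sketch using Python's built-in sqlite3 (the table and data are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# Attacker-controlled input: harmless in a parameterized query,
# but rewrites the WHERE clause when concatenated into the SQL string.
username = "alice' OR '1'='1"

# Safe: the driver treats the whole value as data, never as SQL
rows = conn.execute("SELECT id FROM users WHERE name = ?", (username,)).fetchall()
print(rows)  # [] - the injection payload matches no real user

# The vulnerable f-string version instead returns every row:
rows_vuln = conn.execute(f"SELECT id FROM users WHERE name = '{username}'").fetchall()
print(rows_vuln)  # [(1,)] - the OR '1'='1' clause matched all rows
```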
#### Cross-Site Scripting (XSS) Detection

**Pattern 1: Unescaped Output in Templates**

```javascript
// Vulnerable: Direct HTML injection
element.innerHTML = userInput;
document.write(userInput);

// Vulnerable: React dangerouslySetInnerHTML
<div dangerouslySetInnerHTML={{__html: userComment}} />

// Detection criteria:
// - Direct DOM manipulation (innerHTML, document.write)
// - React dangerouslySetInnerHTML with user data
// - Template engines with autoescaping disabled

// Severity: HIGH
// CWE: CWE-79
// OWASP: A03:2021 - Injection
```

**Remediation**:
```javascript
// Fixed: Escaped output
element.textContent = userInput; // Auto-escapes

// Fixed: Sanitization library
import DOMPurify from 'dompurify';
const clean = DOMPurify.sanitize(userComment);
<div dangerouslySetInnerHTML={{__html: clean}} />
```

---
|
||||
|
||||
## Security Framework Mappings

This section provides comprehensive security framework mappings for findings.

### OWASP Top 10

Map security findings to OWASP Top 10 (2021) categories:

| Category | Title | Common Vulnerabilities |
|----------|-------|----------------------|
| **A01:2021** | Broken Access Control | Authorization bypass, privilege escalation, IDOR |
| **A02:2021** | Cryptographic Failures | Weak crypto, plaintext storage, insecure TLS |
| **A03:2021** | Injection | SQL injection, XSS, command injection, LDAP injection |
| **A04:2021** | Insecure Design | Missing security controls, threat modeling gaps |
| **A05:2021** | Security Misconfiguration | Default configs, verbose errors, unnecessary features |
| **A06:2021** | Vulnerable Components | Outdated libraries, unpatched dependencies |
| **A07:2021** | Auth & Session Failures | Weak passwords, session fixation, missing MFA |
| **A08:2021** | Software & Data Integrity | Unsigned updates, insecure CI/CD, deserialization |
| **A09:2021** | Logging & Monitoring Failures | Insufficient logging, no alerting, log injection |
| **A10:2021** | SSRF | Server-side request forgery, unvalidated redirects |

**Usage**: When reporting findings, map to primary OWASP category and reference the identifier (e.g., "A03:2021 - Injection").

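The mapping above can also be applied programmatically when emitting findings. A minimal sketch (the record fields and helper name are illustrative, not from any particular tool):

```python
# Minimal OWASP Top 10 (2021) lookup; extend with the remaining categories as needed.
OWASP_TOP_10 = {
    "A01:2021": "Broken Access Control",
    "A02:2021": "Cryptographic Failures",
    "A03:2021": "Injection",
}

def tag_finding(title: str, owasp_id: str, cwe_id: str) -> dict:
    """Attach standardized framework references to a finding."""
    if owasp_id not in OWASP_TOP_10:
        raise ValueError(f"Unknown OWASP category: {owasp_id}")
    return {
        "title": title,
        "owasp": f"{owasp_id} - {OWASP_TOP_10[owasp_id]}",
        "cwe": cwe_id,
    }

finding = tag_finding("Reflected XSS in search page", "A03:2021", "CWE-79")
print(finding["owasp"])  # A03:2021 - Injection
```

Carrying the full identifier string ("A03:2021 - Injection") in each finding keeps reports self-describing for downstream tooling.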
### CWE Mappings

Map to relevant Common Weakness Enumeration categories for precise vulnerability classification:

#### Injection Vulnerabilities
- **CWE-78**: OS Command Injection
- **CWE-79**: Cross-site Scripting (XSS)
- **CWE-89**: SQL Injection
- **CWE-90**: LDAP Injection
- **CWE-91**: XML Injection
- **CWE-94**: Code Injection

#### Authentication & Authorization
- **CWE-287**: Improper Authentication
- **CWE-288**: Authentication Bypass Using Alternate Path
- **CWE-290**: Authentication Bypass by Spoofing
- **CWE-294**: Authentication Bypass by Capture-replay
- **CWE-306**: Missing Authentication for Critical Function
- **CWE-307**: Improper Restriction of Excessive Authentication Attempts
- **CWE-352**: Cross-Site Request Forgery (CSRF)

#### Cryptographic Issues
- **CWE-256**: Plaintext Storage of Password
- **CWE-259**: Use of Hard-coded Password
- **CWE-261**: Weak Encoding for Password
- **CWE-321**: Use of Hard-coded Cryptographic Key
- **CWE-326**: Inadequate Encryption Strength
- **CWE-327**: Use of Broken or Risky Cryptographic Algorithm
- **CWE-329**: Not Using a Random IV with CBC Mode
- **CWE-798**: Use of Hard-coded Credentials

#### Input Validation
- **CWE-20**: Improper Input Validation
- **CWE-73**: External Control of File Name or Path
- **CWE-434**: Unrestricted Upload of File with Dangerous Type
- **CWE-601**: URL Redirection to Untrusted Site

#### Sensitive Data Exposure
- **CWE-200**: Information Exposure
- **CWE-209**: Information Exposure Through Error Message
- **CWE-312**: Cleartext Storage of Sensitive Information
- **CWE-319**: Cleartext Transmission of Sensitive Information
- **CWE-532**: Information Exposure Through Log Files

**Usage**: Include CWE identifier in all vulnerability reports for standardized classification.

### MITRE ATT&CK

Reference relevant tactics and techniques for threat context:

#### Initial Access (TA0001)
- **T1190**: Exploit Public-Facing Application
- **T1133**: External Remote Services
- **T1078**: Valid Accounts

#### Execution (TA0002)
- **T1059**: Command and Scripting Interpreter
- **T1203**: Exploitation for Client Execution

#### Persistence (TA0003)
- **T1098**: Account Manipulation
- **T1136**: Create Account
- **T1505**: Server Software Component

#### Privilege Escalation (TA0004)
- **T1068**: Exploitation for Privilege Escalation
- **T1548**: Abuse Elevation Control Mechanism

#### Defense Evasion (TA0005)
- **T1027**: Obfuscated Files or Information
- **T1140**: Deobfuscate/Decode Files or Information
- **T1562**: Impair Defenses

#### Credential Access (TA0006)
- **T1110**: Brute Force
- **T1555**: Credentials from Password Stores
- **T1552**: Unsecured Credentials

#### Discovery (TA0007)
- **T1083**: File and Directory Discovery
- **T1046**: Network Service Scanning

#### Collection (TA0009)
- **T1005**: Data from Local System
- **T1114**: Email Collection

#### Exfiltration (TA0010)
- **T1041**: Exfiltration Over C2 Channel
- **T1567**: Exfiltration Over Web Service

**Usage**: When identifying vulnerabilities, consider which ATT&CK techniques an attacker could use to exploit them.

---

## Remediation Patterns

This section provides specific remediation guidance for common vulnerability types.

### SQL Injection Remediation

**Step 1: Identify vulnerable queries**
- Search for string concatenation in SQL queries
- Check for f-strings or format() with SQL keywords
- Review all database interaction code

**Step 2: Apply parameterized queries**

```python
# Python with sqlite3
cursor.execute("SELECT * FROM users WHERE id = ?", (user_id,))

# Python with psycopg2 (PostgreSQL)
cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))

# Python with SQLAlchemy (ORM)
from sqlalchemy import text
result = session.execute(text("SELECT * FROM users WHERE id = :id"), {"id": user_id})
```

**Step 3: Validate and sanitize input** (defense in depth)

```python
import re

# Validate input format
if not re.match(r'^\d+$', user_id):
    raise ValueError("Invalid user ID format")

# Use ORM query builders
user = User.query.filter_by(id=user_id).first()
```

**Step 4: Implement least privilege**
- Database user should have minimum required permissions
- Use read-only accounts for SELECT operations
- Never use admin/root accounts for application queries

### XSS Remediation

**Step 1: Enable auto-escaping**
- Most modern frameworks escape by default
- Ensure auto-escaping is not disabled

**Step 2: Use framework-specific safe methods**

```javascript
// React: Use JSX (auto-escapes)
<div>{userInput}</div>

// Vue: Use template syntax (auto-escapes)
<div>{{ userInput }}</div>

// Angular: Use property binding (auto-escapes)
<div [textContent]="userInput"></div>
```

**Step 3: Sanitize when HTML is required**

```javascript
import DOMPurify from 'dompurify';

// Sanitize HTML content
const clean = DOMPurify.sanitize(userHTML, {
  ALLOWED_TAGS: ['b', 'i', 'em', 'strong', 'p'],
  ALLOWED_ATTR: []
});
```

**Step 4: Content Security Policy (CSP)**

```html
<!-- Add CSP header -->
Content-Security-Policy: default-src 'self'; script-src 'self' 'nonce-{random}'
```

---

## Advanced Configuration

This section contains detailed configuration options and tuning parameters.

### Example: SAST Tool Configuration

```yaml
# Advanced security scanner configuration
scanner:
  # Severity threshold
  severity_threshold: MEDIUM

  # Rule configuration
  rules:
    enabled:
      - sql-injection
      - xss
      - hardcoded-secrets
    disabled:
      - informational-only

  # False positive reduction
  confidence_threshold: HIGH
  exclude_patterns:
    - "*/test/*"
    - "*/tests/*"
    - "*/node_modules/*"
    - "*.test.js"
    - "*.spec.ts"

  # Performance tuning
  max_file_size_kb: 2048
  timeout_seconds: 300
  parallel_jobs: 4

  # Output configuration
  output_format: json
  include_code_snippets: true
  max_snippet_lines: 10
```

---

## Examples and Code Samples

This section provides comprehensive code examples for various scenarios.

### Example 1: Secure API Authentication

```python
# Secure API key handling
import hmac
import os
from functools import wraps

from flask import Flask, request, jsonify

app = Flask(__name__)

# Load API key from environment (never hardcode)
VALID_API_KEY = os.environ.get('API_KEY')
if not VALID_API_KEY:
    raise ValueError("API_KEY environment variable not set")

def require_api_key(f):
    @wraps(f)
    def decorated_function(*args, **kwargs):
        api_key = request.headers.get('X-API-Key')

        if not api_key:
            return jsonify({'error': 'API key required'}), 401

        # Constant-time comparison to prevent timing attacks
        if not hmac.compare_digest(api_key, VALID_API_KEY):
            return jsonify({'error': 'Invalid API key'}), 403

        return f(*args, **kwargs)
    return decorated_function

@app.route('/api/secure-endpoint')
@require_api_key
def secure_endpoint():
    return jsonify({'message': 'Access granted'})
```

### Example 2: Secure Password Hashing

```python
# Secure password storage with bcrypt
import bcrypt

def hash_password(password: str) -> str:
    """Hash a password using bcrypt."""
    # Generate salt and hash password
    salt = bcrypt.gensalt(rounds=12)  # Cost factor: 12 (industry standard)
    hashed = bcrypt.hashpw(password.encode('utf-8'), salt)
    return hashed.decode('utf-8')

def verify_password(password: str, hashed: str) -> bool:
    """Verify a password against a hash."""
    return bcrypt.checkpw(
        password.encode('utf-8'),
        hashed.encode('utf-8')
    )

# Usage
stored_hash = hash_password("user_password")
is_valid = verify_password("user_password", stored_hash)  # True
```

### Example 3: Secure File Upload

```python
# Secure file upload with validation
import os
import uuid

import magic
from werkzeug.utils import secure_filename

ALLOWED_EXTENSIONS = {'pdf', 'png', 'jpg', 'jpeg'}
ALLOWED_MIME_TYPES = {
    'application/pdf',
    'image/png',
    'image/jpeg'
}
MAX_FILE_SIZE = 5 * 1024 * 1024  # 5 MB

def is_allowed_file(filename: str, file_content: bytes) -> bool:
    """Validate file extension and MIME type."""
    # Check extension
    if '.' not in filename:
        return False

    ext = filename.rsplit('.', 1)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        return False

    # Check MIME type (prevent extension spoofing)
    mime = magic.from_buffer(file_content, mime=True)
    if mime not in ALLOWED_MIME_TYPES:
        return False

    return True

def handle_upload(file):
    """Securely handle file upload."""
    # Check file size
    file.seek(0, os.SEEK_END)
    size = file.tell()
    file.seek(0)

    if size > MAX_FILE_SIZE:
        raise ValueError("File too large")

    # Read content for validation
    content = file.read()
    file.seek(0)

    # Validate file type
    if not is_allowed_file(file.filename, content):
        raise ValueError("Invalid file type")

    # Sanitize filename
    filename = secure_filename(file.filename)

    # Generate unique filename to prevent overwrite attacks
    unique_filename = f"{uuid.uuid4()}_{filename}"

    # Save to secure location (outside web root)
    upload_path = os.path.join('/secure/uploads', unique_filename)
    file.save(upload_path)

    return unique_filename
```

---

## Best Practices for Reference Documents

1. **Start with "When to use"** - Help Claude know when to load this reference
2. **Include table of contents** - For documents >100 lines
3. **Use concrete examples** - Code samples with vulnerable and fixed versions
4. **Map to frameworks** - OWASP, CWE, MITRE ATT&CK for context
5. **Provide remediation** - Don't just identify issues, show how to fix them
6. **Organize logically** - Group related content, use clear headings
7. **Keep examples current** - Use modern patterns and current framework versions
8. **Be concise** - Even in references, challenge every sentence

@@ -0,0 +1,253 @@

# Workflow Checklist Template

This template demonstrates workflow patterns for security operations. Copy and adapt these checklists to your specific skill needs.

## Pattern 1: Sequential Workflow Checklist

Use this pattern for operations that must be completed in order, step-by-step.

### Security Assessment Workflow

Progress:
[ ] 1. Identify application entry points and attack surface
[ ] 2. Map authentication and authorization flows
[ ] 3. Identify data flows and sensitive data handling
[ ] 4. Review existing security controls
[ ] 5. Document findings with framework references (OWASP, CWE)
[ ] 6. Prioritize findings by severity (CVSS scores)
[ ] 7. Generate report with remediation recommendations

Work through each step systematically. Check off completed items.

---

## Pattern 2: Conditional Workflow

Use this pattern when the workflow branches based on findings or conditions.

### Vulnerability Remediation Workflow

1. Identify vulnerability type
   - If SQL Injection → See [sql-injection-remediation.md](sql-injection-remediation.md)
   - If XSS (Cross-Site Scripting) → See [xss-remediation.md](xss-remediation.md)
   - If Authentication flaw → See [auth-remediation.md](auth-remediation.md)
   - If Authorization flaw → See [authz-remediation.md](authz-remediation.md)
   - If Cryptographic issue → See [crypto-remediation.md](crypto-remediation.md)

2. Assess severity using CVSS calculator
   - If CVSS >= 9.0 → Priority: Critical (immediate action)
   - If CVSS 7.0-8.9 → Priority: High (action within 24h)
   - If CVSS 4.0-6.9 → Priority: Medium (action within 1 week)
   - If CVSS < 4.0 → Priority: Low (action within 30 days)

3. Apply appropriate remediation pattern
4. Validate fix with security testing
5. Document changes and update security documentation

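The severity branching in step 2 can be expressed as a small helper when automating triage (a sketch; the thresholds and SLA strings simply mirror the list above):

```python
def cvss_priority(score: float) -> tuple[str, str]:
    """Map a CVSS base score to a remediation priority and SLA."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score >= 9.0:
        return ("Critical", "immediate action")
    if score >= 7.0:
        return ("High", "action within 24h")
    if score >= 4.0:
        return ("Medium", "action within 1 week")
    return ("Low", "action within 30 days")

print(cvss_priority(9.8))  # ('Critical', 'immediate action')
print(cvss_priority(5.3))  # ('Medium', 'action within 1 week')
```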
---

## Pattern 3: Iterative Workflow

Use this pattern for operations that repeat across multiple targets or items.

### Code Security Review Workflow

For each file in the review scope:
1. Identify security-sensitive operations (auth, data access, crypto, input handling)
2. Check against secure coding patterns for the language
3. Flag potential vulnerabilities with severity rating
4. Map findings to CWE and OWASP categories
5. Suggest specific remediation approaches
6. Document finding with code location and fix priority

Continue until all files in scope have been reviewed.

---

## Pattern 4: Feedback Loop Workflow

Use this pattern when validation and iteration are required.

### Secure Configuration Generation Workflow

1. Generate initial security configuration based on requirements
2. Run validation script: `./scripts/validate_config.py config.yaml`
3. Review validation output:
   - Note all errors (must fix)
   - Note all warnings (should fix)
   - Note all info items (consider)
4. Fix identified issues in configuration
5. Repeat steps 2-4 until validation passes with zero errors
6. Review warnings and determine if they should be addressed
7. Apply configuration once validation is clean

**Validation Loop**: Run validator → Fix errors → Repeat until clean

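The validate → fix → re-validate loop generalizes to any validator. A minimal sketch with injectable callables (the toy `validate` and `fix` functions below are placeholders, not part of any real script):

```python
from typing import Callable

def validation_loop(
    config: dict,
    validate: Callable[[dict], list],  # returns a list of error messages
    fix: Callable[[dict, list], dict],  # returns a corrected config
    max_rounds: int = 10,
) -> dict:
    """Repeat validate → fix until the validator reports zero errors."""
    for _ in range(max_rounds):
        errors = validate(config)
        if not errors:
            return config
        config = fix(config, errors)
    raise RuntimeError("Validation did not converge")

# Toy validator: requires TLS to be enabled.
validate = lambda cfg: [] if cfg.get("tls") else ["tls must be enabled"]
fix = lambda cfg, errs: {**cfg, "tls": True}

clean = validation_loop({"tls": False}, validate, fix)
print(clean)  # {'tls': True}
```

Capping the rounds (`max_rounds`) prevents the loop from spinning forever when a fix cannot satisfy the validator.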
---

## Pattern 5: Parallel Analysis Workflow

Use this pattern when multiple independent analyses can run concurrently.

### Comprehensive Security Scan Workflow

Run these scans in parallel:

**Static Analysis**:
[ ] 1a. Run SAST scan (Semgrep/Bandit)
[ ] 1b. Run dependency vulnerability scan (Safety/npm audit)
[ ] 1c. Run secrets detection (Gitleaks/TruffleHog)
[ ] 1d. Run license compliance check

**Dynamic Analysis**:
[ ] 2a. Run DAST scan (ZAP/Burp)
[ ] 2b. Run API security testing
[ ] 2c. Run authentication/authorization testing

**Infrastructure Analysis**:
[ ] 3a. Run infrastructure-as-code scan (Checkov/tfsec)
[ ] 3b. Run container image scan (Trivy/Grype)
[ ] 3c. Run configuration review

**Consolidation**:
[ ] 4. Aggregate all findings
[ ] 5. Deduplicate and correlate findings
[ ] 6. Prioritize by risk (CVSS + exploitability + business impact)
[ ] 7. Generate unified security report

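The fan-out/consolidate shape above can be sketched with a thread pool (the scanner functions here are stubs standing in for real tool invocations, which would normally shell out to Semgrep, Grype, etc.):

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder scanners; in practice each would invoke a real tool and parse its output.
def sast_scan():
    return [{"id": "SAST-1", "severity": "HIGH"}]

def dependency_scan():
    return [{"id": "DEP-1", "severity": "MEDIUM"}]

def container_scan():
    return [{"id": "IMG-1", "severity": "HIGH"}]

def run_parallel_scans() -> list:
    """Run independent scans concurrently, then aggregate findings."""
    scanners = [sast_scan, dependency_scan, container_scan]
    with ThreadPoolExecutor(max_workers=len(scanners)) as pool:
        results = pool.map(lambda s: s(), scanners)
    # Consolidation: flatten and sort by severity for triage.
    order = {"CRITICAL": 0, "HIGH": 1, "MEDIUM": 2, "LOW": 3}
    findings = [f for result in results for f in result]
    return sorted(findings, key=lambda f: order[f["severity"]])

findings = run_parallel_scans()
print([f["id"] for f in findings])  # ['SAST-1', 'IMG-1', 'DEP-1']
```

Threads suit this workload because the real scans are subprocess- and I/O-bound, not CPU-bound.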
---

## Pattern 6: Research and Documentation Workflow

Use this pattern for security research and documentation tasks.

### Threat Modeling Workflow

Research Progress:
[ ] 1. Identify system components and boundaries
[ ] 2. Map data flows between components
[ ] 3. Identify trust boundaries
[ ] 4. Enumerate assets (data, services, credentials)
[ ] 5. Apply STRIDE framework to each component:
    - Spoofing threats
    - Tampering threats
    - Repudiation threats
    - Information disclosure threats
    - Denial of service threats
    - Elevation of privilege threats
[ ] 6. Map threats to MITRE ATT&CK techniques
[ ] 7. Identify existing mitigations
[ ] 8. Document residual risks
[ ] 9. Recommend additional security controls
[ ] 10. Generate threat model document

Work through each step systematically. Check off completed items.

---

## Pattern 7: Compliance Validation Workflow

Use this pattern for compliance checks against security standards.

### Security Compliance Audit Workflow

**SOC 2 Controls Review**:
[ ] 1. Review access control policies (CC6.1, CC6.2, CC6.3)
[ ] 2. Verify logical access controls implementation (CC6.1)
[ ] 3. Review authentication mechanisms (CC6.1)
[ ] 4. Verify encryption implementation (CC6.1, CC6.7)
[ ] 5. Review audit logging configuration (CC7.2)
[ ] 6. Verify security monitoring (CC7.2, CC7.3)
[ ] 7. Review incident response procedures (CC7.3, CC7.4)
[ ] 8. Verify backup and recovery processes (A1.2, A1.3)

**Evidence Collection**:
[ ] 9. Collect policy documents
[ ] 10. Collect configuration screenshots
[ ] 11. Collect audit logs
[ ] 12. Document control gaps
[ ] 13. Generate compliance report

---

## Pattern 8: Incident Response Workflow

Use this pattern for security incident handling.

### Security Incident Response Workflow

**Detection and Analysis**:
[ ] 1. Confirm security incident (rule out false positive)
[ ] 2. Determine incident severity (SEV1/2/3/4)
[ ] 3. Identify affected systems and data
[ ] 4. Preserve evidence (logs, memory dumps, network captures)

**Containment**:
[ ] 5. Isolate affected systems (network segmentation)
[ ] 6. Disable compromised accounts
[ ] 7. Block malicious indicators (IPs, domains, hashes)
[ ] 8. Implement temporary compensating controls

**Eradication**:
[ ] 9. Identify root cause
[ ] 10. Remove malicious artifacts (malware, backdoors, webshells)
[ ] 11. Patch exploited vulnerabilities
[ ] 12. Reset compromised credentials

**Recovery**:
[ ] 13. Restore systems from clean backups (if needed)
[ ] 14. Re-enable systems with monitoring
[ ] 15. Verify system integrity
[ ] 16. Resume normal operations

**Post-Incident**:
[ ] 17. Document incident timeline
[ ] 18. Identify lessons learned
[ ] 19. Update security controls to prevent recurrence
[ ] 20. Update incident response procedures
[ ] 21. Communicate with stakeholders

---

## Usage Guidelines

### When to Use Workflow Checklists

✅ **Use checklists for**:
- Complex multi-step operations
- Operations requiring specific order
- Security assessments and audits
- Incident response procedures
- Compliance validation tasks

❌ **Don't use checklists for**:
- Simple single-step operations
- Highly dynamic exploratory work
- Operations that vary significantly each time

### Adapting This Template

1. **Copy relevant pattern** to your skill's SKILL.md or create new reference file
2. **Customize steps** to match your specific security tool or process
3. **Add framework references** (OWASP, CWE, NIST) where applicable
4. **Include tool-specific commands** for automation
5. **Add decision points** where manual judgment is required

### Checklist Best Practices

- **Be specific**: "Run semgrep --config=auto ." not "Scan the code"
- **Include success criteria**: "Validation passes with 0 errors"
- **Reference standards**: Link to OWASP, CWE, NIST where relevant
- **Show progress**: Checkbox format helps track completion
- **Provide escape hatches**: "If validation fails, see troubleshooting.md"

### Integration with Feedback Loops

Combine checklists with validation scripts for maximum effectiveness:

1. Create checklist for the workflow
2. Provide validation script that checks quality
3. Include "run validator" step in checklist
4. Loop: Complete step → Validate → Fix issues → Re-validate

This pattern dramatically improves output quality through systematic validation.
225
skills/devsecops/container-grype/references/cisa_kev.md
Normal file
@@ -0,0 +1,225 @@
# CISA Known Exploited Vulnerabilities (KEV) Catalog

CISA's Known Exploited Vulnerabilities (KEV) catalog identifies CVEs with confirmed active exploitation in the wild.

## Table of Contents
- [What is KEV](#what-is-kev)
- [Why KEV Matters](#why-kev-matters)
- [KEV in Grype](#kev-in-grype)
- [Remediation Urgency](#remediation-urgency)
- [Federal Requirements](#federal-requirements)

## What is KEV

The Cybersecurity and Infrastructure Security Agency (CISA) maintains a catalog of vulnerabilities that:
1. Have **confirmed active exploitation** in real-world attacks
2. Present **significant risk** to federal enterprise and critical infrastructure
3. Require **prioritized remediation**

**Key Points**:
- KEV listings indicate **active, ongoing exploitation**, not theoretical risk
- Being in the KEV catalog means attackers have weaponized the vulnerability
- KEV CVEs should be treated as **highest priority** regardless of CVSS score

## Why KEV Matters

### Active Threat Indicator

**KEV presence means**:
- Exploit code is publicly available or in active use by threat actors
- Attackers are successfully exploiting this vulnerability
- Your organization is likely a target if running vulnerable software

### Prioritization Signal

**CVSS vs KEV**:
- CVSS: Theoretical severity based on technical characteristics
- KEV: Proven real-world exploitation

**Example**:
- CVE with CVSS 6.5 (Medium) but KEV listing → **Prioritize over CVSS 9.0 (Critical) without KEV**
- Active exploitation trumps theoretical severity

### Compliance Requirement

**BOD 22-01**: Federal agencies must remediate KEV vulnerabilities within specified timeframes
- Many commercial organizations adopt similar policies
- SOC 2, PCI-DSS, and other frameworks increasingly reference KEV

## KEV in Grype

### Detecting KEV in Scans

Grype includes KEV data in vulnerability assessments:

```bash
# Standard scan includes KEV indicators
grype <image> -o json > results.json

# Check for KEV matches
grep -i "kev" results.json
```

**Grype output indicators**:
- `dataSource` field may include KEV references
- Some vulnerabilities explicitly marked as CISA KEV

### Filtering KEV Vulnerabilities

Use the prioritization script to extract KEV matches:

```bash
./scripts/prioritize_cves.py results.json
```

Output shows `[KEV]` indicator for confirmed KEV vulnerabilities.

### Automated KEV Alerting

Integrate KEV detection into CI/CD:

```bash
# Fail build on any KEV vulnerability
grype <image> -o json | \
  jq '.matches[] | select(.vulnerability.dataSource | contains("KEV"))' | \
  jq -s 'if length > 0 then error("KEV vulnerabilities found") else empty end'
```

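The same gate can be implemented in Python when richer reporting is needed. This sketch mirrors the jq filter's substring check on `dataSource` and assumes the same Grype JSON shape; the sample report (including its `dataSource` URL) is illustrative:

```python
def kev_matches(grype_report: dict) -> list:
    """Return CVE IDs whose dataSource references the CISA KEV catalog."""
    hits = []
    for match in grype_report.get("matches", []):
        vuln = match.get("vulnerability", {})
        if "kev" in vuln.get("dataSource", "").lower():
            hits.append(vuln.get("id", "UNKNOWN"))
    return hits

# Toy report standing in for `grype <image> -o json` output.
report = {
    "matches": [
        {"vulnerability": {"id": "CVE-2021-44228",
                           "dataSource": "https://www.cisa.gov/kev"}},
        {"vulnerability": {"id": "CVE-2023-0001",
                           "dataSource": "https://nvd.nist.gov/vuln/detail/CVE-2023-0001"}},
    ]
}
hits = kev_matches(report)
print(hits)  # ['CVE-2021-44228']
if hits:
    print("KEV vulnerabilities found - fail the build")  # exit non-zero in CI
```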
## Remediation Urgency

### BOD 22-01 Timeframes

CISA Binding Operational Directive 22-01 requires:

| Vulnerability Type | Remediation Deadline |
|-------------------|---------------------|
| KEV listed before directive | 2 weeks from BOD publication |
| Newly added KEV | 2 weeks from KEV addition |
| Critical KEV (discretionary) | Immediate (24-48 hours) |

### Commercial Best Practices

**Recommended SLAs for KEV vulnerabilities**:

1. **Immediate Response (0-24 hours)**:
   - Assess exposure and affected systems
   - Implement temporary mitigations (disable feature, block network access)
   - Notify security leadership and stakeholders

2. **Emergency Patching (24-48 hours)**:
   - Deploy patches to production systems
   - Validate remediation with re-scan
   - Document patch deployment

3. **Validation and Monitoring (48-72 hours)**:
   - Verify all instances patched
   - Check logs for exploitation attempts
   - Update detection rules and threat intelligence

### Temporary Mitigations

If immediate patching is not possible:

**Network-Level Controls**:
- Block external access to vulnerable services
- Segment vulnerable systems from critical assets
- Deploy Web Application Firewall (WAF) rules

**Application-Level Controls**:
- Disable vulnerable features or endpoints
- Implement additional authentication requirements
- Enable enhanced logging and monitoring

**Operational Controls**:
- Increase security monitoring for affected systems
- Deploy compensating detective controls
- Schedule emergency maintenance window

## Federal Requirements

### Binding Operational Directive 22-01

**Scope**: All federal civilian executive branch (FCEB) agencies

**Requirements**:
1. Remediate KEV vulnerabilities within required timeframes
2. Report remediation status to CISA
3. Document exceptions and compensating controls

**Penalties**: Non-compliance may result in:
- Required reporting to agency leadership
- Escalation to the Office of Management and Budget (OMB)
- Potential security authorization impacts

### Extending to Commercial Organizations

Many commercial organizations adopt KEV-based policies:

**Rationale**:
- KEV represents the highest-priority threats
- The federal government invests in threat intelligence
- Following KEV reduces actual breach risk

**Implementation**:
- Monitor the KEV catalog for relevant CVEs
- Integrate KEV data into vulnerability management
- Define internal KEV remediation SLAs
- Report KEV status to leadership and audit teams

## Monitoring KEV Updates

### CISA KEV Catalog

Access the catalog:
- **Web**: https://www.cisa.gov/known-exploited-vulnerabilities-catalog
- **JSON**: https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json
- **CSV**: https://www.cisa.gov/sites/default/files/csv/known_exploited_vulnerabilities.csv

### Automated Monitoring

Track new KEV additions:

```bash
# Download current KEV catalog
curl -s https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json \
  -o kev-catalog.json

# Compare against previous download
diff kev-catalog-previous.json kev-catalog.json
```

**Subscribe to updates**:
- CISA cybersecurity alerts: https://www.cisa.gov/cybersecurity-alerts
- RSS feeds for KEV additions
- Security vendor threat intelligence feeds

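A raw `diff` of the catalog files is noisy because the JSON also carries metadata. A small sketch that compares only the CVE IDs between two downloaded catalogs (the `vulnerabilities` array and `cveID` field follow the published KEV JSON feed; the toy catalogs below are illustrative):

```python
def new_kev_cves(previous: dict, current: dict) -> set:
    """Return CVE IDs present in the current KEV catalog but not the previous one."""
    old_ids = {v["cveID"] for v in previous.get("vulnerabilities", [])}
    new_ids = {v["cveID"] for v in current.get("vulnerabilities", [])}
    return new_ids - old_ids

# Toy catalogs standing in for kev-catalog-previous.json / kev-catalog.json.
previous = {"vulnerabilities": [{"cveID": "CVE-2021-44228"}]}
current = {"vulnerabilities": [{"cveID": "CVE-2021-44228"},
                               {"cveID": "CVE-2024-0001"}]}
print(new_kev_cves(previous, current))  # {'CVE-2024-0001'}
```

Any ID the function returns can be fed straight into an alert or a targeted re-scan of affected images.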
## Response Workflow

### KEV Vulnerability Detected

Progress:
[ ] 1. **Identify** affected systems: Run Grype scan across all environments
[ ] 2. **Assess** exposure: Determine if vulnerable systems are internet-facing or critical
[ ] 3. **Contain** risk: Implement temporary mitigations (network blocks, feature disable)
[ ] 4. **Remediate**: Deploy patches or upgrades to all affected systems
[ ] 5. **Validate**: Re-scan with Grype to confirm the vulnerability is resolved
[ ] 6. **Monitor**: Review logs for exploitation attempts during the vulnerable window
[ ] 7. **Document**: Record timeline, actions taken, and lessons learned

Work through each step systematically. Check off completed items.

### Post-Remediation Analysis

After resolving a KEV vulnerability:

1. **Threat Hunting**: Search logs for indicators of compromise (IOCs)
2. **Root Cause**: Determine why vulnerable software was deployed
3. **Process Improvement**: Update procedures to prevent recurrence
4. **Reporting**: Notify stakeholders and compliance teams

## References

- [CISA KEV Catalog](https://www.cisa.gov/known-exploited-vulnerabilities-catalog)
- [BOD 22-01: Reducing the Significant Risk of Known Exploited Vulnerabilities](https://www.cisa.gov/news-events/directives/bod-22-01-reducing-significant-risk-known-exploited-vulnerabilities)
- [KEV Catalog JSON Feed](https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json)
- [CISA Cybersecurity Alerts](https://www.cisa.gov/cybersecurity-alerts)
210
skills/devsecops/container-grype/references/cvss_guide.md
Normal file
@@ -0,0 +1,210 @@
# CVSS Severity Rating Guide
|
||||
|
||||
Common Vulnerability Scoring System (CVSS) is a standardized framework for rating vulnerability severity.
|
||||
|
||||
## Table of Contents
|
||||
- [CVSS Score Ranges](#cvss-score-ranges)
|
||||
- [Severity Ratings](#severity-ratings)
|
||||
- [CVSS Metrics](#cvss-metrics)
|
||||
- [Interpreting Scores](#interpreting-scores)
|
||||
- [Remediation SLAs](#remediation-slas)
|
||||
|
||||
## CVSS Score Ranges

| CVSS Score | Severity Rating | Description |
|------------|-----------------|-------------|
| 0.0 | None | No vulnerability |
| 0.1 - 3.9 | Low | Minimal security impact |
| 4.0 - 6.9 | Medium | Moderate security impact |
| 7.0 - 8.9 | High | Significant security impact |
| 9.0 - 10.0 | Critical | Severe security impact |
## Severity Ratings

### Critical (9.0 - 10.0)

**Characteristics**:
- Trivial to exploit
- No user interaction required
- Remote code execution or complete system compromise
- Affects default configurations

**Examples**:
- Unauthenticated remote code execution
- Critical SQL injection allowing full database access
- Authentication bypass in critical services

**Action**: Remediate immediately (within 24-48 hours)

### High (7.0 - 8.9)

**Characteristics**:
- Easy to exploit with moderate skill
- May require user interaction or specific conditions
- Significant data exposure or privilege escalation
- Affects common configurations

**Examples**:
- Authenticated remote code execution
- Cross-site scripting (XSS) in privileged contexts
- Privilege escalation vulnerabilities

**Action**: Remediate within 7 days

### Medium (4.0 - 6.9)

**Characteristics**:
- Requires specific conditions or elevated privileges
- Limited impact or scope
- May require local access or user interaction

**Examples**:
- Information disclosure of non-sensitive data
- Denial of service with mitigating factors
- Cross-site request forgery (CSRF)

**Action**: Remediate within 30 days

### Low (0.1 - 3.9)

**Characteristics**:
- Difficult to exploit
- Minimal security impact
- Requires significant user interaction or unlikely conditions

**Examples**:
- Information leakage of minimal data
- Low-impact denial of service
- Security misconfigurations with limited exposure

**Action**: Remediate within 90 days or in the next maintenance cycle
## CVSS Metrics

CVSS v3.1 scores are calculated from three metric groups:

### Base Metrics (Primary Factors)

**Attack Vector (AV)**:
- Network (N): Remotely exploitable
- Adjacent (A): Requires local network access
- Local (L): Requires local system access
- Physical (P): Requires physical access

**Attack Complexity (AC)**:
- Low (L): No specialized conditions required
- High (H): Requires specific conditions or expert knowledge

**Privileges Required (PR)**:
- None (N): No authentication needed
- Low (L): Basic user privileges required
- High (H): Administrator privileges required

**User Interaction (UI)**:
- None (N): No user interaction required
- Required (R): Requires user action (e.g., clicking a link)

**Scope (S)**:
- Unchanged (U): Vulnerability affects only the vulnerable component
- Changed (C): Vulnerability affects resources beyond the vulnerable component

**Impact Metrics** (Confidentiality, Integrity, Availability):
- None (N): No impact
- Low (L): Limited impact
- High (H): Total or serious impact
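These base metrics are published as a compact vector string alongside the numeric score. For example, an unauthenticated, network-exploitable flaw with high impact on confidentiality, integrity, and availability is encoded as:

```text
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H   # Base score 9.8 (Critical)
```

Reading the vector left to right against the metric definitions above makes it easy to see why a score landed where it did.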
### Temporal Metrics (Optional)

Time-dependent factors:
- Exploit Code Maturity
- Remediation Level
- Report Confidence

### Environmental Metrics (Optional)

Organization-specific factors:
- Modified Base Metrics
- Confidentiality/Integrity/Availability Requirements
## Interpreting Scores

### Context Matters

CVSS scores should be interpreted in context:

**High-Value Systems**: Escalate severity for:
- Production systems
- Customer-facing applications
- Systems handling PII or financial data
- Critical infrastructure

**Low-Value Systems**: May de-prioritize for:
- Development/test environments
- Internal tools with limited access
- Deprecated systems scheduled for decommissioning

### Complementary Metrics

Consider alongside CVSS:

**EPSS (Exploit Prediction Scoring System)**:
- Probability (0-100%) that a vulnerability will be exploited in the wild
- High EPSS + high CVSS = urgent remediation

**CISA KEV (Known Exploited Vulnerabilities)**:
- Active exploitation confirmed in the wild
- KEV presence overrides CVSS: remediate immediately

**Reachability**:
- Is the vulnerable code path actually executed?
- Is the vulnerable dependency directly or transitively included?
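These prioritization signals can be pulled straight out of a Grype JSON report. The sketch below filters a report down to Critical and High findings; the field paths assume Grype's current JSON schema (`.matches[].vulnerability.*`), so verify them against your Grype version:

```shell
# Create a miniature report in the shape `grype <image> -o json` produces,
# then filter it to Critical/High findings for triage.
cat > grype.json <<'EOF'
{"matches":[
 {"vulnerability":{"id":"CVE-2024-0001","severity":"Critical"},
  "artifact":{"name":"openssl","version":"3.1.0"}},
 {"vulnerability":{"id":"CVE-2024-0002","severity":"Low"},
  "artifact":{"name":"zlib","version":"1.2.13"}}
]}
EOF
jq -r '.matches[]
       | select(.vulnerability.severity == "Critical" or .vulnerability.severity == "High")
       | [.vulnerability.id, .artifact.name, .vulnerability.severity]
       | @tsv' grype.json
```

On a real report, replace the here-document with `grype myapp:latest -o json > grype.json`.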
## Remediation SLAs

### Industry Standard SLA Examples

| Severity | Timeframe | Priority |
|----------|-----------|----------|
| Critical | 24-48 hours | P0 - Drop everything |
| High | 7 days | P1 - Next sprint |
| Medium | 30 days | P2 - Planned work |
| Low | 90 days | P3 - Maintenance cycle |
### Adjusted for Exploitability

**If CISA KEV or EPSS > 50%**:
- Reduce the timeframe by 50%
- Example: High (7 days) → 3-4 days

**If a proof-of-concept exists**:
- Treat High as Critical
- Treat Medium as High

**If actively exploited**:
- All severities become Critical (immediate remediation)
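These SLA rules are simple enough to encode directly in triage tooling. A minimal sketch; the day counts and the halving rule mirror the table and adjustments above, so adapt them to your own policy:

```python
from datetime import date, timedelta

# Remediation SLA in days, mirroring the SLA table above
SLA_DAYS = {"Critical": 2, "High": 7, "Medium": 30, "Low": 90}

def due_date(severity: str, found: date, kev: bool = False, epss: float = 0.0) -> date:
    """Return the remediation due date; a KEV listing or EPSS > 50% halves the window."""
    days = SLA_DAYS[severity]
    if kev or epss > 0.5:
        days = max(1, days // 2)
    return found + timedelta(days=days)

print(due_date("High", date(2024, 1, 1)))            # 2024-01-08 (7-day window)
print(due_date("High", date(2024, 1, 1), kev=True))  # 2024-01-04 (halved to 3 days)
```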
## False Positives and Suppressions

Not all reported vulnerabilities require immediate action:

### Valid Suppression Reasons

- **Not Reachable**: Vulnerable code path is not executed
- **Mitigated**: Compensating controls in place (WAF, network segmentation)
- **Not Affected**: Version mismatch or platform-specific vulnerability
- **Risk Accepted**: Business decision with documented justification

### Documentation Requirements

For all suppressions:
1. CVE ID and affected package
2. Detailed justification
3. Approver and approval date
4. Review/expiration date (quarterly recommended)
5. Compensating controls, if applicable

## References

- [CVSS v3.1 Specification](https://www.first.org/cvss/specification-document)
- [CVSS Calculator](https://www.first.org/cvss/calculator/3.1)
- [NVD CVSS Severity Distribution](https://nvd.nist.gov/vuln/severity-distribution)
@@ -0,0 +1,510 @@
# Vulnerability Remediation Patterns

Common patterns for remediating dependency vulnerabilities detected by Grype.

## Table of Contents
- [General Remediation Strategies](#general-remediation-strategies)
- [Package Update Patterns](#package-update-patterns)
- [Base Image Updates](#base-image-updates)
- [Dependency Pinning](#dependency-pinning)
- [Compensating Controls](#compensating-controls)
- [Language-Specific Patterns](#language-specific-patterns)

## General Remediation Strategies
### Strategy 1: Direct Dependency Update

**When to use**: Vulnerability in a directly declared dependency

**Pattern**:
1. Identify the fixed version from Grype output
2. Update the dependency version in the manifest file
3. Test application compatibility
4. Re-scan to verify the fix
5. Deploy the updated application

**Example**:
```bash
# Grype reports: lodash@4.17.15 has CVE-2020-8203, fixed in 4.17.19
# Update package.json
npm install lodash@4.17.19
npm test
grype dir:. --only-fixed
```
### Strategy 2: Transitive Dependency Update

**When to use**: Vulnerability in an indirect dependency

**Pattern**:
1. Identify which direct dependency includes the vulnerable package
2. Check whether the direct dependency has an update that resolves the issue
3. Update the direct dependency or use a dependency override mechanism
4. Re-scan to verify the fix

**Example (npm)**:
```json
// package.json - Override transitive dependency
{
  "overrides": {
    "lodash": "^4.17.21"
  }
}
```

**Example (pip)**:
```txt
# constraints.txt - Force a safe floor for the transitive package
vulnerable-package>=2.0.0
```
### Strategy 3: Base Image Update

**When to use**: Vulnerability in OS packages from the container base image

**Pattern**:
1. Identify the vulnerable OS package and fixed version
2. Update to a newer base image tag, or rebuild with package updates
3. Re-scan the updated image
4. Test the application on the new base image

**Example**:
```dockerfile
# Before: Alpine 3.14 with vulnerable openssl
FROM alpine:3.14

# After: Alpine 3.19 with fixed openssl
FROM alpine:3.19

# Or: Explicit package update
FROM alpine:3.14
RUN apk upgrade --no-cache openssl
```
### Strategy 4: Patch or Backport

**When to use**: No fixed version available, or the update breaks compatibility

**Pattern**:
1. Research whether a security patch exists separately from a full version update
2. Apply the patch using the package manager's patching mechanism
3. Consider backporting the fix if feasible
4. Document the patch and establish a review schedule

**Example (npm postinstall)**:
```json
{
  "scripts": {
    "postinstall": "patch-package"
  }
}
```
### Strategy 5: Compensating Controls

**When to use**: Fix not available and the risk must be accepted

**Pattern**:
1. Document the vulnerability and risk acceptance
2. Implement network, application, or operational controls
3. Enhance monitoring and detection
4. Schedule regular review (quarterly)
5. Track for future remediation when a fix becomes available
## Package Update Patterns

### Pattern: Semantic Versioning Updates

**Minor/Patch Updates** (Generally Safe):
```bash
# Python: Update to latest patch version
pip install --upgrade 'package>=1.2.0,<1.3.0'

# Node.js: Update to latest minor version
npm update package

# Go: Update to latest patch
go get -u=patch github.com/org/package
```

**Major Updates** (Breaking Changes):
```bash
# Review changelog before updating
npm show package versions
pip index versions package

# Update and test thoroughly
npm install package@3.0.0
npm test
```
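The patch/minor/major distinction above can also be automated when deciding how risky an available fix is. A naive sketch using dotted-version parsing (no pre-release or build-metadata handling):

```python
def bump_type(current: str, fixed: str) -> str:
    """Classify the upgrade from `current` to `fixed` (naive semver parsing)."""
    cur = [int(x) for x in current.split(".")]
    fix = [int(x) for x in fixed.split(".")]
    if fix[0] != cur[0]:
        return "major"   # breaking changes likely; review the changelog
    if fix[1] != cur[1]:
        return "minor"   # new features, usually safe
    return "patch"       # bug/security fixes only

print(bump_type("4.17.15", "4.17.19"))  # patch
print(bump_type("4.17.15", "5.0.0"))    # major
```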
### Pattern: Lock File Management

**Update a specific package**:
```bash
# npm
npm install package@latest
npm install  # Update lock file

# pip
pip install --upgrade package
pip freeze > requirements.txt

# Go
go get -u github.com/org/package
go mod tidy
```

**Update all dependencies**:
```bash
# npm (interactive)
npm-check-updates --interactive

# pip (freeze format avoids the human-readable table header)
pip list --outdated --format=freeze | cut -d'=' -f1 | xargs -n1 pip install -U

# Go
go get -u ./...
go mod tidy
```
## Base Image Updates

### Pattern: Minimal Base Images

**Reduce attack surface with minimal images**:

```dockerfile
# ❌ Large attack surface
FROM ubuntu:22.04

# ✅ Minimal attack surface
FROM alpine:3.19
# or
FROM gcr.io/distroless/base-debian12

# ✅ Minimal for specific language
FROM python:3.11-slim
FROM node:20-alpine
```

**Benefits**:
- Fewer packages = fewer vulnerabilities
- Smaller image size
- Faster scans
### Pattern: Multi-Stage Builds

**Separate build dependencies from runtime**:

```dockerfile
# Build stage with full toolchain
FROM node:20 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Production stage with minimal image
FROM node:20-alpine AS production
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
USER node
CMD ["node", "dist/index.js"]
```

**Benefits**:
- Build tools not present in the final image
- Reduced vulnerability exposure
- Smaller production image
### Pattern: Regular Base Image Updates

**Automate base image updates**:

```yaml
# Dependabot config for Dockerfile
version: 2
updates:
  - package-ecosystem: "docker"
    directory: "/"
    schedule:
      interval: "weekly"
```

**Manual update process**:
```bash
# Check for newer base image versions
docker pull alpine:3.19
docker images alpine

# Update Dockerfile
sed -i 's/FROM alpine:3.18/FROM alpine:3.19/' Dockerfile

# Rebuild and scan
docker build -t myapp:latest .
grype myapp:latest
```
## Dependency Pinning

### Pattern: Pin to Secure Versions

**Lock to known-good versions**:

```dockerfile
# ✅ Pin specific versions
FROM alpine:3.19.0@sha256:abc123...

# Install specific package versions
RUN apk add --no-cache \
    ca-certificates=20240226-r0 \
    openssl=3.1.4-r0
```

```json
// package.json - Exact versions
{
  "dependencies": {
    "express": "4.18.2",
    "lodash": "4.17.21"
  }
}
```

**Benefits**:
- Reproducible builds
- Controlled updates
- Prevents automatic vulnerability introduction

**Drawbacks**:
- Manual update effort
- May miss security patches
- Requires active maintenance
### Pattern: Range-Based Pinning

**Allow patch updates, lock major/minor**:

```json
// package.json - Allow patch updates
{
  "dependencies": {
    "express": "~4.18.2",  // Allow 4.18.x
    "lodash": "^4.17.21"   // Allow 4.x.x
  }
}
```

```txt
# requirements.txt - Compatible releases (Python equivalents)
requests>=2.31.0,<3.0.0
urllib3>=2.0.7,<3.0.0
```
## Compensating Controls

### Pattern: Network Segmentation

**Isolate vulnerable systems**:

```yaml
# Docker Compose network isolation
services:
  vulnerable-service:
    image: myapp:vulnerable
    networks:
      - internal
    # No external port exposure

  gateway:
    image: nginx:alpine
    ports:
      - "80:80"
    networks:
      - internal
      - external

networks:
  internal:
    internal: true
  external:
```

**Benefits**:
- Limits attack surface
- Contains potential breaches
- Buys time for proper remediation
### Pattern: Web Application Firewall (WAF)

**Block exploit attempts at the perimeter**:

```nginx
# ModSecurity/OWASP Core Rule Set
location / {
    modsecurity on;
    modsecurity_rules_file /etc/nginx/modsec/main.conf;
    proxy_pass http://vulnerable-backend;
}
```

**Virtual Patching**:
- Create WAF rules for specific CVEs
- Block known exploit patterns
- Monitor for exploitation attempts
### Pattern: Runtime Application Self-Protection (RASP)

**Detect and prevent exploitation at runtime**:

```python
import logging
import re

class SecurityException(Exception):
    pass

# Naive illustrative pattern - real RASP/WAF rules are far more thorough
SQLI_PATTERN = re.compile(r"('|--|;|\bUNION\b|\bOR\s+1=1\b)", re.IGNORECASE)

def process_user_input(data: str) -> str:
    # Validate against known exploit patterns before further processing
    if SQLI_PATTERN.search(data):
        logging.warning("SQL injection attempt blocked")
        raise SecurityException("Invalid input")
    return data.strip()
```
## Language-Specific Patterns

### Python

**Update vulnerable package**:
```bash
# Check for vulnerabilities
grype dir:/path/to/project -o json

# Update package
pip install --upgrade vulnerable-package

# Freeze updated dependencies
pip freeze > requirements.txt

# Verify fix
grype dir:/path/to/project
```

**Use constraints files**:
```bash
# constraints.txt
vulnerable-package>=1.2.3  # CVE-2024-XXXX fixed

# Install with constraints
pip install -r requirements.txt -c constraints.txt
```
### Node.js

**Update vulnerable package**:
```bash
# Check for vulnerabilities
npm audit
grype dir:. -o json

# Fix automatically (if possible)
npm audit fix

# Manual update
npm install package@version

# Verify fix
npm audit
grype dir:.
```

**Override transitive dependencies**:
```json
{
  "overrides": {
    "vulnerable-package": "^2.0.0"
  }
}
```
### Go

**Update vulnerable module**:
```bash
# Check for vulnerabilities (Grype picks up go.mod in a directory scan)
grype dir:.

# Update specific module
go get -u github.com/org/vulnerable-module

# Update all modules
go get -u ./...

# Verify and tidy
go mod tidy
grype dir:.
```
### Java/Maven

**Update vulnerable dependency**:
```xml
<!-- pom.xml - Update version -->
<dependency>
  <groupId>org.example</groupId>
  <artifactId>vulnerable-lib</artifactId>
  <version>2.0.0</version> <!-- Updated from 1.0.0 -->
</dependency>
```

**Force dependency version**:
```xml
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.example</groupId>
      <artifactId>vulnerable-lib</artifactId>
      <version>2.0.0</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```
### Rust

**Update vulnerable crate**:
```bash
# Check for vulnerabilities
cargo audit
grype dir:. -o json

# Update specific crate
cargo update -p vulnerable-crate

# Update all crates
cargo update

# Verify fix
cargo audit
grype dir:.
```
## Verification Workflow

After applying any remediation:

Progress:
[ ] 1. **Re-scan**: Run a Grype scan to verify the vulnerability is resolved
[ ] 2. **Test**: Execute the test suite to ensure no functionality is broken
[ ] 3. **Document**: Record the CVE, fix applied, and verification results
[ ] 4. **Deploy**: Roll out the fix to affected environments
[ ] 5. **Monitor**: Watch for related security issues or regressions

Work through each step systematically. Check off completed items.
## References

- [npm Security Best Practices](https://docs.npmjs.com/security-best-practices)
- [Python Packaging Security](https://packaging.python.org/en/latest/guides/security/)
- [Go Modules Security](https://go.dev/blog/vuln)
- [OWASP Dependency Check](https://owasp.org/www-project-dependency-check/)
598
skills/devsecops/container-hadolint/SKILL.md
Normal file
@@ -0,0 +1,598 @@
---
name: container-hadolint
description: >
  Dockerfile security linting and best practice validation using Hadolint with 100+ built-in
  rules aligned to CIS Docker Benchmark. Use when: (1) Analyzing Dockerfiles for security
  misconfigurations and anti-patterns, (2) Enforcing container image security best practices
  in CI/CD pipelines, (3) Detecting hardcoded secrets and credentials in container builds,
  (4) Validating compliance with CIS Docker Benchmark requirements, (5) Integrating shift-left
  container security into developer workflows, (6) Providing remediation guidance for insecure
  Dockerfile instructions.
version: 0.1.0
maintainer: SirAppSec
category: devsecops
tags: [docker, hadolint, dockerfile, container-security, cis-benchmark, linting, ci-cd]
frameworks: [CIS, OWASP]
dependencies:
  tools: [hadolint, docker]
references:
  - https://github.com/hadolint/hadolint
  - https://www.cisecurity.org/benchmark/docker
  - https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
---
# Dockerfile Security Linting with Hadolint

## Overview

Hadolint is a Dockerfile linter that validates container build files against security best practices and the CIS Docker Benchmark. It analyzes Dockerfile instructions to identify misconfigurations, anti-patterns, and security vulnerabilities before images are built and deployed.

Hadolint integrates ShellCheck to validate RUN instructions, ensuring shell commands follow security best practices. With 100+ built-in rules mapped to CIS Docker Benchmark controls, Hadolint provides comprehensive security validation for container images.
## Quick Start

### Install Hadolint

```bash
# macOS via Homebrew
brew install hadolint

# Linux via binary
wget -O /usr/local/bin/hadolint https://github.com/hadolint/hadolint/releases/latest/download/hadolint-Linux-x86_64
chmod +x /usr/local/bin/hadolint

# Via Docker
docker pull hadolint/hadolint
```
### Scan Dockerfile

```bash
# Scan Dockerfile in current directory
hadolint Dockerfile

# Scan with specific Dockerfile path
hadolint path/to/Dockerfile

# Using Docker
docker run --rm -i hadolint/hadolint < Dockerfile
```
### Generate Report

```bash
# JSON output for automation
hadolint -f json Dockerfile > hadolint-report.json

# GitLab Code Quality format
hadolint -f gitlab_codeclimate Dockerfile > hadolint-codeclimate.json

# Checkstyle format for CI integration
hadolint -f checkstyle Dockerfile > hadolint-checkstyle.xml
```
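The JSON report is a flat array of findings, which makes severity summaries easy to script. A sketch with sample data inline (the object shape `{code, level, line, message}` reflects `hadolint -f json` output; verify against your hadolint version):

```shell
# Summarize a hadolint JSON report by severity level
cat > hadolint-report.json <<'EOF'
[{"code":"DL3008","level":"warning","line":3},
 {"code":"DL3025","level":"error","line":7},
 {"code":"DL3059","level":"info","line":12}]
EOF
jq -r 'group_by(.level)[] | "\(.[0].level): \(length)"' hadolint-report.json
```

On real output, pipe `hadolint -f json Dockerfile` directly into the `jq` filter.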
## Core Workflows

### 1. Local Development Scanning

Validate Dockerfiles during development:

```bash
# Basic scan with colored output
hadolint Dockerfile

# Scan with specific severity threshold
hadolint --failure-threshold error Dockerfile

# Show only warnings and errors
hadolint --no-color --format tty Dockerfile | grep -E "^(warning|error)"

# Fail on info-level findings and above
hadolint --failure-threshold info Dockerfile
```

**Output Format:**
```
Dockerfile:3 DL3008 warning: Pin versions in apt get install
Dockerfile:7 DL3025 error: Use JSON notation for CMD and ENTRYPOINT
Dockerfile:12 DL3059 info: Multiple RUN instructions detected
```

**When to use**: Developer workstation, pre-commit validation, iterative Dockerfile development.
### 2. CI/CD Pipeline Integration

Automate Dockerfile validation in build pipelines:

#### GitHub Actions

```yaml
name: Hadolint
on: [push, pull_request]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Hadolint Dockerfile
        uses: hadolint/hadolint-action@v3.1.0
        with:
          dockerfile: Dockerfile
          failure-threshold: warning
          format: sarif
          output-file: hadolint.sarif

      - name: Upload SARIF to GitHub Security
        if: always()
        uses: github/codeql-action/upload-sarif@v2
        with:
          sarif_file: hadolint.sarif
```

#### GitLab CI

```yaml
hadolint:
  image: hadolint/hadolint:latest-debian
  stage: lint
  script:
    - hadolint -f gitlab_codeclimate Dockerfile > hadolint-report.json
  artifacts:
    reports:
      codequality: hadolint-report.json
    when: always
```

**When to use**: Automated security gates, pull request checks, deployment validation.
### 3. Configuration Customization

Create `.hadolint.yaml` to customize rules:

```yaml
# .hadolint.yaml
failure-threshold: warning
ignored:
  - DL3008  # Allow unpinned apt-get packages (assess risk first)
  - DL3059  # Allow multiple RUN instructions

trustedRegistries:
  - docker.io/library            # Official Docker Hub images
  - gcr.io/distroless            # Google distroless images
  - registry.access.redhat.com   # Red Hat registry

override:
  error:
    - DL3001  # Enforce: no irrelevant interactive commands (ssh, vim, shutdown) in containers
  warning:
    - DL3015  # Warn: use --no-install-recommends with apt-get
  info:
    - DL3059  # Info: multiple RUN instructions reduce layer caching

label-schema:
  maintainer: text
  org.opencontainers.image.vendor: text
  org.opencontainers.image.version: semver
```

Use bundled templates in `assets/`:
- `assets/hadolint-strict.yaml` - Strict security enforcement (CRITICAL/HIGH only)
- `assets/hadolint-balanced.yaml` - Balanced validation (recommended)
- `assets/hadolint-permissive.yaml` - Permissive for legacy Dockerfiles

**When to use**: Reducing false positives, organizational standards, legacy Dockerfile migration.
### 4. Security-Focused Validation

Enforce critical security rules:

```bash
# Only fail on security issues (error severity)
hadolint --failure-threshold error Dockerfile

# Enforce approved base image registries
hadolint --trusted-registry docker.io/library Dockerfile

# Scan all Dockerfiles in the project
find . -name "Dockerfile*" -exec hadolint {} \;

# Generate a security report with only errors
hadolint -f json Dockerfile | jq '.[] | select(.level == "error")'
```

**Critical Security Rules:**
- **DL3000**: Use absolute WORKDIR (prevents directory traversal)
- **DL3002**: Do not run as root USER (the last USER should not be root)
- **DL3008/DL3013/DL3018**: Pin package versions (apt-get, pip, apk)
- **DL3020**: Use COPY instead of ADD (prevents arbitrary URL fetching)
- **DL3025**: Use JSON notation for CMD/ENTRYPOINT (prevents shell injection)

See `references/security_rules.md` for the complete security rule catalog with CIS mappings.
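A minimal before/after for two of the rules above (the image tag, URL, and command are placeholders):

```dockerfile
# ❌ Violates DL3020 and DL3025
FROM alpine:3.19
ADD https://example.com/app.tar.gz /app/   # ADD fetches arbitrary URLs
CMD node server.js                         # shell form allows injection via the shell

# ✅ Fixed
FROM alpine:3.19
COPY app.tar.gz /app/                      # COPY only handles the local build context
CMD ["node", "server.js"]                  # JSON (exec) form
```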
### 5. Multi-Stage Build Validation

Scan complex multi-stage Dockerfiles:

```bash
# Validate all stages
hadolint Dockerfile

# Stage-specific validation (use custom script)
./scripts/hadolint_multistage.py Dockerfile
```

**Common Multi-Stage Issues:**
- Using the same user across build and runtime stages
- Copying unnecessary build tools into the production image
- Missing security hardening in the final stage
- Secrets present in the build stage propagating to runtime

**When to use**: Complex builds, security-hardened images, production containerization.
### 6. Pre-Commit Hook Integration

Prevent insecure Dockerfiles from being committed:

```bash
# Install pre-commit hook using bundled script
./scripts/install_precommit.sh

# Or manually create the hook
cat << 'EOF' > .git/hooks/pre-commit
#!/bin/bash
for dockerfile in $(git diff --cached --name-only | grep -E 'Dockerfile'); do
  hadolint --failure-threshold warning "$dockerfile" || exit 1
done
EOF

chmod +x .git/hooks/pre-commit
```

**When to use**: Developer workstations, team onboarding, mandatory security controls.
## Security Considerations

### Sensitive Data Handling

- **Secret Detection**: Hadolint flags hardcoded secrets in ENV, ARG, and LABEL instructions
- **Build Secrets**: Use Docker BuildKit secrets (`RUN --mount=type=secret`) instead of ARG for credentials
- **Multi-Stage Security**: Ensure secrets in build stages don't leak into the final image
- **Image Scanning**: Hadolint validates the Dockerfile; combine it with image scanning (Trivy, Grype) for runtime security

### Access Control

- **CI/CD Permissions**: Hadolint scans require read access to the Dockerfile and build context
- **Report Storage**: Treat scan reports as internal documentation; they may reveal security practices
- **Trusted Registries**: Configure `trustedRegistries` to enforce approved base image sources

### Audit Logging

Log the following for compliance and security auditing:
- Scan execution timestamps and Dockerfile paths
- Rule violations by severity (error, warning, info)
- Suppressed rules and justifications
- Base image registry validation results
- Remediation actions and timeline

### Compliance Requirements

- **CIS Docker Benchmark 1.6**: Hadolint rules map to CIS controls (see `references/cis_mapping.md`)
  - 4.1: Create a user for the container (DL3002)
  - 4.6: Add a HEALTHCHECK instruction to the container image
  - 4.7: Do not use update instructions alone in the Dockerfile (DL3009)
  - 4.9: Use COPY instead of ADD (DL3020)
- **OWASP Docker Security**: Validates against OWASP container security best practices
- **NIST SP 800-190**: Application container security guidance
## Bundled Resources

### Scripts (`scripts/`)

- `hadolint_scan.py` - Comprehensive scanning with multiple Dockerfiles and output formats
- `hadolint_multistage.py` - Multi-stage Dockerfile analysis with stage-specific validation
- `install_precommit.sh` - Automated pre-commit hook installation
- `ci_integration.sh` - CI/CD integration examples for multiple platforms

### References (`references/`)

- `security_rules.md` - Complete Hadolint security rules with CIS Benchmark mappings
- `cis_mapping.md` - Detailed CIS Docker Benchmark control mapping
- `remediation_guide.md` - Rule-by-rule remediation guidance with secure examples
- `shellcheck_integration.md` - ShellCheck rules for RUN instruction validation

### Assets (`assets/`)

- `hadolint-strict.yaml` - Strict security configuration
- `hadolint-balanced.yaml` - Production-ready configuration (recommended)
- `hadolint-permissive.yaml` - Legacy Dockerfile migration configuration
- `github-actions.yml` - Complete GitHub Actions workflow
- `gitlab-ci.yml` - Complete GitLab CI pipeline
- `precommit-config.yaml` - Pre-commit framework configuration
## Common Patterns

### Pattern 1: Initial Dockerfile Security Audit

First-time security assessment:

```bash
# 1. Find all Dockerfiles
find . -type f -name "Dockerfile*" > dockerfile-list.txt

# 2. Scan each Dockerfile with JSON output
mkdir -p security-reports
while read -r dockerfile; do
    output_file="security-reports/$(echo "$dockerfile" | tr '/' '_').json"
    hadolint -f json "$dockerfile" > "$output_file" 2>&1
done < dockerfile-list.txt

# 3. Generate a summary report
./scripts/hadolint_scan.py --input-dir . --output summary-report.html

# 4. Review critical findings
cat security-reports/*.json | jq '.[] | select(.level == "error")' > critical-findings.json
```

### Pattern 2: Progressive Remediation

Gradual security hardening:

```bash
# Phase 1: Baseline (don't fail builds yet)
hadolint --failure-threshold none -f json Dockerfile > baseline.json

# Phase 2: Fix critical issues (fail on errors only)
hadolint --failure-threshold error Dockerfile

# Phase 3: Address warnings
hadolint --failure-threshold warning Dockerfile

# Phase 4: Full compliance (including style/info rules)
hadolint Dockerfile
```

### Pattern 3: Security-Hardened Production Image

Build a security-first container image:

```dockerfile
# Example secure Dockerfile following Hadolint best practices

# Use a specific base image version from a trusted registry
FROM docker.io/library/node:18.19.0-alpine3.19

# Install packages with version pinning (--no-cache avoids leaving an apk cache)
RUN apk add --no-cache \
    dumb-init=1.2.5-r2

# Create a non-root user
RUN addgroup -g 1001 -S appuser && \
    adduser -S -u 1001 -G appuser appuser

# Set the working directory
WORKDIR /app

# Copy application files (use COPY, not ADD)
COPY --chown=appuser:appuser package*.json ./
COPY --chown=appuser:appuser . .

# Install dependencies
RUN npm ci --only=production && \
    npm cache clean --force

# Switch to the non-root user
USER appuser

# Expose port (documentation only, not a security control)
EXPOSE 3000

# Add a healthcheck
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD node healthcheck.js || exit 1

# Use JSON notation for ENTRYPOINT/CMD
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["node", "server.js"]
```

Validate with Hadolint:

```bash
hadolint Dockerfile  # Should pass with no errors
```

### Pattern 4: CI/CD with Automated Remediation Suggestions

Provide actionable feedback in pull requests:

```bash
# In the CI pipeline
hadolint -f json Dockerfile > hadolint.json

# Generate remediation suggestions
./scripts/hadolint_scan.py \
    --input hadolint.json \
    --format markdown \
    --output pr-comment.md

# Post as a PR comment (using the gh CLI)
gh pr comment --body-file pr-comment.md
```

## Integration Points

### CI/CD Integration

- **GitHub Actions**: Native hadolint-action with SARIF support for the Security tab
- **GitLab CI**: GitLab Code Quality format integration
- **Jenkins**: Checkstyle format for the Jenkins Warnings plugin
- **CircleCI**: Docker-based executor with artifact retention
- **Azure Pipelines**: Task integration with results publishing

### Security Tools Ecosystem

- **Image Scanning**: Combine with Trivy, Grype, or Clair for runtime vulnerability scanning
- **Secret Scanning**: Integrate with Gitleaks or TruffleHog for comprehensive secret detection
- **IaC Security**: Chain with Checkov for Kubernetes/Terraform validation
- **SBOM Generation**: Export findings alongside Syft/Trivy SBOM reports
- **Security Dashboards**: Export JSON to Grafana, Kibana, or Datadog for centralized monitoring
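Feeding a dashboard typically means flattening Hadolint's JSON findings into per-file metrics. A minimal sketch (the `to_metrics` helper and the row shape are illustrative assumptions, not a Hadolint or dashboard API):

```python
import json

def to_metrics(findings, dockerfile):
    """Flatten Hadolint JSON findings into a dashboard-friendly metrics row."""
    row = {"dockerfile": dockerfile, "error": 0, "warning": 0, "info": 0, "style": 0}
    for finding in findings:
        level = finding.get("level")
        if level in row:
            row[level] += 1
    row["total"] = sum(row[level] for level in ("error", "warning", "info", "style"))
    return row

findings = json.loads('[{"code": "DL3008", "level": "warning"}, {"code": "DL3002", "level": "error"}]')
print(to_metrics(findings, "Dockerfile"))
# → {'dockerfile': 'Dockerfile', 'error': 1, 'warning': 1, 'info': 0, 'style': 0, 'total': 2}
```

One such row per Dockerfile per scan is enough to chart severity trends over time.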

### SDLC Integration

- **Development**: Pre-commit hooks provide immediate feedback
- **Code Review**: PR checks prevent insecure Dockerfiles from merging
- **Testing**: Scan test environment Dockerfiles
- **Staging**: Validation gate before production promotion
- **Production**: Periodic audits of deployed container configurations

## Troubleshooting

### Issue: Too Many False Positives

**Symptoms**: Legitimate patterns are flagged (legacy Dockerfiles, specific use cases)

**Solution**: Create a `.hadolint.yaml`:

```yaml
ignored:
  - DL3059  # Multiple RUN instructions (valid for complex builds)
```

Or use inline ignores in the Dockerfile:

```dockerfile
# hadolint ignore=DL3008
RUN apt-get update && apt-get install -y curl
```

Consult `references/remediation_guide.md` for rule-specific guidance.

### Issue: Base Image Registry Not Trusted

**Symptoms**: Error about an untrusted registry even for legitimate images

**Solution**:
```yaml
# Add to .hadolint.yaml
trustedRegistries:
  - mycompany.azurecr.io
  - gcr.io/my-project
  - docker.io/library
```

### Issue: ShellCheck Warnings in RUN Instructions

**Symptoms**: SC2086 or SC2046 warnings from the ShellCheck integration

**Solution**:
```dockerfile
# Bad: Unquoted variables
RUN echo $MY_VAR > file.txt

# Good: Quoted variables
RUN echo "$MY_VAR" > file.txt

# Or disable the specific ShellCheck rule
# hadolint ignore=SC2086
RUN echo $MY_VAR > file.txt
```

See `references/shellcheck_integration.md` for complete ShellCheck guidance.

### Issue: Multi-Stage Build Not Recognized

**Symptoms**: Errors about a missing USER instruction despite a proper multi-stage setup

**Solution**:
```dockerfile
# Ensure each stage has an appropriate USER
FROM node:18 AS builder
# Build operations...

FROM node:18-alpine AS runtime
# Add USER in the final stage
USER node
CMD ["node", "app.js"]
```

### Issue: CI Pipeline Failing on Warnings

**Symptoms**: Build fails on low-severity issues

**Solution**:
```bash
# Adjust the failure threshold in CI
hadolint --failure-threshold error Dockerfile

# Or configure per environment (stricter for production)
if [ "$CI_ENVIRONMENT" = "production" ]; then
    hadolint --failure-threshold warning Dockerfile
else
    hadolint --failure-threshold error Dockerfile
fi
```

## Advanced Configuration

### Custom Rule Severity Override

```yaml
# .hadolint.yaml
override:
  error:
    - DL3001  # Package version pinning is critical
    - DL3020  # COPY vs ADD is security-critical
  warning:
    - DL3059  # Multiple RUN is a warning, not info
  info:
    - DL3008  # Downgrade apt-get pinning to info for dev images
```

### Inline Suppression

```dockerfile
# Suppress a single rule for one instruction
# hadolint ignore=DL3018
RUN apk add --no-cache curl

# Suppress multiple rules
# hadolint ignore=DL3003,DL3009
WORKDIR /tmp
RUN apt-get update && apt-get install -y wget

# Global suppression (use sparingly)
# hadolint global ignore=DL3059
```
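Suppressions tend to accumulate, so it helps to inventory them periodically. A minimal sketch that extracts inline `# hadolint ignore=` directives from a Dockerfile (the `list_suppressions` helper is illustrative, not part of Hadolint):

```python
import re

# Matches "# hadolint ignore=DL3003,DL3009" and "# hadolint global ignore=DL3059"
IGNORE_RE = re.compile(r"#\s*hadolint\s+(global\s+)?ignore=([A-Z]+\d+(?:,[A-Z]+\d+)*)")

def list_suppressions(dockerfile_text):
    """Return (line_number, scope, [rules]) for each hadolint ignore directive."""
    results = []
    for lineno, line in enumerate(dockerfile_text.splitlines(), start=1):
        match = IGNORE_RE.search(line)
        if match:
            scope = "global" if match.group(1) else "next-instruction"
            results.append((lineno, scope, match.group(2).split(",")))
    return results

example = """\
# hadolint ignore=DL3018
RUN apk add --no-cache curl
# hadolint global ignore=DL3059
"""
print(list_suppressions(example))
# → [(1, 'next-instruction', ['DL3018']), (3, 'global', ['DL3059'])]
```

Pairing this inventory with the audit log keeps suppressed rules and their justifications reviewable.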

### Trusted Registry Enforcement

```yaml
# .hadolint.yaml
trustedRegistries:
  - docker.io/library   # Official images only
  - gcr.io/distroless   # Google distroless
  - cgr.dev/chainguard  # Chainguard images

# This will error on:
#   FROM nginx:latest                     ❌ (short name, registry not explicit)
#   FROM docker.io/library/nginx:latest   ✅ (trusted)
```
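The same gate can be approximated outside Hadolint, e.g. in a policy script over collected `FROM` lines. A rough sketch (the logic is a simplification of hadolint's registry prefix matching, not its actual implementation):

```python
def is_trusted(image_ref, trusted_registries):
    """Approximate a trusted-registry check on a FROM image reference."""
    # Short names like "nginx:latest" carry no explicit registry prefix,
    # so they cannot match an entry such as "docker.io/library".
    return any(image_ref.startswith(prefix.rstrip("/") + "/")
               for prefix in trusted_registries)

trusted = ["docker.io/library", "gcr.io/distroless", "cgr.dev/chainguard"]
print(is_trusted("nginx:latest", trusted))                  # → False
print(is_trusted("docker.io/library/nginx:1.25", trusted))  # → True
```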

### Label Schema Validation

```yaml
# .hadolint.yaml
label-schema:
  maintainer: text
  org.opencontainers.image.created: rfc3339
  org.opencontainers.image.version: semver
  org.opencontainers.image.vendor: text
```

Ensures Dockerfile LABELs conform to the OCI image specification.
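A sketch of what the `semver` and `rfc3339` schema types amount to (illustrative only; Hadolint's own validators may accept slightly different grammars):

```python
import re
from datetime import datetime

SEMVER_RE = re.compile(r"^\d+\.\d+\.\d+(?:[-+][0-9A-Za-z.-]+)?$")

def check_label(value, schema_type):
    """Validate a LABEL value against a declared schema type."""
    if schema_type == "semver":
        return bool(SEMVER_RE.match(value))
    if schema_type == "rfc3339":
        try:
            # fromisoformat covers the RFC 3339 profile typically used in labels
            datetime.fromisoformat(value.replace("Z", "+00:00"))
            return True
        except ValueError:
            return False
    return schema_type == "text"  # any string is valid "text"

print(check_label("1.4.2", "semver"))                  # → True
print(check_label("latest", "semver"))                 # → False
print(check_label("2024-05-01T12:00:00Z", "rfc3339"))  # → True
```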

## References

- [Hadolint GitHub Repository](https://github.com/hadolint/hadolint)
- [CIS Docker Benchmark](https://www.cisecurity.org/benchmark/docker)
- [Docker Best Practices](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/)
- [ShellCheck Documentation](https://www.shellcheck.net/)
- [OCI Image Specification](https://github.com/opencontainers/image-spec)
9
skills/devsecops/container-hadolint/assets/.gitkeep
Normal file
@@ -0,0 +1,9 @@
# Assets Directory

Place files that will be used in the output Claude produces:
- Templates
- Configuration files
- Images/logos
- Boilerplate code

These files are NOT loaded into context but copied/modified in output.
@@ -0,0 +1,99 @@
# GitHub Actions workflow for Hadolint Dockerfile linting
# Place this file at: .github/workflows/hadolint.yml

name: Hadolint Dockerfile Security Scan

on:
  push:
    branches: [ main, develop ]
    paths:
      - '**/Dockerfile*'
      - '**/*.dockerfile'
      - '.github/workflows/hadolint.yml'
  pull_request:
    branches: [ main, develop ]
    paths:
      - '**/Dockerfile*'
      - '**/*.dockerfile'

permissions:
  contents: read
  security-events: write  # For SARIF upload
  pull-requests: write    # For PR comments

jobs:
  hadolint:
    name: Lint Dockerfiles
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Run Hadolint
        uses: hadolint/hadolint-action@v3.1.0
        with:
          dockerfile: "Dockerfile"  # Change to your Dockerfile path
          failure-threshold: warning
          format: sarif
          output-file: hadolint-results.sarif
          config: .hadolint.yaml  # Optional: use a custom config

      - name: Upload SARIF to GitHub Security
        if: always()
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: hadolint-results.sarif
          category: hadolint

      - name: Generate readable report
        if: failure()
        uses: hadolint/hadolint-action@v3.1.0
        with:
          dockerfile: "Dockerfile"
          format: tty

  hadolint-all:
    name: Lint All Dockerfiles
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Find all Dockerfiles
        id: find-dockerfiles
        run: |
          # Find all Dockerfile* in the repository
          DOCKERFILES=$(find . -type f \( -name "Dockerfile*" -o -name "*.dockerfile" \) | tr '\n' ' ')
          echo "dockerfiles=$DOCKERFILES" >> "$GITHUB_OUTPUT"
          echo "Found Dockerfiles: $DOCKERFILES"

      - name: Run Hadolint on all Dockerfiles
        run: |
          # Install hadolint
          wget -O /usr/local/bin/hadolint https://github.com/hadolint/hadolint/releases/latest/download/hadolint-Linux-x86_64
          chmod +x /usr/local/bin/hadolint

          # Scan each Dockerfile
          FAILED=0
          for dockerfile in ${{ steps.find-dockerfiles.outputs.dockerfiles }}; do
            echo "Scanning: $dockerfile"
            if ! hadolint --failure-threshold warning "$dockerfile"; then
              FAILED=1
            fi
          done

          exit $FAILED

      - name: Comment PR with results
        if: github.event_name == 'pull_request' && failure()
        uses: actions/github-script@v7
        with:
          script: |
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: '❌ Hadolint found security issues in Dockerfiles. Please review the workflow logs and fix the issues.'
            })
82
skills/devsecops/container-hadolint/assets/gitlab-ci.yml
Normal file
@@ -0,0 +1,82 @@
# GitLab CI configuration for Hadolint Dockerfile linting
# Add this to your .gitlab-ci.yml file

stages:
  - lint
  - build

# Hadolint Dockerfile security scanning
hadolint:
  stage: lint
  image: hadolint/hadolint:latest-debian
  script:
    # Find all Dockerfiles
    - |
      DOCKERFILES=$(find . -type f \( -name "Dockerfile*" -o -name "*.dockerfile" \))
      echo "Found Dockerfiles:"
      echo "$DOCKERFILES"
    # Scan each Dockerfile and generate reports
    - |
      FAILED=0
      for dockerfile in $DOCKERFILES; do
        echo "Scanning: $dockerfile"

        # Generate a GitLab Code Quality report
        hadolint -f gitlab_codeclimate "$dockerfile" >> hadolint-report.json || FAILED=1

        # Also print human-readable output
        hadolint "$dockerfile" || true
      done

      exit $FAILED
  artifacts:
    reports:
      codequality: hadolint-report.json
    paths:
      - hadolint-report.json
    when: always
    expire_in: 1 week
  # Only run on branches with Dockerfile changes
  rules:
    - changes:
        - "**/Dockerfile*"
        - "**/*.dockerfile"
        - ".gitlab-ci.yml"

# Alternative: Scan a specific Dockerfile
hadolint-main:
  stage: lint
  image: hadolint/hadolint:latest-debian
  script:
    - hadolint --failure-threshold warning Dockerfile
  only:
    changes:
      - Dockerfile

# Advanced: Multiple Dockerfiles with a parallel matrix
hadolint-matrix:
  stage: lint
  image: hadolint/hadolint:latest-debian
  parallel:
    matrix:
      - DOCKERFILE:
          - "Dockerfile"
          - "Dockerfile.dev"
          - "services/api/Dockerfile"
          - "services/web/Dockerfile"
  script:
    - |
      if [ -f "$DOCKERFILE" ]; then
        echo "Scanning: $DOCKERFILE"
        hadolint --failure-threshold warning "$DOCKERFILE"
      else
        echo "File not found: $DOCKERFILE"
        exit 1
      fi
  only:
    changes:
      - Dockerfile*
      - services/**/Dockerfile*
@@ -0,0 +1,40 @@
# Hadolint Balanced Configuration
# Recommended for most production use cases
# Balances security enforcement with practical development needs

failure-threshold: warning

# Allow common development patterns that don't compromise security
ignored:
  - DL3059  # Multiple RUN instructions (improves layer caching in development)

# Trusted registries - add your organization's registries
trustedRegistries:
  - docker.io/library   # Official Docker Hub images
  - gcr.io/distroless   # Google distroless images
  - cgr.dev/chainguard  # Chainguard images
  # Add your private registries below:
  # - mycompany.azurecr.io
  # - gcr.io/my-project

# Balanced severity levels
override:
  error:
    - DL3002  # Never switch to root (critical security)
    - DL3020  # Use COPY instead of ADD (prevent URL injection)
  warning:
    - DL3000  # Use absolute WORKDIR
    - DL3001  # Version pinning for package managers
    - DL3006  # Always tag images
    - DL3008  # Version pinning for apt
    - DL3013  # Version pinning for pip
    - DL3025  # Use JSON notation for CMD/ENTRYPOINT
  info:
    - DL3007  # Use image digests (nice to have)
    - DL3009  # Delete apt cache (optimization)

# Recommended OCI labels
label-schema:
  maintainer: text
  org.opencontainers.image.version: semver
  org.opencontainers.image.vendor: text
@@ -0,0 +1,35 @@
# Hadolint Permissive Configuration
# For legacy Dockerfiles during migration or development environments
# Use temporarily while remediating existing issues

failure-threshold: error  # Only fail on critical security issues

# Ignore common legacy patterns (review and remove as you fix them)
ignored:
  - DL3006  # Image versioning (fix gradually)
  - DL3008  # apt-get version pinning (fix gradually)
  - DL3009  # apt cache cleanup (optimization, not security)
  - DL3013  # pip version pinning (fix gradually)
  - DL3015  # apt --no-install-recommends (optimization)
  - DL3059  # Multiple RUN instructions (caching)

# Still enforce trusted registries
trustedRegistries:
  - docker.io
  - gcr.io
  - ghcr.io
  # Add your registries

# Minimal enforcement - only critical security issues
override:
  error:
    - DL3002  # Never switch to root (always enforce)
    - DL3020  # Use COPY instead of ADD (security critical)
  warning:
    - DL3001  # Package manager version pinning
    - DL3025  # JSON notation for CMD/ENTRYPOINT
  info:
    # Everything else is informational
    - DL3000
    - DL3003
    - DL3007
@@ -0,0 +1,48 @@
# Hadolint Strict Configuration
# Enforces maximum security with minimal exceptions
# Use for: production Dockerfiles, security-critical applications

failure-threshold: error

# Minimal ignores - only critical exceptions
ignored: []

# Only trust official and verified registries
trustedRegistries:
  - docker.io/library   # Official Docker Hub images
  - gcr.io/distroless   # Google distroless base images
  - cgr.dev/chainguard  # Chainguard minimal images

# Enforce strict severity levels
override:
  error:
    - DL3000  # Use absolute WORKDIR
    - DL3001  # Version pinning for yum
    - DL3002  # Never switch to root
    - DL3003  # Use WORKDIR instead of cd
    - DL3006  # Always tag images
    - DL3008  # Version pinning for apt
    - DL3013  # Version pinning for pip
    - DL3016  # Version pinning for npm
    - DL3018  # Version pinning for apk
    - DL3020  # Use COPY instead of ADD
    - DL3028  # Use build secrets for credentials
  warning:
    - DL3007  # Use specific digests (recommended)
    - DL3009  # Delete apt cache
    - DL3015  # Avoid additional packages
    - DL3025  # Use JSON notation

# Enforce OCI image labels
label-schema:
  maintainer: text
  org.opencontainers.image.created: rfc3339
  org.opencontainers.image.authors: text
  org.opencontainers.image.url: url
  org.opencontainers.image.documentation: url
  org.opencontainers.image.source: url
  org.opencontainers.image.version: semver
  org.opencontainers.image.revision: text
  org.opencontainers.image.vendor: text
  org.opencontainers.image.title: text
  org.opencontainers.image.description: text
40
skills/devsecops/container-hadolint/references/EXAMPLE.md
Normal file
@@ -0,0 +1,40 @@
# Reference Document Template

This file contains detailed reference material that Claude should load only when needed.

## Table of Contents

- [Section 1](#section-1)
- [Section 2](#section-2)
- [Security Standards](#security-standards)

## Section 1

Detailed information, schemas, or examples that are too large for SKILL.md.

## Section 2

Additional reference material.

## Security Standards

### OWASP Top 10

Reference relevant OWASP categories:
- A01: Broken Access Control
- A02: Cryptographic Failures
- etc.

### CWE Mappings

Map to relevant Common Weakness Enumeration categories:
- CWE-79: Cross-site Scripting
- CWE-89: SQL Injection
- etc.

### MITRE ATT&CK

Reference relevant tactics and techniques if applicable:
- TA0001: Initial Access
- T1190: Exploit Public-Facing Application
- etc.
439
skills/devsecops/container-hadolint/references/security_rules.md
Normal file
@@ -0,0 +1,439 @@
|
||||
# Hadolint Security Rules Reference
|
||||
|
||||
Complete reference of Hadolint security rules with CIS Docker Benchmark mappings and remediation guidance.
|
||||
|
||||
## Table of Contents
|
||||
|
||||
- [Critical Security Rules](#critical-security-rules)
|
||||
- [CIS Docker Benchmark Mappings](#cis-docker-benchmark-mappings)
|
||||
- [Rule Categories](#rule-categories)
|
||||
|
||||
## Critical Security Rules
|
||||
|
||||
### DL3000: Use absolute WORKDIR
|
||||
|
||||
**Severity**: Error
|
||||
**CIS Mapping**: 4.10 - Ensure secrets are not stored in Dockerfiles
|
||||
|
||||
**Issue**: Relative WORKDIR can lead to path confusion and security vulnerabilities.
|
||||
|
||||
**Bad**:
|
||||
```dockerfile
|
||||
WORKDIR app
|
||||
```
|
||||
|
||||
**Good**:
|
||||
```dockerfile
|
||||
WORKDIR /app
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### DL3001: Version pinning for package managers
|
||||
|
||||
**Severity**: Warning
|
||||
**CIS Mapping**: 4.3 - Do not install unnecessary packages
|
||||
|
||||
**Issue**: Unpinned versions lead to non-reproducible builds and potential security vulnerabilities from package updates.
|
||||
|
||||
**Bad**:
|
||||
```dockerfile
|
||||
RUN yum install httpd
|
||||
```
|
||||
|
||||
**Good**:
|
||||
```dockerfile
|
||||
RUN yum install -y httpd-2.4.51
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### DL3002: Never switch back to root
|
||||
|
||||
**Severity**: Error
|
||||
**CIS Mapping**: 4.1 - Create a user for the container
|
||||
|
||||
**Issue**: Switching back to root defeats container isolation and violates least privilege principle.
|
||||
|
||||
**Bad**:
|
||||
```dockerfile
|
||||
USER node
|
||||
RUN npm install
|
||||
USER root # ❌ Don't switch back to root
|
||||
```
|
||||
|
||||
**Good**:
|
||||
```dockerfile
|
||||
USER node
|
||||
RUN npm install
|
||||
# Stay as non-root user
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### DL3003: Use WORKDIR instead of cd
|
||||
|
||||
**Severity**: Warning
|
||||
**CIS Mapping**: Best practices
|
||||
|
||||
**Issue**: Using `cd` in RUN commands doesn't persist across instructions and can cause confusion.
|
||||
|
||||
**Bad**:
|
||||
```dockerfile
|
||||
RUN cd /app && npm install
|
||||
```
|
||||
|
||||
**Good**:
|
||||
```dockerfile
|
||||
WORKDIR /app
|
||||
RUN npm install
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### DL3006: Always tag image versions
|
||||
|
||||
**Severity**: Warning
|
||||
**CIS Mapping**: 4.3 - Ensure base images are verified
|
||||
|
||||
**Issue**: Using `:latest` or no tag creates non-reproducible builds and security risks.
|
||||
|
||||
**Bad**:
|
||||
```dockerfile
|
||||
FROM node
|
||||
FROM ubuntu:latest
|
||||
```
|
||||
|
||||
**Good**:
|
||||
```dockerfile
|
||||
FROM node:18.19.0-alpine3.19
|
||||
FROM ubuntu:22.04
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### DL3007: Pin Docker image versions to specific digest
|
||||
|
||||
**Severity**: Info
|
||||
**CIS Mapping**: 4.3 - Ensure base images are verified
|
||||
|
||||
**Issue**: Tags can be overwritten; digests are immutable.
|
||||
|
||||
**Good**:
|
||||
```dockerfile
|
||||
FROM node:18.19.0-alpine3.19
|
||||
```
|
||||
|
||||
**Better**:
|
||||
```dockerfile
|
||||
FROM node:18.19.0-alpine3.19@sha256:abc123...
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### DL3008: Pin apt-get package versions
|
||||
|
||||
**Severity**: Warning
|
||||
**CIS Mapping**: 4.3 - Do not install unnecessary packages
|
||||
|
||||
**Issue**: Unpinned apt packages lead to non-reproducible builds.
|
||||
|
||||
**Bad**:
|
||||
```dockerfile
|
||||
RUN apt-get update && apt-get install -y curl
|
||||
```
|
||||
|
||||
**Good**:
|
||||
```dockerfile
|
||||
RUN apt-get update && \
|
||||
apt-get install -y --no-install-recommends \
|
||||
curl=7.68.0-1ubuntu2.14 && \
|
||||
rm -rf /var/lib/apt/lists/*
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### DL3009: Delete apt cache after installation
|
||||
|
||||
**Severity**: Info
|
||||
**CIS Mapping**: 4.6 - Reduce image size
|
||||
|
||||
**Issue**: Unnecessary cache increases image size and attack surface.
|
||||
|
||||
**Bad**:
|
||||
```dockerfile
|
||||
RUN apt-get update && apt-get install -y curl
|
||||
```
|
||||
|
||||
**Good**:
|
||||
```dockerfile
|
||||
RUN apt-get update && \
|
||||
apt-get install -y curl && \
|
||||
rm -rf /var/lib/apt/lists/*
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### DL3013: Pin pip package versions
|
||||
|
||||
**Severity**: Warning
|
||||
**CIS Mapping**: 4.3 - Do not install unnecessary packages
|
||||
|
||||
**Issue**: Unpinned pip packages compromise build reproducibility.
|
||||
|
||||
**Bad**:
|
||||
```dockerfile
|
||||
RUN pip install flask
|
||||
```
|
||||
|
||||
**Good**:
|
||||
```dockerfile
|
||||
RUN pip install --no-cache-dir flask==2.3.2
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### DL3020: Use COPY instead of ADD
|
||||
|
||||
**Severity**: Error
|
||||
**CIS Mapping**: 4.9 - Use COPY instead of ADD
|
||||
|
||||
**Issue**: ADD has implicit behavior (auto-extraction, URL support) that can be exploited.
|
||||
|
||||
**Bad**:
|
||||
```dockerfile
|
||||
ADD app.tar.gz /app/
|
||||
ADD https://example.com/file.txt /tmp/
|
||||
```
|
||||
|
||||
**Good**:
|
||||
```dockerfile
|
||||
COPY app.tar.gz /app/
|
||||
# For URLs, use RUN wget/curl instead
|
||||
RUN curl -O https://example.com/file.txt
|
||||
```
|
||||
|
||||
**Exception**: ADD is acceptable only when you explicitly need tar auto-extraction.
|
||||
|
||||
---
|
||||
|
||||
### DL3025: Use JSON notation for CMD and ENTRYPOINT
|
||||
|
||||
**Severity**: Warning
|
||||
**CIS Mapping**: 4.6 - Add HEALTHCHECK instruction
|
||||
|
||||
**Issue**: Shell form enables shell injection attacks and doesn't properly handle signals.
|
||||
|
||||
**Bad**:
|
||||
```dockerfile
|
||||
CMD node server.js
|
||||
ENTRYPOINT /app/start.sh
|
||||
```
|
||||
|
||||
**Good**:
|
||||
```dockerfile
|
||||
CMD ["node", "server.js"]
|
||||
ENTRYPOINT ["/app/start.sh"]
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### DL3028: Use credentials via build secrets
|
||||
|
||||
**Severity**: Warning
|
||||
**CIS Mapping**: 4.10 - Do not store secrets in Dockerfiles
|
||||
|
||||
**Issue**: Credentials in ENV or ARG end up in image layers.
|
||||
|
||||
**Bad**:
|
||||
```dockerfile
|
||||
ARG API_KEY=secret123
|
||||
RUN curl -H "Authorization: $API_KEY" https://api.example.com
|
||||
```
|
||||
|
||||
**Good** (BuildKit secrets):
|
||||
```dockerfile
|
||||
# syntax=docker/dockerfile:1.4
|
||||
RUN --mount=type=secret,id=api_key \
|
||||
curl -H "Authorization: $(cat /run/secrets/api_key)" https://api.example.com
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### DL3059: Multiple RUN instructions
|
||||
|
||||
**Severity**: Info
|
||||
**CIS Mapping**: 4.6 - Optimize layers
|
||||
|
||||
**Issue**: Multiple RUN instructions create unnecessary layers, increasing image size.
|
||||
|
||||
**Less Optimal**:
|
||||
```dockerfile
|
||||
RUN apt-get update
|
||||
RUN apt-get install -y curl
|
||||
RUN curl -O https://example.com/file
|
||||
```
|
||||
|
||||
**Better**:
|
||||
```dockerfile
|
||||
RUN apt-get update && \
|
||||
apt-get install -y curl && \
|
||||
curl -O https://example.com/file && \
|
||||
rm -rf /var/lib/apt/lists/*
|
||||
```
|
||||
|
||||
**Note**: Balance between layer caching and image size. For development, separate RUN instructions may aid caching.
|
||||
|
||||
---
|
||||
|
||||
## CIS Docker Benchmark Mappings
|
||||
|
||||
### CIS 4.1: Create a user for the container
|
||||
|
||||
**Hadolint Rules**: DL3002
|
||||
|
||||
**Requirement**: Don't run containers as root.
|
||||
|
||||
**Implementation**:
|
||||
```dockerfile
|
||||
RUN groupadd -r appuser && useradd -r -g appuser appuser
|
||||
USER appuser
|
||||
# Don't switch back to root
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### CIS 4.3: Do not install unnecessary packages
|
||||
|
||||
**Hadolint Rules**: DL3001, DL3008, DL3013, DL3015
|
||||
|
||||
**Requirement**: Minimize attack surface by installing only required packages with pinned versions.
|
||||
|
||||
**Implementation**:
|
||||
```dockerfile
|
||||
# Use --no-install-recommends
|
||||
RUN apt-get update && \
|
||||
apt-get install -y --no-install-recommends \
|
||||
package1=version1 \
|
||||
package2=version2 && \
|
||||
rm -rf /var/lib/apt/lists/*
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### CIS 4.6: Add HEALTHCHECK instruction
|
||||
|
||||
**Hadolint Rules**: DL3025 (related to proper CMD/ENTRYPOINT)

**Requirement**: Include HEALTHCHECK to enable container health monitoring.

**Implementation**:

```dockerfile
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:8080/health || exit 1
```

---

### CIS 4.7: Do not use update instructions alone

**Hadolint Rules**: DL3009, DL3014, DL3015

**Requirement**: Run update and install in the same RUN instruction to prevent stale-cache issues.

**Implementation**:

```dockerfile
# Bad
RUN apt-get update
RUN apt-get install -y package

# Good
RUN apt-get update && \
    apt-get install -y package && \
    rm -rf /var/lib/apt/lists/*
```

---

### CIS 4.9: Use COPY instead of ADD

**Hadolint Rules**: DL3020

**Requirement**: Use COPY for file operations; reserve ADD for tar extraction.

**Implementation**: See DL3020 above.

---

### CIS 4.10: Do not store secrets in Dockerfiles

**Hadolint Rules**: DL3028, DL3000 (indirectly)

**Requirement**: Use build secrets or external secret management.

**Implementation**: See DL3028 above.

---

## Rule Categories

### Base Image Security
- DL3006: Always tag image versions
- DL3007: Use specific image digests
- DL3026: Use trusted registries only

### Package Management
- DL3001: Version pinning (yum/dnf/zypper)
- DL3008: Version pinning (apt-get)
- DL3013: Version pinning (pip)
- DL3016: Version pinning (npm)
- DL3018: Version pinning (apk)
- DL3028: Use build secrets for credentials

### Instruction Best Practices
- DL3000: Use absolute WORKDIR
- DL3003: Use WORKDIR instead of cd
- DL3020: Use COPY instead of ADD
- DL3025: Use JSON notation for CMD/ENTRYPOINT
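DL3025's exec-form requirement can be illustrated with a minimal sketch (the image command is a placeholder):

```dockerfile
# Bad: shell form wraps the command in "/bin/sh -c", so the app runs
# as a child of the shell and may never receive SIGTERM on shutdown
CMD node server.js

# Good: JSON (exec) form runs the process directly as PID 1
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
CMD ["node", "server.js"]
```

The exec form also avoids shell word-splitting surprises when arguments contain spaces.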

### User and Permissions
- DL3002: Never switch back to root
- DL4005: Use SHELL to change the default shell

### Image Optimization
- DL3009: Delete apt cache
- DL3014: Use -y for apt-get
- DL3015: Avoid additional packages
- DL3059: Minimize RUN instructions

### ShellCheck Integration
- DL4000-DL4006: Shell script best practices in RUN
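A common finding in this family is DL4006, where a pipeline's failure is masked because only the last command's exit status is checked; a minimal sketch of the fix (the download URL is a placeholder):

```dockerfile
# Without pipefail, this RUN succeeds even if the download fails,
# because only "sh" at the end of the pipe determines the exit status
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
RUN wget -qO- https://example.com/install.sh | sh
```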

---

## Quick Reference Table

| Rule | Severity | CIS | Description |
|------|----------|-----|-------------|
| DL3000 | Error | 4.10 | Use absolute WORKDIR |
| DL3001 | Warning | 4.3 | Pin yum versions |
| DL3002 | Error | 4.1 | Don't switch to root |
| DL3003 | Warning | - | Use WORKDIR not cd |
| DL3006 | Warning | 4.3 | Tag image versions |
| DL3007 | Info | 4.3 | Use image digests |
| DL3008 | Warning | 4.3 | Pin apt versions |
| DL3009 | Info | 4.7 | Delete apt cache |
| DL3013 | Warning | 4.3 | Pin pip versions |
| DL3020 | Error | 4.9 | Use COPY not ADD |
| DL3025 | Warning | 4.6 | JSON notation CMD |
| DL3028 | Warning | 4.10 | Use build secrets |
| DL3059 | Info | - | Multiple RUN instructions |

---

## Additional Resources

- [Hadolint Rules Wiki](https://github.com/hadolint/hadolint/wiki)
- [CIS Docker Benchmark v1.6](https://www.cisecurity.org/benchmark/docker)
- [Docker Security Best Practices](https://docs.docker.com/develop/security-best-practices/)
- [NIST SP 800-190](https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-190.pdf)
671
skills/devsecops/iac-checkov/SKILL.md
Normal file
@@ -0,0 +1,671 @@
---
name: iac-checkov
description: >
  Infrastructure as Code (IaC) security scanning using Checkov with 750+ built-in policies for Terraform,
  CloudFormation, Kubernetes, Dockerfile, and ARM templates. Use when: (1) Scanning IaC files for security
  misconfigurations and compliance violations, (2) Validating cloud infrastructure against CIS, PCI-DSS,
  HIPAA, and SOC2 benchmarks, (3) Detecting secrets and hardcoded credentials in IaC, (4) Implementing
  policy-as-code in CI/CD pipelines, (5) Generating compliance reports with remediation guidance for
  cloud security posture management.
version: 0.1.0
maintainer: SirAppSec
category: devsecops
tags: [iac, checkov, terraform, kubernetes, cloudformation, compliance, policy-as-code, cloud-security]
frameworks: [PCI-DSS, HIPAA, SOC2, NIST, GDPR]
dependencies:
  python: ">=3.8"
  packages: [checkov]
references:
  - https://www.checkov.io/
  - https://github.com/bridgecrewio/checkov
  - https://docs.paloaltonetworks.com/prisma/prisma-cloud
---

# Infrastructure as Code Security with Checkov

## Overview

Checkov is a static code analysis tool that scans Infrastructure as Code (IaC) files for security misconfigurations
and compliance violations before deployment. With 750+ built-in policies, Checkov helps prevent cloud security issues
by detecting problems in Terraform, CloudFormation, Kubernetes, Dockerfiles, Helm charts, and ARM templates.

Checkov performs graph-based scanning to understand resource relationships and detect complex misconfigurations that
span multiple resources, making it more powerful than simple pattern matching.

## Quick Start

### Install Checkov

```bash
# Via pip
pip install checkov

# Via Homebrew (macOS)
brew install checkov

# Via Docker
docker pull bridgecrew/checkov
```

### Scan Terraform Directory

```bash
# Scan all Terraform files in a directory
checkov -d ./terraform

# Scan a specific file
checkov -f ./terraform/main.tf

# Scan with a specific framework
checkov -d ./infrastructure --framework terraform
```

### Scan Kubernetes Manifests

```bash
# Scan Kubernetes YAML files
checkov -d ./k8s --framework kubernetes

# Scan a Helm chart
checkov -d ./helm-chart --framework helm
```

### Scan CloudFormation Template

```bash
# Scan a CloudFormation template
checkov -f ./cloudformation/template.yaml --framework cloudformation
```

## Core Workflow

### Step 1: Understand Scan Scope

Identify the IaC files and frameworks to scan:

```bash
# Supported frameworks
checkov --list-frameworks

# Output:
# terraform, cloudformation, kubernetes, dockerfile, helm,
# serverless, arm, secrets, ansible, github_actions, gitlab_ci
```

**Scope Considerations:**
- Scan the entire infrastructure directory for comprehensive coverage
- Focus on specific frameworks during initial adoption
- Exclude generated or vendored files
- Include both production and non-production configurations

### Step 2: Run Basic Scan

Execute Checkov with the appropriate output format:

```bash
# CLI output (human-readable)
checkov -d ./terraform

# JSON output (for automation)
checkov -d ./terraform -o json

# Multiple output formats
checkov -d ./terraform -o cli -o json -o sarif

# Save output to file
checkov -d ./terraform -o json --output-file-path ./reports
```

**What Checkov Detects:**
- Security misconfigurations (unencrypted resources, public access)
- Compliance violations (CIS benchmarks, industry standards)
- Secrets and hardcoded credentials
- Missing security controls (logging, monitoring, encryption)
- Insecure network configurations
- Resource relationship issues (via graph analysis)

### Step 3: Filter and Prioritize Findings

Focus on critical issues first:

```bash
# Run only AWS checks
checkov -d ./terraform --check CKV_AWS_*

# Skip specific checks (false positives)
checkov -d ./terraform --skip-check CKV_AWS_8,CKV_AWS_21

# Check against specific compliance frameworks
checkov -d ./terraform --compact --framework terraform \
  --check CIS_AWS,CIS_AZURE

# Run only checks of a given severity
checkov -d ./terraform --check HIGH,CRITICAL
```

**Severity Levels:**
- **CRITICAL**: Immediate security risks (public S3 buckets, unencrypted databases)
- **HIGH**: Significant security concerns (missing MFA, weak encryption)
- **MEDIUM**: Important security best practices (missing tags, logging disabled)
- **LOW**: Recommendations and hardening (resource naming conventions)

### Step 4: Suppress False Positives

Use inline suppression for legitimate exceptions:

```hcl
# Terraform example
resource "aws_s3_bucket" "example" {
  # checkov:skip=CKV_AWS_18:This bucket is intentionally public for static website
  bucket = "my-public-website"
  acl    = "public-read"
}
```

```yaml
# Kubernetes example
apiVersion: v1
kind: Pod
metadata:
  name: privileged-pod
  annotations:
    checkov.io/skip: CKV_K8S_16=Legacy application requires privileged mode
spec:
  containers:
    - name: app
      securityContext:
        privileged: true
```

See `references/suppression_guide.md` for comprehensive suppression strategies.

### Step 5: Create Custom Policies

Define organization-specific policies:

```python
# custom_checks/require_s3_versioning.py
from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
from checkov.common.models.enums import CheckResult, CheckCategories


class S3BucketVersioning(BaseResourceCheck):
    def __init__(self):
        name = "Ensure S3 bucket has versioning enabled"
        id = "CKV_AWS_CUSTOM_001"
        supported_resources = ['aws_s3_bucket']
        categories = [CheckCategories.BACKUP_AND_RECOVERY]
        super().__init__(name=name, id=id, categories=categories,
                         supported_resources=supported_resources)

    def scan_resource_conf(self, conf):
        if 'versioning' in conf:
            if conf['versioning'][0].get('enabled') == [True]:
                return CheckResult.PASSED
        return CheckResult.FAILED


check = S3BucketVersioning()
```

Run with custom policies:

```bash
checkov -d ./terraform --external-checks-dir ./custom_checks
```

See `references/custom_policies.md` for advanced policy development.
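Checkov can also express custom policies declaratively in YAML. The sketch below restates the same versioning check; the field names follow Checkov's YAML policy schema as documented upstream, so verify them against your Checkov version before relying on this:

```yaml
# custom_checks/require_s3_versioning.yaml (illustrative)
metadata:
  name: "Ensure S3 bucket has versioning enabled"
  id: "CKV2_CUSTOM_001"
  category: "BACKUP_AND_RECOVERY"
definition:
  cond_type: "attribute"
  resource_types:
    - "aws_s3_bucket"
  attribute: "versioning.enabled"
  operator: "equals"
  value: "true"
```

YAML policies are convenient for attribute checks; Python policies remain the better fit when the logic needs branching or cross-resource context.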

### Step 6: Generate Compliance Reports

Create reports for audit and compliance:

```bash
# Generate a comprehensive report
checkov -d ./terraform \
  -o cli -o json -o junitxml \
  --output-file-path ./compliance-reports \
  --repo-id my-infrastructure \
  --branch main

# CycloneDX SBOM for IaC
checkov -d ./terraform -o cyclonedx

# SARIF for GitHub Security
checkov -d ./terraform -o sarif --output-file-path ./sarif-report.json
```

**Report Types:**
- **CLI**: Human-readable console output
- **JSON**: Machine-readable for automation
- **JUnit XML**: CI/CD integration (Jenkins, GitLab)
- **SARIF**: GitHub/Azure DevOps Security tab
- **CycloneDX**: Software Bill of Materials for IaC

Map findings to compliance frameworks using `references/compliance_mapping.md`.

## CI/CD Integration

### GitHub Actions

Add Checkov scanning to pull request checks:

```yaml
# .github/workflows/checkov.yml
name: Checkov IaC Security Scan
on: [push, pull_request]

jobs:
  checkov-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Run Checkov
        uses: bridgecrewio/checkov-action@master
        with:
          directory: infrastructure/
          framework: terraform
          output_format: sarif
          output_file_path: checkov-results.sarif
          soft_fail: false

      - name: Upload SARIF Report
        if: always()
        uses: github/codeql-action/upload-sarif@v2
        with:
          sarif_file: checkov-results.sarif
```

### Pre-Commit Hook

Prevent committing insecure IaC:

```yaml
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/bridgecrewio/checkov
    rev: 2.5.0
    hooks:
      - id: checkov
        args: [--soft-fail]
        files: \.(tf|yaml|yml|json)$
```

Install the pre-commit hooks:

```bash
pip install pre-commit
pre-commit install
```

### GitLab CI

```yaml
# .gitlab-ci.yml
checkov_scan:
  image: bridgecrew/checkov:latest
  stage: security
  script:
    - checkov -d ./terraform -o json -o junitxml
      --output-file-path $CI_PROJECT_DIR/checkov-report
  artifacts:
    reports:
      junit: checkov-report/results_junitxml.xml
    paths:
      - checkov-report/
    when: always
```

### Jenkins Pipeline

```groovy
// Jenkinsfile
pipeline {
    agent any
    stages {
        stage('Checkov Scan') {
            steps {
                sh 'pip install checkov'
                sh '''
                    checkov -d ./terraform \
                        -o cli -o junitxml \
                        --output-file-path ./reports
                '''
            }
        }
    }
    post {
        always {
            junit 'reports/results_junitxml.xml'
        }
    }
}
```

See the `assets/` directory for complete CI/CD templates.

## Framework-Specific Workflows

### Terraform

**Scan Terraform with Variable Files:**

```bash
# Scan with tfvars
checkov -d ./terraform --var-file ./terraform.tfvars

# Download and scan external modules
checkov -d ./terraform --download-external-modules true

# Skip Terraform state files
checkov -d ./terraform --skip-path terraform.tfstate
```

**Common Terraform Checks:**
- CKV_AWS_19: Ensure S3 bucket has server-side encryption
- CKV_AWS_21: Ensure S3 bucket has versioning enabled
- CKV_AWS_23: Ensure Security Group ingress is not open to 0.0.0.0/0
- CKV_AWS_40: Ensure IAM policies don't use wildcard actions
- CKV_AWS_61: Ensure RDS database has encryption at rest enabled

### Kubernetes

**Scan Kubernetes Manifests:**

```bash
# Scan all YAML manifests
checkov -d ./k8s --framework kubernetes

# Scan a Helm chart
checkov -d ./helm-chart --framework helm

# Scan kustomize output
kustomize build ./overlay/prod | checkov -f - --framework kubernetes
```

**Common Kubernetes Checks:**
- CKV_K8S_8: Ensure Liveness Probe is configured
- CKV_K8S_10: Ensure CPU requests are set
- CKV_K8S_11: Ensure CPU limits are set
- CKV_K8S_14: Ensure container image is not latest
- CKV_K8S_16: Ensure container is not privileged
- CKV_K8S_22: Ensure read-only root filesystem
- CKV_K8S_28: Ensure container capabilities are minimized
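A pod spec satisfying the checks above might look like this sketch (image, registry, and probe path are illustrative placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.4.2   # pinned tag, not :latest (CKV_K8S_14)
      securityContext:
        privileged: false                     # CKV_K8S_16
        readOnlyRootFilesystem: true          # CKV_K8S_22
        capabilities:
          drop: ["ALL"]                       # CKV_K8S_28
      resources:
        requests:
          cpu: "250m"                         # CKV_K8S_10
        limits:
          cpu: "500m"                         # CKV_K8S_11
      livenessProbe:                          # CKV_K8S_8
        httpGet:
          path: /healthz
          port: 8080
```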

### CloudFormation

**Scan CloudFormation Templates:**

```bash
# Scan a CloudFormation template
checkov -f ./cloudformation/stack.yaml --framework cloudformation

# Scan an AWS SAM template
checkov -f ./sam-template.yaml --framework serverless
```

### Dockerfile

**Scan Dockerfiles for Security Issues:**

```bash
# Scan a Dockerfile
checkov -f ./Dockerfile --framework dockerfile

# Common issues detected:
# - Running as root user
# - Using the :latest tag
# - Missing HEALTHCHECK
# - Exposing sensitive ports
```

## Baseline and Drift Detection

### Create Security Baseline

Establish a baseline for existing infrastructure:

```bash
# Create baseline (first scan)
checkov -d ./terraform --create-baseline

# This creates a .checkov.baseline file with current findings
```

### Detect New Issues (Drift)

Compare subsequent scans against the baseline:

```bash
# Compare against baseline - only fail on NEW issues
checkov -d ./terraform --baseline .checkov.baseline

# This tolerates existing issues while preventing new ones
```

**Use Cases:**
- Gradual remediation of legacy infrastructure
- Focus on preventing new security debt
- Phased compliance adoption

## Secret Scanning

Detect hardcoded secrets in IaC:

```bash
# Enable secrets scanning
checkov -d ./terraform --framework secrets

# Common secrets detected:
# - AWS access keys
# - API tokens
# - Private keys
# - Database passwords
# - Generic secrets (high-entropy strings)
```

## Security Considerations

- **Policy Suppression Governance**: Require security team approval for suppressing CRITICAL/HIGH findings
- **CI/CD Failure Thresholds**: Configure `--hard-fail-on` for severity levels that should block deployment
- **Custom Policy Management**: Version-control custom policies and review changes
- **Compliance Alignment**: Map organizational requirements to Checkov policies
- **Secrets Management**: Never commit secrets; use secret managers and rotation policies
- **Audit Logging**: Log all scan results and policy suppressions for compliance audits
- **False Positive Review**: Regularly review suppressed findings to ensure they remain valid
- **Policy Updates**: Keep Checkov updated to receive new security policies
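When enforcing thresholds in custom tooling, the JSON report can be summarized in a few lines of Python. This is a minimal sketch; the `results.failed_checks` and `severity` field names mirror the shape of `checkov -o json` output but should be verified against your Checkov version:

```python
import json
from collections import Counter


def severity_counts(report: dict) -> Counter:
    """Tally failed checks by severity from a Checkov-style JSON report."""
    failed = report.get("results", {}).get("failed_checks", [])
    return Counter(check.get("severity") or "UNKNOWN" for check in failed)


# Example report shaped like `checkov -o json` output (illustrative data)
report = json.loads("""
{"results": {"failed_checks": [
  {"check_id": "CKV_AWS_19", "severity": "HIGH"},
  {"check_id": "CKV_AWS_21", "severity": "MEDIUM"},
  {"check_id": "CKV_AWS_23", "severity": "HIGH"}
]}}
""")

counts = severity_counts(report)
print(dict(counts))  # → {'HIGH': 2, 'MEDIUM': 1}
```

A gate script can then exit non-zero when `counts["CRITICAL"]` or `counts["HIGH"]` exceeds the allowed budget.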

## Bundled Resources

### Scripts (`scripts/`)

- `checkov_scan.py` - Comprehensive scanning script with multiple frameworks and output formats
- `checkov_terraform_scan.sh` - Terraform-specific scanning with variable file support
- `checkov_k8s_scan.sh` - Kubernetes manifest scanning with cluster comparison
- `checkov_baseline_create.sh` - Baseline creation and drift detection workflow
- `checkov_compliance_report.py` - Generate compliance reports (CIS, PCI-DSS, HIPAA, SOC2)
- `ci_integration.sh` - CI/CD integration examples for multiple platforms

### References (`references/`)

- `compliance_mapping.md` - Mapping of Checkov checks to CIS, PCI-DSS, HIPAA, SOC2, NIST
- `custom_policies.md` - Guide for writing custom Python and YAML policies
- `suppression_guide.md` - Best practices for suppressing false positives
- `terraform_checks.md` - Comprehensive list of Terraform checks with remediation
- `kubernetes_checks.md` - Kubernetes security checks and pod security standards
- `cloudformation_checks.md` - CloudFormation security checks with examples

### Assets (`assets/`)

- `checkov_config.yaml` - Checkov configuration file template
- `github_actions.yml` - Complete GitHub Actions workflow
- `gitlab_ci.yml` - Complete GitLab CI pipeline
- `jenkins_pipeline.groovy` - Jenkins pipeline template
- `pre_commit_config.yaml` - Pre-commit hook configuration
- `custom_policy_template.py` - Template for custom Python policies
- `policy_metadata.yaml` - Policy metadata for organization-specific policies

## Common Patterns

### Pattern 1: Progressive Compliance Adoption

Gradually increase security posture:

```bash
# Phase 1: Scan without failing (awareness)
checkov -d ./terraform --soft-fail

# Phase 2: Fail only on CRITICAL issues
checkov -d ./terraform --hard-fail-on CRITICAL

# Phase 3: Fail on CRITICAL and HIGH
checkov -d ./terraform --hard-fail-on CRITICAL,HIGH

# Phase 4: Full enforcement with baseline
checkov -d ./terraform --baseline .checkov.baseline
```

### Pattern 2: Multi-Framework Infrastructure

Scan the complete infrastructure stack:

```bash
# Use the bundled script for comprehensive scanning
python3 scripts/checkov_scan.py \
  --infrastructure-dir ./infrastructure \
  --frameworks terraform,kubernetes,dockerfile \
  --output-dir ./security-reports \
  --compliance CIS,PCI-DSS
```

### Pattern 3: Policy-as-Code Repository

Maintain a centralized policy repository:

```
policies/
├── custom_checks/
│   ├── aws/
│   │   ├── require_encryption.py
│   │   └── require_tags.py
│   └── kubernetes/
│       └── require_psp.py
├── .checkov.yaml           # Global config
└── suppression_list.txt    # Approved suppressions
```

### Pattern 4: Compliance-Driven Scanning

Focus on specific compliance requirements:

```bash
# CIS AWS Foundations Benchmark
checkov -d ./terraform --check CIS_AWS

# PCI-DSS compliance
checkov -d ./terraform --framework terraform \
  --check CKV_AWS_19,CKV_AWS_21,CKV_AWS_61 \
  -o json --output-file-path ./pci-dss-report

# HIPAA compliance
checkov -d ./terraform --framework terraform \
  --compact --check CKV_AWS_17,CKV_AWS_19,CKV_AWS_61,CKV_AWS_93
```

## Integration Points

- **CI/CD**: GitHub Actions, GitLab CI, Jenkins, Azure DevOps, CircleCI, Bitbucket Pipelines
- **Version Control**: Pre-commit hooks, pull request checks, branch protection rules
- **Cloud Platforms**: AWS, Azure, GCP, OCI, Alibaba Cloud
- **IaC Tools**: Terraform, Terragrunt, CloudFormation, ARM, Pulumi
- **Container Orchestration**: Kubernetes, OpenShift, EKS, GKE, AKS
- **Policy Engines**: OPA (Open Policy Agent), Sentinel
- **Security Platforms**: Prisma Cloud, Bridgecrew Platform
- **SIEM/Logging**: Export findings to Splunk, Elasticsearch, CloudWatch

## Troubleshooting

### Issue: Too Many Findings Overwhelming the Team

**Solution**: Use progressive adoption with baselines:

```bash
# Create a baseline with the current state
checkov -d ./terraform --create-baseline

# Only fail on new issues
checkov -d ./terraform --baseline .checkov.baseline --soft-fail-on LOW,MEDIUM
```

### Issue: False Positives for Legitimate Use Cases

**Solution**: Use inline suppressions with justification:

```hcl
# Provide a clear business justification
resource "aws_security_group" "allow_office" {
  # checkov:skip=CKV_AWS_23:Office IP range needs SSH access for developers
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["203.0.113.0/24"]  # Office IP range
  }
}
```

### Issue: Scan Takes Too Long

**Solution**: Optimize the scan scope:

```bash
# Skip unnecessary paths
checkov -d ./terraform \
  --skip-path .terraform/ \
  --skip-path modules/vendor/ \
  --skip-framework secrets

# Use compact output
checkov -d ./terraform --compact --quiet
```

### Issue: Custom Policies Not Loading

**Solution**: Verify the policy structure and loading:

```bash
# Check policy syntax
python3 custom_checks/my_policy.py

# Ensure proper directory structure
checkov -d ./terraform \
  --external-checks-dir ./custom_checks \
  --list

# Debug with verbose output
checkov -d ./terraform --external-checks-dir ./custom_checks -v
```

### Issue: Integration with Private Terraform Modules

**Solution**: Configure module access:

```bash
# Set up Terraform credentials
export TF_TOKEN_app_terraform_io="your-token"

# Download external modules
checkov -d ./terraform --download-external-modules true

# Or scan after terraform init
cd ./terraform && terraform init
checkov -d .
```

## References

- [Checkov Documentation](https://www.checkov.io/)
- [Checkov GitHub Repository](https://github.com/bridgecrewio/checkov)
- [CIS Benchmarks](https://www.cisecurity.org/cis-benchmarks/)
- [Terraform Security Best Practices](https://www.terraform.io/docs/cloud/guides/recommended-practices/index.html)
- [Kubernetes Pod Security Standards](https://kubernetes.io/docs/concepts/security/pod-security-standards/)
- [AWS Security Best Practices](https://aws.amazon.com/security/security-resources/)
9
skills/devsecops/iac-checkov/assets/.gitkeep
Normal file
@@ -0,0 +1,9 @@
# Assets Directory

Place files that will be used in the output Claude produces:
- Templates
- Configuration files
- Images/logos
- Boilerplate code

These files are NOT loaded into context but copied/modified in output.
94
skills/devsecops/iac-checkov/assets/checkov_config.yaml
Normal file
@@ -0,0 +1,94 @@
# Checkov Configuration File
# Place this file as .checkov.yaml in your project root

# Framework selection
framework:
  - terraform
  - kubernetes
  - dockerfile
  - helm

# Checks to skip globally
skip-check:
  # Development environment exceptions
  - CKV_AWS_17   # RDS backup retention (dev envs)
  - CKV_AWS_8    # CloudWatch log encryption (cost optimization)

  # Low-severity informational checks
  - CKV_AWS_50   # Lambda tracing
  - CKV_K8S_35   # Prefer secrets as files

# Paths to exclude from scanning
skip-path:
  - .terraform/
  - .terragrunt-cache/
  - node_modules/
  - vendor/
  - "**/.git"
  - "**/test/"
  - "**/examples/"

# Severity-based configuration
soft-fail-on:
  - LOW
  - MEDIUM

hard-fail-on:
  - CRITICAL
  - HIGH

# Compact output mode
compact: true

# Quiet mode (only show failures)
quiet: false

# Download external Terraform modules
download-external-modules: true

# Output configuration
output:
  - cli
  - json
  - sarif

# Output file path
output-file-path: ./checkov-reports

# Repository identification
repo-id: my-infrastructure
branch: main

# External checks directory
external-checks-dir:
  - ./custom_checks

# Baseline file for drift detection
# baseline: .checkov.baseline

# Enable secrets scanning
# framework:
#   - secrets

# Prisma Cloud/Bridgecrew integration (optional)
# bc-api-key: ${PRISMA_API_KEY}
# prisma-api-url: https://api.prismacloud.io

# Skip specific resources by regex
# skip-resources-without-violations: true

# CKV check configuration
# check:
#   - CIS_AWS
#   - CIS_AZURE
#   - CIS_KUBERNETES

# Enable/disable specific frameworks
# skip-framework:
#   - ansible
#   - github_actions

# Custom policies metadata filter
# policy-metadata-filter:
#   severity: HIGH,CRITICAL
#   category: IAM,ENCRYPTION
199
skills/devsecops/iac-checkov/assets/github_actions.yml
Normal file
@@ -0,0 +1,199 @@
# GitHub Actions Workflow for Checkov IaC Security Scanning
# Place this file in .github/workflows/checkov.yml

name: Checkov IaC Security Scan

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]
    paths:
      - '**.tf'
      - '**.yaml'
      - '**.yml'
      - '**.json'
  schedule:
    # Run weekly security scans on Sunday at 2 AM
    - cron: '0 2 * * 0'
  workflow_dispatch:

permissions:
  contents: read
  security-events: write
  pull-requests: write

jobs:
  checkov-terraform:
    name: Terraform Security Scan
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Run Checkov on Terraform
        uses: bridgecrewio/checkov-action@master
        with:
          directory: terraform/
          framework: terraform
          output_format: sarif
          output_file_path: checkov-terraform.sarif
          soft_fail: false
          download_external_modules: true

      - name: Upload SARIF Report
        if: always()
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: checkov-terraform.sarif
          category: terraform

  checkov-kubernetes:
    name: Kubernetes Security Scan
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Run Checkov on Kubernetes
        uses: bridgecrewio/checkov-action@master
        with:
          directory: k8s/
          framework: kubernetes
          output_format: sarif
          output_file_path: checkov-k8s.sarif
          soft_fail: false

      - name: Upload SARIF Report
        if: always()
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: checkov-k8s.sarif
          category: kubernetes

  checkov-dockerfile:
    name: Dockerfile Security Scan
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Run Checkov on Dockerfiles
        uses: bridgecrewio/checkov-action@master
        with:
          directory: ./
          framework: dockerfile
          output_format: sarif
          output_file_path: checkov-docker.sarif
          soft_fail: false

      - name: Upload SARIF Report
        if: always()
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: checkov-docker.sarif
          category: dockerfile

  checkov-compliance:
    name: Compliance Scan (CIS, PCI-DSS)
    runs-on: ubuntu-latest
    if: github.event_name == 'push' || github.event_name == 'schedule'
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'

      - name: Install Checkov
        run: pip install checkov

      - name: Run CIS Compliance Scan
        run: |
          checkov -d terraform/ \
            --framework terraform \
            --check CIS_AWS,CIS_AZURE \
            -o json -o cli \
            --output-file-path ./compliance-reports

      - name: Upload Compliance Reports
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: compliance-reports
          path: compliance-reports/
          retention-days: 90

  security-gate:
    name: Security Gate Check
    runs-on: ubuntu-latest
    needs: [checkov-terraform, checkov-kubernetes]
    if: always()
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'

      - name: Install Dependencies
        run: pip install checkov

      - name: Run Checkov with Threshold
        run: |
          # Fail on CRITICAL and HIGH severity issues
          checkov -d terraform/ \
            --framework terraform \
            --hard-fail-on CRITICAL,HIGH \
            -o json --output-file-path ./gate-report || EXIT_CODE=$?

          # Parse results
          if [ -f "gate-report/results_json.json" ]; then
            CRITICAL=$(jq '[.results.failed_checks[] | select(.severity == "CRITICAL")] | length' gate-report/results_json.json)
            HIGH=$(jq '[.results.failed_checks[] | select(.severity == "HIGH")] | length' gate-report/results_json.json)

            echo "Critical findings: $CRITICAL"
            echo "High findings: $HIGH"

            if [ "$CRITICAL" -gt 0 ] || [ "$HIGH" -gt 0 ]; then
              echo "❌ Security gate failed"
              exit 1
            fi
          fi

          echo "✅ Security gate passed"

      - name: Comment on PR
        if: github.event_name == 'pull_request'
        uses: actions/github-script@v7
        with:
          script: |
            const fs = require('fs');
            const report = JSON.parse(fs.readFileSync('gate-report/results_json.json', 'utf8'));

            const summary = report.summary || {};
            const passed = summary.passed || 0;
            const failed = summary.failed || 0;
            const skipped = summary.skipped || 0;

            const body = `## Checkov IaC Security Scan Results

            | Status | Count |
            |--------|-------|
            | ✅ Passed | ${passed} |
            | ❌ Failed | ${failed} |
            | ⏭️ Skipped | ${skipped} |

            ${failed > 0 ? '⚠️ Please review and fix the security findings before merging.' : '✅ No security issues detected!'}
            `;

            github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              body: body
            });
218
skills/devsecops/iac-checkov/assets/gitlab_ci.yml
Normal file
@@ -0,0 +1,218 @@
# GitLab CI/CD Pipeline for Checkov IaC Security Scanning
# Add this to your .gitlab-ci.yml file

stages:
  - security
  - compliance
  - report

variables:
  CHECKOV_IMAGE: "bridgecrew/checkov:latest"
  REPORTS_DIR: "checkov-reports"

# Terraform Security Scan
checkov_terraform:
  stage: security
  image: $CHECKOV_IMAGE
  script:
    - mkdir -p $REPORTS_DIR
    - |
      checkov -d terraform/ \
        --framework terraform \
        -o json -o junitxml -o sarif \
        --output-file-path $REPORTS_DIR \
        --compact || EXIT_CODE=$?
    - echo "Exit code: ${EXIT_CODE:-0}"
  artifacts:
    reports:
      junit: $REPORTS_DIR/results_junitxml.xml
      # NOTE: GitLab's `sast` report key expects GitLab's native SAST JSON schema;
      # the SARIF output is mainly useful for external toolchain integration.
      sast: $REPORTS_DIR/results_sarif.sarif
    paths:
      - $REPORTS_DIR/
    when: always
    expire_in: 30 days
  only:
    changes:
      - terraform/**/*
      - "*.tf"
  tags:
    - docker

# Kubernetes Security Scan
checkov_kubernetes:
  stage: security
  image: $CHECKOV_IMAGE
  script:
    - mkdir -p $REPORTS_DIR
    - |
      checkov -d k8s/ \
        --framework kubernetes \
        -o json -o junitxml \
        --output-file-path $REPORTS_DIR \
        --compact
  artifacts:
    reports:
      junit: $REPORTS_DIR/results_junitxml.xml
    paths:
      - $REPORTS_DIR/
    when: always
    expire_in: 30 days
  only:
    changes:
      - k8s/**/*
      - "*.yaml"
      - "*.yml"
  tags:
    - docker

# CloudFormation Security Scan
checkov_cloudformation:
  stage: security
  image: $CHECKOV_IMAGE
  script:
    - mkdir -p $REPORTS_DIR
    - |
      checkov -d cloudformation/ \
        --framework cloudformation \
        -o json -o junitxml \
        --output-file-path $REPORTS_DIR \
        --compact
  artifacts:
    reports:
      junit: $REPORTS_DIR/results_junitxml.xml
    paths:
      - $REPORTS_DIR/
    when: always
    expire_in: 30 days
  only:
    changes:
      - cloudformation/**/*
  allow_failure: true
  tags:
    - docker

# Compliance Scan (CIS Benchmarks)
checkov_compliance:
  stage: compliance
  image: $CHECKOV_IMAGE
  script:
    - mkdir -p $REPORTS_DIR/compliance
    - |
      # CIS AWS Benchmark
      checkov -d terraform/ \
        --framework terraform \
        --check CIS_AWS \
        -o json -o cli \
        --output-file-path $REPORTS_DIR/compliance \
        --compact || true

      # Parse results
      if [ -f "$REPORTS_DIR/compliance/results_json.json" ]; then
        FAILED=$(jq '.summary.failed' $REPORTS_DIR/compliance/results_json.json)
        echo "CIS compliance failures: $FAILED"
      fi
  artifacts:
    paths:
      - $REPORTS_DIR/compliance/
    when: always
    expire_in: 90 days
  only:
    - main
    - develop
  tags:
    - docker

# Security Gate - Fail on Critical/High
security_gate:
  stage: compliance
  image: $CHECKOV_IMAGE
  script:
    - mkdir -p $REPORTS_DIR/gate
    - |
      # Run scan with severity filtering
      checkov -d terraform/ \
        --framework terraform \
        --hard-fail-on CRITICAL,HIGH \
        -o json \
        --output-file-path $REPORTS_DIR/gate \
        --compact || EXIT_CODE=$?

      # Check results
      if [ -f "$REPORTS_DIR/gate/results_json.json" ]; then
        CRITICAL=$(jq '[.results.failed_checks[] | select(.severity == "CRITICAL")] | length' $REPORTS_DIR/gate/results_json.json)
        HIGH=$(jq '[.results.failed_checks[] | select(.severity == "HIGH")] | length' $REPORTS_DIR/gate/results_json.json)

        echo "Critical findings: $CRITICAL"
        echo "High findings: $HIGH"

        if [ "$CRITICAL" -gt 0 ] || [ "$HIGH" -gt 0 ]; then
          echo "❌ Security gate failed: Critical or High severity issues found"
          exit 1
        fi

        echo "✅ Security gate passed"
      fi

      exit ${EXIT_CODE:-0}
  artifacts:
    paths:
      - $REPORTS_DIR/gate/
    when: always
    expire_in: 30 days
  dependencies:
    - checkov_terraform
    - checkov_kubernetes
  only:
    - merge_requests
    - main
  allow_failure: false
  tags:
    - docker

# Generate Summary Report
generate_report:
  stage: report
  image: alpine:latest
  before_script:
    - apk add --no-cache jq curl
  script:
    - mkdir -p $REPORTS_DIR
    - |
      # Generate markdown summary
      cat > $REPORTS_DIR/summary.md <<EOF
      # Checkov IaC Security Scan Summary

      **Pipeline:** $CI_PIPELINE_ID
      **Branch:** $CI_COMMIT_REF_NAME
      **Commit:** $CI_COMMIT_SHORT_SHA
      **Date:** $(date)

      ## Scan Results

      EOF

      # Parse Terraform scan results
      if [ -f "$REPORTS_DIR/results_json.json" ]; then
        echo "### Terraform Scan" >> $REPORTS_DIR/summary.md
        echo "" >> $REPORTS_DIR/summary.md
        echo "| Metric | Count |" >> $REPORTS_DIR/summary.md
        echo "|--------|-------|" >> $REPORTS_DIR/summary.md
        jq -r '.summary | "| Passed | \(.passed) |\n| Failed | \(.failed) |\n| Skipped | \(.skipped) |"' \
          $REPORTS_DIR/results_json.json >> $REPORTS_DIR/summary.md
        echo "" >> $REPORTS_DIR/summary.md
      fi

      cat $REPORTS_DIR/summary.md
  artifacts:
    paths:
      - $REPORTS_DIR/summary.md
    when: always
    expire_in: 90 days
  dependencies:
    - checkov_terraform
    - checkov_kubernetes
  only:
    - merge_requests
    - main
    - develop
  tags:
    - docker
92
skills/devsecops/iac-checkov/assets/pre_commit_config.yaml
Normal file
@@ -0,0 +1,92 @@
# Pre-commit Hook Configuration for Checkov
# Place this file as .pre-commit-config.yaml in your project root
#
# Install: pip install pre-commit
# Setup: pre-commit install

repos:
  # Checkov IaC Security Scanning
  - repo: https://github.com/bridgecrewio/checkov
    rev: 2.5.0
    hooks:
      - id: checkov
        name: Checkov IaC Security Scan
        args:
          - --soft-fail              # Don't block commits (warning only)
          - --compact                # Concise output
          - --framework=terraform    # Scan Terraform files
          - --framework=kubernetes   # Scan Kubernetes files
          - --framework=dockerfile   # Scan Dockerfiles
        files: (\.(tf|yaml|yml|json)|Dockerfile)$
        exclude: |
          (?x)^(
            \.terraform/|
            \.terragrunt-cache/|
            vendor/|
            node_modules/
          )

  # Strict mode (fail on Critical/High) - optional
  - repo: https://github.com/bridgecrewio/checkov
    rev: 2.5.0
    hooks:
      - id: checkov
        name: Checkov Strict Mode (Critical/High)
        args:
          - --hard-fail-on=CRITICAL,HIGH
          - --compact
          - --quiet
        files: \.(tf|yaml|yml)$
        exclude: |
          (?x)^(
            \.terraform/|
            test/|
            examples/
          )
        # Only run on push, not on every commit
        stages: [push]

  # Terraform-specific scanning with external modules
  - repo: https://github.com/bridgecrewio/checkov
    rev: 2.5.0
    hooks:
      - id: checkov
        name: Checkov Terraform (with external modules)
        args:
          - --download-external-modules=true
          - --framework=terraform
          - --soft-fail
        files: \.tf$
        exclude: \.terraform/

  # Additional code quality hooks
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
        args: [--allow-multiple-documents]
      - id: check-json
      - id: check-merge-conflict
      - id: detect-private-key
        name: Detect Private Keys (Secrets)

  # Terraform formatting
  - repo: https://github.com/antonbabenko/pre-commit-terraform
    rev: v1.86.0
    hooks:
      - id: terraform_fmt
      - id: terraform_validate
      - id: terraform_docs
        args:
          - --hook-config=--add-to-existing-file=true
          - --hook-config=--create-file-if-not-exist=true

  # YAML linting
  - repo: https://github.com/adrienverge/yamllint
    rev: v1.33.0
    hooks:
      - id: yamllint
        args: [-c=.yamllint.yaml]
        files: \.(yaml|yml)$
40
skills/devsecops/iac-checkov/references/EXAMPLE.md
Normal file
@@ -0,0 +1,40 @@
# Reference Document Template

This file contains detailed reference material that Claude should load only when needed.

## Table of Contents

- [Section 1](#section-1)
- [Section 2](#section-2)
- [Security Standards](#security-standards)

## Section 1

Detailed information, schemas, or examples that are too large for SKILL.md.

## Section 2

Additional reference material.

## Security Standards

### OWASP Top 10

Reference relevant OWASP categories:
- A01: Broken Access Control
- A02: Cryptographic Failures
- etc.

### CWE Mappings

Map to relevant Common Weakness Enumeration categories:
- CWE-79: Cross-site Scripting
- CWE-89: SQL Injection
- etc.

### MITRE ATT&CK

Reference relevant tactics and techniques if applicable:
- TA0001: Initial Access
- T1190: Exploit Public-Facing Application
- etc.
237
skills/devsecops/iac-checkov/references/compliance_mapping.md
Normal file
@@ -0,0 +1,237 @@
# Checkov Compliance Framework Mapping

Mapping of Checkov checks to CIS, PCI-DSS, HIPAA, SOC2, NIST, and GDPR compliance requirements.

## CIS Benchmarks

### CIS AWS Foundations Benchmark v1.4

| Check ID | CIS Control | Description | Severity |
|----------|-------------|-------------|----------|
| CKV_AWS_19 | 2.1.1 | Ensure S3 bucket encryption at rest | HIGH |
| CKV_AWS_21 | 2.1.3 | Ensure S3 bucket versioning enabled | MEDIUM |
| CKV_AWS_18 | 2.1.5 | Ensure S3 bucket access logging | MEDIUM |
| CKV_AWS_23 | 4.1 | Security group ingress not 0.0.0.0/0 | HIGH |
| CKV_AWS_24 | 4.2 | Security group ingress not ::/0 | HIGH |
| CKV_AWS_40 | 1.16 | IAM policies no wildcard actions | HIGH |
| CKV_AWS_61 | 2.3.1 | RDS encryption at rest enabled | HIGH |
| CKV_AWS_16 | 2.3.1 | RDS storage encrypted | HIGH |
| CKV_AWS_17 | 2.3.2 | RDS backup retention period | MEDIUM |
| CKV_AWS_7 | 2.9 | EBS encryption by default | HIGH |
| CKV_AWS_93 | 2.4.1 | S3 bucket public access blocked | CRITICAL |

### CIS Kubernetes Benchmark v1.6

| Check ID | CIS Control | Description | Severity |
|----------|-------------|-------------|----------|
| CKV_K8S_16 | 5.2.1 | Container not privileged | HIGH |
| CKV_K8S_22 | 5.2.6 | Read-only root filesystem | HIGH |
| CKV_K8S_28 | 5.2.7 | Minimize capabilities | HIGH |
| CKV_K8S_10 | 5.2.13 | CPU requests configured | MEDIUM |
| CKV_K8S_11 | 5.2.13 | CPU limits configured | MEDIUM |
| CKV_K8S_12 | 5.2.14 | Memory requests configured | MEDIUM |
| CKV_K8S_13 | 5.2.14 | Memory limits configured | MEDIUM |
| CKV_K8S_8 | 5.2.15 | Liveness probe configured | MEDIUM |
| CKV_K8S_9 | 5.2.15 | Readiness probe configured | MEDIUM |

## PCI-DSS v3.2.1

### Requirement 2: Do not use vendor-supplied defaults

| Check ID | PCI Requirement | Description |
|----------|-----------------|-------------|
| CKV_AWS_41 | 2.1 | EKS encryption enabled |
| CKV_AWS_58 | 2.2 | EKS public access restricted |
| CKV_K8S_14 | 2.3 | Image tag not :latest |

### Requirement 3: Protect stored cardholder data

| Check ID | PCI Requirement | Description |
|----------|-----------------|-------------|
| CKV_AWS_19 | 3.4 | S3 bucket encrypted |
| CKV_AWS_61 | 3.4 | RDS encrypted at rest |
| CKV_AWS_7 | 3.4 | EBS encryption enabled |
| CKV_AWS_89 | 3.4 | DynamoDB encryption |

### Requirement 6: Develop and maintain secure systems

| Check ID | PCI Requirement | Description |
|----------|-----------------|-------------|
| CKV_AWS_23 | 6.2 | Security groups not open |
| CKV_AWS_40 | 6.5 | IAM no wildcard permissions |
| CKV_K8S_16 | 6.5 | No privileged containers |

### Requirement 10: Track and monitor all access

| Check ID | PCI Requirement | Description |
|----------|-----------------|-------------|
| CKV_AWS_18 | 10.2 | S3 access logging enabled |
| CKV_AWS_51 | 10.3 | ECR image scanning |
| CKV_AWS_46 | 10.5 | ECS task logging |

## HIPAA Security Rule

### Administrative Safeguards (§164.308)

| Check ID | HIPAA Control | Description |
|----------|---------------|-------------|
| CKV_AWS_40 | §164.308(a)(3) | IAM access controls |
| CKV_AWS_49 | §164.308(a)(4) | CloudTrail logging |
| CKV_AWS_38 | §164.308(a)(5) | EKS RBAC enabled |

### Physical Safeguards (§164.310)

| Check ID | HIPAA Control | Description |
|----------|---------------|-------------|
| CKV_AWS_19 | §164.310(d)(1) | Encryption at rest (S3) |
| CKV_AWS_7 | §164.310(d)(1) | Encryption at rest (EBS) |
| CKV_AWS_61 | §164.310(d)(1) | Encryption at rest (RDS) |

### Technical Safeguards (§164.312)

| Check ID | HIPAA Control | Description |
|----------|---------------|-------------|
| CKV_AWS_23 | §164.312(a)(1) | Access control (network) |
| CKV_AWS_18 | §164.312(b) | Audit logging (S3) |
| CKV_AWS_27 | §164.312(c)(1) | SQS encryption |
| CKV_AWS_20 | §164.312(e)(1) | S3 SSL/TLS enforced |

## SOC 2 Trust Service Criteria

### CC6.1: Logical and Physical Access Controls

| Check ID | TSC | Description |
|----------|-----|-------------|
| CKV_AWS_40 | CC6.1 | IAM least privilege |
| CKV_AWS_23 | CC6.1 | Network segmentation |
| CKV_K8S_21 | CC6.1 | Namespace defined |

### CC6.6: Encryption

| Check ID | TSC | Description |
|----------|-----|-------------|
| CKV_AWS_19 | CC6.6 | S3 encryption |
| CKV_AWS_7 | CC6.6 | EBS encryption |
| CKV_AWS_61 | CC6.6 | RDS encryption |
| CKV_AWS_20 | CC6.6 | S3 SSL enforced |

### CC7.2: System Monitoring

| Check ID | TSC | Description |
|----------|-----|-------------|
| CKV_AWS_18 | CC7.2 | S3 access logging |
| CKV_AWS_49 | CC7.2 | CloudTrail enabled |
| CKV_K8S_8 | CC7.2 | Liveness probe |

## NIST 800-53 Rev 5

### AC (Access Control)

| Check ID | NIST Control | Description |
|----------|--------------|-------------|
| CKV_AWS_40 | AC-3 | IAM least privilege |
| CKV_AWS_23 | AC-4 | Network access control |
| CKV_K8S_16 | AC-6 | Least privilege (containers) |

### AU (Audit and Accountability)

| Check ID | NIST Control | Description |
|----------|--------------|-------------|
| CKV_AWS_18 | AU-2 | S3 access logging |
| CKV_AWS_49 | AU-12 | CloudTrail logging |
| CKV_K8S_35 | AU-9 | Audit log protection |

### SC (System and Communications Protection)

| Check ID | NIST Control | Description |
|----------|--------------|-------------|
| CKV_AWS_19 | SC-28 | Encryption at rest (S3) |
| CKV_AWS_20 | SC-8 | Encryption in transit (S3) |
| CKV_AWS_7 | SC-28 | Encryption at rest (EBS) |

## GDPR

### Article 32: Security of Processing

| Check ID | GDPR Article | Description |
|----------|--------------|-------------|
| CKV_AWS_19 | Art. 32(1)(a) | Encryption of personal data |
| CKV_AWS_7 | Art. 32(1)(a) | EBS encryption |
| CKV_AWS_61 | Art. 32(1)(a) | RDS encryption |
| CKV_AWS_21 | Art. 32(1)(b) | Data backup (S3 versioning) |
| CKV_AWS_18 | Art. 32(1)(d) | Access logging |

### Article 25: Data Protection by Design

| Check ID | GDPR Article | Description |
|----------|--------------|-------------|
| CKV_AWS_93 | Art. 25 | S3 public access block |
| CKV_AWS_23 | Art. 25 | Network isolation |
| CKV_AWS_20 | Art. 25 | Secure transmission |

## Usage Examples

### Scan for CIS Compliance

```bash
# CIS AWS Benchmark
checkov -d ./terraform --check CIS_AWS

# CIS Azure Benchmark
checkov -d ./terraform --check CIS_AZURE

# CIS Kubernetes Benchmark
checkov -d ./k8s --framework kubernetes --check CIS_KUBERNETES
```

### Scan for PCI-DSS Compliance

```bash
# Focus on encryption requirements (Req 3.4)
checkov -d ./terraform \
  --check CKV_AWS_19,CKV_AWS_61,CKV_AWS_7,CKV_AWS_89

# Network security (Req 1, 2)
checkov -d ./terraform \
  --check CKV_AWS_23,CKV_AWS_24,CKV_AWS_40
```

### Scan for HIPAA Compliance

```bash
# HIPAA-focused scan
checkov -d ./terraform \
  --check CKV_AWS_19,CKV_AWS_7,CKV_AWS_61,CKV_AWS_20,CKV_AWS_18,CKV_AWS_40
```

### Generate Compliance Report

```bash
# Comprehensive compliance report
checkov -d ./terraform \
  -o json --output-file-path ./compliance-report \
  --repo-id healthcare-infra \
  --check CIS_AWS,PCI_DSS,HIPAA
```

## Compliance Matrix

| Framework | Checkov Support | Common Checks | Report Format |
|-----------|-----------------|---------------|---------------|
| CIS AWS | ✓ Full | 100+ checks | JSON, CLI, SARIF |
| CIS Azure | ✓ Full | 80+ checks | JSON, CLI, SARIF |
| CIS Kubernetes | ✓ Full | 50+ checks | JSON, CLI, SARIF |
| PCI-DSS 3.2.1 | ✓ Partial | 30+ checks | JSON, CLI |
| HIPAA | ✓ Partial | 40+ checks | JSON, CLI |
| SOC 2 | ✓ Partial | 35+ checks | JSON, CLI |
| NIST 800-53 | ✓ Mapping | 60+ checks | JSON, CLI |
| GDPR | ✓ Mapping | 25+ checks | JSON, CLI |
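The check-to-framework tables are easiest to act on programmatically. As a minimal sketch (the exact JSON layout can vary between Checkov versions; this assumes the `results.failed_checks[].severity` shape used by the CLI examples in this repo, and the `sample` report below is purely illustrative), failed checks can be tallied by severity before deciding whether a compliance gate should fail:

```python
from collections import Counter


def severity_counts(report: dict) -> Counter:
    """Tally failed checks by severity from a Checkov JSON report dict."""
    failed = report.get("results", {}).get("failed_checks", [])
    return Counter((check.get("severity") or "UNKNOWN") for check in failed)


# Hypothetical report fragment for illustration only
sample = {
    "results": {
        "failed_checks": [
            {"check_id": "CKV_AWS_19", "severity": "HIGH"},
            {"check_id": "CKV_AWS_93", "severity": "CRITICAL"},
            {"check_id": "CKV_AWS_21", "severity": "MEDIUM"},
        ]
    }
}

counts = severity_counts(sample)
print(counts["CRITICAL"], counts["HIGH"])  # these counts drive the gate decision
```

The same tally can be produced with `jq` in CI, as the pipeline examples do; a small Python helper is handier when the gate logic grows beyond two thresholds.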
## Additional Resources

- [CIS Benchmarks](https://www.cisecurity.org/cis-benchmarks/)
- [PCI Security Standards](https://www.pcisecuritystandards.org/)
- [HIPAA Security Rule](https://www.hhs.gov/hipaa/for-professionals/security/index.html)
- [AICPA SOC 2](https://www.aicpa.org/soc4so)
- [NIST 800-53](https://csrc.nist.gov/publications/detail/sp/800-53/rev-5/final)
- [GDPR Portal](https://gdpr.eu/)
460
skills/devsecops/iac-checkov/references/custom_policies.md
Normal file
@@ -0,0 +1,460 @@
# Checkov Custom Policy Development Guide

Complete guide for creating organization-specific security policies in Python and YAML.

## Overview

Custom policies allow you to enforce organization-specific security requirements beyond Checkov's built-in checks. Policies can be written in:

- **Python**: Full programmatic control, graph-based analysis
- **YAML**: Simple attribute checks, easy to maintain

## Python-Based Custom Policies

### Basic Resource Check

```python
# custom_checks/require_resource_tags.py
from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
from checkov.common.models.enums import CheckResult, CheckCategories


class RequireResourceTags(BaseResourceCheck):
    def __init__(self):
        name = "Ensure all resources have required tags"
        id = "CKV_AWS_CUSTOM_001"
        supported_resources = ['aws_*']  # All AWS resources
        categories = [CheckCategories.CONVENTION]
        super().__init__(name=name, id=id, categories=categories,
                         supported_resources=supported_resources)

    def scan_resource_conf(self, conf):
        """Check if resource has required tags."""
        required_tags = ['Environment', 'Owner', 'CostCenter']

        tags = conf.get('tags')
        if not tags or not isinstance(tags, list):
            return CheckResult.FAILED

        tag_dict = tags[0] if tags else {}

        for required_tag in required_tags:
            if required_tag not in tag_dict:
                self.evaluated_keys = ['tags']
                return CheckResult.FAILED

        return CheckResult.PASSED


check = RequireResourceTags()
```

### Policy Document Check

```python
# custom_checks/s3_bucket_policy_public.py
import json

from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
from checkov.common.models.enums import CheckResult, CheckCategories


class S3BucketPolicyNotPublic(BaseResourceCheck):
    def __init__(self):
        name = "Ensure S3 bucket policy doesn't allow public access"
        id = "CKV_AWS_CUSTOM_002"
        supported_resources = ['aws_s3_bucket_policy']
        categories = [CheckCategories.IAM]
        super().__init__(name=name, id=id, categories=categories,
                         supported_resources=supported_resources)

    def scan_resource_conf(self, conf):
        """Scan S3 bucket policy for public access."""
        policy = conf.get('policy')
        if not policy:
            return CheckResult.PASSED

        try:
            policy_doc = json.loads(policy[0]) if isinstance(policy, list) else json.loads(policy)
        except (json.JSONDecodeError, TypeError):
            return CheckResult.UNKNOWN

        statements = policy_doc.get('Statement', [])
        for statement in statements:
            effect = statement.get('Effect')
            principal = statement.get('Principal', {})

            # Check for public access (Principal may be the string "*" or a dict)
            if effect == 'Allow':
                if principal == '*' or (isinstance(principal, dict) and principal.get('AWS') == '*'):
                    return CheckResult.FAILED

        return CheckResult.PASSED


check = S3BucketPolicyNotPublic()
```

### Connection-Aware Check (Graph)

```python
# custom_checks/ec2_in_private_subnet.py
from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
from checkov.common.models.enums import CheckResult, CheckCategories


class EC2InPrivateSubnet(BaseResourceCheck):
    def __init__(self):
        name = "Ensure EC2 instances are in private subnets"
        id = "CKV_AWS_CUSTOM_003"
        supported_resources = ['aws_instance']
        categories = [CheckCategories.NETWORKING]
        super().__init__(name=name, id=id, categories=categories,
                         supported_resources=supported_resources)

    def scan_resource_conf(self, conf):
        """Check if EC2 instance is in a private subnet."""
        subnet_id = conf.get('subnet_id')
        if not subnet_id:
            return CheckResult.PASSED

        # Use the graph to find the connected subnet.
        # This requires access to the graph context, and the
        # implementation depends on the Checkov version.

        return CheckResult.UNKNOWN  # Implement graph logic


check = EC2InPrivateSubnet()
```

## YAML-Based Custom Policies

### Simple Attribute Check

```yaml
# custom_checks/s3_lifecycle.yaml
metadata:
  id: "CKV_AWS_CUSTOM_004"
  name: "Ensure S3 buckets have lifecycle policies"
  category: "BACKUP_AND_RECOVERY"
  severity: "MEDIUM"

definition:
  cond_type: "attribute"
  resource_types:
    - "aws_s3_bucket"
  attribute: "lifecycle_rule"
  operator: "exists"
```

### Complex Logic

```yaml
# custom_checks/rds_multi_az.yaml
metadata:
  id: "CKV_AWS_CUSTOM_005"
  name: "Ensure RDS instances are multi-AZ for production"
  category: "BACKUP_AND_RECOVERY"
  severity: "HIGH"

definition:
  or:
    - cond_type: "attribute"
      resource_types:
        - "aws_db_instance"
      attribute: "multi_az"
      operator: "equals"
      value: true

    - and:
        - cond_type: "attribute"
          resource_types:
            - "aws_db_instance"
          attribute: "tags.Environment"
          operator: "not_equals"
          value: "production"
```

### Kubernetes Policy

```yaml
# custom_checks/k8s_service_account.yaml
metadata:
  id: "CKV_K8S_CUSTOM_001"
  name: "Ensure pods use dedicated service accounts"
  category: "IAM"
  severity: "HIGH"

definition:
  cond_type: "attribute"
  resource_types:
    - "Pod"
    - "Deployment"
    - "StatefulSet"
    - "DaemonSet"
  attribute: "spec.serviceAccountName"
  operator: "not_equals"
  value: "default"
```

## Policy Structure

### Python Policy Template

```python
#!/usr/bin/env python3
from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
from checkov.common.models.enums import CheckResult, CheckCategories


class MyCustomCheck(BaseResourceCheck):
    def __init__(self):
        # Metadata
        name = "Check description"
        id = "CKV_[PROVIDER]_CUSTOM_[NUMBER]"  # e.g., CKV_AWS_CUSTOM_001
        supported_resources = ['resource_type']  # e.g., ['aws_s3_bucket']
        categories = [CheckCategories.CATEGORY]  # See categories below
        guideline = "https://docs.example.com/security-policy"

        super().__init__(
            name=name,
            id=id,
            categories=categories,
            supported_resources=supported_resources,
            guideline=guideline
        )

    def scan_resource_conf(self, conf, entity_type=None):
        """
        Scan resource configuration for compliance.

        Args:
            conf: Resource configuration dictionary
            entity_type: Resource type (optional)

        Returns:
            CheckResult.PASSED, CheckResult.FAILED, or CheckResult.UNKNOWN
        """
        # Implementation
        if self.check_condition(conf):
            return CheckResult.PASSED

        self.evaluated_keys = ['attribute_that_failed']
        return CheckResult.FAILED

    def get_inspected_key(self):
        """Return the key that was checked."""
        return 'attribute_name'


check = MyCustomCheck()
```

### Check Categories

```python
from checkov.common.models.enums import CheckCategories

# Available categories:
CheckCategories.IAM
CheckCategories.NETWORKING
CheckCategories.ENCRYPTION
CheckCategories.LOGGING
CheckCategories.BACKUP_AND_RECOVERY
CheckCategories.CONVENTION
CheckCategories.SECRETS
CheckCategories.KUBERNETES
CheckCategories.API_SECURITY
CheckCategories.SUPPLY_CHAIN
```

## Loading Custom Policies

### Directory Structure

```
custom_checks/
├── aws/
│   ├── require_tags.py
│   ├── s3_lifecycle.yaml
│   └── rds_backups.py
├── kubernetes/
│   ├── require_resource_limits.py
│   └── security_context.yaml
└── azure/
    └── storage_encryption.py
```

### Load Policies

```bash
# Load from directory
checkov -d ./terraform --external-checks-dir ./custom_checks

# Load policies from a git repository
checkov -d ./terraform --external-checks-git https://github.com/org/policies.git

# List loaded custom checks
checkov -d ./terraform --external-checks-dir ./custom_checks --list
```

## Testing Custom Policies

### Unit Testing

```python
# tests/test_require_tags.py
import unittest

from custom_checks.require_resource_tags import RequireResourceTags
from checkov.common.models.enums import CheckResult


class TestRequireResourceTags(unittest.TestCase):
    def setUp(self):
        self.check = RequireResourceTags()

    def test_pass_with_all_tags(self):
        resource_conf = {
            'tags': [{
                'Environment': 'production',
                'Owner': 'team@example.com',
                'CostCenter': 'engineering'
            }]
        }
        result = self.check.scan_resource_conf(resource_conf)
        self.assertEqual(result, CheckResult.PASSED)

    def test_fail_missing_tag(self):
        resource_conf = {
            'tags': [{
                'Environment': 'production',
                'Owner': 'team@example.com'
                # Missing CostCenter
            }]
        }
        result = self.check.scan_resource_conf(resource_conf)
        self.assertEqual(result, CheckResult.FAILED)

    def test_fail_no_tags(self):
        resource_conf = {}
        result = self.check.scan_resource_conf(resource_conf)
        self.assertEqual(result, CheckResult.FAILED)


if __name__ == '__main__':
    unittest.main()
```

### Integration Testing

```bash
# Test against sample infrastructure
checkov -d ./tests/fixtures/terraform \
  --external-checks-dir ./custom_checks \
  --check CKV_AWS_CUSTOM_001

# Verify output format
checkov -d ./tests/fixtures/terraform \
  --external-checks-dir ./custom_checks \
  -o json | jq '.results.failed_checks[] | select(.check_id == "CKV_AWS_CUSTOM_001")'
```

## Common Patterns

### Pattern 1: Naming Convention Check

```python
import re


class ResourceNamingConvention(BaseResourceCheck):
    def scan_resource_conf(self, conf):
        """Enforce naming convention: env-app-resource"""
        pattern = r'^(dev|staging|prod)-[a-z]+-[a-z0-9-]+$'

        name = conf.get('name')
        if not name or not isinstance(name, list):
            return CheckResult.FAILED

        resource_name = name[0] if isinstance(name[0], str) else str(name[0])

        if not re.match(pattern, resource_name):
            self.evaluated_keys = ['name']
            return CheckResult.FAILED

        return CheckResult.PASSED
```
### Pattern 2: Environment-Specific Requirements
|
||||
|
||||
```python
|
||||
class ProductionEncryption(BaseResourceCheck):
|
||||
def scan_resource_conf(self, conf):
|
||||
"""Require encryption for production resources."""
|
||||
tags = conf.get('tags', [{}])[0]
|
||||
environment = tags.get('Environment', '')
|
||||
|
||||
# Only enforce for production
|
||||
if environment.lower() != 'production':
|
||||
return CheckResult.PASSED
|
||||
|
||||
# Check encryption
|
||||
encryption_enabled = conf.get('server_side_encryption_configuration')
|
||||
if not encryption_enabled:
|
||||
return CheckResult.FAILED
|
||||
|
||||
return CheckResult.PASSED
|
||||
```
|
||||
|
||||
### Pattern 3: Cost Optimization
|
||||
|
||||
```python
|
||||
class EC2InstanceSizing(BaseResourceCheck):
|
||||
def scan_resource_conf(self, conf):
|
||||
"""Prevent oversized instances in non-production."""
|
||||
tags = conf.get('tags', [{}])[0]
|
||||
environment = tags.get('Environment', '')
|
||||
|
||||
# Only restrict non-production
|
||||
if environment.lower() == 'production':
|
||||
return CheckResult.PASSED
|
||||
|
||||
instance_type = conf.get('instance_type', [''])[0]
|
||||
oversized_types = ['c5.9xlarge', 'c5.12xlarge', 'c5.18xlarge']
|
||||
|
||||
if instance_type in oversized_types:
|
||||
self.evaluated_keys = ['instance_type']
|
||||
return CheckResult.FAILED
|
||||
|
||||
return CheckResult.PASSED
|
||||
```

## Best Practices

1. **ID Convention**: Use `CKV_[PROVIDER]_CUSTOM_[NUMBER]` format
2. **Documentation**: Include guideline URL in check metadata
3. **Error Handling**: Return `CheckResult.UNKNOWN` for ambiguous cases
4. **Performance**: Minimize complex operations in scan loops
5. **Testing**: Write unit tests for all custom policies
6. **Versioning**: Track policy versions in version control
7. **Review Process**: Require security team review before deployment
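The ID convention from item 1 can also be enforced mechanically, for example in a pre-commit hook or CI step. A minimal sketch (the helper name is illustrative, not part of Checkov):

```python
import re

# Matches the CKV_[PROVIDER]_CUSTOM_[NUMBER] convention described above.
CUSTOM_ID_RE = re.compile(r'^CKV_[A-Z0-9]+_CUSTOM_\d+$')


def is_valid_custom_id(check_id: str) -> bool:
    """Return True if a custom check ID follows the naming convention."""
    return bool(CUSTOM_ID_RE.match(check_id))
```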

## Troubleshooting

### Policy Not Loading

```bash
# Debug loading (Checkov reads the LOG_LEVEL environment variable)
LOG_LEVEL=DEBUG checkov -d ./terraform --external-checks-dir ./custom_checks

# Verify syntax
python3 -m py_compile custom_checks/my_policy.py

# Check for import errors
python3 -c "import custom_checks.my_policy"
```

### Policy Not Triggering

```bash
# Verify resource type matches
checkov -d ./terraform --external-checks-dir ./custom_checks --list

# Test with specific check
LOG_LEVEL=DEBUG checkov -d ./terraform --external-checks-dir ./custom_checks --check CKV_AWS_CUSTOM_001
```

## Additional Resources

- [Checkov Custom Policies Documentation](https://www.checkov.io/3.Custom%20Policies/Custom%20Policies%20Overview.html)
- [Python Policy Examples](https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks)
- [YAML Policy Examples](https://github.com/bridgecrewio/checkov/tree/master/checkov/terraform/checks/graph_checks)
431
skills/devsecops/iac-checkov/references/suppression_guide.md
Normal file
@@ -0,0 +1,431 @@

# Checkov Suppression and Exception Handling Guide

Best practices for suppressing false positives and managing policy exceptions in Checkov.

## Suppression Methods

### Inline Suppression (Recommended)

#### Terraform

```hcl
# Single check suppression with justification
resource "aws_s3_bucket" "public_site" {
  # checkov:skip=CKV_AWS_18:Public bucket for static website hosting
  bucket = "my-public-website"
  acl    = "public-read"
}

# Multiple checks suppression
resource "aws_security_group" "legacy" {
  # checkov:skip=CKV_AWS_23:Legacy app requires open access
  # checkov:skip=CKV_AWS_24:IPv6 not supported by application
  name = "legacy-sg"

  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```

#### Kubernetes

```yaml
# Annotation-based suppression
apiVersion: v1
kind: Pod
metadata:
  name: legacy-app
  annotations:
    checkov.io/skip: CKV_K8S_16=Legacy application requires elevated privileges
spec:
  containers:
    - name: app
      image: myapp:1.0
      securityContext:
        privileged: true
```

#### CloudFormation

```yaml
Resources:
  PublicBucket:
    Type: AWS::S3::Bucket
    Metadata:
      checkov:
        skip:
          - id: CKV_AWS_18
            comment: "Public bucket for CDN origin"
    Properties:
      BucketName: my-public-bucket
      PublicAccessBlockConfiguration:
        BlockPublicAcls: false
```

### Configuration File Suppression

#### .checkov.yaml

```yaml
# .checkov.yaml (project root)
skip-check:
  - CKV_AWS_8   # Ensure CloudWatch log groups encrypted
  - CKV_K8S_43  # Image pull policy Always

# Skip specific paths
skip-path:
  - .terraform/
  - node_modules/
  - vendor/

# Severity-based soft fail
soft-fail-on:
  - LOW
  - MEDIUM

# Hard fail on critical/high only
hard-fail-on:
  - CRITICAL
  - HIGH
```

### CLI-Based Suppression

```bash
# Skip specific checks
checkov -d ./terraform --skip-check CKV_AWS_8,CKV_AWS_21

# Skip entire frameworks
checkov -d ./infra --skip-framework secrets

# Skip paths
checkov -d ./terraform --skip-path .terraform/ --skip-path vendor/
```

## Suppression Governance

### Approval Workflow

```yaml
# .github/workflows/checkov-review.yml
name: Review Checkov Suppressions

on:
  pull_request:
    paths:
      - '**.tf'
      - '**.yaml'
      - '**.yml'

jobs:
  check-suppressions:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Check for New Suppressions
        run: |
          # Count suppressions in PR
          SUPPRESSIONS=$(git diff origin/main | grep -c "checkov:skip" || true)

          if [ "$SUPPRESSIONS" -gt 0 ]; then
            echo "::warning::PR contains $SUPPRESSIONS new suppression(s)"
            echo "Security team review required"
            # Request review from security team
          fi
```

### Suppression Documentation Template

```hcl
resource "aws_security_group" "example" {
  # checkov:skip=CKV_AWS_23:TICKET-1234 - Business justification here
  # Approved by: security-team@example.com
  # Review date: 2024-01-15
  # Expiration: 2024-06-15 (review quarterly)
  #
  # Compensating controls:
  # - WAF rule blocks malicious traffic
  # - Application-level authentication required
  # - IP allow-listing at load balancer
  # - 24/7 monitoring and alerting

  name = "approved-exception"
  # ... configuration
}
```

## Suppression Best Practices

### 1. Always Provide Justification

```hcl
# ❌ BAD: No justification
resource "aws_s3_bucket" "example" {
  # checkov:skip=CKV_AWS_18
  bucket = "my-bucket"
}

# ✅ GOOD: Clear business justification
resource "aws_s3_bucket" "example" {
  # checkov:skip=CKV_AWS_18:Public bucket required for static website hosting.
  # Content is non-sensitive marketing materials. CloudFront restricts direct access.
  bucket = "marketing-website"
}
```

### 2. Document Compensating Controls

```hcl
resource "aws_security_group" "app" {
  # checkov:skip=CKV_AWS_23:Office IP range access required for developers
  #
  # Compensating controls:
  # 1. IP range limited to corporate /24 subnet (203.0.113.0/24)
  # 2. MFA required for VPN access to corporate network
  # 3. Additional application-level authentication
  # 4. Session timeout of 15 minutes
  # 5. All access logged to SIEM

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["203.0.113.0/24"]
  }
}
```

### 3. Set Expiration Dates

```hcl
resource "aws_instance" "temp" {
  # checkov:skip=CKV_AWS_8:Temporary instance for POC
  # EXPIRES: 2024-03-31
  # After expiration: Remove or apply encryption

  ami           = "ami-12345678"
  instance_type = "t3.micro"
}
```

### 4. Use Granular Suppressions

```hcl
# ❌ BAD: Suppress entire file or directory
# checkov:skip=* (Don't do this!)

# ✅ GOOD: Suppress specific checks on specific resources
resource "aws_s3_bucket" "example" {
  # checkov:skip=CKV_AWS_18:Specific reason for this resource only
  bucket = "specific-bucket"
}
```

## Exception Categories

### Legitimate Exceptions

#### 1. Public Resources by Design

```hcl
resource "aws_s3_bucket" "website" {
  # checkov:skip=CKV_AWS_18:Public bucket for static website
  # checkov:skip=CKV_AWS_93:Public access required by design
  # Content: Marketing materials (non-sensitive)
  # Access: Read-only via CloudFront

  bucket = "company-website"
}
```

#### 2. Legacy System Constraints

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: legacy-app
  annotations:
    checkov.io/skip: CKV_K8S_16=Legacy app built before containers, requires host access
    # Migration plan: TICKET-5678
    # Target date: Q2 2024
spec:
  hostNetwork: true
  containers:
    - name: legacy
      image: legacy-app:1.0
```

#### 3. Development/Testing Environments

```hcl
resource "aws_db_instance" "dev_db" {
  # checkov:skip=CKV_AWS_17:Dev environment - backups not required
  # checkov:skip=CKV_AWS_61:Dev environment - encryption overhead not needed
  # Environment: Non-production only
  # Data: Synthetic test data (no PII/PHI)

  identifier              = "dev-database"
  backup_retention_period = 0
  storage_encrypted       = false

  tags = {
    Environment = "development"
  }
}
```

### Temporary Exceptions

```hcl
resource "aws_rds_cluster" "temp_unencrypted" {
  # checkov:skip=CKV_AWS_96:Temporary exception during migration
  # TICKET: INFRA-1234
  # EXPIRES: 2024-02-15
  # PLAN: Enable encryption at rest in Phase 2 migration
  # OWNER: platform-team@example.com

  cluster_identifier = "migration-temp"
  storage_encrypted  = false
}
```

## Suppression Anti-Patterns

### ❌ Don't: Blanket Suppressions

```yaml
# BAD: Suppress all checks
skip-check:
  - "*"
```

### ❌ Don't: Suppress Without Documentation

```hcl
# BAD: No explanation
resource "aws_s3_bucket" "example" {
  # checkov:skip=CKV_AWS_18
  bucket = "my-bucket"
}
```

### ❌ Don't: Permanent Suppressions for Production

```hcl
# BAD: Permanent suppression of critical security control
resource "aws_rds_cluster" "prod" {
  # checkov:skip=CKV_AWS_96:Too expensive
  # ^ This is unacceptable for production!

  cluster_identifier = "production-db"
  storage_encrypted  = false
}
```

### ❌ Don't: Suppress High/Critical Without Review

```hcl
# DANGEROUS: Suppressing critical finding without security review
resource "aws_security_group" "prod" {
  # checkov:skip=CKV_AWS_23:Need access from anywhere
  # ^ Requires security team approval!

  ingress {
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```

## Monitoring Suppressions

### Track Suppression Metrics

```bash
# Count suppressions by type
grep -r "checkov:skip" ./terraform | \
  sed 's/.*checkov:skip=\([^:]*\).*/\1/' | \
  sort | uniq -c | sort -rn

# Find suppressions without justification
grep -r "checkov:skip=" ./terraform | \
  grep -v "checkov:skip=.*:.*"
```

### Suppression Audit Report

```python
#!/usr/bin/env python3
"""Generate suppression audit report."""

import re
import sys
from pathlib import Path
from datetime import datetime


def find_suppressions(directory):
    """Find all Checkov suppressions."""
    suppressions = []

    for file_path in Path(directory).rglob('*.tf'):
        with open(file_path) as f:
            content = f.read()

        # Find suppressions; the colon is optional so that
        # suppressions without a justification are counted too
        matches = re.findall(
            r'#\s*checkov:skip=([A-Za-z0-9_]+):?(.*)',
            content
        )

        for check_id, reason in matches:
            suppressions.append({
                'file': str(file_path),
                'check_id': check_id.strip(),
                'reason': reason.strip()
            })

    return suppressions


def generate_report(suppressions):
    """Generate markdown report."""
    print("# Checkov Suppression Audit Report")
    print(f"\nGenerated: {datetime.now().isoformat()}")
    print(f"\nTotal Suppressions: {len(suppressions)}\n")

    print("## Suppressions by Check")
    check_counts = {}
    for s in suppressions:
        check_counts[s['check_id']] = check_counts.get(s['check_id'], 0) + 1

    for check_id, count in sorted(check_counts.items(), key=lambda x: -x[1]):
        print(f"- {check_id}: {count}")

    print("\n## All Suppressions")
    for s in suppressions:
        print(f"\n### {s['file']}")
        print(f"**Check:** {s['check_id']}")
        print(f"**Reason:** {s['reason'] or '(no justification provided)'}")


if __name__ == '__main__':
    directory = sys.argv[1] if len(sys.argv) > 1 else './terraform'
    suppressions = find_suppressions(directory)
    generate_report(suppressions)
```

## Quarterly Review Process

1. **Generate Suppression Report**: List all active suppressions
2. **Review Expirations**: Check for expired temporary suppressions
3. **Validate Justifications**: Ensure reasons still apply
4. **Verify Compensating Controls**: Confirm controls are still in place
5. **Update or Remove**: Update suppressions or fix underlying issues
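Step 2 can be automated when suppressions carry the `EXPIRES: YYYY-MM-DD` comment convention shown earlier. A minimal sketch (the helper name and output format are illustrative):

```python
#!/usr/bin/env python3
"""Flag expired temporary suppressions.

Assumes the '# EXPIRES: YYYY-MM-DD' comment convention used in this guide.
"""
import re
import sys
from datetime import date
from pathlib import Path

EXPIRES_RE = re.compile(r'#\s*EXPIRES:\s*(\d{4}-\d{2}-\d{2})')


def find_expired(directory, today=None):
    """Return (file, line number, date) for every expired EXPIRES annotation."""
    today = today or date.today()
    expired = []
    for path in Path(directory).rglob('*.tf'):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            match = EXPIRES_RE.search(line)
            if match and date.fromisoformat(match.group(1)) < today:
                expired.append((str(path), lineno, match.group(1)))
    return expired


if __name__ == '__main__':
    for path, lineno, when in find_expired(sys.argv[1] if len(sys.argv) > 1 else '.'):
        print(f"{path}:{lineno}: suppression expired {when}")
```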

## Additional Resources

- [Checkov Suppression Documentation](https://www.checkov.io/2.Basics/Suppressing%20and%20Skipping%20Policies.html)
- [Security Exception Management Best Practices](https://owasp.org/www-community/Security_Exception_Management)
457
skills/devsecops/sca-trivy/SKILL.md
Normal file
@@ -0,0 +1,457 @@
---
name: sca-trivy
description: >
  Software Composition Analysis (SCA) and container vulnerability scanning using Aqua Trivy
  for identifying CVE vulnerabilities in dependencies, container images, IaC misconfigurations,
  and license compliance risks. Use when: (1) Scanning container images and filesystems for
  vulnerabilities and misconfigurations, (2) Analyzing dependencies for known CVEs across
  multiple languages (Go, Python, Node.js, Java, etc.), (3) Detecting IaC security issues
  in Terraform, Kubernetes, Dockerfile, (4) Integrating vulnerability scanning into CI/CD
  pipelines with SARIF output, (5) Generating Software Bill of Materials (SBOM) in CycloneDX
  or SPDX format, (6) Prioritizing remediation by CVSS score and exploitability.
version: 0.1.0
maintainer: SirAppSec
category: devsecops
tags: [sca, trivy, container-security, vulnerability-scanning, sbom, iac-security, dependency-scanning, cvss]
frameworks: [OWASP, CWE, NIST, PCI-DSS, SOC2]
dependencies:
  tools: [trivy, docker]
references:
  - https://aquasecurity.github.io/trivy/
  - https://owasp.org/www-project-dependency-check/
  - https://nvd.nist.gov/
  - https://www.cisa.gov/sbom
---

# Software Composition Analysis with Trivy

## Overview

Trivy is a comprehensive security scanner for containers, filesystems, and git repositories. It detects
vulnerabilities (CVEs) in OS packages and application dependencies, IaC misconfigurations, exposed secrets,
and software licenses. This skill provides workflows for vulnerability scanning, SBOM generation, CI/CD
integration, and remediation prioritization aligned with CVSS and OWASP standards.

## Quick Start

Scan a container image for vulnerabilities:

```bash
# Install Trivy
brew install trivy              # macOS
# or: apt-get install trivy     # Debian/Ubuntu
# or: docker pull aquasec/trivy:latest

# Scan container image
trivy image nginx:latest

# Scan local filesystem for dependencies
trivy fs .

# Scan IaC files for misconfigurations
trivy config .

# Generate SBOM
trivy image --format cyclonedx --output sbom.json nginx:latest
```

## Core Workflows

### Workflow 1: Container Image Security Assessment

Progress:
[ ] 1. Identify target container image (repository:tag)
[ ] 2. Run comprehensive Trivy scan with `trivy image <image-name>`
[ ] 3. Analyze vulnerability findings by severity (CRITICAL, HIGH, MEDIUM, LOW)
[ ] 4. Map CVE findings to CWE categories and OWASP references
[ ] 5. Check for available patches and updated base images
[ ] 6. Generate prioritized remediation report with upgrade recommendations

Work through each step systematically. Check off completed items.

### Workflow 2: Dependency Vulnerability Scanning

Scan project dependencies for known vulnerabilities:

```bash
# Scan filesystem for all dependencies
trivy fs --severity CRITICAL,HIGH .

# Scan specific package manifest
trivy fs --scanners vuln package-lock.json

# Generate JSON report for analysis
trivy fs --format json --output trivy-report.json .

# Generate SARIF for GitHub/GitLab integration
trivy fs --format sarif --output trivy.sarif .
```

For each vulnerability:
1. Review CVE details and CVSS score
2. Check if fixed version is available
3. Consult `references/remediation_guide.md` for language-specific guidance
4. Update dependency to patched version
5. Re-scan to validate fix

### Workflow 3: Infrastructure as Code Security

Detect misconfigurations in IaC files:

```bash
# Scan Terraform configurations
trivy config ./terraform --severity CRITICAL,HIGH

# Scan Kubernetes manifests
trivy config ./k8s --severity CRITICAL,HIGH

# Scan Dockerfile best practices
trivy config --file-patterns dockerfile:Dockerfile .

# Generate report with remediation guidance
trivy config --format json --output iac-findings.json .
```

Review findings by category:
- **Security**: Authentication, authorization, encryption
- **Compliance**: CIS benchmarks, security standards
- **Best Practices**: Resource limits, immutability, least privilege

### Workflow 4: CI/CD Pipeline Integration

#### GitHub Actions

```yaml
name: Trivy Security Scan
on: [push, pull_request]

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Run Trivy vulnerability scanner
        uses: aquasecurity/trivy-action@master
        with:
          scan-type: 'fs'
          scan-ref: '.'
          format: 'sarif'
          output: 'trivy-results.sarif'
          severity: 'CRITICAL,HIGH'

      - name: Upload results to GitHub Security
        uses: github/codeql-action/upload-sarif@v2
        with:
          sarif_file: 'trivy-results.sarif'
```

#### GitLab CI

```yaml
trivy-scan:
  stage: test
  image: aquasec/trivy:latest
  script:
    - trivy fs --exit-code 1 --severity CRITICAL,HIGH --format json --output trivy-report.json .
  artifacts:
    reports:
      dependency_scanning: trivy-report.json
    when: always
  allow_failure: false
```

Use bundled templates from `assets/ci_integration/` for additional platforms.

### Workflow 5: SBOM Generation

Generate Software Bill of Materials for supply chain transparency:

```bash
# Generate CycloneDX SBOM
trivy image --format cyclonedx --output sbom-cyclonedx.json nginx:latest

# Generate SPDX SBOM
trivy image --format spdx-json --output sbom-spdx.json nginx:latest

# SBOM for filesystem/project
trivy fs --format cyclonedx --output project-sbom.json .
```

SBOM use cases:
- **Vulnerability tracking**: Monitor dependencies for new CVEs
- **License compliance**: Identify license obligations and risks
- **Supply chain security**: Verify component provenance
- **Regulatory compliance**: Meet CISA SBOM requirements
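Once generated, an SBOM can be inventoried programmatically. A minimal sketch that assumes the CycloneDX JSON layout (a top-level `components` array with `name`/`version` fields):

```python
import json


def list_components(sbom_path):
    """Return (name, version) pairs for every component in a CycloneDX JSON SBOM."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    return [(c.get('name'), c.get('version')) for c in sbom.get('components', [])]
```

Feeding the result into a spreadsheet or ticketing system is a common first step toward the vulnerability-tracking and license-compliance use cases above.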

## Security Considerations

### Sensitive Data Handling

- **Registry credentials**: Use environment variables or credential helpers, never hardcode
- **Scan reports**: Contain vulnerability details and package versions; treat as sensitive
- **SBOM files**: May reveal internal architecture; control access appropriately
- **Secret scanning**: Enable with `--scanners secret` to detect exposed credentials in images

### Access Control

- **Container registry access**: Requires pull permissions for image scanning
- **Filesystem access**: Read permissions for dependency manifests and IaC files
- **CI/CD integration**: Secure API tokens and registry credentials in secrets management
- **Report storage**: Restrict access to vulnerability reports and SBOM artifacts

### Audit Logging

Log the following for compliance and incident response:
- Scan execution timestamps and scope (image, filesystem, repository)
- Vulnerability counts by severity level
- Policy violations and blocking decisions
- SBOM generation and distribution events
- Remediation actions and version updates
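As one way to capture these events, a scan wrapper can emit a structured record per run; the field names below are illustrative, not a Trivy output format:

```python
import json
from datetime import datetime, timezone


def audit_record(target, severity_counts, blocked):
    """Serialize one scan event for shipping to a SIEM or log pipeline."""
    return json.dumps({
        'event': 'trivy_scan',
        'timestamp': datetime.now(timezone.utc).isoformat(),
        'target': target,                    # image, path, or repository scanned
        'severity_counts': severity_counts,  # e.g. {'CRITICAL': 2, 'HIGH': 5}
        'blocked': blocked,                  # whether the pipeline gate failed the build
    })
```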

### Compliance Requirements

- **PCI-DSS 6.2**: Ensure system components are protected from known vulnerabilities
- **SOC2 CC7.1**: Detect and act upon changes that could affect security
- **NIST 800-53 SI-2**: Flaw remediation and vulnerability scanning
- **CIS Benchmarks**: Container and Kubernetes security hardening
- **OWASP Top 10 A06**: Vulnerable and Outdated Components
- **CWE-1104**: Use of Unmaintained Third-Party Components
## Bundled Resources

### Scripts (`scripts/`)

- `trivy_scan.py` - Comprehensive scanning with JSON/SARIF output and severity filtering
- `sbom_generator.py` - SBOM generation with CycloneDX and SPDX format support
- `vulnerability_report.py` - Parse Trivy output and generate remediation reports with CVSS scores
- `baseline_manager.py` - Baseline creation for tracking new vulnerabilities only

### References (`references/`)

- `scanner_types.md` - Detailed guide for vulnerability, misconfiguration, secret, and license scanning
- `remediation_guide.md` - Language and ecosystem-specific remediation strategies
- `cvss_prioritization.md` - CVSS score interpretation and vulnerability prioritization framework
- `iac_checks.md` - Complete list of IaC security checks with CIS benchmark mappings

### Assets (`assets/`)

- `trivy.yaml` - Custom Trivy configuration with security policies and ignore rules
- `ci_integration/github-actions.yml` - Complete GitHub Actions workflow with security gates
- `ci_integration/gitlab-ci.yml` - Complete GitLab CI pipeline with dependency scanning
- `ci_integration/jenkins.groovy` - Jenkins pipeline with Trivy integration
- `policy_template.rego` - OPA policy template for custom vulnerability policies
## Common Patterns

### Pattern 1: Multi-Stage Security Scanning

Comprehensive security assessment combining multiple scan types:

```bash
# 1. Scan container image for vulnerabilities
trivy image --severity CRITICAL,HIGH myapp:latest

# 2. Scan IaC for misconfigurations
trivy config ./infrastructure --severity CRITICAL,HIGH

# 3. Scan filesystem for dependency vulnerabilities
trivy fs --severity CRITICAL,HIGH ./app

# 4. Scan for exposed secrets
trivy fs --scanners secret ./app

# 5. Generate comprehensive SBOM
trivy image --format cyclonedx --output sbom.json myapp:latest
```

### Pattern 2: Baseline Vulnerability Tracking

Implement baseline scanning to track only new vulnerabilities:

```bash
# Initial scan - create baseline
trivy image --format json --output baseline.json nginx:latest

# Subsequent scans - detect new vulnerabilities
trivy image --format json --output current.json nginx:latest
./scripts/baseline_manager.py --baseline baseline.json --current current.json
```
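The comparison step can be sketched as a set difference over the two JSON reports. This assumes Trivy's JSON layout (`Results[].Vulnerabilities[].VulnerabilityID`) and is illustrative, not the bundled `baseline_manager.py` itself:

```python
import json


def vuln_keys(report_path):
    """Collect (CVE ID, package name) pairs from a Trivy JSON report."""
    with open(report_path) as f:
        report = json.load(f)
    keys = set()
    for result in report.get('Results', []):
        for vuln in result.get('Vulnerabilities') or []:
            keys.add((vuln['VulnerabilityID'], vuln.get('PkgName', '')))
    return keys


def new_vulns(baseline_path, current_path):
    """Return vulnerabilities present in the current scan but not the baseline."""
    return sorted(vuln_keys(current_path) - vuln_keys(baseline_path))
```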

### Pattern 3: License Compliance Scanning

Detect license compliance risks:

```bash
# Scan for license information
trivy image --scanners license --format json --output licenses.json myapp:latest

# Filter by license type
trivy image --scanners license --severity HIGH,CRITICAL myapp:latest
```

Review findings:
- **High Risk**: GPL, AGPL (strong copyleft)
- **Medium Risk**: LGPL, MPL (weak copyleft)
- **Low Risk**: Apache, MIT, BSD (permissive)
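The tiers above can be encoded as a simple lookup for automated gating. The SPDX identifier lists below are illustrative only and are not a legal determination:

```python
# Illustrative tier lists keyed by SPDX license identifiers.
COPYLEFT_STRONG = {'GPL-2.0-only', 'GPL-3.0-only', 'AGPL-3.0-only'}
COPYLEFT_WEAK = {'LGPL-2.1-only', 'LGPL-3.0-only', 'MPL-2.0'}
PERMISSIVE = {'Apache-2.0', 'MIT', 'BSD-2-Clause', 'BSD-3-Clause'}


def license_risk(spdx_id):
    """Map an SPDX license ID onto the risk tiers described above."""
    if spdx_id in COPYLEFT_STRONG:
        return 'HIGH'
    if spdx_id in COPYLEFT_WEAK:
        return 'MEDIUM'
    if spdx_id in PERMISSIVE:
        return 'LOW'
    return 'UNKNOWN'
```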

### Pattern 4: Custom Policy Enforcement

Apply custom security policies with OPA:

```bash
# Create Rego policy in assets/policy_template.rego
# Deny images with CRITICAL vulnerabilities or outdated packages

# Run scan with policy enforcement
trivy image --format json --output scan.json myapp:latest
trivy image --ignore-policy assets/policy_template.rego myapp:latest
```

## Integration Points

### CI/CD Integration

- **GitHub Actions**: Native `aquasecurity/trivy-action` with SARIF upload to Security tab
- **GitLab CI**: Dependency scanning report format for Security Dashboard
- **Jenkins**: Docker-based scanning with JUnit XML report generation
- **CircleCI**: Docker executor with artifact storage
- **Azure Pipelines**: Task-based integration with results publishing

### Container Platforms

- **Docker**: Image scanning before push to registry
- **Kubernetes**: Admission controllers with trivy-operator for runtime scanning
- **Harbor**: Built-in Trivy integration for registry scanning
- **AWS ECR**: Scan images on push with enhanced scanning
- **Google Artifact Registry**: Vulnerability scanning integration

### Security Tools Ecosystem

- **SIEM Integration**: Export JSON findings to Splunk, ELK, or Datadog
- **Vulnerability Management**: Import SARIF/JSON into Snyk, Qualys, or Rapid7
- **SBOM Tools**: CycloneDX and SPDX compatibility with dependency-track and GUAC
- **Policy Enforcement**: OPA/Rego integration for custom policy as code

## Troubleshooting

### Issue: High False Positive Rate

**Symptoms**: Many vulnerabilities reported that don't apply to your use case

**Solution**:
1. Use `.trivyignore` file to suppress specific CVEs with justification
2. Filter by exploitability: `trivy image --ignore-unfixed myapp:latest`
3. Apply severity filtering: `--severity CRITICAL,HIGH`
4. Review vendor-specific security advisories for false positive validation
5. See `references/false_positives.md` for common patterns

### Issue: Performance Issues on Large Images

**Symptoms**: Scans taking excessive time or high memory usage

**Solution**:
1. Use cached DB: `trivy image --cache-dir /path/to/cache myapp:latest`
2. Skip unnecessary scanners: `--scanners vuln` (exclude config, secret)
3. Use offline mode after initial DB download: `--offline-scan`
4. Increase timeout: `--timeout 30m`
5. Enable `--removed-pkgs` only when you need findings for packages deleted in later image layers

### Issue: Missing Vulnerabilities for Specific Languages

**Symptoms**: Expected CVEs not detected in application dependencies

**Solution**:
1. Verify language support: Check supported languages and file patterns
2. Ensure dependency manifests are present (package.json, go.mod, requirements.txt)
3. Include lock files for accurate version detection
4. For compiled binaries, scan source code separately
5. Consult `references/scanner_types.md` for language-specific requirements

### Issue: Registry Authentication Failures

**Symptoms**: Unable to scan private container images

**Solution**:
```bash
# Use Docker credential helper
docker login registry.example.com
trivy image registry.example.com/private/image:tag

# Or use environment variables
export TRIVY_USERNAME=user
export TRIVY_PASSWORD=pass
trivy image registry.example.com/private/image:tag

# Or pass credentials directly
trivy image --username user --password pass registry.example.com/private/image:tag
```
|
||||
|
||||
## Advanced Configuration

### Custom Trivy Configuration

Create a `trivy.yaml` configuration file:

```yaml
# trivy.yaml
vulnerability:
  type: os,library
  severity: CRITICAL,HIGH,MEDIUM
  ignorefile: .trivyignore
  ignore-unfixed: false
  skip-files:
    - "test/**"
    - "**/node_modules/**"

cache:
  dir: /tmp/trivy-cache

db:
  repository: ghcr.io/aquasecurity/trivy-db:latest

output:
  format: json
  severity-sort: true
```

Use with: `trivy image --config trivy.yaml myapp:latest`
### Trivy Ignore File

Create `.trivyignore` to suppress specific CVEs:

```
# .trivyignore
# False positive - patched in vendor fork
CVE-0000-12345

# Risk accepted by security team - JIRA-1234
CVE-0000-67890

# No fix available, compensating controls in place
CVE-0000-11111
```
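The same suppression logic can be applied after the fact when post-processing a JSON report, for example in a custom dashboard. A hedged sketch; it assumes the common Trivy JSON layout (`Results[].Vulnerabilities[].VulnerabilityID`), which should be verified against your Trivy version:

```python
def load_ignored_ids(ignore_file_text: str) -> set[str]:
    """Parse .trivyignore-style text: one CVE ID per line, '#' comments allowed."""
    ids = set()
    for line in ignore_file_text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            ids.add(line)
    return ids

def filter_report(report: dict, ignored: set[str]) -> list[dict]:
    """Return vulnerabilities from a Trivy-style JSON report that are not ignored."""
    remaining = []
    for result in report.get("Results", []):
        for vuln in result.get("Vulnerabilities", []) or []:
            if vuln.get("VulnerabilityID") not in ignored:
                remaining.append(vuln)
    return remaining
```

This keeps the ignore file as the single source of truth for accepted risk while letting downstream tooling reuse it.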
### Offline Air-Gapped Scanning

For air-gapped environments:

```bash
# On an internet-connected machine:
trivy image --download-db-only --cache-dir /path/to/db

# Transfer the cache directory to the air-gapped environment

# On the air-gapped machine:
trivy image --skip-db-update --cache-dir /path/to/db --offline-scan myapp:latest
```
## References

- [Trivy Official Documentation](https://aquasecurity.github.io/trivy/)
- [OWASP Dependency-Check](https://owasp.org/www-project-dependency-check/)
- [NVD - National Vulnerability Database](https://nvd.nist.gov/)
- [CISA SBOM Guidelines](https://www.cisa.gov/sbom)
- [CWE-1104: Use of Unmaintained Third-Party Components](https://cwe.mitre.org/data/definitions/1104.html)
- [OWASP Top 10 - Vulnerable and Outdated Components](https://owasp.org/Top10/)
502
skills/devsecops/secrets-gitleaks/SKILL.md
Normal file
@@ -0,0 +1,502 @@
---
name: secrets-gitleaks
description: >
  Hardcoded secret detection and prevention in git repositories and codebases using Gitleaks.
  Identifies passwords, API keys, tokens, and credentials through regex-based pattern matching
  and entropy analysis. Use when: (1) Scanning repositories for exposed secrets and credentials,
  (2) Implementing pre-commit hooks to prevent secret leakage, (3) Integrating secret detection
  into CI/CD pipelines, (4) Auditing codebases for compliance violations (PCI-DSS, SOC2, GDPR),
  (5) Establishing baseline secret detection and tracking new exposures, (6) Remediating
  historical secret exposures in git history.
version: 0.1.0
maintainer: SirAppSec
category: devsecops
tags: [secrets, gitleaks, secret-scanning, devsecops, ci-cd, credentials, api-keys, compliance]
frameworks: [OWASP, CWE, PCI-DSS, SOC2, GDPR]
dependencies:
  tools: [gitleaks, git]
references:
  - https://github.com/gitleaks/gitleaks
  - https://owasp.org/Top10/A07_2021-Identification_and_Authentication_Failures/
  - https://cwe.mitre.org/data/definitions/798.html
---
# Secrets Detection with Gitleaks

## Overview

Gitleaks is a secret detection tool that scans git repositories, files, and directories for hardcoded credentials, including passwords, API keys, tokens, and other sensitive information. It uses regex-based pattern matching combined with Shannon entropy analysis to identify secrets that could lead to unauthorized access if exposed.

This skill provides comprehensive guidance for integrating Gitleaks into DevSecOps workflows, from pre-commit hooks to CI/CD pipelines, with emphasis on preventing secret leakage before code reaches production.

## Quick Start

Scan the current repository for secrets:

```bash
# Install gitleaks
brew install gitleaks  # macOS
# or: docker pull zricethezav/gitleaks:latest

# Scan current git repository
gitleaks detect -v

# Scan specific directory
gitleaks detect --source /path/to/code -v

# Generate report
gitleaks detect --report-path gitleaks-report.json --report-format json
```
## Core Workflows

### 1. Repository Scanning

Scan existing repositories to identify exposed secrets:

```bash
# Full repository scan with verbose output
gitleaks detect -v --source /path/to/repo

# Scan with custom configuration
gitleaks detect --config .gitleaks.toml -v

# Generate JSON report for further analysis
gitleaks detect --report-path findings.json --report-format json

# Generate SARIF report for GitHub/GitLab integration
gitleaks detect --report-path findings.sarif --report-format sarif
```

**When to use**: Initial security audit, compliance checks, incident response.
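The JSON report is a flat array of finding objects, so a first-pass triage can be a few lines of Python. A sketch that tallies findings per rule; field names such as `RuleID` follow the Gitleaks v8 report format, but verify against your version:

```python
import json
from collections import Counter

def summarize_findings(report_json: str) -> dict[str, int]:
    """Count findings per detection rule in a Gitleaks JSON report."""
    findings = json.loads(report_json)
    return dict(Counter(f.get("RuleID", "unknown") for f in findings))
```

Running this over `findings.json` quickly shows whether a scan is dominated by one noisy rule (a candidate for allowlisting) or spread across many real exposures.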
### 2. Pre-Commit Hook Protection

Prevent secrets from being committed in the first place:

```bash
# Install pre-commit hook (run in repository root)
cat << 'EOF' > .git/hooks/pre-commit
#!/bin/sh
gitleaks protect --verbose --redact --staged
EOF

chmod +x .git/hooks/pre-commit
```

Use the bundled script for automated hook installation:

```bash
./scripts/install_precommit.sh
```

**When to use**: Developer workstation setup, team onboarding, mandatory security controls.
### 3. CI/CD Pipeline Integration

#### GitHub Actions

```yaml
name: gitleaks
on: [push, pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - uses: gitleaks/gitleaks-action@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```

#### GitLab CI

```yaml
gitleaks:
  image: zricethezav/gitleaks:latest
  stage: test
  script:
    - gitleaks detect --report-path gitleaks.json --report-format json --verbose
  artifacts:
    paths:
      - gitleaks.json
    when: always
  allow_failure: false
```

**When to use**: Automated security gates, pull request checks, release validation.
### 4. Baseline and Incremental Scanning

Establish a security baseline and track only new secrets:

```bash
# Create initial baseline
gitleaks detect --report-path baseline.json --report-format json

# Subsequent scans detect only new secrets
gitleaks detect --baseline-path baseline.json --report-path new-findings.json -v
```

**When to use**: Legacy codebase remediation, phased rollout, compliance tracking.
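Baselining compares findings by their stable fingerprints, and the same comparison can be sketched directly for custom reporting (this assumes each finding carries a `Fingerprint` field, as in Gitleaks v8 reports; check your version):

```python
import json

def new_findings(baseline_json: str, current_json: str) -> list[dict]:
    """Return findings present in the current report but absent from the baseline."""
    known = {f.get("Fingerprint") for f in json.loads(baseline_json)}
    return [f for f in json.loads(current_json) if f.get("Fingerprint") not in known]
```

The bundled `scripts/baseline_manager.py` covers this workflow end to end; the snippet above only illustrates the core idea.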
### 5. Configuration Customization

Create a custom `.gitleaks.toml` configuration:

```toml
title = "Custom Gitleaks Configuration"

[extend]
# Extend default config with custom rules
useDefault = true

[[rules]]
id = "custom-api-key"
description = "Custom API Key Pattern"
regex = '''(?i)(custom_api_key|custom_secret)[\s]*[=:][\s]*['"][a-zA-Z0-9]{32,}['"]'''
tags = ["api-key", "custom"]

[allowlist]
description = "Global allowlist"
paths = [
    '''\.md$''',          # Ignore markdown files
    '''test/fixtures/''', # Ignore test fixtures
]
stopwords = [
    '''EXAMPLE''',        # Ignore example keys
    '''PLACEHOLDER''',
]
```

Use the bundled configuration templates in `assets/`:

- `assets/config-strict.toml` - Strict detection (low false negatives)
- `assets/config-balanced.toml` - Balanced detection (recommended)
- `assets/config-custom.toml` - Template for custom rules

**When to use**: Reducing false positives, adding proprietary secret patterns, organizational standards.
## Security Considerations

### Sensitive Data Handling

- **Secret Redaction**: Always use the `--redact` flag in logs and reports to prevent accidental secret exposure
- **Report Security**: Gitleaks reports contain detected secrets - treat them as confidential and encrypt at rest
- **Git History**: Secrets detected in git history require complete removal using tools like `git filter-repo` or BFG Repo-Cleaner
- **Credential Rotation**: All exposed secrets must be rotated immediately, even after removal from code

### Access Control

- **CI/CD Permissions**: Gitleaks scans require read access to repository content and git history
- **Report Access**: Restrict access to scan reports containing sensitive findings
- **Baseline Files**: Baseline JSON files contain secret metadata - protect them with the same controls as findings

### Audit Logging

Log the following for compliance and incident response:

- Scan execution timestamps and scope (repository, branch, commit range)
- Number and types of secrets detected
- Remediation actions taken (credential rotation, commit history cleanup)
- False positive classifications and allowlist updates
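The items above can be captured as one structured record per scan, emitted as a JSON line for ingestion by a SIEM. A minimal sketch; the field set is an illustrative suggestion, not a mandated schema:

```python
import json
from datetime import datetime, timezone

def audit_entry(repo: str, branch: str, findings_count: int, actions: list[str]) -> str:
    """Build a single JSON-lines audit record for a completed secret scan."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "repository": repo,
        "branch": branch,
        "secrets_detected": findings_count,
        "remediation_actions": actions,
    }
    return json.dumps(record, sort_keys=True)
```

Note the record contains only counts and actions, never the secrets themselves, so the audit log stays safe to retain long-term.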
### Compliance Requirements

- **PCI-DSS 3.2.1**: Requirement 6.5.3 - Prevent hardcoded credentials in payment applications
- **SOC2**: CC6.1 - Logical access controls prevent unauthorized credential exposure
- **GDPR**: Article 32 - Appropriate security measures for processing personal data and credentials
- **CWE-798**: Use of Hard-coded Credentials
- **CWE-259**: Use of Hard-coded Password
- **OWASP A07:2021**: Identification and Authentication Failures
## Bundled Resources

### Scripts (`scripts/`)

- `install_precommit.sh` - Automated pre-commit hook installation with configuration prompts
- `scan_and_report.py` - Comprehensive scanning with multiple output formats and severity classification
- `baseline_manager.py` - Baseline creation, comparison, and incremental scan management

### References (`references/`)

- `detection_rules.md` - Comprehensive list of built-in Gitleaks detection rules with CWE mappings
- `remediation_guide.md` - Step-by-step secret remediation procedures, including git history cleanup
- `false_positives.md` - Common false positive patterns and allowlist configuration strategies
- `compliance_mapping.md` - Detailed mapping to PCI-DSS, SOC2, GDPR, and OWASP requirements

### Assets (`assets/`)

- `config-strict.toml` - High-sensitivity configuration (maximum detection)
- `config-balanced.toml` - Production-ready balanced configuration
- `config-custom.toml` - Template with inline documentation for custom rules
- `precommit-config.yaml` - Pre-commit framework configuration
- `github-action.yml` - Complete GitHub Actions workflow template
- `gitlab-ci.yml` - Complete GitLab CI pipeline template
## Common Patterns

### Pattern 1: Initial Repository Audit

First-time secret scanning for a security assessment:

```bash
# 1. Clone repository with full history
git clone --mirror https://github.com/org/repo.git audit-repo
cd audit-repo

# 2. Run comprehensive scan
gitleaks detect --report-path audit-report.json --report-format json -v

# 3. Generate human-readable report
./scripts/scan_and_report.py --input audit-report.json --format markdown --output audit-report.md

# 4. Review findings and classify false positives
#    (edit .gitleaks.toml to add allowlist entries)

# 5. Create baseline for future scans
cp audit-report.json baseline.json
```
### Pattern 2: Developer Workstation Setup

Protect developers from accidental secret commits:

```bash
# 1. Install gitleaks locally
brew install gitleaks  # macOS
# or use the package manager for your OS

# 2. Install pre-commit hook
./scripts/install_precommit.sh

# 3. Test the hook with a dummy commit
echo "api_key = 'EXAMPLE_KEY_12345'" > test.txt
git add test.txt
git commit -m "test"  # Should be blocked by gitleaks

# 4. Clean up the test (the commit was blocked, so just unstage)
git reset test.txt
rm test.txt
```
### Pattern 3: CI/CD Pipeline with Baseline

Progressive secret detection in continuous integration:

```bash
# In CI pipeline script:

# 1. Check if baseline exists
if [ -f ".gitleaks-baseline.json" ]; then
  # Incremental scan - only new secrets
  gitleaks detect \
    --baseline-path .gitleaks-baseline.json \
    --report-path new-findings.json \
    --report-format json \
    --exit-code 1  # Fail on new secrets
else
  # Initial scan - create baseline
  gitleaks detect \
    --report-path .gitleaks-baseline.json \
    --report-format json \
    --exit-code 0  # Don't fail on first scan
fi

# 2. Generate SARIF for GitHub Security tab
if [ -s "new-findings.json" ]; then
  gitleaks detect \
    --baseline-path .gitleaks-baseline.json \
    --report-path results.sarif \
    --report-format sarif
fi
```
### Pattern 4: Custom Rule Development

Add organization-specific secret patterns:

```toml
# Add to .gitleaks.toml

[[rules]]
id = "acme-corp-api-key"
description = "ACME Corp Internal API Key"
regex = '''(?i)acme[_-]?api[_-]?key[\s]*[=:][\s]*['"]?([a-f0-9]{40})['"]?'''
secretGroup = 1
tags = ["api-key", "acme-internal"]

[[rules]]
id = "acme-corp-database-password"
description = "ACME Corp Database Password Format"
regex = '''(?i)(db_pass|database_password)[\s]*[=:][\s]*['"]([A-Z][a-z0-9@#$%]{15,})['"]'''
secretGroup = 2
tags = ["password", "database", "acme-internal"]
```

Test custom rules with: `gitleaks detect --config .gitleaks.toml -v`
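Custom patterns can also be smoke-tested against sample strings before committing them. Gitleaks uses Go's regex engine, so treat a Python `re` check as an approximation rather than a guarantee of identical matching:

```python
import re

# The ACME API-key pattern from the rule above; Python's re is close enough
# for a smoke test, but Go/RE2 semantics can differ on edge cases.
ACME_KEY = re.compile(r"""(?i)acme[_-]?api[_-]?key[\s]*[=:][\s]*['"]?([a-f0-9]{40})['"]?""")

def matches(sample: str) -> bool:
    """Return True if the sample line would trigger the custom rule."""
    return ACME_KEY.search(sample) is not None
```

Running a small set of known true positives and known negatives through a check like this catches most regex mistakes before they reach the pipeline.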
## Integration Points

### CI/CD Integration

- **GitHub Actions**: Use `gitleaks/gitleaks-action@v2` for native integration with the Security tab
- **GitLab CI**: Docker-based scanning with artifact retention for audit trails
- **Jenkins**: Execute via Docker or an installed binary in pipeline stages
- **CircleCI**: Docker executor with orb support
- **Azure Pipelines**: Task-based integration with results publishing

### Security Tools Ecosystem

- **SIEM Integration**: Export JSON findings to Splunk, ELK, or Datadog for centralized monitoring
- **Vulnerability Management**: Import SARIF reports into Snyk, SonarQube, or Checkmarx
- **Secret Management**: Integrate findings with HashiCorp Vault or AWS Secrets Manager rotation workflows
- **Ticketing Systems**: Automated Jira/ServiceNow ticket creation for remediation tracking

### SDLC Integration

- **Design Phase**: Include secret detection requirements in security architecture reviews
- **Development**: Pre-commit hooks provide immediate feedback to developers
- **Code Review**: PR/MR checks prevent secrets from reaching main branches
- **Testing**: Scan test environments and infrastructure-as-code
- **Deployment**: Final validation gate before production release
- **Operations**: Periodic scanning of deployed configurations and logs
## Troubleshooting

### Issue: Too Many False Positives

**Symptoms**: Legitimate code patterns flagged as secrets (test fixtures, examples, placeholders)

**Solution**:
1. Review findings to identify patterns: `grep -i "example\|test\|placeholder" gitleaks-report.json`
2. Add to the allowlist in `.gitleaks.toml`:
   ```toml
   [allowlist]
   paths = ['''test/''', '''examples/''', '''\.md$''']
   stopwords = ["EXAMPLE", "PLACEHOLDER", "YOUR_API_KEY_HERE"]
   ```
3. Use commit allowlists for specific false positives:
   ```toml
   [allowlist]
   commits = ["commit-sha-here"]
   ```
4. Consult `references/false_positives.md` for common patterns
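While allowlist changes wait for review, findings can be pre-triaged offline with the same stopword idea. A sketch; `Secret` and `Match` are Gitleaks v8 report fields, so confirm them against your report format:

```python
STOPWORDS = ("EXAMPLE", "PLACEHOLDER", "YOUR_API_KEY_HERE")

def likely_false_positive(finding: dict) -> bool:
    """Flag a finding whose matched text contains a known placeholder string."""
    text = (finding.get("Secret", "") + " " + finding.get("Match", "")).upper()
    return any(word in text for word in STOPWORDS)
```

This is only a triage aid: anything it flags should still feed back into the `.gitleaks.toml` allowlist so the scanner itself stops reporting it.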
### Issue: Performance Issues on Large Repositories

**Symptoms**: Scans taking excessive time (>10 minutes), high memory usage

**Solution**:
1. Use `--log-opts` to limit git history: `gitleaks detect --log-opts="--since=2024-01-01"`
2. Scan specific branches: `gitleaks detect --log-opts="origin/main"`
3. Use the baseline approach to scan only recent changes
4. Consider a shallow clone for initial scans: `git clone --depth=1000`
5. Parallelize scans across multiple branches or subdirectories
### Issue: Pre-commit Hook Blocking Valid Commits

**Symptoms**: Developers unable to commit code containing legitimate patterns

**Solution**:
1. Add an inline comment to suppress a specific finding: `# gitleaks:allow`
2. Update the `.gitleaks.toml` allowlist for the pattern
3. Use `--redact` to safely review findings: `gitleaks protect --staged --redact`
4. Temporary bypass (use with caution): `git commit --no-verify`
5. Review with the security team if the pattern is genuinely needed
### Issue: Secrets Found in Git History

**Symptoms**: Secrets detected in old commits, already removed from current code

**Solution**:
1. Rotate compromised credentials immediately (highest priority)
2. For public repositories, consider a full history rewrite using:
   - `git filter-repo` (recommended): `git filter-repo --path-glob '*.env' --invert-paths`
   - BFG Repo-Cleaner: `bfg --delete-files credentials.json`
3. Force-push cleaned history: `git push --force`
4. Notify all contributors to rebase/re-clone
5. See `references/remediation_guide.md` for detailed procedures
6. Document the incident in the security audit log
### Issue: Custom Secret Patterns Not Detected

**Symptoms**: Organization-specific secrets not caught by default rules

**Solution**:
1. Develop the regex pattern: test at regex101.com (Golang flavor) with sample secrets
2. Add a custom rule to `.gitleaks.toml`:
   ```toml
   [[rules]]
   id = "custom-secret-id"
   description = "Description"
   regex = '''your-pattern-here'''
   secretGroup = 1  # Capture group containing the actual secret
   ```
3. Test the pattern: `gitleaks detect --config .gitleaks.toml -v --no-git`
4. Consider an entropy threshold on the rule if the pattern is ambiguous:
   ```toml
   entropy = 3.5  # Shannon entropy threshold for the secret group
   ```
5. Validate with known true positives and negatives
## Advanced Configuration

### Entropy-Based Detection

For secrets without clear patterns, use Shannon entropy analysis:

```toml
[[rules]]
id = "high-entropy-strings"
description = "High entropy strings that may be secrets"
regex = '''[a-zA-Z0-9]{32,}'''
entropy = 4.5  # Shannon entropy threshold
secretGroup = 0
```
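The threshold in the rule above is plain Shannon entropy over the matched string (bits per character), which is easy to reproduce when calibrating a value for your own key formats:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Shannon entropy in bits per character, as used for secret-likelihood thresholds."""
    if not s:
        return 0.0
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

Measuring the entropy of a handful of real (rotated) keys and of typical non-secret identifiers shows where to place the threshold so true secrets score above it and ordinary code below it.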
### Composite Rules (v8.28.0+)

Detect secrets spanning multiple lines or requiring context:

```toml
[[rules]]
id = "multi-line-secret"
description = "API key with usage context"
regex = '''api_key[\s]*='''

[[rules.composite]]
pattern = '''initialize_client'''
location = "line"  # Proximity scope
distance = 5       # Within 5 lines
```
### Global vs Rule-Specific Allowlists

```toml
# Global allowlist (highest precedence)
[allowlist]
description = "Organization-wide exceptions"
paths = ['''vendor/''', '''node_modules/''']

# Rule-specific allowlist
[[rules]]
id = "generic-api-key"
[rules.allowlist]
description = "Exceptions only for this rule"
regexes = ['''key\s*=\s*EXAMPLE''']
```
## References

- [Gitleaks Official Documentation](https://github.com/gitleaks/gitleaks)
- [OWASP A07:2021 - Identification and Authentication Failures](https://owasp.org/Top10/A07_2021-Identification_and_Authentication_Failures/)
- [CWE-798: Use of Hard-coded Credentials](https://cwe.mitre.org/data/definitions/798.html)
- [CWE-259: Use of Hard-coded Password](https://cwe.mitre.org/data/definitions/259.html)
- [CWE-321: Use of Hard-coded Cryptographic Key](https://cwe.mitre.org/data/definitions/321.html)
- [PCI-DSS Requirements](https://www.pcisecuritystandards.org/)
- [SOC2 Security Criteria](https://www.aicpa.org/interestareas/frc/assuranceadvisoryservices/aicpasoc2report.html)
9
skills/devsecops/secrets-gitleaks/assets/.gitkeep
Normal file
@@ -0,0 +1,9 @@
# Assets Directory

Place files that will be used in the output Claude produces:

- Templates
- Configuration files
- Images/logos
- Boilerplate code

These files are NOT loaded into context but copied/modified in output.
81
skills/devsecops/secrets-gitleaks/assets/config-balanced.toml
Normal file
@@ -0,0 +1,81 @@
# Gitleaks Balanced Configuration
# Production-ready configuration balancing security and developer experience
# Use for: Most production repositories

title = "Gitleaks Balanced Configuration"

[extend]
# Extend default Gitleaks rules
useDefault = true

[allowlist]
description = "Balanced allowlist for common false positives"

# Standard non-production paths
paths = [
    '''test/.*''',
    '''tests/.*''',
    '''.*/fixtures/.*''',
    '''.*/testdata/.*''',
    '''spec/.*''',
    '''examples?/.*''',
    '''docs?/.*''',
    '''\.md$''',
    '''\.rst$''',
    '''\.txt$''',
    '''node_modules/.*''',
    '''vendor/.*''',
    '''third[_-]party/.*''',
    '''\.min\.js$''',
    '''\.min\.css$''',
    '''dist/.*''',
    '''build/.*''',
    '''target/.*''',
    '''.*/mocks?/.*''',
]

# Common placeholder patterns
stopwords = [
    "example",
    "placeholder",
    "your_api_key_here",
    "your_key_here",
    "your_secret_here",
    "replace_me",
    "replaceme",
    "changeme",
    "change_me",
    "insert_key_here",
    "xxxxxx",
    "000000",
    "123456",
    "abcdef",
    "sample",
    "dummy",
    "fake",
    "test_key",
    "test_secret",
    "test_password",
    "test_token",
    "mock",
    "TODO",
]

# Public non-secrets
regexes = [
    '''-----BEGIN CERTIFICATE-----''',
    '''-----BEGIN PUBLIC KEY-----''',
    '''data:image/[^;]+;base64,''',
    '''[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}''', # UUID
]

# Manually verified false positives (add with comments)
commits = []

# Custom rules for organization-specific patterns can be added below

# Example: Allowlist template files
# [[rules]]
# id = "generic-api-key"
# [rules.allowlist]
# paths = ['''config/.*\.template$''', '''config/.*\.example$''']
178
skills/devsecops/secrets-gitleaks/assets/config-custom.toml
Normal file
@@ -0,0 +1,178 @@
# Gitleaks Custom Configuration Template
# Use this as a starting point for organization-specific detection rules

title = "Custom Gitleaks Configuration"

[extend]
# Extend default Gitleaks rules with custom rules
useDefault = true

# =============================================================================
# GLOBAL ALLOWLIST
# =============================================================================
# Global allowlists apply to ALL rules and have highest precedence

[allowlist]
description = "Global allowlist for organization-wide exceptions"

# Paths to exclude from scanning
paths = [
    # Test and documentation
    '''test/.*''',
    '''docs?/.*''',
    '''examples?/.*''',

    # Dependencies
    '''node_modules/.*''',
    '''vendor/.*''',

    # Build artifacts
    '''dist/.*''',
    '''build/.*''',
]

# Known placeholder values
stopwords = [
    "example",
    "placeholder",
    "your_key_here",
    "test",
    "mock",
    "dummy",
]

# Public non-secrets
regexes = [
    '''-----BEGIN CERTIFICATE-----''',
    '''-----BEGIN PUBLIC KEY-----''',
]

# Manually verified commits (add with explanatory comments)
commits = []

# =============================================================================
# CUSTOM DETECTION RULES
# =============================================================================
# Add organization-specific secret patterns here

# Example: Custom API Key Pattern
[[rules]]
id = "acme-corp-api-key"
description = "ACME Corp Internal API Key"
# Regex pattern to match your organization's API key format
# Use triple-quoted strings for complex patterns
regex = '''(?i)acme[_-]?api[_-]?key[\s]*[=:][\s]*['"]?([a-zA-Z0-9]{40})['"]?'''
# Capture group containing the actual secret (for entropy analysis)
secretGroup = 1
# Tags for categorization and filtering
tags = ["api-key", "acme-internal"]

# Optional: Rule-specific allowlist (lower precedence than global)
#[rules.allowlist]
#paths = ['''config/defaults\.yaml''']
#stopwords = ["DEFAULT_KEY"]

# Example: Custom Database Password Pattern
[[rules]]
id = "acme-corp-db-password"
description = "ACME Corp Database Password Format"
# Matches company-specific password format
regex = '''(?i)(db_pass|database_password)[\s]*[=:][\s]*['"]([A-Z][a-z0-9@#$%]{15,})['"]'''
secretGroup = 2
tags = ["password", "database", "acme-internal"]

# Example: High-Entropy Detection with Custom Threshold
[[rules]]
id = "high-entropy-string"
description = "High entropy string (potential secret)"
# Match strings of 32+ alphanumeric characters
regex = '''[a-zA-Z0-9+/]{32,}'''
# Shannon entropy threshold (0.0 - 8.0, higher = more random)
entropy = 4.5
# Which capture group to analyze (0 = entire match)
secretGroup = 0
tags = ["entropy", "generic"]

[rules.allowlist]
# Allowlist base64-encoded images
regexes = ['''data:image/[^;]+;base64,''']

# Example: Custom Service Account Key
[[rules]]
id = "acme-corp-service-account"
description = "ACME Corp Service Account JSON Key"
# Detect JSON structure with specific fields
regex = '''"type":\s*"acme_service_account"'''
tags = ["service-account", "acme-internal"]

# Example: Custom OAuth Token Format
[[rules]]
id = "acme-corp-oauth-token"
description = "ACME Corp OAuth Token"
# Custom token format: acme_oauth_v1_<40 hex chars>
regex = '''acme_oauth_v1_[a-f0-9]{40}'''
tags = ["oauth", "token", "acme-internal"]

# =============================================================================
# TESTING CUSTOM RULES
# =============================================================================
# Test your custom rules with:
#   gitleaks detect --config config-custom.toml -v
#
# Test against a specific file:
#   gitleaks detect --config config-custom.toml --source path/to/file --no-git
#
# Test regex patterns online:
#   https://regex101.com/ (select Golang flavor)
# =============================================================================

# =============================================================================
# ENTROPY ANALYSIS GUIDE
# =============================================================================
# Entropy values (Shannon entropy):
#   0.0 - 2.5: Very low (repeated characters, simple patterns)
#   2.5 - 3.5: Low (common words, simple sequences)
#   3.5 - 4.5: Medium (mixed case, some randomness)
#   4.5 - 5.5: High (strong randomness, likely secret)
#   5.5 - 8.0: Very high (cryptographic randomness)
#
# Recommended thresholds:
#   - API keys: 4.5+
#   - Passwords: 3.5+
#   - Tokens: 4.5+
#   - Generic secrets: 5.0+
# =============================================================================

# =============================================================================
# REGEX CAPTURE GROUPS
# =============================================================================
# Use capture groups to extract the actual secret from surrounding text:
#
#   regex = '''api_key\s*=\s*"([a-zA-Z0-9]+)"'''
#                               ^^^^^^^^^^^^^
#                               Group 1
#
#   secretGroup = 1  # Analyze only the key value, not 'api_key = ""'
#
# This improves entropy analysis accuracy and reduces false positives.
# =============================================================================

# =============================================================================
# COMPOSITE RULES (Advanced)
# =============================================================================
# Gitleaks v8.28.0+ supports composite rules for context-aware detection
# Useful for secrets that require nearby context (multi-line patterns)

#[[rules]]
#id = "composite-api-key"
#description = "API key with usage context"
#regex = '''api_key\s*='''
#
#[[rules.composite]]
#pattern = '''initialize_client'''
#location = "line"  # "line", "fragment", or "commit"
#distance = 5       # Within 5 lines
#
# This detects api_key = "..." only when "initialize_client" appears within 5 lines
# =============================================================================
48
skills/devsecops/secrets-gitleaks/assets/config-strict.toml
Normal file
@@ -0,0 +1,48 @@
# Gitleaks Strict Configuration
# High-sensitivity detection with minimal allowlisting
# Use for: Security-critical repositories, financial services, healthcare

title = "Gitleaks Strict Configuration"

[extend]
# Use all default Gitleaks rules
useDefault = true

[allowlist]
description = "Minimal allowlist - only proven false positives"

# Only allow in build artifacts and dependencies
paths = [
    '''node_modules/.*''',
    '''vendor/.*''',
    '''\.min\.js$''',
    '''\.min\.css$''',
]

# Only obvious non-secret patterns
stopwords = [
    "EXAMPLE_DO_NOT_USE",
    "PLACEHOLDER_REPLACE_ME",
]

# All commits must be manually verified before allowlisting
commits = []

# Additional strict rules for high-value targets

[[rules]]
id = "strict-env-file"
description = "Detect any .env files (should not be in repo)"
regex = '''.*'''
path = '''\.env$'''
tags = ["env-file", "strict"]

[[rules]]
id = "strict-config-secrets"
description = "Config files with potential secrets"
regex = '''(?i)(password|secret|key|token|credential)[\s]*[=:][\s]*['"]?([a-zA-Z0-9!@#$%^&*()_+\-=\[\]{};':"\\|,.<>\/?]{8,})['"]?'''
secretGroup = 2
tags = ["config", "strict"]

[rules.allowlist]
paths = ['''test/.*''']
stopwords = ["EXAMPLE"]
181
skills/devsecops/secrets-gitleaks/assets/github-action.yml
Normal file
@@ -0,0 +1,181 @@
# GitHub Actions Workflow for Gitleaks Secret Scanning
# Save as: .github/workflows/gitleaks.yml

name: Secret Scanning with Gitleaks

on:
  push:
    branches:
      - main
      - develop
      - 'release/**'
  pull_request:
    branches:
      - main
      - develop
  schedule:
    # Run daily at 2 AM UTC
    - cron: '0 2 * * *'
  workflow_dispatch: # Allow manual triggers

# Cancel in-progress runs when a new commit is pushed
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  gitleaks-scan:
    name: Scan for Secrets
    runs-on: ubuntu-latest

    permissions:
      # Required for uploading SARIF results to GitHub Security tab
      security-events: write
      # Required for checking out private repos
      contents: read

    steps:
      - name: Checkout Repository
        uses: actions/checkout@v4
        with:
          # Fetch full history for comprehensive scanning
          fetch-depth: 0

      - name: Run Gitleaks Scan
        id: gitleaks
        uses: gitleaks/gitleaks-action@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          # Optional: Use custom configuration
          # GITLEAKS_CONFIG: .gitleaks.toml

      # Optional: Generate JSON report for further processing
      - name: Generate JSON Report
        if: always() # Run even if secrets found
        run: |
          docker run --rm -v ${{ github.workspace }}:/repo \
            zricethezav/gitleaks:latest \
            detect --source /repo \
            --report-path /repo/gitleaks-report.json \
            --report-format json \
            --exit-code 0 || true

      # Optional: Upload JSON report as artifact
      - name: Upload Scan Report
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: gitleaks-report
          path: gitleaks-report.json
          retention-days: 30

      # Optional: Generate SARIF report for GitHub Security tab
      - name: Generate SARIF Report
        if: always()
        run: |
          docker run --rm -v ${{ github.workspace }}:/repo \
            zricethezav/gitleaks:latest \
            detect --source /repo \
            --report-path /repo/gitleaks.sarif \
            --report-format sarif \
            --exit-code 0 || true

      # Optional: Upload SARIF report to GitHub Security
      - name: Upload SARIF to GitHub Security
        if: always()
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: gitleaks.sarif
          category: gitleaks

      # Optional: Comment on PR with findings
      - name: Comment PR with Findings
        if: failure() && github.event_name == 'pull_request'
        uses: actions/github-script@v7
        with:
          script: |
            const fs = require('fs');
            try {
              const report = JSON.parse(fs.readFileSync('gitleaks-report.json', 'utf8'));
              const findings = report.length;

              const comment = `## 🔒 Secret Scanning Results

            ⚠️ **${findings} potential secret(s) detected!**

            Please review the findings and take immediate action:
            1. **Do not merge** this PR until secrets are removed
            2. Rotate any exposed credentials immediately
            3. Remove secrets from code and use environment variables
            4. Review the Security tab for detailed findings

            See [Secret Scanning Guide](https://github.com/${{ github.repository }}/blob/main/docs/secret-scanning.md) for remediation steps.`;

              github.rest.issues.createComment({
                issue_number: context.issue.number,
                owner: context.repo.owner,
                repo: context.repo.repo,
                body: comment
              });
            } catch (error) {
              console.log('No report file or error reading it:', error.message);
            }

      # Optional: Post to Slack on failure
      - name: Notify Slack on Failure
        if: failure()
        uses: slackapi/slack-github-action@v1
        with:
          payload: |
            {
              "text": "🚨 Secrets detected in ${{ github.repository }}",
              "blocks": [
                {
                  "type": "section",
                  "text": {
                    "type": "mrkdwn",
                    "text": "*Secret Scanning Alert*\n\nSecrets detected in repository: `${{ github.repository }}`\nBranch: `${{ github.ref_name }}`\nCommit: `${{ github.sha }}`\n\n<${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}|View Details>"
                  }
                }
              ]
            }
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
          SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK

  # Optional: Baseline scanning for incremental detection
  baseline-scan:
    name: Incremental Scan Against Baseline
    runs-on: ubuntu-latest
    if: github.event_name == 'push'

    steps:
      - name: Checkout Repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Download Existing Baseline
        continue-on-error: true
        run: |
          # Download baseline from artifact storage or S3
          # Example: aws s3 cp s3://bucket/.gitleaks-baseline.json .
          echo "Baseline download would go here"

      - name: Run Incremental Scan
        run: |
          docker run --rm -v ${{ github.workspace }}:/repo \
            zricethezav/gitleaks:latest \
            detect --source /repo \
            --baseline-path /repo/.gitleaks-baseline.json \
            --report-path /repo/new-findings.json \
            --report-format json \
            --exit-code 1 || true

      - name: Upload New Findings
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: new-findings
          path: new-findings.json
          retention-days: 90
257
skills/devsecops/secrets-gitleaks/assets/gitlab-ci.yml
Normal file
@@ -0,0 +1,257 @@
# GitLab CI Pipeline for Gitleaks Secret Scanning
# Save as: .gitlab-ci.yml or include in existing pipeline

# Define stages
stages:
  - security
  - report

# Default Docker image for security jobs
image: docker:latest

services:
  - docker:dind

variables:
  # Gitleaks Docker image
  GITLEAKS_IMAGE: zricethezav/gitleaks:latest
  # Report output path
  REPORT_PATH: gitleaks-report.json
  # SARIF output for GitLab Security Dashboard
  SARIF_PATH: gl-secret-detection-report.json

# Secret scanning job
gitleaks-scan:
  stage: security
  image: $GITLEAKS_IMAGE

  # Run on all branches and merge requests
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
    - if: '$CI_COMMIT_BRANCH =~ /^(develop|release)/'

  script:
    # Run Gitleaks scan
    - echo "Running Gitleaks secret detection..."
    - |
      gitleaks detect \
        --source . \
        --report-path $REPORT_PATH \
        --report-format json \
        --verbose || true

    # Also generate a SARIF report for the Security Dashboard
    - |
      gitleaks detect \
        --source . \
        --report-path $SARIF_PATH \
        --report-format sarif \
        --verbose || true

    # Check if secrets were found
    - |
      if [ -s "$REPORT_PATH" ] && [ "$(cat $REPORT_PATH)" != "null" ]; then
        echo "⚠️ Secrets detected! Review findings below."
        cat $REPORT_PATH | jq -r '.[] | "File: \(.File)\nLine: \(.StartLine)\nRule: \(.RuleID)\n"'
        exit 1
      else
        echo "✅ No secrets detected"
      fi

  artifacts:
    paths:
      - $REPORT_PATH
      - $SARIF_PATH
    reports:
      # GitLab Security Dashboard integration
      secret_detection: $SARIF_PATH
    when: always
    expire_in: 30 days

  # Set to true during initial rollout, then to false to block pipelines on findings
  allow_failure: false

# Optional: Incremental scanning with baseline
gitleaks-incremental:
  stage: security
  image: $GITLEAKS_IMAGE

  # Only run on merge requests
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'

  script:
    # Download baseline from artifacts or storage
    - echo "Downloading baseline..."
    - |
      if [ -f ".gitleaks-baseline.json" ]; then
        echo "Using baseline from repository"
      else
        echo "No baseline found, running full scan"
      fi

    # Run incremental scan
    - |
      if [ -f ".gitleaks-baseline.json" ]; then
        gitleaks detect \
          --source . \
          --baseline-path .gitleaks-baseline.json \
          --report-path new-findings.json \
          --report-format json \
          --exit-code 1 || true

        if [ -s "new-findings.json" ] && [ "$(cat new-findings.json)" != "null" ]; then
          echo "⚠️ New secrets detected since baseline!"
          cat new-findings.json | jq .
          exit 1
        fi
      fi

  artifacts:
    paths:
      - new-findings.json
    when: always
    expire_in: 7 days

# Optional: Create baseline on main branch
create-baseline:
  stage: security
  image: $GITLEAKS_IMAGE

  # Only run on main/master branch
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      when: manual # Manual trigger to avoid overwriting

  script:
    - echo "Creating new baseline..."
    - |
      gitleaks detect \
        --source . \
        --report-path .gitleaks-baseline.json \
        --report-format json \
        --exit-code 0 || true

  artifacts:
    paths:
      - .gitleaks-baseline.json
    expire_in: 365 days

# Optional: Generate human-readable report
generate-report:
  stage: report
  image: python:3.11-slim

  dependencies:
    - gitleaks-scan

  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'

  script:
    - pip install jinja2
    - |
      python3 << 'EOF'
      import json
      import sys
      from datetime import datetime

      try:
          with open('gitleaks-report.json', 'r') as f:
              findings = json.load(f)

          if not findings:
              print("✅ No secrets detected")
              sys.exit(0)

          print("# Gitleaks Secret Detection Report")
          print(f"\n**Generated**: {datetime.now().isoformat()}")
          print(f"**Total Findings**: {len(findings)}\n")

          for idx, finding in enumerate(findings, 1):
              print(f"\n## Finding {idx}")
              print(f"- **File**: {finding.get('File', 'unknown')}")
              print(f"- **Line**: {finding.get('StartLine', 'unknown')}")
              print(f"- **Rule**: {finding.get('RuleID', 'unknown')}")
              print(f"- **Description**: {finding.get('Description', 'unknown')}")
              print(f"- **Commit**: {finding.get('Commit', 'N/A')}\n")

      except FileNotFoundError:
          print("No report file found")
      except json.JSONDecodeError:
          print("No findings in report")
      EOF

  artifacts:
    paths:
      - gitleaks-report.json

# Optional: Comment on merge request
comment-mr:
  stage: report
  image: alpine:latest

  dependencies:
    - gitleaks-scan

  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'

  before_script:
    - apk add --no-cache curl jq

  script:
    - |
      if [ -s "$REPORT_PATH" ] && [ "$(cat $REPORT_PATH)" != "null" ]; then
        FINDING_COUNT=$(cat $REPORT_PATH | jq '. | length')

        COMMENT="## 🔒 Secret Scanning Results\n\n"
        COMMENT="${COMMENT}⚠️ **${FINDING_COUNT} potential secret(s) detected!**\n\n"
        COMMENT="${COMMENT}Please review the findings and take immediate action:\n"
        COMMENT="${COMMENT}1. **Do not merge** this MR until secrets are removed\n"
        COMMENT="${COMMENT}2. Rotate any exposed credentials immediately\n"
        COMMENT="${COMMENT}3. Remove secrets from code and use CI/CD variables\n\n"
        COMMENT="${COMMENT}See pipeline artifacts for detailed findings."

        # Post comment to merge request
        curl --request POST \
          --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
          --data-urlencode "body=$COMMENT" \
          "$CI_API_V4_URL/projects/$CI_PROJECT_ID/merge_requests/$CI_MERGE_REQUEST_IID/notes"
      fi

  allow_failure: true

# Optional: Scheduled nightly scan
nightly-scan:
  stage: security
  image: $GITLEAKS_IMAGE

  # Run on schedule only
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'

  script:
    - echo "Running comprehensive nightly secret scan..."
    - |
      gitleaks detect \
        --source . \
        --report-path nightly-scan.json \
        --report-format json \
        --verbose

  artifacts:
    paths:
      - nightly-scan.json
    when: always
    expire_in: 90 days

  # Send notifications on failure ($? is unreliable in after_script; use CI_JOB_STATUS)
  after_script:
    - |
      if [ "$CI_JOB_STATUS" != "success" ]; then
        echo "Secrets detected in nightly scan!"
        # Add notification logic (email, Slack, etc.)
      fi
@@ -0,0 +1,70 @@
# Pre-commit Framework Configuration for Gitleaks
# Install pre-commit: pip install pre-commit
# Install hooks: pre-commit install
# Run manually: pre-commit run --all-files
#
# More info: https://pre-commit.com/

repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.0 # Update to latest version: https://github.com/gitleaks/gitleaks/releases
    hooks:
      - id: gitleaks
        name: Gitleaks - Secret Detection
        description: Scan staged changes for hardcoded secrets
        entry: gitleaks protect --verbose --redact --staged
        language: system
        pass_filenames: false
        # Optional: Custom configuration
        # args: ['--config', '.gitleaks.toml']

  # Optional: Additional security hooks

  # Detect private keys
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: detect-private-key
        name: Detect Private Keys

  # Check for AWS credentials
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: detect-aws-credentials
        name: Detect AWS Credentials
        args: ['--allow-missing-credentials']

  # Prevent large files (may contain secrets)
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: check-added-large-files
        name: Check for Large Files
        args: ['--maxkb=1000']

  # Check for merge conflicts
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: check-merge-conflict
        name: Check for Merge Conflicts

  # Ensure files end with newline
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: end-of-file-fixer
        name: Fix End of Files

  # Trim trailing whitespace
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: trailing-whitespace
        name: Trim Trailing Whitespace

# Configuration for pre-commit.ci (optional CI service)
ci:
  autofix_prs: false
  autoupdate_schedule: monthly
40
skills/devsecops/secrets-gitleaks/references/EXAMPLE.md
Normal file
@@ -0,0 +1,40 @@
# Reference Document Template

This file contains detailed reference material that Claude should load only when needed.

## Table of Contents

- [Section 1](#section-1)
- [Section 2](#section-2)
- [Security Standards](#security-standards)

## Section 1

Detailed information, schemas, or examples that are too large for SKILL.md.

## Section 2

Additional reference material.

## Security Standards

### OWASP Top 10

Reference relevant OWASP categories:
- A01: Broken Access Control
- A02: Cryptographic Failures
- etc.

### CWE Mappings

Map to relevant Common Weakness Enumeration categories:
- CWE-79: Cross-site Scripting
- CWE-89: SQL Injection
- etc.

### MITRE ATT&CK

Reference relevant tactics and techniques if applicable:
- TA0001: Initial Access
- T1190: Exploit Public-Facing Application
- etc.
@@ -0,0 +1,538 @@
# Compliance Framework Mapping

Detailed mapping of Gitleaks secret detection to compliance and security frameworks.

## Table of Contents

- [OWASP Top 10](#owasp-top-10)
- [CWE (Common Weakness Enumeration)](#cwe-common-weakness-enumeration)
- [PCI-DSS](#pci-dss)
- [SOC 2](#soc-2)
- [GDPR](#gdpr)
- [NIST Cybersecurity Framework](#nist-cybersecurity-framework)
- [ISO 27001](#iso-27001)
- [HIPAA](#hipaa)
- [Compliance Reporting](#compliance-reporting)

## OWASP Top 10

### A07:2021 – Identification and Authentication Failures

**Relevance**: Hardcoded credentials lead to authentication bypass and unauthorized access.

**Gitleaks Coverage**:
- Detects hardcoded passwords, API keys, and tokens
- Identifies database connection strings with embedded credentials
- Finds SSH keys, certificates, and cryptographic secrets

**Control Implementation**:
```yaml
# CI/CD check to prevent authentication failures
name: OWASP A07 - Authentication Control
on: [push, pull_request]
jobs:
  secrets-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - name: Scan for hardcoded credentials (OWASP A07)
        uses: gitleaks/gitleaks-action@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```

**Evidence for Auditors**:
- Gitleaks scan reports (JSON/SARIF format)
- CI/CD pipeline logs showing regular scans
- Pre-commit hook installation across developer workstations
- Remediation tracking for detected secrets

### A01:2021 – Broken Access Control

**Relevance**: Exposed API keys and tokens can bypass access control mechanisms.

**Gitleaks Coverage**:
- Cloud provider credentials (AWS, GCP, Azure)
- Service account keys and OAuth tokens
- Administrative API keys

**Control Implementation**:
- Implement secret scanning before deployment
- Rotate credentials when exposure is detected
- Review cloud provider audit logs for unauthorized access

### A02:2021 – Cryptographic Failures

**Relevance**: Hardcoded cryptographic keys compromise encryption.

**Gitleaks Coverage**:
- Private keys (RSA, DSA, EC)
- JWT signing secrets
- Encryption keys in configuration files

**Evidence**:
- Detection rules for CWE-321 (Use of Hard-coded Cryptographic Key)
- Remediation procedures for exposed cryptographic material

## CWE (Common Weakness Enumeration)

### CWE-798: Use of Hard-coded Credentials

**Description**: The software contains hard-coded credentials, such as a password or cryptographic key.

**CVSS Base Score**: Typically 7.5 - 9.8 (High to Critical)

**Gitleaks Detection**:
- All API key rules
- Database connection strings
- Service account credentials
- Generic password patterns

**Remediation Mapping**:
```toml
# Tag all findings with CWE-798
[[rules]]
id = "generic-api-key"
description = "Generic API Key (CWE-798)"
regex = '''(?i)api_key\s*=\s*["']([a-zA-Z0-9]{32,})["']'''
tags = ["api-key", "CWE-798"]
```
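When reporting against CWE categories, the tagged findings can be rolled up mechanically. A minimal sketch, assuming the default Gitleaks JSON report shape where each finding carries a `Tags` array (as in the rule above):

```python
import json
from collections import Counter

def count_by_cwe(findings):
    """Tally gitleaks findings by their CWE-* tags."""
    counts = Counter()
    for finding in findings:
        for tag in finding.get("Tags", []):
            if tag.startswith("CWE-"):
                counts[tag] += 1
    return dict(counts)

# Two illustrative findings, as they would appear in gitleaks-report.json
report = json.loads(
    '[{"RuleID": "generic-api-key", "Tags": ["api-key", "CWE-798"]},'
    ' {"RuleID": "generic-password", "Tags": ["CWE-259"]}]'
)
print(count_by_cwe(report))  # {'CWE-798': 1, 'CWE-259': 1}
```

The resulting per-CWE counts feed directly into the audit evidence tables used later in this document.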
### CWE-259: Use of Hard-coded Password

**Description**: The software contains a hard-coded password.

**Gitleaks Detection**:
- Password variables in code
- Database connection strings with passwords
- Configuration files with password fields

**Example Finding**:
```json
{
  "RuleID": "generic-password",
  "Description": "Hard-coded password detected",
  "File": "config/database.py",
  "Line": 42,
  "CWE": "CWE-259"
}
```

### CWE-321: Use of Hard-coded Cryptographic Key

**Description**: The product uses a hard-coded cryptographic key.

**Gitleaks Detection**:
- Private key files (PEM format)
- JWT signing secrets
- Encryption keys in source code
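To surface CWE-321 findings explicitly in reports, a custom rule can tag cryptographic material. A hedged example for JWT signing secrets (the rule id and regex are illustrative, not part of the Gitleaks defaults):

```toml
[[rules]]
id = "jwt-signing-secret"
description = "JWT signing secret (CWE-321)"
regex = '''(?i)jwt[_-]?secret\s*[=:]\s*["']([A-Za-z0-9+/=_-]{16,})["']'''
secretGroup = 1
tags = ["jwt", "CWE-321"]
```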
### CWE-522: Insufficiently Protected Credentials

**Description**: The product transmits or stores authentication credentials in an insufficiently protected form.

**Gitleaks Coverage**: Detects credentials stored in source code (inadequate protection).

### CWE-257: Storing Passwords in a Recoverable Format

**Description**: Storing passwords in a recoverable format makes them vulnerable to theft and reuse.

**Gitleaks Coverage**: Identifies plaintext passwords in configuration and code.

## PCI-DSS

### Requirement 6.5.3: Insecure Cryptographic Storage

**Control Objective**: Protect stored cardholder data.

**Gitleaks Implementation**:
- Scan payment processing code for embedded API keys (Stripe, PayPal, etc.)
- Detect hardcoded encryption keys
- Identify database credentials used for cardholder data access

**Compliance Evidence**:
```bash
# Generate PCI-DSS compliance report
gitleaks detect \
  --source ./payment-processing \
  --report-format json \
  --report-path pci-compliance-scan.json

# Extract payment-related findings
jq '.[] | select(.Tags[] | contains("payment"))' pci-compliance-scan.json
```

### Requirement 8.2.1: Strong Cryptography for Authentication

**Control Objective**: Use strong authentication credentials.

**Gitleaks Implementation**:
- Detect weak or hardcoded authentication tokens
- Identify test credentials in production code paths

### Requirement 10.2: Logging and Monitoring

**Control Objective**: Implement automated audit trails.

**Gitleaks Implementation**:
```python
# Log all secret detection events
import logging
import json

with open('gitleaks-findings.json', 'r') as f:
    findings = json.load(f)

for finding in findings:
    logging.warning(
        "PCI-DSS Violation: Hardcoded credential detected",
        extra={
            "rule": finding["RuleID"],
            "file": finding["File"],
            "line": finding["StartLine"],
            "compliance_requirement": "PCI-DSS 6.5.3"
        }
    )
```

### PCI-DSS Reporting Template

```markdown
# PCI-DSS Requirement 6.5.3 - Secret Scanning Report

**Reporting Period**: Q1 2024
**Scan Date**: 2024-01-15
**Scope**: All repositories handling cardholder data

## Summary
- **Repositories Scanned**: 15
- **Secrets Detected**: 3
- **Remediation Status**: All resolved within 24 hours

## Findings

| Finding ID | Rule | Severity | File | Status | Remediation Date |
|------------|------|----------|------|--------|------------------|
| F001 | stripe-api-key | CRITICAL | payment/config.py | Resolved | 2024-01-15 |
| F002 | database-password | HIGH | db/setup.sql | Resolved | 2024-01-15 |
| F003 | aws-access-key | HIGH | deploy/config.yml | Resolved | 2024-01-16 |

## Control Effectiveness
✅ Automated secret scanning implemented
✅ All findings remediated within SLA
✅ Pre-commit hooks prevent new violations
```
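The "resolved within 24 hours" claim in the template above can be verified mechanically rather than asserted. A minimal sketch, assuming hypothetical ISO-8601 `detected`/`resolved` timestamps in the remediation tracking data (the template itself records dates only):

```python
from datetime import datetime, timedelta

def within_sla(detected: str, resolved: str, sla_hours: int = 24) -> bool:
    """Return True if the finding was resolved within the SLA window."""
    delta = datetime.fromisoformat(resolved) - datetime.fromisoformat(detected)
    return delta <= timedelta(hours=sla_hours)

# Hypothetical timestamps for findings F001-F003 from the report above
tracking = [
    ("F001", "2024-01-15T09:00:00", "2024-01-15T14:30:00"),
    ("F002", "2024-01-15T09:00:00", "2024-01-15T20:00:00"),
    ("F003", "2024-01-15T09:00:00", "2024-01-16T08:00:00"),
]
for finding_id, detected, resolved in tracking:
    print(finding_id, "within SLA:", within_sla(detected, resolved))
```

Running such a check in CI over the tracking export turns the "Control Effectiveness" section into generated, auditable output.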
## SOC 2

### CC6.1: Logical and Physical Access Controls

**Control Activity**: Implement controls to prevent unauthorized access to system resources.

**Gitleaks Implementation**:
- Automated detection of exposed credentials
- Pre-commit hooks to prevent credential commits
- CI/CD gates blocking deployments with secrets

**SOC 2 Evidence Package**:
1. Secret scanning policy and procedures
2. Gitleaks configuration file (`.gitleaks.toml`)
3. CI/CD pipeline configurations
4. Scan execution logs (last 12 months)
5. Remediation tracking (issue tickets)
6. Training materials for developers

### CC6.6: Logical Access - Provisioning

**Control Activity**: Provision access based on role; revoke when no longer needed.

**Gitleaks Implementation**:
- Detection of service account keys and tokens
- Audit trail of credential exposure and rotation
- Automated revocation workflows

### CC7.2: System Monitoring

**Control Activity**: Monitor systems for security events and anomalies.

**Gitleaks Implementation**:
```yaml
# Continuous monitoring workflow
name: SOC2 CC7.2 - Security Monitoring
on:
  schedule:
    - cron: '0 2 * * *' # Daily at 2 AM
jobs:
  security-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - name: Secret Detection Scan
        uses: gitleaks/gitleaks-action@v2
      - name: Report to SIEM
        run: |
          curl -X POST https://siem.company.com/api/events \
            -H "Content-Type: application/json" \
            -d @gitleaks-report.json
```

### SOC 2 Audit Response Template

```markdown
# SOC 2 Control CC6.1 - Secret Scanning Control

**Control Description**: Automated secret scanning prevents unauthorized access through exposed credentials.

**Control Design**:
1. Pre-commit hooks block credential commits at the developer workstation
2. CI/CD pipeline scans all pull requests before merge
3. Nightly scans of all production repositories
4. Automated alerting to the security team for violations

**Control Operating Effectiveness**:
- **Frequency**: Continuous (pre-commit) + Daily (scheduled scans)
- **Population**: 247 repositories, 85 developers
- **Sample Period**: January 1 - December 31, 2024
- **Samples Tested**: 52 weekly scan reports
- **Exceptions**: 0

**Evidence of Operation**:
- Attached: gitleaks-audit-log-2024.json
- Attached: remediation-tracking.csv
- Attached: developer-training-records.pdf
```

## GDPR

### Article 32: Security of Processing

**Requirement**: Implement appropriate technical measures to ensure the security of personal data.

**Gitleaks Implementation**:
- Detect API keys for services processing personal data
- Identify database credentials for systems storing personal data
- Scan for OAuth tokens with user data access scopes

**GDPR Compliance Mapping**:

| GDPR Requirement | Gitleaks Control | Evidence |
|------------------|------------------|----------|
| Art. 32(1)(a) - Pseudonymization | Detect database credentials protecting personal data | Scan reports |
| Art. 32(1)(b) - Confidentiality | Prevent credential exposure in source code | Pre-commit hooks |
| Art. 32(2) - Risk Assessment | Regular security scanning | Scan schedules |
| Art. 33 - Breach Notification | Detection triggers incident response | Alert logs |

### Data Breach Notification

If Gitleaks detects exposed credentials accessing personal data:

```bash
#!/bin/bash
# gdpr-incident-response.sh

# Assess if personal data is at risk
echo "1. Identify data accessed by exposed credential"
echo "2. Determine if data is personal data under GDPR"
echo "3. Assess likelihood of unauthorized access"

# 72-hour notification requirement
echo "If personal data breach confirmed:"
echo "- Notify supervisory authority within 72 hours"
echo "- Document: nature of breach, data categories affected, likely consequences, measures taken"
```
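The 72-hour clock in the checklist above is easy to get wrong under incident pressure, so computing the deadline explicitly is worthwhile. A small sketch, assuming timezone-aware UTC timestamps for the moment the breach is confirmed:

```python
from datetime import datetime, timedelta, timezone

NOTIFICATION_WINDOW = timedelta(hours=72)  # GDPR Art. 33

def notification_deadline(breach_confirmed_at: datetime) -> datetime:
    """Deadline for notifying the supervisory authority (GDPR Art. 33)."""
    return breach_confirmed_at + NOTIFICATION_WINDOW

confirmed = datetime(2024, 1, 15, 9, 0, tzinfo=timezone.utc)
deadline = notification_deadline(confirmed)
print(deadline.isoformat())  # 2024-01-18T09:00:00+00:00
```

Recording both timestamps in the incident ticket makes the Art. 33 evidence trail auditable.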
## NIST Cybersecurity Framework
|
||||
|
||||
### Identify (ID.AM): Asset Management
|
||||
|
||||
**Subcategory**: ID.AM-2 - Software platforms and applications are inventoried.
|
||||
|
||||
**Gitleaks Implementation**: Catalog all repositories with secret scanning coverage.
|
||||
|
||||
### Protect (PR.AC): Access Control
|
||||
|
||||
**Subcategory**: PR.AC-1 - Identities and credentials are managed.
|
||||
|
||||
**Gitleaks Implementation**:
|
||||
- Automated detection of exposed credentials
|
||||
- Credential lifecycle management (rotation after exposure)
|
||||
|
||||
### Detect (DE.CM): Security Continuous Monitoring
|
||||
|
||||
**Subcategory**: DE.CM-4 - Malicious code is detected.
|
||||
|
||||
**Gitleaks Implementation**: Secrets considered "malicious" when hardcoded.
|
||||
|
||||
### Respond (RS.AN): Analysis
|
||||
|
||||
**Subcategory**: RS.AN-1 - Notifications are investigated.
|
||||
|
||||
**Gitleaks Implementation**: Alert triage and investigation procedures.
|
||||
|
||||
### Recover (RC.RP): Recovery Planning
|
||||
|
||||
**Subcategory**: RC.RP-1 - Recovery plan is executed during or after an event.
|
||||
|
||||
**Gitleaks Implementation**: Credential rotation and git history cleanup procedures.

## ISO 27001

### A.9.2.4: Management of Secret Authentication Information

**Control**: Allocation of secret authentication information shall be controlled through a formal management process.

**Gitleaks Implementation**:
- Detect deviations from the secret management process (hardcoded secrets)
- Enforce secret management policy through pre-commit hooks

### A.9.4.3: Password Management System

**Control**: Password management systems shall be interactive and ensure quality passwords.

**Gitleaks Implementation**: Prevent password hardcoding in source code.

### A.12.6.1: Management of Technical Vulnerabilities

**Control**: Obtain information about technical vulnerabilities and take appropriate measures.

**Gitleaks Implementation**: Continuous vulnerability scanning for credential exposure.

## HIPAA

### § 164.312(a)(1): Access Control

**Standard**: Implement technical policies to allow only authorized access to ePHI.

**Gitleaks Implementation**:
- Detect credentials for systems accessing ePHI
- Prevent unauthorized access through exposed credentials

### § 164.308(a)(1)(ii)(D): Information System Activity Review

**Standard**: Implement procedures to regularly review records of information system activity.

**Gitleaks Implementation**:

```bash
# Weekly HIPAA compliance review
gitleaks detect \
  --source ./healthcare-systems \
  --report-format json \
  --report-path hipaa-weekly-scan.json

# Review findings for ePHI access credentials
jq '.[] | select(.Tags[] | contains("database") or contains("api-key"))' \
  hipaa-weekly-scan.json
```

### § 164.312(b): Audit Controls

**Standard**: Implement hardware, software, and procedural mechanisms that record and examine activity in information systems.

**Gitleaks Implementation**: Maintain an audit trail of secret detection events.

## Compliance Reporting

### Automated Compliance Report Generation

```python
#!/usr/bin/env python3
"""Generate compliance report from Gitleaks findings."""

import json
import sys
from datetime import datetime

# Compliance framework mappings
COMPLIANCE_MAPPINGS = {
    "CWE-798": ["OWASP-A07", "PCI-DSS-6.5.3", "SOC2-CC6.1", "ISO27001-A.9.2.4"],
    "CWE-259": ["OWASP-A07", "PCI-DSS-8.2.1", "SOC2-CC6.1", "ISO27001-A.9.4.3"],
    "CWE-321": ["OWASP-A02", "PCI-DSS-6.5.3", "ISO27001-A.12.3.1"],
}


def generate_compliance_report(findings_file, framework):
    """Generate a compliance-specific report."""
    with open(findings_file, "r") as f:
        findings = json.load(f)

    # Filter findings relevant to the framework. Mapped entries carry
    # control suffixes (e.g. "PCI-DSS-6.5.3"), so match by prefix rather
    # than exact list membership.
    relevant_findings = []
    for finding in findings:
        cwe = finding.get("CWE", "")
        mapped = COMPLIANCE_MAPPINGS.get(cwe, [])
        if any(entry.startswith(framework) for entry in mapped):
            relevant_findings.append(finding)

    # Generate report
    report = {
        "framework": framework,
        "generated": datetime.now().isoformat(),
        "total_findings": len(relevant_findings),
        "findings": relevant_findings,
        "compliance_status": "NON-COMPLIANT" if relevant_findings else "COMPLIANT",
    }

    return report


if __name__ == "__main__":
    if len(sys.argv) != 3:
        print("Usage: compliance_report.py <findings.json> <framework>")
        print("Frameworks: OWASP, PCI-DSS, SOC2, ISO27001, GDPR, HIPAA")
        sys.exit(1)

    report = generate_compliance_report(sys.argv[1], sys.argv[2])
    print(json.dumps(report, indent=2))
```

### Usage

```bash
# Generate PCI-DSS specific report
./compliance_report.py gitleaks-findings.json PCI-DSS > pci-dss-report.json

# Generate SOC2 specific report
./compliance_report.py gitleaks-findings.json SOC2 > soc2-report.json
```

### Compliance Dashboard Metrics

Track these KPIs for compliance reporting:

```yaml
metrics:
  - name: "Secret Detection Coverage"
    description: "Percentage of repositories with secret scanning enabled"
    target: 100%

  - name: "Mean Time to Remediation (MTTR)"
    description: "Average time from detection to credential rotation"
    target: < 4 hours

  - name: "False Positive Rate"
    description: "Percentage of findings classified as false positives"
    target: < 10%

  - name: "Pre-commit Hook Adoption"
    description: "Percentage of developers with hooks installed"
    target: > 95%

  - name: "Scan Frequency"
    description: "Scans per repository per month"
    target: > 30 (daily)
```
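
The MTTR metric above can be computed directly from detection and rotation timestamps. This is a minimal sketch; the timestamp pairs are hypothetical remediation-ticket data, and real inputs would come from your ticketing system.

```python
# MTTR sketch: mean hours from secret detection to credential rotation.
# The event timestamps below are hypothetical placeholders.
from datetime import datetime


def mttr_hours(events):
    """events: list of (detected_iso, rotated_iso) ISO-8601 timestamp pairs."""
    deltas = [
        (datetime.fromisoformat(rotated) - datetime.fromisoformat(detected)).total_seconds() / 3600
        for detected, rotated in events
    ]
    return sum(deltas) / len(deltas)


events = [
    ("2024-01-10T09:00:00", "2024-01-10T11:30:00"),  # 2.5 hours
    ("2024-01-12T14:00:00", "2024-01-12T19:00:00"),  # 5.0 hours
]
print(f"MTTR: {mttr_hours(events):.2f} hours")  # → MTTR: 3.75 hours
```

Compare the result against the < 4 hours target when building the dashboard.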

## Audit Preparation Checklist

- [ ] Configure Gitleaks across all in-scope repositories
- [ ] Implement CI/CD secret scanning gates
- [ ] Deploy pre-commit hooks to developer workstations
- [ ] Establish remediation procedures and SLAs
- [ ] Create audit trail (scan logs, remediation tickets)
- [ ] Generate compliance-specific reports
- [ ] Document control design and operating effectiveness
- [ ] Prepare evidence package for auditors
- [ ] Train team on secret management policies
- [ ] Schedule regular compliance reviews

276 skills/devsecops/secrets-gitleaks/references/detection_rules.md Normal file
@@ -0,0 +1,276 @@

# Gitleaks Detection Rules Reference

Comprehensive reference of built-in Gitleaks detection rules with CWE mappings and remediation guidance.

## Table of Contents

- [Cloud Provider Credentials](#cloud-provider-credentials)
- [Version Control Systems](#version-control-systems)
- [API Keys and Tokens](#api-keys-and-tokens)
- [Database Credentials](#database-credentials)
- [Private Keys](#private-keys)
- [Generic Patterns](#generic-patterns)

## Cloud Provider Credentials

### AWS Access Key ID
- **Rule ID**: `aws-access-token`
- **Pattern**: `AKIA[0-9A-Z]{16}`
- **CWE**: CWE-798 (Use of Hard-coded Credentials)
- **Severity**: HIGH
- **Description**: AWS Access Key ID for programmatic access
- **Remediation**: Rotate via AWS IAM console, use AWS Secrets Manager or IAM roles

### AWS Secret Access Key
- **Rule ID**: `aws-secret-key`
- **Pattern**: `(?i)aws(.{0,20})?[\'\"][0-9a-zA-Z\/+]{40}[\'\"]`
- **CWE**: CWE-798
- **Severity**: CRITICAL
- **Description**: AWS Secret Access Key paired with Access Key ID
- **Remediation**: Immediate rotation required, review CloudTrail logs for unauthorized access

### GCP API Key
- **Rule ID**: `gcp-api-key`
- **Pattern**: `AIza[0-9A-Za-z\\-_]{35}`
- **CWE**: CWE-798
- **Severity**: HIGH
- **Description**: Google Cloud Platform API key
- **Remediation**: Delete and regenerate in GCP Console, review API usage logs

### GCP Service Account
- **Rule ID**: `gcp-service-account`
- **Pattern**: `\"type\": \"service_account\"`
- **CWE**: CWE-798
- **Severity**: CRITICAL
- **Description**: GCP service account JSON key file
- **Remediation**: Delete service account key, use Workload Identity where possible

### Azure Storage Account Key
- **Rule ID**: `azure-storage-key`
- **Pattern**: `(?i)azure.*[\'\"][0-9a-zA-Z\/+]{88}[\'\"]`
- **CWE**: CWE-798
- **Severity**: CRITICAL
- **Description**: Azure Storage Account access key
- **Remediation**: Regenerate keys in Azure Portal, use Azure Key Vault

### DigitalOcean Token
- **Rule ID**: `digitalocean-token`
- **Pattern**: `dop_v1_[a-f0-9]{64}`
- **CWE**: CWE-798
- **Severity**: HIGH
- **Description**: DigitalOcean personal access token
- **Remediation**: Revoke token in DigitalOcean console, create new token

## Version Control Systems

### GitHub Personal Access Token
- **Rule ID**: `github-pat`
- **Pattern**: `ghp_[0-9a-zA-Z]{36}`
- **CWE**: CWE-798
- **Severity**: HIGH
- **Description**: GitHub personal access token (classic)
- **Remediation**: Revoke in GitHub Settings > Developer settings, review audit log

### GitHub OAuth Token
- **Rule ID**: `github-oauth`
- **Pattern**: `gho_[0-9a-zA-Z]{36}`
- **CWE**: CWE-798
- **Severity**: HIGH
- **Description**: GitHub OAuth access token
- **Remediation**: Revoke OAuth app authorization, regenerate token

### GitHub Fine-Grained Token
- **Rule ID**: `github-fine-grained-pat`
- **Pattern**: `github_pat_[0-9a-zA-Z]{22}_[0-9a-zA-Z]{59}`
- **CWE**: CWE-798
- **Severity**: HIGH
- **Description**: GitHub fine-grained personal access token
- **Remediation**: Revoke in GitHub Settings, review resource access scope

### GitLab Personal Access Token
- **Rule ID**: `gitlab-pat`
- **Pattern**: `glpat-[0-9a-zA-Z\\-_]{20}`
- **CWE**: CWE-798
- **Severity**: HIGH
- **Description**: GitLab personal access token
- **Remediation**: Revoke in GitLab User Settings > Access Tokens

### Bitbucket App Password
- **Rule ID**: `bitbucket-app-password`
- **Pattern**: `(?i)bitbucket.*[\'\"][0-9a-zA-Z]{16}[\'\"]`
- **CWE**: CWE-798
- **Severity**: HIGH
- **Description**: Bitbucket app-specific password
- **Remediation**: Revoke in Bitbucket Personal Settings > App passwords

## API Keys and Tokens

### Stripe API Key
- **Rule ID**: `stripe-api-key`
- **Pattern**: `(?i)(sk|pk)_(test|live)_[0-9a-zA-Z]{24,}`
- **CWE**: CWE-798
- **Severity**: CRITICAL (live), HIGH (test)
- **Description**: Stripe API secret or publishable key
- **Remediation**: Roll keys in Stripe Dashboard, review payment transactions

### Twilio API Key
- **Rule ID**: `twilio-api-key`
- **Pattern**: `SK[0-9a-fA-F]{32}`
- **CWE**: CWE-798
- **Severity**: HIGH
- **Description**: Twilio API key
- **Remediation**: Delete key in Twilio Console, create new key

### SendGrid API Key
- **Rule ID**: `sendgrid-api-key`
- **Pattern**: `SG\\.[0-9A-Za-z\\-_]{22}\\.[0-9A-Za-z\\-_]{43}`
- **CWE**: CWE-798
- **Severity**: HIGH
- **Description**: SendGrid API key
- **Remediation**: Delete in SendGrid Settings > API Keys, update applications

### Slack Token
- **Rule ID**: `slack-token`
- **Pattern**: `xox[baprs]-[0-9]{10,13}-[0-9]{10,13}-[a-zA-Z0-9]{24,}`
- **CWE**: CWE-798
- **Severity**: HIGH
- **Description**: Slack bot, app, or user token
- **Remediation**: Regenerate in Slack App Settings, rotate token

### Slack Webhook
- **Rule ID**: `slack-webhook`
- **Pattern**: `https://hooks\\.slack\\.com/services/T[a-zA-Z0-9_]+/B[a-zA-Z0-9_]+/[a-zA-Z0-9_]+`
- **CWE**: CWE-798
- **Severity**: MEDIUM
- **Description**: Slack incoming webhook URL
- **Remediation**: Regenerate webhook in Slack App Settings

### npm Token
- **Rule ID**: `npm-access-token`
- **Pattern**: `npm_[0-9a-zA-Z]{36}`
- **CWE**: CWE-798
- **Severity**: HIGH
- **Description**: npm access token
- **Remediation**: Revoke in npm Account Settings, check package publish history

### PyPI Token
- **Rule ID**: `pypi-upload-token`
- **Pattern**: `pypi-AgEIcHlwaS5vcmc[0-9A-Za-z\\-_]{50,}`
- **CWE**: CWE-798
- **Severity**: HIGH
- **Description**: PyPI upload token
- **Remediation**: Delete token in PyPI Account Settings, verify package uploads

## Database Credentials

### PostgreSQL Connection String
- **Rule ID**: `postgres-connection-string`
- **Pattern**: `postgres(ql)?://[a-zA-Z0-9]+:[a-zA-Z0-9]+@[a-zA-Z0-9.-]+:[0-9]+/[a-zA-Z0-9_-]+`
- **CWE**: CWE-798
- **Severity**: CRITICAL
- **Description**: PostgreSQL database connection string with embedded credentials
- **Remediation**: Change database password, load the connection string from environment variables

### MySQL Connection String
- **Rule ID**: `mysql-connection-string`
- **Pattern**: `mysql://[a-zA-Z0-9]+:[a-zA-Z0-9]+@[a-zA-Z0-9.-]+:[0-9]+/[a-zA-Z0-9_-]+`
- **CWE**: CWE-259
- **Severity**: CRITICAL
- **Description**: MySQL database connection string with embedded credentials
- **Remediation**: Rotate database password immediately, review access logs

### MongoDB Connection String
- **Rule ID**: `mongodb-connection-string`
- **Pattern**: `mongodb(\+srv)?://[a-zA-Z0-9]+:[a-zA-Z0-9]+@[a-zA-Z0-9.-]+`
- **CWE**: CWE-798
- **Severity**: CRITICAL
- **Description**: MongoDB connection string with credentials
- **Remediation**: Change MongoDB user password, enable IP whitelisting

### Redis URL
- **Rule ID**: `redis-url`
- **Pattern**: `redis://:[a-zA-Z0-9]+@[a-zA-Z0-9.-]+:[0-9]+`
- **CWE**: CWE-798
- **Severity**: HIGH
- **Description**: Redis connection URL with password
- **Remediation**: Change Redis password via CONFIG SET, use ACLs

## Private Keys

### RSA Private Key
- **Rule ID**: `rsa-private-key`
- **Pattern**: `-----BEGIN RSA PRIVATE KEY-----`
- **CWE**: CWE-321 (Use of Hard-coded Cryptographic Key)
- **Severity**: CRITICAL
- **Description**: RSA private key in PEM format
- **Remediation**: Generate new key pair, revoke associated certificates, audit access

### SSH Private Key
- **Rule ID**: `ssh-private-key`
- **Pattern**: `-----BEGIN (EC|DSA|OPENSSH) PRIVATE KEY-----`
- **CWE**: CWE-321
- **Severity**: CRITICAL
- **Description**: SSH private key
- **Remediation**: Remove from authorized_keys on all servers, generate new key

### PGP Private Key
- **Rule ID**: `pgp-private-key`
- **Pattern**: `-----BEGIN PGP PRIVATE KEY BLOCK-----`
- **CWE**: CWE-321
- **Severity**: CRITICAL
- **Description**: PGP/GPG private key
- **Remediation**: Revoke key on keyservers, generate new key pair

### JWT Token
- **Rule ID**: `jwt`
- **Pattern**: `eyJ[A-Za-z0-9_-]{10,}\\.[A-Za-z0-9_-]{10,}\\.[A-Za-z0-9_-]{10,}`
- **CWE**: CWE-798
- **Severity**: HIGH
- **Description**: JSON Web Token (may contain sensitive claims)
- **Remediation**: Invalidate token, check token expiration, rotate signing secret

## Generic Patterns

### Generic API Key
- **Rule ID**: `generic-api-key`
- **Pattern**: `(?i)(api_key|apikey|api-key)[\s]*[=:][\s]*[\'\"]?[a-zA-Z0-9]{32,}[\'\"]?`
- **CWE**: CWE-798
- **Severity**: MEDIUM
- **Description**: Generic API key pattern
- **Remediation**: Rotate credential based on service documentation

### Generic Secret
- **Rule ID**: `generic-secret`
- **Pattern**: `(?i)(secret|password|passwd|pwd)[\s]*[=:][\s]*[\'\"]?[a-zA-Z0-9!@#$%^&*]{16,}[\'\"]?`
- **CWE**: CWE-259
- **Severity**: MEDIUM
- **Description**: Generic secret or password pattern
- **Remediation**: Move to environment variable or secret management system

### High Entropy String
- **Rule ID**: `high-entropy`
- **Pattern**: `[a-zA-Z0-9]{32,}`
- **Entropy**: 4.5+
- **CWE**: CWE-798
- **Severity**: LOW (requires validation)
- **Description**: High-entropy string that may be a credential
- **Remediation**: Validate if actual secret, rotate if necessary
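
The token-format patterns in this reference can be sanity-checked locally before writing custom rules around them. A minimal sketch; the sample values are synthetic stand-ins, not real credentials:

```python
# Check a few of the patterns listed above against synthetic samples.
# None of the sample strings are real credentials.
import re

PATTERNS = {
    "aws-access-token": r"AKIA[0-9A-Z]{16}",
    "github-pat": r"ghp_[0-9a-zA-Z]{36}",
    "digitalocean-token": r"dop_v1_[a-f0-9]{64}",
}

samples = {
    "aws-access-token": "AKIAABCDEFGHIJKLMNOP",      # AKIA + 16 uppercase chars
    "github-pat": "ghp_" + "a1B2" * 9,               # ghp_ + 36 alphanumerics
    "digitalocean-token": "dop_v1_" + "0f" * 32,     # dop_v1_ + 64 hex chars
}

for rule_id, pattern in PATTERNS.items():
    hit = re.search(pattern, samples[rule_id])
    print(f"{rule_id}: {'match' if hit else 'no match'}")
```

Gitleaks compiles rules with Go's regexp package (RE2); for simple character-class patterns like these, Python's `re` behaves identically, which makes this a convenient smoke test.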

## Usage in Configuration

Add these rule IDs to your `.gitleaks.toml` allowlist if needed:

```toml
[allowlist]
description = "Allow specific rules in test files"
paths = ['''test/''']
rules = ["generic-api-key", "generic-secret"]
```

## CWE Reference

- **CWE-798**: Use of Hard-coded Credentials
- **CWE-259**: Use of Hard-coded Password
- **CWE-321**: Use of Hard-coded Cryptographic Key
- **CWE-522**: Insufficiently Protected Credentials
- **CWE-257**: Storing Passwords in a Recoverable Format

598 skills/devsecops/secrets-gitleaks/references/false_positives.md Normal file
@@ -0,0 +1,598 @@

# False Positives Management

Strategies for managing false positives in Gitleaks secret detection.

## Table of Contents

- [Understanding False Positives](#understanding-false-positives)
- [Allowlist Strategies](#allowlist-strategies)
- [Common False Positive Patterns](#common-false-positive-patterns)
- [Configuration Examples](#configuration-examples)
- [Best Practices](#best-practices)

## Understanding False Positives

False positives occur when legitimate code patterns match secret detection rules.

### Categories of False Positives

1. **Example/Placeholder Values**: Documentation and examples using fake credentials
2. **Test Fixtures**: Test data with credential-like patterns
3. **Non-Secret Constants**: Configuration values that match patterns but aren't sensitive
4. **Generated Code**: Auto-generated code with high-entropy strings
5. **Comments and Documentation**: Explanatory text matching patterns

### Impact Assessment

Before allowlisting, verify the finding is truly a false positive:

```bash
# Check whether the flagged value decodes as base64 (many real tokens do)
echo "api_key_here" | base64 -d >/dev/null 2>&1 && echo "valid base64"

# Test whether the credential is still active (use a harmless endpoint)
curl -H "Authorization: Bearer <token>" https://api.service.com/test

# Check git history for when the value was added
git log -p --all -S "flagged_value"

# Review context around the detection
git show <commit-sha>:<file-path>
```

## Allowlist Strategies

### 1. Path-Based Allowlisting

Exclude entire directories or file patterns:

```toml
[allowlist]
description = "Exclude test and documentation files"
paths = [
    '''test/.*''',           # All test directories
    '''tests/.*''',          # Alternative test directory name
    '''.*/fixtures/.*''',    # Test fixtures anywhere
    '''examples/.*''',       # Example code
    '''docs/.*''',           # Documentation
    '''.*\.md$''',           # Markdown files
    '''.*\.rst$''',          # reStructuredText files
    '''.*_test\.go$''',      # Go test files
    '''.*\.test\.js$''',     # JavaScript test files
    '''.*\.spec\.ts$''',     # TypeScript spec files
]
```
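
Before committing path patterns like these, it helps to check which files they actually cover. A minimal sketch; Gitleaks compiles these patterns with Go's regexp package, but for basic patterns like the ones above Python's `re` matches the same way, and the candidate paths are hypothetical examples:

```python
# Sanity-check which repository paths the allowlist patterns would cover.
# Candidate paths are hypothetical; patterns are a subset of those above.
import re

path_patterns = [r"test/.*", r".*/fixtures/.*", r".*\.md$", r".*\.spec\.ts$"]

candidates = [
    "test/unit/auth_test.py",        # under test/ → allowlisted
    "src/app/fixtures/sample.json",  # fixtures dir → allowlisted
    "docs/setup.md",                 # markdown → allowlisted
    "src/billing/invoice.ts",        # production code → still scanned
]

for path in candidates:
    allowed = any(re.search(p, path) for p in path_patterns)
    print(f"{path}: {'allowlisted' if allowed else 'scanned'}")
```

Note that unanchored patterns like `test/.*` match anywhere in the path (e.g. `src/contest/...` would not match, but `src/test/...` would), so anchor with `^` when you mean "top-level directory only".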

### 2. Stopword Allowlisting

Filter out known placeholder values:

```toml
[allowlist]
description = "Common placeholder values"
stopwords = [
    "example",
    "placeholder",
    "your_api_key_here",
    "your_secret_here",
    "REPLACEME",
    "CHANGEME",
    "xxxxxx",
    "000000",
    "123456",
    "abcdef",
    "sample",
    "dummy",
    "fake",
    "test_key",
    "mock_token",
]
```
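
Conceptually, stopword filtering suppresses a candidate match when the matched value contains any stopword. This is a minimal simulation of that behavior (not Gitleaks' actual implementation), with hypothetical candidate strings:

```python
# Minimal simulation of stopword suppression: a candidate secret is
# dropped when it contains any stopword (compared case-insensitively here).
STOPWORDS = ["example", "placeholder", "changeme", "dummy"]


def suppressed(secret):
    s = secret.lower()
    return any(word.lower() in s for word in STOPWORDS)


candidates = ["api_key_EXAMPLE_123", "sk_live_9f3a7c1d2e", "CHANGEME-token"]
for c in candidates:
    print(f"{c}: {'suppressed' if suppressed(c) else 'reported'}")
```

This substring behavior is exactly why short stopwords like `"test"` or `"key"` are dangerous: they silently suppress any real secret that happens to contain them.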

### 3. Commit-Based Allowlisting

Allowlist specific commits after manual verification:

```toml
[allowlist]
description = "Verified false positives"
commits = [
    "a1b2c3d4e5f6",  # Initial test fixtures - verified 2024-01-15
    "f6e5d4c3b2a1",  # Documentation examples - verified 2024-01-16
]
```

Add a comment explaining why each commit is allowlisted.

### 4. Regex Allowlisting

Allowlist specific patterns:

```toml
[allowlist]
description = "Pattern-based allowlist"
regexes = [
    '''example_api_key_[0-9]+''',      # Example keys with numeric suffix
    '''key\s*=\s*["']EXAMPLE["']''',   # Explicitly marked examples
    '''(?i)test_?password_?[0-9]*''',  # Test passwords
    '''(?i)dummy.*secret''',           # Dummy secrets
]
```

### 5. Rule-Specific Allowlisting

Create exceptions for specific rules only:

```toml
[[rules]]
id = "generic-api-key"
description = "Generic API Key"
regex = '''(?i)api_key\s*=\s*["']([a-zA-Z0-9]{32})["']'''

[rules.allowlist]
description = "Allow generic API key pattern in specific contexts"
paths = ['''config/defaults\.yaml''']
regexes = ['''api_key\s*=\s*["']example''']
```

### 6. Global vs Rule Allowlists

Global allowlists override rule-specific ones:

```toml
# Global allowlist - highest precedence
[allowlist]
description = "Organization-wide exceptions"
paths = ['''vendor/''', '''node_modules/''']

# Rule-specific allowlist
[[rules]]
id = "custom-secret"

[rules.allowlist]
description = "Exceptions only for this rule"
paths = ['''config/template\.yml''']
```

## Common False Positive Patterns

### 1. Documentation Examples

**Problem**: README and documentation contain example credentials.

**Solution**:
```toml
[allowlist]
paths = [
    '''README\.md$''',
    '''CONTRIBUTING\.md$''',
    '''docs/.*\.md$''',
    '''.*\.example$''',   # .env.example files
    '''.*\.template$''',  # Template files
    '''.*\.sample$''',    # Sample configurations
]

stopwords = [
    "example.com",
    "user@example.org",
    "YOUR_API_KEY",
]
```

### 2. Test Fixtures

**Problem**: Test data contains credential-like strings for testing credential handling.

**Solution**:
```toml
[allowlist]
paths = [
    '''test/fixtures/.*''',
    '''spec/fixtures/.*''',
    '''.*/testdata/.*''',       # Go convention
    '''.*/mocks/.*''',
    '''cypress/fixtures/.*''',  # Cypress test data
]

# Or use inline comments in code:
# password = "test_password_123"  # gitleaks:allow
```

### 3. Generated Code

**Problem**: Code generators produce high-entropy identifiers.

**Solution**:
```toml
[allowlist]
description = "Generated code"
paths = [
    '''.*\.pb\.go$''',       # Protocol buffer generated code
    '''.*_generated\..*''',  # Generated file marker
    '''node_modules/.*''',   # Dependencies
    '''vendor/.*''',         # Vendored dependencies
    '''dist/.*''',           # Build output
    '''build/.*''',
]
```

### 4. Configuration Templates

**Problem**: Config templates with placeholder values match patterns.

**Solution**:
```toml
[allowlist]
paths = [
    '''config/.*\.template''',
    '''templates/.*''',
    '''.*\.tpl$''',
    '''.*\.tmpl$''',
]

stopwords = [
    "REPLACE_WITH_YOUR",
    "CONFIGURE_ME",
    "SET_THIS_VALUE",
]
```

### 5. Base64 Encoded Strings

**Problem**: Non-secret base64 data is flagged due to high entropy.

**Solution**:
```toml
# Increase the entropy threshold to reduce false positives
[[rules]]
id = "high-entropy-base64"
regex = '''[a-zA-Z0-9+/]{40,}={0,2}'''
entropy = 5.5  # Increased from the default 4.5
```

Or allowlist specific patterns:
```toml
[allowlist]
regexes = [
    '''data:image/[^;]+;base64,''',     # Base64-encoded images
    '''-----BEGIN CERTIFICATE-----''',  # Public certificates (not private keys)
]
```

### 6. Public Keys and Certificates

**Problem**: Public keys are detected, even though they are not secrets.

**Solution**:
```toml
[allowlist]
regexes = [
    '''-----BEGIN PUBLIC KEY-----''',
    '''-----BEGIN CERTIFICATE-----''',
    '''-----BEGIN X509 CERTIFICATE-----''',
]

# But DO NOT allowlist:
# -----BEGIN PRIVATE KEY-----
# -----BEGIN RSA PRIVATE KEY-----
```

### 7. UUIDs and Identifiers

**Problem**: UUIDs match high-entropy patterns.

**Solution**:
```toml
[allowlist]
regexes = [
    '''[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}''',  # UUID
    '''[0-9a-f]{24}''',  # MongoDB ObjectId
]
```

Or adjust entropy detection:
```toml
[[rules]]
id = "generic-high-entropy"
entropy = 6.0  # Only flag very high entropy
```
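
The entropy thresholds used throughout this section are Shannon entropy in bits per character. A short calculator makes the numbers concrete:

```python
# Shannon entropy (bits per character), the measure behind the
# entropy thresholds above (default ~4.5; 6.0 flags only very random strings).
import math
from collections import Counter


def shannon_entropy(s):
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())


print(f"{shannon_entropy('aaaaaaaaaaaaaaaa'):.2f}")  # repeated char → 0.00
print(f"{shannon_entropy('0123456789abcdef'):.2f}")  # 16 distinct chars → 4.00
print(f"{shannon_entropy('8f3Kq9Zt2Lp7Xw4N'):.2f}")  # random-looking token
```

A string's entropy is capped at `log2(length)` bits per character (a 16-character string can never exceed 4.0), which is why entropy rules are paired with minimum-length patterns like `{32,}` rather than applied to short tokens.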

## Configuration Examples

### Minimal Configuration

Start with broad allowlists and refine over time:

```toml
title = "Minimal Gitleaks Configuration"

[extend]
useDefault = true  # Use all built-in rules

[allowlist]
description = "Broad allowlist for initial rollout"
paths = [
    '''test/.*''',
    '''.*\.md$''',
    '''vendor/.*''',
    '''node_modules/.*''',
]

stopwords = [
    "example",
    "test",
    "mock",
    "dummy",
]
```

### Strict Configuration

Minimize false positives with targeted allowlists:

```toml
title = "Strict Gitleaks Configuration"

[extend]
useDefault = true

[allowlist]
description = "Minimal allowlist - verify all exceptions"

# Only allow specific known false positives
paths = [
    '''docs/api-examples\.md''',     # API documentation with examples
    '''test/fixtures/auth\.json''',  # Authentication test fixtures
]

# Specific known placeholder values
stopwords = [
    "YOUR_API_KEY_HERE",
    "sk_test_example_key_123456789",
]

# Manually verified commits
commits = [
    "abc123def456",  # Test fixtures added - verified 2024-01-15 by security@company.com
]
```

### Balanced Configuration

Balance detection sensitivity with operational overhead:

```toml
title = "Balanced Gitleaks Configuration"

[extend]
useDefault = true

[allowlist]
description = "Balanced allowlist"

# Common non-secret paths
paths = [
    '''test/fixtures/.*''',
    '''spec/fixtures/.*''',
    '''.*\.md$''',
    '''docs/.*''',
    '''examples/.*''',
    '''vendor/.*''',
    '''node_modules/.*''',
]

# Common placeholders
stopwords = [
    "example",
    "placeholder",
    "your_key_here",
    "replace_me",
    "changeme",
    "test",
    "dummy",
    "mock",
]

# Public non-secrets
regexes = [
    '''-----BEGIN CERTIFICATE-----''',
    '''-----BEGIN PUBLIC KEY-----''',
    '''data:image/[^;]+;base64,''',
]
```

## Best Practices

### 1. Document Allowlist Decisions

Always add comments explaining why patterns are allowlisted:

```toml
[allowlist]
description = "Verified false positives - reviewed 2024-01-15"

# Test fixtures created during initial test suite development
# Contains only example credentials for testing credential validation
paths = ['''test/fixtures/credentials\.json''']

# Documentation examples using clearly fake values
# All examples prefixed with "example_" or "test_"
stopwords = ["example_", "test_"]
```

### 2. Regular Allowlist Review

Schedule periodic reviews:

```bash
#!/bin/bash
# review-allowlist.sh

echo "Gitleaks Allowlist Review"
echo "========================="
echo ""

# Show allowlisted paths
echo "Allowlisted paths:"
grep -A 10 "^\[allowlist\]" .gitleaks.toml | grep "paths = "

# Show allowlisted commits
echo ""
echo "Allowlisted commits:"
grep -A 10 "^\[allowlist\]" .gitleaks.toml | grep "commits = "

# Check that allowlisted commits still exist
# (they may have been removed in a history rewrite)
git rev-parse --verify abc123def456 2>/dev/null || echo "WARNING: Commit abc123def456 not found"
```

### 3. Use Inline Annotations Sparingly

For one-off false positives, use inline comments. The `gitleaks:allow` annotation must appear on the same line as the flagged value:

```python
# This is a test password for unit tests only
TEST_PASSWORD = "test_password_123"  # gitleaks:allow
```

**Warning**: Overuse of inline annotations indicates a poorly tuned configuration.

### 4. Version Control Your Configuration

Track changes to `.gitleaks.toml`:

```bash
git log -p .gitleaks.toml

# See who allowlisted what and when
git blame .gitleaks.toml
```

### 5. Test Allowlist Changes

Before committing allowlist changes:

```bash
# Test the configuration
gitleaks detect --config .gitleaks.toml -v

# Verify a specific file is now allowed
gitleaks detect --config .gitleaks.toml --source test/fixtures/credentials.json --no-git

# Verify a secret is still caught in production-style code
echo 'api_key = "sk_live_actual_key"' > /tmp/test_detection.py
gitleaks detect --config .gitleaks.toml --source /tmp/test_detection.py --no-git
```

### 6. Separate Allowlists by Environment

Use different configurations for different contexts:

```bash
# Strict config for production code
gitleaks detect --config .gitleaks.strict.toml --source src/

# Lenient config for test code
gitleaks detect --config .gitleaks.lenient.toml --source test/
```

### 7. Monitor False Positive Rate

Track metrics over time:

```bash
# Total findings before the allowlist
# (gitleaks exits non-zero when leaks are found, hence "|| true")
gitleaks detect --report-format json --report-path raw-findings.json 2>/dev/null || true
TOTAL=$(jq 'length' raw-findings.json)

# Findings after applying the allowlist
gitleaks detect --config .gitleaks.toml --report-format json --report-path filtered-findings.json 2>/dev/null || true
AFTER_FILTER=$(jq 'length' filtered-findings.json)

# Calculate the reduction
echo "False positive reduction: $(($TOTAL - $AFTER_FILTER)) / $TOTAL"
```

**Target**: < 10% false positive rate for good developer experience.
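
The reduction computed above can be expressed as a rate, under the (simplifying) assumption that everything the allowlist suppresses was a false positive. The counts below are hypothetical:

```python
# Rate form of the false-positive reduction above; counts are hypothetical
# and assume every allowlist-suppressed finding was a false positive.
def false_positive_rate(total_findings, after_allowlist):
    """Fraction of raw findings suppressed by the allowlist."""
    suppressed = total_findings - after_allowlist
    return suppressed / total_findings if total_findings else 0.0


rate = false_positive_rate(total_findings=120, after_allowlist=111)
print(f"Suppressed as false positives: {rate:.1%}")  # → 7.5%
```

At 7.5%, this hypothetical repository would meet the < 10% target; a rate well above it signals the configuration needs tuning, not more allowlisting.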

### 8. Security Review for New Allowlists

Require security team approval for:
- New allowlisted paths in `src/` or production code
- New allowlisted commits (verify manually first)
- Changes to rule-specific allowlists
- New stopwords that could mask real secrets
|
||||
|
||||
### 9. Avoid Overly Broad Patterns

**Bad** (too broad):
```toml
[allowlist]
paths = ['''.*''']  # Disables all detection!
stopwords = ["key", "secret"]  # Suppresses findings that contain real secrets
```

**Good** (specific):
```toml
[allowlist]
paths = ['''test/unit/.*\.test\.js$''']  # Specific test directory
stopwords = ["example_key", "test_secret"]  # Specific placeholders
```

### 10. Escape Special Characters

When using regex patterns, escape properly:

```toml
[allowlist]
regexes = [
    '''api\.example\.com''',  # Literal dot
    '''config\[\'key\'\]''',  # Literal brackets and quotes
]
```

## Troubleshooting False Positives

### Issue: Can't Identify Source of False Positive

```bash
# Run with verbose output
gitleaks detect -v | grep "RuleID"

# Get detailed finding information
gitleaks detect --report-format json | jq '.[] | {file: .File, line: .StartLine, rule: .RuleID}'

# View context around the first detection (5 lines either side)
gitleaks detect --report-format json | jq -r '.[0] | "\(.File) \(.StartLine)"' | \
  while read -r file line; do sed -n "$((line - 5)),$((line + 5))p" "$file"; done
```

### Issue: Allowlist Not Working

```bash
# Verify config is loaded
gitleaks detect --config .gitleaks.toml -v 2>&1 | grep "config"

# Check regex syntax
echo "test_string" | grep -E 'your_regex_pattern'

# Test path matching
echo "test/fixtures/file.json" | grep -E 'test/fixtures/.*'
```

### Issue: Too Many False Positives

1. **Export findings**: `gitleaks detect --report-format json > findings.json`
2. **Analyze patterns**: `jq -r '.[].File' findings.json | sort | uniq -c | sort -rn`
3. **Group by rule**: `jq -r '.[].RuleID' findings.json | sort | uniq -c | sort -rn`
4. **Create targeted allowlists** based on analysis

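The same grouping can be done in Python when the counts need further post-processing. A minimal sketch; the `File` and `RuleID` field names follow the gitleaks JSON report format:

```python
import json
from collections import Counter

def top_noise_sources(findings: list[dict], key: str = "RuleID", n: int = 5):
    """Return the n most frequent values of `key` across findings."""
    return Counter(f.get(key, "unknown") for f in findings).most_common(n)

# Typical use:
#   findings = json.load(open("findings.json"))
#   print(top_noise_sources(findings))            # noisiest rules
#   print(top_noise_sources(findings, key="File"))  # noisiest files
```

The rules and files at the top of these lists are the best candidates for a targeted allowlist entry.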
## False Positive vs Real Secret

When unsure, err on the side of caution:

| Indicator | False Positive | Real Secret |
|-----------|----------------|-------------|
| Location | Test/docs/examples | Production code |
| Pattern | "example", "test", "mock" | No such indicators |
| Entropy | Low/medium | High |
| Format | Incomplete/truncated | Complete/valid |
| Context | Educational comments | Functional code |
| Git history | Added in test commits | Added furtively |

**When in doubt**: Treat as real secret and investigate.

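The entropy row of this table can be checked mechanically. A minimal sketch of Shannon entropy (bits per character); randomly generated keys typically score well above human-readable placeholders, though any threshold you pick is illustrative, not authoritative:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character of s (0.0 for empty or uniform strings)."""
    if not s:
        return 0.0
    counts = Counter(s)
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

# Placeholders like "test_secret" score low;
# a random key approaches log2(alphabet size) bits per character.
```

Use this only as a triage signal alongside the other indicators; short real secrets can score low, and long random test fixtures can score high.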
# Secret Remediation Guide

Comprehensive procedures for remediating exposed secrets detected by Gitleaks.

## Table of Contents

- [Immediate Response](#immediate-response)
- [Remediation Workflow](#remediation-workflow)
- [Git History Cleanup](#git-history-cleanup)
- [Cloud Provider Specific](#cloud-provider-specific)
- [Database Credentials](#database-credentials)
- [API Keys and Tokens](#api-keys-and-tokens)
- [Post-Remediation](#post-remediation)

## Immediate Response

When secrets are detected, follow this priority order:

### 1. Assess Exposure (0-15 minutes)

**Questions to answer immediately:**
- Is the repository public or private?
- Has the commit been pushed to remote?
- How long has the secret been exposed?
- What systems does this credential access?

**Actions:**
```bash
# Check whether the commit has been pushed
git log origin/main..HEAD  # any commits listed here exist only locally

# Check repository visibility
gh repo view --json visibility

# Check commit age
git log -1 --format="%ar" <commit-sha>
```

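The age check above can be turned into a precise exposure window for the incident report. A minimal sketch, assuming the timestamp comes from `git log -1 --format=%aI <commit-sha>` (strict ISO-8601 author date):

```python
from datetime import datetime, timezone

def exposure_hours(commit_iso: str) -> float:
    """Hours elapsed since a commit, given its ISO-8601 author date (%aI)."""
    committed = datetime.fromisoformat(commit_iso)
    return (datetime.now(timezone.utc) - committed).total_seconds() / 3600
```

Record this number in the incident timeline; it also feeds the decision matrix in the Git History Cleanup section.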
### 2. Rotate Credentials (0-30 minutes)

**CRITICAL**: Rotate the exposed credential immediately, regardless of exposure duration.

Priority order:
1. **Production credentials** - Immediate rotation
2. **Payment/financial systems** - Immediate rotation
3. **Customer data access** - Immediate rotation
4. **Development/test credentials** - Rotate within 24 hours

### 3. Review Access Logs (30-60 minutes)

Check for unauthorized access:
- Cloud provider audit logs (CloudTrail, Cloud Audit Logs, Activity Log)
- Application logs showing authentication attempts
- Database connection logs
- API usage logs

### 4. Remove from Code (0-24 hours)

Remove the secret from current code and, where warranted, from git history.

## Remediation Workflow

### Step 1: Rotate the Credential

**Before removing it from code**, rotate the credential so it is already invalid by the time the fix is pushed.

#### Cloud Providers

**AWS**:
```bash
# Deactivate compromised key
aws iam update-access-key \
  --access-key-id AKIA... \
  --status Inactive \
  --user-name username

# Create new key
aws iam create-access-key --user-name username

# Delete old key after updating applications
aws iam delete-access-key \
  --access-key-id AKIA... \
  --user-name username
```

**GCP**:
```bash
# Delete service account key
gcloud iam service-accounts keys delete KEY_ID \
  --iam-account=SERVICE_ACCOUNT_EMAIL

# Create new key
gcloud iam service-accounts keys create new-key.json \
  --iam-account=SERVICE_ACCOUNT_EMAIL
```

**Azure**:
```bash
# Regenerate storage account key
az storage account keys renew \
  --account-name ACCOUNT_NAME \
  --key primary

# List keys to verify
az storage account keys list \
  --account-name ACCOUNT_NAME
```

#### API Tokens

**GitHub**:
1. Navigate to Settings > Developer settings > Personal access tokens
2. Find the compromised token (check "Last used" column)
3. Click "Delete"
4. Generate new token with minimal required scopes

**Stripe**:
1. Log into Stripe Dashboard
2. Navigate to Developers > API keys
3. Click "Roll" on the compromised key
4. Update all applications with new key

**Generic API Key**:
1. Access provider's console/dashboard
2. Locate API key management
3. Revoke/delete compromised key
4. Generate new key
5. Update applications
6. Test connectivity

### Step 2: Remove from Current Code

Replace hardcoded secrets with environment variables or secret management:

**Before** (insecure):
```python
API_KEY = "sk_live_51ABC123..."
db_password = "MyP@ssw0rd123"
```

**After** (secure):
```python
import os

API_KEY = os.environ.get("STRIPE_API_KEY")
if not API_KEY:
    raise ValueError("STRIPE_API_KEY environment variable not set")

db_password = os.environ.get("DB_PASSWORD")
```

**Using secret management**:
```python
from azure.keyvault.secrets import SecretClient
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()
client = SecretClient(vault_url="https://myvault.vault.azure.net/", credential=credential)

db_password = client.get_secret("database-password").value
```

### Step 3: Commit the Fix

```bash
# Add changes
git add .

# Commit with clear message
git commit -m "refactor: Move API credentials to environment variables

- Replace hardcoded Stripe API key with environment variable
- Replace database password with a secret-manager reference
- Add validation for required environment variables

Addresses: Secret exposure detected by Gitleaks scan"

# Push
git push origin main
```

## Git History Cleanup

If secrets are in pushed commits, consider removing them from git history.

### Decision Matrix

| Scenario | Action | Reason |
|----------|--------|--------|
| Public repo, secret exposed | **Mandatory** history rewrite | Secret is public knowledge |
| Private repo, < 24 hours, < 5 collaborators | **Recommended** history rewrite | Minimal disruption |
| Private repo, > 1 week, > 10 collaborators | **Optional** - rotate only | High coordination cost |
| Production repo with CI/CD | **Coordinate carefully** | May break automation |

### Method 1: git-filter-repo (Recommended)

Install:
```bash
pip install git-filter-repo
```

Remove a specific file from all history:
```bash
# Backup first
git clone --mirror <repo-url> backup-repo.git

# Remove file
git filter-repo --path config/secrets.yaml --invert-paths

# Force push
git push origin --force --all
```

Remove secrets matching a pattern:
```bash
# Use a regex replacement rule
git filter-repo --replace-text <(echo 'regex:sk_live_[a-zA-Z0-9]{24}==>REDACTED')
```

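Before running a destructive history rewrite, it is worth verifying that the replacement regex matches the leaked value and nothing else. A minimal sketch using Python's `re` with the same pattern shown above (the sample keys are fabricated):

```python
import re

# Same pattern as the filter-repo replacement rule above
PATTERN = re.compile(r"sk_live_[a-zA-Z0-9]{24}")

def redact(text: str) -> str:
    """Apply the same substitution the history rewrite will perform."""
    return PATTERN.sub("REDACTED", text)
```

Run `redact()` over a few known-leaky lines and a few clean ones; if the output is wrong here, the rewrite would be wrong too, and rewrites are much harder to undo.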
### Method 2: BFG Repo-Cleaner

Download:
```bash
# macOS
brew install bfg

# Or download the JAR from https://rtyley.github.io/bfg-repo-cleaner/
```

Remove a specific file:
```bash
# Clone mirror
git clone --mirror <repo-url> repo-mirror.git
cd repo-mirror.git

# Remove file
bfg --delete-files secrets.env

# Clean up
git reflog expire --expire=now --all
git gc --prune=now --aggressive

# Force push
git push
```

Remove secrets by pattern:
```bash
# Create replacements.txt
echo "PASSWORD1==>***REMOVED***" > replacements.txt
echo "sk_live_51ABC==>***REMOVED***" >> replacements.txt

# Run BFG
bfg --replace-text replacements.txt repo-mirror.git
```

### Method 3: Interactive Rebase (Small Changes)

For recent commits not yet widely distributed:

```bash
# Rebase last N commits
git rebase -i HEAD~5

# In the editor, mark commits to 'edit'
# When stopped at each commit:
git rm config/secrets.yaml
git commit --amend --no-edit
git rebase --continue

# Force push
git push --force-with-lease
```

### Post-Rewrite Coordination

After rewriting history:

1. **Notify team immediately**:
   ```text
   URGENT: Git history rewritten to remove exposed credentials.

   Action required for all developers:
   1. Commit/stash any local changes
   2. Run: git fetch origin && git reset --hard origin/main
   3. Delete and re-clone if issues persist

   Contact security team with questions.
   ```

2. **Update CI/CD**:
   - Invalidate old caches
   - May need to reconfigure webhooks
   - Update any hardcoded commit references

3. **Update branch protection**:
   - May need to temporarily disable it
   - Re-enable after the force push completes

## Cloud Provider Specific

### AWS

**Check for unauthorized access**:
```bash
# List recent API calls for the compromised user
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=Username,AttributeValue=compromised-user \
  --max-results 50 \
  --start-time $(date -u -d '7 days ago' +%Y-%m-%dT%H:%M:%S)
```

**Revoke all sessions**:
```bash
# Attach an inline policy denying all actions
aws iam put-user-policy \
  --user-name compromised-user \
  --policy-name DenyAll \
  --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Deny","Action":"*","Resource":"*"}]}'
```

### GCP

**Check audit logs**:
```bash
gcloud logging read "protoPayload.authenticationInfo.principalEmail=SERVICE_ACCOUNT_EMAIL" \
  --limit 100 \
  --format json
```

**Disable service account**:
```bash
gcloud iam service-accounts disable SERVICE_ACCOUNT_EMAIL
```

### Azure

**Review activity logs**:
```bash
az monitor activity-log list \
  --start-time 2024-01-01T00:00:00Z \
  --resource-id /subscriptions/SUBSCRIPTION_ID
```

**Revoke access**:
```bash
# Regenerate keys
az storage account keys renew \
  --account-name STORAGE_ACCOUNT \
  --key primary
```

## Database Credentials

### PostgreSQL

```sql
-- Change password
ALTER USER app_user WITH PASSWORD 'new_secure_password';

-- View recent connections
SELECT datname, usename, client_addr, backend_start
FROM pg_stat_activity
WHERE usename = 'app_user'
ORDER BY backend_start DESC;

-- Kill active connections (if suspicious)
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE usename = 'app_user' AND client_addr != 'trusted_ip';
```

### MySQL

```sql
-- Change password
ALTER USER 'app_user'@'%' IDENTIFIED BY 'new_secure_password';
FLUSH PRIVILEGES;

-- View recent connections
SELECT * FROM information_schema.PROCESSLIST
WHERE USER = 'app_user';

-- Kill connections
KILL CONNECTION process_id;
```

### MongoDB

```javascript
// Change password
use admin
db.changeUserPassword("app_user", "new_secure_password")

// View recent operations
db.currentOp({ "active": true })

// Kill an operation
db.killOp(opid)
```

## API Keys and Tokens

### GitHub

**Audit unauthorized access**:
```bash
# List recent public events for the account (replace USERNAME)
gh api /users/USERNAME/events/public | jq '.[] | {type, repo: .repo.name, created_at}'
```

**Revoke all tokens** (if the account is compromised):
1. Settings > Developer settings > Personal access tokens
2. Select all tokens
3. Click "Delete"

### Slack

**Check workspace audit logs**:
1. Go to workspace settings (admin required)
2. Navigate to Logs > Audit Logs
3. Filter by token usage

**Regenerate token**:
1. Go to api.slack.com/apps
2. Select your app
3. Navigate to OAuth & Permissions
4. Click "Regenerate" on the token

## Post-Remediation

### 1. Implement Prevention

**Pre-commit hooks**:
```bash
# Install Gitleaks pre-commit hook
cd /path/to/repo
cat << 'EOF' > .git/hooks/pre-commit
#!/bin/sh
gitleaks protect --verbose --redact --staged
EOF
chmod +x .git/hooks/pre-commit
```

**CI/CD checks**:
```yaml
# .github/workflows/secrets-scan.yml
name: Secret Scanning
on: [push, pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - uses: gitleaks/gitleaks-action@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```

### 2. Update Secret Management

Migrate to proper secret management:

**Environment variables** (minimal):
```bash
# .env (never commit!)
DATABASE_URL=postgresql://user:pass@host:5432/db
API_KEY=sk_live_...

# .gitignore
.env
.env.local
```

**Secret management services**:
- AWS: Secrets Manager, Systems Manager Parameter Store
- GCP: Secret Manager
- Azure: Key Vault
- HashiCorp: Vault
- Kubernetes: Secrets

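For local development, a `.env` file like the one above can be loaded with a few lines of stdlib Python. This is a simplified sketch; libraries such as python-dotenv handle quoting, export prefixes, and other edge cases more robustly:

```python
import os

def load_env(path: str = ".env") -> None:
    """Read KEY=VALUE lines into os.environ, skipping comments and blanks.

    Existing environment variables are not overwritten (setdefault), so real
    deployment configuration always wins over the local file.
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

Call `load_env()` once at startup in development; in production, rely on the platform's secret manager instead of a file on disk.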
### 3. Document Incident

Create an incident report including:
- **Timeline**: When the secret was committed, detected, remediated
- **Exposure**: Duration, repository visibility, access scope
- **Impact**: Systems accessed, data at risk, unauthorized activity
- **Response**: Rotation completed, logs reviewed, history cleaned
- **Prevention**: Controls implemented to prevent recurrence

### 4. Team Training

Conduct training on:
- Using environment variables and secret management
- Pre-commit hooks and local scanning
- Recognizing secrets in code review
- Incident response procedures

### 5. Compliance Notifications

If required by regulations:
- **GDPR**: Notify the supervisory authority within 72 hours if personal data is at risk
- **PCI-DSS**: Notify card brands and processor if payment data is affected
- **SOC2**: Document in the compliance report; may trigger an audit
- **HIPAA**: Notify covered entities if PHI is exposed

## Prevention Checklist

- [ ] Credential rotated and old credential deactivated
- [ ] Access logs reviewed for unauthorized activity
- [ ] Secret removed from current code
- [ ] Git history cleaned (if applicable)
- [ ] Team notified of credential change
- [ ] Applications updated with new credential
- [ ] Pre-commit hooks installed
- [ ] CI/CD secret scanning enabled
- [ ] Secret management solution implemented
- [ ] Incident documented
- [ ] Compliance notifications sent (if required)
- [ ] Team training scheduled

## Emergency Contacts

Maintain a contact list for rapid response:
- **Security Team**: security@company.com
- **DevOps On-Call**: devops-oncall@company.com
- **Cloud Provider Support**: Account-specific
- **Compliance Officer**: compliance@company.com