Initial commit

This commit is contained in:
Zhongwei Li
2025-11-29 17:51:02 +08:00
commit ff1f4bd119
252 changed files with 72682 additions and 0 deletions

@@ -0,0 +1,329 @@
---
name: container-grype
description: >
Container vulnerability scanning and dependency risk assessment using Grype with CVSS severity
ratings, EPSS exploit probability, and CISA KEV indicators. Use when: (1) Scanning container
images and filesystems for known vulnerabilities, (2) Integrating vulnerability scanning into
CI/CD pipelines with severity thresholds, (3) Analyzing SBOMs (Syft, SPDX, CycloneDX) for
security risks, (4) Prioritizing remediation based on threat metrics (CVSS, EPSS, KEV),
(5) Generating vulnerability reports in multiple formats (JSON, SARIF, CycloneDX) for security
toolchain integration.
version: 0.1.0
maintainer: SirAppSec
category: devsecops
tags: [container-security, vulnerability-scanning, sca, sbom, cvss, cve, docker, grype]
frameworks: [CWE, NIST]
dependencies:
tools: [grype, docker]
references:
- https://github.com/anchore/grype
- https://www.cve.org/
- https://nvd.nist.gov/
---
# Container Vulnerability Scanning with Grype
## Overview
Grype is an open-source vulnerability scanner that identifies known security flaws in container images,
filesystems, and Software Bill of Materials (SBOM) documents. It analyzes operating system packages
(Alpine, Ubuntu, Red Hat, Debian) and language-specific dependencies (Java, Python, JavaScript, Ruby,
Go, PHP, Rust) against vulnerability databases to detect CVEs.
Grype emphasizes actionable security insights through:
- CVSS severity ratings for risk classification
- EPSS exploit probability scores for threat assessment
- CISA Known Exploited Vulnerabilities (KEV) indicators
- Multiple output formats (table, JSON, SARIF, CycloneDX) for toolchain integration
## Quick Start
Scan a container image:
```bash
grype <image-name>
```
Examples:
```bash
# Scan official Docker image
grype alpine:latest
# Scan local Docker image
grype myapp:v1.2.3
# Scan filesystem directory
grype dir:/path/to/project
# Scan SBOM file
grype sbom:/path/to/sbom.json
```
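Grype also accepts explicit source schemes when the target needs disambiguation (image names below are placeholders; see `grype --help` for the full list of schemes):
```bash
# Pull directly from a registry without a local Docker daemon
grype registry:ghcr.io/example/myapp:v1.2.3
# Scan a saved image tarball
grype docker-archive:./myapp.tar
```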
## Core Workflow
### Basic Vulnerability Scan
1. **Identify scan target**: Determine what to scan (container image, filesystem, SBOM)
2. **Run Grype scan**: Execute `grype <target>` to analyze for vulnerabilities
3. **Review findings**: Examine CVE IDs, severities, CVSS scores, and affected packages (a `jq` review sketch follows this list)
4. **Prioritize remediation**: Focus on critical/high severity, CISA KEV, high EPSS scores
5. **Apply fixes**: Update vulnerable packages or base images
6. **Re-scan**: Verify vulnerabilities are resolved
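To review findings programmatically, a minimal sketch using `jq` against Grype's JSON output (the image name is a placeholder; field paths follow Grype's JSON schema and may vary across versions):
```bash
# Scan to JSON, then list CVE ID, severity, and affected package per match
grype myapp:latest -o json > results.json
jq -r '.matches[] | [.vulnerability.id, .vulnerability.severity, .artifact.name, .artifact.version] | @tsv' results.json | sort -u
```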
### CI/CD Integration with Fail Thresholds
For automated pipeline security gates:
```bash
# Fail build if any critical vulnerabilities found
grype <image> --fail-on critical
# Fail on high or critical severities
grype <image> --fail-on high
# Output JSON for further processing
grype <image> -o json > results.json
```
**Pipeline integration pattern** (a gate-script sketch follows this list):
1. Build container image
2. Run Grype scan with `--fail-on` threshold
3. If scan fails: Block deployment, alert security team
4. If scan passes: Continue deployment workflow
5. Archive scan results as build artifacts
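A minimal gate-script sketch for step 2 (image name and threshold are placeholders; it runs Grype twice so the JSON report is archived even when the gate fails):
```bash
#!/usr/bin/env bash
set -euo pipefail
IMAGE="${1:?usage: gate.sh <image>}"
THRESHOLD="${2:-critical}"

# Archive machine-readable results as a build artifact
grype "$IMAGE" -o json > grype-results.json

# Enforce the severity threshold; a non-zero exit blocks the pipeline
if ! grype "$IMAGE" --fail-on "$THRESHOLD"; then
  echo "Vulnerabilities at or above '$THRESHOLD' found; blocking deployment" >&2
  exit 1
fi
```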
### SBOM-Based Scanning
Use Grype with Syft-generated SBOMs for faster re-scanning:
```bash
# Generate SBOM with Syft (separate skill: sbom-syft)
syft <image> -o json > sbom.json
# Scan SBOM with Grype (faster than re-analyzing image)
grype sbom:sbom.json
# Pipe Syft output directly to Grype
syft <image> -o json | grype
```
**Benefits of SBOM workflow**:
- Faster re-scans without re-analyzing image layers
- Share SBOMs across security tools
- Archive SBOMs for compliance and auditing
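A minimal sketch of this workflow (image tag and paths are illustrative):
```bash
mkdir -p sboms
# Generate and archive the SBOM at build time
syft myapp:v1.2.3 -o json > sboms/myapp-v1.2.3.sbom.json
# Later (e.g., nightly), re-scan the archived SBOM without pulling the image again
grype sbom:sboms/myapp-v1.2.3.sbom.json --fail-on high
```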
### Risk Prioritization Workflow
Progress:
[ ] 1. Run full Grype scan with JSON output: `grype <target> -o json > results.json`
[ ] 2. Use helper script to extract high-risk CVEs: `./scripts/prioritize_cves.py results.json`
[ ] 3. Review CISA KEV matches (actively exploited vulnerabilities)
[ ] 4. Check EPSS scores (exploit probability) for non-KEV findings
[ ] 5. Prioritize remediation: KEV > High EPSS > CVSS Critical > CVSS High
[ ] 6. Document remediation plan with CVE IDs and affected packages
[ ] 7. Apply fixes and re-scan to verify
Work through each step systematically and check off completed items. A `jq` fallback for step 2 is sketched below.
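If the helper script is unavailable, a sketch that filters on severity only (KEV and EPSS data appear in newer Grype JSON output; verify the exact field paths against your version before relying on them):
```bash
# List Critical/High findings as: CVE ID, severity, package
jq -r '.matches[]
  | select(.vulnerability.severity == "Critical" or .vulnerability.severity == "High")
  | [.vulnerability.id, .vulnerability.severity, .artifact.name]
  | @tsv' results.json | sort -u
```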
## Output Formats
Grype supports multiple output formats for different use cases:
**Table (default)**: Human-readable console output
```bash
grype <image>
```
**JSON**: Machine-parseable for automation
```bash
grype <image> -o json
```
**SARIF**: Static Analysis Results Interchange Format for code-scanning integrations (e.g., GitHub Security)
```bash
grype <image> -o sarif
```
**CycloneDX**: SBOM format with vulnerability data
```bash
grype <image> -o cyclonedx-json
```
**Template**: Custom output using Go templates
```bash
grype <image> -o template -t custom-template.tmpl
```
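For quick triage, the JSON output can be summarized by severity; a minimal sketch (verify field paths against your Grype version):
```bash
grype myapp:latest -o json | jq -r '.matches[].vulnerability.severity' | sort | uniq -c | sort -rn
```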
## Advanced Configuration
### Filtering and Exclusions
Exclude specific file paths:
```bash
grype <image> --exclude '/usr/share/doc/**'
```
Show only vulnerabilities that have an available fix:
```bash
grype <image> --only-fixed # Only show vulnerabilities with available fixes
```
### Custom Ignore Rules
Create `.grype.yaml` to suppress false positives:
```yaml
ignore:
# Ignore specific CVE
- vulnerability: CVE-YYYY-XXXXX
reason: "False positive - component not used"
# Ignore CVE for specific package
- vulnerability: CVE-YYYY-ZZZZZ
package:
name: example-lib
version: 1.2.3
reason: "Risk accepted - mitigation controls in place"
```
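Apply the file explicitly with `-c` (Grype also discovers `.grype.yaml` in the working directory); `--show-suppressed` lists what the ignore rules filtered out, which is useful when auditing suppressions:
```bash
grype myapp:latest -c .grype.yaml --show-suppressed
```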
### Database Management
Update vulnerability database:
```bash
grype db update
```
Check database status:
```bash
grype db status
```
Use specific database location:
```bash
grype <image> --db /path/to/database
```
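For CI runners or air-gapped environments, a sketch that pins the cache location and disables online update checks (environment variables mirror the `db.*` config keys; verify the names and the database archive path against your Grype version):
```bash
export GRYPE_DB_CACHE_DIR=/opt/grype-db
export GRYPE_DB_AUTO_UPDATE=false
grype db import /path/to/vulnerability-db.tar.gz   # pre-staged database archive
grype myapp:latest
```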
## Security Considerations
- **Sensitive Data Handling**: Scan results may contain package names and versions that reveal
application architecture. Store results securely and limit access to authorized security personnel.
- **Access Control**: Grype needs Docker socket access when reading images from the local daemon, or registry credentials when pulling images directly. Restrict both to prevent unauthorized image access.
- **Audit Logging**: Log all Grype scans with timestamps, target details, and operator identity
for compliance and incident response; archive scan results for historical vulnerability tracking (an illustrative wrapper sketch follows this list).
- **Compliance**: Regular vulnerability scanning supports SOC2, PCI-DSS, NIST 800-53, and ISO 27001
requirements. Document scan frequency and remediation SLAs.
- **Safe Defaults**: Use `--fail-on critical` as minimum threshold for production deployments.
Configure automated scans in CI/CD to prevent vulnerable images from reaching production.
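An illustrative audit-logging wrapper sketch (not the bundled `grype_scan.sh`; paths and variable names are placeholders):
```bash
#!/usr/bin/env bash
# Record who scanned what and when, then archive the JSON report.
set -euo pipefail
IMAGE="$1"
TS=$(date -u +%Y%m%dT%H%M%SZ)
LOG_DIR="${GRYPE_LOG_DIR:-./scan-logs}"
mkdir -p "$LOG_DIR"
echo "$TS $(whoami) scanned $IMAGE" >> "$LOG_DIR/audit.log"
grype "$IMAGE" -o json > "$LOG_DIR/grype-$TS.json"
```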
## Bundled Resources
### Scripts (`scripts/`)
- **prioritize_cves.py** - Parse Grype JSON output and prioritize CVEs by threat metrics (KEV, EPSS, CVSS)
- **grype_scan.sh** - Wrapper script for consistent Grype scans with logging and threshold configuration
### References (`references/`)
- **cvss_guide.md** - CVSS severity rating system and score interpretation
- **cisa_kev.md** - CISA Known Exploited Vulnerabilities catalog and remediation urgency
- **vulnerability_remediation.md** - Common remediation patterns for dependency vulnerabilities
### Assets (`assets/`)
- **grype-ci-config.yml** - CI/CD pipeline configuration for Grype vulnerability scanning
- **grype-config.yaml** - Example Grype configuration with common ignore patterns
## Common Patterns
### Pattern 1: Pre-Production Scanning
Scan before pushing images to registry:
```bash
# Build image
docker build -t myapp:latest .
# Scan locally before push
grype myapp:latest --fail-on critical
# If scan passes, push to registry
docker push myapp:latest
```
### Pattern 2: Scheduled Scanning
Re-scan existing images for newly disclosed vulnerabilities:
```bash
# Scan all production images daily, writing one JSON report per image
# (appending multiple JSON documents to a single file would produce invalid JSON)
for image in $(docker images --format '{{.Repository}}:{{.Tag}}' | grep prod); do
  grype "$image" -o json > "daily-scan-$(date +%Y%m%d)-$(echo "$image" | tr '/:' '__').json"
done
```
### Pattern 3: Base Image Selection
Compare base images to choose least vulnerable option:
```bash
# Compare Alpine versions
grype alpine:3.18
grype alpine:3.19
# Compare distros
grype ubuntu:22.04
grype debian:12-slim
grype alpine:3.19
```
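A sketch that tabulates total findings per candidate image so the comparison is easier to read (image list is illustrative):
```bash
for img in alpine:3.19 ubuntu:22.04 debian:12-slim; do
  count=$(grype "$img" -o json | jq '.matches | length')
  echo "$img: $count known vulnerabilities"
done
```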
## Integration Points
- **CI/CD**: Integrate with GitHub Actions, GitLab CI, Jenkins, CircleCI using `--fail-on` thresholds
- **Container Registries**: Scan images from Docker Hub, ECR, GCR, ACR, Harbor
- **Security Tools**: Export SARIF for GitHub Security, JSON for SIEM ingestion, CycloneDX for OWASP Dependency-Track
- **SDLC**: Scan during build (shift-left), before deployment (quality gate), and scheduled (continuous monitoring)
## Troubleshooting
### Issue: Database Update Fails
**Symptoms**: `grype db update` fails with network errors
**Solution**:
- Check network connectivity and proxy settings
- Verify firewall allows access to Grype database sources
- Use `grype db update --verbose` for detailed error messages
- Consider using offline database: `grype db import /path/to/database.tar.gz`
### Issue: False Positives
**Symptoms**: Grype reports vulnerabilities in unused code or misidentified packages
**Solution**:
- Create `.grype.yaml` ignore file with specific CVE suppressions
- Document justification for each ignored vulnerability
- Periodically review ignored CVEs (quarterly) to reassess risk
- Use `--only-fixed` to focus on actionable findings
### Issue: Slow Scans
**Symptoms**: Grype scans take excessive time on large images
**Solution**:
- Use SBOM workflow: Generate SBOM once with Syft, re-scan SBOM with Grype
- Exclude unnecessary paths: `--exclude '/usr/share/doc/**'`
- Use local database cache: `grype db update` before batch scans
- Scan base images separately to identify inherited vulnerabilities
## References
- [Grype GitHub Repository](https://github.com/anchore/grype)
- [Grype Documentation](https://github.com/anchore/grype#getting-started)
- [NIST National Vulnerability Database](https://nvd.nist.gov/)
- [CISA Known Exploited Vulnerabilities](https://www.cisa.gov/known-exploited-vulnerabilities-catalog)
- [FIRST EPSS (Exploit Prediction Scoring System)](https://www.first.org/epss/)
- [CVSS Specification](https://www.first.org/cvss/specification-document)

@@ -0,0 +1,9 @@
# Assets Directory
Place files that will be used in the output Claude produces:
- Templates
- Configuration files
- Images/logos
- Boilerplate code
These files are NOT loaded into context but copied/modified in output.

@@ -0,0 +1,357 @@
# Security-Enhanced CI/CD Pipeline Template
#
# This template demonstrates security best practices for CI/CD pipelines.
# Adapt this template to your specific security tool and workflow needs.
#
# Key Security Features:
# - SAST (Static Application Security Testing)
# - Dependency vulnerability scanning
# - Secrets detection
# - Infrastructure-as-Code security scanning
# - Container image scanning
# - Security artifact uploading for compliance
name: Security Scan Pipeline
on:
push:
branches: [main, develop]
pull_request:
branches: [main, develop]
schedule:
# Run weekly security scans on Sunday at 2 AM UTC
- cron: '0 2 * * 0'
workflow_dispatch: # Allow manual trigger
# Security: Restrict permissions to minimum required
permissions:
contents: read
security-events: write # For uploading SARIF results
pull-requests: write # For commenting on PRs
env:
# Configuration
SECURITY_SCAN_FAIL_ON: 'critical,high' # Fail build on these severities
REPORT_DIR: 'security-reports'
jobs:
# Job 1: Static Application Security Testing (SAST)
sast-scan:
name: SAST Security Scan
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0 # Full history for better analysis
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Run SAST Scanner
run: |
# Example: Using Semgrep for SAST
mkdir -p "${{ env.REPORT_DIR }}"
pip install semgrep
semgrep --config=auto \
--json \
--output ${{ env.REPORT_DIR }}/sast-results.json \
. || true
# Alternative: Bandit for Python projects
# pip install bandit
# bandit -r . -f json -o ${{ env.REPORT_DIR }}/bandit-results.json
- name: Process SAST Results
run: |
# Parse results and fail on critical/high severity
python3 -c "
import json
import sys
with open('${{ env.REPORT_DIR }}/sast-results.json') as f:
results = json.load(f)
critical = len([r for r in results.get('results', []) if r.get('extra', {}).get('severity') == 'ERROR'])
high = len([r for r in results.get('results', []) if r.get('extra', {}).get('severity') == 'WARNING'])
print(f'Critical findings: {critical}')
print(f'High findings: {high}')
if critical > 0:
print('❌ Build failed: Critical security issues found')
sys.exit(1)
elif high > 0:
print('⚠️ Warning: High severity issues found')
# Optionally fail on high severity
# sys.exit(1)
else:
print('✅ No critical security issues found')
"
- name: Upload SAST Results
if: always()
uses: actions/upload-artifact@v4
with:
name: sast-results
path: ${{ env.REPORT_DIR }}/sast-results.json
retention-days: 30
# Job 2: Dependency Vulnerability Scanning
dependency-scan:
name: Dependency Vulnerability Scan
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Scan Python Dependencies
if: hashFiles('requirements.txt') != ''
run: |
mkdir -p "${{ env.REPORT_DIR }}"
pip install safety
safety check --json > "${{ env.REPORT_DIR }}/safety-results.json" || true
- name: Scan Node Dependencies
if: hashFiles('package.json') != ''
run: |
mkdir -p "${{ env.REPORT_DIR }}"
npm audit --json > "${{ env.REPORT_DIR }}/npm-audit.json" || true
- name: Process Dependency Results
run: |
# Check for critical vulnerabilities
if [ -f "${{ env.REPORT_DIR }}/safety-results.json" ]; then
critical_count=$(python3 -c "import json; data=json.load(open('${{ env.REPORT_DIR }}/safety-results.json')); print(len([v for v in data.get('vulnerabilities', []) if v.get('severity', '').lower() == 'critical']))")
echo "Critical vulnerabilities: $critical_count"
if [ "$critical_count" -gt "0" ]; then
echo "❌ Build failed: Critical vulnerabilities in dependencies"
exit 1
fi
fi
- name: Upload Dependency Scan Results
if: always()
uses: actions/upload-artifact@v4
with:
name: dependency-scan-results
path: ${{ env.REPORT_DIR }}/
retention-days: 30
# Job 3: Secrets Detection
secrets-scan:
name: Secrets Detection
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0 # Full history to scan all commits
- name: Run Gitleaks
uses: gitleaks/gitleaks-action@v2
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GITLEAKS_ENABLE_SUMMARY: true
- name: Alternative - TruffleHog Scan
if: false # Set to true to enable
run: |
pip install truffleHog
trufflehog --json --regex --entropy=True . \
> ${{ env.REPORT_DIR }}/trufflehog-results.json || true
- name: Upload Secrets Scan Results
if: always()
uses: actions/upload-artifact@v4
with:
name: secrets-scan-results
path: ${{ env.REPORT_DIR }}/
retention-days: 30
# Job 4: Container Image Scanning
container-scan:
name: Container Image Security Scan
runs-on: ubuntu-latest
if: hashFiles('Dockerfile') != ''
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Build Docker Image
run: |
docker build -t app:${{ github.sha }} .
- name: Run Trivy Scanner
uses: aquasecurity/trivy-action@master
with:
image-ref: app:${{ github.sha }}
format: 'sarif'
output: '${{ env.REPORT_DIR }}/trivy-results.sarif'
severity: 'CRITICAL,HIGH'
- name: Upload Trivy Results to GitHub Security
if: always()
uses: github/codeql-action/upload-sarif@v3
with:
sarif_file: '${{ env.REPORT_DIR }}/trivy-results.sarif'
- name: Upload Container Scan Results
if: always()
uses: actions/upload-artifact@v4
with:
name: container-scan-results
path: ${{ env.REPORT_DIR }}/
retention-days: 30
# Job 5: Infrastructure-as-Code Security Scanning
iac-scan:
name: IaC Security Scan
runs-on: ubuntu-latest
if: hashFiles('**/*.tf', '**/*.yaml', '**/*.yml') != ''
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Run Checkov
run: |
pip install checkov
mkdir -p "${{ env.REPORT_DIR }}"
checkov -d . \
--output json \
--output-file ${{ env.REPORT_DIR }}/checkov-results.json \
--quiet \
|| true
- name: Run tfsec (for Terraform)
if: hashFiles('**/*.tf') != ''
run: |
curl -s https://raw.githubusercontent.com/aquasecurity/tfsec/master/scripts/install_linux.sh | bash
tfsec . \
--format json \
--out ${{ env.REPORT_DIR }}/tfsec-results.json \
|| true
- name: Process IaC Results
run: |
# Fail on critical findings
if [ -f "${{ env.REPORT_DIR }}/checkov-results.json" ]; then
critical_count=$(python3 -c "import json; data=json.load(open('${{ env.REPORT_DIR }}/checkov-results.json')); print(data.get('summary', {}).get('failed', 0))")
echo "Failed checks: $critical_count"
if [ "$critical_count" -gt "0" ]; then
echo "⚠️ Warning: IaC security issues found"
# Optionally fail the build
# exit 1
fi
fi
- name: Upload IaC Scan Results
if: always()
uses: actions/upload-artifact@v4
with:
name: iac-scan-results
path: ${{ env.REPORT_DIR }}/
retention-days: 30
# Job 6: Security Report Generation and Notification
security-report:
name: Generate Security Report
runs-on: ubuntu-latest
needs: [sast-scan, dependency-scan, secrets-scan]
if: always()
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Download All Scan Results
uses: actions/download-artifact@v4
with:
path: all-results/
- name: Generate Consolidated Report
run: |
# Consolidate all security scan results
mkdir -p consolidated-report
cat > consolidated-report/security-summary.md << EOF
# Security Scan Summary
**Scan Date**: $(date -u +"%Y-%m-%d %H:%M:%S UTC")
**Commit**: ${{ github.sha }}
**Branch**: ${{ github.ref_name }}
## Scan Results
### SAST Scan
See artifacts: `sast-results`
### Dependency Scan
See artifacts: `dependency-scan-results`
### Secrets Scan
See artifacts: `secrets-scan-results`
### Container Scan
See artifacts: `container-scan-results`
### IaC Scan
See artifacts: `iac-scan-results`
---
For detailed results, download scan artifacts from this workflow run.
EOF
- name: Comment on PR (if applicable)
if: github.event_name == 'pull_request'
uses: actions/github-script@v7
with:
script: |
const fs = require('fs');
const report = fs.readFileSync('consolidated-report/security-summary.md', 'utf8');
github.rest.issues.createComment({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: report
});
- name: Upload Consolidated Report
if: always()
uses: actions/upload-artifact@v4
with:
name: consolidated-security-report
path: consolidated-report/
retention-days: 90
# Security Best Practices Demonstrated:
#
# 1. ✅ Minimal permissions (principle of least privilege)
# 2. ✅ Multiple security scan types (defense in depth)
# 3. ✅ Fail-fast on critical findings
# 4. ✅ Secrets detection across full git history
# 5. ✅ Container image scanning before deployment
# 6. ✅ IaC scanning for misconfigurations
# 7. ✅ Artifact retention for compliance audit trail
# 8. ✅ SARIF format for GitHub Security integration
# 9. ✅ Scheduled scans for continuous monitoring
# 10. ✅ PR comments for developer feedback
#
# Compliance Mappings:
# - SOC 2: CC6.1, CC6.6, CC7.2 (Security monitoring and logging)
# - PCI-DSS: 6.2, 6.5 (Secure development practices)
# - NIST: SA-11 (Developer Security Testing)
# - OWASP: Integrated security testing throughout SDLC

@@ -0,0 +1,405 @@
# Grype CI/CD Pipeline Configuration Examples
#
# This file provides example configurations for integrating Grype vulnerability
# scanning into various CI/CD platforms.
# =============================================================================
# GitHub Actions
# =============================================================================
name: Container Security Scan
on:
push:
branches: [main, develop]
pull_request:
branches: [main]
schedule:
# Scan daily for new vulnerabilities
- cron: '0 6 * * *'
jobs:
grype-scan:
name: Grype Vulnerability Scan
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Build Docker image
run: |
docker build -t ${{ github.repository }}:${{ github.sha }} .
- name: Scan with Grype (anchore/scan-action)
uses: anchore/scan-action@v3
id: grype
with:
image: ${{ github.repository }}:${{ github.sha }}
fail-build: true
severity-cutoff: high
output-format: sarif
- name: Upload SARIF results to GitHub Security
uses: github/codeql-action/upload-sarif@v3
if: always()
with:
sarif_file: ${{ steps.grype.outputs.sarif }}
- name: Generate human-readable report
if: always()
run: |
# The scan action manages its own Grype copy, so install the CLI for the table report
curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin
grype ${{ github.repository }}:${{ github.sha }} -o table > grype-report.txt
- name: Upload scan report
uses: actions/upload-artifact@v4
if: always()
with:
name: grype-scan-report
path: grype-report.txt
retention-days: 30
# =============================================================================
# GitLab CI
# =============================================================================
# .gitlab-ci.yml
stages:
- build
- scan
- deploy
variables:
IMAGE_NAME: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
GRYPE_VERSION: "latest"
build:
stage: build
image: docker:24
services:
- docker:24-dind
script:
- docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
- docker build -t $IMAGE_NAME .
- docker push $IMAGE_NAME
only:
- branches
grype-scan:
stage: scan
image: anchore/grype:$GRYPE_VERSION
script:
- grype $IMAGE_NAME --fail-on high -o json > grype-results.json
- grype $IMAGE_NAME -o table
artifacts:
reports:
container_scanning: grype-results.json
paths:
- grype-results.json
expire_in: 30 days
allow_failure: false
only:
- branches
deploy:
stage: deploy
script:
- echo "Deploying $IMAGE_NAME"
only:
- main
when: on_success
# =============================================================================
# Jenkins Pipeline
# =============================================================================
# Jenkinsfile
pipeline {
agent any
environment {
IMAGE_NAME = "myapp"
IMAGE_TAG = "${env.BUILD_NUMBER}"
GRYPE_VERSION = "latest"
}
stages {
stage('Build') {
steps {
script {
docker.build("${IMAGE_NAME}:${IMAGE_TAG}")
}
}
}
stage('Grype Scan') {
agent {
docker {
image "anchore/grype:${GRYPE_VERSION}"
args '-v /var/run/docker.sock:/var/run/docker.sock'
}
}
steps {
sh """
# Run scan with high severity threshold
grype ${IMAGE_NAME}:${IMAGE_TAG} \
--fail-on high \
-o json > grype-results.json
# Generate human-readable report
grype ${IMAGE_NAME}:${IMAGE_TAG} \
-o table > grype-report.txt
"""
}
post {
always {
archiveArtifacts artifacts: 'grype-*.json,grype-*.txt',
allowEmptyArchive: true
}
failure {
echo 'Grype scan found vulnerabilities above threshold'
}
}
}
stage('Deploy') {
when {
branch 'main'
}
steps {
echo "Deploying ${IMAGE_NAME}:${IMAGE_TAG}"
}
}
}
}
# =============================================================================
# CircleCI
# =============================================================================
# .circleci/config.yml
version: 2.1
orbs:
docker: circleci/docker@2.2.0
jobs:
build-and-scan:
docker:
- image: cimg/base:2024.01
steps:
- checkout
- setup_remote_docker:
docker_layer_caching: true
- run:
name: Build Docker Image
command: |
docker build -t myapp:${CIRCLE_SHA1} .
- run:
name: Install Grype
command: |
curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin
- run:
name: Scan with Grype
command: |
grype myapp:${CIRCLE_SHA1} --fail-on critical -o json > grype-results.json
grype myapp:${CIRCLE_SHA1} -o table | tee grype-report.txt
- store_artifacts:
path: grype-results.json
destination: scan-results
- store_artifacts:
path: grype-report.txt
destination: scan-results
workflows:
build-scan-deploy:
jobs:
- build-and-scan:
filters:
branches:
only:
- main
- develop
# =============================================================================
# Azure Pipelines
# =============================================================================
# azure-pipelines.yml
trigger:
branches:
include:
- main
- develop
pool:
vmImage: 'ubuntu-latest'
variables:
imageName: 'myapp'
imageTag: '$(Build.BuildId)'
stages:
- stage: Build
jobs:
- job: BuildImage
steps:
- task: Docker@2
displayName: Build Docker image
inputs:
command: build
dockerfile: Dockerfile
tags: $(imageTag)
- stage: Scan
dependsOn: Build
jobs:
- job: GrypeScan
steps:
- script: |
# Install Grype
curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin
# Run scan
grype $(imageName):$(imageTag) \
--fail-on high \
-o json > $(Build.ArtifactStagingDirectory)/grype-results.json
grype $(imageName):$(imageTag) \
-o table > $(Build.ArtifactStagingDirectory)/grype-report.txt
displayName: 'Run Grype Scan'
- task: PublishBuildArtifacts@1
displayName: 'Publish Scan Results'
inputs:
PathtoPublish: '$(Build.ArtifactStagingDirectory)'
ArtifactName: 'grype-scan-results'
condition: always()
- stage: Deploy
dependsOn: Scan
condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
jobs:
- job: DeployProduction
steps:
- script: echo "Deploying to production"
displayName: 'Deploy'
# =============================================================================
# Tekton Pipeline
# =============================================================================
# tekton-pipeline.yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
name: grype-scan-pipeline
spec:
params:
- name: image-name
type: string
description: Name of the image to scan
- name: image-tag
type: string
description: Tag of the image to scan
default: latest
workspaces:
- name: shared-workspace
tasks:
- name: build-image
taskRef:
name: buildah
workspaces:
- name: source
workspace: shared-workspace
params:
- name: IMAGE
value: $(params.image-name):$(params.image-tag)
- name: grype-scan
runAfter:
- build-image
taskRef:
name: grype-scan
workspaces:
- name: scan-results
workspace: shared-workspace
params:
- name: IMAGE
value: $(params.image-name):$(params.image-tag)
- name: SEVERITY_THRESHOLD
value: high
- name: deploy
runAfter:
- grype-scan
taskRef:
name: kubectl-deploy
params:
- name: IMAGE
value: $(params.image-name):$(params.image-tag)
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: grype-scan
spec:
params:
- name: IMAGE
description: Image to scan
- name: SEVERITY_THRESHOLD
description: Fail on this severity or higher
default: high
steps:
- name: scan
image: anchore/grype:latest
script: |
#!/bin/sh
grype $(params.IMAGE) \
--fail-on $(params.SEVERITY_THRESHOLD) \
-o json > $(workspaces.scan-results.path)/grype-results.json
grype $(params.IMAGE) -o table | tee $(workspaces.scan-results.path)/grype-report.txt
workspaces:
- name: scan-results
# =============================================================================
# Best Practices
# =============================================================================
# 1. Update vulnerability database regularly
# - Run grype db update before scans
# - Cache database between pipeline runs
# - Update database at least daily
# 2. Set appropriate fail thresholds
# - Production: --fail-on critical or high
# - Development: --fail-on high (may allow critical temporarily)
# - Monitor-only: No fail threshold, just report
# 3. Archive scan results
# - Store JSON for trend analysis
# - Keep reports for compliance audits
# - Retention: 30-90 days minimum
# 4. Integrate with security dashboards
# - Upload SARIF to GitHub Security
# - Send metrics to monitoring systems
# - Alert security team on critical findings
# 5. Scheduled scanning
# - Scan production images daily for new CVEs
# - Re-scan after vulnerability database updates
# - Track vulnerability trends over time

@@ -0,0 +1,255 @@
# Grype Configuration File (.grype.yaml)
#
# Place this file in your project root or specify with: grype <target> -c .grype.yaml
#
# Documentation: https://github.com/anchore/grype#configuration
# =============================================================================
# Ignore Rules - Suppress False Positives and Accepted Risks
# =============================================================================
ignore:
# Example 1: Ignore specific CVE globally
- vulnerability: CVE-2021-12345
reason: "False positive - vulnerable code path not used in our application"
# Example 2: Ignore CVE for specific package only
- vulnerability: CVE-2022-67890
package:
name: example-library
version: 1.2.3
reason: "Risk accepted - compensating WAF rules deployed to block exploitation"
# Example 3: Ignore CVE with expiration date (forces re-evaluation)
- vulnerability: CVE-2023-11111
package:
name: lodash
reason: "Temporary acceptance while migration to alternative library is in progress"
expires: 2025-12-31
# Example 4: Ignore by fix state
- fix-state: wont-fix
reason: "Maintainer has stated these will not be fixed"
# Example 5: Ignore vulnerabilities in test dependencies
- package:
name: pytest
type: python
reason: "Test-only dependency, not present in production"
# =============================================================================
# Match Configuration
# =============================================================================
match:
# Match vulnerabilities in OS packages
os:
enabled: true
# Match vulnerabilities in language packages
language:
enabled: true
# Control matching behavior
go:
# Use Go module proxy for additional metadata
use-network: true
main-module-version:
# Use version from go.mod if available
from-contents: true
java:
# Use Maven Central for additional metadata
use-network: true
python:
# Use PyPI for additional metadata
use-network: true
# =============================================================================
# Search Configuration
# =============================================================================
search:
# Search for packages in these locations
scope: all-layers # Options: all-layers, squashed
# Exclude paths from scanning
exclude:
# Exclude documentation directories
- "/usr/share/doc/**"
- "/usr/share/man/**"
# Exclude test directories
- "**/test/**"
- "**/tests/**"
- "**/__tests__/**"
# Exclude development tools not in production
- "**/node_modules/.bin/**"
# Exclude specific files
- "**/*.md"
- "**/*.txt"
# Index archives (tar, zip, jar, etc.)
index-archives: true
# Maximum depth to traverse nested archives
max-depth: 3
# =============================================================================
# Database Configuration
# =============================================================================
db:
# Cache directory for vulnerability database
cache-dir: ~/.grype/db
# Auto-update database
auto-update: true
# Validate database checksum
validate-by-hash-on-start: true
# Update check timeout
update-url-timeout: 30s
# =============================================================================
# Vulnerability Matching Configuration
# =============================================================================
# Adjust matcher configuration
dev:
# Profile memory usage (debugging)
profile-mem: false
# =============================================================================
# Output Configuration
# =============================================================================
output:
# Default output format
# Options: table, json, cyclonedx-json, cyclonedx-xml, sarif, template
format: table
# Show suppressed/ignored vulnerabilities in output
show-suppressed: false
# =============================================================================
# Fail-on Configuration
# =============================================================================
# Uncomment to set default fail-on severity
# fail-on: high # Options: negligible, low, medium, high, critical
# =============================================================================
# Registry Authentication
# =============================================================================
registry:
# Authenticate to private registries
# auth:
# - authority: registry.example.com
# username: user
# password: pass
#
# - authority: gcr.io
# token: <token>
# Use Docker config for authentication
insecure-use-http: false
# =============================================================================
# Example Configurations for Different Use Cases
# =============================================================================
# -----------------------------------------------------------------------------
# Use Case 1: Development Environment (Permissive)
# -----------------------------------------------------------------------------
#
# ignore:
# # Allow medium and below in dev
# - severity: medium
# reason: "Development environment - focus on high/critical only"
#
# fail-on: critical
#
# search:
# exclude:
# - "**/test/**"
# - "**/node_modules/**"
# -----------------------------------------------------------------------------
# Use Case 2: CI/CD Pipeline (Strict)
# -----------------------------------------------------------------------------
#
# fail-on: high
#
# ignore:
# # Only allow documented exceptions
# - vulnerability: CVE-2024-XXXX
# reason: "Documented risk acceptance by Security Team - Ticket SEC-123"
# expires: 2025-06-30
#
# output:
# format: json
# show-suppressed: true
# -----------------------------------------------------------------------------
# Use Case 3: Production Monitoring (Focus on Exploitability)
# -----------------------------------------------------------------------------
#
# match:
# # Prioritize known exploited vulnerabilities
# only-fixed: true # Only show CVEs with available fixes
#
# ignore:
# # Ignore unfixable vulnerabilities with compensating controls
# - fix-state: wont-fix
# reason: "Compensating controls implemented - network isolation, WAF rules"
#
# output:
# format: json
# -----------------------------------------------------------------------------
# Use Case 4: Compliance Scanning (Comprehensive)
# -----------------------------------------------------------------------------
#
# search:
# scope: all-layers
# index-archives: true
# max-depth: 5
#
# output:
# format: cyclonedx-json
# show-suppressed: true
#
# # No ignores - report everything for compliance review
# =============================================================================
# Best Practices
# =============================================================================
# 1. Document all ignore rules with clear reasons
# - Include ticket numbers for risk acceptances
# - Set expiration dates for temporary ignores
# - Review ignores quarterly
# 2. Use package-specific ignores instead of global CVE ignores
# - Reduces risk of suppressing legitimate vulnerabilities in other packages
# - Example: CVE-2021-12345 in package-a (ignored) vs package-b (should alert)
# 3. Exclude non-production paths
# - Test directories, documentation, dev tools
# - Reduces noise and scan time
# 4. Keep configuration in version control
# - Track changes to ignore rules
# - Audit trail for risk acceptances
# - Share consistent configuration across team
# 5. Different configs for different environments
# - Development: More permissive, focus on critical
# - CI/CD: Strict, block on high/critical
# - Production: Monitor all, focus on exploitable CVEs

@@ -0,0 +1,355 @@
# Security Rule Template
#
# This template demonstrates how to structure security rules/policies.
# Adapt this template to your specific security tool (Semgrep, OPA, etc.)
#
# Rule Structure Best Practices:
# - Clear rule ID and metadata
# - Severity classification
# - Framework mappings (OWASP, CWE)
# - Remediation guidance
# - Example vulnerable and fixed code
rules:
# Example Rule 1: SQL Injection Detection
- id: sql-injection-string-concatenation
metadata:
name: "SQL Injection via String Concatenation"
description: "Detects potential SQL injection vulnerabilities from string concatenation in SQL queries"
severity: "HIGH"
category: "security"
subcategory: "injection"
# Security Framework Mappings
owasp:
- "A03:2021 - Injection"
cwe:
- "CWE-89: SQL Injection"
mitre_attack:
- "T1190: Exploit Public-Facing Application"
# Compliance Standards
compliance:
- "PCI-DSS 6.5.1: Injection flaws"
- "NIST 800-53 SI-10: Information Input Validation"
# Confidence and Impact
confidence: "HIGH"
likelihood: "HIGH"
impact: "HIGH"
# References
references:
- "https://owasp.org/www-community/attacks/SQL_Injection"
- "https://cwe.mitre.org/data/definitions/89.html"
- "https://cheatsheetseries.owasp.org/cheatsheets/SQL_Injection_Prevention_Cheat_Sheet.html"
# Languages this rule applies to
languages:
- python
- javascript
- java
- go
# Detection Pattern (example using Semgrep-style syntax)
pattern-either:
- pattern: |
cursor.execute($SQL + $VAR)
- pattern: |
cursor.execute(f"... {$VAR} ...")
- pattern: |
cursor.execute("..." + $VAR + "...")
# What to report when found
message: |
Potential SQL injection vulnerability detected. SQL query is constructed using
string concatenation or f-strings with user input. This allows attackers to
inject malicious SQL code.
Use parameterized queries instead:
- Python: cursor.execute("SELECT * FROM users WHERE id = ?", (user_id,))
- JavaScript: db.query("SELECT * FROM users WHERE id = $1", [userId])
See: https://owasp.org/www-community/attacks/SQL_Injection
# Suggested fix (auto-fix if supported)
fix: |
Use parameterized queries with placeholders
# Example vulnerable code
examples:
- vulnerable: |
# Vulnerable: String concatenation
user_id = request.GET['id']
query = "SELECT * FROM users WHERE id = " + user_id
cursor.execute(query)
- fixed: |
# Fixed: Parameterized query
user_id = request.GET['id']
query = "SELECT * FROM users WHERE id = ?"
cursor.execute(query, (user_id,))
# Example Rule 2: Hardcoded Secrets Detection
- id: hardcoded-secret-credential
metadata:
name: "Hardcoded Secret or Credential"
description: "Detects hardcoded secrets, API keys, passwords, or tokens in source code"
severity: "CRITICAL"
category: "security"
subcategory: "secrets"
owasp:
- "A07:2021 - Identification and Authentication Failures"
cwe:
- "CWE-798: Use of Hard-coded Credentials"
- "CWE-259: Use of Hard-coded Password"
compliance:
- "PCI-DSS 8.2.1: Use of strong cryptography"
- "SOC 2 CC6.1: Logical access controls"
- "GDPR Article 32: Security of processing"
confidence: "MEDIUM"
likelihood: "HIGH"
impact: "CRITICAL"
references:
- "https://cwe.mitre.org/data/definitions/798.html"
- "https://owasp.org/www-community/vulnerabilities/Use_of_hard-coded_password"
languages:
- python
- javascript
- java
- go
- ruby
pattern-either:
- pattern: |
password = "..."
- pattern: |
api_key = "..."
- pattern: |
secret = "..."
- pattern: |
token = "..."
pattern-not: |
$VAR = ""
message: |
Potential hardcoded secret detected. Hardcoding credentials in source code
is a critical security vulnerability that can lead to unauthorized access
if the code is exposed.
Use environment variables or a secrets management system instead:
- Python: os.environ.get('API_KEY')
- Node.js: process.env.API_KEY
- Secrets Manager: AWS Secrets Manager, HashiCorp Vault, etc.
See: https://cwe.mitre.org/data/definitions/798.html
examples:
- vulnerable: |
# Vulnerable: Hardcoded API key
api_key = "sk-1234567890abcdef"
api.authenticate(api_key)
- fixed: |
# Fixed: Environment variable
import os
api_key = os.environ.get('API_KEY')
if not api_key:
raise ValueError("API_KEY environment variable not set")
api.authenticate(api_key)
# Example Rule 3: XSS via Unsafe HTML Rendering
- id: xss-unsafe-html-rendering
metadata:
name: "Cross-Site Scripting (XSS) via Unsafe HTML"
description: "Detects unsafe HTML rendering that could lead to XSS vulnerabilities"
severity: "HIGH"
category: "security"
subcategory: "xss"
owasp:
- "A03:2021 - Injection"
cwe:
- "CWE-79: Cross-site Scripting (XSS)"
- "CWE-80: Improper Neutralization of Script-Related HTML Tags"
compliance:
- "PCI-DSS 6.5.7: Cross-site scripting"
- "NIST 800-53 SI-10: Information Input Validation"
confidence: "HIGH"
likelihood: "MEDIUM"
impact: "HIGH"
references:
- "https://owasp.org/www-community/attacks/xss/"
- "https://cwe.mitre.org/data/definitions/79.html"
- "https://cheatsheetseries.owasp.org/cheatsheets/Cross_Site_Scripting_Prevention_Cheat_Sheet.html"
languages:
- javascript
- typescript
- jsx
- tsx
pattern-either:
- pattern: |
dangerouslySetInnerHTML={{__html: $VAR}}
- pattern: |
innerHTML = $VAR
message: |
Potential XSS vulnerability detected. Setting HTML content directly from
user input without sanitization can allow attackers to inject malicious
JavaScript code.
Use one of these safe alternatives:
- React: Use {userInput} for automatic escaping
- DOMPurify: const clean = DOMPurify.sanitize(dirty);
- Framework-specific sanitizers
See: https://owasp.org/www-community/attacks/xss/
examples:
- vulnerable: |
// Vulnerable: Unsanitized HTML
function UserComment({ comment }) {
return <div dangerouslySetInnerHTML={{__html: comment}} />;
}
- fixed: |
// Fixed: Sanitized with DOMPurify
import DOMPurify from 'dompurify';
function UserComment({ comment }) {
const sanitized = DOMPurify.sanitize(comment);
return <div dangerouslySetInnerHTML={{__html: sanitized}} />;
}
# Example Rule 4: Insecure Cryptography
- id: weak-cryptographic-algorithm
metadata:
name: "Weak Cryptographic Algorithm"
description: "Detects use of weak or deprecated cryptographic algorithms"
severity: "HIGH"
category: "security"
subcategory: "cryptography"
owasp:
- "A02:2021 - Cryptographic Failures"
cwe:
- "CWE-327: Use of a Broken or Risky Cryptographic Algorithm"
- "CWE-326: Inadequate Encryption Strength"
compliance:
- "PCI-DSS 4.1: Use strong cryptography"
- "NIST 800-53 SC-13: Cryptographic Protection"
- "GDPR Article 32: Security of processing"
confidence: "HIGH"
likelihood: "MEDIUM"
impact: "HIGH"
references:
- "https://cwe.mitre.org/data/definitions/327.html"
- "https://owasp.org/www-project-web-security-testing-guide/latest/4-Web_Application_Security_Testing/09-Testing_for_Weak_Cryptography/"
languages:
- python
- javascript
- java
pattern-either:
- pattern: |
hashlib.md5(...)
- pattern: |
hashlib.sha1(...)
- pattern: |
crypto.createHash('md5')
- pattern: |
crypto.createHash('sha1')
message: |
Weak cryptographic algorithm detected (MD5 or SHA1). These algorithms are
considered cryptographically broken and should not be used for security purposes.
Use strong alternatives:
- For hashing: SHA-256, SHA-384, or SHA-512
- For password hashing: bcrypt, argon2, or PBKDF2
- Python: hashlib.sha256()
- Node.js: crypto.createHash('sha256')
See: https://cwe.mitre.org/data/definitions/327.html
examples:
- vulnerable: |
# Vulnerable: MD5 hash
import hashlib
hash_value = hashlib.md5(data).hexdigest()
- fixed: |
# Fixed: SHA-256 hash
import hashlib
hash_value = hashlib.sha256(data).hexdigest()
# Rule Configuration
configuration:
# Global settings
enabled: true
severity_threshold: "MEDIUM" # Report findings at MEDIUM severity and above
# Performance tuning
max_file_size_kb: 1024
exclude_patterns:
- "test/*"
- "tests/*"
- "node_modules/*"
- "vendor/*"
- "*.min.js"
# False positive reduction
confidence_threshold: "MEDIUM" # Only report findings with MEDIUM confidence or higher
# Rule Metadata Schema
# This section documents the expected structure for rules
metadata_schema:
required:
- id: "Unique identifier for the rule (kebab-case)"
- name: "Human-readable rule name"
- description: "What the rule detects"
- severity: "CRITICAL | HIGH | MEDIUM | LOW | INFO"
- category: "security | best-practice | performance"
optional:
- subcategory: "Specific type (injection, xss, secrets, etc.)"
- owasp: "OWASP Top 10 mappings"
- cwe: "CWE identifier(s)"
- mitre_attack: "MITRE ATT&CK technique(s)"
- compliance: "Compliance standard references"
- confidence: "Detection confidence level"
- likelihood: "Likelihood of exploitation"
- impact: "Potential impact if exploited"
- references: "External documentation links"
# Usage Instructions:
#
# 1. Copy this template when creating new security rules
# 2. Update metadata fields with appropriate framework mappings
# 3. Customize detection patterns for your tool (Semgrep, OPA, etc.)
# 4. Provide clear remediation guidance in the message field
# 5. Include both vulnerable and fixed code examples
# 6. Test rules on real codebases before deployment
#
# Best Practices:
# - Map to multiple frameworks (OWASP, CWE, MITRE ATT&CK)
# - Include compliance standard references
# - Provide actionable remediation guidance
# - Show code examples (vulnerable vs. fixed)
# - Tune confidence levels to reduce false positives
# - Exclude test directories to reduce noise

@@ -0,0 +1,550 @@
# Reference Document Template
This file demonstrates how to structure detailed reference material that Claude loads on-demand.
**When to use this reference**: Include a clear statement about when Claude should consult this document.
For example: "Consult this reference when analyzing Python code for security vulnerabilities and needing detailed remediation patterns."
**Document purpose**: Briefly explain what this reference provides that's not in SKILL.md.
---
## Table of Contents
**For documents >100 lines, always include a table of contents** to help Claude navigate quickly.
- [When to Use References](#when-to-use-references)
- [Document Organization](#document-organization)
- [Detailed Technical Content](#detailed-technical-content)
- [Security Framework Mappings](#security-framework-mappings)
- [OWASP Top 10](#owasp-top-10)
- [CWE Mappings](#cwe-mappings)
- [MITRE ATT&CK](#mitre-attck)
- [Remediation Patterns](#remediation-patterns)
- [Advanced Configuration](#advanced-configuration)
- [Examples and Code Samples](#examples-and-code-samples)
---
## When to Use References
**Move content from SKILL.md to references/** when:
1. **Content exceeds 100 lines** - Keep SKILL.md concise
2. **Framework-specific details** - Detailed OWASP/CWE/MITRE mappings
3. **Advanced user content** - Deep technical details for expert users
4. **Lookup-oriented content** - Rule libraries, configuration matrices, comprehensive lists
5. **Language-specific patterns** - Separate files per language/framework
6. **Historical context** - Old patterns and deprecated approaches
**Keep in SKILL.md**:
- Core workflows (top 3-5 use cases)
- Decision points and branching logic
- Quick start guidance
- Essential security considerations
---
## Document Organization
### Structure for Long Documents
For references >100 lines:
```markdown
# Title
**When to use**: Clear trigger statement
**Purpose**: What this provides
## Table of Contents
- Links to all major sections
## Quick Reference
- Key facts or commands for fast lookup
## Detailed Content
- Comprehensive information organized logically
## Framework Mappings
- OWASP, CWE, MITRE ATT&CK references
## Examples
- Code samples and patterns
```
### Section Naming Conventions
- Use **imperative** or **declarative** headings
- ✅ "Detecting SQL Injection" not "How to detect SQL Injection"
- ✅ "Common Patterns" not "These are common patterns"
- Make headings **searchable** and **specific**
---
## Detailed Technical Content
This section demonstrates the type of detailed content that belongs in references rather than SKILL.md.
### Example: Comprehensive Vulnerability Detection
#### SQL Injection Detection Patterns
**Pattern 1: String Concatenation in Queries**
```python
# Vulnerable pattern
query = "SELECT * FROM users WHERE id = " + user_id
cursor.execute(query)
# Detection criteria:
# - SQL keyword (SELECT, INSERT, UPDATE, DELETE)
# - String concatenation operator (+, f-string)
# - Variable user input (request params, form data)
# Severity: HIGH
# CWE: CWE-89
# OWASP: A03:2021 - Injection
```
**Remediation**:
```python
# Fixed: Parameterized query
query = "SELECT * FROM users WHERE id = ?"
cursor.execute(query, (user_id,))
# OR using ORM
user = User.objects.get(id=user_id)
```
**Pattern 2: Unsafe String Formatting**
```python
# Vulnerable patterns
query = f"SELECT * FROM users WHERE name = '{username}'"
query = "SELECT * FROM users WHERE name = '%s'" % username
query = "SELECT * FROM users WHERE name = '{}'".format(username)
# All three patterns are vulnerable to SQL injection
```
#### Cross-Site Scripting (XSS) Detection
**Pattern 1: Unescaped Output in Templates**
```javascript
// Vulnerable: Direct HTML injection
element.innerHTML = userInput;
document.write(userInput);
// Vulnerable: React dangerouslySetInnerHTML
<div dangerouslySetInnerHTML={{__html: userComment}} />
// Detection criteria:
// - Direct DOM manipulation (innerHTML, document.write)
// - React dangerouslySetInnerHTML with user data
// - Template engines with autoescaping disabled
// Severity: HIGH
// CWE: CWE-79
// OWASP: A03:2021 - Injection
```
**Remediation**:
```javascript
// Fixed: Escaped output
element.textContent = userInput; // Auto-escapes
// Fixed: Sanitization library
import DOMPurify from 'dompurify';
const clean = DOMPurify.sanitize(userComment);
<div dangerouslySetInnerHTML={{__html: clean}} />
```
---
## Security Framework Mappings
This section provides comprehensive security framework mappings for findings.
### OWASP Top 10
Map security findings to OWASP Top 10 (2021) categories:
| Category | Title | Common Vulnerabilities |
|----------|-------|----------------------|
| **A01:2021** | Broken Access Control | Authorization bypass, privilege escalation, IDOR |
| **A02:2021** | Cryptographic Failures | Weak crypto, plaintext storage, insecure TLS |
| **A03:2021** | Injection | SQL injection, XSS, command injection, LDAP injection |
| **A04:2021** | Insecure Design | Missing security controls, threat modeling gaps |
| **A05:2021** | Security Misconfiguration | Default configs, verbose errors, unnecessary features |
| **A06:2021** | Vulnerable Components | Outdated libraries, unpatched dependencies |
| **A07:2021** | Auth & Session Failures | Weak passwords, session fixation, missing MFA |
| **A08:2021** | Software & Data Integrity | Unsigned updates, insecure CI/CD, deserialization |
| **A09:2021** | Logging & Monitoring Failures | Insufficient logging, no alerting, log injection |
| **A10:2021** | SSRF | Server-side request forgery, unvalidated redirects |
**Usage**: When reporting findings, map to primary OWASP category and reference the identifier (e.g., "A03:2021 - Injection").
### CWE Mappings
Map to relevant Common Weakness Enumeration categories for precise vulnerability classification:
#### Injection Vulnerabilities
- **CWE-78**: OS Command Injection
- **CWE-79**: Cross-site Scripting (XSS)
- **CWE-89**: SQL Injection
- **CWE-90**: LDAP Injection
- **CWE-91**: XML Injection
- **CWE-94**: Code Injection
#### Authentication & Authorization
- **CWE-287**: Improper Authentication
- **CWE-288**: Authentication Bypass Using Alternate Path
- **CWE-290**: Authentication Bypass by Spoofing
- **CWE-294**: Authentication Bypass by Capture-replay
- **CWE-306**: Missing Authentication for Critical Function
- **CWE-307**: Improper Restriction of Excessive Authentication Attempts
- **CWE-352**: Cross-Site Request Forgery (CSRF)
#### Cryptographic Issues
- **CWE-256**: Plaintext Storage of Password
- **CWE-259**: Use of Hard-coded Password
- **CWE-261**: Weak Encoding for Password
- **CWE-321**: Use of Hard-coded Cryptographic Key
- **CWE-326**: Inadequate Encryption Strength
- **CWE-327**: Use of Broken or Risky Cryptographic Algorithm
- **CWE-329**: Not Using a Random IV with CBC Mode
- **CWE-798**: Use of Hard-coded Credentials
#### Input Validation
- **CWE-20**: Improper Input Validation
- **CWE-73**: External Control of File Name or Path
- **CWE-434**: Unrestricted Upload of File with Dangerous Type
- **CWE-601**: URL Redirection to Untrusted Site
#### Sensitive Data Exposure
- **CWE-200**: Information Exposure
- **CWE-209**: Information Exposure Through Error Message
- **CWE-312**: Cleartext Storage of Sensitive Information
- **CWE-319**: Cleartext Transmission of Sensitive Information
- **CWE-532**: Information Exposure Through Log Files
**Usage**: Include CWE identifier in all vulnerability reports for standardized classification.
### MITRE ATT&CK
Reference relevant tactics and techniques for threat context:
#### Initial Access (TA0001)
- **T1190**: Exploit Public-Facing Application
- **T1133**: External Remote Services
- **T1078**: Valid Accounts
#### Execution (TA0002)
- **T1059**: Command and Scripting Interpreter
- **T1203**: Exploitation for Client Execution
#### Persistence (TA0003)
- **T1098**: Account Manipulation
- **T1136**: Create Account
- **T1505**: Server Software Component
#### Privilege Escalation (TA0004)
- **T1068**: Exploitation for Privilege Escalation
- **T1548**: Abuse Elevation Control Mechanism
#### Defense Evasion (TA0005)
- **T1027**: Obfuscated Files or Information
- **T1140**: Deobfuscate/Decode Files or Information
- **T1562**: Impair Defenses
#### Credential Access (TA0006)
- **T1110**: Brute Force
- **T1555**: Credentials from Password Stores
- **T1552**: Unsecured Credentials
#### Discovery (TA0007)
- **T1083**: File and Directory Discovery
- **T1046**: Network Service Scanning
#### Collection (TA0009)
- **T1005**: Data from Local System
- **T1114**: Email Collection
#### Exfiltration (TA0010)
- **T1041**: Exfiltration Over C2 Channel
- **T1567**: Exfiltration Over Web Service
**Usage**: When identifying vulnerabilities, consider which ATT&CK techniques an attacker could use to exploit them.
---
## Remediation Patterns
This section provides specific remediation guidance for common vulnerability types.
### SQL Injection Remediation
**Step 1: Identify vulnerable queries**
- Search for string concatenation in SQL queries
- Check for f-strings or format() with SQL keywords
- Review all database interaction code
**Step 2: Apply parameterized queries**
```python
# Python with sqlite3
cursor.execute("SELECT * FROM users WHERE id = ?", (user_id,))
# Python with psycopg2 (PostgreSQL)
cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))
# Python with SQLAlchemy (ORM)
from sqlalchemy import text
result = session.execute(text("SELECT * FROM users WHERE id = :id"), {"id": user_id})
```
**Step 3: Validate and sanitize input** (defense in depth)
```python
import re
# Validate input format
if not re.match(r'^\d+$', user_id):
raise ValueError("Invalid user ID format")
# Use ORM query builders
user = User.query.filter_by(id=user_id).first()
```
**Step 4: Implement least privilege**
- Database user should have minimum required permissions
- Use read-only accounts for SELECT operations
- Never use admin/root accounts for application queries
### XSS Remediation
**Step 1: Enable auto-escaping**
- Most modern frameworks escape by default
- Ensure auto-escaping is not disabled
**Step 2: Use framework-specific safe methods**
```javascript
// React: Use JSX (auto-escapes)
<div>{userInput}</div>
// Vue: Use template syntax (auto-escapes)
<div>{{ userInput }}</div>
// Angular: Use property binding (auto-escapes)
<div [textContent]="userInput"></div>
```
**Step 3: Sanitize when HTML is required**
```javascript
import DOMPurify from 'dompurify';
// Sanitize HTML content
const clean = DOMPurify.sanitize(userHTML, {
ALLOWED_TAGS: ['b', 'i', 'em', 'strong', 'p'],
ALLOWED_ATTR: []
});
```
**Step 4: Content Security Policy (CSP)**
```html
<!-- Set as an HTTP response header (shown as a comment; not valid HTML markup): -->
<!-- Content-Security-Policy: default-src 'self'; script-src 'self' 'nonce-{random}' -->
```
---
## Advanced Configuration
This section contains detailed configuration options and tuning parameters.
### Example: SAST Tool Configuration
```yaml
# Advanced security scanner configuration
scanner:
# Severity threshold
severity_threshold: MEDIUM
# Rule configuration
rules:
enabled:
- sql-injection
- xss
- hardcoded-secrets
disabled:
- informational-only
# False positive reduction
confidence_threshold: HIGH
exclude_patterns:
- "*/test/*"
- "*/tests/*"
- "*/node_modules/*"
- "*.test.js"
- "*.spec.ts"
# Performance tuning
max_file_size_kb: 2048
timeout_seconds: 300
parallel_jobs: 4
# Output configuration
output_format: json
include_code_snippets: true
max_snippet_lines: 10
```
---
## Examples and Code Samples
This section provides comprehensive code examples for various scenarios.
### Example 1: Secure API Authentication
```python
# Secure API key handling
import os
from functools import wraps
from flask import Flask, request, jsonify
app = Flask(__name__)
# Load API key from environment (never hardcode)
VALID_API_KEY = os.environ.get('API_KEY')
if not VALID_API_KEY:
raise ValueError("API_KEY environment variable not set")
def require_api_key(f):
@wraps(f)
def decorated_function(*args, **kwargs):
api_key = request.headers.get('X-API-Key')
if not api_key:
return jsonify({'error': 'API key required'}), 401
# Constant-time comparison to prevent timing attacks
import hmac
if not hmac.compare_digest(api_key, VALID_API_KEY):
return jsonify({'error': 'Invalid API key'}), 403
return f(*args, **kwargs)
return decorated_function
@app.route('/api/secure-endpoint')
@require_api_key
def secure_endpoint():
return jsonify({'message': 'Access granted'})
```
### Example 2: Secure Password Hashing
```python
# Secure password storage with bcrypt
import bcrypt
def hash_password(password: str) -> str:
"""Hash a password using bcrypt."""
# Generate salt and hash password
salt = bcrypt.gensalt(rounds=12) # Cost factor: 12 (industry standard)
hashed = bcrypt.hashpw(password.encode('utf-8'), salt)
return hashed.decode('utf-8')
def verify_password(password: str, hashed: str) -> bool:
"""Verify a password against a hash."""
return bcrypt.checkpw(
password.encode('utf-8'),
hashed.encode('utf-8')
)
# Usage
stored_hash = hash_password("user_password")
is_valid = verify_password("user_password", stored_hash) # True
```
### Example 3: Secure File Upload
```python
# Secure file upload with validation
import os
import magic
from werkzeug.utils import secure_filename
ALLOWED_EXTENSIONS = {'pdf', 'png', 'jpg', 'jpeg'}
ALLOWED_MIME_TYPES = {
'application/pdf',
'image/png',
'image/jpeg'
}
MAX_FILE_SIZE = 5 * 1024 * 1024 # 5 MB
def is_allowed_file(filename: str, file_content: bytes) -> bool:
"""Validate file extension and MIME type."""
# Check extension
if '.' not in filename:
return False
ext = filename.rsplit('.', 1)[1].lower()
if ext not in ALLOWED_EXTENSIONS:
return False
# Check MIME type (prevent extension spoofing)
mime = magic.from_buffer(file_content, mime=True)
if mime not in ALLOWED_MIME_TYPES:
return False
return True
def handle_upload(file):
"""Securely handle file upload."""
# Check file size
file.seek(0, os.SEEK_END)
size = file.tell()
file.seek(0)
if size > MAX_FILE_SIZE:
raise ValueError("File too large")
# Read content for validation
content = file.read()
file.seek(0)
# Validate file type
if not is_allowed_file(file.filename, content):
raise ValueError("Invalid file type")
# Sanitize filename
filename = secure_filename(file.filename)
# Generate unique filename to prevent overwrite attacks
import uuid
unique_filename = f"{uuid.uuid4()}_{filename}"
# Save to secure location (outside web root)
upload_path = os.path.join('/secure/uploads', unique_filename)
file.save(upload_path)
return unique_filename
```
---
## Best Practices for Reference Documents
1. **Start with "When to use"** - Help Claude know when to load this reference
2. **Include table of contents** - For documents >100 lines
3. **Use concrete examples** - Code samples with vulnerable and fixed versions
4. **Map to frameworks** - OWASP, CWE, MITRE ATT&CK for context
5. **Provide remediation** - Don't just identify issues, show how to fix them
6. **Organize logically** - Group related content, use clear headings
7. **Keep examples current** - Use modern patterns and current framework versions
8. **Be concise** - Even in references, challenge every sentence

View File

@@ -0,0 +1,253 @@
# Workflow Checklist Template
This template demonstrates workflow patterns for security operations. Copy and adapt these checklists to your specific skill needs.
## Pattern 1: Sequential Workflow Checklist
Use this pattern for operations that must be completed in order, step-by-step.
### Security Assessment Workflow
Progress:
[ ] 1. Identify application entry points and attack surface
[ ] 2. Map authentication and authorization flows
[ ] 3. Identify data flows and sensitive data handling
[ ] 4. Review existing security controls
[ ] 5. Document findings with framework references (OWASP, CWE)
[ ] 6. Prioritize findings by severity (CVSS scores)
[ ] 7. Generate report with remediation recommendations
Work through each step systematically. Check off completed items.
---
## Pattern 2: Conditional Workflow
Use this pattern when the workflow branches based on findings or conditions.
### Vulnerability Remediation Workflow
1. Identify vulnerability type
- If SQL Injection → See [sql-injection-remediation.md](sql-injection-remediation.md)
- If XSS (Cross-Site Scripting) → See [xss-remediation.md](xss-remediation.md)
- If Authentication flaw → See [auth-remediation.md](auth-remediation.md)
- If Authorization flaw → See [authz-remediation.md](authz-remediation.md)
- If Cryptographic issue → See [crypto-remediation.md](crypto-remediation.md)
2. Assess severity using CVSS calculator
- If CVSS >= 9.0 → Priority: Critical (immediate action)
- If CVSS 7.0-8.9 → Priority: High (action within 24h)
- If CVSS 4.0-6.9 → Priority: Medium (action within 1 week)
- If CVSS < 4.0 → Priority: Low (action within 30 days)
3. Apply appropriate remediation pattern
4. Validate fix with security testing
5. Document changes and update security documentation
---
## Pattern 3: Iterative Workflow
Use this pattern for operations that repeat across multiple targets or items.
### Code Security Review Workflow
For each file in the review scope:
1. Identify security-sensitive operations (auth, data access, crypto, input handling)
2. Check against secure coding patterns for the language
3. Flag potential vulnerabilities with severity rating
4. Map findings to CWE and OWASP categories
5. Suggest specific remediation approaches
6. Document finding with code location and fix priority
Continue until all files in scope have been reviewed.
---
## Pattern 4: Feedback Loop Workflow
Use this pattern when validation and iteration are required.
### Secure Configuration Generation Workflow
1. Generate initial security configuration based on requirements
2. Run validation script: `./scripts/validate_config.py config.yaml`
3. Review validation output:
- Note all errors (must fix)
- Note all warnings (should fix)
- Note all info items (consider)
4. Fix identified issues in configuration
5. Repeat steps 2-4 until validation passes with zero errors
6. Review warnings and determine if they should be addressed
7. Apply configuration once validation is clean
**Validation Loop**: Run validator → Fix errors → Repeat until clean
---
## Pattern 5: Parallel Analysis Workflow
Use this pattern when multiple independent analyses can run concurrently.
### Comprehensive Security Scan Workflow
Run these scans in parallel:
**Static Analysis**:
[ ] 1a. Run SAST scan (Semgrep/Bandit)
[ ] 1b. Run dependency vulnerability scan (Safety/npm audit)
[ ] 1c. Run secrets detection (Gitleaks/TruffleHog)
[ ] 1d. Run license compliance check
**Dynamic Analysis**:
[ ] 2a. Run DAST scan (ZAP/Burp)
[ ] 2b. Run API security testing
[ ] 2c. Run authentication/authorization testing
**Infrastructure Analysis**:
[ ] 3a. Run infrastructure-as-code scan (Checkov/tfsec)
[ ] 3b. Run container image scan (Trivy/Grype)
[ ] 3c. Run configuration review
**Consolidation**:
[ ] 4. Aggregate all findings
[ ] 5. Deduplicate and correlate findings (see the sketch after this checklist)
[ ] 6. Prioritize by risk (CVSS + exploitability + business impact)
[ ] 7. Generate unified security report
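A minimal sketch of steps 4 to 6, assuming findings from each scanner have already been normalized into dicts with `cve`, `package`, `severity`, and `source` keys (illustrative field names, not any specific tool's schema):
```python
# Minimal consolidation sketch: deduplicate findings from several scanners
# by (CVE, package) and keep the highest severity reported for each pair.
SEVERITY_RANK = {"critical": 4, "high": 3, "medium": 2, "low": 1, "unknown": 0}

def rank(severity: str) -> int:
    return SEVERITY_RANK.get(severity.lower(), 0)

def consolidate(findings):
    merged = {}
    for finding in findings:
        key = (finding["cve"], finding["package"])
        existing = merged.get(key)
        if existing is None:
            merged[key] = {**finding, "sources": {finding["source"]}}
        else:
            existing["sources"].add(finding["source"])
            if rank(finding["severity"]) > rank(existing["severity"]):
                existing["severity"] = finding["severity"]
    # Highest severity first; within a tier, findings seen by more tools first
    return sorted(merged.values(),
                  key=lambda f: (rank(f["severity"]), len(f["sources"])),
                  reverse=True)
```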
---
## Pattern 6: Research and Documentation Workflow
Use this pattern for security research and documentation tasks.
### Threat Modeling Workflow
Research Progress:
[ ] 1. Identify system components and boundaries
[ ] 2. Map data flows between components
[ ] 3. Identify trust boundaries
[ ] 4. Enumerate assets (data, services, credentials)
[ ] 5. Apply STRIDE framework to each component:
- Spoofing threats
- Tampering threats
- Repudiation threats
- Information disclosure threats
- Denial of service threats
- Elevation of privilege threats
[ ] 6. Map threats to MITRE ATT&CK techniques
[ ] 7. Identify existing mitigations
[ ] 8. Document residual risks
[ ] 9. Recommend additional security controls
[ ] 10. Generate threat model document
Work through each step systematically. Check off completed items.
---
## Pattern 7: Compliance Validation Workflow
Use this pattern for compliance checks against security standards.
### Security Compliance Audit Workflow
**SOC 2 Controls Review**:
[ ] 1. Review access control policies (CC6.1, CC6.2, CC6.3)
[ ] 2. Verify logical access controls implementation (CC6.1)
[ ] 3. Review authentication mechanisms (CC6.1)
[ ] 4. Verify encryption implementation (CC6.1, CC6.7)
[ ] 5. Review audit logging configuration (CC7.2)
[ ] 6. Verify security monitoring (CC7.2, CC7.3)
[ ] 7. Review incident response procedures (CC7.3, CC7.4)
[ ] 8. Verify backup and recovery processes (A1.2, A1.3)
**Evidence Collection**:
[ ] 9. Collect policy documents
[ ] 10. Collect configuration screenshots
[ ] 11. Collect audit logs
[ ] 12. Document control gaps
[ ] 13. Generate compliance report
---
## Pattern 8: Incident Response Workflow
Use this pattern for security incident handling.
### Security Incident Response Workflow
**Detection and Analysis**:
[ ] 1. Confirm security incident (rule out false positive)
[ ] 2. Determine incident severity (SEV1/2/3/4)
[ ] 3. Identify affected systems and data
[ ] 4. Preserve evidence (logs, memory dumps, network captures)
**Containment**:
[ ] 5. Isolate affected systems (network segmentation)
[ ] 6. Disable compromised accounts
[ ] 7. Block malicious indicators (IPs, domains, hashes)
[ ] 8. Implement temporary compensating controls
**Eradication**:
[ ] 9. Identify root cause
[ ] 10. Remove malicious artifacts (malware, backdoors, webshells)
[ ] 11. Patch the vulnerabilities that were exploited
[ ] 12. Reset compromised credentials
**Recovery**:
[ ] 13. Restore systems from clean backups (if needed)
[ ] 14. Re-enable systems with monitoring
[ ] 15. Verify system integrity
[ ] 16. Resume normal operations
**Post-Incident**:
[ ] 17. Document incident timeline
[ ] 18. Identify lessons learned
[ ] 19. Update security controls to prevent recurrence
[ ] 20. Update incident response procedures
[ ] 21. Communicate with stakeholders
---
## Usage Guidelines
### When to Use Workflow Checklists
**Use checklists for**:
- Complex multi-step operations
- Operations requiring specific order
- Security assessments and audits
- Incident response procedures
- Compliance validation tasks
**Don't use checklists for**:
- Simple single-step operations
- Highly dynamic exploratory work
- Operations that vary significantly each time
### Adapting This Template
1. **Copy relevant pattern** to your skill's SKILL.md or create new reference file
2. **Customize steps** to match your specific security tool or process
3. **Add framework references** (OWASP, CWE, NIST) where applicable
4. **Include tool-specific commands** for automation
5. **Add decision points** where manual judgment is required
### Checklist Best Practices
- **Be specific**: "Run semgrep --config=auto ." not "Scan the code"
- **Include success criteria**: "Validation passes with 0 errors"
- **Reference standards**: Link to OWASP, CWE, NIST where relevant
- **Show progress**: Checkbox format helps track completion
- **Provide escape hatches**: "If validation fails, see troubleshooting.md"
### Integration with Feedback Loops
Combine checklists with validation scripts for maximum effectiveness:
1. Create checklist for the workflow
2. Provide validation script that checks quality
3. Include "run validator" step in checklist
4. Loop: Complete step → Validate → Fix issues → Re-validate
This pattern dramatically improves output quality through systematic validation.

View File

@@ -0,0 +1,225 @@
# CISA Known Exploited Vulnerabilities (KEV) Catalog
CISA's Known Exploited Vulnerabilities (KEV) catalog identifies CVEs with confirmed active exploitation in the wild.
## Table of Contents
- [What is KEV](#what-is-kev)
- [Why KEV Matters](#why-kev-matters)
- [KEV in Grype](#kev-in-grype)
- [Remediation Urgency](#remediation-urgency)
- [Federal Requirements](#federal-requirements)
## What is KEV
The Cybersecurity and Infrastructure Security Agency (CISA) maintains a catalog of vulnerabilities that:
1. Have **confirmed active exploitation** in real-world attacks
2. Present **significant risk** to federal enterprise and critical infrastructure
3. Require **prioritized remediation**
**Key Points**:
- KEV listings indicate **active, ongoing exploitation**, not theoretical risk
- Being in KEV catalog means attackers have weaponized the vulnerability
- KEV CVEs should be treated as **highest priority** regardless of CVSS score
## Why KEV Matters
### Active Threat Indicator
**KEV presence means**:
- Exploit code is publicly available or in active use by threat actors
- Attackers are successfully exploiting this vulnerability
- Your organization is likely a target if running vulnerable software
### Prioritization Signal
**CVSS vs KEV**:
- CVSS: Theoretical severity based on technical characteristics
- KEV: Proven real-world exploitation
**Example**:
- CVE with CVSS 6.5 (Medium) but KEV listing → **Prioritize over CVSS 9.0 (Critical) without KEV**
- Active exploitation trumps theoretical severity
### Compliance Requirement
**BOD 22-01**: Federal agencies must remediate KEV vulnerabilities within specified timeframes
- Many commercial organizations adopt similar policies
- SOC2, PCI-DSS, and other frameworks increasingly reference KEV
## KEV in Grype
### Detecting KEV in Scans
Grype includes KEV data in vulnerability assessments:
```bash
# Standard scan includes KEV indicators
grype <image> -o json > results.json
# Check for KEV matches
grep -i "kev" results.json
```
**Grype output indicators**:
- `dataSource` field may include KEV references
- Some vulnerabilities explicitly marked as CISA KEV
### Filtering KEV Vulnerabilities
Use the prioritization script to extract KEV matches:
```bash
./scripts/prioritize_cves.py results.json
```
Output shows `[KEV]` indicator for confirmed KEV vulnerabilities.
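As a rough illustration of the kind of filtering such a script performs (not the bundled script itself), the sketch below walks Grype's `matches` array and applies the same case-insensitive KEV heuristic as the `grep` example above; the exact KEV fields vary by Grype version:
```python
#!/usr/bin/env python3
# Rough illustration only; the real scripts/prioritize_cves.py may differ.
# Flags Grype matches whose vulnerability record references the CISA KEV.
import json
import sys

def kev_matches(results_path):
    with open(results_path) as f:
        report = json.load(f)
    flagged = []
    for match in report.get("matches", []):
        vuln = match.get("vulnerability", {})
        artifact = match.get("artifact", {})
        # Same heuristic as `grep -i kev`: look for KEV references anywhere
        # in the vulnerability record (exact fields vary by Grype version).
        if "kev" in json.dumps(vuln).lower():
            flagged.append((vuln.get("id"), artifact.get("name"), artifact.get("version")))
    return flagged

if __name__ == "__main__":
    for cve, pkg, version in kev_matches(sys.argv[1]):
        print(f"[KEV] {cve} in {pkg} {version}")
```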
### Automated KEV Alerting
Integrate KEV detection into CI/CD:
```bash
# Fail build on any KEV vulnerability
grype <image> -o json | \
jq '.matches[] | select(.vulnerability.dataSource | contains("KEV"))' | \
jq -s 'if length > 0 then error("KEV vulnerabilities found") else empty end'
```
## Remediation Urgency
### BOD 22-01 Timeframes
CISA Binding Operational Directive 22-01 requires:
| Vulnerability Type | Remediation Deadline |
|-------------------|---------------------|
| CVE ID assigned before 2021 | Within 6 months of the directive |
| CVE ID assigned in 2021 or later | Within 2 weeks of the directive |
| Newly added KEV entries | By the due date CISA assigns in the catalog entry |
### Commercial Best Practices
**Recommended SLAs for KEV vulnerabilities**:
1. **Immediate Response (0-24 hours)**:
- Assess exposure and affected systems
- Implement temporary mitigations (disable feature, block network access)
- Notify security leadership and stakeholders
2. **Emergency Patching (24-48 hours)**:
- Deploy patches to production systems
- Validate remediation with re-scan
- Document patch deployment
3. **Validation and Monitoring (48-72 hours)**:
- Verify all instances patched
- Check logs for exploitation attempts
- Update detection rules and threat intelligence
### Temporary Mitigations
If immediate patching is not possible:
**Network-Level Controls**:
- Block external access to vulnerable services
- Segment vulnerable systems from critical assets
- Deploy Web Application Firewall (WAF) rules
**Application-Level Controls**:
- Disable vulnerable features or endpoints
- Implement additional authentication requirements
- Enable enhanced logging and monitoring
**Operational Controls**:
- Increase security monitoring for affected systems
- Deploy compensating detective controls
- Schedule emergency maintenance window
## Federal Requirements
### Binding Operational Directive 22-01
**Scope**: All federal civilian executive branch (FCEB) agencies
**Requirements**:
1. Remediate KEV vulnerabilities within required timeframes
2. Report remediation status to CISA
3. Document exceptions and compensating controls
**Penalties**: Non-compliance may result in:
- Required reporting to agency leadership
- Escalation to Office of Management and Budget (OMB)
- Potential security authorization impacts
### Extending to Commercial Organizations
Many commercial organizations adopt KEV-based policies:
**Rationale**:
- KEV represents highest-priority threats
- Federal government invests in threat intelligence
- Following KEV reduces actual breach risk
**Implementation**:
- Monitor KEV catalog for relevant CVEs
- Integrate KEV data into vulnerability management
- Define internal KEV remediation SLAs
- Report KEV status to leadership and audit teams
## Monitoring KEV Updates
### CISA KEV Catalog
Access the catalog:
- **Web**: https://www.cisa.gov/known-exploited-vulnerabilities-catalog
- **JSON**: https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json
- **CSV**: https://www.cisa.gov/sites/default/files/csv/known_exploited_vulnerabilities.csv
### Automated Monitoring
Track new KEV additions:
```bash
# Download current KEV catalog
curl -s https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json \
-o kev-catalog.json
# Compare against previous download
diff kev-catalog-previous.json kev-catalog.json
```
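A Python alternative to the diff approach above, assuming the feed's `vulnerabilities` list with `cveID` and `dateAdded` fields and a previously saved snapshot on disk:
```python
# Report KEV entries added since a previously saved snapshot.
import json
import urllib.request

KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")

def new_kev_entries(previous_snapshot_path):
    with urllib.request.urlopen(KEV_URL) as resp:
        current = json.load(resp)
    with open(previous_snapshot_path) as f:
        previous = json.load(f)
    known = {v["cveID"] for v in previous.get("vulnerabilities", [])}
    return [v for v in current.get("vulnerabilities", []) if v["cveID"] not in known]

if __name__ == "__main__":
    for entry in new_kev_entries("kev-catalog-previous.json"):
        print(f'{entry["cveID"]} added {entry.get("dateAdded", "?")}')
```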
**Subscribe to updates**:
- CISA cybersecurity alerts: https://www.cisa.gov/cybersecurity-alerts
- RSS feeds for KEV additions
- Security vendor threat intelligence feeds
## Response Workflow
### KEV Vulnerability Detected
Progress:
[ ] 1. **Identify** affected systems: Run Grype scan across all environments
[ ] 2. **Assess** exposure: Determine if vulnerable systems are internet-facing or critical
[ ] 3. **Contain** risk: Implement temporary mitigations (network blocks, feature disable)
[ ] 4. **Remediate**: Deploy patches or upgrades to all affected systems
[ ] 5. **Validate**: Re-scan with Grype to confirm vulnerability resolved
[ ] 6. **Monitor**: Review logs for exploitation attempts during vulnerable window
[ ] 7. **Document**: Record timeline, actions taken, and lessons learned
Work through each step systematically. Check off completed items.
### Post-Remediation Analysis
After resolving KEV vulnerability:
1. **Threat Hunting**: Search logs for indicators of compromise (IOC)
2. **Root Cause**: Determine why vulnerable software was deployed
3. **Process Improvement**: Update procedures to prevent recurrence
4. **Reporting**: Notify stakeholders and compliance teams
## References
- [CISA KEV Catalog](https://www.cisa.gov/known-exploited-vulnerabilities-catalog)
- [BOD 22-01: Reducing the Significant Risk of Known Exploited Vulnerabilities](https://www.cisa.gov/news-events/directives/bod-22-01-reducing-significant-risk-known-exploited-vulnerabilities)
- [KEV Catalog JSON Feed](https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json)
- [CISA Cybersecurity Alerts](https://www.cisa.gov/cybersecurity-alerts)

View File

@@ -0,0 +1,210 @@
# CVSS Severity Rating Guide
Common Vulnerability Scoring System (CVSS) is a standardized framework for rating vulnerability severity.
## Table of Contents
- [CVSS Score Ranges](#cvss-score-ranges)
- [Severity Ratings](#severity-ratings)
- [CVSS Metrics](#cvss-metrics)
- [Interpreting Scores](#interpreting-scores)
- [Remediation SLAs](#remediation-slas)
## CVSS Score Ranges
| CVSS Score | Severity Rating | Description |
|------------|----------------|-------------|
| 0.0 | None | No vulnerability |
| 0.1 - 3.9 | Low | Minimal security impact |
| 4.0 - 6.9 | Medium | Moderate security impact |
| 7.0 - 8.9 | High | Significant security impact |
| 9.0 - 10.0 | Critical | Severe security impact |
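These bands translate directly into a lookup for report tooling; a small helper mirroring the table above (a sketch, not an official library):
```python
def cvss_rating(score: float) -> str:
    """Map a CVSS v3.x base score to its qualitative rating band."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

assert cvss_rating(6.5) == "Medium"
assert cvss_rating(9.8) == "Critical"
```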
## Severity Ratings
### Critical (9.0 - 10.0)
**Characteristics**:
- Trivial to exploit
- No user interaction required
- Remote code execution or complete system compromise
- Affects default configurations
**Examples**:
- Unauthenticated remote code execution
- Critical SQL injection allowing full database access
- Authentication bypass in critical services
**Action**: Remediate immediately (within 24-48 hours)
### High (7.0 - 8.9)
**Characteristics**:
- Easy to exploit with moderate skill
- May require user interaction or specific conditions
- Significant data exposure or privilege escalation
- Affects common configurations
**Examples**:
- Authenticated remote code execution
- Cross-site scripting (XSS) in privileged contexts
- Privilege escalation vulnerabilities
**Action**: Remediate within 7 days
### Medium (4.0 - 6.9)
**Characteristics**:
- Requires specific conditions or elevated privileges
- Limited impact or scope
- May require local access or user interaction
**Examples**:
- Information disclosure of non-sensitive data
- Denial of service with mitigating factors
- Cross-site request forgery (CSRF)
**Action**: Remediate within 30 days
### Low (0.1 - 3.9)
**Characteristics**:
- Difficult to exploit
- Minimal security impact
- Requires significant user interaction or unlikely conditions
**Examples**:
- Information leakage of minimal data
- Low-impact denial of service
- Security misconfigurations with limited exposure
**Action**: Remediate within 90 days or next maintenance cycle
## CVSS Metrics
CVSS v3.1 scores are calculated from three metric groups:
### Base Metrics (Primary Factors)
**Attack Vector (AV)**:
- Network (N): Remotely exploitable
- Adjacent (A): Requires local network access
- Local (L): Requires local system access
- Physical (P): Requires physical access
**Attack Complexity (AC)**:
- Low (L): No specialized conditions required
- High (H): Requires specific conditions or expert knowledge
**Privileges Required (PR)**:
- None (N): No authentication needed
- Low (L): Basic user privileges required
- High (H): Administrator privileges required
**User Interaction (UI)**:
- None (N): No user interaction required
- Required (R): Requires user action (e.g., clicking a link)
**Scope (S)**:
- Unchanged (U): Vulnerability affects only the vulnerable component
- Changed (C): Vulnerability affects resources beyond the vulnerable component
**Impact Metrics** (Confidentiality, Integrity, Availability):
- None (N): No impact
- Low (L): Limited impact
- High (H): Total or serious impact
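Base metrics are normally exchanged as a vector string such as `CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H`; the sketch below composes one from metric abbreviations and leaves numeric scoring to the FIRST calculator referenced at the end of this guide:
```python
# Compose a CVSS v3.1 base vector string from metric abbreviations.
# Validation here is minimal; score calculation is not implemented.
BASE_METRICS = ["AV", "AC", "PR", "UI", "S", "C", "I", "A"]

def cvss_vector(**metrics: str) -> str:
    missing = [m for m in BASE_METRICS if m not in metrics]
    if missing:
        raise ValueError(f"Missing base metrics: {missing}")
    return "CVSS:3.1/" + "/".join(f"{m}:{metrics[m]}" for m in BASE_METRICS)

# Network-exploitable, no privileges or interaction, high C/I/A impact
print(cvss_vector(AV="N", AC="L", PR="N", UI="N", S="U", C="H", I="H", A="H"))
# -> CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
```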
### Temporal Metrics (Optional)
Time-dependent factors:
- Exploit Code Maturity
- Remediation Level
- Report Confidence
### Environmental Metrics (Optional)
Organization-specific factors:
- Modified Base Metrics
- Confidentiality/Integrity/Availability Requirements
## Interpreting Scores
### Context Matters
CVSS scores should be interpreted in context:
**High-Value Systems**: Escalate severity for:
- Production systems
- Customer-facing applications
- Systems handling PII or financial data
- Critical infrastructure
**Low-Value Systems**: May de-prioritize for:
- Development/test environments
- Internal tools with limited access
- Deprecated systems scheduled for decommission
### Complementary Metrics
Consider alongside CVSS:
**EPSS (Exploit Prediction Scoring System)**:
- Probability (0-100%) that a vulnerability will be exploited in the wild
- High EPSS + High CVSS = Urgent remediation
**CISA KEV (Known Exploited Vulnerabilities)**:
- Active exploitation confirmed in the wild
- KEV presence overrides CVSS - remediate immediately
**Reachability**:
- Is the vulnerable code path actually executed?
- Is the vulnerable dependency directly or transitively included?
## Remediation SLAs
### Industry Standard SLA Examples
| Severity | Timeframe | Priority |
|----------|-----------|----------|
| Critical | 24-48 hours | P0 - Drop everything |
| High | 7 days | P1 - Next sprint |
| Medium | 30 days | P2 - Planned work |
| Low | 90 days | P3 - Maintenance cycle |
### Adjusted for Exploitability
**If CISA KEV or EPSS > 50%**:
- Reduce timeframe by 50%
- Example: High (7 days) → 3-4 days
**If proof-of-concept exists**:
- Treat High as Critical
- Treat Medium as High
**If actively exploited**:
- All severities become Critical (immediate remediation)
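These adjustments are straightforward to encode in triage tooling; the sketch below follows the SLA table and the 50% reduction rule above, with thresholds that are assumptions to tune to your own policy:
```python
# Sketch: derive a remediation deadline (in days) from severity,
# tightened for KEV listings, high EPSS, or confirmed active exploitation.
BASE_SLA_DAYS = {"Critical": 2, "High": 7, "Medium": 30, "Low": 90}

def remediation_sla_days(severity: str, *, kev: bool = False,
                         epss: float = 0.0, actively_exploited: bool = False) -> int:
    if actively_exploited:
        return BASE_SLA_DAYS["Critical"]      # treat as Critical
    days = BASE_SLA_DAYS[severity]
    if kev or epss > 0.5:
        days = max(1, days // 2)              # cut the timeframe by roughly 50%
    return days

print(remediation_sla_days("High", kev=True))     # 3 days instead of 7
print(remediation_sla_days("Medium", epss=0.72))  # 15 days instead of 30
```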
## False Positives and Suppressions
Not all reported vulnerabilities require immediate action:
### Valid Suppression Reasons
- **Not Reachable**: Vulnerable code path not executed
- **Mitigated**: Compensating controls in place (WAF, network segmentation)
- **Not Affected**: Version mismatch or platform-specific vulnerability
- **Risk Accepted**: Business decision with documented justification
### Documentation Requirements
For all suppressions:
1. CVE ID and affected package
2. Detailed justification
3. Approver and approval date
4. Review/expiration date (quarterly recommended)
5. Compensating controls if applicable
## References
- [CVSS v3.1 Specification](https://www.first.org/cvss/specification-document)
- [CVSS Calculator](https://www.first.org/cvss/calculator/3.1)
- [NVD CVSS Severity Distribution](https://nvd.nist.gov/vuln/severity-distribution)

View File

@@ -0,0 +1,510 @@
# Vulnerability Remediation Patterns
Common patterns for remediating dependency vulnerabilities detected by Grype.
## Table of Contents
- [General Remediation Strategies](#general-remediation-strategies)
- [Package Update Patterns](#package-update-patterns)
- [Base Image Updates](#base-image-updates)
- [Dependency Pinning](#dependency-pinning)
- [Compensating Controls](#compensating-controls)
- [Language-Specific Patterns](#language-specific-patterns)
## General Remediation Strategies
### Strategy 1: Direct Dependency Update
**When to use**: Vulnerability in a directly declared dependency
**Pattern**:
1. Identify fixed version from Grype output
2. Update dependency version in manifest file
3. Test application compatibility
4. Re-scan to verify fix
5. Deploy updated application
**Example**:
```bash
# Grype reports: lodash@4.17.15 has CVE-2020-8203, fixed in 4.17.19
# Update package.json
npm install lodash@4.17.19
npm test
grype dir:. --only-fixed
```
### Strategy 2: Transitive Dependency Update
**When to use**: Vulnerability in an indirect dependency
**Pattern**:
1. Identify which direct dependency includes the vulnerable package
2. Check if direct dependency has an update that resolves the issue
3. Update direct dependency or use dependency override mechanism
4. Re-scan to verify fix
**Example (npm)**:
```json
// package.json - Override transitive dependency
{
"overrides": {
"lodash": "^4.17.21"
}
}
```
**Example (pip)**:
```txt
# constraints.txt - force a minimum version for the transitive Python dependency
vulnerable-package>=2.0.0
```
### Strategy 3: Base Image Update
**When to use**: Vulnerability in OS packages from container base image
**Pattern**:
1. Identify vulnerable OS package and fixed version
2. Update to newer base image tag or rebuild with package updates
3. Re-scan updated image
4. Test application on new base image
**Example**:
```dockerfile
# Before: Alpine 3.14 with vulnerable openssl
FROM alpine:3.14
# After: Alpine 3.19 with fixed openssl
FROM alpine:3.19
# Or: Explicit package update
FROM alpine:3.14
RUN apk upgrade --no-cache openssl
```
### Strategy 4: Patch or Backport
**When to use**: No fixed version available or update breaks compatibility
**Pattern**:
1. Research if security patch exists separately from full version update
2. Apply patch using package manager's patching mechanism
3. Consider backporting fix if feasible
4. Document patch and establish review schedule
**Example (npm postinstall)**:
```json
{
"scripts": {
"postinstall": "patch-package"
}
}
```
### Strategy 5: Compensating Controls
**When to use**: Fix not available and risk must be accepted
**Pattern**:
1. Document vulnerability and risk acceptance
2. Implement network, application, or operational controls
3. Enhance monitoring and detection
4. Schedule regular review (quarterly)
5. Track for future remediation when fix becomes available
## Package Update Patterns
### Pattern: Semantic Versioning Updates
**Minor/Patch Updates** (Generally Safe):
```bash
# Python: Update to latest patch version
pip install --upgrade 'package>=1.2.0,<1.3.0'
# Node.js: Update to latest minor version
npm update package
# Go: Update to latest patch
go get -u=patch github.com/org/package
```
**Major Updates** (Breaking Changes):
```bash
# Review changelog before updating
npm show package versions
pip index versions package
# Update and test thoroughly
npm install package@3.0.0
npm test
```
### Pattern: Lock File Management
**Update specific package**:
```bash
# npm
npm install package@latest
npm install # Update lock file
# pip
pip install --upgrade package
pip freeze > requirements.txt
# Go
go get -u github.com/org/package
go mod tidy
```
**Update all dependencies**:
```bash
# npm (interactive)
npm-check-updates --interactive
# pip
pip list --outdated --format=freeze | grep -v '^\-e' | cut -d '=' -f1 | xargs -n1 pip install -U
# Go
go get -u ./...
go mod tidy
```
## Base Image Updates
### Pattern: Minimal Base Images
**Reduce attack surface with minimal images**:
```dockerfile
# ❌ Large attack surface
FROM ubuntu:22.04
# ✅ Minimal attack surface
FROM alpine:3.19
# or
FROM gcr.io/distroless/base-debian12
# ✅ Minimal for specific language
FROM python:3.11-slim
FROM node:20-alpine
```
**Benefits**:
- Fewer packages = fewer vulnerabilities
- Smaller image size
- Faster scans
### Pattern: Multi-Stage Builds
**Separate build dependencies from runtime**:
```dockerfile
# Build stage with full toolchain
FROM node:20 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Production stage with minimal image
FROM node:20-alpine AS production
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
USER node
CMD ["node", "dist/index.js"]
```
**Benefits**:
- Build tools not present in final image
- Reduced vulnerability exposure
- Smaller production image
### Pattern: Regular Base Image Updates
**Automate base image updates**:
```yaml
# Dependabot config for Dockerfile
version: 2
updates:
- package-ecosystem: "docker"
directory: "/"
schedule:
interval: "weekly"
```
**Manual update process**:
```bash
# Check for newer base image versions
docker pull alpine:3.19
docker images alpine
# Update Dockerfile
sed -i 's/FROM alpine:3.18/FROM alpine:3.19/' Dockerfile
# Rebuild and scan
docker build -t myapp:latest .
grype myapp:latest
```
## Dependency Pinning
### Pattern: Pin to Secure Versions
**Lock to known-good versions**:
```dockerfile
# ✅ Pin specific versions
FROM alpine:3.19.0@sha256:abc123...
# Install specific package versions
RUN apk add --no-cache \
ca-certificates=20240226-r0 \
openssl=3.1.4-r0
```
```json
// package.json - Exact versions
{
"dependencies": {
"express": "4.18.2",
"lodash": "4.17.21"
}
}
```
**Benefits**:
- Reproducible builds
- Controlled updates
- Prevent automatic vulnerability introduction
**Drawbacks**:
- Manual update effort
- May miss security patches
- Requires active maintenance
### Pattern: Range-Based Pinning
**Allow patch updates, lock major/minor**:
```json
// package.json - Allow patch updates
{
"dependencies": {
"express": "~4.18.2", // Allow 4.18.x
"lodash": "^4.17.21" // Allow 4.x.x
}
}
```
```txt
# requirements.txt - Compatible releases
requests>=2.31.0,<3.0.0   # allow patch/minor updates within 2.x
urllib3>=1.26.18,<2.0.0   # stay on the 1.26.x line
```
## Compensating Controls
### Pattern: Network Segmentation
**Isolate vulnerable systems**:
```yaml
# Docker Compose network isolation
services:
vulnerable-service:
image: myapp:vulnerable
networks:
- internal
# No external port exposure
gateway:
image: nginx:alpine
ports:
- "80:80"
networks:
- internal
- external
networks:
internal:
internal: true
external:
```
**Benefits**:
- Limits attack surface
- Contains potential breaches
- Buys time for proper remediation
### Pattern: Web Application Firewall (WAF)
**Block exploit attempts at perimeter**:
```nginx
# ModSecurity/OWASP Core Rule Set
location / {
modsecurity on;
modsecurity_rules_file /etc/nginx/modsec/main.conf;
proxy_pass http://vulnerable-backend;
}
```
**Virtual Patching**:
- Create WAF rules for specific CVEs
- Block known exploit patterns
- Monitor for exploitation attempts
### Pattern: Runtime Application Self-Protection (RASP)
**Detect and prevent exploitation at runtime**:
```python
# Example: Add input validation
def process_user_input(data):
# Validate against known exploit patterns
if contains_sql_injection(data):
log_security_event("SQL injection attempt blocked")
raise SecurityException("Invalid input")
return sanitize_input(data)
```
## Language-Specific Patterns
### Python
**Update vulnerable package**:
```bash
# Check for vulnerabilities
grype dir:/path/to/project -o json
# Update package
pip install --upgrade vulnerable-package
# Freeze updated dependencies
pip freeze > requirements.txt
# Verify fix
grype dir:/path/to/project
```
**Use constraints files**:
```bash
# constraints.txt
vulnerable-package>=1.2.3 # CVE-2024-XXXX fixed
# Install with constraints
pip install -r requirements.txt -c constraints.txt
```
### Node.js
**Update vulnerable package**:
```bash
# Check for vulnerabilities
npm audit
grype dir:. -o json
# Fix automatically (if possible)
npm audit fix
# Manual update
npm install package@version
# Verify fix
npm audit
grype dir:.
```
**Override transitive dependencies**:
```json
{
"overrides": {
"vulnerable-package": "^2.0.0"
}
}
```
### Go
**Update vulnerable module**:
```bash
# Check for vulnerabilities (Grype picks up go.mod from a directory scan)
grype dir:.
# Update specific module
go get -u github.com/org/vulnerable-module
# Update all modules
go get -u ./...
# Verify and tidy
go mod tidy
grype dir:.
```
### Java/Maven
**Update vulnerable dependency**:
```xml
<!-- pom.xml - Update version -->
<dependency>
<groupId>org.example</groupId>
<artifactId>vulnerable-lib</artifactId>
<version>2.0.0</version> <!-- Updated from 1.0.0 -->
</dependency>
```
**Force dependency version**:
```xml
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.example</groupId>
<artifactId>vulnerable-lib</artifactId>
<version>2.0.0</version>
</dependency>
</dependencies>
</dependencyManagement>
```
### Rust
**Update vulnerable crate**:
```bash
# Check for vulnerabilities
cargo audit
grype dir:. -o json
# Update specific crate
cargo update -p vulnerable-crate
# Update all crates
cargo update
# Verify fix
cargo audit
grype dir:.
```
## Verification Workflow
After applying any remediation:
Progress:
[ ] 1. **Re-scan**: Run Grype scan to verify vulnerability resolved (see the comparison sketch below)
[ ] 2. **Test**: Execute test suite to ensure no functionality broken
[ ] 3. **Document**: Record CVE, fix applied, and verification results
[ ] 4. **Deploy**: Roll out fix to affected environments
[ ] 5. **Monitor**: Watch for related security issues or regressions
Work through each step systematically. Check off completed items.
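For the re-scan step, a small sketch that compares two Grype JSON reports, before and after the fix, and lists which findings were resolved (assuming the standard `matches[].vulnerability.id` and `artifact` fields):
```python
# Compare two Grype JSON reports and show which findings were resolved.
import json

def finding_set(path):
    with open(path) as f:
        report = json.load(f)
    return {
        (m["vulnerability"]["id"], m["artifact"]["name"], m["artifact"].get("version", ""))
        for m in report.get("matches", [])
    }

def resolved(before_path, after_path):
    before, after = finding_set(before_path), finding_set(after_path)
    return sorted(before - after), sorted(after - before)

fixed, introduced = resolved("results-before.json", "results-after.json")
print(f"Resolved: {len(fixed)}, newly introduced: {len(introduced)}")
for cve, pkg, version in fixed:
    print(f"  fixed {cve} ({pkg} {version})")
```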
## References
- [npm Security Best Practices](https://docs.npmjs.com/security-best-practices)
- [Python Packaging Security](https://packaging.python.org/en/latest/guides/security/)
- [Go Modules Security](https://go.dev/blog/vuln)
- [OWASP Dependency Check](https://owasp.org/www-project-dependency-check/)