Initial commit

Zhongwei Li
2025-11-29 17:51:02 +08:00
commit ff1f4bd119
252 changed files with 72682 additions and 0 deletions


@@ -0,0 +1,492 @@
---
name: sbom-syft
description: >
Software Bill of Materials (SBOM) generation using Syft for container images, filesystems, and
archives. Detects packages across 28+ ecosystems with multi-format output support (CycloneDX,
SPDX, syft-json). Enables vulnerability assessment, license compliance, and supply chain security.
Use when: (1) Generating SBOMs for container images or applications, (2) Analyzing software
dependencies and packages for vulnerability scanning, (3) Tracking license compliance across
dependencies, (4) Integrating SBOM generation into CI/CD for supply chain security, (5) Creating
signed SBOM attestations for software provenance.
version: 0.1.0
maintainer: SirAppSec
category: secsdlc
tags: [sbom, syft, supply-chain, dependencies, cyclonedx, spdx, vulnerability-management, license-compliance]
frameworks: [NIST, OWASP]
dependencies:
tools: [docker]
references:
- https://github.com/anchore/syft
- https://anchore.com/sbom/
---
# Syft SBOM Generator
## Overview
Syft is a CLI tool and Go library for generating comprehensive Software Bills of Materials (SBOMs) from container images and filesystems. It provides visibility into packages and dependencies across 28+ ecosystems, supporting multiple SBOM formats (CycloneDX, SPDX) for vulnerability management, license compliance, and supply chain security.
## Supported Ecosystems
**Languages & Package Managers:**
Alpine (apk), C/C++ (conan), Dart (pub), Debian/Ubuntu (dpkg), Dotnet (deps.json), Go (go.mod), Java (JAR/WAR/EAR/Maven/Gradle), JavaScript (npm/yarn), PHP (composer), Python (pip/poetry/setup.py), Red Hat (RPM), Ruby (gem), Rust (cargo), Swift (cocoapods)
**Container & System:**
OCI images, Docker images, Singularity, container layers, Linux distributions
## Quick Start
Generate SBOM for container image:
```bash
# Using Docker
docker run --rm -v $(pwd):/out anchore/syft:latest <image> -o cyclonedx-json=/out/sbom.json
# Local installation
syft <image> -o cyclonedx-json=sbom.json
# Examples
syft alpine:latest -o cyclonedx-json
syft docker.io/nginx:latest -o spdx-json
syft dir:/path/to/project -o cyclonedx-json
```
## Core Workflows
### Workflow 1: Container Image SBOM Generation
For creating SBOMs of container images:
1. Identify target container image (local or registry)
2. Run Syft to generate SBOM:
```bash
syft <image-name:tag> -o cyclonedx-json=sbom-cyclonedx.json
```
3. Optionally generate multiple formats:
```bash
syft <image-name:tag> \
-o cyclonedx-json=sbom-cyclonedx.json \
-o spdx-json=sbom-spdx.json \
-o syft-json=sbom-syft.json
```
4. Store SBOM artifacts with image for traceability
5. Use SBOM for vulnerability scanning with Grype or other tools
6. Track SBOM versions alongside image releases
### Workflow 2: CI/CD Pipeline Integration
Progress:
[ ] 1. Add Syft to build pipeline after image creation
[ ] 2. Generate SBOM in standard format (CycloneDX or SPDX)
[ ] 3. Store SBOM as build artifact
[ ] 4. Scan SBOM for vulnerabilities (using Grype or similar)
[ ] 5. Fail build on critical vulnerabilities or license violations
[ ] 6. Publish SBOM alongside container image
[ ] 7. Integrate with vulnerability management platform
Work through each step systematically. Check off completed items.
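A minimal shell sketch of steps 2-5, assuming `syft` and `grype` are available on the build agent; Grype's `--fail-on` flag breaks the build at the chosen severity:
```bash
#!/usr/bin/env bash
set -euo pipefail
IMAGE="${1:?usage: sbom-gate.sh <image:tag>}"

# Step 2: generate the SBOM in a standard format
syft "$IMAGE" -o cyclonedx-json=sbom.json

# Step 3: keep the SBOM as a build artifact
mkdir -p artifacts && cp sbom.json artifacts/

# Steps 4-5: scan the SBOM and fail on high or critical findings
grype sbom:sbom.json --fail-on high
```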
### Workflow 3: Filesystem and Application Scanning
For generating SBOMs from source code or filesystems:
1. Navigate to project root or specify path
2. Scan directory structure:
```bash
syft dir:/path/to/project -o cyclonedx-json=app-sbom.json
```
3. Review detected packages and dependencies
4. Validate package detection accuracy (check for false positives/negatives)
5. Configure exclusions if needed (using `.syft.yaml`)
6. Generate SBOM for each release version
7. Track dependency changes between versions
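For steps 6-7, one convention (a sketch, assuming the repository carries release tags) is to name each SBOM after the current tag so versions can be diffed later with Pattern 4:
```bash
# Name the SBOM after the most recent release tag
version=$(git describe --tags --abbrev=0)
syft dir:. -o syft-json="sbom-${version}.json"
```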
### Workflow 4: SBOM Analysis and Vulnerability Scanning
Combining SBOM generation with vulnerability assessment:
1. Generate SBOM with Syft:
```bash
syft <target> -o cyclonedx-json=sbom.json
```
2. Scan SBOM for vulnerabilities using Grype:
```bash
grype sbom:sbom.json -o json --file vulnerabilities.json
```
3. Review vulnerability findings by severity
4. Filter by exploitability and fix availability
5. Prioritize remediation based on:
- CVSS score
- Active exploitation status
- Fix availability
- Dependency depth
6. Update dependencies and regenerate SBOM
7. Re-scan to verify vulnerability remediation
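For steps 3-5, the Grype JSON report can be pre-filtered with `jq`. The field names below (`matches`, `vulnerability.severity`, `vulnerability.fix.state`) reflect recent Grype releases and may change between versions, so treat this as a sketch rather than a stable schema:
```bash
# Keep only Critical/High findings, with fix state, sorted for triage
jq -r '.matches[]
| select(.vulnerability.severity == "Critical" or .vulnerability.severity == "High")
| [.vulnerability.severity, .vulnerability.fix.state, .artifact.name, .artifact.version, .vulnerability.id]
| @tsv' vulnerabilities.json | sort
```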
### Workflow 5: Signed SBOM Attestation
For creating cryptographically signed SBOM attestations:
1. Install cosign (for signing):
```bash
# macOS
brew install cosign
# Linux
wget https://github.com/sigstore/cosign/releases/latest/download/cosign-linux-amd64
chmod +x cosign-linux-amd64
mv cosign-linux-amd64 /usr/local/bin/cosign
```
2. Generate SBOM:
```bash
syft <image> -o cyclonedx-json=sbom.json
```
3. Create attestation and sign:
```bash
cosign attest --predicate sbom.json --type cyclonedx <image>
```
4. Verify attestation:
```bash
cosign verify-attestation --type cyclonedx <image>
```
5. Store signature alongside SBOM for provenance verification
## Output Formats
Syft supports multiple SBOM formats for different use cases:
| Format | Use Case | Specification |
|--------|----------|---------------|
| `cyclonedx-json` | Modern SBOM standard, wide tool support | CycloneDX 1.4+ |
| `cyclonedx-xml` | CycloneDX XML variant | CycloneDX 1.4+ |
| `spdx-json` | Linux Foundation standard | SPDX 2.3 |
| `spdx-tag-value` | SPDX text format | SPDX 2.3 |
| `syft-json` | Syft native format (most detail) | Syft-specific |
| `syft-text` | Human-readable console output | Syft-specific |
| `github-json` | GitHub dependency submission | GitHub-specific |
| `template` | Custom Go template output | User-defined |
Specify with `-o` flag:
```bash
syft <target> -o cyclonedx-json=output.json
```
## Configuration
Create `.syft.yaml` in project root or home directory:
```yaml
# Cataloger configuration
package:
cataloger:
enabled: true
scope: all-layers # Options: all-layers, squashed
search:
unindexed-archives: false
indexed-archives: true
# Exclusions
exclude:
- "**/test/**"
- "**/node_modules/**"
- "**/.git/**"
# Registry authentication
registry:
insecure-skip-tls-verify: false
auth:
- authority: registry.example.com
username: user
password: pass
# Output format defaults
output: cyclonedx-json
# Log level
log:
level: warn # Options: error, warn, info, debug, trace
```
## Common Patterns
### Pattern 1: Multi-Architecture Image Scanning
Scan all architectures of multi-platform images:
```bash
# Scan specific architecture
syft --platform linux/amd64 <image> -o cyclonedx-json=sbom-amd64.json
syft --platform linux/arm64 <image> -o cyclonedx-json=sbom-arm64.json
# Or scan manifest list (all architectures)
syft <image> --platform all -o cyclonedx-json
```
### Pattern 2: Private Registry Authentication
Access images from private registries:
```bash
# Using Docker credentials
docker login registry.example.com
syft registry.example.com/private/image:tag -o cyclonedx-json
# Using environment variables
export SYFT_REGISTRY_AUTH_AUTHORITY=registry.example.com
export SYFT_REGISTRY_AUTH_USERNAME=user
export SYFT_REGISTRY_AUTH_PASSWORD=pass
syft registry.example.com/private/image:tag -o cyclonedx-json
# Using config file (recommended)
# Add credentials to .syft.yaml
```
### Pattern 3: OCI Archive Scanning
Scan saved container images (OCI or Docker format):
```bash
# Save image to archive
docker save nginx:latest -o nginx.tar
# Scan archive
syft oci-archive:nginx.tar -o cyclonedx-json=sbom.json
# Or scan Docker archive
syft docker-archive:nginx.tar -o cyclonedx-json=sbom.json
```
### Pattern 4: Comparing SBOMs Between Versions
Track dependency changes across releases:
```bash
# Generate SBOMs for two versions
syft myapp:v1.0 -o syft-json=sbom-v1.0.json
syft myapp:v2.0 -o syft-json=sbom-v2.0.json
# Compare with jq
jq -s '{"added": (.[1].artifacts - .[0].artifacts), "removed": (.[0].artifacts - .[1].artifacts)}' \
sbom-v1.0.json sbom-v2.0.json
```
### Pattern 5: Filtering SBOM Output
Extract specific package information:
```bash
# Generate detailed SBOM
syft <target> -o syft-json=full-sbom.json
# Extract only Python packages
cat full-sbom.json | jq '.artifacts[] | select(.type == "python")'
# Extract packages with specific licenses
cat full-sbom.json | jq '.artifacts[] | select(.licenses[].value == "MIT")'
# Count packages by ecosystem
cat full-sbom.json | jq '.artifacts | group_by(.type) | map({type: .[0].type, count: length})'
```
## Security Considerations
- **Sensitive Data Handling**: SBOMs may contain internal package names and versions. Store SBOMs securely and restrict access to authorized personnel
- **Access Control**: Limit SBOM generation and access to build systems. Use read-only credentials for registry access
- **Audit Logging**: Log SBOM generation events, distribution, and access for compliance tracking
- **Compliance**: SBOMs support compliance with Executive Order 14028 (Software Supply Chain Security), NIST guidelines, and OWASP recommendations
- **Safe Defaults**: Use signed attestations for production SBOMs to ensure integrity and provenance
## Integration Points
### CI/CD Integration
**GitHub Actions:**
```yaml
- name: Generate SBOM with Syft
uses: anchore/sbom-action@v0
with:
image: ${{ env.IMAGE_NAME }}:${{ github.sha }}
format: cyclonedx-json
output-file: sbom.json
- name: Upload SBOM
uses: actions/upload-artifact@v4
with:
name: sbom
path: sbom.json
```
**GitLab CI:**
```yaml
sbom-generation:
image: anchore/syft:latest
script:
- syft $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA -o cyclonedx-json=sbom.json
artifacts:
reports:
cyclonedx: sbom.json
```
**Jenkins:**
```groovy
stage('Generate SBOM') {
steps {
sh 'syft ${IMAGE_NAME}:${BUILD_NUMBER} -o cyclonedx-json=sbom.json'
archiveArtifacts artifacts: 'sbom.json'
}
}
```
### Vulnerability Scanning
Integrate with Grype for vulnerability scanning:
```bash
# Generate SBOM and scan in one pipeline
syft <target> -o cyclonedx-json=sbom.json
grype sbom:sbom.json
```
### SBOM Distribution
Attach SBOMs to container images:
```bash
# Using ORAS
oras attach <image> --artifact-type application/vnd.cyclonedx+json sbom.json
# Using Docker manifest
# Store SBOM as additional layer or separate artifact
```
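If cosign is already in the toolchain (see Workflow 5), it can also push the SBOM to the registry next to the image. This is a sketch of the legacy `attach sbom` subcommand; newer cosign releases favor the attestation flow shown earlier:
```bash
# Push the SBOM as an OCI artifact alongside the image
cosign attach sbom --sbom sbom.json <image>

# Retrieve it later for scanning or audit
cosign download sbom <image> > sbom.json
```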
## Advanced Usage
### Custom Template Output
Create custom output formats using Go templates:
```bash
# Create template file
cat > custom-template.tmpl <<'EOF'
{{- range .Artifacts}}
{{.Name}}@{{.Version}} ({{.Type}})
{{- end}}
EOF
# Use template
syft <target> -o template -t custom-template.tmpl
```
### Scanning Specific Layers
Analyze specific layers in container images:
```bash
# Squashed view (default - final filesystem state)
syft <image> --scope squashed -o cyclonedx-json
# All layers (every layer's packages)
syft <image> --scope all-layers -o cyclonedx-json
```
### Environment Variable Configuration
Configure Syft via environment variables:
```bash
export SYFT_SCOPE=all-layers
export SYFT_OUTPUT=cyclonedx-json
export SYFT_LOG_LEVEL=debug
export SYFT_EXCLUDE="**/test/**,**/node_modules/**"
syft <target>
```
## Troubleshooting
### Issue: Missing Packages in SBOM
**Solution**: Enable all-layers scope or check for package manager files:
```bash
syft <target> --scope all-layers -o syft-json
```
Verify that package manifest files exist (package.json, requirements.txt, go.mod, etc.).
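One quick way to confirm that manifests are present in the scan target is a `find` sweep for the common files (a sketch; extend the name list for your ecosystems):
```bash
find /path/to/project -maxdepth 3 \
  \( -name package.json -o -name requirements.txt -o -name go.mod -o -name pom.xml -o -name Gemfile.lock \) \
  -not -path '*/node_modules/*'
```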
### Issue: Registry Authentication Failure
**Solution**: Ensure Docker credentials are configured or use explicit auth:
```bash
docker login <registry>
# Then run syft
syft <registry>/<image> -o cyclonedx-json
```
### Issue: Large SBOM Size
**Solution**: Use squashed scope and exclude test/dev dependencies:
```yaml
# In .syft.yaml
package:
cataloger:
scope: squashed
exclude:
- "**/test/**"
- "**/node_modules/**"
- "**/.git/**"
```
### Issue: Slow Scanning Performance
**Solution**: Disable unindexed archive scanning for faster results:
```yaml
# In .syft.yaml
package:
search:
unindexed-archives: false
```
## License Compliance
Extract license information from SBOM:
```bash
# Generate SBOM
syft <target> -o syft-json=sbom.json
# Extract unique licenses
cat sbom.json | jq -r '.artifacts[].licenses[].value' | sort -u
# Find packages with specific licenses
cat sbom.json | jq '.artifacts[] | select(.licenses[].value | contains("GPL"))'
# Generate license report
cat sbom.json | jq -r '.artifacts[] | "\(.name):\(.licenses[].value)"' | sort
```
## Vulnerability Management Workflow
Complete workflow integrating SBOM generation with vulnerability management:
Progress:
[ ] 1. Generate SBOM for application/container
[ ] 2. Scan SBOM for known vulnerabilities
[ ] 3. Classify vulnerabilities by severity and exploitability
[ ] 4. Check for available patches and updates
[ ] 5. Update vulnerable dependencies
[ ] 6. Regenerate SBOM after updates
[ ] 7. Re-scan to confirm vulnerability remediation
[ ] 8. Document accepted risks for unfixable vulnerabilities
[ ] 9. Schedule periodic SBOM regeneration and scanning
Work through each step systematically. Check off completed items.
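The recurring part of this workflow (steps 1-2, 6-7, and the periodic run in step 9) can be wrapped in a small script and scheduled from CI or cron; a sketch, assuming `syft`, `grype`, and `jq` are installed:
```bash
#!/usr/bin/env bash
set -euo pipefail
TARGET="${1:?usage: sbom-rescan.sh <image-or-dir>}"
STAMP=$(date -u +%Y%m%d)

# Regenerate the SBOM and rescan it, keeping dated copies as an audit trail
syft "$TARGET" -o cyclonedx-json="sbom-${STAMP}.json"
grype "sbom:sbom-${STAMP}.json" -o json --file "vulns-${STAMP}.json"

# Quick severity summary (Grype JSON field names may vary between versions)
jq -r '.matches[].vulnerability.severity' "vulns-${STAMP}.json" | sort | uniq -c
```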
## References
- [Syft GitHub Repository](https://github.com/anchore/syft)
- [Anchore SBOM Documentation](https://anchore.com/sbom/)
- [CycloneDX Specification](https://cyclonedx.org/)
- [SPDX Specification](https://spdx.dev/)
- [NIST Software Supply Chain Security](https://www.nist.gov/itl/executive-order-improving-nations-cybersecurity/software-supply-chain-security-guidance)
- [OWASP Software Component Verification Standard](https://owasp.org/www-project-software-component-verification-standard/)


@@ -0,0 +1,9 @@
# Assets Directory
Place files that will be used in the output Claude produces:
- Templates
- Configuration files
- Images/logos
- Boilerplate code
These files are NOT loaded into context but copied/modified in output.


@@ -0,0 +1,357 @@
# Security-Enhanced CI/CD Pipeline Template
#
# This template demonstrates security best practices for CI/CD pipelines.
# Adapt this template to your specific security tool and workflow needs.
#
# Key Security Features:
# - SAST (Static Application Security Testing)
# - Dependency vulnerability scanning
# - Secrets detection
# - Infrastructure-as-Code security scanning
# - Container image scanning
# - Security artifact uploading for compliance
name: Security Scan Pipeline
on:
push:
branches: [main, develop]
pull_request:
branches: [main, develop]
schedule:
# Run weekly security scans on Sunday at 2 AM UTC
- cron: '0 2 * * 0'
workflow_dispatch: # Allow manual trigger
# Security: Restrict permissions to minimum required
permissions:
contents: read
security-events: write # For uploading SARIF results
pull-requests: write # For commenting on PRs
env:
# Configuration
SECURITY_SCAN_FAIL_ON: 'critical,high' # Fail build on these severities
REPORT_DIR: 'security-reports'
jobs:
# Job 1: Static Application Security Testing (SAST)
sast-scan:
name: SAST Security Scan
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0 # Full history for better analysis
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Run SAST Scanner
run: |
# Example: Using Semgrep for SAST
mkdir -p ${{ env.REPORT_DIR }}
pip install semgrep
semgrep --config=auto \
--json \
--output ${{ env.REPORT_DIR }}/sast-results.json \
. || true
# Alternative: Bandit for Python projects
# pip install bandit
# bandit -r . -f json -o ${{ env.REPORT_DIR }}/bandit-results.json
- name: Process SAST Results
run: |
# Parse results and fail on critical/high severity
python3 -c "
import json
import sys
with open('${{ env.REPORT_DIR }}/sast-results.json') as f:
results = json.load(f)
critical = len([r for r in results.get('results', []) if r.get('extra', {}).get('severity') == 'ERROR'])
high = len([r for r in results.get('results', []) if r.get('extra', {}).get('severity') == 'WARNING'])
print(f'Critical findings: {critical}')
print(f'High findings: {high}')
if critical > 0:
print('❌ Build failed: Critical security issues found')
sys.exit(1)
elif high > 0:
print('⚠️ Warning: High severity issues found')
# Optionally fail on high severity
# sys.exit(1)
else:
print('✅ No critical security issues found')
"
- name: Upload SAST Results
if: always()
uses: actions/upload-artifact@v4
with:
name: sast-results
path: ${{ env.REPORT_DIR }}/sast-results.json
retention-days: 30
# Job 2: Dependency Vulnerability Scanning
dependency-scan:
name: Dependency Vulnerability Scan
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Scan Python Dependencies
if: hashFiles('requirements.txt') != ''
run: |
mkdir -p ${{ env.REPORT_DIR }}
pip install safety
safety check \
--json \
--output ${{ env.REPORT_DIR }}/safety-results.json \
|| true
- name: Scan Node Dependencies
if: hashFiles('package.json') != ''
run: |
mkdir -p ${{ env.REPORT_DIR }}
npm audit --json > ${{ env.REPORT_DIR }}/npm-audit.json || true
- name: Process Dependency Results
run: |
# Check for critical vulnerabilities
if [ -f "${{ env.REPORT_DIR }}/safety-results.json" ]; then
critical_count=$(python3 -c "import json; data=json.load(open('${{ env.REPORT_DIR }}/safety-results.json')); print(len([v for v in data.get('vulnerabilities', []) if v.get('severity', '').lower() == 'critical']))")
echo "Critical vulnerabilities: $critical_count"
if [ "$critical_count" -gt "0" ]; then
echo "❌ Build failed: Critical vulnerabilities in dependencies"
exit 1
fi
fi
- name: Upload Dependency Scan Results
if: always()
uses: actions/upload-artifact@v4
with:
name: dependency-scan-results
path: ${{ env.REPORT_DIR }}/
retention-days: 30
# Job 3: Secrets Detection
secrets-scan:
name: Secrets Detection
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0 # Full history to scan all commits
- name: Run Gitleaks
uses: gitleaks/gitleaks-action@v2
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GITLEAKS_ENABLE_SUMMARY: true
- name: Alternative - TruffleHog Scan
if: false # Set to true to enable
run: |
pip install truffleHog
trufflehog --json --regex --entropy=True . \
> ${{ env.REPORT_DIR }}/trufflehog-results.json || true
- name: Upload Secrets Scan Results
if: always()
uses: actions/upload-artifact@v4
with:
name: secrets-scan-results
path: ${{ env.REPORT_DIR }}/
retention-days: 30
# Job 4: Container Image Scanning
container-scan:
name: Container Image Security Scan
runs-on: ubuntu-latest
if: hashFiles('Dockerfile') != ''
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Build Docker Image
run: |
mkdir -p ${{ env.REPORT_DIR }}
docker build -t app:${{ github.sha }} .
- name: Run Trivy Scanner
uses: aquasecurity/trivy-action@master
with:
image-ref: app:${{ github.sha }}
format: 'sarif'
output: '${{ env.REPORT_DIR }}/trivy-results.sarif'
severity: 'CRITICAL,HIGH'
- name: Upload Trivy Results to GitHub Security
if: always()
uses: github/codeql-action/upload-sarif@v3
with:
sarif_file: '${{ env.REPORT_DIR }}/trivy-results.sarif'
- name: Upload Container Scan Results
if: always()
uses: actions/upload-artifact@v4
with:
name: container-scan-results
path: ${{ env.REPORT_DIR }}/
retention-days: 30
# Job 5: Infrastructure-as-Code Security Scanning
iac-scan:
name: IaC Security Scan
runs-on: ubuntu-latest
if: hashFiles('**/*.tf', '**/*.yaml', '**/*.yml') != ''
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Run Checkov
run: |
mkdir -p ${{ env.REPORT_DIR }}
pip install checkov
checkov -d . \
--output json \
--output-file ${{ env.REPORT_DIR }}/checkov-results.json \
--quiet \
|| true
- name: Run tfsec (for Terraform)
if: hashFiles('**/*.tf') != ''
run: |
curl -s https://raw.githubusercontent.com/aquasecurity/tfsec/master/scripts/install_linux.sh | bash
tfsec . \
--format json \
--out ${{ env.REPORT_DIR }}/tfsec-results.json \
|| true
- name: Process IaC Results
run: |
# Fail on critical findings
if [ -f "${{ env.REPORT_DIR }}/checkov-results.json" ]; then
critical_count=$(python3 -c "import json; data=json.load(open('${{ env.REPORT_DIR }}/checkov-results.json')); print(data.get('summary', {}).get('failed', 0))")
echo "Failed checks: $critical_count"
if [ "$critical_count" -gt "0" ]; then
echo "⚠️ Warning: IaC security issues found"
# Optionally fail the build
# exit 1
fi
fi
- name: Upload IaC Scan Results
if: always()
uses: actions/upload-artifact@v4
with:
name: iac-scan-results
path: ${{ env.REPORT_DIR }}/
retention-days: 30
# Job 6: Security Report Generation and Notification
security-report:
name: Generate Security Report
runs-on: ubuntu-latest
needs: [sast-scan, dependency-scan, secrets-scan]
if: always()
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Download All Scan Results
uses: actions/download-artifact@v4
with:
path: all-results/
- name: Generate Consolidated Report
run: |
# Consolidate all security scan results
mkdir -p consolidated-report
echo "# Security Scan Summary" > consolidated-report/security-summary.md
echo "**Scan Date**: $(date -u +"%Y-%m-%d %H:%M:%S UTC")" >> consolidated-report/security-summary.md
cat >> consolidated-report/security-summary.md << 'EOF'
**Commit**: ${{ github.sha }}
**Branch**: ${{ github.ref_name }}
## Scan Results
### SAST Scan
See artifacts: `sast-results`
### Dependency Scan
See artifacts: `dependency-scan-results`
### Secrets Scan
See artifacts: `secrets-scan-results`
### Container Scan
See artifacts: `container-scan-results`
### IaC Scan
See artifacts: `iac-scan-results`
---
For detailed results, download scan artifacts from this workflow run.
EOF
- name: Comment on PR (if applicable)
if: github.event_name == 'pull_request'
uses: actions/github-script@v7
with:
script: |
const fs = require('fs');
const report = fs.readFileSync('consolidated-report/security-summary.md', 'utf8');
github.rest.issues.createComment({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: report
});
- name: Upload Consolidated Report
if: always()
uses: actions/upload-artifact@v4
with:
name: consolidated-security-report
path: consolidated-report/
retention-days: 90
# Security Best Practices Demonstrated:
#
# 1. ✅ Minimal permissions (principle of least privilege)
# 2. ✅ Multiple security scan types (defense in depth)
# 3. ✅ Fail-fast on critical findings
# 4. ✅ Secrets detection across full git history
# 5. ✅ Container image scanning before deployment
# 6. ✅ IaC scanning for misconfigurations
# 7. ✅ Artifact retention for compliance audit trail
# 8. ✅ SARIF format for GitHub Security integration
# 9. ✅ Scheduled scans for continuous monitoring
# 10. ✅ PR comments for developer feedback
#
# Compliance Mappings:
# - SOC 2: CC6.1, CC6.6, CC7.2 (Security monitoring and logging)
# - PCI-DSS: 6.2, 6.5 (Secure development practices)
# - NIST: SA-11 (Developer Security Testing)
# - OWASP: Integrated security testing throughout SDLC


@@ -0,0 +1,355 @@
# Security Rule Template
#
# This template demonstrates how to structure security rules/policies.
# Adapt this template to your specific security tool (Semgrep, OPA, etc.)
#
# Rule Structure Best Practices:
# - Clear rule ID and metadata
# - Severity classification
# - Framework mappings (OWASP, CWE)
# - Remediation guidance
# - Example vulnerable and fixed code
rules:
# Example Rule 1: SQL Injection Detection
- id: sql-injection-string-concatenation
metadata:
name: "SQL Injection via String Concatenation"
description: "Detects potential SQL injection vulnerabilities from string concatenation in SQL queries"
severity: "HIGH"
category: "security"
subcategory: "injection"
# Security Framework Mappings
owasp:
- "A03:2021 - Injection"
cwe:
- "CWE-89: SQL Injection"
mitre_attack:
- "T1190: Exploit Public-Facing Application"
# Compliance Standards
compliance:
- "PCI-DSS 6.5.1: Injection flaws"
- "NIST 800-53 SI-10: Information Input Validation"
# Confidence and Impact
confidence: "HIGH"
likelihood: "HIGH"
impact: "HIGH"
# References
references:
- "https://owasp.org/www-community/attacks/SQL_Injection"
- "https://cwe.mitre.org/data/definitions/89.html"
- "https://cheatsheetseries.owasp.org/cheatsheets/SQL_Injection_Prevention_Cheat_Sheet.html"
# Languages this rule applies to
languages:
- python
- javascript
- java
- go
# Detection Pattern (example using Semgrep-style syntax)
pattern-either:
- pattern: |
cursor.execute($SQL + $VAR)
- pattern: |
cursor.execute(f"... {$VAR} ...")
- pattern: |
cursor.execute("..." + $VAR + "...")
# What to report when found
message: |
Potential SQL injection vulnerability detected. SQL query is constructed using
string concatenation or f-strings with user input. This allows attackers to
inject malicious SQL code.
Use parameterized queries instead:
- Python: cursor.execute("SELECT * FROM users WHERE id = ?", (user_id,))
- JavaScript: db.query("SELECT * FROM users WHERE id = $1", [userId])
See: https://owasp.org/www-community/attacks/SQL_Injection
# Suggested fix (auto-fix if supported)
fix: |
Use parameterized queries with placeholders
# Example vulnerable code
examples:
- vulnerable: |
# Vulnerable: String concatenation
user_id = request.GET['id']
query = "SELECT * FROM users WHERE id = " + user_id
cursor.execute(query)
- fixed: |
# Fixed: Parameterized query
user_id = request.GET['id']
query = "SELECT * FROM users WHERE id = ?"
cursor.execute(query, (user_id,))
# Example Rule 2: Hardcoded Secrets Detection
- id: hardcoded-secret-credential
metadata:
name: "Hardcoded Secret or Credential"
description: "Detects hardcoded secrets, API keys, passwords, or tokens in source code"
severity: "CRITICAL"
category: "security"
subcategory: "secrets"
owasp:
- "A07:2021 - Identification and Authentication Failures"
cwe:
- "CWE-798: Use of Hard-coded Credentials"
- "CWE-259: Use of Hard-coded Password"
compliance:
- "PCI-DSS 8.2.1: Use of strong cryptography"
- "SOC 2 CC6.1: Logical access controls"
- "GDPR Article 32: Security of processing"
confidence: "MEDIUM"
likelihood: "HIGH"
impact: "CRITICAL"
references:
- "https://cwe.mitre.org/data/definitions/798.html"
- "https://owasp.org/www-community/vulnerabilities/Use_of_hard-coded_password"
languages:
- python
- javascript
- java
- go
- ruby
pattern-either:
- pattern: |
password = "..."
- pattern: |
api_key = "..."
- pattern: |
secret = "..."
- pattern: |
token = "..."
pattern-not: |
$VAR = ""
message: |
Potential hardcoded secret detected. Hardcoding credentials in source code
is a critical security vulnerability that can lead to unauthorized access
if the code is exposed.
Use environment variables or a secrets management system instead:
- Python: os.environ.get('API_KEY')
- Node.js: process.env.API_KEY
- Secrets Manager: AWS Secrets Manager, HashiCorp Vault, etc.
See: https://cwe.mitre.org/data/definitions/798.html
examples:
- vulnerable: |
# Vulnerable: Hardcoded API key
api_key = "sk-1234567890abcdef"
api.authenticate(api_key)
- fixed: |
# Fixed: Environment variable
import os
api_key = os.environ.get('API_KEY')
if not api_key:
raise ValueError("API_KEY environment variable not set")
api.authenticate(api_key)
# Example Rule 3: XSS via Unsafe HTML Rendering
- id: xss-unsafe-html-rendering
metadata:
name: "Cross-Site Scripting (XSS) via Unsafe HTML"
description: "Detects unsafe HTML rendering that could lead to XSS vulnerabilities"
severity: "HIGH"
category: "security"
subcategory: "xss"
owasp:
- "A03:2021 - Injection"
cwe:
- "CWE-79: Cross-site Scripting (XSS)"
- "CWE-80: Improper Neutralization of Script-Related HTML Tags"
compliance:
- "PCI-DSS 6.5.7: Cross-site scripting"
- "NIST 800-53 SI-10: Information Input Validation"
confidence: "HIGH"
likelihood: "MEDIUM"
impact: "HIGH"
references:
- "https://owasp.org/www-community/attacks/xss/"
- "https://cwe.mitre.org/data/definitions/79.html"
- "https://cheatsheetseries.owasp.org/cheatsheets/Cross_Site_Scripting_Prevention_Cheat_Sheet.html"
languages:
- javascript
- typescript
- jsx
- tsx
pattern-either:
- pattern: |
dangerouslySetInnerHTML={{__html: $VAR}}
- pattern: |
innerHTML = $VAR
message: |
Potential XSS vulnerability detected. Setting HTML content directly from
user input without sanitization can allow attackers to inject malicious
JavaScript code.
Use one of these safe alternatives:
- React: Use {userInput} for automatic escaping
- DOMPurify: const clean = DOMPurify.sanitize(dirty);
- Framework-specific sanitizers
See: https://owasp.org/www-community/attacks/xss/
examples:
- vulnerable: |
// Vulnerable: Unsanitized HTML
function UserComment({ comment }) {
return <div dangerouslySetInnerHTML={{__html: comment}} />;
}
- fixed: |
// Fixed: Sanitized with DOMPurify
import DOMPurify from 'dompurify';
function UserComment({ comment }) {
const sanitized = DOMPurify.sanitize(comment);
return <div dangerouslySetInnerHTML={{__html: sanitized}} />;
}
# Example Rule 4: Insecure Cryptography
- id: weak-cryptographic-algorithm
metadata:
name: "Weak Cryptographic Algorithm"
description: "Detects use of weak or deprecated cryptographic algorithms"
severity: "HIGH"
category: "security"
subcategory: "cryptography"
owasp:
- "A02:2021 - Cryptographic Failures"
cwe:
- "CWE-327: Use of a Broken or Risky Cryptographic Algorithm"
- "CWE-326: Inadequate Encryption Strength"
compliance:
- "PCI-DSS 4.1: Use strong cryptography"
- "NIST 800-53 SC-13: Cryptographic Protection"
- "GDPR Article 32: Security of processing"
confidence: "HIGH"
likelihood: "MEDIUM"
impact: "HIGH"
references:
- "https://cwe.mitre.org/data/definitions/327.html"
- "https://owasp.org/www-project-web-security-testing-guide/latest/4-Web_Application_Security_Testing/09-Testing_for_Weak_Cryptography/"
languages:
- python
- javascript
- java
pattern-either:
- pattern: |
hashlib.md5(...)
- pattern: |
hashlib.sha1(...)
- pattern: |
crypto.createHash('md5')
- pattern: |
crypto.createHash('sha1')
message: |
Weak cryptographic algorithm detected (MD5 or SHA1). These algorithms are
considered cryptographically broken and should not be used for security purposes.
Use strong alternatives:
- For hashing: SHA-256, SHA-384, or SHA-512
- For password hashing: bcrypt, argon2, or PBKDF2
- Python: hashlib.sha256()
- Node.js: crypto.createHash('sha256')
See: https://cwe.mitre.org/data/definitions/327.html
examples:
- vulnerable: |
# Vulnerable: MD5 hash
import hashlib
hash_value = hashlib.md5(data).hexdigest()
- fixed: |
# Fixed: SHA-256 hash
import hashlib
hash_value = hashlib.sha256(data).hexdigest()
# Rule Configuration
configuration:
# Global settings
enabled: true
severity_threshold: "MEDIUM" # Report findings at MEDIUM severity and above
# Performance tuning
max_file_size_kb: 1024
exclude_patterns:
- "test/*"
- "tests/*"
- "node_modules/*"
- "vendor/*"
- "*.min.js"
# False positive reduction
confidence_threshold: "MEDIUM" # Only report findings with MEDIUM confidence or higher
# Rule Metadata Schema
# This section documents the expected structure for rules
metadata_schema:
required:
- id: "Unique identifier for the rule (kebab-case)"
- name: "Human-readable rule name"
- description: "What the rule detects"
- severity: "CRITICAL | HIGH | MEDIUM | LOW | INFO"
- category: "security | best-practice | performance"
optional:
- subcategory: "Specific type (injection, xss, secrets, etc.)"
- owasp: "OWASP Top 10 mappings"
- cwe: "CWE identifier(s)"
- mitre_attack: "MITRE ATT&CK technique(s)"
- compliance: "Compliance standard references"
- confidence: "Detection confidence level"
- likelihood: "Likelihood of exploitation"
- impact: "Potential impact if exploited"
- references: "External documentation links"
# Usage Instructions:
#
# 1. Copy this template when creating new security rules
# 2. Update metadata fields with appropriate framework mappings
# 3. Customize detection patterns for your tool (Semgrep, OPA, etc.)
# 4. Provide clear remediation guidance in the message field
# 5. Include both vulnerable and fixed code examples
# 6. Test rules on real codebases before deployment
#
# Best Practices:
# - Map to multiple frameworks (OWASP, CWE, MITRE ATT&CK)
# - Include compliance standard references
# - Provide actionable remediation guidance
# - Show code examples (vulnerable vs. fixed)
# - Tune confidence levels to reduce false positives
# - Exclude test directories to reduce noise


@@ -0,0 +1,550 @@
# Reference Document Template
This file demonstrates how to structure detailed reference material that Claude loads on-demand.
**When to use this reference**: Include a clear statement about when Claude should consult this document.
For example: "Consult this reference when analyzing Python code for security vulnerabilities and needing detailed remediation patterns."
**Document purpose**: Briefly explain what this reference provides that's not in SKILL.md.
---
## Table of Contents
**For documents >100 lines, always include a table of contents** to help Claude navigate quickly.
- [When to Use References](#when-to-use-references)
- [Document Organization](#document-organization)
- [Detailed Technical Content](#detailed-technical-content)
- [Security Framework Mappings](#security-framework-mappings)
- [OWASP Top 10](#owasp-top-10)
- [CWE Mappings](#cwe-mappings)
- [MITRE ATT&CK](#mitre-attck)
- [Remediation Patterns](#remediation-patterns)
- [Advanced Configuration](#advanced-configuration)
- [Examples and Code Samples](#examples-and-code-samples)
---
## When to Use References
**Move content from SKILL.md to references/** when:
1. **Content exceeds 100 lines** - Keep SKILL.md concise
2. **Framework-specific details** - Detailed OWASP/CWE/MITRE mappings
3. **Advanced user content** - Deep technical details for expert users
4. **Lookup-oriented content** - Rule libraries, configuration matrices, comprehensive lists
5. **Language-specific patterns** - Separate files per language/framework
6. **Historical context** - Old patterns and deprecated approaches
**Keep in SKILL.md**:
- Core workflows (top 3-5 use cases)
- Decision points and branching logic
- Quick start guidance
- Essential security considerations
---
## Document Organization
### Structure for Long Documents
For references >100 lines:
```markdown
# Title
**When to use**: Clear trigger statement
**Purpose**: What this provides
## Table of Contents
- Links to all major sections
## Quick Reference
- Key facts or commands for fast lookup
## Detailed Content
- Comprehensive information organized logically
## Framework Mappings
- OWASP, CWE, MITRE ATT&CK references
## Examples
- Code samples and patterns
```
### Section Naming Conventions
- Use **imperative** or **declarative** headings
- ✅ "Detecting SQL Injection" not "How to detect SQL Injection"
- ✅ "Common Patterns" not "These are common patterns"
- Make headings **searchable** and **specific**
---
## Detailed Technical Content
This section demonstrates the type of detailed content that belongs in references rather than SKILL.md.
### Example: Comprehensive Vulnerability Detection
#### SQL Injection Detection Patterns
**Pattern 1: String Concatenation in Queries**
```python
# Vulnerable pattern
query = "SELECT * FROM users WHERE id = " + user_id
cursor.execute(query)
# Detection criteria:
# - SQL keyword (SELECT, INSERT, UPDATE, DELETE)
# - String concatenation operator (+, f-string)
# - Variable user input (request params, form data)
# Severity: HIGH
# CWE: CWE-89
# OWASP: A03:2021 - Injection
```
**Remediation**:
```python
# Fixed: Parameterized query
query = "SELECT * FROM users WHERE id = ?"
cursor.execute(query, (user_id,))
# OR using ORM
user = User.objects.get(id=user_id)
```
**Pattern 2: Unsafe String Formatting**
```python
# Vulnerable patterns
query = f"SELECT * FROM users WHERE name = '{username}'"
query = "SELECT * FROM users WHERE name = '%s'" % username
query = "SELECT * FROM users WHERE name = '{}'".format(username)
# All three patterns are vulnerable to SQL injection
```
#### Cross-Site Scripting (XSS) Detection
**Pattern 1: Unescaped Output in Templates**
```javascript
// Vulnerable: Direct HTML injection
element.innerHTML = userInput;
document.write(userInput);
// Vulnerable: React dangerouslySetInnerHTML
<div dangerouslySetInnerHTML={{__html: userComment}} />
// Detection criteria:
// - Direct DOM manipulation (innerHTML, document.write)
// - React dangerouslySetInnerHTML with user data
// - Template engines with autoescaping disabled
// Severity: HIGH
// CWE: CWE-79
// OWASP: A03:2021 - Injection
```
**Remediation**:
```javascript
// Fixed: Escaped output
element.textContent = userInput; // Auto-escapes
// Fixed: Sanitization library
import DOMPurify from 'dompurify';
const clean = DOMPurify.sanitize(userComment);
<div dangerouslySetInnerHTML={{__html: clean}} />
```
---
## Security Framework Mappings
This section provides comprehensive security framework mappings for findings.
### OWASP Top 10
Map security findings to OWASP Top 10 (2021) categories:
| Category | Title | Common Vulnerabilities |
|----------|-------|----------------------|
| **A01:2021** | Broken Access Control | Authorization bypass, privilege escalation, IDOR |
| **A02:2021** | Cryptographic Failures | Weak crypto, plaintext storage, insecure TLS |
| **A03:2021** | Injection | SQL injection, XSS, command injection, LDAP injection |
| **A04:2021** | Insecure Design | Missing security controls, threat modeling gaps |
| **A05:2021** | Security Misconfiguration | Default configs, verbose errors, unnecessary features |
| **A06:2021** | Vulnerable Components | Outdated libraries, unpatched dependencies |
| **A07:2021** | Auth & Session Failures | Weak passwords, session fixation, missing MFA |
| **A08:2021** | Software & Data Integrity | Unsigned updates, insecure CI/CD, deserialization |
| **A09:2021** | Logging & Monitoring Failures | Insufficient logging, no alerting, log injection |
| **A10:2021** | SSRF | Server-side request forgery, unvalidated redirects |
**Usage**: When reporting findings, map to primary OWASP category and reference the identifier (e.g., "A03:2021 - Injection").
### CWE Mappings
Map to relevant Common Weakness Enumeration categories for precise vulnerability classification:
#### Injection Vulnerabilities
- **CWE-78**: OS Command Injection
- **CWE-79**: Cross-site Scripting (XSS)
- **CWE-89**: SQL Injection
- **CWE-90**: LDAP Injection
- **CWE-91**: XML Injection
- **CWE-94**: Code Injection
#### Authentication & Authorization
- **CWE-287**: Improper Authentication
- **CWE-288**: Authentication Bypass Using Alternate Path
- **CWE-290**: Authentication Bypass by Spoofing
- **CWE-294**: Authentication Bypass by Capture-replay
- **CWE-306**: Missing Authentication for Critical Function
- **CWE-307**: Improper Restriction of Excessive Authentication Attempts
- **CWE-352**: Cross-Site Request Forgery (CSRF)
#### Cryptographic Issues
- **CWE-256**: Plaintext Storage of Password
- **CWE-259**: Use of Hard-coded Password
- **CWE-261**: Weak Encoding for Password
- **CWE-321**: Use of Hard-coded Cryptographic Key
- **CWE-326**: Inadequate Encryption Strength
- **CWE-327**: Use of Broken or Risky Cryptographic Algorithm
- **CWE-329**: Not Using a Random IV with CBC Mode
- **CWE-798**: Use of Hard-coded Credentials
#### Input Validation
- **CWE-20**: Improper Input Validation
- **CWE-73**: External Control of File Name or Path
- **CWE-434**: Unrestricted Upload of File with Dangerous Type
- **CWE-601**: URL Redirection to Untrusted Site
#### Sensitive Data Exposure
- **CWE-200**: Information Exposure
- **CWE-209**: Information Exposure Through Error Message
- **CWE-312**: Cleartext Storage of Sensitive Information
- **CWE-319**: Cleartext Transmission of Sensitive Information
- **CWE-532**: Information Exposure Through Log Files
**Usage**: Include CWE identifier in all vulnerability reports for standardized classification.
### MITRE ATT&CK
Reference relevant tactics and techniques for threat context:
#### Initial Access (TA0001)
- **T1190**: Exploit Public-Facing Application
- **T1133**: External Remote Services
- **T1078**: Valid Accounts
#### Execution (TA0002)
- **T1059**: Command and Scripting Interpreter
- **T1203**: Exploitation for Client Execution
#### Persistence (TA0003)
- **T1098**: Account Manipulation
- **T1136**: Create Account
- **T1505**: Server Software Component
#### Privilege Escalation (TA0004)
- **T1068**: Exploitation for Privilege Escalation
- **T1548**: Abuse Elevation Control Mechanism
#### Defense Evasion (TA0005)
- **T1027**: Obfuscated Files or Information
- **T1140**: Deobfuscate/Decode Files or Information
- **T1562**: Impair Defenses
#### Credential Access (TA0006)
- **T1110**: Brute Force
- **T1555**: Credentials from Password Stores
- **T1552**: Unsecured Credentials
#### Discovery (TA0007)
- **T1083**: File and Directory Discovery
- **T1046**: Network Service Scanning
#### Collection (TA0009)
- **T1005**: Data from Local System
- **T1114**: Email Collection
#### Exfiltration (TA0010)
- **T1041**: Exfiltration Over C2 Channel
- **T1567**: Exfiltration Over Web Service
**Usage**: When identifying vulnerabilities, consider which ATT&CK techniques an attacker could use to exploit them.
---
## Remediation Patterns
This section provides specific remediation guidance for common vulnerability types.
### SQL Injection Remediation
**Step 1: Identify vulnerable queries**
- Search for string concatenation in SQL queries
- Check for f-strings or format() with SQL keywords
- Review all database interaction code
**Step 2: Apply parameterized queries**
```python
# Python with sqlite3
cursor.execute("SELECT * FROM users WHERE id = ?", (user_id,))
# Python with psycopg2 (PostgreSQL)
cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))
# Python with SQLAlchemy (ORM)
from sqlalchemy import text
result = session.execute(text("SELECT * FROM users WHERE id = :id"), {"id": user_id})
```
**Step 3: Validate and sanitize input** (defense in depth)
```python
import re
# Validate input format
if not re.match(r'^\d+$', user_id):
raise ValueError("Invalid user ID format")
# Use ORM query builders
user = User.query.filter_by(id=user_id).first()
```
**Step 4: Implement least privilege**
- Database user should have minimum required permissions
- Use read-only accounts for SELECT operations
- Never use admin/root accounts for application queries
### XSS Remediation
**Step 1: Enable auto-escaping**
- Most modern frameworks escape by default
- Ensure auto-escaping is not disabled
**Step 2: Use framework-specific safe methods**
```javascript
// React: Use JSX (auto-escapes)
<div>{userInput}</div>
// Vue: Use template syntax (auto-escapes)
<div>{{ userInput }}</div>
// Angular: Use property binding (auto-escapes)
<div [textContent]="userInput"></div>
```
**Step 3: Sanitize when HTML is required**
```javascript
import DOMPurify from 'dompurify';
// Sanitize HTML content
const clean = DOMPurify.sanitize(userHTML, {
ALLOWED_TAGS: ['b', 'i', 'em', 'strong', 'p'],
ALLOWED_ATTR: []
});
```
**Step 4: Content Security Policy (CSP)**
```html
<!-- Add CSP header -->
Content-Security-Policy: default-src 'self'; script-src 'self' 'nonce-{random}'
```
---
## Advanced Configuration
This section contains detailed configuration options and tuning parameters.
### Example: SAST Tool Configuration
```yaml
# Advanced security scanner configuration
scanner:
# Severity threshold
severity_threshold: MEDIUM
# Rule configuration
rules:
enabled:
- sql-injection
- xss
- hardcoded-secrets
disabled:
- informational-only
# False positive reduction
confidence_threshold: HIGH
exclude_patterns:
- "*/test/*"
- "*/tests/*"
- "*/node_modules/*"
- "*.test.js"
- "*.spec.ts"
# Performance tuning
max_file_size_kb: 2048
timeout_seconds: 300
parallel_jobs: 4
# Output configuration
output_format: json
include_code_snippets: true
max_snippet_lines: 10
```
---
## Examples and Code Samples
This section provides comprehensive code examples for various scenarios.
### Example 1: Secure API Authentication
```python
# Secure API key handling
import os
from functools import wraps
from flask import Flask, request, jsonify
app = Flask(__name__)
# Load API key from environment (never hardcode)
VALID_API_KEY = os.environ.get('API_KEY')
if not VALID_API_KEY:
raise ValueError("API_KEY environment variable not set")
def require_api_key(f):
@wraps(f)
def decorated_function(*args, **kwargs):
api_key = request.headers.get('X-API-Key')
if not api_key:
return jsonify({'error': 'API key required'}), 401
# Constant-time comparison to prevent timing attacks
import hmac
if not hmac.compare_digest(api_key, VALID_API_KEY):
return jsonify({'error': 'Invalid API key'}), 403
return f(*args, **kwargs)
return decorated_function
@app.route('/api/secure-endpoint')
@require_api_key
def secure_endpoint():
return jsonify({'message': 'Access granted'})
```
### Example 2: Secure Password Hashing
```python
# Secure password storage with bcrypt
import bcrypt
def hash_password(password: str) -> str:
"""Hash a password using bcrypt."""
# Generate salt and hash password
salt = bcrypt.gensalt(rounds=12) # Cost factor: 12 (industry standard)
hashed = bcrypt.hashpw(password.encode('utf-8'), salt)
return hashed.decode('utf-8')
def verify_password(password: str, hashed: str) -> bool:
"""Verify a password against a hash."""
return bcrypt.checkpw(
password.encode('utf-8'),
hashed.encode('utf-8')
)
# Usage
stored_hash = hash_password("user_password")
is_valid = verify_password("user_password", stored_hash) # True
```
### Example 3: Secure File Upload
```python
# Secure file upload with validation
import os
import magic
from werkzeug.utils import secure_filename
ALLOWED_EXTENSIONS = {'pdf', 'png', 'jpg', 'jpeg'}
ALLOWED_MIME_TYPES = {
'application/pdf',
'image/png',
'image/jpeg'
}
MAX_FILE_SIZE = 5 * 1024 * 1024 # 5 MB
def is_allowed_file(filename: str, file_content: bytes) -> bool:
"""Validate file extension and MIME type."""
# Check extension
if '.' not in filename:
return False
ext = filename.rsplit('.', 1)[1].lower()
if ext not in ALLOWED_EXTENSIONS:
return False
# Check MIME type (prevent extension spoofing)
mime = magic.from_buffer(file_content, mime=True)
if mime not in ALLOWED_MIME_TYPES:
return False
return True
def handle_upload(file):
"""Securely handle file upload."""
# Check file size
file.seek(0, os.SEEK_END)
size = file.tell()
file.seek(0)
if size > MAX_FILE_SIZE:
raise ValueError("File too large")
# Read content for validation
content = file.read()
file.seek(0)
# Validate file type
if not is_allowed_file(file.filename, content):
raise ValueError("Invalid file type")
# Sanitize filename
filename = secure_filename(file.filename)
# Generate unique filename to prevent overwrite attacks
import uuid
unique_filename = f"{uuid.uuid4()}_{filename}"
# Save to secure location (outside web root)
upload_path = os.path.join('/secure/uploads', unique_filename)
file.save(upload_path)
return unique_filename
```
---
## Best Practices for Reference Documents
1. **Start with "When to use"** - Help Claude know when to load this reference
2. **Include table of contents** - For documents >100 lines
3. **Use concrete examples** - Code samples with vulnerable and fixed versions
4. **Map to frameworks** - OWASP, CWE, MITRE ATT&CK for context
5. **Provide remediation** - Don't just identify issues, show how to fix them
6. **Organize logically** - Group related content, use clear headings
7. **Keep examples current** - Use modern patterns and current framework versions
8. **Be concise** - Even in references, challenge every sentence

View File

@@ -0,0 +1,253 @@
# Workflow Checklist Template
This template demonstrates workflow patterns for security operations. Copy and adapt these checklists to your specific skill needs.
## Pattern 1: Sequential Workflow Checklist
Use this pattern for operations that must be completed in order, step-by-step.
### Security Assessment Workflow
Progress:
[ ] 1. Identify application entry points and attack surface
[ ] 2. Map authentication and authorization flows
[ ] 3. Identify data flows and sensitive data handling
[ ] 4. Review existing security controls
[ ] 5. Document findings with framework references (OWASP, CWE)
[ ] 6. Prioritize findings by severity (CVSS scores)
[ ] 7. Generate report with remediation recommendations
Work through each step systematically. Check off completed items.
---
## Pattern 2: Conditional Workflow
Use this pattern when the workflow branches based on findings or conditions.
### Vulnerability Remediation Workflow
1. Identify vulnerability type
- If SQL Injection → See [sql-injection-remediation.md](sql-injection-remediation.md)
- If XSS (Cross-Site Scripting) → See [xss-remediation.md](xss-remediation.md)
- If Authentication flaw → See [auth-remediation.md](auth-remediation.md)
- If Authorization flaw → See [authz-remediation.md](authz-remediation.md)
- If Cryptographic issue → See [crypto-remediation.md](crypto-remediation.md)
2. Assess severity using CVSS calculator
- If CVSS >= 9.0 → Priority: Critical (immediate action)
- If CVSS 7.0-8.9 → Priority: High (action within 24h)
- If CVSS 4.0-6.9 → Priority: Medium (action within 1 week)
- If CVSS < 4.0 → Priority: Low (action within 30 days)
3. Apply appropriate remediation pattern
4. Validate fix with security testing
5. Document changes and update security documentation
---
## Pattern 3: Iterative Workflow
Use this pattern for operations that repeat across multiple targets or items.
### Code Security Review Workflow
For each file in the review scope:
1. Identify security-sensitive operations (auth, data access, crypto, input handling)
2. Check against secure coding patterns for the language
3. Flag potential vulnerabilities with severity rating
4. Map findings to CWE and OWASP categories
5. Suggest specific remediation approaches
6. Document finding with code location and fix priority
Continue until all files in scope have been reviewed.
---
## Pattern 4: Feedback Loop Workflow
Use this pattern when validation and iteration are required.
### Secure Configuration Generation Workflow
1. Generate initial security configuration based on requirements
2. Run validation script: `./scripts/validate_config.py config.yaml`
3. Review validation output:
- Note all errors (must fix)
- Note all warnings (should fix)
- Note all info items (consider)
4. Fix identified issues in configuration
5. Repeat steps 2-4 until validation passes with zero errors
6. Review warnings and determine if they should be addressed
7. Apply configuration once validation is clean
**Validation Loop**: Run validator → Fix errors → Repeat until clean
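A minimal sketch of that loop in shell, assuming the hypothetical `./scripts/validate_config.py` exits non-zero while errors remain:
```bash
# Run the validator, stop when it exits cleanly (bounded at 5 attempts)
for attempt in 1 2 3 4 5; do
  if ./scripts/validate_config.py config.yaml; then
    echo "Validation clean after attempt ${attempt}"
    break
  fi
  echo "Attempt ${attempt} failed; fix the reported errors, then press Enter to re-validate"
  read -r
done
```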
---
## Pattern 5: Parallel Analysis Workflow
Use this pattern when multiple independent analyses can run concurrently.
### Comprehensive Security Scan Workflow
Run these scans in parallel:
**Static Analysis**:
[ ] 1a. Run SAST scan (Semgrep/Bandit)
[ ] 1b. Run dependency vulnerability scan (Safety/npm audit)
[ ] 1c. Run secrets detection (Gitleaks/TruffleHog)
[ ] 1d. Run license compliance check
**Dynamic Analysis**:
[ ] 2a. Run DAST scan (ZAP/Burp)
[ ] 2b. Run API security testing
[ ] 2c. Run authentication/authorization testing
**Infrastructure Analysis**:
[ ] 3a. Run infrastructure-as-code scan (Checkov/tfsec)
[ ] 3b. Run container image scan (Trivy/Grype)
[ ] 3c. Run configuration review
**Consolidation**:
[ ] 4. Aggregate all findings
[ ] 5. Deduplicate and correlate findings
[ ] 6. Prioritize by risk (CVSS + exploitability + business impact)
[ ] 7. Generate unified security report
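A sketch of how the static-analysis scans might be launched concurrently from a shell, assuming the listed tools are installed; exact commands and flags vary by tool and version:
```bash
mkdir -p reports

# Kick off the independent static-analysis scans in the background
semgrep --config=auto --json --output reports/sast.json . &
gitleaks detect --report-format json --report-path reports/secrets.json &
npm audit --json > reports/npm-audit.json &

# Block until every background scan has finished, then consolidate
wait
echo "All scans complete; raw reports are in ./reports"
```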
---
## Pattern 6: Research and Documentation Workflow
Use this pattern for security research and documentation tasks.
### Threat Modeling Workflow
Research Progress:
[ ] 1. Identify system components and boundaries
[ ] 2. Map data flows between components
[ ] 3. Identify trust boundaries
[ ] 4. Enumerate assets (data, services, credentials)
[ ] 5. Apply STRIDE framework to each component:
- Spoofing threats
- Tampering threats
- Repudiation threats
- Information disclosure threats
- Denial of service threats
- Elevation of privilege threats
[ ] 6. Map threats to MITRE ATT&CK techniques
[ ] 7. Identify existing mitigations
[ ] 8. Document residual risks
[ ] 9. Recommend additional security controls
[ ] 10. Generate threat model document
Work through each step systematically. Check off completed items.
---
## Pattern 7: Compliance Validation Workflow
Use this pattern for compliance checks against security standards.
### Security Compliance Audit Workflow
**SOC 2 Controls Review**:
[ ] 1. Review access control policies (CC6.1, CC6.2, CC6.3)
[ ] 2. Verify logical access controls implementation (CC6.1)
[ ] 3. Review authentication mechanisms (CC6.1)
[ ] 4. Verify encryption implementation (CC6.1, CC6.7)
[ ] 5. Review audit logging configuration (CC7.2)
[ ] 6. Verify security monitoring (CC7.2, CC7.3)
[ ] 7. Review incident response procedures (CC7.3, CC7.4)
[ ] 8. Verify backup and recovery processes (A1.2, A1.3)
**Evidence Collection**:
[ ] 9. Collect policy documents
[ ] 10. Collect configuration screenshots
[ ] 11. Collect audit logs
[ ] 12. Document control gaps
[ ] 13. Generate compliance report
---
## Pattern 8: Incident Response Workflow
Use this pattern for security incident handling.
### Security Incident Response Workflow
**Detection and Analysis**:
[ ] 1. Confirm security incident (rule out false positive)
[ ] 2. Determine incident severity (SEV1/2/3/4)
[ ] 3. Identify affected systems and data
[ ] 4. Preserve evidence (logs, memory dumps, network captures)
**Containment**:
[ ] 5. Isolate affected systems (network segmentation)
[ ] 6. Disable compromised accounts
[ ] 7. Block malicious indicators (IPs, domains, hashes)
[ ] 8. Implement temporary compensating controls
**Eradication**:
[ ] 9. Identify root cause
[ ] 10. Remove malicious artifacts (malware, backdoors, webshells)
[ ] 11. Patch vulnerabilities exploited
[ ] 12. Reset compromised credentials
**Recovery**:
[ ] 13. Restore systems from clean backups (if needed)
[ ] 14. Re-enable systems with monitoring
[ ] 15. Verify system integrity
[ ] 16. Resume normal operations
**Post-Incident**:
[ ] 17. Document incident timeline
[ ] 18. Identify lessons learned
[ ] 19. Update security controls to prevent recurrence
[ ] 20. Update incident response procedures
[ ] 21. Communicate with stakeholders
---
## Usage Guidelines
### When to Use Workflow Checklists
**Use checklists for**:
- Complex multi-step operations
- Operations requiring specific order
- Security assessments and audits
- Incident response procedures
- Compliance validation tasks
**Don't use checklists for**:
- Simple single-step operations
- Highly dynamic exploratory work
- Operations that vary significantly each time
### Adapting This Template
1. **Copy relevant pattern** to your skill's SKILL.md or create new reference file
2. **Customize steps** to match your specific security tool or process
3. **Add framework references** (OWASP, CWE, NIST) where applicable
4. **Include tool-specific commands** for automation
5. **Add decision points** where manual judgment is required
### Checklist Best Practices
- **Be specific**: "Run semgrep --config=auto ." not "Scan the code"
- **Include success criteria**: "Validation passes with 0 errors"
- **Reference standards**: Link to OWASP, CWE, NIST where relevant
- **Show progress**: Checkbox format helps track completion
- **Provide escape hatches**: "If validation fails, see troubleshooting.md"
### Integration with Feedback Loops
Combine checklists with validation scripts for maximum effectiveness:
1. Create checklist for the workflow
2. Provide validation script that checks quality
3. Include "run validator" step in checklist
4. Loop: Complete step → Validate → Fix issues → Re-validate
This pattern dramatically improves output quality through systematic validation.