skills/building-hooks/SKILL.md

---
name: building-hooks
description: Use when creating Claude Code hooks - covers hook patterns, composition, testing, progressive enhancement from simple to advanced
---

<skill_overview>
Hooks encode business rules at the application level. Start with observation, add automation, and enforce only once patterns are clear.
</skill_overview>

<rigidity_level>
MEDIUM FREEDOM - Follow progressive enhancement (observe → automate → enforce) strictly. Hook patterns are adaptable, but always start non-blocking and test thoroughly.
</rigidity_level>

<quick_reference>
| Phase | Approach | Example |
|-------|----------|---------|
| 1. Observe | Non-blocking, report only | Log edits, display reminders |
| 2. Automate | Background tasks, non-blocking | Auto-format, run builds |
| 3. Enforce | Blocking only when necessary | Block dangerous ops, require fixes |

**Most used events:** UserPromptSubmit (before processing), Stop (after completion)

**Critical:** Start with Phase 1, observe for a week, then move to Phase 2. Add Phase 3 only if absolutely necessary.
</quick_reference>

<when_to_use>
Use hooks for:
- Automatic quality checks (build, lint, format)
- Workflow automation (skill activation, context injection)
- Error prevention (catching issues early)
- Consistent behavior (formatting, conventions)

**Never use hooks for:**
- Complex business logic (use tools/scripts)
- Slow operations that block workflow (use background jobs)
- Anything requiring LLM reasoning (hooks are deterministic)
</when_to_use>

<hook_lifecycle_events>
| Event | When Fires | Use Cases |
|-------|------------|-----------|
| UserPromptSubmit | Before Claude processes prompt | Validation, context injection, skill activation |
| Stop | After Claude finishes | Build checks, formatting, quality reminders |
| PostToolUse | After each tool execution | Logging, tracking, validation |
| PreToolUse | Before tool execution | Permission checks, validation |
| ToolError | When tool fails | Error handling, fallbacks |
| SessionStart | New session begins | Environment setup, context loading |
| SessionEnd | Session closes | Cleanup, logging |
| Error | Unhandled error | Error recovery, notifications |
</hook_lifecycle_events>
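
Registering a hook for one of these events follows the same shape as the configuration snippets later in this skill; treat the exact schema as an assumption and confirm it against the official hooks guide:

```json
{
  "hooks": [
    {
      "event": "UserPromptSubmit",
      "command": "~/.claude/hooks/10-inject-context.sh",
      "description": "Inject context before the prompt is processed",
      "blocking": false
    }
  ]
}
```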
<progressive_enhancement>
## Phase 1: Observation (Non-Blocking)

**Goal:** Understand patterns before acting

**Examples:**
- Log file edits (PostToolUse)
- Display reminders (Stop, non-blocking)
- Track metrics

**Duration:** Observe for 1 week minimum

---

## Phase 2: Automation (Background)

**Goal:** Automate tedious tasks

**Examples:**
- Auto-format edited files (Stop)
- Run builds after changes (Stop)
- Inject helpful context (UserPromptSubmit)

**Requirement:** Fast (<2 seconds), non-blocking

---

## Phase 3: Enforcement (Blocking)

**Goal:** Prevent errors, enforce standards

**Examples:**
- Block dangerous operations (PreToolUse)
- Require fixes before continuing (Stop, blocking)
- Validate inputs (UserPromptSubmit, blocking)

**Requirement:** Add only when Phases 1-2 have made the patterns clear
</progressive_enhancement>

<common_hook_patterns>
## Pattern 1: Build Checker (Stop Hook)

**Problem:** TypeScript errors left behind

**Solution:**
```bash
#!/bin/bash
# Stop hook - runs after Claude finishes

# Check modified repos
modified_repos=$(grep -h "edited" ~/.claude/edit-log.txt | cut -d: -f1 | sort -u)

for repo in $modified_repos; do
  echo "Building $repo..."
  cd "$repo" || continue
  npm run build 2>&1 | tee /tmp/build-output.txt

  # grep -c already prints 0 on no match; default only covers a missing file
  error_count=$(grep -c "error TS" /tmp/build-output.txt 2>/dev/null)
  error_count=${error_count:-0}

  if [ "$error_count" -gt 0 ]; then
    if [ "$error_count" -ge 5 ]; then
      echo "⚠️ Found $error_count errors - consider error-resolver agent"
    else
      echo "🔴 Found $error_count TypeScript errors:"
      grep "error TS" /tmp/build-output.txt
    fi
  else
    echo "✅ Build passed"
  fi
done
```

**Configuration:**
```json
{
  "event": "Stop",
  "command": "~/.claude/hooks/build-checker.sh",
  "description": "Run builds on modified repos",
  "blocking": false
}
```

**Result:** Zero errors left behind

---

## Pattern 2: Auto-Formatter (Stop Hook)

**Problem:** Inconsistent formatting

**Solution:**
```bash
#!/bin/bash
# Stop hook - format all edited files

edited_files=$(tail -20 ~/.claude/edit-log.txt | grep "^/" | sort -u)

for file in $edited_files; do
  repo_dir=$(dirname "$file")
  while [ "$repo_dir" != "/" ]; do
    if [ -f "$repo_dir/.prettierrc" ]; then
      echo "Formatting $file..."
      # Subshell so the cd doesn't leak into later iterations
      (cd "$repo_dir" && npx prettier --write "$file")
      break
    fi
    repo_dir=$(dirname "$repo_dir")
  done
done

echo "✅ Formatting complete"
```

**Result:** All code consistently formatted

---

## Pattern 3: Error Handling Reminder (Stop Hook)

**Problem:** Claude forgets error handling

**Solution:**
```bash
#!/bin/bash
# Stop hook - gentle reminder

edited_files=$(tail -20 ~/.claude/edit-log.txt | grep "^/")

risky_patterns=0
for file in $edited_files; do
  [ -f "$file" ] || continue
  if grep -q "try\|catch\|async\|await\|prisma\|router\." "$file"; then
    ((risky_patterns++))
  fi
done

if [ "$risky_patterns" -gt 0 ]; then
  cat <<EOF
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📋 ERROR HANDLING SELF-CHECK
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

⚠️ Risky Patterns Detected
$risky_patterns file(s) with async/try-catch/database operations

❓ Did you add proper error handling?
❓ Are errors logged appropriately?

💡 Consider: Sentry.captureException(), proper logging
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
EOF
fi
```

**Result:** Claude self-checks without blocking

---

## Pattern 4: Skills Auto-Activation

**See:** hyperpowers:skills-auto-activation for complete implementation

**Summary:** Analyzes prompt keywords and injects a skill activation reminder before Claude processes the prompt.
</common_hook_patterns>

<hook_composition>
## Naming for Order Control

Multiple hooks for the same event run in **alphabetical order** by filename.

**Use numeric prefixes:**

```
hooks/
├── 00-log-prompt.sh       # First (logging)
├── 10-inject-context.sh   # Second (context)
├── 20-activate-skills.sh  # Third (skills)
└── 99-notify.sh           # Last (notifications)
```

## Hook Dependencies

If Hook B depends on Hook A's output:

1. **Option 1:** Numeric prefixes (A before B)
2. **Option 2:** Combine into a single hook
3. **Option 3:** File-based communication

**Example:**
```bash
# 10-track-edits.sh writes to edit-log.txt
# 20-check-builds.sh reads from edit-log.txt
```
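
Option 3 can be sketched as two steps sharing one state file; the path and the truncate-after-read convention here are illustrative assumptions, not a fixed API:

```shell
#!/bin/bash
# Hypothetical handoff file shared by two hooks (illustrative path)
STATE_FILE="/tmp/hook-state.txt"

# What 10-track-edits.sh would do: append each edited file
echo "/home/user/project/src/app.ts" >> "$STATE_FILE"

# What 20-check-builds.sh would do later: consume the list, then clear it
while IFS= read -r edited; do
  echo "needs rebuild: $edited"
done < "$STATE_FILE"
: > "$STATE_FILE"  # truncate so the next run starts clean
```

Because the writer sorts earlier alphabetically, the reader always sees a complete list for the current turn.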
</hook_composition>

<testing_hooks>
## Test in Isolation

```bash
# Manually trigger
bash ~/.claude/hooks/build-checker.sh

# Check exit code
echo $?  # 0 = success
```

## Test with Mock Data

```bash
# Create mock log
echo "/path/to/test/file.ts" > /tmp/test-edit-log.txt

# Run with test data
EDIT_LOG=/tmp/test-edit-log.txt bash ~/.claude/hooks/build-checker.sh
```
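
Note the `EDIT_LOG` override only works if the hook reads the variable with a fallback; a minimal sketch (the variable name is this document's testing convention, not a Claude Code built-in):

```shell
#!/bin/bash
# Prefer the override from the environment, else the real log
LOG_FILE="${EDIT_LOG:-$HOME/.claude/edit-log.txt}"
echo "reading edits from: $LOG_FILE"
```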
## Test Non-Blocking Behavior

- Hook exits quickly (<2 seconds)
- Doesn't block Claude
- Provides clear output

## Test Blocking Behavior

- Blocking decision is correct
- Reason message is helpful
- An escape hatch exists

## Debugging

**Enable logging:**
```bash
set -x  # Debug output
exec 2>~/.claude/hooks/debug.log
```

**Check execution:**
```bash
tail -f ~/.claude/logs/hooks.log
```

**Common issues:**
- Timeout (exceeds the 10-second default)
- Wrong working directory
- Missing environment variables
- File permissions
</testing_hooks>

<examples>
<example>
<scenario>Developer adds blocking hook immediately without observation</scenario>

<code>
# Developer frustrated by TypeScript errors
# Creates blocking Stop hook immediately:

#!/bin/bash
npm run build

if [ $? -ne 0 ]; then
  echo "BUILD FAILED - BLOCKING"
  exit 1  # Blocks Claude
fi
</code>

<why_it_fails>
- No observation period to understand patterns
- Blocks even for minor errors
- No escape hatch if the hook misbehaves
- Might block during experimentation
- Frustrates workflow when builds are slow
- Hasn't identified when blocking is actually needed
</why_it_fails>

<correction>
**Phase 1: Observe (1 week)**

```bash
#!/bin/bash
# Non-blocking observation
npm run build 2>&1 | tee /tmp/build.log

if grep -q "error TS" /tmp/build.log; then
  echo "🔴 Build errors found (not blocking)"
fi
```

**After 1 week, review:**
- How often do errors appear?
- Are they usually fixed quickly?
- Do they cause real problems or just noise?

**Phase 2: If errors are frequent, automate**

```bash
#!/bin/bash
# Still non-blocking, but more helpful
npm run build 2>&1 | tee /tmp/build.log

# grep -c already prints 0 on no match; default only covers a missing file
error_count=$(grep -c "error TS" /tmp/build.log 2>/dev/null)
error_count=${error_count:-0}

if [ "$error_count" -ge 5 ]; then
  echo "⚠️ $error_count errors - consider using error-resolver agent"
elif [ "$error_count" -gt 0 ]; then
  echo "🔴 $error_count errors (not blocking):"
  grep "error TS" /tmp/build.log | head -5
fi
```

**Phase 3: Only if observation shows blocking is necessary**

Never reached - non-blocking works fine!

**What you gain:**
- Understood patterns before acting
- Non-blocking keeps workflow smooth
- Helpful messages without friction
- Can experiment without frustration
</correction>
</example>

<example>
<scenario>Hook is slow, blocks workflow</scenario>

<code>
#!/bin/bash
# Stop hook that's too slow

# Run full test suite (takes 45 seconds!)
npm test

# Run linter (takes 10 seconds)
npm run lint

# Run build (takes 30 seconds)
npm run build

# Total: 85 seconds of blocking!
</code>

<why_it_fails>
- Hook takes 85 seconds to complete
- Blocks Claude for the entire duration
- User can't continue working
- Frustrating, likely to be disabled
- Defeats the purpose of automation
</why_it_fails>

<correction>
**Make the hook fast (<2 seconds):**

```bash
#!/bin/bash
# Stop hook - fast checks only

# Quick syntax check (< 1 second)
if ! npm run check-syntax; then
  echo "🔴 Syntax errors found"
  echo "💡 Run 'npm test' manually for full test suite"
else
  echo "✅ Quick checks passed (run 'npm test' for full suite)"
fi
```

**Or run slow checks in the background:**

```bash
#!/bin/bash
# Stop hook - trigger background job

# Start tests in background
(
  if ! npm test > /tmp/test-results.txt 2>&1; then
    echo "🔴 Tests failed (see /tmp/test-results.txt)"
  fi
) &

echo "⏳ Tests running in background (check /tmp/test-results.txt)"
```

**What you gain:**
- Hook completes instantly
- Workflow not blocked
- Still get quality checks
- User can continue working
</correction>
</example>

<example>
<scenario>Hook has no error handling, fails silently</scenario>

<code>
#!/bin/bash
# Hook with no error handling

file=$(tail -1 ~/.claude/edit-log.txt)
prettier --write "$file"
</code>

<why_it_fails>
- If edit-log.txt is missing → hook fails silently
- If the file path is invalid → prettier errors not caught
- If prettier isn't installed → silent failure
- No logging, can't debug
- User has no idea whether the hook ran or failed
</why_it_fails>

<correction>
**Add error handling:**

```bash
#!/bin/bash
set -euo pipefail  # Exit on errors, undefined vars, pipeline failures

HOOK_LOG="$HOME/.claude/hooks/formatter.log"

# Log execution
echo "[$(date)] Hook started" >> "$HOOK_LOG"

# Validate input
if [ ! -f ~/.claude/edit-log.txt ]; then
  echo "[$(date)] ERROR: edit-log.txt not found" >> "$HOOK_LOG"
  exit 1
fi

# With pipefail, a non-matching grep would abort the script; || true keeps it soft
file=$(tail -1 ~/.claude/edit-log.txt | grep "^/.*\.ts$" || true)

if [ -z "$file" ]; then
  echo "[$(date)] No TypeScript file to format" >> "$HOOK_LOG"
  exit 0
fi

if [ ! -f "$file" ]; then
  echo "[$(date)] ERROR: File not found: $file" >> "$HOOK_LOG"
  exit 1
fi

# Check prettier exists
if ! command -v prettier &> /dev/null; then
  echo "[$(date)] ERROR: prettier not installed" >> "$HOOK_LOG"
  exit 1
fi

# Format
echo "[$(date)] Formatting: $file" >> "$HOOK_LOG"
if prettier --write "$file" 2>&1 | tee -a "$HOOK_LOG"; then
  echo "✅ Formatted $file"
else
  echo "🔴 Formatting failed (see $HOOK_LOG)"
fi
```

**What you gain:**
- Errors logged and visible
- Graceful handling of missing files
- Can debug when issues occur
- Clear feedback to user
- Hook doesn't fail silently
</correction>
</example>
</examples>

<security>
**Hooks run with your credentials and have full system access.**

## Best Practices

1. **Review code carefully** - Hooks execute any command
2. **Use absolute paths** - Don't rely on PATH
3. **Validate inputs** - Don't trust file paths blindly
4. **Limit scope** - Only access what's needed
5. **Log actions** - Track what hooks do
6. **Test thoroughly** - Especially blocking hooks

## Dangerous Patterns

❌ **Don't:**
```bash
# DANGEROUS - executes arbitrary code
cmd=$(tail -1 ~/.claude/edit-log.txt)
eval "$cmd"
```

✅ **Do:**
```bash
# SAFE - validates and sanitizes
file=$(tail -1 ~/.claude/edit-log.txt | grep "^/.*\.ts$")
if [ -f "$file" ]; then
  prettier --write "$file"
fi
```
</security>

<critical_rules>
## Rules That Have No Exceptions

1. **Start with Phase 1 (observe)** → Understand patterns before acting
2. **Keep hooks fast (<2 seconds)** → Don't block workflow
3. **Test thoroughly** → Hooks have full system access
4. **Add error handling and logging** → Silent failures are debugging nightmares
5. **Use progressive enhancement** → Observe → Automate → Enforce (only if needed)

## Common Excuses

All of these mean: **STOP. Follow progressive enhancement.**

- "Hook is simple, don't need testing" (Untested hooks fail in production)
- "Blocking is fine, need to enforce" (Start non-blocking, observe first)
- "I'll add error handling later" (Hook errors are silent, add it now)
- "Hook is slow but thorough" (Slow hooks block workflow, optimize)
- "Need access to everything" (Minimal permissions only)
</critical_rules>

<verification_checklist>
Before deploying a hook:

- [ ] Tested in isolation (manual execution)
- [ ] Tested with mock data
- [ ] Completes quickly (<2 seconds for non-blocking)
- [ ] Has error handling (set -euo pipefail)
- [ ] Has logging (can debug failures)
- [ ] Validates inputs (doesn't trust blindly)
- [ ] Uses absolute paths
- [ ] Started with Phase 1 (observation)
- [ ] If blocking: has escape hatch

**Can't check all boxes?** Return to development and fix.
</verification_checklist>

<integration>
**This skill covers:** Hook creation and patterns

**Related skills:**
- hyperpowers:skills-auto-activation (complete skill activation hook)
- hyperpowers:verification-before-completion (quality hooks automate this)
- hyperpowers:testing-anti-patterns (avoid in hooks)

**Hook patterns support:**
- Automatic skill activation
- Build verification
- Code formatting
- Error prevention
- Workflow automation
</integration>

<resources>
**Detailed guides:**
- [Complete hook examples](resources/hook-examples.md)
- [Hook pattern library](resources/hook-patterns.md)
- [Testing strategies](resources/testing-hooks.md)

**Official documentation:**
- [Anthropic Hooks Guide](https://docs.claude.com/en/docs/claude-code/hooks-guide)

**When stuck:**
- Hook failing silently → Add logging, check ~/.claude/hooks/debug.log
- Hook too slow → Profile execution, move slow parts to background
- Hook blocking incorrectly → Return to Phase 1, observe patterns
- Testing unclear → Start with manual execution, then mock data
</resources>

skills/building-hooks/resources/hook-examples.md

# Complete Hook Examples

This guide provides complete, production-ready hook implementations you can use and adapt.

## Example 1: File Edit Tracker (PostToolUse)

**Purpose:** Track which files were edited and in which repos for later analysis.

**File:** `~/.claude/hooks/post-tool-use/01-track-edits.sh`

```bash
#!/bin/bash

# Configuration
LOG_FILE="$HOME/.claude/edit-log.txt"
MAX_LOG_LINES=1000

# Create log if it doesn't exist
touch "$LOG_FILE"

# Function to log an edit
log_edit() {
  local file_path="$1"
  local timestamp=$(date +"%Y-%m-%d %H:%M:%S")
  local repo=$(find_repo "$file_path")

  echo "$timestamp | $repo | $file_path" >> "$LOG_FILE"
}

# Function to find repo root
find_repo() {
  local dir=$(dirname "$1")
  while [ "$dir" != "/" ]; do
    if [ -d "$dir/.git" ]; then
      basename "$dir"
      return
    fi
    dir=$(dirname "$dir")
  done
  echo "unknown"
}

# Read tool use event from stdin
read -r tool_use_json

# Extract file path from tool use
tool_name=$(echo "$tool_use_json" | jq -r '.tool.name')
file_path=""

case "$tool_name" in
  "Edit"|"Write")
    file_path=$(echo "$tool_use_json" | jq -r '.tool.input.file_path')
    ;;
  "MultiEdit")
    # MultiEdit has multiple files - log each
    echo "$tool_use_json" | jq -r '.tool.input.edits[].file_path' | while read -r path; do
      log_edit "$path"
    done
    exit 0
    ;;
esac

# Log single edit
if [ -n "$file_path" ] && [ "$file_path" != "null" ]; then
  log_edit "$file_path"
fi

# Rotate log if too large
line_count=$(wc -l < "$LOG_FILE")
if [ "$line_count" -gt "$MAX_LOG_LINES" ]; then
  tail -n "$MAX_LOG_LINES" "$LOG_FILE" > "$LOG_FILE.tmp"
  mv "$LOG_FILE.tmp" "$LOG_FILE"
fi

# Return success (non-blocking)
echo '{}'
```
**Configuration (`hooks.json`):**
```json
{
  "hooks": [
    {
      "event": "PostToolUse",
      "command": "~/.claude/hooks/post-tool-use/01-track-edits.sh",
      "description": "Track file edits for build checking",
      "blocking": false,
      "timeout": 1000
    }
  ]
}
```

## Example 2: Multi-Repo Build Checker (Stop)

**Purpose:** Run builds on all repos that were modified, report errors.

**File:** `~/.claude/hooks/stop/20-build-checker.sh`

```bash
#!/bin/bash

# Configuration
LOG_FILE="$HOME/.claude/edit-log.txt"
PROJECT_ROOT="$HOME/git/myproject"
ERROR_THRESHOLD=5

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

# Get repos modified since last check
get_modified_repos() {
  # Get unique repos from recent edits
  tail -50 "$LOG_FILE" 2>/dev/null | \
    cut -d'|' -f2 | \
    tr -d ' ' | \
    sort -u | \
    grep -v "unknown"
}

# Run build in repo
build_repo() {
  local repo_name="$1"
  local repo_path="$PROJECT_ROOT/$repo_name"

  if [ ! -d "$repo_path" ]; then
    return 0
  fi

  # Determine build command
  local build_cmd=""
  if [ -f "$repo_path/package.json" ]; then
    build_cmd="npm run build"
  elif [ -f "$repo_path/Cargo.toml" ]; then
    build_cmd="cargo build"
  elif [ -f "$repo_path/go.mod" ]; then
    build_cmd="go build ./..."
  else
    return 0 # No build system found
  fi

  echo "Building $repo_name..."

  # Run build and capture output
  cd "$repo_path" || return 0
  # Assign separately from 'local' so $? reflects the build, not the declaration
  local output exit_code
  output=$(eval "$build_cmd" 2>&1)
  exit_code=$?

  if [ $exit_code -ne 0 ]; then
    # Count errors (grep -c prints 0 itself when nothing matches)
    local error_count=$(echo "$output" | grep -c "error")

    if [ "$error_count" -ge "$ERROR_THRESHOLD" ]; then
      echo -e "${YELLOW}⚠️ $repo_name: $error_count errors found${NC}"
      echo "   Consider launching auto-error-resolver agent"
    else
      echo -e "${RED}🔴 $repo_name: $error_count errors${NC}"
      echo "$output" | grep "error" | head -10
    fi

    return 1
  else
    echo -e "${GREEN}✅ $repo_name: Build passed${NC}"
    return 0
  fi
}

# Main execution
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "🔨 BUILD VERIFICATION"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""

modified_repos=$(get_modified_repos)

if [ -z "$modified_repos" ]; then
  echo "No repos modified since last check"
  exit 0
fi

build_failures=0

for repo in $modified_repos; do
  if ! build_repo "$repo"; then
    ((build_failures++))
  fi
  echo ""
done

echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"

if [ "$build_failures" -gt 0 ]; then
  echo -e "${RED}$build_failures repo(s) failed to build${NC}"
else
  echo -e "${GREEN}All builds passed${NC}"
fi

# Non-blocking - always return success
echo '{}'
```
## Example 3: TypeScript Prettier Formatter (Stop)

**Purpose:** Auto-format all edited TypeScript/JavaScript files.

**File:** `~/.claude/hooks/stop/30-format-code.sh`

```bash
#!/bin/bash

# Configuration
LOG_FILE="$HOME/.claude/edit-log.txt"
PROJECT_ROOT="$HOME/git/myproject"

# Get recently edited files
get_edited_files() {
  tail -50 "$LOG_FILE" 2>/dev/null | \
    cut -d'|' -f3 | \
    tr -d ' ' | \
    grep -E '\.(ts|tsx|js|jsx)$' | \
    sort -u
}

# Format file with prettier
format_file() {
  local file="$1"

  if [ ! -f "$file" ]; then
    return 1
  fi

  # Find prettier config
  local dir=$(dirname "$file")
  local prettier_config=""

  while [ "$dir" != "/" ]; do
    if [ -f "$dir/.prettierrc" ] || [ -f "$dir/.prettierrc.json" ]; then
      prettier_config="$dir"
      break
    fi
    dir=$(dirname "$dir")
  done

  if [ -z "$prettier_config" ]; then
    return 1
  fi

  # Format the file (subshell so the cd doesn't leak between iterations)
  if (cd "$prettier_config" && npx prettier --write "$file" 2>/dev/null); then
    echo "✓ Formatted: $(basename "$file")"
    return 0
  fi

  return 1
}

# Main execution
echo "🎨 Formatting edited files..."

edited_files=$(get_edited_files)

if [ -z "$edited_files" ]; then
  echo "No files to format"
  exit 0
fi

formatted_count=0

for file in $edited_files; do
  if format_file "$file"; then
    ((formatted_count++))
  fi
done

echo "✅ Formatted $formatted_count file(s)"

# Non-blocking
echo '{}'
```
## Example 4: Skill Activation Injector (UserPromptSubmit)

**Purpose:** Analyze user prompt and inject skill activation reminders.

**File:** `~/.claude/hooks/user-prompt-submit/skill-activator.js`

```javascript
#!/usr/bin/env node

const fs = require('fs');
const path = require('path');

// Load skill rules
const rulesPath = process.env.SKILL_RULES || path.join(process.env.HOME, '.claude/skill-rules.json');
const rules = JSON.parse(fs.readFileSync(rulesPath, 'utf8'));

// Read prompt from stdin
let promptData = '';
process.stdin.on('data', chunk => {
  promptData += chunk;
});

process.stdin.on('end', () => {
  const prompt = JSON.parse(promptData);
  const activatedSkills = analyzePrompt(prompt.text);

  if (activatedSkills.length > 0) {
    const context = generateContext(activatedSkills);
    console.log(JSON.stringify({
      decision: 'approve',
      additionalContext: context
    }));
  } else {
    console.log(JSON.stringify({ decision: 'approve' }));
  }
});

function analyzePrompt(text) {
  const lowerText = text.toLowerCase();
  const activated = [];

  for (const [skillName, config] of Object.entries(rules)) {
    // Check keywords
    let matched = false;
    if (config.promptTriggers?.keywords) {
      for (const keyword of config.promptTriggers.keywords) {
        if (lowerText.includes(keyword.toLowerCase())) {
          activated.push({ skill: skillName, priority: config.priority || 'medium' });
          matched = true;
          break;
        }
      }
    }

    // Check intent patterns (skip if a keyword already matched, to avoid duplicates)
    if (!matched && config.promptTriggers?.intentPatterns) {
      for (const pattern of config.promptTriggers.intentPatterns) {
        if (new RegExp(pattern, 'i').test(text)) {
          activated.push({ skill: skillName, priority: config.priority || 'medium' });
          break;
        }
      }
    }
  }

  // Sort by priority
  return activated.sort((a, b) => {
    const priorityOrder = { high: 0, medium: 1, low: 2 };
    return priorityOrder[a.priority] - priorityOrder[b.priority];
  });
}

function generateContext(skills) {
  return `
🎯 SKILL ACTIVATION CHECK

The following skills may be relevant to this prompt:
${skills.map(s => `- **${s.skill}** (${s.priority} priority)`).join('\n')}

Before responding, check if any of these skills should be used.
`;
}
```
**Configuration (`skill-rules.json`):**
```json
{
  "backend-dev-guidelines": {
    "type": "domain",
    "priority": "high",
    "promptTriggers": {
      "keywords": ["backend", "controller", "service", "API", "endpoint"],
      "intentPatterns": [
        "(create|add).*?(route|endpoint|controller)",
        "(how to|best practice).*?(backend|API)"
      ]
    }
  },
  "frontend-dev-guidelines": {
    "type": "domain",
    "priority": "high",
    "promptTriggers": {
      "keywords": ["frontend", "component", "react", "UI", "layout"],
      "intentPatterns": [
        "(create|build).*?(component|page|view)",
        "(how to|pattern).*?(react|frontend)"
      ]
    }
  }
}
```

## Example 5: Error Handling Reminder (Stop)

**Purpose:** Gentle reminder to check error handling in risky code.

**File:** `~/.claude/hooks/stop/40-error-reminder.sh`

```bash
#!/bin/bash

LOG_FILE="$HOME/.claude/edit-log.txt"

# Get recently edited files
get_edited_files() {
  tail -20 "$LOG_FILE" 2>/dev/null | \
    cut -d'|' -f3 | \
    tr -d ' ' | \
    sort -u
}

# Check for risky patterns
check_file_risk() {
  local file="$1"

  if [ ! -f "$file" ]; then
    return 1
  fi

  # Look for risky patterns
  if grep -q -E "try|catch|async|await|prisma|\.execute\(|fetch\(|axios\." "$file"; then
    return 0
  fi

  return 1
}

# Main execution
risky_count=0
backend_files=0

for file in $(get_edited_files); do
  if check_file_risk "$file"; then
    ((risky_count++))

    if echo "$file" | grep -q "backend\|server\|api"; then
      ((backend_files++))
    fi
  fi
done

if [ "$risky_count" -gt 0 ]; then
  cat <<EOF

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📋 ERROR HANDLING SELF-CHECK
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

⚠️ Risky Patterns Detected
$risky_count file(s) with async/try-catch/database operations

❓ Did you add proper error handling?
❓ Are errors logged/captured appropriately?
❓ Are promises handled correctly?

EOF

  if [ "$backend_files" -gt 0 ]; then
    cat <<EOF
💡 Backend Best Practice:
   - All errors should be captured (Sentry, logging)
   - Database operations need try-catch
   - API routes should use error middleware

EOF
  fi

  echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
fi

# Non-blocking
echo '{}'
```
|
||||
|
||||
## Example 6: Dangerous Operation Blocker (PreToolUse)

**Purpose:** Block dangerous file operations (deletion, overwrite) in production paths.

**File:** `~/.claude/hooks/pre-tool-use/dangerous-ops.sh`

```bash
#!/bin/bash

# Read tool use event
read -r tool_use_json

tool_name=$(echo "$tool_use_json" | jq -r '.tool.name')
file_path=$(echo "$tool_use_json" | jq -r '.tool.input.file_path // empty')

# Dangerous paths (customize for your project)
PROTECTED_PATHS=(
  "/production/"
  "/prod/"
  "/.env.production"
  "/config/production"
)

# Check if operation is dangerous
is_dangerous() {
  local path="$1"

  for protected in "${PROTECTED_PATHS[@]}"; do
    if [[ "$path" == *"$protected"* ]]; then
      return 0
    fi
  done

  return 1
}

# Check dangerous operations
# PreToolUse hooks use the hookSpecificOutput format with permissionDecision
if [ "$tool_name" == "Write" ] || [ "$tool_name" == "Edit" ]; then
  if is_dangerous "$file_path"; then
    cat <<EOF | jq -c '.'
{
  "hookSpecificOutput": {
    "hookEventName": "PreToolUse",
    "permissionDecision": "deny",
    "permissionDecisionReason": "⛔ BLOCKED: Attempting to modify protected path\\n\\nFile: $file_path\\n\\nThis path is protected from automatic modification.\\nIf you need to make changes:\\n1. Review changes carefully\\n2. Use manual file editing\\n3. Confirm with teammate\\n\\nTo override, edit ~/.claude/hooks/pre-tool-use/dangerous-ops.sh"
  }
}
EOF
    exit 0
  fi
fi

# Allow the operation
echo '{"hookSpecificOutput": {"hookEventName": "PreToolUse", "permissionDecision": "allow"}}'
```

## Testing These Examples

### Test Edit Tracker
```bash
# Create test log entry
echo "2025-01-15 10:30:00 | frontend | /path/to/file.ts" > ~/.claude/edit-log.txt

# Test formatting script
bash ~/.claude/hooks/stop/30-format-code.sh
```

### Test Build Checker
```bash
# Add some edits to log
echo "2025-01-15 10:30:00 | backend | /path/to/backend/file.ts" >> ~/.claude/edit-log.txt

# Run build checker
bash ~/.claude/hooks/stop/20-build-checker.sh
```

### Test Skill Activator
```bash
# Test with mock prompt
echo '{"text": "How do I create a new API endpoint?"}' | node ~/.claude/hooks/user-prompt-submit/skill-activator.js
```

## Debugging Tips

**Enable debug mode:**
```bash
# Add to top of any bash script
set -x
exec 2>>~/.claude/hooks/debug.log
```

**Check hook execution:**
```bash
# Watch hooks run in real-time
tail -f ~/.claude/logs/hooks.log
```

**Test hook output:**
```bash
# Capture output
bash ~/.claude/hooks/stop/20-build-checker.sh > /tmp/hook-test.log 2>&1
cat /tmp/hook-test.log
```

610
skills/building-hooks/resources/hook-patterns.md
Normal file
@@ -0,0 +1,610 @@

# Hook Patterns Library

Reusable patterns for common hook use cases.

## Pattern: File Path Validation

Safely validate and sanitize file paths in hooks.

```bash
validate_file_path() {
  local path="$1"

  # Remove null/empty
  if [ -z "$path" ] || [ "$path" == "null" ]; then
    return 1
  fi

  # Must be absolute path
  if [[ ! "$path" =~ ^/ ]]; then
    return 1
  fi

  # Must exist
  if [ ! -f "$path" ]; then
    return 1
  fi

  # Check file extension whitelist
  if [[ ! "$path" =~ \.(ts|tsx|js|jsx|py|rs|go|java)$ ]]; then
    return 1
  fi

  return 0
}

# Usage
if validate_file_path "$file_path"; then
  # Safe to operate on file
  process_file "$file_path"
fi
```

## Pattern: Finding Project Root

Locate the project root directory from any file path.

```bash
find_project_root() {
  local dir="$1"

  # Start from the file's directory
  if [ -f "$dir" ]; then
    dir=$(dirname "$dir")
  fi

  # Walk up until finding markers
  while [ "$dir" != "/" ]; do
    # Check for project markers
    if [ -f "$dir/package.json" ] || \
       [ -f "$dir/Cargo.toml" ] || \
       [ -f "$dir/go.mod" ] || \
       [ -d "$dir/.git" ]; then
      echo "$dir"
      return 0
    fi
    dir=$(dirname "$dir")
  done

  return 1
}

# Usage
project_root=$(find_project_root "$file_path")
if [ -n "$project_root" ]; then
  cd "$project_root"
  npm run build
fi
```

## Pattern: Conditional Hook Execution

Run the hook only when certain conditions are met.

```bash
#!/bin/bash

# Configuration
MIN_CHANGES=3
TARGET_REPO="backend"

# Check if should run
should_run() {
  # Count recent edits
  local edit_count=$(tail -20 ~/.claude/edit-log.txt | wc -l)

  if [ "$edit_count" -lt "$MIN_CHANGES" ]; then
    return 1
  fi

  # Check if target repo was modified
  if ! tail -20 ~/.claude/edit-log.txt | grep -q "$TARGET_REPO"; then
    return 1
  fi

  return 0
}

# Main execution
if ! should_run; then
  echo '{}'
  exit 0
fi

# Run actual hook logic
perform_build_check
```

## Pattern: Rate Limiting

Prevent hooks from running too frequently.

```bash
#!/bin/bash

RATE_LIMIT_FILE="/tmp/hook-last-run"
MIN_INTERVAL=30 # seconds

# Check if enough time has passed
should_run() {
  if [ ! -f "$RATE_LIMIT_FILE" ]; then
    return 0
  fi

  local last_run=$(cat "$RATE_LIMIT_FILE")
  local now=$(date +%s)
  local elapsed=$((now - last_run))

  if [ "$elapsed" -lt "$MIN_INTERVAL" ]; then
    echo "Skipping (ran ${elapsed}s ago, min interval ${MIN_INTERVAL}s)"
    return 1
  fi

  return 0
}

# Update last run time
mark_run() {
  date +%s > "$RATE_LIMIT_FILE"
}

# Usage
if should_run; then
  perform_expensive_operation
  mark_run
fi

echo '{}'
```

## Pattern: Multi-Project Detection

Detect which project/repo a file belongs to.

```bash
detect_project() {
  local file="$1"
  local project_root="/Users/myuser/projects"

  # Extract project name from path
  if [[ "$file" =~ $project_root/([^/]+) ]]; then
    echo "${BASH_REMATCH[1]}"
    return 0
  fi

  echo "unknown"
  return 1
}

# Usage
project=$(detect_project "$file_path")

case "$project" in
  "frontend")
    npm --prefix ~/projects/frontend run build
    ;;
  "backend")
    cargo build --manifest-path ~/projects/backend/Cargo.toml
    ;;
  *)
    echo "Unknown project: $project"
    ;;
esac
```

## Pattern: Graceful Degradation

Handle failures gracefully without blocking the workflow.

```bash
#!/bin/bash

# Try operation with fallback
try_with_fallback() {
  local primary_cmd="$1"
  local fallback_cmd="$2"
  local description="$3"

  echo "Attempting: $description"

  # Try primary command
  if eval "$primary_cmd" 2>/dev/null; then
    echo "✅ Success"
    return 0
  fi

  echo "⚠️ Primary failed, trying fallback..."

  # Try fallback
  if eval "$fallback_cmd" 2>/dev/null; then
    echo "✅ Fallback succeeded"
    return 0
  fi

  echo "❌ Both failed, continuing anyway"
  return 1
}

# Usage
try_with_fallback \
  "npm run build" \
  "npm run build:dev" \
  "Building project"

# Always return empty response (non-blocking)
echo '{}'
```

## Pattern: Parallel Execution

Run multiple checks in parallel for speed.

```bash
#!/bin/bash

# Run checks in parallel
run_parallel_checks() {
  local pids=()

  # Start each check in the background
  check_typescript &
  pids+=($!)

  check_eslint &
  pids+=($!)

  check_tests &
  pids+=($!)

  # Wait for all to complete
  local exit_code=0
  for pid in "${pids[@]}"; do
    wait "$pid" || exit_code=1
  done

  return $exit_code
}

check_typescript() {
  if ! npx tsc --noEmit > /tmp/tsc-output.txt 2>&1; then
    echo "TypeScript errors found"
    return 1
  fi
}

check_eslint() {
  npx eslint . > /tmp/eslint-output.txt 2>&1
}

check_tests() {
  npm test > /tmp/test-output.txt 2>&1
}

# Usage
if run_parallel_checks; then
  echo "✅ All checks passed"
else
  echo "⚠️ Some checks failed"
  cat /tmp/tsc-output.txt
  cat /tmp/eslint-output.txt
fi

echo '{}'
```

## Pattern: Smart Caching

Cache results to avoid redundant work.

```bash
#!/bin/bash

CACHE_DIR="$HOME/.claude/hook-cache"
mkdir -p "$CACHE_DIR"

# Generate cache key from path + mtime
# (md5sum on Linux, md5 on macOS; stat flags also differ per OS)
cache_key() {
  local file="$1"
  echo -n "$file:$(stat -f %m "$file" 2>/dev/null || stat -c %Y "$file")" | \
    { md5sum 2>/dev/null || md5; } | cut -d' ' -f1
}

# Check cache
check_cache() {
  local file="$1"
  local key=$(cache_key "$file")
  local cache_file="$CACHE_DIR/$key"

  if [ -f "$cache_file" ]; then
    # Cache hit
    cat "$cache_file"
    return 0
  fi

  return 1
}

# Update cache
update_cache() {
  local file="$1"
  local result="$2"
  local key=$(cache_key "$file")
  local cache_file="$CACHE_DIR/$key"

  echo "$result" > "$cache_file"

  # Clean old cache entries (older than 1 day)
  find "$CACHE_DIR" -type f -mtime +1 -delete 2>/dev/null
}

# Usage
if cached=$(check_cache "$file_path"); then
  echo "Cache hit: $cached"
else
  result=$(expensive_operation "$file_path")
  update_cache "$file_path" "$result"
  echo "Computed: $result"
fi
```

## Pattern: Progressive Output

Show progress for long-running hooks.

```bash
#!/bin/bash

# Progress indicator
show_progress() {
  local message="$1"
  echo -n "$message..."
}

complete_progress() {
  local status="$1"
  if [ "$status" == "success" ]; then
    echo " ✅"
  else
    echo " ❌"
  fi
}

# Usage
show_progress "Running TypeScript compiler"
if npx tsc --noEmit 2>/dev/null; then
  complete_progress "success"
else
  complete_progress "failure"
fi

show_progress "Running linter"
if npx eslint . 2>/dev/null; then
  complete_progress "success"
else
  complete_progress "failure"
fi

echo '{}'
```

## Pattern: Context Injection

Inject helpful context into Claude's prompt.

```javascript
// UserPromptSubmit hook
// (getRecentlyEditedFiles and getLastBuildStatus are illustrative helpers
// you would implement for your own project)
function injectContext(prompt) {
  const context = [];

  // Add relevant documentation
  if (prompt.includes('API')) {
    context.push('📖 API Documentation: https://docs.example.com/api');
  }

  // Add recent changes
  const recentFiles = getRecentlyEditedFiles();
  if (recentFiles.length > 0) {
    context.push(`📝 Recently edited: ${recentFiles.join(', ')}`);
  }

  // Add project status
  const buildStatus = getLastBuildStatus();
  if (!buildStatus.passed) {
    context.push(`⚠️ Current build has ${buildStatus.errorCount} errors`);
  }

  if (context.length === 0) {
    return { decision: 'approve' };
  }

  return {
    decision: 'approve',
    additionalContext: `\n\n---\n${context.join('\n')}\n---\n`
  };
}
```

## Pattern: Error Accumulation

Collect multiple errors before reporting.

```bash
#!/bin/bash

ERRORS=()

# Add error to collection
add_error() {
  ERRORS+=("$1")
}

# Report all errors
report_errors() {
  if [ ${#ERRORS[@]} -eq 0 ]; then
    echo "✅ No errors found"
    return 0
  fi

  echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  echo "⚠️ Found ${#ERRORS[@]} issue(s):"
  echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"

  local i=1
  for error in "${ERRORS[@]}"; do
    echo "$i. $error"
    ((i++))
  done

  echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  return 1
}

# Usage
if ! run_typescript_check; then
  add_error "TypeScript compilation failed"
fi

if ! run_lint_check; then
  add_error "Linting issues found"
fi

if ! run_test_check; then
  add_error "Tests failing"
fi

report_errors

echo '{}'
```

## Pattern: Conditional Blocking

Block only on critical errors, warn on others.

```bash
#!/bin/bash

# Read the tool-use event and extract the file path
read -r tool_use_json
file_path=$(echo "$tool_use_json" | jq -r '.tool.input.file_path // empty')

ERROR_LEVEL="none" # none, warning, critical

# Check for issues
check_critical_issues() {
  if grep -q "FIXME\|XXX\|TODO: CRITICAL" "$file_path"; then
    ERROR_LEVEL="critical"
    return 1
  fi
  return 0
}

check_warnings() {
  # Never downgrade a critical finding to a warning
  if [ "$ERROR_LEVEL" == "none" ] && grep -q "console.log\|debugger" "$file_path"; then
    ERROR_LEVEL="warning"
    return 1
  fi
  return 0
}

# Run checks
check_critical_issues
check_warnings

# Return appropriate decision
case "$ERROR_LEVEL" in
  "critical")
    echo '{
      "decision": "block",
      "reason": "🚫 CRITICAL: Found critical TODOs or FIXMEs that must be addressed"
    }' | jq -c '.'
    ;;
  "warning")
    echo "⚠️ Warning: Found debug statements (console.log, debugger)"
    echo '{}'
    ;;
  *)
    echo '{}'
    ;;
esac
```

## Pattern: Hook Coordination

Coordinate between multiple hooks using shared state.

```bash
#!/bin/bash
# Hook 1: Track state
STATE_FILE="/tmp/hook-state.json"

# Update state
jq -n \
  --arg timestamp "$(date +%s)" \
  --arg files "$files_edited" \
  '{lastRun: $timestamp, filesEdited: ($files | split(","))}' \
  > "$STATE_FILE"

echo '{}'
```

```bash
#!/bin/bash
# Hook 2: Read state
STATE_FILE="/tmp/hook-state.json"

if [ -f "$STATE_FILE" ]; then
  last_run=$(jq -r '.lastRun' "$STATE_FILE")
  files=$(jq -r '.filesEdited[]' "$STATE_FILE")

  # Use state from previous hook
  for file in $files; do
    process_file "$file"
  done
fi

echo '{}'
```

## Pattern: User Notification

Notify the user of important events without blocking.

```bash
#!/bin/bash

# Send desktop notification (macOS)
notify_macos() {
  osascript -e "display notification \"$1\" with title \"Claude Code Hook\""
}

# Send desktop notification (Linux)
notify_linux() {
  notify-send "Claude Code Hook" "$1"
}

# Notify based on OS
notify() {
  local message="$1"

  case "$OSTYPE" in
    darwin*)
      notify_macos "$message"
      ;;
    linux*)
      notify_linux "$message"
      ;;
  esac
}

# Usage
if [ "$error_count" -gt 10 ]; then
  notify "⚠️ Build has $error_count errors"
fi

echo '{}'
```

## Remember

- **Keep it simple** - Start with basic patterns, add complexity only when needed
- **Test thoroughly** - Test each pattern in isolation before combining
- **Fail gracefully** - Non-blocking hooks should never crash the workflow
- **Log everything** - You'll need it for debugging
- **Document patterns** - Future you will thank present you

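The "log everything" advice can be as small as a shared helper that every hook sources. A minimal sketch; the helper name and per-day log location are illustrative choices, not part of any Claude Code API:

```shell
# log-helper.sh - source this from any hook for consistent, dated logs.
log_hook() {
  local hook_name="$1"; shift
  local log_dir="$HOME/.claude/hooks"
  mkdir -p "$log_dir"
  echo "$(date '+%Y-%m-%d %H:%M:%S') | $hook_name | $*" \
    >> "$log_dir/hook-$(date +%Y%m%d).log"
}

# Usage inside a hook:
log_hook "build-checker" "started"
log_hook "build-checker" "found 3 errors"
```

Because every hook writes the same `timestamp | hook | message` shape, the debugging recipes above (`tail -f`, `grep`) work across all of them.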
657
skills/building-hooks/resources/testing-hooks.md
Normal file
@@ -0,0 +1,657 @@

# Testing Hooks

Comprehensive testing strategies for Claude Code hooks.

## Testing Philosophy

**Hooks run with full system access. Test them thoroughly before deploying.**

### Testing Levels

1. **Unit testing** - Test functions in isolation
2. **Integration testing** - Test with mock Claude Code events
3. **Manual testing** - Test in real Claude Code sessions
4. **Regression testing** - Verify hooks don't break existing workflows

## Unit Testing Hook Functions

### Bash Functions

**Example: Testing file validation**

```bash
# hook-functions.sh - extractable functions
validate_file_path() {
  local path="$1"

  if [ -z "$path" ] || [ "$path" == "null" ]; then
    return 1
  fi

  if [[ ! "$path" =~ ^/ ]]; then
    return 1
  fi

  if [ ! -f "$path" ]; then
    return 1
  fi

  return 0
}
```

```bash
#!/bin/bash
# test-hook-functions.sh - test script
source ./hook-functions.sh

test_validate_file_path() {
  # Test valid path
  touch /tmp/test-file.txt
  if validate_file_path "/tmp/test-file.txt"; then
    echo "✅ Valid path test passed"
  else
    echo "❌ Valid path test failed"
    return 1
  fi

  # Test invalid path
  if ! validate_file_path ""; then
    echo "✅ Empty path test passed"
  else
    echo "❌ Empty path test failed"
    return 1
  fi

  # Test null path
  if ! validate_file_path "null"; then
    echo "✅ Null path test passed"
  else
    echo "❌ Null path test failed"
    return 1
  fi

  # Test relative path
  if ! validate_file_path "relative/path.txt"; then
    echo "✅ Relative path test passed"
  else
    echo "❌ Relative path test failed"
    return 1
  fi

  rm /tmp/test-file.txt
  return 0
}

# Run test
test_validate_file_path
```

### JavaScript Functions

**Example: Testing prompt analysis**

```javascript
// skill-activator.js
function analyzePrompt(text, rules) {
  const lowerText = text.toLowerCase();
  const activated = [];

  for (const [skillName, config] of Object.entries(rules)) {
    if (config.promptTriggers?.keywords) {
      for (const keyword of config.promptTriggers.keywords) {
        if (lowerText.includes(keyword.toLowerCase())) {
          activated.push({ skill: skillName, priority: config.priority || 'medium' });
          break;
        }
      }
    }
  }

  return activated;
}

// test.js
const assert = require('assert');

const testRules = {
  'backend-dev': {
    priority: 'high',
    promptTriggers: {
      keywords: ['backend', 'API', 'endpoint']
    }
  }
};

// Test keyword matching
function testKeywordMatching() {
  const result = analyzePrompt('How do I create a backend endpoint?', testRules);
  assert.equal(result.length, 1, 'Should find one skill');
  assert.equal(result[0].skill, 'backend-dev', 'Should match backend-dev');
  assert.equal(result[0].priority, 'high', 'Should have high priority');
  console.log('✅ Keyword matching test passed');
}

// Test no match
function testNoMatch() {
  const result = analyzePrompt('How do I write Python?', testRules);
  assert.equal(result.length, 0, 'Should find no skills');
  console.log('✅ No match test passed');
}

// Test case insensitivity
function testCaseInsensitive() {
  const result = analyzePrompt('BACKEND endpoint', testRules);
  assert.equal(result.length, 1, 'Should match regardless of case');
  console.log('✅ Case insensitive test passed');
}

// Run tests
testKeywordMatching();
testNoMatch();
testCaseInsensitive();
```

## Integration Testing with Mock Events

### Creating Mock Events

**PostToolUse event:**
```json
{
  "event": "PostToolUse",
  "tool": {
    "name": "Edit",
    "input": {
      "file_path": "/Users/test/project/src/file.ts",
      "old_string": "const x = 1;",
      "new_string": "const x = 2;"
    }
  },
  "result": {
    "success": true
  }
}
```

**UserPromptSubmit event:**
```json
{
  "event": "UserPromptSubmit",
  "text": "How do I create a new API endpoint?",
  "timestamp": "2025-01-15T10:30:00Z"
}
```

**Stop event:**
```json
{
  "event": "Stop",
  "sessionId": "abc123",
  "messageCount": 10
}
```

### Testing Hook with Mock Events

```bash
#!/bin/bash
# test-hook.sh

# Create mock event
create_mock_edit_event() {
  cat <<EOF
{
  "event": "PostToolUse",
  "tool": {
    "name": "Edit",
    "input": {
      "file_path": "/tmp/test-file.ts"
    }
  }
}
EOF
}

# Test hook
test_edit_tracker() {
  # Setup
  export LOG_FILE="/tmp/test-edit-log.txt"
  rm -f "$LOG_FILE"

  # Run hook with mock event
  create_mock_edit_event | bash hooks/post-tool-use/01-track-edits.sh

  # Verify
  if [ -f "$LOG_FILE" ]; then
    if grep -q "test-file.ts" "$LOG_FILE"; then
      echo "✅ Edit tracker test passed"
      return 0
    fi
  fi

  echo "❌ Edit tracker test failed"
  return 1
}

test_edit_tracker
```

### Testing JavaScript Hooks

```javascript
// test-skill-activator.js
const { execSync } = require('child_process');

function testSkillActivator(prompt) {
  const mockEvent = JSON.stringify({
    text: prompt
  });

  const result = execSync(
    'node hooks/user-prompt-submit/skill-activator.js',
    {
      input: mockEvent,
      encoding: 'utf8',
      env: {
        ...process.env,
        SKILL_RULES: './test-skill-rules.json'
      }
    }
  );

  return JSON.parse(result);
}

// Test activation
function testBackendActivation() {
  const result = testSkillActivator('How do I create a backend endpoint?');

  if (result.additionalContext && result.additionalContext.includes('backend')) {
    console.log('✅ Backend activation test passed');
  } else {
    console.log('❌ Backend activation test failed');
    process.exit(1);
  }
}

testBackendActivation();
```

## Manual Testing in Claude Code

### Testing Checklist

**Before deployment:**
- [ ] Hook executes without errors
- [ ] Hook completes within timeout (default 10s)
- [ ] Output is helpful and not overwhelming
- [ ] Non-blocking hooks don't prevent work
- [ ] Blocking hooks have clear error messages
- [ ] Hook handles missing files gracefully
- [ ] Hook handles malformed input gracefully

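The last checklist item is easy to verify with a throwaway script; the same recipe works pointed at your real hook files. A sketch using only bash builtins (the stub hook stands in for one of your own):

```shell
# Smoke test: a non-blocking hook must tolerate garbage on stdin
# and still emit its JSON response.
hook=$(mktemp)
cat > "$hook" <<'EOF'
#!/bin/bash
read -r _ 2>/dev/null || true   # consume stdin, tolerate garbage
echo '{}'
EOF

out=$(echo 'not-json{{{' | bash "$hook")
rm -f "$hook"

if [ "$out" = "{}" ]; then
  echo "PASS: hook survived malformed input"
else
  echo "FAIL: got '$out'"
fi
```

Swap the `mktemp` stub for a path like `~/.claude/hooks/stop/20-build-checker.sh` to smoke-test a real hook before deploying it.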
### Manual Test Procedure

**1. Enable debug mode:**
```bash
# Add to top of hook
set -x
exec 2>>~/.claude/hooks/debug-$(date +%Y%m%d).log
```

**2. Test with minimal prompt:**
```
Create a simple test file
```

**3. Observe hook execution:**
```bash
# Watch debug log
tail -f ~/.claude/hooks/debug-*.log
```

**4. Verify output:**
- Check that hook completes
- Verify no errors in debug log
- Confirm expected behavior

**5. Test edge cases:**
- Empty file paths
- Non-existent files
- Files outside project
- Malformed input
- Missing dependencies

**6. Test performance:**
```bash
# Time hook execution
time bash hooks/stop/build-checker.sh
```

## Regression Testing

### Creating Test Suite

```bash
#!/bin/bash
# regression-test.sh

TEST_DIR="/tmp/hook-tests"
mkdir -p "$TEST_DIR"

# Setup test environment
setup() {
  export LOG_FILE="$TEST_DIR/edit-log.txt"
  export PROJECT_ROOT="$TEST_DIR/projects"
  mkdir -p "$PROJECT_ROOT"
}

# Cleanup after tests
teardown() {
  rm -rf "$TEST_DIR"
}

# Test 1: Edit tracker logs edits
test_edit_tracker_logs() {
  echo '{"tool": {"name": "Edit", "input": {"file_path": "/test/file.ts"}}}' | \
    bash hooks/post-tool-use/01-track-edits.sh

  if grep -q "file.ts" "$LOG_FILE"; then
    echo "✅ Test 1 passed"
    return 0
  fi

  echo "❌ Test 1 failed"
  return 1
}

# Test 2: Build checker finds errors
test_build_checker_finds_errors() {
  # Create mock project with errors
  mkdir -p "$PROJECT_ROOT/test-project"
  echo 'const x: string = 123;' > "$PROJECT_ROOT/test-project/error.ts"

  # Add to log
  echo "2025-01-15 10:00:00 | test-project | error.ts" > "$LOG_FILE"

  # Run build checker (should find errors)
  output=$(bash hooks/stop/20-build-checker.sh)

  if echo "$output" | grep -q "error"; then
    echo "✅ Test 2 passed"
    return 0
  fi

  echo "❌ Test 2 failed"
  return 1
}

# Test 3: Formatter handles missing prettier
test_formatter_missing_prettier() {
  # Create file without prettier config
  mkdir -p "$PROJECT_ROOT/no-prettier"
  echo 'const x=1' > "$PROJECT_ROOT/no-prettier/file.js"
  echo "2025-01-15 10:00:00 | no-prettier | file.js" > "$LOG_FILE"

  # Should complete without error
  if bash hooks/stop/30-format-code.sh 2>&1; then
    echo "✅ Test 3 passed"
    return 0
  fi

  echo "❌ Test 3 failed"
  return 1
}

# Run all tests
run_all_tests() {
  setup

  local failed=0

  test_edit_tracker_logs || ((failed++))
  test_build_checker_finds_errors || ((failed++))
  test_formatter_missing_prettier || ((failed++))

  teardown

  if [ $failed -eq 0 ]; then
    echo "━━━━━━━━━━━━━━━━━━━━━━━━"
    echo "✅ All tests passed!"
    echo "━━━━━━━━━━━━━━━━━━━━━━━━"
    return 0
  else
    echo "━━━━━━━━━━━━━━━━━━━━━━━━"
    echo "❌ $failed test(s) failed"
    echo "━━━━━━━━━━━━━━━━━━━━━━━━"
    return 1
  fi
}

run_all_tests
```

### Running Regression Suite

```bash
# Run before deploying changes
bash test/regression-test.sh

# Run on a schedule (crontab entry)
0 0 * * * cd ~/hooks && bash test/regression-test.sh
```

## Performance Testing

### Measuring Hook Performance

```bash
#!/bin/bash
# benchmark-hook.sh
# (date +%s%N needs GNU date; on macOS, install coreutils and use gdate)

ITERATIONS=10
HOOK_PATH="hooks/stop/build-checker.sh"

total_time=0

for i in $(seq 1 $ITERATIONS); do
  start=$(date +%s%N)
  bash "$HOOK_PATH" > /dev/null 2>&1
  end=$(date +%s%N)

  elapsed=$(( (end - start) / 1000000 )) # Convert to ms
  total_time=$(( total_time + elapsed ))

  echo "Iteration $i: ${elapsed}ms"
done

average=$(( total_time / ITERATIONS ))

echo "━━━━━━━━━━━━━━━━━━━━━━━━"
echo "Average: ${average}ms"
echo "Total: ${total_time}ms"
echo "━━━━━━━━━━━━━━━━━━━━━━━━"

if [ $average -gt 2000 ]; then
  echo "⚠️ Hook is slow (>2s)"
  exit 1
fi

echo "✅ Performance acceptable"
```

### Performance Targets

- **Non-blocking hooks:** <2 seconds
- **Blocking hooks:** <5 seconds
- **UserPromptSubmit:** <1 second (critical path)
- **PostToolUse:** <500ms (runs frequently)

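If a hook risks blowing past these targets, you can also cap it in configuration: Claude Code's hooks settings accept an optional per-command `timeout` (in seconds). A sketch of the shape, with an illustrative command path; check your version's settings reference for the exact schema:

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "bash ~/.claude/hooks/stop/20-build-checker.sh",
            "timeout": 5
          }
        ]
      }
    ]
  }
}
```

An explicit timeout turns a runaway hook into a bounded delay instead of a stalled session.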
## Continuous Testing

### Pre-commit Hook for Hook Testing

```bash
#!/bin/bash
# .git/hooks/pre-commit

echo "Testing Claude Code hooks..."

# Run test suite
if bash test/regression-test.sh; then
  echo "✅ Hook tests passed"
  exit 0
else
  echo "❌ Hook tests failed"
  echo "Fix tests before committing"
  exit 1
fi
```

### CI/CD Integration

```yaml
# .github/workflows/test-hooks.yml
name: Test Claude Code Hooks

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '18'

      - name: Install dependencies
        run: npm install

      - name: Run hook tests
        run: bash test/regression-test.sh

      - name: Run performance tests
        run: bash test/benchmark-hook.sh
```

## Common Testing Mistakes
|
||||
|
||||
### Mistake 1: Not Testing Error Paths
|
||||
|
||||
❌ **Wrong:**
|
||||
```bash
|
||||
# Only test success path
|
||||
npx tsc --noEmit
|
||||
echo "✅ Build passed"
|
||||
```
|
||||
|
||||
✅ **Right:**
|
||||
```bash
|
||||
# Test both success and failure
|
||||
if npx tsc --noEmit 2>&1; then
|
||||
echo "✅ Build passed"
|
||||
else
|
||||
echo "❌ Build failed"
|
||||
# Test that error handling works
|
||||
fi
|
||||
```
|
||||
|
||||
### Mistake 2: Hardcoding Paths
|
||||
|
||||
❌ **Wrong:**
|
||||
```bash
|
||||
# Hardcoded path
|
||||
cd /Users/myname/projects/myproject
|
||||
npm run build
|
||||
```
|
||||
|
||||
✅ **Right:**
|
||||
```bash
|
||||
# Dynamic path
|
||||
project_root=$(find_project_root "$file_path")
|
||||
if [ -n "$project_root" ]; then
|
||||
cd "$project_root"
|
||||
npm run build
|
||||
fi
|
||||
```
### Mistake 3: Not Cleaning Up

❌ **Wrong:**
```bash
# Leaves test files behind
echo "test" > /tmp/test-file.txt
run_test
# Never cleans up
```

✅ **Right:**
```bash
# Always clean up
trap 'rm -f /tmp/test-file.txt' EXIT
echo "test" > /tmp/test-file.txt
run_test
```
### Mistake 4: Silent Failures

❌ **Wrong:**
```bash
# Errors disappear
npx tsc --noEmit 2>/dev/null
```

✅ **Right:**
```bash
# Capture errors and report them
if ! output=$(npx tsc --noEmit 2>&1); then
  echo "❌ TypeScript errors:"
  echo "$output"
fi
```
## Debugging Failed Tests

### Enable Verbose Output

```bash
# Add debug flags
set -x           # Print commands
set -e           # Exit on error
set -u           # Error on undefined variables
set -o pipefail  # Catch pipe failures
```
### Capture Test Output

```bash
# Run test with full output
bash -x test/regression-test.sh 2>&1 | tee test-output.log

# Review output
less test-output.log
```

### Isolate Failing Test

```bash
# Run single test
source test/regression-test.sh
setup
test_build_checker_finds_errors
teardown
```
## Remember

- **Test before deploying** - Hooks have full system access
- **Test all paths** - Success, failure, edge cases
- **Test performance** - Hooks shouldn't slow your workflow
- **Automate testing** - Run tests on every change
- **Clean up** - Don't leave test artifacts behind
- **Document tests** - Future you will thank present you

**Golden rule:** If you wouldn't run it on production, don't deploy it as a hook.