Initial commit

This commit is contained in:
Zhongwei Li
2025-11-30 08:37:55 +08:00
commit 506a828b22
59 changed files with 18515 additions and 0 deletions

hooks/README.md Normal file

@@ -0,0 +1,472 @@
# Unify 2.1 Plugin Hooks
Intelligent prompt interception system with dual-stage hook pipeline for automatic skill activation and orchestrator routing.
## Overview
This plugin provides **three hooks** that work together to enhance Claude Code's capabilities:
1. **skill-activation-prompt** - Detects domain-specific needs and recommends skills
2. **orchestrator-interceptor** - Analyzes complexity and routes to multi-agent orchestration
3. **combined-prompt-hook** - Chains both hooks seamlessly
## Architecture
```
┌──────────────────────────────────────────────────────────────┐
│                    USER PROMPT SUBMITTED                     │
└────────────────┬─────────────────────────────────────────────┘
                 │
                 ▼
┌────────────────────────────────────┐
│      combined-prompt-hook.py       │  ← Single entry point
│     (Hook Orchestration Layer)     │
└────────────────┬───────────────────┘
                 │
    ┌────────────┴────────────┐
    │                         │
    ▼                         ▼
┌───────────────────┐  ┌──────────────────────┐
│ STAGE 1: SKILLS   │  │ STAGE 2: ORCHESTRATOR│
│ skill-activation  │  │ orchestrator_        │
│ -prompt.py        │  │ interceptor.py       │
└────────┬──────────┘  └──────────┬───────────┘
         │                        │
         ▼                        ▼
  ┌─────────────┐          ┌──────────────┐
  │ Detects     │          │ Analyzes     │
  │ domain      │          │ complexity   │
  │ skills      │          │ & routing    │
  │ needed      │          │              │
  └─────────────┘          └──────────────┘
         │                        │
         └───────────┬────────────┘
                     │
                     ▼
         ┌───────────────────────┐
         │  COMBINED CONTEXT     │
         │  injected into        │
         │  Claude's conversation│
         └───────────────────────┘
```
## Hook 1: Skill Activation
**File**: `skill-activation-prompt.py`
**Purpose**: Load domain-specific knowledge before execution
**Detection Rules** (from `.claude/skills/skill-rules.json` in the project):
- Keywords: `["pyspark", "schema", "bronze", "silver", "gold", "etl", "transform"]`
- Intent patterns: `["generate.*pyspark", "create.*table", "transform.*data"]`
**Example Output**:
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🎯 SKILL ACTIVATION CHECK
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
⚠️ CRITICAL SKILLS (REQUIRED):
→ schema-reference
📚 RECOMMENDED SKILLS:
→ pyspark-patterns
ACTION: Use Skill tool BEFORE responding
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
## Hook 2: Orchestrator Interceptor
**File**: `orchestrator_interceptor.py`
**Purpose**: Analyze complexity and route to optimal execution strategy
**Classification Rules**:
| Pattern | Reason | Complexity | Action |
|---------|--------|------------|--------|
| "what is", "explain", <20 words | Simple query | simple_query | Skip orchestration |
| Contains "bronze", "silver", "gold" (2+) | Cross-layer work | high_complexity | Multi-agent (6-8) |
| "all", "across", "entire", "multiple" | Broad scope | complex_task | Multi-agent (4-6) |
| "linting", "formatting" + "all"/"entire" | Quality sweep | high_complexity | Multi-agent (6-8) |
| "implement", "create", "build" + >10 words | Implementation | moderate_task | Single agent or 2-3 |
**Cost Estimates**:
- Simple query: ~500 tokens, $0.0015
- Moderate task: ~6,000 tokens, $0.018
- Complex task: ~17,000 tokens, $0.051
- High complexity: ~43,000 tokens, $0.129
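These figures are the hardcoded defaults in `estimate_complexity_tokens`; they work out to a flat blended rate of roughly $3 per million tokens (actual pricing varies by model), as this quick check shows:
```python
# Each default cost_usd equals total_estimated tokens priced at ~$3 per million.
for tokens, cost in [(500, 0.0015), (6_000, 0.018), (17_000, 0.051), (43_000, 0.129)]:
    assert round(tokens * 3 / 1_000_000, 4) == cost
```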
**Example Output**:
```
<orchestrator-analysis-required>
ORCHESTRATOR INTERCEPTION ACTIVE
BEFORE responding to the user, you MUST:
1. Launch the master-orchestrator agent
2. USER PROMPT: "Fix linting across bronze, silver, and gold layers"
3. Classification: Cross Layer Work (High Complexity)
4. COST ESTIMATION:
- Total estimated: ~43,000 tokens
- Approximate cost: $0.129 USD
Present execution plan with user approval options.
</orchestrator-analysis-required>
```
## Hook 3: Combined Hook
**File**: `combined-prompt-hook.py`
**Purpose**: Chain both hooks seamlessly in pure Python
**Logic**:
1. Read prompt once from stdin
2. Pass to skill-activation-prompt.py → Get skill recommendations (JSON)
3. Pass to orchestrator_interceptor.py → Get complexity analysis (JSON)
4. Parse and merge both JSON outputs
5. Return the combined JSON context (see the example below)
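Both stages emit the same `UserPromptSubmit` JSON shape, so merging is just concatenating the two `additionalContext` blocks (joined with a blank line when both are present). A representative combined response:
```json
{
  "hookSpecificOutput": {
    "hookEventName": "UserPromptSubmit",
    "additionalContext": "<skill activation check>\n\n<orchestrator analysis>"
  }
}
```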
**Benefits**:
- No bash/jq dependencies
- Better error handling
- Type-safe with proper JSON parsing
- Easier to debug and maintain
## Installation
### Option 1: Plugin-Level (Recommended)
Update `.claude/settings.json` to use the plugin hooks:
```json
{
  "hooks": {
    "user-prompt-submit": ".claude/plugins/repos/unify_2_1/hooks/combined-prompt-hook.py"
  }
}
```
### Option 2: Global Level
Copy hooks to global hooks directory:
```bash
cp -r .claude/plugins/repos/unify_2_1/hooks/* ~/.claude/hooks/
```
Then configure in `~/.claude/settings.json`:
```json
{
  "hooks": {
    "user-prompt-submit": "~/.claude/hooks/combined-prompt-hook.py"
  }
}
```
**Restart Claude Code** after configuration changes.
## Configuration
### Adjust Skill Detection
Edit `.claude/skills/skill-rules.json` in your project:
```json
{
  "skills": {
    "schema-reference": {
      "priority": "critical",
      "promptTriggers": {
        "keywords": ["schema", "table", "column"],
        "intentPatterns": ["generate.*table", "create.*etl"]
      }
    }
  }
}
```
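Keyword triggers are case-insensitive substring matches; intent patterns are regular expressions matched with `re.IGNORECASE` (see `skill-activation-prompt.py`).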
### Adjust Orchestrator Classification
Edit `orchestrator_interceptor.py`:
```python
# Add custom patterns near the top of should_orchestrate
def should_orchestrate(prompt: str) -> tuple[bool, str, str]:
    # Your custom logic here (checked before the built-in rules)
    if "custom_pattern" in prompt.lower():
        return True, "custom_reason", "high_complexity"
    # ...then fall through to the existing classification rules
```
### Adjust Cost Estimates
Edit `orchestrator_interceptor.py`:
```python
token_estimates = {
    "moderate_task": {
        "orchestrator_analysis": 1000,
        "agent_execution": 5000,
        "total_estimated": 6000,
        "cost_usd": 0.018,
    },
    # Adjust as needed
}
```
## Monitoring & Logs
### Log Location
All orchestrator decisions logged to:
```
~/.claude/hook_logs/orchestrator_hook.log
```
### View Logs
```bash
# Real-time monitoring
tail -f ~/.claude/hook_logs/orchestrator_hook.log
# Recent entries
tail -50 ~/.claude/hook_logs/orchestrator_hook.log
# Search classifications
grep "Classified as" ~/.claude/hook_logs/orchestrator_hook.log
# View cost estimates
grep "Cost estimate" ~/.claude/hook_logs/orchestrator_hook.log
```
### Log Format
```
2025-11-10 23:00:44 | INFO | ================================================================================
2025-11-10 23:00:44 | INFO | Hook triggered - Session: abc123
2025-11-10 23:00:44 | INFO | CWD: /workspaces/unify_2_1_dm_niche_rms_build_d10
2025-11-10 23:00:44 | INFO | Prompt: Fix all linting errors across bronze, silver, and gold layers
2025-11-10 23:00:44 | INFO | Classified as CROSS-LAYER WORK (3 layers)
2025-11-10 23:00:44 | INFO | Decision: ORCHESTRATE
2025-11-10 23:00:44 | INFO | Reason: cross_layer_work
2025-11-10 23:00:44 | INFO | Complexity: high_complexity
2025-11-10 23:00:44 | INFO | Cost estimate: $0.129 USD (~43,000 tokens)
2025-11-10 23:00:44 | INFO | Hook completed successfully
2025-11-10 23:00:44 | INFO | ================================================================================
```
### Log Rotation
- **Rotation**: 10 MB
- **Retention**: 30 days
- **Location**: `~/.claude/hook_logs/orchestrator_hook.log`
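These limits come from the loguru sink configured at the top of `orchestrator_interceptor.py`; adjust them there:
```python
logger.add(
    log_file,
    rotation="10 MB",      # start a new file once the log reaches 10 MB
    retention="30 days",   # delete rotated files after 30 days
    format="{time:YYYY-MM-DD HH:mm:ss} | {level: <8} | {message}",
    level="INFO",
)
```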
## Examples
### Example 1: Simple Domain Query
**Prompt**: "What is TableUtilities?"
**Skill Hook**:
```
📚 RECOMMENDED SKILLS: → pyspark-patterns
```
**Orchestrator Hook**:
```
<simple-query-detected>
Handle directly without orchestration overhead.
</simple-query-detected>
```
**Result**: Skill loaded for accurate answer, no orchestration overhead
### Example 2: Complex Multi-Layer Task
**Prompt**: "Fix linting across bronze, silver, and gold layers"
**Skill Hook**:
```
📚 RECOMMENDED SKILLS:
→ project-architecture
→ pyspark-patterns
```
**Orchestrator Hook**:
```
Classification: Cross-layer work (High complexity)
Cost: $0.129 USD (~43,000 tokens)
Strategy: Multi-agent (6-8 agents in parallel)
```
**Result**: Architecture loaded + Multi-agent orchestration plan presented
### Example 3: PySpark ETL Implementation
**Prompt**: "Generate gold table g_x_mg_vehicle_stats from silver_cms and silver_fvms"
**Skill Hook**:
```
⚠️ CRITICAL SKILLS:
→ schema-reference (exact schemas)
📚 RECOMMENDED SKILLS:
→ pyspark-patterns (TableUtilities methods)
```
**Orchestrator Hook**:
```
Classification: Implementation task (Moderate)
Cost: $0.018 USD (~6,000 tokens)
Strategy: Single pyspark-developer agent
```
**Result**: Schemas + patterns loaded, orchestrator plans single-agent execution
## Testing
### Test Skill Hook Only
```bash
echo '{"prompt":"Generate PySpark table from bronze_cms","session_id":"test","cwd":"/workspaces"}' | \
python3 .claude/plugins/repos/unify_2_1/hooks/skill-activation-prompt.py | jq
```
### Test Orchestrator Hook Only
```bash
echo '{"session_id":"test","cwd":"/workspaces","prompt":"Fix linting across all layers"}' | \
python3 .claude/plugins/repos/unify_2_1/hooks/orchestrator_interceptor.py | jq
```
### Test Combined Hook
```bash
echo '{"session_id":"test","cwd":"/workspaces","prompt":"Generate gold table from silver data"}' | \
python3 .claude/plugins/repos/unify_2_1/hooks/combined-prompt-hook.py | jq
```
## Troubleshooting
### Hook Not Running
1. **Verify configuration**:
```bash
grep -A 2 hooks .claude/settings.json
```
2. **Check executability**:
```bash
ls -l .claude/plugins/repos/unify_2_1/hooks/*.py
```
3. **Test directly**:
```bash
echo '{"prompt":"test"}' | python3 .claude/plugins/repos/unify_2_1/hooks/orchestrator_interceptor.py
```
4. **Restart Claude Code** (required for settings changes)
### Dependencies Missing
```bash
# Check loguru
python3 -c "import loguru; print(f'loguru {loguru.__version__}')"
# Install if needed
pip install loguru
# Check jq (optional - only needed to pretty-print test output)
which jq || sudo apt-get install -y jq
```
### Hook Errors
Hooks are fail-safe - errors don't block prompts:
- Error logged to `orchestrator_hook.log`
- Prompt passes through unchanged
- Claude responds normally
Check logs:
```bash
tail -50 ~/.claude/hook_logs/orchestrator_hook.log | grep ERROR
```
## Performance Impact
### Minimal Overhead
- **Skill hook**: <50ms (keyword matching, local file read)
- **Orchestrator hook**: <100ms (regex patterns, logging)
- **Total**: ~150ms added to each prompt
### When Hooks Skip
- Simple queries: Both hooks run but skip actions (~100ms)
- Domain queries: Skill loads, orchestrator skips (~120ms)
- Complex tasks: Both hooks activate fully (~150ms)
### Fail-Safe Design
- Hooks never block prompts
- Errors are caught and logged
- Default action: allow prompt through unchanged
## Integration with Plugin
### Plugin Components Using Hooks
- **16 Agents** - All benefit from orchestrator routing
- **30 Commands** - Some trigger orchestrator explicitly (`/orchestrate`)
- **9 Skills** - Activated automatically by skill hook
### Workflow
```
User types prompt
        ↓
Combined hook analyzes
        ↓
Skills loaded (if needed)
        ↓
Orchestrator invoked (if complex)
        ↓
Specialized agents launched (if approved)
        ↓
Results aggregated
        ↓
User receives comprehensive response
```
## Files
```
.claude/plugins/repos/unify_2_1/hooks/
├── README.md # This file
├── combined-prompt-hook.py # Main entry point (chains hooks in Python)
├── orchestrator_interceptor.py # Complexity analysis + routing
└── skill-activation-prompt.py # Skill detection (Python implementation)
```
## Version History
**1.0.0** (2025-11-10)
- Initial release
- Dual-stage hook pipeline
- Skill activation + orchestrator routing
- Cost estimation
- Comprehensive logging
## Support
For issues:
1. Check logs: `~/.claude/hook_logs/orchestrator_hook.log`
2. Test hooks individually (see Testing section)
3. Verify dependencies (Python, loguru)
4. Review this README
## License
MIT License (same as parent plugin)

hooks/combined-prompt-hook.py Executable file

@@ -0,0 +1,76 @@
#!/usr/bin/env python3
"""
Combined Hook for Claude Code
Chains skill-activation and orchestrator-interceptor hooks.
This ensures both hooks run on user prompt submit and combines their outputs.
"""
import sys
import json
import subprocess
from pathlib import Path
def main() -> None:
try:
script_dir = Path(__file__).parent.resolve()
input_data = sys.stdin.read()
skill_script = script_dir / "skill-activation-prompt.py"
orchestrator_script = script_dir / "orchestrator_interceptor.py"
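        # Stage 1: skill detection (failures here must not block the prompt)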
skill_result = subprocess.run(
[sys.executable, str(skill_script)],
input=input_data,
capture_output=True,
text=True,
check=False,
)
skill_output_json = {}
if skill_result.returncode == 0 and skill_result.stdout:
try:
skill_output_json = json.loads(skill_result.stdout)
except json.JSONDecodeError:
print("Warning: Skill hook returned invalid JSON", file=sys.stderr)
orchestrator_result = subprocess.run(
[sys.executable, str(orchestrator_script)],
input=input_data,
capture_output=True,
text=True,
            check=False,  # mirror Stage 1: a failing stage shouldn't raise and drop both contexts
)
        orchestrator_output_json = {}
        if orchestrator_result.returncode == 0 and orchestrator_result.stdout:
            try:
                orchestrator_output_json = json.loads(orchestrator_result.stdout)
            except json.JSONDecodeError:
                print("Warning: Orchestrator hook returned invalid JSON", file=sys.stderr)
skill_context = skill_output_json.get("hookSpecificOutput", {}).get(
"additionalContext", ""
)
orchestrator_context = orchestrator_output_json.get(
"hookSpecificOutput", {}
).get("additionalContext", "")
if skill_context and orchestrator_context:
combined_context = f"{skill_context}\n\n{orchestrator_context}"
elif skill_context:
combined_context = skill_context
else:
combined_context = orchestrator_context
response = {
"hookSpecificOutput": {
"hookEventName": "UserPromptSubmit",
"additionalContext": combined_context,
}
}
print(json.dumps(response))
sys.exit(0)
except Exception as e:
print(f"Error in combined-prompt-hook: {e}", file=sys.stderr)
response = {
"hookSpecificOutput": {
"hookEventName": "UserPromptSubmit",
"additionalContext": "",
}
}
print(json.dumps(response))
sys.exit(0)
if __name__ == "__main__":
main()

hooks/orchestrator_interceptor.py Executable file

@@ -0,0 +1,245 @@
#!/usr/bin/env python3
"""
Orchestrator Interceptor Hook for Claude Code
Analyzes user prompts and injects orchestrator invocation context for complex tasks.
"""
import json
import sys
from pathlib import Path
from loguru import logger
# Configure loguru
log_dir = Path.home() / ".claude" / "hook_logs"
log_dir.mkdir(parents=True, exist_ok=True)
log_file = log_dir / "orchestrator_hook.log"
logger.remove()
logger.add(
log_file,
rotation="10 MB",
retention="30 days",
format="{time:YYYY-MM-DD HH:mm:ss} | {level: <8} | {message}",
level="INFO"
)
logger.add(sys.stderr, level="ERROR")
def estimate_complexity_tokens(prompt: str, complexity: str) -> dict[str, float]:
"""Estimate token usage based on complexity assessment."""
token_estimates = {
"simple_query": {
"orchestrator_analysis": 500,
"agent_execution": 0,
"total_estimated": 500,
"cost_usd": 0.0015
},
"moderate_task": {
"orchestrator_analysis": 1000,
"agent_execution": 5000,
"total_estimated": 6000,
"cost_usd": 0.018
},
"complex_task": {
"orchestrator_analysis": 2000,
"agent_execution": 15000,
"total_estimated": 17000,
"cost_usd": 0.051
},
"high_complexity": {
"orchestrator_analysis": 3000,
"agent_execution": 40000,
"total_estimated": 43000,
"cost_usd": 0.129
}
}
return token_estimates.get(complexity, token_estimates["moderate_task"])
def should_orchestrate(prompt: str) -> tuple[bool, str, str]:
"""
Determine if prompt needs orchestration.
Returns: (needs_orchestration, reason, complexity_level)
"""
prompt_lower = prompt.lower()
word_count = len(prompt.split())
# 1. Simple queries (skip orchestration)
simple_patterns = ["what is", "explain", "how do", "why does", "show me", "what does", "define"]
if any(pattern in prompt_lower for pattern in simple_patterns) and word_count < 20:
logger.info(f"Classified as SIMPLE QUERY: {prompt[:100]}")
return False, "simple_query", "simple_query"
# 2. Explicit orchestration requests
if "orchestrate" in prompt_lower or "@orchestrate" in prompt_lower:
logger.info(f"Classified as EXPLICIT ORCHESTRATION REQUEST: {prompt[:100]}")
return True, "explicit_request", "high_complexity"
# 3. Cross-layer work (likely needs orchestration)
layers_mentioned = sum(1 for layer in ["bronze", "silver", "gold"] if layer in prompt_lower)
if layers_mentioned >= 2:
logger.info(f"Classified as CROSS-LAYER WORK ({layers_mentioned} layers): {prompt[:100]}")
return True, "cross_layer_work", "high_complexity"
# 4. Broad scope indicators
broad_keywords = ["all", "across", "entire", "multiple", "every"]
if any(keyword in prompt_lower for keyword in broad_keywords):
logger.info(f"Classified as BROAD SCOPE: {prompt[:100]}")
return True, "broad_scope", "complex_task"
# 5. Code quality sweeps
quality_keywords = ["linting", "formatting", "type hints", "quality", "refactor", "optimize"]
scope_keywords = ["all", "entire", "project", "codebase"]
if any(q in prompt_lower for q in quality_keywords) and any(s in prompt_lower for s in scope_keywords):
logger.info(f"Classified as QUALITY SWEEP: {prompt[:100]}")
return True, "quality_sweep", "high_complexity"
# 6. Multiple file/component work
if any(keyword in prompt_lower for keyword in ["files", "tables", "classes", "modules", "components"]):
        if any(qty in prompt_lower for qty in ["all", "multiple", "several", "many"]):
logger.info(f"Classified as MULTI-COMPONENT WORK: {prompt[:100]}")
return True, "multi_component", "complex_task"
# 7. Implementation/feature requests (moderate complexity)
action_keywords = ["implement", "create", "build", "add", "fix", "update", "modify"]
if any(action in prompt_lower for action in action_keywords) and word_count > 10:
logger.info(f"Classified as MODERATE TASK: {prompt[:100]}")
return True, "implementation_task", "moderate_task"
# 8. Default to simple handling for very short prompts
if word_count < 5:
logger.info(f"Classified as SIMPLE (too short): {prompt[:100]}")
return False, "too_short", "simple_query"
# Default: moderate orchestration for safety
logger.info(f"Classified as DEFAULT MODERATE: {prompt[:100]}")
return True, "default_moderate", "moderate_task"
def generate_orchestrator_context(prompt: str, reason: str, complexity: str) -> str:
"""Generate the context to inject for orchestrator analysis."""
cost_estimate = estimate_complexity_tokens(prompt, complexity)
context = f"""
<orchestrator-analysis-required>
ORCHESTRATOR INTERCEPTION ACTIVE
BEFORE responding to the user, you MUST:
1. Launch the master-orchestrator agent using the Task tool with subagent_type="master-orchestrator"
2. Pass this user prompt to the orchestrator for complexity analysis:
USER PROMPT: "{prompt}"
3. Classification hint: {reason.replace('_', ' ').title()}
Estimated complexity: {complexity.replace('_', ' ').title()}
4. COST ESTIMATION (based on initial classification):
- Orchestrator analysis: ~{cost_estimate['orchestrator_analysis']:,} tokens
- Estimated agent execution: ~{cost_estimate['agent_execution']:,} tokens
- Total estimated: ~{cost_estimate['total_estimated']:,} tokens
- Approximate cost: ${cost_estimate['cost_usd']:.3f} USD
Note: Actual costs may vary based on orchestrator's final strategy.
5. The orchestrator will:
- Assess complexity (Simple/Moderate/High)
- Determine optimal execution strategy (direct tools vs single agent vs multi-agent)
- Recommend agent count and decomposition (if multi-agent)
- Provide detailed execution plan with time estimates
- Refine cost estimates based on strategy
6. Present the orchestrator's plan to the user with these options:
┌─────────────────────────────────────────┐
│ [1] Execute Plan │
│ → Proceed with orchestrator's │
│ recommended approach │
│ │
│ [2] Modify Plan │
│ → User provides feedback to adjust │
│ strategy (agent count, approach) │
│ │
│ [3] Skip Orchestration │
│ → Handle directly without │
│ multi-agent coordination │
└─────────────────────────────────────────┘
7. Only after user approval, execute according to the chosen approach.
CRITICAL RULES:
- Do NOT start any work until orchestrator has analyzed
- Do NOT proceed without user approval of the plan
- Present cost estimates clearly in the plan
- If user chooses [3], handle task directly without orchestrator
- Log decision and execution to hook logs
</orchestrator-analysis-required>
"""
return context
def generate_simple_context(prompt: str) -> str:
"""Generate context for simple queries that don't need orchestration."""
return f"""
<simple-query-detected>
This prompt has been classified as a simple informational query.
Handle directly without orchestration overhead.
Query: "{prompt}"
</simple-query-detected>
"""
def main():
try:
# Read hook input
hook_input = json.loads(sys.stdin.read())
user_prompt = hook_input.get("prompt", "")
session_id = hook_input.get("session_id", "unknown")
cwd = hook_input.get("cwd", "unknown")
logger.info("=" * 80)
logger.info(f"Hook triggered - Session: {session_id}")
logger.info(f"CWD: {cwd}")
logger.info(f"Prompt: {user_prompt}")
# Analyze prompt
needs_orchestration, reason, complexity = should_orchestrate(user_prompt)
# Log decision
logger.info(f"Decision: {'ORCHESTRATE' if needs_orchestration else 'SKIP'}")
logger.info(f"Reason: {reason}")
logger.info(f"Complexity: {complexity}")
# Generate appropriate context
if needs_orchestration:
additional_context = generate_orchestrator_context(user_prompt, reason, complexity)
cost_estimate = estimate_complexity_tokens(user_prompt, complexity)
logger.info(f"Cost estimate: ${cost_estimate['cost_usd']:.3f} USD (~{cost_estimate['total_estimated']:,} tokens)")
else:
additional_context = generate_simple_context(user_prompt)
logger.info("No orchestration needed - simple query")
# Return JSON response
response = {
"hookSpecificOutput": {
"hookEventName": "UserPromptSubmit",
"additionalContext": additional_context
}
}
logger.info("Hook completed successfully")
logger.info("=" * 80)
print(json.dumps(response))
sys.exit(0)
except Exception as e:
logger.error(f"Hook error: {e}")
logger.exception("Full traceback:")
# On error, don't block - allow prompt through without modification
response = {"hookSpecificOutput": {"hookEventName": "UserPromptSubmit", "additionalContext": ""}}
print(json.dumps(response))
sys.exit(0)
if __name__ == "__main__":
main()

hooks/skill-activation-prompt.py Executable file

@@ -0,0 +1,131 @@
#!/usr/bin/env python3
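"""
Skill Activation Hook for Claude Code
Matches the user prompt against keyword and intent triggers in skill-rules.json
and injects a skill-activation checklist into the conversation context.
"""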
import os
import sys
import json
import re
from pathlib import Path
from typing import Dict, List, Literal, Optional, TypedDict
class PromptTriggers(TypedDict, total=False):
keywords: List[str]
intentPatterns: List[str]
class SkillRule(TypedDict):
type: Literal["guardrail", "domain"]
enforcement: Literal["block", "suggest", "warn"]
priority: Literal["critical", "high", "medium", "low"]
promptTriggers: Optional[PromptTriggers]
class SkillRules(TypedDict):
version: str
skills: Dict[str, SkillRule]
class HookInput(TypedDict):
session_id: str
transcript_path: str
cwd: str
permission_mode: str
prompt: str
class MatchedSkill(TypedDict):
name: str
matchType: Literal["keyword", "intent"]
config: SkillRule
def main() -> None:
try:
input_data = sys.stdin.read()
data: HookInput = json.loads(input_data)
prompt = data["prompt"].lower()
project_dir = (
data.get("cwd") or os.environ.get("CLAUDE_PROJECT_DIR") or os.getcwd()
)
rules_path = Path(project_dir) / ".claude" / "skills" / "skill-rules.json"
with open(rules_path, "r", encoding="utf-8") as f:
rules: SkillRules = json.load(f)
matched_skills: List[MatchedSkill] = []
for skill_name, config in rules["skills"].items():
triggers = config.get("promptTriggers")
if not triggers:
continue
keywords = triggers.get("keywords", [])
if keywords:
keyword_match = any(kw.lower() in prompt for kw in keywords)
if keyword_match:
matched_skills.append(
{"name": skill_name, "matchType": "keyword", "config": config}
)
continue
intent_patterns = triggers.get("intentPatterns", [])
if intent_patterns:
intent_match = any(
re.search(pattern, prompt, re.IGNORECASE)
for pattern in intent_patterns
)
if intent_match:
matched_skills.append(
{"name": skill_name, "matchType": "intent", "config": config}
)
additional_context = ""
if matched_skills:
output = "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n"
output += "🎯 SKILL ACTIVATION CHECK\n"
output += "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n\n"
critical = [
s for s in matched_skills if s["config"]["priority"] == "critical"
]
high = [s for s in matched_skills if s["config"]["priority"] == "high"]
medium = [s for s in matched_skills if s["config"]["priority"] == "medium"]
low = [s for s in matched_skills if s["config"]["priority"] == "low"]
if critical:
output += "⚠️ CRITICAL SKILLS (REQUIRED):\n"
for s in critical:
output += f"{s['name']}\n"
output += "\n"
if high:
output += "📚 RECOMMENDED SKILLS:\n"
for s in high:
output += f"{s['name']}\n"
output += "\n"
if medium:
output += "💡 SUGGESTED SKILLS:\n"
for s in medium:
output += f"{s['name']}\n"
output += "\n"
if low:
output += "📌 OPTIONAL SKILLS:\n"
for s in low:
output += f"{s['name']}\n"
output += "\n"
output += "ACTION: Use Skill tool BEFORE responding\n"
output += "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\n"
additional_context = output
response = {
"hookSpecificOutput": {
"hookEventName": "UserPromptSubmit",
"additionalContext": additional_context,
}
}
print(json.dumps(response))
sys.exit(0)
except Exception as err:
print(f"Error in skill-activation-prompt hook: {err}", file=sys.stderr)
response = {
"hookSpecificOutput": {
"hookEventName": "UserPromptSubmit",
"additionalContext": "",
}
}
print(json.dumps(response))
sys.exit(0)
if __name__ == "__main__":
    main()