Initial commit

Zhongwei Li committed 2025-11-30 08:51:34 +08:00
commit acde81dcfe
59 changed files with 22282 additions and 0 deletions

hooks/README.md Normal file
@@ -0,0 +1,200 @@
# PRISM Workflow Hooks
Python-based Claude Code hooks that enforce story file updates throughout the core-development lifecycle workflow.
## Overview
These hooks ensure:
1. **Story Context is Established**: All workflow commands work on the correct story file
2. **Story Files are Updated**: Required sections are present based on workflow phase
3. **Workflow Integrity**: Steps execute in proper order with proper validation
## Hook Files
### Python Scripts
| Hook | Type | Purpose | Blocks? |
|------|------|---------|---------|
| `enforce-story-context.py` | PreToolUse | Ensure workflow commands have active story | ✅ Yes |
| `track-current-story.py` | PostToolUse | Capture story file as current context | ❌ No |
| `validate-story-updates.py` | PostToolUse | Validate story file updates | ❌ No (warns) |
| `validate-required-sections.py` | PostToolUse | Verify all required PRISM sections | ✅ Yes (critical errors) |
### Configuration
`hooks.json` - Hook event configuration for Claude Code
## How It Works
### Story Context Flow
```
1. *draft command creates story in docs/stories/
2. track-current-story.py captures path → .prism-current-story.txt
3. All workflow commands check enforce-story-context.py
4. Commands blocked if no active story ❌
OR
5. Commands proceed with story context ✅
```
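The check behind steps 3–5 can be sketched in a few lines of Python; `resolve_current_story` is a hypothetical helper for illustration, not one of the shipped hooks:

```python
from pathlib import Path

def resolve_current_story(marker=".prism-current-story.txt"):
    """Return the active story path, or None when context is missing or stale."""
    marker_path = Path(marker)
    if not marker_path.exists():
        return None  # no *draft has run yet, so workflow commands get blocked
    story = marker_path.read_text().strip()
    # A stale reference (the story file was deleted) also invalidates context
    return story if Path(story).exists() else None
```

A `None` result corresponds to the blocked branch (step 4); any path result lets the command proceed with story context (step 5).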
### Validation Flow
```
Story file Edit/Write
    ↓
validate-story-updates.py
  - Warns if editing non-current story
  - Checks for required base sections
    ↓
validate-required-sections.py
  - Comprehensive section validation
  - Blocks if critical sections missing
```
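Both validators boil down to a membership test over heading strings; a minimal sketch, with the heading list mirroring the base sections these hooks require:

```python
REQUIRED_BASE_SECTIONS = [
    "## Story Description",
    "## Acceptance Criteria",
    "## Tasks",
    "## PSP Estimation Tracking",
]

def missing_sections(story_text, required=REQUIRED_BASE_SECTIONS):
    """Return the required headings absent from a story file's text."""
    return [s for s in required if s not in story_text]
```

`validate-story-updates.py` only warns on a non-empty result, while `validate-required-sections.py` treats missing base sections as blocking errors.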
## Generated Files
### `.prism-current-story.txt`
Contains the path to the currently active story file.
**Example**: `docs/stories/platform-1.auth-improvements-2.md`
**Created by**: `track-current-story.py`
### `.prism-workflow.log`
Audit log of all workflow events.
**Format**: `TIMESTAMP | EVENT_TYPE | DETAILS`
**Example**:
```
2025-10-24T15:30:45Z | STORY_ACTIVE | docs/stories/epic-1.story-2.md
2025-10-24T15:31:12Z | COMMAND | develop-story | docs/stories/epic-1.story-2.md
2025-10-24T15:32:08Z | STORY_UPDATED | docs/stories/epic-1.story-2.md
2025-10-24T15:32:09Z | VALIDATION | PASS | docs/stories/epic-1.story-2.md | In Progress
```
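Because every event is a pipe-delimited line in this shape, the log is easy to post-process; a minimal parser sketch (not part of the hook suite):

```python
def parse_log_line(line):
    """Split 'TIMESTAMP | EVENT_TYPE | DETAILS...' into its fields."""
    parts = [p.strip() for p in line.strip().split(" | ")]
    if len(parts) < 2:
        return None  # malformed line
    return {"timestamp": parts[0], "event": parts[1], "details": parts[2:]}
```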
## Workflow Integration
### Core Development Cycle Steps
1. **draft_story** (`*draft`)
- Creates story file in `docs/stories/`
- **Hook**: `track-current-story.py` captures file path
- **Result**: Story context established
2. **risk_assessment** (`*risk {story}`)
- **Hook**: `enforce-story-context.py` verifies story exists
3. **test_design** (`*design {story}`)
- **Hook**: `enforce-story-context.py` verifies story exists
4. **validate_story** (`*validate-story-draft {story}`)
- **Hook**: `enforce-story-context.py` verifies story exists
5. **implement_tasks** (`*develop-story`)
- **Hook**: `enforce-story-context.py` verifies story exists
- **Hook**: `validate-story-updates.py` validates Dev Agent Record
- **Hook**: `validate-required-sections.py` ensures required sections
6. **qa_review** (`*review {story}`)
- **Hook**: `enforce-story-context.py` verifies story exists
- **Hook**: `validate-story-updates.py` validates QA Results section
7. **address_review_issues** (`*review-qa`)
- **Hook**: `enforce-story-context.py` verifies story exists
8. **update_gate** (`*gate {story}`)
- **Hook**: `enforce-story-context.py` verifies story exists
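The per-step checks above all answer one question: does this command need an active story? That lookup can be written as a table; the mapping below restates the steps above as an illustration, not the shipped `enforce-story-context.py` (which uses an equivalent if/elif chain):

```python
STORY_COMMANDS = {
    # Trailing spaces distinguish e.g. '*review ' from '*review-qa'
    "*develop-story": "develop-story",
    "*review ": "review",
    "*risk ": "risk-profile",
    "*design ": "test-design",
    "*validate-story-draft ": "validate-story-draft",
    "*gate ": "gate",
    "*review-qa": "review-qa",
}

def story_command_name(command):
    """Return the canonical command name if it needs a story, else None."""
    for marker, name in STORY_COMMANDS.items():
        if marker in command:
            return name
    return None
```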
## Error Messages
### No Active Story
```
❌ ERROR: Command 'develop-story' requires an active story
No current story found in workflow context
REQUIRED: Draft a story first using the core-development-cycle workflow:
1. Run: *planning-review (optional)
2. Run: *draft
The draft command will create a story file and establish story context.
```
### Missing Required Sections
```
❌ VALIDATION FAILED: Story file has critical errors
ERROR: Missing required section for In Progress status: ## Dev Agent Record
Story file: docs/stories/epic-1.story-2.md
Status: In Progress
REQUIRED: Fix these errors before proceeding with workflow
```
## Dependencies
- **Python 3.6+**: All hooks are Python scripts
- **json**: Standard library (built-in)
- **pathlib**: Standard library (built-in)
- **re**: Standard library (built-in)
No external packages required!
## Installation
The hooks are automatically loaded by Claude Code from the `hooks/` directory when the plugin is installed.
## Troubleshooting
### Hook Not Running
**Check**:
1. `hooks.json` is valid JSON
2. Python 3 is in PATH: `python --version`
3. The `matcher` pattern in `hooks.json` matches the tool being invoked
### Hook Blocking Unexpectedly
**Debug**:
1. Check `.prism-workflow.log` for error messages
2. Run hook manually:
```bash
echo '{"tool_input":{"command":"*develop-story"}}' | python hooks/enforce-story-context.py
```
3. Check if `.prism-current-story.txt` exists and contains valid path
### Story Context Lost
**Fix**:
1. Verify story file exists in `docs/stories/`
2. Manually set current story:
```bash
echo "docs/stories/your-story.md" > .prism-current-story.txt
```
3. Or run `*draft` to create new story
## Version
**PRISM Hook System Version**: 1.0.0
**Compatible with**:
- PRISM Core Development Cycle: v1.3.0+
- Claude Code: Latest (hooks feature released June 2025)
## Support
For issues or questions:
1. Check `.prism-workflow.log` for detailed event history
2. Review hook output in Claude Code console
3. File issue at PRISM repository
---
**Last Updated**: 2025-10-24


@@ -0,0 +1,183 @@
#!/usr/bin/env python3
"""
PRISM Context Memory: Git Commit Capture Hook (Obsidian)
Automatically captures context from git commits.
Invoked by PostToolUse:Bash hook when git commit is detected.
Uses Obsidian markdown storage.
"""
import sys
import io
import os
import json
import subprocess
from pathlib import Path
from datetime import datetime
# Fix Windows console encoding for emoji support
if sys.stdout.encoding != 'utf-8':
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8', errors='replace')
sys.stderr = io.TextIOWrapper(sys.stderr.buffer, encoding='utf-8', errors='replace')
# Add utils to path
PRISM_ROOT = Path(__file__).parent.parent
sys.path.insert(0, str(PRISM_ROOT / "skills" / "context-memory" / "utils"))
try:
from storage_obsidian import store_git_commit, get_vault_path
except ImportError:
# Memory system not initialized, skip silently
sys.exit(0)
def is_git_commit_command(command: str) -> bool:
"""Check if bash command is a git commit."""
if not command:
return False
# Normalize command
cmd = command.strip().lower()
# Check for git commit
    return (
        cmd.startswith("git commit") or
        ("git add" in cmd and "git commit" in cmd)
    )
def get_latest_commit_info():
"""Get info about the latest commit."""
try:
# Get commit hash
hash_result = subprocess.run(
["git", "rev-parse", "HEAD"],
capture_output=True,
text=True,
check=True
)
commit_hash = hash_result.stdout.strip()
# Get commit message
msg_result = subprocess.run(
["git", "log", "-1", "--pretty=%B"],
capture_output=True,
text=True,
check=True
)
commit_message = msg_result.stdout.strip()
# Get author
author_result = subprocess.run(
["git", "log", "-1", "--pretty=%an"],
capture_output=True,
text=True,
check=True
)
author = author_result.stdout.strip()
# Get date
date_result = subprocess.run(
["git", "log", "-1", "--pretty=%aI"],
capture_output=True,
text=True,
check=True
)
date = date_result.stdout.strip()
# Get stats
stats_result = subprocess.run(
["git", "show", "--shortstat", "--format=", commit_hash],
capture_output=True,
text=True,
check=True
)
stats_output = stats_result.stdout.strip()
# Parse stats
files_changed = 0
insertions = 0
deletions = 0
if stats_output:
# Example: " 3 files changed, 45 insertions(+), 12 deletions(-)"
parts = stats_output.split(',')
for part in parts:
part = part.strip()
if 'file' in part:
files_changed = int(part.split()[0])
elif 'insertion' in part:
insertions = int(part.split()[0])
elif 'deletion' in part:
deletions = int(part.split()[0])
return {
'hash': commit_hash,
'message': commit_message,
'author': author,
'date': date,
'files_changed': files_changed,
'insertions': insertions,
'deletions': deletions
}
except subprocess.CalledProcessError:
return None
def main():
"""
Capture commit context from hook invocation.
Expected environment:
- TOOL_NAME: 'Bash'
- TOOL_PARAMS_command: The bash command executed
"""
# Check if memory system enabled
if os.environ.get("PRISM_MEMORY_AUTO_CAPTURE", "true").lower() != "true":
sys.exit(0)
tool_name = os.environ.get("TOOL_NAME", "")
command = os.environ.get("TOOL_PARAMS_command", "")
# Check if this is a git commit
if tool_name != "Bash" or not is_git_commit_command(command):
sys.exit(0)
# Check if vault exists
try:
vault = get_vault_path()
if not vault.exists():
# Vault not initialized, skip
sys.exit(0)
except Exception:
sys.exit(0)
# Get commit info
commit_info = get_latest_commit_info()
if not commit_info:
sys.exit(0)
# Store as markdown
try:
store_git_commit(
commit_hash=commit_info['hash'],
author=commit_info['author'],
date=commit_info['date'],
message=commit_info['message'],
files_changed=commit_info['files_changed'],
insertions=commit_info['insertions'],
deletions=commit_info['deletions']
)
except Exception as e:
# Log but don't block
error_log = PRISM_ROOT / ".prism-memory-log.txt"
with open(error_log, 'a') as f:
f.write(f"[Commit] Error capturing {commit_info['hash']}: {e}\n")
sys.exit(0)
if __name__ == "__main__":
main()


@@ -0,0 +1,192 @@
#!/usr/bin/env python3
"""
PRISM Context Memory: Git Commit Capture Hook
Automatically captures context from git commits.
Invoked by PostToolUse:Bash hook when git commit is detected.
"""
import sys
import io
import os
import json
import subprocess
from pathlib import Path
# Fix Windows console encoding for emoji support
if sys.stdout.encoding != 'utf-8':
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8', errors='replace')
sys.stderr = io.TextIOWrapper(sys.stderr.buffer, encoding='utf-8', errors='replace')
# Add utils to path
PRISM_ROOT = Path(__file__).parent.parent
sys.path.insert(0, str(PRISM_ROOT / "skills" / "context-memory" / "utils"))
try:
from memory_ops import get_db_connection
except ImportError:
# Memory system not initialized, skip silently
sys.exit(0)
def is_git_commit_command(command: str) -> bool:
"""Check if bash command is a git commit."""
if not command:
return False
# Normalize command
cmd = command.strip().lower()
# Check for git commit
    return (
        cmd.startswith("git commit") or
        ("git add" in cmd and "git commit" in cmd)
    )
def get_latest_commit_info():
"""Get info about the latest commit."""
try:
# Get commit hash
hash_result = subprocess.run(
["git", "rev-parse", "HEAD"],
capture_output=True,
text=True,
check=True
)
commit_hash = hash_result.stdout.strip()
# Get commit message
msg_result = subprocess.run(
["git", "log", "-1", "--pretty=%B"],
capture_output=True,
text=True,
check=True
)
commit_message = msg_result.stdout.strip()
# Get author
author_result = subprocess.run(
["git", "log", "-1", "--pretty=%an"],
capture_output=True,
text=True,
check=True
)
author = author_result.stdout.strip()
# Get diff
diff_result = subprocess.run(
["git", "show", "--format=", commit_hash],
capture_output=True,
text=True,
check=True
)
diff = diff_result.stdout
# Get files changed
files_result = subprocess.run(
["git", "show", "--name-only", "--format=", commit_hash],
capture_output=True,
text=True,
check=True
)
files = [f for f in files_result.stdout.strip().split('\n') if f]
return {
'hash': commit_hash,
'message': commit_message,
'author': author,
'diff': diff,
'files': files
}
except subprocess.CalledProcessError:
return None
def store_commit_context(commit_info):
"""
Store commit context in database.
NOTE: Stores raw commit data without AI analysis.
Agent can analyze commits later if needed using recall functions.
"""
conn = get_db_connection()
cursor = conn.cursor()
try:
# Store raw commit data (no AI analysis in hooks)
# Use commit message as summary, set flags to NULL for later analysis
cursor.execute("""
INSERT INTO git_context (
commit_hash, commit_message, files_changed, summary,
refactoring, bug_fix, feature, author, commit_date
)
VALUES (?, ?, ?, ?, NULL, NULL, NULL, ?, CURRENT_TIMESTAMP)
""", (
commit_info['hash'],
commit_info['message'],
json.dumps(commit_info['files']),
commit_info['message'], # Use commit message as summary
commit_info['author']
))
conn.commit()
# Hooks should be silent on success - commit captured successfully
except Exception as e:
# Log error but don't block
error_log = PRISM_ROOT / ".prism-memory-log.txt"
with open(error_log, 'a') as f:
f.write(f"[Commit] Error capturing {commit_info['hash']}: {e}\n")
finally:
conn.close()
def main():
"""
Capture commit context from hook invocation.
Expected environment:
- TOOL_NAME: 'Bash'
- TOOL_PARAMS_command: The bash command executed
"""
# Check if memory system enabled
if os.environ.get("PRISM_MEMORY_AUTO_CAPTURE", "true").lower() != "true":
sys.exit(0)
tool_name = os.environ.get("TOOL_NAME", "")
command = os.environ.get("TOOL_PARAMS_command", "")
# Check if this is a git commit
if tool_name != "Bash" or not is_git_commit_command(command):
sys.exit(0)
# Check if database exists
try:
get_db_connection()
except SystemExit:
# Database not initialized, skip
sys.exit(0)
# Get commit info
commit_info = get_latest_commit_info()
if not commit_info:
sys.exit(0)
# Store in database
try:
store_commit_context(commit_info)
except Exception as e:
# Log but don't block
error_log = PRISM_ROOT / ".prism-memory-log.txt"
with open(error_log, 'a') as f:
f.write(f"[Commit] Error in main: {e}\n")
sys.exit(0)
if __name__ == "__main__":
main()


@@ -0,0 +1,117 @@
#!/usr/bin/env python3
"""
PRISM Context Memory: File Change Capture Hook (Obsidian)
Automatically captures context when files are edited or created.
Invoked by PostToolUse:Edit and PostToolUse:Write hooks.
Uses Obsidian markdown storage.
"""
import sys
import io
import os
import json
from pathlib import Path
# Fix Windows console encoding for emoji support
if sys.stdout.encoding != 'utf-8':
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8', errors='replace')
sys.stderr = io.TextIOWrapper(sys.stderr.buffer, encoding='utf-8', errors='replace')
# Add utils to path
PRISM_ROOT = Path(__file__).parent.parent
sys.path.insert(0, str(PRISM_ROOT / "skills" / "context-memory" / "utils"))
try:
from storage_obsidian import remember_file, get_vault_path
except ImportError:
# Memory system not initialized, skip silently
sys.exit(0)
def should_capture_file(file_path: str) -> bool:
"""Check if file should be captured in memory."""
# Skip if memory system not enabled
if os.environ.get("PRISM_MEMORY_AUTO_CAPTURE", "true").lower() != "true":
return False
# Skip certain file types
skip_extensions = [
'.md', '.txt', '.log', '.json', '.yaml', '.yml',
'.svg', '.png', '.jpg', '.jpeg', '.gif',
'.lock', '.sum', '.mod'
]
ext = os.path.splitext(file_path)[1].lower()
if ext in skip_extensions:
return False
# Skip certain directories
skip_dirs = [
'node_modules', '.git', 'dist', 'build', '__pycache__',
'.prism', 'vendor', 'target', 'PRISM-Memory'
]
path_parts = Path(file_path).parts
if any(skip_dir in path_parts for skip_dir in skip_dirs):
return False
# Only capture source code files
code_extensions = [
'.py', '.js', '.ts', '.jsx', '.tsx', '.rb', '.go',
'.rs', '.java', '.cs', '.cpp', '.c', '.h', '.hpp',
'.php', '.swift', '.kt'
]
return ext in code_extensions
def main():
"""
Capture file context from hook invocation.
Expected environment:
- TOOL_NAME: 'Edit' or 'Write'
- TOOL_PARAMS_file_path: Path to the file
"""
tool_name = os.environ.get("TOOL_NAME", "")
file_path = os.environ.get("TOOL_PARAMS_file_path", "")
if not file_path:
# Try alternative param names
file_path = os.environ.get("TOOL_PARAMS_path", "")
if not file_path:
sys.exit(0)
# Check if we should capture this file
if not should_capture_file(file_path):
sys.exit(0)
# Check if vault exists
try:
vault = get_vault_path()
if not vault.exists():
# Vault not initialized, skip
sys.exit(0)
except Exception:
sys.exit(0)
# Capture file context
try:
# Add note about how file was changed
note = f"Modified via {tool_name}" if tool_name else None
remember_file(file_path, note=note)
except Exception as e:
# Log error but don't block the workflow
error_log = PRISM_ROOT / ".prism-memory-log.txt"
with open(error_log, 'a') as f:
f.write(f"[{tool_name}] Error capturing {file_path}: {e}\n")
sys.exit(0)
if __name__ == "__main__":
main()


@@ -0,0 +1,114 @@
#!/usr/bin/env python3
"""
PRISM Context Memory: File Change Capture Hook
Automatically captures context when files are edited or created.
Invoked by PostToolUse:Edit and PostToolUse:Write hooks.
"""
import sys
import io
import os
import json
from pathlib import Path
# Fix Windows console encoding for emoji support
if sys.stdout.encoding != 'utf-8':
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8', errors='replace')
sys.stderr = io.TextIOWrapper(sys.stderr.buffer, encoding='utf-8', errors='replace')
# Add utils to path
PRISM_ROOT = Path(__file__).parent.parent
sys.path.insert(0, str(PRISM_ROOT / "skills" / "context-memory" / "utils"))
try:
from memory_ops import remember_file, get_db_connection
except ImportError:
# Memory system not initialized, skip silently
sys.exit(0)
def should_capture_file(file_path: str) -> bool:
"""Check if file should be captured in memory."""
# Skip if memory system not enabled
if os.environ.get("PRISM_MEMORY_AUTO_CAPTURE", "true").lower() != "true":
return False
# Skip certain file types
skip_extensions = [
'.md', '.txt', '.log', '.json', '.yaml', '.yml',
'.svg', '.png', '.jpg', '.jpeg', '.gif',
'.lock', '.sum', '.mod'
]
ext = os.path.splitext(file_path)[1].lower()
if ext in skip_extensions:
return False
# Skip certain directories
skip_dirs = [
'node_modules', '.git', 'dist', 'build', '__pycache__',
'.prism', 'vendor', 'target'
]
path_parts = Path(file_path).parts
if any(skip_dir in path_parts for skip_dir in skip_dirs):
return False
# Only capture source code files
code_extensions = [
'.py', '.js', '.ts', '.jsx', '.tsx', '.rb', '.go',
'.rs', '.java', '.cs', '.cpp', '.c', '.h', '.hpp',
'.php', '.swift', '.kt'
]
return ext in code_extensions
def main():
"""
Capture file context from hook invocation.
Expected environment:
- TOOL_NAME: 'Edit' or 'Write'
- TOOL_PARAMS_file_path: Path to the file
"""
tool_name = os.environ.get("TOOL_NAME", "")
file_path = os.environ.get("TOOL_PARAMS_file_path", "")
if not file_path:
# Try alternative param names
file_path = os.environ.get("TOOL_PARAMS_path", "")
if not file_path:
sys.exit(0)
# Check if we should capture this file
if not should_capture_file(file_path):
sys.exit(0)
# Check if database exists
try:
get_db_connection()
except SystemExit:
# Database not initialized, skip
sys.exit(0)
# Capture file context
try:
# Add note about how file was changed
note = f"Modified via {tool_name}" if tool_name else None
remember_file(file_path, note=note)
except Exception as e:
# Log error but don't block the workflow
error_log = PRISM_ROOT / ".prism-memory-log.txt"
with open(error_log, 'a') as f:
f.write(f"[{tool_name}] Error capturing {file_path}: {e}\n")
sys.exit(0)
if __name__ == "__main__":
main()


@@ -0,0 +1,121 @@
#!/usr/bin/env python3
"""
Post-Story Learning Consolidation Hook
Triggers after story completion to:
1. Review memories related to the story
2. Refresh decayed/low-confidence memories
3. Reinforce patterns and decisions that were used
4. Capture key learnings from the story
This ensures that coding knowledge doesn't decay - instead it gets
refreshed and updated as part of the learning cycle.
"""
import os
import sys
import json
from pathlib import Path
# Add skills directory to path
prism_root = Path(__file__).parent.parent
sys.path.insert(0, str(prism_root / "skills" / "context-memory" / "utils"))
try:
from storage_obsidian import consolidate_story_learnings, get_memories_needing_review
except ImportError as e:
print(f"[ERROR] Failed to import storage_obsidian: {e}")
sys.exit(0) # Don't fail the hook
def get_story_context():
"""Extract story context from environment or git."""
story_id = os.environ.get('PRISM_STORY_ID', '')
story_title = os.environ.get('PRISM_STORY_TITLE', '')
    # If not in env, try to read from .prism-current-story.txt.
    # track-current-story.py writes a plain file path, so accept either
    # a bare path or a JSON object with 'id'/'title' keys.
    story_file = prism_root / '.prism-current-story.txt'
    if not story_id and story_file.exists():
        try:
            content = story_file.read_text().strip()
            try:
                story_data = json.loads(content)
                story_id = story_data.get('id', '')
                story_title = story_data.get('title', '')
            except (json.JSONDecodeError, AttributeError):
                # Plain story path: use the filename stem as the story id
                story_id = Path(content).stem
        except Exception:
            pass
    return story_id, story_title
def get_changed_files():
"""Get list of files changed in recent commits."""
try:
import subprocess
result = subprocess.run(
['git', 'diff', '--name-only', 'HEAD~1..HEAD'],
capture_output=True,
text=True,
cwd=prism_root.parent
)
if result.returncode == 0:
return [f.strip() for f in result.stdout.split('\n') if f.strip()]
except Exception:
pass
return []
def main():
"""Run story learning consolidation."""
# Only run if story context is available
story_id, story_title = get_story_context()
if not story_id:
# No story context - skip consolidation
sys.exit(0)
print(f"\n=== Story Learning Consolidation ===")
print(f"Story: {story_id} - {story_title}")
# Get changed files
files_changed = get_changed_files()
# Run consolidation
try:
stats = consolidate_story_learnings(
story_id=story_id,
story_title=story_title,
files_changed=files_changed,
patterns_used=[], # TODO: Extract from story metadata
decisions_made=[], # TODO: Extract from story metadata
key_learnings=[] # TODO: Extract from story metadata
)
if stats:
print(f"\nConsolidation Results:")
print(f" Memories reviewed: {stats.get('memories_reviewed', 0)}")
print(f" Memories refreshed: {stats.get('memories_refreshed', 0)}")
print(f" Patterns reinforced: {stats.get('patterns_reinforced', 0)}")
print(f" Learnings captured: {stats.get('learnings_captured', 0)}")
# Show memories that need review
needs_review = get_memories_needing_review()
if needs_review:
print(f"\n⚠️ {len(needs_review)} memories need review:")
for memory in needs_review[:5]: # Show top 5
print(f" - {memory['title']} (confidence: {memory['confidence']:.2f})")
if len(needs_review) > 5:
print(f" ... and {len(needs_review) - 5} more")
except Exception as e:
print(f"[ERROR] Consolidation failed: {e}")
# Don't fail the hook
print()
if __name__ == "__main__":
main()


@@ -0,0 +1,93 @@
#!/usr/bin/env python3
"""
Enforce Story Context Hook
Purpose: Block workflow commands that require a story if no story is active
Trigger: PreToolUse on Bash commands (skill invocations)
Part of: PRISM Core Development Lifecycle
"""
import sys
import io
import json
import os
from datetime import datetime, timezone
from pathlib import Path
# Fix Windows console encoding for emoji support
if sys.stdout.encoding != 'utf-8':
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8', errors='replace')
sys.stderr = io.TextIOWrapper(sys.stderr.buffer, encoding='utf-8', errors='replace')
def main():
# Claude Code passes parameters via environment variables
# Not via stdin JSON
# Extract command from environment variables
command = os.environ.get('TOOL_PARAMS_command', '')
# Check if command is a PRISM skill command that requires a story context
requires_story = False
command_name = None
if '*develop-story' in command:
requires_story = True
command_name = 'develop-story'
elif '*review ' in command:
requires_story = True
command_name = 'review'
elif '*risk ' in command:
requires_story = True
command_name = 'risk-profile'
elif '*design ' in command:
requires_story = True
command_name = 'test-design'
elif '*validate-story-draft ' in command:
requires_story = True
command_name = 'validate-story-draft'
elif '*gate ' in command:
requires_story = True
command_name = 'gate'
elif '*review-qa' in command:
requires_story = True
command_name = 'review-qa'
if requires_story:
# Check if there's an active story
story_file_path = Path('.prism-current-story.txt')
if not story_file_path.exists():
print(f"❌ ERROR: Command '{command_name}' requires an active story", file=sys.stderr)
print("", file=sys.stderr)
print(" No current story found in workflow context", file=sys.stderr)
print("", file=sys.stderr)
print(" REQUIRED: Draft a story first using the core-development-cycle workflow:", file=sys.stderr)
print(" 1. Run: *planning-review (optional)", file=sys.stderr)
print(" 2. Run: *draft", file=sys.stderr)
print("", file=sys.stderr)
print(" The draft command will create a story file and establish story context.", file=sys.stderr)
sys.exit(2) # Block the command
story_file = story_file_path.read_text().strip()
# Verify story file exists
if not Path(story_file).exists():
print(f"❌ ERROR: Current story file not found: {story_file}", file=sys.stderr)
print("", file=sys.stderr)
print(" The story reference is stale or the file was deleted", file=sys.stderr)
print("", file=sys.stderr)
print(" REQUIRED: Create a new story:", file=sys.stderr)
print(" Run: *draft", file=sys.stderr)
sys.exit(2) # Block the command
# Log command with story context
timestamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
with open('.prism-workflow.log', 'a') as log:
log.write(f"{timestamp} | COMMAND | {command_name} | {story_file}\n")
# Hooks should be silent on success
# Success is indicated by exit code 0
sys.exit(0)
if __name__ == '__main__':
main()

hooks/hooks.json Normal file

@@ -0,0 +1,63 @@
{
"hooks": {
"PreToolUse": [
{
"matcher": "Bash",
"hooks": [
{
"type": "command",
"command": "python ${CLAUDE_PLUGIN_ROOT}/hooks/enforce-story-context.py",
"description": "Ensure workflow commands have required story context"
}
]
}
],
"PostToolUse": [
{
"matcher": "Write",
"hooks": [
{
"type": "command",
"command": "python ${CLAUDE_PLUGIN_ROOT}/hooks/track-current-story.py",
"description": "Track story file as current workflow context"
}
]
},
{
"matcher": "Edit",
"hooks": [
{
"type": "command",
"command": "python ${CLAUDE_PLUGIN_ROOT}/hooks/validate-story-updates.py",
"description": "Validate story file updates"
}
]
},
{
"matcher": "Edit|Write",
"hooks": [
{
"type": "command",
"command": "python ${CLAUDE_PLUGIN_ROOT}/hooks/validate-required-sections.py",
"description": "Verify all required PRISM sections exist"
},
{
"type": "command",
"command": "python ${CLAUDE_PLUGIN_ROOT}/hooks/capture-file-context-obsidian.py",
"description": "Capture file changes to Obsidian memory vault"
}
]
},
{
"matcher": "Bash",
"hooks": [
{
"type": "command",
"command": "python ${CLAUDE_PLUGIN_ROOT}/hooks/capture-commit-context-obsidian.py",
"description": "Capture git commit context to Obsidian memory vault"
}
]
}
]
}
}


@@ -0,0 +1,49 @@
#!/usr/bin/env python3
"""
Track Current Story Hook
Purpose: Capture the story file being worked on from draft_story step
Trigger: PostToolUse on Write operations
Part of: PRISM Core Development Lifecycle
"""
import sys
import io
import json
import os
import re
from datetime import datetime, timezone
from pathlib import Path
# Fix Windows console encoding for emoji support
if sys.stdout.encoding != 'utf-8':
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8', errors='replace')
sys.stderr = io.TextIOWrapper(sys.stderr.buffer, encoding='utf-8', errors='replace')
def main():
# Claude Code passes parameters via environment variables
# Not via stdin JSON
# Extract file path from environment variables
file_path = os.environ.get('TOOL_PARAMS_file_path', '')
# Check if this is a story file being created/updated
if re.match(r'^docs/stories/.*\.md$', file_path):
# Save as current story being worked on
with open('.prism-current-story.txt', 'w') as f:
f.write(file_path)
# Extract story filename
story_filename = Path(file_path).stem
# Log the story activation
timestamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
with open('.prism-workflow.log', 'a') as log:
log.write(f"{timestamp} | STORY_ACTIVE | {file_path}\n")
# Hooks should be silent on success
# Success is indicated by exit code 0
sys.exit(0)
if __name__ == '__main__':
main()


@@ -0,0 +1,126 @@
#!/usr/bin/env python3
"""
Validate Required Sections Hook
Purpose: Ensure story files have all required PRISM sections before workflow progression
Trigger: PostToolUse on Edit/Write to story files
Part of: PRISM Core Development Lifecycle
"""
import sys
import io
import json
import re
from datetime import datetime, timezone
from pathlib import Path
# Fix Windows console encoding for emoji support
if sys.stdout.encoding != 'utf-8':
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8', errors='replace')
sys.stderr = io.TextIOWrapper(sys.stderr.buffer, encoding='utf-8', errors='replace')
def main():
# Claude Code passes parameters via environment variables
# Not via stdin JSON
import os
# Extract file path from environment variables
file_path = os.environ.get('TOOL_PARAMS_file_path', '')
# Only validate story files
if not re.match(r'^docs/stories/.*\.md$', file_path):
sys.exit(0)
story_path = Path(file_path)
if not story_path.exists():
sys.exit(0)
# Read story content
story_content = story_path.read_text()
# Define required sections
required_base_sections = [
"## Story Description",
"## Acceptance Criteria",
"## Tasks",
"## PSP Estimation Tracking"
]
development_sections = [
"## Dev Agent Record"
]
# Get story status
status_match = re.search(r'^status:\s*(.+)$', story_content, re.MULTILINE)
status = status_match.group(1).strip() if status_match else "Draft"
validation_errors = []
validation_warnings = []
# Validate base sections (always required)
for section in required_base_sections:
if section not in story_content:
validation_errors.append(f"Missing required section: {section}")
# Validate development sections if story is in progress or later
if status in ["In Progress", "In-Progress", "Ready for Review", "Ready-for-Review", "Done", "Completed"]:
for section in development_sections:
if section not in story_content:
validation_errors.append(f"Missing required section for {status} status: {section}")
# Validate Dev Agent Record subsections
if "## Dev Agent Record" in story_content:
if "### Completion Notes" not in story_content:
validation_warnings.append("Dev Agent Record missing subsection: ### Completion Notes")
if "### File List" not in story_content:
validation_warnings.append("Dev Agent Record missing subsection: ### File List")
if "### Change Log" not in story_content:
validation_warnings.append("Dev Agent Record missing subsection: ### Change Log")
if "### Debug Log" not in story_content:
validation_warnings.append("Dev Agent Record missing subsection: ### Debug Log")
# Check for PSP tracking fields
if "estimated:" not in story_content:
validation_warnings.append("PSP Estimation Tracking missing 'estimated' field")
if status in ["In Progress", "In-Progress", "Ready for Review", "Ready-for-Review", "Done", "Completed"]:
if "started:" not in story_content:
validation_warnings.append("PSP Estimation Tracking missing 'started' timestamp")
if status in ["Ready for Review", "Ready-for-Review", "Done", "Completed"]:
if "completed:" not in story_content:
validation_warnings.append("PSP Estimation Tracking missing 'completed' timestamp")
# Report validation results
if validation_errors:
print("❌ VALIDATION FAILED: Story file has critical errors", file=sys.stderr)
print("", file=sys.stderr)
for error in validation_errors:
print(f" ERROR: {error}", file=sys.stderr)
print("", file=sys.stderr)
print(f" Story file: {file_path}", file=sys.stderr)
print(f" Status: {status}", file=sys.stderr)
print("", file=sys.stderr)
print(" REQUIRED: Fix these errors before proceeding with workflow", file=sys.stderr)
sys.exit(2) # Block operation
if validation_warnings:
print("⚠️ VALIDATION WARNINGS: Story file has minor issues", file=sys.stderr)
for warning in validation_warnings:
print(f" WARNING: {warning}", file=sys.stderr)
print(" These should be addressed but won't block workflow progression", file=sys.stderr)
# Hooks should be silent on success - no output for successful validation
# Log validation result
timestamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
result = "FAIL" if validation_errors else ("WARN" if validation_warnings else "PASS")
with open('.prism-workflow.log', 'a') as log:
log.write(f"{timestamp} | VALIDATION | {result} | {file_path} | {status}\n")
sys.exit(0)
if __name__ == '__main__':
main()


@@ -0,0 +1,97 @@
#!/usr/bin/env python3
"""
Validate Story Updates Hook
Purpose: Ensure all workflow steps update the current story file appropriately
Trigger: PostToolUse on Edit operations to story files
Part of: PRISM Core Development Lifecycle
"""
import sys
import io
import json
import re
from datetime import datetime, timezone
from pathlib import Path
# Fix Windows console encoding for emoji support
if sys.stdout.encoding != 'utf-8':
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8', errors='replace')
sys.stderr = io.TextIOWrapper(sys.stderr.buffer, encoding='utf-8', errors='replace')
def main():
# Claude Code passes parameters via environment variables
# Not via stdin JSON
import os
# Extract file path from environment variables
file_path = os.environ.get('TOOL_PARAMS_file_path', '')
# Only validate story files
if not re.match(r'^docs/stories/.*\.md$', file_path):
sys.exit(0)
# Verify this is the current story
story_file_path = Path('.prism-current-story.txt')
if story_file_path.exists():
current_story = story_file_path.read_text().strip()
if file_path != current_story:
print("⚠️ WARNING: Editing story file that is not the current story", file=sys.stderr)
print(f" Current: {current_story}", file=sys.stderr)
print(f" Editing: {file_path}", file=sys.stderr)
print(" HINT: Use *draft to set a new current story", file=sys.stderr)
# Check that story file exists
story_path = Path(file_path)
if not story_path.exists():
print(f"❌ ERROR: Story file not found: {file_path}", file=sys.stderr)
sys.exit(2)
# Log the story update
timestamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
with open('.prism-workflow.log', 'a') as log:
log.write(f"{timestamp} | STORY_UPDATED | {file_path}\n")
# Read story content
story_content = story_path.read_text()
# Validate required story sections exist
missing_sections = []
required_sections = [
"## Story Description",
"## Acceptance Criteria",
"## Tasks",
"## PSP Estimation Tracking"
]
for section in required_sections:
if section not in story_content:
missing_sections.append(section)
if missing_sections:
print("⚠️ WARNING: Story file missing required sections:", file=sys.stderr)
for section in missing_sections:
print(f" - {section}", file=sys.stderr)
print(" These sections are required by PRISM workflow", file=sys.stderr)
# Check for workflow-specific required sections
if "## Dev Agent Record" in story_content:
if "### Completion Notes" not in story_content:
print("⚠️ WARNING: Dev Agent Record missing 'Completion Notes' subsection", file=sys.stderr)
if "### File List" not in story_content:
print("⚠️ WARNING: Dev Agent Record missing 'File List' subsection", file=sys.stderr)
# If QA Results exists, validate it has content
if "## QA Results" in story_content:
qa_section = re.search(r'## QA Results.*?(?=^##|\Z)', story_content, re.MULTILINE | re.DOTALL)
if qa_section and len(qa_section.group(0).split('\n')) < 5:
print("⚠️ WARNING: QA Results section appears empty or incomplete", file=sys.stderr)
# Hooks should be silent on success - no output for successful validation
sys.exit(0)
if __name__ == '__main__':
main()