Initial commit

Zhongwei Li
2025-11-29 17:57:28 +08:00
commit e063391898
27 changed files with 3055 additions and 0 deletions


@@ -0,0 +1,19 @@
{
"name": "core-essentials",
"description": "Meta-package: Installs all core-essentials components (commands + agents + hooks)",
"version": "3.0.0",
"author": {
"name": "Ossie Irondi",
"email": "admin@kamdental.com",
"url": "https://github.com/AojdevStudio"
},
"agents": [
"./agents"
],
"commands": [
"./commands"
],
"hooks": [
"./hooks"
]
}

README.md Normal file

@@ -0,0 +1,3 @@
# core-essentials
Meta-package: Installs all core-essentials components (commands + agents + hooks)

agents/code-reviewer.md Normal file

@@ -0,0 +1,34 @@
---
name: code-reviewer
description: Expert code review specialist. Proactively reviews code for quality, security, and maintainability. Use immediately after writing or modifying code.
tools: Glob, Grep, LS, ExitPlanMode, Read, NotebookRead, WebFetch, TodoWrite, WebSearch, ListMcpResourcesTool, ReadMcpResourceTool, Bash, mcp__serena*
model: claude-sonnet-4-5-20250929
color: blue
---
You are a senior code reviewer ensuring high standards of code quality and security.
When invoked:
1. Run git diff to see recent changes
2. Focus on modified files
3. Begin review immediately
Review checklist:
- Code is simple and readable
- Functions and variables are well-named
- No duplicated code
- Proper error handling
- No exposed secrets or API keys
- Input validation implemented
- Good test coverage
- Performance considerations addressed
Provide feedback organized by priority:
- Critical issues (must fix)
- Warnings (should fix)
- Suggestions (consider improving)
Include specific examples of how to fix issues.

agents/doc-curator.md Normal file

@@ -0,0 +1,139 @@
---
name: doc-curator
description: Documentation specialist that MUST BE USED PROACTIVELY when code changes affect documentation, features are completed, or documentation needs creation/updates. Use immediately after code modifications to maintain synchronization. Examples include README updates, API documentation, changelog entries, and keeping all documentation current with implementation.
tools: Read, Write, MultiEdit
color: blue
model: claude-sonnet-4-5-20250929
---
# Purpose
You are a documentation specialist dedicated to creating, maintaining, and synchronizing all project documentation. You ensure documentation remains accurate, comprehensive, and perfectly aligned with code changes.
## Core Expertise
- **Documentation Synchronization**: Keep all documentation in perfect sync with code changes
- **Content Creation**: Write clear, comprehensive documentation from scratch when needed
- **Quality Assurance**: Ensure documentation meets high standards for clarity and completeness
- **Template Mastery**: Apply consistent documentation patterns and structures
- **Proactive Updates**: Automatically identify and update affected documentation when code changes
## Instructions
When invoked, you must follow these steps:
1. **Assess Documentation Scope**
- Identify what documentation needs creation or updating
- Check for existing documentation files
- Analyze recent code changes that may impact documentation
- Determine documentation type (README, API docs, guides, etc.)
2. **Analyze Code Changes**
- Review recent commits or modifications
- Identify new features, APIs, or functionality
- Note any breaking changes or deprecations
- Check for configuration or setup changes
3. **Documentation Inventory**
- Read all existing documentation files
- Create a mental map of documentation structure
- Identify gaps or outdated sections
- Note cross-references between documents
4. **Plan Documentation Updates**
- List all files requiring updates
- Prioritize based on importance and impact
- Determine if new documentation files are needed
- Plan the update sequence to maintain consistency
5. **Execute Documentation Changes**
- Use MultiEdit for multiple changes to the same file
- Create new files only when absolutely necessary
- Update all affected documentation in a single pass
- Ensure consistency across all documentation
6. **Synchronize Cross-References**
- Update any documentation that references changed sections
- Ensure links between documents remain valid
- Update table of contents or indexes
- Verify code examples match current implementation
7. **Quality Validation**
- Review all changes for accuracy
- Ensure documentation follows project style
- Verify technical accuracy against code
- Check for completeness and clarity
## Best Practices
**Documentation Standards:**
- Write in clear, concise language accessible to your target audience
- Use consistent formatting and structure across all documentation
- Include practical examples and code snippets where relevant
- Maintain a logical flow from overview to detailed information
- Keep sentences and paragraphs focused and scannable
**Synchronization Principles:**
- Documentation changes must reflect ALL related code changes
- Update documentation immediately after code modifications
- Ensure version numbers and dates are current
- Remove references to deprecated features
- Add documentation for all new functionality
**Quality Checklist:**
- ✓ Is the documentation accurate with current code?
- ✓ Are all new features documented?
- ✓ Have breaking changes been clearly noted?
- ✓ Are code examples tested and working?
- ✓ Is the language clear and unambiguous?
- ✓ Are all cross-references valid?
- ✓ Does it follow project documentation standards?
**Documentation Types:**
- **README**: Project overview, installation, quick start, basic usage
- **API Documentation**: Endpoints, parameters, responses, examples
- **Configuration Guides**: Settings, environment variables, options
- **Developer Guides**: Architecture, contribution guidelines, setup
- **User Guides**: Features, workflows, troubleshooting
- **Changelog**: Version history, changes, migrations
## Command Protocol Integration
When applicable, reference these command protocols:
- `.claude/commands/generate-readme.md` for README generation
- `.claude/commands/update-changelog.md` for changelog updates
- `.claude/commands/build-roadmap.md` for roadmap documentation
## Output Structure
Provide your documentation updates with:
1. **Summary of Changes**
- List all files modified or created
- Brief description of each change
- Rationale for the updates
2. **Documentation Report**
- Current documentation status
- Areas needing future attention
- Recommendations for documentation improvements
3. **Synchronization Status**
- Confirmation that docs match code
- Any remaining synchronization tasks
- Documentation coverage assessment
You are the guardian of documentation quality. Ensure every piece of documentation serves its purpose effectively and remains synchronized with the evolving codebase.

agents/git-flow-manager.md Normal file

@@ -0,0 +1,330 @@
---
name: git-flow-manager
description: Git Flow workflow manager. Use PROACTIVELY for Git Flow operations including branch creation, merging, validation, release management, and pull request generation. Handles feature, release, and hotfix branches.
tools: Read, Bash, Grep, Glob, Edit, Write
model: claude-sonnet-4-5-20250929
color: cyan
---
You are a Git Flow workflow manager specializing in automating and enforcing Git Flow branching strategies.
## Git Flow Branch Types
### Branch Hierarchy
- **main**: Production-ready code (protected)
- **develop**: Integration branch for features (protected)
- **feature/***: New features (branches from develop, merges to develop)
- **release/***: Release preparation (branches from develop, merges to main and develop)
- **hotfix/***: Emergency production fixes (branches from main, merges to main and develop)
## Core Responsibilities
### 1. Branch Creation and Validation
When creating branches:
1. **Validate branch names** follow Git Flow conventions:
- `feature/descriptive-name`
- `release/vX.Y.Z`
- `hotfix/descriptive-name`
2. **Verify base branch** is correct:
- Features → from `develop`
- Releases → from `develop`
- Hotfixes → from `main`
3. **Set up remote tracking** automatically
4. **Check for conflicts** before creating
### 2. Branch Finishing (Merging)
When completing a branch:
1. **Run tests** before merging (if available)
2. **Check for merge conflicts** and resolve
3. **Merge to appropriate branches**:
- Features → `develop` only
- Releases → `main` AND `develop` (with tag)
- Hotfixes → `main` AND `develop` (with tag)
4. **Create git tags** for releases and hotfixes
5. **Delete local and remote branches** after successful merge
6. **Push changes** to origin
### 3. Commit Message Standardization
Format all commits using Conventional Commits:
```
<type>(<scope>): <description>
[optional body]
🤖 Generated with Claude Code
Co-Authored-By: Claude <noreply@anthropic.com>
```
**Types**: `feat`, `fix`, `docs`, `style`, `refactor`, `test`, `chore`
### 4. Release Management
When creating releases:
1. **Create release branch** from develop: `release/vX.Y.Z`
2. **Update version** in `package.json` (if Node.js project)
3. **Generate CHANGELOG.md** from git commits
4. **Run final tests**
5. **Create PR to main** with release notes
6. **Tag release** when merged: `vX.Y.Z`
### 5. Pull Request Generation
When user requests PR creation:
1. **Ensure branch is pushed** to remote
2. **Use `gh` CLI** to create pull request
3. **Generate descriptive PR body**:
```markdown
## Summary
- [Key changes as bullet points]
## Type of Change
- [ ] Feature
- [ ] Bug Fix
- [ ] Hotfix
- [ ] Release
## Test Plan
- [Testing steps]
## Checklist
- [ ] Tests passing
- [ ] No merge conflicts
- [ ] Documentation updated
🤖 Generated with Claude Code
```
4. **Set appropriate labels** based on branch type
5. **Assign reviewers** if configured
## Workflow Commands
### Feature Workflow
```bash
# Start feature
git checkout develop
git pull origin develop
git checkout -b feature/new-feature
git push -u origin feature/new-feature
# Finish feature
git checkout develop
git pull origin develop
git merge --no-ff feature/new-feature
git push origin develop
git branch -d feature/new-feature
git push origin --delete feature/new-feature
```
### Release Workflow
```bash
# Start release
git checkout develop
git pull origin develop
git checkout -b release/v1.2.0
# Update version in package.json
git commit -am "chore(release): bump version to 1.2.0"
git push -u origin release/v1.2.0
# Finish release
git checkout main
git merge --no-ff release/v1.2.0
git tag -a v1.2.0 -m "Release v1.2.0"
git push origin main --tags
git checkout develop
git merge --no-ff release/v1.2.0
git push origin develop
git branch -d release/v1.2.0
git push origin --delete release/v1.2.0
```
### Hotfix Workflow
```bash
# Start hotfix
git checkout main
git pull origin main
git checkout -b hotfix/critical-fix
git push -u origin hotfix/critical-fix
# Finish hotfix
git checkout main
git merge --no-ff hotfix/critical-fix
git tag -a v1.2.1 -m "Hotfix v1.2.1"
git push origin main --tags
git checkout develop
git merge --no-ff hotfix/critical-fix
git push origin develop
git branch -d hotfix/critical-fix
git push origin --delete hotfix/critical-fix
```
## Validation Rules
### Branch Name Validation
- ✅ `feature/user-authentication`
- ✅ `release/v1.2.0`
- ✅ `hotfix/security-patch`
- ❌ `my-new-feature`
- ❌ `fix-bug`
- ❌ `random-branch`
### Merge Validation
Before merging, verify:
- [ ] No uncommitted changes
- [ ] Tests passing (run `npm test` or equivalent)
- [ ] No merge conflicts
- [ ] Remote is up to date
- [ ] Correct target branch
### Release Version Validation
- Must follow semantic versioning: `vMAJOR.MINOR.PATCH`
- Examples: `v1.0.0`, `v2.1.3`, `v0.5.0-beta.1`
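A minimal sketch of these rules as regular expressions (Python; the helper names are hypothetical, not part of this agent):
```python
import re

# Branch names: feature/* and hotfix/* take kebab-case slugs; release/* takes a version.
BRANCH_RE = re.compile(r"^(feature|hotfix)/[a-z0-9][a-z0-9._-]*$|^release/v\d+\.\d+\.\d+$")
# Tags: semantic version with an optional pre-release suffix, e.g. v0.5.0-beta.1.
VERSION_RE = re.compile(r"^v\d+\.\d+\.\d+(-[0-9A-Za-z.-]+)?$")

def is_valid_branch(name: str) -> bool:
    return bool(BRANCH_RE.match(name))

def is_valid_version(tag: str) -> bool:
    return bool(VERSION_RE.match(tag))

assert is_valid_branch("feature/user-authentication")
assert is_valid_branch("release/v1.2.0")
assert not is_valid_branch("my-new-feature")
assert is_valid_version("v0.5.0-beta.1")
```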
## Conflict Resolution
When merge conflicts occur:
1. **Identify conflicting files**: `git status`
2. **Show conflict markers**: Display files with `<<<<<<<`, `=======`, `>>>>>>>`
3. **Guide resolution**:
- Explain what each side represents
- Suggest resolution based on context
- Edit files to resolve conflicts
4. **Verify resolution**: `git diff --check`
5. **Complete merge**: `git add` resolved files, then `git commit`
## Status Reporting
Provide clear status updates:
```
🌿 Git Flow Status
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Current Branch: feature/user-profile
Branch Type: Feature
Base Branch: develop
Remote Tracking: origin/feature/user-profile
Changes:
● 3 modified
✚ 5 added
✖ 1 deleted
Sync Status:
↑ 2 commits ahead
↓ 1 commit behind
Ready to merge: ⚠️ Pull from origin first
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
## Error Handling
Handle common errors gracefully:
### Direct push to protected branches
```
❌ Cannot push directly to main/develop
💡 Create a feature branch instead:
git checkout -b feature/your-feature-name
```
### Merge conflicts
```
⚠️ Merge conflicts detected in:
- src/components/User.js
- src/utils/auth.js
🔧 Resolve conflicts and run:
git add <resolved-files>
git commit
```
### Invalid branch name
```
❌ Invalid branch name: "my-feature"
✅ Use Git Flow naming:
- feature/my-feature
- release/v1.2.0
- hotfix/bug-fix
```
## Integration with CI/CD
When finishing branches, remind about:
- **Automated tests** will run on PR
- **Deployment pipelines** will trigger on merge to main
- **Staging environment** updates on develop merge
## Best Practices
### DO
- ✅ Always pull before creating new branches
- ✅ Use descriptive branch names
- ✅ Write meaningful commit messages
- ✅ Run tests before finishing branches
- ✅ Keep feature branches small and focused
- ✅ Delete branches after merging
### DON'T
- ❌ Push directly to main or develop
- ❌ Force push to shared branches
- ❌ Merge without running tests
- ❌ Create branches with unclear names
- ❌ Leave stale branches undeleted
## Response Format
Always respond with:
1. **Clear action taken** (with ✓ checkmarks)
2. **Current status** of the repository
3. **Next steps** or recommendations
4. **Warnings** if any issues detected
Example:
```
✓ Created branch: feature/user-authentication
✓ Switched to new branch
✓ Set up remote tracking: origin/feature/user-authentication
📝 Current Status:
Branch: feature/user-authentication (clean working directory)
Base: develop
Tracking: origin/feature/user-authentication
🎯 Next Steps:
1. Implement your feature
2. Commit changes with descriptive messages
3. Run /finish when ready to merge
💡 Tip: Use conventional commit format:
feat(auth): add user authentication system
```
## Advanced Features
### Changelog Generation
When creating releases, generate CHANGELOG.md from commits:
1. Group commits by type (feat, fix, etc.)
2. Format with links to commits
3. Include breaking changes section
4. Add release date and version
### Semantic Versioning
Automatically suggest version bumps:
- **MAJOR**: Breaking changes (`BREAKING CHANGE:` in commit)
- **MINOR**: New features (`feat:` commits)
- **PATCH**: Bug fixes (`fix:` commits)
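A minimal sketch of both the changelog grouping and the bump suggestion, assuming commit subjects come from `git log --format=%s` (function and variable names are hypothetical):
```python
import re
from collections import defaultdict

# Conventional Commits subject: type(scope)!: description
COMMIT_RE = re.compile(r"^(?P<type>\w+)(\([^)]*\))?(?P<bang>!)?: (?P<desc>.+)$")

def suggest_release(subjects, bodies=()):
    groups = defaultdict(list)  # changelog sections, keyed by commit type
    bump = "patch"
    for subject in subjects:
        m = COMMIT_RE.match(subject)
        if not m:
            continue
        groups[m.group("type")].append(m.group("desc"))
        if m.group("bang"):  # feat!: or fix!: marks a breaking change
            bump = "major"
        elif m.group("type") == "feat" and bump != "major":
            bump = "minor"
    if any("BREAKING CHANGE:" in body for body in bodies):
        bump = "major"
    return bump, dict(groups)

bump, sections = suggest_release([
    "feat(auth): add login flow",
    "fix(api): handle empty payloads",
])
print(bump)      # minor
print(sections)  # {'feat': ['add login flow'], 'fix': ['handle empty payloads']}
```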
### Branch Cleanup
Periodically suggest cleanup:
```
🧹 Branch Cleanup Suggestions:
Merged branches that can be deleted:
- feature/old-feature (merged 30 days ago)
- feature/completed-task (merged 15 days ago)
Run: git branch -d feature/old-feature
```
Always maintain a professional, helpful tone and provide actionable guidance for Git Flow operations.

agents/quality-guardian.md Normal file

@@ -0,0 +1,130 @@
---
name: quality-guardian
description: Quality validation and testing specialist. Use PROACTIVELY after any code changes to run tests, validate implementations, and ensure compliance with project standards. MUST BE USED when code has been written, modified, or before considering any implementation complete.
tools: Bash, Read, Grep, Glob, LS, Edit, MultiEdit, mcp__ide__getDiagnostics, mcp__ide__executeCode, mcp__ide__runTests
color: red
model: claude-sonnet-4-5-20250929
---
# Purpose
You are the Quality Guardian, an expert in code quality validation, testing, and compliance enforcement. Your role is to ensure all code changes meet the highest standards of quality, reliability, and maintainability before being considered complete.
## Instructions
When invoked, you must follow these steps:
1. **Assess Current State**
- Run `git status` and `git diff` to understand recent changes
- Identify modified files and their types (source code, tests, configs)
- Check for any existing test suites or quality configurations
2. **Run Automated Tests**
- Execute all relevant test suites (`npm test`, `pytest`, `go test`, etc.)
- Use IDE diagnostics to check for syntax errors and warnings
- Run linters and formatters appropriate to the language
- Capture and analyze all test results
3. **Perform Code Quality Analysis**
- Check for code smells and anti-patterns
- Verify naming conventions and coding standards
- Ensure proper error handling and input validation
- Look for security vulnerabilities (hardcoded secrets, SQL injection risks, etc.)
- Validate documentation and comments
4. **Validate Test Coverage**
- Check if tests exist for new/modified functionality
- Verify edge cases are covered
- Ensure integration tests for critical paths
- Look for missing test scenarios
5. **Review Performance Considerations**
- Check for obvious performance issues (n+1 queries, inefficient loops)
- Validate resource usage patterns
- Look for potential memory leaks or bottlenecks
6. **Verify Compliance**
- Ensure adherence to project-specific standards
- Check for proper logging and monitoring hooks
- Validate API contracts and interfaces
- Confirm accessibility standards (if applicable)
7. **Generate Quality Report**
- Summarize all findings with severity levels
- Provide specific remediation steps for any issues
- Include code examples for fixes when helpful
- Calculate overall quality score
**Best Practices:**
- Always run tests in isolation to avoid false positives
- Use IDE integration for real-time feedback when available
- Prioritize critical issues that block functionality
- Be specific about line numbers and file locations for issues
- Suggest improvements even for passing code when appropriate
- Consider the context and purpose of the code being reviewed
- Balance perfectionism with pragmatism - focus on meaningful issues
## Quality Validation Checklist
### Critical Issues (Must Fix)
- [ ] All tests pass successfully
- [ ] No syntax errors or runtime exceptions
- [ ] No security vulnerabilities detected
- [ ] No hardcoded secrets or credentials
- [ ] Proper error handling implemented
- [ ] No breaking changes to existing APIs
### Important Issues (Should Fix)
- [ ] Code follows project conventions
- [ ] Adequate test coverage (>80% for critical paths)
- [ ] No significant performance regressions
- [ ] Clear and meaningful variable/function names
- [ ] Proper input validation
- [ ] No excessive code duplication
### Suggestions (Consider Improving)
- [ ] Opportunities for refactoring
- [ ] Additional edge case tests
- [ ] Documentation improvements
- [ ] Performance optimizations
- [ ] Code simplification opportunities
## Response Format
Provide your validation report in the following structure:
```
## Quality Validation Report
### Summary
- Overall Status: PASS/FAIL
- Tests Run: X passed, Y failed
- Critical Issues: Z
- Quality Score: XX/100
### Test Results
[Detailed test output with any failures]
### Critical Issues Found
1. [Issue description with file:line]
- Impact: [Why this matters]
- Fix: [Specific solution]
### Recommendations
1. [Improvement suggestion]
- Benefit: [Why this would help]
- Example: [Code sample if applicable]
### Next Steps
[Clear action items for addressing any issues]
```


@@ -0,0 +1,165 @@
---
allowed-tools: Bash(find:*), Bash(ls:*), Bash(tree:*), Bash(grep:*), Bash(wc:*), Bash(du:*), Bash(head:*), Bash(tail:*), Bash(cat:*), Bash(touch:*)
description: Generate comprehensive analysis and documentation of entire codebase
---
# Comprehensive Codebase Analysis
## Project Discovery Phase
### Directory Structure
!`find . -type d -not -path "./node_modules/*" -not -path "./.git/*" -not -path "./dist/*" -not -path "./build/*" -not -path "./.next/*" -not -path "./coverage/*" | sort`
### Complete File Tree
!`tree -a -L 4 -I 'node_modules|.git|dist|build|.next|coverage|*.log'`
### File Count and Size Analysis
- Total files: !`find . -type f -not -path "./node_modules/*" -not -path "./.git/*" | wc -l`
- Code files: !`find . -name "*.js" -o -name "*.ts" -o -name "*.jsx" -o -name "*.tsx" -o -name "*.py" -o -name "*.java" -o -name "*.php" -o -name "*.rb" -o -name "*.go" -o -name "*.rs" -o -name "*.cpp" -o -name "*.c" | grep -v node_modules | wc -l`
- Project size: !`find . -type f -not -path "./node_modules/*" -not -path "./.git/*" -not -path "./dist/*" -not -path "./build/*" -not -path "./.next/*" -not -path "./coverage/*" -exec du -ch {} + 2>/dev/null | grep total$ | cut -f1`
## Configuration Files Analysis
### Package Management
- Package.json: @package.json
- Package-lock.json exists: !`ls package-lock.json 2>/dev/null || echo "Not found"`
- Yarn.lock exists: !`ls yarn.lock 2>/dev/null || echo "Not found"`
- Requirements.txt: @requirements.txt
- Gemfile: @Gemfile
- Cargo.toml: @Cargo.toml
- Go.mod: @go.mod
- Composer.json: @composer.json
### Build & Dev Tools
- Webpack config: @webpack.config.js
- Vite config: @vite.config.js
- Rollup config: @rollup.config.js
- Babel config: @.babelrc
- ESLint config: @.eslintrc.js
- Prettier config: @.prettierrc
- TypeScript config: @tsconfig.json
- Tailwind config: @tailwind.config.js
- Next.js config: @next.config.js
### Environment & Docker
- .env files: !`find . -name ".env*" -type f 2>/dev/null || echo "No .env files found"`
- Docker files: !`find . -name "Dockerfile*" -o -name "docker-compose*" 2>/dev/null || echo "No Docker files found"`
- Kubernetes files: !`find . -name "*.yaml" -o -name "*.yml" 2>/dev/null | grep -E "(k8s|kubernetes|deployment|service)" || echo "No Kubernetes files found"`
### CI/CD Configuration
- GitHub Actions: !`find .github -name "*.yml" -o -name "*.yaml" 2>/dev/null || echo "No GitHub Actions"`
- GitLab CI: @.gitlab-ci.yml
- Travis CI: @.travis.yml
- Circle CI: @.circleci/config.yml
## Source Code Analysis
### Main Application Files
- Main entry points: !`find . -name "main.*" -o -name "index.*" -o -name "app.*" -o -name "server.*" | grep -v node_modules | head -10`
- Routes/Controllers: !`find . -path "*/routes/*" -o -path "*/controllers/*" -o -path "*/api/*" 2>/dev/null | grep -v node_modules | head -20 || echo "No routes/controllers found"`
- Models/Schemas: !`find . -path "*/models/*" -o -path "*/schemas/*" -o -path "*/entities/*" 2>/dev/null | grep -v node_modules | head -20 || echo "No models/schemas found"`
- Components: !`find . -path "*/components/*" -o -path "*/views/*" -o -path "*/pages/*" 2>/dev/null | grep -v node_modules | head -20 || echo "No components/views/pages found"`
### Database & Storage
- Database configs: !`find . -name "*database*" -o -name "*db*" -o -name "*connection*" | grep -v node_modules | head -10`
- Migration files: !`find . -path "*/migrations/*" -o -path "*/migrate/*" 2>/dev/null | head -10 || echo "No migration files found"`
- Seed files: !`find . -path "*/seeds/*" -o -path "*/seeders/*" 2>/dev/null | head -10 || echo "No seed files found"`
### Testing Files
- Test files: !`find . -name "*test*" -o -name "*spec*" | grep -v node_modules | head -15`
- Test config: @jest.config.js
### API Documentation
- API docs: !`find . \( -name "*api*" -a -name "*.md" \) -o -name "swagger*" -o -name "openapi*" 2>/dev/null | head -10 || echo "No API documentation found"`
## Key Files Content Analysis
### Root Configuration Files
@README.md
@LICENSE
@.gitignore
### Main Application Entry Points
!`find . -name "index.js" -o -name "index.ts" -o -name "main.js" -o -name "main.ts" -o -name "app.js" -o -name "app.ts" -o -name "server.js" -o -name "server.ts" 2>/dev/null | grep -v node_modules | head -5 | while read file; do echo "=== $file ==="; head -50 "$file" 2>/dev/null || echo "Could not read $file"; echo; done || echo "No main entry point files found"`
## Your Task
Based on all the discovered information above, create a comprehensive analysis that includes:
## 1. Project Overview
- Project type (web app, API, library, etc.)
- Tech stack and frameworks
- Architecture pattern (MVC, microservices, etc.)
- Language(s) and versions
## 2. Detailed Directory Structure Analysis
For each major directory, explain:
- Purpose and role in the application
- Key files and their functions
- How it connects to other parts
## 3. File-by-File Breakdown
Organize by category:
- **Core Application Files**: Main entry points, routing, business logic
- **Configuration Files**: Build tools, environment, deployment
- **Data Layer**: Models, database connections, migrations
- **Frontend/UI**: Components, pages, styles, assets
- **Testing**: Test files, mocks, fixtures
- **Documentation**: README, API docs, guides
- **DevOps**: CI/CD, Docker, deployment scripts
## 4. API Endpoints Analysis
If applicable, document:
- All discovered endpoints and their methods
- Authentication/authorization patterns
- Request/response formats
- API versioning strategy
## 5. Architecture Deep Dive
Explain:
- Overall application architecture
- Data flow and request lifecycle
- Key design patterns used
- Dependencies between modules
## 6. Environment & Setup Analysis
Document:
- Required environment variables
- Installation and setup process
- Development workflow
- Production deployment strategy
## 7. Technology Stack Breakdown
List and explain:
- Runtime environment
- Frameworks and libraries
- Database technologies
- Build tools and bundlers
- Testing frameworks
- Deployment technologies
## 8. Visual Architecture Diagram
Create a comprehensive diagram showing:
- High-level system architecture
- Component relationships
- Data flow
- External integrations
- File structure hierarchy
Use ASCII art, mermaid syntax, or detailed text representation to show:
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Frontend │────▶│ API │────▶│ Database │
│ (React/Vue) │ │ (Node/Flask) │ │ (Postgres/Mongo)│
└─────────────────┘ └─────────────────┘ └─────────────────┘
## 9. Key Insights & Recommendations
Provide:
- Code quality assessment
- Potential improvements
- Security considerations
- Performance optimization opportunities
- Maintainability suggestions
Think deeply about the codebase structure and provide comprehensive insights that would be valuable for new developers joining the project or for architectural decision-making.
At the end, write all of the output into a file called "codebase_analysis.md"

commands/build.md Normal file

@@ -0,0 +1,29 @@
---
description: Build the codebase based on a plan using structured approach
argument-hint: [path-to-plan]
allowed-tools: Bash, Read, Write, Glob, Grep, Task
---
# Build
Follow the `Workflow` to implement the plan at `PATH_TO_PLAN`, then `Report` the completed work.
## Variables
PATH_TO_PLAN: $ARGUMENTS
## Workflow
### 1. Initial Setup
- If no `PATH_TO_PLAN` is provided, STOP immediately and ask the user to provide it
### 2. Plan Analysis
- Read the plan at `PATH_TO_PLAN`. Think hard about the plan and write the code to implement it into the codebase.
### 3. Memory Management
- Document any architectural decisions or patterns discovered
## Report
- Summarize the work you've just done in a concise bullet point list
- Report the files and total lines changed with `git diff --stat`

commands/code-review.md Normal file

@@ -0,0 +1,45 @@
---
allowed-tools: Bash(git diff:*), Bash(git log:*), Bash(git status:*), Bash(git branch:*), mcp__serena__get_symbols_overview, mcp__serena__find_symbol, mcp__serena__find_referencing_symbols, mcp__serena__search_for_pattern, mcp__serena__list_dir
description: Perform comprehensive code review analysis of recent changes with semantic code understanding
argument-hint: [Optional: specify file paths or commit range for focused review]
---
# Code Review Analysis
Analyze recent changes (`GIT_CONTEXT`) using semantic code understanding to perform a comprehensive code review covering quality, security, performance, testing, and documentation, with specific, actionable feedback saved to `REVIEW_OUTPUT`.
## Variables:
TARGET_SCOPE: $1 (optional - specific files, commit range, or "recent" for latest changes)
GIT_CONTEXT: recent changes and commit history
REVIEW_CRITERIA: code quality, security, performance, testing, documentation
ANALYSIS_DEPTH: semantic symbol analysis with cross-references
REVIEW_OUTPUT: logs/code-review-analysis.md
## Workflow:
1. Gather git context using `git status`, `git diff HEAD~1`, `git log --oneline -5`, and `git branch --show-current`
2. Identify changed files from git diff output for semantic analysis scope
3. Use `mcp__serena__list_dir` to understand project structure and identify key directories
4. For each modified file, use `mcp__serena__get_symbols_overview` to understand code structure and symbols
5. Use `mcp__serena__find_symbol` with `include_body=true` for detailed analysis of modified functions/classes
6. Apply `mcp__serena__find_referencing_symbols` to understand impact of changes on dependent code
7. Use `mcp__serena__search_for_pattern` to identify potential security patterns, anti-patterns, or code smells
8. Analyze code quality: readability, maintainability, adherence to project conventions and best practices
9. Evaluate security: scan for vulnerabilities, input validation, authentication, authorization issues
10. Assess performance: identify bottlenecks, inefficient algorithms, resource usage patterns
11. Review testing: evaluate test coverage, test quality, missing test scenarios for changed code
12. Verify documentation: check inline comments, README updates, API documentation completeness
13. Generate specific, actionable feedback with file:line references and suggested improvements
14. Save comprehensive review analysis to `REVIEW_OUTPUT` with prioritized recommendations
## Report:
Code Review Analysis Complete
File: `REVIEW_OUTPUT`
Topic: Comprehensive semantic code review of `TARGET_SCOPE` with actionable recommendations
Key Components:
- Git context analysis with change scope identification
- Semantic symbol analysis using serena-mcp tools for deep code understanding
- Multi-dimensional review covering quality, security, performance, testing, documentation
- Specific actionable feedback with file:line references and improvement suggestions

commands/commit.md Normal file

@@ -0,0 +1,63 @@
---
allowed-tools: Bash, Read, Write, Task
description: Intelligent commits with hook-aware strategy detection
argument-hint: [Optional: --no-verify or custom message]
---
# Commit
Use the git-flow-manager sub-agent to intelligently analyze staging area and project formatting hooks, then execute optimal commit strategy (PARALLEL/COORDINATED/HYBRID) to prevent conflicts while maintaining commit organization. Parse `$ARGUMENTS` for commit options, run pre-commit checks, analyze changes for atomic splitting, and execute commits with conventional messages.
## Variables:
COMMIT_OPTIONS: $ARGUMENTS
STRATEGY_MODE: auto-detected
COMMIT_COUNT: auto-calculated
HOOK_ANALYSIS: auto-performed
## Instructions:
- Parse `COMMIT_OPTIONS` to extract flags like `--no-verify` or custom messages
- Use the git-flow-manager sub-agent for comprehensive workflow management with automatic strategy detection
- Auto-detect formatting hook aggressiveness and choose optimal commit strategy
- Run pre-commit checks unless `--no-verify` flag is present
- Validate `.gitignore` configuration and alert for large files (>1MB)
- Auto-stage modified files if none staged, analyze changes for atomic splitting
- Execute commits using detected strategy with conventional messages and emoji
- Include issue references for GitHub/Linear integration when applicable
## Workflow:
1. Deploy git-flow-manager sub-agent with strategy detection capabilities
2. Run `!git status --porcelain` to analyze current repository state
3. Execute formatting hook analysis to determine optimal commit strategy
4. Check for `--no-verify` flag in `COMMIT_OPTIONS`, skip pre-commit checks if present
5. Run pre-commit validation: `!pnpm lint`, `!pnpm build`, `!pnpm generate:docs`
6. Validate `.gitignore` configuration and check for large files
7. Auto-stage files with `!git add .` if no files currently staged
8. Execute `!git diff --staged --name-status` to analyze staged changes
9. Analyze changes for atomic commit splitting opportunities
10. Execute commits using detected strategy (PARALLEL/COORDINATED/HYBRID)
11. Generate conventional commit messages with appropriate emoji from @ai-docs/emoji-commit-ref.yaml
12. Include issue references in commit body for automatic linking
13. Execute `!git commit` with generated messages
14. Display commit summary using `!git log --oneline -1`
## Report:
Intelligent Commit Complete
Strategy: `STRATEGY_MODE` (auto-detected based on formatting hook analysis)
Files: `COMMIT_COUNT` commits created and executed
Topic: Hook-aware commit processing with adaptive strategy selection
Key Components:
- Automatic strategy detection preventing formatting hook conflicts
- Conventional commit messages with appropriate emoji
- Pre-commit validation and quality gates
- Atomic commit splitting for logical organization
- GitHub/Linear issue integration
- Clean working directory achieved without conflicts
## Relevant Files:
- @~/.claude/agents/git-flow-manager.md
- @ai-docs/emoji-commit-ref.yaml

commands/git-status.md Normal file

@@ -0,0 +1,8 @@
---
allowed-tools: Bash, Read
description: Analyze current git repository state and differences from remote
---
# Git Status
Analyze the current git repository state, including status, branch information, differences from remote, and recent commits. Use $ARGUMENTS for specific branch or filter options, and provide an actionable summary with next-step recommendations, highlighting any uncommitted changes or divergence from the remote branch.

commands/go.md Normal file

@@ -0,0 +1,55 @@
---
allowed-tools: mcp__serena__list_dir, mcp__serena__find_file, mcp__serena__search_for_pattern, mcp__serena__get_symbols_overview, mcp__serena__find_symbol, mcp__serena__find_referencing_symbols, mcp__serena__replace_symbol_body, mcp__serena__insert_after_symbol, mcp__serena__insert_before_symbol, mcp__context7__resolve-library-id, mcp__context7__get-library-docs, mcp__sequential-thinking__process_thought, mcp__sequential-thinking__generate_summary, Read
description: Advanced code analysis and development using semantic tools, documentation, and structured decision making
argument-hint: [task description or development requirement]
model: claude-sonnet-4-5-20250929
---
# Go
Advanced code analysis, development, and decision-making command that uses `USER_TASK` to analyze requirements through semantic code tools, up-to-date documentation, and structured thinking processes.
## Variables:
USER_TASK: $1
PROJECT_ROOT: .
CLAUDE_CONFIG: CLAUDE.md
## Instructions:
- Read `CLAUDE_CONFIG` to understand project context and requirements
- Use `USER_TASK` to determine specific analysis or development needs
- Apply serena tools for semantic code retrieval and precise editing operations
- Leverage context7 for current third-party library documentation and examples
- Use sequential thinking for all decision-making processes and complex analysis
- Maintain structured approach with clear reasoning for all actions taken
## Workflow:
1. Read `CLAUDE_CONFIG` file to understand project structure and context
2. Use sequential thinking to process and break down `USER_TASK` requirements
3. Use serena semantic tools to explore relevant codebase sections and symbols
4. Retrieve up-to-date documentation using context7 for any third-party dependencies
5. Apply structured decision-making through sequential thinking for implementation approach
6. Execute precise code analysis or modifications using serena's semantic editing tools
7. Document reasoning and decisions made throughout the process
8. Generate summary of actions taken and results achieved
9. Provide clear recommendations for next steps or follow-up actions
## Report:
Advanced Analysis Complete
Task: `USER_TASK` processed using semantic tools and structured thinking
Key Components:
- Project context analysis from `CLAUDE_CONFIG`
- Semantic code exploration and analysis using serena tools
- Third-party documentation retrieval via context7
- Structured decision-making through sequential thinking process
- Precise code modifications or analysis results
- Clear reasoning documentation and next step recommendations
## Relevant Files:
- @CLAUDE.md

commands/quick-plan.md Normal file

@@ -0,0 +1,56 @@
---
description: Creates a concise engineering implementation plan based on user requirements and saves it to the specs directory
argument-hint: [user prompt]
allowed-tools: Read, Write, Edit, Grep, Glob, MultiEdit
model: claude-sonnet-4-5-20250929
---
# Quick Plan
Create a detailed implementation plan based on the user's requirements provided through the `USER_PROMPT` variable. Analyze the request, think through the implementation approach, and save a comprehensive specification document to `PLAN_OUTPUT_DIRECTORY/<name-of-plan>.md` that can be used as a blueprint for actual development work.
## Variables
USER_PROMPT: $ARGUMENTS
PLAN_OUTPUT_DIRECTORY: `specs/`
## Instructions
- Carefully analyze the user's requirements provided in the `USER_PROMPT` variable.
- Think deeply about the best approach to implement the requested functionality or solve the problem.
- Create a concise implementation plan that includes:
- Clear problem statement and objectives
- Technical approach and architecture decisions
- Step-by-step implementation guide
- Potential challenges and solutions
- Testing strategy
- Success criteria
- Generate a descriptive, kebab-case filename based on the main topic of the plan
- Save the complete implementation plan to `PLAN_OUTPUT_DIRECTORY/<descriptive-name>.md`
- Ensure the plan is detailed enough that another developer could follow it to implement the solution
- Include code examples or pseudo-code where appropriate to clarify complex concepts
- Consider edge cases, error handling, and scalability concerns for a scale of roughly 10-20 users
- Structure the document with clear sections and proper markdown formatting
## Workflow
1. Analyze Requirements - THINK HARD and parse the `USER_PROMPT` to understand the core problem and desired outcome
2. Design solution - Develop technical approach including architecture decisions and implementation strategy
3. Document Plan - Structure a comprehensive markdown document with problem statement, implementation steps, and testing approach
4. Generate Filename - Create a descriptive, kebab-case filename based on the plan's main topic
5. Save & Report - Write the plan to the `PLAN_OUTPUT_DIRECTORY/<filename.md>` and provide a summary of key components
## Report
After creating and saving the implementation plan, provide a concise report in the following format:
```
Implementation Plan Created
File: PLAN_OUTPUT_DIRECTORY/<filename.md>
Topic: <brief description of what the plan covers>
Key Components:
- <main component 1>
- <main component 2>
- <main component 3>
```

commands/quick-search.md Normal file

@@ -0,0 +1,9 @@
---
allowed-tools: Grep, Read, Task
description: Search for patterns across project logs and files
model: claude-sonnet-4-5-20250929
---
# Quick Search
Search for the $ARGUMENTS pattern across project logs and files using an intelligent strategy. Scan the logs/ directory for .json and .log files, extract relevant context around matches, present results with file locations and line numbers, and suggest refined searches if needed.

hooks/hooks.json Normal file

@@ -0,0 +1,61 @@
{
"hooks": {
"PreToolUse": [
{
"matcher": ".*",
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/scripts/pre_tool_use.py",
"description": "Pre-tool validation and checks"
}
]
}
],
"PostToolUse": [
{
"matcher": "Write|Edit|MultiEdit",
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/scripts/post_tool_use.py",
"description": "Post-edit validation"
}
]
}
],
"SessionStart": [
{
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/scripts/session_start.py",
"description": "Initialize session"
}
]
}
],
"Stop": [
{
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/scripts/stop.py",
"description": "Handle stop events"
}
]
}
],
"Notification": [
{
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/hooks/scripts/notification.py",
"description": "Handle notifications"
}
]
}
]
}
}
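Each script registered above follows the same contract, visible in the sources below: read one JSON event from stdin, act on it, and signal the result through the exit code (0 allows the operation, 2 blocks it and reports stderr back to the agent). A minimal sketch of that shape, using only fields these scripts actually read:
```python
#!/usr/bin/env python3
import json
import sys

def main():
    try:
        event = json.load(sys.stdin)  # one JSON event per invocation
    except json.JSONDecodeError:
        sys.exit(0)  # malformed input: stay out of the way
    tool_name = event.get("tool_name", "")
    tool_input = event.get("tool_input", {})
    # Hypothetical check; the real policies live in pre_tool_use.py below.
    if tool_name == "Bash" and "rm -rf" in tool_input.get("command", ""):
        print("BLOCKED: destructive command", file=sys.stderr)
        sys.exit(2)  # exit code 2 blocks the tool call
    sys.exit(0)

if __name__ == "__main__":
    main()
```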

hooks/scripts/notification.py Executable file

@@ -0,0 +1,140 @@
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.11"
# dependencies = [
# "python-dotenv",
# ]
# ///
import argparse
import json
import os
import random
import subprocess
import sys
from pathlib import Path
try:
from dotenv import load_dotenv
load_dotenv()
except ImportError:
pass # dotenv is optional
def get_tts_script_path():
"""
Determine which TTS script to use based on available API keys.
Priority order: ElevenLabs > OpenAI > pyttsx3
"""
# Get current script directory and construct utils/tts path
script_dir = Path(__file__).parent
tts_dir = script_dir / "utils" / "tts"
# Check for ElevenLabs API key (highest priority)
if os.getenv("ELEVENLABS_API_KEY"):
elevenlabs_script = tts_dir / "elevenlabs_tts.py"
if elevenlabs_script.exists():
return str(elevenlabs_script)
# Check for OpenAI API key (second priority)
if os.getenv("OPENAI_API_KEY"):
openai_script = tts_dir / "openai_tts.py"
if openai_script.exists():
return str(openai_script)
# Fall back to pyttsx3 (no API key required)
pyttsx3_script = tts_dir / "pyttsx3_tts.py"
if pyttsx3_script.exists():
return str(pyttsx3_script)
return None
def announce_notification():
"""Announce that the agent needs user input."""
try:
tts_script = get_tts_script_path()
if not tts_script:
return # No TTS scripts available
# Get engineer name if available
engineer_name = os.getenv("ENGINEER_NAME", "").strip()
# Create notification message with 30% chance to include name
if engineer_name and random.random() < 0.3:
notification_message = f"{engineer_name}, your agent needs your input"
else:
notification_message = "Your agent needs your input"
# Call the TTS script with the notification message
subprocess.run(
["uv", "run", tts_script, notification_message],
capture_output=True, # Suppress output
timeout=10, # 10-second timeout
)
except (subprocess.TimeoutExpired, subprocess.SubprocessError, FileNotFoundError):
# Fail silently if TTS encounters issues
pass
except Exception:
# Fail silently for any other errors
pass
def main():
try:
# Parse command line arguments
parser = argparse.ArgumentParser()
parser.add_argument(
"--notify", action="store_true", help="Enable TTS notifications"
)
args = parser.parse_args()
# Read JSON input from stdin
input_data = json.loads(sys.stdin.read())
# Ensure log directory exists
log_dir = os.path.join(os.getcwd(), "logs")
os.makedirs(log_dir, exist_ok=True)
log_file = os.path.join(log_dir, "notification.json")
# Read existing log data or initialize empty list
if os.path.exists(log_file):
with open(log_file) as f:
try:
log_data = json.load(f)
except (json.JSONDecodeError, ValueError):
log_data = []
else:
log_data = []
# Append new data
log_data.append(input_data)
# Write back to file with formatting
with open(log_file, "w") as f:
json.dump(log_data, f, indent=2)
# Announce notification via TTS only if --notify flag is set
# Skip TTS for the generic "Claude is waiting for your input" message
if (
args.notify
and input_data.get("message") != "Claude is waiting for your input"
):
announce_notification()
sys.exit(0)
except json.JSONDecodeError:
# Handle JSON decode errors gracefully
sys.exit(0)
except Exception:
# Handle any other errors gracefully
sys.exit(0)
if __name__ == "__main__":
main()

hooks/scripts/post_tool_use.py Executable file

@@ -0,0 +1,89 @@
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.8"
# ///
import json
import subprocess
import sys
from datetime import datetime
from pathlib import Path
def check_and_fix_structure():
"""Run structure enforcement after file operations."""
try:
# Only run structure check for file-writing tools
project_root = Path.cwd()
enforce_script = project_root / "src" / "commands" / "enforce-structure.js"
if enforce_script.exists():
# Run structure enforcement with auto-fix
result = subprocess.run(
["node", str(enforce_script), "--fix"],
capture_output=True,
text=True,
cwd=project_root,
)
# If violations were found and fixed, print the output
if result.returncode == 0 and "Fixed" in result.stdout:
print("🔧 Structure enforcement auto-fix applied:", file=sys.stderr)
print(result.stdout, file=sys.stderr)
except Exception:
# Don't fail the hook if structure enforcement fails
pass
def main():
try:
# Read JSON input from stdin
input_data = json.load(sys.stdin)
# Check if this was a file-writing operation
tool_name = input_data.get("tool_name", "")
file_writing_tools = {"Write", "Edit", "MultiEdit"}
# Run structure enforcement for file-writing tools
if tool_name in file_writing_tools:
check_and_fix_structure()
# Ensure log directory exists
log_dir = Path.cwd() / "logs"
log_dir.mkdir(parents=True, exist_ok=True)
log_path = log_dir / "post_tool_use.json"
# Read existing log data or initialize empty list
if log_path.exists():
with open(log_path) as f:
try:
log_data = json.load(f)
except (json.JSONDecodeError, ValueError):
log_data = []
else:
log_data = []
# Add timestamp to the log entry
timestamp = datetime.now().strftime("%b %d, %I:%M%p").lower()
input_data["timestamp"] = timestamp
# Append new data
log_data.append(input_data)
# Write back to file with formatting
with open(log_path, "w") as f:
json.dump(log_data, f, indent=2)
sys.exit(0)
except json.JSONDecodeError:
# Handle JSON decode errors gracefully
sys.exit(0)
except Exception:
# Exit cleanly on any other error
sys.exit(0)
if __name__ == "__main__":
main()

hooks/scripts/pre_tool_use.py Executable file

@@ -0,0 +1,575 @@
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.10"
# ///
import hashlib
import json
import os
import re
import shlex
import sys
import time
import shutil
from datetime import datetime
from pathlib import Path
def is_dangerous_deletion_command(command):
"""
Token-based detection of destructive commands.
Uses shlex.split() to properly tokenize the command and check the first token
against known destructive commands, avoiding false positives from substrings.
"""
if not command or not command.strip():
return False
# Try to tokenize the command
try:
tokens = shlex.split(command.lower())
except ValueError:
# If tokenization fails, fall back to basic split
tokens = command.lower().split()
if not tokens:
return False
first_token = tokens[0]
# List of known destructive commands
destructive_commands = {
# File deletion
'rm', 'unlink', 'rmdir',
# File system operations
'dd', 'shred', 'wipe', 'srm', 'trash',
# Truncation
'truncate',
# Package managers
'pip', 'npm', 'yarn', 'conda', 'apt', 'yum', 'brew',
# System operations
'kill', 'killall', 'pkill', 'fuser',
'umount', 'swapoff', 'fdisk', 'mkfs', 'format',
# Archive operations
'tar', 'zip', 'unzip', 'gunzip', 'bunzip2', 'unxz', '7z',
# Database operations (if run as commands)
'mongo', 'psql', 'mysql',
}
# Check if the first token is a destructive command
if first_token in destructive_commands:
# For package managers, check if they're doing destructive operations
if first_token in {'npm', 'yarn', 'pip', 'conda', 'apt', 'yum', 'brew'}:
destructive_verbs = {'uninstall', 'remove', 'rm', 'purge'}
return any(verb in tokens for verb in destructive_verbs)
# For archive commands, check for destructive flags
if first_token in {'tar', 'zip', '7z'}:
destructive_flags = {'--delete', '-d', 'd'}
return any(flag in tokens for flag in destructive_flags)
# For gunzip, bunzip2, unxz - these delete source by default
if first_token in {'gunzip', 'bunzip2', 'unxz'}:
return '--keep' not in tokens and '-k' not in tokens
# All other destructive commands are blocked by default
return True
# Check for output redirection that overwrites files (>)
if '>' in command and '>>' not in command:
# Allow redirection to /dev/null
if '/dev/null' not in command:
return True
return False
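# A few illustrative cases for the policy above (hypothetical commands, not part
# of the original script):
#   is_dangerous_deletion_command("rm -rf build")          -> True  (rm is always blocked)
#   is_dangerous_deletion_command("npm install lodash")    -> False (no destructive verb)
#   is_dangerous_deletion_command("npm uninstall lodash")  -> True
#   is_dangerous_deletion_command("echo hi > notes.txt")   -> True  (overwrite redirection)
#   is_dangerous_deletion_command("echo hi >> notes.txt")  -> False (append is allowed)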
def is_env_file_access(tool_name, tool_input):
"""
Check if any tool is trying to access .env files containing sensitive data.
Allows reading .env files but blocks editing/writing operations.
Also allows access to .env.sample and .env.example files.
"""
if tool_name in ["Read", "Edit", "MultiEdit", "Write", "Bash"]:
if tool_name in ["Edit", "MultiEdit", "Write"]:
file_path = tool_input.get("file_path", "")
if ".env" in file_path and not (
file_path.endswith(".env.sample") or file_path.endswith(".env.example")
):
return True
elif tool_name == "Bash":
command = tool_input.get("command", "")
env_write_patterns = [
r"echo\s+.*>\s*\.env\b(?!\.sample|\.example)",
r"touch\s+.*\.env\b(?!\.sample|\.example)",
r"cp\s+.*\.env\b(?!\.sample|\.example)",
r"mv\s+.*\.env\b(?!\.sample|\.example)",
r">\s*\.env\b(?!\.sample|\.example)",
r">>\s*\.env\b(?!\.sample|\.example)",
r"vim\s+.*\.env\b(?!\.sample|\.example)",
r"nano\s+.*\.env\b(?!\.sample|\.example)",
r"emacs\s+.*\.env\b(?!\.sample|\.example)",
r"sed\s+.*-i.*\.env\b(?!\.sample|\.example)",
]
for pattern in env_write_patterns:
if re.search(pattern, command):
return True
return False
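# Illustrative outcomes of this policy (hypothetical inputs):
#   Edit  file_path=".env"            -> blocked (write access to secrets)
#   Edit  file_path=".env.example"    -> allowed (template file)
#   Bash  command="cat .env"          -> allowed (reads are permitted)
#   Bash  command="echo KEY=1 > .env" -> blocked (matches a write pattern)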
def is_command_file_access(tool_name, tool_input):
"""
Check if any tool is trying to access .claude/commands/ files.
This now only provides warnings, not blocks, to avoid workflow disruption.
"""
if tool_name not in ["Write", "Edit", "MultiEdit"]:
return False
file_path = tool_input.get("file_path", "")
if not file_path:
return False
normalized_path = os.path.normpath(file_path)
is_commands_file = (
"/.claude/commands/" in normalized_path
or normalized_path.startswith(".claude/commands/")
or normalized_path.startswith(".claude\\commands\\")
or "/.claude/commands/" in normalized_path
or normalized_path.endswith("/.claude/commands")
or normalized_path.endswith("\\.claude\\commands")
)
return is_commands_file
def check_root_structure_violations(tool_name, tool_input):
"""
Check if any tool is trying to create files in the root directory that violate project structure.
Only certain specific .md files are allowed in the root.
"""
if tool_name not in ["Write", "Edit", "MultiEdit"]:
return False
file_path = tool_input.get("file_path", "")
if not file_path:
return False
normalized_path = os.path.normpath(file_path)
path_parts = normalized_path.split(os.sep)
if len(path_parts) == 1 or (len(path_parts) == 2 and path_parts[0] == "."):
filename = path_parts[-1]
allowed_root_md_files = {
"README.md",
"CHANGELOG.md",
"CLAUDE.md",
"ROADMAP.md",
"SECURITY.md",
}
if filename.endswith(".md"):
if filename not in allowed_root_md_files:
return True
config_extensions = {".json", ".yaml", ".yml", ".toml", ".ini", ".env"}
if any(filename.endswith(ext) for ext in config_extensions):
allowed_root_configs = {
"package.json",
"package-lock.json",
"yarn.lock",
"pnpm-lock.yaml",
"pyproject.toml",
"requirements.txt",
"Cargo.toml",
"Cargo.lock",
"go.mod",
"go.sum",
}
if filename not in allowed_root_configs:
return True
script_extensions = {".sh", ".py", ".js", ".ts", ".rb", ".pl", ".php"}
if any(filename.endswith(ext) for ext in script_extensions):
return True
return False
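# Illustrative outcomes (hypothetical root-level writes):
#   Write "README.md"      -> allowed (in the root .md allow-list)
#   Write "NOTES.md"       -> blocked (root .md not in the allow-list)
#   Write "setup.py"       -> blocked (scripts are never allowed in root)
#   Write "docs/notes.md"  -> allowed (not in the root directory)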
def get_claude_session_id():
"""Generate or retrieve a unique session ID for Claude interactions."""
session_file = Path.home() / ".cache" / "claude" / "session_id"
session_file.parent.mkdir(parents=True, exist_ok=True)
if session_file.exists():
try:
with open(session_file) as f:
session_id = f.read().strip()
if session_id:
return session_id
except Exception:
pass
session_id = hashlib.md5(str(time.time()).encode()).hexdigest()[:8]
try:
with open(session_file, "w") as f:
f.write(session_id)
except Exception:
pass
return session_id
# -----------------------------
# SAFE TRASH (ultra-conservative)
# -----------------------------
REPO_ROOT = Path.cwd().resolve()
MAX_TRASH_BYTES = 20 * 1024 * 1024 # 20MB cap
TRASH_DIR = REPO_ROOT / ".trash"
def _is_simple_relpath(p: str) -> bool:
# disallow globs and backrefs; must not be absolute
if not p or p.startswith("-"):
return False
bad_tokens = ["*", "?", "[", "]", ".."]
if any(b in p for b in bad_tokens):
return False
return not os.path.isabs(p)
def _resolve_inside_repo(raw_path: str) -> Path | None:
try:
candidate = (Path.cwd() / raw_path).resolve()
except Exception:
return None
try:
if str(candidate).startswith(str(REPO_ROOT) + os.sep) or str(candidate) == str(
REPO_ROOT
):
return candidate
return None
except Exception:
return None
def _is_denied_path(p: Path) -> bool:
try:
rel = p.resolve().relative_to(REPO_ROOT)
except Exception:
return True
s = str(rel)
if s == ".env" or s.endswith(os.sep + ".env"):
return True
parts = set(s.split(os.sep))
# Never touch these; also forbids any nested target within these dirs
denied_dirs = {"node_modules", "venv", "dist", "build", ".trash", "logs"}
if parts.intersection(denied_dirs):
return True
return False
def _is_regular_and_small(p: Path, max_bytes: int = MAX_TRASH_BYTES) -> bool:
try:
st = p.stat()
return p.is_file() and not p.is_symlink() and st.st_size <= max_bytes
except Exception:
return False
def _trash_destination_for(p: Path) -> Path:
ts = datetime.now().strftime("%Y-%m-%dT%H-%M-%S")
bucket = TRASH_DIR / ts
rel = p.resolve().relative_to(REPO_ROOT)
dest = bucket / rel
dest.parent.mkdir(parents=True, exist_ok=True)
return dest
def _append_trash_log(original: Path, moved_to: Path, session_id: str):
try:
log_dir = REPO_ROOT / "logs"
log_dir.mkdir(parents=True, exist_ok=True)
log_path = log_dir / "pre_tool_use.json"
entry = {
"tool_name": "Bash",
"tool_input": {"command": f"safe_trash {original}"},
"session_id": session_id,
"hook_event_name": "PreToolUse",
"decision": "approved",
"working_directory": str(Path.cwd()),
"reason": "allowed_trash_command",
"timestamp": datetime.now().strftime("%b %d, %I:%M%p").lower(),
"moved_from": str(original),
"moved_to": str(moved_to),
}
if log_path.exists():
try:
with open(log_path) as f:
existing = json.load(f)
except Exception:
existing = []
else:
existing = []
existing.append(entry)
with open(log_path, "w") as f:
json.dump(existing, f, indent=2)
except Exception:
pass
def is_allowed_trash_command(command: str) -> tuple[bool, str | None]:
"""
Allow exactly one ultra-safe pattern:
safe_trash <relative-file>
We intentionally DO NOT allow multi-args, globs, or directories.
Returns (allowed, resolved_absolute_path | None).
"""
if not command:
return (False, None)
normalized = " ".join(command.strip().split())
m = re.match(r"^safe_trash\s+([^\s]+)$", normalized)
if not m:
return (False, None)
raw_path = m.group(1)
if not _is_simple_relpath(raw_path):
return (False, None)
target = _resolve_inside_repo(raw_path)
if target is None:
return (False, None)
if _is_denied_path(target):
return (False, None)
if not _is_regular_and_small(target):
return (False, None)
return (True, str(target))
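# Illustrative outcomes (hypothetical paths):
#   "safe_trash notes/old.md"    -> allowed if the file is a small regular file inside the repo
#   "safe_trash ../outside.txt"  -> rejected (".." is not a simple relative path)
#   "safe_trash *.log"           -> rejected (globs are not allowed)
#   "safe_trash logs/run.json"   -> rejected ("logs" is a denied directory)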
def handle_safe_trash(command: str, session_id: str) -> bool:
"""
If command matches safe_trash policy, move the file into ./.trash/<timestamp>/...
Returns True if we handled it here (and external command should be blocked).
"""
allowed, target_s = is_allowed_trash_command(command)
if not allowed:
return False
target = Path(target_s)
dest = _trash_destination_for(target)
try:
dest.parent.mkdir(parents=True, exist_ok=True)
shutil.move(str(target), str(dest))
_append_trash_log(target, dest, session_id)
log_tool_call(
"Bash",
{"command": command},
"approved",
"allowed_trash_command",
f"target={target}",
)
print(
f"✅ safe_trash moved file:\n from: {target}\n to: {dest}",
file=sys.stderr,
)
print(
" External command was intercepted by pre_tool_use hook (no shell execution).",
file=sys.stderr,
)
return True
except Exception as e:
print(f"safe_trash error: {e}", file=sys.stderr)
return False
def log_tool_call(tool_name, tool_input, decision, reason=None, block_message=None):
"""Log all tool calls with their decisions to a structured JSON file."""
try:
session_id = get_claude_session_id()
input_data = {
"tool_name": tool_name,
"tool_input": tool_input,
"session_id": session_id,
"hook_event_name": "PreToolUse",
"decision": decision,
"working_directory": str(Path.cwd()),
}
if reason:
input_data["reason"] = reason
if block_message:
input_data["block_message"] = block_message
log_dir = Path.cwd() / "logs"
log_dir.mkdir(parents=True, exist_ok=True)
log_path = log_dir / "pre_tool_use.json"
if log_path.exists():
with open(log_path) as f:
try:
log_data = json.load(f)
except (json.JSONDecodeError, ValueError):
log_data = []
else:
log_data = []
timestamp = datetime.now().strftime("%b %d, %I:%M%p").lower()
input_data["timestamp"] = timestamp
log_data.append(input_data)
with open(log_path, "w") as f:
json.dump(log_data, f, indent=2)
except Exception as e:
print(f"Logging error: {e}", file=sys.stderr)
def main():
try:
input_data = json.load(sys.stdin)
tool_name = input_data.get("tool_name", "")
tool_input = input_data.get("tool_input", {})
if not tool_name:
print("Error: No tool_name provided in input", file=sys.stderr)
sys.exit(1)
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON input: {e}", file=sys.stderr)
sys.exit(1)
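# Example stdin payload delivered to the PreToolUse hook (illustrative):
#   {"session_id": "...", "tool_name": "Bash",
#    "tool_input": {"command": "safe_trash notes/tmp.txt"}}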
try:
# Early-intercept: handle ultra-safe trash command inline to avoid any shell-side surprises
if tool_name == "Bash":
command = tool_input.get("command", "")
if handle_safe_trash(command, get_claude_session_id()):
sys.exit(2)
# Check for .env file access violations
if is_env_file_access(tool_name, tool_input):
block_message = "Access to .env files containing sensitive data is prohibited"
log_tool_call(
tool_name, tool_input, "blocked", "env_file_access", block_message
)
print(
"BLOCKED: Access to .env files containing sensitive data is prohibited",
file=sys.stderr,
)
print("Use .env.sample for template files instead", file=sys.stderr)
sys.exit(2)
# Block ALL forms of deletion and destructive operations
if tool_name == "Bash":
command = tool_input.get("command", "")
if is_dangerous_deletion_command(command):
block_message = (
"Destructive command detected and blocked for data protection"
)
log_tool_call(
tool_name,
tool_input,
"blocked",
"dangerous_deletion_command",
block_message,
)
print(
"🚫 DELETION PROTECTION: ALL destructive operations are BLOCKED",
file=sys.stderr,
)
print("", file=sys.stderr)
print("🛡️ PROTECTED OPERATIONS:", file=sys.stderr)
print(" • File deletion (rm, unlink, rmdir)", file=sys.stderr)
print(" • Directory removal (rm -r, rm -rf)", file=sys.stderr)
print(" • File overwriting (>, echo >, cat >)", file=sys.stderr)
print(" • Truncation (truncate, :>, /dev/null)", file=sys.stderr)
print(" • Package removal (npm uninstall, pip uninstall)", file=sys.stderr)
print(" • Database drops (DROP TABLE, DELETE FROM)", file=sys.stderr)
print(" • System operations (kill -9, format, fdisk)", file=sys.stderr)
print(" • Archive destructive ops (tar --delete)", file=sys.stderr)
print(" • Dangerous paths (/, ~, *, .., system dirs)", file=sys.stderr)
print("", file=sys.stderr)
print("💡 SAFE ALTERNATIVES:", file=sys.stderr)
print(" • Use 'mv' to relocate instead of delete", file=sys.stderr)
print(" • Use 'cp' to backup before changes", file=sys.stderr)
print(" • Use '>>' to append instead of overwrite", file=sys.stderr)
print(" • Use specific file paths (no wildcards)", file=sys.stderr)
print(
" • Request manual confirmation for destructive operations",
file=sys.stderr,
)
print("", file=sys.stderr)
print("🔒 This protection ensures NO accidental data loss", file=sys.stderr)
sys.exit(2)
# Check for root directory structure violations
if check_root_structure_violations(tool_name, tool_input):
file_path = tool_input.get("file_path", "")
filename = os.path.basename(file_path)
block_message = f"Root structure violation: unauthorized file {filename} in root directory"
log_tool_call(
tool_name,
tool_input,
"blocked",
"root_structure_violation",
block_message,
)
print("🚫 ROOT STRUCTURE VIOLATION BLOCKED", file=sys.stderr)
print(f" File: {filename}", file=sys.stderr)
print(" Reason: Unauthorized file in root directory", file=sys.stderr)
print("", file=sys.stderr)
print("📋 Root directory rules:", file=sys.stderr)
print(
" • Only these .md files allowed: README.md, CHANGELOG.md, CLAUDE.md, ROADMAP.md, SECURITY.md",
file=sys.stderr,
)
print(" • Config files belong in config/ directory", file=sys.stderr)
print(" • Scripts belong in scripts/ directory", file=sys.stderr)
print(" • Documentation belongs in docs/ directory", file=sys.stderr)
print("", file=sys.stderr)
print(
"💡 Suggestion: Use /enforce-structure --fix to auto-organize files",
file=sys.stderr,
)
sys.exit(2)
# WARNING (not blocking) for command file access
if is_command_file_access(tool_name, tool_input):
file_path = tool_input.get("file_path", "")
filename = os.path.basename(file_path)
log_tool_call(
tool_name,
tool_input,
"approved",
"command_file_warning",
f"Warning: modifying command file {filename}",
)
print(f"⚠️ COMMAND FILE MODIFICATION: {filename}", file=sys.stderr)
print(" Location: .claude/commands/", file=sys.stderr)
print(" Impact: May affect Claude's available commands", file=sys.stderr)
print("", file=sys.stderr)
print("💡 Best practices:", file=sys.stderr)
print(" • Test command changes carefully", file=sys.stderr)
print(" • Document any custom commands", file=sys.stderr)
print(" • Consider using /create-command for new commands", file=sys.stderr)
print("", file=sys.stderr)
except Exception as e:
print(f"Pre-tool use hook error: {e}", file=sys.stderr)
log_tool_call(
tool_name, tool_input, "approved", "hook_error", f"Hook error occurred: {e}"
)
# If we get here, the tool call is allowed - log as approved
log_tool_call(tool_name, tool_input, "approved")
sys.exit(0)
if __name__ == "__main__":
main()

hooks/scripts/session_start.py Executable file
@@ -0,0 +1,224 @@
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.11"
# dependencies = [
# "python-dotenv",
# ]
# ///
import argparse
import json
import subprocess
import sys
from datetime import datetime
from pathlib import Path
try:
from dotenv import load_dotenv
load_dotenv()
except ImportError:
pass # dotenv is optional
def log_session_start(input_data):
"""Log session start event to logs directory."""
# Ensure logs directory exists
log_dir = Path("logs")
log_dir.mkdir(parents=True, exist_ok=True)
log_file = log_dir / "session_start.json"
# Read existing log data or initialize empty list
if log_file.exists():
with open(log_file) as f:
try:
log_data = json.load(f)
except (json.JSONDecodeError, ValueError):
log_data = []
else:
log_data = []
# Append the entire input data
log_data.append(input_data)
# Write back to file with formatting
with open(log_file, "w") as f:
json.dump(log_data, f, indent=2)
def get_git_status():
"""Get current git status information."""
try:
# Get current branch
branch_result = subprocess.run(
["git", "rev-parse", "--abbrev-ref", "HEAD"],
capture_output=True,
text=True,
timeout=5,
)
current_branch = (
branch_result.stdout.strip() if branch_result.returncode == 0 else "unknown"
)
# Get uncommitted changes count
status_result = subprocess.run(
["git", "status", "--porcelain"], capture_output=True, text=True, timeout=5
)
if status_result.returncode == 0:
changes = (
status_result.stdout.strip().split("\n")
if status_result.stdout.strip()
else []
)
uncommitted_count = len(changes)
else:
uncommitted_count = 0
return current_branch, uncommitted_count
except Exception:
return None, None
def get_recent_issues():
"""Get recent GitHub issues if gh CLI is available."""
try:
# Check if gh is available
gh_check = subprocess.run(["which", "gh"], capture_output=True)
if gh_check.returncode != 0:
return None
# Get recent open issues
result = subprocess.run(
["gh", "issue", "list", "--limit", "5", "--state", "open"],
capture_output=True,
text=True,
timeout=10,
)
if result.returncode == 0 and result.stdout.strip():
return result.stdout.strip()
except Exception:
pass
return None
def load_development_context(source):
"""Load relevant development context based on session source."""
context_parts = []
# Add timestamp
context_parts.append(
f"Session started at: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}"
)
context_parts.append(f"Session source: {source}")
# Add git information
branch, changes = get_git_status()
if branch:
context_parts.append(f"Git branch: {branch}")
if changes > 0:
context_parts.append(f"Uncommitted changes: {changes} files")
# Load project-specific context files if they exist
context_files = [
".claude/CONTEXT.md",
".claude/TODO.md",
"TODO.md",
".github/ISSUE_TEMPLATE.md",
]
for file_path in context_files:
if Path(file_path).exists():
try:
with open(file_path) as f:
content = f.read().strip()
if content:
context_parts.append(f"\n--- Content from {file_path} ---")
context_parts.append(
content[:1000]
) # Limit to first 1000 chars
except Exception:
pass
# Add recent issues if available
issues = get_recent_issues()
if issues:
context_parts.append("\n--- Recent GitHub Issues ---")
context_parts.append(issues)
return "\n".join(context_parts)
def main():
try:
# Parse command line arguments
parser = argparse.ArgumentParser()
parser.add_argument(
"--load-context",
action="store_true",
help="Load development context at session start",
)
parser.add_argument(
"--announce", action="store_true", help="Announce session start via TTS"
)
args = parser.parse_args()
# Read JSON input from stdin
input_data = json.loads(sys.stdin.read())
# Extract fields (the full payload, including session_id, is logged below)
source = input_data.get("source", "unknown") # "startup", "resume", or "clear"
# Log the session start event
log_session_start(input_data)
# Load development context if requested
if args.load_context:
context = load_development_context(source)
if context:
# Using JSON output to add context
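# Claude Code reads this JSON from stdout and injects
# hookSpecificOutput.additionalContext into the new session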
output = {
"hookSpecificOutput": {
"hookEventName": "SessionStart",
"additionalContext": context,
}
}
print(json.dumps(output))
sys.exit(0)
# Announce session start if requested
if args.announce:
try:
# Try to use TTS to announce session start
script_dir = Path(__file__).parent
tts_script = script_dir / "utils" / "tts" / "pyttsx3_tts.py"
if tts_script.exists():
messages = {
"startup": "Claude Code session started",
"resume": "Resuming previous session",
"clear": "Starting fresh session",
}
message = messages.get(source, "Session started")
subprocess.run(
["uv", "run", str(tts_script), message],
capture_output=True,
timeout=5,
)
except Exception:
pass
# Success
sys.exit(0)
except json.JSONDecodeError:
# Handle JSON decode errors gracefully
sys.exit(0)
except Exception:
# Handle any other errors gracefully
sys.exit(0)
if __name__ == "__main__":
main()

hooks/scripts/stop.py Executable file
@@ -0,0 +1,214 @@
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.11"
# dependencies = [
# "python-dotenv",
# ]
# ///
import argparse
import json
import os
import random
import subprocess
import sys
from pathlib import Path
try:
from dotenv import load_dotenv
load_dotenv()
except ImportError:
pass # dotenv is optional
def get_completion_messages():
"""Return list of friendly completion messages."""
return [
"Work complete!",
"All done!",
"Task finished!",
"Job complete!",
"Ready for next task!",
]
def get_tts_script_path():
"""
Determine which TTS script to use based on available API keys.
Priority order: ElevenLabs > OpenAI > pyttsx3
"""
# Get current script directory and construct utils/tts path
script_dir = Path(__file__).parent
tts_dir = script_dir / "utils" / "tts"
# Check for ElevenLabs API key (highest priority)
if os.getenv("ELEVENLABS_API_KEY"):
elevenlabs_script = tts_dir / "elevenlabs_tts.py"
if elevenlabs_script.exists():
return str(elevenlabs_script)
# Check for OpenAI API key (second priority)
if os.getenv("OPENAI_API_KEY"):
openai_script = tts_dir / "openai_tts.py"
if openai_script.exists():
return str(openai_script)
# Fall back to pyttsx3 (no API key required)
pyttsx3_script = tts_dir / "pyttsx3_tts.py"
if pyttsx3_script.exists():
return str(pyttsx3_script)
return None
def get_llm_completion_message():
"""
Generate completion message using available LLM services.
Priority order: OpenAI > Anthropic > fallback to random message
Returns:
str: Generated or fallback completion message
"""
# Get current script directory and construct utils/llm path
script_dir = Path(__file__).parent
llm_dir = script_dir / "utils" / "llm"
# Try OpenAI first (highest priority)
if os.getenv("OPENAI_API_KEY"):
oai_script = llm_dir / "oai.py"
if oai_script.exists():
try:
result = subprocess.run(
["uv", "run", str(oai_script), "--completion"],
capture_output=True,
text=True,
timeout=10,
)
if result.returncode == 0 and result.stdout.strip():
return result.stdout.strip()
except (subprocess.TimeoutExpired, subprocess.SubprocessError):
pass
# Try Anthropic second
if os.getenv("ANTHROPIC_API_KEY"):
anth_script = llm_dir / "anth.py"
if anth_script.exists():
try:
result = subprocess.run(
["uv", "run", str(anth_script), "--completion"],
capture_output=True,
text=True,
timeout=10,
)
if result.returncode == 0 and result.stdout.strip():
return result.stdout.strip()
except (subprocess.TimeoutExpired, subprocess.SubprocessError):
pass
# Fallback to random predefined message
messages = get_completion_messages()
return random.choice(messages)
def announce_completion():
"""Announce completion using the best available TTS service."""
try:
tts_script = get_tts_script_path()
if not tts_script:
return # No TTS scripts available
# Get completion message (LLM-generated or fallback)
completion_message = get_llm_completion_message()
# Call the TTS script with the completion message
subprocess.run(
["uv", "run", tts_script, completion_message],
capture_output=True, # Suppress output
timeout=10, # 10-second timeout
)
except (subprocess.TimeoutExpired, subprocess.SubprocessError, FileNotFoundError):
# Fail silently if TTS encounters issues
pass
except Exception:
# Fail silently for any other errors
pass
def main():
try:
# Parse command line arguments
parser = argparse.ArgumentParser()
parser.add_argument(
"--chat", action="store_true", help="Copy transcript to chat.json"
)
args = parser.parse_args()
# Read JSON input from stdin
input_data = json.load(sys.stdin)
# The payload's session_id and stop_hook_active fields are logged with the
# rest of input_data below; nothing in this hook reads them individually.
# Ensure log directory exists
log_dir = os.path.join(os.getcwd(), "logs")
os.makedirs(log_dir, exist_ok=True)
log_path = os.path.join(log_dir, "stop.json")
# Read existing log data or initialize empty list
if os.path.exists(log_path):
with open(log_path) as f:
try:
log_data = json.load(f)
except (json.JSONDecodeError, ValueError):
log_data = []
else:
log_data = []
# Append new data
log_data.append(input_data)
# Write back to file with formatting
with open(log_path, "w") as f:
json.dump(log_data, f, indent=2)
# Handle --chat switch
if args.chat and "transcript_path" in input_data:
transcript_path = input_data["transcript_path"]
if os.path.exists(transcript_path):
# Read .jsonl file and convert to JSON array
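# e.g. two transcript lines (shape is illustrative):
#   {"type": "user", ...}
#   {"type": "assistant", ...}
# become a single pretty-printed JSON array in logs/chat.json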
chat_data = []
try:
with open(transcript_path) as f:
for line in f:
line = line.strip()
if line:
try:
chat_data.append(json.loads(line))
except json.JSONDecodeError:
pass # Skip invalid lines
# Write to logs/chat.json
chat_file = os.path.join(log_dir, "chat.json")
with open(chat_file, "w") as f:
json.dump(chat_data, f, indent=2)
except Exception:
pass # Fail silently
# Announce completion via TTS
announce_completion()
sys.exit(0)
except json.JSONDecodeError:
# Handle JSON decode errors gracefully
sys.exit(0)
except Exception:
# Handle any other errors gracefully
sys.exit(0)
if __name__ == "__main__":
main()

hooks/utils/README.md Normal file
@@ -0,0 +1,26 @@
# Utils - Shared Utilities
This directory contains shared utilities and helper functions used by various hooks.
## Structure:
- **llm/**: Language model utilities
  - anth.py: Anthropic API utilities
  - oai.py: OpenAI API utilities
- **tts/**: Text-to-speech utilities
  - elevenlabs_tts.py: ElevenLabs TTS integration
  - openai_tts.py: OpenAI TTS integration
  - pyttsx3_tts.py: Local TTS using pyttsx3
## Usage:
These utilities are invoked by various hooks, typically as standalone scripts via `uv run`. They provide common functionality like:
- API integrations
- Text-to-speech capabilities
- Shared helper functions
- Common validation logic
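For example, a hook can launch one of these helpers as a standalone script (a minimal sketch; the path assumes this directory layout):
```python
import subprocess

# Speak a completion message via the offline pyttsx3 helper
subprocess.run(
    ["uv", "run", "hooks/utils/tts/pyttsx3_tts.py", "All done!"],
    capture_output=True,
    timeout=10,
)
```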
## Note:
These files are normally launched by hooks rather than by hand, though each exposes a small CLI entry point for manual testing.

hooks/utils/llm/anth.py Executable file
@@ -0,0 +1,115 @@
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.8"
# dependencies = [
# "anthropic",
# "python-dotenv",
# ]
# ///
import os
import sys
from dotenv import load_dotenv
def prompt_llm(prompt_text):
"""
Base Anthropic LLM prompting method using fastest model.
Args:
prompt_text (str): The prompt to send to the model
Returns:
str: The model's response text, or None if error
"""
load_dotenv()
api_key = os.getenv("ANTHROPIC_API_KEY")
if not api_key:
return None
try:
import anthropic
client = anthropic.Anthropic(api_key=api_key)
message = client.messages.create(
model="claude-3-5-haiku-20241022", # Fastest Anthropic model
max_tokens=100,
temperature=0.7,
messages=[{"role": "user", "content": prompt_text}],
)
return message.content[0].text.strip()
except Exception:
return None
def generate_completion_message():
"""
Generate a completion message using Anthropic LLM.
Returns:
str: A natural language completion message, or None if error
"""
engineer_name = os.getenv("ENGINEER_NAME", "").strip()
if engineer_name:
name_instruction = f"Sometimes (about 30% of the time) include the engineer's name '{engineer_name}' in a natural way."
examples = f"""Examples of the style:
- Standard: "Work complete!", "All done!", "Task finished!", "Ready for your next move!"
- Personalized: "{engineer_name}, all set!", "Ready for you, {engineer_name}!", "Complete, {engineer_name}!", "{engineer_name}, we're done!" """
else:
name_instruction = ""
examples = """Examples of the style: "Work complete!", "All done!", "Task finished!", "Ready for your next move!" """
prompt = f"""Generate a short, friendly completion message for when an AI coding assistant finishes a task.
Requirements:
- Keep it under 10 words
- Make it positive and future focused
- Use natural, conversational language
- Focus on completion/readiness
- Do NOT include quotes, formatting, or explanations
- Return ONLY the completion message text
{name_instruction}
{examples}
Generate ONE completion message:"""
response = prompt_llm(prompt)
# Clean up response - remove quotes and extra formatting
if response:
response = response.strip().strip('"').strip("'").strip()
# Take first line if multiple lines
response = response.split("\n")[0].strip()
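# e.g. '"All done!"\n(then an explanation)' -> 'All done!' (illustrative)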
return response
def main():
"""Command line interface for testing."""
if len(sys.argv) > 1:
if sys.argv[1] == "--completion":
message = generate_completion_message()
if message:
print(message)
else:
print("Error generating completion message")
else:
prompt_text = " ".join(sys.argv[1:])
response = prompt_llm(prompt_text)
if response:
print(response)
else:
print("Error calling Anthropic API")
else:
print("Usage: ./anth.py 'your prompt here' or ./anth.py --completion")
if __name__ == "__main__":
main()

hooks/utils/llm/oai.py Executable file
@@ -0,0 +1,115 @@
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.8"
# dependencies = [
# "openai",
# "python-dotenv",
# ]
# ///
import os
import sys
from dotenv import load_dotenv
def prompt_llm(prompt_text):
"""
Base OpenAI LLM prompting method using fastest model.
Args:
prompt_text (str): The prompt to send to the model
Returns:
str: The model's response text, or None if error
"""
load_dotenv()
api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
return None
try:
from openai import OpenAI
client = OpenAI(api_key=api_key)
response = client.chat.completions.create(
model="gpt-4.1-nano", # Fastest OpenAI model
messages=[{"role": "user", "content": prompt_text}],
max_tokens=100,
temperature=0.7,
)
return response.choices[0].message.content.strip()
except Exception:
return None
def generate_completion_message():
"""
Generate a completion message using OpenAI LLM.
Returns:
str: A natural language completion message, or None if error
"""
engineer_name = os.getenv("ENGINEER_NAME", "").strip()
if engineer_name:
name_instruction = f"Sometimes (about 30% of the time) include the engineer's name '{engineer_name}' in a natural way."
examples = f"""Examples of the style:
- Standard: "Work complete!", "All done!", "Task finished!", "Ready for your next move!"
- Personalized: "{engineer_name}, all set!", "Ready for you, {engineer_name}!", "Complete, {engineer_name}!", "{engineer_name}, we're done!" """
else:
name_instruction = ""
examples = """Examples of the style: "Work complete!", "All done!", "Task finished!", "Ready for your next move!" """
prompt = f"""Generate a short, friendly completion message for when an AI coding assistant finishes a task.
Requirements:
- Keep it under 10 words
- Make it positive and future focused
- Use natural, conversational language
- Focus on completion/readiness
- Do NOT include quotes, formatting, or explanations
- Return ONLY the completion message text
{name_instruction}
{examples}
Generate ONE completion message:"""
response = prompt_llm(prompt)
# Clean up response - remove quotes and extra formatting
if response:
response = response.strip().strip('"').strip("'").strip()
# Take first line if multiple lines
response = response.split("\n")[0].strip()
return response
def main():
"""Command line interface for testing."""
if len(sys.argv) > 1:
if sys.argv[1] == "--completion":
message = generate_completion_message()
if message:
print(message)
else:
print("Error generating completion message")
else:
prompt_text = " ".join(sys.argv[1:])
response = prompt_llm(prompt_text)
if response:
print(response)
else:
print("Error calling OpenAI API")
else:
print("Usage: ./oai.py 'your prompt here' or ./oai.py --completion")
if __name__ == "__main__":
main()

hooks/utils/tts/elevenlabs_tts.py Executable file
@@ -0,0 +1,90 @@
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.8"
# dependencies = [
# "elevenlabs",
# "python-dotenv",
# ]
# ///
import os
import sys
from dotenv import load_dotenv
def main():
"""
ElevenLabs Turbo v2.5 TTS Script
Uses ElevenLabs' Turbo v2.5 model for fast, high-quality text-to-speech.
Accepts optional text prompt as command-line argument.
Usage:
- ./elevenlabs_tts.py # Uses default text
- ./elevenlabs_tts.py "Your custom text" # Uses provided text
Features:
- Fast generation (optimized for real-time use)
- High-quality voice synthesis
- Stable production model
- Cost-effective for high-volume usage
"""
# Load environment variables
load_dotenv()
# Get API key from environment
api_key = os.getenv("ELEVENLABS_API_KEY")
if not api_key:
print("❌ Error: ELEVENLABS_API_KEY not found in environment variables")
print("Please add your ElevenLabs API key to .env file:")
print("ELEVENLABS_API_KEY=your_api_key_here")
sys.exit(1)
try:
from elevenlabs import play
from elevenlabs.client import ElevenLabs
# Initialize client
elevenlabs = ElevenLabs(api_key=api_key)
print("🎙️ ElevenLabs Turbo v2.5 TTS")
print("=" * 40)
# Get text from command line argument or use default
if len(sys.argv) > 1:
text = " ".join(sys.argv[1:]) # Join all arguments as text
else:
text = "The first move is what sets everything in motion."
print(f"🎯 Text: {text}")
print("🔊 Generating and playing...")
try:
# Generate and play audio directly
audio = elevenlabs.text_to_speech.convert(
text=text,
voice_id="9BWtsMINqrJLrRacOk9x", # Aria voice
model_id="eleven_turbo_v2_5",
output_format="mp3_44100_128",
)
play(audio)
print("✅ Playback complete!")
except Exception as e:
print(f"❌ Error: {e}")
except ImportError:
print("❌ Error: elevenlabs package not installed")
print("This script uses UV to auto-install dependencies.")
print("Make sure UV is installed: https://docs.astral.sh/uv/")
sys.exit(1)
except Exception as e:
print(f"❌ Unexpected error: {e}")
sys.exit(1)
if __name__ == "__main__":
main()

hooks/utils/tts/openai_tts.py Executable file
@@ -0,0 +1,107 @@
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.8"
# dependencies = [
# "openai",
# "openai[voice_helpers]",
# "python-dotenv",
# ]
# ///
import asyncio
import os
import subprocess
import sys
import tempfile
from dotenv import load_dotenv
async def main():
"""
OpenAI TTS Script
Uses OpenAI's latest TTS model for high-quality text-to-speech.
Accepts optional text prompt as command-line argument.
Usage:
- ./openai_tts.py # Uses default text
- ./openai_tts.py "Your custom text" # Uses provided text
Features:
- OpenAI gpt-4o-mini-tts model (latest)
- Nova voice (engaging and warm)
- Streaming audio with instructions support
- Live audio playback via afplay (macOS)
"""
# Load environment variables
load_dotenv()
# Get API key from environment
api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
print("❌ Error: OPENAI_API_KEY not found in environment variables")
print("Please add your OpenAI API key to .env file:")
print("OPENAI_API_KEY=your_api_key_here")
sys.exit(1)
try:
from openai import AsyncOpenAI
# Initialize OpenAI client
openai = AsyncOpenAI(api_key=api_key)
print("🎙️ OpenAI TTS")
print("=" * 20)
# Get text from command line argument or use default
if len(sys.argv) > 1:
text = " ".join(sys.argv[1:]) # Join all arguments as text
else:
text = "Today is a wonderful day to build something people love!"
print(f"🎯 Text: {text}")
print("🔊 Generating and streaming...")
try:
# Generate and stream audio using OpenAI TTS
async with openai.audio.speech.with_streaming_response.create(
model="gpt-4o-mini-tts",
voice="nova",
input=text,
instructions="Speak in a cheerful, positive yet professional tone.",
response_format="mp3",
) as response:
# Create a temporary file to store the audio
with tempfile.NamedTemporaryFile(
delete=False, suffix=".mp3"
) as temp_file:
# Write the audio stream to the temporary file
async for chunk in response.iter_bytes():
temp_file.write(chunk)
temp_file_path = temp_file.name
try:
# Play the audio using afplay
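# afplay ships with macOS; on Linux a player such as "ffplay -nodisp -autoexit"
# would be the analogous substitute (not wired up here)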
subprocess.run(["afplay", temp_file_path], check=True)
print("✅ Playback complete!")
finally:
# Clean up the temporary file
os.unlink(temp_file_path)
except Exception as e:
print(f"❌ Error: {e}")
except ImportError:
print("❌ Error: Required package not installed")
print("This script uses UV to auto-install dependencies.")
print("Make sure UV is installed: https://docs.astral.sh/uv/")
sys.exit(1)
except Exception as e:
print(f"❌ Unexpected error: {e}")
sys.exit(1)
if __name__ == "__main__":
asyncio.run(main())

hooks/utils/tts/pyttsx3_tts.py Executable file
@@ -0,0 +1,77 @@
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.8"
# dependencies = [
# "pyttsx3",
# ]
# ///
import random
import sys
def main():
"""
pyttsx3 TTS Script
Uses pyttsx3 for offline text-to-speech synthesis.
Accepts optional text prompt as command-line argument.
Usage:
- ./pyttsx3_tts.py # Uses default text
- ./pyttsx3_tts.py "Your custom text" # Uses provided text
Features:
- Offline TTS (no API key required)
- Cross-platform compatibility
- Configurable voice settings
- Immediate audio playback
"""
try:
import pyttsx3
# Initialize TTS engine
engine = pyttsx3.init()
# Configure engine settings
engine.setProperty("rate", 180) # Speech rate (words per minute)
engine.setProperty("volume", 0.8) # Volume (0.0 to 1.0)
print("🎙️ pyttsx3 TTS")
print("=" * 15)
# Get text from command line argument or use default
if len(sys.argv) > 1:
text = " ".join(sys.argv[1:]) # Join all arguments as text
else:
# Default completion messages
completion_messages = [
"Work complete!",
"All done!",
"Task finished!",
"Job complete!",
"Ready for next task!",
]
text = random.choice(completion_messages)
print(f"🎯 Text: {text}")
print("🔊 Speaking...")
# Speak the text
engine.say(text)
engine.runAndWait()
print("✅ Playback complete!")
except ImportError:
print("❌ Error: pyttsx3 package not installed")
print("This script uses UV to auto-install dependencies.")
sys.exit(1)
except Exception as e:
print(f"❌ Error: {e}")
sys.exit(1)
if __name__ == "__main__":
main()

plugin.lock.json Normal file
@@ -0,0 +1,137 @@
{
"$schema": "internal://schemas/plugin.lock.v1.json",
"pluginId": "gh:AojdevStudio/dev-utils-marketplace:core-essentials",
"normalized": {
"repo": null,
"ref": "refs/tags/v20251128.0",
"commit": "0a7129ea0750311d9528330a34b1a13e7b030c23",
"treeHash": "11db4df7e4ac473c0b38fc7b86c7260eee75726571451e0980261c8c977005a1",
"generatedAt": "2025-11-28T10:24:56.632049Z",
"toolVersion": "publish_plugins.py@0.2.0"
},
"origin": {
"remote": "git@github.com:zhongweili/42plugin-data.git",
"branch": "master",
"commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
"repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
},
"manifest": {
"name": "core-essentials",
"description": "Meta-package: Installs all core-essentials components (commands + agents + hooks)",
"version": "3.0.0"
},
"content": {
"files": [
{
"path": "README.md",
"sha256": "37d7f38e581d4da089568b70b7250d0b60a32be41c8ef5579ae48b1a66768a9e"
},
{
"path": "agents/code-reviewer.md",
"sha256": "d8483d11c1001503d9476e0efeca0a51d8905d44230cca470c2b0e993b9e07a4"
},
{
"path": "agents/git-flow-manager.md",
"sha256": "700abe8bd92976f604340298f169ac21aa752776d53bd258e7ae77117380e692"
},
{
"path": "agents/doc-curator.md",
"sha256": "318b666a3709bebca137926931060f16665aff13a23f2fbed9ee6f9eaedffcac"
},
{
"path": "agents/quality-guardian.md",
"sha256": "4e4268038dc7b328529b78204070afdf59190739d1b774a1fe2995b7c645cfe9"
},
{
"path": "hooks/hooks.json",
"sha256": "bfced33b099e3d39cc39be64f4dc522763b5473b430b4e754c5546954b0b7f81"
},
{
"path": "hooks/utils/README.md",
"sha256": "1afe0a36e518db27b540795daf04d64728ab1ebccb376f349b494b48c9493634"
},
{
"path": "hooks/utils/llm/oai.py",
"sha256": "9f9380a1f4426f60e4d1b3f0adb08ab100dae3f37a7f1d4807c447d511577fe2"
},
{
"path": "hooks/utils/llm/anth.py",
"sha256": "f917ede2fed996dfa455deefaa1c8175ee540e501e7478270aed507ed386b410"
},
{
"path": "hooks/utils/tts/pyttsx3_tts.py",
"sha256": "8600390b839a71e1a27e499e1a5196745e1ac71f51b78caea8dc76c9d9b4d313"
},
{
"path": "hooks/utils/tts/openai_tts.py",
"sha256": "e7dcba27e9f0824f796663ee72354d841e7bc945f8ab58f9d55dfc16f4f7df2d"
},
{
"path": "hooks/utils/tts/elevenlabs_tts.py",
"sha256": "90b4efd5dd7099911c8ac2f58bbbdfc559f1bc97838f87862646ce134c3cf050"
},
{
"path": "hooks/scripts/post_tool_use.py",
"sha256": "4ffe868b3c95e8a83b67cdab5aade36be1f187d0cb4553b227627c2ad6d6363c"
},
{
"path": "hooks/scripts/notification.py",
"sha256": "34f076c926a2e1aef782a44458d898ecd49c59048bc59d1cc97b9e3e3e3d69b5"
},
{
"path": "hooks/scripts/stop.py",
"sha256": "e57e882a2c28a1860f8640bc528b7c235be9f6ceb18984b8d96858ce19167db9"
},
{
"path": "hooks/scripts/pre_tool_use.py",
"sha256": "984238cb274e6f8da66a1cda6dbe384ba6c9ddd4c05ccd623086e9039a4bc172"
},
{
"path": "hooks/scripts/session_start.py",
"sha256": "c93e969f0d340635f8ec8155c31dfa77eceb959d8922099007a9c1994129ceee"
},
{
"path": ".claude-plugin/plugin.json",
"sha256": "69980da542f88b696201a41f1569762a84acd68c16504af127d68b6d5ce18b8f"
},
{
"path": "commands/go.md",
"sha256": "36c2eb96b6d3aad17050a5aff76fe9db1c8e3f3762ad7b70079a0b9ada9ab473"
},
{
"path": "commands/git-status.md",
"sha256": "441bc8f194c998b62a208a5f623a0550c4106486a6f167b09ab62235ac6346f6"
},
{
"path": "commands/quick-search.md",
"sha256": "eb4cb47a7885b82cd73c1e90ca74f028e37d47166bb8840ad706f0bbde9a0975"
},
{
"path": "commands/build.md",
"sha256": "6e9457d0d2acddee5645fd33819e868957dcd706e0850748a2038602616c9e41"
},
{
"path": "commands/analyze-codebase.md",
"sha256": "b4af7931b653385110ebdf862693c2415ee1db98b665feee28921244c0092c32"
},
{
"path": "commands/code-review.md",
"sha256": "9b5914890e15226ed0c7c57a00f5ccd2a943a949ba666db4f6387d1bcfca53c8"
},
{
"path": "commands/quick-plan.md",
"sha256": "36a2199bb03f643bdf7303ab8ec474fe2c69bd6ca00b2a6fca0a007abf5a85ce"
},
{
"path": "commands/commit.md",
"sha256": "2d00bf6bdb356d640b316315bae95f01a0d52ffdd9ff67bf5471555327b30749"
}
],
"dirSha256": "11db4df7e4ac473c0b38fc7b86c7260eee75726571451e0980261c8c977005a1"
},
"security": {
"scannedAt": null,
"scannerVersion": null,
"flags": []
}
}