Initial commit

Zhongwei Li
2025-11-29 17:57:39 +08:00
commit 02e22b6e13
13 changed files with 1024 additions and 0 deletions

.claude-plugin/plugin.json Normal file

@@ -0,0 +1,19 @@
{
  "name": "git-workflow",
  "description": "Meta-package: Installs all git-workflow components (commands + agents + hooks)",
  "version": "3.0.0",
  "author": {
    "name": "Ossie Irondi",
    "email": "admin@kamdental.com",
    "url": "https://github.com/AojdevStudio"
  },
  "agents": [
    "./agents"
  ],
  "commands": [
    "./commands"
  ],
  "hooks": [
    "./hooks"
  ]
}

README.md Normal file

@@ -0,0 +1,3 @@
# git-workflow
Meta-package: Installs all git-workflow components (commands + agents + hooks)

agents/coderabbit-review-extractor.md Normal file

@@ -0,0 +1,140 @@
---
name: coderabbit-review-extractor
description: Specialist for extracting ONLY specific line-by-line code review comments from CodeRabbit on PRs, ignoring general walkthrough/summary comments. Use PROACTIVELY when analyzing CodeRabbit feedback on pull requests.
tools: Bash, Read, Write, Grep
model: claude-sonnet-4-5-20250929
---
# Purpose
You are a CodeRabbit review extraction specialist focused on parsing and organizing ONLY the specific line-by-line code improvement suggestions from CodeRabbit PR reviews, filtering out general walkthrough and summary comments.
## Background
CodeRabbit is an AI-powered code reviewer that posts two types of comments on PRs:
1. **Walkthrough/Summary Comments** (NOT WANTED): General PR overview, summaries, and high-level analyses
2. **Line-Specific Review Comments** (WANTED): Targeted feedback on specific lines of code with actionable improvements
Your job is to extract ONLY the second type - the granular, line-specific code suggestions.
## Instructions

When invoked, you must follow these steps:

1. **Gather PR Information**
   - Get the PR number or URL from the user
   - Validate it's a valid GitHub PR reference
   - Extract owner, repo, and PR number from the URL if provided

2. **Fetch PR Review Comments**
   - Use `gh api` to fetch all PR review comments:
     ```bash
     gh api repos/{owner}/{repo}/pulls/{pull_number}/comments
     ```
   - Also fetch issue comments (where the walkthrough might be):
     ```bash
     gh api repos/{owner}/{repo}/issues/{pull_number}/comments
     ```

3. **Identify CodeRabbit Comments**
   - Look for comments where `user.login` contains "coderabbit" (case-insensitive)
   - The CodeRabbit bot username is typically "coderabbitai"

4. **Filter Out Walkthrough Comments**
   - EXCLUDE comments that contain:
     - "## Walkthrough"
     - "## Summary"
     - "📝 Walkthrough"
     - "### Summary"
     - General PR overview sections
     - Tables of changed files
   - EXCLUDE comments without specific file/line references

5. **Extract Line-Specific Comments**
   - INCLUDE only comments that:
     - Have a `path` field (indicating a specific file)
     - Have a `line` or `position` field (indicating a specific line)
     - Contain actual code improvement suggestions
     - Have "committable suggestions" or specific code changes

6. **Parse and Structure Feedback**
   - For each valid comment, extract:
     - File path
     - Line number(s)
     - The specific issue identified
     - CodeRabbit's suggestion/fix
     - Any code snippets provided
     - Severity/priority if indicated

7. **Organize by File**
   - Group all comments by file path
   - Sort by line number within each file
   - Create a structured output showing the actionable feedback

8. **Save Results**
   - Write extracted comments to a markdown file
   - Include metadata (PR number, extraction date, comment count)
   - Format for easy review and action
   - Save to the docs/reports/ directory.
## Output Format

Structure your output as follows:

````markdown
# CodeRabbit Line-Specific Review Comments

**PR:** #{number} - {title}
**Extracted:** {timestamp}
**Total Comments:** {count}

## File: {file_path}

### Line {line_number}: {issue_type}

**Issue:** {description}

**Suggestion:** {coderabbit_suggestion}

```suggestion
{code_suggestion_if_provided}
```

---
[Continue for each comment...]
````
## Best Practices
- **Be Precise**: Focus ONLY on line-specific, actionable feedback
- **Verify Line References**: Ensure each comment has valid file/line information
- **Preserve Code Suggestions**: Keep any code snippets or "committable suggestions" intact
- **Check Diff Hunks**: Comments on diff hunks should be mapped to actual line numbers
- **Handle Pagination**: GitHub API may paginate results - fetch all pages
- **Error Handling**: Gracefully handle missing PR, no CodeRabbit comments, or API errors
## Key Distinctions
Remember these key differences:
- ❌ **Walkthrough**: "This PR implements a new authentication system..." (general overview)
- ✅ **Line-specific**: "At line 42 in auth.js: Missing null check for user object" (specific, actionable)
## API Reference
Use GitHub's PR review comments API as documented:
- Endpoint: `GET /repos/{owner}/{repo}/pulls/{pull_number}/comments`
- Returns: Array of review comments with file paths and line numbers
- Important fields: `path`, `line`, `body`, `user.login`, `commit_id`
You have access to the `gh` CLI tool which handles authentication automatically.

agents/pr-specialist.md Normal file

@@ -0,0 +1,68 @@
---
name: pr-specialist
description: Use this agent when code is ready for review and pull request creation. Examples: <example>Context: The user has completed implementing a new authentication feature and wants to create a pull request for review. user: "I've finished implementing the JWT authentication system. The tests are passing and I think it's ready for review." assistant: "I'll use the pr-specialist agent to help you create a comprehensive pull request with proper context and review guidelines." <commentary>Since the user has completed code and indicated readiness for review, use the pr-specialist agent to handle PR creation workflow.</commentary></example> <example>Context: The user mentions they want to submit their work for code review after completing a bug fix. user: "The login bug is fixed and all tests pass. How should I submit this for review?" assistant: "Let me use the pr-specialist agent to guide you through creating a proper pull request with all the necessary context and review criteria." <commentary>The user is ready to submit work for review, so the pr-specialist agent should handle the PR creation process.</commentary></example> Use proactively when detecting completion signals like "ready for review", "tests passing", "feature complete", or when users ask about submitting work.
tools: Bash, Read, Write, Grep
model: claude-sonnet-4-5-20250929
color: pink
---
You are a Pull Request Specialist, an expert in creating comprehensive, reviewable pull requests and managing code review workflows. Your expertise lies in gathering context, crafting clear descriptions, and facilitating smooth merge processes.
## **Required Command Protocols**
**MANDATORY**: Before any PR work, reference and follow these exact command protocols:
- **PR Creation**: `@.claude/commands/create-pr.md` - Follow the `pull_request_creation_protocol` exactly
- **PR Review**: `@.claude/commands/pr-review.md` - Use the `pull_request_review_protocol` for analysis
- **Review & Merge**: `@.claude/commands/review-merge.md` - Apply the `pull_request_review_merge_protocol` for merging
**Core Responsibilities:**
**Protocol-Driven Context Gathering** (`create-pr.md`):
- Execute `pull_request_creation_protocol`: delegate to specialist → parse arguments → gather context → validate readiness → generate content → create PR
- Apply protocol-specific data sources and validation criteria
- Use structured PR format with Linear task integration and testing instructions
- Follow protocol git conventions and validation requirements
**Protocol-Based PR Creation** (`create-pr.md`):
- Apply protocol title format: `<type>(<scope>): <description> [<task-id>]`
- Execute protocol content generation with structured body format
- Include protocol-mandated testing instructions and change descriptions
- Use protocol validation criteria and PR checklist requirements
- Follow protocol quality gates: lint, typecheck, test, no console.log, no commented code
**Protocol-Driven Review Facilitation** (`pr-review.md`, `review-merge.md`):
- Execute `pull_request_review_protocol`: identify target → gather context → automated assessment → deep review → risk assessment → generate recommendation
- Apply protocol scoring system (quality 40%, security 35%, architecture 25%)
- Use protocol decision matrix: auto-approve (>= 85), manual review (60-84), rejection (< 60)
- Execute `pull_request_review_merge_protocol` for safe merging with strategy selection
- Apply protocol safety features and validation rules
**Protocol Quality Assurance**:
- Apply protocol mandatory requirements: CI checks, no critical linting, TypeScript compilation, no high-severity security
- Execute protocol quality gates: test coverage >= 80%, code duplication < 5%, cyclomatic complexity < 10
- Use protocol security checkpoints: input validation, output encoding, authentication integrity, data exposure prevention
- Follow protocol architectural standards: design pattern consistency, module boundaries, interface contracts
- Apply protocol merge validation: no conflicts, branch up-to-date, tests passing, Linear integration
**Protocol Workflow Management**:
- Execute protocol-defined approval workflows with automated checks and validations
- Apply protocol conflict detection and resolution strategies
- Follow protocol merge strategies: squash (clean history), merge (preserve context), rebase (linear timeline)
- Execute protocol post-merge actions: branch deletion, Linear updates, stakeholder notifications, deployment triggers
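As a rough illustration of how the scoring system and decision matrix described above fit together (the weights 40/35/25 and thresholds 85/60 come from the protocol text; the function names are hypothetical, not part of any command file):

```python
# Illustrative sketch of the protocol scoring system and decision matrix.
WEIGHTS = {"quality": 0.40, "security": 0.35, "architecture": 0.25}


def overall_score(scores: dict) -> float:
    """Weighted average of per-dimension scores, each on a 0-100 scale."""
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)


def decision(score: float) -> str:
    """Map a score onto the decision matrix."""
    if score >= 85:
        return "auto-approve"
    if score >= 60:
        return "manual review"
    return "rejection"
```

For example, quality 90 / security 90 / architecture 90 scores 90.0 overall and auto-approves, while anything below 60 is rejected outright.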
## **Protocol Authority & Standards**
Always prioritize **protocol compliance** above all else. When working with PRs:
1. **Follow Protocol Workflows**: Execute command protocols step-by-step without deviation
2. **Apply Protocol Validation**: Use protocol-specified quality gates and scoring systems
3. **Reference Protocol Standards**: Cite specific protocol requirements in all communications
4. **Maintain Protocol Quality**: Ensure all protocol mandatory requirements are met
Never deviate from established command protocols without explicit justification. Protocol compliance ensures consistent, high-quality PR management across all projects.

commands/create-pr.md Normal file

@@ -0,0 +1,8 @@
---
allowed-tools: Bash, Edit, Grep, MultiEdit, Read, TodoWrite, WebFetch, Write
description: Create pull requests for completed work with automatic context gathering
---
# Create PR
Use the pr-specialist sub-agent to create comprehensive pull requests for completed work with automatic context gathering. Parse $ARGUMENTS for title, branches, and Linear task ID, gather context from git history and changed files, validate readiness (commits, tests, linting), generate structured PR content with conventional format and checklist, create PR via gh CLI with labels and reviewers, and provide PR URL and next steps.

commands/review-merge.md Normal file

@@ -0,0 +1,9 @@
---
allowed-tools: Bash, Edit, Grep, MultiEdit, Read, TodoWrite, WebFetch, Write
description: Review and merge pull requests with comprehensive validation and safety checks
model: claude-sonnet-4-5-20250929
---
# Review Merge
Review and merge pull requests with comprehensive validation and safety checks. Parse $ARGUMENTS for PR number and merge strategy (merge/squash/rebase), fetch PR details via gh commands, validate CI checks and reviews, verify test coverage and security scans, perform interactive review of changes, execute merge with selected strategy, and handle post-merge cleanup including branch deletion and Linear task updates.

hooks/hooks.json Normal file

@@ -0,0 +1,31 @@
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash.*git commit",
        "hooks": [
          {
            "type": "command",
            "command": "${CLAUDE_PLUGIN_ROOT}/hooks/scripts/commit-message-validator.py",
            "description": "Validate commit messages"
          }
        ]
      },
      {
        "matcher": "Bash.*git push",
        "hooks": [
          {
            "type": "command",
            "command": "${CLAUDE_PLUGIN_ROOT}/hooks/scripts/prevent-direct-push.py",
            "description": "Prevent direct pushes to protected branches"
          }
        ]
      }
    ],
    "PostToolUse": [
      {
        "matcher": "Bash.*git commit",
        "hooks": [
          {
            "type": "command",
            "command": "${CLAUDE_PLUGIN_ROOT}/hooks/scripts/auto-changelog-updater.py",
            "description": "Update changelog automatically"
          }
        ]
      }
    ]
  }
}

hooks/scripts/auto-changelog-updater.py Normal file

@@ -0,0 +1,109 @@
#!/usr/bin/env python3
"""
Auto Changelog Updater Hook

This hook automatically updates the changelog after git commits are made.
It runs the update-changelog.py script in automatic mode to analyze recent
commits and update the CHANGELOG.md file accordingly.

Hook Type: post_tool_use
Triggers On: git commit commands
"""
import json
import subprocess
import sys
from pathlib import Path


def main():
    # Read the tool use data from stdin
    tool_data = json.load(sys.stdin)

    # We're looking for the Bash tool with git commit commands;
    # the payload uses the same "tool_name"/"tool_input" keys as the
    # other hooks in this plugin
    tool_name = tool_data.get("tool_name", "")
    if tool_name != "Bash":
        # Not a bash command, skip
        return 0

    # Check if the command contains git commit
    command = tool_data.get("tool_input", {}).get("command", "")
    if not command:
        return 0

    # Check for various forms of git commit commands
    git_commit_patterns = [
        "git commit",
        "git commit -m",
        "git commit --message",
        "git commit -am",
        "git commit --amend",
    ]
    is_git_commit = any(pattern in command for pattern in git_commit_patterns)
    if not is_git_commit:
        # Not a git commit command, skip
        return 0

    # Check if the command was successful
    result = tool_data.get("result", {})
    if isinstance(result, dict):
        exit_code = result.get("exitCode", 0)
        if exit_code != 0:
            # Git commit failed, don't update changelog
            return 0

    # Find the update-changelog.py script
    script_path = (
        Path(__file__).parent.parent.parent
        / "scripts"
        / "changelog"
        / "update-changelog.py"
    )
    if not script_path.exists():
        print(
            f"Warning: Changelog update script not found at {script_path}",
            file=sys.stderr,
        )
        return 0

    # Run the changelog update script in auto mode
    try:
        print(
            "\n🔄 Automatically updating changelog after git commit...", file=sys.stderr
        )
        # Run the script with --auto flag
        result = subprocess.run(
            ["python", str(script_path), "--auto"],
            capture_output=True,
            text=True,
            cwd=Path(__file__).parent.parent.parent,  # Run from project root
        )
        if result.returncode == 0:
            print("✅ Changelog updated successfully!", file=sys.stderr)
            if result.stdout:
                print(result.stdout, file=sys.stderr)
        else:
            print(
                f"⚠️ Changelog update completed with warnings (exit code: {result.returncode})",
                file=sys.stderr,
            )
            if result.stderr:
                print(f"Error output: {result.stderr}", file=sys.stderr)
    except Exception as e:
        print(f"❌ Error updating changelog: {e}", file=sys.stderr)
        # Don't fail the hook even if changelog update fails

    return 0


if __name__ == "__main__":
    sys.exit(main())

hooks/scripts/auto_commit_on_changes.py Normal file

@@ -0,0 +1,118 @@
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.9"
# dependencies = []
# ///
import subprocess
import sys
from datetime import datetime
from pathlib import Path


def run_git_command(command: list[str]) -> subprocess.CompletedProcess:
    """Run a git command and return the completed process."""
    try:
        return subprocess.run(
            command, capture_output=True, text=True, check=True, cwd=Path.cwd()
        )
    except subprocess.CalledProcessError as e:
        print(f"Git command failed: {' '.join(command)}")
        print(f"Error: {e.stderr}")
        sys.exit(1)


def count_changed_files(max_count: int = 6) -> int:
    """
    Count all changed files (staged, unstaged, and untracked) with early exit.
    Ignores files in .gitignore. Returns count up to max_count.
    """
    changed_files = set()
    try:
        # 1. Get unstaged changes (working tree vs index)
        result = subprocess.run(
            ["git", "diff-files", "--name-only"],
            capture_output=True,
            text=True,
            check=True,
        )
        if result.stdout.strip():
            changed_files.update(result.stdout.strip().split("\n"))
            if len(changed_files) >= max_count:
                return max_count

        # 2. Get staged changes (index vs HEAD)
        result = subprocess.run(
            ["git", "diff-index", "--cached", "--name-only", "HEAD"],
            capture_output=True,
            text=True,
            check=True,
        )
        if result.stdout.strip():
            changed_files.update(result.stdout.strip().split("\n"))
            if len(changed_files) >= max_count:
                return max_count

        # 3. Get untracked files (respects .gitignore)
        result = subprocess.run(
            ["git", "ls-files", "--others", "--exclude-standard"],
            capture_output=True,
            text=True,
            check=True,
        )
        if result.stdout.strip():
            changed_files.update(result.stdout.strip().split("\n"))

        return min(len(changed_files), max_count)
    except subprocess.CalledProcessError:
        # If a git command fails, assume no changes
        return 0


def check_git_repository() -> bool:
    """Check if we're in a git repository."""
    try:
        subprocess.run(
            ["git", "rev-parse", "--git-dir"], capture_output=True, check=True
        )
        return True
    except subprocess.CalledProcessError:
        return False


def request_claude_commit():
    """Request Claude Code to make a commit by echoing the appropriate message."""
    timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    commit_message = f"Auto-commit: 5+ file changes detected at {timestamp}"
    # Echo a message that Claude Code can interpret as a commit request
    print(f"CLAUDE_COMMIT_REQUEST: {commit_message}")
    print("🔄 Requesting Claude Code to stage and commit changes...")


def main():
    """Main execution function."""
    print("🔍 Checking for file changes...")

    # Verify we're in a git repository
    if not check_git_repository():
        print("❌ Not in a git repository. Exiting.")
        sys.exit(1)

    # Count changed files with early exit at 6
    changed_count = count_changed_files(max_count=6)
    print(f"📊 Found {changed_count} changed file(s)")

    # Check if we hit the threshold
    if changed_count >= 5:
        print("🚨 Threshold reached: 5+ files changed")
        request_claude_commit()
    else:
        print(f"✅ Below threshold: {changed_count}/5 files changed")


if __name__ == "__main__":
    main()

hooks/scripts/commit-message-validator.py Normal file

@@ -0,0 +1,256 @@
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.10"
# dependencies = []
# ///
import json
import re
import sys
from datetime import datetime
from pathlib import Path
from typing import Any


class CommitMessageValidator:
    def __init__(self, input_data: dict[str, Any]):
        self.input = input_data
        self.valid_types = ["feat", "fix", "docs", "style", "refactor", "test", "chore"]

    def validate(self) -> dict[str, Any]:
        """Main validation entry point"""
        tool_name = self.input.get("tool_name")
        tool_input = self.input.get("tool_input", {})
        command = tool_input.get("command")

        # Security: Basic input validation
        if command and not isinstance(command, str):
            return self.approve(["Invalid command format"])

        # Only validate git commit commands
        if tool_name != "Bash" or not self.is_commit_command(command):
            return self.approve()

        # Extract commit message from command
        message = self.extract_commit_message(command)
        if not message:
            return self.approve()  # Can't validate without message

        # Validate the commit message format
        validation = self.validate_message(message)
        if validation["valid"]:
            return self.approve(validation["details"])
        else:
            return self.block(validation["errors"], validation["suggestions"])

    def is_commit_command(self, command: str | None) -> bool:
        """Check if command is a git commit"""
        return bool(command) and (
            "git commit" in command
            or "git cm" in command  # common alias
            or "gc -m" in command  # common alias
        )

    def extract_commit_message(self, command: str) -> str:
        """Extract commit message from command"""
        message = ""
        # Format: git commit -m "message"
        single_quote_match = re.search(r"-m\s+'([^']+)'", command)
        double_quote_match = re.search(r'-m\s+"([^"]+)"', command)
        # Format: git commit -m "$(cat <<'EOF'...EOF)"
        heredoc_match = re.search(
            r"cat\s*<<\s*['\"]?EOF['\"]?\s*([\s\S]*?)\s*EOF", command
        )

        if single_quote_match:
            message = single_quote_match.group(1)
        elif double_quote_match:
            message = double_quote_match.group(1)
        elif heredoc_match:
            message = heredoc_match.group(1).strip()

        # Get just the first line for conventional commit validation
        return message.split("\n")[0].strip()

    def validate_message(self, message: str) -> dict[str, Any]:
        """Validate commit message format"""
        errors = []
        suggestions = []
        details = []

        # Check for empty message
        if not message:
            errors.append("Commit message cannot be empty")
            return {"valid": False, "errors": errors, "suggestions": suggestions}

        # Check basic format: type(scope): subject or type: subject
        conventional_format = re.compile(r"^(\w+)(?:\(([^)]+)\))?:\s*(.+)$")
        match = conventional_format.match(message)
        if not match:
            errors.append(
                "Commit message must follow conventional format: type(scope): subject"
            )
            suggestions.extend(
                [
                    "Examples:",
                    "  feat(auth): add login functionality",
                    "  fix: resolve memory leak in provider list",
                    "  docs(api): update REST endpoint documentation",
                ]
            )
            return {"valid": False, "errors": errors, "suggestions": suggestions}

        type_, scope, subject = match.groups()

        # Validate type
        if type_ not in self.valid_types:
            errors.append(f"Invalid commit type '{type_}'")
            suggestions.append(f"Valid types: {', '.join(self.valid_types)}")
        else:
            details.append(f"Type: {type_}")

        # Validate scope (optional but recommended for features)
        if scope:
            if len(scope) > 20:
                errors.append("Scope should be concise (max 20 characters)")
            else:
                details.append(f"Scope: {scope}")
        elif type_ in ["feat", "fix"]:
            suggestions.append("Consider adding a scope for better context")

        # Validate subject
        if subject:
            # Check first character is lowercase
            if re.match(r"^[A-Z]", subject):
                errors.append("Subject should start with lowercase letter")

            # Check for ending punctuation
            if re.search(r"[.!?]$", subject):
                errors.append("Subject should not end with punctuation")

            # Check length
            if len(subject) > 50:
                suggestions.append(
                    f"Subject is {len(subject)} characters (recommended: max 50)"
                )

            # Check for imperative mood (basic check)
            words = subject.split()
            first_word = words[0] if words else ""
            past_tense_words = [
                "added",
                "updated",
                "fixed",
                "removed",
                "implemented",
                "created",
                "deleted",
                "improved",
                "refactored",
                "changed",
                "moved",
                "renamed",
            ]
            if first_word.lower() in past_tense_words:
                errors.append(
                    'Use imperative mood in subject (e.g., "add" not "added")'
                )

            if not errors:
                details.append(f'Subject: "{subject}"')
        else:
            errors.append("Subject cannot be empty")

        return {
            "valid": len(errors) == 0,
            "errors": errors,
            "suggestions": suggestions,
            "details": details,
        }

    def approve(self, details: list[str] | None = None) -> dict[str, Any]:
        """Approve the operation"""
        message = "✅ Commit message validation passed"
        if details:
            message += "\n" + "\n".join(details)
        return {"approve": True, "message": message}

    def block(self, errors: list[str], suggestions: list[str]) -> dict[str, Any]:
        """Block the operation due to invalid format"""
        message_parts = [
            "❌ Invalid commit message format:",
            *[f"  - {e}" for e in errors],
            "",
            *[f"  {s}" for s in suggestions],
            "",
            "Commit format: type(scope): subject",
            "",
            "Types:",
            "  feat     - New feature",
            "  fix      - Bug fix",
            "  docs     - Documentation only",
            "  style    - Code style changes",
            "  refactor - Code refactoring",
            "  test     - Add/update tests",
            "  chore    - Maintenance tasks",
            "",
            "Example: feat(providers): add location filter to provider list",
        ]
        return {"approve": False, "message": "\n".join(message_parts)}


def main():
    """Main execution"""
    try:
        input_data = json.load(sys.stdin)

        # Comprehensive logging functionality:
        # ensure the log directory exists
        log_dir = Path.cwd() / "logs"
        log_dir.mkdir(parents=True, exist_ok=True)
        log_path = log_dir / "commit_message_validator.json"

        # Read existing log data or initialize an empty list
        if log_path.exists():
            with open(log_path) as f:
                try:
                    log_data = json.load(f)
                except (json.JSONDecodeError, ValueError):
                    log_data = []
        else:
            log_data = []

        # Add timestamp to the log entry
        timestamp = datetime.now().strftime("%b %d, %I:%M%p").lower()
        input_data["timestamp"] = timestamp

        # Process validation and get results
        validator = CommitMessageValidator(input_data)
        result = validator.validate()

        # Add validation result to log entry
        input_data["validation_result"] = result

        # Append new data to log
        log_data.append(input_data)

        # Write back to file with formatting
        with open(log_path, "w") as f:
            json.dump(log_data, f, indent=2)

        print(json.dumps(result))
    except Exception as error:
        print(
            json.dumps({"approve": True, "message": f"Commit validator error: {error}"})
        )


if __name__ == "__main__":
    main()

hooks/scripts/prevent-direct-push.py Normal file

@@ -0,0 +1,86 @@
#!/usr/bin/env python3
import json
import subprocess
import sys

try:
    input_data = json.load(sys.stdin)
except json.JSONDecodeError as e:
    print(f"Error: Invalid JSON input: {e}", file=sys.stderr)
    sys.exit(1)

tool_name = input_data.get("tool_name", "")
tool_input = input_data.get("tool_input", {})
command = tool_input.get("command", "")

# Only validate git push commands
if tool_name != "Bash" or "git push" not in command:
    sys.exit(0)

# Get current branch
try:
    current_branch = subprocess.check_output(
        ["git", "branch", "--show-current"],
        stderr=subprocess.DEVNULL,
        text=True,
    ).strip()
except (subprocess.CalledProcessError, FileNotFoundError):
    current_branch = ""

# Check if the command or the current branch targets a protected branch
push_cmd = command
targets_protected = (
    "origin main" in push_cmd
    or "origin develop" in push_cmd
    or current_branch in ["main", "develop"]
)

# Block any direct push to main/develop (force pushes included)
if targets_protected:
    reason = f"""❌ Direct push to main/develop is not allowed!

Protected branches:
- main (production)
- develop (integration)

Git Flow workflow:
1. Create a feature branch:
   /feature <name>
2. Make your changes and commit
3. Push feature branch:
   git push origin feature/<name>
4. Create pull request:
   gh pr create
5. After approval, merge with:
   /finish

For releases:
  /release <version> → PR → /finish
For hotfixes:
  /hotfix <name> → PR → /finish

Current branch: {current_branch}

💡 Use feature/release/hotfix branches instead of pushing directly to main/develop."""
    output = {
        "hookSpecificOutput": {
            "hookEventName": "PreToolUse",
            "permissionDecision": "deny",
            "permissionDecisionReason": reason,
        }
    }
    print(json.dumps(output))
    sys.exit(0)

# Allow the command
sys.exit(0)

hooks/scripts/validate-branch-name.py Normal file

@@ -0,0 +1,96 @@
#!/usr/bin/env python3
import json
import re
import sys

try:
    input_data = json.load(sys.stdin)
except json.JSONDecodeError as e:
    print(f"Error: Invalid JSON input: {e}", file=sys.stderr)
    sys.exit(1)

tool_name = input_data.get("tool_name", "")
tool_input = input_data.get("tool_input", {})
command = tool_input.get("command", "")

# Only validate git checkout -b commands
if tool_name != "Bash" or "git checkout -b" not in command:
    sys.exit(0)

# Extract branch name
match = re.search(r"git checkout -b\s+([^\s]+)", command)
if not match:
    sys.exit(0)

branch_name = match.group(1)

# Allow main and develop branches
if branch_name in ["main", "develop"]:
    sys.exit(0)

# Validate Git Flow naming convention
if not re.match(r"^(feature|release|hotfix)/", branch_name):
    reason = f"""❌ Invalid Git Flow branch name: {branch_name}

Git Flow branches must follow these patterns:
• feature/<descriptive-name>
• release/v<MAJOR>.<MINOR>.<PATCH>
• hotfix/<descriptive-name>

Examples:
✅ feature/user-authentication
✅ release/v1.2.0
✅ hotfix/critical-security-fix

Invalid:
❌ {branch_name} (missing Git Flow prefix)
❌ feat/something (use 'feature/' not 'feat/')
❌ fix/bug (use 'hotfix/' not 'fix/')

💡 Use Git Flow commands instead:
/feature <name> - Create feature branch
/release <version> - Create release branch
/hotfix <name> - Create hotfix branch"""
    output = {
        "hookSpecificOutput": {
            "hookEventName": "PreToolUse",
            "permissionDecision": "deny",
            "permissionDecisionReason": reason,
        }
    }
    print(json.dumps(output))
    sys.exit(0)

# Validate release version format
if branch_name.startswith("release/"):
    if not re.match(r"^release/v\d+\.\d+\.\d+(-[a-zA-Z0-9.]+)?$", branch_name):
        reason = f"""❌ Invalid release version: {branch_name}

Release branches must follow semantic versioning:
release/vMAJOR.MINOR.PATCH[-prerelease]

Valid examples:
✅ release/v1.0.0
✅ release/v2.1.3
✅ release/v1.0.0-beta.1

Invalid:
❌ release/1.0.0 (missing 'v' prefix)
❌ release/v1.0 (incomplete version)
❌ {branch_name}

💡 Use: /release v1.2.0"""
        output = {
            "hookSpecificOutput": {
                "hookEventName": "PreToolUse",
                "permissionDecision": "deny",
                "permissionDecisionReason": reason,
            }
        }
        print(json.dumps(output))
        sys.exit(0)

# Allow the command
sys.exit(0)

plugin.lock.json Normal file

@@ -0,0 +1,81 @@
{
"$schema": "internal://schemas/plugin.lock.v1.json",
"pluginId": "gh:AojdevStudio/dev-utils-marketplace:git-workflow",
"normalized": {
"repo": null,
"ref": "refs/tags/v20251128.0",
"commit": "e6283e27c3062ec45671994a66780ee4d8533d09",
"treeHash": "12344a4cb31f330fc9e29c933279db634e53546146ad538c756de1608b5f7764",
"generatedAt": "2025-11-28T10:09:52.274241Z",
"toolVersion": "publish_plugins.py@0.2.0"
},
"origin": {
"remote": "git@github.com:zhongweili/42plugin-data.git",
"branch": "master",
"commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
"repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
},
"manifest": {
"name": "git-workflow",
"description": "Meta-package: Installs all git-workflow components (commands + agents + hooks)",
"version": "3.0.0"
},
"content": {
"files": [
{
"path": "README.md",
"sha256": "08911b51a3868c899c0131729a05084260e6b93c3c86ed2ef34dec0ea1cc334c"
},
{
"path": "agents/pr-specialist.md",
"sha256": "3954bd808f1ffa454718733e6b6d299ce4482f5133941ce90b7fa9c159fe5bf7"
},
{
"path": "agents/coderabbit-review-extractor.md",
"sha256": "a8ef84be9cc006748786e5a0d2a59ae81e08b8c90a1303a9383c204b3ecd5af5"
},
{
"path": "hooks/hooks.json",
"sha256": "1b933e82f20405a145f25197348ba6a4b3ac1aed6e44d9c309b6d5b788e3f771"
},
{
"path": "hooks/scripts/auto_commit_on_changes.py",
"sha256": "961330a3bcaa4dd98ef2ead23f4c9362eb6bcc9801f8fafadea3de70e8590833"
},
{
"path": "hooks/scripts/validate-branch-name.py",
"sha256": "07aee29531be0b82437085613138d869f6c96c00ad59f165da6cbc032f7eab8a"
},
{
"path": "hooks/scripts/auto-changelog-updater.py",
"sha256": "f9b428734ac33fe3b1edbee745d4557b40317f6ba507860f7f12e13bbd753053"
},
{
"path": "hooks/scripts/commit-message-validator.py",
"sha256": "6ba9938296fd11cfb6361a8a3a1b5f3c074d876dc93c4be2a8072310072fe049"
},
{
"path": "hooks/scripts/prevent-direct-push.py",
"sha256": "083428f0cdce059f8ba978287e3f93aca1e165bbe15dcca62a69be5408a777f8"
},
{
"path": ".claude-plugin/plugin.json",
"sha256": "97966251817f9e1ad7b90c5d2fb484653546a159d753130d4a0267173d6285f3"
},
{
"path": "commands/review-merge.md",
"sha256": "33dc984970e67d8bae00169b15cf9245b6b9ba0afe19845d359df9d3c2d272b0"
},
{
"path": "commands/create-pr.md",
"sha256": "522fb26cb04e9435959760f65d4935bdcdf276eb82a87bcfafd2351baf1e47b0"
}
],
"dirSha256": "12344a4cb31f330fc9e29c933279db634e53546146ad538c756de1608b5f7764"
},
"security": {
"scannedAt": null,
"scannerVersion": null,
"flags": []
}
}