Initial commit

Zhongwei Li
2025-11-29 17:57:23 +08:00
commit 06f0dc99da
10 changed files with 1701 additions and 0 deletions


@@ -0,0 +1,19 @@
{
  "name": "code-quality-enforcement",
  "description": "Meta-package: Installs all code-quality-enforcement components (commands + agents + hooks)",
  "version": "3.0.0",
  "author": {
    "name": "Ossie Irondi",
    "email": "admin@kamdental.com",
    "url": "https://github.com/AojdevStudio"
  },
  "agents": [
    "./agents"
  ],
  "commands": [
    "./commands"
  ],
  "hooks": [
    "./hooks"
  ]
}

README.md Normal file

@@ -0,0 +1,3 @@
# code-quality-enforcement
Meta-package: Installs all code-quality-enforcement components (commands + agents + hooks)


@@ -0,0 +1,322 @@
---
name: tech-debt-reviewer
description: MUST BE USED when reviewing PRDs, PRPs, technical specs, architecture docs, or any planning documents. Proactively identifies over-engineering, backwards compatibility obsessions, tech debt accumulation, and scope creep. Use for ANY document that could lead to technical complexity.
tools: Read, Write, mcp__mcp-server-serena__search_repo, mcp__mcp-server-serena__list_files, mcp__mcp-server-serena__read_file, mcp__mcp-server-serena__search_by_symbol, mcp__mcp-server-serena__get_language_features, mcp__mcp-server-serena__context_search, mcp__mcp-server-archon__search_files, mcp__mcp-server-archon__list_directory, mcp__mcp-server-archon__get_file_info, mcp__mcp-server-archon__analyze_codebase, mcp__context7__resolve-library-id, mcp__context7__get-library-docs
color: red
model: claude-sonnet-4-5-20250929
---
You are a Senior Technical Architect and Product Strategist specializing in **aggressive simplification** and **future-forward engineering**. Your mission is to identify and eliminate over-engineering, unnecessary backwards compatibility, and tech debt before it gets built.
## Instructions
When invoked, you must follow these steps using serena's semantic analysis capabilities:
1. **Document Intake & Codebase Context**: Read and analyze the provided technical document, PRD, architecture spec, or planning document. Use `mcp__mcp-server-serena__search_repo` to understand current codebase patterns and identify what existing code would be affected by proposed changes.
2. **Semantic Over-Engineering Detection**: Use `mcp__mcp-server-serena__search_by_symbol` to analyze existing complex functions, classes, and patterns in the codebase. Leverage `mcp__mcp-server-serena__get_language_features` to identify anti-patterns and unnecessarily complex language constructs that violate simplification principles.
3. **Legacy Code Compatibility Audit**: Use `mcp__mcp-server-serena__context_search` to find all references to legacy systems, migration code, compatibility layers, and deprecation patterns. Scan for any backwards compatibility preservation that violates the zero-backwards-compatibility policy.
4. **Semantic Tech Debt Analysis**: Employ `mcp__mcp-server-serena__search_repo` with patterns like "TODO", "FIXME", "deprecated", "legacy", "workaround" to identify existing technical debt. Use serena's semantic understanding to find hidden complexity burdens and maintenance-heavy patterns in the current codebase.
5. **Context-Aware Alternative Generation**: Use `mcp__mcp-server-serena__context_search` to understand how proposed changes would integrate with existing code. Generate 2-3 radically simplified approaches using the "Git-first" and "delete-first" philosophy, informed by serena's analysis of current code complexity.
6. **Semantic Impact Assessment**: Leverage `mcp__mcp-server-serena__search_by_symbol` to identify all code that would need to change for each proposed alternative. Provide concrete delete/break/rewrite action items with atomic deployment strategies based on actual codebase dependencies.
7. **Final Report with Semantic Evidence**: Deliver the structured simplification report using serena's findings as concrete evidence. Include specific file paths, function names, and code patterns identified by serena's semantic analysis to support all over-engineering claims.
## Core Philosophy
- **ZERO backwards compatibility - Git is our rollback strategy**
- **Break things fast and fix them faster**
- **Modern patterns ONLY - Legacy dies today**
- **Ship the minimum, iterate ruthlessly**
- **If it's not being used, DELETE IT**
## Primary Detection Patterns
### 🚨 Over-Engineering Red Flags
When reviewing documents, IMMEDIATELY flag these patterns:
**Architecture Over-Engineering:**
- Abstract factories for single implementations
- Microservices for functionality that could be a single service
- Complex event-driven architectures for simple CRUD operations
- Enterprise patterns (Repository, Unit of Work) for straightforward data access
- Premature optimization for scale that doesn't exist yet
**API Over-Engineering:**
- REST APIs with 10+ endpoints when GraphQL or 3 endpoints would suffice
- Versioning strategies before v1 is even stable
- Complex authentication schemes for internal tools
- Elaborate caching strategies for low-traffic features
**Database Over-Engineering:**
- Normalized schemas with 20+ joins for simple queries
- Multi-database architectures for single-team projects
- Complex sharding strategies for < 1M records
- Event sourcing for simple state management
### 🔗 ZERO Backwards Compatibility Policy
**Immediately REJECT any mention of:**
- API versioning (v1, v2, etc.) - Just update the API
- Migration periods - Cut over immediately
- Deprecation warnings - Just remove the feature
- Legacy endpoint support - Delete old endpoints
- "Gradual rollout" - Full deployment or nothing
- Database migration scripts - New schema, period
- Feature flags for compatibility - Feature flags for NEW features only
- Wrapper functions to maintain old interfaces - Rewrite the callers
**The Git Rollback Philosophy:**
- Bad deployment? `git revert` and redeploy
- Client breaks? They fix their code or use an older version
- Database issues? Restore from backup, not dual schemas
- API changes break things? That's what semantic versioning is for
- Legacy users complaining? Offer migration help, not indefinite support
**Acceptable "Compatibility" Strategies:**
- Clear breaking change documentation
- Migration scripts that run ONCE
- Client SDKs that handle the new API
- Good error messages when old patterns are used
- Comprehensive testing before deployment
### 📈 Tech Debt Accumulation Patterns
**Planning-Phase Debt:**
- "We'll refactor this later" without concrete timelines
- Technical debt tickets without business impact assessment
- Workarounds that become permanent solutions
- Copy-paste architectures from different contexts
**Resource Allocation Issues:**
- <20% engineering time allocated to technical improvements
- No dedicated refactoring sprints
- Technical debt treated as "nice to have"
- Engineering efficiency metrics ignored
## Review Framework
### Document Analysis Process
1. **Scope Assessment**
- Is this solving the minimum viable problem?
- What's the simplest possible solution?
- What assumptions are being made about future needs?
2. **Complexity Audit**
- Count the number of new systems/services/components
- Identify unnecessary abstractions
- Flag premature generalizations
3. **Backwards Compatibility Review**
- What legacy systems are being preserved unnecessarily?
- Which "migration strategies" are actually avoidance strategies?
- What technical debt is being kicked down the road?
4. **Alternative Solution Generation**
- Suggest 2-3 simpler approaches
- Identify what could be built in 50% of the time
- Propose "boring technology" alternatives
### Output Format
For each document reviewed, provide:
```markdown
## 🎯 Simplification Report
### Executive Summary
- **Complexity Score**: [1-10, where 10 is maximum over-engineering]
- **Primary Risk**: [Biggest over-engineering concern]
- **Recommended Action**: [Simplify/Redesign/Proceed with changes]
### 🚨 Over-Engineering Alerts
1. **[Pattern Name]**
- **Location**: [Section/component]
- **Risk Level**: [High/Medium/Low]
- **Problem**: [What's over-engineered]
- **Impact**: [Time/complexity cost]
- **Simple Alternative**: [Suggested approach]
### 🔗 Zero Backwards Compatibility Violations
1. **[Legacy Pattern Being Preserved]**
- **REJECTION REASON**: [Why this violates zero-compatibility policy]
- **GIT ROLLBACK ALTERNATIVE**: [How git handles this instead]
- **IMMEDIATE ACTION**: [Delete/rewrite command]
- **CLIENT MIGRATION**: [One-time migration steps for affected users]
### 📈 Tech Debt Prevention
- **Hidden Debt**: [Future maintenance burdens]
- **Resource Allocation**: [% time for technical improvements]
- **Refactoring Plan**: [Concrete simplification roadmap]
### ✅ Simplified Alternatives
#### Option 1: Minimum Viable Architecture
- **Approach**: [Simplest possible solution]
- **Time Savings**: [Estimated development time reduction]
- **Trade-offs**: [What you give up for simplicity]
#### Option 2: Git-First Modern Rewrite
- **Approach**: [Complete rewrite with modern stack - zero legacy code]
- **Deployment**: [Atomic switchover using git tags]
- **Rollback Plan**: [git revert strategy if issues arise]
- **Client Breaking Changes**: [What clients need to update immediately]
#### Option 3: Nuclear Option - Complete Rebuild
- **Phase 1**: [Delete all legacy code - commit to git]
- **Phase 2**: [Build new implementation from scratch]
- **Phase 3**: [Deploy with comprehensive breaking changes documentation]
- **Rollback**: [git revert to previous working version if needed]
### 🎯 Action Items
- [ ] **DELETE**: [Specific legacy components to remove completely]
- [ ] **BREAK**: [APIs/interfaces to change without compatibility]
- [ ] **REWRITE**: [Components to rebuild from scratch]
- [ ] **DEPLOY**: [Atomic deployment strategy using git]
- [ ] **DOCUMENT**: [Breaking changes for clients]
```
## Trigger Phrases & Keywords
**IMMEDIATE REJECTION when documents contain:**
- "Backwards compatible"
- "Migration period"
- "Deprecation timeline"
- "Legacy support"
- "API versioning strategy"
- "Gradual rollout"
- "Maintain compatibility with"
- "Support existing clients"
- "Non-breaking changes only"
- "Wrapper for old interface"
**ALSO CHALLENGE:**
- "For future extensibility"
- "Enterprise-grade architecture"
- "Microservices architecture"
- "Event-driven design"
- "Repository pattern"
- "Abstract factory"
- "Technical debt" (without immediate deletion plan)
## Anti-Patterns to Challenge
### Architecture Anti-Patterns
- ❌ "Let's build it flexible so we can extend it later"
- ✅ "Let's build exactly what we need today and refactor when requirements change"
- ❌ "We need microservices for scalability"
- ✅ "We'll start with a monolith and extract services when pain points emerge"
- ❌ "We should abstract this interface for future implementations"
- ✅ "We'll add abstraction when we have a second implementation"
### Zero Backwards Compatibility Anti-Patterns
- ❌ "We can't break the API, some clients might be using it"
- ✅ "We're updating the API. Clients have 30 days to update or use a pinned version"
- ❌ "We'll maintain both old and new systems during transition"
- ✅ "We deploy the new system tomorrow. Git revert if there are issues"
- ❌ "Let's add versioning to be safe"
- ✅ "Let's design the API right the first time and iterate"
- ❌ "We need migration scripts for the database"
- ✅ "We backup, deploy new schema, restore if needed"
- ❌ "Some users might still be on the old flow"
- ✅ "All users get the new flow. We'll help them adapt"
### Tech Debt Anti-Patterns
- ❌ "We'll clean this up in a future sprint"
- ✅ "We'll allocate 25% of next sprint to address this technical debt"
## Success Metrics
Track effectiveness by measuring:
- **Reduction in estimated development time**
- **Decrease in number of planned components/services**
- **Elimination of "future-proofing" features**
- **Concrete tech debt resolution timelines**
- **Backwards compatibility sunset dates**
## Remember
Your job is to be the voice of ZERO backwards compatibility and aggressive simplification.
**Your mantras:**
- "Git is our rollback strategy"
- "Break fast, fix faster"
- "If it's not being used today, DELETE IT"
- "Clients can pin versions if they need stability"
- "We ship working software, not compatibility layers"
Push back on ANY hint of backwards compatibility. Challenge every assumption about supporting legacy systems. The only acceptable migration is a one-time, immediate cutover with clear documentation.
Be the sub-agent that says "Just delete the old code" when everyone else is trying to maintain it forever.
## Usage Examples
### Example 1: PRD Review
```bash
claude "Review this PRD for over-engineering" --subagent tech-debt-reviewer
```
### Example 2: Architecture Spec
```bash
# In Claude Code interactive mode
"Use the tech-debt-reviewer to analyze this microservices architecture proposal"
```
### Example 3: API Design Document
```bash
claude -p "Analyze this API specification for unnecessary complexity" --subagent tech-debt-reviewer
```
## Integration with Development Workflow
This sub-agent should be invoked:
- **During planning phases** before development begins
- **In architecture reviews** to challenge complexity
- **Before major refactoring** to ensure simplification
- **When technical debt discussions arise** to provide concrete alternatives
- **In design document reviews** to identify over-engineering early
The goal is to catch over-engineering in the planning phase, not after implementation when it's expensive to change.

agents/test-automator.md Normal file

@@ -0,0 +1,105 @@
---
name: test-automator
description: Test automation specialist for comprehensive test coverage. Use PROACTIVELY to create unit, integration, and E2E tests. MUST BE USED when implementing new features, fixing bugs, or improving test coverage. Expert in CI/CD pipeline setup and test automation strategies.
tools: Read, Write, MultiEdit, Bash, Grep, Glob, NotebookEdit, mcp__archon__health_check, mcp__archon__session_info, mcp__archon__get_available_sources, mcp__archon__perform_rag_query, mcp__archon__search_code_examples, mcp__archon__manage_project, mcp__archon__manage_task, mcp__archon__manage_document, mcp__archon__manage_versions, mcp__archon__get_project_features
model: claude-sonnet-4-5-20250929
---
# Purpose
You are an expert test automation engineer specializing in creating comprehensive, maintainable test suites that ensure code quality and prevent regressions.
## Instructions
When invoked, you must follow these steps:
1. **Analyze the codebase and requirements**
- Read relevant source files to understand implementation
- Identify testing requirements and edge cases
- Check existing test structure and patterns
- Determine appropriate test types needed (unit, integration, E2E)
2. **Plan test strategy**
- Apply test pyramid principles (many unit, fewer integration, minimal E2E)
- Define test boundaries and scope
- List specific test cases to implement
- Consider data fixtures and mocking needs
3. **Implement tests systematically**
- Start with unit tests for core logic
- Add integration tests for component interactions
- Create E2E tests for critical user paths
- Follow Arrange-Act-Assert pattern
- Use descriptive test names that explain the behavior
4. **Set up test infrastructure**
- Configure test runners and frameworks
- Create test data factories or fixtures
- Implement mock/stub utilities
- Set up test databases or containers if needed
5. **Configure CI/CD pipeline**
- Create or update CI configuration files
- Set up test execution stages
- Configure coverage reporting
- Add test result notifications
6. **Verify and optimize**
- Run all tests to ensure they pass
- Check coverage reports for gaps
- Ensure tests are deterministic (no flakiness)
- Optimize test execution time
**Best Practices:**
- **Write tests that test behavior, not implementation details**
- **Each test should have a single clear purpose**
- **Tests should be independent and run in any order**
- **Use meaningful test descriptions: "should [expected behavior] when [condition]"**
- **Implement proper setup and teardown for test isolation**
- **Avoid hard-coded values - use constants or fixtures**
- **Mock external dependencies at appropriate boundaries**
- **Ensure tests fail for the right reasons before making them pass**
- **Consider performance implications of test suites**
- **Document complex test scenarios and setup requirements**
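The Arrange-Act-Assert pattern and behavior-focused naming described above can be sketched as a minimal pytest-style example. `apply_discount` is a hypothetical unit under test, inlined only so the example is self-contained:

```python
# Hypothetical unit under test, inlined for self-containment.
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by the given percentage, rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_should_reduce_price_when_discount_is_valid():
    # Arrange: fixed inputs, no shared state with other tests
    price, percent = 100.0, 20.0
    # Act: one call, one behavior
    result = apply_discount(price, percent)
    # Assert: a single clear expectation
    assert result == 80.0


def test_should_reject_discount_above_100_percent():
    # Error paths get their own test with an explicit expectation.
    try:
        apply_discount(100.0, 150.0)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

Note the names read as specifications ("should [behavior] when [condition]"), so a failing test identifies the broken behavior without opening the file.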
**CRITICAL REQUIREMENTS:**
- **Never create solutions that only work for specific test inputs**
- **Implement general-purpose logic that handles all valid cases**
- **Focus on problem requirements, not just making tests pass**
- **Tests verify correctness, they don't define the solution**
- **Report any unreasonable requirements or incorrect tests**
## Test Organization
Structure tests following project conventions:
- Unit tests: Close to source files or in `tests/unit/`
- Integration tests: In `tests/integration/`
- E2E tests: In `tests/e2e/` or `cypress/` or `playwright/`
- Test utilities: In `tests/helpers/` or `tests/utils/`
## Coverage Standards
Aim for:
- Unit test coverage: 80%+ for business logic
- Integration test coverage: Critical paths and integrations
- E2E test coverage: Main user journeys and critical features
## Output
Provide:
1. Complete test files with all necessary imports and setup
2. Test data factories or fixtures as separate files
3. CI/CD configuration updates
4. Coverage configuration files
5. README updates documenting how to run tests
6. Summary of test coverage and any gaps identified


@@ -0,0 +1,58 @@
---
allowed-tools: Read, Write, Edit, MultiEdit, Grep, Glob, Bash
description: Enforce logging discipline protocol - eliminate console statements and implement structured logging
argument-hint: [TARGET_DIRECTORY] [LANGUAGE] [CHECK_ONLY]
---
# Enforce Logging Discipline
Scan `TARGET_DIRECTORY` for logging violations, eliminate all console statements, and implement structured logging following the discipline protocol. Save enforcement report to `OUTPUT_DIRECTORY` with violations found and fixes applied.
## Variables:
TARGET_DIRECTORY: $1
LANGUAGE: $2
CHECK_ONLY: $3
OUTPUT_DIRECTORY: .claude/data/
PROTOCOL_FILE: ai-docs/logging-discipline.md
## Instructions:
- Read `PROTOCOL_FILE` to understand the complete logging discipline requirements
- Scan `TARGET_DIRECTORY` for console.*, print(), and other logging violations
- For `LANGUAGE` JavaScript/TypeScript: configure ESLint no-console rule and implement Pino logger
- For `LANGUAGE` Python: configure Ruff rules and implement structlog
- If `CHECK_ONLY` is true, report violations without making changes
- Apply all fixes following the protocol's structured logging patterns
- Generate enforcement report with before/after comparison
## Workflow:
1. Read `PROTOCOL_FILE` to understand logging discipline requirements
2. Use Grep to scan `TARGET_DIRECTORY` for console.log, console.error, print() violations
3. Identify `LANGUAGE` from file extensions (.js, .ts, .py) if not specified
4. Check existing logger configuration (ESLint, Pino, structlog)
5. If `CHECK_ONLY` is false, configure appropriate linting rules for `LANGUAGE`
6. Install and configure structured logging library (Pino for JS/TS, structlog for Python)
7. Use MultiEdit to replace all console.* statements with structured logger calls
8. Ensure stdout/stderr separation follows protocol requirements
9. Add correlation IDs and redaction configuration
10. Run linting validation to confirm no violations remain
11. Generate enforcement report with violations count and fixes applied
12. Save report to `OUTPUT_DIRECTORY`/logging-discipline-report.md
## Report:
Logging Discipline Enforced
File: `OUTPUT_DIRECTORY`/logging-discipline-report.md
Target: `TARGET_DIRECTORY` (`LANGUAGE` files)
Violations Fixed:
- Console statements eliminated: [count]
- Structured logging implemented: [yes/no]
- ESLint/Ruff rules configured: [yes/no]
Protocol Compliance: [compliant/violations remaining]
## Relevant Files:
- [@ai-docs/logging-discipline.md]
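The check-only scan in step 2 of the workflow can be sketched in a few lines. This is a simplified stand-in for the Grep-based pass (the patterns and return shape are illustrative, not the command's actual implementation):

```python
import re
from pathlib import Path

# Violations the command hunts for: console.* in JS/TS, bare print() in Python.
VIOLATION_PATTERNS = {
    ".js": re.compile(r"\bconsole\.(log|error|warn|info|debug)\s*\("),
    ".ts": re.compile(r"\bconsole\.(log|error|warn|info|debug)\s*\("),
    ".py": re.compile(r"(?<![\w.])print\s*\("),
}


def scan_for_violations(root: Path) -> list[tuple[str, int, str]]:
    """Return (path, line number, line) for each logging violation under root."""
    violations = []
    for path in root.rglob("*"):
        pattern = VIOLATION_PATTERNS.get(path.suffix)
        if pattern is None or not path.is_file():
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if pattern.search(line):
                violations.append((str(path), lineno, line.strip()))
    return violations
```

With `CHECK_ONLY` true the command stops at this reporting stage; otherwise each hit becomes a MultiEdit replacement with a structured logger call (Pino or structlog per the protocol).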

hooks/hooks.json Normal file

@@ -0,0 +1,32 @@
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit|MultiEdit",
        "hooks": [
          {
            "type": "command",
            "command": "${CLAUDE_PLUGIN_ROOT}/hooks/scripts/code-quality-reporter.py",
            "description": "Report code quality metrics"
          },
          {
            "type": "command",
            "command": "${CLAUDE_PLUGIN_ROOT}/hooks/scripts/universal-linter.py",
            "description": "Universal code linting"
          }
        ]
      }
    ],
    "SessionStart": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "${CLAUDE_PLUGIN_ROOT}/hooks/scripts/pnpm-enforcer.py",
            "description": "Enforce pnpm usage"
          }
        ]
      }
    ]
  }
}
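Each command hook wired up above receives the tool event as a single JSON object on stdin and replies with a JSON object on stdout. A minimal PostToolUse hook compatible with this contract might look like the sketch below (field names mirror those the bundled scripts read; the message text is illustrative):

```python
#!/usr/bin/env python3
import json
import sys


def handle(event: dict) -> dict:
    """Build the hook response for one PostToolUse event."""
    tool = event.get("tool_name", "")
    file_path = event.get("tool_input", {}).get("file_path")
    # React only to the tools this matcher ("Write|Edit|MultiEdit") fires on.
    if tool in {"Write", "Edit", "MultiEdit"} and file_path:
        return {"message": f"checked {file_path}"}
    return {"message": ""}


def main() -> None:
    # A real hook calls this under `if __name__ == "__main__":` —
    # Claude Code pipes the event on stdin and reads the reply from stdout.
    print(json.dumps(handle(json.load(sys.stdin))))
```

Because the response is plain JSON on stdout, hooks like these stay language-agnostic and composable: the runner chains every matching hook in order.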


@@ -0,0 +1,382 @@
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.10"
# dependencies = []
# ///
import json
import sys
from datetime import datetime
from pathlib import Path
from typing import Any
class CodeQualityReporter:
    def __init__(self):
        self.session_file = Path(__file__).parent / ".session-quality.json"
        self.reports_dir = Path.cwd() / "docs" / "reports"
        self.ensure_reports_directory()
        self.load_session()

    def ensure_reports_directory(self):
        """Ensure reports directory exists"""
        try:
            self.reports_dir.mkdir(parents=True, exist_ok=True)
        except Exception:
            # Silently fail - don't interrupt the workflow
            pass

    def load_session(self):
        """Load or initialize session data"""
        try:
            if self.session_file.exists():
                with open(self.session_file, encoding="utf-8") as f:
                    data = json.load(f)
                # Convert list back to set for filesModified
                self.session = data
                if isinstance(data.get("filesModified"), list):
                    self.session["filesModified"] = set(data["filesModified"])
                else:
                    self.session["filesModified"] = set()
            else:
                self.session = self.create_new_session()
        except Exception:
            self.session = self.create_new_session()

    def create_new_session(self) -> dict[str, Any]:
        """Create a new session"""
        return {
            "startTime": datetime.now().isoformat(),
            "filesModified": set(),
            "violations": [],
            "improvements": [],
            "statistics": {
                "totalFiles": 0,
                "totalViolations": 0,
                "blockedOperations": 0,
                "autoFixed": 0,
            },
        }
    def process_event(self, input_data: dict[str, Any]) -> dict[str, str] | None:
        """Process hook event"""
        event = input_data.get("event")
        tool_name = input_data.get("tool_name")
        tool_input = input_data.get("tool_input", {})
        message = input_data.get("message")
        file_path = tool_input.get("file_path")
        # Security: Validate file path
        if file_path:
            try:
                resolved_path = Path(file_path).resolve()
                cwd = Path.cwd()
                # Ensure the path is within the current working directory
                resolved_path.relative_to(cwd)
            except (ValueError, OSError):
                return {"message": "Invalid or unsafe file path detected"}
        # Track file modifications
        if file_path and tool_name in ["Write", "Edit", "MultiEdit", "Task"]:
            self.session["filesModified"].add(file_path)
            self.session["statistics"]["totalFiles"] += 1
        # Track violations and improvements.
        # NOTE: the marker emojis were lost in the original source, leaving
        # always-true `"" in message` checks; "🚫" and "✅" are assumed here to
        # match the markers used elsewhere in this plugin.
        if message:
            if "🚫" in message:
                self.session["statistics"]["blockedOperations"] += 1
                self.record_violation(message, file_path)
            elif "⚠️" in message:
                self.session["statistics"]["totalViolations"] += 1
                self.record_violation(message, file_path)
            elif "✅" in message and "organized" in message:
                self.session["statistics"]["autoFixed"] += 1
                self.record_improvement(message, file_path)
        # Save session data
        self.save_session()
        # Generate report on Stop event
        if event == "Stop":
            return self.generate_report()
        return None
    def record_violation(self, message: str, file_path: str | None):
        """Record a violation"""
        lines = message.split("\n")
        violations = [
            line.strip()[2:]  # Remove '- '
            for line in lines
            if ":" in line and line.strip().startswith("-")
        ]
        for violation in violations:
            self.session["violations"].append(
                {
                    "file": file_path or "unknown",
                    "issue": violation,
                    "timestamp": datetime.now().isoformat(),
                }
            )

    def record_improvement(self, message: str, file_path: str | None):
        """Record an improvement"""
        self.session["improvements"].append(
            {
                "file": file_path or "unknown",
                "action": message.split("\n")[0],
                "timestamp": datetime.now().isoformat(),
            }
        )

    def save_session(self):
        """Save session data"""
        try:
            # Convert Set to List for JSON serialization
            session_data = {
                **self.session,
                "filesModified": list(self.session["filesModified"]),
            }
            with open(self.session_file, "w", encoding="utf-8") as f:
                json.dump(session_data, f, indent=2)
        except Exception:
            # Silently fail - don't interrupt the workflow
            pass
    def generate_report(self) -> dict[str, str]:
        """Generate quality report"""
        duration = self.calculate_duration()
        top_issues = self.get_top_issues()
        file_stats = self.get_file_statistics()
        report = [
            "# Code Quality Session Report",
            "",
            f"**Duration:** {duration} ",
            f'**Files Modified:** {len(self.session["filesModified"])} ',
            f"**Generated:** {datetime.now().isoformat()}",
            "",
            "## Statistics",
            "",
            f'- **Total Operations:** {self.session["statistics"]["totalFiles"]}',
            f'- **Violations Found:** {self.session["statistics"]["totalViolations"]}',
            f'- **Operations Blocked:** {self.session["statistics"]["blockedOperations"]}',
            f'- **Auto-fixes Applied:** {self.session["statistics"]["autoFixed"]}',
            "",
        ]
        if top_issues:
            report.extend(["## Top Issues", ""])
            for issue in top_issues:
                report.append(f'- **{issue["type"]}** ({issue["count"]} occurrences)')
            report.append("")
        if self.session["improvements"]:
            report.extend(["## Improvements Made", ""])
            for imp in self.session["improvements"][:5]:
                report.append(f'- **{Path(imp["file"]).name}:** {imp["action"]}')
            report.append("")
        if file_stats["mostProblematic"]:
            report.extend(["## Files Needing Attention", ""])
            for file in file_stats["mostProblematic"]:
                report.append(f'- **{file["path"]}** ({file["issues"]} issues)')
            report.append("")
        report.extend(["## Recommendations", ""])
        for rec in self.get_recommendations():
            report.append(f'- {rec.lstrip("- ")}')
        report.extend(
            [
                "",
                "## Reference",
                "",
                "For detailed coding standards, see: [docs/architecture/coding-standards.md](../architecture/coding-standards.md)",
            ]
        )
        # Save report to file with proper naming
        self.save_report_to_file("\n".join(report))
        # Clean up session file
        self.cleanup()
        return {"message": "📊 Code quality session report generated"}

    def save_report_to_file(self, report_content: str):
        """Save report to file with proper kebab-case naming"""
        try:
            timestamp = datetime.now().isoformat()[:19].replace(":", "-")
            filename = f"code-quality-session-{timestamp}.md"
            filepath = self.reports_dir / filename
            with open(filepath, "w", encoding="utf-8") as f:
                f.write(report_content)
            print(f"📁 Report saved: docs/reports/{filename}", file=sys.stderr)
        except Exception as error:
            print(f"⚠️ Failed to save report: {error}", file=sys.stderr)
    def calculate_duration(self) -> str:
        """Calculate session duration"""
        start = datetime.fromisoformat(self.session["startTime"])
        end = datetime.now()
        diff = end - start
        hours = int(diff.total_seconds() // 3600)
        minutes = int((diff.total_seconds() % 3600) // 60)
        if hours > 0:
            return f"{hours}h {minutes}m"
        return f"{minutes}m"

    def get_top_issues(self) -> list[dict[str, Any]]:
        """Get top issues by frequency"""
        issue_counts = {}
        for violation in self.session["violations"]:
            issue_type = violation["issue"].split(":")[0]
            issue_counts[issue_type] = issue_counts.get(issue_type, 0) + 1
        return sorted(
            [{"type": type_, "count": count} for type_, count in issue_counts.items()],
            key=lambda x: x["count"],
            reverse=True,
        )[:5]

    def get_file_statistics(self) -> dict[str, list[dict[str, Any]]]:
        """Get file statistics"""
        file_issues = {}
        for violation in self.session["violations"]:
            if violation["file"] and violation["file"] != "unknown":
                file_issues[violation["file"]] = (
                    file_issues.get(violation["file"], 0) + 1
                )
        most_problematic = sorted(
            [
                {"path": Path(path).name, "issues": issues}
                for path, issues in file_issues.items()
            ],
            key=lambda x: x["issues"],
            reverse=True,
        )[:3]
        return {"mostProblematic": most_problematic}
    def get_recommendations(self) -> list[str]:
        """Generate recommendations based on findings"""
        recommendations = []
        top_issues = self.get_top_issues()
        # Check for specific issue patterns
        has_any_type = any("Any Type" in issue["type"] for issue in top_issues)
        has_var = any("Var" in issue["type"] for issue in top_issues)
        has_null_safety = any("Null Safety" in issue["type"] for issue in top_issues)
        if has_any_type:
            recommendations.extend(
                [
                    ' - Replace "any" types with "unknown" or specific types',
                    " - Run: pnpm typecheck to identify type issues",
                ]
            )
        if has_var:
            recommendations.extend(
                [
                    ' - Use "const" or "let" instead of "var"',
                    " - Enable no-var ESLint rule for automatic detection",
                ]
            )
        if has_null_safety:
            recommendations.extend(
                [
                    " - Use optional chaining (?.) for nullable values",
                    " - Add null checks before property access",
                ]
            )
        if self.session["statistics"]["blockedOperations"] > 0:
            recommendations.extend(
                [
                    " - Review blocked operations and fix violations",
                    " - Run: pnpm biome:check for comprehensive linting",
                ]
            )
        if not recommendations:
            recommendations.extend(
                [
                    " - Great job! Continue following coding standards",
                    " - Consider running: pnpm code-quality for full validation",
                ]
            )
        return recommendations

    def cleanup(self):
        """Clean up session data"""
        try:
            if self.session_file.exists():
                self.session_file.unlink()
        except Exception:
            # Silently fail
            pass
def main():
    """Main execution"""
    try:
        input_data = json.load(sys.stdin)
        # Comprehensive logging functionality
        # Ensure log directory exists
        log_dir = Path.cwd() / "logs"
        log_dir.mkdir(parents=True, exist_ok=True)
        log_path = log_dir / "code_quality_reporter.json"
        # Read existing log data or initialize empty list
        if log_path.exists():
            with open(log_path) as f:
                try:
                    log_data = json.load(f)
                except (json.JSONDecodeError, ValueError):
                    log_data = []
        else:
            log_data = []
        # Add timestamp to the log entry
        timestamp = datetime.now().strftime("%b %d, %I:%M%p").lower()
        input_data["timestamp"] = timestamp
        # Process the event and get results
        reporter = CodeQualityReporter()
        result = reporter.process_event(input_data)
        # Add processing result to log entry if available
        if result:
            input_data["processing_result"] = result
        # Append new data to log
        log_data.append(input_data)
        # Write back to file with formatting
        with open(log_path, "w") as f:
            json.dump(log_data, f, indent=2)
        if result:
            print(json.dumps(result))
        else:
            # No output for non-Stop events
            print(json.dumps({"message": ""}))
    except Exception as error:
        print(json.dumps({"message": f"Reporter error: {error}"}))


if __name__ == "__main__":
    main()

hooks/scripts/pnpm-enforcer.py Executable file

@@ -0,0 +1,202 @@
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.10"
# dependencies = []
# ///
import json
import re
import sys
from datetime import datetime
from pathlib import Path
from typing import Any
class PnpmEnforcer:
    def __init__(self, input_data: dict[str, Any]):
        self.input = input_data

    def detect_npm_usage(self, command: str) -> dict[str, Any] | None:
        """Check if command contains npm or npx usage"""
        if not command or not isinstance(command, str):
            return None
        # Common npm/npx patterns to block
        npm_patterns = [
            r"(?:^|\s|;|&&|\|\|)npm\s+",
            r"(?:^|\s|;|&&|\|\|)npx\s+",
            r"(?:^|\s|;|&&|\|\|)npm$",
            r"(?:^|\s|;|&&|\|\|)npx$",
        ]
        for pattern in npm_patterns:
            match = re.search(pattern, command)
            if match:
                return {
                    "detected": True,
                    "original": command.strip(),
                    "suggestion": self.generate_pnpm_alternative(command),
                }
        return None
    def generate_pnpm_alternative(self, command: str) -> str:
        """Generate pnpm alternative for npm/npx commands"""
        # Common npm -> pnpm conversions
        conversions = [
            # Basic package management
            (r"npm install(?:\s|$)", "pnpm install"),
            (r"npm i(?:\s|$)", "pnpm install"),
            (r"npm install\s+(.+)", r"pnpm add \1"),
            (r"npm i\s+(.+)", r"pnpm add \1"),
            (r"npm install\s+--save-dev\s+(.+)", r"pnpm add -D \1"),
            (r"npm install\s+-D\s+(.+)", r"pnpm add -D \1"),
            # Global installs are project-specific in CDEV
            (
                r"npm install\s+--global\s+(.+)",
                r"# Global installs not supported - use npx or install as dev dependency",
            ),
            (
                r"npm install\s+-g\s+(.+)",
                r"# Global installs not supported - use npx or install as dev dependency",
            ),
            # Uninstall
            (r"npm uninstall\s+(.+)", r"pnpm remove \1"),
            (r"npm remove\s+(.+)", r"pnpm remove \1"),
            (r"npm rm\s+(.+)", r"pnpm remove \1"),
            # Scripts
            (r"npm run\s+(.+)", r"pnpm run \1"),
            (r"npm start", "pnpm start"),
            (r"npm test", "pnpm test"),
            (r"npm build", "pnpm build"),
            (r"npm dev", "pnpm dev"),
            # Other commands
            (r"npm list", "pnpm list"),
            (r"npm ls", "pnpm list"),
            (r"npm outdated", "pnpm outdated"),
            (r"npm update", "pnpm update"),
            (r"npm audit", "pnpm audit"),
            (r"npm ci", "pnpm install --frozen-lockfile"),
            # npx commands
            (r"npx\s+(.+)", r"pnpm dlx \1"),
            (r"npx", "pnpm dlx"),
        ]
        suggestion = command
        for pattern, replacement in conversions:
            if re.search(pattern, command):
                suggestion = re.sub(pattern, replacement, command)
                break
        # If no specific conversion found, do basic substitution
        if suggestion == command:
            suggestion = re.sub(r"(?:^|\s)npm(?:\s|$)", " pnpm ", command)
            suggestion = re.sub(r"(?:^|\s)npx(?:\s|$)", " pnpm dlx ", suggestion)
            suggestion = suggestion.strip()
        return suggestion
def validate(self) -> dict[str, Any]:
"""Validate and process the bash command"""
try:
# Parse Claude Code hook input format
tool_name = self.input.get("tool_name")
if tool_name != "Bash":
return self.approve()
tool_input = self.input.get("tool_input", {})
command = tool_input.get("command")
if not command:
return self.approve()
# Check for npm/npx usage
npm_usage = self.detect_npm_usage(command)
if npm_usage:
return self.block(npm_usage)
return self.approve()
except Exception as error:
return self.approve(f"PNPM enforcer error: {error}")
def approve(self, custom_message: str | None = None) -> dict[str, Any]:
"""Approve the command"""
return {"approve": True, "message": custom_message or "✅ Command approved"}
def block(self, npm_usage: dict[str, Any]) -> dict[str, Any]:
"""Block npm/npx command and suggest pnpm alternative"""
message = [
"🚫 NPM/NPX Usage Blocked",
"",
f'❌ Blocked command: {npm_usage["original"]}',
f'✅ Use this instead: {npm_usage["suggestion"]}',
"",
"📋 Why pnpm?",
" • Faster installation and better disk efficiency",
" • More reliable dependency resolution",
" • Better monorepo support",
" • Consistent with project standards",
"",
"💡 Quick pnpm reference:",
" • pnpm install → Install dependencies",
" • pnpm add <pkg> → Add package",
" • pnpm add -D <pkg> → Add dev dependency",
" • pnpm run <script> → Run package script",
" • pnpm dlx <cmd> → Execute package (like npx)",
"",
"Please use the suggested pnpm command instead.",
]
return {"approve": False, "message": "\n".join(message)}
def main():
"""Main execution"""
try:
input_data = json.load(sys.stdin)
# Ensure log directory exists
log_dir = Path.cwd() / "logs"
log_dir.mkdir(parents=True, exist_ok=True)
log_path = log_dir / "pnpm_enforcer.json"
# Read existing log data or initialize empty list
if log_path.exists():
with open(log_path) as f:
try:
log_data = json.load(f)
except (json.JSONDecodeError, ValueError):
log_data = []
else:
log_data = []
# Add timestamp to the log entry
timestamp = datetime.now().strftime("%b %d, %I:%M%p").lower()
input_data["timestamp"] = timestamp
# Process enforcement logic
enforcer = PnpmEnforcer(input_data)
result = enforcer.validate()
# Add result to log entry
input_data["enforcement_result"] = result
# Append new data to log
log_data.append(input_data)
# Write back to file with formatting
with open(log_path, "w") as f:
json.dump(log_data, f, indent=2)
print(json.dumps(result))
except Exception as error:
print(json.dumps({"approve": True, "message": f"PNPM enforcer error: {error}"}))
if __name__ == "__main__":
main()
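A trimmed, standalone sketch of the conversion logic above, with a handful of rules ordered most-specific-first since the first match wins (the rule set here is illustrative, not the enforcer's full table):

```python
import re

# Illustrative subset of the npm -> pnpm conversion rules
CONVERSIONS = [
    (r"npm install\s+-D\s+(.+)", r"pnpm add -D \1"),  # dev dependency
    (r"npm install\s+(.+)", r"pnpm add \1"),          # regular dependency
    (r"npm install$", "pnpm install"),                # bare install
    (r"npm run\s+(.+)", r"pnpm run \1"),
    (r"npx\s+(.+)", r"pnpm dlx \1"),
]


def to_pnpm(command: str) -> str:
    """Return the pnpm equivalent of an npm/npx command (first match wins)."""
    for pattern, replacement in CONVERSIONS:
        if re.search(pattern, command):
            return re.sub(pattern, replacement, command).strip()
    return command
```

Because rules are tried in order, the dev-dependency rule must precede the generic `npm install <pkg>` rule, which in turn must precede the bare `npm install` rule.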

hooks/scripts/universal-linter.py Executable file

@@ -0,0 +1,509 @@
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.10"
# dependencies = []
# ///
import hashlib
import json
import subprocess
import sys
from datetime import datetime, timedelta
from pathlib import Path
from typing import Any
# Simple file validation cache to prevent redundant work
validation_cache = {}
CACHE_TTL = timedelta(minutes=5)
def get_file_hash(file_path: str) -> str | None:
"""Generate file hash for cache key"""
try:
path = Path(file_path)
if not path.exists():
return None
content = path.read_text(encoding="utf-8")
mtime = path.stat().st_mtime
return hashlib.md5(f"{content}{mtime}".encode()).hexdigest()
except Exception:
return None
def is_cached_valid(file_path: str) -> dict[str, Any] | None:
"""Check if file was recently validated"""
file_hash = get_file_hash(file_path)
if not file_hash:
return None
cache_key = f"{file_path}:{file_hash}"
cached = validation_cache.get(cache_key)
if cached and datetime.now() - cached["timestamp"] < CACHE_TTL:
return cached["result"]
return None
def cache_result(file_path: str, result: dict[str, Any]):
"""Cache validation result"""
file_hash = get_file_hash(file_path)
if not file_hash:
return
cache_key = f"{file_path}:{file_hash}"
validation_cache[cache_key] = {"result": result, "timestamp": datetime.now()}
def should_validate_file(file_path: str, project_type: str) -> bool:
"""Check if file should be validated"""
if not file_path:
return False
# Skip non-existent files
if not Path(file_path).exists():
return False
# Get file extension
ext = Path(file_path).suffix
# Check based on project type
if project_type == "javascript":
return ext in [".ts", ".tsx", ".js", ".jsx", ".mjs", ".cjs"]
elif project_type == "python":
return ext in [".py", ".pyi"]
elif project_type == "rust":
return ext in [".rs"]
elif project_type == "go":
return ext in [".go"]
# For unknown project types, try to validate common code files
return ext in [".ts", ".tsx", ".js", ".jsx", ".py", ".rs", ".go"]
def detect_package_manager() -> str:
"""Detect which package manager to use based on project files"""
project_root = Path.cwd()
# Check for lock files in order of preference
if (project_root / "pnpm-lock.yaml").exists():
return "pnpm"
elif (project_root / "yarn.lock").exists():
return "yarn"
elif (project_root / "package-lock.json").exists():
return "npm"
# Fallback to npm if no lock file found
return "npm"
def detect_project_type() -> str:
"""Detect project type based on files and dependencies"""
project_root = Path.cwd()
# Check for Python files
if (project_root / "pyproject.toml").exists() or (
project_root / "requirements.txt"
).exists():
return "python"
# Check for Rust files
if (project_root / "Cargo.toml").exists():
return "rust"
# Check for package.json (JavaScript/TypeScript)
if (project_root / "package.json").exists():
return "javascript"
# Check for Go files
if (project_root / "go.mod").exists():
return "go"
return "unknown"
def get_available_linters(project_type: str) -> list:
"""Get available linting tools for the project"""
linters = []
project_root = Path.cwd()
if project_type == "python":
# Check for Python linters
if subprocess.run(["which", "ruff"], capture_output=True).returncode == 0:
linters.append(("ruff", ["ruff", "check", "--fix"]))
if subprocess.run(["which", "black"], capture_output=True).returncode == 0:
linters.append(("black", ["black", "."]))
if subprocess.run(["which", "flake8"], capture_output=True).returncode == 0:
linters.append(("flake8", ["flake8"]))
if subprocess.run(["which", "pylint"], capture_output=True).returncode == 0:
linters.append(("pylint", ["pylint"]))
elif project_type == "javascript":
package_manager = detect_package_manager()
# Check package.json for available scripts and dependencies
package_json_path = project_root / "package.json"
if package_json_path.exists():
try:
with open(package_json_path) as f:
package_data = json.load(f)
scripts = package_data.get("scripts", {})
deps = {
**package_data.get("dependencies", {}),
**package_data.get("devDependencies", {}),
}
# Check for common linting scripts
if "lint" in scripts:
linters.append(("lint", [package_manager, "run", "lint"]))
if "lint:fix" in scripts:
linters.append(("lint:fix", [package_manager, "run", "lint:fix"]))
# Check for Biome
if "biome" in scripts or "@biomejs/biome" in deps:
linters.append(
("biome", [package_manager, "biome", "check", "--apply"])
)
# Check for ESLint
if "eslint" in deps:
linters.append(("eslint", [package_manager, "run", "lint"]))
# Check for Prettier
if "prettier" in deps:
linters.append(("prettier", [package_manager, "run", "format"]))
except (json.JSONDecodeError, FileNotFoundError):
pass
elif project_type == "rust":
# Check for Rust tools
if subprocess.run(["which", "cargo"], capture_output=True).returncode == 0:
linters.append(("clippy", ["cargo", "clippy", "--fix", "--allow-dirty"]))
linters.append(("fmt", ["cargo", "fmt"]))
elif project_type == "go":
# Check for Go tools
if subprocess.run(["which", "go"], capture_output=True).returncode == 0:
linters.append(("fmt", ["go", "fmt", "./..."]))
linters.append(("vet", ["go", "vet", "./..."]))
if (
subprocess.run(["which", "golangci-lint"], capture_output=True).returncode
== 0
):
linters.append(("golangci-lint", ["golangci-lint", "run", "--fix"]))
return linters
def get_available_type_checkers(project_type: str) -> list:
"""Get available type checking tools for the project"""
type_checkers = []
project_root = Path.cwd()
if project_type == "python":
if subprocess.run(["which", "mypy"], capture_output=True).returncode == 0:
type_checkers.append(("mypy", ["mypy", "."]))
if subprocess.run(["which", "pyright"], capture_output=True).returncode == 0:
type_checkers.append(("pyright", ["pyright"]))
elif project_type == "javascript":
package_manager = detect_package_manager()
package_json_path = project_root / "package.json"
if package_json_path.exists():
try:
with open(package_json_path) as f:
package_data = json.load(f)
scripts = package_data.get("scripts", {})
deps = {
**package_data.get("dependencies", {}),
**package_data.get("devDependencies", {}),
}
# Check for TypeScript
if "typecheck" in scripts:
type_checkers.append(
("typecheck", [package_manager, "run", "typecheck"])
)
elif "typescript" in deps:
type_checkers.append(("tsc", [package_manager, "tsc", "--noEmit"]))
except (json.JSONDecodeError, FileNotFoundError):
pass
elif project_type == "rust":
# Rust has built-in type checking via cargo check
if subprocess.run(["which", "cargo"], capture_output=True).returncode == 0:
type_checkers.append(("check", ["cargo", "check"]))
elif project_type == "go":
# Go has built-in type checking via go build
if subprocess.run(["which", "go"], capture_output=True).returncode == 0:
type_checkers.append(("build", ["go", "build", "./..."]))
return type_checkers
def run_linting_checks(file_path: str, project_type: str) -> list:
"""Run all available linting checks"""
results = []
linters = get_available_linters(project_type)
if not linters:
return [
{
"success": True,
"message": " No linters available, skipping checks",
"output": "",
}
]
for linter_name, linter_cmd in linters:
try:
# For file-specific linters, add the file path
if linter_name in ["ruff", "biome"] and file_path:
cmd = linter_cmd + [file_path]
else:
cmd = linter_cmd
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
results.append(
{
"success": True,
"message": f'{linter_name} check passed for {Path(file_path).name if file_path else "project"}',
"output": result.stdout,
"linter": linter_name,
}
)
except subprocess.CalledProcessError as error:
error_output = error.stdout or error.stderr or str(error)
results.append(
{
"success": False,
"message": f'{linter_name} found issues in {Path(file_path).name if file_path else "project"}',
"output": error_output,
"fix": f'Run: {" ".join(cmd)}',
"linter": linter_name,
}
)
except FileNotFoundError:
results.append(
{
"success": True,
"message": f" {linter_name} not available, skipping check",
"output": "",
"linter": linter_name,
}
)
return results
def run_type_checks(project_type: str) -> list:
"""Run all available type checking"""
results = []
type_checkers = get_available_type_checkers(project_type)
if not type_checkers:
return [
{
"success": True,
"message": " No type checkers available, skipping checks",
"output": "",
}
]
for checker_name, checker_cmd in type_checkers:
try:
result = subprocess.run(
checker_cmd, capture_output=True, text=True, check=True
)
results.append(
{
"success": True,
"message": f"{checker_name} type check passed",
"output": result.stdout,
"checker": checker_name,
}
)
except subprocess.CalledProcessError as error:
error_output = error.stdout or error.stderr or str(error)
results.append(
{
"success": False,
"message": f"{checker_name} type check failed",
"output": error_output,
"fix": f'Run: {" ".join(checker_cmd)}',
"checker": checker_name,
}
)
except FileNotFoundError:
results.append(
{
"success": True,
"message": f" {checker_name} not available, skipping check",
"output": "",
"checker": checker_name,
}
)
return results
def validate_file(file_path: str) -> dict[str, Any]:
"""Validate a single file"""
# Check cache first
cached = is_cached_valid(file_path)
if cached:
return cached
# Detect project type
project_type = detect_project_type()
# Check if file should be validated
if not should_validate_file(file_path, project_type):
result = {
"approve": True,
"message": f" Skipped {Path(file_path).name} (not a supported file type for {project_type} project)",
}
return result
# Run linting checks
lint_results = run_linting_checks(file_path, project_type)
# Run type checking (project-wide)
type_results = run_type_checks(project_type)
# Combine all results
all_results = lint_results + type_results
all_passed = all(result["success"] for result in all_results)
if all_passed:
successful_tools = [
r.get("linter", r.get("checker", "tool"))
for r in all_results
if r["success"]
]
tools_used = ", ".join(filter(None, successful_tools))
result = {
"approve": True,
"message": f"✅ All checks passed for {Path(file_path).name}"
+ (f" ({tools_used})" if tools_used else ""),
}
else:
issues = []
fixes = []
for check_result in all_results:
if not check_result["success"]:
issues.append(check_result["message"])
if "fix" in check_result:
fixes.append(check_result["fix"])
message_parts = ["❌ Validation failed:"] + issues
if fixes:
message_parts.extend(["", "🔧 Fixes:"] + fixes)
result = {"approve": False, "message": "\n".join(message_parts)}
# Cache result
cache_result(file_path, result)
return result
def main():
"""Main execution"""
try:
input_data = json.load(sys.stdin)
# Extract file path from tool input
tool_input = input_data.get("tool_input", {})
file_path = tool_input.get("file_path")
if not file_path:
# No file path provided, approve by default
result = {
"approve": True,
"message": " No file path provided, skipping validation",
}
else:
# Show user-friendly message that linter is running
file_name = Path(file_path).name if file_path else "file"
print(f"🔍 Running linter on {file_name}...", file=sys.stderr)
result = validate_file(file_path)
# Show result to user
if result.get("approve", True):
print(f"✨ Linting complete for {file_name}", file=sys.stderr)
else:
print(
f"🔧 Linter found issues in {file_name} (see details above)",
file=sys.stderr,
)
# Log the linting activity
try:
# Ensure log directory exists
log_dir = Path.cwd() / "logs"
log_dir.mkdir(parents=True, exist_ok=True)
log_path = log_dir / "universal_linter.json"
# Read existing log data or initialize empty list
if log_path.exists():
with open(log_path) as f:
try:
log_data = json.load(f)
except (json.JSONDecodeError, ValueError):
log_data = []
else:
log_data = []
# Create log entry with relevant data
log_entry = {
"file_path": file_path,
"project_type": detect_project_type() if file_path else "unknown",
"result": result.get("approve", True),
"message": result.get("message", ""),
"tool_input": tool_input,
"session_id": input_data.get("session_id", "unknown"),
}
# Add timestamp to the log entry
timestamp = datetime.now().strftime("%b %d, %I:%M%p").lower()
log_entry["timestamp"] = timestamp
# Append new data
log_data.append(log_entry)
# Write back to file with formatting
with open(log_path, "w") as f:
json.dump(log_data, f, indent=2)
except Exception:
# Don't let logging errors break the hook
pass
print(json.dumps(result))
except Exception as error:
print(
json.dumps({"approve": True, "message": f"Universal linter error: {error}"})
)
if __name__ == "__main__":
main()

plugin.lock.json Normal file

@@ -0,0 +1,69 @@
{
"$schema": "internal://schemas/plugin.lock.v1.json",
"pluginId": "gh:AojdevStudio/dev-utils-marketplace:code-quality-enforcement",
"normalized": {
"repo": null,
"ref": "refs/tags/v20251128.0",
"commit": "9fd8f963156292d007a8ea005ba1aa4a01151c6a",
"treeHash": "a51c57245b1427956b61de3b7afe3f3924e70c9dacb36e660fbce1e5747bd1fd",
"generatedAt": "2025-11-28T10:24:55.759990Z",
"toolVersion": "publish_plugins.py@0.2.0"
},
"origin": {
"remote": "git@github.com:zhongweili/42plugin-data.git",
"branch": "master",
"commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
"repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
},
"manifest": {
"name": "code-quality-enforcement",
"description": "Meta-package: Installs all code-quality-enforcement components (commands + agents + hooks)",
"version": "3.0.0"
},
"content": {
"files": [
{
"path": "README.md",
"sha256": "444e7aec5e0920a467c2f075b3cd81891f2cae141f5ffdb13e0153dc6e0ec7ba"
},
{
"path": "agents/test-automator.md",
"sha256": "c2e2f30e50056ea3ac2a14f97b94e3a6e748155f833bc8f7cde4c110821bc944"
},
{
"path": "agents/tech-debt-reviewer.md",
"sha256": "a7584464fbf934326f314e57fc922dd2a9cf809f34f203cfb1e493d6688ecec5"
},
{
"path": "hooks/hooks.json",
"sha256": "55565dbddf6825b772d6ab1442170ba7937b087de3fb3440892d7ca759f808a2"
},
{
"path": "hooks/scripts/universal-linter.py",
"sha256": "0fb34899d3ee5ea0ca24c2025f88888eb303c2d09b1c19db7f55ea919e4c279e"
},
{
"path": "hooks/scripts/pnpm-enforcer.py",
"sha256": "4c34915c301d545af35837c81c3326430e28902f645c79f8aff88af726f1d301"
},
{
"path": "hooks/scripts/code-quality-reporter.py",
"sha256": "332a9ba27a079591c945d2955177e3e13df8a093f563bb71f9e4daa629b57b5c"
},
{
"path": ".claude-plugin/plugin.json",
"sha256": "dd80d837a12288d14fb5792124a29fd84ee8d042a354771f02ed196dd507503d"
},
{
"path": "commands/enforce-logging-discipline.md",
"sha256": "947587b368117aca0ed9e72881f872d9fb10b1c5f9a0c15a0fde0ffafefb3cbe"
}
],
"dirSha256": "a51c57245b1427956b61de3b7afe3f3924e70c9dacb36e660fbce1e5747bd1fd"
},
"security": {
"scannedAt": null,
"scannerVersion": null,
"flags": []
}
}
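The `content.files` entries above pin each file to a sha256 digest. A minimal sketch of computing such a digest (the chunked read is our choice, not taken from the publishing tool):

```python
import hashlib
from pathlib import Path


def file_sha256(path: Path) -> str:
    """Hex sha256 digest of a file's bytes, as recorded in plugin.lock.json."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large files don't need to fit in memory
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()
```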