Initial commit

Zhongwei Li
2025-11-29 17:57:28 +08:00
commit e063391898
27 changed files with 3055 additions and 0 deletions


@@ -0,0 +1,165 @@
---
allowed-tools: Bash(find:*), Bash(ls:*), Bash(tree:*), Bash(grep:*), Bash(wc:*), Bash(du:*), Bash(head:*), Bash(tail:*), Bash(cat:*), Bash(touch:*)
description: Generate comprehensive analysis and documentation of entire codebase
---
# Comprehensive Codebase Analysis
## Project Discovery Phase
### Directory Structure
!`find . -type d -not -path "./node_modules/*" -not -path "./.git/*" -not -path "./dist/*" -not -path "./build/*" -not -path "./.next/*" -not -path "./coverage/*" | sort`
### Complete File Tree
!`tree -a -L 4 -I 'node_modules|.git|dist|build|.next|coverage|*.log'`
### File Count and Size Analysis
- Total files: !`find . -type f -not -path "./node_modules/*" -not -path "./.git/*" | wc -l`
- Code files: !`find . -name "*.js" -o -name "*.ts" -o -name "*.jsx" -o -name "*.tsx" -o -name "*.py" -o -name "*.java" -o -name "*.php" -o -name "*.rb" -o -name "*.go" -o -name "*.rs" -o -name "*.cpp" -o -name "*.c" | grep -v node_modules | wc -l`
- Project size: !`find . -type f -not -path "./node_modules/*" -not -path "./.git/*" -not -path "./dist/*" -not -path "./build/*" -not -path "./.next/*" -not -path "./coverage/*" -exec du -ch {} + 2>/dev/null | grep total$ | cut -f1`
## Configuration Files Analysis
### Package Management
- Package.json: @package.json
- Package-lock.json exists: !`ls package-lock.json 2>/dev/null || echo "Not found"`
- Yarn.lock exists: !`ls yarn.lock 2>/dev/null || echo "Not found"`
- Requirements.txt: @requirements.txt
- Gemfile: @Gemfile
- Cargo.toml: @Cargo.toml
- Go.mod: @go.mod
- Composer.json: @composer.json
### Build & Dev Tools
- Webpack config: @webpack.config.js
- Vite config: @vite.config.js
- Rollup config: @rollup.config.js
- Babel config: @.babelrc
- ESLint config: @.eslintrc.js
- Prettier config: @.prettierrc
- TypeScript config: @tsconfig.json
- Tailwind config: @tailwind.config.js
- Next.js config: @next.config.js
### Environment & Docker
- .env files: !`find . -name ".env*" -type f 2>/dev/null | grep . || echo "No .env files found"`
- Docker files: !`find . \( -name "Dockerfile*" -o -name "docker-compose*" \) 2>/dev/null | grep . || echo "No Docker files found"`
- Kubernetes files: !`find . -name "*.yaml" -o -name "*.yml" 2>/dev/null | grep -E "(k8s|kubernetes|deployment|service)" || echo "No Kubernetes files found"`
### CI/CD Configuration
- GitHub Actions: !`find .github \( -name "*.yml" -o -name "*.yaml" \) 2>/dev/null | grep . || echo "No GitHub Actions"`
- GitLab CI: @.gitlab-ci.yml
- Travis CI: @.travis.yml
- Circle CI: @.circleci/config.yml
## Source Code Analysis
### Main Application Files
- Main entry points: !`find . -name "main.*" -o -name "index.*" -o -name "app.*" -o -name "server.*" | grep -v node_modules | head -10`
- Routes/Controllers: !`find . \( -path "*/routes/*" -o -path "*/controllers/*" -o -path "*/api/*" \) 2>/dev/null | grep -v node_modules | head -20 | grep . || echo "No routes/controllers found"`
- Models/Schemas: !`find . \( -path "*/models/*" -o -path "*/schemas/*" -o -path "*/entities/*" \) 2>/dev/null | grep -v node_modules | head -20 | grep . || echo "No models/schemas found"`
- Components: !`find . \( -path "*/components/*" -o -path "*/views/*" -o -path "*/pages/*" \) 2>/dev/null | grep -v node_modules | head -20 | grep . || echo "No components/views/pages found"`
### Database & Storage
- Database configs: !`find . -name "*database*" -o -name "*db*" -o -name "*connection*" | grep -v node_modules | head -10`
- Migration files: !`find . \( -path "*/migrations/*" -o -path "*/migrate/*" \) 2>/dev/null | head -10 | grep . || echo "No migration files found"`
- Seed files: !`find . \( -path "*/seeds/*" -o -path "*/seeders/*" \) 2>/dev/null | head -10 | grep . || echo "No seed files found"`
### Testing Files
- Test files: !`find . -name "*test*" -o -name "*spec*" | grep -v node_modules | head -15`
- Test config: @jest.config.js
### API Documentation
- API docs: !`find . \( \( -name "*api*" -a -name "*.md" \) -o -name "swagger*" -o -name "openapi*" \) 2>/dev/null | head -10 | grep . || echo "No API documentation found"`
## Key Files Content Analysis
### Root Configuration Files
@README.md
@LICENSE
@.gitignore
### Main Application Entry Points
!`find . -name "index.js" -o -name "index.ts" -o -name "main.js" -o -name "main.ts" -o -name "app.js" -o -name "app.ts" -o -name "server.js" -o -name "server.ts" 2>/dev/null | grep -v node_modules | head -5 | while read file; do echo "=== $file ==="; head -50 "$file" 2>/dev/null || echo "Could not read $file"; echo; done || echo "No main entry point files found"`
## Your Task
Based on all the discovered information above, create a comprehensive analysis that includes:
## 1. Project Overview
- Project type (web app, API, library, etc.)
- Tech stack and frameworks
- Architecture pattern (MVC, microservices, etc.)
- Language(s) and versions
## 2. Detailed Directory Structure Analysis
For each major directory, explain:
- Purpose and role in the application
- Key files and their functions
- How it connects to other parts
## 3. File-by-File Breakdown
Organize by category:
- **Core Application Files**: Main entry points, routing, business logic
- **Configuration Files**: Build tools, environment, deployment
- **Data Layer**: Models, database connections, migrations
- **Frontend/UI**: Components, pages, styles, assets
- **Testing**: Test files, mocks, fixtures
- **Documentation**: README, API docs, guides
- **DevOps**: CI/CD, Docker, deployment scripts
## 4. API Endpoints Analysis
If applicable, document:
- All discovered endpoints and their methods
- Authentication/authorization patterns
- Request/response formats
- API versioning strategy
## 5. Architecture Deep Dive
Explain:
- Overall application architecture
- Data flow and request lifecycle
- Key design patterns used
- Dependencies between modules
## 6. Environment & Setup Analysis
Document:
- Required environment variables
- Installation and setup process
- Development workflow
- Production deployment strategy
## 7. Technology Stack Breakdown
List and explain:
- Runtime environment
- Frameworks and libraries
- Database technologies
- Build tools and bundlers
- Testing frameworks
- Deployment technologies
## 8. Visual Architecture Diagram
Create a comprehensive diagram showing:
- High-level system architecture
- Component relationships
- Data flow
- External integrations
- File structure hierarchy
Use ASCII art, mermaid syntax, or detailed text representation to show:
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│    Frontend     │────▶│       API       │────▶│    Database     │
│   (React/Vue)   │     │  (Node/Flask)   │     │ (Postgres/Mongo)│
└─────────────────┘     └─────────────────┘     └─────────────────┘
## 9. Key Insights & Recommendations
Provide:
- Code quality assessment
- Potential improvements
- Security considerations
- Performance optimization opportunities
- Maintainability suggestions
Think deeply about the codebase structure and provide comprehensive insights that would be valuable for new developers joining the project or for architectural decision-making.
At the end, write all of the output into a file called `codebase_analysis.md`.
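If you want to sanity-check the written file afterwards, the Bash tools already listed in `allowed-tools` are enough. A minimal sketch (the filename is the one requested above; the flags are standard coreutils):
```bash
# Optional sanity check of the generated report (uses only ls, wc, and head from allowed-tools)
ls -l codebase_analysis.md        # confirm the file exists
wc -l codebase_analysis.md        # rough size of the analysis
head -20 codebase_analysis.md     # spot-check the opening sections
```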

commands/build.md

@@ -0,0 +1,29 @@
---
description: Build the codebase based on a plan using structured approach
argument-hint: [path-to-plan]
allowed-tools: Bash, Read, Write, Glob, Grep, Task
---
# Build
Follow the `Workflow` to implement the plan at `PATH_TO_PLAN`, then `Report` the completed work.
## Variables
PATH_TO_PLAN: $ARGUMENTS
## Workflow
### 1. Initial Setup
- If no `PATH_TO_PLAN` is provided, STOP immediately and ask the user to provide it
### 2. Plan Analysis
- Read the plan at `PATH_TO_PLAN`, think hard about it, and write the code to implement it in the codebase.
### 3. Memory Management
- Document any architectural decisions or patterns discovered
## Report
- Summarize the work you've just done in a concise bullet point list
- Report the files and total lines changed with `git diff --stat`
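A minimal sketch of that report step, assuming the implementation work is still in the working tree (adjust to `--staged` or a commit range if the changes were already staged or committed):
```bash
# Summarize the completed work for the report (read-only)
git diff --stat              # per-file changes plus the total-lines summary
git diff --stat | tail -1    # just the "N files changed, M insertions(+)" line
```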

commands/code-review.md

@@ -0,0 +1,45 @@
---
allowed-tools: Bash(git diff:*), Bash(git log:*), Bash(git status:*), Bash(git branch:*), mcp__serena__get_symbols_overview, mcp__serena__find_symbol, mcp__serena__find_referencing_symbols, mcp__serena__search_for_pattern, mcp__serena__list_dir
description: Perform comprehensive code review analysis of recent changes with semantic code understanding
argument-hint: [Optional: specify file paths or commit range for focused review]
---
# Code Review Analysis
Analyze the changes identified by `TARGET_SCOPE` and `GIT_CONTEXT` using semantic code understanding to perform a comprehensive code review covering quality, security, performance, testing, and documentation, with specific actionable feedback saved to `REVIEW_OUTPUT`.
## Variables:
TARGET_SCOPE: $1 (optional - specific files, commit range, or "recent" for latest changes)
GIT_CONTEXT: recent changes and commit history
REVIEW_CRITERIA: code quality, security, performance, testing, documentation
ANALYSIS_DEPTH: semantic symbol analysis with cross-references
REVIEW_OUTPUT: logs/code-review-analysis.md
## Workflow:
1. Gather git context using `git status`, `git diff HEAD~1`, `git log --oneline -5`, and `git branch --show-current` (see the sketch after this list)
2. Identify changed files from git diff output for semantic analysis scope
3. Use `mcp__serena__list_dir` to understand project structure and identify key directories
4. For each modified file, use `mcp__serena__get_symbols_overview` to understand code structure and symbols
5. Use `mcp__serena__find_symbol` with `include_body=true` for detailed analysis of modified functions/classes
6. Apply `mcp__serena__find_referencing_symbols` to understand impact of changes on dependent code
7. Use `mcp__serena__search_for_pattern` to identify potential security patterns, anti-patterns, or code smells
8. Analyze code quality: readability, maintainability, adherence to project conventions and best practices
9. Evaluate security: scan for vulnerabilities, input validation, authentication, authorization issues
10. Assess performance: identify bottlenecks, inefficient algorithms, resource usage patterns
11. Review testing: evaluate test coverage, test quality, missing test scenarios for changed code
12. Verify documentation: check inline comments, README updates, API documentation completeness
13. Generate specific, actionable feedback with file:line references and suggested improvements
14. Save comprehensive review analysis to `REVIEW_OUTPUT` with prioritized recommendations
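A minimal sketch of the git-context gathering in step 1; all four commands are read-only and covered by the `allowed-tools` list above:
```bash
# Step 1: gather git context for the review scope
git branch --show-current    # which branch is being reviewed
git status                   # working tree and staging state
git diff HEAD~1              # the changes under review
git log --oneline -5         # recent history for context
```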
## Report:
Code Review Analysis Complete
File: `REVIEW_OUTPUT`
Topic: Comprehensive semantic code review of `TARGET_SCOPE` with actionable recommendations
Key Components:
- Git context analysis with change scope identification
- Semantic symbol analysis using serena-mcp tools for deep code understanding
- Multi-dimensional review covering quality, security, performance, testing, documentation
- Specific actionable feedback with file:line references and improvement suggestions

commands/commit.md

@@ -0,0 +1,63 @@
---
allowed-tools: Bash, Read, Write, Task
description: Intelligent commits with hook-aware strategy detection
argument-hint: [Optional: --no-verify or custom message]
---
# Commit
Use the git-flow-manager sub-agent to intelligently analyze the staging area and the project's formatting hooks, then execute the optimal commit strategy (PARALLEL/COORDINATED/HYBRID) to prevent conflicts while maintaining commit organization. Parse `$ARGUMENTS` for commit options, run pre-commit checks, analyze changes for atomic splitting, and execute commits with conventional messages.
## Variables:
COMMIT_OPTIONS: $ARGUMENTS
STRATEGY_MODE: auto-detected
COMMIT_COUNT: auto-calculated
HOOK_ANALYSIS: auto-performed
## Instructions:
- Parse `COMMIT_OPTIONS` to extract flags like `--no-verify` or custom messages
- Use the git-flow-manager sub-agent for comprehensive workflow management with automatic strategy detection
- Auto-detect formatting hook aggressiveness and choose optimal commit strategy
- Run pre-commit checks unless `--no-verify` flag is present
- Validate `.gitignore` configuration and alert for large files (>1MB)
- Auto-stage modified files if none staged, analyze changes for atomic splitting
- Execute commits using detected strategy with conventional messages and emoji
- Include issue references for GitHub/Linear integration when applicable
## Workflow:
1. Deploy git-flow-manager sub-agent with strategy detection capabilities
2. Run `!git status --porcelain` to analyze current repository state
3. Execute formatting hook analysis to determine the optimal commit strategy (see the sketch after this list)
4. Check for `--no-verify` flag in `COMMIT_OPTIONS`, skip pre-commit checks if present
5. Run pre-commit validation: `!pnpm lint`, `!pnpm build`, `!pnpm generate:docs`
6. Validate `.gitignore` configuration and check for large files
7. Auto-stage files with `!git add .` if no files currently staged
8. Execute `!git diff --staged --name-status` to analyze staged changes
9. Analyze changes for atomic commit splitting opportunities
10. Execute commits using detected strategy (PARALLEL/COORDINATED/HYBRID)
11. Generate conventional commit messages with appropriate emoji from @ai-docs/emoji-commit-ref.yaml
12. Include issue references in commit body for automatic linking
13. Execute `!git commit` with generated messages
14. Display commit summary using `!git log --oneline -1`
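One way the formatting-hook analysis in step 3 might be probed from the shell, as a sketch only. It assumes hooks live in the usual places (`.husky/`, `.pre-commit-config.yaml`, a `lint-staged` block in `package.json`); the actual heuristics belong to the git-flow-manager agent, not this file:
```bash
# Hypothetical hook-aggressiveness probe (paths and patterns are assumptions)
[ -f .pre-commit-config.yaml ] && echo "pre-commit framework present"
[ -d .husky ] && grep -rlE "lint-staged|prettier|eslint .*--fix" .husky \
  && echo "husky hooks may rewrite staged files"
grep -q '"lint-staged"' package.json 2>/dev/null && echo "lint-staged configured"
```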
## Report:
Intelligent Commit Complete
Strategy: `STRATEGY_MODE` (auto-detected based on formatting hook analysis)
Commits: `COMMIT_COUNT` commits created and executed
Topic: Hook-aware commit processing with adaptive strategy selection
Key Components:
- Automatic strategy detection preventing formatting hook conflicts
- Conventional commit messages with appropriate emoji
- Pre-commit validation and quality gates
- Atomic commit splitting for logical organization
- GitHub/Linear issue integration
- Clean working directory achieved without conflicts
## Relevant Files:
- @~/.claude/agents/git-flow-manager.md
- @ai-docs/emoji-commit-ref.yaml

commands/git-status.md

@@ -0,0 +1,8 @@
---
allowed-tools: Bash, Read
description: Analyze current git repository state and differences from remote
---
# Git Status
Analyze the current git repository state, including status, branch information, differences from the remote, and recent commits. Use `$ARGUMENTS` for specific branch or filter options, and provide an actionable summary with next-step recommendations, highlighting any uncommitted changes or divergence from the remote branch.
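The exact commands are left to the model, but a typical read-only snapshot might look like this (a sketch; `@{upstream}` requires a configured tracking branch):
```bash
# Typical repository-state snapshot (all read-only)
git status --short --branch                      # staged/unstaged files plus ahead/behind counts
git branch -vv                                   # local branches and their tracking state
git log --oneline -10                            # recent local commits
git diff --stat @{upstream}...HEAD 2>/dev/null   # divergence from the tracked remote branch
```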

commands/go.md

@@ -0,0 +1,55 @@
---
allowed-tools: mcp__serena__list_dir, mcp__serena__find_file, mcp__serena__search_for_pattern, mcp__serena__get_symbols_overview, mcp__serena__find_symbol, mcp__serena__find_referencing_symbols, mcp__serena__replace_symbol_body, mcp__serena__insert_after_symbol, mcp__serena__insert_before_symbol, mcp__context7__resolve-library-id, mcp__context7__get-library-docs, mcp__sequential-thinking__process_thought, mcp__sequential-thinking__generate_summary, Read
description: Advanced code analysis and development using semantic tools, documentation, and structured decision making
argument-hint: [task description or development requirement]
model: claude-sonnet-4-5-20250929
---
# Go
Advanced code analysis, development, and decision-making command that uses `USER_TASK` to analyze requirements through semantic code tools, up-to-date documentation, and structured thinking processes.
## Variables:
USER_TASK: $1
PROJECT_ROOT: .
CLAUDE_CONFIG: CLAUDE.md
## Instructions:
- Read `CLAUDE_CONFIG` to understand project context and requirements
- Use `USER_TASK` to determine specific analysis or development needs
- Apply serena tools for semantic code retrieval and precise editing operations
- Leverage context7 for current third-party library documentation and examples
- Use sequential thinking for all decision-making processes and complex analysis
- Maintain structured approach with clear reasoning for all actions taken
## Workflow:
1. Read `CLAUDE_CONFIG` file to understand project structure and context
2. Use sequential thinking to process and break down `USER_TASK` requirements
3. Use serena semantic tools to explore relevant codebase sections and symbols
4. Retrieve up-to-date documentation using context7 for any third-party dependencies
5. Apply structured decision-making through sequential thinking for implementation approach
6. Execute precise code analysis or modifications using serena's semantic editing tools
7. Document reasoning and decisions made throughout the process
8. Generate summary of actions taken and results achieved
9. Provide clear recommendations for next steps or follow-up actions
## Report:
Advanced Analysis Complete
Task: `USER_TASK` processed using semantic tools and structured thinking
Key Components:
- Project context analysis from `CLAUDE_CONFIG`
- Semantic code exploration and analysis using serena tools
- Third-party documentation retrieval via context7
- Structured decision-making through sequential thinking process
- Precise code modifications or analysis results
- Clear reasoning documentation and next step recommendations
## Relevant Files:
- @CLAUDE.md

commands/quick-plan.md

@@ -0,0 +1,56 @@
---
description: Creates a concise engineering implementation plan based on user requirements and saves it to specs directory
argument-hint: [user prompt]
allowed-tools: Read, Write, Edit, Grep, Glob, MultiEdit
model: claude-sonnet-4-5-20250929
---
# Quick Plan
Create a detailed implementation plan based on the user's requirements provided through the `USER_PROMPT` variable. Analyze the request, think through the implementation approach, and save a comprehensive specification document to `PLAN_OUTPUT_DIRECTORY/<name-of-plan>.md` that can be used as a blueprint for actual development work.
## Variables
USER_PROMPT: $ARGUMENTS
PLAN_OUTPUT_DIRECTORY: `specs/`
## Instructions
- Carefully analyze the user's requirements provided in the `USER_PROMPT` variable.
- Think deeply about the best approach to implement the requested functionality or solve the problem.
- Create a concise implementation plan that includes:
- Clear problem statement and objectives
- Technical approach and architecture decisions
- Step-by-step implementation guide
- Potential challenges and solutions
- Testing strategy
- Success criteria
- Generate a descriptive, kebab-case filename based on the main topic of the plan
- Save the complete implementation plan to `PLAN_OUTPUT_DIRECTORY/<descriptive-name>.md`
- Ensure the plan is detailed enough that another developer could follow it to implement the solution
- Include code examples or pseudo-code where appropriate to clarify complex concepts
- Consider edge cases, error handling, and scalability for roughly 10-20 users
- Structure the document with clear sections and proper markdown formatting
## Workflow
1. Analyze Requirements - THINK HARD and parse the `USER_PROMPT` to understand the core problem and desired outcome
2. Design solution - Develop technical approach including architecture decisions and implementation strategy
3. Document Plan - Structure a comprehensive markdown document with problem statement, implementation steps, and testing approach
4. Generate Filename - Create a descriptive, kebab-case filename based on the plan's main topic (see the sketch after this list)
5. Save & Report - Write the plan to the `PLAN_OUTPUT_DIRECTORY/<filename.md>` and provide a summary of key components
## Report
After creating and saving the implementation plan, provide a concise report with the following format:
```
Implementation Plan Created
File: PLAN_OUTPUT_DIRECTORY/<filename.md>
Topic: <brief description of what the plan covers>
Key Components:
- <main component 1>
- <main component 2>
- <main component 3>
```

commands/quick-search.md

@@ -0,0 +1,9 @@
---
allowed-tools: Grep, Read, Task
description: Search for patterns across project logs and files
model: claude-sonnet-4-5-20250929
---
# Quick Search
Search for the `$ARGUMENTS` pattern across project logs and files using an intelligent strategy. Scan the logs/ directory for .json and .log files, extract relevant context around matches, present results with file locations and line numbers, and suggest refined searches if needed.
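The Grep tool does the actual searching, but the equivalent shell invocation (a sketch, with `PATTERN` standing in for `$ARGUMENTS`) is:
```bash
# Shell equivalent of the search (PATTERN is a placeholder for $ARGUMENTS)
grep -rn -C 2 --include='*.log' --include='*.json' "PATTERN" logs/
```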