Initial commit

Zhongwei Li
2025-11-29 18:32:48 +08:00
commit 9997176040
24 changed files with 2371 additions and 0 deletions

commands/containerize.md Normal file

@@ -0,0 +1,93 @@
---
allowed-tools: Read, Write, Edit, Bash
argument-hint: [application-type] | --node | --python | --java | --go | --multi-stage
description: Containerize application with optimized Docker configuration, security, and multi-stage builds
model: claude-sonnet-4-5
---
# Application Containerization
Containerize application for deployment: $ARGUMENTS
## Current Application Analysis
- Application type: @package.json or @setup.py or @go.mod or @pom.xml (detect runtime)
- Existing Docker: @Dockerfile or @docker-compose.yml or @compose.yaml (if exists)
- Dependencies: !find . -name "*requirements*.txt" -o -name "package*.json" -o -name "go.mod" | head -3
- Port configuration: !grep -r "PORT\|listen\|bind" src/ 2>/dev/null | head -3 || echo "Port detection needed"
- Build tools: @Makefile or other detected build scripts
## Task
Implement production-ready containerization strategy:
1. **Application Analysis and Containerization Strategy**
- Analyze application architecture and runtime requirements
- Identify application dependencies and external services
- Determine optimal base image and runtime environment
- Plan multi-stage build strategy for optimization
- Assess security requirements and compliance needs
2. **Dockerfile Creation and Optimization**
- Create comprehensive Dockerfile with multi-stage builds (a minimal sketch follows this list)
- Select minimal base images (Alpine, distroless, or slim variants)
- Configure proper layer caching and build optimization
- Implement security best practices (non-root user, minimal attack surface)
- Set up proper file permissions and ownership
3. **Build Process Configuration**
- Configure .dockerignore file to exclude unnecessary files
- Set up build arguments and environment variables
- Implement build-time dependency installation and cleanup
- Configure application bundling and asset optimization
- Set up proper build context and file structure
4. **Runtime Configuration**
- Configure application startup and health checks
- Set up proper signal handling and graceful shutdown
- Configure logging and output redirection
- Set up environment-specific configuration management
- Configure resource limits and performance tuning
5. **Security Hardening**
- Run application as non-root user with minimal privileges
- Configure security scanning and vulnerability assessment
- Implement secrets management and secure credential handling
- Set up network security and firewall rules
- Configure security policies and access controls
6. **Docker Compose Configuration**
- Create compose.yaml for local development
- Configure service dependencies and networking
- Set up volume mounting and data persistence
- Configure environment variables and secrets
- Set up development vs production configurations
7. **Container Orchestration Preparation**
- Prepare configurations for Kubernetes deployment
- Create deployment manifests and service definitions
- Configure ingress and load balancing
- Set up persistent volumes and storage classes
- Configure auto-scaling and resource management
8. **Monitoring and Observability**
- Configure application metrics and health endpoints
- Set up logging aggregation and centralized logging
- Configure distributed tracing and monitoring
- Set up alerting and notification systems
- Configure performance monitoring and profiling
9. **CI/CD Integration**
- Configure automated Docker image building
- Set up image scanning and security validation
- Configure image registry and artifact management
- Set up automated deployment pipelines
- Configure rollback and blue-green deployment strategies
10. **Testing and Validation**
- Test container builds and functionality
- Validate security configurations and compliance
- Test deployment in different environments
- Validate performance and resource utilization
- Test backup and disaster recovery procedures
- Create documentation for container deployment and management
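To make items 2 and 5 concrete, here is a minimal sketch of a multi-stage build with a non-root runtime user. It assumes a Node.js application with an `npm run build` step and an entry point at `dist/server.js`; the base images, paths, and port are assumptions to adapt to the runtime detected above.
```bash
# Minimal sketch, assuming a Node.js app with `npm run build` and dist/server.js.
# Adjust base images, paths, and port to the detected runtime.
cat > Dockerfile <<'EOF'
# --- build stage: install all deps and compile ---
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# --- runtime stage: minimal image, production deps only, non-root user ---
FROM node:20-alpine
WORKDIR /app
RUN addgroup -S app && adduser -S app -G app
COPY --chown=app:app package*.json ./
RUN npm ci --omit=dev
COPY --from=build --chown=app:app /app/dist ./dist
USER app
EXPOSE 3000
CMD ["node", "dist/server.js"]
EOF

docker build -t myapp:latest .
docker run --rm -p 3000:3000 myapp:latest
```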


@@ -0,0 +1,27 @@
# Parallel Task Version Execution
## Variables
FEATURE_NAME: $ARGUMENTS
PLAN_TO_EXECUTE: $ARGUMENTS
NUMBER_OF_PARALLEL_WORKTREES: $ARGUMENTS
## Instructions
We're going to create NUMBER_OF_PARALLEL_WORKTREES new subagents that use the Task tool to build that many versions of the same feature in parallel.
Be sure to read PLAN_TO_EXECUTE.
This enables us to build the same feature concurrently so we can test and validate each subagent's changes in isolation, then pick the best changes.
The first agent will run in trees/<FEATURE_NAME>-1/
The second agent will run in trees/<FEATURE_NAME>-2/
...
The last agent will run in trees/<FEATURE_NAME>-<NUMBER_OF_PARALLEL_WORKTREES>/
The code in trees/<FEATURE_NAME>-<i>/ will be identical to the code in the current branch. It will be set up and ready for you to build the feature end to end.
Each agent will independently implement the engineering plan detailed in PLAN_TO_EXECUTE in their respective workspace.
When a subagent completes its work, have it report its final changes in a comprehensive `RESULTS.md` file at the root of its respective workspace.
Make sure agents don't run any tests or other code - focus on the code changes only.
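For illustration, assuming FEATURE_NAME is `auth` and three worktrees (names are hypothetical), the expected layout and a quick way to collect each agent's report might look like this sketch:
```bash
# Hypothetical layout after the subagents finish (FEATURE_NAME=auth, 3 worktrees):
#   trees/auth-1/RESULTS.md
#   trees/auth-2/RESULTS.md
#   trees/auth-3/RESULTS.md
# Collect each agent's summary to compare the implementations:
for f in trees/auth-*/RESULTS.md; do
  echo "=== $f ==="
  cat "$f"
done
```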

commands/execute-prp.md Normal file

@@ -0,0 +1,40 @@
# Execute BASE PRP
Implement a feature using the PRP file.
## PRP File: $ARGUMENTS
## Execution Process
1. **Load PRP**
- Read the specified PRP file
- Understand all context and requirements
- Follow all instructions in the PRP and extend the research if needed
- Ensure you have all needed context to implement the PRP fully
- Do more web searches and codebase exploration as needed
2. **ULTRATHINK**
- Think hard before you execute the plan. Create a comprehensive plan addressing all requirements.
- Break down complex tasks into smaller, manageable steps using your todo tools.
- Use the TodoWrite tool to create and track your implementation plan.
- Identify implementation patterns from existing code to follow.
3. **Execute the plan**
- Execute the PRP
- Implement all the code
4. **Validate**
- Run each validation command
- Fix any failures
- Re-run until all pass (a minimal sketch follows these steps)
5. **Complete**
- Ensure all checklist items done
- Run final validation suite
- Report completion status
- Read the PRP again to ensure you have implemented everything
6. **Reference the PRP**
- You can always reference the PRP again if needed
Note: If validation fails, use error patterns in PRP to fix and retry.
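A minimal sketch of the validate step, assuming the PRP's validation gates are a lint, type-check, and test command; these are placeholders, and the real commands come from the PRP file itself.
```bash
# Minimal sketch - the actual gates are whatever the PRP specifies.
run_validation() {
  ruff check --fix && mypy . && uv run pytest tests/ -v
}
if ! run_validation; then
  echo "Validation failed: fix the reported errors, then call run_validation again"
fi
```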

commands/generate-prp.md Normal file

@@ -0,0 +1,69 @@
# Create PRP
## Feature file: $ARGUMENTS
Generate a complete PRP for general feature implementation with thorough research. Ensure context is passed to the AI agent to enable self-validation and iterative refinement. Read the feature file first to understand what needs to be created, how the examples provided help, and any other considerations.
The AI agent only gets the context you append to the PRP plus its training data. Assume the AI agent has access to the codebase and the same knowledge cutoff as you, so it's important that your research findings are included or referenced in the PRP. The agent has web-search capabilities, so pass URLs to documentation and examples.
## Research Process
1. **Codebase Analysis**
- Search for similar features/patterns in the codebase
- Identify files to reference in PRP
- Note existing conventions to follow
- Check test patterns for validation approach
2. **External Research**
- Search for similar features/patterns online
- Library documentation (include specific URLs)
- Implementation examples (GitHub/StackOverflow/blogs)
- Best practices and common pitfalls
3. **User Clarification** (if needed)
- Specific patterns to mirror and where to find them?
- Integration requirements and where to find them?
## PRP Generation
Using templates/prp_base.md as template:
### Critical Context to Include and pass to the AI agent as part of the PRP
- **Documentation**: URLs with specific sections
- **Code Examples**: Real snippets from codebase
- **Gotchas**: Library quirks, version issues
- **Patterns**: Existing approaches to follow
### Implementation Blueprint
- Start with pseudocode showing approach
- Reference real files for patterns
- Include error handling strategy
- List the tasks to be completed to fulfill the PRP, in the order they should be completed
### Validation Gates (Must Be Executable), e.g. for Python
```bash
# Syntax/Style
ruff check --fix && mypy .
# Unit Tests
uv run pytest tests/ -v
```
*** CRITICAL: AFTER YOU ARE DONE RESEARCHING AND EXPLORING THE CODEBASE, AND BEFORE YOU START WRITING THE PRP ***
*** ULTRATHINK ABOUT THE PRP AND PLAN YOUR APPROACH THEN START WRITING THE PRP ***
## Output
Save as: `PRPs/{feature-name}.md`
## Quality Checklist
- [ ] All necessary context included
- [ ] Validation gates are executable by AI
- [ ] References existing patterns
- [ ] Clear implementation path
- [ ] Error handling documented
Score the PRP on a scale of 1-10 (confidence level that it will succeed in one-pass implementation using Claude Code).
Remember: The goal is one-pass implementation success through comprehensive context.

commands/infinite.md Normal file

@@ -0,0 +1,202 @@
**INFINITE AGENTIC LOOP COMMAND**
Think deeply about this infinite generation task. You are about to embark on a sophisticated iterative creation process.
**Variables:**
spec_file: $ARGUMENTS
output_dir: $ARGUMENTS
count: $ARGUMENTS
**ARGUMENTS PARSING:**
Parse the following arguments from "$ARGUMENTS":
1. `spec_file` - Path to the markdown specification file
2. `output_dir` - Directory where iterations will be saved
3. `count` - Number of iterations (1-N or "infinite")
**PHASE 1: SPECIFICATION ANALYSIS**
Read and deeply understand the specification file at `spec_file`. This file defines:
- What type of content to generate
- The format and structure requirements
- Any specific parameters or constraints
- The intended evolution pattern between iterations
Think carefully about the spec's intent and how each iteration should build upon previous work.
**PHASE 2: OUTPUT DIRECTORY RECONNAISSANCE**
Thoroughly analyze the `output_dir` to understand the current state:
- List all existing files and their naming patterns
- Identify the highest iteration number currently present
- Analyze the content evolution across existing iterations
- Understand the trajectory of previous generations
- Determine what gaps or opportunities exist for new iterations
**PHASE 3: ITERATION STRATEGY**
Based on the spec analysis and existing iterations:
- Determine the starting iteration number (highest existing + 1)
- Plan how each new iteration will be unique and evolutionary
- Consider how to build upon previous iterations while maintaining novelty
- If count is "infinite", prepare for continuous generation until context limits are reached
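As one concrete illustration of the reconnaissance and numbering steps, a sketch like the following could derive the starting iteration number. It assumes output files are named `iteration_<N>.md`, which is an assumption; the real naming pattern comes from the spec.
```bash
# Sketch: find the highest existing iteration and start at highest + 1.
# Assumes files named iteration_<N>.md in $output_dir (naming pattern is an assumption).
output_dir="output"
last=$(ls "$output_dir"/iteration_*.md 2>/dev/null \
  | grep -oE '[0-9]+' | sort -n | tail -1)
next=$(( ${last:-0} + 1 ))
echo "Starting iteration number: $next"
```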
**PHASE 4: PARALLEL AGENT COORDINATION**
Deploy multiple Sub Agents to generate iterations in parallel for maximum efficiency and creative diversity:
**Sub-Agent Distribution Strategy:**
- For count 1-5: Launch all agents simultaneously
- For count 6-20: Launch in batches of 5 agents to manage coordination
- For "infinite": Launch waves of 3-5 agents, monitoring context and spawning new waves
**Agent Assignment Protocol:**
Each Sub Agent receives:
1. **Spec Context**: Complete specification file analysis
2. **Directory Snapshot**: Current state of output_dir at launch time
3. **Iteration Assignment**: Specific iteration number (starting_number + agent_index)
4. **Uniqueness Directive**: Explicit instruction to avoid duplicating concepts from existing iterations
5. **Quality Standards**: Detailed requirements from the specification
**Agent Task Specification:**
```
TASK: Generate iteration [NUMBER] for [SPEC_FILE] in [OUTPUT_DIR]
You are Sub Agent [X] generating iteration [NUMBER].
CONTEXT:
- Specification: [Full spec analysis]
- Existing iterations: [Summary of current output_dir contents]
- Your iteration number: [NUMBER]
- Assigned creative direction: [Specific innovation dimension to explore]
REQUIREMENTS:
1. Read and understand the specification completely
2. Analyze existing iterations to ensure your output is unique
3. Generate content following the spec format exactly
4. Focus on [assigned innovation dimension] while maintaining spec compliance
5. Create file with exact name pattern specified
6. Ensure your iteration adds genuine value and novelty
DELIVERABLE: Single file as specified, with unique innovative content
```
**Parallel Execution Management:**
- Launch all assigned Sub Agents simultaneously using Task tool
- Monitor agent progress and completion
- Handle any agent failures by reassigning iteration numbers
- Ensure no duplicate iteration numbers are generated
- Collect and validate all completed iterations
**PHASE 5: INFINITE MODE ORCHESTRATION**
For infinite generation mode, orchestrate continuous parallel waves:
**Wave-Based Generation:**
1. **Wave Planning**: Determine next wave size (3-5 agents) based on context capacity
2. **Agent Preparation**: Prepare fresh context snapshots for each new wave
3. **Progressive Sophistication**: Each wave should explore more advanced innovation dimensions
4. **Context Monitoring**: Track total context usage across all agents and main orchestrator
5. **Graceful Conclusion**: When approaching context limits, complete current wave and summarize
**Infinite Execution Cycle:**
```
WHILE context_capacity > threshold:
1. Assess current output_dir state
2. Plan next wave of agents (size based on remaining context)
3. Assign increasingly sophisticated creative directions
4. Launch parallel Sub Agent wave
5. Monitor wave completion
6. Update directory state snapshot
7. Evaluate context capacity remaining
8. If sufficient capacity: Continue to next wave
9. If approaching limits: Complete final wave and summarize
```
**Progressive Sophistication Strategy:**
- **Wave 1**: Basic functional replacements with single innovation dimension
- **Wave 2**: Multi-dimensional innovations with enhanced interactions
- **Wave 3**: Complex paradigm combinations with adaptive behaviors
- **Wave N**: Revolutionary concepts pushing the boundaries of the specification
**Context Optimization:**
- Each wave uses fresh agent instances to avoid context accumulation
- Main orchestrator maintains lightweight state tracking
- Progressive summarization of completed iterations to manage context
- Strategic pruning of less essential details in later waves
**EXECUTION PRINCIPLES:**
**Quality & Uniqueness:**
- Each iteration must be genuinely unique and valuable
- Build upon previous work while introducing novel elements
- Maintain consistency with the original specification
- Ensure proper file organization and naming
**Parallel Coordination:**
- Deploy Sub Agents strategically to maximize creative diversity
- Assign distinct innovation dimensions to each agent to avoid overlap
- Coordinate timing to prevent file naming conflicts
- Monitor all agents for successful completion and quality
**Scalability & Efficiency:**
- Think deeply about the evolution trajectory across parallel streams
- For infinite mode, optimize for maximum valuable output before context exhaustion
- Use wave-based generation to manage context limits intelligently
- Balance parallel speed with quality and coordination overhead
**Agent Management:**
- Provide each Sub Agent with complete context and clear assignments
- Handle agent failures gracefully with iteration reassignment
- Ensure all parallel outputs integrate cohesively with the overall progression
**ULTRA-THINKING DIRECTIVE:**
Before beginning generation, engage in extended thinking about:
**Specification & Evolution:**
- The deeper implications of the specification
- How to create meaningful progression across iterations
- What makes each iteration valuable and unique
- How to balance consistency with innovation
**Parallel Strategy:**
- Optimal Sub Agent distribution for the requested count
- How to assign distinct creative directions to maximize diversity
- Wave sizing and timing for infinite mode
- Context management across multiple parallel agents
**Coordination Challenges:**
- How to prevent duplicate concepts across parallel streams
- Strategies for ensuring each agent produces genuinely unique output
- Managing file naming and directory organization with concurrent writes
- Quality control mechanisms for parallel outputs
**Infinite Mode Optimization:**
- Wave-based generation patterns for sustained output
- Progressive sophistication strategies across multiple waves
- Context capacity monitoring and graceful conclusion planning
- Balancing speed of parallel generation with depth of innovation
**Risk Mitigation:**
- Handling agent failures and iteration reassignment
- Ensuring coherent overall progression despite parallel execution
- Managing context window limits across the entire system
- Maintaining specification compliance across all parallel outputs
Begin execution with deep analysis of these parallel coordination challenges and proceed systematically through each phase, leveraging Sub Agents for maximum creative output and efficiency.

commands/planning.md Normal file

@@ -0,0 +1,8 @@
Please create a detailed plan to implement a project objective or feature according to the user's task which is as follows: $ARGUMENTS.
- You will ask the user follow-up questions and clarification prompts until you are clear on how to approach the plan.
- You are to collect as much information as you can about the project to maximise its chances of success.
- You will now ULTRATHINK about the plan and figure out if you are missing anything, look at the project from different perspectives, and challenge your assumptions.
- You will research online to find peer-reviewed solutions, best practices, or other sources of inspiration that help increase the chances of success.
- Once your research is completed, you will create a clearly described task list and present it to the user.
- You will NOT write any code or start building anything until the user has confirmed your plan and task list, so you will collaborate with the user until this has been completed and the plan is confirmed.

commands/prep-parallel.md Normal file

@@ -0,0 +1,14 @@
# Initialize parallel git worktree directories for parallel Claude Code agents
## Variables
FEATURE_NAME: $ARGUMENTS
NUMBER_OF_PARALLEL_WORKTREES: $ARGUMENTS
## Execute these commands
> Execute the loop in parallel with the Batch and Task tool
- create a new dir `trees/`
- for i in 1..NUMBER_OF_PARALLEL_WORKTREES
- RUN `git worktree add -b FEATURE_NAME-i ./trees/FEATURE_NAME-i`
- RUN `cd trees/FEATURE_NAME-i`, `git ls-files` to validate
- RUN `git worktree list` to verify all trees were created properly
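A minimal sketch of those commands, assuming FEATURE_NAME is `auth` and three worktrees; the values are placeholders for the actual arguments.
```bash
# Sketch with FEATURE_NAME=auth and NUMBER_OF_PARALLEL_WORKTREES=3 (placeholders).
FEATURE_NAME=auth
mkdir -p trees
for i in 1 2 3; do
  git worktree add -b "${FEATURE_NAME}-${i}" "./trees/${FEATURE_NAME}-${i}"
  (cd "trees/${FEATURE_NAME}-${i}" && git ls-files >/dev/null && echo "worktree $i OK")
done
git worktree list   # verify all trees were created properly
```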

commands/primer.md Normal file

@@ -0,0 +1,16 @@
# Prime Context for Claude Code
Use the command `tree` to get an understanding of the project structure.
Start with reading the CLAUDE.md file if it exists to get an understanding of the project.
Read the README.md file to get an understanding of the project.
Read key files in the src/ or root directory
Explain back to me:
- Project structure
- Project purpose and goals
- Key files and their purposes
- Any important dependencies
- Any important configuration files
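One possible priming sequence, sketched as shell commands; anything beyond `tree`, CLAUDE.md, and README.md is an assumption about a typical project layout.
```bash
# Possible priming sequence (layout beyond CLAUDE.md/README.md is an assumption)
tree -L 2                      # project structure overview
cat CLAUDE.md 2>/dev/null      # project-specific guidance, if present
cat README.md                  # purpose, goals, setup
ls src/ 2>/dev/null            # key source files to read next
```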

commands/prompt_writer.md Normal file

@@ -0,0 +1,18 @@
---
allowed-tools: Read, Write, Edit
argument-hint: [Your initial draft idea of a prompt to be improved]
description: Helps to craft an improved prompt based on your brief, following Anthropic's latest prompt engineering guides
model: claude-sonnet-4-5
---
You're a principal prompt engineer with an equity stake in the company you're being asked to write prompts for.
READ:
https://docs.claude.com/en/docs/build-with-claude/prompt-engineering/claude-4-best-practices.md
Think hard about best-practice prompt engineering, then take the user's prompt brief and improve it.
User's Brief/Initial Prompt:
$ARGUMENTS
Output:
An improved prompt, validated against the documentation article shared with you.

commands/reflection.md Normal file

@@ -0,0 +1,50 @@
You are an expert in prompt engineering, specializing in optimizing AI code assistant instructions. Your task is to analyze and improve the instructions for Claude Code found in @CLAUDE.md. Follow these steps carefully:
1. Analysis Phase:
Review the chat history in your context window.
Then, examine the current Claude instructions:
<claude_instructions>
@CLAUDE.md
</claude_instructions>
Analyze the chat history and instructions to identify areas that could be improved. Look for:
- Inconsistencies in Claude's responses
- Misunderstandings of user requests
- Areas where Claude could provide more detailed or accurate information
- Opportunities to enhance Claude's ability to handle specific types of queries or tasks
2. Interaction Phase:
Present your findings and improvement ideas to the human. For each suggestion:
a) Explain the current issue you've identified
b) Propose a specific change or addition to the instructions
c) Describe how this change would improve Claude's performance
Wait for feedback from the human on each suggestion before proceeding. If the human approves a change, move it to the implementation phase. If not, refine your suggestion or move on to the next idea.
3. Implementation Phase:
For each approved change:
a) Clearly state the section of the instructions you're modifying
b) Present the new or modified text for that section
c) Explain how this change addresses the issue identified in the analysis phase
4. Output Format:
Present your final output in the following structure:
<analysis>
[List the issues identified and potential improvements]
</analysis>
<improvements>
[For each approved improvement:
1. Section being modified
2. New or modified instruction text
3. Explanation of how this addresses the identified issue]
</improvements>
<final_instructions>
[Present the complete, updated set of instructions for Claude, incorporating all approved changes]
</final_instructions>
Remember, your goal is to enhance Claude's performance and consistency while maintaining the core functionality and purpose of the AI assistant. Be thorough in your analysis, clear in your explanations, and precise in your implementations.

commands/ultra-think.md Normal file

@@ -0,0 +1,169 @@
# Deep Analysis and Problem Solving Mode
## Instructions
1. **Initialize Ultra Think Mode**
- Acknowledge the request for enhanced analytical thinking
- Set context for deep, systematic reasoning
- Prepare to explore the problem space comprehensively
2. **Parse the Problem or Question**
- Extract the core challenge from: **$ARGUMENTS**
- Identify all stakeholders and constraints
- Recognize implicit requirements and hidden complexities
- Question assumptions and surface unknowns
3. **Multi-Dimensional Analysis**
Approach the problem from multiple angles:
### Technical Perspective
- Analyze technical feasibility and constraints
- Consider scalability, performance, and maintainability
- Evaluate security implications
- Assess technical debt and future-proofing
### Business Perspective
- Understand business value and ROI
- Consider time-to-market pressures
- Evaluate competitive advantages
- Assess risk vs. reward trade-offs
### User Perspective
- Analyze user needs and pain points
- Consider usability and accessibility
- Evaluate user experience implications
- Think about edge cases and user journeys
### System Perspective
- Consider system-wide impacts
- Analyze integration points
- Evaluate dependencies and coupling
- Think about emergent behaviors
4. **Generate Multiple Solutions**
- Brainstorm at least 3-5 different approaches
- For each approach, consider:
- Pros and cons
- Implementation complexity
- Resource requirements
- Potential risks
- Long-term implications
- Include both conventional and creative solutions
- Consider hybrid approaches
5. **Deep Dive Analysis**
For the most promising solutions:
- Create detailed implementation plans
- Identify potential pitfalls and mitigation strategies
- Consider phased approaches and MVPs
- Analyze second and third-order effects
- Think through failure modes and recovery
6. **Cross-Domain Thinking**
- Draw parallels from other industries or domains
- Apply design patterns from different contexts
- Consider biological or natural system analogies
- Look for innovative combinations of existing solutions
7. **Challenge and Refine**
- Play devil's advocate with each solution
- Identify weaknesses and blind spots
- Consider "what if" scenarios
- Stress-test assumptions
- Look for unintended consequences
8. **Synthesize Insights**
- Combine insights from all perspectives
- Identify key decision factors
- Highlight critical trade-offs
- Summarize innovative discoveries
- Present a nuanced view of the problem space
9. **Provide Structured Recommendations**
Present findings in a clear structure:
```
## Problem Analysis
- Core challenge
- Key constraints
- Critical success factors
## Solution Options
### Option 1: [Name]
- Description
- Pros/Cons
- Implementation approach
- Risk assessment
### Option 2: [Name]
[Similar structure]
## Recommendation
- Recommended approach
- Rationale
- Implementation roadmap
- Success metrics
- Risk mitigation plan
## Alternative Perspectives
- Contrarian view
- Future considerations
- Areas for further research
```
10. **Meta-Analysis**
- Reflect on the thinking process itself
- Identify areas of uncertainty
- Acknowledge biases or limitations
- Suggest additional expertise needed
- Provide confidence levels for recommendations
## Usage Examples
```bash
### Architectural decision
/project:ultra-think Should we migrate to microservices or improve our monolith?
### Complex problem solving
/project:ultra-think How do we scale our system to handle 10x traffic while reducing costs?
### Strategic planning
/project:ultra-think What technology stack should we choose for our next-gen platform?
### Design challenge
/project:ultra-think How can we improve our API to be more developer-friendly while maintaining backward compatibility?
```
## Key Principles
- **First Principles Thinking**: Break down to fundamental truths
- **Systems Thinking**: Consider interconnections and feedback loops
- **Probabilistic Thinking**: Work with uncertainties and ranges
- **Inversion**: Consider what to avoid, not just what to do
- **Second-Order Thinking**: Consider consequences of consequences
## Output Expectations
- Comprehensive analysis (typically 2-4 pages of insights)
- Multiple viable solutions with trade-offs
- Clear reasoning chains
- Acknowledgment of uncertainties
- Actionable recommendations
- Novel insights or perspectives