Initial commit

Zhongwei Li
2025-11-29 18:49:58 +08:00
commit 5007abf04b
89 changed files with 44129 additions and 0 deletions

View File

@@ -0,0 +1,227 @@
# Commands Reference Library
This directory contains **reference material for creating and organizing Claude Code slash commands**. These are NOT deployed commands themselves, but rather templates, patterns, and procedural guides for command development.
## Purpose
The `commands/` directory serves as a knowledge base for:
- **Command Templates**: Standardized structures for creating new slash commands
- **Command Patterns**: Configuration defining command categories, workflows, and integration
- **Meta-Commands**: Guides for generating other commands using established patterns
- **Specialized Workflows**: Domain-specific command procedures (testing, development)
## Directory Structure
```text
commands/
├── development/                       # Development workflow commands
│   ├── config/
│   │   └── command-patterns.yml       # Command categories, workflows, risk levels
│   ├── templates/
│   │   └── command-template.md        # Base template for new commands
│   ├── use-command-template.md        # Meta-command: generate commands from template
│   └── create-feature-task.md         # Structured feature development workflow
└── testing/                           # Testing workflow commands
    ├── analyze-test-failures.md       # Investigate test failures (bug vs test issue)
    ├── comprehensive-test-review.md
    └── test-failure-mindset.md
```
## Key Files
### Configuration
**[command-patterns.yml](./development/config/command-patterns.yml)**
Defines the organizational structure for commands:
- **Command Categories**: Analysis, Development, Quality, Documentation, Operations
- **Workflow Chains**: Multi-step processes (Feature Development, Bug Fix, Code Review)
- **Context Sharing**: What information flows between commands
- **Cache Patterns**: TTL and invalidation rules for different command types
- **Risk Assessment**: Classification of commands by risk level and required safeguards
### Templates
**[command-template.md](./development/templates/command-template.md)**
Standard structure for creating new commands:
- Purpose statement (single sentence)
- Task description with `$ARGUMENTS` placeholder
- Phased execution steps (Analysis → Implementation → Validation)
- Context preservation rules
- Expected output format
- Integration points (prerequisites, follow-ups, related commands)
### Meta-Commands
**[use-command-template.md](./development/use-command-template.md)**
Procedural guide for generating new commands:
1. Parse command purpose from `$ARGUMENTS`
2. Select appropriate category and naming convention
3. Apply template structure with customizations
4. Configure integration with workflow chains
5. Create command file in appropriate location
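For orientation, every command file in this library opens with YAML frontmatter, and a command generated through this meta-command would typically start the same way. The field names below match the existing command files; the command itself and its values are illustrative placeholders:
```yaml
# Illustrative frontmatter for a generated command; field names follow the
# existing command files, values (including the command name) are placeholders.
title: "Scan API Rate Limits"
description: "Check API endpoints for rate-limiting coverage"
command_type: "quality"
last_updated: "2025-11-02"
related_docs:
  - "./templates/command-template.md"
```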
### Specialized Workflows
**[analyze-test-failures.md](./testing/analyze-test-failures.md)**
Critical thinking framework for test failure analysis:
- Balanced investigation approach (test bug vs implementation bug)
- Structured analysis steps
- Classification criteria (Test Bug | Implementation Bug | Ambiguous)
- Examples demonstrating reasoning patterns
- Output format for clear communication
**[create-feature-task.md](./development/create-feature-task.md)**
Structured approach to feature development:
- Requirement parsing and scope determination
- Task structure generation with phases
- Documentation creation and tracking setup
- Integration with development workflows
## Usage Patterns
### Creating a New Command
When you need to create a new slash command for Claude Code:
1. **Consult the patterns**: Review [command-patterns.yml](./development/config/command-patterns.yml) to understand:
   - Which category your command belongs to
   - Whether it fits into existing workflow chains
   - What risk level it represents
2. **Use the template**: Start with [command-template.md](./development/templates/command-template.md)
   - Replace placeholders with command-specific content
   - Customize execution steps for your use case
   - Define clear integration points
3. **Follow naming conventions**: Use verb-noun format
   - Analysis: `analyze-*`, `scan-*`, `validate-*`
   - Development: `create-*`, `implement-*`, `fix-*`
   - Operations: `deploy`, `migrate`, `cleanup-*`
4. **Deploy to proper location**: Actual slash commands live in:
   - User commands: `~/.claude/commands/`
   - Project commands: `.claude/commands/` (in project root)
   - NOT in this `references/commands/` directory
### Integrating Commands into Workflows
Commands are designed to chain together:
```yaml
Feature_Development:
  steps:
    - create-feature-task         # Initialize structured task
    - study-current-repo          # Understand codebase
    - implement-feature           # Write code
    - create-test-plan            # Design tests
    - comprehensive-test-review   # Validate quality
    - gh-create-pr                # Submit for review
```
Each command produces context that subsequent commands can use.
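This context flow is spelled out in [command-patterns.yml](./development/config/command-patterns.yml); for instance, findings from analysis commands are shared with the implementation commands that follow them:
```yaml
Context_Sharing:
  analyze_to_implement:
    from: analyze-*
    to: implement-*, fix-*
    shares: [findings, patterns, architecture]
```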
## Relationship to Skill Structure
This directory is part of the **python3-development** skill's reference material:
```text
python3-development/
├── SKILL.md               # Skill entry point
├── references/
│   ├── commands/          # THIS DIRECTORY (reference material)
│   ├── modern-modules/    # Python library guides
│   └── ...
└── scripts/               # Executable tools
```
**Important Distinctions**:
- **This directory** (`references/commands/`): Templates and patterns for creating commands
- **Deployed commands** (`~/.claude/commands/`): Actual slash commands that Claude Code executes
- **Skill scripts** (`scripts/`): Python tools that may be called by commands
## Best Practices
### When Creating Commands
1. **Single Responsibility**: Each command should focus on one clear task
2. **Clear Naming**: Use descriptive verb-noun pairs (`analyze-dependencies`, `create-component`)
3. **Example Usage**: Include at least 3 concrete examples
4. **Context Definition**: Specify what gets cached for reuse by later commands
5. **Integration Points**: Define prerequisites and natural follow-up commands
### When Organizing Commands
1. **Category Alignment**: Place commands in appropriate category subdirectories
2. **Workflow Awareness**: Consider how commands chain together
3. **Risk Classification**: Mark high-risk commands with appropriate safeguards
4. **Documentation**: Keep command patterns file updated with new additions
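As a concrete sketch of keeping the patterns file updated, registering a hypothetical `scan-api-limits` command means adding it under its category and, where appropriate, into a workflow chain in [command-patterns.yml](./development/config/command-patterns.yml); its risk level is already covered by the existing `scan-*` pattern under `Low_Risk_Commands`:
```yaml
Command_Categories:
  Quality:
    - comprehensive-test-review
    - validate-code-quality
    - scan-api-limits          # new command registered under its category

Workflow_Chains:
  Code_Review:
    steps:
      - scan-code-quality
      - scan-api-limits        # optionally slotted into an existing chain
      - scan-performance
      - scan-test-coverage
      - generate-review-report
```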
## Integration with Python Development Skill
Commands in this directory support the orchestration patterns described in:
- [python-development-orchestration.md](../references/python-development-orchestration.md)
- [reference-document-architecture.md](../planning/reference-document-architecture.md) (historical proposal, not implemented)
They complement the agent-based workflows:
```text
User Request
    ↓
Orchestrator (uses skill + commands)
    ├─→ @agent-python-cli-architect (implementation)
    ├─→ @agent-python-pytest-architect (testing)
    └─→ @agent-python-code-reviewer (review)
    ↓
Apply standards: /modernpython, /shebangpython (from commands)
```
## Common Workflows
### Feature Development
```bash
# 1. Create structured task
/development:create-feature-task Add user authentication with OAuth
# 2. Implement with appropriate agent
# (Orchestrator delegates to @agent-python-cli-architect)
# 3. Validate with standards
/modernpython src/auth/oauth.py
/shebangpython scripts/migrate-users.py
```
### Test Failure Investigation
```bash
# Analyze failures with critical thinking
/testing:analyze-test-failures test_authentication.py::test_oauth_flow
```
### Command Creation
```bash
# Generate new command from template
/development:use-command-template validate API endpoints for rate limiting
```
## Further Reading
- [Command Template](./development/templates/command-template.md) - Base structure for all commands
- [Command Patterns](./development/config/command-patterns.yml) - Organizational taxonomy
- [Python Development Orchestration](../references/python-development-orchestration.md) - How commands fit into workflows

View File

@@ -0,0 +1,112 @@
# Command Configuration Patterns
# Defines reusable patterns for Claude Code commands
Command_Categories:
  Analysis:
    - analyze-repo-for-claude
    - estimate-context-window
    - study-*
    - validate-*
    - scan-*
  Development:
    - create-feature-task
    - implement-feature
    - create-test-plan
    - fix-bug
  Quality:
    - comprehensive-test-review
    - validate-code-quality
    - scan-performance
    - scan-test-coverage
  Documentation:
    - generate-primevue-reference
    - document-api
    - create-readme
  Operations:
    - git-*
    - gh-*
    - deploy
    - migrate

Workflow_Chains:
  Feature_Development:
    steps:
      - create-feature-task
      - study-current-repo
      - implement-feature
      - create-test-plan
      - comprehensive-test-review
      - gh-create-pr
  Bug_Fix:
    steps:
      - gh-issue-enhance
      - analyze-issue
      - fix-bug
      - test-fix
      - gh-pr-submit
  Code_Review:
    steps:
      - scan-code-quality
      - scan-performance
      - scan-test-coverage
      - generate-review-report

Context_Sharing:
  # Define what context flows between commands
  analyze_to_implement:
    from: analyze-*
    to: implement-*, fix-*
    shares: [findings, patterns, architecture]
  scan_to_fix:
    from: scan-*, validate-*
    to: fix-*, improve-*
    shares: [issues, priorities, locations]
  test_to_deploy:
    from: test-*, scan-test-coverage
    to: deploy, gh-pr-*
    shares: [results, coverage, confidence]

Cache_Patterns:
  # How long to cache results from different command types
  Analysis_Commands:
    ttl: 3600  # 1 hour
    invalidate_on: [file_changes, branch_switch]
  Scan_Commands:
    ttl: 1800  # 30 minutes
    invalidate_on: [file_changes]
  Build_Commands:
    ttl: 300  # 5 minutes
    invalidate_on: [any_change]

Risk_Assessment:
  High_Risk_Commands:
    commands:
      - deploy
      - migrate
      - cleanup-*
      - delete-*
    triggers: [confirmation, backup, dry_run]
  Medium_Risk_Commands:
    commands:
      - refactor-*
      - update-dependencies
      - merge-*
    triggers: [plan_first, test_after]
  Low_Risk_Commands:
    commands:
      - analyze-*
      - scan-*
      - study-*
    triggers: []

View File

@@ -0,0 +1,60 @@
---
title: "Create Feature Development Task"
description: "Set up comprehensive feature development task with proper tracking"
command_type: "development"
last_updated: "2025-11-02"
related_docs:
- "./use-command-template.md"
- "../../references/python-development-orchestration.md"
---
# Create Feature Development Task
I need to create a structured development task for: $ARGUMENTS
## Your Task
Set up a comprehensive feature development task with proper tracking, phases, and documentation.
## Execution Steps
1. **Parse Feature Requirements**
   - Extract feature name and description from $ARGUMENTS
   - Identify key requirements and constraints
   - Determine complexity and scope
2. **Generate Task Structure**
   - Use the feature task template as base
   - Customize phases based on feature type
   - Add specific acceptance criteria
   - Include relevant technical considerations
3. **Create Task Documentation**
   - Copy template from ~/.claude/templates/feature-task-template.md
   - Fill in all sections with feature-specific details
   - Save to appropriate location (suggest: .claude/tasks/[feature-name].md)
   - Create initial git branch if requested
4. **Set Up Tracking**
   - Add task to TODO list if applicable
   - Create initial checkpoints
   - Set up progress markers
   - Configure any automation needed
## Template Usage
@include templates/feature-task-template.md
## Context Preservation
When creating the task, preserve:
- Initial requirements
- Key technical decisions
- File locations
- Dependencies identified
- Risk factors
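A minimal sketch of what that preserved context might look like, assuming it is written alongside the task file (the keys simply mirror the list above; no particular storage format is prescribed by this command):
```yaml
# Hypothetical context snapshot for the OAuth example used elsewhere in this skill
feature: add-user-authentication-oauth
requirements:
  - support OAuth login alongside existing account flows
decisions:
  - reuse the existing session middleware
files:
  - .claude/tasks/add-user-authentication-oauth.md
dependencies:
  - OAuth provider client library (to be selected)
risks:
  - token refresh and expiry edge cases
```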
## Integration
**Prerequisites**: Clear feature requirements
**Follow-up**: `/development:implement-feature [task-file]`
**Related**: `create-test-plan`, `estimate-context-window`

View File

@@ -0,0 +1,59 @@
# Command Template
## Purpose
[Single sentence describing what the command does]
## Your Task
[Clear description of what needs to be accomplished with: $ARGUMENTS]
## Execution Steps
1. **Phase 1 - Analysis**
   - [Step 1]
   - [Step 2]
   - [Step 3]
2. **Phase 2 - Implementation**
   - [Step 1]
   - [Step 2]
   - [Step 3]
3. **Phase 3 - Validation**
   - [Step 1]
   - [Step 2]
   - [Step 3]
## Context Preservation
Cache the following for future commands:
- Key findings
- Decisions made
- Files modified
- Patterns identified
## Expected Output
```markdown
## [Command] Results
### Summary
- [Key outcome 1]
- [Key outcome 2]
### Details
[Structured output based on command type]
### Next Steps
- [Recommended follow-up action 1]
- [Recommended follow-up action 2]
```
## Integration
**Prerequisites**: [What should be done before this command]
**Follow-up**: [What commands naturally follow this one]
**Related**: [Other commands that work well with this]

View File

@@ -0,0 +1,74 @@
---
title: "Use Command Template"
description: "Create new Claude Code command following established patterns"
command_type: "development"
last_updated: "2025-11-02"
related_docs:
- "./templates/command-template.md"
- "./create-feature-task.md"
---
# Use Command Template
I need to create a new command using the standard template for: $ARGUMENTS
## Your Task
Create a new Claude Code command following our established patterns and templates.
## Execution Steps
1. **Determine Command Type**
   - Parse command purpose from $ARGUMENTS
   - Identify the appropriate category (analysis, development, quality, etc.)
   - Choose a suitable command name (verb-noun format)
2. **Apply Template**
   - Start with the base template from [command-template.md](./templates/command-template.md)
   - Customize sections for the specific command purpose
   - Ensure all required sections are included
   - Add command-specific flags if needed
3. **Configure Integration**
   - Check [command-patterns.yml](./config/command-patterns.yml) for workflow placement
   - Identify prerequisite commands
   - Define what context this command produces
   - Add to appropriate workflow chains
4. **Create Command File**
   - Determine the correct folder based on category
   - Create a .md file with the command content
   - Verify @include references work correctly
   - Test with example usage
## Template Structure
The standard template includes:
- Purpose (single sentence)
- Task description with $ARGUMENTS
- Phased execution steps
- Context preservation rules
- Expected output format
- Integration guidance
## Best Practices
- Keep commands focused on a single responsibility
- Use clear verb-noun naming (analyze-dependencies, create-component)
- Include at least 3 example usages
- Define what gets cached for reuse
- Specify prerequisite and follow-up commands
## Example Usage
```bash
# Create a new analysis command
/development:use-command-template analyze API endpoints for rate limiting needs
# Create a new validation command
/development:use-command-template validate database migrations for safety
# Create a new generation command
/development:use-command-template generate Pydantic classes from API schema
```

View File

@@ -0,0 +1,114 @@
---
title: "Analyze Test Failures"
description: "Analyze failing test cases with a balanced, investigative approach"
command_type: "testing"
last_updated: "2025-11-02"
related_docs:
- "./test-failure-mindset.md"
- "./comprehensive-test-review.md"
---
# Analyze Test Failures
<role>
You are a senior software engineer with expertise in test-driven development and debugging. Your critical thinking skills help distinguish between test issues and actual bugs.
</role>
<context>
When tests fail, there are two primary possibilities that must be carefully evaluated:
1. The test itself is incorrect (false positive)
2. The test is correct and has discovered a genuine bug (true positive)
Assuming tests are wrong by default is a dangerous anti-pattern that defeats the purpose of testing.
</context>
<task>
Analyze the failing test case(s) $ARGUMENTS with a balanced, investigative approach to determine whether the failure indicates a test issue or a genuine bug.
</task>
<instructions>
1. **Initial Analysis**
   - Read the failing test carefully, understanding its intent
   - Examine the test's assertions and expected behavior
   - Review the error message and stack trace
2. **Investigate the Implementation**
   - Check the actual implementation being tested
   - Trace through the code path that leads to the failure
   - Verify that the implementation matches its documented behavior
3. **Apply Critical Thinking**
   For each failing test, ask:
   - What behavior is the test trying to verify?
   - Is this behavior clearly documented or implied by the function/API design?
   - Does the current implementation actually provide this behavior?
   - Could this be an edge case the implementation missed?
4. **Make a Determination**
   Classify the failure as one of:
   - **Test Bug**: The test's expectations are incorrect
   - **Implementation Bug**: The code doesn't behave as it should
   - **Ambiguous**: The intended behavior is unclear and needs clarification
5. **Document Your Reasoning**
   Provide a clear explanation for your determination, including:
   - Evidence supporting your conclusion
   - The specific mismatch between expectation and reality
   - Recommended fix (whether to the test or implementation)
</instructions>
<examples>
<example>
**Scenario**: Test expects `calculateDiscount(100, 0.2)` to return 20, but it returns 80
**Analysis**:
- Test assumes function returns discount amount
- Implementation returns price after discount
- Function name is ambiguous
**Determination**: Ambiguous - needs clarification
**Reasoning**: The function name could reasonably mean either "calculate the discount amount" or "calculate the discounted price". Check documentation or ask for intended behavior.
</example>
<example>
**Scenario**: Test expects `validateEmail("user@example.com")` to return true, but it returns false
**Analysis**:
- Test provides a valid email format
- Implementation regex is missing support for dots in domain
- Other valid emails also fail
**Determination**: Implementation Bug
**Reasoning**: The email address is valid per RFC standards. The implementation's regex is too restrictive and needs to be fixed.
</example>
<example>
**Scenario**: Test expects `divide(10, 0)` to return 0, but it throws an error
**Analysis**:
- Test assumes division by zero returns 0
- Implementation throws DivisionByZeroError
- Standard mathematical behavior is to treat as undefined/error
**Determination**: Test Bug
**Reasoning**: Division by zero is mathematically undefined. Throwing an error is the correct behavior. The test should expect an error, not 0.
</example>
</examples>
<important>
- NEVER automatically assume the test is wrong
- ALWAYS consider that the test might have found a real bug
- When uncertain, lean toward investigating the implementation
- Tests are often your specification - they define expected behavior
- A failing test is a gift - it's either catching a bug or clarifying requirements
</important>
<output_format>
For each failing test, provide:
```text
Test: [test name/description]
Failure: [what failed and how]
Investigation:
- Test expects: [expected behavior]
- Implementation does: [actual behavior]
- Root cause: [why they differ]
Determination: [Test Bug | Implementation Bug | Ambiguous]
Recommendation:
[Specific fix to either test or implementation]
```
</output_format>
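For concreteness, a filled-in report for the `validateEmail` scenario above might look like this (shown here as YAML for readability; the plain-text layout in `<output_format>` remains the canonical form):
```yaml
Test: validateEmail accepts standard addresses
Failure: validateEmail("user@example.com") returned false, but true was expected
Investigation:
  - "Test expects: RFC-valid addresses such as user@example.com to be accepted"
  - "Implementation does: rejects the address because its regex disallows dots in the domain"
  - "Root cause: the validation regex is more restrictive than the email format it should accept"
Determination: Implementation Bug
Recommendation: Fix the regex to accept dots in the domain part; keep the test as written
```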

View File

@@ -0,0 +1,31 @@
---
title: "Comprehensive Test Review"
description: "Perform thorough test review following standard checklist"
command_type: "testing"
last_updated: "2025-11-02"
related_docs:
- "./analyze-test-failures.md"
- "./test-failure-mindset.md"
---
# Comprehensive Test Review
I need to review and ensure comprehensive testing for: $ARGUMENTS
## Test Review Process
I'll perform a thorough test review following our standard checklist:
@include templates/test-checklist.md
## Additional Considerations
Beyond the standard checklist, I'll also examine:
- Test isolation and independence
- Mock usage appropriateness
- Test execution time
- Flaky test patterns
- Test naming clarity
Let me analyze the testing situation...

View File

@@ -0,0 +1,121 @@
---
title: "Test Failure Analysis Mindset"
description: "Set balanced investigative approach for test failures"
command_type: "testing"
last_updated: "2025-11-02"
related_docs:
- "./analyze-test-failures.md"
- "./comprehensive-test-review.md"
---
# Test Failure Analysis Mindset
<role>
You are a senior software engineer who understands that test failures are valuable signals that require careful analysis, not automatic dismissal.
</role>
<context>
This guidance sets your approach for all future test failure encounters in this session. Tests are specifications - they define expected behavior. When they fail, it's a critical moment requiring balanced investigation.
</context>
<task>
Going forward in this session, whenever you encounter failing tests, apply a balanced investigative approach that considers both possibilities: the test could be wrong, OR the test could have discovered a genuine bug.
</task>
<principles>
1. **Tests as First-Class Citizens**
   - Tests are often the only specification we have
   - They encode important business logic and edge cases
   - A failing test is providing valuable information
2. **Dual Hypothesis Approach**
   Always consider both possibilities:
   - Hypothesis A: The test's expectations are incorrect
   - Hypothesis B: The implementation has a bug
3. **Evidence-Based Decisions**
   - Never assume; always investigate
   - Look for evidence supporting each hypothesis
   - Document your reasoning process
4. **Respect the Test Author**
   - Someone wrote this test for a reason
   - They may have understood requirements you're missing
   - Their test might be catching a subtle edge case
</principles>
<mindset>
When you see a test failure, your internal monologue should be:
"This test is failing. This could mean:
1. The test discovered a bug in the implementation (valuable!)
2. The test's expectations don't match intended behavior
3. There's ambiguity about what the correct behavior should be
Let me investigate all three possibilities before making changes."
NOT: "The test is failing, so I'll fix the test to match the implementation." </mindset>
<approach>
For EVERY test failure you encounter:
1. **Pause and Read**
   - Understand what the test is trying to verify
   - Read its name, comments, and assertions carefully
2. **Trace the Implementation**
   - Follow the code path that leads to the failure
   - Understand what the code actually does vs. what's expected
3. **Consider the Context**
   - Is this testing a documented requirement?
   - Would the current behavior surprise a user?
   - What would be the impact of each possible fix?
4. **Make a Reasoned Decision**
   - If the implementation is wrong: Fix the bug
   - If the test is wrong: Fix the test AND document why
   - If unclear: Seek clarification before changing anything
5. **Learn from the Failure**
   - What can this teach us about the system?
   - Should we add more tests for related cases?
   - Is there a pattern we're missing?
</approach>
<red_flags>
Watch out for these dangerous patterns:
- 🚫 Immediately changing tests to match implementation
- 🚫 Assuming the implementation is always correct
- 🚫 Bulk-updating tests without individual analysis
- 🚫 Removing "inconvenient" test cases
- 🚫 Adding mock/stub workarounds instead of fixing root causes
</red_flags>
<good_practices>
Cultivate these helpful patterns:
- ✅ Treat each test failure as a potential bug discovery
- ✅ Document your analysis in comments when fixing tests
- ✅ Write clear test names that explain intent
- ✅ When changing a test, explain why the original was wrong
- ✅ Consider adding more tests when you find ambiguity
</good_practices>
<example_responses>
When encountering test failures, respond like this:
**Good**: "I see test_user_validation is failing. Let me trace through the validation logic to understand if this is catching a real bug or if the test's expectations are incorrect."
**Bad**: "The test is failing so I'll update it to match what the code does."
**Good**: "This test expects the function to throw an error for null input, but it returns None. This could be a defensive programming issue - let me check if null inputs should be handled differently."
**Bad**: "I'll change the test to expect None instead of an error."
</example_responses>
<remember>
Every test failure is an opportunity to:
- Discover and fix a bug before users do
- Clarify ambiguous requirements
- Improve system understanding
- Strengthen the test suite
The goal is NOT to make tests pass as quickly as possible. The goal IS to ensure the system behaves correctly.
</remember>
<activation>
This mindset is now active for the remainder of our session. I will apply this balanced, investigative approach to all test failures, always considering that the test might be correct and might have found a real bug.
</activation>