Initial commit

Zhongwei Li
2025-11-30 08:32:10 +08:00
commit 087d2b04d6
6 changed files with 417 additions and 0 deletions

---
name: git-pushing
description: Stage, commit, and push git changes with conventional commit messages. Use when user wants to commit and push changes, mentions pushing to remote, or asks to save and push their work. Also activates when user says "push changes", "commit and push", or similar git workflow requests.
---
# Git Push Workflow
Stage all changes, create a conventional commit, and push to the remote branch.
## When to Use
Automatically activate when the user:
- Explicitly asks to push changes ("push this", "commit and push")
- Mentions saving work to remote ("save to github", "push to remote")
- Completes a feature and wants to share it
- Says phrases like "let's push this up" or "commit these changes"
## Workflow
### 1. Check Git Status
Run `git status` to understand:
- Which files have changed
- What will be committed
- Current branch name
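The checks above map onto two quick commands (a minimal sketch; `--show-current` requires git 2.22 or later):

```bash
# Inspect the working tree and branch before staging anything.
git status --short          # one line per changed file (XY path)
git branch --show-current   # the branch that `git push` will target
```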
### 2. Stage Changes
- Run `git add .` to stage all changes
- Alternatively, stage specific files if a partial commit is needed
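Both options side by side (the specific paths below are hypothetical examples):

```bash
# Stage everything under the repository root...
git add .
# ...or stage only the files relevant to this commit (example paths).
git add src/module.py tests/test_module.py
```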
### 3. Create Commit Message
**If user provided a message:**
- Use it directly
**If no message provided:**
- Analyze changes using `git diff`
- Generate a conventional commit message:
- Format: `type(scope): description`
- Types: `feat`, `fix`, `refactor`, `docs`, `test`, `chore`
- Keep description concise (50-90 characters)
- Use imperative mood: "Add" not "Added"
- Always append Claude Code footer:
```
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
```
**Use heredoc format:**
```bash
git commit -m "$(cat <<'EOF'
commit message here
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
EOF
)"
```
### 4. Push to Remote
- Run `git push` to push commits
- If the push fails because the branch has diverged, inform the user and ask how to proceed
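A defensive sketch of the push step; since the right recovery depends on the user's preference, the script only reports and stops rather than rebasing or force-pushing on its own:

```bash
# Try the push; on failure, surface the error instead of guessing at a fix.
if ! git push 2>push.err; then
  cat push.err
  echo "Push failed -- the branch may have diverged from its upstream."
  echo "Ask the user whether to rebase, merge, or force-push."
fi
```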
### 5. Confirm Success
- Report commit hash
- Summarize what was committed
- Confirm push succeeded
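The confirmation details come straight from git (a minimal sketch):

```bash
# Gather the facts to report back to the user.
git rev-parse --short HEAD      # abbreviated commit hash
git log -1 --pretty=%s          # subject line of the new commit
git status --short --branch     # branch line; no ahead/behind marker means the push landed
```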
## Examples
User: "Push these changes"
→ Check status, stage all, generate commit message, push
User: "Commit with message 'fix: resolve table extraction issue'"
→ Use provided message, push
User: "Let's save this to github"
→ Activate workflow, generate appropriate commit message

---
name: review-implementing
description: Process and implement code review feedback systematically. Use when user provides reviewer comments, PR feedback, code review notes, or asks to implement suggestions from reviews. Activates on phrases like "implement this feedback", "address review comments", or when user pastes reviewer notes.
---
# Review Feedback Implementation
Systematically process and implement changes based on code review feedback.
## When to Use
Automatically activate when the user:
- Provides reviewer comments or feedback
- Pastes PR review notes
- Mentions implementing review suggestions
- Says "address these comments" or "implement feedback"
- Shares a list of changes requested by reviewers
## Systematic Workflow
### 1. Parse Reviewer Notes
Identify individual feedback items:
- Split numbered lists (1., 2., etc.)
- Handle bullet points or unnumbered feedback
- Extract distinct change requests
- Clarify any ambiguous items before starting
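When the notes arrive as pasted text, a rough first pass can pull out the numbered and bulleted items (`review.txt` is a hypothetical file holding the pasted feedback):

```bash
# Extract lines that start a numbered item (1., 2., ...) or a bullet (-, *).
grep -E '^[[:space:]]*([0-9]+\.|[-*])[[:space:]]' review.txt
```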
### 2. Create Todo List
Use TodoWrite tool to create actionable tasks:
- Each feedback item becomes one or more todos
- Break down complex feedback into smaller tasks
- Make tasks specific and measurable
- Mark first task as `in_progress` before starting
Example:
```
- Add type hints to extract function
- Fix duplicate tag detection logic
- Update docstring in chain.py
- Add unit test for edge case
```
### 3. Implement Changes Systematically
For each todo item:
**Locate relevant code:**
- Use Grep to search for functions/classes
- Use Glob to find files by pattern
- Read current implementation
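On the command line these lookups map onto grep; the function and directory names below are illustrative, not part of any particular project:

```bash
# Find where the function under review is defined and where it is used.
grep -rn "def extract" src/        # definition site(s), with line numbers
grep -rln "extract(" src/ tests/   # files that call it
```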
**Make changes:**
- Use Edit tool for modifications
- Follow project conventions (CLAUDE.md)
- Preserve existing functionality unless the feedback explicitly requests a behavior change
**Verify changes:**
- Check syntax correctness
- Run relevant tests if applicable
- Ensure changes address the reviewer's intent
**Update status:**
- Mark todo as `completed` immediately after finishing
- Move to next todo (only one `in_progress` at a time)
### 4. Handle Different Feedback Types
**Code changes:**
- Use Edit tool for existing code
- Follow type hint conventions (PEP 604/585)
- Maintain consistent style
**New features:**
- Create new files with Write tool if needed
- Add corresponding tests
- Update documentation
**Documentation:**
- Update docstrings following project style
- Modify markdown files as needed
- Keep explanations concise
**Tests:**
- Write tests as functions, not classes
- Use descriptive names
- Follow pytest conventions
**Refactoring:**
- Preserve functionality
- Improve code structure
- Run tests to verify no regressions
### 5. Validation
After implementing changes:
- Run affected tests
- Check for linting errors: `uv run ruff check`
- Verify changes don't break existing functionality
### 6. Communication
Keep user informed:
- Update todo list in real-time
- Ask for clarification on ambiguous feedback
- Report blockers or challenges
- Summarize changes at completion
## Edge Cases
**Conflicting feedback:**
- Ask user for guidance
- Explain the conflict clearly
**Breaking changes required:**
- Notify user before implementing
- Discuss impact and alternatives
**Tests fail after changes:**
- Fix tests before marking todo complete
- Ensure all related tests pass
**Referenced code doesn't exist:**
- Ask user for clarification
- Verify understanding before proceeding
## Important Guidelines
- **Always use TodoWrite** for tracking progress
- **Mark todos completed immediately** after each item
- **Only one todo in_progress** at any time
- **Don't batch completions** - update status in real-time
- **Ask questions** for unclear feedback
- **Run tests** if changes affect tested code
- **Follow CLAUDE.md conventions** for all code changes
- **Use conventional commits** if creating commits afterward
## Example
User: "Implement these review comments:
1. Add type hints to the extract function
2. Fix the duplicate tag detection logic
3. Update the docstring in chain.py"
**Actions:**
1. Create TodoWrite with 3 items
2. Mark item 1 as in_progress
3. Grep for extract function
4. Read file containing function
5. Edit to add type hints
6. Mark item 1 completed
7. Mark item 2 in_progress
8. Repeat process for remaining items
9. Summarize all changes made

skills/test-fixing/SKILL.md
---
name: test-fixing
description: Run tests and systematically fix all failing tests using smart error grouping. Use when user asks to fix failing tests, mentions test failures, runs test suite and failures occur, or requests to make tests pass. Activates on phrases like "fix the tests", "tests are failing", or "make the test suite green".
---
# Test Fixing Workflow
Systematically identify and fix all failing tests using smart grouping strategies.
## When to Use
Automatically activate when the user:
- Explicitly asks to fix tests ("fix these tests", "make tests pass")
- Reports test failures ("tests are failing", "test suite is broken")
- Completes implementation and wants tests passing
- Mentions CI/CD failures due to tests
## Systematic Approach
### 1. Initial Test Run
Run `make test` to identify all failing tests.
Analyze output for:
- Total number of failures
- Error types and patterns
- Affected modules/files
### 2. Smart Error Grouping
Group similar failures by:
- **Error type**: ImportError, AttributeError, AssertionError, etc.
- **Module/file**: Same file causing multiple test failures
- **Root cause**: Missing dependencies, API changes, refactoring impacts
Prioritize groups by:
- Number of affected tests (highest impact first)
- Dependency order (fix infrastructure before functionality)
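A rough tally of error types can seed the grouping (`test.log` is a hypothetical capture of the `make test` output):

```bash
# Count failures by exception class, most common first.
grep -oE '[A-Za-z]+Error' test.log | sort | uniq -c | sort -rn
```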
### 3. Systematic Fixing Process
For each group (starting with highest impact):
1. **Identify root cause**
- Read relevant code
- Check recent changes with `git diff`
- Understand the error pattern
2. **Implement fix**
- Use Edit tool for code changes
- Follow project conventions (see CLAUDE.md)
- Make minimal, focused changes
3. **Verify fix**
- Run subset of tests for this group
- Use pytest markers or file patterns:
```bash
uv run pytest tests/path/to/test_file.py -v
uv run pytest -k "pattern" -v
```
- Ensure group passes before moving on
4. **Move to next group**
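The root-cause checks in step 1 usually start from recent history (a minimal sketch):

```bash
# What changed recently, and where?
git log --oneline -5          # recent commits for context
git diff --stat HEAD~1        # files touched by the last commit
```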
### 4. Fix Order Strategy
**Infrastructure first:**
- Import errors
- Missing dependencies
- Configuration issues
**Then API changes:**
- Function signature changes
- Module reorganization
- Renamed variables/functions
**Finally, logic issues:**
- Assertion failures
- Business logic bugs
- Edge case handling
### 5. Final Verification
After all groups are fixed:
- Run the complete test suite: `make test`
- Verify no regressions
- Check that test coverage remains intact
## Best Practices
- Fix one group at a time
- Run focused tests after each fix
- Use `git diff` to understand recent changes
- Look for patterns in failures
- Don't move to next group until current passes
- Keep changes minimal and focused
## Example Workflow
User: "The tests are failing after my refactor"
1. Run `make test` → 15 failures identified
2. Group errors:
- 8 ImportErrors (module renamed)
- 5 AttributeErrors (function signature changed)
- 2 AssertionErrors (logic bugs)
3. Fix ImportErrors first → Run subset → Verify
4. Fix AttributeErrors → Run subset → Verify
5. Fix AssertionErrors → Run subset → Verify
6. Run full suite → All pass ✓