Initial commit

This commit is contained in:
Zhongwei Li
2025-11-30 08:40:11 +08:00
commit 8b119df38b
11 changed files with 853 additions and 0 deletions

View File

@@ -0,0 +1,108 @@
---
name: code-execution
description: Execute Python code locally with marketplace API access for 90%+ token savings on bulk operations. Activates when user requests bulk operations (10+ files), complex multi-step workflows, iterative processing, or mentions efficiency/performance.
---
# Code Execution
Execute Python locally with API access. **90-99% token savings** for bulk operations.
## When to Use
- Bulk operations (10+ files)
- Complex multi-step workflows
- Iterative processing across many files
- User mentions efficiency/performance
## How to Use
Use direct Python imports in Claude Code:
```python
from execution_runtime import fs, code, transform, git
# Code analysis (metadata only!)
functions = code.find_functions('app.py', pattern='handle_.*')
# File operations
code_block = fs.copy_lines('source.py', 10, 20)
fs.paste_code('target.py', 50, code_block)
# Bulk transformations
result = transform.rename_identifier('.', 'oldName', 'newName', '**/*.py')
# Git operations
git.git_add(['.'])
git.git_commit('feat: refactor code')
```
**If not installed:** Run `~/.claude/plugins/marketplaces/mhattingpete-claude-skills/execution-runtime/setup.sh`
## Available APIs
- **Filesystem** (`fs`): copy_lines, paste_code, search_replace, batch_copy
- **Code Analysis** (`code`): find_functions, find_classes, analyze_dependencies - returns METADATA only!
- **Transformations** (`transform`): rename_identifier, remove_debug_statements, batch_refactor
- **Git** (`git`): git_status, git_add, git_commit, git_push
## Pattern
1. **Analyze locally** (metadata only, not source)
2. **Process locally** (all operations in execution)
3. **Return summary** (not data!)
## Examples
**Bulk refactor (50 files):**
```python
from execution_runtime import transform
result = transform.rename_identifier('.', 'oldName', 'newName', '**/*.py')
# Returns: {'files_modified': 50, 'total_replacements': 247}
```
**Extract functions:**
```python
from execution_runtime import code, fs
functions = code.find_functions('app.py', pattern='.*_util$')  # Metadata only!
for func in functions:
    code_block = fs.copy_lines('app.py', func['start_line'], func['end_line'])
    fs.paste_code('utils.py', -1, code_block)
result = {'functions_moved': len(functions)}
```
**Code audit (100 files):**
```python
from execution_runtime import code
from pathlib import Path
files = list(Path('.').glob('**/*.py'))
issues = []
for file in files:
    deps = code.analyze_dependencies(str(file))  # Metadata only!
    if deps.get('complexity', 0) > 15:
        issues.append({'file': str(file), 'complexity': deps['complexity']})
result = {'files_audited': len(files), 'high_complexity': len(issues)}
```
## Best Practices
✅ Return summaries, not data
✅ Use code_analysis (returns metadata, not source)
✅ Batch operations
✅ Handle errors, return error count
❌ Don't return all code to context
❌ Don't read full source when you need metadata
❌ Don't process files one by one
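The error-handling bullet can be sketched as a generic wrapper. Here `process` is a placeholder for any per-file operation, not part of the runtime API; only the summary dict should reach the conversation:

```python
def process_files(paths, process):
    """Apply `process` to each path, returning counts instead of contents."""
    processed, errors = 0, 0
    for path in paths:
        try:
            process(path)
            processed += 1
        except Exception:
            errors += 1  # count the failure; don't dump tracebacks into context
    # Summary only: this dict is all that should be returned
    return {'files_processed': processed, 'errors': errors}
```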
## Token Savings
| Files | Traditional | Execution | Savings |
|-------|-------------|-----------|---------|
| 10 | 5,000 tokens | 500 tokens | 90% |
| 50 | 25,000 tokens | 600 tokens | 97.6% |
| 100 | 150,000 tokens | 1,000 tokens | 99.3% |

View File

@@ -0,0 +1,23 @@
"""
Example: Bulk Refactoring Across Entire Codebase
This example shows how to rename an identifier across all Python files
in a project with maximum efficiency.
"""
from api.code_transform import rename_identifier
# Rename function across all Python files
result = rename_identifier(
    pattern='.',             # Current directory
    old_name='getUserData',
    new_name='fetchUserData',
    file_pattern='**/*.py',  # All Python files recursively
    regex=False              # Exact identifier match
)
# Result contains summary only (not all file contents!)
# Token usage: ~500 tokens total
# vs ~25,000 tokens with traditional approach
print(f"Modified {result['files_modified']} files")
print(f"Total replacements: {result['total_replacements']}")

View File

@@ -0,0 +1,76 @@
"""
Example: Comprehensive Codebase Audit
Analyze code quality across entire project with minimal tokens.
"""
from api.code_analysis import analyze_dependencies, find_unused_imports
from pathlib import Path
# Find all Python files
files = list(Path('.').glob('**/*.py'))
print(f"Analyzing {len(files)} files...")
issues = {
    'high_complexity': [],
    'unused_imports': [],
    'large_files': []
}
total_lines = 0
# Analyze each file (metadata only, not source!)
for file in files:
    file_str = str(file)
    # Get complexity metrics
    deps = analyze_dependencies(file_str)
    total_lines += deps.get('lines', 0)
    # Flag high complexity
    if deps.get('complexity', 0) > 15:
        issues['high_complexity'].append({
            'file': file_str,
            'complexity': deps['complexity'],
            'functions': deps['functions'],
            'avg_complexity': deps.get('avg_complexity_per_function', 0)
        })
    # Flag large files
    if deps.get('lines', 0) > 500:
        issues['large_files'].append({
            'file': file_str,
            'lines': deps['lines'],
            'functions': deps['functions']
        })
    # Find unused imports
    unused = find_unused_imports(file_str)
    if unused:
        issues['unused_imports'].append({
            'file': file_str,
            'count': len(unused),
            'imports': unused
        })
# Return summary (NOT all the data!)
result = {
    'files_audited': len(files),
    'total_lines': total_lines,  # accumulated above; no second analysis pass
    'issues': {
        'high_complexity': len(issues['high_complexity']),
        'unused_imports': len(issues['unused_imports']),
        'large_files': len(issues['large_files'])
    },
    'top_complexity_issues': sorted(
        issues['high_complexity'],
        key=lambda x: x['complexity'],
        reverse=True
    )[:5]  # Only top 5
}
print("\nAudit complete:")
print(f" High complexity files: {result['issues']['high_complexity']}")
print(f" Files with unused imports: {result['issues']['unused_imports']}")
print(f" Large files (>500 lines): {result['issues']['large_files']}")
# Token usage: ~2,000 tokens for 100 files
# vs ~150,000 tokens loading all files into context

View File

@@ -0,0 +1,36 @@
"""
Example: Extract Functions to New File
Shows how to find and move functions to a separate file
with minimal token usage.
"""
from api.code_analysis import find_functions
from api.filesystem import copy_lines, paste_code, read_file, write_file
# Find utility functions (returns metadata ONLY, not source code)
functions = find_functions('app.py', pattern='.*_util$', regex=True)
print(f"Found {len(functions)} utility functions")
# Extract imports from original file
content = read_file('app.py')
imports = [line for line in content.splitlines()
           if line.strip().startswith(('import ', 'from '))]
# Create new utils.py with deduplicated imports (order preserved)
write_file('utils.py', '\n'.join(dict.fromkeys(imports)) + '\n\n')
# Copy each function to utils.py
for func in functions:
    print(f"  Moving {func['name']} (lines {func['start_line']}-{func['end_line']})")
    code = copy_lines('app.py', func['start_line'], func['end_line'])
    paste_code('utils.py', -1, code + '\n\n')  # -1 = append to end
result = {
'functions_extracted': len(functions),
'function_names': [f['name'] for f in functions]
}
# Token usage: ~800 tokens
# vs ~15,000 tokens reading full file into context

View File

@@ -0,0 +1,112 @@
---
name: code-refactor
description: Perform bulk code refactoring operations like renaming variables/functions across files, replacing patterns, and updating API calls. Use when users request renaming identifiers, replacing deprecated code patterns, updating method calls, or making consistent changes across multiple locations.
---
# Code Refactor
Systematic code refactoring across files. **Auto-switches to execution mode** for 10+ files (90% token savings).
## Mode Selection
- **1-9 files**: Use native tools (Grep + Edit with replace_all)
- **10+ files**: Automatically use `code-execution` skill
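The file-count threshold above can be captured in a tiny helper; `choose_mode` is an illustrative name, not part of any runtime API:

```python
def choose_mode(file_count: int) -> str:
    """Return the refactoring mode implied by the 10-file threshold."""
    return 'execution' if file_count >= 10 else 'native'
```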
**Execution example (50 files):**
```python
from api.code_transform import rename_identifier
result = rename_identifier('.', 'oldName', 'newName', '**/*.py')
# Returns: {'files_modified': 50, 'total_replacements': 247}
# ~500 tokens vs ~25,000 tokens traditional
```
## When to Use
- "rename [identifier] to [new_name]"
- "replace all [pattern] with [replacement]"
- "refactor to use [new_pattern]"
- "update all calls to [function/API]"
- "convert [old_pattern] to [new_pattern]"
## Core Workflow (Native Mode)
### 1. Find All Occurrences
```
Grep(pattern="getUserData", output_mode="files_with_matches") # Find files
Grep(pattern="getUserData", output_mode="content", -n=true, -B=2, -A=2) # Verify with context
```
### 2. Replace All Instances
```
Edit(
    file_path="src/api.js",
    old_string="getUserData",
    new_string="fetchUserData",
    replace_all=true
)
```
### 3. Verify Changes
```
Grep(pattern="getUserData", output_mode="files_with_matches") # Should return none
```
## Workflow Examples
### Rename Function
1. Find: `Grep(pattern="getUserData", output_mode="files_with_matches")`
2. Count: "Found 15 occurrences in 5 files"
3. Replace in each file with `replace_all=true`
4. Verify: Re-run Grep
5. Suggest: Run tests
### Replace Deprecated Pattern
1. Find: `Grep(pattern="\\bvar\\s+\\w+", output_mode="content", -n=true)`
2. Analyze: Check if reassigned (let) or constant (const)
3. Replace: `Edit(old_string="var count = 0", new_string="let count = 0")`
4. Verify: `npm run lint`
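The let-vs-const decision in step 2 can be approximated with a regex heuristic. This is a sketch only (a real tool would inspect the AST), and `suggest_keyword` is a hypothetical helper:

```python
import re

def suggest_keyword(var_name: str, source: str) -> str:
    """Suggest `let` if the variable is reassigned after declaration, else `const`."""
    # Matches e.g. `count = 1`, `count += 1`, `count++`,
    # but not the `var count = 0` declaration or `count == x` comparisons
    reassign = re.compile(
        rf'(?<!var )\b{re.escape(var_name)}\s*(=(?!=)|\+=|-=|\*=|/=|\+\+|--)'
    )
    return 'let' if reassign.search(source) else 'const'
```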
### Update API Calls
1. Find: `Grep(pattern="/api/auth/login", output_mode="content", -n=true)`
2. Replace: `Edit(old_string="'/api/auth/login'", new_string="'/api/v2/authentication/login'", replace_all=true)`
3. Test: Recommend integration tests
## Best Practices
**Planning:**
- Find all instances first
- Review context of each match
- Inform user of scope
- Consider edge cases (strings, comments)
**Safe Process:**
1. Search → Find all
2. Analyze → Verify appropriate
3. Inform → Tell user scope
4. Execute → Make changes
5. Verify → Confirm applied
6. Test → Suggest running tests
**Edge Cases:**
- Strings/comments: Ask if should update
- Exported APIs: Warn of breaking changes
- Case sensitivity: Be explicit
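The strings-and-comments edge case can be pre-screened before a bulk replace. A rough line classifier (hypothetical helper; it ignores multi-line strings and escaped quotes):

```python
import re

def risky_matches(identifier: str, source: str):
    """Return line numbers where `identifier` appears inside a string or comment."""
    risky = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for m in re.finditer(re.escape(identifier), line):
            prefix = line[:m.start()]
            in_comment = '#' in prefix
            # An odd number of quotes before the match suggests we're inside a string
            in_string = prefix.count('"') % 2 == 1 or prefix.count("'") % 2 == 1
            if in_comment or in_string:
                risky.append(lineno)
                break
    return risky
```

Matches flagged this way should be shown to the user before deciding whether to rename them.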
## Tool Reference
**Edit with replace_all:**
- `replace_all=true`: Replace all occurrences
- `replace_all=false`: Replace only first (or fail if multiple)
- Must match EXACTLY (whitespace, quotes)
**Grep patterns:**
- `-n=true`: Show line numbers
- `-B=N, -A=N`: Context lines
- `-i=true`: Case-insensitive
- `type="py"`: Filter by file type
## Integration
- **test-fixing**: Fix broken tests after refactoring
- **code-transfer**: Move refactored code
- **feature-planning**: Plan large refactorings

View File

@@ -0,0 +1,138 @@
---
name: code-transfer
description: Transfer code between files with line-based precision. Use when users request copying code from one location to another, moving functions or classes between files, extracting code blocks, or inserting code at specific line numbers.
---
# Code Transfer
Transfer code between files with precise line-based control. **Dual-mode operation**: native tools (1-10 files) or execution mode (10+ files, 90% token savings).
## Operation Modes
### Basic Mode (Default)
Use Read, Edit, Bash scripts for 1-10 file operations. Works immediately, no setup required.
### Execution Mode (10+ files)
```python
from api.filesystem import batch_copy
from api.code_analysis import find_functions
functions = find_functions('app.py', pattern='handle_.*')
operations = [{
    'source_file': 'app.py',
    'start_line': f['start_line'],
    'end_line': f['end_line'],
    'target_file': 'handlers.py',
    'target_line': -1
} for f in functions]
batch_copy(operations)
```
## When to Use
- "copy this code to [file]"
- "move [function/class] to [file]"
- "extract this to a new file"
- "insert at line [number]"
- "reorganize into separate files"
## Core Operations
### 1. Extract Source Code
```
Read(file_path="src/auth.py") # Full file
Read(file_path="src/auth.py", offset=10, limit=20) # Line range
Grep(pattern="def authenticate", -n=true, -A=10) # Find function
```
### 2. Insert at Specific Line
Use `line_insert.py` script for line-based insertion:
```bash
python3 skills/code-transfer/scripts/line_insert.py <file> <line_number> <code> [--backup]
```
**Examples:**
```bash
# Insert function at line 50
python3 skills/code-transfer/scripts/line_insert.py src/utils.py 50 "def helper():\n pass"
# Insert with backup
python3 skills/code-transfer/scripts/line_insert.py src/utils.py 50 "code" --backup
# Insert at beginning
python3 skills/code-transfer/scripts/line_insert.py src/new.py 1 "import os"
```
**When to use:**
- User specifies exact line number
- Inserting into new/empty files
- Inserting at beginning/end without context
### 3. Insert Relative to Content
Use **Edit** when insertion point is relative to existing code:
```
Edit(
    file_path="src/utils.py",
    old_string="def existing():\n    pass",
    new_string="def existing():\n    pass\n\ndef new():\n    return True"
)
```
## Workflow Examples
### Copy Function Between Files
1. Find: `Grep(pattern="def validate_user", -n=true, -A=20)`
2. Extract: `Read(file_path="auth.py", offset=45, limit=15)`
3. Check target: `Read(file_path="validators.py")`
4. Insert: Use `line_insert.py` or Edit based on context
### Extract Class to New File
1. Locate: `Grep(pattern="class DatabaseConnection", -n=true, -A=50)`
2. Extract: `Read(file_path="original.py", offset=100, limit=50)`
3. Create: `Write(file_path="database.py", content="<extracted>")`
4. Update imports: `Edit` in original file
5. Remove old class: `Edit` with replacement
### Insert at Specific Line
1. Validate: `Read(file_path="main.py", offset=20, limit=10)`
2. Insert: `python3 skills/code-transfer/scripts/line_insert.py main.py 25 "logger.info('...')" --backup`
3. Verify: `Read(file_path="main.py", offset=23, limit=5)`
### Reorganize Into Modules
1. Analyze: `Read(file_path="utils.py")`
2. Identify groups: `Grep(pattern="^def |^class ", -n=true)`
3. Extract each category: `Write` new files
4. Update original: Re-export or redirect
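Step 2's grouping pass can be sketched with a regex over top-level definitions. Both `group_definitions` and its first-word grouping rule are illustrative choices, not a prescribed algorithm:

```python
import re

def group_definitions(source: str):
    """Map a suggested module name to the top-level defs/classes it would hold."""
    groups = {}
    # Only matches at column 0, so class methods are skipped
    for m in re.finditer(r'^(?:def|class)\s+(\w+)', source, flags=re.MULTILINE):
        name = m.group(1)
        key = name.split('_')[0].lower()  # e.g. parse_json -> 'parse'
        groups.setdefault(key, []).append(name)
    return groups
```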
## Best Practices
**Planning:**
- Understand dependencies (imports, references)
- Identify exact start/end of code block
- Check target file structure
- Ensure necessary imports included
**Preservation:**
- Include docstrings and comments
- Transfer related functions together
- Update imports in both files
- Maintain formatting/indentation
**Validation:**
- Verify insertion placement
- Check syntax
- Test imports
- Suggest running tests
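The syntax check in the list above can be automated for Python targets with the standard `ast` module; a minimal sketch:

```python
import ast
from pathlib import Path

def syntax_ok(path: str) -> bool:
    """Parse the file; return False (and report the location) on a SyntaxError."""
    try:
        ast.parse(Path(path).read_text(encoding='utf-8'), filename=path)
        return True
    except SyntaxError as e:
        print(f"Syntax error in {path}:{e.lineno}: {e.msg}")
        return False
```

Running it over both source and target files after a transfer catches indentation mistakes before the test suite does.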
**Backups:**
- Use `--backup` for significant changes
- Critical file operations
- Large deletions
## Integration
- **code-refactor**: Refactor after transferring
- **test-fixing**: Run tests after reorganizing
- **feature-planning**: Plan large reorganizations

View File

@@ -0,0 +1,188 @@
#!/usr/bin/env python3
"""
Line-based code insertion utility.
This script provides precise line-number-based code insertion,
which complements Claude's native Edit tool (which requires exact string matching).
Usage:
python line_insert.py <file_path> <line_number> <code> [--backup]
Examples:
# Insert at line 10
python line_insert.py src/main.py 10 "print('hello')"
# Insert with backup
python line_insert.py src/main.py 10 "print('hello')" --backup
"""
import argparse
import sys
from pathlib import Path
from datetime import datetime
def validate_file_path(file_path: Path) -> None:
    """
    Validate that the file path is safe to use.

    Args:
        file_path: Path object to validate

    Raises:
        ValueError: If path is invalid or unsafe
    """
    # Resolve to absolute path
    abs_path = file_path.resolve()
    # Basic security: prevent directory traversal
    if ".." in str(file_path):
        raise ValueError(f"Path contains '..' which is not allowed: {file_path}")
    # Check parent directory exists (or can be created)
    if not abs_path.parent.exists():
        raise ValueError(f"Parent directory does not exist: {abs_path.parent}")
def create_backup(file_path: Path) -> Path:
    """
    Create a backup of the file with timestamp.

    Args:
        file_path: Path to the file to backup

    Returns:
        Path to the backup file
    """
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    backup_path = file_path.with_suffix(f"{file_path.suffix}.backup_{timestamp}")
    if file_path.exists():
        backup_path.write_text(file_path.read_text())
        print(f"✅ Created backup: {backup_path}", file=sys.stderr)
    return backup_path
def insert_code(
    file_path: Path,
    line_number: int,
    code: str,
    create_backup_flag: bool = False
) -> None:
    """
    Insert code at a specific line number in a file.

    Args:
        file_path: Path to the target file
        line_number: Line number where code should be inserted (1-based)
        code: Code to insert (can be multiple lines)
        create_backup_flag: Whether to create a backup before modifying

    Raises:
        ValueError: If line_number is invalid
        IOError: If file operations fail
    """
    # Validate inputs
    validate_file_path(file_path)
    if line_number < 1:
        raise ValueError(f"Line number must be >= 1, got: {line_number}")
    # Create backup if requested and file exists
    if create_backup_flag and file_path.exists():
        create_backup(file_path)
    # Read existing content or start with empty
    if file_path.exists():
        with open(file_path, 'r', encoding='utf-8') as f:
            lines = f.readlines()
    else:
        lines = []
        print(f"  Creating new file: {file_path}", file=sys.stderr)
    # Prepare code lines to insert
    code_lines = code.splitlines(keepends=True)
    # Ensure the last inserted line ends with a newline
    if code_lines and not code_lines[-1].endswith('\n'):
        code_lines[-1] += '\n'
    # Insert at the specified line (1-based index):
    # line 1 inserts at the beginning, line len(lines)+1 appends at the end
    insert_index = line_number - 1
    if insert_index > len(lines):
        # If line number is beyond the end of the file, pad with blank lines
        lines.extend(['\n'] * (insert_index - len(lines)))
    # Insert the code
    lines[insert_index:insert_index] = code_lines
    # Write back to file
    file_path.parent.mkdir(parents=True, exist_ok=True)
    with open(file_path, 'w', encoding='utf-8') as f:
        f.writelines(lines)
    print(f"✅ Inserted {len(code_lines)} line(s) at line {line_number} in {file_path}", file=sys.stderr)
def main():
    """Main entry point for CLI usage."""
    parser = argparse.ArgumentParser(
        description="Insert code at a specific line number in a file",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Examples:
  # Insert a single line at line 10
  %(prog)s src/main.py 10 "print('hello')"

  # Insert multiple lines
  %(prog)s src/main.py 10 "def foo():\\n    pass"

  # Insert with backup
  %(prog)s src/main.py 10 "print('hello')" --backup
"""
    )
    parser.add_argument(
        'file_path',
        type=Path,
        help='Path to the target file'
    )
    parser.add_argument(
        'line_number',
        type=int,
        help='Line number where code should be inserted (1-based)'
    )
    parser.add_argument(
        'code',
        type=str,
        help='Code to insert (use \\n for newlines)'
    )
    parser.add_argument(
        '--backup',
        action='store_true',
        help='Create a backup before modifying the file'
    )
    args = parser.parse_args()
    try:
        insert_code(
            file_path=args.file_path,
            line_number=args.line_number,
            # The CLI accepts literal "\n" sequences for newlines (see --help)
            code=args.code.replace('\\n', '\n'),
            create_backup_flag=args.backup
        )
        sys.exit(0)
    except Exception as e:
        print(f"❌ Error: {e}", file=sys.stderr)
        sys.exit(1)
if __name__ == "__main__":
    main()

View File

@@ -0,0 +1,84 @@
---
name: file-operations
description: Analyze files and get detailed metadata including size, line counts, modification times, and content statistics. Use when users request file information, statistics, or analysis without modifying files.
---
# File Operations
Analyze files and retrieve metadata using Claude's native tools without modifying files.
## When to Use
- "analyze [file]"
- "get file info for [file]"
- "how many lines in [file]"
- "compare [file1] and [file2]"
- "file statistics"
## Core Operations
### File Size & Metadata
```bash
stat -f "%z bytes, modified %Sm" [file_path]  # Single file (BSD/macOS; GNU: stat -c "%s bytes, modified %y")
ls -lh [directory]                            # Multiple files
du -h [file_path]                             # Human-readable size
```
### Line Counts
```bash
wc -l [file_path] # Single file
wc -l [file1] [file2] # Multiple files
find [dir] -name "*.py" -print0 | xargs -0 wc -l # Directory total (safe with spaces in paths)
```
### Content Analysis
Use **Read** to analyze structure, then count functions/classes/imports.
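For Python files, the same counts can be gathered in one pass with the standard `re` module; a sketch mirroring the Grep patterns in this section (`content_stats` is an illustrative name):

```python
import re

def content_stats(source: str) -> dict:
    """Count top-level functions, classes, and import statements in Python source."""
    return {
        'functions': len(re.findall(r'^def ', source, flags=re.MULTILINE)),
        'classes': len(re.findall(r'^class ', source, flags=re.MULTILINE)),
        'imports': len(re.findall(r'^(?:import|from) ', source, flags=re.MULTILINE)),
    }
```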
### Pattern Search
```
Grep(pattern="^def ", output_mode="count", path="src/") # Count functions
Grep(pattern="TODO|FIXME", output_mode="content", -n=true) # Find TODOs
Grep(pattern="^import ", output_mode="count") # Count imports
```
### Find Files
```
Glob(pattern="**/*.py")
```
## Workflow Examples
### Comprehensive File Analysis
1. Get size/mod time: `stat -f "%z bytes, modified %Sm" file.py`
2. Count lines: `wc -l file.py`
3. Read file: `Read(file_path="file.py")`
4. Count functions: `Grep(pattern="^def ", output_mode="count")`
5. Count classes: `Grep(pattern="^class ", output_mode="count")`
### Compare File Sizes
1. Find files: `Glob(pattern="src/**/*.py")`
2. Get sizes: `ls -lh src/**/*.py`
3. Total size: `du -sh src/*.py`
### Code Quality Metrics
1. Total lines: `find . -name "*.py" -print0 | xargs -0 wc -l`
2. Test files: `find . -name "test_*.py" | wc -l`
3. TODOs: `Grep(pattern="TODO|FIXME|HACK", output_mode="count")`
### Find Largest Files
```bash
find . -type f -not -path "./node_modules/*" -exec du -h {} + | sort -rh | head -20
```
## Best Practices
- **Non-destructive**: Use Read/stat/wc, never modify
- **Efficient**: Read small files fully, use Grep for large files
- **Context-aware**: Compare to project averages, suggest optimizations
## Integration
Works with:
- **code-auditor**: Comprehensive analysis
- **code-transfer**: After identifying large files
- **codebase-documenter**: Understanding file purposes