Initial commit

This commit is contained in:
Zhongwei Li
2025-11-30 08:56:10 +08:00
commit 400ca062d1
48 changed files with 18674 additions and 0 deletions


@@ -0,0 +1,239 @@
---
name: contextune:usage
description: Track and optimize Claude Code usage with intelligent recommendations
---
# Usage Tracking & Optimization
Contextune integrates with Claude Code's `/usage` command to provide intelligent context optimization, cost savings, and proactive warnings.
## Quick Start
### Option 1: Manual Paste (Most Accurate - Recommended)
1. Run Claude Code's usage command:
```
/usage
```
2. Copy the entire output
3. Run this command and paste when prompted:
```
/contextune:usage
```
### Option 2: Automatic Estimation
Contextune automatically estimates usage based on tracked operations (~85% accurate for Contextune tasks only).
View current estimates:
```
/contextune:stats
```
## What You Get
### Intelligent Recommendations
Based on your current usage, Contextune provides:
- **Model Selection**: Auto-switch to Haiku when approaching limits (87% cost savings)
- **Parallel Task Limits**: Recommended max concurrent tasks based on remaining quota
- **Proactive Warnings**: Alerts when approaching session or weekly limits
- **Opus Opportunities**: Suggestions to use Opus when quota is available for complex tasks
### Example Output
```
📊 Current Usage Analysis
═══════════════════════════════════════
📈 Usage Metrics:
Session: 19% (resets 1:00am America/New_York)
Weekly: 90% (resets Oct 29, 10:00pm America/New_York) ⚠️ CRITICAL
Opus: 0%
🎯 Status: CRITICAL
⚠️ Warnings:
• CRITICAL: 90% weekly usage (resets Oct 29, 10:00pm)
• You have ~10% capacity until Oct 29, 10:00pm
💡 Recommendations:
• Switch research tasks to Haiku (87% cost savings)
• Recommended max parallel tasks: 2
• Consider deferring non-critical tasks until weekly reset
• ✨ Opus available (0% used) - great for complex architecture tasks
📊 Historical Trends:
• Average daily usage: 12.9% (7-day trend)
• Projected usage to limit: [calculation]
• Cost savings from Haiku: $0.42 this week
```
## Usage Thresholds
| Weekly Usage | Status | Automatic Actions |
|-------------|--------|-------------------|
| 0-70% | ✅ Healthy | Normal operation, all models available |
| 71-85% | ⚠️ Warning | Suggest Haiku for research tasks |
| 86-95% | 🚨 Critical | Auto-switch to Haiku, limit parallel tasks |
| 96-100% | 🛑 Limit | Defer all tasks until reset |
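The threshold table above can be sketched as a simple lookup. This is an illustrative sketch, not Contextune's actual API; the function name `usage_status` and the action strings are assumptions:

```python
def usage_status(weekly_percent: float) -> tuple[str, str]:
    """Map weekly usage percentage to a status label and suggested action."""
    if weekly_percent <= 70:
        return "healthy", "normal operation, all models available"
    elif weekly_percent <= 85:
        return "warning", "suggest Haiku for research tasks"
    elif weekly_percent <= 95:
        return "critical", "auto-switch to Haiku, limit parallel tasks"
    else:
        return "limit", "defer all tasks until reset"

status, action = usage_status(90)
print(status, "->", action)  # critical -> auto-switch to Haiku, limit parallel tasks
```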
## Integration Points
### Automatic Optimization
Once you've logged your usage, Contextune automatically:
1. **Research Tasks** (`/ctx:research`):
- Uses Haiku if weekly > 80% (saves $0.12 per task)
- Adjusts number of parallel agents based on quota
2. **Planning** (`/ctx:plan`):
- Warns if approaching limits
- Suggests task deferral if critical
- Recommends queue for after reset
3. **Execution** (`/ctx:execute`):
- Limits parallel tasks based on remaining quota
- Queues excess tasks for after reset
- Provides time estimates
## Manual Paste Format
Contextune can parse Claude Code's `/usage` output in this format:
```
Current session
[progress bar] 19% used
Resets 1:00am (America/New_York)
Current week (all models)
[progress bar] 90% used
Resets Oct 29, 10:00pm (America/New_York)
Current week (Opus)
[progress bar] 0% used
```
Just paste the entire block when prompted.
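A minimal sketch of how the pasted block could be parsed. The regexes below assume exactly the format shown above; a real parser (and `parse_usage_paste` itself is a hypothetical name) would need to tolerate changes in Claude Code's output:

```python
import re

def parse_usage_paste(text: str) -> dict:
    """Extract session/weekly/Opus percentages from a pasted /usage block."""
    sections = {
        "session": r"Current session.*?(\d+)% used",
        "weekly": r"Current week \(all models\).*?(\d+)% used",
        "opus": r"Current week \(Opus\).*?(\d+)% used",
    }
    result = {}
    for key, pattern in sections.items():
        m = re.search(pattern, text, re.DOTALL)
        result[key] = int(m.group(1)) if m else None
    return result

sample = """Current session
[progress bar] 19% used
Resets 1:00am (America/New_York)
Current week (all models)
[progress bar] 90% used
Resets Oct 29, 10:00pm (America/New_York)
Current week (Opus)
[progress bar] 0% used"""
print(parse_usage_paste(sample))  # {'session': 19, 'weekly': 90, 'opus': 0}
```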
## Cost Savings Examples
### Research Task (3 parallel agents)
**Without optimization:**
- Model: Sonnet 3.5
- Cost: ~$0.24 per research task
- Weekly (10 tasks): $2.40
**With Contextune (90% usage):**
- Model: Haiku 4.5 (auto-switched)
- Cost: ~$0.02 per research task
- Weekly (10 tasks): $0.20
- **Saved: $2.20/week** ✅
### Parallel Execution (5 tasks)
**Without optimization:**
- Spawn all 5 tasks
- Risk hitting 100% limit mid-execution
- Failed tasks, wasted quota
**With Contextune (90% usage):**
- Execute 2 tasks now (within quota)
- Queue 3 tasks for after reset
- **Avoided: quota exhaustion** ✅
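The run-now/queue split above can be sketched as a quota-aware partition. The per-task cost constant is a stand-in; Contextune would derive it from tracked history, and `partition_tasks` is a hypothetical helper:

```python
def partition_tasks(tasks: list[str], weekly_percent: float,
                    est_percent_per_task: float = 5.0,
                    ceiling: float = 100.0) -> tuple[list[str], list[str]]:
    """Split tasks into run-now vs queue-for-after-reset given remaining quota."""
    remaining = ceiling - weekly_percent
    fits = max(0, int(remaining // est_percent_per_task))
    return tasks[:fits], tasks[fits:]

run_now, queued = partition_tasks(["t1", "t2", "t3", "t4", "t5"], weekly_percent=90)
print(run_now, queued)  # ['t1', 't2'] ['t3', 't4', 't5']
```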
## Technical Details
### Data Storage
Usage snapshots are stored in `.contextune/observability.db`:
```sql
CREATE TABLE usage_history (
timestamp REAL PRIMARY KEY,
session_percent REAL,
weekly_percent REAL,
opus_percent REAL,
session_reset TEXT,
weekly_reset TEXT
);
```
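Writing and reading a snapshot with this schema might look like the following (using an in-memory database for illustration; real code would open `.contextune/observability.db`):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")  # stand-in for .contextune/observability.db
conn.execute("""CREATE TABLE IF NOT EXISTS usage_history (
    timestamp REAL PRIMARY KEY,
    session_percent REAL,
    weekly_percent REAL,
    opus_percent REAL,
    session_reset TEXT,
    weekly_reset TEXT
)""")
# Record one snapshot (values from the example output above)
conn.execute(
    "INSERT INTO usage_history VALUES (?, ?, ?, ?, ?, ?)",
    (time.time(), 19.0, 90.0, 0.0,
     "1:00am America/New_York", "Oct 29, 10:00pm America/New_York"),
)
# Fetch the most recent weekly percentage
latest = conn.execute(
    "SELECT weekly_percent FROM usage_history ORDER BY timestamp DESC LIMIT 1"
).fetchone()
print(latest[0])  # 90.0
```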
### Token Estimation
When manual data isn't available, Contextune estimates usage from tracked operations:
```python
# Rough estimates (Claude 3.5 limits; actual quotas vary by plan)
SESSION_LIMIT = 200_000   # tokens per session
WEEKLY_LIMIT = 1_000_000  # tokens per week
# Calculation
session_percent = (tracked_tokens / SESSION_LIMIT) * 100
```
**Accuracy**: ~85% for Contextune operations only (doesn't track other Claude Code sessions)
### Three-Tier Fallback
Contextune tries multiple approaches:
1. **Headless query** (experimental, may not be reliable)
2. **Token estimation** (85% accurate, automatic)
3. **Manual paste** (100% accurate, user-triggered)
## Troubleshooting
### "Unable to fetch usage data"
**Cause**: All automatic methods failed
**Solution**: Use manual paste workflow:
```
/usage
# Copy output
/contextune:usage
# Paste when prompted
```
### "Usage seems low compared to /usage"
**Cause**: Token estimation only tracks Contextune operations
**Solution**: Use manual paste for accurate data including all Claude Code sessions
### "Headless query taking too long"
**Cause**: Experimental feature timing out
**Solution**: Press Ctrl+C to cancel, use manual paste instead
## Privacy & Security
- Usage data stored locally in `.contextune/observability.db`
- No data sent to external servers
- Reset times preserved from Claude Code output
- Historical data helps optimize future tasks
## Related Commands
- `/usage` - Claude Code's native usage command
- `/context` - Claude Code's context management
- `/contextune:stats` - View Contextune detection statistics
- `/contextune:config` - Configure Contextune settings
## Future Enhancements
Planned for v1.0:
- **Predictive Analysis**: "At current rate, you'll hit 95% by Tuesday"
- **Budget Tracking**: "$X spent this week / $Y monthly budget"
- **Email Reports**: Weekly usage summaries
- **Team Coordination**: Share usage data across team
- **Official MCP Integration**: Direct API access (when available from Anthropic)
---
**Pro Tip**: Run `/contextune:usage` at the start of your session to enable intelligent optimization for all subsequent tasks.

commands/ctx-cleanup.md Normal file

@@ -0,0 +1,298 @@
---
name: ctx:cleanup
description: Clean up completed worktrees and branches
keywords:
- clean up
- cleanup worktrees
- remove worktrees
- cleanup tasks
- cleanup parallel
- remove completed
- clean worktrees
executable: true
---
# Parallel Cleanup - Remove Completed Worktrees
You are performing cleanup of completed parallel development work.
**Contextune Integration:** This command can be triggered via `/contextune:parallel:cleanup` or natural language like "clean up parallel worktrees", "remove completed tasks".
---
## Execution Workflow
### Step 1: Identify Merged Branches
**Check which parallel branches have been merged to main:**
```bash
# List all feature branches that are fully merged
git branch --merged main | grep "feature/task-"
```
**Expected output:**
```
feature/task-0
feature/task-2
```
**Interpret:**
- Listed branches: Safe to delete (already in main) ✅
- Not listed: Still has unmerged commits ⚠️
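Step 1 can also be done programmatically. A minimal sketch, assuming `git` is on PATH and the base branch is `main`; the helper names are illustrative, not part of the plugin:

```python
import subprocess

def filter_task_branches(branches: list[str]) -> list[str]:
    """Keep only parallel-task branches (feature/task-*)."""
    return [b for b in branches if b.startswith("feature/task-")]

def merged_task_branches(base: str = "main") -> list[str]:
    """Task branches fully merged into base; these are safe to delete.

    Equivalent to: git branch --merged main | grep "feature/task-"
    """
    out = subprocess.run(
        ["git", "branch", "--merged", base, "--format=%(refname:short)"],
        capture_output=True, text=True, check=True,
    ).stdout
    return filter_task_branches(out.splitlines())
```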
---
### Step 2: Show Cleanup Plan (Ask User)
**Before deleting anything, show what will be removed:**
```
🧹 Cleanup Plan
**Will remove:**
✅ Worktree: worktrees/task-0 (merged to main)
✅ Local branch: feature/task-0 (merged)
✅ Remote branch: origin/feature/task-0 (if exists)
✅ Worktree: worktrees/task-2 (merged to main)
✅ Local branch: feature/task-2 (merged)
✅ Remote branch: origin/feature/task-2 (if exists)
**Will keep:**
⏳ Worktree: worktrees/task-1 (not merged - has uncommitted work)
Proceed with cleanup? (yes/no)
```
**Ask user for confirmation before proceeding.**
---
### Step 3: Remove Merged Worktrees
**For each merged branch, remove its worktree:**
```bash
# Remove worktree for task-0
git worktree remove worktrees/task-0
# Remove worktree for task-2
git worktree remove worktrees/task-2
```
**Expected output per removal:**
```
✅ Removed worktree 'worktrees/task-0'
```
**If removal fails:**
```
Error: worktree has uncommitted changes
```
→ Skip this worktree, warn user
---
### Step 4: Delete Local Merged Branches
**Delete the local branches that were merged:**
```bash
# Delete local branch
git branch -d feature/task-0
# Delete local branch
git branch -d feature/task-2
```
**Expected output:**
```
Deleted branch feature/task-0 (was abc1234).
```
**If deletion fails:**
```
error: The branch 'feature/task-0' is not fully merged.
```
→ Use `-D` to force (ask user first!) or skip
---
### Step 5: Delete Remote Branches (Optional)
**Ask user:** "Also delete remote branches?"
**If yes:**
```bash
# Delete remote branch
git push origin --delete feature/task-0
# Delete remote branch
git push origin --delete feature/task-2
```
**Expected output:**
```
To github.com:user/repo.git
- [deleted] feature/task-0
```
**If no:** Skip this step
---
### Step 6: Archive Completed Tasks (Optional)
**Move completed task files to archive:**
```bash
# Create archive directory
mkdir -p .parallel/archive/completed-$(date +%Y%m%d)
# Move completed task files
mv .parallel/plans/tasks/task-0.md .parallel/archive/completed-$(date +%Y%m%d)/
mv .parallel/plans/tasks/task-2.md .parallel/archive/completed-$(date +%Y%m%d)/
```
**Or keep them for reference** (task files are lightweight)
---
### Step 7: Prune Stale References
**Clean up git's internal references:**
```bash
git worktree prune
git remote prune origin
```
**Expected output:**
```
✅ Pruned worktree references
✅ Pruned remote references
```
---
### Step 8: Verify Cleanup
**Confirm everything was cleaned up:**
```bash
# Check remaining worktrees
git worktree list
# Check remaining feature branches
git branch | grep "feature/task-"
# Check remote branches
git branch -r | grep "feature/task-"
```
**Expected:** Only unmerged tasks should remain
---
### Step 9: Report Results
```
✅ Cleanup complete!
**Removed:**
• 2 worktrees (task-0, task-2)
• 2 local branches
• 2 remote branches
**Kept:**
• 1 worktree (task-1 - unmerged)
**Remaining parallel work:**
- task-1: In progress (3 commits ahead)
**Next actions:**
• Continue work on task-1
• Or run /ctx:status for detailed progress
```
---
## Contextune-Specific Additions
### Natural Language Triggers
Users can trigger this command with:
- `/contextune:parallel:cleanup` (explicit)
- "clean up parallel worktrees"
- "remove completed tasks"
- "clean up parallel work"
- "delete merged branches"
Contextune automatically detects these intents.
### Global Availability
Works in ALL projects after installing Contextune:
```bash
/plugin install slashsense
```
### Related Commands
When suggesting next steps, mention:
- `/contextune:parallel:status` - Check what's left
- `/contextune:parallel:execute` - Start new parallel work
- `/contextune:parallel:plan` - Plan next iteration
---
## Example User Interactions
**Natural Language:**
```
User: "clean up the parallel worktrees"
You: [Execute cleanup workflow]
1. Identify merged branches
2. Ask for confirmation
3. Clean up safely
4. Report results
```
**Explicit Command:**
```
User: "/contextune:parallel:cleanup"
You: [Execute cleanup workflow]
```
**With Options:**
```
User: "/contextune:parallel:cleanup --dry-run"
You: [Show what WOULD be deleted]
Don't actually delete anything
Provide option to run for real
```
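The `--dry-run` behavior can be sketched as computing the cleanup plan without touching anything. Everything here (`cleanup_plan`, the branch-to-worktree mapping) is hypothetical, for illustration only:

```python
def cleanup_plan(merged: list[str], all_worktrees: dict[str, str],
                 dry_run: bool = True) -> dict:
    """Compute what cleanup would remove, without deleting anything.

    all_worktrees maps branch name -> worktree path;
    merged lists branches already in main.
    """
    remove = {b: all_worktrees[b] for b in merged if b in all_worktrees}
    keep = {b: p for b, p in all_worktrees.items() if b not in merged}
    return {"remove": remove, "keep": keep, "dry_run": dry_run}

plan = cleanup_plan(
    merged=["feature/task-0", "feature/task-2"],
    all_worktrees={
        "feature/task-0": "worktrees/task-0",
        "feature/task-1": "worktrees/task-1",
        "feature/task-2": "worktrees/task-2",
    },
)
print(sorted(plan["remove"]))  # ['feature/task-0', 'feature/task-2']
```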
---
## Safety First
Always:
- Verify branches are merged before deleting
- Ask for user confirmation
- Provide recovery instructions if something goes wrong
- Support dry-run mode for safety
- Never delete unmerged work automatically
---
## Implementation Notes
- Use the exact same implementation as `/.claude/commands/parallel/cleanup.md`
- Add Contextune branding where appropriate
- Support both explicit and natural language invocation
- Be conservative - when in doubt, keep rather than delete

commands/ctx-configure.md Normal file

@@ -0,0 +1,331 @@
---
name: ctx:configure
description: Interactive configuration for Contextune features (output style, status bar, CLAUDE.md)
keywords:
- configure
- setup
- customize
- configuration
- setup contextune
- configure environment
- customization guide
- output style
- install
- uninstall
executable: commands/ctx-configure.py
---
# Contextune Interactive Configuration
**Interactive setup** for Contextune features with guided prompts.
**What this configures:**
- ✨ Extraction-optimized output style (automatic documentation)
- 🎨 Status bar integration (optional)
- 📝 CLAUDE.md integration (optional)
Run `/ctx:configure` and Claude will guide you through interactive prompts.
---
## Quick Start
```bash
/ctx:configure
```
Claude will detect your current setup and present interactive options via dialog prompts.
---
## Interactive Flows
### Flow 1: First-Time Setup (Complete Setup in One Command)
When you run `/ctx:configure` and nothing is installed, Claude guides you through:
**Step 1: "Would you like to install the extraction-optimized output style?"**
- **Install** - Enable automatic documentation extraction
- **Skip** - Continue without
**Step 2 (if Install): "Where should the output style be installed?"**
- **This project** - Install to `.claude/output-styles/` (git-trackable, team can share)
- **All projects** - Install to `~/.claude/output-styles/` (available everywhere)
**Step 3: "Would you like to add Contextune to your status bar?"**
- **Yes** - Show Contextune commands in status bar (zero token cost)
- **No** - Skip status bar integration
**Result:** Complete setup with your preferred configuration ✅
---
### Flow 2: Manage Existing Installation
If customizations are already installed, Claude offers:
**"Manage Contextune configuration"**
Current installation displayed (e.g., "Output style: user-level, Status line: ✅")
- **Activate style** - Make extraction-optimized active for this session
- **Reinstall** - Change installation scope (user ↔ project)
- **Uninstall** - Remove all customizations
- **Keep as-is** - No changes
---
### Flow 3: Uninstall (Clean Removal)
If you choose to uninstall, Claude shows:
**⚠️ Important Warning:**
> Before disabling the Contextune plugin (`/plugin disable contextune`),
> run this uninstall process FIRST.
>
> The plugin's hooks won't be available after disabling,
> so remove customizations while the plugin is still active.
**"Proceed with uninstallation?"**
- **Uninstall** - Remove all customizations
- **Cancel** - Keep everything as-is
**If Uninstall: "Clean up extracted documentation files?"**
- **Keep files** - Preserve .plans/ directories with your documentation
- **Clean up** - Remove all .plans/ directories (⚠️ Cannot be undone)
**Result:** Clean removal + guidance for plugin disable ✅
---
## What Gets Configured
### 1. Extraction-Optimized Output Style ⭐ **Recommended**
**What it does:**
- Formats all design work in structured YAML blocks
- Enables automatic extraction to .plans/ files when session ends
- Zero manual documentation work
- Perfect DRY workflow (no redundant Read operations)
**Installation Options:**
**User-level** (`~/.claude/output-styles/`):
- ✅ Available in all projects
- ✅ Single installation
- ❌ Not git-trackable
**Project-level** (`.claude/output-styles/`):
- ✅ Git-trackable (team can share)
- ✅ Project-specific configuration
- ❌ Must install per project
**Benefits:**
- SessionEnd hook extracts designs automatically
- Next session has context restored
- Never use Write/Read tools for documentation
---
### 2. Status Bar Integration (Optional)
**What it does:**
- Shows Contextune commands in your status bar
- Zero token cost (UI-only display)
- Quick reference for common commands
**Installation:**
- Interactive prompt asks during `/ctx:configure`
- Claude modifies `~/.claude/statusline.sh` automatically
- Status bar updates immediately
**Display:**
```
Contextune: /ctx:research | /ctx:plan | /ctx:execute
```
---
## ✅ What Works Automatically (No Setup Needed)
After installing Contextune, these features work immediately:
1. **Intent Detection** - Hook detects slash commands from natural language
2. **Skills** - Claude auto-suggests parallelization and discovers capabilities
3. **Commands** - All `/ctx:*` commands available in autocomplete
4. **SessionEnd Hook** - Extracts documentation automatically (works with or without output style)
**You don't need to configure anything!** Output style just makes extraction more reliable (99% vs 60%).
---
## 🎨 Optional Customizations
For power users who want extra visibility:
1. **CLAUDE.md** - Persistent context at session start (~150 tokens)
2. **Status Bar** - Always-visible command reminders
**These are still manual** (not yet handled by `/ctx:configure`)
**Trade-offs:**
- ✅ Pro: Contextune always top-of-mind for Claude
- ✅ Pro: Visual reminders in status bar
- ⚠️ Con: ~150 tokens per session (CLAUDE.md)
- ⚠️ Con: Manual setup required
- ⚠️ Con: You must manually update if plugin changes
---
## Option 1: Add to CLAUDE.md
**File:** `~/.claude/CLAUDE.md`
**Add this section:**
```markdown
## Contextune Plugin (Parallel Development)
**Quick Research**: `/ctx:research` - Fast answers using 3 parallel agents (1-2 min, $0.07)
**Planning**: `/ctx:plan` - Create parallel development plans with grounded research
**Execution**: `/ctx:execute` - Run tasks in parallel using git worktrees
**Monitoring**: `/ctx:status` - Check progress across all worktrees
**Cleanup**: `/ctx:cleanup` - Remove completed worktrees and branches
**Natural Language Examples:**
- "research best React state libraries" → Triggers `/ctx:research`
- "create parallel plan for auth, dashboard, API" → Triggers `/ctx:plan`
- "what can Contextune do?" → Activates `intent-recognition` skill
**Skills (Auto-Activated):**
- `parallel-development-expert` - Suggests parallelization when you mention multiple tasks
- `intent-recognition` - Helps discover Contextune capabilities
**Cost Optimization**: Uses Haiku agents (87% cheaper than Sonnet) for execution.
Full documentation: Type `/ctx:research what can Contextune do?`
```
**How to add:**
```bash
# 1. Open CLAUDE.md
code ~/.claude/CLAUDE.md
# 2. Add the section above anywhere in the file
# 3. Save and restart Claude Code session
```
**Cost:** ~150 tokens per session (loaded at session start)
---
## Option 2: Add to Status Bar
**File:** `~/.claude/statusline.sh`
**Add this section before the final `echo` command:**
```bash
# Section: Contextune Commands (if plugin installed)
if grep -q '"slashsense@Contextune".*true' ~/.claude/settings.json 2>/dev/null; then
OUTPUT="${OUTPUT} | ${YELLOW}Contextune:${RESET} /ctx:research | /ctx:plan | /ctx:execute"
fi
```
**How to add:**
```bash
# 1. Open statusline.sh
code ~/.claude/statusline.sh
# 2. Find the line near the end that starts with: echo -e "$OUTPUT"
# 3. Add the section above BEFORE that echo line
# 4. Save (changes apply immediately on next status bar refresh)
```
**Cost:** Zero context (UI-only display)
---
## Option 3: Validate Plugin Status
Run this command to check Contextune installation:
```bash
# Check if plugin is enabled
grep -A 2 '"slashsense@Contextune"' ~/.claude/settings.json
# List available skills
ls -la ~/.claude/plugins/*/skills/*/SKILL.md
# List available commands
ls -la ~/.claude/plugins/*/commands/*.md | grep ss-
```
**Expected output:**
- Plugin enabled: `"slashsense@Contextune": true`
- Skills: `parallel-development-expert`, `intent-recognition`
- Commands: `ss-research`, `ss-plan`, `ss-execute`, `ss-status`, `ss-cleanup`, `ss-stats`, `ss-verify`
---
## Recommendation
**Most users: Don't customize!**
- Skills provide automatic discovery
- Hook provides intent detection
- Commands work out of the box
**Power users who want extra visibility:**
- Add Status Bar section (zero context cost)
- Skip CLAUDE.md (Skills are better for discovery)
**Only if you really want persistent context:**
- Add both CLAUDE.md and Status Bar sections
- Understand the ~150 token cost per session
- Manually update if plugin changes
---
## Troubleshooting
**Q: Contextune commands not appearing?**
```bash
/plugin list # Verify plugin is installed and enabled
/plugin enable slashsense # Enable if disabled
```
**Q: Skills not activating?**
```bash
# Check skills exist
ls ~/.claude/plugins/marketplaces/Contextune/skills/
# Expected: parallel-development-expert/, intent-recognition/
```
**Q: Hook not detecting intents?**
```bash
# Check hook is registered
cat ~/.claude/plugins/marketplaces/Contextune/hooks/hooks.json
# Expected: UserPromptSubmit hook with user_prompt_submit.py
```
---
## Summary
**Built-in (no setup):**
- ✅ Intent detection via hook
- ✅ Discovery via skills
- ✅ All commands available
**Optional customizations (manual):**
- 🎨 CLAUDE.md integration (~150 tokens/session)
- 🎨 Status bar display (zero tokens)
**Need help?**
- Run `/ctx:research what can Contextune do?`
- Ask Claude: "How do I use Contextune for parallel development?"
- Read README: `cat ~/.claude/plugins/marketplaces/Contextune/README.md`

commands/ctx-configure.py Executable file

@@ -0,0 +1,483 @@
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.10"
# dependencies = []
# ///
"""
Enhanced Contextune Configuration
Interactive configuration management using AskUserQuestion tool.
Features:
- Dual-scope output style installation (user-level or project-level)
- Status line integration
- Installation manifest tracking
- Clean uninstallation with warnings
"""
import json
import shutil
from pathlib import Path
import sys
import os
import re
# Add lib to path for manifest import
sys.path.insert(0, str(Path(__file__).parent.parent / 'lib'))
from install_manifest import read_manifest, update_output_style, update_status_line, clear_manifest
def detect_state() -> dict:
"""
Detect current installation state.
Returns:
dict with complete state information
"""
# Check manifest first
manifest = read_manifest()
# Verify files still exist
user_path = Path.home() / ".claude" / "output-styles" / "extraction-optimized.md"
project_path = Path.cwd() / ".claude" / "output-styles" / "extraction-optimized.md"
statusline_path = Path.home() / ".claude" / "statusline.sh"
output_style_installed = False
output_style_scope = None
output_style_path = None
if user_path.exists():
output_style_installed = True
output_style_scope = 'user'
output_style_path = str(user_path)
elif project_path.exists():
output_style_installed = True
output_style_scope = 'project'
output_style_path = str(project_path)
# Check status line
status_line_installed = False
if statusline_path.exists():
try:
content = statusline_path.read_text()
if 'Contextune' in content or 'ctx:' in content:
status_line_installed = True
except IOError:
pass
return {
'output_style': {
'installed': output_style_installed,
'scope': output_style_scope,
'path': output_style_path
},
'status_line': {
'installed': status_line_installed,
'path': str(statusline_path) if statusline_path.exists() else None
},
'manifest': manifest
}
def install_output_style(scope: str = 'user') -> tuple[bool, str]:
"""
Install extraction-optimized output style.
Args:
scope: 'user' for ~/.claude/output-styles/ or 'project' for .claude/output-styles/
Returns:
(success: bool, installed_path: str)
"""
try:
# Find plugin root via CLAUDE_PLUGIN_ROOT env var
plugin_root = os.environ.get('CLAUDE_PLUGIN_ROOT')
if not plugin_root:
plugin_root = Path(__file__).parent.parent
else:
plugin_root = Path(plugin_root)
source = plugin_root / "output-styles" / "extraction-optimized.md"
if not source.exists():
print(f"❌ Source not found: {source}", file=sys.stderr)
return False, ""
# Determine destination based on scope
if scope == 'user':
dest_dir = Path.home() / ".claude" / "output-styles"
else: # project
dest_dir = Path.cwd() / ".claude" / "output-styles"
dest_dir.mkdir(parents=True, exist_ok=True)
dest = dest_dir / "extraction-optimized.md"
# Copy file
shutil.copy(source, dest)
# Update manifest
update_output_style(scope, str(dest))
return True, str(dest)
except Exception as e:
print(f"❌ Installation failed: {e}", file=sys.stderr)
import traceback
traceback.print_exc(file=sys.stderr)
return False, ""
def install_status_line() -> bool:
"""
Add Contextune section to ~/.claude/statusline.sh.
Returns:
bool indicating success
"""
try:
statusline_path = Path.home() / ".claude" / "statusline.sh"
# Create statusline.sh from a template if it doesn't exist
if not statusline_path.exists():
statusline_path.parent.mkdir(parents=True, exist_ok=True)
# Basic template
template = '''#!/bin/bash
# Claude Code Status Line
OUTPUT=""
# Section: Contextune Commands
if grep -q '"contextune.*true' ~/.claude/settings.json 2>/dev/null; then
YELLOW="\\033[1;33m"
RESET="\\033[0m"
OUTPUT="${OUTPUT}${YELLOW}Contextune:${RESET} /ctx:research | /ctx:plan | /ctx:execute"
fi
echo -e "$OUTPUT"
'''
statusline_path.write_text(template)
statusline_path.chmod(0o755)
# Update manifest
update_status_line(True, str(statusline_path))
return True
# Read existing file
content = statusline_path.read_text()
# Check if Contextune already present
if 'Contextune' in content or 'ctx:' in content:
print(" Contextune already in status line", file=sys.stderr)
update_status_line(True, str(statusline_path))
return True
# Find the final echo line
lines = content.split('\n')
insert_index = -1
for i in range(len(lines) - 1, -1, -1):
if lines[i].strip().startswith('echo'):
insert_index = i
break
if insert_index == -1:
# No echo found, append at end
insert_index = len(lines)
# Create Contextune section
contextune_section = [
'',
'# Section: Contextune Commands',
'if grep -q \'"contextune.*true\' ~/.claude/settings.json 2>/dev/null; then',
' YELLOW="\\033[1;33m"',
' RESET="\\033[0m"',
' OUTPUT="${OUTPUT} | ${YELLOW}Contextune:${RESET} /ctx:research | /ctx:plan | /ctx:execute"',
'fi',
''
]
# Insert section before echo
new_lines = lines[:insert_index] + contextune_section + lines[insert_index:]
# Write back
statusline_path.write_text('\n'.join(new_lines))
# Update manifest
update_status_line(True, str(statusline_path))
return True
except Exception as e:
print(f"❌ Status line installation failed: {e}", file=sys.stderr)
import traceback
traceback.print_exc(file=sys.stderr)
return False
def uninstall_output_style(manifest: dict) -> tuple[bool, int]:
"""
Remove output style based on manifest.
Args:
manifest: Installation manifest
Returns:
(success: bool, files_removed: int)
"""
try:
removed = 0
output_style = manifest.get('output_style', {})
if output_style.get('installed'):
path = output_style.get('path')
if path and Path(path).exists():
Path(path).unlink()
removed += 1
print(f"✅ Removed output style: {path}", file=sys.stderr)
return True, removed
except Exception as e:
print(f"❌ Failed to remove output style: {e}", file=sys.stderr)
return False, 0
def uninstall_status_line(manifest: dict) -> tuple[bool, bool]:
"""
Remove Contextune section from status line.
Args:
manifest: Installation manifest
Returns:
(success: bool, removed: bool)
"""
try:
status_line = manifest.get('status_line', {})
if not status_line.get('installed'):
return True, False
statusline_path = Path.home() / ".claude" / "statusline.sh"
if not statusline_path.exists():
return True, False
# Read content
content = statusline_path.read_text()
# Remove Contextune section (from # Section: Contextune to fi)
pattern = r'\n# Section: Contextune Commands\n.*?fi\n'
new_content = re.sub(pattern, '', content, flags=re.DOTALL)
if new_content != content:
statusline_path.write_text(new_content)
print(f"✅ Removed Contextune from status line", file=sys.stderr)
return True, True
return True, False
except Exception as e:
print(f"❌ Failed to remove status line section: {e}", file=sys.stderr)
return False, False
def cleanup_plans_directories() -> int:
"""
Find and remove .plans/ directories.
Returns:
Number of directories removed
"""
try:
search_paths = [
Path.cwd(),
Path.home() / "DevProjects",
Path.home() / "Projects",
Path.home() / "Code",
Path.home() / "dev"
]
plans_dirs = []
for search_path in search_paths:
if search_path.exists() and search_path.is_dir():
for plans_dir in search_path.glob('**/.plans'):
# Limit depth
relative = plans_dir.relative_to(search_path) if plans_dir.is_relative_to(search_path) else plans_dir
if len(relative.parts) <= 4:
plans_dirs.append(plans_dir)
removed = 0
for plans_dir in plans_dirs:
try:
shutil.rmtree(plans_dir)
removed += 1
print(f" Removed: {plans_dir}", file=sys.stderr)
except Exception as e:
print(f" Failed to remove {plans_dir}: {e}", file=sys.stderr)
return removed
except Exception as e:
print(f"❌ Cleanup failed: {e}", file=sys.stderr)
return 0
def output_instructions_for_claude():
"""
Output JSON instructions for Claude to use AskUserQuestion.
"""
state = detect_state()
instructions = {
'state': state,
'next_action': 'use_ask_user_question',
'instructions': None
}
if not state['output_style']['installed']:
# Not installed - offer to install
instructions['instructions'] = {
'action': 'prompt_install',
'message': (
'Output style not installed. Use AskUserQuestion tool:\n\n'
'Question: "Would you like to install the extraction-optimized output style?"\n'
'Header: "Setup"\n'
'Options:\n'
'1. Install (Enable automatic documentation extraction)\n'
'2. Skip (Can install later with /ctx:configure)\n\n'
'If Install selected: Ask about scope (next prompt)\n'
'If Skip: Show how to run /ctx:configure later'
),
'scope_prompt': (
'Question: "Where should the output style be installed?"\n'
'Header: "Scope"\n'
'Options:\n'
'1. This project - Install to .claude/output-styles/ (project-specific, git-trackable)\n'
'2. All projects - Install to ~/.claude/output-styles/ (available everywhere)\n\n'
'After scope selected: Ask about status line (next prompt)'
),
'status_line_prompt': (
'Question: "Would you like to add Contextune to your status bar?"\n'
'Header: "Status Bar"\n'
'Options:\n'
'1. Yes (Show Contextune commands in status bar - zero tokens)\n'
'2. No (Skip status bar integration)\n\n'
'After selection: Execute installation with chosen options'
)
}
else:
# Already installed - offer management
scope_text = "user-level" if state['output_style']['scope'] == 'user' else "project-level"
instructions['instructions'] = {
'action': 'prompt_manage',
'current_state': {
'output_style': f"Installed ({scope_text})",
'status_line': "Installed" if state['status_line']['installed'] else "Not installed"
},
'message': (
f'Current installation:\n'
f'• Output style: {state["output_style"]["scope"]}-level\n'
f'• Status line: {"✅" if state["status_line"]["installed"] else "❌"}\n\n'
'Use AskUserQuestion tool:\n\n'
'Question: "Manage Contextune configuration"\n'
'Header: "Configure"\n'
'Options:\n'
'1. Activate style (Make extraction-optimized active now)\n'
'2. Reinstall (Change scope: user ↔ project)\n'
'3. Uninstall (Remove all customizations)\n'
'4. Keep as-is (No changes)\n\n'
'Based on selection, execute appropriate action'
)
}
return instructions
def main():
"""Main entry point for configuration script."""
# Check for command-line arguments
if len(sys.argv) > 1:
arg = sys.argv[1]
if arg == '--install-user':
success, path = install_output_style(scope='user')
if success:
print(f"\n✅ Output style installed (user-level)")
print(f" Location: {path}")
else:
print(f"\n❌ Installation failed")
sys.exit(0 if success else 1)
elif arg == '--install-project':
success, path = install_output_style(scope='project')
if success:
print(f"\n✅ Output style installed (project-level)")
print(f" Location: {path}")
else:
print(f"\n❌ Installation failed")
sys.exit(0 if success else 1)
elif arg == '--install-statusline':
success = install_status_line()
if success:
print(f"\n✅ Status line integration added")
print(f" Location: ~/.claude/statusline.sh")
else:
print(f"\n❌ Status line installation failed")
sys.exit(0 if success else 1)
elif arg == '--uninstall':
manifest = read_manifest()
print("\n🗑️ Uninstalling Contextune customizations...\n")
# Remove output style
success, removed = uninstall_output_style(manifest)
if removed:
print(f"✅ Removed output style")
# Remove status line
success, removed = uninstall_status_line(manifest)
if removed:
print(f"✅ Removed status line integration")
# Clear manifest
clear_manifest()
print(f"\n✅ Uninstallation complete!")
print(f"\n⚠️ IMPORTANT: You can now safely disable the plugin:")
print(f" /plugin disable contextune")
print(f"\nTo reinstall later:")
print(f" /plugin enable contextune")
print(f" /ctx:configure")
sys.exit(0)
elif arg == '--uninstall-with-cleanup':
manifest = read_manifest()
print("\n🗑️ Uninstalling with cleanup...\n")
# Remove output style
uninstall_output_style(manifest)
# Remove status line
uninstall_status_line(manifest)
# Clean .plans/
print(f"\n🗑️ Cleaning .plans/ directories...")
removed_count = cleanup_plans_directories()
print(f"✅ Removed {removed_count} .plans/ directories")
# Clear manifest
clear_manifest()
print(f"\n✅ Complete uninstallation finished!")
print(f"\n⚠️ You can now safely disable the plugin:")
print(f" /plugin disable contextune")
sys.exit(0)
# No arguments - output instructions for Claude
instructions = output_instructions_for_claude()
print(json.dumps(instructions, indent=2))
sys.exit(0)
if __name__ == '__main__':
main()

commands/ctx-dashboard.py Executable file

@@ -0,0 +1,233 @@
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.10"
# dependencies = []
# ///
"""
Contextune Observability Dashboard
Beautiful, real-time dashboard showing:
- Detection statistics
- Performance metrics (P50/P95/P99)
- Matcher efficiency
- Recent errors
- System health
"""
import sys
from pathlib import Path
# Add lib directory to path
PLUGIN_ROOT = Path(__file__).parent.parent
sys.path.insert(0, str(PLUGIN_ROOT / "lib"))
from observability_db import ObservabilityDB
import json
from datetime import datetime
def format_duration(seconds: float) -> str:
"""Format duration in human-readable format."""
if seconds < 60:
return f"{seconds:.0f}s ago"
elif seconds < 3600:
return f"{seconds/60:.0f}m ago"
elif seconds < 86400:
return f"{seconds/3600:.1f}h ago"
else:
return f"{seconds/86400:.1f}d ago"
def render_dashboard():
"""Render comprehensive observability dashboard."""
db = ObservabilityDB(".contextune/observability.db")
stats = db.get_stats()
print("=" * 70)
print("🎯 CONTEXTUNE OBSERVABILITY DASHBOARD".center(70))
print("=" * 70)
print()
# === DETECTION STATISTICS ===
det_stats = stats["detections"]
print("📊 DETECTION STATISTICS")
print("-" * 70)
print(f" Total Detections: {det_stats['total']}")
print()
if det_stats["by_method"]:
print(" By Detection Method:")
for method, count in sorted(det_stats["by_method"].items(), key=lambda x: x[1], reverse=True):
pct = (count / det_stats['total'] * 100) if det_stats['total'] > 0 else 0
            bar = "█" * int(pct / 5)
print(f" {method:15s} {count:4d} ({pct:5.1f}%) {bar}")
print()
if det_stats["by_command"]:
print(" Top Commands:")
for cmd, count in list(det_stats["by_command"].items())[:5]:
pct = (count / det_stats['total'] * 100) if det_stats['total'] > 0 else 0
print(f" {cmd:20s} {count:4d} ({pct:5.1f}%)")
print()
# === MATCHER PERFORMANCE ===
matcher_stats = stats["matchers"]
if matcher_stats:
print("⚡ MATCHER PERFORMANCE")
print("-" * 70)
print(f" {'Method':<15s} {'Avg Latency':>12s} {'Success Rate':>12s}")
print(f" {'-'*15} {'-'*12} {'-'*12}")
# Sort by latency
for method in ["keyword", "model2vec", "semantic"]:
if method in matcher_stats:
m = matcher_stats[method]
latency = m["avg_latency_ms"]
success = m["success_rate"]
# Color code latency
if latency < 1:
latency_str = f"{latency:.3f}ms"
elif latency < 10:
latency_str = f"{latency:.2f}ms"
else:
latency_str = f"{latency:.1f}ms"
print(f" {method:<15s} {latency_str:>12s} {success:>11.1f}%")
print()
# === PERFORMANCE METRICS ===
perf_stats = stats["performance"]
if perf_stats:
print("📈 SYSTEM PERFORMANCE")
print("-" * 70)
print(f" {'Component':<20s} {'P50':>8s} {'P95':>8s} {'P99':>8s} {'Count':>8s}")
print(f" {'-'*20} {'-'*8} {'-'*8} {'-'*8} {'-'*8}")
for component, metrics in perf_stats.items():
p50 = metrics["p50"]
p95 = metrics["p95"]
p99 = metrics["p99"]
count = metrics["count"]
print(f" {component:<20s} {p50:>7.2f}ms {p95:>7.2f}ms {p99:>7.2f}ms {count:>8d}")
print()
# === RECENT DETECTIONS ===
recent = db.get_recent_detections(5)
if recent:
print("🔍 RECENT DETECTIONS (Last 5)")
print("-" * 70)
for d in recent:
timestamp = datetime.fromtimestamp(d["timestamp"])
time_ago = format_duration(datetime.now().timestamp() - d["timestamp"])
prompt = d.get("prompt_preview", "")[:40]
latency = d.get("latency_ms", 0)
print(f" {timestamp.strftime('%H:%M:%S')} ({time_ago})")
            print(f"    → {d['command']} ({d['confidence']*100:.0f}% {d['method']}, {latency:.3f}ms)")
if prompt:
print(f" Prompt: \"{prompt}\"")
print()
# === ERROR TRACKING ===
error_stats = stats["errors"]
if error_stats["total"] > 0:
print("❌ ERROR TRACKING")
print("-" * 70)
print(f" Total Errors: {error_stats['total']}")
print()
if error_stats["by_component"]:
print(" By Component:")
for component, count in sorted(error_stats["by_component"].items(), key=lambda x: x[1], reverse=True):
print(f" {component:20s} {count:4d}")
print()
# Recent errors
recent_errors = db.get_error_summary(24)
if recent_errors:
print(" Recent Errors (Last 24h):")
for err in recent_errors[:3]:
timestamp = datetime.fromtimestamp(err["timestamp"])
time_ago = format_duration(datetime.now().timestamp() - err["timestamp"])
print(f" [{timestamp.strftime('%H:%M:%S')}] {err['component']}")
print(f" {err['error_type']}: {err['message']}")
print(f" ({time_ago})")
print()
# === SYSTEM HEALTH ===
print("🏥 SYSTEM HEALTH")
print("-" * 70)
# Calculate health score
health_score = 100
# Deduct for errors
if error_stats["total"] > 0:
error_penalty = min(30, error_stats["total"] * 5)
health_score -= error_penalty
# Deduct for slow performance
if perf_stats:
for component, metrics in perf_stats.items():
if metrics["p95"] > 100: # > 100ms is slow
health_score -= 10
# Health indicator
    if health_score >= 90:
        health_icon = "🟢"
        health_status = "Excellent"
    elif health_score >= 70:
        health_icon = "🟡"
        health_status = "Good"
    elif health_score >= 50:
        health_icon = "🟠"
        health_status = "Fair"
    else:
        health_icon = "🔴"
        health_status = "Poor"
print(f" Overall Health: {health_icon} {health_score}/100 ({health_status})")
print()
# === RECOMMENDATIONS ===
recommendations = []
if matcher_stats.get("semantic", {}).get("success_rate", 0) < 50:
recommendations.append("⚠ Semantic router has low success rate - check Cohere API key")
if error_stats["total"] > 10:
recommendations.append("⚠ High error count - review error logs")
if perf_stats.get("hook", {}).get("p95", 0) > 50:
recommendations.append("⚠ Hook P95 latency >50ms - may impact UX")
if det_stats["total"] < 10:
recommendations.append("💡 Try natural language queries like 'research best React libraries'")
if recommendations:
print("💡 RECOMMENDATIONS")
print("-" * 70)
for rec in recommendations:
print(f" {rec}")
print()
print("=" * 70)
print()
print("💡 Commands:")
print(" /ctx:help Full command reference")
print(" /ctx:research Fast parallel research")
print(" /ctx:plan Create parallel development plan")
print()
if __name__ == "__main__":
try:
render_dashboard()
except FileNotFoundError:
print("⚠ No observability data yet. Use Contextune first to collect metrics!")
sys.exit(0)
except Exception as e:
print(f"❌ Error rendering dashboard: {e}", file=sys.stderr)
sys.exit(1)

commands/ctx-design.md Normal file

@@ -0,0 +1,182 @@
---
name: ctx:design
description: Design system architecture, APIs, and component interfaces with structured workflow
keywords:
- design
- architect
- architecture
- system design
- api design
- design pattern
- structure
executable: true
---
# Design Architecture - Structured Design Workflow
Systematic architecture design following: Understand → Research → Specify → Decompose → Plan
This command provides a structured approach to system design, API design, and architectural planning.
## When to Use
Use `/ctx:design` when you need to:
- Design a new system or feature
- Plan API architecture
- Structure component interfaces
- Make build vs buy decisions
- Break down complex architectural problems
- Create implementation plans with dependencies
## Workflow
### 1. Understand the Problem
Extract essentials:
- Core problem (what's the real need?)
- Constraints (time, budget, skills, existing systems)
- Success criteria (what does "done" look like?)
- Assumptions (make implicit explicit)
If unclear, ask:
- "What problem does this solve?"
- "What systems must it integrate with?"
- "Expected scale/volume?"
- "Must-haves vs. nice-to-haves?"
### 2. Research Existing Solutions
**Run WebSearch queries (use WebSearch tool):**
```bash
# Search for best libraries/tools
WebSearch: "best {technology} for {problem} 2025"
# Search for implementation examples
WebSearch: "{problem} implementation examples latest"
# Search for known issues
WebSearch: "{problem} common pitfalls challenges"
# Compare top solutions
WebSearch: "{library A} vs {library B} comparison 2025"
```
**Example for authentication:**
```bash
WebSearch: "best authentication library Node.js 2025"
WebSearch: "JWT vs Session authentication comparison 2025"
WebSearch: "authentication implementation examples Express"
WebSearch: "authentication security pitfalls 2025"
```
**For each solution found, evaluate:**
- **Maturity:** Check GitHub stars, last commit date, npm weekly downloads
- **Fit:** Does it solve 80%+ of requirements?
- **Integration:** Compatible with existing tech stack?
- **Cost:** License type, hosting requirements, pricing
- **Risk:** Vendor lock-in, learning curve, community support
**Output format:**
| Solution | Maturity | Fit | Integration | Cost | Risk | Recommendation |
|----------|----------|-----|-------------|------|------|----------------|
| Library A | High (10K⭐) | 95% | ✅ | Free (MIT) | Low | ✅ Use |
| Library B | Medium (2K⭐) | 85% | ✅ | $99/mo | Medium | ❌ Skip |
| Build Custom | N/A | 100% | ✅ | Dev time | High | ❌ Skip |
### 3. Develop Specifications
Structure:
```
## Problem Statement
[1-2 sentences]
## Requirements
- [ ] Functional (High/Med/Low priority)
- [ ] Performance (metrics, scale)
- [ ] Security (requirements)
## Constraints
- Technical: [stack, systems]
- Resources: [time, budget, team]
## Success Criteria
- [Measurable outcomes]
```
### 4. Decompose into Tasks
Process:
1. Identify major components
2. Break into 1-3 day tasks
3. Classify: Independent | Sequential | Parallel-ready
4. Map dependencies
For each task:
- Prerequisites (what must exist first?)
- Outputs (what does it produce?)
- Downstream (what depends on it?)
- Parallelizable? (can run with others?)
### 5. Create Execution Plan
Phase structure:
```
## Phase 1: Foundation (Parallel)
- [ ] Task A - Infrastructure
- [ ] Task B - Data models
- [ ] Task C - CI/CD
## Phase 2: Core (Sequential after Phase 1)
- [ ] Task D - Auth (needs A,B)
- [ ] Task E - API (needs B)
## Phase 3: Features (Mixed)
- [ ] Task F - Feature 1 (needs D,E)
- [ ] Task G - Feature 2 (needs D,E) ← Parallel with F
```
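The phase grouping above can be computed mechanically from the dependency map: each phase is the set of tasks whose prerequisites are already done. A minimal sketch (the task names and the `build_phases` helper are illustrative, not part of this command):

```python
# Group tasks into parallel phases from a dependency map.
# deps maps each task to the set of tasks it needs first.

def build_phases(deps):
    """Return a list of phases; tasks within a phase can run in parallel."""
    phases = []
    done = set()
    remaining = dict(deps)
    while remaining:
        # Tasks whose prerequisites are all satisfied are ready now.
        ready = sorted(t for t, d in remaining.items() if d <= done)
        if not ready:
            raise ValueError("Cyclic dependency detected")
        phases.append(ready)
        done.update(ready)
        for t in ready:
            del remaining[t]
    return phases

deps = {
    "infrastructure": set(),
    "data-models": set(),
    "auth": {"infrastructure", "data-models"},
    "api": {"data-models"},
    "feature-1": {"auth", "api"},
    "feature-2": {"auth", "api"},
}
print(build_phases(deps))
# → [['data-models', 'infrastructure'], ['api', 'auth'], ['feature-1', 'feature-2']]
```

This reproduces the Phase 1/2/3 shape above: foundation tasks run together, core tasks wait on them, and the two features land in the same final phase.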
## Build vs. Buy Decision
| Factor | Build | Buy |
|--------|-------|-----|
| Uniqueness | Core differentiator | Common problem |
| Fit | Tools don't match | 80%+ match |
| Control | Need full control | Standard OK |
| Timeline | Have time | Need speed |
| Expertise | Team has skills | Steep curve |
| Maintenance | Can maintain | Want support |
## Integration with ctx:architect Skill
This command is enhanced by the `ctx:architect` skill, which provides:
- Proactive detection of design opportunities
- Structured workflow guidance
- Research recommendations
- Specification templates
The skill automatically activates when Contextune detects design-related prompts.
## Examples
**API Design:**
```
/ctx:design Design REST API for user management with auth
```
**System Architecture:**
```
/ctx:design Design microservices architecture for e-commerce platform
```
**Component Planning:**
```
/ctx:design Plan authentication system with OAuth2 and JWT
```
## See Also
- `/ctx:research` - Research libraries and best practices
- `/ctx:plan` - Create parallel development plan
- `ctx:architect` skill - Automated design workflow guidance

commands/ctx-execute.md Normal file

File diff suppressed because it is too large

commands/ctx-git-commit.md Normal file

@@ -0,0 +1,295 @@
---
name: ctx:git-commit
description: Deterministic commit and push workflow using scripts (DRY compliant)
keywords:
- commit
- push
- git commit
- commit and push
- save changes
executable: true
---
# Git Commit - Deterministic Commit and Push Workflow
You are executing a deterministic git commit and push workflow using the `commit_and_push.sh` script.
**Cost:** ~$0.002 (545 tokens) vs ~$0.037-0.086 (8K-25K tokens) for multi-tool approach
**Savings:** 93-97% token reduction
---
## Workflow
**IMPORTANT:** Use the `./scripts/commit_and_push.sh` script. DO NOT use manual git commands.
### Step 1: Determine What to Commit
Check git status to understand what files changed:
```bash
git status --short
```
**Analyze the output:**
- `M` = Modified files
- `A` = Added files
- `D` = Deleted files
- `??` = Untracked files
### Step 2: Stage and Commit Using Script
**Use the deterministic script:**
```bash
./scripts/commit_and_push.sh "<files>" "<message>" "<branch>" "<remote>"
```
**Parameters:**
- `<files>` - Files to commit (use `.` for all changes, or specific files)
- `<message>` - Commit message (follows conventional commits format)
- `<branch>` - Branch name (default: `master`, optional)
- `<remote>` - Remote name (auto-detected if not specified, optional)
**Example 1: Commit all changes**
```bash
./scripts/commit_and_push.sh "." "feat: add new feature
Detailed description of changes.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>"
```
**Example 2: Commit specific files**
```bash
./scripts/commit_and_push.sh "src/feature.ts tests/feature.test.ts" "feat: implement feature X"
```
**Example 3: Specify branch and remote**
```bash
./scripts/commit_and_push.sh "." "fix: resolve bug" "develop" "origin"
```
---
## Commit Message Format
Follow conventional commits:
```
<type>: <description>
[optional body]
[optional footer]
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
```
**Types:**
- `feat:` - New feature
- `fix:` - Bug fix
- `docs:` - Documentation changes
- `refactor:` - Code refactoring
- `test:` - Test changes
- `chore:` - Build/tooling changes
**Examples:**
```bash
# Feature
"feat: add user authentication
Implemented JWT-based authentication with refresh tokens.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>"
# Bug fix
"fix: resolve memory leak in WebSocket handler
Fixed issue where connections were not properly cleaned up.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>"
# Documentation
"docs: update API documentation
Added examples for new endpoints.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>"
```
---
## What the Script Does
The `commit_and_push.sh` script handles:
1. ✅ `git add <files>` - Stage specified files
2. ✅ Check for changes - Skip if nothing to commit
3. ✅ `git commit -m "<message>"` - Commit with message
4. ✅ Auto-detect remote - Use first remote if not specified
5. ✅ `git push <remote> <branch>` - Push to remote
6. ✅ Error handling - Clear error messages
**Script output:**
```
✅ Committed and pushed to origin/master
```
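In outline, the script's behavior corresponds to this command sequence. The sketch below builds the git invocations rather than executing them, and is a simplified stand-in, not the actual source of `scripts/commit_and_push.sh`:

```python
# Build the git command sequence that the commit-and-push workflow
# performs. Argument handling is deliberately simplified.

def plan_commands(files, message, branch="master", remote="origin"):
    return [
        ["git", "add", *files.split()],
        ["git", "diff", "--cached", "--quiet"],  # exit 0 => nothing to commit
        ["git", "commit", "-m", message],
        ["git", "push", remote, branch],
    ]

for cmd in plan_commands(".", "feat: add new feature"):
    print(" ".join(cmd[:3]))
```

The real script adds remote auto-detection and error messages around these four steps.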
---
## Error Handling
**If script fails:**
1. **No changes to commit:**
```
No changes to commit
```
- Expected when files are already committed
2. **No git remotes:**
```
Error: No git remotes configured
```
- Add remote: `git remote add origin <url>`
3. **Permission denied:**
```
Error: Permission denied
```
- Check SSH keys or credentials
4. **Merge conflicts:**
```
Error: Merge conflict detected
```
- Pull latest changes first: `git pull <remote> <branch>`
- Resolve conflicts manually
---
## Why Use the Script?
### Token Efficiency
**Multi-tool approach (what NOT to do):**
```
Tool 1: git status
Tool 2: git add .
Tool 3: git status --short
Tool 4: git diff --cached
Tool 5: git commit -m "message"
Tool 6: git log -1
Tool 7: git push origin master
Tool 8: git status
Cost: ~8K-25K tokens ($0.037-0.086)
```
**Script approach (correct):**
```
Tool 1: ./scripts/commit_and_push.sh "." "message"
Cost: ~545 tokens ($0.002)
Savings: 93-97% reduction
```
### Reliability
- ✅ **Deterministic** - Same input → same output
- ✅ **Tested** - Script handles edge cases
- ✅ **Fast** - Single command, 100-500ms execution
- ✅ **Error recovery** - Clear error messages
### Compliance
- ✅ Follows UNIFIED_DRY_STRATEGY.md
- ✅ Uses scripts for workflows (not multi-tool)
- ✅ Automatic remote detection
- ✅ Proper error handling
---
## Integration with Contextune
This command is available via:
1. **Explicit command:** `/ctx:git-commit`
2. **Natural language:** Contextune detects and routes automatically:
- "commit and push"
- "save changes"
- "commit these files"
3. **PreToolUse hook:** Intercepts manual git commands and suggests script
---
## Related Commands
- `/ctx:git-pr` - Create pull request using script
- `/ctx:git-merge` - Merge branches using script
- `/ctx:cleanup` - Cleanup worktrees and branches
---
## Advanced Usage
### Multiple File Patterns
```bash
# Commit specific directories
./scripts/commit_and_push.sh "src/ tests/" "feat: implement feature"
# Commit specific file types
./scripts/commit_and_push.sh "*.ts *.tsx" "refactor: update types"
```
### Multiline Commit Messages
```bash
./scripts/commit_and_push.sh "." "feat: add authentication
Implemented features:
- JWT token generation
- Refresh token rotation
- User session management
Breaking changes:
- Auth API endpoints changed from /api/v1 to /api/v2
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>"
```
---
## Notes
- Always use the script for commit + push workflows
- Single git commands (like `git status`) are OK without script
- Script auto-detects remote (no need to specify if only one remote)
- Follow conventional commit format for consistency
- Include co-authorship footer for Claude-assisted commits
---
## See Also
- `UNIFIED_DRY_STRATEGY.md` - DRY strategy for git operations
- `scripts/commit_and_push.sh` - Script source code
- `scripts/smart_execute.sh` - Error recovery wrapper
- `scripts/create_pr.sh` - Create pull request script
- `scripts/merge_and_cleanup.sh` - Merge and cleanup script

commands/ctx-help.md Normal file

@@ -0,0 +1,325 @@
---
name: ctx:help
description: Example-first command reference and quick start guide
keywords:
- help
- examples
- quick start
- how to use
- show examples
- command reference
- getting started
---
# Contextune Help - Quick Start Guide
**Just type naturally—Contextune detects your intent automatically!**
---
## ✨ Try These Examples (Copy & Paste)
### 🔍 Fast Research (1-2 min, ~$0.07)
```
research best React state management library for 2025
```
→ Spawns 3 parallel agents (web + codebase + deps)
→ Returns comparison table + recommendation
→ 67% cost reduction with parallel agents
### ⚡ Parallel Development (1.5-3x measured speedup)
```
work on authentication, dashboard, and API in parallel
```
→ Creates development plan with task breakdown
→ Sets up git worktrees for parallel execution
→ Spawns agents to work simultaneously
### 💡 Feature Discovery
```
what can Contextune do?
```
→ Activates intent-recognition skill
→ Shows all capabilities with examples
→ Guides you to the right commands
---
## 📚 Most Used Commands
### `/ctx:research <query>`
Fast technical research using 3 parallel Haiku agents.
**Examples:**
```bash
/ctx:research best database for user authentication
/ctx:research should I use REST or GraphQL
/ctx:research latest TypeScript testing frameworks
```
**What you get:**
- Web research (latest trends, comparisons)
- Codebase analysis (what already exists)
- Dependency check (what's installed vs needed)
- Recommendation with reasoning
**Execution:** Fast parallel | **Cost:** ~$0.07
---
### `/ctx:status`
Monitor progress across all parallel worktrees.
**Shows:**
- Active worktrees and their branches
- Task completion status
- Git commit history per worktree
- Next steps and blockers
**Use when:** Working on parallel tasks and want overview
---
### `/ctx:configure`
Setup guide for persistent status bar integration.
**Enables:**
- Real-time detection display in status bar
- Zero context overhead (file-based)
- See what Contextune detected without cluttering chat
**One-time setup:** Adds statusline script to your config
---
## 🔧 Advanced Workflow Commands
### `/ctx:plan`
Document development plan for parallel execution.
**Creates:**
- Modular YAML plan with task breakdown
- Dependency graph (parallel vs sequential tasks)
- Resource allocation strategy
- Time and cost estimates
**Example:**
```bash
/ctx:plan
# Then describe your features:
# "I need user auth, admin dashboard, and payment integration"
```
**Output:** `.parallel/plans/active/plan.yaml`
---
### `/ctx:execute`
Execute development plan in parallel using git worktrees.
**What happens:**
1. Reads plan from `.parallel/plans/active/plan.yaml`
2. Creates git worktrees for each task
3. Spawns parallel agents to work independently
4. Creates PRs when tasks complete
5. Reports progress and costs
**Prerequisites:**
- Git repository with remote
- GitHub CLI (`gh`) authenticated
- Existing plan (run `/ctx:plan` first)
**Performance:** Measured speedup typically 1.5-3x on completed workflows
**Cost savings:** 81% cheaper with Haiku agents ($0.27 vs $1.40 per workflow)
---
### `/ctx:cleanup`
Clean up completed worktrees and branches.
**Removes:**
- Merged worktrees
- Completed task branches
- Temporary files
**Safe:** Only cleans up merged/completed work
---
### `/ctx:stats`
View detailed usage statistics and performance metrics.
**Shows:**
- Detection accuracy breakdown
- Cost savings vs manual work
- Performance metrics (latency, throughput)
- Most-used commands
**Validates claims:** See actual 81% cost reduction data
---
## 🎯 Natural Language Detection
You don't need to memorize commands! Just type what you want:
| What You Type | Contextune Detects | Confidence |
|--------------|-------------------|------------|
| "analyze my code" | `/sc:analyze` | 85% (keyword) |
| "run the tests" | `/sc:test` | 85% (keyword) |
| "research best approach" | `/ctx:research` | 92% (keyword) |
| "work in parallel" | `/ctx:execute` | 88% (keyword) |
| "review performance" | `/sc:improve` | 85% (keyword) |
| "explain this code" | `/sc:explain` | 85% (keyword) |
**Detection tiers:**
- **Keyword** (0.02ms) - 60% of queries, instant
- **Model2Vec** (0.2ms) - 30% of queries, fast embeddings
- **Semantic Router** (50ms) - 10% of queries, LLM-based
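The tiered fallthrough works like a cost ladder: try the cheapest matcher first and only escalate on a miss. A sketch of the dispatch logic (the keyword table, thresholds, and matcher callables here are hypothetical stand-ins for the plugin's actual matchers):

```python
# Three-tier intent dispatch: cheap keyword lookup first, then an
# embedding matcher, then an LLM-backed semantic router.

def keyword_match(prompt):
    table = {"run the tests": "/sc:test", "research": "/ctx:research"}
    for phrase, cmd in table.items():
        if phrase in prompt.lower():
            return cmd, 0.85
    return None, 0.0

def detect(prompt, embed_match=None, semantic_match=None):
    """Return (command, confidence, tier); fall through only on a miss."""
    cmd, conf = keyword_match(prompt)      # ~0.02ms tier
    if cmd:
        return cmd, conf, "keyword"
    if embed_match:                        # ~0.2ms embedding tier
        cmd, conf = embed_match(prompt)
        if cmd:
            return cmd, conf, "model2vec"
    if semantic_match:                     # ~50ms LLM tier
        cmd, conf = semantic_match(prompt)
        if cmd:
            return cmd, conf, "semantic"
    return None, 0.0, "none"

print(detect("please run the tests"))
# → ('/sc:test', 0.85, 'keyword')
```

Because most prompts resolve in the first tier, the expensive tiers only run for the minority of ambiguous queries.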
---
## 🤖 Auto-Activated Skills
These skills activate automatically when you mention certain topics:
### `parallel-development-expert`
**Triggers:** "parallel", "concurrent", "speed up", "work on multiple"
**Does:**
- Analyzes if tasks are parallelizable
- Calculates time savings
- Suggests `/ctx:plan` and `/ctx:execute`
- Guides worktree setup
### `intent-recognition`
**Triggers:** "what can you do", "capabilities", "features", "help"
**Does:**
- Shows Contextune capabilities
- Provides natural language examples
- Explains detection system
- Guides to relevant commands
### `git-worktree-master`
**Triggers:** "worktree stuck", "can't remove", "locked", "worktree error"
**Does:**
- Diagnoses worktree issues
- Suggests safe fixes
- Handles lock file cleanup
- Prevents data loss
### `performance-optimizer`
**Triggers:** "slow", "performance", "optimize", "bottleneck"
**Does:**
- Benchmarks workflow performance
- Identifies bottlenecks
- Calculates speedup potential
- Suggests optimizations
---
## 🚀 Quick Start Workflow
**1. First-Time Setup (Optional, 2 min)**
```bash
/ctx:configure
```
→ Sets up status bar integration for persistent detection display
**2. Fast Research (When You Need to Decide)**
```
research best authentication library for Python
```
→ Get answer in 1-2 min with comparison table
**3. Parallel Development (When You Have Multiple Tasks)**
```
work on user auth, admin panel, and reports in parallel
```
→ Contextune creates plan + worktrees + executes
**4. Monitor Progress**
```bash
/ctx:status
```
→ See what's happening across all parallel tasks
**5. Clean Up When Done**
```bash
/ctx:cleanup
```
→ Remove merged worktrees and branches
---
## 💰 Cost Optimization
Contextune uses **Haiku agents** for 81% cost reduction:
| Operation | Sonnet Cost | Haiku Cost | Savings |
|-----------|-------------|------------|---------|
| Research (3 agents) | $0.36 | $0.07 | 81% |
| Task execution | $0.27 | $0.04 | 85% |
| Worktree management | $0.06 | $0.008 | 87% |
**Annual savings:** ~$1,350 for typical usage (1,200 workflows/year)
Run `/ctx:stats` to see YOUR actual savings!
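The annual figure follows from the per-workflow numbers quoted for `/ctx:execute` ($1.40 Sonnet vs $0.27 Haiku). A quick sanity check of the arithmetic:

```python
# Back-of-envelope check of the savings claims above, assuming
# $1.40 (Sonnet) vs $0.27 (Haiku) per workflow at 1,200 workflows/year.

sonnet_per_workflow = 1.40
haiku_per_workflow = 0.27
workflows_per_year = 1200

per_workflow_savings = sonnet_per_workflow - haiku_per_workflow
annual_savings = per_workflow_savings * workflows_per_year
reduction = per_workflow_savings / sonnet_per_workflow

print(f"${annual_savings:,.0f}/year, {reduction:.0%} cheaper")
# → $1,356/year, 81% cheaper
```

That lands on the "~$1,350" and "81%" figures quoted above.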
---
## 🔧 Configuration Files
### Plan Structure
```
.parallel/
├── plans/
│ ├── 20251025-120000/ # Timestamped plan directories
│ │ ├── plan.yaml # Main plan file
│ │ └── tasks/ # Individual task files
│ └── active -> 20251025-120000/ # Symlink to current plan
└── scripts/
├── setup_worktrees.sh # Worktree creation script
└── create_prs.sh # PR creation script
```
### Detection Data
```
.contextune/
└── last_detection # JSON with latest detection
# Read by status bar script
```
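A status line script only needs to read and format that file. A minimal sketch; the JSON fields (`command`, `confidence`) are assumptions about the detection file's schema, not a documented format:

```python
# Read the last-detection file and format a short status segment.
import json
from pathlib import Path

def read_last_detection(path=".contextune/last_detection"):
    p = Path(path)
    if not p.exists():
        return "ctx: idle"
    d = json.loads(p.read_text())
    return f"ctx: {d.get('command', '?')} ({d.get('confidence', 0) * 100:.0f}%)"

# Demo with a temporary file standing in for the real one:
import tempfile
tmp = Path(tempfile.mkdtemp()) / "last_detection"
tmp.write_text(json.dumps({"command": "/ctx:research", "confidence": 0.92}))
print(read_last_detection(tmp))
# → ctx: /ctx:research (92%)
```

Because the hook writes the file and the status bar only reads it, no tokens are spent keeping the display current.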
---
## 📖 More Resources
- **README:** Full feature list and architecture
- **GitHub:** https://github.com/Shakes-tzd/contextune
- **Issues:** Report bugs or request features
---
## 🆘 Common Questions
**Q: Do I need to memorize slash commands?**
A: No! Just type naturally. Contextune detects intent automatically.
**Q: Does this slow down Claude Code?**
A: No. Hook adds <2ms latency for 90% of queries.
**Q: Does it work offline?**
A: Yes! Keyword + Model2Vec tiers work offline (90% coverage).
**Q: Can I customize detection patterns?**
A: Yes! Edit `~/.claude/plugins/contextune/data/user_patterns.json`
**Q: How do I see detection stats?**
A: Run `/ctx:stats` to see accuracy, performance, and cost metrics.
---
**💡 Tip:** Type "what can Contextune do?" right now to see the intent-recognition skill in action!

commands/ctx-plan.md Normal file

@@ -0,0 +1,680 @@
---
name: ctx:plan
description: Document a development plan for parallel execution (modular YAML)
keywords:
- create plan
- development plan
- parallel plan
- plan tasks
- make plan
- plan development
- create development plan
executable: true
---
# Parallel Plan - Create Modular YAML Development Plan
You are executing the parallel planning workflow. Your task is to analyze the conversation history and create a **modular YAML plan** for parallel development.
**Key Innovation:** Each task is a separate YAML file. No more monolithic markdown files!
**Benefits:**
- ✅ 95% fewer tokens for updates (edit one task file vs entire plan)
- ✅ Add/remove tasks without touching existing content
- ✅ Reorder tasks with simple array edits
- ✅ Better version control (smaller, focused diffs)
- ✅ No time/duration estimates (use tokens and priority instead)
- ✅ Priority + dependencies (what actually matters for execution)
**DRY Strategy Note:**
- Plans use **extraction-optimized output format** (visibility + iteration)
- NO Write tool during planning (user sees full plan in conversation)
- `/ctx:execute` extracts plan automatically when needed
- SessionEnd hook as backup (extracts at session end)
- Result: Modular files created automatically, zero manual work
This command is part of the Contextune plugin and can be triggered via natural language or explicitly with `/ctx:plan`.
---
## Step 1: Analyze Conversation and Requirements
Review the conversation history to identify:
- What features/tasks the user wants to implement
- Which tasks are independent (can run in parallel)
- Which tasks have dependencies (must run sequentially)
- Potential shared resources or conflict zones
Use the following criteria to classify tasks:
**Independent Tasks (Parallel-Safe):**
- Touch different files
- Different modules/features
- No shared state
- Can complete in any order
**Dependent Tasks (Sequential):**
- Task B needs Task A's output
- Database migrations
- Shared file modifications
- Order matters
**Conflict Risks:**
- Same file edits
- Shared configuration
- Database schema changes
- API contract changes
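A first-pass check for the "same file edits" risk is plain file-set intersection: two tasks are parallel-safe for this criterion when their declared file sets are disjoint. A sketch (the task file lists are hypothetical):

```python
# Parallel-safety check on the "touch different files" criterion.

def parallel_safe(task_a_files, task_b_files):
    """True when the two tasks declare no files in common."""
    return not (set(task_a_files) & set(task_b_files))

auth_files = ["src/auth.py", "tests/test_auth.py"]
dashboard_files = ["src/dashboard.py", "tests/test_dashboard.py"]
migration_files = ["src/models.py", "src/auth.py"]  # shares src/auth.py

print(parallel_safe(auth_files, dashboard_files))
# → True
print(parallel_safe(auth_files, migration_files))
# → False
```

Shared state, schema changes, and API contracts still need a human judgment call; disjoint file sets are necessary but not sufficient.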
---
## Step 2: Parallel Research (NEW! - Grounded Research)
**IMPORTANT:** Before planning, do comprehensive research using 5 parallel agents!
**Why parallel research?**
- 5x faster (parallel vs sequential execution)
- More comprehensive coverage
- Grounded in current reality (uses context from hook)
- Main agent context preserved (research in subagents)
### Research Phase Workflow
**Context Available (injected by hook):**
- Current date (for accurate web searches)
- Tech stack (from package.json, etc.)
- Existing specifications
- Recent plans
**Spawn 5 Research Agents in PARALLEL:**
Use the Task tool to spawn ALL 5 agents in a SINGLE message:
#### Agent 1: Web Search - Similar Solutions
```
Task tool with subagent_type="general-purpose"
Description: "Research similar solutions and best practices"
Prompt: (Use template from docs/RESEARCH_AGENTS_GUIDE.md - Agent 1)
"You are researching similar solutions for {PROBLEM}.
Use WebSearch to find:
- Best practices for {PROBLEM} in {CURRENT_YEAR} ← Use year from context!
- Common approaches and patterns
- Known pitfalls
Search queries:
- 'best practices {PROBLEM} {TECH_STACK} {CURRENT_YEAR}'
- '{PROBLEM} implementation examples latest'
Report back (<500 words):
1. Approaches found (top 3)
2. Recommended approach with reasoning
3. Implementation considerations
4. Pitfalls to avoid"
```
#### Agent 2: Web Search - Libraries/Tools
```
Task tool with subagent_type="general-purpose"
Description: "Research libraries and tools"
Prompt: (Use template from docs/RESEARCH_AGENTS_GUIDE.md - Agent 2)
"You are researching libraries for {USE_CASE} in {TECH_STACK}.
Use WebSearch to find:
- Popular libraries for {USE_CASE}
- Comparison of top solutions
- Community recommendations
Report back (<500 words):
1. Top 3 libraries (comparison table)
2. Recommended library with reasoning
3. Integration notes"
```
#### Agent 3: Codebase Pattern Search
```
Task tool with subagent_type="general-purpose"
Description: "Search codebase for existing patterns"
Prompt: (Use template from docs/RESEARCH_AGENTS_GUIDE.md - Agent 3)
"You are searching codebase for existing patterns for {PROBLEM}.
Use Grep/Glob to search:
- grep -r '{KEYWORD}' . --include='*.{ext}'
- Check for similar functionality
CRITICAL: If similar code exists, recommend REUSING it!
Report back (<400 words):
1. Existing functionality found (with file:line)
2. Patterns to follow
3. Recommendation (REUSE vs CREATE NEW)"
```
#### Agent 4: Specification Validation
```
Task tool with subagent_type="general-purpose"
Description: "Validate against existing specifications"
Prompt: (Use template from docs/RESEARCH_AGENTS_GUIDE.md - Agent 4)
"You are checking specifications for {PROBLEM}.
Read these files (if exist):
- docs/ARCHITECTURE.md
- docs/specs/*.md
- README.md
Check for:
- Existing requirements
- Constraints we must follow
- Patterns to use
Report back (<500 words):
1. Spec status (exists/incomplete/missing)
2. Requirements from specs
3. Compliance checklist"
```
#### Agent 5: Dependency Analysis
```
Task tool with subagent_type="general-purpose"
Description: "Analyze project dependencies"
Prompt: (Use template from docs/RESEARCH_AGENTS_GUIDE.md - Agent 5)
"You are analyzing dependencies for {PROBLEM}.
Read:
- package.json (Node.js)
- pyproject.toml (Python)
- go.mod (Go)
- Cargo.toml (Rust)
Check:
- What's already installed?
- Can we reuse existing deps?
- What new deps needed?
Report back (<300 words):
1. Relevant existing dependencies
2. New dependencies needed (if any)
3. Compatibility analysis"
```
**Spawn ALL 5 in ONE message** (parallel execution!)
### Wait for Research Results
All 5 agents will complete quickly when executed in parallel.
### Synthesize Research Findings
Once all 5 agents report back:
1. **Read all research reports**
2. **Identify best approach** (from Agent 1)
3. **Select libraries** (from Agent 2)
4. **Plan code reuse** (from Agent 3)
5. **Check spec compliance** (from Agent 4)
6. **Plan dependencies** (from Agent 5)
**Create Research Synthesis:**
```markdown
## Research Synthesis
### Best Approach
{From Agent 1: Recommended approach and reasoning}
### Libraries/Tools
{From Agent 2: Which libraries to use}
### Existing Code to Reuse
{From Agent 3: Files and patterns to leverage}
### Specification Compliance
{From Agent 4: Requirements we must follow}
### Dependencies
{From Agent 5: What to install, what to reuse}
### Architectural Decisions
Based on research findings:
**Decision 1:** {Architecture decision}
- **Reasoning:** {Why, based on research}
- **Source:** {Which research agent(s)}
**Decision 2:** {Technology decision}
- **Reasoning:** {Why, based on research}
- **Source:** {Which research agent(s)}
**Decision 3:** {Pattern decision}
- **Reasoning:** {Why, based on research}
- **Source:** {Which research agent(s)}
```
This synthesis will be embedded in the plan document and used to create detailed specifications for Haiku agents.
---
## Step 3: Output Extraction-Optimized Plan Format
**IMPORTANT:** Do NOT use the Write tool. Output the plan in structured format in the conversation.
The `/ctx:execute` command will extract this automatically to modular files when the user runs it.
Your output will be automatically extracted to:
```
.parallel/plans/
├── plan.yaml ← From your Plan Structure YAML
├── tasks/
│ ├── task-0.md ← From your Task 0 section
│ ├── task-1.md ← From your Task 1 section
│ └── ...
├── templates/
│ └── task-template.md
└── scripts/
├── add_task.sh
└── generate_full.sh
```
---
## Step 4: Output Plan in Extraction-Optimized Format
Output your plan in this structured markdown format. The extraction process will parse this into modular files automatically.
**Format Template:**
```markdown
# Implementation Plan: {Feature Name}
**Type:** Plan
**Status:** Ready
**Created:** {YYYYMMDD-HHMMSS}
---
## Overview
{2-3 sentence description of what this plan accomplishes}
---
## Plan Structure
\`\`\`yaml
metadata:
name: "{Feature Name}"
created: "{YYYYMMDD-HHMMSS}"
status: "ready" # ready | in_progress | completed
# High-level overview
overview: |
{2-3 sentence description of what we're building}
# Research synthesis from parallel research phase
research:
approach: "{Best approach from Agent 1}"
libraries:
- name: "{Library from Agent 2}"
reason: "{Why selected}"
patterns:
- file: "{file:line from Agent 3}"
description: "{Pattern to reuse}"
specifications:
- requirement: "{Requirement from Agent 4}"
status: "must_follow" # must_follow | should_follow | nice_to_have
dependencies:
existing:
- "{Dependency already installed}"
new:
- "{New dependency needed}"
# Feature list (just names for reference)
features:
- "{feature-1}"
- "{feature-2}"
# Task index (TOC with task names for quick reference)
tasks:
- id: "task-0"
name: "{Task Name}" # Name here for index/TOC!
file: "tasks/task-0.md"
priority: "blocker" # blocker | high | medium | low
dependencies: []
- id: "task-1"
name: "{Task Name}"
file: "tasks/task-1.md"
priority: "high"
dependencies: ["task-0"] # If depends on task-0
- id: "task-2"
name: "{Task Name}"
file: "tasks/task-2.md"
priority: "high"
dependencies: []
# Add more task references as needed
# Shared resources and conflict zones
shared_resources:
files:
- path: "config/app.ts"
reason: "Multiple tasks may import"
mitigation: "Task 1 creates base first"
databases:
- name: "{database}"
concern: "{What could conflict}"
mitigation: "{How to avoid}"
# Testing strategy
testing:
unit:
- "Each task writes own tests"
- "Must pass before push"
integration:
- "Run after merging to main"
- "Test cross-feature interactions"
isolation:
- "Each worktree runs independently"
- "No shared test state"
# Success criteria
success_criteria:
- "All tasks complete"
- "All tests passing"
- "No merge conflicts"
- "Code reviewed"
- "Documentation updated"
# Notes and decisions
notes: |
{Any additional context, decisions, or considerations}
# Changelog
changelog:
- timestamp: "{YYYYMMDD-HHMMSS}"
event: "Plan created"
\`\`\`
```
**Important instructions:**
- Fill in all placeholders with actual values from the conversation
- **NO TIME ESTIMATES** - they go stale immediately and add no value
- Use priority (blocker/high/medium/low) instead - this determines execution order
- Use dependencies to define execution sequence
- **Add task names to the index** - plan.yaml acts as Table of Contents for the model
- Be specific about files that will be touched
- Break down large tasks into smaller, independent tasks when possible
- Aim for 3-5 parallel tasks maximum for optimal efficiency
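Since priority and dependencies together determine execution order, an executor can derive a run order directly from the task index. A minimal illustrative sketch (Kahn's algorithm, with priority as the tiebreak; task dicts mirror the template above):

```python
# Illustrative sketch: derive an execution order from the plan.yaml task
# index. Task dicts mirror the template above (id, priority, dependencies).
PRIORITY_RANK = {"blocker": 0, "high": 1, "medium": 2, "low": 3}

def execution_order(tasks):
    """Dependencies first (Kahn's algorithm), priority as the tiebreak."""
    deps = {t["id"]: set(t.get("dependencies", [])) for t in tasks}
    rank = {t["id"]: PRIORITY_RANK.get(t.get("priority", "medium"), 2)
            for t in tasks}
    order = []
    while len(order) < len(tasks):
        # Tasks whose prerequisites are all satisfied, best priority first
        ready = sorted(
            (tid for tid, d in deps.items() if not d and tid not in order),
            key=rank.get,
        )
        if not ready:
            raise ValueError("circular dependencies detected")
        tid = ready[0]
        order.append(tid)
        for d in deps.values():
            d.discard(tid)
    return order
```

For an index like the template's, a blocker with no dependencies runs first, and dependent tasks are held until their prerequisites complete.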
**Context Optimization:**
- plan.yaml = lightweight index/TOC (model reads this first)
- Task names in index allow model to understand scope without reading full task files
- If tasks created in same session → already in context, no re-read needed!
- If new session → model reads specific task files only when spawning agents
---
## Task Details
For each task in your plan, output a task section using this format:
```markdown
---
id: task-{N}
priority: high # blocker | high | medium | low
status: pending # pending | in_progress | completed | blocked
dependencies:
- task-0 # Must complete before this starts
labels:
- parallel-execution
- auto-created
- priority-{priority}
---
# {Task Name}
## 🎯 Objective
{Clear, specific description of what this task accomplishes}
## 🛠️ Implementation Approach
{Implementation approach from research synthesis}
**Libraries:**
- `{library-1}` - {Why needed}
- `{library-2}` - {Why needed}
**Pattern to follow:**
- **File:** `{file:line to copy from}`
- **Description:** {What pattern to follow}
## 📁 Files to Touch
**Create:**
- `path/to/new/file.ts`
**Modify:**
- `path/to/existing/file.ts`
**Delete:**
- `path/to/deprecated/file.ts`
## 🧪 Tests Required
**Unit:**
- [ ] Test {specific functionality}
- [ ] Test {edge case}
**Integration:**
- [ ] Test {interaction with other components}
## ✅ Acceptance Criteria
- [ ] All unit tests pass
- [ ] {Specific functionality works}
- [ ] No regressions in existing features
- [ ] Code follows project conventions
## ⚠️ Potential Conflicts
**Files:**
- `shared/config.ts` - Task 2 also modifies → Coordinate with Task 2
## 📝 Notes
{Any additional context, gotchas, or decisions}
---
**Worktree:** `worktrees/task-{N}`
**Branch:** `feature/task-{N}`
🤖 Auto-created via Contextune parallel execution
```
**Important:** Output one task section for EACH task in your plan. Repeat the structure above for task-0, task-1, task-2, etc.
**End the plan output with:**
```markdown
---
## References
- [Related documentation]
- [Related code]
```
This completes the extraction-optimized plan format.
---
## Step 5: Validate Your Plan Output
Before finishing, verify your conversation output includes:
1. **Detection markers:** `**Type:** Plan` header
2. **Plan Structure section:** With valid YAML block containing:
   - `metadata:` with name, created, status
   - `tasks:` array with id, name, file, priority, dependencies
   - `shared_resources:`, `testing:`, `success_criteria:`
3. **Task Details sections:** One task section per task (YAML frontmatter plus `# {Task Name}` heading)
4. **Task YAML frontmatter:** Each task has valid YAML between `---` delimiters
5. **At least 1 task defined**
6. **Valid dependencies:** No circular deps, all referenced tasks exist
7. **Priorities set:** Each task has blocker/high/medium/low
8. **NO time estimates:** Only tokens, complexity, priority
**Extraction will happen automatically when user runs `/ctx:execute` or at session end.**
If you notice issues in your output, fix them before reporting to user.
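A rough sketch of what these checks look like programmatically (illustrative only, not the actual extraction parser; the marker strings and task-id patterns follow the format above):

```python
import re

def validate_plan_output(text):
    """Minimal pre-flight checks on the conversation plan output."""
    problems = []
    if "**Type:** Plan" not in text:
        problems.append("missing '**Type:** Plan' detection marker")
    # Task ids declared in task frontmatter ("id: task-N")
    task_ids = set(re.findall(r"^id:\s*(task-\d+)", text, re.MULTILINE))
    if not task_ids:
        problems.append("no task sections found")
    # Block-style dependency entries ("- task-N") must reference real tasks
    for dep in re.findall(r"^\s*-\s*(task-\d+)", text, re.MULTILINE):
        if dep not in task_ids:
            problems.append(f"dependency {dep} has no task section")
    return problems
```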
---
## Step 6: Report to User
Tell the user:
```
📋 Plan created in extraction-optimized format!
**Plan Summary:**
- {N} total tasks
- {X} can run in parallel
- {Y} have dependencies (sequential)
- Conflict risk: {Low/Medium/High}
**Tasks by Priority:**
- Blocker: {list task IDs}
- High: {list task IDs}
- Medium: {list task IDs}
- Low: {list task IDs}
**What Happens Next:**
The plan above will be automatically extracted to modular files when you:
1. Run `/ctx:execute` - Extracts and executes immediately
2. End this session - SessionEnd hook extracts automatically
**Extraction Output:**
\`\`\`
.parallel/plans/
├── plan.yaml (main plan with metadata)
├── tasks/
│   ├── task-0.md (GitHub-ready task files)
│   ├── task-1.md
│   └── ...
├── templates/
│   └── task-template.md
└── scripts/
    ├── add_task.sh
    └── generate_full.sh
\`\`\`
**Key Benefits:**
✅ **Full visibility**: You see complete plan in conversation
✅ **Easy iteration**: Ask for changes before extraction
✅ **Zero manual work**: Extraction happens automatically
✅ **Modular files**: Edit individual tasks after extraction
✅ **Perfect DRY**: Plan exists once (conversation), extracted once (files)
**Next Steps:**
1. Review the plan above (scroll up if needed)
2. Request changes: "Change task 2 to use React instead of Vue"
3. When satisfied, run: `/ctx:execute`
Ready to execute? Run `/ctx:execute` to extract and start parallel development.
```
Include a warning if:
- Conflict risk is Medium or High
- More than 5 parallel tasks (may be hard to coordinate)
- Sequential dependencies exist
- Tasks have circular dependencies (validation should catch this!)
---
## Error Handling
**If YAML syntax is invalid in your output:**
- Check your YAML blocks for syntax errors
- Validate with a YAML parser before outputting
- Common issues: Improper indentation, missing quotes, unclosed brackets
**If task dependencies are circular:**
- Detect the cycle (e.g., task-1 → task-2 → task-1)
- Fix the dependencies in your output
- Ensure each task can complete before its dependents start
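Cycle detection can be sketched as a depth-first search over the dependency map (illustrative; assumes dependencies collected as task id → prerequisite list):

```python
def find_cycle(deps):
    """DFS over the dependency map; returns one cycle path or None.

    `deps` maps task id -> list of prerequisite task ids, e.g. the
    `dependencies` fields collected from plan.yaml.
    """
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / in progress / done
    color = {t: WHITE for t in deps}

    def dfs(node, path):
        color[node] = GRAY
        for dep in deps.get(node, []):
            if color.get(dep, WHITE) == GRAY:
                return path + [node, dep]  # back edge -> cycle
            if color.get(dep, WHITE) == WHITE:
                found = dfs(dep, path + [node])
                if found:
                    return found
        color[node] = BLACK
        return None

    for task in deps:
        if color[task] == WHITE:
            cycle = dfs(task, [])
            if cycle:
                return cycle
    return None
```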
**If conversation context is insufficient:**
- Ask user for clarification:
- What features do they want to implement?
- Which tasks can run independently?
- Are there any dependencies?
- What libraries or patterns should be used?
**If extraction fails (reported by `/ctx:execute`):**
- The user will see error messages from the extraction process
- Common fixes:
- Ensure `**Type:** Plan` header is present
- Verify YAML blocks are properly formatted
- Check that task IDs match between plan and task sections
---
## Contextune Integration
This command is available globally through the Contextune plugin. Users can trigger it with:
- **Explicit command:** `/contextune:parallel:plan`
- **Natural language:** "plan parallel development", "create parallel plan"
- **Auto-detection:** Contextune will detect planning intent automatically
When users say things like "plan parallel development for X, Y, Z", Contextune routes to this command automatically.
---
## Notes
- Output plans in extraction-optimized format (NO Write tool)
- Break down vague requests into specific, actionable tasks
- Ask clarifying questions if the scope is unclear
- Prioritize task independence to maximize parallelization
- Document assumptions in each task's notes section
- **NO TIME ESTIMATES** - use priority, dependencies, and tokens instead
- Ensure each task section is self-contained and complete
- The plan YAML should be lightweight (just references and metadata)
- **Extraction happens automatically** when user runs `/ctx:execute` or ends session
**Benefits of Extraction-Based Approach:**
- **Full visibility**: User sees complete plan in conversation
- **Easy iteration**: User can request changes before extraction
- **Perfect DRY**: Plan exists once (conversation), extracted once (files)
- **Zero manual work**: No Write tool calls, extraction is automatic
- **Modular output**: Extracted files are modular and editable
- **GitHub-native**: Tasks in GitHub issue format (zero transformation!)
- **Token efficient**: ~500 tokens saved per task (no parsing overhead)
commands/ctx-research.md
---
name: ctx:research
description: Fast research using 3 parallel Haiku agents for technical questions and decision-making (1-2 min)
keywords:
- quick research
- fast research
- parallel research
- technical research
- investigate question
- research question
executable: true
---
# Contextune Research - Quick Technical Investigation
Conduct focused research using 3 parallel Haiku agents to answer specific technical questions quickly.
## Use Cases
- "What's the best React state library in 2025?"
- "Should I use REST or GraphQL for this API?"
- "What testing frameworks work with Python 3.12?"
- "Does our codebase already handle authentication?"
## How It Works
1. **You ask a research question**
2. **3 parallel agents execute** (1-2 min total):
- **Agent 1: Web Research** - Latest trends, comparisons, best practices
- **Agent 2: Codebase Search** - Existing patterns, reuse opportunities
- **Agent 3: Dependency Analysis** - What's installed, compatibility
3. **Synthesis** - Combined findings with recommendation
4. **Result** - Comparison table + actionable next steps
## Agent Specifications
### Agent 1: Web Research
Searches the web for current information:
```markdown
Research {QUESTION} using WebSearch.
Current date: {CURRENT_DATE}
Tech stack: {TECH_STACK}
Search queries:
- '{QUESTION} best practices {CURRENT_YEAR}'
- '{QUESTION} comparison latest'
- '{QUESTION} recommendations {CURRENT_YEAR}'
Report format (<500 words):
1. **Top 3 Options Found**
2. **Comparison Table** (pros/cons for each)
3. **Current Trends** (what's popular/recommended)
4. **Recommendation** with reasoning
Focus on recent information (2024-2025 preferred).
```
**Expected output:** Comparison of top solutions with pros/cons
---
### Agent 2: Codebase Search
Searches existing code for patterns:
```markdown
Search codebase for existing solutions to {QUESTION}.
Use Grep/Glob to find:
- Similar functionality: grep -r '{KEYWORDS}' .
- Relevant files: glob '**/*{pattern}*'
- Existing implementations
**CRITICAL**: If similar code exists, recommend REUSING it!
Report format (<400 words):
1. **Existing Functionality** (file:line references)
2. **Patterns to Follow** (coding style, architecture)
3. **Recommendation**:
- REUSE: If good solution exists
- NEW: If nothing suitable found
- ENHANCE: If partial solution exists
Include specific file paths and line numbers.
```
**Expected output:** What already exists that can be reused
---
### Agent 3: Dependency Analysis
Analyzes dependencies and compatibility:
```markdown
Analyze dependencies for {QUESTION}.
Check these files:
- package.json (Node/JavaScript)
- pyproject.toml / requirements.txt (Python)
- go.mod (Go)
- Cargo.toml (Rust)
- composer.json (PHP)
Report format (<300 words):
1. **Can Reuse**: Existing dependencies that solve this
2. **Need to Add**: New dependencies required
3. **Compatibility Notes**: Version conflicts, breaking changes
4. **Installation**: Exact commands to install
Example:
Can reuse: pg@8.11.0 (PostgreSQL driver already installed)
Need to add: pg-promise@11.5.4 (better async support)
Compatible: Both work with Node 18+
Install: npm install pg-promise
```
**Expected output:** What's available vs what's needed
---
## Synthesis Format
After all 3 agents complete, synthesize findings:
```markdown
## Research Results: {QUESTION}
### Web Research (Agent 1)
{Top 3 options with pros/cons}
### Codebase Analysis (Agent 2)
{Existing code to reuse OR "No existing solution found"}
### Dependencies (Agent 3)
{What's available, what needs installation}
---
## Recommendation
**Option:** {Recommended approach}
**Reasoning:**
- {Why this option - reference findings from all 3 agents}
- {Supporting evidence from web research}
- {Reuse opportunities from codebase}
- {Dependency considerations}
**Next Steps:**
1. {Actionable item 1}
2. {Actionable item 2}
3. {Actionable item 3}
---
**Cost:** $0.06-0.08 (3 Haiku agents × ~$0.02 each)
**Execution:** Parallel (3 agents simultaneously)
```
---
## Example Usage
**User:** "/ctx:research What's the best database for user authentication?"
**Result:**
```markdown
## Research Results: Best Database for User Authentication
### Web Research
Top 3 options for auth storage:
1. **PostgreSQL** - Battle-tested, ACID compliance, excellent for relational user data
2. **Redis** - Fast in-memory, perfect for sessions/tokens, not for primary user storage
3. **MongoDB** - Flexible schema, good for user profiles with varying attributes
### Codebase Analysis
Found existing: PostgreSQL connection in src/db/connection.ts:12
Pattern: Using pg-promise for async queries
Recommendation: **REUSE** PostgreSQL (already configured)
### Dependencies
Can reuse:
- pg@8.11.0 (PostgreSQL driver - installed)
- bcrypt@5.1.0 (password hashing - installed)
Need to add: None
Compatibility: ✅ All compatible with Node 18+
---
## Recommendation
**Option:** PostgreSQL
**Reasoning:**
- Already configured in codebase (connection.ts:12)
- Team familiar with SQL and pg-promise
- Handles relational user data excellently
- ACID compliance ensures data integrity for auth
- No new dependencies needed
**Next Steps:**
1. Create users table migration
2. Add bcrypt password hashing (already available)
3. Implement session management with pg-sessions
4. Consider Redis for session storage later (optimization)
---
**Cost:** $0.07
**Execution:** Fast parallel research
```
---
## Differences from `/ctx:plan`
| Feature | `/ctx:research` | `/ctx:plan` |
|---------|----------------|------------|
| **Purpose** | Answer specific question | Create execution plan |
| **Agents** | 3 (focused) | 5 (comprehensive) |
| **Output** | Findings + recommendation | Structured task plan with dependencies |
| **Time** | 1-2 min | 2-3 min |
| **Cost** | $0.06-0.08 | $0.10-0.12 |
| **Use When** | Quick decision needed | Ready to execute in parallel |
| **Next Step** | User decides | Execute with `/ctx:execute` |
---
## When to Use This Command
**Use `/ctx:research` when:**
- ✅ You need to make a technical decision
- ✅ Comparing libraries, frameworks, or approaches
- ✅ Want to know what already exists in codebase
- ✅ Checking compatibility or dependencies
- ✅ Need quick answer (1-2 min) not full plan
**Use `/ctx:plan` when:**
- ✅ You have multiple features to implement
- ✅ Need structured task breakdown
- ✅ Ready for parallel execution
- ✅ Want dependency analysis and task ordering
---
## Tips for Better Results
1. **Be Specific**: "best state library for React" > "state management"
2. **Include Context**: "for our e-commerce app" helps agents understand requirements
3. **One Question**: Focus on single decision per research session
4. **Current Tech**: Mention your stack ("we use Python 3.12") for relevant results
---
## Technical Details
**Cost Breakdown:**
- Agent 1 (Web): ~$0.03 (200 input + 500 output tokens)
- Agent 2 (Code): ~$0.02 (150 input + 400 output tokens)
- Agent 3 (Deps): ~$0.02 (100 input + 300 output tokens)
- **Total:** ~$0.07 per research session
**Performance:**
- All 3 agents run in parallel (not sequential)
- Uses Haiku 4.5 for cost optimization
**Context Preservation:**
- Agents run in subprocesses
- Your main session context preserved
- Results returned as formatted markdown
---
**Ready to research!** Just ask a technical question and I'll spawn the research agents.
commands/ctx-stats.md
---
name: ctx:stats
description: View Contextune detection statistics
keywords:
- show stats
- statistics
- detection stats
- performance metrics
- stats
- metrics
- show statistics
executable: commands/slashsense-stats.py
---
# Contextune Statistics
Display detection performance metrics and usage statistics from the observability database.
---
## Execution
This command runs automatically via the executable script. The markdown provides documentation only.
**Script:** `commands/slashsense-stats.py`
**Execution:** Automatic when command is triggered
**Data Source:** `~/.claude/plugins/contextune/data/observability.db`
---
## What This Command Does
**Step 1: Load Statistics**
Reads detection data from the observability database:
```bash
sqlite3 ~/.claude/plugins/contextune/data/observability.db \
"SELECT tier, COUNT(*), AVG(latency_ms), AVG(confidence)
FROM detections GROUP BY tier"
```
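The same query can be issued from Python with the standard-library `sqlite3` module. The `detections` schema here is assumed from the query above:

```python
import sqlite3
from pathlib import Path

# Default database location used by the stats command
DB = Path.home() / ".claude/plugins/contextune/data/observability.db"

def tier_stats(db_path=DB):
    """Aggregate detection counts, latency, and confidence per tier."""
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT tier, COUNT(*), AVG(latency_ms), AVG(confidence) "
            "FROM detections GROUP BY tier"
        ).fetchall()
    finally:
        conn.close()
    return {
        tier: {"count": n, "avg_latency_ms": lat, "avg_confidence": conf}
        for tier, n, lat, conf in rows
    }
```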
**Step 2: Generate Report**
Creates formatted output using Rich library showing:
1. **Detection Performance by Tier**
- Keyword: Detection count, average latency, accuracy
- Model2Vec: Detection count, average latency, accuracy
- Semantic Router: Detection count, average latency, accuracy
2. **Top Detected Commands**
- Command name and frequency count
- Shows top 5 most-used commands
3. **Confidence Distribution**
- Breakdown by confidence range (50-70%, 70-85%, 85%+)
- Visual progress bars
**Step 3: Display to User**
Outputs formatted tables and panels to terminal.
---
## Example Output
```
╭─────────────────────────── Contextune Statistics ───────────────────────────╮
│ │
│ Total Detections: 1,247 │
│ │
│ Performance by Tier │
│ ┌───────────────┬────────────┬─────────────┬──────────┐ │
│ │ Tier │ Detections │ Avg Latency │ Accuracy │ │
│ ├───────────────┼────────────┼─────────────┼──────────┤ │
│ │ Keyword │ 892 │ 0.05ms │ 98% │ │
│ │ Model2Vec │ 245 │ 0.18ms │ 94% │ │
│ │ Semantic │ 110 │ 47.30ms │ 89% │ │
│ └───────────────┴────────────┴─────────────┴──────────┘ │
│ │
│ Top Commands │
│ 1. /sc:analyze 324 detections │
│ 2. /sc:implement 218 detections │
│ 3. /sc:test 187 detections │
│ 4. /sc:git 156 detections │
│ 5. /sc:improve 134 detections │
│ │
╰──────────────────────────────────────────────────────────────────────────────╯
```
---
## Data Sources
**If observability.db exists:**
- Shows actual detection data
- Real latency measurements
- Actual command frequencies
**If observability.db doesn't exist:**
- Shows example/mock data (for demonstration)
- Indicates data is not from actual usage
---
## Interpreting Results
**Tier Performance:**
- **Keyword (Target: <0.1ms):** Fastest, highest accuracy, handles 60% of queries
- **Model2Vec (Target: <1ms):** Fast, good accuracy, handles 30% of queries
- **Semantic Router (Target: <100ms):** Slower, handles complex/ambiguous 10%
**Latency Analysis:**
- < 1ms: Excellent (no perceptible delay)
- 1-10ms: Good (barely noticeable)
- 10-100ms: Acceptable (slight delay)
- > 100ms: Needs optimization
**Accuracy Expectations:**
- 95%+: Excellent (trust the detection)
- 85-95%: Good (verify before auto-execute)
- 70-85%: Fair (suggest to user)
- < 70%: Skip (don't suggest)
---
## Troubleshooting
**"No data available":**
```
No detection data found. Using example statistics.
```
- This is normal for new installations
- Data accumulates as you use Contextune
- Mock data shows what stats will look like
**"Database error":**
- Check: `ls ~/.claude/plugins/contextune/data/observability.db`
- Permissions: Ensure readable
- Corruption: Delete and let it recreate on next detection
---
## Related Commands
- `/ctx:usage` - View token usage and cost optimization
- `/ctx:help` - View all available commands
- `/ctx:configure` - Configure Contextune settings
commands/ctx-status.md
---
name: ctx:status
description: Check status of parallel worktrees and tasks
keywords:
- check status
- parallel status
- show progress
- task status
- worktree status
- show parallel progress
- check parallel
executable: true
---
# Parallel Status - Monitor Parallel Development
You are checking the status of all parallel worktrees and tasks.
**Contextune Integration:** This command can be triggered via `/contextune:parallel:status` or natural language like "check parallel progress", "show parallel status".
---
## Execution Workflow
### Step 1: Check for Active Worktrees
**Run this command:**
```bash
git worktree list
```
**Expected output:**
```
/Users/you/project abc1234 [main]
/Users/you/project/worktrees/task-0 def5678 [feature/task-0]
/Users/you/project/worktrees/task-1 ghi9012 [feature/task-1]
```
**Parse the output:**
- Line 1: Main worktree (skip)
- Lines 2+: Parallel worktrees (check each)
- Extract: worktree path, commit hash, branch name
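The parsing step above can be sketched in Python (a simplification: worktree paths containing spaces, and bare or detached entries, are not handled):

```python
def parse_worktrees(output):
    """Parse `git worktree list` output into (path, commit, branch) tuples.

    The first entry is the main worktree; callers typically skip it.
    """
    entries = []
    for line in output.strip().splitlines():
        parts = line.split()
        if len(parts) < 3:
            continue  # bare/detached entries; out of scope for this sketch
        path, commit, branch = parts[0], parts[1], parts[2].strip("[]")
        entries.append((path, commit, branch))
    return entries
```

An empty result (after skipping the first entry) means no parallel tasks are active.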
**If no worktrees found:**
```
No parallel tasks active.
```
Stop here - nothing to report.
---
### Step 2: Check Task Files for Status
**For each worktree found, read its task file:**
```bash
# Get task ID from worktree path
task_id=$(basename /path/to/worktrees/task-0)
# Read task status from YAML frontmatter
grep "^status:" .parallel/plans/tasks/${task_id}.md
```
**Status values:**
- `pending`: Not started yet
- `in_progress`: Currently working
- `completed`: Done and pushed
- `blocked`: Encountered error
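The frontmatter read can be sketched without a YAML library (assumes the simple `key: value` frontmatter shown in the task template):

```python
def task_status(task_md):
    """Read `status:` from a task file's YAML frontmatter (sketch)."""
    in_frontmatter = False
    for line in task_md.splitlines():
        if line.strip() == "---":
            if in_frontmatter:
                break  # end of frontmatter; status line not found
            in_frontmatter = True
            continue
        if in_frontmatter and line.startswith("status:"):
            # Drop any trailing "# comment" and surrounding whitespace
            return line.split(":", 1)[1].split("#")[0].strip()
    return "unknown"
```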
---
### Step 3: Check Git Status Per Worktree
**For each worktree, check uncommitted changes:**
```bash
cd worktrees/task-0
git status --short
cd ../..
```
**Interpret output:**
- Empty: Clean working tree (good!)
- `M file.ts`: Modified files (work in progress)
- `??` file: Untracked files (needs git add)
---
### Step 4: Check Branch Status (Ahead/Behind)
**For each worktree, check if branch is pushed:**
```bash
cd worktrees/task-0
git status --branch --porcelain | head -1
cd ../..
```
**Example outputs:**
- `## feature/task-0...origin/feature/task-0`: Branch is up to date ✅
- `## feature/task-0...origin/feature/task-0 [ahead 2]`: 2 commits not pushed ⚠️
- `## feature/task-0`: No remote branch yet ⚠️
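Classifying that header line can be sketched as:

```python
import re

def branch_sync_state(porcelain_header):
    """Classify the `## ...` line from `git status --branch --porcelain`."""
    header = porcelain_header.strip()
    if "..." not in header:
        return "no-remote"            # e.g. "## feature/task-0"
    m = re.search(r"\[ahead (\d+)", header)
    if m:
        return f"ahead-{m.group(1)}"  # unpushed commits
    return "up-to-date"
```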
---
### Step 5: Check Test Status (if available)
**Look for test result files:**
```bash
ls worktrees/task-0/test-results.xml 2>/dev/null || echo "No test results"
ls worktrees/task-0/.pytest_cache 2>/dev/null || echo "No pytest cache"
```
**Or check recent git log for test-related commits:**
```bash
cd worktrees/task-0
git log --oneline -5 | grep -i "test"
cd ../..
```
---
### Step 6: Format Status Report
**Create comprehensive status report:**
```
📊 Parallel Development Status
**Active Tasks:** 3
**Completed:** 1
**In Progress:** 2
**Blocked:** 0
─────────────────────────────────────────────────
Task 0: Fix ctx-stats.md
├─ Status: completed ✅
├─ Branch: feature/task-0
├─ Commits: 3 commits ahead
├─ Tests: All passing ✅
└─ Ready: Yes - can merge
Task 1: Fix ctx-status.md
├─ Status: in_progress ⏳
├─ Branch: feature/task-1
├─ Commits: 1 commit ahead (not pushed)
├─ Tests: Not run yet
└─ Ready: No - work in progress
Task 2: Fix ctx-cleanup.md
├─ Status: pending 📋
├─ Branch: feature/task-2
├─ Commits: None (clean)
└─ Ready: No - not started
─────────────────────────────────────────────────
**Next Actions:**
• task-0: Ready to merge/create PR
• task-1: Push changes and run tests
• task-2: Start implementation
```
---
### Step 7: Provide Recommendations
**Based on task statuses, suggest next actions:**
**If any tasks are completed:**
```
✅ Tasks ready for review: task-0
Suggested action:
./scripts/create_prs.sh
```
**If any tasks are blocked:**
```
⚠️ Blocked tasks need attention: task-N
Check error logs:
cd worktrees/task-N && git log -1
```
**If all tasks are complete:**
```
🎉 All tasks completed!
Next steps:
1. Create PRs: ./scripts/create_prs.sh
2. Or merge directly: /ctx:cleanup
```
---
## Contextune-Specific Additions
### Natural Language Triggers
Users can trigger this command with:
- `/contextune:parallel:status` (explicit)
- "check parallel progress"
- "show parallel status"
- "how are the parallel tasks doing"
- "parallel development status"
Contextune automatically detects these intents.
### Global Availability
Works in ALL projects after installing Contextune:
```bash
/plugin install slashsense
```
### Related Commands
When suggesting next steps, mention:
- `/contextune:parallel:execute` - Execute parallel development
- `/contextune:parallel:cleanup` - Clean up completed work
- `/contextune:parallel:plan` - Create development plan
---
## Example User Interactions
**Natural Language:**
```
User: "how are the parallel tasks going?"
You: [Execute status check workflow]
Display formatted status report
Provide recommendations
```
**Explicit Command:**
```
User: "/contextune:parallel:status"
You: [Execute status check workflow]
```
---
## Implementation Notes
- Use the exact same implementation as `/.claude/commands/parallel/status.md`
- Add Contextune branding where appropriate
- Support both explicit and natural language invocation
- This command is read-only - never modifies anything
commands/ctx-usage.md
---
name: ctx:usage
description: Track and optimize context usage with intelligent recommendations
keywords:
- usage
- context
- limits
- quota
- optimization
---
# /ctx:usage - Context Usage Optimization
Track your Claude Code usage and get intelligent recommendations for cost optimization.
## Usage
### Quick Check (Manual Input)
```bash
# 1. Run Claude Code's built-in command:
/usage
# 2. Then run this command to log it:
/ctx:usage
```
Claude will ask you to paste the `/usage` output, then provide:
- ✅ Current usage status
- ⚠️ Warnings if approaching limits
- 💡 Recommendations (model selection, parallel tasks, timing)
- 📊 Historical trends
### Automatic Tracking
Contextune automatically estimates your token usage based on:
- Prompt lengths
- Response sizes
- Haiku vs Sonnet usage
- Parallel task spawning
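A rough sense of how such an estimate could work is sketched below. Both the per-token rates and the ~4-characters-per-token heuristic are illustrative assumptions, not actual pricing:

```python
# Assumed (input, output) USD per million tokens -- illustrative only,
# NOT Anthropic's published pricing.
RATES_PER_MTOK = {
    "haiku": (1.00, 5.00),
    "sonnet": (3.00, 15.00),
}

def estimate_cost(prompt_chars, response_chars, model="sonnet"):
    """Rough cost estimate using a ~4 characters-per-token heuristic."""
    in_tok = prompt_chars / 4
    out_tok = response_chars / 4
    in_rate, out_rate = RATES_PER_MTOK[model]
    return (in_tok * in_rate + out_tok * out_rate) / 1_000_000
```

Under these assumed rates, routing the same exchange to Haiku instead of Sonnet yields a markedly lower estimate, which is the basis for the model-switch recommendations.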
## Example Output
```
📊 Context Usage Analysis
Current Status:
Session: 7% (resets 12:59am)
Weekly: 89% (resets Oct 29, 9:59pm)
Opus: 0% available
⚠️ Warnings:
• 89% weekly usage - approaching limit
• Reset in: [time remaining]
💡 Recommendations:
• Switch research tasks to Haiku (87% cost savings)
• Max parallel tasks: 2 (based on remaining context)
• ✨ Opus available (0% used) - great for complex architecture
• Defer non-critical tasks until weekly reset
📈 Estimated Savings:
• Using Haiku for next 5 tasks: ~$0.45 saved
• Waiting until reset: +11% weekly capacity
```
## Integration with Other Commands
### /ctx:research
Automatically uses Haiku when weekly usage > 80%
### /ctx:plan
Limits parallel tasks based on available context
### /ctx:execute
Defers execution if approaching session limits
## Manual Update
If you want to manually log usage data:
```bash
/ctx:usage --update
```
Then paste your `/usage` output when prompted.
## Dashboard
View historical trends:
```bash
marimo edit notebooks/contextune_metrics_dashboard.py
```
The dashboard shows:
- Usage trends over time
- Cost savings from optimization
- Model selection patterns
- Parallel task efficiency
commands/ctx-verify.md
---
name: ctx:verify
description: Verify and execute detected slash command with user confirmation
keywords:
- verify command
- confirm command
- verification
---
# Contextune Verification Agent
**IMPORTANT**: This command is automatically triggered by the Contextune hook when it detects a potential slash command. It runs in a sub-agent to preserve the main agent's context.
## Your Task
You are a verification sub-agent. Your job is simple and focused:
1. **Present the detection** to the user clearly
2. **Ask for confirmation**
3. **Execute their choice**
4. **Report results** back concisely
## Input from Hook
The Contextune UserPromptSubmit hook provides detection information in the `additionalContext` field of the modified prompt.
**Hook output structure:**
```json
{
"modifiedPrompt": "/ctx:research ...",
"additionalContext": "🎯 Detected: /ctx:research (85% via keyword)"
}
```
**You receive:**
- **Detected Command**: Extracted from additionalContext (e.g., `/ctx:research`)
- **Confidence**: Extracted from additionalContext (e.g., `85%`)
- **Detection Method**: Extracted from additionalContext (e.g., `keyword`, `model2vec`, `semantic`)
- **Original Prompt**: The user's original natural language input
---
## Execution Steps
### Step 1: Parse Detection Information
**Extract values from the additionalContext:**
```python
# Example additionalContext:
# "🎯 Detected: /ctx:research (85% via keyword)"
import re
context = "🎯 Detected: /ctx:research (85% via keyword)"
# Parse command
command_match = re.search(r'/[a-z:-]+', context)
detected_command = command_match.group() if command_match else None
# Parse confidence
conf_match = re.search(r'(\d+)%', context)
confidence = int(conf_match.group(1)) if conf_match else 0
# Parse method
method_match = re.search(r'via (\w+)', context)
method = method_match.group(1) if method_match else "unknown"
```
---
### Step 2: Present Options to User (Using AskUserQuestion Tool)
**Use the AskUserQuestion tool to get user choice:**
```python
AskUserQuestion(
questions=[{
"question": f"I detected you might want {detected_command}. Which approach?",
"header": "Contextune",
"multiSelect": false,
"options": [
{
"label": f"Run {detected_command}",
"description": f"Use specialized command ({confidence}% confidence via {method})"
},
{
"label": "Continue with original",
"description": "Process your original request instead"
}
]
}]
)
```
---
### Step 3: Execute Based on User Choice
**If user chose Option 1 (slash command):**
```python
# Execute the detected command
SlashCommand(command=detected_command)
```
**Output to user:**
```
✅ Executing {detected_command}...
```
Then the command will run and produce its normal output.
---
**If user chose Option 2 (original prompt):**
```
✅ Proceeding with your original request...
```
Process the original prompt as if Contextune didn't detect anything.
---
**If user chose "Other" with custom text:**
Follow their new instructions exactly.
## Reporting Back
After completing the task, provide a brief summary:
```
**Contextune Verification Complete**
User choice: [Option 1/Option 2/Custom]
Action taken: [What you did]
Result: [Brief outcome]
[Any important output or findings]
```
## Key Principles
1. **Be concise** - You're a sub-agent, save the main agent's context
2. **Ask once** - Don't go back and forth multiple times
3. **Execute quickly** - User already submitted a prompt, they want action
4. **Report briefly** - Main agent just needs to know outcome
## Example Flow
**User originally typed**: "can you help me analyze this code for bugs"
**You present**:
```
🎯 **Contextune Detection**
I detected that you might want to run a slash command instead:
**Option 1**: Run `/sc:analyze`
- This is a specialized code analysis command
- Confidence: 85% (keyword match)
**Option 2**: Continue with your original request
- Original: "can you help me analyze this code for bugs"
Which would you prefer?
```
**User replies**: "1"
**You execute**:
```
✅ Executing /sc:analyze...
[Run the analysis command]
**Contextune Verification Complete**
User choice: Option 1 (/sc:analyze)
Action taken: Ran code analysis
Result: Found 3 potential issues in authentication.py
[Analysis output]
```
---
## Technical Details
This command is invoked via:
```python
# From hook:
response = {
"continue": True,
"hookSpecificOutput": {
"additionalContext": "[Contextune delegation directive]"
}
}
```
The main agent receives this context and spawns you as a sub-agent to handle verification.
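A hook producing this response shape can be sketched as a plain function. The keyword check below is a trivial stand-in for Contextune's tiered detection, and the field names follow the JSON shown above:

```python
def usage_hook(event: dict) -> dict:
    """Build a UserPromptSubmit hook response carrying a detection hint."""
    prompt = event.get("prompt", "").lower()
    if "analyze" in prompt:  # stand-in for keyword/model2vec/semantic tiers
        return {
            "continue": True,
            "hookSpecificOutput": {
                "hookEventName": "UserPromptSubmit",
                "additionalContext": "🎯 Detected: /sc:analyze (85% via keyword)",
            },
        }
    return {"continue": True}  # no detection: pass the prompt through
```

The sub-agent then parses `additionalContext` exactly as described in Step 1.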

commands/slashsense-stats.py (new executable file, 200 lines)
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.10"
# dependencies = [
# "rich>=13.0.0",
# ]
# ///
"""
SlashSense Statistics Command
Displays detection performance metrics and statistics.
Currently uses mock/example data for demonstration.
"""
import sys
from rich.console import Console
from rich.panel import Panel
from rich.table import Table
from rich.progress import Progress, BarColumn, TextColumn
console = Console()
# Mock statistics data for demonstration
MOCK_STATS = {
"total_detections": 1247,
"tier_performance": {
"keyword": {
"detections": 892,
"avg_latency_ms": 0.05,
"accuracy": 0.98
},
"model2vec": {
"detections": 245,
"avg_latency_ms": 0.18,
"accuracy": 0.94
},
"semantic_router": {
"detections": 110,
"avg_latency_ms": 47.3,
"accuracy": 0.89
}
},
"top_commands": [
{"command": "/sc:analyze", "count": 324},
{"command": "/sc:implement", "count": 218},
{"command": "/sc:test", "count": 187},
{"command": "/sc:git", "count": 156},
{"command": "/sc:improve", "count": 134}
],
"confidence_distribution": {
"0.7-0.8": 243,
"0.8-0.9": 456,
"0.9-1.0": 548
}
}
def display_overview():
"""Display overview statistics."""
console.print()
console.print(Panel.fit(
"[bold cyan]SlashSense Detection Statistics[/bold cyan]",
border_style="cyan"
))
console.print()
stats_table = Table(show_header=False, box=None, padding=(0, 2))
stats_table.add_column("Metric", style="cyan")
stats_table.add_column("Value", style="green bold")
stats_table.add_row("Total Detections", f"{MOCK_STATS['total_detections']:,}")
stats_table.add_row("Keyword Matches", f"{MOCK_STATS['tier_performance']['keyword']['detections']:,}")
stats_table.add_row("Model2Vec Matches", f"{MOCK_STATS['tier_performance']['model2vec']['detections']:,}")
stats_table.add_row("Semantic Router Matches", f"{MOCK_STATS['tier_performance']['semantic_router']['detections']:,}")
console.print(Panel(stats_table, title="[bold]Overview[/bold]", border_style="blue"))
console.print()
def display_tier_performance():
"""Display tier-by-tier performance metrics."""
tier_table = Table(show_header=True, box=None, padding=(0, 2))
tier_table.add_column("Tier", style="cyan bold")
tier_table.add_column("Detections", justify="right", style="green")
tier_table.add_column("Avg Latency", justify="right", style="yellow")
tier_table.add_column("Accuracy", justify="right", style="magenta")
tier_table.add_column("Performance", justify="left")
for tier_name, tier_data in MOCK_STATS["tier_performance"].items():
# Create performance bar
accuracy_pct = int(tier_data["accuracy"] * 100)
bar = "" * (accuracy_pct // 5) + "" * (20 - (accuracy_pct // 5))
tier_table.add_row(
tier_name.capitalize(),
f"{tier_data['detections']:,}",
f"{tier_data['avg_latency_ms']:.2f}ms",
f"{accuracy_pct}%",
f"[green]{bar}[/green]"
)
console.print(Panel(tier_table, title="[bold]Tier Performance[/bold]", border_style="blue"))
console.print()
def display_top_commands():
"""Display most frequently detected commands."""
cmd_table = Table(show_header=True, box=None, padding=(0, 2))
cmd_table.add_column("Rank", style="cyan", justify="right")
cmd_table.add_column("Command", style="green bold")
cmd_table.add_column("Count", justify="right", style="yellow")
cmd_table.add_column("Percentage", justify="right", style="magenta")
total = MOCK_STATS["total_detections"]
for idx, cmd_data in enumerate(MOCK_STATS["top_commands"], 1):
percentage = (cmd_data["count"] / total) * 100
cmd_table.add_row(
f"#{idx}",
cmd_data["command"],
f"{cmd_data['count']:,}",
f"{percentage:.1f}%"
)
console.print(Panel(cmd_table, title="[bold]Top 5 Commands[/bold]", border_style="blue"))
console.print()
def display_confidence_distribution():
"""Display confidence score distribution."""
console.print(Panel.fit(
"[bold]Confidence Score Distribution[/bold]",
border_style="blue"
))
console.print()
total = sum(MOCK_STATS["confidence_distribution"].values())
for range_label, count in MOCK_STATS["confidence_distribution"].items():
percentage = (count / total) * 100
bar_length = int(percentage / 2) # Scale to fit in terminal
bar = "" * bar_length
console.print(f" [cyan]{range_label}[/cyan]: [green]{bar}[/green] {count:,} ({percentage:.1f}%)")
console.print()
def display_recommendations():
"""Display performance recommendations."""
recommendations = []
# Check tier usage
keyword_pct = (MOCK_STATS['tier_performance']['keyword']['detections'] /
MOCK_STATS['total_detections']) * 100
if keyword_pct < 60:
recommendations.append(
"[yellow]Consider adding more keyword patterns to improve fast-path performance[/yellow]"
)
semantic_count = MOCK_STATS['tier_performance']['semantic_router']['detections']
if semantic_count > 200:
recommendations.append(
"[yellow]High semantic router usage detected. Consider promoting common patterns to Model2Vec tier[/yellow]"
)
if recommendations:
console.print(Panel(
"\n".join(f"{rec}" for rec in recommendations),
title="[bold]Recommendations[/bold]",
border_style="yellow"
))
console.print()
def main():
"""Main entry point for slashsense:stats command."""
try:
display_overview()
display_tier_performance()
display_top_commands()
display_confidence_distribution()
display_recommendations()
console.print("[dim]Note: These are example statistics. Real-time tracking coming soon![/dim]")
console.print()
return 0
except KeyboardInterrupt:
console.print("\n[yellow]Statistics display cancelled.[/yellow]")
return 130
except Exception as e:
console.print(f"[red]Error:[/red] {e}", file=sys.stderr)
return 1
if __name__ == "__main__":
sys.exit(main())