Initial commit

Zhongwei Li
2025-11-30 08:56:23 +08:00
commit 1c24761eb8
14 changed files with 2028 additions and 0 deletions

22
.claude-plugin/plugin.json Normal file

@@ -0,0 +1,22 @@
{
"name": "shavakan-commands",
"description": "Slash commands for dev docs workflow and repository cleanup",
"version": "1.2.3",
"author": {
"name": "shavakan",
"email": "cs.changwon.lee@gmail.com"
},
"commands": [
"./shavakan/docs-feature-plan.md",
"./shavakan/docs-save-context.md",
"./shavakan/docs-update.md",
"./shavakan/docs-readme.md",
"./shavakan/cleanup.md",
"./shavakan/cleanup-architecture.md",
"./shavakan/cleanup-comments.md",
"./shavakan/cleanup-dead-code.md",
"./shavakan/cleanup-deps.md",
"./shavakan/cleanup-docs.md",
"./shavakan/cleanup-duplication.md"
]
}

3
README.md Normal file

@@ -0,0 +1,3 @@
# shavakan-commands
Slash commands for dev docs workflow and repository cleanup

85
plugin.lock.json Normal file

@@ -0,0 +1,85 @@
{
"$schema": "internal://schemas/plugin.lock.v1.json",
"pluginId": "gh:Shavakan/claude-marketplace:commands",
"normalized": {
"repo": null,
"ref": "refs/tags/v20251128.0",
"commit": "537e63aadb037599f1137fd244b9404c8013bd6d",
"treeHash": "f1e210cb9a94275654e2a224bbd8fd0ea046e4928cc633b1f61e9286c26f32a8",
"generatedAt": "2025-11-28T10:12:47.499750Z",
"toolVersion": "publish_plugins.py@0.2.0"
},
"origin": {
"remote": "git@github.com:zhongweili/42plugin-data.git",
"branch": "master",
"commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
"repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
},
"manifest": {
"name": "shavakan-commands",
"description": "Slash commands for dev docs workflow and repository cleanup",
"version": "1.2.3"
},
"content": {
"files": [
{
"path": "README.md",
"sha256": "8aadc078dbc46783be01f8a0c91a81fe37cf32eca6580220e925bc8bda571eda"
},
{
"path": "shavakan/cleanup-comments.md",
"sha256": "c27eac958c27c604e5a865fce7cb91f22895a822cdb0d0819d410d74470ae9b6"
},
{
"path": "shavakan/cleanup-docs.md",
"sha256": "eedfcaab655a7524a2ea171b6f57f3ec258154d5e02d8b0c7f6aad5577821d65"
},
{
"path": "shavakan/cleanup-deps.md",
"sha256": "708e12f702ecbbf1b20bab4c038e722ff218cb0f7ba3325368ffeb0112eff5d0"
},
{
"path": "shavakan/cleanup-duplication.md",
"sha256": "49c250a4f755fff5d806fd5f405d8fde27f082e6dcc71f2e4d3b5228c6b47c66"
},
{
"path": "shavakan/docs-readme.md",
"sha256": "e27c3d24002c063b13bc1189d6722038bf2d9ff0a7cd6dd6b5bc68ec4cfadb39"
},
{
"path": "shavakan/docs-update.md",
"sha256": "c5274ba3c55aae07cdeeb581539d6ff42ad8bbb2fbcdbd82a34c5dbb4c9e07c6"
},
{
"path": "shavakan/cleanup-architecture.md",
"sha256": "7609423379c561a316a6d1a752a62fd3cd746728ed8a7e4505ae9cde4d7c5e72"
},
{
"path": "shavakan/docs-feature-plan.md",
"sha256": "36e811a574035a7898172b08600cecb10ccd1908e80e6f0455a7a383ee6b377f"
},
{
"path": "shavakan/cleanup.md",
"sha256": "42590b55e38ee076e98b1dc3b95bde2388ef978f757e6b70ca512de479985acd"
},
{
"path": "shavakan/docs-save-context.md",
"sha256": "933c1ecb3b72247314dc067b6cb641a312434ceb5cdb8ea860b08fbc701756db"
},
{
"path": "shavakan/cleanup-dead-code.md",
"sha256": "8148a1cfcda965d74abde221dc10164b7ec81bf45a38624d815b17481c23f72b"
},
{
"path": ".claude-plugin/plugin.json",
"sha256": "525366c855b8eaf9d75089a653d3a1d287e965649e1a485130aaefff5fc5dfb2"
}
],
"dirSha256": "f1e210cb9a94275654e2a224bbd8fd0ea046e4928cc633b1f61e9286c26f32a8"
},
"security": {
"scannedAt": null,
"scannerVersion": null,
"flags": []
}
}

235
shavakan/cleanup-architecture.md Normal file

@@ -0,0 +1,235 @@
---
description: Refactor structure - split god objects, break circular deps, improve separation of concerns
---
# Refactor Architecture
Improve code structure: split god objects, break circular dependencies, improve separation of concerns.
## Prerequisites
**Safety requirements:**
1. Git repository with clean working tree
2. All tests passing before refactoring
3. Backup branch created automatically
4. Test validation after each refactor
**Run prerequisite check:**
```bash
# Get the plugin root (where scripts are located)
PLUGIN_ROOT="$HOME/.claude/plugins/marketplaces/shavakan"
# Validate plugin root is under home directory
if [[ ! "$PLUGIN_ROOT" =~ ^"$HOME"/.* ]]; then
echo "ERROR: Invalid plugin root path"
exit 1
fi
# Run prerequisites check and source output
PREREQ_SCRIPT="$PLUGIN_ROOT/commands/cleanup/scripts/check-prerequisites.sh"
if [[ ! -f "$PREREQ_SCRIPT" ]]; then
echo "ERROR: Prerequisites script not found at $PREREQ_SCRIPT"
exit 1
fi
# Capture output to temp file and source it
PREREQ_OUTPUT=$(mktemp)
if "$PREREQ_SCRIPT" > "$PREREQ_OUTPUT" 2>&1; then
source "$PREREQ_OUTPUT"
rm "$PREREQ_OUTPUT"
else
cat "$PREREQ_OUTPUT"
rm "$PREREQ_OUTPUT"
exit 1
fi
```
This exports: `TEST_CMD`, `BACKUP_BRANCH`, `LOG_FILE`
---
## Additional Instructions
$ARGUMENTS
---
## Objective
Identify and fix architecture issues that make code hard to maintain: god objects, circular dependencies, poor layer separation, high complexity, and long parameter lists.
### Architecture Smells
**God objects** - Files >300 lines or classes with too many responsibilities (detect via file size, import count, mixed concerns)
**Circular dependencies** - Module A imports B, B imports A (creates tight coupling, import errors)
**Poor separation** - Files mixing layers (controller + service + repository in one file, UI with business logic)
**High complexity** - Functions with cyclomatic complexity >10, deeply nested conditionals, >50 lines
**Long parameter lists** - Functions with 5+ parameters (should use options objects)
---
## Execution
### Phase 1: Detect Architecture Issues
Scan codebase for architecture smells using appropriate analysis tools for the language. For each issue found:
- Identify the specific problem (what makes this a god object? which modules form the cycle?)
- Assess impact (how is this hurting maintainability?)
- Propose solution (split by domain? extract interface? introduce layer?)
Present findings with prioritization:
- **Critical**: Circular deps causing build issues, god objects blocking development
- **High**: Poor separation causing bugs, complexity >15
- **Medium**: Large files that are manageable, complexity 10-15
**Gate**: User must review full audit before proceeding.
### Phase 2: Prioritize with User
Present audit findings with impact assessment:
```
Refactor architecture issues?
□ Safest - Extract complex functions + convert long param lists
□ Low risk - Split god objects by domain boundaries
□ Medium risk - Break circular deps + separate mixed layers
□ High risk - Major architectural refactoring
□ Custom - Select specific issues
□ Cancel
```
**Gate**: Get user approval on which issues and risk level to address.
### Phase 3: Refactor Systematically
For each approved issue:
**Split god objects:**
- Analyze file to identify distinct responsibilities (auth vs profile vs settings)
- Propose domain-driven split with clear boundaries
- Create new files for each responsibility
- Move methods to appropriate files
- Update all imports
- Test after each file is moved
**Break circular dependencies:**
- Analyze cycle to understand why it exists
- Choose approach: extract shared types, introduce interface, dependency inversion
- Implement solution (e.g., create UserSummary type to break User ↔ Post cycle)
- Update imports
- Verify no circular reference remains
**Separate layers:**
- Identify mixed concerns (route handling + validation + business logic + DB in one file)
- Create appropriate layer files (Controller, Validator, Service, Repository)
- Move logic to correct layers
- Update dependencies to flow one direction
- Test layer boundaries work correctly
**Reduce complexity:**
- Extract methods for logical blocks
- Replace nested conditionals with guard clauses
- Introduce state pattern for workflow steps
- Test behavior unchanged
**Simplify parameters:**
- Create options interface with all parameters
- Update function signature
- Update all call sites
- Test all cases still work
**Critical**: Each refactoring must preserve exact behavior. Tests must pass. If tests fail, rollback and investigate the difference.
**Gate**: Tests must pass before moving to next refactoring.
### Phase 4: Report Results
Summarize improvements: god objects split (N → M files), circular deps broken (N cycles eliminated), complexity reduced (N functions simplified), maintainability impact.
Delete the backup branch after successful completion:
```bash
git branch -D "$BACKUP_BRANCH"
```
---
## Refactoring Patterns
### Split God Object
Extract cohesive responsibilities into focused modules:
```typescript
// Before: UserService (847 lines, does everything)
class UserService {
  authenticate() { }
  updateProfile() { }
  sendNotification() { }
}
// After: Split by domain
class AuthService { authenticate() { } }
class ProfileService { updateProfile() { } }
class NotificationService { sendNotification() { } }
```
### Break Circular Dependency
Extract shared types or introduce interfaces:
```typescript
// Before: User imports Post, Post imports User
// Create shared type
interface UserSummary { id: string; name: string; }
// User can import Post (full)
// Post only imports UserSummary (breaks cycle)
```
### Separate Layers
Controller → Service → Repository:
```typescript
// Before: Everything in one function
async function createOrder(req, res) {
  if (!req.body.items) return res.status(400).send();
  const total = calculateTotal(req.body.items);
  const order = await db.orders.create({ items: req.body.items, total });
  res.json(order);
}
// After: Proper layering
// Controller: HTTP handling only
// Service: Business logic
// Repository: Database access
```
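### Reduce Complexity
Replace nested conditionals with guard clauses. A minimal sketch with hypothetical names:
```typescript
// Before: nested conditionals
function applyDiscount(order: { total: number; coupon?: string }): number {
  if (order.total > 0) {
    if (order.coupon) {
      return order.total * 0.9;
    }
    return order.total;
  }
  return 0;
}
// After: guard clauses, single level of nesting
function applyDiscountWithGuards(order: { total: number; coupon?: string }): number {
  if (order.total <= 0) return 0;
  if (!order.coupon) return order.total;
  return order.total * 0.9;
}
```
### Simplify Parameters
Collapse long parameter lists into an options object. A sketch, assuming illustrative fields:
```typescript
// Before: positional parameters are easy to mix up
function createUser(name: string, email: string, role: string, active: boolean, locale: string) { }
// After: one options interface; optional fields are explicit
interface CreateUserOptions {
  name: string;
  email: string;
  role: string;
  active?: boolean;
  locale?: string;
}
function createUserFromOptions(options: CreateUserOptions) { }
// Call sites become self-describing
createUserFromOptions({ name: 'Ada', email: 'ada@example.com', role: 'admin' });
```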
---
## Safety Constraints
**CRITICAL:**
- One refactoring at a time - test after each structural change
- Preserve exact behavior - all tests must pass
- Commit granularly - each refactoring gets its own commit
- Never batch refactors - if one breaks, you can't identify which
- Keep interfaces stable - don't break public APIs
- Use types to catch breaking changes at compile time
**If tests fail**: Rollback immediately, identify the specific change that broke behavior, understand why it's different from original.
---
## After Cleanup
**Review with code-reviewer agent before pushing:**
Use `shavakan-agents:code-reviewer` to verify refactorings don't introduce issues.
---
## Related Commands
- `/shavakan-commands:cleanup` - Full repository audit
- `/shavakan-commands:cleanup-dead-code` - Remove unused code
- `/shavakan-commands:cleanup-duplication` - Remove code duplication

158
shavakan/cleanup-comments.md Normal file

@@ -0,0 +1,158 @@
---
description: Remove obvious comments, commented code, outdated TODOs - keep 'why' explanations
---
# Remove Comment Noise
Remove comments that add zero value. Code should be self-documenting. Keep comments that explain "why" not "what".
## Prerequisites
**Safety requirements:**
1. Git repository with clean working tree
2. Backup branch created automatically
3. Test validation after each change category (if tests exist)
**Run prerequisite check:**
```bash
PLUGIN_ROOT="$HOME/.claude/plugins/marketplaces/shavakan"
if [[ ! "$PLUGIN_ROOT" =~ ^"$HOME"/.* ]]; then
echo "ERROR: Invalid plugin root path"
exit 1
fi
PREREQ_SCRIPT="$PLUGIN_ROOT/commands/cleanup/scripts/check-prerequisites.sh"
if [[ ! -f "$PREREQ_SCRIPT" ]]; then
echo "ERROR: Prerequisites script not found"
exit 1
fi
PREREQ_OUTPUT=$(mktemp)
if "$PREREQ_SCRIPT" --skip-tests > "$PREREQ_OUTPUT" 2>&1; then
source "$PREREQ_OUTPUT"
rm "$PREREQ_OUTPUT"
else
cat "$PREREQ_OUTPUT"
rm "$PREREQ_OUTPUT"
exit 1
fi
```
This exports: `TEST_CMD`, `BACKUP_BRANCH`, `LOG_FILE`
---
## Additional Instructions
$ARGUMENTS
---
## Objective
Identify and remove comment noise across the codebase while preserving valuable "why" explanations.
### Remove These
**Obvious comments** - Repeat what code does ("increment counter" above `count++`)
**Commented code blocks** - 5+ consecutive commented lines with code syntax (belongs in git history)
**Outdated TODOs/FIXMEs** - >1 year old or already completed (check git blame)
**Redundant documentation** - JSDoc/docstrings that add no value beyond signature (`@param user The user`)
**Visual separators** - ASCII art, "end of section" markers (use file structure instead)
### Preserve These
**Non-obvious "why" explanations** - Why this approach vs alternatives
**Workarounds** - Performance trade-offs, security considerations, edge cases
**Hacks with justification** - Browser quirks, API limitations, temporary fixes with context
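A small hypothetical sketch (illustrative code, not from a real project) of the remove-versus-preserve distinction:
```typescript
let retryCount = 0;

// Noise: restates the code, safe to remove
// increment the retry counter
retryCount++;

/**
 * Redundant doc: adds nothing beyond the signature, safe to remove
 * @param user The user
 */
function formatUser(user: { name: string }): string {
  return user.name.trim();
}

// Keep: explains why, not what
// Deferring one tick works around an event-ordering quirk where blur fires
// before click; removing the delay drops the click in affected browsers.
function handleClick(): void { }
setTimeout(handleClick, 0);
```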
---
## Execution
### Phase 1: Identify Comment Noise
Scan codebase for all five categories. For each finding, capture:
- Location (file:line or range)
- Comment content and surrounding code
- Why it's noise vs valuable
Present findings grouped by category with counts. Include examples of valuable comments you're preserving to demonstrate understanding.
Identify refactoring opportunities where improving code clarity (better names, extracted functions) would eliminate the need for explanatory comments.
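For example (hypothetical code), a comment that explains an opaque condition can usually be replaced by a named constant and predicate:
```typescript
interface Account { expiredAt: number }
declare function renew(account: Account): void; // stand-in for existing logic

// Before: the comment compensates for an opaque condition
function renewIfEligible(account: Account): void {
  // check whether the subscription is still inside its grace period
  if (Date.now() - account.expiredAt < 14 * 24 * 60 * 60 * 1000) {
    renew(account);
  }
}

// After: the names carry the meaning; the comment is no longer needed
const GRACE_PERIOD_MS = 14 * 24 * 60 * 60 * 1000;

function isWithinGracePeriod(account: Account): boolean {
  return Date.now() - account.expiredAt < GRACE_PERIOD_MS;
}

function renewIfEligibleNamed(account: Account): void {
  if (isWithinGracePeriod(account)) renew(account);
}
```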
**Gate**: User must review findings before removal.
### Phase 2: Confirm Scope
Present findings to user:
```
Remove comment noise?
□ Safest - Visual separators + obvious comments
□ Low risk - Commented code blocks + redundant docs
□ Needs review - Outdated TODOs (might still be relevant)
□ Custom - Select specific categories
□ Cancel
```
**Gate**: Get user approval on which categories to remove.
### Phase 3: Execute Removals
For each approved category:
1. Consider refactoring first: If comment explains complex logic, can you make the code self-documenting instead?
2. Remove comments: Edit files safely. Process one category completely before starting next.
3. Test and commit: Run `$TEST_CMD` after each category. If tests pass, commit with message explaining what was removed. If tests fail, rollback and report which removal caused the issue.
**Gate**: Tests must pass before proceeding to next category.
### Phase 4: Report Results
Summarize what was removed (counts per category), what was preserved, code improvements made, and impact (lines reduced, readability improved).
Delete the backup branch after successful completion:
```bash
git branch -D "$BACKUP_BRANCH"
```
---
## Safety Constraints
**CRITICAL:**
- One category at a time with test validation between each
- Commit granularly - each category gets its own commit
- When in doubt, preserve the comment
- If comment explains complex code, refactor the code instead of removing comment
- Never remove comments that justify non-obvious technical decisions
- Test failure → rollback immediately, investigate before retrying
**If tests fail**: Rollback with `git checkout . && git clean -fd`, identify which specific removal broke tests, decide whether to skip that item or investigate further.
---
## After Cleanup
**Review with code-reviewer agent before pushing:**
Use `shavakan-agents:code-reviewer` to verify changes don't introduce issues.
---
## Related Commands
- `/shavakan-commands:cleanup` - Full repository audit
- `/shavakan-commands:cleanup-dead-code` - Remove unused code
- `/shavakan-commands:cleanup-deps` - Clean dependencies

155
shavakan/cleanup-dead-code.md Normal file

@@ -0,0 +1,155 @@
---
description: Remove unused imports, functions, variables, commented blocks, unreachable code
---
# Remove Dead Code
Find and safely remove unused code: imports, functions, variables, commented blocks, unreachable code, unused files.
## Prerequisites
**Safety requirements:**
1. Git repository with clean working tree
2. All tests passing before cleanup
3. Backup branch created automatically
4. Test validation after each removal category
**Run prerequisite check:**
```bash
PLUGIN_ROOT="$HOME/.claude/plugins/marketplaces/shavakan"
if [[ ! "$PLUGIN_ROOT" =~ ^"$HOME"/.* ]]; then
echo "ERROR: Invalid plugin root path"
exit 1
fi
PREREQ_SCRIPT="$PLUGIN_ROOT/commands/cleanup/scripts/check-prerequisites.sh"
if [[ ! -f "$PREREQ_SCRIPT" ]]; then
echo "ERROR: Prerequisites script not found"
exit 1
fi
PREREQ_OUTPUT=$(mktemp)
if "$PREREQ_SCRIPT" > "$PREREQ_OUTPUT" 2>&1; then
source "$PREREQ_OUTPUT"
rm "$PREREQ_OUTPUT"
else
cat "$PREREQ_OUTPUT"
rm "$PREREQ_OUTPUT"
exit 1
fi
```
This exports: `TEST_CMD`, `BACKUP_BRANCH`, `LOG_FILE`
---
## Additional Instructions
$ARGUMENTS
---
## Objective
Identify and remove dead code that clutters the codebase without affecting functionality.
### Categories
**Unused imports** - Modules imported but never referenced
**Unused functions** - Defined but never called, private methods with no references, exported with no external usage
**Unused variables** - Declared but never read, parameters never used, constants not referenced
**Commented code** - 5+ consecutive commented lines with code syntax (belongs in git history)
**Unreachable code** - After return/throw/break, conditional branches that never execute
**Unused files** - Source files with no imports, test files for deleted features, replaced implementations
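A compact, hypothetical sketch showing several of these categories in one file:
```typescript
import { readFileSync } from 'fs'; // unused import: never referenced below

const MAX_RETRIES = 5; // unused variable: declared but never read

export function activeFeature(): string {
  return 'enabled';
  console.log('done'); // unreachable: follows the return
}

// Commented code block: belongs in git history, not here
// function legacyFeature(): string {
//   return 'disabled';
// }

function unusedHelper(): void { } // unused function: no callers in this module
```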
---
## Execution
### Phase 1: Detect Dead Code
Scan codebase for unused code using language-appropriate static analysis. For each category:
- Report locations with context
- Verify findings (check for dynamic references, reflection, callbacks)
- Count impact (lines that can be removed)
**Be cautious with:**
- Public API exports - check for external usage first
- Test utilities - may be needed later even if not currently used
- Code called dynamically (eval, reflection, event handlers)
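The last point is a common source of false positives. A hypothetical sketch of a function that looks unused to static analysis but is reached through a string key:
```typescript
function onUserCreated(): void {
  console.log('welcome email queued');
}

// The only reference is through this lookup table
const handlers: Record<string, () => void> = {
  'user.created': onUserCreated,
};

export function dispatch(eventName: string): void {
  handlers[eventName]?.(); // dynamic call: no direct call site for onUserCreated
}
```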
**Gate**: User must review findings before removal.
### Phase 2: Confirm Removals
Present findings to user with risk assessment:
```
Remove dead code?
□ Safe - Unused imports + variables (auto-fixable)
□ Low risk - Unused private functions + commented code
□ Needs review - Unused public functions + unused files
□ Custom - Select specific items
□ Cancel
```
**Gate**: Get user approval on which categories to remove.
### Phase 3: Execute Removals
For each approved category:
1. Remove items using language-specific tools when available (eslint --fix, autoflake). For functions/files, manual removal with careful validation.
2. Test and commit: Run `$TEST_CMD` after each category. If passing, commit. If failing, rollback and report which removal broke tests.
3. Continue safely: Process one category at a time. Stop on test failure.
**Gate**: Tests must pass before moving to next category.
### Phase 4: Report Results
Summarize: items removed per category, files affected, lines eliminated, code coverage maintained.
Delete the backup branch after successful completion:
```bash
git branch -D "$BACKUP_BRANCH"
```
---
## Safety Constraints
**CRITICAL:**
- One category at a time with test validation
- Never remove public API exports without confirming no external usage
- Verify zero references using code search before removing
- Test after each removal - rollback on failure
- Commit granularly
- Use `git rm` for file removal to preserve history
- Check for dynamic references (code may be called via string identifiers)
**If tests fail**: Rollback, identify which specific removal caused failure, decide whether to skip that item or investigate why it was actually needed.
---
## After Cleanup
**Review with code-reviewer agent before pushing:**
Use `shavakan-agents:code-reviewer` to verify removals don't introduce issues.
---
## Related Commands
- `/shavakan-commands:cleanup` - Full repository audit
- `/shavakan-commands:cleanup-comments` - Clean up comments
- `/shavakan-commands:cleanup-architecture` - Refactor structure

174
shavakan/cleanup-deps.md Normal file

@@ -0,0 +1,174 @@
---
description: Clean dependencies - remove unused, fix security issues, update outdated, deduplicate
---
# Clean Up Dependencies
Remove unused packages, fix security vulnerabilities, update outdated packages, eliminate duplicate versions.
## Prerequisites
**Safety requirements:**
1. Git repository with clean working tree
2. All tests passing before cleanup
3. Backup branch created automatically
4. Test validation after each dependency change
**Run prerequisite check:**
```bash
PLUGIN_ROOT="$HOME/.claude/plugins/marketplaces/shavakan"
if [[ ! "$PLUGIN_ROOT" =~ ^"$HOME"/.* ]]; then
echo "ERROR: Invalid plugin root path"
exit 1
fi
PREREQ_SCRIPT="$PLUGIN_ROOT/commands/cleanup/scripts/check-prerequisites.sh"
if [[ ! -f "$PREREQ_SCRIPT" ]]; then
echo "ERROR: Prerequisites script not found"
exit 1
fi
PREREQ_OUTPUT=$(mktemp)
if "$PREREQ_SCRIPT" > "$PREREQ_OUTPUT" 2>&1; then
source "$PREREQ_OUTPUT"
rm "$PREREQ_OUTPUT"
else
cat "$PREREQ_OUTPUT"
rm "$PREREQ_OUTPUT"
exit 1
fi
```
This exports: `TEST_CMD`, `BACKUP_BRANCH`, `LOG_FILE`
---
## Additional Instructions
$ARGUMENTS
---
## Objective
Clean up project dependencies across four categories:
**Unused dependencies** - Installed but never imported, dev deps not used in build/test
**Security vulnerabilities** - Packages with known CVEs (critical/high priority)
**Outdated dependencies** - Packages with newer stable versions, major updates available
**Duplicates** - Same package at multiple versions, conflicting peer dependencies
---
## Execution
### Phase 1: Detect Package Manager & Scan
Identify package manager (npm/pnpm/yarn/pip/cargo/go) from lockfiles.
Scan dependencies for all four categories. Present findings grouped by category with counts and severity:
- Critical vulnerabilities (fix immediately)
- Unused packages (safe to remove)
- Outdated packages (by semver level: major/minor/patch)
- Duplicates (version conflicts)
**Gate**: User must see full audit before proceeding.
### Phase 2: Prioritize with User
Present findings with risk assessment:
- **Critical**: Security vulnerabilities (URGENT - fix immediately)
- **Safe**: Unused dependencies (safe to remove after verification)
- **Low risk**: Minor/patch updates (backwards compatible)
- **Medium risk**: Major version updates (review breaking changes)
- **Needs review**: Duplicate resolution (check compatibility)
Offer update strategies:
```
Choose cleanup strategy:
□ Conservative - Patch only, critical security fixes
□ Moderate - Minor + patch, all security fixes
□ Aggressive - All major updates (extensive testing required)
□ Custom - Select specific categories
□ Cancel
```
**Gate**: Get user approval on which categories and strategy level.
### Phase 3: Execute Cleanup
For each approved category:
**Security vulnerabilities:**
- Update to patched version
- Check release notes for breaking changes
- Update code if API changed
- Test thoroughly
**Unused dependencies:**
- Verify not imported anywhere (check for dynamic requires) - see the sketch after this list
- Remove from package manifest
- Clean lockfile
- Test immediately
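A rough Node/TypeScript sketch of the static-import check referenced above (illustrative layout: sources under `src/`, run with ts-node). It only finds static `import ... from` references, which is why dynamic requires still need a manual pass:
```typescript
import { readFileSync, readdirSync, statSync } from 'fs';
import { join } from 'path';

// Recursively collect source files
function sourceFiles(dir: string, acc: string[] = []): string[] {
  for (const entry of readdirSync(dir)) {
    const full = join(dir, entry);
    if (statSync(full).isDirectory()) sourceFiles(full, acc);
    else if (/\.(ts|tsx|js|jsx)$/.test(entry)) acc.push(full);
  }
  return acc;
}

const manifest = JSON.parse(readFileSync('package.json', 'utf8'));
const files = sourceFiles('src');

for (const dep of Object.keys(manifest.dependencies ?? {})) {
  const pattern = new RegExp(`from\\s+['"]${dep}(['"/])`);
  const used = files.some((file) => pattern.test(readFileSync(file, 'utf8')));
  if (!used) console.log(`candidate for removal (verify manually): ${dep}`);
}
```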
**Outdated packages:**
- Check CHANGELOG for breaking changes
- Update one package at a time (or related packages together)
- Update code for API changes
- Test after each update
**Duplicates:**
- Choose version to keep (usually newer)
- Use resolutions/overrides if needed
- Rebuild lockfile
- Test compatibility
**Critical safety constraint**: One change at a time. Test after each. Commit on success, rollback on failure.
**Gate**: Tests must pass before moving to next category.
### Phase 4: Report Results
Summarize: vulnerabilities fixed (by severity), unused removed, packages updated (major/minor/patch), duplicates resolved, overall security/maintenance improvement.
Delete the backup branch after successful completion:
```bash
git branch -D "$BACKUP_BRANCH"
```
---
## Safety Constraints
**CRITICAL:**
- Security fixes first - prioritize over cosmetic improvements
- One package change at a time - test between each
- Read release notes before major updates
- Commit granularly
- Don't blindly auto-fix - some fixes introduce breaking changes
- Keep lockfiles - commit package-lock.json, yarn.lock, etc.
- Check peer dependencies after updates
**If tests fail**: Rollback, check if jumping too many versions, try intermediate version, review release notes for breaking changes.
---
## After Cleanup
**Review with code-reviewer agent before pushing:**
Use `shavakan-agents:code-reviewer` to verify changes don't introduce issues.
---
## Related Commands
- `/shavakan-commands:cleanup` - Full repository audit
- `/shavakan-commands:cleanup-dead-code` - Remove unused code
- `/shavakan-commands:cleanup-architecture` - Refactor structure

181
shavakan/cleanup-docs.md Normal file

@@ -0,0 +1,181 @@
---
description: Sync documentation with code - fix dead links, update API docs, remove outdated content
---
# Sync Documentation with Code
Ensure docs accurately reflect current codebase. Remove outdated content, fix dead links, update API docs.
## Prerequisites
**Safety requirements:**
1. Git repository with clean working tree
2. Backup branch created automatically
3. Validation after each doc update category (if tests exist)
**Run prerequisite check:**
```bash
PLUGIN_ROOT="$HOME/.claude/plugins/marketplaces/shavakan"
if [[ ! "$PLUGIN_ROOT" =~ ^"$HOME"/.* ]]; then
echo "ERROR: Invalid plugin root path"
exit 1
fi
PREREQ_SCRIPT="$PLUGIN_ROOT/commands/cleanup/scripts/check-prerequisites.sh"
if [[ ! -f "$PREREQ_SCRIPT" ]]; then
echo "ERROR: Prerequisites script not found"
exit 1
fi
PREREQ_OUTPUT=$(mktemp)
if "$PREREQ_SCRIPT" --skip-tests > "$PREREQ_OUTPUT" 2>&1; then
source "$PREREQ_OUTPUT"
rm "$PREREQ_OUTPUT"
else
cat "$PREREQ_OUTPUT"
rm "$PREREQ_OUTPUT"
exit 1
fi
```
This exports: `TEST_CMD`, `BACKUP_BRANCH`, `LOG_FILE`
---
## Additional Instructions
$ARGUMENTS
---
## Objective
Synchronize documentation with current codebase state - eliminate inaccuracies, fix broken references, ensure examples work.
### Documentation Issues
**Dead links** - Internal links to moved/deleted files, external 404s, broken anchor references
**Outdated API docs** - Function signatures changed, parameters added/removed/renamed, return types mismatched, deprecated methods still documented
**Stale content** - Removed features still documented, old tooling instructions, architecture docs contradicting code, examples using deprecated APIs
**Incorrect paths** - Wrong import examples, outdated file structure diagrams, command examples referencing moved files
**Missing docs** - New features without documentation, API endpoints undescribed, configuration options undocumented, breaking changes not in changelog
---
## Execution
### Phase 1: Audit Documentation
Scan all docs and cross-reference with codebase. Documentation typically lives in: README.md, docs/ directory, API documentation (JSDoc/docstrings), CHANGELOG.md, CONTRIBUTING.md, inline examples.
For each issue found, capture location, problem description, and suggested fix. Present findings grouped by category with counts and severity.
**Gate**: User must review full audit before proceeding.
### Phase 2: Prioritize Fixes
Present audit findings with impact assessment:
```
Fix documentation issues?
□ Critical - Dead links + wrong API signatures + missing docs
□ High - Stale setup + removed features + incorrect paths
□ Medium - Outdated diagrams + deprecated tool references
□ Custom - Select specific categories
□ Cancel
```
**Gate**: Get user approval on which categories to fix.
### Phase 3: Fix Issues
For each approved category:
**Dead links:** Update to new location if file moved, remove if deleted, find replacement for broken external URLs, fix anchor links to match current headings
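For internal links, a minimal hypothetical sketch (Node/TypeScript, single file, no anchor validation):
```typescript
import { readFileSync, existsSync } from 'fs';
import { dirname, resolve } from 'path';

const docPath = 'README.md';
const markdown = readFileSync(docPath, 'utf8');

// Matches [text](target); a first pass, not a full Markdown parser
for (const match of markdown.matchAll(/\[[^\]]*\]\(([^)]+)\)/g)) {
  const target = match[1];
  if (/^https?:\/\//.test(target) || target.startsWith('#')) continue; // external URLs and anchors need separate checks
  const local = resolve(dirname(docPath), target.split('#')[0]);
  if (!existsSync(local)) console.log(`dead internal link: ${target}`);
}
```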
**Outdated API docs:** Compare documented signatures with actual code, update parameters/return types to match reality, add version notes ("Changed in v2.0.0"), note breaking changes in CHANGELOG
**Stale content:** Remove sections about deleted features, update setup instructions for current tools, note what replaced old features, update architecture diagrams, test and fix code examples
**Incorrect paths:** Find current file locations, update import examples, fix command line examples, correct file tree diagrams
**Missing docs:** Document new public APIs, add configuration option descriptions, note breaking changes in CHANGELOG, write migration guides for major changes
**Critical**: Test all code examples actually work. Verify links resolve correctly. One category at a time with test validation between each.
**Gate**: Tests must pass before moving to next category.
### Phase 4: Report Results
Summarize: dead links fixed, API docs synchronized, stale content removed/updated, paths corrected, missing docs added, documentation accuracy improvement.
Delete the backup branch after successful completion:
```bash
git branch -D "$BACKUP_BRANCH"
```
---
## Documentation Priority Tiers
**Essential** (always accurate): README.md, API documentation, CHANGELOG.md, migration guides, setup/installation
**Important** (reasonably current): Architecture docs, contributing guide, examples, troubleshooting
**Nice-to-have** (update as needed): Tutorials, design decisions, performance notes, comparisons
---
## Example API Doc Correction
```markdown
<!-- Before (outdated) -->
## authenticate(username, password)
Authenticates a user with username and password.
<!-- After (corrected) -->
## authenticate(credentials)
Authenticates a user with credentials object.
**Parameters:**
- `credentials.email` (string) - User email
- `credentials.password` (string) - User password
**Changed in v2.0.0:** Previously accepted separate username and password parameters.
```
---
## Safety Constraints
**CRITICAL:**
- Verify content is truly outdated before removing - check git history
- Test all code examples actually compile and run
- Keep migration context - when removing features, note what replaced them
- Update CHANGELOG when fixing API documentation errors
- One category at a time with test validation
- Commit granularly - separate commits for dead links, API updates, etc.
- Check markdown renders correctly after edits
**If examples fail**: Don't update docs to match broken code. Fix the code or mark the feature as broken in docs with an issue reference.
---
## After Cleanup
**Review with code-reviewer agent before pushing:**
Use `shavakan-agents:code-reviewer` to verify changes don't introduce issues.
---
## Related Commands
- `/shavakan-commands:cleanup` - Full repository audit
- `/shavakan-commands:cleanup-dead-code` - Remove unused code

221
shavakan/cleanup-duplication.md Normal file

@@ -0,0 +1,221 @@
---
description: Find and refactor duplicated code - similar blocks, copy-paste functions, repeated strings
---
# Remove Code Duplication
Find DRY (Don't Repeat Yourself) violations and refactor duplicated code into reusable abstractions.
## Prerequisites
**Safety requirements:**
1. Git repository with clean working tree
2. All tests passing before refactoring
3. Backup branch created automatically
4. Test validation after each refactoring
**Run prerequisite check:**
```bash
PLUGIN_ROOT="$HOME/.claude/plugins/marketplaces/shavakan"
if [[ ! "$PLUGIN_ROOT" =~ ^"$HOME"/.* ]]; then
echo "ERROR: Invalid plugin root path"
exit 1
fi
PREREQ_SCRIPT="$PLUGIN_ROOT/commands/cleanup/scripts/check-prerequisites.sh"
if [[ ! -f "$PREREQ_SCRIPT" ]]; then
echo "ERROR: Prerequisites script not found"
exit 1
fi
PREREQ_OUTPUT=$(mktemp)
if "$PREREQ_SCRIPT" > "$PREREQ_OUTPUT" 2>&1; then
source "$PREREQ_OUTPUT"
rm "$PREREQ_OUTPUT"
else
cat "$PREREQ_OUTPUT"
rm "$PREREQ_OUTPUT"
exit 1
fi
```
This exports: `TEST_CMD`, `BACKUP_BRANCH`, `LOG_FILE`
---
## Additional Instructions
$ARGUMENTS
---
## Objective
Identify and eliminate code duplication - extract common patterns into reusable functions, constants, and abstractions.
### Duplication Types
**Duplicated blocks** - Similar/identical code in multiple files (>80% similarity, 5+ lines), copy-pasted logic with minor variations
**Duplicated functions** - Same logic with different names, utility functions reimplemented across files, helper methods duplicated in multiple classes
**Magic strings/numbers** - Repeated literals (strings like "admin", "error" used 3+ times), hard-coded numbers (excluding 0, 1, -1), configuration values scattered throughout code
**Similar patterns** - Nearly identical implementations with slight variations, common algorithms reimplemented differently, parallel code structures that could be unified
---
## Execution
### Phase 1: Detect Duplication
Scan codebase for duplicated code. Look for similar blocks, repeated functions, magic values, and patterns that could be unified.
For each duplication found, capture location, similarity percentage, and proposed extraction. Present findings grouped by type with counts and estimated impact.
**Gate**: User must review full audit before proceeding.
### Phase 2: Prioritize Refactorings
Present audit findings with risk assessment:
```
Refactor code duplication?
□ Safest - Extract magic strings/numbers to constants
□ Low risk - Consolidate identical functions + extract utilities
□ Medium risk - Extract duplicated blocks + parameterize variations
□ High risk - Unify similar patterns into abstractions
□ Custom - Select specific duplications
□ Cancel
```
**Gate**: Get user approval on which duplications to refactor.
### Phase 3: Refactor Systematically
For each approved duplication:
**Extract duplicated blocks:** Identify common functionality, create shared function with appropriate parameters, replace all occurrences, preserve exact behavior
**Consolidate functions:** Compare implementations to find canonical version, merge variations into single configurable function, update all call sites, remove duplicate definitions
**Extract magic values:** Create constants with descriptive names (ROLE_ADMIN, DEFAULT_TIMEOUT), replace all occurrences with constant reference, group related constants together
**Unify patterns:** Design common abstraction that handles all cases, implement generic version with configuration, migrate each usage, remove old implementations
**Critical**: One refactoring at a time. Test after each. Commit on success, rollback on failure.
**Gate**: Tests must pass before moving to next refactoring.
### Phase 4: Report Results
Summarize: duplicated blocks extracted, functions consolidated, magic values extracted to constants, patterns unified, code reduction percentage, maintainability improvement.
Delete the backup branch after successful completion:
```bash
git branch -D "$BACKUP_BRANCH"
```
---
## When NOT to Refactor
**Don't extract if:**
- Only 2 occurrences (wait for third) - "Rule of Three"
- Code happens to look similar but serves different purposes
- Abstraction would be more complex than duplication
- Duplication is in different contexts (tests vs. production code)
- Changes are likely to diverge in the future
**Bad extraction example:**
```typescript
// Don't do this - over-abstraction for trivial code
function addOneToNumber(n: number): number {
  return n + 1;
}
// Just use n + 1 directly
```
---
## Refactoring Patterns
### Extract Function
```typescript
// Before: Duplicated in 3 files
if (user.role === 'admin' || user.role === 'owner') {
  return true;
}
// After: Extract to shared utility
// utils/auth.ts
export function canManageResources(user: User): boolean {
  return user.role === 'admin' || user.role === 'owner';
}
// Usage
import { canManageResources } from './utils/auth';
if (canManageResources(user)) { }
```
### Extract Constants
```typescript
// Before: Magic strings repeated
if (user.role === 'admin') { }
// After: Extract to constants
// constants/roles.ts
export const ROLE_ADMIN = 'admin';
// Usage
import { ROLE_ADMIN } from './constants/roles';
if (user.role === ROLE_ADMIN) { }
```
### Parameterize Variations
```typescript
// Before: Similar functions with variations
function fetchUsers() { return api.get('/users'); }
function fetchPosts() { return api.get('/posts'); }
// After: Single parameterized function
function fetchResource<T>(resource: string): Promise<T[]> {
  return api.get(`/${resource}`);
}
// Usage
const users = await fetchResource<User>('users');
```
---
## Safety Constraints
**CRITICAL:**
- One refactoring at a time - test between each
- Preserve behavior exactly - all tests must pass
- Follow "Rule of Three" - extract when you have 3+ duplicates, not 2
- Name clearly - extracted functions should have descriptive names
- Don't over-abstract - complex abstraction worse than simple duplication
- Test thoroughly - edge cases may differ between duplicates
- Commit granularly - each extraction gets its own commit
**If tests fail**: Rollback immediately, identify which specific extraction broke behavior, understand the difference between duplicates.
---
## After Cleanup
**Review with code-reviewer agent before pushing:**
Use `shavakan-agents:code-reviewer` to verify refactorings don't introduce issues.
---
## Related Commands
- `/shavakan-commands:cleanup` - Full repository audit
- `/shavakan-commands:cleanup-architecture` - Refactor god objects
- `/shavakan-commands:cleanup-dead-code` - Remove unused code

172
shavakan/cleanup.md Normal file

@@ -0,0 +1,172 @@
---
description: Comprehensive repository audit and cleanup (dead code, comments, docs, architecture, deps)
---
# Repository Cleanup Audit
Scan the codebase for cleanup opportunities across all categories. Generate comprehensive report, then execute improvements based on user priorities.
## Prerequisites
**Safety requirements:**
1. Git repository with clean working tree
2. All tests passing before cleanup
3. Backup branch created automatically
4. Test validation after each cleanup operation
**Run prerequisite check:**
```bash
PLUGIN_ROOT="$HOME/.claude/plugins/marketplaces/shavakan"
if [[ ! "$PLUGIN_ROOT" =~ ^"$HOME"/.* ]]; then
echo "ERROR: Invalid plugin root path"
exit 1
fi
PREREQ_SCRIPT="$PLUGIN_ROOT/commands/cleanup/scripts/check-prerequisites.sh"
if [[ ! -f "$PREREQ_SCRIPT" ]]; then
echo "ERROR: Prerequisites script not found"
exit 1
fi
PREREQ_OUTPUT=$(mktemp)
if "$PREREQ_SCRIPT" > "$PREREQ_OUTPUT" 2>&1; then
source "$PREREQ_OUTPUT"
rm "$PREREQ_OUTPUT"
else
cat "$PREREQ_OUTPUT"
rm "$PREREQ_OUTPUT"
exit 1
fi
```
This exports: `TEST_CMD`, `BACKUP_BRANCH`, `LOG_FILE`
---
## Additional Instructions
$ARGUMENTS
---
## Objective
Perform comprehensive audit of codebase for cleanup opportunities across all categories, then execute selected improvements systematically.
### Cleanup Categories
1. **Dead Code** - Unused imports, functions, variables, files
2. **Comments** - Obvious comments, commented code, outdated TODOs
3. **Documentation** - Broken links, outdated docs, missing API docs
4. **Architecture** - God objects, circular deps, poor separation
5. **Dependencies** - Unused packages, vulnerabilities, outdated deps
6. **Duplication** - Repeated code, magic values, similar patterns
---
## Execution
### Phase 1: Comprehensive Scan
Scan entire codebase for cleanup opportunities across all categories. For each category, identify issues with location, severity, and estimated impact.
Present audit findings grouped by category with counts. Include priority assessment:
- **Critical**: Security vulnerabilities, god objects blocking development, circular dependencies causing bugs
- **High**: Dead code cluttering codebase, broken documentation, architecture issues
- **Medium**: Comment noise, code duplication, outdated dependencies (non-security)
**Gate**: User must review full audit before proceeding.
### Phase 2: Prioritize with User
Present cleanup options with impact assessment:
```
Execute cleanup operations?
□ Critical only - Security fixes + blocking issues
□ Safe cleanups - Dead code + unused imports + obvious comments
□ Structural - Architecture + dead code + duplication
□ Full cleanup - All categories (comprehensive)
□ Custom - Select specific categories
□ Cancel
```
**Gate**: Get user approval on which categories to clean.
### Phase 3: Execute Cleanups
**IMPORTANT**: This command orchestrates by invoking specialized subcommands. Do not implement cleanup logic directly - use the subcommands:
- Security/unused/outdated deps → invoke `/shavakan-commands:cleanup-deps`
- Architecture issues → invoke `/shavakan-commands:cleanup-architecture`
- Dead code → invoke `/shavakan-commands:cleanup-dead-code`
- Comments → invoke `/shavakan-commands:cleanup-comments`
- Duplication → invoke `/shavakan-commands:cleanup-duplication`
- Documentation → invoke `/shavakan-commands:cleanup-docs`
**Execution order**: Security first, then structural improvements, then code quality, then documentation/maintenance.
**Between each subcommand**: Verify tests pass. If passing, proceed to next. If failing, rollback and report issue. User can cancel at any point.
**Gate**: Tests must pass after each subcommand before proceeding.
### Phase 4: Report Results
Summarize metrics (lines reduced, files removed, issues resolved), what was cleaned (by category with counts), impact analysis (code coverage maintained, build size reduced, maintainability improved).
Delete the backup branch after successful completion:
```bash
git branch -D "$BACKUP_BRANCH"
```
Note: All commits are granular and revertible.
---
## Auto-Fix Mode
For safe automated fixes, execute:
- Remove unused imports (linter auto-fix)
- Remove obvious comments (unambiguous patterns only)
- Fix dead internal links (update to new file locations)
- Extract magic strings to constants (3+ occurrences)
Each fix tested independently. Rollback on any test failure. Commit each fix type separately.
---
## Safety Constraints
**CRITICAL:**
- One category at a time - complete and test each before next
- Security first - fix critical vulnerabilities before cosmetic changes
- Test continuously - run tests after each category
- Commit granularly - each fix gets its own commit
- Preserve rollback - keep backup branch until changes verified
- User confirmation - ask before executing each major change
- Stop on failure - don't continue if tests fail
**Error handling**: If cleanup fails, report which categories completed successfully (with commits), which category failed (with error), and offer options: skip failed category and continue, stop to investigate, or rollback all changes.
---
## After Cleanup
**Review with code-reviewer agent before pushing:**
Use `shavakan-agents:code-reviewer` to verify all changes.
**Final verification**: Run full test suite, check build succeeds, review commits, update CHANGELOG if significant changes.
---
## Related Commands
- `/shavakan-commands:cleanup-dead-code` - Remove unused code
- `/shavakan-commands:cleanup-deps` - Manage dependencies
- `/shavakan-commands:cleanup-comments` - Clean up comments
- `/shavakan-commands:cleanup-docs` - Fix documentation
- `/shavakan-commands:cleanup-architecture` - Refactor structure
- `/shavakan-commands:cleanup-duplication` - Remove duplication

197
shavakan/docs-feature-plan.md Normal file

@@ -0,0 +1,197 @@
---
description: Create comprehensive strategic plan + context files (plan.md, context.md, tasks.md)
---
# Feature Plan - Strategic Planning and Documentation
Create comprehensive implementation plans that prevent losing context during complex features.
## Additional Instructions
$ARGUMENTS
---
## Objective
Create comprehensive implementation plans for complex features that prevent context loss during development. Research codebase, design strategic plan with phases and risks, generate planning files in features/ directory.
**Output Structure:**
```
features/[task-name]/
├── plan.md # Comprehensive accepted plan
├── context.md # Key files, architectural decisions, integration points
└── tasks.md # Markdown checklist of work items
```
---
## Execution
### Phase 1: Research and Plan
Analyze user requirements and research codebase for architecture, existing patterns, and integration points. Create comprehensive strategic plan including:
- Executive summary (what and why)
- Implementation phases with specific tasks
- Risks and mitigation strategies
- Success metrics and timeline estimates
Reference specific file paths, function names, and existing patterns to follow.
**Gate**: Present complete plan to user for review before creating files.
### Phase 2: Confirm Approach
Present the strategic plan with options:
```
Create feature plan files?
□ Approve plan - Create files in features/ directory
□ Revise approach - Adjust strategy before creating files
□ Change scope - Modify phases or tasks
□ Cancel
```
After approval, ask for task name (kebab-case format).
**Gate**: User must approve plan and provide task name.
### Phase 3: Generate Planning Files
Create `features/[task-name]/` directory with three files:
**plan.md** - Comprehensive implementation plan (executive summary, phases, tasks, risks, success metrics, timeline)
**context.md** - Key files, architectural decisions, integration points, patterns to follow, next steps for resuming work
**tasks.md** - Markdown checklist organized by phase, including testing and documentation tasks
**Gate**: All files created successfully.
### Phase 4: Report Results
Summarize what was created: directory location, files generated, next steps for implementation. Remind user to keep context.md updated during implementation and mark tasks complete in tasks.md.
---
## File Format Examples
### plan.md Format
```markdown
# [Feature Name] Implementation Plan
## Executive Summary
[2-3 sentences describing what and why]
## Context
- Current state
- Why this change is needed
- What problems it solves
## Implementation Phases
### Phase 1: [Name]
**Goal:** [What this phase achieves]
**Tasks:**
1. Task description with file references
2. Another task
**Risks:**
- Risk and mitigation
### Phase 2: [Name]
...
## Success Metrics
- How we know it's working
- What to test
## Timeline
Estimated: X days/weeks
```
### context.md Format
```markdown
# [Feature Name] - Context & Key Decisions
**Last Updated:** [timestamp]
## Key Files
- `path/to/file.ts` - Purpose and why it matters
- `path/to/another.ts` - Purpose
## Architectural Decisions
1. **Decision:** What we decided
**Rationale:** Why we decided it
**Impact:** What it affects
2. **Decision:** ...
## Integration Points
- How this feature connects with X
- Dependencies on Y service
- Data flow: A → B → C
## Important Patterns
- Pattern we're using and why
- Existing code to follow as example
## Next Steps
[When resuming work, start here]
```
### tasks.md Format
```markdown
# [Feature Name] - Task Checklist
**Last Updated:** [timestamp]
## Phase 1: [Name]
- [ ] Task 1 description
- [ ] Task 2 description
- [ ] Task 3 description
## Phase 2: [Name]
- [ ] Task 1 description
- [ ] Task 2 description
## Testing
- [ ] Unit tests for X
- [ ] Integration tests for Y
- [ ] Manual testing of Z
## Documentation
- [ ] Update API docs
- [ ] Update README
```
---
## Best Practices
- Be specific with file paths and function names
- Break large tasks into 2-4 hour chunks
- Flag risks early (breaking changes, performance impacts, security)
- Reference existing code patterns to follow
- Keep context.md updated with decisions made during implementation
- Mark tasks complete immediately in tasks.md
## When to Use
- Large features (more than a few hours of work)
- Complex refactors touching multiple files
- Features spanning multiple repos or services
- Anything where losing context would be costly
## Example
User request: "I want to add real-time notifications using WebSockets"
Workflow:
1. Research WebSocket patterns in codebase
2. Create comprehensive plan covering server setup, client integration, state management, fallbacks
3. Show plan to user for approval
4. After approval, create `features/websocket-notifications/` with all three files
5. Confirm creation and next steps

156
shavakan/docs-readme.md Normal file

@@ -0,0 +1,156 @@
---
description: Generate/update condensed developer README for project usage and contribution
---
# README Generator
Create/update README.md in project root. **Constraints: < 200 lines, scannable in 30 seconds, technical tone (no marketing).**
## Additional Instructions
$ARGUMENTS
---
## Analysis Phase
Check files in order, record findings:
1. `flake.nix` → Nix flake present (Y/N)
2. `.envrc` + `.envrc.example` → direnv present (Y/N)
3. `Makefile` → Make targets available (Y/N)
4. `package.json` / `pyproject.toml` / `go.mod` / `Cargo.toml` → Language ecosystem
5. Existing `README.md` → Preserve badges, screenshots, custom sections
6. Entry points → `main.py`, `index.ts`, `cmd/main.go`, etc
## Build System Priority
**Select primary build interface based on presence:**
| Priority | System | When | Commands |
|----------|--------|------|----------|
| 1 | Nix | `flake.nix` exists | `nix develop`, `nix build`, `nix run` |
| 2 | Make | `Makefile` exists | `make install`, `make test`, `make build` |
| 3 | Package manager | `package.json` / `Cargo.toml` | `npm install`, `cargo build` |
| 4 | Direct | None above | `python -m`, `go run`, etc |
**If multiple exist**: Use highest priority as primary, mention others as alternatives.
## Environment Management
**If `.envrc` exists:**
- Prerequisites: Add "direnv" requirement
- Install step: Add `cp .envrc.example .envrc && direnv allow`
- Configuration section: Reference `.envrc.example` for variables
**If no direnv**:
- Use `.env` files or manual `export` commands
- Configuration section: List env vars inline
## README Composition
**Required sections (generate in order):**
### 1. Title + Summary
```
# [Project Name]
> [Single sentence: does X for Y users]
```
### 2. Quick Start
**Prerequisites**: List tools with versions. Omit language version if Nix manages it. Include Nix/direnv if detected.
**Install**: Commands to get dependencies. Prioritize by build system table above.
**Run**: Command to start/execute. Include port/URL for web apps.
**Test**: Command to run tests. Use `nix flake check` if available.
### 3. Project Structure
Tree diagram of key directories with inline comments:
```
src/
  core/     # Business logic
  api/      # Endpoints
tests/      # Test suite
```
### 4. Development
**Setup**: Additional dev steps (direnv, pre-commit hooks, etc)
**Build/Lint/Format**: Commands using primary build system
**Common tasks**: 2-3 bullet points for frequent operations
### 5. Optional Sections (include only if applicable)
**Architecture**: 2-3 sentences on design decisions and tradeoffs. Include if non-obvious.
**Configuration**: Env vars with descriptions. Required vs optional. Include if app needs config.
**Deployment**: Deploy command or link. Include for deployable apps.
**Contributing**: Link to CONTRIBUTING.md or inline if < 5 rules.
**License**: License type. Include if LICENSE file exists.
## Output Format Rules
**Command syntax:**
- Multi-word commands: Always use code blocks, never inline
- Good: "Run:\n```bash\nnpm run dev\n```"
- Bad: "Run `npm run dev`"
**Section length**: Each section < 15 lines. Link to detailed docs if longer.
**Style**: Technical, direct. Explain WHY for architecture, not WHAT code does. No "easy", "simple", "just".
**Preservation**: Keep existing badges, screenshots, unique sections from old README.
## Project Type Adaptations
| Type | Adaptations |
|------|-------------|
| CLI tool | Add "Usage" with examples, omit "Running" |
| Library | Add "Installation" + "Usage" with API examples, omit "Running" |
| Web app | Include PORT/URL in "Run", add "Deployment" |
| Monorepo | Show workspace structure, document per-package commands |
| Nix multi-output | Add `nix flake show` output, document available apps/packages |
## Execution Workflow
1. Run analysis phase → Collect build system, env management, project type
2. Generate README content using composition rules
3. Show preview with summary: "[Project type] with [build system]. README uses [commands]. ~[N] lines."
4. **If approved** → Write to `README.md`
5. **If rejected** → Ask what to change, regenerate from step 2
## Edge Cases
**No build system found**: Use direct language commands (python, go run, etc) and warn in preview.
**Nix without devShell**: Use `nix-shell` or `nix develop` with warnings about flake completeness.
**Multiple entry points**: List all in "Run" section with descriptions.
**Secrets in .envrc**: Never read actual `.envrc`, only `.envrc.example`. Remind about `.gitignore`.
## Example Execution
**Input**: "Generate README"
**Analysis**:
- flake.nix: Yes (Nix development shell + app)
- .envrc: Yes (with .envrc.example)
- Makefile: Yes (install/test/build targets)
- package.json: Yes (Next.js)
- Entry: pages/index.tsx
**Build priority**: Nix (priority 1), mention Make as fallback
**Preview**: "Next.js web app with Nix flake and direnv. README uses `nix run` for primary interface, notes `make` commands as alternative. Includes .envrc.example setup. ~90 lines. Approve?"
**Output**: README.md with Nix-first commands, direnv setup, project structure showing pages/ and components/, development section with `nix build` and `nix flake check`.

155
shavakan/docs-save-context.md Normal file

@@ -0,0 +1,155 @@
---
description: Save conversation context before compaction (context.md, tasks.md)
---
# Save Context - Pre-Compaction State Preservation
Preserve current conversation state and decisions before approaching token limit or natural break point.
## Additional Instructions
$ARGUMENTS
---
## Objective
Preserve conversation state before compaction or natural break points. Extract decisions made, document current progress, update context files for continuity.
**Use when:**
- Approaching token limit (compaction warning)
- Natural break point in complex work
- Before switching contexts
- After important architectural decisions
---
## Execution
### Phase 1: Analyze Current State
Extract from conversation:
**Decisions made** - What was decided and why, trade-offs considered, alternatives rejected, rationale for approach
**Current progress** - What's completed, partially done, blocked (and why)
**Next steps** - Immediate next actions, dependencies to resolve, open questions
Check if feature context already exists in `features/[task-name]/` directory.
**Gate**: Present summary of extracted state to user.
### Phase 2: Confirm Context Strategy
Present options based on whether context files exist:
```
Save context state?
□ Update existing - Append to features/[task-name]/ files
□ Create new - Start fresh context directory
□ Review first - Show extracted state before saving
□ Cancel
```
If creating new, ask for task name (kebab-case format).
**Gate**: User must confirm approach and provide task name if needed.
### Phase 3: Update or Create Files
**If updating existing:**
- Append new decisions to context.md with timestamp
- Update "Next Steps" section
- Mark completed tasks in tasks.md
- Add new tasks if discovered
**If creating new:**
- Create `features/[task-name]/` directory
- Generate context.md with current state and decisions
- Generate tasks.md with remaining work checklist
Note: plan.md not needed for context saves (only for feature-plan command).
**Gate**: All files updated or created successfully.
### Phase 4: Report Results
Summarize what was saved: location of files, key decisions captured, tasks marked complete, and next steps documented for resuming work.
---
## File Formats
### context.md Format
```markdown
# [Feature Name] - Context
**Last Updated:** [timestamp]
## Current State
[Where we are in implementation - 2-3 sentences]
## Recent Decisions
### [Decision Title]
**What:** What was decided
**Why:** Rationale and trade-offs
**When:** [timestamp]
**Impact:** What this affects
## Key Files
- `path/to/file.ts` - Purpose and current state
- `path/to/another.ts` - What was changed
## Next Steps
[When resuming work, start here - specific actionable items]
## Open Questions
- Question that needs resolution
- Blocker or dependency
```
### tasks.md Format
```markdown
# [Feature Name] - Tasks
**Last Updated:** [timestamp]
## Completed
- [x] Task that was finished
- [x] Another completed task
## In Progress
- [ ] Task currently being worked on (50% done)
## Blocked
- [ ] Task blocked by X dependency
## Todo
- [ ] Next task to do
- [ ] Subsequent task
```
---
## Best Practices
- **Be specific**: Include file paths, function names, line numbers
- **Capture why**: Decisions without rationale lose value
- **Update timestamps**: Always update "Last Updated" field
- **Mark progress**: Move tasks from todo → in progress → completed
- **Note blockers**: Document what's preventing progress and why
---
## Related Commands
- `/shavakan-commands:docs-feature-plan` - Create comprehensive feature plan from scratch
- `/shavakan-commands:docs-update` - Update existing context files

114
shavakan/docs-update.md Normal file

@@ -0,0 +1,114 @@
---
description: Update feature context and tasks before compaction
---
# Update Feature Context
Update existing feature context files (context.md, tasks.md) to preserve current state before compaction or break point.
## Additional Instructions
$ARGUMENTS
---
## Objective
Update existing feature context files to preserve current state. Add recent progress and decisions to context.md, mark completed tasks in tasks.md, document next steps for continuation.
**Use when:**
- Approaching token limit (compaction warning)
- Before long pause (end of day/week)
- After significant architectural decisions
- When switching to different feature
---
## Execution
### Phase 1: Locate and Backup
Find existing feature directory in `features/[task-name]/`. If multiple exist or none found, ask user which feature to update.
Create timestamped backups before modifying:
- `context.md.backup-[timestamp]`
- `tasks.md.backup-[timestamp]`
**Gate**: Feature directory located and backups created successfully.
### Phase 2: Review Updates
Present what will be updated:
```
Update feature context files?
□ Update both - context.md + tasks.md
□ Context only - Add progress and decisions
□ Tasks only - Mark completed tasks
□ Review first - Show current state before updating
□ Cancel
```
**Gate**: User must confirm which files to update.
### Phase 3: Apply Updates
**For context.md:**
- Update timestamp to current date/time
- Add Recent Progress section (Completed, In Progress, Next Steps, Issues/Blockers)
- Add new files to Key Files section
- Add architectural decisions with rationale
- Update integration points if dependencies changed
**For tasks.md:**
- Update timestamp to current date/time
- Mark completed tasks (change `- [ ]` to `- [x]`)
- Add new tasks discovered during work
- Reorder if priorities changed
Include specific file references and actionable next steps.
**Gate**: All updates applied successfully with no syntax errors.
### Phase 4: Validate and Report
Verify updates meet quality standards:
- Timestamps current
- Recent Progress section complete with Next Steps
- At least one task marked complete or new task added
- No unclosed code blocks or syntax errors
If validation passes, remove backup files. If validation fails, restore from backups.
Report what was updated: files modified, completed tasks count, and next steps documented.
---
## Quality Standards
### context.md Requirements
- Timestamp current
- Recent Progress with all subsections (Completed, In Progress, Next Steps, Issues/Blockers)
- Specific file references (not vague like "fix the bug")
- Clear next steps (numbered, actionable)
### tasks.md Requirements
- Timestamp current
- Completed tasks marked with [x]
- At least one change made
- Tasks have specific descriptions with file paths
---
## Rollback
If any step fails after backups created, restore original files from backups and report error.
---
## Related Commands
- `/shavakan-commands:docs-save-context` - Save context (works without existing files)
- `/shavakan-commands:docs-feature-plan` - Create new feature plan from scratch