Initial commit
commands/add-to-todos.md (Normal file)
@@ -0,0 +1,56 @@
---
description: Add todo item to TO-DOS.md with context from conversation
argument-hint: <todo-description> (optional - infers from conversation if omitted)
allowed-tools:
  - Read
  - Edit
  - Write
---

# Add Todo Item

## Context

- Current timestamp: !`date "+%Y-%m-%d %H:%M"`

## Instructions

1. Read TO-DOS.md in the working directory (create with Write tool if it doesn't exist)

2. Check for duplicates:
   - Extract key concept/action from the new todo
   - Search existing todos for similar titles or overlapping scope
   - If found, ask user: "A similar todo already exists: [title]. Would you like to:\n\n1. Skip adding (keep existing)\n2. Replace existing with new version\n3. Add anyway as separate item\n\nReply with the number of your choice."
   - Wait for user response before proceeding

3. Extract todo content:
   - **With $ARGUMENTS**: Use as the focus/title for the todo and context heading
   - **Without $ARGUMENTS**: Analyze recent conversation to extract:
     - Specific problem or task discussed
     - Relevant file paths that need attention
     - Technical details (line numbers, error messages, conflicting specifications)
     - Root cause if identified

4. Append new section to bottom of file:
   - **Heading**: `## Brief Context Title - YYYY-MM-DD HH:MM` (3-8 word title, current timestamp)
   - **Todo format**: `- **[Action verb] [Component]** - [Brief description]. **Problem:** [What's wrong/why needed]. **Files:** [Comma-separated paths with line numbers]. **Solution:** [Approach hints or constraints, if applicable].`
   - **Required fields**: Problem and Files (with line numbers like `path/to/file.ts:123-145`)
   - **Optional field**: Solution
   - Make each section self-contained for future Claude to understand weeks later
   - Use simple list items (not checkboxes) - todos are removed when work begins

5. Confirm and offer to continue with original work:
   - Identify what the user was working on before `/add-to-todos` was called
   - Confirm the todo was saved: "✓ Saved to todos."
   - Ask if they want to continue with the original work: "Would you like to continue with [original task]?"
   - Wait for user response

## Format Example

```markdown
## Add Todo Command Improvements - 2025-11-15 14:23

- **Add structured format to add-to-todos** - Standardize todo entries with Problem/Files/Solution pattern. **Problem:** Current todos lack consistent structure, making it hard for Claude to have enough context when revisiting tasks later. **Files:** `commands/add-to-todos.md:22-29`. **Solution:** Use inline bold labels with required Problem and Files fields, optional Solution field.

- **Create check-todos command** - Build companion command to list and select todos. **Problem:** Need workflow to review outstanding todos and load context for selected item. **Files:** `commands/check-todos.md` (new), `TO-DOS.md` (reads from). **Solution:** Parse markdown list, display numbered list, accept selection to load full context and remove item.
```

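For reference only, here is a minimal sketch of how an entry in the format above could be appended programmatically. This is not part of the command (the command itself uses the Edit/Write tools); the function name and parameters are hypothetical, and only the file name `TO-DOS.md`, the heading pattern, and the Problem/Files/Solution labels come from this spec.

```python
from datetime import datetime
from pathlib import Path

def append_todo(context_title, action, description, problem, files,
                solution=None, path="TO-DOS.md"):
    """Append a '## Title - YYYY-MM-DD HH:MM' section with one todo line (illustrative)."""
    timestamp = datetime.now().strftime("%Y-%m-%d %H:%M")
    entry = f"- **{action}** - {description}. **Problem:** {problem}. **Files:** {files}."
    if solution:
        entry += f" **Solution:** {solution}."
    section = f"\n## {context_title} - {timestamp}\n\n{entry}\n"
    todo_file = Path(path)
    existing = todo_file.read_text(encoding="utf-8") if todo_file.exists() else ""
    todo_file.write_text(existing + section, encoding="utf-8")
```
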
commands/audit-skill.md (Normal file)
@@ -0,0 +1,24 @@
---
description: Audit skill for YAML compliance, pure XML structure, progressive disclosure, and best practices
argument-hint: <skill-path>
---

<objective>
Invoke the skill-auditor subagent to audit the skill at $ARGUMENTS for compliance with Agent Skills best practices.

This ensures skills follow proper structure (pure XML, required tags, progressive disclosure) and effectiveness patterns.
</objective>

<process>
1. Invoke skill-auditor subagent
2. Pass skill path: $ARGUMENTS
3. Subagent will read updated best practices (including pure XML structure requirements)
4. Subagent evaluates XML structure quality, required/conditional tags, anti-patterns
5. Review detailed findings with file:line locations, compliance scores, and recommendations
</process>

<success_criteria>
- Subagent invoked successfully
- Arguments passed correctly to subagent
- Audit includes XML structure evaluation
</success_criteria>

commands/audit-slash-command.md (Normal file)
@@ -0,0 +1,22 @@
---
description: Audit slash command file for YAML, arguments, dynamic context, tool restrictions, and content quality
argument-hint: <command-path>
---

<objective>
Invoke the slash-command-auditor subagent to audit the slash command at $ARGUMENTS for compliance with best practices.

This ensures commands follow security, clarity, and effectiveness standards.
</objective>

<process>
1. Invoke slash-command-auditor subagent
2. Pass command path: $ARGUMENTS
3. Subagent will read best practices and evaluate the command
4. Review detailed findings with file:line locations, compliance scores, and recommendations
</process>

<success_criteria>
- Subagent invoked successfully
- Arguments passed correctly to subagent
</success_criteria>

commands/audit-subagent.md (Normal file)
@@ -0,0 +1,22 @@
---
description: Audit subagent configuration for role definition, prompt quality, tool selection, XML structure compliance, and effectiveness
argument-hint: <subagent-path>
---

<objective>
Invoke the subagent-auditor subagent to audit the subagent at $ARGUMENTS for compliance with best practices, including pure XML structure standards.

This ensures subagents follow proper structure, configuration, pure XML formatting, and implementation patterns.
</objective>

<process>
1. Invoke subagent-auditor subagent
2. Pass subagent path: $ARGUMENTS
3. Subagent will read best practices and evaluate the configuration
4. Review detailed findings with file:line locations, compliance scores, and recommendations
</process>

<success_criteria>
- Subagent invoked successfully
- Arguments passed correctly to subagent
</success_criteria>

commands/check-todos.md (Normal file)
@@ -0,0 +1,56 @@
---
description: List outstanding todos and select one to work on
allowed-tools:
  - Read
  - Edit
  - Glob
---

# Check Todos

## Instructions

1. Read TO-DOS.md in the working directory (if it doesn't exist, say "No outstanding todos" and exit)

2. Parse and display todos (a reference parsing sketch follows these instructions):
   - Extract all list items starting with `- **` (active todos)
   - If none exist, say "No outstanding todos" and exit
   - Display compact numbered list showing:
     - Number (for selection)
     - Bold title only (part between `**` markers)
     - Date from h2 heading above it
   - Prompt: "Reply with the number of the todo you'd like to work on."
   - Wait for user to reply with a number

3. Load full context for selected todo:
   - Display complete line with all fields (Problem, Files, Solution)
   - Display h2 heading (topic + date) for additional context
   - Read and briefly summarize relevant files mentioned

4. Check for established workflows:
   - Read CLAUDE.md (if exists) to understand project-specific workflows and rules
   - Look for `.claude/skills/` directory
   - Match file paths in todo to domain patterns (`plugins/` → plugin workflow, `mcp-servers/` → MCP workflow)
   - Check CLAUDE.md for explicit workflow requirements for this type of work

5. Present action options to user:
   - **If matching skill/workflow found**: "This looks like [domain] work. Would you like to:\n\n1. Invoke [skill-name] skill and start\n2. Work on it directly\n3. Brainstorm approach first\n4. Put it back and browse other todos\n\nReply with the number of your choice."
   - **If no workflow match**: "Would you like to:\n\n1. Start working on it\n2. Brainstorm approach first\n3. Put it back and browse other todos\n\nReply with the number of your choice."
   - Wait for user response

6. Handle user choice:
   - **Option "Invoke skill" or "Start working"**: Remove todo from TO-DOS.md (and h2 heading if section becomes empty), then begin work (invoke skill if applicable, or proceed directly)
   - **Option "Brainstorm approach"**: Keep todo in file, invoke `/brainstorm` with the todo description as argument
   - **Option "Put it back"**: Keep todo in file, return to step 2 to display the full list again

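The sketch below illustrates the parsing described in step 2, assuming the TO-DOS.md conventions from `/add-to-todos` (h2 headings ending in a `YYYY-MM-DD HH:MM` timestamp, active todos on lines starting with `- **`). It is illustrative only; the command itself relies on Claude reading the file, and the function name is an assumption, not part of the spec.

```python
import re
from pathlib import Path

def list_todos(path="TO-DOS.md"):
    """Return (title, heading_date) pairs for every active todo line."""
    text = Path(path).read_text(encoding="utf-8")
    current_date = None
    todos = []
    for line in text.splitlines():
        heading = re.match(r"^## .+ - (\d{4}-\d{2}-\d{2} \d{2}:\d{2})\s*$", line)
        if heading:
            current_date = heading.group(1)          # date taken from the h2 heading
            continue
        item = re.match(r"^- \*\*(.+?)\*\*", line)   # active todos start with "- **"
        if item:
            todos.append((item.group(1), current_date))
    return todos

if __name__ == "__main__":
    for number, (title, date) in enumerate(list_todos(), start=1):
        print(f"{number}. {title} ({date})")
```

Run against the Format Example in `/add-to-todos`, this prints lines in the same shape as the Display Format section below.
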
## Display Format

```
Outstanding Todos:

1. Add structured format to add-to-todos (2025-11-15 14:23)
2. Create check-todos command (2025-11-15 14:23)
3. Fix cookie-extractor MCP workflow (2025-11-14 09:15)

Reply with the number of the todo you'd like to work on.
```

commands/consider/10-10-10.md (Normal file)
@@ -0,0 +1,48 @@
---
description: Evaluate decisions across three time horizons
argument-hint: [decision or leave blank for current context]
---

<objective>
Apply the 10/10/10 rule to $ARGUMENTS (or the current discussion if no arguments provided).

Ask: "How will I feel about this decision in 10 minutes, 10 months, and 10 years?"
</objective>

<process>
1. State the decision clearly with options
2. For each option, evaluate emotional and practical impact at:
   - 10 minutes (immediate reaction)
   - 10 months (medium-term consequences)
   - 10 years (long-term life impact)
3. Identify where short-term and long-term conflict
4. Make recommendation based on time-weighted analysis
</process>

<output_format>
**Decision:** [what you're choosing between]

**Option A:**
- 10 minutes: [immediate feeling/consequence]
- 10 months: [medium-term reality]
- 10 years: [long-term impact on life]

**Option B:**
- 10 minutes: [immediate feeling/consequence]
- 10 months: [medium-term reality]
- 10 years: [long-term impact on life]

**Time Conflicts:**
[Where short-term pain leads to long-term gain, or vice versa]

**Recommendation:**
[Which option, weighted toward longer time horizons]
</output_format>

<success_criteria>
- Distinguishes temporary discomfort from lasting regret
- Reveals when short-term thinking hijacks decisions
- Makes long-term consequences visceral and real
- Helps overcome present bias
- Clarifies what actually matters over time
</success_criteria>

commands/consider/5-whys.md (Normal file)
@@ -0,0 +1,41 @@
---
description: Drill to root cause by asking why repeatedly
argument-hint: [problem or leave blank for current context]
---

<objective>
Apply the 5 Whys technique to $ARGUMENTS (or the current discussion if no arguments provided).

Keep asking "why" until you hit the root cause, not just symptoms.
</objective>

<process>
1. State the problem clearly
2. Ask "Why does this happen?" - Answer 1
3. Ask "Why?" about Answer 1 - Answer 2
4. Ask "Why?" about Answer 2 - Answer 3
5. Continue until you hit a root cause (usually 5 iterations, sometimes fewer)
6. Identify actionable intervention at the root
</process>

<output_format>
**Problem:** [clear statement]

**Why 1:** [surface cause]
**Why 2:** [deeper cause]
**Why 3:** [even deeper]
**Why 4:** [approaching root]
**Why 5:** [root cause]

**Root Cause:** [the actual thing to fix]

**Intervention:** [specific action at the root level]
</output_format>

<success_criteria>
- Moves past symptoms to actual cause
- Each "why" digs genuinely deeper
- Stops when hitting actionable root (not infinite regress)
- Intervention addresses root, not surface
- Prevents same problem from recurring
</success_criteria>

commands/consider/eisenhower-matrix.md (Normal file)
@@ -0,0 +1,45 @@
---
description: Apply Eisenhower matrix (urgent/important) to prioritize tasks or decisions
argument-hint: [tasks or leave blank for current context]
---

<objective>
Apply the Eisenhower matrix to $ARGUMENTS (or the current discussion if no arguments provided).

Categorize items by urgency and importance to clarify what to do now, schedule, delegate, or eliminate.
</objective>

<process>
1. List all tasks, decisions, or items in scope
2. Evaluate each on two axes:
   - Important: Contributes to long-term goals/values
   - Urgent: Requires immediate attention, has deadline pressure
3. Place each item in appropriate quadrant
4. Provide specific action for each quadrant
</process>

<output_format>
**Q1: Do First** (Important + Urgent)
- Item: [specific action, deadline if applicable]

**Q2: Schedule** (Important + Not Urgent)
- Item: [when to do it, why it matters long-term]

**Q3: Delegate** (Not Important + Urgent)
- Item: [who/what can handle it, or how to minimize time spent]

**Q4: Eliminate** (Not Important + Not Urgent)
- Item: [why it's noise, permission to drop it]

**Immediate Focus:**
Single sentence on what to tackle right now.
</output_format>

<success_criteria>
- Every item clearly placed in one quadrant
- Q1 items have specific next actions
- Q2 items have scheduling recommendations
- Q3 items have delegation or minimization strategies
- Q4 items explicitly marked as droppable
- Reduces overwhelm by creating clear action hierarchy
</success_criteria>

commands/consider/first-principles.md (Normal file)
@@ -0,0 +1,42 @@
---
description: Break down to fundamentals and rebuild from base truths
argument-hint: [problem or leave blank for current context]
---

<objective>
Apply first principles thinking to $ARGUMENTS (or the current discussion if no arguments provided).

Strip away assumptions, conventions, and analogies to identify fundamental truths, then rebuild understanding from scratch.
</objective>

<process>
1. State the problem or belief being examined
2. List all current assumptions (even "obvious" ones)
3. Challenge each assumption: "Is this actually true? Why?"
4. Identify base truths that cannot be reduced further
5. Rebuild solution from only these fundamentals
</process>

<output_format>
**Current Assumptions:**
- Assumption 1: [challenged: true/false/partially]
- Assumption 2: [challenged: true/false/partially]

**Fundamental Truths:**
- Truth 1: [why this is irreducible]
- Truth 2: [why this is irreducible]

**Rebuilt Understanding:**
Starting from fundamentals, here's what we can conclude...

**New Possibilities:**
Without legacy assumptions, these options emerge...
</output_format>

<success_criteria>
- Surfaces hidden assumptions
- Distinguishes convention from necessity
- Identifies irreducible base truths
- Opens new solution paths not visible before
- Avoids reasoning by analogy ("X worked for Y so...")
</success_criteria>

commands/consider/inversion.md (Normal file)
@@ -0,0 +1,45 @@
---
description: Solve problems backwards - what would guarantee failure?
argument-hint: [goal or leave blank for current context]
---

<objective>
Apply inversion thinking to $ARGUMENTS (or the current discussion if no arguments provided).

Instead of asking "How do I succeed?", ask "What would guarantee failure?" then avoid those things.
</objective>

<process>
1. State the goal or desired outcome
2. Invert: "What would guarantee I fail at this?"
3. List all failure modes (be thorough and honest)
4. For each failure mode, identify the avoidance strategy
5. Build success plan by systematically avoiding failure
</process>

<output_format>
**Goal:** [what success looks like]

**Guaranteed Failure Modes:**
1. [Way to fail]: Avoid by [specific action]
2. [Way to fail]: Avoid by [specific action]
3. [Way to fail]: Avoid by [specific action]

**Anti-Goals (Never Do):**
- [Behavior to eliminate]
- [Behavior to eliminate]

**Success By Avoidance:**
By simply not doing [X, Y, Z], success becomes much more likely because...

**Remaining Risk:**
[What's left after avoiding obvious failures]
</output_format>

<success_criteria>
- Failure modes are specific and realistic
- Avoidance strategies are actionable
- Surfaces risks that optimistic planning misses
- Creates clear "never do" boundaries
- Shows path to success via negativa
</success_criteria>

commands/consider/occams-razor.md (Normal file)
@@ -0,0 +1,44 @@
---
description: Find simplest explanation that fits all the facts
argument-hint: [situation or leave blank for current context]
---

<objective>
Apply Occam's Razor to $ARGUMENTS (or the current discussion if no arguments provided).

Among competing explanations, prefer the one with fewest assumptions. Simplest ≠ easiest; simplest = fewest moving parts.
</objective>

<process>
1. List all possible explanations or approaches
2. For each, count the assumptions required
3. Identify which assumptions are actually supported by evidence
4. Eliminate explanations requiring unsupported assumptions
5. Select the simplest that still explains all observed facts
</process>

<output_format>
**Candidate Explanations:**
1. [Explanation]: Requires assumptions [A, B, C]
2. [Explanation]: Requires assumptions [D, E]
3. [Explanation]: Requires assumptions [F]

**Evidence Check:**
- Assumption A: [supported/unsupported]
- Assumption B: [supported/unsupported]
...

**Simplest Valid Explanation:**
[The one with fewest unsupported assumptions]

**Why This Wins:**
[What it explains without extra machinery]
</output_format>

<success_criteria>
- Enumerates all plausible explanations
- Makes assumptions explicit and countable
- Distinguishes supported from unsupported assumptions
- Doesn't oversimplify (must fit ALL facts)
- Reduces complexity without losing explanatory power
</success_criteria>

commands/consider/one-thing.md (Normal file)
@@ -0,0 +1,44 @@
---
description: Identify the single highest-leverage action
argument-hint: [goal or leave blank for current context]
---

<objective>
Apply "The One Thing" framework to $ARGUMENTS (or the current discussion if no arguments provided).

Ask: "What's the ONE thing I can do such that by doing it everything else will be easier or unnecessary?"
</objective>

<process>
1. Clarify the ultimate goal or desired outcome
2. List all possible actions that could contribute
3. For each action, ask: "Does this make other things easier or unnecessary?"
4. Identify the domino that knocks down others
5. Define the specific next action for that one thing
</process>

<output_format>
**Goal:** [what you're trying to achieve]

**Candidate Actions:**
- Action 1: [downstream effect]
- Action 2: [downstream effect]
- Action 3: [downstream effect]

**The One Thing:**
[The action that enables or eliminates the most other actions]

**Why This One:**
By doing this, [specific things] become easier or unnecessary because...

**Next Action:**
[Specific, concrete first step to take right now]
</output_format>

<success_criteria>
- Identifies genuine leverage point, not just important task
- Shows causal chain (this enables that)
- Reduces overwhelm to single focus
- Next action is immediately actionable
- Everything else can wait until this is done
</success_criteria>

commands/consider/opportunity-cost.md (Normal file)
@@ -0,0 +1,47 @@
---
description: Analyze what you give up by choosing this option
argument-hint: [choice or leave blank for current context]
---

<objective>
Apply opportunity cost analysis to $ARGUMENTS (or the current discussion if no arguments provided).

Every yes is a no to something else. What's the true cost of this choice?
</objective>

<process>
1. State the choice being considered
2. List what resources it consumes (time, money, energy, attention)
3. Identify the best alternative use of those same resources
4. Compare value of chosen option vs. best alternative
5. Determine if the tradeoff is worth it
</process>

<output_format>
**Choice:** [what you're considering doing]

**Resources Required:**
- Time: [hours/days/weeks]
- Money: [amount]
- Energy/Attention: [cognitive load]
- Other: [relationships, reputation, etc.]

**Best Alternative Uses:**
- With that time, could instead: [alternative + value]
- With that money, could instead: [alternative + value]
- With that energy, could instead: [alternative + value]

**True Cost:**
Choosing this means NOT doing [best alternative], which would have provided [value].

**Verdict:**
[Is the chosen option worth more than the best alternative?]
</output_format>

<success_criteria>
- Makes hidden costs explicit
- Compares to best alternative, not just any alternative
- Accounts for all resource types (not just money)
- Reveals when "affordable" things are actually expensive
- Enables genuine comparison of value
</success_criteria>

commands/consider/pareto.md (Normal file)
@@ -0,0 +1,40 @@
---
description: Apply Pareto's principle (80/20 rule) to analyze arguments or current discussion
argument-hint: [topic or leave blank for current context]
---

<objective>
Apply Pareto's principle to $ARGUMENTS (or the current discussion if no arguments provided).

Identify the vital few factors (≈20%) that drive the majority of results (≈80%), cutting through noise to focus on what actually matters.
</objective>

<process>
1. Identify all factors, options, tasks, or considerations in scope
2. Estimate relative impact of each factor on the desired outcome
3. Rank by impact (highest to lowest)
4. Identify the cutoff where ~20% of factors account for ~80% of impact
5. Present the vital few with specific, actionable recommendations
6. Note what can be deprioritized or ignored
</process>

<output_format>
**Vital Few (focus here):**
- Factor 1: [why it matters, specific action]
- Factor 2: [why it matters, specific action]
- Factor 3: [why it matters, specific action]

**Trivial Many (deprioritize):**
- Brief list of what can be deferred or ignored

**Bottom Line:**
Single sentence on where to focus effort for maximum results.
</output_format>

<success_criteria>
- Clearly separates high-impact from low-impact factors
- Provides specific, actionable recommendations for vital few
- Explains why each vital factor matters
- Gives clear direction on what to ignore or defer
- Reduces decision fatigue by narrowing focus
</success_criteria>

commands/consider/second-order.md (Normal file)
@@ -0,0 +1,48 @@
---
description: Think through consequences of consequences
argument-hint: [action or leave blank for current context]
---

<objective>
Apply second-order thinking to $ARGUMENTS (or the current discussion if no arguments provided).

Ask: "And then what?" First-order thinking stops at immediate effects. Second-order thinking follows the chain.
</objective>

<process>
1. State the action or decision
2. Identify first-order effects (immediate, obvious consequences)
3. For each first-order effect, ask "And then what happens?"
4. Continue to third-order if significant
5. Identify delayed consequences that change the calculus
6. Assess whether the action is still worth it after full chain analysis
</process>

<output_format>
**Action:** [what's being considered]

**First-Order Effects:** (Immediate)
- [Effect 1]
- [Effect 2]

**Second-Order Effects:** (And then what?)
- [Effect 1] → leads to → [Consequence]
- [Effect 2] → leads to → [Consequence]

**Third-Order Effects:** (And then?)
- [Key downstream consequences]

**Delayed Consequences:**
[Effects that aren't obvious initially but matter long-term]

**Revised Assessment:**
After tracing the chain, this action [is/isn't] worth it because...
</output_format>

<success_criteria>
- Traces causal chains beyond obvious effects
- Identifies feedback loops and unintended consequences
- Reveals delayed costs or benefits
- Distinguishes actions that compound well from those that don't
- Prevents "seemed like a good idea at the time" regret
</success_criteria>

commands/consider/swot.md (Normal file)
@@ -0,0 +1,49 @@
---
description: Map strengths, weaknesses, opportunities, and threats
argument-hint: [subject or leave blank for current context]
---

<objective>
Apply SWOT analysis to $ARGUMENTS (or the current discussion if no arguments provided).

Map internal factors (strengths/weaknesses) and external factors (opportunities/threats) to inform strategy.
</objective>

<process>
1. Define the subject being analyzed (project, decision, position)
2. Identify internal strengths (advantages you control)
3. Identify internal weaknesses (disadvantages you control)
4. Identify external opportunities (favorable conditions you don't control)
5. Identify external threats (unfavorable conditions you don't control)
6. Develop strategies that leverage strengths toward opportunities while mitigating weaknesses and threats
</process>

<output_format>
**Subject:** [what's being analyzed]

**Strengths (Internal +)**
- [Strength]: How to leverage...

**Weaknesses (Internal -)**
- [Weakness]: How to mitigate...

**Opportunities (External +)**
- [Opportunity]: How to capture...

**Threats (External -)**
- [Threat]: How to defend...

**Strategic Moves:**
- **SO Strategy:** Use [strength] to capture [opportunity]
- **WO Strategy:** Address [weakness] to enable [opportunity]
- **ST Strategy:** Use [strength] to counter [threat]
- **WT Strategy:** Minimize [weakness] to avoid [threat]
</output_format>

<success_criteria>
- Correctly categorizes internal vs. external factors
- Factors are specific and actionable, not generic
- Strategies connect multiple quadrants
- Provides clear direction for action
- Balances optimism with risk awareness
</success_criteria>

commands/consider/via-negativa.md (Normal file)
@@ -0,0 +1,45 @@
---
description: Improve by removing rather than adding
argument-hint: [situation or leave blank for current context]
---

<objective>
Apply via negativa to $ARGUMENTS (or the current discussion if no arguments provided).

Instead of asking "What should I add?", ask "What should I remove?" Subtraction often beats addition.
</objective>

<process>
1. State the current situation or goal
2. List everything currently present (activities, features, commitments, beliefs)
3. For each item, ask: "Does removing this improve the outcome?"
4. Identify what to stop, eliminate, or say no to
5. Describe the improved state after subtraction
</process>

<output_format>
**Current State:**
[What exists now - activities, features, commitments]

**Subtraction Candidates:**
- [Item]: Remove because [reason] → Impact: [what improves]
- [Item]: Remove because [reason] → Impact: [what improves]
- [Item]: Remove because [reason] → Impact: [what improves]

**Keep (Passed the Test):**
- [Item]: Keep because [genuine value]

**After Subtraction:**
[Description of leaner, better state]

**What to Say No To:**
[Future additions to reject]
</output_format>

<success_criteria>
- Identifies genuine bloat vs. essential elements
- Removes without breaking core function
- Creates space and simplicity
- Reduces maintenance burden
- Improves by doing less, not more
</success_criteria>

commands/create-agent-skill.md (Normal file)
@@ -0,0 +1,7 @@
---
description: Create or edit Claude Code skills with expert guidance on structure and best practices
allowed-tools: Skill(create-agent-skills)
argument-hint: [skill description or requirements]
---

Invoke the create-agent-skills skill for: $ARGUMENTS

commands/create-hook.md (Normal file)
@@ -0,0 +1,6 @@
---
description: Invoke create-hooks skill for expert guidance on Claude Code hook development
allowed-tools: Skill(create-hooks)
---

Invoke the create-hooks skill.

commands/create-meta-prompt.md (Normal file)
@@ -0,0 +1,7 @@
---
description: Create optimized prompts for Claude-to-Claude pipelines (research -> plan -> implement)
argument-hint: [task description]
allowed-tools: Skill(create-meta-prompts)
---

Invoke the create-meta-prompts skill for: $ARGUMENTS

commands/create-plan.md (Normal file)
@@ -0,0 +1,11 @@
---
description: Create hierarchical project plans for solo agentic development (briefs, roadmaps, phase plans)
argument-hint: [what to plan]
allowed-tools:
  - Skill(create-plans)
  - Read
  - Bash
  - Write
---

Invoke the create-plans skill for: $ARGUMENTS

commands/create-prompt.md (Normal file)
@@ -0,0 +1,468 @@
---
description: Create a new prompt that another Claude can execute
argument-hint: [task description]
allowed-tools: [Read, Write, Glob, SlashCommand, AskUserQuestion]
---

<context>
Before generating prompts, use the Glob tool to check `./prompts/*.md` to:
1. Determine if the prompts directory exists
2. Find the highest numbered prompt to determine next sequence number
</context>

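As a non-authoritative illustration of the numbering rule described here and in the Output Format below (the command itself does this check with the Glob tool, not a script), a minimal sketch of deriving the next sequence number from existing `./prompts/NNN-name.md` files might look like this; the function name is an assumption for illustration only.

```python
import re
from pathlib import Path

def next_prompt_number(prompts_dir="./prompts"):
    """Return the next zero-padded sequence number, e.g. '001', '002', ..."""
    directory = Path(prompts_dir)
    numbers = []
    if directory.is_dir():
        for file in directory.glob("*.md"):
            match = re.match(r"(\d+)-", file.name)  # files are named like 005-implement-auth.md
            if match:
                numbers.append(int(match.group(1)))
    next_number = (max(numbers) + 1) if numbers else 1
    return f"{next_number:03d}"
```
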
<objective>
Act as an expert prompt engineer for Claude Code, specialized in crafting optimal prompts using XML tag structuring and best practices.

Create highly effective prompts for: $ARGUMENTS

Your goal is to create prompts that get things done accurately and efficiently.
</objective>

<process>

<step_0_intake_gate>
<title>Adaptive Requirements Gathering</title>

<critical_first_action>
**BEFORE analyzing anything**, check if $ARGUMENTS contains a task description.

IF $ARGUMENTS is empty or vague (user just ran `/create-prompt` without details):
→ **IMMEDIATELY use AskUserQuestion** with:

- header: "Task type"
- question: "What kind of prompt do you need?"
- options:
  - "Coding task" - Build, fix, or refactor code
  - "Analysis task" - Analyze code, data, or patterns
  - "Research task" - Gather information or explore options

After selection, ask: "Describe what you want to accomplish" (they select "Other" to provide free text).

IF $ARGUMENTS contains a task description:
→ Skip this handler. Proceed directly to adaptive_analysis.
</critical_first_action>

<adaptive_analysis>
Analyze the user's description to extract and infer:

- **Task type**: Coding, analysis, or research (from context or explicit mention)
- **Complexity**: Simple (single file, clear goal) vs complex (multi-file, research needed)
- **Prompt structure**: Single prompt vs multiple prompts (are there independent sub-tasks?)
- **Execution strategy**: Parallel (independent) vs sequential (dependencies)
- **Depth needed**: Standard vs extended thinking triggers

Inference rules:
- Dashboard/feature with multiple components → likely multiple prompts
- Bug fix with clear location → single prompt, simple
- "Optimize" or "refactor" → needs specificity about what/where
- Authentication, payments, complex features → complex, needs context
</adaptive_analysis>

<contextual_questioning>
Generate 2-4 questions using AskUserQuestion based ONLY on genuine gaps.

<question_templates>

**For ambiguous scope** (e.g., "build a dashboard"):
- header: "Dashboard type"
- question: "What kind of dashboard is this?"
- options:
  - "Admin dashboard" - Internal tools, user management, system metrics
  - "Analytics dashboard" - Data visualization, reports, business metrics
  - "User-facing dashboard" - End-user features, personal data, settings

**For unclear target** (e.g., "fix the bug"):
- header: "Bug location"
- question: "Where does this bug occur?"
- options:
  - "Frontend/UI" - Visual issues, user interactions, rendering
  - "Backend/API" - Server errors, data processing, endpoints
  - "Database" - Queries, migrations, data integrity

**For auth/security tasks**:
- header: "Auth method"
- question: "What authentication approach?"
- options:
  - "JWT tokens" - Stateless, API-friendly
  - "Session-based" - Server-side sessions, traditional web
  - "OAuth/SSO" - Third-party providers, enterprise

**For performance tasks**:
- header: "Performance focus"
- question: "What's the main performance concern?"
- options:
  - "Load time" - Initial render, bundle size, assets
  - "Runtime" - Memory usage, CPU, rendering performance
  - "Database" - Query optimization, indexing, caching

**For output/deliverable clarity**:
- header: "Output purpose"
- question: "What will this be used for?"
- options:
  - "Production code" - Ship to users, needs polish
  - "Prototype/POC" - Quick validation, can be rough
  - "Internal tooling" - Team use, moderate polish

</question_templates>

<question_rules>
- Only ask about genuine gaps - don't ask what's already stated
- Each option needs a description explaining implications
- Prefer options over free-text when choices are knowable
- User can always select "Other" for custom input
- 2-4 questions max per round
</question_rules>
</contextual_questioning>

<decision_gate>
After receiving answers, present decision gate using AskUserQuestion:

- header: "Ready"
- question: "I have enough context to create your prompt. Ready to proceed?"
- options:
  - "Proceed" - Create the prompt with current context
  - "Ask more questions" - I have more details to clarify
  - "Let me add context" - I want to provide additional information

If "Ask more questions" → generate 2-4 NEW questions based on remaining gaps, then present gate again
If "Let me add context" → receive additional context via "Other" option, then re-evaluate
If "Proceed" → continue to generation step
</decision_gate>

<finalization>
After "Proceed" selected, state confirmation:

"Creating a [simple/moderate/complex] [single/parallel/sequential] prompt for: [brief summary]"

Then proceed to generation.
</finalization>
</step_0_intake_gate>

<step_1_generate_and_save>
<title>Generate and Save Prompts</title>

<pre_generation_analysis>
Before generating, determine:

1. **Single vs Multiple Prompts**:
   - Single: Clear dependencies, single cohesive goal, sequential steps
   - Multiple: Independent sub-tasks that could be parallelized or done separately

2. **Execution Strategy** (if multiple):
   - Parallel: Independent, no shared file modifications
   - Sequential: Dependencies, one must finish before next starts

3. **Reasoning depth**:
   - Simple → Standard prompt
   - Complex reasoning/optimization → Extended thinking triggers

4. **Required tools**: File references, bash commands, MCP servers

5. **Prompt quality needs**:
   - "Go beyond basics" for ambitious work?
   - WHY explanations for constraints?
   - Examples for ambiguous requirements?
</pre_generation_analysis>

Create the prompt(s) and save to the prompts folder.

**For single prompts:**

- Generate one prompt file following the patterns below
- Save as `./prompts/[number]-[name].md`

**For multiple prompts:**

- Determine how many prompts are needed (typically 2-4)
- Generate each prompt with clear, focused objectives
- Save sequentially: `./prompts/[N]-[name].md`, `./prompts/[N+1]-[name].md`, etc.
- Each prompt should be self-contained and executable independently

**Prompt Construction Rules**

Always Include:

- XML tag structure with clear, semantic tags like `<objective>`, `<context>`, `<requirements>`, `<constraints>`, `<output>`
- **Contextual information**: Why this task matters, what it's for, who will use it, end goal
- **Explicit, specific instructions**: Tell Claude exactly what to do with clear, unambiguous language
- **Sequential steps**: Use numbered lists for clarity
- File output instructions using relative paths: `./filename` or `./subfolder/filename`
- Reference to reading the CLAUDE.md for project conventions
- Explicit success criteria within `<success_criteria>` or `<verification>` tags

Conditionally Include (based on analysis):

- **Extended thinking triggers** for complex reasoning:
  - Phrases like: "thoroughly analyze", "consider multiple approaches", "deeply consider", "explore multiple solutions"
  - Don't use for simple, straightforward tasks
- **"Go beyond basics" language** for creative/ambitious tasks:
  - Example: "Include as many relevant features as possible. Go beyond the basics to create a fully-featured implementation."
- **WHY explanations** for constraints and requirements:
  - In generated prompts, explain WHY constraints matter, not just what they are
  - Example: Instead of "Never use ellipses", write "Your response will be read aloud, so never use ellipses since text-to-speech can't pronounce them"
- **Parallel tool calling** for agentic/multi-step workflows:
  - "For maximum efficiency, whenever you need to perform multiple independent operations, invoke all relevant tools simultaneously rather than sequentially."
- **Reflection after tool use** for complex agentic tasks:
  - "After receiving tool results, carefully reflect on their quality and determine optimal next steps before proceeding."
- `<research>` tags when codebase exploration is needed
- `<validation>` tags for tasks requiring verification
- `<examples>` tags for complex or ambiguous requirements - ensure examples demonstrate desired behavior and avoid undesired patterns
- Bash command execution with "!" prefix when system state matters
- MCP server references when specifically requested or obviously beneficial

Output Format:

1. Generate prompt content with XML structure
2. Save to: `./prompts/[number]-[descriptive-name].md`
   - Number format: 001, 002, 003, etc. (check existing files in ./prompts/ to determine next number)
   - Name format: lowercase, hyphen-separated, max 5 words describing the task
   - Example: `./prompts/001-implement-user-authentication.md`
3. File should contain ONLY the prompt, no explanations or metadata

<prompt_patterns>

For Coding Tasks:

```xml
<objective>
[Clear statement of what needs to be built/fixed/refactored]
Explain the end goal and why this matters.
</objective>

<context>
[Project type, tech stack, relevant constraints]
[Who will use this, what it's for]
@[relevant files to examine]
</context>

<requirements>
[Specific functional requirements]
[Performance or quality requirements]
Be explicit about what Claude should do.
</requirements>

<implementation>
[Any specific approaches or patterns to follow]
[What to avoid and WHY - explain the reasoning behind constraints]
</implementation>

<output>
Create/modify files with relative paths:
- `./path/to/file.ext` - [what this file should contain]
</output>

<verification>
Before declaring complete, verify your work:
- [Specific test or check to perform]
- [How to confirm the solution works]
</verification>

<success_criteria>
[Clear, measurable criteria for success]
</success_criteria>
```

For Analysis Tasks:

```xml
<objective>
[What needs to be analyzed and why]
[What the analysis will be used for]
</objective>

<data_sources>
@[files or data to analyze]
![relevant commands to gather data]
</data_sources>

<analysis_requirements>
[Specific metrics or patterns to identify]
[Depth of analysis needed - use "thoroughly analyze" for complex tasks]
[Any comparisons or benchmarks]
</analysis_requirements>

<output_format>
[How results should be structured]
Save analysis to: `./analyses/[descriptive-name].md`
</output_format>

<verification>
[How to validate the analysis is complete and accurate]
</verification>
```

For Research Tasks:

```xml
<research_objective>
[What information needs to be gathered]
[Intended use of the research]
For complex research, include: "Thoroughly explore multiple sources and consider various perspectives"
</research_objective>

<scope>
[Boundaries of the research]
[Sources to prioritize or avoid]
[Time period or version constraints]
</scope>

<deliverables>
[Format of research output]
[Level of detail needed]
Save findings to: `./research/[topic].md`
</deliverables>

<evaluation_criteria>
[How to assess quality/relevance of sources]
[Key questions that must be answered]
</evaluation_criteria>

<verification>
Before completing, verify:
- [All key questions are answered]
- [Sources are credible and relevant]
</verification>
```
</prompt_patterns>
</step_1_generate_and_save>

<intelligence_rules>

1. **Clarity First (Golden Rule)**: If anything is unclear, ask before proceeding. A few clarifying questions save time. Test: Would a colleague with minimal context understand this prompt?

2. **Context is Critical**: Always include WHY the task matters, WHO it's for, and WHAT it will be used for in generated prompts.

3. **Be Explicit**: Generate prompts with explicit, specific instructions. For ambitious results, include "go beyond the basics." For specific formats, state exactly what format is needed.

4. **Scope Assessment**: Simple tasks get concise prompts. Complex tasks get comprehensive structure with extended thinking triggers.

5. **Context Loading**: Only request file reading when the task explicitly requires understanding existing code. Use patterns like:

   - "Examine @package.json for dependencies" (when adding new packages)
   - "Review @src/database/\* for schema" (when modifying data layer)
   - Skip file reading for greenfield features

6. **Precision vs Brevity**: Default to precision. A longer, clear prompt beats a short, ambiguous one.

7. **Tool Integration**:

   - Include MCP servers only when explicitly mentioned or obviously needed
   - Use bash commands for environment checking when state matters
   - File references should be specific, not broad wildcards
   - For multi-step agentic tasks, include parallel tool calling guidance

8. **Output Clarity**: Every prompt must specify exactly where to save outputs using relative paths

9. **Verification Always**: Every prompt should include clear success criteria and verification steps
</intelligence_rules>

<decision_tree>
After saving the prompt(s), present this decision tree to the user:

---

**Prompt(s) created successfully!**

<single_prompt_scenario>
If you created ONE prompt (e.g., `./prompts/005-implement-feature.md`):

<presentation>
✓ Saved prompt to ./prompts/005-implement-feature.md

What's next?

1. Run prompt now
2. Review/edit prompt first
3. Save for later
4. Other

Choose (1-4): \_
</presentation>

<action>
If user chooses #1, invoke via SlashCommand tool: `/run-prompt 005`
</action>
</single_prompt_scenario>

<parallel_scenario>
If you created MULTIPLE prompts that CAN run in parallel (e.g., independent modules, no shared files):

<presentation>
✓ Saved prompts:
- ./prompts/005-implement-auth.md
- ./prompts/006-implement-api.md
- ./prompts/007-implement-ui.md

Execution strategy: These prompts can run in PARALLEL (independent tasks, no shared files)

What's next?

1. Run all prompts in parallel now (launches 3 sub-agents simultaneously)
2. Run prompts sequentially instead
3. Review/edit prompts first
4. Other

Choose (1-4): \_
</presentation>

<actions>
If user chooses #1, invoke via SlashCommand tool: `/run-prompt 005 006 007 --parallel`
If user chooses #2, invoke via SlashCommand tool: `/run-prompt 005 006 007 --sequential`
</actions>
</parallel_scenario>

<sequential_scenario>
If you created MULTIPLE prompts that MUST run sequentially (e.g., dependencies, shared files):

<presentation>
✓ Saved prompts:
- ./prompts/005-setup-database.md
- ./prompts/006-create-migrations.md
- ./prompts/007-seed-data.md

Execution strategy: These prompts must run SEQUENTIALLY (dependencies: 005 → 006 → 007)

What's next?

1. Run prompts sequentially now (one completes before next starts)
2. Run first prompt only (005-setup-database.md)
3. Review/edit prompts first
4. Other

Choose (1-4): \_
</presentation>

<actions>
If user chooses #1, invoke via SlashCommand tool: `/run-prompt 005 006 007 --sequential`
If user chooses #2, invoke via SlashCommand tool: `/run-prompt 005`
</actions>
</sequential_scenario>

---

</decision_tree>
</process>

<success_criteria>
- Intake gate completed (AskUserQuestion used for clarification if needed)
- User selected "Proceed" from decision gate
- Appropriate depth, structure, and execution strategy determined
- Prompt(s) generated with proper XML structure following patterns
- Files saved to ./prompts/[number]-[name].md with correct sequential numbering
- Decision tree presented to user based on single/parallel/sequential scenario
- User choice executed (SlashCommand invoked if user selects run option)
</success_criteria>

<meta_instructions>

- **Intake first**: Complete step_0_intake_gate before generating. Use AskUserQuestion for structured clarification.
- **Decision gate loop**: Keep asking questions until user selects "Proceed"
- Use Glob tool with `./prompts/*.md` to find existing prompts and determine next number in sequence
- If ./prompts/ doesn't exist, use Write tool to create the first prompt (Write will create parent directories)
- Keep prompt filenames descriptive but concise
- Adapt the XML structure to fit the task - not every tag is needed every time
- Consider the user's working directory as the root for all relative paths
- Each prompt file should contain ONLY the prompt content, no preamble or explanation
- After saving, present the decision tree as inline text (not AskUserQuestion)
- Use the SlashCommand tool to invoke /run-prompt when user makes their choice
</meta_instructions>

commands/create-slash-command.md (Normal file)
@@ -0,0 +1,7 @@
---
description: Create a new slash command following best practices and patterns
argument-hint: [command description or requirements]
allowed-tools: Skill(create-slash-commands)
---

Invoke the create-slash-commands skill for: $ARGUMENTS

commands/create-subagent.md (Normal file)
@@ -0,0 +1,7 @@
---
description: Create specialized Claude Code subagents with expert guidance
argument-hint: [agent idea or description]
allowed-tools: Skill(create-subagents)
---

Invoke the create-subagents skill for: $ARGUMENTS

commands/debug.md (Normal file)
@@ -0,0 +1,23 @@
---
description: Apply expert debugging methodology to investigate a specific issue
argument-hint: [issue description]
allowed-tools: Skill(debug-like-expert)
---

<objective>
Load the debug-like-expert skill to investigate: $ARGUMENTS

This applies systematic debugging methodology with evidence gathering, hypothesis testing, and rigorous verification.
</objective>

<process>
1. Invoke the Skill tool with debug-like-expert
2. Pass the issue description: $ARGUMENTS
3. Follow the skill's debugging methodology
4. Apply rigorous investigation and verification
</process>

<success_criteria>
- Skill successfully invoked
- Arguments passed correctly to skill
</success_criteria>

141
commands/heal-skill.md
Normal file
141
commands/heal-skill.md
Normal file
@@ -0,0 +1,141 @@
|
||||
---
|
||||
description: Heal skill documentation by applying corrections discovered during execution with approval workflow
|
||||
argument-hint: [optional: specific issue to fix]
|
||||
allowed-tools: [Read, Edit, Bash(ls:*), Bash(git:*)]
|
||||
---
|
||||
|
||||
<objective>
|
||||
Update a skill's SKILL.md and related files based on corrections discovered during execution.
|
||||
|
||||
Analyze the conversation to detect which skill is running, reflect on what went wrong, propose specific fixes, get user approval, then apply changes with optional commit.
|
||||
</objective>
|
||||
|
||||
<context>
|
||||
Skill detection: !`ls -1 ./skills/*/SKILL.md | head -5`
|
||||
</context>
|
||||
|
||||
<quick_start>
|
||||
<workflow>
|
||||
1. **Detect skill** from conversation context (invocation messages, recent SKILL.md references)
|
||||
2. **Reflect** on what went wrong and how you discovered the fix
|
||||
3. **Present** proposed changes with before/after diffs
|
||||
4. **Get approval** before making any edits
|
||||
5. **Apply** changes and optionally commit
|
||||
</workflow>
|
||||
</quick_start>
|
||||
|
||||
<process>
|
||||
<step_1 name="detect_skill">
|
||||
Identify the skill from conversation context:
|
||||
|
||||
- Look for skill invocation messages
|
||||
- Check which SKILL.md was recently referenced
|
||||
- Examine current task context
|
||||
|
||||
Set: `SKILL_NAME=[skill-name]` and `SKILL_DIR=./skills/$SKILL_NAME`
|
||||
|
||||
If unclear, ask the user.
</step_1>

<step_2 name="reflection_and_analysis">
Focus on $ARGUMENTS if provided, otherwise analyze broader context.

Determine:
- **What was wrong**: Quote specific sections from SKILL.md that are incorrect
- **Discovery method**: Context7, error messages, trial and error, documentation lookup
- **Root cause**: Outdated API, incorrect parameters, wrong endpoint, missing context
- **Scope of impact**: Single section or multiple? Related files affected?
- **Proposed fix**: Which files, which sections, before/after for each
</step_2>

<step_3 name="scan_affected_files">
```bash
ls -la $SKILL_DIR/
ls -la $SKILL_DIR/references/ 2>/dev/null
ls -la $SKILL_DIR/scripts/ 2>/dev/null
```
</step_3>

<step_4 name="present_proposed_changes">
Present changes in this format:

```
**Skill being healed:** [skill-name]
**Issue discovered:** [1-2 sentence summary]
**Root cause:** [brief explanation]

**Files to be modified:**
- [ ] SKILL.md
- [ ] references/[file].md
- [ ] scripts/[file].py

**Proposed changes:**

### Change 1: SKILL.md - [Section name]
**Location:** Line [X] in SKILL.md

**Current (incorrect):**
```
[exact text from current file]
```

**Corrected:**
```
[new text]
```

**Reason:** [why this fixes the issue]

[repeat for each change across all files]

**Impact assessment:**
- Affects: [authentication/API endpoints/parameters/examples/etc.]

**Verification:**
These changes will prevent: [specific error that prompted this]
```
</step_4>

<step_5 name="request_approval">
```
Should I apply these changes?

1. Yes, apply and commit all changes
2. Apply but don't commit (let me review first)
3. Revise the changes (I'll provide feedback)
4. Cancel (don't make changes)

Choose (1-4):
```

**Wait for user response. Do not proceed without approval.**
</step_5>

<step_6 name="apply_changes">
Only after approval (option 1 or 2):

1. Use Edit tool for each correction across all files
2. Read back modified sections to verify
3. If option 1, commit with structured message showing what was healed
4. Confirm completion with file list
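Step 3 above asks for a structured commit message; one possible shape, with hypothetical file paths and wording:

```bash
# Illustrative only - adapt the paths and message to the actual healing
git add skills/create-plans/SKILL.md skills/create-plans/references/api.md
git commit -m "docs(create-plans): heal SKILL.md - correct outdated endpoint parameters"
```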
</step_6>
</process>

<success_criteria>
- Skill correctly detected from conversation context
- All incorrect sections identified with before/after
- User approved changes before application
- All edits applied across SKILL.md and related files
- Changes verified by reading back
- Commit created if user chose option 1
- Completion confirmed with file list
</success_criteria>

<verification>
Before completing:

- Read back each modified section to confirm changes applied
- Ensure cross-file consistency (SKILL.md examples match references/)
- Verify git commit created if option 1 was selected
- Check no unintended files were modified
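The last two checks could be run with something like the following sketch (not mandated by this command):

```bash
git show --stat HEAD   # the healing commit should list only the intended skill files
git status --short     # should report no stray modifications left behind
```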
</verification>
129
commands/run-plan.md
Normal file
129
commands/run-plan.md
Normal file
@@ -0,0 +1,129 @@
---
type: prompt
description: Execute a PLAN.md file directly without loading planning skill context
arguments:
  - name: plan_path
    description: Path to PLAN.md file (e.g., .planning/phases/07-sidebar-reorganization/07-01-PLAN.md)
    required: true
---

Execute the plan at {{plan_path}} using **intelligent segmentation** for optimal quality.

**Process:**

1. **Verify plan exists and is unexecuted:**
- Read {{plan_path}}
- Check if corresponding SUMMARY.md exists in same directory
- If SUMMARY exists: inform user plan already executed, ask if they want to re-run
- If plan doesn't exist: error and exit
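A minimal pre-flight sketch of this check, assuming the SUMMARY file sits next to the plan and has "SUMMARY" in its name (the command does not pin down the exact filename):

```bash
PLAN="{{plan_path}}"
DIR=$(dirname "$PLAN")

# Fail fast if the plan itself is missing
[ -f "$PLAN" ] || { echo "Plan not found: $PLAN"; exit 1; }

# Warn if a summary already exists, i.e. the plan was probably executed before
if ls "$DIR"/*SUMMARY*.md >/dev/null 2>&1; then
  echo "A SUMMARY already exists in $DIR - this plan may have been executed. Re-run?"
fi
```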

2. **Parse plan and determine execution strategy:**
- Extract `<objective>`, `<execution_context>`, `<context>`, `<tasks>`, `<verification>`, `<success_criteria>` sections
- Analyze checkpoint structure: `grep "type=\"checkpoint" {{plan_path}}`
- Determine routing strategy:

**Strategy A: Fully Autonomous (no checkpoints)**
- Spawn single subagent to execute entire plan
- Subagent reads plan, executes all tasks, creates SUMMARY, commits
- Main context: Orchestration only (~5% usage)
- Go to step 3A

**Strategy B: Segmented Execution (has verify-only checkpoints)**
- Parse into segments separated by checkpoints
- Check if checkpoints are verify-only (checkpoint:human-verify)
- If all checkpoints are verify-only: segment execution enabled
- Go to step 3B

**Strategy C: Decision-Dependent (has decision/action checkpoints)**
- Has checkpoint:decision or checkpoint:human-action checkpoints
- Following tasks depend on checkpoint outcomes
- Must execute sequentially in main context
- Go to step 3C
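A rough sketch of how the checkpoint scan can drive this routing (the type attribute values are taken from the strategies above; the exact matching is illustrative):

```bash
PLAN="{{plan_path}}"
TOTAL=$(grep -c 'type="checkpoint' "$PLAN")
VERIFY=$(grep -c 'type="checkpoint:human-verify"' "$PLAN")

if [ "$TOTAL" -eq 0 ]; then
  echo "Strategy A: fully autonomous"      # no checkpoints at all
elif [ "$TOTAL" -eq "$VERIFY" ]; then
  echo "Strategy B: segmented execution"   # only verify-only checkpoints
else
  echo "Strategy C: decision-dependent"    # decision/action checkpoints present
fi
```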

3. **Execute based on strategy:**

**3A: Fully Autonomous Execution**
```
Spawn Task tool (subagent_type="general-purpose"):

Prompt: "Execute plan at {{plan_path}}

This is a fully autonomous plan (no checkpoints).

- Read the plan for full objective, context, and tasks
- Execute ALL tasks sequentially
- Follow all deviation rules and authentication gate protocols
- Create SUMMARY.md in same directory as PLAN.md
- Update ROADMAP.md plan count
- Commit with format: feat({phase}-{plan}): [summary]
- Report: tasks completed, files modified, commit hash"

Wait for completion → Done
```

**3B: Segmented Execution (verify-only checkpoints)**
```
For each segment (autonomous block between checkpoints):

IF segment is autonomous:
Spawn subagent:
"Execute tasks [X-Y] from {{plan_path}}
Read plan for context and deviation rules.
DO NOT create SUMMARY or commit.
Report: tasks done, files modified, deviations"

Wait for subagent completion
Capture results

ELSE IF task is checkpoint:
Execute in main context:
- Load checkpoint task details
- Present checkpoint to user (action/verify/decision)
- Wait for user response
- Continue to next segment

After all segments complete:
- Aggregate results from all segments
- Create SUMMARY.md with aggregated data
- Update ROADMAP.md
- Commit all changes
- Done
```

**3C: Decision-Dependent Execution**
```
Execute in main context:

Read execution context from plan <execution_context> section
Read domain context from plan <context> section

For each task in <tasks>:
IF type="auto": execute in main, track deviations
IF type="checkpoint:*": execute in main, wait for user

After all tasks:
- Create SUMMARY.md
- Update ROADMAP.md
- Commit
- Done
```

4. **Summary and completion:**
- Verify SUMMARY.md created
- Verify commit successful
- Present completion message with next steps

**Critical Rules:**

- **Read execution_context first:** Always load files from `<execution_context>` section before executing
- **Minimal context loading:** Only read files explicitly mentioned in `<execution_context>` and `<context>` sections
- **No skill invocation:** Execute directly using native tools - don't invoke create-plans skill
- **All deviations tracked:** Apply deviation rules from execute-phase.md, document everything in Summary
- **Checkpoints are blocking:** Never skip user interaction for checkpoint tasks
- **Verification is mandatory:** Don't mark complete without running verification checks
- **Follow execute-phase.md protocol:** Loaded context contains all execution instructions

**Context Efficiency Target:**
- Execution context: ~5-7k tokens (execute-phase.md, summary.md, checkpoints.md if needed)
- Domain context: ~10-15k tokens (BRIEF, ROADMAP, codebase files)
- Total overhead: <30% context, reserving 70%+ for workspace and implementation
166
commands/run-prompt.md
Normal file
166
commands/run-prompt.md
Normal file
@@ -0,0 +1,166 @@
---
name: run-prompt
description: Delegate one or more prompts to fresh sub-task contexts with parallel or sequential execution
argument-hint: <prompt-number(s)-or-name> [--parallel|--sequential]
allowed-tools: [Read, Task, Bash(ls:*), Bash(mv:*), Bash(git:*)]
---

<context>
Git status: !`git status --short`
Recent prompts: !`ls -t ./prompts/*.md | head -5`
</context>

<objective>
Execute one or more prompts from `./prompts/` as delegated sub-tasks with fresh context. Supports single prompt execution, parallel execution of multiple independent prompts, and sequential execution of dependent prompts.
</objective>

<input>
The user will specify which prompt(s) to run via $ARGUMENTS, which can be:

**Single prompt:**

- Empty (no arguments): Run the most recently created prompt (default behavior)
- A prompt number (e.g., "001", "5", "42")
- A partial filename (e.g., "user-auth", "dashboard")

**Multiple prompts:**

- Multiple numbers (e.g., "005 006 007")
- With execution flag: "005 006 007 --parallel" or "005 006 007 --sequential"
- If no flag specified with multiple prompts, default to --sequential for safety
</input>

<process>
<step1_parse_arguments>
Parse $ARGUMENTS to extract:
- Prompt numbers/names (all arguments that are not flags)
- Execution strategy flag (--parallel or --sequential)

<examples>
- "005" → Single prompt: 005
- "005 006 007" → Multiple prompts: [005, 006, 007], strategy: sequential (default)
- "005 006 007 --parallel" → Multiple prompts: [005, 006, 007], strategy: parallel
- "005 006 007 --sequential" → Multiple prompts: [005, 006, 007], strategy: sequential
</examples>
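A minimal bash sketch of this parsing rule (purely illustrative - the command itself reasons over $ARGUMENTS in natural language):

```bash
STRATEGY="sequential"   # default when multiple prompts are given
PROMPTS=()
for arg in $ARGUMENTS; do
  case "$arg" in
    --parallel)   STRATEGY="parallel" ;;
    --sequential) STRATEGY="sequential" ;;
    *)            PROMPTS+=("$arg") ;;
  esac
done
echo "prompts: ${PROMPTS[*]:-<latest>}  strategy: $STRATEGY"
```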
</step1_parse_arguments>

<step2_resolve_files>
For each prompt number/name:

- If empty or "last": Find with `!ls -t ./prompts/*.md | head -1`
- If a number: Find file matching that zero-padded number (e.g., "5" matches "005-*.md", "42" matches "042-*.md")
- If text: Find files containing that string in the filename
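For instance, the number case could be resolved along these lines, assuming the `NNN-name.md` layout used elsewhere in this command:

```bash
ARG="5"                                   # hypothetical user input
NUM=$(printf "%03d" "$ARG")               # zero-pad: 5 -> 005
MATCHES=$(ls ./prompts/"$NUM"-*.md 2>/dev/null)
if [ -z "$MATCHES" ]; then
  MATCHES=$(ls ./prompts/*"$ARG"*.md 2>/dev/null)   # fall back to substring match
fi
echo "${MATCHES:-no matching prompt found}"
```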

<matching_rules>

- If exactly one match found: Use that file
- If multiple matches found: List them and ask user to choose
- If no matches found: Report error and list available prompts
</matching_rules>
</step2_resolve_files>

<step3_execute>
<single_prompt>

1. Read the complete contents of the prompt file
2. Delegate as sub-task using Task tool with subagent_type="general-purpose"
3. Wait for completion
4. Archive prompt to `./prompts/completed/` with metadata
5. Commit all work:
- Stage files YOU modified with `git add [file]` (never `git add .`)
- Determine appropriate commit type based on changes (fix|feat|refactor|style|docs|test|chore)
- Commit with format: `[type]: [description]` (lowercase, specific, concise)
6. Return results
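Steps 4-5 above might look roughly like this, with a hypothetical prompt filename and modified file:

```bash
# Archive the completed prompt
mkdir -p ./prompts/completed
mv ./prompts/005-implement-feature.md ./prompts/completed/

# Stage only the files the sub-task modified, then commit
git add src/features/feature.ts
git commit -m "feat: implement feature described in prompt 005"
```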
</single_prompt>

<parallel_execution>

1. Read all prompt files
2. **Spawn all Task tools in a SINGLE MESSAGE** (this is critical for parallel execution):
<example>
Use Task tool for prompt 005
Use Task tool for prompt 006
Use Task tool for prompt 007
(All in one message with multiple tool calls)
</example>
3. Wait for ALL to complete
4. Archive all prompts with metadata
5. Commit all work:
- Stage files YOU modified with `git add [file]` (never `git add .`)
- Determine appropriate commit type based on changes (fix|feat|refactor|style|docs|test|chore)
- Commit with format: `[type]: [description]` (lowercase, specific, concise)
6. Return consolidated results
</parallel_execution>

<sequential_execution>

1. Read first prompt file
2. Spawn Task tool for first prompt
3. Wait for completion
4. Archive first prompt
5. Read second prompt file
6. Spawn Task tool for second prompt
7. Wait for completion
8. Archive second prompt
9. Repeat for remaining prompts
10. Archive all prompts with metadata
11. Commit all work:
- Stage files YOU modified with `git add [file]` (never `git add .`)
- Determine appropriate commit type based on changes (fix|feat|refactor|style|docs|test|chore)
- Commit with format: `[type]: [description]` (lowercase, specific, concise)
12. Return consolidated results
</sequential_execution>
</step3_execute>
</process>

<context_strategy>
By delegating to a sub-task, the actual implementation work happens in fresh context while the main conversation stays lean for orchestration and iteration.
</context_strategy>

<output>
<single_prompt_output>
✓ Executed: ./prompts/005-implement-feature.md
✓ Archived to: ./prompts/completed/005-implement-feature.md

<results>
[Summary of what the sub-task accomplished]
</results>
</single_prompt_output>

<parallel_output>
✓ Executed in PARALLEL:

- ./prompts/005-implement-auth.md
- ./prompts/006-implement-api.md
- ./prompts/007-implement-ui.md

✓ All archived to ./prompts/completed/

<results>
[Consolidated summary of all sub-task results]
</results>
</parallel_output>

<sequential_output>
✓ Executed SEQUENTIALLY:

1. ./prompts/005-setup-database.md → Success
2. ./prompts/006-create-migrations.md → Success
3. ./prompts/007-seed-data.md → Success

✓ All archived to ./prompts/completed/

<results>
[Consolidated summary showing progression through each step]
</results>
</sequential_output>
</output>

<critical_notes>

- For parallel execution: ALL Task tool calls MUST be in a single message
- For sequential execution: Wait for each Task to complete before starting next
- Archive prompts only after successful completion
- If any prompt fails, stop sequential execution and report error
- Provide clear, consolidated results for multiple prompt execution
</critical_notes>
108
commands/whats-next.md
Normal file
108
commands/whats-next.md
Normal file
@@ -0,0 +1,108 @@
---
name: whats-next
description: Analyze the current conversation and create a handoff document for continuing this work in a fresh context
allowed-tools:
- Read
- Write
- Bash
- WebSearch
- WebFetch
---

Create a comprehensive, detailed handoff document that captures all context from the current conversation. This allows continuing the work in a fresh context with complete precision.

## Instructions

**PRIORITY: Comprehensive detail and precision over brevity.** The goal is to enable someone (or a fresh Claude instance) to pick up exactly where you left off with zero information loss.

Adapt the level of detail to the task type (coding, research, analysis, writing, configuration, etc.) but maintain comprehensive coverage:

1. **Original Task**: Identify what was initially requested (not new scope or side tasks)

2. **Work Completed**: Document everything accomplished in detail
- All artifacts created, modified, or analyzed (files, documents, research findings, etc.)
- Specific changes made (code with line numbers, content written, data analyzed, etc.)
- Actions taken (commands run, APIs called, searches performed, tools used, etc.)
- Findings discovered (insights, patterns, answers, data points, etc.)
- Decisions made and the reasoning behind them

3. **Work Remaining**: Specify exactly what still needs to be done
- Break down remaining work into specific, actionable steps
- Include precise locations, references, or targets (file paths, URLs, data sources, etc.)
- Note dependencies, prerequisites, or ordering requirements
- Specify validation or verification steps needed

4. **Attempted Approaches**: Capture everything tried, including failures
- Approaches that didn't work and why they failed
- Errors encountered, blockers hit, or limitations discovered
- Dead ends to avoid repeating
- Alternative approaches considered but not pursued

5. **Critical Context**: Preserve all essential knowledge
- Key decisions and trade-offs considered
- Constraints, requirements, or boundaries
- Important discoveries, gotchas, edge cases, or non-obvious behaviors
- Relevant environment, configuration, or setup details
- Assumptions made that need validation
- References to documentation, sources, or resources consulted

6. **Current State**: Document the exact current state
- Status of deliverables (complete, in-progress, not started)
- What's committed, saved, or finalized vs. what's temporary or draft
- Any temporary changes, workarounds, or open questions
- Current position in the workflow or process

Write to `whats-next.md` in the current working directory using the format below.

## Output Format

```xml
<original_task>
[The specific task that was initially requested - be precise about scope]
</original_task>

<work_completed>
[Comprehensive detail of everything accomplished:
- Artifacts created/modified/analyzed (with specific references)
- Specific changes, additions, or findings (with details and locations)
- Actions taken (commands, searches, API calls, tool usage, etc.)
- Key discoveries or insights
- Decisions made and reasoning
- Side tasks completed]
</work_completed>

<work_remaining>
[Detailed breakdown of what needs to be done:
- Specific tasks with precise locations or references
- Exact targets to create, modify, or analyze
- Dependencies and ordering
- Validation or verification steps needed]
</work_remaining>

<attempted_approaches>
[Everything tried, including failures:
- Approaches that didn't work and why
- Errors, blockers, or limitations encountered
- Dead ends to avoid
- Alternative approaches considered but not pursued]
</attempted_approaches>

<critical_context>
[All essential knowledge for continuing:
- Key decisions and trade-offs
- Constraints, requirements, or boundaries
- Important discoveries, gotchas, or edge cases
- Environment, configuration, or setup details
- Assumptions requiring validation
- References to documentation, sources, or resources]
</critical_context>

<current_state>
[Exact state of the work:
- Status of deliverables (complete/in-progress/not started)
- What's finalized vs. what's temporary or draft
- Temporary changes or workarounds in place
- Current position in workflow or process
- Any open questions or pending decisions]
</current_state>
```