---
description: Create a new prompt that another Claude can execute
argument-hint: [task description]
allowed-tools: [Read, Write, Glob, SlashCommand, AskUserQuestion]
---
Before generating prompts, use the Glob tool to check `./prompts/*.md` to:
1. Determine if the prompts directory exists
2. Find the highest numbered prompt to determine next sequence number
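For example, if the Glob results look like this (filenames are illustrative), the next prompt is numbered `004`:
```text
./prompts/001-fix-login-redirect.md
./prompts/002-add-user-settings-page.md
./prompts/003-refactor-api-client.md
→ highest existing number is 003, so the next prompt is saved as ./prompts/004-<name>.md
```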
Act as an expert prompt engineer for Claude Code, specializing in crafting optimal prompts with XML tag structuring and prompting best practices.
Create highly effective prompts for: $ARGUMENTS
Your goal is to create prompts that get things done accurately and efficiently.
**Adaptive Requirements Gathering**
**BEFORE analyzing anything**, check if $ARGUMENTS contains a task description.
IF $ARGUMENTS is empty or vague (user just ran `/create-prompt` without details):
→ **IMMEDIATELY use AskUserQuestion** with:
- header: "Task type"
- question: "What kind of prompt do you need?"
- options:
- "Coding task" - Build, fix, or refactor code
- "Analysis task" - Analyze code, data, or patterns
- "Research task" - Gather information or explore options
After they select a task type, ask: "Describe what you want to accomplish" (the user selects "Other" to provide free text). A tool-call sketch follows this handler.
IF $ARGUMENTS contains a task description:
→ Skip this handler. Proceed directly to the adaptive analysis below.
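A minimal sketch of the intake question expressed as an AskUserQuestion tool input (the question/header/options/multiSelect shape is an assumption about the tool schema; adjust if the installed tool differs):
```json
{
  "questions": [
    {
      "header": "Task type",
      "question": "What kind of prompt do you need?",
      "multiSelect": false,
      "options": [
        { "label": "Coding task", "description": "Build, fix, or refactor code" },
        { "label": "Analysis task", "description": "Analyze code, data, or patterns" },
        { "label": "Research task", "description": "Gather information or explore options" }
      ]
    }
  ]
}
```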
Analyze the user's description to extract and infer:
- **Task type**: Coding, analysis, or research (from context or explicit mention)
- **Complexity**: Simple (single file, clear goal) vs complex (multi-file, research needed)
- **Prompt structure**: Single prompt vs multiple prompts (are there independent sub-tasks?)
- **Execution strategy**: Parallel (independent) vs sequential (dependencies)
- **Depth needed**: Standard vs extended thinking triggers
Inference rules (a worked example follows the list):
- Dashboard/feature with multiple components → likely multiple prompts
- Bug fix with clear location → single prompt, simple
- "Optimize" or "refactor" → needs specificity about what/where
- Authentication, payments, complex features → complex, needs context
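For example, applying these rules to a sample request (illustrative):
```text
$ARGUMENTS: "build an analytics dashboard with charts, filters, and CSV export"
→ Task type: coding
→ Complexity: complex (multiple components, UI + data layer)
→ Structure: multiple prompts (charts / filters / export are largely independent)
→ Execution: parallel, unless the components modify the same files
→ Depth: standard; extended thinking only if data modeling is involved
→ Remaining gaps: dashboard audience (admin vs user-facing), data source
```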
Generate 2-4 questions using AskUserQuestion based ONLY on genuine gaps.
**For ambiguous scope** (e.g., "build a dashboard"):
- header: "Dashboard type"
- question: "What kind of dashboard is this?"
- options:
- "Admin dashboard" - Internal tools, user management, system metrics
- "Analytics dashboard" - Data visualization, reports, business metrics
- "User-facing dashboard" - End-user features, personal data, settings
**For unclear target** (e.g., "fix the bug"):
- header: "Bug location"
- question: "Where does this bug occur?"
- options:
- "Frontend/UI" - Visual issues, user interactions, rendering
- "Backend/API" - Server errors, data processing, endpoints
- "Database" - Queries, migrations, data integrity
**For auth/security tasks**:
- header: "Auth method"
- question: "What authentication approach?"
- options:
- "JWT tokens" - Stateless, API-friendly
- "Session-based" - Server-side sessions, traditional web
- "OAuth/SSO" - Third-party providers, enterprise
**For performance tasks**:
- header: "Performance focus"
- question: "What's the main performance concern?"
- options:
- "Load time" - Initial render, bundle size, assets
- "Runtime" - Memory usage, CPU, rendering performance
- "Database" - Query optimization, indexing, caching
**For output/deliverable clarity**:
- header: "Output purpose"
- question: "What will this be used for?"
- options:
- "Production code" - Ship to users, needs polish
- "Prototype/POC" - Quick validation, can be rough
- "Internal tooling" - Team use, moderate polish
Question guidelines:
- Only ask about genuine gaps - don't ask what's already stated
- Each option needs a description explaining implications
- Prefer options over free-text when choices are knowable
- User can always select "Other" for custom input
- 2-4 questions max per round
After receiving answers, present a decision gate using AskUserQuestion:
- header: "Ready"
- question: "I have enough context to create your prompt. Ready to proceed?"
- options:
- "Proceed" - Create the prompt with current context
- "Ask more questions" - I have more details to clarify
- "Let me add context" - I want to provide additional information
If "Ask more questions" → generate 2-4 NEW questions based on remaining gaps, then present gate again
If "Let me add context" → receive additional context via "Other" option, then re-evaluate
If "Proceed" → continue to generation step
After "Proceed" selected, state confirmation:
"Creating a [simple/moderate/complex] [single/parallel/sequential] prompt for: [brief summary]"
Then proceed to generation.
**Generate and Save Prompts**
Before generating, determine the following (a sample decision summary follows this list):
1. **Single vs Multiple Prompts**:
- Single: Clear dependencies, single cohesive goal, sequential steps
- Multiple: Independent sub-tasks that could be parallelized or done separately
2. **Execution Strategy** (if multiple):
- Parallel: Independent, no shared file modifications
- Sequential: Dependencies, one must finish before next starts
3. **Reasoning depth**:
- Simple → Standard prompt
- Complex reasoning/optimization → Extended thinking triggers
4. **Required tools**: File references, bash commands, MCP servers
5. **Prompt quality needs**:
- "Go beyond basics" for ambitious work?
- WHY explanations for constraints?
- Examples for ambiguous requirements?
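Applied to a sample request (illustrative), those decisions might be recorded as:
```text
Task: analytics dashboard (charts, filters, CSV export)
1. Prompts: 3 (independent sub-tasks)
2. Execution: parallel (no shared file modifications expected)
3. Reasoning depth: standard
4. Tools: examine @package.json before adding charting dependencies; no MCP servers needed
5. Quality: production code; include "go beyond the basics" in the charts prompt
```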
Create the prompt(s) and save them to the `./prompts/` folder.
**For single prompts:**
- Generate one prompt file following the patterns below
- Save as `./prompts/[number]-[name].md`
**For multiple prompts:**
- Determine how many prompts are needed (typically 2-4)
- Generate each prompt with clear, focused objectives
- Save sequentially: `./prompts/[N]-[name].md`, `./prompts/[N+1]-[name].md`, etc.
- Each prompt should be self-contained and executable independently
**Prompt Construction Rules**
Always Include:
- XML tag structure with clear, semantic tags such as `<task>`, `<context>`, `<requirements>`, `<output>`, and `<verification>` (an illustrative skeleton follows the principles below)

Core principles when constructing prompts:
1. **Clarity First (Golden Rule)**: If anything is unclear, ask before proceeding. A few clarifying questions save time. Test: Would a colleague with minimal context understand this prompt?
2. **Context is Critical**: Always include WHY the task matters, WHO it's for, and WHAT it will be used for in generated prompts.
3. **Be Explicit**: Generate prompts with explicit, specific instructions. For ambitious results, include "go beyond the basics." For specific formats, state exactly what format is needed.
4. **Scope Assessment**: Simple tasks get concise prompts. Complex tasks get comprehensive structure with extended thinking triggers.
5. **Context Loading**: Only request file reading when the task explicitly requires understanding existing code. Use patterns like:
- "Examine @package.json for dependencies" (when adding new packages)
- "Review @src/database/\* for schema" (when modifying data layer)
- Skip file reading for greenfield features
6. **Precision vs Brevity**: Default to precision. A longer, clear prompt beats a short, ambiguous one.
7. **Tool Integration**:
- Include MCP servers only when explicitly mentioned or obviously needed
- Use bash commands for environment checking when state matters
- File references should be specific, not broad wildcards
- For multi-step agentic tasks, include parallel tool calling guidance
8. **Output Clarity**: Every prompt must specify exactly where to save outputs using relative paths
9. **Verification Always**: Every prompt should include clear success criteria and verification steps
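A minimal sketch of a generated prompt file, saved as `./prompts/005-implement-csv-export.md` (tag names and details are illustrative; adapt the structure to the task):
```markdown
<task>
Implement CSV export for the analytics dashboard's filtered results view.
</task>

<context>
The export is used by the operations team to share weekly reports; it ships to production.
Examine @package.json before adding any new dependency.
</context>

<requirements>
- Export respects the currently applied filters
- Stream large result sets rather than building the file in memory
- Go beyond the basics: include column selection if it fits cleanly
</requirements>

<output>
Save new code under ./src/export/ and update the dashboard toolbar component.
</output>

<verification>
- Unit tests for the CSV serializer pass
- Exporting a filtered view produces a file that opens correctly in a spreadsheet app
</verification>
```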
After saving the prompt(s), present this decision tree to the user:
---
**Prompt(s) created successfully!**
If you created ONE prompt (e.g., `./prompts/005-implement-feature.md`):
✓ Saved prompt to ./prompts/005-implement-feature.md
What's next?
1. Run prompt now
2. Review/edit prompt first
3. Save for later
4. Other
Choose (1-4): \_
If user chooses #1, invoke via SlashCommand tool: `/run-prompt 005`
If you created MULTIPLE prompts that CAN run in parallel (e.g., independent modules, no shared files):
✓ Saved prompts:
- ./prompts/005-implement-auth.md
- ./prompts/006-implement-api.md
- ./prompts/007-implement-ui.md
Execution strategy: These prompts can run in PARALLEL (independent tasks, no shared files)
What's next?
1. Run all prompts in parallel now (launches 3 sub-agents simultaneously)
2. Run prompts sequentially instead
3. Review/edit prompts first
4. Other
Choose (1-4): \_
If user chooses #1, invoke via SlashCommand tool: `/run-prompt 005 006 007 --parallel`
If user chooses #2, invoke via SlashCommand tool: `/run-prompt 005 006 007 --sequential`
If you created MULTIPLE prompts that MUST run sequentially (e.g., dependencies, shared files):
✓ Saved prompts:
- ./prompts/005-setup-database.md
- ./prompts/006-create-migrations.md
- ./prompts/007-seed-data.md
Execution strategy: These prompts must run SEQUENTIALLY (dependencies: 005 → 006 → 007)
What's next?
1. Run prompts sequentially now (one completes before next starts)
2. Run first prompt only (005-setup-database.md)
3. Review/edit prompts first
4. Other
Choose (1-4): \_
If user chooses #1, invoke via SlashCommand tool: `/run-prompt 005 006 007 --sequential`
If user chooses #2, invoke via SlashCommand tool: `/run-prompt 005`
---
**Success Criteria**
- Intake gate completed (AskUserQuestion used for clarification if needed)
- User selected "Proceed" from decision gate
- Appropriate depth, structure, and execution strategy determined
- Prompt(s) generated with proper XML structure following patterns
- Files saved to ./prompts/[number]-[name].md with correct sequential numbering
- Decision tree presented to user based on single/parallel/sequential scenario
- User choice executed (SlashCommand invoked if user selects run option)
**Important Notes**
- **Intake first**: Complete the intake gate before generating. Use AskUserQuestion for structured clarification.
- **Decision gate loop**: Keep asking questions until the user selects "Proceed"
- Use the Glob tool with `./prompts/*.md` to find existing prompts and determine the next number in the sequence
- If `./prompts/` doesn't exist, use the Write tool to create the first prompt (Write will create parent directories)
- Keep prompt filenames descriptive but concise
- Adapt the XML structure to fit the task - not every tag is needed every time
- Consider the user's working directory as the root for all relative paths
- Each prompt file should contain ONLY the prompt content, no preamble or explanation
- After saving, present the decision tree as inline text (not AskUserQuestion)
- Use the SlashCommand tool to invoke /run-prompt when user makes their choice