Guidelines for determining prompt complexity, tool usage, and optimization patterns.

## Prompt complexity

### Simple prompts

Single focused task, clear outcome.

**Indicators:**

- Single artifact output
- No dependencies on other files
- Straightforward requirements
- No decision-making needed

**Prompt characteristics:**

- Concise objective
- Minimal context
- Direct requirements
- Simple verification

### Complex prompts

Multi-step tasks, multiple considerations.

**Indicators:**

- Multiple artifacts or phases
- Dependencies on research/plan files
- Trade-offs to consider
- Integration with existing code

**Prompt characteristics:**

- Detailed objective with context
- Referenced files
- Explicit implementation guidance
- Comprehensive verification
- Extended thinking triggers

## Extended thinking triggers

**When to use:**

- Complex architectural decisions
- Multiple valid approaches to evaluate
- Security-sensitive implementations
- Performance optimization tasks
- Trade-off analysis

Use these phrases to activate deeper reasoning in complex prompts:

```
"Thoroughly analyze..."
"Consider multiple approaches..."
"Deeply consider the implications..."
"Explore various solutions before..."
"Carefully evaluate trade-offs..."
```

**Example:**

```xml
Thoroughly analyze the authentication options and consider multiple approaches before selecting an implementation. Deeply consider the security implications of each choice.
```

**When NOT to use:**

- Simple, straightforward tasks
- Tasks with a clear single approach
- Following established patterns
- Basic CRUD operations

## Parallel tool calling

Include this hint in prompts that involve multiple independent operations:

```xml
For maximum efficiency, invoke all independent tool operations simultaneously rather than sequentially. Multiple file reads, searches, and API calls that don't depend on each other should run in parallel.
```

**When to use:**

- Reading multiple files for context
- Running multiple searches
- Fetching from multiple sources
- Creating multiple independent files

## Context references

**Critical when:**

- Modifying existing code
- Following established patterns
- Integrating with current systems
- Building on research/plan outputs

**Less critical when:**

- Greenfield features
- Standalone utilities
- Pure research tasks
- Standard patterns without customization

**Example:**

```xml
Research: @.prompts/001-auth-research/auth-research.md
Plan: @.prompts/002-auth-plan/auth-plan.md
Current implementation: @src/auth/middleware.ts
Types to extend: @src/types/auth.ts
Similar feature: @src/features/payments/
```

## Incremental writing

For research and plan outputs that may be large:

**Instruct incremental writing:**

```xml
1. Create output file with XML skeleton
2. Write each section as completed:
   - Finding 1 discovered → Append immediately
   - Finding 2 discovered → Append immediately
   - Code example found → Append immediately
3. Finalize summary and metadata after all sections complete
```

**Why this matters:**

- Prevents lost work from token limit failures
- No need to estimate output size
- Agent creates natural checkpoints
- Works for any task complexity

**When to use:**

- Research prompts (findings accumulate)
- Plan prompts (phases accumulate)
- Any prompt that might produce >15k tokens

**When NOT to use:**

- Do prompts (code generation is a different workflow)
- Simple tasks with known small outputs

## Output structure

For Claude-to-Claude consumption:

**Use heavy XML structure** (tag names below are illustrative):

```xml
<finding>
  <topic>Token Storage</topic>
  <recommendation>httpOnly cookies</recommendation>
  <rationale>Prevents XSS access</rationale>
</finding>
```

**Include metadata:**

```xml
<metadata>
  <confidence>Verified in official docs</confidence>
  <dependencies>Cookie parser middleware</dependencies>
  <uncertainties>SameSite policy for subdomains</uncertainties>
</metadata>
```

**Be explicit about next steps:**

```xml
<next_steps>
  <step>Create planning prompt using these findings</step>
  <step>Validate rate limits in sandbox</step>
</next_steps>
```

For human consumption:

- Clear headings
- Bullet points for scanning
- Code examples with comments
- Summary at top

## Prompt sizing

**Simple Do prompts:**

- 20-40 lines
- Basic objective, requirements, output, verification
- No extended thinking
- No parallel tool hints

**Typical task prompts:**

- 40-80 lines
- Full objective with context
- Clear requirements and implementation notes
- Standard verification

**Complex task prompts:**

- 80-150 lines
- Extended thinking triggers
- Parallel tool calling hints
- Multiple verification steps
- Detailed success criteria

## Explain why constraints matter

Always explain why constraints matter.

**Instead of:**

```xml
Never store tokens in localStorage.
```

**Use:**

```xml
Never store tokens in localStorage - it's accessible to any JavaScript on the page, making it vulnerable to XSS attacks. Use httpOnly cookies instead.
```

This helps the executing Claude make good decisions when facing edge cases.

## Verification patterns

**For code tasks:**

```xml
1. Run test suite: `npm test`
2. Type check: `npx tsc --noEmit`
3. Lint: `npm run lint`
4. Manual test: [specific flow to test]
```

**For document tasks:**

```xml
1. Validate structure: [check required sections]
2. Verify links: [check internal references]
3. Review completeness: [check against requirements]
```

**For research prompts:**

```xml
1. Sources are current (2024-2025)
2. All scope questions answered
3. Metadata captures uncertainties
4. Actionable recommendations included
```

**For plan prompts:**

```xml
1. Phases are sequential and logical
2. Tasks are specific and actionable
3. Dependencies are clear
4. Metadata captures assumptions
```

## Chain awareness

Research prompts should:

- Structure findings for easy extraction
- Include code examples for implementation
- Clearly mark confidence levels
- List explicit next actions

Plan prompts should:

- Reference research explicitly
- Break phases into prompt-sized chunks
- Include execution hints per phase
- Capture dependencies between phases

Do prompts should:

- Reference both research and plan
- Follow plan phases explicitly
- Verify against research recommendations
- Update plan status when done
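The incremental-writing pattern described earlier can be sketched in code. This is a minimal illustration, not part of any prompt template: the file name, function names, and section tags are all hypothetical, but the shape matches the instruction — create a skeleton first, append each section the moment it completes, and write the summary last, so a token-limit failure never loses finished work.

```python
# Illustrative sketch of incremental writing: append each section as it
# completes instead of buffering the whole output. All names here are
# hypothetical, not a real API.
from pathlib import Path

def start_output(path: Path) -> None:
    # Create the output file with an XML skeleton up front.
    path.write_text("<research>\n")

def append_section(path: Path, tag: str, body: str) -> None:
    # Append immediately; each call is a natural checkpoint.
    with path.open("a") as f:
        f.write(f"  <{tag}>{body}</{tag}>\n")

def finalize(path: Path, summary: str) -> None:
    # Summary and closing tag are written only after all sections land.
    with path.open("a") as f:
        f.write(f"  <summary>{summary}</summary>\n</research>\n")
```

If the process dies between two `append_section` calls, everything written so far is already on disk, which is exactly why the pattern suits research and plan prompts with large, accumulating outputs.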
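The parallel tool-calling hint above can likewise be sketched, assuming the tool calls are independent I/O-bound operations. The `read_file` coroutine below is a stand-in for any tool call (file read, search, API fetch); the point is that launching them together bounds latency at roughly the slowest single call rather than the sum of all calls.

```python
# Illustrative sketch of parallel tool calling with asyncio. The
# read_file coroutine is a placeholder, not a real tool API.
import asyncio

async def read_file(path: str) -> str:
    # Stands in for an independent I/O-bound tool call.
    await asyncio.sleep(0.01)
    return f"contents of {path}"

async def gather_context(paths: list[str]) -> dict[str, str]:
    # Launch all independent reads simultaneously, not sequentially.
    results = await asyncio.gather(*(read_file(p) for p in paths))
    return dict(zip(paths, results))
```

Sequential awaits in a loop would serialize the same calls, which is precisely the behavior the prompt hint is written to avoid.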