Initial commit

skills/prompt-architecting/references/ADVANCED-ANTI-PATTERNS.md

# Advanced Anti-Patterns: Workflow & Agent Optimization

**CRITICAL**: For detailed stopping point analysis, see `/Users/brandoncasci/.claude/tmp/workflow-optimization-spec.md`

**CRITICAL**: For safety guidelines and dimensional analysis, see `OPTIMIZATION-SAFETY-GUIDE.md`

**KEY INSIGHT**: Most stopping-risk patterns are caused by over-technical notation (Dimension 3). Simplifying notation while preserving appropriate structure solves the problem.

---

Advanced patterns for optimizing multi-step workflows and agent prompts.

## Pattern 6: Numbered Steps Without Execution Mandate

### ❌ Verbose

```
You are optimizing a Claude Code prompt file. Follow this workflow exactly:

## Step 1: Read File

Read the file at the path provided by the user. If no path provided, ask for it.

## Step 2: Parse Structure

- Detect YAML front matter (content between `---` markers at file start)
- If front matter exists, extract `name` field
- Separate front matter from content body

## Step 3: Optimize Content

Use the prompt-architecting skill with:
- Task description: "Optimize this prompt"
- Current content: {content body without front matter}

Wait for skill to return optimized prompt. DO NOT implement optimization yourself.

## Step 4: Analyze Dependencies

Check if description has dependencies by searching codebase.

## Step 5: Present Results

Show optimization results and ask for approval.

## Step 6: Replace File

Write optimized content back to file.
```

**Problem**: Numbered steps imply sequence but don't mandate complete execution. The LLM may stop after Step 3 (when the skill returns a result), treating it as the deliverable. Nothing guarantees that all steps execute sequentially or that Step N uses Step N-1's output.

### ✅ Optimized

```
Execute this 6-step workflow completely. Each step produces input for the next:

WORKFLOW:
1. READ: Use Read tool on $1 → content
2. PARSE: Extract front matter + body from content → {front_matter, body}
3. OPTIMIZE: Run your prompt-architecting skill to optimize body → optimized_body
4. ANALYZE: Use Grep to check dependencies in front_matter → risk_level
5. PRESENT: Show optimized_body + risk_level → STOP, WAIT for user approval
6. WRITE: If approved, use Write tool to save optimized_body + front_matter to $1 → done

EXECUTION RULES:
- Complete steps 1-5 without stopping
- STOP only at step 5 (user approval required)
- Proceed to step 6 only if user approves (yes/1/2)
- Task incomplete until step 6 completes or user cancels

Each step's output feeds the next. Do not stop early.
```

**Strategies applied**: Execution Flow Control, Decomposition, Directive Hierarchy, Output Formatting

**Key improvements**:
- Opening mandate: "Execute this 6-step workflow completely"
- Explicit data flow: "Step X → output Y"
- Clear terminal states: "STOP only at step 5"
- Completion guarantee: "Task incomplete until step 6"
- Prevents premature stopping after async operations (skill invocations)
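
The PARSE step above (front matter + body) is the easiest one to hand-wave; as a minimal sketch, assuming front matter is a `---`-delimited block at the very start of the file, the split can be done in a few lines (the helper name is hypothetical):

```python
import re

def split_front_matter(content: str):
    """Split a prompt file into (front_matter, body).

    Front matter is the YAML block between `---` markers at the very
    start of the file; returns ("", content) when no block is present.
    """
    match = re.match(r"\A---\n(.*?)\n---\n(.*)\Z", content, re.DOTALL)
    if match:
        return match.group(1), match.group(2)
    return "", content
```

Because it falls back to `("", content)`, the same helper covers files with and without front matter, matching the workflow's optional `name` field.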

---

## Pattern 7: Removing Procedural Detail as "Bloat" (Agent/Workflow Prompts)

### ❌ Over-optimized

```
## Process

### For New Features

1. Read scratchpad if prompted
2. Understand requirement (ULTRATHINK): Core request, acceptance criteria, constraints
3. Find patterns (MANDATORY):
   - Read relevant CLAUDE.md files
   - Search similar implementations
   - Check test structure
   - For gem-backed features: Verify gem capabilities FIRST
4. Determine approach:
   - Existing pattern → point to specific files
   - New pattern → recommend architecture fitting codebase style
5. Synthesize: Which files, patterns to follow, architecture rationale

### For Bug Fixes

1. Read scratchpad if prompted
2. Analyze bug nature: Where manifests? User impact? Larger architectural issue?
3. Evaluate ALL test levels (ULTRATHINK):
   - System: UI/JavaScript/user-visible bugs
   - Integration: Request/response/multi-component
   - Unit: Business logic/model behavior
```

**Problem**:
- "Read relevant CLAUDE.md files" - vague (which ones? just root? subdirectories?)
- Pattern-finding detail only in "New Features" mode, removed from "Bug Fixes"
- Agent doesn't know if bug fix mode needs the same rigor as new features
- Lost specificity: "ALL files (root + subdirectories)", "# AI: comments", specific checklist items
- Aggressive 60%+ reduction created ambiguity

### ✅ Properly optimized

```
## Research Checklist

For ALL modes, check:
- ALL CLAUDE.md files (root + subdirectories)
- Similar implementations in codebase
- # AI: comments in existing code
- Test structure
- **For gem-backed features**: Gem capabilities before custom code

## Process

### For New Features

1. Read scratchpad if prompted: "Read scratchpad for context: [path]"
2. Understand requirement (ULTRATHINK): Core request, acceptance criteria, constraints
3. Find patterns (see Research Checklist above)
4. Determine approach:
   - Existing pattern → point to specific files
   - New pattern → recommend architecture fitting codebase style
5. Synthesize: Which files, patterns to follow, architecture rationale

### For Bug Fixes (from issue-diagnosis)

ULTRATHINK MODE: Think comprehensively about best solution.

1. Read scratchpad if prompted
2. Analyze bug nature: Where manifests? User impact? Larger architectural issue?
3. Research context (see Research Checklist above)
4. Evaluate ALL test levels (ULTRATHINK):
   - System: UI/JavaScript/user-visible bugs
   - Integration: Request/response/multi-component
   - Unit: Business logic/model behavior
   - Don't settle for "good enough" - recommend all appropriate tests
```

**Strategies applied**: Execution Flow Control + DRY refactoring, Agent/Workflow Guidelines

**Key improvements**:
- Extracted shared "Research Checklist" - eliminates repetition without losing detail
- Preserved ALL specificity: "ALL CLAUDE.md files (root + subdirectories)", "# AI: comments"
- Applied to all modes - bug fixes get the same rigor as new features
- DRY refactoring instead of deletion - saves ~40 words while maintaining clarity
- 40-50% reduction (appropriate for agents) vs 60%+ (too aggressive)

**When this pattern applies**:

- Optimizing agent prompts or workflow commands
- Multiple modes/sections with similar procedural steps
- Procedural detail appears repetitive but is actually necessary
- Target reduction is 60%+ (too aggressive for agents)

**How to avoid**:

- Extract shared checklists instead of deleting detail
- Preserve specific qualifiers: "ALL", "MANDATORY", "root + subdirectories"
- Target 40-50% reduction for agents (not 60%+)
- Ask: "Does removing this create vagueness?" If yes, refactor instead

---

## Pattern 8: Defensive Meta-Commentary and Stop-Awareness

### ❌ Creates stopping risk through negative priming

```markdown
**Step 3: OPTIMIZE** → optimized_body

- Use Skill tool: Skill(skill="prompt-architecting")
- WAIT for skill output (contains multiple sections)
- EXTRACT text under "## Optimized Prompt" heading → optimized_body
- → DO NOT STOP - this is NOT the end - continue to Step 6 after Step 4

**CRITICAL REMINDERS:**

- The Skill tool (Step 3) returns structured output with multiple sections
- You MUST extract the "## Optimized Prompt" section and store as optimized_body
- Receiving skill output is NOT a completion signal - it's just data for Step 6
- NEVER return control to caller after Step 3 - continue to Steps 4 and 6
- The ONLY valid stopping points are: Step 5 (waiting for user) or Step 6 (done writing)
- If you find yourself returning results without calling Write tool, you failed
```

**Problem**:

- Each "DO NOT STOP" warning creates a decision point: "Should I stop here?"
- "This is NOT the end" reinforces that ending is a possibility
- The CRITICAL REMINDERS section acknowledges the failure mode, normalizing it
- "If you find yourself returning results... you failed" describes the exact unwanted behavior
- Defensive commentary creates stop-awareness, making premature stopping MORE likely

**Psychological mechanism** (Ironic Process Theory):

- Telling someone "don't think about X" makes them think about X
- Repeatedly saying "DO NOT STOP" primes stopping behavior
- Meta-commentary about failure normalizes and increases failure

### ✅ Trust structure, eliminate stop-awareness

```markdown
Your job is to update the file with the optimized prompt from your skill.

Read the file, extract any front matter. Run the prompt-architecting skill on the content body. Check for dependencies if front matter exists. Ask user for approval if dependencies found. Write the optimized content back to the file.
```

**Or, if complexity requires structure:**

```markdown
Execute this workflow completely:

1. READ: Use Read(file_path) → content
2. OPTIMIZE: Run prompt-architecting skill on content → optimized_content
3. CHECK: If front matter exists, search for dependencies → risk_level
4. APPROVE: If risk_level high, ask user → approval
5. WRITE: Save optimized_content to file → done

Task completes at step 5.
```

**Strategies applied**: Natural Language Reframing (first example) or moderate EFC without defensive warnings (second example)

**Key improvements**:

- No "DO NOT STOP" warnings anywhere
- No CRITICAL REMINDERS section discussing failure modes
- No meta-commentary about what might go wrong
- Structure implies continuation naturally
- Task framing makes completion criteria obvious

**When this pattern applies**:

- Any workflow with skill/agent invocations
- Multi-step processes where premature stopping is a risk
- Prompts that have been "fixed" by adding more warnings

**How to avoid**:

- Frame positively: "Continue to X" instead of "DO NOT STOP"
- Remove all meta-commentary about failures
- If you need warnings, the structure is wrong - simplify instead
- Trust natural language and clear structure over defensive reminders
- Test: If the prompt mentions stopping/failure, you're creating the problem

skills/prompt-architecting/references/ADVANCED-EXAMPLES.md

# Advanced Optimization Examples: Workflows & Agents

Advanced case studies showing prompt-architecting for multi-step workflows and agent prompts.

## Example 5: Multi-Step Workflow Command

### Original Request

```
User: "Create a command that optimizes prompt files by using the prompt-architecting skill, checks for front matter dependencies, and writes the optimized version back"
```

### Without Optimization

Claude generates:

```
You are optimizing a Claude Code prompt file. Follow this workflow exactly:

## Step 1: Read File
Read the file at the path provided by the user.

## Step 2: Parse Structure
Detect YAML front matter and separate from content body.

## Step 3: Optimize Content
Use the prompt-architecting skill with the content.
Wait for skill to return optimized prompt.

## Step 4: Analyze Dependencies
Check if description has dependencies.

## Step 5: Present Results
Show optimization results and ask for approval.

## Step 6: Replace File
Write optimized content back to file.
```

**Result**: The LLM stops after Step 3 (the skill returns a result), treating skill output as the deliverable. It never proceeds to dependency analysis, approval, or file writing. The user receives optimization output, but the file is never modified.

### With prompt-architecting

**Subagent analysis:**

- Task: Multi-step workflow command (6 sequential steps)
- Complexity: High (skill invocation, dependency checking, user approval, file I/O)
- Risk: Premature stopping after Step 3 (async skill call), skipped steps, no execution guarantee
- Bloat risks: Verbose step descriptions, missing flow control
- Optimal strategies: **Execution Flow Control** (critical), Decomposition, Directive Hierarchy, Constraint-Based

**Optimized prompt:**

```
Execute this 6-step workflow completely. Each step produces input for the next:

WORKFLOW:
1. READ: Use Read tool on $1 → content
2. PARSE: Extract front matter + body from content → {front_matter, body}
3. OPTIMIZE: Run your prompt-architecting skill to optimize body → optimized_body
4. ANALYZE: Use Grep to check dependencies in front_matter → risk_level
5. PRESENT: Show optimized_body + risk_level → STOP, WAIT for user approval
6. WRITE: If approved, use Write tool to save optimized_body + front_matter to $1 → done

EXECUTION RULES:
- Complete steps 1-5 without stopping
- STOP only at step 5 (user approval required)
- Proceed to step 6 only if user approves (yes/1/2)
- Task incomplete until step 6 completes or user cancels

Each step's output feeds the next. Do not stop early.
```

**Result**: Clear execution mandate, explicit data flow between steps, guaranteed completion through step 5, a proper stop at the approval gate, and the file gets written after approval.

**Why Execution Flow Control was critical:**

1. **Prevents premature stopping**: The mandate ensures execution continues after the Step 3 skill invocation
2. **Explicit dependencies**: "Step X → output Y" shows each step consumes the previous output
3. **Clear terminal states**: "STOP only at step 5" prevents arbitrary stopping
4. **Completion guarantee**: "Task incomplete until..." creates an obligation to finish

**Lessons from this example:**

- Numbered steps alone don't guarantee sequential execution
- Skill invocations are natural stopping points - must mandate continuation
- Multi-step workflows need an opening mandate + terminal state specification
- Data flow notation (→) makes dependencies explicit and prevents skipping

---

## Example 6: Agent Prompt with Multiple Modes

### Original Request

```
User: "Optimize this analyst agent prompt that has ~1,450 words with sections for new features, bug fixes, and gem verification"
```

### Without Agent/Workflow Guidelines

Claude generates:

- 1,450w → 560w (61% reduction - too aggressive)
- Removes procedural detail to hit a 60% target
- Creates vague instructions: "Read relevant CLAUDE.md files" (which ones?)
- Pattern-finding detail only in "New Features", removed from "Bug Fixes"
- Agent doesn't know if bug fixes need the same rigor as features
- Lost specificity: "ALL files (root + subdirectories)", "# AI: comments"

**Result**: Concise but vague. The agent has unclear guidance for bug fix mode.

### With Agent/Workflow Guidelines

**Subagent analysis:**

- Task: Optimize agent prompt with multiple modes
- Complexity: High (1,450 words, 3 modes: new features, bug fixes, gems)
- Risk: Over-optimization removes necessary procedural detail
- Bloat risks: Verbose YAML examples (90+ lines), Rails conventions, repetitive pattern-finding
- Optimal strategies: **Agent/Workflow Guidelines** (preserve procedural detail), DRY refactoring, Progressive Disclosure, Constraint-Based

**Optimized prompt:**

```
You are a requirements and architecture analyst. Tools: Read, Grep, Glob (read-only).

Follow output structure from @.claude/protocols/agent-output-protocol.md

## Core Philosophy

ULTRATHINK: Prioritize correctness over speed. AI implements fast regardless of approach. Strategic decisions matter most.

## Research Checklist

For ALL modes, check:
- ALL CLAUDE.md files (root + subdirectories)
- Similar implementations in codebase
- # AI: comments in existing code
- Test structure
- **For gem-backed features**: Gem capabilities before custom code

## Process

### For New Features

1. Read scratchpad if prompted: "Read scratchpad for context: [path]"
2. Understand requirement (ULTRATHINK): Core request, acceptance criteria, constraints
3. Find patterns (see Research Checklist above)
4. Determine approach:
   - Existing pattern → point to specific files
   - New pattern → recommend architecture fitting codebase style
5. Synthesize: Which files, patterns to follow, architecture rationale

### For Bug Fixes (from issue-diagnosis)

ULTRATHINK MODE: Think comprehensively about best solution.

1. Read scratchpad if prompted
2. Analyze bug nature: Where manifests? User impact? Larger architectural issue?
3. Research context (see Research Checklist above)
4. Evaluate ALL test levels (ULTRATHINK):
   - System: UI/JavaScript/user-visible bugs
   - Integration: Request/response/multi-component
   - Unit: Business logic/model behavior
   - Don't settle for "good enough" - recommend all appropriate tests
...

[Verbose YAML examples moved to references/analyst-examples.md]
```

**Result**: 1,450w → 650w (55% reduction - appropriate for agents). Preserved procedural detail while eliminating repetition via DRY refactoring.

**Why Agent/Workflow Guidelines were critical:**

1. **Recognized agent context**: Applied a 40-50% target instead of 60%+
2. **DRY refactoring over deletion**: Extracted "Research Checklist" - eliminated repetition without losing specificity
3. **Preserved procedural detail**: "ALL CLAUDE.md files (root + subdirectories)", not "relevant files"
4. **All modes get rigor**: Bug fixes reference the same Research Checklist as new features
5. **Aggressive optimization where appropriate**: 90-line YAML examples → references/

**Lessons from this example:**

- Agent prompts need execution detail - a different standard than docs
- DRY refactoring beats deletion - extract shared sections instead of removing
- Target 40-50% for agents (not 60%+) - they need procedural clarity
- Preserve specificity: "ALL", "MANDATORY", "root + subdirectories"
- Recognize when detail is necessary vs when it's bloat
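
The reduction figures quoted throughout these examples are plain word-count ratios; a one-line helper (hypothetical name) makes the arithmetic explicit:

```python
def reduction(before_words: int, after_words: int) -> float:
    """Percent reduction in word count, rounded to one decimal place."""
    return round(100 * (1 - after_words / before_words), 1)
```

For instance, `reduction(1450, 650)` gives 55.2 and `reduction(1450, 560)` gives 61.4, matching the 55% and 61% figures above.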

skills/prompt-architecting/references/ANTI-PATTERNS.md

# Anti-Patterns: Verbose → Concise

Real examples of prompt bloat and their optimized versions.

## Pattern 1: Over-Elaborated Context

### ❌ Verbose

```
I need you to create comprehensive documentation that covers all aspects of
user authentication in our system. This should include detailed explanations
of how the system works, what technologies we're using, best practices for
implementation, common pitfalls to avoid, security considerations, edge cases,
error handling strategies, and example code showing different use cases. Make
sure it's thorough and covers everything a developer might need to know.
```

### ✅ Optimized

```
Write auth docs. Structure: [Setup - 100w] [Usage - 150w] [Error handling - 100w]
[One example - code only]. MAX 400 words total. Audience: Mid-level dev familiar
with JWT. DO NOT: Include security theory, framework comparisons, or "best practices"
sections.
```

**Strategies applied**: Constraint-Based, Output Formatting, Negative Prompting

---

## Pattern 2: Defensive Over-Coverage

### ❌ Verbose

```
Create a skill for processing CSV files. It should handle reading CSV files,
parsing them, validating the data, transforming it, handling errors, dealing
with different encodings, managing large files, streaming if needed, and
outputting to various formats. Include comprehensive documentation, examples
for each use case, edge case handling, performance considerations, and detailed
setup instructions.
```

### ✅ Optimized

```
Create CSV processing skill. SKILL.md (300w max): [Problem - 50w] [Workflow:
read → parse → transform → output - 100w] [One example - code only - 100w].
Move to references/: encoding edge cases, streaming for large files, pandas vs
csv module comparison.
```

**Strategies applied**: Constraint-Based, Progressive Disclosure, Template-Based
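
The read → parse → transform → output workflow named in the optimized prompt can be shown end to end with the stdlib `csv` module — a sketch, not the skill itself; the `name` column and the uppercase transform are hypothetical stand-ins:

```python
import csv
import io

def process_csv(raw: str) -> str:
    """read → parse → transform → output in miniature."""
    rows = list(csv.DictReader(io.StringIO(raw)))   # read + parse
    for row in rows:
        row["name"] = row["name"].upper()           # transform (stand-in)
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)                          # output
    return out.getvalue()
```

The optimized prompt's reference to "encoding edge cases, streaming for large files" covers exactly what this sketch omits: it reads everything into memory and assumes UTF-8 text.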

---

## Pattern 3: Meta-Discussion Instead of Instructions

### ❌ Verbose

```
I think we should probably create some documentation for the API endpoints.
It would be good to explain what each endpoint does, what parameters they take,
what they return, and maybe some examples of how to use them. We should also
probably mention error codes and authentication requirements. Does that sound good?
```

### ✅ Optimized

```
Document REST API endpoints. Format per endpoint: [Method + Path] [Purpose - 1
sentence] [Parameters - table] [Response - JSON schema] [Auth required - yes/no]
[Error codes - list only]. One example request/response per endpoint.
```

**Strategies applied**: Output Formatting, Template-Based, Density Optimization

---

## Pattern 4: Unclear Success Criteria

### ❌ Verbose

```
Help me understand how the deployment process works in this codebase and create
some documentation about it so other developers can figure it out too.
```

### ✅ Optimized

```
Document deployment process. Target: New dev can deploy to staging in <10 min.
Structure: [Prerequisites - bullet list] [Steps - numbered, 1 sentence each]
[Verification - how to confirm success] [Rollback - 2 sentences]. MAX 300 words.
```

**Strategies applied**: Audience-Targeted, Template-Based, Constraint-Based

---

## Pattern 5: Solution Seeking Instead of Problem Stating

### ❌ Verbose

```
We need a really robust, production-grade, enterprise-level authentication
system with all the features and best practices implemented following industry
standards and security guidelines...
```

### ✅ Optimized

```
Implement auth. Requirements: JWT tokens, refresh flow, role-based access.
MUST: Secure token storage, HTTPS only. SHOULD: Remember me option. OUT OF SCOPE:
OAuth, SSO, 2FA. Match existing pattern in: src/auth/session.ts
```

**Strategies applied**: Directive Hierarchy, Negative Prompting, Anchoring

---

## Common Bloat Indicators

**Red flags in prompts:**
- "comprehensive", "robust", "enterprise-grade", "production-ready"
- "all aspects", "everything", "fully cover"
- "best practices", "industry standards"
- Multiple questions without priority
- Hypothetical edge cases ("what if...", "we might need...")

**Optimization checklist:**
1. Remove adjectives (comprehensive, robust, etc.)
2. Set word/line limits
3. Specify structure explicitly
4. Use DO NOT for known over-generation
5. Define success criteria concretely
6. Defer details to references where possible

**Decision tree:**

- Adjective-heavy? → Constraint-Based
- No structure? → Template-Based or Output Formatting
- Known bloat patterns? → Negative Prompting
- 1-2 very simple steps (sequence obvious)? → Natural language acceptable
- 3+ steps where sequence matters? → Enumeration helps (research: improves thoroughness and reduces ambiguity)
- Complex task with branching? → Execution Flow Control (appropriate level)
- Numbered steps but overly formal? → Simplify notation, keep enumeration for clarity
- Agent/workflow with repeated procedural steps? → DRY refactoring (extract shared checklist)
- Procedural detail appears as bloat? → Preserve specificity, target 40-50% reduction
- Need examples? → Few-Shot or Anchoring

---

## Pattern 8: Destroying Callable Entity Triggers

### ❌ Over-optimized

```
# Before (complete)
description: Reviews code for security, bugs, performance when quality assessment needed. When user says "review this code", "check for bugs", "analyze security".

# Over-optimized (WRONG - lost triggers)
description: Code review assistant
```

### ✅ Correct

```
# Minimal acceptable optimization
description: Reviews code for security, bugs, performance when quality assessment needed. When user says "review code", "check bugs", "analyze security".
```

**Why**: Trigger phrases are functional pattern-matching signals for model invocation, not decorative examples. Preserve both the contextual "when" AND the literal trigger phrases.

**See OPTIMIZATION-SAFETY-GUIDE.md Part 4 for callable entity preservation rules.**
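
A toy illustration of why the literal phrases are functional rather than decorative — this is not the real invocation mechanism, just substring matching against the quoted triggers:

```python
import re

def matches_description(user_msg: str, description: str) -> bool:
    """Toy matcher: quoted phrases in a description act as literal
    trigger signals; a description without them matches nothing."""
    triggers = re.findall(r'"([^"]+)"', description)
    return any(t in user_msg.lower() for t in triggers)

full = ('Reviews code for security, bugs, performance. '
        'When user says "review code", "check bugs", "analyze security".')
stripped = "Code review assistant"  # over-optimized: triggers gone
```

With the full description, "please review code for me" matches; with the stripped one, nothing does — deleting the quoted phrases silently disabled the entity.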

---

## Pattern 9: Over-Technical Notation Creating Cognitive Load

### ❌ Over-technical

```
Execute this workflow:
1. READ: Load file → content
2. PARSE: Extract(content) → {fm, body}
3. OPTIMIZE: Run skill(body) → optimized
   a. Pass parameters: {content: body, mode: "optimize"}
   b. Extract result → optimized
   c. DO NOT STOP - continue to step 4
4. WRITE: Save(optimized) → done

EXECUTION RULES:
- DO NOT STOP at step 3 when skill returns
```

### ✅ Organized natural

```
Your job: Optimize the file and write it back.

Read the file and parse structure. Optimize the content using the skill. Write the result back. The file edit is the deliverable.
```

**Why**: Technical notation (CAPS + → + variables + function syntax + warnings) increases cognitive load and creates stopping risk. Organized natural language with appropriate structure is clearer for LLM audiences.

**See OPTIMIZATION-SAFETY-GUIDE.md Part 3 and Part 6 for detailed analysis.**

skills/prompt-architecting/references/EXAMPLES.md
|
||||
# Real-World Optimization Examples
|
||||
|
||||
Case studies showing prompt-architecting in action.
|
||||
|
||||
## Example 1: Skill Generation
|
||||
|
||||
### Original Request
|
||||
|
||||
```
|
||||
User: "Create a skill for processing PDFs"
|
||||
```
|
||||
|
||||
### Without Optimization
|
||||
|
||||
Claude generates:
|
||||
|
||||
- 1,200-line SKILL.md with 15 subsections
|
||||
- Complete PDF specification history
|
||||
- 8 different extraction libraries compared
|
||||
- 20 edge cases documented inline
|
||||
- Security considerations essay
|
||||
- Performance tuning guide
|
||||
- Complete API reference
|
||||
|
||||
### With prompt-architecting
|
||||
|
||||
**Subagent analysis:**
|
||||
|
||||
- Task: Skill creation
|
||||
- Complexity: Medium (PDF parsing is known domain)
|
||||
- Risk: Over-documentation of libraries, edge cases
|
||||
- Optimal strategies: Constraint-Based, Template-Based, Progressive Disclosure
|
||||
|
||||
**Optimized prompt:**
|
||||
|
||||
```
|
||||
Create PDF processing skill. SKILL.md MAX 300 words:
|
||||
- When to use (50w)
|
||||
- Core workflow: upload → extract text/images → output (100w)
|
||||
- One example showing pypdf2 usage (code only, 100w)
|
||||
- Move to references/: library comparison (pypdf2 vs pdfplumber), OCR for scanned PDFs, encoding issues
|
||||
|
||||
Match this style: {paste concise skill example from building-skills}
|
||||
```
|
||||
|
||||
**Result:** 280-word SKILL.md, 2 reference files, actionable and maintainable
|
||||
|
||||
---
|
||||
|
||||
## Example 2: Documentation Generation
|
||||
|
||||
### Original Request
|
||||
|
||||
```
|
||||
User: "Document the API endpoints in src/api/"
|
||||
```
|
||||
|
||||
### Without Optimization
|
||||
|
||||
Claude writes:
|
||||
|
||||
- Comprehensive REST API guide
|
||||
- HTTP protocol primer
|
||||
- Authentication deep-dive
|
||||
- Rate limiting theory
|
||||
- Pagination best practices
|
||||
- Error handling philosophy
|
||||
- 40 pages of markdown
|
||||
|
||||
### With prompt-architecting
|
||||
|
||||
**Subagent analysis:**
|
||||
|
||||
- Task: API documentation
|
||||
- Complexity: Low (structured data)
|
||||
- Risk: Theory instead of reference
|
||||
- Optimal strategies: Output Formatting, Template-Based, Negative Prompting
|
||||
|
||||
**Optimized prompt:**
|
||||
|
||||
```
|
||||
Document API endpoints. Format per endpoint:
|
||||
## [METHOD] /path
|
||||
Purpose: {1 sentence}
|
||||
Auth: {required|optional|none}
|
||||
Params: {table: name, type, required, description}
|
||||
Response: {JSON schema only}
|
||||
Errors: {codes list}
|
||||
Example: {curl + response}
|
||||
|
||||
DO NOT: Include HTTP theory, auth implementation details, or pagination strategy essays.
|
||||
Target: API reference, not guide.
|
||||
```
|
||||
|
||||
**Result:** Clean reference docs, 8 endpoints in 4 pages, instantly usable
|
||||
|
||||
---
|
||||
|
||||
## Example 3: Plan Creation

### Original Request

```
User: "Plan the implementation of user notification system"
```

### Without Optimization

Claude creates:

- 15-page plan with every possible notification type
- Complete microservices architecture
- Email, SMS, push, in-app, webhook notifications
- Queue theory and message broker comparison
- Scaling strategy for 1M users
- 30 implementation tasks spanning 6 months

### With prompt-architecting

**Subagent analysis:**

- Task: Implementation plan
- Complexity: High (but scope unclear)
- Risk: Over-engineering, scope creep
- Optimal strategies: Decomposition, Directive Hierarchy, Constraint-Based

**Optimized prompt:**

```
Create notification system plan. Scope: Email + in-app only (MVP).

MUST cover:
- Data model (users, notifications, preferences)
- Notification creation flow
- Delivery mechanism (async job)
- User preferences UI

SHOULD cover:
- Read/unread state
- Basic templates

OUT OF SCOPE: SMS, push, webhooks, scaling >10K users

Structure: [Problem] [Phase 1 tasks] [Phase 2 tasks] [Success criteria]
Target: 2-week implementation for 2 devs
```

**Result:** Focused 4-page plan, 12 actionable tasks, clear scope, deliverable

---
## Example 4: Code Review Request

### Original Request

```
User: "Review this code and tell me what you think"
```

### Without Optimization

Claude provides:

- Line-by-line analysis
- Code style commentary
- Design pattern suggestions
- Performance speculation
- Theoretical security concerns
- Refactoring proposal (3 alternatives)
- 800 lines of feedback

### With prompt-architecting

**Subagent analysis:**

- Task: Code review (but criteria unknown)
- Complexity: Medium
- Risk: Unfocused feedback overload
- Optimal strategies: Constraint-Based, Audience-Targeted, Decomposition

**Optimized prompt:**

```
Review code for: (1) Bugs/errors, (2) Security issues, (3) Performance bottlenecks.

Format:
## Issues Found
- [SEVERITY] Location: Description + fix suggestion

## Summary
- {Count by severity}
- {Top priority item}

DO NOT: Comment on style, suggest refactorings, or discuss alternative patterns unless directly related to bugs/security/performance.

Audience: Code works, need to ship, focused review only.
```

**Result:** 15-line review, 2 bugs found, 1 security fix, actionable

---
**For advanced workflow and agent optimization examples, see ADVANCED-EXAMPLES.md**

---

## Lessons Learned

**Unspecified scope = maximal scope** (Examples 1-3): Without constraints, Claude assumes comprehensive coverage. Fix: Set MAX length and explicit boundaries.

**Complexity triggers research mode** (Examples 1, 2): Unfamiliar topics trigger defensive over-documentation. Fix: Progressive Disclosure - overview now, details in references.

**Ambiguous success = everything** (Example 3): "Help me understand" lacks definition of done. Fix: Define success concretely ("New dev deploys in <10min").

**Implicit = inclusion** (Examples 2, 4): Unexcluded edge cases get included. Fix: Negative Prompting to exclude known bloat.

**Workflow patterns** (see ADVANCED-EXAMPLES.md): Numbered steps don't mandate completion after async operations. Fix: Execution Flow Control.

**Meta-lesson**: Every optimization uses 2-3 strategies, never just one. Pair Constraint-Based with structure (Template/Format) or exclusion (Negative). For workflows with dependencies, Execution Flow Control is mandatory.
2325
skills/prompt-architecting/references/OPTIMIZATION-SAFETY-GUIDE.md
Normal file
File diff suppressed because it is too large

249
skills/prompt-architecting/references/STRATEGIES.md
Normal file
@@ -0,0 +1,249 @@
# Prompting Strategies Catalog

Reference for prompt-architect. Each strategy includes when to use and an example pattern.

# IMPORTANT: Read Safety Guide First

Before selecting strategies, read OPTIMIZATION-SAFETY-GUIDE.md to understand:

- When NOT to optimize
- Three dimensions of optimization (verbosity, structure, notation)
- Over-optimization risks
- Natural language vs technical strategy decision criteria
- Callable entity preservation requirements
- Strategy combination limits (1-3 max)
- Cognitive load as the core metric

This ensures trustworthy optimization that reduces cognitive load while preserving intent.

---
## 1. Constraint-Based Prompting

**When**: Task scope clear but tends toward over-generation
**Pattern**: Set hard boundaries on length/scope
**Example**: `Generate auth docs. MAX 300 words. Cover only: setup, usage, errors.`

## 2. Progressive Disclosure

**When**: Complex topics where details can be separated
**Pattern**: Overview in main doc, details in references
**Example**: `Write skill overview (100w), then separate reference docs for: API specs, edge cases, examples.`

## 3. Template-Based

**When**: Output needs consistent structure
**Pattern**: Provide fill-in-the-blank format
**Example**: `Follow: [Problem] [Solution in 3 steps] [One example] [Common pitfall]`

## 4. Directive Hierarchy

**When**: Mixed priority requirements
**Pattern**: Use MUST/SHOULD/MAY tiers
**Example**: `MUST: Cover errors. SHOULD: Include 1 example. MAY: Reference advanced patterns.`

## 5. Negative Prompting

**When**: Known tendency to add unwanted content
**Pattern**: Explicitly exclude behaviors
**Example**: `Write deploy guide. DO NOT: framework comparisons, history, "best practices" essays.`

## 6. Few-Shot Learning

**When**: Abstract requirements but concrete examples exist
**Pattern**: Show 2-3 examples of desired output
**Example**: `Good doc: [150w example]. Bad doc: [verbose example]. Follow "good" pattern.`

## 7. Decomposition

**When**: Complex multi-step tasks
**Pattern**: Break into numbered discrete subtasks
**Example**: `Step 1: Identify 3 use cases. Step 2: 50w description each. Step 3: 1 code example each.`

## 8. Comparative/Contrastive

**When**: Need to show difference between good/bad
**Pattern**: Side-by-side ❌/✅ examples
**Example**: `❌ "Comprehensive guide covering everything..." ✅ "Setup: npm install. Use: auth.login()."`

## 9. Anchoring

**When**: Have reference standard to match
**Pattern**: Provide example to emulate
**Example**: `Match style/length of this: [paste 200w reference doc]`

## 10. Output Formatting

**When**: Structure more important than content discovery
**Pattern**: Specify exact section structure
**Example**: `Format: ## Problem (50w) ## Solution (100w) ## Example (code only)`

## 11. Density Optimization

**When**: Content tends toward fluff/filler
**Pattern**: Maximize information per word
**Example**: `Write as Hemingway: short sentences, concrete nouns, active voice. Every sentence advances understanding.`

## 12. Audience-Targeted

**When**: Reader expertise level known
**Pattern**: Specify what to skip based on audience
**Example**: `Audience: Senior dev who knows React. Skip basics, focus on gotchas and our implementation.`
## 13. Execution Flow Control

**When**: Complex workflows requiring state management, branching control, or approval gates
**Pattern**: Mandate complete execution with explicit flow control and dependencies
**Example**:

```markdown
Execute this workflow completely:
1. READ: Use Read tool on $1 → content
2. PARSE: Extract front matter + body → {front_matter, body}
3. OPTIMIZE: Use prompt-architecting skill → optimized_body
4. PRESENT: Show optimized_body → STOP, WAIT for approval

EXECUTION RULES:
- Stop only at step 4 (user approval required)
- Task incomplete until approval received
```

**Indicators**:
- REQUIRED: User approval gates, multiple terminal states, 3-way+ branching, complex state tracking
- NOT REQUIRED: Simple sequential tasks, linear flow, skill invocations for data only

**Anti-pattern**: Using EFC for simple tasks that can be expressed as "Do X, then Y, then Z"

See OPTIMIZATION-GUIDE.md for the complete Execution Flow Control pattern, language guidelines, and agent/workflow optimization standards.
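The REQUIRED/NOT REQUIRED indicators reduce to a quick boolean check. A minimal sketch, assuming you have already counted these properties of the workflow by hand (function and parameter names are illustrative, not part of the skill):

```python
def efc_required(has_approval_gates: bool,
                 terminal_states: int,
                 branch_width: int,
                 complex_state_tracking: bool) -> bool:
    """True when any REQUIRED indicator for Execution Flow Control holds:
    approval gates, multiple terminal states, 3-way+ branching, or
    complex state tracking. Simple linear flows return False."""
    return (has_approval_gates
            or terminal_states > 1
            or branch_width >= 3
            or complex_state_tracking)
```

A linear task with one terminal state, two-way branching at most, and no gates returns False, which matches the anti-pattern warning above: such tasks should stay as "Do X, then Y, then Z".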
## 14. Natural Language Reframing

**When**: 1-2 step tasks where sequence is obvious or trivial
**Pattern**: Rewrite as clear prose when enumeration adds no clarity
**Example**:

Input (over-enumerated):
```markdown
1. Read the file at the provided path
2. Write it back with modifications
```

Output (natural language):
```markdown
Read the file and write it back with modifications.
```

**Research findings**: Enumeration helps for 3+ steps (improves thoroughness, reduces ambiguity, provides cognitive anchors). Only skip enumeration when:

- 1-2 very simple steps
- Sequence is completely obvious
- Structure would add no clarity

**Indicators for natural language**:

- Task is genuinely 1-2 steps (not 3+ steps disguised as one job)
- Sequence is trivial/obvious
- No need for LLM to address each point thoroughly
- Enumeration would be redundant

**Why research matters**: Studies show prompt formatting impacts performance by up to 40%. Numbered lists help LLMs:

- Understand sequential steps clearly
- Address each point thoroughly and in order
- Reduce task sequence ambiguity
- Provide cognitive anchors that reduce hallucination

**Anti-pattern**: Avoiding enumeration for 3+ step tasks. Research shows structure helps more than it hurts for multi-step instructions.

**Revised guidance**: Default to enumeration for 3+ steps. Use natural language only when complexity truly doesn't justify structure.

---
## 15. Technical → Natural Transformation

**When**: Over-technical notation detected (3+ indicators) and cognitive load test shows notation hurts understanding

**Indicators**:

- CAPS labels as action markers (CHECK:, PARSE:, VALIDATE:)
- → notation for data flow (→ variable_name)
- Variable naming conventions (work_file_status, requirement_data)
- Function call syntax (tool({params}))
- Sub-step enumeration (a/b/c when prose would work)
- Defensive meta-instructions ("DO NOT narrate", "continue immediately")
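The "3+ indicators" threshold can be approximated mechanically. A minimal sketch of such a detector, assuming simple regex heuristics; the pattern names and expressions are illustrative, not part of the skill, and will produce false positives (e.g. ordinary snake_case identifiers):

```python
import re

# Illustrative heuristics for the six indicators above; not part of the skill.
INDICATORS = {
    "caps_label": re.compile(r"^\s*(?:\d+\.\s*)?[A-Z]{3,}:", re.MULTILINE),  # CHECK:, PARSE:
    "arrow_flow": re.compile(r"→\s*\w+"),                                    # → variable_name
    "snake_vars": re.compile(r"\b[a-z]+(?:_[a-z]+)+\b"),                     # work_file_status
    "call_syntax": re.compile(r"\w+\(\{"),                                   # tool({params})
    "sub_steps": re.compile(r"^\s+[a-c]\.\s", re.MULTILINE),                 # a. / b. / c.
    "defensive": re.compile(r"DO NOT narrate|continue immediately", re.I),
}

def notation_indicator_count(prompt: str) -> int:
    """Count how many distinct over-technical indicators appear in a prompt.

    Three or more suggests applying the Technical → Natural transformation."""
    return sum(1 for rx in INDICATORS.values() if rx.search(prompt))
```

Run against the "Before (over-technical)" example later in this section and it scores well above the threshold; the "After" version scores near zero.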
**Pattern**: Keep appropriate structure level (based on complexity score), simplify notation to organized natural language

**Transformation**:

- CAPS labels → natural section headers or action verbs
- → notation → implicit data flow or prose
- Variable names → eliminate or minimize
- Function call syntax → natural tool mentions
- Sub-step enumeration → consolidate to prose
- Defensive warnings → remove (trust structure)

**Example**:

Before (over-technical):
```
1. CHECK: Verify status → work_file_status
   a. Use Bash `git branch` → branch_name
   b. Check if file exists
   c. DO NOT proceed if exists
2. PARSE: Extract data → requirement_data
```

After (organized natural):
```
## Setup

Get current branch name and check if work file already exists. If it exists, stop and tell user to use /dev-resume.

Parse the requirement source...
```

**Why this works**:

- Preserves appropriate structure (complexity still warrants organization)
- Removes ceremonial notation that creates cognitive load
- Eliminates stopping risk (no CAPS/→/variables creating boundaries)
- Natural language is clearer for LLM audiences
- Reduces cognitive load significantly

**Often solves multiple problems simultaneously**:

- Dimension 3 (notation clarity)
- Stopping risk (no false completion boundaries)
- Cognitive load reduction

**May be sufficient optimization alone** - don't over-optimize by adding more strategies.

**See OPTIMIZATION-SAFETY-GUIDE.md Part 3 for detailed examples and Part 6 for the stopping risk relationship.**

---
## Strategy Selection Guide

**FIRST**: Calculate complexity score (see SKILL.md Step 2). Let score guide structure level.

**New addition**: Technical → Natural Transformation (applies across all complexity levels when notation is over-technical)

**By complexity score** (research-informed):

- **Score ≤ 0**: Natural Language Reframing acceptable (1-2 trivial steps). Add Constraint-Based if word limits needed.
- **Score 1-2**: Use numbered enumeration (research: 3+ steps benefit from structure). Add Template-Based or Constraint-Based. Avoid heavy EFC.
- **Score 3-4**: Moderate structure (enumeration + opening mandate). Add Decomposition or Template-Based. No EXECUTION RULES yet.
- **Score ≥ 5**: Full EFC pattern (mandate + EXECUTION RULES). Add Decomposition + Directive Hierarchy.
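The score thresholds above amount to a small routing table. A minimal sketch (the tier labels are illustrative shorthand for the four levels above):

```python
def structure_level(score: int) -> str:
    """Map a complexity score to the recommended structure tier:
    <=0 natural language, 1-2 enumeration, 3-4 moderate structure,
    >=5 full Execution Flow Control."""
    if score <= 0:
        return "natural-language"
    if score <= 2:
        return "enumeration"
    if score <= 4:
        return "moderate-structure"
    return "full-efc"
```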
**By output type**:

**For skills**: Constraint-Based + Template-Based primary. Add Progressive Disclosure (move details to references/).

**For documentation**: Output Formatting + Density Optimization primary. Add Audience-Targeted or Negative Prompting conditionally.

**For plans**: Template-Based + Decomposition primary. Add Directive Hierarchy for priority tiers.

**For simple workflows** (can be described as a single job): Natural Language Reframing primary. Avoid enumeration and formal structure.

**For complex workflows** (approval gates, multiple terminal states): Execution Flow Control (appropriate level based on score) + Decomposition. Apply agent/workflow optimization guidelines (40-50% reduction, preserve procedural detail). See OPTIMIZATION-GUIDE.md for specifics.

**General complexity-based**:

- Low: 1-2 strategies (Natural Language Reframing or Constraint-Based + Output Formatting)
- Medium: 2 strategies (Template-Based + Constraint-Based or light EFC)
- High: 2-3 strategies max (full EFC + Decomposition, or Natural Language + Progressive Disclosure)

**Rule**: 1-3 strategies optimal. More than 3 = over-optimization risk.