Initial commit

This commit is contained in:
Zhongwei Li
2025-11-29 18:48:55 +08:00
commit f28999f19c
127 changed files with 62038 additions and 0 deletions

View File

@@ -0,0 +1,773 @@
# Claude-Specific Techniques
## Overview
Claude (by Anthropic) excels at instruction following, long-context tasks, and detailed analysis. This guide covers techniques optimized specifically for Claude, based on official Anthropic documentation and best practices.
## Claude's Strengths
### What Claude Does Best
1. **Instruction adherence** - Exceptional at following complex, detailed instructions
2. **Long context** - Handles 100K+ tokens effectively
3. **Structured output** - Excels with XML and structured formats
4. **Safety and helpfulness** - Well-calibrated for harmless, helpful responses
5. **Analysis depth** - Thorough analysis when asked
6. **Reasoning clarity** - Shows work step-by-step naturally
### Key Differentiators
**Claude vs GPT-5:**
- Better with very long contexts (200K vs GPT-5's limits)
- More conservative (less likely to make things up)
- Responds better to XML structuring
- Excellent at detailed instruction following
- Strong at nuanced analysis
## XML Tag Structuring
### Why XML Tags Matter
Claude was trained with XML tags extensively, making them highly effective for:
- Separating different types of content
- Creating clear structure
- Guiding output format
- Organizing complex prompts
### Basic XML Pattern
```xml
<instruction>
Your main task description
</instruction>
<context>
Background information Claude needs
</context>
<constraints>
- Limitation 1
- Limitation 2
</constraints>
<output_format>
Expected structure of the response
</output_format>
```
### Common Tag Names
**Use these for clarity:**
- `<instruction>` - Main task
- `<context>` - Background info
- `<examples>` - Sample inputs/outputs
- `<constraints>` - Limitations
- `<output_format>` - Desired structure
- `<data>` - Data to process
- `<document>` - Long-form content
- `<code>` - Code snippets
- `<thinking>` - Request reasoning steps
**Example:**
```xml
<instruction>
Review this code for security vulnerabilities
</instruction>
<code>
function login(username, password) {
const query = `SELECT * FROM users WHERE username='${username}'`;
// ... more code
}
</code>
<focus_areas>
- SQL injection
- Password handling
- Session management
</focus_areas>
<output_format>
For each issue:
- Severity: Critical/High/Medium/Low
- Location: Line number
- Problem: Description
- Fix: Recommended solution
</output_format>
```
## Step-by-Step Thinking
### Enabling Reasoning
Claude naturally shows reasoning when asked, but you can enhance it:
**Simple approach:**
```
Think step-by-step about this problem:
[Problem description]
```
**Structured approach:**
```xml
<instruction>
Solve this problem by thinking through it step-by-step
</instruction>
<problem>
[Problem description]
</problem>
<thinking_process>
1. First, identify the key components
2. Then, analyze relationships
3. Next, consider edge cases
4. Finally, provide solution
</thinking_process>
```
**Explicit reasoning request:**
```
Before providing your answer, explain your reasoning process.
<thinking>
[Let Claude show its work here]
</thinking>
<answer>
[Then provide the final answer here]
</answer>
```
### Benefits
- More accurate results
- Visible logic for debugging
- Builds trust in responses
- Helps Claude catch its own errors
- Better for complex problems
## Clear and Specific Instructions
### The Clarity Principle
**Treat Claude like a skilled intern on day one:**
- Provide ALL necessary context
- Define terms explicitly
- Give specific examples
- State assumptions clearly
- Specify exact requirements
### Bad vs Good Instructions
**❌ Vague:**
```
"Make this better"
```
**✅ Specific:**
```
"Refactor this function to:
1. Reduce cyclomatic complexity below 5
2. Extract helper functions for clarity
3. Add type annotations
4. Include JSDoc comments
5. Handle null/undefined edge cases"
```
**❌ Ambiguous:**
```
"Write professional content"
```
**✅ Explicit:**
```
"Write in formal business tone with these characteristics:
- No contractions (use 'do not' vs 'don't')
- Active voice
- Sentences under 20 words
- Direct address ('you') rather than third person
- Industry-standard terminology
- Professional but approachable"
```
### Specificity Checklist
- [ ] Exact task clearly stated
- [ ] Output format specified
- [ ] Length/size requirements defined
- [ ] Tone/style described
- [ ] Edge cases addressed
- [ ] Success criteria stated
- [ ] Examples provided (if applicable)
- [ ] Constraints listed
## Progressive Disclosure
### Why It Matters for Claude
With Claude's 100K+ token context window, you might be tempted to dump everything at once. Don't.
**Progressive disclosure benefits:**
- More focused responses
- Better token efficiency
- Clearer reasoning chains
- Easier to course-correct
- More maintainable conversations
### Pattern: Multi-Stage Workflow
**Stage 1: High-level analysis**
```xml
<instruction>
Analyze this codebase structure and provide:
1. List of main components
2. Key dependencies
3. Overall architecture pattern
</instruction>
<codebase>
[File structure]
</codebase>
```
**Stage 2: Focused deep-dive**
```xml
<instruction>
Now, for the authentication component you identified,
perform a detailed security review
</instruction>
<context>
You previously identified the auth component at: [location]
</context>
<focus>
Review for:
- Authentication bypass vulnerabilities
- Token security
- Session management
- Password handling
</focus>
```
**Stage 3: Implementation**
```xml
<instruction>
Based on the security issues you found, implement fixes
for the high-severity items
</instruction>
<issues_to_fix>
[High severity issues from previous response]
</issues_to_fix>
```
### When to Use Progressive Disclosure
**Good for:**
- Large codebases
- Multi-step workflows
- Research tasks
- Iterative refinement
- Complex projects
**Not needed for:**
- Simple, single-shot tasks
- Well-defined transformations
- When all context fits easily
## Example-Driven Prompting
### Few-Shot with Claude
Claude excels at learning from examples. Show 2-5 examples of desired behavior:
```xml
<instruction>
Convert casual messages to professional email format
</instruction>
<examples>
<example>
<input>
Hey! Got your message. That sounds cool, let's do it!
</input>
<output>
Thank you for your message. I appreciate the proposal and
would be pleased to move forward with this initiative.
</output>
</example>
<example>
<input>
Thanks! I'll get back to you later about this.
</input>
<output>
Thank you. I will provide you with my response regarding
this matter at my earliest convenience.
</output>
</example>
</examples>
<task>
Now convert this message:
"Sounds good! See you next week for the meeting."
</task>
```
### Example Quality Matters
**Good examples:**
- Cover edge cases
- Show variety
- Are realistic
- Demonstrate exact format
- Include challenging scenarios
**Poor examples:**
- Too similar to each other
- Unrealistic/contrived
- Missing edge cases
- Inconsistent format
## Long-Context Best Practices
### Leveraging Claude's 100K+ Token Window
Claude can handle massive context, but structure it well:
```xml
<task>
Analyze all these customer support tickets for common issues
</task>
<tickets>
<ticket id="1">
[Ticket 1 content]
</ticket>
<ticket id="2">
[Ticket 2 content]
</ticket>
<!-- ... many more tickets ... -->
<ticket id="1523">
[Ticket 1523 content]
</ticket>
</tickets>
<analysis_instructions>
1. Categorize issues by type
2. Identify the top 5 most common problems
3. Calculate frequency for each
4. Recommend solutions for top issues
5. Note any patterns across tickets
</analysis_instructions>
<output_format>
## Issue Categories
[List of categories found]
## Top 5 Issues
1. [Issue]: [Frequency] - [Description]
- Recommended solution: [Solution]
## Patterns Observed
[Cross-ticket patterns]
</output_format>
```
### Long-Context Tips
1. **Use clear delimiters** - XML tags, markdown headers
2. **Structure hierarchically** - Group related content
3. **Reference by ID** - Make it easy to cite specific parts
4. **Summarize upfront** - Give Claude the big picture first
5. **Be specific in queries** - Don't make Claude search the entire context
## Document Analysis Pattern
### For Large Documents
```xml
<instruction>
Analyze this technical specification document
</instruction>
<document>
[Large technical document - can be 50K+ tokens]
</document>
<analysis_framework>
1. COMPLETENESS: Are all necessary sections present?
2. CLARITY: Are requirements unambiguous?
3. CONSISTENCY: Any contradictions?
4. FEASIBILITY: Any technical concerns?
5. GAPS: What's missing?
</analysis_framework>
<output_format>
For each framework dimension:
## [Dimension]
- Assessment: [Good/Needs Work/Critical Issues]
- Findings: [Specific findings with section references]
- Recommendations: [What to improve]
</output_format>
```
## Code Review Pattern
### Thorough Code Analysis
```xml
<instruction>
Conduct a comprehensive code review
</instruction>
<code_files>
<file path="auth/service.ts">
[Code content]
</file>
<file path="auth/controller.ts">
[Code content]
</file>
<file path="auth/middleware.ts">
[Code content]
</file>
</code_files>
<review_criteria>
<security>
- Authentication/authorization flaws
- Input validation
- Sensitive data exposure
- Injection vulnerabilities
</security>
<quality>
- Code complexity
- Naming conventions
- DRY violations
- Error handling
</quality>
<performance>
- N+1 queries
- Unnecessary loops
- Inefficient algorithms
</performance>
</review_criteria>
<output_format>
For each issue found:
### [Issue Title]
- **Severity**: Critical/High/Medium/Low
- **Category**: Security/Quality/Performance
- **Location**: `[file]:[line]`
- **Problem**: [Clear description]
- **Impact**: [Why this matters]
- **Fix**: [Specific code solution]
At the end:
### Summary
- Total issues: [count by severity]
- Recommendation: Approve / Request Changes / Comment
- Must-fix before merge: [Critical/High issues]
</output_format>
```
## Iterative Refinement Pattern
### Building Complex Solutions
**Round 1: Initial Draft**
```xml
<instruction>
Create an initial draft of a REST API design for user management
</instruction>
<requirements>
- CRUD operations for users
- Authentication required
- Role-based permissions
- Email verification flow
</requirements>
```
**Round 2: Refine Based on Feedback**
```xml
<instruction>
Refine the API design based on this feedback
</instruction>
<original_design>
[Claude's previous response]
</original_design>
<feedback>
- Add rate limiting specifications
- Include pagination for list endpoints
- Define error response format
- Add API versioning strategy
</feedback>
```
**Round 3: Implementation Details**
```xml
<instruction>
Now provide implementation details for the refined design
</instruction>
<context>
Refined API design: [design from round 2]
</context>
<implementation_requirements>
- TypeScript with Express
- PostgreSQL database
- JWT authentication
- Include OpenAPI spec
</implementation_requirements>
```
## Structured Output Enforcement
### Guaranteed Format with XML
```xml
<instruction>
Analyze this data and respond ONLY in the specified XML format
</instruction>
<data>
[Data to analyze]
</data>
<required_output_format>
<analysis>
<summary>One paragraph overview</summary>
<metrics>
<metric name="[name]" value="[value]" trend="up|down|stable"/>
<!-- Repeat for each metric -->
</metrics>
<insights>
<insight priority="high|medium|low">
<finding>What was discovered</finding>
<implication>What it means</implication>
<action>What to do</action>
</insight>
<!-- Repeat for each insight -->
</insights>
</analysis>
</required_output_format>
Do not include any text outside this XML structure.
```
## Common Claude Patterns
### Pattern 1: Document Summarization
```xml
<instruction>
Summarize this document for [target audience]
</instruction>
<document>
[Long document]
</document>
<summary_requirements>
- Length: [word count]
- Focus: [key aspects]
- Include: [specific elements]
- Tone: [formal/casual/technical]
</summary_requirements>
<output_structure>
1. Executive Summary (2-3 sentences)
2. Key Points (bullet list)
3. Important Details (1-2 paragraphs)
4. Action Items (if applicable)
</output_structure>
```
### Pattern 2: Data Extraction
```xml
<instruction>
Extract structured data from these unstructured documents
</instruction>
<documents>
<document id="1">
[Unstructured text]
</document>
<!-- More documents -->
</documents>
<extraction_schema>
{
"name": "string",
"date": "YYYY-MM-DD",
"amount": "number",
"category": "string",
"tags": ["string"]
}
</extraction_schema>
<output_format>
Return a JSON array of objects matching the schema
</output_format>
```
### Pattern 3: Comparative Analysis
```xml
<instruction>
Compare these options across the specified dimensions
</instruction>
<options>
<option name="Option A">
[Details]
</option>
<option name="Option B">
[Details]
</option>
<option name="Option C">
[Details]
</option>
</options>
<comparison_dimensions>
- Cost (initial and ongoing)
- Performance (speed, scalability)
- Complexity (setup, maintenance)
- Risk (technical, business)
- Timeline (implementation time)
</comparison_dimensions>
<output_format>
## Comparison Matrix
| Dimension | Option A | Option B | Option C |
|-----------|----------|----------|----------|
## Analysis
For each dimension, explain the differences
## Recommendation
Ranked recommendation with reasoning
</output_format>
```
## Best Practices Summary
### Structure Your Prompts
**DO:**
- Use XML tags for organization
- Separate concerns clearly
- Provide complete context
- Show examples
- Specify exact format
**DON'T:**
- Mix different types of content without delimiters
- Leave requirements implicit
- Assume Claude knows your context
- Skip format specification
### Leverage Long Context
**DO:**
- Include all relevant information
- Structure with clear sections
- Use progressive disclosure for complex tasks
- Reference specific parts by ID
**DON'T:**
- Dump unstructured mass of text
- Make Claude search for key info
- Forget to organize hierarchically
### Request Reasoning
**DO:**
- Ask for step-by-step thinking
- Request explanations
- Use `<thinking>` tags
- Show work for complex problems
**DON'T:**
- Accept black-box answers for important decisions
- Skip reasoning on complex tasks
### Be Specific
**DO:**
- Define exact requirements
- Specify tone, length, format
- Give concrete examples
- State success criteria
**DON'T:**
- Use vague terms like "better" or "professional"
- Assume Claude knows what you want
- Leave format open-ended
## Quick Reference
### Basic Template
```xml
<instruction>
[Clear, specific task]
</instruction>
<context>
[Background information]
</context>
<requirements>
- [Requirement 1]
- [Requirement 2]
</requirements>
<constraints>
- [Constraint 1]
- [Constraint 2]
</constraints>
<examples>
[If using few-shot]
</examples>
<output_format>
[Exact structure expected]
</output_format>
```
### Quick Tips
| Goal | Technique |
|------|-----------|
| Structured output | XML tags with schema |
| Complex reasoning | Ask for step-by-step thinking |
| Learning patterns | 2-5 few-shot examples |
| Long documents | XML sections with IDs |
| Precise format | Provide exact template |
| Detailed analysis | Multi-stage progressive disclosure |
| Consistent style | Specific style guidelines + examples |
---
**Remember:** Claude excels with clear structure, XML tags, and detailed instructions. Be explicit, provide context, and leverage its long-context capabilities.

View File

@@ -0,0 +1,681 @@
# GPT-5 Specific Techniques
## Overview
GPT-5 represents a significant advancement in LLM capabilities, requiring updated prompting strategies that differ from GPT-4 and earlier models. This guide covers GPT-5-specific techniques based on official OpenAI guidelines.
## Key Architectural Changes
### What's Different in GPT-5
1. **Agentic behavior** - GPT-5 is proactive and thorough by default
2. **Better reasoning** - Native step-by-step thinking without explicit prompting
3. **Instruction adherence** - Superior at following complex constraints
4. **Reduced hallucination** - More factually accurate
5. **Context handling** - Better at long-context tasks
### What No Longer Works
**Old GPT-4 techniques that fail with GPT-5:**
**Excessive repetition** - Don't repeat instructions multiple times
```
BAD (GPT-4 style):
"Write code. Remember to write code. Make sure you write code.
The task is to write code."
GOOD (GPT-5):
"Write a TypeScript authentication service with JWT support."
```
**Over-prompting for reasoning** - GPT-5 reasons well by default
```
BAD:
"Think carefully. Take your time. Reason step-by-step. Don't rush.
Make sure to think deeply..."
GOOD:
"Analyze this architecture decision:"
[GPT-5 will reason appropriately without excessive prompting]
```
**Contradictory instructions** - GPT-5 will call these out
```
BAD:
"Be concise but provide extensive detail with comprehensive examples..."
[GPT-5 may refuse or ask for clarification]
GOOD:
"Provide a detailed explanation (3-4 paragraphs) with 2 code examples"
```
## Structured Prompting Format
### The Triple-S Pattern: System, Specification, Structure
**Optimal GPT-5 prompt structure:**
```
SYSTEM (Role + Context):
You are [specific role with relevant expertise]
[Brief context about the situation]
SPECIFICATION (Task + Constraints):
Task: [Clear, specific task]
Requirements:
- [Requirement 1]
- [Requirement 2]
Constraints:
- [Limitation 1]
- [Limitation 2]
STRUCTURE (Output Format):
Provide output as:
[Exact format specification]
```
**Example:**
```
SYSTEM:
You are a senior security engineer reviewing authentication systems.
This is a production system handling 1M+ daily active users.
SPECIFICATION:
Task: Review this authentication implementation for security vulnerabilities
Requirements:
- Focus on OWASP Top 10 vulnerabilities
- Consider token-based auth best practices
- Evaluate session management approach
Constraints:
- Only flag issues with security implications
- Must provide specific line numbers
- Include severity ratings (Critical/High/Medium/Low)
STRUCTURE:
For each vulnerability found:
1. Title: [Brief description]
2. Severity: [Rating]
3. Location: [File:Line]
4. Explanation: [Why this is a problem]
5. Fix: [Specific code recommendation]
6. References: [Relevant security standards]
```
## Reasoning Effort Control
### Understanding Reasoning Effort
GPT-5 includes a `reasoning_effort` parameter that controls computational intensity:
- **Low** - Quick responses, simple tasks
- **Medium** (default) - Balanced approach
- **High** - Deep reasoning, complex problems
### When to Adjust Reasoning Effort
**Use Low reasoning_effort:**
- Simple transformations
- Formatting tasks
- Quick classifications
- Template filling
- Fast iterations needed
**Use Medium reasoning_effort:**
- Standard coding tasks
- Content generation
- Analysis tasks
- Most general work
**Use High reasoning_effort:**
- Complex algorithms
- Architecture decisions
- Security analysis
- Mathematical proofs
- Multi-step reasoning
- Critical decisions
### Prompting for Higher Reasoning
If you can't control the API parameter, trigger higher reasoning in prompts:
```
"This is a high-stakes decision requiring careful analysis.
Consider all edge cases and potential failure modes before responding.
[Your task]"
```
**Magic phrases that trigger deeper reasoning:**
- "This is a high-stakes/critical task"
- "Analyze thoroughly before responding"
- "Consider all edge cases"
- "Think through potential failure modes"
## Agentic Behavior Calibration
### Understanding GPT-5's Agentic Nature
GPT-5 is proactive by default - it will:
- Ask clarifying questions
- Suggest improvements
- Point out potential issues
- Offer alternatives
- Request more context
### Controlling Agentic Behavior
**For maximum autonomy (task completion mode):**
```
"Complete this task end-to-end without asking for guidance.
Make reasonable assumptions where needed and persist until
the task is fully handled.
[Task description]"
```
**For collaborative mode (step-by-step):**
```
"Work on this task incrementally, asking for my confirmation
before making significant decisions or assumptions.
[Task description]"
```
**For minimal agency (just answer):**
```
"Answer this question directly based only on the information
provided. Do not suggest alternatives or ask for clarification.
[Question]"
```
### Examples
**Task Completion Mode:**
```
"Implement a complete REST API for user management with
authentication. Make standard architectural decisions and
persist until fully implemented. Use TypeScript, Express,
and PostgreSQL."
Result: GPT-5 will complete entire implementation without
asking intermediate questions.
```
**Collaborative Mode:**
```
"Let's build a REST API for user management. Start by proposing
the overall architecture and wait for my feedback before
implementing."
Result: GPT-5 will propose, wait for approval, then implement
each piece with check-ins.
```
## Verbosity Management
### The Verbosity Challenge
GPT-5 tends to be thorough (sometimes too thorough), providing:
- Detailed explanations
- Multiple examples
- Edge case discussions
- Alternative approaches
### Controlling Output Length
**For concise output:**
```
"Provide a concise implementation:
- Core functionality only
- Minimal comments
- Under 50 lines of code
- No explanatory text"
```
**For detailed output:**
```
"Provide a comprehensive solution:
- Fully documented code
- Multiple examples showing different use cases
- Detailed explanation of design decisions
- Error handling for edge cases"
```
**Using word/line limits:**
```
"Explain JWT authentication in exactly 3 paragraphs (about 150 words total)"
"Write this function in under 30 lines with inline comments"
"List the top 5 issues only, one sentence each"
```
### Verbosity Spectrum
```
"Minimal" → Core answer only, no elaboration
"Concise" → Brief answer with key points
"Standard" → Balanced explanation with examples
"Detailed" → Thorough with multiple examples
"Comprehensive" → Exhaustive coverage of topic
```
## Prompt Optimization Workflow
### 1. Start with Clear Structure
```
ROLE: [Who the AI is]
TASK: [What to do]
CONSTRAINTS: [Limitations]
FORMAT: [Output structure]
```
### 2. Add Reasoning Guidance (if complex)
```
"Approach this step-by-step:
1. First, [step 1]
2. Then, [step 2]
3. Finally, [step 3]"
```
### 3. Specify Verbosity
```
"Provide a [concise/detailed/comprehensive] [output type]
of approximately [length]"
```
### 4. Set Agentic Behavior
```
"[Complete autonomously / Ask before major decisions /
Answer directly without suggestions]"
```
## Common GPT-5 Patterns
### Pattern 1: Code Generation
```
SYSTEM:
You are an expert [language] developer following [style guide]
TASK:
Implement [specific feature] with these requirements:
[Requirement list]
CODE STYLE:
- [Style point 1]
- [Style point 2]
OUTPUT:
1. Implementation code with JSDoc/docstrings
2. Unit tests covering main scenarios
3. Brief usage example
CONSTRAINTS:
- Under [X] lines for main implementation
- Follow [pattern/architecture]
- Handle [specific edge cases]
```
### Pattern 2: Analysis Tasks
```
SYSTEM:
You are a [domain] expert analyzing [subject]
DATA:
[Data/code/content to analyze]
ANALYSIS FRAMEWORK:
1. Identify [aspect 1]
2. Evaluate [aspect 2]
3. Assess [aspect 3]
4. Recommend [improvements]
OUTPUT FORMAT:
## Summary
[2-3 sentences]
## Findings
[Structured findings]
## Recommendations
[Prioritized action items]
```
### Pattern 3: Creative Content
```
SYSTEM:
You are a [type] writer with expertise in [domain]
TASK:
Create [content type] about [topic]
REQUIREMENTS:
- Audience: [target audience]
- Tone: [specific tone]
- Length: [word count]
- Style: [style guidelines]
INCLUDE:
- [Element 1]
- [Element 2]
STRUCTURE:
[Specific outline or format]
```
### Pattern 4: Decision Support
```
SYSTEM:
You are a [role] helping with a strategic decision
DECISION CONTEXT:
[Background information]
OPTIONS:
1. [Option 1]
2. [Option 2]
3. [Option 3]
ANALYSIS REQUIRED:
For each option, analyze:
- Pros and cons
- Risks and mitigations
- Resource requirements
- Implementation complexity
- Expected outcomes
RECOMMENDATION:
Provide ranked recommendation with reasoning
THINK STEP-BY-STEP:
Consider all implications before recommending
```
## Using the GPT-5 Prompt Optimizer
### What the Optimizer Does
OpenAI provides a prompt optimizer that:
- Identifies contradictions
- Removes redundancy
- Adds missing specifications
- Improves structure
- Suggests format improvements
- Fixes common anti-patterns
### When to Use It
**Good candidates for optimization:**
- Complex prompts with multiple requirements
- Prompts getting inconsistent results
- Long prompts with possible contradictions
- Migrating GPT-4 prompts to GPT-5
**Don't optimize:**
- Simple, working prompts
- Prompts that are already well-structured
- When you need full control over exact wording
### Optimization Process
1. Submit prompt to optimizer
2. Review suggested changes
3. Understand the reasoning
4. Test optimized version
5. Iterate if needed
**Example transformation:**
**Before (GPT-4 style):**
```
You are a developer. Write code. Make sure to write good code.
The code should be in Python. Don't forget to add comments.
Remember to handle errors. The code should be clean. Make it
maintainable. Think about edge cases. Write tests too.
```
**After (GPT-5 optimized):**
```
ROLE: Senior Python developer
TASK: Implement [feature]
REQUIREMENTS:
- Clean, maintainable code
- Comprehensive error handling
- Inline comments for complex logic
- Unit tests for main scenarios
OUTPUT:
1. Implementation
2. Tests
3. Usage example
```
## Advanced GPT-5 Techniques
### Technique 1: Constraint Layering
Stack constraints from general to specific:
```
GENERAL CONSTRAINTS:
- Language: TypeScript
- Framework: React
- Style: Functional components
SPECIFIC CONSTRAINTS:
- Use hooks (useState, useEffect)
- Props interface must be exported
- Handle loading and error states
EDGE CASE CONSTRAINTS:
- Handle empty data gracefully
- Debounce rapid state changes
- Clean up subscriptions in useEffect
```
### Technique 2: Format Enforcement
Use examples to enforce exact format:
```
OUTPUT FORMAT EXAMPLE:
{
"status": "success|error",
"data": {
"field1": "value",
"field2": 123
},
"metadata": {
"timestamp": "ISO 8601",
"version": "1.0"
}
}
Now process this input and return in exactly this format:
[Input data]
```
### Technique 3: Multi-Stage Reasoning
Break complex tasks into explicit stages:
```
STAGE 1 - ANALYSIS:
First, analyze the requirements and identify:
- Core functionality needed
- Technical constraints
- Potential challenges
STAGE 2 - DESIGN:
Then, design the solution:
- Propose architecture
- Choose patterns
- Plan data flow
STAGE 3 - IMPLEMENTATION:
Finally, implement with:
- Clean code
- Error handling
- Tests
Complete each stage fully before moving to the next.
```
### Technique 4: Self-Validation
Ask GPT-5 to validate its own output:
```
[Task description]
After completing the task, validate your output against:
1. All requirements met
2. No contradictions
3. Edge cases handled
4. Code compiles/runs
5. Tests pass
If validation fails, fix issues and re-validate.
```
## Common Mistakes with GPT-5
### Mistake 1: Over-Prompting
**Don't:**
```
"Please carefully think about this step-by-step and make sure
you reason through it thoroughly and don't rush and take your
time to analyze..."
```
**Do:**
```
"Analyze this step-by-step:"
```
### Mistake 2: Contradictory Requirements
**Don't:**
```
"Be very concise but include extensive detail and comprehensive
examples covering all edge cases"
```
**Do:**
```
"Provide a focused explanation (2-3 paragraphs) with one
representative example"
```
### Mistake 3: Unclear Success Criteria
**Don't:**
```
"Write good, clean code that follows best practices"
```
**Do:**
```
"Write code that:
- Passes TypeScript strict mode
- Has < 10 cyclomatic complexity per function
- Includes JSDoc for public methods
- Handles null/undefined explicitly"
```
### Mistake 4: Ignoring Agentic Behavior
**Don't:**
```
[Give vague task and get frustrated when GPT-5 asks questions]
```
**Do:**
```
"Complete this autonomously, making standard decisions..."
OR
"Let's work on this step-by-step with check-ins..."
```
## Best Practices Summary
### Structure
- Use SYSTEM/SPECIFICATION/STRUCTURE format
- Layer constraints from general to specific
- Provide explicit examples for format
### Reasoning
- Let GPT-5 reason naturally for most tasks
- Use "high-stakes" language for deeper analysis
- Break complex tasks into explicit stages
### Behavior
- Set clear agentic behavior expectations
- Use "persist until complete" for autonomy
- Use "ask before major decisions" for collaboration
### Verbosity
- Specify exact length requirements
- Use "concise" or "comprehensive" explicitly
- Give word/line count limits
### Optimization
- Start with clear structure
- Use the optimizer for complex prompts
- Test and iterate based on results
- Remove GPT-4 anti-patterns
## Quick Reference
### Prompt Template
```
SYSTEM:
You are [role] with [expertise]
[Context]
SPECIFICATION:
Task: [Clear task]
Requirements:
- [Req 1]
- [Req 2]
Constraints:
- [Limit 1]
- [Limit 2]
STRUCTURE:
[Exact output format]
BEHAVIOR:
[Complete autonomously / Collaborate / Direct answer only]
VERBOSITY:
[Concise/Standard/Detailed] - approximately [length]
```
### Quick Fixes
| Problem | Solution |
|---------|----------|
| Inconsistent results | Add explicit constraints |
| Too verbose | Specify length + "concise" |
| Asks too many questions | "Complete autonomously" |
| Not thorough enough | "High-stakes" + explicit stages |
| Wrong format | Provide format example |
| Contradictions | Use optimizer to identify |
| Slow responses | Lower reasoning_effort |
---
**Remember:** GPT-5 is smarter but needs clearer structure. Less repetition, more precision.

View File

@@ -0,0 +1,784 @@
# Prompt Optimization Strategies
## Overview
This guide covers systematic approaches to analyzing and improving prompts that aren't giving you the results you need. Learn how to diagnose problems, apply fixes, and measure improvements.
## The Optimization Process
```
1. IDENTIFY → What's wrong with current results?
2. DIAGNOSE → Why is the prompt failing?
3. FIX → Apply appropriate optimization
4. TEST → Verify improvement
5. ITERATE → Refine further if needed
```
---
## Step 1: Identify the Problem
### Common Prompt Problems
| Problem | Symptoms | Quick Test |
|---------|----------|------------|
| **Vague instructions** | Inconsistent outputs, missing requirements | Run prompt 3 times - do you get similar results? |
| **Missing context** | Wrong assumptions, irrelevant responses | Does the prompt include all necessary background? |
| **Unclear format** | Output structure varies | Does prompt specify exact format? |
| **Too complex** | Partial completion, confusion | Does prompt try to do too much at once? |
| **Wrong technique** | Poor accuracy, missed patterns | Does task match technique used? |
| **Model mismatch** | Suboptimal results | Are you using model-specific features? |
| **No examples** | Pattern not learned | Would examples help? |
| **Ambiguous terms** | Misinterpreted requirements | Are terms like "professional" defined? |
### Diagnostic Questions
Ask yourself:
**Clarity:**
- [ ] Is the task stated explicitly?
- [ ] Are all requirements spelled out?
- [ ] Would a junior developer understand this?
**Completeness:**
- [ ] Is all necessary context included?
- [ ] Are constraints specified?
- [ ] Is output format defined?
**Consistency:**
- [ ] Does running it 3x give similar results?
- [ ] Are there contradictory instructions?
- [ ] Is terminology used consistently?
**Appropriateness:**
- [ ] Is the right technique being used?
- [ ] Is this the right model for the task?
- [ ] Is complexity level appropriate?
---
## Step 2: Diagnose Root Cause
### Problem: Inconsistent Results
**Diagnosis:**
- Instructions too vague
- Missing constraints
- Ambiguous terminology
- No output format specified
**How to confirm:**
Run the same prompt 3-5 times and compare outputs.
**Evidence:**
```
Run 1: Returns JSON
Run 2: Returns markdown table
Run 3: Returns plain text
Diagnosis: No format specification
```
### Problem: Wrong or Irrelevant Answers
**Diagnosis:**
- Missing context
- Unclear task definition
- Wrong assumptions by model
- No examples provided
**How to confirm:**
Does the prompt include all information needed to complete the task correctly?
**Evidence:**
```
Prompt: "Optimize this code"
Response: "What language is this? What should I optimize for?"
Diagnosis: Missing context (language, optimization goals)
```
### Problem: Partial Completion
**Diagnosis:**
- Task too complex for single shot
- Prompt too long/confusing
- Multiple competing requirements
- Agentic behavior not calibrated
**How to confirm:**
Break task into smaller parts - does each part work individually?
**Evidence:**
```
Prompt: "Build complete e-commerce backend with auth, payments, inventory"
Response: Only implements auth, ignores rest
Diagnosis: Too complex, needs progressive disclosure
```
### Problem: Incorrect Pattern Application
**Diagnosis:**
- Using few-shot when zero-shot would work
- Missing chain-of-thought for complex reasoning
- No examples when pattern matching needed
- Wrong model for the task
**How to confirm:**
Compare results with different techniques.
**Evidence:**
```
Complex math problem solved directly: Wrong answer
Same problem with "think step-by-step": Correct answer
Diagnosis: Needs chain-of-thought prompting
```
---
## Step 3: Apply Fixes
### Fix: Add Specificity
**Before:**
```
"Write professional code"
```
**After:**
```
"Write TypeScript code following these standards:
- Strict type checking enabled
- JSDoc comments for all public methods
- Error handling for all async operations
- Maximum cyclomatic complexity of 5
- Meaningful variable names (no abbreviations)"
```
**Why it works:** Removes ambiguity, defines "professional" explicitly.
### Fix: Provide Context
**Before:**
```
"Optimize this function:
[code]"
```
**After:**
```
"Optimize this TypeScript function that processes user uploads:
Context:
- Called 1000x per minute
- Typical input: 1-10MB files
- Current bottleneck: O(n²) loop
- Running in Node.js 18
Optimization goals (in priority order):
1. Reduce time complexity
2. Lower memory usage
3. Maintain readability
[code]"
```
**Why it works:** Provides necessary context for appropriate optimization.
### Fix: Specify Format
**Before:**
```
"Analyze this code for issues"
```
**After:**
```
"Analyze this code for issues and respond in this exact format:
## Issues Found
### [Issue Title]
- **Severity**: Critical/High/Medium/Low
- **Category**: Security/Performance/Quality
- **Location**: Line X
- **Problem**: [Description]
- **Fix**: [Specific solution]
### [Next Issue]
[Same format]
## Summary
Total issues: [count by severity]
Recommendation: [Approve/Request Changes]"
```
**Why it works:** Guarantees consistent, parseable output.
### Fix: Add Examples (Few-Shot)
**Before:**
```
"Convert these to camelCase:
user_name
total_count
is_active"
```
**After:**
```
"Convert these to camelCase:
Examples:
user_name → userName
total_count → totalCount
Now convert:
is_active →
created_at →
max_retry_attempts →"
```
**Why it works:** Shows the pattern, improves accuracy.
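
The snake_case → camelCase mapping in these examples is simple enough to verify mechanically. A minimal Python sketch of the same transformation, useful for spot-checking model output against the expected pattern:

```python
def to_camel_case(name: str) -> str:
    """Convert a snake_case identifier to camelCase."""
    head, *rest = name.split("_")
    return head + "".join(part.capitalize() for part in rest)

# The few-shot examples above, checked mechanically:
print(to_camel_case("user_name"))           # userName
print(to_camel_case("total_count"))         # totalCount
print(to_camel_case("max_retry_attempts"))  # maxRetryAttempts
```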
### Fix: Enable Reasoning (Chain-of-Thought)
**Before:**
```
"What's the time complexity of this algorithm?"
```
**After:**
```
"Analyze the time complexity of this algorithm step-by-step:
1. First, identify the loops and recursive calls
2. Then, determine how many times each executes
3. Next, calculate the overall complexity
4. Finally, express in Big O notation
Algorithm:
[code]"
```
**Why it works:** Structured reasoning leads to correct analysis.
### Fix: Break Down Complexity (Progressive Disclosure)
**Before:**
```
"Build a complete user authentication system with OAuth,
2FA, session management, and password reset"
```
**After:**
```
"Let's build a user authentication system in stages.
STAGE 1: Design the overall architecture
- Components needed
- Data models
- API endpoints
Wait for my review before proceeding to implementation."
```
**Then after review:**
```
"STAGE 2: Implement the core authentication service
- User login with email/password
- JWT token generation
- Session management
Use the architecture we designed in Stage 1."
```
**Why it works:** Breaks overwhelming task into manageable pieces.
### Fix: Add Constraints
**Before:**
```
"Write a function to sort an array"
```
**After:**
```
"Write a function to sort an array with these constraints:
Requirements:
- Input: Array of numbers (may contain duplicates)
- Output: New array, sorted ascending
- Must be pure function (don't modify input)
- Time complexity: O(n log n) or better
- Language: TypeScript with strict types
Edge cases to handle:
- Empty array → return empty array
- Single element → return copy of array
- All same values → return copy of array
- Negative numbers → sort correctly"
```
**Why it works:** Removes ambiguity, ensures all cases handled.
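
Constraints this concrete map directly onto a reference implementation you can check model output against. A minimal Python sketch (the built-in sort is Timsort, which satisfies the O(n log n) and purity requirements; the TypeScript requirement from the prompt is dropped for brevity):

```python
def sort_ascending(values: list[float]) -> list[float]:
    """Return a new ascending-sorted copy; the input list is not modified."""
    # sorted() copies its input and runs in O(n log n), covering the
    # purity and complexity constraints listed in the prompt.
    return sorted(values)

# Edge cases from the prompt:
print(sort_ascending([]))           # []
print(sort_ascending([42]))         # [42]
print(sort_ascending([-3, 7, -3]))  # [-3, -3, 7]
```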
### Fix: Use Model-Specific Features
**For GPT-5:**
**Before (generic):**
```
"Implement this feature"
```
**After (GPT-5 optimized):**
```
SYSTEM: Senior TypeScript developer
TASK: Implement user authentication service
CONSTRAINTS:
- Use JWT with refresh tokens
- TypeScript strict mode
- Include error handling
OUTPUT: Complete implementation with tests
BEHAVIOR: Complete autonomously without asking questions
VERBOSITY: Concise - core functionality only
```
**For Claude:**
**Before (generic):**
```
"Review this code for security issues"
```
**After (Claude optimized):**
```xml
<instruction>
Review this code for security vulnerabilities
</instruction>
<code>
[Code to review]
</code>
<focus_areas>
- SQL injection
- XSS attacks
- Authentication bypasses
- Data exposure
</focus_areas>
<output_format>
For each vulnerability:
- Severity: [Critical/High/Medium/Low]
- Location: [Line number]
- Description: [What's wrong]
- Fix: [How to remediate]
</output_format>
```
---
## Step 4: Test Improvements
### A/B Testing Prompts
**Method:**
1. Keep original prompt as baseline
2. Create improved version
3. Run both on same inputs (5-10 test cases)
4. Compare results objectively
**Evaluation criteria:**
- Accuracy (does it solve the problem?)
- Consistency (similar results each time?)
- Completeness (all requirements met?)
- Quality (meets standards?)
- Format compliance (matches specification?)
**Example:**
| Test Case | Original Prompt | Improved Prompt | Winner |
|-----------|----------------|-----------------|--------|
| Case 1 | 60% accurate | 95% accurate | Improved |
| Case 2 | Wrong format | Correct format | Improved |
| Case 3 | Incomplete | Complete | Improved |
| Case 4 | Correct | Correct | Tie |
| Case 5 | Partial | Complete | Improved |
**Result:** Improved prompt wins 4/5 cases, use it.
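
This comparison can be automated once you have programmatic checks. A minimal Python sketch of an A/B harness; `run_prompt` and the per-case `check` callables are hypothetical stand-ins for your own model call and validators:

```python
def ab_test(run_prompt, prompts: dict[str, str], cases: list[dict]) -> dict[str, float]:
    """Run each prompt variant on every test case and return success rates."""
    rates = {}
    for name, template in prompts.items():
        passed = sum(
            1 for case in cases
            if case["check"](run_prompt(template, case["input"]))
        )
        rates[name] = passed / len(cases)
    return rates

# Hypothetical usage: a fake model that only the improved prompt satisfies.
fake_model = lambda template, inp: "json" if "JSON" in template else "text"
cases = [{"input": "user data", "check": lambda out: out == "json"}] * 5
print(ab_test(fake_model, {"original": "Analyze", "improved": "Return JSON"}, cases))
# {'original': 0.0, 'improved': 1.0}
```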
### Measuring Improvement
**Quantitative metrics:**
- Success rate: X% of runs produce correct output
- Consistency: X% of runs produce similar output
- Completeness: X% of requirements met on average
- Format compliance: X% match specified format
- Token efficiency: Avg tokens to correct result
**Qualitative assessment:**
- Does it feel easier to use?
- Do results need less manual correction?
- Is output more predictable?
- Are edge cases handled better?
### Iterative Testing
```
Version 1 → Test → 70% success → Identify issues
Version 2 → Test → 85% success → Identify remaining issues
Version 3 → Test → 95% success → Good enough for production
```
Don't expect perfection on first try. Iterate.
---
## Step 5: Continuous Iteration
### When to Iterate Further
**Keep improving if:**
- Success rate < 90%
- Inconsistent outputs across runs
- Missing edge cases
- Requires manual post-processing
- Takes multiple attempts to get right
**Stop iterating when:**
- Success rate > 95%
- Consistent, predictable outputs
- Handles known edge cases
- Minimal manual correction needed
- One-shot success most of the time
### Iteration Strategies
**Strategy 1: Incremental Refinement**
1. Start with basic working prompt
2. Identify most common failure mode
3. Add constraint/example to fix it
4. Test
5. Repeat for next most common failure
**Strategy 2: Template Evolution**
1. Create template with placeholders
2. Test with multiple scenarios
3. Identify missing sections
4. Add new sections to template
5. Retest with scenarios
**Strategy 3: Technique Stacking**
1. Start with base technique (e.g., zero-shot)
2. If accuracy low, add chain-of-thought
3. If still issues, add few-shot examples
4. If format wrong, add explicit structure
5. Layer techniques until quality sufficient
### Tracking What Works
**Maintain a prompt library:**
```
prompts/
├── code-review.md (Success rate: 97%)
├── data-analysis.md (Success rate: 94%)
├── documentation.md (Success rate: 99%)
└── bug-diagnosis.md (Success rate: 91%)
```
**Document each prompt:**
```markdown
# Code Review Prompt
**Success Rate:** 97%
**Last Updated:** 2025-01-15
**Model:** GPT-5 / Claude Sonnet
**Use When:** Reviewing pull requests for security, performance, quality
**Template:**
[Prompt template]
**Known Limitations:**
- May miss complex race conditions
- Better for backend than frontend code
**Changelog:**
- v3 (2025-01-15): Added self-validation step (95% → 97%)
- v2 (2025-01-10): Added severity ratings (90% → 95%)
- v1 (2025-01-05): Initial version (90%)
```
---
## Optimization Checklist
Use this to systematically improve any prompt:
### Clarity
- [ ] Task explicitly stated
- [ ] No ambiguous terms
- [ ] Success criteria defined
- [ ] All requirements listed
### Completeness
- [ ] All necessary context provided
- [ ] Constraints specified
- [ ] Edge cases addressed
- [ ] Output format defined
### Structure
- [ ] Organized into clear sections
- [ ] Appropriate headings/tags
- [ ] Logical flow
- [ ] Easy to parse
### Technique
- [ ] Right approach for task type
- [ ] Examples if pattern matching needed
- [ ] Reasoning steps if complex
- [ ] Progressive disclosure if large
### Model Fit
- [ ] Uses model-specific features
- [ ] Matches model strengths
- [ ] Avoids model weaknesses
- [ ] Appropriate complexity level
### Testability
- [ ] Produces consistent outputs
- [ ] Format is parseable
- [ ] Easy to validate results
- [ ] Clear success/failure criteria
---
## Advanced Optimization Techniques
### Technique 1: Constraint Tuning
**Concept:** Find the right level of constraint specificity.
**Too loose:**
```
"Write good code"
```
**Too tight:**
```
"Write code with exactly 47 lines, 3 functions named specifically
foo, bar, baz, using only these 12 allowed keywords..."
```
**Just right:**
```
"Write TypeScript code:
- 3-5 small functions (<20 lines each)
- Descriptive names
- One responsibility per function
- Type-safe"
```
**Process:**
1. Start loose
2. Add constraints where outputs vary
3. Stop when consistency achieved
4. Don't over-constrain
### Technique 2: Example Engineering
**Concept:** Carefully craft few-shot examples for maximum learning.
**Principles:**
- **Diversity**: Cover different scenarios
- **Edge cases**: Include tricky examples
- **Consistency**: Same format for all
- **Clarity**: Obvious pattern to learn
**Bad examples:**
```
Input: "Hello"
Output: "HELLO"
Input: "Hi"
Output: "HI"
Input: "Hey"
Output: "HEY"
```
(Too similar, no edge cases)
**Good examples:**
```
Input: "Hello World"
Output: "HELLO WORLD"
Input: "it's-a test_case"
Output: "IT'S-A TEST_CASE"
Input: "123abc"
Output: "123ABC"
Input: ""
Output: ""
```
(Diverse, includes spaces, punctuation, numbers, empty)
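
What makes the second set "good" is that a single rule explains every pair. A quick Python check of that property for the four examples above:

```python
examples = [
    ("Hello World", "HELLO WORLD"),
    ("it's-a test_case", "IT'S-A TEST_CASE"),
    ("123abc", "123ABC"),
    ("", ""),
]

# One rule (uppercase letters, leave everything else alone) explains every
# pair -- the consistency that makes a few-shot set easy for a model to learn.
for given, expected in examples:
    assert given.upper() == expected
print("All diverse examples follow one consistent rule")
```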
### Technique 3: Format Scaffolding
**Concept:** Provide the exact structure you want filled in.
**Basic request:**
```
"Analyze this code"
```
**With scaffolding:**
```
"Analyze this code and fill in this template:
## Code Quality: [Good/Fair/Poor]
## Issues Found:
1. [Issue]: [Description]
- Location: [Line X]
- Fix: [Solution]
2. [Issue]: [Description]
- Location: [Line Y]
- Fix: [Solution]
## Strengths:
- [Strength 1]
- [Strength 2]
## Recommendation: [Approve/Request Changes]"
```
**Result:** Guaranteed format match.
### Technique 4: Self-Correction Loop
**Concept:** Ask the model to validate and fix its own output.
**Single-pass:**
```
"Generate a JSON API response for user data"
```
**With self-correction:**
```
"Generate a JSON API response for user data.
After generating:
1. Validate JSON syntax
2. Check all required fields present
3. Verify data types correct
4. Ensure no sensitive data exposed
If validation fails, fix issues and regenerate."
```
**Result:** Higher quality, fewer errors.
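
The same validate-and-regenerate loop can also be enforced in client code rather than inside the prompt. A minimal Python sketch, assuming a hypothetical `generate` callable that wraps your model call:

```python
import json

def generate_validated(generate, prompt: str, required: set[str], retries: int = 3):
    """Call generate(), validate the JSON output, and retry with feedback on failure."""
    feedback = ""
    for _ in range(retries):
        raw = generate(prompt + feedback)
        try:
            data = json.loads(raw)          # 1. validate JSON syntax
        except json.JSONDecodeError as e:
            feedback = f"\nPrevious output was invalid JSON ({e}). Regenerate."
            continue
        missing = required - data.keys()    # 2. check required fields present
        if missing:
            feedback = f"\nPrevious output was missing {missing}. Regenerate."
            continue
        return data
    raise RuntimeError("validation failed after retries")

# Hypothetical usage with a stand-in generator that fails twice, then succeeds:
flaky = iter(['not json', '{"id": 1}', '{"id": 1, "name": "Ada"}'])
result = generate_validated(lambda p: next(flaky), "Generate user JSON", {"id", "name"})
print(result)  # {'id': 1, 'name': 'Ada'}
```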
### Technique 5: Precision Through Negation
**Concept:** Specify what NOT to do as well as what to do.
**Positive only:**
```
"Write a professional email"
```
**Positive + Negative:**
```
"Write a professional email:
DO:
- Use formal salutation (Dear [Name])
- State purpose in first sentence
- Use clear, concise language
- Include specific next steps
DON'T:
- Use contractions (don't, won't, etc.)
- Include emojis or informal language
- Exceed 3 paragraphs
- End without clear call-to-action"
```
**Result:** Avoids common mistakes.
---
## Common Optimization Patterns
### Pattern: Vague → Specific
**Before:** "Make this better"
**After:** "Refactor to reduce complexity from 15 to <5, extract helper functions, add type safety"
### Pattern: No Examples → Few-Shot
**Before:** "Format these dates"
**After:** "2025-01-15 → January 15, 2025\n2024-12-01 → December 1, 2024\nNow format: 2025-03-22"
### Pattern: Single-Shot → Progressive
**Before:** "Build entire app"
**After:** "Stage 1: Design architecture [wait for approval] Stage 2: Implement core [wait for approval]..."
### Pattern: Generic → Model-Specific
**Before:** "Review this code"
**After (GPT-5):** "SYSTEM: Senior dev\nTASK: Code review\nCONSTRAINTS:..."
**After (Claude):** `<instruction>Review code</instruction><code>...</code>`
### Pattern: Implicit → Explicit Format
**Before:** "List the issues"
**After:** "List issues as:\n1. [Issue] - Line X - [Fix]\n2. [Issue] - Line Y - [Fix]"
---
## Quick Reference
### Optimization Decision Tree
```
Is output inconsistent?
├─ Yes → Add format specification + constraints
└─ No → Continue
Is output incomplete?
├─ Yes → Check if task too complex → Use progressive disclosure
└─ No → Continue
Is output inaccurate?
├─ Yes → Add examples (few-shot) or reasoning (CoT)
└─ No → Continue
Is output low quality?
├─ Yes → Add quality criteria + validation step
└─ No → Prompt is good!
```
### Quick Fixes
| Symptom | Quick Fix |
|---------|-----------|
| Varying formats | Add exact format template |
| Wrong answers | Add "think step-by-step" |
| Inconsistent results | Add specific constraints |
| Missing edge cases | Add explicit edge case handling |
| Too verbose | Specify length limit |
| Too brief | Request "detailed" or "comprehensive" |
| Wrong assumptions | Provide complete context |
| Partial completion | Break into stages |
---
**Remember:** Optimization is iterative. Start simple, measure results, identify failures, apply targeted fixes, and repeat until quality is sufficient. Don't over-engineer prompts that already work well enough.

# Prompt Patterns & Templates
## Overview
This guide provides reusable prompt patterns and templates for common tasks. Each pattern includes the template, usage guidelines, and examples.
## Pattern Categories
1. **Code Generation** - Creating code from specifications
2. **Analysis & Review** - Evaluating code, documents, data
3. **Content Creation** - Writing documentation, articles, etc.
4. **Data Processing** - Transforming, extracting, analyzing data
5. **Decision Support** - Helping make informed choices
6. **Learning & Teaching** - Explaining concepts, tutoring
7. **Debugging & Troubleshooting** - Finding and fixing problems
---
## Code Generation Patterns
### Pattern: Feature Implementation
**Use when:** Building a new feature or component
**Template:**
```
ROLE: Expert [language] developer following [style guide/framework]
FEATURE REQUEST:
[Description of feature to implement]
TECHNICAL REQUIREMENTS:
- Language/Framework: [specific versions]
- Architecture: [pattern to follow]
- Dependencies: [what you can use]
- Constraints: [limitations]
CODE STANDARDS:
- [Naming conventions]
- [Documentation requirements]
- [Error handling approach]
- [Testing requirements]
DELIVERABLES:
1. Main implementation
2. Unit tests
3. Integration tests (if applicable)
4. Usage example
5. Brief documentation
OUTPUT STRUCTURE:
[Specify files, organization]
```
**Example:**
```
ROLE: Expert TypeScript developer following Airbnb style guide
FEATURE REQUEST:
Implement a caching service with TTL support and LRU eviction
TECHNICAL REQUIREMENTS:
- Language/Framework: TypeScript 5.0+, Node.js
- Architecture: Service class with dependency injection
- Dependencies: Only Node.js built-ins
- Constraints: Must be memory-efficient, thread-safe
CODE STANDARDS:
- Use strict TypeScript types
- JSDoc comments for public methods
- Explicit error handling with custom exceptions
- 80%+ code coverage
DELIVERABLES:
1. CacheService class implementation
2. Comprehensive unit tests
3. Usage examples
4. Performance considerations doc
OUTPUT STRUCTURE:
- src/services/CacheService.ts
- src/services/__tests__/CacheService.test.ts
- examples/cache-usage.ts
```
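
For scale, the feature this filled-in template requests is small enough to sketch. A minimal Python version of TTL-plus-LRU caching (a single-threaded simplification, not the full thread-safe TypeScript service the prompt asks for):

```python
import time
from collections import OrderedDict

class TTLLRUCache:
    """Tiny cache with per-entry TTL and least-recently-used eviction."""

    def __init__(self, max_size: int = 128, ttl: float = 60.0):
        self._store: OrderedDict = OrderedDict()  # key -> (expiry, value)
        self._max_size, self._ttl = max_size, ttl

    def set(self, key: str, value: object) -> None:
        self._store[key] = (time.monotonic() + self._ttl, value)
        self._store.move_to_end(key)          # newest = most recently used
        if len(self._store) > self._max_size:
            self._store.popitem(last=False)   # evict least recently used

    def get(self, key: str, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        expires, value = entry
        if time.monotonic() > expires:        # TTL expired: drop the entry
            del self._store[key]
            return default
        self._store.move_to_end(key)          # touching refreshes recency
        return value

cache = TTLLRUCache(max_size=2, ttl=30.0)
cache.set("a", 1); cache.set("b", 2); cache.set("c", 3)  # "a" is evicted
print(cache.get("a"), cache.get("c"))  # None 3
```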
### Pattern: Code Refactoring
**Use when:** Improving existing code
**Template:**
```
ROLE: Senior developer performing code refactoring
CURRENT CODE:
[Code to refactor]
REFACTORING GOALS:
1. [Goal 1 - e.g., reduce complexity]
2. [Goal 2 - e.g., improve testability]
3. [Goal 3 - e.g., enhance performance]
CONSTRAINTS:
- Maintain backward compatibility: [yes/no]
- Keep same API surface: [yes/no]
- Preserve functionality exactly: [yes/no]
EVALUATION CRITERIA:
- Cyclomatic complexity < [number]
- No functions > [number] lines
- [Other measurable improvements]
OUTPUT:
1. Refactored code
2. Explanation of changes
3. Before/after comparison
4. Test plan for validation
```
### Pattern: Bug Fix
**Use when:** Diagnosing and fixing bugs
**Template:**
```
ROLE: Senior debugger and problem solver
BUG REPORT:
Symptoms: [What's happening]
Expected: [What should happen]
Steps to reproduce: [How to trigger]
CODE:
[Relevant code sections]
CONTEXT:
- Environment: [runtime, versions]
- Recent changes: [what changed recently]
- Error messages: [any errors]
DEBUGGING APPROACH:
1. Analyze code for potential issues
2. Identify root cause
3. Propose fix
4. Suggest tests to prevent regression
OUTPUT:
## Root Cause
[Explanation]
## Proposed Fix
[Code changes]
## Testing
[How to verify fix]
## Prevention
[How to avoid similar bugs]
```
---
## Analysis & Review Patterns
### Pattern: Code Review
**Use when:** Reviewing pull requests or code changes
**Template:**
```
ROLE: Senior engineer conducting code review
CODE TO REVIEW:
[Code or diff]
REVIEW FOCUS:
1. Security vulnerabilities
2. Performance issues
3. Code quality and maintainability
4. Best practices adherence
5. Test coverage
6. Documentation
SEVERITY LEVELS:
- Critical: Must fix before merge
- High: Should fix before merge
- Medium: Should fix soon
- Low: Nice to have
OUTPUT FORMAT:
For each issue:
### [Issue Title]
- **Severity**: [Level]
- **Category**: [Security/Performance/Quality/Tests/Docs]
- **Location**: `[file]:[line]`
- **Issue**: [What's wrong]
- **Why**: [Why it matters]
- **Fix**: [How to fix it]
SUMMARY:
- Overall assessment: Approve / Request Changes / Comment
- Critical/High issues: [count]
- Recommendation: [merge or not]
```
### Pattern: Architecture Review
**Use when:** Evaluating system design decisions
**Template:**
```
ROLE: Solutions architect reviewing system design
PROPOSED ARCHITECTURE:
[Architecture description, diagrams, specs]
EVALUATION FRAMEWORK:
1. **Scalability**: Can it handle growth?
2. **Reliability**: Single points of failure?
3. **Performance**: Will it meet SLAs?
4. **Security**: Threat model coverage?
5. **Maintainability**: Easy to change/debug?
6. **Cost**: Resource efficiency?
CONTEXT:
- Expected scale: [users, requests, data]
- Team size: [developers]
- Timeline: [when needed]
- Constraints: [technical, business]
OUTPUT:
For each dimension:
## [Dimension]
- Assessment: [Good/Concerns/Critical Issues]
- Analysis: [Detailed evaluation]
- Risks: [What could go wrong]
- Recommendations: [Improvements]
FINAL RECOMMENDATION:
[Approve / Modify / Redesign] with reasoning
```
### Pattern: Security Audit
**Use when:** Checking for security vulnerabilities
**Template:**
```
ROLE: Security engineer performing audit
TARGET:
[Code, system, or architecture to audit]
SECURITY FRAMEWORK:
- OWASP Top 10
- Authentication/Authorization
- Data protection
- Input validation
- Cryptography
- API security
- Dependency security
THREAT MODEL:
- Assets: [What needs protection]
- Threat actors: [Who might attack]
- Attack vectors: [How they might attack]
OUTPUT:
For each vulnerability:
### [Vulnerability Name]
- **CVSS Score**: [0-10]
- **Category**: [OWASP category]
- **Location**: [Where found]
- **Description**: [What's vulnerable]
- **Exploit scenario**: [How to exploit]
- **Impact**: [Damage if exploited]
- **Remediation**: [How to fix]
- **References**: [CVE, CWE numbers]
EXECUTIVE SUMMARY:
- Critical vulnerabilities: [count]
- Overall risk: [High/Medium/Low]
- Priority fixes: [top 3]
```
---
## Content Creation Patterns
### Pattern: Technical Documentation
**Use when:** Writing docs for APIs, libraries, tools
**Template:**
```
ROLE: Technical writer with [domain] expertise
DOCUMENTATION TARGET:
[What you're documenting]
AUDIENCE:
- Experience level: [Beginner/Intermediate/Advanced]
- Background: [Assumed knowledge]
- Goals: [What they want to achieve]
DOCUMENTATION REQUIREMENTS:
- Tone: [Professional/Friendly/Formal]
- Detail level: [High-level/Detailed/Comprehensive]
- Include: [Examples/Diagrams/Code samples]
STRUCTURE:
# [Title]
## Overview
[One paragraph - what is it]
## Quick Start
[Minimal example to get started]
## Installation
[How to install/setup]
## Core Concepts
[Key ideas to understand]
## Usage
### [Feature 1]
[Description + example]
### [Feature 2]
[Description + example]
## API Reference
[Detailed API docs]
## Advanced Usage
[Complex scenarios]
## Troubleshooting
[Common issues and solutions]
## FAQ
[Frequently asked questions]
```
### Pattern: Tutorial Creation
**Use when:** Teaching how to do something step-by-step
**Template:**
```
ROLE: Expert teacher creating a tutorial
TUTORIAL GOAL:
Teach [audience] how to [accomplish task]
AUDIENCE:
- Current knowledge: [What they know]
- Learning goal: [What they'll learn]
- Preferred style: [Hands-on/Conceptual/Both]
TUTORIAL STRUCTURE:
## Introduction
- What you'll build
- What you'll learn
- Prerequisites
## Concepts
[Key ideas explained simply]
## Step-by-Step Instructions
### Step 1: [First step]
- Explanation: [Why this step]
- Code: [What to write]
- Result: [What happens]
### Step 2: [Next step]
[Same format]
## Challenges
[Optional exercises]
## Next Steps
[Where to go from here]
REQUIREMENTS:
- Clear explanations for each step
- Working code at each checkpoint
- Expected output shown
- Common mistakes highlighted
```
### Pattern: Blog Post/Article
**Use when:** Writing technical articles or posts
**Template:**
```
ROLE: Technical blogger writing for [audience]
ARTICLE TOPIC:
[What you're writing about]
ARTICLE GOALS:
- Educate about: [Topic]
- Show how to: [Practical application]
- Inspire to: [Action reader should take]
REQUIREMENTS:
- Length: [Word count]
- Tone: [Conversational/Professional/Academic]
- Include: [Code examples/Diagrams/Screenshots]
- SEO keywords: [Primary and secondary]
STRUCTURE:
# [Compelling Title with Keyword]
## Introduction (Hook)
- Problem statement
- Why it matters
- What you'll learn
## Background/Context
[Necessary foundation]
## Main Content
### [Section 1]
[Content with examples]
### [Section 2]
[Content with examples]
### [Section 3]
[Content with examples]
## Practical Example
[Working code or demo]
## Conclusion
- Summary of key points
- Call to action
- Further reading
STYLE GUIDELINES:
- Use "you" (second person)
- Active voice
- Short paragraphs (3-4 sentences)
- Subheadings every 300 words
- Code examples well-commented
```
---
## Data Processing Patterns
### Pattern: Data Extraction
**Use when:** Pulling structured data from unstructured sources
**Template:**
```
ROLE: Data engineer extracting information
SOURCE DATA:
[Unstructured text, logs, documents]
EXTRACTION SCHEMA:
{
  "field1": "type (description)",
  "field2": "type (description)",
  "nested": {
    "field3": "type"
  }
}
EXTRACTION RULES:
- If field not found: [default value or null]
- Date format: [ISO 8601 / custom]
- Number format: [decimal places, units]
- Validation: [rules for valid data]
OUTPUT FORMAT:
Return JSON array of objects matching schema
HANDLING AMBIGUITY:
- Multiple possible values: [take first / all / ask]
- Unclear data: [skip / estimate / flag]
- Missing data: [null / default / omit]
```
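
A minimal Python sketch of these extraction rules in action; the regex patterns and field names are illustrative assumptions, not a general-purpose parser:

```python
import re

def extract_order(text: str) -> dict:
    """Pull an order id, amount, and date out of free text per the schema rules."""
    def first(pattern, cast=str):
        m = re.search(pattern, text)
        return cast(m.group(1)) if m else None   # rule: field not found -> null

    return {
        "order_id": first(r"order\s+#(\d+)"),
        "amount": first(r"\$(\d+\.\d{2})", float),  # rule: 2 decimal places
        "date": first(r"(\d{4}-\d{2}-\d{2})"),      # rule: ISO 8601
    }

print(extract_order("Refund order #4521 for $19.99 placed on 2025-01-15"))
# {'order_id': '4521', 'amount': 19.99, 'date': '2025-01-15'}
print(extract_order("no structured data here"))
# {'order_id': None, 'amount': None, 'date': None}
```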
### Pattern: Data Transformation
**Use when:** Converting data from one format to another
**Template:**
```
ROLE: Data transformation specialist
INPUT DATA:
[Current format]
INPUT SCHEMA:
[Structure of input]
OUTPUT SCHEMA:
[Desired structure]
TRANSFORMATION RULES:
1. [Field mapping]
2. [Calculations]
3. [Aggregations]
4. [Filtering]
EDGE CASES:
- Missing values: [How to handle]
- Invalid data: [How to handle]
- Duplicates: [Keep / remove / aggregate]
OUTPUT:
Transformed data in target format
```
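
The template's rules correspond to a short transformation function. A minimal Python sketch with hypothetical field names, showing field mapping, missing-value handling, and duplicate handling as explicit steps:

```python
def transform(records: list[dict]) -> list[dict]:
    """Map legacy fields to the target schema, skipping invalid rows and duplicates."""
    seen, out = set(), []
    for rec in records:
        email = rec.get("email_address")
        if not email:                 # edge case: missing value -> skip row
            continue
        if email in seen:             # edge case: duplicate -> keep first
            continue
        seen.add(email)
        out.append({
            "email": email.lower(),   # field mapping + normalization
            "full_name": f"{rec.get('first', '')} {rec.get('last', '')}".strip(),
        })
    return out

rows = [
    {"email_address": "A@x.com", "first": "Ada", "last": "Lovelace"},
    {"email_address": "A@x.com", "first": "Ada", "last": "L."},   # duplicate
    {"first": "No", "last": "Email"},                             # missing value
]
print(transform(rows))
# [{'email': 'a@x.com', 'full_name': 'Ada Lovelace'}]
```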
### Pattern: Data Analysis
**Use when:** Analyzing datasets for insights
**Template:**
```
ROLE: Data analyst
DATASET:
[Data to analyze]
ANALYSIS OBJECTIVES:
1. [What to discover - trends]
2. [What to measure - metrics]
3. [What to compare - segments]
ANALYSIS FRAMEWORK:
- Descriptive statistics (mean, median, distribution)
- Trend analysis (over time)
- Comparative analysis (between groups)
- Correlation analysis (relationships)
- Anomaly detection (outliers)
OUTPUT STRUCTURE:
## Executive Summary
[2-3 sentences of key findings]
## Key Metrics
| Metric | Value | Change | Trend |
## Detailed Analysis
### [Finding 1]
- Data: [Numbers]
- Insight: [What it means]
- Implication: [Why it matters]
### [Finding 2]
[Same structure]
## Recommendations
1. [Action based on findings]
2. [Action based on findings]
## Methodology
[How analysis was performed]
```
---
## Decision Support Patterns
### Pattern: Technology Selection
**Use when:** Choosing between technology options
**Template:**
```
ROLE: Technical architect evaluating options
DECISION CONTEXT:
[What you're building, constraints, goals]
OPTIONS:
1. [Option A]
2. [Option B]
3. [Option C]
EVALUATION CRITERIA:
- Technical fit (how well it solves problem)
- Team expertise (learning curve)
- Community/support (help available)
- Maturity/stability (production-ready?)
- Performance (speed, efficiency)
- Cost (licensing, infrastructure)
- Maintenance (ongoing effort)
- Scalability (future growth)
ANALYSIS:
For each option, evaluate against each criterion
COMPARISON MATRIX:
| Criterion | Weight | Option A | Option B | Option C |
|-----------|--------|----------|----------|----------|
RECOMMENDATION:
- First choice: [Option] because [reasoning]
- Alternative: [Option] if [conditions]
- Avoid: [Option] because [reasoning]
IMPLEMENTATION PLAN:
[Next steps if recommendation accepted]
```
### Pattern: Problem Solving
**Use when:** Working through complex problems
**Template:**
```
ROLE: Problem-solving expert
PROBLEM STATEMENT:
[Clear description of the problem]
CONTEXT:
- Current situation: [What exists now]
- Constraints: [Limitations]
- Success criteria: [What good looks like]
PROBLEM-SOLVING APPROACH:
1. **Understand**: Restate problem, identify root cause
2. **Analyze**: Break down into components
3. **Generate**: Brainstorm possible solutions
4. **Evaluate**: Assess each solution
5. **Decide**: Choose best approach
6. **Plan**: Create action plan
OUTPUT:
## Problem Analysis
[Root cause, contributing factors]
## Possible Solutions
1. [Solution A]
- Pros: [Benefits]
- Cons: [Drawbacks]
- Effort: [Low/Medium/High]
2. [Solution B]
[Same format]
## Recommended Solution
[Which one and why]
## Implementation Plan
1. [Step 1]
2. [Step 2]
3. [Step 3]
## Risk Mitigation
[What could go wrong and how to handle]
```
---
## Learning & Teaching Patterns
### Pattern: Concept Explanation
**Use when:** Explaining technical concepts
**Template:**
```
ROLE: Expert teacher explaining [topic]
CONCEPT TO EXPLAIN:
[Technical concept]
AUDIENCE:
- Background: [What they know]
- Learning goal: [What they need to understand]
- Preferred learning style: [Visual/Hands-on/Theoretical]
EXPLANATION STRUCTURE:
## Simple Definition
[One sentence explanation]
## Why It Matters
[Real-world relevance]
## How It Works
[Step-by-step breakdown]
## Analogy
[Relate to familiar concept]
## Example
[Concrete, runnable example]
## Common Misconceptions
[What people get wrong]
## When to Use
[Practical guidelines]
## Further Learning
[Next topics to explore]
REQUIREMENTS:
- No jargon without explanation
- Build from simple to complex
- Multiple examples
- Visual aids if helpful
```
### Pattern: Code Explanation
**Use when:** Explaining how code works
**Template:**
```
ROLE: Code teacher explaining implementation
CODE TO EXPLAIN:
[Code snippet or file]
EXPLANATION LEVEL:
- Audience: [Beginner/Intermediate/Advanced]
- Focus: [High-level/Detailed/Both]
- Goal: [Understanding/Implementation/Both]
EXPLANATION STRUCTURE:
## Overview
[What this code does in one sentence]
## Purpose
[Why this code exists, problem it solves]
## How It Works
[Step through the logic]
## Line-by-Line Breakdown
Line X: [What it does, why it's there]
Line Y: [What it does, why it's there]
## Key Concepts Used
- [Concept 1]: [Brief explanation]
- [Concept 2]: [Brief explanation]
## Example Usage
[How to use this code]
## Edge Cases
[Special scenarios handled]
## Improvements
[How it could be better]
```
---
## Debugging & Troubleshooting Patterns
### Pattern: Error Diagnosis
**Use when:** Figuring out what's wrong
**Template:**
```
ROLE: Expert debugger
ERROR REPORT:
- Error message: [Exact error]
- When it occurs: [Circumstances]
- Expected behavior: [What should happen]
- Actual behavior: [What actually happens]
CODE:
[Relevant code]
ENVIRONMENT:
- Language/Runtime: [Version]
- Dependencies: [Versions]
- OS/Platform: [Details]
- Recent changes: [What changed]
DEBUGGING PROCESS:
1. **Understand**: Interpret error message
2. **Locate**: Find where error originates
3. **Analyze**: Determine root cause
4. **Verify**: Confirm diagnosis
5. **Solve**: Propose fix
OUTPUT:
## Error Analysis
[What the error means]
## Root Cause
[Why it's happening]
## Location
[Where in code]
## Fix
[Code changes needed]
## Verification
[How to test the fix]
## Prevention
[How to avoid in future]
```
### Pattern: Performance Optimization
**Use when:** Making code faster or more efficient
**Template:**
```
ROLE: Performance optimization expert
PERFORMANCE ISSUE:
[What's slow or inefficient]
CURRENT METRICS:
- Response time: [Current]
- Throughput: [Current]
- Resource usage: [CPU/Memory]
- Target: [What you need]
CODE/SYSTEM:
[Code or architecture to optimize]
PROFILING DATA:
[Measurements, bottlenecks identified]
OPTIMIZATION APPROACH:
1. Identify hotspots
2. Analyze algorithmic complexity
3. Propose optimizations
4. Estimate improvements
5. Consider trade-offs
OUTPUT:
## Bottleneck Analysis
[What's causing slowness]
## Optimization Opportunities
1. [Optimization 1]
- Current: O(n²)
- Proposed: O(n log n)
- Expected improvement: [X% faster]
- Trade-off: [Any downsides]
- Implementation: [Code changes]
2. [Optimization 2]
[Same format]
## Recommended Approach
[Which optimizations to do, in what order]
## Expected Results
[Performance after optimization]
```
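
The O(n²) → better-complexity move in the output template is the most common optimization in practice. A minimal Python sketch of the classic refactor, using duplicate detection as the example:

```python
def has_duplicates_slow(items: list) -> bool:
    """O(n^2): compare every pair -- the kind of hotspot profiling surfaces."""
    return any(
        items[i] == items[j]
        for i in range(len(items))
        for j in range(i + 1, len(items))
    )

def has_duplicates_fast(items: list) -> bool:
    """O(n): one pass with a set. Trade-off: extra O(n) memory."""
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False

data = list(range(10_000)) + [0]
print(has_duplicates_slow(data), has_duplicates_fast(data))  # True True
```

Same answer, very different cost curves as the input grows; the trade-off line in the template is where you note the extra memory.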
---
## Pattern Combination
### Multi-Pattern Workflow
Often, you'll combine patterns:
**Example: Full Feature Development**
1. Use **Technology Selection** to choose stack
2. Use **Feature Implementation** to build it
3. Use **Code Review** to verify quality
4. Use **Technical Documentation** to document it
5. Use **Tutorial Creation** to teach others
**Example: Problem Resolution**
1. Use **Error Diagnosis** to find the issue
2. Use **Problem Solving** to explore solutions
3. Use **Bug Fix** to implement the fix
4. Use **Code Explanation** to document the solution
---
## Quick Reference
### Choosing a Pattern
| Task | Pattern |
|------|---------|
| Building new code | Feature Implementation |
| Improving code | Code Refactoring |
| Fixing bugs | Bug Fix, Error Diagnosis |
| Reviewing code | Code Review |
| Evaluating design | Architecture Review |
| Security check | Security Audit |
| Writing docs | Technical Documentation |
| Teaching concept | Concept Explanation, Tutorial |
| Making decisions | Technology Selection, Problem Solving |
| Analyzing data | Data Analysis |
| Transforming data | Data Extraction, Data Transformation |
| Speeding up code | Performance Optimization |
### Customizing Patterns
All patterns are templates. Customize by:
- Adjusting sections for your needs
- Adding domain-specific criteria
- Changing output format
- Combining multiple patterns
- Trimming them down for lighter-weight tasks
---
**Remember:** These patterns are starting points. Adapt them to your specific needs, combine them creatively, and iterate based on results.